this post was submitted on 14 Feb 2026
283 points (99.0% liked)

Technology


The evolution of OpenAI’s mission statement.

OpenAI, the maker of the most popular AI chatbot, used to say it aimed to build artificial intelligence that “safely benefits humanity, unconstrained by a need to generate financial return,” according to its 2023 mission statement. But the ChatGPT maker seems to no longer have the same emphasis on doing so “safely.”

While reviewing its latest IRS disclosure form, which was released in November 2025 and covers 2024, I noticed OpenAI had removed “safely” from its mission statement, among other changes. That change in wording coincided with its transformation from a nonprofit organization into a business increasingly focused on profits.

OpenAI currently faces several lawsuits related to its products’ safety, making this change newsworthy. Many of the plaintiffs suing the AI company allege psychological manipulation, wrongful death and assisted suicide, while others have filed negligence claims.

As a scholar of nonprofit accountability and the governance of social enterprises, I see the deletion of the word “safely” from its mission statement as a significant shift that has gone largely unreported outside highly specialized outlets.

And I believe OpenAI’s makeover is a test case for how we, as a society, oversee the work of organizations that have the potential to both provide enormous benefits and do catastrophic harm.

top 17 comments
[–] tomiant@piefed.social 18 points 17 hours ago (1 children)

a test for whether AI serves society or shareholders

Gee I wonder which one's gonna win.

[–] Rentlar@lemmy.ca 3 points 16 hours ago

Who has more money? OpenAI needs buttloads of it right about now from all the promises they have made.

[–] FauxLiving@lemmy.world 10 points 16 hours ago

~~Don't be evil~~

[–] Tehdastehdas@piefed.social 32 points 20 hours ago (2 children)

benefit humanity as a whole

The Borg from Star Trek fulfils that requirement. My headcanon is that the people from its home planet made an AGI with the goal of “benefiting humanity as a whole”, and it maximised that goal by building the Borg: making humanity into a whole by connecting everyone to a hive mind and forcibly assimilating all other species to benefit that whole.

[–] karashta@piefed.social 12 points 19 hours ago

Oh what a cool take!

I always liked to think of the Borg as being almost more like an emergent property of a certain level and type of organic/inorganic interfacing.

So it's not that one species was the Borg; all are, in potentia. And every time a species commits the same error or reaches the correct level of "perfection", it finds itself in a universe where the Borg already exist.

Like a small hive self-creates, opens its mental ears and is already subsumed into the greater Borg whose mind it finds.

I like that it adds almost a whole new level of arrogance to their statement, "Resistance is futile." They believe it not only because they are about to physically assimilate you, but because every advance you make brings you potentially closer to being Borg through your own missteps.

[–] monkeyslikebananas2@lemmy.world 5 points 18 hours ago

Gotta be honest, I thought that was the reason.

[–] brsrklf@jlai.lu 19 points 21 hours ago (1 children)

"Safely" was already an empty promise to begin with, given how LLMs work.

So someone just thought "our investors don't value safety, let's get rid of that in the blurb". They are probably correct.

[–] panda_abyss@lemmy.ca 7 points 19 hours ago

When they had their schism over Altman a couple of years ago, safety died.

[–] Paranoidfactoid@lemmy.world 7 points 17 hours ago

EVERYTHING IS FINE

[–] palordrolap@fedia.io 14 points 20 hours ago

Dodge v. Ford Motor Company, 1919.

This case found and entrenched in US law that the primary purpose of a corporation is to operate in the interests of its shareholders.

Therefore OpenAI, based in California, would be under threat of lawsuit if they didn't do that.

This goose is already cooked.

[–] Diplomjodler3@lemmy.world 11 points 20 hours ago

... its new structure is a test for whether AI serves society or shareholders

Gee, I can't wait to see the results of this test!

[–] runsmooth@kopitalk.net 3 points 17 hours ago* (last edited 17 hours ago)

OpenAI is the same as any other publicly traded corporation: it serves society, but that service primarily focuses on the shareholders. We're looking at a vehicle designed to take money and give it to the shareholders (private, in this case, or otherwise).

The focus on data centre growth at public expense, the AI slop, the circular nature of some of the investments going into AI, and the productivity gains (or lack thereof) are all part of it. We are not looking at any exceptionalism here. AI isn't unique in its capability for catastrophic harm; what we eat and drink can easily be on that list.

AI and these American companies just want the money train to continue unabated, and any regulation to go away.

[–] MalReynolds@piefed.social 2 points 16 hours ago

We're so damn lucky that LLMs are a dead end (diminishing returns on scaling even after years of hunting) and that they just pivoted to the biggest Ponzi scheme ever. Bad as that is (and the economic depression it will cause), it pales into insignificance compared with the damage these fucks would do with AGI (or, goddess forbid, ASI with the alignment they would try to give it).

[–] winni@piefed.social 1 points 14 hours ago

AI serves society? You must be boozing brake fluid.

[–] melsaskca@lemmy.ca 3 points 19 hours ago

The government only cares about your safety when they need to push laws through the system so their rich buddies can save a dime. "But it's for your own good", they say. What about the children?

[–] Lembot_0006@programming.dev 3 points 21 hours ago

The marketing department removed some meaningless word from the marketing bla-bla-bla brochure nobody was even supposed to read.

WE ALL ARE GOING TO DIE!!!

[–] silverneedle@lemmy.ca 2 points 20 hours ago* (last edited 20 hours ago)

lol, lmao even

Either they think they're evil 1337 h4xx0r overlords who are gonna enslave the planet, or they genuinely think their statistical apparati do anything worthwhile beyond making statistics about what is by now 70% output from other statistical machines.

+just wait until AI bros find out about Zip compression being more efficient at classifying than "AIs".
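
For anyone wondering what the Zip-compression remark refers to: there is a line of work on classifying text with a plain compressor plus nearest-neighbour voting instead of a trained model (the "gzip beats BERT"-style results, which have themselves been debated). Here is a minimal sketch of the idea in Python; the example texts, labels and function names are made up for illustration and don't come from the thread itself.

```python
import gzip

def ncd(a: str, b: str) -> float:
    """Normalized compression distance between two strings, using gzip sizes."""
    ca = len(gzip.compress(a.encode()))
    cb = len(gzip.compress(b.encode()))
    cab = len(gzip.compress((a + " " + b).encode()))
    return (cab - min(ca, cb)) / max(ca, cb)

# Tiny made-up training set: (text, label) pairs, purely illustrative.
train = [
    ("the team won the match in overtime", "sports"),
    ("striker scores twice in the league final", "sports"),
    ("new chip doubles transistor density", "tech"),
    ("open source model released under a permissive license", "tech"),
]

def classify(text: str, k: int = 3) -> str:
    """k-nearest-neighbour vote by compression distance; majority label wins."""
    neighbours = sorted(train, key=lambda pair: ncd(text, pair[0]))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

print(classify("company unveils a faster graphics processor"))  # likely "tech" on this toy data
```

No training and no parameters; the compressor's ability to exploit shared patterns between strings does all the work. Whether this really outperforms modern models is contested, but it is the kind of result the comment is gesturing at.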