this post was submitted on 27 Feb 2026
558 points (97.9% liked)
Technology
Eh, the context I was thinking of is that they are constantly playing “safety theatre” where it absolutely doesn’t matter. They’ve tried to kill open models and basically capture regulators by misleading or outright lying, for their benefit.
In other words, this is a case of “even a broken clock is right twice a day,” and I think they knew Trump would back down.
Fair, I definitely haven't simped for them in the past just because they post some good articles on AI safety.
Although... I'll say this for them: they seem more like what OpenAI should be, actually trying to implement AI responsibly and freely sharing that information. It's good research, even if marketing is the motivation. Meanwhile OpenAI, the "charity" that's supposed to guide us to a responsible AI future, until very recently kept their most addictive and mentally dangerous model available on the highest paid tier instead of actually killing it.
Still, at the end of the day Anthropic is a for-profit company. In a better world they wouldn't have released models publicly before this research was actually done and pressing dangers like AI psychosis had been safeguarded against. Better late than never, sure, but the whole industry has done a lot of damage already, and the work of resolving those issues still isn't even close to done.