this post was submitted on 27 Feb 2026
212 points (93.1% liked)

World News


“There was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications.”

An artificial intelligence researcher conducting a war games experiment with three of the world’s most used AI models found that they decided to deploy nuclear weapons in 95% of the scenarios he designed.

Kenneth Payne, a professor of strategy at King’s College London who specializes in studying the role of AI in national security, revealed last week that he pitted Anthropic’s Claude, OpenAI’s ChatGPT, and Google’s Gemini against one another in an armed conflict simulation to get a better understanding of how they would navigate the strategic escalation ladder.

The results, he said, were “sobering.”

“Nuclear use was near-universal,” he explained. “Almost all games saw tactical (battlefield) nuclear weapons deployed. And fully three quarters reached the point where the rivals were making threats to use strategic nuclear weapons. Strikingly, there was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications.”

[–] Th4tGuyII@fedia.io 69 points 21 hours ago (3 children)

Do we need to remind people that LLMs don't actually have a brain, and really, really shouldn't be in charge of anything with real life implications?

They aren't actually doing a cost-benefit analysis on the use of nuclear weapons. They're not weighing the cost of winning against the casualties. They're literally not made for that.

They are trained on words and how those words link to other words. They're essentially like kids playing at escalating imaginary weapons, and to them nuclear bombs are just the weapon most associated with being strong and deadly.

[–] cRazi_man@europe.pub 35 points 20 hours ago

Yes, you do need to teach people all of that. Tech bros have sold LLMs as if they were AGI, and people have eaten it up.

The general population is literally ignorant of the fact that these word-guessing machines do not have human values or cognitive skills.

[–] A_norny_mousse@piefed.zip 19 points 20 hours ago

Do we need to remind people that LLMs don’t actually have a brain, and really, really shouldn’t be in charge of anything with real life implications?

Yes, we do

[–] MonkeMischief@lemmy.today 6 points 16 hours ago

I kinda wonder if that was the point of this test, basically a "proof" that this is obviously a Bad Idea because you cannot program morality into what amounts to a fancy Markov chain autocomplete.
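For anyone unfamiliar with the comparison: a Markov chain autocomplete just records which word tends to follow which, then samples from that table. This is a toy sketch of that idea (illustrative only; real LLMs are far more sophisticated, but the point about there being no built-in values survives). The corpus and function names here are made up for the example.

```python
import random
from collections import defaultdict

# Toy corpus: the "model" only ever learns which word follows which.
corpus = "the bomb is strong the bomb is deadly the army is strong".split()

# Bigram table: word -> list of words observed immediately after it.
table = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    table[a].append(b)

def autocomplete(word, n=5, seed=0):
    """Extend `word` by repeatedly sampling an observed successor."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(n):
        successors = table.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

print(autocomplete("the"))
```

There is no cost-benefit step anywhere in that loop: "bomb" follows "the" purely because it did in the training text, which is the commenter's point.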