this post was submitted on 28 Feb 2026
106 points (100.0% liked)

Chapotraphouse


Anthropic statement here indicates Pentagon asked for those use cases, which they rejected, and which OpenAI is now enabling

NBC source

[–] InevitableSwing@hexbear.net 19 points 2 weeks ago (2 children)

Altman is so full of shit.

Hours after the Trump administration’s comments, OpenAI CEO Sam Altman posted on X Friday night that the company had struck a deal with the Department of Defense to deploy its models on the department’s classified networks. Altman said the Department of Defense “displayed a deep respect for safety and a desire to partner to achieve the best possible outcome” in their interactions.

“AI safety and wide distribution of benefits are the core of our mission,” Altman wrote. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW [Department of War] agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

Altman also said OpenAI will create “safeguards to ensure our models behave as they should, which the DoW also wanted.” It is unclear if or how the safety-focused measures in OpenAI’s agreement differ from those in the Anthropic negotiations.

[–] Awoo@hexbear.net 10 points 2 weeks ago (1 children)

human responsibility for the use of force, including for autonomous weapon systems

An autonomous weapon system that asks a human for confirmation before killing something isn't really autonomous, is it? So why call it autonomous at all?

[–] Collatz_problem@hexbear.net 10 points 2 weeks ago (1 children)

Do you think operators won't just click OK every time?

[–] Awoo@hexbear.net 2 points 2 weeks ago (1 children)

I'm commenting on how they're obviously talking shit. They will be autonomous. The human part is a lie.

[–] Nacarbac@hexbear.net 4 points 2 weeks ago

Yeah, they might make a PowerPoint about their human-in-the-loop system, but then there'll be a big AUTOKILL toggle next to the operator "for debugging purposes" that, oddly enough, doesn't log anything.

[–] BodyBySisyphus@hexbear.net 5 points 2 weeks ago

"Hi we would like to use our text classification algorithm to run your murderbots": utterly deranged statement hall of fame contender that somehow managed to impinge on our reality.