this post was submitted on 27 Feb 2026
505 points (98.7% liked)

Technology



The Department of War has stated they will only contract with AI companies who accede to “any lawful use” and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

Regardless, these threats do not change our position: we cannot in good conscience accede to their request.

It is the Department’s prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider. Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place. Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.

[–] XLE@piefed.social 11 points 1 day ago (1 child)

"Anthropic publicly praised President Trump’s AI Action Plan," said CEO Dario Amodei.

"We have been supportive of the President’s efforts to expand energy provision in the US in order to win the AI race," he continued, apparently referring to Trump's new anti-green-energy, pro-fossil-fuel program.

[–] andallthat@lemmy.world 9 points 1 day ago* (last edited 1 day ago) (1 child)

yes... mine was just a play on the title of this post.

Look, I'm not saying that Amodei is a saint, and I find him as full of shit as Altman with their AGI promises, but would you expect Anthropic to take a stand against increasing AI investment just because it's coming from Trump? And I don't like that he went looking for funding in the Middle East either.

I just think there is an ethical line between "I do business with people who do bad things" and "I'm actively helping people who do bad things to do them in a more efficient way". It might be a fine line and it might also be that they are just posturing, but it's still more than other companies did (companies that are a lot richer than Anthropic and that don't need to find a lot of funding just to stay afloat).

[–] XLE@piefed.social 7 points 1 day ago (1 child)

My reply was a continuation of your joke, just using Dario's actual words. My point is that he too lacks a conscience (see also the other links I've posted).

[–] andallthat@lemmy.world 7 points 1 day ago

Gotcha! Shit, I barely understand my own jokes... 😅