The Department of War has stated they will only contract with AI companies who accede to “any lawful use” and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.
Regardless, these threats do not change our position: we cannot in good conscience accede to their request.
It is the Department’s prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider. Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place. Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.
Do you mean that all this about principles is a smoke screen and Anthropic are just using it as a front to squeeze for more money?
No, if you want my opinion it seems too risky of a move to make all of this so public if all they want is more money. It's possible, but I'd be surprised.
I believe them when they say that what they want is to have those two particular things, fully autonomous weapons and mass surveillance of US citizens, removed from the contract terms (for now). This could be out of genuine moral principles, or out of fear of bad PR when this would be found out. Most likely a combination of both.
My point was that from my perspective it is a very minor difference. The conclusion I came away with after reading this isn't "good guy Anthropic bravely stands against pressure from Hegseth," as some of the Hacker News comments try to paint it. It is "Anthropic mostly bends over backwards and grovels for Pentagon money, willing to massively spy on all foreign nationals and to work on creating autonomous weapons - and other US AI companies are likely even worse."
As I said, horrifying.