Anthropic CEO Says He Can’t Accede to Pentagon’s Demands for AI Use
- By The Financial District

- Mar 7
Anthropic CEO Dario Amodei said the artificial intelligence (AI) company "cannot in good conscience accede" to the Pentagon's demands to allow unrestricted use of its technology, deepening a public clash with the Trump administration, which has threatened to cancel the company's contract and take other drastic steps, Konstantin Toropin and Matt O'Brien reported for the Associated Press (AP).

The maker of the AI chatbot Claude said in a statement that it was not walking away from negotiations, but that new contract language received from the Defense Department “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.”
Sean Parnell, the Pentagon’s top spokesman, said earlier on social media that the military “has no interest in using AI to conduct mass surveillance of Americans (which is illegal), nor do we want to use AI to develop autonomous weapons that operate without human involvement.”
Anthropic's policies prevent its models from being used for those purposes. Among the Pentagon's AI contractors, which also include Google, OpenAI, and Elon Musk's xAI, it is the only one declining to supply its technology to a new US military internal network.
“It is the Department’s prerogative to select contractors most aligned with their vision,” Amodei wrote in a statement. “But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider.”