The Pentagon is at a standstill with AI firm Anthropic over restrictions on using its Claude AI model for autonomous targeting and domestic surveillance, with officials warning the company could be labeled a “supply chain risk.”
Chief Pentagon spokesman Sean Parnell said the Pentagon’s partnership with Anthropic is currently under review. “Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people,” he said.
Claude is currently the only model approved for handling classified documents, under a $200 million contract signed between the Pentagon and Anthropic in 2025.
Designating Anthropic a supply chain risk would require the multitude of companies that work with the Pentagon to certify that they do not use Claude in their own workflows. Some of them almost certainly do: Anthropic recently stated that eight of the 10 biggest U.S. companies use Claude.
According to Axios, Anthropic and the Pentagon have held months of contentious negotiations over the terms under which the military can use Claude.
The news outlet says Anthropic is prepared to loosen its current terms of use, but wants to ensure its tools will not be used to spy on Americans en masse, or to develop weapons that fire with no human involvement.
The Pentagon considers that unduly restrictive, arguing there are all sorts of gray areas that would make it unworkable to operate on such terms. It reportedly wants Anthropic, as well as other AI labs such as OpenAI, Google, and xAI, to allow the military to use their tools for “all lawful purposes.”
“We are having productive conversations, in good faith, with the Department of War on how to continue that work and get these new and complex issues right,” an Anthropic spokesperson said.