Anthropic has introduced a new suite of artificial intelligence models designed exclusively for the United States national security community, the company said in a statement Thursday.
The models, called “Claude Gov,” were developed in coordination with federal agencies, Anthropic said, and are already being used by organizations operating within classified environments. Access to the models is limited to personnel working in secure government settings.
“These models are already deployed by agencies at the highest level of U.S. national security,” the company said in a blog post. “Access to these models is limited to those who operate in such classified environments.”
Anthropic said the Claude Gov models were built using direct feedback from government customers and are designed to address operational needs that differ significantly from commercial or enterprise applications. Despite their specialized purpose, the models underwent the same internal safety evaluations applied to other Claude models.
Anthropic said that, compared with its general-use models, Claude Gov offers more reliable handling of classified material, fewer refusals when prompted with sensitive content, and stronger contextual understanding of intelligence and defense documents. The models also feature improved proficiency in languages and dialects critical to national security operations, along with better interpretation of cybersecurity-related data.
The launch reflects Anthropic’s broader effort to deepen its presence in the defense and intelligence sectors. In late 2024, the company partnered with Palantir and Amazon Web Services to expand AI capabilities within U.S. government networks.
Anthropic joins several major AI developers in pursuing government contracts. OpenAI has begun courting the U.S. Department of Defense, while Google, Meta, and Cohere are also adapting their models for use in sensitive or classified domains. Google is refining its Gemini model for secure applications, and Meta has made its Llama models available to defense partners.