Senior US military leaders, including Defense Secretary Pete Hegseth, met executives from artificial intelligence firm Anthropic on Tuesday as a simmering dispute over military use of AI entered a critical phase.
At issue is how far the US government should be allowed to go in deploying Anthropic’s powerful large language model, Claude, in sensitive defense operations.
The Pentagon has delivered an ultimatum to Anthropic, demanding that the company accept its terms by Friday afternoon or face sweeping consequences.
What is the ultimatum that the Pentagon has given Anthropic?
According to a senior Pentagon official cited by the New York Times, the Trump administration has warned Anthropic that it could invoke the Defense Production Act if the company does not comply by 5:01 p.m. on Friday.
That step would compel the firm to provide its AI technology for military use.
At the same time, officials have threatened to label Anthropic a supply chain risk — a designation usually reserved for companies linked to foreign adversaries.
Such a move could effectively bar the US government from using Anthropic’s products at all.
The two measures are fundamentally at odds.
One would force the military to use Anthropic’s model, while the other would prohibit its use.
However, the contradictory threats reflect both the depth of frustration with Anthropic’s resistance and the strategic value of its technology.
An Anthropic spokesperson said Tuesday’s meeting “continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do,” Reuters reported.
Why Anthropic matters to the military
Anthropic occupies a unique position in the US defense ecosystem.
It is currently the only AI company whose model is operating on classified military systems.
The Department of Defense signed contracts last July with several leading AI firms — including Google and OpenAI — offering deals worth up to $200 million.
Until this week, however, only Anthropic’s Claude model had been cleared for use in classified environments.
The core disagreement over AI safeguards
Anthropic has branded itself as the most safety-forward of the major AI developers, and it is this stance that has increasingly put it at odds with defense officials.
US military leaders have pushed for broader and less restricted access to Claude’s capabilities.
Anthropic, according to people familiar with the talks, has resisted allowing its models to be used for mass surveillance or for autonomous weapons systems that could make lethal decisions without direct human involvement.
The dispute escalated earlier this month after Anthropic asked questions about how its AI tools were used during a military operation in Venezuela that led to the capture of President Nicolas Maduro — inquiries that alarmed Pentagon officials.
The Department of Defense has integrated Claude into parts of its workflow but has threatened to sever ties over what it sees as artificial constraints imposed by a private contractor.
Pentagon officials argue that lawful use of software and weapons is the government’s responsibility, not something vendors should dictate, NYT reported.
Supporters of Anthropic, meanwhile, argue that the company is being punished for being the first to enter that space and for developing a bespoke government-focused model, known as Claude Gov, that differs from its public-facing products.
What happens next: legal and commercial implications
According to Reuters, a person familiar with the matter said Anthropic has no intention of easing its usage restrictions for military purposes, even as discussions with the Pentagon continue.
Reuters also reported that Anthropic chief executive Dario Amodei told Hegseth during Tuesday’s meeting that the company had not raised concerns with the Pentagon or with defense contractor Palantir about the raid.
Amodei also said the safeguards currently in place would not interfere with the Defense Department’s existing operations.
If the Pentagon proceeds with a supply chain risk designation, the consequences for Anthropic could extend well beyond defense contracts.
Such a label could disrupt the company’s relationships with other firms that do business with the US government.
“This specific scenario is unprecedented,” said Franklin Turner, a government contracts lawyer at McCarter & English, in comments cited by Reuters.
He warned that any adverse action could trigger extensive litigation, given the unusual nature of the Pentagon’s threats.
For now, Anthropic says talks are continuing in good faith.
Whether the company can maintain its safety-first stance while remaining a key supplier to the US military may determine not only its future, but also how AI is governed in national security settings.
The post Explained: What is behind the Pentagon’s clash with Anthropic? appeared first on Invezz