Pentagon-Anthropic battle pushes other AI labs into major dilemma
· Axios

As the Pentagon and Anthropic wage an ugly and potentially costly battle, three other leading AI labs are also negotiating with the department — and deliberating internally — about the terms under which they'll let the military use their models.
Why it matters: Defense Secretary Pete Hegseth wants to integrate AI into everything the military does more quickly and effectively than adversaries like China. He's insisting AI firms give the military unrestricted access to their models, no questions asked — and showing he's willing to play hardball to force their hands.
Driving the news: The Pentagon is threatening to sever its contract with Anthropic and declare the company a "supply chain risk" because it's unwilling to lift certain restrictions on its model, Claude.
- The company is particularly concerned about Claude being used for mass domestic surveillance or to develop fully autonomous weapons.
- The use of Claude in the Nicolás Maduro raid deepened tensions. The Pentagon claims an Anthropic executive raised concerns after the operation, though Anthropic denies that.
- Administration officials say it's unworkable for the military to have to litigate individual use cases with Anthropic before or after the fact. "We're dead serious," a senior Pentagon official told Axios of the threat to cut off Anthropic and force its vendors to follow suit.
State of play: Crucially, Claude is the only model available in the military's classified systems through Anthropic's partnership with Palantir.
- Three other models — OpenAI's ChatGPT, Google's Gemini and xAI's Grok — are available in unclassified systems, and their makers have lifted their ordinary safeguards as part of those agreements.
- Negotiations to bring those companies into the classified domain are now more urgent as the Pentagon ponders how to replace Claude if necessary — a process a senior official conceded would be massively disruptive.
- Anthropic says it remains committed to working with the Pentagon, despite the public feud, and both sides say they might still come to an agreement.
- One official acknowledged that the fight with Anthropic was a useful way to set the tone for negotiations with the other three.
The intrigue: Officials are adamant they won't budge on a standard allowing the Pentagon "all lawful use" of the AI models, and a senior administration official said one of the three labs already told the Pentagon it was "ok with 'all lawful use' at any classification level."
- A source familiar with the matter told Axios that it was xAI, whose founder Elon Musk has ripped rivals like Anthropic and OpenAI as "woke" for their approaches to safety. xAI did not respond to multiple requests for comment.
- Notably, xAI was the only bidder out of the frontier labs in the Pentagon's autonomous drone software contest.
- OpenAI is bidding in a limited way to translate voice commands into digital instructions, but not for drone control, weapon integration, or target selection.
Zoom in: The senior official said the administration was confident the other two labs would agree to "all lawful use" across both domains. But sources familiar with those dynamics tell Axios it's not nearly that clear-cut.
- An OpenAI spokesperson told Axios that moving into classified work "would require us to agree to a new or modified agreement." Google declined to comment.
- The "all lawful use" requirement is hardly relevant for unclassified work. "People are going to use this thing to make their PowerPoint slides a little bit more quickly and easier. They're not going to be developing autonomous weapons," one source said.
- But applying that standard in the classified domain poses thorny ethical dilemmas.
Between the lines: While Anthropic CEO Dario Amodei has been the most vocal about the risks of advanced AI, executives at OpenAI and Google share some concerns about how their models might be used, sources familiar with those dynamics say.
- The companies may also fear revolts among their engineers, like the one Google experienced in 2018 over a previous initiative, Project Maven, that involved using AI to analyze drone footage. Google walked away from that deal after a damaging internal fight.
Then there's the matter of how to ensure the Pentagon is complying with whatever usage terms have been agreed to, or even with the law.
- "The whole game" is building infrastructure that ensures what's being deployed is safe, and having oversight on the back end into how it was used, one source said.
- The source was skeptical of Anthropic's claim that the company has sufficient visibility into Pentagon operations to ensure it's comfortable with every use of its model.
- A separate source said Anthropic does have visibility and is confident its usage policies are followed.
Threat level: One source familiar with the ongoing discussions said one issue is that the companies themselves don't fully understand how their models will respond in certain scenarios, or why.
- "That is more challenging than just figuring out like, 'Hey, will this metal withstand this degree of heat or that degree of heat?'"
- The source added: "If there's a one in a million chance that the model might do something unpredictable, is that one in a million chance so catastrophic that it's not worth taking a one in a million chance?"
Consider this: If an AI model enables an autonomous weapon to near-instantly take down dangerous drone swarms, is it ethical to deploy it when there's some small chance it could also fire on a civilian flight?
- Those are the sorts of questions the labs are grappling with.
The bottom line: The Pentagon's position is that such decisions should be made by the military, not by executives in Silicon Valley. Anthropic and its rivals are under pressure to decide whether they can live with that.