- AI & Regulation
- Artificial Intelligence
- National Security
- U.S. Defence Policy
How the Pentagon Chose Its AI Partner and Set the Terms
When the Pentagon banned Anthropic and signed OpenAI in the same night, it exposed a question the industry had spent years avoiding: is the difference between accommodation and principle a matter of ethics, or of engineering?
Key Takeaways
- OpenAI’s Pentagon contract contains the same safeguards that triggered Anthropic’s designation, revealing that the dispute turned on deployment architecture and verifiable compliance rather than ethical substance.
- The use of 10 U.S.C. § 3252 against a domestic firm in a commercial dispute is without precedent and, if upheld, would significantly expand executive authority into private technology markets.
- AI companies must now be evaluated not only on technology and growth but on the durability of their regulatory posture, as February 27 proved that disagreement with federal power carries immediate cost.
Introduction
Thirteen minutes separated a presidential directive from a national security designation. On the evening of February 27, 2026, President Donald Trump ordered every federal agency to cease using Anthropic’s technology immediately. Defence Secretary Pete Hegseth followed by invoking 10 U.S.C. § 3252 to formally designate Anthropic a supply chain risk, extending the prohibition to every military contractor, supplier, and commercial partner of the Department of Defence. Within hours, OpenAI CEO Sam Altman announced a freshly signed agreement giving the Pentagon access to its models in classified environments.
By morning, one of the most sophisticated AI safety companies in the world had been treated as a foreign adversary. Its closest rival had a government contract. What followed was not a straightforward story of principle against pragmatism. It was more unsettling than that.
Months In The Making
The fracture did not arrive without warning. Negotiations between Anthropic and the Department of Defence had been deteriorating for months, snagged on two conditions the company insisted on attaching to any deployment of its Claude model in defence contexts: no use for mass domestic surveillance, and no application to fully autonomous weapons systems, defined as those capable of directing lethal force without a human in the decision chain.
Anthropic’s position was grounded in engineering as much as ethics. Current AI systems, the company argued, are not reliable enough for high-stakes autonomous decisions, and the constitutional risks of AI-driven surveillance at scale are not theoretical. The Pentagon, seeking contractual latitude for all lawful purposes, viewed these conditions as constraints it could not absorb, particularly with China integrating AI into its military programmes at speed and without equivalent hesitation.
By mid-February, the DoD was pressuring contractors who had embedded Claude in their platforms to certify non-use or face consequences. Hegseth escalated publicly, condemning Anthropic’s position as arrogance and betrayal. CEO Dario Amodei’s response was precise: the restrictions, he noted, had not obstructed a single government mission.
Twenty-four hours later, they were used to justify a national security designation.
A Statute Built For Other Enemies
The legal instrument Hegseth deployed was written for a different kind of threat. Section 3252 of Title 10 was enacted to address infiltration of U.S. defence supply chains by foreign actors seeking to embed compromised components in critical systems. It had never been applied to an American company over a contractual disagreement. Anthropic called the action legally unsound and announced its intention to challenge it in court, arguing that the Secretary of Defence has no statutory authority to restrict private commercial dealings beyond the department’s own contracts.
Legal observers agreed with that reading while warning that the remedy would be slow. A court challenge measured in years does not prevent the interim commercial damage. Corporate counsel at defence-adjacent firms do not wait for judicial resolution before advising clients to step back from a designated entity. The DoD had also reportedly considered invoking the Defence Production Act, a Cold War-era statute that could, under certain readings, compel domestic companies to supply goods or services deemed essential to national security. Legal analysts were divided on whether it could reach software providers, but its potential use confirmed that the government’s leverage in this confrontation was not confined to a single instrument.
In a television interview aired on March 1, Amodei framed the company’s position not as stubbornness but as institutional logic. A safety company that yields on its safety commitments under governmental pressure no longer has a safety proposition. The reasoning was sound, and the commercial risk of abandoning it would have been as significant as the risk of holding it.
The Mirror Problem
Here is where the episode becomes genuinely difficult to read. OpenAI’s Pentagon agreement, finalised the same evening as Anthropic’s designation, contains the same two guardrails. No autonomous weapons. No mass domestic surveillance. The contract restricts deployment to cloud-based environments, excludes edge devices to prevent adaptation for autonomous systems, and preserves OpenAI’s control over its safety stack, with classifiers calibrated to U.S. legal requirements and a mandate for human oversight of lethal decisions.
The substance of what the two companies were offering was nearly identical.
The distinction that proved decisive was architectural. OpenAI’s cloud-only deployment model, combined with on-site monitoring provisions, gave the Pentagon verifiable mechanisms to confirm compliance in operation. The DoD did not want to be told the guardrails existed. It wanted to observe them functioning. Whether Anthropic was given the opportunity to propose an equivalent model before the designation was issued is a question the public record has not yet answered. That gap matters. It is the difference between a government that moved to coercion because negotiation failed, and one that moved before negotiation had concluded.
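To make the architectural point concrete, here is a minimal sketch of what a verifiable compliance gate could look like. Everything in it is hypothetical: the contract’s actual interfaces are not public, the classifier is a stand-in, and the names (`Request`, `gate`, `compliance_audit.log`) are invented for illustration. It captures only the idea the Pentagon reportedly insisted on: every request passes a policy check, prohibited categories are refused unless a human authoriser is on record, and each decision leaves an append-only audit trail an on-site monitor can inspect.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional
import json

# Categories the contract forbids outright (hypothetical labels).
PROHIBITED = {"autonomous-lethal-force", "mass-domestic-surveillance"}

@dataclass
class Request:
    prompt: str
    use_category: str                # declared purpose, e.g. "analysis"
    human_authoriser: Optional[str]  # ID of the human in the decision chain

def classify(req: Request) -> str:
    """Stand-in for a safety classifier; a real deployment would use a model."""
    if req.use_category == "targeting-support" and req.human_authoriser is None:
        return "autonomous-lethal-force"  # lethal use with no human on record
    return req.use_category

def audit(event: dict) -> None:
    """Append-only record an on-site monitor could verify independently."""
    event["ts"] = datetime.now(timezone.utc).isoformat()
    with open("compliance_audit.log", "a") as log:
        log.write(json.dumps(event) + "\n")

def gate(req: Request) -> bool:
    """Admit or refuse a request, logging the decision either way."""
    label = classify(req)
    allowed = label not in PROHIBITED
    audit({"category": label, "allowed": allowed,
           "authoriser": req.human_authoriser})
    return allowed

if __name__ == "__main__":
    print(gate(Request("summarise the logistics report", "analysis", None)))          # True
    print(gate(Request("select and engage targets", "targeting-support", None)))      # False
    print(gate(Request("select and engage targets", "targeting-support", "OF-1234"))) # True
```

The design point is the one the DoD reportedly cared about: the guardrail is observable in operation, in the refusal behaviour and the log, rather than merely asserted in contract language. A cloud-only deployment makes that observation possible; on an edge device, nothing prevents the gate from being stripped out.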
Altman proposed the agreement’s terms as a standard for the entire industry. If adopted broadly, it would install OpenAI’s framework as the default architecture for defence AI engagement, a structural position that compounds with every subsequent procurement cycle.
What The Markets Understood
Financial markets reacted quickly. Major technology stocks fell on uncertainty before partially recovering when OpenAI’s deal details became clear. Analysts forecast a migration of defence budgets away from Anthropic-embedded systems toward OpenAI, with downstream benefits for their respective compute partners. The broader signal was unmistakable: government alignment had become a pricing variable in AI valuations.
Anthropic’s commercial position outside the defence sector remained intact. The company’s capitalisation, backed by Amazon and Google, provides durability that most firms could not sustain through a prolonged confrontation with the federal government. More telling is a prior decision the company had made quietly: it had already restricted Chinese entities from accessing its models, voluntarily and at a cost to revenue. A company designated a national security risk days after declining to grant the U.S. military unrestricted access, and which had already cut off America’s principal strategic adversary, does not fit a simple narrative. The government and Anthropic disagree not on the ends of national security, but on the conditions under which AI capability can be responsibly deployed in its service.
The Valley Watches
Inside the technology sector, the episode has produced visible unease. At Google, a petition opposing concessions to the DoD drew more than one hundred employee signatures. Modest in scale, pointed in symbolism. The workforce that builds frontier AI systems frequently holds views about their application that diverge from the commercial and governmental pressures now bearing down on it. In an industry where replacing a senior researcher takes years, that tension is not a peripheral management concern.
The discussion across the industry settled into a frame its leaders would prefer to avoid: an AI Cold War conducted not between nations but between the federal government and the domestic companies it simultaneously funds, relies upon, and now seeks to direct by designation. The precedent question will outlast the immediate commercial fallout. Whether the executive branch can deploy a foreign supply chain statute against a domestic firm in a contractual dispute, whether analogous instruments could reach other sectors, none of this has been tested in court. All of it is now in play.
The Position Every AI Company Now Holds
Before February 27, the relationship between AI companies and the federal government was complex but navigable. After it, a new variable sits inside every valuation model and every board-level risk assessment in the sector. A firm that commands a premium on safety credentials faces real exposure if those credentials become a contractual liability in its largest addressable market. A firm that accommodates government requirements secures revenue clarity at the cost of the differentiation that justified the premium.
Neither position is cleanly preferable, and the industry is watching to see which proves more durable. If Anthropic’s legal challenge succeeds and its commercial position holds, the evidence will be that principled constraint can survive direct governmental pressure. If the designation holds and the damage compounds, the evidence will be that it cannot, and the companies built in the years that follow will reflect that lesson.
The cost of the second outcome extends beyond the firms directly involved. Safety-focused AI expertise did not concentrate in the United States by accident. It reflects specific institutional choices and the decisions of researchers who believed they were working in an environment that would not penalise caution. A policy posture hostile to that assumption does not eliminate the work. It moves it.
The Pentagon has the AI access it required, from one provider rather than two. What it also established, in thirteen minutes on the night of February 27, was that the terms of that access are its to set, and that the cost of disagreement is real. The industry negotiates from that understanding now. So do its investors. So do its engineers.