Anthropic Takes the Pentagon to Court Over AI Ethics
The AI pioneer’s lawsuits against the Defense Department mark a defining moment in how America will govern artificial intelligence for generations to come.
Key Takeaways
- Anthropic filed two simultaneous lawsuits today arguing the Pentagon’s supply chain designation is unconstitutional retaliation for protected speech, a legal theory that, if upheld, would significantly constrain executive power over domestic technology firms.
- The designation has no domestic precedent: historically reserved for companies tied to foreign adversaries, its application to an American AI firm while that firm’s technology remains active in U.S. military operations in Iran creates a factual contradiction the government will struggle to argue around.
- Despite the severity of the blacklisting, both sides remain in dialogue, and Anthropic has been explicit that its lawsuits aim not to compel a partnership but to prevent executive overreach from establishing a precedent that could expose every U.S. technology company to similar pressure.
The Line That Could Not Be Negotiated Away
The relationship between Anthropic and the United States Department of Defense began, less than a year ago, as a model of what public-private partnership in artificial intelligence could look like. In July 2025, Anthropic secured a $200 million contract to deploy its Claude model across classified defense networks, becoming the first AI company to operate inside those systems. The agreement rested on two explicit commitments: Claude would not be used for mass domestic surveillance of American citizens, and it would not be deployed in fully autonomous weapons systems where no human held targeting authority.
By February 2026, those two commitments had become the fault line in one of the most consequential disputes in the short history of commercial AI. The Pentagon, under Defense Secretary Pete Hegseth, demanded unrestricted access to Claude for any lawful purpose. Anthropic’s CEO Dario Amodei declined, arguing that current AI systems are not reliable enough for autonomous lethality and that bulk data profiling of Americans raises serious constitutional concerns. The two sides met on February 24. No agreement was reached.
On February 27, Hegseth moved to formal punishment. On Monday, March 9, Anthropic filed suit.
An Instrument Built for Foreign Adversaries
The legal mechanism Hegseth deployed is, by design and by history, a tool for managing threats originating outside the United States. Supply chain risk designations under 10 U.S.C. 3252 have historically been applied to companies with ties to China or Russia, severing relationships with vendors whose security practices presented genuine national security concerns. The designation compels any company seeking Pentagon contracts to certify that it does not use the blacklisted firm’s products. It is, in practical terms, an industry-wide exclusion order.
Applying it to a San Francisco-based company founded by American researchers, backed by Google and major institutional investors, and currently embedded in active classified military operations, is without historical parallel. Anthropic’s 48-page complaint calls the actions “unprecedented and unlawful.” A second filing was lodged simultaneously in the D.C. Circuit Court of Appeals, targeting a separate statute the government invoked that can only be challenged in that jurisdiction. Anthropic is pursuing relief on both fronts.
The statutory argument is precise. Congress, in drafting 10 U.S.C. 3252, required that the Pentagon employ the least restrictive means available to address supply chain risk. Anthropic argues that moving directly from a contract renegotiation dispute to a blanket industry exclusion, without intermediate measures, violates that standard on its face. No national security finding rooted in the company’s conduct, ownership, or technical vulnerabilities was offered. The basis was a policy disagreement over usage terms.
The Constitutional Core
Beyond the statutory challenge lies a First Amendment argument with broader implications. Anthropic’s complaint states: “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech.” The company’s positions on autonomous weapons and domestic surveillance are, in its framing, expressions of corporate policy on matters of public concern, not operational interference in military command. The government retaliated against those positions using the full weight of executive procurement authority.
That framing, if accepted by a federal court, would establish meaningful limits on how the executive branch can treat companies that publicly advocate for technology guardrails. It would mean that a firm’s stated views on how its products should be used cannot be grounds for exclusion from federal markets. The precedent would reach well beyond AI.
The Pentagon’s counter-argument is narrower: this is about operational control and the military’s ability to deploy technology without vendor-imposed constraints in a national security context. It is a defensible institutional position, and it may yet prevail. But it does not easily address the specific legal standards the complaint invokes, and the administration’s public communications have been notably more political in tone than the legal theory requires.
The Iran Contradiction
The administration’s legal position is severely complicated by one operational reality it cannot sidestep. Two sources familiar with the U.S. military’s use of artificial intelligence confirm that the U.S. used Anthropic’s Claude model during the attack on Iran and is still using it, despite a government-wide ban on the technology.
The timing is particularly difficult for the government to explain. On Friday, the day before the U.S. and Israel began their strikes, Trump posted an order on social media directing every federal agency to immediately cease all use of Anthropic’s technology. According to reports, U.S. forces used Claude to assist the strikes that same day.
The scope of that use goes beyond administrative functions. Claude is central to Palantir’s Maven Smart System, which provides real-time targeting for military operations against Iran. The U.S. military used these AI targeting tools to strike over 1,000 targets in Iran during the first 24 hours of the operation. According to reporting from the Washington Post, Claude will not be phased out until the DoD has found a replacement.
Pentagon chief technology officer Emil Michael told CBS News the Defense Department uses Claude for synthesizing documents and making logistics and supply chains more efficient, among other tasks. Separately, reports indicate Claude’s role was as a decision-support tool, providing insights, summaries, and simulations to human operators rather than independently controlling weapons systems or making lethal decisions without human involvement.
Designating a product a national security threat while relying on it to prosecute an active war is not a posture that lends itself to straightforward legal defense. Anthropic’s legal team will not leave that contradiction unexamined. The result is that the company finds itself in an unusual position: its technology remains in active use in the ongoing conflict in Iran even as the company is forced to decouple from many of its clients in the defense industry.
The OpenAI Pivot and Its Problems
The administration’s move to redirect AI contracting toward OpenAI, announced within hours of the February 27 boycott order, has generated complications of its own. OpenAI stated that its agreement has “more guardrails than any previous agreement for classified AI deployments, including Anthropic’s.” That formulation, intended as a statement of confidence, implicitly concedes the legitimacy of usage restrictions as a concept. If the Pentagon is prepared to accept guardrails from OpenAI, the argument that Anthropic’s guardrails constituted an intolerable interference in military authority becomes considerably harder to sustain in court.
The broader picture of AI adoption in the military is also more complex than the administration’s framing suggests. AI has become deeply integrated into U.S. military planning and execution, particularly in intelligence assessment, target identification, and operational simulations, and it will be used again in future operations. The question is not whether commercial AI enters the architecture of national security, but on what terms, and with what oversight.
Commercial Damage, and Its Limits
Anthropic’s complaint is direct about the financial stakes. Federal contracts are already being canceled. Defense-adjacent private clients face pressure to certify that they do not use Claude, putting commercial relationships at risk. The filing warns that hundreds of millions of dollars in near-term revenue is in jeopardy. The pressure has already led some defense-tech firms to switch to other AI models, and many of Anthropic’s investors have remained silent.
The longer picture is more nuanced. Microsoft and Google have confirmed they will continue non-defense work with Anthropic, containing immediate spillover. And the public response to the dispute has been striking: Claude surpassed ChatGPT in the U.S. App Store the day after the Pentagon announced its contract termination, and the company reported more than a million new sign-ups per day as of March 5. Consumer and enterprise demand has moved, for now, in the opposite direction from the government’s pressure.
That dynamic matters to investors assessing the dispute’s long-term implications. Anthropic’s enterprise value does not rest on federal procurement. It rests on the trust of financial institutions, healthcare systems, legal firms, and technology companies that have built workflows around Claude precisely because the company is seen as principled about how its technology is deployed. That reputation has not been weakened by this dispute.
A Question the Courts Must Settle
Both sides have left the door open to a negotiated resolution. Defense Undersecretary Emil Michael has said publicly he remains open-minded, noting: “I have a responsibility to the Department of War, and if there was a way to ensure that we had the best technology, I have no ego about it.” Anthropic has pledged to continue supporting the Pentagon’s transition and has stated clearly that its lawsuits are not designed to force a partnership, but to prevent the executive branch from using procurement law as a mechanism of retaliation against companies that hold and express views on their own technology’s use.
That distinction matters considerably. This case is not, at its core, about one contract or one company. It is about whether the executive branch can reach into the commercial market and exclude a domestic firm not because of what it has done, but because of what it has said. And it is about whether a government that declares a technology too dangerous to procure can simultaneously rely on that same technology to conduct active military operations abroad.
Federal courts have not ruled on that question before. They will rule on it now. The terms being set will hold for a long time.