- AI Governance
- Defense Industry
- Enterprise Software
Anthropic Faces Pentagon Pressure Over Claude Safeguards
12 minute read
Washington’s pressure over military use of Claude is forcing the company to defend its safety architecture, testing whether ethical guardrails can withstand state demands.
Key Takeaways
- The Pentagon’s February 27 deadline to remove ethical guardrails from Claude marks the first time a major AI lab has faced government coercion on its core safety architecture, setting a precedent that will shape how frontier AI is governed.
- Anthropic’s $380 billion valuation, up from $183 billion in five months, reflects not just commercial momentum but investor conviction that safety-oriented AI commands a structural premium — a thesis now under acute stress.
- The SaaSpocalypse triggered by Claude Code and Cowork is no longer a theoretical threat: IBM lost $31 billion in market value in a single session, and over $400 billion was erased from software and services sectors in a matter of days.
The Ultimatum
On the morning of February 24, Defense Secretary Pete Hegseth sat across from Dario Amodei in a meeting that neither side could afford to lose. The subject was Claude, Anthropic’s flagship language model, and whether the company would agree to strip away the ethical constraints that prevent it from operating as an unchecked instrument of warfare. According to The New York Times, citing officials familiar with the matter, Hegseth’s message was unambiguous: comply by February 27 or face the consequences.
The consequences, as outlined by the Department of War, are not trivial. Termination of a $200 million contract, active since July 2025, would be the most visible blow. More damaging still would be the designation of Anthropic as a “supply chain risk” — a classification ordinarily reserved for adversarial foreign entities such as Huawei. That label would cascade through the defence contractor ecosystem, compelling partners to cut ties and effectively barring Anthropic from classified environments it has spent considerable effort to enter.
It is a striking escalation. In the quarter century since 9/11 shaped the architecture of public-private intelligence cooperation, Washington has generally wielded procurement levers, not existential threats, to align private technology with national security objectives. The Pentagon’s willingness to invoke the Defense Production Act signals something different: a belief that AI’s trajectory is too consequential to be governed by the values of a San Francisco startup, however well-intentioned.
A Company Built on Refusal
Anthropic was founded in 2021 by Dario and Daniela Amodei and a cohort of researchers who left OpenAI over concerns about the pace of safety research relative to capability development. The company’s public benefit corporation structure is not mere branding. It encodes a specific wager: that trust, built through demonstrable restraint, would prove more durable in the market than raw capability alone. The constitutional AI framework that governs Claude’s behaviour reflects that wager, prohibiting uses that include autonomous weapons targeting and large-scale population surveillance.
For three years, the strategy appeared vindicated. Annualised revenue reached $14 billion, roughly ten times its level a year earlier. Enterprise adoption accelerated across legal, financial, and technology sectors. The February 2026 Series G round, led by GIC and Coatue, raised $30 billion and placed the post-money valuation at $380 billion — more than double the $183 billion figure recorded just five months earlier. Investors, in other words, have not been indifferent to the safety positioning. They have treated it as a competitive moat.
The Pentagon dispute tests whether that moat holds under state pressure. Pentagon officials argue that Claude’s restrictions are arbitrary — that compliance with US law should constitute sufficient ethical oversight, and that private corporations should not be in the business of limiting what sovereign governments can do with licensed technology. Critics within the administration describe Anthropic’s position as ideological overreach dressed in corporate responsibility language. The company sees it differently. When Claude was deployed via Palantir’s AI Platform and Amazon’s Top Secret Cloud and subsequently used in the January 3 raid that captured Venezuelan President Nicolas Maduro — without the human oversight protocols Anthropic had stipulated — the company concluded that its terms had been violated. The question of who controls the guardrails is, for Anthropic, also the question of who is liable for the consequences.
Precedent and Pressure
The comparison most readily reached for is Google’s 2018 withdrawal from Project Maven, when employee protests over drone targeting software led the company to decline contract renewal. The parallel is instructive but imperfect. In 2018, Google was walking away from a contract it could simply decline to renew. Anthropic, by contrast, has already integrated deeply into classified infrastructure, and its technology has already been used in operations of significant geopolitical consequence. Withdrawal now carries a different weight.
There is also the matter of what Anthropic’s decision signals to the industry. If the company yields, competitors and regulators will draw their own conclusions about the durability of voluntary ethical frameworks. If it holds firm and absorbs the commercial damage of a supply chain risk designation, it will have established that private AI developers can and will refuse state demands — a stance with implications well beyond defence contracting. Undersecretary Emil Michael’s description of the relationship as “under review” suggests that Washington has not yet decided whether it wants an example made, or a deal struck.
What both sides share is an understanding that this negotiation is not principally about a $200 million contract. It is about who establishes the norms governing AI in high-stakes environments — and whether those norms originate in law, in corporate policy, or in the practical realities of deployment. That question has no settled answer, and the February 27 deadline will not resolve it. It will, however, force the first major public reckoning.
The SaaSpocalypse Arrives
While the Pentagon standoff dominates headlines, a parallel disruption is reshaping the economics of the software industry. Claude Code, launched in May 2025, had by February 2026 reached $2.5 billion in annualised revenue, with weekly active users doubling month over month. Its fingerprints are visible across 4% of all public GitHub commits — double the figure from January alone. The tool has compressed the labour component of software development in ways that legacy vendors are only beginning to quantify.
The January 30 launch of Claude Cowork extended this logic to professional services. Eleven open-source plugins now automate workflows in sales, legal, finance, and data operations — from contract drafting to compliance review. The market response was swift and severe. Legal technology firms including Thomson Reuters and LegalZoom recorded double-digit share price declines. Cybersecurity names fell 9% in a single session on February 16, as vulnerability assessment capabilities were absorbed into Claude Code Security. IBM suffered its worst single-session loss since 2000 on February 23, declining 13.1% and shedding $31 billion in market capitalisation, as enterprise clients began reassessing the economics of COBOL modernisation contracts that Claude Code now threatens to automate.
Between February 4 and 7, AI-related anxieties erased more than $400 billion from software, financial services, and asset management equities. The S&P North American Technology Software Index had declined more than 20% year-to-date through February 6. Analysts at Wedbush cautioned against interpreting the rout as definitive, describing it as an “AI Ghost Trade” driven by momentum rather than measured assessment of long-term displacement. Citrini Research, taking a more structural view, noted the erosion of application-layer moats while acknowledging that incumbents with the capacity to integrate AI natively retain meaningful competitive options. By February 24, a modest stabilisation was under way, aided by Anthropic’s announcement of partnerships with Infosys for regulated industries and CodePath for education.
Growth at Institutional Scale
The February 25 acquisition of Vercept, which bolsters Claude’s computer-use and cybersecurity scanning capabilities, is representative of a broader expansion strategy. Enterprise-grade features unveiled on February 24 — private plugin marketplaces, connectors to Google Drive and DocuSign, and specialised templates spanning HR, engineering, and investment banking — extend Claude’s functional reach into the daily workflows of institutional clients. Claude Opus 4.6, released February 5, topped benchmarks for finance and legal reasoning. Claude Sonnet 4.6, released February 17, introduced a one-million-token context window and enhanced tool integration for high-volume analytical tasks.
Anthropic is simultaneously managing the internal pressures that accompany this scale. A $6 billion employee share sale, initiated February 23 at a $350 billion valuation and reported by Bloomberg, reflects the liquidity demands of a workforce competing against well-capitalised rivals in a tight talent market. International expansion — a Bengaluru office and a memorandum of understanding with Rwanda for AI deployment in health and education — signals ambitions that extend well beyond Silicon Valley and the defence contracting ecosystem in which the current crisis is centred.
What Holds, and What Yields
There is no comfortable resolution available to Anthropic before the February 27 deadline. Capitulation preserves the contract and forestalls the supply chain risk designation, but at the cost of the safety reputation that underpins its enterprise valuation and its identity as a public benefit corporation. Resistance maintains that identity but invites commercial damage that, while survivable given current capital reserves, would set a precedent of vulnerability that could encourage future pressure from both state and corporate actors.
What is clear is that the question Anthropic now faces — whether private companies can sustain ethical boundaries when governments demand otherwise — is not one that will recede after this particular deadline. AI’s capacity to affect battlefield outcomes, surveillance infrastructure, and the automation of consequential decisions will only intensify the pressure on companies that have made safety their core competitive claim. The willingness of states to treat AI developers as strategic assets subject to coercive management will grow in proportion to the technology’s demonstrated power.
Anthropic built its business on the premise that doing things carefully, and being seen to do them carefully, was worth the commercial cost of refusal. That premise is now being tested by a party with the institutional authority to make the costs very high indeed. The outcome will tell observers something important: not just about Anthropic’s resilience, but about whether voluntary ethical governance of frontier AI was ever more than a proposition waiting to be disproved.