Artificial Intelligence

Anthropic Said No to the Pentagon and Got Banned — Here's the Full Story [2026]

March 17, 2026 | 16 min read

Not sure which AI model is right for you?

12 models compared • Personalized results • Takes 60 seconds

Find Your AI Model

Anthropic refused to drop two guardrails: no mass domestic surveillance, no autonomous weapons without human oversight. The Pentagon designated it a "supply chain risk" — a label previously reserved for companies like Huawei. OpenAI announced a replacement deal the next day. Sam Altman later admitted it looked "opportunistic and sloppy." Over 430 employees from Google and OpenAI signed an open letter backing Anthropic. Claude surpassed ChatGPT in the US App Store for the first time. Anthropic is now suing the Department of Defense.

On February 27, 2026, the United States government did something it had never done before: it designated an American AI company as a supply chain risk. Not for a security breach. Not for foreign ties. For refusing to remove safety guardrails.

The company was Anthropic. The guardrails were simple: no mass domestic surveillance, no autonomous weapons without a human in the loop. Over the next two weeks, key researchers quit, Anthropic sued the government, 2.5 million people pledged to cancel ChatGPT, and employees at Google and OpenAI publicly sided with their competitor. Nothing like this has happened in tech before.

I've been following this from the first TechCrunch report on February 15 to Anthropic's official statement. Here's the full story.

Anthropic's Pentagon contract: $200M
Google and OpenAI employees who signed the open letter: 430+
ChatGPT uninstall spike: 295%
#QuitGPT pledges: 2.5M

The Full Timeline: February 15 to March 10

23 days, from a quiet contract dispute to a federal lawsuit.

Feb 15: TechCrunch reports Anthropic and the Pentagon are arguing over Claude usage restrictions
Feb 16: Pentagon warns Anthropic will "pay a price" for refusing to loosen guardrails
Feb 23: Defense Secretary Hegseth summons CEO Dario Amodei to the Pentagon, threatens a "supply chain risk" label
Feb 24: Pentagon gives Anthropic a Friday deadline; Anthropic declines to budge
Feb 26: Amodei publicly states: "We cannot in good conscience accede to their request"
Feb 27: Pentagon designates Anthropic a supply chain risk; Trump orders all federal agencies to stop using Anthropic
Feb 27: 430+ Google and OpenAI employees sign the "We Will Not Be Divided" open letter
Feb 28: OpenAI announces a Pentagon deal one day later; Altman claims "technical safeguards"
Mar 2: U.S. strikes Iran; reports confirm Claude is still used for targeting via Palantir's Maven system
Mar 3: Altman admits the deal looked "opportunistic and sloppy," renegotiates the contract
Mar 4: Leaked Amodei memo calls OpenAI's messaging "straight up lies"
Mar 5: Pentagon formally finalizes the supply chain risk designation
Mar 7: OpenAI robotics lead Caitlin Kalinowski resigns over the Pentagon deal
Mar 9: Anthropic files two federal lawsuits against the DOD
Mar 10: Microsoft files an amicus brief; 30+ OpenAI and Google employees plus Jeff Dean file a supporting brief; 22 retired military officials file a supporting brief

What Anthropic Actually Refused

Two specific guardrails. That's it.

Anthropic had a $200 million Pentagon contract. Claude was the first frontier AI model deployed on classified Pentagon networks — the same model that leads agentic coding benchmarks and powers Claude Cowork. The relationship was working. Then the Pentagon asked for more.

Anthropic had two red lines it wouldn't cross:

  1. No mass domestic surveillance of Americans — Claude could not be used to monitor US citizens at scale without judicial oversight
  2. No fully autonomous weapons — any lethal decision required a human in the loop

The Pentagon wanted "all lawful purposes" access, with no restrictions beyond existing law. Anthropic said no.

Amodei's Public Statement

"These threats do not change our position: we cannot in good conscience accede to their request." — Dario Amodei, February 26, 2026

Anthropic also donated $20 million to "Public First," a political advocacy group backing congressional candidates who favor AI safety regulation. This did not improve relations with the administration.

On February 27, Defense Secretary Pete Hegseth designated Anthropic a supply chain risk. A statement from the administration put it bluntly: "We don't need it, we don't want it, and will not do business with them again."

That label had previously been reserved for foreign adversaries like Huawei and Kaspersky; no American company had ever received it until Anthropic. Not for a security breach. For having a policy.

OpenAI Took the Deal — Then Admitted It Was 'Sloppy'

One day. That's how long it took.

On February 28, one day after Anthropic was blacklisted, Sam Altman announced OpenAI's Pentagon deal. He claimed the contract included protections against domestic mass surveillance and fully autonomous weapons — the exact same two restrictions Anthropic had demanded.

The optics were bad, and Altman knew it. Within days, in an internal memo that leaked, he wrote: "We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy."

On March 3, OpenAI renegotiated the contract to add stronger surveillance protections, consistent with the Fourth Amendment and FISA.

Critics weren't satisfied. Former Army Undersecretary Brad Carson said publicly: "I don't believe that provision is in the contract." Former DOJ attorney Alan Rozenshtein called OpenAI's approach "not sustainable" and "bizarre," arguing only releasing the full contract could settle the question. The EFF called the contract language "weasel words" that wouldn't prevent AI-powered surveillance.

The Policy Backstory

OpenAI wasn't always open to military work. Before January 2024, their usage policy explicitly banned "military and warfare" applications. In January 2024, they quietly deleted that ban. By December 2024, they had a partnership with defense contractor Anduril. By August 2025, they were practically giving ChatGPT to the government for free. The Pentagon deal was a destination, not an accident.

For a broader look at how the two companies' models compare on capability, see our GPT-5.4 vs Claude Opus 4.6 vs Gemini 3.1 Pro comparison.

The Employee Revolt: 'We Will Not Be Divided'

Google and OpenAI employees publicly sided with Anthropic.


On February 27, the same day the ban dropped, an open letter went live at notdivided.org. By the following Monday it had close to 900 signatures, roughly 800 from Google and 100 from OpenAI, all verified as current employees.

The letter's two demands matched Anthropic's two red lines: no domestic mass surveillance, no fully autonomous weapons. Its most quoted line: "They're trying to divide each company with fear that the other will give in."

Three High-Profile Resignations

Mrinank Sharma (Head of Anthropic's Safeguards Research) resigned February 9, just before the standoff escalated. His departure letter warned: "The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment." He left to pursue poetry and "the practice of courageous speech."

Zoe Hitzig (OpenAI research scientist) resigned February 11 via a New York Times op-ed, warning that ChatGPT advertising creates "potential for manipulating users in ways we don't have the tools to understand, let alone prevent." We covered the ad controversy separately in our ChatGPT ads and alternatives breakdown.

Caitlin Kalinowski (OpenAI robotics lead) resigned March 7, directly over the Pentagon deal. Her statement: "Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got."

The Amicus Briefs

On March 10, the solidarity became legal. 30+ employees from OpenAI and Google DeepMind, including Google chief scientist Jeff Dean, filed an amicus brief supporting Anthropic's lawsuit. Their argument: "This effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness."

22 retired military officials filed their own supporting brief. Microsoft filed a separate amicus brief urging a temporary restraining order, arguing contractors would need to "rapidly rebuild offerings" without Claude, "hampering US warfighters at a critical point in time."

Even OpenAI itself issued a statement: "We do not think Anthropic should be designated as a supply chain risk and we've made our position on this clear to the Department of War."

The xAI Double Standard

The company under investigation got rewarded. The company with guardrails got punished.

While Anthropic was being banned for having ethics, xAI was getting a Pentagon contract worth ~$200 million. In January 2026, Defense Secretary Hegseth announced Grok would be integrated into DoD networks including classified systems at Impact Level 5.

At the time of the contract, xAI was under active investigation.

The Contrast

Anthropic had zero safety failures, was the first AI deployed in classified Pentagon systems, and got banned for having guardrails. xAI was under investigation for generating child sexual abuse material and producing antisemitic content, and got a Pentagon contract of comparable value.

The Palantir Loophole: Claude Is Still in the War

Despite the ban, Anthropic's AI keeps running on classified military networks.

This is the part I keep coming back to. Despite the supply chain risk designation, Claude kept running inside the Pentagon's Maven Smart System through Palantir.

When the US and Israel struck Iran on March 2, reports confirmed Claude was used for intelligence assessment, target identification, and simulating battlefield outcomes. The Pentagon formally finalized the supply chain risk designation on March 5, the same day reporting confirmed the DoD was actively using Claude in combat operations. Our earlier piece on AI cybersecurity threats involving Claude looks at a different side of AI misuse risk.

The ban was political, not operational. Anthropic's technology was too embedded to remove.

Consumer Backlash: #QuitGPT Goes Viral

2.5 million pledges. 295% uninstall spike. Claude hit #1 in the App Store.

The public picked a side. After OpenAI's Pentagon deal, the #QuitGPT movement documented over 2.5 million people pledging to cancel ChatGPT subscriptions. Daily uninstalls spiked 295% above the average.

Claude surpassed ChatGPT in the US App Store for the first time, driven by users who saw Anthropic as the company that stood its ground. Whether this translates to lasting market share remains to be seen, but the brand positioning shift was immediate and measurable. For context on how ChatGPT and Claude compare technically, our Claude Code vs Cursor comparison covers the developer tooling side.

The Lawsuits: 'Unprecedented and Unlawful'

On March 9, Anthropic filed two federal lawsuits, one in California and one in a D.C. appeals court, calling the supply chain risk designation "unprecedented and unlawful."

The legal question is narrow: can the government designate an American company as a supply chain risk for refusing to change a product policy? The Lawfare blog analyzed the case and concluded the designation "won't survive first contact with the legal system."

Anthropic's official statement: "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons."

What This Means for the AI Industry

The precedent goes both ways.

I don't think this story has a clean takeaway. But a few things are worth sitting with.

Anthropic gave up a $200M contract and got federally blacklisted to protect two guardrails. Whatever you think of the company, that cost them something real. This isn't the kind of thing you do for PR value.

At the same time, Anthropic wasn't refusing military work. Claude was already running on classified networks. The company drew two specific lines — domestic surveillance and autonomous weapons — and held them. Everything else stayed on the table. This wasn't pacifism. It was line-drawing.

Then there's the timing. After backlash, OpenAI renegotiated its contract to include the same two prohibitions Anthropic had demanded. If those restrictions are reasonable enough for OpenAI to add voluntarily, it raises an obvious question: why was Anthropic punished for insisting on them first?

The supply chain risk label worries me more than the contract dispute itself. If the government can blacklist a US company for having product policies it doesn't like, every AI company now knows what saying no costs. VCs are already warning that innovation funding could shrink if "Washington weaponizes procurement rules against policy dissent."

And the public actually has opinions on this, which surprised me. 79% of Americans say a human should always make the final call before lethal force. 53% say private AI companies should be allowed to restrict how their technology is used, including for surveillance and weapons. The boycott numbers were real.

The Bigger Pattern

This isn't just about Anthropic. Google walked away from Project Maven in 2018 under employee pressure, then quietly returned to defense work through a $9 billion JWCC cloud contract in 2022. Meta reversed its military AI ban in November 2024. OpenAI deleted its "military and warfare" prohibition in January 2024. The industry trend is clear: every major AI company opposed military use before 2024, and every one has since moved toward it. We tracked this shift in our AI bubble market analysis and Sam Altman analysis. Anthropic is the last company still holding a line. For how long, and at what cost, is the open question.

The lawsuit is ongoing. The supply chain risk designation stands. Claude is still running in Palantir's Maven system, which is the kind of irony that would feel heavy-handed in fiction. Anthropic's stock dropped, then recovered when the boycott numbers came in. I don't know how this ends. But I do know that more people are paying attention to AI ethics right now than at any point since ChatGPT launched, and that feels like it matters.

If you're reconsidering which AI platform to use, we recently compared all three frontier models in our GPT-5.4 vs Claude Opus 4.6 vs Gemini 3.1 Pro breakdown. For a personalized recommendation based on your use case and priorities, take our AI Model Picker quiz.
