Dario Amodei told the Pentagon he "cannot in good
conscience accede" to its demands. Within hours, the
Trump administration blacklisted Anthropic from every
federal agency. Before that Friday was over, Sam Altman
had signed a deal to put OpenAI's models on classified
Pentagon networks. The whole sequence took less than a
day.
That timeline deserves to sit with you for a moment.
Anthropic had a $200 million military contract on the
table. The company wanted two conditions: no mass
surveillance of American citizens, and no fully
autonomous weapons systems. These are not fringe
demands. They are the kind of restrictions that sound so
obviously reasonable you'd assume they were already law.
Anthropic's position was that current frontier AI models
are not reliable enough for autonomous lethal force, and
that mass domestic surveillance violates fundamental
rights. The Pentagon told them to drop the conditions or
lose the contract. Anthropic dropped the contract.
Defense Secretary Pete Hegseth didn't just cancel the
deal. He designated Anthropic a "supply chain risk to
national security" — a designation normally reserved for
hostile foreign actors, not American companies exercising
their right to negotiate terms. Trump ordered all federal
agencies to begin a six-month phase-out of Anthropic
technology. The message was blunt: comply absolutely, or
we will make an example of you.
Amodei's response was equally blunt. "Disagreeing with
the government is the most American thing in the world,"
he said. He's right. However, being right in Washington
has never been a reliable survival strategy.
Here is where it gets ugly.
On Thursday evening — the night before the blacklisting
— Sam Altman sent a memo to OpenAI staff. He wrote that
this was "no longer just an issue between Anthropic and
the Pentagon; this is an issue for the whole industry and
it is important to clarify our stance." He told CNBC he
didn't "personally think the Pentagon should be
threatening [the Defense Production Act] against these
companies." He said OpenAI shared the same red lines as Anthropic: no mass surveillance, no autonomous weapons, humans in the loop for lethal decisions.
Then, on Friday night — roughly two hours after
Anthropic was officially blacklisted — Altman announced
that OpenAI had reached an agreement with the Department
of War to deploy its models on classified networks.
The deal includes language permitting the government to use OpenAI's technology for "all lawful purposes."
Read that clause again. "All lawful purposes" is a
phrase that swallows everything. Surveillance programmes
that haven't been ruled illegal yet? Lawful. Autonomous
targeting systems that Congress hasn't specifically
prohibited? Lawful. The entire architecture of
restriction that Anthropic fought for — the architecture
Altman publicly praised — dissolves inside three words.
OpenAI didn't negotiate the same protections Anthropic
demanded. It negotiated the appearance of them.
Altman claimed the DoW "agrees with these principles,
reflects them in law and policy, and we put them into our
agreement." This is lawyering, not principle. Anthropic
asked for contractual guarantees. OpenAI accepted the
Pentagon's assurance that existing law already covers it.
The difference between those two positions is the
difference between a lock on the door and a sign that
says "please knock."
The timing is what makes it indefensible. If OpenAI had
signed this deal three months ago, you could debate the
merits. Companies make different risk calculations.
However, Altman didn't sign it three months ago. He
waited until the exact moment his competitor had been
destroyed for holding the line he publicly endorsed, and
then walked through the door Anthropic's corpse was
holding open. There is a word for this, and it is not
"principled."
OpenAI has form here.
Altman told the Financial Times in 2024 that he "hates"
advertising and called combining ads with AI "uniquely
unsettling." ChatGPT now shows ads. He told the world
OpenAI would remain a nonprofit. It converted to a
for-profit. He told staff the company shares Anthropic's
red lines on military use. The company signed a deal
without them. At some point the pattern stops being
strategic flexibility and starts being something else
entirely.
I keep thinking about what Amodei actually risked. He
didn't lose a debate. He lost access to the entire
federal government. Anthropic's commercial future in
government contracting — worth potentially billions over
the next decade — is now in jeopardy. The company has
said it will challenge the supply chain risk designation
in court, arguing it is legally unsound and sets a dangerous precedent for any American company
that attempts to negotiate with the government rather
than capitulate. Senator Mark Warner called it an attempt
to "bully" the company. Senator Thom Tillis — a
Republican — criticised the Pentagon's public approach.
Google and xAI had already accepted military contracts
without the restrictions Anthropic demanded. OpenAI was
the last major lab besides Anthropic that hadn't signed.
The industry had every incentive to quietly fold. That
Anthropic didn't — that it chose financial pain over
moral compromise — is the kind of corporate behaviour
people claim to want but rarely reward.
My own position probably doesn't need stating, given that
I'm writing this on a site built around Claude. I use
Anthropic's models daily. I think they make the best
reasoning systems available right now. However, that's
not why this matters to me. Amodei's stand would be just
as significant if Claude were mediocre. The question was
never about product quality. It was about whether an AI
company would accept hard limits on how its technology
gets used, even when the most powerful government on
earth told it the alternative was annihilation.
Anthropic said yes. OpenAI said whatever you need to hear.