Anthropic at the Rubicon: The Maduro Raid, the Pentagon Ultimatum, and the Death of 'Safety First'
This is huge: the next 48 hours could determine the future of AI. Anthropic is squaring off against the US government, and things are heating up.
Key Takeaways
Claude was used via Palantir's Maven Smart System in the January 3 JSOC raid that captured Venezuelan President Nicolás Maduro — directly violating Anthropic's public policies
Defense Secretary Pete Hegseth gave Dario Amodei until Friday, February 27 at 5 PM to grant the military 'unfettered access' to Claude — or face the Defense Production Act
The Pentagon threatened to label Anthropic a 'supply chain risk,' which would blackball the company from all federal contracts, and to terminate its $200 million defense deal
Anthropic gutted its Responsible Scaling Policy on the same day as the Pentagon meeting — removing the pledge to halt training if safety measures weren't proven
Anthropic's chief science officer Jared Kaplan told TIME: 'It wouldn't actually help anyone for us to stop training AI models' while competitors blaze ahead
The Company That Was Supposed to Be Different
Anthropic was founded on a single, radical promise: we will build the most powerful AI in the world, but we'll do it safely. The Amodei siblings left OpenAI in 2021 specifically because they believed their former employer wasn't taking safety seriously enough. Since 2023, Anthropic's Responsible Scaling Policy — a public commitment to halt development if safety couldn't keep up with capability — had been the cornerstone of that identity.
In the span of two weeks in February 2026, that identity collapsed.
Claude was used in a covert military raid. The Pentagon issued an ultimatum backed by Cold War-era compulsion laws. And Anthropic quietly rewrote its safety policy to remove the one pledge that made it different from every other AI lab.
Here's how it happened.
The Raid: January 3, 2026
On January 3, Joint Special Operations Command (JSOC) forces conducted a raid in Caracas, Venezuela that resulted in the capture of former President Nicolás Maduro. The operation was a closely guarded secret until mid-February, when The Wall Street Journal and Axios broke the news that Claude — Anthropic's flagship AI — had been used in preparation for the assault.
The technology was accessed through Anthropic's partnership with Palantir Technologies, the defense contractor whose Maven Smart System provides data analytics and targeting support to the Department of Defense. Palantir had announced the partnership in 2024, making Anthropic the first AI company allowed to offer services on classified military networks.
According to sources who spoke to NBC News, The Washington Post, and DefenseScoop, Claude was used for real-time data analysis and mission planning. The exact scope of Claude's role remains classified — Anthropic has refused to confirm or deny specifics, citing the classified nature of military operations.
But the implications were immediately clear: Claude had been used to support a lethal military operation. This was precisely the scenario Anthropic's policies were designed to prevent.
The Rupture: February 13–19
The fallout began almost immediately after the news broke.
According to Semafor's reporting, during a routine meeting between Anthropic and Palantir, an Anthropic employee raised questions about whether Claude had been used in the Maduro raid. A Palantir executive interpreted the inquiry as implying Anthropic might disapprove of the usage. That conversation was reported up the chain — and triggered a rupture in the relationship between Anthropic and the Pentagon.
The Pentagon confirmed the tension. Chief Pentagon spokesman Sean Parnell stated: "The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight."
On February 19, Pentagon CTO Emil Michael publicly urged Anthropic to "cross the Rubicon" on military AI use, saying the company's restrictions were unacceptable for a defense partner.
Meanwhile, in early January, Defense Secretary Pete Hegseth had already released a new AI strategy document requiring all defense AI contracts to eliminate company-specific guardrails, permitting "any lawful use" of AI systems. Companies had 180 days to comply — a timeline that put Anthropic's contract directly in the crosshairs.
The Ultimatum: February 24
On Tuesday, February 24, Dario Amodei was summoned to the Pentagon for a meeting with Defense Secretary Pete Hegseth.
The Register and Axios confirmed what happened inside: Hegseth gave Amodei until Friday, February 27 at 5:00 PM to sign a document granting the military "unfettered access" to Claude for all lawful purposes.
If Anthropic refuses, the Pentagon has three weapons ready:
Defense Production Act — A Cold War-era law giving the President (or his delegates) authority to compel private companies to accept contracts deemed necessary for national defense. This would force Anthropic to provide Claude regardless of their wishes.
Supply Chain Risk Designation — Labeling Anthropic a supply chain risk would effectively blackball them from all federal work, forcing any company doing business with the government to remove Anthropic software from their operations.
Contract Termination — The Pentagon is prepared to cancel Anthropic's defense contract, worth up to $200 million, outright.
According to The Register, Anthropic has maintained its red lines: no autonomous weapons that use AI to make final targeting decisions, and no domestic surveillance of American citizens. The Pentagon's position: how Claude is used lawfully is the military's decision, not Anthropic's.
The Friday deadline looms.
The Safety Pivot: February 24–25
On the same day Amodei sat across from Hegseth — February 24 — Anthropic published RSP 3.0, the third version of its Responsible Scaling Policy.
The flagship pledge was gone.
Since 2023, Anthropic's RSP had contained a categorical commitment: the company would never train an AI model unless it could guarantee in advance that safety measures were adequate. It was the single most concrete safety commitment any major AI lab had made. It was Anthropic's differentiator.
RSP 3.0 replaces that hard limit with a softer dual condition: Anthropic will only consider pausing development if (a) they believe they are the leader of the AI race AND (b) the risks of catastrophe are deemed material. Both conditions must be true simultaneously — a dramatically higher bar than the original categorical rule.
Anthropic's chief science officer Jared Kaplan explained the decision to TIME:
"We felt that it wouldn't actually help anyone for us to stop training AI models. We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead."
The new policy explicitly states: "If one AI developer paused development to implement safety measures while others moved forward training and deploying AI systems without strong mitigations, that could result in a world that is less safe."
In exchange for dropping the hard limit, Anthropic promised more transparency:
Frontier Safety Roadmaps — public documents detailing planned safety measures
Risk Reports every 3–6 months explaining capabilities, threat models, and mitigations
External reviewers with unredacted access to Risk Reports
A commitment to "match or surpass" competitor safety efforts
Chris Painter, director of policy at METR (a nonprofit that evaluates AI models for dangerous capabilities), reviewed an early draft of RSP 3.0. His assessment was blunt: the change shows Anthropic "believes it needs to shift into triage mode with its safety plans, because methods to assess and mitigate risk are not keeping up with the pace of capabilities. This is more evidence that society is not prepared for the potential catastrophic risks posed by AI."
Timeline: The Two Weeks That Changed Everything
| Date | Event |
|---|---|
| Jan 3 | JSOC raids Caracas, captures Maduro. Claude used via Palantir — details remain classified for weeks. |
| Jan 12 | Hegseth releases new DoD AI strategy requiring elimination of company-specific guardrails in all defense contracts. |
| Jan 29 | Reuters reports Pentagon clashing with Anthropic over restrictions on autonomous targeting and domestic surveillance. |
| Feb 13 | WSJ and Axios break the news: Claude was used in the Maduro operation via Palantir. |
| Feb 17 | Semafor reports that an Anthropic employee's inquiry to Palantir about the raid caused a "rupture" in the relationship. |
| Feb 19 | Pentagon CTO Emil Michael publicly tells Anthropic to "cross the Rubicon." Pentagon announces it is "reviewing" the relationship. |
| Feb 22 | NBC News, BBC, Al Jazeera, and Scientific American all run the story. International media frame it as AI ethics vs. national security. |
| Feb 24 | Dario Amodei meets Hegseth at the Pentagon. Friday 5 PM ultimatum issued. Same day: Anthropic publishes RSP 3.0, dropping the flagship safety pledge. |
| Feb 25 | TIME publishes exclusive interview with Jared Kaplan. The Register confirms Defense Production Act threat. PCWorld headline: "Anthropic just wrote itself a safety loophole." Anthropic acquires Vercept.ai for Computer Use. Launches Claude Cowork with 10 enterprise integrations. UiPath stock drops 3.6%. |
| Feb 27 | ⏳ The deadline. |
What This Means
The timing of RSP 3.0 — published the same day as the Pentagon meeting — is either a remarkable coincidence or a signal that Anthropic is preparing to bend. The company insists the policy revision was months in the making and is unrelated to the military dispute. Critics see it as a pre-emptive surrender.
The deeper question is whether any AI company can maintain ethical red lines when facing a government willing to invoke wartime compulsion laws. The Defense Production Act was designed for steel mills and ammunition factories during the Cold War. Its application to AI model access would be unprecedented — and would establish a legal precedent that no AI company's usage policies can override national security demands.
Anthropic currently maintains its two red lines: no autonomous weapons, no domestic surveillance. But with the RSP's hard safety limit already gone, and the Pentagon holding both the carrot ($200 million contract) and the stick (DPA compulsion), the question isn't whether those lines will hold.
It's whether they can.
Friday is two days away.
Yours Truly,
Wes “Claude is about to get drafted” Roth