The most surreal tech-military story of 2026: the AI company that said no to the Pentagon ended up powering its most aggressive war in decades anyway.

On February 27, 2026, Donald Trump exploded on Truth Social.

He called Anthropic — the company behind Claude AI — "an out-of-control, Radical Left AI company run by people who have no idea what the real World is all about." He told every federal agency to immediately stop using Claude. Defense Secretary Pete Hegseth declared Anthropic a "Supply-Chain Risk to National Security" — a label historically reserved for Chinese companies like Huawei and never before applied to an American firm.

Less than 24 hours later, the US military used Claude to identify over 1,000 targets in Iran.

That's not irony. That's the story of how AI warfare crossed a threshold nobody was ready for.


What Actually Happened in Iran

On February 28, 2026, the United States and Israel launched a coordinated assault on Iran codenamed Operation Roaring Lion (Israel) and Operation Epic Fury (US). The strikes hit Tehran, Isfahan, Qom, Karaj, and Kermanshah. Iran's Supreme Leader, Ayatollah Ali Khamenei — 86 years old, the most powerful man in Iran for over three decades — was killed in the first wave.

It was the largest and most technologically advanced military operation the US had conducted in years. And AI was at the center of it.

According to reporting from the Washington Post, Wall Street Journal, and Axios, US Central Command used Anthropic's Claude — embedded in Palantir's Maven Smart System on classified military networks — to generate roughly 1,000 prioritized targets on day one alone. The AI processed satellite imagery, signals intelligence, and surveillance feeds in real time, producing target lists complete with GPS coordinates, weapons recommendations, and automated legal justifications for strikes.
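
To make that concrete, here is a purely illustrative sketch of what one record in such a prioritized target list might contain, based only on the fields the reporting describes: coordinates, a weapons recommendation, an automated legal justification, and a priority ranking. The schema and field names are invented for illustration; nothing about Maven's or Claude's actual data structures has been disclosed.

    from dataclasses import dataclass

    @dataclass
    class TargetRecord:
        # Hypothetical schema, illustrative only; not Maven's or Claude's actual format.
        target_id: str              # internal identifier
        lat: float                  # GPS latitude of the proposed aimpoint
        lon: float                  # GPS longitude of the proposed aimpoint
        priority: int               # model-assigned rank in the day's list
        weapon_recommendation: str  # suggested munition type
        legal_justification: str    # auto-generated rationale, pending human review
        sources: list[str]          # contributing feeds: imagery, SIGINT, surveillance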

To put that in perspective: 1,000 targets. In 24 hours. That's not something human analysts could do alone.

This was, by every account, the first large-scale deployment of generative AI in active US warfighting operations.


The Fight That Started It All

To understand how we got here, you have to understand what Anthropic was fighting against.

The Pentagon had a $200 million contract with Anthropic, signed in July 2025 — making Claude the only AI foundation model approved for use in certain classified Defense environments. The military used it for intelligence analysis, logistics, document synthesis, and operational planning.

Then, in February 2026, the Pentagon pushed for more. Officials demanded that Anthropic remove all restrictions and allow Claude to be used for "all lawful purposes." No exceptions. No guardrails.

Anthropic said no — specifically on two things:

First: Mass domestic surveillance of American citizens. Anthropic argued that today's AI models are not reliable enough for the task, and that enabling it would threaten American civil liberties.

Second: Fully autonomous weapons — systems where AI makes the final lethal decision without any human in the loop.

CEO Dario Amodei was direct about it. "We cannot in good conscience accede to their request," he said. "Some uses are simply outside the bounds of what today's technology can safely and reliably do."

He also added something that got less attention at the time: "We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner."

That line would matter later.

The Pentagon pushed back hard. Emil Michael, the Pentagon's chief technology officer, told CBS News: "The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk."

Pentagon officials also accused Anthropic of having a "God-complex" and being "sanctimonious."

And then Trump posted.


Trump vs. Anthropic: The Full Breakdown

Trump's Truth Social post on February 27 was vintage all-caps fury:

"THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS!"

He called Anthropic's move a "DISASTROUS MISTAKE" and said the company was "putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY."

He threatened "major civil and criminal consequences" if Anthropic wasn't cooperative during the phase-out period.

Hegseth followed within the hour, formally designating Anthropic a supply chain risk — meaning any defense contractor or supplier that does business with the US military was now barred from working with Anthropic. That's an extraordinary economic weapon. It had never been used against an American company before.

Within hours of the ban, OpenAI's Sam Altman announced that his company had reached a deal with the Pentagon. "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons," Altman wrote — essentially the same position Anthropic held. But OpenAI signed anyway.

Critics noticed.


The Twist Nobody Saw Coming

Here's where it gets genuinely strange.

Despite the ban, despite the "supply chain risk" designation, despite the threats of criminal consequences — the US military used Claude in the Iran strikes anyway.

Two sources familiar with the matter confirmed this to CBS News. The Wall Street Journal reported it first. Axios confirmed it separately.

Claude was so deeply embedded in the Pentagon's classified systems that it would take months to untangle. One military source told the Washington Post: "We're not going to let [Amodei's] decision-making cost a single American life."

The Pentagon gave Anthropic a six-month phase-out window — which effectively meant Claude continued operating in Iran in the meantime.

Amodei called the actions "retaliatory and punitive" and said Anthropic would challenge the supply chain designation in court. He held the door open for a future agreement, telling investors at a Morgan Stanley conference that the two sides "have much more in common than we have differences."

But he did not publicly oppose the use of Claude in the Iran war.

His stated red lines — no domestic surveillance, no autonomous weapons — were never about what Claude was actually doing in Iran. Intelligence assessment, target identification, battle simulation: none of that was covered by his restrictions.


Israel's AI Was There Too

The US wasn't alone in using AI for this war.

Israel deployed its own AI targeting system called Habsora — a system the IDF has been using since at least 2021. It automatically generates target lists from intelligence data at speeds no human analyst can match.

Critics raised serious alarms almost immediately.

Trita Parsi of the Quincy Institute for Responsible Statecraft pointed to a strike on a park in Tehran called "Police Park" — a public green space with no military connection. His reading: Israel's AI identified it as a target because it was scanning for all government-related locations, and no human bothered to verify before the strike was ordered.

"Similarities between Israel's bombing of Gaza and Tehran are growing stronger," Parsi wrote. "In both cases, it appears Israel is using AI without any human oversight."

The comparison to Gaza is deliberate. Israel's AI targeting systems were tested and refined during that conflict — and now they're being deployed against Iran at scale.


What the Numbers Say

By day seven of the conflict:

  • More than 1,230 people killed in Iran, including an estimated 175 children
  • Israel claims 2,500 strikes and 80% of Iran's air defense systems destroyed
  • Iran launched over 500 ballistic missiles and 2,000 drones in retaliation
  • The first 100 hours of Operation Epic Fury cost an estimated $3.7 billion
  • Over 11,000 flights across 10 countries in the region were canceled

The scale of targeting — and the speed at which it happened — would not have been possible without AI.


The Bigger Question Everyone Is Avoiding

The debate between Anthropic and the Pentagon was framed as a fight over two specific things: domestic surveillance and autonomous weapons. Anthropic drew those lines publicly, loudly, and with real conviction.

But Claude was used to select targets that resulted in civilian deaths. It was used to help plan strikes that killed a head of state. It generated automated legal justifications for those strikes.

None of that violated Anthropic's stated red lines.

Peter Asaro, an expert on AI and robotics, told the Japan Times that the very short planning phase and massive number of targets pointed clearly to AI involvement. He raised the broader concern that human control of war machinery could be slipping — not because AI is making final decisions, but because the speed and volume of AI-generated targeting makes meaningful human review nearly impossible.

Future of Life Institute president Max Tegmark put it bluntly: "Fully autonomous weapons systems and Orwellian AI-enabled domestic mass surveillance are affronts to our dignity and liberty. Current AI systems are inherently unpredictable and fundamentally brittle, unsuited for very high stakes applications."

The problem isn't just autonomous weapons. It's that when AI is processing 1,000 targets in 24 hours, the human "in the loop" is less a safeguard and more a formality.
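
A back-of-envelope calculation makes the point. The 1,000 targets and the 24-hour window come from the reporting; the size of the review cell is an assumption invented for illustration:

    # Back-of-envelope: how much human attention can each AI-generated target get?
    TARGETS = 1_000       # targets generated on day one (reported)
    WINDOW_HOURS = 24     # time window (reported)
    REVIEWERS = 10        # assumed review cell size (not reported; pure illustration)

    total_review_seconds = WINDOW_HOURS * 3600 * REVIEWERS
    minutes_per_target = total_review_seconds / TARGETS / 60
    print(f"{minutes_per_target:.1f} minutes of review per target")  # -> 14.4

Even granting ten analysts working around the clock with no breaks and no other duties, each strike decision (coordinates, munition, legal rationale) gets under fifteen minutes of human attention. Shrink the cell or add any friction at all, and the number collapses toward zero.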


What Happened to Public Trust in AI

The drama had a strange side effect.

After Trump's ban and OpenAI's quick pivot to take Anthropic's contracts, the public responded in a way nobody predicted. Claude shot from rank 42 to the number one spot on the Apple App Store over the weekend. ChatGPT uninstalls spiked 295%.

People weren't just downloading Claude because it was good. They were downloading it because Anthropic had, in their view, said no to something the public didn't want done.

Whether that perception matches reality is a harder question.


Where Things Stand Right Now

As of March 7, 2026 — day eight of the war:

  • The US and Israel are still striking Iran
  • Iran's retaliatory attacks have dropped by 80–90% according to CENTCOM
  • Anthropic has officially been notified of its supply chain risk designation and says it will sue
  • Dario Amodei is reportedly back in talks with the Pentagon
  • OpenAI now holds the Pentagon's AI contract in Anthropic's place
  • No international agreement on AI oversight in warfare exists — and the Geneva discussions this week are unlikely to produce one fast enough to matter

The war in Iran is not just a military story. It's the first real test of what AI-assisted warfare looks like at scale — and the answer, so far, is: faster, larger, and with less human review than anyone is comfortable admitting.


Sources: Washington Post, CBS News, Wall Street Journal, Axios, Al Jazeera, CNN, Council on Foreign Relations, Nature, Common Dreams, Times of Israel, CNBC, Japan Times, NPR