03.03.2026 21:51 | Author: Viacheslav Vasipenok

Dario Amodei’s First Interview After Pentagon Blacklist: “We Are Patriots”


In his first public comments since the Pentagon effectively blacklisted Anthropic — designating the company a “supply chain risk to national security” — CEO Dario Amodei delivered a concise yet pointed response to a direct question: What would you say to the President right now?

“We are patriots. Everything we have done was for this country.”

The statement, made during an exclusive CBS News interview aired on March 1, 2026, came amid escalating tensions between Anthropic and the U.S. government.

The clash centers on Anthropic’s refusal to remove safeguards preventing Claude from being used in mass domestic surveillance or fully autonomous weapons systems — conditions the Pentagon had demanded as part of a potential contract renewal.


Background: A Long-Standing Partnership Turns Sour

Anthropic had been a key partner in U.S. national security efforts. Claude was among the first frontier models deployed on classified military networks, supporting tasks such as intelligence analysis, operational planning, cyber defense, and modeling.

The company positioned itself as a forward-leaning ally in defending democratic values against autocratic adversaries.

However, in late February 2026, negotiations broke down. The Department of Defense insisted on unrestricted “any lawful use” access, including scenarios Anthropic deemed incompatible with democratic principles and current AI reliability limits.

When Anthropic held firm, Defense Secretary Pete Hegseth labeled the company a supply chain risk — a designation typically reserved for foreign adversaries — and President Trump ordered all federal agencies to cease using Anthropic technology, with a six-month phase-out period.

Amodei described the Pentagon’s actions as “retaliatory and punitive,” arguing that the threats — including potential invocation of the Defense Production Act — were contradictory and legally questionable. Anthropic vowed to challenge the designation in court.


The Irony: Claude Used in Iran Strikes Hours After the Ban

The most striking twist emerged shortly after the blacklist announcement. According to The Wall Street Journal, U.S. Central Command employed Claude during Operation Epic Fury — the ongoing U.S. and Israeli strikes against Iran that began in late February 2026. The model reportedly assisted with intelligence assessments, target identification, and combat scenario simulations.

This usage occurred mere hours after the Trump administration’s directive to halt Anthropic products.

While not a formal violation — agencies have six months to transition — the timing raised questions about the sincerity of the security-risk label and highlighted the military’s continued dependence on Claude despite the public rift.

Amodei reiterated in the interview that Anthropic remains committed to supporting U.S. national security but cannot “in good conscience” enable unrestricted use for mass surveillance or autonomous lethal systems without human oversight.


Broader Implications: A Battle Over Control of AI

The dispute is more than a contract disagreement — it reflects a deeper struggle over who controls powerful AI systems and under what conditions they can be deployed. Anthropic’s stance has positioned the company as a defender of ethical boundaries, even at significant financial cost (including hundreds of millions in potential revenue). Meanwhile, competitors like OpenAI quickly secured Pentagon agreements, accepting broader “lawful use” terms.

Amodei emphasized that refusing government demands — when those demands cross fundamental lines — is “the most American thing” possible, invoking First Amendment rights and democratic values. He framed Anthropic’s actions as consistent with defending the country from autocratic threats while upholding constitutional principles.


Conclusion

Dario Amodei’s measured yet resolute interview underscores Anthropic’s belief that true patriotism includes moral boundaries, not blind compliance. As the U.S. military continues to leverage Claude in active operations despite the formal ban, the episode exposes the tension between national security imperatives and the ethical governance of increasingly capable AI.

The outcome of Anthropic’s legal challenge — and the Pentagon’s reliance on blacklisted technology — may shape how frontier AI is regulated and deployed in defense contexts for years to come.

