Court Upholds Pentagon’s Blacklisting of Anthropic in Major AI National Security Dispute
An appeals court ruling on April 8, 2026, keeps Defense Secretary Pete Hegseth’s supply-chain risk designation in effect while Anthropic’s lawsuit proceeds.

WASHINGTON – A federal appeals court has sided with the Pentagon for now, allowing the blacklisting of AI company Anthropic to remain in place while legal challenges continue.
The U.S. Court of Appeals for the D.C. Circuit on April 8, 2026, denied Anthropic’s emergency request to pause the designation issued by Defense Secretary Pete Hegseth in early March. The ruling means Anthropic remains restricted from many federal contracts and that government agencies face limits on using its Claude AI model.
The dispute began in late 2025 when Anthropic signed a roughly $200 million contract to deploy Claude on classified Pentagon networks. Tensions rose when the Defense Department demanded the removal of Anthropic’s built-in safeguards that prohibit the model from being used for mass surveillance or fully autonomous lethal weapons.
Anthropic refused to drop those restrictions, citing its “Constitutional AI” principles and ethical commitments. In response, Hegseth formally designated Anthropic a national security supply-chain risk — the first time a major U.S. AI company has faced such a public label.
U.S. District Judge Rita Lin in California had temporarily blocked the blacklisting on March 26, calling it “classic illegal First Amendment retaliation” and describing the government’s position as “Orwellian.” The Trump administration quickly appealed, and the D.C. Circuit’s decision now keeps the Pentagon’s restrictions in effect during the ongoing litigation.
Anthropic argues the move punishes the company for its public stance on responsible AI development. The Justice Department counters that the decision rested on legitimate national security concerns and on standard contract terms requiring that the technology be available for “any lawful use.”
The case is being closely watched across the defense and tech industries. It tests how much control the government can exert over private AI companies and the balance between national security needs and corporate speech rights.
Anthropic says it will continue fighting the designation in court. Legal experts expect the case to move quickly given its importance, with potential further appeals likely to reach higher courts in the coming months.
This remains an active legal battle with major implications for how the U.S. government works with frontier AI developers.
