Anthropic faces Pentagon ban after refusing to lift military AI restrictions
The Trump administration banned Anthropic's Claude AI from government use after its CEO refused to remove safeguards preventing autonomous weapons applications.

The Trump administration on Friday designated artificial intelligence company Anthropic as a supply chain risk and ordered government agencies to stop using its Claude chatbot after CEO Dario Amodei refused to remove ethical safeguards that prevent the technology from being used in autonomous weapons and domestic mass surveillance.
The Pentagon has been given six months to phase out its use of Anthropic's AI, which had previously been approved for classified systems through partnerships with defense contractors such as Palantir. Other contractors, including Lockheed Martin, are reportedly removing Anthropic's AI systems in response to the ban.
Anthropic has said it will challenge the Pentagon designation in court once it receives formal notice. Amodei defended the company's stance, arguing that "frontier AI systems are simply not reliable enough to power fully autonomous weapons" and that the company would not "knowingly provide a product that puts America's warfighters and civilians at risk."
The dispute has sparked a consumer backlash against OpenAI, which announced a deal Friday to replace Anthropic in Pentagon systems. Claude became the most downloaded iPhone app over the weekend, while ChatGPT saw a 775% increase in one-star reviews on Saturday. OpenAI CEO Sam Altman acknowledged the company "shouldn't have rushed" the Pentagon announcement and held an all-hands meeting Tuesday to address employee concerns.
Investors in Anthropic are reportedly pushing the company to de-escalate the clash with the Pentagon, as the ban could pose an existential business risk for the fast-growing company. The Financial Times reported that Amodei is back in talks with the Pentagon about a potential AI deal.
Military experts have expressed mixed views on the dispute. While some applaud Anthropic's ethical stance, others question whether AI chatbots are reliable enough for military applications due to their tendency to produce errors known as "hallucinations." Former Navy pilot Missy Cummings argued that large language models are "inherently unreliable and not appropriate in environments that could result in the loss of life."