50/FIFTY

Today's stories, rewritten neutrally

AI · Apr 1

Federal judge blocks Pentagon from restricting Anthropic AI services

A California federal judge temporarily prevented the Pentagon from designating Anthropic as a supply chain risk and barring government agencies from using its AI.

Synthesized from 7 sources

A federal judge in California issued a temporary order last Thursday blocking the Pentagon from labeling artificial intelligence company Anthropic as a supply chain security risk, according to court proceedings reported this week.

The ruling prevents the Department of Defense from directing government agencies to cease using Anthropic's AI services while legal challenges proceed. The order is the latest development in a dispute between the Pentagon and the AI company that has stretched over the past month.

Separately, federal judge Paul Friedman in Washington expressed skepticism about new Pentagon press policies during a Monday hearing, describing certain aspects as unusual. Friedman had previously struck down key components of Pentagon media policies on March 20, though he stopped short of ruling on a New York Times motion seeking enforcement of his earlier decision.

The Anthropic case highlights growing tensions between government agencies and AI companies over national security concerns and supply chain oversight. The Pentagon's attempt to restrict the use of Anthropic's services appears to have encountered judicial resistance, at least temporarily.

Both legal proceedings involve federal judicial oversight of Pentagon policies, though they address different operational areas: one focuses on AI procurement and security designations, the other on media access and press relations.

Sources (7)

