Trump Orders Pentagon to Phase Out Anthropic Over ‘Woke’ Policies


President Donald Trump has directed the Department of Defense to cease using technology from AI company Anthropic within six months, escalating tensions over the use of artificial intelligence in national security. The decision, announced Friday via Truth Social, stems from Trump’s accusation that Anthropic is run by “Leftwing nut jobs” attempting to impose restrictions on military operations.

The core dispute centers on Anthropic’s refusal to grant the Pentagon unrestricted access to its AI model, Claude. Anthropic CEO Dario Amodei has publicly stated the company will not enable mass surveillance of American citizens or the development of autonomous weapons systems, a stance rooted in the AI safety concerns that led its founders to leave OpenAI. The government reportedly agreed to these terms initially, but the contract language fell short of guaranteeing those limits to Anthropic’s satisfaction.

Why This Matters: This confrontation highlights a growing conflict between private AI developers prioritizing ethical safeguards and a government seeking uninhibited technological advantage. The Pentagon’s demand for complete control over AI tools clashes with Anthropic’s cautious approach, reflecting a wider debate over the role of AI in warfare and domestic security. This disagreement is likely to intensify as AI capabilities grow, forcing hard choices about transparency, accountability, and the limits of military applications.

Anthropic’s willingness to restrict use of its technology is rare in the industry. Competitors such as OpenAI and Elon Musk’s xAI, maker of Grok, have been far more compliant with government demands. CIA officials reportedly view Grok as inferior to Anthropic’s model, but the administration may still pursue alternative partnerships to avoid Anthropic’s restrictions. The government has not ruled out invoking the Defense Production Act to compel Anthropic’s compliance.

Industry Backlash: Support for Anthropic quickly emerged from within the tech world. OpenAI CEO Sam Altman publicly voiced his backing, while dozens of employees at Google and OpenAI signed letters endorsing Amodei’s stance. Even though Anthropic recently softened its safety policies to remain competitive, it maintains a stronger ethical foundation than many rivals.

“Some uses [of AI] are simply outside the bounds of what today’s technology can safely and reliably do,” Anthropic CEO Dario Amodei stated in a recent blog post.

The situation remains fluid. It is unclear whether Trump’s directive will lead to renegotiation or an outright severing of ties. The government’s next move could reshape the landscape of AI development, signaling whether national security demands will override ethical considerations in the rapidly evolving tech sector.

Ultimately, this conflict underscores a fundamental tension: Can private companies dictate how governments wield powerful new technologies, or will military imperatives take precedence? The outcome will likely set a precedent for future AI partnerships between Silicon Valley and Washington.