Pentagon Labels Anthropic AI Technology 'Security Risk'; Anthropic Vows Legal Fight
A significant clash has erupted between the U.S. government and artificial intelligence (AI) company Anthropic after the U.S. Department of Defense officially designated Anthropic and its AI model Claude as a potential supply chain risk to national security. Anthropic CEO Dario Amodei stated that he has no choice but to challenge the Pentagon's decision.
Anthropic Vows Legal Action, Claims Limited Impact
The designation will bar U.S. military contractors from using Anthropic's AI in Pentagon-related work. However, CEO Amodei emphasized that the impact is limited, as it applies only to Pentagon-related contracts, and Anthropic plans to dispute the legal basis of the decision in court. Major technology partners, including Microsoft, Google, and Amazon Web Services, have stated they will continue to offer Anthropic technology to businesses outside of U.S. military applications.
AI Ethics Debate and Surge in Anthropic App Downloads
The dispute began when Anthropic stated that its technology should not be used for large-scale surveillance or fully autonomous weapon systems, a position that displeased the Pentagon. Rival OpenAI had sought to replace Anthropic in U.S. military contracts, but OpenAI CEO Sam Altman described the deal as 'sloppy' and said it is under review. Notably, the controversy has raised Anthropic's visibility: Claude app downloads now exceed 1 million per day, and paid users have doubled since the beginning of the year.
*Source: YouTube: WION (2026-03-07)*