
Pentagon vs Anthropic: AI, Military Power, and the Fight Over AI Limits

  • Writer: Covertly AI
  • 2 hours ago
  • 3 min read

The rapid rise of artificial intelligence has sparked a major confrontation between the United States government and one of the world’s leading AI companies. Anthropic, the developer of the Claude AI model, is now locked in a legal and political battle with the Pentagon after the Department of Defense labeled the company a “supply chain risk.” The conflict has brought renewed attention to how artificial intelligence should be used in warfare, surveillance, and national security, as well as who ultimately decides the rules governing these powerful technologies.


The dispute intensified after the Pentagon demanded that AI suppliers agree that their technologies could be used for any lawful purpose. Anthropic refused to accept those terms, arguing that certain applications should remain restricted. The company specifically rejected allowing its systems to be used for domestic mass surveillance of Americans or for fully autonomous lethal weapons. Anthropic CEO Dario Amodei stated that removing those safeguards would violate the company’s founding safety principles and could expose the technology to serious misuse. In response, Defense Secretary Pete Hegseth moved to blacklist the company by labeling it a supply chain risk, a designation historically reserved for foreign adversaries such as Chinese technology firms.


Anthropic responded by filing a federal lawsuit against the Department of Defense and other federal agencies. The company argues that the government exceeded its authority and is retaliating against Anthropic for enforcing ethical restrictions on its technology. The lawsuit asks a federal court to overturn the designation and block federal agencies from enforcing it. If the designation remains in place, Anthropic could lose hundreds of millions of dollars in government contracts and may also lose business from contractors that rely on its AI systems for work with federal agencies.



The conflict also reflects a broader shift in the technology industry’s relationship with the military. In 2018, thousands of Google employees protested the company’s participation in Project Maven, a Pentagon initiative that used AI to analyze drone footage. At the time, many technology workers believed companies should avoid developing tools for warfare. Over the past decade that attitude has changed significantly. Major technology companies such as OpenAI, Google, Anthropic, and Elon Musk’s xAI are now competing for lucrative defense contracts as governments race to integrate artificial intelligence into military operations.


Anthropic itself has not rejected military cooperation entirely. The company’s Claude AI system is already used by the United States military for tasks such as intelligence analysis, document processing, and operational planning. Reports suggest the system has also assisted with high-level analysis during military operations. Amodei has repeatedly stated that Anthropic supports national defense and intends to provide technology to democratic governments. However, he maintains that a small number of restrictions are necessary to prevent misuse. According to Amodei, roughly “98 or 99 percent” of the Pentagon’s proposed uses are acceptable, with only a few critical exceptions.


At the center of the dispute is a deeper debate about the future role of artificial intelligence in government. Some policymakers argue that private companies should not impose limits on how military technology is used and believe those decisions must be made by elected officials. Others warn that removing safeguards could enable unprecedented levels of government surveillance or accelerate the development of autonomous weapons before the technology is ready. The situation has already reshaped the defense AI landscape, with the Pentagon striking a new agreement with OpenAI to integrate its technology into military systems while contractors may be forced to replace Anthropic’s Claude model. As artificial intelligence becomes more powerful and deeply embedded in national security infrastructure, the outcome of this dispute could influence how governments and technology companies cooperate and regulate AI for years to come.


Works Cited


Robins-Early, Nick. “Anthropic-Pentagon Battle Shows How Big Tech Has Reversed Course on AI and War.” The Guardian, 13 Mar. 2026, www.theguardian.com/technology/2026/mar/13/anthropic-pentagon-artificial-intelligence.

Klein, Ezra. “Why the Pentagon Wants to Destroy Anthropic.” The New York Times, 6 Mar. 2026, www.nytimes.com/2026/03/06/opinion/ezra-klein-podcast-dean-ball.html.

Dave, Paresh. “Anthropic Sues Department of Defense Over Supply-Chain-Risk Designation.” Wired, 9 Mar. 2026, www.wired.com/story/anthropic-sues-department-of-defense-over-supply-chain-risk-designation/.

Oreskovic, Alexei. “Anthropic CEO Dario Amodei on A.I. Risks: Short, Medium, and Long-Term.” Fortune, 10 July 2023, www.fortune.com/2023/07/10/anthropic-ceo-dario-amodei-ai-risks-short-medium-long-term/.

“Pentagon Opens Door to Exempt Anthropic Use Beyond 6-Month Ramp-Down.” CGTN, 12 Mar. 2026, https://news.cgtn.com/news/2026-03-12/Pentagon-opens-door-to-exempt-Anthropic-use-beyond-6-month-ramp-down-1LrDxAmwk9y/p.html.
