
OpenAI Pentagon Deal Sparks Backlash Over AI Surveillance

  • Writer: Covertly AI
  • 3 days ago
  • 3 min read

OpenAI is attempting to steady itself after a swiftly arranged deal with the U.S. Department of War (DoW) sparked intense public backlash, internal dissent, and renewed debate over artificial intelligence in military operations. The controversy began when OpenAI stepped in to secure a Pentagon contract almost immediately after its rival, Anthropic, was dropped. Anthropic had refused to loosen safeguards preventing its AI model, Claude, from being used for mass domestic surveillance or fully autonomous weapons, prompting the Trump administration to phase out its technology across federal agencies (Yeo; Vallance and Cress; Milmo and Booth).

When OpenAI first announced the agreement, it claimed the contract contained “more guardrails than any previous agreement for classified AI deployments.” However, critics quickly pointed out that the deal appeared to permit surveillance and AI-directed weapons systems as long as such uses were legal. The speed of the announcement intensified concerns. OpenAI CEO Sam Altman later admitted the company rushed the rollout, calling it “opportunistic and sloppy” and acknowledging that the issues were “super complex” and required clearer communication (Yeo; Vallance and Cress; Milmo and Booth).

In response, OpenAI amended the agreement, adding language that explicitly bars its systems from being “intentionally used for domestic surveillance of U.S. persons and nationals.” The revisions also state that intelligence agencies such as the National Security Agency (NSA) cannot use OpenAI’s technology without additional contractual modifications (Yeo; Vallance and Cress; Milmo and Booth). Despite these changes, critics argue that the new wording still hinges on legality rather than ethical limits. Because the restriction depends on “applicable laws,” surveillance could become permissible if laws change. Observers also questioned whether terms like “intentionally” and “deliberate” leave loopholes for autonomous systems that might gather data incidentally (Yeo).

The backlash was swift and measurable. Data from Sensor Tower showed ChatGPT uninstall rates surged to between roughly 200% and nearly 295% above normal levels following the announcement (Yeo; Vallance and Cress). At the same time, Anthropic’s Claude climbed to the top of Apple’s U.S. App Store rankings, overtaking ChatGPT (Yeo; Milmo and Booth). Online campaigns urged users to “delete ChatGPT,” with critics accusing OpenAI of “training a war machine” (Milmo and Booth). Comparisons to the 2013 Snowden revelations about NSA mass surveillance resurfaced, reinforcing fears about government overreach (Yeo; Milmo and Booth).

The controversy has also exposed internal divisions within the tech industry. Nearly 900 employees across OpenAI and Google signed an open letter calling on their companies to refuse government demands that would enable domestic surveillance or autonomous killing without human oversight (Milmo and Booth). While OpenAI has stated that one of its red lines is prohibiting the use of its technology to direct autonomous weapons systems, skeptics, including former OpenAI policy research head Miles Brundage, question whether the company compromised its principles to secure the deal (Milmo and Booth).

Beyond corporate politics, the episode underscores broader questions about AI’s expanding role in warfare. Militaries already use AI systems to streamline logistics and to process massive volumes of intelligence data quickly. Companies such as Palantir provide AI-driven platforms that integrate satellite imagery and intelligence reports, enabling faster and potentially more lethal decision-making when deemed appropriate (Vallance and Cress). Military officials stress that humans remain “in the loop” and that AI systems do not independently make battlefield decisions. Still, experts warn that removing more safety-conscious actors from defense partnerships could weaken ethical guardrails (Vallance and Cress).

Altman has framed OpenAI’s approach as deference to democratic governance, arguing that governments, not private companies, should make key societal decisions. He has also stated that he would refuse to comply with unconstitutional orders, even at personal cost (Yeo; Milmo and Booth). Yet for many critics, relying solely on legality rather than ethical standards leaves unresolved concerns about accountability and public trust.

As additional U.S. cabinet-level agencies move to phase out Anthropic’s tools and OpenAI recalibrates its military partnership, the episode marks a pivotal moment in the governance of artificial intelligence. The debate highlights the fragile balance between national security, corporate responsibility, democratic oversight, and the rapidly growing power of AI technologies (Vallance and Cress; Milmo and Booth).

Works Cited

Yeo, Amanda. “OpenAI Updates Department of War Deal after Backlash.” Mashable, 3 Mar. 2026, www.mashable.com/article/openai-dept-of-war-deal-sam-altman-update-mass-surveillance.

Vallance, Chris, and Laura Cress. “OpenAI Changes Deal with US Military after Backlash.” BBC News, 3 Mar. 2026, www.bbc.com/news/articles/c3rz1nd0egro.

Milmo, Dan, and Robert Booth. “OpenAI Amends Pentagon Deal as Sam Altman Admits It Looks ‘Sloppy.’” The Guardian, 3 Mar. 2026, www.theguardian.com/technology/2026/mar/03/openai-pentagon-ceo-sam-altman-chatgpt.

“Wall Street’s AI Bubble: Sam Altman.” Fortune, 19 Aug. 2025, fortune.com/2025/08/19/wall-street-ai-bubble-sam-altman/.






