OpenAI revises Pentagon deal amid surveillance and military AI concerns

OpenAI announced revisions to its classified military contract with the U.S. government on Monday, following criticism over perceived loopholes in safeguards against domestic surveillance and autonomous weapons. The move comes after a public dispute between rival AI firm Anthropic and the Department of Defense.

Key changes to the agreement

OpenAI CEO Sam Altman acknowledged the initial rollout was rushed, calling it "opportunistic and sloppy." The updated terms now explicitly prohibit the use of OpenAI systems for spying on U.S. citizens or nationals. Additionally, intelligence agencies such as the National Security Agency will require a separate contract modification to access the technology.

"The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy."

Sam Altman, OpenAI CEO

Public reaction and market shifts

OpenAI faced immediate backlash after revealing its Pentagon partnership, with mobile app uninstalls surging 295% day-over-day on Saturday, far above the typical 9% rate. Meanwhile, Anthropic's Claude app climbed to the top of Apple's App Store rankings, where it remained as of Tuesday.

Anthropic had previously been blacklisted by the Trump administration for refusing to drop its corporate policy against fully autonomous weapons. Despite the ban, reports emerged shortly afterwards that Claude had been used in the U.S. and Israeli conflict with Iran.

AI's role in modern warfare

Military applications of AI range from logistics optimization to rapid data analysis. The U.S., NATO, and Ukraine rely on Palantir, a data analytics firm that integrates AI tools such as Claude to process satellite imagery, intelligence reports, and other military data. The UK Ministry of Defence recently signed a £240 million contract with Palantir for its AI-powered defence platform, Maven.

Louis Mosley, head of Palantir's UK operations, described Maven's purpose as enabling "faster, more efficient, and ultimately more lethal decisions where that's appropriate." However, concerns persist about AI's reliability, including its tendency to "hallucinate" or generate false information.

"We're always introducing a human in the loop. It would never be the case that an AI would make a decision for us."

Lieutenant Colonel Amanda Gustave, NATO Task Force Maven

Safety debates and industry divides

Unlike Anthropic, Palantir does not advocate for a blanket ban on autonomous weapons but insists on human oversight. Oxford University's Professor Mariarosaria Taddeo warned that Anthropic's exit from Pentagon contracts removed "the most safety-conscious actor" from the discussion, calling it "a real problem."

The Pentagon declined to comment on its dealings with Anthropic.

Broader context

This story is part of the BBC's AI Unpacked week, exploring the implications of artificial intelligence across sectors. For deeper analysis, visit the AI Unpacked hub or subscribe to the Tech Decoded newsletter for global tech trends.
