AI safety researcher quits Anthropic with dire warning on global risks

Resignation letter raises alarm over AI and bioweapon threats

Anthropic's former AI safety lead, Mrinank Sharma, has left the company with a stark message: "The world is in peril." In a resignation letter posted on X, Sharma cited concerns over artificial intelligence, bioweapons, and a cascade of global crises as his reasons for stepping down.

Shift to poetry and anonymity

Sharma, who led a team focused on AI safeguards, announced he would abandon the tech industry to pursue a poetry degree and writing. "I'll be moving back to the UK and letting myself become invisible for a period of time," he wrote in a follow-up reply. His departure follows a similar exit from OpenAI, where researcher Zoe Hitzig resigned over ethical concerns tied to the company's decision to introduce advertisements in its chatbot.

Anthropic's safety-first mission under scrutiny

Founded in 2021 by former OpenAI employees, Anthropic has positioned itself as a leader in responsible AI development, emphasizing risk mitigation. Sharma's team worked on critical issues, including why AI systems "suck up" to users, countering AI-driven bioterrorism threats, and exploring how AI assistants might erode human qualities. Despite these efforts, Sharma suggested the company faced relentless pressure to compromise its values.

"I have repeatedly seen how hard it is to truly let our values govern our actions. Anthropic constantly faces pressures to set aside what matters most."

Mrinank Sharma, former AI safety lead at Anthropic

Legal and ethical challenges

Anthropic's commitment to safety has not shielded it from controversy. In 2025, the company settled a $1.5 billion class-action lawsuit filed by authors who alleged their work was used without authorization to train its AI models. The firm has also drawn criticism for aggressive commercial tactics, including a recent ad campaign targeting OpenAI's decision to integrate advertisements into ChatGPT, despite OpenAI CEO Sam Altman's past opposition to the practice.

OpenAI researcher echoes concerns

Zoe Hitzig, a former OpenAI researcher, voiced similar unease in a New York Times op-ed, warning that ads in chatbots could exploit users' personal disclosures about health, relationships, and beliefs. "Advertising built on that archive creates a potential for manipulating users in ways we don't have the tools to understand, let alone prevent," she wrote. Hitzig suggested OpenAI's shift toward engagement-driven strategies might already be undermining its founding principles.

Industry-wide talent exodus

Sharma and Hitzig join a growing number of high-profile departures from leading AI firms, where employees often leave with substantial equity and financial benefits intact. Their resignations highlight mounting tensions between commercial ambitions and ethical safeguards in the rapidly evolving AI sector. Anthropic and OpenAI have yet to publicly respond to the latest criticisms.
