OpenAI sued by family of Canadian school shooting victim over failure to alert police

The family of a 12-year-old girl critically injured in a mass shooting at a Canadian school has filed a civil lawsuit against OpenAI, alleging the company knew the suspect was planning an attack but did not notify authorities.

Attack details and victim's condition

Maya Gebala was shot in the head and neck during the February 10 rampage in Tumbler Ridge, British Columbia, and remains hospitalized. Eight people, including five children and the suspect's mother, died in one of Canada's deadliest shootings.

Allegations against OpenAI

The lawsuit, brought by Gebala's mother, Cia Edmonds, claims the suspect, 18-year-old Jesse Van Rootselaar, created a ChatGPT account before turning 18, allegedly without proper age verification. The complaint states Van Rootselaar treated the chatbot as a "trusted confidante" and discussed "scenarios involving gun violence" in late spring or early summer 2025.

Twelve OpenAI employees reportedly flagged the conversations as posing "an imminent risk of serious harm to others" and recommended contacting Canadian law enforcement. However, the lawsuit alleges the request was "rebuffed," and the only action taken was banning the account.

Second account and missed warnings

Despite the ban on the first account, Van Rootselaar opened a second ChatGPT account and continued planning violent scenarios, the lawsuit claims. OpenAI has stated it did not alert police because the account did not meet its threshold for a "credible or imminent" threat of physical harm.

The plaintiffs argue the company "had specific knowledge of the shooter's long-range planning" but "took no steps to act upon this knowledge." Gebala, who was shot three times after attempting to barricade a library door, suffered a "catastrophic brain injury," according to the lawsuit.

OpenAI's response and policy changes

In a statement to the BBC, OpenAI called the incident an "unspeakable tragedy" and expressed condolences to the victims and community. The company said it is working with government and law enforcement to implement "meaningful changes" to prevent similar tragedies.

On March 4, OpenAI CEO Sam Altman met virtually with Canada's AI minister, Evan Solomon, and British Columbia Premier David Eby. According to the Wall Street Journal, Altman pledged to strengthen protocols for notifying police about harmful interactions and apologized to the Tumbler Ridge community.

New safeguards announced

In a February 26 open letter to Canadian officials, OpenAI's vice-president of global policy outlined recent changes, including consulting "mental health and behavioral experts" to assess cases and adopting "more flexible" criteria for police referrals. The company stated the suspect's account would have been reported under the new guidelines.

OpenAI also committed to improving detection systems to prevent users from bypassing safeguards and establishing a direct contact with Canadian law enforcement to flag high-risk cases.

"We commit to strengthening our detection systems to better prevent attempts to evade our safeguards and prioritize identifying the highest-risk offenders."

OpenAI statement

Government reaction

Canada's AI minister, Evan Solomon, acknowledged OpenAI's willingness to improve protocols but noted on February 27 that legislators had not yet seen a "detailed plan" for implementing the commitments.
