Canada Orders OpenAI Safety Review After Tumbler Ridge ChatGPT Links
- Covertly AI

Canada’s federal government is pressing OpenAI for answers and concrete safety changes after the Feb. 10 mass shooting in Tumbler Ridge, B.C., a tragedy in which police say eight people were killed, including six at a secondary school. The case has intensified questions about what responsibility AI companies bear when users post, or appear to post, content signaling real-world violence, and how quickly platforms should alert law enforcement. At the centre of the scrutiny is ChatGPT’s connection to the suspected shooter, Jesse Van Rootselaar, whose account OpenAI says it banned in June, months before the attack, after internal reviews flagged troubling activity (Djuric). Yet despite the posts about gun violence and the company’s own enforcement action, the RCMP was not notified until after the killings; OpenAI said at the time that the activity did not meet its threshold for contacting police (von Stackelberg; Trinh).
In response, Artificial Intelligence Minister Evan Solomon has escalated demands for transparency and accountability. Speaking in Halifax on March 3, Solomon said OpenAI’s explanations had been insufficient and “lacked detail,” and he vowed to press CEO Sam Altman for specific protocol changes “to protect Canadians,” arguing AI chatbots pose new public-safety challenges (Trinh). That pressure carried into a virtual meeting the next day, where Solomon spoke with Altman for about half an hour and later said the CEO expressed “horror and responsibility in general for not flagging,” calling the situation “emotional territory” given the harm to a small community (von Stackelberg). Politico similarly reported that Altman faced pointed questions about why OpenAI did not alert police earlier and why it failed to stop a banned user from bypassing enforcement, and that Altman committed to providing a “full report” on how OpenAI’s systems identify dangerous users and prevent ban evasion (Djuric).
Solomon says OpenAI agreed to changes aimed at making Canadian reporting pathways and risk assessments more rigorous. A central demand is a direct relationship with Canadian law enforcement: Solomon said he asked OpenAI to report threats directly to the RCMP rather than relying only on U.S. channels like the FBI, and he said the company promised a direct line of communication with the Mounties (von Stackelberg). OpenAI also agreed to establish a Canadian point of contact so police can exchange information quickly about dangerous users (Djuric). Beyond reporting, Solomon said Altman agreed to include Canadian experts in mental health and law within OpenAI’s safety office, where the company evaluates threats and decides whether to inform police, so assessments reflect Canadian context and community realities (von Stackelberg; Djuric). OpenAI echoed that direction in a statement, saying it is strengthening law-enforcement referral criteria and improving how systems account for “country and community context” (von Stackelberg).

Canada is also moving from promises to verification. Solomon said he has enlisted the Canadian AI Safety Institute, a federal body within his department, to conduct a full, detailed assessment and testing of OpenAI’s updated safety protocols to ensure the technology does not pose a public danger (von Stackelberg; Djuric). He also directed OpenAI to re-examine safety alerts from the past year under the stricter criteria, a retrospective review intended to reveal whether other high-risk cases were missed and should have been reported to Canada’s national police (Djuric). OpenAI has acknowledged that if these policy changes had existed earlier, it would have flagged the suspected shooter’s account to Canadian police (Djuric). Complicating matters further, OpenAI has said it discovered the suspected shooter created a second ChatGPT account after the first was banned, making ban evasion a key focus of the government’s questions and requested reporting (von Stackelberg; Trinh).
Provincially, B.C. Premier David Eby has demanded an apology to the people of Tumbler Ridge and has pushed Ottawa to establish national minimum standards for when platforms must report threats of violence (von Stackelberg; Trinh). Eby has argued AI companies should face “serious and meaningful consequences” if they fail to report in time to prevent violent incidents, and he expects families to pursue legal action, while warning that litigation against a well-capitalized tech giant could be an uneven fight without government support (Trinh). With a coroner’s inquest announced and political pressure rising, Solomon has kept the regulatory door open, reiterating that “all options are on the table,” including legislation, as the government weighs how Canadians can benefit from AI while being protected when risks emerge (von Stackelberg; Trinh; Djuric).
Works Cited
Djuric, Mickey. “Canada Orders OpenAI Safety Review after Grilling Sam Altman over Security Lapses.” Politico, 5 Mar. 2026, www.politico.com/news/2026/03/05/canada-openai-safety-review-altman-00814165.
Trinh, Judy. “AI Minister Pledges More Information from OpenAI CEO after Tumbler Ridge Shooting.” CTV News, 3 Mar. 2026, www.ctvnews.ca/politics/article/ai-minister-pledges-more-information-from-openai-ceo-after-tumbler-ridge-shooting/.
von Stackelberg, Marina. “OpenAI CEO Expressed ‘Horror and Responsibility’ over ChatGPT’s Ties to Tumbler Ridge, AI Minister Says.” CBC News, 4 Mar. 2026, www.cbc.ca/news/politics/evan-solomon-open-ai-meeting-ceo-sam-altman-9.7114767.