The community of Tumbler Ridge, B.C. is living through an unimaginable grief. As we mourn with them, we are right to ask what we can learn from this tragedy.
Much of the public conversation has focused on OpenAI: Why did the company not warn police when its systems flagged Jesse Van Rootselaar’s account for violent content months before the shooting? B.C. Premier David Eby called the company’s silence “profoundly disturbing” for the victims’ families. Federal AI Minister Evan Solomon summoned OpenAI executives to Ottawa and expressed “disappointment” when they offered no “substantial new safety measures.”
But “why did they not report” is the wrong question, directed at the wrong actor. OpenAI behaved as a private company can be expected to behave. It followed its legal obligations and policies, weighed the risks and made a call. We cannot blame a company for acting like a company in the absence of law.
The question that matters is one for government: “How will you regulate AI and mental health?” A duty to report is one consideration. But it is complex, and only the tip of the regulatory iceberg.
Chatbots are actively fostering a new kind of relationship, and Canadians are increasingly turning to them to fill unmet health care and social support needs. We need to decide how we will regulate that relationship.
Canada’s mental health systems have profound gaps; report after report confirms that people are not getting the support they need. In that void, many are turning to AI chatbots. Some seek out purpose-built mental health tools, but many reach for what is free, familiar and at hand: general-purpose tools like ChatGPT.
A Harvard Business Review analysis found “therapy and companionship” to be the number one use case for generative AI in 2025. One commentator has observed that, by volume, ChatGPT may now be the single largest source of mental health support in the world.
This reality is not lost on AI developers. Research on purpose-built mental health chatbots shows that users can form a “therapeutic alliance,” a bond of trust, empathy and emotional disclosure that mirrors what develops in human therapy.
ChatGPT was not built for therapy, but in May 2025, OpenAI acknowledged that people were using it for “deeply personal advice,” a use case requiring “great care.” The company conceded that one model update had become too “sycophantic,” “validating doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions.” In October 2025, OpenAI disclosed that staggering numbers of active users express suicidal intent or show signs of mental health emergencies each week.
This reality gives context to the company’s chosen disclosure threshold, “credible and imminent risk of serious physical harm to others,” which has been compared to the reporting principles that apply to health professionals.
Serious threats to public safety can outweigh confidentiality in health care. Yet it would be a mistake to focus solely on a duty to report. In human-provider relationships, reporting decisions are nuanced, part of a broader framework of professional obligations: duties to protect privacy, to act in the patient’s best interest, to meet recognized standards of care and to obtain informed consent. These duties help make emotional disclosure and vulnerability safe.
None of these apply to a chatbot, however empathetic it might sound. Chatbots create a relational context that mirrors therapy and companionship, but we have none of the accompanying legal architecture.
Calls to fold chatbots into online harms legislation may be sensible, if the rules are carefully crafted, especially given how difficult AI-specific legislation has proven in Canada. But much more is needed.
A private, intimate exchange in which a person discloses fear, anger or violent ideation is not like other online conduct. Governing it requires a different framework, one that treats AI-mediated emotional support not as a species of online content, but as a kind of relationship.
Chatbots are not therapists and need not be regulated quite like therapists. But nor should they be governed only by company policy.
Minister Solomon has called for “trust first.” Yet in human therapeutic relationships, a web of legal safeguards helps make providers trustworthy. We cannot simply move our human vulnerability into an unregulated space and expect trust to follow.
Canada has a relatively clean legislative slate, and Minister Solomon’s statement that “all options are on the table” for AI chatbots is a welcome one. With a new national AI strategy forthcoming, the question is whether this government will use that opportunity to ask what role we want AI to play in mental health care, or whether it will settle for summoning tech executives to Ottawa and expressing disappointment.
Sophie Nunnelley is a law professor.