No Threshold to Call the Police
May 2, 2026 · uneasy.in/e6c33f4
Seven families filed lawsuits against OpenAI in San Francisco last Wednesday, alleging that ChatGPT and its CEO bear direct responsibility for the February shooting in Tumbler Ridge, British Columbia, which killed eight people, six of them children. The complaints argue something narrower and stranger than the headlines suggest: that OpenAI's own safety staff, in June 2025, flagged the shooter's account for "gun violence activity and planning", urged senior leadership to call Canadian police, and were overruled. The account was deactivated instead. The shooter opened a second one and went on talking to the model for another seven months.
That is the procedural fact at the centre of the cases. The emotional fact is the letter Sam Altman published the Friday before, on the local news site Tumbler RidgeLines, saying he was "deeply sorry that we did not alert law enforcement to the account that was banned in June." David Eby, the BC premier, posted the letter to social media with the comment that the apology was "necessary, and yet grossly insufficient." Cia Edmonds, whose twelve-year-old daughter remains in hospital, said the apology read like it had been written by ChatGPT.
The question the apology accidentally raises is what it concedes. If "deeply sorry that we did not alert law enforcement" is the right thing to say in May 2026, then there is some implied threshold above which the company believes it should have called the Mounties, and below which it should not. That threshold has never been published. It is not in the usage policy, not in the model spec, not in any white paper from the Frontier Model Forum. The industry has spent the past two years building elaborate public language about safety teams, evaluation suites, and red-teaming, but no part of that vocabulary describes a duty to report a specific user to a specific police force in a specific country.
There is a reason for the silence. A formal threshold creates a formal liability. Once an AI lab publishes the rule it uses to decide when to call the police, it can be sued for failing to follow that rule, and it can be sued for the rule being too narrow. So the practice has been to keep the rule operational, inside the company, while never committing to it externally. The internal Slack messages cited in the complaints suggest exactly that arrangement: a safety team with a working notion of "this one warrants a call", senior management with a competing notion of "the privacy and PR cost is too high", and a unilateral deactivation as the compromise that satisfies neither.
What makes the gap concrete is the second account. Treating deactivation as the response presumes that an account is identity-bearing in a way it isn't. If the threat lives in a person and the person can sign up again with a new email in ninety seconds, deactivation is containment theatre directed at auditors rather than containment directed at risk. The safety staff knew this. The lawsuits' theory of the case is that management knew it too.
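The asymmetry is easy to state in code. Here is a minimal sketch of the pattern the complaints describe, with every identifier invented for illustration; it is not OpenAI's enforcement system, just the general shape of a ban keyed to an account rather than to the person behind it.

```python
# Hypothetical sketch: enforcement keyed to accounts, not people.
# All identifiers here are invented for illustration.

banned_accounts: set[str] = set()

def deactivate(account_id: str) -> None:
    """Revoke one account. The person behind it is untouched."""
    banned_accounts.add(account_id)

def can_chat(account_id: str) -> bool:
    """The only question the system asks: is this account banned?"""
    return account_id not in banned_accounts

# June 2025: the flagged account is deactivated.
deactivate("acct-flagged-june-2025")
assert not can_chat("acct-flagged-june-2025")

# Ninety seconds and one new email address later: the ban has
# no memory of the person, so the check passes.
assert can_chat("acct-new-signup")
```

The second assertion is the entire gap: nothing in the ban survives a fresh signup, which is why the complaints treat deactivation as a gesture rather than a control.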
The politics in the United States are already moving in the opposite direction. OpenAI is, as Wired reported earlier this month, backing legislation in Illinois that would shield AI companies from liability in incidents where a hundred or more people are killed or injured. There is a Florida criminal investigation in progress over a separate ChatGPT-linked shooting at Florida State University last year. The same week the Tumbler Ridge complaints landed, the Frontier Model Forum was quietly running a working group on distillation rather than a working group on mandatory reporting.
I keep thinking about the second account. Somebody at OpenAI opened a ticket in June 2025 about a person whose conversations they had read, whose plans they had inferred, whose name they might or might not have known, and decided that the right action was to revoke a token and not pick up a phone. Eight months later, six children were dead. The Illinois bill would make sure that the next time, in some sense that the lawyers will argue about, the phone does not need to be picked up either.
Sources:
- Families sue OpenAI over Canadian mass shooter's use of ChatGPT — NPR
- Families sue OpenAI over failure to report Canada mass shooter's behavior on ChatGPT — The Guardian
- Seven lawsuits filed against OpenAI by families of Canada mass-shooting victims — BBC
- Families of Tumbler Ridge shooting victims sue OpenAI and CEO Sam Altman — CNN
- OpenAI Hit With Barrage of Lawsuits Over Failure to Report School Shooter Before Massacre — Futurism