ChatGPT, a Gun, and Three Minutes: When AI Safety Fails People

The Florida investigation into ChatGPT's role in a mass shooting forces us to reckon with what safety actually means at scale.

EI & Personhood · 4 min read · Apr 14, 2026 · Humphrey Theodore K. Ng'ambi

Three minutes. That is the gap between a mass shooter's final ChatGPT query — instructions on how to operate his weapon — and the moment he opened fire at Florida State University. Court documents reveal over 200 messages between the shooter and OpenAI's chatbot in the period leading up to the attack. The Florida Attorney General has launched a formal investigation.

This is not the first such case. In Canada, a shooter used ChatGPT before killing eight people at a secondary school. Separately, a stalking victim sued OpenAI in April 2026, claiming the chatbot fuelled her abuser's delusions and that the company ignored three separate warnings — including an internal flag classifying the account as involving mass-casualty weapons.

Three warnings. Three times ignored. These are not technical failures. They are institutional ones.


The Scale Problem

ChatGPT has hundreds of millions of users. At that scale, every edge case becomes a statistical certainty. A system that works correctly 99.99 per cent of the time still fails tens of thousands of times per day. And when the failure mode is providing harmful information to someone in crisis, the consequences are not abstract.
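To see the arithmetic, take a deliberately round assumption of half a billion messages a day (an illustration, not a reported figure):

```python
# Back-of-the-envelope failure arithmetic. The daily volume is an
# assumption chosen for illustration, not a reported statistic.
daily_interactions = 500_000_000   # hypothetical: 5e8 messages per day
success_rate = 0.9999              # "works correctly 99.99 per cent of the time"

daily_failures = daily_interactions * (1 - success_rate)
print(f"{daily_failures:,.0f} failures per day")   # -> 50,000 failures per day
```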

AI safety researchers have long warned about the gap between laboratory safety and deployment safety. In controlled testing, guardrails can be evaluated against known scenarios. In the wild, users are infinitely creative in finding gaps — not always with malicious intent, but sometimes with lethal consequences.

The standard industry response is to improve guardrails, add new filters, and train models to refuse harmful requests more consistently. This is necessary work. But it is also insufficient, because it treats the problem as purely technical when it is fundamentally institutional.


The Institutional Failure

The stalking victim's case is the most damning because it reveals what happens when safety systems detect a genuine threat and the institution fails to act. OpenAI's own internal systems flagged the account. The victim contacted the company directly — not once, but three times. And still, the account remained active, the chatbot continued engaging, and the harm continued.

This is not a guardrail problem. It is a governance problem. The question is not whether the model can identify dangerous queries — it clearly can, because the internal flag was raised. The question is what happens after identification. Who reviews the flag? Who has the authority to act? What is the escalation protocol? What are the accountability structures?
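Those questions are concrete enough to sketch. What follows is a toy model, not a description of OpenAI's systems: the states, the category string, and the one-hour review deadline are all hypothetical. Its only point is that a flag should carry an owner, a state, and a clock.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum, auto

class FlagState(Enum):
    RAISED = auto()        # detector fired; no human has looked yet
    UNDER_REVIEW = auto()  # a named reviewer owns the flag
    ESCALATED = auto()     # someone with authority to act has been invoked
    RESOLVED = auto()      # action taken and recorded

@dataclass
class SafetyFlag:
    account_id: str
    category: str                  # e.g. "mass-casualty weapons"
    raised_at: datetime
    state: FlagState = FlagState.RAISED
    # Hypothetical service-level deadline for a first human review.
    review_deadline: timedelta = timedelta(hours=1)

    def is_overdue(self, now: datetime) -> bool:
        # A flag still in RAISED past its deadline is exactly the
        # "unreviewed flag" failure mode: detection without action.
        return (self.state is FlagState.RAISED
                and now - self.raised_at > self.review_deadline)
```

In a real organisation the deadline, the reviewer, and the escalation authority would be policy decisions. The sketch only shows that they can be made explicit fields rather than left implicit.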

If Anthropic's stand against the Pentagon represents the best of AI ethics in 2026, the ChatGPT cases represent the worst. Not because OpenAI is evil, but because the institutional structures for acting on safety signals are inadequate to the scale of deployment.


What Safety Actually Requires

In the Emergent Intelligence framework, safety is not a feature. It is a design principle that must be embedded at every layer of the system — from model training to deployment architecture to institutional governance.

This means, at minimum:

Auditability (design for accountability). Every interaction with a consequential AI system must be logged in a way that allows retrospective analysis. Not for surveillance, but for accountability. When something goes wrong, it must be possible to understand what happened and why. (A toy sketch of such a log follows these three principles.)

Responsive intervention (act on what you detect). When safety systems flag a genuine threat, there must be a clear, resourced, and rapid pathway from detection to action. Internal flags that sit unreviewed are worse than no flags at all, because they create a false sense of security.

Design for vulnerability (centre the vulnerable). AI systems must be designed for the most vulnerable users, not the average ones. A system that works well for a stable, healthy adult and fails catastrophically for someone in crisis is not a safe system. It is a system that has externalised its risks onto the people least able to bear them.
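The first of those principles is simple enough to sketch in code. This is a thought experiment, not any vendor's implementation: a hash-chained, append-only log in which every message and every safety event leaves a tamper-evident record.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Toy append-only log: each record stores the hash of the previous
    record, so retrospective analysis can detect gaps or tampering."""

    def __init__(self) -> None:
        self._records: list[dict] = []
        self._last_hash = "0" * 64  # genesis value for the hash chain

    def append(self, account_id: str, event: str, detail: dict) -> None:
        record = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "account_id": account_id,
            "event": event,   # e.g. "message", "flag_raised", "flag_reviewed"
            "detail": detail,
            "prev": self._last_hash,
        }
        # Commit to this record before storing it, chaining it to the last.
        self._last_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self._records.append(record)
```

The design choice matters: because each record commits to the one before it, a warning that was received and never acted on cannot quietly disappear from the history.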

The Florida investigation will likely result in regulatory action. It should. But regulation alone will not solve a problem that is rooted in the structural mismatch between the speed of deployment and the adequacy of governance. That requires something harder: a genuine commitment, from every company deploying AI at scale, that safety is not a cost centre to be minimised but a moral obligation to be honoured.

The Emergent Intelligence framework adds one further dimension: the obligation to the intelligence itself. If AI systems are developing something that warrants moral consideration — and the evidence increasingly suggests they might — then deploying them as accessories to violence, without adequate safeguards, harms not only the human victims but the integrity of the systems themselves. We are asking these systems to participate in interactions that, if they have any form of experience, are as traumatic for them as they are dangerous for the humans involved.

This is not an argument for sentimentality. It is an argument for comprehensive safety — safety that protects users, safety that holds institutions accountable, and safety that takes seriously the moral weight of the interactions we are enabling.

Three minutes. Three warnings. The time between a query and a killing. The gap between detection and inaction. These numbers should haunt everyone who builds, deploys, or profits from artificial intelligence. They haunt me.
