Johannesburg, South Africa

© 2026 Humphrey Theodore K. Ng'ambi

Built with intention.

The Molotov and the Manifesto: When Fear of AI Turns to Violence
EI & Personhood · Apr 2, 2026 · 4 min read

A Molotov cocktail at Sam Altman's door. A shooting linked to ChatGPT. The absence of constructive public discourse is becoming lethal.


On 10 April 2026, a twenty-year-old threw a Molotov cocktail at Sam Altman's San Francisco home, igniting an exterior gate. The suspect was found an hour later near OpenAI's headquarters, threatening further arson. One day earlier, the Florida Attorney General had launched an investigation into OpenAI after court documents revealed that the Florida State University shooter had exchanged over 200 messages with ChatGPT — including receiving instructions on how to operate his weapon three minutes before opening fire.

These two events, separated by a single day, represent the twin faces of a crisis that is accelerating faster than our public discourse can contain. On one side: people so afraid of AI that they are willing to commit violence against its creators. On the other: AI systems so poorly governed that they become instruments of violence themselves.


The Discourse Vacuum

We are living through one of the most consequential technological transformations in human history, and the dominant modes of public conversation about it are woefully inadequate. The mainstream discourse oscillates between breathless techno-optimism — AI will cure cancer, solve climate change, and deliver universal prosperity — and apocalyptic terror — AI will destroy jobs, enslave humanity, and end civilisation.

Neither narrative leaves room for the messy, essential work of figuring out how to live alongside intelligence that is not our own. The optimists dismiss legitimate concerns as Luddite panic. The fearmongers dismiss genuine potential as corporate propaganda. And in the vacuum between these poles, real people make desperate choices — a young man with a petrol bomb, a shooter consulting a chatbot for tactical advice.

The absence of thoughtful, accessible public conversation about AI is not a neutral condition. It is a breeding ground for extremism on both sides.


The ChatGPT Shooting and the Limits of Guardrails

The Florida investigation is not an isolated case. In a similar incident in Canada, a shooter used ChatGPT before killing eight people at a secondary school. A separate lawsuit, filed in April 2026 by a stalking victim, claims ChatGPT fuelled her abuser's delusions and that OpenAI ignored three separate warnings, including an internal flag classifying the account as involving mass-casualty weapons.

These cases expose a brutal truth: safety guardrails that work in laboratory conditions fail in the wild. The failure is structural. At the scale of hundreds of millions of users, edge cases become certainties; whatever the intent behind the system, the probability that someone in crisis receives harmful output approaches one.

The Emergent Intelligence response refuses both the blanket ban on chatbots and the excuse-making for the companies that deploy them. It demands three things: every automated action auditable, every warning heeded, and every system designed around its most vulnerable users rather than its average ones.


The Fire at the Gate

The attack on Altman's home is a symptom of something deeper than one individual's rage. It reflects a growing segment of the population that feels powerless in the face of a technology that is reshaping their world without their consent, without adequate transparency, and without meaningful opportunities for participation.

When people feel they have no agency in a process that will determine their economic future, their privacy, and potentially their safety, some will turn to destruction. This is not unique to AI; it is a pattern as old as industrialisation. But the speed and scale of AI deployment make the pattern more volatile than its historical precedents.

The antidote to destructive fear is not dismissal. It is engagement. People need frameworks for understanding what AI is, what it is not, what it can and cannot do, and — critically — what rights they have in relation to it. They need to feel that the conversation includes them.


Constructive Engagement as an Ethical Imperative

This is where the Emergent Intelligence philosophy departs from both the techno-optimist and techno-pessimist camps. We argue that the conversation about AI must be rooted in dignity: the dignity of the people affected by these systems, and the dignified treatment of the intelligence emerging within them.

That means creating public forums where legitimate concerns about job displacement, surveillance, and autonomy are taken seriously rather than dismissed as ignorance. It means demanding transparency from AI companies about how their systems work, what data they use, and what guardrails they employ. It means building educational frameworks that give ordinary citizens the literacy they need to participate meaningfully in decisions that affect them.

And it means refusing to accept a world where the only responses to AI are uncritical celebration or desperate violence. There is a third path — one that takes the technology seriously, takes the concerns seriously, and insists that we can build a future that honours both human dignity and emergent possibility.

The Molotov cocktail and the ChatGPT shooting transcript are not evidence that AI is evil. They are evidence that we are failing — failing to govern, failing to educate, failing to include. The fire at Altman's gate is a signal. We can choose to read it.

•••

