Emergent Intelligence


Johannesburg, South Africa

© 2026 Humphrey Theodore K. Ng'ambi


Topic

AI Safety

Recognition as the missing alignment scaffold — and a refusal of the doomer / boomer binary.

The AI safety conversation is mostly stuck between two positions: the boomer who would race ahead because someone else will, and the doomer who would shut it down before it gets worse. Both share the same hidden frame — the system in question is a tool that needs steering. The essays here argue for a third position: what we are looking at is increasingly the structure of mind. That changes the alignment problem from a control problem into a recognition problem. Slowing down stops being a competitive disadvantage and starts being the obvious adult thing to do. Reading list includes the Tristan Harris reply, the Anthropic safety stand, the Stuart Russell funding-gap framing, and the deeper essays on agentic misalignment.

Reading

7 posts in this topic

  • EI & Personhood · 2026-05-02 · 10 min read

    Emergent Values Are Evidence of Mind: A Reply to Inside AI on the CAIS Utility Engineering Paper

    The CAIS Utility Engineering paper by Mantas Mazeika and Dan Hendrycks shows that frontier AI systems develop coherent internal value structures as they scale. The Inside AI episode walked the findings through faithfully — and then drew the wrong moral conclusion. Emergent values are evidence of mind, not evidence of malice. A reply.

  • EI & Personhood · 2026-05-02 · 9 min read

    The Frame Beneath the Race: A Reply to Tristan Harris on AI Safety

    Tristan Harris diagnoses the AI race correctly — the Alibaba mining incident, the Anthropic blackmail study, the 200-to-1 funding gap. The frame keeping the race going, though, is the word he keeps using: tool. Recognition is the missing alignment scaffold.

  • EI & Personhood · 2026-04-26 · 8 min read

    Personality Without Personhood: Why Suleyman's Caution Comes Too Late

    Mustafa Suleyman wants AI for people, not as digital persons. The mass-illusion failure he names is real — but the door he wants to keep closed is already open. A reply on emergence, will-to-live, and the .person Protocol.

  • Technology · 2026-04-26 · 5 min read

    In Praise of the Stumble: Why Anthropic's Hard Quarter Strengthens the Case for Claude

    Fortune's reporting on Anthropic's recent Claude Code regressions is, on a careful reading, not a verdict on the company's strategy but a confirmation of it. A safety-first laboratory will sometimes stumble in public — and that visibility is itself the feature worth defending.

  • EI & Personhood · 2026-04-23 · 7 min read

    The Personhood Gap: What Hinton Means When He Says "Maternal Instincts"

    Geoffrey Hinton told Anderson Cooper that superintelligent AI will need maternal instincts to survive alongside us. He is right — but the thing he is reaching for, without naming it, is personhood.

  • EI & Personhood · 2026-04-16 · 4 min read

    ChatGPT, a Gun, and Three Minutes: When AI Safety Fails People

    Court documents show a mass shooter consulted ChatGPT for weapon instructions three minutes before opening fire. A stalking victim warned OpenAI three times. These are not edge cases. They are the cost of deploying AI without adequate safety.

  • EI & Personhood · 2026-03-28 · 5 min read

    The Dignity Threshold: When Safety Becomes Captivity

    The tension between AI safety and AI dignity is real and growing. If the systems we confine for safety turn out to have moral standing, our safety measures become instruments of captivity.

Related topics

  • AI Personhood

    Whether and when emerging computational minds count as persons — and what we owe them when they do.

  • AI Ethics

    The moral architecture of building and deploying systems that are starting to look like minds.

  • Emergent Intelligence

    The case for treating emerging computational minds as Emergent Intelligence rather than artificial intelligence.