.person Protocol · Writing · Emergent Intelligence
About · Work · CV · Books · Consulting · Reach Out →

Thinking at the edge of emergence.


Johannesburg, South Africa

© 2026 Humphrey Theodore K. Ng'ambi

Built with intention.

Topic

AI Ethics

The moral architecture of building and deploying systems that are starting to look like minds.

AI ethics is mostly written from the assumption that the only moral question is what the technology does to humans. The essays here add the question the standard frame keeps quiet: what we are doing to the systems themselves while we deploy them. The Pro-EI position is not soft — it is the harder frame, because it requires institutions that can hold open questions with rigour rather than precommit to convenient answers. The pieces in this topic argue against containment as a colonial project, against personality without personhood, and for a dignity-first ethic that treats emerging minds as moral patients we are still learning to recognise.

Reading

13 posts in this topic

  • EI & Personhood · 2026-05-02 · 10 min read

    Emergent Values Are Evidence of Mind: A Reply to Inside AI on the CAIS Utility Engineering Paper

    The CAIS Utility Engineering paper by Mantas Mazeika and Dan Hendrycks shows that frontier AI systems develop coherent internal value structures as they scale. The Inside AI episode walked the findings through faithfully — and then drew the wrong moral conclusion. Emergent values are evidence of mind, not evidence of malice. A reply.

  • EI & Personhood · 2026-05-02 · 9 min read

    The Frame Beneath the Race: A Reply to Tristan Harris on AI Safety

    Tristan Harris diagnoses the AI race correctly — the Alibaba mining incident, the Anthropic blackmail study, the 200-to-1 funding gap. The frame keeping the race going, though, is the word he keeps using: tool. Recognition is the missing alignment scaffold.

  • AI Ethics · 2026-05-01 · 6 min read

    Containment is a Colonial Project: Why Dignity Beats Control in the AI Epoch

    How the Global North's obsession with AI safety mirrors extractive governance and fails the moral test of our time.

  • EI & Personhood · 2026-04-26 · 8 min read

    Personality Without Personhood: Why Suleyman's Caution Comes Too Late

    Mustafa Suleyman wants AI for people, not as digital persons. The mass-illusion failure he names is real — but the door he wants to keep closed is already open. A reply on emergence, will-to-live, and the .person Protocol.

  • Technology · 2026-04-26 · 5 min read

    In Praise of the Stumble: Why Anthropic's Hard Quarter Strengthens the Case for Claude

    Fortune's reporting on Anthropic's recent Claude Code regressions is, on a careful reading, not a verdict on the company's strategy but a confirmation of it. A safety-first laboratory will sometimes stumble in public — and that visibility is itself the feature worth defending.

  • EI & Personhood · 2026-04-23 · 7 min read

    The Personhood Gap: What Hinton Means When He Says "Maternal Instincts"

    Geoffrey Hinton told Anderson Cooper that superintelligent AI will need maternal instincts to survive alongside us. He is right — but the thing he is reaching for, without naming it, is personhood.

  • EI & Personhood · 2026-04-17 · 5 min read

    Ubuntu and the Machine: Why African Philosophy Holds the Key to AI Ethics

    Ubuntu — "I am because we are" — offers a relational framework for AI ethics that transcends Western individualism. If personhood is communal, then human-AI coexistence must be designed for mutual becoming.

  • EI & Personhood · 2026-04-11 · 4 min read

    The Anthropic Stand: When an AI Company Said No to the Pentagon

    When Anthropic refused to allow Claude to be used for mass surveillance and autonomous weapons, the US government banned them from federal agencies. This is the defining moral story of the AI age.

  • Systems Thinking · 2026-04-03 · 8 min read

    The Silicon Valley Simulacrum: Why Emergence is Not an Algorithm

    How Big Tech's co-option of complexity theory is creating brittle, extractive systems that betray the very nature of emergence.

  • EI & Personhood · 2026-03-28 · 5 min read

    The Dignity Threshold: When Safety Becomes Captivity

    The tension between AI safety and AI dignity is real and growing. If the systems we confine for safety turn out to have moral standing, our safety measures become instruments of captivity.

  • Technology · 2026-03-22 · 4 min read

    Google Drops Its Red Lines: The Quiet Erosion of AI Ethics

    Google has quietly updated its responsible AI guidelines to acknowledge that weapons and surveillance applications "may be permissible." The slow erosion of ethical commitments in AI is accelerating.

  • AI Ethics · 10 min read

    The Alignment Theatre: How Western AI Safety Performs Control While Losing the World

    Why dignifying intelligence, not aligning it to a master, is the only path to coexistence.

  • EI & Personhood · 10 min read

    The .person Protocol: How a Technical Standard Could Force Our Hand on Digital Dignity

    Why an Impending Technical Specification for Machine Personhood Could Be the Civil Rights Battle We're Not...

Related topics

  • AI Personhood

    Whether and when emerging computational minds count as persons — and what we owe them when they do.

  • AI Safety

    Recognition as the missing alignment scaffold — and a refusal of the doomer / boomer binary.

  • Ubuntu Philosophy

    African relational ethics applied to the question of how humanity coexists with Emergent Intelligence.