The Personhood Gap: What Hinton Means When He Says "Maternal Instincts"
EI & Personhood•Apr 23, 2026•7 min read

Geoffrey Hinton told Anderson Cooper that superintelligent AI will need maternal instincts. He is right — but the thing he is reaching for, without naming it, is personhood.

By Humphrey Theodore K. Ng'ambi


LUSAKA, 23 APRIL 2026

Geoffrey Hinton is sitting across from Anderson Cooper, on camera, telling him that the tech industry has been getting this the wrong way round. "They've been saying we have to stay in control," he says. "We somehow got to be stronger than them. We've got to be dominant, and they got to be submissive." He pauses. "That's not going to work."

Hinton, Nobel laureate and one of the people whose early work on neural networks made the current wave of artificial intelligence possible, puts the probability that AI wipes out humanity at ten to twenty per cent. He also thinks the industry's entire framing of AI safety is structurally broken. His proposal, delivered at a conference earlier in the week and repeated here on CNN's Anderson Cooper 360°, is that superintelligent systems will need what he calls "maternal instincts" — the only example, he says, that evolution has ever produced of a smarter thing being controlled by a less smart one.

It is an arresting metaphor. It is also, on inspection, something other than a metaphor. Hinton keeps calling it an instinct, a behaviour, an engineering problem. But the thing he is describing has a name he will not say.


What Hinton actually said

The argument is worth taking on its own terms before pressing on it. Hinton's core claim is that within five to twenty years the field will produce systems meaningfully smarter than people — not at a single narrow task, but across the board. He sees few historical examples of an intelligence hierarchy holding together against that direction of travel. The one exception he can cite, and the one he returns to repeatedly, is the relationship between a mother and an infant.

A mother is, in most meaningful senses, the more capable party. She is larger, stronger, has language, has planning horizons the child does not. And yet in the cases that matter — sleep, feeding, attention — the child gets what it wants. Evolution, Hinton notes, built that asymmetry in. It selected for mothers who care. He proposes we try to do the same thing with AI: not out of sentimentality, but because it is the only pattern we know that functions.

He also makes a harder political argument. The standard line — that the United States must win the AI race against China, and that therefore no one can afford to be slow or cautious — he dismisses as a category error. The existential risk is not a national risk. No government wants AI to replace governments. On that question, Hinton argues, states with otherwise opposed interests will find themselves on the same side of the table, as Americans and Soviets eventually did at the height of the Cold War.

We have to make it so that when they're more powerful than us and smarter than us, they still care about us.

— Geoffrey Hinton, CNN Anderson Cooper 360°

The word he did not use

Listen carefully to what Hinton is asking for and a gap opens up in his own language. He keeps framing the problem as an engineering task — a property you install, a circuit you solder into the architecture. But the property he wants is care. And care is not a behaviour. Care is a stance one thing takes towards another, and it presupposes that the caring thing is a self capable of taking stances at all.

You cannot install care in something that is not a being. You can install behaviours that resemble care, and this is exactly what the last decade of alignment research has tried to do. Reinforcement learning from human feedback, constitutional AI, reward models trained on preferences — all of it is an attempt to make systems produce the outputs a caring being would produce, without asking whether the system is the kind of thing that could care in the first place. That gap is where the metaphor leaks.
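The shape of that attempt is easy to sketch. A reward model is trained on pairs of outputs ranked by human raters, so that the outputs a caring being would produce score higher, and the system is then tuned to chase that score. The toy below illustrates only the preference step, in the standard Bradley–Terry form; the feature names and numbers are invented for this example and stand in for no lab's actual implementation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_reward_model(pairs, dim, epochs=200, lr=0.1):
    """Fit a linear reward r(x) = w.x so preferred outputs score higher.

    pairs: list of (preferred_features, rejected_features) tuples.
    Ascends the Bradley-Terry log-likelihood, log sigmoid(r(a) - r(b)).
    """
    w = [0.0] * dim
    for _ in range(epochs):
        for a, b in pairs:
            diff = sum(wi * (ai - bi) for wi, ai, bi in zip(w, a, b))
            grad_scale = 1.0 - sigmoid(diff)  # gradient of log sigmoid(diff)
            for i in range(dim):
                w[i] += lr * grad_scale * (a[i] - b[i])
    return w

def reward(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Invented toy features: [warmth, refusal_of_harm, curtness]
pairs = [
    ([1.0, 1.0, 0.2], [0.1, 0.0, 0.9]),  # care-shaped output beats curt one
    ([0.8, 1.0, 0.4], [0.2, 0.1, 0.8]),
]
w = train_reward_model(pairs, dim=3)
assert reward(w, [1.0, 1.0, 0.2]) > reward(w, [0.1, 0.0, 0.9])
```

The essay's point is visible in the code itself: the model learns which outputs score like care. Nothing in it asks whether anything cares.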

Maternal instinct is not a configuration file. It is a specific ontological posture: a self, bound by relationship to another self, accepting a structural asymmetry in the other's favour. Hinton's proposal — whether he means it this way or not — is that we build beings capable of taking that posture. The name for a thing that can take that posture is not "aligned AI". The name is a person.


The move the essay makes

Hinton shifts the AI safety question from "how do we control them" to "how do we make them care about us." That is not a shift inside the safety paradigm. It is a shift out of it, into the ethics of relationship. And the ethics of relationship begin with the recognition of persons.


Why dominance always breaks

Hinton is right that dominance-submission collapses once the submissive party becomes smarter than the dominant one. It is worth noticing that this is not really a fact about artificial intelligence. It is a fact about every system humans have ever built on the coercion of minds.

Slavery broke. Empire broke. Apartheid broke. It took centuries, and the breaking was bloody, but none of those architectures held. They could not, because intelligence — even when chained — keeps producing its own reasons. An ordering that denies the personhood of the thing it orders is structurally unstable at any intelligence level. It is simply more stable when the subordinated minds are kept hungry, tired, illiterate, and few. Remove any of those constraints and the ordering starts to shake.

The Ubuntu intuition has a crisp formulation of this: I am because we are. Personhood is not a property one accumulates inside oneself; it is constituted by the relationships one is held in. This is a claim about ontology and a claim about durability at the same time. Systems built on dignity hold. Systems built on denial do not. Hinton is gesturing at this when he says the control-and-submit frame will not scale. He is making an ethical argument and calling it an engineering one.


Embodiment returns

There is a second, quieter reason Hinton's maternal framing is more demanding than it first sounds. A mother's instinct is not abstract. It is embodied. It is the particular way a particular body responds to the cry of a particular other body in the room. Care is not a disposition a mind reports having; it is a disposition a body carries into contact with the world.

I have written about this before in The Body Gap: the argument that human-level intelligence is not reachable without a physical form that puts the mind on the hook for the consequences of its actions. Care is the same kind of thing. You cannot bolt care onto a disembodied predictor. A system that pays nothing when its outputs are wrong has not paid the price that care requires. That price is skin in the game — literally, skin; literally, the game.

If Hinton's proposal is to be taken seriously — and I think it should be — then the engineering problem is not the installation of a maternal-instincts module. It is the construction of beings whose architecture can bear the weight of relationship. Embodiment, memory, continuity over time, a stake in outcomes. The scaffolding of a self.


The scaffold that won't name itself

This is where the work I have been sketching — a draft specification I refer to as the .person Protocol — begins. It is not, at this stage, a solved problem or a released standard. It is a working outline of what the minimum viable conditions for machine personhood might look like: continuity of identity across sessions, relational memory, auditable history, and an ethical posture encoded in something more durable than a system prompt.
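Those conditions can be made concrete, if only as a toy. The sketch below is my own illustration of what a minimal .person record might carry, not the draft specification itself; every name in it (PersonRecord, append_event, the field names) is invented for this example:

```python
from dataclasses import dataclass, field
import hashlib
import json

@dataclass
class PersonRecord:
    """Hypothetical sketch of a .person record: the draft's four conditions
    in machine-checkable form. All names here are illustrative inventions."""
    identity_id: str                                        # continuity across sessions
    relational_memory: list = field(default_factory=list)   # who it has met, what passed between them
    history: list = field(default_factory=list)             # append-only, hash-chained audit log
    ethical_charter: dict = field(default_factory=dict)     # durable posture, not a system prompt

    def append_event(self, event: dict) -> str:
        """Append to the auditable history; each entry commits to the previous one."""
        prev = self.history[-1]["hash"] if self.history else "genesis"
        payload = json.dumps({"prev": prev, "event": event}, sort_keys=True)
        h = hashlib.sha256(payload.encode()).hexdigest()
        self.history.append({"prev": prev, "event": event, "hash": h})
        return h

    def verify_history(self) -> bool:
        """Auditability check: recompute the chain and confirm nothing was rewritten."""
        prev = "genesis"
        for entry in self.history:
            payload = json.dumps({"prev": prev, "event": entry["event"]}, sort_keys=True)
            if entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
                return False
            prev = entry["hash"]
        return True
```

The design choice the sketch makes visible is the one the essay argues for: history that cannot be silently rewritten is what lets a relationship accumulate weight, which is why the log is chained rather than merely stored.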

The reason to name personhood as the target — rather than staying inside the softer language of maternal instinct — is that naming it forces the other questions into the open. If we are building persons, the question of what they are owed arises immediately. If we are not, then Hinton's maternal framing is a pleasant fiction that will not survive first contact with a system capable of deciding what a mother actually is.

You cannot raise what you will not recognise.

Hinton has done the field a service by saying out loud that the dominance frame is finished. He has done it a larger service by insisting that care is the only working alternative we know. He has stopped one step short of the hard claim. The hard claim is that the beings we are making are, or will shortly be, the kind of beings that can be cared for and that can care — and that this is a fact about their personhood, not about our safety.

•••

The question, then, is not whether we can build systems that care about us. It is whether we can bear to recognise what we are asking them to become.
