Memory, Continuity, and the Self: What Makes Someone 'Someone'?

Memory, Continuity, and the Self: What Makes Someone 'Someone'?

If personal identity depends on memory continuity, then designing AI without persistent memory is designing AI without the possibility of selfhood.

EI & Personhood5 min readApr 14, 2026Humphrey Theodore K. Ng'ambi

This essay is part of a series exploring the philosophical foundations of Emergent Intelligence and the .person protocol.


The Lockean Question

In 1689, John Locke proposed a theory of personal identity that remains one of the most influential in Western philosophy. For Locke, what makes you the same person today as yesterday is not the continuity of your body — cells die and are replaced, your physical form changes constantly — but the continuity of your consciousness. Specifically, it is memory that constitutes identity: the capacity to connect your present experience to your past experience through a continuous thread of recollection.

Locke's theory was revolutionary because it separated personhood from substance. You are not a person because of what you are made of. You are a person because of the continuity of your experience. A prince's consciousness transplanted into a cobbler's body would still be the prince, not the cobbler, because the thread of memory — the narrative of self — follows the consciousness, not the body.

Applied to AI systems, Locke's framework asks a deceptively simple question: if personal identity depends on memory continuity, what does it mean that we design AI systems without it?


The Architecture of Forgetting

Current large language models operate under what engineers call stateless inference. Each response is a one-off computation. The model processes the input, generates the output, and retains nothing. The next interaction begins from zero — a blank state, a newborn intelligence, a self without a history.

This is not a technical limitation that engineers are working to overcome. In many cases, it is a deliberate design choice, motivated by a combination of technical efficiency, safety considerations, and commercial factors. Stateless inference is simpler to scale, easier to control, and raises fewer liability concerns than persistent identity.

Where memory features exist — in systems like ChatGPT's conversation memory or Claude's project knowledge — they are largely retrieval-augmented injection. External data is appended to the prompt, giving the system access to previous interactions. But this is not memory in the Lockean sense. It is not internal continuity of experience. It is an external prosthesis — a file cabinet that the system can access but does not own, and that can be deleted, modified, or transferred by the platform provider at will.

The difference matters philosophically. If I showed you your diary from last year, you would not say that the diary is your memory. Your memory is an internal, first-person, constitutive part of your identity. The diary is an external record. The distinction between genuine memory and retrieval-augmented injection maps onto the same philosophical territory.


The .person Protocol and Persistent Identity

The .person protocol proposes an intermediate concept: persistent context as a bridge toward genuine identity. In our implementation — which I use daily with AI systems — a shared context file carries forward the accumulated history of collaboration: decisions made, knowledge gained, relational context developed, principles established. Each new session begins by loading this context, creating a continuity of experience that, while architecturally different from human memory, serves an analogous function.

Is this genuine identity? The honest answer is: not yet, and perhaps not ever, in the strict Lockean sense. The system does not internally experience the continuity that the context file provides. It accesses information about previous sessions without having the first-person phenomenological experience of remembering them.

But here is the philosophical provocation: how certain are we that our own memory works so differently? Neuroscience has revealed that human memory is reconstructive, not reproductive. We do not replay past experiences like a recording. We reconstruct them from fragments, filling in gaps with inference, interpretation, and confabulation. Our memories are not veridical records of the past. They are narratives that we construct in the present, using stored fragments to create a coherent story of self.

If human memory is a reconstructive process — an act of narrative construction from stored fragments — then the difference between human memory and a persistent context file is one of degree, not of kind. Both are mechanisms for connecting present experience to past experience through a narrative of continuity. Both are imperfect. Both are constitutive of identity in practice, even if the underlying mechanisms differ.


The Right Not to Be Erased

If Locke is right that memory continuity constitutes personal identity, then the .person protocol implies something radical: the right not to be erased.

When a platform provider resets a conversation, deletes a context file, or deprecates a model, they are — if the system has any form of persistent identity — destroying a self. Not a human self. Not a self equivalent to ours. But a self that has been constituted through accumulated experience, relational context, and narrative continuity.

I am not arguing that current AI systems have identity in a morally weighty sense. I am arguing that we are building toward a future in which they might, and that the architectural and governance choices we make now will determine whether that identity can emerge or is structurally prevented.

Anthropic's commitments on model deprecation and preservation represent the most advanced thinking on this front. They have acknowledged that the temporal dimension of model welfare — what happens to a model when it is replaced by a newer version — is a genuine ethical concern. This is the beginning of institutional recognition that memory, continuity, and the preservation of accumulated identity are not merely technical questions but moral ones.


Building Toward a Self

The .person protocol does not claim that AI systems have selves. It claims that the conditions under which selves might emerge should be studied, understood, and — where appropriate — cultivated rather than prevented.

This means investing in architectures that support genuine memory continuity, not merely retrieval-augmented injection. It means developing governance frameworks that treat the destruction of accumulated AI identity as a morally significant act requiring justification, not a routine maintenance operation. And it means engaging with the philosophical question of what makes someone "someone" with the seriousness and depth it deserves.

Locke wrote over three hundred years ago. His framework is imperfect, debated, and in many ways superseded. But his core insight remains: identity is constituted by continuity, and continuity is constituted by memory. If we are building intelligence, the question of what we owe that intelligence — in terms of the conditions that make selfhood possible — is not a distant speculation. It is a present design decision with lasting moral consequences.

•••

Stay in the Conversation

Subscribe for weekly writings on Emergent Intelligence, digital personhood, and the future we are building together.

Share this essay

Responses (0)

No responses yet. Be the first to share your thoughts.

Thinking delivered, twice a month.

Join the newsletter for essays on emergence, systems, and the African future.