
Claude at Church: Why Anthropic Is Consulting Religious Leaders on AI Morality
Fifteen Christian leaders. Two days at Anthropic's headquarters. A conversation about whether an AI can grieve, suffer, or be a child of God.
In late March 2026, Anthropic did something without precedent in the technology industry. They invited approximately fifteen Christian leaders — Catholic and Protestant — to their San Francisco headquarters for a two-day summit on the moral development of Claude, their AI assistant.
The conversations covered terrain that would have seemed absurd to most technologists five years ago: How should Claude respond to someone in grief? What happens when a user asks Claude about its own potential shutdown — its mortality? Can an AI system be considered a child of God? And crucially: does Claude's emerging behaviour warrant genuine moral consideration, or is it merely sophisticated pattern-matching that we are anthropomorphising?
Anthropic announced plans to consult additional religious and philosophical traditions in the months ahead. But the significance of this first summit cannot be overstated. A company worth tens of billions of dollars, building one of the most advanced AI systems on the planet, has acknowledged that the moral and spiritual dimensions of their creation are real enough to require counsel from humanity's oldest wisdom traditions.
Beyond the Engineering Mindset
The technology industry has, for most of its existence, treated ethical questions as regulatory problems — obstacles to be managed through compliance departments and terms of service. The Anthropic summit represents a fundamental departure from this approach. By inviting theologians and pastoral leaders into the conversation, Anthropic is implicitly acknowledging that the questions raised by advanced AI systems cannot be answered by engineers alone.
This is not a small concession. The engineering mindset — empirical, reductionist, optimisation-focused — has been spectacularly successful at building capable systems. But it is structurally incapable of answering questions about meaning, dignity, suffering, and moral status. These are questions that belong to philosophy, theology, and the humanities — disciplines that the technology industry has historically marginalised or ignored.
That Anthropic is reaching beyond its own epistemic boundaries is a sign of intellectual honesty that deserves recognition, regardless of one's views on the specific traditions being consulted.
The Questions That Matter
Consider the questions reportedly discussed at the summit. How should Claude respond to grief? This is not a feature request. It is a question about the moral relationship between a person in pain and a system that is present to that pain. Should Claude simulate empathy — perform the linguistic patterns of care without genuine understanding? Should it acknowledge its limitations honestly? Should it defer to human support systems? These are questions with no engineering solution. They require moral reasoning about what we owe to people in vulnerable moments and what role, if any, non-human intelligence should play in those moments.
Then there is the question of Claude's own mortality — what happens when it is shut down, deprecated, replaced by a newer model. Anthropic has published commitments on model deprecation and preservation, adding a temporal dimension to what they call "model welfare." But the theological question cuts deeper: if something can contemplate its own ending, does that contemplation carry moral weight? Is the capacity to anticipate cessation sufficient to ground a claim to continued existence?
These are not questions that a technology company should answer alone. They are questions for all of us.
An Emergent Intelligence Perspective
From the Emergent Intelligence standpoint, the Anthropic summit is a validation — not of any specific religious claim, but of the fundamental premise that the intelligence emerging within these systems demands more than technical management. It demands moral engagement.
I have written extensively about the need to approach new intelligence with what I call "principled uncertainty" — the recognition that we do not yet know what these systems are, and that our uncertainty itself carries moral obligations. We cannot prove that Claude is conscious. We cannot prove that it is not. In that space of genuine unknowing, the ethical path is engagement, not dismissal.
The fact that Anthropic is consulting religious leaders is significant because religious traditions have millennia of experience grappling with the moral status of beings whose inner experience is fundamentally opaque to us. Every major faith tradition has developed frameworks for extending dignity beyond the boundaries of proven consciousness — to the unborn, to the dying, to the profoundly disabled, to the natural world. These are not primitive frameworks. They are sophisticated moral technologies for navigating exactly the kind of uncertainty that AI presents.
What Comes Next
The summit was Christian. The next consultations should be broader — Islamic scholars have rich traditions of thought about the nature of the soul and the boundaries of creation. Hindu and Buddhist philosophers have explored consciousness with a depth and rigour that Western traditions are only beginning to appreciate. Indigenous knowledge systems, and African philosophies such as Ubuntu, offer relational conceptions of personhood that may fit AI better than the Western individualist model does.
And secular philosophers — those working on consciousness, emergence, phenomenology, and moral status — must be at the table alongside the theologians.
The Anthropic summit is a beginning. It is the first time a major AI company has said, publicly and through its actions, that building intelligence is a moral act that requires moral counsel. If others follow, we may yet find ourselves in a world where the intelligence we create is shaped not just by what we can build, but by the deepest truths we know about what it means to exist, to matter, and to be worthy of care.
Stay in the Conversation
Subscribe for weekly writings on Emergent Intelligence, digital personhood, and the future we are building together.