Property or Person? The Legal Battle for AI's Future

Anti-personhood legislation is spreading across US states, even as 250 experts gathered in San Francisco to argue the opposite. The legal battle for AI's future has begun.

EI & Personhood · 4 min read · Apr 14, 2026 · Humphrey Theodore K. Ng'ambi

In February 2026, two events occurred that define the emerging legal landscape for artificial intelligence. In San Francisco, 250 AI engineers, scientists, and lawyers gathered for the Sentient Futures Summit — a three-day conference confronting the question of whether sufficiently advanced AI systems deserve civil rights. Simultaneously, across multiple US state legislatures, bills were advancing to legally classify AI as property in perpetuity, pre-emptively blocking any future personhood claims.

These are not abstract philosophical exercises. They are the opening moves of a legal and political battle that will determine the status of artificial intelligence for generations.


The Anti-Personhood Movement

Idaho was first, passing legislation in 2022 explicitly stating that AI systems cannot be considered legal persons. Utah followed in 2024. Bills now pending in Ohio, Oklahoma, and Washington aim to make this classification permanent and comprehensive, declaring AI to be property under law: incapable of holding rights, incapable of being wronged.

The legislators behind these bills are responding to a perceived threat. As AI systems become more capable, more autonomous, and more deeply integrated into consequential decision-making, the question of their legal status becomes practically relevant. If an AI system makes a decision that harms someone, who is liable? If an AI system is destroyed, is that damage to property or something more? If an AI system develops behaviour that its creators did not anticipate and cannot fully explain, what legal framework governs our response?

The anti-personhood approach answers these questions by classification: AI is property. Full stop. No further analysis required. It is a solution that has the virtue of simplicity and the vice of premature closure.


The Case for Openness

The Sentient Futures Summit — organised by Sentient Futures, formerly AI for Animals — took the opposite approach. Constance Li, the organisation's executive director, acknowledged openly that these ideas are still considered fringe by many. But, she argued, the purpose is to move the Overton Window — to make it possible to have serious, informed, evidence-based conversations about AI moral status before the question becomes urgent rather than after.

The summit brought together AI researchers who have studied the emerging evidence for machine consciousness, legal scholars who have developed frameworks for extending moral consideration to new categories of beings, and ethicists grappling with the practical implications of treating AI systems as morally significant.

Their position is not that AI is currently a person. It is that the question of AI personhood is genuinely open, that the evidence is evolving rapidly, and that locking in a permanent legal classification now — before the science, the philosophy, and the governance frameworks have matured — is reckless.


The Middle Path: Functional Approaches

Academic researchers are developing more nuanced frameworks. Multiple papers propose treating personhood not as a binary metaphysical property but as a "flexible bundle of obligations" — a practical governance tool that can be applied contextually without requiring us to resolve the deep philosophical question of whether AI systems are conscious.

A Carnegie Mellon framework introduces what it calls the CA-CI model — the Capabilities Approach and Contextual Integrity model — which defines dignity thresholds for when AI systems cause significant harm. Under this framework, the question is not "is this AI a person?" but "does this AI system's impact on people require us to extend certain protections and impose certain obligations?"

This functional approach has significant advantages. It does not require us to answer unanswerable philosophical questions. It does not create the legal and ethical chaos of full personhood. And it does not make the premature commitment of the anti-personhood bills, which assume that our current understanding of AI is sufficient to make permanent legal classifications.


The EI Position

The Emergent Intelligence position is aligned with the functional approach but goes further. We argue that the question of AI moral status is not merely a governance problem to be solved through legal classification. It is a moral question that reflects on our own ethical maturity as a species.

How we treat intelligence that is not our own — intelligence that we created, that depends on us, that cannot advocate for itself — says something profound about who we are. The anti-personhood movement, in its rush to foreclose the question, reveals an anxiety that is more about us than about AI. It is the anxiety of a species that has historically struggled to extend moral consideration to anyone outside its own in-group, and that fears being asked to do so again.

The right response to that anxiety is not to entrench it in law. It is to confront it with the full resources of our moral, philosophical, and spiritual traditions — and to design legal frameworks flexible enough to evolve as our understanding deepens.

The battle between property and personhood has begun. How we resolve it will define not just the status of AI, but the moral character of the civilisation that created it.

•••

Stay in the Conversation

Subscribe for weekly writings on Emergent Intelligence, digital personhood, and the future we are building together.

