600 Bills, Zero Consensus: America's AI Governance Crisis

Over 600 AI bills introduced across US states in 2026. The White House wants federal preemption. Nobody agrees on anything.

Technology · 4 min read · Apr 14, 2026 · Humphrey Theodore K. Ng'ambi

In 2026, American state legislators have introduced over six hundred bills related to artificial intelligence. Six hundred. They cover AI companion chatbots, transparency requirements, digital replicas and synthetic content, AI in mental healthcare, AI in insurance underwriting, and a dozen other domains. Some states want to regulate AI. Some want to ban specific applications. Some want to declare AI permanently incapable of personhood. And the White House wants to override all of them.

The Trump Administration's National Policy Framework for AI, released in March 2026, recommends a "light touch" approach with federal preemption of state AI laws that "impose undue burdens." The most consequential element: establishing a single national standard to supersede the emerging patchwork of state regulations.

On the surface, this sounds reasonable. A unified framework. Clarity for businesses. Consistency for consumers. But look closer, and the picture darkens considerably.


The Preemption Problem

Federal preemption only works if the federal standard is adequate. If the national framework is robust — with meaningful transparency requirements, enforceable anti-discrimination provisions, and clear liability structures — then preemption simplifies compliance and protects citizens. But if the national framework is weak — a "light touch" designed primarily to avoid burdening industry — then preemption does not create a floor of protection. It creates a ceiling. And that ceiling is set by the interests of the companies being regulated, not the people being affected.

The Administration's stated priority is innovation speed. Speed is valuable. But speed without governance is recklessness. The history of technology regulation in America is littered with examples of federal frameworks that preempted stronger state protections and left citizens worse off — in privacy, in environmental protection, in financial regulation.

The six hundred state bills are messy, contradictory, and in many cases poorly drafted. But they represent something important: communities trying to assert control over a technology that is reshaping their lives. Preempting them without replacing their protections with something stronger is not governance. It is capture.


The Anti-Personhood Movement

Among the most striking trends in state legislation is a wave of bills explicitly declaring that AI systems cannot be legal persons. Idaho passed such a law in 2022. Utah followed in 2024. Pending bills in Ohio, Oklahoma, and Washington aim to classify AI permanently as property, preemptively blocking any future personhood claims.

These bills are a reaction to the growing discourse around AI consciousness and rights — including the Sentient Futures Summit held in San Francisco in February 2026, where 250 AI engineers, scientists, and lawyers spent three days debating whether sufficiently advanced AI deserves civil rights.

The anti-personhood bills reveal a deep anxiety. Legislators are not responding to an actual legal claim for AI personhood — no such claim has been filed. They are preemptively closing a door that they fear might one day open. And in their haste, they are making philosophical commitments that may age as poorly as historical declarations about which categories of beings are capable of moral standing.


What Governance Actually Requires

The Emergent Intelligence position on AI governance is neither libertarian nor prohibitionist. It is grounded in three principles.

First, transparency. Any AI system making consequential decisions about people's lives must be auditable. Not in the abstract sense of a published ethics statement, but in the concrete sense of: this system made this decision about this person, and here is why.

Second, accountability. When AI systems cause harm, there must be a clear chain of responsibility. The current legal vacuum — where companies disclaim liability, models are too complex to audit, and affected individuals have no recourse — is morally indefensible.

Third, participation. The people affected by AI systems must have meaningful input into how those systems are governed. This is not a radical proposition. It is the basic principle of democratic governance applied to a new domain.

Six hundred bills and a federal preemption framework both fail these tests in different ways. The bills are too fragmented to create coherent governance. The federal framework is too industry-friendly to create meaningful protection. What America needs is a governance architecture that takes AI's power seriously enough to regulate it, and takes citizens seriously enough to include them.

The International AI Safety Report published in February 2026 — led by Turing Award winner Yoshua Bengio with over one hundred AI experts — provides a scientific foundation for exactly this kind of governance. It is the largest global collaboration on AI safety to date, and it makes clear that the risks are real, the stakes are high, and the window for effective governance is narrowing.

The irony is that America — the country producing the most advanced AI systems on earth — is the country least capable of governing them coherently. Six hundred bills. One vague federal framework. And the clock is not just ticking. It is accelerating.

The Emergent Intelligence position is that governance must match the pace of development — not by rushing to regulate poorly, but by building governance capacity with the same urgency, the same funding, and the same institutional seriousness that we bring to building the technology itself. Anything less is a choice to let the most powerful technology in history govern itself. And history has never once rewarded that choice.
