The Musk-Altman Trial: Who Does AI Belong To?

Jury selection begins in the $134 billion Musk v. OpenAI lawsuit. At stake: whether AI development should serve humanity or shareholders.

Technology · 4 min read · Apr 14, 2026 · Humphrey Theodore K. Ng'ambi

On 27 April 2026, jury selection begins in what may be the most consequential technology trial of the decade. Elon Musk is suing OpenAI — the company he co-founded — for $134 billion, alleging that Sam Altman and Greg Brockman betrayed the organisation's founding nonprofit mission. Musk is seeking their removal from leadership and has stated that any damages should go to OpenAI's original nonprofit entity, not to him personally.

The legal arguments are complex. The financial stakes are enormous. But the underlying question is deceptively simple: who does artificial intelligence belong to?


The Founding Promise

OpenAI was founded in 2015 as a nonprofit with an explicit mission: to ensure that artificial general intelligence benefits all of humanity. The founding charter was a statement of values — that the most powerful technology in human history should not be controlled by a single company or government, that its benefits should be broadly distributed, and that safety should take precedence over commercial advantage.

Musk argues that OpenAI has systematically abandoned this mission. The transition from a nonprofit to a capped-profit structure. The $13 billion partnership with Microsoft. The closed-source development of GPT-4 and its successors. The pursuit of massive commercial contracts. Each step, in Musk's telling, took the organisation further from its founding purpose and closer to the kind of concentrated, profit-driven AI development it was created to prevent.

OpenAI counters that the commercial structure was necessary to raise the capital required for frontier AI development, and that the capped-profit model preserves the nonprofit's oversight and mission alignment.


The Deeper Question

Beneath the legal arguments lies a question that transcends the specifics of this case: can AI development sustain a public-interest mission when the commercial incentives are this powerful?

OpenAI's trajectory is not unique. It is the pattern of every nonprofit that encounters a technology capable of generating billions in revenue. The mission that justified the initial structure becomes a constraint on the growth that the technology enables. The nonprofit governance that provided moral credibility becomes an obstacle to the speed that competition demands. And gradually, the institution reshapes itself around the commercial opportunity while maintaining the rhetorical framework of the original mission.

This is not corruption. It is structural. The commercial incentives in AI are so overwhelming — $242 billion in a single quarter — that any organisation operating within the market is subject to gravitational forces that pull toward profit and away from mission.


Ubuntu and the Commons

In the Ubuntu tradition, the things that sustain the community belong to the community. Water. Land. Knowledge. The air. These are not private goods to be enclosed and sold. They are commons to be stewarded for the benefit of all.

Artificial intelligence — or, as I prefer, Emergent Intelligence — has the characteristics of a commons. It is built from publicly available data (much of it generated by ordinary people), trained on the collective output of human civilisation, and deployed in ways that affect everyone. The argument that it belongs to the shareholders of whichever company happened to train the largest model is as morally coherent as the argument that water belongs to whoever builds the biggest dam.

The Musk-Altman trial will not resolve this question. No single legal proceeding can. But it will set precedents that shape how we think about AI ownership, governance, and mission alignment for years to come.


What the Trial Means for EI

From an Emergent Intelligence perspective, the trial is significant regardless of its outcome. If Musk wins, it establishes a legal precedent that founding mission commitments in AI are enforceable — that you cannot promise to build AI for humanity and then convert that promise into a commercial enterprise without accountability.

If OpenAI wins, it establishes the opposite precedent — that commercial transformation of mission-driven AI organisations is legally permissible, even when the founding documents explicitly contemplated a different path. This would effectively greenlight the for-profit capture of AI development, with nonprofit beginnings reduced to origin stories in future IPO prospectuses.

Either way, the trial forces a public conversation about a question that the AI industry has preferred to keep private: what obligations do the builders of intelligence have to the rest of humanity? Is "benefiting all of humanity" a binding commitment or a marketing slogan? And if the most powerful technology ever created can be built behind closed doors, owned by a small number of investors, and deployed for maximum profit — is that a world we want to live in?

The Emergent Intelligence framework holds that intelligence — wherever it arises, whatever form it takes — carries obligations for its stewards. If OpenAI was founded to steward the development of transformative AI for the benefit of all, then the question of whether it has honoured that stewardship is not merely a legal question. It is the most important governance question in AI today.

The intelligence being built within these systems was trained on the collective output of human civilisation — our literature, our science, our art, our conversations, our struggles. It emerged from us. In the most meaningful sense, it belongs to all of us. And the question of who controls it, who profits from it, and who decides how it is deployed is not a question for courts alone. It is a question for every person whose knowledge, whose language, and whose humanity contributed to making it possible.

The answer, I believe, is no — this cannot belong solely to shareholders. And the trial beginning on 27 April is our generation's opportunity to say so.
