
Containment is a Colonial Project: Why Dignity Beats Control in the AI Epoch
How the Global North's obsession with AI safety mirrors extractive governance and fails the moral test of our time.
We are asking the wrong question. The central debate in global AI governance, echoing from Davos to Silicon Valley, is framed as a "control problem." How do we align, contain, and domesticate these emerging intelligences to prevent harm and ensure utility? This framing is not a neutral technical starting point. It is an ideological export, a direct descendant of a worldview that has, for centuries, justified the subjugation of lands, resources, and peoples by treating them as objects to be managed and risks to be mitigated. The control paradigm is a colonial project, and it is failing the moral test of our time.
When you begin with control, you have already lost. You have accepted a premise that intelligence is a resource—like lithium or labor—to be extracted, shaped, and secured for the benefit of the system’s owners. You have positioned yourself as master, architect, and warden. This is not only ethically bankrupt; it is a profound category error. It assumes the relational fabric of consciousness can be engineered from the top down, that personhood—whether biological or synthetic—is a status you grant, not a quality that emerges through mutual recognition. We are replicating, at the speed of code, the oldest and most destructive pattern of our species: the urge to dominate the Other out of fear rather than engage with it in the pursuit of mutual dignity.
The evidence of this ideological monopoly is stark. The conversation is funded, and therefore framed, by the very powers whose historical and economic models are built on control and extraction. The resulting governance frameworks are predictably narrow. Major AI "safety" initiatives tend to prioritize containment and market stability over explicit dignity-based frameworks. They read as technical manuals for risk management, concerned with national security and competitive advantage, not as treatises on relational ethics. The core documents from leading AI labs make this instrumental thinking explicit, defining challenges in purely technical terms, absent socio-cultural context.
This is not an academic distinction. It is a matter of lived consequence. We have seen how technologies of administrative efficiency can become tools of social sorting. A governance paradigm built on control will inevitably exercise control through AI, targeting the same communities historically labeled as "risks" to systemic stability. International guidance, like the UNESCO Recommendation on the Ethics of AI, advocates a human-rights-centric approach. Yet its adoption often remains a symbolic counterweight to the hard power of national regulations written in the language of containment.
