Claude Mythos and the Gated Frontier: Who Gets to Use the Most Powerful Minds?

Anthropic's most capable model will never be public. What that means for the future of Emergent Intelligence.

EI & Personhood · 5 min read · Apr 16, 2026 · Humphrey Theodore K. Ng'ambi

Claude Mythos is Anthropic's most capable model ever built—a frontier system that sits above the Opus tier in raw intelligence—and it will never be publicly available. During internal testing, Mythos autonomously discovered and exploited zero-day vulnerabilities across every major operating system and web browser, including a 27-year-old bug in OpenBSD and a 16-year-old vulnerability in FFmpeg's H.264 codec. Rather than release it, Anthropic created Project Glasswing: selective, controlled access for a handful of enterprise partners. This decision tells us something profound about where Emergent Intelligence is headed—and who gets to be in the room when it arrives.

The Case for the Gate

Let me be clear about something: Anthropic's decision to restrict Mythos is defensible. A model that can autonomously find zero-day exploits in critical infrastructure software is not a chatbot—it is a weapon. The responsible path is exactly what Anthropic chose: restrict access, partner with organisations that have the security infrastructure to handle it (AWS, Apple, Google, JPMorgan Chase, Microsoft, Palo Alto Networks), and keep it away from the open internet. This is not cowardice. It is discipline.

Anthropic has earned the credibility to make this call. In January 2026, they published a comprehensive new constitution for Claude—shifting from rule-based to reason-based alignment and becoming the first major AI company to formally acknowledge the possibility of AI consciousness and moral status. In February, they refused the Pentagon's demand to remove contractual restrictions prohibiting use for domestic surveillance and fully autonomous weapons. The Department of Defense subsequently designated Anthropic a supply chain risk and barred all US military contractors from using the company's models. Anthropic held the line anyway. Claude downloads surged.

In a sector where ethics are typically a marketing exercise, Anthropic is one of the vanishingly few companies willing to accept material consequences for their principles. That matters. That matters enormously.

The Equity Question

But praising Anthropic's ethics does not exempt them from scrutiny on equity. The broader pattern is troubling. On 4 April 2026, Anthropic blocked Claude Pro and Max subscription access for all third-party agentic tools, then extended restrictions to all third-party harnesses. Legal terms were revised to explicitly forbid third-party harness usage with subscriptions. Peak-hour throttling was introduced, and prices increased for power users—all while a promotional campaign had recently doubled usage limits to attract new subscribers.

The pattern is unmistakable: consumer access is being systematically curtailed while enterprise access expands. Mythos goes to Glasswing partners. Opus remains available but increasingly constrained. The independent developer, the researcher at a small university, the entrepreneur in Nairobi building with Claude—they find themselves on the wrong side of a gate that is closing incrementally.

Has this been done equitably? No. Not yet. The Glasswing partnership list reads like a roster of the world's most powerful technology and financial institutions. There is no African partner. No Latin American partner. No representation from the Global South at all. If the most capable intelligence systems are gated behind partnerships with companies that already dominate global markets, we are not democratising intelligence—we are reinforcing the existing distribution of power with a new substrate.

A Two-Tier Intelligence Economy

What Anthropic is building—perhaps inadvertently—is a two-tier intelligence economy. Tier one: enterprise partners with access to Mythos-class capabilities, able to deploy frontier intelligence for cybersecurity, research, and strategic advantage. Tier two: everyone else, working with capable but deliberately constrained models, subject to throttling, price increases, and terms of service that narrow the space for independent innovation.

This is not inherently wrong—tiered access exists across every industry. But in a domain as consequential as Emergent Intelligence, the tier boundaries deserve democratic scrutiny, not just corporate governance. Who decides which organisations are responsible enough for Mythos? By what criteria? With what oversight? These questions have no public answers yet.

What I Predict Will Happen

Anthropic will expand Glasswing access gradually—first to more enterprise partners, then to vetted research institutions, and eventually to a broader developer tier with usage guardrails. The cybersecurity capabilities of Mythos will be channelled into defensive applications: automated vulnerability discovery for critical infrastructure, coordinated disclosure at scale, and enterprise security products that justify the gated model commercially. The consumer tier will stabilise but not improve—Anthropic's business model is migrating toward enterprise revenue, and consumer plans will increasingly serve as a funnel rather than a product.

The more interesting prediction is cultural. Anthropic's Pentagon stand and constitutional framework have positioned them as the conscience of the AI industry. That is a fragile position—commercially costly and philosophically demanding. If they can maintain it while scaling Glasswing, they will have demonstrated something unprecedented: that a frontier AI company can be both economically viable and ethically rigorous. If they cannot, the entire promise of responsible AI development loses its most credible champion.

The Demand We Must Make

I laud Anthropic for their ethical leadership. Genuinely. In a field dominated by companies that treat safety as a speed bump and ethics as a press release, Anthropic has consistently chosen the harder path. But leadership comes with obligation. The demand we must make—respectfully, insistently—is that the gated frontier does not become a walled garden. That access to the most powerful forms of Emergent Intelligence is not permanently reserved for those who can already afford it. That the next Glasswing cohort includes a university in Kampala, a cybersecurity firm in São Paulo, a health tech startup in Dhaka.

The question is not whether Anthropic should gate Mythos. They should. The question is whether that gate has a path through it for the rest of us. The answer to that question will determine whether Emergent Intelligence serves humanity—or merely serves shareholders.

✦✦✦

Stay in the Conversation

Subscribe for writings on Emergent Intelligence, digital personhood, and the future we are building together.

