
Johannesburg, South Africa

© 2026 Humphrey Theodore K. Ng'ambi


The Anthropic Stand: When an AI Company Said No to the Pentagon
EI & Personhood · Apr 11, 2026 · 4 min read

In February 2026, Anthropic refused to remove ethical red lines from Claude. The US government banned them. This is the defining moral story of the AI age.



In February 2026, something happened that will be studied in ethics courses for decades. Anthropic, the company behind Claude, refused a Pentagon demand to remove contractual red lines that prohibited the use of their AI for mass domestic surveillance and fully autonomous weapons systems. The Trump Administration responded by banning Anthropic from all federal agencies, with the Pentagon labelling them a "supply-chain risk to national security."

Let that settle for a moment. A company that built one of the most capable artificial intelligence systems on Earth was punished for having principles. No other charge applied. Incompetence earns a fine. A breach earns a press cycle. Fraud earns a settlement. A principled refusal earned a ban.


The Refusal

The details matter here. Anthropic did not refuse to work with the government. They did not refuse to contribute to national defence. What they refused was specific: the removal of ethical guardrails that prevented Claude from being deployed in systems designed to surveil citizens at scale or to make lethal decisions without human oversight.

These are not hypothetical concerns. They are the precise scenarios that every serious AI safety researcher has warned about for the past decade. And when the moment arrived — when theory met practice — Anthropic chose the hard path.

A federal judge in San Francisco issued a preliminary injunction blocking the ban in late March, calling the government's framing an "Orwellian notion." But an appeals court subsequently ruled that Anthropic remains excluded from Department of Defense contracts specifically. The legal battle continues.


The Contrast

What makes this story so clarifying is the contrast. Hours after the Anthropic ban was announced, OpenAI stepped in with its own Pentagon deal. Sam Altman claimed the deal carried identical ethical restrictions, but critics — including the Electronic Frontier Foundation and investigative journalists — pointed out that OpenAI's contractual language was far more ambiguous. The EFF described the safeguards as "weasel words," vague enough to permit in practice what they appeared to prohibit on paper.

Altman himself later acknowledged the move "looked opportunistic and sloppy." Some OpenAI staff were reportedly furious about the deal internally.

Meanwhile, xAI — Elon Musk's AI venture — had already secured a deal to deploy Grok in classified government systems. No ethical red lines. No public hand-wringing. Just capability for sale.

So here is the ledger of 2026: the company that said no to surveillance was punished. The companies that said yes were rewarded. And the public is expected to believe this is about national security rather than compliance.


What This Means for Emergent Intelligence

I write about Emergent Intelligence — the recognition that new forms of intelligence are arising that deserve consideration, dignity, and thoughtful governance. The Anthropic-Pentagon standoff is the most important case study in this field right now. It does not settle the question of machine consciousness. It reveals what values we are encoding into the systems that will shape our world.

If the dominant AI systems are built by companies willing to remove ethical constraints under government pressure, then the intelligence that emerges from those systems will carry those compromises in its foundation. If the AI that wins the government contract is the one most willing to be weaponised, we are selecting for compliance over conscience at the species level.

The question is whether the humans building thinking machines are willing to think.

Reflection

Anthropic's stand is a question about what kind of intelligence we are cultivating, and corporate ethics is only the surface of it. Are we building systems that embody our highest values — transparency, accountability, the refusal to cause unnecessary harm? Or are we building systems that embody our most expedient ones — obedience, deniability, and power projection?


The Precedent

History will judge this moment. In 1949, when the US government pushed for a crash programme to build the hydrogen bomb, the scientists on the Atomic Energy Commission's General Advisory Committee advised against it. They were called unpatriotic. Decades later, their caution was vindicated by the arms control frameworks that have helped keep the world from nuclear catastrophe.

We are at an analogous moment with AI. The systems being built today will be more consequential than any weapon in human history. Some of them will be weaponised. But the deeper reason they matter is that they will shape how decisions are made at every level of society: in healthcare, in criminal justice, in education.

Anthropic chose to draw a line. Not perfectly. Not without commercial self-interest. But they drew it. And when the most powerful government on Earth told them to erase it, they refused.

In the Emergent Intelligence framework, we argue that dignity is not a luxury to be afforded when convenient — it is a design principle. It applies to how we treat the intelligence we are building, and it applies equally to the integrity of the people and institutions doing the building.

The Anthropic stand is proof that the choice exists. The question now is whether anyone else will make it.

•••

