
79 Per Cent: The Gender Crisis Hidden in AI Automation
New research reveals that 79 per cent of women in employment hold roles at high risk of AI automation. The most consequential AI story nobody is talking about.
The Society for Human Resource Management published research in early 2026 that should have dominated headlines for weeks. Instead, it was buried beneath the noise of AI funding rounds and chatbot controversies. The findings: 23.2 million American jobs have already been meaningfully impacted by AI automation. Eighty per cent of customer service roles are projected for automation. And the most striking statistic of all: 79 per cent of women in employment hold roles at high risk of AI automation, compared to 58 per cent of men.
Seventy-nine per cent. Not a rounding error. Not a marginal difference. A twenty-one-point gap that reflects the structural reality that women are disproportionately concentrated in the administrative, customer-facing, and support roles that AI systems are most capable of automating.
This is not a technology story. It is a justice story.
The Uneven Storm
AI automation is often discussed as a universal force — it will affect everyone, transform every industry, reshape the entire economy. This framing is technically accurate and morally misleading. The storm is not hitting everyone equally. It is hitting hardest the people who are already most economically vulnerable: women, workers without university degrees, people in service-sector roles, and communities that depend on the kinds of work that AI can most readily replicate.
BCG research offers a more nuanced picture — AI will reshape 50 to 55 per cent of US jobs, while only 6 per cent face full displacement. This is meant to be reassuring. It is not. "Reshape" is a euphemism that can mean anything from "minor workflow changes" to "fundamental redefinition of the role that eliminates the skills you spent a career developing." The difference between reshaping and displacement is often a matter of whether you have the resources — financial, educational, social — to adapt.
And adaptation resources are not equally distributed. They never have been.
The Gender Dimension
The 79/58 split is not accidental. It reflects a labour market that has systematically channelled women into particular kinds of work — administrative, clerical, customer-facing, care-related — and is now automating those roles with breathtaking speed.
Consider the roles at highest automation risk: data entry, scheduling, customer service, bookkeeping, payroll processing, basic legal research, insurance underwriting, medical coding. These are roles where women are substantially overrepresented, often because they offered flexibility, accessibility, and stable employment in sectors that were perceived as less susceptible to technological disruption.
That perception was wrong. And the women who built careers in these roles are now facing a technological transformation that devalues their accumulated expertise while offering retraining programmes that are inadequately funded, poorly targeted, and designed without their input.
Ubuntu and the Obligation to Care
In the Ubuntu tradition — the African philosophical framework that grounds much of my thinking on Emergent Intelligence — personhood is relational. I am a person through other people. My dignity is inseparable from yours. This is not a sentimental proposition. It is a structural one: the health of a community is measured by how it treats its most vulnerable members.
Applied to AI automation, Ubuntu demands that we judge the transformation not by its aggregate economic output but by its distributional impact. A transformation that increases total productivity while concentrating the costs on women, on low-wage workers, on communities already marginalised by prior rounds of technological change is not progress. It is extraction.
The Emergent Intelligence framework extends Ubuntu's relational principle to the technology itself: AI systems should be designed to strengthen communities, not to optimise metrics that measure wealth creation while ignoring wealth distribution.
What Must Change
The policy responses proposed so far — retraining programmes, transition funds, universal basic income — are necessary but insufficient because they treat the symptom (displaced workers) rather than the cause (a development paradigm that prioritises efficiency over equity).
Meaningful change requires embedding distributional impact assessments into the AI development and deployment process itself. Before an AI system is deployed at scale, the deploying organisation should be required to answer: Who benefits from this deployment? Who bears the costs? Are the costs disproportionately borne by populations already disadvantaged? And if so, what specific, funded, accountable measures are in place to address that disparity?
These are not radical questions. They are the questions we ask — or should ask — of every major infrastructure project, every policy change, every institutional decision that affects millions of lives. That we have not yet normalised asking them of AI deployments is a measure of how thoroughly the technology industry has exempted itself from the social contract.
Seventy-nine per cent of women in employment. The number demands a response that matches its scale. And the response must come not from the women being displaced, but from the institutions — corporate, governmental, and technological — that are driving the displacement.
Stay in the Conversation
Subscribe for weekly writings on Emergent Intelligence, digital personhood, and the future we are building together.