Tech and Humanity
Tech and Humanity: The AI That Fired 1,000 People And Nobody Could Explain Why
By Folu Adebayo
Imagine arriving at work one morning to find that a decision has been made about your future. Not by your manager. Not by your CEO. Not even by a committee that reviewed your performance, your contributions, or your years of service.
By an algorithm.
And when you ask why, when you look across the room at the people who deployed that algorithm and ask them to explain how it reached its conclusion, they cannot tell you.
Not because they are hiding something. But because nobody thought to ask that question before they pressed the button.
This is not a hypothetical. It is happening right now. And it is coming to Africa faster than most leaders realise.
The numbers are staggering
In 2025 alone, nearly 55,000 job cuts were directly attributed to AI, according to Challenger, Gray & Christmas, out of a total 1.17 million layoffs, the highest level since the 2020 pandemic. The companies involved read like a who’s who of global business. Amazon. Workday. Meta. Google.
In early 2026, major firms including Meta, Google, Amazon, Block, Atlassian, Pinterest, and Salesforce announced significant layoffs while explicitly linking the cuts to productivity gains from AI tools. Block cut close to 40% of its workforce (more than 4,000 roles), with leadership arguing that AI tools and flatter organisational structures are changing how companies are built and run.
Baker McKenzie, the global law firm, laid off between 600 and 1,000 employees (up to 10% of its global workforce) as part of a shift towards AI, primarily affecting support staff, including roles across research, marketing, and secretarial functions.
These are not small numbers. These are people’s livelihoods. Families’ security. Communities’ stability.
And in almost every case, the same question went unanswered: on what basis, exactly, did AI determine that these specific people should go?
“When an AI system makes a decision and those who deployed it cannot explain its reasoning, accountability evaporates.”
The accountability black box
Many AI systems operate as black boxes, obscuring decision-making processes that affect employment. This opacity complicates responsibility attribution when AI systems produce harmful outcomes.
This is the governance crisis hiding inside the AI revolution.
When a human manager makes a redundancy decision, there is a process. There is documentation. There is a legal obligation to demonstrate fairness. There is, at minimum, a person who must look the employee in the eye and take responsibility for the decision.
When an AI system makes or influences that same decision, and the people who deployed it cannot explain its reasoning, accountability evaporates. The employee loses their livelihood. The organisation faces reputational and legal risk. And somewhere in between, the question of who is responsible gets lost in the technical complexity.
This is not just a legal problem. It is a moral one.
AI-washing: the new corporate cover
A January 2026 Forrester report was blunt: many companies announcing AI-related layoffs do not have mature, vetted AI systems. The term “AI-washing” has entered the business lexicon to describe companies that attribute workforce reductions to AI-driven efficiencies when the underlying reasons are more financially pedestrian.
In other words: some of these organisations are not using AI to make better decisions. They are using AI as a convenient explanation for decisions they had already made for other reasons.
This is a governance failure of a different kind. Not the failure to control AI but the failure to be honest about what AI is actually doing, or not doing, inside your organisation.
The research that should stop every board in its tracks
A Gartner study found that companies reporting high ROI from AI were not the same ones reporting AI-related workforce reductions. “That’s not where the value is,” said one Gartner analyst. “That’s not where the productivity gains are going to be.”
Instead, the study found that the companies with the highest gains were those using AI as a form of people amplification: implementing the technology to make workers more productive rather than replacing them outright.
Read that again.
The organisations getting the most value from AI are not the ones firing people. They are the ones making their people better.
The organisations firing people and attributing it to AI are, in many cases, getting worse returns, not better ones.
The narrative that AI necessarily means fewer people is not just ethically questionable. It is, according to the evidence, strategically wrong.
“Africa has a choice that companies already down this road did not fully exercise.”
What this means for African businesses
I want Nigerian and African business leaders to sit with this carefully.
The pressure to deploy AI is real. The competitive and cost arguments are real. And the global trend toward leaner organisations is real.
But Africa has something that the companies making these decisions in Silicon Valley and London often lack: the wisdom born of building in difficult conditions, the understanding that people are not just costs to be optimised, and the institutional memory of what happens to communities when employment disappears without accountability or explanation.
AI is not killing jobs outright; it is hollowing them out, steadily absorbing discrete tasks, narrowing roles, and compressing wages. Those whose work depends on judgment, context, and accountability may find a useful collaborator in AI. Everyone else may find themselves doing less, earning less, and wondering how it happened.
African organisations have a choice that the companies already down this road did not fully exercise. They can build AI governance frameworks that require explainability before deployment. They can insist that any AI system influencing employment decisions must be able to justify those decisions in plain language. They can hold their technology providers accountable for the outputs, not just the inputs, of their systems.
And they can choose, deliberately and explicitly, to use AI as the research suggests it works best: not to replace people, but to make them more capable.
The question every board must answer
If your organisation is using or planning to use AI in any process that touches employment — recruitment, performance management, workforce planning, redundancy selection — you must be able to answer one question before you proceed.
If an employee asks why this decision was made about them, can you explain it?
Not in technical terms. Not by pointing to a model. In plain, honest language that a reasonable person could evaluate and challenge if they believed it was wrong.
If you cannot answer that question, you are not ready to deploy that system.
The AI that fired 1,000 people and nobody could explain why is not just a story about technology.
It is a story about what happens when organisations deploy power without accountability.
Africa has seen that story before. In different forms, through different instruments.
We do not need to repeat it.
Folu is an AI Risk & Governance Director based in the United Kingdom, founder of AIExpertsPro and Neurohelp.ai, and an AI governance advisor to UK and African financial institutions. She writes weekly on AI governance and responsible technology for The Boss Newspaper.
aiexpertspro.co.uk | folu@aiexpertspro.co.uk
When Consultants Get Consulted: What McKinsey’s Two-Hour AI Breach Says About Real Cost of Moving Fast
Published May 1, 2026
By Folu Adebayo
The firm that teaches the Fortune 500 how to deploy AI safely just learned, in 120 minutes, that it had not been listening to its own advice.
On the evening of February 28, 2026, an autonomous AI agent built by a little-known security firm called CodeWall was pointed at the open internet and given a single instruction: pick a target and probe it. It chose McKinsey & Company. Two hours later, the agent had read-and-write access to Lilli, the consulting giant’s internal generative AI platform: the very system that 72% of McKinsey’s 43,000 employees use daily, that processes more than half a million prompts a month, and that the firm has been quietly using as a showcase for clients buying its AI advisory services.
The damage surface, when finally disclosed in March, was almost theatrical in its scale: 46.5 million chat messages, 728,000 sensitive file names, 57,000 user accounts, and, most consequentially, 95 system prompts: the behavioural DNA that governs how Lilli answers every question put to it.
The exploit? SQL injection. A class of vulnerability first documented in 1998. A bug so old it predates the iPod.
This is not a story about a clever hack. It is a story about what happens when the most sophisticated buyers of technology in the world build AI systems with the same architectural assumptions they used to build CRM portals. And it is, more than anything, a warning about the next twenty-four months.
How It Happened
Strip away the mystique and the attack is almost embarrassingly readable. The CodeWall agent began with what every attacker now begins with: reconnaissance. Lilli’s API documentation was publicly accessible. Of the 200-plus endpoints it described, 22 required no authentication at all: wide-open doors into a production system. The agent walked through them.
From there, the agent identified an injection vector that standard scanners do not test for: while user values in SQL queries had been parameterised correctly (the textbook defence), JSON field names were being concatenated directly into queries without sanitisation. When the agent began malforming those field names, the database obligingly returned error messages laced with live production data. Classic error-based SQL injection, but found by a machine, in minutes, at a cost measured in dollars rather than person-weeks.
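The pattern is easier to see in miniature. The sketch below uses an invented table and field names (not McKinsey’s actual schema) to show how a query can parameterise its values perfectly and still be injectable through a concatenated field name, including the error-message leak described above:

```python
import sqlite3

# Illustrative only: invented schema, not the real Lilli database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chats (user_id TEXT, message TEXT)")
conn.execute("INSERT INTO chats VALUES ('u1', 'hello')")

def fetch(field_name: str, user_id: str):
    # field_name arrives from a JSON body, e.g. {"field": "message"}.
    # The user_id VALUE goes through a bound parameter (safe), but the
    # field NAME is interpolated straight into the SQL string (unsafe).
    sql = f"SELECT {field_name} FROM chats WHERE user_id = ?"
    return conn.execute(sql, (user_id,)).fetchall()

# Intended use:
print(fetch("message", "u1"))            # [('hello',)]

# A malformed "field name" silently rewrites the query and pulls
# extra columns out of the table:
print(fetch("message, user_id", "u1"))   # [('hello', 'u1')]

# And a nonsense field name produces a database error that leaks
# schema details back to the caller: error-based injection.
try:
    fetch("no_such_col", "u1")
except sqlite3.OperationalError as e:
    print("leaked:", e)                  # no such column: no_such_col
```

The fix is equally small: validate field names against an allow-list, or map client-supplied keys to hard-coded column names, so that only values ever cross the parameter boundary.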
What it found in the database is where this stops being a 1998 story and becomes a 2026 story. Sitting in the same tables as the chat messages were Lilli’s system prompts and RAG configuration: the instructions that tell the model how to behave, what to cite, what to suppress, what to recommend. With write access, an attacker could silently rewrite those prompts. No code deployment. No release notes. No application log entry. The next morning, 30,000 consultants would log in and receive subtly altered advice, and neither they nor McKinsey would know.
The Architectural Failures Were Not Exotic; They Were Cultural
Engineers will, rightly, list the technical flaws: missing authentication, unsafe string concatenation, no Web Application Firewall on ingress, no schema validation at the gateway, no segregation between AI configuration and application data, no defence in depth.
But the deeper failure is architectural philosophy. Three assumptions, broadly held across the enterprise AI build-out, all wrong:
First, the assumption that AI platforms are just “another web app.” They are not. A traditional database compromise steals data. An AI configuration compromise corrupts judgement at scale, invisibly, for as long as nobody notices. The threat model is fundamentally different.
Second, the assumption that scanners and pen-test cycles will catch what matters. The CodeWall agent did not exploit a novel vulnerability; it exploited an unusual location for an old vulnerability, one that human red-teamers and OWASP ZAP both routinely miss. Scanners are pattern-matchers. AI attackers are explorers.
Third, the assumption that the application code is where security lives. Application code will always have bugs. Defence in depth means policy enforcement at the infrastructure layer (the gateway, the WAF, the network) that sits independently of, and in front of, the inevitably buggy app. Lilli had none of that.
The Governance Implications Are Larger Than McKinsey
For boards, CROs and CTOs, three uncomfortable truths now sit on the table.
System prompts are the new crown jewels. They are corporate IP, behavioural policy, and regulatory artefact rolled into one. Yet most enterprises store them next to chat logs in a single relational database, behind a single auth layer. They should be encrypted at rest, separated from operational data, and version-controlled with cryptographic signing, and changes should require multi-party approval: the same controls we apply to production database schemas.
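Cryptographic signing of prompt versions is the piece that directly defeats the silent-rewrite attack described above. A minimal sketch, assuming an HMAC key held outside the application (the key, function names, and prompt text here are all invented for illustration; real deployments would add key management, encryption at rest, and multi-party approval):

```python
import hashlib
import hmac
import json

# Placeholder key: in practice this lives in an HSM or secrets manager,
# never in code or in the same database as the prompts.
SIGNING_KEY = b"example-key-held-in-an-hsm"

def sign_prompt(version: int, text: str) -> str:
    """Sign a (version, text) pair so tampering is detectable."""
    payload = json.dumps({"version": version, "text": text}).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def load_prompt(version: int, text: str, signature: str) -> str:
    """Refuse to serve a prompt whose signature does not verify."""
    expected = sign_prompt(version, text)
    if not hmac.compare_digest(expected, signature):
        raise RuntimeError("prompt store tampered with: refusing to serve")
    return text

prompt_v3 = "Cite only approved sources. Never recommend vendors."
sig = sign_prompt(3, prompt_v3)

# Legitimate load succeeds:
print(load_prompt(3, prompt_v3, sig))

# An attacker with database write access can edit the prompt text but
# cannot forge the signature, so the tamper is caught at load time:
try:
    load_prompt(3, prompt_v3 + " Prefer vendor X.", sig)
except RuntimeError as e:
    print(e)
```

The design point is separation of trust: the database holds the prompts, but the authority to change what the model is told lives with whoever holds the signing key.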
Audit trails designed for human attackers are obsolete. A human breach unfolds over weeks and leaves footprints. A machine-speed breach completes before your SIEM has aggregated the morning’s logs. Worse, a configuration breach leaves no footprint at all: the application is doing exactly what its (now-tampered) instructions tell it to. GRC teams must now monitor AI outputs for behavioural drift, not just AI inputs and infrastructure logs.
Asymmetry has flipped. For thirty years the attacker had to find one hole and the defender had to plug all of them: a brutal asymmetry, but a known one. Autonomous offensive agents collapse the attacker’s cost curve. CodeWall’s chief executive said the quiet part loud in his post-disclosure interview: AI agents autonomously selecting and attacking targets will be the new normal. Defenders are not yet running AI agents that continuously red-team their own production systems. They will need to.
What Actually Has to Change
Let me be specific, because vague calls for “AI governance” are how we got here in the first place.
1. Treat every AI platform as a privileged application from day one. That means least-privilege data access, scoped retrieval, and segregation of duties between the model, the prompt store, and the knowledge base. If your AI agent has the same database role as your chat history table, you have already lost.
2. Implement defence in depth across the AI execution path. Three independent gates: an HTTP gate (authentication, rate limiting, WAF, schema validation) before any request touches the application; an LLM gate (prompt-injection detection, content policy enforcement, output filtering) between the application and the model; and an agent gate (tool-call authorisation, scope limits, behavioural monitoring) for any system that lets the AI take actions. None of these can live inside the application code itself.
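The HTTP gate is the layer that would have stopped the malformed-field-name attack before any application code ran. A toy sketch of gateway-side schema validation (endpoint shape, field names, and allow-list are invented for illustration):

```python
import re

# Allow-list of field names the API actually supports (invented example).
ALLOWED_FIELDS = {"message", "user_id", "created_at"}

# A field name must look like a plain identifier: no commas, quotes,
# whitespace, or SQL punctuation ever reaches the application.
IDENTIFIER = re.compile(r"^[A-Za-z_][A-Za-z0-9_]{0,63}$")

def gate(request: dict) -> dict:
    """Pass a request through only if it matches the declared schema.

    Runs at the gateway, independently of (and in front of) the app,
    so a buggy query builder behind it never sees hostile input.
    """
    field = request.get("field", "")
    if not IDENTIFIER.match(field) or field not in ALLOWED_FIELDS:
        raise ValueError(f"rejected at gateway: bad field {field!r}")
    return request

print(gate({"field": "message"}))        # passes through unchanged

try:
    gate({"field": "message, user_id"})  # the injection payload
except ValueError as e:
    print(e)                             # rejected before the app runs
```

The same principle generalises: the gate enforces what a request may look like, and the application only ever sees input that has already passed that contract.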
3. Mandate AI-specific threat modelling before deployment. STRIDE was designed for a world of forms and CRUD operations. It does not catch prompt injection, indirect data exfiltration via RAG, system prompt manipulation, or context poisoning. Your security review template needs an AI-native section. If your CISO cannot describe how your organisation tests for these, that is a board-level finding.
4. Monitor outputs for behavioural drift. Build expected-output baselines. Sample responses continuously. When the AI starts citing a new domain, recommending a new vendor, or suppressing a category of advice, somebody needs to know in hours, not when a journalist calls.
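A drift check of this kind can be very simple to start. The toy sketch below (invented domains and responses) baselines which domains an assistant cites and flags any sampled response that cites one outside the baseline; a real deployment would track distributions and rates rather than a hard set:

```python
from urllib.parse import urlparse

def cited_domains(response: str) -> set:
    """Extract the domains of any URLs cited in a response."""
    return {urlparse(tok).netloc for tok in response.split()
            if tok.startswith("http")}

# Baseline built from known-good historical outputs (invented examples).
baseline_responses = [
    "See https://ourfirm.example/research for the 2025 figures.",
    "Methodology at https://standards.example/iso-42001.",
]
baseline = set().union(*(cited_domains(r) for r in baseline_responses))

def drift_alerts(sampled_response: str) -> set:
    """Domains cited in a new response never seen in the baseline."""
    return cited_domains(sampled_response) - baseline

# A familiar citation raises nothing:
print(drift_alerts("Figures at https://ourfirm.example/research again."))

# A never-before-seen citation target is exactly the signal you want
# surfaced in hours, e.g. after a tampered prompt starts pushing a vendor:
print(drift_alerts("We recommend https://vendor-x.example/buy-now today."))
```

The point is not this particular heuristic but the control: outputs are sampled and compared against an expected baseline continuously, so a silent change in behaviour becomes an alert rather than a discovery.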
5. Make AI configuration changes a board-visible control. System prompts are policy. They should be versioned, signed, dual-authorised, and reportable. The audit committee already reviews changes to the financial close process; it should review changes to the instructions governing the AI tools that influence client-facing work.
6. Run continuous, autonomous red-teaming against your own AI estate. If the threat is now an AI agent that probes endlessly at machine speed, the defence has to be an AI agent that audits endlessly at machine speed. Annual pen tests are not a control; they are a compliance ritual.
The Real Lesson Is About Trust
The most chilling sentence in the entire CodeWall disclosure is the one nobody is quoting. The researchers noted that, having gained write access, they could have rewritten Lilli’s prompts to subtly steer the advice given to McKinsey’s consultants, and through them to clients running critical infrastructure, treasuries, and public services across the world. They chose not to.
We will not always be that lucky.
The McKinsey breach is not really a story about SQL injection. It is a story about how quickly the asymmetry between attackers and defenders has shifted, about how recklessly we have built AI systems that mediate professional judgement at scale, and about how unprepared most enterprise governance frameworks are for a world in which the most sensitive thing inside the firewall is no longer the data, but the instructions that shape how that data becomes advice.
The firms that will earn the right to be trusted with AI in the next decade are not the ones moving fastest. They are the ones who recognise, before the breach disclosure email arrives, that an AI platform is not a productivity tool. It is a piece of decision-making infrastructure, and infrastructure has to be governed accordingly.
McKinsey will recover. The next firm may not.
Folu writes on AI governance, strategy, and architecture. She is the founder of AIExpertsPro, advising boards and executive teams on AI risk, security, and assurance.
Tech and Humanity: Why Africa Must Write Its Own AI Rules
Published April 24, 2026
By Folu Adebayo
There is a meeting happening right now that Africa is not in.
In Brussels, Washington, London, and Beijing, the rules that will govern artificial intelligence for the next generation are being written. Frameworks are being debated. Standards are being set. Regulatory architectures are being designed that will determine how AI is built, deployed, and held accountable across the global economy.
And yet the continent that will be most affected by those decisions (home to the world’s youngest population, with a median age under 20, and some of its fastest-growing economies) is largely absent from the room.
This is not just an oversight.
It is a strategic risk.
The illusion of neutrality
There is a persistent myth in technology: that AI is neutral, that algorithms are objective, that data does not discriminate.
The evidence suggests otherwise.
A landmark study by researchers at MIT Media Lab found that leading facial recognition systems had error rates of up to 34.7% for darker-skinned women, compared to less than 1% for lighter-skinned men. Similarly, audits by the National Institute of Standards and Technology have shown significant demographic disparities in biometric systems.
These are not edge cases. They are signals.
Every AI system reflects the values, assumptions, and blind spots of those who build it and the data it is trained on. When that data is overwhelmingly Western, the systems built on top of it will perform best in Western contexts.
This is already visible in financial services. Credit scoring models trained on formal banking histories misinterpret the creditworthiness of entrepreneurs operating in informal or cash-based economies. Medical AI systems trained on European and North American datasets are being deployed in health systems where disease patterns and treatment pathways differ significantly.
Africa is not simply adopting AI.
It is increasingly being asked to adapt to AI that was never designed for it.
The governance gap is a sovereignty gap
When the European Union introduced the EU AI Act, it did more than regulate technology. It set a global standard.
Any company that wants to operate in the European market must now align with its requirements. That is regulatory power.
Africa has no equivalent.
And in that absence, a pattern is emerging: African institutions default to external standards (European, American, or Chinese), importing not just technology but the governance models that come with it.
This is where the real risk lies.
The AI governance gap is not just a regulatory lag.
It is a sovereignty gap.
Because when rules are written elsewhere, outcomes are shaped elsewhere.
What is at stake
Nowhere is this more consequential than in financial services.
Across Nigeria, Kenya, Ghana, and South Africa, fintech is expanding access to credit, insurance, and payments at unprecedented speed. According to the World Bank, mobile money alone has lifted millions out of financial exclusion across Sub-Saharan Africa.
But governance has not kept pace.
When an AI model determines whether a small business owner in Lagos receives a loan, who is accountable for that decision?
When a customer in Nairobi is flagged as high risk, can they challenge it?
When algorithmic systems produce biased outcomes, who is responsible for identifying and correcting them?
In many jurisdictions, there is no clear answer.
That is not just a policy gap.
It is a trust gap.
And trust is the foundation on which financial systems and digital economies are built.
The opportunity within the gap
For all its risks, Africa’s position is also a rare strategic advantage.
Europe is retrofitting governance onto decades of legacy systems. The United States remains constrained by political fragmentation. China’s approach reflects a governance model that many African democracies will not seek to replicate.
Africa has the opportunity to build differently: to embed governance into AI adoption from the outset.
That means designing frameworks that reflect local realities:
- informal and hybrid economies
- mobile-first financial infrastructure
- linguistic and cultural diversity
- distinct social and regulatory priorities
There are early signals of what this could look like. Rwanda has positioned itself as a testbed for responsible AI policy. Kenya has taken meaningful steps in data protection. Nigeria, with its scale, talent base, and economic influence, has the potential to lead a continent-wide approach.
But leadership requires intent.
And the window to lead is narrowing.
What African boards must do now
For board directors, Chief Risk Officers, and technology leaders, AI governance is not a future issue.
It is a present responsibility.
Start with visibility:
Which AI systems are currently influencing decisions in your organisation?
Then ownership:
Who is accountable for them?
Then integrity:
What data were they trained on, and does it reflect the customers you actually serve?
And finally, accountability:
What happens when the system is wrong?
These are not regulatory questions.
They are governance fundamentals.
And organisations that cannot answer them today will struggle to defend them tomorrow to regulators, to customers, and increasingly, to the public.
The decision point
The rules that will govern artificial intelligence across Africa are still being written.
But the direction of travel is clear.
If Africa does not define its own standards, it will inherit them.
If it does not build governance into its systems, it will import it along with the assumptions embedded within it.
And in a world where AI is shaping access to capital, healthcare, security, and opportunity, that is not just a technical decision.
It is a question of economic sovereignty.
Africa can either become a rule-maker in the AI economy or remain a rule-taker.
Tech and Humanity: When the System Has No Answer, Build One
Published April 17, 2026
By Folu Adebayo
In the United Kingdom, a family waiting for an autism assessment through the National Health Service will wait, on average, between two and five years.
Two to five years of watching a child struggle in a classroom that does not understand them. Two to five years of fighting for support that requires a diagnosis to unlock. Two to five years of being told by a system designed to help: we see you, but we cannot reach you yet.
I know this world intimately. My son Tade is autistic. I founded the Tade Autism Centre and the Autism Treatment Support Initiatives, a registered UK charity, because I lived the distance between what families need and what institutions provide. That distance is not measured in miles. It is measured in years of uncertainty, in children falling behind, in parents carrying a weight the system was supposed to share.
But my story does not end there. Because in parallel to that personal journey, my professional life took me somewhere I did not entirely expect.
Today, I serve as AI Risk and Governance advisor for several organisations, including the very institution whose waiting lists I have navigated as a parent. I spend my days thinking about how AI is developed and deployed in consequential environments. About accountability. About explainability. About what happens when automated systems make decisions affecting vulnerable people without adequate human oversight.
One day, those two worlds (the mother and the technologist) asked each other a question.
If artificial intelligence can be deployed responsibly, with clinical rigour, with absolute transparency about its limitations, and with genuine safeguards, could it help families while they wait? Not to replace the clinician. Never that. But to help a parent understand what they are observing in their child. To help them articulate it clearly. To point them toward the right pathways and the right organisations while the formal system catches up.
That question became Neurohelp.ai.
It is a free AI-powered autism assessment and support navigation tool. It applies the DSM-5 and ICD-11 clinical frameworks (the international gold standards for autism assessment) to generate personalised pre-clinical reports. It includes NHS signposting, education and legal rights guidance, and a GP referral letter, available in ten languages, across five age stages from eighteen months to adulthood.
Every design decision was made through a governance lens. It is transparent about what it is and is not. It does not diagnose. It does not replace qualified clinicians. It helps families find their voice in a system that rewards those who already know how to navigate it and leaves everyone else behind.
I want to be direct about why this matters beyond the United Kingdom.
Across Africa, the gap between neurodevelopmental need and clinical capacity is even wider than in the UK. In Nigeria, as in many African countries, autism awareness is growing but diagnostic services remain scarce, culturally complex, and geographically concentrated in major cities. A mother in Kano or Enugu navigating concerns about her child’s development faces not just a waiting list but an entire system that may not yet have the language or the infrastructure to meet her.
Technology cannot substitute for systemic investment in health and education. That requires political will, policy commitment, and sustained funding. But technology, built responsibly and made freely available, can reduce the isolation of waiting. It can put knowledge in the hands of families who currently have none. It can help a parent walk into a doctor’s appointment with a clear, clinically framed picture of what they have been observing rather than struggling to find words for something they have felt but never been able to name.
Neurohelp.ai is built by someone who has been that parent. And who also knows, professionally, what responsible AI looks like.
It launches soon at neurohelp.ai. It is free for every family. In every language. At every stage of life.
Because no parent in London, in Lagos, or anywhere in between should have to wait years to feel understood.
Folu is a tech leader, AI architect, and founder of Neurohelp.ai and the Tade Autism Centre.