Tech and Humanity: The AI You Can’t See is the Risk You Can’t Manage

By Folu Adebayo

A CEO said something to me recently that I haven’t been able to shake.

“We’re not really using AI yet; we’re still exploring.”

It sounded reasonable. Measured. Even responsible.

But when we looked a little closer, AI was already there.

In reports being drafted.

In data being analysed.

In customer interactions being shaped.

In decisions being influenced.

Quietly. Informally. Unchecked.

This is the reality most organisations are now operating in.

AI isn’t something you switch on.
It’s something that creeps in.

The Gap Leadership Isn’t Seeing
Across boardrooms, the conversation is still framed as:
“Should we adopt AI?”

But inside the business, the reality is very different:
“We already have.”

Just not in a way that leadership can fully see.
• Teams are experimenting.
• Individuals are optimising their work.
• Departments are solving problems quickly.
And in doing so, they are introducing AI into core workflows — often without formal approval, oversight, or governance.
Not because they are reckless.
Because they are trying to be effective.

The Illusion of Control
Most organisations believe they have some level of control over AI.
There may be policies.
There may be guidance.
There may even be restrictions on certain tools.
But control is not defined by what is written.
It is defined by what is visible.
And in many cases, AI usage today is only partially visible, if at all.
This is where risk begins.
The Risk Isn’t Where Most Leaders Are Looking
When AI risk is discussed, attention often goes to the models:
Bias. Accuracy. Explainability.
Important issues, certainly.
But in my experience, the more immediate risks are far more operational — and far less visible.
Data being entered into external tools without control.
Decisions being influenced without traceability.
Processes being automated without oversight.
Accountability becoming blurred.
These are not theoretical risks.
They are already happening.
Regulation Is Moving — But Not Fast Enough
Globally, regulators are beginning to respond.
The EU AI Act is introducing structured approaches to classifying AI risk, particularly in high-impact sectors.
In the UK, regulatory thinking continues to evolve, with a focus on sector-led oversight.
Across Africa, including Nigeria, adoption is accelerating rapidly, often ahead of formal regulatory frameworks.
This creates a tension.
Organisations are scaling AI faster than governance is being defined.
And governance is being defined faster than organisations are able to implement it.
Governance Is Not the Barrier, It’s the Enabler
There is still a perception in some leadership circles that governance slows things down.
In reality, the absence of governance slows everything down eventually.
Without it:
• Risk accumulates silently
• Confidence in decisions erodes
• Scaling becomes fragile
With it:
• Leaders gain clarity
• Risk becomes manageable
• AI can be deployed with confidence
Governance is not about restriction.
It is about control.
The Questions That Matter Now
The most important shift leaders can make is not technical.
It is perspective.
From:
“Are we using AI?”
To:
“Where is AI already being used, and what does that mean for us?”
That means asking:
• Do we have full visibility of AI usage across the organisation?
• Who is accountable for AI risk at executive level?
• Can we clearly classify and prioritise AI-related risks?
• Are we able to explain how AI is influencing decisions?
If the answers are unclear, the issue is not capability.
It is awareness.
A Defining Leadership Moment
We are at a point where AI is no longer an innovation discussion.
It is an operational reality.
And like any operational reality, it requires structure, ownership, and oversight.
The organisations that will lead in this space will not necessarily be those who adopt AI the fastest.
They will be those who understand it the clearest.
Who can see it.
Who can manage it.
Who can take responsibility for it.
Final Thought
The real risk is not the AI you are planning for.

It is the AI that is already inside your business.

Working. Influencing. Deciding.
The question is not whether it exists.
It is whether you can see it.
And whether you are in control of it.

Folu Adebayo is an AI Governance and Enterprise Transformation Advisor, working at the intersection of technology, risk, and regulation. With a background in enterprise architecture and large-scale transformation across insurance, financial services, and the public sector, she helps organisations gain visibility and control over how AI is used across their business. Her work focuses on bridging the gap between AI adoption and governance, enabling leadership teams to scale AI safely, responsibly, and with confidence.


When Consultants Get Consulted: What McKinsey’s Two-Hour AI Breach Says About Real Cost of Moving Fast

By Folu Adebayo

The firm that teaches the Fortune 500 how to deploy AI safely just learned, in 120 minutes, that it had not been listening to its own advice.

On the evening of February 28, 2026, an autonomous AI agent built by a little-known security firm called CodeWall was pointed at the open internet and given a single instruction: pick a target and probe it. It chose McKinsey & Company. Two hours later, the agent had read-and-write access to Lilli, the consulting giant’s internal generative AI platform: the very system that 72% of McKinsey’s 43,000 employees use daily, that processes more than half a million prompts a month, and that the firm has been quietly using as a showcase for clients buying its AI advisory services.
The damage surface, when finally disclosed in March, was almost theatrical in its scale: 46.5 million chat messages, 728,000 sensitive file names, 57,000 user accounts and, most consequentially, 95 system prompts, the behavioural DNA that governs how Lilli answers every question put to it.
The exploit? SQL injection. A class of vulnerability first documented in 1998. A bug so old it predates the iPod.

This is not a story about a clever hack. It is a story about what happens when the most sophisticated buyers of technology in the world build AI systems with the same architectural assumptions they used to build CRM portals. And it is, more than anything, a warning about the next twenty-four months.

How It Happened

Strip away the mystique and the attack is almost embarrassingly readable. The CodeWall agent began with what every attacker now begins with: reconnaissance. Lilli’s API documentation was publicly accessible. Of the 200-plus endpoints it described, 22 required no authentication at all: wide-open doors into a production system. The agent walked through them.

From there, the agent identified an injection vector that standard scanners do not test for: while user values in SQL queries had been parameterised correctly (the textbook defence), JSON field names were being concatenated directly into queries without sanitisation. When the agent began malforming those field names, the database obligingly returned error messages laced with live production data. Classic error-based SQL injection, but found by a machine, in minutes, at a cost measured in dollars rather than person-weeks.
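To make the pattern concrete, here is a minimal Python sketch, with hypothetical table and field names rather than anything from Lilli itself, of how a query can parameterise its values correctly while still trusting attacker-controlled JSON keys:

```python
import sqlite3

def find_messages(db: sqlite3.Connection, filters: dict) -> list:
    # The *values* are parameterised correctly (the textbook defence).
    params = tuple(filters.values())
    # But the JSON *field names* are concatenated straight into the SQL.
    where = " AND ".join(f"{field} = ?" for field in filters)  # UNSAFE
    return db.execute(f"SELECT body FROM messages WHERE {where}", params).fetchall()

# A malformed key such as "body FROM sqlite_master; --" breaks the statement,
# and the error message that comes back echoes the query structure: the
# error-based injection described above. Scanners that fuzz only values
# never exercise this path.
```

Validating those field names against an allowlist before they ever reach the query, as in the gate sketch further down, kills the entire class.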

What it found in the database is where this stops being a 1998 story and becomes a 2026 story. Sitting in the same tables as the chat messages were Lilli’s system prompts and RAG configuration: the instructions that tell the model how to behave, what to cite, what to suppress, what to recommend. With write access, an attacker could silently rewrite those prompts. No code deployment. No release notes. No application log entry. The next morning, 30,000 consultants would log in and receive subtly altered advice, and neither they nor McKinsey would know.

The Architectural Failures Were Not Exotic, They Were Cultural

Engineers will, rightly, list the technical flaws: missing authentication, unsafe string concatenation, no Web Application Firewall on ingress, no schema validation at the gateway, no segregation between AI configuration and application data, no defence in depth.

But the deeper failure is architectural philosophy. Three assumptions, broadly held across the enterprise AI build-out, all wrong:

First, the assumption that AI platforms are just “another web app.” They are not. A traditional database compromise steals data. An AI configuration compromise corrupts judgement at scale, invisibly, for as long as nobody notices. The threat model is fundamentally different.

Second, the assumption that scanners and pen-test cycles will catch what matters. The CodeWall agent did not exploit a novel vulnerability; it exploited an unusual location for an old vulnerability, one that human red-teamers and OWASP ZAP alike routinely miss. Scanners are pattern-matchers. AI attackers are explorers.

Third, the assumption that the application code is where security lives. Application code will always have bugs. Defence in depth means policy enforcement at the infrastructure layer: the gateway, the WAF, the network, sitting independently of, and in front of, the inevitably buggy app. Lilli had none of that.

The Governance Implications Are Larger Than McKinsey

For boards, CROs and CTOs, three uncomfortable truths now sit on the table.

System prompts are the new crown jewels. They are corporate IP, behavioural policy, and regulatory artefact rolled into one. Yet most enterprises store them next to chat logs in a single relational database, behind a single auth layer. They should be encrypted at rest, separated from operational data, version-controlled with cryptographic signing, and changes should require multi-party approval: the same controls we apply to production database schemas.
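The mechanics are not exotic. As an illustration only (a production deployment would use asymmetric signatures, an HSM-backed key, and a proper approval workflow; every name here is hypothetical), refusing to serve an unverified prompt takes a few lines:

```python
import hashlib
import hmac
import json

def sign_prompt(prompt_text: str, version: int, key: bytes) -> dict:
    """Wrap a system prompt in a versioned record, signed before storage."""
    record = {"version": version, "prompt": prompt_text}
    payload = json.dumps(record, sort_keys=True).encode()
    signature = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {**record, "signature": signature}

def load_prompt(record: dict, key: bytes) -> str:
    """Refuse to serve any prompt whose signature does not verify."""
    body = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(record["signature"], expected):
        raise RuntimeError("System prompt failed integrity check; refusing to serve")
    return record["prompt"]
```

With a control of this shape in place, the silent rewrite described above stops being silent: a tampered prompt fails verification at load time instead of quietly steering the model.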

Audit trails designed for human attackers are obsolete. A human breach unfolds over weeks and leaves footprints. A machine-speed breach completes before your SIEM has aggregated the morning’s logs. Worse, a configuration breach leaves no footprint at all: the application is doing exactly what its (now-tampered) instructions tell it to. GRC teams must now monitor AI outputs for behavioural drift, not just AI inputs and infrastructure logs.

Asymmetry has flipped. For thirty years the attacker had to find one hole and the defender had to plug all of them: a brutal asymmetry, but a known one. Autonomous offensive agents collapse the attacker’s cost curve. CodeWall’s chief executive said the quiet part out loud in his post-disclosure interview: AI agents autonomously selecting and attacking targets will be the new normal. Defenders are not yet running AI agents that continuously red-team their own production systems. They will need to.

What Actually Has to Change

Let me be specific, because vague calls for “AI governance” are how we got here in the first place.

1. Treat every AI platform as a privileged application from day one. That means least-privilege data access, scoped retrieval, and segregation of duties between the model, the prompt store, and the knowledge base. If your AI agent has the same database role as your chat history table, you have already lost.

2. Implement defence in depth across the AI execution path. Three independent gates: an HTTP gate (authentication, rate limiting, WAF, schema validation) before any request touches the application; an LLM gate (prompt-injection detection, content policy enforcement, output filtering) between the application and the model; and an agent gate (tool-call authorisation, scope limits, behavioural monitoring) for any system that lets the AI take actions. None of these can live inside the application code itself.
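A minimal sketch of the HTTP-gate idea, reusing the same hypothetical field names as the injection example: schema validation that lives in front of the application, so a malformed field name dies at the perimeter even while the application code behind it remains buggy:

```python
# Hypothetical declared schema for one endpoint's filter object.
ALLOWED_FILTER_FIELDS = {"id", "author", "created_at"}

def gate_filters(filters: dict) -> dict:
    """Gateway-layer schema check, enforced before any request reaches the
    application: only declared field names pass; everything else is rejected."""
    unknown = set(filters) - ALLOWED_FILTER_FIELDS
    if unknown:
        raise PermissionError(f"Rejected undeclared filter fields: {sorted(unknown)}")
    return filters
```

The point is architectural, not syntactic: this check belongs in the gateway or WAF configuration, where an application bug cannot disable it.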

3. Mandate AI-specific threat modelling before deployment. STRIDE was designed for a world of forms and CRUD operations. It does not catch prompt injection, indirect data exfiltration via RAG, system prompt manipulation, or context poisoning. Your security review template needs an AI-native section. If your CISO cannot describe how your organisation tests for these, that is a board-level finding.

4. Monitor outputs for behavioural drift. Build expected-output baselines. Sample responses continuously. When the AI starts citing a new domain, recommending a new vendor, or suppressing a category of advice, somebody needs to know within hours, not when a journalist calls.
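One deliberately simplified form of that monitoring, with hypothetical baseline values: sample the assistant’s responses and diff the domains they cite against an expected baseline, alerting on anything new:

```python
import re

# Hypothetical expected-output baseline of domains the assistant may cite.
CITATION_BASELINE = {"mckinsey.com", "oecd.org", "who.int"}
DOMAIN_RE = re.compile(r"https?://(?:www\.)?([^/\s]+)")

def citation_drift(sampled_responses: list[str]) -> set[str]:
    """Return any cited domains absent from the expected-output baseline."""
    seen = {m.group(1).lower()
            for text in sampled_responses
            for m in DOMAIN_RE.finditer(text)}
    return seen - CITATION_BASELINE

# A non-empty result is an alert, not a log line: the model's behaviour
# has drifted, and a human should be looking within hours.
```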

5. Make AI configuration changes a board-visible control. System prompts are policy. They should be versioned, signed, dual-authorised, and reportable. The audit committee already reviews changes to the financial close process; it should review changes to the instructions governing the AI tools that influence client-facing work.

6. Run continuous, autonomous red-teaming against your own AI estate. If the threat is now an AI agent that probes endlessly at machine speed, the defence has to be an AI agent that audits endlessly at machine speed. Annual pen tests are not a control; they are a compliance ritual.

The Real Lesson Is About Trust

The most chilling sentence in the entire CodeWall disclosure is the one nobody is quoting. The researchers noted that, having gained write access, they could have rewritten Lilli’s prompts to subtly steer the advice given to McKinsey’s consultants and, through them, to clients running critical infrastructure, treasuries, and public services across the world. They chose not to.

We will not always be that lucky.

The McKinsey breach is not really a story about SQL injection. It is a story about how quickly the asymmetry between attackers and defenders has shifted, about how recklessly we have built AI systems that mediate professional judgement at scale, and about how unprepared most enterprise governance frameworks are for a world in which the most sensitive thing inside the firewall is no longer the data, but the instructions that shape how that data becomes advice.

The firms that will earn the right to be trusted with AI in the next decade are not the ones moving fastest. They are the ones who recognise, before the breach disclosure email arrives, that an AI platform is not a productivity tool. It is a piece of decision-making infrastructure, and infrastructure has to be governed accordingly.
McKinsey will recover. The next firm may not.

Folu writes on AI governance, strategy, and architecture. She is the founder of AIExpertsPro, advising boards and executive teams on AI risk, security, and assurance.


Tech and Humanity: Why Africa Must Write Its Own AI Rules

By Folu Adebayo

There is a meeting happening right now that Africa is not in.

In Brussels, Washington, London, and Beijing, the rules that will govern artificial intelligence for the next generation are being written. Frameworks are being debated. Standards are being set. Regulatory architectures are being designed that will determine how AI is built, deployed, and held accountable across the global economy.

And yet, the continent that will be most affected by those decisions (home to the world’s youngest population, with a median age under 20, and some of its fastest-growing economies) is largely absent from the room.

This is not just an oversight.

It is a strategic risk.

The illusion of neutrality

There is a persistent myth in technology: that AI is neutral, that algorithms are objective, that data does not discriminate.

The evidence suggests otherwise.

A landmark study by researchers at MIT Media Lab found that leading facial recognition systems had error rates of up to 34.7% for darker-skinned women, compared to less than 1% for lighter-skinned men. Similarly, audits by the National Institute of Standards and Technology have shown significant demographic disparities in biometric systems.

These are not edge cases. They are signals.

Every AI system reflects the values, assumptions, and blind spots of those who build it and the data it is trained on. When that data is overwhelmingly Western, the systems built on top of it will perform best in Western contexts.

This is already visible in financial services. Credit scoring models trained on formal banking histories misinterpret the creditworthiness of entrepreneurs operating in informal or cash-based economies. Medical AI systems trained on European and North American datasets are being deployed in health systems where disease patterns and treatment pathways differ significantly.

Africa is not simply adopting AI.

It is increasingly being asked to adapt to AI that was never designed for it.

The governance gap is a sovereignty gap

When the European Union introduced the EU AI Act, it did more than regulate technology. It set a global standard.

Any company that wants to operate in the European market must now align with its requirements. That is regulatory power.

Africa has no equivalent.

And in that absence, a pattern is emerging: African institutions default to external standards (European, American, or Chinese), importing not just technology, but the governance models that come with it.

This is where the real risk lies.

The AI governance gap is not just a regulatory lag.
It is a sovereignty gap.

Because when rules are written elsewhere, outcomes are shaped elsewhere.

What is at stake

Nowhere is this more consequential than in financial services.

Across Nigeria, Kenya, Ghana, and South Africa, fintech is expanding access to credit, insurance, and payments at unprecedented speed. According to the World Bank, mobile money alone has lifted millions out of financial exclusion across Sub-Saharan Africa.

But governance has not kept pace.

When an AI model determines whether a small business owner in Lagos receives a loan, who is accountable for that decision?
When a customer in Nairobi is flagged as high risk, can they challenge it?
When algorithmic systems produce biased outcomes, who is responsible for identifying and correcting them?

In many jurisdictions, there is no clear answer.

That is not just a policy gap.

It is a trust gap.

And trust is the foundation on which financial systems and digital economies are built.

The opportunity within the gap

For all its risks, Africa’s position is also a rare strategic advantage.

Europe is retrofitting governance onto decades of legacy systems. The United States remains constrained by political fragmentation. China’s approach reflects a governance model that many African democracies will not seek to replicate.

Africa has the opportunity to build differently: to embed governance into AI adoption from the outset.

That means designing frameworks that reflect local realities:

  • informal and hybrid economies
  • mobile-first financial infrastructure
  • linguistic and cultural diversity
  • distinct social and regulatory priorities

There are early signals of what this could look like. Rwanda has positioned itself as a testbed for responsible AI policy. Kenya has taken meaningful steps in data protection. Nigeria, with its scale, talent base, and economic influence, has the potential to lead a continent-wide approach.

But leadership requires intent.

And the window to lead is narrowing.

What African boards must do now

For board directors, Chief Risk Officers, and technology leaders, AI governance is not a future issue.

It is a present responsibility.

Start with visibility:
Which AI systems are currently influencing decisions in your organisation?

Then ownership:
Who is accountable for them?

Then integrity:
What data were they trained on, and does it reflect the customers you actually serve?

And finally, accountability:
What happens when the system is wrong?

These are not regulatory questions.

They are governance fundamentals.

And organisations that cannot answer them today will struggle to defend them tomorrow: to regulators, to customers, and, increasingly, to the public.

The decision point

The rules that will govern artificial intelligence across Africa are still being written.

But the direction of travel is clear.

If Africa does not define its own standards, it will inherit them.
If it does not build governance into its systems, it will import it, along with the assumptions embedded within it.

And in a world where AI is shaping access to capital, healthcare, security, and opportunity, that is not just a technical decision.

It is a question of economic sovereignty.

Africa can either become a rule-maker in the AI economy or remain a rule-taker.


Tech and Humanity: When the System Has No Answer, Build One

By Folu Adebayo

In the United Kingdom, a family waiting for an autism assessment through the National Health Service will wait, on average, between two and five years.

Two to five years of watching a child struggle in a classroom that does not understand them. Two to five years of fighting for support that requires a diagnosis to unlock. Two to five years of being told by a system designed to help: we see you, but we cannot reach you yet.

I know this world intimately. My son Tade is autistic. I founded the Tade Autism Centre and the Autism Treatment Support Initiatives, a registered UK charity, because I lived the distance between what families need and what institutions provide. That distance is not measured in miles. It is measured in years of uncertainty, in children falling behind, in parents carrying a weight the system was supposed to share.
But my story does not end there. Because in parallel to that personal journey, my professional life took me somewhere I did not entirely expect.

Today, I serve as an AI Risk and Governance advisor for several organisations, including the very institution whose waiting lists I have navigated as a parent. I spend my days thinking about how AI is developed and deployed in consequential environments. About accountability. About explainability. About what happens when automated systems make decisions affecting vulnerable people without adequate human oversight.
One day, those two worlds, the mother and the technologist, asked each other a question.

If artificial intelligence can be deployed responsibly, with clinical rigour, with absolute transparency about its limitations, and with genuine safeguards, could it help families while they wait? Not to replace the clinician. Never that. But to help a parent understand what they are observing in their child. To help them articulate it clearly. To point them toward the right pathways and the right organisations while the formal system catches up.

That question became Neurohelp.ai.

It is a free AI-powered autism assessment and support navigation tool. It applies DSM-5 and ICD-11 clinical frameworks, the international gold standards for autism assessment, to generate personalised pre-clinical reports. It includes NHS signposting, education and legal rights guidance, and a GP referral letter, available in ten languages, across five age stages from eighteen months to adulthood.

Every design decision was made through a governance lens. It is transparent about what it is and is not. It does not diagnose. It does not replace qualified clinicians. It helps families find their voice in a system that rewards those who already know how to navigate it and leaves everyone else behind.

I want to be direct about why this matters beyond the United Kingdom.

Across Africa, the gap between neurodevelopmental need and clinical capacity is even wider than in the UK. In Nigeria, as in many African countries, autism awareness is growing but diagnostic services remain scarce, culturally complex, and geographically concentrated in major cities. A mother in Kano or Enugu navigating concerns about her child’s development faces not just a waiting list but an entire system that may not yet have the language or the infrastructure to meet her.
Technology cannot substitute for systemic investment in health and education. That requires political will, policy commitment, and sustained funding. But technology, built responsibly and made freely available, can reduce the isolation of waiting. It can put knowledge in the hands of families who currently have none. It can help a parent walk into a doctor’s appointment with a clear, clinically framed picture of what they have been observing rather than struggling to find words for something they have felt but never been able to name.

Neurohelp.ai is built by someone who has been that parent. And who also knows, professionally, what responsible AI looks like.

It launches soon at neurohelp.ai. It is free for every family. In every language. At every stage of life.

Because no parent in London, in Lagos, or anywhere in between should have to wait years to feel understood.

Folu is a tech leader, AI architect, and founder of Neurohelp.ai and the Tade Autism Centre.
