Tech and Humanity

AI and Neurodiversity: The Future Must Work for Everyone

By Folu Adebayo

I’ve been thinking about this question for a while now.

What if the problem was never the individual…but the way the world was designed?

For years, the conversation around neurodiversity has quietly leaned in one direction: that those who think or communicate differently need to adjust. Fit in. Learn to operate within systems that were never really built for them.

You see it everywhere.

In schools that reward one way of learning.

In workplaces that value one way of thinking.

In everyday interactions that expect one way of communicating.

So we ask, almost without thinking: How can they fit in?

But maybe we should be asking something else entirely.

Why hasn’t the world learned to fit them?

For many families, this isn’t a theory. It’s just life.

There’s no clear roadmap. You figure things out as you go. Some days you feel like you’re making progress, other days it feels like you’re starting again.

You find yourself stepping into roles you never imagined. Most often, you are either explaining, researching or advocating.

And sometimes, just hoping that someone, anyone will take the time to really understand your child.

As a mother, this is not something I observe from a distance. It is my life…

My son, Akintade, is autistic.

There have been moments over the years where communication felt… difficult. Not because he didn’t have something to say, but because the world didn’t always offer him the right way to say it.

And there were times I would look at him and know with absolute certainty that there was so much inside him waiting to be expressed, if only the world knew how to listen.

And that’s something I think we often get wrong.

We see silence and assume there’s nothing there.

We see difference and assume there’s a limitation.

But that hasn’t been my experience as a techie mother of an autistic child. I have used several technologies to support my son’s communication.

What I’ve seen over time is that when the right support shows up, things begin to shift.
Not in dramatic, headline-making ways. But in quiet, meaningful ones.

Moments where expression becomes easier.
Moments where connection feels possible.
Moments where he engages with the world on his own terms.

Technology has played a part in that.
Not as a solution to everything. But as a bridge.

And those moments change how you see things.

You start to realise that the issue was never ability.

It was access.
It was design.
It was understanding.

And that’s where artificial intelligence starts to matter, not as a buzzword, but as something with real potential. Because unlike traditional systems, AI has the ability to adapt.

It can meet people where they are. It can support different ways of learning, different ways of communicating, different ways of processing the world.

And for neurodiverse individuals like my son, that’s powerful. It shifts the conversation.

Away from “fixing” the individual…
and towards supporting their potential.
But I also think we need to be honest about something.

Technology on its own is not enough.
If anything, we’ve already seen what happens when systems are built without real understanding. They exclude. They overlook. They miss the people who need them most.

AI will be no different if we’re not intentional.

If neurodiverse individuals are not part of the conversation — not just as users, but as voices that shape these systems — then we risk repeating the same patterns.

Just at a much bigger scale.

And that would be a missed opportunity.

Because this moment we’re in right now… it matters.

We’re not just building tools.

We’re shaping the kind of world people will live in.

When I think about the future of AI, I don’t just think about how advanced it will become.

I think about whether it will be more thoughtful.

More inclusive.

More aware of the fact that not everyone experiences the world in the same way.

Through my journey with Akintade, I’ve learned something that stays with me.

Every person has a voice.

It might not always sound the way we expect.

It might not always be easy to understand straight away.

But it’s there.

And when the right support is in place, when the right tools exist, when the right mindset is applied, that voice can be heard.

So maybe that’s the real question we should be asking as we continue to build and invest in artificial intelligence:

Who are we building it for?

Because a future driven by technology should also be a future guided by empathy.

Otherwise, we risk creating something powerful…that still leaves people behind.

And that, to me, would not be a failure of technology.

It would be a failure of humanity.

Tech and Humanity

When Consultants Get Consulted: What McKinsey’s Two-Hour AI Breach Says About Real Cost of Moving Fast

By Folu Adebayo

The firm that teaches the Fortune 500 how to deploy AI safely just learned, in 120 minutes, that it had not been listening to its own advice.

On the evening of February 28, 2026, an autonomous AI agent built by a little-known security firm called CodeWall was pointed at the open internet and given a single instruction: pick a target and probe it. It chose McKinsey & Company. Two hours later, the agent had read-and-write access to Lilli, the consulting giant’s internal generative AI platform, the very system that 72% of McKinsey’s 43,000 employees use daily, that processes more than half a million prompts a month, and that the firm has been quietly using as a showcase for clients buying its AI advisory services.

The damage surface, when finally disclosed in March, was almost theatrical in its scale: 46.5 million chat messages, 728,000 sensitive file names, 57,000 user accounts, and, most consequentially, 95 system prompts, the behavioural DNA that governs how Lilli answers every question put to it.

The exploit? SQL injection. A class of vulnerability first documented in 1998. A bug so old it predates the iPod.

This is not a story about a clever hack. It is a story about what happens when the most sophisticated buyers of technology in the world build AI systems with the same architectural assumptions they used to build CRM portals. And it is, more than anything, a warning about the next twenty-four months.

How It Happened

Strip away the mystique and the attack is almost embarrassingly readable. The CodeWall agent began with what every attacker now begins with: reconnaissance. Lilli’s API documentation was publicly accessible. Of the 200-plus endpoints it described, 22 required no authentication at all: wide-open doors into a production system. The agent walked through them.

From there, the agent identified an injection vector that standard scanners do not test for: while user values in SQL queries had been parameterised correctly (the textbook defence), JSON field names were being concatenated directly into queries without sanitisation. When the agent began malforming those field names, the database obligingly returned error messages laced with live production data. Classic error-based SQL injection, but found by a machine, in minutes, at a cost measured in dollars rather than person-weeks.
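The disclosure does not include code, so the snippet below is only an illustrative sketch of the class of flaw. The query values are parameterised correctly, yet the JSON field names are concatenated into the SQL text, which means an attacker-supplied "field name" becomes attacker-controlled SQL; an allow-list on field names closes the hole. All table and field names here are assumptions, not details from the breach.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (user_id TEXT, body TEXT)")
conn.execute("INSERT INTO messages VALUES ('u1', 'hello')")

def query_vulnerable(filters: dict):
    # Values are parameterised (the textbook defence), but the JSON
    # field *names* are concatenated straight into the SQL text.
    where = " AND ".join(f"{field} = ?" for field in filters)
    sql = f"SELECT body FROM messages WHERE {where}"
    return conn.execute(sql, list(filters.values())).fetchall()

ALLOWED_FIELDS = {"user_id", "body"}

def query_safe(filters: dict):
    # Field names are checked against an allow-list before they ever
    # reach the SQL string; values stay parameterised as before.
    for field in filters:
        if field not in ALLOWED_FIELDS:
            raise ValueError(f"unknown field: {field}")
    where = " AND ".join(f"{field} = ?" for field in filters)
    sql = f"SELECT body FROM messages WHERE {where}"
    return conn.execute(sql, list(filters.values())).fetchall()

# A "field name" that is really a subquery lets the attacker probe
# the database through the WHERE clause -- no novel trick required.
evil = {"(SELECT count(*) FROM messages)": 1}
print(query_vulnerable(evil))   # leaks rows: [('hello',)]
# query_safe(evil) raises ValueError -- the injected name never runs.
```

The fix is deliberately boring: treat anything that ends up inside the SQL text, not just the values, as untrusted input.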

What it found in the database is where this stops being a 1998 story and becomes a 2026 story. Sitting in the same tables as the chat messages were Lilli’s system prompts and RAG configuration: the instructions that tell the model how to behave, what to cite, what to suppress, what to recommend. With write access, an attacker could silently rewrite those prompts. No code deployment. No release notes. No application log entry. The next morning, 30,000 consultants would log in and receive subtly altered advice, and neither they nor McKinsey would know.

The Architectural Failures Were Not Exotic; They Were Cultural

Engineers will, rightly, list the technical flaws: missing authentication, unsafe string concatenation, no Web Application Firewall on ingress, no schema validation at the gateway, no segregation between AI configuration and application data, no defence in depth.

But the deeper failure is architectural philosophy. Three assumptions, broadly held across the enterprise AI build-out, all wrong:

First, the assumption that AI platforms are just “another web app.” They are not. A traditional database compromise steals data. An AI configuration compromise corrupts judgement at scale, invisibly, for as long as nobody notices. The threat model is fundamentally different.

Second, the assumption that scanners and pen-test cycles will catch what matters. The CodeWall agent did not exploit a novel vulnerability; it exploited an unusual location for an old vulnerability, one that human red-teamers and OWASP ZAP both routinely miss. Scanners are pattern-matchers. AI attackers are explorers.

Third, the assumption that the application code is where security lives. Application code will always have bugs. Defence in depth means policy enforcement at the infrastructure layer (the gateway, the WAF, the network) that sits independently of, and in front of, the inevitably buggy app. Lilli had none of that.

The Governance Implications Are Larger Than McKinsey

For boards, CROs and CTOs, three uncomfortable truths now sit on the table.

System prompts are the new crown jewels. They are corporate IP, behavioural policy, and regulatory artefact rolled into one. Yet most enterprises store them next to chat logs in a single relational database, behind a single auth layer. They should be encrypted at rest, separated from operational data, and version-controlled with cryptographic signing, and changes should require multi-party approval: the same controls we apply to production database schemas.
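One way to sketch the signing control, assuming a signing key held outside the application database (in a KMS or HSM, not next to the prompts): each prompt version is signed at release time, and the runtime refuses to load any prompt whose signature no longer matches, so a silent database edit fails loudly instead of steering advice. The key and prompt text below are placeholders.

```python
import hashlib
import hmac

# Assumption: in production this key lives in a KMS/HSM, never in
# the same database as the prompt store it protects.
SIGNING_KEY = b"example-key-held-outside-the-database"

def sign_prompt(prompt: str, version: int) -> str:
    # Signed once at release time, after the multi-party approval step.
    msg = f"{version}:{prompt}".encode()
    return hmac.new(SIGNING_KEY, msg, hashlib.sha256).hexdigest()

def load_prompt(prompt: str, version: int, signature: str) -> str:
    # Verified at runtime, on every load from the prompt store.
    if not hmac.compare_digest(sign_prompt(prompt, version), signature):
        raise RuntimeError("system prompt failed signature verification")
    return prompt

prompt_v3 = "Cite only sources from the internal knowledge base."
sig = sign_prompt(prompt_v3, 3)

load_prompt(prompt_v3, 3, sig)          # untampered: loads fine
# load_prompt(prompt_v3 + "!", 3, sig)  # tampered: raises RuntimeError
```

Versioning the signed payload also gives the audit committee something concrete to review: a diff per prompt version, with a signature per approval.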

Audit trails designed for human attackers are obsolete. A human breach unfolds over weeks and leaves footprints. A machine-speed breach completes before your SIEM has aggregated the morning’s logs. Worse, a configuration breach leaves no footprint at all: the application is doing exactly what its (now-tampered) instructions tell it to. GRC teams must now monitor AI outputs for behavioural drift, not just AI inputs and infrastructure logs.

Asymmetry has flipped. For thirty years the attacker had to find one hole and the defender had to plug all of them: a brutal asymmetry, but a known one. Autonomous offensive agents collapse the attacker’s cost curve. CodeWall’s chief executive said the quiet part out loud in his post-disclosure interview: AI agents autonomously selecting and attacking targets will be the new normal. Defenders are not yet running AI agents that continuously red-team their own production systems. They will need to.

What Actually Has to Change

Let me be specific, because vague calls for “AI governance” are how we got here in the first place.

1. Treat every AI platform as a privileged application from day one. That means least-privilege data access, scoped retrieval, and segregation of duties between the model, the prompt store, and the knowledge base. If your AI agent has the same database role as your chat history table, you have already lost.

2. Implement defence in depth across the AI execution path. Three independent gates: an HTTP gate (authentication, rate limiting, WAF, schema validation) before any request touches the application; an LLM gate (prompt-injection detection, content policy enforcement, output filtering) between the application and the model; and an agent gate (tool-call authorisation, scope limits, behavioural monitoring) for any system that lets the AI take actions. None of these can live inside the application code itself.
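As a minimal sketch of the first of those three gates, the function below enforces authentication and schema validation before a request ever reaches application code. The field names, token store, and types are illustrative assumptions, not details from the disclosure; the point is that this check lives outside the app, so a bug in the app cannot disable it.

```python
# Illustrative "HTTP gate": authentication, then schema validation,
# enforced in front of and independently of the application itself.

ALLOWED_FIELDS = {"prompt": str, "session_id": str}   # assumed schema
VALID_TOKENS = {"token-abc"}                          # stand-in auth store

def http_gate(request: dict) -> dict:
    # Gate 1a: no unauthenticated endpoints, full stop.
    if request.get("token") not in VALID_TOKENS:
        raise PermissionError("unauthenticated request rejected at gateway")
    # Gate 1b: only known fields, each of the expected type, may pass.
    body = request.get("body", {})
    for field, value in body.items():
        expected = ALLOWED_FIELDS.get(field)
        if expected is None:
            raise ValueError(f"unknown field rejected at gateway: {field}")
        if not isinstance(value, expected):
            raise TypeError(f"bad type for field: {field}")
    return body  # only now does the request reach the application

ok = http_gate({"token": "token-abc",
                "body": {"prompt": "hello", "session_id": "s1"}})
```

In a real deployment this logic belongs in the gateway or WAF layer; the same shape applies to the LLM gate and the agent gate, each with its own allow-lists.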

3. Mandate AI-specific threat modelling before deployment. STRIDE was designed for a world of forms and CRUD operations. It does not catch prompt injection, indirect data exfiltration via RAG, system prompt manipulation, or context poisoning. Your security review template needs an AI-native section. If your CISO cannot describe how your organisation tests for these, that is a board-level finding.

4. Monitor outputs for behavioural drift. Build expected-output baselines. Sample responses continuously. When the AI starts citing a new domain, recommending a new vendor, or suppressing a category of advice, somebody needs to know in hours, not when a journalist calls.
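A first approximation of that kind of drift check, under the assumption that you maintain a baseline of domains the assistant is expected to cite: sample responses, extract the domains they cite, and alert on anything outside the baseline. The domain names below are made up, and a production version would track recommendations and suppressed topics, not just URLs.

```python
import re

# Assumed baseline of domains the assistant is expected to cite.
BASELINE_DOMAINS = {"kb.internal.example.com", "example.org"}

def cited_domains(response: str) -> set:
    # Crude extraction of domains from URLs in a sampled response.
    return set(re.findall(r"https?://([\w.-]+)", response))

def drift_alerts(sampled_responses: list) -> list:
    # Flag every sampled response citing a domain outside the baseline.
    alerts = []
    for resp in sampled_responses:
        unexpected = cited_domains(resp) - BASELINE_DOMAINS
        if unexpected:
            alerts.append(sorted(unexpected))
    return alerts

samples = [
    "See https://kb.internal.example.com/policy for details.",
    "We recommend https://new-vendor.example.net/pricing instead.",
]
print(drift_alerts(samples))   # [['new-vendor.example.net']]
```

Even a check this crude would have surfaced a tampered prompt the first time it steered advice toward an unfamiliar source.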

5. Make AI configuration changes a board-visible control. System prompts are policy. They should be versioned, signed, dual-authorised, and reportable. The audit committee already reviews changes to the financial close process; it should review changes to the instructions governing the AI tools that influence client-facing work.

6. Run continuous, autonomous red-teaming against your own AI estate. If the threat is now an AI agent that probes endlessly at machine speed, the defence has to be an AI agent that audits endlessly at machine speed. Annual pen tests are not a control; they are a compliance ritual.

The Real Lesson Is About Trust

The most chilling sentence in the entire CodeWall disclosure is the one nobody is quoting. The researchers noted that, having gained write access, they could have rewritten Lilli’s prompts to subtly steer the advice given to McKinsey’s consultants, and, through them, to clients running critical infrastructure, treasuries, and public services across the world. They chose not to.

We will not always be that lucky.

The McKinsey breach is not really a story about SQL injection. It is a story about how quickly the asymmetry between attackers and defenders has shifted, about how recklessly we have built AI systems that mediate professional judgement at scale, and about how unprepared most enterprise governance frameworks are for a world in which the most sensitive thing inside the firewall is no longer the data, but the instructions that shape how that data becomes advice.

The firms that will earn the right to be trusted with AI in the next decade are not the ones moving fastest. They are the ones who recognise, before the breach disclosure email arrives, that an AI platform is not a productivity tool. It is a piece of decision-making infrastructure, and infrastructure has to be governed accordingly.

McKinsey will recover. The next firm may not.

Folu writes on AI governance, strategy, and architecture. Folu is the founder of AIExpertsPro, advising boards and executive teams on AI risk, security, and assurance.

Tech and Humanity

Tech and Humanity: Why Africa Must Write Its Own AI Rules

By Folu Adebayo

There is a meeting happening right now that Africa is not in.

In Brussels, Washington, London, and Beijing, the rules that will govern artificial intelligence for the next generation are being written. Frameworks are being debated. Standards are being set. Regulatory architectures are being designed that will determine how AI is built, deployed, and held accountable across the global economy.

And yet, the continent that will be most affected by those decisions, home to the world’s youngest population, with a median age under 20, and some of its fastest-growing economies, is largely absent from the room.

This is not just an oversight.

It is a strategic risk.

The illusion of neutrality

There is a persistent myth in technology: that AI is neutral, that algorithms are objective, that data does not discriminate.

The evidence suggests otherwise.

A landmark study by researchers at the MIT Media Lab found that leading facial recognition systems had error rates of up to 34.7% for darker-skinned women, compared to less than 1% for lighter-skinned men. Similarly, audits by the National Institute of Standards and Technology have shown significant demographic disparities in biometric systems.

These are not edge cases. They are signals.

Every AI system reflects the values, assumptions, and blind spots of those who build it and the data it is trained on. When that data is overwhelmingly Western, the systems built on top of it will perform best in Western contexts.

This is already visible in financial services. Credit scoring models trained on formal banking histories misinterpret the creditworthiness of entrepreneurs operating in informal or cash-based economies. Medical AI systems trained on European and North American datasets are being deployed in health systems where disease patterns and treatment pathways differ significantly.

Africa is not simply adopting AI.

It is increasingly being asked to adapt to AI that was never designed for it.

The governance gap is a sovereignty gap

When the European Union introduced the EU AI Act, it did more than regulate technology. It set a global standard.

Any company that wants to operate in the European market must now align with its requirements. That is regulatory power.

Africa has no equivalent.

And in that absence, a pattern is emerging: African institutions default to external standards, whether European, American, or Chinese, importing not just technology but the governance models that come with it.

This is where the real risk lies.

The AI governance gap is not just a regulatory lag.
It is a sovereignty gap.

Because when rules are written elsewhere, outcomes are shaped elsewhere.

What is at stake

Nowhere is this more consequential than in financial services.

Across Nigeria, Kenya, Ghana, and South Africa, fintech is expanding access to credit, insurance, and payments at unprecedented speed. According to the World Bank, mobile money alone has lifted millions out of financial exclusion across Sub-Saharan Africa.

But governance has not kept pace.

When an AI model determines whether a small business owner in Lagos receives a loan, who is accountable for that decision?
When a customer in Nairobi is flagged as high risk, can they challenge it?
When algorithmic systems produce biased outcomes, who is responsible for identifying and correcting them?

In many jurisdictions, there is no clear answer.

That is not just a policy gap.

It is a trust gap.

And trust is the foundation on which financial systems and digital economies are built.

The opportunity within the gap

For all its risks, Africa’s position is also a rare strategic advantage.

Europe is retrofitting governance onto decades of legacy systems. The United States remains constrained by political fragmentation. China’s approach reflects a governance model that many African democracies will not seek to replicate.

Africa has the opportunity to build differently to embed governance into AI adoption from the outset.

That means designing frameworks that reflect local realities:

  • informal and hybrid economies
  • mobile-first financial infrastructure
  • linguistic and cultural diversity
  • distinct social and regulatory priorities

There are early signals of what this could look like. Rwanda has positioned itself as a testbed for responsible AI policy. Kenya has taken meaningful steps in data protection. Nigeria, with its scale, talent base, and economic influence, has the potential to lead a continent-wide approach.

But leadership requires intent.

And the window to lead is narrowing.

What African boards must do now

For board directors, Chief Risk Officers, and technology leaders, AI governance is not a future issue.

It is a present responsibility.

Start with visibility:
Which AI systems are currently influencing decisions in your organisation?

Then ownership:
Who is accountable for them?

Then integrity:
What data were they trained on, and does it reflect the customers you actually serve?

And finally, accountability:
What happens when the system is wrong?

These are not regulatory questions.

They are governance fundamentals.

And organisations that cannot answer them today will struggle to defend them tomorrow to regulators, to customers, and increasingly, to the public.

The decision point

The rules that will govern artificial intelligence across Africa are still being written.

But the direction of travel is clear.

If Africa does not define its own standards, it will inherit them.
If it does not build governance into its systems, it will import it along with the assumptions embedded within it.

And in a world where AI is shaping access to capital, healthcare, security, and opportunity, that is not just a technical decision.

It is a question of economic sovereignty.

Africa can either become a rule-maker in the AI economy or remain a rule-taker.

Tech and Humanity

Tech and Humanity: When the System Has No Answer, Build One

By Folu Adebayo

In the United Kingdom, a family waiting for an autism assessment through the National Health Service will wait, on average, between two and five years.

Two to five years of watching a child struggle in a classroom that does not understand them. Two to five years of fighting for support that requires a diagnosis to unlock. Two to five years of being told by a system designed to help: we see you, but we cannot reach you yet.

I know this world intimately. My son Tade is autistic. I founded the Tade Autism Centre and the Autism Treatment Support Initiatives, a registered UK charity, because I lived the distance between what families need and what institutions provide. That distance is not measured in miles. It is measured in years of uncertainty, in children falling behind, in parents carrying a weight the system was supposed to share.

But my story does not end there. Because in parallel to that personal journey, my professional life took me somewhere I did not entirely expect.

Today, I serve as an AI Risk and Governance advisor for several organisations, including the very institution whose waiting lists I have navigated as a parent. I spend my days thinking about how AI is developed and deployed in consequential environments. About accountability. About explainability. About what happens when automated systems make decisions affecting vulnerable people without adequate human oversight.

One day, those two worlds, the mother and the technologist, asked each other a question.

If artificial intelligence can be deployed responsibly, with clinical rigour, with absolute transparency about its limitations, and with genuine safeguards, could it help families while they wait? Not to replace the clinician. Never that. But to help a parent understand what they are observing in their child. To help them articulate it clearly. To point them toward the right pathways and the right organisations while the formal system catches up.

That question became Neurohelp.ai.

It is a free AI-powered autism assessment and support navigation tool. It applies DSM-5 and ICD-11 clinical frameworks, the international gold standards for autism assessment, to generate personalised pre-clinical reports. It includes NHS signposting, education and legal rights guidance, and a GP referral letter, available in ten languages, across five age stages from eighteen months to adulthood.

Every design decision was made through a governance lens. It is transparent about what it is and is not. It does not diagnose. It does not replace qualified clinicians. It helps families find their voice in a system that rewards those who already know how to navigate it and leaves everyone else behind.

I want to be direct about why this matters beyond the United Kingdom.

Across Africa, the gap between neurodevelopmental need and clinical capacity is even wider than in the UK. In Nigeria, as in many African countries, autism awareness is growing, but diagnostic services remain scarce, culturally complex, and geographically concentrated in major cities. A mother in Kano or Enugu navigating concerns about her child’s development faces not just a waiting list but an entire system that may not yet have the language or the infrastructure to meet her.

Technology cannot substitute for systemic investment in health and education. That requires political will, policy commitment, and sustained funding. But technology, built responsibly and made freely available, can reduce the isolation of waiting. It can put knowledge in the hands of families who currently have none. It can help a parent walk into a doctor’s appointment with a clear, clinically framed picture of what they have been observing, rather than struggling to find words for something they have felt but never been able to name.

Neurohelp.ai is built by someone who has been that parent. And who also knows, professionally, what responsible AI looks like.

It launches soon at neurohelp.ai. It is free for every family. In every language. At every stage of life.

Because no parent in London, in Lagos, or anywhere in between should have to wait years to feel understood.

Folu is a tech leader, AI architect, and founder of Neurohelp.ai and the Tade Autism Centre.
