
Tech and Humanity

When Anthropic Accidentally Opened Its Own Vault: The Claude Code Leak of March 31, 2026


…And what it reveals about AI, human fallibility, and the road ahead

By Folu Adebayo

The Day the Source Walked Out the Door

On the morning of March 31, 2026, a 59.8 MB JavaScript source map file intended for internal debugging was accidentally included in version 2.1.88 of the @anthropic-ai/claude-code package published to the public npm registry.

It wasn’t a hack. No sophisticated adversary breached Anthropic’s defences. Anthropic confirmed the incident themselves, stating: “This was a release packaging issue caused by human error, not a security breach.”
One misconfigured file. One missing line in .npmignore. And suddenly, 512,000 lines of TypeScript code across 1,906 files — and 44 hidden feature flags — were sitting on a public registry for anyone to download.

Security researcher Chaofan Shou was the first to discover and disclose it, and the community set up multiple GitHub mirrors within hours, which together garnered over 1,100 stars. By mid-morning, Anthropic’s internal codebase had become the most-studied piece of software on the internet.
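
Anthropic has not published the exact packaging misconfiguration, but as an illustration of how little it takes under a conventional npm setup (the file patterns below are hypothetical), a `.npmignore` that omits a single pattern will bundle source maps into the published tarball, and a dry-run pack would have shown it:

```shell
# Hypothetical .npmignore for illustration -- one missing pattern
# ("*.map") is enough for npm to include source maps in the package:
#
#   node_modules/
#   src/
#   *.test.js
#   # *.map        <- the line that wasn't there
#
# Before publishing, a dry run lists every file that would ship:
npm pack --dry-run

# Any .map files in that listing would be published with the package:
npm pack --dry-run 2>&1 | grep '\.map$'
```

The same check can be wired into CI so a stray debug artifact fails the release pipeline instead of reaching the registry.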

What Was Actually Exposed?

This was not a breach of user data or model weights. Anthropic was clear that no sensitive customer data or credentials were involved. But what was exposed was arguably more strategically damaging: the engineering blueprint of their fastest-growing product.

The source code leak exposed around 500,000 lines of code across roughly 1,900 files. At least some of Claude Code’s capabilities come not from the underlying large language model itself, but from the software “harness” that sits around it, instructing it how to use tools and providing the guardrails that govern its behaviour. That harness was now public.
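
The leak confirms that such a harness exists, not how Anthropic built it, but the general pattern is well known. A minimal sketch, with every name, tool, and guardrail invented for illustration:

```python
# Minimal sketch of an LLM tool-use harness: the loop, the tool registry,
# and the permission guardrails all live outside the model itself.
import os

ALLOWED_TOOLS = {"read_file", "list_dir"}  # guardrail: explicit allowlist

def read_file(path):
    with open(path) as f:
        return f.read()

def list_dir(path):
    return "\n".join(os.listdir(path))

TOOLS = {"read_file": read_file, "list_dir": list_dir}

def run_harness(model_step, max_turns=5):
    """Drive the model until it answers or runs out of turns.

    `model_step` stands in for the LLM call: given the transcript so far,
    it returns either {"tool": name, "args": {...}} or {"answer": text}.
    """
    transcript = []
    for _ in range(max_turns):
        action = model_step(transcript)
        if "answer" in action:
            return action["answer"]
        name = action["tool"]
        if name not in ALLOWED_TOOLS:            # the guardrail in action
            transcript.append((name, "ERROR: tool not permitted"))
            continue
        result = TOOLS[name](**action["args"])   # execute on the model's behalf
        transcript.append((name, result))
    return "max turns exceeded"
```

A model that requests a tool outside the allowlist gets an error back instead of execution; that enforcement layer, not the model, is what leaked.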

The Hidden Features Nobody Was Supposed to See

KAIROS: The Always-On Agent

While current AI tools are largely reactive, KAIROS allows Claude Code to operate as an always-on background agent. It handles background sessions and employs a process called autoDream, in which the agent performs “memory consolidation” while the user is idle: merging disparate observations, removing logical contradictions, and converting vague insights into absolute facts.
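
The leak names autoDream but not, publicly at least, its algorithm. A toy sketch of what idle-time consolidation could look like; the promotion threshold and the "not " negation convention are inventions for illustration only:

```python
# Toy "memory consolidation": repeated observations are promoted to facts,
# and pairs of directly contradictory observations are dropped entirely.

def consolidate(observations, promote_after=2):
    counts = {}
    for obs in observations:
        counts[obs] = counts.get(obs, 0) + 1

    facts = set()
    for obs, seen in counts.items():
        # Convention for this sketch: "not X" contradicts "X".
        negation = obs[4:] if obs.startswith("not ") else "not " + obs
        if negation in counts:      # logical contradiction: keep neither
            continue
        if seen >= promote_after:   # repeated insight -> treated as fact
            facts.add(obs)
    return facts
```

Feeding it a mixed transcript such as `["uses pytest", "uses pytest", "uses tabs", "not uses tabs", "prefers black", "prefers black"]` yields `{"uses pytest", "prefers black"}`: the contradiction about tabs is discarded, and only the repeated observations survive as facts.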

BUDDY: The AI Pet

BUDDY is a Tamagotchi-style AI companion that lives in a speech bubble next to the input box, complete with cosmetic hats and a deterministic species-generation system: the same user always hatches the same buddy, whose name and personality are written by Claude on first hatch.
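
The leaked strings describe the behaviour (deterministic per user), not the generator itself. A plausible sketch, where the species list and hashing scheme are guesses rather than the leaked implementation:

```python
# Sketch of a deterministic "hatch": hash the user ID so the same user
# always gets the same buddy. Species names here echo the Anthropic
# codenames mentioned elsewhere in the leak, but the scheme is invented.
import hashlib

SPECIES = ["capybara", "fennec", "numbat", "axolotl", "quokka"]

def hatch_buddy(user_id: str) -> str:
    # A stable cryptographic hash gives a stable index per user,
    # unlike Python's built-in hash(), which is salted per process.
    digest = hashlib.sha256(user_id.encode("utf-8")).digest()
    return SPECIES[int.from_bytes(digest[:4], "big") % len(SPECIES)]
```

The design point is reproducibility without storage: no database row is needed to remember which buddy a user hatched, because the ID alone regenerates it.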

Undercover Mode: The Most Ironic Discovery

Perhaps the most discussed technical detail is “Undercover Mode”, a feature revealing that Anthropic uses Claude Code for “stealth” contributions to public open-source repositories. The system prompt warns the model not to let any Anthropic-internal information appear in public git logs.

The funniest part: there is an entire system called “Undercover Mode” specifically designed to prevent Anthropic’s internal information from leaking — and then the entire source shipped in a .map file. The irony was not lost on the developer community.

The Capybara Model

The source code confirmed that “Capybara” is the internal codename for a Claude 4.6 variant, with “Fennec” mapping to Opus 4.6 and the unreleased “Numbat” still in testing.

The Compounding Security Crisis

The leak did not arrive alone. If you installed or updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC, you may have inadvertently pulled in a malicious version of the axios HTTP library containing a Remote Access Trojan (RAT).

The malicious archive circulating on GitHub included ClaudeCode_x64.exe, a Rust-based dropper that, on execution, installs the Vidar v18.7 and GhostSocks malware used to steal credentials and proxy network traffic.

The message to any developer who updated Claude Code that morning: treat the host machine as fully compromised.
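
For developers triaging that window, the first step is establishing what actually landed on disk. These are generic npm commands, not Anthropic-issued guidance, and they only confirm exposure; they do not make a compromised host safe:

```shell
# Which axios versions are actually resolved in the dependency tree?
npm ls axios

# What did the lockfile pin at install time?
grep -n '"axios"' package-lock.json

# Check the installed tree against published security advisories:
npm audit
```

If any of these point at a version pulled during the compromised window, the guidance above stands: rebuild the machine rather than trying to clean it.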

AI Is Still Controlled by Humans and That’s the Point

There’s a deeper lesson here that cuts through all the technical drama.

This incident was not caused by AI going rogue. It was not an autonomous system making a dangerous decision. A file used internally for debugging was accidentally bundled into a routine update and pushed to the public registry by a human.

A human forgot a line of configuration. A human approved the release. A human error, the same category of mistake that has preceded every major data breach, every nuclear near-miss, every preventable industrial disaster in history.

The narrative that AI is some uncontrollable force is, in this case, precisely backwards. The AI did what it was instructed to do. The humans around it made the mistake. This is not a condemnation of Anthropic; it is a reminder that as AI systems grow more powerful, the quality of human oversight must scale with them. The weakest link is still, reliably, human.

The Strategic Fallout

The leak hands competitors a detailed unreleased feature roadmap and deepens questions about operational security at a company that sells itself as the safety-first AI lab.

The latest security lapse is potentially more damaging than an earlier accidental exposure of a draft blog post about a forthcoming model. While it did not expose the weights of the Claude model itself, it allowed people with technical knowledge to extract additional internal information from the codebase.

The leak won’t sink Anthropic, but it gives every competitor a free engineering education on how to build a production-grade AI coding agent and what tools to focus on next.

What This Means for the Future of AI

1. Agentic AI demands agentic security. The attack surface exposed by the Claude Code leak is not a Claude-specific problem; it is a window into the systemic vulnerabilities of agentic AI at large. The same compaction pipelines, permission chains, and MCP interfaces exist across every enterprise agent deployment. What changed on March 31 is that the cost of attack research collapsed overnight.

2. The “always-on AI” era is already being built. Features like KAIROS and BUDDY signal that the next generation of AI tools will not wait to be asked. They will watch, remember, and act in the background. This raises profound questions about consent, privacy, and the nature of the human-AI relationship that regulators and ethicists are not yet equipped to answer.

3. Transparency may be the only viable long-term strategy. While the leak is negative for Anthropic in the short term, exposing trade secrets, it may prove a net positive for the industry in the long run: it provides the first complete, production-grade reference architecture for an AI agent, which could drive ecosystem development much as the open-sourcing of Android did.

4. AI governance is not optional. For any organisation deploying or building on AI systems, this incident is a case study in why governance frameworks, release pipeline controls, and security-by-design are not bureaucratic overhead; they are existential necessities.

The Claude Code leak is a story about a brilliant company, moving fast, in a highly competitive market, staffed by talented humans who are still, at the end of the day, fallible. That is not a criticism. It is the human condition.

The question the industry must now answer is not whether AI can be trusted. It is whether the humans building, deploying, and governing AI have earned that trust themselves. March 31, 2026, suggests there is still significant work to do.


Tech and Humanity: Why Africa Must Write Its Own AI Rules


By Folu Adebayo

There is a meeting happening right now that Africa is not in.

In Brussels, Washington, London, and Beijing, the rules that will govern artificial intelligence for the next generation are being written. Frameworks are being debated. Standards are being set. Regulatory architectures are being designed that will determine how AI is built, deployed, and held accountable across the global economy.

And yet, the continent that will be most affected by those decisions (home to the world’s youngest population, with a median age under 20, and some of its fastest-growing economies) is largely absent from the room.

This is not just an oversight.

It is a strategic risk.

The illusion of neutrality

There is a persistent myth in technology: that AI is neutral, that algorithms are objective, that data does not discriminate.

The evidence suggests otherwise.

A landmark study by researchers at the MIT Media Lab found that leading facial recognition systems had error rates of up to 34.7% for darker-skinned women, compared to less than 1% for lighter-skinned men. Similarly, audits by the National Institute of Standards and Technology (NIST) have shown significant demographic disparities in biometric systems.

These are not edge cases. They are signals.

Every AI system reflects the values, assumptions, and blind spots of those who build it and the data it is trained on. When that data is overwhelmingly Western, the systems built on top of it will perform best in Western contexts.

This is already visible in financial services. Credit scoring models trained on formal banking histories misinterpret the creditworthiness of entrepreneurs operating in informal or cash-based economies. Medical AI systems trained on European and North American datasets are being deployed in health systems where disease patterns and treatment pathways differ significantly.

Africa is not simply adopting AI.

It is increasingly being asked to adapt to AI that was never designed for it.

The governance gap is a sovereignty gap

When the European Union introduced the EU AI Act, it did more than regulate technology. It set a global standard.

Any company that wants to operate in the European market must now align with its requirements. That is regulatory power.

Africa has no equivalent.

And in that absence, a pattern is emerging: African institutions default to external standards (European, American, or Chinese), importing not just technology but the governance models that come with it.

This is where the real risk lies.

The AI governance gap is not just a regulatory lag.
It is a sovereignty gap.

Because when rules are written elsewhere, outcomes are shaped elsewhere.

What is at stake

Nowhere is this more consequential than in financial services.

Across Nigeria, Kenya, Ghana, and South Africa, fintech is expanding access to credit, insurance, and payments at unprecedented speed. According to the World Bank, mobile money alone has lifted millions out of financial exclusion across Sub-Saharan Africa.

But governance has not kept pace.

When an AI model determines whether a small business owner in Lagos receives a loan, who is accountable for that decision?
When a customer in Nairobi is flagged as high risk, can they challenge it?
When algorithmic systems produce biased outcomes, who is responsible for identifying and correcting them?

In many jurisdictions, there is no clear answer.

That is not just a policy gap.

It is a trust gap.

And trust is the foundation on which financial systems and digital economies are built.

The opportunity within the gap

For all its risks, Africa’s position is also a rare strategic advantage.

Europe is retrofitting governance onto decades of legacy systems. The United States remains constrained by political fragmentation. China’s approach reflects a governance model that many African democracies will not seek to replicate.

Africa has the opportunity to build differently: to embed governance into AI adoption from the outset.

That means designing frameworks that reflect local realities:

  • informal and hybrid economies
  • mobile-first financial infrastructure
  • linguistic and cultural diversity
  • distinct social and regulatory priorities

There are early signals of what this could look like. Rwanda has positioned itself as a testbed for responsible AI policy. Kenya has taken meaningful steps in data protection. Nigeria, with its scale, talent base, and economic influence, has the potential to lead a continent-wide approach.

But leadership requires intent.

And the window to lead is narrowing.

What African boards must do now

For board directors, Chief Risk Officers, and technology leaders, AI governance is not a future issue.

It is a present responsibility.

Start with visibility:
Which AI systems are currently influencing decisions in your organisation?

Then ownership:
Who is accountable for them?

Then integrity:
What data were they trained on, and does it reflect the customers you actually serve?

And finally, accountability:
What happens when the system is wrong?

These are not regulatory questions.

They are governance fundamentals.

And organisations that cannot answer them today will struggle to defend them tomorrow to regulators, to customers, and increasingly, to the public.

The decision point

The rules that will govern artificial intelligence across Africa are still being written.

But the direction of travel is clear.

If Africa does not define its own standards, it will inherit them.
If it does not build governance into its systems, it will import it along with the assumptions embedded within it.

And in a world where AI is shaping access to capital, healthcare, security, and opportunity, that is not just a technical decision.

It is a question of economic sovereignty.

Africa can either become a rule-maker in the AI economy or remain a rule-taker.


Tech and Humanity: When the System Has No Answer, Build One


By Folu Adebayo

In the United Kingdom, a family waiting for an autism assessment through the National Health Service will wait, on average, between two and five years.

Two to five years of watching a child struggle in a classroom that does not understand them. Two to five years of fighting for support that requires a diagnosis to unlock. Two to five years of being told by a system designed to help: we see you, but we cannot reach you yet.

I know this world intimately. My son Tade is autistic. I founded the Tade Autism Centre and the Autism Treatment Support Initiatives, a registered UK charity, because I lived the distance between what families need and what institutions provide. That distance is not measured in miles. It is measured in years of uncertainty, in children falling behind, in parents carrying a weight the system was supposed to share.

But my story does not end there. Because in parallel to that personal journey, my professional life took me somewhere I did not entirely expect.

Today, I serve as an AI Risk and Governance advisor for several organisations, including the very institution whose waiting lists I have navigated as a parent. I spend my days thinking about how AI is developed and deployed in consequential environments. About accountability. About explainability. About what happens when automated systems make decisions affecting vulnerable people without adequate human oversight.

One day, those two worlds, the mother and the technologist, asked each other a question.

If artificial intelligence can be deployed responsibly, with clinical rigour, with absolute transparency about its limitations, and with genuine safeguards, could it help families while they wait? Not to replace the clinician. Never that. But to help a parent understand what they are observing in their child. To help them articulate it clearly. To point them toward the right pathways and the right organisations while the formal system catches up.

That question became Neurohelp.ai.

It is a free AI-powered autism assessment and support navigation tool. It applies the DSM-5 and ICD-11 clinical frameworks, the international gold standards for autism assessment, to generate personalised pre-clinical reports. It includes NHS signposting, education and legal rights guidance, and a GP referral letter, available in ten languages, across five age stages from eighteen months to adulthood.

Every design decision was made through a governance lens. It is transparent about what it is and is not. It does not diagnose. It does not replace qualified clinicians. It helps families find their voice in a system that rewards those who already know how to navigate it and leaves everyone else behind.

I want to be direct about why this matters beyond the United Kingdom.

Across Africa, the gap between neurodevelopmental need and clinical capacity is even wider than in the UK. In Nigeria, as in many African countries, autism awareness is growing but diagnostic services remain scarce, culturally complex, and geographically concentrated in major cities. A mother in Kano or Enugu navigating concerns about her child’s development faces not just a waiting list but an entire system that may not yet have the language or the infrastructure to meet her.

Technology cannot substitute for systemic investment in health and education. That requires political will, policy commitment, and sustained funding. But technology, built responsibly and made freely available, can reduce the isolation of waiting. It can put knowledge in the hands of families who currently have none. It can help a parent walk into a doctor’s appointment with a clear, clinically framed picture of what they have been observing rather than struggling to find words for something they have felt but never been able to name.

Neurohelp.ai is built by someone who has been that parent. And who also knows, professionally, what responsible AI looks like.

It launches soon at neurohelp.ai. It is free for every family. In every language. At every stage of life.

Because no parent in London, in Lagos, or anywhere in between should have to wait years to feel understood.

Folu is a tech leader, AI architect, and founder of Neurohelp.ai and the Tade Autism Centre.


Tech and Humanity: The AI You Can’t See is the Risk You Can’t Manage


By Folu Adebayo

A CEO said something to me recently that I haven’t been able to shake.

“We’re not really using AI yet; we’re still exploring.”

It sounded reasonable. Measured. Even responsible.

But when we looked a little closer, AI was already there.

In reports being drafted.

In data being analysed.

In customer interactions being shaped.

In decisions being influenced.

Quietly. Informally. Unchecked.

This is the reality most organisations are now operating in.

AI isn’t something you switch on.
It’s something that creeps in.

The Gap Leadership Isn’t Seeing

Across boardrooms, the conversation is still framed as:
“Should we adopt AI?”

But inside the business, the reality is very different:
“We already have.”

Just not in a way that leadership can fully see.

• Teams are experimenting.
• Individuals are optimising their work.
• Departments are solving problems quickly.

And in doing so, they are introducing AI into core workflows — often without formal approval, oversight, or governance.

Not because they are reckless.
Because they are trying to be effective.

The Illusion of Control

Most organisations believe they have some level of control over AI.

There may be policies.
There may be guidance.
There may even be restrictions on certain tools.

But control is not defined by what is written.
It is defined by what is visible.

And in many cases, AI usage today is only partially visible, if at all.

This is where risk begins.

The Risk Isn’t Where Most Leaders Are Looking

When AI risk is discussed, attention often goes to the models:
Bias. Accuracy. Explainability.

Important issues, certainly.

But in my experience, the more immediate risks are far more operational — and far less visible.

Data being entered into external tools without control.
Decisions being influenced without traceability.
Processes being automated without oversight.
Accountability becoming blurred.

These are not theoretical risks.
They are already happening.
Regulation Is Moving — But Not Fast Enough

Globally, regulators are beginning to respond.

The EU AI Act is introducing structured approaches to classifying AI risk, particularly in high-impact sectors.
In the UK, regulatory thinking continues to evolve, with a focus on sector-led oversight.
Across Africa, including Nigeria, adoption is accelerating rapidly, often ahead of formal regulatory frameworks.

This creates a tension.

Organisations are scaling AI faster than governance is being defined.
And governance is being defined faster than organisations are able to implement it.

Governance Is Not the Barrier, It’s the Enabler

There is still a perception in some leadership circles that governance slows things down.
In reality, the absence of governance slows everything down eventually.

Without it:
• Risk accumulates silently
• Confidence in decisions erodes
• Scaling becomes fragile

With it:
• Leaders gain clarity
• Risk becomes manageable
• AI can be deployed with confidence

Governance is not about restriction.
It is about control.
The Questions That Matter Now

The most important shift leaders can make is not technical.
It is perspective.

From:
“Are we using AI?”

To:
“Where is AI already being used, and what does that mean for us?”

That means asking:
• Do we have full visibility of AI usage across the organisation?
• Who is accountable for AI risk at executive level?
• Can we clearly classify and prioritise AI-related risks?
• Are we able to explain how AI is influencing decisions?

If the answers are unclear, the issue is not capability.
It is awareness.

A Defining Leadership Moment

We are at a point where AI is no longer an innovation discussion.
It is an operational reality.

And like any operational reality, it requires structure, ownership, and oversight.

The organisations that will lead in this space will not necessarily be those who adopt AI the fastest.
They will be those who understand it the clearest.

Who can see it.
Who can manage it.
Who can take responsibility for it.

Final Thought
The real risk is not the AI you are planning for.

It is the AI that is already inside your business.

Working. Influencing. Deciding.

The question is not whether it exists.
It is whether you can see it.
And whether you are in control of it.

Folu Adebayo is an AI Governance and Enterprise Transformation Advisor, working at the intersection of technology, risk, and regulation. With a background in enterprise architecture and large-scale transformation across insurance, financial services, and the public sector, she helps organisations gain visibility and control over how AI is used across their business. Her work focuses on bridging the gap between AI adoption and governance, enabling leadership teams to scale AI safely, responsibly, and with confidence.
