The Oracle

The Oracle: The Right to Development, Public Interest Litigation, and the Rule of Law in Nigeria (Pt. 1)

By Prof Mike Ozekhome SAN

INTRODUCTION

Right to Development

The right to development is an inalienable human right affirmed by the United Nations Declaration on the Right to Development (1986). It asserts that every individual and all peoples are entitled to participate in, contribute to, and enjoy economic, social, cultural, and political development, in which all human rights and fundamental freedoms can be fully realized. Regionally, this right is embedded in Article 22 of the African Charter on Human and Peoples’ Rights, to which Nigeria is a signatory.

Public Interest Litigation (PIL)

Public Interest Litigation in Nigeria refers to legal actions brought not by directly affected individuals, but by public-spirited citizens or civil society organizations to protect collective rights or public goods. The scope for PIL broadened significantly with Centre for Oil Pollution Watch v. NNPC (2018), where the Supreme Court expanded the locus standi doctrine, allowing NGOs to sue on behalf of affected communities even absent direct injury.

Rule of Law

The rule of law encompasses principles such as equality before the law, accountability, separation of powers, transparency, and judicial independence. In Nigeria, public interest litigation is often seen as both reinforcing and challenging this principle: it promotes access to justice, but it also raises concerns about judicial overreach and institutional balance.

Significance in Nigeria’s Socio-Political Context

Nigeria is marked by severe socio-economic inequality, inadequate service delivery, environmental degradation, pervasive corruption, and impunity among authorities. In such a context, the right to development becomes critical as a framework for citizens’ demands for basic needs—education, health, sanitation, environment. Public interest litigation emerges as a vehicle for marginalized groups to assert these rights, elevate the rule of law, and press the state toward accountability, despite structural and constitutional limitations.

Thesis Statement

This essay contends that although the right to development remains non-justiciable under Nigeria’s domestic Constitution, it is incrementally enforced through public interest litigation grounded in international and regional law. Case law from both Nigerian courts and the ECOWAS Court demonstrates how PIL helps to bolster the rule of law—yet institutional obstacles, weak enforcement, and constitutional constraints continue to limit its transformative potential.

The Right to Development in the Nigerian Context

Origin and Definition under International Law

The right to development was formally recognized by the United Nations in December 1986. The Declaration’s Preamble frames development as “a comprehensive economic, social, cultural and political process” aimed at improving well-being through inclusive participation and equitable benefit-sharing. The African Charter (Article 22) further affirms this right as legally binding on member states, including Nigeria.

Constitutional Silence: Implicit vs. Explicit Recognition

While Nigeria ratified the African Charter and incorporated it domestically through the African Charter (Ratification and Enforcement) Act (Cap A9, LFN 2004), its 1999 Constitution does not explicitly guarantee the right to development. Instead, Chapter II sets out non-justiciable Directive Principles of State Policy, which are aspirational statements rather than enforceable rights.

Socio-Economic Rights in Chapter II of the 1999 Constitution

Chapter II outlines goals such as the provision of education, health, housing, and social welfare. However, Section 6(6)(c) expressly ousts the jurisdiction of the courts over these principles, rendering them unenforceable in litigation.

The Justiciability Debate and Attempts at Enforcement

Despite constitutional limitations, litigants have sought to enforce socio-economic rights through reliance on international obligations and fundamental rights provisions. The ECOWAS Court, empowered by treaties Nigeria has ratified, has been pivotal in bypassing domestic procedural constraints by directly enforcing rights under the African Charter.

Role of ECOWAS Court Decisions

In SERAP v. Nigeria & UBEC (2010), the ECOWAS Community Court ruled that Nigeria’s failure to provide free and compulsory basic education violated Articles 17 and especially Article 22 of the African Charter, regardless of the domestic non-justiciability of educational goals. The Court dismissed Nigeria’s objection that the issue fell under Nigeria’s Chapter II—and reaffirmed its authority to enforce African Charter rights even where municipal law does not confer such rights.

Public Interest Litigation (PIL) as a Tool for Enforcing the Right to Development

Evolution of PIL: From Restrictive Locus Standi to Liberal Interpretation
Historically, Nigerian courts required plaintiffs to demonstrate direct personal injury to have standing. For instance, in the environmental case filed by Centre for Oil Pollution Watch, both the Federal High Court and Court of Appeal dismissed the suit due to lack of locus standi. But in 2018, the Supreme Court overturned those decisions, liberalizing standing rules to allow NGOs to sue for environmental harm on behalf of affected communities (Ibid).

Landmark Cases Using PIL
• Centre for Oil Pollution Watch v. NNPC (2018) became a landmark PIL ruling. The Supreme Court acknowledged environmental degradation as a public interest issue and permitted an NGO to seek remedies on behalf of impacted communities, recognizing the right to life and a healthy environment within Section 33 and Section 20 of the Constitution (ibid).

• SERAP’s action at ECOWAS Court used PIL to demand free basic education. The Court not only recognized the right to education under the African Charter but issued binding orders for the Nigerian government to implement free, compulsory education, underscoring PIL’s potential in enforcing development rights.

PIL and Access to Justice for Marginalized Groups
The liberalization of locus standi enables NGOs and advocacy groups to represent communities otherwise unable to afford litigation. This expansion improves access to justice, especially for rural or marginalized Nigerians facing environmental hazards or educational deprivation.

The Nigerian Judiciary’s Attitude toward Socio-Economic Rights Claims

While constitutional restrictions persist, judicial activism via PIL and international law has gradually induced a more receptive adjudication of socio-economic rights. Nonetheless, courts remain constrained; executive non-compliance and weak enforcement mechanisms often undermine the effectiveness of PIL rulings.

Rule of Law: Interplay with Right to Development and PIL

Defining the Rule of Law (Dicey, Constitutionalism, Judicial Independence)
The rule of law, as articulated by A.V. Dicey, has three central tenets: the absolute supremacy of regular law (no one is punished except for a distinct breach of law), equality before the law, and the predominance of the legal spirit over discretionary authority. Modern constitutionalism expands this understanding by insisting that the state must be bound by law, that individual rights are protected through fair legal procedures, and that courts operate independently in interpreting and enforcing those laws. Judicial independence thus becomes pivotal: judges must be able to decide cases impartially, without undue influence from the executive or legislative branches.

Does PIL Promote or Undermine the Rule of Law?
PIL undoubtedly strengthens several dimensions of the rule of law: it democratizes access to justice, holds government institutions accountable, and enforces compliance with legal and constitutional norms. By broadening locus standi, PIL empowers civil society and marginalized communities to seek remedies—even when formal constitutional channels are blocked. However, critics argue that PIL may lead to judicial activism, where courts make policy decisions or governance choices best left to elected bodies. Such scenarios raise questions about institutional balance and the separation of powers.

Weak Enforcement of Judgments & Executive Non-Compliance (Dasuki case)
The Dasuki case underscores the fragility of judicial authority when the executive refuses to comply with court or regional tribunal orders. In 2016, the ECOWAS Court ruled that Col. Sambo Dasuki’s re-arrest and continued detention—despite bail granted by multiple Nigerian courts—was arbitrary, unlawful, and a clear mockery of the rule of law, ordering his immediate release and payment of ₦15 million in damages. Yet successive Nigerian governments ignored these rulings. Multiple Federal High Court judges reaffirmed his bail, but the State Security Service (SSS) refused to release him for several years. In 2024, even the ECOWAS Court itself dismissed Dasuki’s enforcement action, citing procedural technicalities and lack of jurisdiction to compel enforcement, effectively illustrating how executive impunity erodes the rule of law.

Tensions Between Populist Litigation and Strict Legalism
PIL can sometimes generate tension between populist demand for justice and strict adherence to procedural legalism. On one hand, it serves the people by bringing rights-based litigation when formal channels are blocked. On the other, it exposes the judiciary to accusations of overreach or policymaking under legal guise. The Dasuki case reflects how popular sentiment around arbitrary detention can drive litigation, yet harsh procedural rules and political reluctance can frustrate enforcement, leaving legal victories hollow.

CASE STUDIES / EXAMPLES

SERAP v. Federal Government (Mismanagement of Education Budget)

Through public interest litigation, the Socio-Economic Rights and Accountability Project (SERAP) challenged the federal government over education fund mismanagement. They petitioned the ECOWAS Court, which held that Nigeria’s failure to provide free and compulsory basic education constituted a breach of the African Charter’s Article 22. The Court ordered systemic reform but enforcement remains partial due to domestic non-compliance and political inertia.
Ken Saro-Wiwa’s Legacy & the Ogoni Nine: Environment as Development Right

The struggle of Ken Saro-Wiwa and the Ogoni activists underscored environmental exploitation and lack of economic inclusion as key issues of the right to development. Their execution in 1995 galvanized international condemnation and spurred modern Nigerian environmental litigation. The case established the link between environmental justice, community development, and public interest legal action against multinational extractive actors—paving the way for later PILs such as COPW v. NNPC. (To be continued).

THOUGHT FOR THE WEEK

“Sustainable development is the pathway to the future we want for all. It offers a framework to generate economic growth, achieve social justice, exercise environmental stewardship and strengthen governance” – Ban Ki-moon.


The Oracle

The Oracle: The New Digital Colonialism: Navigating AI Policy Under Foreign Tech Dominance (Pt. 3)

By Prof Mike Ozekhome SAN

INTRODUCTION

The last installment of this intervention traced the evolution of AI and reviewed notable developments in its trajectory, its African dimension, and policy trends on the continent and beyond. This week’s feature goes further afield, reviewing the position in the US, the EU and China. Thereafter, we consider the dangers of weak localization and the disproportionate influence of foreign technology on African innovation ecosystems. This is followed by a discussion of the issues generated by AI policy and what African States need to do—using Nigeria as an example/template. Enjoy.

USA, EU, CHINA’S PREFERENCES (Continues)

In Africa, the policy landscape is accelerating but uneven. The Global AI Index (< www.diplomacy.edu/resource/report-stronger-digital-voices-from-africa/ai-africa-national-policies/ > (Diplomacy.Edu) Accessed on 10th September, 2025) categorizes most African countries as lagging: Egypt, Nigeria and Kenya as nascent, and Morocco, South Africa and Tunisia as waking up (Techpoint Africa, < www.facebook.com/TechpointAfrica/posts/africas-ai-policy-why-a-copy-and-paste-approach-will-fail-this-time-every-countr/1064672189125910/ > (Facebook.com, 22nd July, 2025) Accessed on 10th September, 2025). Mauritius led with an AI strategy (Mauritius Artificial Intelligence Strategy, November, 2018 < https://treasury.govmu.org/Documents/Strategies/Mauritius%20AI%20Strategy.pdf > (Treasury.govmu.org) Accessed on 10th September, 2025), followed by Kenya’s AI and blockchain task force (2019) (Kenya Artificial Intelligence Strategy < https://ict.go.ke/sites/default/files/2025-03/Kenya%20AI%20Strategy%202025%20-%202030.pdf > (Ict.go.ke) Accessed on 10th September, 2025), its Digital Master Plan (2022) (Kenya Digital Master Plan, 2022 – 2032 < https://cms.icta.go.ke/sites/default/files/2022-04/Kenya%20Digital%20Masterplan%202022-2032%20Online%20Version.pdf > (Ict.go.ke) Accessed on 10th September, 2025), and Rwanda’s AI policy (Thompson Gyedu Kwarkye, ‘AI policies in Africa: lessons from Ghana and Rwanda’ (TheConversation.com, 25th April, 2025) < https://theconversation.com/ai-policies-in-africa-lessons-from-ghana-and-rwanda-253642 > Accessed on 10th September, 2025), which reflects its national security priorities. Nigeria, Ghana, Uganda, Algeria and South Africa have also announced or drafted AI policies, often framed around economic growth and innovation.
Continental initiatives, such as the African Union’s Digital Transformation Strategy (African Union, ‘THE DIGITAL TRANSFORMATION STRATEGY FOR AFRICA (2020-2030)’ < https://au.int/sites/default/files/documents/38507-doc-dts-english.pdf > Accessed on 10th September, 2025) and the World Bank’s DE4A program (< www.worldbank.org/en/programs/all-africa-digital-transformation > Accessed on 10th September, 2025), emphasize infrastructure, skills and inclusion, but implementation remains fragmented.

Still, foreign influence looms large. Many African AI and data governance frameworks are modeled directly on external templates, particularly the EU’s General Data Protection Regulation (GDPR) (< https://gdpr.eu/what-is-gdpr/ > Accessed on 10th September, 2025). Nigeria’s NDPR (< https://nitda.gov.ng/wp-content/uploads/2021/01/NDPR-Implementation-Framework.pdf > Accessed on 10th September, 2025), a near copy of the GDPR, introduced concepts like consent, data subject rights and cross-border transfers. While it helped raise awareness and created local compliance industries, it omitted key protections (such as breach notifications, children’s rights and strong enforcement). Similar GDPR-inspired laws have been enacted in Ghana, Kenya and South Africa. This copy-paste strategy provides structure but often lacks localization, leaving gaps in enforcement and contextual fit (Bolu Abiodun ‘Africa’s AI policy: Why a copy and paste approach will fail this time’ (Techpoint.Africa, 22nd July, 2025) < https://techpoint.africa/insight/africas-ai-policy-copy-paste/ > Accessed on 10th September, 2025).
Critics warn that the real problem is not copying but exclusion. As Mozilla’s Kiito Shilongo and other researchers argue, many African AI policies are drafted with heavy input from foreign agencies and consultants, while local communities, startups, and civil society are sidelined. This participatory deficit means policies risk reflecting donor interests more than citizens’ rights. In Rwanda, for example, AI policy was shaped through government agencies and international NGOs with a strong focus on security. Ghana’s was more inclusive, involving startups, academia and telecoms, but leaned toward development goals over safety. Both approaches highlight the political nature of AI policymaking and the different ways foreign partnerships shape outcomes.

DANGERS OF WEAK LOCALIZATION

The consequences of weak localization are serious. AI systems trained abroad often misidentify African faces, misinterpret African languages, and replicate systemic biases, raising concerns about discrimination and digital rights. Yet, while African AI strategies often mention ethics and human rights, we lack the institutions and consultation processes (such as the six-month public consultations typical in the EU) that make such commitments enforceable. As Shilongo notes, perhaps Africa should copy less of the content of Western frameworks and more of the participatory processes that make them legitimate.

In short, Africa’s AI policy moment reflects both progress and peril: policies are emerging, but without deeper local ownership, institutional capacity and participatory design, we risk entrenching dependency rather than building sovereignty.

DISPROPORTIONATE INFLUENCE OF FOREIGN TECHNOLOGY ON AFRICAN INNOVATION ECOSYSTEMS – REAL LIFE EXAMPLES

The critique of foreign dominance in Africa’s digital space is best illustrated through concrete examples that reveal how global technology companies shape local innovation ecosystems, often in ways that mirror older colonial patterns of extraction and dependency.

Language exclusion: Africa is home to over 2,000 languages (< https://alp.fas.harvard.edu/introduction-african-languages > Accessed on 16th September, 2025), around one-third of the world’s total, yet, as of May 2024, Apple’s Siri, Google Assistant and Amazon’s Alexa collectively support none of them. This linguistic exclusion reinforces dependency on foreign platforms while marginalizing African cultures in the digital sphere.

Exploited labour: In 2019, South African graduate Daniel Motaung began work as a content moderator for Sama, a subcontractor for Facebook. Relocated to Kenya, he earned $2.20 per hour to review traumatic content described by colleagues as “mental torture”. When Motaung and others attempted to unionize, he was dismissed and later sued Sama and Facebook for union-busting and exploitation. This case underscores how “responsible outsourcing” in Africa often conceals exploitative labor practices.

Resource extraction: The Democratic Republic of Congo holds nearly half of the world’s known cobalt reserves, vital for powering smartphones and electric cars. In Kolwezi alone, thousands of children reportedly mine cobalt under dangerous conditions, while profits flow largely abroad. Much like colonial resource extraction, Africa provides the raw materials that power global digital economies but sees little local benefit.

Surveillance and bias: In Johannesburg, Vumacam has deployed more than 5,000 CCTV cameras integrated with AI analytics for private security firms. Activists warn that this reliance on facial recognition, already proven to misidentify darker-skinned faces at disproportionately high rates, entrenches South Africa’s long history of racialized surveillance. Foreign-designed technologies thus risk reinforcing systemic inequalities under the guise of safety.

Connectivity myths: Mark Zuckerberg’s Internet.org initiative (launched in 2013) was marketed as a philanthropic effort to connect the unconnected. Projects like Free Basics promised free access to online services in over 60 countries. Yet leaked documents revealed that millions of Global South users were secretly charged for “free” data, generating nearly $100 million in 2021 alone. Framed as altruism, these projects extended Facebook’s market reach while extracting revenue from vulnerable populations.

Taken together, these examples reveal how global technology firms, mostly U.S.-based, operate in Africa with strategies that echo colonial logics. They build critical infrastructures (clouds, platforms, connectivity) aligned with their own commercial interests, entrench market monopolies and rely on low-wage labour or raw resource extraction with little local reinvestment. Their technologies often embed cultural and racial biases reflective of narrow developer demographics, yet are exported globally under the banner of “progress,” “development,” or “connecting people.”

As Western jurisdictions strengthen data protection and AI regulation, African countries often remain vulnerable due to weaker frameworks and limited enforcement capacity. This asymmetry creates fertile ground for digital colonialism: a modern-day “Scramble for Africa” where foreign firms extract and control data much like colonial powers once extracted minerals (Danielle Coleman, ‘Digital Colonialism: The 21st Century Scramble for Africa Through Extraction and Control of User Data and the Limitations of Data Protection Laws’ (Law.Umich.Edu) < https://repository.law.umich.edu/mjrl/vol24/iss2/6/ > Accessed on 16th September, 2025). Under the guise of innovation, these companies wield disproportionate influence over African AI and digital ecosystems, shaping policy choices, technical architectures, and even societal norms, while leaving Africa in a position of dependency rather than empowerment.

THE ISSUES GENERATED BY AI POLICY

While global AI policy is advancing through risk-based regulation, ethical standards, and participatory governance, Africa’s AI landscape remains fragmented, heavily modeled on external frameworks, and vulnerable to digital dependency. The disproportionate power of foreign technology companies, manifested in many ways including linguistic exclusion, exploitative labour, resource extraction, biased surveillance and deceptive connectivity projects, echoes colonial logics of extraction and control. Without decisive intervention, the continent risks entrenching digital colonialism, a new form of dependency in which policy choices, infrastructures and innovation ecosystems are shaped externally, undermining both democratic values and long-term development.

WHAT AFRICAN STATES MUST DO

To avoid replicating historical asymmetries in digital form, African states must assert sovereignty over their AI policies, data governance and digital infrastructures. This requires moving beyond passive adoption toward active regulatory design, investment in local infrastructure (such as data centers, compute resources and research capacity) and strengthening institutional oversight with technically competent regulators. Equally critical is the creation of participatory policy processes that center human rights, economic development, and indigenous innovation. Only by combining legal safeguards, domestic capacity, and strategic partnerships built on equality, not dependence, can Africa transform digital technologies into engines of genuine development rather than renewed extraction.

THE NIGERIAN EXAMPLE: DATA SOVEREIGNTY OR DATA SURRENDER?

With the rapid expansion of national digital infrastructure across Nigeria, a far more pressing issue has risen to the fore: the question of who truly owns and governs the data that powers this infrastructure. As digital systems increasingly underpin the delivery of public services, financial transactions, education platforms, health records, and national security functions, data becomes not only a technical asset but a core element of state power. Data sovereignty means that data generated within a country’s borders is governed by that nation’s laws and regulatory frameworks; this ensures local control over data access, storage, and usage (Folashadé Soulé, ‘Digital Sovereignty in Africa: Moving beyond Local Data Ownership’ CIGI (2024) <https://www.cigionline.org/publications/digital-sovereignty-in-africa-moving-beyond-local-data-ownership/> Accessed on the 14th of June, 2025.). It has become a critical aspect of national policy and governance. In Nigeria, this issue has grown increasingly complex, particularly in light of the pervasive presence of foreign cloud providers, offshore data processors, and international technology firms that collect, process, and sometimes export Nigerian user data without clear or enforceable jurisdictional frameworks.

Foreign digital platforms have historically played a central role in the Nigerian data ecosystem, either as providers of essential services like email, storage, and analytics, or as developers of social media and financial applications used daily by millions of Nigerians (Fola Odufuwa et al., ‘Digital Technology Adoption by Microenterprises: Nigeria Report’ (2024) <https://www.researchgate.net/publication/383202125_Digital_Technology_Adoption_by_Microenterprises_Nigeria_Report> Accessed on the 14th of June, 2025). While these platforms often promise global connectivity and technical sophistication, they also introduce serious risks. Data generated within Nigeria is frequently routed through foreign servers, stored in jurisdictions with significantly different privacy protections, and subjected to external political and commercial interests (Patrick Aloamaka, ‘Data Protection and Privacy Challenges in Nigeria: Lessons from Other Jurisdictions’ UCC Law Journal (2023) 3 (1)). This dislocation of Nigerian data is what scholars term extraterritorial data flow, which raises serious questions about control, privacy, and national security. The potential misuse of this data, whether for commercial exploitation, surveillance, or even geopolitical leverage, makes the issue of domestic data governance all the more urgent. (To be continued).

THOUGHT FOR THE WEEK

“Over time I think we will probably see a closer merger of biological intelligence and digital intelligence”. (Elon Musk).

The Oracle

The Oracle: The New Digital Colonialism: Navigating AI Policy Under Foreign Tech Dominance (Pt. 2)

By Prof Mike Ozekhome SAN

INTRODUCTION

The inaugural installment of this piece was necessarily foundational, providing the background to the emergence of AI; how it transformed the digital space; applicable regulatory frameworks; its algorithmic transparency/accountability; its ethical dimensions and implications; and the threat of foreign tech dominance/digital colonialism. This sophomore edition traces the evolution of AI; notable developments; the history of technological dependency in Africa; and policy trends on the continent and beyond. Enjoy.

THE EVOLUTION OF AI

AI has progressed from rule-based systems to machine learning and deep learning models capable of autonomous decision-making. Applications range from healthcare diagnostics to autonomous vehicles, predictive policing, and financial algorithms. While AI enhances productivity, concerns arise over:

• Job displacement due to automation (Brynjolfsson, E., & McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W. W. Norton & Company).

• Surveillance capitalism, where personal data is exploited for profit.

• Algorithmic governance, where AI influences public policy without sufficient oversight (O’Neil, C. (2016). Weapons of Math Destruction. Crown Publishing).

The conceptual origins of Artificial Intelligence (AI) can be traced to the mid-20th century, when pioneering figures such as Alan Turing and John McCarthy began to explore the possibility of creating machines capable of simulating human intelligence. Turing’s seminal 1950 paper, “Computing Machinery and Intelligence,” posed the provocative question, “Can machines think?”—a question that laid the philosophical groundwork for modern AI research (Turing, Alan M. “Computing Machinery and Intelligence.” Mind 59, no. 236 (1950): 433–460). McCarthy, who coined the term “artificial intelligence” in 1956, convened the historic Dartmouth Conference, widely considered the birth of AI as a formal field of inquiry (McCarthy, John et al., “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence” (1955)).

EARLY ASPIRATIONS AND TECHNOLOGICAL MILESTONES

Early AI efforts focused on symbolic logic, rule-based systems, and expert systems, which relied on hand-coded rules to simulate decision-making processes. These systems, while limited in scope, found application in fields such as medical diagnostics (e.g., MYCIN) and chess-playing algorithms. The emergence of machine learning in the late 20th century—particularly supervised learning techniques—ushered in a new era in which machines could learn patterns from data rather than rely solely on pre-programmed rules.
The exponential growth in computing power, the availability of big data, and algorithmic innovation have since culminated in what many scholars refer to as the “AI revolution.”

NOTABLE DEVELOPMENTS

Notable developments include deep learning techniques powered by artificial neural networks, natural language processing exemplified by large language models (LLMs), and computer vision systems that rival or exceed human performance in specific domains (LeCun, Yann, Bengio, Yoshua, and Hinton, Geoffrey. “Deep Learning.” Nature 521, no. 7553 (2015): 436–444).

FROM AUTOMATION TO AUTONOMY

AI has transitioned from automating repetitive tasks to performing complex cognitive functions previously thought to be the exclusive domain of humans. Self-driving cars, AI legal assistants, autonomous drones, and AI-generated art demonstrate the breadth of AI’s applications. As these systems grow in sophistication, they increasingly exhibit autonomy—the capacity to make decisions and take actions without direct human intervention. This shift raises profound questions about accountability, transparency, and control.

ACCOUNTABILITY, TRANSPARENCY AND CONTROL

For example, autonomous weapons systems capable of selecting and engaging targets without human oversight challenge existing norms under international humanitarian law (IHL). Similarly, AI systems deployed in judicial or parole decisions raise concerns about bias, fairness, and due process, especially when the logic behind decisions is opaque even to their developers—a phenomenon referred to as the “black box problem.” (Pasquale, Frank. The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press, 2015).

HISTORY OF TECHNOLOGICAL DEPENDENCY IN AFRICA

The critique that digital technologies embody, enable, or reproduce colonial power relations is not new. As early as the 1970s, debates around communication and technology were linked to questions of sovereignty, inequality and dependency. In March 1976, at the Non-Aligned Media Seminar in Tunis, representatives from 38 NAM states and 13 observers declared that “colonialist, imperialist and racist powers have created effective means of information and communication which are conditioning the masses to the interests of these powers.” This seminar built on earlier efforts of the Non-Aligned Movement (est. 1955), which, by its 1973 Algiers summit, had embraced the decolonization of information, communication, and culture as part of the wider struggle for independence.

The Tunis meeting marked the birth of the New World Information and Communication Order (NWICO): a call to redress global inequalities in media ownership, information flows, and infrastructure. Tunisian minister Mustapha Masmoudi highlighted the imbalance: “Almost 80 percent of the world news flow emanates from the major transnational agencies; however these devote only 20 to 30 percent of news coverage to the developing countries”. NWICO gained traction at UNESCO, culminating in the 1980 MacBride Report, which directly challenged the Western doctrine of “free flow of information.” The United States and the UK eventually withdrew from UNESCO in protest, but NWICO left a lasting intellectual and political legacy: it framed global communication as a site of structural inequality and technological dependency.

Building on these debates, communication scholars introduced the idea of electronic colonialism. Herbert Schiller’s Mass Communication and American Empire argued that U.S. commercial media systems were becoming instruments of empire. Thomas McPhail later extended this, defining electronic colonialism as “the dependent relationship of poorer regions on post-industrial nations, caused and established by the importation of communication hardware and foreign-produced software, along with engineers, technicians and related information protocols” (Jacob Mahlangu, ‘Technological Apartheid: The Digital Divide Between Africa and the West’ (Sagepub.com, 6th May, 2025) < https://advance.sagepub.com/doi/full/10.31124/advance.174652029.93416488/v1 >). This lens made clear that dependency was not only economic but also infrastructural and epistemic.

Parallel critiques arose in anthropology and development studies. Post-development theorists such as Arturo Escobar and James Ferguson argued that development projects often failed to empower local communities, instead re-entrenching colonial hierarchies. They identified technology as a key tool in this process, framed as a “solution” but often deployed in paternalistic ways that deepened dependency. ICT4D (Information and Communication Technologies for Development) initiatives of the late 1990s and early 2000s exemplified this tension. While promising to democratize knowledge and spur development, many projects replicated older patterns: reliance on imported technology, disregard for local context, and reinforcement of global asymmetries.

By the late 2000s, scholars in postcolonial computing extended these critiques to human–computer interaction (HCI). They demonstrated how design practices in “development tech” mirrored colonial flows: low-cost labour and raw materials from the Global South, transformed into finished products exported back under narratives of benevolence. The One Laptop Per Child (OLPC) project (< https://laptop.org/ > Accessed on 16th September, 2025) epitomized this, marketed as a humanitarian innovation but dependent on the feminized labour of Asian workers in global supply chains.

In 2013, Dal Yong Jin introduced platform imperialism, analyzing how U.S. tech giants like Google, Apple and Facebook exerted global dominance through platform capitalism, intellectual property regimes and cross-border expansion (Jin, Dal Yong, ‘“The Construction of Platform Imperialism in the Globalization Era.” Triple C: Communication, Capitalism & Critique. Open Access Journal For a Global Sustainable Information Society’ (Researchgate.net, January, 2013) < https://researchgate.net/publication/275652379_Jin_Dal_Yong_2013_The_Construction_of_Platform_Imperialism_in_the_Globalization_Era_Triple_C_Communication_Capitalism_Critique_Open_Access_Journal_For_a_Global_Sustainable_Information_Society_111_145- > Accessed on 16th September, 2025). His argument made explicit that digital platforms were not neutral infrastructures but instruments of geopolitical power.

These intellectual trajectories resonate strongly with dependency theory, advanced by Samir Amin, which argued that underdevelopment in the Global South is not accidental but structurally produced through dependence on the North. Applied to technology, this means Africa’s reliance on imported hardware, software, and infrastructures reinforces systemic subordination in the global digital hierarchy. Postcolonial thinkers like Frantz Fanon and Edward Said similarly highlighted how colonialism survives in cultural, psychological, and technological forms, keeping the Global South positioned as consumer rather than producer.

From NWICO to electronic colonialism, from ICT4D critiques to postcolonial computing and platform imperialism, the throughline is clear: each era has witnessed renewed forms of technological dependency. What changes are the technologies themselves (satellites, mass media, ICTs, platforms and now AI); the structural critique persists. Today’s debates on digital colonialism continue this intellectual lineage, reframing old concerns around sovereignty, extraction and dependency in terms of data, algorithms and artificial intelligence. Far from a rupture, this is the latest chapter in a long struggle for technological self-determination in Africa and the wider Global South.

AI POLICY TRENDS GLOBALLY AND IN AFRICA

Global AI policy is crystallizing around a few core themes: risk-based regulation of high-impact systems, the embedding of human rights and ethics principles (< https://2021-2025.state.gov/risk-management-profile-for-ai-and-human-rights/ > (State.gov, 25th July, 2024) Accessed on 10th September, 2025), and the development of technical standards to operationalize trustworthiness. The European Union’s AI Act (< https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai > Accessed on 10th September, 2025) illustrates this risk-based approach by classifying systems according to potential harm and imposing proportionate obligations, while still promoting innovation. Similarly, the OECD AI Principles (< https://oecd.ai/en/ai-principles > (OECD.ai) Accessed on 10th September, 2025), the NIST AI Risk Management Framework (US) (< www.nist.gov/itl/ai-risk-management-framework > (NIST.gov) Accessed on 10th September, 2025), and UNESCO’s global AI ethics recommendations (< www.unesco.org/en/artificial-intelligence/recommendation-ethics > (UNESCO.Org) Accessed on 10th September, 2025) provide international benchmarks centered on transparency, accountability, robustness, and human oversight.

USA, EU, CHINA’S PREFERENCES

National strategies, however, diverge. The United States favours voluntary, sector-specific frameworks to preserve innovation flexibility (Tatevik Davtyan, ‘THE U.S. APPROACH TO AI REGULATION: FEDERAL LAWS, POLICIES, AND STRATEGIES EXPLAINED’ (scholarlycommons.law.case.edu, 24th January, 2025) < https://scholarlycommons.law.case.edu/cgi/viewcontent.cgi?params=/context/jolti/article/1172/&path_info=auto_convert.pdf > Accessed on 10th September, 2025). China pursues a state-driven, techno-industrial strategy linking AI to national development goals (Kyle Chan, Gregory Smith, Jimmy Goodrich, Gerard Dipippo, Konstantin F. Pilz, ‘China’s Evolving Industrial Policy for AI’ (Rand.org, 26th June, 2025) < www.rand.org/pubs/perspectives/PEA4012-1.html > Accessed on 10th September, 2025). The EU relies on its regulatory power (“the Brussels effect”) to set de facto global standards that suppliers worldwide must meet (Marco Almada, Anca Radu, ‘The Brussels Side-Effect: How the AI Act Can Reduce the Global Reach of EU Policy’ (Cambridge.org, 19th February, 2024) < www.cambridge.org/core/journals/german-law-journal/article/brussels-sideeffect-how-the-ai-act-can-reduce-the-global-reach-of-eu-policy/032C72AEC537EBB6AE96C0FD90387E3E > Accessed on 10th September, 2025). Together, these approaches create a patchwork of norms that countries and companies must navigate. (To be continued).

THOUGHT FOR THE WEEK

“Like all technologies before it, Artificial Intelligence will reflect the values of its creators. So inclusivity matters – from who designs it to who sits on the company boards and which ethical perspectives are included” – Kate Crawford


The Oracle: The New Digital Colonialism: Navigating AI Policy Under Foreign Tech Dominance (Pt. 1)

By Prof Mike Ozekhome SAN

ABSTRACT

This article interrogates the intersection of Artificial Intelligence (AI), digital transformation and sovereignty in the African context, with particular focus on Nigeria. It critiques the growing dominance of foreign technologies in shaping the continent’s AI policies, innovation ecosystems and legal frameworks, often without commensurate local input or contextual grounding. The work warns that the unchecked proliferation of imported AI systems risks entrenching digital dependency, algorithmic inequality and policy misalignment with local constitutional values, especially the right to dignity, privacy and non-discrimination.

The author posits that Africa’s technological renaissance must not be outsourced to external actors whose platforms may embed biases, opaque logic and extractive data practices. He advocates for a homegrown model of AI governance rooted in the principle of “Ethics by Design”, one that reclaims human dignity and aligns technological progress with constitutional and cultural realities. The study highlights the Nigeria Data Protection Act 2023 as a positive, albeit preliminary, effort toward asserting regulatory control. However, it urges a more robust framework that includes mandatory data localization, algorithmic accountability and institutional capacity-building.

The paper further calls attention to the geopolitical dimensions of digital transformation, where Africa must negotiate its place not as a passive consumer but as an active co-creator of ethical, inclusive technologies. In conclusion, the author proposes a new social contract for the AI age, one that places human dignity, data sovereignty and indigenous innovation at the center of Africa’s digital future. Without this, foreign dominance in AI may reproduce colonial power asymmetries in digital form, undermining both democratic governance and developmental autonomy.

KEYWORDS: Artificial Intelligence and Digital Transformation, Regulatory Frameworks, Data Localization, Data Sovereignty, Algorithmic Accountability, Algorithmic Transparency, Ethics by Design, Foreign Tech Dominance, Digital Colonialism.

INTRODUCTION

In situating the arguments advanced in this article, it is essential to clarify certain operative terms that recur throughout our discourse. Artificial Intelligence, digital transformation and related regulatory concepts are often deployed with varying meanings across technical, legal and policy discourses. Without clear definitional grounding, the analysis of foreign technology dominance in Africa’s innovation ecosystem risks being blurred by semantic ambiguity.

Accordingly, the following section sets out key terms as used in this study, providing not only conventional definitions but also the contextual nuances most relevant to Africa’s socio-legal environment. These definitions are drawn from authoritative international sources, comparative regulatory frameworks and scholarly discourses and they are tailored to the themes of sovereignty, accountability and digital justice that underpin the critique of “new digital colonialism.”

Artificial Intelligence (AI)

This term refers to the field of computer science and engineering devoted to building systems capable of performing tasks that ordinarily require human intelligence, such as reasoning, learning, perception, decision-making and natural language processing (Cole Stryker, Eda Kavlakoglu, ‘What is Artificial Intelligence?’ (IBM.com, 9th August, 2024) < www.ibm.com/think/topics/artificial-intelligence > accessed on 9th September, 2025). It encompasses a broad set of techniques, including machine learning, deep learning, expert systems and natural language understanding, through which systems recognize patterns in data, build predictive models and adapt through feedback (< https://cloud.google.com/learn/what-is-artificial-intelligence > accessed on 9th September, 2025).
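The pattern-learning loop at the heart of machine learning can be illustrated with a deliberately simplified, hypothetical sketch: a tiny nearest-centroid classifier that builds a predictive model from labelled examples and then classifies new data points. This toy code is purely illustrative of the idea of “recognizing patterns in data” and does not represent any production AI system.

```python
# A deliberately simplified illustration of "learning patterns from data":
# a nearest-centroid classifier that builds a predictive model from examples.

def train(examples):
    """Compute one centroid (average point) per label from (point, label) pairs."""
    sums, counts = {}, {}
    for point, label in examples:
        acc = sums.setdefault(label, [0.0] * len(point))
        for i, x in enumerate(point):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc] for label, acc in sums.items()}

def predict(model, point):
    """Assign the label whose centroid lies closest to the new point."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], point))

# Toy data: two clusters the system was never explicitly programmed to separate.
examples = [((1.0, 1.0), "low"), ((1.2, 0.8), "low"),
            ((9.0, 9.0), "high"), ((8.8, 9.2), "high")]
model = train(examples)
print(predict(model, (1.1, 0.9)))   # prints "low"  (near the first cluster)
print(predict(model, (9.1, 8.9)))   # prints "high" (near the second cluster)
```

The legal significance of even this trivial example is that the decision rule is derived from data rather than written by a legislator or programmer, which is precisely why questions of accountability and bias arise at scale.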

AI powers a wide range of applications: autonomous vehicles, healthcare diagnostics, financial risk analysis, e-commerce personalization and governance tools. Beyond its technical utility, AI also raises profound legal and policy questions about accountability, ethics, bias, privacy and sovereignty.

Digital Transformation

Digital Transformation is the comprehensive integration of digital technologies, particularly artificial intelligence (AI), data analytics, cloud computing and automation, into every facet of economic, social and institutional life. It goes beyond mere digitization to fundamentally reshape how businesses, governments and societies operate, create value and deliver services.

In practice, digital transformation involves rethinking business models, optimizing operations and enhancing stakeholder experiences through data-driven decision-making. AI is its central driver: by automating routine processes, enabling predictive analysis, and personalizing interactions, AI not only improves efficiency but also generates entirely new modes of production, governance, and innovation.

At the societal level, digital transformation promises economic growth, financial inclusion and more adaptive public institutions. Yet it also introduces vulnerabilities such as cyber-security threats, dependency on foreign digital infrastructures and risks of algorithmic biases. In regions like Africa, where much of the enabling infrastructure is controlled by foreign technology providers, digital transformation intersects directly with questions of sovereignty, regulatory autonomy and the equitable distribution of technological benefits.

Regulatory Frameworks (for AI and Digital Technologies)

This concept refers to the system of laws, policies, institutions and enforcement mechanisms that govern the design, deployment and use of emerging technologies. Such frameworks establish permissible uses, set technical and ethical standards, protect fundamental rights (privacy, dignity, non-discrimination) and ensure accountability of both domestic and foreign actors operating within a jurisdiction.

In the context of AI, regulatory frameworks commonly rest on principles of algorithmic accountability, transparency, fairness, human oversight and data protection. They are meant to balance innovation with safeguards against harms such as bias, opacity, or exploitative data practices.

Comparatively, the EU’s AI Act (< https://artificialintelligenceact.eu/ > (Artificialintelligenceact.eu) Accessed on 9th September, 2025) exemplifies a risk-based approach, regulating AI systems according to their potential impact on rights and society. In Nigeria, emerging efforts such as the Data Protection Act 2023 (< https://placng.org/i/wp-content/uploads/2023/06/Nigeria-Data-Protection-Act-2023.pdf > (Placng.org) Accessed on 9th September, 2025), the Startup Act, the Advertising Regulatory Council of Nigeria (ARCON) Act, and initiatives like the National Centre for Artificial Intelligence and Robotics (NCAIR) (< https://ncair.nitda.gov.ng/ > Accessed on 9th September, 2025) under the National Information Technology Development Agency (NITDA) (< https://nitda.gov.ng/ > Accessed on 9th September, 2025) signal movement toward structured oversight. Together, these instruments reflect attempts to localize data control, regulate AI-related services and guide innovation within Nigerian values and constitutional guarantees.

For Africa, the challenge is sharper: regulatory frameworks must also contend with foreign technology dominance, ensuring that imported AI systems and platforms are adapted to local contexts, protect sovereignty and advance developmental priorities rather than replicate external power asymmetries.
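The risk-based logic that such frameworks adopt can be caricatured in a few lines of code. The tier names below echo the EU AI Act’s broad categories, but the keyword matching and the obligations table are purely illustrative assumptions of this author’s sketch, not the Act’s actual legal tests.

```python
# A hedged, hypothetical sketch of risk-based triage: systems are sorted into
# tiers, and regulatory obligations scale with the tier. The keyword matching
# below is illustrative only; real classification requires legal analysis.

OBLIGATIONS = {
    "unacceptable": "prohibited",
    "high": "conformity assessment, human oversight, logging",
    "limited": "transparency disclosures",
    "minimal": "voluntary codes of conduct",
}

def classify(use_case: str) -> str:
    """Map a described use case to an illustrative risk tier."""
    text = use_case.lower()
    if "social scoring" in text:
        return "unacceptable"
    if any(k in text for k in ("credit", "hiring", "medical", "policing")):
        return "high"
    if "chatbot" in text or "deepfake" in text:
        return "limited"
    return "minimal"

for case in ("social scoring of citizens", "AI-assisted hiring",
             "customer chatbot", "spam filter"):
    tier = classify(case)
    print(f"{case}: {tier} risk -> {OBLIGATIONS[tier]}")
```

The point of the sketch is structural: obligations are proportionate to potential harm, so a hiring tool attracts far heavier duties than a spam filter, while some uses are banned outright.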

Algorithmic Transparency and Accountability

These are complementary principles designed to ensure that algorithmic systems operate in ways that are both understandable and responsible. Transparency requires that the processes, logic, data inputs and decision rules shaping algorithmic outcomes be visible and interpretable to users, regulators and other affected stakeholders (< https://en.wikipedia.org/wiki/Algorithmic_transparency > Accessed on 9th September, 2025). It is a precondition for effective oversight, enabling independent review, auditing and informed consent. While transparency alone does not guarantee fairness, it makes unfair or biased practices detectable and open to challenge. Its key components include explainability, documentation of data sources, model interpretability and disclosure of decision pathways, with global benchmarks such as the European Union’s “right to explanation” and the European Centre for Algorithmic Transparency (ECAT) illustrating its growing importance.

Accountability, on the other hand, extends beyond visibility to place direct responsibility on the organizations that design, deploy or rely on algorithms for the outcomes they generate (< https://en.wikipedia.org/wiki/Algorithmic_accountability > (Wikipedia.org) Accessed on 9th September, 2025). It encompasses proactive measures such as algorithmic impact assessments, audits and bias testing, as well as reactive mechanisms including remedies for harm, liability before regulators or courts, and obligations to correct discriminatory or harmful results.
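One concrete bias test that such audits commonly employ is a comparison of selection rates across demographic groups, in the spirit of the “four-fifths rule” familiar from employment-discrimination practice. The data and threshold below are illustrative assumptions for a sketch, not a legal standard.

```python
# A minimal sketch of one common bias test used in algorithmic audits:
# comparing selection (approval) rates across groups. Data is hypothetical.

def selection_rate(decisions):
    """Fraction of positive outcomes in a list of True/False decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical loan-approval outcomes for two demographic groups.
group_a = [True, True, True, False, True]    # 80% approved
group_b = [True, False, False, False, True]  # 40% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # prints 0.50
if ratio < 0.8:  # the conventional four-fifths threshold
    print("flag for review: selection rates differ materially across groups")
```

An audit of this kind makes an opaque system’s aggregate behaviour visible and challengeable even when its internal logic remains a “black box,” which is why transparency and accountability operate as complementary, not interchangeable, principles.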

Taken together, transparency and accountability form the backbone of ethical AI governance. They ensure not only that algorithmic systems can be scrutinized, but also that those who use them remain answerable for their consequences, thereby aligning technological innovation with legal standards, human rights, and democratic values.

ETHICS BY DESIGN

This is a proactive philosophy and operational approach that integrates ethical principles such as fairness, privacy, human dignity, non-discrimination and accountability directly into the design and development of technological systems, especially AI (Philip Brey, Brandt Dainow, ‘Ethics by Design for Artificial Intelligence’ (Springer.com, 21st September, 2023) < https://link.springer.com/article/10.1007/s43681-023-00330-4 > Accessed on 9th September, 2025). Unlike “ethics as compliance,” which treats ethics as a regulatory checkbox, Ethics by Design embeds ethical impact assessments, stakeholder consultations, bias testing and data protection safeguards into the technical architecture and governance frameworks from the outset.

Its purpose is to ensure that technologies are not only efficient but also equitable and humane, preventing harms such as systemic bias, privacy violations, or opaque decision-making. Global concerns around algorithmic discrimination, data misuse, and failed digital rollouts underscore the risks of neglecting this approach. In contexts like Nigeria, Ethics by Design must go beyond code and courtrooms, extending to grassroots participation, inclusive innovation and civil society engagement to ensure that AI systems respect democratic values of dignity, autonomy and justice.

Foreign Tech Dominance

This refers to the situation in which a small number of large foreign technology firms hold disproportionate influence over infrastructure, platforms, data, algorithms, investment and policy in sectors like AI in Africa, often shaping agendas, norms and capacities, sometimes at the expense of local innovation, control or sovereignty.

This dominance can manifest through cloud services, data storage and processing, algorithmic platforms, AI model deployment, foreign intellectual property and foreign regulatory templates.

Implications include dependency, technology transfer gaps, limited local capacity building, reduced bargaining power, risks of exporting bias, unfair terms, and potentially extractive data practices.

Digital Colonialism

This refers to the new forms of control, dependency and power asymmetry in the digital and AI sphere, where developing or formerly colonized societies remain subject to external influence through foreign-owned infrastructures, platforms, algorithms, investment and data flows. Like classic colonialism, which relied on railways and trade routes to extract value, digital colonialism operates through proprietary software, corporate cloud systems and centralized internet services that capture, exploit and commodify local data for external profit.

This phenomenon compromises digital sovereignty when critical infrastructural, legal, or algorithmic decisions are determined abroad, raising urgent questions about who sets global standards, whose values are embedded in AI systems, who profits from data, and whether fundamental rights (privacy, dignity and non-discrimination) are preserved. Scholars have described it as a continuation of extractive logics under new technological guises, with Big Tech corporations imposing cultural norms, business models and algorithmic biases designed to maximize profit while presenting them under the rhetoric of “progress,” “development,” or “connecting people.”

Digital colonialism frames the global digital order as one in which the Global South risks remaining a consumer and data supplier, rather than an equal co-creator of the technologies that increasingly govern economic and social life. (To be continued).

THOUGHT FOR THE WEEK

“Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we’ll augment our intelligence” (Ginni Rometty).
