Examining global AI governance, Africa’s disproportionate risks, and South-South cooperation.
The United Nations' Pact for the Future and Global Digital Compact highlight artificial intelligence (AI) as a rapidly evolving technology with significant implications for peace and security, one that demands global governance. The risks and benefits of AI are unevenly distributed, however, and Africa, which hosts only about 2% of the world's data centers, faces greater exposure to harms: technological dependencies, harmful data extraction, and exploitative work conditions. This policy brief argues that global AI governance, while essential, is insufficient without robust regional and national responses. From an African perspective, the brief emphasizes the importance of global governance for knowledge exchange and for the regulation of lethal autonomous weapons, and it draws on case studies from Kenya and Ethiopia to illustrate the consequences of weak institutions. It concludes with three recommendations for African states: integrate South-South AI cooperation into national strategies, advocate for the African Union's participation in UN discussions on lethal autonomous weapons, and develop independent national AI oversight mechanisms.
UN Secretary-General António Guterres emphasized in 2020 the transformative yet uncertain trajectory of digital technology, questioning its impact on human dignity, equality, and security. Recent calls to pause the most powerful AI experiments underscore the technology's potentially existential implications and reinforce the need for global AI governance. The UN's 2019 High-Level Panel on Digital Cooperation and the subsequent Global Digital Compact (GDC), adopted alongside the Pact for the Future at the 2024 UN Summit of the Future, establish a comprehensive multilateral framework for AI. These documents advocate for digital cooperation, sustainable development, action on the climate crisis, and bridging the technological divide between the Global North and South, with particular focus on inclusive digital economies, human rights, responsible data governance, and equitable AI governance. Africa, with its minimal data center infrastructure, is uniquely vulnerable to AI risks, including technological dependencies and exploitative data practices. This policy brief contends that global governance must be complemented by strong national and regional commitments, exemplified by the African Union's (AU) Continental AI Strategy. The brief prioritizes knowledge sharing, regulation of lethal autonomous weapons, and robust state institutions as critical for Africa's development, peace, security, and human rights, especially given the continent's growing AI deployment amid significant technical and practical constraints.
Global AI governance mechanisms are crucial for Africa in two primary areas: knowledge exchange and technology transfer, and the regulation of lethal autonomous weapons systems (LAWS). On knowledge exchange, international platforms such as the UN Scientific Panel on AI and the Global Dialogue on AI Governance, which actively include African experts, help bridge knowledge asymmetries and foster the development of trustworthy AI systems. Both the Pact for the Future and the GDC advocate North-South and South-South collaboration in capacity building, research, access to open AI systems and data, and culturally localized AI solutions, with initiatives such as the Masakhane African-languages AI community exemplifying this support. On LAWS regulation, the ongoing use of drones and advanced weapons in volatile African regions (e.g., Libya, Ethiopia, Mali, Sudan) underscores an urgent need for global governance. Autonomous weapons, capable of independent targeting, raise profound ethical and legal questions. The 2024 UN General Assembly resolution on LAWS reflects a growing international consensus for a two-tiered approach: prohibiting certain systems and regulating others under international humanitarian law. Advocates have called for a new treaty by 2026 to restrict autonomous targeting and deployment. Global governance is vital to prevent conflict escalation, uphold human dignity, and ensure the responsible development and deployment of these weapons, especially in conflict-affected states where they can easily proliferate among state and non-state actors.
Effective AI governance depends on strong state institutions capable of upholding rights and enforcing regulations, but cases in Kenya and Ethiopia demonstrate how weak states can misuse AI. In Kenya, the June 2024 protests against the Finance Bill revealed a dual use of AI. Activists effectively employed AI chatbots and translation tools for public education and democratic engagement, disseminating information across 68 languages and coordinating a 'digital insurrection.' Conversely, the Kenyan state exploited AI to spread disinformation, using organized accounts and AI-generated imagery to promote pro-government narratives and falsely accuse protestors of foreign meddling. The government further undermined democratic principles by implementing internet shutdowns and allegedly detaining and disappearing activists, showing how AI-enabled disinformation can be paired with offline repression. Similarly, Ethiopia's 2025 visual propaganda campaign following the inauguration of the Grand Ethiopian Renaissance Dam leveraged AI to create synthetic videos and patriotic songs promoting territorial ambitions, specifically the acquisition of Eritrea's Port of Assab. These AI-generated narratives, often featuring embellished visuals and altered speeches by public leaders, significantly escalated regional tensions, a serious concern in an already destabilized region. These examples underscore the critical need for robust, independent state institutions to prevent AI misuse and ensure accountability.
The African Union (AU) is actively addressing AI governance challenges across the continent, recognizing the need for regional support to mitigate risks and maximize benefits. Its July 2024 Continental Artificial Intelligence Strategy outlines a 'people-centric, development-oriented, and inclusive' framework, emphasizing Africa's proactive role in shaping AI governance. A core principle of the strategy is 'local first': the AU commits to supporting member states in developing national AI strategies, establishing independent oversight institutions to monitor compliance and address violations, and fostering regional and international cooperation to enhance AI capabilities. While some nations, such as Rwanda and Nigeria, have published AI strategies, major players such as Kenya and South Africa are still drafting theirs. Conflict-affected countries in particular lack sufficient AI readiness and national strategies, leaving them highly vulnerable. The strategy's call for independent oversight institutions is critical, as existing mechanisms (such as national task forces) are often government-affiliated, limiting accountability when the state itself commits AI-related rights violations. Independent bodies are essential for ensuring data security, protecting human rights, and enforcing ethical AI deployment across the continent.
Effective AI governance for Africa requires coordinated strategies at the global, regional, and national levels.

First, on partnerships for knowledge sharing and technology transfer: African states should proactively leverage UN commitments and platforms such as the UN Office for South-South Cooperation, advocating for 'next-generation' partnerships that promote knowledge exchange, capacity development, youth workforce training, and collaborative research with peer countries while reducing financial barriers. The AU has a crucial role in facilitating these efforts for its members, particularly those with low AI readiness, and in working with the Group of 77 to tailor AI frameworks to the Global South's needs.

Second, on global African engagement on LAWS: to mitigate the risks of lethal autonomous weapons systems and influence global norms, African states must clearly articulate their positions and participate actively in high-level multilateral discussions. This includes more African states acceding to the Convention on Certain Conventional Weapons (CCW). AU participation in forums such as the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (GGE on LAWS) is vital to represent non-CCW members and contribute to global standard-setting.

Third, on independent AI oversight institutions: African states must heed the Continental Strategy's call to establish independent oversight bodies that ensure AI accountability. The Kenyan and Ethiopian cases illustrate state abuses of AI, and mechanisms independent of the state are needed to protect individual rights and hold governments to account. Where financial and implementation constraints arise, granting existing institutions independent mandates offers a practical alternative for upholding ethical AI governance nationally.
The comprehensive governance of AI at international, regional, and national levels is essential for Africa's future, offering the potential to significantly advance the continent while mitigating substantial risks. While global frameworks provide crucial guidance and support for development, African nations ultimately hold the responsibility to establish robust internal systems and institutions. These domestic structures are vital for curbing AI-related harms, safeguarding human rights, and ensuring equitable access to technological innovation. The African Union, through its strategic position, can effectively harmonize continental frameworks to promote economic development and protect the rights of its people.