Article

Is Africa Ready for AI? Digital Information Privacy Awareness and AI Adoption on the Continent

Independent Researcher, 69000 Lyon, France
Soc. Sci. 2026, 15(3), 155; https://doi.org/10.3390/socsci15030155
Submission received: 6 October 2025 / Revised: 2 February 2026 / Accepted: 7 February 2026 / Published: 1 March 2026
(This article belongs to the Special Issue Technology, Digital Transformation and Society)

Abstract

Respect for privacy has been identified as a guiding principle for the development and use of responsible or ethical artificial intelligence (AI), but also as an endangered value in many countries, including those in Africa. Yet, on the African continent, awareness of personal information privacy remains in its early stages, and awareness-raising initiatives are still limited, fragmented, and not government-driven. Given the current global and local enthusiasm surrounding the adoption and development of AI technologies, I examine the key interrelated factors driving the poor digital information privacy awareness and limited awareness-raising in African countries. Key factors include limited digital literacy; the widespread use and reliance on free and freemium services offered by global North digital technology multinationals; the lack of harmonized data protection legislation and regulation across the continent, which facilitates corporate neocolonialism; and the general apathy of many African governments towards privacy awareness-raising, given their own involvement in privacy-violating surveillance. I then recommend strategic actions applicable to diverse stakeholders that could contribute towards reinforcing digital information privacy awareness, particularly within the context of the ongoing adoption and anticipated widespread use of AI technologies on the continent.

1. Introduction

In the late afternoon on a weekday, I was in a virtual meeting with a colleague, Luke,1 a project partner located in Spain, with whom I was working on an e-learning project. Shortly after, we were joined by Morris, an edtech educator and edupreneur acquaintance located in Kenya. We had scheduled the meeting to walk him through the functioning of the application we were developing. On seeing the ‘Privacy awareness’ label on one of the learning modules, Morris immediately questioned its relevance: “Privacy doesn’t mean much to Kenyans. Privacy? In Kenya? Njeri, we’re not yet there. Security, yes. Privacy, no.” To Morris, our focus on digital security was topical and relevant, but focusing on raising digital and information privacy awareness in Kenya was premature.
Morris’s assertion was not a total surprise. Just a few weeks earlier, he had sent an AI virtual assistant into our meeting room ahead of his own arrival—without informing us or asking for our consent, either before or during the meeting. I was one of four participants present, and it was not until I asked about the assistant’s presence that Morris briefly explained that he had recently adopted the tool to take notes and capture images in meetings he attended. After I clarified that I did not wish to have the AI assistant take notes or images during our meeting, Morris acknowledged that image-taking “might be sensitive” and agreed to have us remove the assistant. However, for our next meeting several weeks later, he again sent in the virtual assistant in the very same way. This time, we neither waited for him to join nor hesitated: we removed it again. In an ironic twist, Morris, a computer science graduate and private sector edtech educator, also taught digital citizenship to young students in Kenya. Rather than take offense, I saw it as a sociological opportunity, highlighting how even educators specializing in digital citizenship might sometimes overlook consent protocols in practice.
My experience with Morris, as relayed through this micro autoethnography,2 reflects a broader socio-technical pattern observable across many African digital spaces: in online interactions, the norms around consent, transparency, and data governance are generally ambiguous and evolving. For example, consent is usually assumed rather than explicitly obtained, especially when new technologies are framed as progress, development, or tools for the public good. Hence, Morris’s actions reflect a blind spot—a well-intentioned push to embrace the conveniences offered by emerging technologies, which inadvertently overlooks the ethical questions that need to be at the forefront of any responsible use of AI technologies. This paradox stems, in part, from the rapid diffusion of global technologies into local social contexts, where the lack of digital literacy, information privacy awareness, and regulatory frameworks underscores a wider state of unreadiness.
Indeed, as is the case in many countries around the world, the general public in African countries is far from adequately prepared to embrace AI technologies. On the continent, this reality manifests itself in a variety of ways. On the one hand, many population segments—including entrepreneurs and tech enthusiasts—are openly adopting and accepting AI technologies. Among them are developer and designer communities, many of whom have limited knowledge and awareness of the link between possible human rights violations and AI (Gaffley et al. 2022). On the other hand, there is a reluctance to embrace AI technologies among other population segments that perceive them as harmful to individuals and society, as noted in a study conducted by Lloyd’s Register Foundation (2023), and through my personal experiences.
Such distrust is not baseless. It is well established that alongside their potential benefits, the development and use of AI systems inevitably entail a number of risks. Among them are privacy risks, similar to those observed during the past two and a half decades of internet commercialization and the relatively unlimited collection of data. However, the difference between the preceding and current technological periods lies in the scale of the amassed data and the number of exacerbated and new risks and potential harms that come with AI technologies (King and Meinhardt 2024; Markelius et al. 2024).
The current situation raises the question of what is being done to facilitate the public’s informed and confident adoption of AI technologies in Africa. My analysis of some national AI strategies and the continental strategy shows that regulatory approaches—policy and indirect legislation—are the predominant means being adopted to address AI-related risks. To date, national governments and the African Union (AU) have focused on defining guiding ethical principles and normative frameworks for the responsible adoption of AI. However, while risks related to individuals’ privacy are a prominent part of AI-related risks, they are not being addressed in ways that directly empower and encourage the active involvement of the public. More specifically, African governments and the AU recognize privacy risks, but education initiatives and awareness campaigns remain limited, fragmented, and are not government-driven.
And yet, behavioral studies and evidence from numerous domains—for example, cybersecurity and public health (in particular COVID-19 and HIV prevention)—have shown that accurate knowledge and appropriate attitudes serve as catalysts for creating understanding and initiating change in people’s behavior and practices (De Kok et al. 2020; Fana 2021; Nwagbara et al. 2021). I contend that in the context of the anticipated widespread use of AI technologies, education and awareness-raising are also necessary safeguards. Accurate and adequate knowledge would provide the foundation for individuals and communities to understand the associated risks and identify privacy threats, while their attitudes shape their behaviors, practices, and actions.
Hence, given the growing enthusiasm surrounding AI, it is essential to ask what is contributing to the poor digital information privacy awareness3 and limiting awareness-raising on the continent. Although this paper focuses on Africa, personal information privacy awareness in the context of AI adoption is globally relevant and presents particular challenges in the global South countries. Therefore, the discussions and insights in this paper are also relevant to readers interested in other locations or geopolitical unions in the global South.
The rest of this article proceeds as follows: in the first part, I briefly present my methodological approach. I then focus on the literature on ethical AI—or responsible AI—in relation to Africa, situating information privacy and awareness-raising within this context. In the next section, I briefly situate privacy and privacy awareness within global privacy scholarship. I then offer some background on AI technologies and adoption trends in Africa. In the second part, I examine the key interrelated factors that counteract information privacy awareness and limit awareness-raising on the continent, more so in relation to AI. I specifically identify and discuss limited digital literacy; the widespread use and dependency on free or freemium services developed in the global North, and that incorporate AI technologies; the absence of harmonized data protection laws, which facilitates corporate neocolonialism; and African states’ apathy towards raising privacy awareness, given their own involvement in privacy-violating surveillance and control over their populations. Finally, before concluding, I offer some strategic, multi-stakeholder approaches and practical suggestions that could contribute towards raising the levels of privacy awareness among the general public.

1.1. Methodological Approach

This paper is grounded in a qualitative document analysis (Bowen 2009) that I conducted between June 2024 and February 2025 to explore the place assigned to personal information privacy awareness and AI awareness in public policy across African countries. It is important to note that discussing the findings from that study is neither the aim nor the focus of this paper. Nevertheless, I provide information about the aforementioned study to facilitate an understanding of the larger context shaping the focus and discussions in this paper.
I used a purposeful sampling strategy (Creswell and Creswell 2018) to select national and regional AI strategy/policy documents, other digital technology-related policy documents, and grey literature (such as working papers and think tank reports). For a more conclusive analysis, I further filtered the documents and established the inclusion criteria as follows: only official strategy/policy documents directly related to artificial intelligence and formally adopted by national African governments or the African Union. The documents I selected for analysis were the finalized national AI strategy and policy documents of Mauritius, Egypt, Rwanda, Senegal, Benin, and the African Union (Ministère de la Communication, des Télécommunications et de l’Économie Numérique, République du Sénégal 2023; National Council for Artificial Intelligence 2021, 2025; Ministry of ICT and Innovation, Republic of Rwanda 2023; Ministère du Numérique et de la Digitalisation, République du Bénin 2023; Working Group on Artificial Intelligence 2018; African Union 2024), published between 2018 and January 2025. All but one were available in English. I excluded other draft national AI policy proposals or non-binding AI policy frameworks,4 except for the Artificial Intelligence for Africa Blueprint (Smart Africa 2021).5
Based on the research question, I defined an initial coding scheme, deductively structured around two categories: information privacy awareness and AI awareness. I reviewed all of the selected documents manually using keyword-based scanning. Each identified keyword occurrence was followed by a contextual reading of the surrounding text to assess thematic relevance. Keywords included privacy, information privacy, privacy awareness, personal data, personal data protection, AI literacy, general public, public awareness, sensitization, public involvement, and public participation.6
I coded segments only if they referred specifically to personal information privacy awareness or AI awareness in relation to the general public. Segments with other or unrelated uses of the keywords were excluded, and ambiguous cases were annotated for review to ensure consistency. This approach allowed me to identify both direct and indirect references to the two focus areas within the policy texts. On the whole, I analyzed policy documents with attention to policy context, policy text, and policy consequences (Cardno 2018; Taylor et al. 1997).
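The deductive, keyword-based scanning and contextual reading described above were carried out manually. For readers interested in how such a review might be scaled, the procedure can be sketched programmatically. The snippet below is a minimal, hypothetical illustration only (the function name, the 200-character context window, and the sample excerpt are my own constructions, not part of the study's protocol), assuming plain-text versions of the policy documents:

```python
import re

# Keywords from the deductive coding scheme, spanning the two
# categories: information privacy awareness and AI awareness.
KEYWORDS = [
    "privacy", "information privacy", "privacy awareness",
    "personal data", "personal data protection", "AI literacy",
    "general public", "public awareness", "sensitization",
    "public involvement", "public participation",
]

def keyword_contexts(text, keywords, window=200):
    """Return each keyword occurrence together with the surrounding
    text, to support the contextual reading of each hit. `window` is
    the number of characters kept on each side of the match."""
    hits = []
    for kw in keywords:
        for m in re.finditer(re.escape(kw), text, flags=re.IGNORECASE):
            start = max(0, m.start() - window)
            end = min(len(text), m.end() + window)
            hits.append({"keyword": kw, "context": text[start:end]})
    return hits

# Example: scan a short, invented policy excerpt for two keywords.
excerpt = ("The strategy promotes public awareness of AI and the "
           "protection of personal data across all sectors.")
for hit in keyword_contexts(excerpt, ["public awareness", "personal data"]):
    print(hit["keyword"])
```

Each returned context would then still require human judgment to decide whether the segment genuinely refers to the general public, as per the inclusion criteria above.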
Where raising public awareness of AI is concerned, the analysis revealed that across the five national AI strategy documents, engagement with it varied from country to country. Two documents contained only a single cursory mention with no elaboration—implicit in the case of Mauritius7 and explicit in that of Benin—while the other three included explicit mentions with elaboration. In this last category, engagement was further differentiated with regard to whether promoting public awareness of AI is considered a pillar of national AI adoption, on equal footing with other factors (such as data infrastructure development, human capital and capacity building, research, innovation and partnership development, and governance framework elaboration and implementation).8 Across the documents, raising public AI awareness is linked to encouraging policy adoption, promoting research and development, or developing national AI ecosystems, rather than to enabling the general public’s understanding and involvement. Where personal information privacy awareness-raising for the general public is concerned, its absence is notable in all five national strategy/policy documents. This finding suggests a general regulatory-oriented rather than people-centered approach9 and informs the focus of this paper.
In contrast, the African Union’s Continental AI Strategy (African Union 2024) places emphasis on public education as a core element of responsible AI adoption. It stresses the need for both formal education and informal efforts to help people throughout Africa understand how AI works and impacts their lives.10 A key focus is media and information literacy, which, the strategy acknowledges in the context of a brief reference to people’s loss of privacy due to free services, can, if done effectively, help people understand how their personal data is collected and used (African Union 2024).
In this paper, I build on the aforementioned findings to explore the factors hindering digital information privacy awareness and limiting awareness-raising on the continent within the context of AI adoption. I take an interpretive approach (Denzin 1994; Wiesner 2022), making use of my lived experience and observation of personal data and information privacy practices in countries and supranational unions in Africa and Europe.
Personal experience is relevant for understanding structural privacy norms. As Proferes (2022) has argued, privacy norms are not static or universally agreed upon—they emerge and evolve through people’s everyday interactions with technology, shaped by social, cultural, and technical factors. Users of (new) digital technologies actively participate in constructing privacy expectations (often without formal guidance) through their use of platforms and tools in everyday socio-technical interactions. Hence, personal experience offers a concrete view into how structural norms are being improvised, negotiated, reframed, or overlooked in practice.11 Also, my approach is critical, underpinned selectively by science and technology studies (Bijker et al. 2012), critical technology studies (Feenberg 1999; Winner 1980), and postcolonial dependency theories (Rodney 1973; Amin 1974).
As a caveat, relying predominantly on documentary data for this paper presents several limitations. First, the documents—national and regional strategy documents, other policy documents, news articles, and other grey literature—were not originally written for research purposes. As a result, the information they contain was not necessarily adequate or directly relevant to the focus of my study. In addition, determining the time frame of certain documents was not always straightforward and required extra effort to accurately situate them chronologically. For example, some statistical data could not be verified due to a lack of methodological transparency and had to be excluded from the analysis. Also, the availability of data related to countries is uneven—some countries have a significant amount of documentation, while others have little. Finally, it is worth considering that even the publicly available sources used for this paper are very likely to reflect the biases of their authors, and such biases may not necessarily be easily identifiable. All these limitations are, of course, not unique to the present study—similar challenges associated with the use of documentary data have been discussed by other researchers in various fields (Ahmed 2010; Cardno 2018). Incorporating interviews or other forms of primary data generation methods could have mitigated some of these issues.

1.2. Global AI Guidelines and Responsible AI in Africa

It is necessary first to understand the concept of AI as used for the purposes of this paper. On reviewing any portion of scientific literature on artificial intelligence, one is bound to notice that AI is considered a science, a study, a branch of computer science, as well as an ability (Kaplan and Haenlein 2019; McCarthy 2007; Minsky 1968; Rich 1983; Russell and Norvig 2020; Winston 1993). Here, artificial intelligence is a broad term that includes a number of technologies that enable computers or computerized devices to perform tasks requiring intelligence that imitates or surpasses human abilities. These technologies and techniques serve different purposes (Hintze 2016; Russell and Norvig 2020) and include machine learning (where systems learn from data), natural language processing (where computers understand and respond to human language), robotics (physical task automation), and computer vision (visual data interpretation).
Turning to the current literature on AI in Africa, it is worth noting that it concentrates primarily on responsible or ethical AI (see CIPIT 2023; Eke et al. 2023a, 2023b). This focus stems from both global dialogues on the ethical development of AI technologies and the growing expectation of widespread AI adoption across the continent. At the global level, ethical AI has generated much discussion and literature over the past four years. This is evident from the plethora of papers authored by private companies, research institutions, and public sector organizations across different (predominantly) global North regions and sectors, eighty-four of which were reviewed by Jobin et al. (2019). In their extensive analysis of these documents12 providing principles or guidelines on ethical AI, Jobin et al. (2019) identified eleven key ethical principles: transparency, fairness and justice, non-harm (non-maleficence), accountability, privacy, doing good (beneficence), individual freedom and autonomy, trust, sustainability, human dignity, and solidarity. The reviewed literature converged around five principles: transparency, justice and fairness, non-maleficence, responsibility, and privacy (Jobin et al. 2019).
Relatedly, African researchers and regional research organizations have pointed out numerous challenges and ethical issues regarding the development and use of AI technologies in Africa. Key challenges they identified include the notable absence of African perspectives and contributions to the global AI ethics debate, the unavailability of local data or its (poor) quality, inadequate digital and telecommunication infrastructure, unreliable electricity supply, limited internet connectivity, low digital literacy within and between countries, a shortage of professionals trained in AI technologies, and not least in importance, financial constraints and nonexistent or inadequate policies (CIPIT 2023; Eke et al. 2023a, 2023b; Gadzala 2018; Okolo et al. 2023; Smart Africa 2021).
With regard to ethical issues, these and other researchers have noted that, as elsewhere in the world, the use of AI systems in African countries comes with the risk of perpetuating bias and discrimination (stemming from the data used to train the AI models), exacerbating unequal access to technology, and posing risks to transparency, accountability, cultural sensitivity, autonomy, and human rights (Buolamwini and Gebru 2018; Okolo et al. 2023). Additional ethical considerations have been identified as needing particular attention (Borokini et al. 2023; CIPIT 2023; Okolo et al. 2023; Smart Africa 2021), and include aggravated unemployment and the potential for economic displacement of local workers due to automation or a digital skills gap, as well as the environmental impact of AI.
Where information privacy is concerned, it is worth noting that in the Africa-focused responsible AI literature, mention of privacy more often than not comes paired with surveillance. The dominant view holds that privacy and surveillance concerns do and would arise from the extensive data that AI systems require to function effectively. Consequently, researchers have underscored data protection, individual consent, and potential acts of surveillance in which governments and corporations may engage, as key concerns (Okolo et al. 2023). Nevertheless, it is important to note that engagement with privacy remains superficial at best across the research literature and other related publications focusing on AI in Africa. In particular, little if any attention is given to the issue of information privacy awareness, despite the evident need for awareness-raising among the general public in African countries (especially with regard to the adoption and development of AI tools, products, and systems), or to how such awareness-raising can concretely be done.

1.3. Privacy and Privacy Awareness in Global Privacy Scholarship

Any attempt at a precise definition of privacy would be reductive, given the long history of philosophical, legal, anthropological, and sociological engagement with the concept (Roessler and DeCew [2002] 2023). Two decades ago, Solove (2006) described it as “a concept in disarray” whose meaning “nobody can articulate”, highlighting the persistent complexity and ambiguity surrounding the concept.13 A number of theories and frameworks have nonetheless been articulated and continue to shape how privacy is understood and managed. I will not go into detail about them here. Instead, I will briefly mention three intertwined conceptual trends in global privacy scholarship14 that can be considered responses to the complexities of increasingly digital, networked, and data-intensive socio-technical environments within which information practices occur.
First, scholarly focus is increasingly moving beyond abstract, absolute, and static definitions of privacy, towards more concrete, practice-oriented understandings and solutions. For instance, in response to the limitations in traditional privacy conceptualizations, Solove (2006) argued for the necessity to redirect privacy debates away from abstract definition, towards a systematic examination of the harms and practices that give rise to privacy concerns. Exemplary of this shift is his development of a taxonomy of privacy—a framework to help identify and categorize privacy harms, intended to be useful for law and policymaking.
Second, and closely related to the aforementioned shift, is a growing recognition that individual-centered conceptions of privacy (focusing on individual disclosure and control over personal information) are insufficient in addressing the complexities of information flows. The theory of contextual integrity formulated by Nissenbaum (2004) has gained currency in relation to these two shifts. It reconceptualizes privacy in terms of the appropriateness of information flow, grounded in social context and norms, rather than reducing privacy solely to the information disclosure decisions of individuals. Other scholars, among them Floridi (2016a) and Suh and Metzger (2022), de-center the individual even further, emphasizing groups as data subjects and rights holders.
Third, the multi-dimensionality of privacy is increasingly recognized within broader interdisciplinary scholarship. Privacy is understood as involving the prevention of harm arising from information practices, among them information disclosure (Solove 2006); individual and group control over personal information (Floridi 2016a; Petronio 2002; Suh and Metzger 2022; Westin 1967 cited in Suh and Metzger 2022); social norms and contexts of disclosure (Nissenbaum 2004; Proferes 2022); and ethics, power relations, and human dignity (Aftab 2024; Floridi 2016b; Solove 2025). Scholars, notably Wisniewski and Page (2022), have thus argued that due to privacy’s multi-dimensionality, no single theory or framework fully captures its complexity, and drawing on multiple theories and frameworks—depending on the context—could provide a richer understanding of privacy and privacy violations.
These contemporary refinements represent the latest stage of a much longer intellectual and legal history, and to appreciate how the multifaceted notion of privacy developed, it is useful to look briefly back to its origins. The origin of privacy as a concept is attributed to the influential work of Louis Brandeis and Samuel Warren in North America (Czubik 2016; Halpérin 2005). The two are credited with articulating the modern understanding of privacy rights in their article ‘The Right to Privacy’, which was published in the Harvard Law Review in 1890. In the article, Brandeis and Warren argued for the recognition of privacy as a legal right. They highlighted the need for protection under common law, against the intrusive use of technologies and practices of their time, which included being photographed without consent and becoming the focus of sensational journalism (Brandeis and Warren 1890; Czubik 2016; Halpérin 2005).15
The influence of their ideas also expanded internationally, and over time has gradually shaped global perspectives on privacy (Halpérin 2005).16 This influence is likely present in privacy perspectives in African countries as well, though in a less direct way. As King’ori (2022) has argued, privacy (in its Western understanding) entered the legal systems of countries in Africa first via colonial law, then through human-rights-inspired post-independence constitutions, and more recently through modern data protection statutes; it initially reflected an individualistic, Western conception but is gradually being adapted to local contexts.
However, privacy is not only a legal concept—it is a philosophical principle, considered a fundamental component of human rights, and is deeply connected to personal dignity and autonomy (Aftab 2024; Floridi 2016b; Privacy International 2017). As noted by Aftab (2024), it is considered a right that deserves protection on its own—even when no clear harm is caused by its violation, because privacy is essential to human dignity and has moral, deontological, and instrumental value.
While privacy has been the focus of scholarship, privacy awareness is rarely named explicitly in major theories and frameworks. It nonetheless emerges clearly from how privacy itself is conceptualized. For instance, in the boundary regulation theory elaborated by Altman (1975), privacy is defined as an active process of managing personal boundaries, which implicitly requires a person’s awareness of when those boundaries are being, or are likely to be, approached or crossed. Risk-based approaches like the privacy calculus (Culnan and Armstrong 1999; Dinev and Hart 2006; Wisniewski and Page 2022) frame privacy as a reasoned evaluation of potential harms and benefits, making awareness a matter of recognizing what one stands to lose or gain through disclosure. Similarly, the communication privacy management theory by Petronio (2002) (which builds on Altman’s work) and the earlier-mentioned contextual integrity theory by Nissenbaum (2004) do not explicitly develop a concept of privacy awareness but imply forms of it through the conditions under which privacy is expected to function.
For the purposes at hand, privacy can be understood as a broad concept that covers the right of each person to keep activities related to their personal lives and their personal information concealed from scrutiny and unwanted access (Privacy International 2017). At the core, it is about maintaining control over who has access to information concerning oneself and how it is used (Bélanger and Crossler 2011; Boyd 2010; King and Meinhardt 2024) in different contexts. Such information is generally related (but not limited) to combinations of one’s identity,17 location, habits, personal life-related events, and communications. Hence, privacy concerns both the physical (freedom from physical intrusion) and the informational.
As for privacy awareness, I propose viewing it as a spectrum of understandings and consciousness that different individuals may have about the importance of preserving their or another person’s privacy, in both digital and physical spaces. It includes knowledge of the risks, benefits, rights, and best practices associated with data privacy, enabling individuals to make informed decisions about the sharing and safeguarding of personal information, be it theirs or that of another person. This awareness can be analyzed at a personal level, but also at a cultural level because values and norms are defined within cultural environments.

1.4. Digital Business Models, AI Technologies and Adoption Trends on the African Continent

Long before AI rose to prominence, business models based on digital technologies were already well established and constantly evolving. This is evident from the early efforts made to explain and classify them, over two decades ago (see Rappa 2000). In the 1990s, digital businesses focused on data collection and basic online advertising. The early-to-mid 2000s then saw the start of targeted online advertising, as companies like Google and Facebook began using data generated by service users to offer more personalized ads (Zuboff 2019b).
As is widely known, Google was the first large-scale digital business model built primarily around the appropriation of service user data without consent or remuneration. The company’s approach was groundbreaking as it not only used data to improve web search functionality but also to transform advertising through targeted, data-driven ads (Zuboff 2019a, 2019b). By monetizing the vast amounts of data generated by service users via search engine queries, it set a precedent for using data as a core business asset—an approach that quickly became a standard in internet-based businesses and which led to further innovations across search engines, e-commerce, and social media (Zuboff 2019b).
Over time, AI has not only been gradually incorporated into the products of such early digital business models but has also enhanced and expanded their data-driven foundations, enabling deeper insights, personalization, automation, and predictive analytics on an unprecedented scale. More recently, innovations in generative AI have opened up new ways to engage and empower smaller businesses and individual users. Indeed, up until three years ago, the development and use of sophisticated AI technologies were largely confined to large digital technology companies, high-tech laboratories, and specific, often costly consumer products. Currently, a much broader user base (mainly non-technical users) is taking advantage of AI capabilities. Since 2022, AI-focused tools have been available to anyone with a smartphone, an internet connection, and a basic level of digital literacy. For example, the use of AI-powered chatbots and language models like ChatGPT and Gemini, along with text-to-image generators like DALL-E and Midjourney, has become widespread for various tasks, which include answering queries, generating text, code, images, video, and audio content, and providing support across a wide range of subject areas.
On the African continent, the adoption of and hype over AI-based tools and applications have not mirrored global adoption trends. The reasons for this difference become apparent in part 2. Nevertheless, as in other parts of the world, AI-powered technologies and fascination with AI are very present in African countries, where millions of people have for a number of years now been using the more commonplace digital tools and services like web browsers, emailing, and text messaging—oblivious to the AI technologies boosting the tools’ efficacy and to the appropriation of their data by large global digital technology corporations. Many have also purposefully adopted global North AI-powered productivity tools that are freely available online, recommendation systems incorporated into entertainment services like YouTube and Netflix, social and professional media platforms like TikTok and LinkedIn, as well as e-learning platforms like Coursera or Udemy, in addition to the image recognition, speech, and face recognition systems on their digital devices.
Also, with large digital technology companies having made GPT models available to the global public in late 2022, barriers to entry into software development are now slightly lower in a few countries, as it is now possible for aspiring techies and non-developers to learn, create, and customize applications without needing extensive programming knowledge. At the same time, experienced developers are able to enhance their abilities and efficiency by automating routine tasks and using AI tools to generate code. However, people are also motivated to use such AI tools for other reasons that range from fun and creativity to personal productivity, and more recently in Kenya, for political action within social movements (See for example, Musau 2024).
AI technologies are not only being consumed or experienced through tools and platforms developed in the global North, but also through digital products created in Africa. Since 2018, in South Africa, Nigeria, Kenya, Ghana, and Ethiopia, AI solutions have been integrated into business operations and into digital tools that were developed previously to address social issues, particularly in the finance and banking, agriculture, health, and education sectors (Eke et al. 2023c). For example, M-Pesa, the mobile money transfer and financial services company launched in Kenya by Safaricom in 2007, has been using an AI customer service chatbot since 2018. In early 2023, M-TIBA18 announced the activation of an AI system it had tested for over three years, intending to optimize insurance claims processing in Kenya (Lukhanyu 2024). Similarly, banks in Nigeria have gradually incorporated customer service chatbots into their mobile and web applications (Borokini et al. 2023). Also, in an attempt to improve their learning products, edtech companies in Nigeria, Kenya, Uganda, and South Africa have been incorporating different AI features into their digital applications and platforms, or are expected to do so (UNESCO 2019).
Africa-based startups and global technology-focused corporations have been actively investing in AI technologies in efforts to secure their positions in regional and global markets (See for example, Ajene 2023; Okolo et al. 2023). They are doing so, having recognized the opportunities and potential that AI technologies hold for business expansion, entrepreneurship, and more generally for the economies of African countries. Safaricom is a case in point: through a Vodafone and Microsoft partnership, M-Pesa is expected to employ generative AI to enhance customer satisfaction (Tech-Ish 2024). It is pertinent to note that according to a survey conducted by South Africa’s AI Media Group in 2022, over 2360 companies in Africa listed AI as their specialty (AI Media Group South Africa 2022; Ngila 2022).
Some African governments are also increasingly cognizant of the social and economic opportunities that the development and adoption of AI technologies could bring to their respective countries and the continent as a whole, having witnessed some of the advantages afforded by AI technologies in a range of sectors (Smart Africa 2021). At digital technology-related conferences, several African governments have pledged to implement supportive policies and enhance digital infrastructure and digital technology education to drive AI innovation and adoption. As part of this support, a few countries have published final or draft national AI strategies, while others have announced that theirs are currently being developed (Teleanu and Kurbalija 2022).
Efforts are also being made to promote AI development across national borders. South Africa, for example, significantly contributed to developing the ‘AI for Africa Blueprint’, a document designed to help AU member states develop policies, strategies, and plans to make use of AI for economic growth and social development, as part of the framework of the Fourth Industrial Revolution (4IR) (Smart Africa 2021). More recently, in August 2024, the ‘Continental Artificial Intelligence Strategy’, which emphasizes the continent’s commitment to prioritizing development and Africa-centered approaches to these technologies, was released by the African Union (2024).
As this section suggests, AI has been lauded as a key driver of economic and social transformation and is being adopted rapidly on the continent. However, when used irresponsibly or without accurate knowledge, AI technologies can pose significant risks, among which are information privacy risks. As outlined in part one, there are a number of key factors hampering information privacy awareness—I examine them in the next section.

2. Key Factors Impeding Digital Information Privacy Awareness and Limiting Awareness-Raising on the Continent

2.1. Limited Digital Literacy

In a number of African countries, digital interaction has become a way of life. It is a component of several sectors, notably agriculture, health, education, retail, communication, personal finance, and public administration (CIPIT 2023; Eke et al. 2023c). Citizens and customers are increasingly required to interact with public and private service providers through online portals and accounts. Yet, as studies have shown, digital literacy levels in Africa remain low, inadequate, and unevenly distributed (Domingo et al. 2024; Krönke 2020).
As a concept, digital literacy—like privacy—has a complex, albeit shorter history with various (often contested) definitions (see Buckingham 2007; Eshet-Alkalai 2004; Gilster 1997). For this paper, I partly adopt one of the most widely used conceptualizations offered by the American Library Association (ALA): the ability to make use of information and communication technologies to find, evaluate, create, and communicate information using both cognitive and technical skills (ALA 2011). However, ALA’s definition lacks depth and an explicit focus on the social and emotional dimensions of digital interactions, which calls for some theoretical eclecticism.
First, it is necessary to explicitly incorporate into that definition the notion of socio-emotional literacy, which is considered the most complex type of digital literacy, as it combines sociological, emotional, and cognitive dimensions (Eshet-Alkalai 2004, p. 102). In the context of Eshet-Alkalai’s framework, socio-emotional literacy refers to the ability to critically and ethically navigate the social and emotional dynamics of digital environments. It includes collaborating effectively, sharing knowledge responsibly, demonstrating empathy and critical thinking in interactions, as well as understanding digital-related risks like scams or misinformation (Eshet-Alkalai 2004, p. 101).
Digital information privacy awareness is a social and emotional competency, and one that the vast majority of people in African countries have not yet acquired. This reality manifests itself in diverse social contexts and often in troubling ways. For instance, in Kenya, administrators of residential childcare institutions have a history of posting the personal and sensitive data of children in their care on their institutions’ public-facing websites19. Similarly, in educational settings, educators frequently share students’ images and videos on social and professional media for self-promotion, without student or parental consent. Additionally, it is quite common to find highly sensitive personal images of private individuals or public figures posted on digital press media or social media, without the concerned individuals’ consent.
Second, it is necessary to understand digital literacy as non-monolithic, and rather, as a spectrum of varied competencies tailored to different contexts (Buckingham 2007; Lankshear and Knobel 2008). The concept of digital literacies, as advanced by Lankshear and Knobel (2008), expands on the aforementioned foundational understanding to emphasize the diverse and context-specific ways individuals interact with digital technologies. This characterization is crucial for understanding the manifestation of digital literacy in African countries, where individuals’ information and communication competencies are fragmented and highly context-dependent. Knowledge, skills, and skill levels tend to vary significantly across digital technology users. For instance, a rural artisan skilled in using a social media platform like Facebook to market their handmade goods may struggle with using secure payment gateways. Similarly, an urban government official adept at using centralized systems for public service delivery (for example, related to issuing IDs or permits) might not know how to use privacy settings to safeguard their personal online privacy.
The reasons behind such fragmented and context-dependent digital literacies are numerous. On the one hand, they stem from necessity, as individuals adapt to meet immediate digital needs with the limited resources available to them. On the other hand, they arise from an absence of opportunities, which is an outcome of economic constraints and a lack of necessary public infrastructure (notably, reliable electricity and internet coverage), essential digital devices (notably, computers, tablets, or mobile phones), and technologies (for example, productivity software, collaborative tools, and cloud computing). All these impediments, combined with insufficient education and training and limited opportunities for regular use, significantly impede the development and strengthening of digital literacy and skills (more so among rural or marginalized urban populations).
As Domingo et al. (2024) have shown, compared to efforts and resources directed towards developing digital infrastructure and governance frameworks in global North and global South countries, digital literacy and associated skills are often overlooked in geopolitical discussions. Moreover, while governments are increasingly recognizing the importance of addressing digital skills development comprehensively—which includes advanced ICT expertise for specialized sectors and foundational digital literacies for the wider population—African countries are focusing primarily on technical (hard) digital skills. This focus is certainly well-intentioned as it is driven by the urgency to tackle high unemployment rates and drive economic growth through industrialization and the digital economy. However, it is producing undesirable results as softer digital skills, which include online safety, media literacy, and online civic participation (which are key elements of digital citizenship), are receiving insufficient attention and resources (Domingo et al. 2024). Privacy awareness—a key component of digital citizenship—gets even less attention.
The idea that individuals have a right to information privacy remains an emerging concept in African countries. While there are notable differences across countries and regions, public awareness of digital information privacy rights and data protection is not yet widespread. Even in countries that enacted data protection laws in recent years, like South Africa, Nigeria, and Kenya, large numbers of citizens remain unaware of their legal rights with regard to respect for their privacy. This gap is further widened by inadequate public education on privacy issues. Hence, a state of limited digital literacy tends to impact privacy negatively, representing a critical barrier to meaningful participation in increasingly digitalized African societies. Meaningful participation here involves not only accessing digital tools, but also exercising one’s rights to privacy and making informed choices about how one’s personal information is used or shared.
The disparity between regulated and non-regulated sectors in African countries further illustrates the scope of the lack of privacy awareness. In healthcare, traditional banking, and legal services, strong regulations—like medical confidentiality, financial data protection regulations, and lawyer-client confidentiality—mandate strict safeguards for personal information privacy. These frameworks ensure that professionals in these sectors are trained to handle personal and sensitive information responsibly. In contrast, non-regulated sectors rarely feature similar protections. For instance, in education or social media contexts, there are no universally enforced policies to govern the responsible handling of personal data. This absence of legal and ethical guidance leaves many professionals unaware of both information privacy rights and their obligations. This situation is particularly concerning given the place that free and freemium services occupy in people’s everyday lives in African countries, as users often unknowingly consent to the violation of their own or others’ privacy, as discussed in detail in the next section.

2.2. Dependency on Free and Freemium Digital Services

The large majority of individuals and small businesses in African countries predominantly use and depend on free or freemium services offered by large digital technology companies. For clarity, freemium is a service model in which a basic service is offered free of charge while premium features remain available for purchase (Holm and Günzel-Jensen 2017; Lee et al. 2022). This model is widely used by the dominant global digital platforms (Google, Microsoft, WhatsApp, Instagram, among others). For example, Google and Meta’s free and freemium platforms and services have an important place in the communication, education, business, and entertainment digital spaces of African countries—the most popular services being YouTube, Gmail, Google Search, Google Maps, Google Drive, Google Classroom, Google Photos, Facebook, WhatsApp, Instagram, Facebook Lite, and WhatsApp Business.
One cannot deny that the gradual integration of AI into such free services has significantly improved their functionality—they are more user-friendly and tend to enable users to be more efficient. However, across countries, most users experience these benefits without necessarily being aware of the extent to which AI underlies the services, or of the corresponding privacy concerns.
The appropriation of user data, through what is euphemistically referred to as “data collection” or “the collection of data”20 is integral to how these services are operated and used to generate revenue. This is even more so where the service itself is free and the company relies on targeted advertising or on the sale of insights derived from the data of its service users.
It follows that a primary privacy issue with free and freemium models adopted in African countries stems from the amassing of user data intended for commodification and profit-making, as part of a business model that Zuboff (2019a, 2019b) termed surveillance capitalism. First, sharing pieces of personal information, like names, email addresses, and phone numbers, is commonly imposed when signing up. Then, user behavior (for example, a person’s search queries, their interactions with apps and location pings, and the timing and duration of their interactions), which forms the raw material for what Zuboff (2019a, 2019b) referred to as behavioral surplus,21 is meticulously tracked and analyzed. In addition, other content generated by service users, which includes messages, photos, videos, and emails, is also analyzed for the delivery of targeted ads.
The lack of transparency related to the collection and use of these and other data is a major concern. This is particularly so since the vast majority of users of free and freemium communication, education, business, and entertainment services in African countries remain unaware of the extent to which their data is being collected, with whom it is shared, and even that it is monetized. There is no doubt that the terms and conditions of service and privacy policies are automatically made available to them when signing up for the services. However, as studies elsewhere have shown, terms and conditions of service and privacy policies are rarely read by individuals, with the former often being particularly long, complex, and filled with legal jargon, which makes it difficult for the average subscriber to grasp their full implications (Obar and Oeldorf-Hirsch 2018; Steinfeld 2016). As the majority of users tend to consent by default, they often remain unaware of the privacy trade-offs involved in using freemium (now increasingly AI-enhanced) tools.
Data control and privacy are also concerns, as these corporations have the capacity to create detailed user profiles from their extensive data collection, which is highly attractive to their partners and third-party advertisers. Building on this concern, one cannot exclude the likelihood of corporate surveillance and behavioral manipulation, a known strategy of market-dominating digital companies offering free or freemium services. For instance, Google collects data across its extensive ecosystem of free services, which, among others, includes search history, location data via Google Maps, email content in Gmail, and viewing habits on YouTube. Similarly, Meta gathers data from user profiles, posts, interactions, and third-party websites and apps through tools like the Meta Pixel.22 Some tools and platforms (for example, those related to search, social, and entertainment media) have been developed to allow the companies to engage in highly personalized, often invisible, tactics that not only predict but also influence user thought, behavior, consumption patterns, and political opinions using curated content and echo chambers.23 Indeed, the platforms’ algorithms have been shown to sway public opinion, limit exposure to diverse viewpoints, and amplify existing beliefs.24
Again, as is the case in other regions of the world, the large majority of users in African countries do not necessarily take the time to read the terms and conditions and privacy policies, or do not understand them. Consequently, they unintentionally consent to practices that are not in their own interest. These include the extensive collection of their data, its limitless usage, and surveillance by service providers. The underlying unintentionality here needs to be understood as stemming from a lack of knowledge, certainly, but also from limited choice. This is the case for those who might be aware of the dominating digital platforms’ unfair practices, but still use the services, as a study by Mugadza and Mwalemba (2023) in South Africa has shown. These users still sign up for the services as a means of accessing the plethora of other services and opportunities that are directly or indirectly dependent on being a free service subscriber of an essential digital service.25
It is important to note that this general complacency with free services, combined with limited awareness or a lack thereof, poses even greater risks when people interact with companies offering AI-focused services. The reason is simple: data gathering is even more extensive, and its usage even more pervasive and sophisticated. The means through which such companies collect, or will come to collect, service user data in African countries raise ethical concerns. As with freemium services offered by digital platforms dominating African and global markets, it can be expected that the large majority of users do and will agree to the terms of service and privacy policies without fully understanding the extent to which their data is collected and used by both public and private service providers. They will also consent to the terms and policies because they have little or no choice but to use the services in question to meet their needs.
Although service users’ data is often integral to improving the functionality and personalization of AI tools, it is also used for other purposes that users would not necessarily consent to—if informed in clear language. For example, service user data is used to train machine learning models.26 This practice can involve analyzing personal information to improve algorithms, with the aim of making them more effective at tasks like image recognition, language processing, and predictive analytics. Though this could lead to better services for people in African countries, it also raises questions about the ethical use of personal data, more so if access to the data is given to third parties within or outside the continent and is used for purposes beyond what the service users initially agreed to.
At the continental level, privacy awareness is further weakened, given that free and freemium services are consumed within the context of a very diverse personal data protection regulatory framework. This is discussed in the next section.

2.3. Diverse and Non-Unified Personal Data Protection Regulatory Frameworks

For over a decade, the African Union has been planning to effect a continent-wide cybersecurity and data protection policy. To date, its ‘African Union Convention on Cyber Security and Personal Data Protection’ (also known as the Malabo Convention), adopted in 2014, is the most significant instrument. The convention is intended to establish a framework for cybersecurity and personal data protection in member countries.
For clarity, although personal data protection is intricately related to privacy—more specifically, to personal data or information privacy—they are distinct concepts. Personal data privacy is about a person’s right to control how their personal information is collected, used, and shared. It ensures that organizations handle data transparently, obtain proper consent, and allow individuals to make decisions about their own information (King and Meinhardt 2024). Relatedly, personal data protection is the ensemble of measures and practices put in place to safeguard individuals’ personal information. It is generally viewed as being grounded in a cluster of procedural rights, intended to ensure that personal data is processed, among others, lawfully, fairly, and transparently, with its processing limited to a specified purpose and restricted to only what is necessary27 (Filippidis Semino 2023; King and Meinhardt 2024; Macmillan 2023).
Under the AU’s Malabo Convention, every member country is expected to establish a data protection framework to guide the processing of personal information in accordance with the key data protection principles, which is to be enforced by a national data protection authority. Although this sets a significant precedent for regional data protection standards, it took nine years for the convention to come into force, as the fifteenth member country (there are fifty-five AU member countries) only ratified the convention in May 2023 (African Union 2023). At the time of writing, according to ALT Advisory, thirty-six African countries have active country-specific data protection laws, sixteen have no data protection laws in place, and three (Namibia, Ethiopia, and Malawi) are in the process of drafting or implementing data protection laws to enhance privacy standards.28 Noteworthy is the fact that the right to privacy is spelled out in the constitutions of fifty-four countries (ALT Advisory 2024).
The EU GDPR has been the model against which several countries around the world have defined their data protection laws, and this applies to some African countries as well (Boshe et al. 2022). For instance, Mauritius updated its 2017 Data Protection Act, which now aligns closely with GDPR principles; South Africa’s Protection of Personal Information Act (POPIA) incorporates many GDPR elements (OneTrust Data Guidance n.d.; Yaron 2023); Kenya’s Data Protection Act reflects several GDPR standards; and similarly, the Nigeria Data Protection Regulation draws inspiration from GDPR guidelines (Babalola 2021).
What is important in relation to this paper’s focus is the fact that data protection legislation has a key role in the governance of AI, given the large amounts of personal information handled by AI technologies and systems, and the extreme potential for privacy violations. In addition, such legislation represents a possibility for people on the continent to rightfully demand their rights to privacy and personal data protection, and benefit from it. However, such demands are rarely made. As a survey conducted by Agbenonwossi et al. (2021) in eight African countries confirms, this is because, first, there is generally not much public debate around matters related to technology, privacy and personal data protection, and their related legislation; second, the implementation of data protection laws is generally quite problematic.
Indeed, the experiences of global North countries and regions that are more or less successfully implementing data protection laws attest to this reality—the implementation of data protection is a complex, long-haul process that calls for a range of significant resources. It requires skilled personnel, technical infrastructure, understanding of and compliance with both local and international laws, incident response, monitoring and enforcement, and, not least important, public awareness and education. Many African countries are still struggling with insufficient infrastructure, expertise, and funding to implement and enforce such regulations effectively, as well as with limited public awareness, among other challenges.
As one would expect from the focus of this section, a third reason for the scarcity of such demands is the lack of harmonized personal data protection laws across African countries. This regulatory fragmentation has significant consequences. For instance, it simultaneously complicates compliance for multinational companies and facilitates their non-compliance with privacy and data protection standards and regulations. This situation, in turn, reinforces corporate exploitation, which in this context is best termed corporate neocolonialism. Reminiscent of the resistance of African societies to European colonialism, each country tends to face digital technology multinationals separately, often acquiescing to unequal terms and opaque privacy policies. While the AU could support collective action to safeguard the continent’s inhabitants’ digital and information privacy rights, the prevailing implicit “every country for itself” approach across member states, alongside the generally passive regulatory stance of many African governments, remains a major obstacle. This lack of a unified continental approach weakens negotiation power and results in varying data protection standards in Africa, which contrasts with the EU GDPR regulatory approach.
A case in point is South Africa’s opposition to WhatsApp’s region-differentiated privacy policy update. In 2021, WhatsApp implemented a controversial update that required users to agree to share their data with Facebook, which, like WhatsApp, is owned by Meta. The news was met with global scrutiny and regulatory pushback in many countries. While the European Union, through its GDPR, was able to enforce a different privacy policy, the response from African governments was fragmented. Only South Africa, through its Information Regulator, took steps to engage with WhatsApp and demanded that the app’s privacy practices align with the country’s POPIA.29 It is worth noting that POPIA aligns closely with international standards, notably the GDPR, upon which it was modeled (BusinessTech 2021).
Although South Africa took a stand, it did so in isolation, as other African governments remained largely silent on WhatsApp’s updated privacy policy. While the silence may have stemmed from a lack of regulatory scrutiny or enforcement capacity in most African countries, these factors alone do not fully explain the response. WhatsApp’s regionally differentiated privacy policies (where users in the global North enjoy stronger privacy protections than those in the global South) highlight a fourth reason for the scarcity of the aforementioned demands: the more digitally advanced African nations (notably Nigeria, Kenya, and Ghana) are engaged in relationships with global North digital technology multinationals that reflect corporate neocolonialism.
Echoing European colonial policies, corporations like Google and Microsoft are investing in these countries through digital research centers, business partnerships, educational programs, digital technology-related infrastructure, and support programs for startups. These initiatives provide valuable technical and financial resources as well as local employment opportunities, but they also create dependencies that limit the scope of action for African leaders. While these investments bring immediate benefits, dependency theory30 would suggest that, instead of encouraging the self-reliance of local economies and communities, these relationships create an unhealthy dependency on external technology, expertise, and funding, primarily for the benefit of corporate interests.
Taking Kenya as an example, government leaders and public authorities often overlook the activities of big digital platforms owned by dominating digital companies, even when their policies conflict with local laws. This inaction largely stems from the country’s social and economic dependency on platforms like WhatsApp, Facebook, Google, and Microsoft, which are essential for local businesses and private communication alike.
The influence of technology multinationals is further strengthened through the employment opportunities they provide via contractors, despite the exploitative nature of these roles. For example, Meta and OpenAI outsource their most morally offensive data assessment and content review work to expendable workers, particularly in Kenya (See Kannampilly and Malalo 2024; Perrigo 2023). Given such dependency, it becomes clearer why African governments may have chosen not to respond when WhatsApp’s 2021 privacy policy introduced more exploitative terms for African countries than for those in the global North.
In the next section, I discuss state-sanctioned surveillance and violation of privacy rights as another key factor impeding digital information privacy awareness and limiting awareness-raising in African countries.

2.4. State Surveillance and the Violation of Privacy Rights

As previously mentioned, in discussions on ethical or responsible AI regarding African countries, mention of privacy often comes paired with surveillance. This pairing can be attributed to the following reality: according to several studies, over the last three decades, surveillance by national governments has expanded unchecked, both within and beyond African countries, and is impinging upon citizens’ privacy rights in ever greater ways (Privacy International 2014; Roberts 2021; Roberts et al. 2021).
Research conducted by Roberts et al. (2021) on surveillance laws in six African countries has shown that in Egypt, Kenya, Nigeria, Senegal, Sudan, and South Africa, constitutional provisions protect the privacy of individuals and are often backed by international agreements like the UDHR and the International Covenant on Civil and Political Rights (Farahat on Egypt and Sudan, Mutung’u on Kenya and South Africa, Oloyede on Nigeria and Senegal in Roberts et al. 2021; Roberts 2021). However, despite these protections, weak legal frameworks have facilitated the growth of surveillance by states, which is often justified on the basis of concerns over national security, terrorism, and public order (Farahat on Egypt and Sudan, in Roberts et al. 2021). National governments continue to use these justifications to expand their surveillance capabilities, and in the absence of adequate oversight, these technologies have been misused to monitor activists, journalists, and political opponents (Milo 2021; Mutung’u on South Africa, Oloyede on Nigeria and Senegal, in Roberts et al. 2021; Privacy International 2014).
Both academic and third-sector research reveals that African states are increasingly turning to foreign suppliers for surveillance tools, among which are AI-powered tools: companies in Israel, China, Germany, the United Kingdom, and the United States are providing cutting-edge technology that allows African governments to strengthen their targeted and mass surveillance capabilities (Amnesty International 2020; Feldstein 2019; Roberts et al. 2021). This external influence raises concerns that mass surveillance is becoming a norm across the continent, as AI makes it easier to justify and implement surveillance technologies on a large scale, often without the necessary legal and ethical checks in place (Roberts 2021).
It is important to bear in mind that currently, no African country has legislation to regulate AI directly, a reality already noted by ALT Advisory (2022) three years earlier. While I was conducting the research for this paper, the African Union released the ‘Continental Artificial Intelligence Strategy’ (African Union 2024), following its earlier ‘Digital Transformation Strategy for Africa (2020–2030)’ (African Union 2020), which—though not specifically AI-focused—includes AI as a priority area. While both emphasize the need for responsible AI, data governance, and digital rights, they lack enforceable legal standards.
Based on the findings by Roberts et al. (2021), it is possible to hypothesize that the widespread adoption of AI technologies will significantly accelerate and intensify surveillance by African states. More powerful AI-driven tools like facial recognition systems, other biometric identification systems, and predictive policing algorithms would provide governments with unprecedented capabilities for monitoring people. For example, live facial recognition systems can be integrated with existing CCTV networks, enabling real-time tracking of individuals across cities. Other biometric identification tools, which are already being implemented in Kenya and Nigeria,31 among other countries, will become more efficient and invasive when combined with AI, and could allow national governments to create detailed profiles of citizens, residents, and visitors alike. Predictive policing software powered by AI can identify so-called ‘threats’ based on data analysis; however, if adopted in countries with limited oversight, these tools could be used to target political opponents or marginalized groups under the pretense of maintaining security.32
Evidently, the existing legal frameworks of African countries are ill-prepared for the complexities of AI-driven surveillance. While privacy rights are constitutionally guaranteed in many countries, the loopholes that facilitate state surveillance under certain conditions (such as broad national security claims) could be exploited through AI. For example, Nigeria has invested heavily in surveillance infrastructure, including AI-powered tools, but lacks robust legal safeguards to prevent misuse (Oloyede on Nigeria, in Roberts et al. 2021). As for Kenya, research has shown that its fragmented legal provisions regarding surveillance create confusion (Mutung’u on Kenya, in Roberts et al. 2021). Such a state of affairs would make it difficult to regulate AI systems effectively.
Lastly, the connection between the lack of information privacy awareness among the general public and surveillance by states can be understood through the lens of technological leapfrogging and the perceived value of digital services. Technological leapfrogging is the process by which individuals, communities, companies, or countries skip intermediate stages of technological development and directly adopt more advanced technologies (Goldemberg 2011; Swartz et al. 2023). A common example is bypassing landlines and going straight to mobile phones. This bypass enables adopters to avoid costly and outdated systems, accelerate progress, and potentially gain competitive advantages in global markets (Goldemberg 2011; Swartz et al. 2023). Although these leaps are advantageous for modernization, in African countries the rapid adoption of AI technologies enabled by technological leapfrogging is taking place without the gradual development of legal safeguards and public awareness that would ideally accompany it. The resulting gaps in the general public’s understanding of individual privacy rights make people vulnerable to the misuse of their data, both within and outside the African continent.
At the same time, the increasing reliance on digital services (for instance, mobile banking, health platforms, and government e-services) creates a dependency that overshadows privacy concerns. Often, service subscribers prioritize the immediate benefits of the range of services offered in the private sector and by public administration services, without necessarily recognizing the long-term implications of sharing their personal data. In doing so, they inadvertently enable unchecked surveillance. This combination of rapid technological adoption and a lack of public awareness amplifies the privacy risks of AI-driven surveillance across the continent.

3. Next Steps: Boosting Digital Information Privacy Awareness in African Countries

In what follows, I propose a series of (high-level) strategies applicable to diverse stakeholders, which together could contribute to reinforcing digital information privacy awareness among the general public and encourage commitment to promoting privacy awareness and protection within the context of AI adoption.

3.1. Grassroots Digital Literacy and Privacy Awareness Campaigns

In Section 2.1, my analysis showed that digital literacy among the general public in African countries is currently limited, fragmented, and context-dependent. Moreover, it tends to lack the socio-emotional competencies that are vital for individuals’ awareness of the rights and responsibilities associated with their own or others’ personal data.
It is unlikely that many African governments—especially those using AI for surveillance—will independently initiate and effectively raise awareness about information privacy rights or facilitate the acquisition of the associated digital literacy skills among their populations. Hence, non-governmental organizations, civil society groups, and academic institutions can play a pivotal role. Their efforts can focus on community outreach, engaging the wide range of social and professional communities within each society (in both urban and rural areas, while bearing in mind the intersectional inequalities of access to technological resources and know-how).
Grassroots engagement is already taking place. For instance, Unwanted Witness, a Ugandan civil society organization, focuses on promoting digital rights and online safety. Among other goals, it works to increase the general public’s understanding of how local and global digital companies collect, use, and sometimes misuse personal data. A strong example of their work in this domain is the “There’s a Spy in Your Pocket” initiative, an interactive digital story campaign through which the organization seeks to demystify how everyday apps collect personal user data that is not necessary for the service being offered. Building on such work, civil society organizations can also help communities understand the broader implications of emerging technologies, including the risks and benefits associated with AI, and provide practical guidance on how individuals can take control of their digital information privacy.
Since significant portions of populations in African countries do not necessarily understand the languages in which most digital privacy learning resources are available (principally English and French), offering these resources in local languages can also improve engagement and understanding.33 It is worth bearing in mind that many people on the continent trust influencers and content creators and have access to social media platforms like Facebook, WhatsApp, TikTok, and Instagram. This trust makes influencers and content creators potentially effective messengers, and these platforms effective media, for privacy awareness campaigns.

3.2. Local Advocacy for Privacy and Data Protection

As discussed in Section 2.3, public debates and demands concerning digital technology, privacy, personal data protection, and the corresponding legislation remain rare. This absence is not surprising given the contexts of digital literacy limitations and state surveillance discussed in Section 2.1 and Section 2.4. It is necessary that African civil society take on a more active role in advocating for privacy rights and data protection, even in the face of government surveillance. This can be done through building regional and global partnerships. For example, advocacy organizations in African countries can partner with international associations or organizations (among others, the Electronic Frontier Foundation, Privacy International, Access Now, and Paradigm Initiative) to build momentum for privacy rights and data protection reforms.
This approach is not unprecedented—the ongoing collaboration between Uganda’s Unwanted Witness and Privacy International illustrates the tangible impact of partnerships between local and international actors. The two organizations’ joint actions, which range from exposing unethical state and corporate data-related practices to advocating for transparency in data handling, as well as stronger data protection legislation, demonstrate that such collaborations are not only feasible, but also effective in achieving meaningful policy and legal reforms.
National-level organizations can form coalitions to lobby for stronger digital information privacy protections, creating pressure on national governments and the African Union to act. Alongside other categories of stakeholders, African civil society groups can encourage governments to hold public consultations on digital technology, privacy rights, and data protection laws.

3.3. Unified Continental Legal Framework Combined with a Digital Education Framework

As noted in Section 2.3 and Section 2.4, the lack of unified, continent-wide, enforceable legislation (with extraterritorial application) allows dominant digital platforms owned by global technology companies, as well as surveillance-oriented national governments, to exploit loopholes at both national and continental levels. As AI adoption is bound to intensify the privacy violations facilitated by such gaps, a concerted push for continent-wide regulatory standards is needed.
Civil society organizations and privacy advocates could lobby the African Union to adopt a binding protocol for personal information privacy and AI use in member states. Cues can come from Europe’s GDPR, the EU AI Act, and the data protection frameworks of Latin American countries in promoting a Pan-African standard that prioritizes the right to privacy while addressing the unique challenges of AI and surveillance technologies on the continent. To realize the objectives of the AU’s 2020–2030 Digital Transformation Strategy and the Continental Artificial Intelligence Strategy, the African Union could initiate the development of a Pan-African AI Regulatory Framework that sets enforceable standards for ethical AI use, data governance, and digital rights protection and awareness-raising.
Combining strong data protection and AI regulatory frameworks with digital education initiatives that prioritize enhancing privacy awareness for all citizens would help ensure that the continent’s inhabitants are not only better protected, but also better prepared to thrive in digital spaces that are increasingly AI-driven. Like the EU, the AU can develop a digital competence framework that is adapted to the needs and realities of people on the continent. These frameworks need to be developed collaboratively with input from member states, industry leaders, civil society groups, researchers, and technical experts, to ensure that they are both adaptable and contextually relevant.

3.4. African Digital Technology Companies and Startups’ Commitments

Behind Africa’s digital companies and startups, like those mentioned in Section 1.4, are business leaders whose digital literacies influence product design and their organizations’ approaches to privacy and data protection. Because these organizations operate in environments characterized by weak regulation and the dominance of global platforms, their leaders may be prompted to adopt extractive data practices as a default business strategy, or to replicate privacy-violating models in order to remain competitive.
Digital technology companies and startups in Africa need to adopt better business and privacy practices. To encourage them to do so, multiple actors need to collaborate in building a privacy-conscious digital technology ecosystem that balances innovation with respect for user privacy and data protection. As key players, African digital technology startups and the more traditional companies need to commit to designing projects that prioritize user privacy (privacy by design) and implementing transparent data practices. Governments, despite their surveillance interests, can contribute by setting baseline legal frameworks that encourage companies to adopt privacy-first practices.
One notable initiative that reinforces this accountability is the Unwanted Witness–Privacy Symposium Africa’s Scorecard. The Scorecard initiative publicly evaluates companies across sectors and countries based on their data protection performance. This kind of independent benchmarking pressures businesses to improve; as an added benefit, it gives regulators and users clear visibility into privacy practices. Other digital rights–focused NGOs and civil society organizations could build on such models or examples to further advocate for transparency and push for stronger corporate commitment to privacy. For their part, universities, researchers, and research institutions could provide expertise and resources to startups, while investors and venture capital firms could prioritize funding for privacy-respecting startups. Meanwhile, media platforms, journalists, and influencers focusing on digital technology could raise public awareness and promote companies that adopt privacy-first approaches.

3.5. Public Pressure and Media Involvement

As discussed in Section 2.4, AI-enabled systems are already being used in some African countries to expand and intensify state surveillance, within environments marked by regulatory fragility and technological dependency. While governments that engage regularly in surveillance tactics may have no interest in promoting privacy awareness, public pressure could shift policies. Investigative journalists need to focus on technology and privacy issues on the continent: they can play a key role in exposing privacy violations and the misuse of AI technologies by both governments and corporations. Mass communication media campaigns can shine a spotlight on privacy abuses and educate the public, although their impact could be inconsistent. In countries with a strong tradition of media freedom, these initiatives could work well; in countries where the media is more restricted or government-controlled, however, they will face greater obstacles.
Kenya’s 2019–2022 public backlash against Huduma Namba, a biometric mass registration system (see Mungai 2019; Nyabola 2019, Aljazeera), is an example of how a relatively independent media ecosystem and legal advocacy can amplify privacy concerns and hold a state project accountable. Where traditional media faces restrictions, social media offers alternative platforms. It is also important that African media companies examine and reform their own privacy practices so that they lead by example rather than perpetuate privacy violations, as is currently the norm.

3.6. Privacy-Respecting Technologies and Local Alternatives

As noted in Section 2.2, African countries’ heavy reliance on global technology companies’ free and freemium products makes millions of users vulnerable to privacy violations. African technology ecosystems need to offer alternatives to counter this dependency. This can be done by encouraging the development of locally built, privacy-respecting software that serves the basic and more advanced digital needs of users on the continent while protecting their personal data. These tools would need to be accessible and affordable (for instance, open source).
African countries do not lack innovators, as seen through Ayoba, a Pan-African messaging app with over 35 million users, designed to combine messaging with a wide range of services and content tailored to the continent’s diverse markets (see MTN 2024). African governments also need to be encouraged to prioritize digital sovereignty by investing in local technological industries and supporting homegrown platforms and services that prioritize privacy. This approach would reduce reliance on the platforms and services offered by exploitative global technology companies.

3.7. Incentivizing Digital Technology Multinationals to Prioritize User Privacy

As demonstrated in Section 2.3 through the 2021 WhatsApp privacy policy update, digital multinationals provide stronger privacy protections in Europe than in Africa. This divergence does not stem from differences in the technical architecture of their platforms, but rather from their selective compliance with regulatory obligations. Multinational corporations like Meta, Google, and Amazon adapt to stricter privacy standards in Europe (more specifically in the EU) due to the GDPR and potential penalties. However, where African countries are concerned, there is less incentive for them to self-regulate. Events elsewhere (in Europe and in the United States) indicate that, unless obliged by a legal framework or consumer demand, the leaders of these companies are unlikely to voluntarily adopt strict privacy standards for users of their services located in African countries. As noted in Section 2.1, the freemium model benefits these companies financially through user data harvesting and appropriation. They are therefore less likely to make changes to their business models in markets where there is less regulatory pressure.
However, Nigeria’s recent experience with Meta illustrates that even in Africa, heavy fines and judicial backing can incentivize global tech firms to respect user privacy.34 This precedent can serve as a reference for other African nations and the AU when claiming geographical parity in data privacy and protection rights for their citizens and when asserting regulatory authority over global technology companies.
In African countries where governments fail to push large technological companies to uphold privacy standards, advocacy groups and international bodies can. Together, they could lobby large technology companies to adhere to international privacy standards like the Malabo Convention and/or the GDPR when operating on the continent, even in the absence of local regulations, while pressuring governments to enforce those standards.

4. Conclusions

In this paper, I have explored factors contributing to the lack of information privacy awareness and the socio-political and economic practices and policies impeding awareness-raising efforts in Africa. I have examined these issues within the current context of rapid developments in AI technologies around the world.
In this process, I analyzed and discussed the characteristics of AI adoption in African countries. At the continental level, the discourse on AI is being shaped by highly enthusiastic public and private sector actors who advocate the rapid adoption of these technologies. Morris, the Kenyan acquaintance I mentioned in the introduction, exemplifies such enthusiasm. Skepticism and caution are also present among segments of the African population—as findings of the Lloyd’s Register Foundation (2023) study show—but these perspectives are far less audible. Nevertheless, what actors in both categories, and perhaps those in between, have in common is a general unfamiliarity with the privacy issues involved in AI technologies. It is therefore reasonable to expect that millions of people in African countries are actively or passively adopting AI and other emerging digital technologies without sufficient understanding of the privacy implications, both for themselves and for others whose data is implicated through their use.
A second significant characteristic of AI adoption in African countries is the scarce attention paid to empowering the general public with knowledge about these technologies and to raising information privacy awareness. This failure persists despite the presence of strong discourses on the benefits of AI technologies for economic and social development and the acknowledgment that these technologies also come with risks and ethical concerns, including privacy violations. To date, regulatory means—such as policy initiatives and indirect legislation—are the main measures that national governments have taken to address AI-related risks, including those tied to privacy.
Four intertwined key factors have emerged from my exploration of factors contributing to the current limited state of information privacy awareness among the citizenries of African countries and those impeding awareness-raising efforts: limited digital literacy, dependency on free and freemium services, a varied and disjointed regulatory framework for data protection, and the growth of state-sanctioned surveillance and privacy violations. Hence, considering the vast amounts of personal data currently being processed—and likely to be processed by AI systems in the course of the expected widespread adoption of these technologies in Africa—and the high likelihood of privacy infringements, it is vital, now more than ever, that stakeholders’ concerted efforts focus on:
  • enhancing digital literacy among the general population, with particular emphasis on social and emotional competencies (of which information privacy is a component);
  • breaking the continent’s dependency on free and freemium services offered by global digital technology giants;
  • creating a unified data protection legal framework and presenting a unified front in its enforcement in African countries, more so in the face of mass violations by digital technology multinationals.
Making substantive progress in these three areas is contingent on the development of strong civil societies that readily question and hold governments accountable, in efforts to denormalize state and corporate surveillance and privacy violations.
While regulating the development, use, and adoption of AI is key to ensuring its responsible and ethical integration in African societies, it is also important to view the issue of privacy awareness and awareness-raising among the general public as core components of the digital transformation ecosystem. As my analysis in this paper has shown, it is not enough to focus solely on increasing internet access, building hard digital skills, or expanding digital rights protections without addressing how individuals can actively safeguard their privacy within the changing digital environment. The definition of strong, enforceable data protection and AI regulatory frameworks needs to be complemented by digital education initiatives that raise awareness about AI and personal information privacy for all.
In sum, the general public of countries throughout Africa would not only be better protected, but also better prepared, if strong data protection and AI regulatory frameworks were combined with digital education initiatives that prioritize awareness of information privacy, foundational knowledge about AI, and the cultivation of appropriate attitudes among citizens of all ages and education levels. Increasing public awareness of information privacy and AI literacy would significantly contribute to helping individuals and communities on the continent understand and effectively engage with the challenges and responsibilities tied to their own or others’ personal data.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The author would like to thank the anonymous reviewers, F. Seidel, and S. Westwood for their constructive comments, which helped improve this paper.

Conflicts of Interest

The author declares no conflict of interest.

Notes

1
All names are pseudonyms.
2
I have used autoethnography in previous research (see Chege 2015, 2023), where I also discussed the method in detail. While my earlier use of it focused on other research areas, the approach remains well suited to the present study.
3
In this paper digital information privacy awareness is used as shorthand for digital personal information privacy awareness. It refers to the varying knowledge and understanding that individuals may have of how personal data—whether their own or that of other individuals—is collected, used, stored, and shared in digital environments, the risks related to its misuse, as well as the privacy rights of individuals, and the responsibilities of those who handle such data.
4
I restricted the country analysis to the five finalized national AI strategy/policy documents to ensure the analysis was based on authoritative and actionable commitments. Further research is necessary to expand the analysis to a wider range of African countries, as additional national AI policy/strategy documents are finalized and made publicly available; to track national policy/strategy over time; and to assess how the stated commitments to AI awareness—alongside the limited attention to personal information privacy awareness—translate into implementation outcomes.
5
Although it is not a legally binding document, it is an official regional policy framework and is intended to serve as a strategic guide that African Union member states are encouraged to consult when formulating or refining their national AI strategies. It was prepared by Smart Africa Alliance, a pan-African initiative that is endorsed by the AU, with support from GIZ (German Development Cooperation). It was published when President Ramaphosa of South Africa chaired the AU in 2020, and the Digital Transformation Strategy for Africa was being promoted.
6
The equivalent French keywords were used for the Senegalese government document.
7
The only implicit reference is “sensitization campaigns,” which is mentioned once throughout the seventy-page document and recommended as one of several main strategy focus areas (see Working Group on Artificial Intelligence 2018, p. 22).
8
Promoting public awareness of AI is considered a key pillar in two of the three national documents (Egypt and Senegal); however, their respective rationales and strategies differ. Senegal seeks to bridge the digital divide within the country by educating the general public on both the benefits and risks of AI; to this end, it outlines a grassroots, inclusive approach that involves local actors. Like Senegal, Egypt recognizes public AI knowledge as a key pillar of the AI ecosystem and intends to raise awareness about AI development and its benefits, as well as to encourage positive discussions about AI across social media. It also aims to empower citizens to use AI tools and applications responsibly. Rwanda intends to launch a public awareness campaign to promote a broad understanding of AI, including information on the potential advantages and risks associated with AI technologies. Unlike Senegal, Rwanda does not define a standalone public awareness pillar focused exclusively on general population outreach (Ministère de la Communication, des Télécommunications et de l’Économie Numérique, République du Sénégal 2023; National Council for Artificial Intelligence 2021, 2025; Ministry of ICT and Innovation, Republic of Rwanda 2023).
9
My presentation of the findings here remains high-level.
10
In the national strategy documents, AI literacy is associated with formal education (from primary through to university or specialized technical education).
11
Autoethnography’s emphasis on reflexivity and situated knowledge offers a way to trace how personal experiences—like my interaction with Morris—shed light on broader societal issues. Rather than reintroduce the method in full here or rehash methodological justifications, I have used an example in the introduction to show how everyday engagements with AI reveal tensions that may otherwise remain obscured in analyses of policies alone. The validity of autoethnography as a research method has been extensively discussed and exemplified by numerous researchers.
12
These papers were published between 2011 and 2019, with most published in 2018.
13
While Solove’s observation was made in the context of U.S. legal scholarship grappling with the limits of existing privacy law, in response to networked information systems and evolving data practices, this conceptual condition was nonetheless reflected in numerous other countries.
14
For an overview of theories and frameworks in the area of Human and Computer Interaction networked privacy, see for example Wisniewski and Page (2022).
15
Brandeis and Warren’s definition of privacy was borrowed from T. M. Cooley, a judge who in 1888 had described privacy as “the right to be let alone” (as cited in Czubik 2016; Halpérin 2005). The two authors defined the concept explicitly, and their more comprehensive treatment of privacy as a legal right laid the groundwork for modern privacy law and jurisprudence in the United States, where the concept remains a cornerstone of American privacy law and has influenced established legal principles, theories, and court decisions (Czubik 2016; Halpérin 2005).
16
Among other examples, Canada (Michaud 1996), New Zealand, European countries, and Japan (Halpérin 2005).
17
These can be revealed through a single detail or a combination of one’s details (name, date of birth, social security number, email address, phone number, and IP address, among many other possibilities).
18
M-TIBA is a digital platform that facilitates access to healthcare services through mobile technology. It was launched in Kenya in 2015 as a collaboration between PharmAccess Foundation, CarePay, and Safaricom. It uses Safaricom’s M-Pesa mobile money service to enable financial transactions. Through it, users manage their health insurance and health savings via a mobile health wallet, and it facilitates connections between members, healthcare providers, and payers, making healthcare-related transactions more transparent, efficient, and affordable. For more information, see https://mtiba.com/.
19
See for example Chege (2018). Following the enactment of the country’s data protection act, there has been a gradual and observable decline in the practice.
20
For digital businesses, the primary asset lies in the property rights over data; simply gathering data is not, in itself, useful to businesses.
21
Behavioral surplus is a concept introduced by Zuboff in her analysis of surveillance capitalism and refers to the excess personal data collected from users beyond what is required to provide a service (Zuboff 2019a, p. 13).
22
Meta Pixel (formerly Facebook Pixel) is a piece of JavaScript code provided by Meta that businesses running Meta-served ads can install on their websites. Its main purpose is to track visitor activities (for example, page views, conversions, purchases, form submissions) after visitors click or interact with a business’s/advertiser’s ads on Facebook, Instagram, Messenger, or a third-party site in the Meta Audience Network. The data collected by the pixel is sent back to Meta servers for analysis, which helps the business/advertiser evaluate the effectiveness of their ads and refine visitor re-targeting, and enables Meta to adjust the delivery of the ads so that they are shown to the visitors most likely to complete the desired action (convert). See https://developers.facebook.com/docs/meta-pixel (accessed on 5 July 2024) and https://instapage.com/blog/meta-pixel (accessed on 5 July 2024).
23
For more on echo chambers, see Lin (n.d.) and Mahmoudi et al. (2024).
24
See for example, articles on Facebook by the Wall Street Journal (n.d.) and BBC (2021), and an article on algorithmic amplification on Twitter, based on research by Huszár et al. (2022).
25
For instance, access to most, if not all, online services depends on having an email address or being a user of a direct messaging service.
26
For more on this, see a series of articles by Axios entitled “What AI knows about you”: https://www.axios.com/2024/11/04/ai-training-data-llm-privacy-big-tech (accessed on 5 July 2024), https://www.axios.com/2024/11/25/microsoft-ai-training-data-privacy (accessed on 5 July 2024), https://www.axios.com/2024/11/05/meta-ai-user-data-information (accessed on 5 July 2024), https://www.axios.com/2024/11/18/google-ai-gemini-user-data-training (accessed on 5 July 2024).
27
Broadly speaking, personal data protection involves technical security as well as legal and procedural aspects of safeguarding personal data. However, by prevailing convention, the term primarily refers to the legal and procedural components.
28
The number of countries with active data protection laws increased to thirty-nine in the course of 2025.
29
South Africa’s POPIA regulates the processing of personal data. It was enacted in 2013 and became fully effective on 30 June 2021 (OneTrust Data Guidance n.d.).
30
For a short introduction and discussion of dependency through the lens of Samir Amin’s work, see Kvangraven (2017); it outlines the three main schools of dependency theory (global historical materialism, Latin American dependencia, and world-systems analysis) and their shared view of the profit-driven economy as a global system that creates structural inequalities between dominant and dependent countries. See Rodney (1973) for a foundational contribution to dependency theory and postcolonial critique, offering an Africa-centered analysis.
31
Nigeria has a public administration system whereby citizens, regardless of age, are assigned a unique identification number termed the National Identification Number (NIN), which is stored with their biometric data (fingerprints, head-to-shoulder facial image, and height, among others) in a national database. Kenya has a similar system through its Maisha Namba (life number) project. Both countries have recently introduced biometric ID cards that are expected to serve their citizens in multiple ways, in both offline and online environments.
32
For brief, general information on predictive policing, see OHCHR (2024) and Jansen Reventlow (2021).
33
Organizations like Paradigm Initiative and Smart Africa are already active in the digital literacy space but do not specifically focus on raising privacy awareness.
34
In July 2024, Nigeria’s Federal Competition and Consumer Protection Commission (FCCPC), working with the Nigeria Data Protection Commission (NDPC), fined Meta $220 million for violations including unauthorized data transfers, the lack of meaningful consent mechanisms, and discriminatory privacy treatment compared to other world regions. In April 2025, Nigeria’s Competition Tribunal upheld the fine, rejecting Meta’s appeal and affirming the legal findings. See Bala-Gbogbo and Dzirutwe (2024) and Eboh (2025) (Reuters).

References

  1. African Union. 2020. African Union Digital Transformation Strategy. Available online: https://au.int/en/documents/20200518/digital-transformation-strategy-africa-2020-2030 (accessed on 3 July 2024).
  2. African Union. 2023. African Union Convention on Cyber Security and Personal Data Protection. April 11. Available online: https://au.int/sites/default/files/treaties/29560-sl-AFRICAN_UNION_CONVENTION_ON_CYBER_SECURITY_AND_PERSONAL_DATA_PROTECTION.pdf (accessed on 3 July 2024).
  3. African Union. 2024. Continental Artificial Intelligence Strategy. Available online: https://au.int/en/documents/20240809/continental-artificial-intelligence-strategy (accessed on 17 August 2024).
  4. Aftab, Sohail. 2024. The Concept of the Right to Privacy. In Comparative Perspectives on the Right to Privacy, Ius Gentium: Comparative Perspectives on Law and Justice. Berlin and Heidelberg: Springer, vol. 109, pp. 39–98. [Google Scholar]
  5. Agbenonwossi, Emmanuel E., Alan Finlay, Kinfe M. Yilma, Sigi W. Mwanzia, Pria Chetty, Alon Alkalay, Fola Odufuwa, Gabriella Razzano, Rebecca Ryakitimbo, and Paul Kimumwe. 2021. Privacy and Personal Data Protection in Africa, A Rights-Based Survey of Legislation in Eight Countries. Johannesburg: African Declaration on Internet Rights and Freedoms Coalition. Available online: https://africaninternetrights.org/sites/default/files/Privacy%20and%20Personal%20Data%20Protection%20in%20Africa%20-%20A%20rights-based%20survey%20of%20legislation%20in%20eight%20countries_Data_Protection_Reports_May%202021.pdf (accessed on 14 December 2024).
  6. Ahmed, Jashim U. 2010. Documentary Research Method: New Dimensions. Indus Journal of Management and Social Sciences 4: 1–14. [Google Scholar]
  7. AI Media Group South Africa. 2022. State of AI in Africa, 2022 Report. Available online: https://aiafricareport.gumroad.com/ (accessed on 13 July 2024).
  8. Ajene, Emeka. 2023. Generative AI in Africa: How African Startups Are Building for the New AI Revolution. Lagos: AfriDigest. Available online: https://afridigest.com/generative-ai-in-africa/ (accessed on 5 July 2024).
  9. ALA. 2011. What Is Digital Literacy? Digital Literacy Issue Brief. Available online: https://alair.ala.org/items/ce142b8e-c935-4fce-ab4f-35b654a92d6c/full (accessed on 23 July 2024).
  10. ALT Advisory. 2022. AI Governance in Africa: An Overview of Regulation and Policy Work on Artificial Intelligence in Africa. Available online: https://ai.altadvisory.africa/wp-content/uploads/AI-Governance-in-Africa-2022.pdf (accessed on 8 August 2024).
  11. ALT Advisory. 2024. Data Protection Africa. Available online: https://dataprotection.africa/analysis/ (accessed on 25 October 2024).
  12. Altman, Irwin. 1975. The Environment and Social Behavior: Privacy, Personal Space, Territory, and Crowding. Boston: Brooks/Cole. [Google Scholar]
  13. Amin, Samir. 1974. Accumulation on a World Scale: A Critique of the Theory of Underdevelopment. New York: Monthly Review Press. [Google Scholar]
  14. Amnesty International. 2020. German-Made FinSpy Spyware Found in Egypt, and Mac and Linux Versions Revealed. Available online: https://www.amnesty.org/en/latest/research/2020/09/german-made-finspy-spyware-found-in-egypt-and-mac-and-linux-versions-revealed/ (accessed on 23 October 2024).
  15. Babalola, Olumide. 2021. The EU GDPR and Nigeria’s NDPR: A comparative analysis. Journal of Data Protection & Privacy 4: 372–87. [Google Scholar] [CrossRef]
  16. Bala-Gbogbo, Elisha, and MacDonald Dzirutwe. 2024. Nigeria Fines Meta $220 Million for Violating Consumer, Data Laws. London: Reuters. Available online: https://reuters.com/technology/nigerias-consumer-watchdog-fines-meta-220-million-violating-local-consumer-data-2024-07-19/ (accessed on 17 March 2025).
  17. BBC. 2021. Twitter’s Algorithm Favours Right-Leaning Politics, Research Finds. BBC, October 22. Available online: https://www.bbc.com/news/technology-59011271 (accessed on 3 December 2024).
  18. Bélanger, France, and E. Robert Crossler. 2011. Privacy in the digital age: A review of information privacy research in information systems. MIS Quarterly 35: 1017–41. [Google Scholar] [CrossRef]
  19. Bijker, Wiebe E., Thomas P. Hughes, and Trevor Pinch. 2012. The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. Cambridge: MIT Press. [Google Scholar]
  20. Borokini, Favour, Kutoma Wakunuma, and Simisola Akintoye. 2023. The Use of Gendered Chatbots in Nigeria: Critical Perspectives. In Responsible AI in Africa: Challenges and Opportunities. Edited by Damian Okaibedi Eke, Kutoma Wakunuma and Simisola Akintoye. London: Palgrave Macmillan, pp. 119–38. [Google Scholar] [CrossRef]
  21. Boshe, Patricia, Moritz Hennemann, and Ricarda von Meding. 2022. Data Protection Laws: Current Regulatory Approaches, Policy Initiatives, and the Way Forward. Global Privacy Law Review 3: 56–88. [Google Scholar] [CrossRef]
  22. Bowen, Glenn. 2009. Document Analysis as a Qualitative Research Method. Qualitative Research Journal 9: 27–40. [Google Scholar] [CrossRef]
  23. Boyd, Danah. 2010. Making Sense of Privacy and Publicity [Conference Presentation]. Paper presented at SXSW, Austin, TX, USA, March 12–21; Available online: https://www.danah.org/papers/talks/2010/SXSW2010.html (accessed on 13 December 2024).
  24. Brandeis, Louis D., and Samuel D. Warren, Jr. 1890. The Right to Privacy. Harvard Law Review 4: 193–220. [Google Scholar] [CrossRef]
  25. Buckingham, David. 2007. Digital Media Literacies: Rethinking media education in the age of the Internet. Research in Comparative and International Education 2: 43–55. [Google Scholar] [CrossRef]
  26. Buolamwini, Joy, and Timnit Gebru. 2018. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research 81: 77–91. [Google Scholar]
  27. BusinessTech. 2021. WhatsApp Changes South Africans Should Know About: Legal Expert. Available online: https://businesstech.co.za/news/mobile/491095/whatsapp-changes-south-africans-should-know-about-legal-expert/ (accessed on 2 September 2024).
  28. Cardno, Carol. 2018. Policy Document Analysis: A Practical Educational Leadership Tool and a Qualitative Research Method. Educational Administration: Theory and Practice 24: 623–40. [Google Scholar] [CrossRef]
  29. Chege, Njeri. 2015. “What’s In It For Me?”: Negotiations of Asymmetries, Concerns and Interests Between the Researcher and Research Subjects. Ethnography 16: 463–81. [Google Scholar] [CrossRef]
  30. Chege, Njeri. 2018. Children’s Personal Data: Discursive Legitimation Strategies of Private Residential Care Institutions on the Kenyan Coast. Social Sciences 7: 114. [Google Scholar] [CrossRef]
  31. Chege, Njeri. 2023. Calling It Participatory Research When It Is Not: Positionality and Reflexivity Across Individual and Collaborative Research Projects. San Francisco: Academia. Available online: https://www.academia.edu/112107683 (accessed on 3 July 2025).
  32. CIPIT. 2023. The State of AI in Africa 2023. Center of Intellectual Property and Technology Law, Strathmore University. Available online: https://cipit.org/wp-content/uploads/2023/06/Final-Report-The-State-of-AI-in-Africa-Report-2023.pdf (accessed on 5 July 2024).
  33. Creswell, John W., and David J. Creswell. 2018. Research Design Qualitative, Quantitative, and Mixed Methods Approaches. Thousand Oaks: Sage. [Google Scholar]
  34. Culnan, Mary J., and Pamela K. Armstrong. 1999. Information Privacy Concerns, Procedural Fairness, and Impersonal Trust: An Empirical Investigation. Organization Science 10: 104–15. [Google Scholar] [CrossRef]
  35. Czubik, Agnieszka. 2016. “The Right to Privacy” by S. Warren and L. Brandeis—The Story of a Scientific Article in the United States. Ad Americam 17: 211–19. [Google Scholar] [CrossRef]
  36. De Kok, Lisa, Deborah Oosting, and Marcel Spruit. 2020. Influence of Knowledge and Attitude on Intention to Adopt Cybersecure Behaviour. Information & Security: An International Journal 46: 251–66. [Google Scholar] [CrossRef]
  37. Denzin, Norman K. 1994. The art and politics of interpretation. In Handbook of Qualitative Research. Edited by Norman K. Denzin and Yvonna S. Lincoln. Thousand Oaks: Sage, pp. 500–15. [Google Scholar]
  38. Dinev, Tamara, and Paul Hart. 2006. An Extended Privacy Calculus Model for E-Commerce Transactions. Information Systems Research 17: 61–80. [Google Scholar] [CrossRef]
  39. Domingo, Ennatu, Sabine Muscat, Stephanie Arnold, Maelle Salzinger, and Pria Chetty. 2024. The Geopolitics of Digital Literacy and Skills Cooperation with Africa, ECDPM, Discussion Paper 369. Available online: https://ecdpm.org/application/files/8717/1921/9245/The-geopolitics-digital-literacy-skills-coorperation-with-Africa-ECDPM-Discussion-Paper-369-2024.pdf (accessed on 23 March 2025).
  40. Eboh, Camillus. 2025. Nigerian Tribunal Upholds $220 Million Fine Against Meta for Violating Consumer, Data Laws. London: Reuters. Available online: https://reuters.com/sustainability/boards-policy-regulation/nigerian-tribunal-upholds-220-million-fine-against-meta-violating-consumer-data-2025-04-25/ (accessed on 30 April 2025).
  41. Eke, Damian O., Kutoma Wakunuma, and Simisola Akintoye, eds. 2023a. Introducing Responsible AI in Africa. In Responsible AI in Africa. Social and Cultural Studies of Robots and AI. London: Palgrave Macmillan, pp. 1–11. [Google Scholar] [CrossRef]
  42. Eke, Damian O., Kutoma Wakunuma, and Simisola Akintoye, eds. 2023b. Responsible AI in Africa. Social and Cultural Studies of Robots and AI. London: Palgrave Macmillan. [Google Scholar] [CrossRef]
  43. Eke, Damian O., Schmidt S. Chintu, and Kutoma Wakunuma. 2023c. Towards Shaping the Future of Responsible AI in Africa. In Responsible AI in Africa. Social and Cultural Studies of Robots and AI. Edited by Damian O. Eke, Kutoma Wakunuma and Simisola Akintoye. London: Palgrave Macmillan, pp. 169–92. [Google Scholar] [CrossRef]
  44. Eshet-Alkalai, Yoram. 2004. Digital Literacy: A Conceptual Framework for Survival Skills in the Digital Era. Journal of Educational Multimedia and Hypermedia 13: 93–106. [Google Scholar]
  45. Fana, Thanduxolo. 2021. Knowledge, Attitude and Practices Regarding HIV and AIDS among High School Learners in South Africa. The Open AIDS Journal 15: 84–92. [Google Scholar] [CrossRef]
  46. Feenberg, Andrew. 1999. Questioning Technology. London: Routledge. [Google Scholar]
  47. Feldstein, Steven. 2019. The Global Expansion of AI Surveillance, Carnegie Endowment for International Peace. Available online: https://carnegie-production-assets.s3.amazonaws.com/static/files/WP-Feldstein-AISurveillance_final1.pdf (accessed on 13 December 2025).
  48. Filippidis Semino, Mariel. 2023. Data Protection Principles. In European Data Protection, Law and Practice. Edited by Eduardo Ustaran. Portsmouth: IAPP. [Google Scholar]
  49. Floridi, Luciano. 2016a. Group privacy: A defence and an interpretation. In Group Privacy: New Challenges of Data Technologies. Edited by Linnet Taylor, Luciano Floridi and Bart Van der Sloot. Cham: Springer, pp. 83–100. [Google Scholar]
  50. Floridi, Luciano. 2016b. On human dignity as a foundation for the right to privacy. Philosophy and Technology 29: 307–12. [Google Scholar] [CrossRef]
  51. Gadzala, Aleksandra. 2018. Coming to life: Artificial intelligence in Africa. Atlantic Council. Available online: https://atlanticcouncil.org/wp-content/uploads/2019/09/Coming-to-Life-Artificial-Intelligence-in-Africa.pdf (accessed on 22 October 2024).
  52. Gaffley, Mark, Rachel Adams, and Ololade Shyllon. 2022. Artificial Intelligence. African Insight. A Research Summary of the Ethical and Human Rights Implications of AI in Africa. HSRC & Meta AI and Ethics Human Rights Research Project for Africa—Synthesis Report. Available online: https://africanaiethics.com/wp-content/uploads/2022/02/Artificial-Intelligence-African-Insight-Report.pdf (accessed on 12 December 2024).
  53. Gilster, Paul. 1997. Digital Literacy. Hoboken: John Wiley & Sons. [Google Scholar]
  54. Goldemberg, José. 2011. Technological Leapfrogging in the Developing World. Georgetown Journal of International Affairs 12: 135–41. [Google Scholar]
  55. Halpérin, Jean-Louis. 2005. L’essor de la « privacy » et l’usage des concepts juridiques. Droit Société Cairn 3: 765–82. [Google Scholar] [CrossRef]
  56. Hintze, Arend. 2016. Understanding the Four Types of AI, from Reactive Robots to Self-Aware Beings. The Conversation. Available online: https://theconversation.com/understanding-the-four-types-of-ai-from-reactive-robots-to-self-aware-beings-67616 (accessed on 7 July 2024).
  57. Holm, Anna B., and Franziska Günzel-Jensen. 2017. Succeeding with freemium: Strategies for implementation. Journal of Business Strategy 38: 16–24. [Google Scholar] [CrossRef]
  58. Huszár, Ferenc, Sophia I. Ktena, Conor O’Brien, Luca Belli, Andrew Schlaikjer, and Moritz Hardt. 2022. Algorithmic amplification of politics on Twitter. Proceedings of the National Academy of Sciences 119: e2025334119. [Google Scholar] [CrossRef]
  59. Jansen Reventlow, Nani. 2021. How Artificial Intelligence Impacts Marginalized Groups. Amsterdam: Digital Freedom Fund. Available online: https://digitalfreedomfund.org/how-artificial-intelligence-impacts-marginalised-groups (accessed on 10 January 2025).
  60. Jobin, Anne, Marcello Ienca, and Effy Vayena. 2019. The global landscape of AI ethics guidelines. Nature Machine Intelligence 1: 389–99. [Google Scholar] [CrossRef]
  61. Kannampilly, Ammu, and Humphery Malalo. 2024. Kenya Court Finds Meta Can Be Sued Over Moderator Layoffs. London: Reuters. Available online: https://www.reuters.com/world/africa/kenya-court-rules-meta-can-be-sued-over-layoffs-by-contractor-2024-09-20/ (accessed on 29 October 2024).
  62. Kaplan, Andreas, and Michael Haenlein. 2019. Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons 62: 15–25. [Google Scholar] [CrossRef]
  63. King, Jennifer, and Caroline Meinhardt. 2024. Rethinking Privacy in the AI Era: Policy Provocations for a Data-Centric World. White Paper. Stanford University, Human-Centered Artificial Intelligence. Available online: https://hai-production.s3.amazonaws.com/files/2024-02/White-Paper-Rethinking-Privacy-AI-Era.pdf (accessed on 17 August 2024).
  64. King’ori, Mercy. 2022. Looking Back to Forge Ahead: Challenges of Developing an “African Conception” of Privacy. Washington: Future of Privacy Forum. Available online: https://fpf.org/blog/looking-back-to-forge-ahead-challenges-of-developing-an-african-conception-of-privacy/ (accessed on 26 May 2025).
  65. Krönke, Mathias. 2020. Africa’s Digital Divide and the Promise of e-Learning. Policy Paper 66. Accra: Afrobarometer. [Google Scholar]
  66. Kvangraven, Ingrid H. 2017. A dependency pioneer—Samir Amin. In Dialogues on Development. Volume 1: On Dependency. Edited by Ushehwedu Kufakurinani, Ingrid H. Kvangraven, Frutuoso Santanta and Maria D. Styve. New York: Institute for New Economic Thinking, pp. 12–17. [Google Scholar]
  67. Lankshear, Colin, and Michele Knobel. 2008. Introduction: Digital Literacies—Concepts, Policies and Practices. In Digital Literacies: Concepts, Policies and Practices. Edited by Colin Lankshear and Michele Knobel. Lausanne: Peter Lang, pp. 1–16. [Google Scholar]
  68. Lee, Saerom, Chulhyun Kim, and Hakyeon Lee. 2022. What should be offered for free and what for premium in a freemium service? A two-stage approach of Kano & path analysis to the design of freemium services. Technology Analysis & Strategic Management 36: 1476–89. [Google Scholar] [CrossRef]
  69. Lin, Tracey. n.d. Breaking the Echo: How AI Shapes Our Digital Echo Chambers. San Francisco: Propelland. Available online: https://propelland.com/intelligence/how-ai-shapes-our-digital-echo-chambers/ (accessed on 2 December 2024).
  70. Lloyd’s Register Foundation. 2023. Eastern Africa Is Not Ready to Accept Artificial Intelligence. Available online: https://wrp.lrfoundation.org.uk/news/eastern-africa-is-not-ready-to-accept-artificial-intelligence (accessed on 6 November 2024).
  71. Lukhanyu, Val. 2024. M-TIBA Adopts AI for Insurance Claims Processing, Reducing Approval Waiting Time. Nairobi: Techmoran. Available online: https://techmoran.com/2024/04/10/m-tiba-adopts-ai-for-insurance-claims-processing-reducing-approval-waiting-time/ (accessed on 14 June 2024).
  72. Macmillan, Mac. 2023. Data Protection Concepts. In European Data Protection, Law and Practice. Edited by Eduardo Ustaran. Portsmouth: IAPP. [Google Scholar]
  73. Mahmoudi, Amin, Dariusz Jemielniak, and Leon Ciechanowski. 2024. Echo Chambers in Online Social Networks: A Systematic Literature Review. IEEE Access 99: 9594–620. [Google Scholar] [CrossRef]
  74. Markelius, Alva, Connor Wright, Joahna Kuiper, Natalie Delille, and Yu-Ting Kuo. 2024. The mechanisms of AI hype and its planetary and social costs. AI and Ethics 4: 727–42. [Google Scholar] [CrossRef]
  75. McCarthy, John. 2007. What is AI? Available online: http://jmc.stanford.edu/artificial-intelligence/index.html (accessed on 13 November 2023).
  76. Michaud, Martin. 1996. Le droit au respect de la vie privée dans le contexte médiatique: De Warren et Brandeis à l’inforoute. Montréal: Wilson and Lafleur. [Google Scholar]
  77. Milo, Dario. 2021. Parliament Has Three Years to Fix Problems with Rica, EngineerIT. Available online: https://www.engineerit.co.za/article/parliament-has-three-years-fix-problems-rica (accessed on 16 October 2024).
  78. Ministère de la Communication, des Télécommunications et de l’Économie Numérique, République du Sénégal. 2023. Stratégie Nationale et Feuille de Route du Sénégal sur l’Intelligence Artificielle à l’Horizon 2028. Version Résumée. Available online: https://www.mctn.sn/documentation (accessed on 14 June 2024).
  79. Ministère du Numérique et de la Digitalisation, République du Bénin. 2023. National Artificial Intelligence and Big Data Strategy 2023–2027. Available online: https://numerique.gouv.bj/assets/documents/national-artificial-intelligence-and-big-data-strategy-1682673348.pdf (accessed on 13 June 2024).
  80. Ministry of ICT and Innovation, Republic of Rwanda. 2023. The National AI Policy. Available online: https://www.ictworks.org/wp-content/uploads/2023/12/Rwanda_Artificial_Intelligence_Policy.pdf (accessed on 13 June 2024).
  81. Minsky, Marvin. 1968. Semantic Information Processing. Cambridge: MIT Press. [Google Scholar]
  82. MTN. 2024. Ayoba, the African Super-App Announces Achievement of 35 Million Monthly Active Users. Available online: https://mtn.com/ayoba-the-african-super-app-announces-achievement-of-35-million-monthly-active-users/ (accessed on 19 November 2024).
  83. Mugadza, Kimberly, and Gwamaka Mwalemba. 2023. Online Platform Privacy Policies: An Exploration of Users’ Perceptions, Attitudes and Behaviours Online. South African Computer Journal 35: 78–96. [Google Scholar] [CrossRef]
  84. Mungai, Christine. 2019. Kenya’s Huduma: Data Commodification and Government Tyranny. Doha: Al Jazeera. Available online: https://aljazeera.com/opinions/2019/8/6/kenyas-huduma-data-commodification-and-government-tyranny (accessed on 17 March 2025).
  85. Musau, Dennis. 2024. How Custom ChatGPT Tools Are the New Face of Kenyan Civic Education. Citizen Digital. Available online: https://citizen.digital/tech/finance-bill-corrupt-politicians-how-custom-chatgpt-tools-are-the-new-face-of-kenyan-civic-education-n345946 (accessed on 5 September 2024).
  86. National Council for Artificial Intelligence. 2021. Egypt National Artificial Intelligence Strategy; Cairo: National Council for Artificial Intelligence. Available online: https://mcit.gov.eg/Upcont/Documents/Publications_572021000_Egypt_Nation_%20Artificial_Intelligence_Strategy_05072021.pdf (accessed on 16 June 2024).
  87. National Council for Artificial Intelligence. 2025. Egypt National Artificial Intelligence Strategy, 2nd ed.; 2025–2030. Cairo: National Council for Artificial Intelligence. Available online: https://ai.gov.eg/SynchedFiles/en/Resources/AIstrategy%20English%2016-1-2025-1.pdf (accessed on 7 February 2025).
  88. Ngila, Faustine. 2022. Africa Is Joining the Global AI Revolution. Mount Ida: Quartz. Available online: https://qz.com/africa/2180864/africa-does-not-want-to-be-left-behind-in-the-ai-revolution (accessed on 3 October 2024).
  89. Nissenbaum, Helen. 2004. Privacy as contextual integrity. Washington Law Review 79: 119–57. [Google Scholar]
  90. Nwagbara, Ugochinyere Ijeoma, Emmanuella Chinonso Osual, Rumbidzai Chireshe, Obasanjo Afolabi Bolarinwa, Balsam Qubais Saeed, Nelisiwe Khuzwayo, and Khumbulani W. Hlongwana. 2021. Knowledge, attitude, perception, and preventative practices towards COVID-19 in sub-Saharan Africa: A scoping review. PLoS ONE 16: e0249853. [Google Scholar] [CrossRef] [PubMed]
  91. Nyabola, Nanjala. 2019. If You Are a Kenyan Citizen, Your Private Data is Not Safe. Doha: Al Jazeera. Available online: https://aljazeera.com/opinions/2019/2/24/if-you-are-a-kenyan-citizen-your-private-data-is-not-safe/ (accessed on 17 March 2025).
  92. Obar, Jonathan A., and Anne Oeldorf-Hirsch. 2018. The Biggest Lie on the Internet: Ignoring the Privacy Policies and Terms of Service Policies of Social Networking Services. Information, Communication & Society 23: 128–47. [Google Scholar] [CrossRef]
  93. OHCHR. 2024. Racism and AI: “Bias in the Past Leads to Bias in the Future”. Available online: https://www.ohchr.org/en/stories/2024/07/racism-and-ai-bias-past-leads-bias-future (accessed on 10 January 2025).
  94. Okolo, Chinasa T., Kehinde Aruleba, and George Obaido. 2023. Responsible AI in Africa—Challenges and Opportunities. In Responsible AI in Africa. Social and Cultural Studies of Robots and AI. Edited by Damian Okaibedi Eke, Kutoma Wakunuma and Simisola Akintoye. London: Palgrave Macmillan, pp. 40–64. [Google Scholar] [CrossRef]
  95. OneTrust Data Guidance. n.d. Comparing Data Privacy Laws GDPR v. POPIA. Available online: https://www.dataguidance.com/sites/default/files/onetrustdataguidance_comparingprivacylaws_gdprvpopia.pdf (accessed on 30 October 2024).
  96. Perrigo, Billy. 2023. Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic. New York: Time. Available online: https://time.com/author/billy-perrigo/ (accessed on 17 September 2024).
  97. Petronio, Sandra S. 2002. Boundaries of Privacy: Dialectics of Disclosure. Albany: SUNY Press. [Google Scholar]
  98. Privacy International. 2014. Surveillance Follows Ethiopian Political Refugee to the UK. Available online: https://privacyinternational.org/blog/1199/surveillance-follows-ethiopian-political-refugee-uk (accessed on 23 October 2024).
  99. Privacy International. 2017. What Is Privacy? Explainer. Available online: https://privacyinternational.org/explainer/56/what-privacy (accessed on 30 July 2024).
  100. Proferes, Nicholas. 2022. The Development of Privacy Norms. In Modern Socio-Technical Perspectives on Privacy. Edited by Bart P. Knijnenburg, Xinru Page, Pamela Wisniewski, Heather R. Lipford, Nicholas Proferes and Jennifer Romano. Berlin and Heidelberg: Springer. [Google Scholar]
  101. Rappa, Michael. 2000. Business Models on the Web: Managing the Digital Enterprise. Available online: https://www.academia.edu/84728689/Business_models_on_the_web (accessed on 13 September 2024).
  102. Rich, Elaine. 1983. Artificial Intelligence. Columbus: McGraw-Hill. [Google Scholar]
  103. Roberts, Tony. 2021. Surveillance Laws Are Failing to Protect Privacy Rights: What We Found in Six African Countries. San Francisco: The Conversation. Available online: https://theconversation.com/surveillance-laws-are-failing-to-protect-privacy-rights-what-we-found-in-six-african-countries-170373 (accessed on 23 October 2024).
  104. Roberts, Tony, Abrar Mohamed Ali, Mohamed Farahat, Ridwan Oloyede, and Grace Mutung’u. 2021. Surveillance Law in Africa: A Review of Six Countries. Brighton: Institute of Development Studies. [Google Scholar] [CrossRef]
  105. Rodney, Walter. 1973. How Europe Underdeveloped Africa. London: Verso. [Google Scholar]
Share and Cite

Chege, N. Is Africa Ready for AI? Digital Information Privacy Awareness and AI Adoption on the Continent. Soc. Sci. 2026, 15, 155. https://doi.org/10.3390/socsci15030155
