Proceeding Paper

Ethical Governance of AI in the Global South: A Human Rights Approach to Responsible Use of AI †

Aníbal Monasterio Astobiza, Txetxu Ausín, Belén Liedo, Mario Toboso, Manuel Aparicio and Daniel López

1 Departamento de Filosofía I, Edificio de la Facultad de Psicología, Universidad de Granada, 18071 Granada, Spain
2 Instituto de Filosofía-CSIC (IFS-CSIC), 28037 Madrid, Spain
3 Departamento de Filosofía, Universidad de Murcia, C. Campus Universitario, 11, 30100 Murcia, Spain
* Author to whom correspondence should be addressed.
† Presented at the Philosophy and Computing Conference, IS4SI Summit 2021, online, 12–19 September 2021.
Proceedings 2022, 81(1), 136; https://doi.org/10.3390/proceedings2022081136
Published: 29 April 2022

Abstract

There is a growing debate on how to regulate and make responsible use of digital technologies, particularly artificial intelligence (AI). In an increasingly globalized scenario, power relations and inequalities between different countries and regions need to be addressed. While developed countries are leading the building of an ethical governance architecture for AI, countries of the so-called global south (e.g., countries with a post-colonial history, also called developing countries) find themselves in a situation of vulnerability and dependence on northern domination that leads them to import digital technology, capital and modes of organization from developed countries. In the absence of ethical reflection, this imbalance can have a significantly negative impact on their already excluded, oppressed and discriminated populations. In this paper, we explore to what extent countries of the global south that import digital technology from developed countries may be harmed if the need for a multi-level, ethical global governance of AI from a human rights/democratic perspective is not taken into account. In particular, we address two problems that may arise: (a) a lack of governance capacity in southern populations resulting from their dependence on northern leadership in technological innovation and regulation, and (b) material and workforce extractivism inflicted by northern countries on southern ones.

1. Introduction

Artificial intelligence (AI) is the most important general-purpose technology of our era [1]. The transformative potential of this technology is very large, and its applications can be found in many fields: medicine and healthcare [2], transport and mobility [3,4], law [5], administrations and governments [6] or even the army [7]. As a consequence of this transformative potential, Europe has developed a policy framework for trustworthy AI and has proposed harmonized rules to regulate it in the Artificial Intelligence Act [8]. Many countries have followed this lead and established their own national strategies on AI. However, other countries, which are part of the so-called global south, are exposed to AI risks because they lack the appropriate institutions and mechanisms to meet the control requirements of such technology. For the purpose of this article, the "global south" refers to developing countries in Africa, Latin America and Asia, including the Middle East. "Global south" is a term that replaces "third world" and "developing countries" in many scholarly debates, although it is not without controversy. The term transcends borders and encompasses countries that share a colonial past, as well as oppressed and disenfranchised populations even in the west or in developed countries.

In considering the impact of AI on the global south, it is worth recalling the power asymmetries and inequalities that exist between different countries, regions and populations. While developed countries are leading the building of an ethical governance architecture for AI, the situation of vulnerability and dependence on northern domination in the global south leads those countries to import digital technology, capital and modes of organization from developed countries. Perhaps most worrying is the exploitation of the global south along multiple dimensions, including the appropriation and plundering of natural resources and raw materials [9], but also the existence of ghost work [10]. Kate Crawford, professor of communication and STS at USC Annenberg and co-founder of the AI Now Institute (an organization that studies the social implications of AI), and Vladan Joler, professor in the New Media department of the University of Novi Sad and leader of SHARE Lab, detail in their visual essay how building an AI system requires a large amount of human labor, data and planetary resources [9]. Moreover, Mary Gray, senior principal researcher at Microsoft Research and faculty associate at Harvard University's Berkman Klein Center for Internet and Society, and Siddharth Suri, computational social scientist and senior principal researcher at Microsoft Research, unveil how the services of tech companies such as Google, Amazon and Uber can only function thanks to an invisible human labor force, present mostly in countries of the global south [10].

In this short paper, we address two problems that arise in relation to the geopolitics and ethical governance of AI: (a) the lack of governance capacity in southern populations resulting from their dependence on northern leadership in technological innovation and regulation, and (b) the material and workforce extractivism inflicted by northern countries on southern ones. In the second section, we will briefly comment on the geopolitics of AI, or how technology, and particularly AI, is weaponized. In the third section, we will present AI as an extractive industry that mainly affects countries of the global south.
Finally, we will briefly discuss how ethical governance of AI is needed from a human rights perspective.

2. The Geopolitics of AI

To understand the role of AI in the world, we have to understand not only the more technical aspects (dozens of layers in neural networks, weights, thresholds, code, software, servers, modeling or even hardware; a toy illustration of weights and thresholds is sketched at the end of this section), but also the ideology behind AI: what is being optimized, for whom, and who makes the decisions.

GAFA, FAANG and the Big Nine [11] are acronyms and expressions used to refer to the tech companies vying for innovation leadership in the digital age. All these companies are fighting each other to dominate the new digital world we are heading toward. The new world order is no longer based on geography: the modern world is forged not through control of the oceans or of territory, but through control of data flows and of the connections with technology. For this reason, technology is weaponized. In this sense, AI, or tech generally, is politics by other means, to paraphrase Clausewitz.

The AI industry is not only products and services or tangible materials and infrastructure. The geopolitics of AI derives not primarily from the technology itself, but from ideology. The tech companies involved in the AI industry shape the way we see the world, how we create economic value, how we drive innovation, how we interact with others, how we work, how we entertain ourselves, and so on. Increasingly, the tech companies that develop AI algorithms, systems and platforms become gatekeepers of free speech in our democracies, influence the way politics is conducted, and reflect and amplify the biases of society, with pernicious effects that threaten the social contract [12,13]. Those tech companies that control the majority of supply chains and trade routes, including the extraction of the raw materials destined for electronic devices and products, will accumulate enough economic and political power to be at the forefront of AI geopolitics. The possibility of a single AI superpower, or even of a duopoly [14], can exacerbate geostrategic conflict, because it can lead to an arms race for technological dominance and leadership that disregards the need for ethical AI governance and, more importantly, the rights of people in the global south. Geostrategic conflict will also be amplified by differences in tech norms (soft and hard law) across political systems.

Another important aspect is that tech companies tend to have a great deal of influence not just over consumers but also in politics. They become monopolies and, acting as lobbies, exert tremendous pressure on the way policy is constructed. To get a glimpse of big tech's efforts to influence politics and legislation: in the first half of 2020 alone, Facebook, Amazon and Apple spent over USD 20 million on lobbying [15]. It is not clear exactly what these companies were seeking, but their lobbying battles could shape the industry's future and, as a consequence, have an impact on people's lives.
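As an aside for readers unfamiliar with the technical vocabulary mentioned at the beginning of this section, the following is a minimal sketch, in Python, of what "weights" and "thresholds" mean: a single artificial neuron fires when the weighted sum of its inputs crosses a threshold. This is our toy illustration, with made-up numbers, not code from any system discussed in this paper.

# A single artificial neuron: a weighted sum of inputs compared
# against a threshold (a step activation). Purely illustrative.
def neuron(inputs, weights, threshold):
    """Return 1 (fire) if the weighted sum of inputs exceeds the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# Hypothetical example: two inputs with hand-picked weights.
print(neuron([0.5, 0.8], [0.6, 0.4], threshold=0.5))  # 0.62 > 0.5, so prints 1

Real systems stack thousands of such units and learn the weights from data; but, as argued above, these internals alone tell us little about what is optimized, for whom, and who decides.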

3. The New Extractivism: Materials and Ghost Work in AI

The common denominator of all the actors within the tech industry that use AI in their services and products is that they respond to the dual logic of AI: abstraction and extraction [16] (p. 18). Regarding abstraction, they abstract away the material conditions of AI's making; in other words, one does not realize how the creation of AI systems depends on exploiting the planet's energy and mineral resources. The production of AI systems demands large quantities of minerals, including lithium for the batteries used in computers and electronic devices, as well as other rare earth minerals. In the AI industry, we see a repeated pattern of the extractive operation of contemporary capitalism, which feeds on the natural resources of the biosphere at a cost borne by many for the benefit of a few, a cost that is often deferred to future generations. Extraction, the second part of the dual logic of AI, is a targeted strategy that consists of extracting "more information and resources from those least able to resist" [16] (p. 18), usually the populations of the global south. Rare earth minerals are extracted from African countries such as the Congo; deposits of toxic substances and products are stored on vast tracts of land in the global south; and all of this has a cost and an environmental impact on local ecosystems and the people living nearby.

The AI industry depends on infrastructures, supply chains and cheap human labor that stretch across the global south. One overlooked fact is how much AI systems need underpaid workers, the precariat, "to help build, maintain, and test AI systems" [16] (p. 63). Mary Gray and Siddharth Suri call this type of hidden labor "ghost work" [10]. This work, carried out mainly by people from the global south, takes many forms: labeling entire datasets; reviewing, curating and moderating harmful content; and even training and feeding data to machine-learning models through crowd-working tasks. The AI industry extracts cheap human labor to operate AI systems. The end consumer who buys the products and services of tech companies also acts as a "ghost worker" who offers his or her labor for free: when we visit a website and have to prove that we are human, for example, by solving a CAPTCHA, we are training recognition algorithms for free. So, whether through the hard work of miners, assembly-line workers, "ghost workers" or the everyday users of products and services, the AI industry extracts information and value from us, and especially from the citizens of the global south.

The philosophy of dataism that underlies the development of AI is the thesis that everything is data and that it is there to be exploited and extracted. When it is believed that the whole world, the whole of reality, can be computed and, therefore, mined for data, people become data points. AI tools such as facial recognition systems are being used against populations in the global south. In particular, biometric and facial recognition systems are used to build smart borders where technology registers and tracks travelers but also refugees [17]. These systems identify objects and faces in images, but they disproportionately misidentify people of color [18]. There is a growing concern in the AI ethics community that this has dire consequences, especially for marginal communities in the global south.
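To make the disparity documented in [18] concrete, the following is a minimal sketch, in Python, of the kind of subgroup error-rate audit that underlies such findings. The function, the subgroup names and the data are hypothetical illustrations of the method, not the actual Gender Shades code or results.

from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples.
    Returns the misclassification rate per group."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, true_label, predicted in records:
        totals[group] += 1
        if predicted != true_label:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical audit data: (subgroup, ground truth, model output).
audit = [
    ("darker_female", "match", "no_match"),
    ("darker_female", "match", "match"),
    ("lighter_male", "match", "match"),
    ("lighter_male", "match", "match"),
]
print(error_rates_by_group(audit))  # {'darker_female': 0.5, 'lighter_male': 0.0}

An audit of this form makes visible what aggregate accuracy figures hide: a system can perform well on average while failing badly on particular populations.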
Facial recognition systems are on the rise in many countries, and rogue governments use them to target, monitor and surveil vulnerable populations and ethnic minorities. Because of this logic of the new technological extractivism, in which the planet's materials and resources are exploited and underpaid labor is needed to build AI systems, it is necessary to propose an ethical governance of AI at the international level in which human rights are respected. We advocate inclusivity as a guiding principle in the growing socio-technological system of human–machine interaction [19].

4. Conclusions

In this short paper, we briefly addressed the geopolitics of AI, in particular the lack of governance capacity in southern populations resulting from their dependence on northern leadership in technological innovation and regulation, as well as the new technological extractivism represented by an AI industry that demands large amounts of energy, natural resources and ghost work to operate. Now is the time to focus on how to build an ethical governance framework for the responsible use of AI under the aegis of human rights.

Human rights are rights that we have simply by virtue of being human, discoverable by ordinary human moral reasoning [20]. We are agnostic about the concrete metaphysical and epistemological foundations of human rights, but they confer obligations and duties on others on the basis of their intrinsic value. Therefore, AI systems should be respectful of human rights (freedoms, equality, justice and so on). Human rights protect the primary interests and needs of individuals, regardless of culture and context, and this implies that the application of AI systems in both the global north and the global south requires human rights compliance.

We advocate an ethical governance framework for AI that distinguishes between "hard ethical governance", based on law, and "soft ethical governance". This distinction is useful because it can be accepted even by skeptics of AI regulation, many of whom believe that regulating or controlling AI can stifle development and innovation. "Soft ethical governance" describes standards, such as ISO or IEEE frameworks, that can be applied in the early stages of digital technology development. "Hard (law-based) ethical governance", on the other hand, refers directly to prohibitions: it prevents the use of a technology when its risks outweigh its benefits.

Perhaps the best recommendation for global governance of AI is to build on the experience gained with other technologies, such as atomic energy, with the intention of creating a global ethical governance of AI from a human rights approach. It is also useful to make the case for global ethical governance of AI by presenting it as an existential risk to humanity, similar to global warming. The problem of global warming goes beyond the borders of any single country and becomes a global problem of collective coordination between countries: if we want to tackle the risks posed by rising temperatures due to anthropogenic greenhouse gas emissions, a coordinated global response is needed. Similarly, disruptive technologies such as AI require global ethical governance because their risks are shared and not exclusive to any single country.
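The hard/soft distinction can be summarized schematically. The sketch below, in Python, is our toy rendering of the decision logic described above (prohibit when risks outweigh benefits; otherwise defer to voluntary standards); the numeric risk/benefit scales and the system names are hypothetical, and no real regulatory regime reduces to a single comparison like this.

from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    risk: float     # estimated societal risk, 0..1 (hypothetical scale)
    benefit: float  # estimated societal benefit, 0..1 (hypothetical scale)

def governance_response(system):
    if system.risk > system.benefit:
        # "Hard (law-based) ethical governance": prohibition.
        return f"{system.name}: prohibit deployment"
    # "Soft ethical governance": voluntary standards (e.g., ISO/IEEE-style
    # frameworks) applied during development.
    return f"{system.name}: permit, subject to soft-governance standards"

print(governance_response(AISystem("border biometric scanner", risk=0.9, benefit=0.4)))
print(governance_response(AISystem("crop-yield forecaster", risk=0.2, benefit=0.7)))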

Author Contributions

Conceptualization, A.M.A.; methodology, A.M.A.; validation, A.M.A., T.A., B.L., M.T., M.A. and D.L.; investigation, A.M.A., T.A., B.L., M.T., M.A. and D.L.; writing—original draft preparation, A.M.A.; writing—review and editing, A.M.A., T.A., B.L., M.T., M.A. and D.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors gratefully acknowledge the support of the projects "EXTEND: Bidirectional Hyper-Connected Neural System" (H2020 Research Project, ref. 779982), EthAI+3 (PID2019-104943RB-100) and INEDyTO II (Bioética y Final de la Vida, PID2020-118729RB-I00). Belén Liedo thanks the Spanish Ministry of Universities for grant FPU19/06027.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Brynjolfsson, E.; McAfee, A. The Business of Artificial Intelligence. Harvard Business Review, 2017. Available online: https://hbr.org/2017/07/the-business-of-artificial-intelligence (accessed on 30 September 2021).
  2. Greenfield, D. Artificial Intelligence in Medicine: Applications, Implications, and Limitations. Science in the News, 2019. Available online: https://sitn.hms.harvard.edu/flash/2019/artificial-intelligence-in-medicine-applications-implications-and-limitations/# (accessed on 30 September 2021).
  3. Bonnefon, J.-F.; Shariff, A.; Rahwan, I. The social dilemma of autonomous vehicles. Science 2016, 352, 1573–1576.
  4. Awad, E.; Dsouza, S.; Kim, R.; Schulz, J.; Henrich, J.; Shariff, A.; Bonnefon, J.-F.; Rahwan, I. The moral machine experiment. Nature 2018, 563, 59–64.
  5. Kirchner, L.; Angwin, J.; Larson, J.; Mattu, S. Machine Bias: There's Software Used Across the Country to Predict Future Criminals. And It's Biased Against Blacks. ProPublica, 2016. Available online: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (accessed on 5 October 2021).
  6. Van Buren, E.; Chew, B.; Eggers, W. AI Readiness for Government. 2020. Available online: https://govwhitepapers.com/whitepapers/ai-readiness-for-government (accessed on 5 October 2021).
  7. Asaro, P. On banning autonomous weapon systems: Human rights, automation, and the dehumanization of lethal decision making. Int. Rev. Red Cross 2012, 94, 687–709.
  8. European Union. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts; COM(2021) 206. Available online: https://ec.europa.eu/newsroom/dae/items/709090 (accessed on 15 August 2021).
  9. Crawford, K.; Joler, V. Anatomy of an AI System: The Amazon Echo as an Anatomical Map of Human Labor, Data and Planetary Resources. AI Now Institute and SHARE Lab, 2018. Available online: https://anatomyof.ai (accessed on 5 August 2021).
  10. Gray, M.; Suri, S. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass; HMH: Boston, MA, USA, 2019.
  11. Webb, A. The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity; Public Affairs: New York, NY, USA, 2019.
  12. O'Neil, C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy; Crown Publishing Group: New York, NY, USA, 2016.
  13. Aral, S. The Hype Machine: How Social Media Disrupts Our Elections, Our Economy, and Our Health and How We Must Adapt; Doubleday: New York, NY, USA, 2020.
  14. Lee, K.-F. AI Superpowers: China, Silicon Valley, and the New World Order; HMH: Boston, MA, USA, 2018.
  15. Schwartz, B. Big Tech Spends over $20 Million on Lobbying in First Half of 2020, Including on Coronavirus Legislation. CNBC, 2020. Available online: https://www.cnbc.com/2020/07/31/big-tech-spends-20-million-on-lobbying-including-on-coronavirus-bills.html (accessed on 7 October 2021).
  16. Crawford, K. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence; Yale University Press: New Haven, CT, USA, 2021.
  17. Galdon, G. Protect rights at automated borders. Nature 2017, 543, 34–36.
  18. Buolamwini, J.; Gebru, T. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency; PMLR 2018, 81, 77–91.
  19. Monasterio Astobiza, A.; Toboso, M.; Aparicio, M.; Ausín, T.; López, D.; Morte, R.; Pons, J.L. Bringing inclusivity to robotics with INBOTS. Nat. Mach. Intell. 2019, 1, 164.
  20. Griffin, J. On Human Rights; Oxford University Press: Oxford, UK, 2008.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

