
AI in Context and the Sustainable Development Goals: Factoring in the Unsustainability of the Sociotechnical System

Henrik Skaug Sætra
Faculty of Business, Languages, and Social Science, Østfold University College, N-1757 Remmen, Norway
Sustainability 2021, 13(4), 1738;
Submission received: 20 December 2020 / Revised: 31 January 2021 / Accepted: 3 February 2021 / Published: 5 February 2021


Artificial intelligence (AI) is associated with both positive and negative impacts on both people and planet, and much attention is currently devoted to analyzing and evaluating these impacts. In 2015, the UN set 17 Sustainable Development Goals (SDGs), consisting of environmental, social, and economic goals. This article shows how the SDGs provide a novel and useful framework for analyzing and categorizing the benefits and harms of AI. AI is here considered in context as part of a sociotechnical system consisting of larger structures and economic and political systems, rather than as a simple tool that can be analyzed in isolation. This article distinguishes between direct and indirect effects of AI and divides the SDGs into five groups based on the kinds of impact AI has on them. While AI has great positive potential, it is also intimately linked to nonuniversal access to increasingly large data sets and the computing infrastructure required to make use of them. As a handful of nations and companies control the development and application of AI, this raises important questions regarding the potential negative implications of AI on the SDGs. The conceptual framework here presented helps structure the analysis of which of the SDGs AI might be useful in attaining and which goals are threatened by the increased use of AI.

1. Introduction

Artificial intelligence (AI) has great potential, and it is increasingly argued, and seemingly demonstrated, that AI has the power to change the world for the better [1]. The UN-driven initiative AI4Good is one example of how AI is seen as a force for good [2]. Despite much enthusiasm, many remain wary, both of the evidence purported to demonstrate AI’s efficacy and of the many potential negative effects of AI. As a response, AI ethics has developed into a vibrant field of study, but like most new disciplines, it is still in its infancy, and there is little agreement as to what it entails or how it should be pursued.
In 2015, the United Nations (UN) set 17 Sustainable Development Goals (SDGs) to be achieved by 2030 [3]. These entail environmental, social, and economic goals. As AI becomes increasingly prevalent in modern societies, the SDGs provide a useful framework for analyzing and categorizing the potential benefits and harms it produces. The SDGs are here used as a framework for analyzing the overall effects of AI on issues of sustainability broadly understood, and one major contribution of the article is to show how the SDGs are useful tools for evaluating the ethics of AI.
AI is here considered in context and not as a neutral and decoupled technology. This entails seeing AI as a part of a sociotechnical system consisting of various structures and economic and political systems, rather than performing an isolationist analysis of this technology [4]. Modern AI is intimately linked to nonuniversal access to increasingly large data sets and the computing infrastructure required to make use of them. As a handful of nations and companies control the development and application of AI, this raises important questions regarding the potential negative implications of AI on the SDGs. This article presents a framework for understanding which of the SDGs AI might be useful in attaining and which goals are threatened by increased use of AI—directly or indirectly.
Section 2 introduces the SDGs, along with a discussion of why they constitute a useful framework for analyzing AI. Section 3 presents the method and conceptual framework employed in this article. Section 4, Section 5 and Section 6 contain the results, as the impact of AI on the various goals is discussed on the basis of the categorization presented at the end of Section 3. Section 7 contains the discussion, which focuses on the implications of these results and the limitations of using the SDGs as a framework for AI ethics.

2. The SDGs and AI

The SDGs were presented by the United Nations in the 2015 document Transforming our World: The 2030 Agenda for Sustainable Development [3]. The 17 goals (see Figure 1) were a continuation of the Millennium Development Goals (MDG)—a framework of 8 goals established in 2000 with the purpose of reaching them by 2015 [5]. The SDG framework is connected to a wide range of human rights, but it is still clearly distinct from them, as the SDGs emphasize people, planet, prosperity, peace, and partnership [3].
In order to understand why the goals are categorized as they are in this article, it is necessary to take account of the fact that each goal has a number of subgoals. For example, SDG 13 consists of the heading “Take urgent action to combat climate change and its impacts” and the following subgoals:
  • 13.1: Strengthen resilience and adaptive capacity to climate-related hazards and natural disasters in all countries
  • 13.2: Integrate climate change measures into national policies, strategies and planning
  • 13.3: Improve education, awareness-raising and human and institutional capacity on climate change mitigation, adaptation, impact reduction and early warning [3].
In addition, there is 13.a regarding funding for action in developing countries and 13.b regarding capacity building in the most exposed and least developed countries [3].
The SDGs are extremely ambitious and wide ranging [6]. They are thus what can be called stretch goals, which may seem close to impossible to reach, but which are nevertheless pursued in order to inspire and stimulate radical and ground-breaking approaches and efforts to make progress [7]. The realism of the goals is not discussed here, as the purpose of the article is to use the framework to evaluate the various potential impacts of AI on sustainable development. As such, the fact that the SDGs are ambitious and not limited by considerations of feasibility is a benefit rather than a shortcoming.
One key criticism of the SDGs focuses on the potential disconnect between combatting climate change and achieving the rest of the SDGs. Nerini, et al. [8] show how climate change threatens the achievement of 16 SDGs but note that taking action to combat climate change may actually undermine 12 other SDGs. The linkages between the various goals are important [9,10], and this is at the core of the categorization and analysis of AI in context based on the direct and indirect effects AI has on them.
The SDGs’ top-level goals are also often grouped into broader categories. One such categorization is the split into economy, society, and environment, according to what the United Nations [3] present as the “three dimensions of sustainability” [1]. One problem with this categorization is that politics is largely lost. Others have used the ESG framework (environment, social, governance) known from finance and investing as the basis for categorizing the SDGs [11]. As this article will show, emphasizing the political dimension of sustainability and the SDGs seems both important and necessary for achieving real progress toward the SDGs. Furthermore, it is necessary for understanding and limiting the potential negative impact of AI on the SDGs.

2.1. Previous Work on AI and the SDGs

Following the introduction of the SDGs and initiatives like AI4Good [2], others have provided important analyses of isolated AI impacts, and a large number of these are detailed in the supplementary materials of Vinuesa, et al. [1]. Despite having garnered less attention, Chui, et al. [12] preceded Vinuesa, et al. [1] and arguably provided a more comprehensive and balanced evidence-based account of the potential for beneficial use of AI, also connected to the SDGs. Other recent examples of efforts related to the goal of this article are Di Vaio, et al. [13] (who mainly consider SDG12 and sustainable business models) and Khakurel, et al. [14] (who delve much deeper into the technological aspects of AI and the SDG framework). Others have examined issues related to the sustainability of AI, without explicitly connecting this to the SDGs, or by only focusing on some of the goals, such as Toniolo, et al. [15] and Yigitcanlar and Cugurullo [16].
Still others have tackled partially overlapping questions, without engaging directly with the SDG framework. Floridi, et al. [17] propose an ethical framework branded AI4People, and this also relates to much work done on responsible AI [18]. The legal and general ethical issues of AI are not considered in this article, as various frameworks for ethical, responsible, human-centered AI, etc., are thoroughly covered in much of the literature here referred to.

2.2. What Is AI?

This article is partially based on a broad definition of AI, which entails that AI refers to a wide array of technologies and applications. The definition is borrowed from Vinuesa, et al. [1], as one purpose of the article is to build on and critique their analyses and conclusions:
We considered as AI any software technology with at least one of the following capabilities: perception—including audio, visual, textual, and tactile (e.g., face recognition), decision-making (e.g., medical diagnosis systems), prediction (e.g., weather forecast), automatic knowledge extraction and pattern recognition from data (e.g., discovery of fake news circles in social media), interactive communication (e.g., social robots or chat bots), and logical reasoning (e.g., theory development from premises). This view encompasses a large variety of subfields, including machine learning [1].
AI must be clearly demarcated from digital technologies more generally, such as mobile phones and banking apps. Truby [19], for example, partly conflates AI and all things digital when, in his discussion of AI and SDG8, he notes that cellular phones and digital technology increase access to banking services in developing countries. This is of course true, but it is not a development dependent on AI. If a device or application could easily serve the same functions without the use of AI, it makes no sense to argue that AI is the cause of the benefits derived from such technologies. Only where AI is a main factor contributing to a phenomenon will it be considered an enabler or inhibitor of SDGs.
AI is often related to Big Tech, which usually refers to the big four or five tech companies [20,21]. The four companies are GAFA (Google, Amazon, Facebook, and Apple), and those that speak of five include Microsoft [22]. These are the major players in the new global technological systems, accompanied and chased by up-and-comers from countries such as China (i.e., Alibaba and Tencent).
AI ethics has emerged as a vibrant field of study, and this article proposes to use SDGs as a system for analyzing the effects of AI. Among the most researched areas related to AI ethics are issues of privacy and surveillance [23,24,25,26], how technology can aid efforts to manipulate and persuade [27,28], biased systems [29,30,31], issues of power and technology [32,33,34,35], how technology changes human relations [36], and the potentially polarizing effects of AI-based social media [37,38].
Lastly, AI is intimately connected to the generation of and access to data, and data is not, and can never be, neutral [39]. Furthermore, as data becomes increasingly valuable, access to it constitutes an issue of justice, as will become apparent in the discussion of numerous SDGs. Zuboff [40] highlights how AI is connected to access to and the accumulation of data—a valuable perspective to take into account when AI and sustainability are considered.
Some of these issues are readily applicable to the SDG framework, while others seem more peripheral. This potential weakness of using the SDGs as a framework for AI ethics is discussed in the Discussion section.

3. Methods and Conceptual Framework

The aforementioned AI4Good initiative aims to show the potential for using AI to accelerate the SDGs [2]. Others have also begun to explore the linkages between AI and the SDGs, and Vinuesa, et al. [1] produced one of the most recent and most comprehensive efforts thus far. In their article, they first theoretically examine the potential positive and negative impacts of AI on the SDGs before examining how empirical evidence supports or contradicts this analysis. The current article relates intimately to and builds on their research, both because it is becoming influential and because it is based on a comprehensive literature review. Another reason is that its conclusions and methods are insufficient and partially problematic, and the study thus stands in need of correction.
The main problem with Vinuesa, et al. [1] is that the study is highly quantitative and empirical in nature, and it attempts to describe all goals and subgoals in the scope of one very short article. The result is an article with many bold conclusions and attractive figures and percentages that stand in need of more comprehensive explanations and a deeper analysis. They call their method an “expert elicitation process”, which seems to be something akin to a Delphi process whereby the authors (who constitute the experts involved) reach an agreement; this is thus presented as the unanimous verdict of the experts.
This article adds another voice to the expert opinions they have already collected and systematized, as a more qualitative and theoretical examination of the impact of AI is here performed. Vinuesa, et al. [1] strongly favor (empirical) evidence, despite their acknowledgement that much of the evidence of AI impact is derived from experimental closed systems. Our reality is notorious for the prevalence of open systems, which entails the need for a hefty dose of skepticism about the generalizability of experiments and empirical research from closed systems [41,42].
This article contains a theoretically based development of a framework for evaluating AI both as a general technology and as more specific applications. Such an approach involves using existing empirical and theoretical research on AI and the SDGs as the basis for a discussion of how these findings should be interpreted when AI is seen as part of a larger sociotechnical system, dependent on and resting upon a technological system and a technical substrate [4]. The analytical approach here developed is comparatively deeper than that of Vinuesa, et al. [1], as they fail to account for the interconnected nature of (a) AI and other technological phenomena and societal issues, and (b) the SDGs themselves. While they focus on finding evidence for various subgoals and score AI on this basis, this article presents a theoretical and non-isolationist analysis of how the various goals are affected by AI in different ways.
The conceptual framework applied in this article distinguishes between direct and indirect impacts, and it consequently also factors in whether impact on one goal entails ripple effects on other goals. Direct effects imply that applying AI may directly impact the SDG, while indirect impact refers to how AI might impact one goal which in turn has consequences for another. If a goal has ripple effects, this implies that efforts related to this goal will have consequences for other goals as well. For example, AI might have a direct effect on economic growth (SDG 8), but this growth might be of a kind that exacerbates inequality (SDG 10). At the end of this section, each SDG is categorized based on what sort of effects are considered to be most important for that goal. It is acknowledged that all SDGs will to some degree be both positively and negatively affected by AI, and that AI to a certain degree entails both direct effects and indirect effects for all goals.
In order to choose how the different effects of AI are ranked, the analysis and categorization have been divided into three different levels, as shown in Figure 2. Considering AI in context means that AI is not analyzed solely on the basis of the isolated effects it has on the micro or meso level. With such an approach, a researcher could find a use case where AI in some isolated or local setting is shown to have some sort of positive effect on an SDG and subsequently conclude that AI indeed has a positive impact on that goal. While true, this is a shallow approach that is unable to deal satisfactorily with the fact that AI has many such isolated effects on each goal, and more importantly, that AI is part of a sociotechnical system consisting of various structures and economic and political systems that can only be understood by also taking account of macro level effects. An isolationist account involves counting and focusing on the isolated micro and meso impacts [4], while this article applies a framework where AI is seen in context, and the overarching effects are emphasized.
As shown in Figure 2, AI might have a positive impact on, for example, economic growth in a particular region (e.g., a country), but this could simultaneously be a kind of growth that exacerbates differences between countries (macro level) and within the nation (micro level).
AI is here not considered as an isolated and neutral technology, but as a part of something larger, as explained in the previous section. AI is considered to be connected to a range of other technologies and societal phenomena. To understand the actual effects of AI, it must be considered in relation to the sociotechnical system in which it is a keystone, and not in isolation [4]. This system is today often referred to as surveillance capitalism or the data economy [23,40]. Some prefer to emphasize the role of platforms, the politics of platforms, and platform capitalism [33,34,43]. AI is also often considered an integral part of what Schwab [44] refers to as the fourth industrial revolution, a view Barley [4] objects to. The exact details of this sociotechnical system are beyond the scope of this article, and the conceptual framework must be considered tentative and as a proposal and invitation for further research and analysis. Distinguishing between direct and indirect effects and effects on the micro, meso, and macro level, however, is sufficient for showing that much current research overstates the positive impact of AI, while being blind to some of the negative impacts.
This article presents a complementary approach to the evidence-based one, and both are necessary, as finding empirical evidence for all the potential indirect long-term consequences related to AI is impossible. A shallow and isolationist approach alone runs the risk of concealing both great threats and important nuances related to the consequences of AI, and this is remedied by the deeper approach here presented. As a consequence of these choices, this article mainly refers to the top-level goals and the overarching implications of AI for the SDGs, rather than detailed examinations of isolated use cases.
In order to structure the analysis in the Results sections, the SDGs are grouped with regard to (a) the level of impact, (b) whether AI mainly has direct or indirect effects on the SDG, and (c) whether or not the goals have clear and important ripple effects. The categorization resulting from this conceptual framework is shown in Figure 3.

4. Results 1: Top-Level Goals—High Potential Impact, High Ripple Effect (Group 1)

The top-level goals in the context of AI are goals for which both positive and negative impacts are likely and for which the impacts are potentially vital to reaching, or not reaching, the SDGs. These goals are also intimately connected to a wide range of other goals, and understanding the ripple effects of AI’s impact on these goals allows for a more nuanced analysis of the overall impacts of AI. The two goals in this category are SDG8 and SDG9.
For these two particular goals, it will also be necessary to consider them as compound goals, consisting of several distinct goals. SDG8, for example, is named decent work and economic growth. Economic growth can surely impact the chances of finding “decent work”, but we also know that AI influences the nature of work directly. The details of what constitutes decent work are scarce, but it can, for example, be argued that AI-powered surveillance and manipulation of workers is detrimental to work decency.
Of these two goals, SDG9 is considered to have the most impact, and it will thus be considered first. The impact of AI on innovation is argued to be the most important contribution of AI bar none, and this also partially explains how and why AI also impacts economic growth.

4.1. SDG9: Build Resilient Infrastructure, Promote Inclusive and Sustainable Industrialization and Foster Innovation

SDG9 consists of multiple concepts—namely, innovation, infrastructure, and industry. Henceforth, 9a will refer to innovation, 9b to infrastructure, and 9c to industry. The three elements are clearly related, but they are also clearly distinct. Innovation mainly refers to subgoals 9.5, 9.b, and 9.c, which detail the need for scientific research and technological capabilities (9.5), domestic technology development (9.b), and increased access to information and communications technology (ICT) (9.c). Vinuesa, et al. [1] found evidence of positive impacts on 91% of the subgoals of SDG9 (with all subgoals potentially positively affected), while they found evidence of negative impacts on 34% of the subgoals (with 50% potentially negatively affected).

4.1.1. Innovation

While AI is potentially important for innovation in both the public and the private sector, a slightly more careful reading of the goal reveals that domestic development and better access to ICT is of great importance. AI as it exists today is developed in a limited set of countries, and while it may be applied throughout the globe, the benefits and profits from said applications largely fall back to the home countries of the major companies in control of data. Meso-level benefits could here be associated with macro-level harms. One important reason for this is that access to ever larger amounts of data is at the core of recent progress in AI [45]. Access to data is not fair and equal and neither is access to the computing power and infrastructure required to benefit from cutting edge AI.
Nevertheless, modern AI—even if unevenly distributed—can lead to general innovation and scientific and technical progress that can potentially benefit all in the long run. Taking a page from neoliberal economics, one might argue that AI leads to a trickle-down effect, even if access to the technology and profits from AI is today largely controlled by a limited set of nations and companies.
Scientific progress is argued to have very high impact, and the ripple effects are also substantial. In fact, most of the impact of AI could be attributed to the effects of reaching goal 9a; new technology and insight will enable us to reach most other goals more effectively, and it thus makes more sense to emphasize the contribution of AI to this goal, rather than arguing that AI will in effect contribute positively to just about all the goals, as Vinuesa, et al. [1] tend to do.
However, while innovation and scientific advances are desirable, it is paramount to note that innovation and science in the hands of “evil” is just as much a force for evil as it is a force for good in the hands of the good. Scientific “progress” is never neutral, and it must always be evaluated on the basis of (a) the goals of our societies and (b) the particular applications [46].
This leads to the conclusion that AI has a high potential for negative and positive impact on SDG9a, and that the ripple effects of innovation and scientific progress are large. SDG9 is a keystone goal for evaluating AI, and it is important to distinguish the direct effect of AI on all the other targets from the indirect effects of reaching SDG9. In addition, it must be remembered that AI innovation in Chinese or American private companies is in no way by definition conducive to the kind of innovation referred to in SDG9, and that rapid innovation in such companies can easily exacerbate and widen the gulf between developed and developing nations. The macro perspective is essential for analyzing these impacts.

4.1.2. Infrastructure

Innovation is also mentioned with regard to developing infrastructure, which is the main focus of subgoals 9.1 and 9.a, detailing the development of infrastructure (including regional and transborder infrastructure) with an emphasis on universal, affordable, and equitable access, and the facilitation of such development in developing countries.
While AI could be considered a part of the drive for affordable and equitable access to infrastructure, this is not necessarily so in reality. Again, access to data and computing infrastructure is far from “universal, affordable, and equitable”, and this is further exacerbated by the fact that much innovation and development is performed in private companies producing proprietary solutions. Once again, we see that micro- and meso-level benefits can lead to macro-level harms.
The problems related to infrastructure are further exacerbated by the fact that AI is becoming an integral part of modern infrastructure, particularly as both cities and infrastructure in general are made “smart” [47]. AI is becoming part of everyday technologies of work and communication: public and private corporations rely on it, smart cities are built on it, and it is increasingly being built into all digital solutions—even those that do not strictly rely on AI to function [48]. The technology-dominated sociotechnical system is built on increasing privatization, and it shapes new infrastructure, creating a clear and obvious threat to the achievement of SDG9b. This infrastructure is neither universal nor affordable, and it is not characterized by equitable access.
There are, however, also arguments in favor of AI promoting infrastructure. By improving the efficiency of existing infrastructure and being part of new and potentially affordable solutions, AI could in principle, if developed openly and disseminated, promote better local and transborder infrastructure.

4.1.3. Industry

The preceding goals are also linked to industry, which is detailed in subgoals 9.2, 9.3, and 9.4. The emphasis is on sustainable and inclusive industrialization, with the related goals of increasing the role and size of industry—particularly in the least developed countries. Furthermore, the possibility of building small-scale industry is mentioned, as well as financing, and lastly the use of innovation and sustainable infrastructure to make industry environmentally and people friendly.
AI might be argued to enable small scale and efficient industry, but this requires access to AI systems, data sets, and computing infrastructure. In order to realize the potential meso- and micro-level benefits, macro-level change is required. The most important benefits of AI on industry seem to relate to innovation and automation—aspects that are better understood through SDG9a and SDG8. While the potential impact on industry is high, it seems unlikely that AI as it exists today is a force for an environmentally and people-friendly industry that is particularly beneficial to the least developed countries. The opposite might be the case.

4.2. SDG8: Promote Sustained, Inclusive and Sustainable Economic Growth, Full and Productive Employment and Decent Work for All

SDG8 will be referred to as 8a (referring to economic growth) and 8b (referring to decent work). Vinuesa, et al. [1] argue that AI potentially positively affects 92% of the subgoals (77% with proof) and negatively affects 33% of the goals (25% with proof).

4.2.1. Economic Growth

SDG8a is categorized as high impact, as AI has already proven to be an important catalyst of economic growth and a creator of value. The ripple effects of economic growth are also highly important, as growth could enable us to eliminate, for example, poverty and starvation.
The growth that has followed the rise of the Big Tech giants, however, has not been conducive to reaching such goals. There has been growth, but growth that has simultaneously increased inequality, as the richest one percent in rich countries (and the world) now control an unprecedented share of total wealth [49]. Despite various issues related to tracking inequality precisely, Chancel [49] shows that inequality is a real and pressing issue, and that both in-country and between-country inequality are high and not abating. In-country inequality is, however, becoming increasingly important, and class, rather than nationality, is now seen as a determinant of global inequality [49].
While liberal theorists have long argued that a rising tide will lift all boats—or that rain on the rich will eventually trickle down on the poor—modern history has not been kind to these theories. Economic growth has been achieved, but this has generally—not just in the US—led to a situation of increased inequality [50].
The kind of growth that would be conducive to reaching SDG8a has to be sustained, inclusive, and sustainable [3]. This implies that growth in itself matters relatively little but that the kind of growth matters considerably. Taking this into account, it becomes obvious that AI is a force that both enables and inhibits the reaching of SDG8a. The indirect effects of economic growth on the other goals will be discussed in relation to these goals, and with regard to SDG8a, the main questions are (a) whether AI promotes growth and (b) whether the growth is sustained, inclusive, and sustainable. The first has been proven true, while the latter is much more uncertain. There is little evidence to suggest that AI promotes inclusive growth and the goal of at least 7% GDP growth per year in developing countries. While the number of tech startups in the developing world has increased, the fact that these companies usually ultimately become owned and controlled by Western owners leads some to label this a form of new colonialism [19,51].

4.2.2. Decent Work

The impact of AI on decent work—SDG8b—is potentially high, but the ripple effects are less obvious. If inclusive and sustainable economic growth and innovation are achieved, new jobs are most likely to be created, providing new opportunities for decent work. However, it is also obvious that AI can hurt the decency of work and cause micro-level harms, both through surveillance and manipulative practices at the workplace and through automation. Employers may implement surveillance and manipulative techniques to promote efficiency and increase their control over employees [52]. Furthermore, peer surveillance may emulate and amplify such techniques [53].
AI is also increasingly enabling the automation of work. While Danaher [54] argues that this process may free us from tedious work, and work in general, this is not the same as creating decent work opportunities for all. Furthermore, in the absence of solutions akin to citizen salaries or extensive welfare states, automation is likely to disrupt the work opportunities and general situation of those worst off.

5. Results 2: Direct Effects with Ripple Effects (Group 2) and Varied Direct Effects without Ripple Effects (Group 3)

The second group consists of goals that are potentially directly affected by AI—both positively and negatively. The impact of AI on these goals is considered non-trivial, and it will also have non-negligible ripple effects. The goals in this group are SDG3, SDG4, SDG11, and SDG16.
In the third group, we find SDGs that are also potentially directly affected by AI. While we may identify direct effects, these goals are also indirectly affected by AI, in particular through the impacts on goals in group 1. These goals are assumed to have limited ripple effects in comparison to the goals in groups 1 and 2. That is, however, not to say that there are no ripple effects or that the ripple effects are not important. The goals in this group are SDG5, SDG10, and SDG13.

5.1. SDG3: Ensure Healthy Lives and Promote Well-Being for All at All Ages

Few things rival health and well-being as goals of human-oriented sustainability. One might be encouraged, then, as Vinuesa, et al. [1] find evidence of positive contributions to 69% of the subgoals and negative implications for only 8% of them.
Positive impacts on health can be indirect and can occur through AI-driven innovation and research, for example. In addition, some might argue that various technologies of the self and self-tracking can lead people to live healthier lives. AI is also being used in therapy and can plausibly improve the mental health of those without the opportunity to see human therapists [55].
However, self-tracking and the quantification of the self are not necessarily good things [56,57]. Furthermore, social media and the various new arenas in which AI plays a key role seem to foster mental ill-health rather than improve mental health, although the evidence is still inconclusive [58]. A further concern is that, for example, AI-powered workout equipment and nutritional analyses and plans could easily improve the health of some while increasing the differences between those with access to such technologies and those without. This SDG shows the complex nature of AI impact, as macro- and meso-level benefits must be considered against potential micro- and meso-level harms.

5.2. SDG4: Ensure Inclusive and Equitable Quality Education and Promote Lifelong Learning Opportunities for All

Big Tech is heavily invested in education, and much research and investment are aimed at capturing the education market. Learning analytics is one area in which AI may enable analyses that improve education, but AI can also be used as a teacher. There are obvious limitations to current intelligent tutoring systems (ITS), but they have proven their potential, and as technology progresses, so will the effectiveness of AI in the area of education [59,60].
AI might enable us to reach SDG4 through remote teaching in particular. Providing education even in the remotest areas of the world becomes much more affordable and potentially quite effective. This, however, requires affordable ITSs built and developed for a wide range of languages and subjects. If the companies developing these systems focus on their home markets—very profitable markets—the benefits will lead to increased inequalities in education rather than the opposite.
Vinuesa, et al. [1] state that AI could enable us to reach all the subgoals of SDG4, while finding evidence of positive impact on 93%. Potential negative impact was found for 70%, with evidence of impact on 60%. The potential for AI is doubtlessly great, but realizing it requires a disconnect from Big Tech and proprietary, Western-focused systems. The need to make sure that top-quality educational systems become available to all is shown both in SDG4 itself and in the highly important ripple effects that would follow from reaching this goal. Better education in the developing world would lead to huge benefits for these societies, and goals such as SDG5, SDG10, SDG8, SDG9, and SDG16 are all positively affected by providing quality education for all.

5.3. SDG11: Make Cities and Human Settlements Inclusive, Safe, Resilient, and Sustainable

Vinuesa, et al. [1] argue that AI positively impacts 90% of the subgoals of SDG11 and negatively impacts only 10%. Technology-intensive smart cities, for example, are the result of a UN initiative for promoting the development of sustainable cities and knowledge transfer between the many smart cities across the globe [61]. Smart cities are technologically advanced, and digital technologies are involved in the provision of a range of city infrastructures. While AI is important here, it should be noted that many of the innovations related to smart cities are not necessarily AI-based, even if they are digital.
However, SDG11 entails more than highly advanced science fiction cities—it is about creating inclusive, safe, resilient, and sustainable settlements. AI can surely be involved in the provision of safety through surveillance and predictive policing, among other things [62]. However, parts of the AI ethics community are deeply concerned about the “prison-to-tech pipeline”, with repeated calls for the abolition of technologies such as facial recognition, which is argued to perpetuate racism and other problematic biases—now camouflaged by the shiny veil of high tech AI [63].
In addition, there is very little to support the notion that AI leads to inclusive settlements or that it directly fosters resilience for all or sustainability in general. On the contrary, surveillance and AI are enthusiastically employed by the more ominous societies of the modern world, in which they are used to exercise what approaches complete control over citizens—a development that can hardly be seen as compatible with SDG11 in any way. While governments can weaponize AI and use it to control their populations, AI is also connected to increased polarization, another potential inhibitor of safe and inclusive settlements [37].

5.4. SDG16: Promote Peaceful and Inclusive Societies for Sustainable Development, Provide Access to Justice for All and Build Effective, Accountable and Inclusive Institutions at All Levels

The most politics-oriented goal is SDG16, which focuses on societies, access to justice, and effective, accountable, and inclusive institutions. While Vinuesa, et al. [1] argue that a majority of the subgoals can be, and are, positively affected by AI, there are several reasons to be wary of this analysis. They also find that only 15% of the subgoals are negatively affected by AI.
The first question should be: how can AI lead to just, inclusive, and sustainable institutions and societies? One might argue that AI can be used to secure inclusion and non-discrimination, but such a stance would reveal a lack of understanding of the deeply problematic issues related to automated decision-making, bias, and the troubles involved with uncovering such biases [30,64].
One hypothetical way AI could foster democracy would be to pursue the ways in which an AI technocracy could paradoxically lead to a revitalized democracy [65]. This would, however, entail radical changes in our political structures, and there are few reasons to believe that such solutions would a) work or b) not be associated with important negative consequences as well.
Turning to the negative impacts, many argue that AI promotes various forms of polarization through such mechanisms as filter bubbles and echo chambers [37]. Fake news is also connected to AI-based social media [66]. In addition, AI allows for more effective manipulation—or nudging—of individuals, and while Thaler and Sunstein [67] argue that nudging should be done for good, it is quite unlikely that actors with other intentions will abstain from using such techniques to promote their own self-interest over that of other citizens, consumers, and individuals in general [27,28].
The ripple effects of negative AI impact on SDG16—if increased polarization and in-group preferences are real and manifest themselves in international relations as well as intranationally—entail negative effects on most SDGs, including, but not limited to, SDG1, SDG2, and, in particular, the important SDG17.

5.5. SDG5: Achieve Gender Equality and Empower All Women and Girls

Equality is in this context exclusively related to gender. However, it is also relevant to include issues of race, ethnicity, sexuality, etc., when considering the impact of AI. Vinuesa, et al. [1] find proof of positive AI influence on 44% of the targets and negative impact on 31% of them.
Starting with the good, AI might be argued to enable us to reach this goal through various innovations that emancipate women from work at home, etc. Furthermore, it might be argued that AI and automated decision making could enable us to overcome human bias in choices related to work, financing, politics, and life in general. A computer may be free from bias in principle, but such a stance does not reflect the theory-laden nature of data and the unavoidable human involvement in the development and application of AI [39,68].
Turning to potential negative aspects of AI, numerous authors have done important work on how AI negatively impacts the vulnerable, including women and minorities [30,31]. While subgoal 5.b—using enabling technology to empower women—seems to indicate that AI has a positive impact on SDG5, a) AI is not a necessary part of enabling technologies, and b) the vast majority of developers are male (and white). It thus seems naïve to assume that AI in context is a force for making women and minorities more equal and less vulnerable to various forms of marginalization and discrimination.

5.6. SDG10: Reduce Inequality within and among Countries

Inequality is one of the areas in which Vinuesa, et al. [1] find much scope for negative AI impact, with 70% of subgoals potentially impacted and 55% actually impacted negatively. On the other hand, positive impacts are conceived for 90% of the subgoals, with evidence found for 75% of them.
The positive effects of AI on SDG10 would most likely come about through achieving inclusive economic growth. However, as AI in context does not seem to foster such growth, the indirect effects on SDG10 are most likely negative. AI will potentially exacerbate differences between the rich and poor in many areas, and the danger that AI inhibits SDG10 is substantial. This applies both within and between nations, as neither local inequalities (e.g., in the US, where Big Tech resides) nor global inequalities show any signs of disappearing—at least not as a consequence of AI [49].

5.7. SDG13: Take Urgent Action to Combat Climate Change and Its Impacts

Climate change is increasingly perceived as a real and important threat to human (and nonhuman) livelihood. AI excels where it can optimize decisions in highly complex environments, and it could be argued that AI might improve decision making related to climate policy. Some have even argued that a limited AI technocracy could be founded on the need to combat changes that human politicians are neither willing nor able to face satisfactorily [65]. Indirectly, climate change might also be mitigated through AI-powered innovation, infrastructure, and industry (SDG9). Another potential application of AI is in the development of controversial—but feasible—geoengineering solutions that allow people to sidestep the need for substantial and radical changes in economic and political systems [69]. Vinuesa, et al. [1] find a positive AI influence on 70% of the subgoals and a negative influence on 20% of them.
However, Big Tech has thus far not seemed particularly green in the sense of promoting a green transition. AI is used just as much to make the extraction of oil and gas more efficient as it is in the planning of windmill parks. These are all indirect effects, and a more pressing concern is how modern AI directly creates increasing amounts of emissions from the data-intensive training of machine learning algorithms [70]. In fact, Timnit Gebru—a pioneer in critical AI research—recently departed (or was departed from) Google after a row over a paper she co-authored. In it, the authors argued that the current trajectory of AI, with natural language models trained on increasingly large data sets, is unsustainable and harmful in a number of ways—including its massive carbon footprint [71]. This relates to the efforts to develop green information technologies, as people are increasingly coming to the realization that mass use and production of technology entail significant environmental costs [72].
AI and Big Tech are parts of an economic system in which conventional growth is the driving force, and it seems likely that AI will have to be divorced from this system before the overall implications of AI can be considered a significant positive force for combatting climate change [70].

6. Results 3: High Impact, Indirect Effects (Group 4) and Minor/No Effects (Group 5)

Certain goals are potentially highly influenced by AI, but mainly through indirect effects. These goals are gathered in group 4, consisting of goals for which AI has a high impact, but mainly indirectly, and for which there are limited ripple effects. The goals in this group are SDG1, SDG2, and SDG12.
Finally, there are the goals in group 5, on which AI is considered to have minor or no direct effects and limited indirect effects. It must be noted that this classification is based on the impact of AI and is in no way connected to the overall importance of the SDGs in question. On the contrary, these goals may be vitally important for creating a sustainable future, and labelling them as group 5 goals simply means that the impact of AI on reaching these goals is limited. The goals in this group are SDG6, SDG7, SDG14, SDG15, and SDG17.

6.1. SDG1: End Poverty in All Its Forms Everywhere

Perhaps the most ambitious goal of all is to end all poverty, in all forms, everywhere. It is quite impressive, then, when Vinuesa, et al. [1] argue that AI positively influences all subgoals of SDG1 and negatively affects only 43% of them.
While seemingly impressive, the effects on SDG1 are almost entirely the indirect effects of reaching SDG8a, as this relates to economic growth. As noted above, it is highly doubtful that the economic growth promoted by modern AI is sustained, inclusive, and sustainable, and if it is not, it is also highly unlikely that AI will help eliminate poverty. This is particularly true if we recognize that poverty is both an absolute and a relative concept, which implies that economic growth combined with increasing inequality will lead to more people living in relative poverty.
Arguing that AI positively affects all subgoals of SDG1 thus hinges on an isolationist and context-free analysis of AI—one that is likely to create a misleading view of AI and also fuel a general AI hype, as will be emphasized in the conclusion.

6.2. SDG2: End Hunger, Achieve Food Security and Improved Nutrition and Promote Sustainable Agriculture

Agriculture is no exception to the march of automation, and AI is a key technology in this development. Examples of uses of AI are self-driving farm equipment and machinery, the monitoring and management of farms, and the development of new genetically modified crops.
On the other hand, AI-based agriculture is costly, and there is a real danger that all the aforementioned applications may indeed make agriculture more effective in the rich world (meso), while further exacerbating the gulf between developed and developing country farmers (macro). This would be akin to infrastructural technological change [4], such as the introduction of snowmobiles in Skolt Lapland, which led to broad and deep effects in societal relations and in the distribution of both wealth and work [73].
In addition, AI in itself has no impetus toward more nutritional crops and may just as easily be used to develop and grow profitable crops—crops that are popular and tasty but lack nutritional value. Lastly, there is no reason to assume that AI-based agriculture will be more sustainable or that more effective food production in the rich countries will lead to an end of hunger. Rich countries have produced more food than they require for a long time, but this does not mean that the surplus is distributed justly. Rather, it is often burnt, and there is little reason other than naïveté to assume that AI changes this. Despite this, Vinuesa, et al. [1] portray AI as an enabler for 69% of the subgoals and an inhibitor for only 13%.

6.3. SDG12: Ensure Sustainable Consumption and Production Patterns

Consumption is connected to production, and the main focus of SDG12 is the interaction between the two, enabling consumers to consume sustainably. Vinuesa, et al. [1] find evidence of positive impact on 59% of the subgoals and negative impact on 16% of the subgoals.
Sustainable industry and general innovation may impact this goal indirectly, and the main direct positive effect of AI on this goal would be through enabling people to monitor and track their consumption and its environmental consequences—hopefully producing sustainable consumption patterns.
However, AI-enabled products proliferate, and the internet of things (IoT) is intimately connected to AI and data-driven innovation. In addition to manufacturing new and previously unknown needs with these new products, the IoT also provides a new way to plan obsolescence for products that were previously long lasting. While older TVs became obsolete when new physical technology made them less attractive, "smart" TVs, etc., routinely tell customers that new apps will not run on their devices and that they will have to replace them in order to stay up to date. The positive effects of AI on sustainable consumption seem negligible, while the negative effects—as AI is considered in its sociotechnical context—are potentially great.

6.4. SDG6: Ensure Availability and Sustainable Management of Water and Sanitation for All

Water and sanitation systems can surely be more effectively managed by AI systems and potentially improved through innovation. These are, however, relatively minor effects and for the most part dependent on achieving SDG9a (innovation) and SDG9b (infrastructure). Furthermore, increasingly sophisticated water systems become more susceptible to error and attack, and access to AI powered water systems may also lead to increasing inequality between those with access and those without. In contrast to the rather modest AI impacts suggested by this article, Vinuesa, et al. [1] argue that AI enables all the subgoals for this SDG, while it negatively affects 28%.

6.5. SDG7: Ensure Access to Affordable, Reliable, Sustainable and Modern Energy for All

Energy use and distribution are portrayed by Vinuesa, et al. [1] as an area in which AI positively affects all subgoals, while simultaneously inhibiting 40% of the same goals. The effects of AI on energy systems are non-negligible, but they are mainly indirect effects connected to innovation and infrastructure. The main benefit would seem to be more efficient systems of production, planning, and distribution of energy. If properly disseminated to developing nations, these effects might be of greater importance, as better access to energy would cause ripple effects related to SDG8 and SDG9—improving infrastructure, promoting industry, and thus leading to economic growth [10].

6.6. SDG14: Conserve and Sustainably Use the Oceans, Seas and Marine Resources for Sustainable Development

Sustainability related to the oceans includes considerations about marine ecosystems and marine resources, including biological diversity in the seas. This is an area in which AI has long been portrayed as important in terms of surveillance of boats and the monitoring of fisheries and fish stocks. This may, in theory, prevent another instance of the kind of overfishing that led to the near-demise of the north Atlantic cod stocks [74].
Ocean acidification is another issue related to SDG14 and one that is partly caused by a failure to achieve SDG13, as acidification results from the ocean's uptake of CO2 from the atmosphere and is thus an indirect effect of the causes of climate change. While AI may be helpful in monitoring marine ecosystems, the added benefits of AI in this respect are considered to be insubstantial.
Furthermore, as overfishing is one of the main concerns in SDG14, the threat posed by AI and monitoring systems in the wrong hands is potentially worse than the benefits of using AI for good. AI can be used both to find and track fish stocks more effectively, and also to find and monitor—and thus evade—those tasked with preventing illegal activities at sea.

6.7. SDG15: Protect, Restore and Promote Sustainable Use of Terrestrial Ecosystems, Sustainably Manage Forests, Combat Desertification, and Halt and Reverse Land Degradation and Halt Biodiversity Loss

SDG 15—life on land—encompasses terrestrial ecosystems, issues related to land use and degradation, and biodiversity. According to Vinuesa, et al. [1], AI may potentially impact all subgoals positively, and they find proof of positive contributions for 88% of the subgoals. They can imagine negative impacts for 33% of the targets but find proof of it for only 8%.
The major reason for imagining positive impacts is related to increased surveillance and monitoring systems, enabling us to more effectively identify species and areas at risk and to counteract negative developments more effectively. Various sources of data are important in this context, including registry data regarding weather, geology, and species, and also satellite imagery, etc.
However, the impact of AI systems on these goals is currently largely unknown and hypothetical, and it must be noted that surveillance and monitoring systems could also be used to prevent the attainment of this goal. Businesses may identify more effective ways to exploit land and natural resources, and poachers, for example, could easily use satellite imagery and prediction systems to more effectively hunt rare animals and exacerbate biodiversity loss.

6.8. SDG17: Strengthen the Means of Implementation and Revitalize the Global Partnership for Sustainable Development

The implementation of the SDGs and the promotion of partnerships is an area in which AI may serve a supportive role, for example, by way of monitoring systems for compliance, etc., but it is not widely assumed that AI will play a key role in achieving SDG17. Vinuesa, et al. [1] find evidence of positive AI contributions on 15% of the subgoals and negative contributions to 5% of the subgoals.
AI can, however, play a key role as the subject matter both of regulations and of policy for the partnership for sustainable development. In particular, AI is a technology that must be transferred and made more readily available to all in order to achieve just about any of the SDGs in which capacity building in developing nations and the promotion of equality and fairness are involved (which is, incidentally, just about all of them). The system in which data-intensive AI is almost exclusively controlled by a small number of powerful nations is a key inhibitor of the SDGs in general and thus an issue of great concern for those working toward SDG17.

7. Discussion

This article shows that influential attempts to analyze the effects of AI on the SDGs, for example by Vinuesa, et al. [1], overstate the evidence available for positive AI impact. Simultaneously, potential negative impacts are disregarded. The conceptual framework here presented shows that there are several reasons why this occurs.
First, they do not weigh the magnitude of AI's impact on the various subgoals. While a trivial positive contribution to a subgoal is counted as one instance of positive AI contribution, a potentially severe negative impact is likewise counted as one instance of negative AI impact. The threat could be many times greater than the positive contribution, but this is partially neglected in their article, as they perform more of a counting game than a comprehensive analysis of the ultimate impact of AI. By seeing AI in context and distinguishing between the micro, meso, and macro levels, it becomes possible a) to evaluate the overall impact on a goal and b) to see how a purported positive impact might simultaneously be associated with negative impacts on other levels of analysis.
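The difference between the two approaches can be illustrated with a minimal sketch. The impact scores below are entirely hypothetical and serve only to show how a counting approach and a magnitude-weighted approach can reach opposite conclusions about the very same goal:

```python
# Illustrative sketch (hypothetical scores): counting subgoal impacts as equal
# units versus weighting each impact by its estimated magnitude.
# Sign gives the direction of impact; magnitude gives estimated severity.
impacts = [+1, +1, +1, +1, -3, -3]  # four trivial benefits, two severe harms

positives = sum(1 for i in impacts if i > 0)
negatives = sum(1 for i in impacts if i < 0)
weighted_total = sum(impacts)

# A pure counting approach reports the goal as mostly enabled...
print(f"subgoals positively affected: {positives}/{len(impacts)}")  # 4/6
# ...while weighting by magnitude reports a net negative overall impact.
print(f"weighted net impact: {weighted_total}")  # 4 - 6 = -2
```

The point is not the particular numbers, which are invented, but that any headline figure of the form "AI enables X% of subgoals" silently assumes all impacts are of equal weight.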
Secondly, they neglect the important insight that proof of the potentially positive impact of AI will often simultaneously be proof that AI can be used to inhibit the goals. Truby [19] demonstrates this in the context of finance and anti-money laundering (AML), as he shows that the models used to detect money laundering can easily be used to make money laundering more effective and avoid AML. This applies to a vast number of AI models. Only by engaging theoretically with proofs-of-concept and the intentions of the goals will such important implications be uncovered. Furthermore, as this implies that a single proof-of-concept will often entail that AI might have both positive and negative consequences, being able to evaluate the impact on the various levels of analysis is required for reaching any conclusion regarding overall impact.
Thirdly, they overstate the possibility of evaluating the impact of AI through empirical and quantitative methods as they downplay and disregard deeper theoretical analyses and what they call “speculative” research. This article has shown that research based on theoretical analysis and conceptual development is vitally important for understanding the various dependencies between the SDGs and also how AI is connected to a wider sociotechnical system that should not be analytically separated from the narrow technical details of AI. Vinuesa, et al. [1] at times show that they are aware of the fact that AI may benefit some groups and nations more than others, but their methods prevent them from sufficiently reflecting this in their findings and quantitative reporting of the benefits and harms of AI in an SDG setting.
As these points show, and as supported by the analysis, empirical and evidence-based research must be supplemented by comprehensive theoretical analyses of the linkage between the various SDGs and of AI as a part of a broader economic, social, and political system [4].
Preparing the ground for using the SDGs as a framework for AI ethics in which context and the sociotechnical system is taken seriously has been one of the main purposes of this article, and the findings presented highlight the need for more research on the impact of AI based on the conceptual framework presented. The article has also clearly demonstrated that while AI can indeed be a force for good, it is also a cause for concern, and policymakers and regulators alike must account for such concerns. One important role for the AI ethics community is to help elucidate the impacts of AI so that policymakers and regulators can work on the basis of a more comprehensive understanding of AI impacts. This article has shown how the SDGs constitute one important framework for AI ethics.

The Limitations of Using the SDGs for Evaluating the Impacts of AI

While the SDGs allow us to highlight many of the ethical issues related to AI, there is also a blind spot related to the impact AI has on individuals in general, regardless of where—or who—they are. While the SDGs allow us to highlight the problematic nature of facial recognition, for example, this must be done by highlighting how AI is biased and non-neutral. Issues of privacy are also not sufficiently understood through any of the SDGs, unless they lead to unjust and discriminatory outcomes and practices. These issues may be important regardless of their unequal impact. This highlights one limitation of the SDGs and shows that complementary critical frameworks are also required. Moral and legal theories, and the more fundamental human rights framework, provide a wide range of tools to complement the SDGs. This will allow us to factor in the negative impacts related to a loss of autonomy and liberty through, for example, automated decision making and a loss of privacy.
Another limitation is that the SDGs are based on a fundamental techno-optimism, as a wide range of goals entail using technology to solve the problems faced—even if technology is often at the very root of the problems we seek to solve. If we turn to more radical critical approaches, we might label the SDGs a shallow approach to the issues we face today, while a deeper approach that allows us to consider the foundations of the problems and imagine radically different solutions may be required [46]. The way of the West has been to pursue science with the aim of reining it in and using it to control nature, but other perspectives, such as indigenous perspectives [75,76], might serve as valuable correctives that allow us to rethink our fundamental assumption about the nature of our politics and what truly sustainable relations between humans and nature entail.

8. Conclusions

AI has great potential, but this potential is repeatedly overstated. This article has emphasized that AI must be seen in a larger context and that previous research on AI and the SDGs has systematically understated the potential negative effects of AI on the SDGs. Firstly, it has been shown that while AI may conceivably have isolated positive effects on various goals, AI systems are also part of a system that simultaneously counteracts many of these effects. In addition, the article has shown that many proofs-of-concept of AI's positive potential will in effect simultaneously demonstrate how AI may be used in ways that lead to negative effects on the SDGs. Overlooking these aspects may take us from recognizing the actual positive potential of AI into the realm of AI hype.
AI hype usually refers to overstating the capabilities of AI, and that is part of the problem here discussed, as limited successes in closed experimental settings are not necessarily transferable to the open systems of real life. Another way to hype AI is by analyzing the SDGs as isolated and atomistic goals in which there are no important interlinkages. By factoring in the indirect effects of reaching a few central SDGs, we can avoid overstating the importance of AI by not counting the same things multiple times. In addition, a proper reading of the SDGs includes factoring in considerations of inclusivity, sustainability, universality, inequality, and equity. By doing so it becomes much harder to label AI an enabler of many of the goals.
Furthermore, AI can also be hyped by ignoring the context in which AI has been developed and is still being developed and used. AI resides firmly in the hands of a small set of actors, companies, and countries, and its ties to the capitalism of our age are strong and cannot be ignored. The modern era has seen great progress—and growth—but it has also fostered unrivalled inequality and environmental degradation. Seeing AI in context requires analyzing AI as part of a larger structure, and doing so shows that it is intimately tied to severe threats to most of the SDGs. Ignoring this and arguing that AI is a decoupled and neutral technology that can save the world is at best ignorant and at worst deeply irresponsible and dangerous. These issues highlight the need for independent and rigorous research on AI's ethical implications and sustainability, and we would do well to be cautious of regarding research performed within that very system as the final word on the sustainable nature of AI.


Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.


References

  1. Vinuesa, R.; Azizpour, H.; Leite, I.; Balaam, M.; Dignum, V.; Domisch, S.; Felländer, A.; Langhans, S.D.; Tegmark, M.; Nerini, F.F. The role of artificial intelligence in achieving the sustainable development goals. Nat. Commun. 2020, 11, 1–10. [Google Scholar] [CrossRef] [Green Version]
  2. ITU. AI4Good Global Summit. Available online: (accessed on 31 January 2021).
  3. United Nations. Transforming Our World: The 2030 Agenda for Sustainable Development; Division for Sustainable Development Goals: New York, NY, USA, 2015. [Google Scholar]
  4. Barley, S.R. Work and Technological Change; Oxford University Press: Oxford, UK, 2020. [Google Scholar]
  5. Sachs, J.D. From millennium development goals to sustainable development goals. Lancet 2012, 379, 2206–2211. [Google Scholar] [CrossRef]
  6. Pekmezovic, A. The UN and goal setting: From the MDGs to the SDGs. In Sustainable Development Goals: Harnessing Business to Achieve the SDGs through Finance, Technology, and Law Reform; Walker, J., Pekmezovic, A., Walker, G., Eds.; John Wiley & Sons Ltd: West Sussex, UK, 2019; Volume 1. [Google Scholar]
  7. Gabriel, I.; Gauri, V. Towards a new global narrative for the sustainable development goals. In Sustainable Development Goals: Harnessing Business to Achieve the SDGs through Finance, Technology, and Law Reform; Walker, J., Pekmezovic, A., Walker, G., Eds.; John Wiley & Sons Ltd: West Sussex, UK, 2019; Volume 3. [Google Scholar]
  8. Nerini, F.F.; Sovacool, B.; Hughes, N.; Cozzi, L.; Cosgrave, E.; Howells, M.; Tavoni, M.; Tomei, J.; Zerriffi, H.; Milligan, B. Connecting climate action with other Sustainable Development Goals. Nat. Sustain. 2019, 2, 674–680. [Google Scholar] [CrossRef]
  9. Le Blanc, D. Towards integration at last? The sustainable development goals as a network of targets. Sustain. Dev. 2015, 23, 176–187. [Google Scholar] [CrossRef]
  10. Nilsson, M.; Griggs, D.; Visbeck, M. Policy: Map the interactions between sustainable development goals. Nature 2016, 534, 320–322. [Google Scholar] [CrossRef]
  11. BERENBERG. Understanding the SDGs in Sustainable Investing (A Berenberg ESG Office Study); Joh Berenberg, Gossler & Co. KG: Hamburg, Germany, 2018. [Google Scholar]
  12. Chui, M.; Manyika, J.; Miremadi, M.; Henke, N.; Chung, R.; Nel, P.; Malhotra, S. Notes from the AI Frontier: Applying AI for Social Good; McKinsey Global Institute: New York, NY, USA, 2018. [Google Scholar]
  13. Di Vaio, A.; Palladino, R.; Hassan, R.; Escobar, O. Artificial intelligence and business models in the sustainable development goals perspective: A systematic literature review. J. Bus. Res. 2020, 121, 283–314. [Google Scholar] [CrossRef]
  14. Khakurel, J.; Penzenstadler, B.; Porras, J.; Knutas, A.; Zhang, W. The rise of artificial intelligence under the lens of sustainability. Technologies 2018, 6, 100. [Google Scholar] [CrossRef] [Green Version]
  15. Toniolo, K.; Masiero, E.; Massaro, M.; Bagnoli, C. Sustainable business models and artificial intelligence: Opportunities and challenges. In Knowledge, People, and Digital Transformation; Springer: Berlin, Germany, 2020; pp. 103–117. [Google Scholar]
  16. Yigitcanlar, T.; Cugurullo, F. The sustainability of artificial intelligence: An urbanistic viewpoint from the lens of smart and sustainable cities. Sustainability 2020, 12, 8548. [Google Scholar] [CrossRef]
  17. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds Mach. 2018, 28, 689–707. [Google Scholar] [CrossRef] [Green Version]
  18. Dignum, V. Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way; Springer: Cham, Switzerland, 2019. [Google Scholar]
  19. Truby, J. Governing artificial intelligence to benefit the UN sustainable development goals. Sustain. Dev. 2020, 28, 946–959. [Google Scholar] [CrossRef]
  20. Herrman, J. We’re Stuck with the Tech Giants. But They’re Stuck with Each Other; New York Times Magazine: New York, NY, USA, 2019. [Google Scholar]
  21. Sen, C. The ‘Big Five’ Could Destroy the Tech Ecosystem; Bloomberg: New York, NY, USA, 2017. [Google Scholar]
  22. Foer, F. World without Mind; Random House: New York, NY, USA, 2017. [Google Scholar]
  23. Véliz, C. Privacy Is Power; Bantam Press: London, UK, 2020. [Google Scholar]
  24. Solove, D.J. Privacy and power: Computer databases and metaphors for information privacy. Stan. L. Rev. 2000, 53, 1393. [Google Scholar] [CrossRef] [Green Version]
  25. Sætra, H.S. Freedom under the gaze of Big Brother: Preparing the grounds for a liberal defence of privacy in the era of Big Data. Technol. Soc. 2019, 58, 101160. [Google Scholar] [CrossRef]
  26. Sætra, H.S. Privacy as an aggregate public good. Technol. Soc. 2020, 63, 101422. [Google Scholar] [CrossRef]
  27. Yeung, K. ‘Hypernudge’: Big Data as a mode of regulation by design. Inf. Commun. Soc. 2017, 20, 118–136. [Google Scholar] [CrossRef] [Green Version]
  28. Sætra, H.S. When nudge comes to shove: Liberty and nudging in the era of big data. Technol. Soc. 2019, 59, 101130. [Google Scholar] [CrossRef]
  29. Müller, V.C. Ethics of artificial intelligence and robotics. In Stanford Encyclopedia of Philosophy; Zalta, E.N., Ed.; CSLI Publications: Stanford, CA, USA, 2020. [Google Scholar]
  30. Noble, S.U. Algorithms of Oppression: How Search Engines Reinforce Racism; New York University Press: New York, NY, USA, 2018. [Google Scholar]
  31. Buolamwini, J.; Gebru, T. Gender shades: Intersectional accuracy disparities in commercial gender classification. Conference on Fairness, Accountability and Transparency. 2018, pp. 77–91. Available online: (accessed on 10 December 2020).
  32. Culpepper, P.D.; Thelen, K. Are we all amazon primed? Consumers and the politics of platform power. Comp. Political Stud. 2020, 53, 288–318. [Google Scholar] [CrossRef] [Green Version]
  33. Gillespie, T. The politics of ‘platforms’. New Media Soc. 2010, 12, 347–364. [Google Scholar] [CrossRef]
  34. Sagers, C. Antitrust and Tech Monopoly: A General Introduction to Competition Problems in Big Data Platforms: Testimony Before the Committee on the Judiciary of the Ohio Senate. 2019. Available online: (accessed on 10 December 2020).
  35. Sattarov, F. Power and Technology: A Philosophical and Ethical Analysis; Rowman & Littlefield: Lanham, MD, USA, 2019. [Google Scholar]
  36. Turkle, S. Alone Together: Why We Expect More from Technology and Less from Each Other; Hachette: London, UK, 2017. [Google Scholar]
  37. Sætra, H.S. The tyranny of perceived opinion: Freedom and information in the era of big data. Technol. Soc. 2019, 59, 101155. [Google Scholar] [CrossRef]
  38. Sunstein, C.R. Republic: Divided Democracy in the Age of Social Media; Princeton University Press: Princeton, NJ, USA, 2018. [Google Scholar]
  39. Sætra, H.S. Science as a vocation in the era of big data: The philosophy of science behind big data and humanity’s continued part in science. Integr. Psychol. Behav. Sci. 2018, 52, 508–522. [Google Scholar] [CrossRef] [Green Version]
  40. Zuboff, S. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power: Barack Obama’s Books of 2019; PublicAffairs: New York, NY, USA, 2019. [Google Scholar]
  41. Davidsen, B.-I. Towards a critical realist-inspired economic methodology. J. Philos. Econ. 2010, 3, 74–96. [Google Scholar]
  42. Bhaskar, R. General introduction. In Critical Realism: Essential Readings; Archer, M., Collier, A., Lawson, T., Norrie, A., Eds.; Routledge: London, UK, 2013. [Google Scholar]
  43. Mills, S. Delete Facebook: From popular protest to a new model of platform capitalism? New Political Econ. 2020. [Google Scholar] [CrossRef]
  44. Schwab, K. The Fourth Industrial Revolution; Currency Press: Redfern, Australia, 2017. [Google Scholar]
  45. Marcus, G.; Davis, E. Rebooting AI: Building Artificial Intelligence We Can Trust; Pantheon: New York, NY, USA, 2019. [Google Scholar]
  46. Næss, A. Ecology, Community and Lifestyle: Outline of an Ecosophy; Cambridge University Press: Cambridge, UK, 1989. [Google Scholar]
  47. Serrano, W. Digital systems in smart city and infrastructure: Digital as a service. Smart Cities 2018, 1, 134–154. [Google Scholar] [CrossRef] [Green Version]
  48. Engström, E.; Strimling, P. Deep learning diffusion by infusion into preexisting technologies—Implications for users and society at large. Technol. Soc. 2020, 63, 101396. [Google Scholar] [CrossRef]
  49. Chancel, L. Ten Facts About Inequality in Advanced Economies, Working Paper 2019/15; World Inequality Lab: Paris, France, 2019.
  50. Piketty, T. Capital in the Twenty-First Century; The Belknap Press of Harvard University Press: Cambridge, MA, USA, 2014. [Google Scholar]
  51. Müller, V.C. (Ed.) Risks of Artificial Intelligence; CRC Press: Boca Raton, FL, USA, 2016. [Google Scholar]
  52. Anderson, E. Private Government: How Employers Rule Our Lives (And Why We Don’t Talk about It); Princeton University Press: Princeton, NJ, USA, 2017. [Google Scholar]
  53. Andrejevic, M. The work of watching one another: Lateral surveillance, risk, and governance. Surveill. Soc. 2004, 2, 4. [Google Scholar] [CrossRef]
  54. Danaher, J. Automation and Utopia: Human Flourishing in a World without Work; Harvard University Press: Cambridge, MA, USA, 2019. [Google Scholar]
  55. D’Alfonso, S.; Santesteban-Echarri, O.; Rice, S.; Wadley, G.; Lederman, R.; Miles, C.; Gleeson, J.; Alvarez-Jimenez, M. Artificial intelligence-assisted online social therapy for youth mental health. Front. Psychol. 2017, 8, 796. [Google Scholar]
  56. Lupton, D. Data Selves: More-Than-Human Perspectives; John Wiley & Sons: Hoboken, NJ, USA, 2019. [Google Scholar]
  57. Lupton, D. The Quantified Self; John Wiley & Sons: Hoboken, NJ, USA, 2016. [Google Scholar]
  58. Appel, M.; Marker, C.; Gnambs, T. Are social media ruining our lives? A review of meta-analytic evidence. Rev. Gen. Psychol. 2020, 24, 60–74. [Google Scholar] [CrossRef]
  59. Heaven, D. Two minds are better than one. New Sci. 2019, 243, 38–41. [Google Scholar] [CrossRef]
  60. Nwana, H.S. Intelligent tutoring systems: An overview. Artif. Intell. Rev. 1990, 4, 251–277. [Google Scholar] [CrossRef]
  61. International Telecommunication Union. United 4 Smart Sustainable Cities. Available online: (accessed on 19 December 2020).
  62. Shapiro, A. Reform predictive policing. Nat. News 2017, 541, 458. [Google Scholar] [CrossRef] [Green Version]
  63. Coalition for Critical Technology. Abolish the #TechToPrisonPipeline. Available online: (accessed on 1 October 2020).
  64. Smith, R.E. Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All; Bloomsbury Academic: London, UK, 2019. [Google Scholar]
  65. Sætra, H.S. A shallow defence of a technocracy of artificial intelligence: Examining the political harms of algorithmic governance in the domain of government. Technol. Soc. 2020, 62, 101283. [Google Scholar] [CrossRef]
  66. Allcott, H.; Gentzkow, M. Social media and fake news in the 2016 election. J. Econ. Perspect. 2017, 31, 211–236. [Google Scholar] [CrossRef] [Green Version]
  67. Thaler, R.H.; Sunstein, C.R. Nudge: Improving Decisions about Health, Wealth, and Happiness; Yale University Press: New York, NY, USA, 2008. [Google Scholar]
  68. Sayer, A. Method in Social Science: A Realist Approach; Routledge: London, UK, 1992. [Google Scholar]
  69. Samui, P. Application of artificial intelligence in geo-engineering. In International Conference on Information Technology in Geo-Engineering; Springer: Berlin, Germany, 2019; pp. 30–44. [Google Scholar]
  70. Brevini, B. Black boxes, not green: Mythologizing artificial intelligence and omitting the environment. Big Data Soc. 2020, 7, 2053951720935141. [Google Scholar] [CrossRef]
  71. Bender, E.M.; Gebru, T.; McMillan-Major, A.; Shmitchell, S. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 Conference on Fairness, Accountability, and Transparency; Association for Computing Machinery: New York, NY, USA, 2021. [Google Scholar]
  72. García-Berná, J.A.; Fernández-Alemán, J.L.; De Gea, J.M.C.; Nicolás, J.; Moros, B.; Toval, A.; Mancebo, J.; García, F.; Calero, C. Green IT and sustainable technology development: Bibliometric overview. Sustain. Dev. 2019, 27, 613–636. [Google Scholar]
  73. Pelto, P.J. The Snowmobile Revolution: Technology and Social Change in the Arctic; Waveland Press Inc.: Long Grove, IL, USA, 1987. [Google Scholar]
  74. Hutchings, J.A.; Myers, R.A. What can be learned from the collapse of a renewable resource? Atlantic cod, Gadus morhua, of Newfoundland and Labrador. Can. J. Fish. Aquat. Sci. 1994, 51, 2126–2146. [Google Scholar] [CrossRef]
  75. Broadhead, L.A.; Howard, S. Deepening the debate over ‘sustainable science’: Indigenous perspectives as a guide on the journey. Sustain. Dev. 2011, 19, 301–311. [Google Scholar] [CrossRef]
  76. Gellers, J. Rights for Robots: Artificial Intelligence, Animal and Environmental Law; Routledge: Abingdon, UK, 2020. [Google Scholar]
Figure 1. The Sustainable Development Goals (SDGs) [3].
Figure 2. Three levels of analysis.
Figure 3. A framework for categorizing the SDGs in terms of AI impact.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
