The progress and opportunities of artificial intelligence (AI) have been discussed by both technology enthusiasts (those who believe technology creates opportunities and eliminates inequalities) and technophobes (those who are disproportionately afraid of technology) [1]. A controversial subject, AI has been debated ever since its inception in the 1950s by John McCarthy [2]. However, even earlier, the possibilities of “machine intelligence” or “artificial intelligence” were already recognized and discussed in the mid-1940s by Turing [3]. Technology enthusiast, physicist, and AI researcher Max Tegmark talks about the opportunities of AI and is convinced we can grow the world’s prosperity through automation without leaving people lacking income or purpose; according to Tegmark, when AI is utilized in this manner, humanity does not have to fear an arms race [4]. Yuval Harari argues against this by pointing out that “instead of fearing assassin robots that try to terminate us, we should be concerned about hordes of bots who know how to press our emotional buttons better than our mother” [5]. One current example that has received a lot of attention is the debate surrounding the last U.S. election and how voters were influenced [6]. The ability to scrape data from across multiple social media platforms and capture user behavior patterns and comments, combined with a mix of machine learning, statistics, robust programming skills, and both artificial and natural intelligence, enables one to capture and influence human behavior [7]. Rather than worry about an unlikely existential threat, Grady Booch urges consideration of how AI will enhance human life [8]. In line with this, digital visionary Kevin Kelly argues that AI can bring on a second Industrial Revolution [9]. In contrast, neuroscientist Sam Harris states that although scientists are going to build superhuman machines, we have not yet grappled with the problems associated with creating something that may treat people the way we treat ants [10]. More specifically, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that differ from human error patterns, ways we do not expect and are not prepared for, which calls for holding on ever tighter to human values and ethics [11]. According to Tegmark, the “elephant in the room” we should be discussing is where we want to go with AI, that is, which society we are aiming toward, rather than how to make AI more powerful and steer it better [12]. Fisher [13] states in the context of AI that “Sustainability is a vast concern, or should be, and presents challenges stemming from interactions between the natural and human-developed spheres across temporal and spatial scales” [13] (p. 4852). Fisher further notes this as a motivation for computer science researchers to apply their knowledge of working on environmental and societal sustainability challenges to AI. He concludes that computational sustainability has taken hold as a vibrant area of use-driven basic research for AI.
We take this as an opportunity to explore the relation between AI and sustainability, as well as sustainable development, in terms of a technology impact assessment. This leads to the following central question for our research, which we base on previous work in sustainability assessment [14].
Research question: What are the potential long-term impacts of AI on sustainability and, more specifically, on sustainable development?
Research design: We use a simplified version of the template for a sustainability analysis by Becker et al. [14] to explore the potential positive and negative influences of AI on the dimensions of sustainability. The first two authors of this paper elaborated the first version on the basis of previous work and a literature study, snowballing from our results to keywords for AI and the individual sustainability dimensions. We then iterated the analysis in discussions among all authors. Subsequently, we conducted a focus group with a set of experts, the Karlskrona Consortium.
Outline: Section 2 describes the background of artificial intelligence, the sustainability analysis, and what sustainable development is. Section 3 presents the sustainability analysis of the domains of AI, Section 4 opens an in-depth discussion of the issues caused by AI, and Section 5 summarizes the next steps in AI sustainability and other areas of research.
In this paper, we see the different effects that AI can have as factors that can intensify development either way: towards more or less globalization, towards more or less equality, and towards more or less justice and peace. As pointed out by Ehrenfeld [87], many of our current sustainability interventions via IT are measures to reduce unsustainability rather than to create sustainability, which means that we have to shift significantly towards a transformation mindset for a joint sustainable vision of the future. In this section, we discuss how the negative impacts discovered during our analysis could be reduced before AI becomes more widespread across multiple application domains, which, in turn, may also affect the UN’s Sustainable Development Goals [36].
Different categories of values—such as individual or personal values, object values, environmental values, professional or work values, national values, group values, and societal values [88]—underlie individual and organizational behaviors. These values are “determinants of virtually all kinds of behavior that could be called social behavior or social action, attitudes and ideology, evaluations, moral judgments and justifications of self to others, and attempts to influence others” [89] (p. 5). AI-enabled applications depend on how humans train them, and value conflicts may arise during training because different stakeholders are involved. Such conflicts of values may harm society instead of benefiting it. Therefore, for society to benefit from AI, one way of reducing these negative impacts within the sustainability dimensions is to align the values of all stakeholders during the design of AI-enabled applications so that the applications’ goals and behaviors reflect human values. For example, AI-based applications designed to support law enforcement require datasets derived from the law, imposed by the national values of the government. In such scenarios, the designers or trainers should not mix their own self-oriented (or egocentric) values and other-oriented (or disinterested) values [90] with the national values, but should train the AI to align solely with the national values. With such alignment in place, AI systems would pose less of a threat to humanity and would remain strictly machines for solving the tasks assigned to them. However, a framework for characterizing and organizing value systems that could help align the values of each stakeholder is still missing.
On the other hand, we live in a world of limited resources—including time, energy, and money—and of great transitions [91]. In this context, nations and organizations compete to design AI-enabled systems in order to gain power and influence over others [92]. Such a desire for power could help one nation achieve its long-term sustainability goals while causing others to lose the three “pillars,” i.e., economic development, environmental protection, and social progress/equity [91]. Therefore, for society to benefit from AI, it is essential for all stakeholders—technological designers, application developers, researchers, users (business and consumer), and governments—to collaborate and share responsibilities rather than seek influence over others. There are different ways in which stakeholders could share the limited resources and turn the great transitions into action. The first is an “open, inclusive, and continuing global dialogue about what ‘the good life’ should look like, how to live it, and the values, attitudes and behaviors, both individual and collective, that will support it” [91] (p. 41).
The second is a proposal to update current strategies and policies at the organizational, national, and global levels, which could improve the effect of AI on the five dimensions of sustainability. In this case, new strategies and policies should start at the national level, involving stakeholders such as citizens, civil society groups, the news media, and corporations. These updates may entail a significant expenditure of resources but will create stronger national-level policies in accordance with ethics, values, paradigms, and the Sustainable Development Goals of the United Nations. As Frieden et al. [93] state, “National policies, especially of large countries, affect the international economy in important ways” (p. 27). One example is the initial step taken by the German government, which has released a strategy paper on the cornerstones of the federal government’s strategy for AI. It states, “Usable, high-quality data must be significantly increased without violating personal rights, the right to informational self-determination or other fundamental rights. Data from the public sector and science are increasingly being opened up for AI research, enabling their economic and public benefit use in the sense of an open-data strategy” [94] (p. 1). The paper lists 13 goals, starting with establishing an “Artificial Intelligence made in Germany” seal of quality and ending with the commitment to adhere to the recommendations of the Commission on Data Ethics [94] (p. 2). However, policies for fulfilling these strategies have yet to be created. Similarly, all European Union (EU) members have signed a Declaration of Cooperation on AI to put forward a European approach to artificial intelligence based on three pillars [95]. These pillars are: being at the forefront of technological developments and encouraging their uptake by the public and private sectors; preparing for socio-economic changes brought about by AI; and ensuring an appropriate ethical and legal framework. The formation of these pillars is a crucial initial step toward the implementation of AI by EU nations. As Papadopoulos et al. [41] point out, exploring and decoding the relevant contagion mechanisms is a major way to prevent the spread of economic crises. Therefore, in the future, to reduce the global impact of AI, there should be a declaration of cooperation between nations at the global level to prepare shared standards of practice for the global socio-economic shift as it occurs in both production and service outsourcing, without competitive advantage in mind.
Likewise, as indicated by the related work referred to in the background section and by our sustainability analysis, ethics is a major consideration in making sure AI contributes to what we want, without imposing serious humanistic, social, and legal concerns. To this end, guidance from a proper code of ethics is needed. However, developing such a ‘proper’ code is a significant challenge. For example, the current version of the ACM Code of Ethics and Professional Conduct created a debate, specifically about principle 1.2, Avoid Harm, which now reads, “In this document, ‘harm’ means negative consequences, especially when those consequences are significant and unjust. Examples of harm include unjustified physical or mental injury, unjustified destruction or disclosure of information, and unjustified damage to property, reputation, and the environment,” and then proceeds to request ethical justifications for exceptions. The question is when harm is “ethically justified” and who makes that decision, and on what basis, if the Code of Ethics provides no guidance on the matter. Moreover, the advice that, “to minimize the possibility of indirectly or unintentionally harming others, computing professionals should follow generally accepted best practices unless there is a compelling ethical reason to do otherwise,” gives little concrete direction, instead circling back to already accepted practices and a call to think ethically. Other professional associations have produced similar efforts, but they have had similar struggles in phrasing effective guidance; for example, the German Society for Informatics (Gesellschaft für Informatik) put out a principle of social responsibility, holding engineers accountable for the social and societal impacts of their technological work, but did not mention harm of any kind [96]. Taking this further, the Future of Computing Academy (FCA), part of the ACM, calls for researchers to consider the negative societal consequences of their work and to make these a part of their peer-reviewed publications [97]. Specifically, AI may carry the values of the humans who coded it, either through how an algorithm is designed (the choices that are prescribed, e.g., if x equals z then do y) or through the training data supplied to a neural network, which also has choices encoded in it. Consequently, further research is required along the lines of values in software engineering. We conclude that teaching students about their responsibility for the long-term potential impact of their work and about applying their code of ethics is crucial.
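To make the point about encoded choices concrete, the following minimal sketch (our own hypothetical example, not drawn from the cited works) shows how a developer’s value judgments end up frozen inside an ostensibly neutral decision rule: both the numeric threshold and the very selection of input features are human choices.

```python
# Hypothetical illustration: a simple loan pre-screening rule.
# Every threshold and every feature the author chooses to include
# (or omit) encodes a value judgment made by a human developer,
# not by the "AI" itself.

def approve_loan(income: float, debt: float, years_employed: int) -> bool:
    """Return True if the applicant passes the pre-screen."""
    debt_ratio = debt / income if income > 0 else float("inf")
    # Choice 1: why a cutoff of 0.4? It reflects the designer's
    # appetite for risk, a value judgment frozen into code.
    if debt_ratio > 0.4:
        return False
    # Choice 2: requiring job tenure may disadvantage younger
    # applicants -- a fairness trade-off decided by a human.
    return years_employed >= 2

print(approve_loan(income=50_000, debt=10_000, years_employed=3))  # True
print(approve_loan(income=50_000, debt=30_000, years_employed=3))  # False
```

The same reasoning applies to training data: which examples are collected, and how they are labeled, are likewise human choices that the resulting model inherits.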
To sum up, values, collaboration, shared responsibility, and ethics are important measures that all stakeholders should take into consideration to reduce the negative impact of AI on sustainability. If these measures are taken into consideration, there is a possibility that, “No matter how clever or artificially intelligent computers get, and no matter how much they help us advance, they will always be strictly machines and we will be strictly humans” [98] (p. 59).
In this paper, we reviewed the potential long-term impacts of AI on sustainability and, more specifically, on sustainable development by performing a sustainability analysis [14]. We explored the impacts of AI by using the five analysis categories—individual, social, economic, technical, and environmental—reviewing the current scientific literature on AI in each of the fields and iterating the analysis with a focus group of experts.
Our main findings on how AI may impact sustainable development are as follows: On the economic level, AI is already a major industry and can displace low-skilled workers. On the technical level, as it advances, AI may learn how to code itself, disrupting jobs in the Information Technology (IT) industry. On the environmental level, AI can improve waste and pollution management but can also negatively impact sustainability through power and resource consumption. On the individual level, AI may impact work, empower users with agents, and affect interactions or social isolation. Finally, on the social level, AI can take a minor role in assisting communities, managing social media, automating routine tasks that are commonly outsourced, and participating in digital storytelling. It may seem peculiar that a main outcome of our sustainability analysis is that AI can have both positive and negative impacts on all five dimensions, but this makes sense because AI is a means, not an end. AI is a tool, and as such it can be used for good or bad; it is up to the developers, as well as all stakeholders involved, to make sound ethical decisions based on values commonly shared among citizens for the joint vision of a sustainable and resilient future.
The key findings of the present study are beneficial for all stakeholders, such as citizens, researchers, companies, application developers, and governmental organizations that have both a direct and indirect influence on the implementation, adoption, and regulation of AI. The presented sustainability analysis diagram can be used as a tool to understand both the positive and negative impact of ANI, ASI, and AGI on the five dimensions of sustainability. For future work, we envision several more detailed studies:
AI Application Domains: A more in-depth sustainability analysis should be performed for several application domains of AI whereby an analysis of the three orders of effect (life cycle, enabling, and structural) is included.
Ethics and transparency of AI: An interdisciplinary analysis that considers the transparency and ethical aspects of AI should be performed in a joint effort by behavioral psychologists, philosophers of science, psychologists, and computer scientists.
Responsibility &amp; accountability for AI: A qualitative analysis should be conducted on how much citizens are willing to give up their freedom of choice and let AI make somewhat optimized decisions for them, how much operators are willing to pass their responsibility on to AI, and how much developers are willing to be held accountable in case something fails, along with how to allow for and ensure transparency; and
Perceptions of AI: A larger-scale empirical analysis should be carried out on individuals’ perceptions in diverse stakeholder roles toward having AI integrated in society on several levels of technological intervention, e.g., as small-scale personal assistants, as substitute teachers, nurses, and doctors, or as decision support systems for governments and legislation.