
Acknowledging Sustainability in the Framework of Ethical Certification for AI

Center for Science and Thought, University of Bonn, Poppelsdorfer Allee 28, 53115 Bonn, Germany
Authors to whom correspondence should be addressed.
Sustainability 2022, 14(7), 4157;
Received: 21 February 2022 / Revised: 15 March 2022 / Accepted: 30 March 2022 / Published: 31 March 2022


In the past few years, many stakeholders have begun to develop ethical and trustworthiness certification for AI applications. This study discusses the philosophical arguments for including sustainability, in its different forms, among the audit areas of ethical AI certification. We demonstrate how sustainability might be included in two different types of ethical impact assessment: an assessment certifying the fulfillment of minimum ethical requirements and what we describe as a nuanced assessment. The paper focuses on the European, and especially the German, context and the development of certification for AI.

1. Introduction

Due to growing concerns about ethical, legal, and social issues around AI systems, over the past few years, both private corporations and public institutions have started developing quality and trustworthiness certification for AI [1] (p. 26 f.). In the EU, in April 2021, a proposal for an “Artificial Intelligence Act” was published, which foresees “standards, conformity assessment, certificates [and] registration” as a means to deal with “high-risk AI systems” [2] (Chapter 5, Art. 6). Although the route to a standardized and generally accepted certification is still a long one, several actors have laid the groundwork for the development of an assessment of what constitutes trustworthy AI. A High-Level Expert Group (HLEG) on Artificial Intelligence appointed by the European Commission indicated four ethical principles for AI based on fundamental rights and seven key AI requirements [3]. The ethical principles are respect for human autonomy, the prevention of harm, fairness, and explicability. The key requirements are supporting human agency and oversight; technical robustness and safety; respecting privacy and allowing good data governance; transparency; guaranteeing diversity, non-discrimination, and fairness; improving societal and environmental well-being; and accountability for the outcomes of AI systems [3,4]. Part of the “societal and environmental well-being” requirement is the need for a “sustainable and environmentally friendly AI” [3] (p. 30). In 2018, the European Group on Ethics in Science and New Technologies (EGE) had already identified sustainability as one of nine “ethical principles and democratic prerequisites” for a “shared Ethical Framework for Artificial Intelligence, Robotics and ‘Autonomous’ Systems”, alongside human dignity, autonomy, responsibility, justice, equity and solidarity, democracy, the rule of law and accountability, security, safety, bodily and mental integrity, and data protection and privacy.
In parallel with the work of the HLEG, many stakeholders published guidelines for the development of trustworthy AI systems, identifying, among other things, the ethical and technical minimum requirements that should be considered in their development and audited using an AI certification [5,6,7,8]. To name but a few: in its report, the German Data Ethics Commission listed the indispensable ethical and legal principles that should guide the development of AI systems and their regulation, these being human dignity, self-determination, privacy, safety, democracy, justice and solidarity, and sustainability [9]. The platform “Lernende Systeme” indicated the following minimum requirements for AI certification: transparency, traceability, verifiability, and accountability; product safety and reliability; avoidance of unintended consequences on other systems, people, and the environment; justice in the sense of equality and non-discrimination; privacy and personal rights protection; and allowing human self-determination by guaranteeing transparency about the use of the AI system and the role of the human being in the decision-making process [10]. Similarly, in a white paper by Fraunhofer IAIS, in cooperation with the Universities of Bonn and Cologne, Cremers et al. defined the following minimum requirements as a basis for an audit catalog: respect for social values and laws, human autonomy and control, fairness, transparency, reliability, security, and data protection [11]. An initial suggestion on how to put these requirements into actual practice has been detailed in an inspection catalog [12]. In its “Standardization Roadmap for AI”, the German Institute for Standardization (DIN) refers to these requirements as quality criteria for AI products, also noting the contributions by the Data Ethics Commission and the platform “Lernende Systeme” [13].
Remarkably, only the Data Ethics Commission, whose report does not directly focus on the development of certification, indicates that sustainability is a basic principle. The white paper by Fraunhofer IAIS et al. does not mention sustainability as an audit area for AI certification, and the platform “Lernende Systeme” states that sustainability might be considered an additional, optional requirement for a “Certification plus”—but not a minimum requirement [10] (p. 25).
In contrast to the position put forward by “Lernende Systeme”, we will argue that assessing sustainability should be a key part of any ethical certification for AI. Since this is a philosophical paper, we deploy the method of conceptual analysis. Addressing the three dimensions of sustainability, in Section 2, we briefly review some of the major issues that AI systems present when considering the environmental, economic, and social impact of system development and use. Starting from the idea of ethical behavior as embodying just and responsible behavior toward other human and non-human beings, in Section 3, we show that sustainability is at root an ethical issue, since it involves responsibility toward other human beings and the environment, and is required to guarantee international, intergenerational, and interspecies justice. Based on this, in Section 4, we highlight the relevance of a sustainability audit in the context of ethical certification for AI and suggest two audit methods that could be used in the certification process: a “minimum requirements” checklist demanding the fulfillment of specific prerequisites, and a “nuanced assessment” attributing a score to evaluate the performance of a system in a given audit area. In conclusion, we call on the stakeholders responsible for developing ethical AI certification to implement sustainability as an audit area.

2. AI and the Three Dimensions of Sustainability

Sustainability is defined differently by different actors depending on their aims and fields of interest. One famous definition of sustainability, or, more precisely, of “sustainable development”, is often quoted from the Brundtland Report, also known as “Our Common Future”: “Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs” [14] (p. 41). Regarding AI, it has become clear that sustainability concerns all three so-called “pillars of sustainability” [5] (p. 395), [15], namely, the environmental, economic, and social dimensions. “Environmental sustainability” generally refers to the impact of our actions on planet Earth. To be environmentally sustainable, human development should identify planetary boundaries that must not be transgressed and work to prevent unacceptable environmental change [16]. “Economic sustainability” refers to “practices that support long-term economic growth without negatively impacting [the] social, environmental, and cultural aspects of the community” [17]. Finally, “social sustainability” includes, among other things, “achieving a fair degree of social homogeneity, equitable income distribution, employment that allows the creation of decent livelihoods, and equitable access to resources and social services” [18], as well as “[encouraging] communities to promote social interaction and [fostering] community investment while respecting social diversity” [19]. In the following, we list some of the major issues concerning AI in these three domains.
(1) Environmental Sustainability. This dimension concerns not only the CO2 emissions caused by the electricity needed for computing operations but also the whole life cycle of products. This includes—but is not limited to—the production of the very hardware needed to run AI and software in general, using, for instance, plastics, metals, and raw materials. Considering the whole life cycle also includes recycling and re-use processes (e.g., addressing the premature obsolescence of hardware and software by designing them in a “technically sustainable” way [20] or unifying electric chargers for smartphones to reduce attendant electronic waste [21,22]). The complex issues related to environmental sustainability can be illustrated by the example of electric vehicles and the common narrative that asserts that they will solve many problems related to CO2 emissions. This narrative might be somewhat misleading, since it does not take into account the other environmental costs involved in the production of electric vehicles [23]. Indeed, to assess the sustainability of a product, its whole “material footprint” should be considered [24]. In addition, the rhetoric about “cloud computing” contributes to cloaking the fact that physical computers are performing the computing operations. All of this informs the broader claim that the use of AI technologies should not only prevent negative outcomes for the environment but also, in a broader sense, be “favorable to the environment” [25].
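The operational share of this footprint, i.e., the CO2 emitted for the electricity consumed while a system is running, can at least be estimated with a rough back-of-the-envelope calculation. The following sketch illustrates one common approach; the function name and all figures are hypothetical assumptions, and the life-cycle costs discussed above (hardware production, recycling, etc.) are deliberately out of scope:

```python
def training_emissions_kg(avg_power_kw: float, hours: float,
                          pue: float, grid_kg_co2_per_kwh: float) -> float:
    """Estimate the operational CO2 emissions of a compute workload.

    Grid energy drawn = IT power * runtime * data-center PUE overhead;
    emissions = energy * carbon intensity of the local electricity mix.
    Hardware production and end-of-life impacts are NOT covered here.
    """
    energy_kwh = avg_power_kw * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh


# Hypothetical training run: 8 accelerators at ~0.3 kW each for 72 h,
# a PUE of 1.5, and a grid intensity of 0.4 kg CO2 per kWh.
emissions = training_emissions_kg(8 * 0.3, 72, 1.5, 0.4)
```

Such an estimate makes plain that the same workload can have a very different footprint depending on where and when it runs, which is exactly the kind of use-case-specific fact an environmental audit would need to record.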
(2) Economic Sustainability. As with environmental sustainability, many economic sustainability issues result from the very hardware production process for AI systems. Indeed, a new type of colonialist exploitation of those populations who live near raw material extraction sites can be observed. Returning to the already noted case of electric vehicles, the production of car batteries raises serious ethical issues, e.g., when the cobalt mining for the batteries is done by children (and adults) working in conditions of slavery and without adequate safety measures [26,27,28]. Likewise, extracting and processing the materials needed to build hardware raises important sustainability issues concerning health, working conditions, and the environmental and resource exploitation of many populations in developing countries, directly contributing to inequality between the global north and global south. To address this problem, it has been claimed that we need a decolonizing engagement to fight institutionalized oppression [29].
(3) Social Sustainability. Regarding the real-world applications of AI systems, biased data and the lack of a diversity-oriented perspective in the design, development, testing, launch, and post-marketing phases of a given product might lead companies to release software that discriminates against minorities and vulnerable social groups, replicating racist and sexist biases in application fields such as risk scoring in justice [30] or credit scoring for mortgage lending [31]. These algorithmic-fairness issues directly affect the social sustainability of a product, since they facilitate the spread of inequality and social conflicts, ultimately compromising societal well-being.
To address social, economic, and environmental fairness at large, several approaches show how “AI for social good” [32] and “AI for sustainability” [15] could be used to foster the achievement of the UN’s Sustainable Development Goals through value-sensitive design [33], aiming, among other things, at “reducing inequality within and among countries” (Goal 10) and achieving gender equality (Goal 5) [34,35]. These three dimensions of sustainability lay the groundwork for the investigation of sustainability as an ethical topic and should, therefore, be considered in the ultimate assessment of the ethical implications of trustworthy AI. It should be remarked that these dimensions are tightly interwoven in actual real-world scenarios. The sustainability assessment in the framework of a given certification should, therefore, focus on the concrete sustainability issues pertaining to specific use cases, and these issues might encompass several dimensions at the same time.

3. Sustainability and the Ethics of Responsibility

Today, we can understand that our economic behavior has material consequences on a global scale. In 2020, this became particularly evident with the shortage of many products during the first lockdowns of the coronavirus pandemic, which highlighted how heavily many societies depend on outsourced work in the globalized world economy [36,37]. How goods are designed and produced, what goods we purchase, how long we use them for, and how we dispose of them can positively or negatively affect the environment and other humans—and their rights—around the world, even though the causal connections might be neither straightforward nor directly visible. At the same time, the increasing number of catastrophic climatic events over the past few years shows the impact that human behavior, and especially mass consumption, has on our environment in a way that can no longer be ignored.
Even though the negative consequences of individual behaviors felt on a global scale might not be caused intentionally and might “just” be the result of shortsighted and profit-oriented conduct, sustainability issues raise questions of (in)justice. Justice, in a philosophical sense at least, can be understood as respect for others, as the struggle to ensure equal rights and preserve human dignity, and as the will not to harm others through violence or subjugation. In moral philosophy, it is generally considered unjust and wrong to conceive of and treat others as a mere means to one’s own ends, without seeing them at the same time as ends in themselves [38] (p. 428); to reduce otherness to the totality of one’s own limited and limiting representation of it, ignoring the fact that the other infinitely exceeds this representation because of their own complexity and freedom [39] (Chapters I.C. and III.B.); or to deny recognition of their identity, values, and rights [40]. Behaviors that are unsustainable on an environmental and/or social level evidence this lack of consideration toward others. For example, the cost of pollution is distributed unfairly: only relatively few enjoy the benefits of polluting production processes and activities, but everyone in the world is, in some way and to some degree, affected by them [41,42]. In this sense, those polluting the most overlook other people’s needs, suffering, and discomfort and focus solely on their own advantages. Similarly, exploited workers in countries to which production is outsourced are treated merely as means if no thought is given to their working conditions. Accordingly, in a global ethical framework, the undeniable evidence of the impact of consumers’ and producers’ actions makes them not only causally co-responsible for climatic and humanitarian disasters but also morally accountable for the injustice caused by their economic behavior.
Ignoring the consequences of one’s actions implies ignoring a central capacity of modern humanity: to plan one’s actions and to assess the possible consequences and future risks for other human beings and their environment. This is also stressed in the above-mentioned Brundtland Report: “Humanity has the ability to make development sustainable to ensure that it meets the needs of the present without compromising the ability of future generations to meet their own needs” [14] (p. 16). In 1979, Hans Jonas highlighted that “new” challenges come with the development of “modern” technology: previous generations had neither the knowledge nor the power to take the potential future outcomes of their immediate actions into account. Acting ethically was, therefore, synchronous: it was considered to affect only the humans directly surrounding the actor, and responsibility was backward-looking [43,44]. However, even if current developments look more complex from today’s point of view, human action itself has long been thought to produce unforeseeable outcomes, as, for instance, Hannah Arendt argues. According to her, this has had at least three consequences: (a) in politics, people tried to “substitute making for acting” and behavior for action in order to control other humans; (b) the “power to forgive” was suggested as a remedy for the “irreversibility” of one’s actions; and (c) the “power of promise” was supposed to deal with the unpredictability of actions [45] (p. 220 ff.). This forward-looking aspect of responsibility thus complements the retrospective liability for one’s actions. For Hannah Arendt, the limits of human responsibility are related to human “plurality”, the fact that all humans are born into a world that has already been inhabited and shaped by other human beings [45] (p. 234).
Despite its limitations, the assessment of the long-term consequences of unsustainable behavior should, according to Jonas and the Brundtland Report, include consideration of intergenerational justice: the enjoyment of goods today can harm and limit the freedom of the next generations, both those who have already been born and those who will inhabit Earth in the future [42,46,47]. This, too, is unfair behavior, since it distributes the environmental and social costs of the actions of a few unequally [48]. Hans Jonas sees the reason for this also in the anthropological argument that he adds to the temporal dimensions mentioned above. According to him, we should think not only about the human beings of future generations but also about humankind as a whole and whether we consider it desirable that humans keep on living (on Earth). Destroying our planet might mean destroying humanity’s habitat and, hence, stripping future generations of the chance to exist at all:
“[…] we are, strictly speaking, not responsible to the future human individuals but to the idea of Man, which is such that it demands the presence of its embodiment in the world. […] It is this ontological imperative, emanating from the idea of Man, that stands behind the prohibition of a va-banque gamble with mankind. Only the idea of Man, by telling us why there should be men, tells us also how they should be” [43] (p. 43).
This becomes even more relevant when we consider that, since the twentieth century, humans have been able to destroy not only what they have made, as they have always been able to do, but also, with the invention of the atomic bomb, what they have not made: nature, all species, and the whole planet [45] (p. 3), [49] (p. 6). Likewise, climate change can be seen as a slow process of destroying life forms and things that humans did not create. In addition to taking future generations into account when reflecting on the possible impact of our behavior, we therefore also need to consider its consequences for other species, especially given the complex relationships between biodiversity, nutrition, habitat conservation, etc. [50]. What is even more striking is that, for decades now, the global north has had more than it needs while the global south is—still—being exploited. Some human beings are starving while others waste food, water, and other resources. By virtue of our duty to our fellow human beings, the environment, and future generations, sustainability as an aware and responsible practice should nowadays be a top ethical priority for a globalized society. The evidence that unsustainable behavior results in harm and injustice, and is therefore unethical, can no longer be ignored. That we can do, invent, or develop something is not a sufficient argument that we should do it—as, for instance, the so-called technological imperative [49] (p. 7) or Silicon Valley’s mantra “Move fast and break things” would have it [51] (p. 60). Instead, we need regulations to guide businesses in the sustainable development of new technologies, and instruments to empower consumers to make responsible choices.

4. Sustainability as an Audit Area for an Ethical Certification of AI

We argue that sustainability, as an ethical issue, should be considered when certifying ethical and trustworthy AI. More specifically, auditing the environmental, economic, and social sustainability of an AI system should be one of the core requirements of an ethical assessment, and not just an option [10] (p. 28). Moreover, sustainability as a core requirement can be seen as matching at least two of the abovementioned requirements for the development of trustworthy AI, namely “Diversity, Non-Discrimination, Fairness,” and “Societal and Environmental Well-Being” [4] (pp. 15–20).
The first step in assessing and rating the fulfillment of an ethical requirement is to identify concrete ethical risks that are specific to a particular field of application through expert and stakeholder consultation. In the European context, this is considered an essential procedure in proposals for the development of an ethical impact assessment, among others by the CEN Workshop Agreement for an Ethical Impact Assessment Framework [52]. This allows the translation into practice of ethical goals that would otherwise remain abstract and, therefore, non-auditable. For instance, the ethical implications of a creditworthiness-scoring algorithm and of an AI-powered customer-assistance chatbot differ when it comes to ensuring social fairness. In the first case, the algorithm should not (directly or indirectly) discriminate against people based on ethnicity, gender, nationality, or any other category by attributing a lower score to an individual belonging to a specific group than to other individuals with a similar profile. In the second case, the algorithm should not output offensive language and should not discriminate against any social group by perpetuating racist, sexist, homophobic, or other stereotypes. Moreover, if the chatbot uses voice recognition, people with non-native or regional accents and people with speech impairments should be able to communicate with the machine in the same way as people whose pronunciation is considered “standard”.
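To illustrate how such an abstract fairness goal can become an auditable check, consider a minimal sketch for the credit-scoring case. The function, the group labels, and the tolerance value are hypothetical assumptions for illustration, not part of any published audit catalog:

```python
from statistics import mean


def score_gap(scores_by_group: dict[str, list[float]]) -> float:
    """Largest difference in mean score between any two groups of
    applicants whose profiles are otherwise comparable."""
    group_means = [mean(scores) for scores in scores_by_group.values()]
    return max(group_means) - min(group_means)


# Hypothetical scores assigned to comparable applicant profiles.
gap = score_gap({
    "group_a": [0.72, 0.70, 0.74],
    "group_b": [0.64, 0.66, 0.62],
})
TOLERANCE = 0.05          # acceptability threshold agreed by the audit board
discriminates = gap > TOLERANCE
```

A real audit would, of course, control for legitimate explanatory variables and rely on established fairness metrics; the point here is only that “do not discriminate” becomes testable once a measurement and a tolerance have been fixed for the specific use case.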
Once the concrete risks concerning sustainability and other ethical issues of AI products in a specific field of application have been identified, an effective way to operationalize the results of the risk assessment in the framework of an audit process, and to state the ethical acceptability of an AI system in a particular use case, is to set specific minimum requirements that must be met to avoid unethical consequences, such as, in the case of sustainability, the unnecessary waste of resources or social discrimination. In the German framework, the “minimum requirements approach” is suggested as a basis for the certification of AI systems by different developers [10,11]. Alongside the fulfillment of minimum requirements, a nuanced assessment might be a useful resource to audit the abovementioned sustainability issues. The reason is that, once a threshold for ethical acceptability has been determined, different software may perform differently within the acceptability domain. The specific function of this more fine-grained level of audit is, therefore, to provide different stakeholders, such as producers, consumers, and governments, with a common tool to compare similar products. Nevertheless, if a minimum requirement is not satisfied, the product should be classified as unsustainable, irrespective of how well it performs in the nuanced assessment of other features.
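The interplay of the two audit levels can be sketched as follows; the requirement names, audit areas, and the 0–100 scoring scale are illustrative assumptions:

```python
def audit(min_requirements: dict[str, bool],
          area_scores: dict[str, float]) -> dict:
    """Two-stage audit logic: a single failed minimum requirement rules
    the product out regardless of its scores; only if all requirements
    are met are the nuanced per-area scores (0-100) reported so that
    similar products can be compared."""
    failed = [name for name, met in min_requirements.items() if not met]
    if failed:
        return {"certifiable": False, "failed_requirements": failed}
    return {"certifiable": True, "scores": area_scores}


result = audit(
    {"no_group_discrimination": True, "resource_waste_avoided": True},
    {"environmental": 78.0, "economic": 65.0, "social": 82.0},
)
```

Keeping the gate and the scores separate mirrors the argument above: the nuanced scores let stakeholders compare acceptable products, but they can never compensate for an unmet minimum requirement.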
Sustainability, therefore, could and should have a direct impact on an ethical assessment in at least two ways. First, adding environmental, economic, and social sustainability to the minimum ethical requirements of an AI application, in the form of concrete, domain-specific goals to be fulfilled, will prevent unsustainable products from being certified as ethical in the first place. As the CEN Workshop Agreement suggests, in light of the complexity of the process, the definition of the threshold criteria to be met should be carried out by a multidisciplinary board of experts and stakeholders [52] (pp. 19–21). Indeed, concerning sustainability, the definition of specific threshold values might be particularly difficult in those cases in which an integrated consideration of different dimensions of sustainability is required. Second, in the case of a nuanced assessment, the attribution of an audit-area-specific score showing the (expected) performance of a product in the domains of environmental, economic, and social sustainability would affect the choice of those consumers valuing sustainability and would increase developers’ attention toward these audit areas. However, it should be remarked that such metrics, as accurate as they may be, are just proxies and should not be mistaken for sustainability as a moral and societal goal. Indeed, this goal might be missed if businesses focus excessively on quantitative proxy measures [53]—e.g., by neglecting other important issues, by misallocating funds or, in the worst case, by cheating.
Nuanced assessments already exist in the field of environmental sustainability, for example in assessing the energy performance of household appliances, and there is already at least one attempt to produce a similar “Care Label” certification suite for machine learning, labeling not only energy consumption but also other features such as runtime, memory usage, expressivity, usability, and the reliability of the AI software [54,55]. This kind of assessment would be an excellent tool to audit the mentioned features separately and to optimize AI systems toward a better balance of performance across the different audit areas. To accomplish this, the assessment could be embedded into a more general ethical framework. An attempt to unify aspects of the three pillars of sustainability (environment, society, and economy) in a single, comprehensive sustainability index is being carried out by the project “SustAIn”, funded by the German Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection. The SustAIn team has proposed sets of criteria, indicators, and operationalizable sub-indicators for the evaluation of AI systems’ sustainability [56] (pp. 57–64). Among the sub-indicators for social sustainability, they list the level of discrimination potential based on an impact assessment, the proportion of a company’s AI systems that use methods to measure fairness and bias, and the diversity of the developer team, measured by the represented percentages of gender, age groups, and ethnic groups. Among the sub-indicators for economic sustainability, the evaluation of working conditions throughout the entire production chain is mentioned.
Finally, environmental sustainability sub-indicators include, among others, the measurement of energy consumption and direct CO2 emissions, the percentage of recycled material in the hardware, and the recommendation of environmentally sustainable alternatives by automated decision making (ADM) and recommender systems. Because of their operationalizability, such sustainability sub-indicators could be easily integrated into a certification process either in the definition of the minimum requirements or as relevant indicators for a nuanced assessment. In the first case, a use-case-specific threshold value should be agreed on for the chosen sub-indicator. Not exceeding (or falling below, according to the case) the threshold value should then be taken as a minimum requirement. In the second case, the level of performance above the acceptance threshold of a given AI system could be showcased and rated. We suggest that this kind of integration is essential for the full development of a comprehensive ethical assessment.
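A single sub-indicator could serve both roles, as a pass/fail minimum requirement and as a rated performance, roughly as follows. The direction flag, the aspirational target, and all numbers are assumptions for illustration:

```python
def meets_threshold(value: float, threshold: float,
                    lower_is_better: bool) -> bool:
    """Minimum-requirement use of a sub-indicator: the measured value
    must not exceed (or must not fall below) the agreed threshold."""
    return value <= threshold if lower_is_better else value >= threshold


def rated_performance(value: float, threshold: float, target: float,
                      lower_is_better: bool) -> float:
    """Nuanced-assessment use: rate performance beyond the threshold on
    a 0-1 scale, where `target` is an aspirational best value."""
    if not meets_threshold(value, threshold, lower_is_better):
        return 0.0
    span = abs(target - threshold)
    if span == 0:
        return 1.0
    return min(1.0, abs(value - threshold) / span)


# Hypothetical direct-CO2 sub-indicator: at most 100 t allowed,
# with 20 t set as the aspirational target.
assert meets_threshold(80.0, 100.0, lower_is_better=True)
score = rated_performance(80.0, 100.0, 20.0, lower_is_better=True)  # 0.25
```

The same pattern applies symmetrically to sub-indicators where higher values are better, such as the percentage of recycled material in the hardware.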
It should be stressed that, due to the constant evolution of society and technology, no single certification can guarantee conformity with ethical standards indefinitely. On the one hand, concrete ethical requirements will need to be continually adapted to absorb new research findings from different disciplines and societal dynamics [57]. On the other hand, the development of more advanced technology will create new application scenarios and new ethical challenges. This problem directly concerns the so-called “Collingridge dilemma”, according to which the impact of a new technology cannot reliably be predicted until the technology is fully developed and deployed, while, by that time, it may already be too entrenched to be easily controlled or changed [58]. Therefore, ethical assessments should have a de facto expiry date, and the continued conformity of a product with the ethical standards of society should be reassessed periodically.
Certifying that AI systems comply with periodically updated ethical standards would allow us to acknowledge the achievement of increasingly challenging sustainability and other ethical goals. Indeed, the advancement of technology should be valued not only from a technical point of view but also from a moral perspective. Technology should help translate into reality those values that ethical reflection recognizes as indispensable for future life and well-being in society, such as respecting human rights, protecting the environment, and distributing resources and opportunities fairly. These values should not remain abstract, and it is possible to measure their gradual achievement. The larger our CO2 emissions, raw material exploitation, and waste production, the further society will be from climate justice; the more widespread exploitative production practices are and the more minorities are discriminated against, the further society will be from global justice. These trends are reversible. Striving to achieve ethical goals through the improvement of technology and its regulation can be defined as moral progress [59]. An ethical certification aims to foster moral progress by providing consumers and producers with a clear assessment of a product’s compliance with these ethical goals.

5. Conclusions

A global ethics of responsibility needs a broad picture of the moral community [60] (p. 119). Human beings should be considered moral actors, while the group addressed by moral actions should be even broader than humanity, taking into account the environment at large. It is possible to outline different, coexisting dimensions of ethical responsibility. First, there is an international, global dimension: people all around the world might bear the consequences of our actions, and moral actors should take this into account. Second, intergenerational justice should be considered, since future generations will be affected by our current actions and decisions. Finally, especially when considering environmental sustainability, we should address an interspecies dimension: in a globalized society, consumerism is affecting lives and ecosystems all around the world. A rising number of environmentalists claim that respect for life and dignity should not be granted only to humans: destroying ecosystems causes suffering and impoverishes the quality of life of those living beings who survive, directly impacting their freedom and dignity [60] (p. 111), as well as all species’ livelihoods.
Fortunately, moral awareness of the ecological and social impacts of globalization and consumerism is rising fast, and the urgency of achieving the UN’s sustainability goals calls for new, institutionalized tools to motivate people to act ethically and to treat fellow humans, other species, and our planet with respect. Together with sustainability-oriented regulations, a certification for AI software could be effective in motivating consumers to use sustainable products and to seek further information about the impact of the products they are using. Furthermore, if the fulfillment of the certified minimum ethical and technical requirements for commercializing a product is made mandatory by law, governments could use certifications to ensure that the AI systems in circulation are sustainable. None of this can be achieved through lip service. While the European Commission is beating an important path, notably by fostering Responsible Research and Innovation (RRI) [61], ethical and conduct codes for enterprises, such as the Corporate Digital Responsibility Initiative [62], or strategies for Corporate Social Responsibility (CSR) and Responsible Business Conduct (RBC) [63], are not enough in themselves. We need action.

Author Contributions

The authors contributed equally to this paper. All authors have read and agreed to the published version of the manuscript.


Funding

Our research is funded by MWIDE NRW in the framework of the project “Zertifizierte KI” (“Certified AI”); funding number 005-2011-0050.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.


References
  1. DKE/DIN. Ethik und Künstliche Intelligenz. Was Können Technische Normen und Standards Leisten? DIN: Berlin, Germany, 2020.
  2. European Commission. Proposal for a Regulation of the European Parliament and of the Council Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts; European Commission: Brussels, Belgium, 2021.
  3. HLEG on AI. Ethics Guidelines for Trustworthy AI. 8 April 2019. Available online: (accessed on 29 March 2022).
  4. HLEG on AI. The Assessment List for Trustworthy Artificial Intelligence. 17 July 2020. Available online: (accessed on 29 March 2022).
  5. Jobin, A.; Ienca, M.; Vayena, E. The global landscape of AI ethics guidelines. Nat. Mach. Intell. 2019, 1, 389–399.
  6. Hagendorff, T. The Ethics of AI Ethics: An Evaluation of Guidelines. Minds Mach. 2020, 30, 99–120.
  7. Algorithm Watch. AI Ethics Guidelines Global Inventory. Available online: (accessed on 29 March 2022).
  8. Zicari, R.V.; Brodersen, J.; Brusseau, J.; Düdder, B.; Eichhorn, T.; Ivanov, T.; Kararigas, G.; Kringen, P.; McCullough, M.; Möslein, F.; et al. Z-Inspection®: A Process to Assess Trustworthy AI. IEEE Trans. Technol. Soc. 2021, 2, 83–97.
  9. Datenethikkommission. Gutachten der Datenethikkommission. 2019. Available online: (accessed on 29 March 2022).
  10. Heesen, J.; Müller-Quade, J.; Wrobel, S. Zertifizierung von KI-Systemen—Kompass für die Entwicklung und Anwendung Vertrauenswürdiger KI-Systeme; Lernende Systeme: München, Germany, 2020.
  11. Cremers, A.; Englander, A.; Gabriel, M.; Hecker, D.; Mock, M.; Poretschkin, M.; Rosenzweig, J.; Rostalski, F.; Sicking, J.; Volmer, J.; et al. Trustworthy Use of Artificial Intelligence. Priorities from a Philosophical, Ethical, Legal, and Technological Viewpoint as a Basis for Certification of Artificial Intelligence. 2019. Available online: (accessed on 29 March 2022).
  12. Poretschkin, M.; Schmitz, A.; Akila, M.; Adilova, L.; Becker, D.; Cremers, A.B.; Hecker, D.; Houben, S.; Mock, M.; Rosenzweig, J.; et al. Leitfaden zur Gestaltung Vertrauenswürdiger Künstlicher Intelligenz. 2021. Available online: (accessed on 29 March 2022).
  13. Wahlster, W.; Winterhalter, C. Deutsche Normungsroadmap. Künstliche Intelligenz; DIN: Berlin, Germany, 2020.
  14. World Commission on Environment and Development (Brundtland Commission). Our Common Future (Brundtland Report). 1987. Available online: (accessed on 29 March 2022).
  15. van Wynsberghe, A. Sustainable AI: AI for sustainability and the sustainability of AI. AI Ethics 2021, 1, 213–218.
  16. Rockström, J.; Steffen, W.; Noone, K.; Persson, A.; Chapin, F.S.; Lambin, E.F.; Lenton, T.M.; Scheffer, M.; Folke, C.; Schellnhuber, H.J.; et al. A safe operating space for humanity. Nature 2009, 461, 472–475.
  17. University of Mary Washington, Office of Sustainability. Economic Sustainability. Available online: (accessed on 31 December 2021).
  18. Sachs, I. Social sustainability and whole development: Exploring the dimensions of sustainable development. In Sustainability and the Social Sciences: A Cross-Disciplinary Approach to Integrating Environmental Considerations into Theoretical Reorientation; Becker, E., Ed.; Zed Books: London, UK, 1999; ISBN 1856497089.
  19. University of Mary Washington, Office of Sustainability. Social Sustainability. Available online: (accessed on 31 December 2021).
  20. Penzenstadler, B.; Femmer, H. A Generic Model for Sustainability with Process- and Product-Specific Instances. In Proceedings of the 2013 Workshop on Green in/by Software Engineering; Association for Computing Machinery: New York, NY, USA, 2013; pp. 3–8. ISBN 9781450318662.
  21. European Commission. One Common Charging Solution for All. Available online: (accessed on 31 December 2021).
  22. Fanta, A. How Apple Lobbied EU to Delay Common Smartphone Charger; EUobserver: Brussels, Belgium, 2019.
  23. Dhara, C.; Singh, V. The Delusion of Infinite Economic Growth. Available online: (accessed on 29 March 2022).
  24. Wiedmann, T.O.; Schandl, H.; Lenzen, M.; Moran, D.; Suh, S.; West, J.; Kanemoto, K. The material footprint of nations. Proc. Natl. Acad. Sci. USA 2015, 112, 6271–6276.
  25. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds Mach. 2018, 28, 689–707.
  26. McKie, R. Child Labour, Toxic Leaks: The Price We Could Pay for a Greener Future. Available online: (accessed on 31 December 2021).
  27. European Parliament. Answer Given by Ms Urpilainen on Behalf of the European Commission, Question Reference: E-001002/2020. Available online: (accessed on 31 December 2021).
  28. Bergmann, R.; Solomun, S. A New AI Lexicon: Sustainability from Tech to Justice: A Call for Environmental Justice in AI. Available online: (accessed on 31 December 2021).
  29. Mohamed, S.; Png, M.-T.; Isaac, W. Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence. Philos. Technol. 2020, 33, 659–684.
  30. Angwin, J.; Larson, J.; Mattu, S.; Kirchner, L. Machine Bias. Available online: (accessed on 29 March 2022).
  31. Lee, M.S.A.; Floridi, L. Algorithmic Fairness in Mortgage Lending: From Absolute Conditions to Relational Trade-offs. Minds Mach. 2021, 31, 165–191.
  32. Floridi, L.; Cowls, J.; King, T.C.; Taddeo, M. How to Design AI for Social Good: Seven Essential Factors. Sci. Eng. Ethics 2020, 26, 1771–1796.
  33. Umbrello, S.; van de Poel, I. Mapping Value Sensitive Design onto AI for Social Good Principles. In AI and Ethics; Springer: Berlin, Germany, 2021; Volume 1.
  34. Vinuesa, R.; Azizpour, H.; Leite, I.; Balaam, M.; Dignum, V.; Domisch, S.; Felländer, A.; Langhans, S.D.; Tegmark, M.; Fuso Nerini, F. The role of artificial intelligence in achieving the Sustainable Development Goals. Nat. Commun. 2020, 11, 233.
  35. Ryan, M.; Antoniou, J.; Brooks, L.; Jiya, T.; Macnish, K.; Stahl, B. The Ethical Balance of Using Smart Information Systems for Promoting the United Nations’ Sustainable Development Goals. Sustainability 2020, 12, 4826.
  36. Gabriel, M. We Need a Metaphysical Pandemic. In In the Realm of Corona-Normativities: A Momentary Snapshot of a Dynamic Discourse, 2020th ed.; Gephart, W., Ed.; Vittorio Klostermann: Frankfurt am Main, Germany, 2020; ISBN 9783465145318.
  37. Genovesi, S. Support your Local. In In the Realm of Corona-Normativities: A Momentary Snapshot of a Dynamic Discourse, 2020th ed.; Gephart, W., Ed.; Vittorio Klostermann: Frankfurt am Main, Germany, 2020; ISBN 9783465145318.
  38. Kant, I. Kritik der Reinen Vernunft (1. Aufl. 1781). Prolegomena. Grundlegung zur Metaphysik der Sitten. Metaphysische Anfangsgründe der Naturwissenschaften, Studienausg, Nachdr. der Ausg. 1968; de Gruyter: Berlin, Germany, 1978; ISBN 3110014378.
  39. Levinas, E. Totalité et Infini; Martinus Nijhoff: Den Haag, The Netherlands, 1961.
  40. Fraser, N. Justice Interruptus: Critical Reflections on the “Postsocialist” Condition; Routledge: New York, NY, USA; London, UK, 2014; ISBN 9781315822174.
  41. IPCC. Annex I: Glossary [Matthews, J.B.R. (ed.)]. In Global Warming of 1.5 °C. An IPCC Special Report on the Impacts of Global Warming of 1.5 °C above Pre-Industrial Levels and Related Global Greenhouse Gas Emission Pathways, in the Context of Strengthening the Global Response to the Threat of Climate Change, Sustainable Development, and Efforts to Eradicate Poverty; Masson-Delmotte, V., Zhai, P., Pörtner, H.-O., Roberts, D., Skea, J., Shukla, P.R., Pirani, A., Moufouma-Okia, W., Péan, C., Pidcock, R., Eds.; IPCC: Geneva, Switzerland, 2018; Available online: (accessed on 31 December 2021).
  42. Caney, S. Climate Justice; Springer: Berlin, Germany, 2020.
  43. Jonas, H. The Imperative of Responsibility: In Search of an Ethics for the Technological Age; Univ. of Chicago Press: Chicago, IL, USA, 1984; ISBN 0226405966.
  44. Heidbrink, L.; Langbehn, C.; Loh, J. (Eds.) Handbuch Verantwortung; Springer: Berlin, Germany, 2017.
  45. Arendt, H. The Human Condition; The University of Chicago Press: Chicago, IL, USA, 1998.
  46. Caney, S. Justice and Future Generations. Annu. Rev. Polit. Sci. 2018, 21, 475–493.
  47. Order of the First Senate of 24 March 2021—1 BvR 2656/18, Paras. 1-270; Federal Constitutional Court Germany: Karlsruhe, Germany, 2021.
  48. Page, E. Climate Change, Justice and Future Generations; Reprinted; Edward Elgar: Cheltenham, UK; Northampton, MA, USA, 2007; ISBN 9781847204967.
  49. Lenk, H.; Rophol, G. (Eds.) Technik und Ethik; Reclam: Stuttgart, Germany, 1993.
  50. Garner, R. A Theory of Justice for Animals: Animal Rights in a Nonideal World; Oxford University Press: New York, NY, USA, 2013.
  51. Véliz, C. Privacy is Power; Bantam Press: London, UK, 2020.
  52. CEN/CENELEC. Ethics Assessment for Research and Innovation—Part 2: Ethical Impact Assessment Framework (SATORI); CENELEC: Brussels, Belgium, 2017.
  53. Braganza, O. Proxyeconomics, a theory and model of proxy-based competition and cultural evolution. R. Soc. Open Sci. 2022, 9, 211030.
  54. Morik, K.; Kotthaus, H.; Heppe, L.; Heinrich, D.; Fischer, R.; Mücke, S.; Pauly, A.; Jakobs, M.; Piatkowski, N. Yes We Care!—Certification for Machine Learning Methods through the Care Label Framework. 2021. Available online: (accessed on 29 March 2022).
  55. Morik, K.; Kotthaus, H.; Heppe, L.; Heinrich, D.; Fischer, R.; Pauly, A.; Piatkowski, N. The Care Label Concept: A Certification Suite for Trustworthy and Resource-Aware Machine Learning. 2021. Available online: (accessed on 29 March 2022).
  56. Rohde, F.; Wagner, J.; Reinhard, P.; Petschow, U.; Mayer, A.; Voss, M.; Mollen, A. Nachhaltigkeitskriterien für Künstliche Intelligenz. Entwicklung Eines Kriterien- und Indikatorensets für die Nachhaltigkeitsbewertung von KI-Systemen Entlang des Lebenszyklus; Schriftenreihe des IÖW 220/21; IÖW: Berlin, Germany, 2021.
  57. Rességuier, A.; Rodrigues, R. AI ethics should not remain toothless! A call to bring back the teeth of ethics. Big Data Soc. 2020, 7, 2053951720942541.
  58. Collingridge, D. The Social Control of Technology; St. Martin’s Press: New York, NY, USA, 1980.
  59. Gabriel, M. Moralischer Fortschritt in dunklen Zeiten; Ullstein: Berlin, Germany, 2020.
  60. Coeckelbergh, M. Green Leviathan or the Poetics of Political Liberty: Navigating Freedom in the Age of Climate Change and Artificial Intelligence; Routledge/Taylor & Francis Group: Abingdon, UK, 2021.
  61. European Commission. Responsible Research Innovation. Available online: (accessed on 14 January 2022).
  62. Corporate Digital Responsibility Initiative. Digitalisation Calls for Responsibility. Available online: (accessed on 14 January 2022).
  63. European Commission. Corporate Social Responsibility & Responsible Business Conduct. Available online: (accessed on 14 January 2022).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Genovesi, S.; Mönig, J.M. Acknowledging Sustainability in the Framework of Ethical Certification for AI. Sustainability 2022, 14, 4157.

