Proceeding Paper

How Ethical Design of Artificial Intelligence Systems Is Possible in a Transcultural Perspective †

School of Marxism, Xi’an Jiaotong University, Xi’an 710049, China
* Author to whom correspondence should be addressed.
Presented at the 5th International Conference of Philosophy of Information, IS4SI Summit 2021, Online, 12–19 September 2021.
Proceedings 2022, 81(1), 102; https://doi.org/10.3390/proceedings2022081102
Published: 10 March 2022

Abstract

In the face of today’s culturally diverse social reality, the ethical design of artificial intelligence systems (AIS) can no longer follow a single cultural tradition but must take multiple cultural traditions into account. It is therefore necessary to study the possibility of the ethical design of AIS from a transcultural perspective. This paper addresses that possibility at both the theoretical and practical levels, in four respects: the ethical value basis, ethical consensus, the designer’s ethical responsibility, and the internal ethical decision-making mechanism of AIS.

1. The Broadest Basis of Ethical Value: “Common Value”

Normativity is only a function of ethics, not its foundation; the moral foundation of ethics is value, which reveals what makes virtue possible. Because the value-ethical path addresses the foundation of ethics, it probes the ethical problems of AI more deeply than the normative-ethical path does. Moreover, value is not only the moral foundation of ethics but also the core issue of culture: culture and values are intrinsically linked, and every culture carries and embodies certain values. Since value is both the basic issue of ethics and the core issue of culture, it is the natural meeting point of cultural and ethical questions. The study of transcultural AI ethics therefore requires the support of value-ethics theory, and its core question is whether ethical norms with different value orientations can be embedded in AIS.
Transcultural AI ethics needs a globally shared value concept as theoretical support. The idea of a community with a shared future for mankind rests on the basic value that “there is only one earth for human beings and one world for all countries”, and holds that globalization, informatization, and the complex situation all countries now face have bound their futures together; no country can act alone. This value consensus, grounded in today’s reality, is not a stream without a source or a tree without roots, but a consensus among the stakeholders of the community with a shared future, and an important embodiment of the concept of “common values”. Therefore, taking the “common value” concept of the community with a shared future for mankind as the value support of transcultural AI ethics is both realistic and operable.

2. The Greatest Ethical Consensus: “The Good Life”

“The good life” is a kind of value, one of the common values pursued by human beings. Since ancient times, the questions of what the good life is and how to achieve it have been ethical issues that people all over the world have pursued and discussed. It can therefore be said that “the good life” is the greatest ethical consensus across different cultures. Coeckelbergh argues that the ethics of artificial intelligence should take cultural differences into account and focus on how artificial intelligence can help human beings achieve prosperity and well-being [1]. Duan Weiwen believes that the development of artificial intelligence should be consistent with the value goal of human well-being [2]. Capurro holds that roboethics concerns the use of robots to pursue “the good life” according to different customs and cultures [3]. According to Triolo, “the obvious obstacle to the widespread use of AI is the extremely difficult balance of privacy issues,” yet there are also broad points of consensus, such as “AI should benefit all mankind” and “the principle of public welfare” [4].
The pursuit of “the good life” is diverse and includes concepts such as “material abundance”, “spiritual abundance”, “convenience of life”, and being “eco-friendly”. When the application of AI technology is consistent with the pursuit of “the good life”, it becomes a value leader: most people in most countries or regions welcome applications of AI technology that create material and spiritual wealth, protect the environment, improve the quality of life, and thereby advance the value goal of “the good life”. The ethical consensus on “the good life” therefore has a strong cohesive effect across transcultural values and is one of the prerequisites for implementing the ethical design of AIS from a transcultural perspective.

3. Designers’ Ethical Responsibility and Transcultural Understanding

The Moral Machine experiment surveyed the ethical decision-making preferences of millions of people in 233 countries and regions online and concluded that regional cultural traditions, economies, and legal systems are closely related to human ethical decision-making preferences [5]. Clearly, the ethical design of AIS cannot proceed without designers and other engineers, and designers’ ethical decision preferences directly affect the ethical preferences of the AIS they design. Several scholars have therefore explored the transcultural issues of AIS from the designers’ perspective. Anniina Huttunen and colleagues argue that, to address transcultural AIS ethical issues, engineers should not only develop universally accepted ethical provisions but also take a “neo-Archimedean oath” [6]. Takeshi Kimura holds that robotics engineers should be familiar with different social, cultural, and ethical connotations and should have a strong sense of ethical responsibility [7]. Coeckelbergh argues that AI ethics should shift from a “robot-centric” to a “human-centric” paradigm, examining virtue ethics (that is, the “human good”) in light of cultural differences; its aim is to encourage people to use the “ethical experience” and “moral imagination” already present in human-computer interaction to build a human-machine co-existence that enhances human prosperity and well-being [1]. Peter-Paul Verbeek likewise advises designers to exercise “moral imagination”, arguing that designers can feed expectations into the design process by trying to imagine the mediating role the technology they are designing might play in user behavior [8].
These discussions show that the ethical design of AIS from a transcultural perspective places high demands on designers; the key problem is how to foster and guarantee both a strong sense of ethical responsibility and a deep transcultural understanding among them.

4. A Culturally Inclusive Moral Decision-Making Mechanism of AIS

The design thinking and values of AIS designers are ultimately reflected in the internal ethical decision-making mechanism of the AIS. Whether a culturally inclusive internal ethical decision-making mechanism can be designed is therefore key to implementing the ethical design of AIS from a transcultural perspective. Wallach and Allen argue that, in the face of transcultural diversity and complexity, the design of AI moral decision-making systems should combine “bottom-up” moral learning with “top-down” moral evaluation [9]. Although they do not state that their ethical decision-making framework is specific to transcultural AIS, their approach offers a good starting point, provided it is further adapted: the task is to analyze in greater depth how AIS can accommodate cultural diversity in both “bottom-up” ethical learning and “top-down” ethical assessment. Existing research offers useful references here. Xu Yingjin has analyzed in depth the relationship between the computational model of artificial neural networks, the metaphorical projection of the Confucian tradition of “virtue cultivation”, and a “Confucian virtue model database” [10]. Dehghani and others advocate the creation of cultural databases to inform ethical decision making in artificial intelligence [11]. Moon and others argue that an open-source model should be used to co-create an ethical knowledge base for artificial intelligence in the face of cultural differences [12]. One idea, then, is that the “bottom-up” approach to the transcultural ethical design of AIS becomes feasible if a transcultural ethics database is constructed to provide abundant transcultural ethical cases for the “bottom-up” moral learning of AIS. For the “top-down” ethical assessment approach, studies of value embedding in AIS from a transcultural perspective provide good insights and theoretical groundwork. Li Lun and Sun Baoxue believe that AIS should have built-in value correction mechanisms [13]. Aimee van Wynsberghe believes that the ethical embedding of AIS should include value-sensitive mechanisms [14]. Nagenborg argues that values are always consciously embedded in “ethical subroutines” in the design of artificial moral agents, and that one should consciously reflect on these (Western or non-Western) values [15]. Spiekermann regards the setting and understanding of value goals as the first step of AIS ethical design, and argues that this step is based on value ethics [16].
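To make the “bottom-up” side of this proposal more concrete, the following is a minimal, purely illustrative sketch (in Python) of a transcultural ethics case base of the kind suggested above. The class names, cultural tags, and example dilemma are hypothetical and are not drawn from this paper or from any cited system; the point is only that recorded moral judgments, tagged by cultural context, could be queried both in aggregate and per culture, so that “bottom-up” moral learning can see where cultures converge and where they diverge.

```python
# Illustrative sketch only: a minimal transcultural ethics case base.
# All names (EthicsCase, CaseBase, the example dilemma and culture tags)
# are hypothetical, not taken from the paper or any cited system.
from __future__ import annotations

from collections import Counter
from dataclasses import dataclass, field


@dataclass
class EthicsCase:
    """One recorded moral judgment, tagged with its cultural context."""
    situation: str   # short description of the dilemma
    action: str      # the action that was judged
    judgment: str    # e.g. "acceptable" / "unacceptable"
    culture: str     # cultural or regional tag for the judgment


@dataclass
class CaseBase:
    """A case base that 'bottom-up' moral learning could draw on."""
    cases: list[EthicsCase] = field(default_factory=list)

    def add(self, case: EthicsCase) -> None:
        self.cases.append(case)

    def judgments_for(self, situation: str, action: str) -> Counter:
        """Count judgments across all cultures for a situation/action pair."""
        return Counter(
            c.judgment
            for c in self.cases
            if c.situation == situation and c.action == action
        )

    def judgments_by_culture(self, situation: str, action: str) -> dict[str, Counter]:
        """Break the same counts down by culture, exposing agreement and divergence."""
        by_culture: dict[str, Counter] = {}
        for c in self.cases:
            if c.situation == situation and c.action == action:
                by_culture.setdefault(c.culture, Counter())[c.judgment] += 1
        return by_culture


if __name__ == "__main__":
    base = CaseBase()
    base.add(EthicsCase("share user data for research", "share without consent",
                        "unacceptable", "culture_A"))
    base.add(EthicsCase("share user data for research", "share without consent",
                        "unacceptable", "culture_B"))
    base.add(EthicsCase("share user data for research", "share without consent",
                        "acceptable", "culture_C"))
    print(base.judgments_for("share user data for research", "share without consent"))
    print(base.judgments_by_culture("share user data for research", "share without consent"))
```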
The other idea, accordingly, is to embed a value-consensus mechanism that provides a reasonable value reference and value guidance for “top-down” moral evaluation, which would make the “top-down” approach to the transcultural ethical design of AIS feasible as well. Combining the “bottom-up” moral learning approach based on a transcultural ethics database with the “top-down” moral evaluation approach based on value consensus thus greatly improves, at the level of practical operation, the possibility of the ethical design of AIS from a transcultural perspective.
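As a rough illustration of this combination, and only under the assumption that cross-cultural approval rates can be estimated from a case base like the one sketched above, the following sketch filters candidate actions through “top-down” consensus constraints loosely modeled on principles such as “AI should benefit all mankind”, and then applies a “bottom-up” approval threshold. The feature names, constraints, and threshold are hypothetical placeholders, not a mechanism proposed by this paper or by Wallach and Allen.

```python
# Illustrative sketch only: combining "bottom-up" case-derived preferences
# with a "top-down" value-consensus check. Constraints and threshold are
# hypothetical placeholders, not principles taken from the cited literature.
from __future__ import annotations

from typing import Callable

# A candidate action is described by a few coarse, hypothetical features.
Action = dict  # e.g. {"name": ..., "harms_minority": ..., "benefits_public": ...}


def bottom_up_score(action: Action, approval_by_culture: dict[str, float]) -> float:
    """Aggregate learned, culture-specific approval rates (e.g. estimated from a
    transcultural ethics case base) into a single score in [0, 1]."""
    if not approval_by_culture:
        return 0.0
    return sum(approval_by_culture.values()) / len(approval_by_culture)


# "Top-down" consensus constraints, stated as predicates an action must satisfy.
CONSENSUS_CONSTRAINTS: list[Callable[[Action], bool]] = [
    lambda a: not a.get("harms_minority", False),  # e.g. "AI should benefit all mankind"
    lambda a: a.get("benefits_public", False),     # e.g. "the principle of public welfare"
]


def evaluate(action: Action, approval_by_culture: dict[str, float],
             threshold: float = 0.5) -> bool:
    """Accept an action only if it passes every top-down consensus constraint
    and has sufficient bottom-up, cross-cultural approval."""
    if not all(rule(action) for rule in CONSENSUS_CONSTRAINTS):
        return False
    return bottom_up_score(action, approval_by_culture) >= threshold


if __name__ == "__main__":
    action = {"name": "deploy assistive robot", "harms_minority": False, "benefits_public": True}
    approvals = {"culture_A": 0.8, "culture_B": 0.6, "culture_C": 0.4}
    print(evaluate(action, approvals))  # True: constraints hold, mean approval is 0.6
```

The ordering reflects the division of labor discussed above: the consensus constraints act as a hard “top-down” filter, while the case-derived scores let culturally divergent judgments enter the decision in a graded, “bottom-up” way.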

5. Conclusions

This paper has addressed the possibility of the ethical design of AIS at both the theoretical and practical levels, in four respects: the ethical value basis, ethical consensus, the ethical responsibility of designers, and the internal moral decision-making mechanism of AIS. Regarding the first aspect, “common value” is the broadest ethical value basis for transcultural ethical issues. Regarding the second, the greatest ethical consensus of AI ethics from a transcultural perspective is that AI technology should create more benefits for human beings and help realize “the good life”. Regarding the third, “ethical experience”, “moral imagination”, and the establishment of global ethical norms all help to strengthen and guarantee designers’ awareness of ethical responsibility and their transcultural understanding. Regarding the fourth, the support that a transcultural ethics database lends to “bottom-up” moral learning, and that value consensus lends to “top-down” moral evaluation, provides a very useful reference for the ethical design of AIS from a transcultural perspective.

Author Contributions

The original draft preparation and the main ideas in the article are from L.W., who also completed the review, editing, and final draft of the article. K.Z. was involved in conceptualization, especially the discussion of “common value”, and also participated in data curation and proofreading of the article format. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Social Science Foundation of China under the Youth Program: Research on the Ethical Embedding Mechanism of Artificial Intelligence in Transcultural Perspective (19CZX018).

Institutional Review Board Statement

This study did not require ethical approval.

Informed Consent Statement

Not applicable.

Data Availability Statement

This study did not report any data.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Coeckelbergh, M. Personal robots, appearance, and human good: A methodological reflection on roboethics. Int. J. Soc. Robot. 2009, 1, 217–221.
2. Duan, W. Value Reflection and Ethical Adjustment in the Era of Artificial Intelligence. J. Renmin Univ. China 2017, 31, 98–108. (In Chinese)
3. Capurro, R. Intercultural Roboethics for a Robot Age. 2017. Available online: http://www.capurro.de/intercultural_roboethics_2017.pdf (accessed on 8 June 2019).
4. Triolo, P. The Challenge of Establishing Global Ethical Norms for Artificial Intelligence. 30 October 2018. Available online: https://www.sohu.com/a/272210929_468736 (accessed on 8 March 2019). (In Chinese)
5. Awad, E.; Dsouza, S.; Kim, R.; Schulz, J.; Henrich, J.; Shariff, A.; Bonnefon, J.F.; Rahwan, I. The moral machine experiment. Nature 2018, 563, 59–64.
6. Huttunen, A.; Brace, W.; Kantola, V.; Lechner, L.; Kulovesi, J.; Silvennoinen, K. Cross-Cultural Application of Ethical Principles in the Design Process of Autonomous Machines. 2010. Available online: http://www.iiis.org/CDs2010/CD2010IMC/ICEME_2010/PapersPdf/FB476NT.pdf (accessed on 5 August 2019).
7. Kimura, T. Roboethical arguments and applied ethics: Being a good citizen. In Cybernics; Yoshiyuki, S., Kenji, S., Yasuhisa, H., Eds.; Springer: Tokyo, Japan, 2014; pp. 289–298.
8. Verbeek, P.P. Moralizing Technology: Understanding and Designing the Morality of Things; University of Chicago Press: Chicago, IL, USA, 2011; p. 99.
9. Wallach, W.; Allen, C. Moral Machines: Teaching Robots Right from Wrong; Oxford University Press: New York, NY, USA, 2009.
10. Xu, Y. Confucian Virtue Ethics, Neural Computation and Cognitive Metaphors. Wuhan Univ. J. (Soc. Sci.) 2017, 70, 49–59. (In Chinese)
11. Dehghani, M.; Tomai, E.; Forbus, K.D.; Klenk, M. An Integrated Reasoning Approach to Moral Decision-Making. 2008. Available online: https://www.aaai.org/Papers/AAAI/2008/AAAI08-203.pdf (accessed on 5 December 2021).
12. Moon, A.; Calisgan, C.; Operto, F.; Veruggio, G.; Van der Loos, H.M. Open Roboethics: Establishing an Online Community for Accelerated Policy and Design Change. 2012. Available online: http://robots.law.miami.edu/wp-content/uploads/2012/01/Moon_et_al_Open-Roboethics-2012.pdf (accessed on 5 March 2019).
13. Li, L.; Sun, B. Enduing Artificial Intelligence “Moral Chips (Conscience)”: Four Dimensions of Ethical Research of Artificial Intelligence. Teach. Res. 2018, 8, 72–79. (In Chinese)
14. Van Wynsberghe, A. Designing robots for care: Care centered value-sensitive design. Sci. Eng. Ethics 2013, 19, 407–433.
15. Nagenborg, M. Artificial moral agents: An intercultural perspective. Int. Rev. Inf. Ethics 2007, 7, 129–133.
16. Spiekermann, S. Ethical IT Innovation: A Value-Based System Design Approach; Auerbach Publications: New York, NY, USA, 2015; pp. 239–241.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
