Navigating the Ethics of Artificial Intelligence
Definition
1. Genesis and History
1.1. On Artificial Intelligence
1.2. Terminological Housekeeping
1.3. Structure of the Entry
2. The Rationale for AI Ethics
2.1. The Alignment Problem
2.2. The Black Box Problem
2.3. The Human in the Loop Problem
2.4. Fairness, Bias, and Noise
2.5. Accountability and Responsibility
2.6. AI Agents, Patients, and Personhood
3. An Arc Towards Pluralism in AI Ethics
3.1. A Historical Survey of Theories of AI Ethics
3.1.1. Asimov—Early Deontology
3.1.2. Wiener and Maner—Early Consequentialism
3.1.3. Weizenbaum—Accommodating Autonomy and Justice
3.1.4. Beauchamp and Childress—Deontological Pluralism
3.1.5. Moor—Just Consequentialism
3.1.6. Clouser and Gert—Pluralistic Skepticism
3.1.7. Anderson and Anderson—Embodied Pluralistic Deontology
3.1.8. Wallach and Allen—Abductive Hybrid Pluralism
3.1.9. Dubljević and Racine—Agent Deed Consequence
3.1.10. Telkamp and Anderson—Moral Foundations Theory
3.1.11. Gros, Kester, Martens, and Werkhoven—Augmented Utilitarianism
3.1.12. Pluralism in View
3.1.13. Pluralism in Practice
3.2. Assessing Frameworks
3.2.1. Consequentialism
3.2.2. Monistic Deontology
3.2.3. Contractualism
3.2.4. Deontological Pluralism
3.2.5. The ADC Model
3.2.6. Moral Foundations Theory
3.2.7. Augmented Utilitarianism
3.3. The Primacy of Pluralism
4. Conclusions and Prospects
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| ADC | Agent, Deed, Consequence |
| AI | Artificial Intelligence |
| AU | Augmented Utilitarianism |
| EAD | Ethically Aligned Design |
| EU | European Union |
| FAIR | Findable, Accessible, Interoperable, Reusable |
| GDPR | General Data Protection Regulation |
| IEEE | Institute of Electrical and Electronics Engineers |
| MFT | Moral Foundations Theory |
| UNESCO | United Nations Educational, Scientific, and Cultural Organization |
References
- Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2010. [Google Scholar]
- Turing, A.M. Computing machinery and intelligence. Mind 1950, 59, 433–460. [Google Scholar] [CrossRef]
- Legg, S.; Hutter, M. Universal intelligence: A definition of machine intelligence. Minds Mach. 2007, 17, 391–444. [Google Scholar] [CrossRef]
- McCarthy, J.; Minsky, M.L.; Rochester, N.; Shannon, C.E. A proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI Mag. 2006, 27, 12–14. [Google Scholar] [CrossRef]
- Moor, J.H. The Dartmouth College Artificial Intelligence Conference: The next fifty years. AI Mag. 2006, 27, 87–91. [Google Scholar] [CrossRef]
- Anderson, J.; Rainie, L. The Future of Well-Being in a Tech-Saturated World; Pew Research Center: Washington, DC, USA, 2018. [Google Scholar]
- Minsky, M.; Papert, S. Perceptrons; MIT Press: Cambridge, MA, USA, 1969; pp. 1–292. [Google Scholar]
- Pearl, J. Probabilistic Reasoning in Intelligent Systems; Morgan Kaufmann: San Mateo, CA, USA, 1988; 552p. [Google Scholar]
- Nilsson, N.J. The Quest for Artificial Intelligence: A History of Ideas and Achievements; Cambridge University Press: Cambridge, UK, 2009; pp. 1–558. [Google Scholar]
- Jumper, J.; Evans, R.; Pritzel, A.; Green, T.; Figurnov, M.; Ronneberger, O.; Tunyasuvunakool, K.; Bates, R.; Žídek, A.; Potapenko, A.; et al. Highly accurate protein structure prediction with AlphaFold. Nature 2021, 596, 583–589. [Google Scholar] [CrossRef]
- Li, Y.; Choi, D.; Chung, J.; Kushman, N.; Schrittwieser, J.; Leblond, R.; Eccles, T.; Keeling, J.; Gimeno, F.; Lago, A.D.; et al. Competition-level code generation with AlphaCode. Science 2022, 378, 1092–1097. [Google Scholar] [CrossRef]
- Trinh, T.H.; Wu, Y.; Le, Q.V.; He, H.; Luong, T. Solving olympiad geometry without human demonstrations. Nature 2024, 625, 476–482. [Google Scholar] [CrossRef] [PubMed]
- Singhal, K.; Tu, T.; Gottweis, J.; Sayres, R.; Wulczyn, E.; Amin, M.; Hou, L.; Clark, K.; Pfohl, S.R.; Cole-Lewis, H.; et al. Toward expert-level medical question answering with large language models. Nat. Med. 2025, 31, 943–950. [Google Scholar] [CrossRef] [PubMed]
- Wang, L.; Ma, C.; Feng, X.; Zhang, Z.; Yang, H.; Zhang, J.; Chen, Z.; Tang, J.; Chen, X.; Lin, Y.; et al. A survey on large language model-based autonomous agents. Front. Comput. Sci. 2024, 18, 186345. [Google Scholar] [CrossRef]
- Cowan, J.D.; Sharp, D.H. Neural nets and artificial intelligence. Daedalus 1988, 117, 85–121. [Google Scholar]
- Hughes, L.; Dwivedi, Y.K.; Malik, T.; Shawosh, M.; Albashrawi, M.A.; Jeon, I.; Dutot, V.; Appanderanda, M.; Crick, T.; De’, R.; et al. AI agents and agentic systems: A multi-expert analysis. J. Comput. Inf. Syst. 2025, 65, 489–517. [Google Scholar] [CrossRef]
- Sapkota, R.; Roumeliotis, K.I.; Karkee, M. AI agents vs. agentic AI: A conceptual taxonomy, applications and challenges. Inf. Fusion 2025, 126, 103599. [Google Scholar] [CrossRef]
- Coeckelbergh, M. AI Ethics; MIT Press: Cambridge, MA, USA, 2020. [Google Scholar]
- Beer, P.; Mulder, R.H. The Effects of Technological Developments on Work and Their Implications for Continuous Vocational Education and Training: A Systematic Review. Front. Psychol. 2020, 11, 918. [Google Scholar] [CrossRef] [PubMed]
- Lambrecht, A.; Tucker, C.E. Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads. Manag. Sci. 2019, 65, 2966–2981. [Google Scholar] [CrossRef]
- Macrae, C. Learning from the failure of autonomous and intelligent systems: Accidents, safety, and sociotechnical sources of risk. Risk Anal. 2022, 42, 1999–2025. [Google Scholar] [CrossRef]
- Christian, B. The Alignment Problem: Machine Learning and Human Values; W.W. Norton: New York, NY, USA, 2020. [Google Scholar]
- Dung, L. Current cases of AI misalignment and their implications for future risks. Synthese 2023, 202, 138. [Google Scholar] [CrossRef]
- Payne, K. An AI Chatbot Pushed a Teen to Kill Himself, a Lawsuit Against Its Creator Alleges. AP News. 25 October 2024. Available online: https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-artificial-intelligence-9d48adc572100822fdbc3c90d1456bd0 (accessed on 10 October 2025).
- Bostrom, N. Superintelligence: Paths, Dangers, Strategies; Oxford University Press: Oxford, UK, 2014. [Google Scholar]
- Russell, S. Human Compatible: Artificial Intelligence and the Problem of Control; Viking: New York, NY, USA, 2019. [Google Scholar]
- Cecchini, D.; Pflanzer, M.; Dubljević, V. Aligning artificial intelligence with moral intuitions: An intuitionist approach to the alignment problem. AI Ethics 2025, 5, 1523–1533. [Google Scholar] [CrossRef]
- Yampolskiy, R.V. Artificial Superintelligence: A Futuristic Approach; CRC Press: Boca Raton, FL, USA, 2015. [Google Scholar]
- Carlsmith, J. Is power-seeking AI an existential risk? arXiv 2022, arXiv:2206.13353. [Google Scholar] [CrossRef]
- Center for AI Safety. Statement on AI Risk. Available online: https://www.safe.ai/statement-on-ai-risk (accessed on 10 May 2025).
- Ngo, R.; Chan, L.; Mindermann, S. The alignment problem from a deep learning perspective. In Proceedings of the ICLR 2024 12th International Conference on Learning Representations, Vienna, Austria, 7–10 May 2024. [Google Scholar] [CrossRef]
- Ord, T. The Precipice: Existential Risk and the Future of Humanity; Hachette Books: New York, NY, USA, 2020. [Google Scholar]
- Gabriel, I. Artificial intelligence, values and alignment. Minds Mach. 2020, 30, 411–437. [Google Scholar] [CrossRef]
- Jasanoff, S. Virtual, visible, and actionable: Data assemblages and the sightlines of justice. Big Data Soc. 2017, 4, 2053951717724477. [Google Scholar] [CrossRef]
- O’Neil, C. Weapons of Math Destruction; Penguin Books: New York, NY, USA, 2017. [Google Scholar]
- Banerjee, S. Cosmicism and Artificial Intelligence: Beyond Human-Centric AI. Proceedings 2025, 126, 13. [Google Scholar] [CrossRef]
- Durán, J.M.; Jongsma, K.R. Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. J. Med. Ethics 2021, 47, 329–335. [Google Scholar] [CrossRef] [PubMed]
- Coeckelbergh, M. The Political Philosophy of AI: An Introduction; Polity Press: Cambridge, UK, 2022. [Google Scholar]
- Castelvecchi, D. Can we open the black box of AI? Nature 2016, 538, 20–23. [Google Scholar] [CrossRef] [PubMed]
- Pedreschi, D.; Giannotti, F.; Guidotti, R.; Monreale, A.; Ruggieri, S.; Turini, F. Meaningful explanations of black box AI decision systems. In Proceedings of the AAAI Conference on Artificial Intelligence, Palo Alto, CA, USA, 27 January–1 February 2019; Volume 33. [Google Scholar] [CrossRef]
- Zednik, C. Solving the black box problem: A normative framework for explainable artificial intelligence. Philos. Technol. 2021, 34, 265–288. [Google Scholar] [CrossRef]
- Von Eschenbach, W.J. Transparency and the black box problem: Why we do not trust AI. Philos. Technol. 2021, 34, 1607–1622. [Google Scholar] [CrossRef]
- Haugeland, J. Artificial Intelligence: The Very Idea; MIT Press: Cambridge, MA, USA, 1985. [Google Scholar]
- Newell, A. Physical Symbol Systems. Cogn. Sci. 1980, 4, 135–183. [Google Scholar] [CrossRef]
- Mosqueira-Rey, E.; Hernández-Pereira, E.; Alonso-Ríos, D.; Bobes-Bascarán, J.; Fernández-Leal, Á. Human-in-the-loop machine learning: A state of the art. Artif. Intell. Rev. 2023, 56, 3005–3054. [Google Scholar] [CrossRef]
- Demartini, G.; Mizzaro, S.; Spina, D. Human-in-the-loop Artificial Intelligence for Fighting Online Misinformation: Challenges and Opportunities. IEEE Data Eng. Bull. 2020, 43, 65–74. [Google Scholar]
- Monarch, R.M. Human-in-the-Loop Machine Learning: Active Learning and Annotation for Human-Centered AI; Simon and Schuster: New York, NY, USA, 2021. [Google Scholar]
- Jones, M.L. The right to a human in the loop: Political constructions of computer automation and personhood. Soc. Stud. Sci. 2017, 47, 216–239. [Google Scholar] [CrossRef]
- Sunstein, C.R. Governing by algorithm? No noise and (potentially) less bias. Duke Law J. 2022, 71, 1175–1205. [Google Scholar] [CrossRef]
- Scheuerman, M.K.; Wade, K.; Lustig, C.; Brubaker, J.R. How we’ve taught algorithms to see identity: Constructing race and gender in image databases for facial analysis. Proc. ACM Hum.-Comput. Interact. 2020, 4, 1–35. [Google Scholar] [CrossRef]
- Floridi, L.; Taddeo, M. What is data ethics? Philos. Trans. R. Soc. A 2016, 374, 20160360. [Google Scholar] [CrossRef]
- Wilkinson, M.D.; Dumontier, M.; Aalbersberg, I.J.; Appleton, G.; Axton, M.; Baak, A.; Blomberg, N.; Boiten, J.W.; da Silva Santos, L.B.; Bourne, P.E.; et al. The FAIR Guiding Principles for scientific data management and stewardship. Sci. Data 2016, 3, 160018. [Google Scholar] [CrossRef]
- Kahneman, D.; Sibony, O.; Sunstein, C.R. Noise: A Flaw in Human Judgment; Little, Brown Spark: New York, NY, USA, 2021. [Google Scholar]
- Sunstein, C.R. Noisy law: Scaling without a modulus. J. Risk Uncertain. 2025, 70, 17–27. [Google Scholar] [CrossRef]
- Houser, K. Can AI Solve the Diversity Problem in the Tech Industry: Mitigating Noise and Bias in Employment Decision-Making. Stan. Technol. Law Rev. 2019, 22, 290. [Google Scholar]
- Sachdeva, R.; Gakhar, R.; Awasthi, S.; Singh, K.; Pandey, A.; Parihar, A.S. Uncertainty and Noise Aware Decision Making for Autonomous Vehicles: A Bayesian Approach. IEEE Trans. Veh. Technol. 2025, 74, 378–389. [Google Scholar] [CrossRef]
- Richards, B.A.; Lillicrap, T.P. Dendritic solutions to the credit assignment problem. Curr. Opin. Neurobiol. 2019, 54, 28–36. [Google Scholar] [CrossRef] [PubMed]
- Lansdell, B.J.; Prakash, P.R.; Kording, K.P. Learning to solve the credit assignment problem. arXiv 2019, arXiv:1906.00889. [Google Scholar]
- Grefenstette, J.J. Credit assignment in rule discovery systems based on genetic algorithms. Mach. Learn. 1988, 3, 225–245. [Google Scholar] [CrossRef]
- Coeckelbergh, M. Responsibility and the moral phenomenology of using self-driving cars. Appl. Artif. Intell. 2016, 30, 748–757. [Google Scholar] [CrossRef]
- Tsamados, A.; Floridi, L.; Taddeo, M. Human control of AI systems: From supervision to teaming. AI Ethics 2025, 5, 1535–1548. [Google Scholar] [CrossRef] [PubMed]
- Novelli, C.; Taddeo, M.; Floridi, L. Accountability in artificial intelligence: What it is and how it works. AI Soc. 2024, 39, 1871–1882. [Google Scholar] [CrossRef]
- Buiten, M.C. Product liability for defective AI. Eur. J. Law Econ. 2024, 57, 239–273. [Google Scholar] [CrossRef]
- Chopra, S.; White, L.F. A Legal Theory for Autonomous Artificial Agents; University of Michigan Press: Ann Arbor, MI, USA, 2011. [Google Scholar]
- Moret, A. AI welfare risks. Philos. Stud. 2025. [Google Scholar] [CrossRef]
- Anthropic. Exploring Model Welfare. Available online: https://www.anthropic.com/research/exploring-model-welfare (accessed on 10 October 2025).
- Floridi, L.; Sanders, J. On the Morality of Artificial Agents. Minds Mach. 2004, 14, 349–379. [Google Scholar] [CrossRef]
- Caplan, R.; Donovan, J.; Hanson, L.; Matthews, J. Algorithmic Accountability: A Primer; Data and Society Research Institute: New York, NY, USA, 2018. [Google Scholar]
- Floridi, L. Faultless responsibility: On the nature and allocation of moral responsibility for distributed moral actions. Philos. Trans. R. Soc. A 2016, 374, 20160112. [Google Scholar] [CrossRef]
- Chalmers, D.J. The singularity: A philosophical analysis. In Science Fiction and Philosophy: From Time Travel to Superintelligence; Wiley-Blackwell: Hoboken, NJ, USA, 2016; pp. 171–224. [Google Scholar] [CrossRef]
- Long, R.; Sebo, J.; Butlin, P.; Finlinson, K.; Fish, K.; Harding, J.; Pfau, J.; Sims, T.; Birch, J.; Chalmers, D. Taking AI Welfare Seriously. arXiv 2024, arXiv:2411.00986. [Google Scholar] [CrossRef]
- Ziesche, S.; Roman, Y. Towards AI welfare science and policies. Big Data Cogn. Comput. 2018, 3, 2. [Google Scholar] [CrossRef]
- Roose, K. We Need to Talk About How Good A.I. Is Getting. The New York Times, 24 August 2022. Available online: https://www.nytimes.com/2022/08/24/technology/ai-technology-progress.html (accessed on 10 October 2025).
- TechBuzz.Ai. Claude AI Gets ‘Hang Up’ Button for Abusive Users. Available online: https://www.techbuzz.ai/articles/claude-ai-gets-hang-up-button-for-abusive-users (accessed on 10 October 2025).
- Bryson, J.J. Robots should be slaves. In Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues; John Benjamins: Amsterdam, The Netherlands, 2010; pp. 63–74. [Google Scholar] [CrossRef]
- Čapek, K. R.U.R. (Rossum’s Universal Robots); Penguin: London, UK, 2004. [Google Scholar]
- Pflanzer, M.; Traylor, Z.; Lyons, J.B.; Dubljević, V.; Nam, C.S. Ethics in human–AI teaming: Principles and perspectives. AI Ethics 2023, 3, 917–935. [Google Scholar] [CrossRef]
- Asimov, I. I, Robot; Gnome Press: New York, NY, USA, 1950. [Google Scholar]
- Wiener, N. The Human Use of Human Beings: Cybernetics and Society; Grand Central Publishing: New York, NY, USA, 1988. [Google Scholar]
- Bynum, T.W. Norbert Wiener and the rise of information ethics. In Information Technology and Moral Philosophy; Rowman & Littlefield: Lanham, MD, USA, 2008; pp. 8–25. [Google Scholar]
- Bynum, T.W. Milestones in the history of information and computer ethics. In The Handbook of Information and Computer Ethics; Wiley: Hoboken, NJ, USA, 2008; pp. 25–48. [Google Scholar] [CrossRef]
- Weizenbaum, J. Computer Power and Human Reason: From Judgment to Calculation; W.H. Freeman and Co.: San Francisco, CA, USA, 1976. [Google Scholar]
- Maner, W. Is Computer Ethics Unique? Etica & Politica/Ethics & Politics 1999, 1, 2. [Google Scholar]
- Beauchamp, T.L. The Belmont Report. In The Oxford Textbook of Clinical Research Ethics; Oxford University Press: New York, NY, USA, 2008; pp. 149–155. [Google Scholar]
- Beauchamp, T.L.; Childress, J.F. Principles of Biomedical Ethics, 8th ed.; Oxford University Press: New York, NY, USA, 2019. [Google Scholar]
- Moor, J.H. Is ethics computable? Metaphilosophy 1995, 26, 1–21. [Google Scholar] [CrossRef]
- Moor, J.H. The nature, importance, and difficulty of machine ethics. IEEE Intell. Syst. 2006, 21, 18–21. [Google Scholar] [CrossRef]
- Moor, J.H. Are there decisions computers should never make? In Computer Ethics; Routledge: New York, NY, USA, 2017; pp. 395–407. [Google Scholar]
- Clouser, K.D.; Gert, B. A critique of principlism. J. Med. Philos. 1990, 15, 219–236. [Google Scholar] [CrossRef]
- Ross, W.D. The Right and the Good; Clarendon Press: Oxford, UK, 1930. [Google Scholar]
- Anderson, M.; Anderson, S.L.; Gounaris, A.; Kosteletos, G. Towards moral machines: A discussion with Michael Anderson and Susan Leigh Anderson. Conatus 2021, 6, 177–202. [Google Scholar] [CrossRef]
- Anderson, M.; Anderson, S.L. (Eds.) Machine Ethics; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar] [CrossRef]
- Wallach, W.; Allen, C. Moral Machines: Teaching Robots Right from Wrong; Oxford University Press: New York, NY, USA, 2008. [Google Scholar]
- Dubljević, V.; Racine, E. The ADC of moral judgment: Opening the black box of moral intuitions with heuristics about agents, deeds, and consequences. AJOB Neurosci. 2014, 5, 3–20. [Google Scholar] [CrossRef]
- Telkamp, J.B.; Anderson, M.H. The implications of diverse human moral foundations for assessing the ethicality of artificial intelligence. J. Bus. Ethics 2022, 178, 961–976. [Google Scholar] [CrossRef]
- Gros, C.; Kester, L.; Martens, M.; Werkhoven, P. Addressing ethical challenges in automated vehicles: Bridging the gap with hybrid AI and augmented utilitarianism. AI Ethics 2025, 5, 2757–2770. [Google Scholar] [CrossRef]
- Floridi, L.; Cowls, J. A unified framework of five principles for AI in society. In Machine Learning and the City: Applications in Architecture and Urban Design; Springer: Cham, Switzerland, 2022; pp. 535–545. [Google Scholar] [CrossRef]
- Cortese, J.F.N.B.; Cozman, F.G.; Lucca-Silveira, M.P.; Bechara, A.F. Should explainability be a fifth ethical principle in AI ethics? AI Ethics 2023, 3, 123–134. [Google Scholar] [CrossRef]
- Adams, J. Defending explicability as a principle for the ethics of artificial intelligence in medicine. Med. Health Care Philos. 2023, 26, 615–623. [Google Scholar] [CrossRef]
- Dubljević, V. Toward Implementing the ADC Model of Moral Judgment in Autonomous Vehicles. Sci. Eng. Ethics 2020, 26, 2461–2472. [Google Scholar] [CrossRef]
- Cecchini, D.; Dubljević, V. Moral complexity in traffic: Advancing the ADC model for automated driving systems. Sci. Eng. Ethics 2025, 31, 5. [Google Scholar] [CrossRef]
- Białek, M.; Terbeck, S.; Handley, S. Cognitive psychological support for the ADC model of moral judgment. AJOB Neurosci. 2014, 5, 21–23. [Google Scholar] [CrossRef]
- Pflanzer, M.; Cecchini, D.; Cacace, S.; Dubljević, V. Morality on the road: The ADC model in low-stakes traffic vignettes. Front. Psychol. 2025, 16, 1508763. [Google Scholar] [CrossRef] [PubMed]
- Shussett, D.; Dubljević, V. Applying the Agent–Deed–Consequence (ADC) Model to Smart City Ethics. Algorithms 2025, 18, 625. [Google Scholar] [CrossRef]
- Noble, S.M.; Dubljević, V. Ethics of AI in Organizations. In Human-Centered Artificial Intelligence; Nam, C.S., Jung, J.-Y., Lee, S., Eds.; Academic Press: Cambridge, MA, USA, 2022; pp. 221–239. [Google Scholar] [CrossRef]
- Morandín-Ahuerma, F. Twenty-Three Asilomar Principles for Artificial Intelligence and the Future of Life. OSF Preprints 2023. doi:10.31219/osf.io/dgnq8. Available online: https://osf.io/preprints/osf/dgnq8_v1 (accessed on 10 November 2025).
- Hurlbut, J.B. Taking responsibility: Asilomar and its legacy. Science 2025, 387, 468–472. [Google Scholar] [CrossRef]
- Shahriari, K.; Shahriari, M. IEEE standard review—Ethically aligned design: A vision for prioritizing human wellbeing with artificial intelligence and autonomous systems. In Proceedings of the 2017 IEEE Canada International Humanitarian Technology Conference (IHTC), Toronto, ON, Canada, 21–22 July 2017. [Google Scholar] [CrossRef]
- How, J.P. Ethically aligned design [From the Editor]. IEEE Control Syst. Mag. 2018, 38, 3–4. [Google Scholar] [CrossRef]
- Morandín-Ahuerma, F. Ten UNESCO Recommendations on the Ethics of Artificial Intelligence. OSF Preprints 2023. doi:10.31219/osf.io/csyux. Available online: https://osf.io/preprints/osf/csyux_v1 (accessed on 10 November 2025).
- Bentham, J. An Introduction to the Principles of Morals and Legislation; Clarendon Press: Oxford, UK, 1996. [Google Scholar]
- Mill, J.S. Utilitarianism; Oxford University Press: Oxford, UK, 1998. [Google Scholar]
- Sidgwick, H. The Methods of Ethics, 7th ed.; Hackett: Indianapolis, IN, USA, 1981. [Google Scholar]
- Prinz, D. Robot chess. In Computer Chess Compendium; Springer: New York, NY, USA, 1988; pp. 213–219. [Google Scholar]
- Ferreira, F.G.; Gandomi, A.H.; Cardoso, R.T.N. Artificial intelligence applied to stock market trading: A review. IEEE Access 2021, 9, 30898–30917. [Google Scholar] [CrossRef]
- Brink, D. The separateness of persons, distributive norms, and moral theory. In Value, Welfare, and Morality; Cambridge University Press: Cambridge, UK, 1993; pp. 252–289. [Google Scholar]
- Kant, I. Groundwork of the Metaphysics of Morals; Timmermann, J., Ed.; Gregor, M., Translator; Cambridge University Press: Cambridge, UK, 2012. [Google Scholar]
- Ulgen, O. Kantian Ethics in the Age of Artificial Intelligence and Robotics. Quest. Int. Law 2017, 43, 59–83. Available online: http://www.qil-qdi.org/wp-content/uploads/2017/10/04_AWS_Ulgen_FIN.pdf (accessed on 10 November 2025).
- Hanna, R.; Kazim, E. Philosophical foundations for digital ethics and AI Ethics: A dignitarian approach. AI Ethics 2021, 1, 405–423. [Google Scholar] [CrossRef]
- Hoey, I. The AI clue that helped solve the Pacific Palisades fire case. Int. Fire Saf. J. 2025. Available online: https://internationalfireandsafetyjournal.com/pacific-palisades-fire-ai (accessed on 10 November 2025).
- Scanlon, T.M. What We Owe to Each Other; Harvard University Press: Cambridge, MA, USA, 1998. [Google Scholar]
- Rawls, J. A Theory of Justice; Revised Edition; Harvard University Press: Cambridge, MA, USA, 1999. [Google Scholar]
- Dalmasso, G.; Marcos-Vidal, L.; Pretus, C. Modelling Moral Decision-Making in a Contractualist Artificial Agent. In International Workshop on Value Engineering in AI; Springer: Cham, Switzerland, 2024. [Google Scholar] [CrossRef]
- Cummiskey, D. Dignity, contractualism and consequentialism. Utilitas 2008, 20, 383–408. [Google Scholar] [CrossRef]
- Hadfield-Menell, D.; Hadfield, G.K. Incomplete contracting and AI alignment. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, Honolulu, HI, USA, 27–28 January 2019; ACM: New York, NY, USA, 2019. [Google Scholar] [CrossRef]
- Jedličková, A. Ethical approaches in designing autonomous and intelligent systems: A comprehensive survey towards responsible development. AI Soc. 2025, 40, 2703–2716. [Google Scholar] [CrossRef]
- European Commission. Ethics Guidelines for Trustworthy AI. 2019. Available online: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (accessed on 10 November 2025).
- Price, R. A Review of the Principal Questions in Morals. In The British Moralists 1650–1800; Raphael, D.D., Ed.; Clarendon Press: Oxford, UK, 1969; Volume II, pp. 131–198. [Google Scholar]
- Donagan, A. Sidgwick and Whewellian Intuitionism: Some Enigmas. Can. J. Philos. 1977, 7, 447–465. [Google Scholar] [CrossRef]
- Moore, G.E. Principia Ethica; Baldwin, T., Ed.; Cambridge University Press: Cambridge, UK, 1993. [Google Scholar]
- Ross, W.D. The Foundations of Ethics; Clarendon Press: Oxford, UK, 1939. [Google Scholar]
- Amodei, D.; Olah, C.; Steinhardt, J.; Christiano, P.; Schulman, J.; Mané, D. Concrete problems in AI safety. arXiv 2016, arXiv:1606.06565. [Google Scholar] [CrossRef]
| | Deontological Monism | Deontological Pluralism | Consequentialism | Virtue Ethics | Contractualism | Agent-Deed-Consequence |
|---|---|---|---|---|---|---|
| Value Setting | Simple | Simple/Complex | Simple | Complex | Complex | Complex |
| Value Implementation | Simple | Complex | Simple | Complex | Simple | Simple/Complex |
| Breadth | Narrow | Wide | Narrow | Narrow/Wide | Narrow/Wide | Very Wide |
| Rigidity | High | Low | High | Low | Low/High | Low |
| Date—Theorist | Theory Type | Key Theory Elements |
|---|---|---|
| 1950—Asimov | Deontological Monism | Rule-based AI ethics [78]. |
| 1955—Wiener | Consequentialist Monism | AI for social good, broadly utilitarian [79,80]. |
| 1976—Weizenbaum | Deontological Monism | Duties predicated on autonomy and justice pertaining to AI use [81,82]. |
| 1976—Maner | Consequentialist Monism | Classical utilitarianism [83]. |
| 1979—Beauchamp & Childress | Deontological Pluralism | Four non-absolute principles [84,85]. |
| 1979—Moor | Hybrid | “Just consequentialism”, consequentialism after justice constraints [86,87,88]. |
| 1990—Clouser & Gert | Anti-Pluralism | Critiquing pluralism in applied ethics, implications for AI ethics [89]. |
| 2005—Anderson & Anderson | Deontological Pluralism | Rossian AI ethics duties, operationalized in a machine-readable format [90,91,92]. |
| 2008—Wallach & Allen | Hybrid Pluralism | Implementable pluralism for moral machines using abductive reasoning [93]. |
| 2014—Dubljević & Racine | Hybrid Pluralism | ADC meta-pluralism about normative sources [94]. |
| 2022—Telkamp & Anderson | Hybrid Pluralism | Moral Foundations Theory, six irreducible descriptive normative sources [95]. |
| 2024—Gros, Kester, Martens, and Werkhoven | Consequentialist Pluralism | Augmented utilitarian principles [96]. |
| Theory | Features | Pros | Cons |
|---|---|---|---|
| Monistic Consequentialism | Maximize aggregate well-being. | Calculable, practicable, outcome-focused. | Can overlook rights, justice, and the separateness of persons. |
| Monistic Deontology | Adhere to moral rules; respect autonomy and dignity regardless of outcomes. | Protects rights; clear constraints; prevents misuse. | Can be rigid; may ignore beneficial outcomes. |
| Contractualism | AI should respect agreed-upon standards of fairness and rights, and benefit the least advantaged. | Based on consent. Promotes fairness; protects liberties; inclusive. | Complex to operationalize in technical systems. |
| Pluralistic Deontology | Balance multiple prima facie duties (justice, beneficence, fidelity, etc.). | Flexible; realistic; respects competing values. | Indeterminate in hard cases; requires judgment or ‘self-exertion’. |
| Agent-Deed-Consequence | Evaluate AI systems and decisions via intent, the deed, and consequences. | Integrates multiple theories; mirrors human moral reasoning. | Complex to operationalize in technical systems. |
| Moral Foundations Theory | Evaluate AI systems and decisions via six irreducible moral foundations. | Cross-cultural, flexible, descriptively mirrors human moral discourse. | Descriptive, not prescriptive. Certain variables (e.g., purity) are opaque. |
| Augmented Utilitarianism | Evaluate AI systems and decisions via broadened consequentialism. | Captures a wide range of harms; seeks to unify them in a single goal function. | Rendering different harms commensurable as a single metric is difficult. |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Harris, J.; Dubljević, V. Navigating the Ethics of Artificial Intelligence. Encyclopedia 2025, 5, 201. https://doi.org/10.3390/encyclopedia5040201