Proceeding Paper

Axiology and the Evolution of Ethics in the Age of AI: Integrating Ethical Theories via Multiple-Criteria Decision Analysis †

Fei Sun, Damir Isovic and Gordana Dodig-Crnkovic

1 Division of Computer Science and Software Engineering, School of Innovation, Design and Engineering, Mälardalen University, 721 23 Västerås, Sweden
2 School of Innovation, Design and Engineering, Mälardalen University, 721 23 Västerås, Sweden
3 Department of Computer Science and Engineering, Chalmers University of Technology and University of Gothenburg, 412 96 Gothenburg, Sweden
* Author to whom correspondence should be addressed.
Presented at the 1st International Online Conference of the Journal Philosophies, 10–14 June 2025; Available online: https://sciforum.net/event/IOCPh2025.
Proceedings 2025, 126(1), 17; https://doi.org/10.3390/proceedings2025126017
Published: 6 November 2025
(This article belongs to the Proceedings of The 1st International Online Conference of the Journal Philosophies)

Abstract

The rapid advancement of artificial intelligence (AI) presents ethical challenges that exceed the scope of traditional moral theories. This paper proposes a value-centered framework for AI ethics grounded in axiology, which distinguishes intrinsic values like dignity and fairness from instrumental ones such as accuracy and efficiency. This distinction supports ethical pluralism and contextual sensitivity. Using Multi-Criteria Decision Analysis (MCDA), the framework translates values into structured evaluations, enabling transparent trade-offs. A healthcare case study illustrates how ethical outcomes vary across physician, patient, and public health perspectives. The results highlight the limitations of single-theory approaches and emphasize the need for adaptable models that reflect diverse stakeholder values. By linking philosophical inquiry with governance initiatives like Responsible AI and Digital Humanism, the framework offers actionable design criteria for inclusive and context-aware AI development.

1. Introduction

Artificial intelligence (AI) is advancing at a pace that consistently outstrips the development of the corresponding ethical frameworks. As AI systems increasingly influence decisions in various key sectors such as healthcare, finance, criminal justice, and education, their rapid and widespread deployment continues to expose complex ethical challenges [1]. These challenges require adaptive and evolving mechanisms for ethical and regulatory oversight [2]. The urgency of this mismatch has intensified as AI models have become more autonomous, raising critical questions about responsibility, fairness, and legitimacy.
These tensions are not merely theoretical, but manifest in concrete trade-offs. In healthcare, diagnostic AI tools must balance accuracy against patient privacy, as broader data access can improve accuracy while simultaneously exposing sensitive information [3]. In governance, predictive policing systems may enhance crime prevention, but risk reinforcing systemic bias [4]. Similar dilemmas arise across domains: Should human autonomy yield to machine control for the sake of efficiency? Should fairness be sacrificed for speed? Should transparency be compromised to protect proprietary algorithms? These conflicts underscore the urgent need for ethical frameworks capable of navigating competing ethical imperatives rather than privileging a single normative dimension.
Traditional ethical theories offer valuable insights but often fall short in addressing the complex dilemmas posed by contemporary AI systems. Deontological ethics provides clear moral rules but struggles when duties conflict, such as transparency versus privacy [5]. Utilitarianism focuses on maximizing overall benefit but may overlook individual rights and minority harms [6]. Virtue ethics emphasizes moral character and human dignity but lacks the operational precision needed for algorithmic implementation [7]. While each approach contributes meaningfully, no single theory can fully address the multidimensional challenges of AI. A more flexible and context-sensitive ethical framework is therefore essential [8].
This paper presents an integrative framework for addressing ethical complexity in AI. The framework is grounded in axiology, the philosophical study of value, and is implemented through Multi-Criteria Decision Analysis (MCDA). Axiology provides a conceptual foundation by distinguishing between intrinsic values and instrumental values, which supports ethical pluralism and enables sensitivity to different contexts. For example, healthcare may prioritize patient safety and privacy, while education may emphasize fairness and transparency. MCDA facilitates structured stakeholder deliberation, transparent trade-offs, and shared responsibility, contributing to inclusive and accountable AI governance [9,10].

2. Theoretical Foundations

The integrative framework begins with a normative foundation grounded in Responsible AI and Digital Humanism, which together emphasize transparency, accountability, human dignity, and democratic agency as essential principles for human-centric AI development.

2.1. Responsible AI: From Principles to Practice

Responsible Artificial Intelligence (RAI) is grounded in principles such as transparency, accountability, fairness, and human oversight, which are widely recognized as essential for building trustworthy systems [11,12]. Regulatory instruments like the European Union’s AI Act seek to operationalize these principles by classifying AI systems according to risk levels and imposing governance mechanisms, such as conformity assessments and documentation requirements, for high-risk applications [13]. However, ethical guidelines often remain abstract and offer limited practical direction when navigating tensions between competing values [14]. Dignum similarly critiques the gap between ethical ideals and implementation, advocating for “ethics by design” approaches that embed moral reasoning into AI systems and development processes [12]. This principle-to-practice gap underscores the need for context-sensitive decision-making frameworks that can reconcile trade-offs and ensure that AI systems reflect societal, legal, and moral values in concrete, actionable ways.

2.2. Digital Humanism: A Philosophical Foundation

Digital Humanism has emerged as a normative framework that challenges the dominance of technocratic paradigms in AI governance. It emphasizes that technological development must uphold human dignity, autonomy, and democratic participation, rather than reducing individuals to quantifiable data [15]. As the Vienna Manifesto on Digital Humanism (2019) [16] advocated, it promotes interdisciplinary and participatory governance models to ensure AI aligns with fundamental rights and democratic values. By confronting the “neutrality myth”, the belief that algorithms function independently of cultural and societal contexts, Digital Humanism reframes AI ethics as part of a broader struggle for legitimacy, accountability, and public trust [17].

2.3. Axiology and Ethical Pluralism

While Responsible AI provides a governance framework and Digital Humanism defines the ethical goals of technological development, axiology offers a flexible foundation for ethical pluralism. Unlike single-theory approaches, such as deontology, utilitarianism, or virtue ethics, axiology distinguishes between the following:
  • Intrinsic Values: Dignity, fairness, and autonomy—valued for their own sake.
  • Instrumental Values: Accuracy, efficiency, scalability—valued as means to an end.
This distinction supports ethical pluralism, which recognizes that multiple, sometimes conflicting, values can coexist and must be balanced rather than subordinated to a single dominant principle. Ethical pluralism is especially relevant in AI ethics, where value trade-offs are inevitable and context-dependent. Recent scholarship emphasizes the importance of value-sensitive and context-aware approaches, particularly in domains like healthcare, education, and justice, where normative tensions reflect sector-specific priorities.

2.4. Multi-Criteria Decision Analysis (MCDA) in Ethical AI

Multi-Criteria Decision Analysis is increasingly recognized as a valuable method for embedding ethical reasoning into AI design and governance [18]. Rather than automating moral judgment, MCDA facilitates structured deliberation by enabling stakeholders to weigh competing criteria and evaluate alternatives transparently. This approach makes value trade-offs explicit, countering the opacity often associated with algorithmic decision-making. The inclusion of ethical aspects in MCDA has been shown to support inclusive and reflective decision-making within complex socio-technical systems [18]. Applications of MCDA span domains such as healthcare, environmental policy, and algorithmic governance, where it has been used to balance tensions between privacy and accuracy or fairness and efficiency [19,20]. Crucially, MCDA supports ethical pluralism by structuring participatory processes that reflect diverse stakeholder values and sector-specific demands.

3. Integrating the Axiology–MCDA Framework

This section presents the framework (Figure 1) that operationalizes the theoretical foundations outlined in Section 2 into a structured tool for ethical decision-making in AI governance. It consists of four interlinked components: a normative foundation, value classification, an operational method, and an ethical outcome.

3.1. Normative Foundation

The Integrated Axiology–MCDA Framework brings together four complementary components: Responsible AI, Digital Humanism, axiology, and Multi-Criteria Decision Analysis (MCDA). These elements collectively bridge the gap between abstract ethical principles and practical implementation.
Responsible AI ensures alignment with governance and regulatory standards, while Digital Humanism anchors AI systems in human dignity, autonomy, and democratic values. Axiology provides the philosophical basis for distinguishing and balancing intrinsic and instrumental values. MCDA operationalizes these considerations through a structured, transparent decision-making process. Together, these foundations establish the normative orientation of the framework and guide its ethical reasoning across diverse application domains.

3.2. Value Classification

Values are organized into intrinsic and instrumental categories, as previously discussed. This classification not only clarifies the ethical relevance of different design choices but also supports the identification of context-specific criteria. By distinguishing what ought to be preserved (e.g., dignity, fairness) from what serves functional goals (e.g., accuracy, efficiency), the framework enables structured reasoning about trade-offs, helping stakeholders navigate complex ethical dilemmas in applied settings.
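As a concrete illustration (ours, not part of the published framework), the value classification could be encoded as a small data structure that later feeds the MCDA criteria-selection step. The Python sketch below is a minimal example under that assumption; the criteria names echo Section 2.3, and the field names are illustrative.

```python
from dataclasses import dataclass
from enum import Enum

class ValueKind(Enum):
    INTRINSIC = "intrinsic"        # valued for its own sake (e.g., dignity)
    INSTRUMENTAL = "instrumental"  # valued as a means to an end (e.g., accuracy)

@dataclass(frozen=True)
class Criterion:
    name: str
    kind: ValueKind

# Illustrative criteria, following the classification in Section 2.3
CRITERIA = [
    Criterion("dignity", ValueKind.INTRINSIC),
    Criterion("fairness", ValueKind.INTRINSIC),
    Criterion("autonomy", ValueKind.INTRINSIC),
    Criterion("accuracy", ValueKind.INSTRUMENTAL),
    Criterion("efficiency", ValueKind.INSTRUMENTAL),
]
```

Tagging each criterion with its kind keeps the intrinsic/instrumental distinction visible when trade-offs are later weighed, rather than flattening all criteria into an undifferentiated list.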
Figure 1. The Integrated Axiology–MCDA Framework for ethical AI decision-making.

3.3. From Values to Action: MCDA Operational Method

Axiology defines what matters; MCDA provides a workflow to operationalize values in decision-making and enables structured evaluation of alternatives across multiple, potentially conflicting criteria [21]. The process follows four steps:
  • Select Criteria: Stakeholders collaboratively identify relevant ethical values (e.g., accuracy, privacy, fairness, dignity).
  • Score Alternatives: Each AI system is rated on a common scale (e.g., 1–4) for each criterion.
  • Assign Weights: Stakeholders assign relative importance to each criterion based on context.
  • Calculate Integrated Score: Mathematical aggregation is used to combine weighted performance scores across all ethical criteria.
The ethical score is calculated using a weighted sum model commonly used in MCDA [22]:
S(a) = \sum_{i=1}^{n} w_i \cdot s_i(a)
where S(a) represents the overall ethical score for alternative a, w_i denotes the contextual weight assigned to criterion i, s_i(a) indicates the performance score of alternative a on criterion i, and n is the total number of ethical criteria under consideration.
This mathematical formulation does not attempt to “solve” ethics through computation but rather creates a transparent structure that makes value trade-offs visible and subject to democratic deliberation. The strength of this approach lies in surfacing ethical tensions systematically, enabling stakeholders to understand how different priority assignments lead to different outcomes.
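For readers who prefer code to notation, here is a minimal Python sketch of the weighted sum model above. It is our illustration rather than tooling from the paper; the example weights match the emergency-department row of Table 1, and the per-criterion ratings are assumptions consistent with that row's aggregate score.

```python
def ethical_score(weights: dict[str, float], scores: dict[str, float]) -> float:
    """Weighted sum model: S(a) = sum over i of w_i * s_i(a)."""
    assert weights.keys() == scores.keys(), "weights and scores must cover the same criteria"
    return sum(w * scores[criterion] for criterion, w in weights.items())

# Hypothetical emergency-care weighting (Acc/Priv = 0.7/0.3) and assumed 1-4
# ratings for System A; only the aggregate (3.4) is reported in the paper.
weights = {"accuracy": 0.7, "privacy": 0.3}
system_a = {"accuracy": 4.0, "privacy": 2.0}
print(ethical_score(weights, system_a))  # 0.7*4.0 + 0.3*2.0 = 3.4
```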

3.4. Ethical Outcome

The result is a context-sensitive ethical decision that integrates normative principles with practical considerations. By making value trade-offs explicit and traceable, the framework enhances transparency and accountability. Through multi-stakeholder deliberation and adaptation to sector-specific priorities, it supports decisions that are ethically informed and socially legitimate. Rather than offering definitive solutions, the approach facilitates ongoing reflection and alignment with democratic values in dynamic environments.

4. Illustrative Scenario: Ethical Evaluation of AI Diagnostics in Healthcare

This section presents a conceptual illustration of the Integrated Axiology–MCDA Framework through a hypothetical scenario involving the selection of AI diagnostic systems in a hospital setting. While conceptual in nature, the example highlights how ethical decision-making can vary across contexts and stakeholder perspectives, showcasing the framework’s relevance for healthcare governance.

4.1. Scenario and System Alternatives

A hospital is evaluating two AI diagnostic systems:
  • System A: This system is optimized for diagnostic accuracy, using extensive patient data (e.g., genomics, family history) to enhance precision.
  • System B: This system is designed for privacy, using consent-driven data minimization and differential privacy techniques to protect patient information.
The ethical evaluation is shaped by three key stakeholder groups, each influencing the weighting of ethical criteria in different contexts:
  • Physicians prioritize accuracy to support clinical effectiveness, especially in high-stakes environments such as emergency care.
  • Patient advocates emphasize privacy, autonomy, and informed consent, reflecting broader societal concerns about data protection and patient dignity.
  • Public health officials seek a balanced approach, valuing both clinical performance and community trust, particularly in population-level health initiatives.

4.2. Applying the MCDA Framework

Using the four-step MCDA process introduced in Section 3.3 (criteria selection, scoring, weighting, and aggregation), the systems were evaluated in three healthcare contexts: emergency care, outpatient care, and community health.
The ethical scores for each system were calculated from stakeholder-defined weights for accuracy (Acc) and privacy (Priv). The results are summarized in Table 1 and reproduced in the code sketch at the end of this subsection.
Table 1. Ethical evaluation of AI diagnostic systems using MCDA.

Scenario                      Weights (Acc/Priv)    System A Score    System B Score    Preferred System
Routine Outpatient Clinic     0.5/0.5               3.0               3.5               B
Emergency Department          0.7/0.3               3.4               3.3               A
Community Health              0.3/0.7               2.6               3.7               B
This table illustrates how ethical preferences shift across clinical contexts.
  • In routine outpatient care, System B is clearly preferred, reflecting the importance of patient autonomy, informed consent, and data minimization in routine clinical interactions.
  • In emergency care, System A slightly outperforms System B, demonstrating that while diagnostic accuracy is paramount, privacy remains ethically relevant even under clinical urgency.
  • In community health, System B significantly outperforms System A, emphasizing the importance of trust, privacy, and social accountability in public health initiatives targeting vulnerable populations.
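The following sketch (ours) reproduces Table 1 by sweeping the three scenario weightings over both systems. The per-criterion ratings are not reported in the paper; they are back-derived here from the table's aggregate scores and should be read as assumptions.

```python
# Per-criterion ratings (1-4 scale) back-derived from Table 1's aggregates;
# these values are assumptions, not figures reported in the paper.
SYSTEMS = {
    "A": {"accuracy": 4.0, "privacy": 2.0},  # accuracy-optimized
    "B": {"accuracy": 3.0, "privacy": 4.0},  # privacy-preserving
}
# Scenario weights (Acc/Priv) as given in Table 1
SCENARIOS = {
    "Routine Outpatient Clinic": {"accuracy": 0.5, "privacy": 0.5},
    "Emergency Department":      {"accuracy": 0.7, "privacy": 0.3},
    "Community Health":          {"accuracy": 0.3, "privacy": 0.7},
}

for scenario, weights in SCENARIOS.items():
    totals = {
        name: sum(weights[c] * ratings[c] for c in weights)
        for name, ratings in SYSTEMS.items()
    }
    preferred = max(totals, key=totals.get)
    print(f"{scenario}: A={totals['A']:.1f}, B={totals['B']:.1f} -> System {preferred}")
```

Running the sweep yields the same preferences as Table 1 (B, A, B), which makes plain how a change in weights alone, with identical system ratings, flips the ethical recommendation.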

4.3. Insights

This case demonstrates that there is no single “right” choice. The ethical results depend on context-specific values and stakeholder priorities. The MCDA process ensures that these trade-offs are made explicit, rather than embedded invisibly in technical design decisions. By grounding deliberation in axiology, institutions can balance intrinsic values (e.g., dignity, fairness) with instrumental values (e.g., efficiency, accuracy), fostering more transparent and legitimate AI governance.

5. Conclusions

The strengths of the framework include its ability to clarify ethical tensions, incorporate diverse stakeholder viewpoints, and balance intrinsic values with instrumental values [14,18]. However, challenges remain. Weighting values can be subjective, and context sensitivity may hinder cross-domain comparisons. Although simple MCDA methods are accessible, more advanced techniques such as the Analytic Hierarchy Process (AHP) or ELECTRE (ELimination Et Choix Traduisant la REalité) could offer deeper insights but may reduce transparency [23].
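To hint at what such an advanced method adds, the sketch below derives criterion weights with the AHP's row geometric-mean approximation of the principal eigenvector from a pairwise comparison matrix. The matrix entries are invented for illustration and are not drawn from the paper.

```python
import math

# Pairwise comparison matrix on Saaty's scale: M[i][j] expresses how much more
# important criterion i is than criterion j (entries here are invented).
criteria = ["accuracy", "privacy", "fairness"]
M = [
    [1.0, 2.0, 1.0],  # accuracy: twice as important as privacy
    [0.5, 1.0, 0.5],  # privacy
    [1.0, 2.0, 1.0],  # fairness: on par with accuracy
]

# Approximate the principal eigenvector by normalized row geometric means.
geo_means = [math.prod(row) ** (1.0 / len(row)) for row in M]
weights = [g / sum(geo_means) for g in geo_means]
print(dict(zip(criteria, [round(w, 3) for w in weights])))
# -> {'accuracy': 0.4, 'privacy': 0.2, 'fairness': 0.4}
```

Compared with directly assigned weights, pairwise comparisons make each judgment auditable, but as noted above, the extra machinery can make the overall process harder for stakeholders to follow.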
Future research should apply this framework in other domains, such as education, finance, and criminal justice, to test its robustness in different ethical tensions and stakeholder groups [24]. Integrating advanced MCDA methods with participatory design and digital humanism could enhance transparency, legitimacy, and democratic accountability in AI governance [25,26].

Author Contributions

Conceptualization, G.D.-C., F.S. and D.I.; methodology, F.S.; formal analysis, F.S.; investigation, F.S.; writing—original draft preparation, F.S.; writing—review and editing, D.I. and G.D.-C.; supervision, D.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kakarala, M.R.K.; Rongali, S.K. Existing Challenges in Ethical AI: Addressing Algorithmic Bias, Transparency, Accountability and Regulatory Compliance. World J. Adv. Res. Rev. 2025, 25, 549–554.
  2. AlJadaan, O.T.; Zaidi, H.; Al Faress, M.Y.; Jabas, A.O. Ethics in AI and Computation in Automated Decision-Making. In Enhancing Automated Decision-Making Through AI; Hai-Jew, S., Ed.; IGI Global Scientific Publishing: Hershey, PA, USA, 2025; pp. 397–424.
  3. Williamson, S.M.; Prybutok, V. Balancing Privacy and Progress: A Review of Privacy Challenges, Systemic Oversight, and Patient Perceptions in AI-Driven Healthcare. Appl. Sci. 2024, 14, 675.
  4. Lau, T. Predictive Policing Explained. Available online: https://www.brennancenter.org/our-work/research-reports/predictive-policing-explained (accessed on 30 September 2025).
  5. Alexander, L.; Moore, M. Deontological Ethics. In The Stanford Encyclopedia of Philosophy; Winter 2021 Edition; Zalta, E.N., Ed.; Stanford University: Stanford, CA, USA, 2021.
  6. Crisp, R. Utilitarianism. In The Stanford Encyclopedia of Philosophy; Fall 2017 Edition; Zalta, E.N., Ed.; Stanford University: Stanford, CA, USA, 2017.
  7. Hursthouse, R.; Pettigrove, G. Virtue Ethics. In The Stanford Encyclopedia of Philosophy; Spring 2022 Edition; Zalta, E.N., Ed.; Stanford University: Stanford, CA, USA, 2022.
  8. Hagendorff, T. The Ethics of AI Ethics: An Evaluation of Guidelines. Minds Mach. 2020, 30, 99–120.
  9. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds Mach. 2018, 28, 689–707.
  10. Dubljevic, V.; Yim, M.; Poel, I. Toward a Rational and Ethical Sociotechnical System of Autonomous Vehicles: A Novel Application of Multi-Criteria Decision Analysis. Philos. Technol. 2021, 34, 137–160.
  11. Mittelstadt, B. Principles Alone Cannot Guarantee Ethical AI. Nat. Mach. Intell. 2019, 1, 501–507.
  12. Dignum, V. Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way; Springer: Cham, Switzerland, 2019.
  13. European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act). Official Journal of the European Union, L 2024/1689, 12 July 2024. Available online: https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng (accessed on 30 September 2025).
  14. Whittlestone, J.; Nyrup, R.; Alexandrova, A.; Cave, S. The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, Honolulu, HI, USA, 27–28 January 2019; pp. 195–200.
  15. Nida-Rümelin, J.; Staudacher, K. Philosophical Foundations of Digital Humanism. In Introduction to Digital Humanism; Werthner, H., Ghezzi, C., Kramer, J., Nida-Rümelin, J., Nuseibeh, B., Prem, E., Stanger, A., Eds.; Springer: Cham, Switzerland, 2024.
  16. Vienna Manifesto on Digital Humanism. Vienna, May 2019. Available online: https://dighum.ec.tuwien.ac.at/wp-content/uploads/2019/07/Vienna_Manifesto_on_Digital_Humanism_EN.pdf (accessed on 30 September 2025).
  17. Prem, E. Approaches to Ethical AI. In Introduction to Digital Humanism; Werthner, H., Ghezzi, C., Kramer, J., Nida-Rümelin, J., Nuseibeh, B., Prem, E., Stanger, A., Eds.; Springer: Cham, Switzerland, 2024; pp. 225–239.
  18. Sapienza, G.; Dodig-Crnkovic, G.; Crnkovic, I. Inclusion of Ethical Aspects in Multi-Criteria Decision Analysis. In Proceedings of the 2016 1st International Workshop on Decision Making in Software Architecture (MARCH), Venice, Italy, 5 April 2016; pp. 1–8.
  19. Triantaphyllou, E. Multi-Criteria Decision Making Methods: A Comparative Study; Springer: Boston, MA, USA, 2000.
  20. Tsoukiàs, A. From Decision Theory to Decision Aiding Methodology. Eur. J. Oper. Res. 2008, 187, 138–161.
  21. Belton, V.; Stewart, T.J. Multiple Criteria Decision Analysis: An Integrated Approach; Springer: Boston, MA, USA, 2002.
  22. Keeney, R.L.; Raiffa, H. Decisions with Multiple Objectives: Preferences and Value Trade-Offs; Cambridge University Press: Cambridge, UK, 1993.
  23. Ferrell, O.C.; Harrison, D.E.; Ferrell, L.K.; Ajjan, H.; Hochstein, B.W. A Theoretical Framework to Guide AI Ethical Decision Making. AMS Rev. 2024, 14, 53–67.
  24. Collins, B.X.; Bélisle-Pipon, J.-C.; Evans, B.J.; Ferryman, K.; Jiang, X.; Nebeker, C.; Novak, L.; Roberts, K.; Were, M.; Yin, Z.; et al. Addressing Ethical Issues in Healthcare Artificial Intelligence Using a Lifecycle-Informed Process. JAMIA Open 2024, 7, ooae108.
  25. Badawy, W. Algorithmic Sovereignty and Democratic Resilience: Rethinking AI Governance in the Age of Generative AI. AI Ethics 2025, in press.
  26. Ryan, J. Democracy in the Digital Age: Reclaiming Governance in an Algorithmic World. Toda Peace Institute Policy Brief No. 223. 2025. Available online: https://toda.org/policy-briefs-and-resources/policy-briefs/report-223-full-text.html (accessed on 30 September 2025).

