1. Introduction
The rapid development of artificial intelligence (AI) has reshaped industrial engineering through its essential role in achieving efficiency and competitiveness. However, the growing use of AI systems in industrial support and decision-making processes has brought ethical and trustworthiness concerns to the forefront. Algorithmic bias, lack of transparency, and potential risks to privacy, security, and autonomy all emphasize the need to integrate ethical principles into AI-driven systems. Implementing ethical and trustworthy AI systems in industrial settings is essential because the decisions made by these systems have extensive impacts on workers, customers, and supply chain operations. The integration of ethical AI principles serves two essential purposes: it builds stakeholder trust and prevents adverse effects while ensuring that technological progress matches human and societal values. Despite existing proposals [1,2,3], translating ethical AI principles into industrial practice still faces substantial challenges.
This work explores the existing literature and identifies practical approaches for embedding ethical and trustworthy AI into industrial processes. It addresses the following key research question:
RQ: What approaches can industrial engineers adopt to ensure ethics and trustworthiness in industrial AI applications?
The research evaluates current methods and successful examples to develop an ethical framework for AI systems in industrial engineering that supports sustainable and responsible innovation.
In this context, the European Union advanced its AI regulation efforts in June 2024 with the adoption of the AI Act, which introduces rules for the placing on the market, putting into service, and use of AI systems within the Union [5]. The EU Ethics Guidelines for Trustworthy AI, published by the High-Level Expert Group on AI (HLEG) in 2019, served as the basis for this regulation and established fundamental principles for developing ethical, reliable, and robust AI systems [3,4]. The AI Act transforms many of these principles into binding regulatory standards that apply to high-risk AI systems.
The HLEG guidelines, which promote values such as transparency, human oversight, and technical robustness, provided the ethical framework on which the regulation is based, translating a voluntary vision into a binding legal structure that will ensure trust and security in the use of AI within the European Union [5].
This work is structured as follows. After this introduction, Section 2 gives an overview of the main international guidelines to familiarize readers with the current situation. Section 3 uses a Systematic Literature Review as the research methodology for identifying and analyzing existing works on this topic. Section 4 provides a framework of practical approaches for the ethical integration of AI, Section 5 discusses the findings, and Section 6 concludes with an outlook for the future.
2. Overview of International Guidelines
The following guidelines were identified and selected based on their impact on global AI governance, their broad applicability, and their emphasis on ethical AI development. Each guideline provides valuable insights into how different global actors approach the challenge of governing AI systems in a responsible and trustworthy manner.
2.1. EU Guidelines
The EU Ethics Guidelines for Trustworthy AI were published in 2019 by the High-Level Expert Group on AI (HLEG) under the European Commission and provide an ethical framework focused on the social and moral aspects of AI development [4]. The guidelines take a holistic approach, infusing ethical values into all phases of AI development, with the overarching goal of supporting AI systems that are lawful, ethical, and robust.
The guidelines establish three fundamental requirements for trustworthy AI: complying with existing laws and regulations, respecting ethical principles and values, and maintaining resilience against technical and societal risks of unintended harm. Four essential ethical principles form the core of this vision: (1) AI systems must respect human autonomy, supporting human decision-making while preserving human authority over the technology and preventing any form of manipulation, coercion, or subordination. (2) The design and deployment of AI systems must incorporate mechanisms that reduce the potential for physical, psychological, or societal harm, whether intentional or unintentional. (3) AI systems must operate fairly, preventing discrimination and bias, providing equal access and treatment, and maintaining procedural fairness through mechanisms for contesting and appealing decisions. (4) Explicability and transparency are fundamental requirements for building trust: AI systems must present their decision-making processes clearly so that users can review and, when needed, dispute the generated outcomes.
The HLEG defines seven operational requirements to implement these ethical principles for trustworthy AI: (1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) societal and environmental wellbeing, and (7) accountability [4].
2.2. OECD Guidelines
The Recommendation on Artificial Intelligence, adopted by the OECD Council at Ministerial level in May 2019 and updated in 2024, is one of the first international attempts to establish a global framework for AI governance [6]. The OECD seeks to promote innovation that respects human rights and democratic values while ensuring safety and ethical practices through its principles of transparency and international cooperation. The Recommendation presents five fundamental principles to guide the development, deployment, and use of artificial intelligence.
The OECD promotes the use of AI to achieve inclusive growth and sustainable development, enhance human well-being, and reduce social inequalities. It supports AI technologies that address global challenges, including climate change and environmental protection, and that distribute economic benefits to disadvantaged groups.
According to the Recommendation, AI systems must respect the rule of law, human rights, and democratic values, including fairness and privacy. It places strong emphasis on non-discrimination, data protection, and privacy, and calls for safeguards against disinformation as well as measures protecting labor rights, social justice, and freedom of expression.
Trust in AI systems also relies heavily on transparency and explainability. The OECD promotes AI technologies that provide clear information about their operations, data usage, and decision-making processes. This principle supports individuals' rights both to understand AI decisions and to contest them, while maintaining meaningful human oversight.
AI systems must be robust and secure throughout their entire operational life, including under conditions of misuse. The Recommendation highlights the need for monitoring systems and control mechanisms, along with the ability to handle unexpected security threats or system behavior. Operational stability and resilience against cyber risks are fundamental to responsible AI deployment.
Finally, all actors involved in the AI life cycle must take responsibility and be accountable for both the ethical behavior and the proper functioning of their systems. Organizations must keep data and decisions traceable and maintain ethical business practices and risk management protocols. The OECD promotes joint initiatives to address human rights impacts and algorithmic bias throughout the AI value chain.
2.3. UNESCO Guidelines
UNESCO is a leading organization in the global AI governance landscape through its Recommendation on the Ethics of Artificial Intelligence, adopted in November 2021 [7]. Although the United Nations lacks a single framework to regulate AI, its agencies, including UNESCO, have created guidelines to support ethical AI practices. The UNESCO Recommendation offers a worldwide response to AI ethical issues while complementing other supranational programs such as the European Union and OECD initiatives. The document stands out because it combines universal ethical principles with the promotion of social justice, environmental sustainability, and global cooperation, addressing the deep impacts of AI in the present era. The Recommendation establishes core values that direct national and international policy implementation: respect for human rights and dignity, gender equality, social justice, and ecosystem protection. These values aim to ensure fair AI development and use, providing equal technological access to all communities while reducing adverse effects on vulnerable populations. The Recommendation also establishes essential principles, including the following: proportionality and do no harm; safety and security; fairness and non-discrimination; sustainability; right to privacy and data protection; human oversight and determination; transparency and explainability; responsibility and accountability; awareness and literacy; and multi-stakeholder and adaptive governance and collaboration.
2.4. Limitations of and Gaps in These Guidelines
Although guidelines such as those of the EU, the OECD, and UNESCO offer an important theoretical framework for the development of ethical and reliable AI systems, they have significant limitations in terms of practical application. As pointed out in [8], a significant gap exists between the principles established in these frameworks and their actual implementation. Similarly, [9] notes that these theoretical frameworks are often perceived as too abstract, difficult to interpret concretely, and offering little practical guidance. Given these considerations, it becomes necessary to bridge the gap between theory and practice in the integration of values into AI systems. The next section is devoted to a systematic literature review that explores existing methodologies for supporting the design and validation of ethical AI systems, with a focus on applications in the industrial sector, such as smart factories.
3. Systematic Analysis of the Existing Literature
The aim of this paper is to explore how ethical principles can be implemented in artificial intelligence systems applied to industrial contexts, with a specific focus on the digital twin of a smart ceramics factory. In particular, the review aims to identify the fundamental values for the design of an AI-based digital twin that is trustworthy and ethically sustainable, to identify methodologies already present in the literature that allow these principles to be translated into practical solutions, and to find ways to validate the AI system once developed.
3.1. Research Methodology
This research study is based on a Systematic Literature Review, in line with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) Statement. PRISMA is a tool to ensure that systematic reviews are carried out and reported in a transparent and comprehensive manner, providing a checklist to improve the quality of the selection and synthesis process of the studies included in a systematic review [10].
3.2. Research Aims
The study is guided by three primary research objectives:
The first objective is to identify ethical values and/or principles that must be integrated into the design of AI systems to ensure their trustworthiness and ethical sustainability. To this end, 13 selected articles were examined to identify the essential ethical values and compare them with the relevant European Union guidelines (Table 1).
Among the ethical values derived from the Systematic Literature Review (SLR) are the following: transparency, privacy and data governance, human agency and supervision, technical robustness and safety, diversity, non-discrimination and fairness, social and environmental well-being, and accountability.
For instance, transparency and technical robustness are extensively explored through Explainable AI (XAI) [12]; however, values such as diversity, as well as social and environmental well-being, are scarcely operationalized.
The literature also demonstrates that human participation plays an essential part in AI decision-making processes. Chen et al. [14] demonstrate that Human-in-the-Loop systems function as safeguards because they maintain interpretability while providing essential oversight to prevent potential risks. Ashby [15] proposes Ethical Regulators as a system to monitor and adjust system behavior in real time for ethical compliance. However, the majority of current research remains theoretical, without delivering practical solutions that organizations can use in operational environments.
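To make the Human-in-the-Loop idea concrete, the following minimal sketch shows one possible shape of such a safeguard. It is illustrative only and not drawn from the cited works: the `Proposal` structure, the confidence threshold, and the impact labels are all assumptions introduced for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Proposal:
    action: str        # e.g. "adjust_feed_rate" (hypothetical action name)
    confidence: float  # model confidence in [0, 1]
    impact: str        # "low" | "high": estimated impact on workers/product

def hitl_gate(proposal: Proposal,
              human_review: Callable[[Proposal], bool],
              confidence_threshold: float = 0.9) -> bool:
    """Return True if the proposed action may be executed."""
    # Hard constraint: high-impact actions always require human approval.
    if proposal.impact == "high":
        return human_review(proposal)
    # Soft constraint: low-confidence predictions are escalated as well.
    if proposal.confidence < confidence_threshold:
        return human_review(proposal)
    # Routine, high-confidence, low-impact actions proceed automatically.
    return True

# Example: a conservative operator callback that rejects escalated actions.
approved = hitl_gate(Proposal("adjust_feed_rate", 0.95, "low"),
                     human_review=lambda p: False)
```

The design choice here is that the human reviewer is only consulted for risky or uncertain decisions, which preserves oversight without making every routine action a manual one.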
The second objective is to understand how these ethical values can be integrated into the design of an AI-based digital twin for a smart ceramics factory. To this end, several articles [2,11,12,13,14,15,16] were analyzed in depth to extract methodologies for integrating ethical values into AI-based digital twins that offer adaptable approaches for various industries, including ceramics. Among the methodologies adopted in the analyzed papers are the following: Explainable Artificial Intelligence methods, Human-in-the-Loop, Ethics by Design, the AI Trust and Maturity Model, Ethical Regulators, and Risk Analysis.
Methodologies such as Explainable AI (XAI) [12,16] and Human-in-the-Loop [14] systems have emerged, enhancing decision-making transparency and integrating human oversight into critical decisions. The Ethics by Design approach offers a framework for embedding ethical values early in development. However, these approaches are not directly applied to digital twins in specific industrial contexts, such as the ceramics industry, which presents unique challenges related to sustainability, ethical data management, and operational complexity.
The third objective of the review is to find tools and methodologies that can be used to assess and validate the trustworthiness of AI systems in the post-development phase. For instance, the literature presents models like the AI Trust and Maturity Model developed by Mylrea et al. [11], which works to balance governance, security, and performance across the entire system lifecycle. XAI techniques such as Class Activation Maps and Contrastive Gradient-Based Saliency Maps [16] enhance transparency and help users understand how decisions are made, which in turn builds trust in the system’s reliability.
In addition, XAI-based tools have been used to monitor the accuracy of predictions in digital twins [12], while the AI Maturity Model developed by Cho et al. [21] provides a way to evaluate both the ethical and technical readiness of AI systems. Together, these approaches support ongoing improvement and promote transparency and dependability throughout the operational life of the system.
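The core idea behind the gradient-based saliency techniques cited above can be sketched in a few lines: rank input features by how strongly the model output reacts to small perturbations of each one. The sketch below is a simplified illustration under stated assumptions, not the method of [16]; the toy linear `model` and its weights are invented stand-ins for any black-box predictor.

```python
def model(x):
    # Toy predictor: a weighted sum standing in for a trained network.
    weights = [0.1, 2.0, -0.5]
    return sum(w * xi for w, xi in zip(weights, x))

def saliency(f, x, eps=1e-5):
    """Approximate |df/dx_i| for each input feature via central differences."""
    scores = []
    for i in range(len(x)):
        up = list(x); up[i] += eps
        dn = list(x); dn[i] -= eps
        scores.append(abs(f(up) - f(dn)) / (2 * eps))
    return scores

s = saliency(model, [1.0, 1.0, 1.0])
# Rank features by saliency; feature 1 dominates because its weight
# has the largest magnitude (2.0).
ranked = sorted(range(len(s)), key=lambda i: -s[i])  # → [1, 2, 0]
```

For a real network, the finite differences would be replaced by backpropagated gradients, but the interpretability output, a per-feature relevance ranking, is the same.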
4. Framework for the Ethical and Trustworthy Design of AI Systems in Industry
A systematic approach must be used to ensure ethical and reliable AI system development in industrial environments, integrating ethical standards throughout the entire AI life cycle. According to Mezgár et al. [2], ethical considerations should be incorporated at the beginning of agent-based model implementation. The Responsible AI (RAI) framework enables the transformation of ethical principles into measurable and executable regulations. RAI promotes human-centered AI development through its focus on transparency, explainability, accountability, security, and privacy, which builds trust between users and AI systems.
The “Ethics by Design” methodology was developed to ensure that ethical issues are addressed in the early stages of AI system development, reducing potential risks such as operational failures or unintended consequences that may affect human safety, industrial effectiveness, and environmental sustainability [2].
Among these ethical principles, transparency holds a notably important position, as highlighted in the EU guidelines and thoroughly examined in the recent literature. Kobayashi et al. [12] stress the significance of Explainable AI (XAI) in industrial digital twins, demonstrating that improving the clarity of AI-generated decisions builds user trust and improves predictive maintenance, particularly in estimating the Remaining Useful Life (RUL) of industrial parts. Their study shows how visualization tools and interpretable models give engineers and operators a better understanding of system behavior, helping them detect errors and make better decisions in complex industrial settings. Chamola et al. [18] also describe Explainable AI as a vital approach to enhancing transparency and accountability, particularly in critical domains such as autonomous systems, where fairness and bias reduction are essential. Huang [20] further develops this idea by connecting it to Industry 4.0 and emphasizing the significance of transparency in AI-based network systems, underscoring the need for auditability and traceability to build trust and resilience in complex industrial systems.
The study by Milossi et al. [19] highlights transparency as a vital factor for maintaining accountability in automated decision-making systems, which often operate without adequate human supervision. Huang [20] links AI transparency to Industry 4.0 sustainability through a single evaluation framework that merges social welfare metrics with environmental performance indicators tracking emissions reductions and resource utilization efficiency. Goldman et al. [16] study Explainable AI approaches to enhance trust in complex industrial classification models, noting that transparency is a core ethical requirement for AI systems. The importance of post-development validation has also become more widely recognized because it ensures the long-term dependability of AI-powered systems. Kobayashi et al. [12] propose a method that integrates explainability tools into the continuous monitoring of Remaining Useful Life (RUL) forecasts, enabling real-time assessment of predictions against actual operational results. This validation framework serves two purposes: it detects operational discrepancies and ensures that digital twin systems remain trustworthy and transparent throughout their operational period. Moreover, their methodology incorporates dynamic validation, enabling AI models to actively adapt to shifting operational conditions and promoting continuous verification of system consistency and compliance.
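A monitoring loop of this kind can be sketched as follows. This is an illustrative sketch, not the implementation from [12]: the sliding-window size, the mean-absolute-error criterion, and the tolerance value are all assumptions chosen for the example.

```python
from collections import deque

class RULValidator:
    """Post-development validation sketch: compare RUL predictions with
    observed outcomes and flag drift when the mean absolute error over a
    sliding window exceeds a tolerance, prompting review or retraining."""

    def __init__(self, window: int = 5, tolerance: float = 10.0):
        self.errors = deque(maxlen=window)  # most recent |predicted - actual|
        self.tolerance = tolerance

    def observe(self, predicted_rul: float, actual_rul: float) -> bool:
        """Record one prediction/outcome pair; return True if drift is flagged."""
        self.errors.append(abs(predicted_rul - actual_rul))
        mae = sum(self.errors) / len(self.errors)
        # Only flag once a full window of evidence has accumulated.
        return len(self.errors) == self.errors.maxlen and mae > self.tolerance

# Hypothetical stream of (predicted, actual) RUL values in operating hours:
validator = RULValidator(window=3, tolerance=5.0)
flags = [validator.observe(p, a) for p, a in
         [(100, 98), (90, 92), (80, 95), (70, 95), (60, 95)]]
# flags == [False, False, True, True, True]: the alarm fires once the
# prediction error grows and stays above tolerance.
```

In a deployed digital twin, the flag would trigger the human-oversight and recalibration steps discussed above rather than silently continuing to act on degraded forecasts.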
In industrial contexts such as the ceramics industry, integrating ethical values into AI systems requires methodologies that account for both technical precision and socio-environmental impact. Approaches such as Explainable AI, Ethics by Design, and Human-in-the-Loop systems are particularly relevant. These methodologies enable the incorporation of stakeholder values, transparency, and accountability into AI-driven decision-making processes, such as quality control, resource optimization, and predictive maintenance. Applying these ethical frameworks within the ceramics industry can support sustainable production, fair labour practices, and responsible automation.
5. Discussion
This research reveals major obstacles and potential opportunities for ethical AI system design and validation, specifically for industrial digital twin applications. The literature review demonstrates that trustworthiness depends on core ethical principles, including transparency, privacy, fairness, and technical robustness. However, the practical execution of these principles remains underdeveloped, particularly in sector-specific applications such as the ceramics industry.
The current Explainable AI and Human-in-the-Loop systems demonstrate potential for ethical AI system integration, yet they do not directly apply to digital twins in dynamic industrial operations. The “Ethics by Design” framework provides an effective base yet requires additional investigation to determine its suitability for industrial settings with their complex operational needs.
The ceramics sector faces an intensified challenge because it must address sustainability issues together with ethical data handling and operational performance requirements. The research demonstrates that current AI ethics frameworks do not adequately address social and environmental welfare needs: these essential values receive recognition, but their practical implementation remains ineffective, particularly in industry-specific situations. AI governance structures need to incorporate sustainability and corporate social responsibility factors because digital twin technologies must create positive impacts on environmental and social goals.
6. Conclusions
This paper examines the ethical design and validation of AI systems and digital twins in industrial applications through a comprehensive literature review. The study conducted a systematic evaluation to determine essential ethical principles while investigating methods for designing ethical AI systems and tools for assessing AI trustworthiness. The field has made substantial progress, but researchers still need to address multiple gaps in implementing ethical guidelines in industrial digital twin systems. The research develops an industry-specific framework that directs both ethical design procedures and validation protocols for digital twins in ceramics manufacturing. The framework connects universal ethical principles to practical implementation through specific tools that help create responsible AI systems in complex manufacturing settings. It still requires industrial testing so that practical insights can improve its design, keeping in mind that the responsible development of AI-driven digital twins requires ethical guidelines that evolve together with modern technological advancements.
Author Contributions
Conceptualization, S.D.S. and M.D.M.; methodology, S.D.S. and M.D.M.; formal analysis, S.D.S. and O.D.Y.; writing—original draft preparation, S.D.S.; writing—review and editing, O.D.Y., M.D.M. and E.R. All authors have read and agreed to the published version of the manuscript.
Funding
This research was co-funded by the Italian Ministry of Enterprises and Made in Italy under the measure “Development Contracts” (DM 31 December 2021), grant number: F/310087/01-05/X56 (START—SusTainable dAta dRiven manufacTuring).
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Acknowledgments
The authors thank the Editor, Guest Editors, and anonymous reviewers for their helpful comments on this paper.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Hagendorff, T. The Ethics of AI Ethics: An Evaluation of Guidelines. Minds Mach. 2020, 30, 99–120. [Google Scholar] [CrossRef]
- Mezgár, I.; Váncza, J. From ethics to standards–A path via responsible AI to cyber-physical production systems. Annu. Rev. Control 2022, 53, 391–404. [Google Scholar] [CrossRef]
- Smuha, N.A. The EU Approach to Ethics Guidelines for Trustworthy Artificial Intelligence. Comput. Law Rev. Int. 2019, 20, 97–106. [Google Scholar] [CrossRef]
- Ethics Guidelines for Trustworthy AI. Available online: https://www.europarl.europa.eu/cmsdata/196377/AI%20HLEG_Ethics%20Guidelines%20for%20Trustworthy%20AI.pdf (accessed on 13 January 2025).
- Regulation (EU) 2024/1689 of the European Parliament and of the Council. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ%3AL_202401689 (accessed on 13 January 2025).
- Recommendation of the Council on Artificial Intelligence. Available online: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449 (accessed on 13 January 2025).
- Recommendation on the Ethics of Artificial Intelligence. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000381137 (accessed on 13 January 2025).
- Corrêa, N.K.; Galvão, C.; Santos, J.W.; Del Pino, C.; Pinto, E.P.; Barbosa, C.; Massmann, D.; Mambrini, R.; Galvão, L.; Terem, E.; et al. Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance. Patterns 2023, 4, 100857. [Google Scholar] [CrossRef] [PubMed]
- Sadek, M.; Constantinides, M.; Quercia, D.; Mougenot, C. Guidelines for Integrating Value Sensitive Design in Responsible AI Toolkits. In Proceedings of the CHI Conference on Human Factors in Computing Systems, O’ahu, HI, USA, 11–16 May 2024; ACM: Honolulu, HI, USA, 2024. [Google Scholar]
- Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar]
- Mylrea, M.; Robinson, N. Artificial Intelligence (AI) Trust Framework and Maturity Model: Applying an Entropy Lens to Improve Security, Privacy, and Ethical AI. Entropy 2023, 25, 1429. [Google Scholar] [CrossRef] [PubMed]
- Kobayashi, K.; Alam, S.B. Explainable, interpretable, and trustworthy AI for an intelligent digital twin: A case study on remaining useful life. Eng. Appl. Artif. Intell. 2024, 129, 107620. [Google Scholar] [CrossRef]
- Schmitz, A.; Akila, M.; Hecker, D.; Poretschkin, M.; Wrobel, S. The why and how of trustworthy AI: An approach for systematic quality assurance when working with ML components. Automatisierungstechnik 2022, 70, 793–804. [Google Scholar] [CrossRef]
- Chen, X.; Wang, X.; Qu, Y. Constructing Ethical AI Based on the “Human-in-the-Loop” System. Systems 2023, 11, 548. [Google Scholar] [CrossRef]
- Ashby, M. Ethical Regulators and Super-Ethical Systems. Systems 2020, 8, 53. [Google Scholar] [CrossRef]
- Goldman, C.V.; Baltaxe, M.; Chakraborty, D.; Arinez, J.; Diaz, C.E. Interpreting learning models in manufacturing processes: Towards explainable AI methods to improve trust in classifier predictions. J. Ind. Inf. Integr. 2023, 33, 100439. [Google Scholar] [CrossRef]
- Winfield, A.F.; Michael, K.; Pitt, J.; Evers, V. Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems [Scanning the Issue]. Proc. IEEE 2019, 107, 509–517. [Google Scholar] [CrossRef]
- Chamola, V.; Hassija, V.; Sulthana, A.R.; Ghosh, D.; Dhingra, D.; Sikdar, B. A Review of Trustworthy and Explainable Artificial Intelligence (XAI). IEEE Access 2023, 11, 78994–79015. [Google Scholar] [CrossRef]
- Milossi, M.; Alexandropoulou-Egyptiadou, E.; Psannis, K.E. AI Ethics: Algorithmic Determinism or Self-Determination? The GPDR Approach. IEEE Access 2021, 9, 58455–58466. [Google Scholar] [CrossRef]
- Huang, J. Digital engineering transformation with trustworthy AI towards industry 4.0: Emerging paradigm shifts. J. Integr. Des. Process Sci. 2023, 26, 267–290. [Google Scholar] [CrossRef]
- Cho, S.; Kim, I.; Kim, J.; Woo, H.; Shin, W. A Maturity Model for Trustworthy AI Software Development. Appl. Sci. 2023, 13, 4771. [Google Scholar] [CrossRef]
- Upreti, R.; Lind, P.G.; Elmokashfi, A.; Yazidi, A. Trustworthy machine learning in the context of security and privacy. Int. J. Inf. Secur. 2024, 23, 2287–2314. [Google Scholar] [CrossRef]
Table 1.
List of the selected articles on ethical values in AI design.
Reference | Title |
---|---
[11] | “Artificial Intelligence (AI) Trust Frame-work and Maturity Model: Applying an Entropy Lens to Improve Security, Privacy, and Ethical AI” |
[12] | “Explainable, interpretable, and trustworthy AI for an intelligent digital twin: A case study on remaining useful life” |
[2] | “From ethics to standards—A path via responsible AI to cyber-physical production systems” |
[13] | “The why and how of trustworthy AI An approach for systematic quality assurance when working with ML components” |
[14] | “Constructing Ethical AI Based on the ‘Human-in-the-Loop’ System” |
[15] | “Ethical regulators and super-ethical systems” |
[16] | “Interpreting learning models in manufacturing processes: Towards explainable AI methods to improve trust in classifier predictions” |
[17] | “Machine ethics: The design and governance of ethical AI and autonomous systems” |
[18] | “A Review of Trustworthy and Explainable Artificial Intelligence (XAI)” |
[19] | “AI Ethics: Algorithmic Determinism or Self-Determination? The GPDR Approach” |
[20] | “Digital engineering transformation with trustworthy AI towards industry 4.0: Emerging paradigm shifts” |
[21] | “A Maturity Model for Trustworthy AI Software Development” |
[22] | “Trustworthy machine learning in the context of security and privacy” |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).