Open Access Review
Evaluating Trustworthiness in AI: Risks, Metrics, and Applications Across Industries
by Aleksandra Nastoska, Bojana Jancheska, Maryan Rizinski and Dimitar Trajanov
1 Faculty of Computer Science and Engineering, Ss. Cyril and Methodius University, 1000 Skopje, North Macedonia
2 Department of Computer Science, Metropolitan College, Boston University, Boston, MA 02215, USA
* Authors to whom correspondence should be addressed.
Electronics 2025, 14(13), 2717; https://doi.org/10.3390/electronics14132717
Submission received: 5 June 2025 / Revised: 30 June 2025 / Accepted: 1 July 2025 / Published: 4 July 2025
Abstract
Ensuring the trustworthiness of artificial intelligence (AI) systems is critical as they become increasingly integrated into domains such as healthcare, finance, and public administration. This paper explores frameworks and metrics for evaluating AI trustworthiness, focusing on key principles such as fairness, transparency, privacy, and security. The study is guided by two central questions: how can trust in AI systems be systematically measured across the AI lifecycle, and what trade-offs arise when optimizing for different trustworthiness dimensions? By examining frameworks such as the NIST AI Risk Management Framework (AI RMF), the AI Trust Framework and Maturity Model (AI-TMM), and ISO/IEC standards, the study bridges theoretical insights with practical applications. We identify major risks across the stages of the AI lifecycle and outline metrics to address challenges in system reliability, bias mitigation, and model explainability. A comparative analysis of existing standards and their application across industries illustrates their effectiveness. Real-world case studies, including applications in healthcare, financial services, and autonomous systems, demonstrate practical approaches to applying trust metrics. The findings reveal that achieving trustworthiness involves navigating trade-offs between competing metrics, such as fairness versus efficiency or privacy versus transparency, and emphasize the importance of interdisciplinary collaboration for robust AI governance. Emerging trends point to the need for adaptive trustworthiness frameworks that evolve alongside advances in AI technologies. This paper contributes to the field through a comprehensive review of existing frameworks, with guidelines for building resilient, ethical, and transparent AI systems that align with regulatory requirements and societal expectations.
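As a minimal illustration of the kind of quantitative fairness metrics the abstract refers to (the paper itself surveys a much broader set), the following Python sketch computes two common group-fairness statistics, demographic parity difference and equal-opportunity difference, on hypothetical binary predictions. The function names and synthetic loan-approval data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between two groups.

    A value near 0 suggests decisions are independent of group
    membership; larger values indicate disparate treatment.
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def equal_opportunity_difference(y_true, y_pred, group):
    """Gap in true-positive rates (recall) between two groups,
    computed only over individuals whose true label is positive."""
    tpr_a = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_b = y_pred[(group == 1) & (y_true == 1)].mean()
    return abs(tpr_a - tpr_b)

# Hypothetical data for a loan-approval model (purely synthetic).
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)   # protected attribute (0/1)
y_true = rng.integers(0, 2, size=1000)  # ground-truth repayment outcome
y_pred = rng.integers(0, 2, size=1000)  # model's approve/deny decisions

print(f"Demographic parity diff: {demographic_parity_difference(y_pred, group):.3f}")
print(f"Equal opportunity diff:  {equal_opportunity_difference(y_true, y_pred, group):.3f}")
```

The two statistics can conflict with each other and with accuracy, which is one concrete instance of the fairness-versus-efficiency trade-off the paper discusses.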
Share and Cite
MDPI and ACS Style
Nastoska, A.; Jancheska, B.; Rizinski, M.; Trajanov, D. Evaluating Trustworthiness in AI: Risks, Metrics, and Applications Across Industries. Electronics 2025, 14, 2717. https://doi.org/10.3390/electronics14132717
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.