Trust Management for Artificial Intelligence: A Standardization Perspective
Abstract
1. Introduction
2. Standardization Trend for AI Trust
3. Trust Management in AI Technologies
3.1. Basic Configuration and Operation Procedure of Artificial Intelligence System
3.2. Trends Related to ICT Trust Management Technology
- Trust attribute: This characterizes an entity and may be qualitative or quantitative, covering both direct and indirect trust. Trust attributes represent the properties and capabilities of trusted entities; qualitative attributes require a quantification process before they can be accumulated alongside quantitative ones.
- Trust indicator: This is used to calculate the trust index by combining the qualitative and quantitative attributes of trust. An objective trust indicator quantitatively represents the trustworthiness of an entity, whereas a subjective trust indicator reflects the subjective or personal attributes of the trusting entity. Because its value changes over time, a trust indicator is calculated as a measurement instance of trust.
- Trust index: This is a composite, relative value that combines several trust indicators into one benchmark measure of an entity's trustworthiness, similar to an ICT development index or a stock market index. It comprehensively accumulates objective and subjective trust indicators to evaluate and quantify the trustworthiness of the trustee.
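The relationship among attributes, indicators, and the index described above can be sketched as a simple weighted aggregation. This is an illustrative example only; the attribute names, weights, quantification scale, and aggregation formula are our own assumptions and are not prescribed by the cited recommendations:

```python
# Illustrative sketch: deriving a trust index from trust indicators.
# The scale, weights, and formulas below are assumed for illustration.

def quantify(qualitative: str) -> float:
    """Map a qualitative attribute to a quantitative score in [0, 1]."""
    scale = {"low": 0.2, "medium": 0.5, "high": 0.9}  # assumed mapping
    return scale[qualitative]

def trust_indicator(objective: float, subjective: float, w_obj: float = 0.6) -> float:
    """Combine objective and subjective attributes into one indicator.

    Each indicator value is a measurement instance: it is expected to be
    recomputed as the underlying attributes change over time.
    """
    return w_obj * objective + (1.0 - w_obj) * subjective

def trust_index(indicators: list[float], weights: list[float]) -> float:
    """Aggregate several indicators into a single relative index,
    analogous to a composite benchmark such as a market index."""
    return sum(i * w for i, w in zip(indicators, weights)) / sum(weights)

# Example: one indicator built from a measured QoS score (objective) and a
# quantified user rating (subjective), plus a second indicator, then an index.
i1 = trust_indicator(objective=0.8, subjective=quantify("high"))
i2 = trust_indicator(objective=0.7, subjective=quantify("medium"))
index = trust_index([i1, i2], weights=[0.5, 0.5])
```

In a real deployment the indicator weights would themselves be service-specific, since, as noted later, different AI services may require different trust attributes.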
3.3. AI Trust Related Trends
3.4. Analysis of Limitations and Requirements of Artificial Intelligence Trust Research
- Measurement and calculation: It is difficult to derive trust as a generalized formula due to the diversity of AI systems and differences in their intrinsic characteristics. However, quantifying the level of trust in AI systems is important. It should be possible to define measurable AI trust metrics and to determine the level of trust in AI systems through trust calculations. The level of trust in AI can be measured by classifying it into an objective method that is quantitatively measured, such as quality of service (QoS), or a subjective method that is qualitatively calculated, such as quality of experience (QoE). Different AI services and applications may require different trust attributes [20].
- Trust relationship: In addition to the human-to-human trust that has been reviewed in the traditional social domain, the trust relationship between AI-applied systems and people, AI systems and AI systems, etc., should be defined, and trust-based interactions between them should be analyzed [21,22,23].
- Trust management: In an AI system, trust interacts with all layers, from the upper AI application to the lower physical layer. Therefore, similar to security, trust management technology is required as a separate common layer that covers all vertical layers. Trust management has key functions such as monitoring management, data management, algorithm management, expectations management, and decision management. Trust information about reputation and recommendations, in particular, can be used to support these functions [24].
- Dynamically changing properties: Trust indicator values for AI systems are not kept constant and may fluctuate depending on data and surrounding circumstances; therefore, continuous tracking and management are required [19].
- Constraint environment: Constraints in hardware performance such as CPU/GPU/TPU, memory, and storage constituting the AI system, the types of AI algorithms applied, and restrictions in data collection must be considered.
- Lifecycle management: Human oversight may be required as a safeguard throughout the lifecycle of an AI system, from design and development through launch, use, and disposal. Risk assessment is vital because the autonomous operation and function updates of a specific AI system during its lifecycle can have a significant impact on safety [25].
- Data quality: The quality of the data set used in an AI system has a decisive effect on training machine learning algorithms and on classification and decision-making. Feeding malicious data into the system could change the behavior of AI solutions, and biased collected data should be removed before being applied to training. The data set should be carefully validated and tested before being applied to the AI system, and the data supplied to the system should be recorded at all times so that audits can be performed if problems arise later [26].
- Non-discrimination: Direct or indirect discrimination based on ethnicity, gender, sexual orientation, or age may lead to exclusion of certain groups. Discrimination in AI systems can occur unintentionally due to data problems such as bias and incompleteness or design errors in AI algorithms. Those who control AI algorithms may seek to achieve unfair or biased results such as by deliberately manipulating data to exclude certain groups of people [27].
- Privacy protection: Digital records of human behavior contain highly sensitive data such as gender, age, religion, sexual orientation, political views, and personal preferences. Privacy and data protection must be ensured at all stages of the AI system lifecycle, covering any data provided by the user as well as any information generated about the user through interactions with the AI system [28].
- Robustness: AI systems must be robust and secure enough to handle errors or inconsistencies in the design, development, execution, deployment, and use phases, and to respond appropriately to erroneous results.
- Reproducibility: Despite the complexity and sensitivity of the AI system to training and model-building conditions, it should produce consistent results for given input data in a given situation. Lack of reproducibility can result in unintended discrimination in AI decisions.
- Accuracy: AI systems must ensure accuracy such as the ability to classify data into the correct categories or the ability to make correct predictions or decisions based on data or models.
- Security: Like all software systems, AI systems can contain vulnerabilities that attackers can exploit. When an AI system is attacked, such as by hacking or malware, data and system behavior can be altered, causing the system to make different decisions or to shut down completely. Cyber security management must be applied that can quickly remove vulnerabilities in AI systems as soon as they are discovered and prevent infection by malicious code such as viruses, worms, and ransomware.
- Explainability: Explainability should be applied so that the mechanisms by which AI systems make decisions can be interpreted, inspected, and reproduced [29].
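Several of the requirements above, namely measurable trust metrics combining objective (QoS) and subjective (QoE) inputs, and the dynamically changing nature of trust indicator values, can be combined in a minimal continuous-tracking sketch. The smoothing factor, weighting, and alert threshold below are our own assumptions for illustration, not values taken from any cited standard:

```python
# Minimal sketch of continuous trust tracking for an AI system.
# Smoothing factor, objective/subjective weighting, and threshold
# are assumed values for illustration only.

class TrustTracker:
    def __init__(self, alpha: float = 0.3, threshold: float = 0.5):
        self.alpha = alpha          # weight given to the newest measurement
        self.threshold = threshold  # level below which trust is flagged
        self.level = None           # current smoothed trust level

    def update(self, qos: float, qoe: float, w_obj: float = 0.6) -> float:
        """Fold a new objective (QoS) and subjective (QoE) measurement
        into the running trust level via exponential smoothing, so the
        indicator tracks dynamically changing conditions."""
        sample = w_obj * qos + (1.0 - w_obj) * qoe
        if self.level is None:
            self.level = sample
        else:
            self.level = self.alpha * sample + (1.0 - self.alpha) * self.level
        return self.level

    def trusted(self) -> bool:
        """Report whether the tracked level still meets the threshold."""
        return self.level is not None and self.level >= self.threshold

tracker = TrustTracker()
for qos, qoe in [(0.9, 0.8), (0.85, 0.8), (0.3, 0.4)]:  # trust degrades
    tracker.update(qos, qoe)
```

Exponential smoothing is one simple way to meet the "continuous tracking and management" requirement: a single poor measurement lowers the level gradually rather than discarding accumulated trust outright.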
4. AI Trust Management Framework
4.1. Trust Target and Management Elements by a Layer of Artificial Intelligence System
4.1.1. Data Layer
4.1.2. Model Layer
4.1.3. Application Layer
4.2. Artificial Intelligence Systems Trust Framework
4.2.1. Trust Analysis and Management in Collector
4.2.2. Trust Analysis and Management in Data Pre-Producer (PP)
4.2.3. Trust Analysis and Management in Model (M)
4.2.4. Trust Analysis and Management in SINK (App)
4.3. Trust Analysis Model for Artificial Intelligence Systems
5. Challenges for Standardization of AI Trust Management
5.1. In-Depth Understanding of AI Trust and Its Core Technologies
5.2. Trust by Design Applied Trust-Based Lifecycle Operation Model Design
5.3. AI Trust Reference Model
5.4. An Artificial Intelligence Model That Evolves and Develops Transparently and Autonomously with Humans
5.5. High-Reliability Application Support through Quality Control of Artificial Intelligence Models
5.6. Artificial Intelligence Trust Analysis Mechanism
5.7. Risk Management System
5.8. AI Trust Technology Verification and Certification
5.9. Artificial Intelligence Ethics and Social Issues
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Focus Group on AI for Autonomous and Assisted Driving (FG-AI4AD). Available online: https://www.itu.int/en/ITU-T/focusgroups/ai4ad/Pages/default.aspx (accessed on 1 April 2022).
- Focus Group on Environmental Efficiency for Artificial Intelligence and other Emerging Technologies (FG-AI4EE). Available online: https://www.itu.int/en/ITU-T/focusgroups/ai4ee/Pages/default.aspx (accessed on 1 April 2022).
- Focus Group on “Artificial Intelligence for Health”. Available online: https://www.itu.int/en/ITU-T/focusgroups/ai4h/Pages/default.aspx (accessed on 1 April 2022).
- Lui, K.; Karmiol, J. AI Infrastructure Reference Architecture; IBM Systems: Armonk, NY, USA, 2018. [Google Scholar]
- 6 Open-Source AI Frameworks You Should Know about. Available online: https://www.cmswire.com/digital-experience/6-open-source-ai-frameworks-you-should-know-about/ (accessed on 1 April 2022).
- Zhang, C.; Li, W.; Luo, Y.; Hu, Y. AIT: An AI-Enabled Trust Management System for Vehicular Networks Using Blockchain Technology. IEEE Internet Things J. 2020, 8, 3157–3169. [Google Scholar] [CrossRef]
- Zhang, T.; Qin, Y.; Li, Q. Trusted Artificial Intelligence: Technique Requirements and Best Practices. In Proceedings of the 2021 IEEE 20th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), Shenyang, China, 20–22 October 2021; pp. 1458–1462. [Google Scholar] [CrossRef]
- Ağca, M.A.; Faye, S.; Khadraoui, D. A Survey on Trusted Distributed Artificial Intelligence. IEEE Access 2022, 10, 55308–55337. [Google Scholar] [CrossRef]
- Gasser, U.; Almeida, V.A.F. A Layered Model for AI Governance. IEEE Internet Comput. 2017, 21, 58–62. [Google Scholar] [CrossRef] [Green Version]
- New AI Can Guess Whether You’re Gay or Straight from a Photograph. The Guardian, 7 September 2017. Available online: https://www.theguardian.com/technology/2017/sep/07/new-artificial-intelligence-can-tell-whether-youre-gay-or-straight-from-a-photograph (accessed on 1 April 2022).
- Recommendation Y.3052, Overview of Trust Provisioning in ICT Infrastructures and Services; ITU: Geneva, Switzerland, 2017.
- Truong, N.B.; Lee, G.M.; Um, T.; Mackay, M. Trust Evaluation Mechanism for User Recruitment in Mobile Crowd-Sensing in the Internet of Things. IEEE Trans. Inf. Forensics Secur. 2019, 14, 2705–2719. [Google Scholar] [CrossRef] [Green Version]
- Jayasinghe, U.; Lee, G.M.; Um, T.; Shi, Q. Machine Learning Based Trust Computational Model for IoT Services. IEEE Trans. Sustain. Comput. 2019, 4, 39–52. [Google Scholar] [CrossRef]
- Oleshchuk, V. A trust-based security enforcement in disruption-tolerant networks. In Proceedings of the 2017 9th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), Bucharest, Romania, 21–23 September 2017; pp. 514–517. [Google Scholar] [CrossRef]
- uTRUSTit, FP7 Project, ICT-2009.1.4—Trustworthy ICT. Available online: https://cordis.europa.eu/project/id/258360 (accessed on 1 April 2022).
- TRUSTe Privacy Certification Standards. Available online: https://trustarc.com/consumer-info/privacy-certification-standards/ (accessed on 1 April 2022).
- Adadi, A.; Berrada, M. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access 2018, 6, 52138–52160. [Google Scholar] [CrossRef]
- Peters, D.; Vold, K.; Robinson, D.; Calvo, R.A. Responsible AI—Two Frameworks for Ethical Design Practice. IEEE Trans. Technol. Soc. 2020, 1, 34–47. [Google Scholar] [CrossRef]
- Technical Report, Trust Provisioning for Future ICT Infrastructures and Services; ITU: Geneva, Switzerland, 2016.
- Jayasinghe, U.; Truong, N.B.; Lee, G.M.; Um, T.-W. RpR: A Trust Computation Model for Social Internet of Things. In Proceedings of the 2016 Intl IEEE Conferences on Ubiquitous Intelligence & Computing, Advanced and Trusted Computing, Scalable Computing and Communications, Cloud and Big Data Computing, Internet of People, and Smart World Congress (UIC/ATC/ScalCom/CBDCom/IoP/SmartWorld), Toulouse, France, 18–21 July 2016; pp. 930–937. [Google Scholar]
- Social Internet of Things. Available online: http://www.social-iot.org./ (accessed on 1 April 2022).
- Atzori, L.; Iera, A.; Morabito, G.; Nitti, M. The Social Internet of Things (SIoT)—When social networks meet the Internet of Things: Concept, architecture and network characterization. Comput. Netw. 2012, 56, 3594–3608. [Google Scholar] [CrossRef]
- Huang, F. Building social trust: A human-capital approach. J. Inst. Theor. Econ. 2007, 163, 552–573. [Google Scholar] [CrossRef]
- Zheng, Y.; Zhang, P.; Vasilakosd, A.V. A survey on trust management for Internet of Things. J. Netw. Comput. Appl. 2014, 42, 120–134. [Google Scholar]
- Hummer, W.; Muthusamy, V.; Rausch, T.; Dube, P.; El Maghraoui, K.; Murthi, A.; Oum, P. ModelOps: Cloud-Based Lifecycle Management for Reliable and Trusted AI. In Proceedings of the 2019 IEEE International Conference on Cloud Engineering (IC2E), Prague, Czech Republic, 24–27 June 2019; pp. 113–120. [Google Scholar] [CrossRef]
- Tao, C.; Gao, J.; Wang, T. Testing and Quality Validation for AI Software–Perspectives, Issues, and Practices. IEEE Access 2019, 7, 120164–120175. [Google Scholar] [CrossRef]
- Srivastava, B.; Rossi, F. Rating AI systems for bias to promote trustable applications. IBM J. Res. Dev. 2019, 63, 5:1–5:9. [Google Scholar] [CrossRef]
- Curzon, J.; Kosa, T.A.; Akalu, R.; El-Khatib, K. Privacy and Artificial Intelligence. IEEE Trans. Artif. Intell. 2021, 2, 96–108. [Google Scholar] [CrossRef]
- Joshi, G.; Walambe, R.; Kotecha, K. A Review on Explainability in Multimodal Deep Neural Nets. IEEE Access 2021, 9, 59800–59821. [Google Scholar] [CrossRef]
Standard Body | Standard Group | Standard Document | Main Content
---|---|---|---
ITU-T | SG13 and FG-ML5G | Y.3172 | An architectural framework for the application of machine learning in future networks including IMT-2020.
 | | Y.3173 | A method for measuring the intelligence level of future networks including IMT-2020.
 | | Y.3170 | Data processing framework to apply machine learning to future networks including IMT-2020.
 | | Supplement 55 to Y.3170-series | Machine learning use cases in future networks such as IMT-2020.
 | | Y.ML-IMT2020-NA-RAFR | AI-based resource control and failure recovery automation in future networks including IMT-2020.
 | | Y.ML-IMT2020-serv-prov | AI-based user-driven network service provisioning in future networks including IMT-2020.
 | | Y.ML-IMT2020-MP | Machine learning marketplace in future networks including IMT-2020.
 | | Y.IMT2020-AIICDN-arch | AI-integrated cross-domain network structure in future networks including IMT-2020.
 | SG16 | Y.Sup.AI4IoT | AI role in IoT data management and implementation of AI-based technology for smart cities.
 | | Y.4472 | An open data application programming interface for IoT data in smart cities.
 | FG-AI4H | FG-AI4H Whitepaper | White paper for the ITU/WHO Focus Group on AI for Health.
Standard Body | Standard Group | Standard Document | Main Content
---|---|---|---
ITU-T | SG13 | Y.3051 | Basic principles for a trusted environment in ICT infrastructure.
 | | Y.3052 | Trust provisioning overview for ICT infrastructure and services.
 | | Y.3053 | Trust networking framework with trust-centric network domains.
 | | Y.3054 | Trust-based media services framework.
 | | Y.3057 | Trust index for ICT infrastructure and services.
 | | Y.trust-arch | Functional architecture for trust-based service provisioning.
 | | Y.3056 | An open bootstrap framework that supports trust networking and services for distributed ecosystems.
 | | Y.3055 | Trust-based personal data management platform framework.
 | FG-DPM | TR D4.3 | Technical enabler overview for trust data.
 | SG17 | X.5GSec-t | Trust relationship-based security framework in the 5G ecosystem.
ISO/IEC | JTC1 SC42 WG3 | TR 24028 | Overview of trustworthiness in artificial intelligence.
 | | TR 24368 | Overview of artificial intelligence ethics and societal concerns.
 | | CD 23894 | Artificial intelligence—risk management.
 | | TR 24027 | Bias in AI systems and AI-based decision-making.
 | | TR 5254 | Objectives and methods for explainability of ML models and artificial intelligence systems.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Um, T.-W.; Kim, J.; Lim, S.; Lee, G.M. Trust Management for Artificial Intelligence: A Standardization Perspective. Appl. Sci. 2022, 12, 6022. https://doi.org/10.3390/app12126022