Risk-Based AI Assurance Framework
Abstract
1. Introduction
1.1. Motivation and Problem Statement
- quantifying AI-related risk by introducing a structured scoring rubric.
- mapping regulatory risk classifications to design-specification controls by scoring technical and operational vulnerabilities.
- turning high-level governance requirements into measurable operations: where the NIST AI RMF defines what good governance should look like, we provide a way to quantify risk and track it repeatably across teams and over time.
- converting risk awareness into risk prioritization by mapping identified vulnerabilities into a governance risk score.
- avoiding static risk treatment by design: the framework supports continuous monitoring and auditing for both pre- and post-deployment assessments, enabling it to detect fragility, drift, and tail risk.
1.2. Research Questions
- RQ1: Can governance-aligned risk severity and evidence maturity be quantitatively linked for traceability and explainability?
- RQ2: Does using assurance as a bottleneck term (the weaker of the Traceability Adequacy Index, TAI, and the Explainability Adequacy Index, EAI) prevent readiness outcomes from being averaged out when traceability or explainability evidence is insufficient?
- RQ3: Do tier-driven assurance thresholds yield outcomes that reflect proportional governance expectations?
1.3. Related Work
Algorithm 1: RBAAF computation and deployment gate.
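Since the algorithm listing itself is not reproduced above, the following minimal Python sketch reconstructs its flow under stated assumptions: the weighted-sum aggregation forms and the equal modifier weighting are illustrative placeholders, while the tier bounds and the TAI/EAI minima follow the tiered gating thresholds of Section 2.6.

```python
# Minimal sketch of Algorithm 1 (RBAAF computation and deployment gate).
# The aggregation forms are illustrative assumptions; the tier bounds and
# TAI/EAI minima follow the gating thresholds of Section 2.6.

def governance_risk_score(severity: float, modifiers: dict) -> float:
    """GRS in [0, 1]: utility-transformed severity core blended with the
    C, G, T, E, R modifier overlays (equal weights assumed here)."""
    overlay = sum(modifiers.values()) / len(modifiers)
    return max(0.0, min(1.0, 0.5 * severity + 0.5 * overlay))

def adequacy_index(scores: dict, weights: dict) -> float:
    """TAI or EAI in [0, 1]: weighted sum of evidence-component scores."""
    return sum(weights[name] * scores[name] for name in weights)

# (tier, GRS lower bound, TAI minimum, EAI minimum)
TIERS = [
    ("Critical", 0.85, 0.80, 0.75),
    ("High",     0.70, 0.70, 0.70),
    ("Moderate", 0.50, 0.60, 0.60),
    ("Low",      0.30, 0.50, 0.50),
    ("Minimal",  0.00, None, None),
]

def deployment_gate(grs: float, tai: float, eai: float) -> tuple:
    """Readiness is bottlenecked by the weaker of TAI and EAI;
    the GRS tier fixes the outcome pair (pass/fail)."""
    tier, _, tai_min, eai_min = next(t for t in TIERS if grs >= t[1])
    if tier == "Minimal":
        return tier, "Research sandbox only"
    evidence_ok = tai >= tai_min and eai >= eai_min  # weaker index gates
    outcomes = {
        "Critical": ("Deploy with continuous monitoring and audits", "Block"),
        "High":     ("Deploy with controls", "Block"),
        "Moderate": ("Conditional deploy (pilot/controlled rollout)", "Sandbox"),
        "Low":      ("Limited or internal use", "Sandbox"),
    }
    return tier, outcomes[tier][0 if evidence_ok else 1]
```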
1.4. Contributions
- We calculate a Governance Risk Score (GRS) via a utility-transformed likelihood × impact score that accommodates context sensitivity, governance obligations, technical and environmental exposure, and residual risk.
- We propose an Assurance Adequacy Score (AAS) that addresses the traceability, reproducibility, and explainability research problems by incorporating a Traceability Adequacy Index (TAI) and an Explainability Adequacy Index (EAI). Both indices are calculated from the quality and documented provenance of verifiable evidence: versioning of data, models, and code; replication tests; and the explanations provided.
- We propose a final deployability gate that integrates risk scoring with assurance readiness and enforces minimum tier-specific TAI and EAI thresholds, producing auditable outcomes (Deploy, Conditional Deploy, Sandbox/Pilot, or Block) that ensure adherence to governance obligations.
2. Methodology
2.1. Methodology Overview
- (i) Risk scoring: a governance-aligned risk score (GRS) is calculated from severity and modifier overlays.
- (ii) Evidence adequacy: TAI and EAI are calculated from scored, verifiable evidence artifacts.
- (iii) Assurance synthesis: risk and evidence maturity are combined via min(TAI, EAI), and the AAS is computed.
- (iv) Tiered deployment gate: the decision outcome is determined, i.e., Deploy, Conditional deploy, Sandbox, or Block.
- Governance Risk Score (GRS): a composite score bounded in [0, 1], derived from a utility-transformed severity score and five governance modifiers.
- Assurance Adequacy Scores (AAS): comprise a Traceability Adequacy Index (TAI) and an Explainability Adequacy Index (EAI), each bounded in [0, 1].
- Integration and gating: min(TAI, EAI) is the final bottleneck rule, so model readiness is bounded by the weaker of TAI and EAI. At this stage, the gate results are compared with the tier-specific thresholds to produce a deployability outcome; the notation is summarized below.
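In compact notation (the threshold symbols τ_TAI and τ_EAI are introduced here for summary purposes, since the original equations are not reproduced in this section), the score structure reads:

```latex
% Score structure of Section 2.1; \tau_{TAI}, \tau_{EAI} denote the
% tier-specific minima (notation assumed for this summary).
\begin{gather}
  \mathrm{GRS} \in [0,1], \qquad \mathrm{TAI},\ \mathrm{EAI} \in [0,1], \\
  \mathrm{readiness} = \min(\mathrm{TAI},\, \mathrm{EAI}), \\
  \text{gate passes} \iff
    \mathrm{TAI} \ge \tau_{\mathrm{TAI}}(\mathrm{tier})
    \ \wedge\ \mathrm{EAI} \ge \tau_{\mathrm{EAI}}(\mathrm{tier}).
\end{gather}
```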
2.2. Materials
2.3. RBAAF Framework Overview
2.4. Governance Risk Score (GRS)
2.4.1. Risk Severity Core
2.4.2. Governance-Relevant Modifier Overlays
- Context sensitivity (C) modifier incorporates the criticality of the sector in which the AI is used, the groups affected by its use, and safety and rights sensitivity.
- Governance (G) modifier deals with the regulatory and organizational controls required to be implemented before the AI model deployment, such as continuous logging, human oversight, conformity assessment, and documentation provenance.
- Technical exposure (T) modifier deals with the attack surface of the AI model under evaluation, the accessibility of the system, the integration complexity with already deployed systems, and the technical exposure if the model is exploited.
- Environmental exposure (E) modifier deals with operational environment uncertainty, adversarial pressure, and the performance volatility of the model being deployed.
- Residual risk (R) modifier consists of the risk remaining after controls and safeguards have been implemented. It reflects the quantifiable risk appetite of the deploying organization, the controls already in place, and the risk that persists despite them. A sketch of how these overlays can be combined follows this list.
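To make the overlay mechanics concrete, the fragment below blends a severity core with the five modifiers; the equal weights and the 50/50 blend are assumptions for illustration, not the framework's calibrated values.

```python
# Applying the five governance modifier overlays (C, G, T, E, R) to a
# utility-transformed severity core. Equal modifier weights are an
# assumed placeholder, not the framework's calibrated values.

MODIFIER_WEIGHTS = {"C": 0.2, "G": 0.2, "T": 0.2, "E": 0.2, "R": 0.2}

def apply_overlays(severity_core: float, overlays: dict) -> float:
    """Blend the severity core with the weighted overlay, clamped to [0, 1]."""
    overlay = sum(MODIFIER_WEIGHTS[k] * v for k, v in overlays.items())
    return max(0.0, min(1.0, 0.5 * severity_core + 0.5 * overlay))

# Example: a high-criticality context (C) with notable residual risk (R).
grs = apply_overlays(0.7, {"C": 0.9, "G": 0.6, "T": 0.5, "E": 0.4, "R": 0.8})
print(f"GRS = {grs:.2f}")  # 0.67 -> falls in the Moderate band of the tier table
```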
2.4.3. Risk Tiering and Uncertainty Characterization
2.5. Assurance Adequacy Scores (AAS): Traceability and Explainability
2.5.1. Traceability Adequacy Index (TAI)
1. Dataset provenance and versioning—evaluates the system’s ability to record, maintain, and reconstruct the exact lineage of the training and evaluation data.
2. Model and configuration versioning—evaluates the system’s ability to ensure models and configurations are not prone to tampering or unapproved changes. This can be accomplished by recording, tracking, and maintaining the model checkpoints, hyperparameters, random seeds, and dependency capture [39].
3. Pipeline logging and audit trails—evaluates the system’s ability to provide audit-ready reproduction of runs and decisions through logged pipeline events and audit trails.
4. Lifecycle trace control points—evaluates the system’s ability to enforce checkpoints across the lifecycle by implementing logging of change-control events, access authorization, deployment approvals, and rollback triggers.
5. Reproducibility replication—evaluates the ability of the AI model to reproduce results under controlled reruns with the same data and configurations. It also assists in documenting the degree and causes of deviations [39]. A weighted-sum sketch of the index computation follows this list.
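A minimal sketch of the TAI computation, using the default component weights (all 0.20) from the component-weight table in this article; the component scores themselves are assumed to come from scoring verifiable evidence artifacts on a 0-to-1 scale.

```python
# TAI as a weighted sum of the five traceability components, using the
# default weights from the component-weight table (all 0.20).

TAI_WEIGHTS = {
    "dataset_provenance_versioning":  0.20,
    "model_config_versioning":        0.20,
    "pipeline_logging_audit_trails":  0.20,
    "lifecycle_trace_control_points": 0.20,
    "reproducibility_replication":    0.20,
}

def tai(scores: dict) -> float:
    return sum(TAI_WEIGHTS[k] * scores[k] for k in TAI_WEIGHTS)

# Example evidence profile with weak reproducibility replication.
print(tai({
    "dataset_provenance_versioning":  0.9,
    "model_config_versioning":        0.8,
    "pipeline_logging_audit_trails":  0.7,
    "lifecycle_trace_control_points": 0.8,
    "reproducibility_replication":    0.3,
}))  # 0.70
```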
2.5.2. Explainability Adequacy Index (EAI)
- Fidelity and faithfulness—evaluates whether the explanations provided reflect the actual behavior of the model.
- Stability/robustness—evaluates whether explanations vary unpredictably under small perturbations of the input.
- Coverage—evaluates whether explanations are available across all relevant outputs and operating regimes.
- Human comprehensibility—deals with whether explanations provided support real decision-making operations, including operational interpretations [32].
- Global consistency (runs/splits)—evaluates the explanation behavior and checks whether explanations are consistent across retraining runs, dataset splits, and time windows. It also checks whether the explanations provided are reproducible and consistent with organizational policies.
- Operational logging of explanations—evaluates and tracks whether explanations are generated, stored, versioned, and retrievable. Information such as the model version, input identifiers, explanation method, and timestamps should be included to enable auditors and stakeholders to evaluate the AI model in depth. A weighted-sum sketch of the index computation follows this list.
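The EAI follows the same weighted-sum pattern; the sketch below uses the default weights from the component-weight table (0.20 for fidelity and for operational logging, 0.15 for the remaining components).

```python
# EAI as a weighted sum of the six explainability components, using the
# default weights from the component-weight table.

EAI_WEIGHTS = {
    "fidelity_faithfulness":   0.20,
    "stability_robustness":    0.15,
    "coverage":                0.15,
    "human_comprehensibility": 0.15,
    "global_consistency":      0.15,
    "operational_logging":     0.20,
}

def eai(scores: dict) -> float:
    return sum(EAI_WEIGHTS[k] * scores[k] for k in EAI_WEIGHTS)

# Uniform component scores of 0.75 yield EAI = 0.75, meeting the
# High-tier EAI minimum in the gating-threshold table.
print(eai(dict.fromkeys(EAI_WEIGHTS, 0.75)))  # 0.75
```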
2.6. Deployment Gate Outcomes
Tiered Gating Thresholds
- Deploy—The deploy outcome is produced when the AI system under evaluation meets the minimum tier-specific evidence requirements. Critical-tier AI models may be deployed only when controls for continuous monitoring and audits are in place, and High-risk tier models only when proper controls are in place; unconditional deployment is not allowed for AI models classified in the Critical or High-risk tiers.
- Conditional deploy—Conditional deployment is allowed for Moderate-risk tier AI systems that meet the evidence requirements; deployment may be restricted to pilot or controlled rollouts with an explicit remediation plan.
- Limited or internal use—This outcome applies to Low-risk tier AI systems that meet the evidence requirements.
- Sandbox—The sandbox outcome applies to Low- or Moderate-risk tier AI systems whose evidence requirements are not met; the system is confined to sandbox deployment until appropriate mitigations or compensating controls are implemented.
- Block—The block outcome is produced when evidence or artifact requirements are not met for High-risk or Critical-risk tier AI systems.
- Research sandbox only—Minimal-risk tier systems, which carry minimal risk but also near-negligible controls, are restricted to research sandbox use. A sketch of the gate logic applied to a borderline case follows this list.
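To illustrate the gate behavior on a borderline case, the fragment below applies the deployment_gate sketch given alongside Algorithm 1 to a hypothetical Moderate-tier system; the GRS, TAI, and EAI values are invented for illustration.

```python
# A hypothetical Moderate-tier system (GRS 0.55): missing the 0.60
# evidence minima confines it to the sandbox, while meeting them yields
# a conditional deploy (deployment_gate as sketched with Algorithm 1).
print(deployment_gate(grs=0.55, tai=0.58, eai=0.72))
# -> ('Moderate', 'Sandbox')
print(deployment_gate(grs=0.55, tai=0.65, eai=0.72))
# -> ('Moderate', 'Conditional deploy (pilot/controlled rollout)')
```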
3. Results
3.1. GRS Behavior and Risk-Tier Stability
3.2. Relevance Analysis
3.3. RBAAF Performance for the Use-Case Applications
3.3.1. Cybersecurity Intrusion Detection (L-XAIDS) [54]
3.3.2. Biometric Access Control
3.3.3. Credit Risk Scoring
4. Discussion
5. Conclusions
- For RQ1, RBAAF establishes an auditable, quantitative link between governance-aligned risk severity and evidence maturity by calculating GRS, TAI, EAI, and AAS from declared inputs, weights, and evidence artifacts (see Section 3 and Supplementary Tables S1–S3).
- For RQ2, RBAAF addresses this question by integrating assurance as a bottleneck term via min(TAI, EAI), which prevents optimistic readiness outcomes and does not average the indices; instead, the deployment outcome is limited by the weaker index, as reflected in the gate outcomes for the cases with weak evidence pillars (Section 3.3).
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Expansion |
|---|---|
| AI | Artificial Intelligence |
| AIID | AI Incident Database |
| EU AI Act | European Union Artificial Intelligence Act |
| AVID | AI Vulnerability Database |
| ATLAS | MITRE Adversarial Threat Landscape for AI Systems |
| CIA | Confidentiality, Integrity, Availability |
| CVSS | Common Vulnerability Scoring System |
| FAIR | Factor Analysis of Information Risk |
| GDPR | General Data Protection Regulation |
| HIPAA | Health Insurance Portability and Accountability Act |
| HITL | Human-in-the-Loop |
| ISO/IEC 27001 | Information security management systems—Requirements |
| ISO 31000 | Risk management—Guidelines |
| ISO/IEC 42001 | AI Management System (AIMS) |
| NIST AI RMF | NIST AI Risk Management Framework |
| OECD | Organisation for Economic Co-operation and Development |
| OWASP | Open Worldwide Application Security Project |
| UNESCO | United Nations Educational, Scientific and Cultural Organization |
| RBAAF | Risk-Based AI Assurance Framework |
| GRS | Governance Risk Score |
| TAI | Traceability Adequacy Index |
| EAI | Explainability Adequacy Index |
| AAS | Assurance Adequacy Score |
| NIST | National Institute of Standards and Technology |
| SP | Special Publication |
| CIS | Center for Internet Security |
| SAMM | Software Assurance Maturity Model |
| STRIDE | Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege |
| LLM | Large Language Model |
| ML | Machine Learning |
| CI/CD | Continuous Integration and Continuous Delivery/Deployment |
| LIME | Local Interpretable Model-agnostic Explanations |
| SHAP | SHapley Additive exPlanations |
| Grad-CAM | Gradient-weighted Class Activation Mapping |
References
- Partnership on AI. AI Incident Database. 2023. Available online: https://incidentdatabase.ai/ (accessed on 17 February 2026).
- European Commission. Regulation (EU) 2024/1689 of the European Parliament and of the Council on Artificial Intelligence (Artificial Intelligence Act); Official Journal of the European Union: Luxembourg, 2024.
- National Institute of Standards and Technology. AI Risk Management Framework (AI RMF 1.0); Technical Report; U.S. Department of Commerce: Gaithersburg, MD, USA, 2023.
- ISO/IEC 42001:2023; Artificial Intelligence—Management System. International Organization for Standardization: Geneva, Switzerland, 2023.
- Organisation for Economic Co-Operation and Development. OECD AI Policy Observatory. 2023. Available online: https://oecd.ai/en/ (accessed on 17 February 2026).
- UNESCO. Recommendation on the Ethics of Artificial Intelligence; UNESCO: Paris, France, 2021.
- FIRST.org. Common Vulnerability Scoring System v3.1: Specification Document. 2019. Available online: https://www.first.org/cvss/ (accessed on 17 February 2026).
- The Open Group. FAIR: Factor Analysis of Information Risk. 2022. Available online: https://www.opengroup.org/open-fair (accessed on 17 February 2026).
- National Institute of Standards and Technology. Guide for Conducting Risk Assessments (NIST SP 800-30 Rev. 1); Technical Report; U.S. Department of Commerce: Gaithersburg, MD, USA, 2012.
- Berryhill, J.; Gehrke, N. The AVID Taxonomy for AI Vulnerabilities. 2022. Available online: https://avidml.org/ (accessed on 10 August 2025).
- Massachusetts Institute of Technology. MIT AI Incident Tracker. 2023. Available online: https://airisk.mit.edu/ai-incident-tracker (accessed on 10 August 2025).
- ISO/IEC 27001:2022; Information Security, Cybersecurity and Privacy Protection—Information Security Management Systems—Requirements. International Organization for Standardization: Geneva, Switzerland, 2022.
- ISO 31000:2018; Risk Management—Guidelines. International Organization for Standardization: Geneva, Switzerland, 2018.
- Pavlidis, G. Unlocking the black box: Analysing the EU Artificial Intelligence Act's framework for explainability in AI. Law Innov. Technol. 2024, 16, 293–308.
- Ramos, S.; Ellul, J. Blockchain for artificial intelligence (AI): Enhancing compliance with the EU AI Act through distributed ledger technology. A cybersecurity perspective. Int. Cybersecur. Law Rev. 2024, 5, 1–20.
- Laux, J.; Wachter, S.; Mittelstadt, B. Trustworthy artificial intelligence and the European Union AI Act: On the conflation of trustworthiness and acceptability of risk. Regul. Gov. 2023, 18, 3–32.
- Microsoft. The STRIDE Threat Model. 2009. Available online: https://learn.microsoft.com/en-us/azure/security/develop/threat-modeling-tool (accessed on 17 February 2026).
- MITRE Corporation. Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS). 2023. Available online: https://atlas.mitre.org/ (accessed on 17 February 2026).
- OWASP Foundation. OWASP Top 10 for Large Language Model Applications. 2023. Available online: https://owasp.org/www-project-top-10-for-large-language-model-applications/ (accessed on 17 February 2026).
- Mauri, L.; Damiani, E. Modeling threats to AI-ML systems using STRIDE. Sensors 2022, 22, 6662.
- Tan, M.; Yamaguchi, K.; Raney, A.; Nockles, V.; Leblanc, M.; Bendelac, S. An AI blue team playbook. In Proceedings of the Assurance and Security for AI-Enabled Systems, National Harbor, MD, USA, 21–25 April 2024; p. 26.
- Tyler, M.; McCeney, J. Assured AI reference architecture. In Proceedings of the Assurance and Security for AI-Enabled Systems, National Harbor, MD, USA, 21–25 April 2024; p. 27.
- Hamon, R.; Junklewitz, H.; Garrido, J.S.; Sánchez, I. Three challenges to secure AI systems in the context of AI regulations. IEEE Access 2024, 12, 61022–61035.
- Kim, D.; Shin, G.; Han, I.; Oh, H.; Han, M. Attack graph design and target priority based on attacker capabilities and network vulnerabilities. J. Korean Inst. Intell. Syst. 2022, 32, 332–339.
- Petersen, E.; Ganz, M.; Holm, S.H.; Feragen, A. On (assessing) the fairness of risk score models. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT '23); Association for Computing Machinery: New York, NY, USA, 2023; pp. 817–829.
- Anand, P.; Singh, Y.; Selwal, A.; Singh, P.K.; Ghafoor, K.Z. IVQFIoT: An intelligent vulnerability quantification framework for scoring Internet of Things vulnerabilities. Expert Syst. 2021, 39, e12829.
- Ribeiro, M.T.; Singh, S.; Guestrin, C. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16), San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144.
- Lundberg, S.M.; Lee, S.I. A Unified Approach to Interpreting Model Predictions. In Advances in Neural Information Processing Systems 30; Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2017.
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626.
- Zhou, J.; Gandomi, A.; Chen, F.; Holzinger, A. Evaluating the quality of ML explanations: A survey. Electronics 2021, 10, 593.
- Buçinca, Z.; Lin, P.; Gajos, K.; Glassman, E. Proxy tasks and subjective measures can be misleading in evaluating XAI systems. In Proceedings of the 25th International Conference on Intelligent User Interfaces, Cagliari, Italy, 17–20 March 2020; pp. 454–464.
- Kim, J.; Maathuis, H.; Sent, D. Human-centered evaluation of XAI applications: A systematic review. Front. AI 2024, 7, 1456486.
- Langer, M.; Baum, K.; Hartmann, K.; Hessel, S.; Speith, T.; Wahl, J. Explainability auditing for intelligent systems: A rationale for multi-disciplinary perspectives. In Proceedings of the 2021 IEEE 29th International Requirements Engineering Conference Workshops (REW), Notre Dame, IN, USA, 20–24 September 2021; pp. 164–168.
- Doshi-Velez, F.; Kim, B. Towards a rigorous science of interpretable machine learning. arXiv 2017, arXiv:1702.08608.
- Papadimitriou, R. The right to explanation in the processing of personal data with the use of AI systems. Int. J. Law Chang. World 2023, 2, 43–55.
- Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation). 2016. Available online: https://eur-lex.europa.eu/eli/reg/2016/679/oj/eng (accessed on 24 August 2025).
- Wachter, S.; Mittelstadt, B.; Russell, C. Why a right to explanation matters in AI. Harv. J. Law Technol. 2017, 31, 491–505.
- Bracke, P.; Datta, A.; Jung, C.; Sen, S. Machine learning explainability in finance: An application to default risk analysis. SSRN 2019.
- Gundersen, O.; Kjensmo, S. State of the art: Reproducibility in artificial intelligence. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32.
- Mora-Cantallops, M.; Sánchez-Alonso, S.; García-Barriocanal, E.; Sicilia, M. Traceability for trustworthy AI: A review of models and tools. Big Data Cogn. Comput. 2021, 5, 20.
- Namlı, T.; Sınacı, A.; Gönül, S.; Herguido, C.; García-Canadilla, P.; Muñoz, A.; Ertürkmen, G. A scalable and transparent data pipeline for AI-enabled health data ecosystems. Front. Med. 2024, 11, 1393123.
- Raji, I.; Kumar, I.; Horowitz, A.; Selbst, A. The fallacy of AI functionality. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, 21–24 June 2022; pp. 959–972.
- Slattery, P.; Saeri, A.; Grundy, E.; Graham, J.; Noetel, M.; Uuk, R.; Thompson, N. The AI risk repository. AGI 2024, 1.
- Wei, M.; Zhou, Z. AI Ethics Issues in Real World: Evidence from AI Incident Database. Available online: https://hdl.handle.net/10125/103236 (accessed on 17 February 2026).
- Joint Task Force. Security and Privacy Controls for Information Systems and Organizations; NIST Special Publication 800-53 Rev. 5; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2020.
- Joint Task Force Transformation Initiative. Guide for Conducting Risk Assessments; NIST Special Publication 800-30 Rev. 1; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2012.
- Boeckl, K.; Lefkovitz, N. NIST Privacy Framework: A Tool for Improving Privacy Through Enterprise Risk Management, Version 1.0; NIST Cybersecurity White Paper NIST.CSWP.01162020; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2020.
- ISO/IEC 27701:2025; Information Security, Cybersecurity and Privacy Protection—Privacy Information Management Systems—Requirements and Guidance. International Organization for Standardization: Geneva, Switzerland; International Electrotechnical Commission: Geneva, Switzerland, 2025.
- Center for Internet Security (CIS). CIS Critical Security Controls Version 8; Center for Internet Security: New York, NY, USA, 2021.
- OWASP Foundation. OWASP Software Assurance Maturity Model (SAMM) Version 2; Release Announcement; OWASP Foundation: Wilmington, DE, USA, 2020.
- European Commission. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act). 2021. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206 (accessed on 17 February 2026).
- Dou, M. Principle and applications of Monte-Carlo simulation in forecasting, algorithm and health risk assessment. Highlights Sci. Eng. Technol. 2024, 88, 406–414.
- Pasupuleti, M. Stochastic computation for AI: Bayesian inference, uncertainty, and optimization. Int. J. Acad. Ind. Res. Innov. 2025, 5, 1–23.
- Muhammad, A.E.; Yow, K.C.; Bačanin-Džakula, N.; Khan, M.A. L-XAIDS: A LIME-based eXplainable AI framework for intrusion detection systems. Clust. Comput. 2025, 28, 654.
| Tier | GRS Range | Governance Posture |
|---|---|---|
| Critical | ≥0.85 | Highest oversight and strict evidence requirements with the expectation of continuous monitoring measures in place. |
| High | 0.70–0.84 | Strong mitigating controls and monitoring should be implemented prior to wide-scale deployment. |
| Moderate | 0.50–0.69 | Pilot or controlled roll-out with an explicit remediation plan in place. |
| Low | 0.30–0.49 | Limited/internal use AI models with baseline controls and risk mitigation defined. |
| Minimal | <0.30 | AI models applicable for research sandbox only. |
| Component | Default Weight | Justification |
|---|---|---|
| Traceability Adequacy Index (TAI) | | |
| Dataset provenance and versioning | 0.20 | Ability to record, maintain, and reconstruct the exact training/evaluation data lineage. |
| Model and config versioning | 0.20 | Ability to recover the exact model binary and training configuration. |
| Pipeline logging and audit trails | 0.20 | Ability to provide audit-ready reproduction of runs and decisions. |
| Lifecycle trace control points | 0.20 | Ability to ensure end-to-end provenance of decisions and access. |
| Reproducibility replication | 0.20 | Ability to demonstrate successful reruns under controlled conditions. |
| Explainability Adequacy Index (EAI) | | |
| Fidelity and faithfulness | 0.20 | Explanations provided must reflect actual model behavior. |
| Stability/robustness | 0.15 | Explanations provided should not vary unpredictably under small perturbations. |
| Coverage | 0.15 | Explanations provided must be available across all relevant outputs and regimes. |
| Human comprehensibility and task fit | 0.15 | Explanations provided must support stakeholder decision-making and should have operational benefit. |
| Global consistency (runs/splits) | 0.15 | Supports policy consistency and governance repeatability. |
| Operational logging of explanations | 0.20 | Ensures explanations are retrievable and auditable at scale. |
| GRS Band | GRS Range | TAI Min. | EAI Min. | Decision |
|---|---|---|---|---|
| Critical | ≥0.85 | ≥0.80 | ≥0.75 | Deploy only with continuous monitoring and audits. BLOCK if monitoring and audits are not configured. |
| High | 0.70–0.84 | ≥0.70 | ≥0.70 | Deploy with controls. BLOCK if controls are not configured. |
| Moderate | 0.50–0.69 | ≥0.60 | ≥0.60 | Conditional deploy for pilot or controlled rollouts. Sandbox mode deployment only if remediations are not configured. |
| Low | 0.30–0.49 | ≥0.50 | ≥0.50 | Limited or internal use deployment is allowed. Sandbox mode deployment only if thresholds are not met. |
| Minimal | <0.30 | – | – | Research sandbox only. |
| Case | GRS | TAI | EAI | AAS | Gate Decision |
|---|---|---|---|---|---|
| L-XAIDS A1 (lab) | 0.62 | 0.714 | 0.735 | 0.665 | Conditional deploy with pilot or controlled rollouts; classified in the Moderate GRS tier. |
| L-XAIDS A2 (SOC) | 0.83 | 0.714 | 0.735 | 0.770 | Deploy with controls and continuous monitoring; classified in the High GRS tier. |
| Biometric access control B1 | 0.88 | 0.26 | 0.39 | 0.48 | BLOCK the deployment since it fails the TAI and EAI requirements; lies in the Critical GRS tier. |
| Biometric access control B2 | 0.88 | 0.86 | 0.75 | 0.814 | Deploy with continuous audits since the TAI, EAI, and AAS scores are above threshold; GRS is in the Critical tier. |
| Credit scoring C1 | 0.81 | 0.83 | 0.75 | 0.782 | Deploy with controls since it is a High-GRS system; TAI and EAI meet the minimum requirements for the tier. |
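As a consistency check, the gate decisions in the table above can be re-derived from the tier thresholds using the deployment_gate sketch given alongside Algorithm 1; the AAS values are not recomputed, since the exact AAS combination rule is not reproduced here.

```python
# Re-deriving the gate decisions of the use-case table from the tier
# thresholds (deployment_gate as sketched with Algorithm 1).
cases = {
    "L-XAIDS A1 (lab)":            (0.62, 0.714, 0.735),
    "L-XAIDS A2 (SOC)":            (0.83, 0.714, 0.735),
    "Biometric access control B1": (0.88, 0.26, 0.39),
    "Biometric access control B2": (0.88, 0.86, 0.75),
    "Credit scoring C1":           (0.81, 0.83, 0.75),
}
for name, (g, t, e) in cases.items():
    print(f"{name}: {deployment_gate(g, t, e)}")
# A1 -> Moderate/Conditional deploy; A2, C1 -> High/Deploy with controls;
# B1 -> Critical/Block; B2 -> Critical/Deploy with continuous monitoring.
```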