PRIME–INSPECT: A Socio-Technical Framework for Trustworthy Intelligent Automation and Real-Time Decision-Making in Industry 4.0
Abstract
1. Introduction
2. Literature Review and Conceptual Foundations
2.1. AI-Enabled RTD in High-Risk Industrial Systems
2.2. Adoption and Managerial Decision-Making Perspectives on AI
2.3. Trust, Reliance, and Calibration in Human–AI Decision-Making
2.4. Explainable AI as an Operational Requirement in Industrial Contexts
2.5. Human Oversight: Supervision, Control Rights, and Teaming
2.6. AI Governance and Risk Management Frameworks
2.7. Synthesis: The Integration Gap Motivating PRIME–INSPECT
- Micro-level process gap: There is limited formalization of the end-to-end RTD operational flow (from prediction to execution) with embedded risk controls;
- Macro-level institutional gap: Governance maturity, accountability, and collaboration are often discussed without explicit coupling to the operational pipeline;
- Human–AI coupling gap: Trust calibration is recognized as critical, but is seldom treated as a first-class design objective jointly supported by explainability, oversight, and governance mechanisms.
3. The PRIME–INSPECT Framework: Architecture and Operationalization
3.1. Formal Representation of Sequential Decision Logic
3.2. The PRIME Layer: Operational Decision Flow
3.3. The INSPECT Layer: Governance and Institutional Conditions
3.4. Trust Calibration as a Cross-Layer Coupling Mechanism
4. Empirical Grounding of the PRIME–INSPECT Framework
4.1. Research Design and Objectives
4.2. Survey Instruments and Data Collection
4.3. Sample Characteristics
4.4. Operationalization of PRIME–INSPECT Dimensions
4.5. Descriptive Results Mapped to PRIME–INSPECT
4.5.1. PRIME-Related Operational Perceptions (IT Sample)
4.5.2. INSPECT-Related Governance Perceptions (TMT Sample)
4.6. Alignment and Divergence Between TMT and IT Perspectives
4.6.1. Areas of Alignment
4.6.2. Areas of Divergence
4.6.3. Implications for Trust Calibration and Socio-Technical Integration
4.7. Illustrative Application in Metallurgy
- PRIME Layer (Operational Flow; a minimal code sketch of this flow follows the list):
- Predict: As shown in Figure 2, the AI system forecasts the Remaining Useful Life (RUL) of specific tuyeres based on temporal thermal patterns and correlated process variables, rather than merely flagging the current temperature level.
- Regulate: The prediction is constrained by physics-based and safety rules. For example, implausible temperature jumps, conflicting sensor readings, or deviations exceeding predefined process-safety margins are filtered before any recommendation is escalated into action, consistent with the regulation logic formalized in Equation (3).
- Interpret: The system provides operators with a visual heat map or ranked feature explanation that contextualizes the alert and clarifies whether the risk is associated with a localized hotspot, cooling instability, or a broader overheating trend, in line with the interpretability layer formalized in Equation (4).
- Mitigate: Recognizing the high-risk context, the system can trigger a controlled mitigation protocol, such as reducing blast intensity, increasing inspection priority, or switching to a more conservative operating mode, while awaiting human confirmation, consistent with the mitigation logic formalized in Equation (5).
- Execute: The final execution step—such as scheduling replacement, temporarily reducing load, or ordering shutdown of the affected unit—remains a human-led or explicitly authorized action, supported by the system’s preparatory adjustments and logged recommendations, consistent with the execution mapping formalized in Equation (6).
- INSPECT Layer (Governance and Safeguards):
- Integrity and Navigability: Training data must include historically verified burn-through events or near-failures, sensor calibration routines must be documented, and alerts must be displayed in formats familiar to operators and furnace supervisors.
- Supervisory Control: As depicted in the diagram, the Shift Manager or responsible process engineer retains explicit override authority to dismiss or defer a recommendation if field inspection, auxiliary measurements, or contextual production constraints contradict the model output.
- Trust Calibration: A feedback loop ensures that both true alarms and near-miss predictions are reviewed against subsequent outcomes. This dynamic calibration helps to prevent automation bias and supports an appropriate balance between reliance and verification under real operating conditions.
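To make the five-stage flow concrete, the sketch below chains the Predict, Regulate, Interpret, Mitigate, and Execute stages for the tuyere example. It is a minimal sketch under stated assumptions: the model stub, sensor field names, and thresholds (MAX_PLUSIBLE_JUMP_C is spelled MAX_PLAUSIBLE_JUMP_C below; both it and RUL_ALERT_THRESHOLD_H are hypothetical placeholders) are illustrative only, not the implementations of Equations (1)–(6) or any deployed system.

```python
from dataclasses import dataclass, field

# Hypothetical sensor snapshot for one tuyere; field names are illustrative,
# not taken from the paper's instrumentation.
@dataclass
class TuyereReading:
    tuyere_id: str
    temp_c: float
    prev_temp_c: float
    coolant_flow_ok: bool
    features: dict = field(default_factory=dict)

MAX_PLAUSIBLE_JUMP_C = 40.0   # assumed physics-based plausibility bound
RUL_ALERT_THRESHOLD_H = 72.0  # assumed process-safety margin (hours)

def predict_rul(reading: TuyereReading) -> float:
    """Predict: stand-in for the trained Remaining Useful Life model."""
    # Placeholder heuristic; a real system would call the deployed model here.
    return max(0.0, 500.0 - 2.0 * reading.temp_c)

def regulate(reading: TuyereReading, rul_h: float) -> bool:
    """Regulate: filter implausible or out-of-bounds inputs before escalation."""
    plausible_jump = abs(reading.temp_c - reading.prev_temp_c) <= MAX_PLAUSIBLE_JUMP_C
    return plausible_jump and reading.coolant_flow_ok and rul_h < RUL_ALERT_THRESHOLD_H

def interpret(reading: TuyereReading) -> list[tuple[str, float]]:
    """Interpret: ranked feature attributions shown to the operator."""
    return sorted(reading.features.items(), key=lambda kv: -abs(kv[1]))

def mitigate(tuyere_id: str) -> dict:
    """Mitigate: propose a conservative protocol pending human confirmation."""
    return {"tuyere": tuyere_id,
            "action": "reduce_blast_intensity",
            "status": "awaiting_confirmation"}

def execute(proposal: dict, human_approved: bool) -> dict:
    """Execute: only a human-authorized action is carried out and logged."""
    proposal["status"] = "executed" if human_approved else "deferred_by_operator"
    return proposal

# One pass through the flow for a single synthetic reading.
reading = TuyereReading("T-07", temp_c=310.0, prev_temp_c=295.0,
                        coolant_flow_ok=True,
                        features={"cooling_instability": 0.62,
                                  "local_hotspot": 0.31})
rul = predict_rul(reading)
if regulate(reading, rul):
    print("Explanation:", interpret(reading))
    proposal = mitigate(reading.tuyere_id)
    print("Outcome:", execute(proposal, human_approved=True))
```

The design point the sketch preserves is that mitigate() only proposes a conservative protocol, and execute() carries nothing out without explicit human approval, mirroring the supervisory-control safeguard above.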
5. Discussion
5.1. Positioning PRIME–INSPECT Within the Existing Literature
5.2. Theoretical Implications
5.3. Practical Implications
5.4. Limitations
5.5. Future Research Directions
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Brynjolfsson, E.; McAfee, A. The Business of Artificial Intelligence: What It Can—and Cannot—Do for Your Organization. Harv. Bus. Rev. 2017, 7, 1–2. Available online: https://hbr.org/2017/07/the-business-of-artificial-intelligence (accessed on 26 December 2025).
- Duan, Y.; Edwards, J.S.; Dwivedi, Y.K. Artificial intelligence for decision making in the era of Big Data—Evolution, challenges and research agenda. Int. J. Inf. Manag. 2019, 48, 63–71. [Google Scholar] [CrossRef]
- Rojas, L.; Peña, Á.; Garcia, J. AI-Driven Predictive Maintenance in Mining: A Systematic Literature Review on Fault Detection, Digital Twins, and Intelligent Asset Management. Appl. Sci. 2025, 15, 3337. [Google Scholar] [CrossRef]
- Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
- Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
- Cao, G.; Duan, Y.; Edwards, J.S.; Dwivedi, Y.K. Understanding managers’ attitudes and behavioral intentions towards using artificial intelligence for organizational decision-making. Technovation 2021, 106, 102312. [Google Scholar] [CrossRef]
- Marocco, S.; Barbieri, G.; Talamo, A. Exploring facilitators and barriers to managers’ adoption of AI-based systems in decision making: A systematic review. AI 2024, 54, 2538–2567. [Google Scholar] [CrossRef]
- Glikson, E.; Woolley, A.W. Human trust in artificial intelligence: Review of empirical research. Acad. Manag. Ann. 2020, 14, 627–660. [Google Scholar] [CrossRef]
- Shin, D. The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. Int. J. Hum.-Comput. Stud. 2021, 146, 102551. [Google Scholar] [CrossRef]
- Giudici, P.; Figini, S.; Ferri, G. Artificial intelligence risk measurement. Expert Syst. Appl. 2024, 232, 120858. [Google Scholar] [CrossRef]
- Dietvorst, B.J.; Simmons, J.P.; Massey, C. Algorithm aversion: People erroneously avoid algorithms after seeing them err. J. Exp. Psychol. Gen. 2015, 144, 114–126. [Google Scholar] [CrossRef] [PubMed]
- Lyell, D.; Coiera, E. Automation bias and verification complexity: A systematic review. J. Am. Med. Inform. Assoc. 2017, 24, 423–431. [Google Scholar] [CrossRef]
- Madhavan, P.; Wiegmann, D.A. Similarities and differences between human–human and human–automation trust: An integrative review. Theor. Issues Ergon. Sci. 2007, 8, 277–301. [Google Scholar] [CrossRef]
- Romeo, G.; Conti, D. Exploring automation bias in human–AI collaboration: A review and implications for explainable AI. AI Soc. 2026, 41, 259–278. [Google Scholar] [CrossRef]
- Naiseh, M.; Al-Thani, D.; Jiang, N.; Ali, R. How the different explanation classes impact trust calibration: The case of clinical decision support systems. Int. J. Hum.-Comput. Stud. 2023, 169, 102941. [Google Scholar] [CrossRef]
- Tatasciore, M.; Loft, S. Calibrating reliance on automated advice: Transparency and trust calibration feedback. Int. J. Hum.-Comput. Interact. 2025, 41, 14723–14733. [Google Scholar] [CrossRef]
- Jacovi, A.; Marasović, A.; Miller, T.; Goldberg, Y. Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21), Virtual Event, 3–10 March 2021; pp. 624–635. [Google Scholar] [CrossRef]
- Amaliah, N.R.; Tjahjono, B.; Palade, V. Human-in-the-loop XAI for predictive maintenance: A systematic review of interactive systems and their effectiveness in maintenance decision-making. Electronics 2025, 14, 3384. [Google Scholar] [CrossRef]
- Ribeiro, M.; Singh, S.; Guestrin, C. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016. [Google Scholar] [CrossRef]
- Amershi, S.; Weld, D.; Vorvoreanu, M.; Fourney, A.; Nushi, B.; Collisson, P.; Suh, J.; Iqbal, S.; Bennett, P.N.; Inkpen, K.; et al. Guidelines for Human-AI Interaction. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019. [Google Scholar] [CrossRef]
- Tsamados, A.; Floridi, L.; Taddeo, M. Human control of AI systems: From supervision to teaming. AI Ethics 2025, 5, 1535–1548. [Google Scholar] [CrossRef]
- Papagiannidis, E.; Mikalef, P.; Conboy, K. Responsible artificial intelligence governance: A review and research framework. J. Strateg. Inf. Syst. 2025, 34, 101885. [Google Scholar] [CrossRef]
- Liu, H.; Wang, Y.; Fan, W.; Liu, X.; Li, Y.; Jain, S.; Liu, Y.; Jain, A.K.; Tang, J. Trustworthy AI: A Computational Perspective. ACM Trans. Intell. Syst. Technol. 2022, 14, 1–59. [Google Scholar] [CrossRef]
- Li, B.; Qi, P.; Liu, B.; Di, S.; Liu, J.; Pei, J.; Yi, J.; Zhou, B. Trustworthy AI: From Principles to Practices. ACM Comput. Surv. 2023, 55, 1–46. [Google Scholar] [CrossRef]
- National Institute of Standards and Technology (NIST). Artificial Intelligence Risk Management Framework (AI RMF 1.0); NIST: Gaithersburg, MD, USA, 2023. [CrossRef]
- Lundberg, S.M.; Lee, S.-I. A Unified Approach to Interpreting Model Predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; pp. 4768–4777. [Google Scholar]
- Baldi, S.; Korkas, C.D.; Lv, M.; Kosmatopoulos, E.B. Automating occupant–building interaction via smart zoning of thermostatic loads: A switched self-tuning approach. Appl. Energy 2018, 231, 1246–1258. [Google Scholar] [CrossRef]
- Norman, G. Likert scales, levels of measurement and the “laws” of statistics. Adv. Health Sci. Educ. 2010, 15, 625–632. [Google Scholar] [CrossRef]
- Nunnally, J.C. Psychometric Theory, 2nd ed.; McGraw-Hill: New York, NY, USA, 1978. [Google Scholar]
- European Parliament. Regulation (EU) 2024/1689 of the European Parliament and of the Council Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act). Off. J. Eur. Union 2024, 1689. [Google Scholar]
- ISO/IEC 42001:2023; Information Technology—Artificial Intelligence—Management System. ISO: Geneva, Switzerland, 2023.
- OECD. Recommendation of the Council on Artificial Intelligence; OECD/LEGAL/0449; OECD: Paris, France, 2019; Amended 3 May 2024; Available online: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449 (accessed on 29 March 2026).
- McGrath, M.J.; Duenser, A.; Lacey, J.; Paris, C. Collaborative Human–AI Trust (CHAI-T): A Process Framework for Active Management of Trust in Human–AI Collaboration. Comput. Hum. Behav. Artif. Hum. 2025, 6, 100200. [Google Scholar] [CrossRef]
- Chiou, E.K.; Lee, J.D. Trusting Automation: Designing for Responsivity and Resilience. Hum. Factors 2023, 65, 137–165. [Google Scholar] [CrossRef] [PubMed]


| Trigger | Human Action | Audit Outcome | Calibration Response |
|---|---|---|---|
| High-risk prediction with high confidence | Accept recommendation | Failure prevented | Reinforce confidence; consider threshold adjustment subject to supervisory review |
| High-risk prediction (false alarm) | Override decision | No failure occurs | Adjust thresholds, refine model sensitivity |
| Low-risk prediction but failure occurs | No intervention | Failure detected | Increase model sensitivity, update features |
| Frequent overrides of correct alerts | Override despite accurate prediction | Pattern detected in logs | Initiate review, retraining, or policy adjustment |
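The table above can be read as a deterministic mapping from audited decision episodes to calibration responses. The following minimal sketch encodes that mapping; the function name and boolean flags are illustrative assumptions, and a production loop would additionally log each episode for the supervisory review described in Section 3.4.

```python
def calibration_response(prediction_high_risk: bool,
                         human_overrode: bool,
                         failure_occurred: bool,
                         frequent_overrides_of_correct_alerts: bool = False) -> str:
    """Map one audited decision episode to a calibration response.

    Mirrors the four rows of the table above; inputs come from the
    post-hoc audit log, not from the live decision loop.
    """
    if frequent_overrides_of_correct_alerts:
        # Row 4: a pattern detected in the logs, not a single episode.
        return "initiate review, retraining, or policy adjustment"
    if prediction_high_risk and not human_overrode and not failure_occurred:
        # Row 1: accepted high-risk alert, failure prevented.
        return "reinforce confidence; consider threshold adjustment under supervisory review"
    if prediction_high_risk and human_overrode and not failure_occurred:
        # Row 2: false alarm correctly overridden by the operator.
        return "adjust thresholds, refine model sensitivity"
    if not prediction_high_risk and failure_occurred:
        # Row 3: missed failure (false negative).
        return "increase model sensitivity, update features"
    return "no calibration change; continue monitoring"

# Example: an operator override of a false alarm (Row 2).
print(calibration_response(prediction_high_risk=True,
                           human_overrode=True,
                           failure_occurred=False))
```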
| Variable | TMT Sample | IT Sample |
|---|---|---|
| Sample size (N) | 85 | 151 |
| Primary respondent profile | Senior managers, directors, executives | IT managers, engineers, specialists |
| Dominant sectors | Technology, Finance, Telecommunications, Retail | Cross-sector technical roles |
| Organizational size | 1–50 to >1000 employees | Mixed organizational environments |
| Years of professional experience | Senior managerial experience | Majority >10 years |
| AI familiarity | Mostly moderate to high | Mostly moderate to advanced |
| Exposure to AI initiatives | Strategic/governance perspective | Operational/implementation perspective |
| Participation in AI-related projects | Indirect or governance-level | ~65% directly involved |
| PRIME–INSPECT Component | Layer | Survey Source | Indicative Survey Constructs |
|---|---|---|---|
| Predict | PRIME (Operational) | IT survey | Perceived reliability of AI predictions; confidence in model outputs; adequacy of real-time data inputs |
| Regulate | PRIME (Operational) | IT survey | Awareness of operational constraints; perceived adequacy of rule-based limits and safety thresholds |
| Interpret | PRIME (Operational) | IT survey | Importance of explainability for decision-making; clarity of AI-generated recommendations; usefulness of explanations |
| Mitigate | PRIME (Operational) | IT survey | Availability of human intervention points; perceived effectiveness of override and escalation mechanisms |
| Execute | PRIME (Operational) | IT survey | Scope of automation in decision execution; balance between automated and human-led actions |
| Integrity | INSPECT (Governance) | TMT survey | Data quality assurance; trust in data sources; confidence in model robustness |
| Navigability (XAI) | INSPECT (Governance) | IT survey | Cognitive accessibility of explanations; alignment between explanations and operational needs |
| Supervisory Control | INSPECT (Governance) | Both | Clarity of control rights; responsibility allocation; perceived ability to intervene in AI-driven decisions |
| Policy and Governance Maturity | INSPECT (Governance) | TMT survey | Existence of formal AI policies; clarity of accountability; risk management practices |
| Ethical Compliance | INSPECT (Governance) | TMT survey | Perceived alignment with safety standards, ethical responsibility, and regulatory compliance |
| Collaboration | INSPECT (Governance) | Both | Quality of coordination between management and IT; shared understanding of AI risks and capabilities |
| Trust Calibration | Cross-layer | Both | Appropriate reliance on AI outputs; avoidance of blind trust or algorithm aversion; confidence adjusted to context and risk |
| Dimension | NIST AI RMF [25] | EU AI Act [30] | ISO/IEC 42001 [31] | OECD AI [32] | PRIME–INSPECT |
|---|---|---|---|---|---|
| Operational pipeline | Not specified | Not specified | Not specified | Not specified | Equations (1)–(6) |
| Risk classification | MAP function | Article 6 | Clause 6.1 planning | Not specified | Equation (3) |
| Explainability req. | Narrative | Article 13 | Annex A controls | Transparency and explainability principles | Equation (4) |
| Human oversight | Narrative | Article 14 | Narrative | Human-centered values and human oversight | Equation (5) |
| Trust calibration | Not addressed | Not addressed | Not addressed | Not addressed | Section 3.4 |
| Real-time decision loop | Not specified | Not specified | Not specified | Not specified | Equation (1) |
| Dimension | IAAAM [6] | CHAI-T [33] | Responsivity/Relational Trust [34] | PRIME–INSPECT |
|---|---|---|---|---|
| Primary focus | AI adoption (acceptance vs. avoidance) | Dynamic trust management in human–AI collaboration | Trust formation through interaction and system behavior | Integrated socio-technical architecture for AI-driven decision-making |
| Analytical level | Individual/organizational perception | Interaction process (human–AI collaboration) | Behavioral and relational dynamics | Multi-level (operational + governance) |
| Operational decision pipeline | Not specified | Not specified | Not specified | Structurally defined decision workflow |
| Real-time decision support | Not addressed | Partially addressed | Context-dependent | Explicitly supported |
| Trust conceptualization | Implicit (as adoption factor) | Dynamic and process-based | Emergent and interaction-based | Explicitly formalized as calibration mechanism |
| Trust calibration mechanisms | Not specified | Feedback-based trust adaptation | Experience-based trust adjustment | Embedded cross-layer calibration (Section 3.4) |
| Explainability integration | Not central | Indirect (via interaction) | Indirect | Explicitly integrated within the decision workflow |
| Human oversight | Not specified | Implicit (collaboration) | Emphasized | Explicitly embedded within operational decision stages |
| Governance integration | Not specified | Not specified | Not specified | Explicit (INSPECT layer) |
| Applicability to high-risk real-time systems | Limited | Partial | Partial | Illustratively demonstrated in high-risk industrial contexts |

