Review

Human-in-the-Loop AI Use in Ongoing Process Verification in the Pharmaceutical Industry

by
Miquel Romero-Obon
1,*,†,
Khadija Rouaz-El-Hajoui
1,2,*,†,
Virginia Sancho-Ochoa
1,
Ronny Vargas
3,4,
Pilar Pérez-Lozano
1,2,
Marc Suñé-Pou
1,2 and
Encarna García-Montoya
1,2
1
Department of Pharmacy and Pharmaceutical Technology and Physical Chemistry, Faculty of Pharmacy and Food Sciences, University of Barcelona, Av. Joan XXIII, 27-31, 08028 Barcelona, Spain
2
Pharmacotherapy, Pharmacogenetics and Pharmaceutical Technology Research Group, Bellvitge Biomedical Research Institute (IDIBELL), Av. Gran via de l’Hospitalet, 199-203, 08090 Barcelona, Spain
3
Department of Industrial Pharmacy, Faculty of Pharmacy, University of Costa Rica, Ciudad Universitaria Rodrigo Facio, San José 11501, Costa Rica
4
Pharmaceutical Research Institute (INIFAR), Faculty of Pharmacy, University of Costa Rica, Ciudad de la Investigación, San José 11501, Costa Rica
*
Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Information 2025, 16(12), 1082; https://doi.org/10.3390/info16121082
Submission received: 28 October 2025 / Revised: 24 November 2025 / Accepted: 3 December 2025 / Published: 6 December 2025
(This article belongs to the Special Issue Artificial Intelligence-Based Digital Health Emerging Technologies)

Abstract

The pharmaceutical industry’s pursuit of enhanced product quality, regulatory compliance, and operational efficiency has catalyzed the integration of Artificial Intelligence (AI) into Ongoing Process Verification (OPV) frameworks. This comprehensive review examines the synergistic application of Human-in-the-Loop (HITL) AI systems within OPV, contextualized by the evolving regulatory landscape, particularly the newly introduced Annex 22 of the European Union Good Manufacturing Practices (EU-GMP). The review delineates the sector’s strategic shift from traditional validation models toward dynamic, data-driven approaches that leverage AI for real-time monitoring, predictive analytics, and proactive process control. Central to this transformation is the HITL paradigm, which ensures that human expertise remains embedded in critical decision-making loops, thereby safeguarding patient safety, product quality, data integrity, and ethical responsibility. Annex 22 explicitly mandates deterministic behavior, traceability, and explainability for AI models used in GMP-critical applications, excluding adaptive and probabilistic systems from such contexts. The document also reinforces the necessity of multidisciplinary governance, rigorous validation protocols, and risk-based oversight throughout the AI lifecycle. This paper synthesizes current industry practices, regulatory expectations, and technological capabilities, offering a structured framework for compliant AI deployment in OPV. By aligning AI implementation with Annex 22 principles and existing GMP frameworks (e.g., Annex 11 and ICH Q9), the pharmaceutical sector can harness AI’s transformative potential while maintaining robust regulatory compliance. The review concludes with actionable recommendations for integrating HITL AI into OPV strategies, fostering a resilient, transparent, ethical, and future-ready manufacturing ecosystem.

Graphical Abstract

1. Introduction

Digital transformation is advancing across several domains of pharmaceutical manufacturing, including the integration of Artificial Intelligence (AI) into areas such as production operations, supervisory control, quality assurance, and strategic decision-support processes [1]. Although adoption remains highly heterogeneous across organizations, sites, and regulatory regions, this development holds meaningful transformative potential for the sector [2]. Within this broader context, one of the most promising applications is the use of AI in Ongoing Process Verification (OPV) [3]. OPV is a continuous quality assurance strategy and is conceptually consistent with the lifecycle approaches outlined in ICH Q8 [4] and ICH Q14 [5]. These continuous verifications align with the principles of modern pharmaceutical continuous manufacturing, first introduced under the FDA’s Process Analytical Technology (PAT) framework: real-time process understanding and control enable data-driven monitoring of Critical Process Parameters and quality attributes, thereby enhancing product consistency, reducing variability, and supporting continuous improvement [6,7].
In this context, Human-in-the-Loop (HITL) AI systems have emerged as a pivotal approach, combining the computational power of AI with human oversight to ensure regulatory compliance, interpretability, and accountability. HITL configurations are particularly relevant in GMP-regulated environments, where decisions affecting patient safety, product quality, and data integrity must be transparent and traceable [8].
Annex 22 to Volume 4 of the EU GMPs marks a significant regulatory milestone. This annex provides targeted guidance on the use of AI in GMP-critical applications, complementing existing frameworks and establishing stringent requirements for model validation, explainability, and performance monitoring, and explicitly excludes adaptive, probabilistic, and generative AI models from critical use cases [8].
This review explores the intersection of HITL AI and OPV in the pharmaceutical sector, analyzing current practices, technological capabilities, and regulatory expectations. It aims to provide a structured framework for the compliant integration of AI into OPV strategies, ensuring that innovation is harmonized with the rigorous demands of pharmaceutical quality systems.

2. Methodology

The search included the Science Direct and PubMed databases within the study period from April 2019 to October 2025. Terms and combinations used were “neural network”, “machine learning”, “deep learning”, “data mining”, “knowledge discovery”, “pharmaceutical sector”, “pharmaceutical industry”, “ongoing process verification”, “continuous process verification”, “GMP annex 22”, “human in the loop”, and “artificial intelligence”.
Consensus Pro was used to explore the following fields:
AI adaptation in the pharmaceutical industry and most used strategies [9,10,11,12].
Compliance challenge assessment and other potential barriers that could slow down implementation in the GMP-regulated environment [13,14,15,16].
The level of scientific agreement on regulatory compliance of AI systems, including key claims and evidence levels (see Figure 1 and Figure 2).

3. State of the Art: Towards the Augmented OPV

OPV represents a paradigm shift in pharmaceutical manufacturing, transitioning from static, retrospective validation models to dynamic, data-driven quality assurance [17]. Traditionally, OPV relied on periodic sampling and the statistical analysis of historical data to verify process consistency. The principles of continued process verification were formalized in ISPE guidance [18] and further reinforced by the FDA’s vision for continuous manufacturing [19]. Those guidelines emphasized that effective monitoring requires data collected with sufficient frequency and analytical sensitivity to detect small or emerging process trends. Therefore, retrospective approaches based on periodic sampling may provide limited data granularity and responsiveness, thus potentially compromising the detection of subtle process shifts or emerging risks in real time [20].
The integration of AI into OPV frameworks has enabled a more proactive and predictive approach to process control. AI models, particularly those based on supervised learning and multivariate analysis, can continuously monitor Critical Process Parameters (CPPs) and Critical Quality Attributes (CQAs), identifying deviations and trends that may compromise product quality. These models can ingest high-frequency data streams from manufacturing execution systems (MES), laboratory information management systems (LIMS), and Process Analytical Technology (PAT) sensors, thereby facilitating real-time decision support [21,22].
Moreover, AI-enhanced OPV systems may accelerate the detection of complex, non-linear relationships between process variables—relationships that traditional statistical methods may struggle to identify in real time [23]. This capability could be particularly valuable in high-variability pharmaceutical processes, helping to reduce batch failures and reduce waste through earlier anomaly detection.
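As a concrete illustration of such earlier anomaly detection, the following sketch applies a one-sided CUSUM chart to a simulated CPP series. The target, slack (k), and decision threshold (h) are illustrative assumptions, not validated settings:

```python
# Minimal one-sided CUSUM sketch for detecting upward drift in a
# Critical Process Parameter (CPP). Parameter values are illustrative.

def cusum_upper(readings, target, k, h):
    """Return the index at which the upper CUSUM exceeds h, or None."""
    s = 0.0
    for i, x in enumerate(readings):
        # Accumulate only excursions above target + slack; reset at zero.
        s = max(0.0, s + (x - target - k))
        if s > h:
            return i
    return None

# Stable readings followed by a slow upward drift.
stable = [100.0, 99.8, 100.1, 99.9, 100.2]
drift = [100.6, 101.1, 101.5, 102.0, 102.4]
alarm_at = cusum_upper(stable + drift, target=100.0, k=0.25, h=2.0)
```

Because the statistic accumulates small deviations over time, the alarm fires during the drift phase even though no single reading is dramatically out of range, which is the kind of subtle shift that periodic sampling tends to miss.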
The evolution toward AI-augmented OPV is not merely technological but also strategic. It aligns with the principles of Quality by Design and continuous improvement, fostering a manufacturing environment that is both agile and compliant.

3.1. Implementation Maturity Level

Recent industry analyses indicate a varied maturity in the implementation of OPV across the pharmaceutical sector. Approximately 20% of manufacturers operate without formal OPV frameworks, relying instead on legacy validation models. In contrast, traditional OPV—based on statistical process control and retrospective data analysis—has achieved widespread adoption, with an estimated maturity level of 65%. The integration of AI into OPV, while promising, remains in a developmental phase, with only 35% of companies actively piloting or deploying AI-enhanced OPV systems (see Appendix A, Figure A1).
These data are supported by reports from Pugatch Consilium (2024) [24], Forbes (2025) [25], and Pharma Focus America (2025) [26], which collectively highlight the regulatory, technical, and organizational barriers that constrain full-scale AI adoption in GMP environments.
Traditional OPV practices are grounded in the principles of ICH Q8/Q10 [4,27] and ICH Q14 [5]. These frameworks are further reinforced by the FDA’s Process Analytical Technology (PAT) Framework [6] and the CDER’s Continuous Manufacturing Journey [19]. Implementation guidance from the International Society for Pharmaceutical Engineering (ISPE) [18] and the FDA’s Process Validation principles [6] have further standardized process monitoring and lifecycle control across manufacturing stages. Table 1 summarizes the current maturity levels of OPV implementation across the industry.

3.2. Current Applications for AI in Pharmaceutical OPV

The application of AI in OPV spans a diverse array of use cases, each contributing to enhanced process understanding, control, and compliance. Below are some representative examples of key domains within the pharmaceutical industry where AI has demonstrated both its potential and significant impact:

3.2.1. Real-Time Process Monitoring

AI models are increasingly used to monitor real-time data from manufacturing processes, enabling the early detection of deviations and facilitating timely corrective actions. For example, convolutional neural networks (CNNs) have been applied to image-based analysis of granule morphology, while regression models predict dissolution profiles based on in-process measurements [30]. These systems often operate within a Human-in-the-Loop (HITL) framework, where operators validate AI outputs before implementation.

3.2.2. Predictive Maintenance

Machine learning algorithms trained on equipment sensor data can forecast mechanical failures, allowing for preventive maintenance scheduling. This reduces unplanned downtime and ensures consistent process performance. Predictive maintenance is particularly valuable in sterile manufacturing environments, where equipment reliability directly impacts product sterility and compliance [29].

3.2.3. Automated Visual Inspection

Deep learning models have been deployed for the automated inspection of injectable products, detecting defects such as particulates, cracks, and fill volume inconsistencies. These systems outperform traditional rule-based vision systems in sensitivity and specificity, and when integrated with HITL oversight, they maintain regulatory acceptability [29,31].

3.2.4. Deviation Analysis and CAPA Optimization

Natural Language Processing (NLP) techniques are used to analyze deviation reports and identify root causes. Clustering algorithms group similar deviations, while classification models suggest corrective and preventive actions. These tools accelerate investigation timelines and improve CAPA effectiveness, contributing to a more resilient quality system [32].
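The grouping step described above can be caricatured with a purely lexical similarity measure; real deviation-analysis systems rely on trained NLP models, and the report texts, similarity threshold, and greedy grouping rule below are illustrative assumptions:

```python
# Toy sketch of grouping deviation reports by lexical similarity,
# standing in for the clustering step described in the text.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_reports(reports, threshold=0.4):
    """Greedy single-pass grouping: each report joins the first
    existing group whose seed report is similar enough."""
    bags = [Counter(r.lower().split()) for r in reports]
    groups = []  # list of lists of report indices
    for i, bag in enumerate(bags):
        for g in groups:
            if cosine(bags[g[0]], bag) >= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

reports = [
    "tablet weight out of specification during compression",
    "weight out of specification during tablet compression run",
    "particulate matter observed in vial after filling",
]
clusters = group_reports(reports)  # the two weight deviations group together
```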

3.2.5. Digital Twins and Process Simulation

AI-driven digital twins simulate manufacturing processes under varying conditions, enabling virtual experimentation and optimization. These models support process design, scale-up and technology transfer, and are increasingly used in conjunction with OPV to validate process robustness [33,34].

3.2.6. Data Integration and Knowledge Management

AI facilitates the integration of heterogeneous data sources, including batch records, sensor outputs, and laboratory results. Knowledge graphs and semantic models enable the contextualization and retrieval of relevant information, supporting decision-making across the product lifecycle [31,33].

4. HITL Architecture Overview

The HITL paradigm is a critical architectural approach for deploying AI systems in regulated pharmaceutical environments. It ensures that human expertise remains central to decision-making processes, particularly in contexts where AI outputs influence product quality, patient safety, or regulatory compliance. HITL architecture is designed to balance automation with oversight, enabling AI to augment human capabilities without replacing them.

4.1. Distinguishing HITL AI from Classical Decision-Support Systems

The separation between HITL/AI and traditional decision-support systems lies primarily in the degree of autonomy, interaction dynamics, and lifecycle governance [35].
Classical decision-support systems are typically deterministic or rule-based, providing static recommendations derived from predefined algorithms or structured data queries [36]. Human operators interpret these outputs and make final decisions without influencing the system’s internal logic during operation. HITL/AI systems, by contrast, integrate human feedback as an active component of the model’s inference or learning process [35]. This feedback can occur at various stages—training, validation, or real-time decision-making—enabling adaptive behavior and continuous improvement [37].
HITL/AI introduces iterative retraining cycles and dynamic parameter adjustments based on human input, which necessitates robust model governance frameworks [35]. These frameworks must address version control, bias mitigation, and explainability across evolving states of the model. Classical systems generally operate under a fixed specification, with lifecycle management focused on software updates rather than epistemic uncertainty or emergent behaviors [36].
For HITL/AI, accountability is more complex, because decision provenance involves both algorithmic outputs and human interventions. Regulatory compliance must therefore encompass the traceability of human feedback loops, auditability of model updates, and risk controls for unintended consequences [37]. In contrast, classical systems fall under conventional software validation regimes, where responsibility is primarily linked to deterministic logic and static datasets [36,37].
HITL/AI systems exhibit a higher degree of epistemic uncertainty due to their adaptive nature, requiring continuous monitoring and performance validation [35]. Classical decision-support systems, being non-adaptive, present a more predictable risk landscape, often governed by pre-established thresholds and deterministic error bounds [36].
In summary, the distinction is not merely technical but systemic: HITL/AI transforms decision support into a co-evolutionary process between humans and algorithms, demanding new paradigms for governance, accountability, and lifecycle oversight.

4.2. Architectural Layers

A typical HITL AI system in pharmaceutical OPV comprises the following layers [8,31,38]:
Data Acquisition Layer: Collects real-time process data from sensors, manufacturing execution systems (MESs), and laboratory information management systems (LIMSs). This layer ensures data integrity and traceability, complying with ALCOA+ principles.
Preprocessing and Feature Engineering Layer: Cleans, normalizes, and transforms raw data into structured inputs for AI models. Feature selection is often guided by domain experts to ensure relevance and interpretability.
AI Model Layer: Hosts deterministic models such as decision trees, support vector machines, or rule-based systems. These models are trained on historical data and validated against predefined acceptance criteria. Self-adaptive models are excluded per Annex 22.
Human Oversight Layer: Provides interfaces for human operators to review, validate, and override AI outputs. This layer includes dashboards, alerts, and explainability tools (e.g., SHAP, LIME) that justify model decisions and display confidence scores.
Decision Execution Layer: Implements approved decisions into the manufacturing process, either manually or via automated control systems. All actions are logged and auditable.
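A skeletal sketch of how the model, oversight, and execution layers might interact in code follows. The class name, confidence floor, decision rule, and audit-record fields are assumptions for illustration, not a prescribed implementation:

```python
# Skeletal HITL sketch: a deterministic model layer produces a decision
# and confidence score; low-confidence outputs are escalated to a human,
# and every interaction is written to an audit log.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class HITLSystem:
    confidence_floor: float = 0.90  # below this, escalate to a human
    audit_log: list = field(default_factory=list)

    def predict(self, features):
        # AI model layer stand-in: a fixed deterministic rule.
        score = 1.0 if features["assay_pct"] >= 98.0 else 0.6
        decision = "release" if score >= self.confidence_floor else "review"
        return decision, score

    def human_review(self, decision, score, operator, verdict=None):
        # Human oversight layer: the operator confirms or overrides,
        # and the interaction is logged for auditability.
        final = verdict if verdict else decision
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "ai_decision": decision,
            "confidence": score,
            "operator": operator,
            "final": final,
        })
        return final

system = HITLSystem()
decision, score = system.predict({"assay_pct": 97.2})
final = system.human_review(decision, score, "QA-001", verdict="reject")
```

The key design point is that the model never executes a decision directly: its output and confidence score pass through the oversight layer, and the audit trail captures both the AI recommendation and the human verdict.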

4.3. Interaction Modes

HITL systems support various interaction modes depending on the criticality of the decision and the confidence level of the AI output [8,31,38]:
Supervisory Mode: AI provides recommendations, and humans make final decisions. Common in batch release and deviation management.
Collaborative Mode: AI and humans jointly analyze data, with humans validating AI-generated insights. Used in root cause analysis and trend detection.
Override Mode: Humans can reject or modify AI outputs based on contextual knowledge or regulatory constraints. Essential for compliance with Annex 22.

4.4. Explainability and Transparency

Explainability is a cornerstone of HITL architecture. Models must provide interpretable outputs that can be understood by non-technical stakeholders. Techniques include the following [8,31,38]:
SHAP (Shapley Additive Explanations): Quantifies the contribution of each feature to a prediction.
LIME (Local Interpretable Model-agnostic Explanations): Generates local approximations of model behavior.
These tools are integrated into the oversight layer to support informed decision-making and regulatory audits (see Figure 3).
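For the special case of a purely linear model, Shapley attributions have a closed form, w_i(x_i − E[x_i]), which makes the additivity property (contributions plus base value equal the prediction) easy to verify. The weights, means, and feature values below are invented for illustration:

```python
# Exact Shapley contributions for a linear model (no interactions):
# each feature's contribution is w_i * (x_i - E[x_i]), and the
# contributions plus the base value reconstruct the prediction exactly.

def linear_shap(weights, x, background_means):
    return [w * (xi - mu) for w, xi, mu in zip(weights, x, background_means)]

weights = [0.5, -1.2, 2.0]
means = [10.0, 4.0, 1.0]   # background (training-set) feature means
x = [12.0, 3.0, 1.5]       # instance being explained
intercept = 3.0

base_value = intercept + sum(w * m for w, m in zip(weights, means))
contribs = linear_shap(weights, x, means)
prediction = intercept + sum(w * xi for w, xi in zip(weights, x))
# base_value + sum(contribs) equals prediction (the SHAP additivity check)
```

Libraries such as SHAP generalize this idea to non-linear models via sampling and approximation; the additivity check above is the property auditors can use to confirm an explanation is internally consistent.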

4.5. Governance and Lifecycle Management

HITL systems necessitate a comprehensive governance and lifecycle management framework to ensure operational integrity, regulatory compliance, and ethical oversight throughout their deployment and evolution. This framework must address both the technical and procedural dimensions of human–AI collaboration [39].
A key component is role-based access control, which restricts system access to authorized personnel based on predefined roles and qualifications. This ensures that only individuals with the requisite expertise are permitted to review, validate, or override AI-generated outputs, thereby minimizing the risk of misuse or error [39].
To maintain transparency and accountability, audit logging mechanisms must be implemented. These logs should capture all interactions between human operators and AI systems, including decisions made, overrides performed, and feedback provided. Such records are essential for post hoc analysis, compliance verification, and continuous improvement [13,40].
Change management protocols are also critical. Any modification to the AI system—whether it involves model retraining, algorithmic updates, or deployment in a new operational context—must follow a structured change control process. This includes documentation of the rationale for change, impact assessments, validation procedures, and stakeholder approvals prior to implementation [39].
Finally, periodic performance evaluations should be conducted to assess the system’s effectiveness, fairness, and alignment with intended outcomes. These reviews may include quantitative metrics (e.g., accuracy, latency, and error rates) as well as qualitative assessments (e.g., user satisfaction, ethical considerations) [41]. The results should inform decisions regarding system recalibration, retraining, or decommissioning [42].
Together, these governance elements form the backbone of a resilient HITL architecture, enabling safe, transparent, and adaptive integration of human judgment into AI-driven processes.

5. Regulatory Challenges in AI-Driven OPV

The integration of AI into GMP-regulated pharmaceutical manufacturing introduces a complex array of regulatory challenges. These challenges are addressed in the newly drafted Annex 22 [4], which outlines the conditions under which AI may be used in critical operations.
Table 2 shows a comparative integration of the EMA’s perspective, alongside the FDA and WHO, in terms of AI compliance frameworks.
Table 2. FDA, EMA, and WHO views on AI use in the pharmaceutical industry.

Regulatory Nature and Scope. FDA (U.S.): Non-binding, risk-based guidance (e.g., 2023 SaMD Action Plan [19], 2025 AI-in-drug draft guidance [6]). EMA (EU/EFTA): EU-centered, risk-tiered framework under MDR/IVDR; EMA reflection papers (September 2024) align with the EU AI Act [42,43]; implemented via the network-wide plan 2023–2028 [42]. WHO/International: Advisory, principle-based (safety, transparency) for LMICs; fosters global harmonization [44].

Lifecycle Governance. FDA: Emphasizes Change Control Plans (e.g., PCCP), real-world evidence, and post-market monitoring [6,19]. EMA: Reflection Paper specifies governance over drug discovery, clinical trials, post-market surveillance, data integrity, and GxP compliance; MDR/IVDR uses EN 62304 lifecycle standards [42,45]. WHO: Recommends documentation of study design, human oversight, and lifecycle validation consistent with other national frameworks [44].

Core Focus Areas. FDA: Contextual credibility, interpretability, and traceability of adaptive systems [15,40]. EMA: Risk-based model (“high patient risk” vs. “high regulatory impact”); includes bias control, explainability, and human oversight; also issues qualification opinions for AI-based diagnostics (AIM-NASH) [21,22,42]. WHO: Emphasizes oversight alignment with the EU and FDA, data protection, cybersecurity, and equitable access [44].

Structure and Certainty. FDA: Flexible, dialog-based, case-by-case approval; fosters innovation but increases unpredictability [6,19]. EMA: Structured, tiered, and formal; clarifies compliance thresholds at each lifecycle stage [42,43]. WHO: Promotes harmonization but leaves implementation to national authorities; encourages regulatory sandboxes [44].

Alignment Efforts. FDA: Invites public comments; collaborates under the NIST AI Risk Management Framework [6,19]. EMA: Harmonized across the EU via the HMA–EMA workplan; EU AI Act compliance begins in 2024 [42,43]. WHO: Supports cross-border standardization via EU–US, HMA, and global forums [44].

Industry Impact. FDA: US–EU approval divergence complicates global submissions; the flexible FDA approach promotes early collaboration [6,19]. EMA: The EU’s predictability may slow early adoption but ensures stricter validation and clarity [42,43]. WHO: Convergence on risk-based principles, divergence on implementation [44].

5.1. Scope and Model Restrictions

Annex 22 applies exclusively to static, deterministic AI models (those that do not adapt or learn post-deployment and produce consistent outputs for identical inputs) [4,23,24]. This restriction ensures reproducibility and control, which are foundational to GMP compliance. Dynamic models, probabilistic systems, and generative AI, including Large Language Models (LLMs), are explicitly excluded from critical applications due to their inherent unpredictability.
Although these limitations reinforce control and traceability, they also restrict the range of AI technologies that can be deployed in GMP-critical environments. In particular, the exclusion of adaptive, probabilistic, and generative models constrains the use of advanced tools that could support continuous optimization, early detection of emerging process trends, or automated knowledge extraction. Recent regulatory analyses highlight that such advanced AI systems must be subject to strict limitations to maintain the reproducibility and validation standards required in pharmaceutical manufacturing [13].
To mitigate these constraints, Annex 22 allows for the use of non-deterministic or generative systems in non-critical roles, provided they operate under a Human-in-the-Loop (HITL) paradigm. In such cases, LLMs and adaptive models may be applied to supportive tasks, such as preliminary data exploration, initial risk identification, non-GMP-critical document review, or decision-support activities, if their outputs do not directly influence product quality or real-time process control. This approach aligns with both industry and regulatory recommendations, which indicate that employing advanced AI in non-critical functions—combined with human supervision and without direct impact on quality attributes—is a viable pathway for leveraging its capabilities while preserving GMP compliance [39,46].
To facilitate understanding of the regulatory boundaries established by Annex 22 [8], Figure 4 provides a visual summary of its key provisions, including model eligibility criteria, governance requirements, and lifecycle controls applicable to AI systems in GMP-critical contexts.

5.2. HITL Requirements

In cases where AI is used to support rather than automate decision-making, Annex 22 mandates an HITL configuration [8,31,38]. The human operator must be adequately trained, their responsibilities clearly defined, and their performance monitored like any manual GMP process. This ensures that accountability remains with qualified personnel and that AI outputs are subject to expert review.

5.3. Validation and Performance Criteria

AI models must undergo rigorous validation against predefined acceptance criteria, including metrics such as accuracy, sensitivity, specificity, and F1 scores [8,31,38]. These criteria should be tailored to the model’s intended application and approved by relevant subject matter experts. Crucially, the model’s performance must meet or exceed that of the process it is designed to replace. These validation expectations and recommended performance metrics and thresholds are summarized in Table 3.
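The metrics named above follow directly from a confusion matrix, as the following sketch shows; the counts are illustrative, not drawn from any real validation study:

```python
# Validation metrics computed from confusion-matrix counts
# (tp = true positives, fp = false positives, fn = false negatives,
#  tn = true negatives). Counts below are invented for illustration.

def classification_metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn)   # recall / true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "f1": f1}

m = classification_metrics(tp=90, fp=10, fn=10, tn=90)
```

In an acceptance protocol, each of these values would be compared against a predefined threshold agreed on by subject matter experts before the model is released for use.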

5.4. Proposal for Deterministic Behavior Verification in AI Models: A Suitability-Inspired Testing Framework

In the context of AI model deployment, particularly in critical applications where reproducibility and reliability are paramount, it is essential to ensure that model behavior remains deterministic under controlled conditions. While stochastic elements are inherent to many machine learning algorithms, the operational execution of a trained model should ideally yield consistent outputs when provided with identical inputs and environmental configurations [8,40].
We propose a verification framework inspired by the suitability test commonly applied to analytical laboratory instruments, as described in the United States Pharmacopeia (USP) General Chapter 1058, Analytical Instrument Qualification [47]. These tests are designed to confirm that an instrument performs within predefined specifications despite inherent variability in environmental conditions or sample characteristics. Analogously, the deterministic behavior of an AI model can be assessed through a structured protocol that includes the following:
  • Controlled Input Repetition: Repeated execution of the model using identical input data and fixed system parameters (e.g., hardware, software versions, and random seed initialization) to detect any output variability.
  • Environmental Stability Assessment: Evaluation of model behavior across different but nominally equivalent computational environments (e.g., containerized deployments, virtual machines) to identify hidden dependencies or non-deterministic execution paths.
  • Tolerance Threshold Definition: Establishment of acceptable output deviation margins, if applicable, particularly for models involving floating-point operations or probabilistic components. These thresholds must be justified based on domain-specific requirements.
  • Logging and Traceability: Comprehensive logging of execution metadata, including system states, library versions, and runtime configurations, to facilitate reproducibility and forensic analysis in case of discrepancies.
  • Benchmarking Against Reference Outputs: Comparison of current model outputs with a validated reference set to detect regressions or unintended behavioral shifts.
This framework aims to provide a systematic approach for verifying the deterministic integrity of AI models prior to their integration into regulated or high-stakes environments. By adopting principles from laboratory instrument validation, we can enhance the trustworthiness and auditability of AI systems.
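Steps such as controlled input repetition and benchmarking against reference outputs can be sketched as a small verification harness; the model callable, digesting scheme, and run count below are stand-ins under stated assumptions, not a normative protocol:

```python
# Determinism-check sketch: run a model callable repeatedly on a fixed
# input set, hash the outputs, and require a single digest across runs
# (optionally matching a stored, validated reference digest).
import hashlib
import json

def output_digest(model, inputs):
    """Hash the model's outputs over a fixed input set."""
    outputs = [model(x) for x in inputs]
    blob = json.dumps(outputs, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

def verify_determinism(model, inputs, runs=5, reference=None):
    digests = {output_digest(model, inputs) for _ in range(runs)}
    if len(digests) != 1:
        return False  # run-to-run variability detected
    if reference is not None and digests != {reference}:
        return False  # regression against the validated reference
    return True

model = lambda x: round(0.8 * x + 1.0, 6)  # deterministic stand-in
inputs = [1.0, 2.5, 4.0]
ok = verify_determinism(model, inputs, runs=5)
```

In practice the reference digest would be produced during validation and stored under change control, so that any later behavioral shift, however small, surfaces as a digest mismatch rather than going unnoticed.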

5.5. Data Integrity and Independence

Annex 22 introduces strict controls on test data management, requiring stratification, subgroup analysis, and independence from training datasets. Access controls and audit trails must be implemented to prevent data leakage and ensure traceability [8,41,44].
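One simple way to screen for the training/test independence described above is to hash canonicalized records and look for overlap between the two sets; the record fields and values below are invented for illustration:

```python
# Data-leakage screen: hash each record in canonical form and flag any
# test record whose hash also appears in the training set.
import hashlib

def record_hash(record: dict) -> str:
    canonical = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def leaked_records(train, test):
    train_hashes = {record_hash(r) for r in train}
    return [r for r in test if record_hash(r) in train_hashes]

train = [{"batch": "A101", "assay": 99.1}, {"batch": "A102", "assay": 98.7}]
test = [{"batch": "A103", "assay": 99.4}, {"batch": "A102", "assay": 98.7}]
leaks = leaked_records(train, test)  # the duplicated A102 record is flagged
```

Exact-duplicate detection is only a first line of defense; near-duplicates and correlated records (e.g., consecutive batches of the same campaign) require stratification and subgroup analysis as Annex 22 requires.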

5.6. Explainability and Confidence Scores

A key innovation in Annex 22 is the requirement for model explainability. Techniques such as SHAP or LIME must be used to justify decisions, and confidence scores must be logged to assess prediction reliability. This supports transparency and enables human operators to intervene when model certainty is low [8,31,38].

5.7. Operational Oversight

Once deployed, AI models must be subject to change control, performance monitoring, and input space validation. Any modification to the model or its operational context must trigger a reassessment of its validity.

6. Governance Best Practices for HITL AI in GMP Environments

Effective governance is essential for the successful and compliant deployment of HITL AI systems in pharmaceutical manufacturing. Governance frameworks must ensure that AI systems are transparent, auditable, and aligned with regulatory expectations, particularly those outlined in Annex 22 of the EU GMP guidelines.

6.1. Multidisciplinary Oversight

Governance should be led by a cross-functional team comprising the following:
Quality Assurance;
Information Technology;
Mathematics, Statistics, Data Scientists, and AI specialists;
Regulatory Affairs;
Process Engineering.
This ensures that decisions regarding AI deployment are informed by diverse expertise and that risks are assessed from multiple perspectives [8,31].

6.2. Role Definition and Accountability

The clear definition of roles is critical. Responsibilities must be assigned for the following:
Model development and validation;
Human oversight and decision review;
Data management and integrity;
Change control and lifecycle monitoring.
Each role must be documented in SOPs and training records to ensure traceability and accountability [38].

6.3. Lifecycle Management

AI systems must be governed across their entire lifecycle:
Design and Development: Models must be built using GMP-compliant data and documented methodologies.
Validation: Performance must be benchmarked against predefined acceptance criteria.
Deployment: HITL interfaces must be tested for usability and reliability.
Monitoring: Ongoing performance reviews and input space validation are required.
Retirement: Decommissioning must follow formal procedures to prevent unintended use.

6.4. Documentation and Auditability

All aspects of the AI system, including training data, model architecture, validation results, and human interactions, must be documented [38,41]. Audit trails should be enabled for the following:
  • Model predictions and confidence scores;
  • Human decisions and overrides;
  • System updates and retraining events.
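These audit-trail events can be captured as structured, append-only records. A minimal sketch follows; the field names and event types are illustrative, not a regulatory schema:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AuditRecord:
    """One audit-trail entry; fields are illustrative examples of the
    items listed above (predictions, overrides, retraining events)."""
    timestamp: str
    event_type: str            # e.g., "prediction", "human_override", "retraining"
    model_version: str
    prediction: Optional[float]
    confidence: Optional[float]
    operator_id: Optional[str]
    rationale: Optional[str]

def log_event(trail: list, **fields) -> AuditRecord:
    """Serialize the event and append it to an append-only trail."""
    record = AuditRecord(timestamp=datetime.now(timezone.utc).isoformat(), **fields)
    trail.append(json.dumps(asdict(record)))
    return record
```

In a GMP system the trail would live in a secured, tamper-evident store rather than an in-memory list; the point of the sketch is that every prediction and every human decision carries a timestamp, a model version, and an accountable identity.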

6.5. Risk Management

Governance must incorporate Quality Risk Management (QRM) principles from ICH Q9 [8,45,48]. Risk assessments should evaluate the following:
  • Impact of incorrect predictions;
  • Reliability of human oversight;
  • Data integrity vulnerabilities;
  • Regulatory non-compliance scenarios.
Mitigation strategies must be implemented and periodically reviewed.
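ICH Q9 does not prescribe a single tool, but FMEA-style risk scoring is one commonly used approach for assessments of this kind. A minimal sketch with illustrative scales and an illustrative threshold:

```python
def risk_priority_number(severity: int, occurrence: int, detectability: int) -> int:
    """FMEA-style RPN on 1-10 scales: higher severity and occurrence, and
    poorer detectability (higher score), yield a higher priority number."""
    for factor in (severity, occurrence, detectability):
        if not 1 <= factor <= 10:
            raise ValueError("each factor must be on a 1-10 scale")
    return severity * occurrence * detectability

def classify(rpn: int, review_threshold: int = 100) -> str:
    # Illustrative threshold; real acceptance limits come from the QRM plan.
    return "requires mitigation" if rpn >= review_threshold else "acceptable with monitoring"
```

For example, an incorrect prediction with high patient impact (severity 8), moderate likelihood (occurrence 5), and weak detectability (4) scores 160 and would demand a documented mitigation, whereas a well-detected, low-impact failure mode would fall below the threshold and be handled through routine monitoring.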

6.6. Ethical AI Principles

Balancing transparency and fairness in AI systems involves reconciling two complementary yet sometimes conflicting objectives. Transparency enables accountability through interpretability, documentation, and traceability, but excessive disclosure can expose proprietary logic or sensitive data [42,43]. Fairness, on the other hand, seeks to mitigate bias and ensure equitable outcomes, often requiring algorithmic adjustments that reduce model simplicity and, in some cases, obscure decision pathways [14,40,49].
The challenge lies in managing this trade-off without compromising regulatory compliance or ethical integrity. Effective strategies include structured documentation (e.g., model cards), fairness-aware learning techniques, and privacy-preserving methods that maintain openness while safeguarding confidentiality [9,10,50]. Human-in-the-Loop oversight remains essential to contextualize fairness interventions and address interpretability gaps [10,14].
Ultimately, transparency and fairness should be treated as interdependent principles within a risk-based governance framework calibrated to the application context and stakeholder expectations [16,43]. This integrated approach ensures that AI systems in the pharmaceutical industry operate ethically, remain regulatory compliant, and are socially responsible.

7. Case Studies on HITL AI in Pharmaceutical OPV

To illustrate the practical application of HITL AI in pharmaceutical OPV, this section presents selected case studies from industry implementations. These examples demonstrate how AI can enhance process control while maintaining regulatory compliance through human oversight.

7.1. Case Study A: Real-Time Monitoring of Granulation Process

Deployment of an HITL AI system to monitor granule size distribution in a continuous granulation line [30]. The AI model used image analysis and regression techniques to predict granule quality. Human operators reviewed predictions via a dashboard with SHAP-based explanations and confidence scores.
Outcome: Improved batch consistency and reduced deviations.
Governance: QA reviewed model outputs daily; override capability was retained.
Compliance: Model validated under Annex 22 [8]; deterministic behavior ensured.
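For a regression model with (approximately) independent features, per-prediction SHAP contributions of the kind shown on such a dashboard reduce to coef_i × (x_i − mean_i). The sketch below uses illustrative coefficients and feature names, not the model from the case study:

```python
# Illustrative linear model for granule-size prediction; coefficients,
# intercept, and training-set means would come from the validated model.
names = ["inlet_air_temp_c", "spray_rate_g_min", "binder_pct"]
coef = [0.8, -0.3, 0.5]
intercept = 120.0
feature_means = [60.0, 100.0, 2.5]

def explain(x: list) -> dict:
    """Exact Shapley attributions for a linear model with independent
    features: phi_i = coef_i * (x_i - mean_i); the prediction equals
    the base value plus the sum of contributions."""
    phi = {n: c * (xi - m) for n, c, xi, m in zip(names, coef, x, feature_means)}
    prediction = intercept + sum(c * xi for c, xi in zip(coef, x))
    base_value = intercept + sum(c * m for c, m in zip(coef, feature_means))
    return {"prediction": prediction, "base_value": base_value, "contributions": phi}
```

An operator reviewing the dashboard sees not only the predicted granule size and a confidence score but also which parameter pushed the prediction away from the baseline, which is what makes a documented accept-or-override decision possible.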

7.2. Case Study B: Predictive Maintenance in Sterile Filling

Implementation of an HITL AI system to predict equipment failures in filling lines for sterile manufacturing [29]. The system used sensor data and time-series analysis to flag anomalies. Maintenance engineers received alerts and could validate or dismiss predictions.
Outcome: Reduced downtime and improved equipment reliability.
Governance: Engineering and QA jointly managed model performance.
Compliance: AI used in non-critical support role; HITL ensured no direct impact on product quality.
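The time-series anomaly flagging described here can be approximated with a rolling z-score detector. A minimal sketch follows; the window size and threshold are illustrative tuning parameters, not values from the cited system:

```python
import statistics
from collections import deque

def make_anomaly_flagger(window: int = 50, z_threshold: float = 4.0):
    """Rolling z-score detector: flags a sensor reading that deviates from
    the recent window mean by more than `z_threshold` standard deviations.
    Flagged readings are excluded from the baseline so a spike does not
    inflate the statistics."""
    history = deque(maxlen=window)

    def flag(reading: float) -> bool:
        anomalous = False
        if len(history) >= 10:  # require a minimal baseline before flagging
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history)
            if stdev > 0 and abs(reading - mean) / stdev > z_threshold:
                anomalous = True
        if not anomalous:
            history.append(reading)
        return anomalous

    return flag
```

In the HITL arrangement, a `True` result raises an alert that the maintenance engineer validates or dismisses; the flag itself triggers no automatic action on the filling line.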

7.3. Case Study C: Deviation Root Cause Analysis

Use of HITL AI to assist in the root cause analysis of deviations [32]. The system clustered historical deviation data and suggested potential causes. Investigators reviewed the suggestions and documented their decisions [21,22].
Outcome: Accelerated investigations and improved CAPA effectiveness.
Governance: Regulatory Affairs ensured traceability of decisions.
Compliance: AI outputs were advisory; human judgment remained primary.
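The source does not specify the clustering method, so as one transparent possibility, similar historical deviations can be surfaced by word-overlap (Jaccard) similarity over their descriptions. The sketch below uses hypothetical deviation IDs, and its outputs are advisory, mirroring the HITL arrangement above:

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |intersection| / |union| of two word sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def suggest_similar(new_desc: str, history: dict, top_n: int = 3,
                    min_sim: float = 0.2) -> list:
    """Rank historical deviations by word-overlap similarity to a new
    description; investigators review the ranked suggestions."""
    new_words = set(new_desc.lower().split())
    scored = [(jaccard(new_words, set(desc.lower().split())), dev_id)
              for dev_id, desc in history.items()]
    return [(dev_id, round(sim, 2))
            for sim, dev_id in sorted(scored, reverse=True)
            if sim >= min_sim][:top_n]
```

A production system would use richer text representations, but the governance point is unchanged: the AI proposes candidate precedents, and the investigator decides which, if any, inform the root cause and CAPA.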

7.4. Summary of Cases A, B, and C

The summary of the three HITL AI case studies in pharmaceutical OPV is shown in Table 4.

8. Implementation Framework for HITL AI in Ongoing Process Verification

Deploying HITL AI systems in pharmaceutical OPV requires a structured implementation framework that aligns technological capabilities with regulatory expectations. This section outlines a phased approach to ensure successful and compliant integration based on the best practices currently applied.
Phase I: Feasibility and Risk Assessment
  • Process Selection: Identify OPV processes suitable for AI augmentation (e.g., granulation, blending, and filling).
  • Risk Analysis: Apply ICH Q9 principles to assess risks associated with AI deployment.
  • Stakeholder Engagement: Involve QA, IT, regulatory, and operations teams early in the planning.
Phase II: Model Development and Validation
  • Data Preparation: Ensure data integrity, traceability, and representativeness.
  • Model Design: Use deterministic algorithms (e.g., decision trees, regression models) compliant with Annex 22 [8].
  • Validation Protocols: Define acceptance criteria (e.g., accuracy, F1 score) and perform cross-validation, in alignment with the FDA’s Continuous Manufacturing Initiative [7] and ISPE’s CPV framework [18].
  • Explainability Integration: Embed SHAP or LIME for model transparency.
Phase III: HITL Interface Design
  • User Interface: Develop dashboards for human review of AI outputs.
  • Override Mechanism: Implement controls for human intervention and decision logging.
  • Training Programs: Educate operators on AI behavior, limitations, and responsibilities.
Phase IV: Deployment and Monitoring
  • Change Control: Document deployment procedures and approval workflows.
  • Performance Monitoring: Track model accuracy, confidence scores, and override frequency.
  • Audit Readiness: Maintain complete documentation and audit trails.
Phase V: Continuous Improvement
  • Feedback Loops: Use human feedback to refine model inputs and oversight protocols.
  • Periodic Review: Reassess model performance and compliance annually or upon process changes.
  • Scalability Planning: Evaluate potential for broader application across manufacturing sites.
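The validation step in Phase II can be made concrete by computing confusion-matrix metrics on held-out data and checking them against the predefined acceptance criteria. A minimal sketch follows; the thresholds are illustrative and would in practice follow the risk assessment:

```python
def classification_metrics(y_true: list, y_pred: list) -> dict:
    """Confusion-matrix metrics for a binary pass/fail model (1 = deviation)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall) if precision + recall else 0.0,
    }

# Illustrative acceptance criteria; actual thresholds follow the QRM assessment.
ACCEPTANCE = {"accuracy": 0.95, "precision": 0.90, "recall": 0.90, "f1": 0.90}

def meets_acceptance(metrics: dict) -> bool:
    """True only if every metric meets its predefined threshold."""
    return all(metrics[name] >= threshold for name, threshold in ACCEPTANCE.items())
```

In a validation protocol, these metrics would be computed per cross-validation fold and the pass/fail outcome recorded against the protocol's acceptance criteria.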

9. Advantages and Limitations of AI in Ongoing Process Verification Under the Human-in-the-Loop Paradigm

The integration of AI into Ongoing Process Verification frameworks offers significant potential for enhancing the robustness, responsiveness, and scalability of pharmaceutical manufacturing oversight. When embedded within the HITL paradigm, AI systems are not autonomous but operate in conjunction with human expertise, enabling a synergistic approach to decision-making and quality assurance.

9.1. Advantages

  • Enhanced Data Processing and Pattern Recognition: AI algorithms can process large volumes of process data in real time, identifying subtle trends, deviations, or anomalies that may elude traditional statistical methods or human observation. This capability supports early detection of process drift and facilitates proactive interventions [34].
  • Continuous Monitoring and Adaptability: Unlike static control systems, AI models can adapt to evolving process conditions, enabling dynamic verification strategies. This is particularly valuable in complex or multivariate manufacturing environments where process variability is inherent [34].
  • Decision Support and Risk Mitigation: In an HITL configuration, AI serves as a decision-support tool, providing probabilistic assessments or predictive insights that inform human judgment. This reduces cognitive load and enhances the consistency of decision-making while preserving human oversight in critical scenarios [31].
  • Auditability and Traceability: When properly governed, AI systems can generate detailed logs of their outputs and interactions with human operators, contributing to regulatory compliance and facilitating retrospective analysis [38].
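The early detection of process drift mentioned above can be illustrated with an EWMA control chart, a standard tool for catching small sustained shifts that point-by-point limits tend to miss. The sketch uses conventional tuning values (λ = 0.2, L = 3); the target and sigma would come from historical process data:

```python
import math

def ewma_drift_monitor(readings: list, target: float, sigma: float,
                       lam: float = 0.2, L: float = 3.0) -> list:
    """EWMA control chart: z_t = lam*x_t + (1-lam)*z_{t-1}, flagged when
    z_t leaves target +/- L*sigma*sqrt(lam/(2-lam)) (asymptotic limits)."""
    limit = L * sigma * math.sqrt(lam / (2 - lam))
    z = target
    flags = []
    for x in readings:
        z = lam * x + (1 - lam) * z
        flags.append(abs(z - target) > limit)
    return flags
```

Because the statistic accumulates evidence across consecutive readings, a sustained shift of about 1.5 sigma is flagged within a handful of samples even though each individual reading sits comfortably inside classical 3-sigma limits.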

9.2. Limitations

  • Dependence on Data Quality and Representativeness: The performance of AI systems is contingent on the quality, completeness, and relevance of training data. Inadequate or biased datasets may lead to inaccurate predictions or reinforce systemic errors [8,31].
  • Governance Complexity: Implementing HITL systems requires robust governance structures, including role-based access control, audit trails, and change management protocols. These add operational complexity and require ongoing maintenance and review [8,38].
  • Human Factors and Operational Burden: While HITL mitigates the risks of full automation, it introduces challenges related to human engagement, such as alert fatigue, over-reliance on AI outputs, or inconsistent intervention practices. Ensuring that human operators remain effectively integrated and trained is essential [8,31].
To summarize, the application of AI within OPV under an HITL paradigm offers a promising pathway toward more intelligent and resilient process verification. However, its success depends on careful system design, rigorous governance, and a balanced integration of human expertise and algorithmic capabilities.

10. Conclusions and Future Outlook

The integration of HITL AI into OPV represents a transformative opportunity for the pharmaceutical industry. By combining the analytical power of AI with the contextual judgment of human experts, HITL systems offer a pathway to enhanced process control, reduced variability, and proactive quality assurance.
The publication of Annex 22 by the European Medicines Agency provides a clear regulatory framework for AI use in GMP-critical applications. It emphasizes the importance of deterministic behavior, explainability, and human accountability (principles that are foundational to HITL architectures). As the industry adapts to these guidelines, HITL AI systems will become increasingly central to digital quality strategies.
Importantly, ethical considerations—such as transparency, fairness, and human oversight—must guide the design and deployment of these systems to ensure responsible innovation and maintain trust in pharmaceutical manufacturing.
Looking ahead, several trends are expected to shape the future of HITL AI in pharmaceutical manufacturing:
  • Standardization of Validation Protocols: Industry-wide benchmarks for AI performance and explainability will emerge.
  • Integration with Advanced Analytics: HITL systems will be combined with multivariate analysis, digital twins, and real-time release testing.
  • Expansion Beyond OPV: Applications will extend to deviation management, CAPA effectiveness, and regulatory intelligence.
  • Global Regulatory Harmonization: Other regulatory bodies may adopt similar frameworks, fostering international consistency.
Ultimately, the successful deployment of HITL AI in OPV will depend on a balanced approach that embraces innovation while upholding rigorous standards of pharmaceutical quality, patient safety, and ethical responsibility.

Author Contributions

Conceptualization, M.R.-O., E.G.-M. and P.P.-L.; Methodology, M.R.-O., E.G.-M., K.R.-E.-H., R.V. and V.S.-O.; Investigation, M.R.-O., K.R.-E.-H., R.V. and E.G.-M.; Supervision, E.G.-M. and P.P.-L.; Writing—original draft preparation, M.R.-O. and K.R.-E.-H.; Writing—review, M.R.-O., K.R.-E.-H., M.S.-P., P.P.-L., R.V. and E.G.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Departament de Recerca i Universitats de la Generalitat de Catalunya (AGAUR 2021 SGR 01068) and by the University of Costa Rica (project 817-C5-253).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
CAPA: Corrective and Preventive Action
CNNs: Convolutional Neural Networks
CPPs: Critical Process Parameters
CQAs: Critical Quality Attributes
DCS: Distributed Control System
DT: Decision Tree
EU GMP: European Union Good Manufacturing Practices
FDA: Food and Drug Administration
HITL: Human-in-the-Loop
ICH: International Council for Harmonization
ISPE: International Society for Pharmaceutical Engineering
IT: Information Technology
LIME: Local Interpretable Model-agnostic Explanations
LIMS: Laboratory Information Management Systems
LLMs: Large Language Models
MES: Manufacturing Execution Systems
OPV: Ongoing Process Verification
PAT: Process Analytical Technology
QA: Quality Assurance
QRM: Quality Risk Management
SHAP: Shapley Additive Explanations
SOPs: Standard Operating Procedures
SVM: Support Vector Machine
USP: United States Pharmacopeia

Appendix A

Figure A1. Interpretation of maturity levels. No OPV (20%): Represents legacy systems or companies that have not yet adopted continuous verification practices. OPV without AI (65%): Reflects the widespread use of traditional statistical and quality control methods in line with ICH Q8/Q10 [1,13] and Annex 15 [5]. OPV with AI (35%): Indicates growing but still limited adoption of AI-enhanced OPV, constrained by regulatory, technical, and cultural factors. Source: image generated by the authors using information from [5,8,9,10].

References

  1. Phiri, V.J.; Battas, I.; Semmar, A.; Medromi, H.; Moutaouakkil, F. Towards enterprise-wide Pharma 4.0 adoption. Sci. Afr. 2025, 28, e02771. [Google Scholar] [CrossRef]
  2. Destro, F.; Inguva, P.K.; Srisuma, P.; Braatz, R.D. Advanced methodologies for model-based optimization and control of pharmaceutical processes. Curr. Opin. Chem. Eng. 2024, 45, 101035. [Google Scholar] [CrossRef]
  3. Soni, S.J.; Patel, A.K. Digital transformation and Industry 4.0 in pharma manufacturing: The role of IoT, AI, and big data. J. Integral Sci. 2024, 7, 92. [Google Scholar] [CrossRef]
  4. International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use. ICH Q8(R2) Pharmaceutical Development. 2009. Available online: https://www.ema.europa.eu/en/ich-q8-r2-pharmaceutical-development (accessed on 25 October 2025).
  5. ICH. Q14 Analytical Procedure Development. 2023. Available online: https://database.ich.org/sites/default/files/ICH_Q14_Guideline_2023_1116.pdf (accessed on 25 October 2025).
  6. U.S. Food and Drug Administration. PAT—A Framework for Innovative Pharmaceutical Development, Manufacturing, and Quality Assurance. FDA. 2004. Available online: https://www.fda.gov/media/71012/download (accessed on 25 October 2025).
  7. Manzano, T.; Whitford, W. Artificial Intelligence Empowering Process Analytical Technology and Continued Process Verification in Biotechnology. GEN Biotechnol. 2025, 4, 23–28. [Google Scholar] [CrossRef]
  8. European Medicines Agency. Annex 22 Draft. EMA. 2025. Available online: https://health.ec.europa.eu/document/download/5f38a92d-bb8e-4264-8898-ea076e926db6_en?filename=mp_vol4_chap4_annex22_consultation_guideline_en.pdf (accessed on 25 October 2025).
  9. Vora, L.; Gholap, A.; Jetha, K.; Thakur, R.; Solanki, H.; Chavda, V. Artificial Intelligence in Pharmaceutical Technology and Drug Delivery Design. Pharmaceutics 2023, 15, 1916. [Google Scholar] [CrossRef] [PubMed]
  10. Kodumuru, R.; Sarkar, S.; Parepally, V.; Chandarana, J. Artificial Intelligence and Internet of Things Integration in Pharmaceutical Manufacturing: A Smart Synergy. Pharmaceutics 2025, 17, 290. [Google Scholar] [CrossRef] [PubMed]
  11. Huanbutta, K.; Burapapadh, K.; Kraisit, P.; Sriamornsak, P.; Ganokratana, T.; Suwanpitak, K.; Sangnim, T. The Artificial Intelligence-Driven Pharmaceutical Industry: A Paradigm Shift in Drug Discovery, Formulation Development, Manufacturing, Quality Control, and Post-Market Surveillance. Eur. J. Pharm. Sci. 2024, 203, 106938. [Google Scholar] [CrossRef] [PubMed]
  12. Arden, S.; Fisher, A.; Tyner, K.; Yu, L.; Lee, S.; Kopcha, M. Industry 4.0 for Pharmaceutical Manufacturing: Preparing for the Smart Factories of the Future. Int. J. Pharm. 2021, 602, 120554. [Google Scholar] [CrossRef] [PubMed]
  13. Niazi, S. Regulatory Perspectives for AI/ML Implementation in Pharmaceutical GMP Environments. Pharmaceuticals 2025, 18, 901. [Google Scholar] [CrossRef] [PubMed]
  14. Huysentruyt, K.; Kjoersvik, O.; Dobracki, P.; Savage, E.; Mishalov, E.; Cherry, M.; Leonard, E.; Taylor, R.; Patel, B.; Abatemarco, D. Validating Intelligent Automation Systems in Pharmacovigilance: Insights from Good Manufacturing Practices. Drug Saf. 2021, 44, 261–272. [Google Scholar] [CrossRef] [PubMed]
  15. Muppalla, A.; Maddi, B.; Maddi, N. Artificial Intelligence in Regulatory Compliance: Transforming Pharmaceutical and Healthcare Documentation. Int. J. Drug Regul. Aff. 2025, 13, 73–80. [Google Scholar] [CrossRef]
  16. Ajmal, C.; Yerram, S.; Abishek, V.; Nizam, V.; Aglave, G.; Patnam, J.; Raghuvanshi, R.; Srivastava, S. Innovative Approaches in Regulatory Affairs: Leveraging Artificial Intelligence and Machine Learning for Efficient Compliance and Decision-Making. AAPS J. 2025, 27, 22. [Google Scholar] [CrossRef] [PubMed]
  17. European Commission. EU GMP Annex 15: Qualification and Validation. 2015. Available online: https://health.ec.europa.eu/system/files/2016-11/2015-10_annex15_0.pdf (accessed on 25 August 2025).
  18. ISPE. Continued Process Verification in Stages 1–3. Pharmaceutical Engineering. 2020. Available online: https://ispe.org/pharmaceutical-engineering/july-august-2020/continued-process-verification-stages-1-3 (accessed on 25 August 2025).
  19. U.S. Food and Drug Administration. CDER’s Perspective on the Continuous Manufacturing Journey. FDA. 2023. Available online: https://www.fda.gov/media/173811/download (accessed on 25 October 2025).
  20. Kim, E.J.; Kim, J.H.; Kim, M.S.; Jeong, S.H.; Choi, D.H. Process Analytical Technology Tools for Monitoring Pharmaceutical Unit Operations: A Control Strategy for Continuous Process Verification. Pharmaceutics 2024, 13, 619. [Google Scholar] [CrossRef] [PubMed]
  21. Sanchez, C. Janssen Case Study Presentation—Continuous Process & Real-Time Release. PQRI. 2015. Available online: https://pqri.org/wp-content/uploads/2015/08/pdf/Sanchez.pdf (accessed on 25 August 2025).
  22. Novartis. Towards Real-Time Release of Pharmaceutical Tablets. 2021. Available online: https://oak.novartis.com/44574/ (accessed on 8 September 2025).
  23. Evans, C.; Giacoletti, K.; Hurley, D.; Levers, R.; McMenamin, M.; Wade, J. Process Validation in the Context of Small Molecule Drug Substance and Drug Product Continuous Manufacturing Processes. ISPE Concept Paper. 2024. Available online: https://ispe.org/sites/default/files/concept-papers/ISPE_PV_Context%20of%20Small%20Molecule%20DS-DP.pdf (accessed on 12 November 2025).
  24. Pugatch Consilium. AI Readiness in the Pharmaceutical Industry: Final Report. Pugatch Consilium. 2024. Available online: https://www.pugatch-consilium.com/reports/AI_Readiness_in_the_Pharmaceutical_Industry_Final%20report.pdf (accessed on 8 September 2025).
  25. Valloppillil, S. AI in Pharma: Startups, VCs and Big Tech Are Reshaping the Industry. Forbes. 2025. Available online: https://www.forbes.com/sites/sindhyavalloppillil/2025/07/17/ai-in-pharma-era-where-big-tech-leads-startups-scale-and-incumbents-strategize/ (accessed on 12 September 2025).
  26. Buvailo, A. How the Pharmaceutical Industry Is Adopting Artificial Intelligence to Boost Drug Research. BioPharmaTrend. 2022. Available online: https://www.biopharmatrend.com/artificial-intelligence/how-pharmaceutical-industry-is-adopting-artificial-intelligence-to-boost-drug-research-496/ (accessed on 13 September 2025).
  27. International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use. ICH Q10 Pharmaceutical Quality System. 2008. Available online: https://www.ema.europa.eu/en/ich-q10-pharmaceutical-quality-system (accessed on 20 October 2025).
  28. U.S. Food and Drug Administration. Process Validation: General Principles and Practices. FDA. 2011. Available online: https://www.fda.gov/files/drugs/published/Process-Validation--General-Principles-and-Practices.pdf (accessed on 20 October 2025).
  29. Eissa, M.E. Enhancing Sterile Manufacturing with AI and Machine Learning for Predictive Equipment Maintenance. Pharma Focus America. 2025. Available online: https://www.pharmafocusamerica.com/manufacturing/enhancing-sterile-manufacturing-with-ai (accessed on 10 September 2025).
  30. Clayton, J. Real-Time In-Line Monitoring of High Shear Wet Granulation. American Pharmaceutical Review. 2017. Available online: https://www.americanpharmaceuticalreview.com/Featured-Articles/345587-Real-Time-In-Line-Monitoring-of-High-Shear-Wet-Granulation/ (accessed on 10 September 2025).
  31. BioPhorum. Implementing AI Systems in Regulated Pharma Environments. BioPhorum. 2025. Available online: https://www.biophorum.com/download/implementing-ai-systems-in-regulated-pharma-environments-biophorum/ (accessed on 10 September 2025).
  32. Altabrisa Group. AI in Pharma Deviation Management: GMP Compliance & Automation. Altabrisa Group. 2025. Available online: https://altabrisagroup.com/ai-in-pharma-deviation-management-gmp-compliance-automation/ (accessed on 28 August 2025).
  33. PDA Journal of Pharmaceutical Science and Technology. CPV of the Future—AI-Powered CPV for Bioreactor Processes. 2023. Available online: https://journal.pda.org/content/77/3/146 (accessed on 22 September 2025).
  34. Seeq Corporation. Realizing the Benefits of CPV with Advanced Analytics. White Paper. 2022. Available online: https://www.seeq.com/resources/downloads/realizing-the-benefits-of-cpv-with-advanced-analytics-2/ (accessed on 22 September 2025).
  35. Mosqueira-Rey, E.; Hernández-Pereira, E.; Alonso-Ríos, D.; Bobes-Bascarán, J.; Fernández-Leal, Á. Human-in-the-loop Machine Learning: A State of the Art. Artif. Intell. Rev. 2023, 56, 3005–3054. [Google Scholar] [CrossRef]
  36. Kadia, H. Rule-Based vs. LLM-Based AI Agents: A Side-by-Side Comparison. TeckNexus. 2025. Available online: https://tecknexus.com/rule-based-vs-llm-based-ai-agents-a-side-by-side-comparison/ (accessed on 19 November 2025).
  37. Natarajan, S.; Mathur, S.; Sidheekh, S.; Stammer, W.; Kersting, K. Human-in-the-loop or AI-in-the-loop? Automate or Collaborate? In Proceedings of the AAAI Conference on Artificial Intelligence, Philadelphia, PA, USA, 25 February–4 March 2025; Volume 39, pp. 28594–28600. [Google Scholar] [CrossRef]
  38. ISPE. GAMP 5: A Risk-Based Approach to Compliant GxP Computerized Systems, 2nd ed.; ISPE: Tampa, FL, USA, 2022; Available online: https://ispe.org/publications/guidance-documents/gamp-5-guide-2nd-edition (accessed on 13 September 2025).
  39. EFPIA. Application of AI in a GMP/Manufacturing Environment—An Industry Approach; Version 1.0, September 2024. Available online: https://www.efpia.eu/media/vqmfjjmv/position-paper-application-of-ai-in-a-gmp-manufacturing-environment-sept2024.pdf (accessed on 19 November 2025).
  40. European Commission. Ethics Guidelines for Trustworthy AI. High-Level Expert Group on Artificial Intelligence. 2019. Available online: https://www.europarl.europa.eu/cmsdata/196377/AI%20HLEG_Ethics%20Guidelines%20for%20Trustworthy%20AI.pdf (accessed on 16 November 2025).
  41. PIC/S. PI 041-1: Good Practices for Data Management and Integrity in Regulated GMP/GDP Environments. 2021. Available online: https://picscheme.org/docview/4234 (accessed on 13 September 2025).
  42. European Medicines Agency. Reflection Paper on the Use of Artificial Intelligence in the Medicinal Product Lifecycle. EMA. 2024. Available online: https://www.ema.europa.eu/system/files/documents/scientific-guideline/reflection-paper-use-artificial-intelligence-ai-medicinal-product-lifecycle-en.pdf (accessed on 23 August 2025).
  43. European Union. Artificial Intelligence Act (Regulation (EU) 2024/1689). 2024. Available online: https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng (accessed on 1 October 2025).
  44. WHO. TRS 1033, Annex 4: Guideline on Data Integrity. 2021. Available online: https://www.who.int/publications/m/item/annex-4-trs-1033 (accessed on 20 August 2025).
  45. European Medicines Agency. ICH Q9 Quality Risk Management—Scientific Guideline; EMA/CHMP/ICH/24235/2006 Corr.2. 2006. Available online: https://www.ema.europa.eu/en/ich-q9-quality-risk-management-scientific-guideline (accessed on 20 October 2025).
  46. ECA Academy. What Is ‘Human-in-the-Loop’? GMP-Compliance News, 8 September 2025. Available online: https://www.gmp-compliance.org/gmp-news/what-is-human-in-the-loop (accessed on 19 November 2025).
  47. United States Pharmacopeia (USP). General Chapter 1058: Analytical Instrument Qualification. USP–NF.; Rockville, MD, USA. Available online: https://online.uspnf.com/uspnf/document/1_GUID-EA8F36CE-5B60-4CA4-A3B8-51F22DE87BC6_1_en-US (accessed on 21 November 2025).
  48. FDA/ICH. Q2(R2)/Q14 Overview Deck. 2024. Available online: https://www.fda.gov/media/177718/download (accessed on 22 September 2025).
  49. Ueda, D.; Kakinuma, T.; Fujita, S.; Kamagata, K.; Fushimi, Y.; Ito, R.; Matsui, Y.; Nozaki, T.; Nakaura, T.; Fujima, N.; et al. Fairness of artificial intelligence in healthcare: Review and recommendations. Jpn. J. Radiol. 2024, 42, 3–15. [Google Scholar] [CrossRef] [PubMed]
  50. Fehr, J.; Citro, B.; Malpani, R.; Lippert, C.; Madai, V.I. A trustworthy AI reality-check: The lack of transparency of artificial intelligence products in healthcare. Front. Digit. Health 2024, 6, 1267290. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Consensus meter on regulatory compliance level based on AI systems. Created by the authors with Consensus.app with the prompt “can AI based systems significantly enhance regulatory compliance” (executed on 10 November 2025).
Figure 2. Consensus meter key claims and evidence. Created by the authors with Consensus.app with the prompt “can AI based systems significantly enhance regulatory compliance” (executed on 10 November 2025). Evidence strength is visually represented: green = strong, yellow = moderate.
Figure 3. SHAP and LIME integration with an HITL decision-making process. Source: image created by the authors.
Figure 4. Model restrictions as per Annex 22 EU-GMP. Source: image created by the authors.
Table 1. OPV implementation maturity levels in the pharmaceutical industry.
  • No OPV: legacy systems without formal OPV frameworks. Estimated adoption: ~20%. Key references: Pugatch Consilium (2024) [24].
  • Traditional OPV: OPV based on statistical process control and retrospective data analysis. Estimated adoption: ~65%. Key references: ISPE (2020) [18]; FDA (2011) [28].
  • AI-Enhanced OPV: OPV integrated with AI models for real-time monitoring and predictive analytics. Estimated adoption: ~35% (pilot or partial). Key references: BioPharmaTrend (2025) [26]; Pharma Focus America (2025) [29].
Note: categories may overlap (e.g., companies piloting AI while maintaining traditional OPV).
Table 3. Recommended performance metrics and thresholds for AI models in GMP contexts.
  • Accuracy: ≥95%. Purpose: overall correctness of predictions. Key references: Annex 22 EU-GMP; ISPE GAMP 5 [8].
  • Precision: ≥90%. Purpose: minimizing false positives. Key references: Annex 22 EU-GMP [8].
  • Recall (sensitivity): ≥90%. Purpose: minimizing false negatives. Key references: Annex 22 EU-GMP [8].
  • F1 score: ≥0.90. Purpose: balanced measure for imbalanced datasets. Key references: Annex 22 EU-GMP; BioPhorum [8].
  • Specificity: ≥95%. Purpose: correctly identifying true negatives. Key references: Annex 22 EU-GMP [8].
  • Explainability: SHAP or LIME integration. Purpose: transparency and interpretability. Key references: Annex 22 EU-GMP [8].
Note: Thresholds are indicative and should be adjusted based on risk and intended use, following ICH Q9 Quality Risk Management principles [45]. For high-risk processes (e.g., those impacting patient safety), stricter thresholds may be required (e.g., accuracy ≥ 98%). For non-critical support functions, slightly lower thresholds may be acceptable if robust HITL oversight is in place.
Table 4. Summary of HITL AI case studies in pharmaceutical OPV.
Case Study A: Real-Time Monitoring of Granulation Process [30]
  • In-line DFF sensor at 500 samples/s for granule quality prediction.
  • High repeatability across batches.
  • Strong correlation between DFF and critical properties.
  • Immediate process adjustments → reduced offline testing time.
Case Study B: Predictive Maintenance in Sterile Filling [29]
  • 100 vibration sensors with AI for WFI pumps and AHUs.
  • Up to 3× reduction in unplanned downtime.
  • Cost savings > USD 1 M/year.
  • Faster root cause diagnosis via predictive alerts.
Case Study C: Deviation Root Cause Analysis [32]
  • Investigation time reduced 83% (18 h → 3 h).
  • Report writing reduced 87% (4 h → 30 min).
  • Full cycle reduced 67% (45 days → 15 days).
  • QA workload reduced 40–60%.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

