Article

Artificial Intelligence in Medicine and Healthcare: A Complexity-Based Framework for Model–Context–Relation Alignment

by Emanuele Di Vita, Giovanni Caivano, Fabio Massimo Sciarra, Simone Lo Bianco, Pietro Messina, Enzo Maria Cumbo, Luigi Caradonna, Salvatore Nigliaccio, Davide Alessio Fontana, Antonio Scardina and Giuseppe Alessandro Scardina *

Department of Precision Medicine in Medical, Surgical and Critical Care (Me.Pre.C.C.), University of Palermo, Via Del Vespro, 129-90127 Palermo, Italy

* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(22), 12005; https://doi.org/10.3390/app152212005
Submission received: 29 October 2025 / Revised: 7 November 2025 / Accepted: 10 November 2025 / Published: 12 November 2025

Abstract

Artificial intelligence (AI) is profoundly transforming medicine and healthcare, evolving from analytical tools aimed at automating specific tasks to integrated components of complex socio-technical systems. This work presents a conceptual and theoretical review proposing the Model–Context–Relation (M–C–R) framework to interpret how the effectiveness of AI in medicine and healthcare emerges from the dynamic alignment among algorithmic, contextual, and relational dimensions. No new patient-level data were generated or analyzed. Through a qualitative conceptual framework analysis, the study integrates theoretical, regulatory, and applicative perspectives, drawing on the Revision of the Semiological Paradigm developed by the Palermo School, as well as on major international guidelines (WHO, European AI Act, FDA). The results indicate that AI-supported processes have been reported in the literature to improve clinical accuracy and workflow efficiency when appropriately integrated, yet the value of AI depends on contextual adaptation and human supervision rather than on algorithmic performance alone. When properly integrated, AI functions as a digital semiotic extension of clinical reasoning and may enhance the physician’s interpretative capacity without replacing it. The M–C–R framework enables understanding of how performance, ethical reliability, and organizational sustainability emerge from the alignment between the technical model, the context of use, and relational trust. In this perspective, AI is conceptualized not as a decision-maker but as an adaptive cognitive partner, fostering a reflective, transparent, and person-centered medicine. The proposed approach supports the design of sustainable and ethically responsible AI systems within a Medicine of Complexity, in which human and artificial intelligence co-evolve to strengthen knowledge, accountability, and equity in healthcare systems.

1. Introduction

Artificial intelligence (AI) is progressively transforming biomedical sciences, reshaping clinical reasoning, diagnosis, and the organization of healthcare systems [1]. From a collection of analytical tools designed to automate specific tasks, AI is evolving into an integrated component of complex socio-technical systems, capable of supporting adaptive, personalized, and relational decision-making processes [2].
Although the role of AI extends beyond traditional diagnostic support, assuming epistemic and organizational functions within medicine and healthcare, it must nevertheless remain anchored to the biomedical method, avoiding reductionist or dogmatic interpretations that would undermine its function as a critical support to human clinical judgment [3].
This study is grounded in the theoretical framework of the Revision of the Semiological Paradigm developed by the Palermo School, which proposes a reinterpretation of Biomedical Semiotics in the digital era. In this perspective, AI is understood as an instrumental and integrative extension of digital semiotics: an analytical system that not only generates and processes data but also contributes to the construction of clinical meaning.

1.1. AI in Medicine and Health Sciences

Applications of AI in medicine are evolving from systems limited to specific tasks (such as the analysis of radiological images or genomic data) toward multimodal foundation models capable of integrating heterogeneous information (clinical texts, medical images, molecular profiles, and physiological parameters) to predict and interpret complex health conditions [4].
In the broader healthcare domain, encompassing organizational, social, and policy systems, AI operates as a layer of systemic intelligence capable of analyzing large streams of epidemiological data, anticipating public health trends, optimizing resources, and identifying inequities, thus supporting more informed political and financial decisions [3,5].
It is therefore essential to distinguish between:
  • AI for Medicine, oriented toward the individual patient, diagnosis, therapy, and precision medicine;
  • AI for Healthcare, focused on population health, prevention, service planning, and the sustainability of healthcare systems.
Although the two domains are interdependent, this distinction allows for a clearer definition of the scientific, ethical, and regulatory boundaries of AI applications across the health sciences [4,6].

1.2. Complexity and Adaptive Systems in Medicine and Healthcare

Interpreting medicine and healthcare as Complex Adaptive Systems means recognizing that clinical and organizational phenomena emerge from nonlinear interactions among people, technologies, institutions, and contexts [7].
From this viewpoint, AI performance cannot be assessed solely in terms of accuracy or predictive power, but rather by its capacity to integrate and adapt within the system in which it operates [8].
To analyze this integration, the present study adopts the Model–Context–Relation (M–C–R) framework, which interprets AI effectiveness as an emergent property of three interdependent dimensions:
  • Model: the algorithmic representation and computational output;
  • Context: the operational, regulatory, and organizational environment of application;
  • Relation: the human and institutional interactions that guide its use.
The alignment among these three elements determines the real-world value of AI in medicine and public health. Only when algorithmic performance, context of use, and relational trust are harmonized does AI generate adaptive knowledge and tangible clinical benefit [9,10].

1.3. Toward a Medicine of Complexity

Adopting a complexity-based perspective promotes a view of care as a dynamic process of meaning-making rather than a linear sequence of technical acts [3]. Within this framework, AI acts as an amplifier of systemic knowledge, which may enhance the physician’s capacity to interpret uncertainty and adapt decisions to real clinical contexts [11].
The physician remains the central interpretative actor, the custodian of uncertainty, supervisor of algorithmic reasoning, and guarantor of ethical responsibility [12]. When properly integrated, AI technologies do not replace human judgment but extend it, enabling a transition toward a paradigm of augmented intelligence in which analytical precision, contextual adaptability, and therapeutic relationship coexist [1].

1.4. The Revision of the Semiological Paradigm: Epistemological Foundation of the M–C–R Framework

The Model–Context–Relation (M–C–R) framework proposed in this study is grounded in the Revision of the Semiological Paradigm developed by the Palermo School of Biomedical Epistemology. This theoretical movement reinterprets the foundations of medical semiotics in light of contemporary digital and systemic medicine. Its central claim is that medical knowledge does not arise directly from data or images, but from the interpretive act that connects observable signs to meaningful clinical judgments.
Traditionally, medical semiotics views the diagnostic process as a triadic relation among sign, symptom, and meaning. The Palermo revision extends this triad into the digital domain, asserting that computational systems can generate new categories of signs (digital, probabilistic, and multimodal), but that the semantic act transforming data into knowledge remains uniquely human. Within this perspective, Artificial Intelligence functions not as a decision-maker but as a semiotic amplifier of clinical perception and reasoning.
The paradigm articulates three key principles that directly inform the M–C–R framework:
  • Model—Instrumental Semiotics. Algorithms are understood as cognitive instruments that expand the perceptual field of the physician, transforming raw data into structured digital signs. Their validity depends on their capacity to preserve interpretability and traceability, not merely statistical accuracy.
  • Context—Situated Meaning. Every clinical or organizational environment provides the situational matrix within which digital signs acquire significance. The same algorithm may yield different meanings and values depending on workflow, infrastructure, and regulatory setting.
  • Relation—Interpretive Mediation. Diagnosis and decision-making occur through the interaction among humans (clinicians, patients, and institutional actors) who assign meaning to algorithmic outputs. Relation is therefore the ethical and epistemic bridge that ensures that computation remains embedded within the human act of care.
By linking semiotics with complexity theory, the Palermo School’s revision offers a coherent epistemological foundation for M–C–R. It reframes AI not as a substitute for medical judgment but as a co-evolving interpretive system in which model, context, and relation continuously generate and refine clinical meaning.

2. Materials and Methods

2.1. General Methodological Approach

This study adopts an integrated analytical approach combining theoretical, regulatory, and applicative perspectives to provide a systemic interpretation of the role of Artificial Intelligence (AI) in contemporary medicine and healthcare.
The work takes the form of a structured narrative review aimed at constructing a conceptual framework, the Model–Context–Relation (M–C–R) model, capable of describing how AI effectiveness emerges from the dynamic alignment among algorithmic model, operational environment, and human–institutional relationships.
The methodological design was articulated along three analytical axes:
  • Paradigmatic analysis, grounded in the Revision of the Semiological Paradigm developed by the Palermo School, to situate AI within biomedical reasoning as a cognitive and semiotic extension;
  • Comparative regulatory analysis, integrating institutional guidelines (WHO, European Union, FDA) to derive criteria for safety, accountability, and adaptive governance;
  • Systemic synthesis, applying the M–C–R framework as an interpretive lens to map the emergent properties of AI across clinical and organizational domains.

2.2. Search Strategy and Source Identification

The literature search covered publications from January 2018 to March 2025 and was conducted in the PubMed, Scopus, and Web of Science databases. Supplementary regulatory and conceptual documents were retrieved from institutional repositories (WHO, European Commission, U.S. FDA).
Example search strings included combinations of controlled and free-text terms such as:
(“artificial intelligence” OR “machine learning” OR “deep learning”) AND
(“medicine” OR “healthcare” OR “clinical decision support”) AND
(“complexity” OR “adaptive systems” OR “governance” OR “ethics”).
Additional sources were identified through backward and forward citation tracking.
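For transparency, the PubMed arm of this search can be reproduced programmatically. The following is a minimal sketch, assuming Biopython’s Entrez interface is available and the reader supplies a contact e-mail (the review itself was performed through the databases’ standard interfaces):

```python
# Minimal sketch of the PubMed search, assuming Biopython is installed
# (pip install biopython). Scopus and Web of Science use separate APIs
# and are not shown here.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # hypothetical address; required by NCBI

query = (
    '("artificial intelligence" OR "machine learning" OR "deep learning") AND '
    '("medicine" OR "healthcare" OR "clinical decision support") AND '
    '("complexity" OR "adaptive systems" OR "governance" OR "ethics")'
)

# Restrict results to the review window: January 2018 to March 2025.
handle = Entrez.esearch(
    db="pubmed",
    term=query,
    datetype="pdat",          # filter on publication date
    mindate="2018/01/01",
    maxdate="2025/03/31",
    retmax=200,               # PMIDs returned per call; paginate for more
)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records retrieved for title/abstract screening")
print(record["IdList"][:10])  # first ten PMIDs
```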

2.3. Inclusion and Exclusion Criteria

Inclusion criteria encompassed:
  • Peer-reviewed papers addressing the implementation, governance, or ethical implications of AI in medicine or healthcare systems;
  • Regulatory and policy documents from WHO, EU, and FDA concerning AI-based health technologies;
  • Conceptual and epistemological works on complexity science and medical semiotics relevant to clinical reasoning.
Exclusion criteria included:
  • Studies exclusively focused on algorithmic performance or technical architecture without reference to clinical deployment or human oversight;
  • Non-English or non-Italian sources without accessible translation;
  • Non–peer-reviewed material unless produced by recognized health authorities.
Publications prior to 2018 were included only when of historical or theoretical relevance (e.g., foundational semiotic or epistemological texts).

2.4. Screening, Synthesis, and Conceptual Triangulation

Two authors independently screened titles and abstracts for relevance. Disagreements were resolved through consensus with a third reviewer. The synthesis followed a narrative–thematic approach, clustering sources under three main domains: AI for Medicine, AI for Healthcare, and AI and Complexity.
To ensure theoretical robustness, a process of conceptual triangulation was applied. Each included source was evaluated against three coherence criteria derived from Complex Systems Theory:
  • Nonlinearity—acknowledgment of feedback loops and interdependent interactions;
  • Emergence—identification of system-level properties not reducible to individual components;
  • Adaptive feedback—evidence of learning or contextual adjustment within socio-technical systems.
Only models and interpretations maintaining internal coherence with all three principles were retained as consistent with the proposed Medicine of Complexity framework.

2.5. Methodological Limitations and Reflexivity

Because the study is conceptual and not empirical, no new patient-level data were generated or analyzed.
Quantitative claims cited in the Discussion (e.g., sepsis detection, hospital flow optimization) are derived exclusively from previously published, aggregate, and de-identified studies.
The review emphasizes theoretical synthesis rather than exhaustive statistical analysis; however, transparency of search strategy, inclusion logic, and validation criteria was prioritized to enhance reproducibility and interpretive clarity.
The overall methodological process integrating literature, regulatory, and conceptual analyses is summarized in Figure 1.

3. Discussion

3.1. Rethinking Artificial Intelligence Within the Biomedical Paradigm: From the Myth of Substitution to Integrated Semiotics

The scientific definition of Artificial Intelligence (AI), as shared internationally and adopted by institutions such as the Organization for Economic Co-operation and Development (OECD), the European Commission, and the World Health Organization (WHO), is as follows: “Artificial Intelligence is a machine-based system designed to operate with varying levels of autonomy, capable of processing data, perceiving its environment, interpreting structured or unstructured information, and undertaking actions or making recommendations to achieve specific objectives” [13,14,15]. Although these definitions describe AI as a computational system that learns and makes limited autonomous decisions, public imagination, amplified by technocentric and media narratives, often portrays it as an infallible entity with human-like cognitive abilities [16].
This perception, shaped by quasi-salvific expectations and dogmatic trust, generates a conceptual distortion that becomes particularly evident in medicine: AI is perceived not as a semiotic instrument supporting diagnosis but as a potentially autonomous decision-maker capable of replacing the interpretative act of the clinician [7].
According to the Palermo School, this misconception stems from the absence of a clear epistemological placement of AI within the traditional biomedical paradigm [3,12]. Medicine grounds its diagnostic process in the relationship among sign, symptom, and meaning, a semiological process in which sensory data acquire significance only through the interpretative act of the physician [17].
When AI is seen as a “decision-maker” rather than a tool for describing, measuring, and guiding, it steps outside the semiological domain and shifts judgmental responsibility to computation. Reintegrating AI within Medical Semiotics thus means recognizing its role as diagnostic aid rather than decision-maker [3].
AI neither “sees” nor “thinks”: it collects, processes, and correlates data, expanding human perceptual capacity [8]. Its role can therefore be interpreted as an instrumental extension of digital semiotics, which complements classical analog semiotics, extending the field of the perceptible from radiological imaging to biometric signals, metabolic variations, and genomic determinations, without ever diminishing the interpretative primacy of the clinician [12,18].
Epistemologically, this redefinition places AI along a continuum spanning perception, interpretation, and decision:
  • In perception, AI acts as a sensory amplifier, capable of detecting weak patterns invisible to the human eye;
  • In interpretation, it provides statistical correlations or probabilistic inferences that orient clinical reasoning;
  • In decision-making, the role remains human, as it requires synthesizing scientific knowledge, experience, and patient values, dimensions no algorithm can encompass [10,19,20].
Reframed in this way, AI assumes an ancillary and integrative role that expands rather than limits medical knowledge [1]. It provides a semiological and scientific foundation for its use as an instrument mediating between data and meaning [3]. Within this framework, digital-era medicine can preserve its epistemological identity while combining innovation, responsibility, and human depth [17].

3.2. Applications of Artificial Intelligence in Medicine

Analysis of the scientific literature and practical experiences indicates that AI in medicine has transitioned from tools limited to specific diagnostic or predictive tasks to integrative models capable of analyzing heterogeneous data simultaneously [9,21].
Three areas are particularly prominent:
  • Digital Pathology and Precision Oncology. Foundation models trained on millions of histopathological images can correlate morphological patterns with genomic data, thereby improving diagnostic accuracy, prognostic assessment, and therapeutic decision-making [22,23].
  • Prediction of Multimorbidity and Personalized Risk. Transformer architectures and Large Language Models (LLMs) applied to longitudinal clinical databases estimate the combined probability of multiple chronic diseases, adapting flexibly to different healthcare contexts [24].
  • Integration of Electronic Health Records (EHR Modeling). Federated learning enables algorithms to be trained on distributed clinical datasets, enhancing performance while maintaining data privacy [7,24].
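To make the federated learning pattern in the last item concrete, the sketch below shows a single round of federated averaging (FedAvg) over locally trained parameter vectors; the weighting by local sample size is the standard FedAvg rule, while the sites, record counts, and parameter shapes are purely hypothetical:

```python
import numpy as np

def fedavg(site_params, site_sizes):
    """One FedAvg aggregation round: average locally trained parameter
    vectors, weighted by each site's record count, so that raw patient
    data never leave the participating hospitals."""
    sizes = np.asarray(site_sizes, dtype=float)
    stacked = np.stack(site_params)                  # (n_sites, n_params)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Hypothetical parameter vectors from three hospitals after local training.
rng = np.random.default_rng(0)
local_params = [rng.normal(size=8) for _ in range(3)]
record_counts = [1200, 450, 3300]                    # local EHR record counts

global_params = fedavg(local_params, record_counts)
print(global_params.round(3))  # broadcast back to sites for the next round
```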
Emerging spontaneous practices among healthcare professionals also involve the use of language models for report generation, complex case review, and literature synthesis [25]. While such uses have been reported to improve cognitive efficiency, they also raise new challenges regarding validation, transparency, and clinical accountability [12,26].

3.3. AI for Healthcare: Systemic Intelligence and Public Health

In the domain of public health, AI does not act upon individual patients but upon the health system as a whole, contributing to its predictive capacity, equity, and sustainability [5].
Five main lines of development are currently observed:
  • Epidemiological Forecasting and Surveillance—neural networks and Bayesian models are used to anticipate epidemic trends and the spread of chronic diseases [27].
  • Resource Optimization and Healthcare Logistics—reinforcement learning algorithms support hospital flow management and reduction in systemic inefficiencies [19].
  • Analysis of Social Determinants of Health (SDH)—machine learning models integrate health, environmental, and socio-economic data to identify disparities [28].
  • Service Planning and Economic Sustainability—predictive simulations assess the potential impact of public health interventions, guiding policy and financial decision-making [5].
  • Systemic Observation—AI systems reveal complex interaction patterns among resources, regulations, and outcomes, making the healthcare system increasingly self-learning [29].
According to the M–C–R framework, the effectiveness of AI depends not on algorithmic accuracy alone but on the harmony among technical model, operational context, and human relations [9] (Table 1).
For example, an early warning model for sepsis proves ineffective if it generates excessive false alerts or if clinicians do not know how to respond [30]. Algorithmic accuracy alone is insufficient; coordination and contextual adaptation are essential [21]. When the hospital adjusts the implementation context, reducing irrelevant notifications, clarifying protocols, and promoting interprofessional collaboration, the same technology becomes genuinely life-saving [31]. Similarly, in prescription support systems, personalization of the model and collaboration between physicians and pharmacists transform a rigid algorithm into a useful, trustworthy instrument [32].
In public health, an algorithm predicting exacerbation risk in chronic patients is ineffective if the community lacks telemonitoring services or care networks, or if collaboration between primary care and social services is weak. AI becomes effective only when it functions as an intelligent observer of the system, revealing hidden patterns, and when it is supported by appropriate structures and relationships [5,33].

3.4. AI and Complexity: Medicine as an Adaptive System

Findings confirm that both medicine and healthcare should be understood as Complex Adaptive Systems (CAS), where outcomes emerge from dynamic interactions among individuals, rules, technologies, and contexts [3]. Within this perspective, AI acts as an adaptive agent co-evolving with the system, enhancing efficiency or introducing distortions depending on the degree of contextual integration and the quality of human supervision [34].
Clinical complexity cannot be reduced to a sum of numerical parameters; the most effective AI models are those trained on dynamic, contextualized data reflecting real-world variability in care systems [24]. Therefore, the effectiveness of AI is an emergent property of the entire socio-technical system, not an intrinsic feature of the model itself [35].

3.4.1. Positioning of the M–C–R Framework

The proposed Model–Context–Relation (M–C–R) framework builds upon, but also advances beyond, established socio-technical and learning-health-system models. Previous frameworks, such as the Technology–Organization–People (TOP) fit, the continuous monitoring approach for adaptive AI, and the socio-technical theory of health information systems, emphasize the interplay between technological design, organizational infrastructure, and user behavior. While these approaches have provided valuable foundations, they often remain descriptive or operational, lacking a formal epistemological link to clinical reasoning itself.
The M–C–R model introduces three distinctive contributions:
  • Epistemic grounding in clinical semiotics. By anchoring AI within the semiological process of perception–interpretation–decision, M–C–R redefines the algorithm not as an autonomous agent but as a semiotic amplifier that extends clinical perception and reasoning. This establishes a direct continuity between computational modeling and medical sense-making.
  • Co-equality of human and algorithmic components. Whereas many implementation frameworks treat “human oversight” as an external safeguard added post-design, M–C–R conceptualizes Relation (the network of physicians, care teams, and institutions) as a co-constitutive dimension of AI effectiveness, on par with the Model and Context. This shift formalizes accountability and interpretive responsibility as intrinsic properties of system performance.
  • Emergent and contextual evaluation of value. Unlike metrics-based assessments centered on accuracy or AUROC, M–C–R frames AI value as an emergent outcome of its adaptive alignment with the clinical and organizational environment. Effectiveness is thus not a static attribute of the algorithm but a dynamic property of the socio-technical ecosystem in which it operates.
Through these innovations, the M–C–R framework moves beyond conventional socio-technical analysis toward a complexity-informed, epistemically grounded theory of AI in medicine. It offers a conceptual bridge between regulatory expectations for trustworthy AI (centered on transparency, accountability, and human oversight) and the cognitive reality of medical practice, where meaning, context, and relationship jointly determine clinical knowledge.

3.4.2. Limitations of Existing Complexity Approaches and Added Value of the M–C–R Framework

Although several approaches in healthcare AI already draw on complexity theory, such as complex adaptive systems (CAS), socio-technical resilience models, and learning health systems (LHS), these remain largely descriptive and insufficiently operationalized for clinical governance.
Most of these models emphasize interdependence, feedback loops, and emergent behavior but tend to stop short of specifying who interprets complexity and how interpretive accountability is maintained when AI systems adapt dynamically. In practical terms, existing complexity-based approaches often:
  • Lack clear epistemic grounding in clinical reasoning and semiotics;
  • Focus on organizational adaptiveness rather than interpretive reliability;
  • Offer limited criteria for evaluating alignment between algorithmic behavior, clinical context, and human judgment.
The proposed M–C–R framework builds on this literature but introduces a structured, semiotic, and ethically grounded articulation of complexity. It explicitly integrates epistemic (Model), systemic (Context), and relational (Relation) dimensions, positioning interpretive accountability as co-equal with technical and organizational performance.
In doing so, M–C–R operationalizes complexity: it treats meaning generation and responsibility as measurable interactions rather than emergent by-products, offering a testable model that can inform empirical research and regulatory assessment.
Table 2 summarizes how M–C–R advances beyond current complexity-informed frameworks in healthcare AI governance.

3.5. International Regulatory Framework

Comparison among the major regulatory authorities (WHO, the European Union, and the FDA) reveals convergence toward a harmonized regulatory framework built on three key principles:
  • Life-cycle approach, covering the entire lifespan of the system, from design to decommissioning;
  • Risk management, proportional to the system’s impact on health and safety;
  • Post-market surveillance and continuous updating [14,15,36,37].
The WHO emphasizes transparency, traceability, and stakeholder engagement [15]. The European AI Act (2024–2025) classifies healthcare AI systems as “high-risk,” requiring clinical validation and mandatory human oversight [37]. The FDA adopts an adaptive regulatory model balancing safety and innovation [36].
All converge on the need for multi-level governance, transparent data management, ethical–legal auditing, and clear professional supervision to ensure safety and systemic trust [29].
To clarify the practical alignment between international regulations and the proposed Model–Context–Relation (M–C–R) framework, Table 3 summarizes how the main regulatory principles established by the WHO, the European AI Act, and the U.S. FDA correspond to each component of the model. This mapping highlights that global governance priorities (transparency, accountability, and human oversight) are not external constraints but intrinsic dimensions of AI effectiveness in healthcare.
By situating regulatory expectations within the M–C–R architecture, the framework provides an integrative lens through which ethical, legal, and operational compliance can be designed and monitored. It reinforces that trustworthy AI is achieved not merely through algorithmic validation but through systemic alignment among model reliability, contextual safety, and relational accountability.

3.6. Ethical, Social, and Organizational Dimensions

The integration of AI into contemporary medicine represents not only a technological advancement but an epistemic transition that reshapes the very structure of the medical act and healthcare organization [38]. Ethically, the physician is no longer solely an interpreter of clinical signs but a cognitive mediator among patient, technology, and system [3].
From a social and economic standpoint, AI adoption should be viewed as a reconfiguration of expenditure through the lens of complexity, where initial investments in infrastructure and training may translate, in some contexts, into greater efficiency, diagnostic timeliness, and prevention [39].
Organizationally, AI enables a transition toward a reflective and adaptive healthcare system capable of learning from its own data and improving over time [33].
The resulting concept of sustainable complexity introduces not only new tools but a reformulation of the interconnections among knowledge, decision, and value, while keeping the person and distributive justice at the center [28,40].
To operationalize this perspective, the Relation component of the Model–Context–Relation (M–C–R) framework must be understood as extending beyond the physician–AI dyad to include the broader constellation of actors who co-produce clinical and organizational decisions. In contemporary healthcare, responsibility for AI-informed choices is distributed among multidisciplinary teams: physicians, nurses, pharmacists, data scientists, hospital managers, ethics committees, and, where appropriate, patients and community representatives.
This expanded notion of relation reflects the fact that AI-driven processes often influence not only diagnostic reasoning but also care coordination, logistics, and public health planning. Accordingly, accountability becomes collective and multi-level: individual professionals remain ethically responsible for their judgments, but institutions and governance bodies share the duty of ensuring that AI systems are transparent, validated, and aligned with patient and societal values.
Within this extended framework, the “relational” dimension integrates three layers of interaction:
  • Interpersonal, concerning clinician–patient communication and shared decision-making;
  • Interprofessional, encompassing collaboration among healthcare workers, data experts, and administrators;
  • Institutional, involving regulatory oversight, ethical governance, and social participation.
By incorporating these layers, Relation functions as the connective tissue of trustworthy AI, transforming the framework from a model of individual supervision into a paradigm of distributed interpretive responsibility. This perspective aligns with international regulatory expectations that emphasize human oversight not as a single act but as a systemic property of the healthcare ecosystem.

3.7. Conceptual Validation of the M–C–R Framework: The Case of Early Sepsis Diagnosis

The application of the Model–Context–Relation (M–C–R) framework can be illustrated through the paradigmatic case of AI-based early warning systems for sepsis in hospital environments (Figure 2).
Recent prospective multicenter studies, such as Adams et al. [41] and Henry et al. [42], have shown that machine learning models trained on longitudinal vital signs and laboratory data can identify sepsis several hours before conventional clinical recognition. Reported median lead times range from 2.8 to 4.5 h, with AUROC values between 0.80 and 0.88 and variable positive predictive values (20–35%) depending on implementation context.
However, performance alone is insufficient to ensure clinical benefit. When implemented without workflow adaptation, such systems have produced high false-alert rates (up to 85% in some ICU deployments), leading to clinician alert fatigue and limited response rates. Conversely, in sites that combined algorithmic tuning with clear escalation protocols and interdisciplinary training, meaningful improvements were reported, including reductions in sepsis-related mortality of 8–12% and earlier initiation of antibiotics [41,43].
Within the M–C–R framework:
  • Model refers to the neural network generating risk scores from real-time electronic health record data;
  • Context denotes the hospital infrastructure, interoperability, staffing, and local protocols governing alert management;
  • Relation captures the trust and coordination among clinicians, nurses, and data scientists who interpret and act on alerts.
Empirical evidence confirms that system impact depends not only on algorithmic accuracy but on the alignment among these three dimensions. Hospitals with structured communication channels and clear accountability pathways demonstrate significantly higher intervention compliance and clinical benefit. The M–C–R framework therefore explains sepsis detection effectiveness as an emergent property of an adaptive socio-technical system rather than a fixed computational output.
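The Model-level quantities cited above (AUROC, positive predictive value, alert burden) can all be derived from a deployment’s alert log. Below is a minimal sketch on synthetic data, assuming scikit-learn; the prevalence, scores, and threshold are illustrative and not drawn from the cited studies:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Synthetic alert log: 1,000 patient-episodes with 8% sepsis prevalence.
y_true = rng.binomial(1, 0.08, size=1000)
# Hypothetical risk scores, higher on average for true sepsis episodes.
score = np.clip(rng.normal(0.25 + 0.35 * y_true, 0.15), 0.0, 1.0)

threshold = 0.5                  # an alert fires above this risk score
alerts = score >= threshold

auroc = roc_auc_score(y_true, score)
ppv = y_true[alerts].mean()      # fraction of alerts that are true sepsis
alert_burden = alerts.mean()     # fraction of episodes generating an alert

print(f"AUROC={auroc:.2f}  PPV={ppv:.2f}  alert burden={alert_burden:.2%}")
```

Such a computation makes explicit why a model with a respectable AUROC can still overwhelm clinicians: PPV and alert burden, not discrimination alone, drive alert fatigue at the bedside.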

3.8. Applicative Example and Operational Implications: Predictive Management of Hospital Flows

The Model–Context–Relation (M–C–R) framework also applies at the macro-organizational level, where AI supports hospital flow management and resource allocation. Following the COVID-19 pandemic, several European and North American health systems introduced predictive analytics to anticipate admission peaks and optimize staff distribution [44]. Reported implementations include reinforcement learning and Bayesian models analyzing real-time Admission–Discharge–Transfer (ADT) data across regional networks [45,46,47].
Quantitative outcomes from these projects vary depending on organizational maturity. In integrated governance settings, where predictive dashboards were embedded in daily planning meetings, studies reported reductions of 10–18% in emergency-department boarding time, 12–15% improvement in bed turnover efficiency, and decreased elective-surgery cancelations [47]. Conversely, hospitals with limited coordination or siloed decision structures showed negligible improvements despite using identical algorithms.
In M–C–R terms:
  • The Model represents predictive engines trained on operational and epidemiological data;
  • The Context includes hospital logistics, regulatory constraints, and available human resources;
  • The Relation encompasses communication among management teams, clinical departments, and regional authorities that translate predictions into actionable decisions.
Thus, organizational benefits emerge only when predictive insights are embedded within relational and contextual readiness. This confirms that AI value at the system level cannot be measured by algorithmic precision alone but must be assessed through the emergent performance of the institution as a complex adaptive system.

4. Conclusions

Artificial intelligence (AI) neither replaces medicine nor governs healthcare; rather, it becomes a relational and cognitive component capable of amplifying or distorting what it encounters, depending on how it is conceived, designed, and integrated into real contexts. Its actual impact does not depend on what it “knows” or how “performant” it is, but rather on how it interacts with people, rules, values, and institutions within the healthcare system.
A complexity-based approach allows this transition to be interpreted as a form of co-evolution between human and artificial intelligence, in which knowledge emerges from the dynamic alignment of data, context, and relationship [48]. The physician remains the central interpretative and deontological actor, the epistemic translator, custodian of uncertainty, and legal and moral supervisor of decision-making processes.
Far from being diminished by technological progress, the physician’s function is expanded: they become the guarantor of meaning, transparency, and trust connecting technology, patient, and healthcare organization.
Properly situated within the biomedical semiological paradigm, AI represents an amplifier of the collective cognitive capacity of the healthcare system. It has been reported to enhance diagnostic precision and decision-making timeliness, while promoting more reflective, adaptive, and person-centered healthcare processes. However, its effectiveness cannot be separated from a clear regulatory framework, interdisciplinary professional training, and ethical governance ensuring responsibility, traceability, and distributive justice.
True progress lies not in diagnostic automation but in building shared intelligence among humans, science, and technology, one that learns from its limits, adapts to contexts, and keeps the person at the center of healthcare values. Ultimately, AI represents for medicine and healthcare a genuine test of complex thinking: not a substitute for human knowledge, but an ally in making it more conscious, reflective, and sustainable. Only the harmonious alignment of Model, Context, and Relation is expected to translate computational power into meaningful clinical knowledge and technological innovation into collective, ethical, and sustainable benefit.

4.1. Limitations and Future Perspectives

This study is conceptual in nature and does not include empirical validation of the proposed Model–Context–Relation (M–C–R) framework. Although the framework integrates theoretical, regulatory, and practical dimensions, its effectiveness should be further assessed through case studies and interdisciplinary research involving clinicians, data scientists, and policymakers.
Future work could focus on developing quantitative and mixed-method approaches to measure alignment among model performance, contextual adaptability, and relational trust.
Comparative studies across healthcare systems may also clarify how different regulatory and cultural environments influence the emergent properties of AI effectiveness.
Integrating the M–C–R framework into simulation environments or digital twins could provide a valuable tool for testing adaptive behaviors and ethical robustness prior to real-world implementation.

4.2. Proposed Study Designs for Empirical Validation of the M–C–R Framework

To move from conceptual to testable claims, we propose two complementary, mixed-methods study designs capable of operationalizing Model–Context–Relation (M–C–R) alignment and correlating it with clinical and organizational outcomes.
Primary hypothesis. The effectiveness of AI in healthcare is an emergent property of the alignment among Model, Context, and Relation. Sites with higher M–C–R alignment will demonstrate better outcomes than sites with comparable algorithms but weaker alignment.
Operationalization of M–C–R alignment. We will construct a composite MCR-Alignment Index (MCR-AI) (0–100) combining standardized indicators across the three dimensions:
  • Model (M): external validation present (yes/no); calibration (Brier score or calibration slope); AUROC; drift monitoring in place (yes/no); explainability artifacts available (docs/dashboards, yes/no).
  • Context (C): EHR interoperability score; alert routing latency (sec); protocol availability for escalation (yes/no); staffing adequacy index; regulatory/QA procedures embedded (yes/no).
  • Relation (R): proportion of staff trained (%); role clarity index (RACI completed, yes/no); compliance with alerts (% acted within protocol time); perceived trust/usability (Likert 1–5); presence of governance/ethics oversight (yes/no).
Each indicator is min–max normalized and weighted (pre-specified weights, e.g., 0.35 M, 0.30 C, 0.35 R; sensitivity analyses vary weights). Higher scores indicate stronger alignment.
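A minimal computational sketch of the MCR-AI follows; the indicator values, site count, and the handling of the “lower is better” latency item are hypothetical choices for illustration, while the min–max normalization and the 0.35/0.30/0.35 weights follow the specification above:

```python
import numpy as np

WEIGHTS = {"M": 0.35, "C": 0.30, "R": 0.35}   # pre-specified dimension weights

def minmax(values):
    """Min-max normalize an indicator across sites to [0, 1]."""
    x = np.asarray(values, dtype=float)
    span = x.max() - x.min()
    return np.zeros_like(x) if span == 0 else (x - x.min()) / span

# Hypothetical raw indicators for four sites (binary items coded 0/1).
# Each dimension maps to a list of (indicator values, lower_is_better) pairs.
indicators = {
    "M": [([1, 1, 0, 1], False),                 # external validation present
          ([0.81, 0.88, 0.80, 0.85], False)],    # AUROC
    "C": [([4, 9, 3, 6], True),                  # alert routing latency (sec)
          ([1, 1, 0, 1], False)],                # escalation protocol in place
    "R": [([0.90, 0.55, 0.40, 0.75], False),     # proportion of staff trained
          ([4.2, 3.1, 2.5, 3.8], False)],        # trust/usability (Likert 1-5)
}

def mcr_ai(indicators, weights=WEIGHTS):
    """Composite 0-100 alignment index: normalize each indicator, average
    within each dimension, then combine with the pre-specified weights."""
    total = 0.0
    for dim, items in indicators.items():
        normed = [1 - minmax(v) if invert else minmax(v) for v, invert in items]
        total += weights[dim] * np.mean(normed, axis=0)
    return 100 * total

print(mcr_ai(indicators).round(1))  # one MCR-AI score per site
```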
Study A—Prospective multi-site evaluation of AI sepsis early-warning
Design. Prospective, pragmatic, stepped-wedge trial across ≥6 hospitals deploying the same sepsis early-warning model. Sites cross over from “usual care” to “AI-supported workflow” on a randomized schedule. A parallel process evaluation (qualitative interviews, observations) enriches the R and C measures.
Setting and population. Adult inpatients in ED/ICU/wards. Exclusions: palliative care/end-of-life pathways.
Primary outcomes.
  • Time-to-antibiotics (hours) from first qualifying alert;
  • Sepsis-related mortality (in-hospital or 28-day).
Secondary/process outcomes.
  • Lead time before clinician suspicion (hours);
  • PPV/alert burden; alert-fatigue rate (% alerts ignored);
  • Protocol adherence (% alerts followed within X minutes);
  • Length of stay (LOS) and ICU transfer within 24–48 h.
Analysis. Mixed-effects models (patient-level nested in site-period) estimate the effect of deployment; moderation by MCR-AI (site-level) tests whether higher alignment amplifies benefits. Mediation analyses evaluate whether R-layer variables (e.g., compliance, trust) mediate the MCR-AI → outcomes pathway. Pre-specified subgroup analyses (ICU vs. wards; high vs. low baseline resources). Power calculated on detectable difference in time-to-antibiotics and mortality given cluster structure.
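The moderation analysis at the heart of Study A can be sketched as follows, assuming statsmodels; the data-generating step, column names (time_to_abx, deployed, mcr_ai, period, site), and effect sizes are hypothetical stand-ins for the trial data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Synthetic stepped-wedge data: 6 sites x 4 periods x 40 patients each.
rows = []
for site in range(6):
    mcr = rng.uniform(40, 90)        # site-level MCR-Alignment Index
    switch = rng.integers(1, 4)      # period in which the site crosses over
    for period in range(4):
        deployed = int(period >= switch)
        for _ in range(40):
            # Hypothetical effect: deployment shortens time-to-antibiotics,
            # more so at well-aligned sites (the moderation of interest).
            tta = 6.0 - deployed * (0.5 + 0.03 * mcr) + rng.normal(0, 1.5)
            rows.append((site, period, deployed, mcr, tta))
df = pd.DataFrame(rows, columns=["site", "period", "deployed",
                                 "mcr_ai", "time_to_abx"])

# Mixed-effects model: random intercept per site, calendar-period fixed
# effects, and a deployment x alignment interaction testing moderation.
fit = smf.mixedlm("time_to_abx ~ deployed * mcr_ai + C(period)",
                  data=df, groups=df["site"]).fit()
print(fit.params.filter(like="deployed"))  # main effect and interaction
```

A negative deployed:mcr_ai coefficient would indicate that higher M–C–R alignment amplifies the reduction in time-to-antibiotics, which is the hypothesized pattern.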
Study B—Quasi-experimental evaluation of predictive hospital-flow management
Design. Controlled interrupted time series in ≥8 hospitals introducing AI-based admission/boarding forecasts and bed-allocation support, with matched controls. Concurrent process-evaluation for C and R layers.
Primary outcomes.
  • ED boarding time (median minutes);
  • Bed-turnover efficiency (% beds available within target time).
Secondary outcomes.
  • Elective-surgery cancelations;
  • Variability of occupancy (% SD);
  • Staff overtime hours;
  • Time-to-placement for high-acuity patients.
Analysis. Segmented regression with site random effects and calendar adjustments; interaction between intervention and MCR-AI tests whether higher alignment yields larger level/slope changes. Sensitivity: alternative weights in MCR-AI; falsification outcomes (e.g., non-AI-related processes).
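Analogously for Study B, the segmented regression with an alignment interaction can be sketched as below, again assuming statsmodels; the monthly series, roll-out month, and coefficients are hypothetical illustrations of the level-change/slope-change parameterization:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Synthetic monthly series: 8 hospitals, 24 months, AI roll-out at month 12.
rows = []
for site in range(8):
    mcr = rng.uniform(40, 90)                 # site-level alignment index
    for month in range(24):
        post = int(month >= 12)               # level-change indicator
        since = max(0, month - 12)            # slope-change clock
        # Hypothetical boarding time: baseline trend, then a level drop and
        # slope change after roll-out that grow with site alignment.
        y = (300 - 0.5 * month - post * (5 + 0.4 * mcr)
             - since * 0.02 * mcr + rng.normal(0, 8))
        rows.append((site, month, post, since, mcr, y))
ts = pd.DataFrame(rows, columns=["site", "month", "post", "months_since",
                                 "mcr_ai", "boarding_min"])

# Segmented regression with site random intercepts; interactions with the
# MCR-Alignment Index test whether alignment moderates the changes.
fit = smf.mixedlm("boarding_min ~ month + (post + months_since) * mcr_ai",
                  data=ts, groups=ts["site"]).fit()
print(fit.params.filter(like="mcr_ai"))  # moderation of level and slope
```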
Mixed-methods and data governance
Qualitative data (interviews/focus groups with clinicians, nurses, pharmacists, managers, patients/community reps) are analyzed thematically to explain mechanisms linking R and C to outcomes and to refine the MCR-AI. All analyses use de-identified data within institutional approvals; no new identifiable patient data are generated.
Expected contribution. These designs render the M–C–R framework empirically testable by (I) quantifying alignment, (II) linking alignment to meaningful outcomes, and (III) identifying relational/contextual mechanisms that convert algorithmic potential into real-world value.

Author Contributions

Conceptualization, E.D.V., G.C., F.M.S. and S.L.B.; methodology, E.D.V., G.C., F.M.S. and S.L.B.; formal analysis, G.C., F.M.S. and E.M.C.; investigation, E.D.V., G.C., F.M.S., S.L.B., S.N., D.A.F., A.S. and L.C.; writing—original draft preparation, G.C., E.D.V., F.M.S. and S.L.B.; writing—review and editing, P.M., G.A.S. and E.M.C.; supervision, G.A.S. and P.M.; project administration, G.A.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The work is based exclusively on conceptual synthesis and secondary literature. No new data were created or analyzed in this study. All cited materials are available in the public domain through indexed scientific databases and institutional repositories (WHO, EU, FDA).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
CAS: Complex Adaptive System
EHR: Electronic Health Record
FDA: Food and Drug Administration
LLM: Large Language Model
M–C–R: Model–Context–Relation framework
OECD: Organization for Economic Co-operation and Development
SaMD: Software as a Medical Device
SDH: Social Determinants of Health
WHO: World Health Organization

References

  1. Saghiri, M.A.; Vakhnovetsky, J.; Nadershahi, N. Scoping review of artificial intelligence and immersive digital tools in dental education. J. Dent. Educ. 2022, 86, 736–750.
  2. Schwendicke, F.; Göstemeyer, G.; Krois, J. Artificial Intelligence in Dentistry: Chances and Challenges. J. Dent. Res. 2020, 99, 769–774.
  3. Sciarra, F.M.; Caivano, G.; Cacioppo, A.; Messina, P.; Cumbo, E.M.; Di Vita, E.; Scardina, G.A. Dentistry in the Era of Artificial Intelligence: Medical Behavior and Clinical Responsibility. Prosthesis 2025, 7, 95.
  4. Briganti, G.; Le Moine, O. Artificial Intelligence in Medicine: Today and Tomorrow. Front. Med. 2020, 7, 27.
  5. Nutbeam, D.; Milat, A.J. Artificial intelligence and public health: Prospects, hype and challenges. Public Health Res. Pract. 2025, 35, PU24001.
  6. Johnson, K.B.; Wei, W.Q.; Weeraratne, D.; Frisse, M.E.; Misulis, K.; Rhee, K.; Zhao, J.; Snowdon, J.L. Precision Medicine, AI, and the Future of Personalized Health Care. Clin. Transl. Sci. 2021, 14, 86–93.
  7. Kwong, J.C.C.; Nickel, G.C.; Wang, S.C.Y.; Kvedar, J.C. Integrating artificial intelligence into healthcare systems: More than just the algorithm. npj Digit. Med. 2024, 7, 52.
  8. Feng, J.; Phillips, R.V.; Malenica, I.; Bishara, A.; Hubbard, A.E.; Celi, L.A.; Pirracchio, R. Clinical artificial intelligence quality improvement: Towards continual monitoring and updating of AI algorithms in healthcare. npj Digit. Med. 2022, 5, 66.
  9. Soenksen, L.R.; Ma, Y.; Zeng, C.; Boussioux, L.; Villalobos Carballo, K.; Na, L.; Wiberg, H.M.; Li, M.L.; Fuentes, I.; Bertsimas, D. Integrated multimodal artificial intelligence framework for healthcare applications. npj Digit. Med. 2022, 5, 149.
  10. Crossnohere, N.L.; Elsaid, M.; Paskett, J.; Bose-Brill, S.; Bridges, J.F.P. Guidelines for Artificial Intelligence in Medicine: Literature Review and Content Analysis of Frameworks. J. Med. Internet Res. 2022, 24, e36823.
  11. Alowais, S.A.; Alghamdi, S.S.; Alsuhebany, N.; Alqahtani, T.; Alshaya, A.I.; Almohareb, S.N.; Aldairem, A.; Alrashed, M.; Bin Saleh, K.; Badreldin, H.A.; et al. Revolutionizing healthcare: The role of artificial intelligence in clinical practice. BMC Med. Educ. 2023, 23, 689.
  12. Cacioppo, A.; Caivano, G.; Sciarra, F.M.; Cumbo, E.; Messina, P.; Argo, A.; Zerbo, S.; Albano, D.; Scardina, G.A. Digital dentistry: Clinical, ethical and medico-legal aspects in the use of new technologies. Dent. Cadmos 2025, 93, 40–55.
  13. Organisation for Economic Co-operation and Development (OECD). Artificial Intelligence in Society; OECD Publishing: Paris, France, 2019.
  14. European Parliament. EU AI Act: First Regulation on Artificial Intelligence; European Parliament: Brussels, Belgium, 2023. Available online: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence (accessed on 20 August 2025).
  15. World Health Organization (WHO). WHO Outlines Considerations for Regulation of Artificial Intelligence for Health; WHO: Geneva, Switzerland, 2023. Available online: https://www.who.int/news/item/19-10-2023-who-outlines-considerations-for-regulation-of-artificial-intelligence-for-health (accessed on 20 August 2025).
  16. Ulnicane, I. Artificial intelligence in the European Union: Policy, ethics and regulation. In The Routledge Handbook of European Integrations, 1st ed.; Hoerber, T., Weber, G., Cabras, I., Eds.; Routledge: London, UK, 2022; pp. 254–269; ISBN (print): 978-0-367-20307-8; ISBN (electronic): 978-0-429-26208-1.
  17. Keskinbora, K.H. Medical ethics considerations on artificial intelligence. J. Clin. Neurosci. 2019, 64, 277–282.
  18. Avanzo, M.; Stancanello, J.; Pirrone, G.; Drigo, A.; Retico, A. The Evolution of Artificial Intelligence in Medical Imaging: From Computer Science to Machine and Deep Learning. Cancers 2024, 16, 3702.
  19. Secinaro, S.; Calandra, D.; Secinaro, A.; Muthurangu, V.; Biancone, P. The role of artificial intelligence in healthcare: A structured literature review. BMC Med. Inform. Decis. Mak. 2021, 21, 125.
  20. Cacioppo, A.; Sciarra, F.M.; Caivano, G.; Cumbo, E.; Messina, P.; Scardina, G.A. Autonomy and competence of the dentist in radiology: The missing link. Dent. Cadmos 2025, 93, 358–367.
  21. Ahmed, Z. Practicing precision medicine with intelligently integrative clinical and multi-omics data analysis. Hum. Genom. 2020, 14, 35.
  22. Lipkova, J.; Chen, R.J.; Chen, B.; Lu, M.Y.; Barbieri, M.; Shao, D.; Vaidya, A.J.; Chen, C.; Zhuang, L.; Williamson, D.F.K.; et al. Artificial intelligence for multimodal data integration in oncology. Cancer Cell 2022, 40, 1095–1110.
  23. Albano, D.; Argo, A.; Bilello, G.; Cumbo, E.; Lupatelli, M.; Messina, P.; Sciarra, F.M.; Sessa, M.; Zerbo, S.; Scardina, G.A. Oral Squamous Cell Carcinoma: Features and Medico-Legal Implications of Diagnostic Omission. Case Rep. Dent. 2024, 2024, 2578271.
  24. Amann, J.; Blasimme, A.; Vayena, E.; Frey, D.; Madai, V.I.; Precise4Q Consortium. Explainability for artificial intelligence in healthcare: A multidisciplinary perspective. BMC Med. Inform. Decis. Mak. 2020, 20, 310.
  25. Alhaidry, H.M.; Fatani, B.; Alrayes, J.O.; Almana, A.M.; Alfhaed, N.K. ChatGPT in Dentistry: A Comprehensive Review. Cureus 2023, 15, e38317.
  26. Cheng, S.L.; Tsai, S.J.; Bai, Y.M.; Ko, C.H.; Hsu, C.W.; Yang, F.C.; Tsai, C.K.; Tu, Y.K.; Yang, S.N.; Tseng, P.T.; et al. Comparisons of Quality, Correctness, and Similarity Between ChatGPT-Generated and Human-Written Abstracts for Basic Research: Cross-Sectional Study. J. Med. Internet Res. 2023, 25, e51229.
  27. Brunello, G.H.V.; Nakano, E.Y. A Bayesian Measure of Model Accuracy. Entropy 2024, 26, 510.
  28. Kelkar, A.H.; Hantel, A.; Koranteng, E.; Cutler, C.S.; Hammer, M.J.; Abel, G.A. Digital Health to Patient-Facing Artificial Intelligence: Ethical Implications and Threats to Dignity for Patients with Cancer. JCO Oncol. Pract. 2024, 20, 314–317.
  29. Zhang, J.; Zhang, Z.M. Ethics and governance of trustworthy medical artificial intelligence. BMC Med. Inform. Decis. Mak. 2023, 23, 7.
  30. Caivano, G.; Sciarra, F.M.; Messina, P.; Cumbo, E.M.; Caradonna, L.; Di Vita, E.; Nigliaccio, S.; Fontana, D.A.; Scardina, A.; Scardina, G.A. Antimicrobial Resistance and Causal Relationship: A Complex Approach Between Medicine and Dentistry. Medicina 2025, 61, 1870.
  31. Jonkisz, A.; Karniej, P.; Krasowska, D. SERVQUAL method as an “old new” tool for improving the quality of medical services: A literature review. Int. J. Environ. Res. Public Health 2021, 18, 10758.
  32. Thabit, A.K.; Aljereb, N.M.; Khojah, O.M.; Shanab, H.; Badahdah, A. Towards Wiser Prescribing of Antibiotics in Dental Practice: What Pharmacists Want Dentists to Know. Dent. J. 2024, 12, 345.
  33. Tekkeşin, A.İ. Artificial Intelligence in Healthcare: Past, Present and Future. Anatol. J. Cardiol. 2019, 22 (Suppl. S2), 8–9.
  34. Morley, J.; Machado, C.C.V.; Burr, C.; Cowls, J.; Joshi, I.; Taddeo, M.; Floridi, L. The ethics of AI in health care: A mapping review. Soc. Sci. Med. 2020, 260, 113172.
  35. Volkman, R.; Gabriels, K. AI Moral Enhancement: Upgrading the Socio-Technical System of Moral Engagement. Sci. Eng. Ethics 2023, 29, 11.
  36. U.S. Food and Drug Administration (FDA). Artificial Intelligence in Software as a Medical Device; FDA: Silver Spring, MD, USA, 2025. Available online: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-software-medical-device (accessed on 20 August 2025).
  37. European Union. Regulation (EU) 2024/1689—Article 6: Classification Rules for High-Risk AI Systems; European Union: Brussels, Belgium, 2024. Available online: https://artificialintelligenceact.eu/article/6/ (accessed on 20 August 2025).
  38. Felländer-Tsai, L. AI ethics, accountability, and sustainability: Revisiting the Hippocratic path. Acta Orthop. 2020, 91, 1–2.
  39. Char, D.S.; Shah, N.H.; Magnus, D. Implementing Machine Learning in Health Care—Addressing Ethical Challenges. N. Engl. J. Med. 2018, 378, 981–983.
  40. Luxton, D.D. Recommendations for the ethical use and design of artificial intelligent care providers. Artif. Intell. Med. 2014, 62, 1–10.
  41. Adams, R.; Henry, K.E.; Sridharan, A.; Soleimani, H.; Zhan, A.; Rawat, N.; Johnson, L.; Hager, D.N.; Cosgrove, S.E.; Markowski, A.; et al. Prospective, multi-site study of patient outcomes after implementation of the TREWS machine learning-based early warning system for sepsis. Nat. Med. 2022, 28, 1455–1460.
  42. Moor, M.; Rieck, B.; Horn, M.; Jutzeler, C.R.; Borgwardt, K. Early Prediction of Sepsis in the ICU Using Machine Learning: A Systematic Review. Front. Med. 2021, 8, 607952.
  43. Reddy, S.; Rogers, W.; Makinen, V.P.; Coiera, E.; Brown, P.; Wenzel, M.; Weicken, E.; Ansari, S.; Mathur, P.; Casey, A.; et al. Evaluation framework to guide implementation of AI systems into healthcare settings. BMJ Health Care Inform. 2021, 28, e100444.
  44. Wulff, A.; Montag, S.; Marschollek, M.; Jack, T. Clinical Decision-Support Systems for Detection of Systemic Inflammatory Response Syndrome, Sepsis, and Septic Shock in Critically Ill Patients: A Systematic Review. Methods Inf. Med. 2019, 58, e43–e57.
  45. Topol, E.J. High-performance medicine: The convergence of human and artificial intelligence. Nat. Med. 2019, 25, 44–56.
  46. Choi, R.Y.; Coyner, A.S.; Kalpathy-Cramer, J.; Chiang, M.F.; Campbell, J.P. Introduction to Machine Learning, Neural Networks, and Deep Learning. Transl. Vis. Sci. Technol. 2020, 9, 14.
  47. Ankolekar, A.; Eppings, L.; Bottari, F.; Pinho, I.F.; Howard, K.; Baker, R.; Nan, Y.; Xing, X.; Walsh, S.L.; Vos, W.; et al. Using artificial intelligence and predictive modelling to enable learning healthcare systems (LHS) for pandemic preparedness. Comput. Struct. Biotechnol. J. 2024, 24, 412–419.
  48. Fontaine, P.; Ross, S.E.; Zink, T.; Schilling, L.M. Systematic review of health information exchange in primary care practices. J. Am. Board Fam. Med. 2010, 23, 655–670.
Figure 1. Flow diagram illustrating the methodological process through which conceptual, regulatory, and empirical sources were integrated. The structured narrative review combined literature synthesis, analysis of international regulations, and conceptual triangulation grounded in complexity and clinical semiotics, leading to the formulation of the Model–Context–Relation (M–C–R) framework.
Figure 2. Conceptual representation of the Model–Context–Relation (M–C–R) framework. The figure shows how AI effectiveness in healthcare emerges from the dynamic alignment among the Model (algorithmic logic, data structures, predictive capacity), the Context (clinical environment, resources, and regulation), and the Relation (human interactions, accountability, professional roles) within a complex adaptive health system.
Table 1. Main Domains of AI Application in Medicine and Health Systems.

Domain | Focus | Typical Applications | Key Opportunities | Main Challenges
AI for Medicine | Individual patient, diagnosis, therapy, precision medicine | Digital pathology, oncology, predictive modeling, clinical decision support | Personalized care, improved accuracy, multimodal data integration | Validation, interpretability, professional accountability
AI for Health Systems | Population health, prevention, organization, sustainability | Epidemiological forecasting, resource optimization, health inequity analysis | Efficiency, equity, systemic learning | Data governance, policy alignment, ethical oversight
AI and Complexity | Adaptive interaction among people, technologies, and institutions | System-level intelligence, feedback-based learning, relational modeling | Co-evolution of human and artificial reasoning, emergent knowledge | Managing uncertainty, maintaining trust, balancing automation and human judgment
Table 2. Comparison between M–C–R and Existing AI Governance Frameworks.

Framework | Scope and Orientation | Key Principles | Main Limitations in Healthcare AI | Added Value of M–C–R
Socio-technical models | Integration of people, technology, and organization | Interaction between human and technical components | Often descriptive; weak epistemic link to clinical reasoning; accountability treated as ex-post | Embeds interpretive responsibility within the system; defines “Relation” as co-constitutive of AI performance
Learning Health Systems (LHS) | Continuous data-driven improvement cycles | Feedback, adaptation, learning loops | Focus on data feedback, not on meaning interpretation or semiotics; governance remains managerial | Adds semiotic and ethical layers that connect feedback to interpretive coherence and clinical reasoning
Complex Adaptive Systems (CAS) | Health systems as dynamic, nonlinear networks | Emergence, interdependence, adaptation | Lacks explicit human interpretive agency; limited operational guidance for governance of AI tools | Couples complexity dynamics with epistemic accountability and relational supervision
Regulatory risk frameworks (WHO/EU/FDA) | Safety, transparency, and human oversight of AI systems | Traceability, monitoring, post-market control | Normative, not explanatory; limited conceptual integration with epistemology or organizational practice | M–C–R provides a theoretical rationale linking regulation to semiotic and clinical dimensions
M–C–R Framework (proposed) | Clinically grounded governance of AI as a semiotic system | Model–Context–Relation alignment; interpretive accountability; emergent value | — | Offers a testable, epistemically grounded, and ethically aligned model bridging technical, organizational, and human domains
Table 3. Alignment between international regulatory frameworks and the M–C–R components.

Regulatory Principle | WHO (2023) | EU AI Act (2024–2025) | FDA (2025) | Corresponding M–C–R Element
Transparency and explainability | Traceability and documentation of data provenance and model logic | Mandatory technical documentation and explainability requirements for “high-risk” AI systems (Art. 13) | Algorithmic transparency and labeling within SaMD framework | Model—ensures interpretability and auditability of the algorithm
Human oversight and control | Requirement for human-in-the-loop supervision | Mandatory human-in-the-loop and human-on-the-loop safeguards (Art. 14) | Adaptive AI control plans and human review of outputs | Relation—guarantees interpretive responsibility and ethical mediation
Risk management and life-cycle monitoring | Continuous monitoring across the AI life cycle | Risk management system proportional to impact on health (Annex III) | Total product life-cycle (TPLC) approach for SaMD | Context—embeds safety and quality management into operational environments
Equity, inclusiveness, and societal participation | Promotion of equitable access and avoidance of bias | Non-discrimination and fairness requirements | Public transparency and stakeholder feedback in post-market evaluation | Relation/Context—ensures fairness and participatory governance
Post-market surveillance and adaptability | Ongoing evaluation of AI behavior and outcomes | Continuous performance assessment and compliance with technical standards | Periodic algorithm updates and real-world performance monitoring | Model/Context—supports adaptive and context-aware validation
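
Read column-wise, Table 3 also functions as a machine-readable mapping from regulatory principles to the M–C–R components they exercise. The following minimal Python sketch illustrates one way such a mapping could support a governance checklist; all names (`PrincipleAlignment`, `MCR_ALIGNMENT`, `coverage_by_component`) are hypothetical, introduced here purely for illustration and not drawn from the WHO, EU, or FDA documents cited above.

```python
# Illustrative sketch only: encodes the rows of Table 3 as a machine-readable
# mapping so that the regulatory-principle -> M-C-R alignment can be queried
# as a simple governance checklist. Names are hypothetical, not regulatory.

from dataclasses import dataclass


@dataclass(frozen=True)
class PrincipleAlignment:
    principle: str                 # regulatory principle (a row of Table 3)
    mcr_elements: tuple            # M-C-R components the principle maps onto


MCR_ALIGNMENT = (
    PrincipleAlignment("Transparency and explainability", ("Model",)),
    PrincipleAlignment("Human oversight and control", ("Relation",)),
    PrincipleAlignment("Risk management and life-cycle monitoring", ("Context",)),
    PrincipleAlignment("Equity, inclusiveness, and societal participation",
                       ("Relation", "Context")),
    PrincipleAlignment("Post-market surveillance and adaptability",
                       ("Model", "Context")),
)


def coverage_by_component(satisfied_principles):
    """Return, for each M-C-R component, which satisfied principles touch it."""
    coverage = {"Model": [], "Context": [], "Relation": []}
    for alignment in MCR_ALIGNMENT:
        if alignment.principle in satisfied_principles:
            for element in alignment.mcr_elements:
                coverage[element].append(alignment.principle)
    return coverage


# Example: a system documented only for transparency and human oversight
# leaves the Context dimension uncovered, flagging a potential misalignment.
print(coverage_by_component({
    "Transparency and explainability",
    "Human oversight and control",
}))
```

In this reading, an M–C–R component left uncovered by any satisfied principle would signal a Model–Context–Relation misalignment to be addressed before deployment, consistent with the framework’s emphasis on alignment rather than on algorithmic performance alone.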
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
