Systematic Review

Reclaiming XAI as an Innovation in Healthcare: Bridging Rule-Based Systems

Behavioral Science Research Institute, Srinakharinwirot University, Bangkok 10110, Thailand
* Author to whom correspondence should be addressed.
Algorithms 2025, 18(9), 586; https://doi.org/10.3390/a18090586
Submission received: 12 August 2025 / Revised: 9 September 2025 / Accepted: 16 September 2025 / Published: 17 September 2025
(This article belongs to the Special Issue AI-Assisted Medical Diagnostics)

Abstract

The adoption of explainable artificial intelligence (XAI) in healthcare has been increasingly framed as dependent on transparency, trustworthiness, and accountability. The objective of this study was the reclamation of rule-based systems within XAI as innovations aligned with healthcare accountability. A scientometric mapping analysis was conducted using publications indexed in Scopus between 1 January 2018 and 20 May 2025. The search strategy retrieved 1304 records; after screening, 892 full texts were assessed for eligibility, 238 of these were excluded, and 654 studies were retained in accordance with the PRISMA 2020 framework. Thematic cluster analysis, co-authorship structures, and keyword co-occurrence patterns were visualised through VOSviewer 1.6.20. Transparency, accountability, and trustworthiness were established as central values for clinical integration. Expanding domains were identified in smart healthcare, digital health, healthcare technology, and mHealth, while interpretability was observed to remain underrepresented. Rule-based systems, frequently in hybrid forms, were demonstrated to bridge algorithmic complexity with interpretability. This bridging was interpreted as reinforcing physician confidence, regulatory compliance, and patient safety. It was concluded that the advancement of XAI in healthcare has been shaped by the interplay of ethical principles, methodological innovation, and digital health applications. Practical implications, theoretical contributions, and potential limitations were systematically addressed.

1. Introduction

Over the past decade, diagnostic precision, treatment personalisation, and predictive analytics in healthcare have been reshaped through advances in machine learning and data-driven modelling. Within this transformation, XAI has been established as a central mechanism for transparency, accountability, and clinician trust [1,2]. Decision pathways have been rendered opaque by deep learning architectures, thereby intensifying the significance of XAI. The demand for interpretable models has been reinforced by regulatory mandates, documented cases of algorithmic bias, and the diffusion of clinical artificial intelligence into safety-critical domains [3]. Historical reliance on rule-based systems has preserved trustworthiness through the embedding of expert-defined heuristics and explicit reasoning processes. Auditability and compliance with clinical guidelines have similarly been enabled by such systems [4]. In this study, XAI has been positioned not merely as a compliance instrument but as a conceptual bridge between the transparency of rule-based frameworks and the predictive strength of contemporary artificial intelligence. Through this bridging role, epistemic reliability and operational safety in healthcare decision-making have been advanced.
Advances in computational performance have not been accompanied by the removal of opacity that constrains interpretability, reproducibility, and regulatory compliance in healthcare artificial intelligence systems [5]. Black-box architectures, particularly deep neural networks, have not been associated with clinically actionable explanations and have therefore been restricted in integration into diagnostic and therapeutic workflows [2]. Deficits in transparency have been linked to the erosion of clinician confidence, the limitation of patient comprehension, and the intensification of institutional exposure to legal and ethical risks when algorithmic outputs diverge from clinical judgement. Historically, these challenges were mitigated by rule-based systems through the provision of explicit decision pathways and guideline-conformant reasoning. Predictive accuracy, however, was reduced in complex and data-rich environments owing to restricted adaptability [6]. The present study has addressed this gap by reconciling the epistemic rigour and auditability of rule-based frameworks with the adaptive capacity of modern artificial intelligence. Through this reconciliation, healthcare innovation has been advanced as both technologically progressive and socially accountable.
XAI has been defined as a suite of computational methods through which decision-making by artificial intelligence is rendered transparent, interpretable, and aligned with human cognitive frameworks [7]. Rule-based systems have been characterised by explicitly codified if–then logic grounded in domain expertise, thereby ensuring traceability and regulatory auditability in clinical practice [8]. Techniques of deep learning—such as saliency mapping, surrogate modelling, and counterfactual explanation—have been developed to approximate interpretability; however, the causal clarity intrinsic to rule-based reasoning has not been preserved [5]. The model advanced in this study has been positioned not as a post hoc interpretative layer but as an integrative design paradigm in which adaptive statistical learning is embedded within transparent, rule-governed structures. Within this configuration, interpretability has been maintained without loss of predictive accuracy. This synthesis has been proposed as a mechanism through which clinical trust is sustained, informed decision-making is supported, and compliance with emerging frameworks for healthcare artificial intelligence governance is ensured [9].
The integration of XAI with rule-based systems has been positioned as a transformative innovation through which the epistemic foundations of clinical artificial intelligence are reconceptualised. Adaptive statistical inference has been embedded within inherently interpretable logical structures so that the dual demands for transparency, accountability, auditability, and ethically aligned decision-making in healthcare are met [7]. Broader implications have been carried by this integration, including regulatory harmonisation, cross-institutional interoperability, and the consolidation of sustained stakeholder trust. These outcomes have been expected to facilitate the adoption of artificial intelligence in domains that have previously exhibited resistance to opaque algorithms [10]. The synthesis has been defined by a dual commitment to predictive performance and interpretability. Through the establishment of this balance, a methodological precedent has been set for artificial intelligence integration with the potential to reshape clinical workflows, enhance shared decision-making, and reinforce the social licence under which healthcare artificial intelligence operates.
The urgency of advancing XAI—understood as understandable, usable, interpretable, responsible, and accountable artificial intelligence—has been generated by the convergence of rapid algorithmic adoption with intensifying regulatory, ethical, and trust imperatives [11]. Conventional approaches have treated interpretability as a secondary, post hoc extension of predictive modelling, and transparency has therefore not been embedded as a structural property of healthcare artificial intelligence systems [12,13]. A departure from this convention has been proposed in the present study through a hybrid paradigm in which the explicit logical reasoning of rule-based systems is structurally integrated with the adaptive capacity of statistical learning models. Within this configuration, causal interpretability, regulatory auditability, and clinician usability have been preserved without compromise in predictive performance [14,15]. The aim has been defined not only as the establishment of a model architecture but also as the articulation of a design philosophy that addresses existing transparency deficits while creating a replicable precedent for responsible artificial intelligence deployment. Through this advance, healthcare innovation policy, clinical workflow integration, and cross-domain governance of intelligent systems have been positioned to be reshaped.

Research Questions

The resurgence and evolving role of rule-based systems within XAI in healthcare are examined. A combined approach of systematic review and scientometric mapping of Scopus-indexed literature is employed to investigate the integration of rule-based logic into contemporary applications. Implications for transparency, trust, and clinical adoption are assessed. The inquiry is directed by the following research questions:
RQ1: What publication trends emerge in Scopus-indexed healthcare literature on XAI and rule-based systems, and which authors, institutions, and countries contribute most prominently to the identified thematic domains?
RQ2: What research gaps remain in connecting rule-based models to interpretability in healthcare, and which areas are underexplored in the current literature?
RQ3: Which thematic clusters emerge from systematically mapping the integration of rule-based systems into XAI for healthcare?
Interpretability in XAI is reframed in this study as a theoretical contribution extending beyond narrow technical accounts. A methodological advance is developed through hybrid modelling, in which rule-based reasoning is integrated with data-driven approaches to balance rigour with adaptability. A practical contribution is established through the alignment of XAI design with emerging global regulatory frameworks, thereby ensuring compliance while supporting transparency and trust. Collectively, these findings are interpreted as indicative of a broader shift towards responsible, human-centred, and innovative adoption of XAI in healthcare systems worldwide.

2. Materials and Methods

2.1. Scientometric Mapping Design

Rule-based systems in XAI were reclaimed as a timely innovation for healthcare in response to the rising demand for interpretable models in safety-critical contexts. Interpretability, innovation, and scientometric mapping were positioned as bridging mechanisms for the repositioning of rule-based reasoning within current debates on XAI. Knowledge was advanced beyond prior narrative reviews through a scientometric mapping design grounded in Scopus data, thereby enabling systematic, replicable, and large-scale mapping of research dynamics [16]. Bibliographic records were extracted, metadata were cleaned, co-occurrence patterns were mapped, and thematic domains were clustered through the VOSviewer workflow. Intellectual structures and knowledge networks were visualised by VOSviewer, thereby exposing the ways in which rule-based approaches have either been integrated or marginalised in healthcare XAI research. A theoretical contribution is made by reframing interpretability as a methodological core; a methodological contribution is offered by validating scientometric mapping over narrative synthesis; and a practical contribution is provided by guiding healthcare stakeholders in balancing innovation with transparency [17].

2.2. Scientometric Mapping Search Strategies

Scopus was selected as the primary data source for this scientometric mapping study owing to its extensive coverage of peer-reviewed journals, conference proceedings, and healthcare-related artificial intelligence literature. International and high-impact publications were indexed within Scopus, thereby ensuring the inclusion of validated scholarly outputs and rendering the database appropriate for the mapping of research frontiers. The timeframe from 1 January 2018 to 20 May 2025 was defined to capture the accelerated growth of XAI research in healthcare during this period. Variations in terminology were addressed through the application of Boolean operators, truncation, and wildcards in the search strategy. The following exact query was executed: TITLE-ABS-KEY (“explainable AI” OR “XAI” OR “interpretable AI” OR “rule-based AI”) AND TITLE-ABS-KEY (“healthcare” OR “healthcare innovation”) AND PUBYEAR > 2018 AND PUBYEAR < 2025.
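For illustration, a minimal sketch of executing this query programmatically is provided below. The sketch assumes the third-party pybliometrics package and a configured Scopus API key; these tools are not part of the study's reported workflow, and depending on the package version an initial initialisation call may be required.

```python
# Minimal sketch of running the reported Scopus query programmatically.
# Assumes the third-party pybliometrics package and a configured Scopus API
# key; depending on the package version, an initial init() call may be
# required before the first search.
from pybliometrics.scopus import ScopusSearch

QUERY = (
    'TITLE-ABS-KEY("explainable AI" OR "XAI" OR "interpretable AI" OR "rule-based AI") '
    'AND TITLE-ABS-KEY("healthcare" OR "healthcare innovation") '
    'AND PUBYEAR > 2018 AND PUBYEAR < 2025'
)

search = ScopusSearch(QUERY, download=True)
print(f"Records retrieved: {search.get_results_size()}")

# Each result is a namedtuple of bibliographic metadata for later export.
for record in (search.results or [])[:5]:
    print(record.title, record.coverDate)
```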
The workflow was structured in accordance with PRISMA 2020 guidelines (Supplementary Materials), and transparency as well as reproducibility were ensured through sequential steps of identification, screening, eligibility, and inclusion [18]. Replication was facilitated, and the reliability of the scientometric mapping analysis was strengthened through this design. Conceptual terms in XAI and healthcare were combined in the query using AND/OR operators and quotation marks to ensure phrase precision. A total of 1304 records were retrieved from the initial Scopus search, forming the foundation for the PRISMA flow diagram. The scientometric mapping process is illustrated in Figure 1 through the PRISMA flow diagram.

2.3. Scientometric Mapping Eligibility Criteria

At the eligibility stage, the full texts of all potentially relevant publications were assessed to determine inclusion. Studies were excluded if they were not relevant to healthcare, if they did not maintain an explicit focus on XAI, or if they addressed only black-box models without rule-based interpretability. Transparency was ensured by defining rule-based models as those employing explicit rules, logic, or symbolic reasoning, whereas black-box models were characterised by opaque architectures such as deep neural networks. Healthcare relevance was operationalised through the presence of clinical settings, health technologies, or smart healthcare applications. Following the application of these eligibility criteria, 654 publications were retained as the final set for scientometric mapping analysis.
At the screening stage, duplicates were removed through EndNote and cross-checked in Excel to ensure accuracy. Only peer-reviewed articles published in English and explicitly addressing XAI in healthcare were retained for inclusion, while conference abstracts, grey literature, and studies outside healthcare applications were excluded. Screening was conducted at two levels. At the first level, titles and abstracts were reviewed for relevance, resulting in the exclusion of 412 records owing to irrelevance or non-healthcare scope. At the second level, full texts were assessed for methodological and thematic fit, and 238 records were excluded for lacking an XAI focus or for reliance solely on black-box models. For the studies retained, bibliographic data—including authors, titles, abstracts, keywords, and citations—were exported from Scopus and converted into CSV and RIS formats for import into VOSviewer. Quality control was maintained through the cross-checking of metadata and the resolution of inconsistencies across records. The process has been visualised through the PRISMA flow diagram (Figure 1), which illustrates the systematic inclusion and exclusion of records prior to final analysis.
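For transparency, a minimal pandas sketch of an equivalent duplicate check is provided below. The column names Title and DOI follow the standard Scopus CSV export and are assumptions; the study itself relied on EndNote with Excel cross-checks.

```python
# Minimal sketch of duplicate removal on a Scopus CSV export, assuming the
# standard "Title" and "DOI" export columns; the study itself used EndNote
# with Excel cross-checks rather than this script.
import pandas as pd

records = pd.read_csv("scopus_export.csv")

# Normalise titles so case/spacing differences do not hide duplicates.
records["title_key"] = (
    records["Title"].str.lower().str.replace(r"\s+", " ", regex=True).str.strip()
)

# Flag duplicates by DOI (ignoring missing DOIs) or by normalised title.
dup_doi = records["DOI"].notna() & records.duplicated(subset=["DOI"])
dup_title = records.duplicated(subset=["title_key"])
deduplicated = records[~(dup_doi | dup_title)]

print(f"{len(records) - len(deduplicated)} duplicates removed; "
      f"{len(deduplicated)} records retained")
```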
Within the PRISMA eligibility framework, systematic data cleaning was undertaken to ensure both accuracy and reproducibility. Author names and institutional affiliations were disambiguated to eliminate duplicate identities and false co-authorship links. Keywords were harmonised through the merging of synonyms, the correction of inconsistencies, and the standardisation of spelling variants. From this process, a controlled dictionary was created and subsequently applied as input for VOSviewer. The dictionary increased the validity of the co-occurrence analysis and enhanced the transparency of the scientometric mapping workflow by ensuring that each cleaning step was explicit and replicable.
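A minimal sketch of exporting such a controlled dictionary in the two-column thesaurus format accepted by VOSviewer is provided below; the mappings shown are illustrative and do not reproduce the study's full dictionary.

```python
# Minimal sketch of the keyword-harmonisation step: merge synonyms and
# spelling variants into a controlled dictionary, exported as a tab-separated
# VOSviewer thesaurus file with the "label" / "replace by" header it expects.
# The mappings below are illustrative, not the study's full dictionary.
import csv

synonym_map = {
    "explainable ai": "explainable artificial intelligence",
    "xai": "explainable artificial intelligence",
    "interpretable machine learning": "interpretability",
    "m-health": "mhealth",
    "mobile health": "mhealth",
    "rule based systems": "rule-based systems",
}

with open("thesaurus_terms.txt", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["label", "replace by"])
    for label, replacement in sorted(synonym_map.items()):
        writer.writerow([label, replacement])
```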

2.4. Scientometric Mapping Models

The visualisation of similarities (VOS) technique was introduced by Van Eck and Waltman [16] to develop bibliometric maps, with the similarity $s_{ij}$ ($s_{ij} \geq 0$) between each item pair $i$ and $j$ used as input. The mapping of items is obtained by minimising the following objective:

$$V(x_1, \dots, x_n) = \sum_{i<j} s_{ij} \lVert x_i - x_j \rVert^2$$

subject to

$$\frac{2}{n(n-1)} \sum_{i<j} \lVert x_i - x_j \rVert = 1$$
The VOS method thus minimises a weighted sum of squared distances between all pairs of items. The optimal location of each item $i$ satisfies the following equation:

$$x_i^* = \frac{\sum_{j \neq i} s_{ij} x_j}{\sum_{j \neq i} s_{ij}}$$

where the location of item $i$ is a similarity-weighted average of the locations of all other items. According to Newman [19], co-authorship networks are specified using $N$ and $M$, where $N$ denotes the number of researchers and $M$ the total number of publications. In constructing co-authorship networks, $A = [a_{ik}]$ denotes the $N \times M$ authorship matrix, whose element $a_{ik}$ equals 1 if researcher $i$ is an author of publication $k$ and 0 otherwise. Furthermore, $n_k$ denotes the number of authors of publication $k$, as expressed by the following equation:

$$n_k = \sum_{i=1}^{N} a_{ik}$$
where $n_k > 1$ is assumed for each publication $k$. The full-counting co-authorship matrix is the symmetric $N \times N$ matrix $U = [u_{ij}]$, whose element $u_{ij}$ represents the linkage between researchers $i$ and $j$, defined as follows:

$$u_{ij} = \sum_{k=1}^{M} a_{ik} a_{jk}$$

In matrix form, the co-authorship matrix $U$ is obtained by post-multiplying the authorship matrix $A$ by its transpose:

$$U = A A^{T}$$

with self-links in the co-authorship matrix $U$ set to 0. The fractional-counting co-authorship matrix is expressed as $U^{*} = [u_{ij}^{*}]$, where the linkage $u_{ij}^{*}$ between researchers $i$ and $j$ is defined as follows:

$$u_{ij}^{*} = \sum_{k=1}^{M} \frac{a_{ik} a_{jk}}{n_k - 1}$$
In matrix form, the fractional-counting co-authorship matrix $U^{*}$ is given by

$$U^{*} = A \, \mathrm{diag}(A^{T}\mathbf{1} - \mathbf{1})^{-1} A^{T}$$

where $\mathrm{diag}(v)$ denotes a diagonal matrix with the elements of the vector $v$ on its diagonal, $\mathbf{1}$ denotes a column vector of ones of appropriate length, and $U^{*} \geq 0$. The co-occurrence link between nodes $i$ and $j$ is denoted $c_{ij}$ ($c_{ij} = c_{ji} \geq 0$), while $s_{ij}$ denotes the association strength of nodes $i$ and $j$, expressed in the following equation:

$$s_{ij} = \frac{2 m c_{ij}}{c_i c_j}$$

where $c_i$ denotes the total link strength of node $i$ and $m$ denotes the total link strength in the network, defined as follows:

$$c_i = \sum_{j \neq i} c_{ij} \qquad \text{and} \qquad m = \frac{1}{2} \sum_i c_i$$
Here, each node $i$ is represented by a vector $x_i \in \mathbb{R}^{p}$ denoting the location of node $i$ in a $p$-dimensional map ($p = 2$); in the clustering interpretation, $x_i$ instead takes a positive integer value indicating the cluster to which node $i$ belongs. Mapping and clustering are unified in the minimisation of the following function:

$$V(x_1, \dots, x_n) = \sum_{i<j} s_{ij} d_{ij}^{2} - \sum_{i<j} d_{ij}$$

with respect to $x_1, \dots, x_n$. For mapping, the distance $d_{ij}$ between nodes $i$ and $j$ is the Euclidean distance:

$$d_{ij} = \lVert x_i - x_j \rVert = \sqrt{\sum_{k=1}^{p} (x_{ik} - x_{jk})^{2}}$$

For clustering, the distance is parameterised as follows:

$$d_{ij} = \begin{cases} 0 & \text{if } x_i = x_j \\ \dfrac{1}{\gamma} & \text{if } x_i \neq x_j \end{cases}$$

where the parameter $\gamma$ ($\gamma > 0$) is the resolution. A larger value of $\gamma$ results in stronger separation, with the co-occurrence structure interpreted in terms of repulsive forces between nodes.
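For illustration, a minimal NumPy sketch of the constrained mapping objective defined at the start of this subsection is provided below. Naive projected gradient descent on toy similarities is used; VOSviewer's own optimiser differs, so the sketch only demonstrates the formulation.

```python
# Minimal sketch of the constrained VOS mapping objective: minimise
# sum_{i<j} s_ij * ||x_i - x_j||^2 subject to a mean inter-item distance of 1,
# via naive projected gradient descent on toy similarities. VOSviewer uses its
# own dedicated optimiser; this only illustrates the formulation.
import numpy as np

rng = np.random.default_rng(42)
n, p = 6, 2
S = rng.random((n, n))
S = (S + S.T) / 2
np.fill_diagonal(S, 0)                  # toy symmetric similarities s_ij
X = rng.normal(size=(n, p))             # random initial 2-D layout

iu = np.triu_indices(n, k=1)
for _ in range(500):
    diff = X[:, None, :] - X[None, :, :]            # x_i - x_j for all pairs
    grad = 2 * (S[:, :, None] * diff).sum(axis=1)   # gradient of the objective
    X -= 0.02 * grad
    X -= X.mean(axis=0)                             # remove translation freedom
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    X /= D[iu].mean()                               # enforce mean distance = 1

print(X.round(3))                                   # final 2-D item locations
```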
According to Traag et al. [20], the Leiden algorithm was defined as a modularity-based clustering approach with guaranteed well-connected communities. The modularity function with the resolution parameter $\gamma$ is expressed as follows:

$$Q(\gamma) = \frac{1}{2m} \sum_{i,j} \left[ A_{ij} - \gamma \frac{k_i k_j}{2m} \right] \delta(c_i, c_j)$$

Here, $Q(\gamma)$ is defined as the modularity score for a given partition. $A_{ij}$ is defined as the weight of the edge between nodes $i$ and $j$, represented by the normalised co-occurrence (e.g., $s_{ij}$). $k_i = \sum_j A_{ij}$ is defined as the degree (or strength) of node $i$. $m = \frac{1}{2} \sum_{i,j} A_{ij}$ is defined as the total edge weight in the network. $\gamma > 0$ is defined as the resolution parameter that controls cluster granularity. $\delta(c_i, c_j)$ is defined as an indicator function, taking the value 1 if nodes $i$ and $j$ belong to the same community and 0 otherwise. A low value of $\gamma$ results in fewer and larger clusters, whereas a high value of $\gamma$ produces more and smaller clusters.
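For illustration, a minimal sketch evaluating $Q(\gamma)$ for a fixed partition of a toy weighted network is provided below; the Leiden algorithm itself searches over partitions to maximise this quantity, and the adjacency weights shown are illustrative.

```python
# Minimal sketch evaluating the resolution-parameterised modularity Q(gamma)
# for a fixed partition of a toy weighted network, following the formula above.
# The Leiden algorithm searches over partitions to maximise this quantity.
import numpy as np

def modularity(adj, labels, gamma=1.0):
    k = adj.sum(axis=1)                         # node strengths k_i
    m = adj.sum() / 2                           # total edge weight m
    same = labels[:, None] == labels[None, :]   # delta(c_i, c_j)
    return ((adj - gamma * np.outer(k, k) / (2 * m)) * same).sum() / (2 * m)

adj = np.array([[0, 3, 1, 0],
                [3, 0, 0, 1],
                [1, 0, 0, 4],
                [0, 1, 4, 0]], dtype=float)
labels = np.array([0, 0, 1, 1])                 # candidate two-community split

for gamma in (0.5, 1.0, 2.0):
    print(gamma, round(modularity(adj, labels, gamma), 3))
```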
The mapping objective was formalised as the systematic identification of structural patterns in the literature on XAI in healthcare, with particular emphasis on the integration of legacy rule-based systems. Methodological rigour was ensured through the adoption of full counting in VOSviewer for both co-citation and co-occurrence analyses, thereby incorporating the complete weight of each link across the dataset. Clustering was conducted with the Leiden algorithm, which was selected over the Louvain approach because it produces more stable and internally consistent partitions, particularly in large bibliometric networks. The resolution parameter was explicitly reported to specify cluster granularity, and multiple values were tested to demonstrate robustness. Edge thresholds were applied in accordance with VOSviewer’s recommended practices to remove spurious connections while preserving meaningful relational structures. To maximise transparency, the exact mapping settings were reported, including the counting scheme, resolution values, and thresholds, so that the clustering process remains fully traceable and replicable for subsequent scientometric analyses of XAI in healthcare.
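To make the counting schemes of this subsection concrete, a toy worked example of the full-counting and fractional-counting co-authorship matrices and the association-strength normalisation is sketched below; the authorship matrix is illustrative.

```python
# Toy worked example of the subsection's counting equations: full counting
# U = A A^T, fractional counting U* = A diag(A^T 1 - 1)^(-1) A^T, and
# association strength s_ij = 2 m c_ij / (c_i c_j). The authorship matrix
# is illustrative, not drawn from the study's data.
import numpy as np

A = np.array([[1, 1, 0, 1],        # 3 researchers x 4 publications
              [1, 0, 1, 1],
              [0, 1, 1, 1]])
n_k = A.sum(axis=0)                # authors per publication (all n_k > 1 here)

U = A @ A.T                        # full-counting co-authorship matrix
U_frac = A @ np.diag(1.0 / (n_k - 1)) @ A.T   # fractional counting
np.fill_diagonal(U, 0)
np.fill_diagonal(U_frac, 0)        # self-links set to 0, as in the text

C = U.astype(float)                # treat co-authorship links as co-occurrences
c = C.sum(axis=1)                  # c_i: total link strength of node i
m = c.sum() / 2                    # m: total link strength in the network
S = 2 * m * C / np.outer(c, c)     # association-strength matrix s_ij
np.fill_diagonal(S, 0)

print(U)
print(U_frac.round(2))
print(S.round(3))
```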

2.5. Scientometric Mapping Clusters

Thematic clustering in scientometric mapping analysis has been employed to identify major research foci, intellectual structures, and emerging trends within a field by grouping co-occurring keywords and citations into distinct thematic areas [21]. In the context of XAI and rule-based systems in healthcare, thematic clusters generated through VOSviewer have typically revealed domains such as clinical decision support systems, algorithmic transparency, interpretable machine learning, trust in artificial intelligence, and hybrid models. For instance, one cluster was centred on the integration of rule-based logic with deep learning to enhance diagnostic accuracy, while another was oriented towards the ethical and regulatory dimensions of explainability in clinical practice [5,7]. These clusters were employed to map the intellectual landscape of the field, thereby disclosing both established and emerging areas of inquiry that are critical to the advancement of trustworthy artificial intelligence in healthcare.

2.6. Scientometric Mapping Analyses

The scientometric mapping analytical procedure was undertaken through a structured process beginning with the extraction of bibliographic data from the Scopus database and concluding with visualisation and interpretation in VOSviewer. A comprehensive keyword search was conducted with Boolean operators to identify literature on XAI, rule-based systems, and healthcare within the period 1 January 2018 to 20 May 2025. The dataset generated from this search comprised metadata including authorship, titles, abstracts, keywords, citations, and institutional affiliations. These records were exported in CSV and RIS formats for bibliometric processing. The data were subsequently cleaned to remove duplicates and irrelevant entries. Terms were normalised to ensure consistency through the merging of synonyms and the standardisation of author names.
The cleaned data were imported into VOSviewer for multiple forms of analysis. Co-authorship analysis was conducted to map collaboration networks among authors and institutions. Co-citation analysis was undertaken to identify the intellectual structure of the field. Co-occurrence analysis of keywords was applied to detect thematic clusters [17,20]. VOSviewer’s distance-based visualisation technique was employed to represent relationships, with the proximity between nodes indicating the strength of association. Clustering algorithms embedded in the software were applied to group related items into thematic areas, thereby enabling the identification of core research topics such as interpretability in medical artificial intelligence, clinical decision support, and hybrid rule-learning models. These visual and quantitative outputs were subsequently interpreted in relation to the study’s objective, and rule-based systems were shown to have been reintegrated into contemporary XAI research in healthcare.
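For illustration, a minimal sketch of the keyword pair counting that underlies such co-occurrence maps is provided below; the records shown are illustrative and are not drawn from the study's dataset.

```python
# Minimal sketch of building keyword co-occurrence counts from exported
# records, the structure VOSviewer visualises. The records are illustrative.
from itertools import combinations
from collections import Counter

records = [
    ["explainable artificial intelligence", "healthcare", "interpretability"],
    ["explainable artificial intelligence", "rule-based systems", "healthcare"],
    ["smart healthcare", "mhealth", "explainable artificial intelligence"],
]

cooccurrence = Counter()
for keywords in records:
    # Count each unordered keyword pair once per record.
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

for (kw1, kw2), count in cooccurrence.most_common(5):
    print(f"{kw1} <-> {kw2}: {count}")
```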

3. Results

3.1. RQ1: What Publication Trends Emerge in Scopus-Indexed Healthcare Literature on XAI and Rule-Based Systems, and Which Authors, Institutions, and Countries Contribute Most Prominently to the Identified Thematic Domains?

3.1.1. Synthesis of Results

In response to RQ1, a scientometric mapping analysis was undertaken. Marked growth in Scopus-indexed healthcare literature on XAI and rule-based systems was documented between 1 January 2018 and 20 May 2025. The annual growth curve of publications was shown to demonstrate a consistent upward trajectory, indicating sustained scholarly interest in XAI as a healthcare innovation that reclaims and integrates rule-based systems. In parallel, the annual growth curve of citations, supported by a positive trendline, was confirmed as evidence that the field expanded in volume while also gaining recognition and influence across the research community. Taken together, these patterns were interpreted as validating the dual momentum of XAI scholarship: increasing output was shown to signal productivity, while rising citation impact was recognised as reflecting the consolidation of relevance. The role of rule-based systems was substantiated as a central bridge between technical robustness and clinical accountability. Publication and citation trends in XAI for healthcare are illustrated in Figure 2.

3.1.2. Co-Authorship Analysis

The co-authorship analysis of Scopus data identified India as the leading contributor to XAI research in healthcare (see Table 1). The highest publication output (126), strong citation impact (1906), and a centrality score of 0.95 were recorded for India. The United States was ranked second in publication output (94) but achieved the highest centrality score (1.23) and citation impact (2427), confirming its pivotal role in connecting global collaborations. The United Kingdom, Italy, and Saudi Arabia were also distinguished by substantial citation counts and moderate centrality scores, indicating active bilateral and multilateral partnerships. VOSviewer visualisation reinforced these patterns. Dense South–South and South–North linkages were highlighted, particularly between India and Germany, Bangladesh, Australia, and France. Emerging contributions were observed from Bangladesh, Egypt, and Vietnam, reflecting an expanding diversification of the global XAI research ecosystem. India and the United States were positioned as central hubs (see Figure 3).
The United States was revealed as the central hub in XAI healthcare research. Extensive linkages with Italy, Germany, Singapore, and China were demonstrated, underscoring the role of the United States in cross-border innovation. Emerging contributions were identified from Malaysia, Greece, and Morocco, with these states frequently providing regional bridges in Asia, the Middle East, and Africa. The highest publication output (13) was recorded by King Saud University. The greatest citation counts (464 and 377) were achieved by the University of Southern Queensland and Khalifa University of Science and Technology, respectively, reflecting the influence and quality of their research. High productivity was also exhibited by institutions in India and the Middle East, including the Manipal Academy, Vellore Institute of Technology, and Princess Nourah Bint Abdulrahman University. Their performance was shown to indicate a diversification of XAI research leadership beyond Western institutions. A strong trend towards international and multi-institutional collaboration was revealed by the findings (see Table 2).

3.1.3. Citation Analysis

Citation analysis of Scopus data revealed a rapidly growing body of high-impact research published in journals spanning computing, biomedical engineering, and data science. The most cited work, authored by Linardatos in Entropy (1780 citations), was shown to establish foundational frameworks and evaluation methods for XAI. Central publication venues were represented by leading journals such as Information Fusion (impact factor 23.9) and ACM Computing Surveys (impact factor 51.1). Their prominence was shown to reflect both methodological rigour and broad interdisciplinary reach. Sustained scholarly influence was confirmed by authors including Kaur, Yang, and Loh, whose work appeared in outlets with high H-indices (232, 179, 150). Applied journals such as IEEE Access and Sensors were shown to highlight the integration of XAI into sensor-based and diagnostic healthcare technologies. The most influential contributions were published between 2021 and 2023, confirming the emergent yet maturing nature of the field (see Figure 4).

3.2. RQ2: What Research Gaps Remain in Connecting Rule-Based Models to Interpretability in Healthcare, and Which Areas Are Underexplored in the Current Literature?

3.2.1. Co-Occurrence Analysis

In response to RQ2, the findings were shown to indicate that although XAI spans multiple healthcare domains, its strongest momentum has been observed in interpretable diagnostics for life-threatening and complex diseases. This trend was driven by demands for trust, accountability, and clinical relevance. VOSviewer co-occurrence analysis revealed an increasing concentration of XAI in high-impact medical fields such as cardiology, oncology, and neurology. Strong keyword linkages were identified with conditions including lung cancer, brain tumours, Alzheimer’s disease, and heart failure. Central terms such as disease diagnosis, pathology, and cancer classification were positioned as hubs connecting diverse clinical applications. These hubs were shown to underscore the dual emphasis on diagnostic accuracy and interpretability across medical specialties. The recent emergence of terms such as Alzheimer’s disease, dementia, and skin cancer was found to suggest growing interest in explainable models for complex, image-rich, and time-series data (see Figure 5).
The first VOSviewer map, centred on XAI, was shown to reveal a tightly connected conceptual network dominated by terms such as interpretability, explainability, explainable machine learning, and artificial neural networks. Co-occurring terms indicated that discourse has remained primarily oriented towards making complex black-box models more understandable. High-frequency nodes such as LIME and SHAP were identified, confirming a methodological emphasis on post hoc interpretation tools. These methods underscored the reconciliation of model complexity with user trust. Strong linkages were observed to human-centred artificial intelligence, decision-making, and trust, reflecting sustained concern with clinical usability and ethical accountability. Temporal overlay analysis indicated recent growth in subtopics such as interpretable deep learning and generative adversarial networks, signalling the diversification of methodological approaches within healthcare-focused XAI (see Figure 6).

3.2.2. Co-Occurrence Networks

Three interlinked clusters were revealed through the co-occurrence analysis as shaping the discourse on XAI in healthcare (see Table 3). The first cluster was delineated as the conceptual core. It was dominated by XAI, responsible artificial intelligence, and artificial intelligence technologies, with emerging prominence assigned to generative artificial intelligence and trustworthy artificial intelligence. This configuration was interpreted as reflecting a sustained emphasis on ethics, transparency, and the responsible design of intelligent systems. The second cluster was identified as the methodological toolkit. Within this cluster, XAI models, explainability, and accountability were positioned as a central locus of innovation in healthcare. Post hoc tools such as LIME and SHAP were also recorded, highlighting persistent efforts to reconcile the opacity of complex black-box models with interpretability. The third cluster was associated with healthcare applications. It was led by the themes of healthcare, healthcare technology, and healthcare applications, and was extended into domains such as mHealth, smart healthcare, telemonitoring, and digital health. The key terms for the strategic diagram clusters of XAI in healthcare research are depicted in Figure 7. The strategic diagram clusters of XAI and healthcare research are illustrated in Table 4.

3.2.3. Cluster Networks

Interpretability was identified as the dominant construct within the cluster. It was characterised by the highest levels of frequency, centrality, and connectivity. Its position was established as the bridging concept between technical development and clinical utility. Explainability and transparency were subsequently observed, reinforcing the discourse concerned with rendering artificial intelligence systems more understandable. The cluster hierarchy was demonstrated as follows: $C_1$ = {interpretability (f = 163, c = 0.82, l = 1853) > explainability (f = 69, c = 0.75, l = 0) > transparency (f = 60, c = 0.66, l = 0)}. Additional terms were detected: ∪ {trustworthiness (f = 14, c = 0, l = 221), usability (f = 7, c = 0, l = 112), understandability (f = 6, c = 0, l = 83), accountability (f = 5, c = 0, l = 53)}.
Within the healthcare-related cluster, the following constructs were identified: $C_2$ = {healthcare delivery (f = 32, c = 0.46, l = 603), smart healthcare (f = 15, c = 0.32, l = 210), healthcare application (f = 17, c = 0.28, l = 205), telemonitoring (f = 5, c = 0.19, l = 162), digital health (f = 8, c = 0.16, l = 134), mHealth (f = 14, c = 0.17, l = 133), healthcare technology (f = 6, c = 0.13, l = 64)}. Cluster theme networks were identified as low-frequency and low-connectivity constructs, thereby indicating dimensions that remain underexplored yet conceptually significant. The cluster networks of XAI in healthcare are illustrated in Table 5. The application of XAI in healthcare practice is depicted in Figure 8. The research agenda clusters and the impacts of XAI in healthcare are presented in Table 6.

3.3. RQ3: Which Thematic Clusters Emerge from Systematically Mapping the Integration of Rule-Based Systems into XAI for Healthcare?

Scientometric Clustering of Rule-Based Systems

Articulating an answer to RQ3, the cluster network analysis was found to position rule-based systems as a bridging construct between explainability concepts and healthcare applications. Formally, node strength was computed as the sum of weighted connections with all other nodes, expressed as $S_i = \sum_{j \neq i} w_{ij}$. For rule-based systems, the strength score was calculated as $S_{\mathrm{RBS}} \approx 1.0$ for research impact. This value was derived from cumulative linkages with XAI, interpretability, transparency, accountability, healthcare technology, smart healthcare, and digital health. The comparatively high score was interpreted as evidence that rule-based systems were embedded across network clusters of both higher impact, including future potential and priority agendas, and lower impact, including exploratory and mature areas. Rule-based systems were also shown to be consistently aligned with both conceptual and applied dimensions of XAI in healthcare. The ranked research agenda bubble chart for rule-based systems in XAI for healthcare is presented in Figure 9.
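A minimal sketch of this node-strength computation, applied to an illustrative weighted adjacency matrix rather than the study's actual network, is provided below.

```python
# Minimal sketch of the node-strength computation S_i = sum_{j != i} w_ij,
# applied to an illustrative weighted adjacency matrix, not the study's network.
import numpy as np

W = np.array([[0.0, 0.4, 0.3],
              [0.4, 0.0, 0.2],
              [0.3, 0.2, 0.0]])   # w_ij: weighted links among three constructs

strength = W.sum(axis=1)          # S_i for each node
print(strength)                   # e.g. the first node has S = 0.7
```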
Rule-based systems were positioned as a transversal theme within hotspot clustering of XAI healthcare pathways (Figure 10). Their function was demonstrated to enhance connectivity and policy pathways between theoretical constructs—healthcare, responsible AI, rule-based systems, and XAI—and practical applications—actors, providers, regulators, and developers. Evidence was provided that rule-based systems were not treated as an isolated niche but were instead framed as a bridge cluster of strategic significance. Their contribution was identified in consolidating interpretability within policy frameworks while supporting exploratory directions, future potential, mature domains, and priority agendas. An integrated research–policy agenda for XAI in healthcare is presented in Table 7.

4. Discussion

The purpose of this study was defined as the examination of XAI as an emerging innovation in healthcare, with particular emphasis on its bridging role in rule-based systems. A scientometric mapping analysis was conducted on 654 studies indexed in the Scopus database between 2018 and 2025. Through VOSviewer analysis, three distinct clusters were identified—XAI, XAI models, and healthcare—as reported in Table 3. The principal findings were derived from two cluster networks that mapped XAI and healthcare, as presented in Table 5, together with the integrated cluster of XAI in healthcare illustrated in Figure 5. The evidence was shown to demonstrate that the research landscape was organised into conceptual groupings that collectively underscored the progressive convergence of XAI techniques with healthcare applications.
In response to RQ1, the scientometric mapping analysis demonstrated that although black-box models such as deep neural networks have remained dominant within predictive analytics, growing scholarly attention has been directed towards XAI techniques that prioritise interpretability, particularly in high-stakes clinical contexts [5,22]. A notable resurgence of rule-based systems was identified, frequently in hybrid forms, providing a bridge between algorithmic complexity and human interpretability [23]. Thematic clusters generated through VOSviewer were shown to reveal convergence around clinical decision support systems, algorithmic accountability, and human–AI collaboration. These findings were interpreted as indicating that future research and innovation will increasingly favour models designed to achieve both predictive performance and explainability [15].
The analysis revealed a concentrated thematic focus on foundational values essential to the integration of artificial intelligence in healthcare. These values were found to comprise interpretability, explainability, transparency, trustworthiness, usability, understandability, and accountability. They were identified as central within the literature, reflecting persistent concern that artificial intelligence systems must be not only technically robust but also ethically and operationally viable in clinical practice [5,24]. The co-occurrence analysis was further demonstrated to show that terms such as explainability and trust frequently intersected with debates on clinical decision support systems and algorithmic outcomes. These intersections were interpreted as indicating that artificial intelligence tools lacking transparency may erode physician confidence and compromise patient safety [25].
Within the cluster, understandability was emphasised as a prerequisite for clinical adoption, reinforcing the requirement that models be aligned with human cognitive processes in order to be actionable [8]. Accountability was also identified as a critical concern, particularly in discussions of responsibility when artificial intelligence systems contribute to medical errors, thereby underscoring the need for traceable and justifiable decision pathways [26]. The prominence of these themes was shown to indicate a paradigm shift in artificial intelligence research—from an exclusive focus on performance towards a more holistic emphasis on human-centred design that supports ethical implementation, legal compliance, and real-world usability in healthcare systems. These findings were found to align with current regulatory trends and ethical frameworks advocating explainable and accountable AI, marking a decisive turn in the discourse on medical artificial intelligence deployment [10,11].
In response to RQ2, keyword co-occurrence analysis conducted through VOSviewer was found to reveal a concentrated thematic focus on the practical application of XAI technologies in healthcare delivery and digital transformation. The cluster was shown to encompass terms such as smart healthcare, digital health, mHealth, telemonitoring, and healthcare technology, thereby reflecting the increasing integration of intelligent systems into both clinical and remote healthcare environments. The emergence of this cluster was interpreted as evidence that artificial intelligence is no longer restricted to theoretical or diagnostic applications but is progressively embedded within operational health service infrastructures [27]. The prominence of telemonitoring and mHealth was further shown to suggest a trend towards the decentralisation of care, in which artificial intelligence enables continuous monitoring and decision support beyond traditional clinical settings. This trajectory was interpreted as particularly critical for the management of chronic diseases and for mitigating pandemic-related care disruptions [28].
The convergence of healthcare applications and technology within this cluster was found to align with global health innovation agendas that emphasise scalability, personalisation, and data-driven efficiency. Numerous studies within this thematic group were observed to focus on XAI-enhanced mobile platforms and wearable technologies designed to support patient engagement, early intervention, and real-time data collection [29]. The high frequency and link strength of these terms within the cluster were interpreted as evidence of interdisciplinary collaboration between computer scientists, healthcare practitioners, and digital health innovators, thereby underscoring the expanding role of artificial intelligence in enabling smart health ecosystems [30].
The analysis indicated an underrepresentation of interpretability and explainability in healthcare delivery innovations, reflecting concerns that many XAI tools in telehealth and mobile health applications prioritise functionality over transparency [5,7,12,25]. This gap was found to highlight a tension between technical innovation and clinical accountability, where the demand for real-time, accessible artificial intelligence must be balanced with trust, regulatory compliance, and user-centred design. Findings from the cluster network underscore not only the transformative potential of artificial intelligence in healthcare delivery but also the urgent need to embed explainability and rule-based logic into system design to ensure adoption and ethical implementation.
In interpreting RQ3, the integration of rule-based systems within XAI in healthcare was increasingly recognised as a necessary response to the limitations of opaque black-box models. Renewed emphasis was placed on transparency, accountability, and trustworthiness, which were consistently identified as prerequisites for clinical adoption [31,32]. Smart healthcare and digital health infrastructures were found to require explainable outputs, particularly in contexts where patient safety and clinical validation are critical [33]. It was further demonstrated that, despite the dominance of neural architectures in predictive analytics, hybrid models combining statistical learning with rule-based frameworks were better positioned to ensure interpretability. This positioning was interpreted as strengthening both clinical decision support and regulatory compliance [34].
Healthcare technologies, including mHealth applications, telemonitoring platforms, and digital diagnostics, have been deployed with increasing frequency but have frequently prioritised functionality and scalability over explainability. Concerns were raised that limited transparency in these tools may undermine both physician confidence and patient safety [35]. The embedding of rule-based systems within XAI was proposed as a means of aligning healthcare innovations with ethical and operational requirements. By formalising decision pathways and codifying accountability, rule-based elements were found to bridge the gap between rapid technical advancement and human interpretability. In doing so, they ensured that outputs could be validated by both healthcare professionals and regulators [36].
The broader adoption of XAI in healthcare has been argued to depend not only on model performance but also on the explicit embedding of trustworthiness, accountability, and transparency within system design. Evidence was shown to indicate that rule-based XAI frameworks enable clinical stakeholders to interrogate, contest, and refine algorithmic outputs, thereby supporting the development of user-centred and ethically grounded healthcare innovations [37]. The convergence of XAI, digital health, smart healthcare, and mHealth was consequently interpreted as marking a critical turning point, wherein innovation is increasingly defined by interpretability and human validation rather than computational opacity.

4.1. Practical Implications

Findings from the scientometric mapping analysis offer insights into the ways in which XAI and healthcare technology are shaping medical applications. The analysis centred on interpretability, explainable models, trust, and decision support systems, thereby emphasising the critical role of transparency in XAI-driven healthcare tools. The practical implication was identified as the requirement for AI systems to be aligned with physician reasoning and to maintain auditability, particularly in high-stakes contexts such as diagnosis and prognosis. This underscores the necessity for developers, physicians, and regulatory authorities to prioritise rule-based or inherently interpretable frameworks in XAI solutions, especially in domains where clinical accountability and regulatory approval are decisive.
The results of rule-based systems highlight the rapid expansion of smart healthcare, mHealth, telemonitoring, and other digital health technologies that extend care delivery beyond hospital environments. These technologies were demonstrated to create opportunities for chronic disease management, remote patient monitoring, and improved access to healthcare in underserved and rural populations. Their scalability and convenience were recognised, while the lack of embedded explainability in many digital health applications was identified as introducing risks to patient safety, data privacy, and clinical trust. In practical terms, the findings were interpreted as underscoring the requirement for healthcare providers and XAI developers to integrate explainable principles, particularly rule-based logic, into next-generation smart healthcare tools. Through such integration, user confidence was shown to be strengthened, adoption rates improved, and compliance with emerging regulatory standards ensured.

4.2. Theoretical Contributions

This study contributed to the theoretical advancement of XAI through the foregrounding of the often-overlooked relevance of rule-based systems within contemporary artificial intelligence healthcare research. While much of the existing literature on XAI has emphasised post hoc explainability methods such as LIME and SHAP [38], rule-based logic was repositioned in this analysis as a foundational and inherently interpretable model rather than as a legacy system. Scientometric mapping evidence was found to demonstrate that interpretability and practical healthcare innovation are not mutually exclusive but can be jointly addressed through hybrid or logic-based frameworks. This finding was interpreted as supporting a reconceptualisation of XAI theory, extending beyond technical interpretability to encompass cognitive alignment with clinical reasoning. In this way, calls from social science and medical communities for artificial intelligence systems that are more human-centred and transparent were addressed [8,23].
The theoretical understanding of technological diffusion and interdisciplinary knowledge integration in healthcare XAI was extended by this study through the demonstration that themes such as telemonitoring, digital health, and smart healthcare are increasingly interwoven with concerns of explainability. This dual-cluster insight was shown to contribute to innovation theory by indicating that the adoption of artificial intelligence in healthcare is shaped not only by performance but also by trust, accountability, and regulatory alignment—factors grounded in theoretical constructs such as socio-technical systems and translational ethics [5,29]. By linking interpretability theory with digital health application domains, this research was found to provide a conceptual bridge between artificial intelligence system design and real-world healthcare implementation. In this way, a critical gap in the literature was addressed, and a framework was established for future interdisciplinary inquiry.

4.3. Potential Limitations

This study was found to provide a comprehensive scientometric mapping analysis of XAI and rule-based systems in healthcare; however, several limitations were acknowledged. First, reliance on a single database was recognised, as the analysis was restricted to publications indexed in Scopus, which, although extensive, may have excluded contributions from Web of Science, PubMed, or IEEE Xplore, thereby omitting regional or domain-specific studies. Second, potential language bias was introduced because only English-language publications were included, limiting the visibility of research disseminated in other languages. Third, field delineation was identified as a challenge, since the boundaries of XAI, rule-based systems, and healthcare technologies overlap with adjacent domains, which may have resulted in partial coverage or the inclusion of marginally relevant works. Fourth, parameter sensitivity was noted, as the choice of search strings, thresholds, and clustering algorithms in scientometric mapping tools was recognised as influencing both outcomes and visualisations. The study was limited to the period from 1 January 2018 to 20 May 2025. This comparatively short timeframe was acknowledged as a constraint that may not have allowed longer-term trends in rule-based XAI to be fully captured.

4.4. Future Research Paths

Future research was recommended to extend beyond scientometric mapping by incorporating qualitative and mixed-methods approaches—such as expert interviews, case studies, and policy analysis—to capture how explainability is operationalised in clinical practice. Longitudinal studies were identified as necessary to examine how XAI models, particularly those integrating rule-based logic, are adopted, trusted, and evaluated over time by physicians, patients, and regulatory bodies. Further investigation was advised to address geographical disparities in XAI research and application, particularly in low- and middle-income countries where access to digital infrastructure and artificial intelligence expertise remains constrained. Interdisciplinary research engaging bioethics, cognitive science, and medical informatics was also called for to refine the theoretical foundations of interpretability and to inform the design of more user-centred and context-aware artificial intelligence tools for healthcare. Building on this study, future work was deemed essential to bridge the gap between technical development and responsible implementation, thereby enabling the full potential of explainable and trustworthy artificial intelligence in global health.

5. Conclusions

The objective of the study was defined as the reclamation of rule-based systems within XAI as innovations aligned with healthcare accountability, using Scopus-indexed publications between 1 January 2018 and 20 May 2025. In relation to RQ1, the dynamic evolution of XAI in healthcare was demonstrated, with rule-based systems observed to re-emerge as hybrid solutions bridging algorithmic complexity and interpretability. In addressing RQ2, thematic clusters were shown to emphasise foundational values of transparency, trustworthiness, accountability, and explainability, which were consistently positioned as prerequisites for clinical integration. In response to RQ3, healthcare applications such as smart healthcare, digital health, mHealth, and telemonitoring were identified as domains shaped by interpretability and rule-based design in healthcare innovation. The findings were interpreted as indicating that XAI in healthcare advances through the interplay of ethical principles, methodological innovation, and applied technologies. This trajectory was shown to be shaped by actors, regulators, healthcare providers, and XAI developers, while the enduring role of rule-based systems was confirmed as central to bridging technical robustness with clinical accountability.

Supplementary Materials

The following supporting information was made available for download at: https://www.mdpi.com/article/10.3390/a18090586/s1. Table S1: PRISMA 2020 Main Checklist; Table S2: PRISMA 2020 Abstract Checklist; and the OSF registries project (osf.io/mv23e).

Author Contributions

Conceptualisation, H.D.; methodology, H.D.; software, H.D.; validation, H.D.; formal analysis, H.D., C.S., P.P., N.P. and P.C.; investigation, H.D.; resources, H.D.; data curation, H.D.; writing—original draft preparation, H.D., C.S., P.P., N.P. and P.C.; writing—review and editing, H.D., C.S., P.P., N.P. and P.C.; visualisation, H.D.; supervision, H.D.; project administration, H.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The research data were retrieved from the Scopus database (https://www.scopus.com/) on 20 May 2025. The data supporting this study were made openly available at the Open Science Framework under the registration DOI: https://doi.org/10.17605/OSF.IO/ENBZ2.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Adadi, A.; Berrada, M. Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access 2018, 6, 52138–52160.
2. Rajpurkar, P.; Chen, E.; Banerjee, O.; Topol, E.J. AI in health and medicine. Nat. Med. 2022, 28, 31–38.
3. Okoro, E.M.; Umagba, A.O.; Abara, B.A.; Isa, Z.S.; Buhari, A. XAI Based Intelligent Systems for Society 5.0. In Towards Explainable Artificial Intelligence: History, Present Scenarios, and Future Trends; Elsevier: Amsterdam, The Netherlands, 2024; pp. 29–59.
4. Nimmy, S.F.; Hussain, O.K.; Chakrabortty, R.K.; Hussain, F.K.; Saberi, M. An optimized Belief-Rule-Based (BRB) approach to ensure the trustworthiness of interpreted time-series decisions. Knowl.-Based Syst. 2023, 271, 110552.
5. Ghassemi, M.; Oakden-Rayner, L.; Beam, A.L. The false hope of current approaches to explainable artificial intelligence in health care. Lancet Digit. Health 2021, 3, e745–e750.
6. Van Der Waa, J.; Nieuwburg, E.; Cremers, A.; Neerincx, M. Evaluating XAI: A comparison of rule-based and example-based explanations. Artif. Intell. 2021, 291, 103404.
7. Arrieta, A.B.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; García, S.; Gil-López, S.; Molina, D.; Benjamins, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020, 58, 82–115.
8. Miller, T. Explanation in artificial intelligence: Insights from the social sciences. Artif. Intell. 2019, 267, 1–38.
9. Reddy, S.; Allan, S.; Coghlan, S.; Cooper, P. A governance model for the application of AI in health care. J. Am. Med. Inform. Assoc. 2020, 27, 491–497.
10. Srinivasu, P.N.; Sandhya, N.; Jhaveri, R.H.; Raut, R. From blackbox to explainable AI in healthcare: Existing tools and case studies. Mob. Inf. Syst. 2022, 1, 8167821.
11. Hulsen, T. Explainable Artificial Intelligence (XAI): Concepts and Challenges in Healthcare. AI 2023, 4, 652–666.
12. Saraswat, D.; Bhattacharya, P.; Verma, A.; Prasad, V.K.; Tanwar, S.; Sharma, G.; Bokoro, P.N.; Sharma, R. Explainable AI for healthcare 5.0: Opportunities and challenges. IEEE Access 2022, 10, 84486–84517.
13. Dhiman, P.; Bonkra, A.; Kaur, A.; Gulzar, Y.; Hamid, Y.; Mir, M.S.; Soomro, A.B.; Elwasila, O. Healthcare Trust Evolution with Explainable Artificial Intelligence: Bibliometric Analysis. Information 2023, 14, 541.
14. Gupta, J.; Seeja, K.R. A comparative study and systematic analysis of XAI models and their applications in healthcare. Arch. Comput. Methods Eng. 2024, 31, 3977–4002.
15. Noor, A.A.; Manzoor, A.; Qureshi, M.D.M.; Qureshi, M.A.; Rashwan, W. Unveiling Explainable AI in Healthcare: Current Trends, Challenges, and Future Directions. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2025, 15, e70018.
16. Van Eck, N.; Waltman, L. Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics 2010, 84, 523–538.
17. Shi, J.; Bendig, D.; Vollmar, H.C.; Rasche, P. Mapping the Bibliometrics Landscape of AI in Medicine: Methodological Study. J. Med. Internet Res. 2023, 25, e45815.
18. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. BMJ 2021, 372, n71.
19. Newman, M.E. The Structure of Scientific Collaboration Networks. Proc. Natl. Acad. Sci. USA 2001, 98, 404–409.
20. Traag, V.A.; Waltman, L.; Van Eck, N.J. From Louvain to Leiden: Guaranteeing Well-Connected Communities. Sci. Rep. 2019, 9, 5233.
21. Cobo, M.J.; López-Herrera, A.G.; Herrera-Viedma, E.; Herrera, F. Science mapping software tools: Review, analysis, and cooperative study among tools. J. Assoc. Inf. Sci. Technol. 2011, 62, 1382–1402.
22. Tjoa, E.; Guan, C. A survey on explainable artificial intelligence (XAI): Toward medical XAI. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 4793–4813.
23. Amann, J.; Blasimme, A.; Vayena, E.; Frey, D.; Madai, V.I. Explainability for artificial intelligence in healthcare: A multidisciplinary perspective. BMC Med. Inform. Decis. Mak. 2020, 20, 310.
24. Senoner, J.; Schallmoser, S.; Kratzwald, B.; Feuerriegel, S.; Netland, T. Explainable AI improves task performance in human–AI collaboration. Sci. Rep. 2024, 14, 31150.
25. Eke, C.I.; Shuib, L. The role of explainability and transparency in fostering trust in AI healthcare systems: A systematic literature review, open issues and potential solutions. Neural Comput. Appl. 2025, 37, 1999–2034.
26. Wysocki, O.; Davies, J.K.; Vigo, M.; Armstrong, A.C.; Landers, D.; Lee, R.; Freitas, A. Assessing the communication gap between AI models and healthcare professionals: Explainability, utility and trust in AI-driven clinical decision-making. Artif. Intell. 2023, 316, 103839.
27. Jiang, F.; Jiang, Y.; Zhi, H.; Dong, Y.; Li, H.; Ma, S.; Wang, Y.; Dong, Q.; Shen, H.; Wang, Y. Artificial intelligence in healthcare: Past, present and future. Stroke Vasc. Neurol. 2017, 2, 4.
28. Aranovich, T.D.C.; Matulionyte, R. Ensuring AI explainability in healthcare: Problems and possible policy solutions. Inf. Commun. Technol. Law 2023, 32, 259–275.
29. Topol, E.J. High-performance medicine: The convergence of human and artificial intelligence. Nat. Med. 2019, 25, 44–56.
30. Mubarakali, A.; AlJarullah, A. IoT and XAI-Driven Data Aggregation Framework for Intelligent Decision-Making in Smart Healthcare Systems. Sustain. Comput. Inform. Syst. 2025, 48, 101179.
31. Goktas, P.; Grzybowski, A. Shaping the Future of Healthcare: Ethical Clinical Challenges and Pathways to Trustworthy AI. J. Clin. Med. 2025, 14, 1605.
32. Shaw, K.; Cassel, C.K.; Black, C.; Levinson, W. Shared medical regulation in a time of increasing calls for accountability and transparency: Comparison of recertification in the United States, Canada, and the United Kingdom. JAMA 2009, 302, 2008–2014.
33. Gomis-Pastor, M.; Berdún, J.; Borrás-Santos, A.; De Dios López, A.; Fernández-Montells Rama, B.; García-Esquirol, Ó.; Gratacòs, M.; Ontiveros Rodríguez, G.D.; Pelegrín Cruz, R.; Real, J.; et al. Clinical Validation of Digital Healthcare Solutions: State of the Art, Challenges and Opportunities. Healthcare 2024, 12, 1057.
34. Patel, A.U.; Gu, Q.; Esper, R.; Maeser, D.; Maeser, N. The Crucial Role of Interdisciplinary Conferences in Advancing Explainable AI in Healthcare. BioMedInformatics 2024, 4, 1363–1383.
35. Senbekov, M.; Saliev, T.; Bukeyeva, Z.; Almabayeva, A.; Zhanaliyeva, M.; Aitenova, N.; Fakhradiyev, I. The recent progress and applications of digital technologies in healthcare: A review. Int. J. Telemed. Appl. 2020, 2020, 8830200.
36. Silva, B.; Hak, F.; Guimaraes, T.; Manuel, M.; Santos, M.F. Rule-based system for effective clinical decision support. Procedia Comput. Sci. 2023, 220, 880–885.
37. Antoniadi, A.M.; Du, Y.; Guendouz, Y.; Wei, L.; Mazo, C.; Becker, B.A.; Mooney, C. Current Challenges and Future Opportunities for XAI in Machine Learning-Based Clinical Decision Support Systems: A Systematic Review. Appl. Sci. 2021, 11, 5088.
38. Alabi, R.O.; Elmusrati, M.; Leivo, I.; Almangush, A.; Mäkitie, A.A. Machine learning explainability in nasopharyngeal cancer survival using LIME and SHAP. Sci. Rep. 2023, 13, 8984.
Figure 1. PRISMA flow diagram illustrating the process of scientometric mapping analysis.
Figure 2. Publication and citation trends in XAI for healthcare: (a) annual growth curve of publications; (b) annual growth curve of citations with trendline.
Figure 3. Publication trends based on a country-level scientometric mapping analysis: (a) co-authorship networks; (b) citation rankings.
Figure 4. Combined co-authorship and citation network of researchers in XAI in healthcare.
Figure 5. Disease categories classified using XAI in healthcare.
Figure 6. Overlay visualisation showing: (a) XAI; (b) healthcare.
Figure 7. Key terms in XAI for healthcare: (a) key term clusters; (b) strategic diagram of research clusters.
Figure 8. XAI in healthcare clusters: (a) XAI in healthcare networks; (b) research agenda quadrant clusters.
Figure 9. Ranked research clusters in XAI for healthcare.
Figure 10. Research–policy pathways for rule-based systems in XAI for healthcare.
Table 1. Top ten countries in co-authorship networks related to XAI in healthcare.

| Rank | Country/Region | Year | Publications | Citations | Centrality |
|------|----------------|------|--------------|-----------|------------|
| 1 | India | 2018 | 126 | 1906 | 0.95 |
| 2 | United States | 2018 | 94 | 2427 | 1.23 |
| 3 | United Kingdom | 2018 | 55 | 2064 | 0.87 |
| 4 | Saudi Arabia | 2018 | 50 | 598 | 0.48 |
| 5 | Italy | 2018 | 49 | 1506 | 0.61 |
| 6 | South Korea | 2018 | 39 | 842 | 0.35 |
| 7 | Australia | 2018 | 36 | 1568 | 0.39 |
| 8 | China | 2018 | 34 | 1316 | 0.37 |
| 9 | Pakistan | 2018 | 33 | 723 | 0.28 |
| 10 | Canada | 2018 | 26 | 989 | 0.15 |
Table 2. The top ten most productive organizations in XAI in healthcare.

| Rank | Institutions | Publications | Citations | Centrality |
|------|--------------|--------------|-----------|------------|
| 1 | King Saud University | 13 | 186 | 0.78 |
| 2 | Princess Nourah Bint Abdulrahman University | 11 | 91 | 0.64 |
| 3 | Manipal Academy of Higher Education | 10 | 146 | 0.56 |
| 4 | Vellore Institute of Technology | 9 | 249 | 0.47 |
| 5 | Consiglio Nazionale delle Ricerche | 9 | 210 | 0.40 |
| 6 | Manipal Institute of Technology | 9 | 143 | 0.34 |
| 7 | Khalifa University of Science and Technology | 8 | 377 | 0.28 |
| 8 | Korea University | 7 | 239 | 0.21 |
| 9 | University of Southern Queensland | 7 | 464 | 0.17 |
| 10 | Prince Mohammad Bin Fahd University | 7 | 11 | 0.15 |
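
The Centrality columns in Tables 1 and 2 summarise how strongly a country or institution is embedded in the co-authorship network. As a minimal sketch of how such a score can be computed, the following Python fragment derives degree centrality for a toy collaboration graph using the networkx library. The edge list is invented for illustration; the paper does not specify its exact centrality formula, and the study's own networks were built in VOSviewer from Scopus records.

```python
# Minimal, hypothetical sketch: degree centrality over a toy
# country-level co-authorship graph (not data from this study).
import networkx as nx

# Each edge links two countries that co-authored papers; the weight
# counts how many papers they share (all values invented).
edges = [
    ("India", "United States", 12),
    ("India", "United Kingdom", 7),
    ("United States", "United Kingdom", 9),
    ("Saudi Arabia", "India", 5),
    ("Italy", "United Kingdom", 4),
]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# Degree centrality: the share of possible partners a node is linked to.
for country, score in sorted(nx.degree_centrality(G).items(),
                             key=lambda kv: -kv[1]):
    print(f"{country:15s} {score:.2f}")
```

Because degree centrality is normalised to at most 1, whereas Table 1 reports 1.23 for the United States, the study presumably used a different variant; the sketch is one plausible formulation rather than a reproduction of the reported scores.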
Table 3. Key terms identified through a co-word analysis of XAI in healthcare.

| Cluster | Rank | Key Co-Wording | Centrality | Duration | Range |
|---------|------|----------------|------------|----------|-------|
| One | 1 | XAI | 0.95 | 2018–2025 | |
| One | 2 | Responsible AI | 0.87 | 2018–2025 | |
| One | 3 | AI technologies | 0.85 | 2018–2025 | |
| One | 4 | Generative AI | 0.76 | 2018–2025 | |
| One | 5 | Interpretable AI | 0.66 | 2018–2025 | |
| One | 6 | Healthcare AI | 0.58 | 2018–2025 | |
| One | 7 | AI techniques | 0.55 | 2018–2025 | |
| One | 8 | GANs | 0.54 | 2018–2025 | |
| One | 9 | Trustworthy AI | 0.52 | 2018–2025 | |
| Two | 1 | XAI model | 0.97 | 2018–2025 | |
| Two | 2 | Explainability | 0.94 | 2018–2025 | |
| Two | 3 | Accountability | 0.90 | 2018–2025 | |
| Two | 4 | Understandability | 0.88 | 2018–2025 | |
| Two | 5 | Usability | 0.86 | 2018–2025 | |
| Two | 6 | Interpretability | 0.83 | 2018–2025 | |
| Two | 7 | LIME | 0.71 | 2018–2025 | |
| Two | 8 | SHAP | 0.59 | 2018–2025 | |
| Three | 1 | Healthcare | 0.96 | 2018–2025 | |
| Three | 2 | Healthcare technology | 0.87 | 2018–2025 | |
| Three | 3 | Healthcare application | 0.78 | 2018–2025 | |
| Three | 4 | mHealth | 0.72 | 2018–2025 | |
| Three | 5 | Smart healthcare | 0.68 | 2018–2025 | |
| Three | 6 | Healthcare professionals | 0.60 | 2018–2025 | |
| Three | 7 | Healthcare delivery | 0.53 | 2018–2025 | |
| Three | 8 | Telemonitoring | 0.48 | 2018–2025 | |
| Three | 9 | Virtual reality | 0.41 | 2018–2025 | |
| Three | 10 | Digital health | 0.37 | 2018–2025 | |
| Three | 11 | Healthcare providers | 0.30 | 2018–2025 | |
| Three | 12 | Healthcare domains | 0.22 | 2018–2025 | |

Note: shaded markers in the Range column distinguish spans of high citation density from spans of low citation density.
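
Table 3 rests on a co-word analysis: two keywords are linked whenever they appear together in the same record, and the clusters emerge from those co-occurrence counts. A minimal sketch of that counting step is given below; the keyword records are invented, and the actual mapping was performed in VOSviewer rather than with this code.

```python
# Minimal sketch of keyword co-occurrence counting behind a co-word
# analysis; the records below are invented, not the reviewed corpus.
from collections import Counter
from itertools import combinations

records = [
    ["xai", "explainability", "healthcare"],
    ["xai", "interpretability", "shap"],
    ["healthcare", "mhealth", "digital health"],
    ["xai", "explainability", "shap"],
]

co_occurrence = Counter()
for keywords in records:
    # Sorting makes ("a", "b") and ("b", "a") count as the same pair.
    for pair in combinations(sorted(set(keywords)), 2):
        co_occurrence[pair] += 1

for (a, b), n in co_occurrence.most_common(5):
    print(f"{a} -- {b}: {n}")
```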
Table 4. Strategic diagram clusters in XAI and healthcare research.

| Cluster | Quadrant | Key Terms | Interpretation |
|---------|----------|-----------|----------------|
| One | Motor themes (high centrality, high density) | XAI, responsible AI, AI technologies, generative AI, interpretable AI, healthcare AI, AI techniques, GANs, and trustworthy AI | Cluster one was identified as driving the field. Conceptual maturity was shown, and strong links with adjacent domains were maintained. The cluster was positioned as forming the backbone of healthcare AI innovation, where responsible and trustworthy frameworks intersect with advanced technical methods. |
| Two | Basic themes (high centrality, low density) | XAI model, explainability, accountability, understandability, usability, interpretability, LIME, and SHAP | Cluster two was identified as forming a fundamental building block of the field. Wide links across domains were demonstrated, while internal cohesion was shown to be limited. The cluster was defined as establishing the methodological and conceptual foundations of XAI, thereby ensuring accountability, usability, and interpretability in healthcare systems. |
| Three | Niche themes (low centrality, high density) | Healthcare, healthcare technology, healthcare application, mHealth, smart healthcare, healthcare professionals, healthcare delivery, telemonitoring, virtual reality, digital health, healthcare providers, and healthcare domains | Cluster three was identified as containing specialised and internally cohesive themes. Strong internal development was demonstrated, while weaker connections to the wider network were maintained. The cluster was characterised as embodying practical, domain-specific applications of AI and XAI in healthcare, tailored to contexts such as telemonitoring and digital health. |
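
The quadrant labels in Table 4 follow the standard strategic-diagram convention: each cluster is positioned by its centrality (strength of links to other clusters) and its density (internal cohesion), and a split on each axis separates motor, basic, niche, and emerging or declining themes. The sketch below illustrates that classification rule; the cluster names, centrality and density values, and the median thresholds are assumptions for demonstration, not figures reported in this study.

```python
# Minimal sketch of strategic-diagram (Callon-style) classification.
# All numbers are illustrative assumptions, not results of this study.
from statistics import median

clusters = {
    "One (core XAI)": (0.90, 0.80),      # (centrality, density)
    "Two (foundations)": (0.85, 0.30),
    "Three (applications)": (0.35, 0.75),
}

c_med = median(c for c, _ in clusters.values())
d_med = median(d for _, d in clusters.values())

def quadrant(centrality: float, density: float) -> str:
    """Assign a quadrant relative to the median split on each axis."""
    if centrality >= c_med and density >= d_med:
        return "motor theme"
    if centrality >= c_med:
        return "basic theme"
    if density >= d_med:
        return "niche theme"
    return "emerging or declining theme"

for name, (c, d) in clusters.items():
    print(f"{name}: {quadrant(c, d)}")
```

On these illustrative numbers the rule reproduces the qualitative placement in Table 4: cluster one lands in the motor quadrant, cluster two in the basic quadrant, and cluster three in the niche quadrant.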
Table 5. Cluster networks of XAI in healthcare.

| Cluster | Rank | XAI in Healthcare | Occurrence | Centrality | Link | Range |
|---------|------|-------------------|------------|------------|------|-------|
| One (XAI) | 1 | Interpretability | 163 | 0.82 | 1853 | |
| One (XAI) | 2 | Explainability | 69 | 0.75 | 814 | |
| One (XAI) | 3 | Transparency | 60 | 0.66 | 617 | |
| One (XAI) | 4 | Trustworthiness | 14 | 0.16 | 221 | |
| One (XAI) | 5 | Usability | 7 | 0.14 | 71 | |
| One (XAI) | 6 | Understandability | 6 | 0.09 | 58 | |
| One (XAI) | 7 | Accountability | 5 | 0.08 | 53 | |
| Two (Healthcare) | 1 | Healthcare delivery | 32 | 0.46 | 603 | |
| Two (Healthcare) | 2 | Smart healthcare | 15 | 0.32 | 210 | |
| Two (Healthcare) | 3 | Healthcare application | 17 | 0.28 | 205 | |
| Two (Healthcare) | 4 | Telemonitoring | 5 | 0.19 | 162 | |
| Two (Healthcare) | 5 | Digital health | 8 | 0.16 | 134 | |
| Two (Healthcare) | 6 | mHealth | 14 | 0.17 | 133 | |
| Two (Healthcare) | 7 | Healthcare technology | 6 | 0.13 | 64 | |

Note: shaded markers in the Range column distinguish spans of high citation density from spans of low citation density.
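
Table 5 distinguishes Occurrence, the number of records in which a term appears, from Link, which appears to reflect the strength of a term's co-occurrence ties to all other terms (comparable to VOSviewer's total link strength). The following sketch computes both quantities on three invented records to make the distinction concrete; it is an illustration of the metric, not the study's pipeline.

```python
# Minimal sketch separating occurrence from total link strength
# (VOSviewer terminology); the three records are invented.
from collections import Counter
from itertools import combinations

records = [
    {"interpretability", "explainability", "transparency"},
    {"interpretability", "transparency"},
    {"interpretability", "trustworthiness"},
]

occurrence = Counter()
link_strength = Counter()
for terms in records:
    occurrence.update(terms)           # +1 per record a term appears in
    for a, b in combinations(sorted(terms), 2):
        link_strength[a] += 1          # each co-occurring pair adds 1
        link_strength[b] += 1          # to both endpoints

for term in sorted(occurrence, key=occurrence.get, reverse=True):
    print(f"{term}: occurrence={occurrence[term]}, "
          f"link strength={link_strength[term]}")
```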
Table 6. Research agenda clusters and their research impacts.

| Quadrant | Cluster | Research Impact |
|----------|---------|-----------------|
| High impact/low gap (priority agenda) | XAI | Interpretability, transparency, and accountability were identified as the primary drivers of healthcare AI research. Their maturity was shown to position them for large-scale clinical adoption, policy integration, and standard-setting in trustworthy AI for healthcare. |
| High impact/high gap (future potential) | Responsible AI | Clusters such as trustworthiness, usability, and understandability were recognised as vital yet underdeveloped. They were identified as representing high-growth opportunities for advancing human-centred AI in healthcare. Their progression was shown to require cross-disciplinary collaboration and the establishment of robust ethical frameworks. |
| Low impact/high gap (exploratory) | Healthcare | Digital health, mHealth, and telemonitoring were identified as emerging domains that remain fragmented. Their impact was shown to be modest owing to technical, regulatory, and infrastructural gaps. With the integration of XAI, these domains were interpreted as having the potential to evolve into high-impact innovations in personalised and remote healthcare. |
| Low impact/low gap (mature areas) | Hybrid AI–Healthcare | Applied smart healthcare solutions and XAI-based applications were identified as relatively mature, with minimal conceptual gaps. Their impact was recognised as limited by saturation, yet they were shown to provide stable platforms for incremental innovation, benchmarking, and scaling across healthcare systems. |
Table 7. An integrated research–policy agenda for XAI in healthcare.

| Quadrant | Cluster | Regulators | Healthcare Providers | XAI Developers |
|----------|---------|------------|----------------------|----------------|
| High impact/low gap (priority agenda) | XAI | Mandatory explainability standards were proposed for clinical AI tools. These standards were specified as requiring alignment with EU AI Act provisions, FDA regulations, and WHO guidelines in order to ensure safety and transparency. | The adoption of XAI-based decision support was recommended for healthcare systems to strengthen accountability and build patient trust. Training on interpretability was specified as requiring integration into clinical practice to ensure effective and responsible use. | Scalable explainability modules such as SHAP and LRP were recommended for embedding within healthcare platforms. Usability for non-technical users was identified as a priority in order to support practical adoption. |
| High impact/high gap (future potential) | Responsible AI | Ethical compliance frameworks were proposed to address fairness, usability, and trustworthiness. Incentives in the form of responsible innovation grants were recommended to accelerate ethical adoption. | Responsible AI applications were recommended for piloting within clinical workflows, including fairness in triage and usability in digital health applications. | Human-centred AI design principles were recommended for development with participatory input from clinicians and patients. Accountability-by-design was specified as requiring implementation to ensure responsible deployment. |
| Low impact/high gap (exploratory) | Healthcare (digital health, mHealth, telemonitoring) | Regulatory sandboxes were recommended for testing mHealth and telemonitoring solutions with integrated explainability safeguards. Data privacy risks were identified as requiring mitigation to ensure safe and trustworthy adoption. | Telehealth pilots were recommended for implementation with XAI to ensure the interpretation of AI outputs for patients. Digital literacy training was identified as requiring development to support patient understanding and engagement. | Lightweight, explainable models were recommended for design to support mobile and remote monitoring. Interoperability with healthcare IT systems was identified as a priority to enable seamless integration. |
| Low impact/low gap (mature areas) | Hybrid AI–Healthcare | Certification frameworks for mature smart healthcare applications were recommended for continued maintenance. Interoperability standards across providers were specified as requiring assurance to guarantee consistent and reliable deployment. | Benchmarking tools were recommended for assessing AI integration within clinical decision-making. Existing workflows were identified as requiring refinement rather than replacement to support safe and effective adoption. | Established healthcare AI models were recommended for improvement to enhance efficiency and scalability. Interpretability features were specified as requiring incremental enhancement without the introduction of disruptive change. |
| Low impact/low gap (mature/declining areas) | Rule-based systems | Archival standards were recommended for the preservation of legacy systems. Their traceability and audit features were specified as requiring integration into modern regulatory guidelines. | Rule-based systems were recommended for retention as fallback decision aids in sensitive cases where transparency is paramount. | Rule-based logic was recommended for translation into hybrid XAI models. Clinical knowledge bases were specified as requiring embedding to enhance explainability and ensure domain grounding. |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
