1. Introduction
Over the past decade, diagnostic precision, treatment personalisation, and predictive analytics in healthcare have been reshaped through advances in machine learning and data-driven modelling. Within this transformation, explainable artificial intelligence (XAI) has been established as a central mechanism for transparency, accountability, and clinician trust [
1,
2]. Decision pathways have been rendered opaque by deep learning architectures, thereby intensifying the significance of XAI. The demand for interpretable models has been reinforced by regulatory mandates, documented cases of algorithmic bias, and the diffusion of clinical artificial intelligence into safety-critical domains [
3]. Historical reliance on rule-based systems has preserved trustworthiness through the embedding of expert-defined heuristics and explicit reasoning processes. Auditability and compliance with clinical guidelines have similarly been enabled by such systems [
4]. In this study, XAI has been positioned not merely as a compliance instrument but as a conceptual bridge between the transparency of rule-based frameworks and the predictive strength of contemporary artificial intelligence. Through this bridging role, epistemic reliability and operational safety in healthcare decision-making have been advanced.
Advances in computational performance have not been accompanied by the removal of opacity that constrains interpretability, reproducibility, and regulatory compliance in healthcare artificial intelligence systems [
5]. Black-box architectures, particularly deep neural networks, have not been associated with clinically actionable explanations and have therefore been restricted in integration into diagnostic and therapeutic workflows [
2]. Deficits in transparency have been linked to the erosion of clinician confidence, the limitation of patient comprehension, and the intensification of institutional exposure to legal and ethical risks when algorithmic outputs diverge from clinical judgement. Historically, these challenges were mitigated by rule-based systems through the provision of explicit decision pathways and guideline-conformant reasoning. Predictive accuracy, however, was reduced in complex and data-rich environments owing to restricted adaptability [
6]. The present study has addressed this gap by reconciling the epistemic rigour and auditability of rule-based frameworks with the adaptive capacity of modern artificial intelligence. Through this reconciliation, healthcare innovation has been advanced as both technologically progressive and socially accountable.
XAI has been defined as a suite of computational methods through which decision-making by artificial intelligence is rendered transparent, interpretable, and aligned with human cognitive frameworks [
7]. Rule-based systems have been characterised by explicitly codified if–then logic grounded in domain expertise, thereby ensuring traceability and regulatory auditability in clinical practice [
8]. Techniques of deep learning—such as saliency mapping, surrogate modelling, and counterfactual explanation—have been developed to approximate interpretability; however, the causal clarity intrinsic to rule-based reasoning has not been preserved [
5]. The model advanced in this study has been positioned not as a post hoc interpretative layer but as an integrative design paradigm in which adaptive statistical learning is embedded within transparent, rule-governed structures. Within this configuration, interpretability has been maintained without loss of predictive accuracy. This synthesis has been proposed as a mechanism through which clinical trust is sustained, informed decision-making is supported, and compliance with emerging frameworks for healthcare artificial intelligence governance is ensured [
9].
The integration of XAI with rule-based systems has been positioned as a transformative innovation through which the epistemic foundations of clinical artificial intelligence are reconceptualised. Adaptive statistical inference has been embedded within inherently interpretable logical structures so that the dual demands for transparency, accountability, auditability, and ethically aligned decision-making in healthcare are met [
7]. Broader implications have been carried by this integration, including regulatory harmonisation, cross-institutional interoperability, and the consolidation of sustained stakeholder trust. These outcomes have been expected to facilitate the adoption of artificial intelligence in domains that have previously exhibited resistance to opaque algorithms [
10]. The synthesis has been defined by a dual commitment to predictive performance and interpretability. Through the establishment of this balance, a methodological precedent has been set for artificial intelligence integration with the potential to reshape clinical workflows, enhance shared decision-making, and reinforce the social licence under which healthcare artificial intelligence operates.
The urgency of advancing XAI—understood as understandable, usable, interpretable, responsible, and accountable artificial intelligence—has been generated by the convergence of rapid algorithmic adoption with intensifying regulatory, ethical, and trust imperatives [
11]. Conventional approaches have treated interpretability as a secondary, post hoc extension of predictive modelling, and transparency has therefore not been embedded as a structural property of healthcare artificial intelligence systems [
12,
13]. A departure from this convention has been proposed in the present study through a hybrid paradigm in which the explicit logical reasoning of rule-based systems is structurally integrated with the adaptive capacity of statistical learning models. Within this configuration, causal interpretability, regulatory auditability, and clinician usability have been preserved without compromise in predictive performance [
14,
15]. The aim has been defined not only as the establishment of a model architecture but also as the articulation of a design philosophy that addresses existing transparency deficits while creating a replicable precedent for responsible artificial intelligence deployment. Through this advance, healthcare innovation policy, clinical workflow integration, and cross-domain governance of intelligent systems have been positioned to be reshaped.
Research Questions
The resurgence and evolving role of rule-based systems within XAI in healthcare are examined. A combined approach of systematic review and scientometric mapping of Scopus-indexed literature is employed to investigate the integration of rule-based logic into contemporary applications. Implications for transparency, trust, and clinical adoption are assessed. The inquiry is directed by the following research questions:
RQ1: What publication trends emerge in Scopus-indexed healthcare literature on XAI and rule-based systems, and which authors, institutions, and countries contribute most prominently to the identified thematic domains?
RQ2: What research gaps remain in connecting rule-based models to interpretability in healthcare, and which areas are underexplored in the current literature?
RQ3: Which thematic clusters emerge from systematically mapping the integration of rule-based systems into XAI for healthcare?
Interpretability in XAI is reframed in this study as a theoretical contribution extending beyond narrow technical accounts. A methodological advance is developed through hybrid modelling, in which rule-based reasoning is integrated with data-driven approaches to balance rigour with adaptability. A practical contribution is established through the alignment of XAI design with emerging global regulatory frameworks, thereby ensuring compliance while supporting transparency and trust. Collectively, these findings are interpreted as indicative of a broader shift towards responsible, human-centred, and innovative adoption of XAI in healthcare systems worldwide.
2. Materials and Methods
2.1. Scientometric Mapping Design
Rule-based systems in XAI were reclaimed as a timely innovation for healthcare in response to the rising demand for interpretable models in safety-critical contexts. Interpretability, innovation, and scientometric mapping were positioned as bridging mechanisms for the repositioning of rule-based reasoning within current debates on XAI. Knowledge was advanced beyond prior narrative reviews through a scientometric mapping design grounded in Scopus data, which enabled systematic, replicable, and large-scale mapping of research dynamics [
16]. Bibliographic records were extracted, metadata were cleaned, co-occurrence patterns were mapped, and thematic domains were clustered through the VOSviewer workflow. Intellectual structures and knowledge networks were visualised by VOSviewer, thereby exposing the ways in which rule-based approaches have either been integrated or marginalised in healthcare XAI research. A theoretical contribution is made by reframing interpretability as a methodological core; a methodological contribution is offered by validating scientometric mapping over narrative synthesis; and a practical contribution is provided by guiding healthcare stakeholders in balancing innovation with transparency [
17].
2.2. Scientometric Mapping Search Strategies
Scopus was selected as the primary data source for this scientometric mapping study owing to its extensive coverage of peer-reviewed journals, conference proceedings, and healthcare-related artificial intelligence literature. International and high-impact publications were indexed within Scopus, thereby ensuring the inclusion of validated scholarly outputs and rendering the database appropriate for the mapping of research frontiers. The timeframe from 1 January 2018 to 20 May 2025 was defined to capture the accelerated growth of XAI research in healthcare during this period. Variations in terminology were addressed through the application of Boolean operators, truncation, and wildcards in the search strategy. The following exact query was executed: TITLE-ABS-KEY (“explainable AI” OR “XAI” OR “interpretable AI” OR “rule-based AI”) AND TITLE-ABS-KEY (“healthcare” OR “healthcare innovation”) AND PUBYEAR > 2018 AND PUBYEAR < 2025.
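For replication purposes, the same Boolean string can be submitted programmatically rather than through the Scopus web interface. The sketch below is a minimal illustration, assuming access to Elsevier's Scopus Search API; the endpoint, header, and response field names follow Elsevier's public documentation but should be verified against the current API version, and SCOPUS_API_KEY is a placeholder environment variable.

```python
# Minimal sketch: submitting the study's exact Boolean query to the Scopus Search API.
# Assumptions: endpoint, header, and response fields follow Elsevier's documentation;
# SCOPUS_API_KEY is a placeholder and must be supplied by the user.
import os
import requests

QUERY = (
    'TITLE-ABS-KEY("explainable AI" OR "XAI" OR "interpretable AI" OR "rule-based AI") '
    'AND TITLE-ABS-KEY("healthcare" OR "healthcare innovation") '
    'AND PUBYEAR > 2018 AND PUBYEAR < 2025'
)

def fetch_scopus_records(query: str, page_size: int = 25, max_records: int = 500) -> list:
    """Page through Scopus Search API results for the given query."""
    url = "https://api.elsevier.com/content/search/scopus"
    headers = {"X-ELS-APIKey": os.environ["SCOPUS_API_KEY"], "Accept": "application/json"}
    records, start = [], 0
    while start < max_records:
        resp = requests.get(url, headers=headers,
                            params={"query": query, "start": start, "count": page_size})
        resp.raise_for_status()
        entries = resp.json().get("search-results", {}).get("entry", [])
        if not entries:
            break
        records.extend(entries)
        start += page_size
    return records

if __name__ == "__main__":
    print(f"Retrieved {len(fetch_scopus_records(QUERY))} records")
```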
The workflow was structured in accordance with PRISMA 2020 guidelines (
Supplementary Materials), and transparency as well as reproducibility were ensured through sequential steps of identification, screening, eligibility, and inclusion [
18]. Replication was facilitated, and the reliability of the scientometric mapping analysis was strengthened through this design. Conceptual terms in XAI and healthcare were combined in the query using AND/OR operators and quotation marks to ensure phrase precision. A total of 1304 records were retrieved from the initial Scopus search, forming the foundation for the PRISMA flow diagram. The scientometric mapping process is illustrated in
Figure 1 through the PRISMA flow diagram.
2.3. Scientometric Mapping Eligibility Criteria
At the eligibility stage, the full texts of all potentially relevant publications were assessed to determine inclusion. Studies were excluded if they were not relevant to healthcare, if they did not maintain an explicit focus on XAI, or if they addressed only black-box models without rule-based interpretability. Transparency was ensured by defining rule-based models as those employing explicit rules, logic, or symbolic reasoning, whereas black-box models were characterised by opaque architectures such as deep neural networks. Healthcare relevance was operationalised through the presence of clinical settings, health technologies, or smart healthcare applications. Following the application of these eligibility criteria, 654 publications were retained as the final set for scientometric mapping analysis.
At the screening stage, duplicates were removed through EndNote and cross-checked in Excel to ensure accuracy. Only peer-reviewed articles published in English and explicitly addressing XAI in healthcare were retained for inclusion, while conference abstracts, grey literature, and studies outside healthcare applications were excluded. Screening was conducted at two levels. At the first level, titles and abstracts were reviewed for relevance, resulting in the exclusion of 412 records owing to irrelevance or non-healthcare scope. At the second level, full texts were assessed for methodological and thematic fit, and 238 records were excluded for lacking an XAI focus or for reliance solely on black-box models. For the studies retained, bibliographic data—including authors, titles, abstracts, keywords, and citations—were exported from Scopus and converted into CSV and RIS formats for import into VOSviewer. Quality control was maintained through the cross-checking of metadata and the resolution of inconsistencies across records. The process has been visualised through the PRISMA flow diagram (
Figure 1), which illustrates the systematic inclusion and exclusion of records prior to final analysis.
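Where a programmatic complement to the EndNote and Excel workflow is preferred, deduplication and the first-level screening filters can be applied directly to the exported records. The sketch below is illustrative only; the column names (Title, DOI, Document Type, Language of Original Document) are assumed from a typical Scopus CSV export and should be checked against the actual file.

```python
# Illustrative sketch: deduplication and first-level screening of a Scopus CSV export.
# Column names are assumed from a typical Scopus export; verify against the actual file.
import pandas as pd

df = pd.read_csv("scopus_export.csv")

# Deduplicate: first by DOI (ignoring records without a DOI), then by normalised title.
df["title_norm"] = df["Title"].str.lower().str.strip()
has_doi = df["DOI"].notna()
df = pd.concat([df[has_doi].drop_duplicates(subset=["DOI"]), df[~has_doi]])
df = df.drop_duplicates(subset=["title_norm"])

# Retain English-language, peer-reviewed articles and reviews only.
df = df[df["Language of Original Document"].eq("English")]
df = df[df["Document Type"].isin(["Article", "Review"])]

print(f"{len(df)} records retained after deduplication and document-type screening")
```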
Within the PRISMA eligibility framework, systematic data cleaning was undertaken to ensure both accuracy and reproducibility. Author names and institutional affiliations were disambiguated to eliminate duplicate identities and false co-authorship links. Keywords were harmonised through the merging of synonyms, the correction of inconsistencies, and the standardisation of spelling variants. From this process, a controlled dictionary was created and subsequently applied as input for VOSviewer. The dictionary increased the validity of the co-occurrence analysis and enhanced the transparency of the scientometric mapping workflow by ensuring that each cleaning step was explicit and replicable.
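The controlled dictionary can be materialised as a VOSviewer thesaurus file so that the harmonisation step is explicit and reusable. The sketch below assumes the tab-separated two-column layout with a "label"/"replace by" header described in the VOSviewer manual; the synonym pairs are illustrative examples, not the full dictionary used in this study.

```python
# Sketch: writing a VOSviewer thesaurus file from a synonym dictionary.
# Assumption: the tab-separated "label"/"replace by" layout matches the format
# expected by VOSviewer; the mappings shown are illustrative examples only.
import csv

synonyms = {
    "explainable ai": "explainable artificial intelligence",
    "xai": "explainable artificial intelligence",
    "m-health": "mhealth",
    "clinical decision support systems": "clinical decision support",
    "rule based systems": "rule-based systems",
}

with open("thesaurus_keywords.txt", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["label", "replace by"])   # header expected by VOSviewer
    for variant, preferred in sorted(synonyms.items()):
        writer.writerow([variant, preferred])  # each variant mapped to its preferred term
```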
2.4. Scientometric Mapping Models
The visualisation of similarities (VOS) equation was employed by Van Eck and Waltman [16] to develop bibliometric maps for each item pair $i$ and $j$, with the similarity $s_{ij}$ ($s_{ij} \geq 0$) used as input. The mapping of items is expressed through the following equation:

$$\min_{x_1, \ldots, x_n} \sum_{i<j} s_{ij} \left\lVert x_i - x_j \right\rVert^{2},$$

subject to

$$\frac{2}{n(n-1)} \sum_{i<j} \left\lVert x_i - x_j \right\rVert = 1.$$

The VOS method is used to calculate the weighted composite of squared distances between each pair of items. The location of each item $i$ is expressed as follows:

$$x_i = \frac{\sum_{j \neq i} s_{ij}\, x_j}{\sum_{j \neq i} s_{ij}},$$

where the item location $x_i$ is defined as a weighted average of the locations of all other items, with the similarities as weights. According to Newman [19], the equation of co-authorship networks is specified using $N$ and $M$, where $N$ denotes the number of researchers and $M$ the total number of publications. In constructing co-authorship networks, $A = [a_{ik}]$ is used to denote the $N \times M$ authorship matrix. The key element $a_{ik}$ of this matrix equals 1 if researcher $i$ is an author of publication $k$ and 0 otherwise. Furthermore, $m_k$ is used to denote the number of authors of publication $k$, as expressed by the following equation:

$$m_k = \sum_{i=1}^{N} a_{ik},$$

where $m_k$ represents the number of authors, with $m_k > 1$ for each publication $k$. The equation is expressed in symmetrical form as $C = [c_{ij}]$ to denote the full-counting co-authorship matrix of size $N \times N$. The key element $c_{ij}$ represents the linkage between researchers $i$ and $j$, defined as follows:

$$c_{ij} = \sum_{k=1}^{M} a_{ik}\, a_{jk}, \qquad i \neq j,$$

where the co-authorship matrix $C$ is defined as follows:

$$C = A A^{\mathsf{T}}.$$

Hence, $C$ is defined as the co-authorship matrix, and $A^{\mathsf{T}}$ as the post-multiplying authorship matrix, with self-links in the co-authorship matrix ($c_{ii}$) set to 0. The fractional-counting co-authorship matrix is expressed as $C' = [c'_{ij}]$, where the linkage between researchers $i$ and $j$, $c'_{ij}$, is defined as follows:

$$c'_{ij} = \sum_{k=1}^{M} \frac{a_{ik}\, a_{jk}}{m_k - 1}, \qquad i \neq j,$$

where the co-authorship matrix $C'$ is defined as follows:

$$C' = A \,\operatorname{diag}(m - \mathbf{1})^{-1} A^{\mathsf{T}},$$

where $\operatorname{diag}(m - \mathbf{1})$ denotes a diagonal matrix with the elements of the vector $m - \mathbf{1}$, while $\mathbf{1}$ denotes a column vector of length $M$ with all elements equal to 1 and $c'_{ii} = 0$.

The co-occurrence links $c_{ij}$ are denoted between nodes $i$ and $j$ ($c_{ij} = c_{ji} \geq 0$), while $s_{ij}$ denotes the association strength of nodes $i$ and $j$. The relationship is expressed in the following equation:

$$s_{ij} = \frac{2\, m\, c_{ij}}{k_i\, k_j},$$

where $k_i$ denotes the total number of links of node $i$, while $m$ denotes the total number of links in the network. The relationship is defined as follows:

$$k_i = \sum_{j \neq i} c_{ij}, \qquad m = \frac{1}{2} \sum_{i} k_i,$$

where, in mapping, node $i$ is represented by a vector $x_i$ denoting the location of node $i$ in a $p$-dimensional map ($x_i \in \mathbb{R}^{p}$), and, in clustering, a positive integer $x_i$ defines the cluster to which node $i$ belongs. The mapping and clustering processes are unified in the following equation:

$$V(x_1, \ldots, x_n) = \sum_{i<j} s_{ij}\, d_{ij}^{2} - \sum_{i<j} d_{ij},$$

minimised with respect to $x_1, \ldots, x_n$. For mapping, the distance between nodes $i$ and $j$ is denoted as $d_{ij}$ and expressed as follows:

$$d_{ij} = \left\lVert x_i - x_j \right\rVert,$$

whereas for clustering the distance parameter is defined as follows:

$$d_{ij} = \begin{cases} 0 & \text{if } x_i = x_j, \\ 1/\gamma & \text{otherwise,} \end{cases}$$

where the parameter $\gamma$ is defined as the resolution ($\gamma > 0$). A larger value of $\gamma$ results in stronger separation, with the co-occurrence structure interpreted in terms of repulsive forces between nodes.

According to Traag et al. [20], the Leiden algorithm was defined as a modularity-based clustering approach with guaranteed well-connected communities. The modularity function with the resolution parameter $\gamma$ is expressed as follows:

$$Q = \frac{1}{2m} \sum_{i,j} \left( w_{ij} - \gamma \frac{k_i k_j}{2m} \right) \delta(c_i, c_j).$$

Here, $Q$ is defined as the modularity score for a given partition; $w_{ij}$ is defined as the weight of the edge between nodes $i$ and $j$, represented by the normalised co-occurrence (e.g., $s_{ij}$); $k_i$ is defined as the degree (or strength) of node $i$; $m$ is defined as the total edge weight in the network; $\gamma$ is defined as the resolution parameter that controls cluster granularity; and $\delta(c_i, c_j)$ is defined as an indicator function, taking the value 1 if nodes $i$ and $j$ belong to the same community and 0 otherwise. A low value of $\gamma$ results in fewer and larger clusters, whereas a high value of $\gamma$ produces more and smaller clusters.
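To make these quantities concrete, the following sketch computes the association strength and the resolution-dependent modularity for a small illustrative co-occurrence network; the matrix values and the two-cluster partition are hypothetical and serve only to show how the formulas above operate.

```python
# Illustrative sketch: association strength and resolution-dependent modularity for a
# toy co-occurrence network. The matrix and the partition are hypothetical examples.
import numpy as np

# Symmetric co-occurrence counts c_ij for four keywords (self-links are zero).
C = np.array([
    [0, 6, 1, 0],
    [6, 0, 2, 1],
    [1, 2, 0, 5],
    [0, 1, 5, 0],
], dtype=float)

k = C.sum(axis=1)                      # total link weight of each node, k_i
m = C.sum() / 2                        # total link weight in the network, m

S = 2 * m * C / np.outer(k, k)         # association strength s_ij = 2*m*c_ij / (k_i*k_j)
np.fill_diagonal(S, 0)

def modularity(W, labels, gamma=1.0):
    """Q = (1/2m) * sum_ij (w_ij - gamma*k_i*k_j/(2m)) * delta(c_i, c_j)."""
    k = W.sum(axis=1)
    m = W.sum() / 2
    same = (np.asarray(labels)[:, None] == np.asarray(labels)[None, :]).astype(float)
    return ((W - gamma * np.outer(k, k) / (2 * m)) * same).sum() / (2 * m)

labels = [0, 0, 1, 1]                  # hypothetical two-cluster partition
for gamma in (0.5, 1.0, 2.0):
    print(f"gamma={gamma}: Q={modularity(C, labels, gamma):.3f}")
```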
The mapping objective was formalised as the systematic identification of structural patterns in the literature on XAI in healthcare, with particular emphasis on the integration of legacy rule-based systems. Methodological rigour was ensured through the adoption of full counting in VOSviewer for both co-citation and co-occurrence analyses, thereby incorporating the complete weight of each link across the dataset. Clustering was conducted with the Leiden algorithm, which was selected over the Louvain approach because it produces more stable and internally consistent partitions, particularly in large bibliometric networks. The resolution parameter was explicitly reported to specify cluster granularity, and multiple values were tested to demonstrate robustness. Edge thresholds were applied in accordance with VOSviewer’s recommended practices to remove spurious connections while preserving meaningful relational structures. To maximise transparency, the exact map settings were reported, including the counting scheme, resolution values, and thresholds, so that the clustering process remains fully traceable and replicable for subsequent scientometric analyses of XAI in healthcare.
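Although clustering was performed inside VOSviewer, the robustness check across resolution values can be approximated outside the tool. The sketch below assumes the python-igraph and leidenalg packages; the RBConfigurationVertexPartition quality function and the resolution_parameter argument are taken from the leidenalg documentation and should be verified against the installed version, and the edge list is a placeholder for the thresholded co-occurrence network.

```python
# Sketch: Leiden clustering of a thresholded keyword co-occurrence network at several
# resolution values. Assumes python-igraph and leidenalg; edges are placeholder data.
import igraph as ig
import leidenalg as la

# (source, target, weight) edges surviving the edge threshold -- placeholder values.
edges = [
    ("xai", "healthcare", 12), ("xai", "deep learning", 9),
    ("healthcare", "mhealth", 7), ("mhealth", "telemonitoring", 6),
    ("xai", "clinical decision support", 8),
]

g = ig.Graph.TupleList(edges, directed=False, weights=True)  # third tuple element -> "weight"

for resolution in (0.5, 1.0, 2.0):   # several values reported to demonstrate robustness
    partition = la.find_partition(
        g, la.RBConfigurationVertexPartition,
        weights="weight", resolution_parameter=resolution,
    )
    print(f"resolution={resolution}: {len(partition)} clusters")
```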
2.5. Scientometric Mapping Clusters
Thematic clustering in scientometric mapping analysis has been employed to identify major research foci, intellectual structures, and emerging trends within a field by grouping co-occurring keywords and citations into distinct thematic areas [
21]. In the context of XAI and rule-based systems in healthcare, thematic clusters generated through VOSviewer have typically revealed domains such as clinical decision support systems, algorithmic transparency, interpretable machine learning, trust in artificial intelligence, and hybrid models. For instance, one cluster was centred on the integration of rule-based logic with deep learning to enhance diagnostic accuracy, while another was oriented towards the ethical and regulatory dimensions of explainability in clinical practice [
5,
7]. These clusters were employed to map the intellectual landscape of the field, thereby disclosing both established and emerging areas of inquiry that are critical to the advancement of trustworthy artificial intelligence in healthcare.
2.6. Scientometric Mapping Analyses
The scientometric mapping analytical procedure was undertaken through a structured process beginning with the extraction of bibliographic data from the Scopus database and concluding with visualisation and interpretation in VOSviewer. A comprehensive keyword search was conducted with Boolean operators to identify literature on XAI, rule-based systems, and healthcare within the period 1 January 2018 to 20 May 2025. The dataset generated from this search comprised metadata including authorship, titles, abstracts, keywords, citations, and institutional affiliations. These records were exported in CSV and RIS formats for bibliometric processing. The data were subsequently cleaned to remove duplicates and irrelevant entries. Terms were normalised to ensure consistency through the merging of synonyms and the standardisation of author names.
The cleaned data were imported into VOSviewer for multiple forms of analysis. Co-authorship analysis was conducted to map collaboration networks among authors and institutions. Co-citation analysis was undertaken to identify the intellectual structure of the field. Co-occurrence analysis of keywords was applied to detect thematic clusters [
17,
20]. VOSviewer’s distance-based visualisation technique was employed to represent relationships, with the proximity between nodes indicating the strength of association. Clustering algorithms embedded in the software were applied to group related items into thematic areas, thereby enabling the identification of core research topics such as interpretability in medical artificial intelligence, clinical decision support, and hybrid rule-learning models. These visual and quantitative outputs were subsequently interpreted in relation to the study’s objective, and rule-based systems were shown to have been reintegrated into contemporary XAI research in healthcare.
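As a complement to the VOSviewer workflow, the keyword co-occurrence counts underlying such maps can be reproduced directly from the exported metadata. The sketch below uses full counting and assumes an "Author Keywords" column with semicolon-separated entries, as in a typical Scopus CSV export; the minimum-link threshold is an illustrative value.

```python
# Sketch: full-counting keyword co-occurrence from a Scopus CSV export.
# Assumption: an "Author Keywords" column with semicolon-separated entries, as in a
# typical Scopus export; the threshold of 2 is an illustrative value.
from collections import Counter
from itertools import combinations
import pandas as pd

df = pd.read_csv("scopus_export.csv")
cooccurrence = Counter()

for cell in df["Author Keywords"].dropna():
    keywords = sorted({kw.strip().lower() for kw in cell.split(";") if kw.strip()})
    for pair in combinations(keywords, 2):   # full counting: every pair in a record adds 1
        cooccurrence[pair] += 1

# Apply a minimal edge threshold, mirroring the link thresholds used in VOSviewer.
links = {pair: w for pair, w in cooccurrence.items() if w >= 2}
print(f"{len(links)} keyword pairs retained for mapping")
```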
4. Discussion
The purpose of this study was defined as the examination of XAI as an emerging innovation in healthcare, with particular emphasis on its bridging role in rule-based systems. A scientometric mapping analysis was conducted on 654 studies indexed in the Scopus database between 2018 and 2025. Through VOSviewer analysis, three distinct clusters were identified—XAI, XAI models, and healthcare—as reported in
Table 3. The principal findings were derived from two cluster networks that mapped XAI and healthcare, as presented in
Table 5, together with the integrated cluster of XAI in healthcare illustrated in
Figure 5. The evidence demonstrated that the research landscape was organised into conceptual groupings that collectively underscored the progressive convergence of XAI techniques with healthcare applications.
In response to
RQ1, the scientometric mapping analysis demonstrated that although black-box models such as deep neural networks have remained dominant within predictive analytics, growing scholarly attention has been directed towards XAI techniques that prioritise interpretability, particularly in high-stakes clinical contexts [
5,
22]. A notable resurgence of rule-based systems was identified, frequently in hybrid forms, providing a bridge between algorithmic complexity and human interpretability [
23]. Thematic clusters generated through VOSviewer were shown to reveal convergence around clinical decision support systems, algorithmic accountability, and human–AI collaboration. These findings were interpreted as indicating that future research and innovation will increasingly favour models designed to achieve both predictive performance and explainability [
15].
The analysis revealed a concentrated thematic focus on foundational values essential to the integration of artificial intelligence in healthcare. These values were found to comprise interpretability, explainability, transparency, trustworthiness, usability, understandability, and accountability. They were identified as central within the literature, reflecting persistent concern that artificial intelligence systems must be not only technically robust but also ethically and operationally viable in clinical practice [
5,
24]. The co-occurrence analysis further showed that terms such as explainability and trust frequently intersected with debates on clinical decision support systems and algorithmic outcomes. These intersections were interpreted as indicating that artificial intelligence tools lacking transparency may erode physician confidence and compromise patient safety [
25].
Within the cluster, understandability was emphasised as a prerequisite for clinical adoption, reinforcing the requirement that models be aligned with human cognitive processes in order to be actionable [
8]. Accountability was also identified as a critical concern, particularly in discussions of responsibility when artificial intelligence systems contribute to medical errors, thereby underscoring the need for traceable and justifiable decision pathways [
26]. The prominence of these themes was shown to indicate a paradigm shift in artificial intelligence research—from an exclusive focus on performance towards a more holistic emphasis on human-centred design that supports ethical implementation, legal compliance, and real-world usability in healthcare systems. These findings were found to align with current regulatory trends and ethical frameworks advocating explainable and accountable AI, marking a decisive turn in the discourse on medical artificial intelligence deployment [
10,
11].
In response to
RQ2, keyword co-occurrence analysis conducted through VOSviewer was found to reveal a concentrated thematic focus on the practical application of XAI technologies in healthcare delivery and digital transformation. The cluster was shown to encompass terms such as smart healthcare, digital health, mHealth, telemonitoring, and healthcare technology, thereby reflecting the increasing integration of intelligent systems into both clinical and remote healthcare environments. The emergence of this cluster was interpreted as evidence that artificial intelligence is no longer restricted to theoretical or diagnostic applications but is progressively embedded within operational health service infrastructures [
27]. The prominence of telemonitoring and mHealth was further shown to suggest a trend towards the decentralisation of care, in which artificial intelligence enables continuous monitoring and decision support beyond traditional clinical settings. This trajectory was interpreted as particularly critical for the management of chronic diseases and for mitigating pandemic-related care disruptions [
28].
The convergence of healthcare applications and technology within this cluster was found to align with global health innovation agendas that emphasise scalability, personalisation, and data-driven efficiency. Numerous studies within this thematic group were observed to focus on XAI-enhanced mobile platforms and wearable technologies designed to support patient engagement, early intervention, and real-time data collection [
29]. The high frequency and link strength of these terms within the cluster were interpreted as evidence of interdisciplinary collaboration between computer scientists, healthcare practitioners, and digital health innovators, thereby underscoring the expanding role of artificial intelligence in enabling smart health ecosystems [
30].
The analysis indicated an underrepresentation of interpretability and explainability in healthcare delivery innovations, reflecting concerns that many XAI tools in telehealth and mobile health applications prioritise functionality over transparency [
5,
7,
12,
25]. This gap was found to highlight a tension between technical innovation and clinical accountability, where the demand for real-time, accessible artificial intelligence must be balanced with trust, regulatory compliance, and user-centred design. Findings from the cluster network underscore not only the transformative potential of artificial intelligence in healthcare delivery but also the urgent need to embed explainability and rule-based logic into system design to ensure adoption and ethical implementation.
In interpreting
RQ3, the integration of rule-based systems within XAI in healthcare was increasingly recognised as a necessary response to the limitations of opaque black-box models. Renewed emphasis was placed on transparency, accountability, and trustworthiness, which were consistently identified as prerequisites for clinical adoption [
31,
32]. Smart healthcare and digital health infrastructures were found to require explainable outputs, particularly in contexts where patient safety and clinical validation are critical [
33]. It was further demonstrated that, despite the dominance of neural architectures in predictive analytics, hybrid models combining statistical learning with rule-based frameworks were better positioned to ensure interpretability. This positioning was interpreted as strengthening both clinical decision support and regulatory compliance [
34].
Healthcare technologies, including mHealth applications, telemonitoring platforms, and digital diagnostics, have been deployed with increasing frequency but have frequently prioritised functionality and scalability over explainability. Concerns were raised that limited transparency in these tools may undermine both physician confidence and patient safety [
35]. The embedding of rule-based systems within XAI was proposed as a means of aligning healthcare innovations with ethical and operational requirements. By formalising decision pathways and codifying accountability, rule-based elements were found to bridge the gap between rapid technical advancement and human interpretability. In doing so, they ensured that outputs could be validated by both healthcare professionals and regulators [
36].
The broader adoption of XAI in healthcare has been argued to depend not only on model performance but also on the explicit embedding of trustworthiness, accountability, and transparency within system design. Evidence was shown to indicate that rule-based XAI frameworks enable clinical stakeholders to interrogate, contest, and refine algorithmic outputs, thereby supporting the development of user-centred and ethically grounded healthcare innovations [
37]. The convergence of XAI, digital health, smart healthcare, and mHealth was consequently interpreted as marking a critical turning point, wherein innovation is increasingly defined by interpretability and human validation rather than computational opacity.
4.1. Practical Implications
Findings from the scientometric mapping analysis offer insights into the ways in which XAI and healthcare technology are shaping medical applications. The analysis centred on interpretability, explainable models, trust, and decision support systems, thereby emphasising the critical role of transparency in XAI-driven healthcare tools. The practical implication was identified as the requirement for AI systems to be aligned with physician reasoning and to maintain auditability, particularly in high-stakes contexts such as diagnosis and prognosis. This underscores the necessity for developers, physicians, and regulatory authorities to prioritise rule-based or inherently interpretable frameworks in XAI solutions, especially in domains where clinical accountability and regulatory approval are decisive.
The findings on rule-based systems highlight the rapid expansion of smart healthcare, mHealth, telemonitoring, and other digital health technologies that extend care delivery beyond hospital environments. These technologies were demonstrated to create opportunities for chronic disease management, remote patient monitoring, and improved access to healthcare in underserved and rural populations. Their scalability and convenience were recognised, while the lack of embedded explainability in many digital health applications was identified as introducing risks to patient safety, data privacy, and clinical trust. In practical terms, the findings were interpreted as underscoring the requirement for healthcare providers and XAI developers to integrate explainability principles, particularly rule-based logic, into next-generation smart healthcare tools. Through such integration, user confidence was shown to be strengthened, adoption rates improved, and compliance with emerging regulatory standards ensured.
4.2. Theoretical Contributions
This study contributed to the theoretical advancement of XAI through the foregrounding of the often-overlooked relevance of rule-based systems within contemporary artificial intelligence healthcare research. While much of the existing literature on XAI has emphasised post hoc explainability methods such as LIME and SHAP [
38], rule-based logic was repositioned in this analysis as a foundational and inherently interpretable model rather than as a legacy system. Scientometric mapping evidence was found to demonstrate that interpretability and practical healthcare innovation are not mutually exclusive but can be jointly addressed through hybrid or logic-based frameworks. This finding was interpreted as supporting a reconceptualisation of XAI theory, extending beyond technical interpretability to encompass cognitive alignment with clinical reasoning. In this way, calls from social science and medical communities for artificial intelligence systems that are more human-centred and transparent were addressed [
8,
23].
The theoretical understanding of technological diffusion and interdisciplinary knowledge integration in healthcare XAI was extended by this study through the demonstration that themes such as telemonitoring, digital health, and smart healthcare are increasingly interwoven with concerns of explainability. This dual-cluster insight was shown to contribute to innovation theory by indicating that the adoption of artificial intelligence in healthcare is shaped not only by performance but also by trust, accountability, and regulatory alignment—factors grounded in theoretical constructs such as socio-technical systems and translational ethics [
5,
29]. By linking interpretability theory with digital health application domains, this research was found to provide a conceptual bridge between artificial intelligence system design and real-world healthcare implementation. In this way, a critical gap in the literature was addressed, and a framework was established for future interdisciplinary inquiry.
4.3. Potential Limitations
This study provided a comprehensive scientometric mapping analysis of XAI and rule-based systems in healthcare; however, several limitations were acknowledged. First, reliance on a single database was recognised, as the analysis was restricted to publications indexed in Scopus, which, although extensive, may have excluded contributions from Web of Science, PubMed, or IEEE Xplore, thereby omitting regional or domain-specific studies. Second, potential language bias was introduced because only English-language publications were included, limiting the visibility of research disseminated in other languages. Third, field delineation was identified as a challenge, since the boundaries of XAI, rule-based systems, and healthcare technologies overlap with adjacent domains, which may have resulted in partial coverage or the inclusion of marginally relevant works. Fourth, parameter sensitivity was noted, as the choice of search strings, thresholds, and clustering algorithms in scientometric mapping tools was recognised as influencing both outcomes and visualisations. Fifth, the study was limited to the period from 1 January 2018 to 20 May 2025. This comparatively short timeframe was acknowledged as a constraint that may not have allowed longer-term trends in rule-based XAI to be fully captured.
4.4. Future Research Paths
Future research was recommended to extend beyond scientometric mapping by incorporating qualitative and mixed-methods approaches—such as expert interviews, case studies, and policy analysis—to capture how explainability is operationalised in clinical practice. Longitudinal studies were identified as necessary to examine how XAI models, particularly those integrating rule-based logic, are adopted, trusted, and evaluated over time by physicians, patients, and regulatory bodies. Further investigation was advised to address geographical disparities in XAI research and application, particularly in low- and middle-income countries where access to digital infrastructure and artificial intelligence expertise remains constrained. Interdisciplinary research engaging bioethics, cognitive science, and medical informatics was also called for to refine the theoretical foundations of interpretability and to inform the design of more user-centred and context-aware artificial intelligence tools for healthcare. Building on this study, future work was deemed essential to bridge the gap between technical development and responsible implementation, thereby enabling the full potential of explainable and trustworthy artificial intelligence in global health.