Reclaiming XAI as an Innovation in Healthcare: Bridging Rule-Based Systems
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
The authors provide a well-structured, systematic review paper, where research questions are clearly laid out and relevant to the research scope of the surveyed papers. These questions also address gaps in current research and postulate future work directions. Notwithstanding the foregoing, the paper could be easily improved to better balance the relevance of the topic and achieve more effective results in addressing the research questions.
- Within the paper, each section/subsection addressing the research question should be explicitly referenced with a number, so that we know which part of the paper is explicitly addressing it (e.g., adding RQ1/2/3 within rounded brackets).
- Within the introduction, the authors should also remember to not only provide the research questions, but add a brief summary of the findings (just some highlights), for then pointing to the relevant sections in the paper.
- I would reduce the part concerning the methodology on how the papers were collected, and provide a broader analysis of the findings and the comparisons between the papers' objects of study. The authors have already provided a comparison table for the methodology used in conducting the survey. However, including additional tables that compare the content of each surveyed paper, as well as features and missing parts, would be beneficial in providing a clearer summary of the findings.
Overall, the paper provides significant contributions, and might be accepted after applying the aforementioned changes.
Author Response
Response to Reviewers
Dear Reviewers,
Thank you for reviewing the manuscript entitled “Reclaiming Explainable Artificial Intelligence (XAI) as an Innovation in Healthcare: Bridging Rule-Based Systems.” We are grateful that you spent your valuable time reviewing our manuscript and providing constructive suggestions. We have carefully considered each comment and addressed it point by point, as shown below. Changes to the manuscript are marked in tracked changes.
Thank you for taking the time to review our manuscript.
Sincerely,
The author
====================================
REVIEWERS’ SUGGESTIONS FOR THE AUTHOR:
Response to Reviewer #1
General comment 1:
The authors provide a well-structured, systematic review paper, where research questions are clearly laid out and relevant to the research scope of the surveyed papers. These questions also address gaps in current research and postulate future work directions. Notwithstanding the foregoing, the paper could be easily improved to better balance the relevance of the topic and achieve more effective results in addressing the research questions.
General response 1:
We appreciate the reviewer’s recognition of the paper’s structured design, systematic approach, and clear research questions. We also acknowledge the concern regarding the balance between topic relevance and the effectiveness of addressing the research questions. In response, we refined the Discussion to align scope, research gaps, and projected directions more closely. These revisions enhance coherence, sharpen focus, and clarify the paper’s contribution to the field.
Point 1:
Within the paper, each section/subsection addressing the research question should be explicitly referenced with a number, so that we know which part of the paper is explicitly addressing it (e.g., adding RQ1/2/3 within rounded brackets).
Response 1:
We thank the reviewer for this constructive suggestion. We explicitly labelled each subsection with the corresponding research question identifier (e.g., RQ1, RQ2, RQ3). This revision enables readers to link each section directly to the relevant research question and strengthens the transparency of the paper’s structure.
Point 2:
Within the introduction, the authors should also remember to not only provide the research questions, but add a brief summary of the findings (just some highlights), for then pointing to the relevant sections in the paper.
Response 2:
Thank you for this suggestion. We revised the Introduction to include a concise summary of the key findings and indicated the relevant sections for detailed discussion (lines 117–125, page 2).
Point 3:
I would reduce the part concerning the methodology on how the papers were collected, and provide a broader analysis of the findings and the comparisons between the papers' objects of study. The authors have already provided a comparison table for the methodology used in conducting the survey. However, including additional tables that compare the content of each surveyed paper, as well as features and missing parts, would be beneficial in providing a clearer summary of the findings.
Response 3:
Thank you for this constructive feedback. We shortened the description of paper collection, expanded the analysis of findings, and added comparative tables to highlight features and gaps across the surveyed studies.
Point 4:
Overall, the paper provides significant contributions, and might be accepted after applying the aforementioned changes.
Response 4:
We appreciate the reviewer’s positive evaluation of our work. We carefully implemented all suggested revisions. These changes enhance the paper’s clarity, strengthen its rigour, and reinforce its overall contribution.
Reviewer 2 Report
Comments and Suggestions for Authors
The manuscript has clear potential to make a valuable contribution, provided several areas are strengthened. I'm sharing the following comments to align with best practices in systematic reviews and bibliometric research. The first area for improvement is that the abstract is currently too general in scope. It should clearly present essential information, including the database(s) used, the search date, the time window, the total number of records analyzed, and the headline results (e.g., top contributing countries or institutions). Including a short note on limitations will also improve transparency.
Although the three research questions (RQ1–RQ3) are appropriate, the novelty of the study in comparison to recent systematic reviews and bibliometric analyses should be more explicitly stated. For example, similar studies include: Dhiman 2023 (bibliometric analysis of empirical XAI in healthcare, DOI: 10.3390/info14100541) and Shi 2023 (Landscape of AI in medicine, DOI: 10.2196/45815) are the closest methodological comparators to your metadata/VOSviewer approach. You need to: (i) cite them explicitly, (ii) align your research. In addition, Noor 2025 (DOI: 10.1002/widm.70018) synthesized methods, challenges, and directions across clinical areas. These results are close to a general XAI-healthcare mapping. If your paper claims a broad state-of-the-art synthesis, you must position against these and state what you add to the current state of the art.
Now, the materials and methods section is the part of the paper where there are many opportunities to improve clarity, transparency, and reproducibility. In its current form, it is not easy to replicate the methodology, so please specify:
- The exact queries used (with Boolean logic), the search date, and time window.
- Inclusion/exclusion criteria and de-duplication steps. It is also necessary to explain why only Scopus was selected.
- Data cleaning methods (author and affiliation disambiguation, keyword harmonization) serve as a preliminary step for creating a dictionary for VOSviewer; in both cases, include the process to improve transparency.
- The counting method (full or fractional), normalization approach (e.g., association strength), community detection algorithm (Leiden recommended), and resolution γ parameter.
- Software tools and versions (e.g., specify the VOSviewer version and also mention the software used for data preprocessing).
- Following the PRISMA 2020 statement (doi:10.1136/bmj.n71) will help structure these elements.
On the other hand, if the authors decide to retain equations, I suggest reorganizing them into a short subsection titled “Network Measures and Clustering.” Include the formula for association-strength normalization (see van Eck & Waltman, Scientometrics DOI:10.1007/s11192-009-0146-3) and for modularity with resolution γ (see Traag et al., DOI: 10.1038/s41598-019-41695-z). Also define centrality metrics if they are reported.
Now, regarding the Results section, I encourage the authors to structure the section directly around the RQs. If so, the authors can enhance the paper's cohesion by revising the Results -> Discussion section. Explicitly close the loop by adding a subsection titled “Answers to RQ1–RQ3.” Present concise bullets with quantitative evidence for each RQ. Please also temper language—bibliometrics can describe research activity, but cannot, by itself, demonstrate clinical adoption or model performance. Finally, expand the Limitations section (single database, language bias, field delineation, parameter sensitivity).
With these improvements—particularly a PRISMA-compliant methodology, RQ-structured results, a concise equations subsection, and DOI-compliant references—your study will present a transparent and reproducible scientometric map of XAI in healthcare, while also offering unique insights into the enduring role of legacy rule-based systems.
Comments on the Quality of English Language
The manuscript is generally readable, but the English would benefit from a careful line edit to improve clarity and flow. Please standardize terminology (e.g., use “explainable AI (XAI)” consistently), define all acronyms at first mention, and ensure tense and capitalization are uniform. Watch for minor grammar and punctuation issues (article use, subject–verb agreement, hyphenation), and streamline long sentences. Make figure/table captions fully self-contained and align symbols/notation across sections. A light professional copyedit or native-speaker review is recommended.
Author Response
Response to Reviewers
Dear Reviewers,
Thank you for reviewing the manuscript entitled “Reclaiming Explainable Artificial Intelligence (XAI) as an Innovation in Healthcare: Bridging Rule-Based Systems.” We are grateful that you spent your valuable time reviewing our manuscript and providing constructive suggestions. We have carefully considered each comment and addressed it point by point, as shown below. Changes to the manuscript are marked in tracked changes.
Thank you for taking the time to review our manuscript.
Sincerely,
The author
====================================
REVIEWERS’ SUGGESTIONS FOR THE AUTHOR:
Response to Reviewer #2
Point 1:
The manuscript has clear potential to make a valuable contribution, provided several areas are strengthened. I'm sharing the following comments to align with best practices in systematic reviews and bibliometric research. The first area for improvement is that the abstract is currently too general in scope. It should clearly present essential information, including the database(s) used, the search date, the time window, the total number of records analyzed, and the headline results (e.g., top contributing countries or institutions). Including a short note on limitations will also improve transparency.
Response 1:
We thank the reviewer for this valuable comment. We revised the abstract to specify the databases consulted, the search date, the study period, the total number of records analysed, and the key findings, including the leading countries and institutions. We also added a brief statement on the study’s limitations to improve transparency.
Point 2:
Although the three research questions (RQ1–RQ3) are appropriate, the novelty of the study in comparison to recent systematic reviews and bibliometric analyses should be more explicitly stated. For example, similar studies include: Dhiman 2023 (bibliometric analysis of empirical XAI in healthcare, DOI: 10.3390/info14100541) and Shi 2023 (Landscape of AI in medicine, DOI: 10.2196/45815) are the closest methodological comparators to your metadata/VOSviewer approach. You need to: (i) cite them explicitly, (ii) align your research. In addition, Noor 2025 (DOI: 10.1002/widm.70018) synthesized methods, challenges, and directions across clinical areas. These results are close to a general XAI-healthcare mapping. If your paper claims a broad state-of-the-art synthesis, you must position against these and state what you add to the current state of the art.
Response 2:
We appreciate the reviewer’s observation. We revised the introduction and research design sections to include and discuss the studies by Dhiman et al. [13], Shi et al. [17], and Noor et al. [15] in both text and reference list. We clarified how our work differs from and extends these reviews by positioning it as a state-of-the-art synthesis that combines systematic review with bibliometric mapping. Our approach highlights thematic clusters, tracks methodological evolution, and addresses theoretical implications. This framing underscores the novelty of our contribution and situates the paper more firmly within the existing body of literature.
Point 3:
Now, the materials and methods section is the part of the paper where there are many opportunities to improve clarity, transparency, and reproducibility. In its current form, it is not easy to replicate the methodology, so please specify: the exact queries used (with Boolean logic), the search date, and time window; inclusion/exclusion criteria and de-duplication steps. It is also necessary to explain why only Scopus was selected.
Response 3:
We appreciate the reviewer’s observation concerning the need for clarity and reproducibility in the Materials and Methods section. In the revised manuscript (lines 144–153, page 4), we specify the Scopus search query with Boolean logic, the execution date, and the defined time window (2018–2025). We detail the inclusion and exclusion criteria, covering language (English), document type (peer-reviewed journal articles), and thematic scope (XAI in healthcare). We describe the de-duplication procedure, which combined EndNote with manual cross-checking to ensure accuracy. We also justify the exclusive use of Scopus by citing its broad international coverage of peer-reviewed journals and conference proceedings in healthcare and artificial intelligence. This rationale strengthens the consistency and quality of data required for scientometric mapping.
Point 4:
Data cleaning methods (author and affiliation disambiguation, keyword harmonization) serve as a preliminary step for creating a dictionary for VOSviewer; in both cases, include the process to improve transparency.
Response 4:
We thank the reviewer for the observation. In the revised manuscript, we clarify the data-cleaning methods. We explain the disambiguation of author names and affiliations, the harmonisation of keywords, and the construction of a controlled dictionary for VOSviewer. These additions, presented in lines 191–198 (page 5), demonstrate how the procedures strengthen the transparency and reproducibility of the scientometric workflow.
Point 5:
The counting method (full or fractional), normalization approach (e.g., association strength), community detection algorithm (Leiden recommended), and resolution γ parameter. Software tools and versions (e.g., specify the VOSviewer version and also mention the software used for data preprocessing).
Response 5:
We appreciate the reviewer’s constructive suggestion. In the revised manuscript (lines 239–249, page 7), we specify the counting method (full counting), the normalisation approach (association strength), and the community detection algorithm (Leiden) with the applied resolution parameter γ. We also report the software tools and versions, including VOSviewer for network construction and clustering, together with the programme used for data preprocessing. These revisions improve methodological transparency and reinforce the reproducibility of the scientometric workflow.
Point 6:
Following the PRISMA 2020 statement (doi:10.1136/bmj.n71) will help structure these elements.
Response 6:
We thank the reviewer for this helpful pointer. In the revised manuscript, we structured the study identification, screening, and inclusion reporting in accordance with the PRISMA 2020 statement. The resulting workflow is summarised in the PRISMA flow diagram (Figure 1) and described in lines 154–198 (pages 4–5), which enhances the transparency and reproducibility of the review process.
Point 7:
On the other hand, if the authors decide to retain equations, I suggest reorganizing them into a short subsection titled “Network Measures and Clustering.” Include the formula for association-strength normalization (see van Eck & Waltman, Scientometrics DOI:10.1007/s11192-009-0146-3) and for modularity with resolution γ (see Traag et al., DOI: 10.1038/s41598-019-41695-z). Also define centrality metrics if they are reported.
Response 7:
We acknowledge the reviewer’s constructive suggestion. We reorganised the equations into a dedicated subsection entitled Network Measures and Clustering. This subsection presents the formula for association-strength normalisation (van Eck & Waltman, 2009) and the modularity function with resolution γ (Traag et al., 2019). We also define the centrality metrics reported in the study. These revisions, located in lines 239–249 (page 7), enhance clarity and strengthen methodological rigour.
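For reference, the two formulas referred to in this response are commonly written as follows. This is a sketch using the standard notation of the cited sources (van Eck & Waltman, 2009; Traag et al., 2019); the manuscript's own symbols may differ.

```latex
% Association-strength normalisation (van Eck & Waltman, 2009):
% c_{ij} is the number of co-occurrences of items i and j;
% s_i and s_j are the total occurrences of items i and j.
AS_{ij} = \frac{c_{ij}}{s_i \, s_j}

% Modularity with resolution parameter \gamma, as optimised by the
% Leiden algorithm (Traag et al., 2019): A_{ij} is the adjacency matrix,
% k_i the degree of node i, m the total number of edges, and
% \delta(c_i, c_j) equals 1 when nodes i and j are in the same cluster.
Q = \frac{1}{2m} \sum_{i,j}
    \left( A_{ij} - \gamma \, \frac{k_i k_j}{2m} \right)
    \delta(c_i, c_j)
```

Larger values of γ favour more, smaller clusters; γ = 1 recovers standard modularity.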
Point 8:
Now, regarding the Results section, I encourage the authors to structure the section directly around the RQs. If so, the authors can enhance the paper's cohesion by revising the Results -> Discussion section. Explicitly close the loop by adding a subsection titled “Answers to RQ1–RQ3.” Present concise bullets with quantitative evidence for each RQ.
Response 8:
We acknowledge the reviewer’s valuable recommendation. We restructured the Results section to align directly with the research questions. We revised the transition to the Discussion section to reinforce cohesion. We added a dedicated subsection entitled Answers to RQ1–RQ3. This subsection presents concise bullet points supported by quantitative evidence for each research question. These revisions strengthen the logical flow between the Results and Discussion and ensure that the findings explicitly address the research questions.
Point 9:
Please also temper language—bibliometrics can describe research activity, but cannot, by itself, demonstrate clinical adoption or model performance. Finally, expand the Limitations section (single database, language bias, field delineation, parameter sensitivity).
Response 9:
We acknowledge the reviewer’s suggestion. We revised the manuscript language to clarify that bibliometric and scientometric methods identify patterns of research activity but do not demonstrate clinical adoption or model performance. We also expanded the Limitations section to discuss reliance on a single database, the risk of language bias, the challenge of field delineation, and the sensitivity of results to parameter settings in scientometric tools. These revisions appear in lines 623–636 (page 20).
Point 10:
With these improvements—particularly a PRISMA-compliant methodology, RQ-structured results, a concise equations subsection, and DOI-compliant references—your study will present a transparent and reproducible scientometric map of XAI in healthcare, while also offering unique insights into the enduring role of legacy rule-based systems.
Response 10:
We implemented the reviewer’s recommendation. We revised the methodology to ensure compliance with PRISMA guidelines. We restructured the results to align with the research questions. We added a concise subsection on network equations. We updated all references to ensure DOI compliance. These revisions enhance transparency, reproducibility, and thematic clarity. The changes appear in lines 154–198 (pages 4–5).
Point 11:
The manuscript is generally readable, but the English would benefit from a careful line edit to improve clarity and flow. Please standardize terminology (e.g., use “explainable AI (XAI)” consistently), define all acronyms at first mention, and ensure tense and capitalization are uniform. Watch for minor grammar and punctuation issues (article use, subject–verb agreement, hyphenation), and streamline long sentences. Make figure/table captions fully self-contained and align symbols/notation across sections. A light professional copyedit or native-speaker review is recommended.
Response 11:
We implemented the reviewer’s recommendation. Native UK English editors professionally edited the manuscript twice before resubmission. We standardised terminology and defined all acronyms at first mention. We harmonised tense and capitalisation across the text. We corrected grammar, punctuation, and hyphenation. We shortened long sentences to improve clarity. We revised figure and table captions to ensure they are fully self-contained. We also aligned symbols and notation across sections to maintain consistency.
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
All the points from my previous review were accepted, except for the requested comparison table from Section 4, which should help in better grasping the details via a visual summary. Adding this would immensely help the paper towards accessibility to a broader audience.
Author Response
Response to Reviewers
Dear Reviewers,
Thank you for reviewing the manuscript entitled “Reclaiming Explainable Artificial Intelligence (XAI) as an Innovation in Healthcare: Bridging Rule-Based Systems.” We are grateful that you spent your valuable time reviewing our manuscript and providing constructive suggestions. We have carefully considered each comment and addressed it point by point, as shown below. Changes to the manuscript are marked in tracked changes.
Thank you for taking the time to review our manuscript.
Sincerely,
The author
===================================
REVIEWERS’ SUGGESTIONS FOR THE AUTHOR:
Response to Reviewer #1 (round 2)
Point 1:
All the points from my previous review were accepted, except for the requested comparison table from Section 4, which should help in better grasping the details via a visual summary. Adding this would immensely help the paper towards accessibility to a broader audience.
Response 1:
The requested comparison was addressed by integrating the keyword analysis in Table 3 and the cluster network in Table 5 into Section 4. This integration was intended to provide a clearer visual summary and to improve accessibility for a broader audience, as reflected in lines 486–495 on pages 18–19.
Reviewer 2 Report
Comments and Suggestions for Authors
Thanks for the updated manuscript. I re-checked this second version and can confirm the abstract already names Scopus and the 2018–2025 window; this appears explicitly in the abstract text: “…A scientometric analysis was conducted using publications indexed in Scopus between 2018 and 2025…”. That said, the abstract still reads too general for a systematic scientometric study: it does not report the calendar search date, the numbers retrieved and retained, any headline quantitative signals, or a brief limitations clause, even though those details are present later in Methods, where you note 1,304 records retrieved and 654 retained under a PRISMA workflow. Please surface those essentials in the abstract for transparency and quick appraisal.
On novelty, the additions help but remain broad. Your response says the paper is positioned against recent reviews and that it “combines systematic review with bibliometric mapping,” yet many papers do the same. The introduction and RQ section mention an overview of related work rather than stating in plain terms what this study adds beyond it; the distinct value should be the quantitative, RQ-driven lens on reclaiming legacy rule-based systems and a reproducible mapping of that niche. Please make that claim explicit in the introduction so readers can see the contribution at a glance.
Your Boolean query is clearly shown and helpful, but there is a consistency issue to fix. The query uses PUBYEAR > 2018 AND PUBYEAR < 2025, which actually selects 2019–2024, while elsewhere the narrative states a timeframe of 2018–2025, and a later section refers to 2019–2025. Please align narrative and code: either keep an inclusive 2018–2025 window and adjust the query accordingly, or update all prose, figures, and counts to the precise exclusive bounds you truly used.
I see that Figure 1 (the PRISMA flow) is flagged around line 190 and appears low-resolution in this version. Please replace it with a vector/PDF export, ensure that every count is legible, and mirror the exact numbers stated in the text. Immediately after that figure, you describe author and affiliation disambiguation and the creation of a controlled dictionary for VOSviewer, which is excellent for reproducibility. To maximize transparency and traceability, include the thesaurus/dictionary file and a short example of the harmonization table either in an appendix or as supplementary material, and point to it directly where you describe the process.
The “Scientometric Models” subsection is stronger in this version and it formalizes the mapping objective, counting choices, and clustering narrative, including reference to the resolution parameter and the Leiden algorithm. What is still missing are the concrete run settings and software identifiers: please add the specific software and versions you actually used (for example, the VOSviewer release and any preprocessing tools), and state the settings you ran—your chosen counting scheme, the resolution values you adopted, and the minimum occurrence or edge thresholds—so an independent reader can reproduce the exact maps you report.
The three research questions now organize the results, and the discussion and conclusion tie back to them, which improves cohesion and readability. To make the take-home messages unmistakable to skimming readers, consider closing the discussion with a short paragraph that explicitly answers each question in plain language and points to the specific tables or figures that support the statements.
Finally, the data-availability note remains generic and directs users to a policy page rather than to specific materials. Please replace it with a concrete statement appropriate for licensed Scopus data, and link to a public repository (OSF/Zenodo/GitHub) that holds your exact query file, exported and de-duplicated lists where licensing allows, the controlled dictionary, VOSviewer files, scripts, and a brief README. Adding DOIs across the reference list will also make editorial checking straightforward.
Author Response
Response to Reviewers
Dear Reviewers,
Thank you for reviewing the manuscript entitled “Reclaiming Explainable Artificial Intelligence (XAI) as an Innovation in Healthcare: Bridging Rule-Based Systems.” We are grateful that you spent your valuable time reviewing our manuscript and providing constructive suggestions. We have carefully considered each comment and addressed it point by point, as shown below. Changes to the manuscript are marked in tracked changes.
Thank you for taking the time to review our manuscript.
Sincerely,
The author
====================================
REVIEWERS’ SUGGESTIONS FOR THE AUTHOR:
Response to Reviewer #2 (round 2)
Point 1:
Thanks for the updated manuscript. I re-checked this second version and can confirm the abstract already names Scopus and the 2018–2025 window; this appears explicitly in the abstract text ...A scientometric analysis was conducted using publications indexed in Scopus between 2018 and 2025.... That said, the abstract still reads too general for a systematic scientometric study: it does not report the calendar search date, the numbers retrieved and retained, any headline quantitative signals, or a brief limitations clause, even though those details are present later in Methods, where you note 1,304 records retrieved and 654 retained under a PRISMA workflow. Please surface those essentials in the abstract for transparency and quick appraisal.
Response 1:
We thank the reviewer for the valuable comment. We revised the abstract to state the search date, the number of records retrieved (1,304), and the number retained (654) under the PRISMA workflow. We also added a concise limitations clause. These revisions ensure transparency of the search strategy and enable quick appraisal of the scientometric study.
Point 2:
On novelty, the additions help but remain broad. Your response says the paper is positioned against recent reviews and that it “combines systematic review with bibliometric mapping,” yet many papers do the same. The introduction and RQ section mention an overview of related work rather than stating in plain terms what this study adds beyond it; the distinct value should be the quantitative, RQ-driven lens on reclaiming legacy rule-based systems and a reproducible mapping of that niche. Please make that claim explicit in the introduction so readers can see the contribution at a glance.
Response 2:
We revised the introduction as suggested (lines 105–110, page 3). It now states explicitly that the study contributes a quantitative, research-question-driven perspective on the reclamation of legacy rule-based systems and provides a reproducible scientometric mapping of this niche. We also clarified that the approach combines systematic review with scientometric mapping, using the Scopus database for the period 1 January 2018 to 20 May 2025.
Point 3:
Your Boolean query is clearly shown and helpful, but there is a consistency issue to fix. The query uses PUBYEAR > 2018 AND PUBYEAR < 2025, which actually selects 2019–2024, while elsewhere the narrative states a timeframe of 2018–2025, and a later section refers to 2019–2025. Please align narrative and code: either keep an inclusive 2018–2025 window and adjust the query accordingly, or update all prose, figures, and counts to the precise exclusive bounds you truly used.
Response 3:
We appreciate this remark. We revised the years, the Boolean query, and the associated figures and counts throughout the manuscript to ensure consistency. All references were standardised to the inclusive 2018–2025 timeframe, and the query was adjusted accordingly, so that the narrative, data presentation, and visual outputs are now aligned.
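For illustration: because Scopus PUBYEAR comparisons are strict inequalities, an inclusive 2018–2025 window corresponds to bounds such as the following (the search terms shown are a placeholder, not the manuscript's actual query):

```
TITLE-ABS-KEY ( <query terms> ) AND PUBYEAR > 2017 AND PUBYEAR < 2026
```

By contrast, PUBYEAR > 2018 AND PUBYEAR < 2025 restricts results to 2019–2024, which is the inconsistency the reviewer identified.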
Point 4:
I see that Figure 1 (the PRISMA flow) is flagged around line 190 and appears low-resolution in this version. Please replace it with a vector/PDF export, ensure that every count is legible, and mirror the exact numbers stated in the text. Immediately after that figure, you describe author and affiliation disambiguation and the creation of a controlled dictionary for VOSviewer, which is excellent for reproducibility. To maximize transparency and traceability, include the thesaurus/dictionary file and a short example of the harmonization table either in an appendix or as supplementary material, and point to it directly where you describe the process.
Response 4:
We thank the reviewer for this helpful suggestion. Figure 1 was replaced with a high-resolution PDF export to ensure the legibility of all counts and alignment with the text. The thesaurus/dictionary file and an example harmonisation table were provided as supplementary material in the appendix. The manuscript was revised to direct readers to these materials at the point where the author and affiliation disambiguation process is described, thereby strengthening transparency and reproducibility.
Point 5:
The “Scientometric Models” subsection is stronger in this version and it formalizes the mapping objective, counting choices, and clustering narrative, including reference to the resolution parameter and the Leiden algorithm. What is still missing are the concrete run settings and software identifiers: please add the specific software and versions you actually used (for example, the VOSviewer release and any preprocessing tools), and state the settings you ran—your chosen counting scheme, the resolution values you adopted, and the minimum occurrence or edge thresholds—so an independent reader can reproduce the exact maps you report.
Response 5:
Thank you for the observation. The requested details were incorporated into the text at lines 252–265 on page 7. The revised subsection was updated to specify the software and versions employed, including VOSviewer and the preprocessing tools. Exact run settings were also reported, namely the chosen counting scheme, the applied resolution values, and the minimum occurrence and edge thresholds. These additions were implemented to ensure that an independent reader can reproduce the scientometric maps with precision and transparency.
Point 6:
The three research questions now organize the results, and the discussion and conclusion tie back to them, which improves cohesion and readability. To make the take-home messages unmistakable to skimming readers, consider closing the discussion with a short paragraph that explicitly answers each question in plain language and points to the specific tables or figures that support the statements.
Response 6:
We revised the discussion in lines 486–495 on page 19. The revision introduces a closing paragraph that answers each research question in plain language. The paragraph directs readers to the relevant tables and figures that substantiate the claims. This structure ensures that the principal messages remain clear and accessible, even to readers who skim the text.
Point 7:
Finally, the data-availability note remains generic and directs users to a policy page rather than to specific materials. Please replace it with a concrete statement appropriate for licensed Scopus data, and link to a public repository (OSF/Zenodo/GitHub) that holds your exact query file, exported and de-duplicated lists where licensing allows, the controlled dictionary, VOSviewer files, scripts, and a brief README. Adding DOIs across the reference list will also make editorial checking straightforward.
Response 7:
A data-availability statement was added in lines 699–701 on page 23. The statement clarified that the research data were accessed through the Scopus database (https://www.scopus.com/). Readers were also directed to the Open Science Framework repository, where the query file, exported and de-duplicated lists (where licensing permits), the controlled dictionary, VOSviewer files, scripts, and a README were deposited under the registration DOI: https://doi.org/10.17605/OSF.IO/ENBZ2. In addition, DOIs were inserted across the reference list to facilitate editorial checking.