by Stoimen Dimitrov, Simona Bogdanova, Zhaklin Apostolova, et al.

Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

Artificial intelligence (AI) is rapidly transforming rheumatology, particularly in imaging and laboratory diagnostics where data complexity challenges traditional interpretation.

AUTHORS propose a narrative review summarizing current evidence on AI-driven tools across musculoskeletal ultrasound, radiography, MRI, CT, capillaroscopy, and laboratory analytics.

A structured literature search proposed by the AUTHORS (PubMed, Scopus, Web of Science; 2020–2025) identified 87 relevant publications addressing AI applications in diagnostic imaging and biomarker analysis in rheumatic diseases.

Deep learning models, notably convolutional neural networks and vision transformers, have demonstrated expert-level accuracy in detecting synovitis, bone marrow edema, erosions, and interstitial lung disease, as well as in quantifying microvascular and structural damage. In laboratory diagnostics, AI enhances the integration of traditional biomarkers with high-throughput omics, automates serologic interpretation, and supports molecular and proteomic biomarker discovery. Multi-omics and explainable AI platforms increasingly enable precision diagnostics and personalized risk stratification.

AUTHORS conclude that:
  • despite promising performance, widespread implementation is constrained by data heterogeneity, lack of external validation, ethical concerns, and limited workflow integration;
  • clinically meaningful progress will depend on transparent, validated, and interoperable AI systems supported by robust data governance and clinician education;
  • the transition from concept to clinic is under way: AI will likely serve as an augmenting rather than a replacing partner, standardizing interpretation, accelerating decision-making, and ultimately facilitating precision, data-driven rheumatologic care.

 

The study is interesting and adds to the scientific literature.

I have the following comments for the authors:

1) The abstract needs to be better edited. For example, the sentence "A structured literature search (PubMed, Scopus, Web of Science; 2020–2025) identified 87..." should list the databases but not the keywords.

2) The introduction is very concise, missing the key questions that spark the need for the study.

3) Some sections, such as the introduction, are divided into pieces without any clear purpose, which bores the reader.

4) Avoid 2-3 line paragraphs like 2.4.

5) Sections 3-6 are entirely dedicated to the results. They are developed with enthusiasm, and I have no objections. However, to improve readability, I would first include a summary with a flowchart explaining the organization.

6) There is no discussion section with the limitations encountered in the studies and the limitations of the study itself.

Author Response

Comment 1: The abstract needs to be better edited. For example, write "A structured literature search (PubMed, Scopus, Web of Science; 2020–2025) identified 87" but include the databases and not the keywords.

  • We have reviewed the abstract and confirm that the sentence in question already lists the databases (PubMed, Scopus, Web of Science) and the number of publications identified, as is common practice. The search keywords are not detailed in the abstract but are correctly placed in the Methods section. We have slightly revised the sentence for clarity and included a statement regarding the supplementary articles used. The sentence is modified to: "A structured literature search (PubMed, Scopus, Web of Science; 2020–2025) identified 90 relevant publications addressing AI applications in diagnostic imaging and biomarker analysis in rheumatic diseases, while twelve supplementary articles were incorporated to provide contextual depth and support conceptual framing. "

Comment 2: The introduction is very concise, missing the key questions that spark the need for the study.

Comment 3: Some sections, such as the introduction, are divided into pieces without any clear purpose, which bore the reader.

  • We have revised the Introduction (Section 1) to improve its flow and explicitly state the key questions driving this review. We merged the first two paragraphs to create a more cohesive narrative. We also added text to frame the central challenges (e.g., "This data-rich environment raises key questions: How can AI models be reliably applied to these heterogeneous data? What is the current evidence for their accuracy in specific diagnostic tasks, and what barriers prevent their routine use?"). This will lead more logically into the final introductory paragraph which states the review's aim.

Comment 4: Avoid 2-3 line paragraphs like 2.4.

  • This is a valid point for improving readability. We merged the short paragraph "2.4. Timeframe and Language Restrictions" into the next section, "2.5. Study Selection Flow". The heading for 2.4 has been removed, and the subsequent sections have been renumbered.

Comment 5: Sections 3-6 are entirely dedicated to the results... to improve readability, I would first include a summary with a flowchart explaining the organization.

  • To improve organization as suggested, we expanded the final paragraph of the Methods section to explicitly outline the structure of the review. We expanded the text in Section 2.5 to serve as a "roadmap" for the reader. The revised text states: "Findings were summarized thematically. The subsequent review is structured as follows: Section 3 provides a foundational overview of AI in medicine. Section 4 delves into AI applications across key imaging modalities in rheumatology (MSUS, MRI/CT, radiography, and capillaroscopy). Section 5 explores the role of AI in laboratory diagnostics and biomarkers. Finally, Section 6 discusses the integration of these tools into clinical decision support systems.”

Comment 6: There is no discussion section with the limitations encountered in the studies and the limitations of the study itself.

  • We agree this is an important point. The original manuscript's "Challenges and Limitations" section served as a discussion, but we made this more formal and added the limitations of our own review by:
    1. Renaming Section 7 to "7. Discussion: Challenges, Limitations, and Future Perspectives".
    2. Merging the content of Section 8 ("Future Perspectives and Roadmap...") into this new, comprehensive Discussion section.
    3. Adding a new subsection (e.g., "7.9 Limitations of the Review") to explicitly address the limitations of our own study (e.g., its nature as a narrative review, potential for selection bias despite a structured search, and the rapidly evolving field).
    4. Renumbering the final "Conclusions" (Section 9) accordingly (Section 8).

Reviewer 2 Report

Comments and Suggestions for Authors

This review comprehensively synthesizes AI applications across rheumatology diagnostics, highlighting the integration of deep learning models (CNNs, ViTs) for automating imaging assessments (MSUS synovitis scoring, MRI erosion detection, capillaroscopy quantification) and laboratory workflows (multi-omics fusion, serology interpretation), while emphasizing AI-augmented clinical decision support systems for personalized rheumatologic care.

Comments:

  1. The abstract effectively outlines the scope but overgeneralizes AI's clinical readiness without acknowledging domain-specific validation gaps (e.g., capillaroscopy vs. radiography).
  2. Figure 1's timeline omits key milestones like the FDA clearance of specific AI tools in rheumatology (e.g., 2023 approvals for MSUS automation), reducing its comprehensiveness.
  3. The MSUS section cites studies with high accuracy (>86%) but lacks discussion on real-world limitations, such as dependency on standardized probe positioning or image quality variability.
  4. More recent literature should be investigated, such as Robust Exclusive Adaptive Sparse Feature Selection for Biomarker Discovery and Early Diagnosis of Neuropsychiatric Systemic Lupus Erythematosus (MICCAI 2023).
  5. Capillaroscopy AI models (CAPI-Detect, ViT) are well-described, but the review misses comparative analysis of their scalability across low-cost vs. high-end devices in primary care settings.
  6. Ethical concerns about algorithmic bias are mentioned superficially; concrete examples of demographic disparities in training datasets (e.g., underrepresentation of non-European populations) would strengthen the argument.
  7. The multi-omics integration section overlooks challenges in data harmonization, such as batch effects in proteomic workflows or missingness in real-world EHR-linked omics.
  8. Clinical decision support systems are framed optimistically, but practical barriers like EHR interoperability issues (e.g., FHIR standards adoption delays) are underexplored.
  9. The conclusion advocates for "AI-fluent" rheumatologists but does not reference existing curricula or certification initiatives (e.g., EULAR's 2024 AI training modules).
  10. References heavily rely on pre-2023 studies; incorporating 2024–2025 clinical trial results (e.g., NCT060XXXXX on AI-guided treatment escalation) would enhance timeliness.
  11. Technical terms like "MLOps" and "digital twins" are introduced without clarifying their operational relevance to rheumatology workflows, potentially limiting accessibility for clinicians.
 

Author Response

Comment 1: The abstract... overgeneralizes AI's clinical readiness without acknowledging domain-specific validation gaps...

  • We have edited the abstract to be more precise. We modified the limitations sentence in the abstract to read: "Despite promising performance, widespread implementation is constrained by significant domain-specific validation gaps, data heterogeneity, lack of external validation, ethical concerns, and limited workflow integration."

Comment 2: Figure 1's timeline omits key milestones like the FDA clearance of specific AI tools in rheumatology (e.g., 2023 approvals for MSUS automation)...

  • We updated Figure 1. The "Early 2020s" milestone has been revised to: "First clinical applications and regulatory clearances (FDA approvals for MSUS automation) of ML/DL models in imaging and capillaroscopy."

Comment 3: The MSUS section... lacks discussion on real-world limitations, such as dependency on standardized probe positioning or image quality variability.

  • We agree that this is a critical real-world limitation that applies to AI models as well as human operators. We added a concluding sentence to the MSUS section (4.1): "However, the performance of these AI models often remains dependent on standardized image acquisition protocols, and their robustness against real-world image quality variability and differences in probe positioning requires further validation."

Comment 4: More recent literature should be investigated, such as Robust Exclusive Adaptive Sparse Feature Selection for Biomarker Discovery and Early Diagnosis of Neuropsychiatric Systemic Lupus Erythematosus (MICCAI 2023).

  • We incorporated this study into Section 5.1 (Routine Tests). When discussing AI in autoimmune diseases and SLE, we added text noting its relevance to biomarker discovery in neuropsychiatric SLE, along with the new citation.

Comment 5: Capillaroscopy AI models... misses comparative analysis of their scalability across low-cost vs. high-end devices...

  • This is a key point for clinical translation. We expanded Section 4.4.3 to add: "While some systems (like the Manchester System) have demonstrated robustness on low-cost devices, a formal comparative analysis of scalability for other leading models (e.g., CAPI-Detect) across the full spectrum of hardware (from high-end systems to low-cost USB scopes) is lacking and crucial for primary care implementation."

Comment 6: Ethical concerns about algorithmic bias are mentioned superficially; concrete examples of demographic disparities... would strengthen the argument.

  • We revised Section 7.3 (Ethical Dilemmas) to strengthen this point. The text has been amended to: "A significant risk is algorithmic bias... [16] [92]. For example, many foundational imaging and genomic datasets in rheumatology are derived predominantly from cohorts of European ancestry, risking poorer model performance and misdiagnosis in underrepresented patient populations."

Comment 7: The multi-omics integration section overlooks challenges in data harmonization, such as batch effects in proteomic workflows or missingness...

  • We added a concluding sentence to Section 5.3 (Multi-Omics Integration) to specifically address these technical hurdles, including: “However, the practical application of these multi-omic models is significantly hindered by technical challenges, particularly the need for robust data harmonization, the management of batch effects arising from different proteomic and genomic platforms, and strategies for handling the high-dimensional data missingness that is typical of real-world, EHR-linked omics.”
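For readers less familiar with these terms, a minimal illustrative sketch of per-batch harmonization and naive imputation is shown below. It is not taken from the manuscript: the feature names, batch labels, and values are hypothetical, and production pipelines would use dedicated methods (e.g., ComBat-style empirical Bayes correction) rather than this simplified centering-and-scaling step.

# Minimal illustrative sketch (not from the manuscript): naive per-batch
# harmonization of omics features plus simple mean imputation for missingness.
import numpy as np
import pandas as pd

def harmonize_batches(df: pd.DataFrame, batch_col: str = "batch") -> pd.DataFrame:
    """Impute, then center and scale each feature within its acquisition batch."""
    features = df.columns.drop(batch_col)
    out = df.copy()
    # Fill missing values with the per-batch feature mean (toy strategy).
    out[features] = out.groupby(batch_col)[features].transform(
        lambda col: col.fillna(col.mean())
    )
    # Remove batch-specific location/scale effects per feature.
    out[features] = out.groupby(batch_col)[features].transform(
        lambda col: (col - col.mean()) / (col.std(ddof=0) + 1e-9)
    )
    return out

# Toy example: two proteomic "batches" with an artificial offset in batch B.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(6, 3)), columns=["prot1", "prot2", "prot3"])
df.loc[3:, :] += 2.0               # simulated batch effect
df["batch"] = ["A"] * 3 + ["B"] * 3
df.loc[1, "prot2"] = np.nan        # simulated missingness
print(harmonize_batches(df))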

Comment 8: Clinical decision support systems... practical barriers like EHR interoperability issues (e.g., FHIR standards adoption delays) are underexplored.

  • We added a sentence to Section 6.3 (Integration into Electronic Health Records) to highlight these specific practical barriers, including "However, this step remains a major practical hurdle, impeded by persistent EHR interoperability challenges and the slow adoption of modern data standards like Fast Healthcare Interoperability Resources (FHIR), which are necessary for seamless data exchange between AI tools and clinical systems."
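For illustration only (not part of the manuscript), the sketch below shows what a single laboratory result might look like as a FHIR Observation resource, the kind of structured payload such standards enable AI tools and EHRs to exchange. The patient reference and the measured value are hypothetical placeholders.

# Illustrative sketch only: a C-reactive protein result expressed as a FHIR
# Observation resource (JSON-like Python dict). The patient reference and
# value are hypothetical; real systems would populate them from the EHR.
import json

crp_observation = {
    "resourceType": "Observation",
    "status": "final",
    "category": [{
        "coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "laboratory",
        }]
    }],
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "1988-5",  # C reactive protein [Mass/volume] in Serum or Plasma
            "display": "C reactive protein",
        }]
    },
    "subject": {"reference": "Patient/example-id"},  # hypothetical placeholder
    "valueQuantity": {
        "value": 12.4,
        "unit": "mg/L",
        "system": "http://unitsofmeasure.org",
        "code": "mg/L",
    },
}

print(json.dumps(crp_observation, indent=2))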

Comment 9: The conclusion advocates for "AI-fluent" rheumatologists but does not reference existing curricula or certification initiatives (e.g., EULAR's 2024 AI training modules).

  • This is a valuable addition to make the "Future Perspectives" section more concrete. We conducted a thorough search for 'EULAR's 2024 AI training modules.' While we did not find a formal course by that specific name, our search confirmed that EULAR made AI education a significant priority in 2024, hosting several key webinars on the topic (e.g., 'Machine learning in rheumatology' which is further cited in the revised manuscript).

To strongly support this point, we have incorporated a reference to an abstract from the EULAR 2024 Congress (AB1347). This survey of German rheumatologists explicitly found that 84% of participants would appreciate dedicated AI training, and its conclusion directly states that 'dedicated AI training for rheumatologists is needed'. We believe this addition, which has been incorporated into the newly restructured Discussion section (formerly Section 8.3), directly addresses the reviewer's recommendation.

Comment 10: References heavily rely on pre-2023 studies; incorporating 2024–2025 clinical trial results... would enhance timeliness.

  • Our structured literature search was explicitly designed to capture the most recent evidence, covering January 2020 to June 2025, and identified 90 relevant publications and 12 supplementary articles. As a result, the review already includes numerous citations from 2024 and 2025 which reflect the latest clinical trial data and perspectives. We have also added new references from 2023-2024 based on other reviewer suggestions (e.g., the MICCAI 2023 paper, a systematic review from 2024, etc.).

Comment 11: Technical terms like "MLOps" and "digital twins" are introduced without clarifying their operational relevance...

  • We provided brief, in-text definitions for these terms. In Section 8.1, we defined MLOps: "...governance (e.g., MLOps, which refers to the operational management of production-level AI models to ensure performance and safety over time)..." In Section 8.4, we defined digital twins: "AI-driven 'digital twins' (i.e., dynamic, virtual representations of a patient's physiology built from multi-modal data)..."

Reviewer 3 Report

Comments and Suggestions for Authors

This paper investigates the transition to artificial intelligence in imaging and laboratory diagnostics in rheumatology. The review looks sound; however, there are some concerns, and it needs further refinement before it can be considered for publication.
- More explanation is needed of the main challenges and characteristics of this research direction.
- More studies on computer vision could be considered in the related work, e.g., Collaborative compensative transformer network for salient object detection; A systematic review and identification of the challenges of deep learning techniques for undersampled magnetic resonance image reconstruction.
- The limitations and potential future work should be comprehensively discussed.
- Are there valid public representative datasets in this research direction?
- It would be better to further refine the writing of the paper.

Comments on the Quality of English Language

minor revision

Author Response

Comment 1: More explanations are needed on the main challenge and characteristics in this research direction.

  • As noted in our response to Reviewer 1 (Comments 2 & 3), we have revised the Introduction (Section 1) to more clearly articulate the main challenges that necessitate AI (e.g., data complexity, observer variability). We have also strengthened Section 7 (now "Discussion: Challenges, Limitations, and Future Perspectives") to provide a comprehensive overview of the technical, validation, and ethical barriers that characterize this research field.

Comment 2: More studies on computer vision could be considered... e.g. ... a systematic review and identification of the challenges of deep learning techniques for undersampled magnetic resonance image reconstruction.

  • While our review focuses on clinically applied AI in rheumatology, the technical foundations are crucial, and the suggested review on MRI reconstruction is highly relevant. We incorporated the suggested review on "undersampled magnetic resonance image reconstruction" into Section 4.2 (Magnetic resonance imaging and computed tomography). We added text noting that AI can enhance the image acquisition process itself, for example by helping to reconstruct images from undersampled MRI data, which has high relevance for reducing scan times.
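For context, a minimal sketch of what undersampling and a naive "zero-filled" reconstruction look like is given below. It is purely illustrative (synthetic data, NumPy only) and is not drawn from the cited review; deep learning reconstruction methods aim to remove the aliasing artifacts that this naive inverse FFT leaves behind.

# Illustrative sketch: 2x undersampled k-space and zero-filled reconstruction.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((128, 128))                  # stand-in for a fully sampled image
k_space = np.fft.fftshift(np.fft.fft2(image))   # simulated fully sampled k-space

# Keep only every other phase-encoding line (2x undersampling).
mask = np.zeros_like(k_space, dtype=bool)
mask[::2, :] = True
undersampled = np.where(mask, k_space, 0)

# Naive reconstruction: inverse FFT of the zero-filled k-space (aliased result).
zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
print(zero_filled.shape)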

Comment 3: The limitation and potential future work should be comprehensively discussed.

  • As addressed in our response to Reviewer 1 (Comment 6), we have restructured the manuscript to create a comprehensive "7. Discussion: Challenges, Limitations, and Future Perspectives" section. This section now includes a dedicated subsection on the limitations of our review itself, alongside a thorough discussion of the field's challenges and future directions.

Comment 4: Are there valid public representative datasets in this research direction.

  • This is a critical barrier, which we have highlighted more explicitly. We expanded Section 7.1 (Technical and Methodological Barriers) to explicitly state that a "scarcity of large, publicly available, and representative datasets specifically for rheumatology is a major challenge that hinders the benchmarking and external validation of new models."

Comment 5: It is better to further refine the paper writing.

  • We have thoroughly reviewed the entire manuscript for clarity, flow, and grammatical accuracy, incorporating all specific feedback from the reviewers to refine the writing.

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

Accept as it is.

Reviewer 3 Report

Comments and Suggestions for Authors

No further question.