Artificial Intelligence in Medical Imaging: Innovations and Diagnostic Applications

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: 31 May 2026 | Viewed by 2329

Special Issue Editors

Guest Editor
Bioengineering Department, University of Louisville, Louisville, KY, USA
Interests: deep learning; medical imaging; MRI

Guest Editor
Electrical and Computer Engineering, University of Louisville, Louisville, KY, USA
Interests: image modeling; sensor planning for smart systems; multimodality imaging; face recognition at a distance; non-intrusive sensors; wireless biometric sensor networks

Guest Editor
Bioengineering Department, University of Louisville, Louisville, KY, USA
Interests: computer vision; image processing; robotics; object detection; artificial intelligence; medical imaging; facial biometrics and sensors

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI), and deep learning in particular, has shown impressive performance across a range of medical imaging modalities, including computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET), pathological imaging, and X-ray. In contrast to task-specific models, foundation models may help address the complex, multifaceted problems that arise in clinical practice. In addition, explainable AI promotes reliable, broadly applicable, and clinically interpretable solutions for accurate diagnosis and treatment, bridging the gap between black-box AI models and practical implementation.

Recently, a novel family of artificial intelligence models, Vision-Language Models (VLMs), has combined natural language comprehension with image analysis. Because they are trained on extensive datasets of paired images and text, these models can generate, comprehend, or respond to clinical narratives in addition to interpreting medical images. By connecting visual features to clinical language, VLMs can automate complex tasks such as abnormality identification, image labeling, and retrieving images with similar diagnoses (i.e., cross-modal retrieval).
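At its core, cross-modal retrieval with a VLM reduces to nearest-neighbour search in a shared embedding space: images and text are encoded into the same vector space, and retrieval ranks images by cosine similarity to a text query. The sketch below illustrates only this final search step, with random vectors standing in for the outputs of trained image and text encoders (all values here are invented for demonstration):

```python
import numpy as np

def normalize(v):
    # Project embeddings onto the unit sphere so dot products equal cosine similarity
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def retrieve(query_emb, image_embs, top_k=2):
    # Rank images by cosine similarity to the (text) query embedding
    sims = normalize(image_embs) @ normalize(query_emb)
    return np.argsort(sims)[::-1][:top_k]

# Toy stand-ins for encoder outputs; a real VLM would produce these
# from its trained image and text encoders
rng = np.random.default_rng(0)
image_embs = rng.normal(size=(5, 64))              # 5 images, 64-d embeddings
query = image_embs[3] + 0.1 * rng.normal(size=64)  # query lies near image 3

top = retrieve(query, image_embs)
print(top[0])  # image 3 ranks first
```

In a trained VLM the two encoders are optimized jointly (e.g., with a contrastive objective) so that matching image–text pairs land close together; the retrieval step itself stays this simple.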

This Special Issue invites original research on AI models for 3D medical image analysis. We particularly encourage contributions examining theoretical developments, practical applications, or empirical evaluations that improve clinical applicability and address challenges such as data variability, annotation scarcity, and multi-modal integration. Submissions should highlight techniques that prioritize interpretability, scalability, and robustness.

The topics of interest include, but are not limited to, the following:

  • Advanced multi-modal techniques integrating diverse medical data (MRI, CT, X-ray, ultrasound) for comprehensive analysis.
  • Development of AI, foundation, and vision-language models tailored for medical analysis of 2D or 3D data, and performance enhancement in multi-modal medical imaging.
  • Weakly supervised and semi-supervised learning models for addressing annotation limitations.
  • AI, foundation, and vision-language models for early disease detection and personalized diagnostics.
  • Real-world clinical validation and applications of AI, foundation models, and VLMs in healthcare environments.
  • Explainable AI approaches in medical imaging, including visualization and feature-attribution tools, that make models more interpretable and accessible to clinicians.
  • Development of benchmark datasets and metrics for evaluating AI, foundation models, and VLMs in medical contexts.
  • Radiomics and imaging biomarkers derived from deep learning.
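As one concrete instance of the feature-attribution tools mentioned in the topics above, occlusion sensitivity scores each image region by how much masking it changes a model's output. The sketch below applies the idea to a toy scoring function; the `model` here is a hypothetical stand-in, not any particular diagnostic network:

```python
import numpy as np

def occlusion_map(image, model, patch=4):
    """Attribution by occlusion: slide a masking patch over the image and
    record the drop in the model's score when each region is hidden."""
    base = model(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # mask one region
            heat[i // patch, j // patch] = base - model(occluded)
    return heat

# Toy "model": responds only to the top-left 4x4 corner of the image
model = lambda img: img[:4, :4].sum()

image = np.ones((16, 16))
heat = occlusion_map(image, model)
# Only occluding the top-left patch changes the score; the rest of the
# heat map stays at zero, correctly attributing the decision to that region
print(heat[0, 0], heat.max())
```

The same loop works with any scalar-output classifier; in practice one occludes at several patch sizes and overlays the heat map on the original scan for clinician review.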

Prof. Dr. Ayman El-Baz
Prof. Dr. Moumen Elmelegy
Dr. Asem M. Ali
Dr. Ali H. Mahmoud
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical imaging
  • artificial intelligence
  • disease diagnosis
  • radiomics

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)

Research

19 pages, 1345 KB  
Article
A Novel Dual-Modality Dual-View Hybrid Deep Learning–Machine Learning Framework for the Prediction of Carotid Plaque Vulnerability via Late Fusion
by Wenxuan Zhang, Chao Hou, Xinyi Wang, Hongyu Kang, Shuai Li, Yu Sun, Yongping Zheng, Wei Zhang and Sai-Kit Lam
Diagnostics 2026, 16(5), 807; https://doi.org/10.3390/diagnostics16050807 - 9 Mar 2026
Viewed by 496
Abstract
Background: Ultrasound imaging is an ideal tool for regular carotid plaque screening to identify individuals at high risk of stroke for clinical intervention. However, no existing study leverages multi-modal multi-view ultrasound imaging for AI-enabled auto-classification of carotid plaque vulnerability. This study aims to develop and validate an effective AI model for carotid plaque vulnerability classification through the applications of dual-modal (B-Mode and contrast-enhanced mode) dual-view (longitudinal and cross-sectional) settings to maximize the utility and potential of ultrasound imaging. Methods: Hybrid deep-learning (DL) and machine-learning (ML) methods were employed to balance between model discriminability and interpretability. B-Mode ultrasound (BMUS) and contrast-enhanced ultrasound (CEUS) images from 241 patients were retrospectively analyzed using the proposed hybrid-DL-ML variants. Results: Our findings suggest the hybrid VGG-RF model developed from a dual-modal dual-view setting outperforms those developed from other settings for identifying vulnerable carotid plaques. The VGG-RF model emerged as the best-performing model, achieving an optimal performance with an AUC of 0.908, precision of 0.765, recall of 0.929, specificity of 0.886, and F1 score of 0.839. The inherent interpretability of the VGG-RF model divulged that long-axis views of BMUS and CEUS images were the major contributing features for discriminating vulnerable carotid plaques against their counterparts. Conclusions: The present study underscored the effectiveness of AI models developed from dual-modal dual-view settings of ultrasound images. Notably, the hybrid VGG-RF model was benchmarked as the best-performing model among other studied hybrid DL-ML variants. Further studies on a larger cohort in a prospective setting are warranted to validate the findings of the current study. Full article
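The late-fusion design described in this abstract can be sketched generically: each modality–view branch produces a deep feature vector, the vectors are concatenated, and the fused representation is classified by a random forest. The code below is illustrative scaffolding only, with random vectors standing in for VGG embeddings and synthetic labels; it is not the authors' implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n_patients, feat_dim = 200, 32

# Stand-ins for per-branch deep features:
# {B-mode, CEUS} x {longitudinal, cross-sectional}
branches = {name: rng.normal(size=(n_patients, feat_dim))
            for name in ["bmus_long", "bmus_cross", "ceus_long", "ceus_cross"]}
labels = rng.integers(0, 2, size=n_patients)  # 1 = "vulnerable plaque" (synthetic)

# Late fusion: concatenate branch features, then classify with a random forest,
# whose feature importances also yield the kind of per-branch interpretability
# the study reports
fused = np.concatenate(list(branches.values()), axis=1)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(fused, labels)
probs = clf.predict_proba(fused)[:, 1]
print(fused.shape)  # (200, 128)
```

Late fusion keeps each branch's feature extractor independent, so modalities or views can be added or ablated without retraining the others; the forest's feature importances can then be grouped by branch to ask which view contributed most, as the study does for the long-axis BMUS and CEUS views.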

30 pages, 29830 KB  
Article
From Hematoxylin and Eosin to Masson’s Trichrome: A Comprehensive Framework for Virtual Stain Transformation in Chronic Liver Disease Diagnosis
by Hossam Magdy Balaha, Khadiga M. Ali, Ali Mahmoud, Ahmed Aboudessouki, Mohamed T. Azam, Guruprasad A. Giridharan, Dibson Gondim and Ayman El-Baz
Diagnostics 2026, 16(5), 764; https://doi.org/10.3390/diagnostics16050764 - 4 Mar 2026
Viewed by 576
Abstract
Background/Objectives: Virtual histological staining offers a rapid, cost-effective alternative to physical reprocessing but faces challenges related to spatial misalignment and staining heterogeneity between Hematoxylin and Eosin (H&E) and Masson’s Trichrome (MT) domains. This study develops a robust framework for H&E-to-MT virtual staining to enable accurate fibrosis assessment without additional tissue consumption. Methods: We propose a transformer-based generative adversarial network (TbGAN) supported by a multi-stage alignment pipeline (SIFT (scale-invariant feature transform) coarse alignment, ORB/homography patch registration, and B-spline free-form deformation) and a weighted fusion mechanism combining four configuration outputs (O/10/3, O/3/10, R/10/3, and R/3/10). The framework was validated on 27 whole-slide images (>100,000 aligned patches) through 24 independent experiments. Results: The fused approach achieved state-of-the-art performance: MI = 0.9815 ± 0.0934, SSIM = 0.7474 ± 0.0597, NCC = 0.9320 ± 0.0220, and CS = 0.9946 ± 0.0014. Statistical analysis confirmed enhanced stability through narrower interquartile ranges, fewer outliers, and tighter 95% confidence intervals compared to individual configurations. Qualitative assessment demonstrated preserved collagen morphology critical for fibrosis staging. Conclusions: Our framework provides a reliable, IRB-compliant solution for virtual MT staining that maintains high structural fidelity suitable for diagnostic support. It enables resource-efficient fibrosis quantification and supports integration into clinical digital pathology workflows without patient-specific recalibration. Full article
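The weighted fusion mechanism this abstract describes, combining four configuration outputs into one image, amounts to a normalized weighted sum of the candidate outputs. The sketch below shows only that combining step on toy arrays; the weights and constant "images" are hypothetical placeholders, not the paper's learned values:

```python
import numpy as np

def weighted_fusion(outputs, weights):
    """Fuse several generator outputs into one image via a normalized weighted sum."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the weights sum to 1
    # Contract the weight vector against the stacked outputs: (4,) x (4, H, W) -> (H, W)
    return np.tensordot(w, np.stack(outputs), axes=1)

# Four stand-ins for the configuration outputs (e.g. O/10/3, O/3/10, R/10/3, R/3/10),
# here just constant toy images; the weights are illustrative only
outputs = [np.full((8, 8), v) for v in (1.0, 2.0, 3.0, 4.0)]
fused = weighted_fusion(outputs, [0.4, 0.3, 0.2, 0.1])
print(fused[0, 0])  # 0.4*1 + 0.3*2 + 0.2*3 + 0.1*4 = 2.0
```

Averaging several configurations in this way is a standard route to the narrower interquartile ranges and fewer outliers the study reports: individual configurations' errors partially cancel in the weighted mean.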

26 pages, 5734 KB  
Article
AI-Based Quantitative HRCT for In-Hospital Adverse Outcomes and Exploratory Assessment of Reinfection in COVID-19
by Xin-Yi Feng, Fei-Yao Wang, Si-Yu Jiang, Li-Heng Wang, Xin-Yue Chen, Shi-Bo Tang, Fan Yang and Rui Li
Diagnostics 2025, 15(24), 3156; https://doi.org/10.3390/diagnostics15243156 - 11 Dec 2025
Viewed by 757
Abstract
Background/Objectives: Quantitative computed tomography (CT) metrics are widely used to assess pulmonary involvement and to predict short-term severity in coronavirus disease 2019 (COVID-19). However, it remains unclear whether baseline artificial intelligence (AI)-based quantitative high-resolution computed tomography (HRCT) metrics of pneumonia burden provide incremental prognostic value for in-hospital composite adverse outcomes beyond routine clinical factors, or whether these imaging-derived markers carry any exploratory signal for long-term severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) reinfection among hospitalized patients. Most existing imaging studies have focused on diagnosis and acute-phase prognosis, leaving a specific knowledge gap regarding AI-based quantitative HRCT correlates of early deterioration and subsequent reinfection in this population. We therefore aimed to evaluate whether combining deep learning-derived quantitative HRCT features with clinical factors improves prediction of in-hospital composite adverse events and to explore their association with long-term reinfection in patients with COVID-19 pneumonia. Methods: In this single-center retrospective study, we analyzed 236 reverse-transcription polymerase chain reaction (RT-PCR)-confirmed COVID-19 patients who underwent baseline HRCT. Median follow-up durations were 7.65 days for in-hospital outcomes and 611 days for long-term outcomes. A pre-trained adaptive AI-based prototype model (Siemens Healthineers) was used for pneumonia analysis. Inflammatory lung lesions were automatically segmented, and multiple quantitative metrics were extracted, including opacity score, volume and percentage of opacities and high-attenuation opacities, and mean Hounsfield units (HU) of the total lung and opacity.
Patients were stratified based on receiver operating characteristic (ROC)-derived optimal thresholds, and multivariable Cox regression was used to identify predictors of the composite adverse outcome (intensive care unit [ICU] admission or all-cause death) and SARS-CoV-2 reinfection, defined as a second RT-PCR-confirmed episode of COVID-19 occurring ≥90 days after initial infection. Results: The composite adverse outcome occurred in 38 of 236 patients (16.1%). Higher AI-derived opacity burden was significantly associated with poorer outcomes; for example, opacity score cut-off of 5.5 yielded an area under the ROC curve (AUC) of 0.71 (95% confidence interval [CI] 0.62–0.79), and similar performance was observed for the volume and percentage of opacities and high-attenuation opacities (AUCs up to 0.71; all p < 0.05). After adjustment for age and comorbidities, selected HRCT metrics—including opacity score, percentage of opacities, and mean HU of the total lung (cut-off −662.38 HU; AUC 0.64, 95% CI 0.54–0.74)—remained independently associated with adverse events. Individual predictors demonstrated modest discriminatory ability, with C-indices of 0.59 for age, 0.57 for chronic obstructive pulmonary disease (COPD), 0.62 for opacity score, 0.63 for percentage of opacities, and 0.63 for mean total-lung HU, whereas a combined model integrating clinical and imaging variables improved prediction performance (C-index = 0.68, 95% CI: 0.57–0.80). During long-term follow-up, RT-PCR–confirmed reinfection occurred in 18 of 193 patients (9.3%). Higher baseline CT-derived metrics—particularly opacity score and both volume and percentage of high-attenuation opacities (percentage cut-off = 4.94%, AUC 0.69, 95% CI 0.60–0.79)—showed exploratory associations with SARS-CoV-2 reinfection. However, this analysis was constrained by the very small number of events (n = 18) and wide confidence intervals, indicating substantial statistical uncertainty. 
In this context, individual predictors again showed only modest C-indices (e.g., 0.62 for procalcitonin [PCT], 0.66 for opacity score, 0.66 for the volume and 0.64 for the percentage of high-attenuation opacities), whereas the combined model achieved an apparent C-index of 0.73 (95% CI 0.64–0.83), suggesting moderate discrimination in this underpowered exploratory reinfection sample that requires confirmation in external cohorts. Conclusions: Fully automated, deep learning-derived, quantitative HRCT parameters provide useful prognostic information for early in-hospital deterioration beyond routine clinical factors and offer preliminary, hypothesis-generating insights into long-term reinfection risk. The reinfection-related findings, however, require external validation and should be interpreted with caution given the small number of events and limited precision. In both settings, combining AI-based imaging and clinical variables yields better risk stratification than either modality alone. Full article
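The ROC-derived optimal thresholds used above for patient stratification are commonly chosen by maximizing the Youden index (sensitivity + specificity − 1). The sketch below implements that selection on synthetic scores, not the study's data:

```python
import numpy as np

def youden_threshold(y_true, scores):
    """Pick the cut-off maximizing Youden's J = sensitivity + specificity - 1."""
    thresholds = np.unique(scores)          # candidate cut-offs, sorted ascending
    best_t, best_j = thresholds[0], -1.0
    pos, neg = (y_true == 1), (y_true == 0)
    for t in thresholds:
        pred = scores >= t                  # classify "high risk" at this cut-off
        sens = pred[pos].mean()             # true-positive rate
        spec = (~pred[neg]).mean()          # true-negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Synthetic scores: positives cluster above the negatives
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
s = np.array([1.0, 2.0, 3.0, 5.0, 6.0, 7.0, 8.0, 9.0])
t, j = youden_threshold(y, s)
print(t, j)  # 6.0 1.0 -- a perfectly separating cut-off
```

Dichotomizing a continuous metric at such a cut-off (e.g., the opacity-score cut-off of 5.5 reported above) makes the resulting groups usable as covariates in the multivariable Cox regression, at the cost of some information loss relative to modeling the metric continuously.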
