
Artificial Intelligence and Deep Learning in Medical Imaging

A special issue of Journal of Clinical Medicine (ISSN 2077-0383). This special issue belongs to the section "Nuclear Medicine & Radiology".

Deadline for manuscript submissions: 30 October 2025 | Viewed by 4186

Special Issue Editors


Guest Editor
1. Department of Radiology, Bernard and Irene Schwartz Center for Biomedical Imaging, New York, NY, USA
2. Department of Radiology, Center for Advanced Imaging Innovation and Research (CAI2R), New York, NY, USA
Interests: magnetic resonance imaging; radiomics; artificial intelligence (AI); deep learning; clinical and translational medicine

Guest Editor
Department of Electrical Engineering and Information Technology, University of Naples Federico II, Naples, Italy
Interests: basic research; deep learning; SAR; safety analysis; MRI

Special Issue Information

Dear Colleagues,

Artificial Intelligence (AI) and Deep Learning (DL) are redefining medical imaging by providing unprecedented opportunities to improve diagnosis, prognosis, treatment planning, and patient safety. From advanced image segmentation and radiomics to explainable AI and privacy-preserving models, these technologies are addressing long-standing challenges in clinical imaging. However, critical issues such as reproducibility, algorithmic bias, and the safe implementation of AI in clinical workflows remain unresolved.

This Special Issue focuses on the latest advances in AI and DL for medical imaging, emphasizing not only precision and efficiency but also safety and ethical deployment. We welcome submissions exploring topics such as multi-modal data integration, robust feature extraction, federated learning, clinical validation, and methods for ensuring regulatory compliance. Studies that investigate strategies for mitigating risk and improving interpretability to enhance patient safety are especially encouraged.

We invite authors to submit original research articles and reviews on the themes outlined above. This Special Issue seeks to foster interdisciplinary collaboration and drive innovation in safe and effective AI applications for medical imaging.

Dr. Eros Montin
Dr. Giuseppe Carluccio
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Clinical Medicine is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • magnetic resonance imaging
  • radiomics
  • artificial intelligence (AI)
  • deep learning
  • clinical and translational medicine
  • MRI safety
  • radiogenomics
  • multiomics

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

28 pages, 8411 KB  
Article
SEPoolConvNeXt: A Deep Learning Framework for Automated Classification of Neonatal Brain Development Using T1- and T2-Weighted MRI
by Gulay Maçin, Melahat Poyraz, Zeynep Akca Andi, Nisa Yıldırım, Burak Taşcı, Gulay Taşcı, Sengul Dogan and Turker Tuncer
J. Clin. Med. 2025, 14(20), 7299; https://doi.org/10.3390/jcm14207299 - 16 Oct 2025
Viewed by 172
Abstract
Background/Objectives: The neonatal and infant periods represent a critical window for brain development, characterized by rapid and heterogeneous processes such as myelination and cortical maturation. Accurate assessment of these changes is essential for understanding normative trajectories and detecting early abnormalities. While conventional MRI provides valuable insights, automated classification remains challenging due to overlapping developmental stages and sex-specific variability. Methods: We propose SEPoolConvNeXt, a novel deep learning framework designed for fine-grained classification of neonatal brain development using T1- and T2-weighted MRI sequences. The dataset comprised 29,516 images organized into four subgroups (T1 Male, T1 Female, T2 Male, T2 Female), each stratified into 14 age-based classes (0–10 days to 12 months). The architecture integrates residual connections, grouped convolutions, and channel attention mechanisms, balancing computational efficiency with discriminative power. Model performance was compared with 19 widely used pre-trained CNNs under identical experimental settings. Results: SEPoolConvNeXt consistently achieved test accuracies above 95%, substantially outperforming pre-trained CNN baselines (average ~70.7%). On the T1 Female dataset, early stages achieved near-perfect recognition, with slight declines at 11–12 months due to intra-class variability. The T1 Male dataset reached >98% overall accuracy, with challenges in intermediate months (2–3 and 8–9). The T2 Female dataset yielded accuracies between 99.47% and 100%, including categories with perfect F1-scores, whereas the T2 Male dataset maintained strong but slightly lower performance (>93%), especially in later infancy. Combined evaluations across T1 + T2 Female and T1 Male + Female datasets confirmed robust generalization, with most subgroups exceeding 98–99% accuracy. The results demonstrate that domain-specific architectural design enables superior sensitivity to subtle developmental transitions compared with generic transfer learning approaches. The lightweight nature of SEPoolConvNeXt (~9.4 M parameters) further supports reproducibility and clinical applicability. Conclusions: SEPoolConvNeXt provides a robust, efficient, and biologically aligned framework for neonatal brain maturation assessment. By integrating sex- and age-specific developmental trajectories, the model establishes a strong foundation for AI-assisted neurodevelopmental evaluation and holds promise for clinical translation, particularly in monitoring high-risk groups such as preterm infants.
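
The abstract names residual connections, grouped convolutions, and channel attention as the model's building blocks. Since the paper's code is not reproduced here, below is a minimal PyTorch sketch of the generic squeeze-and-excitation (SE) channel-attention pattern that the "SE" in the model's name suggests, together with a grouped convolution; the channel counts and reduction ratio are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Generic squeeze-and-excitation channel attention (illustrative, not the paper's exact block)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global average over H x W
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel gates in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # excite: reweight channels by their learned gates

# Grouped convolution, also cited in the abstract, splits channels into groups
# to cut parameters and FLOPs relative to a dense convolution.
gconv = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=8)
x = torch.randn(2, 64, 56, 56)
print(SEBlock(64)(x).shape, gconv(x).shape)  # both: torch.Size([2, 64, 56, 56])
```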

23 pages, 3359 KB  
Article
Capsule Neural Networks with Bayesian Optimization for Pediatric Pneumonia Detection from Chest X-Ray Images
by Szymon Salamon and Wojciech Książek
J. Clin. Med. 2025, 14(20), 7212; https://doi.org/10.3390/jcm14207212 - 13 Oct 2025
Viewed by 375
Abstract
Background: Pneumonia in children poses a serious threat to life and health, making early detection critically important. In this regard, artificial intelligence methods can provide valuable support. Methods: Capsule networks and Bayesian optimization are modern techniques that were employed to build effective models for predicting pneumonia from chest X-ray images. The medical images underwent essential preprocessing, were divided into training, validation, and testing sets, and were subsequently used to develop the models. Results: The designed capsule neural network model with Bayesian optimization achieved the following final results: an accuracy of 95.1%, sensitivity of 98.9%, specificity of 85.4%, precision (PPV) of 94.8%, negative predictive value (NPV) of 96.2%, F1-score of 96.8%, and a Matthews correlation coefficient (MCC) of 0.877. In addition, the model was complemented with an explainability analysis using Grad-CAM, which demonstrated that its predictions rely predominantly on clinically relevant pulmonary regions. Conclusions: The proposed model demonstrates high accuracy and shows promise for potential use in clinical practice. It may also be applied to other tasks in medical image analysis.
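
The abstract couples a capsule network with Bayesian optimization of its hyperparameters but names neither a library nor a search space. Below is a minimal sketch of how such a search might be wired with Optuna, whose default TPE sampler is a sequential model-based (Bayesian-style) optimizer; `train_capsnet`, the search space, and the synthetic score it returns are hypothetical placeholders, not the authors' setup.

```python
import optuna

def train_capsnet(lr: float, routing_iters: int, batch_size: int) -> float:
    # Hypothetical stand-in for the (unpublished) capsule-network training
    # loop; returns a synthetic "validation accuracy" so the sketch runs.
    return 1.0 / (1.0 + 1e3 * abs(lr - 1e-3) + abs(routing_iters - 3) + batch_size / 256)

def objective(trial: optuna.Trial) -> float:
    lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)
    routing_iters = trial.suggest_int("routing_iters", 1, 5)  # dynamic-routing steps
    batch_size = trial.suggest_categorical("batch_size", [16, 32, 64])
    return train_capsnet(lr, routing_iters, batch_size)

study = optuna.create_study(direction="maximize")  # maximize validation accuracy
study.optimize(objective, n_trials=50)
print(study.best_params)
```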

27 pages, 3948 KB  
Article
Fully Automated Segmentation of Cervical Spinal Cord in Sagittal MR Images Using Swin-Unet Architectures
by Rukiye Polattimur, Emre Dandıl, Mehmet Süleyman Yıldırım and Utku Şenol
J. Clin. Med. 2025, 14(19), 6994; https://doi.org/10.3390/jcm14196994 - 2 Oct 2025
Viewed by 501
Abstract
Background/Objectives: The spinal cord is a critical component of the central nervous system that transmits neural signals between the brain and the body’s peripheral regions through its nerve roots. Despite being partially protected by the vertebral column, the spinal cord remains highly vulnerable to trauma, tumors, infections, and degenerative or inflammatory disorders. These conditions can disrupt neural conduction, resulting in severe functional impairments, such as paralysis, motor deficits, and sensory loss. Therefore, accurate and comprehensive spinal cord segmentation is essential for characterizing its structural features and evaluating neural integrity. Methods: In this study, we propose a fully automated method for segmentation of the cervical spinal cord in sagittal magnetic resonance (MR) images. This method facilitates rapid clinical evaluation and supports early diagnosis. Our approach uses a Swin-Unet architecture, which integrates vision transformer blocks into the U-Net framework. This enables the model to capture both local anatomical details and global contextual information. This design improves the delineation of the thin, curved, low-contrast cervical cord, resulting in more precise and robust segmentation. Results: In experimental studies, the proposed Swin-Unet model (SWU1), which uses transformer blocks in the encoder layer, achieved Dice Similarity Coefficient (DSC) and Hausdorff Distance 95 (HD95) scores of 0.9526 and 1.0707 mm, respectively, for cervical spinal cord segmentation. These results confirm that the model can consistently deliver precise, pixel-level delineations that are structurally accurate, which supports its reliability for clinical assessment. Conclusions: The attention-enhanced Swin-Unet architecture demonstrated high accuracy in segmenting thin and complex anatomical structures, such as the cervical spinal cord. Its ability to generalize with limited data highlights its potential for integration into clinical workflows to support diagnosis, monitoring, and treatment planning.
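
The reported DSC of 0.9526 is the standard overlap score 2|A∩B| / (|A| + |B|) between predicted and reference masks; HD95 is the 95th percentile of boundary distances. A small, self-contained sketch of DSC on binary masks (HD95 is omitted for brevity):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice Similarity Coefficient between two binary masks: 2|A n B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two overlapping square masks (real inputs would be predicted
# and ground-truth cervical cord masks from sagittal MR slices).
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
print(f"DSC = {dice_coefficient(a, b):.4f}")  # 9 shared pixels, 16 each -> 0.5625
```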

14 pages, 1942 KB  
Article
Vocal Fold Disorders Classification and Optimization of a Custom Video Laryngoscopy Dataset Through Structural Similarity Index and a Deep Learning-Based Approach
by Elif Emre, Dilber Cetintas, Muhammed Yildirim and Sadettin Emre
J. Clin. Med. 2025, 14(19), 6899; https://doi.org/10.3390/jcm14196899 - 29 Sep 2025
Viewed by 380
Abstract
Background/Objectives: Video laryngoscopy is one of the primary methods used by otolaryngologists for detecting and classifying laryngeal lesions. However, the diagnostic process of these images largely relies on clinicians’ visual inspection, which can lead to overlooked small structural changes, delayed diagnosis, and interpretation errors. Methods: AI-based approaches are becoming increasingly critical for accelerating early-stage diagnosis and improving reliability. This study proposes a hybrid Convolutional Neural Network (CNN) architecture that eliminates repetitive and clinically insignificant frames from videos, utilizing only meaningful key frames. Video data from healthy individuals, patients with vocal fold nodules, and those with vocal fold polyps were summarized using three different threshold values with the Structural Similarity Index Measure (SSIM). Results: The resulting key frames were then classified using a hybrid CNN. Experimental findings demonstrate that selecting an appropriate threshold can significantly reduce the model’s memory usage and processing load while maintaining accuracy. In particular, a threshold value of 0.90 provided richer information content thanks to the selection of a wider variety of frames, resulting in the highest success rate. Fine-tuning the last 20 layers of the MobileNetV2 and Xception backbones, combined with the fusion of extracted features, yielded an overall classification accuracy of 98%. Conclusions: The proposed approach provides a mechanism that eliminates unnecessary data and prioritizes only critical information in video-based diagnostic processes, thus helping physicians accelerate diagnostic decisions and reduce memory requirements.
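
A plausible reading of the SSIM-based summarization step: compare each frame with the last retained key frame and keep it only when similarity falls below the threshold (e.g., 0.90), so near-duplicate frames are dropped. The sketch below uses scikit-image's `structural_similarity`; the comparison strategy and normalization are assumptions, not the authors' exact pipeline.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def select_key_frames(frames, threshold=0.90):
    """Keep a frame only if its SSIM to the last retained key frame is below
    `threshold`. Frames are 2D grayscale arrays normalized to [0, 1]."""
    if not frames:
        return []
    keep = [0]  # always retain the first frame
    for i in range(1, len(frames)):
        if ssim(frames[keep[-1]], frames[i], data_range=1.0) < threshold:
            keep.append(i)  # low similarity -> new information worth keeping
    return keep

# Toy usage with random frames (real input: video laryngoscopy frames).
rng = np.random.default_rng(0)
frames = [rng.random((64, 64)) for _ in range(10)]
print(select_key_frames(frames, threshold=0.90))
```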

22 pages, 6194 KB  
Article
KidneyNeXt: A Lightweight Convolutional Neural Network for Multi-Class Renal Tumor Classification in Computed Tomography Imaging
by Gulay Maçin, Fatih Genç, Burak Taşcı, Sengul Dogan and Turker Tuncer
J. Clin. Med. 2025, 14(14), 4929; https://doi.org/10.3390/jcm14144929 - 11 Jul 2025
Cited by 2 | Viewed by 1295
Abstract
Background: Renal tumors, encompassing benign, malignant, and normal variants, represent a significant diagnostic challenge in radiology due to their overlapping visual characteristics on computed tomography (CT) scans. Manual interpretation is time consuming and susceptible to inter-observer variability, emphasizing the need for automated, reliable classification systems to support early and accurate diagnosis. Method and Materials: We propose KidneyNeXt, a custom convolutional neural network (CNN) architecture designed for the multi-class classification of renal tumors using CT imaging. The model integrates multi-branch convolutional pathways, grouped convolutions, and hierarchical feature extraction blocks to enhance representational capacity. Transfer learning with ImageNet 1K pretraining and fine tuning was employed to improve generalization across diverse datasets. Performance was evaluated on three CT datasets: a clinically curated retrospective dataset (3199 images), the Kaggle CT KIDNEY dataset (12,446 images), and the KAUH: Jordan dataset (7770 images). All images were preprocessed to 224 × 224 resolution without data augmentation and split into training, validation, and test subsets. Results: Across all datasets, KidneyNeXt demonstrated outstanding classification performance. On the clinical dataset, the model achieved 99.76% accuracy and a macro-averaged F1 score of 99.71%. On the Kaggle CT KIDNEY dataset, it reached 99.96% accuracy and a 99.94% F1 score. Finally, evaluation on the KAUH dataset yielded 99.74% accuracy and a 99.72% F1 score. The model showed strong robustness against class imbalance and inter-class similarity, with minimal misclassification rates and stable learning dynamics throughout training. Conclusions: The KidneyNeXt architecture offers a lightweight yet highly effective solution for the classification of renal tumors from CT images. Its consistently high performance across multiple datasets highlights its potential for real-world clinical deployment as a reliable decision support tool. Future work may explore the integration of clinical metadata and multimodal imaging to further enhance diagnostic precision and interpretability. Additionally, interpretability was addressed using Grad-CAM visualizations, which provided class-specific attention maps to highlight the regions contributing to the model’s predictions.
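
Grad-CAM, cited in the conclusions, weights a convolutional layer's activation maps by the spatially pooled gradients of the target class score. Since KidneyNeXt itself is not public, this sketch applies the generic pattern to a stand-in torchvision ResNet-18 on a random input:

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()  # stand-in backbone, not KidneyNeXt
target_layer = model.layer4                   # last convolutional stage

acts, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224)               # stand-in for a preprocessed CT slice
logits = model(x)
logits[0, logits.argmax()].backward()         # gradient of the top-scoring class

weights = grads["v"].mean(dim=(2, 3), keepdim=True)   # pool gradients over H x W
cam = F.relu((weights * acts["v"]).sum(dim=1))        # weighted sum of activation maps
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                    mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-7)  # normalize to [0, 1]
print(cam.shape)  # torch.Size([1, 1, 224, 224]): per-pixel class attention
```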

21 pages, 5385 KB  
Article
Radiomics for Precision Diagnosis of FAI: How Close Are We to Clinical Translation? A Multi-Center Validation of a Single-Center Trained Model
by Eros Montin, Srikar Namireddy, Hariharan Subbiah Ponniah, Kartik Logishetty, Iman Khodarahmi, Sion Glyn-Jones and Riccardo Lattanzi
J. Clin. Med. 2025, 14(12), 4042; https://doi.org/10.3390/jcm14124042 - 7 Jun 2025
Viewed by 952
Abstract
Background: Femoroacetabular impingement (FAI) is a complex hip disorder characterized by abnormal contact between the femoral head and acetabulum, often leading to joint damage, chronic pain, and early-onset osteoarthritis. Despite MRI being the imaging modality of choice, diagnosis remains challenging due to subjective interpretation, lack of standardized imaging criteria, and difficulty differentiating symptomatic from asymptomatic cases. This study aimed to develop and externally validate radiomics-based machine learning (ML) models capable of classifying healthy, asymptomatic, and symptomatic FAI cases with high diagnostic accuracy and generalizability. Methods: A total of 82 hip MRI datasets (31 symptomatic, 31 asymptomatic, 20 healthy) from a single center were used for training and cross-validation. Radiomic features were extracted from four segmented anatomical regions (femur, acetabulum, gluteus medius, gluteus maximus). A four-step feature selection pipeline was implemented, followed by training 16 ML classifiers. External validation was conducted on a separate multi-center cohort of 185 symptomatic FAI cases acquired with heterogeneous MRI protocols. Results: The best-performing models achieved a cross-validation accuracy of up to 90.9% in distinguishing among healthy, asymptomatic, and symptomatic hips. External validation on the independent multi-center cohort demonstrated 100% accuracy in identifying symptomatic FAI cases. Since this metric reflects performance on symptomatic cases only, it should be interpreted as a detection rate (true positive rate) rather than overall multi-class accuracy. Gini index-based feature selection consistently outperformed F-statistic-based methods across all the models. Conclusions: This is the first study to systematically integrate radiomics and multiple ML models for FAI classification for these three phenotypes, trained on a single-center dataset and externally validated on multi-institutional MRI data. The demonstrated robustness and generalizability of radiomic features support their use in clinical workflows and future large-scale studies targeting standardized, data-driven FAI diagnosis.
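
The head-to-head between Gini index-based and F-statistic-based feature selection can be illustrated with scikit-learn: ANOVA F-scores from `f_classif` versus impurity (Gini) importances from a random forest. The feature matrix below is synthetic, standing in for the study's radiomic features, which are not public:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif

# Synthetic 3-class problem mimicking the study's shape: 82 subjects, many
# radiomic features, three phenotypes (healthy/asymptomatic/symptomatic).
X, y = make_classification(n_samples=82, n_features=100, n_informative=10,
                           n_classes=3, random_state=0)

f_scores, _ = f_classif(X, y)                 # ANOVA F-statistic per feature
top_f = np.argsort(f_scores)[::-1][:10]

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
top_gini = np.argsort(rf.feature_importances_)[::-1][:10]  # Gini importances

print("Top 10 by F-statistic:    ", sorted(top_f.tolist()))
print("Top 10 by Gini importance:", sorted(top_gini.tolist()))
```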
