Imaging in Healthcare: Progress and Challenges

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Medical Imaging".

Deadline for manuscript submissions: 31 December 2025

Special Issue Editors


Guest Editor
Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA 90089, USA
Interests: machine learning; generative AI; medical imaging; image processing; interpretable models

Guest Editor
Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
Interests: medical image analysis; computer vision; deep learning

Guest Editor
Moverse, 55535 Thessaloniki, Greece
Interests: computer vision; representation learning; motion capture; motion synthesis

Special Issue Information

Dear Colleagues,

The rapid growth of Artificial Intelligence (AI) in recent years has revolutionized image analysis techniques and capabilities. In the healthcare domain, medical imaging is crucial for screening for many diseases as well as for monitoring patients’ progress; consequently, imaging data produced each year account for the largest portion of healthcare data. Different types of imaging provide different clinical information and have their own advantages. Magnetic resonance imaging (MRI), computed tomography (CT), ultrasound, and microscopy-level pathology images of different modalities can all be analyzed by AI algorithms to expedite image reading and to bring greater objectivity and accuracy to diagnosis.

Modern deep learning and Generative AI (GenAI) models have progressed rapidly, enabling new functionalities and applications of AI in medical imaging. Several tools have emerged, including semantic image segmentation, medical image synthesis, modality co-registration, and image enhancement.

Yet several challenges still constrain the clinical application of AI tools in healthcare imaging. Among the main ones are interpretability and transparency in decision making (i.e., reliability), accuracy relative to experienced clinicians, performance when trained on small datasets, and generalization across different clinics and image acquisition protocols. Addressing these challenges will bring AI advancements closer to the actual clinical setting.

We invite contributions presenting novel methods, techniques, new applications, tools, or studies that push forward the frontiers of medical imaging and address the challenges identified above.

Dr. Vasileios Magoulianitis
Dr. Pawan Jogi
Dr. Spyridon Thermos
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical image analysis
  • generative AI
  • trustworthy AI
  • medical image segmentation
  • magnetic resonance imaging (MRI)
  • computerized tomography (CT) scan
  • ultrasound
  • medical image synthesis
  • MRI enhancement

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

17 pages, 1267 KiB  
Article
Prediction of PD-L1 and CD68 in Clear Cell Renal Cell Carcinoma with Green Learning
by Yixing Wu, Alexander Shieh, Steven Cen, Darryl Hwang, Xiaomeng Lei, S. J. Pawan, Manju Aron, Inderbir Gill, William D. Wallace, C.-C. Jay Kuo and Vinay Duddalwar
J. Imaging 2025, 11(6), 191; https://doi.org/10.3390/jimaging11060191 - 10 Jun 2025
Abstract
Clear cell renal cell carcinoma (ccRCC) is the most common type of renal cancer. Extensive efforts have been made to utilize radiomics from computed tomography (CT) imaging to predict tumor immune microenvironment (TIME) measurements. This study proposes a Green Learning (GL) framework for approximating tissue-based biomarkers from CT scans, focusing on the PD-L1 expression and CD68 tumor-associated macrophages (TAMs) in ccRCC. Our approach includes radiomic feature extraction, redundancy removal, and supervised feature selection through a discriminant feature test (DFT), a relevant feature test (RFT), and least-squares normal transform (LNT) for robust feature generation. For the PD-L1 expression in 52 ccRCC patients, treated as a regression problem, our GL model achieved a 5-fold cross-validated mean squared error (MSE) of 0.0041 and a Mean Absolute Error (MAE) of 0.0346. For the TAM population (CD68+/PanCK+), analyzed in 78 ccRCC patients as a binary classification task (at a 0.4 threshold), the model reached a 10-fold cross-validated Area Under the Receiver Operating Characteristic (AUROC) of 0.85 (95% CI [0.76, 0.93]) using 10 LNT-derived features, improving upon the previous benchmark of 0.81. This study demonstrates the potential of GL in radiomic analyses, offering a scalable, efficient, and interpretable framework for the non-invasive approximation of key biomarkers.
(This article belongs to the Special Issue Imaging in Healthcare: Progress and Challenges)
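To make the evaluation setup above concrete, the following hypothetical Python sketch runs a cross-validated radiomics regression on synthetic data. The Green Learning feature tests (DFT, RFT, LNT) are replaced by a generic scikit-learn selector for illustration, so neither the pipeline nor the numbers reproduce the authors' method or results.

# Hypothetical sketch of a cross-validated radiomics regression pipeline.
# The Green Learning feature tests (DFT/RFT/LNT) are replaced by a generic
# selector; data are synthetic placeholders, not the study's cohort.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(52, 200))      # 52 patients x 200 radiomic features (synthetic)
y = rng.uniform(0.0, 0.3, size=52)  # synthetic biomarker expression scores

pipeline = make_pipeline(SelectKBest(f_regression, k=10), Ridge())
scores = cross_val_score(pipeline, X, y, cv=5, scoring="neg_mean_squared_error")
print(f"5-fold cross-validated MSE: {-scores.mean():.4f}")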

19 pages, 1536 KiB  
Article
A Study on Energy Consumption in AI-Driven Medical Image Segmentation
by R. Prajwal, S. J. Pawan, Shahin Nazarian, Nicholas Heller, Christopher J. Weight, Vinay Duddalwar and C.-C. Jay Kuo
J. Imaging 2025, 11(6), 174; https://doi.org/10.3390/jimaging11060174 - 26 May 2025
Abstract
As artificial intelligence advances in medical image analysis, its environmental impact remains largely overlooked. This study analyzes the energy demands of AI workflows for medical image segmentation using the popular Kidney Tumor Segmentation-2019 (KiTS-19) dataset. It examines how training and inference differ in energy consumption, focusing on factors that influence resource usage, such as computational complexity, memory access, and I/O operations. To address these aspects, we evaluated three variants of convolution—Standard Convolution, Depthwise Convolution, and Group Convolution—combined with optimization techniques such as Mixed Precision and Gradient Accumulation. While training is energy-intensive, the recurring nature of inference often results in significantly higher cumulative energy consumption over a model’s life cycle. Depthwise Convolution with Mixed Precision achieves the lowest energy consumption during training while maintaining strong performance, making it the most energy-efficient configuration among those tested. In contrast, Group Convolution fails to achieve energy efficiency due to significant input/output overhead. These findings emphasize the need for GPU-centric strategies and energy-conscious AI practices, offering actionable guidance for designing scalable, sustainable innovation in medical image analysis.
(This article belongs to the Special Issue Imaging in Healthcare: Progress and Challenges)
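For readers unfamiliar with the configurations compared above, the following hypothetical PyTorch sketch pairs a depthwise-separable convolution block with mixed-precision training and gradient accumulation on synthetic data. It illustrates the techniques named in the abstract, not the authors' KiTS-19 code, and the data loader is a synthetic stand-in.

# Hypothetical sketch: depthwise-separable convolution trained with mixed
# precision and gradient accumulation (not the authors' KiTS-19 code).
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # groups=in_ch makes the 3x3 convolution depthwise (one filter per channel)
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(DepthwiseSeparableConv(1, 32), nn.ReLU(), nn.Conv2d(32, 2, 1)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))  # mixed precision
accum_steps = 4                                                 # gradient accumulation

# Synthetic stand-in for a segmentation loader: 1-channel patches with binary masks
loader = [(torch.randn(2, 1, 64, 64), torch.randint(0, 2, (2, 64, 64))) for _ in range(8)]

for step, (images, masks) in enumerate(loader):
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = nn.functional.cross_entropy(model(images.to(device)), masks.to(device))
    scaler.scale(loss / accum_steps).backward()
    if (step + 1) % accum_steps == 0:
        scaler.step(optimizer)   # unscales gradients and takes the optimizer step
        scaler.update()
        optimizer.zero_grad()

Separating the depthwise and pointwise stages is what reduces multiply-accumulate operations relative to a standard 3x3 convolution, which is the lever behind the energy comparison discussed in the abstract.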

21 pages, 17011 KiB  
Article
Three-Blind Validation Strategy of Deep Learning Models for Image Segmentation
by Andrés Larroza, Francisco Javier Pérez-Benito, Raquel Tendero, Juan Carlos Perez-Cortes, Marta Román and Rafael Llobet
J. Imaging 2025, 11(5), 170; https://doi.org/10.3390/jimaging11050170 - 21 May 2025
Abstract
Image segmentation plays a central role in computer vision applications such as medical imaging, industrial inspection, and environmental monitoring. However, evaluating segmentation performance can be particularly challenging when ground truth is not clearly defined, as is often the case in tasks involving subjective interpretation. These challenges are amplified by inter- and intra-observer variability, which complicates the use of human annotations as a reliable reference. To address this, we propose a novel validation framework—referred to as the three-blind validation strategy—that enables rigorous assessment of segmentation models in contexts where subjectivity and label variability are significant. The core idea is to have a third independent expert, blind to the labeler identities, assess a shuffled set of segmentations produced by multiple human annotators and/or automated models. This allows for the unbiased evaluation of model performance and helps uncover patterns of disagreement that may indicate systematic issues with either human or machine annotations. The primary objective of this study is to introduce and demonstrate this validation strategy as a generalizable framework for robust model evaluation in subjective segmentation tasks. We illustrate its practical implementation in a mammography use case involving dense tissue segmentation while emphasizing its potential applicability to a broad range of segmentation scenarios.
(This article belongs to the Special Issue Imaging in Healthcare: Progress and Challenges)
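The blinding step at the core of the strategy above can be illustrated with a short, hypothetical Python sketch: segmentations from several annotators and a model are pooled, shuffled, and assigned anonymous IDs, with the mapping back to their sources kept sealed until the third expert has finished scoring. This is an illustration of the protocol as summarized in the abstract, not the authors' implementation.

# Hypothetical sketch of the blinding step: pool segmentations from human
# annotators and a model, shuffle them, and hide their sources behind
# anonymous IDs until the third expert has scored them.
import random
import uuid

def blind_segmentations(segmentations, seed=None):
    """segmentations: list of (source_name, mask) pairs."""
    items = list(segmentations)
    random.Random(seed).shuffle(items)      # remove any ordering cue
    key, blinded = {}, []
    for source, mask in items:
        anon_id = uuid.uuid4().hex[:8]      # ID shown to the reviewing expert
        key[anon_id] = source               # mapping kept sealed until unblinding
        blinded.append((anon_id, mask))
    return blinded, key

# Example with placeholder masks: three annotators plus one model output
pool = [("annotator_A", "mask_A"), ("annotator_B", "mask_B"),
        ("annotator_C", "mask_C"), ("model", "mask_pred")]
blinded, key = blind_segmentations(pool, seed=42)
# The third expert scores `blinded`; `key` is opened only after scoring.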
