Artificial Intelligence-Based Medical Imaging Processing

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: 28 February 2026 | Viewed by 7891

Special Issue Editors


Dr. Xin Meng
Guest Editor
Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
Interests: artificial intelligence; medical imaging; computed tomography; computer aided detection; radiomics

Dr. Yuxiang Zhou
Guest Editor
Department of Radiology, Mayo Clinic at Arizona, Phoenix, AZ 85054, USA
Interests: medical imaging; MRI; PET; artificial intelligence; safety; medicine

Special Issue Information

Dear Colleagues,

We are pleased to announce a Special Issue of Bioengineering on "Artificial Intelligence-Based Medical Imaging Processing". This issue aims to highlight groundbreaking advancements, address emerging challenges, and explore the transformative potential of AI in medical imaging, a field that is revolutionizing diagnostics, improving clinical efficiency, and enabling more personalized care. As AI technologies continue to evolve, they hold the potential to reshape the future of healthcare delivery by offering faster, more accurate, and data-driven insights.

AI technologies, such as deep learning, machine learning, and radiomics, are fundamentally changing the way complex imaging data is analyzed. These innovations are paving the way for enhanced disease detection, better risk assessment, and more effective treatment planning. However, significant challenges remain in fully realizing their potential. Key issues include improving the interpretability of AI models, ensuring resilience against biases, addressing data scarcity and diversity, and integrating these technologies seamlessly into clinical workflows. Overcoming these hurdles is critical to ensuring that AI technologies are both effective and equitable in real-world healthcare settings.

This Special Issue welcomes research contributions that focus on AI-driven disease detection and prediction across diverse imaging modalities. We are particularly interested in studies that explore the development of novel AI technologies and quantitative methods for precision medicine. Additionally, we encourage research that addresses the ethical and practical challenges of AI adoption, including enhancing model transparency, reducing disparities in healthcare outcomes, and building trust among clinicians and patients in AI systems.

We invite submissions from researchers, clinicians, and industry professionals that contribute original research, comprehensive reviews, or insightful case studies. By advancing the science of AI in medical imaging, we hope to foster innovation and collaboration across disciplines to tackle the challenges and harness the full potential of AI in healthcare.

Dr. Xin Meng
Dr. Yuxiang Zhou
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website, then proceeding to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • medical imaging
  • disease detection and prediction
  • quantitative analysis
  • clinical integration
  • bias resilience
  • model transparency
  • precision medicine

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (9 papers)


Research

10 pages, 1114 KB  
Article
Toward Supportive Decision-Making for Ureteral Stent Removal: Development of a Morphology-Based X-Ray Analysis
by So Hyeon Lee, Young Jae Kim, Tae Young Park and Kwang Gi Kim
Bioengineering 2025, 12(10), 1084; https://doi.org/10.3390/bioengineering12101084 - 5 Oct 2025
Viewed by 470
Abstract
Purpose: Timely removal of ureteral stents is critical to prevent complications such as infection, discomfort and stent encrustation or fragmentation, as well as stone formation associated with neglected stents. Current decisions, however, rely heavily on subjective interpretation of postoperative imaging. This study introduces a semi-automated image-processing algorithm that quantitatively evaluates stent morphology, aiming to support objective and reproducible decision-making in minimally invasive urological care. Methods: Two computational approaches were developed to analyze morphological changes in ureteral stents following surgery. The first method employed a vector-based analysis, using the FitLine function to derive unit vectors for each stent segment and calculating inter-vector angles. The second method applied a slope-based analysis, computing gradients between coordinate points to evaluate global straightening of the ureter over time. Results: The vector-angle method did not demonstrate significant temporal changes (p = 0.844). In contrast, the slope-based method identified significant ureteral straightening (p < 0.05), consistent with clinical observations. These results confirm that slope-based quantitative analysis provides reliable insight into postoperative morphological changes. Conclusions: This study presents an algorithm-based and reproducible imaging analysis method that enhances objectivity in postoperative assessment of ureteral stents. By aligning quantitative image processing with clinical decision support, the approach contributes to precision medicine and addresses the absence of standardized criteria for stent removal. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Medical Imaging Processing)
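The slope-based analysis described above can be sketched as follows. This is an illustrative NumPy reconstruction, not the authors' code: the spread-of-segment-angles straightness measure and the traced coordinates are assumptions made for the example.

```python
import numpy as np

def slope_spread(points):
    """Straightness proxy: spread of segment orientations along a traced stent.

    points: (N, 2) array of (x, y) coordinates ordered along the stent.
    A perfectly straight stent has the same slope everywhere, so a
    smaller spread indicates a straighter course.
    """
    points = np.asarray(points, dtype=float)
    dx = np.diff(points[:, 0])
    dy = np.diff(points[:, 1])
    # Segment angles (arctan2 avoids division by zero for vertical segments)
    angles = np.arctan2(dy, dx)
    return float(np.std(angles))

# Curved (postoperative day 0) vs. straighter (follow-up) traces
t = np.linspace(0, np.pi, 50)
curved = np.column_stack([t, 0.5 * np.sin(t)])
straighter = np.column_stack([t, 0.05 * np.sin(t)])

assert slope_spread(straighter) < slope_spread(curved)
```

Comparing the spread between serial radiographs of the same patient would then quantify straightening over time, mirroring the paper's per-timepoint comparison.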

23 pages, 347 KB  
Article
Comparative Analysis of Foundational, Advanced, and Traditional Deep Learning Models for Hyperpolarized Gas MRI Lung Segmentation: Robust Performance in Data-Constrained Scenarios
by Ramtin Babaeipour, Matthew S. Fox, Grace Parraga and Alexei Ouriadov
Bioengineering 2025, 12(10), 1062; https://doi.org/10.3390/bioengineering12101062 - 30 Sep 2025
Viewed by 441
Abstract
This study investigates the comparative performance of foundational models, advanced large-kernel architectures, and traditional deep learning approaches for hyperpolarized gas MRI segmentation across progressive data reduction scenarios. Chronic obstructive pulmonary disease (COPD) remains a leading global health concern, and advanced imaging techniques are crucial for its diagnosis and management. Hyperpolarized gas MRI, utilizing helium-3 (³He) and xenon-129 (¹²⁹Xe), offers a non-invasive way to assess lung function. We evaluated foundational models (Segment Anything Model and MedSAM), advanced architectures (UniRepLKNet and TransXNet), and traditional deep learning models (UNet with VGG19 backbone, Feature Pyramid Network with MIT-B5 backbone, and DeepLabV3 with ResNet152 backbone) using four data availability scenarios: 100%, 50%, 25%, and 10% of the full training dataset (1640 2D MRI slices from 205 participants). The results demonstrate that foundational and advanced models achieve statistically equivalent performance across all data scenarios (p > 0.01), while both significantly outperform traditional architectures under data constraints (p < 0.001). Under extreme data scarcity (10% training data), foundational and advanced models maintained DSC values above 0.86, while traditional models experienced catastrophic performance collapse. This work highlights the critical advantage of architectures with large effective receptive fields in medical imaging applications where data collection is challenging, demonstrating their potential to democratize advanced medical imaging analysis in resource-limited settings. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Medical Imaging Processing)
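The Dice Similarity Coefficient (DSC) used to compare the segmentation models above can be computed as follows. A minimal sketch for binary masks; the example masks are illustrative, not study data.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks.

    eps guards against division by zero when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Two 2 x 4 masks that overlap on two pixels: DSC = 2*2 / (4+4) = 0.5
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0]])
target = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0]])
```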

13 pages, 1454 KB  
Article
Predicting Short-Term Outcome of COVID-19 Pneumonia Using Deep Learning-Based Automatic Detection Algorithm Analysis of Serial Chest Radiographs
by Chae Young Lim, Yoon Ki Cha, Kyeongman Jeon, Subin Park, Kyunga Kim and Myung Jin Chung
Bioengineering 2025, 12(10), 1054; https://doi.org/10.3390/bioengineering12101054 - 29 Sep 2025
Viewed by 329
Abstract
This study aimed to evaluate short-term clinical outcomes in COVID-19 pneumonia patients using parameters derived from a commercial deep learning-based automatic detection algorithm (DLAD) applied to serial chest radiographs (CXRs). We analyzed 391 patients with COVID-19 who underwent serial CXRs during isolation at a residential treatment center (median interval: 3.57 days; range: 1.73–5.56 days). Patients were categorized into two groups: the improved group (n = 309), who completed the standard 7-day quarantine, and the deteriorated group (n = 82), who showed worsening symptoms, vital signs, or CXR findings. Using DLAD’s consolidation probability scores and gradient-weighted class activation mapping (Grad-CAM)-based localization maps, we quantified the consolidation area through heatmap segmentation. The weighted area was calculated as the sum of the consolidation regions’ areas, with each area weighted by its corresponding probability score. Change rates (Δ) were defined as per-day differences between consecutive measurements. Prediction models were developed using Cox proportional hazards regression and evaluated daily from day 1 to day 7 after the subsequent CXR acquisition. Among the imaging factors, baseline probability and ΔProbability, ΔArea, and ΔWeighted area were identified as prognostic indicators. The multivariate Cox model incorporating baseline probability and ΔWeighted area demonstrated optimal performance (C-index: 0.75, 95% Confidence Interval: 0.68–0.81; integrated calibration index: 0.03), with time-dependent AUROC (Area Under Receiver Operating Curve) values ranging from 0.74 to 0.78 across daily predictions. These findings suggest that the Δparameters of DLAD can aid in predicting short-term clinical outcomes in patients with COVID-19. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Medical Imaging Processing)
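The weighted area and per-day Δ parameters described above can be sketched as follows. This is an illustrative reading of the paper's definitions, not the commercial DLAD's code; the region areas, probability scores, and interval are made-up values.

```python
import numpy as np

def weighted_area(region_areas, probability_scores):
    """Sum of segmented consolidation-region areas, each weighted by its
    corresponding DLAD probability score."""
    return float(np.dot(region_areas, probability_scores))

def per_day_delta(previous, current, interval_days):
    """Change rate (Δ): per-day difference between consecutive CXR measurements."""
    return (current - previous) / interval_days

# Baseline vs. follow-up CXR, 3.57 days apart (the study's median interval)
wa0 = weighted_area([10.0, 20.0], [0.5, 0.9])   # 5.0 + 18.0 = 23.0
wa1 = weighted_area([14.0, 20.0], [0.8, 0.9])   # 11.2 + 18.0 = 29.2
delta_wa = per_day_delta(wa0, wa1, interval_days=3.57)
```

The resulting ΔWeighted area would then enter the Cox model alongside the baseline probability score.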

22 pages, 4893 KB  
Article
Ultrawidefield-to-Conventional Fundus Image Translation with Scaled Feature Registration and Distorted Vessel Correction
by JuChan Kim, Junghyun Bum, Duc-Tai Le, Chang-Hwan Son, Eun Jung Lee, Jong Chul Han and Hyunseung Choo
Bioengineering 2025, 12(10), 1046; https://doi.org/10.3390/bioengineering12101046 - 28 Sep 2025
Viewed by 310
Abstract
Conventional fundus (CF) and ultrawidefield fundus (UF) imaging are two primary modalities widely used in ophthalmology. Despite the complementary use of both imaging modalities in clinical practice, existing research on fundus image translation has yet to reach clinical viability and often lacks the necessary accuracy and detail required for practical medical use. Additionally, collecting paired UFI-CFI data from the same patients presents significant limitations, and unpaired learning-based generative models frequently suffer from distortion phenomena, such as hallucinations. This study introduces an enhanced modality transformation method to improve the diagnostic support capabilities of deep learning models in ophthalmology. The proposed method translates UF images (UFIs) into CF images (CFIs), potentially replacing the dual-imaging approach commonly used in clinical practice. This replacement can significantly reduce financial and temporal burdens on patients. To achieve this, this study leveraged UFI–CFI image pairs obtained from the same patient on the same day. This approach minimizes information distortion and accurately converts the two modalities. Our model employs scaled feature registration and distorted vessel correction methods to align UFI–CFI pairs effectively. The generated CFIs not only enhance image quality and better represent the retinal area compared to existing methods but also effectively preserve disease-related details from UFIs, aiding in accurate diagnosis. Furthermore, compared with existing methods, our model demonstrated a substantial 18.2% reduction in MSE, a 7.2% increase in PSNR, and a 12.7% improvement in SSIM metrics. Notably, our results show that the generated CFIs are nearly indistinguishable from the real CFIs, as confirmed by ophthalmologists. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Medical Imaging Processing)
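The MSE and PSNR image-quality metrics cited above can be computed as follows. A minimal NumPy sketch; SSIM is omitted here because it requires windowed local statistics.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of the same shape."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.mean((a - b) ** 2))

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means the generated
    image is closer to the reference."""
    m = mse(a, b)
    return float("inf") if m == 0 else float(10.0 * np.log10(max_val ** 2 / m))
```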

16 pages, 551 KB  
Article
The ASC Module: A GPU Memory-Efficient, Physiology-Aware Approach for Improving Segmentation Accuracy on Poorly Contrast-Enhanced CT Scans—A Preliminary Study
by Zuoyuan Zhao, Toru Higaki, Yanlei Gu and Bisser Raytchev
Bioengineering 2025, 12(9), 974; https://doi.org/10.3390/bioengineering12090974 - 12 Sep 2025
Viewed by 515
Abstract
At present, some aging populations, such as those in Japan, face an underlying risk of inadequate medical resources. Using neural networks to assist doctors in locating the aorta in patients via computed tomography (CT) before surgery is a task with practical value. While UNet and some of its derived models are efficient for the semantic segmentation of optimally contrast-enhanced CT images, their segmentation accuracy on poorly or non-contrasted CT images is too low to provide usable results. To solve this problem, we propose a data-processing module based on the physical–spatial structure and anatomical properties of the aorta, which we call the Automatic Spatial Contrast Module. In an experiment using UNet, Attention UNet, TransUNet, and Swin-UNet as baselines, modified versions of these models using the proposed Automatic Spatial Contrast (ASC) Module showed improvements of up to 24.84% in the Intersection-over-Union (IoU) and 28.13% in the Dice Similarity Coefficient (DSC). Furthermore, the proposed approach entails only a small increase in GPU memory when compared with the baseline models. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Medical Imaging Processing)
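The Intersection-over-Union (IoU) metric reported above can be computed as follows. A minimal sketch for binary masks; the example masks are illustrative.

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union for binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, target).sum() / union)

# Intersection 2 pixels, union 6 pixels: IoU = 1/3
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0]])
target = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0]])
```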

21 pages, 5025 KB  
Article
Cascaded Self-Supervision to Advance Cardiac MRI Segmentation in Low-Data Regimes
by Martin Urschler, Elisabeth Rechberger, Franz Thaler and Darko Štern
Bioengineering 2025, 12(8), 872; https://doi.org/10.3390/bioengineering12080872 - 12 Aug 2025
Viewed by 886
Abstract
Deep learning has shown remarkable success in medical image analysis over the last decade; however, many contributions focused on supervised methods which learn exclusively from labeled training samples. Acquiring expert-level annotations in large quantities is time-consuming and costly, even more so in medical image segmentation, where annotations are required on a pixel level and often in 3D. As a result, available labeled training data and consequently performance is often limited. Frequently, however, additional unlabeled data are available and can be readily integrated into model training, paving the way for semi- or self-supervised learning (SSL). In this work, we investigate popular SSL strategies in more detail, namely Transformation Consistency, Student–Teacher and Pseudo-Labeling, as well as exhaustive combinations thereof. We comprehensively evaluate these methods on two 2D and 3D cardiac Magnetic Resonance datasets (ACDC, MMWHS) for which several different multi-compartment segmentation labels are available. To assess performance in limited dataset scenarios, different setups with a decreasing amount of patients in the labeled dataset are investigated. We identify cascaded Self-Supervision as the best methodology, where we propose to employ Pseudo-Labeling and a self-supervised cascaded Student–Teacher model simultaneously. Our evaluation shows that in all scenarios, all investigated SSL methods outperform the respective low-data supervised baseline as well as state-of-the-art self-supervised approaches. This is most prominent in the very-low-labeled data regime, where for our proposed method we demonstrate 10.17% and 6.72% improvement in Dice Similarity Coefficient (DSC) for ACDC and MMWHS, respectively, compared with the low-data supervised approach, as well as 2.47% and 7.64% DSC improvement, respectively, when compared with related work. 
Moreover, in most experiments, our proposed method is able to greatly decrease the performance gap when compared to the fully supervised scenario, where all available labeled samples are used. We conclude that it is always beneficial to incorporate unlabeled data in cardiac MRI segmentation whenever it is present. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Medical Imaging Processing)
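The Student–Teacher and Pseudo-Labeling building blocks that the cascade above combines can be sketched as follows. A simplified illustration on plain arrays, not the authors' training pipeline; the momentum and confidence-threshold values are assumptions.

```python
import numpy as np

def ema_update(teacher, student, momentum=0.99):
    """Student-Teacher: teacher parameters track an exponential moving
    average of the student's parameters after each training step."""
    return momentum * teacher + (1.0 - momentum) * student

def pseudo_labels(probs, threshold=0.9):
    """Pseudo-Labeling: turn softmax outputs on unlabeled data into
    training targets, keeping only sufficiently confident predictions.

    probs: (N, C) class probabilities; returns (labels, keep_mask).
    """
    probs = np.asarray(probs, dtype=float)
    confidence = probs.max(axis=1)
    return probs.argmax(axis=1), confidence >= threshold
```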

19 pages, 2017 KB  
Article
Segmentation of Brain Tumors Using a Multi-Modal Segment Anything Model (MSAM) with Missing Modality Adaptation
by Jiezhen Xing and Jicong Zhang
Bioengineering 2025, 12(8), 871; https://doi.org/10.3390/bioengineering12080871 - 12 Aug 2025
Viewed by 1479
Abstract
This paper presents a novel multi-modal segment anything model (MSAM) for glioma tumor segmentation using structural MRI images and diffusion tensor imaging data. We designed an effective multimodal feature fusion block to effectively integrate features from different modalities of data, thereby improving the accuracy of brain tumor segmentation. We have designed an effective missing modality training method to address the issue of missing modalities in actual clinical scenarios. To evaluate the effectiveness of MSAM, a series of experiments were conducted comparing its performance with U-Net across various modality combinations. The results demonstrate that MSAM consistently outperforms U-Net in terms of both Dice Similarity Coefficient and 95% Hausdorff Distance, particularly when structural modality data are used alone. Through feature visualization and the use of missing modality training, we show that MSAM can effectively adapt to missing data, providing robust segmentation even when key modalities are absent. Additionally, segmentation accuracy is influenced by tumor region size, with smaller regions presenting more challenges. These findings underscore the potential of MSAM in clinical applications where incomplete data or varying tumor sizes are prevalent. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Medical Imaging Processing)
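Missing-modality training of the kind described above is commonly implemented as random modality dropout. A minimal sketch under that assumption; the function, its defaults, and the modality names are hypothetical, not the authors' code.

```python
import numpy as np

def drop_modalities(features, keep_prob=0.8, rng=None):
    """Randomly zero out whole input modalities during training so the
    model learns to segment even when modalities are missing at test time.

    features: dict mapping modality name -> feature array.
    At least one modality is always kept.
    """
    rng = np.random.default_rng(rng)
    names = list(features)
    keep = rng.random(len(names)) < keep_prob
    if not keep.any():
        keep[rng.integers(len(names))] = True  # never drop everything
    return {name: feat if kept else np.zeros_like(feat)
            for name, feat, kept in zip(names, features.values(), keep)}
```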

17 pages, 2864 KB  
Article
A Deep-Learning-Based Diffusion Tensor Imaging Pathological Auto-Analysis Method for Cervical Spondylotic Myelopathy
by Shuoheng Yang, Junpeng Li, Ningbo Fei, Guangsheng Li and Yong Hu
Bioengineering 2025, 12(8), 806; https://doi.org/10.3390/bioengineering12080806 - 27 Jul 2025
Cited by 1 | Viewed by 1003
Abstract
Pathological conditions of the spinal cord have been found to be associated with cervical spondylotic myelopathy (CSM). This study aims to explore the feasibility of automatic deep-learning-based classification of the pathological condition of the spinal cord to quantify its severity. A Diffusion Tensor Imaging (DTI)-based spinal cord pathological assessment method was proposed. A multi-dimensional feature fusion model, referred to as DCSANet-MD (DTI-Based CSM Severity Assessment Network-Multi-Dimensional), was developed to extract both 2D and 3D features from DTI slices, incorporating a feature integration mechanism to enhance the representation of spatial information. To evaluate this method, 176 CSM patients with cervical DTI slices and clinical records were collected. The proposed assessment model demonstrated an accuracy of 82% in predicting two categories of severity levels (mild and severe). Furthermore, in a more refined three-category severity classification (mild, moderate, and severe), using a hierarchical classification strategy, the model achieved an accuracy of approximately 68%, which significantly exceeded the baseline performance. In conclusion, these findings highlight the potential of the deep-learning-based method as a decision-making support tool for DTI-based pathological assessments of CSM, offering great value in monitoring disease progression and guiding the intervention strategies. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Medical Imaging Processing)
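The hierarchical classification strategy above can be sketched as a two-stage decision rule. This is illustrative only: the 0.5 thresholds and the stage outputs are assumptions, not the trained DCSANet-MD model.

```python
def hierarchical_severity(p_mild, p_moderate, threshold=0.5):
    """Two-stage severity scheme: stage 1 separates mild from not-mild;
    stage 2 separates moderate from severe among not-mild cases.

    p_mild: stage-1 probability of 'mild'.
    p_moderate: stage-2 probability of 'moderate', consulted only
    when stage 1 decides not-mild.
    """
    if p_mild >= threshold:
        return "mild"
    return "moderate" if p_moderate >= threshold else "severe"
```

Cascading two binary classifiers this way lets each stage specialize, which is one common motivation for hierarchical schemes in ordinal severity grading.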

15 pages, 8698 KB  
Article
Geometric Self-Supervised Learning: A Novel AI Approach Towards Quantitative and Explainable Diabetic Retinopathy Detection
by Lucas Pu, Oliver Beale and Xin Meng
Bioengineering 2025, 12(2), 157; https://doi.org/10.3390/bioengineering12020157 - 6 Feb 2025
Viewed by 1583
Abstract
Background: Diabetic retinopathy (DR) is the leading cause of blindness among working-age adults. Early detection is crucial to reducing DR-related vision loss risk but is fraught with challenges. Manual detection is labor-intensive and often misses tiny DR lesions, necessitating automated detection. Objective: We aimed to develop and validate an annotation-free deep learning strategy for the automatic detection of exudates and bleeding spots on color fundus photography (CFP) images and ultrawide field (UWF) retinal images. Materials and Methods: Three cohorts were created: two CFP cohorts (Kaggle-CFP and E-Ophtha) and one UWF cohort. Kaggle-CFP was used for algorithm development, while E-Ophtha, with manually annotated DR-related lesions, served as the independent test set. For additional independent testing, 50 DR-positive cases from both the Kaggle-CFP and UWF cohorts were manually outlined for bleeding and exudate spots. The remaining cases were used for algorithm training. A multiscale contrast-based shape descriptor transformed DR-verified retinal images into contrast fields. High-contrast regions were identified, and local image patches from abnormal and normal areas were extracted to train a U-Net model. Model performance was evaluated using sensitivity and false positive rates based on manual annotations in the independent test sets. Results: Our trained model on the independent CFP cohort achieved high sensitivities for detecting and segmenting DR lesions: microaneurysms (91.5%, 9.04 false positives per image), hemorrhages (92.6%, 2.26 false positives per image), hard exudates (92.3%, 7.72 false positives per image), and soft exudates (90.7%, 0.18 false positives per image). For UWF images, the model’s performance varied by lesion size. Bleeding detection sensitivity increased with lesion size, from 41.9% (6.48 false positives per image) for the smallest spots to 93.4% (5.80 false positives per image) for the largest. 
Exudate detection showed high sensitivity across all sizes, ranging from 86.9% (24.94 false positives per image) to 96.2% (6.40 false positives per image), though false positive rates were higher for smaller lesions. Conclusions: Our experiments demonstrate the feasibility of training a deep learning neural network for detecting and segmenting DR-related lesions without relying on their manual annotations. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Medical Imaging Processing)
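The sensitivity and false-positives-per-image evaluation reported above can be sketched as follows. A minimal illustration of the bookkeeping only; matching each detection to a manual annotation is assumed to have been done upstream.

```python
import numpy as np

def lesion_metrics(detected_flags, false_positives_per_image):
    """Lesion-level sensitivity and mean false positives per image.

    detected_flags: one boolean per annotated lesion (detected or missed).
    false_positives_per_image: count of unmatched detections in each image.
    """
    sensitivity = float(np.mean(detected_flags))
    fps = float(np.mean(false_positives_per_image))
    return sensitivity, fps

# 3 of 4 annotated lesions found; 2 spurious detections over 2 images
sens, fps = lesion_metrics([True, True, False, True], [2, 0])
```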
