Artificial Intelligence-Based Medical Imaging Processing

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: 28 February 2026 | Viewed by 2622

Special Issue Editors


Dr. Xin Meng
Guest Editor
Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
Interests: artificial intelligence; medical imaging; computed tomography; computer-aided detection; radiomics

Dr. Yuxiang Zhou
Guest Editor
Department of Radiology, Mayo Clinic in Arizona, Phoenix, AZ 85054, USA
Interests: imaging; MRI; PET; artificial intelligence; safety; medicine

Special Issue Information

Dear Colleagues,

We are pleased to announce a Special Issue of Bioengineering on "Artificial Intelligence-Based Medical Imaging Processing". This issue aims to highlight groundbreaking advancements, address emerging challenges, and explore the transformative potential of AI in medical imaging, a field that is revolutionizing diagnostics, improving clinical efficiency, and enabling more personalized care. As AI technologies continue to evolve, they hold the potential to reshape the future of healthcare delivery by offering faster, more accurate, and data-driven insights.

AI technologies, such as deep learning, machine learning, and radiomics, are fundamentally changing the way complex imaging data is analyzed. These innovations are paving the way for enhanced disease detection, better risk assessment, and more effective treatment planning. However, significant challenges remain in fully realizing their potential. Key issues include improving the interpretability of AI models, ensuring resilience against biases, addressing data scarcity and diversity, and integrating these technologies seamlessly into clinical workflows. Overcoming these hurdles is critical to ensuring that AI technologies are both effective and equitable in real-world healthcare settings.

This Special Issue welcomes research contributions that focus on AI-driven disease detection and prediction across diverse imaging modalities. We are particularly interested in studies that explore the development of novel AI technologies and quantitative methods for precision medicine. Additionally, we encourage research that addresses the ethical and practical challenges of AI adoption, including enhancing model transparency, reducing disparities in healthcare outcomes, and building trust among clinicians and patients in AI systems.

We invite submissions from researchers, clinicians, and industry professionals that contribute original research, comprehensive reviews, or insightful case studies. By advancing the science of AI in medical imaging, we hope to foster innovation and collaboration across disciplines to tackle the challenges and harness the full potential of AI in healthcare.

Dr. Xin Meng
Dr. Yuxiang Zhou
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • medical imaging
  • disease detection and prediction
  • quantitative analysis
  • clinical integration
  • bias resilience
  • model transparency
  • precision medicine

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

21 pages, 5025 KiB  
Article
Cascaded Self-Supervision to Advance Cardiac MRI Segmentation in Low-Data Regimes
by Martin Urschler, Elisabeth Rechberger, Franz Thaler and Darko Štern
Bioengineering 2025, 12(8), 872; https://doi.org/10.3390/bioengineering12080872 - 12 Aug 2025
Viewed by 173
Abstract
Deep learning has shown remarkable success in medical image analysis over the last decade; however, many contributions focused on supervised methods which learn exclusively from labeled training samples. Acquiring expert-level annotations in large quantities is time-consuming and costly, even more so in medical image segmentation, where annotations are required on a pixel level and often in 3D. As a result, available labeled training data and consequently performance is often limited. Frequently, however, additional unlabeled data are available and can be readily integrated into model training, paving the way for semi- or self-supervised learning (SSL). In this work, we investigate popular SSL strategies in more detail, namely Transformation Consistency, Student–Teacher and Pseudo-Labeling, as well as exhaustive combinations thereof. We comprehensively evaluate these methods on two 2D and 3D cardiac Magnetic Resonance datasets (ACDC, MMWHS) for which several different multi-compartment segmentation labels are available. To assess performance in limited dataset scenarios, different setups with a decreasing amount of patients in the labeled dataset are investigated. We identify cascaded Self-Supervision as the best methodology, where we propose to employ Pseudo-Labeling and a self-supervised cascaded Student–Teacher model simultaneously. Our evaluation shows that in all scenarios, all investigated SSL methods outperform the respective low-data supervised baseline as well as state-of-the-art self-supervised approaches. This is most prominent in the very-low-labeled data regime, where for our proposed method we demonstrate 10.17% and 6.72% improvement in Dice Similarity Coefficient (DSC) for ACDC and MMWHS, respectively, compared with the low-data supervised approach, as well as 2.47% and 7.64% DSC improvement, respectively, when compared with related work. 
Moreover, in most experiments, our proposed method is able to greatly decrease the performance gap when compared to the fully supervised scenario, where all available labeled samples are used. We conclude that it is always beneficial to incorporate unlabeled data in cardiac MRI segmentation whenever it is present.
(This article belongs to the Special Issue Artificial Intelligence-Based Medical Imaging Processing)

19 pages, 2017 KiB  
Article
Segmentation of Brain Tumors Using a Multi-Modal Segment Anything Model (MSAM) with Missing Modality Adaptation
by Jiezhen Xing and Jicong Zhang
Bioengineering 2025, 12(8), 871; https://doi.org/10.3390/bioengineering12080871 - 12 Aug 2025
Viewed by 189
Abstract
This paper presents a novel multi-modal segment anything model (MSAM) for glioma tumor segmentation using structural MRI images and diffusion tensor imaging data. We designed a multimodal feature fusion block to effectively integrate features from different modalities of data, thereby improving the accuracy of brain tumor segmentation. We also designed a missing-modality training method to address the issue of missing modalities in actual clinical scenarios. To evaluate the effectiveness of MSAM, a series of experiments were conducted comparing its performance with U-Net across various modality combinations. The results demonstrate that MSAM consistently outperforms U-Net in terms of both Dice Similarity Coefficient and 95% Hausdorff Distance, particularly when structural modality data are used alone. Through feature visualization and the use of missing-modality training, we show that MSAM can effectively adapt to missing data, providing robust segmentation even when key modalities are absent. Additionally, segmentation accuracy is influenced by tumor region size, with smaller regions presenting more challenges. These findings underscore the potential of MSAM in clinical applications where incomplete data or varying tumor sizes are prevalent.

17 pages, 2864 KiB  
Article
A Deep-Learning-Based Diffusion Tensor Imaging Pathological Auto-Analysis Method for Cervical Spondylotic Myelopathy
by Shuoheng Yang, Junpeng Li, Ningbo Fei, Guangsheng Li and Yong Hu
Bioengineering 2025, 12(8), 806; https://doi.org/10.3390/bioengineering12080806 - 27 Jul 2025
Viewed by 395
Abstract
Pathological conditions of the spinal cord have been found to be associated with cervical spondylotic myelopathy (CSM). This study aims to explore the feasibility of automatic deep-learning-based classification of the pathological condition of the spinal cord to quantify its severity. A Diffusion Tensor Imaging (DTI)-based spinal cord pathological assessment method was proposed. A multi-dimensional feature fusion model, referred to as DCSANet-MD (DTI-Based CSM Severity Assessment Network-Multi-Dimensional), was developed to extract both 2D and 3D features from DTI slices, incorporating a feature integration mechanism to enhance the representation of spatial information. To evaluate this method, 176 CSM patients with cervical DTI slices and clinical records were collected. The proposed assessment model demonstrated an accuracy of 82% in predicting two categories of severity levels (mild and severe). Furthermore, in a more refined three-category severity classification (mild, moderate, and severe), using a hierarchical classification strategy, the model achieved an accuracy of approximately 68%, which significantly exceeded the baseline performance. In conclusion, these findings highlight the potential of the deep-learning-based method as a decision-making support tool for DTI-based pathological assessments of CSM, offering great value in monitoring disease progression and guiding intervention strategies.

15 pages, 8698 KiB  
Article
Geometric Self-Supervised Learning: A Novel AI Approach Towards Quantitative and Explainable Diabetic Retinopathy Detection
by Lucas Pu, Oliver Beale and Xin Meng
Bioengineering 2025, 12(2), 157; https://doi.org/10.3390/bioengineering12020157 - 6 Feb 2025
Viewed by 1302
Abstract
Background: Diabetic retinopathy (DR) is the leading cause of blindness among working-age adults. Early detection is crucial to reducing DR-related vision loss risk but is fraught with challenges. Manual detection is labor-intensive and often misses tiny DR lesions, necessitating automated detection. Objective: We aimed to develop and validate an annotation-free deep learning strategy for the automatic detection of exudates and bleeding spots on color fundus photography (CFP) images and ultrawide field (UWF) retinal images. Materials and Methods: Three cohorts were created: two CFP cohorts (Kaggle-CFP and E-Ophtha) and one UWF cohort. Kaggle-CFP was used for algorithm development, while E-Ophtha, with manually annotated DR-related lesions, served as the independent test set. For additional independent testing, 50 DR-positive cases from both the Kaggle-CFP and UWF cohorts were manually outlined for bleeding and exudate spots. The remaining cases were used for algorithm training. A multiscale contrast-based shape descriptor transformed DR-verified retinal images into contrast fields. High-contrast regions were identified, and local image patches from abnormal and normal areas were extracted to train a U-Net model. Model performance was evaluated using sensitivity and false positive rates based on manual annotations in the independent test sets. Results: Our trained model on the independent CFP cohort achieved high sensitivities for detecting and segmenting DR lesions: microaneurysms (91.5%, 9.04 false positives per image), hemorrhages (92.6%, 2.26 false positives per image), hard exudates (92.3%, 7.72 false positives per image), and soft exudates (90.7%, 0.18 false positives per image). For UWF images, the model’s performance varied by lesion size. Bleeding detection sensitivity increased with lesion size, from 41.9% (6.48 false positives per image) for the smallest spots to 93.4% (5.80 false positives per image) for the largest. 
Exudate detection showed high sensitivity across all sizes, ranging from 86.9% (24.94 false positives per image) to 96.2% (6.40 false positives per image), though false positive rates were higher for smaller lesions. Conclusions: Our experiments demonstrate the feasibility of training a deep learning neural network for detecting and segmenting DR-related lesions without relying on their manual annotations.
