Medical Imaging Analysis: Current and Future Trends

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: 31 January 2026 | Viewed by 6374

Special Issue Editors


Guest Editor
Department of Eye and Vision Sciences, University of Liverpool, Liverpool, UK
Interests: computer vision; artificial intelligence (AI); multi-modality learning; disease progression monitoring; human–computer interaction

Guest Editor
Center for Advanced Intelligence Project, RIKEN, Tokyo, Japan
Interests: convolutional neural networks; computer vision; medical imaging

Special Issue Information

Dear Colleagues,

Medical image analysis is a rapidly advancing field that leverages deep learning and machine learning to enhance computer-aided diagnostic accuracy and support clinicians in treatment planning. From traditional image processing techniques to cutting-edge deep learning models and foundation models, the field has achieved significant progress. This Special Issue aims to explore the latest techniques and future trends in medical image analysis, with a focus on innovations that bridge the gap between technological advancements and clinical practice.

Over the past decade, deep learning-based algorithms have driven the development of state-of-the-art methods for medical image analysis, including applications in disease prediction and anatomical segmentation. With the advent of large-scale and foundation models, multi-modality learning has enabled researchers to leverage diverse data sources, further enhancing diagnostic capabilities. Current research trends are shifting towards disease progression prediction and monitoring, offering critical insights into disease development over time. Additionally, there is increasing emphasis on the interpretability of AI models, which is crucial for their adoption in clinical workflows. Key challenges remain in integrating medical image analysis algorithms into real-world applications, particularly in designing effective human–computer interaction systems that support clinicians in decision-making.

This Special Issue, "Medical Imaging Analysis: Current and Future Trends", aims to provide a venue for presenting recent findings in the field of medical image analysis. Researchers, healthcare professionals, and AI practitioners are encouraged to submit papers on applications of medical image analysis to disease detection, diagnosis, and prognosis. The topics of interest for this Special Issue include, but are not limited to, the following:

  1. Deep learning and foundation models for medical image analysis;
  2. Interpretable and trustworthy AI for medical imaging;
  3. Multi-modality data learning;
  4. Disease progression prediction and monitoring;
  5. Computer-aided diagnosis systems;
  6. Human-computer interaction for healthcare.

Dr. He Zhao
Dr. Lin Gu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computer vision
  • artificial intelligence (AI)
  • multi-modality learning
  • disease progression monitoring
  • explainable AI
  • human-computer interaction

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

17 pages, 5124 KB  
Article
Self-Attention Diffusion Models for Zero-Shot Biomedical Image Segmentation: Unlocking New Frontiers in Medical Imaging
by Abderrachid Hamrani and Anuradha Godavarty
Bioengineering 2025, 12(10), 1036; https://doi.org/10.3390/bioengineering12101036 - 27 Sep 2025
Viewed by 374
Abstract
Producing high-quality segmentation masks for medical images is a fundamental challenge in biomedical image analysis. Recent research has investigated supervised learning with large volumes of labeled data to improve segmentation across medical imaging modalities, as well as unsupervised learning with unlabeled data to segment without detailed annotations. However, a significant hurdle remains in constructing a model that can segment diverse medical images in a zero-shot manner without any annotations. In this work, we introduce the attention diffusion zero-shot unsupervised system (ADZUS), a new method that uses self-attention diffusion models to segment biomedical images without needing any prior labels. This method combines self-attention mechanisms, which enable context-aware and detail-sensitive segmentation, with the strengths of a pre-trained diffusion model. The experimental results show that ADZUS outperformed state-of-the-art models on various medical imaging datasets, including skin lesions, chest X-ray infections, and white blood cell segmentation, achieving Dice scores ranging from 88.7% to 92.9% and IoU scores from 66.3% to 93.3%. The success of ADZUS in zero-shot settings could lower data-labeling costs and ease adaptation to new medical imaging tasks, improving the diagnostic capabilities of AI-based medical imaging technologies. Full article
(This article belongs to the Special Issue Medical Imaging Analysis: Current and Future Trends)
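The Dice and IoU scores reported above follow the standard overlap definitions for binary masks; a minimal sketch for reference (standard formulas, not code from the paper; function names are illustrative):

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    # Dice coefficient: 2|A ∩ B| / (|A| + |B|) on boolean masks
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-8):
    # Intersection over Union: |A ∩ B| / |A ∪ B| on boolean masks
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# Two toy 1-D masks sharing one of three foreground pixels
pred = np.array([1, 1, 0, 0], dtype=bool)
target = np.array([0, 1, 1, 0], dtype=bool)
print(dice_score(pred, target))  # ≈ 0.5
print(iou_score(pred, target))   # ≈ 0.333
```

Because Dice weights the intersection twice, it is always at least as large as IoU on the same pair of masks, which is why the two score ranges in the abstract differ.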

17 pages, 5300 KB  
Article
Multimodal Integration Enhances Tissue Image Information Content: A Deep Feature Perspective
by Fatemehzahra Darzi and Thomas Bocklitz
Bioengineering 2025, 12(8), 894; https://doi.org/10.3390/bioengineering12080894 - 21 Aug 2025
Viewed by 671
Abstract
Multimodal imaging techniques have the potential to enhance the interpretation of histology by offering additional molecular and structural information beyond that accessible through hematoxylin and eosin (H&E) staining alone. Here, we present a quantitative approach for comparing the information content of different image modalities, such as H&E and multimodal imaging. We used a combination of deep learning and radiomics-based feature extraction with different information markers, implemented in Python 3.12, to compare the information content of the H&E stain, multimodal imaging, and the combined dataset. We also compared the information content of individual channels in the multimodal image and of different Coherent Anti-Stokes Raman Scattering (CARS) microscopy spectral channels. The quantitative measurements of information that we utilized were Shannon entropy, inverse area under the curve (1-AUC), the number of principal components describing 95% of the variance (PC95), and inverse power law fitting. For example, the combined dataset achieved an entropy value of 0.5740, compared to 0.5310 for H&E and 0.5385 for the multimodal dataset using MobileNetV2 features. The number of principal components required to explain 95 percent of the variance was also highest for the combined dataset, with 62 components, compared to 33 for H&E and 47 for the multimodal dataset. These measurements consistently showed that the combined datasets provide more information. These observations highlight the potential of multimodal combinations to enhance image-based analyses and provide a reproducible framework for comparing imaging approaches in digital pathology and biomedical image analysis. Full article
(This article belongs to the Special Issue Medical Imaging Analysis: Current and Future Trends)
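Two of the information markers above, Shannon entropy and PC95, have standard definitions that can be sketched briefly; this is not the authors' implementation (the histogram binning and the normalization by log2 of the bin count are assumptions for illustration):

```python
import numpy as np

def shannon_entropy(x, bins=64):
    # Histogram-based Shannon entropy of a feature array,
    # normalized by log2(bins) so the result lies in [0, 1]
    hist, _ = np.histogram(np.ravel(x), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # 0 * log(0) is defined as 0
    return float(-np.sum(p * np.log2(p)) / np.log2(bins))

def pc95(features):
    # Number of principal components needed to explain 95% of the
    # variance; features: (n_samples, n_features) deep-feature matrix
    X = features - features.mean(axis=0)
    s = np.linalg.svd(X, compute_uv=False)   # singular values, descending
    ratio = np.cumsum(s**2) / np.sum(s**2)   # cumulative explained variance
    return int(np.searchsorted(ratio, 0.95) + 1)
```

Under these definitions, a richer (combined) feature set spreads variance over more directions, which is why its PC95 count and entropy come out higher than for either modality alone.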

21 pages, 2629 KB  
Article
From Pixels to Precision—A Dual-Stream Deep Network for Pathological Nuclei Segmentation
by Rashid Nasimov, Kudratjon Zohirov, Adilbek Dauletov, Akmalbek Abdusalomov and Young Im Cho
Bioengineering 2025, 12(8), 868; https://doi.org/10.3390/bioengineering12080868 - 12 Aug 2025
Viewed by 779
Abstract
Segmenting cell nuclei in histopathological images is a critically important step in computational pathology, affecting not only the accuracy of disease diagnosis but also large-scale biomarker analysis and cell assessment. Although many deep learning models can extract global and local features, it remains difficult to balance semantic context against fine boundary precision, especially when nuclei overlap or are deformed. In this paper, we propose a novel deep learning model named Dual-Stream HyperFusionNet (DS-HFN), which explicitly represents global contextual and boundary-sensitive features for robust nuclei segmentation by first decoupling and then fusing them. The dual-stream encoder in DS-HFN simultaneously acquires semantic and edge-focused features, which are then combined by the attention-driven HyperFeature Embedding Module (HFEM). Additionally, a dual-decoder design, together with a gradient-aligned loss function, promotes structural precision by making the predicted segmentation gradients consistent with ground-truth contours. On benchmark datasets such as TNBC and MoNuSeg, DS-HFN not only outperforms 30 state-of-the-art models across all evaluation metrics but is also less computationally expensive. These findings indicate that DS-HFN enables accurate nuclei segmentation, which is essential for clinical diagnosis and biomarker analysis, across a wide range of tissues in digital pathology. Full article
(This article belongs to the Special Issue Medical Imaging Analysis: Current and Future Trends)

18 pages, 1850 KB  
Article
Cross-Subject Motor Imagery Electroencephalogram Decoding with Domain Generalization
by Yanyan Zheng, Senxiang Wu, Jie Chen, Qiong Yao and Siyu Zheng
Bioengineering 2025, 12(5), 495; https://doi.org/10.3390/bioengineering12050495 - 7 May 2025
Viewed by 1296
Abstract
Decoding motor imagery (MI) electroencephalogram (EEG) signals in the brain–computer interface (BCI) can assist patients in accelerating motor function recovery. To realize the implementation of plug-and-play functionality for MI-BCI applications, cross-subject models are employed to alleviate time-consuming calibration and avoid additional model training for target subjects by utilizing EEG data from source subjects. However, the diversity in data distribution among subjects limits the model’s robustness. In this study, we investigate a cross-subject MI-EEG decoding model with domain generalization based on a deep learning neural network that extracts domain-invariant features from source subjects. Firstly, a knowledge distillation framework is adopted to obtain the internally invariant representations based on spectral features fusion. Then, the correlation alignment approach aligns mutually invariant representations between each pair of sub-source domains. In addition, we use distance regularization on two kinds of invariant features to enhance generalizable information. To assess the effectiveness of our approach, experiments are conducted on the BCI Competition IV 2a and the Korean University dataset. The results demonstrate that the proposed model achieves 8.93% and 4.4% accuracy improvements on two datasets, respectively, compared with current state-of-the-art models, confirming that the proposed approach can effectively extract invariant features from source subjects and generalize to the unseen target distribution, hence paving the way for effective implementation of the plug-and-play functionality in MI-BCI applications. Full article
(This article belongs to the Special Issue Medical Imaging Analysis: Current and Future Trends)
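The correlation alignment step mentioned above is commonly implemented as a CORAL-style loss that matches second-order statistics between domains; a minimal NumPy sketch under that assumption (not the authors' code; `coral_loss` is an illustrative name):

```python
import numpy as np

def coral_loss(Xs, Xt):
    # CORAL: squared Frobenius distance between source and target
    # feature covariances, normalized by 4 d^2 (d = feature dimension)
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False)
    Ct = np.cov(Xt, rowvar=False)
    return float(np.sum((Cs - Ct) ** 2) / (4.0 * d * d))
```

Aligning each pair of sub-source domains, as described above, would amount to summing this loss over all domain pairs during training.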

18 pages, 3481 KB  
Article
Assessment of Urethral Elasticity by Shear Wave Elastography: A Novel Parameter Bridging a Gap Between Hypermobility and ISD in Female Stress Urinary Incontinence
by Desirèe De Vicari, Marta Barba, Clarissa Costa, Alice Cola and Matteo Frigerio
Bioengineering 2025, 12(4), 373; https://doi.org/10.3390/bioengineering12040373 - 1 Apr 2025
Cited by 2 | Viewed by 1277
Abstract
Stress urinary incontinence (SUI) results from complex anatomical and functional interactions, including urethral mobility, muscle activity, and pelvic floor support. Despite advancements in imaging and electrophysiology, a comprehensive model remains elusive. This study employed shear wave elastography (SWE), incorporating sound touch elastography (STE) and sound touch quantification (STQ) with acoustic radiation force impulse (ARFI) technology, to assess urethral elasticity and bladder neck descent (BND) in women with SUI and continent controls. Between October 2024 and January 2025, 30 women (15 with SUI, 15 controls) underwent transperineal and intravaginal ultrasonography at IRCCS San Gerardo. Statistical analysis, conducted using JMP 17, revealed significantly greater BND in the SUI group (21.8 ± 7.8 mm vs. 10.5 ± 5 mm) and increased urethral stiffness (Young’s modulus: middle urethra, 57.8 ± 15.6 kPa vs. 30.7 ± 6.4 kPa; p < 0.0001). Mean urethral pressure was the strongest predictor of SUI (p < 0.0001). Findings emphasize the role of urethral support and connective tissue integrity in continence. By demonstrating SWE’s diagnostic utility, this study provides a foundation for personalized, evidence-based approaches to SUI assessment and management. Full article
(This article belongs to the Special Issue Medical Imaging Analysis: Current and Future Trends)
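Young's modulus values like those reported above are conventionally derived from shear wave speed via the standard relation for nearly incompressible soft tissue, E ≈ 3ρc²; a minimal sketch of that general SWE relation (not the study's processing pipeline; the function name is illustrative):

```python
def youngs_modulus_kpa(shear_speed_m_s, density_kg_m3=1000.0):
    # mu = rho * c^2 gives the shear modulus in Pa; E ~= 3 * mu holds
    # for nearly incompressible, isotropic soft tissue
    mu = density_kg_m3 * shear_speed_m_s ** 2
    return 3.0 * mu / 1000.0  # convert Pa -> kPa

# A shear wave speed of ~4.4 m/s corresponds to roughly 58 kPa,
# i.e., stiffness on the order reported for the SUI group
print(youngs_modulus_kpa(4.4))
```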

17 pages, 3051 KB  
Article
Introduction of a Semi-Quantitative Image-Based Analysis Tool for CBCT-Based Evaluation of Bone Regeneration in Tooth Extraction Sockets
by Anja Heselich, Pauline Neff, Joanna Śmieszek-Wilczewska, Robert Sader and Shahram Ghanaati
Bioengineering 2025, 12(3), 301; https://doi.org/10.3390/bioengineering12030301 - 16 Mar 2025
Cited by 2 | Viewed by 1105
Abstract
After tooth extraction, resorptive changes in extraction sockets and the adjacent alveolar ridge can affect subsequent tooth replacement and implantation. Several surgical concepts, including the application of the autologous blood concentrate platelet-rich fibrin (PRF), aim to reduce these changes. While PRF's wound-healing and pain-relieving effects are well documented, its impact on bone regeneration is less clear due to varying PRF protocols and measurement methods for bone regeneration. This study aimed to develop a precise, easy-to-use, non-invasive radiological evaluation method that examines the entire extraction socket to assess bone regeneration using CBCT data from clinical trials. The method, based on the freely available ImageJ-based software "Fiji", proved to be precise, reproducible, and transferable. Its limitations remain the time required and its exclusive focus on radiological bone regeneration. Nevertheless, the method presented here is more precise than those currently described in the literature, as it evaluates the entire socket rather than partial areas. Applying the novel method to measure mineralized socket volume and the radiological bone density of newly formed bone in a randomized, controlled clinical trial of solid PRF for socket preservation in premolar and molar sockets showed only slight, statistically non-significant trends toward better regeneration in the PRF group compared to natural healing. Full article
(This article belongs to the Special Issue Medical Imaging Analysis: Current and Future Trends)
