Artificial Intelligence in Medical Image Processing and Segmentation, Third Edition

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: 30 June 2026 | Viewed by 3148

Special Issue Editor


Dr. Bernard Chiu
Guest Editor
Department of Physics & Computer Science, Wilfrid Laurier University, Waterloo, ON, Canada
Interests: medical imaging; image processing and quantification

Special Issue Information

Dear Colleagues,

In recent years, Artificial Intelligence (AI) has revolutionized the field of medical image processing. Image segmentation, in particular, has benefited the most from these innovations.

These advances have accelerated the translation of AI algorithms from laboratory-only use to real clinical practice, especially in computer-aided diagnosis and image-guided surgery.

We are pleased to invite you to submit your work to this Special Issue focused on the cutting-edge developments in AI applications in the medical imaging field.

Bioengineering will be accepting contributions (both original articles and reviews) centered primarily on the following topics:

  • Medical image segmentation;
  • AI-based medical image registration;
  • Medical image recognition;
  • Patient/treatment stratification based on AI image processing;
  • Human interactions for the improvement of AI image processing outcomes;
  • Image-guided surgery/radiotherapy based on AI;
  • Radiomics;
  • Explainable AI in medicine.

Dr. Bernard Chiu
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical image processing
  • image segmentation
  • computer-aided diagnosis
  • image-guided surgery
  • artificial intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the journal website.


Published Papers (6 papers)


Research

15 pages, 2565 KB  
Article
AI-Based Myocardial Segmentation and Attenuation Mapping Improved Detection of Myocardial Ischemia and Infarction on Emergency CT Angiography
by Martin Segeroth, Jan Vosshenrich, Hanns-Christian Breit, Helge Walter Anand Krebs-Fleischmann, Lorraine Abel, Markus Obmann, Shan Yang, Joshy Cyriac, Jakob Wasserthal, Ashraya Kumar Indrakanti, Michael Bach, Michael J. Zellweger, Alexander Sauter, Jens Bremerich, Philip Haaf and David Jean Winkel
Bioengineering 2026, 13(3), 355; https://doi.org/10.3390/bioengineering13030355 - 18 Mar 2026
Viewed by 62
Abstract
Purpose: To investigate whether an AI-based approach combining deep learning myocardial segmentation with attenuation-normalized myocardial mapping (colormaps) improves detection of myocardial ischemia and infarction on emergency ECG-gated CT angiography. Materials and Methods: In this retrospective study, 119 patients with acute chest pain who underwent ECG-gated CT angiography to exclude pulmonary embolism or acute aortic syndrome and invasive coronary angiography within 48 h were included. A deep learning model (nnU-Net) was used for automatic left-ventricular myocardial segmentation, serving as the basis for voxel-wise attenuation normalization to generate AI-based myocardial attenuation maps. Six readers with varying experience levels evaluated all cases for myocardial hypoattenuation in a multi-reader, multi-case design, with and without AI-generated attenuation maps. Results: AI-based myocardial attenuation mapping increased mean sensitivity for detection of myocardial ischemia or infarction by 12% [IQR 2–20%] compared with standard CT interpretation alone. Sensitivity improved by 15% [IQR 10–22%] in STEMI (ST-Elevation Myocardial Infarction) and 11% [IQR −1–18%] in NSTEMI (Non-STEMI) cases. The AI-assisted approach resulted in the correct reclassification of 11% of patients and improved inter-reader agreement, particularly among less experienced readers, demonstrating reduced reader dependency. Conclusions: AI-based myocardial segmentation and attenuation mapping enhance the detection of myocardial ischemia and infarction on emergency CT angiography and improve inter-reader agreement. This AI-assisted image processing approach provides clinically meaningful decision support in acute chest pain imaging workflows. Full article
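The attenuation-normalization step described in this abstract can be illustrated in a few lines. The sketch below is a minimal numpy illustration, assuming a CT volume in Hounsfield units and a binary myocardial mask produced by the segmentation model; the paper does not specify the authors' exact normalization, so dividing by the mean myocardial attenuation is an assumption.

```python
import numpy as np

def attenuation_map(hu_volume, myo_mask):
    """Illustrative voxel-wise attenuation normalization.

    hu_volume : float array of CT attenuation values (HU)
    myo_mask  : boolean array, True inside the LV myocardium

    Returns a map that is ~1.0 in normally perfused myocardium;
    hypoattenuating (ischemic/infarcted) regions fall below 1.0.
    """
    mean_hu = hu_volume[myo_mask].mean()      # reference attenuation
    amap = np.zeros_like(hu_volume, dtype=float)
    amap[myo_mask] = hu_volume[myo_mask] / mean_hu
    return amap
```

A colormap applied to such a relative map makes regional hypoattenuation visually explicit, which is the mechanism by which the authors report improved reader sensitivity.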

19 pages, 3106 KB  
Article
Explainability of a Deep Learning Model for Mediastinal Lymph Node Station Classification in Endobronchial Ultrasound (EBUS)
by Øyvind Ervik, Mia Rødde, Erlend Fagertun Hofstad, Thomas Langø, Håkon O. Leira, Tore Amundsen and Hanne Sorger
Bioengineering 2026, 13(2), 198; https://doi.org/10.3390/bioengineering13020198 - 10 Feb 2026
Viewed by 443
Abstract
Accurate localization of thoracic lymph nodes during endobronchial ultrasound (EBUS) is crucial for lung cancer staging, treatment planning, and prognostication. Artificial intelligence (AI) has the potential to support this process. Deep learning (DL) models often lack transparency but can benefit from explainable AI (XAI) tools like Gradient-weighted Class Activation Mapping (Grad-CAM). However, no prior study has quantitatively assessed whether model attention in EBUS imaging corresponds to relevant anatomy. This study developed a convolutional neural network (CNN) to classify thoracic lymph node stations and evaluated the anatomical relevance of Grad-CAM activations using a structured annotation framework. Applied to 35,527 labeled EBUS images, the CNN achieved 63.1% accuracy, with the highest F1-scores in stations 4L, 4R, and 10R. Three expert bronchoscopists independently annotated Grad-CAM maps from 3131 test images. Activations predominantly aligned with lymph nodes and/or blood vessels, yielding an accuracy of 65.9% and an F1-score of 58.4%, with moderate interobserver agreement. These findings indicate that DL can aid lymph node station classification and that XAI offers meaningful insight into model behavior. The proposed framework may enhance anatomical orientation and operator training during EBUS, although further optimization and multicenter validation are required. Full article
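The Grad-CAM technique this study relies on has a compact core: channel weights are the global-average-pooled gradients of the class score with respect to the last convolutional feature maps, and the heatmap is the ReLU of their weighted sum. A minimal numpy sketch of that computation (framework-agnostic; obtaining the activations and gradients from a real CNN is assumed to happen elsewhere, e.g. via hooks):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Core Grad-CAM heatmap computation.

    activations : (K, H, W) feature maps from the last conv layer
    gradients   : (K, H, W) gradients of the class score w.r.t.
                  those feature maps

    Returns an (H, W) heatmap scaled to [0, 1].
    """
    # Channel importance: global-average-pool the gradients
    weights = gradients.mean(axis=(1, 2))             # (K,)
    cam = np.tensordot(weights, activations, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)                        # ReLU
    if cam.max() > 0:
        cam /= cam.max()                              # scale to [0, 1]
    return cam
```

In practice the (H, W) map is upsampled to the input image size and overlaid on the ultrasound frame, which is what the annotators in this study assessed against visible anatomy.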

16 pages, 4585 KB  
Article
Cascaded Deep Learning-Based Model for Classification and Segmentation of Plaques from Carotid Ultrasound Images
by Bo-Wen Ren, Ran Zhou, Xinyao Cheng, Mingyue Ding and Bernard Chiu
Bioengineering 2026, 13(2), 190; https://doi.org/10.3390/bioengineering13020190 - 6 Feb 2026
Viewed by 403
Abstract
Carotid plaque classification based on ultrasound echogenicity and quantification of plaque burden are crucial in stroke risk assessment. In this work, we propose a framework that leverages the synergy between classification and segmentation by sharing plaque location information to enhance the performance of both tasks. Our cascaded framework integrates a ResNet-based classifier (Masked-ResNet-DS) with MedSAM, a medically adapted version of the Segment Anything Model for joint classification and segmentation of carotid plaques from 2D ultrasound images. Ground truth boundaries are used to guide region-specific feature pooling in the classifier, helping it focus on plaques during training. Since ground truth boundaries are unavailable at inference, we introduce a two-iteration strategy: the first generates a class activation map (CAM), which is then used for focused pooling in the second iteration to predict plaque type. The CAM is also used as a prompt to guide MedSAM for segmentation. To ensure accurate localization, the CAM is supervised during training using a Dice loss against the segmentation ground truth. Masked-ResNet-DS achieves a mean F1-score of 96.7% in plaque classification, at least 3.2% higher than competing methods. Ablation studies confirm that ground truth-based pooling and CAM supervision both improve classification. CAM-guided MedSAM achieves a Dice similarity coefficient (DSC) of 86.6%, outperforming U-Net and nnU-Net by 5.9% and 3.6%, respectively. In addition, CAM prompts improve MedSAM’s DSC by 2.2%. By sharing plaque location between classification and segmentation, the proposed method improves both tasks and provides a more accurate tool for stroke risk stratification. Full article
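One step in this pipeline, turning a class activation map into a prompt for a SAM-style segmenter, is easy to sketch. Below is an illustrative numpy version that thresholds the CAM and extracts a bounding box in the (x_min, y_min, x_max, y_max) format SAM-style models accept; the threshold of 0.5 is an arbitrary illustrative choice, not taken from the paper.

```python
import numpy as np

def cam_to_box_prompt(cam, thresh=0.5):
    """Convert a class activation map into a bounding-box prompt.

    cam    : (H, W) activation map scaled to [0, 1]
    thresh : activation cutoff (illustrative default)

    Returns (x_min, y_min, x_max, y_max), or None if nothing
    activates above the threshold.
    """
    ys, xs = np.where(cam >= thresh)
    if ys.size == 0:
        return None
    return (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
```

Supervising the CAM itself with a Dice loss, as the authors do, keeps this box anchored on the plaque rather than on spurious activations.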

15 pages, 4617 KB  
Article
Artificial Intelligence-Based Proximal Bone Shape Asymmetry Analysis and Clinical Correlation with Cartilage Relaxation Times and Functional Activity
by Rafeek Thahakoya, Rupsa Bhattacharjee, Misung Han, Felix Gerhard Gassert, Johanna Luitjens, Valentina Pedoia, Richard B. Souza and Sharmila Majumdar
Bioengineering 2026, 13(2), 184; https://doi.org/10.3390/bioengineering13020184 - 5 Feb 2026
Viewed by 941
Abstract
The current study investigated proximal femur bone shape asymmetry and its associations with cartilage composition and functional performance in individuals with hip osteoarthritis (OA). Forty-seven participants with hip OA (mean age: 53.77 ± 12.47 years; 22 females; BMI: 24.49 ± 4.0 kg/m²) were included in this study. Bilateral hip MRI was performed using a 3.0 T MR scanner with 3D proton density fat-saturated CUBE and MAPSS sequences. Automatic segmentation of the proximal femur was achieved using a U-Net framework refined through a human-in-the-loop annotation strategy, followed by three-dimensional bone shape analysis to quantify asymmetry. Cartilage relaxation times were assessed using atlas-based segmentation and quantification, while functional activity was evaluated according to OARSI-recommended criteria. The proposed proximal femur bone segmentation showed a DSC of 96.48% (95% CI: 96.33–96.64) and a Hausdorff distance of 4.66 mm (95% CI: 3.80–5.51). Increased bone shape asymmetry in the posterior–lateral–superior region of the proximal femur was associated with functional activity in the chair stand test (rho = −0.41; p = 0.006), and asymmetry in the anterior–lateral–inferior region demonstrated a significant positive correlation (rho = 0.37; p = 0.006) with the T1rho values of the acetabular cartilage region. Overall, the findings indicate that region-specific proximal femoral bone shape asymmetry in hip OA is associated with cartilage characteristics and functional impairment, highlighting the potential value of bone shape features as imaging biomarkers relevant to clinical function. Full article
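The two segmentation-quality metrics reported here, the Dice similarity coefficient (overlap of binary masks) and the Hausdorff distance (worst-case surface disagreement), are standard and easy to state precisely. A minimal numpy sketch of both definitions (the Hausdorff version below is a brute-force point-set formulation for illustration, not the authors' implementation):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2.0 * inter / total if total else 1.0

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets,
    each an (N, D) array of surface coordinates."""
    # Pairwise distance matrix between the two point sets
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    # Largest nearest-neighbor distance, taken in both directions
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Dice is insensitive to a few badly placed boundary voxels, while Hausdorff is dominated by them, which is why papers such as this one report both.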

21 pages, 1574 KB  
Article
Watershed Encoder–Decoder Neural Network for Nuclei Segmentation of Breast Cancer Histology Images
by Vincent Majanga, Ernest Mnkandla, Donatien Koulla Moulla, Sree Thotempudi and Attipoe David Sena
Bioengineering 2026, 13(2), 154; https://doi.org/10.3390/bioengineering13020154 - 28 Jan 2026
Viewed by 319
Abstract
Recently, deep learning methods have seen major advancements and are preferred for medical image analysis. Clinically, deep learning techniques for cancer image analysis are among the main applications for early diagnosis, detection, and treatment. Consequently, segmentation of breast histology images is a key step towards diagnosing breast cancer. However, the use of deep learning methods for image analysis is constrained by challenging features in the histology images. These challenges include poor image quality, complex microscopic tissue structures, topological intricacies, and boundary/edge inhomogeneity; they also limit the number of images available for analysis. The U-Net model was introduced and gained significant traction for its ability to produce high-accuracy results with very few input images, and many modifications of the U-Net architecture exist. Therefore, this study proposes the watershed encoder–decoder neural network (WEDN) to segment cancerous lesions in supervised breast histology images. Pre-processing of supervised breast histology images via augmentation is introduced to increase the dataset size. The augmented dataset is further enhanced and segmented into the region of interest. Data enhancement methods such as thresholding, opening, dilation, and distance transform are used to highlight foreground and background pixels while removing unwanted parts from the image. Further segmentation via connected component analysis is then used to group image pixels with similar intensity values and assign them their respective labeled binary masks. The watershed filling method is then applied to these labeled binary mask components to separate and identify the edges/boundaries of the regions of interest (cancerous lesions). The resulting image information is sent to the WEDN model for feature extraction and learning via training and testing. Residual convolutional block layers of the WEDN model are the learnable layers that extract the region of interest (ROI), i.e., the cancerous lesion. The method was evaluated on an augmented dataset of 3000 image–watershed mask pairs; the model was trained on 2400 training images and tested on 600 testing images. The proposed method produced significant results: 98.53% validation accuracy, a 96.98% validation Dice coefficient, and a 97.84% validation intersection over union (IoU) score. Full article
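The pre-processing chain this abstract describes (threshold, opening, dilation, distance transform, connected-component labeling to produce watershed markers) can be sketched with scipy.ndimage. The study's actual parameters and tooling are not given, so the threshold, the dilation count, and the 0.7 foreground fraction below are arbitrary illustrative choices.

```python
import numpy as np
from scipy import ndimage as ndi

def watershed_markers(gray, thresh=0.5, fg_frac=0.7):
    """Build labeled watershed markers from a grayscale image,
    following the threshold -> opening -> dilation -> distance
    transform -> connected-component chain (illustrative values).

    gray : 2D float array in [0, 1]

    Returns (markers, n_components, sure_background).
    """
    binary = gray > thresh                           # threshold
    opened = ndi.binary_opening(binary)              # remove specks
    sure_bg = ndi.binary_dilation(opened, iterations=2)  # grow background band
    dist = ndi.distance_transform_edt(opened)        # distance to background
    sure_fg = dist > fg_frac * dist.max()            # confident nucleus cores
    markers, n = ndi.label(sure_fg)                  # one label per nucleus
    return markers, n, sure_bg
```

The labeled markers would then seed a watershed filling step (e.g. skimage.segmentation.watershed) to recover the boundaries of touching nuclei, as in the pipeline the paper describes.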

26 pages, 25350 KB  
Article
Applying Supervised Machine Learning to Effusion Analysis for the Diagnosis of Feline Infectious Peritonitis
by Dawn E. Dunbar, Simon A. Babayan, Sarah Krumrie, Sharmila Rennie, Elspeth M. Waugh, Margaret J. Hosie and William Weir
Bioengineering 2026, 13(2), 127; https://doi.org/10.3390/bioengineering13020127 - 23 Jan 2026
Viewed by 714
Abstract
Feline infectious peritonitis (FIP) is a major disease of cats which, unless promptly diagnosed and treated, is invariably fatal. Although it has long been recognised that the condition is the result of an aberrant immune response to infection with feline coronavirus, there remain significant gaps in our understanding of its pathogenesis. Consequently, diagnosis is complex and relies on the combined interpretation of numerous clinical signs and laboratory biomarkers, many of which are non-specific. In the case of effusive FIP, a commonly encountered acute form of the disease in which body cavity effusions develop, the interpretation of fluid analysis results is key to diagnosing the condition. We hypothesised that machine learning could be applied to fluid analysis test data in order to help diagnose effusive FIP. Thus, historical test records from a veterinary laboratory dataset of 718 suspected cases of effusive disease were identified, representing 336 cases of FIP and 382 cases that were determined not to be FIP. This dataset was used to train an ensemble model to predict disease status based on clinical observations and laboratory features. Our model predicts the correct disease state with an accuracy of 96.51%, an area under the receiver operating characteristic curve of 96.48%, a sensitivity of 98.85% and a specificity of 94.12%. This study demonstrates that machine learning can be successfully applied to the interpretation of fluid analysis results to accurately detect cases of effusive FIP. Thus, this method has the potential to be utilised in a veterinary diagnostic laboratory setting to standardise and improve service provision. Full article
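The three headline numbers reported here (accuracy, sensitivity, specificity) come straight from a binary confusion matrix. A minimal sketch of those definitions, with FIP as the positive class (the counts in the usage example below are made up for illustration, not taken from the study):

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from the confusion
    counts of a binary classifier (here: FIP vs. not-FIP).

    tp/fn : FIP cases predicted positive / missed
    tn/fp : non-FIP cases predicted negative / falsely flagged
    """
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # fraction of FIP cases detected
    specificity = tn / (tn + fp)   # fraction of non-FIP correctly ruled out
    return accuracy, sensitivity, specificity

# Illustrative counts only:
acc, sens, spec = binary_metrics(tp=90, fp=20, tn=80, fn=10)
```

For a fatal-unless-treated disease like FIP, the high sensitivity the authors report matters most: missed positives (fn) are the costliest error.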
