AI-Driven Imaging and Analysis for Biomedical Applications

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: 30 April 2026 | Viewed by 3296

Special Issue Editors


Dr. Ulas Bagci
Guest Editor
Radiology and Biomedical Engineering Department, Northwestern University, Chicago, IL, USA
Interests: artificial intelligence; medical artificial intelligence; biomedical imaging; digital health; explainable AI; trustworthy AI; generative AI

Dr. Netzahualcoyotl Hernandez-Cruz
Guest Editor
School of AI and Advanced Computing, Xi’an Jiaotong-Liverpool University, Suzhou, China
Interests: fetal ultrasound; medical image and video analysis; ubiquitous/pervasive computing; transfer learning; federated learning

Dr. Zeyu Fu
Guest Editor
Department of Computer Science, University of Exeter, Exeter, UK
Interests: artificial intelligence; medical image analysis; computer vision; video analysis

Special Issue Information

Dear Colleagues,

Recent advances in AI-driven medical imaging and analysis are transforming computational bioengineering research and clinical practice, enabling quantitative insights into healthy and diseased tissues. From static image-based to dynamic video-based analyses, these techniques provide tools for understanding complex physiological processes such as vascular pathologies, tumor hypoxia, neurodegeneration, and fetal development. Imaging modalities such as ultrasound, MRI, CT, and PET are at the forefront of this transformation, supported by cutting-edge AI-driven computational methods.

Recent breakthroughs in AI, particularly in machine learning, are driving innovations in medical image and video analysis, including methods that address challenges such as motion artifacts, operator dependency, and multimodal integration. We welcome contributions on cutting-edge AI-driven methods and novel hardware/software integrations that enhance medical imaging for diagnosis, monitoring, and therapy, including the following:

  • AI-driven methods for real-time interpretation of medical images and videos, including deep learning models for rapid disease detection and automated clinical decision support;
  • Advances in multimodal imaging integration, such as AI-enhanced fusion of ultrasound, MRI, Doppler, and NIRS, for improved diagnostic accuracy;
  • Bioengineered solutions for next-generation patient-centric imaging systems, including low-cost portable devices, automated scanning protocols, and ergonomic human–machine interfaces designed for accessibility and ease of use;
  • Computational methods for dynamic medical video analysis, incorporating real-time biomechanical tracking, spatiotemporal feature extraction, and AI-enhanced motion artifact correction for improved diagnostic precision;
  • Techniques addressing the challenges of video-based medical analysis, including noise reduction, contrast enhancement, and real-time segmentation;
  • Explainable AI approaches for medical imaging, ensuring transparency, interpretability, and trust in AI-driven diagnostic systems (a brief illustrative sketch follows this list);
  • Generative AI for medical imaging and analysis, leveraging synthetic data generation, AI-driven image enhancement, and virtual contrast agents to improve image quality and model robustness.
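
To ground the explainable AI topic above, the following is a minimal Grad-CAM sketch for a CNN classifier, a common interpretability baseline in medical imaging. The ResNet-18 backbone, pretrained weights, and image filename are placeholder assumptions for illustration only, not a method drawn from this Special Issue.

    # Minimal Grad-CAM sketch for a CNN classifier (illustrative assumptions:
    # generic ResNet-18 backbone, ImageNet weights, hypothetical "scan.png").
    import torch
    import torch.nn.functional as F
    from torchvision import models, transforms
    from PIL import Image

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    target_layer = model.layer4[-1]          # last conv block of ResNet-18

    activations, gradients = {}, {}
    def fwd_hook(module, inp, out):
        activations["value"] = out.detach()
    def bwd_hook(module, grad_in, grad_out):
        gradients["value"] = grad_out[0].detach()

    target_layer.register_forward_hook(fwd_hook)
    target_layer.register_full_backward_hook(bwd_hook)

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    img = preprocess(Image.open("scan.png").convert("RGB")).unsqueeze(0)

    scores = model(img)
    scores[0, scores.argmax()].backward()    # gradient of the top-scoring class

    # Grad-CAM: weight each feature map by its average gradient, then ReLU.
    w = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((w * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=(224, 224), mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]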

This Special Issue highlights interdisciplinary research at the intersection of technological innovation, clinical application, and computational analysis, demonstrating how AI-driven imaging advances biomedical diagnosis, monitoring, and treatment.

Dr. Ulas Bagci
Dr. Netzahualcoyotl Hernandez-Cruz
Dr. Zeyu Fu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • biomedical image and video analysis
  • ultrasound-based image and video analysis
  • machine learning
  • multimodal imaging
  • AI-assisted diagnostics
  • real-time analysis
  • generative AI
  • explainable AI

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (5)


Research


22 pages, 2905 KB  
Article
Image Captioning with Object Detection and Facial Expression Recognition for Smart Industry
by Abdul Saboor Khan, Abdul Haseeb Khan, Muhammad Jamshed Abbass and Imran Shafi
Bioengineering 2025, 12(12), 1325; https://doi.org/10.3390/bioengineering12121325 - 5 Dec 2025
Abstract
This paper presents a new image captioning system that incorporates facial expression recognition to provide richer emotional and contextual comprehension in the generated captions. Affective cues are combined with visual features, enabling semantically complete and emotionally aware descriptions. Experiments were carried out on two newly created datasets, FlickrFace11k and COCOFace15k, using standard benchmarks such as BLEU, METEOR, ROUGE-L, CIDEr, and SPICE. The proposed model outperformed baselines such as Show-Attend-Tell and Up-Down, remaining consistently better across all metrics. Notably, it achieved gains of 2.5 points on CIDEr and 1.0 on SPICE, indicating closer alignment with human-written reference captions. A 5-fold cross-validation confirmed the model’s robustness, with minimal standard deviation across folds (<±0.2). Qualitative results further demonstrated its ability to capture fine-grained emotional expressions often missed by conventional models. These findings underscore the model’s potential in affective computing, assistive technologies, and human-centric AI applications. The pipeline is designed for on-prem/edge deployment with lightweight interfaces to IoT middleware (MQTT/OPC UA), enabling smart-factory integration. These characteristics align the method with Industry 4.0 sensor networks and human-centric analytics.
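
The caption metrics named in this abstract are standard and straightforward to reproduce. As a minimal sketch, the snippet below scores a candidate caption against reference captions with corpus-level BLEU from NLTK; the example captions are invented placeholders, not data from the paper.

    # Minimal caption-scoring sketch using corpus BLEU from NLTK.
    # The candidate/reference captions are invented examples, not paper data.
    from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

    references = [
        [["a", "smiling", "worker", "inspects", "a", "circuit", "board"]],
    ]  # one list of reference captions per image
    candidates = [
        ["a", "happy", "worker", "inspects", "a", "circuit", "board"],
    ]

    smooth = SmoothingFunction().method1  # avoids zero scores on short captions
    bleu4 = corpus_bleu(references, candidates,
                        weights=(0.25, 0.25, 0.25, 0.25),
                        smoothing_function=smooth)
    print(f"BLEU-4: {bleu4:.3f}")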

16 pages, 1446 KB  
Article
Cross-Software Radiomic Feature Robustness Assessed by Hierarchical Clustering and Composite Index Analysis: A Multi-Cancer Study on Colorectal and Liver Lesions
by Roberta Fusco, Giulia Festa, Mario Sansone, Sergio Venanzio Setola, Antonio Avallone, Francesco Izzo, Antonella Petrillo and Vincenza Granata
Bioengineering 2025, 12(12), 1282; https://doi.org/10.3390/bioengineering12121282 - 21 Nov 2025
Viewed by 471
Abstract
Background: Radiomic feature robustness is a key prerequisite for the reproducibility and clinical translation of imaging biomarkers. Variability across software platforms can significantly affect feature consistency, compromising predictive modeling reliability. This study aimed to develop and validate a hierarchical clustering-based workflow for evaluating radiomic feature robustness within and across software platforms, identifying stable and reproducible features suitable for clinical applications. Methods: A multi-cancer CT dataset including 97 lesions from 71 patients, comprising primary colorectal cancer (CRC), colorectal liver metastases, and hepatocellular carcinoma (HCC), was analyzed. Radiomic features were extracted using two IBSI-compliant platforms (MM Radiomics of syngo.via Frontier and 3D Slicer with PyRadiomics). Intra-software reliability was assessed through the intraclass correlation coefficient ICC(A,1), while cross-software stability was evaluated using hierarchical clustering validated by the Adjusted Rand Index (ARI). A Composite Index (CI) integrating correlation, distributional similarity, and mean fractional ratio quantified inter-platform feature robustness. Results: Over 95% of radiomic features demonstrated good-to-excellent intra-software reliability. Several clustering configurations achieved ARI = 1.0, confirming strong cross-platform concordance. The most robust and recurrent features were predominantly wavelet-derived descriptors and first-order statistics, particularly cluster shade (GLCM-based) and mean intensity-related features. Conclusions: The proposed multi-stage framework effectively identifies stable, non-redundant, and transferable radiomic features across IBSI-compliant software platforms. These findings provide a methodological foundation for cross-platform harmonization and enhance the reproducibility of radiomic biomarkers in oncologic imaging.
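
As a hedged illustration of the cross-software stability analysis described above, the sketch below clusters two synthetic feature matrices hierarchically and compares the resulting partitions with the Adjusted Rand Index. The data, Ward linkage, and cluster count are placeholder assumptions, not the study's actual configuration.

    # Sketch: compare hierarchical clusterings of features from two (simulated)
    # radiomics platforms using the Adjusted Rand Index (ARI).
    # Feature values, linkage, and cluster count are placeholder assumptions.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from sklearn.metrics import adjusted_rand_score

    rng = np.random.default_rng(0)
    features_a = rng.normal(size=(60, 20))       # platform A: 60 features x 20 lesions
    features_b = features_a + rng.normal(scale=0.05, size=features_a.shape)  # platform B

    def cluster(features, k=5):
        """Ward-linkage hierarchical clustering of feature vectors into k groups."""
        z = linkage(features, method="ward")
        return fcluster(z, t=k, criterion="maxclust")

    labels_a = cluster(features_a)
    labels_b = cluster(features_b)
    # ARI = 1.0 means the two platforms yield identical feature partitions.
    print(f"ARI between platforms: {adjusted_rand_score(labels_a, labels_b):.3f}")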

17 pages, 1832 KB  
Article
Beyond Human Vision: Revolutionizing the Localization of Diminutive Sessile Polyps in Colonoscopy
by Mahsa Dehghan Manshadi and M. Soltani
Bioengineering 2025, 12(11), 1234; https://doi.org/10.3390/bioengineering12111234 - 11 Nov 2025
Viewed by 356
Abstract
Gastrointestinal disorders such as colorectal cancer (CRC) pose a substantial health burden worldwide, with incidence rising across age groups. Prompt detection and removal of polyps, recognized as CRC precursors, are crucial for prevention. While traditional colonoscopy works well, it is vulnerable to specialist error. This study proposes an AI-based assistant for localizing diminutive sessile polyps, built on the YOLOv8 model family. Comprehensive evaluations were conducted on a diverse dataset assembled from several available datasets. The final dataset contains images obtained using two imaging methods: white light endoscopy (WLE) and narrow-band imaging (NBI). The model achieved a precision of 96.4%, a recall of 93.89%, and an F1-score of 94.46%, a result attributable to a carefully balanced combination of hyperparameters and the specific attributes of the dataset designed for colorectal polyp localization in colonoscopy images. The suitability of the dataset was also verified by analyzing polyp sizes and their coordinates using a dedicated matrix. This study offers significant insights for improving the detection of diminutive sessile colorectal polyps, advancing technology-driven colorectal cancer diagnosis in offline scenarios; this is particularly beneficial for gastroenterologists analyzing capsule endoscopy images to detect gastrointestinal polyps.
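
For readers who want to prototype a comparable detector, the sketch below runs inference with the Ultralytics YOLOv8 API. The generic pretrained weights file and the image path are placeholders; the study's trained polyp model is not distributed with this abstract.

    # Sketch: object detection with the Ultralytics YOLOv8 API.
    # "yolov8n.pt" (generic pretrained weights) and the image path are
    # placeholders -- the study's trained polyp model is not available here.
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")                     # load pretrained nano model
    results = model.predict("colonoscopy_frame.png", conf=0.25)  # hypothetical frame

    for box in results[0].boxes:                   # one Boxes object per image
        x1, y1, x2, y2 = box.xyxy[0].tolist()      # corner coordinates in pixels
        print(f"detection at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}), "
              f"confidence {float(box.conf):.2f}")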

14 pages, 2890 KB  
Article
Automatic 3D Tracking of Liver Metastases: Follow-Up Assessment of Cancer Patients in Contrast-Enhanced MRI
by Sophia Schulze-Weddige, Uli Fehrenbach, Johannes Kolck, Richard Ruppel, Georg Lukas Baumgärtner, Maximilian Lindholz, Isabel Theresa Schobert, Anna-Maria Haack, Henning Jann, Martina Mogl, Dominik Geisel, Bertram Wiedenmann and Tobias Penzkofer
Bioengineering 2025, 12(8), 874; https://doi.org/10.3390/bioengineering12080874 - 12 Aug 2025
Viewed by 986
Abstract
Background: Tracking the differential growth of secondary liver metastases is important for early detection of progression but remains challenging due to variable tumor growth rates. We aimed to automate accurate, consistent, and efficient longitudinal monitoring. Methods: We developed an automatic 3D segmentation and tracking algorithm to quantify differential growth, tested on contrast-enhanced MRI follow-ups of patients with neuroendocrine liver metastases (NELMs). The output was integrated into a decision support tool to distinguish between progressive disease, stable disease, and partial/complete response. A user study involving seven expert radiologists evaluated its impact. Group comparisons used the Friedman test with post hoc analyses. Results: Our algorithm detected 991 metastases in 30 patients: 13% new, 30% progressive, 18% stable, and 18% regressive; the remainder were either too small to measure (15%) or merged with another metastasis in the follow-up assessment (6%). Diagnostic accuracy improved with the additional information on hepatic tumor load and differential growth, albeit not significantly (p = 0.72), while diagnosis time increased (p < 0.001). All radiologists found the method useful and expressed a desire to integrate it into existing diagnostic tools. Conclusions: We automated the segmentation and quantification of individual NELMs, enabling comprehensive longitudinal analysis of differential tumor growth with the potential to enhance clinical decision-making.
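
Longitudinal lesion tracking of this kind is often bootstrapped by matching lesion centroids between timepoints. The sketch below shows one generic way to do that, Hungarian assignment on pairwise distances; it is a simplified stand-in rather than the authors' algorithm, and the centroid arrays and distance threshold are synthetic assumptions.

    # Sketch: match lesion centroids between baseline and follow-up scans by
    # minimum total distance (Hungarian assignment). A generic stand-in for
    # longitudinal tracking, not the authors' algorithm; data are synthetic.
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    baseline = np.array([[10.0, 22.0, 5.0], [40.0, 18.0, 12.0], [33.0, 60.0, 8.0]])
    followup = np.array([[41.0, 19.0, 12.5], [11.0, 23.0, 5.2], [70.0, 70.0, 30.0]])

    cost = cdist(baseline, followup)               # pairwise Euclidean distances (mm)
    rows, cols = linear_sum_assignment(cost)       # optimal one-to-one matching

    MAX_SHIFT_MM = 10.0                            # reject implausible jumps (assumed)
    for i, j in zip(rows, cols):
        if cost[i, j] <= MAX_SHIFT_MM:
            print(f"baseline lesion {i} -> follow-up lesion {j} ({cost[i, j]:.1f} mm)")
        else:
            print(f"baseline lesion {i}: no match (nearest {cost[i, j]:.1f} mm away)")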

Review


31 pages, 3812 KB  
Review
Generative Adversarial Networks in Dermatology: A Narrative Review of Current Applications, Challenges, and Future Perspectives
by Rosa Maria Izu-Belloso, Rafael Ibarrola-Altuna and Alex Rodriguez-Alonso
Bioengineering 2025, 12(10), 1113; https://doi.org/10.3390/bioengineering12101113 - 16 Oct 2025
Cited by 1 | Viewed by 834
Abstract
Generative Adversarial Networks (GANs) have emerged as powerful tools in artificial intelligence (AI) with growing relevance in medical imaging. In dermatology, GANs are revolutionizing image analysis, enabling synthetic image generation, data augmentation, color standardization, and improved diagnostic model training. This narrative review explores the landscape of GAN applications in dermatology, systematically analyzing 27 key studies and identifying 11 main clinical use cases, ranging from the synthesis of under-represented skin phenotypes to segmentation, denoising, and super-resolution imaging. The review also examines commercial implementations of GAN-based solutions relevant to practicing dermatologists. We present a comparative summary of GAN architectures, including DCGAN, cGAN, StyleGAN, CycleGAN, and advanced hybrids. We analyze the technical metrics used to evaluate performance, such as Fréchet Inception Distance (FID), SSIM, Inception Score, and the Dice coefficient, and discuss challenges such as data imbalance, overfitting, and the lack of clinical validation. Additionally, we review ethical concerns and regulatory limitations. Our findings highlight the transformative potential of GANs in dermatology while emphasizing the need for standardized protocols and rigorous validation. While early results are promising, few models have yet reached real-world clinical integration. The democratization of AI tools and open-access datasets is pivotal to ensuring equitable dermatologic care across diverse populations. This review serves as a comprehensive resource for dermatologists, researchers, and developers interested in applying GANs in dermatological practice and research. Future directions include multimodal integration, clinical trials, and explainable GANs to facilitate adoption in daily clinical workflows.
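
FID is the headline metric in this review; for orientation, the sketch below evaluates the standard formula FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2(S1 S2)^(1/2)) from precomputed activation statistics. The means and covariances here are synthetic stand-ins for real Inception-v3 embeddings of real versus GAN-generated dermatology images.

    # Sketch: Fréchet Inception Distance (FID) from activation statistics,
    #   FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2*(S1 @ S2)^{1/2}).
    # The embeddings below are synthetic stand-ins for Inception-v3 features.
    import numpy as np
    from scipy.linalg import sqrtm

    def fid(mu1, sigma1, mu2, sigma2):
        diff = mu1 - mu2
        covmean = sqrtm(sigma1 @ sigma2)
        if np.iscomplexobj(covmean):           # numerical noise can add tiny imaginary parts
            covmean = covmean.real
        return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)

    rng = np.random.default_rng(1)
    real = rng.normal(size=(500, 64))          # 500 "real" embeddings, 64-dim
    fake = real + rng.normal(scale=0.1, size=real.shape)  # shifted "generated" set

    score = fid(real.mean(0), np.cov(real, rowvar=False),
                fake.mean(0), np.cov(fake, rowvar=False))
    print(f"FID: {score:.3f}")                 # lower = distributions more alike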
