1. Introduction
Artificial intelligence (AI) has rapidly evolved from a conceptual promise into a practical and transformative asset in modern medicine [1,2]. Driven by significant advances in machine learning (ML), deep learning (DL), and biomedical signal and image processing, AI is reshaping the ways in which we understand, diagnose, and manage a wide range of clinical conditions [3,4]. This Special Issue of Bioengineering brings together a selection of innovative contributions that reflect the current state of the art in applying advanced AI-based computational techniques to solve real-world problems in healthcare.
The peer-reviewed studies cover a broad spectrum of applications, from contactless monitoring of physiological signals and predictive modeling in critical care settings to neural network-based approaches for neurorehabilitation [5,6,7,8,9,10]. They highlight how the integration of AI-based approaches, engineering, data science, and biomedical research is paving the way toward scalable, non-invasive, and clinically relevant solutions. These contributions not only demonstrate methodological rigor but also underscore a strong commitment to translational impact, with the potential to enhance clinical decision-making and improve patient outcomes [5,6,7,8,9,10]. Table 1 summarizes the peer-reviewed studies according to key criteria such as their biomedical signal, AI methodology, and clinical applications. A brief analysis of the featured papers is provided below.
Table 1. Comparative overview of the articles published in the Special Issue “Artificial Intelligence for Biomedical Signal Processing”.
2. Special Issue Articles
The Special Issue titled “Artificial Intelligence for Biomedical Signal Processing” contains six articles (as of 27 September 2024).
One of the central challenges in the analysis of physiological signals and images using AI lies in the limited quantity of available data. In this context, Li et al. (2024) introduced BioDiffusion [6], a generative diffusion model based on a modified U-Net architecture that is designed to synthesize biomedical signals. Although their study does not process recorded signals directly, their model can generate realistic instances of physiological data, such as electrocardiography (ECG) and accelerometry, while preserving their key statistical, structural, and physiological properties. Its versatility enables high-fidelity synthetic data generation, positioning it as a strategic tool for synthetic dataset creation in digital medicine. This capability makes the approach particularly valuable for training and/or validating AI-based models when real clinical data are inaccessible, scarce, unlabeled, incomplete, and/or imbalanced.
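To make the underlying mechanism concrete, the following is a minimal sketch of the denoising-diffusion training objective that such signal-synthesis models optimize. The small convolutional denoiser stands in for the paper’s modified U-Net, and the noise schedule, signal length, and layer sizes are illustrative assumptions rather than the published configuration.

```python
# Minimal sketch of a denoising-diffusion training step for 1D biomedical signals.
# The TinyDenoiser is a placeholder for BioDiffusion's modified U-Net (assumption).
import torch
import torch.nn as nn

T = 1000                                        # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule (assumed)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)  # cumulative signal-retention factors

class TinyDenoiser(nn.Module):
    """Stand-in for the modified U-Net: predicts the noise added to a 1-channel signal."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, hidden, 7, padding=3), nn.SiLU(),
            nn.Conv1d(hidden, hidden, 7, padding=3), nn.SiLU(),
            nn.Conv1d(hidden, 1, 7, padding=3),
        )

    def forward(self, x_t, t):
        # Broadcast the normalized timestep as an extra input channel.
        t_channel = (t.float() / T).view(-1, 1, 1).expand(-1, 1, x_t.shape[-1])
        return self.net(torch.cat([x_t, t_channel], dim=1))

def diffusion_loss(model, x0):
    """One training step: corrupt x0 at a random timestep and regress the injected noise."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    a_bar = alpha_bars[t].view(-1, 1, 1)
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps   # forward (noising) process
    return nn.functional.mse_loss(model(x_t, t), eps)

# Usage with a dummy ECG-like batch of shape (batch, channels, samples).
model = TinyDenoiser()
loss = diffusion_loss(model, torch.randn(8, 1, 512))
loss.backward()
```

Once trained, sampling runs the learned denoiser in reverse from pure noise to produce new synthetic signal instances.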
Non-invasive vital signs monitoring is another major area of interest in applying AI algorithms to biomedical signals and images. In this context, Cheng et al. (2024) proposed an AI-based approach to estimate blood oxygen saturation (SpO2) from facial videos [7]. Their approach encodes red–green–blue (RGB) facial videos into 2D spatial–temporal maps (STMaps) and estimates SpO2 levels from these maps using convolutional neural networks (CNNs), specifically ResNet-50, DenseNet-121, and EfficientNet-B3. The three CNN-based DL networks performed well, outperforming state-of-the-art methods and achieving errors below the 4% limit required by the international standard for approved pulse oximeters. Sensitivity analyses also showed the influence of different color spaces, acquisition devices, and SpO2 ranges, while feature map visualization identified the forehead, nose, and cheeks as the facial regions most relevant for SpO2 estimation [7]. This approach offers an accurate, rapid, and contactless alternative for SpO2 measurement that would be particularly beneficial in long-term monitoring scenarios requiring minimal physical contact, such as during infectious disease outbreaks or for patients with sensitive skin.
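The sketch below illustrates the general STMap-to-CNN idea: per-frame RGB means from facial regions are stacked into a 2D map and passed to an ImageNet-style backbone with a regression head. The region-of-interest handling, map size, and ResNet-50 head are simplifying assumptions, not the authors’ exact preprocessing pipeline.

```python
# Illustrative STMap construction and CNN-based SpO2 regression (assumed configuration).
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import resnet50

def build_stmap(frames, rois):
    """frames: (T, H, W, 3) uint8 video; rois: list of (y0, y1, x0, x1) facial patches.
    Returns an STMap of shape (3, n_rois, T): one row per ROI, one column per frame."""
    stmap = np.zeros((3, len(rois), len(frames)), dtype=np.float32)
    for t, frame in enumerate(frames):
        for r, (y0, y1, x0, x1) in enumerate(rois):
            stmap[:, r, t] = frame[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
    # Normalize each ROI/channel trace to zero mean and unit variance.
    stmap -= stmap.mean(axis=2, keepdims=True)
    stmap /= stmap.std(axis=2, keepdims=True) + 1e-6
    return stmap

class SpO2Regressor(nn.Module):
    """ResNet-50 backbone with a single-output head predicting SpO2 (%)."""
    def __init__(self):
        super().__init__()
        self.backbone = resnet50(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, stmap):
        return self.backbone(stmap)

# Usage with a dummy 10 s clip at 30 fps and three hypothetical facial ROIs.
frames = np.random.randint(0, 256, size=(300, 128, 128, 3), dtype=np.uint8)
rois = [(10, 40, 30, 100), (60, 90, 20, 60), (60, 90, 70, 110)]
x = torch.from_numpy(build_stmap(frames, rois)).unsqueeze(0)  # (1, 3, ROIs, T)
print(SpO2Regressor()(x).shape)                               # -> torch.Size([1, 1])
```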
Similarly, Hwang et al. (2023) and Zhao et al. (2023) investigated the application of AI to the non-invasive monitoring of respiratory rate (RR) using biomedical signals [8,10]. Hwang et al. (2023) developed seven DL models, including a newly proposed dilated residual neural network, to estimate RR from photoplethysmography (PPG) data [8]. Their DL models showed promising results, although the authors acknowledged challenges such as motion artifacts and limited data diversity. In turn, Zhao et al. (2023) proposed a transformer-based model, called TransRR, to predict RR from PPG and ECG data [10]. TransRR outperformed conventional CNN-based and long short-term memory (LSTM)-based models, although the authors acknowledged that more data are required to train a highly generalizable and clinically reliable RR estimation model. Overall, both studies highlight the potential of AI-based approaches for enhancing non-invasive RR monitoring across diverse clinical settings, such as intensive care units and emergency departments [8,10].
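As a rough illustration of the transformer-based route, the following sketch regresses RR from fused PPG/ECG windows by embedding temporal patches and pooling the encoder output. The patch length, model width, head count, and the omission of positional encodings are assumptions made for brevity; they do not reproduce the published TransRR configuration.

```python
# Minimal transformer encoder for respiratory-rate regression from PPG/ECG windows.
import torch
import torch.nn as nn

class RRTransformer(nn.Module):
    def __init__(self, in_channels=2, patch_len=25, d_model=64, n_heads=4, n_layers=3):
        super().__init__()
        # Split the (PPG, ECG) window into non-overlapping temporal patches and embed them.
        self.patch_embed = nn.Conv1d(in_channels, d_model,
                                     kernel_size=patch_len, stride=patch_len)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)   # breaths per minute

    def forward(self, x):
        # Positional encodings are omitted here for brevity.
        tokens = self.patch_embed(x).transpose(1, 2)   # (batch, n_patches, d_model)
        encoded = self.encoder(tokens)
        return self.head(encoded.mean(dim=1))          # pool over patches, regress RR

# Usage: a batch of 32 s windows sampled at 125 Hz, channel 0 = PPG, channel 1 = ECG.
x = torch.randn(4, 2, 4000)
print(RRTransformer()(x).shape)   # -> torch.Size([4, 1])
```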
Extending the application of AI in non-invasive monitoring to ventilatory management, Park et al. (2023) evaluated a CNN-based DL model to predict weaning outcomes from ventilatory parameters obtained from ventilators during spontaneous breathing trials (SBTs) [9]. Specifically, a multi-modal CNN was developed for weaning prediction using pressure, flow, and volume time waveforms, as well as twenty-five numerical features from each breath. The model demonstrated strong predictive performance, outperforming the conventional rapid shallow breathing index (RSBI) method. Park et al. (2023) also offered visual explanations of the waveform features that influence the CNN’s predictions using gradient-weighted class activation mapping (Grad-CAM) [9]. By relying solely on ventilator-derived data, this AI-driven approach provides a non-invasive method for assessing a patient’s readiness for weaning, highlighting its potential to enhance patient care and optimize ventilator management in critical care settings.
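A hedged sketch of the multi-modal idea is shown below: a 1D convolutional branch over the pressure/flow/volume waveforms is fused with a small fully connected branch over the per-breath numerical features, ending in a binary weaning-outcome classifier. Layer sizes and the per-breath waveform length are illustrative assumptions rather than the published architecture.

```python
# Multi-modal fusion of waveform and tabular branches for weaning-outcome prediction.
import torch
import torch.nn as nn

class WeaningNet(nn.Module):
    def __init__(self, n_numeric=25, wave_channels=3):
        super().__init__()
        self.wave_branch = nn.Sequential(            # pressure, flow, volume waveforms
            nn.Conv1d(wave_channels, 16, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
        )
        self.num_branch = nn.Sequential(             # 25 per-breath numerical features
            nn.Linear(n_numeric, 32), nn.ReLU(),
        )
        self.classifier = nn.Sequential(             # fused representation -> logit
            nn.Linear(32 + 32, 32), nn.ReLU(), nn.Linear(32, 1),
        )

    def forward(self, waveforms, numeric):
        fused = torch.cat([self.wave_branch(waveforms), self.num_branch(numeric)], dim=1)
        return self.classifier(fused)                # logit for weaning success

# Usage: one SBT breath per sample, 3 waveform channels of 256 samples plus 25 scalars.
waveforms, numeric = torch.randn(8, 3, 256), torch.randn(8, 25)
print(torch.sigmoid(WeaningNet()(waveforms, numeric)).shape)  # -> torch.Size([8, 1])
```

Grad-CAM-style explanations can then be obtained by back-propagating the output logit to the last convolutional layer of the waveform branch and weighting its activation maps accordingly.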
Finally, the study by Kaviri and Vinjamuri (2024) focused on the analysis of electroencephalography (EEG) signals to support motor imagery-based neurorehabilitation after stroke [5]. Their approach combines EEG source localization (ESL), a technique that transforms multichannel EEG signals into three-dimensional representations of cortical activity, with a residual convolutional neural network (ResNetCNN). The proposed ResNetCNN architecture incorporates residual convolutional blocks with shortcut connections, which facilitate the training of deep networks and mitigate gradient degradation in architectures with many layers. This design enables the capture of complex spatial patterns of cortical activity from EEG recordings while maintaining training stability and depth. As a result, their method not only improves the accuracy of identifying post-stroke active brain regions but also offers a robust architecture for functional brain activity mapping [5]. This is particularly valuable for optimizing patient monitoring and planning personalized rehabilitation therapies based on cortical activation patterns.
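The core building element, a residual convolutional block with a shortcut connection, is sketched below. Treating the ESL output as a single-channel 2D cortical activation map and the channel sizes and number of imagery classes are assumptions made for illustration.

```python
# Minimal residual convolutional block with a shortcut connection (assumed configuration).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
        )
        # Shortcut connection: identity when shapes match, 1x1 projection otherwise.
        self.shortcut = nn.Identity() if in_ch == out_ch else nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        # Adding the shortcut keeps gradients flowing through deep stacks of blocks.
        return torch.relu(self.body(x) + self.shortcut(x))

# Usage: classify motor-imagery classes from a single-channel cortical activation map.
model = nn.Sequential(
    ResidualBlock(1, 16), ResidualBlock(16, 32),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 4),   # 4 imagery classes (assumed)
)
print(model(torch.randn(2, 1, 64, 64)).shape)   # -> torch.Size([2, 4])
```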
Thus, beyond the specific methodologies and signal types employed, the common thread across these studies lies in their technical core: AI methods deliberately tailored to concrete biomedical problems through well-founded approaches with a clear orientation toward clinical practice.
3. Conclusions
The six articles featured in this Special Issue of Bioengineering serve as compelling evidence of the transformative potential of AI in biomedical signal processing. From patient monitoring and clinical outcome prediction to the generation of synthetic data, these contributions offer a glimpse into the future of real-world clinical settings and digital medicine. These AI-based approaches are paving the way toward more precise, accessible, and human-centered healthcare.
We encourage the scientific community to explore these works in depth and continue fostering collaborative research that can help translate these technologies from the laboratory to the patient’s bedside.
Author Contributions
Conceptualization, V.B.-G. and F.V.-V.; formal analysis, V.B.-G. and F.V.-V.; investigation, V.B.-G. and F.V.-V.; writing—original draft preparation, V.B.-G. and F.V.-V.; writing—review and editing, V.B.-G. and F.V.-V.; supervision, V.B.-G. and F.V.-V. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by the projects PID2023-148895OB-I00 and CPP2022-009735, funded by MICIU/AEI/10.13039/501100011033, the FSE+, and the European Union “NextGenerationEU”/PRTR, and by “CIBER—Consorcio Centro de Investigación Biomédica en Red” (CB19/01/00012) through “Instituto de Salud Carlos III (ISCIII)”, co-funded with European Regional Development Fund.
Acknowledgments
The editors want to acknowledge the excellent work of the authors and reviewers of the manuscripts submitted to this Special Issue of Bioengineering.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| AI | Artificial intelligence |
| CNN | Convolutional neural network |
| DL | Deep learning |
| ECG | Electrocardiogram |
| EEG | Electroencephalogram |
| ESL | EEG source localization |
| Grad-CAM | Gradient-weighted class activation mapping |
| LSTM | Long short-term memory |
| ML | Machine learning |
| PPG | Photoplethysmography |
| ResNetCNN | Residual convolutional neural network |
| RGB | Red–green–blue |
| RR | Respiratory rate |
| RSBI | Rapid shallow breathing index |
| SBTs | Spontaneous breathing trials |
| SpO2 | Blood oxygen saturation |
| STMaps | Spatial–temporal maps |
References
1. Aminizadeh, S.; Heidari, A.; Dehghan, M.; Toumaj, S.; Rezaei, M.; Navimipour, N.J.; Stroppa, F.; Unal, M. Opportunities and Challenges of Artificial Intelligence and Distributed Systems to Improve the Quality of Healthcare Service. Artif. Intell. Med. 2024, 149, 102779.
2. Ahmadi, A.; Ganji, N.R. AI-Driven Medical Innovations: Transforming Healthcare through Data Intelligence. Int. J. BioLife Sci. 2023, 2, 132–142.
3. Ieracitano, C.; Zhang, X. Editorial Topical Collection: “Biomedical Imaging and Data Analytics for Disease Diagnosis and Treatment”. Bioengineering 2024, 11, 726.
4. Lian, S.; Luo, Z. Cutting-Edge Machine Learning in Biomedical Image Analysis: Editorial for Bioengineering Special Issue: “Recent Advance of Machine Learning in Biomedical Image Analysis”. Bioengineering 2024, 11, 1106.
5. Kaviri, S.M.; Vinjamuri, R. Integrating Electroencephalography Source Localization and Residual Convolutional Neural Network for Advanced Stroke Rehabilitation. Bioengineering 2024, 11, 967.
6. Li, X.; Sakevych, M.; Atkinson, G.; Metsis, V. BioDiffusion: A Versatile Diffusion Model for Biomedical Signal Synthesis. Bioengineering 2024, 11, 299.
7. Cheng, C.H.; Yuen, Z.; Chen, S.; Wong, K.L.; Chin, J.W.; Chan, T.T.; So, R.H.Y. Contactless Blood Oxygen Saturation Estimation from Facial Videos Using Deep Learning. Bioengineering 2024, 11, 251.
8. Hwang, C.S.; Kim, Y.H.; Hyun, J.K.; Kim, J.H.; Lee, S.R.; Kim, C.M.; Nam, J.W.; Kim, E.Y. Evaluation of the Photoplethysmogram-Based Deep Learning Model for Continuous Respiratory Rate Estimation in Surgical Intensive Care Unit. Bioengineering 2023, 10, 1222.
9. Park, J.E.; Kim, D.Y.; Park, J.W.; Jung, Y.J.; Lee, K.S.; Park, J.H.; Sheen, S.S.; Park, K.J.; Sunwoo, M.H.; Chung, W.Y. Development of a Machine Learning Model for Predicting Weaning Outcomes Based Solely on Continuous Ventilator Parameters during Spontaneous Breathing Trials. Bioengineering 2023, 10, 1163.
10. Zhao, Q.; Liu, F.; Song, Y.; Fan, X.; Wang, Y.; Yao, Y.; Mao, Q.; Zhao, Z. Predicting Respiratory Rate from Electrocardiogram and Photoplethysmogram Using a Transformer-Based Model. Bioengineering 2023, 10, 1024.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).