Search Results (506)

Search Parameters:
Keywords = Photoplethysmography (PPG)

14 pages, 1651 KB  
Case Report
Pulse Waveform Changes During Vasopressor Therapy Assessed Using Remote Photoplethysmography: A Case Series
by Mara Klibus, Viktorija Serova, Uldis Rubins, Zbignevs Marcinkevics, Andris Grabovskis and Olegs Sabelnikovs
J. Clin. Med. 2026, 15(3), 1118; https://doi.org/10.3390/jcm15031118 - 30 Jan 2026
Abstract
Background/Objectives: Septic shock involves severe circulatory and microcirculatory dysfunction and often requires vasopressors to maintain adequate mean arterial pressure (MAP). Conventional monitoring mainly reflects macrocirculation and may not capture changes in vascular tone or microcirculation. Remote photoplethysmography (rPPG) is a contactless optical method that analyzes peripheral pulse waveforms and may offer additional physiological insight during vasopressor therapy. The aim of this study was to assess the feasibility of rPPG for detecting pulse waveform changes associated with norepinephrine administration in septic shock. Methods: This prospective case series included three adult patients (n = 3) with septic shock admitted to the intensive care unit at Pauls Stradins Clinical University Hospital, Riga, Latvia. All patients received standard sepsis treatment, including fluid resuscitation and titrated norepinephrine to maintain MAP ≥ 65 mmHg. Continuous invasive arterial pressure monitoring was performed alongside rPPG signal acquisition from the palmar skin surface under controlled lighting. From averaged rPPG waveforms, perfusion index (PI), dicrotic notch amplitude (c-wave), and diastolic wave amplitude (d-wave) were extracted. Correlations between norepinephrine dose, MAP, and rPPG parameters were explored. Results: Increasing norepinephrine doses were associated with higher MAP and PI in all patients. Dicrotic notch and diastolic wave amplitudes decreased consistently. These changes occurred alongside macrocirculatory stabilization and are consistent with increased vascular tone and altered arterial compliance. Conclusions: rPPG demonstrated feasibility for detecting pulse waveform changes during norepinephrine therapy in septic shock; however, larger controlled studies are required for validation. Full article
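
As a rough, hypothetical illustration of the waveform features named above (perfusion index, dicrotic notch "c-wave", diastolic "d-wave"), the sketch below reads them off a single averaged rPPG beat. The landmark heuristics, thresholds, and function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.signal import argrelextrema

def waveform_features(beat):
    """Toy feature extraction from one averaged rPPG beat (systolic peak first).

    Returns a perfusion-index-style AC/DC ratio plus the amplitudes of the first
    local minimum after the systolic peak (dicrotic notch, "c-wave") and the
    following local maximum (diastolic "d-wave").
    """
    dc = np.mean(beat)                       # DC (baseline) component
    ac = np.max(beat) - np.min(beat)         # AC (pulsatile) component
    pi = 100.0 * ac / dc                     # perfusion-index-style ratio in %

    sys_idx = int(np.argmax(beat))           # systolic peak
    minima = argrelextrema(beat, np.less)[0]
    maxima = argrelextrema(beat, np.greater)[0]
    notch = next((i for i in minima if i > sys_idx), None)
    dia = next((i for i in maxima if notch is not None and i > notch), None)

    return {
        "PI_percent": pi,
        "c_wave_amp": beat[notch] - np.min(beat) if notch is not None else np.nan,
        "d_wave_amp": beat[dia] - np.min(beat) if dia is not None else np.nan,
    }

# Synthetic single-beat waveform: systolic bump, smaller diastolic bump, DC offset.
t = np.linspace(0, 1, 100)
beat = np.exp(-30 * (t - 0.2) ** 2) + 0.4 * np.exp(-30 * (t - 0.55) ** 2) + 1.0
print(waveform_features(beat))
```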

25 pages, 8220 KB  
Article
A Mobile Triage Robot for Natural Disaster Situations
by Darwin-Alexander Angamarca-Avendaño, Diego-Alexander Zhañay-Salto and Juan-Carlos Cobos-Torres
Electronics 2026, 15(3), 559; https://doi.org/10.3390/electronics15030559 - 28 Jan 2026
Abstract
This research describes the development of an autonomous robotic triage system, carried out by a student through project-based and challenge-based learning methodologies, aimed at solving real-world problems using applied technologies. The system operated in three phases: environment exploration, victim detection through computer vision supported by autonomous navigation, and remote measurement of vital signs. The system incorporated SLAM algorithms for mapping and localization, YOLOv8 pose for human detection and posture estimation, and remote photoplethysmography (rPPG) for contactless vital-sign measurement. This configuration was integrated into a mobile platform (myAGV) equipped with a robotic manipulator (myCobot 280) and tested in scenarios simulating real emergency conditions. All three triage phases defined in this case study were executed continuously and autonomously, enabling navigation in unknown environments, human detection, and accurate positioning in front of victims to measure vital signs without human intervention. Although limitations were identified in low-light environments or in cases of facial obstruction, the modular ROS-based architecture was designed to be adaptable to other mobile platforms, thereby extending its applicability to more demanding scenarios and reinforcing its value as both an educational and technological solution in emergency response contexts. Full article
(This article belongs to the Special Issue Robotics: From Technologies to Applications)

20 pages, 908 KB  
Article
Wearable ECG-PPG Deep Learning Model for Cardiac Index-Based Noninvasive Cardiac Output Estimation in Cardiac Surgery Patients
by Minwoo Kim, Min Dong Sung, Jimyeoung Jung, Sung Pil Cho, Junghwan Park, Sarah Soh, Hyun Chel Joo and Kyung Soo Chung
Sensors 2026, 26(2), 735; https://doi.org/10.3390/s26020735 - 22 Jan 2026
Abstract
Accurate cardiac output (CO) measurement is vital for hemodynamic management; however, it usually requires invasive monitoring, which limits its continuous and out-of-hospital use. Wearable sensors integrated with deep learning offer a noninvasive alternative. This study developed and validated a lightweight deep learning model using wearable electrocardiography (ECG) and photoplethysmography (PPG) signals to predict CO and examined whether cardiac index-based normalization (Cardiac Index (CI) = CO/body surface area) improves performance. Twenty-seven patients who underwent cardiac surgery and had pulmonary artery catheters were prospectively enrolled. Single-lead ECG (HiCardi+ chest patch) and finger PPG (WristOx2 3150) were recorded simultaneously and processed through an ECG–PPG fusion network with cross-modal interaction. Three models were trained: (1) CI prediction, (2) direct CO prediction, and (3) indirect CO prediction, in which CO = predicted CI × body surface area. Reference values were derived from thermodilution. The CI model achieved the best performance, and the indirect CO model showed significant reductions in error/agreement metrics (MAE/RMSE/bias; p < 0.0001), while correlation-based metrics were reported descriptively without implying statistical significance. The Pearson correlation coefficient (PCC) and percentage error (PE) for the indirect CO estimates were 0.904 and 23.75%, respectively. The indirect CO estimates met the predefined PE < 30% agreement benchmark for method comparison, although this is not a universal clinical standard. These results demonstrate that wearable ECG–PPG fusion deep learning can achieve accurate, noninvasive CO estimation and that CI-based normalization enhances model agreement with pulmonary artery catheter measurements, supporting continuous catheter-free hemodynamic monitoring. Full article
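
The cardiac index normalization and the percentage-error benchmark mentioned above reduce to simple arithmetic. The sketch below uses the Du Bois body surface area formula and made-up numbers as assumptions; the abstract does not state which BSA formula the authors used.

```python
import numpy as np

def bsa_du_bois(height_cm, weight_kg):
    """Body surface area (m^2) via the Du Bois formula (one common choice)."""
    return 0.007184 * (height_cm ** 0.725) * (weight_kg ** 0.425)

def indirect_co(predicted_ci, height_cm, weight_kg):
    """Indirect cardiac output: CO = predicted cardiac index * BSA."""
    return predicted_ci * bsa_du_bois(height_cm, weight_kg)

def percentage_error(co_pred, co_ref):
    """Critchley-style percentage error: 1.96 * SD of the bias / mean reference CO."""
    bias = np.asarray(co_pred) - np.asarray(co_ref)
    return 100.0 * 1.96 * np.std(bias) / np.mean(co_ref)

# Hypothetical paired CO values (L/min); a PE below ~30% is the usual
# method-comparison benchmark cited in the abstract.
co_ref = np.array([4.8, 5.2, 4.1, 6.0])
co_pred = np.array([4.6, 5.5, 4.3, 5.7])
print(indirect_co(3.0, 172, 70), percentage_error(co_pred, co_ref))
```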

21 pages, 14300 KB  
Article
A Lightweight Embedded PPG-Based Authentication System for Wearable Devices via Hyperdimensional Computing
by Ruijin Zhuang, Haiming Chen, Daoyong Chen and Xinyan Zhou
Algorithms 2026, 19(1), 83; https://doi.org/10.3390/a19010083 - 18 Jan 2026
Abstract
In the realm of wearable technology, achieving robust continuous authentication requires balancing high security with the strict resource constraints of embedded platforms. Conventional machine learning approaches and deep learning-based biometrics often incur high computational costs, making them unsuitable for low-power edge devices. To address this challenge, we propose H-PPG, a lightweight authentication system that integrates photoplethysmography (PPG) and inertial measurement unit (IMU) signals for continuous user verification. Using Hyperdimensional Computing (HDC), a lightweight classification framework inspired by brain-like computing, H-PPG encodes user physiological and motion data into high-dimensional hypervectors that comprehensively represent individual identity, enabling robust, efficient and lightweight authentication. An adaptive learning process is employed to iteratively refine the user’s hypervector, allowing it to progressively capture discriminative information from physiological and behavioral samples. To further enhance identity representation, a dimension regeneration mechanism is introduced to maximize the information capacity of each dimension within the hypervector, ensuring that authentication accuracy is maintained under lightweight conditions. In addition, a user-defined security level scheme and an adaptive update strategy are proposed to ensure sustained authentication performance over prolonged usage. A wrist-worn prototype was developed to evaluate the effectiveness of the proposed approach and extensive experiments involving 15 participants were conducted under real-world conditions. The experimental results demonstrate that H-PPG achieves an average authentication accuracy of 93.5%. Compared to existing methods, H-PPG offers a lightweight and hardware-efficient solution suitable for resource-constrained wearable devices, highlighting its strong potential for integration into future smart wearable ecosystems. Full article
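
As a generic illustration of hyperdimensional encoding (not H-PPG's actual encoder), the sketch below binds quantized feature channels to random role hypervectors, bundles them into a user prototype, and verifies a query by cosine similarity. The dimensionality, codebook sizes, feature counts, and acceptance threshold are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000                      # hypervector dimensionality

def random_hv():
    """Random bipolar {-1, +1} hypervector."""
    return rng.choice([-1, 1], size=D)

# Codebooks: one role vector per feature channel, one level vector per quantized value.
N_FEATURES, N_LEVELS = 16, 32
roles = [random_hv() for _ in range(N_FEATURES)]
levels = [random_hv() for _ in range(N_LEVELS)]

def encode(sample):
    """Bind each quantized feature to its role vector, then bundle by summation."""
    q = np.clip((sample * N_LEVELS).astype(int), 0, N_LEVELS - 1)
    bound = [roles[i] * levels[q[i]] for i in range(N_FEATURES)]  # binding = elementwise product
    return np.sign(np.sum(bound, axis=0))                         # bundling = majority sign

def similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Enrollment: bundle several training samples into one user prototype hypervector.
train = rng.random((20, N_FEATURES))          # placeholder normalized PPG/IMU features
user_hv = np.sign(np.sum([encode(s) for s in train], axis=0))

# Verification: accept if the query hypervector is similar enough to the prototype.
query = rng.random(N_FEATURES)
print(similarity(encode(query), user_hv) > 0.1)   # threshold would follow the security level
```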

20 pages, 5073 KB  
Article
SAWGAN-BDCMA: A Self-Attention Wasserstein GAN and Bidirectional Cross-Modal Attention Framework for Multimodal Emotion Recognition
by Ning Zhang, Shiwei Su, Haozhe Zhang, Hantong Yang, Runfang Hao and Kun Yang
Sensors 2026, 26(2), 582; https://doi.org/10.3390/s26020582 - 15 Jan 2026
Abstract
Emotion recognition from physiological signals is pivotal for advancing human–computer interaction, yet unimodal pipelines frequently underperform due to limited information, constrained data diversity, and suboptimal cross-modal fusion. Addressing these limitations, the Self-Attention Wasserstein Generative Adversarial Network with Bidirectional Cross-Modal Attention (SAWGAN-BDCMA) framework is proposed. This framework reorganizes the learning process around three complementary components: (1) a Self-Attention Wasserstein GAN (SAWGAN) that synthesizes high-quality Electroencephalography (EEG) and Photoplethysmography (PPG) to expand diversity and alleviate distributional imbalance; (2) a dual-branch architecture that distills discriminative spatiotemporal representations within each modality; and (3) a Bidirectional Cross-Modal Attention (BDCMA) mechanism that enables deep two-way interaction and adaptive weighting for robust fusion. Evaluated on the DEAP and ECSMP datasets, SAWGAN-BDCMA significantly outperforms multiple contemporary methods, achieving 94.25% accuracy for binary and 87.93% for quaternary classification on DEAP. Furthermore, it attains 97.49% accuracy for six-class emotion recognition on the ECSMP dataset. Compared with state-of-the-art multimodal approaches, the proposed framework achieves an accuracy improvement ranging from 0.57% to 14.01% across various tasks. These findings offer a robust solution to the long-standing challenges of data scarcity and modal imbalance, providing a profound theoretical and technical foundation for fine-grained emotion recognition and intelligent human–computer collaboration. Full article
(This article belongs to the Special Issue Advanced Signal Processing for Affective Computing)
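
The bidirectional cross-modal attention idea can be sketched with off-the-shelf attention layers, as below: EEG features attend to PPG features and vice versa before fusion. The layer sizes, pooling, and concatenation shown are assumptions for illustration, not the SAWGAN-BDCMA architecture itself.

```python
import torch
import torch.nn as nn

class BidirectionalCrossModalAttention(nn.Module):
    """Minimal two-way cross-attention: each modality queries the other, and the
    two attended streams are pooled and concatenated for downstream classification."""

    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.eeg_to_ppg = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ppg_to_eeg = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, eeg, ppg):
        # eeg, ppg: (batch, seq_len, dim) feature sequences from each modality branch
        eeg_att, _ = self.eeg_to_ppg(query=eeg, key=ppg, value=ppg)
        ppg_att, _ = self.ppg_to_eeg(query=ppg, key=eeg, value=eeg)
        return torch.cat([eeg_att.mean(dim=1), ppg_att.mean(dim=1)], dim=-1)

fusion = BidirectionalCrossModalAttention()
eeg = torch.randn(8, 60, 128)   # dummy EEG branch output
ppg = torch.randn(8, 60, 128)   # dummy PPG branch output
print(fusion(eeg, ppg).shape)   # torch.Size([8, 256])
```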

15 pages, 3599 KB  
Article
High-Fidelity rPPG Waveform Reconstruction from Palm Videos Using GANs
by Tao Li and Yuliang Liu
Sensors 2026, 26(2), 563; https://doi.org/10.3390/s26020563 - 14 Jan 2026
Abstract
Remote photoplethysmography (rPPG) enables non-contact acquisition of human physiological parameters using ordinary cameras, and has been widely applied in medical monitoring, human–computer interaction, and health management. However, most existing studies focus on estimating specific physiological metrics, such as heart rate and heart rate variability, while paying insufficient attention to reconstructing the underlying rPPG waveform. In addition, publicly available datasets typically record facial videos accompanied by fingertip PPG signals as reference labels. Since fingertip PPG waveforms differ substantially from the true photoplethysmography (PPG) signals obtained from the face, deep learning models trained on such datasets often struggle to recover high-quality rPPG waveforms. To address this issue, we collected a new dataset consisting of palm-region videos paired with wrist-based PPG signals as reference labels, and experimentally validated its effectiveness for training neural network models aimed at rPPG waveform reconstruction. Furthermore, we propose a generative adversarial network (GAN)-based pulse-wave synthesis framework that produces high-quality rPPG waveforms by denoising the mean green-channel signal. By incorporating time-domain peak-aware loss, frequency-domain loss, and adversarial loss, our method achieves promising performance, with an RMSE (Root Mean Square Error) of 0.102, an MAPE (Mean Absolute Percentage Error) of 0.028, a Pearson correlation of 0.987, and a cosine similarity of 0.989. These results demonstrate the capability of the proposed approach to reconstruct high-fidelity rPPG waveforms with improved morphological accuracy compared to noisy raw rPPG signals, rather than directly validating health monitoring performance. This study presents a high-quality rPPG waveform reconstruction approach from both data and model perspectives, providing a reliable foundation for subsequent physiological signal analysis, waveform-based studies, and potential health-related applications. Full article
(This article belongs to the Special Issue Systems for Contactless Monitoring of Vital Signs)
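
The four waveform-fidelity metrics reported above (RMSE, MAPE, Pearson correlation, cosine similarity) are straightforward to compute directly. The sketch below, with a toy sine standing in for a pulse waveform, is illustrative and not the authors' evaluation code.

```python
import numpy as np

def waveform_metrics(pred, ref):
    """RMSE, MAPE, Pearson correlation, and cosine similarity between a
    reconstructed rPPG waveform and its reference PPG."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    rmse = np.sqrt(np.mean((pred - ref) ** 2))
    mape = np.mean(np.abs((pred - ref) / ref))            # assumes ref has no zeros
    pearson = np.corrcoef(pred, ref)[0, 1]
    cosine = np.dot(pred, ref) / (np.linalg.norm(pred) * np.linalg.norm(ref))
    return rmse, mape, pearson, cosine

t = np.linspace(0, 10, 2500)
ref = np.sin(2 * np.pi * 1.2 * t) + 2.0        # toy 1.2 Hz pulse, offset to avoid zeros
pred = ref + 0.05 * np.random.default_rng(1).standard_normal(t.size)
print(waveform_metrics(pred, ref))
```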

17 pages, 3529 KB  
Article
Study on Multimodal Sensor Fusion for Heart Rate Estimation Using BCG and PPG Signals
by Jisheng Xing, Xin Fang, Jing Bai, Luyao Cui, Feng Zhang and Yu Xu
Sensors 2026, 26(2), 548; https://doi.org/10.3390/s26020548 - 14 Jan 2026
Abstract
Continuous heart rate monitoring is crucial for early cardiovascular disease detection. To overcome the discomfort and limitations of ECG in home settings, we propose a multimodal temporal fusion network (MM-TFNet) that integrates ballistocardiography (BCG) and photoplethysmography (PPG) signals. The network extracts temporal features from BCG and PPG signals through temporal convolutional networks (TCNs) and bidirectional long short-term memory networks (BiLSTMs), respectively, achieving cross-modal dynamic fusion at the feature level. First, bimodal features are projected into a unified dimensional space through fully connected layers. Subsequently, a cross-modal attention weight matrix is constructed for adaptive learning of the complementary correlation between BCG mechanical vibration and PPG volumetric flow features. Combined with dynamic focusing on key heartbeat waveforms through multi-head self-attention (MHSA), the model’s robustness under dynamic activity states is significantly enhanced. Experimental validation using a publicly available BCG-PPG-ECG simultaneous acquisition dataset comprising 40 subjects demonstrates that the model achieves excellent performance with a mean absolute error (MAE) of 0.88 BPM in heart rate prediction tasks, outperforming current mainstream deep learning methods. This study provides theoretical foundations and engineering guidance for developing contactless, low-power, edge-deployable home health monitoring systems, demonstrating the broad application potential of multimodal fusion methods in complex physiological signal analysis. Full article
(This article belongs to the Section Biomedical Sensors)
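
The cross-modal attention weight matrix described above can be sketched in a few lines of NumPy: BCG time steps query the PPG feature sequence, and the resulting weights mix PPG features into the BCG stream. The shared dimensionality, sequence lengths, and variable names are assumptions, not MM-TFNet itself.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

def cross_modal_fusion(bcg_feat, ppg_feat):
    """Scaled dot-product cross-attention from BCG queries onto PPG keys/values."""
    d = bcg_feat.shape[-1]
    weights = softmax(bcg_feat @ ppg_feat.T / np.sqrt(d))   # (T_bcg, T_ppg) attention matrix
    return weights @ ppg_feat                               # PPG features re-weighted per BCG step

# Dummy per-time-step embeddings from hypothetical TCN (BCG) and BiLSTM (PPG) branches,
# already projected into a shared 64-dimensional space.
rng = np.random.default_rng(0)
bcg_feat = rng.standard_normal((100, 64))
ppg_feat = rng.standard_normal((100, 64))
fused = np.concatenate([bcg_feat, cross_modal_fusion(bcg_feat, ppg_feat)], axis=-1)
print(fused.shape)   # (100, 128): BCG features concatenated with attended PPG features
```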

17 pages, 1870 KB  
Article
Non-Invasive Blood Glucose Monitoring via Multimodal Features Fusion with Interpretable Machine Learning
by Ying Shan and Junsheng Yu
Appl. Sci. 2026, 16(2), 790; https://doi.org/10.3390/app16020790 - 13 Jan 2026
Abstract
This study aimed to develop a non-invasive blood glucose estimation method by integrating wearable multimodal signals, including photoplethysmography (PPG), electrodermal activity (EDA), and skin temperature (ST), with food log–derived nutritional features, and to validate its clinical reliability. We analyzed data from 16 adults who underwent continuous glucose monitoring (CGM) while multimodal physiological signals were collected over 8–10 consecutive days, yielding more than 20,000 paired samples. Features from food logs and physiological signals were extracted, followed by feature selection using Boruta and minimum Redundancy Maximum Relevance (mRMR). Five machine learning models were trained and evaluated using five-fold cross-validation. Food log features alone demonstrated stronger predictive power than unimodal physiological signals. The fusion of nutritional, physiological, and temporal features achieved the best accuracy using LightGBM, reducing the RMSE to 12.9 mg/dL, with a MARD of 7.9%, an MAE of 8.82 mg/dL, and an R2 of 0.69. SHapley Additive exPlanations (SHAP) analysis revealed that 24-h carbohydrate and sugar intake, time since last meal, and short-term EDA features were the most influential predictors. By integrating multimodal wearable and dietary information, the proposed framework significantly enhances non-invasive glucose estimation. The interpretable LightGBM model demonstrates promising clinical utility for continuous monitoring and early dysglycemia management. Full article
(This article belongs to the Special Issue AI-Based Biomedical Signal Processing—2nd Edition)
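
A minimal sketch of the MARD metric and a five-fold LightGBM evaluation loop, using placeholder random features in place of the fused nutritional/physiological/temporal features; the hyperparameters and the plain KFold split are assumptions, and the real pipeline additionally includes Boruta/mRMR feature selection.

```python
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.model_selection import KFold

def mard(pred, ref):
    """Mean absolute relative difference (%), the glucose-accuracy metric used above."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    return 100.0 * np.mean(np.abs(pred - ref) / ref)

# Placeholder data: X would be the fused feature matrix, y the CGM glucose in mg/dL.
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 40))
y = 120 + 25 * rng.standard_normal(2000)

scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LGBMRegressor(n_estimators=300, learning_rate=0.05)
    model.fit(X[train_idx], y[train_idx])
    scores.append(mard(model.predict(X[test_idx]), y[test_idx]))
print(np.mean(scores))   # average MARD across the five folds
```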

28 pages, 4481 KB  
Article
Smart Steering Wheel Prototype for In-Vehicle Vital Sign Monitoring
by Branko Babusiak, Maros Smondrk, Lubomir Trpis, Tomas Gajdosik, Rudolf Madaj and Igor Gajdac
Sensors 2026, 26(2), 477; https://doi.org/10.3390/s26020477 - 11 Jan 2026
Abstract
Drowsy driving and sudden medical emergencies are major contributors to traffic accidents, necessitating continuous, non-intrusive driver monitoring. Since current technologies often struggle to balance accuracy with practicality, this study presents the design, fabrication, and validation of a smart steering wheel prototype. The device integrates dry-contact electrocardiogram (ECG), photoplethysmography (PPG), and inertial sensors to facilitate multimodal physiological monitoring. The system underwent a two-stage evaluation involving a single participant: laboratory validation benchmarking acquired signals against medical-grade equipment, followed by real-world testing in a custom electric research vehicle to assess performance under dynamic conditions. Laboratory results demonstrated that the prototype captured high-quality signals suitable for reliable heart rate variability analysis. Furthermore, on-road evaluation confirmed the system’s operational functionality; despite increased noise from motion artifacts, the ECG signal remained sufficiently robust for continuous R-peak detection. These findings confirm that the multimodal smart steering wheel is a feasible solution for unobtrusive driver monitoring. This integrated platform provides a solid foundation for developing sophisticated machine-learning algorithms to enhance road safety by predicting fatigue and detecting adverse health events. Full article
(This article belongs to the Section Electronic Sensors)
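
The continuous R-peak detection mentioned above can be sketched with a simple peak picker; the fixed fraction-of-maximum threshold, 300 ms refractory distance, and toy signal are assumptions, whereas a driver-monitoring pipeline would use adaptive thresholding on the noisy on-road ECG.

```python
import numpy as np
from scipy.signal import find_peaks

def r_peaks_and_hr(ecg, fs):
    """Locate R-peak-like maxima, then derive RR intervals and mean heart rate."""
    height = 0.6 * np.max(ecg)                       # crude amplitude threshold
    peaks, _ = find_peaks(ecg, height=height, distance=int(0.3 * fs))
    rr = np.diff(peaks) / fs                         # RR intervals in seconds
    return peaks, rr, 60.0 / np.mean(rr)             # indices, RR series, mean HR (bpm)

fs = 500
t = np.arange(0, 30, 1 / fs)
toy_ecg = np.maximum(np.sin(2 * np.pi * 1.2 * t), 0) ** 40   # spiky beats at ~72 bpm
print(r_peaks_and_hr(toy_ecg, fs)[2])
```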

15 pages, 16035 KB  
Article
Preliminary Study of Real-Time Detection of Chicken Embryo Viability Using Photoplethysmography
by Zeyu Liu, Zhuwen Xu, Yin Zhang, Hui Shi and Shengzhao Zhang
Sensors 2026, 26(2), 472; https://doi.org/10.3390/s26020472 - 10 Jan 2026
Abstract
Currently, in influenza vaccine production via the chicken embryo splitting method, embryo viability detection is a pivotal quality control step—non-viable embryos are prone to microbial contamination, directly endangering the vaccine batch quality. However, the predominant manual candling method suffers from unstable accuracy and occupational visual health risks. To address this challenge, we developed a novel real-time embryo viability detection system based on photoplethysmography (PPG) technology, comprising a hardware circuit for chicken embryo PPG signal collection and customized software for real-time signal filtering and time–frequency-domain analysis. Based on this system, we conducted three pivotal experiments: (1) impact of the source–detector spatial arrangement on PPG signal acquisition, (2) viable/non-viable embryo discrimination, and (3) embryo PPG signal detection performance for days 10–14. The experimental results show that within the sample size (15 viable, 5 non-viable embryos), the system achieved a 100% discrimination accuracy; meanwhile, it realized 100% successful multi-day (days 10–14) PPG signal capture for the 15 viable embryos, with consistent performance across the developmental stages. This PPG-based system overcomes limitations of traditional and existing automated methods, provides a non-invasive alternative for embryo viability detection, and presents significant implications for standardizing vaccine production quality control and advancing optical biosensing for biological viability detection. Full article
(This article belongs to the Section Biomedical Sensors)
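
A crude illustration of the filtering and frequency-domain step described above: band-pass the raw PPG around plausible embryonic heart rates and test whether a dominant spectral peak stands out from the background. The band limits, SNR threshold, and synthetic signal are assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def embryo_viable(ppg, fs, band=(1.0, 5.0), snr_threshold=4.0):
    """Return (viable?, dominant frequency in Hz) from a raw embryo PPG segment."""
    b, a = butter(3, band, btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, ppg - np.mean(ppg))
    spectrum = np.abs(np.fft.rfft(filtered)) ** 2
    freqs = np.fft.rfftfreq(filtered.size, 1 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    peak = np.max(spectrum[in_band])
    background = np.median(spectrum[in_band])
    return peak / background > snr_threshold, freqs[in_band][np.argmax(spectrum[in_band])]

fs = 100
t = np.arange(0, 20, 1 / fs)
viable_ppg = 0.02 * np.sin(2 * np.pi * 3.8 * t) \
    + 0.01 * np.random.default_rng(2).standard_normal(t.size)
print(embryo_viable(viable_ppg, fs))   # (True, 3.8): a clear ~228 bpm cardiac component
```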

21 pages, 2477 KB  
Article
Non-Invasive Blood Pressure Estimation Enhanced by Capillary Refill Time Modulation of PPG Signals
by Qianheng Yin, Yixiong Chen, Lan Lin, Dongdong Wang and Shen Sun
Sensors 2026, 26(1), 345; https://doi.org/10.3390/s26010345 - 5 Jan 2026
Abstract
This study evaluates the impact of capillary refill time (CRT) modulation on photoplethysmography (PPG) signals for improved non-invasive continuous blood pressure (CBP) estimation. Data from 21 healthy participants were collected, applying a standardized 9 N pressure for 15 s to induce CRT during 6-min sessions. PPG signals were segmented into 252 paired 30-s intervals (CRT-modulated and standard). Three machine learning models—ResNetCNN, LSTM, and Transformer—were validated using leave-one-subject-out (LOSO) and non-LOSO methods. CRT modulation significantly enhanced accuracy across all models. ResNetCNN showed substantial improvements, reducing mean absolute error (MAE) by up to 35.6% and mean absolute percentage error (MAPE) by up to 40.6%. LSTM and Transformer models also achieved notable accuracy gains. All models met the Association for the Advancement of Medical Instrumentation (AAMI) criteria (mean error < 5 mmHg; standard deviation < 8 mmHg). The findings suggest CRT modulation’s strong potential to improve wearable CBP monitoring, especially in resource-limited settings. Full article
(This article belongs to the Section Wearables)
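
The AAMI criteria quoted above (mean error within ±5 mmHg and error standard deviation below 8 mmHg) together with MAE and MAPE can be checked directly from paired predictions and references; the example values and function name below are hypothetical.

```python
import numpy as np

def bp_error_report(pred, ref):
    """Error metrics for blood pressure estimates plus a check against the AAMI limits."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    err = pred - ref
    return {
        "MAE_mmHg": np.mean(np.abs(err)),
        "MAPE_percent": 100.0 * np.mean(np.abs(err) / ref),
        "mean_error_mmHg": np.mean(err),
        "sd_error_mmHg": np.std(err, ddof=1),
        "meets_AAMI": abs(np.mean(err)) < 5.0 and np.std(err, ddof=1) < 8.0,
    }

ref = np.array([118, 122, 131, 109, 125], float)     # reference systolic BP (mmHg)
pred = np.array([121, 119, 128, 112, 127], float)    # hypothetical model output
print(bp_error_report(pred, ref))
```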

15 pages, 2133 KB  
Article
Impact of Helicopter Vibrations on In-Ear PPG Monitoring for Vital Signs—Mountain Rescue Technology Study (MoReTech)
by Aaron Benkert, Jakob Bludau, Lukas Boborzi, Stephan Prueckner and Roman Schniepp
Sensors 2026, 26(1), 324; https://doi.org/10.3390/s26010324 - 4 Jan 2026
Abstract
Pulse oximeters are widely used in prehospital patient care to evaluate the cardiorespiratory status and monitor basic vital signs, such as pulse rate (PR) and oxygen saturation (SpO2). In many prehospital situations, air transport of the patient by helicopter is necessary. Conventional pulse oximeters, mostly used on the patient’s finger, are prone to motion artifacts during transportation. Therefore, this study aims to determine whether simulated helicopter vibration has an impact on the photoplethysmogram (PPG) derived from an in-ear sensor at the external ear canal and whether the vibration influences the calculation of the vital signs PR and SpO2. The in-ear PPG signals of 17 participants were measured at rest and under exposure to vibration generated by a helicopter simulator. Several signal quality indicators (SQI), including perfusion index, skewness, entropy, kurtosis, omega, quality index, and valid pulse detection, were extracted from the in-ear PPG recordings during rest and vibration. An intra-subject comparison was performed to evaluate signal quality changes under exposure to vibration. The analysis revealed no significant difference in any SQI between vibration and rest (all p > 0.05). Furthermore, the vital signs PR and SpO2 calculated using the in-ear PPG signal were compared to reference measurements by a clinical monitoring system (ECG and SpO2 finger sensor). The results showed substantial agreement for PR (CCCrest = 0.96; CCCvibration = 0.96) and poor agreement for SpO2 (CCCrest = 0.41; CCCvibration = 0.19). The results of our study indicate that simulated helicopter vibration had no significant impact on the SQIs, and the calculated vital signs PR and SpO2 did not differ between rest and vibration conditions. Full article
(This article belongs to the Special Issue Novel Optical Sensors for Biomedical Applications—2nd Edition)
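
Two of the quantities above, a few basic SQIs and Lin's concordance correlation coefficient (CCC) used to compare in-ear values against the clinical reference, can be computed as follows; the example pulse-rate pairs are hypothetical.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def signal_quality_indices(ppg):
    """A subset of the SQIs named above: perfusion-style AC/DC ratio, skewness, kurtosis."""
    ac = np.max(ppg) - np.min(ppg)
    dc = np.mean(ppg)
    return {"perfusion": 100.0 * ac / dc, "skewness": skew(ppg), "kurtosis": kurtosis(ppg)}

def ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

pr_inear = np.array([62, 71, 80, 95, 58], float)       # hypothetical in-ear PR values
pr_reference = np.array([60, 72, 82, 93, 59], float)   # hypothetical monitor reference
print(ccc(pr_inear, pr_reference))                     # values near 1 indicate agreement
```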

17 pages, 291 KB  
Article
A Unified Benchmarking Framework for Classical Machine Learning Based Heart Rate Estimation from RGB and NIR rPPG
by Sahar Qaadan, Ghassan Al Jayyousi and Adam Alkhalaileh
Electronics 2026, 15(1), 218; https://doi.org/10.3390/electronics15010218 - 2 Jan 2026
Abstract
This work presents a unified benchmarking framework for evaluating classical machine-learning–based heart-rate estimation from remote photoplethysmography (rPPG) across both RGB and near-infrared (NIR) modalities. Despite extensive research on algorithmic rPPG methods, their relative robustness across datasets, illumination conditions, and sensor types remains inconsistently reported. To address this gap, we standardize ROI extraction, signal preprocessing, rPPG computation, handcrafted feature generation, and label formation across four publicly available datasets: UBFC-rPPG Part 1, UBFC-rPPG Part 2, VicarPPG-2, and IMVIA-NIR. We benchmark five rPPG extraction methods (Green, POS, CHROM, PBV, PCA/ICA) combined with four classical regressors using MAE, RMSE, and R2, complemented by permutation feature importance for interpretability. Results show that CHROM is consistently the most reliable algorithm across all RGB datasets, providing the lowest error and highest stability, particularly when paired with tree-based models. For NIR recordings, PCA with spatial patch decomposition substantially outperforms ICA, highlighting the importance of spatial redundancy when color cues are absent. While handcrafted features and classical regressors offer interpretable baselines, their generalization is limited by small-sample datasets and the absence of temporal modeling. The proposed pipeline establishes robust cross-dataset baselines and offers a standardized foundation for future deep-learning architectures, hybrid algorithmic–learned models, and multimodal sensor-fusion approaches in remote physiological monitoring. Full article
(This article belongs to the Special Issue Image Processing and Analysis)
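
For reference, the CHROM projection that the benchmark finds most reliable on RGB datasets can be sketched as follows from spatially averaged ROI colour traces; the filter order, band limits, and toy input below are assumptions rather than the benchmark's exact preprocessing.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def chrom_rppg(rgb_traces, fs, band=(0.7, 4.0)):
    """CHROM-style pulse extraction from mean ROI colour traces.

    rgb_traces : array of shape (T, 3) with per-frame mean R, G, B values.
    Returns the pulse signal and the dominant heart rate in bpm (band ~42-240 bpm).
    """
    rgb_n = rgb_traces / np.mean(rgb_traces, axis=0)       # temporal normalization per channel
    r, g, b_ch = rgb_n[:, 0], rgb_n[:, 1], rgb_n[:, 2]
    xs = 3 * r - 2 * g                                     # chrominance projections
    ys = 1.5 * r + g - 1.5 * b_ch
    b_f, a_f = butter(3, band, btype="bandpass", fs=fs)
    xf, yf = filtfilt(b_f, a_f, xs), filtfilt(b_f, a_f, ys)
    pulse = xf - (np.std(xf) / np.std(yf)) * yf            # alpha-weighted combination
    freqs = np.fft.rfftfreq(pulse.size, 1 / fs)
    spectrum = np.abs(np.fft.rfft(pulse * np.hanning(pulse.size)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return pulse, 60.0 * freqs[mask][np.argmax(spectrum[mask])]

fs = 30
t = np.arange(0, 20, 1 / fs)
rgb = 100 + np.outer(np.sin(2 * np.pi * 1.25 * t), [0.3, 0.6, 0.2])   # toy 75 bpm colour variation
print(chrom_rppg(rgb, fs)[1])
```
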
21 pages, 1183 KB  
Article
LLM-Assisted Explainable Daily Stress Recognition: Physiologically Grounded Threshold Rules from PPG Features
by Yekta Said Can
Electronics 2026, 15(1), 201; https://doi.org/10.3390/electronics15010201 - 1 Jan 2026
Abstract
Stress has become one of the most pervasive health challenges in modern societies, contributing to cardiovascular, cognitive, and emotional disorders that degrade overall well-being and productivity. Continuous monitoring of stress in everyday settings is thus critical for preventive healthcare. Recent advances in wearable sensing technologies, particularly photoplethysmography (PPG)-based devices, have enabled unobtrusive measurement of physiological signals linked to stress. However, the analysis of such data increasingly relies on deep learning models whose complex and non-transparent decision mechanisms limit clinical interpretability and user trust. To address this gap, this study introduces a novel LLM-assisted explainable framework that combines data-driven analysis of photoplethysmography (PPG) features with physiological reasoning. First, handcrafted cardiac variability features such as Root Mean Square of Successive Differences (RMSSD), high-frequency (HF) power, and the percentage of successive NN intervals differing by more than 50 ms (pNN50) are extracted from wearable PPG signals collected in daily conditions. After algorithmic threshold selection via ROC–Youden analysis, an LLM is used solely for physiological interpretation and literature-based justification of the resulting rules. The resulting transparent rule set achieves approximately 75% binary accuracy, rivaling CNN, LSTM, Transformer, and traditional ML baselines, while maintaining full interpretability and physiological validity. This work demonstrates that LLMs can function as scientific reasoning companions, bridging raw biosignal analytics with explainable, evidence-based models—marking a new step toward trustworthy affective computing. Full article
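
A minimal sketch of the RMSSD/pNN50 feature extraction and the ROC–Youden threshold selection step described above; the simulated feature distributions, the sign convention (lower RMSSD scored as more stressed), and the function names are assumptions for illustration.

```python
import numpy as np
from sklearn.metrics import roc_curve

def hrv_features(rr_ms):
    """RMSSD and pNN50 from NN/RR intervals given in milliseconds."""
    diffs = np.diff(np.asarray(rr_ms, float))
    rmssd = np.sqrt(np.mean(diffs ** 2))
    pnn50 = 100.0 * np.mean(np.abs(diffs) > 50.0)
    return rmssd, pnn50

def youden_threshold(scores, labels):
    """Cut-off maximizing Youden's J = sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(labels, scores)
    return thresholds[np.argmax(tpr - fpr)]

# Toy example: lower RMSSD tends to indicate stress, so threshold on -RMSSD.
rng = np.random.default_rng(3)
rmssd_rest = rng.normal(45, 10, 200)      # hypothetical rest windows
rmssd_stress = rng.normal(25, 10, 200)    # hypothetical stress windows
scores = -np.concatenate([rmssd_rest, rmssd_stress])
labels = np.concatenate([np.zeros(200), np.ones(200)])
print(-youden_threshold(scores, labels))  # RMSSD cut-off (ms) separating the two states
```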

19 pages, 1534 KB  
Article
A Deep Learning Model That Combines ResNet and Transformer Architectures for Real-Time Blood Glucose Measurement Using PPG Signals
by Ting-Hong Chen, Lei Wang, Qian-Xun Hong and Meng-Ting Wu
Bioengineering 2026, 13(1), 49; https://doi.org/10.3390/bioengineering13010049 - 31 Dec 2025
Abstract
Recent advances in wearable devices and physiological signal monitoring technologies have motivated research into non-invasive glucose estimation for diabetes management. However, existing studies are often limited by small numbers of subjects and rarely address individual differences. Physiological signals vary considerably between individuals, which affects the reliability of accuracy measurements, and when training and test data are drawn from the same subjects, the reported results are more optimistic than real-world performance. This study compares these two evaluation scenarios (testing on subjects seen versus unseen during training) to determine whether the proposed training method generalizes better. The publicly available MIMIC-III dataset, which contains 700,000 data points from 10,000 subjects, is used to build a more generalized model. The model architecture combines a ResNet CNN with a Transformer block, and signal quality is graded during preprocessing so that less-interfered signals are selected for training. This preprocessing allows the model to extract useful features without being degraded by noise and anomalous data, improving both the training results and the generalization capability. The model predicts blood glucose values from 70 to 250 across 180 classes, using mean absolute relative difference (MARD) as the evaluation metric and a Clarke error grid (CEG) to determine a reasonable error tolerance. For personalized cases (data from the test individual included during training), the MARD is 11.69%, and Clarke error grid Zone A (representing no clinical risk) reaches 82.7%. Non-personalized cases (test subjects not included in the training samples), evaluated on 60,000 unseen data points, yield a MARD of 15.16% and a Zone A of 75.4%. Across multiple testing runs, the proportion of predictions falling within Clarke error grid zones A and B consistently approached 100%. The small performance gap suggests that the proposed method can improve subject-independent estimation; however, further validation in broader populations is required. The primary objective of this study is therefore to improve subject-independent, non-personalized PPG-based glucose estimation and reduce the performance gap between personalized and non-personalized measurements. Full article
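
MARD and a simplified Clarke error grid Zone A check can be sketched as below. The Zone A rule shown is the common 20%/70 mg/dL simplification rather than a full grid implementation, and the example values are hypothetical.

```python
import numpy as np

def mard(pred, ref):
    """Mean absolute relative difference (%) between predicted and reference glucose."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    return 100.0 * np.mean(np.abs(pred - ref) / ref)

def clarke_zone_a(pred, ref):
    """Fraction of points in Clarke error grid Zone A under a common simplification:
    a prediction is in Zone A if it is within 20% of the reference, or if both values
    are below 70 mg/dL. The full grid adds boundary rules for zones B-E."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    in_a = (np.abs(pred - ref) <= 0.2 * ref) | ((pred < 70) & (ref < 70))
    return float(np.mean(in_a))

ref = np.array([90, 150, 210, 65, 180], float)    # reference glucose (mg/dL)
pred = np.array([98, 140, 196, 72, 205], float)   # hypothetical model predictions
print(mard(pred, ref), clarke_zone_a(pred, ref))
```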