Search Results (287)

Search Parameters:
Keywords = EEG sensors

26 pages, 663 KiB  
Article
Multi-Scale Temporal Fusion Network for Real-Time Multimodal Emotion Recognition in IoT Environments
by Sungwook Yoon and Byungmun Kim
Sensors 2025, 25(16), 5066; https://doi.org/10.3390/s25165066 - 14 Aug 2025
Abstract
This paper introduces EmotionTFN (Emotion-Multi-Scale Temporal Fusion Network), a novel hierarchical temporal fusion architecture that addresses key challenges in IoT emotion recognition by processing diverse sensor data while maintaining accuracy across multiple temporal scales. The architecture integrates physiological signals (EEG, PPG, and GSR), visual, and audio data using hierarchical temporal attention across short-term (0.5–2 s), medium-term (2–10 s), and long-term (10–60 s) windows. Edge computing optimizations, including model compression, quantization, and adaptive sampling, enable deployment on resource-constrained devices. Extensive experiments on MELD, DEAP, and G-REx datasets demonstrate 94.2% accuracy on discrete emotion classification and 0.087 mean absolute error on dimensional prediction, outperforming the best baseline (87.4%). The system maintains sub-200 ms latency on IoT hardware while achieving a 40% improvement in energy efficiency. Real-world deployment validation over four weeks achieved 97.2% uptime and user satisfaction scores of 4.1/5.0 while ensuring privacy through local processing. Full article
(This article belongs to the Section Internet of Things)
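As a rough illustration of the multi-scale windowing idea described in the abstract, the sketch below extracts simple features from short-, medium-, and long-term windows of a single physiological channel and concatenates them. It is not the authors' EmotionTFN implementation; the sampling rate, feature set, and fusion step are assumptions, and only the window ranges come from the abstract.

```python
import numpy as np

FS = 128  # assumed sampling rate (Hz); not specified in the abstract

def window_features(signal, seconds):
    """Mean/std/energy features over the most recent `seconds` of data."""
    n = int(seconds * FS)
    segment = signal[-n:]
    return np.array([segment.mean(), segment.std(), np.square(segment).mean()])

def multi_scale_features(signal):
    """Concatenate features from short-, medium-, and long-term windows
    (0.5-2 s, 2-10 s, 10-60 s in the paper); one representative length per scale."""
    scales_s = [2.0, 10.0, 60.0]
    feats = [window_features(signal, s) for s in scales_s]
    return np.concatenate(feats)          # a learned attention would weight these

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ppg = rng.standard_normal(FS * 60)    # 60 s of synthetic single-channel data
    print(multi_scale_features(ppg).shape)  # (9,)
```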

19 pages, 487 KiB  
Review
Smart Clothing and Medical Imaging Innovations for Real-Time Monitoring and Early Detection of Stroke: Bridging Technology and Patient Care
by David Sipos, Kata Vészi, Bence Bogár, Dániel Pető, Gábor Füredi, József Betlehem and Attila András Pandur
Diagnostics 2025, 15(15), 1970; https://doi.org/10.3390/diagnostics15151970 - 6 Aug 2025
Viewed by 420
Abstract
Stroke is a significant global health concern characterized by the abrupt disruption of cerebral blood flow, leading to neurological impairment. Accurate and timely diagnosis—enabled by imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI)—is essential for differentiating stroke types and initiating interventions like thrombolysis, thrombectomy, or surgical management. In parallel, recent advancements in wearable technology, particularly smart clothing, offer new opportunities for stroke prevention, real-time monitoring, and rehabilitation. These garments integrate various sensors, including electrocardiogram (ECG) electrodes, electroencephalography (EEG) caps, electromyography (EMG) sensors, and motion or pressure sensors, to continuously track physiological and functional parameters. For example, ECG shirts monitor cardiac rhythm to detect atrial fibrillation, smart socks assess gait asymmetry for early mobility decline, and EEG caps provide data on neurocognitive recovery during rehabilitation. These technologies support personalized care across the stroke continuum, from early risk detection and acute event monitoring to long-term recovery. Integration with AI-driven analytics further enhances diagnostic accuracy and therapy optimization. This narrative review explores the application of smart clothing in conjunction with traditional imaging to improve stroke management and patient outcomes through a more proactive, connected, and patient-centered approach. Full article

23 pages, 85184 KiB  
Article
MB-MSTFNet: A Multi-Band Spatio-Temporal Attention Network for EEG Sensor-Based Emotion Recognition
by Cheng Fang, Sitong Liu and Bing Gao
Sensors 2025, 25(15), 4819; https://doi.org/10.3390/s25154819 - 5 Aug 2025
Viewed by 406
Abstract
Emotion analysis based on electroencephalogram (EEG) sensors is pivotal for human–machine interaction yet faces key challenges in spatio-temporal feature fusion and cross-band and brain-region integration from multi-channel sensor-derived signals. This paper proposes MB-MSTFNet, a novel framework for EEG emotion recognition. The model constructs a 3D tensor to encode band–space–time correlations of sensor data, explicitly modeling frequency-domain dynamics and spatial distributions of EEG sensors across brain regions. A multi-scale CNN-Inception module extracts hierarchical spatial features via diverse convolutional kernels and pooling operations, capturing localized sensor activations and global brain network interactions. Bi-directional GRUs (BiGRUs) model temporal dependencies in sensor time-series, adept at capturing long-range dynamic patterns. Multi-head self-attention highlights critical time windows and brain regions by assigning adaptive weights to relevant sensor channels, suppressing noise from non-contributory electrodes. Experiments on the DEAP dataset, containing multi-channel EEG sensor recordings, show that MB-MSTFNet achieves 96.80 ± 0.92% valence accuracy, 98.02 ± 0.76% arousal accuracy for binary classification tasks, and 92.85 ± 1.45% accuracy for four-class classification. Ablation studies validate that feature fusion, bidirectional temporal modeling, and multi-scale mechanisms significantly enhance performance by improving feature complementarity. This sensor-driven framework advances affective computing by integrating spatio-temporal dynamics and multi-band interactions of EEG sensor signals, enabling efficient real-time emotion recognition. Full article
(This article belongs to the Section Intelligent Sensors)
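A minimal stand-in for the kind of architecture the abstract describes (multi-scale convolutions, a bidirectional GRU over time, and multi-head self-attention) is sketched below in PyTorch. Layer sizes, kernel choices, and the input layout are hypothetical; this is not the authors' MB-MSTFNet.

```python
import torch
import torch.nn as nn

class MiniMSTF(nn.Module):
    """Toy stand-in: multi-scale 2D convs -> BiGRU over time -> self-attention."""
    def __init__(self, bands=4, chans=32, time=128, n_classes=4):
        super().__init__()
        # Two "inception-like" branches with different kernel sizes over (chans, time)
        self.branch_small = nn.Conv2d(bands, 8, kernel_size=(3, 3), padding=(1, 1))
        self.branch_large = nn.Conv2d(bands, 8, kernel_size=(7, 7), padding=(3, 3))
        self.pool = nn.AdaptiveAvgPool2d((1, time))     # collapse the electrode axis
        self.gru = nn.GRU(input_size=16, hidden_size=32,
                          batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                 # x: (batch, bands, chans, time)
        f = torch.cat([self.branch_small(x), self.branch_large(x)], dim=1)
        f = self.pool(f).squeeze(2)       # (batch, 16, time)
        f = f.transpose(1, 2)             # (batch, time, 16)
        h, _ = self.gru(f)                # (batch, time, 64)
        a, _ = self.attn(h, h, h)         # self-attention over time steps
        return self.head(a.mean(dim=1))   # average-pool over time, then classify

if __name__ == "__main__":
    x = torch.randn(2, 4, 32, 128)        # 2 trials, 4 bands, 32 electrodes, 128 samples
    print(MiniMSTF()(x).shape)            # torch.Size([2, 4])
```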

24 pages, 4294 KiB  
Article
Post Hoc Event-Related Potential Analysis of Kinesthetic Motor Imagery-Based Brain-Computer Interface Control of Anthropomorphic Robotic Arms
by Miltiadis Spanos, Theodora Gazea, Vasileios Triantafyllidis, Konstantinos Mitsopoulos, Aristidis Vrahatis, Maria Hadjinicolaou, Panagiotis D. Bamidis and Alkinoos Athanasiou
Electronics 2025, 14(15), 3106; https://doi.org/10.3390/electronics14153106 - 4 Aug 2025
Viewed by 235
Abstract
Kinesthetic motor imagery (KMI), the mental rehearsal of a motor task without its actual performance, constitutes one of the most common techniques used for brain–computer interface (BCI) control for movement-related tasks. The effect of neural injury on motor cortical activity during execution and imagery remains under investigation in terms of activations, processing of motor onset, and BCI control. The current work aims to conduct a post hoc investigation of the event-related potential (ERP)-based processing of KMI during BCI control of anthropomorphic robotic arms by spinal cord injury (SCI) patients and healthy control participants in a completed clinical trial. For this purpose, we analyzed 14-channel electroencephalography (EEG) data from 10 patients with cervical SCI and 8 healthy individuals, recorded through Emotiv EPOC BCI, as the participants attempted to move anthropomorphic robotic arms using KMI. EEG data were pre-processed by band-pass filtering (8–30 Hz) and independent component analysis (ICA). ERPs were calculated at the sensor space, and analysis of variance (ANOVA) was used to determine potential differences between groups. Our results showed no statistically significant differences between SCI patients and healthy control groups regarding mean amplitude and latency (p < 0.05) across the recorded channels at various time points during stimulus presentation. Notably, no significant differences were observed in ERP components, except for the P200 component at the T8 channel. These findings suggest that brain circuits associated with motor planning and sensorimotor processes are not disrupted due to anatomical damage following SCI. The temporal dynamics of motor-related areas—particularly in channels like F3, FC5, and F7—indicate that essential motor imagery (MI) circuits remain functional. Limitations include the relatively small sample size that may hamper the generalization of our findings, the sensor-space analysis that restricts anatomical specificity and neurophysiological interpretations, and the use of a low-density EEG headset, lacking coverage over key motor regions. Non-invasive EEG-based BCI systems for motor rehabilitation in SCI patients could effectively leverage intact neural circuits to promote neuroplasticity and facilitate motor recovery. Future work should include validation against larger, longitudinal, high-density, source-space EEG datasets. Full article
(This article belongs to the Special Issue EEG Analysis and Brain–Computer Interface (BCI) Technology)
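The preprocessing and group-comparison steps named in the abstract (8-30 Hz band-pass filtering, epoch averaging into ERPs, ANOVA between groups) can be approximated with the short SciPy sketch below on synthetic data; the sampling rate, epoch window, and amplitude window are assumptions, and the ICA step is omitted for brevity.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import f_oneway

FS = 128                                   # assumed Emotiv EPOC sampling rate (Hz)

def bandpass(x, lo=8.0, hi=30.0, fs=FS, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def erp(eeg, onsets, pre=0.2, post=0.8, fs=FS):
    """Average epochs around stimulus onsets (in samples) for one channel."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = np.stack([eeg[o - n_pre:o + n_post] for o in onsets])
    epochs -= epochs[:, :n_pre].mean(axis=1, keepdims=True)   # baseline correction
    return epochs.mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    onsets = np.arange(FS, FS * 55, FS)    # synthetic stimulus onsets, once per second
    sci  = [erp(bandpass(rng.standard_normal(FS * 60)), onsets) for _ in range(10)]
    ctrl = [erp(bandpass(rng.standard_normal(FS * 60)), onsets) for _ in range(8)]
    # Group comparison of mean amplitude in an assumed post-stimulus window
    win = slice(int(0.35 * FS), int(0.45 * FS))
    F, p = f_oneway([e[win].mean() for e in sci], [e[win].mean() for e in ctrl])
    print(f"F={F:.2f}, p={p:.3f}")
```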

35 pages, 6415 KiB  
Review
Recent Advances in Conductive Hydrogels for Electronic Skin and Healthcare Monitoring
by Yan Zhu, Baojin Chen, Yiming Liu, Tiantian Tan, Bowen Gao, Lijun Lu, Pengcheng Zhu and Yanchao Mao
Biosensors 2025, 15(7), 463; https://doi.org/10.3390/bios15070463 - 18 Jul 2025
Cited by 1 | Viewed by 482
Abstract
In recent decades, flexible electronics have witnessed remarkable advancements in multiple fields, encompassing wearable electronics, human–machine interfaces (HMI), clinical diagnosis, and treatment, etc. Nevertheless, conventional rigid electronic devices are fundamentally constrained by their inherent non-stretchability and poor conformability, limitations that substantially impede their practical applications. In contrast, conductive hydrogels (CHs) for electronic skin (E-skin) and healthcare monitoring have attracted substantial interest owing to outstanding features, including adjustable mechanical properties, intrinsic flexibility, stretchability, transparency, and diverse functional and structural designs. Considerable efforts focus on developing CHs incorporating various conductive materials to enable multifunctional wearable sensors and flexible electrodes, such as metals, carbon, ionic liquids (ILs), MXene, etc. This review presents a comprehensive summary of the recent advancements in CHs, focusing on their classifications and practical applications. Firstly, CHs are categorized into five groups based on the nature of the conductive materials employed. These categories include polymer-based, carbon-based, metal-based, MXene-based, and ionic CHs. Secondly, the promising applications of CHs for electrophysiological signals and healthcare monitoring are discussed in detail, including electroencephalogram (EEG), electrocardiogram (ECG), electromyogram (EMG), respiratory monitoring, and motion monitoring. Finally, this review concludes with a comprehensive summary of current research progress and prospects regarding CHs in the fields of electronic skin and health monitoring applications. Full article

18 pages, 2062 KiB  
Article
Measuring Blink-Related Brainwaves Using Low-Density Electroencephalography with Textile Electrodes for Real-World Applications
by Emily Acampora, Sujoy Ghosh Hajra and Careesa Chang Liu
Sensors 2025, 25(14), 4486; https://doi.org/10.3390/s25144486 - 18 Jul 2025
Viewed by 406
Abstract
Background: Electroencephalography (EEG) systems based on textile electrodes are increasingly being developed to address the need for more wearable sensor systems for brain function monitoring. Blink-related oscillations (BROs) are a new measure of brain function that corresponds to brainwave responses occurring after spontaneous blinking, and indexes neural processes as the brain evaluates new visual information appearing after eye re-opening. Prior studies have reported BRO utility as both a clinical and non-clinical biomarker of cognition, but no study has demonstrated BRO measurement using textile-based EEG devices that facilitate user comfort for real-world applications. Methods: We investigated BRO measurement using a four-channel EEG system with textile electrodes by extracting BRO responses using existing, publicly available EEG data (n = 9). We compared BRO effects derived from textile-based electrodes with those from standard dry Ag/Ag-Cl electrodes collected at the same locations (i.e., Fp1, Fp2, F7, F8) and using the same EEG amplifier. Results: Results showed that BRO effects measured using textile electrodes exhibited similar features in both time and frequency domains compared to dry Ag/Ag-Cl electrodes. Data from both technologies also showed similar performance in artifact removal and signal capture. Conclusions: These findings provide the first demonstration of successful BRO signal capture using four-channel EEG with textile electrodes, providing compelling evidence toward the development of a comfortable and user-friendly EEG technology that uses the simple activity of blinking for objective brain function assessment in a variety of settings. Full article
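A heavily simplified sketch of blink-related epoch averaging is shown below, assuming a single frontal channel, a hypothetical sampling rate, and a crude amplitude threshold for blink detection; the published BRO pipeline is more involved than this.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 256  # assumed sampling rate (Hz); the actual dataset rate may differ

def blink_related_average(fp1, fs=FS, pre=0.5, post=1.0):
    """Find large frontal deflections (blinks), epoch around each blink maximum,
    baseline-correct, and average the epochs."""
    thresh = fp1.mean() + 3 * fp1.std()            # crude blink threshold
    peaks, _ = find_peaks(fp1, height=thresh, distance=int(0.5 * fs))
    n_pre, n_post = int(pre * fs), int(post * fs)
    peaks = peaks[(peaks > n_pre) & (peaks < len(fp1) - n_post)]
    epochs = np.stack([fp1[p - n_pre:p + n_post] for p in peaks])
    epochs -= epochs[:, : int(0.2 * fs)].mean(axis=1, keepdims=True)
    return epochs.mean(axis=0), len(peaks)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    sig = rng.standard_normal(FS * 120)            # 2 min of synthetic Fp1 data
    sig[np.arange(FS * 5, FS * 115, FS * 10)] += 40.0   # injected "blinks"
    bro, n = blink_related_average(sig)
    print(n, bro.shape)
```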

40 pages, 2250 KiB  
Review
Comprehensive Comparative Analysis of Lower Limb Exoskeleton Research: Control, Design, and Application
by Sk Hasan and Nafizul Alam
Actuators 2025, 14(7), 342; https://doi.org/10.3390/act14070342 - 9 Jul 2025
Viewed by 1019
Abstract
This review provides a comprehensive analysis of recent advancements in lower limb exoskeleton systems, focusing on applications, control strategies, hardware architecture, sensing modalities, human-robot interaction, evaluation methods, and technical innovations. The study spans systems developed for gait rehabilitation, mobility assistance, terrain adaptation, pediatric use, and industrial support. Applications range from sit-to-stand transitions and post-stroke therapy to balance support and real-world navigation. Control approaches vary from traditional impedance and fuzzy logic models to advanced data-driven frameworks, including reinforcement learning, recurrent neural networks, and digital twin-based optimization. These controllers support personalized and adaptive interaction, enabling real-time intent recognition, torque modulation, and gait phase synchronization across different users and tasks. Hardware platforms include powered multi-degree-of-freedom exoskeletons, passive assistive devices, compliant joint systems, and pediatric-specific configurations. Innovations in actuator design, modular architecture, and lightweight materials support increased usability and energy efficiency. Sensor systems integrate EMG, EEG, IMU, vision, and force feedback, supporting multimodal perception for motion prediction, terrain classification, and user monitoring. Human–robot interaction strategies emphasize safe, intuitive, and cooperative engagement. Controllers are increasingly user-specific, leveraging biosignals and gait metrics to tailor assistance. Evaluation methodologies include simulation, phantom testing, and human–subject trials across clinical and real-world environments, with performance measured through joint tracking accuracy, stability indices, and functional mobility scores. Overall, the review highlights the field’s evolution toward intelligent, adaptable, and user-centered systems, offering promising solutions for rehabilitation, mobility enhancement, and assistive autonomy in diverse populations. Following a detailed review of current developments, strategic recommendations are made to enhance and evolve existing exoskeleton technologies. Full article
(This article belongs to the Section Actuators for Robotics)

24 pages, 3200 KiB  
Article
A Spatial–Temporal Time Series Decomposition for Improving Independent Channel Forecasting
by Yue Yu, Pavel Loskot, Wenbin Zhang, Qi Zhang and Yu Gao
Mathematics 2025, 13(14), 2221; https://doi.org/10.3390/math13142221 - 8 Jul 2025
Viewed by 339
Abstract
Forecasting multivariate time series is a pivotal task in controlling multi-sensor systems. The joint forecasting of all channels may be too complex, whereas forecasting the channels independently may cause important spatial inter-dependencies to be overlooked. In this paper, we improve the performance of single-channel forecasting algorithms by designing an interpretable front-end that extracts the spatial–temporal components from the input multivariate time series. Specifically, the multivariate samples are first segmented into equal-sized matrix symbols. The symbols are decomposed into the frequency-separated Intrinsic Mode Functions (IMFs) using a 2D Empirical-Mode Decomposition (EMD). The IMF components in each channel are then forecasted independently using relatively simple univariate predictors (UPs) such as DLinear, FITS, and TCN. The symbol size is determined to maximize the temporal stationarity of the EMD residual trend using Bayesian optimization. In addition, since the overall performance is usually dominated by a few of the weakest predictors, it is shown that the forecasting accuracy can be further improved by reordering the corresponding channels to make more correlated channels more adjacent. However, channel reordering requires retraining the affected predictors. The main advantage of the proposed forecasting framework for multivariate time series is that it retains the interpretability and simplicity of single-channel forecasting methods while improving their accuracy by capturing information about the spatial-channel dependencies. This has been demonstrated numerically assuming a 64-channel EEG dataset. Full article
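The overall framework (decompose each channel, forecast the components independently with a simple predictor, then recombine) can be sketched as follows. A moving-average trend/residual split stands in for the paper's 2D EMD, and straight-line extrapolation stands in for univariate predictors such as DLinear; all of that is illustrative, not the authors' code.

```python
import numpy as np

def decompose(channel, win=16):
    """Stand-in for the paper's 2D EMD: split one channel into a smooth trend
    (moving average) and a residual component."""
    kernel = np.ones(win) / win
    trend = np.convolve(channel, kernel, mode="same")
    return trend, channel - trend

def linear_forecast(component, horizon=8, lookback=64):
    """Fit a straight line to the recent past and extrapolate it."""
    y = component[-lookback:]
    slope, intercept = np.polyfit(np.arange(lookback), y, 1)
    return intercept + slope * np.arange(lookback, lookback + horizon)

def forecast_multichannel(series, horizon=8):
    """Forecast each channel independently, component by component, then recombine."""
    preds = []
    for ch in series:                       # series: (n_channels, n_samples)
        trend, resid = decompose(ch)
        preds.append(linear_forecast(trend, horizon) + linear_forecast(resid, horizon))
    return np.stack(preds)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    eeg = rng.standard_normal((64, 512))    # 64 synthetic EEG channels
    print(forecast_multichannel(eeg).shape) # (64, 8)
```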

27 pages, 6579 KiB  
Review
Bionic Sensors for Biometric Acquisition and Monitoring: Challenges and Opportunities
by Haoran Yu, Mingqi Ma, Baishun Zhang, Anxin Wang, Gaowei Zhong, Ziyuan Zhou, Chengxin Liu, Chunqing Li, Jingjing Fang, Yanbo He, Donghai Ren, Feifei Deng, Qi Hong, Yunong Zhao and Xiaohui Guo
Sensors 2025, 25(13), 3981; https://doi.org/10.3390/s25133981 - 26 Jun 2025
Viewed by 796
Abstract
The development of materials science, artificial intelligence and wearable technology has created both opportunities and challenges for the next generation of bionic sensor technology. Bionic sensors are extensively utilized in the collection and monitoring of human biological signals. Human biological signals refer to the parameters generated inside or outside the human body to transmit information. In a broad sense, they include bioelectrical signals, biomechanical information, biomolecules, and chemical molecules. This paper systematically reviews recent advances in bionic sensors in the field of biometric acquisition and monitoring, focusing on four major technical directions: bioelectric signal sensors (electrocardiograph (ECG), electroencephalograph (EEG), electromyography (EMG)), biomarker sensors (small molecules, large molecules, and complex-state biomarkers), biomechanical sensors, and multimodal integrated sensors. These breakthroughs have driven innovations in medical diagnosis, human–computer interaction, wearable devices, and other fields. This article provides an overview of the above biomimetic sensors and outlines the future development trends in this field. Full article
(This article belongs to the Special Issue Nature Inspired Engineering: Biomimetic Sensors)

31 pages, 3895 KiB  
Article
Enhanced Pilot Attention Monitoring: A Time-Frequency EEG Analysis Using CNN–LSTM Networks for Aviation Safety
by Quynh Anh Nguyen, Nam Anh Dao and Long Nguyen
Information 2025, 16(6), 503; https://doi.org/10.3390/info16060503 - 17 Jun 2025
Viewed by 459
Abstract
Despite significant technological advancements in aviation safety systems, human-operator condition monitoring remains a critical challenge, with more than 75% of aircraft incidents stemming from attention-related perceptual failures. This study addresses a fundamental question in sensor-based condition monitoring: how can temporal- and frequency-domain EEG sensor data be optimally integrated to detect precursors of system failure in human–machine interfaces? We propose a three-stage diagnostic framework that mirrors industrial condition monitoring approaches. First, raw EEG sensor signals undergo preprocessing into standardized one-second epochs. Second, a novel hybrid feature-extraction methodology combines time- and frequency-domain features to create comprehensive sensor signatures of neural states. Finally, our dual-architecture CNN–LSTM model processes spatial patterns via CNNs while capturing temporal degradation signals via LSTMs, enabling robust classification in noisy operational environments. Our contributions include (1) a multimodal data fusion approach for EEG sensors that provides a more comprehensive representation of operator conditions, and (2) an artificial intelligence architecture that balances spatial and temporal analysis for the predictive maintenance of attention states. When validated on aviation-related EEG datasets, our condition monitoring system achieved significantly higher diagnostic accuracy across various noise conditions compared to existing approaches. The practical applications extend beyond theoretical improvement, offering a pathway to implement more reliable human–machine interface monitoring in critical systems, potentially preventing catastrophic failures by detecting condition anomalies before they propagate through the system. Full article
(This article belongs to the Special Issue Machine Learning and Artificial Intelligence with Applications)
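A toy version of the dual CNN-LSTM idea, assuming 32-channel one-second epochs at 256 Hz and arbitrary layer sizes, might look like the PyTorch sketch below; it is not the published architecture.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Toy CNN-LSTM: 1D convs extract local patterns within each epoch,
    an LSTM models the resulting feature sequence, a linear head classifies."""
    def __init__(self, n_channels=32, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                  # x: (batch, channels, samples)
        f = self.cnn(x)                    # (batch, 32, samples // 4)
        f = f.transpose(1, 2)              # (batch, time, 32)
        _, (h, _) = self.lstm(f)           # h: (1, batch, 64)
        return self.head(h[-1])            # (batch, n_classes)

if __name__ == "__main__":
    x = torch.randn(4, 32, 256)            # 4 one-second epochs, 32 channels, 256 Hz
    print(CNNLSTM()(x).shape)              # torch.Size([4, 2])
```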

24 pages, 10907 KiB  
Article
Time-Frequency Analysis of Motor Imagery During Plantar and Dorsal Flexion Movements Using a Low-Cost Ankle Exoskeleton
by Cristina Polo-Hortigüela, Mario Ortiz, Paula Soriano-Segura, Eduardo Iáñez and José M. Azorín
Sensors 2025, 25(10), 2987; https://doi.org/10.3390/s25102987 - 9 May 2025
Viewed by 735
Abstract
Sensor technology plays a fundamental role in neuro-motor rehabilitation by enabling precise movement analysis and control. This study explores the integration of brain–machine interfaces (BMIs) and wearable sensors to enhance motor recovery in individuals with neuro-motor impairments. Specifically, different time-frequency transforms are evaluated to analyze the correlation between electroencephalographic (EEG) activity and ankle position, measured by using inertial measurement units (IMUs). A low-cost ankle exoskeleton was designed to conduct the experimental trials. Six subjects performed plantar and dorsal flexion movements while the EEG and IMU signals were recorded. The correlation between brain activity and foot kinematics was analyzed using the Short-Time Fourier Transform (STFT), Stockwell (ST), Hilbert–Huang (HHT), and Chirplet (CT) methods. The 8–20 Hz frequency band exhibited the highest correlation values. For motor imagery classification, the STFT achieved the highest accuracy (92.9%) using an EEGNet-based classifier and a state-machine approach. This study presents a dual approach: the analysis of EEG-movement correlation in different cognitive states, and the systematic comparison of four time-frequency transforms for both correlation and classification performance. The results support the potential of combining EEG and IMU data for BMI applications and highlight the importance of cognitive state in motion analysis for accessible neurorehabilitation technologies. Full article
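The correlation analysis can be approximated by computing an STFT band-power envelope (the abstract reports the 8-20 Hz band as most informative) and correlating it with a resampled ankle-angle trace, as in the sketch below; the sampling rates and window lengths are assumptions, and the Stockwell, Hilbert-Huang, and Chirplet variants are omitted.

```python
import numpy as np
from scipy.signal import stft

FS_EEG, FS_IMU = 256, 256   # assumed common sampling rate after resampling

def band_power_envelope(eeg, fs=FS_EEG, lo=8.0, hi=20.0, nperseg=128):
    """STFT magnitude averaged over the 8-20 Hz band, per time frame."""
    f, t, Z = stft(eeg, fs=fs, nperseg=nperseg, noverlap=nperseg - 1)
    band = (f >= lo) & (f <= hi)
    return np.abs(Z[band]).mean(axis=0), t

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    eeg = rng.standard_normal(FS_EEG * 30)          # 30 s of synthetic EEG
    ankle_angle = np.sin(2 * np.pi * 0.5 * np.arange(FS_IMU * 30) / FS_IMU)
    env, t = band_power_envelope(eeg)
    imu_resampled = np.interp(t, np.arange(len(ankle_angle)) / FS_IMU, ankle_angle)
    r = np.corrcoef(env, imu_resampled)[0, 1]
    print(f"correlation between 8-20 Hz envelope and ankle angle: {r:.3f}")
```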

29 pages, 8414 KiB  
Article
Development of Multimodal Physical and Virtual Traffic Reality Simulation System
by Ismet Goksad Erdagi, Slavica Gavric and Aleksandar Stevanovic
Appl. Sci. 2025, 15(9), 5115; https://doi.org/10.3390/app15095115 - 4 May 2025
Viewed by 961
Abstract
As urban traffic complexity increases, realistic multimodal simulation environments are essential for evaluating transportation safety and human behavior. This study introduces a novel multimodal, multi-participant co-simulation framework designed to comprehensively model interactions between drivers, bicyclists, and pedestrians. The framework integrates CARLA, a high-fidelity driving simulator, with PTV Vissim, a widely used microscopic traffic simulation tool. This integration was achieved through the development of custom scripts in Python and C++ that enable real-time data exchange and synchronization between the platforms. Additionally, physiological sensors, including heart rate monitors, electrodermal activity sensors, and EEG devices, were integrated using Lab Streaming Layer to capture physiological responses under different traffic conditions. Three experimental case studies validate the system’s capabilities. In the first, cyclists showed a significant rightward lane shift (from 0.94 m to 1.14 m, p<0.00001) and elevated heart rates (69.45 to 72.75 bpm, p<0.00001) in response to overtaking vehicles. In the second, pedestrians exhibited more conservative gap acceptance behavior at 50 mph vs. 30 mph (gap acceptance time: 3.70 vs. 3.18 s, p<0.00001), with corresponding increases in HR (3.54 bpm vs. 1.91 bpm post-event). In the third case study, mean vehicle speeds recorded during simulated driving were compared with real-world field data along urban corridors, demonstrating strong alignment and validating the system’s ability to reproduce realistic traffic conditions. These findings demonstrate the system’s effectiveness in capturing dynamic, real-time human responses and provide a foundation for advancing human-centered, multimodal traffic research. Full article
(This article belongs to the Special Issue Virtual Models for Autonomous Driving Systems)
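Reading physiological streams through Lab Streaming Layer typically looks like the pylsl sketch below; the stream type and sample count are placeholders, and the CARLA/PTV Vissim synchronization scripts described in the abstract are not reproduced here.

```python
# Minimal Lab Streaming Layer reader (pip install pylsl); requires an LSL outlet
# (e.g., an EEG amplifier or a simulator) to be present on the network.
from pylsl import StreamInlet, resolve_stream

def read_physiology(n_samples=500, stream_type="EEG"):
    """Open the first LSL stream of the given type and pull timestamped samples."""
    streams = resolve_stream("type", stream_type)   # blocks until a stream is found
    inlet = StreamInlet(streams[0])
    data = []
    for _ in range(n_samples):
        sample, timestamp = inlet.pull_sample(timeout=5.0)
        if sample is not None:
            data.append((timestamp, sample))
    return data

if __name__ == "__main__":
    samples = read_physiology(n_samples=100)
    print(f"received {len(samples)} samples")
```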

23 pages, 4240 KiB  
Article
Research on the Identification of Road Hypnosis Based on the Fusion Calculation of Dynamic Human–Vehicle Data
by Han Zhang, Longfei Chen, Bin Wang, Xiaoyuan Wang, Jingheng Wang, Chenyang Jiao, Kai Feng, Cheng Shen, Quanzheng Wang, Junyan Han and Yi Liu
Sensors 2025, 25(9), 2846; https://doi.org/10.3390/s25092846 - 30 Apr 2025
Viewed by 428
Abstract
Driver factors are the main cause of road traffic accidents. To support research on automotive active safety, an identification method for driver road hypnosis based on the fusion of dynamic, heterogeneous human–vehicle data is proposed. Road hypnosis is an unconscious driving state produced by the combination of external environmental factors and the psychological state of the car driver. When drivers fall into road hypnosis, they cannot clearly perceive the surrounding environment or react in time to complete the driving task, and the safety of both humans and cars is greatly affected. The identification of drivers' road hypnosis is therefore of great significance. Vehicle and virtual driving experiments are designed and carried out to collect human and vehicle data. Eye movement and EEG data are collected with eye movement sensors and EEG sensors, while vehicle speed and acceleration data are collected by a mobile phone with AutoNavi navigation, which serves as an onboard sensor. To screen the human and vehicle characteristics related to the road hypnosis state, characteristic parameters are selected from the preprocessed data using independent-samples t-tests; a hidden Markov model (HMM) is then constructed and combined with a Ridge Regression model to identify road hypnosis. Six evaluation indicators are used to assess identification performance, and the model is compared with multiple regression models. The results show that the hidden Markov-Ridge Regression model is superior in both the accuracy and the overall effect of road hypnosis identification. The proposed comprehensive road hypnosis identification model based on human–vehicle data provides a new technical reference for the development of intelligent driving assistance systems, which can effectively improve the life-state recognition ability of automobile intelligent cockpits, enhance the active safety performance of automobiles, and further improve traffic safety. Full article
(This article belongs to the Section Vehicular Sensing)
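One plausible reading of the HMM-plus-Ridge pipeline is sketched below with hmmlearn and scikit-learn on synthetic features: t-test screening, a two-state HMM whose state posteriors augment the features, and Ridge regression for the final score. The abstract does not specify exactly how the two models are combined, so that combination, and all the data, are assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind
from hmmlearn.hmm import GaussianHMM          # pip install hmmlearn
from sklearn.linear_model import Ridge

rng = np.random.default_rng(5)

# Synthetic stand-ins: rows are time windows, columns are eye/EEG/vehicle features.
X = rng.standard_normal((600, 12))
y = (rng.random(600) < 0.3).astype(float)     # 1 = labelled road-hypnosis window
X[y == 1, :4] += 0.8                          # make a few features informative

# 1) Independent-samples t-test screening: keep features with low p-values.
pvals = np.array([ttest_ind(X[y == 1, j], X[y == 0, j]).pvalue for j in range(X.shape[1])])
X_sel = X[:, pvals < 0.05]

# 2) Fit a 2-state Gaussian HMM and use its state posteriors as extra features.
hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=50, random_state=0)
hmm.fit(X_sel)
posteriors = hmm.predict_proba(X_sel)

# 3) Ridge regression on screened features + HMM posteriors to score hypnosis.
model = Ridge(alpha=1.0).fit(np.hstack([X_sel, posteriors]), y)
scores = model.predict(np.hstack([X_sel, posteriors]))
print("predicted hypnosis score range:", scores.min().round(2), scores.max().round(2))
```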

38 pages, 7211 KiB  
Article
Cross-Context Stress Detection: Evaluating Machine Learning Models on Heterogeneous Stress Scenarios Using EEG Signals
by Omneya Attallah, Mona Mamdouh and Ahmad Al-Kabbany
AI 2025, 6(4), 79; https://doi.org/10.3390/ai6040079 - 14 Apr 2025
Cited by 1 | Viewed by 1514
Abstract
Background/Objectives: This article addresses the challenge of stress detection across diverse contexts. Mental stress is a worldwide concern that substantially affects human health and productivity, rendering it a critical research challenge. Although numerous studies have investigated stress detection through machine learning (ML) techniques, there has been limited research on assessing ML models trained in one context and utilized in another. The objective of ML-based stress detection systems is to create models that generalize across various contexts. Methods: This study examines the generalizability of ML models employing EEG recordings from two stress-inducing contexts: mental arithmetic evaluation (MAE) and virtual reality (VR) gaming. We present a data collection workflow and publicly release a portion of the dataset. Furthermore, we evaluate classical ML models and their generalizability, offering insights into the influence of training data on model performance, data efficiency, and related expenses. EEG data were acquired leveraging MUSE-STM hardware during stressful MAE and VR gaming scenarios. The methodology entailed preprocessing EEG signals using wavelet denoising mother wavelets, assessing individual and aggregated sensor data, and employing three ML models—linear discriminant analysis (LDA), support vector machine (SVM), and K-nearest neighbors (KNN)—for classification purposes. Results: In Scenario 1, where MAE was employed for training and VR for testing, the TP10 electrode attained an average accuracy of 91.42% across all classifiers and participants, whereas the SVM classifier achieved the highest average accuracy of 95.76% across all participants. In Scenario 2, adopting VR data as the training data and MAE data as the testing data, the maximum average accuracy achieved was 88.05% with the combination of TP10, AF8, and TP9 electrodes across all classifiers and participants, whereas the LDA model attained the peak average accuracy of 90.27% among all participants. The optimal performance was achieved with Symlets 4 and Daubechies-2 for Scenarios 1 and 2, respectively. Conclusions: The results demonstrate that although ML models exhibit generalization capabilities across stressors, their performance is significantly influenced by the alignment between training and testing contexts, as evidenced by systematic cross-context evaluations using an 80/20 train–test split per participant and quantitative metrics (accuracy, precision, recall, and F1-score) averaged across participants. The observed variations in performance across stress scenarios, classifiers, and EEG sensors provide empirical support for this claim. Full article
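The cross-context evaluation protocol (train on mental-arithmetic features, test on VR-gaming features, compare LDA/SVM/KNN) is easy to mock up with scikit-learn, as below; the features and class separation are synthetic stand-ins for the MUSE EEG data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(6)

def fake_features(n, shift):
    """Synthetic per-epoch features for one context (e.g., TP10 band powers)."""
    X = rng.standard_normal((n, 8)) + shift
    y = rng.integers(0, 2, n)                     # 0 = rest, 1 = stress
    X[y == 1] += 0.7                              # stressed epochs shifted
    return X, y

X_mae, y_mae = fake_features(400, shift=0.0)      # training context: mental arithmetic
X_vr, y_vr = fake_features(400, shift=0.3)        # testing context: VR gaming

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf")),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    pipe = make_pipeline(StandardScaler(), clf).fit(X_mae, y_mae)
    acc = accuracy_score(y_vr, pipe.predict(X_vr))
    print(f"{name}: cross-context accuracy = {acc:.2%}")
```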

23 pages, 2928 KiB  
Article
Intra- and Inter-Regional Complexity in Multi-Channel Awake EEG Through Multivariate Multiscale Dispersion Entropy for Assessing Sleep Quality and Aging
by Ahmad Zandbagleh, Saeid Sanei, Lucía Penalba-Sánchez, Pedro Miguel Rodrigues, Mark Crook-Rumsey and Hamed Azami
Biosensors 2025, 15(4), 240; https://doi.org/10.3390/bios15040240 - 9 Apr 2025
Viewed by 1119
Abstract
Aging and poor sleep quality are associated with altered brain dynamics, yet current electroencephalography (EEG) analyses often overlook regional complexity. This study addresses this gap by introducing a novel integration of intra- and inter-regional complexity analysis using multivariate multiscale dispersion entropy (mvMDE) from awake resting-state EEG for the first time. Moreover, assessing both intra- and inter-regional complexity provides a comprehensive perspective on the dynamic interplay between localized neural activity and its coordination across brain regions, which is essential for understanding the neural substrates of aging and sleep quality. Data from 58 participants—24 young adults (mean age = 24.7 ± 3.4) and 34 older adults (mean age = 72.9 ± 4.2)—were analyzed, with each age group further divided based on Pittsburgh Sleep Quality Index (PSQI) scores. To capture inter-regional complexity, mvMDE was applied to the most informative group of sensors, with one sensor selected from each brain region using four methods: highest average correlation, highest entropy, highest mutual information, and highest principal component loading. This targeted approach reduced computational cost and enhanced the effect sizes (ESs), particularly at large scale factors (e.g., 25) linked to delta-band activity, with the PCA-based method achieving the highest ESs (1.043 for sleep quality in older adults). Overall, we expect that both inter- and intra-regional complexity will play a pivotal role in elucidating neural mechanisms as captured by various physiological data modalities—such as EEG, magnetoencephalography, and magnetic resonance imaging—thereby offering promising insights for a range of biomedical applications. Full article
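For orientation, a simplified univariate dispersion entropy with coarse-graining is sketched below; the paper's multivariate multiscale variant (mvMDE) and its sensor-selection step are more elaborate, and the class count, embedding dimension, and scales here are illustrative choices.

```python
import numpy as np
from scipy.stats import norm

def dispersion_entropy(x, n_classes=6, emb_dim=3, delay=1):
    """Univariate dispersion entropy: map samples to classes via the normal CDF,
    form embedding patterns, and take the Shannon entropy of the pattern distribution."""
    z = norm.cdf((x - x.mean()) / x.std())                        # map to (0, 1)
    classes = np.clip(np.round(n_classes * z + 0.5), 1, n_classes).astype(int)
    n_patterns = len(x) - (emb_dim - 1) * delay
    patterns = np.stack([classes[i * delay:i * delay + n_patterns]
                         for i in range(emb_dim)], axis=1)
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def coarse_grain(x, scale):
    """Multiscale step: average non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    eeg = rng.standard_normal(5000)                               # one synthetic channel
    for scale in (1, 5, 25):                                      # scale 25 is highlighted in the paper
        print(scale, round(dispersion_entropy(coarse_grain(eeg, scale)), 3))
```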
