Search Results (523)

Search Parameters:
Keywords = electroencephalogram (EEG) data

24 pages, 2472 KB  
Article
MFSleepNet: An Interactive Multimodal Fusion Framework for Automatic Sleep Staging
by Ranran Gui, Chen Wang, Qunfeng Niu and Li Wang
Sensors 2026, 26(10), 3085; https://doi.org/10.3390/s26103085 - 13 May 2026
Viewed by 51
Abstract
Accurate automatic sleep staging remains challenging due to complex temporal dynamics, inter-subject variability, and the difficulty of effectively integrating heterogeneous physiological signals. Electroencephalogram (EEG) and electrooculogram (EOG) recordings provide complementary information for sleep analysis; however, most existing multimodal approaches rely on simple feature concatenation, which limits their ability to capture structured inter-modality relationships. This paper proposes MFSleepNet, a multimodal sleep staging framework that explicitly models interactions between EEG and EOG signals. The proposed system incorporates a multimodal feature fusion module to enable bidirectional information exchange between modality-specific representations, followed by a gated temporal-channel attention mechanism to adaptively emphasize informative temporal segments and signal channels, facilitating joint representation learning while preserving modality-specific characteristics. Experiments on three public datasets (Sleep-EDF, SHHS, and HSP) under an epoch-level cross-validation protocol show that MFSleepNet consistently outperforms representative single-modality and multimodal baseline methods in terms of overall accuracy, Cohen’s κ, and Macro-F1. Ablation studies further demonstrate the contribution of each functional module. Correlation analysis indicates stage-dependent variations in EEG–EOG relationships, while interaction-based experiments show that explicit feature interaction improves both joint and modality-specific representations. Grad-CAM visualizations provide interpretability of model decisions. External validation on unseen subjects reveals a noticeable performance drop, highlighting the challenges of inter-subject variability and the limited baseline generalization capability of the model. 
To address this, a lightweight subject-specific adaptation strategy is introduced, which improves performance using a small amount of labeled subject-specific data. Overall, the proposed framework provides an effective and interpretable solution for multimodal sleep staging while emphasizing the importance of structured inter-modality interaction and subject-adaptive modeling in practical applications. Full article
(This article belongs to the Section Biomedical Sensors)
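The gated temporal-channel attention step described in the abstract can be illustrated with a minimal numpy sketch, assuming gates computed from per-channel and per-timestep summaries (the weight matrices here are hypothetical stand-ins for learned parameters, not the authors' implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_temporal_channel_attention(x, w_ch, w_t):
    """Re-weight a (channels, time) fused feature map with two gates.

    x    : (C, T) fused EEG/EOG features
    w_ch : (C, C) channel-gate weights (hypothetical; would be learned)
    w_t  : (T, T) temporal-gate weights (hypothetical; would be learned)
    """
    ch_summary = x.mean(axis=1)           # (C,) per-channel statistic
    t_summary = x.mean(axis=0)            # (T,) per-timestep statistic
    ch_gate = sigmoid(w_ch @ ch_summary)  # (C,) values in (0, 1)
    t_gate = sigmoid(w_t @ t_summary)     # (T,) values in (0, 1)
    # Applying both gates multiplicatively emphasizes informative
    # channels and temporal segments at once.
    return x * ch_gate[:, None] * t_gate[None, :]
```

Because both gates lie in (0, 1), the module can only attenuate features, never amplify them; the network compensates by learning larger upstream activations for segments it wants to keep.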
30 pages, 4043 KB  
Article
Bi-Hemispheric Adversarial Domain Adaptation Neural Network for EEG-Based Emotion Recognition
by Yuqi Chen and Ming Meng
Brain Sci. 2026, 16(5), 507; https://doi.org/10.3390/brainsci16050507 - 8 May 2026
Viewed by 277
Abstract
Background/Objectives: Adversarial domain adaptation methods are widely used in EEG-based emotion recognition to reduce the influence of individual differences and the non-stationary characteristics of electroencephalogram (EEG) signals. Most existing methods employ binary domain discriminators to align source and target domains at the global distribution level. However, such strategies often neglect the potential multimodal structure of emotional EEG data and the asymmetric emotional processing characteristics of the left and right hemispheres. To address these issues, this study proposes a Bi-Hemispheric Adversarial Domain Adaptation Neural Network (BiHADA) for EEG-based emotion recognition. Methods: In the proposed BiHADA framework, the conventional binary domain discriminator is extended into a multimodal discriminator by incorporating the label structure information of source-domain data into the domain discrimination process. This design encourages features belonging to the same emotional category to be aligned across domains and promotes positive knowledge transfer. In addition, dual adversarial domain adaptation branches are constructed to model the left and right hemispheres separately, enabling the network to capture hemisphere-specific emotional representations. Furthermore, discriminator-derived perplexity is introduced to evaluate the distribution alignment quality of target samples and to adaptively determine the weights of the corresponding hemisphere classifiers, thereby reducing the influence of poorly aligned samples during the final decision stage. Results: Experiments on the SEED dataset show that BiHADA achieves classification accuracies of 86.82% and 92.71% in cross-subject and cross-session tasks, respectively. These results demonstrate that the proposed method can effectively improve the transferability and discriminability of EEG emotional features under different domain adaptation scenarios. 
Conclusions: The proposed BiHADA method enhances EEG-based emotion recognition by jointly considering class-structure-guided domain alignment, hemispheric functional asymmetry, and branch-wise adaptation quality. The results suggest that incorporating source-domain label structure and hemisphere-specific adaptation can improve cross-domain EEG emotion recognition performance. Full article
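The discriminator-derived perplexity weighting mentioned above can be sketched as follows; this is a minimal illustration of the idea (low discriminator perplexity taken to mean confident alignment, hence a larger classifier weight), not the paper's exact rule:

```python
import numpy as np

def perplexity_weights(disc_probs, eps=1e-12):
    """Turn domain-discriminator outputs into per-sample weights.

    disc_probs : (N, K) discriminator softmax outputs for N target samples.
    Samples with low perplexity (confident discrimination) are treated as
    well aligned and receive a higher normalized weight.
    """
    entropy = -np.sum(disc_probs * np.log(disc_probs + eps), axis=1)
    perplexity = np.exp(entropy)       # 1 (certain) .. K (uniform)
    w = 1.0 / perplexity
    return w / w.sum()                 # normalized weights, sum to 1
```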
16 pages, 4498 KB  
Article
Decoding Mandarin Action Verbs from EEG Using a Dual-LSTM Network: Towards Practical Assistive Brain–Computer Interfaces
by Binshuo Liu, Gengbiao Chen, Lairong Yin and Jing Liu
Sensors 2026, 26(9), 2749; https://doi.org/10.3390/s26092749 - 29 Apr 2026
Viewed by 274
Abstract
Electroencephalogram (EEG)-based brain–computer interfaces (BCIs) offer a promising pathway for restoring communication. Decoding tonal languages like Mandarin from EEG remains challenging due to homophones and complex temporal dynamics. This study investigates the decoding of six high-frequency Mandarin action verbs—Chi (eat), He (drink), Chuan (wear), Na (take), Kan (look), and Dai (put on)—from EEG signals. We designed a visual-cue-based overt speech production experiment and collected EEG data from 30 participants during visually guided verb reading aloud. A recurrent neural network framework incorporating dual Long Short-Term Memory (LSTM) layers was implemented to model the long-range temporal dependencies in EEG patterns. The proposed model was compared against a traditional Common Spatial Pattern combined with Support Vector Machine (CSP-SVM) baseline. Our LSTM-based model achieved an average classification accuracy of 69.93% ± 3.07% for the six-class task, significantly outperforming the CSP-SVM baseline (36.53% ± 3.17%). Accuracy exceeded 75% when more than 15 training repetitions were available, and this level was reached using only approximately 38% of the available trial data for training, demonstrating data efficiency. The results indicate that the LSTM architecture can effectively capture the neural signatures associated with Mandarin verb processing, providing a foundation for developing practical EEG-based assistive communication technologies. The inference latency of the trained model, quantified as the post-training per-trial testing time, was under 2 s, supporting near-real-time applications. Full article
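A dual-LSTM decoder of the kind the abstract describes can be sketched in plain numpy (forward pass only; the shapes, gate ordering, and softmax read-out over six verb classes are illustrative assumptions, and a real model would be trained with a deep learning framework):

```python
import numpy as np

def lstm_layer(x_seq, wx, wh, b):
    """Run one LSTM layer over x_seq (T, D); gates stacked as [i, f, g, o].

    wx: (D, 4H), wh: (H, 4H), b: (4H,). Returns hidden states (T, H)."""
    H = wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    hs = []
    for x_t in x_seq:
        z = x_t @ wx + h @ wh + b
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # cell state update
        h = sigmoid(o) * np.tanh(c)                   # hidden state
        hs.append(h)
    return np.stack(hs)

def dual_lstm_decode(epoch, p1, p2, w_out):
    """Stack two LSTM layers and classify the final hidden state
    into verb-class probabilities with a softmax read-out."""
    h2 = lstm_layer(lstm_layer(epoch, *p1), *p2)
    logits = h2[-1] @ w_out
    e = np.exp(logits - logits.max())
    return e / e.sum()
```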
23 pages, 1673 KB  
Article
Transformer-Based SFDA by Class-Balanced Multicentric Dynamic Pseudo-Labeling for Privacy-Preserving EEG-Based BCI Systems
by Jiangchuan Liu, Jiatao Zhang, Cong Hu and Yong Peng
Systems 2026, 14(5), 476; https://doi.org/10.3390/systems14050476 - 28 Apr 2026
Viewed by 373
Abstract
As a common brain-computer interface (BCI) paradigm, electroencephalogram (EEG)-based motor imagery provides a critical pathway for both assistive technology (restoring communication and control) and active rehabilitation (promoting neural plasticity and functional recovery). Domain adaptation has been shown to effectively enhance the decoding performance of motor intentions for target subjects by leveraging labeled data from source subjects. However, EEG data from source subjects often contain sensitive personal information, and direct access to source EEG data can easily lead to privacy leakage. An important research topic is therefore to achieve domain adaptation without directly accessing the source subjects’ raw data. To address this challenge, a privacy-preserving source-free domain adaptation framework, termed Transformer-based SFDA with Class-balanced Multicentric Dynamic Pseudo-labeling (T-CMDP), is proposed for cross-subject motor-imagery EEG classification. This framework consists of three coupled stages. In the source model training stage, a Transformer-based encoder combined with Riemannian manifold-aware feature extraction is employed to learn transferable and discriminative EEG feature representations. In the source-free target adaptation stage, only the pretrained source model is transferred to the target domain and adapted through knowledge distillation and information maximization, without accessing raw source EEG data. In the self-supervised learning stage, class-balanced multicentric prototypes and high-confidence pseudo-label updates are introduced to progressively refine the target-domain decision boundaries. Extensive experiments on three motor-imagery EEG datasets demonstrate that the proposed T-CMDP framework consistently outperforms eleven representative baselines from traditional machine learning, deep learning, and source-free transfer approaches, achieving average accuracies of 56.85%, 76.34%, and 74.49%, respectively. These results indicate that T-CMDP effectively alleviates inter-subject EEG distribution discrepancies and preserves the privacy of source subjects, thereby facilitating more reliable and practical deployment of EEG-based BCI systems. Full article
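The information-maximization objective used during source-free target adaptation can be sketched as follows; this is the generic IM formulation (confident per-sample predictions, balanced class marginal), assumed here rather than taken from the paper:

```python
import numpy as np

def information_maximization_loss(probs, eps=1e-12):
    """Generic IM objective for source-free adaptation.

    probs : (N, K) softmax outputs of the transferred source model on
    unlabeled target data. Minimizing this makes each prediction
    confident (low per-sample entropy) while keeping the predicted
    class marginal balanced (high entropy of the mean prediction).
    """
    per_sample_entropy = -np.sum(probs * np.log(probs + eps), axis=1).mean()
    marginal = probs.mean(axis=0)
    marginal_entropy = -np.sum(marginal * np.log(marginal + eps))
    return per_sample_entropy - marginal_entropy
```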
27 pages, 1143 KB  
Systematic Review
Missing Data Gap Imputation Methods in Electroencephalogram (EEG) Signals: A Systematic Scoping Review
by Tobias Bergmann, Michael Movshovich, Yushu Shao, Julia Ryznar, Xue Nemoga-Stout, Izabella Marquez, Isuru Herath, Amanjyot Singh Sainbhi, Nuray Vakitbilir, Noah Silvaggio, Rakibul Hasan, Kevin Y. Stein, Hina Shaheen, Jaewoong Moon and Frederick A. Zeiler
Sensors 2026, 26(8), 2431; https://doi.org/10.3390/s26082431 - 15 Apr 2026
Viewed by 590
Abstract
Objective: Electroencephalogram (EEG) measures electrophysiological activity in the cerebral cortex and is broadly used across diagnostic, research, and clinical contexts. Missing data gaps are a pervasive issue in EEG signal recording, resulting from sensor failures and sensor disconnections, amongst other sources. To preserve a continuous signal describing underlying electrophysiological processes, imputation must be used to reconstruct these gaps. The aim of this review is to examine the methods that have been developed for missing data gap imputation in EEG signals. Methods: A search of five databases was conducted based on the Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines. The search question examined existing algorithms for imputation in EEG signals. Results: The initial search yielded 17,490 results (an update included 1913 additional results). This review includes 16 articles presenting EEG gap imputation methods. These imputation methods were characterized as (i) tensor-based, (ii) machine learning and deep learning, and (iii) model-based and classical. Conclusions: Several of these methods achieved strong effectiveness for accurately reconstructing gaps in ‘ground truth’ EEG signals; however, the limited generalizability of many of the studies due to small datasets lacking adequate participant diversity as well as methodological differences made it impossible to describe a single leading method. Further, the reliance on full recordings for segment imputation in some methods could prove prohibitive to real-time imputation. Future study is required to rectify these limitations and to properly investigate computational latency and requirements. Significance: This work provides novel insights into existing methods for EEG gap imputation, as it identifies current shortcomings in the literature and paves the way for a more generalizable solution to be achieved through future work. Full article
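As a concrete instance of the "model-based and classical" family the review covers, the simplest gap imputation is linear interpolation across missing samples; this is a generic sketch, not a method drawn from any single reviewed study:

```python
import numpy as np

def impute_gaps_linear(signal):
    """Fill NaN gaps in a 1-D EEG trace by linear interpolation
    between the nearest observed samples on either side of each gap."""
    x = np.asarray(signal, dtype=float)
    missing = np.isnan(x)
    idx = np.arange(x.size)
    x[missing] = np.interp(idx[missing], idx[~missing], x[~missing])
    return x
```

Linear interpolation needs only local context, so unlike methods that rely on the full recording it is compatible with the real-time use case the review flags as an open problem, at the cost of flattening any oscillatory structure inside the gap.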
30 pages, 4178 KB  
Article
An Intelligent Evaluation Algorithm for Pilot Flight Training Ability Based on Multimodal Information Fusion
by Heming Zhang, Changyuan Wang and Pengbo Wang
Sensors 2026, 26(7), 2245; https://doi.org/10.3390/s26072245 - 4 Apr 2026
Viewed by 618
Abstract
Intelligent-assisted assessment of pilot flight training ability is a method of automating the evaluation of pilots’ flight skills using artificial intelligence. Currently, using AI to assist or replace human instructors in flight skill assessment has become a mainstream research direction in the field of intelligent aviation. Existing flight skill assessment methods suffer from limitations in data types and insufficient assessment accuracy. To address these issues, we evaluate and predict pilot performance in simulated flight missions based on physiological signals. Following the “OODA loop” theory, we established a multimodal dataset including pilot eye movement, electroencephalogram (EEG), electrocardiogram (ECG), electrodermal signaling (EDS), heart rate, respiration, and flight attitude data. This dataset records changes in physiological rhythms and flight behaviors during pilots’ flight training at different difficulty levels. To enhance the signal-to-noise ratio, we propose an enhanced wavelet fuzzy thresholding denoising algorithm utilizing LSTM optimization. We address the problem of isolated features across different time frames in multimodal data modeling by introducing a multi-feature fusion algorithm based on STFT. Furthermore, by combining a high-efficiency sub-attention mechanism with a Transformer network, we construct a multi-classification network for intelligent-assisted assessment of pilot flight training ability, further improving the output accuracy of each category. Experiments show that our designed algorithm can achieve a classification accuracy of up to 85% on the dataset (5-fold cross-validation), which meets the requirements for auxiliary assessment of flight capabilities. Full article
(This article belongs to the Section Intelligent Sensors)
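The STFT-based multi-feature fusion idea (aligning modalities on a shared time-frequency grid so that features from the same time window are no longer isolated) can be sketched with numpy; the window and hop sizes are illustrative assumptions:

```python
import numpy as np

def stft_mag(x, win=64, hop=32):
    """Magnitude STFT of a 1-D signal via a sliding Hann window."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))  # (frames, bins)

def fuse_modalities(signals, win=64, hop=32):
    """Concatenate per-frame spectra of several same-length signals
    (e.g., EEG, ECG, respiration), giving one fused feature vector
    per shared time window."""
    return np.concatenate([stft_mag(s, win, hop) for s in signals], axis=1)
```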
19 pages, 721 KB  
Article
Evaluating EEG-Based Seizure Classification Using Foundation and Classical Ensemble Models
by George Obaido and Ebenezer Esenogho
Appl. Sci. 2026, 16(7), 3120; https://doi.org/10.3390/app16073120 - 24 Mar 2026
Viewed by 687
Abstract
Electroencephalogram (EEG)-based seizure classification remains challenging due to inter-subject variability and heterogeneous signal characteristics. Foundation models offer a promising alternative to dataset-specific training by leveraging pretrained priors. In this study, we evaluate a tabular foundation model, the Tabular Prior-Data Fitted Network (TabPFN), against classical ensemble baselines (gradient boosting, random forests, AdaBoost, and XGBoost) for EEG seizure segment classification. We use subject-independent GroupKFold cross-validation without out-of-fold evaluation to assess generalization to unseen individuals. Experiments on the Bangalore EEG Epilepsy Dataset (BEED) and the University of Bonn (Bonn) dataset show that TabPFN achieves higher accuracy than classical ensembles, reaching 99.7% on BEED and 99.6% on Bonn. These results suggest that pretrained tabular priors can be effective in feature-based EEG pipelines where subject-level generalization is required. Full article
(This article belongs to the Special Issue AI-Driven Healthcare)
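Subject-independent GroupKFold splitting, in which no subject contributes segments to both the training and test sides of a fold, can be sketched without any ML library (a minimal illustration; `sklearn.model_selection.GroupKFold` provides a production version):

```python
import numpy as np

def group_kfold(groups, n_splits=5):
    """Yield (train_idx, test_idx) pairs in which no group (here, a
    subject ID) appears on both sides of any split, so test accuracy
    reflects generalization to unseen individuals."""
    groups = np.asarray(groups)
    for held_out in np.array_split(np.unique(groups), n_splits):
        test_mask = np.isin(groups, held_out)
        yield np.where(~test_mask)[0], np.where(test_mask)[0]
```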
14 pages, 658 KB  
Article
EEG in the Emergency Department: When the Neurophysiological Test Can Be Avoided in Emergency Diagnostic Workups? The EMINENCE Study
by Maenia Scarpino, Antonello Grippo, Federica Barraco, Benedetta Piccardi, Laura Betti, Peiman Nazerian, Arianna Fabbri, Roberto Fratangelo, Cristina Mei and Andrea Nencioni
Neurol. Int. 2026, 18(3), 54; https://doi.org/10.3390/neurolint18030054 - 16 Mar 2026
Viewed by 431
Abstract
Introduction: This study was conducted to determine whether specific emergency physician (EP) diagnoses and/or neurological signs/symptoms upon admission to the Emergency Department (ED) were associated with normal/non-informative emergency electroencephalogram (emEEG). Methods: Data from consecutive patients admitted to the ED of our tertiary hospital over a two-year period (1 January 2023–31 December 2024) were analyzed retrospectively. We evaluated the correlation between normal/non-specific emEEGs and EP admission diagnoses and neurological signs/symptoms on admission. Epileptic discharges and sharp waves with triphasic morphology were considered specific patterns. Results: A total of 2008 patients underwent emEEG recording during the study period. EmEEGs were considered non-informative in 100% of global amnesia diagnoses, 100% of cases of mild head trauma, 100% of cases of migraine with aura, 98.3% of transient ischemic attacks (TIAs), 95.6% of transient losses of consciousness (TLCs) when seizure was not the primary suspected diagnosis, and in 92.7% of falls of unknown dynamics. Epileptic patterns were detected in 4% of patients presenting with TLC and in 2.4% of those with falls of unknown dynamics, with approximately half of these patients having a pre-existing diagnosis of epilepsy. Triphasic waves were detected in 4.9% of patients with falls of unknown dynamics, in 1.7% with TIA, and in 0.4% with TLC. All of these patients had fever/sepsis or metabolic/electrolyte disorders. Overall, across all clinical scenarios, emEEGs were considered non-informative in 385 (19.1%) of tested patients. Conclusions: emEEGs are almost always non-informative in the diagnostic pathway for patients with global amnesia, mild head trauma, and migraine with aura, and in patients with TIA, TLC, or falls of unknown dynamics. EPs can safely consider avoiding emEEGs in the absence of previous epilepsy diagnosis, fever/sepsis, metabolic/electrolyte disturbances, or drug abuse. Full article
49 pages, 5891 KB  
Article
A Study on Autonomous Driving Motion Sickness from the Perspective of Multimodal Human Signals
by Su Young Kim and Yoon Sang Kim
Sensors 2026, 26(5), 1675; https://doi.org/10.3390/s26051675 - 6 Mar 2026
Viewed by 791
Abstract
In autonomous driving, motion sickness (MS) arises from physical or visual stimuli, or a combination of both. However, objective quantification of MS level (MSL) remains limited beyond questionnaire-based assessments. Using multimodal human signals (physiological and behavioral) collected in an autonomous driving simulator, this study addresses the association between these signals and MSL, across these MS types, by (i) screening and curating a decade of human-signal MS studies (HS-Set) to establish a data-driven foundation for selecting target sensor domains and features, (ii) constructing a dataset with subjective measures of MSL (fast motion sickness scale and simulator sickness questionnaire (SSQ)), alongside human signals (electroencephalogram (EEG), photoplethysmogram (PPG), electrodermal activity (EDA), skin temperature, and head/eye movement), (iii) conducting a correlation analysis between MSL and the identified features from HS-Set, and (iv) quantifying multivariable contributions at the feature and sensor domains through an explainable boosting machine (EBM). Key correlations include head amplitude/energy (pitch/surge) with SSQ total/oculomotor, eye entropy with nausea/oculomotor (positive), and EDA with nausea (negative). The EBM-based contribution analysis highlights EEG connectivity and head kinematics as dominant contributors; excluding EEG, the interpretability of single-domain models remains limited. Additionally, a combination of Head, PPG, and EDA domains retains over 80% of the full model’s interpretability. Full article
14 pages, 1506 KB  
Article
LightGBM-Based Seizure Detection Method in Pilocarpine Mouse Model of Epilepsy
by Mercy Edoho, Nicolas Partouche, Christiaan Warner Hoornenborg, Tycho M. Hoogland, Stéphane Baudouin, Catherine Mooney and Lan Wei
Algorithms 2026, 19(3), 167; https://doi.org/10.3390/a19030167 - 24 Feb 2026
Viewed by 489
Abstract
Electroencephalogram (EEG) has been the gold standard for measuring epileptic activity in rodent models of epilepsy. Manual scoring of seizures in EEG recordings lasting from days to months is laborious and prone to human error. The existing literature on automatic seizure detection in rodent models of epilepsy is limited, and the electrographic characteristics of induced epilepsy significantly differ from those of other epilepsy types. This study employed a Light Gradient Boosting Machine (LightGBM), with the dataset carefully partitioned into separate training and testing sets to ensure no data overlap. The model was trained using five-fold cross-validation to enhance robustness and generalisability. The training, validation, and independent test sets comprised 29,722 h of EEG recordings from 102 mice with pilocarpine-induced temporal lobe epilepsy. Following feature selection, model training, and post-processing, the LightGBM-based model exhibited a sensitivity of 80%, a specificity of 99%, and an F1-score of 0.71 on the independent test set. Multiple pairwise and non-parametric statistical tests indicated that envelope, skewness, and kurtosis, identified as the three most significant features in the feature importance ranking, exhibit statistically significant differences in their distributions (p-value < 0.05). The statistical analysis revealed significant differences across the three features and between seizure and non-seizure events for each feature, highlighting their relevance for discriminating epileptic activity. This study highlights the potential to support the automation of seizure event detection in preclinical rodent models of epilepsy. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (4th Edition))
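The three top-ranked features named in the abstract (envelope, skewness, kurtosis) can be computed per EEG window with numpy alone; the Hilbert envelope below is built from the FFT-based analytic signal, which is an assumption about the exact envelope definition the study used:

```python
import numpy as np

def analytic_envelope(x):
    """Hilbert envelope of a 1-D signal via the analytic signal,
    constructed in the frequency domain (no scipy needed)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2       # double positive frequencies
    else:
        h[1:(n + 1) // 2] = 2
    return np.abs(np.fft.ifft(X * h))

def window_features(x):
    """Per-window values of the study's three top-ranked features."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return {
        "envelope_mean": analytic_envelope(x).mean(),
        "skewness": np.mean(z ** 3),
        "kurtosis": np.mean(z ** 4) - 3.0,  # excess kurtosis
    }
```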
19 pages, 13892 KB  
Article
The Effect of Visual Landscape Design on the Emotional and Physiological Responses of Older Adults
by Yalin Zhang, Menglin Zhang, Xiangxi Li, Keming Hou and Weijun Gao
Buildings 2026, 16(4), 783; https://doi.org/10.3390/buildings16040783 - 14 Feb 2026
Viewed by 614
Abstract
Landscape quality significantly impacts residents’ well-being through visual perception, particularly among the elderly who exhibit heightened sensitivity to environmental stimuli. Therefore, this study investigates how landscape configurations influence emotional and physiological responses in older adults under controlled visual conditions. This study selected representative outdoor activity sites in northern Chinese cities and designed five landscape scenarios by adjusting the green coverage ratio (GCR) and landscape composition. Participants (mean age 64.8) reported feelings of pleasure, relaxation, and fatigue while viewing screen-based landscape images, with simultaneous recording of attention-to-interest area (AOIA), pupil diameter range (PD), and electroencephalogram (EEG) data. Research findings reveal a non-linear relationship between the GCR and emotional and physiological responses among elderly populations: when the GCR increased from 18.4% to 38.1%, participants reported significantly heightened feelings of pleasure and relaxation, alongside marked reductions in fatigue-related physiological indicators. However, when the GCR further rose to 48.5%, both reported subjective measures and physiological indicators deteriorated among elderly participants. Under equivalent green coverage conditions, water features within natural settings enhance visual focus on natural elements more effectively than purely green landscapes. Women demonstrated greater sensitivity to changes in the GCR. Correlation analysis further indicated that visual attention among the elderly positively correlated with positive emotions and negatively correlated with fatigue-related physiological responses. This research provides valuable guidance for green space design. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
15 pages, 5421 KB  
Article
Pre-Ictal EEG Augmentation Based CDCGAN Model for Epileptic Seizure Prediction
by Xindi Huang, Hongying Meng and Zhangyong Li
Technologies 2026, 14(2), 114; https://doi.org/10.3390/technologies14020114 - 12 Feb 2026
Viewed by 661
Abstract
Epilepsy is a common neurological disorder affecting over 50 million people worldwide, characterised by recurrent seizures accompanied by abnormal neuronal electrical activity. Electroencephalogram (EEG) is a technique for recording brain electrical signals, widely employed for epileptic seizure (ES) prediction due to its high temporal resolution, portability, and cost-effectiveness. However, reliable ES prediction based on EEG remains challenging, primarily owing to the limited duration of recorded pre-ictal states in publicly available datasets and the typically low signal-to-noise ratio (SNR) in non-invasive recordings. To mitigate these issues, we propose a Conditional Deep Convolutional Generative Adversarial Network (CDCGAN), which combines the representational power of Deep Convolutional Generative Adversarial Network (DCGAN) with the categorical conditioning mechanism of Conditional Generative Adversarial Network (CGAN) to generate class-specific EEG samples. By synthesising target samples, CDCGAN aims to alleviate class imbalance and enhance the quality of low-resolution spectral representations. To evaluate the practical utility of generated data, we trained a Convolutional Neural Network (CNN) on the augmented dataset and compared its performance against prior studies. Under the Leave-One-Seizure-Out cross-validation (LOSO-CV) protocol, our method achieved an average AUC of 0.876 at a 60% augmentation rate with 50 training epochs. The AUC improvement relative to corresponding control settings demonstrates that GAN-based data augmentation provides additional effective training samples for ES prediction while preserving task-relevant and discriminative pre-ictal EEG features. Full article
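The Leave-One-Seizure-Out cross-validation (LOSO-CV) protocol referenced above can be sketched in a few lines of plain Python: segments from each seizure are held out exactly once while the remaining seizures train the model.

```python
def leave_one_seizure_out(seizure_ids):
    """Yield (train_idx, test_idx) pairs where segment i belongs to
    seizure seizure_ids[i]; every seizure is held out exactly once,
    so no test segment shares a seizure with any training segment."""
    for held in sorted(set(seizure_ids)):
        train = [i for i, s in enumerate(seizure_ids) if s != held]
        test = [i for i, s in enumerate(seizure_ids) if s == held]
        yield train, test
```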
19 pages, 2186 KB  
Article
EEG Feature Extraction and Classification for Upper Limb Flexion and Extension Motor Imagery Based on Discriminative Filter Bank Common Spatial Pattern
by Yuqi Zhang and Xiaoyan Shen
Brain Sci. 2026, 16(2), 217; https://doi.org/10.3390/brainsci16020217 - 11 Feb 2026
Viewed by 508
Abstract
Background: Traditional common spatial pattern (CSP) algorithms for upper limb neural rehabilitation face inherent challenges of overlapping cortical representations and frequency sensitivity, which hinder the decoding performance of motor imagery (MI) electroencephalogram (EEG) signals. Objective: To address these issues, this study adopts an improved discriminative filter bank CSP (DFBCSP) framework and applies it to the decoding of upper limb MI-EEG signals, achieving strong classification performance. Methods: EEG data were acquired from sixteen participants performing two-class (left upper limb flexion-extension vs. relaxing) and three-class (left upper limb flexion vs. right upper limb extension vs. relaxing) MI tasks. The acquired EEG data were decomposed into nine distinct sub-bands, and a mutual information-based feature selection strategy was then applied to optimize the feature sets. These optimized feature sets were subsequently input into three classification models, namely multilayer perceptron (MLP), support vector machine (SVM), and linear discriminant analysis (LDA), for MI task classification. Results: Experimental results demonstrate that the DFBCSP + MLP method significantly outperforms the traditional CSP approach, achieving an accuracy of 94.83% (Kappa coefficient: 0.890) in two-class MI tasks and 86.20% (Kappa coefficient: 0.775) in three-class MI tasks. Conclusion: The DFBCSP + MLP framework exhibits high robustness and provides a potential technical framework and theoretical basis for future research on the rehabilitation of patients with upper limb motor dysfunction. Full article
(This article belongs to the Section Neurorehabilitation)
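The filter-bank stage described in the abstract above splits the EEG into nine sub-bands and then keeps only the most informative features. A minimal sketch of those two steps follows; the exact band edges (equal 4 Hz-wide bands over 4–40 Hz) and the use of a generic relevance score in place of the paper's mutual-information estimator are assumptions.

```python
def make_sub_bands(f_low=4.0, f_high=40.0, n_bands=9):
    """Split [f_low, f_high] Hz into n_bands equal-width sub-bands,
    mirroring the nine-band filter bank in the abstract (band edges
    are an illustrative assumption)."""
    width = (f_high - f_low) / n_bands
    return [(f_low + i * width, f_low + (i + 1) * width)
            for i in range(n_bands)]

def select_top_k(scores, k):
    """Keep the indices of the k highest-scoring features, as a
    mutual-information-style selection step would; returns them in
    ascending index order."""
    order = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    return sorted(order[:k])

# Nine 4 Hz-wide bands: (4, 8), (8, 12), ..., (36, 40)
bands = make_sub_bands()
# Keep the two features with the highest relevance scores
kept = select_top_k([0.1, 0.9, 0.5], k=2)
```

In the full DFBCSP pipeline, CSP spatial filters would be learned per sub-band and the selected features passed to the MLP, SVM, or LDA classifier.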

39 pages, 12426 KB  
Article
Fine-Grained Implicit Intention Pattern Recognition for Key Interactive Tasks in Industrial Human–Machine Collaboration
by Xiu Miao, Wenjun Hou and Zhichun Li
Symmetry 2026, 18(2), 317; https://doi.org/10.3390/sym18020317 - 9 Feb 2026
Abstract
The information symmetry between humans and machines can enhance mutual perception and understanding, leading to more robust cooperation. Intention recognition is a key technology in natural human–machine collaboration (HMC). However, as system complexity increases, the volume of information and the variety of tasks grow, leading to continuous dynamic changes in interaction intentions. New sensing technologies such as electroencephalogram (EEG) provide a continuous and unobtrusive way to identify intentions accurately and effectively. Yet the complexity of physiological responses and the uncertain nature of intention make cross-subject recognition difficult, resulting in poor generalization performance and coarse-grained recognition patterns. To address these limitations, we proposed a framework for modeling tasks in complex systems and applied it to an industrial system. We then used operators' EEG data to recognize fine-grained intention patterns within typical task scenarios, such as production monitoring and communication tasks. By feeding the improved multi-channel phase synchronization features into a machine learning classifier, cross-subject accuracy rates of 99.42% and 99.91% were achieved. This work furnishes systematic, field-tested cases for task modeling in the industrial field, demonstrates high-performance implicit intention recognition with a single EEG modality, and refines the granularity of implicit intention recognition. It provides theoretical underpinnings and technical support for both human–machine information symmetry and the advancement of HMC hybrid intelligence. Full article
(This article belongs to the Special Issue Symmetry/Asymmetry in Computer-Aided Industrial Design)
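The phase synchronization features mentioned in the abstract above are commonly built from pairwise measures such as the Phase Locking Value (PLV) between two channels' instantaneous phases. The sketch below shows the standard PLV, |mean(exp(i(φa − φb)))|; the paper's "improved multi-channel" variant is not reproduced here, and the input phase series are assumed to come from an earlier analytic-signal step.

```python
import cmath

def phase_locking_value(phases_a, phases_b):
    """Phase Locking Value between two channels, given their
    instantaneous phases in radians: the magnitude of the mean
    complex phase difference. 1.0 means perfectly locked phases,
    values near 0 mean no consistent phase relationship."""
    if len(phases_a) != len(phases_b) or not phases_a:
        raise ValueError("phase series must be non-empty and equal length")
    acc = sum(cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b))
    return abs(acc) / len(phases_a)

# Identical phase series are perfectly locked (PLV = 1.0)
plv = phase_locking_value([0.1, 0.5, 1.2], [0.1, 0.5, 1.2])
```

Computing PLV over all channel pairs yields a synchronization matrix, whose entries can serve as the classifier's feature vector.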

25 pages, 4607 KB  
Article
Integrating EEG Sensors with Virtual Reality to Support Students with ADHD
by Juriaan Wolfers, William Hurst and Caspar Krampe
Sensors 2026, 26(3), 1017; https://doi.org/10.3390/s26031017 - 4 Feb 2026
Abstract
Students with attention deficit hyperactivity disorder (ADHD) face a continuous challenge with their attention span, putting them at greater risk of academic or psychological difficulties than their peers. Innovative communication technologies are demonstrating potential to address these attention-span concerns, and Virtual Reality (VR) is one such example. Accordingly, this study presents an EEG-based multimodal sensing pipeline as a methodological contribution, focusing on sensor-based data acquisition, signal processing, and neurophysiological interpretation to assess attention in VR-based environments simulating a university supply chain educational topic. In this paper, a sequential exploratory approach investigated how 35 participants experienced an interactive VR-driven supply chain learning game. A Brain–Computer Interface (BCI) sensor generated insights by quantitatively analysing electroencephalogram (EEG) data that were processed through the proposed pipeline and integrated with subjective measures to validate participants' subjective experiences. These subjective measures originated from questions administered during the experiment, following the Spatial Presence and Technology Acceptance Models, to form a multimodal assessment framework. Findings demonstrated that the experimental group showed greater improvements in attention, concentration, engagement, and focus than the control group. BCI results from the experimental group showed more dominant voltage potentials in the right frontal and prefrontal cortex, in areas responsible for attention, memory, and decision-making. The high acceptance of VR technology among neurodiverse students highlights the added benefits of multimodal learning assessment methods in an educational setting. Full article
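Attention and engagement levels of the kind reported in the abstract above are often summarised from EEG band powers. One widely used metric (illustrative here; the study's exact pipeline is not specified in the abstract) is the engagement index beta / (alpha + theta):

```python
def engagement_index(theta, alpha, beta):
    """Classic EEG engagement index beta / (alpha + theta), computed
    from band powers. Higher beta relative to the slower alpha and
    theta bands is conventionally read as higher engagement. This is
    a common metric, not necessarily the one used in the study."""
    if theta < 0 or alpha < 0 or beta < 0 or alpha + theta == 0:
        raise ValueError("band powers must be non-negative with alpha + theta > 0")
    return beta / (alpha + theta)

# Illustrative band powers (theta=4, alpha=6, beta=5) give an index of 0.5
idx = engagement_index(4.0, 6.0, 5.0)
```

Tracking such an index per epoch across experimental and control groups is one way a pipeline like the one described could quantify attention differences between VR conditions.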
