Search Results (194)

Search Parameters:
Keywords = EEG-based adaptation

19 pages, 1175 KB  
Article
NAMI: A Neuro-Adaptive Multimodal Architecture for Wearable Human–Computer Interaction
by Christos Papakostas, Christos Troussas, Akrivi Krouska and Cleo Sgouropoulou
Multimodal Technol. Interact. 2025, 9(10), 108; https://doi.org/10.3390/mti9100108 - 18 Oct 2025
Viewed by 58
Abstract
The increasing ubiquity of wearable computing and multimodal interaction technologies has created unprecedented opportunities for natural and seamless human–computer interaction. However, most existing systems adapt only to external user actions such as speech, gesture, or gaze, without considering internal cognitive or affective states. This limits their ability to provide intelligent and empathetic adaptations. This paper addresses this critical gap by proposing the Neuro-Adaptive Multimodal Architecture (NAMI), a principled, modular, and reproducible framework designed to integrate behavioral and neurophysiological signals in real time. NAMI combines multimodal behavioral inputs with lightweight EEG and peripheral physiological measurements to infer cognitive load and engagement and adapt the interface dynamically to optimize user experience. The architecture is formally specified as a three-layer pipeline encompassing sensing and acquisition, cognitive–affective state estimation, and adaptive interaction control, with clear data flows, mathematical formalization, and real-time performance on wearable platforms. A prototype implementation of NAMI was deployed in an augmented reality Java programming tutor for postgraduate informatics students, where it dynamically adjusted task difficulty, feedback modality, and assistance frequency based on inferred user state. Empirical evaluation with 100 participants demonstrated significant improvements in task performance, reduced subjective workload, and increased engagement and satisfaction, confirming the effectiveness of the neuro-adaptive approach. Full article
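
As a rough illustration of the three-layer pipeline the abstract outlines (sensing and acquisition, cognitive–affective state estimation, adaptive interaction control), here is a minimal Python sketch. The channel counts, the theta/alpha load proxy, and the adaptation thresholds are illustrative assumptions, not the authors' NAMI implementation.

```python
import numpy as np
from scipy.signal import welch

# Minimal sketch of a three-layer neuro-adaptive loop: sensing ->
# cognitive-affective state estimation -> adaptive interaction control.
# Channel counts, the theta/alpha load proxy, and thresholds are assumptions.

def acquire_window(rng, n_channels=4, fs=256, seconds=2):
    """Layer 1: stand-in for reading one EEG window from a wearable headset."""
    return rng.standard_normal((n_channels, fs * seconds))

def estimate_load(eeg_window, fs=256):
    """Layer 2: crude cognitive-load proxy from a theta/alpha power ratio."""
    freqs, psd = welch(eeg_window, fs=fs, nperseg=fs)
    theta = psd[:, (freqs >= 4) & (freqs < 8)].mean()
    alpha = psd[:, (freqs >= 8) & (freqs < 13)].mean()
    return theta / (alpha + 1e-12)            # higher ratio ~ higher load

def adapt_interface(load, high=1.2, low=0.8):
    """Layer 3: map the inferred state to an interface adaptation."""
    if load > high:
        return {"difficulty": "decrease", "feedback": "multimodal hints"}
    if load < low:
        return {"difficulty": "increase", "feedback": "terse"}
    return {"difficulty": "keep", "feedback": "keep"}

rng = np.random.default_rng(0)
for _ in range(3):
    print(adapt_interface(estimate_load(acquire_window(rng))))
```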

22 pages, 5387 KB  
Article
EEG-Based Personal Identification by Special Design Domain-Adaptive Autoencoder
by Muhammed Esad Oztemel and Ömer Muhammet Soysal
Sensors 2025, 25(20), 6457; https://doi.org/10.3390/s25206457 - 18 Oct 2025
Viewed by 65
Abstract
Individual brain activity patterns derived from electroencephalogram (EEG) data offer a unique source for personal identification, introducing a novel approach to the field. Autoencoders are well-known machine learning models that automate feature extraction, which is a crucial step in biometric identification. Among various types of autoencoders, the domain-adaptive autoencoder (DAAE) is explored for feature extraction. The extracted latent features are employed by four machine learning classifiers, KNN, ANN, SVM and RF, for personal identification. Two domain adaptation approaches were presented. The proposed frameworks were evaluated in a longitudinal setting, using three types of EEG recordings: resting state, auditory and cognitive stimuli. Model performance was assessed through experiments involving seven-, five- and two-subject classification tasks. The highest identification accuracy, 100%, was achieved by the SVM-based model in the two-subject experiment, using features extracted with the uniform referential DAAE. Similarly, the RF-based model attained an accuracy of 99.84% in the two-subject experiment when trained on features obtained from the softmin referential DAAE. As expected, accuracy declined with an increasing number of subjects in the dataset, reflecting the difficulty of multi-subject classification. Full article
(This article belongs to the Section Biomedical Sensors)
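
To make the "latent features feed a classifier" step concrete, here is a minimal sketch of a plain (non-domain-adaptive) autoencoder whose latent codes are classified with an SVM, in the spirit of the pipeline above. Feature dimensions, network sizes, and the synthetic two-subject data are assumptions; the paper's DAAE variants are not reproduced.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Plain autoencoder + SVM sketch: learn latent EEG features by reconstruction,
# then classify subject identity from the latent codes. Sizes are assumptions.
rng = np.random.default_rng(0)
X = rng.standard_normal((400, 64)).astype(np.float32)   # 400 epochs x 64 features
y = rng.integers(0, 2, 400)                              # two-subject task

class AE(nn.Module):
    def __init__(self, d_in=64, d_lat=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(), nn.Linear(32, d_lat))
        self.dec = nn.Sequential(nn.Linear(d_lat, 32), nn.ReLU(), nn.Linear(32, d_in))
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

model, loss_fn = AE(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xt = torch.from_numpy(X)
for _ in range(200):                      # short unsupervised reconstruction training
    opt.zero_grad()
    recon, _ = model(xt)
    loss_fn(recon, xt).backward()
    opt.step()

with torch.no_grad():
    Z = model(xt)[1].numpy()              # latent features for the classifier
Ztr, Zte, ytr, yte = train_test_split(Z, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(Ztr, ytr)
print("identification accuracy:", clf.score(Zte, yte))
```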

20 pages, 2820 KB  
Article
Wearable EEG Sensor Analysis for Cognitive Profiling in Educational Contexts
by Eleni Lekati, Georgios N. Dimitrakopoulos, Konstantinos Lazaros, Panagiota Giannopoulou, Aristidis G. Vrahatis, Marios G. Krokidis, Panagiotis Vlamos and Spyridon Doukakis
Sensors 2025, 25(20), 6446; https://doi.org/10.3390/s25206446 - 18 Oct 2025
Viewed by 102
Abstract
Electroencephalography (EEG) provides a powerful means of capturing real-time neural activity, enabling the study of cognitive processes during complex learning tasks. This study explores the application of wearable EEG and advanced signal analysis to examine cognitive profiles of 30 sixth-grade students engaged in fraction learning. Using validated estimations alongside interactive digital tools such as Fraction Lab and the Diamond Paper task, EEG recordings were processed to evaluate spectral dynamics across delta, theta, alpha, and beta bands. Results revealed that lower-performing students exhibited elevated delta and theta power under cognitive load, whereas higher-performing students showed more stable beta activity linked to cognitive control. These findings highlight the utility of EEG-based signal analysis for identifying neurocognitive markers associated with conceptual and procedural knowledge (PK) in mathematics. The integration of such methodologies supports the development of precision-oriented educational strategies grounded in objective neural data. Clustering further revealed three learner profiles: Core Support Needed, Developing, and Advanced, while classification analyses confirmed that EEG features, especially gamma and beta oscillations, reliably distinguished among them, underscoring the potential of neurocognitive markers to guide adaptive instruction. Full article
(This article belongs to the Special Issue Recent Advances in Wearable and Non-Invasive Sensors)
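
The core signal-analysis step above is per-band spectral power. A minimal sketch of relative band power across the delta, theta, alpha, and beta bands follows; the sampling rate, band edges, and channel count are assumptions rather than the study's exact settings.

```python
import numpy as np
from scipy.signal import welch

# Relative band power per channel, the kind of spectral feature compared across
# learner groups above. Sampling rate and band definitions are assumptions.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def relative_band_power(eeg, fs=250):
    """eeg: (n_channels, n_samples) -> dict of band -> (n_channels,) relative power."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    broad = (freqs >= 1) & (freqs <= 30)
    total = np.trapz(psd[:, broad], freqs[broad], axis=1)
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        out[name] = np.trapz(psd[:, mask], freqs[mask], axis=1) / total
    return out

rng = np.random.default_rng(1)
eeg = rng.standard_normal((8, 250 * 10))        # 8 channels, 10 s of data
for band, power in relative_band_power(eeg).items():
    print(band, power.round(3))
```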

26 pages, 1351 KB  
Review
Trends and Limitations in Transformer-Based BCI Research
by Maximilian Achim Pfeffer, Johnny Kwok Wai Wong and Sai Ho Ling
Appl. Sci. 2025, 15(20), 11150; https://doi.org/10.3390/app152011150 - 17 Oct 2025
Viewed by 247
Abstract
Transformer-based models have accelerated EEG motor imagery (MI) decoding by using self-attention to capture long-range temporal structures while complementing spatial inductive biases. This systematic survey of Scopus-indexed works from 2020 to 2025 indicates that reported advances are concentrated in offline, protocol-heterogeneous settings; inconsistent preprocessing, non-standard data splits, and sparse efficiency reporting frequently cloud claims of generalization and real-time suitability. Under session- and subject-aware evaluation on the BCIC IV 2a/2b dataset, typical performance clusters are in the high-80% range for binary MI and the mid-70% range for multi-class tasks, with gains of roughly 5–10 percentage points achieved by strong hybrids (CNN/TCN–Transformer; hierarchical attention) rather than by extreme figures often driven by leakage-prone protocols. In parallel, transformer-driven denoising—particularly diffusion–transformer hybrids—yields strong signal-level metrics but remains weakly linked to task benefit; denoise → decode validation is rarely standardized despite being the most relevant proxy when artifact-free ground truth is unavailable. Three priorities emerge for translation: protocol discipline (fixed train/test partitions, transparent preprocessing, mandatory reporting of parameters, FLOPs, per-trial latency, and acquisition-to-feedback delay); task relevance (shared denoise → decode benchmarks for MI and related paradigms); and adaptivity at scale (self-supervised pretraining on heterogeneous EEG corpora and resource-aware co-optimization of preprocessing and hybrid transformer topologies). Evidence from subject-adjusting evolutionary pipelines that jointly tune preprocessing, attention depth, and CNN–Transformer fusion demonstrates reproducible inter-subject gains over established baselines under controlled protocols. Implementing these practices positions transformer-driven BCIs to move beyond inflated offline estimates toward reliable, real-time neurointerfaces with concrete clinical and assistive relevance. Full article
(This article belongs to the Special Issue Brain-Computer Interfaces: Development, Applications, and Challenges)
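
The "protocol discipline" point above comes down to subject- and session-aware data splits. A minimal sketch of leave-one-subject-out evaluation with scikit-learn's GroupKFold follows; the data shapes and the placeholder linear classifier are assumptions used only to show the split mechanics.

```python
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.linear_model import LogisticRegression

# Subject-aware cross-validation: all trials of a subject stay on one side of
# the split, avoiding the leakage-prone protocols the review criticizes.
rng = np.random.default_rng(0)
X = rng.standard_normal((180, 20))          # 180 trials x 20 features
y = rng.integers(0, 2, 180)                 # binary motor-imagery labels
subjects = np.repeat(np.arange(9), 20)      # 9 subjects, 20 trials each

scores = []
for train_idx, test_idx in GroupKFold(n_splits=9).split(X, y, groups=subjects):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))
print("leave-one-subject-out mean accuracy:", np.mean(scores).round(3))
```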

20 pages, 2565 KB  
Article
GBV-Net: Hierarchical Fusion of Facial Expressions and Physiological Signals for Multimodal Emotion Recognition
by Jiling Yu, Yandong Ru, Bangjun Lei and Hongming Chen
Sensors 2025, 25(20), 6397; https://doi.org/10.3390/s25206397 - 16 Oct 2025
Viewed by 297
Abstract
A core challenge in multimodal emotion recognition lies in the precise capture of the inherent multimodal interactive nature of human emotions. Addressing the limitation of existing methods, which often process visual signals (facial expressions) and physiological signals (EEG, ECG, EOG, and GSR) in isolation and thus fail to exploit their complementary strengths effectively, this paper presents a new multimodal emotion recognition framework called the Gated Biological Visual Network (GBV-Net). This framework enhances emotion recognition accuracy through deep synergistic fusion of facial expressions and physiological signals. GBV-Net integrates three core modules: (1) a facial feature extractor based on a modified ConvNeXt V2 architecture incorporating lightweight Transformers, specifically designed to capture subtle spatio-temporal dynamics in facial expressions; (2) a hybrid physiological feature extractor combining 1D convolutions, Temporal Convolutional Networks (TCNs), and convolutional self-attention mechanisms, adept at modeling local patterns and long-range temporal dependencies in physiological signals; and (3) an enhanced gated attention fusion module capable of adaptively learning inter-modal weights to achieve dynamic, synergistic integration at the feature level. A thorough investigation of the publicly accessible DEAP and MAHNOB-HCI datasets reveals that GBV-Net surpasses contemporary methods. Specifically, on the DEAP dataset, the model attained classification accuracies of 95.10% for Valence and 95.65% for Arousal, with F1-scores of 95.52% and 96.35%, respectively. On MAHNOB-HCI, the accuracies achieved were 97.28% for Valence and 97.73% for Arousal, with F1-scores of 97.50% and 97.74%, respectively. These experimental findings substantiate that GBV-Net effectively captures deep-level interactive information between multimodal signals, thereby improving emotion recognition accuracy. Full article
(This article belongs to the Section Biomedical Sensors)
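
The third module above, gated attention fusion of the two modalities, can be sketched in a few lines of PyTorch. The embedding dimensions and the single sigmoid gate are illustrative assumptions, not the GBV-Net architecture itself.

```python
import torch
import torch.nn as nn

# Sketch of a gated fusion layer in the spirit of the module described above:
# learn per-feature gates that reweight concatenated facial and physiological
# embeddings before a joint emotion classifier. Dimensions are illustrative.
class GatedFusion(nn.Module):
    def __init__(self, d_face=128, d_phys=64, d_out=4):
        super().__init__()
        d = d_face + d_phys
        self.gate = nn.Sequential(nn.Linear(d, d), nn.Sigmoid())
        self.head = nn.Linear(d, d_out)

    def forward(self, face_feat, phys_feat):
        fused = torch.cat([face_feat, phys_feat], dim=-1)
        gated = fused * self.gate(fused)     # adaptive per-feature weighting
        return self.head(gated)

model = GatedFusion()
face = torch.randn(8, 128)                   # batch of facial-expression embeddings
phys = torch.randn(8, 64)                    # batch of EEG/ECG/EOG/GSR embeddings
print(model(face, phys).shape)               # torch.Size([8, 4])
```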

22 pages, 2258 KB  
Article
Designing Light for Emotion: A Neurophysiological Approach to Modeling Affective Responses to the Interplay of Color and Illuminance
by Xuejiao Li, Ruili Wang and Mincheol Whang
Biomimetics 2025, 10(10), 696; https://doi.org/10.3390/biomimetics10100696 - 14 Oct 2025
Viewed by 433
Abstract
As the influence of indoor environments on human emotional regulation and cognitive function becomes increasingly critical in modern society, there is a growing need for intelligent lighting systems that dynamically respond to users’ emotional states. While previous studies have investigated either illuminance or color in isolation, this study concentrates on quantitatively analyzing the interaction of these two key elements on human emotion and cognitive control capabilities. Utilizing electroencephalography (EEG) and electrocardiography (ECG) signals, we measured participants’ physiological responses and subjective emotional assessments in 18 unique lighting conditions, combining six colors and three levels of illuminance. The results confirmed that the interaction between light color and illuminance significantly affects physiological indicators related to emotion regulation. Notably, low-illuminance purple lighting was found to promote positive emotions and inhibit negative ones by increasing frontal alpha asymmetry (FAA) and gamma wave activity. Conversely, low-illuminance environments generally diminished cognitive reappraisal and negative emotion inhibition capabilities. Furthermore, a random forest model integrating time-series data from EEG and ECG predicted emotional valence and arousal with accuracies of 87% and 79%, respectively, demonstrating the validity of multi-modal physiological signal-based emotion prediction. This study provides empirical data and a theoretical foundation for the development of human-centered, emotion-adaptive lighting systems by presenting a quantitative causal model linking lighting, physiological responses, and emotion. These findings also provide a biomimetic perspective by linking lighting-induced physiological responses with emotion regulation, offering a foundation for the development of adaptive lighting systems that emulate natural light–human interactions. Full article
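
The final modeling step described above, predicting emotional valence and arousal from combined EEG and ECG features with a random forest, can be sketched generically as below. The feature names, sizes, and synthetic labels are assumptions; the study's actual feature set is not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Generic sketch: predict binary valence from combined EEG/ECG features with a
# random forest. Feature dimensions and labels here are synthetic placeholders.
rng = np.random.default_rng(0)
eeg_feats = rng.standard_normal((180, 12))    # e.g., FAA and band powers per condition
ecg_feats = rng.standard_normal((180, 4))     # e.g., heart-rate variability measures
X = np.hstack([eeg_feats, ecg_feats])
valence = rng.integers(0, 2, 180)             # positive vs. negative label

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, valence, cv=5).mean().round(3))
```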

18 pages, 4337 KB  
Article
A Transformer-Based Multimodal Fusion Network for Emotion Recognition Using EEG and Facial Expressions in Hearing-Impaired Subjects
by Shuni Feng, Qingzhou Wu, Kailin Zhang and Yu Song
Sensors 2025, 25(20), 6278; https://doi.org/10.3390/s25206278 - 10 Oct 2025
Viewed by 366
Abstract
Hearing-impaired people face challenges in expressing and perceiving emotions, and traditional single-modal emotion recognition methods demonstrate limited effectiveness in complex environments. To enhance recognition performance, this paper proposes a multimodal multi-head attention fusion neural network (MMHA-FNN). This method utilizes differential entropy (DE) and bilinear interpolation features as inputs, learning the spatial–temporal characteristics of brain regions through an MBConv-based module. By incorporating the Transformer-based multi-head self-attention mechanism, we dynamically model the dependencies between EEG and facial expression features, enabling adaptive weighting and deep interaction of cross-modal characteristics. The experiments involved a four-class classification task on the MED-HI dataset (15 subjects, 300 trials). The taxonomy included happy, sad, fear, and calmness, where ‘calmness’ corresponds to a low-arousal neutral state as defined in the MED-HI protocol. Results indicate that the proposed method achieved an average accuracy of 81.14%, significantly outperforming feature concatenation (71.02%) and decision-layer fusion (69.45%). This study demonstrates the complementary nature of EEG and facial expressions in emotion recognition among hearing-impaired individuals and validates the effectiveness of attention-based feature-layer interaction fusion in enhancing emotion recognition performance. Full article
(This article belongs to the Section Biomedical Sensors)
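
One of the two EEG inputs named above is the differential entropy (DE) feature. For a band-passed, approximately Gaussian signal, DE reduces to 0.5 * ln(2πe * variance); a minimal sketch follows, with the band edges and sampling rate as assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Differential entropy (DE) of a band-passed EEG segment. Under a Gaussian
# assumption, DE = 0.5 * ln(2 * pi * e * variance). Band and fs are assumptions.
def differential_entropy(x, fs=200, band=(4, 8)):
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, x)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(filtered))

rng = np.random.default_rng(0)
segment = rng.standard_normal(200 * 4)        # 4 s of one EEG channel
print("theta-band DE:", differential_entropy(segment).round(3))
```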

15 pages, 1613 KB  
Article
EEG-Powered UAV Control via Attention Mechanisms
by Jingming Gong, He Liu, Liangyu Zhao, Taiyo Maeda and Jianting Cao
Appl. Sci. 2025, 15(19), 10714; https://doi.org/10.3390/app151910714 - 4 Oct 2025
Viewed by 329
Abstract
This paper explores the development and implementation of a brain–computer interface (BCI) system that utilizes electroencephalogram (EEG) signals for real-time monitoring of attention levels to control unmanned aerial vehicles (UAVs). We propose an innovative approach that combines spectral power analysis and machine learning classification techniques to translate cognitive states into precise UAV command signals. This method overcomes the limitations of traditional threshold-based approaches by adapting to individual differences and improving classification accuracy. Through comprehensive testing with 20 participants in both controlled laboratory environments and real-world scenarios, our system achieved an 85% accuracy rate in distinguishing between high and low attention states and successfully mapped these cognitive states to vertical UAV movements. Experimental results demonstrate that our machine learning-based classification method significantly enhances system robustness and adaptability in noisy environments. This research not only advances UAV operability through neural interfaces but also broadens the practical applications of BCI technology in aviation. Our findings contribute to the expanding field of neurotechnology and underscore the potential for neural signal processing and machine learning integration to revolutionize human–machine interaction in industries where dynamic relationships between cognitive states and automated systems are beneficial. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
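
To illustrate the spectral-power-to-command mapping described above, here is a minimal sketch that computes a common engagement ratio, beta / (alpha + theta), and thresholds it into a vertical UAV command. The ratio and threshold are standard heuristics used for illustration, not the paper's trained classifier.

```python
import numpy as np
from scipy.signal import welch

# Sketch of mapping an EEG attention estimate to a vertical UAV command.
# The engagement ratio and the fixed threshold are illustrative assumptions.
def attention_index(eeg, fs=256):
    freqs, psd = welch(eeg, fs=fs, nperseg=fs)
    theta = psd[(freqs >= 4) & (freqs < 8)].mean()
    alpha = psd[(freqs >= 8) & (freqs < 13)].mean()
    beta = psd[(freqs >= 13) & (freqs < 30)].mean()
    return beta / (alpha + theta + 1e-12)

def uav_command(index, threshold=0.6):
    return "ascend" if index > threshold else "descend"

rng = np.random.default_rng(0)
eeg = rng.standard_normal(256 * 2)            # 2 s single-channel window
idx = attention_index(eeg)
print(f"attention index {idx:.2f} -> {uav_command(idx)}")
```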

34 pages, 4605 KB  
Article
Forehead and In-Ear EEG Acquisition and Processing: Biomarker Analysis and Memory-Efficient Deep Learning Algorithm for Sleep Staging with Optimized Feature Dimensionality
by Roberto De Fazio, Şule Esma Yalçınkaya, Ilaria Cascella, Carolina Del-Valle-Soto, Massimo De Vittorio and Paolo Visconti
Sensors 2025, 25(19), 6021; https://doi.org/10.3390/s25196021 - 1 Oct 2025
Viewed by 604
Abstract
Advancements in electroencephalography (EEG) technology and feature extraction methods have paved the way for wearable, non-invasive systems that enable continuous sleep monitoring outside clinical environments. This study presents the development and evaluation of an EEG-based acquisition system for sleep staging, which can be adapted for wearable applications. The system utilizes a custom experimental setup with the ADS1299EEG-FE-PDK evaluation board to acquire EEG signals from the forehead and in-ear regions under various conditions, including visual and auditory stimuli. Afterward, the acquired signals were processed to extract a wide range of features in time, frequency, and non-linear domains, selected based on their physiological relevance to sleep stages and disorders. The feature set was reduced using the Minimum Redundancy Maximum Relevance (mRMR) algorithm and Principal Component Analysis (PCA), resulting in a compact and informative subset of principal components. Experiments were conducted on the Bitbrain Open Access Sleep (BOAS) dataset to validate the selected features and assess their robustness across subjects. The feature set extracted from a single EEG frontal derivation (F4-F3) was then used to train and test a two-step deep learning model that combines Long Short-Term Memory (LSTM) and dense layers for 5-class sleep stage classification, utilizing attention and augmentation mechanisms to mitigate the natural imbalance of the feature set. The results—overall accuracies of 93.5% and 94.7% using the reduced feature sets (94% and 98% cumulative explained variance, respectively) and 97.9% using the complete feature set—demonstrate the feasibility of obtaining a reliable classification using a single EEG derivation, mainly for unobtrusive, home-based sleep monitoring systems. Full article
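
The dimensionality-reduction step above can be sketched with scikit-learn's PCA, which accepts a target cumulative explained variance directly (the abstract cites 94% and 98% variants). The mRMR ranking stage and the LSTM classifier are omitted here; the feature matrix is a synthetic placeholder.

```python
import numpy as np
from sklearn.decomposition import PCA

# Keep enough principal components to reach a target cumulative explained
# variance. Feature counts are assumptions; mRMR ranking is omitted for brevity.
rng = np.random.default_rng(0)
features = rng.standard_normal((1000, 60))    # 1000 sleep epochs x 60 features

pca = PCA(n_components=0.94)                  # retain 94% cumulative variance
reduced = pca.fit_transform(features)
print("components kept:", pca.n_components_, "reduced shape:", reduced.shape)
```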

18 pages, 1949 KB  
Article
EEG-Based Analysis of Motor Imagery and Multi-Speed Passive Pedaling: Implications for Brain–Computer Interfaces
by Cristian Felipe Blanco-Diaz, Aura Ximena Gonzalez-Cely, Denis Delisle-Rodriguez and Teodiano Freire Bastos-Filho
Signals 2025, 6(4), 52; https://doi.org/10.3390/signals6040052 - 1 Oct 2025
Viewed by 366
Abstract
Decoding motor imagery (MI) of lower-limb movements from electroencephalography (EEG) signals remains a challenge due to the involvement of deep cortical regions, limiting the applicability of Brain–Computer Interfaces (BCIs). This study proposes a novel protocol that combines passive pedaling (PP) as sensory priming with MI at different speeds (30, 45, and 60 rpm) to improve EEG-based classification. Ten healthy participants performed PP followed by MI tasks while EEG data were recorded. An increase in spectral relative power around Cz associated with both PP and MI was observed, varying with speed and suggesting that PP may enhance cortical engagement during MI. Furthermore, our classification strategy, based on Convolutional Neural Networks (CNNs), achieved an accuracy of 0.87–0.89 across four classes (three speeds and rest). This performance was also compared with the standard Common Spatial Patterns (CSP) and Linear Discriminant Analysis (LDA), which achieved an accuracy of 0.67–0.76. These results demonstrate the feasibility of multiclass decoding of imagined pedaling velocities and lay the groundwork for speed-adaptive BCIs, supporting future personalized and user-centered neurorehabilitation interventions. Full article
(This article belongs to the Special Issue Advances in Biomedical Signal Processing and Analysis)
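
The CSP + LDA baseline the authors compare against is a standard pipeline that can be sketched with MNE-Python and scikit-learn. The example below is binary and uses synthetic epochs (the paper addresses a four-class problem); shapes and CSP settings are assumptions.

```python
import numpy as np
from mne.decoding import CSP                      # assumes MNE-Python is installed
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Sketch of a CSP + LDA baseline on synthetic epochs (binary case shown).
rng = np.random.default_rng(0)
epochs = rng.standard_normal((80, 16, 500))       # 80 trials x 16 channels x 500 samples
labels = rng.integers(0, 2, 80)                   # e.g., one MI speed vs. rest

clf = make_pipeline(CSP(n_components=4, log=True), LinearDiscriminantAnalysis())
print("CSP+LDA CV accuracy:", cross_val_score(clf, epochs, labels, cv=5).mean().round(3))
```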

28 pages, 1342 KB  
Article
Cognitively Inspired Federated Learning Framework for Interpretable and Privacy-Secured EEG Biomarker Prediction of Depression Relapse
by Sana Yasin, Umar Draz, Tariq Ali, Mohammad Hijji, Muhammad Ayaz, El-Hadi M. Aggoune and Isha Yasin
Bioengineering 2025, 12(10), 1032; https://doi.org/10.3390/bioengineering12101032 - 26 Sep 2025
Viewed by 418
Abstract
Depression relapse is a common issue during long-term care. We introduce a privacy-aware, explainable personalized federated learning (PFL) framework that incorporates layer-wise relevance propagation and Shapley value analysis to provide patient-specific, interpretable predictions from EEG. The study uses the publicly available Healthy Brain Network (HBN) dataset, analyzing n = 100 subjects with resting-state 128-channel EEG and accompanying psychometric scores; subject-wise 10-fold cross-validation is used to assess model performance. Multi-channel EEG features and standardized symptom scales are jointly modeled to both increase the clinical context of the model and avoid leakage issues. This yields overall accuracy, precision, recall, and F1-score values of 92%, 91%, 93%, and 90.5%, respectively. The model’s attribution maps suggest region-anchored spectral patterns associated with relapse risk, providing clinical interpretability, and the federated design allows for privacy-aware training that is more easily adaptable to multi-site deployment. Together, these results suggest a scalable and clinically feasible approach to trustworthy relapse monitoring and earlier intervention. Full article
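
The privacy argument above rests on the federated setup: sites share model weights, never raw EEG. A generic federated-averaging (FedAvg) sketch follows; the logistic-regression local update and the synthetic per-site data are assumptions, and the paper's personalized FL algorithm is not reproduced.

```python
import numpy as np

# Minimal FedAvg sketch: each site trains locally and only model weights are
# shared, never raw EEG. Local model and data are synthetic placeholders.
def local_update(weights, X, y, lr=0.1, epochs=5):
    w = weights.copy()
    for _ in range(epochs):                      # plain logistic-regression gradient steps
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
sites = [(rng.standard_normal((50, 10)), rng.integers(0, 2, 50)) for _ in range(4)]
global_w = np.zeros(10)
for round_ in range(10):
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_ws, axis=0)         # FedAvg: average client weights
print("global weight norm after 10 rounds:", np.linalg.norm(global_w).round(3))
```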

16 pages, 1473 KB  
Article
MASleepNet: A Sleep Staging Model Integrating Multi-Scale Convolution and Attention Mechanisms
by Zhiyuan Wang, Zian Gong, Tengjie Wang, Qi Dong, Zhentao Huang, Shanwen Zhang and Yahong Ma
Biomimetics 2025, 10(10), 642; https://doi.org/10.3390/biomimetics10100642 - 23 Sep 2025
Viewed by 519
Abstract
With the rapid development of modern industry, people’s living pressures are gradually increasing, and an increasing number of individuals are affected by sleep disorders such as insomnia, hypersomnia, and sleep apnea syndrome. Many cardiovascular and psychiatric diseases are also closely related to sleep. Therefore, the early detection, accurate diagnosis, and treatment of sleep disorders have become an urgent research priority. Traditional manual sleep staging methods have many problems, such as being time-consuming and cumbersome, relying on expert experience, or being subjective. To address these issues, researchers have proposed multiple deep learning strategies for automated sleep staging in recent years. This paper studies MASleepNet, a sleep staging neural network model that integrates multimodal deep features. This model takes multi-channel Polysomnography (PSG) signals (including EEG (Fpz-Cz, Pz-Oz), EOG, and EMG) as input and employs a multi-scale convolutional module to extract features at different time scales in parallel. It then adaptively weights and fuses the features from each modality using a channel-wise attention mechanism. The fused temporal features are then fed into a Bidirectional Long Short-Term Memory (BiLSTM) sequence encoder, where an attention mechanism is introduced to identify key temporal segments. The final classification result is produced by the fully connected layer. The proposed model was experimentally evaluated on the Sleep-EDF dataset (consisting of two subsets, Sleep-EDF-78 and Sleep-EDF-20), achieving classification accuracies of 82.56% and 84.53% on the two subsets, respectively. These results demonstrate that deep models integrating multimodal signals and attention mechanisms can improve automatic sleep staging relative to cutting-edge methods. Full article
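
Two of the ingredients named above, parallel convolutions at different temporal scales followed by channel-wise attention, can be sketched compactly in PyTorch. The kernel sizes, channel counts, and squeeze-and-excitation-style attention are illustrative assumptions, not the MASleepNet configuration.

```python
import torch
import torch.nn as nn

# Sketch: parallel convolutions at different temporal scales plus a channel
# attention that reweights the resulting feature maps. Sizes are illustrative.
class MultiScaleBlock(nn.Module):
    def __init__(self, in_ch=1, out_ch=16, kernels=(25, 75, 125)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernels)
        ch = out_ch * len(kernels)
        self.attn = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                  nn.Linear(ch, ch // 4), nn.ReLU(),
                                  nn.Linear(ch // 4, ch), nn.Sigmoid())

    def forward(self, x):                        # x: (batch, 1, n_samples)
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        weights = self.attn(feats).unsqueeze(-1) # per-channel attention weights
        return feats * weights

block = MultiScaleBlock()
epoch = torch.randn(4, 1, 3000)                  # 4 EEG epochs of 30 s at 100 Hz
print(block(epoch).shape)                        # torch.Size([4, 48, 3000])
```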

23 pages, 5635 KB  
Article
Attention-Based Transfer Enhancement Network for Cross-Corpus EEG Emotion Recognition
by Zongni Li, Kin-Yeung Wong and Chan-Tong Lam
Sensors 2025, 25(18), 5718; https://doi.org/10.3390/s25185718 - 13 Sep 2025
Viewed by 589
Abstract
A critical challenge in EEG-based emotion recognition is the poor generalization of models across different datasets due to significant domain shifts. Traditional methods struggle because they either overfit to source-domain characteristics or fail to bridge large discrepancies between datasets. To address this, we propose the Cross-corpus Attention-based Transfer Enhancement network (CATE), a novel two-stage framework. The core novelty of CATE lies in its dual-view self-supervised pre-training strategy, which learns robust, domain-invariant representations by approaching the problem from two complementary perspectives. Unlike single-view models that capture an incomplete picture, our framework synergistically combines: (1) Noise-Enhanced Representation Modeling (NERM), which builds resilience to domain-specific artifacts and noise, and (2) Wavelet Transform Representation Modeling (WTRM), which captures the essential, multi-scale spectral patterns fundamental to emotion. This dual approach moves beyond the brittle assumptions of traditional domain adaptation, which often fails when domains are too dissimilar. In the second stage, a supervised fine-tuning process adapts these powerful features for classification using attention-based mechanisms. Extensive experiments on six transfer tasks across the SEED, SEED-IV, and SEED-V datasets demonstrate that CATE establishes a new state-of-the-art, achieving accuracies from 68.01% to 81.65% and outperforming prior methods by up to 15.65 percentage points. By effectively learning transferable features from these distinct, synergistic views, CATE provides a robust framework that significantly advances the practical applicability of cross-corpus EEG emotion recognition. Full article
(This article belongs to the Section Intelligent Sensors)
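
The two complementary pre-training "views" described above, a noise-perturbed view and a wavelet-based multi-scale view, can be sketched as simple transforms of one EEG segment. The wavelet choice, decomposition level, and noise scale are assumptions, and the self-supervised objective itself is omitted.

```python
import numpy as np
import pywt                                     # assumes PyWavelets is installed

# Sketch of two complementary views of one EEG segment: a noise-perturbed view
# and a wavelet-reconstructed multi-scale view. Parameters are assumptions.
def noise_view(x, noise_scale=0.1, rng=None):
    rng = rng or np.random.default_rng()
    return x + noise_scale * x.std() * rng.standard_normal(x.shape)

def wavelet_view(x, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    coeffs[-1] = np.zeros_like(coeffs[-1])       # drop the finest detail scale
    return pywt.waverec(coeffs, wavelet)[: len(x)]

rng = np.random.default_rng(0)
segment = rng.standard_normal(1000)              # one EEG channel segment
v1, v2 = noise_view(segment, rng=rng), wavelet_view(segment)
print(v1.shape, v2.shape)
```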

13 pages, 1228 KB  
Article
Neural Pattern of Chanting-Driven Intuitive Inquiry Meditation in Expert Chan Practitioners
by Kin Cheung George Lee, Hin Hung Sik, Hang Kin Leung, Bonnie Wai Yan Wu, Rui Sun and Junling Gao
Behav. Sci. 2025, 15(9), 1213; https://doi.org/10.3390/bs15091213 - 5 Sep 2025
Viewed by 789
Abstract
Background: Intuitive inquiry meditation (Can-Hua-Tou) is a unique mental practice that differs from relaxation-based practices by continuously demanding intuitive inquiry. It emphasizes doubt-driven self-interrogation and is also referred to as Chan/Zen meditation. Nonetheless, its electrophysiological signature remains poorly characterized. Methods: We recorded 128-channel EEG from 20 male Buddhist monks (5–28 years Can-Hua-Tou experience) and 18 male novice lay practitioners (<0.5 year) during three counter-balanced eyes-closed blocks: Zen inquiry meditation (ZEN), a phonological control task silently murmuring “A-B-C-D” (ABCD), and passive resting state (REST). Power spectral density was computed for alpha (8–12 Hz), beta (12–30 Hz), and gamma (30–45 Hz) bands and mapped across the scalp. Mixed-design ANOVAs and electrode-wise tests were corrected with false discovery rate (p < 0.05). Results: Alpha power increased globally with eyes closed, but condition- or group-specific effects did not survive FDR correction, indicating comparable relaxation in both cohorts. In contrast, monks displayed a robust beta augmentation, showing significantly higher beta over parietal-occipital leads than novices across all conditions. The most pronounced difference lay in the gamma band: monks exhibited trait-like fronto-parietal gamma elevations in all three conditions, with additional, though sub-threshold, increases during ZEN. Novices showed negligible beta or gamma modulation across tasks. No significant group × condition interaction emerged after correction, yet only experts expressed concurrent beta/gamma amplification during meditative inquiry. Conclusions: Long-term Can-Hua-Tou practice is associated with frequency-specific neural adaptations—stable high-frequency synchrony and state-dependent beta enhancement—consistent with Buddhist constructs of citta-ekāgratā (one-pointed concentration) and vigilance during self-inquiry. Unlike mindfulness styles that accentuate alpha/theta, Chan inquiry manifests an oscillatory profile dominated by beta–gamma dynamics, underscoring that different contemplative strategies sculpt distinct neurophysiological phenotypes. These findings advance contemplative neuroscience by linking intensive cognitive meditation to enduring high-frequency cortical synchrony. Future research integrating cross-frequency coupling analyses, source localization, and behavioral correlates of insight will more fully delineate the mechanisms underpinning this advanced contemplative expertise. Full article
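
The analysis pattern above, band-limited power per electrode followed by electrode-wise group tests and FDR correction, can be sketched generically as below. The group sizes, channel count, sampling rate, and segment length are assumptions, and the mixed-design ANOVA is simplified to two-sample t-tests.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

# Sketch: gamma-band power per electrode for two groups, electrode-wise
# t-tests, then Benjamini-Hochberg FDR correction. Shapes are assumptions.
def band_power(eeg, fs=250, band=(30, 45)):
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    return psd[..., (freqs >= band[0]) & (freqs < band[1])].mean(axis=-1)

rng = np.random.default_rng(0)
monks = rng.standard_normal((20, 32, 250 * 20))    # 20 experts x 32 channels x 20 s
novices = rng.standard_normal((18, 32, 250 * 20))
bp_m, bp_n = band_power(monks), band_power(novices)   # -> (subjects, channels)

p_vals = [ttest_ind(bp_m[:, ch], bp_n[:, ch]).pvalue for ch in range(32)]
reject, _, _, _ = multipletests(p_vals, alpha=0.05, method="fdr_bh")
print("electrodes significant after FDR correction:", int(reject.sum()))
```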

23 pages, 2939 KB  
Article
ADG-SleepNet: A Symmetry-Aware Multi-Scale Dilation-Gated Temporal Convolutional Network with Adaptive Attention for EEG-Based Sleep Staging
by Hai Sun and Zhanfang Zhao
Symmetry 2025, 17(9), 1461; https://doi.org/10.3390/sym17091461 - 5 Sep 2025
Viewed by 658
Abstract
The increasing demand for portable health monitoring has highlighted the need for automated sleep staging systems that are both accurate and computationally efficient. However, most existing deep learning models for electroencephalogram (EEG)-based sleep staging suffer from parameter redundancy, fixed dilation rates, and limited generalization, restricting their applicability in real-time and resource-constrained scenarios. In this paper, we propose ADG-SleepNet, a novel lightweight symmetry-aware multi-scale dilation-gated temporal convolutional network enhanced with adaptive attention mechanisms for EEG-based sleep staging. ADG-SleepNet features a structurally symmetric, parallel multi-branch architecture utilizing various dilation rates to comprehensively capture multi-scale temporal patterns in EEG signals. The integration of adaptive gating and channel attention mechanisms enables the network to dynamically adjust the contribution of each branch based on input characteristics, effectively breaking architectural symmetry when necessary to prioritize the most discriminative features. Experimental results on the Sleep-EDF-20 and Sleep-EDF-78 datasets demonstrate that ADG-SleepNet achieves accuracy rates of 87.1% and 85.1%, and macro F1 scores of 84.0% and 81.1%, respectively, outperforming several state-of-the-art lightweight models. These findings highlight the strong generalization ability and practical potential of ADG-SleepNet for EEG-based health monitoring applications. Full article
(This article belongs to the Section Computer)
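
The core idea named in the title, symmetric parallel branches with different dilation rates mixed by an input-dependent gate, can be sketched in PyTorch as below. Branch count, dilation rates, kernel size, and channel sizes are illustrative assumptions, not the ADG-SleepNet configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch: parallel dilated temporal convolutions whose outputs are mixed by an
# input-dependent softmax gate. All hyperparameters are illustrative.
class DilationGatedBlock(nn.Module):
    def __init__(self, in_ch=1, out_ch=16, dilations=(1, 2, 4, 8), kernel=7):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, kernel, dilation=d, padding=d * (kernel - 1) // 2)
            for d in dilations)
        self.gate = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                  nn.Linear(in_ch, len(dilations)))

    def forward(self, x):                         # x: (batch, in_ch, n_samples)
        weights = F.softmax(self.gate(x), dim=-1) # one adaptive weight per branch
        outs = torch.stack([b(x) for b in self.branches], dim=1)
        return (outs * weights[:, :, None, None]).sum(dim=1)

block = DilationGatedBlock()
epoch = torch.randn(2, 1, 3000)                   # two 30-s EEG epochs at 100 Hz
print(block(epoch).shape)                         # torch.Size([2, 16, 3000])
```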
