Search Results (528)

Search Parameters:
Keywords = EEG signal application

24 pages, 1408 KiB  
Systematic Review
Fear Detection Using Electroencephalogram and Artificial Intelligence: A Systematic Review
by Bladimir Serna, Ricardo Salazar, Gustavo A. Alonso-Silverio, Rosario Baltazar, Elías Ventura-Molina and Antonio Alarcón-Paredes
Brain Sci. 2025, 15(8), 815; https://doi.org/10.3390/brainsci15080815 - 29 Jul 2025
Abstract
Background/Objectives: Fear detection through EEG signals has gained increasing attention due to its applications in affective computing, mental health monitoring, and intelligent safety systems. This systematic review aimed to identify the most effective methods, algorithms, and configurations reported in the literature for detecting fear from EEG signals using artificial intelligence (AI). Methods: Following the PRISMA 2020 methodology, a structured search was conducted using the string (“fear detection” AND “artificial intelligence” OR “machine learning” AND NOT “fnirs OR mri OR ct OR pet OR image”). After applying inclusion and exclusion criteria, 11 relevant studies were selected. Results: The review examined key methodological aspects such as algorithms (e.g., SVM, CNN, Decision Trees), EEG devices (Emotiv, Biosemi), experimental paradigms (videos, interactive games), dominant brainwave bands (beta, gamma, alpha), and electrode placement. Non-linear models, particularly when combined with immersive stimulation, achieved the highest classification accuracy (up to 92%). Beta and gamma frequencies were consistently associated with fear states, while frontotemporal electrode positioning and proprietary datasets further enhanced model performance. Conclusions: EEG-based fear detection using AI demonstrates high potential and rapid growth, offering significant interdisciplinary applications in healthcare, safety systems, and affective computing.
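Several of the reviewed classifiers operate on band-power features from the beta and gamma ranges. As a rough illustration of that feature-extraction step (not code from any reviewed study; the signal, sampling rate, and band edges are invented), a stdlib Python sketch:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Sum of squared spectral magnitudes of `signal` within [f_lo, f_hi] Hz,
    computed with a naive DFT (adequate for short epochs)."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

# Toy epoch: a pure 20 Hz (beta-band) sine sampled at 128 Hz for 1 s.
fs = 128
epoch = [math.sin(2 * math.pi * 20 * t / fs) for t in range(fs)]
beta = band_power(epoch, fs, 13, 30)    # captures essentially all the power
gamma = band_power(epoch, fs, 30, 45)   # near zero for this toy signal
```

In a real pipeline these band powers, computed per channel and per epoch, would form the feature vector handed to an SVM or similar classifier.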
(This article belongs to the Special Issue Neuropeptides, Behavior and Psychiatric Disorders)

23 pages, 19710 KiB  
Article
Hybrid EEG Feature Learning Method for Cross-Session Human Mental Attention State Classification
by Xu Chen, Xingtong Bao, Kailun Jitian, Ruihan Li, Li Zhu and Wanzeng Kong
Brain Sci. 2025, 15(8), 805; https://doi.org/10.3390/brainsci15080805 - 28 Jul 2025
Abstract
Background: Decoding mental attention states from electroencephalogram (EEG) signals is crucial for numerous applications such as cognitive monitoring, adaptive human–computer interaction, and brain–computer interfaces (BCIs). However, conventional EEG-based approaches often focus on channel-wise processing and are limited to intra-session or subject-specific scenarios, lacking robustness in cross-session or inter-subject conditions. Methods: In this study, we propose a hybrid feature learning framework for robust classification of mental attention states, including focused, unfocused, and drowsy conditions, across both sessions and individuals. Our method integrates preprocessing, feature extraction, feature selection, and classification in a unified pipeline. We extract channel-wise spectral features using short-time Fourier transform (STFT) and further incorporate both functional and structural connectivity features to capture inter-regional interactions in the brain. A two-stage feature selection strategy, combining correlation-based filtering and random forest ranking, is adopted to enhance feature relevance and reduce dimensionality. Support vector machine (SVM) is employed for final classification due to its efficiency and generalization capability. Results: Experimental results on two cross-session and inter-subject EEG datasets demonstrate that our approach achieves classification accuracy of 86.27% and 94.01%, respectively, significantly outperforming traditional methods. Conclusions: These findings suggest that integrating connectivity-aware features with spectral analysis can enhance the generalizability of attention decoding models. The proposed framework provides a promising foundation for the development of practical EEG-based systems for continuous mental state monitoring and adaptive BCIs in real-world environments.
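The two-stage feature selection this abstract describes (a correlation-based filter followed by importance ranking) can be sketched generically. In this illustrative version, plain variance stands in for the paper's random-forest importance ranking, and the toy feature matrix is invented:

```python
def pearson(x, y):
    """Pearson correlation of two equal-length feature columns."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def select_features(columns, corr_thresh=0.95, top_k=2):
    """Stage 1: drop any feature highly correlated with one already kept.
    Stage 2: rank survivors by a score (variance here, as a stand-in for
    random-forest importance) and keep the top_k indices."""
    kept = []
    for i, col in enumerate(columns):
        if all(abs(pearson(col, columns[j])) < corr_thresh for j in kept):
            kept.append(i)
    def variance(idx):
        col = columns[idx]
        m = sum(col) / len(col)
        return sum((v - m) ** 2 for v in col)
    return sorted(kept, key=variance, reverse=True)[:top_k]

# Toy feature columns: f1 and f2 are near-duplicates; f3 is independent.
f1 = [1.0, 2.0, 3.0, 4.0]
f2 = [1.1, 2.1, 3.1, 4.1]   # perfectly correlated with f1 -> filtered out
f3 = [5.0, 1.0, 4.0, 2.0]
chosen = select_features([f1, f2, f3])   # -> [2, 0]
```

The redundancy filter runs first so the ranking stage never wastes its budget on near-duplicate features.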

29 pages, 2830 KiB  
Article
BCINetV1: Integrating Temporal and Spectral Focus Through a Novel Convolutional Attention Architecture for MI EEG Decoding
by Muhammad Zulkifal Aziz, Xiaojun Yu, Xinran Guo, Xinming He, Binwen Huang and Zeming Fan
Sensors 2025, 25(15), 4657; https://doi.org/10.3390/s25154657 - 27 Jul 2025
Abstract
Motor imagery (MI) electroencephalograms (EEGs) are pivotal signals reflecting cortical activity during imagined motor actions, widely leveraged for brain-computer interface (BCI) system development. However, effective decoding of these MI EEG signals is often undermined by flawed signal-processing methods, deep learning models that lack clinical interpretability, and highly inconsistent performance across different datasets. We propose BCINetV1, a new framework for MI EEG decoding that addresses these challenges. BCINetV1 utilizes three innovative components: a temporal convolution-based attention block (T-CAB) and a spectral convolution-based attention block (S-CAB), both driven by a new convolutional self-attention (ConvSAT) mechanism to identify key non-stationary temporal and spectral patterns in the EEG signals, and a squeeze-and-excitation block (SEB) that combines the identified tempo-spectral features for accurate, stable, and contextually aware MI EEG classification. Evaluated on four diverse datasets containing 69 participants, BCINetV1 consistently achieved the highest average accuracies of 98.6% (Dataset 1), 96.6% (Dataset 2), 96.9% (Dataset 3), and 98.4% (Dataset 4). This research demonstrates that BCINetV1 is computationally efficient, extracts clinically vital markers, effectively handles the non-stationarity of EEG data, and shows a clear advantage over existing methods, marking a significant step forward for practical BCI applications.
(This article belongs to the Special Issue Advanced Biomedical Imaging and Signal Processing)

21 pages, 597 KiB  
Article
Competency Learning by Machine Learning-Based Data Analysis with Electroencephalography Signals
by Javier M. Antelis, Myriam Alanis-Espinosa, Omar Mendoza-Montoya, Pedro Cervantes-Lozano and Luis G. Hernandez-Rojas
Educ. Sci. 2025, 15(8), 957; https://doi.org/10.3390/educsci15080957 - 25 Jul 2025
Abstract
Data analysis and machine learning have become essential cross-disciplinary skills for engineering students and professionals. Traditionally, these topics are taught through lectures or online courses using pre-existing datasets, which limits the opportunity to engage with the full cycle of data analysis and machine learning, including data collection, preparation, and contextualization of the application field. To address this, we designed and implemented a learning activity that involves students in every step of the learning process. This activity includes multiple stages where students conduct experiments to record their own electroencephalographic (EEG) signals and use these signals to learn data analysis and machine learning techniques. The purpose is to make students active participants in their own learning process. This activity was implemented in six courses across four engineering degree programs during the 2023 and 2024 academic years. To validate its effectiveness, we measured improvements in grades and self-reported motivation using the MUSIC model inventory. The results indicate a positive development of competencies and high levels of motivation and appreciation among students for the concepts of data analysis and machine learning.
(This article belongs to the Section Higher Education)

81 pages, 4295 KiB  
Systematic Review
Leveraging AI-Driven Neuroimaging Biomarkers for Early Detection and Social Function Prediction in Autism Spectrum Disorders: A Systematic Review
by Evgenia Gkintoni, Maria Panagioti, Stephanos P. Vassilopoulos, Georgios Nikolaou, Basilis Boutsinas and Apostolos Vantarakis
Healthcare 2025, 13(15), 1776; https://doi.org/10.3390/healthcare13151776 - 22 Jul 2025
Abstract
Background: This systematic review examines artificial intelligence (AI) applications in neuroimaging for autism spectrum disorder (ASD), addressing six research questions regarding biomarker optimization, modality integration, social function prediction, developmental trajectories, clinical translation challenges, and multimodal data enhancement for earlier detection and improved outcomes. Methods: Following PRISMA guidelines, we conducted a comprehensive literature search across 8 databases, yielding 146 studies from an initial 1872 records. These studies were systematically analyzed to address key questions regarding AI neuroimaging approaches in ASD detection and prognosis. Results: Neuroimaging combined with AI algorithms demonstrated significant potential for early ASD detection, with electroencephalography (EEG) showing promise. Machine learning classifiers achieved high diagnostic accuracy (85–99%) using features derived from neural oscillatory patterns, connectivity measures, and signal complexity metrics. Studies of infant populations have identified the 9–12-month developmental window as critical for biomarker detection and the onset of behavioral symptoms. Multimodal approaches that integrate various imaging techniques have substantially enhanced predictive capabilities, while longitudinal analyses have shown potential for tracking developmental trajectories and treatment responses. Conclusions: AI-driven neuroimaging biomarkers represent a promising frontier in ASD research, potentially enabling the detection of symptoms before they manifest behaviorally and providing objective measures of intervention efficacy. While technical and methodological challenges remain, advancements in standardization, diverse sampling, and clinical validation could facilitate the translation of findings into practice, ultimately supporting earlier intervention during critical developmental periods and improving outcomes for individuals with ASD. Future research should prioritize large-scale validation studies and standardized protocols to realize the full potential of precision medicine in ASD.

31 pages, 7723 KiB  
Article
A Hybrid CNN–GRU–LSTM Algorithm with SHAP-Based Interpretability for EEG-Based ADHD Diagnosis
by Makbal Baibulova, Murat Aitimov, Roza Burganova, Lazzat Abdykerimova, Umida Sabirova, Zhanat Seitakhmetova, Gulsiya Uvaliyeva, Maksym Orynbassar, Aislu Kassekeyeva and Murizah Kassim
Algorithms 2025, 18(8), 453; https://doi.org/10.3390/a18080453 - 22 Jul 2025
Abstract
This study proposes an interpretable hybrid deep learning framework for classifying attention deficit hyperactivity disorder (ADHD) using EEG signals recorded during cognitively demanding tasks. The core architecture integrates convolutional neural networks (CNNs), gated recurrent units (GRUs), and long short-term memory (LSTM) layers to jointly capture spatial and temporal dynamics. In addition to the final hybrid architecture, the CNN–GRU–LSTM model alone demonstrates excellent accuracy (99.63%) with minimal variance, making it a strong baseline for clinical applications. To evaluate the role of global attention mechanisms, transformer encoder models with two and three attention blocks, along with a spatiotemporal transformer employing 2D positional encoding, are benchmarked. A hybrid CNN–RNN–transformer model is introduced, combining convolutional, recurrent, and transformer-based modules into a unified architecture. To enhance interpretability, SHapley Additive exPlanations (SHAP) are employed to identify key EEG channels contributing to classification outcomes. Experimental evaluation using stratified five-fold cross-validation demonstrates that the proposed hybrid model achieves superior performance, with average accuracy exceeding 99.98%, F1-scores above 0.9999, and near-perfect AUC and Matthews correlation coefficients. In contrast, transformer-only models, despite high training accuracy, exhibit reduced generalization. SHAP-based analysis confirms the hybrid model’s clinical relevance. This work advances the development of transparent and reliable EEG-based tools for pediatric ADHD screening.
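The evaluation above relies on stratified five-fold cross-validation. A minimal sketch of how stratified fold assignment keeps the class balance even across folds (a generic illustration, not the authors' pipeline; the toy labels are invented):

```python
from collections import defaultdict

def stratified_folds(labels, k=5):
    """Assign each sample index to one of k folds so that every class is
    spread as evenly as possible across the folds."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        # Deal each class's samples round-robin across the folds.
        for pos, idx in enumerate(indices):
            folds[pos % k].append(idx)
    return folds

# Toy dataset: 10 "ADHD" (1) and 10 "control" (0) samples.
labels = [1] * 10 + [0] * 10
folds = stratified_folds(labels, k=5)
per_fold = [sorted(labels[i] for i in f) for f in folds]
# Every fold ends up with 2 samples of each class.
```

In practice the class lists would also be shuffled before dealing, but the round-robin step is the essence of stratification.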

17 pages, 1738 KiB  
Article
Multimodal Fusion Multi-Task Learning Network Based on Federated Averaging for SDB Severity Diagnosis
by Songlu Lin, Renzheng Tang, Yuzhe Wang and Zhihong Wang
Appl. Sci. 2025, 15(14), 8077; https://doi.org/10.3390/app15148077 - 20 Jul 2025
Abstract
Accurate sleep staging and sleep-disordered breathing (SDB) severity prediction are critical for the early diagnosis and management of sleep disorders. However, real-world polysomnography (PSG) data often suffer from modality heterogeneity, label scarcity, and non-independent and identically distributed (non-IID) characteristics across institutions, posing significant challenges for model generalization and clinical deployment. To address these issues, we propose a federated multi-task learning (FMTL) framework that simultaneously performs sleep staging and SDB severity classification from seven multimodal physiological signals, including EEG, ECG, respiration, etc. The proposed framework is built upon a hybrid deep neural architecture that integrates convolutional layers (CNN) for spatial representation, bidirectional GRUs for temporal modeling, and multi-head self-attention for long-range dependency learning. A shared feature extractor is combined with task-specific heads to enable joint diagnosis, while the FedAvg algorithm is employed to facilitate decentralized training across multiple institutions without sharing raw data, thereby preserving privacy and addressing non-IID challenges. We evaluate the proposed method across three public datasets (APPLES, SHHS, and HMC) treated as independent clients. For sleep staging, the model achieves accuracies of 85.3% (APPLES), 87.1% (SHHS_rest), and 79.3% (HMC), with Cohen’s Kappa scores exceeding 0.71. For SDB severity classification, it obtains macro-F1 scores of 77.6%, 76.4%, and 79.1% on APPLES, SHHS_rest, and HMC, respectively. These results demonstrate that our unified FMTL framework effectively leverages multimodal PSG signals and federated training to deliver accurate and scalable sleep disorder assessment, paving the way for the development of a privacy-preserving, generalizable, and clinically applicable digital sleep monitoring system.
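The FedAvg aggregation at the heart of this framework is simple to state: each institution trains locally, and only model parameters are averaged, weighted by local sample counts, so raw recordings never leave the site. A toy sketch with flat parameter lists standing in for real network weights (the institutions, weights, and sizes are invented):

```python
def fed_avg(client_weights, client_sizes):
    """One FedAvg round: average each parameter across clients, weighted by
    the number of local training samples, without pooling any raw data."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Three toy "institutions" with 2-parameter models and unequal data sizes.
weights = [[1.0, 0.0], [3.0, 2.0], [2.0, 4.0]]
sizes = [100, 100, 200]
global_weights = fed_avg(weights, sizes)   # -> [2.0, 2.5]
```

The size weighting is what lets a large site (here the third client) pull the global model proportionally harder, which matters under the non-IID conditions the abstract describes.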
(This article belongs to the Special Issue Machine Learning in Biomedical Applications)

35 pages, 6415 KiB  
Review
Recent Advances in Conductive Hydrogels for Electronic Skin and Healthcare Monitoring
by Yan Zhu, Baojin Chen, Yiming Liu, Tiantian Tan, Bowen Gao, Lijun Lu, Pengcheng Zhu and Yanchao Mao
Biosensors 2025, 15(7), 463; https://doi.org/10.3390/bios15070463 - 18 Jul 2025
Abstract
In recent decades, flexible electronics have witnessed remarkable advancements in multiple fields, encompassing wearable electronics, human–machine interfaces (HMI), clinical diagnosis, and treatment. Nevertheless, conventional rigid electronic devices are fundamentally constrained by their inherent non-stretchability and poor conformability, limitations that substantially impede their practical applications. In contrast, conductive hydrogels (CHs) for electronic skin (E-skin) and healthcare monitoring have attracted substantial interest owing to outstanding features, including adjustable mechanical properties, intrinsic flexibility, stretchability, transparency, and diverse functional and structural designs. Considerable efforts focus on developing CHs incorporating various conductive materials, such as metals, carbon, ionic liquids (ILs), and MXene, to enable multifunctional wearable sensors and flexible electrodes. This review presents a comprehensive summary of the recent advancements in CHs, focusing on their classifications and practical applications. Firstly, CHs are categorized into five groups based on the nature of the conductive materials employed: polymer-based, carbon-based, metal-based, MXene-based, and ionic CHs. Secondly, the promising applications of CHs for electrophysiological signals and healthcare monitoring are discussed in detail, including electroencephalogram (EEG), electrocardiogram (ECG), electromyogram (EMG), respiratory monitoring, and motion monitoring. Finally, this review concludes with a comprehensive summary of current research progress and prospects regarding CHs in the fields of electronic skin and health monitoring applications.

18 pages, 2062 KiB  
Article
Measuring Blink-Related Brainwaves Using Low-Density Electroencephalography with Textile Electrodes for Real-World Applications
by Emily Acampora, Sujoy Ghosh Hajra and Careesa Chang Liu
Sensors 2025, 25(14), 4486; https://doi.org/10.3390/s25144486 - 18 Jul 2025
Abstract
Background: Electroencephalography (EEG) systems based on textile electrodes are increasingly being developed to address the need for more wearable sensor systems for brain function monitoring. Blink-related oscillations (BROs) are a new measure of brain function that corresponds to brainwave responses occurring after spontaneous blinking, indexing neural processes as the brain evaluates new visual information appearing after eye re-opening. Prior studies have reported BRO utility as both a clinical and non-clinical biomarker of cognition, but no study has demonstrated BRO measurement using textile-based EEG devices that facilitate user comfort for real-world applications. Methods: We investigated BRO measurement using a four-channel EEG system with textile electrodes by extracting BRO responses from existing, publicly available EEG data (n = 9). We compared BRO effects derived from textile-based electrodes with those from standard dry Ag/AgCl electrodes collected at the same locations (i.e., Fp1, Fp2, F7, F8) and using the same EEG amplifier. Results: BRO effects measured using textile electrodes exhibited similar features in both time and frequency domains compared to dry Ag/AgCl electrodes. Data from both technologies also showed similar performance in artifact removal and signal capture. Conclusions: These findings provide the first demonstration of successful BRO signal capture using four-channel EEG with textile electrodes, providing compelling evidence toward the development of a comfortable and user-friendly EEG technology that uses the simple activity of blinking for objective brain function assessment in a variety of settings.
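BRO analysis begins by averaging signal epochs time-locked to detected blinks, so that blink-consistent brain responses survive the averaging while unrelated activity cancels. A minimal sketch of that epoching step (the toy trace, blink indices, and window lengths are invented, not the study's data):

```python
def blink_locked_average(signal, blink_samples, pre=2, post=3):
    """Average fixed windows of `signal` time-locked to each blink index:
    the basic step behind blink-related oscillation (BRO) analysis."""
    epochs = []
    for b in blink_samples:
        if b - pre >= 0 and b + post <= len(signal):   # skip edge blinks
            epochs.append(signal[b - pre : b + post])
    n = len(epochs)
    return [sum(e[t] for e in epochs) / n for t in range(pre + post)]

# Toy single-channel trace with the same deflection around each "blink".
trace = [0, 0, 1, 5, 1, 0, 0, 0, 1, 5, 1, 0]
avg = blink_locked_average(trace, blink_samples=[3, 9])
```

Real pipelines add artifact rejection and baseline correction around this core, but the time-locked average is what turns raw blinks into a BRO waveform.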

24 pages, 890 KiB  
Article
MCTGNet: A Multi-Scale Convolution and Hybrid Attention Network for Robust Motor Imagery EEG Decoding
by Huangtao Zhan, Xinhui Li, Xun Song, Zhao Lv and Ping Li
Bioengineering 2025, 12(7), 775; https://doi.org/10.3390/bioengineering12070775 - 17 Jul 2025
Abstract
Motor imagery (MI) EEG decoding is a key application in brain–computer interface (BCI) research. In cross-session scenarios, the generalization and robustness of decoding models are particularly challenging due to the complex nonlinear dynamics of MI-EEG signals in both temporal and frequency domains, as well as distributional shifts across different recording sessions. While multi-scale feature extraction is a promising approach for generalized and robust MI decoding, conventional classifiers (e.g., multilayer perceptrons) struggle to perform accurate classification when confronted with high-order, nonstationary feature distributions, which have become a major bottleneck for improving decoding performance. To address this issue, we propose an end-to-end decoding framework, MCTGNet, whose core idea is to formulate the classification process as a high-order function approximation task that jointly models both task labels and feature structures. By introducing a group rational Kolmogorov–Arnold Network (GR-KAN), the system enhances generalization and robustness under cross-session conditions. Experiments on the BCI Competition IV 2a and 2b datasets demonstrate that MCTGNet achieves average classification accuracies of 88.93% and 91.42%, respectively, outperforming state-of-the-art methods by 3.32% and 1.83%.
(This article belongs to the Special Issue Brain Computer Interfaces for Motor Control and Motor Learning)

14 pages, 2907 KiB  
Article
Neural Dynamics of Strategic Early Predictive Saccade Behavior in Target Arrival Estimation
by Ryo Koshizawa, Kazuma Oki and Masaki Takayose
Brain Sci. 2025, 15(7), 750; https://doi.org/10.3390/brainsci15070750 - 15 Jul 2025
Abstract
Background/Objectives: Accurately predicting the arrival position of a moving target is essential in sports and daily life. While predictive saccades are known to enhance performance, the neural mechanisms underlying the timing of these strategies remain unclear. This study investigated how the timing of saccadic strategies—executed early versus late—affects cortical activity patterns, as measured by electroencephalography (EEG). Methods: Sixteen participants performed a task requiring them to predict the arrival position and timing of a parabolically moving target that became occluded midway through its trajectory. Based on eye movement behavior, participants were classified into an Early Saccade Strategy Group (SSG) or a Late SSG. EEG signals were analyzed in the low beta band (13–15 Hz) using the Hilbert transform. Group differences in eye movements and EEG activity were statistically assessed. Results: No significant group differences were observed in final position or response timing errors. However, time-series analysis showed that the Early SSG achieved earlier and more accurate eye positioning. EEG results revealed greater low beta activity in the Early SSG at electrode sites FC6 and P8, corresponding to the frontal eye field (FEF) and middle temporal (MT) visual area, respectively. Conclusions: Early execution of predictive saccades was associated with enhanced cortical activity in visuomotor and motion-sensitive regions. These findings suggest that early engagement of saccadic strategies supports more efficient visuospatial processing, with potential applications in dynamic physical tasks and digitally mediated performance domains such as eSports.
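The low-beta analysis here hinges on the Hilbert transform, whose magnitude gives the band's instantaneous amplitude envelope. A stdlib sketch of the DFT-based analytic signal (illustrative only; in a real pipeline the EEG would first be band-pass filtered to 13–15 Hz, and the test signal here is an invented pure tone):

```python
import cmath
import math

def analytic_signal(x):
    """Discrete analytic signal via the DFT method: transform, zero the
    negative frequencies, double the positive ones, inverse-transform."""
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    for k in range(n):
        if 0 < k < n / 2:
            X[k] *= 2          # positive frequencies doubled
        elif k > n / 2:
            X[k] = 0           # negative frequencies removed
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

# The envelope of a pure 14 Hz (low-beta) sine is flat at 1.
fs = 64
x = [math.sin(2 * math.pi * 14 * t / fs) for t in range(fs)]
envelope = [abs(z) for z in analytic_signal(x)]
```

`scipy.signal.hilbert` implements the same construction with an FFT; the naive DFT above is only to make the mechanics visible.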

14 pages, 1563 KiB  
Article
High-Resolution Time-Frequency Feature Selection and EEG Augmented Deep Learning for Motor Imagery Recognition
by Mouna Bouchane, Wei Guo and Shuojin Yang
Electronics 2025, 14(14), 2827; https://doi.org/10.3390/electronics14142827 - 14 Jul 2025
Abstract
Motor Imagery (MI) based Brain Computer Interfaces (BCIs) have promising applications in neurorehabilitation for individuals who have lost mobility and control over parts of their body due to brain injuries, such as stroke patients. Accurately classifying MI tasks is essential for effective BCI performance, but this task remains challenging due to the complex and non-stationary nature of EEG signals. This study aims to improve the classification of left and right-hand MI tasks by utilizing high-resolution time-frequency features extracted from EEG signals, enhanced with deep learning-based data augmentation techniques. We propose a novel deep learning framework named the Generalized Wavelet Transform-based Deep Convolutional Network (GDC-Net), which integrates multiple components. First, EEG signals recorded from the C3, C4, and Cz channels are transformed into detailed time-frequency representations using the Generalized Morse Wavelet Transform (GMWT). The selected features are then expanded using a Deep Convolutional Generative Adversarial Network (DCGAN) to generate additional synthetic data and address data scarcity. Finally, the augmented feature maps are fed into a hybrid CNN-LSTM architecture, enabling both spatial and temporal feature learning for improved classification. The proposed approach is evaluated on the BCI Competition IV dataset 2b. Experimental results showed a mean classification accuracy of 89.24% and a Kappa value of 0.784, the highest among the compared state-of-the-art algorithms. The integration of GMWT and DCGAN significantly enhances feature quality and model generalization, thereby improving classification performance. These findings demonstrate that GDC-Net delivers superior MI classification performance by effectively capturing high-resolution time-frequency dynamics and enhancing data diversity. This approach holds strong potential for advancing MI-based BCI applications, especially in assistive and rehabilitation technologies.
(This article belongs to the Section Computer Science & Engineering)

25 pages, 6826 KiB  
Article
Multi-Class Classification Methods for EEG Signals of Lower-Limb Rehabilitation Movements
by Shuangling Ma, Zijie Situ, Xiaobo Peng, Zhangyang Li and Ying Huang
Biomimetics 2025, 10(7), 452; https://doi.org/10.3390/biomimetics10070452 - 9 Jul 2025
Abstract
Brain–Computer Interfaces (BCIs) enable direct communication between the brain and external devices by decoding motor intentions from EEG signals. However, the existing multi-class classification methods for motor imagery EEG (MI-EEG) signals are hindered by low signal quality and limited accuracy, restricting their practical application. This study focuses on rehabilitation training scenarios, aiming to capture the motor intentions of patients with partial or complete motor impairments (such as stroke survivors) and provide feedforward control commands for exoskeletons. This study developed an EEG acquisition protocol specifically for use with lower-limb rehabilitation motor imagery (MI). It systematically explored preprocessing techniques, feature extraction strategies, and multi-classification algorithms for multi-task MI-EEG signals. A novel 3D EEG convolutional neural network (3D EEG-CNN) that integrates time/frequency features is proposed. Evaluations on a self-collected dataset demonstrated that the proposed model achieved a peak classification accuracy of 66.32%, substantially outperforming conventional approaches and demonstrating notable progress in the multi-class classification of lower-limb motor imagery tasks.
(This article belongs to the Special Issue Advances in Brain–Computer Interfaces 2025)
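The time/frequency feature extraction mentioned in the abstract above can be illustrated with a minimal band-power sketch over multi-channel MI-EEG. The band edges (mu/beta), sampling rate, array shapes, and function name below are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def bandpower_features(eeg, fs, bands=((8, 12), (13, 30))):
    """Mean spectral power per channel in each frequency band.

    eeg: (channels, samples) array; fs: sampling rate in Hz.
    Default bands approximate the mu (8-12 Hz) and beta (13-30 Hz) rhythms.
    """
    n = eeg.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg, axis=1)) ** 2 / n   # per-channel power spectrum
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs <= hi)
        feats.append(psd[:, mask].mean(axis=1))       # mean power in this band
    return np.concatenate(feats)                      # shape: (channels * n_bands,)

rng = np.random.default_rng(0)
trial = rng.standard_normal((8, 512))                 # 8 channels, 2 s at 256 Hz
x = bandpower_features(trial, fs=256)
print(x.shape)  # (16,)
```

A feature vector like this would typically feed a downstream classifier; the 3D EEG-CNN in the paper instead learns such representations directly from structured input tensors.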

17 pages, 1326 KiB  
Review
State-Dependent Transcranial Magnetic Stimulation Synchronized with Electroencephalography: Mechanisms, Applications, and Future Directions
by He Chen, Tao Liu, Yinglu Song, Zhaohuan Ding and Xiaoli Li
Brain Sci. 2025, 15(7), 731; https://doi.org/10.3390/brainsci15070731 - 8 Jul 2025
Viewed by 493
Abstract
Transcranial magnetic stimulation combined with electroencephalography (TMS-EEG) has emerged as a transformative tool for probing cortical dynamics with millisecond precision. This review examines the state-dependent nature of TMS-EEG, a critical yet underexplored dimension influencing measurement reliability and clinical applicability. By integrating TMS’s neuromodulatory capacity with EEG’s temporal resolution, this synergy enables real-time analysis of brain network dynamics under varying neural states. We delineate foundational mechanisms of TMS-evoked potentials (TEPs), discuss challenges posed by temporal and inter-individual variability, and evaluate advanced paradigms such as closed-loop and task-embedded TMS-EEG. The former leverages real-time EEG feedback to synchronize stimulation with oscillatory phases, while the latter aligns TMS pulses with task-specific cognitive phases to map transient network activations. Current limitations—including hardware constraints, signal artifacts, and inconsistent preprocessing pipelines—are critically analyzed. Future directions emphasize adaptive algorithms for neural state prediction, phase-specific stimulation protocols, and standardized methodologies to enhance reproducibility. By bridging mechanistic insights with personalized neuromodulation strategies, state-dependent TMS-EEG holds promise for advancing both basic neuroscience and precision medicine, particularly in psychiatric and neurological disorders characterized by dynamic neural dysregulation. Full article
(This article belongs to the Section Neurotechnology and Neuroimaging)
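The closed-loop paradigm described above synchronizes TMS pulses with the phase of an ongoing EEG oscillation. A minimal offline sketch of the phase-estimation step, using an FFT-based Hilbert transform (real closed-loop systems need causal, real-time phase predictors; the 10 Hz target frequency and trigger window here are assumptions for illustration):

```python
import numpy as np

def instantaneous_phase(x):
    """Instantaneous phase of a narrow-band signal via an FFT-based
    Hilbert transform (the construction scipy.signal.hilbert uses)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)                 # spectral weights for the analytic signal
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0           # double positive frequencies
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)   # negative frequencies are zeroed out
    return np.angle(analytic)

# Toy check: a pure 10 Hz "mu rhythm" sampled at 250 Hz; mark samples
# close enough to the oscillation's peak (phase ~ 0 rad) to trigger on.
fs, f = 250, 10
t = np.arange(fs) / fs                      # 1 s of signal
mu = np.cos(2 * np.pi * f * t)
phase = instantaneous_phase(mu)
trigger = np.abs(phase) < 0.2               # within ~0.2 rad of the peak
print(trigger.sum() > 0)  # True
```

In an actual phase-triggered TMS-EEG system the phase must be predicted forward in time from a short, causal window to compensate for hardware latency; this offline version only illustrates the underlying signal model.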

16 pages, 1351 KiB  
Article
A Comparative Study on Machine Learning Methods for EEG-Based Human Emotion Recognition
by Shokoufeh Davarzani, Simin Masihi, Masoud Panahi, Abdulrahman Olalekan Yusuf and Massood Atashbar
Electronics 2025, 14(14), 2744; https://doi.org/10.3390/electronics14142744 - 8 Jul 2025
Viewed by 424
Abstract
Electroencephalogram (EEG) signals provide a direct and non-invasive means of interpreting brain activity and are increasingly valuable in embedded emotion-aware systems, particularly for applications in healthcare, wearable electronics, and human–machine interaction. Among various EEG-based emotion recognition techniques, deep learning methods have demonstrated superior performance compared to traditional approaches. This advantage stems from their ability to extract complex features—such as spectral–spatial connectivity, temporal dynamics, and non-linear patterns—from raw EEG data, leading to a more accurate and robust representation of emotional states and better adaptation to diverse data characteristics. This study explores and compares deep and shallow neural networks for human emotion recognition from raw EEG data, with the goal of enabling real-time processing in embedded and edge-deployable systems. Deep learning models—specifically convolutional neural networks (CNNs) and recurrent neural networks (RNNs)—were benchmarked against traditional approaches such as the multi-layer perceptron (MLP), support vector machine (SVM), and k-nearest neighbors (kNN) algorithms. Emotions were classified into four categories on the valence–arousal plane: high arousal, positive valence (HAPV); low arousal, positive valence (LAPV); high arousal, negative valence (HANV); and low arousal, negative valence (LANV). Evaluations were conducted on the DEAP dataset. The results indicate that the CNN and RNN-LSTM models achieve high classification performance in EEG-based emotion recognition, with average accuracies of 90.13% and 93.36%, respectively, significantly outperforming the shallow algorithms (MLP, SVM, kNN). Full article
(This article belongs to the Special Issue New Advances in Embedded Software and Applications)
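The four classes used in the study above (HAPV/LAPV/HANV/LANV) come from thresholding self-reported valence and arousal ratings into quadrants. A minimal sketch of that mapping, assuming DEAP's 1–9 rating scale and a midpoint threshold of 5 (the function name and threshold are illustrative, not the authors' code):

```python
import numpy as np

def quadrant_label(valence, arousal, threshold=5.0):
    """Map a (valence, arousal) rating pair to one of the four
    valence-arousal quadrants: HAPV, LAPV, HANV, or LANV."""
    if arousal >= threshold:
        return "HAPV" if valence >= threshold else "HANV"
    return "LAPV" if valence >= threshold else "LANV"

ratings = np.array([[7.2, 8.1],   # pleasant, exciting  -> HAPV
                    [6.5, 2.3],   # pleasant, calm      -> LAPV
                    [2.0, 7.9],   # unpleasant, intense -> HANV
                    [3.1, 3.4]])  # unpleasant, calm    -> LANV
labels = [quadrant_label(v, a) for v, a in ratings]
print(labels)  # ['HAPV', 'LAPV', 'HANV', 'LANV']
```

These quadrant labels then serve as the classification targets for the CNN, RNN, and shallow baselines compared in the study.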
