Search Results (547)

Search Parameters:
Keywords = BCI/EEG

24 pages, 1408 KiB  
Systematic Review
Fear Detection Using Electroencephalogram and Artificial Intelligence: A Systematic Review
by Bladimir Serna, Ricardo Salazar, Gustavo A. Alonso-Silverio, Rosario Baltazar, Elías Ventura-Molina and Antonio Alarcón-Paredes
Brain Sci. 2025, 15(8), 815; https://doi.org/10.3390/brainsci15080815 - 29 Jul 2025
Abstract
Background/Objectives: Fear detection through EEG signals has gained increasing attention due to its applications in affective computing, mental health monitoring, and intelligent safety systems. This systematic review aimed to identify the most effective methods, algorithms, and configurations reported in the literature for detecting fear from EEG signals using artificial intelligence (AI). Methods: Following the PRISMA 2020 methodology, a structured search was conducted using the string (“fear detection” AND “artificial intelligence” OR “machine learning” AND NOT “fnirs OR mri OR ct OR pet OR image”). After applying inclusion and exclusion criteria, 11 relevant studies were selected. Results: The review examined key methodological aspects such as algorithms (e.g., SVM, CNN, Decision Trees), EEG devices (Emotiv, Biosemi), experimental paradigms (videos, interactive games), dominant brainwave bands (beta, gamma, alpha), and electrode placement. Non-linear models, particularly when combined with immersive stimulation, achieved the highest classification accuracy (up to 92%). Beta and gamma frequencies were consistently associated with fear states, while frontotemporal electrode positioning and proprietary datasets further enhanced model performance. Conclusions: EEG-based fear detection using AI demonstrates high potential and rapid growth, offering significant interdisciplinary applications in healthcare, safety systems, and affective computing. Full article
(This article belongs to the Special Issue Neuropeptides, Behavior and Psychiatric Disorders)

23 pages, 19710 KiB  
Article
Hybrid EEG Feature Learning Method for Cross-Session Human Mental Attention State Classification
by Xu Chen, Xingtong Bao, Kailun Jitian, Ruihan Li, Li Zhu and Wanzeng Kong
Brain Sci. 2025, 15(8), 805; https://doi.org/10.3390/brainsci15080805 - 28 Jul 2025
Abstract
Background: Decoding mental attention states from electroencephalogram (EEG) signals is crucial for numerous applications such as cognitive monitoring, adaptive human–computer interaction, and brain–computer interfaces (BCIs). However, conventional EEG-based approaches often focus on channel-wise processing and are limited to intra-session or subject-specific scenarios, lacking robustness in cross-session or inter-subject conditions. Methods: In this study, we propose a hybrid feature learning framework for robust classification of mental attention states, including focused, unfocused, and drowsy conditions, across both sessions and individuals. Our method integrates preprocessing, feature extraction, feature selection, and classification in a unified pipeline. We extract channel-wise spectral features using short-time Fourier transform (STFT) and further incorporate both functional and structural connectivity features to capture inter-regional interactions in the brain. A two-stage feature selection strategy, combining correlation-based filtering and random forest ranking, is adopted to enhance feature relevance and reduce dimensionality. Support vector machine (SVM) is employed for final classification due to its efficiency and generalization capability. Results: Experimental results on two cross-session and inter-subject EEG datasets demonstrate that our approach achieves classification accuracy of 86.27% and 94.01%, respectively, significantly outperforming traditional methods. Conclusions: These findings suggest that integrating connectivity-aware features with spectral analysis can enhance the generalizability of attention decoding models. The proposed framework provides a promising foundation for the development of practical EEG-based systems for continuous mental state monitoring and adaptive BCIs in real-world environments. Full article
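The channel-wise spectral stage of this pipeline can be illustrated with a minimal sketch; the Hann window, 256-sample window with 128-sample hop, and band edges below are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def stft_band_powers(signal, fs, win=256, hop=128, bands=((4, 8), (8, 13), (13, 30))):
    """Channel-wise spectral features via short-time Fourier transform:
    mean log band power per frequency band (a simplified stand-in for the
    paper's feature-extraction stage; window/hop sizes are assumptions)."""
    n_win = 1 + (len(signal) - win) // hop
    window = np.hanning(win)
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    powers = np.zeros((n_win, len(freqs)))
    for i in range(n_win):
        seg = signal[i * hop:i * hop + win] * window
        powers[i] = np.abs(np.fft.rfft(seg)) ** 2
    mean_power = powers.mean(axis=0)
    return np.array([np.log(mean_power[(freqs >= lo) & (freqs < hi)].mean() + 1e-12)
                     for lo, hi in bands])

fs = 256
t = np.arange(fs * 4) / fs
alpha_wave = np.sin(2 * np.pi * 10 * t)   # 10 Hz test tone, inside the alpha band
feats = stft_band_powers(alpha_wave, fs)  # alpha band (index 1) dominates
```

In the full pipeline these per-channel features would be concatenated with the connectivity features, filtered by correlation, ranked by a random forest, and passed to the SVM.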

29 pages, 2830 KiB  
Article
BCINetV1: Integrating Temporal and Spectral Focus Through a Novel Convolutional Attention Architecture for MI EEG Decoding
by Muhammad Zulkifal Aziz, Xiaojun Yu, Xinran Guo, Xinming He, Binwen Huang and Zeming Fan
Sensors 2025, 25(15), 4657; https://doi.org/10.3390/s25154657 - 27 Jul 2025
Abstract
Motor imagery (MI) electroencephalograms (EEGs) are pivotal cortical potentials reflecting cortical activity during imagined motor actions, widely leveraged for brain-computer interface (BCI) system development. However, effectively decoding these MI EEG signals is often overshadowed by flawed methods in signal processing, deep learning methods that are clinically unexplained, and highly inconsistent performance across different datasets. We propose BCINetV1, a new framework for MI EEG decoding to address the aforementioned challenges. The BCINetV1 utilizes three innovative components: a temporal convolution-based attention block (T-CAB) and a spectral convolution-based attention block (S-CAB), both driven by a new convolutional self-attention (ConvSAT) mechanism to identify key non-stationary temporal and spectral patterns in the EEG signals. Lastly, a squeeze-and-excitation block (SEB) intelligently combines those identified tempo-spectral features for accurate, stable, and contextually aware MI EEG classification. Evaluated upon four diverse datasets containing 69 participants, BCINetV1 consistently achieved the highest average accuracies of 98.6% (Dataset 1), 96.6% (Dataset 2), 96.9% (Dataset 3), and 98.4% (Dataset 4). This research demonstrates that BCINetV1 is computationally efficient, extracts clinically vital markers, effectively handles the non-stationarity of EEG data, and shows a clear advantage over existing methods, marking a significant step forward for practical BCI applications. Full article
(This article belongs to the Special Issue Advanced Biomedical Imaging and Signal Processing)

34 pages, 3704 KiB  
Article
Uncertainty-Aware Deep Learning for Robust and Interpretable MI EEG Using Channel Dropout and LayerCAM Integration
by Óscar Wladimir Gómez-Morales, Sofia Escalante-Escobar, Diego Fabian Collazos-Huertas, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Appl. Sci. 2025, 15(14), 8036; https://doi.org/10.3390/app15148036 - 18 Jul 2025
Abstract
Motor Imagery (MI) classification plays a crucial role in enhancing the performance of brain–computer interface (BCI) systems, thereby enabling advanced neurorehabilitation and the development of intuitive brain-controlled technologies. However, MI classification using electroencephalography (EEG) is hindered by spatiotemporal variability and the limited interpretability of deep learning (DL) models. To mitigate these challenges, dropout techniques are employed as regularization strategies. Nevertheless, the removal of critical EEG channels, particularly those from the sensorimotor cortex, can result in substantial spatial information loss, especially under limited training data conditions. This issue, compounded by high EEG variability in subjects with poor performance, hinders generalization and reduces the interpretability and clinical trust in MI-based BCI systems. This study proposes a novel framework integrating channel dropout—a variant of Monte Carlo dropout (MCD)—with class activation maps (CAMs) to enhance robustness and interpretability in MI classification. This integration represents a significant step forward by offering, for the first time, a dedicated solution to concurrently mitigate spatiotemporal uncertainty and provide fine-grained neurophysiologically relevant interpretability in motor imagery classification, particularly demonstrating refined spatial attention in challenging low-performing subjects. We evaluate three DL architectures (ShallowConvNet, EEGNet, TCNet Fusion) on a 52-subject MI-EEG dataset, applying channel dropout to simulate structural variability and LayerCAM to visualize spatiotemporal patterns. Results demonstrate that among the three evaluated deep learning models for MI-EEG classification, TCNet Fusion achieved the highest peak accuracy of 74.4% using 32 EEG channels. At the same time, ShallowConvNet recorded the lowest peak at 72.7%, indicating TCNet Fusion’s robustness in moderate-density montages. 
Incorporating MCD notably improved model consistency and classification accuracy, especially in low-performing subjects where baseline accuracies were below 70%; EEGNet and TCNet Fusion showed accuracy improvements of up to 10% compared to their non-MCD versions. Furthermore, LayerCAM visualizations enhanced with MCD transformed diffuse spatial activation patterns into more focused and interpretable topographies, aligning more closely with known motor-related brain regions and thereby boosting both interpretability and classification reliability across varying subject performance levels. Our approach offers a unified solution for uncertainty-aware and interpretable MI classification. Full article
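The Monte Carlo channel-dropout idea (repeated stochastic forward passes with whole EEG channels zeroed, averaged into a prediction plus an uncertainty estimate) can be sketched with a toy linear classifier standing in for the paper's CNN architectures:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mc_channel_dropout_predict(x, W, b, p_drop=0.3, n_samples=100):
    """Monte Carlo channel dropout at inference time: on each stochastic
    pass, randomly zero entire EEG channels (with inverted-dropout
    rescaling), then average the class probabilities. The mean is the
    prediction; the spread across passes is the uncertainty estimate.
    The linear 'model' is a hypothetical stand-in, not the paper's CNNs."""
    probs = []
    for _ in range(n_samples):
        mask = rng.random(x.shape[0]) > p_drop   # keep/drop decision per channel
        x_d = x * mask[:, None] / (1 - p_drop)   # inverted-dropout scaling
        feats = x_d.mean(axis=1)                 # crude per-channel feature
        probs.append(softmax(W @ feats + b))
    probs = np.array(probs)
    return probs.mean(axis=0), probs.std(axis=0)

n_channels, n_times, n_classes = 8, 128, 2
x = rng.standard_normal((n_channels, n_times))
W = rng.standard_normal((n_classes, n_channels))
b = np.zeros(n_classes)
mean_p, std_p = mc_channel_dropout_predict(x, W, b)
```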
(This article belongs to the Special Issue EEG Horizons: Exploring Neural Dynamics and Neurocognitive Processes)

24 pages, 890 KiB  
Article
MCTGNet: A Multi-Scale Convolution and Hybrid Attention Network for Robust Motor Imagery EEG Decoding
by Huangtao Zhan, Xinhui Li, Xun Song, Zhao Lv and Ping Li
Bioengineering 2025, 12(7), 775; https://doi.org/10.3390/bioengineering12070775 - 17 Jul 2025
Abstract
Motor imagery (MI) EEG decoding is a key application in brain–computer interface (BCI) research. In cross-session scenarios, the generalization and robustness of decoding models are particularly challenging due to the complex nonlinear dynamics of MI-EEG signals in both temporal and frequency domains, as well as distributional shifts across different recording sessions. While multi-scale feature extraction is a promising approach for generalized and robust MI decoding, conventional classifiers (e.g., multilayer perceptrons) struggle to perform accurate classification when confronted with high-order, nonstationary feature distributions, which have become a major bottleneck for improving decoding performance. To address this issue, we propose an end-to-end decoding framework, MCTGNet, whose core idea is to formulate the classification process as a high-order function approximation task that jointly models both task labels and feature structures. By introducing a group rational Kolmogorov–Arnold Network (GR-KAN), the system enhances generalization and robustness under cross-session conditions. Experiments on the BCI Competition IV 2a and 2b datasets demonstrate that MCTGNet achieves average classification accuracies of 88.93% and 91.42%, respectively, outperforming state-of-the-art methods by 3.32% and 1.83%. Full article
(This article belongs to the Special Issue Brain Computer Interfaces for Motor Control and Motor Learning)

22 pages, 4882 KiB  
Article
Dual-Branch Spatio-Temporal-Frequency Fusion Convolutional Network with Transformer for EEG-Based Motor Imagery Classification
by Hao Hu, Zhiyong Zhou, Zihan Zhang and Wenyu Yuan
Electronics 2025, 14(14), 2853; https://doi.org/10.3390/electronics14142853 - 17 Jul 2025
Abstract
The decoding of motor imagery (MI) electroencephalogram (EEG) signals is crucial for motor control and rehabilitation. However, as feature extraction is the core component of the decoding process, traditional methods, often limited to single-feature domains or shallow time-frequency fusion, struggle to comprehensively capture the spatio-temporal-frequency characteristics of the signals, thereby limiting decoding accuracy. To address these limitations, this paper proposes a dual-branch neural network architecture with multi-domain feature fusion, the dual-branch spatio-temporal-frequency fusion convolutional network with Transformer (DB-STFFCNet). The DB-STFFCNet model consists of three modules: the spatiotemporal feature extraction module (STFE), the frequency feature extraction module (FFE), and the feature fusion and classification module. The STFE module employs a lightweight multi-dimensional attention network combined with a temporal Transformer encoder, capable of simultaneously modeling local fine-grained features and global spatiotemporal dependencies, effectively integrating spatiotemporal information and enhancing feature representation. The FFE module constructs a hierarchical feature refinement structure by leveraging the fast Fourier transform (FFT) and multi-scale frequency convolutions, while a frequency-domain Transformer encoder captures the global dependencies among frequency domain features, thus improving the model’s ability to represent key frequency information. Finally, the fusion module effectively consolidates the spatiotemporal and frequency features to achieve accurate classification. To evaluate the feasibility of the proposed method, experiments were conducted on the BCI Competition IV-2a and IV-2b public datasets, achieving accuracies of 83.13% and 89.54%, respectively, outperforming existing methods. This study provides a novel solution for joint time-frequency representation learning in EEG analysis. Full article
(This article belongs to the Special Issue Artificial Intelligence Methods for Biomedical Data Processing)

14 pages, 1563 KiB  
Article
High-Resolution Time-Frequency Feature Selection and EEG Augmented Deep Learning for Motor Imagery Recognition
by Mouna Bouchane, Wei Guo and Shuojin Yang
Electronics 2025, 14(14), 2827; https://doi.org/10.3390/electronics14142827 - 14 Jul 2025
Abstract
Motor Imagery (MI) based Brain Computer Interfaces (BCIs) have promising applications in neurorehabilitation for individuals who have lost mobility and control over parts of their body due to brain injuries, such as stroke patients. Accurately classifying MI tasks is essential for effective BCI performance, but this task remains challenging due to the complex and non-stationary nature of EEG signals. This study aims to improve the classification of left and right-hand MI tasks by utilizing high-resolution time-frequency features extracted from EEG signals, enhanced with deep learning-based data augmentation techniques. We propose a novel deep learning framework named the Generalized Wavelet Transform-based Deep Convolutional Network (GDC-Net), which integrates multiple components. First, EEG signals recorded from the C3, C4, and Cz channels are transformed into detailed time-frequency representations using the Generalized Morse Wavelet Transform (GMWT). The selected features are then expanded using a Deep Convolutional Generative Adversarial Network (DCGAN) to generate additional synthetic data and address data scarcity. Finally, the augmented feature maps data are subsequently fed into a hybrid CNN-LSTM architecture, enabling both spatial and temporal feature learning for improved classification. The proposed approach is evaluated on the BCI Competition IV dataset 2b. Experimental results showed that the mean classification accuracy and Kappa value are 89.24% and 0.784, respectively, making them the highest compared to the state-of-the-art algorithms. The integration of GMWT and DCGAN significantly enhances feature quality and model generalization, thereby improving classification performance. These findings demonstrate that GDC-Net delivers superior MI classification performance by effectively capturing high-resolution time-frequency dynamics and enhancing data diversity. 
This approach holds strong potential for advancing MI-based BCI applications, especially in assistive and rehabilitation technologies. Full article
(This article belongs to the Section Computer Science & Engineering)

25 pages, 6826 KiB  
Article
Multi-Class Classification Methods for EEG Signals of Lower-Limb Rehabilitation Movements
by Shuangling Ma, Zijie Situ, Xiaobo Peng, Zhangyang Li and Ying Huang
Biomimetics 2025, 10(7), 452; https://doi.org/10.3390/biomimetics10070452 - 9 Jul 2025
Abstract
Brain–Computer Interfaces (BCIs) enable direct communication between the brain and external devices by decoding motor intentions from EEG signals. However, the existing multi-class classification methods for motor imagery EEG (MI-EEG) signals are hindered by low signal quality and limited accuracy, restricting their practical application. This study focuses on rehabilitation training scenarios, aiming to capture the motor intentions of patients with partial or complete motor impairments (such as stroke survivors) and provide feedforward control commands for exoskeletons. This study developed an EEG acquisition protocol specifically for use with lower-limb rehabilitation motor imagery (MI). It systematically explored preprocessing techniques, feature extraction strategies, and multi-classification algorithms for multi-task MI-EEG signals. A novel 3D EEG convolutional neural network (3D EEG-CNN) that integrates time/frequency features is proposed. Evaluations on a self-collected dataset demonstrated that the proposed model achieved a peak classification accuracy of 66.32%, substantially outperforming conventional approaches and demonstrating notable progress in the multi-class classification of lower-limb motor imagery tasks. Full article
(This article belongs to the Special Issue Advances in Brain–Computer Interfaces 2025)

20 pages, 2409 KiB  
Article
Spatio-Temporal Deep Learning with Adaptive Attention for EEG and sEMG Decoding in Human–Machine Interaction
by Tianhao Fu, Zhiyong Zhou and Wenyu Yuan
Electronics 2025, 14(13), 2670; https://doi.org/10.3390/electronics14132670 - 1 Jul 2025
Abstract
Electroencephalography (EEG) and surface electromyography (sEMG) signals are widely used in human–machine interaction (HMI) systems due to their non-invasive acquisition and real-time responsiveness, particularly in neurorehabilitation and prosthetic control. However, existing deep learning approaches often struggle to capture both fine-grained local patterns and long-range spatio-temporal dependencies within these signals, which limits classification performance. To address these challenges, we propose a lightweight deep learning framework that integrates adaptive spatial attention with multi-scale temporal feature extraction for end-to-end EEG and sEMG signal decoding. The architecture includes two core components: (1) an adaptive attention mechanism that dynamically reweights multi-channel time-series features based on spatial relevance, and (2) a multi-scale convolutional module that captures diverse temporal patterns through parallel convolutional filters. The proposed method achieves classification accuracies of 79.47% on the BCI-IV 2a EEG dataset (9 subjects, 22 channels) for motor intent decoding and 85.87% on the NinaPro DB2 sEMG dataset (40 subjects, 12 channels) for gesture recognition. Ablation studies confirm the effectiveness of each module, while comparative evaluations demonstrate that the proposed framework outperforms existing state-of-the-art methods across all tested scenarios. Together, these results demonstrate that our model not only achieves strong performance but also maintains a lightweight and resource-efficient design for EEG and sEMG decoding. Full article

17 pages, 3490 KiB  
Article
Four-Dimensional Adjustable Electroencephalography Cap for Solid–Gel Electrode
by Junyi Zhang, Deyu Zhao, Yue Li, Gege Ming and Weihua Pei
Sensors 2025, 25(13), 4037; https://doi.org/10.3390/s25134037 - 28 Jun 2025
Abstract
Currently, the electroencephalogram (EEG) cap is limited to a finite number of sizes based on head circumference, lacking the mechanical flexibility to accommodate the full range of skull dimensions. This reliance on head circumference data alone often results in a poor fit between the EEG cap and the user’s head shape. To address these limitations, we have developed a four-dimensional (4D) adjustable EEG cap. This cap features an adjustable mechanism that covers the entire cranial area in four dimensions, allowing it to fit the head shapes of nearly all adults. The system is compatible with 64 channels or lower electrode counts. We conducted a study with numerous volunteers to compare the performance characteristics of the 4D caps with the commercial (COML) caps in terms of contact pressure, preparation time, wearing impedance, and performance in brain–computer interface (BCI) applications. The 4D cap demonstrated the ability to adapt to various head shapes more quickly, reduce impedance during testing, and enhance measurement accuracy, signal-to-noise ratio (SNR), and comfort. These improvements suggest its potential for broader application in both laboratory settings and daily life. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—3rd Edition)

21 pages, 480 KiB  
Perspective
Towards Predictive Communication: The Fusion of Large Language Models and Brain–Computer Interface
by Andrea Carìa
Sensors 2025, 25(13), 3987; https://doi.org/10.3390/s25133987 - 26 Jun 2025
Abstract
Integration of advanced artificial intelligence with neurotechnology offers transformative potential for assistive communication. This perspective article examines the emerging convergence between non-invasive brain–computer interface (BCI) spellers and large language models (LLMs), with a focus on predictive communication for individuals with motor or language impairments. First, I will review the evolution of language models—from early rule-based systems to contemporary deep learning architectures—and their role in enhancing predictive writing. Second, I will survey existing implementations of BCI spellers that incorporate language modeling and highlight recent pilot studies exploring the integration of LLMs into BCI. Third, I will examine how, despite advancements in typing speed, accuracy, and user adaptability, the fusion of LLMs and BCI spellers still faces key challenges such as real-time processing, robustness to noise, and the integration of neural decoding outputs with probabilistic language generation frameworks. Finally, I will discuss how fully integrating LLMs with BCI technology could substantially improve the speed and usability of BCI-mediated communication, offering a path toward more intuitive, adaptive, and effective neurotechnological solutions for both clinical and non-clinical users. Full article
(This article belongs to the Section Biomedical Sensors)

41 pages, 2631 KiB  
Systematic Review
Brain-Computer Interfaces and AI Segmentation in Neurosurgery: A Systematic Review of Integrated Precision Approaches
by Sayantan Ghosh, Padmanabhan Sindhujaa, Dinesh Kumar Kesavan, Balázs Gulyás and Domokos Máthé
Surgeries 2025, 6(3), 50; https://doi.org/10.3390/surgeries6030050 - 26 Jun 2025
Abstract
Background: BCI and AI-driven image segmentation are revolutionizing precision neurosurgery by enhancing surgical accuracy, reducing human error, and improving patient outcomes. Methods: This systematic review explores the integration of AI techniques—particularly DL and CNNs—with neuroimaging modalities such as MRI, CT, EEG, and ECoG for automated brain mapping and tissue classification. Eligible clinical and computational studies, primarily published between 2015 and 2025, were identified via PubMed, Scopus, and IEEE Xplore. The review follows PRISMA guidelines and is registered with the OSF (registration number: J59CY). Results: AI-based segmentation methods have demonstrated Dice similarity coefficients exceeding 0.91 in glioma boundary delineation and tumor segmentation tasks. Concurrently, BCI systems leveraging EEG and SSVEP paradigms have achieved information transfer rates surpassing 22.5 bits/min, enabling high-speed neural decoding with sub-second latency. We critically evaluate real-time neural signal processing pipelines and AI-guided surgical robotics, emphasizing clinical performance and architectural constraints. Integrated systems improve targeting precision and postoperative recovery across select neurosurgical applications. Conclusions: This review consolidates recent advancements in BCI and AI-driven medical imaging, identifies barriers to clinical adoption—including signal reliability, latency bottlenecks, and ethical uncertainties—and outlines research pathways essential for realizing closed-loop, intelligent neurosurgical platforms. Full article
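The Dice similarity coefficient reported for the segmentation results has a simple closed form, 2|A∩B| / (|A| + |B|); a minimal sketch on binary masks:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|). This is the overlap metric the reviewed
    segmentation studies report (values above 0.91 for glioma delineation)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # two empty masks agree perfectly by convention
    return 2.0 * np.logical_and(pred, target).sum() / denom

# toy masks: a 4-pixel square vs. a shifted 6-pixel region overlapping in 4 pixels
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1
score = dice_coefficient(a, b)  # 2*4 / (4+6) = 0.8
```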

27 pages, 5969 KiB  
Article
An Analysis of the Severity of Alcohol Use Disorder Based on Electroencephalography Using Unsupervised Machine Learning
by Kaloso M. Tlotleng and Rodrigo S. Jamisola
Big Data Cogn. Comput. 2025, 9(7), 170; https://doi.org/10.3390/bdcc9070170 - 26 Jun 2025
Abstract
This paper presents an analysis of the severity of alcohol use disorder (AUD) based on electroencephalogram (EEG) signals and alcohol drinking experiments by utilizing power spectral density (PSD) and the transitions that occur as individuals drink alcohol in increasing amounts. We use data from brain–computer interface (BCI) experiments using alcohol as a stimulus recorded from a group of seventeen alcohol-drinking male participants and the assessment scores of the alcohol use disorders identification test (AUDIT). This method investigates the mild, moderate, and severe symptoms of AUD using the three key domains of AUDIT, which are hazardous alcohol use, dependence symptoms, and severe alcohol use. We utilize the EEG spectral power of the theta, alpha, and beta frequency bands by observing the transitions from the initial to the final phase of alcohol consumption. Our results are compared for people with low-risk alcohol consumption, harmful or hazardous alcohol consumption, and lastly a likelihood of AUD based on the individual assessment scores of the AUDIT. We use Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH) to cluster the results of the transitions in EEG signals and the overall brain activity of all the participants for the entire duration of the alcohol-drinking experiments. This study can be useful in creating an automatic AUD severity level detection tool for alcoholics to aid in early intervention and supplement evaluations by mental health professionals. Full article
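The per-band transition feature (change in spectral power from the initial to the final drinking phase) can be sketched as follows; the periodogram PSD and the 4–8/8–13/13–30 Hz band edges are common conventions assumed here, not values taken from the paper:

```python
import numpy as np

def band_power(sig, fs, lo, hi):
    """Simple periodogram PSD summed over the [lo, hi) Hz band."""
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(sig)) ** 2 / len(sig)
    return psd[(freqs >= lo) & (freqs < hi)].sum()

def phase_transitions(initial, final, fs,
                      bands={"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}):
    """Per-band change in spectral power from the initial to the final phase
    of a session (the transition feature clustered with BIRCH in the paper;
    band edges here are standard conventions, assumed for illustration)."""
    return {name: band_power(final, fs, lo, hi) - band_power(initial, fs, lo, hi)
            for name, (lo, hi) in bands.items()}

# synthetic example: beta activity weakens while theta activity appears
fs = 128
t = np.arange(fs * 2) / fs
initial = np.sin(2 * np.pi * 20 * t)
final = 0.5 * np.sin(2 * np.pi * 20 * t) + np.sin(2 * np.pi * 6 * t)
delta = phase_transitions(initial, final, fs)
```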

17 pages, 879 KiB  
Article
Effect of EEG Electrode Numbers on Source Estimation in Motor Imagery
by Mustafa Yazıcı, Mustafa Ulutaş and Mukadder Okuyan
Brain Sci. 2025, 15(7), 685; https://doi.org/10.3390/brainsci15070685 - 26 Jun 2025
Abstract
The electroencephalogram (EEG) is one of the most popular neurophysiological methods in neuroscience. Scalp EEG measurements are obtained using various numbers of channels for both clinical and research applications. This pilot study explores the effect of EEG channel count on motor imagery classification using source analysis in brain–computer interface (BCI) applications. Different channel configurations are employed to evaluate classification performance. This study focuses on mu band signals, which are sensitive to motor imagery-related EEG changes. Common spatial patterns are utilized as a spatiotemporal filter to extract signal components relevant to the right hand and right foot extremities. Classification accuracies are obtained using configurations with 19, 30, 61, and 118 electrodes to determine the optimal number of electrodes in motor imagery studies. Experiments are conducted on the BCI Competition III Dataset Iva. The 19-channel configuration yields lower classification accuracy when compared to the others. The results from 118 channels are better than those from 19 channels but not as good as those from 30 and 61 channels. The best results are achieved when 61 channels are utilized. The average accuracy values are 83.63% with 19 channels, increasing to 84.70% with 30 channels, 84.73% with 61 channels, and decreasing to 83.95% when 118 channels are used. Full article
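Common spatial patterns, the spatiotemporal filter used in this study, can be computed from the two class-mean covariance matrices by whitening and eigendecomposition; a minimal numpy sketch on synthetic two-class data (the channel counts and variance structure below are illustrative, not from the dataset):

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_filters=2):
    """Common spatial patterns via the classical whitening construction:
    whiten the composite covariance, eigendecompose class A's whitened
    covariance, and keep the extreme eigenvectors, which maximize variance
    for one class while minimizing it for the other."""
    def mean_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]  # trace-normalized
        return np.mean(covs, axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    evals, evecs = np.linalg.eigh(Ca + Cb)
    P = np.diag(evals ** -0.5) @ evecs.T                      # whitening matrix
    d, B = np.linalg.eigh(P @ Ca @ P.T)                       # ascending eigenvalues
    order = np.argsort(d)
    pick = np.concatenate([order[:n_filters // 2],
                           order[-(n_filters - n_filters // 2):]])
    return B[:, pick].T @ P                                   # (n_filters, n_channels)

rng = np.random.default_rng(1)
n_ch, n_t = 4, 500
# hypothetical classes: A has extra variance on channel 0, B on channel 3
A = [rng.standard_normal((n_ch, n_t)) * np.array([3, 1, 1, 1])[:, None] for _ in range(20)]
B = [rng.standard_normal((n_ch, n_t)) * np.array([1, 1, 1, 3])[:, None] for _ in range(20)]
W = csp_filters(A, B)  # row 1 weights channel 0 most, row 0 weights channel 3 most
```

Log-variances of the filtered signals would then serve as features for the mu-band classifier described in the abstract.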
(This article belongs to the Section Neurotechnology and Neuroimaging)

27 pages, 12336 KiB  
Article
Narrowband Theta Investigations for Detecting Cognitive Mental Load
by Silviu Ionita and Daniela Andreea Coman
Sensors 2025, 25(13), 3902; https://doi.org/10.3390/s25133902 - 23 Jun 2025
Abstract
The way in which EEG signals reflect mental tasks that vary in duration and intensity is a key topic in the investigation of neural processes concerning neuroscience in general and BCI technologies in particular. More recent research has reinforced historical studies that highlighted theta band activity in relation to cognitive performance. In our study, we propose a comparative analysis of experiments with cognitive load imposed by arithmetic calculations performed mentally. The analysis of EEG signals captured with 64 electrodes is performed on low theta components extracted by narrowband filtering. As main signal discriminators, we introduced an original measure inspired by the integral of the curve of a function—specifically the signal function over the period corresponding to the filter band. Another measure of the signal considered as a discriminator is energy. In this research, it was used just for model comparison. A cognitive load detection algorithm based on these signal metrics was developed and tested on original experimental data. The results present EEG activity during mental tasks and show the behavioral pattern across 64 channels. The most precise and specific EEG channels for discriminating cognitive tasks induced by arithmetic tests are also identified. Full article
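The integral-based discriminator is described only loosely in this abstract; a plausible reading (area under the rectified narrowband signal over the analysis window) can be sketched as follows, alongside the energy measure used for comparison. The idealized FFT-bin filter and the 4–6 Hz low-theta band are assumptions:

```python
import numpy as np

def narrowband_filter(sig, fs, lo, hi):
    """Zero-phase narrowband filtering in the frequency domain: keep only
    FFT bins inside [lo, hi] Hz (an idealized stand-in for the paper's filters)."""
    spec = np.fft.rfft(sig)
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(spec, n=len(sig))

def curve_integral(sig, fs):
    """Integral-of-the-curve discriminator: area under |x(t)| over the window
    (rectangle rule; this interpretation is inferred from the abstract)."""
    return np.abs(sig).sum() / fs

def energy(sig):
    """Signal energy, used in the paper only for model comparison."""
    return np.sum(sig ** 2)

# synthetic example: low-theta amplitude rises under cognitive load
fs = 128
t = np.arange(fs * 2) / fs
noise = np.sin(2 * np.pi * 30 * t)                        # out-of-band activity
rest_theta = narrowband_filter(0.2 * np.sin(2 * np.pi * 5 * t) + noise, fs, 4, 6)
load_theta = narrowband_filter(1.0 * np.sin(2 * np.pi * 5 * t) + noise, fs, 4, 6)
```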
(This article belongs to the Special Issue Sensors-Based Healthcare Diagnostics, Monitoring and Medical Devices)
