Search Results (821)

Search Parameters:
Keywords = Brain–Computer Interface (BCI)

39 pages, 9751 KB  
Article
Subject-Specific Comparative Performance Analysis of Deep Learning Architectures for Motor Imagery Classification
by Bandile Mdluli, Philani Khumalo and Rito Clifford Maswanganyi
Mathematics 2026, 14(9), 1527; https://doi.org/10.3390/math14091527 - 30 Apr 2026
Abstract
Motor Imagery (MI)-based brain–computer interfaces (BCIs) offer promising solutions for enhancing communication and motor function in individuals with neurological impairments. However, accurately decoding EEG signals is difficult: they have a poor signal-to-noise ratio, are highly sensitive to noise, and their low spatial resolution and variability across subjects and sessions make model generalization unreliable. While several deep learning models have been developed, fair comparison remains difficult due to differences in pre-processing, training procedures, and evaluation protocols. This study provides a systematic, controlled comparison of five deep learning approaches for subject-specific classification—EEGNet, EEG-TCNet, ShallowConvNet, DeepConvNet, and CTNet—using the BCI Competition IV datasets 2a and 2b. To enable an unbiased comparison, all models are trained with the same pipeline, under uniform pre-processing and training settings. Beyond classification accuracy, the effect of a fixed set of hyper-parameters on training dynamics, generalization capacity, and susceptibility to overfitting is evaluated, together with computational efficiency and the quality of the learned features. Using a five-dimensional analysis framework consisting of quantitative performance metrics, training curves, confusion matrix analysis, ROC analysis, and t-SNE visualization, the models are comprehensively compared. The experiments confirm that CTNet outperforms the other models, with accuracies of 82.56% and 86.42% on the BCI Competition IV datasets 2a and 2b, respectively. EEGNet shows the most potential for real-time applications owing to its lightweight structure, while DeepConvNet exhibits signs of overfitting despite good accuracy. These findings highlight that training characteristics and hyper-parameter sensitivity are important factors in evaluating deep learning models for MI-EEG classification.
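Several components of the five-dimensional evaluation framework described above are straightforward to compute. A minimal NumPy sketch of the confusion-matrix and per-class accuracy part, using hypothetical four-class motor-imagery labels (the label values and class meanings are illustrative assumptions, not data from the paper):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows = true class, columns = predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical 4-class MI labels (e.g., left hand / right hand / feet / tongue)
y_true = [0, 0, 1, 1, 2, 2, 3, 3]
y_pred = [0, 1, 1, 1, 2, 0, 3, 3]
cm = confusion_matrix(y_true, y_pred, 4)
accuracy = np.trace(cm) / cm.sum()        # overall accuracy
per_class = np.diag(cm) / cm.sum(axis=1)  # per-class recall (diagonal / row sum)
```

Reading accuracy off the diagonal of such a matrix is what the "quantitative performance metrics" dimension amounts to; the off-diagonal cells show which MI classes are confused with each other.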
67 pages, 3190 KB  
Review
Comparative Performance Analysis of Machine Learning Computational Pipelines and Deep Learning Architectures in EEG Motor Imagery BCIs
by Nerita Ramsoonder, Rito Clifford Maswanganyi and Philani Khumalo
Mathematics 2026, 14(9), 1520; https://doi.org/10.3390/math14091520 - 30 Apr 2026
Abstract
The deployment of Motor Imagery Brain–Computer Interfaces (MI-BCI) is constrained by the inherent physiological variability of Electroencephalography (EEG) and by parametric opacity. This paper presents a targeted technical audit of ten high-density MI-BCI computational pipelines, evaluating how the existing literature addresses low Signal-to-Noise Ratio (SNR), intra-subject variability, and session-to-session instability. The investigation focuses on the contamination of data by ocular and muscular artifacts that overlap with the spectral components of Mu and Beta rhythms, often leading to algorithmic overfitting. The paper also evaluates the impact of manifold drift, where fluctuations in user state necessitate frequent recalibration, as a primary hurdle for BCI portability. By applying a forensic evaluation framework to standardize the analysis across the ten selected studies, this paper identifies a high-performance landscape within standardized benchmarks, with classification accuracies reaching peak values of 95.42%. The audit specifically identifies a performance-reporting gap: while hybrid architectures demonstrate superior noise rejection, they are frequently characterized by undocumented computational overhead. Additionally, while Neighborhood Component Analysis (NCA) emerges as a stable feature selection algorithm across the sampled literature, the systemic absence of reported execution times prevents a verified assessment of its low-latency viability. A critical technical finding is the widespread issue of parametric opacity, particularly the omission of essential deterministic variables such as filter orders, windowing constants, and the final dimensionality of feature vectors. The frequent failure to report the exact number of features used for classification masks potential overfitting and prevents an accurate assessment of a system's generalization capability. Furthermore, only a small subset of the reviewed literature validates performance through formal statistical testing, such as Friedman's ANOVA or Wilcoxon signed-rank tests, with most studies relying on peak accuracy metrics that may disguise filtered artifact residuals. This lack of granular documentation obscures the computational complexity of the proposed methods and complicates their feasibility for hardware-in-the-loop validation. The findings establish that standardized reporting of preprocessing variables and feature-space dimensions is a prerequisite for overcoming current performance plateaus in universal BCI architectures.
21 pages, 1041 KB  
Article
Comparison of Point-and-Click Performance Between the Brainfingers BCI and the Mouse
by Alexandros Pino, Dimitrios Vrailas and Georgios Kouroupetroglou
Sensors 2026, 26(9), 2777; https://doi.org/10.3390/s26092777 - 29 Apr 2026
Abstract
This study quantitatively evaluates the performance of a non-invasive hybrid brain–computer interface (BCI) compared to a conventional mouse in pointing (point-and-click) tasks. A commercial wearable BCI (Brainfingers), based on electromyography (EMG) and electrooculography (EOG) signals with low-level electroencephalography (EEG) components, was assessed against a Microsoft Optical Mouse using ISO/TS 9241-411-based one-dimensional (1D) and two-dimensional (2D) target acquisition tasks. Pointer coordinates were recorded and analyzed using Fitts' law metrics. A total of 48 non-disabled participants completed the experiments. The results reveal significant performance differences between the two input devices. The BCI device exhibits substantially lower performance than the mouse across the reported Fitts' law measures. Mean throughput was 0.35 bits/s for the BCI and 6.03 bits/s for the mouse in the 1D tests and 0.43 bits/s for the BCI and 5.17 bits/s for the mouse in the 2D tests. Despite the BCI's low performance and although the present experiments involved non-disabled participants, the findings, considered alongside the prior literature on Brainfingers and non-invasive BCIs for computer access, suggest that the device may still have assistive technology value for users with severe motor impairments.
(This article belongs to the Special Issue Wearable Physiological Sensors for Smart Healthcare)
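The throughput figures quoted above come from Fitts' law. A simplified NumPy sketch using the Shannon formulation of the index of difficulty, ID = log2(D/W + 1), and throughput TP = ID/MT; note that ISO 9241-411 actually prescribes the effective target width computed from endpoint scatter, whereas this illustration uses nominal distance and width, and all values are hypothetical:

```python
import numpy as np

def throughput(distances, widths, movement_times):
    """Mean throughput (bits/s) via the Shannon formulation of
    Fitts' index of difficulty: ID = log2(D/W + 1), TP = ID / MT."""
    d = np.asarray(distances, dtype=float)
    w = np.asarray(widths, dtype=float)
    mt = np.asarray(movement_times, dtype=float)
    ids = np.log2(d / w + 1.0)       # index of difficulty, in bits
    return float(np.mean(ids / mt))  # bits per second

# Hypothetical trials: targets 200/400 px away, 20/40 px wide, ~0.7-0.9 s to acquire
tp = throughput([200, 400], [20, 40], [0.7, 0.9])
```

A mouse typically lands in the 4-6 bits/s range this toy example produces, which is why the 0.35-0.43 bits/s reported for the BCI marks such a large gap.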
16 pages, 4498 KB  
Article
Decoding Mandarin Action Verbs from EEG Using a Dual-LSTM Network: Towards Practical Assistive Brain–Computer Interfaces
by Binshuo Liu, Gengbiao Chen, Lairong Yin and Jing Liu
Sensors 2026, 26(9), 2749; https://doi.org/10.3390/s26092749 - 29 Apr 2026
Abstract
Electroencephalogram (EEG)-based brain–computer interfaces (BCIs) offer a promising pathway for restoring communication. Decoding tonal languages like Mandarin from EEG remains challenging due to homophones and complex temporal dynamics. This study investigates the decoding of six high-frequency Mandarin action verbs—Chi (eat), He (drink), Chuan (wear), Na (take), Kan (look), and Dai (put on)—from EEG signals. We designed a visual-cue-based overt speech production experiment and collected EEG data from 30 participants during visually guided verb reading aloud. A recurrent neural network framework incorporating dual Long Short-Term Memory (LSTM) layers was implemented to model the long-range temporal dependencies in EEG patterns. The proposed model was compared against a traditional Common Spatial Pattern combined with Support Vector Machine (CSP-SVM) baseline. Our LSTM-based model achieved an average classification accuracy of 69.93% ± 3.07% for the six-class task, significantly outperforming the CSP-SVM baseline (36.53% ± 3.17%). Accuracy exceeded 75% under favorable training conditions, namely more than 15 training repetitions with a training-data proportion of approximately 38%, demonstrating data efficiency. The results indicate that the LSTM architecture can effectively capture the neural signatures associated with Mandarin verb processing, providing a foundation for developing practical EEG-based assistive communication technologies. The inference latency of the trained model, quantified as the post-training per-trial testing time, was under 2 s, supporting near-real-time applications.
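The CSP step of the CSP-SVM baseline mentioned above can be sketched in plain NumPy. This is a generic whitening-plus-eigendecomposition CSP for two classes, not the authors' exact pipeline, and the trial arrays below are random placeholders:

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Common Spatial Patterns for two classes.
    trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Whiten the composite covariance Ca + Cb
    evals, evecs = np.linalg.eigh(Ca + Cb)
    P = np.diag(evals ** -0.5) @ evecs.T
    # Eigendecompose the whitened class-A covariance; extreme eigenvectors
    # maximize variance for one class while minimizing it for the other
    d, B = np.linalg.eigh(P @ Ca @ P.T)
    W = B.T @ P  # rows are spatial filters, sorted by eigenvalue (ascending)
    return np.vstack([W[:n_pairs], W[-n_pairs:]])

rng = np.random.default_rng(0)
a = rng.standard_normal((10, 8, 128))   # placeholder class-A trials
b = rng.standard_normal((10, 8, 128))   # placeholder class-B trials
W = csp_filters(a, b)                   # (4, 8): two pairs of spatial filters
features = np.log(np.var(W @ a[0], axis=1))  # classic log-variance features
```

The log-variance features of the filtered trials are what a downstream SVM would classify; multiclass tasks like the six-verb problem typically apply CSP in a one-vs-rest scheme.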
23 pages, 1673 KB  
Article
Transformer-Based SFDA by Class-Balanced Multicentric Dynamic Pseudo-Labeling for Privacy-Preserving EEG-Based BCI Systems
by Jiangchuan Liu, Jiatao Zhang, Cong Hu and Yong Peng
Systems 2026, 14(5), 476; https://doi.org/10.3390/systems14050476 - 28 Apr 2026
Abstract
As a common brain–computer interface (BCI) paradigm, electroencephalogram (EEG)-based motor imagery provides a critical pathway for both assistive technology (restoring communication and control) and active rehabilitation (promoting neural plasticity and functional recovery). Domain adaptation has been shown to effectively enhance the decoding of motor intentions for target subjects by leveraging labeled data from source subjects. However, EEG data from source subjects often contain extensive personal information, and direct access to source EEG data easily leads to privacy leakage. Achieving domain adaptation without directly accessing the source subjects' raw data is therefore an important research topic. To address this challenge, a privacy-preserving source-free domain adaptation framework, termed Transformer-based SFDA with Class-balanced Multicentric Dynamic Pseudo-labeling (T-CMDP), is proposed for cross-subject motor-imagery EEG classification. This framework consists of three coupled stages. In the source model training stage, a Transformer-based encoder combined with Riemannian manifold-aware feature extraction is employed to learn transferable and discriminative EEG feature representations. In the source-free target adaptation stage, only the pretrained source model is transferred to the target domain and adapted through knowledge distillation and information maximization, without accessing raw source EEG data. In the self-supervised learning stage, class-balanced multicentric prototypes and high-confidence pseudo-label updates are introduced to progressively refine the target-domain decision boundaries. Extensive experiments on three motor-imagery EEG datasets demonstrate that the proposed T-CMDP framework consistently outperforms eleven representative baselines from traditional machine learning, deep learning, and source-free transfer approaches, achieving average accuracies of 56.85%, 76.34%, and 74.49% on the three datasets, respectively. These results indicate that T-CMDP effectively alleviates inter-subject EEG distribution discrepancies while preserving the privacy of source subjects, thereby facilitating more reliable and practical deployment of EEG-based BCI systems.
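The high-confidence pseudo-labeling idea in the third stage can be illustrated with a small NumPy sketch: assign each unlabeled target feature to its nearest class prototype and keep the label only when a softmax-style confidence exceeds a threshold. This is a generic illustration, not the exact T-CMDP update rule; the prototypes, features, and threshold `tau` are hypothetical:

```python
import numpy as np

def pseudo_label(features, prototypes, tau=0.8):
    """Nearest-prototype pseudo-labels, kept only when the softmax over
    negative distances exceeds the confidence threshold tau."""
    d = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=2)
    logits = -d                                   # closer prototype -> larger logit
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    conf = p.max(axis=1)
    labels = p.argmax(axis=1)
    labels[conf < tau] = -1                       # -1 marks "rejected, do not use"
    return labels

protos = np.array([[0.0, 0.0], [5.0, 5.0]])       # two class prototypes
feats = np.array([[0.1, -0.2], [5.2, 4.9], [2.5, 2.5]])
labels = pseudo_label(feats, protos)              # the ambiguous midpoint is rejected
```

Only the accepted samples would then be used to recompute class-balanced prototypes on the next iteration, which is what progressively refines the target-domain decision boundaries.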
28 pages, 10998 KB  
Article
Introducing Brain–Computer Interfaces in Factories and Fabrication Lines for the Inclusion of Disabled Workers–Industry 5.0—A Modern Challenge and Opportunity
by Marian-Silviu Poboroniuc, Zoltán Nochta, Martin Klepal, Nina Hunter, Danut-Constantin Irimia, Alina Georgiana Baciu, Kelaja Schert, Tim Piotrowski and Alexandru Mitocaru
Multimodal Technol. Interact. 2026, 10(4), 41; https://doi.org/10.3390/mti10040041 - 17 Apr 2026
Abstract
Flexible factories and adaptive fabrication lines offer a testbed for advanced multimodal interaction concepts that can support the inclusion of disabled workers in Industry 5.0 manufacturing systems. The study synthesizes interdisciplinary data from ergonomics, industrial automation, and EU regulatory frameworks to establish a conceptual model for human-machine interaction. Building on conceptual modeling and a structured literature analysis, the study proposes a six-step integration framework that links task demands, worker capabilities, and interaction modalities within human-in-the-loop manufacturing environments. Although no empirical case study was conducted in this phase, an exemplary application is presented for a semi-automated bike wheel manufacturing process. Detailed machine-based assembly line flows and simulated process data were utilized for illustrative purposes to depict the process and validate the proposed Capability–Task Matching Matrix. The results operationalize the human-centric vision of Industry 5.0 by providing a structured methodology for the inclusion of disabled workers within fabrication environments. The findings are organized into two primary components: the conceptual development of the Integration Approach and its practical application to a semi-automated industrial use-case. Finally, a particular focus is placed on Brain–Computer Interfaces (BCIs) as an emerging interaction channel that enables non-muscular control, attention monitoring, and neuroadaptive feedback, complementing conventional interfaces rather than replacing them. The framework is illustrated through application to the same semi-automated bicycle wheel assembly line, where BCI-supported interaction, augmented interfaces, and robotic assistance are mapped to specific production tasks and assessed in terms of feasibility and technological maturity. Drawing on the paper's results, an explanatory 10-year roadmap outlines the feasibility and phased deployment of BCI solutions. It aligns technological advances with European regulations and a vision for a fully inclusive manufacturing enterprise.
18 pages, 9261 KB  
Article
MSResBiMamba: A Deep Cascaded Architecture for EEG Signal Decoding
by Ruiwen Jiang, Yi Zhou and Jingxiang Zhang
Mathematics 2026, 14(8), 1348; https://doi.org/10.3390/math14081348 - 17 Apr 2026
Abstract
Electroencephalogram (EEG) signals serve as the core information carrier for brain–computer interfaces (BCIs); however, their highly non-stationary nature, extremely low signal-to-noise ratio, and significant inter-individual variability pose considerable challenges for signal decoding. Existing deep learning methods struggle to strike a balance between multi-scale, fine-grained feature extraction and efficient long-range temporal modeling. To overcome this limitation, this study proposes a novel deep cascaded architecture, MSResBiMamba, which deeply integrates multi-scale spatiotemporal feature learning with cutting-edge long-sequence modeling techniques. The model first utilizes an enhanced multi-scale spatiotemporal convolutional network (MS-CNN) combined with a SE-channel attention mechanism to adaptively extract local multi-band features and dynamically suppress redundant artefacts. Subsequently, it innovatively introduces an enhanced bidirectional Mamba (Bi-Mamba) module to efficiently capture non-causal long-range temporal dependencies with linear computational complexity, whilst cascading multi-head self-attention mechanisms to establish global higher-order feature interactions. Extensive experiments on the BCI Competition IV-2a dataset demonstrate that MSResBiMamba achieves outstanding classification performance in multi-class motor imagery tasks, significantly outperforming traditional methods and existing state-of-the-art neural networks. Ablation studies and t-SNE visualisations further confirm the model's robustness in feature decoupling and cross-subject applications, providing a high-precision, high-efficiency decoding solution for BCI systems.
(This article belongs to the Section E1: Mathematics and Computer Science)
20 pages, 5500 KB  
Article
DTWICA: A Novel Method for Constructing Character Templates in Imaginary Handwriting
by Jiaofen Nan, Panpan Xu, Gaodeng Fan, Xueqi Jin, Shuyao Zhai, Yanting Li, Yongquan Xia, Yinghui Meng, Liqin Yue and Duan Li
Information 2026, 17(4), 379; https://doi.org/10.3390/info17040379 - 17 Apr 2026
Abstract
Imaginary handwriting is an important research paradigm in the field of brain-controlled typing. Neural signals exhibit high complexity, low signal-to-noise ratio, and strong temporal and environmental variability, leading to significant inter-trial differences in the temporal dynamics of character-related signals. These factors pose significant challenges for segmenting character-related signals and accurately decoding imaginary handwriting. To address these issues, this study proposes a Dynamic Time Warping Independent Component Analysis (DTWICA) framework. This framework employs FastDTW to construct individualized warping functions for each trial, followed by FastICA-based decomposition to separate the signal into distinct temporal and neuronal factors. The decomposed temporal factors are then mapped and transformed using the warping function and subsequently merged with the neuronal factors to reconstruct the signal. A sliding time window is then applied for adaptive processing, yielding the transformed signal. Finally, the transformed signals from multiple trials are averaged to generate a template for each character. Results based on a publicly available neural signals dataset for imaginary handwriting indicate that, compared with mainstream time warping models such as Shift, Linear, Piecewise, and TWPCA, the proposed model improves the character decoding accuracy for 31 characters by 14%, 13%, 7%, and 2%, respectively. This study not only constructs effective character signal templates but also facilitates accurate character segmentation during unlabeled imagined typing in an offline setting, providing a promising methodological basis for future real-time imagined typing decoding systems.
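The time-warping core of the framework rests on DTW; FastDTW approximates the classic quadratic-time dynamic-programming algorithm, which a minimal NumPy sketch makes concrete (1-D toy sequences here, not neural data):

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic-programming DTW between two 1-D sequences.
    (FastDTW, used in the paper, approximates this in linear time.)"""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Extend the cheapest of the three admissible predecessor paths
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
b = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0])  # same shape, time-shifted
d_warp = dtw_distance(a, b)                   # warping absorbs the shift entirely
```

Because the two toy sequences differ only by a time shift, the warped distance collapses to zero, which is exactly the property DTWICA exploits to align inter-trial timing differences before averaging trials into character templates.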
27 pages, 1201 KB  
Review
Brain–Computer Interfaces in Learning Disorders and Mathematical Learning: A Scoping Review with Structured Narrative Synthesis
by Viktoriya Galitskaya, Georgios Polydoros, Alexandros-Stamatios Antoniou, Pantelis Pergantis and Athanasios Drigas
Appl. Sci. 2026, 16(8), 3846; https://doi.org/10.3390/app16083846 - 15 Apr 2026
Abstract
Brain–Computer Interfaces (BCIs) have increasingly been explored as tools for monitoring and modulating cognitive processes relevant to learning. However, their application to learning disorders, and especially to mathematical learning difficulties such as dyscalculia and ageometria, remains conceptually promising but empirically underdeveloped. The present study offers a scoping review with structured narrative synthesis of recent empirical research on BCI-based interventions in learning disorder populations, with particular attention paid to their possible translational relevance for mathematical learning. Following PRISMA-ScR principles and a Population–Concept–Context framework, studies published between 2020 and 2025 were identified through database searches in Scopus, IEEE Xplore, and PubMed. A total of 30 studies met the inclusion criteria. All eligible studies focused on Attention-Deficit/Hyperactivity Disorder (ADHD), while no eligible BCI intervention studies were found for dyscalculia or ageometria. The reviewed literature was dominated by EEG-based neurofeedback interventions. To move beyond descriptive summary, the included studies were organized using a structured analytical framework based on intervention modality, primary cognitive target, methodological robustness, and translational proximity to mathematical learning disorders. Across the evidence base, the most consistent findings concerned attention regulation and executive function outcomes, whereas academic and mathematics-related outcomes were sparse and methodologically less developed. Although several studies suggested improvements in domain-general cognitive mechanisms relevant to mathematical learning, the absence of direct evidence in dyscalculia and ageometria prevents confirmatory conclusions. The review therefore identifies both the promise and the limits of current BCI applications in learning disorder contexts and argues that future research should prioritize theory-driven, disorder-specific trials targeting numeracy, visuospatial reasoning, and executive processes in mathematical learning disabilities. Although current findings suggest promising cognitive and educational potential, these technologies are not yet ready for routine implementation in standard classroom environments without further validation, teacher training, ethical safeguards, and cost-effective deployment models.
14 pages, 4611 KB  
Article
A Multi-Constrained Transfer Learning for Cross-Subject Decoding of Motor Imagery-Based BCI
by Boyang Yu and Li Zhang
Mathematics 2026, 14(8), 1314; https://doi.org/10.3390/math14081314 - 14 Apr 2026
Abstract
Individual differences and long calibration time present significant challenges to the practical implementation of brain–computer interfaces (BCIs). Domain adaptation technology can help mitigate these challenges by leveraging knowledge from existing subjects. Although domain adaptation methods have achieved progress in BCIs, there remains a need for further exploration in class structure and cross-domain dispersion. In this paper, we propose a novel framework, multi-constrained transfer learning with selective pseudo-label update (MCTLP). First, Euclidean alignment is applied to reduce inter-subject variability at the data level. Then, multi-constrained feature alignment (MCFA) is introduced, which iteratively constructs a kernel mapping space and then determines an optimized subspace to align both marginal and conditional distributions at the feature level under class structure and dispersion constraints. Moreover, in this iterative process of feature alignment, a selective pseudo-label update method is proposed to update the pseudo-labels of only the target samples with high classification confidence to realize more reliable conditional distribution alignment. Two benchmark datasets were used to verify the presented MCTLP. The results showed that MCTLP outperformed other existing methods, demonstrating its strong ability for cross-subject transfer.
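Euclidean alignment, the first step above, is a published re-centering technique for EEG transfer learning: each trial is whitened by the inverse square root of the subject's mean trial covariance, so that the aligned mean covariance becomes the identity for every subject. A NumPy sketch with random placeholder trials:

```python
import numpy as np

def euclidean_align(trials):
    """trials: (n_trials, n_channels, n_samples). Whitens each trial by
    R^{-1/2}, where R is the arithmetic mean of the trial covariances."""
    covs = np.array([x @ x.T / x.shape[1] for x in trials])
    R = covs.mean(axis=0)
    # Inverse matrix square root via eigendecomposition of the SPD matrix R
    evals, evecs = np.linalg.eigh(R)
    R_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    return np.array([R_inv_sqrt @ x for x in trials])

rng = np.random.default_rng(1)
trials = rng.standard_normal((20, 8, 256))   # placeholder: 20 trials, 8 channels
aligned = euclidean_align(trials)
# After alignment, the mean covariance equals the identity (up to float error)
R_new = np.mean([x @ x.T / x.shape[1] for x in aligned], axis=0)
```

Because every subject's aligned mean covariance is the identity, source and target data share a common reference point before the feature-level alignment stages run.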
20 pages, 1069 KB  
Article
Low-Latency Test-Time Adaptation for Inter-Subject SSVEP Decoding via Online Euclidean Alignment and Frequency-Regularized Entropy Minimization
by Sheng-Bin Duan and Jianlong Hao
Appl. Sci. 2026, 16(8), 3799; https://doi.org/10.3390/app16083799 - 13 Apr 2026
Abstract
Electroencephalography (EEG)-based brain–computer interface (BCI) systems are often affected by substantial inter-subject variability. These differences cause distribution shifts between the source domain and the target domain. As a result, the decoder's generalization to unseen subjects is reduced. In online steady-state visual evoked potentials (SSVEP)-based BCI systems, the decoder must not only cope with inter-subject distribution shifts but also adapt rapidly. However, most existing methods require accumulating multiple trials before adaptation, which increases data acquisition and update latency and thus limits their practicality in online settings. To address these challenges, this study focuses on a practically important but insufficiently explored setting, which is unlabeled inter-subject SSVEP decoding with single-trial online adaptation, where immediate adaptation is required and multi-trial accumulation is impractical. For this setting, this study proposes a low-latency test-time adaptation algorithm that combines trial-wise online Euclidean alignment, entropy minimization, and pseudo-label frequency regularization. This integration supports single-trial adaptation under online constraints, without requiring target labels or trial buffering, thereby reducing adaptation latency while mitigating inter-subject distribution shift. Experiments on two public datasets using four backbone models show that the proposed method achieves an average accuracy of 75.70%, outperforming the non-adaptive baseline by 3.88%. These results indicate that the proposed method improves inter-subject SSVEP decoding accuracy and shows potential for online BCI applications.
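An entropy-minimization objective with a frequency-regularization term can be sketched as follows. This is an illustrative NumPy loss, not the authors' exact formulation: the entropy term rewards confident per-trial predictions, while a KL term keeps the average predicted distribution near uniform over the stimulation frequencies so adaptation cannot collapse onto one class; `lam` is a hypothetical weighting:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def tta_loss(logits, lam=0.1):
    """Mean per-trial prediction entropy plus a KL-to-uniform penalty on
    the marginal (average) predicted class distribution."""
    p = softmax(logits)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=-1).mean()
    marginal = p.mean(axis=0)                  # average predicted distribution
    k = p.shape[-1]
    uniform = np.full(k, 1.0 / k)
    freq_reg = np.sum(marginal * np.log((marginal + 1e-12) / uniform))
    return entropy + lam * freq_reg

logits = np.array([[2.0, 0.1, 0.1], [0.2, 1.8, 0.3]])  # hypothetical 3-class trials
loss = tta_loss(logits)
```

Minimizing such a loss with respect to the decoder's parameters, one trial at a time, is what makes the adaptation both label-free and low-latency.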
25 pages, 5507 KB  
Article
A Cheonjiin Layout Mental Speller: Developing a Simple and Cost-Effective EEG-Based Brain–Computer Interface System
by Ji Won Ahn, Gi Yeon Yu, Seong-Wan Kim, Young-Seek Seok, Kyung-Min Byun and Seung Ho Choi
Sensors 2026, 26(7), 2265; https://doi.org/10.3390/s26072265 - 7 Apr 2026
Abstract
A brain–computer interface (BCI) enables direct communication between the brain and external devices by translating neural activity into executable control commands. Among electroencephalography (EEG)-based paradigms, steady-state visual evoked potential (SSVEP) is widely adopted due to its high signal-to-noise ratio, robustness, and minimal calibration requirements. While SSVEP-based spellers have been extensively investigated, many existing systems rely on high-channel-density EEG recordings and computationally complex processing pipelines, and are primarily designed for alphabetic input structures. In this study, we present an SSVEP-based Korean speller that integrates the Cheonjiin keyboard layout to support intuitive composition of Hangul syllables. The proposed system adopts a simple configuration, employing only five visual stimulation frequencies (6.67–12 Hz) and two occipital EEG channels (O1 and O2), with real-time frequency recognition performed using canonical correlation analysis (CCA) within a 1.5 s sliding window. EEG signals were acquired at 200 Hz using an OpenBCI Ganglion board, band-pass filtered (5–45 Hz), and processed with harmonic sinusoidal reference templates for multi-frequency classification. The proposed interface generates five control commands (up, down, left, right, and select), enabling directional cursor navigation and character confirmation on a 4 × 4 virtual Cheonjiin keyboard. Experimental validation with three healthy participants demonstrated an average classification accuracy of approximately 82% and an information transfer rate (ITR) of 31.2 bits/min. Frequency-domain analysis revealed clear spectral peaks at the stimulation frequencies and their harmonics, indicating reliable SSVEP responses. 
The proposed system pairs a simple two-channel configuration with a Korean language-specific input structure, demonstrating that reliable SSVEP-based communication can be achieved without computationally intensive algorithms or high-cost EEG acquisition equipment. Full article
(This article belongs to the Section Electronic Sensors)
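As a rough illustration of the CCA-based frequency recognition described in the abstract above (a sketch, not the authors' implementation; the function names, the harmonic count, and the test frequencies are assumptions), canonical correlation between a short EEG window and sinusoidal reference templates can be computed with plain NumPy:

```python
import numpy as np

def cca_corr(X, Y):
    """Largest canonical correlation between X (samples x channels)
    and Y (samples x reference signals), via the QR-based formulation."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    # Canonical correlations are the singular values of Qx.T @ Qy.
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def classify_ssvep(eeg, fs, stim_freqs, n_harmonics=2):
    """Pick the stimulation frequency whose sine/cosine template set
    (fundamental plus harmonics) correlates best with the EEG window."""
    t = np.arange(eeg.shape[0]) / fs
    best_f, best_r = None, -1.0
    for f in stim_freqs:
        refs = []
        for h in range(1, n_harmonics + 1):
            refs.append(np.sin(2 * np.pi * h * f * t))
            refs.append(np.cos(2 * np.pi * h * f * t))
        r = cca_corr(eeg, np.column_stack(refs))
        if r > best_r:
            best_f, best_r = f, r
    return best_f, best_r
```

With a 1.5 s window at 200 Hz (300 samples) and a handful of candidate frequencies, the highest canonical correlation indicates the attended stimulus; the exact five frequencies and filter settings in the paper are as reported in the abstract, not reproduced here.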
23 pages, 1751 KB  
Article
The Use of EEG in the Study of Emotional States and Visual Word Recognition with or Without Musical Stimulus in University Students with Dyslexia
by Pavlos Christodoulides, Dimitrios Peschos and Victoria Zakopoulou
Brain Sci. 2026, 16(4), 396; https://doi.org/10.3390/brainsci16040396 - 6 Apr 2026
Viewed by 483
Abstract
This study investigated neural oscillatory dynamics underlying visual word recognition in university students with dyslexia using a portable brain–computer interface (BCI) EEG system. The sample included university students with dyslexia (N = 12) and matched controls (N = 14) who completed auditory discrimination and visual word recognition tasks, with and without musical accompaniment. Through these experimental conditions, the researchers assessed (a) the cortical activation across frequency bands, (b) the modulatory effect of background music, and (c) the relationship between emotional states and brain activity. Results revealed significant group differences in oscillatory patterns, with reduced β- and γ-band activity in the left occipito-temporal cortex among participants with dyslexia, confirming disrupted temporal coordination in posterior reading networks. Compensatory right-hemisphere activation was observed, particularly under musical conditions, accompanied by increased α-band power and reduced δ activity, indicating enhanced attentional engagement and reduced cognitive fatigue. Emotional assessment using the DASS-21 revealed higher stress and anxiety scores in the dyslexic group, suggesting that affective factors may modulate oscillatory dynamics. The presence of background music appeared to attenuate these effects, supporting improved emotional regulation and cognitive focus. These findings demonstrate that dyslexia reflects a distributed disruption in neural synchrony and cross-frequency coupling, influenced by both cognitive and affective mechanisms. The integration of portable EEG technology with rhythmic auditory stimulation offers new insights into the neurophysiological and emotional aspects of dyslexia, highlighting the potential of rhythm- and music-based approaches for both diagnostic and therapeutic applications. Full article
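The band-level analysis described above (δ, α, β, and γ power changes) rests on estimating spectral power per canonical frequency band. A minimal sketch of relative band power from one EEG channel is shown below; the band edges and function name are illustrative assumptions, not the study's exact parameters:

```python
import numpy as np

# Illustrative band edges in Hz (conventions vary across studies).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def relative_band_power(x, fs, bands=BANDS):
    """Relative spectral power per band from a Hann-windowed
    periodogram of a single EEG channel."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    psd = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    total = psd[(freqs >= 1) & (freqs <= 45)].sum()
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in bands.items()}
```

Group differences such as "reduced β- and γ-band activity" correspond to comparing these per-band values between conditions or populations, typically after averaging across epochs and electrodes.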
21 pages, 2193 KB  
Article
Electroencephalography-Based Brain–Computer Interface System Using Tongue Movement Imagery for Wheelchair Control
by Theerat Saichoo, Nannaphat Siribunyaphat, Bukhoree Sahoh, M. Arif Efendi and Yunyong Punsawad
Sensors 2026, 26(7), 2211; https://doi.org/10.3390/s26072211 - 2 Apr 2026
Viewed by 651
Abstract
Brain–computer interfaces (BCIs) are essential in assistive technologies to restore mobility in individuals with motor impairments. Although electroencephalography (EEG)-based brain-controlled wheelchairs have been extensively studied, most tongue-controlled systems rely on physical tongue movements, intraoral devices, or limited offline commands, which reduces usability and comfort. This study introduces an EEG-based tongue motor imagery (MI) BCI for intuitive and entirely mental wheelchair control. By leveraging preserved motor function and the cortical representation of the tongue, the system enables natural four-directional control through imagined tongue movements. Six imagined tongue actions—touching the left and right mouth corners, the upper and lower lips, and producing left and right cheek bulges—were designed to elicit alpha-band event-related desynchronization (ERD) patterns over the tongue motor cortex. EEG data were collected from 15 healthy participants using a 14-channel consumer-grade EMOTIV EPOC X headset. Alpha-band ERD features were extracted and classified using linear discriminant analysis, support vector machine, naïve Bayes, and artificial neural networks (ANNs). Simpler command sets yielded the highest accuracy: two-class tasks achieved 76.19%, while performance decreased with increasing task complexity. The ANN achieved superior results in multi-class scenarios. The proposed tongue MI method offers initial support for developing a BCI control strategy for assistive technology; however, further improvements in classification techniques, user training, and real-time validation are needed to improve robustness and practical usability. Full article
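The alpha-band ERD features mentioned in this abstract follow a standard definition: the percentage change of band power in a task window relative to a resting baseline, with negative values indicating desynchronization. A minimal sketch is given below, assuming a simple FFT-mask band-pass; the function names and window choices are illustrative, not the study's pipeline:

```python
import numpy as np

def bandpass_power(x, fs, lo=8.0, hi=13.0):
    """Mean power of x within [lo, hi] Hz using a simple FFT mask
    (a crude band-pass; real pipelines typically use IIR/FIR filters)."""
    X = np.fft.rfft(x - np.mean(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(freqs < lo) | (freqs > hi)] = 0.0
    return np.mean(np.abs(np.fft.irfft(X, n=len(x))) ** 2)

def erd_percent(baseline, event, fs, lo=8.0, hi=13.0):
    """Classic ERD%: relative alpha-power change of the event window
    with respect to the resting baseline (negative => desynchronization)."""
    p_ref = bandpass_power(baseline, fs, lo, hi)
    p_evt = bandpass_power(event, fs, lo, hi)
    return 100.0 * (p_evt - p_ref) / p_ref
```

Per-channel ERD% values computed this way can then be stacked into a feature vector and fed to any of the classifiers named in the abstract (LDA, SVM, naïve Bayes, ANN).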

20 pages, 1367 KB  
Review
Deep Learning Decoding of Steady-State Visual Evoked Potential (SSVEP) for Real-Time Mobile Brain–Computer Interfaces: A Narrative Review from Laboratory Settings to Lightweight Engineering Applications
by Hanzhen Zhang and Chunjing Tao
Brain Sci. 2026, 16(4), 387; https://doi.org/10.3390/brainsci16040387 - 31 Mar 2026
Viewed by 734
Abstract
Background/Objectives: SSVEP-BCI has broad application potential in mobile human–computer interaction due to its high information transfer rate and stable signal characteristics. The introduction of deep learning technology has significantly advanced SSVEP decoding performance, offering novel approaches for processing short-duration signals and tackling complex classification tasks. The establishment of the Tsinghua Benchmark dataset provides a standardized benchmark for evaluating algorithm performance, accelerating the development of deep learning-based SSVEP decoding. However, a summary of SSVEP deep learning decoding technologies for real-time mobile applications is lacking. Methods: We conducted a comprehensive literature review of SSVEP deep learning decoding studies published since 2023, using the Tsinghua Benchmark dataset. This review focuses on technical developments targeting real-time performance, low computational complexity, and high robustness. Results: We summarize the key technologies developed for real-time mobile SSVEP decoding. Our analysis thoroughly examines how these techniques address core challenges in the engineering implementation of mobile brain–computer interfaces, including real-time processing requirements, resource constraints, and environmental robustness. Conclusions: This review provides a comprehensive overview of SSVEP deep learning decoding technologies for mobile applications, establishing a technical foundation to advance mobile brain–computer interfaces from laboratory settings to practical deployment. Full article
(This article belongs to the Special Issue Trends and Challenges in Neuroengineering)
