Article

Multi-Modal EEG–Fusion Neurointerface Wheelchair Control System

1 School of Communication and Information Engineering, Nanjing University of Posts and Telecommunications, No. 66, XinMofan Road, Gulou District, Nanjing 210003, China
2 Portland College, Nanjing University of Posts and Telecommunications, 9 Wenyuan Road, Yadong New District, Nanjing 210023, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2025, 15(23), 12577; https://doi.org/10.3390/app152312577
Submission received: 2 September 2025 / Revised: 12 November 2025 / Accepted: 25 November 2025 / Published: 27 November 2025

Abstract

The development of effective and user-friendly brain–computer interface (BCI) systems is essential for enhancing mobility and autonomy among individuals with physical disabilities. Recent studies have demonstrated significant advances in BCI technologies, particularly in the areas of motor imagery (MI), blink detection, and attention-level analysis. However, existing systems often face limitations, such as low classification accuracy, high latency, and poor robustness in dynamic, real-world environments. Furthermore, most traditional BCIs rely on single-modality approaches, which restrict their adaptability and real-time performance. This paper aims to address these challenges by presenting a multi-modal electroencephalography (EEG)–fusion neurointerface wheelchair system integrating MI, intentional blink detection, and attention-level analysis. The proposed system improves on previous methods by employing a novel eight-channel needle-shaped dry electrode EEG headset, which significantly enhances signal quality through better electrode–skin contact without the need for conductive gels. Additionally, the system processes EEG signals in real time on a Jetson Nano platform, incorporating a dual-threshold blink detection algorithm for emergency stops, an optimized random forest classifier for decoding directional MI, and a support vector machine (SVM) for attention-level assessment. Experimental evaluations involving classification accuracy, response latency, and trajectory-following precision confirmed robust system performance. MI classification accuracy averaged around 80%, with optimized attention-level analysis reaching up to 94.1%. Trajectory control tests demonstrated minimal deviation from predefined paths (typically less than 0.25 m). These results highlight the system’s advancements over existing single-modality BCIs, showcasing its potential to significantly improve the quality of life for mobility-impaired users. Future studies should focus on enhancing lateral MI detection accuracy, expanding datasets, and validating system robustness across diverse real-world scenarios.

1. Introduction

1.1. Background

With the acceleration of global population aging and continuous improvements in living standards, a growing number of individuals are experiencing physical functional impairments caused by aging or various diseases. Consequently, difficulties in performing daily activities and limited mobility have become major challenges, especially for patients with severe conditions such as amyotrophic lateral sclerosis (ALS), cervical spondylosis, and paralysis. Although existing brain-controlled wheelchair technologies provide basic mobility assistance, their performance is often restricted by the inherently weak, complex, and noisy nature of EEG signals. Numerous systematic reviews report that EEG-based wheelchair systems still face issues such as low signal-to-noise ratio, poor generalizability, long command latency, susceptibility to ambient interference, and low robustness in real-world environments [1,2].
Many systems therefore resort to additional external sensors, multi-modal hardware or multi-channel amplifiers to enhance performance, but these solutions inevitably increase device cost, structural complexity, energy consumption, and the risk of system instability.
Recent research has steadily advanced brain-controlled mobility and rehabilitation systems. For example, Kanungo et al. developed a hybrid BCI system combining steady-state visual evoked potential (SSVEP) and intentional blink detection for autonomous wheelchair navigation in a home environment, achieving an approximately 86.97% command success rate with about 4.0 s per command execution [3]. Liu et al. designed a multi-modal rehabilitation platform coupling an asynchronous online EEG interface with a wearable exoskeleton, showing significant improvement in user engagement and upper-limb motor performance [4]. Ghasemi, Gračanin & Azab introduced a smart-wheelchair BCI that decodes EEG-derived navigational intentions into forward, backward, and turning commands while addressing safety and privacy issues for large-scale deployment [5]. However, despite these advances, most existing systems still rely heavily on high-density EEG caps, multi-channel amplifiers, and powerful edge processors, which escalate cost, increase device bulk and power consumption, and introduce more points of failure. Moreover, as highlighted by recent review studies on EEG-based brain–computer interface systems—such as Huang et al. (2020) [6]—the introduction of auxiliary sensors or high-density hardware modules tends to increase system complexity, leading to greater susceptibility to synchronization errors, communication failures, and overall reductions in operational robustness and long-term stability during real-world use.
To comprehensively address these limitations, this study proposes an advanced multi-modal EEG-based wheelchair control framework that integrates motor imagery (MI) signals, intentional blink detection, and attention-level analysis into a unified control scheme without relying on external sensors. By synergistically combining multiple EEG-derived bioelectric modalities, the framework effectively compensates for the weaknesses of any single signal type, thereby significantly enhancing overall control stability, accuracy and responsiveness. Moreover, the design simplifies hardware requirements, reduces system cost, and improves reliability, paving the way for broader adoption and practical deployment of brain-controlled wheelchairs, ultimately enhancing autonomy and quality of life for mobility-impaired users.

1.2. Control Principle

EEG signals are continuously monitored and digitized in real time by a non-invasive EEG acquisition device equipped with a high-sampling-rate analog-to-digital converter (ADC) operating at 1 kHz, higher than the typical 250–500 Hz [7] used in standard clinical EEG systems. This higher rate provides finer temporal resolution for capturing transient EEG dynamics such as motor imagery and blink-related potentials. Here, “continuously” refers to the uninterrupted acquisition of the underlying analog EEG activity, whereas digitization is performed at a sufficiently high sampling rate to preserve the temporal characteristics of the neural signals. The raw signals are first integrated and amplified locally and subsequently transmitted to the Jetson Nano platform via a wireless Bluetooth module [8]. The Jetson Nano is an embedded computing platform built around a system-on-chip (SoC) and designed for edge computing and embedded artificial-intelligence applications [8]. Its core advantage is that, while maintaining low power consumption (5–10 W), it integrates a graphics processing unit (GPU) based on NVIDIA’s Compute Unified Device Architecture (CUDA) parallel computing platform, enabling thousands of lightweight threads to execute concurrently for real-time signal processing and deep learning inference without relying on external cloud servers. This ensures both real-time performance and data privacy.
As an edge computing device, the Jetson Nano performs two primary tasks: signal preprocessing and state-based decoding. In the preprocessing stage, noise and artifacts are first removed from the EEG signals to ensure data quality [9]. A finite-impulse-response (FIR) band-pass filter (0.5–30 Hz) is applied to suppress low-frequency drift as well as power-line and high-frequency interference. In addition, linear interpolation reconstructs defective or missing EEG samples by linearly weighting two to three neighboring valid channels, which preserves temporal signal integrity while keeping computational latency low enough for real-time processing. Abnormal signal segments with drastic waveform deviations were manually inspected and removed during offline dataset preparation, whereas in real-time operation, similar artifacts are automatically suppressed by adaptive filtering and ICA-based artifact rejection. Finally, Independent Component Analysis (ICA) based on the InfoMax (information-maximization) algorithm is applied to separate and remove artifacts such as electrooculography (EOG), electromyography (EMG), and electrocardiography (ECG) [10]. The InfoMax algorithm is a learning rule that maximizes the mutual information between the input and output of a neural network model to estimate statistically independent source components, which makes it effective for EEG artifact removal. Following preprocessing, the system determines the current operational state of the brain-controlled wheelchair, which can be categorized into three distinct states: stopped, MI, and moving [11].
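As a concrete illustration, the 0.5–30 Hz FIR band-pass stage described above can be sketched in Python with SciPy. The tap count, window, and synthetic test signal below are illustrative assumptions, not the authors' exact filter parameters.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

def bandpass_fir(eeg, fs=1000, low=0.5, high=30.0, numtaps=501):
    """Zero-phase 0.5-30 Hz FIR band-pass, as used in the EEG preprocessing stage.

    filtfilt applies the filter forward and backward, so the output has no
    phase lag (important when blink onsets must be timed accurately).
    """
    taps = firwin(numtaps, [low, high], pass_zero=False, fs=fs)
    return filtfilt(taps, [1.0], eeg, axis=-1)

# Example: 2 s of synthetic "EEG" at 1 kHz -- a 10 Hz mu-band component
# contaminated with 50 Hz mains interference.
fs = 1000
t = np.arange(0, 2, 1 / fs)
clean = np.sin(2 * np.pi * 10 * t)
noisy = clean + 0.5 * np.sin(2 * np.pi * 50 * t)
filtered = bandpass_fir(noisy, fs=fs)
```

With 501 taps at 1 kHz, the transition band is a few hertz wide, so the 50 Hz component falls well inside the stopband while the 10 Hz component passes essentially untouched.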
In the stopped state, the system persistently monitors the preprocessed EEG data and detects intentional blinking signals that are distinguishable from normal physiological blinks. Upon detection of such a deliberate blink, the system transitions into the MI state. In this state, secondary filtering is performed to eliminate any residual ocular interference. Feature vectors are then extracted and input into a pre-trained machine learning classifier. The classifier interprets user intent from MI patterns such as imagining upward, downward, leftward, or rightward limb movement—corresponding to forward, backward, left-turn, and right-turn wheelchair control commands, respectively. After short-duration temporal confirmation, the inferred command is delivered to the STM32G0 MCU via a serial communication interface for subsequent motor control.
In the moving state, the system continues real-time EEG acquisition and evaluates the user’s concentration level to dynamically adjust wheelchair speed across predefined levels. Specifically, three operational speed levels are configured based on indoor mobility safety considerations: low speed = 0.4 m/s, medium speed = 0.7 m/s, and high speed = 1.0 m/s. The system remains vigilant for new blink signals, which function as interrupt commands to immediately stop movement and revert to the stopped state. This closed-loop design enables safe, efficient, and intuitive control of the brain-actuated wheelchair. To ensure real-time interaction, the total delay introduced by the blink-command pipeline, including lightweight preprocessing and dual-threshold detection, is maintained within 0.1 s. Meanwhile, the motor imagery decoding pipeline—involving preprocessing, feature extraction, and machine learning classification—results in an end-to-end latency of less than 0.5 s. These delay specifications satisfy the operational safety requirements for continuous wheelchair navigation. The overall workflow of the multi-modal EEG-based wheelchair control system is illustrated in Figure 1.
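The stopped/MI/moving logic described above can be summarized as a small finite-state machine. The sketch below is a simplified model with hypothetical method names; the speed levels are the values given in the text (0.4/0.7/1.0 m/s), and blink events act both as the MI-arming trigger and the emergency-stop interrupt.

```python
from enum import Enum

class State(Enum):
    STOPPED = "stopped"
    MI = "mi"
    MOVING = "moving"

# Speed levels from the text (m/s), chosen for indoor mobility safety.
SPEEDS = {"low": 0.4, "medium": 0.7, "high": 1.0}

class WheelchairFSM:
    """Minimal sketch of the three-state closed-loop control described above."""

    def __init__(self):
        self.state = State.STOPPED
        self.command = None
        self.speed = 0.0

    def on_blink(self):
        # Deliberate blink: arm MI decoding when stopped, emergency-stop when moving.
        if self.state is State.STOPPED:
            self.state = State.MI
        elif self.state is State.MOVING:
            self.state = State.STOPPED
            self.command, self.speed = None, 0.0

    def on_mi_command(self, direction):
        # direction in {"forward", "backward", "left", "right"},
        # decoded from the up/down/left/right imagery tasks.
        if self.state is State.MI:
            self.command = direction
            self.speed = SPEEDS["low"]  # start conservatively
            self.state = State.MOVING

    def on_attention(self, level):
        # level in {"low", "medium", "high"}; only adjusts speed while moving.
        if self.state is State.MOVING:
            self.speed = SPEEDS[level]
```

A typical cycle is blink → MI decode → move with attention-scaled speed → blink to stop, matching the closed-loop workflow of Figure 1.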
Recent studies have validated the feasibility of using non-invasive EEG devices and MI paradigms for reliable control of assistive technologies. Palumbo et al. conducted a comprehensive review on EEG-based BCI applications in wheelchair navigation [1]. Liao et al. proposed EEGEncoder, a hybrid architecture combining a Transformer and a Temporal Convolutional Network (TCN), which leverages self-attention for spatial correlation learning and temporal convolutions for sequence modeling, thereby enhancing MI classification accuracy and generalization [12]. An et al. demonstrated that a low-cost 16-channel EEG system combined with deep neural networks can maintain consistent MI-based robotic control across multiple days with reduced training requirements [13]. These works collectively support the robustness and practicality of our proposed multi-modal closed-loop control framework.

2. Materials and Methods

2.1. Integrated System Design

2.1.1. Core Module Design

The intelligent brain-controlled wheelchair integrates its core functions into a compact electromechanical architecture, as shown in Figure 2. The system consists of four primary modules: the communication module, main control unit, power drive unit, and motion execution unit. EEG signals are acquired via the EEG-Z Sensor and transmitted wirelessly to the Jetson Nano using Bluetooth. The Jetson Nano performs real-time signal decoding and sends motor control instructions to the STM32G0 microcontroller through a USB interface.
The STM32G0 Microcontroller Unit (MCU) (STMicroelectronics, Geneva, Switzerland) generates Pulse Width Modulation (PWM) signals that are fed into Infineon full-bridge motor driver chips, devices used to drive high-power motors. These signals are amplified to control two independent rear-wheel DC brushed motors, enabling directional motion. The wheelchair adopts a classic rear-wheel-drive layout with front universal casters, ensuring stability and maneuverability. Each motor is rated at 24 VDC, 250 W, 120 rpm, and 13.4 A, providing sufficient torque and power for daily indoor and outdoor operation. Similar hardware control frameworks, combining real-time EEG decoding, microcontroller-based PWM generation, and full-bridge motor drive, have been widely adopted in recent BCI wheelchair systems [1]. The proposed design ensures reliable, compact, and safe EEG-based mobility control.

2.1.2. EEG Signal Acquisition Equipment

For EEG signal acquisition, a custom-configured 64-channel EEG headset compatible with the international 10–20 system (Smarting Mobi, Belgrade, Serbia) was employed. The electrode layout strictly adheres to the international 10–20 system, as illustrated in Figure 3, ensuring comprehensive coverage and high spatial resolution across the entire scalp. Unlike dry electrodes, which often suffer from poor contact and high impedance, this system utilizes electrodes interfaced with the scalp using saline solution or conductive EEG paste. This approach significantly enhances electrode–skin contact quality, ensures stable electrical conductivity, and improves the accuracy and reliability of signal acquisition, all while being completely non-invasive. From the full set of 64 channels, a strategic subset of key electrodes was selected to balance decoding efficiency with computational load. As annotated in Figure 3, these include Fp1 and Fp2 in the prefrontal region for ocular artifact detection, enabling robust blink identification for emergency-stop commands; C3 and C4 over the primary motor cortex to decode hand MI signals, which are critical for wheelchair navigation control and are well-established key sites for capturing such intentions [14]; A1 and A2 on the earlobes serving as reference electrodes to suppress common-mode noise; and the midline electrodes Cz and Pz as auxiliary channels to enhance overall decoding robustness and system responsiveness. This configuration aligns with the emphasis in the recent wearable BCI literature on robust preprocessing and modular design. The resulting hardware–software pipeline (EEG → embedded decoder → PWM → motor drivers) implements an efficient and modular control architecture suitable for real-time assistive mobility platforms.
To maintain reliable long-term performance during online operation, electrode–scalp impedance was monitored periodically throughout each recording session. Although contact impedance typically increases gradually over time due to saline evaporation and micro-movement, all experimental sessions were limited to a duration of less than 60 min, during which no noticeable degradation in signal quality or interruption of EEG decoding occurred. Nevertheless, long-term impedance stability remains a crucial consideration for future practical deployment. As part of our ongoing research, we plan to conduct extended-duration evaluations (4–8 h) to quantitatively analyze impedance drift and its impact on decoding accuracy and to explore improved electrode designs for enhanced robustness during prolonged wheelchair usage.

2.1.3. System Integration and Architecture

For hardware control and drive, a modular design approach was adopted, consisting of three distinct functional modules (the main control, power, and isolation modules), as shown in Figure 4a. This modular architecture improves system scalability, reliability, and maintainability by clearly separating signal processing, power delivery, and safety functions. The main control module integrates an STM32G030C6T6 microcontroller, selected for its low-power 64 MHz Cortex-M0+ core and rich on-chip peripherals. It communicates bidirectionally with the Jetson Nano through a CH340N USB-to-serial converter, enabling real-time exchange of EEG-derived commands and feedback. The precise PWM waveforms generated by the MCU drive the actuators of the wheelchair. The module also links to an OLED display and other peripherals via the Inter-Integrated Circuit (I2C) bus, which provides low-speed serial communication for user interaction and system monitoring.
As illustrated in Figure 4b, the integrated hardware system is composed of three main modules: the drive module, relay unit, and master control module. The drive module delivers power conversion and motor control through a high-current driver protected by circuit breakers, ensuring reliable actuation of the wheelchair motors. The relay unit serves as an intermediate switching interface, enabling the safe and isolated control of multiple actuators and peripheral devices. The master control module, centered around a microcontroller board, coordinates signal processing, decision logic, and communication with upper-layer systems. Together, these modules form a robust, modular architecture that ensures stable operation, electrical safety, and ease of maintenance for the brain-controlled wheelchair.

2.2. Multi-Modal Wheelchair Control Realization

2.2.1. Blink Signal Detection and Signal Process

  • Blink Signal Characteristics and Detectability;
Blink signals, also known as EOG signals, arise from transient changes in ocular surface potentials caused by periocular muscle contractions—especially the orbicularis oculi and levator palpebrae superioris—during eyelid closure and opening. These blink-induced potentials are sharp, high-amplitude deflections (typically 100–400 ms) that stand out clearly against background EEG activity [15]. As shown in Figure 5, a canonical blink waveform features a rapid rise (or fall) followed by a slower return to baseline, with polarity and amplitude varying by frontal electrode site. Owing to their distinctive morphology and temporal profile, blink signals serve as a robust control modality in BCI systems—for instance, as reliable “emergency stop” commands or mode switches in real-time brain-controlled wheelchairs [16].
To ensure accurate and real-time blink detection, a lightweight signal preprocessing pipeline was designed, as illustrated in Figure 6. Given that blink signals are short-duration and high-amplitude, intensive filtering is avoided to minimize latency. The process begins by selecting blink-related channels (e.g., Fp1, Fp2) and discarding low-quality or unrelated channels to improve the signal-to-noise ratio. As highlighted by the research in [17], Fp1 and Fp2 electrodes are critical for detecting blink artifacts due to their proximity to ocular control areas in the frontal lobe. Next, interpolation is applied to reconstruct defective signal segments, preserving temporal integrity [18]. The signals are then re-referenced using stable electrodes such as A1 and A2 to eliminate common-mode noise and enhance blink-related feature visibility. Finally, minimal drift correction is performed, and computationally expensive operations like Independent Component Analysis (ICA) are deliberately skipped to maintain processing speed. This approach strikes a balance between signal clarity and the stringent timing requirements of real-time BCI systems. Unlike motor imagery processing, blink detection prioritizes response latency over exhaustive denoising. Therefore, no band-pass filtering or ICA is applied, and only lightweight steps including channel selection (Fp1/Fp2), defective-segment interpolation, rereferencing (A1/A2), and minimal drift correction are performed to preserve the sharp morphology of blink events.
To detect blink events in real time with high accuracy, we designed a dual-threshold detection mechanism combined with a lockout and reset strategy. As illustrated in Figure 6, the system monitors the amplitude of the EEG signal and triggers blink detection when the signal exceeds a predefined high threshold $T_h$. This marks the start of a blink event.
Immediately after the signal crosses $T_h$, the system enters a lockout period $T_{\mathrm{lockout}}$, during which subsequent signal peaks are ignored to prevent multiple detections from the same blink. This period is typically set to 400–600 ms, corresponding to the average duration of a blink. After the lockout period elapses, the system waits for the signal to drop below a defined low threshold $T_l$, indicating that the blink has ended. The system only begins the next detection cycle when both conditions are satisfied: $T_{\mathrm{lockout}}$ has passed, and the signal amplitude has fallen below $T_l$, signaling the completion of the blink. Before blink detection, the two adaptive amplitude thresholds are automatically calculated from the frontal EEG channels (Fp1/Fp2). Specifically, the high threshold $T_h$ is defined as $\mu + 3\sigma$ and the low threshold $T_l$ as $\mu + \sigma$, where $\mu$ and $\sigma$ denote the mean and standard deviation of the baseline EEG signal over a 1–2 s sliding window without blinking. This statistical rule ensures that deliberate blinks, which have larger peak amplitudes, can be reliably distinguished from spontaneous physiological ones while minimizing false detections during real-time execution.
The entire process is formalized in Algorithm 1, which iterates over the time series signal and applies the following logic:
Algorithm 1: Blink Detection Algorithm
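A minimal Python sketch of this dual-threshold, lockout-and-reset logic is given below. It is illustrative only; the lockout duration follows the 400–600 ms range stated in the text, and the adaptive thresholds follow the $\mu + 3\sigma$ / $\mu + \sigma$ rule.

```python
import numpy as np

def detect_blinks(x, fs, t_high, t_low, lockout_s=0.5):
    """Dual-threshold blink detector with lockout and reset.

    A blink starts when x exceeds t_high; peaks during the lockout are
    ignored; the next detection cycle is armed only after the lockout has
    elapsed AND the signal has fallen back below t_low.
    Returns the sample indices of detected blink onsets.
    """
    events = []
    lockout = int(lockout_s * fs)
    i, n = 0, len(x)
    while i < n:
        if x[i] > t_high:
            events.append(i)        # blink onset detected
            i += lockout            # ignore further peaks from the same blink
            while i < n and x[i] >= t_low:
                i += 1              # wait for reset below the low threshold
        else:
            i += 1
    return events

# Adaptive thresholds from a blink-free baseline window, as described above:
# T_h = mu + 3*sigma, T_l = mu + sigma.
baseline = np.random.default_rng(0).normal(0.0, 0.1, 2000)
mu, sigma = baseline.mean(), baseline.std()
t_high, t_low = mu + 3 * sigma, mu + sigma
```

Because only thresholding and index arithmetic are involved, this path easily stays within the 0.1 s latency budget quoted for the blink-command pipeline.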

2.2.2. MI Signal Processing

  • EEG Rhythms and Event-Related Desynchronization (ERD)/Event-Related Synchronization (ERS) Theory
Motor imagery (MI) primarily modulates two EEG frequency bands: the mu rhythm (8–13 Hz) and the beta rhythm (13–30 Hz), which reflect rhythmic oscillations in the motor cortex. These rhythms represent coordinated neural activity associated with motor control and movement preparation.
During MI or actual movement, power in these bands decreases—known as event-related desynchronization (ERD)—and increases upon relaxation, termed event-related synchronization (ERS). These dynamic changes form the basis of MI detection.
Moreover, MI induces contralateral activation: imagining movement on one side (e.g., left hand) suppresses motor signals in the opposite hemisphere, producing distinct ERD patterns used as features for classification.
  • Signal Processing Pipeline for MI
MI signals typically exhibit low amplitudes and are often embedded within spontaneous EEG activity. Therefore, strict signal processing is essential to extract reliable features for downstream classification. As shown in Figure 7, the raw EEG signal often contains baseline drift, EOG, and EMG interference.
The preprocessing pipeline begins with the selection of motor imagery (MI)-relevant channels, specifically C3 and C4, which correspond to the left and right sensorimotor cortices according to the international 10–20 system. These electrodes were chosen because they exhibit strong ERD/ERS patterns during MI tasks. Although the EEG headset provides 64 channels in total, the remaining electrodes are retained to ensure full-scalp coverage for artifact rejection, spatial referencing, and future multi-modal analysis. Subsequently, interpolation is applied to reconstruct corrupted or missing signal segments and maintain temporal integrity. Next, the signals are re-referenced using stable electrodes (e.g., A1/A2) to enhance contrast, and a high-pass filter (0.5 Hz cutoff) is applied to remove low-frequency drift and stabilize the baseline. Finally, Independent Component Analysis (ICA) separates and eliminates residual artifacts, such as those from blinks, ocular motion, and muscle activity, without compromising real-time performance [19].
This structured, multi-stage pipeline significantly improves EEG quality and the robustness of MI decoding for real-time BCI applications. In contrast, motor imagery decoding demands higher signal quality because ocular and muscular artifacts can severely distort ERD/ERS features. Therefore, a more rigorous preprocessing pipeline is adopted, including C3/C4 channel selection, interpolation, rereferencing, high-pass filtering (0.5 Hz cutoff) to remove low-frequency drift, and ICA to suppress EOG/EMG components, improving decoding reliability at the cost of slightly increased latency. This study leverages these principles to implement four distinct MI paradigms corresponding to the wheelchair’s movement states.
  • Experimental Protocol for MI Tasks
To train and validate the motor imagery (MI) classification models, EEG data were collected during four directional MI tasks in which participants imagined a ball moving up, down, left, or right. A total of 20 healthy college students (10 male, 10 female; age 20–25 years) participated in the study. All subjects had normal or corrected vision and no history of neurological or psychiatric disorders.
Each trial began with a visual cue displayed on a screen showing a static image of a ball with an arrow indicating the intended direction of motion. After a 2 s fixation period, the cue changed to an animated ball moving in that direction for 5 s. Participants were instructed to mentally simulate the ball’s movement without any physical motion. A 3 s rest interval followed each trial to avoid fatigue. Each participant completed 40 trials per direction (160 trials total), and the order of directions was randomized. EEG signals were recorded at a 1 kHz sampling rate using the C3, C4, Cz, and Pz electrodes for MI decoding.
The classification results of these four directional MI tasks were then mapped to corresponding wheelchair movement commands: imagining the ball moving up corresponds to the forward command, down to backward, left to turn left, and right to turn right. This mapping enables the system to translate users’ mental imagery into real-time motion control of the wheelchair.
  • Feature Extraction.
ERS and ERD are prominent phenomena observed during MI tasks. When a participant performs unilateral MI, the energy of the μ rhythm (8–13 Hz) and β rhythm (16–24 Hz) in the contralateral motor–somatosensory cortex decreases (ERD), while energy in the ipsilateral area increases (ERS). Therefore, we extract band energy features within these two frequency bands as the core input for classification.
The band energy in a given frequency band is calculated as follows:
$$\mathrm{BandEnergy}_{ij} = \sum_{k=1}^{n} P_{ij}(f_k)$$
where $P_{ij}(f_k)$ is the power spectral density (PSD) of the $i$-th channel in the $j$-th time window at frequency point $f_k$, and $f_k$ denotes the $k$-th frequency component within the target band.
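For instance, this band-energy sum can be computed from a Welch PSD estimate. The sketch below uses SciPy with assumed window settings, not the authors' exact estimator.

```python
import numpy as np
from scipy.signal import welch

def band_energy(x, fs, band):
    """Sum the PSD over the frequency points inside `band` (lo, hi) in Hz."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), fs))
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[mask].sum()

# Example: a 10 Hz sinusoid concentrates its energy in the mu band (8-13 Hz),
# leaving almost nothing in the beta feature band (16-24 Hz).
fs = 1000
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)
mu_energy = band_energy(x, fs, (8, 13))
beta_energy = band_energy(x, fs, (16, 24))
```

In the actual pipeline, such band energies would be computed per channel and per time window to track ERD/ERS over time.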
In addition to frequency-domain energy, we also extract time-domain statistical features such as mean, variance, skewness, and kurtosis. Furthermore, we incorporate time–frequency features obtained through the short-time Fourier transform (STFT), wavelet transform, and Hilbert–Huang transform (HHT), along with the phase locking value (PLV), to capture both spectral and temporal dynamics. Among them, the PLV is used to measure the phase synchrony between two brain regions:
$$\mathrm{PLV} = \left|\frac{1}{N}\sum_{k=1}^{N} e^{j\,\Delta\phi_k}\right|$$
where $N$ is the number of sampling points, $\Delta\phi_k$ is the phase difference between the two signals at the $k$-th sampling point, $e^{j\,\Delta\phi_k}$ is the complex exponential representation of that phase difference, and the outer absolute-value bars denote the modulus, which reflects the average strength of phase consistency.
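A direct implementation of the PLV via the analytic signal (Hilbert transform) might look like this. The Hilbert-based phase extraction is a common choice for PLV, not necessarily the authors' exact method.

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase locking value between two equal-length signals.

    Phases come from the analytic signal; the PLV is the modulus of the
    mean complex exponential of the phase differences (1 = perfect lock).
    """
    phi_x = np.angle(hilbert(x))
    phi_y = np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * (phi_x - phi_y)))))

# Example: same frequency with a constant lag locks (PLV near 1);
# different frequencies drift apart (PLV near 0).
fs = 1000
t = np.arange(0, 2, 1 / fs)
a = np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 10 * t + 0.8)   # constant phase offset
c = np.sin(2 * np.pi * 17 * t)         # different frequency
```

For EEG, `x` and `y` would typically be band-pass-filtered channel pairs (e.g., C3 vs. C4 in the mu band).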
Figure 8 illustrates sample EEG signals from the C3 and C4 channels in both time and frequency domains.
  • Random Forest Classifier.
To classify EEG signals corresponding to the four motor imagery (MI) tasks, we employed a random forest (RF) classifier due to its high robustness, fast inference speed, and strong tolerance to noisy data. The RF algorithm operates by constructing an ensemble of multiple decision trees, each trained on a different subset of the original dataset using bootstrap sampling.
Given the original dataset $D$ [20] collected from the four-directional ball motor imagery experiment, multiple training subsets $D_1, D_2, \ldots, D_B$ were generated through sampling with replacement, where each $D_b$ has the same size as $D$.
Each decision tree is independently trained on its corresponding subset. During the training process, at every internal node, a random subset of the available features is selected. The best splitting feature is then determined from this subset based on a splitting criterion. Two commonly used criteria are Information Gain and Gini Impurity.
The information gain $IG$ quantifies the reduction in entropy after a dataset is split on an attribute. It is defined as follows:
$$IG(D, A) = H(D) - \sum_{v \in \mathrm{Values}(A)} \frac{|D_v|}{|D|}\, H(D_v)$$
where $D$ is the current sample set, $A$ is a candidate splitting feature, and $D_v$ is the subset of $D$ in which feature $A$ takes the value $v$. The entropy $H(D)$ is calculated as:
$$H(D) = -\sum_{i=1}^{m} p_i \log_2 p_i$$
where $p_i$ denotes the proportion of samples in class $i$, and $m$ is the total number of classes.
Alternatively, the Gini impurity criterion measures the probability of misclassification and is defined as follows:
$$\mathrm{Gini}(D) = 1 - \sum_{i=1}^{m} p_i^2$$
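The two splitting criteria can be computed directly from a list of class labels; a small illustrative sketch:

```python
import numpy as np

def entropy(labels):
    """H(D) = -sum_i p_i log2 p_i over the class proportions in `labels`."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def gini(labels):
    """Gini(D) = 1 - sum_i p_i^2 over the class proportions in `labels`."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(1.0 - np.sum(p ** 2))
```

Both measures reach their maximum on an evenly mixed node (e.g., 1.0 bit and 0.5 for a 50/50 binary split) and drop to 0 on a pure node, which is why either can drive the recursive splitting described next.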
Once the optimal feature and split point are selected, the dataset is recursively partitioned until one of the stopping conditions is satisfied. These may include reaching a predefined maximum depth or having the number of samples in a leaf node fall below a certain threshold.
Once all decision trees in the random forest are constructed, classification is performed via majority voting. For a given test instance, each of the $B$ trees provides an independent class prediction $c_b$. The final output label $\hat{c}$ is the class receiving the highest number of votes among all tree predictions:
$$\hat{c} = \operatorname*{arg\,max}_{c \in C} \sum_{b=1}^{B} I(c_b = c)$$
where $C$ denotes the set of all possible classes, and $I(c_b = c)$ is an indicator function equal to 1 if the prediction $c_b$ from the $b$-th tree matches class $c$, and 0 otherwise. This ensemble approach makes the final prediction robust to the variability and noise present in individual trees, thereby improving generalization.
To further enhance classification performance, a comprehensive hyperparameter optimization process was conducted using a grid search strategy. We systematically evaluated combinations of different parameter values—including maximum tree depth, number of decision trees, the minimum number of samples required to split an internal node, and the number of features considered at each split. Each combination was assessed via cross-validation to ensure model stability and generalization.
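A grid search over exactly these four hyperparameters can be expressed with scikit-learn. The parameter values and the synthetic stand-in dataset below are illustrative assumptions, not the study's actual search space or EEG features.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Stand-in data: 4 classes mimicking the four directional MI tasks
# (real inputs would be the band-energy / statistical feature vectors).
X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           n_classes=4, random_state=0)

param_grid = {
    "n_estimators": [50, 100],       # number of trees B
    "max_depth": [None, 10],         # maximum tree depth
    "min_samples_split": [2, 5],     # min samples to split an internal node
    "max_features": ["sqrt", 0.5],   # features considered at each split
}

# Each combination is scored by cross-validation, as described in the text.
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, n_jobs=-1)
search.fit(X, y)
```

After fitting, `search.best_params_` holds the winning combination and `search.best_estimator_` is the refitted classifier ready for online inference.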

2.2.3. Concentration Level Analysis

  • Experimental Task Design
The brain’s level of concentration is closely associated with the activity of specific EEG rhythms, including θ-, α-, and β-band signals, which exhibit distinct frequency ranges and spatial distributions across the cortex. Empirical studies have shown that focused mental states are typically linked with increased frontal β activity, parietal α synchronization, and central θ enhancement [21]. However, owing to hardware limitations of the EEG acquisition equipment and the need to keep the system lightweight, this work uses a reduced number of electrode channels for feature extraction: signals from the Fp1, Fp2, C3, C4, Cz, and Pz electrodes. The frontal channels (Fp1 and Fp2) were selected because they lie near the ocular motor areas and are sensitive to blink- and attention-related potentials. The central and parietal electrodes (C3, C4, Cz, and Pz) were chosen because they overlie the sensorimotor cortices responsible for motor imagery (MI) activity, and previous studies have shown strong ERD/ERS patterns in these regions during MI tasks. This electrode configuration balances ocular control features against motor-intent decoding accuracy while minimizing redundancy among the 64 available channels.
To enable real-time adaptive speed regulation of the brain-controlled wheelchair based on the user’s attention level, the EEG samples must be categorized into distinct concentration states. Accordingly, we designed four experimental tasks that elicit different cognitive loads and levels of external attention. The classification standard and task design are presented in Table 1.
The concentration levels elicited by Tasks 1, 2, and 3 are mapped to the high, medium, and low wheelchair speeds, respectively.
  • Sample Entropy
Sample entropy (SE) is a nonlinear metric that quantifies the complexity of a time series. It is particularly suitable for analyzing EEG signals that contain mixed deterministic and stochastic components. Unlike approximate entropy, SE reduces bias by avoiding self-comparisons and is more consistent across different data lengths.
$$SE(m, r, N) = -\ln\frac{B^{m+1}(r)}{C^{m}(r)}$$
where $m$ is the embedding dimension, representing the length of the constructed template vectors; $N$ is the length of the sequence; $B^{m+1}(r)$ is the proportion of vector pairs that satisfy the similarity condition in dimension $m+1$; and $C^{m}(r)$ is the corresponding proportion in dimension $m$. Since $B^{m+1}(r) \le C^{m}(r)$, the negative logarithm yields a non-negative entropy value.
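The definition above can be computed directly. This sketch uses common defaults (embedding dimension m = 2, tolerance r = 0.2 times the signal standard deviation) that are assumptions, not values stated in the paper:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): -ln of the ratio between template matches at length m+1
    and at length m, with self-matches excluded as in the definition above."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim + 1)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance between template i and all later templates
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= tol)
        return count

    B = count_matches(m)      # matches in dimension m
    A = count_matches(m + 1)  # matches in dimension m + 1
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# A regular sine wave is highly predictable, so its sample entropy is much
# lower than that of white noise of the same length.
t = np.linspace(0, 8 * np.pi, 400)
noise = np.random.default_rng(0).normal(size=400)
print(sample_entropy(np.sin(t)) < sample_entropy(noise))  # True
```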
  • Wavelet Packet Decomposition
Wavelet packet decomposition is applied to separate the EEG signal into standard frequency sub-bands corresponding to θ, α, and β waves. Each sub-band captures the signal activity within a specific range, as shown in Figure 9. The total energy for each frequency band is obtained by summing the squared wavelet coefficients in that band. The ratios $E_\theta/E_{\mathrm{all}}$, $E_\alpha/E_{\mathrm{all}}$, and $E_\beta/E_{\mathrm{all}}$ are then computed, representing the relative contribution of each frequency band to the total energy.
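The paper obtains band energies from wavelet packet coefficients; as a simpler self-contained sketch, the same energy ratios can be approximated from the FFT power spectrum. The 250 Hz sampling rate and the band edges below are illustrative assumptions:

```python
# FFT-based approximation of the band energy ratios E_band / E_all
# (a stand-in for the wavelet packet computation described in the text).
import numpy as np

def band_energy_ratios(signal, fs=250.0):
    """Return (theta, alpha, beta) fractions of the total 4-30 Hz energy."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    energies = {name: power[(freqs >= lo) & (freqs < hi)].sum()
                for name, (lo, hi) in bands.items()}
    total = sum(energies.values())
    return tuple(energies[b] / total for b in ("theta", "alpha", "beta"))

# A pure 10 Hz tone concentrates almost all of its energy in the alpha band.
t = np.arange(0, 2.0, 1 / 250.0)
theta, alpha, beta = band_energy_ratios(np.sin(2 * np.pi * 10 * t))
print(alpha > 0.95)  # True
```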
The final feature vector F incorporates both time domain and frequency domain statistics. As illustrated in Figure 10, each of the four selected channels (Fp1, Fp2, C3, and C4) yields ten features: rectified mean, maximum value, peak-to-peak difference, root mean square (RMS), standard deviation, margin factor, sample entropy, and the energy ratios of the θ, α, and β bands. The complete feature vector thus contains 40 dimensions (10 features × 4 channels).
  • SVM Classifier
To classify EEG signals corresponding to different attentional states, an SVM classifier was employed owing to its robustness and high discriminative power. The SVM algorithm maps the training data into a high-dimensional feature space, where it constructs an optimal hyperplane that maximizes the margin between classes, thereby improving generalization [22,23]. In this study, the dataset was randomly divided into three subsets: 60% for training, 20% for validation, and 20% for testing. This stratified split ensured that each subset maintained a representative distribution of all attentional states. Initially, all ten extracted features per channel—including time domain, frequency domain, and nonlinear metrics—were used as input to the SVM model to establish a baseline. The classifier was configured to distinguish four attention states (high, medium, low, and non-externally directed). The initial input feature space consisted of 80 dimensions, derived from 10 features extracted from each of the eight EEG channels (6 time domain statistics, 1 nonlinear complexity metric based on sample entropy, and 3 frequency domain energy ratios). To reduce redundancy and improve decoding efficiency, a Sequential Forward Selection (SFS) strategy was applied [24]. The optimal feature subset contained 40 dimensions, dominated by sample entropy, standard deviation, RMS, rectified mean, and margin factor, which provided the strongest discrimination across attention levels. Using the full 80-dimensional feature set, the SVM achieved an average classification accuracy of 88.7%. After SFS optimization, accuracy improved to 94.1%, and the average per-subject accuracy increased from 90.03% to 92.00%, demonstrating that feature selection substantially enhanced performance while reducing computational cost for real-time control.
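The SVM-plus-SFS pipeline can be sketched with scikit-learn. The data below are synthetic (random features and labels standing in for the EEG feature matrix), and the feature counts are reduced for brevity; only the structure of the pipeline follows the text:

```python
# Sketch of the SVM attention classifier with sequential forward selection,
# on synthetic data standing in for the EEG feature matrix.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))    # 300 windows x 20 features (synthetic)
y = rng.integers(0, 4, size=300)  # four attention states

# 60/20/20 split (train / validation / test), as described above.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Forward selection keeps the subset of features that best serves the SVM.
sfs = SequentialFeatureSelector(SVC(kernel="rbf"), n_features_to_select=10,
                                direction="forward", cv=3)
sfs.fit(X_train, y_train)

clf = SVC(kernel="rbf").fit(sfs.transform(X_train), y_train)
print(clf.score(sfs.transform(X_val), y_val))
```

On real EEG features the selected subset, not random data, drives the reported accuracy gain from 88.7% to 94.1%.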
For motor imagery decoding, a random forest classifier with tuned hyperparameters yielded robust performance, with accuracies consistently above 75% and averaging close to 80%, demonstrating dependable translation of EEG into directional commands. Random forest was selected for its high accuracy and strong generalization ability [20]. Compared with deep learning models, it achieves good accuracy with faster response times, which is particularly important for real-time systems. Furthermore, as new EEG data from different subjects may be added in the future, the resistance of random forests to overfitting is beneficial. The SVM, in turn, was chosen for its ability to handle high-dimensional data. With the feature dimensionality reaching 80 (8 channels × 10 features), the SVM maintained strong performance without the degradation often associated with the “curse of dimensionality”: because its decision boundary depends only on the critical support vectors, redundant dimensions do not interfere, yielding a baseline classification accuracy of 88.7%.

3. Results

3.1. MI Classification Results

3.1.1. Random Forest Hyperparameter Optimization

To ensure that the random forest classifier achieved its maximum potential, systematic hyperparameter optimization was performed using a grid search strategy. This process was essential because the performance of tree-based ensemble models is highly sensitive to parameters such as tree depth, number of trees, and minimum sample thresholds. The search space covered a variety of plausible combinations to balance model complexity, overfitting risk, and generalization ability.
The heatmap in Figure 11 illustrates the classification accuracy across different parameter combinations. From the heatmap, it is evident that deeper trees with an appropriate number of features per split consistently produced better results, although excessively deep trees occasionally led to overfitting. Therefore, the final parameters (summarized in Table 2) represent a compromise between predictive accuracy and computational efficiency, ensuring robust performance without unnecessary complexity.
These optimized parameters not only improved the overall model accuracy but also enhanced its stability, making it suitable for real-time EEG signal decoding where both speed and reliability are crucial.

3.1.2. Predicted Results

After training the random forest classifier using the optimized parameters, its performance was evaluated on the MI EEG dataset. The primary goal of this experiment was to assess the system’s ability to distinguish between four distinct imagined movements (up, down, left, right), which directly translate into wheelchair navigation commands.
A total of 1000 EEG signal segments were collected, with 250 segments corresponding to each directional MI task. The dataset was split into 600 samples for training, 200 for validation, and 200 for testing. To evaluate model stability and robustness, 80 samples were randomly drawn from the validation set for inference, and this procedure was repeated 15 times.
The results of these trials are summarized in Table 3. The random forest classifier consistently achieved classification accuracies above 75%, with an average around 80%, confirming its robustness. Notably, the best trial reached over 90% accuracy. However, the results also reveal a performance gap in distinguishing left and right movements compared to up and down. This discrepancy is likely due to weaker signal amplitude or greater noise in the EEG channels corresponding to lateral MI, which is a well-known challenge in EEG-based BCIs.
Despite this limitation, the classifier reliably produced correct predictions with low variance across trials, validating the effectiveness of the proposed multi-modal EEG-based control framework. Future work could address the left/right imbalance by collecting more training samples, employing advanced feature extraction techniques, or incorporating adaptive learning methods tailored to individual users.

3.2. Concentration Classification Results

Using the complete feature set, the SVM classifier achieved an average classification accuracy of 88.7%, indicating good baseline performance in distinguishing between different attentional states [25]. To further enhance accuracy and reduce computational complexity, a sequential forward feature selection method was applied. This iterative approach identified the most discriminative subset of features, effectively removing redundant or less informative features. With the optimized feature set, the classification accuracy improved significantly, reaching 94.1%. This substantial improvement demonstrates that the selected features effectively capture the underlying neural patterns associated with attention levels and are suitable for real-time concentration-based speed regulation in brain-controlled wheelchair systems. These results validate the feasibility and reliability of using EEG-based SVM classifiers for adaptive control applications.

3.3. Trajectory Control Experiment Results

To further evaluate the practical performance of the multi-modal EEG–fusion wheelchair system, a trajectory control experiment was designed [26]. The goal of this experiment was to assess whether the proposed system could enable users to control the wheelchair reliably and precisely along a predefined path, while performing complex maneuvers such as forward movement, turning, and reversing. This task reflects real-world mobility demands, where users often encounter obstacles and must navigate narrow or curved paths. Thus, validating the system’s trajectory control under controlled but challenging conditions is crucial to demonstrating its feasibility for daily use. Three subjects participated in the experiment, each operating the wheelchair via the multi-modal interface. The predefined path followed the sequence A → B → C → D → C, completing a round trip with intermediate points marked every 0.5 m for measurement. Notably, the segment from D to C required reversing in place, testing the system’s responsiveness and control accuracy during backward motion.
The experimental results, summarized in Table 4, demonstrate that all three subjects successfully completed the full path control task, navigating back and forth along the predefined route. Their trajectories, marked in Figure 12 with pink circles (Trail 1), green diamonds (Trail 2), and blue squares (Trail 3), closely aligned with the reference path. Quantitatively, the maximum deviation from the reference path was no more than 0.5 m, even during challenging segments involving turns or reversing, and the median deviation remained below 0.25 m. This high level of accuracy underscores the operability, stability, and robustness of the proposed system under various motion modes. The experiment highlights the system’s capacity to maintain precise control despite the inherent variability and noise of EEG signals, confirming its suitability for real-world navigation tasks. Furthermore, the ability to execute smooth transitions between forward, turning, and reversing maneuvers suggests that the multi-modal interface effectively mitigates the limitations of single-modality control by combining complementary signals to improve command reliability and responsiveness.

4. Discussion

This study introduced a multi-modal EEG–fusion neurointerface wheelchair system that integrates motor imagery (MI), blink detection, and attention-level analysis for precise and adaptive control [27]. The experimental results demonstrated reliable decoding accuracy and rapid response under controlled indoor conditions. Nevertheless, several aspects require further investigation before real-world deployment.
The current tests focused on validating predefined commands—forward, backward, left-turn, and right-turn—and demonstrated consistently low path deviation and short response time, confirming operational robustness in structured environments. Real-world experiments were conducted with a small group of healthy university students. With a short instructional session and demonstration, participants typically mastered the system within approximately 30 min, reflecting a low learning burden. A graphical user interface (GUI) was implemented to further improve ease of use and command feedback. However, no elderly or mobility-impaired users—our primary target population—have been tested yet. Future studies will prioritize this demographic to assess usability, comfort, calibration effort, and learning adaptation, as well as to evaluate recalibration efficiency during operator switching [28].
To improve real-world applicability, we will introduce more challenging environments such as dynamic obstacle avoidance and complex terrains. Preliminary tests using an auxiliary gyroscope module show that the wheelchair can automatically compensate motor torque while ascending slopes and reduce driving force when descending. However, multi-level mobility challenges—such as interaction with elevators or escalators—remain unsolved. We plan to incorporate environmental perception modules (e.g., ultrasonic/LiDAR sensors) and corresponding safety mechanisms to support advanced navigation and enhance adaptability in unstructured environments.
Although multi-modal EEG–fusion enhances perceptual capability, the architecture was designed to avoid unnecessary complexity. Each signal channel operates independently through lightweight decoding (e.g., threshold-based blink detection and low-dimensional SVM inference), minimizing computational load and inter-modality interference. Even if a single modality fails, the control pipeline can continue functioning through rapid fault isolation, maintaining robustness and usability.
Although reducing hardware cost is a long-term goal, this is never pursued at the expense of system reliability. Cost optimization was achieved through streamlined electrodes (eight-channel configuration), lightweight preprocessing, and modularized embedded hardware—while sustaining essential performance (MI decoding accuracy > 75% across 15 trials; path deviation < 0.25 m). Safety-critical measures such as dual-threshold emergency blink stopping and concentration-adaptive speed control provide full redundancy. Core components (e.g., STM32G0 MCU, Infineon motor drivers) meet assistive equipment reliability standards. We therefore argue that cost efficiency and operational robustness are complementary goals toward practical adoption and wider accessibility.
In addition, factors affecting long-term EEG usability, such as sweating-induced signal degradation and electrode discomfort, were not the primary focus of this study. Future work will investigate ergonomic electrode designs, monitor long-term impedance changes, and explore in-ear EEG acquisition to improve comfort, stability, and social acceptability during prolonged daily operation. Finally, the system’s modular and edge-computing architecture provides compatibility with diverse medical sensing devices already integrated into commercial wheelchairs. With communication-protocol adaptation and regulatory-compliant data handling, this approach supports clinical translation and integration into next-generation smart mobility platforms [29].
Overall, these planned advancements will enhance the usability, adaptability, and long-term deployment potential of the proposed system, paving the way for a reliable and practical brain-controlled wheelchair in everyday life.

5. Conclusions

In conclusion, this study developed and validated a multi-modal EEG–fusion neurointerface wheelchair control system that integrates motor imagery, blink detection, and attention-level signals for precise and adaptive navigation. The system demonstrated high accuracy, low latency, long-term stability, and lower hardware cost compared with traditional single-modality BCIs. These performance improvements benefited from systematic enhancements in EEG channel selection (from a 64-channel cap to eight key electrodes), signal preprocessing, feature extraction, and machine learning optimization.
Although only eight electrodes were ultimately required for real-time decoding, a 64-channel Smarting cap was initially employed to provide high spatial resolution during system design and training [30]. This enabled comprehensive cortical coverage and a rigorous search for the most informative MI-related channels. Compared with consumer-grade EEG headsets, high-density caps ensure stronger electrode–skin contact, superior signal fidelity, and greater flexibility for algorithm development. The validated eight-channel configuration will support future deployment using compact and wearable devices—an ongoing research effort aiming to enhance usability, comfort, and social acceptability in everyday use [31].
Despite these advances, challenges remain, particularly the lower decoding accuracy of lateral MI commands and the limited coverage of real-world environments. Future work will expand participant diversity, strengthen field validation, and integrate additional perception modules to support autonomous interaction with complex indoor and outdoor environments. The long-term vision is that users provide only high-level commands while the wheelchair performs safe and accurate navigation automatically. This goal will be pursued through adaptive and transfer-learning techniques, deeper spatiotemporal feature representations, larger multi-modal datasets, and comprehensive clinical evaluations [32].
Overall, this study highlights the strong potential of multi-modal EEG-based BCIs to enhance mobility, independence, and quality of life for individuals with physical disabilities, establishing a solid foundation for practical translation and future clinical applications.

Author Contributions

Conceptualization, R.A. and Y.Z.; methodology, Y.Z. and R.A.; software, H.C.; validation, R.A. and Y.Z.; formal analysis, R.A. and Y.Z.; investigation, R.A. and X.X.; resources, R.A. and X.X.; data curation, R.A. and Y.Z.; writing—original draft preparation, Y.Z.; writing—review and editing, H.C.; visualization, Y.Z. and H.C.; supervision, R.A. and X.X.; project administration, R.A. and X.X.; funding acquisition, R.A. and X.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of Nanjing Medical University Affiliated Brain Hospital on 3 March 2024. Informed consent for participation was obtained from all subjects involved in the study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Palumbo, A.; Gramigna, V.; Calabrese, B.; Ielpo, N. Motor-Imagery EEG-Based BCIs in Wheelchair Movement and Control: A Systematic Literature Review. Sensors 2021, 21, 6285. [Google Scholar] [CrossRef]
  2. Al-Qaysi, Z.T.; Zaidan, B.B.; Zaidan, A.A.; Suzani, M.S. A Review of Disability EEG-Based Wheelchair Control Systems: Taxonomy, Challenges and Recommendations. Comput. Methods Programs Biomed. 2018, 164, 221–237. [Google Scholar] [CrossRef]
  3. Siribunyaphat, N.; Punsawad, Y. Brain–Computer Interface Based on Steady-State Visual Evoked Potential Using Quick-Response Code Pattern for Wheelchair Control. Sensors 2023, 23, 2069. [Google Scholar] [CrossRef]
  4. Liu, L.; Li, J.; Ouyang, R.; Liang, W.; Li, F.; Lv, Z.; Wu, X. Multimodal Brain-Controlled System for Rehabilitation Training: Combining Asynchronous Online Brain–Computer Interface and Exoskeleton. J. Neurosci. Methods 2024, 406, 110132. [Google Scholar] [CrossRef] [PubMed]
  5. Ghasemi, S.; Gračanin, D.; Azab, M. Empowering Mobility: Brain–Computer Interface for Enhancing Wheelchair Control for Individuals with Physical Disabilities. arXiv 2024, arXiv:2404.17895. [Google Scholar]
  6. Rashid, M.; Sulaiman, N.; Majeed, A.P.P.A.; Musa, R.M.; Nasir, A.F.A.; Bari, B.S.; Khatun, S. Current Status, Challenges, and Possible Solutions of EEG-Based Brain-Computer Interface: A Comprehensive Review. Front. Neurorobot. 2020, 14, 25. [Google Scholar] [CrossRef] [PubMed]
  7. Halford, J.J.; Sabau, D.; Drislane, F.W.; Tsuchida, T.N.; Sinha, S.R. American Clinical Neurophysiology Society Guideline 4: Recording Clinical EEG on Digital Media. Neurodiagn. J. 2016, 56, 261–265. [Google Scholar] [CrossRef]
  8. NVIDIA Corporation. NVIDIA Jetson Nano Developer Kit: Technical Specifications. Available online: https://developer.nvidia.com/embedded/jetson-nano (accessed on 26 October 2025).
  9. Canilang, H.M.; Caliwag, E.M.F.; Njoku, J.; Caliwag, A.; Lim, W. Edge EEG: Edge AI Device-based EEG Signal Processing for Emotion Recognition. Available online: https://api.semanticscholar.org/CorpusID:244950821 (accessed on 10 October 2025).
  10. Bell, A.J.; Sejnowski, T.J. An Information-Maximization Approach to Blind Separation and Blind Deconvolution. Neural Comput. 1995, 7, 1129–1159. [Google Scholar] [CrossRef]
  11. Saibene, A.; Ghaemi, H.; Dagdevir, E. Deep learning in motor imagery EEG signal decoding: A Systematic Review. Neurocomputing 2024, 610, 128577. [Google Scholar] [CrossRef]
  12. Liao, W.; Liu, H.; Wang, W. Advancing BCI with a transformer-based model for motor imagery classification. Sci. Rep. 2025, 15, 23380. [Google Scholar] [CrossRef]
  13. An, Y.; Mitchell, D.; Lathrop, J.; Flynn, D.; Chung, S.-J. Motor Imagery Teleoperation of a Mobile Robot Using a Low-Cost Brain–Computer Interface for Multi-Day Validation. arXiv 2024, arXiv:2412.08971. [Google Scholar] [CrossRef]
  14. Li, X.; Wang, D.; Zhang, B.; Fan, C.; Chen, J.; Xu, M.; Chen, Y. A Review on electroencephalogram Based Channel Selection. J. Biomed. Eng. 2024, 41, 398–403. [Google Scholar] [CrossRef]
  15. Nyström, M.; Andersson, R.; Holmqvist, K. What Is a Blink? Classifying and Characterizing Blinks in Eye Openness Signals. Behav. Res. Methods 2024, 56, 3280–3299. [Google Scholar] [CrossRef] [PubMed]
  16. Ning, B.; Li, M.; Liu, T.; Shen, H.; Hu, L.; Fu, X. Human Brain Control of Electric Wheelchair with Eye-Blink Electrooculogram Signal. In Intelligent Robotics and Applications, ICIRA 2012; Lecture Notes in Computer Science; Su, C.-Y., Baltes, J., Liu, H., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7506, pp. 579–588. [Google Scholar] [CrossRef]
  17. Wang, M.; Cui, X.; Wang, T.; Jiang, T.; Gao, F.; Cao, J. Eye blink artifact detection based on multi-dimensional EEG feature fusion and optimization. Biomed. Signal Process. Control 2023, 83, 104657. [Google Scholar] [CrossRef]
  18. De Cheveigné, A.; Arzounian, D. Robust detrending, rereferencing, outlier detection, and inpainting for multichannel data. NeuroImage 2018, 172, 903–912. [Google Scholar] [CrossRef]
  19. Jung, T.-P.; Makeig, S.; Humphries, C.; Lee, T.-W.; McKeown, M.J.; Iragui, V.; Sejnowski, T.J. Removing electroencephalographic artifacts by blind source separation. Psychophysiology 2000, 37, 163–178. [Google Scholar] [CrossRef]
  20. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  21. Kaushik, P.; Moye, A.; van Vugt, M.; Roy, P.P. Decoding the cognitive states of attention and distraction in a real-life setting using EEG. Sci. Rep. 2022, 12, 20649. [Google Scholar] [CrossRef]
  22. Schölkopf, B.; Smola, A.J. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond; MIT Press: Cambridge, MA, USA, 2002. [Google Scholar]
  23. Hsu, C.W.; Chang, C.C.; Lin, C.J. A Practical Guide to Support Vector Classification; Department of Computer Science, National Taiwan University: Taiwan, China, 2010; Available online: https://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf (accessed on 10 November 2025).
  24. Wolpaw, J.R.; Birbaumer, N.; McFarland, D.J.; Pfurtscheller, G.; Vaughan, T.M. Brain-Computer Interfaces for Communication and Control. Clin. Neurophysiol. 2002, 113, 767–791. [Google Scholar] [CrossRef]
  25. Lotte, F.; Congedo, M.; Lécuyer, A.; Lamarche, F.; Arnaldi, B. A review of classification algorithms for EEG-based brain-computer interfaces. J. Neural Eng. 2007, 4, R01. [Google Scholar] [CrossRef]
  26. Rebsamen, B.; Burdet, E.; Guan, C.; Zhang, H.; Teo, C.L.; Zeng, Q.; Laugier, C.; Ang, M.H. Controlling a Wheelchair Indoors Using Thought. IEEE Intell. Syst. 2007, 22, 18–24. [Google Scholar] [CrossRef]
  27. Iturrate, I.; Antelis, J.M.; Kubler, A.; Minguez, J. A Noninvasive Brain-Actuated Wheelchair Based on a P300 Neurophysiological Protocol and Automated Navigation. IEEE Trans. Robot. 2009, 25, 614–627. [Google Scholar] [CrossRef]
  28. Wolpaw, J.R.; McFarland, D.J. Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans. Proc. Natl. Acad. Sci. USA 2004, 101, 17849–17854. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  29. Habibzadeh Tonekabony Shad, E.; Molinas, M.; Ytterdal, T. Impedance and Noise of Passive and Active Dry EEG Electrodes: A Review. IEEE Sens. J. 2020, 20, 14565–14577. [Google Scholar] [CrossRef]
  30. Pfurtscheller, G.; Lopes da Silva, F.H. Event-related EEG/MEG synchronization and desynchronization: Basic principles. Clin. Neurophysiol 1999, 110, 1842–1857. [Google Scholar] [CrossRef] [PubMed]
  31. Casson, A.J. Wearable EEG and beyond. Biomed. Eng. Lett. 2019, 9, 53–71. [Google Scholar] [CrossRef] [PubMed]
  32. Sayel, N.A.; Sabbar, B.M.; Albermany, S. Real Time Control System for Wheel Chair of Disabled People Using EEG Signal. In Proceedings of the 2022 4th International Conference on Advanced Science and Engineering (ICOASE), Zakho, Iraq, 21–22 September 2022; pp. 71–76. [Google Scholar] [CrossRef]
Figure 1. Brain-controlled wheelchair control principle.
Figure 2. Integrated hardware and brain-controlled wheelchair control architecture.
Figure 3. Electrode placement based on the International 10–20 system for EEG recording.
Figure 4. Hardware design of the control system: (a) circuit schematic; (b) PCB implementation.
Figure 5. Blink signal waveform with detected blinks and thresholds.
Figure 6. Data preprocessing and filtering.
Figure 7. EEG signals before and after preprocessing for MI.
Figure 8. Time and frequency domain representation of EEG signals from channels C3 and C4 during a MI task.
Figure 9. Schematic diagram of wavelet packet decomposition.
Figure 10. Illustration of feature vector F.
Figure 11. Heatmap of classification accuracy for different random forest hyperparameter combinations during grid search.
Figure 12. Trajectories of three participants during the wheelchair path-following task compared to the reference path. The yellow dashed line represents the predefined reference path with measurement points, while pink circles (Trail 1), green diamonds (Trail 2), and blue squares (Trail 3) show the actual trajectories of the three participants.
Table 1. Concentration grading.
Task | Concentration Type | Content
Task 1 | High | Browsing and mental arithmetic
Task 2 | Medium | Browse text materials
Task 3 | Low | Keep your eyes on the text and think about things unrelated
Task 4 | Non-externally directed | Try to relax and think nothing
Table 2. Final hyperparameters used in the random forest model.
Parameter | Description | Value
D_max | Maximum depth of each decision tree | 20
F_max | Maximum number of features considered at each node | d
L_min | Minimum number of samples required for a leaf node | 1
S_min | Minimum number of samples required for a split | 5
N | Number of decision trees | 50
R | Random seed for reproducibility | 42
Table 3. Predicted results for MI classification over 15 trials.
Trial | Up | Down | Left | Right | Total
1 | 0.82 | 0.87 | 0.80 | 0.76 | 0.81
2 | 0.79 | 0.81 | 0.75 | 0.68 | 0.75
3 | 0.94 | 1.00 | 0.88 | 0.82 | 0.91
4 | 0.83 | 0.79 | 0.61 | 0.67 | 0.71
5 | 1.00 | 0.81 | 0.79 | 0.76 | 0.84
6 | 0.83 | 0.87 | 0.71 | 0.62 | 0.74
7 | 0.80 | 0.93 | 0.63 | 0.75 | 0.76
8 | 0.92 | 0.86 | 0.72 | 0.70 | 0.78
9 | 1.00 | 0.94 | 0.82 | 0.84 | 0.90
10 | 0.75 | 0.72 | 0.73 | 0.84 | 0.76
11 | 0.85 | 0.92 | 0.76 | 0.76 | 0.82
12 | 0.91 | 0.80 | 0.74 | 0.70 | 0.76
13 | 0.83 | 0.93 | 0.64 | 0.83 | 0.84
14 | 0.71 | 0.93 | 0.93 | 0.64 | 0.78
15 | 0.65 | 0.80 | 0.71 | 0.84 | 0.75
Table 4. Performance of subjects in trajectory control task.
Subject | Command Accuracy (%) | Time (s) | Driving Distance (m) | Avg. Deviation (m)
S1 | 98.3 (118/120) | 435 | 55.6 | 0.29
S2 | 88.2 (82/93) | 504 | 54.1 | 0.58
S3 | 98.6 (145/147) | 484 | 55.0 | 0.20
Mean | 95.0 | 474 | 54.9 | 0.36