Developing a Motor Imagery-Based Real-Time Asynchronous Hybrid BCI Controller for a Lower-Limb Exoskeleton

This study aimed to develop an intuitive gait-related motor imagery (MI)-based hybrid brain-computer interface (BCI) controller for a lower-limb exoskeleton and to investigate the feasibility of the controller under a practical scenario including stand-up, gait-forward, and sit-down. A filter bank common spatial pattern (FBCSP) and mutual information-based best individual feature (MIBIF) selection were used to decode MI electroencephalogram (EEG) signals and extract a feature matrix as input to a support vector machine (SVM) classifier. A successive eye-blink switch was sequentially combined with the EEG decoder to operate the lower-limb exoskeleton. Ten subjects demonstrated more than 80% accuracy in both offline (training) and online sessions. All subjects successfully completed a gait task wearing the lower-limb exoskeleton through the developed real-time BCI controller. The BCI controller achieved a time ratio of 1.45 compared with a manual smartwatch controller. The developed system can potentially benefit people with neurological disorders who may have difficulty operating manual controls.


Introduction
Brain-computer interface (BCI) technology benefits people suffering from neurological disorders because it enables various computer-controlled applications driven by brain signals [1,2]. The recent development of lower-limb exoskeletons is significant, as they effectively bridge brain signals and the motor output of the extremities to improve the quality of life of people with gait disabilities [3][4][5]. Among the various electroencephalogram (EEG) neural features, three distinguishable ones have notably been adopted for decoding lower-limb movement intentions: movement-related cortical potential (MRCP), steady-state visual evoked potential (SSVEP), and event-related desynchronization (ERD). However, utilizing the MRCP for exoskeleton control requires the BCI system to discern a movement onset time [6]. In the case of the SSVEP [7], subjects have to focus continuously on a flickering light until the evoked potential exceeds a threshold, which makes it difficult for exoskeleton users to deal with unexpected external situations. The ERD is another representative EEG neural feature for an exoskeleton BCI controller, usually induced by motor imagery (MI). An asynchronous MI-induced ERD carries both spectral and spatial features; hence, a BCI controller can match various commands to distinctive MI strategies with separable scalp topographic patterns [8].
In the very beginning, the DARPA program attempted to move prosthetics based on sensorimotor signals of cortical activity [9,10]. Additionally, the former EU project MINDWALKER developed a lower-limb exoskeleton for clinical use with EEG and various biological and kinematic control signals through advanced algorithms [11,12]. These studies adopted MRCP, SSVEP, and evoked potential (EP) to control robotic devices. Lately, several research groups have reported tenable results in operating an overground lower-limb exoskeleton with MI-based BCIs [13][14][15][16]. Gordleeva et al. developed an exoskeleton control system utilizing three MI tasks (left hand MI, right hand MI, and rest) and captured the ERD of sensorimotor rhythms (SMR) for 14 subjects [13]. Lee et al. captured EEG power spectral density during hand MI and rest and performed exoskeleton-mounted navigation tasks with five subjects [14]. Wang et al. compared an SSVEP- and an MI-based BCI controller for moving a lower-limb exoskeleton with four subjects and revealed that both controllers achieved about 80% accuracy [15]. Yu et al. developed an MI-based ERD decoder that could control the walking speed of a rehabilitation exoskeleton on a treadmill [16]. However, the aforementioned studies still adopted left and right (or both) hand MI to generate the command output for controlling the lower-limb exoskeleton. To our knowledge, only a few studies have induced gait-related MI [17][18][19]. Do et al. adopted a kinesthetic MI (KMI), originally used to refine motor skills in sports science and cognitive neurophysiology [17]. Lopez et al. framed it as a motor attempt to move the subjects' right leg as if they had started walking [18]. Donati et al. trained spinal cord injury (SCI) patients with kick imagery during a rehabilitation program [19]. Notably, these MI protocols focused on fragments of gait motions, presenting a limited correlation between the imagery and the execution, and utilized only neural mechanisms discriminative at a cortical level. Therefore, MIs for operating an overground lower-limb exoskeleton throughout an entire 'sit-to-sit' scenario should be more intuitive and associated with stand-up, gait-forward, and sit-down, which may reduce cognitive load and increase decoding accuracy [20].
A real-life MI-based BCI controller for a lower-limb exoskeleton should maintain a low false activation rate to ensure the reliability of the control system. A 'brain switch' is a representative concept necessary for an asynchronous BCI to determine whether an ongoing continuous EEG signal implies the user's intention or not [21][22][23][24][25]. Pfurtscheller et al. demonstrated that an on/off switch utilizing the foot-MI-induced beta event-related synchronization (ERS) rebound, measured from a single vertex channel, prevents false activation of an SSVEP interface [26]. Yu et al. extracted subjects' voluntary successive eye-blink signals from the ongoing EEG of two prefrontal channels to activate/deactivate a P300-based speller [24]. Notably, Ortiz et al. recently introduced an attention-level monitor, parallel with an MI gamma-band SMR, which detects the presence or absence of a subject's MI intention [25]. Based on previous research, this study monitored the EEG artifact of the electrooculogram (EOG) signal to extract a user's intentional triple eye-blink (TEB) and thereby turn the MI decoder on and off, following the concept of a sequentially processed hybrid BCI that improves the reliability of the control system [27].
Thus, in this study, we developed an MI-based BCI controller for a lower-limb exoskeleton to perform stand-up, gait, and sit-down, sequentially combined with an eye-blink switch for a real-life scenario. The feasibility of the developed BCI exoskeleton system was tested with ten healthy subjects to explore its potential application to people with neurological impairments. This study mainly aimed to reduce the discrepancy between the MI manner and the motor output of the mounted exoskeleton. To accomplish this, we designed intuitive MI protocols that correspond with the lower-limb exoskeleton operation.

System Overview
The developed MI-based BCI exoskeleton control system consists of three parts, namely data acquisition, EEG signal processing, and exoskeleton control (Figure 1). While the subject performs MI tasks (i.e., the kinesthetic feeling of gait and sit), a signal processing algorithm extracts features and trains the offline classifier. A decoded control command is sent to the exoskeleton via a real-time online control interface. We employed a lower-limb exoskeleton robot (RoboWear P10, NT Robot, Seoul, Korea) to integrate the developed BCI controller. The exoskeleton robot was primarily designed to assist people with SCI gait impairments (Class III Medical Device Certification, Ministry of Food and Drug Safety of Korea) to stand-up, sit-down, and gait-forward with two crutches on both hands [28].
Ten healthy subjects (age: 26.6 ± 3.06 years) with no history of neurological disorders participated in this study. The subjects were all male and right-handed. All subjects gave written informed consent, which was approved by the Institutional Review Board of Korea Institute of Science and Technology (KIST IRB number 2019-032). Eight out of 10 subjects had no prior experience in BCI or wearing a powered gait assistive device. We allowed the subjects a one-hour adaptation period to familiarize themselves with operating the wearable exoskeleton.

MI Protocol
To minimize external interference, the MIs were performed in an isolated room. The subjects stood with their hands on crutches, without wearing the lower-limb exoskeleton, facing a monitor that displayed the MI procedures (Figure 1). The subjects pressed a hand-held button attached to the crutch when they were ready to begin each trial. Following the notification of a beep sound, the monitor displayed a gray fixation cross and, after 3-5 s, randomly presented a symbol ('upward arrow,' 'downward arrow,' or 'box'), which denoted 'Gait MI,' 'Sit MI,' or 'Do-nothing,' respectively. Once the subjects identified the cue, they started the corresponding MI ('Gait' or 'Sit') for 8 s or 'Do-nothing' for 4 s. When the subjects heard a second beep sound, they stopped the task and prepared for the subsequent trial. Figure 2 shows the MI procedure.
Each subject executed two types of MI tasks ('Gait' and 'Sit') along with a 'Do-nothing' task. In the 'Do-nothing' task, we let subjects rest with their eyes open without performing MI or other mental tasks. During the MI tasks, the subjects were instructed to perform a mental rehearsal of gait or sit: the limbs were to remain still while they focused on the kinesthetic feelings, including the somatosensory sensation and the experience of motor execution with the exoskeleton. Furthermore, we forbade subjects from visualizing themselves from the viewpoint of an external observer, to limit stimulation of the visual cortex. The detailed instructions are listed in Table 1.

Table 1. Detail of motor imagery (MI) instructions.

Operator's Instructions
Before MI:
"Be familiar with the consistent locomotion of the robot trajectory with your pair of crutches."
"While practicing 'sit', please pay attention to your upper-limb movement, which plays an important role in lowering the body down to the chair with the exoskeleton."

During MI:
"Pay attention to the kinesthetic sensation just before your limb is about to execute the movement."
"Do the mental rehearsal in a slow movement phase, for example, heel strike, weight shift, and toe-off."
"We also recommend you perceive the input sensation of the foot sole and hand grip."
"For 'Do-nothing', please ignore the somatosensory or visual input sensation; rather, stay with unfocused eyes and an absent mind."

Prohibited:
"Do not picture the scene of observing yourself or another person's movement execution."

The offline MI procedure consisted of 90 randomly mixed trials, constituting 30 repetitions of each of three tasks: Gait MI, Sit MI, and Do-nothing. The whole process was organized and presented on the monitor by managing software (E-Prime 3, Psychology Software Tools, Sharpsburg, PA, USA) with an event marking module (BBTK USB TTL, The Black Box ToolKit Ltd., Sheffield, UK).

EEG Signal Processing
EEG signal processing was conducted using MATLAB software (2017a, MathWorks, Natick, MA, USA), which received data through a TCP/IP connection from a Remote Data Access host (Recorder, BrainProducts, Gilching, Germany). The offline MI data features were extracted through a filter bank common spatial pattern (FBCSP) algorithm. Through a mutual information-based best individual feature (MIBIF) selection method, we sorted the contributing features as training input to a linear support vector machine (SVM) classifier.

Feature Extraction
Since we focused on the gait-related SMR feature, we monitored the ERD from the low mu to the high beta EEG frequency bands. EEG signals were passed through zero-phase Butterworth infinite impulse response (IIR) bandpass filters covering the high theta to low gamma frequencies. The signals were divided into six ranges (filter bank: 7-9, 10-12, 13-15, 16-20, 21-25, and 26-34 Hz), considering the subject-dependent dominant frequency features. The six bandpass-filtered EEG signals were then used to derive six different CSP transformation matrices.
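The filter bank described above can be sketched with SciPy as follows; this is a minimal sketch, and the Butterworth filter order is an assumption (the text does not state it):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500  # sampling rate (Hz)
BANDS = [(7, 9), (10, 12), (13, 15), (16, 20), (21, 25), (26, 34)]  # the six sub-bands

def filter_bank(eeg, fs=FS, order=4):
    """Zero-phase Butterworth bandpass for each sub-band.

    eeg: (n_channels, n_samples) -> returns (n_bands, n_channels, n_samples).
    The filter order is an assumption; the paper does not specify it."""
    out = []
    for low, high in BANDS:
        b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="bandpass")
        out.append(filtfilt(b, a, eeg, axis=-1))  # forward-backward -> zero phase
    return np.stack(out)

x = np.random.randn(31, 2 * FS)  # one 2 s window of 31-channel EEG
print(filter_bank(x).shape)      # (6, 31, 1000)
```

Each of the six filtered copies then feeds its own CSP transformation, as described next.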
The single-trial EEG input signal matrix E (of size N × T, where N is the number of channels and T the number of samples in time per channel) is linearly transformed by the projection matrix W. The spatially filtered signal Z is given as

Z = W E.

We chose the first and last two rows of the signal Z, which differentiate the two classes the most [29]. Therefore, the modified transformation matrix has four rows for each of the six frequency bands and channel columns (24 × 31). Finally, the variance-difference-maximized EEG signals were log-normalized [30].
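Under the standard CSP formulation, computing W and the log-variance features can be sketched as below; the generalized eigenvalue solver and trace normalization are common implementation choices, not details confirmed by the text:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """CSP projection from two classes of band-filtered trials.

    trials_*: (n_trials, n_channels, n_samples). Returns W of shape
    (2*n_pairs, n_channels): the spatial filters whose output variance
    differs most between the classes (first and last eigenvectors)."""
    def mean_cov(trials):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    _, vecs = eigh(Ca, Ca + Cb)   # generalized eigenproblem, ascending eigenvalues
    W = vecs.T                    # rows = spatial filters
    return np.vstack([W[:n_pairs], W[-n_pairs:]])

def log_var_features(trial, W):
    """Project one trial (Z = W @ E) and take the normalized log-variance per row."""
    var = np.var(W @ trial, axis=1)
    return np.log(var / var.sum())

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 31, 500))  # synthetic class-A trials
B = rng.standard_normal((30, 31, 500))  # synthetic class-B trials
W = csp_filters(A, B)
print(W.shape, log_var_features(A[0], W).shape)  # (4, 31) (4,)
```

Repeating this per filter-bank band yields the 6 × 4 = 24 features mentioned below.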

Feature Selection
The 24 features were then sorted in descending order following the MIBIF method [30], which determined the priority of each feature's contribution to differentiating the two classes. The mutual information of two random variables is defined as

I(X; Y) = Σ_x Σ_y p(x, y) log2 ( p(x, y) / ( p(x) p(y) ) ),

where p(x, y) is the joint probability mass function of X and Y, and p(x) and p(y) are the marginal probability mass functions of X and Y, respectively. Here, X is each of the 24 features, and Y is the corresponding classifier label, Y ∈ {Gait MI vs. Do-nothing} or {Gait MI vs. Sit MI}. The first k features were empirically selected for each subject (k = 4-10). Finally, the resulting feature matrix was adopted for training the linear SVM classifier.
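The MIBIF ranking can be sketched as follows; a histogram-based mutual information estimator stands in for the Parzen-window estimate of the original method, and the synthetic data are purely illustrative:

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """I(X;Y) in bits for a continuous feature x and discrete labels y.

    A simple histogram estimator stands in for the Parzen-window
    estimate used in the original MIBIF formulation."""
    x_d = np.digitize(x, np.histogram_bin_edges(x, bins=bins))
    mi = 0.0
    for xv in np.unique(x_d):
        for yv in np.unique(y):
            pxy = np.mean((x_d == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * np.log2(pxy / (np.mean(x_d == xv) * np.mean(y == yv)))
    return mi

def mibif_select(features, labels, k=6):
    """Rank features by I(X;Y) and keep the top k (k = 4-10 per subject)."""
    scores = [mutual_information(features[:, j], labels) for j in range(features.shape[1])]
    return np.argsort(scores)[::-1][:k]

# Synthetic check: feature 0 carries the label, the other 23 are noise
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 200)
features = rng.standard_normal((200, 24))
features[:, 0] += 2.0 * labels
print(mibif_select(features, labels, k=4)[0])  # 0 (the informative feature)
```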

Real-Time Decoder
The online and offline decoders shared the same signal processing steps. The real-time EEG input was sent to the online decoder in packets of 31 channels by 10 data points (500 Hz sampling rate). The decoding algorithm ran every 250 data points (window shift). The pre-trained linear SVM classifier output a single control command every 0.5 s, with a signal processing window size of 2 s. The control interface then received the commands to control the exoskeleton.
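The packet-and-window timing described above can be sketched as follows; `classify` is a hypothetical stand-in for the trained FBCSP + SVM pipeline:

```python
import numpy as np
from collections import deque

FS = 500
WINDOW = 2 * FS   # 2 s processing window = 1000 samples
SHIFT = 250       # decode every 250 new samples (0.5 s)

class OnlineDecoder:
    """Buffer 10-sample packets and emit one command per 0.5 s of new data.

    `classify` is a hypothetical callback standing in for the trained
    FBCSP + SVM pipeline; it receives the (n_channels, WINDOW) array."""

    def __init__(self, n_channels, classify):
        self.buf = deque(maxlen=WINDOW)  # ring buffer of samples (one row per time step)
        self.classify = classify
        self.new = 0

    def push_packet(self, packet):
        """packet: (n_channels, 10); returns a command string or None."""
        for i in range(packet.shape[1]):
            self.buf.append(packet[:, i])
        self.new += packet.shape[1]
        if len(self.buf) == WINDOW and self.new >= SHIFT:
            self.new = 0
            return self.classify(np.array(self.buf).T)
        return None

dec = OnlineDecoder(31, classify=lambda w: "GAIT")
cmds = [dec.push_packet(np.zeros((31, 10))) for _ in range(125)]  # 2.5 s of packets
print(sum(c is not None for c in cmds))  # 2 commands (at 2.0 s and 2.5 s)
```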

BCI Controller
To describe the online system logic flow, we illustrate the control interface as a finite state machine (FSM) (Figure 3). For safety purposes, the system must be started and terminated from the sit state. The state transitions are represented by arrows corresponding to the triggering methods (MI, Do-nothing, or TEB). We designed two binary classifiers: in the 'Decoder On (GvN)' state, 'classifier_GvN' decodes Gait MI vs. Do-nothing EEG signals, and in the 'Decoder On (GvS)' state, 'classifier_GvS' separates Gait MI vs. Sit MI.
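The FSM can be sketched as a transition table; the exact arrows of Figure 3 are not reproduced here, so the events and states below are a plausible simplification inferred from the text:

```python
# A simplified transition table inferred from the text; the exact arrows of
# Figure 3 may differ. State and event names are illustrative.
TRANSITIONS = {
    ("SIT", "TEB"): "DECODER_GvN",         # eye-blink switch turns the decoder on
    ("DECODER_GvN", "GAIT_MI"): "STAND",   # filled Sit-to-Stand buffer -> stand-up
    ("STAND", "TEB"): "DECODER_GvS",
    ("DECODER_GvS", "GAIT_MI"): "GAIT",    # filled Stand-to-Gait buffer
    ("GAIT", "TEB"): "STAND",              # stop gait
    ("DECODER_GvS", "SIT_MI"): "SIT",      # filled Stand-to-Sit buffer -> sit-down
}

def step(state, event):
    """Advance the FSM; undefined (state, event) pairs leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "SIT"  # the system must start (and end) in the sit state
for event in ["TEB", "GAIT_MI", "TEB", "GAIT_MI", "TEB", "TEB", "SIT_MI"]:
    state = step(state, event)
print(state)  # "SIT" after stand-up -> gait -> stop -> sit-down
```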

Triple Eye Blink
We utilized the TEB (online test of 97 trials: detection rate 94.7%; online 40.5-min test: FPR of 0.025 times/min; n = 1) to activate and terminate the decoder. Notably, a blinking artifact strongly influences the two prefrontal channels among the electrode locations adopted in this study. For both the FP1 and FP2 electrodes, a 2-15 Hz IIR bandpass filter was applied to suppress signals unrelated to eyelid movement. Subsequently, a biorthogonal wavelet function was adopted to efficiently enhance the eye-blink pulse. Finally, we counted the wave peaks exceeding a predefined threshold, separating the TEB from ordinary occasional single or double eye-blinks. The window size for TEB detection was 1.6 s with a window shift of 0.4 s [31].
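The TEB detection chain can be approximated as below; this is a simplified sketch in which the biorthogonal-wavelet enhancement is replaced by thresholded peak counting on the bandpassed signal, and the threshold value is illustrative, not the study's:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 500
WIN = int(1.6 * FS)  # 1.6 s detection window (0.4 s shift in the real system)

def detect_teb(fp_signal, threshold=3.0, fs=FS):
    """Detect a triple eye-blink in one prefrontal window (FP1/FP2).

    Simplification: the biorthogonal-wavelet enhancement is replaced by a
    2-15 Hz zero-phase bandpass plus thresholded peak counting; the
    threshold value here is illustrative, not the study's."""
    b, a = butter(4, [2 / (fs / 2), 15 / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, fp_signal)
    peaks, _ = find_peaks(filtered, height=threshold, distance=int(0.15 * fs))
    return len(peaks) >= 3  # three suprathreshold peaks -> intentional TEB

# Synthetic check: three blink-like bumps ~0.4 s apart
t = np.arange(WIN) / FS
blinks = sum(8.0 * np.exp(-((t - c) ** 2) / (2 * 0.03**2)) for c in (0.3, 0.7, 1.1))
print(detect_teb(blinks))  # True
```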

MI Buffer and Visual Feedback
We adopted command stack buffers to minimize the potential safety risk of a single false detection of movement intention, as shown in Figure 4. There were three buffers (Sit-to-Stand, Stand-to-Gait, and Stand-to-Sit), each of size 10, requiring subjects to stay engaged in the MI tasks in sync with the exoskeleton movement. First, in the 'Decoder On (GvN)' state, the robot stands up only when repeated correct Gait MI commands completely fill the Sit-to-Stand buffer, while Do-nothing commands empty the stacked buffer. Second, in the 'Decoder On (GvS)' state, while Gait MI commands fill the Stand-to-Gait buffer, the Stand-to-Sit buffer empties at the same time, and vice versa. The fill/empty ratio of the buffer was set to 1:3 to provide reliable state transitions by balancing correct and false classifications [32].
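The buffer logic can be sketched as follows; we assume the 1:3 fill/empty ratio means one unit added per matching command and three removed per opposing command, which is our reading of the text rather than a stated implementation detail:

```python
class CommandBuffer:
    """Command stack buffer gating one state transition.

    Assumption: the 1:3 fill/empty ratio means a matching decoder command
    adds one unit and an opposing command removes three units; the
    transition fires only when the buffer (size 10) is completely full."""

    def __init__(self, size=10, fill=1, empty=3):
        self.size, self.fill_step, self.empty_step = size, fill, empty
        self.level = 0

    def update(self, matches):
        """Feed one 0.5 s decoder output; True means trigger the transition."""
        if matches:
            self.level = min(self.size, self.level + self.fill_step)
        else:
            self.level = max(0, self.level - self.empty_step)
        return self.level == self.size

buf = CommandBuffer()
stream = ["gait"] * 6 + ["rest"] + ["gait"] * 7   # one misclassification at step 6
fired = [buf.update(c == "gait") for c in stream]
print(fired.index(True))  # 13: the transition fires on the 14th decoder output
```

Note how the single misclassification costs three fill steps, which is exactly the conservatism the 1:3 ratio buys.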


Controller Performance
The online BCI controller was compared with a ready-made smartwatch controller on a predefined 10 m gait scenario to evaluate the feasibility of the developed exoskeleton BCI controller (Figure 5). All subjects executed stand-up, a first 5 m gait and stop, a second 5 m gait and stop, and finally sit-down. A wearable smartwatch (Galaxy Gear series 1, Samsung, Suwon, Korea) and its application were provided to control the exoskeleton (Figure 6). Three control commands ('stand-up/gait-stop', 'gait', and 'sit-down') were transmitted through Bluetooth wireless communication to the exoskeleton control computer. We compared the time required to complete the gait scenario between the BCI controller and the smartwatch controller.



Classification Accuracy
To evaluate the performance of the two binary decoders offline, we measured the classification accuracy over 100 repetitions on the prepared MI data with a 7:3 train-test ratio. In each repetition, randomly chosen trials constituted the test set (10 trials), and the training trials were drawn from the remaining trials by bootstrap resampling with replacement. The total result was averaged and reported with a standard deviation.
For the online decoder, we recorded the true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) of the two classifiers while subjects were executing the gait scenario. The classifier_GvS produced all four outcomes; hence, the accuracy of that decoder could be calculated directly. On the other hand, the classifier_GvN operated only a single time during the entire gait scenario. Consequently, we used the true positive rate (TPR) as the online accuracy measurement of the classifier_GvN.
Accuracy = (n_TP + n_TN) / (n_TP + n_TN + n_FP + n_FN), TPR = n_TP / (n_TP + n_FN),

where n stands for the count of each of the four outcomes: TP, TN, FP, and FN. The overall performance of the online decoder was reported as the lower of the two classifiers' accuracies.
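The two measures are a direct transcription of the standard definitions; the confusion counts in the example are hypothetical, not logged values from the study:

```python
def accuracy(tp, tn, fp, fn):
    """Accuracy = (n_TP + n_TN) / (n_TP + n_TN + n_FP + n_FN), for classifier_GvS."""
    return (tp + tn) / (tp + tn + fp + fn)

def true_positive_rate(tp, fn):
    """TPR = n_TP / (n_TP + n_FN), used for classifier_GvN (single activation)."""
    return tp / (tp + fn)

# Hypothetical confusion counts from one scenario log
print(accuracy(17, 17, 3, 3), true_positive_rate(17, 3))  # 0.85 0.85
```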

Information Transfer Rate
The information transfer rate (ITR) per unit time was calculated as follows:

I_d = log2 N + p log2 p + (1 − p) log2 ( (1 − p) / (N − 1) ),
ITR = I_d × f_d,

where I_d denotes the bit rate (bit/trial), N denotes the number of tasks (in this case, N = 3), p denotes the decoding accuracy, and f_d denotes the decision rate (trial/min) [33]. In the offline session, we took the theoretical decision rate as the 90 trial repetitions divided by the total accumulated MI time of each subject (average 4.60 trial/min). In the online session, we set the decision rate from the accumulated MI time during the entire gait scenario (average 5.97 trial/min).
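The calculation can be sketched using the standard Wolpaw bit-rate formula the text describes; the accuracy value in the example is illustrative, not a reported result:

```python
import math

def bits_per_trial(p, n=3):
    """Wolpaw bit rate I_d = log2 N + p*log2 p + (1-p)*log2((1-p)/(N-1))."""
    if p >= 1.0:
        return math.log2(n)
    return math.log2(n) + p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))

def itr(p, decisions_per_min, n=3):
    """ITR (bit/min) = I_d * f_d, with f_d the decision rate in trial/min."""
    return bits_per_trial(p, n) * decisions_per_min

# Illustrative values: 84% accuracy at the offline decision rate of 4.60 trial/min
print(round(itr(0.84, 4.60), 2))  # 3.64
```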

Feature Selection
The MI repetition data were processed to reveal discriminant MI features (Figure 7). Through a Fisher's ratio topography, we identified electrodes with a high signal-to-noise ratio. Based on those representative electrodes, we examined a trial-averaged event-related spectral perturbation (ERSP) spectrogram (Figure 8) [34]. The spectrogram reveals that the ERD appeared while subjects were engaging in both Gait MI and Sit MI, whereas little or no ERD was observed during the Do-nothing task.

Control Performance
Figure 7. Topography of the normalized Fisher ratio between Gait MI (Gait) vs. Do-nothing (Dnth) and Gait MI vs. Sit MI (Sit). Repeated trials of signal power in each frequency band were averaged to calculate the Fisher ratio. The most dominant frequency band and electrode channels are highlighted in yellow. Topographies of three out of the ten subjects are shown to demonstrate a distinct desynchronization area.

Table 2 indicates the time taken by the 10 subjects to accomplish the 10 m gait scenario. The hybrid BCI controller required 144.8 ± 15.12% of the smartwatch controller's operation time on average. Supplementary Video S1 is provided to compare the time consumed by the smartwatch controller and the hybrid BCI controller.

Classification Accuracy
As mentioned in Section 2.5.2, the accuracy of the 10 subjects' decoders was inspected through 100 train-test repetitions. The classifier_GvN showed 88.4 ± 7.48% accuracy, while the classifier_GvS showed 80.3 ± 6.79% accuracy (Table 3). The online decoder accuracy was estimated from the log records of the real-time 10 m gait scenario (Figure 9). During the operation, each subject engaged in MI at least four times: (1) to stand up, Gait MI for the classifier_GvN; (2) to start gait, Gait MI for the classifier_GvS after the TEB; (3) to resume gait after the pause, the same as (2); and (4) finally, to sit down, Sit MI for the classifier_GvS. If subjects failed to fill the corresponding buffer, they made subsequent attempts until they succeeded. The online accuracy was around 85% for both classifiers (Table 3).

Table 4 shows the ITR for all subjects. By estimating the ITR, we could evaluate the efficiency of the developed BCI controller. The offline and online ITRs were 3.21 bit/min and 3.13 bit/min on average, respectively.

Discussion
In this study, we developed an MI-based hybrid BCI controller for lower-limb exoskeleton operation. The subjects could control the exoskeleton to stand up, start/stop gait, and sit down without any steering or button press, using the real-time TEB switch and EEG decoder. Ten healthy subjects participated in the offline and online sessions, and the average classification accuracy was more than 80% in both. All subjects completed a 10-m walking scenario with the lower-limb exoskeleton using the MI-based hybrid BCI controller, spending 145% of the operation time of the conventional smartwatch controller.

Characteristics of the EEG Decoder
As shown in Figure 7, the Gait MI vs. Do-nothing topographic plot appeared more consistent across subjects around the motor and somatosensory areas than the Gait MI vs. Sit MI plot. Focusing on the most prominent electrode channel, we illustrated the MI-related power desynchronization from the low Mu (8-12 Hz) to around the high Beta (13-30 Hz) frequency band by trial-averaged time-frequency wavelet analysis (Figure 8). The baseline was the mean amplitude over the entire epoch. Within 1 s after the MI cue disappeared, the ERD appeared in the 10-15 Hz band, while a few subjects showed it in a higher band (21-25 Hz or above for S6). According to the study of Cebolla et al., significant ERSP induced by context-based MI appeared between the Mu and low Beta frequencies (8~17 Hz) in the FCz channel [35]. Our results also revealed a correlation between MI and spatial-spectral cortical activity in the mu and beta rhythms over the primary motor cortex, consistent with previous studies [36][37][38]. Additionally, the results demonstrated that the adopted FBCSP algorithm [30] was suitable for capturing the difference between Gait MI vs. Do-nothing and Gait MI vs. Sit MI in both the subject-specific spectral and spatial domains.
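As a rough illustration of the first stage of FBCSP, the sketch below decomposes multi-channel EEG into sub-bands spanning the Mu and Beta range with zero-phase Butterworth filters; spatial patterns (CSP) would then be computed per band. The sampling rate, band edges, and filter order here are illustrative assumptions, not the parameters used in this study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def filter_bank(eeg: np.ndarray,
                fs: float = 250.0,
                bands=((8, 12), (12, 16), (16, 20), (20, 24), (24, 30))):
    """Split EEG (channels x samples) into sub-band signals.

    Each band is extracted with a 4th-order zero-phase Butterworth
    bandpass filter; the result has shape (n_bands, channels, samples).
    """
    sub_bands = []
    for lo, hi in bands:
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        sub_bands.append(filtfilt(b, a, eeg, axis=-1))
    return np.stack(sub_bands)
```

In the full FBCSP pipeline, CSP features from each sub-band would be pooled and ranked (e.g., by MIBIF) before classification.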
According to Figure 9A, there were continuous misclassifications, and subjects experienced delayed buffer movement during the MI tasks. The repeated false classifications were attributed mainly to the EEG processing window, set to a 2-s length with a 0.5-s shift. Consequently, if a dominant false feature lay inside the window, at least four shifts were required to completely renew the signal processing window. Moreover, the decoder could not respond to a subject's immediate change of intention, resulting in a long buffer reaction time. In further research, this problem could be mitigated by shortening the window or by reducing the effect of artifacts and noise.
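The buffer behavior described above can be sketched as follows: each 0.5-s shift contributes one window classification, and a command fires only once the buffer holds consecutive target labels. The buffer length of four is an illustrative assumption; the sketch shows how a single false window delays the command by several shifts.

```python
from collections import deque

def run_decoder(window_labels, buffer_len: int = 4, target: int = 1):
    """Simulate a command buffer over per-window classifier outputs.

    A command fires at index i when the last `buffer_len` window
    labels all equal `target`; the buffer is then cleared.
    Returns the indices at which commands fired.
    """
    buf = deque(maxlen=buffer_len)
    fired_at = []
    for i, label in enumerate(window_labels):
        buf.append(label)
        if len(buf) == buffer_len and all(x == target for x in buf):
            fired_at.append(i)
            buf.clear()
    return fired_at

# A false label at index 0 pushes the first command out to index 4:
print(run_decoder([0, 1, 1, 1, 1, 0, 1, 1, 1, 1]))
```

With a 0.5-s shift, each false window costs another half second before the buffer can fill, which matches the delayed reaction observed in the log records.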

Performance of the BCI Controller
In our study, 10 subjects demonstrated an average time ratio of 1.45 compared with the smartwatch controller. The result suggests that the developed controller can accommodate further improvement. Relative to existing manual controllers, previously developed BCI controllers showed an average time ratio of 2.03 for a lower-limb exoskeleton [7] and 1.27-1.35 for remote-controlled mobile robots [39,40]. Unlike most of the aforementioned studies, however, our subjects controlled the exoskeleton in an ambulatory environment rather than while sitting still.
Utilizing the FBCSP algorithm, we could discriminate gait-related SMR with more than 80% accuracy both offline and online. Meanwhile, the classifier_GvN presented an average of 8 percentage points higher offline accuracy (t(18) = 2.6, p = 0.018) and 2 percentage points higher online accuracy (t(18) = 0.7, p = 0.495) than the classifier_GvS (Table 2). Thus, the EEG feature difference between the Gait MI and the Do-nothing condition appeared more discriminative than that between the two MIs. Based on subject interviews, we surmise that a non-repeating single-action imagery such as Sit MI may be less effective in inducing EEG signal variations than the Gait MI, which is relatively familiar and straightforward. This may explain why the Gait MI vs. Sit MI classification results were not as high as those of Gait MI vs. Do-nothing despite the instructions and guidelines (Table 1). Further experiments should address these concerns about the MI protocol.
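The reported t(18) statistics are consistent with an independent two-sample t-test across the ten subjects' per-classifier accuracies (df = 10 + 10 - 2 = 18). A minimal sketch follows; the accuracy values below are synthetic placeholders, not the per-subject values from the paper's tables.

```python
import numpy as np
from scipy import stats

# Synthetic per-subject accuracies (%) for illustration only;
# the actual values appear in the paper's Tables 2 and 3.
acc_gvn = np.array([86, 90, 88, 87, 89, 88, 91, 85, 88, 92], dtype=float)
acc_gvs = np.array([79, 81, 80, 78, 82, 80, 83, 77, 80, 84], dtype=float)

# Independent two-sample t-test with pooled variance
t_stat, p_val = stats.ttest_ind(acc_gvn, acc_gvs)
df = len(acc_gvn) + len(acc_gvs) - 2
print(f"t({df}) = {t_stat:.2f}, p = {p_val:.4f}")
```

A paired test would also be defensible here, since both classifiers were trained on the same subjects; the choice affects the degrees of freedom (9 rather than 18).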

Limitations and Future Direction
We acknowledge that numerous alternative algorithms exist for decoding neural features from EEG signals [41][42][43][44][45]. Among them, deep learning and EEG channel optimization methods are the most relevant to this study. Convolutional neural networks and their derivatives are prominent algorithms for MI signals, treating the ERSP or short-time Fourier transform (STFT) as an image-domain representation [43]. Additionally, EEG MI signals present prevailing spatial features across multiple electrode channels; consequently, adopting a channel selection method is recommended to enhance decoder performance [44]. Further research can build on these algorithms for practical BCI applications. Rather than competing on classification accuracy, in this study we focused, for the first time, on demonstrating the feasibility of real-time operation of a lower-limb exoskeleton with gait-related MI, using the conventional yet well-established FBCSP algorithm. Our approach and findings can form a basis for further development of an online BCI controller for aiding people with gait disabilities.
Owing to the natural and endogenous characteristics of MI, an MI-actuated exoskeleton is the BCI application that best corresponds to the fundamental goal-directed and voluntary nature of movement [3]. Therefore, a BCI-controlled lower-limb exoskeleton could be particularly advantageous in rehabilitation settings [19,[46][47][48]. Patients with lower-limb disabilities following a stroke or SCI devote their efforts to regaining the use of their limbs. The traditional rehabilitation paradigm has been bottom-up, i.e., physical therapists or a treadmill move the patients' limbs repeatedly to trigger neuroplasticity in the brain. In contrast, a self-paced assistive exoskeleton controller directly decodes the brain signal and bypasses the pathway to the damaged limb [49]. Combining this top-down route with the classic bottom-up route, a closed-loop feedback interface brings promising results for people with disabilities to regain ambulation at will [50,51]. Other studies have also demonstrated the effect of MI-based rehabilitation on balance and ambulatory skills [19,52]. While this study demonstrates the feasibility of a real-time intuitive MI-based hybrid BCI controller with a wearable exoskeleton in healthy subjects, testing the system with patients is our intended future study. Further research will recruit more subjects, including those with SCI-related gait impairment, for practical real-life BCI applications, accompanied by an advanced display device such as portable augmented reality (AR) glasses with an MI-assistive environment [53]. We expect that gait rehabilitation with a BCI-controlled exoskeleton can significantly improve the degree of motor recovery.