Article

A BMI Based on Motor Imagery and Attention for Commanding a Lower-Limb Robotic Exoskeleton: A Case Study

Brain-Machine Interface System Lab, Miguel Hernández University of Elche, 03202 Elche, Spain
*
Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(9), 4106; https://doi.org/10.3390/app11094106
Received: 28 February 2021 / Revised: 27 April 2021 / Accepted: 29 April 2021 / Published: 30 April 2021
(This article belongs to the Special Issue Robotic Platforms for Assistance to People with Disabilities)

Abstract
Lower-limb robotic exoskeletons are wearable devices that can be beneficial for people with lower-extremity motor impairment because they can be valuable in rehabilitation or assistance. These devices can be controlled mentally by means of brain–machine interfaces (BMI). The aim of the present study was the design of a BMI based on motor imagery (MI) to control the gait of a lower-limb exoskeleton. The evaluation is carried out with able-bodied subjects as a preliminary study since potential users are people with motor limitations. The proposed control works as a state machine, i.e., the decoding algorithm is different to start (standing still) and to stop (walking). The BMI combines two different paradigms for reducing the false triggering rate (when the BMI identifies irrelevant brain tasks as MI), one based on motor imagery and another one based on the attention to the gait of the user. Research was divided into two parts. First, during the training phase, results showed an average accuracy of 68.44 ± 8.46% for the MI paradigm and 65.45 ± 5.53% for the attention paradigm. Then, during the test phase, the exoskeleton was controlled by the BMI and the average performance was 64.50 ± 10.66%, with very few false positives. Participants completed various sessions and there was a significant improvement over time. These results indicate that, after several sessions, the developed system may be employed for controlling a lower-limb exoskeleton, which could benefit people with motor impairment as an assistance device and/or as a therapeutic approach with very limited false activations.

1. Introduction

Robotic exoskeletons are wearable devices that can enhance physical performance and provide movement assistance. In the case of lower-limb robotic exoskeletons, they can be beneficial for people with motor impairment in the lower extremities as they can assist the gait and facilitate rehabilitation [1]. The combination of lower-limb robotic exoskeletons with brain–machine interfaces (BMI), which are systems that decode neural activity to drive output devices, offers a new method to provide motor support. Thus, patients could walk while being assisted by an exoskeleton that is controlled by their brain activity.
In the literature, there are different BMI control paradigms for lower-limb exoskeletons based on brain changes. The most common ones are steady-state visually evoked potentials [2], which are based on visual stimuli; motion-related cortical potentials (MRCP) [3,4,5,6], which are produced between 1500 and 500 ms before the execution of the movement; and event-related desynchronization/synchronization (ERD/ERS), which is considered to indicate the activation and posterior recovery of the motor cortex during preparation and completion of a movement [7,8,9]. BMIs based on ERD/ERS are usually employed to detect motion intention [3,6,10]. Similar ERD/ERS patterns are produced during motor imagery (MI), which consists of the imagination of a movement [11,12,13]. When performing MI, in contrast to responses to external stimuli, brain changes are induced voluntarily and internally by the subject. BMIs based on MI aim to identify different MI tasks or to differentiate between MI and an idle state [5,14,15,16]. The work of [16] combined MI with eye blinks as a control criterion.
The main limitation of MI is that patients have to maintain it for long periods in order to make the external device perform any action. However, contrary to instantaneous brain changes, such as MRCP or motion intention, continuous cognitive involvement of the patient during the assisted motion can induce mechanisms of neuroplasticity, the ability of the brain to reorganize its structure and promote rehabilitation [17]. Performing maintained brain tasks can be challenging, as it requires high focus from the user during the whole experiment, and any external influence could easily disturb it. Previous studies have tried to evaluate the level of attention of a subject during the control of the external device [18], and some of them have considered it as a control paradigm for a lower-limb BMI [15]. BMI systems need a training phase in which the model is calibrated for each subject before it is tested with new data. In [5,14,15,16], during the training phase, participants alternated periods of MI with idle state and the output device only moved during MI. Nevertheless, since these BMIs focus on sensorimotor rhythms, it is difficult to ensure that the classifier is not decoding the actual motion instead of motor imagery.
In our previous work [19], we designed a lower-limb MI BMI to control a treadmill and it was tested with able-bodied subjects. The BMI combined the paradigm of MI with another one that measured the level of attention that users had during MI tasks. In the test phase, i.e., when the output device was commanded by the BMI, the treadmill was only activated when the attention measured was higher than a certain threshold, reducing the number of false triggers. In order to ensure that motion artifacts did not affect the BMI classifier model, the training phase consisted of two types of trials: full standing and full motion trials. The mental tasks to perform were the same for both types, alternating periods of MI with idle state. Both types of trials allowed the creation of two different classifier models to be applied depending on the status of the subject: gait and stand.
In this study, the BMI designed in [19] was adapted to control the gait of a lower-limb exoskeleton and it was evaluated with able-bodied subjects. The combination of this BMI with a lower-limb exoskeleton is a promising and intuitive assistive approach for people with motor impairment. In addition, it could potentially benefit people with cortical damage (e.g., after a stroke) as a therapeutic approach for the recovery of lost motor function. Participants were trained over 2–5 days to assess the effect of practice on the performance. Each day’s session was divided into two parts: the training and test phases. During training, subjects performed trials in which the exoskeleton was walking the entire time and trials in which it was standing. In the test phase, the exoskeleton provided real-time feedback in a closed-loop control scenario. This is a preliminary step in the development of a BMI that will reinforce rehabilitation and/or assist the gait of patients with neurological damage.

2. Materials and Methods

2.1. Participants

Two subjects participated in the study (mean age 23.5 ± 3.5). They did not report any known disease and had no movement impairment. They did not have any previous experience with BMI. They were informed about the experiments and signed an informed consent form in accordance with the Declaration of Helsinki. All procedures were approved by the Responsible Research Office of Miguel Hernández University of Elche.

2.2. Equipment

Brain activity was recorded with electroencephalography (EEG). A 32-electrode actiCap system (Brain Products GmbH, Germany) was employed to record EEG signals. The 27 channels selected for acquisition were: F3, FZ, FC1, FCZ, C1, CZ, CP1, CPZ, FC5, FC3, C5, C3, CP5, CP3, P3, PZ, F4, FC2, FC4, FC6, C2, C4, CP2, CP4, C6, CP6, P4, placed following the 10-10 international system. Four electrodes were located next to the eyes to record electrooculography (EOG), and the ground and reference electrodes were located on the right and left ear lobes, respectively. Each channel was amplified with a BrainVision BrainAmp amplifier (Brain Products GmbH, Germany). Finally, signals were transmitted wirelessly to the BrainVision Recorder software (Brain Products GmbH, Germany).
The H3 exoskeleton (Technaid, Madrid, Spain) was employed to assist the movement, and participants used crutches as support. Gait start/stop commands were sent to the exoskeleton via Bluetooth. The experimental setup can be seen in Figure 1.

2.3. Experimental Design

Each participant completed several sessions and each session was divided into two parts. The first part was the training phase, in which the exoskeleton was under open-loop control: it was remotely controlled by the laptop with predefined commands matched to the mental tasks to be registered, not by the output of the BMI classifier. The second part of the session assessed the BMI performance during closed-loop control of the exoskeleton. Commands decoded by the BMI classifier from the brain activity were sent to the exoskeleton in real time, so subjects received immediate feedback on their performance.

2.3.1. Training Phase

In the first part of each session, subjects performed 20 trials. Each trial consisted of a sequence of three mental tasks: MI of the gait, idle state and regressive count. For the idle state, participants were asked to be as relaxed as possible. The regressive count changed randomly every trial and consisted of a starting number between 300 and 1000 and a subtrahend between 1 and 9. For example, given the count 500-4, subjects had to compute the series of subtractions 496, 492, 488... until the following task began. This task focuses the subject on a demanding mental activity very different from MI in order to provide a condition with a low level of attention to the gait. The protocol can be seen in Figure 2a. A voice message indicated the beginning of each task: ‘Relax’, ‘Imagine’, ‘500-5’. The message for the regressive count indicated a different mathematical operation each time. In order to avoid evoked potentials, the 4 s period after each auditory cue was not considered for further analysis.
During the session, subjects used crutches to maintain stability. In addition, a member of the research staff softly held the exoskeleton to prevent any possible loss of balance or fall. Ten of the training trials were performed in a full no-motion status and the other ten in a full motion status assisted by the exoskeleton. These trials were employed to train two different BMI classifiers: StandClassifiers (with non-motion trials) and GaitClassifiers (with full motion trials).

2.3.2. Test Phase

In the second part of each session, the BMI was tested in closed-loop control with the two groups of classifiers obtained with the data of the training phase (StandClassifiers, GaitClassifiers). Subjects performed five trials, whose protocol can be seen in Figure 2b. The transition between tasks was indicated with voice messages for the ‘Relax’ and ‘Imagine’ tasks. Notice that no regressive count task was included, as the attention level to the gait was computed based on the information from training, so there was no need for a low-attention task in the testing trials.

2.4. Brain Machine Interface

The presented BMI had the following steps: data acquisition, pre-processing, feature extraction, classification, exoskeleton control and evaluation.
As indicated before, this BMI was based on two paradigms: MI and attention. The first one was based on the distinction between MI of the gait and an idle state, so only data associated with these brain tasks were considered to train the classifiers (relax and motor imagery). With regard to the attention paradigm, it measured the level of attention to gait. Therefore, it had the objective of differentiating between the attention of the subject during MI and the attention during irrelevant tasks. For this paradigm, all brain tasks from training trials were contemplated (relax, motor imagery and regressive count). While the attention to the gait was assumed to be high during MI tasks, it was assumed to be low during regressive count and idle state. The schema of the BMI can be seen in Figure 3.

2.4.1. Data Acquisition

EEG signals were recorded at a sampling frequency of 200 Hz. Then, epochs of 1 s with 0.5 s of shifting were extracted and processed.
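The epoching described above can be sketched as a sliding window over the multichannel recording (a minimal illustration; the channel count and random input are placeholder assumptions):

```python
import numpy as np

FS = 200          # sampling frequency (Hz)
EPOCH = FS        # 1 s epoch -> 200 samples
SHIFT = FS // 2   # 0.5 s shift -> 100 samples

def extract_epochs(eeg):
    """Slice a (channels x samples) EEG array into overlapping 1 s epochs."""
    n_samples = eeg.shape[1]
    starts = range(0, n_samples - EPOCH + 1, SHIFT)
    return np.stack([eeg[:, s:s + EPOCH] for s in starts])

# e.g. 10 s of 27-channel EEG yields 19 overlapping epochs
epochs = extract_epochs(np.random.randn(27, 10 * FS))
```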

2.4.2. Pre-Processing

The pre-processing stage started with two frequency filters: a notch filter at 50 Hz to remove the contribution of the power line and a high-pass filter at 0.1 Hz. In order to reduce motion artifacts, electrode wires were fixed with clamps and a medical mesh. Since the movement of jaw muscles can generate signal artifacts, subjects were asked not to swallow or chew while they were performing MI, the regressive count or the idle state.
The H∞ denoising algorithm was applied to mitigate the presence of eye artifacts and signal drifts [5]. This algorithm estimates the contribution of the EOG and a constant drift to the EEG signal and removes it. Afterwards, there were two different pre-processing pipelines, one for each paradigm.
For the MI paradigm, a filter bank comprising multiple band-pass filters was applied to the data after the H denoising algorithm. Four band-pass filters were employed to obtain data associated with alpha and beta rhythms.
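A filter bank of this kind can be sketched with zero-phase Butterworth band-pass filters; the four sub-band cutoffs below are illustrative assumptions, since the exact bands are not specified here:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 200
# Assumed sub-bands covering the alpha and beta rhythms
BANDS = [(8, 12), (12, 18), (18, 24), (24, 30)]

def filter_bank(eeg):
    """Apply each band-pass filter to a (channels x samples) EEG array."""
    out = []
    for lo, hi in BANDS:
        sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        out.append(sosfiltfilt(sos, eeg, axis=-1))  # zero-phase filtering
    return np.stack(out)  # (n_bands, channels, samples)

banded = filter_bank(np.random.randn(27, 2 * FS))
```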
Regarding the attention paradigm, EEG signals from each channel were first standardized following the process presented in [20]. For each channel, the maximum visual threshold was computed as the mean of the 6 highest values of the signal. This value was iteratively updated for each epoch and it was used to standardize the data as
$$SV(t)_{ch} = \frac{V(t)_{ch}}{\frac{1}{Ch}\sum_{j=1}^{Ch} MVThreshold_j}.$$
The signal of each channel, $V(t)_{ch}$, was normalized taking into consideration the maximum visual thresholds ($MVThreshold$) of all the $Ch$ EEG channels. Subsequently, a surface Laplacian filter was used to reduce spatial noise and enhance the local activity of each electrode [21].
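The standardization step can be sketched as follows (a minimal illustration of the formula above; the iterative update of the threshold across epochs is omitted):

```python
import numpy as np

def standardize(eeg):
    """Divide each channel by the average of the per-channel maximum
    visual thresholds (mean of the 6 highest samples of each channel)."""
    mv_threshold = np.sort(eeg, axis=1)[:, -6:].mean(axis=1)  # one value per channel
    return eeg / mv_threshold.mean()

sv = standardize(np.random.randn(27, 200))
```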

2.4.3. Feature Extraction

The next step of the BMI computes features of the EEG that can discriminate between the brain tasks.
For the MI paradigm, common spatial patterns (CSP) [22] are computed for each frequency band. CSP estimates a spatial transformation that maximizes the discriminability between two brain patterns. If $X$ is the EEG signal with dimensions $N \times T$ (number of channels and number of samples, respectively), the CSP algorithm estimates a matrix of spatial filters $W$ that discriminates between two classes, $X_1$ and $X_2$. Firstly, the normalized covariance matrices are computed for each class as
$$C_1 = \frac{X_1 X_1^T}{\mathrm{trace}(X_1 X_1^T)}, \qquad C_2 = \frac{X_2 X_2^T}{\mathrm{trace}(X_2 X_2^T)}.$$
These matrices are computed for each trial, and $\overline{C_1}$ and $\overline{C_2}$ are calculated by averaging over all trials of the same class. The averaged covariance matrices are combined into the composite spatial covariance matrix, which can be factorized as
$$C = \overline{C_1} + \overline{C_2} = U_0 \Sigma U_0^T.$$
$U_0$ is the matrix of eigenvectors and $\Sigma$ is the diagonal matrix of eigenvalues. The averaged covariance matrices are transformed with the whitening matrix $P$ as
$$P = \Sigma^{-1/2} U_0^T,$$
$$S_1 = P \overline{C_1} P^T, \qquad S_2 = P \overline{C_2} P^T.$$
$S_1$ and $S_2$ share common eigenvectors, and the sum of their eigenvalue matrices is the identity matrix:
$$S_1 = U \Sigma_1 U^T, \qquad S_2 = U \Sigma_2 U^T \quad \mathrm{and} \quad \Sigma_1 + \Sigma_2 = I.$$
The projection matrix is obtained as
$$W = U^T P.$$
$Z$ is the projection of the original EEG signal $X$ into the new space; the columns of $W^{-1}$ are the spatial patterns:
$$Z = W X.$$
Although $Z$ has $N \times T$ dimensions, the first and last rows are the components that are best discriminated in terms of their variance. Therefore, for feature extraction, only the $m$ first and $m$ last components of $Z$ are considered. $Z_p$ denotes this subset of $Z$, and the variance of each component is computed and normalized with the logarithm as
$$f_p = \log\left(\frac{\mathrm{var}(Z_p)}{\sum_{i=1}^{2m}\mathrm{var}(Z_i)}\right).$$
$f_p$ is the vector of features and has $f_{bands} \cdot 2m$ dimensions. $m$ was set to 4 and, in the pre-processing phase, 4 band-pass filters were employed, so the dimension is 32.
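The CSP computation and the log-variance features can be sketched directly from the derivation above (a minimal NumPy illustration for one frequency band; trial counts and shapes are arbitrary placeholders):

```python
import numpy as np

def csp(trials_1, trials_2):
    """Spatial filter matrix W from two lists of (channels x samples) trials,
    following the whitening-based derivation above."""
    def avg_cov(trials):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
    C1, C2 = avg_cov(trials_1), avg_cov(trials_2)
    evals, U0 = np.linalg.eigh(C1 + C2)        # factorize the composite covariance
    P = np.diag(evals ** -0.5) @ U0.T          # whitening matrix
    _, U = np.linalg.eigh(P @ C1 @ P.T)        # diagonalize the whitened class covariance
    return U.T @ P

def csp_features(W, trial, m=4):
    """Normalized log-variance of the m first and m last CSP components."""
    Z = W @ trial
    Zp = np.vstack([Z[:m], Z[-m:]])
    var = Zp.var(axis=1)
    return np.log(var / var.sum())

rng = np.random.default_rng(0)
trials_a = [rng.standard_normal((27, 200)) for _ in range(10)]
trials_b = [rng.standard_normal((27, 200)) for _ in range(10)]
W = csp(trials_a, trials_b)
features = csp_features(W, trials_a[0])   # 2m = 8 features per band
```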
For the attention paradigm, power spectral estimation by the Maximum Entropy Method (MEM) was used to obtain features associated with each task. The signal of each electrode was modeled as an autoregressive process: the known autocorrelation coefficients were calculated and the unknown ones were estimated by maximizing the spectral entropy [23]. The autocorrelation coefficients were then used to compute the power spectrum that was compatible with the analyzed fragment of the signal while remaining maximally noncommittal about unseen data. Finally, only the power of the frequencies in the gamma band was considered [15].
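As a rough illustration of the band-power feature, the sketch below uses a plain FFT periodogram as a stand-in for the MEM autoregressive estimator, and assumes an illustrative gamma range of 30–90 Hz (the exact band limits are not specified here):

```python
import numpy as np

FS = 200  # sampling frequency (Hz)

def gamma_power(epoch):
    """Per-channel power in an assumed 30-90 Hz gamma band, computed with a
    plain FFT periodogram (a stand-in for the Maximum Entropy estimator)."""
    freqs = np.fft.rfftfreq(epoch.shape[-1], d=1 / FS)
    psd = np.abs(np.fft.rfft(epoch, axis=-1)) ** 2
    band = (freqs >= 30) & (freqs <= 90)
    return psd[:, band].sum(axis=1)

powers = gamma_power(np.random.randn(27, 200))  # one feature per channel
```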

2.4.4. Classification

Training trials of each session were evaluated using leave-one-out cross-validation: each trial was used once as the test set, with the remaining trials forming the training group. This process was performed independently for trials in which subjects were standing (10 trials) and trials in which they were in motion (10 trials). Linear Discriminant Analysis (LDA) [24] classifiers were created depending on the subject status—full standing trials (StandClassifiers) and full motion trials (GaitClassifiers)—each one with two different models based on the decoding paradigm: MI and attention. As stated above, whereas LDA classifiers of the MI paradigm were only trained with data from MI and idle state, LDA classifiers of the attention paradigm were trained with data from all brain tasks (idle, regressive count, MI).
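The leave-one-out scheme, with whole trials as the held-out unit, can be sketched with scikit-learn (hypothetical feature matrices; 10 trials of 20 epochs each are assumed for illustration):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(1)
X = rng.standard_normal((10 * 20, 32))      # 10 trials x 20 epochs, 32 CSP features
y = rng.integers(0, 2, size=10 * 20)        # 0 = idle state, 1 = MI
groups = np.repeat(np.arange(10), 20)       # trial index of each epoch

# Each fold trains on 9 trials and tests on the held-out trial
accuracies = []
for train, test in LeaveOneGroupOut().split(X, y, groups):
    lda = LinearDiscriminantAnalysis().fit(X[train], y[train])
    accuracies.append(lda.score(X[test], y[test]))
mean_accuracy = float(np.mean(accuracies))
```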
Concerning the test phase, the developed BMI was designed as a state machine system in which a group of classifiers was chosen based on the status of the exoskeleton. This way, if the subject is in a standing position, the MI and attention classifiers of the full standing trials (StandClassifiers) are used to decide if the exoskeleton keeps standing or starts moving, but if the subject is moving, the MI and attention classifiers obtained by the full motion trials (GaitClassifiers) are used to continue walking or to stop. Predictions from both paradigms were combined to decode control commands. Its design can be seen in Figure 4. In summary, in each test trial, subjects started standing with the exoskeleton and StandClassifiers were employed. The system could decode stop or walk commands based on the prediction of their MI and attention classifiers. When a walk command was sent to the exoskeleton, it started the gait and the system was changed to Gait state. Consequently, GaitClassifiers were employed afterwards. Again, the system could decode stop or walk commands, but when a stop command was issued, the exoskeleton stopped the gait and the system changed to Stand state again.

2.4.5. Exoskeleton Control

In the test phase, the exoskeleton was controlled by BMI decoded commands. MI classifiers could predict two classes, 0 for idle state and 1 for MI, and attention classifiers could predict a 0 for low attention to gait and a 1 for high attention. These predictions were averaged every 10 s, which resulted in MI and attention indices that ranged from 0 to 1. Control commands were selected based on the following rules:
  • After a command was issued, no new command could be issued for 5 s.
  • If the subject was standing:
    - If the MI index was higher than or equal to 0.7, or the MI index was higher than or equal to 0.6 and the attention index was higher than or equal to 0.4, a walk command was issued and the exoskeleton started the gait.
    - Otherwise, the exoskeleton kept standing.
  • If the subject was walking:
    - If the MI index was lower than or equal to 0.4, a stop command was issued and the exoskeleton stopped the gait.
    - Otherwise, it kept walking.
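These threshold rules define a small state machine that can be sketched as a single decision function (the 5 s lock-out after each command is omitted for brevity):

```python
def decode_command(state, mi_index, att_index):
    """Map the averaged MI and attention indices (0-1) to a gait command,
    following the threshold rules listed above."""
    if state == "stand":
        if mi_index >= 0.7 or (mi_index >= 0.6 and att_index >= 0.4):
            return "walk"   # start the gait
        return "stand"
    # walking state
    if mi_index <= 0.4:
        return "stop"       # stop the gait
    return "walk"

decode_command("stand", 0.65, 0.5)  # -> 'walk'
decode_command("gait", 0.3, 0.9)    # -> 'stop'
```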

2.5. Evaluation

The accuracy of training trials was defined as the percentage of correctly classified epochs during each brain task. This metric was computed separately for trials in which participants were moving and trials in which they were static. Furthermore, the performance of closed-loop trials was assessed with the following indices:
  • %MI and %Att: percentage of epochs of data correctly classified for each paradigm.
  • %Commands: percentage of epochs of data with correct control commands.
  • Accuracy commands: percentage of correct commands issued.
  • True positive ratio (TPR): percentage of MI periods in which a walking event is executed. There is only an event of MI per trial, so this value can only be 0 or 100% per trial.
  • False positives (FP) and false positives per minute (FP/min): moving commands issued during rest periods.
Transition events were not considered for the computation of evaluation metrics.
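Two of the indices above, %Commands and FP/min, can be sketched for a single trial as follows (hypothetical command labels; the 0.5 s epoch shift matches the acquisition settings):

```python
def evaluate_trial(true_cmds, pred_cmds, epoch_shift=0.5):
    """Percentage of epochs with the correct command, and false positives
    per minute (walk decisions issued while the ground truth was rest)."""
    correct = sum(t == p for t, p in zip(true_cmds, pred_cmds))
    pct_commands = 100 * correct / len(true_cmds)
    fp = sum(1 for t, p in zip(true_cmds, pred_cmds) if t == "rest" and p == "walk")
    minutes = len(true_cmds) * epoch_shift / 60
    return pct_commands, fp / minutes

# 60 s trial: 30 s of rest followed by 30 s of MI
pct, fp_per_min = evaluate_trial(["rest"] * 60 + ["walk"] * 60,
                                 ["rest"] * 50 + ["walk"] * 70)
```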

3. Results

During training, participants wore the exoskeleton under open-loop control. Each subject completed several sessions, and in each of them, they completed 20 trials: 10 trials standing still and 10 trials walking. Results from subjects S1 and S2 are shown in Table 1 and Table 2, respectively. It must be noted that they did not have the same amount of practice, since they participated in a different number of sessions. Two different BMI paradigms were evaluated. For the MI paradigm, S1 reached an average accuracy of 72.77 ± 6.61% with a difference of around 6% between the two conditions, standing and walking. In the last session, S2 achieved an average accuracy of 64.11 ± 9.98% with a difference of 20% between the two conditions. With respect to the attention paradigm, S1 obtained an accuracy of 65.06 ± 6.44% with a difference of 8%, and S2 achieved 65.83 ± 4.43% with a 10% difference. The average accuracy of the MI and attention paradigms was 68.44 ± 8.46% and 65.45 ± 5.53%, respectively.
Figure 5 and Figure 6 show the spatial patterns of S1 and S2 in their last sessions. Moreover, in order to provide a comparison under the same conditions, Figure 7 shows the spatial patterns of S2 in the second session. The spatial patterns estimated during trials without movement show that, for S1 and S2, electrode FCz seems to have a relevant role in the discrimination of the idle state. During MI events, the most significant electrodes for both subjects are peripheral, such as FC5. However, results from S2 show that, in the 5–10 Hz band, C2 and CPz are relevant to the MI of gait. Regarding trials in which participants were walking, the distribution of relevant areas seems scattered for the idle state, and for MI, peripheral electrodes are also highlighted.
When comparing the spatial patterns of S2 in two different sessions, the main similarities can be found in the stand trials. CPz and Cz are highlighted for the relax class and electrode FC5 seems to be significant for the MI class.

3.1. Training Phase

Table 1. Results from training, subject S1. Trials with open-loop control of the exoskeleton.
              Session 1        Session 2
Stand  %MI    59.29 ± 10.51    69.64 ± 7.62
       %Att   57.38 ± 9.27     60.83 ± 7.58
Gait   %MI    58.93 ± 11.60    75.89 ± 5.41
       %Att   65.83 ± 8.57     69.29 ± 5.04
Table 2. Results from training, subject S2. Trials with open-loop control of the exoskeleton.
              Session 1       Session 2       Session 3       Session 4       Session 5
Stand  %MI    53.32 ± 8.59    69.64 ± 8.70    65.54 ± 5.12    64.2 ± 11.45    74.11 ± 6.14
       %Att   63.95 ± 4.44    62.57 ± 8.77    58.21 ± 8.76    59.52 ± 8.44    60.83 ± 5.48
Gait   %MI    50.26 ± 7.54    54.17 ± 8.75    62.50 ± 8.91    59.82 ± 10.11   54.11 ± 12.71
       %Att   61.05 ± 5.13    63.36 ± 2.84    61.55 ± 7.21    65.71 ± 6.82    70.83 ± 3.04
Figure 5. Spatial patterns for the session of S1 that best discriminate between motor imagery (MI) and idle state. (a) The spatial patterns from trials in which participant was standing still and (b) the spatial patterns from trials in which they were walking with the exoskeleton.
Figure 6. Spatial patterns for the session of S2 that best discriminate between motor imagery (MI) and idle state. (a) The spatial patterns from trials in which participant was standing still and (b) the spatial patterns from trials in which they were walking with the exoskeleton.
Figure 7. Spatial patterns for the fifth session of S2 that best discriminate between motor imagery (MI) and idle state. (a) The spatial patterns from trials in which participant was standing still and (b) the spatial patterns from trials in which they were walking with the exoskeleton.

3.2. Test Phase

The exoskeleton was controlled by the commands decoded by the BMI, whose classifiers had been trained with the trials from the training phase. Table 3, Table 4 and Table 5 summarize the results from closed-loop trials. TPR is 100% in the majority of trials, which means that the exoskeleton was activated at least once during the MI event. The number of false positive activations during the idle state ranged from 0 to 2. Regarding %Commands, it improved by 13% from the first to the last session of S2, although performance did not always increase from one session to the next. In the last session, the average %Commands for both subjects was 64.50 ± 10.66%.

4. Discussion

Contrary to the findings of our previous work on a BMI-controlled treadmill [19], we found significant differences between open-loop trials in which subjects were standing and those in which they were walking. It is important to note that walking assisted by an exoskeleton is a more complex task than walking on a treadmill, so subjects must concentrate on it. Consequently, it is more difficult for them to perform other brain tasks such as MI or the regressive count. In addition, when comparing the results from closed-loop trials, the average percentage of epochs with correct commands was 64.5% with the exoskeleton and 75.6% with the treadmill. A possible explanation for this contrast could also be the complexity of the movement with the exoskeleton.
On the other hand, the attention paradigm showed worse performance than the MI paradigm in open-loop trials, which is consistent with the findings of our previous work [19]. However, in line with our previous work with an exoskeleton [15], this difference is not as evident in closed-loop trials. Therefore, future BMI designs could rely more on the attention paradigm for the activation of the exoskeleton.
While results from the MI paradigm showed an increasing trend throughout the sessions, this pattern is not as evident for the attention paradigm. Our results for the MI paradigm are consistent with the conclusions from [25]: performing MI is not an intuitive activity for novice participants, and practice could enhance the modulation of brain activity patterns. Nevertheless, with regard to the attention of the user, performance does not seem to improve with practice. Attention is something people train on a daily basis, which could explain why a few sessions cannot further improve it.
There are few investigations in the literature that have developed BMIs based on lower-limb MI without other external stimuli [2], and they are usually based on motion intention [3,6,10]. In addition, the works of [4,26] employed upper-limb MI to control a lower-limb exoskeleton. Reference [26] reported a percentage of correct commands, issued every 4.5 s, of 66%, and [4] of 80.16%, but that BMI was only employed to start the gait and not to stop it. These values can be compared with the %Commands of the present paper. Although superior results are achieved with upper-limb MI, this paradigm cannot be applied to promote neuroplasticity.
In [16], a BMI was presented that employed a combination of MI with eye blinking as a control paradigm, and an accuracy of 86.7% was reported. However, although control mechanisms that employ eye movements have proven to be precise, they lack application from the rehabilitation point of view. In addition, the work of [14] presented a BMI that only controlled the start and maintenance of the gait of a lower-limb exoskeleton and they obtained an average accuracy of 74.4%. In our previous research [15] that also combined the MI and attention paradigms to control an exoskeleton, the percentage of epochs with correct commands issued was 56.77%. Slightly superior results were achieved with the current BMI algorithm.

5. Conclusions

The current research presents a BMI system based on MI and attention paradigms that has been tested to control a lower-limb exoskeleton. Participants performed 2–5 sessions to assess the effect of practice on the performance. Each session was divided into two parts: the training and test phases. First, participants completed trials in which they had to perform certain brain tasks while the exoskeleton was controlled remotely by the laptop with predefined commands. During half of the trials, the exoskeleton was walking, and during the other half, it was completely static. Therefore, contrary to previous works, the brain tasks to discriminate happened under the same motion conditions. Moreover, this setup can reduce the effect of artifacts on the predictions. The average performance in the last session was 68.44 ± 8.46% for the MI paradigm and 65.45 ± 5.53% for the attention paradigm. The second part of each session consisted of closed-loop controlled trials in which the exoskeleton was commanded by the predictions of the BMI. The BMI worked as a state machine that used different classifiers depending on whether the exoskeleton was static or moving; training trials were used to train the classifiers corresponding to each state. The BMI took a decision every 0.5 s, and the average percentage of correct commands chosen was 64.50 ± 10.66% for the last session of both subjects.
Participants did not have any motor impairment, but since the main objective of the system is to promote neurorehabilitation and neuroplasticity, future research will focus on people with motor disabilities.

Author Contributions

Conceptualization, L.F. and M.O.; methodology, L.F., V.Q. and M.O.; software, L.F. and V.Q.; validation, M.O. and E.I.; formal analysis, L.F. and V.Q.; investigation, L.F. and V.Q.; resources, M.O., E.I. and J.M.A.; data curation, E.I.; writing—original draft preparation, L.F.; writing—review and editing, L.F. and M.O.; visualization, L.F.; supervision, M.O., E.I. and J.M.A.; project administration, J.M.A.; funding acquisition, J.M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Spanish Ministry of Science and Innovation, the Spanish State Agency of Research, and the European Union through the European Regional Development Fund in the framework of the project Walk—Controlling lower-limb exoskeletons by means of brain–machine interfaces to assist people with walking disabilities (RTI2018-096677-B-I00); and by the Consellería de Innovación, Universidades, Ciencia y Sociedad Digital (Generalitat Valenciana), and the European Social Fund in the framework of the project Desarrollo de nuevas interfaces cerebro-máquina para la rehabilitación de miembro inferior (GV/2019/009).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board of Miguel Hernandez University of Elche (DIS.JAP.03.18 and 22/01/2019).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Experimental setup.
Figure 2. (a) The protocol of open-loop trials and (b) the protocol of closed-loop trials.
Figure 3. Brain–machine interface (BMI) scheme. During training, the exoskeleton was in open-loop control, and for testing, it was in closed-loop control. The BMI used two different paradigms: one based on motor imagery of gait and another based on the user’s level of attention to gait. Both paradigms shared some pre-processing steps, but each also had its own additional steps. Then, two different feature extraction methods were employed. Trials from the training phase were used to train the BMI classifiers for testing.
Figure 4. State machine design of the brain–machine interface (BMI). There are two states, gait and stand, which depend on the exoskeleton status. Each state is associated with two different classifiers, one for each paradigm, that are used to decode control commands.
Table 3. Test results, subject S1. Trials in closed-loop control.
                 Trial 1  Trial 2  Trial 3  Trial 4  Trial 5     Avg.
Session 1
  %MI              64.13    50.00    48.91    61.96    56.52    56.30
  %Att             51.09    58.70    53.26    47.83    55.43    53.26
  %Commands        63.00    78.00    62.00    60.00    83.00    69.20
  Acc. commands    50.00    50.00    50.00     0.00    50.00    40.00
  TPR             100.00   100.00   100.00     0.00   100.00    80.00
  FP                1.00     0.00     1.00     0.00     0.00     0.40
  FP/min            2.31     0.00     2.31     0.00     0.00     0.92
Session 2
  %MI              61.96    52.17    50.00    60.87    57.61    56.52
  %Att             64.13    46.74    61.96    60.87    57.61    58.26
  %Commands        60.00    57.00    63.00    57.00    69.00    61.20
  Acc. commands    75.00    75.00    75.00    50.00    66.67    68.33
  TPR             100.00   100.00   100.00   100.00   100.00   100.00
  FP                1.00     1.00     1.00     1.00     1.00     1.00
  FP/min            2.31     2.31     2.31     2.31     2.31     2.31
Table 4. Test results, first two sessions of subject S2. Trials in closed-loop control.
                 Trial 1  Trial 2  Trial 3  Trial 4  Trial 5     Avg.
Session 1
  %MI              50.00    44.57    43.48    42.39    43.48    44.78
  %Att             47.83    56.52    53.26    53.26    50.00    52.17
  %Commands        59.00    54.00    53.00    53.00    53.00    54.40
  Acc. commands     0.00     0.00     0.00     0.00     0.00     0.00
  TPR             100.00   100.00   100.00   100.00   100.00   100.00
  FP                1.00     1.00     1.00     1.00     1.00     1.00
  FP/min            2.31     2.31     2.31     2.31     2.31     2.31
Session 2
  %MI              46.74    60.87    48.91    54.35    52.17    52.61
  %Att             56.52    59.78    43.48    66.30    64.13    58.04
  %Commands        59.00    53.00    37.00    76.00    68.00    58.60
  Acc. commands     0.00   100.00     0.00    66.67   100.00    53.33
  TPR             100.00   100.00     0.00   100.00   100.00    80.00
  FP                1.00     0.00     1.00     1.00     0.00     0.60
  FP/min            2.31     0.00     2.31     2.31     0.00     1.38
Table 5. Test results, last three sessions of subject S2. Trials in closed-loop control.
                 Trial 1  Trial 2  Trial 3  Trial 4  Trial 5     Avg.
Session 3
  %MI              63.04    46.74    52.17    48.91    46.74    51.52
  %Att             45.65    44.57    55.43    52.17    50.00    49.56
  %Commands        67.00    81.00    75.00    63.00    63.00    69.80
  Acc. commands    60.00    60.00    75.00    40.00    50.00    57.00
  TPR             100.00   100.00   100.00   100.00   100.00   100.00
  FP                2.00     1.00     1.00     2.00     2.00     1.60
  FP/min            4.62     2.31     2.31     4.62     4.62     3.69
Session 4
  %MI              53.26    64.13    45.65    58.70    58.70    56.09
  %Att             58.70    56.52    75.00    55.43    71.74    63.48
  %Commands        57.00    64.00    57.00    78.00    65.00    64.20
  Acc. commands     0.00   100.00     0.00    75.00    50.00    45.00
  TPR             100.00   100.00   100.00   100.00   100.00   100.00
  FP                1.00     0.00     1.00     1.00     1.00     0.80
  FP/min            2.31     0.00     2.31     2.31     2.31     1.85
Session 5
  %MI              59.78    56.52    59.78    59.78    61.96    59.56
  %Att             70.65    70.65    59.78    67.39    56.52    65.00
  %Commands        56.00    73.00    52.00    88.00    70.00    67.80
  Acc. commands    40.00    66.67   100.00   100.00    33.33    68.00
  TPR             100.00   100.00   100.00   100.00   100.00   100.00
  FP                2.00     1.00     0.00     0.00     2.00     1.00
  FP/min            4.62     2.31     0.00     0.00     4.62     2.31
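As a sanity check on the tables above, the Avg. column is the plain mean of the five trial values, and the FP and FP/min rows are mutually consistent. The snippet below reproduces one row; the interpretation of the per-trial evaluated duration is an inference from the numbers, not something stated in the tables.

```python
# %Att row of subject S2, session 5 (values copied from Table 5):
# the Avg. column is the plain mean over the five trials.
att_s2_s5 = [70.65, 70.65, 59.78, 67.39, 56.52]
avg = sum(att_s2_s5) / len(att_s2_s5)
print(round(avg, 2))  # → 65.0, matching the tabulated 65.00

# FP/min normalizes false positives by trial duration; FP = 1.00 paired
# with FP/min = 2.31 implies an evaluated period of roughly 26 s per
# trial (an inference from the tables, not stated in the text).
trial_seconds = 60 / 2.31
print(round(trial_seconds, 1))  # → 26.0
```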
