Motor Imagery Based Continuous Teleoperation Robot Control with Tactile Feedback

Abstract: Brain-computer interface (BCI) adopts human brain signals to control external devices directly, without using the normal neural pathways. Recent studies have explored many applications, such as controlling a teleoperation robot with electroencephalography (EEG) signals. However, using a motor imagery EEG-based BCI to teleoperate reach-and-grasp tasks still faces many difficulties, especially in the continuous multidimensional control of the robot and in tactile feedback. In this research, a motor imagery EEG-based continuous teleoperation robot control system with tactile feedback is proposed. Firstly, mental imagination of different hand movements was translated into continuous commands that controlled the remote robotic arm, through a wireless local area network (LAN), to reach the hover area of the target. Then, the robotic arm automatically completed the task of grasping the target. Meanwhile, the tactile information from the remote robotic gripper was detected and converted into a feedback command. Finally, vibrotactile stimulation was supplied to the users to improve their telepresence. Experimental results demonstrate the feasibility of using motor imagery EEG acquired by wireless portable equipment to realize a continuous teleoperation robot control system for the reach-and-grasp task. The average two-dimensional continuous control success rates for online Task 1 and Task 2 over the six subjects were 78.0% ± 6.1% and 66.2% ± 6.0%, respectively. Furthermore, compared with traditional EEG-triggered robot control along a predefined trajectory, fully continuous two-dimensional control can not only improve the teleoperation robot system's efficiency but also give the subject more natural control, which is critical to human-machine interaction (HMI). In addition, vibrotactile stimulation can improve the operator's telepresence and task performance.


Introduction
Brain-computer interface (BCI) utilizes brain activity to communicate with external devices directly [1][2][3]. In the past few decades, both invasive and noninvasive BCIs have received much attention from researchers. BCI has evolved from basic communication to a state in which some complex tasks can be routinely performed by healthy subjects. Soekadar et al. [4] demonstrated that a hand exoskeleton based on a hybrid electroencephalography (EEG)/electro-oculogram (EOG) BCI can restore the autonomy and independence of paraplegic individuals in everyday life. The feasibility of inducing neurological recovery in paraplegic patients by long-term training with a BCI-based gait protocol was shown in [5]. In addition, BCI-based control of virtual objects [6], robotic arms [7][8][9], robotic prostheses [10,11], wheelchairs [12], and various rehabilitation devices [13][14][15][16] has also been reported in previous research.
With the development of BCI technology, some groups have carried out research on BCI-based teleoperation. In [17], a brain-driven telepresence system gave the user access to remote environments through a mobile robot; the P300 EEG signal was utilized. A BCI-based teleoperation control scheme for an exoskeleton robotic system was proposed in [18]. In that study, the subject could guide the robot to perform manipulation tasks by integrating BCI commands with adaptive fuzzy controllers. Zhao et al. [19] developed a teleoperation control framework for multiple coordinated mobile robots using BCI. Both of the latter two studies adopted the EEG signals of steady-state visually evoked potentials (SSVEP).
However, few research groups have attempted to apply motor imagery EEG signals to teleoperation robotic systems in reach-and-grasp tasks, and many difficulties still remain, such as continuously extracting trajectory information from human movement imagination, continuous multidimensional control, and tactile feedback. Most previous research focuses on conventional two-class or four-class classification of the EEG signal to obtain two or four discrete control commands and trigger the robot to move along a predefined trajectory, instead of directly converting the EEG signals into multidimensional continuous control information [20]. For example, Bousseta et al. [21] proposed a BCI system that controlled a robot arm based on the user's thought. Four subjects were instructed to imagine the execution of movements of the left hand, right hand, both hands, or the feet. The classifier generated four commands to control the robot's arm to move along a predefined trajectory instead of continuous control. Li Y et al. [22] presented an EEG-based system classifying the signals from the Emotiv EPOC into the corresponding action commands. The trigger commands were sent to the robot, and the robot performed predefined basic maneuvers, such as moving forwards, moving backwards, turning left, and turning right. In our previous work, trigger commands were also used to control a rehabilitation robot [23]. In addition, most BCI-based online control systems adopt nonportable equipment and can only be used in laboratory environments. Although such equipment has a high signal-to-noise ratio, it is inconvenient to use and carry.
Aiming at these problems, we designed a BCI-based continuous teleoperation robot control system with tactile feedback and recruited a group of healthy participants to use their movement imagination EEG to continuously control a remote robotic arm in reach-and-grasp tasks. Wireless portable acquisition equipment was adopted to obtain the EEG signals in order to make the BCI-based system easier to use in daily life. Moreover, the tactile information of the remote robotic gripper was detected and transferred to the local computer. Finally, the subjects were provided with vibrotactile stimulation to improve their telepresence and task performance. The motor imagery EEG-based continuous teleoperation robot control system with vibrotactile feedback for the reach-and-grasp task may offer the following advantages: (a) It allows subjects to perform teleoperation using only movement imagination and explores the operator's motor initiative; (b) continuous control can not only improve the teleoperation robot system's efficiency but also offer the subject more natural control, which is very important in human-machine interaction, especially in teleoperation robot systems; (c) biofeedback based on vibrotactile stimulation may improve the operator's telepresence, enhance the operator's confidence, and eventually improve task performance; (d) it may provide critical data for understanding the processes of motor imagery-based teleoperation robot control. Figure 1 shows the experimental setup used in this study. The EEG-based continuous teleoperation robot system consists of the following components: A wireless EEG amplifier, a vibrotactile stimulation system, a robotic arm, a robotic gripper force detection system, a wireless gateway, a master PC, and a slave PC.

System Description
Firstly, the master PC displayed the target to instruct the participant to perform the corresponding motor imagery of the left hand, right hand, both hands, or relaxation. Secondly, the cursor was controlled by the motor imagery EEG and kept moving on the screen as visual feedback until it hit the virtual target. In accordance with the continuous motion trajectory of the cursor, the robotic arm moved continuously toward the target. The wireless EEG amplifier is in charge of recording the motor imagery EEG and sending the data to the master PC via Bluetooth. The master PC is responsible for processing the movement imagination EEG signals, sending the continuous motion commands to the slave PC, controlling the virtual cursor to provide the subjects with visual feedback, receiving the force information of the robotic gripper from the slave PC, and controlling the vibrotactile stimulation system to supply the subjects with tactile feedback. The slave PC is in charge of controlling the robot to finish the reach-and-grasp task according to the motion trajectory from the master PC, acquiring the tactile information of the robotic gripper, and sending it to the master PC. In addition, the communication between the master and slave PC is based on a client-server architecture using the TCP/IP protocol; the master PC and the slave PC act as the server and the client, respectively.
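The master-slave exchange above can be sketched as follows. The paper specifies only a TCP/IP client-server design over a wireless LAN; the two-float message layout, the grasp-force reply, and the function names here are assumptions for illustration, not the authors' protocol.

```python
import socket
import struct

MSG = struct.Struct("<ff")  # one continuous motion command: (dx, dy), little-endian floats

def pack_motion(dx, dy):
    """Serialize a motion command on the master PC side."""
    return MSG.pack(dx, dy)

def unpack_motion(payload):
    """Deserialize a motion command on the slave PC side."""
    return MSG.unpack(payload)

def demo_roundtrip():
    """Send one motion command and receive one grasp-force reading back.
    socket.socketpair() stands in for the real TCP connection."""
    master, slave = socket.socketpair()
    try:
        master.sendall(pack_motion(0.5, -1.25))      # master -> slave: motion
        dx, dy = unpack_motion(slave.recv(MSG.size))
        slave.sendall(struct.pack("<f", 350.0))      # slave -> master: grasp force (g)
        (force,) = struct.unpack("<f", master.recv(4))
        return dx, dy, force
    finally:
        master.close()
        slave.close()
```

In a real deployment the two endpoints would of course live on different machines; the fixed binary layout keeps the 5 Hz command stream small and unambiguous.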


Control Architecture
The control architecture of the EEG-based continuous teleoperation robot control system is shown in Figure 2. To begin with, the multichannel EEG signals were spatially filtered by a common average reference (CAR) filter. Secondly, every 50 ms, the amplitudes of the specific mu or beta rhythm bands over the left (channel C3) and right (channel C4) hemispheres were estimated with a 16th-order autoregressive (AR) model using the last 500 ms of EEG signals. Thirdly, these amplitudes were linearly mapped to generate the two-dimensional continuous motion trajectory. Next, the motion trajectory information was used to control the cursor in order to give visual feedback. At the same time, the motion trajectory information was transmitted to the slave PC over the wireless local area network (LAN). Then, the robotic arm was controlled according to the continuous motion trajectory. Once the robotic arm arrived at the hover area, the robotic gripper reached and grasped the target automatically in order to reduce the subject's mental load; the "hover area" was a virtual cylindrical region with a radius of 1 cm centered above the wooden target. Moreover, the grasp force was detected and sent back to the master PC, also over the wireless LAN, as feedback. Finally, feedback based on vibrotactile stimulation was given to the subjects when the robotic gripper grasped the target, in order to improve telepresence and task performance.
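The hover-area trigger described above reduces to a simple geometric test: the gripper is inside a vertical cylinder of radius 1 cm centered above the target. A minimal sketch, in which the coordinate frame, the unit (cm), and the function name are assumptions:

```python
import math

def in_hover_area(gripper_xy, target_xy, radius_cm=1.0):
    """True when the gripper's horizontal position lies inside the 1 cm
    cylinder above the target, at which point control hands over to the
    automatic grasping routine."""
    dx = gripper_xy[0] - target_xy[0]
    dy = gripper_xy[1] - target_xy[1]
    return math.hypot(dx, dy) <= radius_cm
```

Only the horizontal distance matters because the cylinder is vertical; the arm's height is handled by the automatic grasp routine.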

Human Subjects
Six right-handed healthy subjects were the users of the BCI-based teleoperation robot system. None of the participants had previous experience with motor imagery-based BCI experiments or teleoperation experiments. They were all recruited from Southeast University, Nanjing, China. Before the experiments, informed consent was obtained from each subject. This study was approved by the local Ethics Committee.

Figure 2. Control architecture of the BCI-driven continuous teleoperation robot system.


Experimental Paradigm
Each subject sat in a reclining chair facing a screen while scalp electrodes recorded the EEG. The robotic arm was placed on another table, about 8 m away from the subjects, as shown in Figure 3. The subjects' task was to imagine movement of the left hand, right hand, or both hands, and relaxation of both hands. The subjects looked at the virtual cursor during motor imagery and could not see the robotic arm. Through motor imagery, the subjects learned to modulate their sensorimotor rhythm amplitude in the mu (8-12 Hz) and beta (18-26 Hz) frequency bands. Each subject performed five runs, and each run included 12 trials. Each trial started with a rest period of 2 s. Next, the virtual target was displayed on the screen to instruct the subject to perform the corresponding movement imagination. After 2 s, the cursor began moving on the screen as visual feedback. Meanwhile, the continuous motion trajectory was sent to the slave PC over the wireless LAN, and the robotic arm moved continuously toward the target in real time. Once the movement imagination-controlled cursor hit the target, the robotic arm gripper had arrived at the hover area and performed grasping automatically. At the same time, the force information of the robotic gripper was transmitted from the slave PC to the master PC, and feedback based on vibrotactile stimulation was supplied to the subject to improve telepresence. Finally, the robotic arm moved back to its original position and prepared to reach and grasp for the next trial.
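The trial timeline above can be summarized as a small phase table. Only the first two phase durations (2 s each) are stated in the text; the later phases last until their events complete, so they carry no fixed duration here, and the phase names are our own labels.

```python
TRIAL_PHASES = [
    ("rest", 2.0),             # each trial starts with a 2 s period
    ("target_shown", 2.0),     # virtual target displayed, imagery begins
    ("cursor_control", None),  # continuous control until the cursor hits the target
    ("auto_grasp", None),      # gripper grasps automatically in the hover area
    ("feedback", None),        # force relayed, vibrotactile stimulation on
    ("return_home", None),     # arm returns to its original position
]

def phase_at(t_s):
    """Phase for the fixed-timing prefix of a trial (first 4 s)."""
    if t_s < 2.0:
        return "rest"
    if t_s < 4.0:
        return "target_shown"
    return "cursor_control"  # lasts until the cursor hits the target
```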



EEG Recording and Processing
According to the international standard electrode placement [24], electrodes (Figure 4) at C3, FC3, CP3, C5, C4, FC4, CP4, and C6 were fed into g.tec's portable acquisition system, the g.MOBIlab module. All channels were referenced to the left earlobe, and the ground electrode was AFz. The motor imagery EEG signals were acquired from the amplifier via Bluetooth using the BCI2000 software [25]. The sample rate was 256 Hz, and the electrode impedance was kept below 10 kΩ. BCI2000 is in charge of controlling the virtual cursor and displaying the virtual targets. In EEG-based BCI systems, the use of spatial filters can significantly improve the user's performance [20]. A CAR filter was used to preprocess the eight-channel motor imagery EEG signals. Then, an autoregressive (AR) model was adopted to estimate the amplitude of the sensorimotor rhythm in a subject-specific frequency band. The AR model is given by:

x_t = Σ_{i=1}^{16} w_i · x_{t−i} + ε, (1)

where x_t is the estimated signal at time t, w_i is the i-th weight coefficient, and ε is the estimation error. Every 50 ms, the 16th-order AR model was applied to the last 500 ms of EEG data to obtain the online amplitude of the mu or beta rhythmic activity. The coefficients of the AR model were calculated by the least-squares criterion. The vertical movement of the cursor (D_V) was obtained by:

D_V = w_RV · R_V + w_LV · L_V + b_V, (2)

where R_V and L_V are the right-side and left-side amplitudes, respectively, w_RV and w_LV are the weights, and b_V is the offset. The initial values of w_RV, w_LV, and b_V were +1.0, +1.0, and 0.0, respectively.
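The amplitude-estimation chain above (CAR filter, least-squares fit of a 16th-order AR model on 500 ms windows, band amplitude) can be sketched as follows. The lag-matrix construction and the evaluation of the AR transfer function over the band are standard choices assumed for this sketch; the paper does not give its exact routines.

```python
import numpy as np

fs = 256  # Hz, as in the recording setup

def car_filter(eeg):
    """Common average reference: subtract the mean across channels.
    eeg: (n_channels, n_samples) array."""
    return eeg - eeg.mean(axis=0, keepdims=True)

def fit_ar(x, order=16):
    """Least-squares AR fit of x[t] ~ sum_i w[i] * x[t-i] for one channel."""
    y = x[order:]
    X = np.column_stack([x[order - 1 - i : len(x) - 1 - i] for i in range(order)])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def band_amplitude(w, f_lo=8.0, f_hi=12.0, n=512):
    """Mean amplitude of the AR transfer function 1/|A(f)| over a band."""
    freqs = np.linspace(0, fs / 2, n)
    z = np.exp(-2j * np.pi * freqs / fs)
    A = 1 - sum(w[i] * z ** (i + 1) for i in range(len(w)))
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return np.mean(1.0 / np.abs(A[mask]))
```

With a strong 10 Hz rhythm in the window, the AR spectrum peaks inside the mu band, so the mu-band amplitude dominates the beta-band amplitude, mirroring how imagery-induced desynchronization is read out.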
The horizontal movement of the cursor (D_H) was given by:

D_H = w_RH · R_H + w_LH · L_H + b_H, (3)

where R_H and L_H are the right-side and left-side amplitudes, respectively, w_RH and w_LH are the weights, and b_H is the offset. The initial values of w_RH, w_LH, and b_H were +1.0, −1.0, and 0.0, respectively. According to Equations (2) and (3), when subjects imagine the movement of both hands, both the right-side amplitude (R_V) and the left-side amplitude (L_V) decrease; D_V therefore becomes negative and the cursor moves upwards. Conversely, when subjects imagine the relaxation of both hands, R_V and L_V increase, so D_V becomes positive and the cursor moves downwards. Similarly, when subjects imagine right-hand or left-hand movement, the right-side amplitude (R_H) decreases or increases while the left-side amplitude (L_H) increases or decreases; consequently, D_H becomes negative or positive and the cursor moves right or left. After the first trial, the least-mean-square (LMS) algorithm adaptively adjusts the weights and offsets according to past trials. This adaptation optimizes the online translation of EEG control into cursor control for the next trial [26].
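A minimal sketch of the linear mapping of Equations (2) and (3) together with an LMS-style weight update. The learning rate and the use of a desired output value are assumptions for the sketch; the paper only names the LMS algorithm.

```python
def cursor_step(R, L, w_R, w_L, b):
    """One control output, e.g. D_V = w_RV*R_V + w_LV*L_V + b_V."""
    return w_R * R + w_L * L + b

def lms_update(R, L, w_R, w_L, b, desired, lr=0.01):
    """Nudge the weights and offset so the output moves toward `desired`."""
    err = desired - cursor_step(R, L, w_R, w_L, b)
    return w_R + lr * err * R, w_L + lr * err * L, b + lr * err
```

Running `lms_update` after each trial shrinks the output error, which is the adaptation behavior the paragraph describes.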
As a result, the sensorimotor rhythm amplitudes of the specific frequency bands over the left and right sensorimotor cortex were linearly mapped to control the virtual cursor in one or two dimensions. Simultaneously, the control signals were sent to the robot control software through the TCP/IP protocol at 5 Hz to continuously control the remote robotic arm in two dimensions.

Figure 5 shows the grasp force detection system. Firstly, the differential signals from the FSS1500 micro force sensor supplied by Honeywell were amplified and filtered. Then, a microcontroller with an internal analog-to-digital converter transformed the analog force signals into digital signals. Finally, the digital signals were calibrated and sent to the slave PC via a wireless interface. The sample rate of this force detection system is 100 Hz and the measurement range is 0-2000 g. Figure 6 shows the calibration data of two micro force sensors. It can be seen that these sensors have good linearity and meet the requirements of measuring the force between the robotic gripper and the target.
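Since the Figure 6 calibration data are nearly linear, the counts-to-grams conversion reduces to a least-squares line fit. The ADC counts and reference weights below are made-up stand-ins, not the paper's data; only the fit-a-line approach reflects the text.

```python
import numpy as np

counts = np.array([100, 300, 500, 700, 900])   # hypothetical raw ADC readings
grams = np.array([0, 500, 1000, 1500, 2000])   # reference weights (0-2000 g range)

# Least-squares line mapping raw counts to calibrated force.
slope, intercept = np.polyfit(counts, grams, 1)

def counts_to_grams(c):
    """Convert a raw sensor reading into a calibrated force value."""
    return slope * c + intercept
```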

Grasp Force Detection and Biofeedback System
There are many ways to realize tactile feedback, such as vibration stimulation [27], electrical stimulation [28], and thermal stimulation. Vibration stimulation was chosen as the biofeedback of the motor imagery EEG-based teleoperation robot system because of its fast response, convenient wearing, small size, low power, etc. Moreover, vibrotactile stimulation has been widely used in other research fields, such as rehabilitation training and prosthesis control [29]. The vibrotactile feedback system consists of vibrating motors, motor driving modules, a microcontroller, and a wireless module used for communication with the master PC. The vibrating motors were attached to a cuff, which was wrapped around the user's upper arm during the online experiment. Figure 7 illustrates the locations of the vibrating motors; these locations were selected on the basis of our previous prosthetic tactile feedback and teleoperation robot studies [29,30]. The vibrating stimulation waveform was a series of discrete pulses with a duty cycle of 50%, and the frequency of each pulse was 250 Hz. Once the robotic arm grasped the target, the grasp information was measured by the force detection system and sent to the master PC via the wireless network. At the same time, vibration stimulation-based feedback was provided to the subject. Finally, the robotic arm released the target and returned to the center, and the vibration stimulation was stopped.
The vibrotactile feedback system can be expected to improve the telepresence and task performance of the operator.
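The drive signal described above (pulses at 250 Hz with a 50% duty cycle) can be generated as follows; the 8 kHz output sample rate and the on/off encoding are assumptions for the sketch.

```python
def pulse_train(duration_s, freq_hz=250.0, duty=0.5, fs=8000):
    """Return a list of 0/1 drive samples for the vibrating motor:
    square pulses at freq_hz with the given duty cycle."""
    n = int(duration_s * fs)
    period = fs / freq_hz  # samples per pulse period (32 at 8 kHz)
    return [1 if (i % period) < duty * period else 0 for i in range(n)]
```

At 8 kHz, each 250 Hz period spans 32 samples, of which the first 16 are "on", giving the 50% duty cycle.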

Task Design
In order to improve task performance for the motor imagery EEG-based teleoperation, a series of experiments with progressively increasing task difficulty was designed. The success rate, defined as the ratio of correct target hits to all targets, was adopted to evaluate the participants' performance.
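The success-rate metric just defined, as a small helper; representing trial outcomes as booleans (hit/miss) is an encoding assumed for the sketch.

```python
def success_rate(outcomes):
    """Ratio of correct target hits to all targets, in percent."""
    return 100.0 * sum(outcomes) / len(outcomes)
```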
As shown in Figure 8, in the first stage, experiments were performed with one-dimensional left vs. right virtual cursor movement control, by imagining movement of the left hand or right hand, for several sessions until the task success rate reached 80%. In the second stage, subjects were asked to control the virtual cursor in the other dimension (up vs. down) by imagining movement of both hands or relaxing, until the task performance exceeded 80%. In the third stage, two-dimensional virtual cursor control tasks, namely Task 1 and Task 2, were used to further enhance the subjects' ability to modulate their mu and beta rhythms. After the above three stages, participants used the EEG of imagining left-hand movement, right-hand movement, both-hands movement, and both-hands relaxation to control the remote robotic arm continuously, and tactile feedback was offered.


Task Design
In order to improve task performance in motor imagery EEG-based teleoperation, a series of experiments with progressively increasing difficulty was designed. The success rate, defined as the ratio of correct target hits to all presented targets, was adopted to evaluate the performance of the participants.
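As a concrete illustration, the success-rate metric reduces to a simple ratio. The helper below is a minimal sketch; the function name and interface are ours, not taken from the authors' software:

```python
def success_rate(hits, total_targets):
    """Success rate = correct target hits / all presented targets."""
    if total_targets <= 0:
        raise ValueError("total_targets must be positive")
    return hits / total_targets

# e.g., 39 correct hits out of 50 presented targets -> 0.78 (78%)
rate = success_rate(39, 50)
```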
As shown in Figure 8, in the first stage, experiments were performed with one-dimensional left vs. right virtual cursor control, in which subjects imagined the movement of the left or right hand for several sessions until the task success rate reached 80%. In the second stage, subjects were asked to control the virtual cursor in the other dimension (up vs. down) by imagining the movement of both hands or relaxing until the task performance exceeded 80%. In the third stage, two-dimensional virtual cursor control tasks, namely Task 1 and Task 2, were used to further enhance the subjects' ability to modulate their mu and beta rhythms. After these three stages, participants used the EEG of imagining left hand movement, right hand movement, both hands movement, and relaxation of both hands to control the remote robotic arm continuously, and tactile feedback was provided.
Figure 9a,b show the training task success rates of cursor control in the left vs. right and up vs. down dimensions, respectively. The average one-dimensional training duration for all subjects was 4.6 h, completed over several days. The subjects' ability to control the one-dimensional virtual cursor improved significantly after training, and the task performance of all subjects exceeded 80%. Moreover, subject 4 achieved a task success rate of 93%, a large increase compared with the start of the training.
Figure 10a,b demonstrate the success rates of the two different cursor control tasks in two-dimensional space. The average two-dimensional training duration for all subjects was 3.8 h within several days. Similarly, the task performance after training was clearly better than before. However, the overall success rate of two-dimensional control was lower than that of one-dimensional control due to the increased difficulty. Additionally, the performance on Task 2 dropped considerably, and the success rate of all subjects was below 80%. Subject 4 still achieved the best task performance, with a success rate of 77%.

The ERD/ERS Phenomenon
After the training, subjects utilized motor imagery EEG to continuously control the remote robotic arm. Figures 11 and 12 show topographies of the power in the 8-13 Hz frequency band for two subjects controlling the robotic arm to perform the reach and grasp task. In these figures, blue represents a power decrease and red a power increase.
From Figures 11 and 12, we can observe the event-related desynchronization (ERD) and event-related synchronization (ERS) phenomena of subject 1 and subject 2 in the online experiments. When the subjects imagined unilateral hand movement (Figures 11a,b and 12a,b), the band power over the contralateral hemisphere decreased compared with the imagery of relaxation (Figures 11d and 12d), while the band power over the ipsilateral hemisphere increased. Moreover, a bilateral decrease in band power was apparent when the subjects performed movement imagination of both hands (Figures 11c and 12c). Based on these distinct patterns, two-dimensional continuous control information was extracted from the mu or beta rhythm frequency bands using Equations (2) and (3), and the remote teleoperation robot was controlled continuously.
In addition, the topographies in different frequency bands show that the most obvious difference between the four imagery tasks exists near 12 Hz for these two subjects. Determining the optimal frequency for every subject is critical for the motor imagery EEG-based teleoperation control system. The offline analysis tool of the BCI2000 platform can be used to identify the specific electrodes and frequencies that are most discriminative during the execution of movement imagination tasks.
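To make the ERD/ERS quantities concrete, the sketch below estimates band power in the mu band (8-13 Hz) from a single-channel EEG epoch and computes the standard ERD/ERS percentage relative to a rest baseline. This is a generic textbook formulation, not the paper's Equations (2) and (3), and all names are our own:

```python
import numpy as np

def band_power(epoch, fs, band=(8.0, 13.0)):
    """Mean power of an EEG epoch within a frequency band (periodogram estimate)."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2 / len(epoch)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def erd_percent(task_power, rest_power):
    """ERD/ERS index: negative values indicate desynchronization (power drop)."""
    return 100.0 * (task_power - rest_power) / rest_power

# Synthetic example: a 10 Hz mu rhythm whose amplitude halves during imagery.
fs = 250  # Hz, a typical EEG sampling rate
t = np.arange(fs) / fs
rest = np.sin(2 * np.pi * 10 * t)        # strong mu rhythm at rest
task = 0.5 * np.sin(2 * np.pi * 10 * t)  # attenuated during motor imagery
erd = erd_percent(band_power(task, fs), band_power(rest, fs))  # -75.0 (strong ERD)
```

Halving the amplitude quarters the power, hence the -75% ERD index in the example.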

Trajectory of Robotic Arm
Figure 13 demonstrates the target distribution, the coordinate system, and an example trajectory of the robotic arm within a two-dimensional plane. Four targets are located in a restricted square area, and at the beginning of the experiment the robotic arm is at the center of this area, i.e., at the origin. Targets B and D lie on the X axis, with B in the positive and D in the negative direction; similarly, targets A and C lie on the Y axis, with A in the positive and C in the negative direction.
Figures 14 and 15 demonstrate the normalized trajectories of the robotic arm within the two-dimensional plane above the target objects. The remote continuous movement of the robotic arm was driven directly by motor imagery EEG signals acquired from local subjects 1 and 2.
The trajectories of the robotic arm moving toward the region above target A are shown in Figures 14a and 15a. The subjects performed movement imagination of both hands to modulate their sensorimotor rhythm amplitudes, which were used to generate the continuous trajectories based on Equations (2) and (3). When the subjects imagined the movement of the left hand more intensely than the right hand, the robotic arm deviated to the right of the Y axis; conversely, the robotic arm moved to the left of the Y axis if the movement imagination of the right hand was more intense than that of the left hand.
Next, Figures 14b and 15b demonstrate the normalized trajectories when the subjects imagined the movement of their right hand to guide the robotic arm to the hover region above target B. If the robotic arm deviated below the X axis, the subjects carried out movement imagination of both hands to move the robotic arm forward. Conversely, when the robotic arm deviated above the X axis, the subjects imagined relaxation of both hands to move the robotic arm backward.
Furthermore, when the subjects imagined relaxation of both hands, the robotic arm was controlled to move toward target C. In this case, the trajectories of the robotic arm are illustrated in Figures 14c and 15c. If the robotic arm moved to the right of the Y axis, the subjects imagined movements of the left hand to drive the robotic arm to the left. Conversely, the subjects imagined movements of the right hand to move the robotic arm to the right if it deviated to the left of the Y axis.
Moreover, the trajectories of the robotic arm moving toward target D are shown in Figures 14d and 15d. Movement imagination of the left hand was executed to control the robotic arm. If the robotic arm deviated below the X axis, the subjects imagined the movement of both hands to move it forward; conversely, if it deviated above the X axis, they imagined relaxation of both hands to move it backward.
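The four imagery classes described above can be summarized as a mapping from two decoded sensorimotor-rhythm amplitudes to a planar velocity. The sketch below is a simplified stand-in for the paper's Equations (2) and (3), which are not reproduced here; the variable names, gains, and sign conventions are illustrative assumptions (right-hand imagery toward target B, left-hand toward D, both hands toward A, relaxation toward C):

```python
def decode_velocity(r_left, r_right, gain=1.0, rest_level=1.0):
    """
    Toy decoder: map imagery strengths of the left and right hand to (vx, vy).
    Assumed conventions: right hand -> +X (target B), left hand -> -X (target D);
    both hands -> +Y (target A), relaxation of both hands -> -Y (target C).
    """
    vx = gain * (r_right - r_left)               # lateral component
    vy = gain * (r_left + r_right - rest_level)  # forward/backward component
    return vx, vy

# Strong right-hand imagery: the arm drifts toward target B (+X), no Y motion.
vx, vy = decode_velocity(r_left=0.1, r_right=0.9)
```

The neutral point `rest_level` plays the role of the subject-specific baseline that the training sessions calibrate.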
Then, once the robotic arm reached the hover region above the target, it descended automatically to grasp the object. As illustrated in Figure 5, the tactile information between the robotic gripper and the target was acquired by the grasp force detection system, which adopted FSS1500 micro force sensors from Honeywell. Finally, biofeedback based on vibrotactile stimuli was provided to the participants to improve telepresence and task performance.
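The grasp-force-to-vibration pathway can be sketched as a simple mapping from a force reading to a vibromotor intensity. The thresholds and the linear mapping below are hypothetical, chosen only to suit the range of a small force sensor such as the FSS1500; the paper does not specify its actual transfer function:

```python
def force_to_vibration(force_n, f_min=0.1, f_max=15.0, duty_max=100):
    """
    Map a grasp force reading (newtons) to a vibromotor duty cycle (0-100%).
    Linear mapping, clipped to the sensor's usable range; all thresholds are
    illustrative assumptions, not values from the paper.
    """
    if force_n <= f_min:
        return 0  # no contact -> no vibration
    scale = (force_n - f_min) / (f_max - f_min)
    return round(min(1.0, scale) * duty_max)

# Free gripper gives no stimulus; a saturated grasp gives full intensity.
idle = force_to_vibration(0.0)
full = force_to_vibration(15.0)
```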
Additionally, the maximum trajectory errors for targets A, B, C, and D were 6.20%, 6.53%, 8.27%, and 7.46%, respectively, for subject 1. Similarly, for subject 2, the maximum trajectory errors were 8.53%, 8.20%, 8.52%, and 8.26%.
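The paper does not spell out how the maximum trajectory error is computed; one plausible definition, consistent with the normalized trajectories in Figures 14 and 15, is the largest perpendicular deviation from the straight start-to-target line, expressed as a percentage of that line's length:

```python
import numpy as np

def max_trajectory_error(points, start, target):
    """
    Maximum perpendicular deviation of a 2D trajectory from the straight
    start-to-target line, as a percentage of the line's length.
    (One plausible reading of the paper's 'maximum trajectory error'.)
    """
    p = np.asarray(points, dtype=float)
    a = np.asarray(start, dtype=float)
    b = np.asarray(target, dtype=float)
    d = b - a
    length = np.linalg.norm(d)
    rel = p - a
    # 2D cross-product magnitude / |d| = perpendicular distance to the line
    dist = np.abs(d[0] * rel[:, 1] - d[1] * rel[:, 0]) / length
    return 100.0 * dist.max() / length

# A trajectory that bulges 0.05 units sideways on a unit-length path -> 5% error.
err = max_trajectory_error([(0, 0), (0.05, 0.5), (0, 1)], start=(0, 0), target=(0, 1))
```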
Online Control Task
Figure 16 presents the success rate of the reach and grasp task, in which the subjects used motor imagery EEG signals to control the remote robotic arm. Similar to the cursor control training, the control performance for Task 1 was better than for Task 2. After a series of cursor control training sessions, the success rate of the reach and grasp task increased greatly. The average two-dimensional continuous control success rates for online Task 1 and Task 2 of the six subjects were 78.0% ± 6.1% and 66.2% ± 6.0%, respectively. This indicates that the subjects' ability to modulate their sensorimotor rhythm amplitude in the mu or beta frequency band over the left and right sensorimotor cortex improved greatly, and thus their two-dimensional continuous control of the teleoperation robot increased dramatically.
This teleoperation system can effectively and continuously control the robotic arm to complete the four-target reach and grasp task using motor imagery EEG signals. In addition, the operator obtains telepresence through the vibration-stimulus-based feedback.
Figure 16. Task success rate of the motor imagery EEG-based teleoperation robot system for online Task 1 (a) and online Task 2 (b).

Discussion
In the past, a variety of studies focused on the interaction of living organisms with robotic cues [31][32][33][34] and on controlling robots through BCI, but there has been little research on the control of teleoperation robots based on motor imagery EEG. Meanwhile, most previous research groups have focused on conventional two-class or four-class classification of EEG signals to obtain a small set of discrete control commands, which can only trigger the robot arm to move along a predefined trajectory. However, continuous multidimensional control is very important, especially in teleoperation: it can not only improve the teleoperation robot system's efficiency but also give the subject a more natural human-machine interaction.
In this paper, a continuous teleoperation robot control system was used to demonstrate the feasibility of performing the reach and grasp task with motor imagery EEG acquired by wireless portable equipment; it is a fully two-dimensional continuous control system. The real-time continuous control signals sent to the robotic arm include both horizontal and vertical position components while the subject performs motor imagery, so the robotic arm can be controlled to move toward the target anywhere in the two-dimensional plane.
Furthermore, visual feedback has been utilized in previous EEG-based teleoperation systems. In [18], a BCI-driven teleoperation control of an exoskeleton robotic system was presented. Zhao et al. [19] developed an EEG-based teleoperation control framework for multiple coordinated mobile robots. Both of these studies used visual feedback.
Nevertheless, tactile feedback also plays an important role in traditional teleoperation systems and will be a critical part of any EEG-based teleoperation system. We adopt both visual and vibrotactile feedback in the proposed motor imagery based teleoperation system. Thus, it can not only fully engage the operator's initiative and attention but also increase the operator's telepresence.
Moreover, a wireless portable EEG acquisition device was adopted in this study to overcome the disadvantages of traditional EEG equipment, which is inconvenient to carry and complex to operate. The proposed motor imagery EEG-based teleoperation system thus provides a novel and convenient way to interact with external devices without using a keyboard, joystick, hand controller, or any other traditional input equipment. Additionally, it allows people to interact with various remote scenarios.

Conclusions and Future Work
In this paper, a BCI-based teleoperation robot control system was developed with a wireless portable EEG acquisition device. The movement imagination EEG signals were translated into continuous two-dimensional control signals and transmitted to the remote robotic arm over TCP/IP, and the robotic arm moved according to the control signals in real time. Following the continuously extracted trajectory information, the remote robotic arm completed the reach and grasp task. Furthermore, the grasp force was detected and sent to the master PC. Finally, biofeedback based on vibrotactile stimuli was given to the subjects in order to improve telepresence and task performance.
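The paper states only that the control signals travel over TCP/IP; it does not describe the message format. A minimal illustrative wire format, two little-endian floats per control tick, could look like the following (all names and the format itself are assumptions):

```python
import socket
import struct

# Hypothetical wire format: two little-endian 32-bit floats (vx, vy) per tick.
CMD_FORMAT = "<2f"

def pack_command(vx, vy):
    """Serialize one continuous control command for the remote robotic arm."""
    return struct.pack(CMD_FORMAT, vx, vy)

def unpack_command(payload):
    """Deserialize a command on the robot side."""
    return struct.unpack(CMD_FORMAT, payload)

def send_command(sock, vx, vy):
    """Send one command over an already-connected TCP socket."""
    sock.sendall(pack_command(vx, vy))

# Round-trip check without a network: pack, then unpack.
rx, ry = unpack_command(pack_command(0.25, -0.5))
```

A fixed-size binary message keeps the per-tick latency low, which matters for continuous (rather than triggered) control.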
An important next step is to extend the current two-dimensional control to three-dimensional (3D) control. Due to the characteristics of noninvasive EEG, it is very difficult to extract 3D trajectory information directly from EEG signals. In order to control the robotic arm to move toward a target in 3D space, we are currently conducting research on a hybrid BCI system. Moreover, the teleoperation system can be extended from four targets to more targets in the reach and grasp task.
Furthermore, potential applications of this system include continuous control of teleoperation robots, multi-degree-of-freedom prostheses, and rehabilitation robots using only motor imagery, for example for paraplegic patients.

Conflicts of Interest:
The authors declare no conflict of interest.