Studies to Overcome Brain–Computer Interface Challenges

Abstract: A brain–computer interface (BCI) is a promising technology that analyzes brain signals and controls a robot or computer according to a user's intention. This paper introduces our studies to overcome the challenges of using BCIs in daily life. There are several methods to implement BCIs, such as sensorimotor rhythms (SMR), P300, and steady-state visually evoked potential (SSVEP). Each method has different pros and cons, but all of them are limited to a fixed set of choices. Controlling a robot arm according to intention would enable BCI users to do a much wider variety of things. We introduce our study on predicting three-dimensional arm movement with a non-invasive method, and describe how the prediction can be compensated using an external camera for higher accuracy. For daily use, BCI users should be able to turn the BCI system on or off because of prediction errors, and to switch to the most efficient BCI mode; the mode can be changed based on the user state. We explain our study on estimating the user state from the brain's functional connectivity with a convolutional neural network (CNN). Additionally, BCI users should be able to perform multiple tasks simultaneously, such as carrying an object, walking, or talking. We describe a multi-function BCI study that predicts multiple intentions simultaneously with a single classification model. Finally, we suggest our view of future directions for BCI research. Although many limitations remain when using BCIs in daily life, we hope that our studies will be a foundation for developing a practical BCI system.


Introduction
The convergence of brain science and artificial intelligence has received significant attention. A typical example is the brain–computer interface (BCI) [1]. BCI is a technology that measures and analyzes brain signals to predict a user's intention and control a robot or computer accordingly [2]. For a person to perform a movement, the brain generates a motor command and transmits it through the spinal cord to the peripheral nerves to move the body [3]. If there is a problem in this pathway, such as spinal cord injury (SCI) or locked-in syndrome (LIS), the body cannot move at all even when the brain sends a motor command [4]. Using BCI technology, even paralyzed patients can express their intentions by typing letters; they can also drink water by controlling a robot and drive to a desired place in an electric wheelchair [2]. BCI technology is beneficial to healthy people too: one can control various electronic devices, such as changing the TV channel, adjusting the air conditioner's temperature, or adjusting the music volume, just by thinking, without moving the body. BCIs can also be used for games, military purposes, and assisting the elderly. Therefore, the social and economic ripple effects of BCI technology are substantial.
There are several methods used to implement BCIs, such as sensorimotor rhythms (SMR), P300, and steady-state visually evoked potential (SSVEP) [2]. SMR-based BCI exploits the anatomy of the primary motor cortex, where different regions are responsible for different body parts. The power of the motor cortex's alpha band (8–13 Hz) and beta band (13–30 Hz) increases or decreases according to movement intention; for example, when users want to move their hands or feet, alpha and beta power decreases over the corresponding motor area [2,5–8]. The BCI system can therefore predict a left-hand, right-hand, or feet movement intention from power changes in these brain areas. SMR-BCIs are usually used to control a mouse cursor or an electric wheelchair: users steer the cursor or wheelchair forward, left, or right with feet, left-hand, or right-hand movement intention. SMR-BCIs are intuitive, but they require a long training time for movement imagination [7]. P300 is a positive peak of the brain signal over the parietal region about 300 ms after a stimulus [5]. The P300 is largest for the stimulus the user wants to select among several stimuli, so a P300-based BCI can identify the target by choosing the stimulus that evokes the biggest P300 response. P300-BCIs are generally used to type letters by looking at characters [9–11]. They can select one among many options, although a monitor and visual stimuli are needed. SSVEP-based BCI utilizes the fact that electroencephalography (EEG) power increases at the same frequency as the visual stimulus the user is looking at, among several stimuli flickering at different frequencies [2,12]. A user of an SSVEP-BCI can rapidly select the target among several visual stimuli, and SSVEP generally shows the highest accuracy among the three BCI methods. However, SSVEP-BCIs also require a monitor and visual stimuli, and the flickering stimuli can tire the user's eyes [13].
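As a concrete illustration of the SSVEP principle, the sketch below detects the attended stimulus by comparing EEG spectral power at the candidate flicker frequencies. This is a toy single-channel setup with simulated data, assumed for illustration only; the sampling rate, frequencies, and detection rule are not taken from any cited study.

```python
import numpy as np

fs, dur = 250, 2.0                      # assumed sampling rate (Hz) and epoch length (s)
t = np.arange(0, dur, 1 / fs)
stim_freqs = [8.0, 10.0, 12.0]          # hypothetical flicker frequencies

# Toy single-channel EEG: a 10 Hz SSVEP response buried in noise.
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10.0 * t) + rng.standard_normal(t.size)

# Pick the stimulus frequency carrying the most spectral power.
spectrum = np.abs(np.fft.rfft(eeg))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in stim_freqs]
target = stim_freqs[int(np.argmax(powers))]
```

In practice, SSVEP spellers usually replace this single-bin comparison with methods such as canonical correlation analysis, which pools information across channels and harmonics.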
As described above, each BCI type has its own pros and cons [14,15]. However, all these methods are limited to a fixed set of choices: the user can only perform predetermined tasks, such as selecting a wheelchair direction or a character, and cannot perform new tasks such as drinking water or brushing teeth. This paper introduces our studies to overcome these limitations of previous BCI methods for practical use.

Arm Movement Prediction
If BCI users could control a robot arm like their own, they could do various things in daily life. Therefore, many research groups have attempted to predict arm movement and control a robot arm. To predict arm movement, it is necessary to identify how brain activity changes with it. In 1982, Georgopoulos discovered that the firing rate of neurons in the primary motor cortex differed according to the direction of arm movement [16], meaning that neurons have a directional preference. Georgopoulos also found that arm movements could be predicted from the firing patterns of neurons [17]. These results became the basis of research on controlling a robotic arm by predicting arm movements from brain signals. In 2008, a research team at the University of Pittsburgh showed that a monkey could feed itself marshmallows by controlling a robotic arm in real time [18]. Research teams from Brown University and the University of Pittsburgh enabled quadriplegic patients to drink beverages by controlling a robotic arm in real time in 2012 and 2013, respectively [19,20]. However, these BCI methods for predicting arm movement measure brain signals by inserting needle-shaped electrodes into the brain. This invasive approach requires surgery and damages brain cells, and it makes long-term signal measurement difficult [21].
To solve these problems, in 2013 we developed a technology to predict three-dimensional arm movement without surgery, using magnetoencephalography (MEG) signals [4]. Movements could be estimated with statistically significant and considerably high accuracy (p < 0.001, mean r > 0.7) for all nine subjects. We analyzed MEG signals based on time–frequency analysis and extracted features for movement prediction using channel selection, band-pass filtering (0.5–8 Hz), and downsampling (50 Hz). Current movements were predicted from 200 ms intervals (11 time points) of the downsampled MEG signals; the x, y, and z velocities were then estimated using multiple linear regression. MEG signals from −60 to −140 ms were critical, and 200–300 ms intervals were sufficient to predict current movements. To our knowledge, only one earlier paper had predicted three-dimensional arm movements from non-invasive neural signals, but its accuracy was quite low (mean r = 0.19–0.38) and the prediction results were unreliable because of its experimental paradigm [4,22]. In our recent study, the prediction accuracy was greatly improved by using an LSTM instead of multiple linear regression [23].
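The regression stage of this pipeline can be sketched as follows. Random placeholder data and an assumed channel count (64) stand in for real filtered, downsampled MEG recordings; each current velocity sample is regressed on the preceding 200 ms window (11 time points at 50 Hz) by multiple linear regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 64 MEG channels, already band-pass filtered
# (0.5-8 Hz) and downsampled to 50 Hz as described above.
n_channels, n_samples = 64, 1000
meg = rng.standard_normal((n_channels, n_samples))

# Placeholder x/y/z hand velocities (random here, for illustration only).
velocity = rng.standard_normal((n_samples, 3))

# Each velocity sample is predicted from a 200 ms window
# (11 time points at 50 Hz) of the preceding MEG signals.
win = 11
X = np.array([meg[:, t - win:t].ravel() for t in range(win, n_samples)])
X = np.hstack([X, np.ones((X.shape[0], 1))])   # intercept term
Y = velocity[win:]

# Multiple linear regression via least squares: W maps MEG features
# to the three velocity components.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
Y_hat = X @ W
```

With real data, prediction quality would be evaluated by the correlation r between `Y_hat` and the measured velocities, as reported in the study.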

Correction Using Image Processing
The above study allowed us to estimate movement with a non-invasive method. However, the accuracy required to control a robot arm to grasp a target is generally hard to reach. For example, success rates of an invasive BCI were 20.8–62.2% for reaching and grasping movements, even though the task was simple [19]: a single softball mounted on a flexible stick was presented as the target. Although the robotic arm approximately reached the target, grasping often failed because the arm did not reach the object precisely. A small inaccuracy in the movement prediction caused task failure.
We proposed a novel prediction method, the feedback-prediction algorithm (FPA), to increase accuracy using image information, as shown in Figure 1 [24]. Object positions can be easily calculated using image-processing technology, and the FPA compensates the prediction based on them. The method predicts the target among several objects from the predicted movement direction and then corrects the prediction toward the target position, applying a Kalman filter for the compensation. The compensation vector is multiplied by an automatically calculated weight so that the target is reached easily. Even if the user changes the movement direction, the predicted trajectory is corrected to the new direction because the FPA predicts and corrects at every step. The FPA significantly improved movement prediction accuracy for all nine subjects, reducing the error by 32.1%. Although some earlier studies modified predictions using the target position, they predicted and compensated movements on a 2D screen [25,26] and are therefore unsuitable for controlling a neural prosthesis in real life.
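The per-step logic can be illustrated roughly as below. This is a deliberately simplified blend of the predicted vector with a pull toward the best-aligned object: it omits the Kalman filter and the automatic weight calculation of the published FPA, and all positions and the fixed weight are made up for the example.

```python
import numpy as np

# Hypothetical object positions from image processing (meters).
objects = np.array([[0.3, 0.1, 0.0],
                    [0.0, 0.4, 0.2]])
hand = np.array([0.0, 0.0, 0.0])               # current hand position
v_pred = np.array([0.10, 0.02, 0.00])          # decoded velocity for this step

def fpa_step(hand, v_pred, objects, weight=0.5):
    """One simplified feedback-prediction step (illustrative only).

    The target is the object whose direction best matches the predicted
    movement direction; the prediction is then pulled toward it.
    """
    dirs = objects - hand
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    v_dir = v_pred / np.linalg.norm(v_pred)
    target = objects[np.argmax(dirs @ v_dir)]       # best-aligned object
    correction = target - hand
    # Scale the pull to the magnitude of the predicted step.
    correction = correction * np.linalg.norm(v_pred) / np.linalg.norm(correction)
    return (1 - weight) * v_pred + weight * correction

v_corr = fpa_step(hand, v_pred, objects)
```

Because this step is repeated at every time point, a change in the user's intended direction re-selects the target and redirects the correction, mirroring the behavior described above.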

Prediction of User State
Although our method predicts arm movement with high accuracy, problems remain. A research team at the University of Pittsburgh revealed that severe movement prediction errors can occur during the resting state [27]. This implies that a brain-controlled robot can behave dangerously while the BCI user is idle or asleep. Therefore, the user should be able to turn the BCI system on or off as needed. Moreover, different kinds of BCI have different pros and cons: a BCI designed for robot-arm control is ill-suited to typing characters or steering a wheelchair. Thus, a BCI should be able to predict the user state and apply the suitable BCI mode to the system.
We developed a technology that predicts the user state in order to change the BCI type, as shown in Figure 2 [28]. The user state is estimated from the brain's functional connectivity with a convolutional neural network (CNN). Common average reference (CAR) and band-pass filtering are applied to the EEG signals; the system then calculates mutual information (MI) among the EEG channels as functional connectivity and uses the MI matrix as the CNN input. The CNN classifies the user state into four categories (resting, speech imagery, leg-motor imagery, and hand-motor imagery). Five-fold cross-validation was applied to evaluate feasibility, and the mean accuracy over 10 subjects was 88.25 ± 2.34%. This implies that predicting the user state and changing the BCI mode are possible using functional connectivity and a CNN.
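The connectivity feature can be sketched as follows: a histogram-based MI estimate between every channel pair yields the symmetric matrix that serves as the CNN input. The data here are random placeholders, the channel count is assumed, and the exact MI estimator and CNN architecture of the study are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_samp = 16, 2000
eeg = rng.standard_normal((n_ch, n_samp))   # toy EEG after CAR + band-pass

def mutual_info(x, y, bins=16):
    """Histogram-based mutual information between two signals (illustrative)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Channel-by-channel MI matrix: the functional-connectivity "image"
# that would be fed to the CNN classifier.
mi = np.zeros((n_ch, n_ch))
for i in range(n_ch):
    for j in range(i, n_ch):
        mi[i, j] = mi[j, i] = mutual_info(eeg[i], eeg[j])
```

Treating the MI matrix as a 2D image is what makes a CNN a natural choice for the state classifier: spatially adjacent entries correspond to related channel pairs.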


Multi-Functional BCI
Now the BCI user can control various electronic devices, such as a computer, a wheelchair, or a robot, by changing the BCI type according to the user state. Although such a system is helpful, one problem remains: the user can control only one device at a time, whereas in real life people often perform several tasks simultaneously, such as carrying an object, walking, and talking. Therefore, the BCI system should be able to predict various intentions simultaneously. For this purpose, we developed a multi-functional BCI system that predicts multiple intentions simultaneously using a single prediction model, as shown in Figure 3 [29]. The system applies CAR and band-pass filtering, computes and normalizes the power spectrum, and finally uses artificial neural networks to predict multiple intentions from the normalized power spectrum. The prediction accuracy of the proposed BCI was 32.96%; although not very high, it was significantly above the chance level (1.56%). Our ongoing study aims to increase the prediction accuracy for multiple intentions using deep learning, and we plan to develop a multi-functional BCI that works in real time.
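The idea of one model predicting several intentions at once can be sketched with a minimal multi-label linear network on synthetic features: one sigmoid output per intention, trained jointly. The feature dimensions, number of intentions, and training details are assumptions for the example and differ from the study's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 6 binary intentions (e.g., grasp / walk / speak ...) predicted
# jointly from a normalized power-spectrum feature vector, so a single
# model replaces six separate classifiers.
n_trials, n_feat, n_intents = 200, 32, 6
X = rng.standard_normal((n_trials, n_feat))
true_W = rng.standard_normal((n_feat, n_intents))
Y = (X @ true_W > 0).astype(float)             # synthetic multi-labels

W = np.zeros((n_feat, n_intents))
for _ in range(500):                            # plain gradient descent
    P = 1.0 / (1.0 + np.exp(-(X @ W)))          # sigmoid per intention
    W -= 0.1 * X.T @ (P - Y) / n_trials

pred = (1.0 / (1.0 + np.exp(-(X @ W))) > 0.5)
acc = (pred == Y).mean()
```

The chance level reported above (1.56% = 1/64) corresponds to guessing all six binary intentions correctly at once, which is the stricter, joint notion of accuracy used for multi-intention prediction.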

Figure 3. Multi-functional BCI. We developed the multi-functional BCI system that can simultaneously predict multiple intentions using a single prediction model.

Discussion
Over the past decades, there have been many BCI studies. They usually focused on improving prediction accuracy [5,8,23,30–32], raising the number of commands [12,33], increasing the information transfer rate (ITR) [34–38], or reducing training effort [7,30,34,39]. To enhance prediction accuracy, new classification algorithms [30,40,41] and feature extraction methods have been proposed [31,32,42]. Recent BCI studies frequently apply deep learning for high accuracy [5,23]; one study using deep learning achieved 99.38% prediction accuracy for motor imagery tasks [30]. Furthermore, various stimulus-presentation methods have been suggested to increase the number of commands [12,15,43]; a recent study implemented a speller with 160 characters by combining different frequency signals [12]. Another critical issue in the BCI field is improving typing speed while maintaining high accuracy. Canonical correlation analysis (CCA) is often used in SSVEP spellers and shows good ITR performance [35,36,44], and a hybrid BCI combining an eye tracker with SSVEP achieved a considerably high ITR [37,38]; its accuracy and ITR were 95.2% and 360.7 bits/min, respectively [38]. To reduce the time spent measuring training data, data augmentation or transfer learning approaches are often used [39]. With augmented data, the prediction model can be trained with fewer recordings; augmentation can be achieved by cropping signals with a sliding window [45], adding noise [46], or segmenting and recombining signals [47]. Recent studies have also used generative deep learning algorithms to create artificial neural signals as training data [48,49]. An alternative approach to reducing acquisition time is transfer learning, which reuses a pre-trained model for new subjects [50,51] and therefore requires less training data.
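Sliding-window cropping, one of the augmentation methods mentioned above, can be sketched as follows; the trial shape, window length, and step size are arbitrary values chosen for the example.

```python
import numpy as np

def crop_augment(trial, win, step):
    """Sliding-window cropping (illustrative): one trial (channels x samples)
    becomes several overlapping training examples."""
    n = trial.shape[1]
    return np.stack([trial[:, s:s + win]
                     for s in range(0, n - win + 1, step)])

# Toy 2-channel trial of 100 samples -> overlapping 50-sample crops.
trial = np.arange(2 * 100).reshape(2, 100).astype(float)
crops = crop_augment(trial, win=50, step=10)
```

Each crop inherits the label of its parent trial, multiplying the effective training-set size without new recordings.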
Despite these studies, critical limitations still prevent disabled people from using BCI systems in real life. Here, we introduced our BCI studies to overcome these barriers to practical use. We presented methods for estimating arm movement with high accuracy using a non-invasive method and for compensating the prediction. We also described a study on predicting the user's state and changing the system mode, and introduced a study on predicting multiple intentions. Our future work will develop a real-time multi-functional BCI system combining automatic prediction correction, mode change, and multi-intention prediction. In our opinion, automatic control and suggestion systems will be crucial for safe and efficient use of BCIs in daily life. For instance, autonomous driving of an electric wheelchair commanded only by destination will be more convenient and safer than controlling the wheelchair continuously. Moreover, it would be convenient if the BCI system could suggest appropriate behavior based on circumstances, schedule, and time, like an artificial intelligence secretary. Such a smart BCI might ask whether the user is hungry based on the user's routine and the time, suggest and order food according to preference, and automatically feed the user. To develop smart BCIs, several state-of-the-art technologies, such as autonomous driving, context recognition, robotics, and artificial intelligence, should be combined.
Although we have proposed approaches to overcome the challenges to practical use, many limitations remain when using BCIs in daily life, such as the inconvenience of electrode attachment, system recharging, and system detachment. Nevertheless, we hope that our studies will be a foundation for the development of a practical BCI system.

Conflicts of Interest:
The authors declare no conflict of interest.