User State Classification Based on Functional Brain Connectivity Using a Convolutional Neural Network

Abstract: The brain–computer interface (BCI) is a promising technology in which a user controls a robot or computer by thought alone, without movement. There are several underlying principles for implementing BCI, such as sensorimotor rhythms, P300, steady-state visually evoked potentials, and directional tuning. Generally, different principles are applied to BCI depending on the application, because each BCI method has its own strengths and weaknesses. Therefore, a BCI should be able to predict the user state in order to apply suitable principles to the system. This study measured electroencephalography signals in four states (resting, speech imagery, leg-motor imagery, and hand-motor imagery) from 10 healthy subjects. Mutual information among 64 channels was calculated as brain connectivity. We used a convolutional neural network to predict the user state, where brain connectivity was the network input. We applied five-fold cross-validation to evaluate the proposed method. Mean accuracy for user state classification was 88.25 ± 2.34%. This implies that the system can change the BCI principle using brain connectivity, so a BCI user can control various applications according to their intentions.


Introduction
The brain-computer interface (BCI) is a promising technology that predicts the user's intention and controls a robot or computer by analyzing neural activities [1]. Disabled people can express their thoughts by typing and can move to other places by controlling an electric wheelchair using BCI technology. BCI is also useful for healthy people, as it allows them to conveniently control various electric devices. There are several underlying principles used to implement BCI, such as sensorimotor rhythms (SMR), P300, steady-state visually evoked potentials (SSVEP), and directional tuning [2]. SMR BCI is a method based on the topological characteristics of brain functions: it predicts movement intentions from power changes in the alpha and beta waves, whose power decreases in the contralateral motor area depending on the movement intention of the arm [2,3]. SMR is mainly used to control an electric wheelchair or the direction of a mouse cursor. The P300 BCI analyzes the amplitude of the neural signals about 300 ms after a visual stimulus; the magnitude of the P300 is at its maximum when the user focuses on selecting a character [2,4]. The P300 is generally used to type letters to express thoughts. SSVEP BCI uses the neural response of the visual cortex to visual stimuli: it presents visual stimuli flickering at different frequencies.
When the user looks at one stimulus to select a target, the power of the neural signals over the visual cortex peaks at the same frequency as the stimulus. SSVEP is often used to select a target among several stimuli on the screen [2,5]. A BCI based on directional tuning is used to predict the direction of arm movements. In 1982, Georgopoulos found that neurons in the primary motor cortex (M1) have a directional preference [6]: the firing rate of M1 neurons varies depending on the direction of reaching. Therefore, directions of arm movements can be predicted by analyzing the firing rates of neurons in M1. BCIs based on directional tuning are used to predict arm movements and control a robotic arm [7][8][9][10].
Generally, different principles are applied to BCI depending on the application, because strengths and weaknesses vary according to the specific BCI method. Therefore, BCI should be able to predict a user state to apply suitable principles to the BCI system. In this study, we develop a BCI to predict the user state using functional connectivity and a convolutional neural network (CNN). Electroencephalography (EEG) was measured in four states (resting, speech imagery, leg-motor imagery, and hand-motor imagery) from 10 healthy subjects. Mutual information among 64 EEG channels was calculated as brain connectivity. The CNN was used to predict a user state, where mutual information values were the CNN input. We applied five-fold cross-validation to evaluate the proposed method.

Motor Imagery
Motor imagery is a mental process in which an individual rehearses or simulates a given action. It has been used extensively in sports training and neural rehabilitation, and it is also a common research paradigm in cognitive neuroscience and psychology for investigating the content and structure of covert processes, i.e., those that remain unconscious and precede execution [11,12]. When paired with physical rehearsal in medical, musical, and exercise contexts, mental rehearsal can be as productive as purely physical rehearsal [13]. Motor imagery can be defined as a dynamic state in which the person internally simulates a given action, i.e., the subject feels as if they were performing the movement themselves [14]; this corresponds to the internal image (or first-person perspective) of sports psychology [15].
Motor imagery is a widely used technique to improve motor learning and neural rehabilitation in patients after stroke, and many musicians have demonstrated the technique's effectiveness [16]. Motor imagery is also essential in athletic training: physical exercises usually include a warm-up period, relaxation and concentration, and mental simulation of a specific exercise [17]. Some neurorehabilitation evidence also suggests that motor imagery provides additional benefits to physical or occupational therapy [18], and a recent review found sufficient evidence to support additional benefits of motor imagery over conventional physical therapies in stroke patients [19]. The authors concluded that motor imagery is an attractive therapeutic option, easy to learn and apply, and neither physically tiring nor harmful. Therefore, motor imagery can provide additional patient benefits and can substitute for actual behaviors with similar effects on cognition and behavior [13]. For example, repeatedly imagining eating a food can reduce its actual consumption [20].

Speech Imagery
The study of speech in the brain began with precise anatomy based on microscopic observation of internal brain structures, on comparisons between damage and symptoms (the post-mortem clinico-anatomical method), and on staining techniques. Various other techniques, such as electrical stimulation of different parts of the cortex, have been used extensively for over 50 years, and from this work the doctrine of cerebral localization was established. Leuret showed the connection between cerebral folding (the gyri and sulci) and the development of intelligence in humans.
In particular, there was a case in which a patient's frontal lobe, specifically the third frontal convolution, was substantially damaged, with the rear half almost completely destroyed. Broca concluded that the patient's speech disability most likely arose from damage to the third convolution of the left frontal lobe, and argued that a major faculty of the mind corresponds to a specific area of the brain. Thus, modern neuropsychology was born. This area lies on the frontal lobe of the dominant hemisphere, usually the left side of the human brain [21]. Since Broca reported the disability in two patients [22], language processing has been linked to the Broca area; the loss of the ability to speak was caused by injury to the posterior frontal lobe of the brain [23].
Subsequently, this approximate Broca area has been conclusively established, and Broca's aphasia (also known as expressive aphasia) is the corresponding deficiency in language production. The Broca area is now defined by the pars opercularis and pars triangularis of the inferior frontal gyrus, represented by areas 44 and 45 of the dominant hemisphere in Brodmann's cytoarchitectonic map [23]. Chronic aphasia research has established an integral role for the Broca area among the brain's diverse functional language areas, and fMRI studies have identified activation patterns in the Broca area related to various language tasks. However, the slow destruction of the Broca area by brain tumors suggests that its function can move to nearby brain areas without compromising speech.

Brain Connectivity
Brain connectivity analysis examines the connections between brain regions (corresponding to EEG channel regions), focusing on relationships among regions activated by a specific stimulus [24,25]. Analysis methods are generally classified into three modes: anatomical, functional, and effective connectivity. Anatomical connectivity refers to the physical or structural neural network and its biological parameters, such as synaptic strength or effectiveness [26]; it can also be used to interpret nerve fiber pathways [27]. Functional connectivity is a fundamentally statistical concept: it quantifies statistical dependence between elements of the system whether or not they are directly connected, without considering directionality [28], and is used to interpret correlations between connected elements. In contrast to anatomical connectivity, functional connectivity is time dependent. Effective connectivity, which describes a network in which the influence of one neural element is directed, can be viewed as a combination of anatomical and functional connectivity, and helps to interpret the flow of information [29]; it can also capture the effect of elements over time. The present paper analyzes brain connectivity correlations using mutual information through functional connectivity analysis [27].
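As a minimal illustration of functional connectivity as statistical dependence, the sketch below builds a symmetric channel-by-channel matrix from pairwise Pearson correlation, a simpler stand-in for the mutual information measure used later in this paper. The signals and function names here are hypothetical, not from the study:

```python
import math
import random

def pearson(x, y):
    """Pearson correlation between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def connectivity_matrix(channels):
    """Symmetric channel-by-channel functional connectivity matrix."""
    n = len(channels)
    return [[pearson(channels[i], channels[j]) for j in range(n)]
            for i in range(n)]

# Three toy "channels": ch1 and ch2 strongly coupled, ch3 unrelated noise.
random.seed(0)
t = [i / 100 for i in range(200)]
ch1 = [math.sin(2 * math.pi * 10 * s) for s in t]   # 10 Hz rhythm
ch2 = [v + random.gauss(0, 0.1) for v in ch1]       # noisy copy of ch1
ch3 = [random.gauss(0, 1) for _ in t]               # independent noise
C = connectivity_matrix([ch1, ch2, ch3])
```

The coupled pair (ch1, ch2) yields an entry near 1, while the independent channel stays near 0, mirroring how connected versus unconnected regions would appear in the matrix.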
Brain connectivity refers to anatomical connection patterns (anatomical connectivity), statistical dependencies between distinct units in the nervous system (functional connectivity), or causal interactions between them (effective connectivity) [30,31]. The units may be individual neurons, connected populations, or anatomically separated brain regions. The connection patterns represent statistical or causal relationships formed by structural connections such as synapses or fiber paths, or measured as cross-correlation, coherence, or information flow. Neural activity, and by extension neural codes, are constrained by connectivity [32]. Therefore, brain connectivity is essential to explain how neurons and neural networks process information.
One significant aspect of nervous system complexity is related to morphological complexity, particularly the inter-connectivity of neuronal connections. Neural connectivity patterns have long been of interest to neuroanatomists [33], as they help determine the functional properties of neurons and nervous systems. In more advanced nervous systems, brain connectivity can be described at multiple levels, from synaptic connections between individual neurons at the microscale to brain regions connected by large fiber pathways at the macroscale. At the microscale, previous anatomical and physiological studies have defined many essential components and interconnections of mammalian cerebral cortical microcircuits. At the mesoscale, these are arranged in networks of columns and mini-columns; macroscopically, substantial numbers of neurons and nerve cell populations forming distinct brain regions are interconnected by interregional pathways to form anatomical connectivity patterns. When applied to the brain, connectivity refers to different and interrelated aspects of brain organization [34]. The fundamental distinction is between structural, functional, and effective connectivity [35]. This distinction is most often made in the context of functional neuroimaging, but it is equally applicable to neural networks at other levels of organization [36]. Therefore, we studied functional brain connectivity. In general, functional connectivity captures the statistical dependence between distributed and often spatially distant units. Statistical dependence can be estimated by measuring correlation, covariance, spectral coherence, or phase locking. Functional connectivity can be calculated whether or not all elements of the system are directly connected. In contrast to structural connectivity, functional connectivity is highly time-dependent: statistical patterns between connected elements fluctuate on multiple time scales, some as short as tens or hundreds of milliseconds.
In particular, functional connectivity does not explicitly refer to a specific directional effect or the underlying structural model.

Mutual Information
In probability and information theory, the mutual information (MI) of two random variables is a measure of their interdependence [37]. In particular, it quantifies the amount of information (usually in Shannon units, i.e., bits) obtained about one random variable by observing the other. The concept of mutual information is intricately related to the entropy of a random variable, a basic concept of information theory that quantifies the expected amount of information in a random variable.
Mutual information is not limited to real-valued random variables, as the correlation coefficient is, but is more general: it determines how similar the joint distribution, p(x, y), is to the product of the marginal distributions, p(x) · p(y). MI is the expected value of the pointwise mutual information, and can be defined for two discrete random variables, X and Y, as [37]

I(X; Y) = Σ_x Σ_y p(x, y) log [ p(x, y) / (p(x) p(y)) ],

where p(x, y) is the joint probability function of X and Y, and p(x) and p(y) are the marginal probability distribution functions of X and Y, respectively. For continuous random variables, the summation is replaced by a double integral,

I(X; Y) = ∫∫ p(x, y) log [ p(x, y) / (p(x) p(y)) ] dx dy.

The MI shared by X and Y quantifies how much measuring one variable reduces uncertainty about the other. For example, if X and Y are independent, then X provides no information about Y, and vice versa, i.e., MI = 0. At the other extreme, if X is a deterministic function of Y and Y is a deterministic function of X, then all information conveyed by X is shared with Y: X determines the value of Y, and vice versa. Consequently, the MI equals the uncertainty in Y (or X), i.e., the entropy of Y (or X), and in this case the entropy of X and the entropy of Y are equal (a special case is where X and Y are the same random variable). Thus, MI is a measure of the inherent dependence expressed by the joint distribution of X and Y relative to the hypothesis of independence. MI measures dependence in the following sense: I(X; Y) = 0 if and only if X and Y are independent random variables. Moreover, MI is non-negative and symmetric, i.e., I(X; Y) ≥ 0 and I(X; Y) = I(Y; X). MI can be equivalently expressed as

I(X; Y) = H(X) − H(X|Y) = H(Y) − H(Y|X) = H(X) + H(Y) − H(X, Y),

where H(X) and H(Y) are the marginal entropies, H(X|Y) and H(Y|X) are the conditional entropies, and H(X, Y) is the joint entropy of X and Y.
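The discrete definition and the entropy identities above can be checked numerically. The sketch below (helper names are our own, not from the paper) estimates MI in bits from paired samples via the empirical joint distribution:

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy in bits of a sequence of discrete samples."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

def mutual_information(xs, ys):
    """I(X; Y) in bits from paired samples, via the joint distribution."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

x = [0, 0, 1, 1]
y_copy = x[:]          # Y a deterministic copy of X: MI = H(X) = 1 bit
y_ind = [0, 1, 0, 1]   # jointly uniform with X: MI = 0

mi_same = mutual_information(x, y_copy)
# The identity I(X; Y) = H(X) + H(Y) - H(X, Y):
mi_id = entropy(x) + entropy(y_copy) - entropy(list(zip(x, y_copy)))
```

For the deterministic pair, both the direct estimate and the entropy identity give 1 bit, and for the jointly uniform pair the MI vanishes, matching the two extremes discussed above.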
Mutual information is also used in the signal-processing domain to measure similarity between signals. For example, the feature mutual information (FMI) metric is an image-fusion performance measure that uses MI to quantify the amount of information the fused image contains about the source images.

Convolutional Neural Network
A convolutional neural network (CNN) is a class of deep feedforward artificial neural networks most commonly applied to image analysis. A CNN is a multilayer perceptron variant designed to require minimal preprocessing [38]; such networks are also known as shift-invariant or space-invariant artificial neural networks (SIANN), based on their shared-weight architecture and translation-invariance properties [39,40].
The CNN was inspired by biological processes [41], in that the connection pattern between neurons resembles the organization of the animal visual cortex. Individual cortical neurons respond to stimuli only in a limited region of the visual field known as the receptive field; the receptive fields of different neurons partially overlap to cover the entire visual field. By contrast, conventional image processing used fixed filters (identity, edge detection, sharpen, box blur, etc.).
The basic idea of the CNN is to let the elements of the filter, represented as a matrix, be learned automatically from the data. For example, when developing an image classification algorithm, filtering can improve classification accuracy, but normally one must decide which filter to use through human intuition or iterative experimentation. A CNN instead learns the filters that maximize classification accuracy, so image classification with a CNN requires little preprocessing compared with other algorithms. The main advantage of a CNN is that, unlike existing image classification algorithms, no person has to design features by understanding the characteristics of the images in advance.
There are many applications in image and video recognition, including recommender systems [42], image classification, medical image analysis, and natural language processing [43]. Convolutional neural networks consist of an input layer, an output layer, and several hidden layers. The hidden layers are typically composed of convolution, pooling, fully connected, and normalization layers. Describing the operation as a convolution follows convention; mathematically, it is a cross-correlation rather than a convolution (although cross-correlation is a closely related operation). This matters only for the indices of the matrix, i.e., which weights are placed at which indexes. The convolution layer applies the convolution operation to the input and passes the result to the next layer, emulating the response of individual neurons to visual stimuli.
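Since the "convolution" computed by such a layer is mathematically a cross-correlation, the operation can be sketched as follows (a hypothetical illustration with a hand-picked vertical-edge kernel rather than a learned one):

```python
def cross_correlate2d(image, kernel):
    """'Valid' 2-D cross-correlation: slide the kernel over the image
    without flipping it (what deep-learning 'convolution' layers compute)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

# A vertical-edge kernel applied to an image with a hard left/right edge:
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
k = [[-1, 1],
     [-1, 1]]
feature_map = cross_correlate2d(img, k)  # peaks where the edge lies
```

In a trained CNN the kernel values are learned rather than hand-picked, but the sliding dot-product computation is the same.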
Each convolutional neuron processes data only for its receptive field. Although a fully connected feedforward neural network can also be used to classify data and learn features, applying that architecture to images is not practical: due to the large input image size, where each pixel is a separate variable, a vast number of neurons would be required even in shallow architectures.
A convolutional neural network can include local or global pooling layers, which combine the outputs of a neuron cluster in one layer into a single neuron in the next [44,45]. For example, max pooling uses the maximum value of each neuron cluster in the previous layer [46], and average pooling uses the average of each cluster [47]. A fully connected layer connects every neuron in one layer to every neuron in the subsequent layer; in principle, this is identical to the conventional multi-layer perceptron (MLP) neural network.
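The two pooling operations described above can be sketched with a toy, non-overlapping implementation (function names are our own illustration):

```python
def pool2d(fmap, size, op):
    """Non-overlapping 2-D pooling: reduce each size x size block of the
    feature map with `op` (max for max pooling, mean for average pooling)."""
    out = []
    for r in range(0, len(fmap), size):
        row = []
        for c in range(0, len(fmap[0]), size):
            block = [fmap[r + i][c + j]
                     for i in range(size) for j in range(size)]
            row.append(op(block))
        out.append(row)
    return out

fmap = [[1, 3, 2, 0],
        [4, 2, 1, 1],
        [0, 1, 5, 6],
        [2, 2, 3, 4]]
max_pooled = pool2d(fmap, 2, max)                        # [[4, 2], [2, 6]]
avg_pooled = pool2d(fmap, 2, lambda b: sum(b) / len(b))  # [[2.5, 1.0], [1.25, 4.5]]
```

Either way, a 4 × 4 feature map is reduced to 2 × 2, which is how pooling shrinks the spatial dimensions between convolutional layers.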
In this paper, we classified the characteristics of the 64 × 64 mutual information matrix using a CNN with two convolutional layers, as shown in Figure 1. The fully connected layer classified the results into states A, B, and C.

Five-Fold Cross Validation Method
A validation method is needed because a regression model's parameters are estimated from a data set used for learning. When we discuss regression performance, we analyze how well the model predicts the dependent variable of this learning data set, e.g., using the coefficient of determination; this is called in-sample testing. Figure 2 illustrates five-fold cross-validation. The main idea behind cross-validation is to ensure that every sample in the dataset has a chance to be tested. k-fold cross-validation iterates over the dataset k times: the dataset is divided into k parts, and in each iteration one part is used for validation while the remaining k − 1 parts are combined into the training subset.
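The k-fold splitting described above can be sketched as an index generator (a hedged illustration; real pipelines typically also shuffle the samples first):

```python
def k_fold_splits(n_samples, k):
    """Yield (train_indices, test_indices) for k-fold cross-validation.
    Each sample appears in exactly one test fold."""
    indices = list(range(n_samples))
    fold_size, remainder = divmod(n_samples, k)
    start = 0
    for fold in range(k):
        # Spread any remainder over the first folds so sizes differ by at most 1.
        stop = start + fold_size + (1 if fold < remainder else 0)
        test = indices[start:stop]
        train = indices[:start] + indices[stop:]
        yield train, test
        start = stop

# Five folds over 10 samples: each fold tests 2 samples and trains on 8.
folds = list(k_fold_splits(10, 5))
```

Averaging the model's score over the k test folds gives the out-of-sample performance estimate, in contrast to the in-sample testing described above.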

Brain Connectivity-Based Feature Extraction Module
We developed a module that extracts EEG features based on connectivity between brain regions, and we explain the module using a simple diagram. To use the BCI system in real life, it is necessary to change the operating mode (typing, robot control, electric wheelchair, etc.) depending on the situation. Figure 3 shows the feature extraction module based on brain connectivity. First, we set parameters for the sampling frequency, number of channels, window size, analysis size, baseline settings, window length, overlap settings, etc., including:
• Sampling frequency. The EEG is a continuous analogue signal over time, so the instrument samples the signal to provide digital data; e.g., a sampling frequency of 500 Hz means 500 data points are acquired per second from each EEG channel. The total number of data points collected is the product of the sampling frequency, the recording duration, and the number of channels. As the number of channels increases to 64, 128, or 256, the meaningful data must be separated according to the sampling frequency. Therefore, an appropriate channel count and sampling frequency are needed: with too much data, extracting meaningful data from meaningless data takes longer and errors worsen.
• Number of channels. EEG data were acquired non-invasively, so data accuracy differs depending on the number of channels. We used the international 10-20 system [48] to set the measurement layout according to the number of EEG channels, selecting 64 channels.
• Analysis size. The analysis size determines the interval over which the data are analyzed after the stimulus is presented. We set the analysis size to 4 s per stimulus, with the interval from −1 s to 3 s relative to the stimulus onset as the analysis window.
• Window size. We set the window size 1 s larger than the analysis size, i.e., window size = analysis size + 1.
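Under the parameter choices listed above (500 Hz sampling, analysis window from −1 s to 3 s around stimulus onset), epoch extraction can be sketched as follows (the helper names are hypothetical, not the module's actual API):

```python
def epoch_indices(stim_sample, fs, t_start, t_end):
    """Sample-index range of one epoch around a stimulus onset.
    t_start/t_end are in seconds relative to the stimulus (negative = before)."""
    return stim_sample + int(t_start * fs), stim_sample + int(t_end * fs)

def extract_epoch(channel, stim_sample, fs=500, t_start=-1.0, t_end=3.0):
    """Cut one channel's samples for the -1 s ... +3 s analysis window."""
    lo, hi = epoch_indices(stim_sample, fs, t_start, t_end)
    return channel[lo:hi]

# A stimulus at t = 10 s in a 20 s recording sampled at 500 Hz:
fs = 500
channel = list(range(20 * fs))  # dummy samples, value == sample index
epoch = extract_epoch(channel, stim_sample=10 * fs)
# 4 s analysis size at 500 Hz -> 2000 samples per epoch and channel
```

The 1 s of extra window around the analysis size would then give filtering and baseline routines some margin at the epoch edges.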

Experimental Method
Many variables can arise in experiments involving human subjects that profoundly affect the results. Since the experiment must confirm the proposed processes under actual conditions, one must know the predicted result or waveform in advance and establish a standard scale so that the experimental results can be trusted. We conducted a detailed study of brain connectivity for known active brain areas using non-invasive EEG. Ten subjects were recruited, and we proposed a paradigm with four stimuli covering four states in 4 s epochs, during which the subjects were continuously observed and recorded. Because the number of participants was relatively small, the study considered only significant data, and the results were verified using five-fold cross-validation.

Experimental Apparatus
The ubiquitous robot and intelligent information system (URIS) laboratory of Chung-Ang University in Korea has a shielded room for EEG measurements, built in 2011 to test and verify BCI and AI machine learning algorithms [49][50][51][52][53][54][55]. The shielded room is divided into a control room and a measuring chamber, with a total size of 3.3 m × 2.18 m × 2.5 m, as shown in Figure 4. Figure 5 shows the two types of EEG equipment available: a Neuroscan Synamps2 (64-channel EEG, Compumedics Ltd., Melbourne, Australia), which uses Curry 7 software, and a STIM2 device, which presents various audiovisual stimuli with real-time synchronization to the Synamps2 instrument. The hardware and software are collectively referred to as STIM2.

Participants
There are many requirements for successfully identifying and verifying brain connectivity, the most important being the acquisition of good-quality EEG data. Conventional brain connectivity experiments require considerable time, effort, and money to obtain suitable EEG data from a single subject. We recruited 10 subjects with no previous neurological or psychiatric history: 5 males and 5 females, mean age 27.50 ± 3.77 years, with subjects in their twenties and thirties in a ratio of 7:3. All subjects took part in the research and experiments after providing informed consent.
People have different head sizes and volumes, so although the electrodes were positioned and measured following the international 10-20 system, subjects' electrodes did not correspond to precisely the same brain locations. Therefore, we used a 3D digitizer to measure each subject's head dimensions and marked the correct 64 channel locations on the subject's head to ensure the electrodes were properly positioned in 3D. The electrode location file could thus be registered to the subject's head image, providing more accurate, customized EEG data.

Proposed Paradigm Design
The experimental procedures were approved by the Institutional Review Board of Dongseo University. All participants were asked to read and sign an informed consent form to participate in the study; they were free to withdraw from the study at any time.
The study was performed in accordance with the Declaration of Helsinki, and the experimenter completed the Institutional Review Board (IRB) education (DSUIRB 2020-20). Experiments were performed using the Compumedics 64-channel EEG and STIM2 in the shielded room with additional micro-electromagnetic shielding. We collected 64 data channels and one additional reference channel for each subject. Additional electrodes were attached to measure eye blinks and facial muscle movements.
Four visual stimuli (letters R, A, B, or C) were presented to the subjects using the STIM2 monitor, with subject instructions to follow for each letter shown in Table 1. The sequence proceeded as follows (see Figure 6).
• R was displayed for the first 4 s of every sequence (the subject relaxes).
• One of A, B, or C was displayed for 4 s, and the subject performed the corresponding task.
• R was displayed on the screen again, and the subject took a break.
• The three task stimuli appeared in random order, interleaved with the letter R (24 s in total).
The sequence was repeated 25 times (10 min), the subject rested for 5 min, and the whole process was repeated. During the 5 min break, some subjects listened to relaxing music while others rested freely, and measurements were spread over several days when one session could affect the next experiment.
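A hypothetical generator for the stimulus order described above, assuming each 4 s task cue (A, B, or C, chosen at random) is interleaved with a 4 s rest cue R (names and structure are our own illustration, not the STIM2 script):

```python
import random

def build_sequence(n_repeats=25, stimuli=("A", "B", "C"), seed=None):
    """Generate the presentation order: each randomly chosen task cue
    is preceded by a rest cue R, with a final R closing the run."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n_repeats):
        seq.append("R")                  # 4 s rest cue
        seq.append(rng.choice(stimuli))  # 4 s imagery task
    seq.append("R")                      # final rest
    return seq

seq = build_sequence(seed=42)
# 25 repetitions -> 25 task cues alternating with rest cues
```

Fixing the seed makes the "random" order reproducible, which is one simple way to let the researcher know each trial's label for supervised learning while the subject still experiences the stimuli as random.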
Each stimulus was matched to a trigger at its presentation time. Although the subject experienced the stimuli in random order, the researcher must know the label of each signal to enable supervised learning [56]. Therefore, the stimuli were presented to the subjects as random signals while being recorded with their labels.

Protocol
The overall EEG measurement procedure was as follows.

1. Recruit the subjects and fully inform them regarding the experiment.
2. Apply electrolyte gel to the EEG electrodes for 20-30 min, reducing the impedance of each electrode to below 10 kΩ.
3. Randomly generate the four states from three visual stimuli: the three characters (A, B, C) are randomly displayed on the screen, separated in the timeline by R (during which the subject can rest comfortably).
4. Present the stimuli at 4 s intervals for 10 min; the subject then rests for 5 min, and step 3 is repeated.
5. After two cycles (25 min), remove the EEG cap and clean the gel from the subject's hair.

Figure 3 shows the stimulus identification system architecture using the proposed convolution-pooling operation. Mutual information matrix samples are required during training, so the raw 64-channel EEG data were parameterized, band-pass filtered, and re-referenced with the CAR algorithm. Subsequently, each stimulus was labeled, each epoch was extracted, and brain connectivity was calculated and normalized to produce the features.
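Assuming CAR denotes the common average reference, the re-referencing step in the pipeline above can be sketched as follows (a minimal illustration, not the authors' implementation):

```python
def common_average_reference(samples):
    """CAR: at each time point, subtract the mean over all channels from
    every channel, removing signal common to the whole montage.
    `samples` is a list of channels, each a list of equal length."""
    n_ch = len(samples)
    n_t = len(samples[0])
    out = [list(ch) for ch in samples]   # copy so the input is untouched
    for t in range(n_t):
        avg = sum(ch[t] for ch in samples) / n_ch
        for ch in out:
            ch[t] -= avg
    return out

# Two channels sharing a +10 common offset: CAR removes it.
raw = [[11.0, 12.0, 13.0],
       [ 9.0,  8.0,  7.0]]
car = common_average_reference(raw)
# per-time-point means are 10, 10, 10 -> car = [[1, 2, 3], [-1, -2, -3]]
```

After CAR, the channels sum to zero at every time point, so activity common to all electrodes (e.g., the reference signal) no longer dominates the connectivity estimates.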

Experimental Results
We obtained symmetric 64 × 64 mutual information matrices for each of the 10 subjects, which served as input for the CNN to learn their characteristics. It is critical that consistent data be extracted: conducting the experiment with the same subjects but with changes in the subject's state, the time, or environmental factors can alter the EEG, because the data are non-stationary; the mutual information data of the brain connectivity analysis, however, do not depend on absolute power and spectral values. Figure 7 shows a typical data set of functional brain connectivity for stimulus A from −1 to 3 s (i.e., one epoch, where 0 indicates the stimulation time). Brain connectivity was sampled 40 times at 100 ms intervals. Since each subject's reaction rate differs, it is essential to establish a good time point at which the brain is well connected. It is also helpful to visualize brain connectivity through a threshold setting.
Stimulus A requires the subject to think about saying hello (but not actually speak); thus, we expect to see activation or connectivity of the left inferior frontal (Broca) and motor cortex regions. Figure 8 shows brain connectivity for subject 2 when imagining "Hello." The Broca and motor imagery areas are connected, and since the visual stimulus is received through the STIM2 monitor, the occipital area is also connected to the respective areas. Each electrode position is a node, the connection between two nodes is a branch (see Figure 8), and the weight of a branch is the weighted connectivity. Previous studies have investigated the shape and strength of connections between nodes and branches [57].
Imagining language can manifest in various ways: a subject may imagine a specific pronunciation, imagine the mouth shape when pronouncing it, or think about the five phonemes. Figure 9 shows the mutual information correlation matrices learned from data expected to respond to specific stimuli. The database can be expanded proportionally if the epoch is finely divided and the measurement repeated for several subjects. Most brain-related studies normalize their data by measuring a subject's data several times, similar to finding periods in signals that show periodic characteristics. We also increased the number of subjects, analyzed and classified the brain connectivity patterns, and eliminated areas with excessive correlation or outlier patterns. Figure 10 shows the classification results, and Figure 11 shows a box plot of the results in Figure 10, where the chance-level accuracy is 25%. Accuracy well above the chance level demonstrates both the ability to predict a single state out of the four states and the reliability of the data.

Discussion
In this paper, we made three attempts to improve the performance of EEG analysis methods. First, two types of imagery stimuli were applied in various ways: motor imagery and speech imagery. Second, data were independently acquired with our own 64-channel EEG device from 10 subjects. Third, we designed the paradigm ourselves, proposed a convolutional neural network architecture for the proposed algorithm, and used a 64 × 64 mutual information matrix based on functional brain connectivity as the input. The paradigm was designed and the EEG data acquired non-invasively without using existing datasets, and the experiment proceeded through trial and error over several runs. We tried to expand the brain-computer interface research area from these three perspectives, and the results were excellent.
The significance of this paper is that it enables a BCI capable of various controls by predicting the state of the user, where previous systems offered only one type of control. In other words, the results will be useful in the field because BCI-based mode-switching commands become possible. In addition, since EEG signals are time-series data, it was not easy to obtain meaningful correlations between data when applying them directly to a CNN. Therefore, in this paper, functional brain connectivity was measured as the correlation between each of the 64 channels, rather than using the time-series EEG values, exploiting the CNN's robustness to changes between pixels. The mutual information matrix was used to measure the correlation between the respective electrodes, and the correlation between the 64 channels was expressed as a value between 0 and 1; the closer the value is to 1, the higher the correlation between the two channels.
In particular, when performing speech imagery, the correlation between electrodes was high in the brain region related to Broca's area, and visualizing the functional brain connectivity at that time also showed high correlation between the temporal and frontal lobes. We proposed visualizing this as a spectrogram-like image and applying it as the CNN input. As a result, speech imagery and the two classes of motor imagery could be distinguished, and the mean accuracy for classifying the user state was 88.25 ± 2.34%. This is a considerable level of performance for a BCI-based multi-class experiment, an improvement of 5-8% over previous research results. In addition, since the EEG feature extraction process was generalized using the CNN, real-time prediction and classification should be possible.

Conclusions
This paper developed an algorithm to estimate four subject states by classifying them with a convolutional neural network (CNN) using mutual information (MI) obtained from conventional EEG measured from the general population. We expect BCI technology to be practically applied with this as a key algorithm for identifying mode changes. Brain activity analysis using EEG or other non-invasive methods currently requires significant trial and error; for example, it is essential to remove noise caused by reconstruction of the paradigm, the subject's condition, and room temperature and humidity. The proposed functional connectivity method using MI offers an advantage in this regard: it does not depend on absolute power or measurement values, since correlations between regions are analyzed rather than the state of an active region. Although this was not apparent in all subjects, the approach can obtain the expected results in some situations, and redeveloped methods and stimulus paradigms to ensure more predictable data should be discussed. Functional connectivity changes depending on the brain state and could be predicted using a CNN; the model's user state classification accuracy was 88.25 ± 2.34%. Using brain connectivity, one can select the brain-computer interface mode or toggle the system. Future studies will optimize how MI is extracted using CNNs to obtain the user's state in real time from EEG changes. We will also develop a sequential state prediction algorithm based on the long short-term memory (LSTM) recurrent neural network.

Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available because the subjects did not consent to their public release.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript: BCI