Article

Deep Neural Regression Prediction of Motor Imagery Skills Using EEG Functional Connectivity Indicators

by Julian Caicedo-Acosta 1,*,†, German A. Castaño 2,†, Carlos Acosta-Medina 1,†, Andres Alvarez-Meza 1,† and German Castellanos-Dominguez 1,†

1 Signal Processing and Recognition Group, Universidad Nacional de Colombia, Manizales 170001, Colombia
2 Grupo de investigación Cultura de la Calidad en la Educación, Universidad Nacional de Colombia, Manizales 170001, Colombia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.

Sensors 2021, 21(6), 1932; https://doi.org/10.3390/s21061932
Submission received: 5 February 2021 / Revised: 20 February 2021 / Accepted: 25 February 2021 / Published: 10 March 2021
(This article belongs to the Section Biomedical Sensors)

Abstract

Motor imagery (MI) induces recovery and neuroplasticity in neurophysical regulation. However, a non-negligible portion of users presents insufficient skills in controlling the sensorimotor cortex. Assessments of the relationship between wakefulness and task states are conducted to foster neurophysiological and mechanistic interpretation in MI-related applications. Thus, to understand the organization of information processing, measures of functional connectivity are used. Also, neural network regression models are becoming popular, as they intend to reduce the need for extracting features manually. However, predicting the neurophysiological inefficiency of MI practice raises several problems, such as enhancing network regression performance in the face of the overfitting risk. Here, to increase the prediction performance, we develop a deep network regression model that includes three procedures: leave-one-out cross-validation combined with Monte Carlo dropout layers, subject clustering of MI inefficiency, and transfer learning between neighboring runs. Validation is performed using functional connectivity predictors extracted from two electroencephalographic databases acquired in conditions close to real MI applications (150 users), resulting in a high prediction of pretraining desynchronization and initial training synchronization with adequate physiological interpretability.

1. Introduction

Motor imagery (MI) is the dynamic cognitive capability of generating mental movements without executing them. This mental process triggers the neurocognitive mechanisms that underlie voluntary movement planning, similar to when the action is actually performed. MI has been proposed as a reliable tool for acquiring new motor skills to increase sports performance and physical therapy outcomes [1,2,3,4], for developing professional motor skills [5], and for improving balance and mobility outcomes in older adults and children with developmental coordination disorders [6,7], among others. There is sufficient experimental evidence that MI induces recovery and neuroplasticity in neurophysical regulation as the basis of motor learning [8] and educational fields [9]. In this regard, the Media and Information Literacy approach has been proposed by the United Nations Educational, Scientific, and Cultural Organization (UNESCO) to gather several vital human development capabilities. In practice, MI tasks are commonly solved from electroencephalography (EEG) records, which provide noninvasive measures and flexible portability at a relatively low cost. However, EEG signals lack a suitable spatial resolution, not to mention the inter- and intra-subject variability regarding the somatosensory cortex’s responses. Specifically, there is no consistency in the patterns among different subjects. Indeed, variability arises even within a session for the same subject because EEG signals are non-stationary, nonlinear, and have a low signal-to-noise ratio [10]. Together with the frequently used small sample datasets, all of these factors reduce the performance of EEG-based MI systems [11,12].
An enhanced approach to addressing this EEG data complexity is to conduct multiple training sessions to refine the modulation of sensorimotor rhythms (SMR). Nonetheless, the inter-subject variability, together with uncertain long-term effects and the apparent failure of some individuals to achieve self-regulation, makes a non-negligible portion of users (between 15% and 30%) develop insufficient coordination skills even after long training sessions. This inadequate performance when operating brain-computer interface (BCI) systems (termed BCI inefficiency) poses a challenge in MI research [13]. To address this problem, the BCI performance model is enhanced in two directions: (i) developing guidelines in neural testing set-ups, practice, and instructions to ensure better performance of brain responses; and (ii) promoting evaluation tools to forecast system performance, which may help identify the core sources of variability so that compensating actions for the inefficiency can be incorporated when solving BCI-based tasks. In particular, a calibration strategy can be added, working hand in hand with the training stage. Therefore, it is possible to adapt the decoding scheme to an explicit brain pattern [14], highlighting relevant BCI predictors to decrease training efforts and encourage user-centered MI [15]. To date, several electrophysiological indicators have been reported to anticipate MI inefficiency, like the direct assessment of the SMR, which extracts the power spectral density (PSD) from resting wakefulness at motor cortex locations [16]; a measure of the PSD uniformity of the resting-state data using spectral entropy [17,18]; and PSD-based estimates that assess the dis/similarity (connectivity) of EEG signals at different locations in an attempt to understand the interdependency between functional and structural networks of corresponding cortical brain structures (like spectral coherence [19,20] or coherence-based correntropy spectral density [21]), among others. To tackle the influence of artifacts and inter-trial/inter-subject amplitude variability, phase-based relationships (phase synchronization) are more desirable as a functional connectivity (FC) measure of spatially distributed regions dynamically interacting in accomplishing a mental task [22]. It has been shown that functional connectivity features measured by the phase lag index (or its weighted version, wPLI, and the phase-locking value, PLV) can discriminate between different MI tasks [23,24].
Therefore, motor performance can be predicted from the resting motor-system functional connectivity, as in [16], showing that an efficient brain reconfiguration corresponds to a better MI performance [25,26]. Nevertheless, several conditions can affect the correct estimation of FC and introduce spurious contributions, giving a potentially distorted measure of the real interactions (termed spurious connectivity) [27]. Moreover, FC estimation is highly time-dependent and fluctuates over multiple timescales, yielding inter-subject variations that remain a substantial problem [28]. Specifically, the obstacles related to volume conduction and noise perturbations in phase synchronization make it necessary to apply thresholds to these FC measures to improve the discriminative ability of the connection sets. However, the threshold selection is generally far from being an automated procedure for big datasets [29]. Despite the promising evidence, there is still a need to understand the learning mechanisms and the brain network reorganization, aiming to support the efficiency of BCI systems [30].
As regards the prediction model, several regression methods are available for prognosticating MI accuracy from neurophysiological variables, like simple and multiple linear regression [31,32], stepwise regression [33], kernel regression [34], and (kernel) support vector machine regression [35], among others. Additionally, there is increasing use of neural network regression approaches that can be applied to the raw EEG data, simplifying BCI design pipelines by removing the need to extract features manually. However, several aspects degrade the prediction model performance, such as the fact that FC measures are prone to be influenced by outliers, which must be removed before calculating correlations [36]. Another drawback is the inter-trial variability of MI data (with a notable increase in subjects having low MI skills), which restricts prediction models based on single-trial EEG data [37]. One more issue influencing the regression model is the categorization of users according to their SMR activity (predictor) and classifier performance (target response) during the MI runs. Users are frequently assigned to two partitions (skilled and non-skilled) divided by a single target value given in advance, as in [38]. Still, as the number of subjects tested increases, the range of FC changes also rises. The partition-based method should also be sensitive in detecting predictor differences among subject clusters [39,40]. Therefore, the need for clustering into more partitions becomes more evident, as shown in [41]. Lastly, the correlation coefficient (reflected in r-squared) is often applied to assess the prediction shape, while the p-value quantifies its statistical significance, which can be implemented through several test procedures, as developed in [42]. A common issue in neural network regression models trained with small samples in MI studies is their fitting to spurious residual variation (overfitting), apart from the controversial interpretation of p-values [43].
Here, to increase the prediction performance over baseline linear regression models, we develop a deep network regression model devoted to prognosticating motor imagery skills using EEG functional connectivity indicators, appraising three procedures: leave-one-out cross-validation combined with Monte Carlo dropout layers, subject clustering of MI inefficiency, and transfer learning between neighboring runs. Our approach comprises functional connectivity predictors extracted from electroencephalographic signals to favor data interpretability. To deal with the risk of overfitted prediction assessments inherent to the deep learning framework, we intend to preserve as much information as possible from the measured scalp potentials. Thus, to reach competitive values of prediction errors under the leave-one-out cross-validation scheme, we introduce the following procedures: (i) Monte Carlo dropout layers to decrease the probability that the rules learned from specific training data cannot be generalized to new observations; (ii) subject efficiency clustering to adapt the DRN estimator more effectively to the complex EEG measurements inherent to BCI-inefficient subjects; (iii) for the prediction of initial-training synchronization, transfer learning of the weights inferred at the predecessor run to deal with the few-trial sets. The validation is performed in two MI databases (150 users) acquired in conditions close to real MI applications. The obtained results show how our approach can achieve a high prediction of pretraining desynchronization and initial training synchronization with adequate physiological interpretability. We further compared the DRN prediction performance (on average, 0.8) with the results obtained by the linear regression models reported, at least for DBI, in the baseline work [44], which presents R-squared values not exceeding 0.54.
The rest of the paper is organized as follows: Section 2 briefly discusses the regression prediction model’s theoretical background. Section 3 describes the experimental set-up, including both datasets evaluated. Section 4 presents the assessment of Deep Regression Network performance and discusses the findings obtained to predict pretraining desynchronization and initial training synchronization. Lastly, Section 5 concludes the paper.

2. Methods

2.1. Electrophysiological Predictors Based on Phase Synchronization Relationships

Initially, we consider predictors based on the following two widely-used FC measures of phase synchronization between every pair of EEG channels $x_c, x_{c'} \in \mathbb{R}^T$ ($c, c' \in C$, $c \neq c'$, where $C \in \mathbb{N}$ is the number of channels):
Phase Locking Value (PLV): This phase coherence measure assesses the pairwise similarity relation based on states’ recurrence density, occurring between electrodes. For the single-trial analysis, PLV is computed by the following average over a time window [45]:
$$\nu_1(c,c') = \left| \mathbb{E}\left\{ \exp\big(j\,\Delta\Phi(c,c';n,t,f)\big) : t \in T \right\} \right|, \quad \nu_1(\cdot,\cdot) \in [0,1],$$
where $\Delta\Phi(c,c';n,t,f) \in [-\pi,\pi]$ is the instantaneous phase difference computed at time instant $t$ for the $n$-th trial ($n \in N$, where $N \in \mathbb{N}$ is the number of EEG trials), and the notations $\mathbb{E}\{\cdot\}$ and $|\cdot|$ stand for the expectation operator and magnitude, respectively. Of note, to remain physically meaningful, the phase signal must highlight only a given frequency oscillation $f \in \mathbb{R}^{+}$. Here, this is achieved through convolution with a narrow-band complex Morlet wavelet via the continuous wavelet transform [46].
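As a minimal sketch of this estimator, the following Python snippet builds a complex Morlet wavelet, extracts the instantaneous phase at a target frequency by convolution, and averages the phase differences over a single-trial time window; the helper names (morlet_wavelet, plv) and the number of wavelet cycles are our own assumptions, not identifiers from the original implementation.

```python
import numpy as np

def morlet_wavelet(f, fs, n_cycles=7):
    # Complex Morlet wavelet centered at frequency f (Hz), sampled at fs (Hz)
    sigma_t = n_cycles / (2.0 * np.pi * f)
    t = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1.0 / fs)
    return np.exp(2j * np.pi * f * t) * np.exp(-t ** 2 / (2.0 * sigma_t ** 2))

def instantaneous_phase(x, f, fs):
    # Phase of the narrow-band analytic signal obtained by wavelet convolution
    analytic = np.convolve(x, morlet_wavelet(f, fs), mode="same")
    return np.angle(analytic)

def plv(x_c, x_cp, f, fs):
    # |E{exp(j * dPhi(t))}| over the time window of one trial
    dphi = instantaneous_phase(x_c, f, fs) - instantaneous_phase(x_cp, f, fs)
    return np.abs(np.mean(np.exp(1j * dphi)))
```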
Weighted Phase Locking Index (wPLI): This indicator quantifies the pairwise phase difference distribution’s asymmetry by averaging across the trial set. Then, the wPLI is computed by averaging over the trial ensemble, as follows:
$$\nu_2(c,c') = \frac{\left| \mathbb{E}\left\{ \Im\{S(c,c';n,f)\} : n \in N \right\} \right|}{\mathbb{E}\left\{ \left| \Im\{S(c,c';n,f)\} \right| : n \in N \right\}}, \quad \nu_2(\cdot,\cdot) \in [0,1],$$
where $S(c,c';n,f) \in \mathbb{C}$ is the cross-spectral density based on Morlet wavelets and $\Im\{\cdot\}$ stands for the imaginary part of a complex-valued function. wPLI is assumed to deal with the presence of volume conduction, noise, and sample-size bias [47].
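A corresponding sketch (reusing the hypothetical morlet_wavelet helper from the PLV snippet above) estimates the wPLI from the imaginary part of the wavelet cross-spectrum pooled over trials and time points; this pooling choice is our assumption.

```python
import numpy as np

def wpli(trials_c, trials_cp, f, fs):
    # trials_c, trials_cp: arrays of shape (n_trials, n_samples) for two channels
    im_parts = []
    for x_c, x_cp in zip(trials_c, trials_cp):
        w_c = np.convolve(x_c, morlet_wavelet(f, fs), mode="same")
        w_cp = np.convolve(x_cp, morlet_wavelet(f, fs), mode="same")
        im_parts.append(np.imag(w_c * np.conj(w_cp)))  # Im of cross-spectrum
    im = np.concatenate(im_parts)
    # |E{Im S}| / E{|Im S|} over the pooled observations
    return np.abs(np.mean(im)) / np.mean(np.abs(im))
```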

2.2. Construction of Brain Graph Predictors

We also consider predictors that involve a generic approach to characterizing brain activity using undirected graph theory. These predictors describe complex systems’ properties by quantifying the topologies of their respective network representations. In large-scale brain networks, the node set (noted as $\{\varphi_r(c) \in \mathbb{R}\}_{r \in \mathbb{Z}}$) usually designates brain regions holding $V = C(C-1)/2$ paired (undirected) links. The following weighted network indexes are extracted from the phase synchronization-based relationships (spatiotemporal dependences) [48]:
Strength is a local-scale property that accounts for the weights of the links connected to each node, computed as follows:
$$\varphi_1(c) = C\, \mathbb{E}\left\{ \nu(c,c') : c' \in C,\ c' \neq c \right\}.$$
Clustering Coefficient is a global-scale property that indicates the tendency of a network to form tightly connected neighborhoods, measuring the brain’s segregation ability for specialized processing within densely interconnected regions, computed as follows:
$$\varphi_2(c) = \frac{1}{C}\, \frac{2\,\pi_c}{\mathbb{E}\left\{\hat{\nu}(c,c')\right\}\left(\mathbb{E}\left\{\hat{\nu}(c,c')\right\} - 1\right)} : c' \in C,\ c' \neq c,$$
where the binarized connection value (connection status) is $\hat{\nu}(c,c') = 1$ if $\nu(c,c') > 0$ and $\hat{\nu}(c,c') = 0$ otherwise, and $\pi_c \in \mathbb{N}$ is the number of triangles neighboring the $c$-th node.
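For illustration, the sketch below computes node strength and a binary clustering coefficient from a connectivity matrix with NumPy, following the standard Brain Connectivity Toolbox definitions rather than the exact normalization above; the function names are hypothetical.

```python
import numpy as np

def node_strength(W):
    # Sum of link weights attached to each node (diagonal excluded)
    W = W.copy()
    np.fill_diagonal(W, 0.0)
    return W.sum(axis=1)

def clustering_coefficient(W):
    # Binary clustering coefficient on the binarized graph (link present if w > 0)
    A = (W > 0).astype(float)
    np.fill_diagonal(A, 0.0)
    deg = A.sum(axis=1)                   # node degree k_c
    closed_walks = np.diag(A @ A @ A)     # 2 * number of triangles per node
    denom = deg * (deg - 1.0)
    return np.divide(closed_walks, denom,
                     out=np.zeros_like(closed_walks), where=denom > 0)
```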

2.3. Regression Network Models

Let $X \in \mathcal{X}$ (termed predictor) and $Y \in \mathcal{Y}$ (response) be a couple of random variables for which the mutual dependence $y = \xi(x)$, with $x \in \mathcal{X}$ and $y \in \mathcal{Y}$, is assessed through the approximating function (termed regressor) $\xi: \mathcal{X} \to \mathcal{Y}$. Namely, let $\{x_m \in \mathcal{X}, y_m \in \mathcal{Y} : m \in M\}$ be the corresponding composite observation set across $M \in \mathbb{N}$ subjects; the following optimization framework allows fixing the regressor as:
$$w^{\ast} = \arg\min_{w}\ \mathbb{E}\left\{ \| y_m - \xi(x_m; w) \|_2^2 : m \in M \right\},$$
where $w$ is an unknown parameter vector fitting the data most closely in terms of the $\ell_2$-norm.
For implementing the data-driven estimator in Equation (5), we employ the Deep Regression Network (DRN) developed in [49] that jointly extracts and performs the regression analysis, as follows:
$$\min_{w}\ \mathbb{E}\left\{ \big\| y_m - (\xi_J \circ \xi_{J-1} \circ \cdots \circ \xi_1)(x_m(f); w) \big\|_2^2 : m \in M \right\},$$
where $\xi_j$ is the $j$-th layer ($j \in J$) and $\circ$ stands for function composition. The notation $x_m(f)$ describes the connectivity predictors extracted within each frequency rhythm $f$, while $y_m$ contains the accuracy response of the $m$-th subject.

3. Experimental Set-Up

The methodology for enhanced prediction of motor imagery skills using functional connectivity indicators is evaluated under a regression model to predict the bi-class accuracy response of subjects, embracing the following stages: (i) Predicting capability estimation of the pre-training desynchronization under a conventional linear regression model, testing different scenarios of input arrangements to improve the system performance; (ii) Prediction assessment of the pre-training desynchronization under the data-driven network regression model; (iii) Enhanced network prediction assessment using leave-one-out cross-validation combined with Monte Carlo dropout layers and clustering of subject inefficiency; (iv) Enhanced network regression prediction of initial-training synchronization with an additional transfer learning procedure.
The pre-training desynchronization assesses the relationship between the bi-class accuracy response and the electrophysiological indicators extracted from resting wakefulness data. For evaluation purposes, we employ either the resting state or the task-negative state before the cue onset of the conventional MI trial timing. Besides, as the target response, we compute each subject’s classifier accuracy in distinguishing between the MI classes using the short-time sliding feature set extracted by Common Spatial Patterns (CSP), which maximizes the between-class variance ratio. To accurately extract each subject’s EEG dynamics over time, the sliding window is adjusted to 2 s with an overlap of 50%.
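As a hedged sketch of how this target response might be computed, the snippet below slides a 2-s window with 50% overlap over the trials and scores a CSP+LDA pipeline by cross-validation in each window; the function name, the number of CSP components, and the use of mne.decoding.CSP are our assumptions.

```python
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def sliding_window_accuracy(X, y, fs, win_s=2.0, overlap=0.5, n_csp=4):
    # X: (n_trials, n_channels, n_samples) MI epochs; y: (n_trials,) class labels
    win = int(win_s * fs)
    step = max(1, int(win * (1.0 - overlap)))
    scores = []
    for start in range(0, X.shape[-1] - win + 1, step):
        Xw = X[..., start:start + win]
        clf = make_pipeline(CSP(n_components=n_csp), LinearDiscriminantAnalysis())
        scores.append(cross_val_score(clf, Xw, y, cv=10).mean())
    return np.array(scores)  # per-window bi-class accuracy for one subject
```

The subject-level target can then be summarized from these per-window accuracies (e.g., their mean or maximum), a design choice not specified here.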
On the other hand, the pre-training desynchronization predictor relies on the fact that the change in neural activity, intentionally evoked by a mental imagery task, shows certain regularities through training runs or sessions. Accordingly, the pre-training indicator of neural desynchronization attempts to anticipate the MI responses evoked within every run’s wakefulness data.

3.1. MI Databases Description and Preprocessing

Giga-DBI: This MI dataset is publicly available at http://gigadb.org/dataset/100295 (accessed on 30 January 2021). It gathers EEG records from fifty subjects ($M = 50$), fixing the well-known 10-10 electrode configuration with $C = 64$ channels. Each signal $x(c)$ comprises $T = 7$ s at a sampling frequency of $F_s = 512$ Hz. The MI protocol (see Figure 1) starts with a fixation cross shown on a black screen for 2 s. Further, a cue instruction is displayed depending on the MI instruction (label), which appears randomly within 3 s. For concrete testing, the cue asked the subject to imagine moving their fingers, starting from the index finger and reaching the little one. Afterward, a blank screen is visible at the beginning of a break period (shown randomly between 6.1 and 6.8 s). Each MI run comprises 20 trials and a written cognitive quiz [50]. Every subject performed five runs (on average) and a single-trial resting-state recording lasting 60 s.
Physionet-DBII: This database, publicly available at https://physionet.org/content/eegmmidb/1.0.0/ (accessed on 30 January 2021), holds $M = 105$ volunteers who properly performed the left- and right-hand MI tasks, collecting a total average of 46.62 ± 0.96 trials per subject. Besides, two one-minute baseline records are captured concerning a resting-state trial (with eyes open and closed, respectively). The 64-channel EEG signals were recorded using the 10-10 international system and sampled at $F_s = 160$ Hz. Figure 2 describes the motor imagery timing.
Every raw EEG channel of either database was band-pass filtered in the frequency range $f \in$ [4–40] Hz, covering the sensorimotor rhythms considered ($\theta$, $\mu$, $\beta$). Then, the band-passed EEG data are spatially filtered by a Laplacian filter centered on the selected electrode to improve the spatial resolution of the EEG recordings, avoiding the influence of noise coming from neighboring channels and thus addressing the volume conduction problem (this filtering procedure was carried out using the Biosig Toolbox, freely available at http://biosig.sourceforge.net, accessed on 30 January 2021). Further, the electrophysiological indicator set $\{\varphi_r\}$ based on phase synchronization is extracted using the MNE package in Python, while the graph predictors are estimated using the Brain Connectivity Toolbox (brain-connectivity-toolbox.net).
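A minimal Python sketch of this preprocessing, assuming a zero-phase Butterworth band-pass and a small-Laplacian reference built from a user-supplied neighbor map (the filter order and the helper names are our assumptions; the original work used the Biosig Toolbox for the Laplacian step):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(eeg, fs, lo=4.0, hi=40.0, order=5):
    # eeg: (n_channels, n_samples); zero-phase band-pass over the 4-40 Hz range
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, eeg, axis=-1)

def small_laplacian(eeg, neighbors):
    # neighbors: dict {channel index: list of adjacent channel indices}
    out = eeg.copy()
    for c, nbrs in neighbors.items():
        out[c] = eeg[c] - eeg[nbrs].mean(axis=0)  # subtract neighbor average
    return out
```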

3.2. Deep Network Regressor Set-Up and Performance Evaluation

The proposed Deep Regression Network architecture comprises the following layers (see Figure 3; a minimal code sketch follows the list):
IN: We consider two input-layer arrangements for the multivariate indicator $X(f) \in \mathbb{R}^{C \times M}$, $f \in F$ (with $C$ being the number of electrodes when using graph indexes and the number of links when using FC measures, respectively).
$\xi_1$: The first dense layer codes the relevant input patterns from the phase synchronization features. Here, we fix the number of neurons as $h = \lceil 1.5\,C \rceil$, where $\lceil \cdot \rceil$ stands for the ceiling operator. A tanh-based activation is employed to reveal non-linear relationships.
CT: A concatenate layer is applied to append the resulting feature maps from the set of patterns extracted in $\xi_1$. In particular, all phase synchronization-based features (coded as connectivity matrices) are stacked into a single block of size $h\,F$.
$\xi_2$: This fully-connected layer aims to preserve the predicted patterns assembled in the CT layer to feed a linear regressor. The number of neurons is fixed as $0.5\,h\,F$. Again, tanh is used as the activation function.
$\xi_3$: A one-neuron layer with linear activation is used to predict the MI skill value $y_m \in \mathbb{R}$.
DO: This dropout layer randomly skips neurons according to the drop rate, which we fix empirically at 0.2.
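The following Keras sketch assembles these layers for a set of frequency rhythms; the dropout placement, optimizer, and loss are our assumptions, since only the layer sizes, activations, and drop rate are stated above.

```python
import math
from tensorflow.keras import Model, layers

def build_drn(n_features, n_bands, drop_rate=0.2):
    # n_features: electrodes (graph indexes) or links (FC) per rhythm; n_bands: |F|
    h = math.ceil(1.5 * n_features)
    inputs, branches = [], []
    for _ in range(n_bands):
        inp = layers.Input(shape=(n_features,))
        branches.append(layers.Dense(h, activation="tanh")(inp))       # xi_1
        inputs.append(inp)
    ct = layers.Concatenate()(branches)                                 # CT, size h*F
    x = layers.Dropout(drop_rate)(ct)                                   # DO (Monte Carlo dropout)
    x = layers.Dense(int(0.5 * h * n_bands), activation="tanh")(x)      # xi_2
    out = layers.Dense(1, activation="linear")(x)                       # xi_3
    model = Model(inputs, out)
    model.compile(optimizer="adam", loss="mse")
    return model
```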
For measuring the relationship between the response variable and the composite predictor, we build the set $\{y_m^{\ast}, y_m\}_{m=1}^{M}$, where $y_m^{\ast} \in \mathbb{R}$ is computed using our deep learning regressor following a leave-one-out cross-validation strategy along the $M$ subjects. These measures account for the ability to predict the accuracy achieved by each individual; namely, for the computation of the value $y_m^{\ast}$, one individual is picked out as the testing set while the remaining ones form the training set. Then, the coefficient of determination (noted as $R^2 \in [0,1]$) is computed. Besides, a p-value is computed from a two-sided t-test whose null hypothesis is that the regression slope is zero [44]. It is worth noting that such a hypothesis test is used, as in state-of-the-art works [38,44], because our deep learning regressor aims to code the lack of consistency in the brain patterns among different subjects so as to favor a linear dependency between $y_m$ and $y_m^{\ast}$. Moreover, to provide a comparison with neural network-based regression strategies, the real-valued measures of Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) are also assessed, as carried out in [51,52]:
$$R^2 = 1 - \frac{\mathrm{var}\{y_m - y_m^{\ast} : m \in M\}}{\mathrm{var}\{y_m : m \in M\}}$$
$$\mathrm{MAE} = \mathbb{E}\left\{ \| y_m - y_m^{\ast} \|_1 : m \in M \right\}$$
$$\mathrm{RMSE} = \sqrt{\mathbb{E}\left\{ \| y_m - y_m^{\ast} \|_2^2 : m \in M \right\}}$$
where $\mathrm{var}\{\cdot\}$ stands for the variance operator.
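A sketch of this evaluation loop, assuming the hypothetical build_drn constructor above and per-rhythm predictor matrices; the epoch count is a placeholder.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut

def loocv_predictions(build_model, X_bands, y, epochs=200):
    # X_bands: list of (M, n_features) arrays, one per rhythm; y: (M,) accuracies
    y_pred = np.zeros_like(y, dtype=float)
    for train_idx, test_idx in LeaveOneOut().split(y):
        model = build_model()
        model.fit([Xb[train_idx] for Xb in X_bands], y[train_idx],
                  epochs=epochs, verbose=0)
        y_pred[test_idx] = model.predict(
            [Xb[test_idx] for Xb in X_bands], verbose=0).ravel()
    return y_pred

def regression_metrics(y, y_pred):
    # Per Equations (7)-(9)
    r2 = 1.0 - np.var(y - y_pred) / np.var(y)
    mae = np.mean(np.abs(y - y_pred))
    rmse = np.sqrt(np.mean((y - y_pred) ** 2))
    return r2, mae, rmse
```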

4. Results and Discussion

4.1. Baseline Linear Regression of Pre-Training Desynchronization

Here, we consider two scenarios of input predictor arrangements: (i) Matrix indicator, when computing one individual network vector to reflect the electrode contribution (termed multichannel); (ii) Vector indicator, holding a single scalar value of FC accomplished by each subject. For comparison purposes with similar reported works, we analyze two indicator approaches extracted from each individual: (a) Average that obtains the mean value over the electrode set, and (b) the channel with the best R-squared (best channel).
To evaluate the linear regression model, Table 1 displays the values of R-squared and their significance (namely, p-values), which are calculated using the Sklearn package in Python. As seen, both indicator approaches, multichannel and average, perform below the best-channel extraction procedure on the resting-state data, regardless of the brain graph predictor employed. Thus, selecting the best channel allows achieving higher values of R-squared with lower p-values within the considered frequency rhythms. In the case of the predictors directly extracted from the FC measures, PLV and wPLI, the upper triangular matrix is vectorized to feed the regression, yielding a performance similar to the graph indexes. Note that the best-channel approach is not reported for PLV and wPLI because of difficulties in its implementation. Consequently, the above prediction results show that DBI achieves a poor performance in predicting the pre-training desynchronization, at least using the baseline linear regression. Besides, the DBII collection gives a much worse prediction than DBI since it contains fewer trials per label (22 vs. 100), more subjects (105 vs. 50), and was acquired at a much lower sampling frequency (160 vs. 512 Hz). Still, the importance of considering EEG collections with elevated complexity remains a pressing issue since they are closer to the requirements of real MI applications.
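A hedged sketch of the best-channel baseline: a simple per-channel linear regression between a graph index and the bi-class accuracy, keeping the channel with the highest R-squared (scipy.stats.linregress provides both the correlation and the two-sided p-value); the function name and input layout are assumptions.

```python
import numpy as np
from scipy import stats

def best_channel_regression(Phi, acc):
    # Phi: (M, C) graph index per subject and channel; acc: (M,) bi-class accuracy
    best = None
    for c in range(Phi.shape[1]):
        res = stats.linregress(Phi[:, c], acc)
        r2, p = res.rvalue ** 2, res.pvalue
        if best is None or r2 > best[1]:
            best = (c, r2, p)
    return best  # (best channel index, R-squared, p-value)
```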
We also consider the case of multiple linear regression involving both graph index predictors. Table 2 shows that the benefit of multiple regression remains controversial for the tested EEG data because no suitable values of R-squared are accomplished using either scenario of input arrangement representation.

4.2. Network Regression Prediction of Pre-training Desynchronization

Next, we employ the multichannel matrix indicator extracted from the whole subject set to predict the pre-training desynchronization. We also analyze the joint characterization of both graph indexes (strength and clustering coefficient), concatenating their vector representations of each subject into a single supervector. Besides, we take advantage of the wide path of the Wide&Deep neural network to feed different training sets simultaneously, which are learned separately by the first layer. The next layer merges all input predictor sets, exploring common relations among them. In particular, we contrast the network regression fed by the connectivity indicators extracted within all three frequency rhythms against the widely used approach of extracting only from $\mu$ and $\beta_{low}$.
Besides the R-squared value, the estimates of MAE and RMSE are also computed to evaluate the DRN performance, accounting for the influence of the adopted leave-one-out cross-validation strategy. Concerning the examined graph indexes, Table 3 shows that the strength and clustering coefficient result in similar prediction performance. Also, their combination performs comparably to their separate training. In turn, the use of indicators directly based on phase synchronization yields assessments comparable to the graph indexes, meaning that the network regression can handle FC indicators with lower computational complexity.
Overall, the network regression reaches an R-squared value as high as 0.79 on average across the FC indicators extracted from DBI and 0.45 from DBII, outperforming all previous baseline linear regression outcomes displayed in Table 1. Moreover, the joint use of all frequency bands achieves better predictive performance than the two-rhythm ensemble in each EEG collection evaluated. Nevertheless, the prediction errors assessed by MAE and RMSE are still high. Moreover, the corresponding error values for DBI are lower than those estimated for DBII due to the latter’s greater variability.
Still, the DRN’s performance can be enhanced by evaluating its robustness against noisy input sets, which make the data-driven regressor prone to overfitted prediction assessments. To cope with this issue, we consider two strategies for improving the DRN robustness. Firstly, the thresholding method usually performed in functional connectivity analysis at the preprocessing stage is incorporated to remove false connections and noise. Following the procedure in [53], we fix the proportional thresholding rule to 0.7, preserving a sufficient amount of links under a value of p ≤ 0.1. Secondly, the leave-one-out cross-validation is further refined by incorporating Monte Carlo dropout layers, containing neurons with a probability of being ignored during training and validation. Therefore, both assessments in Equation (8) and Equation (9) are recomputed by averaging over Q iterations of the Monte Carlo dropout applied to the dense DRN layers. It should be noted that the dropout rate is expected to be low due to the relatively small amount of input data in both databases tested. So, we fix the dropout rate heuristically to 0.2 while the number of iterations is adjusted to Q = 100.
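Two small sketches of these strategies, assuming a symmetric connectivity matrix and a Keras model with dropout layers (keeping dropout stochastic at inference by calling the model with training=True); the "strongest-links" reading of the proportional rule and the helper names are our assumptions.

```python
import numpy as np

def proportional_threshold(W, proportion=0.7):
    # Keep the strongest `proportion` of off-diagonal links, zero out the rest
    W = W.copy()
    np.fill_diagonal(W, 0.0)
    upper = W[np.triu_indices_from(W, k=1)]
    k = int(round(proportion * upper.size))
    cutoff = np.sort(upper)[::-1][k - 1] if k > 0 else np.inf
    W[W < cutoff] = 0.0
    return W

def mc_dropout_predict(model, X_bands, Q=100):
    # Average Q stochastic forward passes with dropout kept active (training=True)
    preds = [np.ravel(model(X_bands, training=True).numpy()) for _ in range(Q)]
    return np.mean(preds, axis=0)
```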
Table 4 displays the regression performance computed by the leave-one-out cross-validation together with Monte Carlo dropout, revealing that the prediction improves notably because of the neurons with a probability of being ignored during training and validation. On average, the R-squared value decreases by 10% for DBI, but the prediction errors of MAE and RMSE fall by nearly half. In the case of DBII, the R-squared value rises by 20% while the prediction errors shrink by almost 40%. Once again, the extraction from three frequency rhythms is more effective. Additionally, the thresholding procedure seems to improve the prediction assessment (the corresponding input predictor sets are denoted with *), although only to some extent. As a result, the improved validation procedure combined with thresholding reduces the overfitting effect, making it more effective on input measures with higher variability.
For interpretation purposes, Figure 4 depicts the DRN weights that mostly support the prediction performance, produced by wPLI and PLV after introducing the leave-one-out cross-validation. For DBI, the former measure’s weights (left column) are robust and spread all over the scalp, as happens with the latter FC measure (third column). DBII faces a similar situation (fifth and seventh columns), though providing fewer estimates. This result can be explained by the higher complexity of DBII. Next, the use of leave-one-out cross-validation with Monte Carlo dropout allows the number of contributing links to decrease sharply, thereby avoiding DRN overfitting, as seen in all even columns regardless of the validated data collection. Note that in this case, all rhythms contribute, though to a different extent. Besides, most of the relevant links appear over the frontal-occipital and parietal-occipital areas directly related to the MI responses [54].

4.3. Enhanced DRN Prediction Assessment Using Subject Efficiency Clustering

To improve the regression performance, we account for the differences in neural responses among users categorized by their motor skills, a critical factor affecting the data-driven estimator in Equation (6). Therefore, we find the number of subject groups from the MI classification performance using a clustering approach. The Silhouette score-based cost is utilized, yielding three clusters, and the k-means algorithm is then applied to compute each subject’s membership.
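A minimal scikit-learn sketch of this step, selecting the number of clusters by silhouette score over the subjects' accuracies and then assigning memberships with k-means; the candidate range of cluster counts and the function name are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_subjects(acc, k_range=range(2, 6), seed=0):
    # acc: (M,) per-subject MI classification accuracies
    acc = np.asarray(acc).reshape(-1, 1)
    best_k, best_score = None, -1.0
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(acc)
        score = silhouette_score(acc, labels)
        if score > best_score:
            best_k, best_score = k, score
    labels = KMeans(n_clusters=best_k, n_init=10, random_state=seed).fit_predict(acc)
    return best_k, labels  # e.g., three groups of MI efficiency
```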
The classification performance is presented in Figure 5 for the studied databases. A feature selection strategy is applied over the well-known CSP-based features to predict the MI label with a Linear Discriminant Analysis classifier, adopting a 10×10-fold cross-validation scheme. The three obtained groups are depicted in color bars: Group I contains the best-performing subjects (green), Group II the subjects with intermediate accuracy (yellow), and Group III the worst-performing subjects (red). As seen, while DBI holds compactly distributed clusters, the neighboring groups are mixed in DBII. Of note, Group I includes the lowest number of individuals (10% in DBII), whereas Groups II and III involve the remaining part.
For these cases, we calculate the DRN prediction with the improved procedure. That is, we test the functional connectivity indicators of all three frequency rhythms extracted from resting-state data together with the leave-one-out cross-validation, including Monte Carlo dropout layers. Table 5 shows that the prediction analysis improves (the values of R-squared increase while the errors decrease) regardless of the subject group under consideration. Moreover, the improvement becomes higher for DBII, which means that the regression analysis using partitions can also be effective in databases with more complex EEG measurements.
The next aspect refers to the assessment of resting-state activation on a reduced number of electrodes. To this end, we evaluate the DRN performance for the predictor sets extracted over the sensorimotor area, selecting the following electrodes, as suggested in [44]: FC1, FC2, FC3, FC4, FC5, FC6, Cz, C1, C2, C3, C4, C5, C6, CPz, CP1, CP2, CP3, CP4, CP5, and CP6. In Table 5, the lower part of each database presents the assessments computed for the sensorimotor zone, showing that the channel selection strategy also improves the performance of every subject partition compared to the corresponding values in Table 4 obtained by the whole set of individuals. Nevertheless, incorporating the total number of electrodes increases the DRN prediction to a greater degree.
To provide further interpretation of the assessed regression weights, Figure 6 draws their estimates over the scalp surface for each subject partition. In DBI, the topograms of Groups I and II are similar regardless of the extracted rhythm, especially for both graph indexes’ relevance values. Instead, Group-III subjects behave differently, with weights that tend to be smaller and more spread out, as noticed on the topograms obtained by either connectivity measure. Note that the FC measures emphasize the sensorimotor area more than the graph indexes, which highlight the occipital and frontal zones related to attention and visual tasks. The topograms obtained for DBII are comparable to those of DBI but hold more variability.

4.4. Network Regression Prediction of Initial-Training Synchronization

To assess the initial training synchronization, we evaluate the DRN performance in predicting the MI accuracy at each run, which holds few trials and is affected by the learning changes. We evaluate two wakefulness data situations for extracting the FC predictor (namely, before the cue onset of the conventional MI trial timing, noted as $T_0$, and the resting state, noted as $T_{rs}$). As the FC predictor, we only consider the PLV measure that, together with the single-run target accuracy vector, feeds the DRN estimator. This choice of FC is conditioned by its feasibility of being extracted in single-trial mode from wakefulness data.
Table 6 presents the prediction capability of the synchronization behavior appraised by the DRN in each subject partition. Compared to the whole-set performance, the FC extracted from resting-state data increases each group’s prediction, meaning that the network can predict, to some extent, under a reduced number of trials per run. The alternative case of extracting PLV from $T_0$ enables a further improvement in each group’s prediction capability. It is worth noting that the R-squared value remains high over the run sequence, particularly in Group III, which contains the subjects that need more guidance for promoting BCI skills. Overall, this result for $T_0$ raises the possibility of carrying out the initial-training assessment without additional EEG data acquisition. In the database DBII with considerable variability, however, the target response vector computed within each run yields a very low accuracy due to the lack of statistics (only 14 trials per run), so that the DRN performance drops noticeably, resulting in R-squared values below 0.2.
To cope with this issue, we train the DRN estimator using a transfer-learning approach, combining the values learned from each run’s MI data with the weights inferred at the predecessor run. For DBI and DBII, Table 6 also displays the outcomes achieved by both EEG collections (extracting PLV from $T_0$) and shows that the DRN prediction ability outperforms, in all three subject partitions, the former strategies evaluated for predicting the initial-training synchronization. As a result, each subject’s competence can be prognosticated with a high enough level after each run to carry out procedures aimed at improving their performance in practicing MI tasks.
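A sketch of this run-wise warm-start, assuming the hypothetical build_drn constructor above and per-run predictor/target pairs; in practice, this loop would be wrapped in the leave-one-out scheme described earlier.

```python
def run_sequence_prediction(build_model, run_inputs, run_targets, epochs=100):
    # run_inputs: list over runs of per-rhythm predictor lists; run_targets: per-run accuracies
    prev_weights, predictions = None, []
    for X_bands, y in zip(run_inputs, run_targets):
        model = build_model()
        if prev_weights is not None:
            model.set_weights(prev_weights)   # transfer weights from the predecessor run
        model.fit(X_bands, y, epochs=epochs, verbose=0)
        predictions.append(model.predict(X_bands, verbose=0).ravel())
        prev_weights = model.get_weights()
    return predictions
```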
It is worth noting that the use of functional connectivity measures for BCI inefficiency prediction is still at an exploratory stage compared with the power-based predictors extracted from the sensorimotor rhythms. Table 7 compares several recently presented works that employ correlates between accuracy and SMR or FC indicators, showing that the latter predictors combined with the DRN are promising.

5. Concluding Remarks

Here, we develop a methodology for predicting the neurophysiological inefficiency of MI practice using EEG phase synchronization measures. A deep network regression model evaluates, over 150 subjects, the capability of predicting the pre-training desynchronization and the initial training synchronization. The prediction estimates should help determine whether a specific user needs to undergo additional calibration, supplying an interpretation of the subjects’ learning properties. Although our algorithm training can be time-consuming, growing considerably as the database size increases, such a training stage can be implemented offline. Once the Deep Regression Network’s weights are learned, our predictor evaluation is as fast as the baseline models, enabling real-time applications like the run-based prediction of initial-training synchronization.
From the obtained results of validation, the following aspects are to be emphasized:
Electrophysiological predictors based on functional connectivity. We explore the Phase Locking Value and Weighted Phase Locking Index as connectivity measures, together with their brain graph predictors (strength and clustering coefficient), to build a predictive regression model of BCI control. From the results obtained for the linear regression model (simple in Table 1 and multiple in Table 2), we can conclude that the FC predictors extracted from the resting state enable only fair values of prediction performance (R-squared below 0.4) with notable variations, regardless of the input arrangement configuration employed. This behavior worsens in DBII, an EEG collection with greater structural variability.
With regard to the DRN, all considered FC predictors present similar, even performance, reaching more competitive prediction values (see Table 3, Table 4 and Table 5). In terms of interpretation, the DRN weights mostly supporting the prediction performance are comparable between the wPLI and PLV predictors (see Figure 4). One more consideration is the limited effectiveness of the thresholding method, usually performed to remove false connections and noise. The thresholding performance may be jeopardized by the high intra-subject variability, demanding the application of subject-related tuning algorithms. Therefore, the network regression models ease the need for elaborate feature extraction procedures based on functional connectivity analysis. It is worth noting that the network regression estimator benefits from all considered rhythms (i.e., $\theta + \mu + \beta_{low}$), though each contributes differently.
Quality of network regression models. While widely-known procedures can appraise the statistical significance of linear regression models, assessing and enhancing the prediction quality of network regression models is a much more challenging task because of the risk of overfitting [56]. Here, we propose the leave-one-out cross-validation that includes Monte Carlo dropout layers (holding neurons with a probability of being ignored during training and validation) for decreasing the probability that the rules learned from specific training data cannot be generalized to new observations. As a result, the DRN prediction errors of MAE and RMSE fall by nearly half (see Table 4). Furthermore, including the Monte Carlo dropout layers allows selecting a reduced set of FC links, enhancing the prediction performance (see Figure 4). Consequently, this aspect improves the physiological interpretability of network regression models.
In practice, assessment of resting-state activation is frequently performed with a reduced number of electrodes to reduce computational complexity and the set-up time. To this end, we evaluate the DRN performance for the predictor sets extracted over the sensorimotor area, showing that the channel selection strategy underperforms the whole electrode set’s inclusion. This issue becomes more manifest in subjects with a more prominent EEG variability (that is, high BCI inefficiency). As suggested in [57], the learned network weights depend on the variability resulting from the channel selection used, making the prediction performance vary notably from one subject partition to another.
Regression assessments using subject clustering of BCI inefficiency. One more issue impacting the regression prediction is the user’s categorization depending on their SMR activity and classifier performance during the MI runs. The obtained results show that the prediction performance improves (the values of R-squared increase while the errors decrease) regardless of the subject group under consideration. Therefore, we hypothesize that the regression analysis using partitions may be more effective in databases with complex EEG measurements. Consequently, clustering combined with DNR models enhances understanding of the factors influencing subjects’ accuracy performance with significant BCI inefficiency.
DRN prediction with transfer learning. We also assess the DRN performance of predicting the MI accuracy at each run using the single-trial PLV predictor of wakefulness data. Moreover, we associate the values learned from each run’s MI data with the weights inferred at the predecessor run to deal with the few-trial sets. Thus, compared to the whole-set performance, the initial-training synchronization prediction increases in each group of individuals.
For future work, the authors plan to enhance FC predictors’ feature extraction, providing a better understanding of their impact and interaction on BCI-related tasks to identify potential non-learners.
Profiting from the MI-based BCI learning progression, dynamic network regression models must be developed to capture the latent trends of the sequence regression. In this line of analysis, the cluster-based enhancing procedure and the vector accuracy response should also account for the dynamic behavior of FC predictors. Intending to improve the DRN prediction, an extended panel of standardized and validated psychological questionnaires is to be included within the network estimator, accounting for user-specific characteristics like daily motor activity and age.
One more aspect to explore is to adjust the DNR pipeline to learn the weights for supporting prediction in a broader clinical application class, relying on the ability of deep learning architectures to extract complex random structures from EEG data.

6. Author Resume

  • Julian Caicedo-Acosta received his undergraduate degree in electronic engineering (2018) and his M.Sc. degree in engineering industrial automation (2019) from the Universidad Nacional de Colombia. Currently, he is a PhD student at the same university. His research interests include machine learning, signal processing and bioengineering.
  • German A. Castaño received his undergraduate degree in economics (1986) and business administration (1987), and his specialization in informatics administration (1993) from the Universidad Nacional de Colombia. Currently, he is a Professor in the Department of Administration at the Universidad Nacional de Colombia – Manizales. In addition, he is Chairman of the “Grupo de investigación Cultura de la Calidad en la Educación” at the same university. His research interests include quality of education, peace, and post-conflict.
  • Carlos Acosta-Medina received a B.S. degree in Mathematics from the Universidad de Sucre, Colombia, in 1996. He received an M.Sc. degree in mathematics in 2000 and a Ph.D. in mathematics in 2008, both from the Universidad Nacional de Colombia—Sede Medellín. Currently, he is an Associate Professor at the Universidad Nacional de Colombia—Sede Manizales. His research interests are Regularization, Conservation Laws, and Discrete Mollification.
  • Andres Alvarez-Meza received his undergraduate degree in electronic engineering (2009), his M.Sc. degree in engineering industrial automation (2011), and his Ph.D. in engineering—automatics (2015) from the Universidad Nacional de Colombia. Currently, he is a Professor in the Department of Electrical, Electronic, and Computation Engineering at the Universidad Nacional de Colombia – Manizales. His research interests include machine learning and signal processing.
  • German Castellanos-Dominguez received his undergraduate degree in radiotechnical systems and his Ph.D. in processing devices and systems from the Moscow Technical University of Communications and Informatics, in 1985 and 1990, respectively. Currently, he is a Professor in the Department of Electrical, Electronic, and Computation Engineering at the Universidad Nacional de Colombia, Manizales. In addition, he is Chairman of the GCPDS at the same university. His teaching and research interests include information and signal theory, digital signal processing, and bioengineering.

Author Contributions

Conceptualization, C.A.-M., A.A.-M., and G.C.-D.; methodology, J.C.-A., A.A.-M., and G.C.-D.; software, J.C.-A.; validation, J.C.-A. and A.A.-M.; formal analysis, C.A.-M., G.A.C., A.A.-M., and G.C.-D.; investigation, J.C.-A.; resources, G.A.C.; data curation, J.C.-A.; writing—original draft preparation, J.C.-A., G.C.-D.; writing—review and editing, A.A.-M., C.A.-M., and G.A.C.; visualization, J.C.-A.; supervision, A.A.-M. and G.C.-D.; project administration, C.A.-M. and G.A.C.; funding acquisition, G.A.C. All authors have read and agreed to the published version of the manuscript.

Funding

This manuscript results from the research developed under the program “PROGRAMA DE INVESTIGACIÓN RECONSTRUCCIÓN DEL TEJIDO SOCIAL EN ZONAS DE POSCONFLICTO EN COLOMBIA” (SIGP code 57579), within the research project “Fortalecimiento docente desde la alfabetización mediática Informacional y la CTeI, como estrategia didáctico-pedagógica y soporte para la recuperación de la confianza del tejido social afectado por el conflicto” (SIGP code 58950), funded under the Colombia Científica call, Contract No. FP44842-213-2018.

Institutional Review Board Statement

Ethical review and approval were waived for this study due to the use of public datasets previously submitted to ethical review.

Informed Consent Statement

Not applicable, since this study uses duly anonymized public databases.

Data Availability Statement

The databases used in this study are public and can be found at the following links: DBI: http://gigadb.org/dataset/100295, accessed on 30 January 2021; DBII: https://physionet.org/content/eegmmidb/1.0.0/, accessed on 30 January 2021.

Conflicts of Interest

The authors declare that this research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Kimura, T.; Nakano, W. Repetition of a cognitive task promotes motor learning. Hum. Mov. Sci. 2019, 66, 109–116. [Google Scholar] [CrossRef]
  2. Agosti, V.; Sirico, M. Motor imagery as a tool for motor learning and improving sports performance: A mini review on the state of the art. Sport Sci. 2020, 13, 13–17. [Google Scholar]
  3. Kraeutner, S.; Eppler, S.; Stratas, A.; Boe, S. Generate, maintain, manipulate? Exploring the multidimensional nature of motor imagery. Psychol. Sport Exerc. 2020, 48, 101673. [Google Scholar] [CrossRef]
  4. Bunno, Y. Effectiveness of Motor Imagery on Physical Therapy: Neurophysiological Aspects of Motor Imagery. In Physical Therapy Effectiveness; IntechOpen: London, UK, 2019. [Google Scholar]
  5. Sirico, F.; Romano, V. Effect of Video Observation and Motor Imagery on Simple Reaction Time in Cadet Pilots. J. Funct. Morphol. Kinesiol. 2020, 5, 89. [Google Scholar] [CrossRef]
  6. Nicholson, V.; Watts, N.; Chani, Y.; Keogh, J. Motor imagery training improves balance and mobility outcomes in older adults: A systematic review. J. Physiother. 2019, 65, 200–207. [Google Scholar] [CrossRef] [PubMed]
  7. Bhoyroo, R.; Hands, B.; Wilmut, K.; Hyde, C.; Wigley, A. Motor planning with and without motor imagery in children with Developmental Coordination Disorder. Acta Psychol. 2019, 199, 102902. [Google Scholar] [CrossRef]
  8. Teixeira-Machado, L.; Arida, R.; de Jesus Mari, J. Dance for neuroplasticity: A descriptive systematic review. Neurosci. Biobehav. Rev. 2019, 96, 232–240. [Google Scholar] [CrossRef]
  9. Canepa, P.; Sbragi, A.; Saino, F.; Biggio, M.; Bove, M.; Bisio, A. Thinking Before Doing: A Pilot Study on the Application of Motor Imagery as a Learning Method During Physical Education Lesson in High School. Front. Sport. Act. Living 2020, 2, 126. [Google Scholar] [CrossRef] [PubMed]
  10. Asensio-Cubero, J.; Gan, J.; Palaniappan, R. Multiresolution analysis over simple graphs for brain computer interfaces. J. Neural Eng. 2013, 10 4, 046014. [Google Scholar] [CrossRef]
  11. Singh, A.; Lal, S.; Guesgen, H. Small Sample Motor Imagery Classification Using Regularized Riemannian Features. IEEE Access 2019, 7, 46858–46869. [Google Scholar] [CrossRef]
  12. Miladinović, A.; Ajčević, M.; Jarmolowska, J.; Marusic, U.; Coluss, M.; Silveri, G.; Battaglini, P.; Accardo, A. Effect of power feature covariance shift on BCI spatial-filtering techniques: A comparative study. Comput. Methods Programs Biomed. 2021, 198, 105808. [Google Scholar] [CrossRef]
  13. Saha, S.; Baumert, M. Intra- and Inter-subject Variability in EEG-Based Sensorimotor Brain Computer Interface: A Review. Front. Comput. Neurosci. 2020, 13, 87. [Google Scholar] [CrossRef] [Green Version]
  14. Zakkay, E.; Abu-Rmileh, A.; Geva, A.B.; Shriki, O. Asynchronous Brain Computer Interfaces Using Echo State Networks. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–8. [Google Scholar]
  15. Botrel, L.; Acqualagna, L.; Blankertz, B.; Kubler, A. Short progressive muscle relaxation or motor coordination training does not increase performance in a brain-computer interface based on sensorimotor rhythms (SMR). Int. J. Psychophysiol. Off. J. Int. Organ. Psychophysiol. 2017, 121, 29–37. [Google Scholar] [CrossRef] [PubMed]
  16. Herszage, J.; Dayan, E.; Sharon, H.; Censor, N. Explaining Individual Differences in Motor Behavior by Intrinsic Functional Connectivity and Corticospinal Excitability. Front. Neurosci. 2020, 14, 76. [Google Scholar] [CrossRef] [Green Version]
  17. Zhang, R.; Xu, P.; Chen, R.; Li, F.; Guo, L.; Li, P.; Zhang, T.; Yao, D. Predicting Inter-session Performance of SMR-Based Brain–Computer Interface Using the Spectral Entropy of Resting-State EEG. Brain Topogr. 2015, 28, 680–690. [Google Scholar] [CrossRef] [PubMed]
  18. Velásquez-Martínez, L.; Caicedo-Acosta, J.; Castellanos-Domínguez, G. Entropy-Based Estimation of Event-Related De/Synchronization in Motor Imagery Using Vector-Quantized Patterns. Entropy 2020, 22, 703. [Google Scholar] [CrossRef] [PubMed]
  19. Bonita, J.; Ambolode, L.; Rosenberg, B.; Cellucci, C.; Watanabe, T.; Rapp, P.; Albano, A. Time domain measures of inter-channel EEG correlations: A comparison of linear, nonparametric and nonlinear measures. Cogn. Neurodynamics 2013, 8, 1–15. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Corsi, M.; Chavez, M.; Schwartz, D.; George, N.; Hugueville, L.; Kahn, A.; Dupont, S.; Bassett, D.; De Vico, F. Functional disconnection of associative cortical areas predicts performance during BCI training. NeuroImage 2020, 209, 116500. [Google Scholar] [CrossRef]
  21. Bakhshali, M.; Ebrahimi-Moghadam, A.; Khademi, A.; Moghimi, S. Coherence-based correntropy spectral density: A novel coherence measure for functional connectivity of EEG signals. Measurement 2019, 140, 354–364. [Google Scholar] [CrossRef]
  22. Gonuguntla, V.; Wang, Y.; Veluvolu, V. Event-Related Functional Network Identification: Application to EEG Classification. IEEE J. Sel. Top. Signal Process. 2016, 10, 1284–1294. [Google Scholar] [CrossRef]
  23. Filho, C.; Attux, R.; Castellano, G. Can graph metrics be used for EEG-BCIs based on hand motor imagery? Biomed. Signal Process. Control. 2018, 40, 359–365. [Google Scholar] [CrossRef]
  24. Feng, F.; Qian, L.; Hu, H.; Sun, Y. Functional Connectivity for Motor Imaginary Recognition in Brain-computer Interface. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON, Canada, 11–14 October 2020; pp. 3678–3682. [Google Scholar]
  25. Manuel, A.; Guggisberg, A.; Thézé, R.; Turri, F.; Schnider, A. Resting-state connectivity predicts visuo-motor skill learning. NeuroImage 2018, 176, 446–453. [Google Scholar] [CrossRef]
  26. Yoon, J.; Lee, M. Effective Correlates of Motor Imagery Performance based on Default Mode Network in Resting-State. In Proceedings of the 2020 8th International Winter Conference on Brain-Computer Interface (BCI), Gangwon, Korea, 26–28 February 2020; pp. 1–5. [Google Scholar]
  27. Gonzalez-Astudillo, J.; Cattai, T.; Bassignana, G.; Corsi, M.; De Vico Fallani, F. Network-based brain computer interfaces: Principles and applications. J. Neural Eng. 2020. [Google Scholar] [CrossRef] [PubMed]
  28. Hamedi, H.; Salleh, S.; Noor, A. Electroencephalographic Motor Imagery Brain Connectivity Analysis for BCI: A Review. Neural Comput. 2016, 28, 999–1041. [Google Scholar] [CrossRef] [PubMed]
  29. Li, X.; Wu, Y.; Wei, M.; Guo, Y.; Yu, Z.; Wang, H.; Li, Z.; Fan, H. A novel index of functional connectivity: Phase lag based on Wilcoxon signed rank test. Cogn. Neurodynamics 2020, 1–16. [Google Scholar] [CrossRef]
  30. Orsborn, L.; Pesaran, B. Parsing learning in networks using brain–machine interfaces. Curr. Opin. Neurobiol. 2017, 46, 76–83. [Google Scholar] [CrossRef] [PubMed]
  31. Robinson, N.; Thomas, K.; Vinod, A. Neurophysiological predictors and spectro-spatial discriminative features for enhancing SMR-BCI. J. Neural Eng. 2018, 15 6, 066032. [Google Scholar] [CrossRef]
  32. Sugata, H.; Yagi, K.; Yazawa, S.; Nagase, Y.; Tsuruta, K.; Ikeda, T.; Nojima, I.; Hara, M.; Matsushita, M.; Kawakami, K.; et al. Role of beta-band resting-state functional connectivity as a predictor of motor learning ability. NeuroImage 2020, 210, 116562. [Google Scholar] [CrossRef] [PubMed]
  33. Wang, S.; Williams, J.; Wilmut, K. Constraints on motor planning across the life span: Physical, cognitive, and motor factors. Psychol. Aging 2019, 35, 421. [Google Scholar] [CrossRef]
  34. He, T.; Kong, R.; Holmes, A.; Nguyen, M.; Sabuncu, M.; Eickhoff, S.; Bzdok, D.; Feng, J.; Yeo, B. Deep neural networks and kernel regression achieve comparable accuracies for functional connectivity prediction of behavior and demographics. NeuroImage 2020, 206, 116276. [Google Scholar] [CrossRef] [PubMed]
  35. Cha, H.; Han, C.; Im, C. Prediction of Individual User’s Dynamic Ranges of EEG Features from Resting-State EEG Data for Evaluating Their Suitability for Passive Brain—Computer Interface Applications. Sensors 2020, 20, 988. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Li, F.; Yi, C.; Song, L.; Jiang, Y.; Peng, W.; Si, Y.; Zhang, T.; Zhang, R.; Yao, D.; Zhang, Y.; et al. Brain Network Reconfiguration During Motor Imagery Revealed by a Large-Scale Network Analysis of Scalp EEG. Brain Topogr. 2018, 32, 304–314. [Google Scholar] [CrossRef] [PubMed]
  37. Hayashi, M.; Tsuchimoto, S.; Mizuguchi, N.; Miyatake, M.; Kasuga, S.; Ushiba, J. Two-stage regression of high-density scalp electroencephalograms visualizes force regulation signaling during muscle contraction. J. Neural Eng. 2019, 16, 056020. [Google Scholar] [CrossRef] [PubMed]
  38. Shu, X.; Chen, S.; Yao, L.; Sheng, X.; Zhang, D.; Jiang, N.; Jia, J.; Zhu, X. Fast Recognition of BCI-Inefficient Users Using Physiological Features from EEG Signals: A Screening Study of Stroke Patients. Front. Neurosci. 2018, 12, 93. [Google Scholar] [CrossRef] [Green Version]
  39. Chen, G.; Ward, B.; Xie, C.; Li, W.; Chen, G.; Goveas, J.; Antuono, P.; Li, S. A clustering-based method to detect functional connectivity differences. NeuroImage 2012, 61, 56–61. [Google Scholar]
  40. Kim, Y.; Lee, S.; Kim, H.; Lee, S.; Lee, S.; Kim, D. Reduced Burden of Individual Calibration Process in Brain-Computer Interface by Clustering the Subjects based on Brain Activation. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Toronto, ON, Canada, 6–9 October 2019; pp. 2139–2143. [Google Scholar]
  41. Sannelli, C.; Vidaurre, C.; Müller, K.; Blankertz, B. A large scale screening study with a SMR-based BCI: Categorization of BCI users and differences in their SMR activity. PLoS ONE 2019, 14, e0207351. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  42. Fiederer, L.; Völker, M.; Schirrmeister, R.; Burgard, W.; Boedecker, J.; Ball, T. Hybrid Brain-Computer-Interfacing for Human-Compliant Robots: Inferring Continuous Subjective Ratings With Deep Regression. Front. Neurorobotics 2019, 13, 76. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Andrade, C. The P Value and Statistical Significance: Misunderstandings, Explanations, Challenges, and Alternatives. Indian J. Psychol. Med. 2019, 41, 210–215. [Google Scholar] [CrossRef]
  44. Lee, M.; Yoon, Y.; Lee, S. Predicting Motor Imagery Performance From Resting-State EEG Using Dynamic Causal Modeling. Front. Hum. Neurosci. 2020, 14, 321. [Google Scholar] [CrossRef]
  45. Mormann, F.; Lehnertz, K.; David, P.; Elger, C.E. Mean phase coherence as a measure for phase synchronization and its application to the EEG of epilepsy patients. Phys. Nonlinear Phenom. 2000, 144, 358–369. [Google Scholar] [CrossRef]
  46. Bruns, A. Fourier-, Hilbert-and wavelet-based signal analysis: Are they really different approaches? J. Neurosci. Methods 2004, 137, 321–332. [Google Scholar] [CrossRef] [PubMed]
  47. Vinck, M.; Oostenveld, R.; Van Wingerden, M.; Battaglia, F.; Pennartz, C.M. An improved index of phase-synchronization for electrophysiological data in the presence of volume-conduction, noise and sample-size bias. Neuroimage 2011, 55, 1548–1565. [Google Scholar] [CrossRef]
  48. García-Prieto, J.; Bajo, R.; Pereda, E. Efficient computation of functional brain networks: Towards real-time functional connectivity. Front. Neuroinform. 2017, 11, 8. [Google Scholar] [CrossRef] [PubMed]
  49. Cheng, H.; Koc, L.; Harmsen, J.; Shaked, T.; Aradhye, H.; Anderson, G.; Corrado, G.; Chai, W.; Ispir, M.; Anil, R.; et al. Wide & deep learning for recommender systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, Boston, MA, USA, 15 September 2016; pp. 7–10. [Google Scholar]
  50. Cho, H.; Ahn, M.; Ahn, S.; Kwon, M.; Jun, S. EEG datasets for motor imagery brain-computer interface. GigaScience 2017, 6, gix034. [Google Scholar] [CrossRef]
  51. Mingjun, L.; Junxing, W. An Empirical Comparison of Multiple Linear Regression and Artificial Neural Network for Concrete Dam Deformation Modelling. Math. Probl. Eng. 2019, 2019, 13. [Google Scholar]
  52. Chen, Y.; Meng, L.; Zhang, J. Graph Neural Lasso for Dynamic Network Regression. arXiv 2019, arXiv:1907.11114. [Google Scholar]
  53. Padilla-Buritica, J.; Hurtado, J.; Castellanos-Dominguez, G. Supervised piecewise network connectivity analysis for enhanced confidence of auditory oddball tasks. Biomed. Signal Process. Control 2019, 52, 341–346. [Google Scholar] [CrossRef]
  54. Zhang, Y.; Xu, P.; Guo, D.; Yao, D. Prediction of SSVEP-based BCI performance by the resting-state EEG network. J. Neural Eng. 2013, 10, 066017. [Google Scholar] [CrossRef] [Green Version]
  55. Vidaurre, C.; Haufe, S.; Jorajuría, T.; Müller, K.R.; Nikulin, V.V. Sensorimotor functional connectivity: A neurophysiological factor related to BCI performance. Front. Neurosci. 2020, 14, 1278. [Google Scholar] [CrossRef]
  56. Calesella, F.; Testolin, A.; De Filippo De Grazia, M.; Zorzi, M. A Systematic Assessment of Feature Extraction Methods for Robust Prediction of Neuropsychological Scores from Functional Connectivity Data. In Brain Informatics; Mahmud, M., Vassanelli, S.O., Kaiser, M., Zhong, N., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 29–40. [Google Scholar]
  57. Baig, M.; Aslam, N.; Shum, H. Filtering techniques for channel selection in motor imagery EEG applications: A survey. Artif. Intell. Rev. 2019, 53, 1207–1232. [Google Scholar] [CrossRef] [Green Version]
Figure 3. Deep Network Regressor’s architecture.
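For illustration only, a minimal Keras sketch of a Wide&Deep regressor of the kind depicted in Figure 3 is given below; the layer sizes, dropout rate, and input dimensions are assumptions made for the example rather than the exact configuration reported here.

```python
# Minimal Wide&Deep regression sketch (illustrative sizes, not the exact reported setup).
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_wide_deep(n_wide, n_deep, dropout_rate=0.5):
    # Wide path: a few band-wise FC summaries fed almost directly to the output.
    wide_in = layers.Input(shape=(n_wide,), name="wide_fc_features")
    # Deep path: the full connectivity vector processed by dense layers with dropout
    # (the dropout layers also allow Monte Carlo sampling at prediction time).
    deep_in = layers.Input(shape=(n_deep,), name="deep_fc_features")
    x = layers.Dense(128, activation="relu")(deep_in)
    x = layers.Dropout(dropout_rate)(x)
    x = layers.Dense(64, activation="relu")(x)
    x = layers.Dropout(dropout_rate)(x)
    merged = layers.concatenate([wide_in, x])
    out = layers.Dense(1, activation="linear", name="predicted_accuracy")(merged)
    model = Model(inputs=[wide_in, deep_in], outputs=out)
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

# Example: 3 band-wise summaries plus the vectorized upper triangle of a
# 64-channel FC matrix (64*63/2 = 2016 entries); both dimensions are illustrative.
model = build_wide_deep(n_wide=3, n_deep=2016)
model.summary()
```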
Figure 4. DNR weights mostly supporting the prediction performance, learned for wPLI and PLV predictors using two validation scenarios: leave-one-out cross-validation and leave-one-out cross-validation with Monte Carlo dropout (noted with †).
Figure 5. Partitions of individuals clustered by the CSP-based accuracy within the motor imagery interval. Each subject’s performance is colored according to the appraised inefficiency partition: Group I (green), Group II (yellow), and Group III (red).
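As a toy illustration of how subjects can be partitioned into three groups from their MI classification accuracy, the sketch below uses k-means on synthetic accuracies; the choice of k-means and the mapping of clusters to group labels are assumptions of this example, not necessarily the exact rule applied here.

```python
# Sketch: partition subjects into three groups from per-subject MI accuracy.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
accuracy = rng.uniform(0.45, 0.95, size=50)          # placeholder per-subject accuracies

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(accuracy.reshape(-1, 1))
# Relabel clusters so that Group I holds the highest accuracies (assumed ordering).
order = np.argsort(-km.cluster_centers_.ravel())
group = np.empty_like(km.labels_)
for new_label, old_label in enumerate(order):
    group[km.labels_ == old_label] = new_label + 1   # groups numbered I..III

for g in (1, 2, 3):
    print(f"Group {g}: {np.sum(group == g)} subjects, "
          f"mean accuracy {accuracy[group == g].mean():.2f}")
```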
Figure 6. Topograms depicting the DNR weights obtained for each inefficiency subject partition using the FC predictors.
Table 1. Prediction performance of the FC predictors extracted from the resting-state data, employing the matrix indicator, the average, and the best index computed across the whole channel set. The notation mean stands for the R² indicator averaged across the frequency bands; na stands for not applicable.
Strength
Ω [Hz] | DBI multichannel | DBI average | DBI best channel | DBII multichannel | DBII average | DBII best channel (each cell: R², p-val)
θ | 0.472, >0.01 | 0.088, 0.542 | 0.322, 0.022 | 0.090, 0.360 | 0.098, 0.318 | 0.223, 0.022
μ | 0.296, 0.036 | 0.268, 0.059 | 0.382, 0.006 | 0.155, 0.113 | 0.169, 0.083 | 0.355, 0.002
β low | 0.069, 0.633 | 0.131, 0.361 | 0.348, 0.013 | 0.032, 0.742 | 0.124, 0.206 | 0.153, 0.118
mean | 0.279 | 0.162 | 0.351 | 0.092 | 0.130 | 0.243
Clustering coefficient
θ | 0.180, 0.210 | 0.177, 0.217 | 0.430, 0.001 | 0.205, 0.035 | 0.055, 0.573 | 0.231, 0.017
μ | 0.178, 0.215 | 0.295, 0.037 | 0.363, 0.009 | 0.139, 0.154 | 0.150, 0.124 | 0.387, >0.01
β low | 0.167, 0.316 | 0.307, 0.029 | 0.255, 0.073 | 0.225, 0.020 | 0.143, 0.145 | 0.164, 0.092
mean | 0.167 | 0.260 | 0.349 | 0.190 | 0.117 | 0.261
wPLI
θ | 0.104, 0.468 | 0.028, 0.845 | na | 0.248, 0.010 | 0.300, 0.001 | na
μ | 0.418, 0.002 | 0.352, 0.012 | na | 0.075, 0.444 | 0.482, >0.01 | na
β low | 0.269, 0.058 | 0.203, 0.155 | na | 0.202, 0.037 | 0.346, >0.01 | na
mean | 0.263 | 0.194 | na | 0.175 | 0.376 | na
PLV
θ | 0.471, >0.01 | 0.406, 0.003 | na | 0.086, 0.378 | 0.088, 0.368 | na
μ | 0.454, 0.001 | 0.627, 0 | na | 0.066, 0.497 | 0.261, 0.006 | na
β low | 0.425, 0.002 | 0.305, 0.030 | na | 0.064, 0.514 | 0.317, 0.001 | na
mean | 0.450 | 0.445 | na | 0.074 | 0.222 | na
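For reference, the phase-based FC predictors in Table 1 can be computed along the lines of the standard Hilbert-based definitions of PLV [45] and wPLI [47]; the NumPy/SciPy sketch below omits band-pass filtering, epoching, and channel selection, and the toy signals are placeholders.

```python
# Sketch: PLV and wPLI between two band-limited EEG signals (single trial).
# Standard Hilbert-based definitions; preprocessing (filtering, epoching) is omitted.
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value between two 1-D band-limited signals."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

def wpli(x, y):
    """Weighted phase-lag index from the analytic cross-spectrum (Vinck et al., 2011)."""
    cross = hilbert(x) * np.conj(hilbert(y))
    imag = np.imag(cross)
    denom = np.mean(np.abs(imag))
    return np.abs(np.mean(imag)) / denom if denom > 0 else 0.0

# Toy usage: two noisy, partially coupled oscillations around 10 Hz (mu band).
fs = 512
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.8) + 0.5 * np.random.randn(t.size)
print(f"PLV = {plv(x, y):.3f}, wPLI = {wpli(x, y):.3f}")
```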
Table 2. Prediction performance of the multiple linear regression for the graph indexes extracted from the resting-state data.
Ω | DBI average | DBI multichannel | DBII average | DBII multichannel (each cell: R², p-val)
θ | 0.192, 0.180 | 0.312, 0.026 | 0.234, 0.102 | 0.085, 0.387
μ | 0.134, 0.351 | 0.173, 0.228 | 0.213, 0.137 | 0.121, 0.222
β low | 0.032, 0.822 | 0.008, 0.951 | 0.101, 0.484 | 0.116, 0.239
mean | 0.120 | 0.164 | 0.183 | 0.107
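A multiple linear regression baseline of the kind reported in Table 2 can be reproduced in a few lines with scikit-learn; the leave-one-out evaluation below mirrors the validation scheme used for the deep regressor, while the inputs and targets are synthetic placeholders.

```python
# Sketch: leave-one-out multiple linear regression on per-subject graph indexes.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))                     # placeholder: one graph index per rhythm
y = 0.4 * X[:, 1] + 0.1 * rng.normal(size=50)    # placeholder MI accuracy target

preds = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    preds[test_idx] = model.predict(X[test_idx])

print(f"LOO R^2 = {r2_score(y, preds):.3f}")
```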
Table 3. Prediction performance of the Wide&Deep neural network regression fed by the tested functional connectivity predictors extracted from the resting-state data, contrasting two wide-path configurations of rhythm extraction: all considered rhythms (θ + μ + β low) and μ + β low.
θ + μ + β low
Ω | DBI R² | DBI RMSE | DBI MAE | DBII R² | DBII RMSE | DBII MAE
Strength | 0.787 | 0.488 | 0.424 | 0.412 | 0.440 | 0.393
Clus. Coefficient | 0.804 | 0.444 | 0.367 | 0.472 | 0.471 | 0.436
Both indexes | 0.800 | 0.419 | 0.353 | 0.321 | 0.367 | 0.314
wPLI | 0.791 | 0.476 | 0.396 | 0.591 | 0.448 | 0.391
PLV | 0.785 | 0.468 | 0.395 | 0.465 | 0.421 | 0.374
mean | 0.793 | 0.454 | 0.386 | 0.452 | 0.390 | 0.343
μ + β low
Strength | 0.766 | 0.470 | 0.388 | 0.525 | 0.406 | 0.342
Clus. Coefficient | 0.695 | 0.475 | 0.396 | 0.456 | 0.372 | 0.316
Both indexes | 0.788 | 0.464 | 0.382 | 0.392 | 0.404 | 0.338
wPLI | 0.772 | 0.544 | 0.480 | 0.525 | 0.354 | 0.305
PLV | 0.700 | 0.426 | 0.353 | 0.362 | 0.405 | 0.350
mean | 0.744 | 0.485 | 0.410 | 0.429 | 0.367 | 0.309
Table 4. Prediction performance of the Wide&Deep neural network regression fed by the tested functional connectivity indicators extracted from the resting-state data, employing the validation procedure that includes the Monte Carlo dropout layers combined with the threshold procedure. The input predictor sets denoted with * are thresholded. Bold numbers show the best result of each experiment.
θ + μ + β low
Ω | DBI R² | DBI RMSE | DBI MAE | DBII R² | DBII RMSE | DBII MAE
Strength | 0.731 | 0.247 | 0.212 | 0.341 | 0.293 | 0.247
Strength * | 0.763 | 0.184 | 0.142 | 0.646 | 0.216 | 0.179
Clus. Coefficient | 0.453 | 0.238 | 0.192 | 0.635 | 0.240 | 0.201
Clus. Coefficient * | 0.753 | 0.208 | 0.174 | 0.698 | 0.242 | 0.202
Both indexes | 0.810 | 0.145 | 0.110 | 0.741 | 0.165 | 0.127
Both indexes * | 0.672 | 0.186 | 0.145 | 0.701 | 0.167 | 0.132
PLV | 0.730 | 0.183 | 0.140 | 0.519 | 0.391 | 0.355
wPLI | 0.795 | 0.146 | 0.112 | 0.573 | 0.216 | 0.173
mean | 0.713 | 0.192 | 0.153 | 0.606 | 0.241 | 0.202
μ + β low
Strength | 0.666 | 0.200 | 0.151 | 0.270 | 0.252 | 0.198
Strength * | 0.781 | 0.168 | 0.128 | 0.439 | 0.277 | 0.239
Clus. Coefficient | 0.795 | 0.155 | 0.126 | 0.816 | 0.202 | 0.177
Clus. Coefficient * | 0.676 | 0.186 | 0.155 | 0.771 | 0.180 | 0.150
Both indexes | 0.597 | 0.208 | 0.158 | 0.629 | 0.232 | 0.187
Both indexes * | 0.666 | 0.182 | 0.150 | 0.693 | 0.207 | 0.170
PLV | 0.742 | 0.210 | 0.173 | 0.344 | 0.272 | 0.213
wPLI | 0.864 | 0.123 | 0.098 | 0.763 | 0.212 | 0.185
mean | 0.660 | 0.226 | 0.153 | 0.547 | 0.229 | 0.189
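The Monte Carlo dropout validation referenced in Table 4 amounts to keeping the dropout layers active at prediction time and averaging several stochastic forward passes; a minimal sketch is given below, where the number of passes is an arbitrary choice and model refers to any Keras regressor with dropout layers (such as the earlier architecture sketch).

```python
# Sketch: Monte Carlo dropout prediction by averaging stochastic forward passes.
import numpy as np

def mc_dropout_predict(model, inputs, n_passes=50):
    # training=True keeps the Dropout layers active, so each pass is stochastic.
    passes = np.stack(
        [model(inputs, training=True).numpy() for _ in range(n_passes)], axis=0
    )
    # Predictive mean and a simple per-sample uncertainty estimate.
    return passes.mean(axis=0), passes.std(axis=0)

# Example (shapes follow the earlier two-input sketch):
# mean_pred, uncertainty = mc_dropout_predict(model, [X_wide_test, X_deep_test])
```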
Table 5. DNR performance of the functional connectivity (FC) predictors derived from the resting-state data, achieved by each subject partition. Prediction is carried out employing Monte Carlo dropout and the wide-path configuration of all extracted rhythms ( θ + μ + β low ).
DBI
Ω | I (8) | II (15) | III (27) (each cell: R², RMSE, MAE)
Strength | 0.625, 0.311, 0.255 | 0.868, 0.140, 0.111 | 0.799, 0.152, 0.115
Clus. Coefficient | 0.792, 0.260, 0.170 | 0.853, 0.160, 0.105 | 0.777, 0.179, 0.137
Both indexes | 0.828, 0.248, 0.198 | 0.767, 0.229, 0.166 | 0.750, 0.169, 0.134
wPLI | 0.872, 0.228, 0.190 | 0.907, 0.141, 0.113 | 0.958, 0.073, 0.061
PLV | 0.947, 0.164, 0.124 | 0.883, 0.140, 0.112 | 0.939, 0.095, 0.072
mean | 0.813, 0.242, 0.187 | 0.856, 0.162, 0.121 | 0.845, 0.134, 0.104
Gain [%] | +14, +74, 78 | +20, −16, −21 | +18, −30, −32
Strength | 0.860, 0.210, 0.191 | 0.636, 0.265, 0.166 | 0.788, 0.161, 0.136
Clus. Coefficient | 0.973, 0.109, 0.075 | 0.690, 0.214, 0.160 | 0.798, 0.142, 0.108
Both indexes | 0.946, 0.192, 0.147 | 0.819, 0.171, 0.115 | 0.671, 0.190, 0.146
wPLI | 0.570, 0.317, 0.241 | 0.921, 0.109, 0.080 | 0.830, 0.152, 0.119
PLV | 0.841, 0.189, 0.103 | 0.802, 0.185, 0.141 | 0.739, 0.167, 0.135
mean | 0.838, 0.203, 0.151 | 0.773, 0.188, 0.132 | 0.765, 0.162, 0.128
Gain [%] | +17, +6, −1 | +8, −2, −14 | +7, −15, −16
DBII
Ω | I (11) | II (43) | III (51) (each cell: R², RMSE, MAE)
Strength | 0.878, 0.139, 0.119 | 0.723, 0.197, 0.163 | 0.837, 0.193, 0.157
Clus. Coefficient | 0.940, 0.089, 0.075 | 0.618, 0.213, 0.173 | 0.885, 0.128, 0.099
Both indexes | 0.884, 0.127, 0.090 | 0.499, 0.297, 0.240 | 0.846, 0.157, 0.13
wPLI | 0.971, 0.066, 0.039 | 0.716, 0.185, 0.150 | 0.749, 0.142, 0.110
PLV | 0.824, 0.166, 0.118 | 0.804, 0.161, 0.137 | 0.833, 0.130, 0.103
mean | 0.899, 0.117, 0.088 | 0.672, 0.210, 0.172 | 0.830, 0.150, 0.119
Gain [%] | +48, −51, −56 | +11, −13, −15 | +37, −38, −41
Strength | 0.806, 0.164, 0.105 | 0.626, 0.288, 0.250 | 0.715, 0.268, 0.231
Clus. Coefficient | 0.803, 0.183, 0.147 | 0.713, 0.224, 0.184 | 0.673, 0.673, 0.308
Both indexes | 0.889, 0.159, 0.113 | 0.130, 0.286, 0.229 | 0.631, 0.221, 0.167
wPLI | 0.618, 0.286, 0.238 | 0.656, 0.198, 0.154 | 0.638, 0.210, 0.173
PLV | 0.882, 0.128, 0.092 | 0.740, 0.132, 0.100 | 0.825, 0.155, 0.125
mean | 0.799, 0.184, 0.139 | 0.573, 0.225, 0.183 | 0.696, 0.305, 0.200
Gain [%] | +32, −24, −31 | −5, −7, −9 | +15, +26, −1
Table 6. DNR performance in predicting initial-training synchronization, employing Monte Carlo dropout and the wide-path configuration of all extracted rhythms ( θ + μ + β low ). The network regression is fed by the PLV predictor as the only single-trial FC indicator.
DBI
Run | I (8) | II (15) | III (27) | All Set (each cell: R², RMSE, MAE)
Resting-state vs. single-run accuracy target
1 | 0.601, 0.245, 0.192 | 0.627, 0.214, 0.143 | 0.888, 0.126, 0.085 | 0.814, 0.192, 0.158
2 | 0.897, 0.200, 0.158 | 0.745, 0.214, 0.176 | 0.874, 0.117, 0.087 | 0.795, 0.141, 0.109
3 | 0.862, 0.184, 0.121 | 0.655, 0.275, 0.146 | 0.910, 0.123, 0.099 | 0.756, 0.226, 0.226
4 | 0.917, 0.172, 0.123 | 0.972, 0.069, 0.053 | 0.936, 0.110, 0.090 | 0.890, 0.109, 0.089
5 | 0.550, 0.293, 0.217 | 0.906, 0.131, 0.106 | 0.819, 0.171, 0.130 | 0.689, 0.231, 0.190
mean | 0.765, 0.219, 0.162 | 0.781, 0.181, 0.125 | 0.885, 0.129, 0.098 | 0.789, 0.180, 0.154
Single-run before-onset interval vs. single-run accuracy target
1 | 0.733, 0.318, 0.280 | 0.751, 0.171, 0.110 | 0.914, 0.110, 0.080 | 0.809, 0.134, 0.104
2 | 0.966, 0.113, 0.074 | 0.782, 0.187, 0.116 | 0.777, 0.209, 0.162 | 0.791, 0.140, 0.107
3 | 0.900, 0.189, 0.129 | 0.678, 0.248, 0.135 | 0.855, 0.167, 0.123 | 0.806, 0.168, 0.126
4 | 0.924, 0.134, 0.100 | 0.943, 0.120, 0.093 | 0.893, 0.102, 0.083 | 0.847, 0.131, 0.110
5 | 0.678, 0.265, 0.233 | 0.911, 0.130, 0.102 | 0.741, 0.192, 0.137 | 0.835, 0.165, 0.136
mean | 0.840, 0.204, 0.163 | 0.813, 0.171, 0.111 | 0.836, 0.156, 0.117 | 0.818, 0.148, 0.117
Single-run before-onset interval vs. all-run accuracy target using transfer learning
1 | 0.875, 0.187, 0.142 | 0.864, 0.144, 0.109 | 0.956, 0.077, 0.058 | 0.848, 0.129, 0.101
2 | 0.954, 0.143, 0.102 | 0.918, 0.122, 0.093 | 0.947, 0.102, 0.081 | 0.813, 0.150, 0.127
3 | 0.965, 0.137, 0.094 | 0.943, 0.105, 0.086 | 0.924, 0.089, 0.067 | 0.758, 0.156, 0.110
4 | 0.983, 0.107, 0.089 | 0.932, 0.109, 0.089 | 0.941, 0.088, 0.065 | 0.786, 0.192, 0.160
5 | 0.981, 0.105, 0.085 | 0.952, 0.095, 0.070 | 0.942, 0.086, 0.059 | 0.733, 0.248, 0.210
mean | 0.952, 0.136, 0.102 | 0.922, 0.115, 0.089 | 0.942, 0.085, 0.066 | 0.788, 0.175, 0.142
DBII
Run | I | II | III | All Set (each cell: R², RMSE, MAE)
Single-run before-onset interval vs. all-run accuracy target using transfer learning
1 | 0.746, 0.201, 0.127 | 0.629, 0.185, 0.152 | 0.708, 0.175, 0.138 | 0.340, 0.236, 0.190
2 | 0.792, 0.184, 0.150 | 0.630, 0.220, 0.173 | 0.596, 0.225, 0.176 | 0.482, 0.282, 0.233
3 | 0.877, 0.146, 0.115 | 0.755, 0.168, 0.134 | 0.577, 0.225, 0.183 | 0.441, 0.269, 0.231
mean | 0.805, 0.177, 0.131 | 0.671, 0.191, 0.153 | 0.627, 0.208, 0.166 | 0.421, 0.262, 0.218
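The transfer learning between neighboring runs used in Table 6 can be sketched as warm-starting the regressor trained on one run and briefly fine-tuning it on the next; the learning rate, number of epochs, and batch size below are illustrative assumptions, and build_wide_deep refers to the earlier architecture sketch.

```python
# Sketch: transfer learning between neighboring runs by fine-tuning a pre-trained regressor.
import tensorflow as tf

def fine_tune_on_next_run(pretrained, X_next, y_next, lr=1e-4, epochs=20):
    # Keep the previously learned weights and continue training with a smaller
    # learning rate, so the model adapts to the new run without discarding the old one.
    pretrained.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr), loss="mse")
    pretrained.fit(X_next, y_next, epochs=epochs, batch_size=8, verbose=0)
    return pretrained

# Example usage with the two-input sketch from Figure 3 (all arrays are placeholders):
# model_run1 = build_wide_deep(n_wide=3, n_deep=2016)
# model_run1.fit([Xw_run1, Xd_run1], y_run1, epochs=100, verbose=0)
# model_run2 = fine_tune_on_next_run(model_run1, [Xw_run2, Xd_run2], y_run2)
```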
Table 7. Performance comparison with recently presented works that employ correlates between accuracy and sensorimotor rhythms (SMR) or FC indicators. sub denotes the number of subjects in the respective datasets.
Approach | R² | R² single-run | sub
SMR [41] | 0.37 | – | 80
FC [55] | 0.53 | – | 80
ERSP [38] | −0.73 (LI), 0.64 (CAS) | – | 34
FC [44] | 0.31 | 0.54 | 54
DNR DBI | 0.86 | 0.84 | 50
DNR DBII | 0.81 | 0.48 | 104