Application of Transfer Learning in EEG Decoding Based on Brain-Computer Interfaces: A Review

The algorithms of electroencephalography (EEG) decoding are mainly based on machine learning in current research. One of the main assumptions of machine learning is that training and test data belong to the same feature space and are subject to the same probability distribution. However, this may be violated in EEG processing. Variations across sessions/subjects result in a deviation of the feature distribution of EEG signals in the same task, which reduces the accuracy of the decoding model for mental tasks. Recently, transfer learning (TL) has shown great potential in processing EEG signals across sessions/subjects. In this work, we reviewed 80 related studies published from 2010 to 2020 on the application of TL to EEG decoding. Herein, we report what kinds of TL methods have been used (e.g., instance knowledge, feature representation knowledge, and model parameter knowledge), describe which types of EEG paradigms have been analyzed, and summarize the datasets that have been used to evaluate performance. Moreover, we discuss the state-of-the-art and future development of TL for EEG decoding. The results show that TL can significantly improve the performance of decoding models across subjects/sessions and can reduce the calibration time of brain–computer interface (BCI) systems. This review summarizes the current practical suggestions and performance outcomes in the hope that it will provide guidance and help for EEG research in the future.


Introduction
A brain-computer interface (BCI) is a communication method between a user and a computer that does not rely on the normal neural pathways of the brain and muscles [1]. According to the method of electroencephalography (EEG) signal collection, BCIs can be divided into three types, namely, non-invasive, invasive, and partially-invasive BCIs. Among them, non-invasive BCIs realize the control of external equipment by transforming EEG recordings into commands, and they have been widely used due to their convenient operation. Figure 1 shows a typical non-invasive BCI system framework based on EEG, which usually consists of three parts: EEG signal acquisition, signal decoding, and external device control. During this process, signal decoding is the key step to ensure the operation of the whole system. The representation of EEG typically takes the form of a high-dimensional matrix, which includes the information of sampling points, channels, trials, and subjects [2]. Meanwhile, the most common features of EEG-based BCIs include spatial filtering, band power, time points, and so on. Recently, machine learning (ML) has shown its powerful ability for feature extraction in EEG-based BCI tasks [3,4].
BCI technology based on EEG has made great progress, but the challenges of weak robustness and low accuracy greatly hinder the application of BCIs in practice [5]. From the perspective of signal decoding, the reasons are as follows: First, one of the main assumptions of ML is that training and test data belong to the same feature space and are subject to the same probability distribution. However, this assumption is often violated in the field of bioelectric signal processing, because differences in physiological structure and psychological states may cause obvious variation in EEG. Therefore, signals from different sessions/subjects on the same task show different features and distribution.
Second, EEG signals are extremely weak and are always accompanied by unrelated artifacts from other areas of the brain, which potentially mislead discriminant results and decrease the classification accuracy. Third, the strict requirements for the experimental conditions of BCI systems make it difficult to obtain large and high-quality datasets in practice. It is difficult for a classification model based on small-scale samples to obtain strong robustness and high classification accuracy. However, large-scale and high-quality datasets are the basis for guaranteeing the decoding accuracy of models.
One promising approach to solve these problems is transfer learning (TL). The principle of TL is realizing knowledge transfer from different but related tasks, i.e., using existing knowledge learned from accomplished tasks to help with new tasks. The definition of TL is as follows: A given domain D consists of a feature space X and a marginal probability distribution P(X). A task T consists of a label space Y and a prediction function f. A source domain D_S and a target domain D_T may have different feature spaces or different marginal probability distributions, i.e., X_S ≠ X_T or P_S(X) ≠ P_T(X). Meanwhile, the tasks T_S and T_T may be subject to different label spaces. The aim of TL is to help improve the learning of the target predictive function f_T(·) in D_T using the knowledge in D_S and T_S [6].
There are two main scenarios in EEG-based BCIs, namely, cross-subject transfer and cross-session transfer. The goal of TL is to find the similarity between new and original tasks and then to realize the discriminative and stationary information transfer across domains [7]. In this study, we attempted to summarize the transferred knowledge for EEG according to the following three types: knowledge of instances, knowledge of feature representations, and knowledge of model parameters.
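As a toy illustration of this distribution mismatch, the sketch below (synthetic two-dimensional data and a nearest-class-mean classifier, both chosen purely for simplicity) shows how a model trained in a source domain degrades when the marginal distribution P(X) shifts in the target domain:

```python
import numpy as np

rng = np.random.default_rng(0)

# Source domain: two classes drawn from Gaussians centred at -1 and +1
Xs = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
ys = np.array([0] * 100 + [1] * 100)

# Target domain: same task and labels, but the marginal distribution P(X)
# is shifted, as happens for EEG features across sessions/subjects
Xt = Xs + np.array([-2.5, -2.5])
yt = ys

# Nearest-class-mean classifier fitted on the source domain only
mu0 = Xs[ys == 0].mean(axis=0)
mu1 = Xs[ys == 1].mean(axis=0)

def predict(X):
    d0 = np.linalg.norm(X - mu0, axis=1)
    d1 = np.linalg.norm(X - mu1, axis=1)
    return (d1 < d0).astype(int)

acc_source = (predict(Xs) == ys).mean()
acc_target = (predict(Xt) == yt).mean()
print(acc_source, acc_target)  # accuracy collapses on the shifted target domain
```

This qualitative accuracy drop under a shifted P(X) is the failure mode that the TL methods reviewed below attempt to counteract.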
This review of TL applications for EEG classification attempted to address the following critical questions: What problems does TL solve for EEG decoding? (Section 3.1); which paradigms of EEG are used for TL analysis? (Section 3.2); what kind of datasets can we refer to in order to verify the performance of these methods? (Section 3.3); what types of TL frameworks are available? (Section 3.4).
First, the search methods for the identification of studies are introduced in Section 2. Then, the principle and classification criteria of TL are analyzed in Section 3. Next, the TL algorithms for EEG from 2010 to 2020 are described in Section 4. Finally, the current challenges of TL in EEG decoding are discussed in Section 5.

Methodology
A wide literature search covering 2010 to 2020 was conducted, drawing on the main databases, such as Web of Science, PubMed, and IEEE Xplore. The keywords used for the electronic search were TL, electroencephalogram, brain-computer interface, inter-subject, and covariate shift. Table 1 lists the collection criteria for inclusion or exclusion.

Table 1. Inclusion and exclusion criteria.

Inclusion Criteria:
• Published within the last 10 years (as transfer learning (TL) for EEG has been proposed and developed in recent years).
• A focus on non-invasive EEG signals (as the object for discussion in this review).
• A specific explanation of how to apply TL to EEG signal processing.

Exclusion Criteria:
• A focus on the processing of invasive EEG, electrocorticography (ECoG), magnetoencephalography (MEG), source imaging, fMRI, and so on, or joint studies with EEG.
• No specific description of TL for EEG processing.

The search method of this review is shown in Figure 2, which was used to identify and to narrow down the collection of TL-based studies, resulting in a total of 246 papers. Duplicates between the databases and studies without full-text links were excluded. Finally, 80 papers that met the inclusion criteria were included.

What Problems Does Transfer Learning Solve?

The Problem of Differences across Subjects/Sessions
Although advanced methods such as machine learning have been proven to be a critical tool in EEG processing and analysis, they still suffer from some limitations that hinder their wide application in practice. Consistency of the feature space and probability distribution between training and test data is an important prior condition of machine learning. However, in the field of biomedical engineering, such as in EEG-based BCIs, this hypothesis is often violated. Obvious variation in feature distribution typically occurs in representations of EEG across sessions/subjects. This phenomenon results in a scattered distribution of EEG signal features, an increase in the difficulty of feature extraction, and a reduction in the performance of the classifier.

The Problem of Small Sample Size
In recent years, machine learning and deep neural networks have provided good results for the classification of linguistic features, images, sounds, and natural texts. A main reason for their success is that massive amounts of data guarantee the performance of the classifier. However, in practical applications of BCI, it is difficult to collect high-quality and large EEG datasets due to the strict requirements for the experimental environment and the limited availability of subjects. The performance of these methods is highly sensitive to the number of samples; a small sample size tends to lead to overfitting during model training, which adversely affects the classification accuracy [8].

The Problem of Time-Consuming Calibration
A large amount of data is required to calibrate a BCI system when a subject performs a specific EEG task. This requirement commonly entails a long calibration session, which is unavoidable for a new user. For example, when a subject performs a steady-state visually evoked potential (SSVEP) speller task, the large number of commands leads to a long calibration time. Collecting calibration data is thus time-consuming and laborious, which reduces the efficiency of the BCI system.

EEG Paradigms for Transfer Learning
There are four paradigms of EEG-based BCIs discussed in this paper, and the percentages of these paradigms across the collected studies are shown in Figure 3.

Motor Imagery
Motor imagery (MI) is a mental process that imitates motor intention without real motor output [9] and activates neural potentials in the primary sensorimotor areas. Different imagery tasks will induce potential activity in different regions of the brain. Thus, this response can be converted into a classification task. The features of MI signals are often expressed in the form of frequency or band energy [10]. Due to the task objectives and the various feature representations, a variety of machine learning algorithms (e.g., deep learning and Riemannian geometry) can be applied to the decoding of MI [11,12].

Steady-State Visually Evoked Potentials
When a human receives a flashing visual stimulus at a fixed frequency, the potential activity of the cerebral cortex is modulated to produce a continuous response related to the frequency (the same frequency or its multiples) of the stimulus. This physiological phenomenon is referred to as an SSVEP [13]. Due to their stable and distinct signal representation, BCI systems based on SSVEPs are widely used to control equipment such as mobile devices, wheelchairs, and spellers.

Event-Related Potentials
Event-related potentials (ERPs) are responses to multiple or diverse stimuli corresponding to specific meanings [14]. The P300 is the most representative type of ERP, which occurs about 300 ms after a visual or auditory stimulus. A feature classification model can be used for decoding the P300.

Passive BCIs
A passive BCI is a form of interaction that does not rely on external stimuli. It achieves a brain control task by decoding the mental activity of different brain states [15]. Common types of passive BCI tasks include driver drowsiness detection, emotion recognition, mental workload assessment, and epileptic seizure detection [16], which can be decoded by regression and classification models [17,18].

Case Studies on a Shared Dataset
Analysis across different datasets is generally not valid because they use different equipment or communication protocols. In addition, different mental tasks and collection procedures also introduce great differences into the EEG. Therefore, the reviewed studies mainly concentrate on TL across subjects/sessions within the same dataset. In Table 2, we briefly summarize the publicly available EEG datasets covered in this review.

Transfer Learning Architecture
In this review, we summarize previous studies according to "what knowledge should be transferred in EEG processing." Multi-step processing for EEG across subjects/sessions results in discriminative information in different steps. Therefore, determining what should be transferred is the key problem according to different EEG tasks. Pan et al. [6] proposed authoritative classification approaches based on "what to transfer." All papers collected in this review were classified according to this method ( Figure 4). In the following sections, we have selected several representative methods for analysis.


Transfer Learning Based on Instance Knowledge
It is often assumed that we can easily obtain large amounts of labeled data from a source domain, but these data cannot be directly reused. Instance transfer approaches re-weight some source domain data as a supplement for the target domain. Based on instance transfer, the majority of the literature utilized a measurement method to evaluate the similarity between data from the source and target domains. The similarity metric was then converted into a transfer weight coefficient, which was used directly for instance transfer by re-weighting the source domain data [30][31][32]. Herein, we have listed a few typical methods based on instance transfer.
Reference [33] proposed an instance TL method based on K-L divergence measurements. They measured the similarity of the normal distribution between two domains and transformed this similarity into a transfer weight coefficient for the target subject.
Suppose that the data from the two datasets follow the normal distributions N_0(µ_0, Σ_0) and N_1(µ_1, Σ_1), where µ_i and Σ_i are the mean and covariance (i = 0, 1), respectively. The K-L divergence of the two distributions can be expressed in closed form as

KL(N_0 ‖ N_1) = (1/2)[tr(Σ_1⁻¹Σ_0) + (µ_1 − µ_0)^T Σ_1⁻¹ (µ_1 − µ_0) − K + ln(det Σ_1 / det Σ_0)]

where K denotes the dimension of the data and det represents the determinant. The similarity weight δ_s is then obtained from the summed divergence KL of the distribution characteristics of the target subjects, balanced by a coefficient ∂. The results show that instance transfer can effectively reduce the calibration time and can significantly improve the average classification accuracy of MI tasks.

Li et al. [34] proposed importance-weighted linear discriminant analysis (IWLDA) with bootstrap aggregation. They defined the ratio r(x) of the test and training input densities as the transfer weight:

r(x) = P_te(x) / P_tr(x)

where P_tr and P_te represent the marginal probability distributions of the training set and the test set, respectively. Then, they optimized the parameters of the LDA model by adding a regularization coefficient and the transfer weights, minimizing the importance-weighted squared error over the training trials, where y_i refers to the target label corresponding to the feature vector x_i of the i-th trial. The parameter θ is learned by least-squares, and the solution can be obtained as

θ̂_IWLDA = (X^T D X + λI)⁻¹ X^T D y

where λ (≥ 0) is the regularization parameter, D is the diagonal matrix with r(x_i) as its i-th diagonal element, I is the identity matrix, X and y collect the training feature vectors and labels, and θ̂_IWLDA is the least-squares solution. They also combined the bagging method, which independently constructs accurate and diverse base learners, to improve the classification accuracy and to reduce the variance. The weighted parameters of the LDA model in the target domain can thus be optimized.

Covariate shift [35] is a common phenomenon in EEG processing across subjects/sessions. It is defined as follows: Given an input space X and an output space Y, the marginal distribution of D_S is inconsistent with that of D_T, i.e., P_S(x) ≠ P_T(x), while the conditional distributions of the two domains are the same, P_S(y|x) = P_T(y|x). Covariate shift obviously affects the unbiasedness of a model in standard model selection, which reduces the generalization ability of the machine learning model during EEG decoding [30].
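To make the divergence-based instance weighting concrete, the following sketch estimates Gaussian statistics for two hypothetical source subjects and one target subject, computes the closed-form K-L divergence, and converts it into an instance weight; the mapping 1/(1 + KL) is an illustrative assumption, not the exact coefficient used in [33]:

```python
import numpy as np

def gaussian_kl(mu0, S0, mu1, S1):
    """Closed-form K-L divergence KL(N0 || N1) between multivariate normals."""
    K = len(mu0)
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - K
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

rng = np.random.default_rng(1)
target = rng.normal(0.0, 1.0, (200, 4))        # target-subject features
source_near = rng.normal(0.1, 1.0, (200, 4))   # source subject similar to target
source_far = rng.normal(3.0, 2.0, (200, 4))    # source subject dissimilar

def fit_gaussian(X):
    return X.mean(axis=0), np.cov(X.T)

mu_t, S_t = fit_gaussian(target)

# Assumed mapping from divergence to weight: smaller divergence -> larger weight
weights = {}
for name, X in [("near", source_near), ("far", source_far)]:
    mu_s, S_s = fit_gaussian(X)
    weights[name] = 1.0 / (1.0 + gaussian_kl(mu_s, S_s, mu_t, S_t))

print(weights)  # the more similar source subject receives the larger weight
```

In a full instance-transfer pipeline, these weights would re-weight the source trials when training the target classifier.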
To address covariate shift, researchers have proposed covariate shift adaptation. For example, Raza et al. [36] proposed a transductive learning model based on the k-nearest-neighbor principle. They initialized the classifier using data from the calibration stage and trained the optimal classification boundary. Then, adaptation was executed to update the classifier. The update rules are as follows: First, the Euclidean distance is used to compare unlabeled and labeled data points:

dist(p, q) = √(Σ_k (p_k − q_k)²)

where p and q refer to the unlabeled and labeled data points, respectively, and dist is the Euclidean distance. Then, the k nearest neighbors are selected based on this distance. Next, the distance is converted to the inverse form dist_inv(i) = 1/(dist(i) + bias), where i indexes the corresponding labeled pattern and the bias, set to 0.001, prevents division by zero; larger values of dist_inv(i) indicate patterns in the training database that are closer to the current unlabeled feature set. To decide whether the current trial's features and estimated label should be added to the existing knowledge base, a confidence ratio CR is calculated from these inverse distances. The CR index is used to predict the label for the unlabeled test data. The predicted test data are then added to the knowledge database, after which the decision boundary is recalculated to realize the update.
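The update rules above can be sketched as follows. The data are synthetic, and since the exact confidence-ratio formula of [36] is not reproduced in the text, the ratio below (summed inverse distances of the winning class against the rest) is an assumption for illustration:

```python
import numpy as np

def knn_confidence(p, X, y, k=5, bias=1e-3):
    # Euclidean distance between the unlabeled point p and all labeled data
    dist = np.linalg.norm(X - p, axis=1)
    nn = np.argsort(dist)[:k]                  # indices of the k nearest neighbors
    dist_inv = 1.0 / (dist[nn] + bias)         # inverse form: closer counts more
    # Per-class sum of inverse distances among the neighbors
    score = {int(c): dist_inv[y[nn] == c].sum() for c in np.unique(y[nn])}
    label = max(score, key=score.get)
    others = sum(v for c, v in score.items() if c != label)
    cr = score[label] / (others + bias)        # assumed form of the confidence ratio
    return label, cr

rng = np.random.default_rng(2)
# Labeled calibration data: two well-separated classes
X = np.vstack([rng.normal(-2, 0.5, (30, 2)), rng.normal(2, 0.5, (30, 2))])
y = np.array([0] * 30 + [1] * 30)

label, cr = knn_confidence(np.array([2.1, 1.9]), X, y)
print(label, cr)  # a confident class-1 prediction near the class-1 cluster
```

When the confidence ratio exceeds a chosen threshold, the trial and its estimated label would be appended to the knowledge base and the boundary retrained, as described above.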

Transfer Learning Based on Feature Representation
TL based on feature representation can be achieved by reducing the difference between the two domains through feature transformation or by projecting the features from the two domains into a uniform feature space [37][38][39]. Unlike instance transfer, feature-representation TL aims to encode the shared information across subjects/sessions into a feature representation. For example, spatial filtering and time-frequency transformation are used to transform the raw data into feature representations.
Nakanishi et al. proposed a spatial filtering approach called the task-related component analysis (TRCA) method to enhance the reproducibility during SSVEP tasks and to improve the performance of an SSVEP-based BCI [40].
Suppose that the signals from the two domains consist of two parts: a task-related signal s(t) and a task-unrelated signal z(t). A multichannel signal x(t) can be expressed as: x_i(t) = a_1,i s(t) + a_2,i z(t), i = 1, 2, 3 . . . n (12) where i indexes the channels, a refers to the projection coefficients, and the subscripts 1 and 2 label the task-related and task-unrelated components.
where y(t) = ω^T x(t) refers to the filtered target data, and the optimization goal is to recover the task-related component, i.e., a_1,i = 1 and a_2,i = 0. The covariance between the j_1-th and the j_2-th trials of the filtered signal is described as C_{j_1, j_2} = Cov(y^(j_1)(t), y^(j_2)(t)). All combinations of the trials are summed as

Σ_{j_1 ≠ j_2} C_{j_1, j_2} = ω^T S ω

where j represents the trial index, ω refers to the spatial filter, and the matrix S collects the summed inter-trial covariances of the multichannel data. The variance of y(t) is constrained to obtain a finite solution, ω^T Q ω = 1, where Q is the covariance matrix of the data. The optimization is calculated as

ω̂ = argmax_ω (ω^T S ω)/(ω^T Q ω)

where ω̂ is the optimal spatial filter, obtained as the leading eigenvector of Q⁻¹S. Finally, the correlation coefficient is calculated by Pearson's correlation analysis between the data from the two domains. In their study, spatial filters as a feature representation were transferred to the target domain. The results showed that this method significantly improves the information transfer rate and classification accuracy. Based on this research, Tanaka [41] improved the TRCA method by maximizing the similarity across a group of subjects and named this novel method group TRCA. The results showed that the group representation calculated by the group TRCA method achieves high consistency between the two domains and offers effective data supplementation during brain-computer interaction.

CSP is a popular method for feature extraction, which is often used for MI classification. During calculation, a spatial filter is adopted to maximize the separation between the class variances of EEG. However, heterogeneous data across subjects/sessions cause poor classification performance of the model in the training stage. One feasible approach to overcome this limitation is regularization. Lotte [42] presented regularized CSP to improve the classification accuracy across subjects. In their study, they discussed two strategies. One of them was regularizing the estimated covariance matrix.
The regularized estimates can be expressed as a two-level shrinkage of the covariance: the initial spatial covariance matrix S_i for class i is first combined with a generic covariance matrix D_i built from other subjects' data, and the result is then shrunk toward the identity matrix I scaled by a constant parameter c_i, yielding the regularized estimate Ŝ_i. The two regularization parameters are defined as γ and β. This strategy optimizes the covariance matrix by transforming other subjects' data into a covariance term combined with the regularization parameters and by transferring this feature to the target subject.

Another approach is regularizing the CSP objective function. CSP uses spatial filters ω to extremize the function

G(ω) = (ω^T C_1 ω)/(ω^T C_2 ω)

where C_i is the spatial covariance matrix of class i. This approach optimizes the CSP algorithm by regularizing the objective function itself:

G_P1(ω) = (ω^T C_1 ω)/(ω^T C_2 ω + ∂P(ω))

where P(ω) represents a penalty function measuring the distance between the spatial filter and the prior information, and ∂ is a user-defined regularization parameter. The goal of the objective function is to maximize G_P1(ω) and thereby to minimize P(ω). The prior information from the source domain provides a good solution to guide the optimization of the spatial filter estimates.

In addition, adaptation regularization is a typical feature TL method based on the structural risk minimization principle and regularization theory. Cross-domain feature transfer mainly operates through three mechanisms: (1) utilizing the structural risk minimization principle and minimizing the structural risk functional; (2) minimizing the difference between the joint probability distributions; (3) maximizing the manifold consistency underlying the marginal distributions [43]. In recent research, Chen et al. [44] developed an efficient cross-subject TL framework for driving status detection. They used adaptation regularization to measure and reduce the difference between the features of the two domains and extracted the features with filtering algorithms.
The results showed that this framework can achieve high recognition accuracy and good transfer ability.
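Returning to the regularized-CSP idea above, a minimal sketch on toy covariance matrices follows; the precise two-level shrinkage combination is an assumed form for illustration, while the eigenproblem is the standard CSP computation:

```python
import numpy as np

def regularized_csp(C1, C2, G1, G2, beta=0.3, gamma=0.1):
    # Two-level shrinkage of each class covariance (assumed combination):
    # toward the generic matrix G_i with weight beta, then toward a scaled
    # identity with weight gamma
    def regularize(C, G):
        P = (1 - beta) * C + beta * G
        return (1 - gamma) * P + gamma * (np.trace(P) / len(P)) * np.eye(len(P))

    R1, R2 = regularize(C1, G1), regularize(C2, G2)
    # Standard CSP computation: extremize w^T R1 w / w^T (R1 + R2) w,
    # solved as an eigenproblem of (R1 + R2)^-1 R1
    eigvals, eigvecs = np.linalg.eig(np.linalg.inv(R1 + R2) @ R1)
    order = np.argsort(np.real(eigvals))[::-1]
    return np.real(eigvecs[:, order])

# Toy covariances: class 1 is high-variance on channel 0, class 2 on channel 1
C1 = np.diag([4.0, 1.0])
C2 = np.diag([1.0, 4.0])
W = regularized_csp(C1, C2, G1=np.eye(2), G2=np.eye(2))
print(W[:, 0])  # the first filter points along the class-1-discriminative channel
```

With informative generic matrices G_i estimated from other subjects, the shrinkage pulls a poorly estimated target covariance toward the population structure, which is the transfer mechanism described above.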

Transfer Learning Based on Model Parameters
The assumption of model parameter TL is that individual models across subjects/sessions should share some parameters. The key step of this approach is to find the shared parameter information and to realize knowledge transfer. Domain adaptation (DA) of a classifier is the common method of model parameter transfer: the parameter information from D_S is reused and adjusted according to the prior distribution of D_T [45].

A DA method, named adaptive extreme learning machine (ELM), was proposed by Bamdadian et al. [46]. ELM is a single-hidden-layer feedforward neural network, which determines the output weights by inverting the hidden-layer output matrix [47]. This method has two steps: First, the classifier is initialized with data from the calibration session. Then, an update rule for the output weights based on least-squares minimization is applied. The initial output weights α can be defined as

α = H⁺T

where H is the output matrix of the hidden layer, H⁺ refers to the Moore-Penrose pseudo-inverse of H, ϕ = H^T H, and T represents the label matrix. The updated weight α_{m+1} is then calculated recursively from α_m and the new data using ϕ, where k indexes the hidden nodes. The experimental results showed that adaptive ELM can significantly improve the classification accuracy in MI classification across subjects.

Another strategy is ensemble learning, which combines multiple weak classifiers from the source domain into a strong classifier. Dalhoumi et al. [48] proposed a novel ensemble strategy based on Bayesian model averaging. They calculated the probability of a class label y_{q+1} given a feature vector x_{q+1} as a weighted sum over source-domain hypotheses:

P(y_{q+1} | x_{q+1}, T) = Σ_n P(y_{q+1} | x_{q+1}, j_n) P(j_n | T)

where x_{q+1} is the logarithmic-variance feature vector, j_n is a hypothesis from the source domain, and T is the test set. The hypothesis prior P(j_n | T) is estimated from the projections x_p^k of the feature vectors on the spatial filters of subject k.
The learned ensemble classifier can then be used to predict labels for the target user:
$$\hat{y}_{q+1} = \arg\max_{y} \sum_{n} P(y \mid x_{q+1}, j_{n})\, P(j_{n} \mid T).$$
Evaluation on a real dataset showed that this ensemble strategy can improve classification performance on small-scale EEG data.
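To make the averaging scheme concrete, the following NumPy sketch weights a pool of source-subject classifiers by their posterior on a small labeled target set and averages their predictions. The logistic linear hypotheses and all data here are random stand-ins for illustration, not details taken from [48]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the source-domain hypotheses j_n: one logistic linear
# classifier per source subject (weights are random here).
N, d = 5, 6
source_ws = [rng.standard_normal(d) for _ in range(N)]

def p_pos(w, X):
    """P(y = 1 | x, j_n) under a logistic linear hypothesis."""
    return 1.0 / (1.0 + np.exp(-X @ w))

# Small labeled set T from the target user (random stand-in data).
X_cal = rng.standard_normal((10, d))
y_cal = (rng.random(10) > 0.5).astype(float)

# Posterior over hypotheses, P(j_n | T) ∝ P(T | j_n), with a uniform prior.
log_lik = np.array([
    np.sum(y_cal * np.log(p_pos(w, X_cal) + 1e-12)
           + (1 - y_cal) * np.log(1 - p_pos(w, X_cal) + 1e-12))
    for w in source_ws
])
post = np.exp(log_lik - log_lik.max())
post /= post.sum()

# Bayesian model averaging for a new trial x_{q+1}:
# P(y | x, T) = sum_n P(y | x, j_n) P(j_n | T).
x_new = rng.standard_normal(d)
p_mix = sum(pn * p_pos(w, x_new) for pn, w in zip(post, source_ws))
y_hat = int(p_mix > 0.5)
```

Hypotheses that fit the target user's calibration trials well receive larger posterior weight, so the ensemble adapts to the new subject without retraining the source classifiers.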
In recent years, deep neural networks have achieved good results in the processing of EEG signals [49,50]. Owing to their end-to-end structure and automatic feature extraction, deep neural networks reduce the interference of redundant information and improve classification performance. As has been observed in computer vision, the lower layers of a deep neural network learn generic feature representations, while the higher layers learn representations specific to particular subjects or sessions [51]. Therefore, freezing the lower layers and fine-tuning the higher layers is an effective way to realize model parameter transfer with deep learning.
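A minimal NumPy sketch of this freeze-and-fine-tune strategy, using a toy two-layer network with random stand-in "pre-trained" weights (a real implementation would use a deep learning framework):

```python
import numpy as np

rng = np.random.default_rng(1)

# "Pre-trained" source-domain weights (random stand-ins for illustration).
W1 = rng.standard_normal((8, 16)) * 0.5   # lower layer: frozen
W2 = rng.standard_normal((16, 1)) * 0.5   # higher layer: fine-tuned
W1_frozen = W1.copy()

# Toy target-domain trials with regression targets.
X = rng.standard_normal((32, 8))
y = rng.standard_normal((32, 1))

def forward(X, W1, W2):
    h = np.tanh(X @ W1)   # generic features from the frozen lower layer
    return h, h @ W2      # subject-specific readout from the higher layer

_, out0 = forward(X, W1, W2)
mse0 = float(np.mean((out0 - y) ** 2))

# Fine-tune: gradient steps on W2 only; W1 never receives an update.
lr = 0.05
for _ in range(300):
    h, out = forward(X, W1, W2)
    grad_W2 = h.T @ (out - y) / len(X)
    W2 -= lr * grad_W2

_, out = forward(X, W1, W2)
mse = float(np.mean((out - y) ** 2))
```

Only `W2` receives gradient updates; the frozen `W1` plays the role of the generic lower layers learned on the source domain.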
Zhao et al. [52] proposed an end-to-end deep convolutional network for MI classification. To mitigate the limitations of small samples and overfitting, they used data from $D_S$ to pre-train the source network and transferred the parameters of several layers to initialize the target network. First, the network was pre-trained on data from the source domain. Then, the weights of the $M$ source networks were used to initialize the $n$th layer of the target network by a weighted average:
$$W_{n}^{t} = \frac{\rho}{M}\sum_{m=1}^{M} W_{mn}^{s},$$
where $\rho$ represents the strength of the source networks and $W_{mn}^{s}$ refers to the weights connecting the $n$th layer of source network $m$ to the next layer. The next stage is to fine-tune the initialized target network on data from $D_T$. The results showed that this parameter transfer strategy can reduce the calibration time for new subjects and help the deep convolutional network obtain better classification performance.
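The weighted-average initialization can be sketched as follows; the layer shapes, the value of ρ, and the source weights are illustrative assumptions rather than values from [52]:

```python
import numpy as np

rng = np.random.default_rng(2)

# Connecting weights of the n-th layer from M pre-trained source networks
# (random stand-ins for W^s_mn; shapes are illustrative).
M = 4
source_weights = [rng.standard_normal((16, 8)) for _ in range(M)]

rho = 0.8  # strength of the source networks in the initialization

# Initialize the target layer as the rho-scaled average of the source
# weights; fine-tuning on target-domain data would follow.
W_target = rho * sum(source_weights) / M
```

Averaging over several source subjects smooths out subject-specific idiosyncrasies, giving the target network a generic starting point that fine-tuning can then specialize.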
Raghu et al. combined a CNN with TL to recognize epileptic seizures [53]. They proposed two transfer methods: fine-tuning a pre-trained network, and using the pre-trained network to extract image features that are then classified into brain states by an SVM. Popular networks, such as AlexNet, VGG16, VGG19, and SqueezeNet, were used to verify the performance of the proposed framework.
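The second strategy (a frozen feature extractor feeding an external classifier) can be sketched in NumPy as follows; a random ReLU projection stands in for the pre-trained CNN, and a regularized least-squares readout stands in for the SVM used in [53]:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in "pre-trained" feature extractor: a frozen random projection plus
# a nonlinearity plays the role of the CNN's convolutional layers.
def extract_features(X, W):
    return np.maximum(0.0, X @ W)  # ReLU features

d_in, d_feat = 20, 32
W_pre = rng.standard_normal((d_in, d_feat)) / np.sqrt(d_in)

# Labeled target trials (toy data standing in for EEG-derived images).
X_train = rng.standard_normal((60, d_in))
y_train = np.sign(rng.standard_normal(60))  # labels in {-1, +1}

# Train a linear readout on the extracted features; the extractor itself
# stays fixed, as in the feature-extraction transfer strategy.
F = extract_features(X_train, W_pre)
w = np.linalg.solve(F.T @ F + 1e-3 * np.eye(d_feat), F.T @ y_train)

# Predict the class of a new trial from its extracted features.
x_new = rng.standard_normal(d_in)
score = float(extract_features(x_new[None, :], W_pre) @ w)
pred = 1.0 if score > 0 else -1.0
```

Because only the lightweight readout is trained on target data, this approach suits the small labeled sets typical of clinical EEG.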
A summary of the collected studies is shown in Table 3.

Discussion
Based on the numerous papers surveyed herein, we briefly summarize the development of TL applications for EEG decoding. This should help researchers survey the status of the field and obtain useful guidance for future work.
From the studies surveyed in this paper, it is not hard to identify the points of interest on which researchers focus. As shown in Figure 2, among the different EEG paradigms, most studies have focused on active BCI (i.e., MI, SSVEP, and ERP). One possible explanation is that the goal of these mental activity decoding studies is to categorize EEG signals into different classes, which allows many machine learning methods to be applied to these paradigms. From Table 2, it can be seen that the application scenarios of TL in the existing literature are almost exclusively classification and regression tasks.
However, model parameter transfer is not applicable to the small number of subjects with initially low BCI performance: the EEG features of these subjects are poorly separable in feature space, so optimizing the classifier parameters does not significantly improve the classification results. It is worth noting that the adaptive strategy of the classifier should be considered a complement for achieving a calibration-free mode of operation [123]. The combination of TL and adaptive strategies may receive increasing attention in future studies.
It is also worth noting that TL has shown good results across subjects/sessions, but the details of the variability across sessions/subjects remain unclear. Some studies have proposed Bayesian models as a promising approach for capturing this variability; such models are built on multitask learning and extract the variation in features such as spectral and spatial patterns [124,125].
Owing to its end-to-end structure and competitive performance, deep learning has been successful in processing EEG data [126]. However, the required computational power and the small scale of available data are limitations in practical operation. A hybrid structure based on TL and deep learning is a promising way to address this issue; for example, fine-tuning a pre-trained network has proven to be effective. With the continuing development of deep learning technology, such hybrid structures remain a hot topic for future research.
As reported in the above-cited studies, TL is instrumental in EEG decoding across subjects/sessions. However, knowledge transfer across tasks/devices is still largely unexplored. This issue is worth investigating and would make EEG-based BCI systems much more practical.

Conclusions
In this paper, we reviewed the research on TL for EEG decoding published between 2010 and 2020. We discussed numerous approaches, which can be divided into three categories: Instance transfer, feature representation transfer, and classifier parameter transfer. Based on the summary of their results, we conclude that TL can effectively improve decoding performance in classification and regression tasks. In addition, TL performs adequately in initializing BCI systems for new subjects, which shortens the calibration process. Although there are some limitations to using TL for EEG decoding, such as its scope of application and suboptimal performance on some occasions, TL shows strong robustness. Overall, TL is instrumental in EEG decoding across subjects/sessions. Achieving a calibration-free mode of operation and higher decoding accuracy are worthy directions for further research.

Conflicts of Interest:
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.