Article

Transfer EEG Emotion Recognition by Combining Semi-Supervised Regression with Bipartite Graph Label Propagation

1 School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China
2 Key Laboratory of Brain Machine Collaborative Intelligence of Zhejiang Province, Hangzhou 310018, China
* Author to whom correspondence should be addressed.
Systems 2022, 10(4), 111; https://doi.org/10.3390/systems10040111
Submission received: 29 June 2022 / Revised: 25 July 2022 / Accepted: 26 July 2022 / Published: 29 July 2022
(This article belongs to the Section Systems Practice in Social Science)

Abstract
Individual differences often appear in electroencephalography (EEG) data collected from different subjects due to its weak, nonstationary and low signal-to-noise ratio properties. This causes many machine learning methods to have poor generalization performance because the independent identically distributed assumption is no longer valid in cross-subject EEG data. To this end, transfer learning has been introduced to alleviate the data distribution difference between subjects. However, most of the existing methods have focused only on domain adaptation and failed to achieve effective collaboration with label estimation. In this paper, an EEG feature transfer method combined with semi-supervised regression and bipartite graph label propagation (TSRBG) is proposed to realize the unified joint optimization of EEG feature distribution alignment and semi-supervised joint label estimation. Through the cross-subject emotion recognition experiments on the SEED-IV data set, the results show that (1) TSRBG has significantly better recognition performance in comparison with the state-of-the-art models; (2) the EEG feature distribution differences between subjects are significantly minimized in the learned shared subspace, indicating the effectiveness of domain adaptation; (3) the key EEG frequency bands and channels for cross-subject EEG emotion recognition are achieved by investigating the learned subspace, which provides more insights into the study of EEG emotion activation patterns.

1. Introduction

In 1964, Michael Beldoch first introduced the idea of Emotional Intelligence (EI) in [1], which examined three modes of communication (i.e., vocal, musical, and graphic) to identify nonverbal emotional expressions. In 1990, Salovey and Mayer formally put forward the concept of EI and considered emotional intelligence an important component of artificial intelligence in addition to logical intelligence [2]. The key to EI is that machines can recognize the emotional state of humans automatically and accurately. Endowing machines with EI is indispensable to natural human–machine interaction, making machines more humanized in communication [3,4]. In addition, endowing machines with EI has great impact in many fields such as artificial intelligence emotional nursing, human health, and patient monitoring [5]. Emotion is a state that integrates people’s feelings, thoughts, and behaviors. It includes not only people’s psychological response to the external environment or self-stimulation, but also the physiological response accompanying this psychological response [6]. Compared with widely used data modalities such as image, video, speech, and text [7,8,9], EEG has unique advantages such as high temporal resolution. In addition, EEG is difficult to camouflage in emotion recognition since it is directly generated from the neural activities of the central nervous system [10]. Therefore, EEG is widely used in the field of objective emotion recognition [11] and in other brain–computer interface paradigms [12].
Nowadays, with the continuous development of computer technology, bioscience, neuroscience, and other disciplines, EEG-based emotion recognition has more and more potential applications in diverse fields such as healthcare, education, entertainment, and neuromarketing [13,14,15]. Meanwhile, researchers have been paying continuous attention to it, and many machine learning or deep learning models for EEG-based emotion recognition have been proposed. Murugappan et al. mixed the EEG samples of subjects covering four states, i.e., happiness, fear, disgust, and surprise, and divided them by the fuzzy c-means clustering method. After that, samples with similar characteristics were identified by looking for the inherent characteristics of the category itself [16]. Thejaswini et al. extracted EEG time-frequency features and then performed emotion recognition by a channel fusion method together with a channel-wise supervised SVM classifier [17]. The experimental results of [18] verified the possibility of exploring robust EEG features in cross-subject emotion recognition. Ali et al. proposed to decompose EEG signals via multivariate empirical mode decomposition (MEMD) and then employed deep learning methods to classify different emotional states [19]. Deep learning models have been widely used in subject-independent EEG emotion recognition; although they generally obtain better performance, their results are usually difficult to interpret due to the black-box training mode [20,21]. In [22], deep networks were used to simultaneously minimize the recognition error on source data and force the latent representation similarity (LRS) of source and target data to be similar. To reduce the risk of negative transfer, a transferrable attention neural network was proposed to learn emotional discriminative information by highlighting the transferrable brain-region data and samples with local and global attention mechanisms [23]. According to the emotional brain’s asymmetries between the left and right hemispheres, EEG data of both hemispheres are separately mapped into discriminative feature spaces [24,25]. Zheng et al. first introduced the deep belief network into EEG-based emotion recognition to classify three states, i.e., positive, neutral, and negative. Although they studied the key frequency bands and channels of EEG emotion recognition [26], the underlying mechanism was still not intuitive enough. In subsequent studies, it was found that the Gamma frequency band is the most important one in emotion recognition [27]. Recent advances in EEG-based emotion recognition can be found in [5,28,29,30].
Though EEG can objectively and accurately describe the emotional state of subjects, it is typically weak and nonstationary. Therefore, EEG data collected from different subjects under the same emotional state may have considerable discrepancies due to differences in individual physiology and psychology [5], leading to the poor performance of traditional machine learning methods in cross-subject EEG emotion recognition. To solve this problem, the concept of Transfer Learning is introduced to reduce the differences between cross-subject EEG data and to improve the universality of affective brain–computer interface systems [31,32]. Its basic idea is to use the knowledge of an auxiliary domain to facilitate the emotion recognition task of the target domain. The feature transformation-based transfer learning method is the most widely used among existing models; it aims to project the features of the source and target domain data into a subspace in which the between-domain data distribution difference is minimized. Zheng et al. proposed early on to build personalized EEG-based emotion recognition models using transfer learning; in [33], knowledge transfer by both feature transformation and model parameter sharing was tested for cross-subject emotion recognition. Zhou et al. proposed a novel transfer learning framework with Prototypical Representation-based Pairwise Learning to characterize EEG data with prototypical representations. The characterized prototypical representations exhibit high feature concentration within one emotion category and high feature separability across different emotion categories. They finally formulated the EEG-based emotion recognition task as pairwise learning [34]. Bahador et al. proposed to extract spectral features from the collected 10-channel EEG data through a pre-trained network to quantify the direct influence among channels. The spectral-phase information of EEG data was encoded into a bi-dimensional map, which was further used to perform knowledge transfer by characterizing the propagation patterns from one channel to the others [35].
Although transfer learning has been widely used in EEG-based emotion recognition to align the EEG data from different subjects [36], most existing studies simply place the emphasis on domain-invariant feature learning and recognition accuracy. Therefore, it is necessary to jointly optimize the recognition process together with the domain-invariant feature learning. In [22], neural networks were used to simultaneously minimize the recognition error on source data and force the latent representations of source and target data to be similar. Ding et al. constructed an undirected graph to characterize the source and target sample connections, based on which the transfer feature distribution alignment process is optimized together with the graph-based semi-supervised label propagation task [37]. However, this graph was constructed from the original-space data and is not dynamically updated during model optimization; therefore, it cannot well describe the sample connections between the two domains. In addition to the recognition accuracy, most existing studies only visualized the aligned distributions of source and target EEG data and did not sufficiently investigate the properties of the learned shared subspace in emotion expression [22,38,39].
In view of the above shortcomings, this paper proposes an EEG transfer emotion recognition method combining semi-supervised regression with bipartite-graph label propagation. Compared with the existing studies, the present work makes the following contributions.
  • The semi-supervised label propagation method based on sample-feature bipartite graph and semi-supervised regression method are combined to form a unified framework for joint common subspace optimization and emotion recognition. We first achieve better data feature distribution alignment through EEG feature transfer, based on which we then construct a better sample-feature bipartite graph and sample-label mapping matrix to promote the estimation of EEG emotional state in the target domain;
  • The EEG emotional state in the target domain is estimated by a bi-model fusion strategy. First, a sample-feature bipartite graph is constructed based on the premise that similar samples have similar feature distributions. This graph is used to characterize the sample-feature connections between the source and the target domain for label propagation, as shown by the ‘Bi-graph label propagation’ part of Figure 1. Furthermore, a semi-supervised regression is used to learn a mapping matrix describing the intra-domain connections between samples and labels, which aims to estimate the EEG emotional state of the target domain. By fusing both models, the EEG emotional state of the target domain is estimated from the perspective that samples from the same emotional state should share similar feature distributions;
  • We explore the EEG emotion activation patterns from the learned common subspace shared by the source and target domains, based on the rationale that the subspace should retain the common features of the source and the target domain and inhibit the non-common features. We measure the importance of each EEG feature dimension by the normalized $\ell_2$-norm of the corresponding row of the projection matrix. Based on the coupling correspondence between EEG features and the frequency bands and channels, the importance of frequency bands and brain regions in EEG emotion recognition is quantified.
Notations. In this paper, the EEG frequency bands are represented by Delta, Theta, Alpha, Beta, and Gamma. Greek letters such as $\alpha$, $\lambda$ represent the model parameters. Matrices and vectors are denoted by boldface uppercase and lowercase letters, respectively. The $\ell_{2,1}$-norm of a matrix $\mathbf{A} \in \mathbb{R}^{r \times c}$ is defined as $\|\mathbf{A}\|_{2,1} = \sum_{i=1}^{r}\sqrt{\sum_{j=1}^{c} a_{ij}^2} = \sum_{i=1}^{r}\|\mathbf{a}^i\|_2$, where $\mathbf{a}^i$ is the i-th row of $\mathbf{A}$.
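As a quick illustration of the notation above, the following NumPy sketch evaluates the $\ell_{2,1}$-norm of a small matrix; it is provided for clarity only and is not part of the original paper.

```python
import numpy as np

def l21_norm(A):
    """l2,1-norm: sum of the l2-norms of the rows of A."""
    return np.sum(np.linalg.norm(A, axis=1))

# Example: rows have l2-norms 5, 0 and 1, so the l2,1-norm is 6.
A = np.array([[3.0, 4.0], [0.0, 0.0], [1.0, 0.0]])
print(l21_norm(A))  # 6.0
```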

2. Methodology

In this section, we first introduce the model formulation of TSRBG and then its optimization algorithm.

2.1. Problem Definition

Suppose that the labeled EEG samples from one subject $\{\mathbf{X}_s, \mathbf{Y}_s\} = \{(\mathbf{x}_{si}, \mathbf{y}_{si})\}_{i=1}^{n_s}$ define the source domain $\mathcal{D}_s$, and the unlabeled EEG samples from another subject $\{\mathbf{X}_t\} = \{\mathbf{x}_{tj}\}_{j=1}^{n_t}$ form the target domain $\mathcal{D}_t$, where $\mathbf{X}_s \in \mathbb{R}^{d \times n_s}$, $\mathbf{X}_t \in \mathbb{R}^{d \times n_t}$, and $\mathbf{Y}_s \in \mathbb{R}^{n_s \times c}$. $\mathbf{x}_{si} \in \mathbb{R}^{d}$ and $\mathbf{x}_{tj} \in \mathbb{R}^{d}$ are, respectively, the i-th source sample and the j-th target sample. $\mathbf{y}_{si}|_{i=1}^{n_s} \in \mathbb{R}^{1 \times c}$ is the one-hot label vector of the i-th source sample, $d$ is the feature dimension, $c$ is the number of emotional states, $n_s$ and $n_t$ are the numbers of samples in the source and target domains, respectively, and $n = n_s + n_t$ is the total number of samples in both domains. The feature and label spaces of the two domains are the same, i.e., $\mathcal{X}_s = \mathcal{X}_t$ and $\mathcal{Y}_s = \mathcal{Y}_t$; however, their marginal and conditional distributions are different due to the individual differences of EEG, i.e., $P_s(\mathbf{X}_s) \neq P_t(\mathbf{X}_t)$ and $P_s(\mathbf{Y}_s|\mathbf{X}_s) \neq P_t(\mathbf{Y}_t|\mathbf{X}_t)$.
As shown in Figure 1, we propose a joint method for EEG emotion recognition. The model consists of two parts, domain adaptation, and semi-supervised joint label estimation. Below, we introduce them in detail.

2.2. Domain Alignment

Suppose that the distribution differences of source and target EEG data can be minimized in their subspace representations. We measure the marginal and conditional distribution differences between the source and target domain subspace data through the Maximum Mean Discrepancy (MMD) criterion [40]. In detail, we project the source and target domain data into their respective subspaces by two matrices; that is, we define $\mathbf{P}_s \in \mathbb{R}^{d \times p}$ as the projection matrix of the source domain and $\mathbf{P}_t \in \mathbb{R}^{d \times p}$ as that of the target domain, where $p$ ($p \ll d$) is the subspace dimensionality. Then, the projected data of the two domains can be represented as $\mathbf{P}_s^T \mathbf{X}_s$ and $\mathbf{P}_t^T \mathbf{X}_t$, respectively. Marginal distribution alignment can be achieved by minimizing the distance between the sample means of the two domains, that is,
$$M_{dist}(\mathbf{P}_s, \mathbf{P}_t) = \left\| \frac{1}{n_s}\sum_{i=1}^{n_s} \mathbf{P}_s^T \mathbf{x}_{si} - \frac{1}{n_t}\sum_{j=1}^{n_t} \mathbf{P}_t^T \mathbf{x}_{tj} \right\|_2^2 = \left\| \mathbf{P}_s^T \mathbf{X}_s \frac{\mathbf{1}_{n_s}}{n_s} - \mathbf{P}_t^T \mathbf{X}_t \frac{\mathbf{1}_{n_t}}{n_t} \right\|_2^2. \tag{1}$$
Similarly, conditional distribution alignment aims to minimize the distance between the sample means belonging to the same class of the two domains, that is,
$$C_{dist}(\mathbf{P}_{s/t}, \mathbf{F}_t) = \sum_{k=1}^{c} \left\| \frac{1}{n_s^k}\sum_{i=1}^{n_s^k} \mathbf{P}_s^T \mathbf{x}_{si} - \frac{1}{n_t^k}\sum_{j=1}^{n_t^k} f_t^{(k,j)} \mathbf{P}_t^T \mathbf{x}_{tj} \right\|_2^2 = \left\| \mathbf{P}_s^T \mathbf{X}_s \mathbf{Y}_s \mathbf{N}_s - \mathbf{P}_t^T \mathbf{X}_t \mathbf{F}_t \mathbf{N}_t \right\|_2^2, \tag{2}$$
where $n_s^k$ and $n_t^k$ denote the numbers of samples belonging to the $k$-th ($k = 1, \dots, c$) emotional state in the source and target domains, respectively. $\mathbf{1}_{n_s} \in \mathbb{R}^{n_s}$ and $\mathbf{1}_{n_t} \in \mathbb{R}^{n_t}$ are all-one column vectors. $f_t^{(k,j)} > 0$ (with $\sum_{k=1}^{c} f_t^{(k,j)} = 1$) denotes the probability that the j-th target domain sample belongs to the k-th emotional state category. $\mathbf{N}_s$ ($\mathbf{N}_t$) is the diagonal matrix whose k-th diagonal element is $1/n_s^k$ ($1/n_t^k$). However, the label information of the target domain data is not available. Here, we utilize the probability class adaptive formula [37] to estimate the target domain labels, which we denote by $\mathbf{F}_t \in \mathbb{R}^{n_t \times c}$.
For simplicity, we combine M d i s t and C d i s t with the same weight. Thus, the joint distribution alignment is formulated as
$$Dist = M_{dist} + C_{dist}. \tag{3}$$
For clarity, we rewrite (3) in matrix form as
$$Dist = \min_{\mathbf{P}_{s/t}, \mathbf{F}_t} \left\| \mathbf{P}_s^T \mathbf{X}_s \bar{\mathbf{Y}}_s \bar{\mathbf{N}}_s - \mathbf{P}_t^T \mathbf{X}_t \bar{\mathbf{F}}_t \bar{\mathbf{N}}_t \right\|_F^2 \quad \mathrm{s.t.} \quad \mathbf{P}_{s/t}^T \mathbf{X}_{s/t} \mathbf{H}_{s/t} \mathbf{X}_{s/t}^T \mathbf{P}_{s/t} = \mathbf{I}_p, \tag{4}$$
where $\mathbf{H}_{s/t} = \mathbf{I}_{n_{s/t}} - \frac{1}{n_{s/t}} \mathbf{1}_{n_{s/t}} \mathbf{1}_{n_{s/t}}^T$ is the centralization matrix, $\mathbf{I}_{n_{s/t}} \in \mathbb{R}^{n_{s/t} \times n_{s/t}}$ is the identity matrix, $\bar{\mathbf{Y}}_s = [\mathbf{1}_{n_s}, \mathbf{Y}_s] \in \mathbb{R}^{n_s \times (c+1)}$ and $\bar{\mathbf{F}}_t = [\mathbf{1}_{n_t}, \mathbf{F}_t] \in \mathbb{R}^{n_t \times (c+1)}$ are the extended label matrices, and $\bar{\mathbf{N}}_{s/t} = \mathrm{diag}(1/n_{s/t}, \mathbf{N}_{s/t}) \in \mathbb{R}^{(c+1) \times (c+1)}$. Additionally, to avoid too much divergence between the source and the target projections, we minimize the distance between them by
$$\min_{\mathbf{P}_s, \mathbf{P}_t} \left\| \mathbf{P}_s - \mathbf{P}_t \right\|_{2,1}. \tag{5}$$
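To make the two alignment terms concrete, the following NumPy sketch evaluates the marginal alignment term of Equation (1) and the projection divergence penalty of Equation (5). It assumes $\mathbf{X}_s$, $\mathbf{X}_t$ are d x n_s and d x n_t matrices and $\mathbf{P}_s$, $\mathbf{P}_t$ are d x p projection matrices; the function names are illustrative, not from the original paper.

```python
import numpy as np

def marginal_mmd(Ps, Pt, Xs, Xt):
    """Squared distance between projected source/target sample means, cf. Eq. (1)."""
    mu_s = Ps.T @ Xs.mean(axis=1)   # p-dimensional source mean in the subspace
    mu_t = Pt.T @ Xt.mean(axis=1)   # p-dimensional target mean in the subspace
    return float(np.sum((mu_s - mu_t) ** 2))

def projection_divergence(Ps, Pt):
    """l2,1-norm penalty keeping the two projections close, cf. Eq. (5)."""
    return float(np.sum(np.linalg.norm(Ps - Pt, axis=1)))
```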

2.3. Label Estimation

We reduce the divergence between the source and the target domain by Equation (4) and simultaneously expect that better target labels can be estimated. In order to describe the target domain label estimation process from two aspects, we use a bi-model fusion method. On the one hand, a semi-supervised label propagation method based on a sample-feature bipartite graph is used for emotional state estimation; the graph is constructed by characterizing the connections between EEG features and samples. On the other hand, a semi-supervised regression method is used to estimate the EEG emotional states in the target domain. The two models are adaptively balanced to achieve more accurate target domain label estimation.

2.3.1. Bipartite Label Propagation

The semi-supervised label propagation method based on a sample-feature bipartite graph is used to estimate the labels of the target domain samples and is formulated as
$$\min_{\mathbf{G}, \mathbf{F}_t} \left\| \mathbf{S} - \mathbf{A} \right\|_F^2 + \lambda \, \mathrm{Tr}(\mathbf{Y}^T \mathbf{L} \mathbf{Y}), \tag{6}$$
where $\mathbf{A} = [\mathbf{0}_n, \mathbf{B}; \mathbf{B}^T, \mathbf{0}_p] \in \mathbb{R}^{(n+p) \times (n+p)}$ is the bipartite graph similarity matrix, $\mathbf{0}_n \in \mathbb{R}^{n \times n}$ and $\mathbf{0}_p \in \mathbb{R}^{p \times p}$ are all-zero matrices, and $\mathbf{B} \in \mathbb{R}^{n \times p}$ is the sample-feature similarity matrix determined by both source and target data in their subspace representations. Based on $\mathbf{B}$, we expect to learn a better bipartite graph similarity matrix $\mathbf{G} \in \mathbb{R}^{n \times p}$, from which we form the matrix $\mathbf{S} = [\mathbf{0}_n, \mathbf{G}; \mathbf{G}^T, \mathbf{0}_p] \in \mathbb{R}^{(n+p) \times (n+p)}$ corresponding to $\mathbf{A}$. $\lambda$ is a regularization parameter; $\mathbf{Y} = [\mathbf{Y}_s; \mathbf{F}_t; \mathbf{F}_d] \in \mathbb{R}^{(n+p) \times c}$ is the label matrix consisting of the sample label matrix $\mathbf{F} = [\mathbf{Y}_s; \mathbf{F}_t] \in \mathbb{R}^{n \times c}$ and the feature label matrix $\mathbf{F}_d \in \mathbb{R}^{p \times c}$ for the subspace features; $\mathbf{L} = \mathbf{D} - \mathbf{S} \in \mathbb{R}^{(n+p) \times (n+p)}$ is the graph Laplacian matrix; and $\mathbf{D} = [\mathbf{D}_1, \mathbf{0}_{n \times p}; \mathbf{0}_{p \times n}, \mathbf{D}_2] \in \mathbb{R}^{(n+p) \times (n+p)}$ is a diagonal matrix whose diagonal elements are $d_{ii}|_{i=1}^{n+p} = \sum_{j=1}^{n+p} s_{ij}$, where $\mathbf{0}_{n \times p} \in \mathbb{R}^{n \times p}$ and $\mathbf{0}_{p \times n} \in \mathbb{R}^{p \times n}$ are all-zero matrices and $s_{ij}$ is the element in row i and column j of $\mathbf{S}$. $\mathrm{Tr}(\cdot)$ denotes the trace of a matrix.
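A minimal sketch of the bipartite graph construction described above is given below; it only builds the block matrix $\mathbf{S}$, its degree matrix $\mathbf{D}$ and the Laplacian $\mathbf{L} = \mathbf{D} - \mathbf{S}$ from a given sample-feature similarity matrix, and does not reproduce how the paper initializes $\mathbf{B}$.

```python
import numpy as np

def bipartite_laplacian(G):
    """Given an n x p sample-feature similarity matrix G, build
    S = [[0, G], [G^T, 0]], the degree matrix D and the Laplacian L = D - S."""
    n, p = G.shape
    S = np.block([[np.zeros((n, n)), G],
                  [G.T, np.zeros((p, p))]])
    D = np.diag(S.sum(axis=1))
    return D - S, S, D
```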

2.3.2. Semi-Supervised Regression

For the semi-supervised regression method in target domain label estimation, we have its formula as
$$\min_{\mathbf{W}, \mathbf{F}_t, \mathbf{b}} \left\| \mathbf{X}_{new} \mathbf{W} + \mathbf{1}\mathbf{b}^T - \mathbf{F} \right\|_F^2 + \gamma \left\| \mathbf{W} \right\|_{2,1}^2, \tag{7}$$
where $\mathbf{W} \in \mathbb{R}^{p \times c}$ is the sample-label mapping matrix, $\gamma$ is a regularization parameter, $\mathbf{X}_{new} \in \mathbb{R}^{n \times p}$ is the subspace data, and $\mathbf{b} \in \mathbb{R}^{1 \times c}$ is the offset vector. $\|\cdot\|_{2,1}^2$ represents the squared $\ell_{2,1}$-norm.

2.3.3. Fused Label Estimation Model

Based on the above analysis in Section 2.3.1 and Section 2.3.2, we combine the two models in (6) and (7) and obtain the fused objective function for target domain label estimation as
$$\begin{aligned} \min_{\mathbf{W}, \mathbf{b}, \mathbf{G}, \mathbf{F}_t} \; & \alpha \left( \left\| \mathbf{X}_{new} \mathbf{W} + \mathbf{1}\mathbf{b}^T - \mathbf{F} \right\|_F^2 + \gamma \left\| \mathbf{W} \right\|_{2,1}^2 \right) + \beta \left( \left\| \mathbf{S} - \mathbf{A} \right\|_F^2 + \lambda \, \mathrm{Tr}(\mathbf{Y}^T \mathbf{L} \mathbf{Y}) \right) \\ \mathrm{s.t.} \; & \mathbf{G} \geq 0, \; \mathbf{G}\mathbf{1}_p = \mathbf{1}_n, \; \mathbf{F}_t \geq 0, \; \mathbf{F}_t \mathbf{1}_c = \mathbf{1}_{n_t}, \end{aligned} \tag{8}$$
where $\alpha$ and $\beta$ are regularization parameters, and $\mathbf{1}_p$, $\mathbf{1}_n$, $\mathbf{1}_c$, $\mathbf{1}_{n_t}$ are all-one column vectors of dimensions $p$, $n$, $c$, and $n_t$, respectively.

2.4. Overall Objective Function

As stated previously, we jointly optimize domain adaptation and semi-supervised joint label estimation. On the one hand, domain adaptation effectively reduces the differences in EEG data feature distribution among subjects and provides well-aligned data for joint label estimation; on the other hand, a better target domain label can promote the alignment of conditional distributions of source and target domains. Therefore, we combine them in a unified framework and finally obtain the objective function of TSRBG as
$$\begin{aligned} \min \; & \left\| \mathbf{P}_s^T \mathbf{X}_s \bar{\mathbf{Y}}_s \bar{\mathbf{N}}_s - \mathbf{P}_t^T \mathbf{X}_t \bar{\mathbf{F}}_t \bar{\mathbf{N}}_t \right\|_F^2 + \alpha \left\| \mathbf{X}^T \mathbf{P} \mathbf{W} + \mathbf{1}\mathbf{b}^T - \mathbf{F} \right\|_F^2 + \gamma \left( \left\| \mathbf{W} \right\|_{2,1}^2 + \left\| \mathbf{P}_s - \mathbf{P}_t \right\|_{2,1} \right) \\ & + \beta \left\| \mathbf{G} - \mathbf{B} \right\|_F^2 + \lambda \, \mathrm{Tr}(\mathbf{Y}^T \mathbf{L}_s \mathbf{Y}) \\ \mathrm{s.t.} \; & \mathbf{P}_{s/t}^T \mathbf{X}_{s/t} \mathbf{H}_{s/t} \mathbf{X}_{s/t}^T \mathbf{P}_{s/t} = \mathbf{I}_p, \; \mathbf{G} \geq 0, \; \mathbf{G}\mathbf{1} = \mathbf{1}, \; \mathbf{F} \geq 0, \; \mathbf{F}\mathbf{1} = \mathbf{1}, \end{aligned} \tag{9}$$
where α , β , γ , λ are the regularization parameters.

2.5. Optimization

There are seven variables in Equation (9): the mapping matrix $\mathbf{W}$, the offset vector $\mathbf{b}$, the source domain projection matrix $\mathbf{P}_s$, the target domain projection matrix $\mathbf{P}_t$, the sample-feature similarity matrix $\mathbf{G}$, the feature label matrix $\mathbf{F}_d$, and the target domain label matrix $\mathbf{F}_t$. We propose to update one variable while fixing the others. The detailed updating rule for each variable is derived below.
  • Update W . The objective function in terms of variable W is
$$\min_{\mathbf{W}} \; \alpha \left\| \mathbf{X}^T \mathbf{P} \mathbf{W} + \mathbf{1}\mathbf{b}^T - \mathbf{F} \right\|_F^2 + \gamma \left\| \mathbf{W} \right\|_{2,1}^2. \tag{10}$$
There are four variables, $\mathbf{P}$, $\mathbf{W}$, $\mathbf{b}$, and $\mathbf{F}_t$, in Equation (10). We need to initialize the variables other than $\mathbf{W}$. For the target domain label matrix $\mathbf{F}_t$, we utilize the probability class adaptive formula [37] to estimate the target domain labels, and the initial value of each element is $1/c$, where $c$ is the number of emotional state categories. For the subspace projection matrix $\mathbf{P} = [\mathbf{P}_s; \mathbf{P}_t]$, we initialize it by Principal Component Analysis (PCA) [41] on the original EEG data.
Taking the derivative of Equation (10) w.r.t. b and setting it to zero, we have
$$\mathbf{b} = \frac{1}{n} \left( \mathbf{F}^T \mathbf{1} - \mathbf{W}^T \mathbf{P}^T \mathbf{X} \mathbf{1} \right). \tag{11}$$
By substituting Equation (11) into (10), we obtain
$$\min_{\mathbf{W}} \left\| \mathbf{H} \mathbf{X}^T \mathbf{P} \mathbf{W} - \mathbf{H} \mathbf{F} \right\|_F^2 + \frac{\gamma}{\alpha} \left\| \mathbf{W} \right\|_{2,1}^2, \tag{12}$$
where $\mathbf{H} = \mathbf{I}_n - \frac{1}{n}\mathbf{1}_n\mathbf{1}_n^T \in \mathbb{R}^{n \times n}$ is the centralization matrix, $\mathbf{I}_n \in \mathbb{R}^{n \times n}$ is the identity matrix, and $\mathbf{1}_n \in \mathbb{R}^{n}$ is an all-one column vector.
Constructing Lagrange function about W based on Equation (12), we have
$$\mathcal{L}(\mathbf{W}) = \left\| \mathbf{H} \mathbf{X}^T \mathbf{P} \mathbf{W} - \mathbf{H} \mathbf{F} \right\|_F^2 + \frac{\gamma}{\alpha} \mathrm{Tr}\left( \mathbf{W}^T \mathbf{Q} \mathbf{W} \right), \tag{13}$$
where Q R p × p is a diagonal matrix whose i-th diagonal element is
$$q_{ii} = \frac{\sum_{j=1}^{p} \sqrt{\left\| \mathbf{w}^j \right\|_2^2 + \epsilon}}{\sqrt{\left\| \mathbf{w}^i \right\|_2^2 + \epsilon}}, \tag{14}$$
and $\epsilon$ is a fixed small constant, $\mathbf{w}^i \in \mathbb{R}^{1 \times c}$ is the i-th row vector of $\mathbf{W}$, and $\|\cdot\|_2^2$ represents the squared $\ell_2$-norm.
Taking the derivative of Equation (13) w.r.t. W and setting it to zero, we obtain
$$\mathbf{W} = \left( \mathbf{P}^T \mathbf{X} \mathbf{H} \mathbf{X}^T \mathbf{P} + \frac{2\gamma}{\alpha} \mathbf{Q} \right)^{-1} \left( \mathbf{P}^T \mathbf{X} \mathbf{H} \mathbf{F} \right). \tag{15}$$
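A minimal sketch of this closed-form update is given below, assuming $\mathbf{X}$ is the stacked data matrix, $\mathbf{P}$ the current projection, $\mathbf{F}$ the current label matrix, $\mathbf{H}$ the centralization matrix and $\mathbf{Q}$ the reweighting matrix of Equation (14); a linear solve is used instead of an explicit inverse.

```python
import numpy as np

def update_W(X, P, F, H, Q, alpha, gamma):
    """Closed-form W update of Eq. (15):
    W = (P^T X H X^T P + (2*gamma/alpha) Q)^{-1} (P^T X H F)."""
    Z = P.T @ X                                   # p x n subspace representation
    A = Z @ H @ Z.T + (2.0 * gamma / alpha) * Q   # p x p system matrix
    return np.linalg.solve(A, Z @ H @ F)          # p x c mapping matrix
```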
  • Update P . The objective function in terms of variable P is
$$\min \; \left\| \mathbf{P}_s^T \mathbf{X}_s \bar{\mathbf{Y}}_s \bar{\mathbf{N}}_s - \mathbf{P}_t^T \mathbf{X}_t \bar{\mathbf{F}}_t \bar{\mathbf{N}}_t \right\|_F^2 + \alpha \left\| \mathbf{X}^T \mathbf{P} \mathbf{W} + \mathbf{1}\mathbf{b}^T - \mathbf{F} \right\|_F^2 + \gamma \left\| \mathbf{P}_s - \mathbf{P}_t \right\|_{2,1}. \tag{16}$$
First, we need to convert the $\ell_{2,1}$-norm into its trace form. Similar to matrix $\mathbf{Q}$, we define $\mathbf{M} = [\mathbf{M}_0, -\mathbf{M}_0; -\mathbf{M}_0, \mathbf{M}_0] \in \mathbb{R}^{2d \times 2d}$, where $\mathbf{M}_0 \in \mathbb{R}^{d \times d}$ is a diagonal matrix with its i-th diagonal element
$$m_{ii} = \frac{1}{\left\| (\mathbf{P}_s - \mathbf{P}_t)^i \right\|_2}. \tag{17}$$
Here, $(\mathbf{P}_s - \mathbf{P}_t)^i$ is the i-th row vector of $(\mathbf{P}_s - \mathbf{P}_t)$ and $\|\cdot\|_2$ denotes the $\ell_2$-norm. By defining
$$\mathbf{T} = \begin{bmatrix} \mathbf{X}_s \bar{\mathbf{Y}}_s \bar{\mathbf{N}}_s \bar{\mathbf{N}}_s^T \bar{\mathbf{Y}}_s^T \mathbf{X}_s^T & -\mathbf{X}_s \bar{\mathbf{Y}}_s \bar{\mathbf{N}}_s \bar{\mathbf{N}}_t^T \bar{\mathbf{F}}_t^T \mathbf{X}_t^T \\ -\mathbf{X}_t \bar{\mathbf{F}}_t \bar{\mathbf{N}}_t \bar{\mathbf{N}}_s^T \bar{\mathbf{Y}}_s^T \mathbf{X}_s^T & \mathbf{X}_t \bar{\mathbf{F}}_t \bar{\mathbf{N}}_t \bar{\mathbf{N}}_t^T \bar{\mathbf{F}}_t^T \mathbf{X}_t^T \end{bmatrix} \in \mathbb{R}^{2d \times 2d}, \tag{18}$$
we construct the Lagrangian function in terms of variable P as
$$\mathcal{L}(\mathbf{P}) = \mathrm{Tr}\left( \mathbf{P}^T \mathbf{T} \mathbf{P} \right) + \alpha \, \mathrm{Tr}\left( \mathbf{P}^T \mathbf{X}\mathbf{H}\mathbf{X}^T \mathbf{P}\mathbf{W}\mathbf{W}^T \right) - \alpha \, \mathrm{Tr}\left( \mathbf{P}^T \mathbf{X}\mathbf{H}\mathbf{F}\mathbf{W}^T \right) + \gamma \, \mathrm{Tr}\left( \mathbf{P}^T \mathbf{M} \mathbf{P} \right). \tag{19}$$
Taking the derivative of Equation (19) w.r.t. P and setting it to zero, we have
$$\left( \mathbf{X}\mathbf{H}\mathbf{X}^T \right)^{-1} \left( \mathbf{T} + \gamma \mathbf{M} \right) \mathbf{P} + \mathbf{P} \left( \alpha \mathbf{W}\mathbf{W}^T \right) = \left( \mathbf{X}\mathbf{H}\mathbf{X}^T \right)^{-1} \left( \mathbf{X}\mathbf{H}\mathbf{F}\mathbf{W}^T \right). \tag{20}$$
Equation (20) is a Sylvester equation [42]; solving it yields $\mathbf{P}$, which is then split into the source domain projection matrix $\mathbf{P}_s$ and the target domain projection matrix $\mathbf{P}_t$.
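A sketch of this step using SciPy's Sylvester solver is given below. It assumes $\mathbf{X}$ is the $2d \times n$ stacked data matrix paired with $\mathbf{P} = [\mathbf{P}_s; \mathbf{P}_t]$, and adds a small ridge term (an assumption, for numerical stability) before inverting $\mathbf{X}\mathbf{H}\mathbf{X}^T$.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def update_P(X, H, T, M, W, F, alpha, gamma, reg=1e-6):
    """Solve Eq. (20) as the Sylvester equation A P + P B = C."""
    d2 = X.shape[0]
    XHXt = X @ H @ X.T + reg * np.eye(d2)            # regularized (assumption) X H X^T
    A = np.linalg.solve(XHXt, T + gamma * M)
    B = alpha * (W @ W.T)
    C = np.linalg.solve(XHXt, X @ H @ F @ W.T)
    return solve_sylvester(A, B, C)                  # returns the stacked P = [Ps; Pt]
```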
  • Update G . The corresponding objective function is
$$\min \; \beta \left\| \mathbf{S} - \mathbf{A} \right\|_F^2 + \lambda \, \mathrm{Tr}(\mathbf{Y}^T \mathbf{L}_s \mathbf{Y}) \quad \mathrm{s.t.} \quad \mathbf{G} \geq 0, \; \mathbf{G}\mathbf{1} = \mathbf{1}. \tag{21}$$
We propose to solve G in a row-wise manner. Accordingly, we convert Equation (21) to
$$\beta \sum_{i=1}^{n} \sum_{j=1}^{p} \left( g_{ij} - b_{ij} \right)^2 + \lambda \sum_{i=1}^{n} \sum_{j=1}^{p} \left\| \mathbf{f}^i - \mathbf{f}_d^j \right\|_2^2 \, g_{ij}, \tag{22}$$
where $g_{ij}$ and $b_{ij}$ are the $(i,j)$-elements of matrices $\mathbf{G}$ and $\mathbf{B}$, respectively, $\mathbf{f}^i$ is the i-th row vector of the label matrix $\mathbf{F}$, and $\mathbf{f}_d^j$ is the j-th row vector of matrix $\mathbf{F}_d$.
By defining $v_{ij} = \|\mathbf{f}^i - \mathbf{f}_d^j\|_2^2$ and completing the square with respect to $\mathbf{g}^i$, Equation (21) is equivalent to
$$\min_{\mathbf{g}^i} \left\| \mathbf{g}^i - \left( \mathbf{b}^i - \frac{\lambda}{2\beta} \mathbf{v}^i \right) \right\|_2^2 \quad \mathrm{s.t.} \quad \mathbf{g}^i \geq 0, \; \mathbf{g}^i \mathbf{1} = 1, \tag{23}$$
which is a Euclidean projection onto a simplex [43].
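The row-wise subproblem of Equation (23) can be solved with a standard Euclidean projection onto the probability simplex; the routine below is one well-known sorting-based projection and is given as a sketch, not necessarily the exact solver used in [43].

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex {x >= 0, sum(x) = 1}."""
    u = np.sort(v)[::-1]                       # sort in decreasing order
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

# Each row g_i of G is obtained by projecting b_i - (lambda / (2 * beta)) * v_i onto the simplex.
```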
  • Update F d . The objective function in terms of variable F d is
$$\min_{\mathbf{F}_d} \; \lambda \, \mathrm{Tr}(\mathbf{Y}^T \mathbf{L}_s \mathbf{Y}), \tag{24}$$
which can be decomposed into
$$\min_{\mathbf{F}_d} \; \lambda \, \mathrm{Tr}\left( \mathbf{F}_d^T \mathbf{D}_2 \mathbf{F}_d - 2 \mathbf{F}_d^T \mathbf{G}^T \mathbf{F} \right). \tag{25}$$
Then, the Lagrangian function of Equation (25) is
$$\mathcal{L}(\mathbf{F}_d) = \lambda \, \mathrm{Tr}\left( \mathbf{F}_d^T \mathbf{D}_2 \mathbf{F}_d - 2 \mathbf{F}_d^T \mathbf{G}^T \mathbf{F} \right). \tag{26}$$
Taking the derivative of Equation (26) w.r.t. F d and setting it to zero, we have
$$\mathbf{F}_d = \left( \mathbf{D}_2 \right)^{-1} \mathbf{G}^T \mathbf{F}. \tag{27}$$
  • Update F t . The objective function in terms of variable F t is
$$\min \; \left\| \mathbf{P}_s^T \mathbf{X}_s \bar{\mathbf{Y}}_s \bar{\mathbf{N}}_s - \mathbf{P}_t^T \mathbf{X}_t \bar{\mathbf{F}}_t \bar{\mathbf{N}}_t \right\|_F^2 + \alpha \left\| \mathbf{X}^T \mathbf{P} \mathbf{W} + \mathbf{1}\mathbf{b}^T - \mathbf{F} \right\|_F^2 + \lambda \, \mathrm{Tr}(\mathbf{Y}^T \mathbf{L}_s \mathbf{Y}) \quad \mathrm{s.t.} \quad \mathbf{F} \geq 0, \; \mathbf{F}\mathbf{1} = \mathbf{1}. \tag{28}$$
By some linear algebra transforms, the first term of Equation (28) can be reformulated as
$$\mathrm{Tr}\left( \mathbf{P}_t^T \mathbf{X}_t \mathbf{F}_t \mathbf{N}_t \mathbf{N}_t \mathbf{F}_t^T \mathbf{X}_t^T \mathbf{P}_t \right) - 2 \, \mathrm{Tr}\left( \mathbf{P}_s^T \mathbf{X}_s \mathbf{Y}_s \mathbf{N}_s \mathbf{N}_t \mathbf{F}_t^T \mathbf{X}_t^T \mathbf{P}_t \right). \tag{29}$$
Similarly, the last two terms of Equation (28) can be written as
$$\alpha \left( \mathrm{Tr}\left( \mathbf{F}_t^T \mathbf{H}_t \mathbf{F}_t \right) - 2 \, \mathrm{Tr}\left( \mathbf{F}_t^T \mathbf{H}_t \mathbf{X}_t^T \mathbf{P}_t \mathbf{W} \right) \right) + \lambda \left( \mathrm{Tr}\left( \mathbf{F}_t^T \mathbf{D}_t \mathbf{F}_t \right) - 2 \, \mathrm{Tr}\left( \mathbf{F}_t^T \mathbf{G}_t^T \mathbf{F}_d \right) \right), \tag{30}$$
where $\mathbf{H}_t = \mathbf{I}_{n_t} - \frac{1}{n_t}\mathbf{1}_{n_t}\mathbf{1}_{n_t}^T$.
By constructing the Lagrangian function based on Equations (28)–(30), we have
$$\begin{aligned} \mathcal{L}(\mathbf{F}_t) = \; & \mathrm{Tr}\left( \mathbf{P}_t^T \mathbf{X}_t \mathbf{F}_t \mathbf{N}_t \mathbf{N}_t \mathbf{F}_t^T \mathbf{X}_t^T \mathbf{P}_t \right) - 2 \, \mathrm{Tr}\left( \mathbf{P}_s^T \mathbf{X}_s \mathbf{Y}_s \mathbf{N}_s \mathbf{N}_t \mathbf{F}_t^T \mathbf{X}_t^T \mathbf{P}_t \right) \\ & + \alpha \left( \mathrm{Tr}\left( \mathbf{F}_t^T \mathbf{H}_t \mathbf{F}_t \right) - 2 \, \mathrm{Tr}\left( \mathbf{F}_t^T \mathbf{H}_t \mathbf{X}_t^T \mathbf{P}_t \mathbf{W} \right) \right) + \lambda \left( \mathrm{Tr}\left( \mathbf{F}_t^T \mathbf{D}_t \mathbf{F}_t \right) - 2 \, \mathrm{Tr}\left( \mathbf{F}_t^T \mathbf{G}_t^T \mathbf{F}_d \right) \right) \\ & + \mathrm{Tr}\left( \boldsymbol{\Phi} \mathbf{F}_t \right) + \eta \left\| \mathbf{1}_{n_t} - \mathbf{F}_t \mathbf{1}_c \right\|_2^2. \end{aligned} \tag{31}$$
Taking the derivative of Equation (31) w.r.t. F t and setting it to zero, we have
$$\mathbf{X}_t^T \mathbf{P}_t \mathbf{P}_t^T \mathbf{X}_t \mathbf{F}_t \mathbf{N}_t \mathbf{N}_t - \mathbf{X}_t^T \mathbf{P}_t \mathbf{P}_s^T \mathbf{X}_s \mathbf{Y}_s \mathbf{N}_s \mathbf{N}_t + \left( \alpha \mathbf{H}_t + \lambda \mathbf{D}_t \right) \mathbf{F}_t - \alpha \mathbf{H}_t \mathbf{X}_t^T \mathbf{P}_t \mathbf{W} - \lambda \mathbf{G}_t^T \mathbf{F}_d + \boldsymbol{\Phi} - \eta \left( \mathbf{1}_{n_t} - \mathbf{F}_t \mathbf{1}_c \right) \mathbf{1}_c^T = \mathbf{0}. \tag{32}$$
To simplify the notations, we define
$$\begin{aligned} \mathbf{Z}_t &= \mathbf{X}_t^T \mathbf{P}_t \mathbf{P}_t^T \mathbf{X}_t \mathbf{F}_t \mathbf{N}_t \mathbf{N}_t + \alpha \mathbf{H}_t \mathbf{F}_t = \mathbf{Z}_t^+ - \mathbf{Z}_t^-, \\ \mathbf{Z}_s &= \mathbf{X}_t^T \mathbf{P}_t \mathbf{P}_s^T \mathbf{X}_s \mathbf{Y}_s \mathbf{N}_s \mathbf{N}_t + \alpha \mathbf{H}_t \mathbf{X}_t^T \mathbf{P}_t \mathbf{W} = \mathbf{Z}_s^+ - \mathbf{Z}_s^-, \end{aligned} \tag{33}$$
where $\mathbf{Z}_t^+$ and $\mathbf{Z}_s^+$ are obtained by setting all negative elements of $\mathbf{Z}_t$ and $\mathbf{Z}_s$ to zero; similarly, $\mathbf{Z}_t^-$ and $\mathbf{Z}_s^-$ are obtained by setting all positive elements to zero and taking the absolute values of the negative ones.
Based on the Karush–Kuhn–Tucker (KKT) condition $\boldsymbol{\Phi} \odot \mathbf{F}_t = \mathbf{0}$ (where $\odot$ denotes the Hadamard product), we have
$$\mathbf{F}_t = \frac{\mathbf{Z}_t^- + \mathbf{Z}_s^+ + \lambda \mathbf{G}_t^T \mathbf{F}_d + \eta \mathbf{1}_{n_t \times c}}{\mathbf{Z}_t^+ + \mathbf{Z}_s^- + \lambda \mathbf{D}_t \mathbf{F}_t + \eta \mathbf{F}_t \mathbf{1}_{c \times c}} \odot \mathbf{F}_t, \tag{34}$$

where the division is performed element-wise.
We summarize the optimization procedure of our proposed model TSRBG in Algorithm 1.
Algorithm 1 The procedure of the TSRBG framework
Input: Data and labels of the source domain $\{\mathbf{X}_s, \mathbf{Y}_s\}$; data of the target domain $\mathbf{X}_t$; subspace dimension $p$; parameters $\alpha$, $\lambda$, $\gamma$, and $\beta$.
Output: Sample-label mapping matrix $\mathbf{W}$; source domain projection matrix $\mathbf{P}_s$; target domain projection matrix $\mathbf{P}_t$; sample-feature similarity matrix $\mathbf{G}$; feature label matrix $\mathbf{F}_d$; target domain label matrix $\mathbf{F}_t$.
1: Initialize $\mathbf{P}_s$, $\mathbf{P}_t$ with PCA; target domain label matrix $\mathbf{F}_t = \frac{1}{c}\mathbf{1}_{n_t \times c}$; feature label matrix $\mathbf{F}_d = \frac{1}{c}\mathbf{1}_{p \times c}$;
2: while not converged do
3:   Compute $\mathbf{W}$ by Equation (15) and then update $\mathbf{Q}$;
4:   Compute the subspace projection matrix $\mathbf{P}$ by solving the Sylvester Equation (20), split it into the source and target domain projection matrices, and then compute $\mathbf{M}$;
5:   Update the sample-feature similarity matrix $\mathbf{G}$ by optimizing Equation (23), and then update $\mathbf{S}$ and the Laplacian matrix $\mathbf{L} = \mathbf{D} - \mathbf{S}$;
6:   Compute the feature label matrix $\mathbf{F}_d$ by Equation (27);
7:   Compute the target domain label matrix $\mathbf{F}_t$ by Equation (34);
8: end while

2.6. Computational Complexity

We assume that the complexity of an operation between individual matrix elements is $\mathcal{O}(1)$. The computational complexity of TSRBG consists of the following parts. Computing $\mathbf{W}$ requires $\mathcal{O}(pn^2)$ and updating $\mathbf{Q}$ requires $\mathcal{O}(pc)$. When updating $\mathbf{P}$, solving the Sylvester equation needs $\mathcal{O}(d^3p^3 + d^2p^2)$, and updating $\mathbf{M}$ needs $\mathcal{O}(dp)$. For each $i \in [1, \dots, n]$, updating $\mathbf{g}^i$ costs $\mathcal{O}(p)$, so updating $\mathbf{G}$ costs $\mathcal{O}(np)$. For the label indicator matrices, $\mathbf{F}_d$ costs $\mathcal{O}(p^2c + pnc)$ and $\mathbf{F}_t$ costs $\mathcal{O}(n_t^2c + n_tc^2 + n_tc + n_tpc)$. As a result, the overall computational complexity of TSRBG is $\mathcal{O}(T(pn^2 + d^3p^3 + n_t^2c))$, where $T$ is the number of iterations.

3. Experiments

3.1. Dataset

SEED-IV [44] is a video-evoked emotional EEG dataset provided by the Center for Brain-like Computing and Machine Intelligence, Shanghai Jiao Tong University. In SEED-IV, 72 movie clips with obvious emotional tendencies were used to evoke four emotional states (happiness, sadness, fear, and neutrality) in 15 subjects, and each subject participated in three sessions. In each session, each subject was asked to watch 24 movie clips; that is, every six movie clips correspond to one emotional state. EEG data were recorded by the ESI NeuroScan System with a 62-channel cap at a sampling frequency of 1000 Hz, and were then down-sampled to 200 Hz to reduce the computational burden. After band-pass filtering the EEG data to 1–50 Hz, the Differential Entropy (DE) feature was extracted from five EEG frequency bands: Delta (1–3 Hz), Theta (4–7 Hz), Alpha (8–13 Hz), Beta (14–30 Hz), and Gamma (31–50 Hz). The DE feature is defined as
$$h(X) = -\int_{X} p(x) \ln(p(x)) \, dx, \tag{35}$$
where $X$ is a random variable and $p(x)$ is the corresponding probability density function. Assuming that the collected EEG signals obey the Gaussian distribution $\mathcal{N}(\mu, \sigma^2)$, the DE feature can be calculated by
$$h(X) = \int p(x) \left[ \frac{1}{2}\ln(2\pi\sigma^2) + \frac{(x-\mu)^2}{2\sigma^2} \right] dx = \frac{1}{2}\ln(2\pi\sigma^2) + \frac{\mathrm{Var}(X)}{2\sigma^2} = \frac{1}{2}\ln(2\pi e\sigma^2). \tag{36}$$
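A minimal sketch of how a DE feature could be computed from a band-passed EEG segment under this Gaussian assumption is shown below; the filter settings and windowing of the official SEED-IV feature extraction are not reproduced here, so treat the helper as illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def de_feature(x):
    """Differential entropy of a signal segment under the Gaussian assumption,
    h(X) = 0.5 * ln(2 * pi * e * sigma^2), cf. Eq. (36)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def band_de(signal, fs, low, high, order=4):
    """Band-pass filter one channel and compute its DE feature (illustrative helper)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return de_feature(filtfilt(b, a, signal))
```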
The data format provided by SEED-IV is $62 \times n \times 5$, where $n$ is the number of EEG samples in each session. Specifically, there are 851, 832, and 822 samples in the three sessions, respectively. We reshape the DE features into $310 \times n$ by concatenating the 62 channel values of the 5 frequency bands into a vector and then normalize them into [−1, 1] by row.
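The reshaping and row-wise normalization described above can be sketched as follows; the band-major feature ordering (62 channels of Delta, then Theta, and so on) is our assumption, chosen to match the band/channel aggregation used later in Section 3.4.

```python
import numpy as np

def reshape_and_normalize(de):
    """de: array of shape (62, n, 5). Returns a (310, n) matrix scaled to [-1, 1] per row."""
    n = de.shape[1]
    X = de.transpose(2, 0, 1).reshape(310, n)       # stack the 5 bands of 62 channels (assumed order)
    lo = X.min(axis=1, keepdims=True)
    hi = X.max(axis=1, keepdims=True)
    return 2.0 * (X - lo) / (hi - lo + 1e-12) - 1.0  # row-wise min-max scaling to [-1, 1]
```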

3.2. Experimental Settings

We set up a cross-subject EEG emotion recognition task based on SEED-IV. For each session, the samples and their labels from the first subject form the labeled source domain, and the samples from each of the other subjects form the unlabeled target domain. Therefore, for each session, we have 14 cross-subject tasks.
To evaluate the performance of TSRBG, we compare it with several methods including four non-deep transfer learning methods (Joint Distribution Adaptation (JDA) [45], Graph Adaptation Knowledge Transfer (GAKT) [37], Maximum Independent Domain Adaptation (MIDA) [24], Feature Selection Transfer Subspace Learning (FSTSL) [46]), one semi-supervised classification method (Structured Optimal Bipartite Graph learning (SOBG) [47]), and two deep learning methods (DGCNN [48] and LRS [22]). DGCNN is a deep learning method which uses the graph structure to depict the relationship of EEG channels. LRS is a deep transfer method to minimize the discrepancies of latent representations of source and target EEG data.
In the experiments, the parameters of each method are tuned as follows. For JDA, a linear kernel was used, the subspace dimension h was tuned from {10, 20, …, 100}, and the parameter λ was searched from $\{10^{-3}, 10^{-2}, \dots, 10^{3}\}$. For GAKT, the subspace dimension p was tuned from {10, 20, …, 100} and the parameters λ and α were searched from $\{10^{-3}, 10^{-2}, \dots, 10^{3}\}$. For MIDA, a linear kernel was used, and the regularization parameter μ and kernel parameter γ were searched from $\{10^{-3}, 10^{-2}, \dots, 10^{3}\}$. For FSTSL, the parameters α, β, γ were tuned from $\{10^{-3}, 10^{-2}, \dots, 10^{3}\}$. For SOBG, the parameters λ, η were tuned from $\{10^{-3}, 10^{-2}, \dots, 10^{3}\}$. In TSRBG, we tuned the parameters α, β, γ, λ from $\{10^{-3}, 10^{-2}, \dots, 10^{3}\}$ and the subspace dimensionality was searched from {10, 20, …, 100}.

3.3. Recognition Results and Analysis

The recognition accuracies of the above eight models in the cross-subject EEG emotional state recognition tasks of the three sessions are shown in Table 1, Table 2 and Table 3, respectively. In these tables, ‘sub2’ indicates that the samples from the first subject were used as the labeled source domain data while the samples from the second subject were used as the unlabeled target domain data, and so on; ‘Avg.’ represents the average accuracy over all 14 cross-subject cases in the session. We mark in bold the highest recognition accuracy of each emotion recognition case (each row of the tables).
According to these obtained results shown in Table 1, Table 2 and Table 3, we draw the following observations.
  • TSRBG achieved better EEG emotional state recognition accuracy than the other compared models in most cases. The highest recognition accuracy, 88.58%, was obtained on the 15th subject of session 2. The average recognition accuracies over the three sessions, 72.83%, 76.49%, and 77.50%, respectively, are also better than those of the other seven models. On the whole, this verifies that the proposed TSRBG model is effective.
  • By comparing the average recognition accuracy of the eight models in three sessions, it can be found that the joint optimization of semi-supervised EEG emotional states estimation and EEG feature transfer alignment in a tight coupling way can obtain better recognition accuracy. By setting GAKT and TSRBG as control groups, we find that the accuracy of TSRBG is significantly better than that of GAKT, and the main difference between them is the semi-supervised EEG emotion state estimation process. GAKT constructs an undirected graph based on the unaligned original data and this graph will not be updated with the data distribution alignment. In the double projection feature alignment subspace, it fails to well describe the sample association between the two domains. As a result, it cannot accurately estimate the EEG emotion state in the target domain, which affects the alignment effect of conditional distribution. However, TSRBG estimates the EEG emotional states of target domain by a bi-model fusing method. One model is used to construct a sample-feature bipartite graph to characterize inter-domain associations for label propagation. The initialized graph is dynamically updated based on the data subspace representations. The other model is the semi-supervised regression, which can effectively build the connection between subspace data representations and the label indicator matrix.
In order to describe the recognition performance advantages of the proposed model in more detail, we use the Friedman test [49] to judge whether the eight models have the same performance in the cross-subject EEG emotion state recognition tasks. The null hypothesis is that “the performance of all models is the same”. We rank the compared models in each group of cross-subject emotion recognition experiments (the higher the recognition accuracy, the higher the rank), and calculate the average rank $r_i$ of each model. Assuming that there are $K$ models and $N$ data sets, we calculate the statistic $\tau_{\chi^2}$ as
$$\tau_{\chi^2} = \frac{12N}{K(K+1)} \left( \sum_{i=1}^{K} r_i^2 - \frac{K(K+1)^2}{4} \right), \tag{37}$$
which follows the $\chi^2$ distribution with $K-1$ degrees of freedom. In our work, there are 8 compared models and 42 groups of cross-subject EEG emotion state recognition tasks; that is, $K = 8$ and $N = 42$.
Then, we can calculate the statistic $\tau_F$ as
$$\tau_F = \frac{(N-1)\,\tau_{\chi^2}}{N(K-1) - \tau_{\chi^2}}, \tag{38}$$
which obeys the F distribution with $K-1$ and $(K-1)(N-1)$ degrees of freedom.
According to the recognition results of the different models in Table 1, Table 2 and Table 3, their average ranks are [3.79, 3.36, 4.81, 4.5, 6.19, 5.14, 6.79, 1.29]. Based on (37) and (38), we obtain $\tau_F = 35.682$. At the significance level $\alpha = 0.05$, the critical value of the Friedman test is 2.0416, which can be obtained through the MATLAB expression icdf('F', $1-\alpha$, $K-1$, $(K-1)(N-1)$) [49]. Since 35.682 is far greater than 2.0416, the null hypothesis that “the performance of all models is the same” is rejected. It is therefore necessary to further distinguish the algorithms through the Nemenyi post-hoc test. The results are shown in Figure 2. The models are sorted by their average rank $r_i$, and a model with a better rank is closer to the top of the figure. The length of the corresponding vertical line is called the critical distance (CD), whose value of 1.620 is calculated by
$$CD = q_\alpha \sqrt{\frac{K(K+1)}{6N}}, \tag{39}$$
where the critical value $q_\alpha$ is 3.031 when $\alpha = 0.05$. We can judge whether there are significant differences between models by whether their vertical lines in Figure 2 overlap. For example, the average rank of TSRBG is 1.29 while that of GAKT is 3.36; the gap between them is 2.07, which is greater than the CD value of 1.620, so there is no overlap between their vertical lines. Therefore, TSRBG is significantly better than GAKT in the cross-subject EEG emotion recognition tasks. A similar analysis can be performed on the other models.
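The statistics in Equations (37)–(39) can be reproduced with a few lines of NumPy/SciPy, as sketched below using the average ranks reported above; the Nemenyi constant $q_\alpha = 3.031$ is taken from the text and passed in as an argument.

```python
import numpy as np
from scipy.stats import f as f_dist

def friedman_nemenyi(avg_ranks, N, q_alpha=3.031):
    """Friedman statistic, its F-form, the F critical value, and the Nemenyi CD."""
    K = len(avg_ranks)
    r = np.asarray(avg_ranks, dtype=float)
    tau_chi2 = 12.0 * N / (K * (K + 1)) * (np.sum(r ** 2) - K * (K + 1) ** 2 / 4.0)
    tau_F = (N - 1) * tau_chi2 / (N * (K - 1) - tau_chi2)
    crit = f_dist.ppf(0.95, K - 1, (K - 1) * (N - 1))   # critical value at alpha = 0.05
    cd = q_alpha * np.sqrt(K * (K + 1) / (6.0 * N))
    return tau_F, crit, cd

ranks = [3.79, 3.36, 4.81, 4.5, 6.19, 5.14, 6.79, 1.29]
print(friedman_nemenyi(ranks, 42))   # approximately (35.68, 2.04, 1.62)
```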
Further, the average recognition results of these models are reorganized into confusion matrices to analyze the recognition performance of each model on each emotional state. The results are shown in Figure 3. We find that TSRBG has a high average recognition accuracy of 82.48% for the neutral state, which is the highest among the four emotional states. The neutral EEG samples were misclassified as sadness, fear, and happiness in 6.90%, 6.56%, and 4.06% of cases, respectively. Compared with the other models, the recognition accuracies of the sadness and neutrality states were significantly improved by TSRBG; for example, the recognition rate of the sad emotional state was improved by at least 16.85% over the other models. Moreover, the recognition accuracy of the fear category was improved slightly, by 3.45%.

3.4. Subspace Analysis and Mining

In this work, the process of EEG feature transfer is to seek dual subspaces, which are expected to reduce distribution differences between the source and the target domain data as much as possible. For each domain, subspace data representation is obtained by projecting the original data with a projection matrix. In order to intuitively reflect the alignment effect of two domain data in the subspace, we use the t-SNE method [50] to visualize two groups of experimental data before and after alignment. As shown in Figure 4, we see that the data distributions of source and target domain in the subspace have been effectively aligned.
The subspace feature dimension is $p$. In order to find a subspace dimension suitable for data distribution alignment, we show how the model recognition accuracy changes with the subspace dimension in Figure 5. It is observed that TSRBG is generally insensitive to the subspace dimension; when the subspace dimension is within the interval [30, 60], TSRBG generally achieves satisfactory recognition accuracies.
From the perspective of transfer learning, the subspace should reserve the common information and exclude the non-common information between subjects; that is, in the learned subspace, the common components between the source and the target domain should be preserved while the subject-dependent components should be excluded. The subject-independent common components are considered as the intrinsic component of emotion that does not change between subjects. The subject-dependent non-common components are considered as the unique external information of different subjects. From the perspective of EEG features, the subject-independent common EEG features should have larger weights and contribute more to cross-subject emotion recognition. By contrast, the subject-dependent non-common EEG features should have smaller weights and contribute less in cross-subject emotion recognition. If we can quantify the importance of different EEG feature dimensions, according to the corresponding relationship between EEG feature dimension and frequency band [51], the common EEG activation patterns in cross-subject emotion recognition can be explored.
We assume that $\theta_{si}$ and $\theta_{ti}$ are the importance measurement factors of the i-th feature dimension of the source and target domains, respectively. Based on the $\ell_{2,1}$-norm feature selection theory [52], $\theta_{si}$ and $\theta_{ti}$ can be obtained by calculating the normalized $\ell_2$-norm of the i-th row vector of the subspace projection matrix of the source and target domains, respectively. That is,
$$\theta_{(s/t)i} = \frac{\left\| \mathbf{p}_{(s/t)}^i \right\|_2}{\sum_{j=1}^{d} \left\| \mathbf{p}_{(s/t)}^j \right\|_2}, \tag{40}$$
where $\mathbf{p}_{(s/t)}^i$ is the i-th row vector of the corresponding subspace projection matrix. Then, we can quantitatively calculate the importance of the a-th frequency band and the l-th channel through
$$\omega(a) = \theta_{(a-1)\times 62 + 1} + \theta_{(a-1)\times 62 + 2} + \cdots + \theta_{a \times 62}, \qquad \psi(l) = \theta_l + \theta_{l+62} + \theta_{l+124} + \theta_{l+186} + \theta_{l+248}, \tag{41}$$
where a = 1, 2, 3, 4, 5 denotes the Delta, Theta, Alpha, Beta, and Gamma frequency bands, respectively, and l = 1, ⋯, 62 denotes the 62 channels, which are FP1, FPZ, ⋯, CB2.
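The band and channel aggregation of Equations (40) and (41) can be sketched as follows; it assumes the 310 features are ordered band by band with 62 channels per band, consistent with Equation (41).

```python
import numpy as np

def feature_importance(P):
    """Normalized l2-norm of each row of a projection matrix, cf. Eq. (40)."""
    theta = np.linalg.norm(P, axis=1)
    return theta / theta.sum()

def band_channel_importance(theta, n_bands=5, n_channels=62):
    """Aggregate the 310 feature weights into band and channel importance, cf. Eq. (41)."""
    theta = theta.reshape(n_bands, n_channels)   # assumed band-major feature ordering
    omega = theta.sum(axis=1)                    # importance of Delta, Theta, Alpha, Beta, Gamma
    psi = theta.sum(axis=0)                      # importance of each of the 62 channels
    return omega, psi
```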
In SEED-IV, the DE features are extracted from five frequency bands and 62 channels. Therefore, the corresponding relationship between the feature importance measurement and different frequency bands (channels) can be established, as shown in Figure 6.
As shown in Figure 7, we quantify the importance of the different EEG frequency bands in cross-subject emotion recognition according to the above analysis. Figure 7a presents the results obtained by analyzing the source projection matrix $\mathbf{P}_s$ in the three sessions, together with their average; Figure 7b displays the corresponding results for the target projection matrix $\mathbf{P}_t$; and Figure 7c presents the averages of the source and target domains in the three sessions and the overall average across all sessions. From a data-driven pattern recognition perspective, we believe that the Gamma frequency band is the most important one for cross-subject EEG emotion recognition.
Furthermore, we calculated the importance of the different EEG channels, as shown in Figure 8. In Figure 8a, the importance of each brain region is shown in the form of a brain topographic map. We observe that the left side of the prefrontal lobe has high weights in all results, and we believe that this brain region is of higher importance in cross-subject EEG emotion recognition. The top 10 important channels of each session and the overall average are quantitatively analyzed in Figure 8b. We believe that FP1, PO6, PO5, O1, P4, and P8 are more important for cross-subject EEG emotion recognition. Considering that the model performs well for the sadness and neutral emotional states, the above brain region and channels might be more closely related to these two emotional states.

4. Conclusions

In this paper, we proposed a new model termed TSRBG for cross-subject emotion recognition from EEG, whose main merits are summarized as follows. (1) Feature domain adaptation and target domain label estimation were effectively realized in a unified framework. Better-aligned source and target data improve the target domain label estimation; in turn, more accurately estimated target domain labels better facilitate the conditional distribution modeling, leading to better domain adaptation performance. (2) The intra- and inter-domain connections were investigated based on the subspace-aligned data, which formulated a bi-model fusion strategy for target domain label estimation and led to significantly better recognition accuracy. (3) The learned subspace of TSRBG provided a quantitative way to explore the key EEG frequency bands and channels in emotional expression. The experimental results on the SEED-IV data set demonstrated that: (1) the joint learning mode in TSRBG effectively improved the cross-subject EEG emotion state recognition performance; (2) the Gamma frequency band and the prefrontal brain region were identified as the more important components in emotion expression.

Author Contributions

Conceptualization, Y.P.; Data curation, W.L.; Investigation, Y.P.; Methodology, W.L. and Y.P.; Software, W.L. and Y.P.; Validation, Y.P.; Writing—original draft preparation, W.L. and Y.P.; Writing—review and editing, W.L. and Y.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Zhejiang Provincial Natural Science Foundation of China (LY21F030005), National Natural Science Foundation of China (61971173, U20B2074), Fundamental Research Funds for the Provincial Universities of Zhejiang (GK209907299001-008), China Postdoctoral Science Foundation (2017M620470), CAAC Key Laboratory of Flight Techniques and Flight Safety (FZ2021KF16), and Guangxi Key Laboratory of Optoelectronic Information Processing, Guilin University of Electronic Technology (GD21202).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board (or Ethics Committee) of Shanghai Jiao Tong University (protocol code 2017060).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Acknowledgments

The authors also would like to thank the anonymous reviewers for their comments on this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Beldoch, M. Sensitivity to expression of emotional meaning in three modes of communication. In The Communication of Emotional Meaning; McGraw-Hill: New York, NY, USA, 1964; pp. 31–42. [Google Scholar]
  2. Salovey, P.; Mayer, J.D. Emotional intelligence. Imagin. Cogn. Personal. 1990, 9, 185–211. [Google Scholar] [CrossRef]
  3. Chen, L.; Wu, M.; Pedrycz, W.; Hirota, K. Emotion Recognition and Understanding for Emotional Human-Robot Interaction Systems; Springer: Cham, Switzerland, 2020; pp. 1–247. [Google Scholar]
  4. Papero, D.; Frost, R.; Havstad, L.; Noone, R. Natural systems thinking and the human family. Systems 2018, 6, 19. [Google Scholar] [CrossRef] [Green Version]
  5. Li, W.; Huan, W.; Hou, B.; Tian, Y.; Zhang, Z.; Song, A. Can emotion be transferred?—A review on transfer learning for EEG-Based Emotion Recognition. IEEE Trans. Cogn. Dev. Syst. 2021. [Google Scholar] [CrossRef]
  6. Nie, Z.; Wang, X.; Duan, R.; Lu, B. A survey of emotion recognition based on EEG. Chin. J. Biomed. Eng. 2012, 31, 12. [Google Scholar]
  7. Ko, B.C. A brief review of facial emotion recognition based on visual information. Sensors 2018, 18, 401. [Google Scholar] [CrossRef] [PubMed]
  8. Akçay, M.B.; Oğuz, K. Speech emotion recognition: Emotional models, databases, features, preprocessing methods, supporting modalities, and classifiers. Speech Commun. 2020, 116, 56–76. [Google Scholar] [CrossRef]
  9. Alswaidan, N.; Menai, M.E.B. A survey of state-of-the-art approaches for emotion recognition in text. Knowl. Inf. Syst. 2020, 62, 2937–2987. [Google Scholar] [CrossRef]
  10. Khare, S.K.; Bajaj, V.; Sinha, G.R. Adaptive tunable Q wavelet transform-based emotion identification. IEEE Trans. Instrum. Meas. 2020, 69, 9609–9617. [Google Scholar] [CrossRef]
  11. Becker, H.; Fleureau, J.; Guillotel, P.; Wendling, F.; Merlet, I.; Albera, L. Emotion recognition based on high-resolution EEG recordings and reconstructed brain sources. IEEE Trans. Affect. Comput. 2020, 11, 244–257. [Google Scholar] [CrossRef]
  12. Wang, H.; Pei, Z.; Xu, L.; Xu, T.; Bezerianos, A.; Sun, Y.; Li, J. Performance enhancement of P300 detection by multiscale-CNN. IEEE Trans. Instrum. Meas. 2021, 70, 1–12. [Google Scholar] [CrossRef]
  13. Hondrou, C.; Caridakis, G. Affective, natural interaction using EEG: Sensors, application and future directions. In Lecture Notes in Computer Science, Proceedings of the Artificial Intelligence: Theories and Applications—7th Hellenic Conference on AI (SETN 2012), Lamia, Greece, 28–31 May 2012; Maglogiannis, I., Plagianakos, V.P., Vlahavas, I.P., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7297, pp. 331–338. [Google Scholar] [CrossRef] [Green Version]
  14. Marei, A.; Yoon, S.A.; Yoo, J.U.; Richman, T.; Noushad, N.; Miller, K.; Shim, J. Designing feedback systems: Examining a feedback approach to facilitation in an online asynchronous professional development course for high school science teachers. Systems 2021, 9, 10. [Google Scholar] [CrossRef]
  15. Mammone, N.; De Salvo, S.; Bonanno, L.; Ieracitano, C.; Marino, S.; Marra, A.; Bramanti, A.; Morabito, F.C. Brain network analysis of compressive sensed high-density EEG signals in AD and MCI subjects. IEEE Trans. Ind. Inform. 2018, 15, 527–536. [Google Scholar] [CrossRef]
  16. Murugappan, M.; Rizon, M.; Nagarajan, R.; Yaacob, S.; Hazry, D.; Zunaidi, I. Time-frequency analysis of EEG signals for human emotion detection. In Proceedings of the 4th Kuala Lumpur International Conference on Biomedical Engineering 2008, Kuala Lumpur, Malaysia, 25–28 June 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 262–265. [Google Scholar]
  17. Thejaswini, S.; Ravi Kumar, K.M.; Aditya Nataraj, J.L. Analysis of EEG based emotion detection of DEAP and SEED-IV databases using SVM. SSRN Electron. J. 2019, 8, 576–581. [Google Scholar]
  18. Li, X.; Song, D.; Zhang, P.; Zhang, Y.; Hou, Y.; Hu, B. Exploring EEG features in cross-subject emotion recognition. Front. Neurosci. 2018, 12, 162. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Ali Olamat, P.O.; Atasever, S. Deep learning methods for multi-channel EEG-based emotion recognition. Int. J. Neural Syst. 2022, 32, 2250021. [Google Scholar] [CrossRef]
  20. Lew, W.C.L.; Wang, D.; Shylouskaya, K.; Zhang, Z.; Lim, J.H.; Ang, K.K.; Tan, A.H. EEG-based emotion recognition using spatial-temporal representation via Bi-GRU. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 116–119. [Google Scholar]
  21. Gong, S.; Xing, K.; Cichocki, A.; Li, J. Deep learning in EEG: Advance of the last ten-year critical period. IEEE Trans. Cogn. Dev. Syst. 2022, 14, 348–365. [Google Scholar] [CrossRef]
  22. Li, J.; Qiu, S.; Du, C.; Wang, Y.; He, H. Domain adaptation for EEG emotion recognition based on latent representation similarity. IEEE Trans. Cogn. Dev. Syst. 2020, 12, 344–353. [Google Scholar] [CrossRef]
  23. Gong, B.; Shi, Y.; Sha, F.; Grauman, K. Geodesic flow kernel for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 2066–2073. [Google Scholar]
  24. Yan, K.; Kou, L.; Zhang, D. Learning domain-invariant subspace using domain features and independence maximization. IEEE Trans. Cybern. 2018, 48, 288–299. [Google Scholar] [CrossRef] [Green Version]
  25. Li, Y.; Fu, B.; Li, F.; Shi, G.; Zheng, W. A novel transferability attention neural network model for EEG emotion recognition. Neurocomputing 2021, 447, 92–101. [Google Scholar] [CrossRef]
  26. Zheng, W.L.; Lu, B.L. Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks. IEEE Trans. Auton. Ment. Dev. 2015, 7, 162–175. [Google Scholar] [CrossRef]
  27. Zheng, W.L.; Zhu, J.Y.; Lu, B.L. Identifying stable patterns over time for emotion recognition from EEG. IEEE Trans. Affect. Comput. 2017, 10, 417–429. [Google Scholar] [CrossRef] [Green Version]
  28. Quan, X.; Zeng, Z.; Jiang, J.; Zhang, Y.; Lu, B.; Wu, D. Physiological signals based affective computing: A systematic review. Acta Autom. Sin. 2021, 47, 1769–1784. (In Chinese) [Google Scholar]
  29. Suhaimi, N.S.; Mountstephens, J.; Teo, J. EEG-based emotion recognition: A state-of-the-art review of current trends and opportunities. Comput. Intell. Neurosci. 2020, 2020, 1–19. [Google Scholar] [CrossRef] [PubMed]
  30. Lu, B.; Zhang, Y.; Zheng, W. A survey of affective brain-computer interface. Chin. J. Intell. Sci. Technol. 2021, 3, 36–48. (In Chinese) [Google Scholar]
  31. Niu, S.; Liu, Y.; Wang, J.; Song, H. A decade survey of transfer learning (2010–2020). IEEE Trans. Artif. Intell. 2020, 1, 151–166. [Google Scholar] [CrossRef]
  32. Zhuang, F.; Qi, Z.; Duan, K.; Xi, D.; Zhu, Y.; Zhu, H.; Xiong, H.; He, Q. A comprehensive survey on transfer learning. Proc. IEEE 2020, 109, 43–76. [Google Scholar] [CrossRef]
  33. Zheng, W.L.; Lu, B.L. Personalizing EEG-based affective models with transfer learning. In Proceedings of the 25th International Joint Conference on Artificial Intelligence, New York, NY, USA, 9–15 July 2016; pp. 2732–2738. [Google Scholar]
  34. Zhou, R.; Zhang, Z.; Yang, X.; Fu, H.; Zhang, L.; Li, L.; Huang, G.; Dong, Y.; Li, F.; Liang, Z. A novel transfer learning framework with prototypical representation based pairwise learning for cross-subject cross-session EEG-based emotion recognition. arXiv 2022, arXiv:2202.06509. [Google Scholar]
  35. Bahador, N.; Kortelainen, J. Deep learning-based classification of multichannel bio-signals using directedness transfer learning. Biomed. Signal Process. Control 2022, 72, 103300. [Google Scholar] [CrossRef]
  36. Jayaram, V.; Alamgir, M.; Altun, Y.; Scholkopf, B.; Grosse-Wentrup, M. Transfer learning in brain-computer interfaces. IEEE Comput. Intell. Mag. 2016, 11, 20–31. [Google Scholar] [CrossRef] [Green Version]
  37. Ding, Z.; Li, S.; Shao, M.; Fu, Y. Graph adaptive knowledge transfer for unsupervised domain adaptation. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 37–52. [Google Scholar]
  38. Lan, Z.; Sourina, O.; Wang, L.; Scherer, R.; Müller-Putz, G.R. Domain adaptation techniques for EEG-based emotion recognition: A comparative study on two public datasets. IEEE Trans. Cogn. Dev. Syst. 2019, 11, 85–94. [Google Scholar] [CrossRef]
  39. Cui, J.; Jin, X.; Hu, H.; Zhu, L.; Ozawa, K.; Pan, G.; Kong, W. Dynamic Distribution Alignment with Dual-Subspace Mapping For Cross-Subject Driver Mental State Detection. IEEE Trans. Cogn. Dev. Syst. 2021. [Google Scholar] [CrossRef]
  40. Gretton, A.; Sriperumbudur, B.; Sejdinovic, D.; Strathmann, H.; Balakrishnan, S.; Pontil, M.; Fukumizu, K. Optimal kernel choice for large-scale two-sample tests. In Curran Associates, Incorporated, Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS 2012); Pereira, F., Burges, C., Bottou, L., Weinberger, K., Eds.; Curran Associates, Incorporated: Lake Tahoe, NV, USA, 2012; Volume 25, pp. 1205–1213. [Google Scholar]
  41. Abdi, H.; Williams, L.J. Principal component analysis. Wiley Interdiscip. Rev. Comput. Stat. 2010, 2, 433–459. [Google Scholar] [CrossRef]
  42. Bartels, R.H.; Stewart, G.W. Solution of the matrix equation AX + XB = C [F4]. Commun. ACM 1972, 15, 820–826. [Google Scholar] [CrossRef]
  43. Peng, Y.; Zhu, X.; Nie, F.; Kong, W.; Ge, Y. Fuzzy graph clustering. Inf. Sci. 2021, 571, 38–49. [Google Scholar] [CrossRef]
  44. Zheng, W.L.; Liu, W.; Lu, Y.; Lu, B.L.; Cichocki, A. Emotionmeter: A multimodal framework for recognizing human emotions. IEEE Trans. Cybern. 2018, 49, 1110–1122. [Google Scholar] [CrossRef]
  45. Long, M.; Wang, J.; Ding, G.; Sun, J.; Yu, P.S. Transfer feature learning with joint distribution adaptation. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, NSW, Australia, 1–8 December 2013; pp. 2200–2207. [Google Scholar]
  46. Song, P.; Zheng, W. Feature selection based transfer subspace learning for speech emotion recognition. IEEE Trans. Affect. Comput. 2018, 11, 373–382. [Google Scholar] [CrossRef]
  47. Nie, F.; Wang, X.; Deng, C.; Huang, H. Learning a structured optimal bipartite graph for co-clustering. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 4132–4141. [Google Scholar]
  48. Song, T.; Zheng, W.; Song, P.; Cui, Z. EEG Emotion Recognition Using Dynamical Graph Convolutional Neural Networks. IEEE Trans. Affect. Comput. 2020, 11, 532–541. [Google Scholar] [CrossRef] [Green Version]
  49. Zhou, Z. Machine Learning Beijing; Tsinghua University Press: Beijing, China, 2016; pp. 42–44. [Google Scholar]
  50. Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
  51. Peng, Y.; Qin, F.; Kong, W.; Ge, Y.; Nie, F.; Cichocki, A. GFIL: A unified framework for the importance analysis of features, frequency bands and channels in EEG-based emotion recognition. IEEE Trans. Cogn. Dev. Syst. 2021. [Google Scholar] [CrossRef]
  52. Nie, F.; Huang, H.; Cai, X.; Ding, C. Efficient and robust feature selection via joint 2,1-norms minimization. In Proceedings of the 23rd International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 6–9 December 2010; Volume 2, pp. 1813–1821. [Google Scholar]
Figure 1. The overall framework of TSRBG.
Figure 2. Nemenyi test on the emotion recognition results of the compared models in our experiments. The critical distance value is 1.620.
Figure 3. The recognition results organized by confusion matrices.
Figure 4. Source and target data distributions. (a) Original space of session 2, subject 8; (b) subspace of session 2, subject 8; (c) original space of session 3, subject 12; (d) subspace of session 3, subject 12.
Figure 5. Recognition performance of TSRBG in terms of different subspace dimensions.
Figure 6. The framework of emotion activation mode analysis.
Figure 7. Quantitative importance measures of EEG frequency bands in emotion expression. (a) Source domain; (b) Target domain; (c) Average.
Figure 8. Critical brain regions correlated to emotion expression and the top 10 EEG channels. (a) Critical brain regions; (b) Top 10 EEG channels (%).
Table 1. Cross-subject emotion recognition results in session 1 (%).

Subject | JDA   | GAKT  | MIDA  | FSTSL | SOBG  | DGCNN | LRS   | TSRBG
sub2    | 57.81 | 73.09 | 67.69 | 66.51 | 35.02 | 54.64 | 49.12 | 73.68
sub3    | 64.75 | 62.63 | 58.40 | 59.93 | 63.69 | 57.46 | 39.25 | 67.33
sub4    | 68.27 | 58.99 | 44.77 | 60.52 | 50.53 | 58.99 | 42.89 | 71.33
sub5    | 48.53 | 39.72 | 46.53 | 56.99 | 48.53 | 49.12 | 32.67 | 73.44
sub6    | 51.59 | 53.11 | 47.83 | 46.53 | 49.24 | 40.42 | 21.39 | 67.57
sub7    | 70.15 | 58.87 | 54.99 | 54.76 | 44.54 | 48.18 | 42.66 | 75.32
sub8    | 65.45 | 62.51 | 66.39 | 42.30 | 43.95 | 51.12 | 47.59 | 80.96
sub9    | 64.86 | 63.69 | 53.35 | 61.69 | 45.95 | 62.98 | 43.24 | 74.74
sub10   | 65.69 | 51.12 | 63.81 | 55.11 | 47.47 | 42.66 | 46.77 | 78.73
sub11   | 51.94 | 62.16 | 59.34 | 47.83 | 47.24 | 51.00 | 42.42 | 73.80
sub12   | 54.29 | 59.34 | 59.11 | 48.06 | 50.18 | 55.93 | 63.34 | 71.21
sub13   | 62.98 | 64.28 | 50.65 | 54.05 | 52.64 | 52.29 | 33.49 | 68.51
sub14   | 55.58 | 65.45 | 43.95 | 49.82 | 49.59 | 53.23 | 40.89 | 68.86
sub15   | 69.10 | 52.41 | 46.65 | 57.58 | 33.73 | 53.82 | 33.73 | 74.15
Avg.    | 60.79 | 59.10 | 54.53 | 54.41 | 47.31 | 52.27 | 41.39 | 72.83
Table 2. Cross-subject emotion recognition results in session 2 (%).

Subject | JDA   | GAKT  | MIDA  | FSTSL | SOBG  | DGCNN | LRS   | TSRBG
sub2    | 90.75 | 68.03 | 66.83 | 74.88 | 50.12 | 65.87 | 78.13 | 78.49
sub3    | 69.59 | 61.54 | 69.23 | 68.99 | 78.73 | 68.99 | 80.41 | 81.25
sub4    | 60.49 | 79.57 | 63.82 | 51.56 | 55.05 | 59.38 | 31.85 | 74.52
sub5    | 58.89 | 63.22 | 71.03 | 67.55 | 48.32 | 56.13 | 55.05 | 74.04
sub6    | 61.78 | 56.49 | 41.47 | 54.09 | 36.66 | 52.28 | 36.18 | 75.84
sub7    | 64.54 | 68.87 | 69.59 | 77.28 | 42.91 | 64.54 | 52.04 | 78.13
sub8    | 78.49 | 68.63 | 66.35 | 54.81 | 68.39 | 49.76 | 50.12 | 77.16
sub9    | 59.13 | 54.33 | 60.46 | 41.83 | 61.42 | 54.81 | 37.02 | 76.92
sub10   | 41.11 | 82.33 | 62.14 | 50.00 | 67.19 | 60.34 | 59.38 | 76.56
sub11   | 63.58 | 72.00 | 51.58 | 60.82 | 32.81 | 53.00 | 42.91 | 74.28
sub12   | 56.49 | 44.59 | 41.11 | 68.87 | 49.88 | 47.72 | 27.76 | 69.23
sub13   | 62.98 | 64.90 | 53.37 | 60.34 | 32.81 | 49.16 | 58.41 | 71.75
sub14   | 46.51 | 50.48 | 49.04 | 44.71 | 48.32 | 61.66 | 52.28 | 74.16
sub15   | 77.76 | 88.82 | 55.53 | 84.01 | 61.18 | 60.46 | 57.57 | 88.58
Avg.    | 63.72 | 65.99 | 58.68 | 61.41 | 51.27 | 58.77 | 51.34 | 76.49
Table 3. Cross-subject emotion recognition results in session 3 (%).

Subject | JDA   | GAKT  | MIDA  | FSTSL | SOBG  | DGCNN | LRS   | TSRBG
sub2    | 54.62 | 60.10 | 87.96 | 88.93 | 45.99 | 64.60 | 55.96 | 79.56
sub3    | 64.11 | 65.57 | 76.76 | 70.07 | 42.09 | 49.51 | 49.27 | 72.26
sub4    | 57.66 | 69.34 | 43.92 | 63.26 | 57.06 | 56.08 | 43.19 | 81.14
sub5    | 63.75 | 67.64 | 74.33 | 61.19 | 39.54 | 46.35 | 39.05 | 79.68
sub6    | 57.66 | 62.65 | 57.42 | 54.99 | 40.88 | 72.14 | 41.85 | 84.91
sub7    | 66.99 | 79.93 | 47.49 | 72.63 | 55.60 | 59.49 | 18.86 | 77.86
sub8    | 62.41 | 59.85 | 76.64 | 64.72 | 47.93 | 69.10 | 58.76 | 73.24
sub9    | 75.18 | 50.24 | 50.97 | 47.20 | 51.82 | 50.61 | 37.35 | 73.11
sub10   | 51.09 | 69.34 | 41.73 | 58.64 | 32.60 | 50.24 | 45.13 | 76.64
sub11   | 57.06 | 81.75 | 54.14 | 56.08 | 40.63 | 61.92 | 58.76 | 75.79
sub12   | 45.50 | 57.54 | 56.69 | 61.31 | 53.77 | 59.37 | 54.62 | 70.32
sub13   | 55.72 | 61.44 | 46.59 | 46.72 | 42.34 | 50.00 | 42.70 | 76.28
sub14   | 56.45 | 77.25 | 57.06 | 77.62 | 50.85 | 53.41 | 27.01 | 79.56
sub15   | 70.32 | 85.28 | 52.19 | 62.04 | 56.45 | 54.01 | 23.60 | 84.67
Avg.    | 59.89 | 67.71 | 58.85 | 63.24 | 46.97 | 56.92 | 42.58 | 77.50
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

