Article

Granger-Causality-Based Multi-Frequency Band EEG Graph Feature Extraction and Fusion for Emotion Recognition

College of Information and Computer, Taiyuan University of Technology, Taiyuan 030024, China
* Author to whom correspondence should be addressed.
Brain Sci. 2022, 12(12), 1649; https://doi.org/10.3390/brainsci12121649
Submission received: 14 October 2022 / Revised: 8 November 2022 / Accepted: 25 November 2022 / Published: 1 December 2022
(This article belongs to the Section Neurotechnology and Neuroimaging)

Abstract

Graph convolutional neural networks (GCNs) have attracted much attention in the task of electroencephalogram (EEG) emotion recognition. However, most current GCN methods do not take full advantage of the causal connectivity between the EEG signals in different frequency bands when constructing the adjacency matrix. Based on the causal connectivity between the EEG channels obtained by Granger causality (GC) analysis, this paper proposes a multi-frequency band EEG graph feature extraction and fusion method for EEG emotion recognition. First, the original GC matrices between the EEG signals at each frequency band are calculated via GC analysis, and they are then adaptively converted to asymmetric binary GC matrices through an optimal threshold. Next, a novel GC-based GCN feature (GC-GCN) is constructed by using the differential entropy features and the binary GC matrices as the node values and adjacency matrices, respectively. Finally, on the basis of the GC-GCN features, a new multi-frequency band feature fusion method (GC-F-GCN) is proposed, which integrates the graph information of the EEG signals at different frequency bands for the same node. The experimental results demonstrate that the proposed GC-F-GCN method achieves better recognition performance than state-of-the-art GCN methods, with average accuracies of 97.91%, 98.46%, and 98.15% for the arousal, valence, and arousal–valence classifications, respectively.

1. Introduction

Electroencephalogram (EEG) emotion recognition can help promote the interaction between humans and intelligent devices, holding an essential position in the field of human–computer interaction [1,2,3]. The emotion recognition system is usually composed of an emotion dataset, feature extraction and fusion, and an emotion recognition model. Among them, an effective emotion feature extraction and fusion method is one of the keys to improving the performance of emotion recognition. The spatial distribution of EEG electrode channels on the cerebral cortex is discrete and irregular, and also, there is connectivity among the EEG signals from different locations [4]. Therefore, this paper focuses on how to effectively utilize the information of the structure of the human brain and the causal relationships of EEG signals to carry out feature extraction and fusion.
Feature extraction plays an important role in EEG emotion recognition systems. From the perspective of traditional signal processing, the temporal, frequency, and time–frequency features of the EEG signals are usually extracted [5,6]. Since it is difficult for such features to capture the deep emotional information contained in EEG signals, further improving the performance of the emotion recognition task remains a challenge. With the development of deep learning, convolutional neural networks (CNNs) have been widely applied in the field of emotion recognition; they extract local features of the data through convolution and pooling operations and learn deeper features with multiple convolutional layers [7,8,9]. In [7], a continuous convolutional neural network with four convolutional layers was designed to extract the deep features of EEG signals, which achieved 90.24% and 89.45% recognition accuracies for the arousal and valence classifications, respectively. In [8], CNN and long short-term memory modules were used to learn the spatial–frequency and temporal features of EEG signals, and average binary classification accuracies of 94.11% and 94.22% were achieved for arousal and valence, respectively. However, the limitation of CNNs is that the convolution operations are only effective for gridded data, such as pictures and videos. For example, References [7,8,9] mapped the EEG features into regular spaces before using the CNN models. In fact, the irregular spatial distribution of EEG signals means that they are non-gridded data. Therefore, CNNs cannot effectively deal with their connectivity, and they also have difficulty revealing the interaction among EEG channels. Fortunately, graph convolutional neural networks (GCNs) [10,11,12] combine the advantages of graph theory and CNNs and are highly effective at extracting features from EEG signals with an irregular distribution.
GCN graph features are usually composed of nodes and adjacency matrices. For EEG signals, each EEG electrode channel is defined as a node, and the connections between any two nodes are defined by the adjacency matrix used to construct the EEG graph features. In general, the nodes are fixed because the number of EEG electrodes is fixed, whereas the adjacency matrix is not unique because different connectivity measures can be used. The adjacency matrices of current GCN methods are mainly divided into two categories: one uses the spatial position relationship between the EEG channels [13,14,15], such as the typical Gaussian kernel function [13]; the other uses the connectivity of the brain [16,17,18], which can be further divided into functional connectivity and effective connectivity [19,20,21,22,23]. Functional connectivity is defined as the statistical interdependence among the EEG signals; most current research adopts functional connectivity matrices, such as the phase-locking value (PLV), Pearson correlation coefficient, and mutual information, as the adjacency matrices [16,17,19]. However, functional-connectivity-based adjacency matrices are symmetrical and ignore the directionality of information transmission in the brain. Different from functional connectivity, effective connectivity can further reveal the transmission of causal information in the brain system owing to the added direction information. Granger causality (GC) is such an effective connectivity measure [24,25,26]. In [26], the GC matrix was used as the adjacency matrix to construct the GCN feature, which can effectively improve the performance of the EEG recognition task. However, current GCN features usually take the temporal or frequency features of each frequency band of the EEG signal as the nodes, while the functional or effective connectivity matrices of the full frequency band of the EEG signals are taken as the adjacency matrices. As a result, the adjacency matrices do not completely match the nodes, and the constructed graph features cannot accurately reflect the interaction between EEG signals in each frequency band. Therefore, this paper aims to construct a new GC-GCN feature by extracting the differential entropy (DE) features and GC matrix of each frequency band of the EEG signal, making full use of the causal relationships between the EEG signals to improve the performance of the emotion recognition task.
Research in cognitive neuroscience has shown that EEG signals in different frequency bands reflect different brain activities [27,28,29]. Generally, the EEG signal is decomposed into five bands: δ (0.5–4 Hz), θ (4–8 Hz), α (8–12 Hz), β (12–30 Hz), and γ (30–45 Hz). Existing feature fusion schemes commonly concatenate or apply weighted superpositions of the EEG features of different frequency bands to improve the performance of the emotion recognition task. Methods such as the phase-locking-value-based graph convolutional neural network (PGCNN) [19], the dynamical graph convolutional neural network (DGCNN) [14], and the causal graph convolutional neural network (Causal-GCN) [26] use a direct cascade of the different frequency band features as the nodes, which achieves a significant improvement over single-frequency-band features. In [30], a frequency attention mechanism was applied to adaptively allocate weights to different frequency band features, yielding a 0.67% average improvement over the feature summation fusion scheme. However, the above feature fusion methods operate at the frequency band level, i.e., the features of all EEG channels in the same frequency band are assigned the same importance, ignoring the fact that different channels of the EEG signals have different abilities to discriminate emotions. Based on this, this paper proposes a new scheme that fuses the different frequency band features for the same node in the GC-GCN graph features, which is an effective way to further improve the performance of emotion recognition.
As mentioned above, this paper mainly studies a GC-based multi-frequency band EEG graph feature extraction and fusion method. First, the original GC matrices between the EEG signals at each frequency band are calculated through the GC method, and asymmetric binary GC matrices are obtained with an optimal threshold that adaptively selects the larger causal values as connections. After this, the DE features and asymmetric binary GC matrices are used as the node values and adjacency matrices, respectively, to construct the GC-GCN graph features. Then, a new multi-frequency band feature fusion method (GC-F-GCN) is proposed that integrates, with learned weights, the GC-GCN features of the EEG signals at each frequency band for the same node, which can effectively exploit the frequency information that most significantly reflects changes in emotional state. Finally, the recognition results of the GC-F-GCN fusion features are obtained using the Softmax classifier. Experimental results on the database for emotion analysis using physiological signals (DEAP) show that the proposed GC-F-GCN method achieves better recognition performance.
The remaining parts are organized as follows. Section 2 reviews GC theory and GCN networks. Section 3 describes the proposed GC-F-GCN feature fusion method, including the construction of the GC-GCN graph feature and the multi-frequency band feature fusion scheme. Section 4 presents the experimental results, Section 5 provides further discussion, and Section 6 concludes the paper.

2. Related Works

2.1. Overview of the GC Method

Granger causality is used to determine whether causality exists between time series, that is, whether one time series can be used to predict another. GC-based EEG brain networks have recently been widely used in the field of emotion recognition. Based on the definition of the GC method in [24], the vector autoregressive (VAR) model is applied to predict the current value from past values of the time series. Let X(t) and Y(t) be any two EEG series, and let X(t−i) and Y(t−i) represent their i-th time-lagged values, respectively. Then, the univariate and bivariate VAR models between X and Y can be expressed as:
$$X(t) = \sum_{i=1}^{L} a_{1i} X(t-i) + \varepsilon_X(t), \qquad Y(t) = \sum_{i=1}^{L} b_{1i} Y(t-i) + \varepsilon_Y(t) \quad (1)$$
$$\begin{bmatrix} X(t) \\ Y(t) \end{bmatrix} = \sum_{i=1}^{L} \begin{bmatrix} a_{2i} & b_{2i} \\ c_{2i} & d_{2i} \end{bmatrix} \begin{bmatrix} X(t-i) \\ Y(t-i) \end{bmatrix} + \begin{bmatrix} \eta_{YX}(t) \\ \eta_{XY}(t) \end{bmatrix} \quad (2)$$
where a_{1i}, b_{1i}, a_{2i}, b_{2i}, c_{2i}, and d_{2i} (i = 1, 2, …, L) are the constant coefficients, and L is the lag order of the model. ε_X(t) and ε_Y(t) represent the prediction errors of the univariate VAR models of X and Y, respectively. η_{YX}(t) and η_{XY}(t) represent the prediction errors of the bivariate VAR model for X and Y, respectively.
By comparing the variances of the errors of the univariate and bivariate VAR models, we can determine whether there is a GC relationship between X and Y. Therefore, the GC value is defined as the logarithm of the ratio of the two error variances:
$$F_{X \to Y} = \ln\!\left(\frac{\sigma_{\varepsilon_Y}}{\sigma_{\eta_{XY}}}\right) \quad (3)$$
$$F_{Y \to X} = \ln\!\left(\frac{\sigma_{\varepsilon_X}}{\sigma_{\eta_{YX}}}\right) \quad (4)$$
where σ_{ε_X}, σ_{ε_Y}, σ_{η_{XY}}, and σ_{η_{YX}} represent the variances of the corresponding errors.
As described in Equations (3) and (4), when F_{X→Y} > 0, X is said to be the "Granger cause" of Y; likewise, when F_{Y→X} > 0, Y is the "Granger cause" of X.
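To make the computation above concrete, the following is a minimal numerical sketch (our illustration, not the authors' implementation) that estimates F_{X→Y} and F_{Y→X} for two EEG channels by fitting the univariate and bivariate VAR models with ordinary least squares and comparing the residual variances; the function name, the default lag order, and the use of least squares are assumptions made for illustration.

```python
import numpy as np

def granger_causality(x, y, lag=5):
    """Pairwise Granger causality following Equations (1)-(4).

    x, y : 1-D arrays of equal length (two EEG channels).
    lag  : model order L (illustrative default).
    Returns (F_{X->Y}, F_{Y->X}).
    """
    n = len(x)
    # Lagged design matrix: row t holds [s(t-1), ..., s(t-L)] for t = L..n-1.
    past = lambda s: np.column_stack([s[lag - i:n - i] for i in range(1, lag + 1)])
    Xp, Yp = past(x), past(y)
    xt, yt = x[lag:], y[lag:]

    def resid_var(target, design):
        coef, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.var(target - design @ coef)

    var_eps_x = resid_var(xt, Xp)             # univariate model of X, Eq. (1)
    var_eps_y = resid_var(yt, Yp)             # univariate model of Y, Eq. (1)
    both = np.hstack([Xp, Yp])                # bivariate model, Eq. (2)
    var_eta_x = resid_var(xt, both)
    var_eta_y = resid_var(yt, both)

    f_x_to_y = np.log(var_eps_y / var_eta_y)  # Eq. (3)
    f_y_to_x = np.log(var_eps_x / var_eta_x)  # Eq. (4)
    return f_x_to_y, f_y_to_x
```

Applying such a function to every ordered pair of the 32 channels of one EEG segment yields a 32 × 32 GC matrix of the kind used later in Section 3.2.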

2.2. Overviews of Graph Convolutional Neural Networks (GCN)

Inspired by the convolution operations of CNNs for regular images, researchers have proposed graph convolution operations for graph-structured data in recent years. Existing GCN schemes are categorized into spectral graph convolution and spatial graph convolution according to the type of graph filtering [31,32]. Spectral graph convolution first defines the graph Fourier transform and then converts the graph data from the spatial domain to the frequency domain according to spectral graph theory and convolution theory, giving it a solid theoretical foundation. Spatial graph convolution is defined as the process of directly and iteratively aggregating a subset of neighboring nodes, which is more suitable for large-scale graph data. In this paper, the more common spectral-based GCN is applied.
Following the traditional Fourier transform and convolution operations, GCNs define the graph convolution through the Laplacian matrix L of the graph. Consider a connected graph with n nodes, G = {V, E, A}, where V, E, and A denote the set of nodes, the set of edges, and the adjacency matrix, respectively, and A_{ij} (i, j = 1, 2, …, n) denotes the connection weight between node i and node j. The Laplacian matrix L is defined as
$$L = D - A \quad (5)$$
where D is the degree matrix of A. The spectral decomposition of the Laplacian matrix L is:
$$L = U \Lambda U^{T} \quad (6)$$
where Λ = diag([λ_1, λ_2, …, λ_n]) is the diagonal matrix of eigenvalues, and U = [u_1, u_2, …, u_n] is the orthogonal matrix consisting of the eigenvectors of L.
Let g_θ denote the graph filter and X̃ denote the output of the graph convolution operation. The graph convolution of X with g_θ can be calculated as follows:
$$\tilde{X} = X * g_{\theta}(A) = U g_{\theta}(\Lambda) U^{T} X \quad (7)$$
Due to the high computational complexity of g_θ(Λ), Chebyshev polynomials are usually used in GCNs to approximate g_θ(Λ). In this case, g_θ(Λ) can be expressed as:
$$g_{\theta}(\Lambda) = \sum_{k=0}^{K} \theta_{k} T_{k}(\tilde{\Lambda}) \quad (8)$$
where θ_k are the coefficients of the Chebyshev polynomials, Λ̃ = 2Λ/λ_max − I_n is the rescaled diagonal eigenvalue matrix, and T_k is the Chebyshev polynomial of order k, with K being the maximum order. The filtering process can be rewritten as:
$$y = U \sum_{k=0}^{K} \theta_{k} T_{k}(\tilde{\Lambda}) U^{T} x = \sum_{k=0}^{K} \theta_{k} T_{k}(\tilde{L}) x \quad (9)$$
where L̃ = 2L/λ_max − I_n.
In general, a dimensional transformation matrix W is applied to the filtered signal. Therefore, a complete graph convolution process can be expressed as:
$$G(X, A) = (X * g_{\theta}(A)) W = \sum_{k=0}^{K} \theta_{k} T_{k}(\tilde{L}) X W \quad (10)$$
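As an illustration of Equations (5)–(10), the following PyTorch sketch implements a K-order Chebyshev graph convolution layer; the class name, the weight initialization, and the common λ_max ≈ 2 approximation are our assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn

class ChebGraphConv(nn.Module):
    """Minimal sketch of the Chebyshev spectral graph convolution of Eq. (10):
    G(X, A) = sum_k theta_k T_k(L_tilde) X W, with theta_k folded into per-order weights."""

    def __init__(self, in_feats, out_feats, K=2):
        super().__init__()
        self.K = K
        self.weights = nn.ParameterList(
            [nn.Parameter(torch.randn(in_feats, out_feats) * 0.01) for _ in range(K + 1)])

    def forward(self, x, adj):
        # x: (batch, n_nodes, in_feats); adj: (n_nodes, n_nodes) adjacency matrix (e.g., A_k).
        n = adj.size(0)
        deg = torch.diag(adj.sum(dim=1))
        lap = deg - adj                                   # L = D - A, Eq. (5)
        lam_max = 2.0                                     # common approximation of lambda_max
        lap_t = 2.0 * lap / lam_max - torch.eye(n)        # rescaled Laplacian L_tilde

        t_prev = torch.eye(n)                             # T_0(L_tilde) = I
        t_curr = lap_t                                    # T_1(L_tilde) = L_tilde
        out = (t_prev @ x) @ self.weights[0]
        if self.K >= 1:
            out = out + (t_curr @ x) @ self.weights[1]
        for k in range(2, self.K + 1):
            t_next = 2.0 * lap_t @ t_curr - t_prev        # Chebyshev recurrence
            out = out + (t_next @ x) @ self.weights[k]
            t_prev, t_curr = t_curr, t_next
        return out
```

The Chebyshev recurrence avoids the explicit eigendecomposition of Equation (7), which is the main computational motivation for this form of spectral GCN.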

3. The Proposed GC-F-GCN Scheme for EEG Emotion Recognition

Figure 1 shows the framework of the proposed GC-F-GCN graph feature fusion method. The preprocessing module is first applied to the original EEG signals in the DEAP emotion dataset and includes downsampling, 4–45 Hz filtering, and re-referencing. The θ (4–8 Hz), α (8–12 Hz), β (12–30 Hz), and γ (30–45 Hz) frequency bands of the EEG signals are then obtained through the short-time Fourier transform (STFT). The GC-GCN graph feature module extracts the DE features and GC matrices of the EEG signals at each frequency band as the nodes and adjacency matrices to construct the GC-GCN graph features. After that, the GC-F-GCN graph feature fusion module applies a new multi-frequency feature fusion scheme to integrate the GC-GCN features at different frequency bands for the same node. Finally, the Softmax layer module outputs the emotion recognition results.

3.1. Preprocessing of EEG Signals

The DEAP dataset [33] is a widely used dataset that contains the EEG and peripheral physiological signals of 32 healthy participants. Before the experiment, all participants were informed in detail about the whole procedure. During the experiment, each subject watched 40 one-minute music video excerpts eliciting different emotional states. The participants were asked to evaluate their levels of arousal, valence, liking, and dominance with the self-assessment manikin (SAM). The EEG signals were recorded from 32 channels at a sampling rate of 512 Hz.
This paper only uses the EEG signals in the DEAP dataset for emotion recognition research. According to the 1–9 self-assessment scores of the participants, we select the median score of 5 as the threshold, with scores higher than 5 representing the high class and scores less than or equal to 5 representing the low class. The arousal space is divided into two parts: high arousal (HA) and low arousal (LA). The valence space is divided into two parts: high valence (HV) and low valence (LV). The valence–arousal (VA) space is divided into four parts, i.e., low arousal–low valence (LALV), high arousal–low valence (HALV), low arousal–high valence (LAHV), and high arousal–high valence (HAHV).
For the original EEG signal of each subject, the following processing was first performed: downsampling to 128 Hz, removal of electrooculogram (EOG) artifacts, bandpass filtering between 4 and 45 Hz, and removal of the 3 s baseline. Then, each 60 s EEG signal was divided into ((60 − T_w)/T_o + 1) segments with a window length of T_w and an overlap time of T_o. Our previous work [34] has shown that the best recognition performance is achieved when T_w and T_o are 3 s and 1.5 s, respectively. The EEG signals of the four frequency bands were extracted using the STFT. Finally, the DE features and the GC matrices were calculated for the EEG signals at each frequency band, where the GC matrix of the EEG signals can be obtained from Equations (1)–(4), and the DE feature can be calculated as follows [35]:
$$h(x) = -\int f(x) \log(f(x)) \, dx \quad (11)$$
where x represents an EEG signal channel and f(x) is its probability density function. For an EEG signal that approximately obeys the Gaussian distribution, i.e., x ∼ N(μ, σ_i²), the DE feature can be written as:
$$h(x) = -\int \frac{1}{\sqrt{2\pi\sigma_i^2}} e^{-\frac{(x-\mu)^2}{2\sigma_i^2}} \log\!\left(\frac{1}{\sqrt{2\pi\sigma_i^2}} e^{-\frac{(x-\mu)^2}{2\sigma_i^2}}\right) dx = \frac{1}{2}\log\left(2\pi e \sigma_i^2\right) \quad (12)$$
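Under the Gaussian assumption, the DE feature of Equation (12) reduces to half the logarithm of 2πe times the band-limited signal variance, so it can be computed per channel and per band from a single variance estimate. The sketch below illustrates this under stated assumptions: it uses a Butterworth band-pass filter for the band decomposition (the paper uses the STFT), and the function name and filter order are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (12, 30), "gamma": (30, 45)}

def de_features(segment, fs=128):
    """Differential entropy (Eq. (12)) per channel and frequency band.

    segment : (n_channels, n_samples) array, e.g. one 3 s window of 32 channels.
    Returns : (n_channels, n_bands) array of DE values.
    """
    feats = []
    for lo, hi in BANDS.values():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, segment, axis=-1)             # band-limited signal
        var = np.var(band, axis=-1)                         # sigma_i^2 per channel
        feats.append(0.5 * np.log(2 * np.pi * np.e * var))  # h(x) = 0.5*log(2*pi*e*sigma^2)
    return np.stack(feats, axis=1)
```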

3.2. The Construction of GC-GCN Graph Feature

As mentioned previously, the GCN features are composed of node values and adjacency matrices. In this paper, the DE features at each frequency band were used as the nodes, and the GC values between any two EEG channels were used to form the adjacency matrices, so the constructed GC-GCN graph features can effectively represent the spatial causal relationships between the EEG electrode channels. As shown in Figure 2, the GC-GCN graph feature construction process in this paper is mainly divided into the following steps:
(1) Feature extraction. Figure 2a shows the original EEG signals with 32 channels. After the preprocessing described in Section 3.1, the DE features and the original GC matrices of EEG signals at θ , α , β , and γ frequency bands can be obtained, as shown in Figure 2b,c.
(2) Calculation of the GC-GCN adjacency matrix. Let E_i and E_j (i, j = 1, 2, …, 32; i ≠ j) denote any two EEG channels; the GC values between them can be calculated from Equations (1)–(4). Therefore, the GC-GCN adjacency matrix A_GC can be expressed as follows:
$$A_{GC}(i,j) = \begin{cases} F_{i \to j} = \ln\!\left(\dfrac{\sigma_{\varepsilon_j}}{\sigma_{\eta_{ij}}}\right), & i > j \\[2mm] F_{j \to i} = \ln\!\left(\dfrac{\sigma_{\varepsilon_i}}{\sigma_{\eta_{ji}}}\right), & i < j \\[2mm] 1, & i = j \end{cases} \quad (13)$$
Figure 2c shows the original GC matrices A_GC, where the GC values of the EEG signals lie in [0, 1]. The higher the GC value, the stronger the causality between the corresponding EEG signals; conversely, smaller GC values indicate weaker causal relationships or connections caused by noise [34]. Figure 2c also shows that the number of causal values in the original GC matrix of the EEG signals at each frequency band is 32 × 32. In this paper, we first sort these GC values from largest to smallest and then employ a threshold k (0 < k < 1) to obtain the set X_k of selected GC values, in which the largest 32 × 32 × k causal values are retained. In this way, the original GC matrix is converted into a binary GC matrix A_k, as described in Figure 2d and Equation (14); a code sketch of this thresholding step follows the list below.
$$A_{k}(i,j) = \begin{cases} 1, & A_{GC}(i,j) \in X_{k} \\ 0, & A_{GC}(i,j) \notin X_{k} \end{cases} \quad (14)$$
(3) Construction of GC-GCN graph features. The DE features and the binary GC matrices A_k of the EEG signals at each frequency band are used to construct the GC-GCN graph features, as shown in Figure 2e.
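The thresholding in step (2) can be sketched as follows; this is a small illustration under our own assumptions (the function name and the tie-handling at the threshold are not specified by the paper).

```python
import numpy as np

def binarize_gc(gc, k=0.6):
    """Convert an original GC matrix into the binary adjacency A_k of Eq. (14).

    gc : (32, 32) matrix of GC values for one frequency band.
    k  : fraction of the largest GC values kept as connections (0 < k < 1).
    """
    a = gc.copy()
    n = a.shape[0]
    np.fill_diagonal(a, 1.0)                 # diagonal fixed to 1, as in Eq. (13)
    flat = np.sort(a.ravel())[::-1]          # all values, largest first
    n_keep = int(np.ceil(n * n * k))         # retain the largest n*n*k values (the set X_k)
    thresh = flat[n_keep - 1]
    return (a >= thresh).astype(np.float32)  # asymmetric binary GC matrix A_k
```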

3.3. Multi-Frequency Band Graph Feature Fusion Method

Based on the structural characteristics of GC-GCN graph features, we propose a new GC-F-GCN graph feature fusion method, which can adaptively integrate the GC-GCN graph features of the EEG signals at each frequency band for the same EEG node to better utilize the frequency information significantly associated with the different emotional states. The framework of the GC-F-GCN method is shown in Figure 3, which includes two graph convolution layers (GCN), a multi-frequency band feature fusion layer (F-GCN), two fully connected layers (FC), and a Softmax layer.
(a) GCN layer. By substituting the adjacency matrix A_k in Equation (14) into Equation (10), we obtain a complete graph convolution operation, denoted Gconv in Figure 3. For the GC-GCN graph feature X_G constructed in Figure 2, we first apply the Gconv operation to X_G to extract the shallow graph feature X_S and then obtain the deep graph feature X_D through a second Gconv operation. X_S and X_D can be expressed as follows:
$$X_{S} = G(X_{G}, A_{k}) \quad (15)$$
$$X_{D} = G(X_{S}, A_{k}) \quad (16)$$
(b) F-GCN layer. Considering the different emotion recognition abilities of the EEG features at different frequency bands, a new multi-frequency band GC-GCN feature fusion method is designed for X_S and X_D, which integrates the GC-GCN features of the EEG signals at the θ, α, β, and γ frequency bands for the same EEG electrode channel. Since the fusion processes of X_S and X_D are similar, we take X_S as an example to describe the detailed fusion process, as shown in Figure 4.
As shown in Figure 4, let X_S(θ), X_S(α), X_S(β), and X_S(γ) denote the shallow features of the θ, α, β, and γ frequency band EEG signals, and let p_θ, p_α, p_β, and p_γ denote the weight coefficients of the θ, α, β, and γ frequency band GC-GCN features in the feature fusion process, respectively. X_FS is the shallow fusion feature. The fusion process of the F-GCN method can then be expressed as Equation (17):
$$X_{FS} = p_{\theta} X_{S}(\theta) + p_{\alpha} X_{S}(\alpha) + p_{\beta} X_{S}(\beta) + p_{\gamma} X_{S}(\gamma) \quad (17)$$
where p_θ + p_α + p_β + p_γ = 1.
Similarly, the deep fusion feature X_FD can be calculated using Equation (17). To obtain the optimal weight coefficients p_θ, p_α, p_β, and p_γ, we employ the cross-entropy loss function and iteratively update their values through backpropagation, which ensures that the features with better recognition performance are given higher importance in the fusion features. The cross-entropy loss function can be expressed as:
$$Loss = -\sum_{i=1}^{C} y_{i} \log \hat{y}_{i} \quad (18)$$
where C denotes the number of emotional states, and y_i and ŷ_i represent the actual and predicted emotional labels, respectively.
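A minimal PyTorch sketch of this F-GCN fusion step is given below. The paper only requires the band weights to sum to one and to be learned by backpropagation; the softmax parameterization used here to enforce that constraint, as well as the class name and tensor layout, are our assumptions.

```python
import torch
import torch.nn as nn

class FrequencyFusion(nn.Module):
    """F-GCN layer of Eq. (17): a learnable weighted sum of the theta/alpha/beta/gamma
    graph features for the same node."""

    def __init__(self, n_bands=4):
        super().__init__()
        # Unconstrained logits; softmax keeps p_theta + p_alpha + p_beta + p_gamma = 1.
        self.logits = nn.Parameter(torch.zeros(n_bands))

    def forward(self, band_feats):
        # band_feats: (n_bands, batch, n_nodes, feat_dim) graph features, one slice per band.
        p = torch.softmax(self.logits, dim=0)               # weight coefficients p_b
        return torch.einsum("b,bsnf->snf", p, band_feats)   # fused feature X_FS (or X_FD)
```

Training against the cross-entropy loss of Equation (18) updates these logits together with the other network parameters, which is how the weights of the more discriminative frequency bands grow.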
(c) FC layers. X_FS and X_FD are sequentially passed through the ReLU activation function and the FC1 layer, resulting in the FC shallow feature F_S and the FC deep feature F_D, respectively, as shown in Equations (19) and (20):
$$F_{S} = f_{1}(ReLU(X_{FS})) \quad (19)$$
$$F_{D} = f_{1}(ReLU(X_{FD})) \quad (20)$$
where f_1 represents the mapping of the FC1 layer, and the ReLU activation function is defined as ReLU(x) = max(x, 0).
To ensure that F_S and F_D have the same importance in the final emotion recognition process, we stipulate that they have the same dimensionality. The FC1 layer features F_S and F_D are then directly cascaded and integrated by the FC2 layer to obtain the final fusion feature X_o. This process can be expressed as:
$$X_{o} = f_{2}(concat(F_{S}, F_{D})) \quad (21)$$
where f_2 represents the mapping of the FC2 layer, and concat denotes the direct cascading of the features.
(d) Softmax layer. The final fusion feature X o is input into the Softmax layer to obtain the final emotion recognition results.
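Putting parts (a)–(d) together, the following sketch composes the ChebGraphConv and FrequencyFusion modules from the earlier sketches into the pipeline of Figure 3; the hidden sizes, the shared FC1 layer dimensions, and other details are illustrative assumptions rather than the authors' exact configuration, and the softmax of the output is left to the loss function.

```python
import torch
import torch.nn as nn

class GCFGCN(nn.Module):
    """End-to-end sketch of the GC-F-GCN pipeline (Figure 3)."""

    def __init__(self, n_nodes=32, in_feats=1, hidden=32, n_classes=2):
        super().__init__()
        self.gcn1 = ChebGraphConv(in_feats, hidden)       # shallow features X_S, Eq. (15)
        self.gcn2 = ChebGraphConv(hidden, hidden)         # deep features X_D, Eq. (16)
        self.fuse_s = FrequencyFusion()                   # F-GCN layer for X_S, Eq. (17)
        self.fuse_d = FrequencyFusion()                   # F-GCN layer for X_D
        self.fc1 = nn.Linear(n_nodes * hidden, 128)       # FC1, Eqs. (19) and (20)
        self.fc2 = nn.Linear(2 * 128, n_classes)          # FC2 on concat(F_S, F_D), Eq. (21)

    def forward(self, x_bands, adj_bands):
        # x_bands: 4 tensors (batch, n_nodes, in_feats), one per band (theta/alpha/beta/gamma).
        # adj_bands: 4 binary GC matrices A_k, one per band.
        xs = [self.gcn1(x, a) for x, a in zip(x_bands, adj_bands)]
        xd = [self.gcn2(s, a) for s, a in zip(xs, adj_bands)]
        x_fs = self.fuse_s(torch.stack(xs))               # fused shallow feature X_FS
        x_fd = self.fuse_d(torch.stack(xd))               # fused deep feature X_FD
        f_s = self.fc1(torch.relu(x_fs).flatten(1))       # F_S, Eq. (19)
        f_d = self.fc1(torch.relu(x_fd).flatten(1))       # F_D, Eq. (20)
        return self.fc2(torch.cat([f_s, f_d], dim=1))     # X_o, Eq. (21); softmax in the loss
```

For example, training the logits with nn.CrossEntropyLoss realizes the loss of Equation (18) and jointly updates the graph filters, the band weights, and the FC layers.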

4. Experimental Results and Discussion

To evaluate the performance of the proposed scheme, we conducted three groups of experiments. In the first group, we examined the influence of the threshold k on the GC-GCN graph feature and compared the emotion recognition performance of GCN graph features with different adjacency matrices to verify the effectiveness of the GC-GCN graph feature. Next, we tested the performance of the proposed GC-F-GCN method through ablation experiments. Finally, we compared the emotion recognition performance of the GC-F-GCN method with that of other state-of-the-art GCN methods. All the experiments were carried out on a computer with an Nvidia GeForce RTX 1080 GPU and the PyTorch framework, and all the experimental results were averaged over 5-fold cross-validation.

4.1. The Emotion Recognition Performance of the GC-GCN Graph Feature

As mentioned in Section 3.2, the GC matrices at different thresholds k correspond to different EEG topological connection structures. In this section, we mainly focus on the effect of the threshold k on the emotion recognition performance, where the threshold value k is chosen from 10% to 90%. Table 1 shows the emotion recognition performance of the GC-GCN graph features with different thresholds k. It can be seen that the recognition performance of both the binary and four-class classifications is sensitive to the threshold k. When the threshold values k are 60%, 30%, and 70%, the arousal, valence, and arousal–valence classifications achieve their best recognition performance, with optimal average recognition accuracies of 91.82%, 92.01%, and 87.38%, respectively. Therefore, only the optimal threshold k is used to calculate the GC adjacency matrix in the following sections.
In addition, this section also compares three commonly used adjacency matrices for constructing GCN features: the random matrix, the shortest distance matrix, and the PLV-based matrix [19]. For fairness, all the recognition models adopted the same parameters and network structure. Table 2 shows the emotion recognition performance of the GCN graph features with different adjacency matrices.
(1) For the single-frequency-band EEG signals, the emotion recognition accuracy of the proposed GC-GCN graph feature is 4–14% higher than that of the random-based, the shortest distance-based, and PLV-based GCN graph features in the arousal, valence, and arousal–valence classifications.
(2) For the full-frequency-band EEG signals, the emotion recognition accuracies of the proposed GC-GCN graph features are 91.82%, 92.01%, and 87.38% for the arousal, valence, and arousal–valence classifications, respectively. Compared to the GCN graph features constructed with the random matrix, the shortest distance matrix, and the PLV-based matrix, the GC-GCN features achieve average improvements of 4.37%, 4.95%, and 0.93% for the binary arousal classification, 4.70%, 4.43%, and 1.66% for the binary valence classification, and 3.50%, 3.52%, and 1.74% for the four-class arousal–valence classification, respectively.
(3) The paired-samples t-test was utilized to verify whether there were significant differences in classification performance between the GC-GCN graph features and the other methods. The results show that the GC-GCN graph features obtain significantly higher accuracy in the arousal classification (PLV: p = 0.014 < 0.05; others: p < 0.01), the valence classification (p < 0.01), and the arousal–valence classification (PLV: p = 0.015 < 0.05; others: p < 0.01).

4.2. The Performance of the GC-F-GCN Feature Fusion Method

In this section, we perform ablation experiments on the GC-F-GCN feature fusion method to analyze the contribution of each module, including the following three ablation experiments:
(1) The shallow graph feature (SGC-GCN) and the shallow fusion feature (SGC-F-GCN);
(2) The deep graph feature (DGC-GCN) and the deep fusion feature (DGC-F-GCN);
(3) The shallow and deep cascade graph feature (SDGC-GCN) and the shallow and deep fusion feature (SDGC-F-GCN).
Table 3 shows the emotion recognition performance of the ablation experiments performed using the GC-F-GCN method, from which the following conclusions can be obtained:
(1) The emotion recognition performance of the fusion graph features is significantly better than that of the original graph features. Compared with the SGC-GCN shallow features, the average recognition performance of the SGC-F-GCN shallow fusion features is 4.97%, 6.40%, and 9.60% higher for the arousal, valence, and arousal–valence classifications, respectively. Compared with the DGC-GCN deep features, the average recognition performance of the DGC-F-GCN deep fusion features is 5.59%, 6.22%, and 10.40% higher for the arousal, valence, and arousal–valence classifications, respectively. Moreover, the average recognition performance of the GC-F-GCN fusion features is 4.33%, 5.21%, and 5.77% higher than that of the SDGC-GCN features in the arousal, valence, and arousal–valence classifications, respectively.
(2) The GC-F-GCN graph features achieve a further improvement over the single fusion graph features. Specifically, compared with the SGC-F-GCN and DGC-F-GCN features, the GC-F-GCN features improve by 0.79% and 0.50% in the arousal classification, 1.35% and 0.23% in the valence classification, and 1.02% and 0.37% in the arousal–valence classification, respectively.
(3) The paired-samples t-test was utilized to verify whether there were significant differences in classification performance among these graph features. The results show that the GC-F-GCN (SDGC-F-GCN) graph features obtain significantly higher accuracy in the arousal classification (SGC-GCN: p = 0.044 < 0.05; DGC-F-GCN: p = 0.015 < 0.05; others: p < 0.01), the valence classification (DGC-F-GCN: p = 0.034 < 0.05; others: p < 0.01), and the arousal–valence classification (SGC-GCN: p = 0.046 < 0.05; SDGC-GCN: p = 0.045 < 0.05; DGC-F-GCN: p = 0.041 < 0.05; others: p < 0.01).

4.3. Performance Comparison with the Latest GCN Graph Features

In this section, we compare the proposed scheme with other state-of-the-art graph features reported in the literature, including the support vector machine (SVM), the artificial neural network (ANN), the DGCNN [14], the PGCNN [19], and the Causal-GCN [26]; the comparison results are shown in Table 4. All the schemes adopt the same EEG signal division and 5-fold cross-validation. The parameter numbers of the networks and the average training times of the corresponding recognition models are presented in Table 5.
We make the following statements based on the results:
(1) Compared with the DGCNN, PGCNN, and Causal-GCN graph features, the GC-GCN graph features achieve an average improvement of 1% to 5% in the arousal, valence, and arousal–valence classifications, while the average parameter numbers of the corresponding recognition models remain almost the same and the average training time increases by about 60%.
(2) Compared with the DGCNN, PGCNN, and Causal-GCN graph features, the GC-F-GCN fusion features improve the accuracy by 9.35%, 9.42%, and 9.43% in the arousal classification, 10.10%, 10.20%, and 10% in the valence classification, and 15.75%, 15.50%, and 15.80% in the arousal–valence classification, respectively.
(3) The accuracy of the GC-F-GCN fusion features is 6–10% higher than that of the GC-GCN graph features, while the average parameter numbers and the training time of the recognition model increase by about 492% and 335%, respectively. Compared with the SVM and ANN classifiers, the GC-F-GCN method improves the accuracy by 13.71% and 5.04% in the arousal classification, 15.03% and 6.18% in the valence classification, and 10.47% and 10.62% in the arousal–valence classification, respectively. This further demonstrates that the proposed model achieves better recognition performance.
(4) The paired-samples t-test was utilized to verify whether there were significant differences in classification performance between the GC-GCN graph features and the other methods. The results show that significantly higher accuracy (p < 0.01) is obtained in the arousal, valence, and arousal–valence classifications.

5. Discussions

For the adjacency matrix of the GCN graph features of EEG signals, compared with the spatial position relationships between the EEG channels [13,14,15] and functional-connectivity-based adjacency matrices [16,17,19], GC-based adjacency matrices offer the potential (i.e., by providing direction information) to improve the recognition accuracy of emotion recognition systems. In our work, the GC-based GCN graph feature shows superiority over the other matrices, which is consistent with [26].
In this paper, we compare the emotion recognition performance of three commonly used adjacency matrices for constructing GCN features with that of the proposed GC-GCN graph features in Table 2. It can be seen that the proposed GC-GCN graph features achieve the best recognition performance in both the single frequency bands and the full frequency band. This is because the adjacency matrix of the GC-GCN features not only utilizes the directionality of the causal information flow between EEG channels but is also constructed for each frequency band, so that the nodes match the adjacency matrices, which is more consistent with the cognitive mechanism of the brain. We also observe that the emotion recognition performance of the full-frequency-band EEG signals is significantly better than that of the single-frequency-band EEG signals. Previous studies have also shown that the full frequency band is more comprehensive and effective than a single frequency band [27,30]. Therefore, the complementarity between the GCN graph features of the EEG signals at different frequency bands can be effectively exploited to improve emotion recognition performance.
To analyze the contribution of each module of the GC-F-GCN feature fusion method, ablation experiments are presented in Table 3. Compared with the original graph features, the GC-F-GCN scheme achieves significant improvements in the arousal, valence, and arousal–valence classifications. For the original SGC-GCN and DGC-GCN graph features, the corresponding SGC-F-GCN and DGC-F-GCN fusion features are obtained by applying the frequency fusion module only once. The experimental results in Table 3 indicate that applying the frequency fusion module to the original GC-GCN features effectively improves the EEG emotion recognition performance.
Finally, several state-of-the-art GCN features, namely the DGCNN [14], the PGCNN [19], and the Causal-GCN [26], are used to verify the validity of the proposed GC-F-GCN graph features. The results in Table 4 show that the GC-F-GCN graph feature achieves better emotion recognition performance than the other GCN graph features. This is because the DGCNN only considers the spatial location relationships of the EEG signals, the PLV-based adjacency matrix of the PGCNN feature is undirected, and both ignore the fact that the connections between EEG channels are directional. The Causal-GCN feature only adopts the GC matrix of the full-frequency-band EEG signals as the adjacency matrix, so the nodes in the constructed GCN features do not match the adjacency matrices. The GC-F-GCN method constructs GC-GCN features for each frequency band and integrates the shallow and deep graph features, so that the emotion-related information in the EEG signals can be better extracted. In addition, the increased parameters and training time of the GC-F-GCN method mainly come from the additional F-GCN and FC layers, which are the necessary cost of the improvement in recognition performance.

6. Conclusions

In this paper, a novel multi-frequency band graph feature extraction and fusion method is proposed for EEG emotion recognition, which simultaneously considers the spatial, frequency, and causal information of emotional EEG signals. First, the GCN adjacency matrix was constructed by setting a threshold value so that the larger causal values in the GC matrix of the EEG signals at each frequency band were selected adaptively, which preserves the more significant causal connections between EEG channels and makes the selected EEG connections more consistent with brain cognitive patterns. Then, the DE features and the selected binary GC matrices of the EEG signals at each frequency band were used as the node values and adjacency matrices to construct the GC-GCN graph feature. Finally, based on the structural characteristics of the GC-GCN graph features, a new multi-frequency band EEG feature fusion method was proposed to integrate the graph information of the EEG signals at different frequency bands for the same node, which further improves the performance of emotion recognition. Experimental results show that the emotion recognition accuracy of the proposed GC-F-GCN scheme is 97.91% and 98.46% for the arousal and valence binary classifications and 98.15% for the four-class arousal–valence classification. In future work, we will try to combine a spatial attention mechanism with the GC-F-GCN method to explore the brain regions significantly associated with human emotions, which could further improve the EEG emotion recognition system.

Author Contributions

Conceptualization, J.Z. and X.Z.; methodology, J.Z.; software, J.Z. and G.C.; validation, J.Z., X.Z. and G.C.; formal analysis, J.Z. and X.Z.; writing—original draft preparation, J.Z.; writing—review and editing, J.Z., X.Z., G.C. and Q.Z.; funding acquisition, X.Z. and G.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (grant numbers 62271342 and 62201377) and the Shanxi Scholarship Council of China (grant number HGKY2019025).

Data Availability Statement

The database used in this study is publicly available at websites: DEAP: http://www.eecs.qmul.ac.uk/mmv/datasets/deap/, accessed on 10 October 2022.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
EEG  Electroencephalogram
GC  Granger causality
GCN  Graph convolutional neural network
GC-GCN  GC-based GCN graph feature
GC-F-GCN  Multi-frequency band GC-GCN feature fusion method
PLV  Phase-locking value
SGC-GCN  Shallow graph feature
DGC-GCN  Deep graph feature
SDGC-GCN  Shallow and deep cascade graph feature
SGC-F-GCN  Shallow fusion feature
DGC-F-GCN  Deep fusion feature
SDGC-F-GCN  Shallow and deep fusion feature

References

1. Zhang, J.H.; Yin, Z.; Chen, P.; Nichele, P. Emotion recognition using multi-modal data and machine learning techniques: A tutorial and review. Inf. Fusion 2020, 59, 103–126.
2. Luo, C.; Li, F.; Li, P.; Yi, C.; Li, C.; Tao, Q.; Zhang, X.; Si, Y.; Yao, D.; Yin, G.; et al. A survey of brain network analysis by electroencephalographic signals. Cogn. Neurodyn. 2022, 16, 17–41.
3. Álvarez-Pato, V.M.; Sánchez, C.N.; Domínguez-Soberanes, J.; Méndoza-Pérez, D.E.; Velázquez, R. A Multisensor Data Fusion Approach for Predicting Consumer Acceptance of Food Products. Foods 2020, 9, 774.
4. Jiang, T.Z.; He, Y.; Zang, Y.F.; Weng, X.C. Modulation of functional connectivity during the resting state and the motor task. Hum. Brain Mapp. 2004, 22, 63–71.
5. Zheng, W.L.; Lu, B.L. Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks. IEEE Trans. Auton. Ment. Dev. 2015, 7, 162–175.
6. Cai, J.; Xiao, R.L.; Cui, W.J.; Zhang, S.; Liu, G.D. Application of EEG-based machine learning in emotion recognition: A Review. Front. Syst. Neurosci. 2021, 15, 146.
7. Yang, Y.L.; Wu, Q.; Fu, Y.Z.; Chen, X.W. Continuous convolutional neural network with 3D input for EEG-based emotion recognition. In Proceedings of the 25th International Conference on Neural Information Processing, Siem Reap, Cambodia, 13–16 December 2018; pp. 433–443.
8. Shen, F.; Dai, G.; Lin, G. EEG-based emotion recognition using 4D convolutional recurrent neural network. Cogn. Neurodyn. 2020, 14, 815–828.
9. Zhang, J.; Zhang, X.Y.; Chen, G.J.; Yan, C. EEG emotion recognition based on the 3D-CNN and spatial-frequency attention mechanism. J. Xidian Univ. 2022, 49, 191–198.
10. Defferrard, M.; Bresson, X.; Vandergheynst, P. Convolutional neural networks on graphs with fast localized spectral filtering. In Proceedings of the 30th Annual Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 3844–3852.
11. Levie, R.; Monti, F.; Bresson, X.; Bronstein, M. Cayleynets: Graph convolutional neural networks with complex rational spectral filters. IEEE Trans. Signal Process. 2018, 67, 97–109.
12. Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. In Proceedings of the 5th International Conference on Learning Representations, Toulon, France, 24–26 April 2017.
13. Yin, Y.Q.; Zheng, X.W.; Hu, B.; Zhang, Y.; Cui, X.C. EEG emotion recognition using fusion model of graph convolutional neural networks and LSTM. Appl. Soft Comput. 2021, 100, 106954.
14. Song, T.F.; Zheng, W.M.; Song, P.; Cui, Z. EEG emotion recognition using dynamical graph convolutional neural networks. IEEE Trans. Affect. Comput. 2018, 11, 532–541.
15. Song, T.F.; Zheng, W.M.; Liu, S.Y.; Zong, Y.; Cui, Z.; Li, Y. Graph-Embedded Convolutional Neural Network for Image-based EEG Emotion Recognition. IEEE Trans. Emerg. Top. Comput. 2021, 10, 1399–1413.
16. Wang, H.T.; Liu, X.C.; Li, J.H.; Xu, T.; Bezerianos, A.; Sun, Y.; Wan, F. Driving fatigue recognition with functional connectivity based on phase synchronization. IEEE Trans. Cogn. Dev. Syst. 2020, 13, 668–678.
17. Tian, W.L.; Li, M.; Ju, X.Y.; Liu, Y.D. Applying Multiple Functional Connectivity Features in GCN for EEG-Based Human Identification. Brain Sci. 2022, 12, 1072.
18. Shi, Y.; Liu, M.; Sun, A.; Liu, J.J.; Men, H. A fast Pearson graph convolutional network combined with electronic nose to identify the origin of rice. IEEE Sens. J. 2021, 21, 21175–21183.
19. Wang, Z.M.; Tong, Y.; Heng, X. Phase-locking value-based graph convolutional neural networks for emotion recognition. IEEE Access 2019, 7, 93711–93722.
20. Cao, J.; Zhao, Y.F.; Shan, X.C.; Wei, H.L.; Guo, Y.Z.; Chen, L.Y.; Erkoyuncu, A.J.; Sarrigiannis, P.G. Brain Functional and Effective Connectivity Based on Electroencephalography Recordings: A Review. Hum. Brain Mapp. 2022, 43, 860–879.
21. Esposito, R.; Bortoletto, M.; Miniussi, C. Integrating TMS, EEG, and MRI as an approach for studying brain connectivity. Neuroscientist 2020, 26, 471–486.
22. Moon, S.E.; Chen, C.J.; Hsieh, C.J.; Wang, J.L.; Lee, J.S. Emotional EEG Classification Using Connectivity Features and Convolutional Neural Networks. Neural Netw. 2020, 132, 96–107.
23. Reid, A.T.; Headley, D.B.; Mill, R.D.; Sanchez-Romero, R.; Uddin, L.Q.; Marinazzo, D.; Lurie, D.J.; Valdés-Sosa, P.A.; Hanson, S.J.; Biswal, B.B.; et al. Advancing functional connectivity research from association to causation. Nat. Neurosci. 2019, 22, 1751–1760.
24. Granger, C.W.J. Investigating Causal Relations by Econometric Models and Cross-Spectral Methods. Econometrica 1969, 37, 424–438.
25. Gao, Y.Y.; Wang, X.K.; Potter, T.; Zhang, J.H.; Zhang, Y.C. Single-trial EEG emotion recognition using Granger Causality/Transfer Entropy analysis. J. Neurosci. Methods 2020, 346, 108904.
26. Kong, W.Z.; Qiu, M.; Li, M.H.; Jin, X.Y.; Zhu, L. Causal Graph Convolutional Neural Network for Emotion Recognition. IEEE Trans. Cogn. Dev. Syst. 2022; Early Access.
27. Shen, F.Y.; Peng, Y.; Kong, W.Z.; Dai, G.J. Multi-scale frequency bands ensemble learning for EEG-based emotion recognition. Sensors 2021, 21, 1262.
28. Zheng, W.L.; Liu, W.; Lu, Y.F.; Lu, B.L.; Cichocki, A. Emotionmeter: A Multimodal Framework for Recognizing Human Emotions. IEEE Trans. Cybern. 2018, 48, 1110–1122.
29. Li, D.D.; Xie, L.; Chai, B.; Wang, Z.; Yang, H. Spatial-frequency convolutional self-attention network for EEG emotion recognition. Appl. Soft Comput. 2022, 122, 108740.
30. Xiao, G.W.; Shi, M.; Ye, M.W.; Xu, B.W.; Chen, Z.D.; Ren, Q.S. 4D attention-based neural network for EEG emotion recognition. Cogn. Neurodyn. 2022, 16, 805–818.
31. Zhang, S.; Tong, H.H.; Xu, J.J.; Maciejewski, R. Graph convolutional networks: Algorithms, applications and open challenges. In Proceedings of the 7th International Conference on Computational Data and Social Networks, Shanghai, China, 18–20 December 2018.
32. Zhang, S.; Tong, H.H.; Xu, J.J.; Maciejewski, R. Graph convolutional networks: A comprehensive review. Comput. Soc. Netw. 2019, 6, 11.
33. Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. DEAP: A Database for Emotion Analysis Using Physiological Signals. IEEE Trans. Affect. Comput. 2011, 3, 18–31.
34. Zhang, J.; Zhang, X.Y.; Chen, G.J.; Huang, L.X.; Sun, Y. EEG emotion recognition based on cross-frequency granger causality feature extraction and fusion in the left and right hemispheres. Front. Neurosci. 2022, 16, 974673.
35. Shi, L.C.; Jiao, Y.Y.; Lu, B.L. Differential entropy feature for EEG-based vigilance estimation. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Osaka, Japan, 3–7 July 2013; pp. 6627–6630.
Figure 1. The framework of the proposed GC-F-GCN graph feature fusion method.
Figure 2. The construction process of the GC-GCN graph feature. (a) The original EEG signal. (b) The DE features. (c) The original GC matrices of the θ, α, β, and γ frequency band EEG signals. (d) The binary GC matrices after threshold selection. (e) The final GC-GCN graph features.
Figure 3. The framework of the proposed GC-F-GCN feature fusion method.
Figure 4. The fusion process of the proposed F-GCN method for X_S.
Table 1. Emotion recognition performance of GC-GCN graph features with different thresholds k (%).

k | Arousal | Valence | Arousal–Valence
10% | 91.35 ± 5.37 | 91.34 ± 5.45 | 86.90 ± 8.23
20% | 91.56 ± 5.14 | 91.66 ± 5.25 | 87.21 ± 8.31
30% | 91.22 ± 5.53 | 92.01 ± 5.23 | 86.71 ± 8.10
40% | 91.15 ± 5.34 | 91.23 ± 5.52 | 86.93 ± 8.45
50% | 91.55 ± 5.33 | 91.46 ± 5.57 | 87.31 ± 8.78
60% | 91.82 ± 4.82 | 91.27 ± 5.88 | 86.80 ± 9.10
70% | 91.73 ± 5.66 | 91.25 ± 4.79 | 87.38 ± 7.94
80% | 91.48 ± 5.21 | 91.91 ± 4.88 | 87.30 ± 9.06
90% | 91.38 ± 5.26 | 91.60 ± 5.09 | 86.87 ± 8.91
Table 2. The recognition performance of the GCN features with different adjacency matrices (%).

Emotion | Adjacency Matrix | θ | α | β | γ | θ + α + β + γ
Arousal | Random | 71.89 ± 5.44 | 72.12 ± 5.80 | 71.22 ± 5.78 | 71.54 ± 4.87 | 87.45 ± 5.53
Arousal | Distance | 71.42 ± 5.93 | 72.52 ± 6.24 | 75.32 ± 6.24 | 75.10 ± 5.74 | 87.23 ± 5.17
Arousal | PLV [19] | 72.31 ± 5.24 | 71.78 ± 5.17 | 71.44 ± 4.80 | 71.95 ± 5.58 | 90.89 ± 4.68
Arousal | GC-GCN | 72.58 ± 5.16 | 78.09 ± 5.94 | 78.68 ± 5.82 | 79.55 ± 5.56 | 91.82 ± 4.82
Valence | Random | 69.92 ± 6.46 | 69.71 ± 6.65 | 69.42 ± 6.50 | 69.80 ± 5.46 | 87.31 ± 4.75
Valence | Distance | 70.61 ± 6.36 | 70.67 ± 6.26 | 70.33 ± 7.12 | 69.89 ± 5.75 | 87.58 ± 5.42
Valence | PLV [19] | 69.94 ± 6.22 | 69.34 ± 6.24 | 70.56 ± 6.33 | 70.50 ± 5.83 | 90.35 ± 5.02
Valence | GC-GCN | 78.06 ± 5.84 | 78.25 ± 5.01 | 76.85 ± 6.37 | 77.66 ± 5.45 | 92.01 ± 5.23
Arousal–Valence | Random | 55.92 ± 9.24 | 54.35 ± 8.34 | 53.80 ± 9.38 | 55.34 ± 9.29 | 83.88 ± 9.00
Arousal–Valence | Distance | 54.51 ± 9.57 | 54.90 ± 8.28 | 54.00 ± 8.40 | 55.27 ± 9.57 | 83.86 ± 8.66
Arousal–Valence | PLV [19] | 53.76 ± 8.96 | 54.88 ± 9.51 | 70.36 ± 6.22 | 71.07 ± 70.1 | 85.64 ± 7.99
Arousal–Valence | GC-GCN | 71.47 ± 4.93 | 72.07 ± 5.07 | 71.97 ± 4.36 | 72.09 ± 5.19 | 87.38 ± 7.94
Table 3. Emotion recognition performance of the ablation experiments of the GC-F-GCN method (%).

Features | Arousal | Valence | Arousal–Valence
SGC-GCN | 92.12 ± 3.06 | 90.71 ± 5.00 | 87.53 ± 2.91
DGC-GCN | 91.82 ± 4.82 | 92.01 ± 5.23 | 87.38 ± 7.94
SDGC-GCN | 93.58 ± 5.78 | 93.25 ± 4.22 | 92.38 ± 3.05
SGC-F-GCN | 97.12 ± 2.53 | 97.11 ± 2.30 | 97.13 ± 2.66
DGC-F-GCN | 97.41 ± 3.29 | 98.23 ± 1.38 | 97.78 ± 3.26
SDGC-F-GCN | 97.91 ± 2.50 | 98.46 ± 1.16 | 98.15 ± 2.39
Table 4. The recognition performance compared with the latest schemes (%).

Emotion | Method | θ | α | β | γ | θ + α + β + γ
Arousal | SVM | 74.19 ± 8.55 | 76.05 ± 8.68 | 80.10 ± 8.47 | 83.74 ± 7.17 | 84.20 ± 9.39
Arousal | ANN | 77.72 ± 10.82 | 89.60 ± 7.73 | 87.14 ± 8.38 | 86.67 ± 7.01 | 92.87 ± 5.45
Arousal | DGCNN [14] | 72.63 ± 5.59 | 72.33 ± 5.77 | 72.23 ± 5.75 | 72.43 ± 5.85 | 88.56 ± 5.98
Arousal | PGCNN [19] | 72.31 ± 5.24 | 71.78 ± 5.17 | 71.44 ± 4.80 | 71.95 ± 5.58 | 90.89 ± 4.68
Arousal | Causal-GCN [26] | 72.24 ± 5.41 | 71.93 ± 5.19 | 72.15 ± 4.98 | 72.61 ± 5.20 | 88.48 ± 5.71
Arousal | GC-GCN | 72.58 ± 5.16 | 78.09 ± 5.94 | 78.68 ± 5.82 | 79.55 ± 5.56 | 91.82 ± 4.82
Arousal | GC-F-GCN | 84.14 ± 7.61 | 93.45 ± 5.42 | 92.62 ± 5.60 | 91.69 ± 4.72 | 97.91 ± 2.50
Valence | SVM | 67.68 ± 9.42 | 72.79 ± 9.14 | 77.17 ± 11.80 | 78.76 ± 11.02 | 83.43 ± 11.50
Valence | ANN | 78.63 ± 10.13 | 89.65 ± 6.81 | 87.73 ± 6.07 | 87.90 ± 5.38 | 92.28 ± 6.12
Valence | DGCNN [14] | 72.64 ± 5.49 | 72.25 ± 5.94 | 71.79 ± 4.86 | 72.27 ± 5.79 | 88.36 ± 5.48
Valence | PGCNN [19] | 69.94 ± 6.22 | 69.34 ± 6.24 | 70.56 ± 6.33 | 70.50 ± 5.83 | 90.35 ± 5.02
Valence | Causal-GCN [26] | 71.92 ± 5.63 | 71.95 ± 5.36 | 71.69 ± 5.40 | 71.96 ± 5.62 | 88.46 ± 5.11
Valence | GC-GCN | 78.06 ± 5.84 | 78.25 ± 5.01 | 76.85 ± 6.37 | 77.66 ± 5.45 | 92.01 ± 5.23
Valence | GC-F-GCN | 84.02 ± 8.01 | 84.19 ± 7.86 | 83.98 ± 7.83 | 83.82 ± 7.76 | 98.46 ± 1.16
Arousal–Valence | SVM | 69.54 ± 14.10 | 78.61 ± 11.44 | 82.31 ± 9.67 | 86.95 ± 7.89 | 87.68 ± 6.98
Arousal–Valence | ANN | 69.03 ± 13.05 | 68.64 ± 14.39 | 62.34 ± 16.20 | 60.78 ± 14.05 | 87.53 ± 12.25
Arousal–Valence | DGCNN [14] | 72.65 ± 5.60 | 73.15 ± 5.86 | 72.65 ± 5.59 | 73.00 ± 5.29 | 82.40 ± 9.32
Arousal–Valence | PGCNN [19] | 53.76 ± 8.96 | 54.88 ± 9.51 | 70.36 ± 6.22 | 71.07 ± 70.1 | 85.64 ± 7.99
Arousal–Valence | Causal-GCN [26] | 73.10 ± 5.39 | 73.22 ± 5.37 | 73.14 ± 5.37 | 73.39 ± 6.29 | 82.35 ± 9.23
Arousal–Valence | GC-GCN | 71.47 ± 4.93 | 72.07 ± 5.07 | 71.97 ± 4.36 | 72.09 ± 5.19 | 87.38 ± 7.94
Arousal–Valence | GC-F-GCN | 77.28 ± 11.96 | 77.20 ± 12.68 | 77.07 ± 12.10 | 77.05 ± 12.32 | 98.15 ± 2.39
Table 5. The parameter numbers of the networks and the average training times of the different GCN schemes.

Model | Arousal Parameters | Arousal Time (s) | Valence Parameters | Valence Time (s) | Arousal–Valence Parameters | Arousal–Valence Time (s)
DGCNN [14] | 4338 | 15 | 4338 | 15 | 4404 | 16
PGCNN [19] | 4274 | 16 | 4274 | 16 | 4340 | 16
Causal-GCN [26] | 4402 | 17 | 4402 | 17 | 4468 | 18
GC-GCN | 4218 | 25 | 4218 | 26 | 4284 | 29
GC-F-GCN | 24,970 | 123 | 24,970 | 124 | 25,076 | 126
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
