EEG-Based Emotion Recognition by Exploiting Fused Network Entropy Measures of Complex Networks across Subjects

It is well known that there may be significant individual differences in physiological signal patterns of emotional responses. Emotion recognition based on electroencephalogram (EEG) signals thus remains a challenging task when developing an individual-independent recognition method. In our paper, from the perspective of the spatial topology and temporal information of brain emotional patterns in an EEG, we exploit complex networks to characterize EEG signals and effectively extract EEG information for emotion recognition. First, we exploit visibility graphs to construct complex networks from EEG signals. Then, two kinds of network entropy measures (nodal degree entropy and clustering coefficient entropy) are calculated. By applying the AUC method, the effective features are input into the SVM classifier to perform emotion recognition across subjects. The experimental results showed that, for the EEG signals of 62 channels, the features of the 18 channels selected by AUC were significant (p < 0.005). For the classification of positive and negative emotions, the average recognition rate was 87.26%; for the classification of positive, negative, and neutral emotions, the average recognition rate was 68.44%. Our method improves mean accuracy by an average of 2.28% compared with other existing methods. Our results demonstrate that more accurate recognition of emotional EEG signals can be achieved relative to the available relevant studies, indicating that our method can provide greater generalizability in practical use.


Introduction
The basic abilities of people in social communication include accurately distinguishing emotions and making reasonable responses [1]. With the ability to recognize emotions, machines could think like humans, perceive human emotional states, and respond rationally [2]. However, related research on brain-computer interfaces (BCI) focuses more on logic and ignores the importance of emotions [3,4]. Current emotion recognition methods are based on either non-physiological or physiological signals. Non-physiological studies mainly involve speech signals [5], facial expressions [6], body postures [7], and gestures [8]. Physiological signals, on the other hand, directly reflect human bodily functions, with the advantages of objectivity and accuracy. Common physiological signals are diverse and include the electroencephalogram (EEG) [9], electromyogram (EMG) [10], and electrocardiogram (ECG) [11], among others. Of these, EEG signals can be obtained from the cerebral cortex using noninvasive devices, and emotional states are directly reflected in the EEG activity of the corresponding brain regions. Accordingly, emotion recognition based on EEG signals has attracted increasing attention due to its safe, noninvasive, and intuitive character [12][13][14][15].
Relevant studies on affective classification based on EEG have faced many scientific challenges in the past decade.

The organization of this paper is as follows: Section 2 describes the visibility graph method and the extraction of network entropy measures. Section 3 presents the data analysis and experimental results. In Section 4, we discuss the experimental results in comparison with recently reported studies. Lastly, the article is concluded in Section 5.

Materials and Methods
The framework of our proposed emotion recognition model is shown in Figure 1 and comprises several processes. First, video stimuli were used to elicit emotional reactions in the subjects, and EEG data were recorded. After preprocessing, the EEG signals were mapped to complex networks using the visibility graph method. Next, two features (NDE and CCE) were calculated from the complex networks. In addition, the AUC value of each channel was calculated, and the effective channels were selected according to the defined criterion (AUC > 0.8) for emotion classification. The features of the selected channels were fused into a feature vector and sent to a machine learning model. Lastly, a support vector machine (SVM) was used as the pattern classifier to recognize different emotions.

Experimental Dataset
Our study used an open dataset for analysis, namely, SEED (SJTU Emotion EEG Dataset) [15]. This dataset includes 15 Chinese subjects (7 males and 8 females; mean age: 23.27 years; SD: 2.37). A total of 15 Chinese film clips (positive, neutral, and negative emotions) were selected as the subjects' emotional stimuli, with each film lasting about 4 min. Three groups of experiments were conducted, with each subject participating in 15 trials in each case. Three kinds of emotions (positive, neutral, and negative) were induced in each subject through the emotional film clips. Therefore, each subject participated in three experiments of 15 trials each, i.e., each subject conducted 45 (15 × 3) trials in the SEED dataset. For more information about the SEED database, please refer to http://bcmi.sjtu.edu.cn/seed/ (accessed on 25 July 2021). In this study, we used the EEG signals corresponding to trials of positive, neutral, and negative emotions to distinguish among the three types of emotions. We intercepted 30 s of EEG data from each SEED trial, extracted from the middle portion, i.e., from 60 s to 90 s.
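This trial-segmentation step can be sketched as follows. The array layout, variable names, and synthetic trial data are our assumptions for illustration; SEED's released EEG is downsampled to 200 Hz.

```python
import numpy as np

FS = 200           # SEED sampling rate after downsampling (Hz)
N_CHANNELS = 62    # SEED electrode count

def extract_middle_segment(trial, fs=FS, start_s=60, end_s=90):
    """Slice the 30 s middle segment (60 s to 90 s) from one trial.

    `trial` is assumed to be a (channels, samples) array; the names here
    are illustrative, not from the SEED toolkit.
    """
    return trial[:, start_s * fs:end_s * fs]

# Synthetic stand-in for a ~4 min, 62-channel trial.
rng = np.random.default_rng(0)
trial = rng.standard_normal((N_CHANNELS, 240 * FS))
segment = extract_middle_segment(trial)
print(segment.shape)  # (62, 6000)
```

The 30 s window at 200 Hz yields 6000 samples per channel, the N points from which each channel's visibility graph is later built.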


Constructing Complex Networks Using Visibility Graph from EEG Signals
The EEG results are presented as signal time series with nonlinear and nonstationary characteristics. Such time series can be effectively and quantitatively analyzed using complex network theory, e.g., visibility graphs. The EEG signals contain 62 channels, each of which is a time series of N points. The visibility graph method transforms the time series into numerical points {x_chj(i)}, i = 1, 2, ..., N, j = 1, 2, ..., 62, where each numerical point corresponds to a height on the vertical axis, displayed as a vertical line; the numerical points then define the nodes of a network, and these nodes are connected according to the visibility between the corresponding values. As shown in Figure 2, node i has neighbor nodes (x_chj(1), x_chj(2), ..., x_chj(m − 1), x_chj(m)), 1 ≤ i ≤ m, where m is the total number of neighbor nodes. The visibility graph is invariant under affine transformations of the series; as such, its topological properties can effectively describe the characteristics of a time series.

Figure 2. Construction of complex networks using visibility graph derived from EEG signals.
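The construction above can be sketched with the natural visibility criterion: two samples are connected if every intermediate sample lies strictly below the straight line joining them. This is a naive O(N²) illustration; the authors' implementation is not specified.

```python
import numpy as np

def visibility_graph(x):
    """Natural visibility graph of a 1-D series.

    Node a sees node b iff every intermediate sample c satisfies
    x[c] < x[b] + (x[a] - x[b]) * (b - c) / (b - a).
    Returns a symmetric 0/1 adjacency matrix with no self-loops.
    """
    n = len(x)
    A = np.zeros((n, n), dtype=int)
    for a in range(n):
        for b in range(a + 1, n):
            visible = all(
                x[c] < x[b] + (x[a] - x[b]) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                A[a, b] = A[b, a] = 1
    return A

# Toy five-point series, standing in for one EEG channel.
x = np.array([3.0, 1.0, 2.0, 4.0, 1.5])
A = visibility_graph(x)
print(int(A.sum()) // 2)  # 7 edges
```

The tall sample (4.0) blocks the view between the points on either side of it, so the last point connects only to its immediate neighbor, illustrating how amplitude structure shapes the network topology.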


Fusion of Network Entropy Measures
After constructing the complex network corresponding to the EEG signals, for its comprehensive characterization, we described the spatial characteristics of the brain network in terms of its global and local properties, using the CCE and NDE of the EEG signals for different emotions calculated on each channel. CCE, as a local feature of the network, describes the attribute of local node connectivity in the network, and represents the distribution of the proportion of interconnected neighbor nodes in the whole network [26][27][28]. CCE also represents the clustering state of nodes in the network: in a network with a large number of nodes, the nodes form high-density connections, which lead to local clustering. In the complex network corresponding to the EEG signals, CCE is defined as

CC_i = 2 Σ_{p,q} e_pq / (m_i (m_i − 1)), CCE_i = −CC_i log(CC_i),

where e_pq is the one-hop connection between neighbor nodes p and q, and m_i is the total number of neighbor nodes connecting to node i.
As a global characteristic of the network, NDE describes the neighborhood degree of the global nodes in the network, representing an effective measure of network node integration. NDE is an effective method to evaluate node heterogeneity from the perspective of neighborhood degree. In complex networks, NDE is defined as

NDE_i = −p_i log(p_i),

where p_i is the probability description of the nodal degree of node i, defined as

p_i = d_i / Σ_j d_j,

where d_i is the number of neighbor nodes connected to node i. The pseudocode for emotion recognition based on complex networks is presented in Algorithm 1. The sampling rate was 200 Hz. Each data point was used as a network node. The time series was constructed into complex networks using the visibility graph method. The CCE and NDE features were extracted from the complex networks, and both features were combined into the feature vector.

Algorithm 1 (excerpt, lines 14-19):
14: CC(j,t) ← (1/t) · (2 · KB(j,t)) / (S_Y(j,1) · (S_Y(j,1) − 1))
15: CCE(j,t) ← −CC(j,t) · log(CC(j,t))
16: for all t, 1 ≤ t ≤ m_i(j) do
17:   ND(j,t) ← HC(j,t) / L(j,1)
18:   NDE(j,t) ← −ND(j,t) · log(ND(j,t))
19: end
Output: vector <CCE, NDE>
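The two entropy measures above can be sketched as follows, taking an adjacency matrix as input. The aggregation of per-node terms into one scalar per channel by summation is our assumption; the paper's exact aggregation is not stated.

```python
import numpy as np

def network_entropies(A):
    """Clustering-coefficient entropy (CCE) and nodal-degree entropy (NDE)
    of a graph given by adjacency matrix A, following the definitions in
    the text (a sketch, not the authors' exact code)."""
    n = A.shape[0]
    deg = A.sum(axis=1)

    # NDE: entropy of the normalized degree distribution p_i = d_i / sum(d).
    p = deg / deg.sum()
    nde = -np.sum(p[p > 0] * np.log(p[p > 0]))

    # CCE: entropy over per-node clustering coefficients
    # C_i = 2 * (edges among neighbours) / (m_i * (m_i - 1)).
    cc = np.zeros(n)
    for i in range(n):
        nbrs = np.flatnonzero(A[i])
        m = len(nbrs)
        if m > 1:
            links = A[np.ix_(nbrs, nbrs)].sum() / 2  # edges among neighbours
            cc[i] = 2 * links / (m * (m - 1))
    cce = -np.sum(cc[cc > 0] * np.log(cc[cc > 0]))
    return cce, nde

# Sanity check on a triangle: uniform degrees give NDE = log(3),
# and C_i = 1 everywhere gives CCE = 0.
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
cce, nde = network_entropies(A)
print(round(nde, 4))  # 1.0986
```

In the full pipeline this pair would be computed per channel, giving the 2 × 62 feature vector used later for classification.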

Feature Dimension Reduction
The redundant information in the feature dimensions should be reduced to optimize the feature vector for machine learning. Classification accuracy and machine learning efficiency can both be improved by eliminating invalid feature dimensions and retaining valid ones. In our work, feature dimensionality reduction was carried out using the AUC method: we first screened the 62 channels in the SEED dataset and then selected the most effective channels among them. Several feature dimension reduction methods have been introduced, such as the paired t-test, mutual information, PCA, and AUC, and their performance varies. We used AUC, defined as the area under the ROC curve bounded by the coordinate axes. This area is necessarily less than 1, and since the ROC curve generally lies above the line y = x, the AUC ranges between 0.5 and 1. An AUC value closer to 1 denotes higher authenticity of the test method and higher application value. The AUC method judges the difference between samples by calculating the area under the ROC curve for multiple samples. After channel screening using the AUC method, the network structural entropy of positive and negative emotions derived from the EEG signals was calculated for the selected channels, achieving dimensionality reduction.
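A minimal sketch of this AUC-based channel screening, using the Mann-Whitney formulation of the area under the ROC curve. The toy data and the direction-agnostic max(AUC, 1 − AUC) step are our assumptions.

```python
import numpy as np

def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of (positive, negative) pairs ranked correctly."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def select_channels(features, labels, threshold=0.8):
    """Keep channels whose per-channel AUC for separating two emotion
    classes exceeds the threshold (0.8 in the paper).
    `features` is (samples, channels)."""
    aucs = np.array([auc_score(labels, features[:, ch])
                     for ch in range(features.shape[1])])
    aucs = np.maximum(aucs, 1 - aucs)  # direction-agnostic discriminability
    return np.flatnonzero(aucs > threshold), aucs

# Toy example: channel 0 is perfectly separable, channel 1 is uninformative.
y = np.repeat([0, 1], 50)
x_good = np.concatenate([np.linspace(0, 1, 50), np.linspace(2, 3, 50)])
x_noise = np.tile([0.0, 1.0], 50)
X = np.column_stack([x_good, x_noise])
selected, aucs = select_channels(X, y)
print(selected)  # [0]
```

Only the discriminative channel clears the 0.8 threshold, mirroring how the paper reduces 62 channels to 18.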

Support Vector Machine Classifier
As a machine learning algorithm, the support vector machine is a classification model whose purpose is to find a hyperplane that separates the samples; the principle of separation is to maximize the margin. The network structural features were computed for each EEG channel, yielding 124-dimensional (2 × 62) features from the CCE and NDE of the 62 channels. Then, we used the leave-one-subject-out validation strategy to specify the test and training sets: the SEED dataset contains 15 subjects, and the data of one subject were used as the testing set while the data of the remaining subjects were used as the training set; this was repeated for each subject. The SVM was used as a classifier of positive and negative emotions, or of positive, negative, and neutral emotions. SVM is a kernel-based classifier that can achieve both linear and nonlinear classification via various kernel functions, which differ in their performance. We compared several of the most commonly used SVM kernels, with the results revealing the radial basis function (RBF) as the best performer.
The LIBSVM software package was used for the SVM classifier, along with the RBF kernel. The SVM parameters included s, t, and C, with the remaining parameters left at their default values. Parameter t was set to 2 (RBF kernel) and parameter s was set to 0 (C-SVC). Parameter C was set to the optimum value determined through a one-step search of the parameter space 10^(−3:2), i.e., C ∈ {10^−3, ..., 10^2}. The framework of complex network entropy measures for emotion recognition using the SVM classifier is described in Figure 3.
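The leave-one-subject-out evaluation with an RBF-kernel SVM can be sketched as follows. We use scikit-learn's SVC (which wraps LIBSVM; kernel="rbf" corresponds to -t 2 and C-SVC to -s 0) rather than the LIBSVM command-line tools; data shapes and the toy example are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def loso_accuracy(features, labels, subjects, C=1.0):
    """Leave-one-subject-out evaluation: hold out each subject in turn,
    train an RBF-kernel SVM on the rest, and average the test accuracies.
    `features` is (samples, dims); `subjects` gives one id per sample."""
    accs = []
    for s in np.unique(subjects):
        test = subjects == s
        clf = SVC(kernel="rbf", C=C)  # C would be grid-searched, e.g. 10**np.arange(-3, 3)
        clf.fit(features[~test], labels[~test])
        accs.append(clf.score(features[test], labels[test]))
    return float(np.mean(accs))

# Toy example: two well-separated emotion "classes" across 3 subjects.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)),
               rng.normal(3.0, 0.3, (30, 2))])
y = np.repeat([0, 1], 30)
subjects = np.tile([0, 1, 2], 20)
acc = loso_accuracy(X, y, subjects)
print(acc)  # high accuracy on separable clusters
```

With 15 subjects, this loop runs 15 folds, each testing on a subject the classifier has never seen, which is what makes the reported accuracies cross-subject.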


Results
Firstly, we analyzed the NDE and CCE features of the complex network, which were extracted to classify positive and negative emotions from EEG signals. The time duration of EEG data used in the study was 30 s, selected from the middle of the 62-channel EEG signals (i.e., from 60 s to 90 s). The SEED dataset included 15 subjects and 62-channel EEG signals. We calculated the values of network structural entropy from each channel, thereby obtaining two types of 62-dimensional features, namely, CCE and NDE. Figure 4 shows the results for the CCE of EEG signals in classifying positive and negative emotions, positive and neutral emotions, and negative and neutral emotions.
The CCE of most channels presented significant differences. As shown in Figure 4, 68% of the EEG channels exhibited a significant difference between positive and negative emotions (p < 0.005), 74% exhibited a significant difference between positive and neutral emotions (p < 0.005), and 65% exhibited a significant difference between neutral and negative emotions (p < 0.005). The NDE of most channels also presented significant differences. As shown in Figure 5, 64% of EEG channels exhibited a significant difference between positive and negative emotions (p < 0.005), 69% exhibited a significant difference between positive and neutral emotions (p < 0.005), and 58% exhibited a significant difference between neutral and negative emotions (p < 0.005). Therefore, according to the results of these two network structural entropies, positive, negative, and neutral emotions could be distinguished.
Depending on the emotion, these two network structural entropies presented different discriminative abilities for each EEG channel. We analyzed the contrast in the area under the ROC curve for any two kinds of emotions (i.e., pairwise combinations of positive, negative, and neutral emotions). Figure 6 shows the AUC values of structural entropy for the 62-channel EEG network with respect to these combinations. The horizontal axis represents the 62 EEG channels, while the vertical axis represents the AUC values. In our work, we defined the AUC threshold as 0.8, whereby values above this threshold were considered to represent effective channels. Figure 6 also presents the electrode positions for the SEED dataset, wherein the channels selected using network structural entropy and the ROC method are highlighted in red, namely, C1, C5, FCZ, FC1, FC2, FC3, FC4, FC6, FZ, F2, F3, F4, F6, AF3, AF4, FPZ, FP1, and FP2.
As shown in Figure 6, most of the selected channels came from the anterior hemisphere. In order to evaluate the effectiveness of our proposed emotion recognition method, we compared the performances of the 62-channel and the newly created 18-channel SEED datasets using CCE and NDE as the feature vector. Figure 7 shows the classification results for positive and negative emotions, while Figure 8 shows the classification results for positive, neutral, and negative emotions. The horizontal axis represents the 15 subjects, while the vertical axis represents the classification results. As shown in Figure 7, the 18 selected channels performed better in classifying positive and negative emotions, with a significant improvement in accuracy (p < 0.005) compared to the 62-channel dataset. For the individual subjects, the accuracy increases in classifying positive and negative emotions were 0.08, 0.09, 0.10, 0.08, 0.08, 0.09, 0.14, 0.08, 0.08, 0.09, 0.04, 0.06, 0.03, 0.14, and 0.07 (p < 0.005).


Discussion
This study proposed a method for fusing network entropy measures, used to achieve effective emotion recognition results based on EEG signals across subjects. The main innovations of the fused network entropy measures method are as follows: (1) mapping the time series of EEG signals to complex networks using the visibility graph method; (2) exploiting the CCE and NDE features from the complex network, describing the spatial properties of EEG signals in the form of local and global information, respectively; (3) using the cross-subject emotion training method based on the SEED dataset to overcome individual differences, thus improving the universality and generalizability of emotion recognition.
To illustrate the performance of our method for emotion recognition, we compared our results with those of other studies based on the same dataset, as well as a different dataset. We compared our work with the studies of Li et al. [7], Yucel et al. [29], Hao et al. [30], and Lu et al. [31], which also used the SEED dataset. In Li's study, the positive and negative emotions in the EEG data collected from the SEED dataset were categorized. Eighteen linear and nonlinear EEG features (singular entropy, spectral entropy, permutation entropy, etc.) were extracted from the EEG signals. Then, these features were combined with the SVM classifier to categorize positive and negative emotions, with the average value of each feature used as the input of the support vector machine. In Li's research, the framework of pattern learning used for emotion recognition was based on average entropy. In our work, the EEG dataset, classifier, and validation strategy were the same as in Li's study, thus ensuring the fairness of the performance comparison. In Yucel's study, a convolutional neural network (CNN) architecture was exploited, whereby raw EEG data were used after applying windowing, preadjustments, and normalization. In Hao's study, the raw feature vector sequence was extracted from multichannel EEG signals using a sliding window, and the K-nearest neighbor algorithm was employed to estimate the emotion state. Lu et al. proposed dynamic entropy-based pattern learning with SVM to identify emotions from EEG signals, and then categorized the positive and negative emotions in the EEG data collected from the SEED dataset. The comparison results are summarized in Table 1. Although their mean accuracy of emotion recognition was better than that in this study, the memory consumption was excessive and the model was bloated. As shown in Table 2, differential entropy combined with SVM was used by Zheng et al.
to achieve three categories (negative, positive, and neutral) of emotion recognition across subjects using the SEED dataset [6]. An average accuracy of 60.93% was obtained in Zheng's study. An average accuracy of 64.15% was achieved in Lu's research [31], which used a dynamic entropy-based pattern learning model to identify emotions from EEG signals. However, we achieved a higher average accuracy of 68.44% using network entropy measures of complex networks.
At present, using entropy measures as the input feature is one of the most effective methods for emotion recognition based on EEG signals. The human brain is a nonlinear dynamic system, and EEG signals have nonlinear and complex characteristics. Entropy measures are widely used to quantify the complexity of dynamic systems. A large amount of evidence has shown that clinically meaningful information can be effectively extracted from EEG signals using entropy measures. Using entropy measures and a machine learning classifier to recognize emotions based on EEG signals is an excellent method, as reported in various application cases. However, the entropy methods used in the current studies on emotion recognition of EEG signals lack spatial characteristics. By mapping EEG signals into complex networks and extracting the network structural entropy, we could not only effectively express the complexity of EEG signals, but also retain their spatial properties. The EEG signals are described in the two dimensions of time and space in order to obtain more features. Network structural entropy was used to create probability measures based on network parameters such as nodes and links. CCE and NDE represent the information of a complex network from local and global perspectives, respectively.
In order to determine the influence of different dimension reduction methods on our proposed emotion recognition method based on network entropy measures, we compared PCA, one of the currently popular methods, with AUC, which was adopted in our work. For the PCA method, the original data matrix was constructed from the data of the 62 channels of the SEED dataset. Firstly, the covariances among the data were calculated and the eigenvalues of the covariance matrix were sorted from largest to smallest. The components with an eigenvalue of 0 were excluded, and the components with the lowest 50% of eigenvalues were removed. The dimension reduction was completed by projecting the original data matrix onto the eigenmatrix corresponding to the selected eigenvalues. The classification results of the comparison are shown in Figure 9.

In our study, we proposed an effective emotion recognition method based on EEG signals, which achieved effective results. By measuring the entropy characteristics of the complex network structure, the spatial characteristics of the EEG signals were incorporated. The EEG emotional patterns based on entropy measurements of complex networks were compared in detail with other existing studies, where it was shown that our method can effectively and accurately identify emotions.
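The PCA baseline described above can be sketched as follows. The zero-eigenvalue tolerance and the interpretation of "lowest 50%" as keeping the top half of the nonzero eigenvalues are our reading of the text.

```python
import numpy as np

def pca_reduce(X, keep_frac=0.5):
    """PCA as described in the comparison: eigendecompose the covariance
    of the (samples, channels) matrix, drop zero eigenvalues, keep the
    top 50%, and project (a sketch, not the authors' exact code)."""
    Xc = X - X.mean(axis=0)                  # center each channel
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)         # eigh returns ascending order
    order = np.argsort(vals)[::-1]           # sort descending
    vals, vecs = vals[order], vecs[:, order]
    nonzero = vals > 1e-12                   # exclude zero eigenvalues
    vals, vecs = vals[nonzero], vecs[:, nonzero]
    k = max(1, int(len(vals) * keep_frac))   # retain the top 50%
    return Xc @ vecs[:, :k]

# Toy example: 100 samples of an 8-dimensional feature matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))
reduced = pca_reduce(X)
print(reduced.shape)  # (100, 4)
```

Unlike the AUC screening, this projection mixes all channels into abstract components, which is one reason the channel-wise AUC selection is easier to interpret physiologically.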
Although excellent emotion classification accuracy was obtained in our work, there remains room for improvement. First, we will investigate the interpretability of the complex networks associated with brain networks. Moreover, we will explore other advanced machine learning methods, such as deep neural networks, to investigate emotion recognition across individuals.


Conclusions
In our work, we exploited network entropy measures with the SVM classifier to implement an effective model for emotion recognition based on EEG signals. We mapped the EEG signals to complex networks and extracted the network entropy measures to represent the temporal and spatial characteristics of the EEG information. We conducted a classification study on three emotions: positive, neutral, and negative. In order to improve the universality of our approach, we used a cross-subject emotion training method based on complex networks to overcome individual differences and build a more robust emotion recognition method. After reducing the dimensionality of the features using the AUC method, the effective features were fused into a feature vector and input into a support vector machine to recognize emotions. In addition, we carried out a detailed comparison between our work and existing studies. The results demonstrate that emotion recognition utilizing network entropy measures of complex networks can achieve better results. Owing to its generalizability, our proposed emotion recognition method has great potential applicability in brain-computer interfaces.