Article

Improved EEG-Based Emotion Classification via Stockwell Entropy and CSP Integration

National Engineering Research Center for E-Learning, Central China Normal University, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Entropy 2025, 27(5), 457; https://doi.org/10.3390/e27050457
Submission received: 31 March 2025 / Revised: 20 April 2025 / Accepted: 22 April 2025 / Published: 24 April 2025

Abstract

Traditional entropy-based learning methods primarily extract the relevant entropy measures directly from EEG signals using sliding time windows. This study applies differential entropy to a time-frequency domain that is decomposed by Stockwell transform, proposing a novel EEG emotion recognition method combining Stockwell entropy and a common spatial pattern (CSP). The results demonstrate that Stockwell entropy effectively captures the entropy features of high-frequency signals, and CSP-transformed Stockwell entropy features show superior discriminative capability for different emotional states. The experimental results indicate that the proposed method achieves excellent classification performance in the Gamma band (30–46 Hz) for emotion recognition. The combined approach yields high classification accuracy for binary tasks (“positive vs. neutral”, “negative vs. neutral”, and “positive vs. negative”) and maintains satisfactory performance in the three-class task (“positive vs. negative vs. neutral”).

1. Introduction

Emotion recognition refers to the detection and analysis of human emotional states through technological means [1]. The electroencephalogram (EEG) technique measures brain activity by recording the electrical activity of cortical neurons [2]. Due to its non-invasive nature and real-time capabilities, EEG technology plays a crucial role in brain function research, particularly in studies that require the prolonged continuous monitoring of brain activity [3]. EEG signals are commonly represented in the form of waveforms, where distinct waveforms, such as alpha, beta, theta, and delta waves, correspond to different states of brain activity [4]. By analyzing key features of EEG signals, including frequency, power spectral density, and phase, researchers are able to investigate the functional state of the brain and discern various emotional states, such as happiness, sadness, and anxiety [5]. As a result, the potential of EEGs as a tool for emotion recognition is considered highly promising [6].
However, due to the asymmetric and non-stationary nature of electroencephalogram (EEG) signals [7], emotion recognition based on EEGs remains a complex scientific challenge [8]. Recently, learning approaches based on entropy measures have been demonstrated to be one of the most effective technical pathways for emotion-related EEG recognition [9]. Entropy, as a measure of information uncertainty, possesses a strong capability to extract clinically meaningful regularity information from EEG signals [10]. Entropy measures can be utilized to quantify the irregularity, randomness, and complexity of physiological signals [9]. Moreover, there is substantial and compelling evidence indicating that entropy-based metrics are highly effective for analyzing EEG signals [9,11]. For instance, Beatriz García-Martínez and Arturo Martínez-Rodrigo et al. were the first to introduce the application of three entropy-based metrics, namely, sample entropy, quadratic sample entropy, and distribution entropy, to differentiate between calm and negative-stress emotional states [12]. Ruo-Nan Duan and Jia-Yi Zhu et al. proposed differential entropy (DE) features and compared them with traditional energy spectrum (ES) features, demonstrating that DE features and their associated combinations offer superior performance for emotion recognition [11]. Yun Lu and Mingjiang Wang et al. introduced a dynamic entropy-based pattern-learning framework for recognizing inter-individual emotions from EEG signals, aiming to address the poor generalization capability of existing emotion recognition methods, which is caused by individual variability [9]. Wei-Long Zheng and Bao-Liang Lu utilized differential entropy features extracted from multi-channel EEG data to train a deep belief network for identifying positive, neutral, and negative emotions. They also investigated the weights of the trained deep belief network to explore key frequency bands and channels [13]. Their experimental results indicated that neural features related to different emotions do, indeed, exist and exhibit commonalities across different experiments and individuals [13]. Research by Wu et al. demonstrates that Stockwell entropy, combined with the Hilbert transform, detects events of interest (EoIs) more effectively than the standard Hilbert transform alone, achieving accurate identification of both EoIs and non-EoIs [14].
In this study, we propose combining Stockwell differential entropy with the CSP algorithm for the classification and recognition of emotional EEG signals. Specifically, differential entropy is first applied to the time-frequency domain decomposed by the Stockwell transform, followed by the use of the CSP algorithm to extract feature vectors corresponding to different emotional states. Additionally, this study further investigates how signal frequency and amplitude influence Stockwell entropy and its classification performance.

2. Related Work

The successful implementation of any EEG-based emotion recognition system relies on the accurate identification of features representing emotional states, which necessitates efficient feature extraction algorithms [15]. The extracted features must exhibit high discriminability to achieve superior recognition rates. The Stockwell transform is a time-frequency analysis tool that combines the advantages of short-time Fourier transform (STFT) and wavelet transform analyses, providing a detailed view of frequency variations over time [16,17]. CSP is another feature extraction technique that is widely used in biomedical applications. CSP employs a linear transformation method to project multi-channel EEG data into a low-dimensional space, thereby enhancing its discriminative capability for classifying EEG data across different categories [18,19]. K. Venu and P. Natesan proposed an approach called the HC+SMA-SSA scheme, which extracts features using improved Stockwell transform (ST) and CSP for motor imagery task classification, demonstrating superior performance in key metrics [20]. S. Sethi and R. Upadhyay et al. proposed a feature extraction method that integrates the Stockwell transform technique with CSP for designing motor imagery-based brain-computer interfaces, significantly improving the discriminability of motor imagery tasks [21]. Mausovi et al. combined wavelet transform (WT) with the CSP algorithm, achieving high classification accuracy in asynchronous offline brain–computer interface applications [22]. M.I. Chacon-Murguia and E. Rivas-Posada evaluated five feature extraction methods, including Stockwell, CSP + CWT, and CSP + ST, for classifying two types of motor imagery signals. Their proposed methods demonstrated superior performance compared to conventional approaches [23].
The main contributions of this paper are organized as follows: Section 3 introduces the EEG dataset used in this study, along with the Stockwell transform, Stockwell entropy, the CSP algorithm, and the experimental procedure. Section 4 provides a detailed comparison of the emotion recognition performance of the combined Stockwell entropy and CSP approach across different frequency bands and emotional states, presenting both data analysis and experimental results. Section 5 offers an in-depth analysis of the experimental outcomes and the underlying reasons for this observed classification performance. The final section summarizes the key findings and conclusions of this study.

3. Dataset and Methods

3.1. Dataset

The SEED dataset is a publicly available dataset provided by Shanghai Jiao Tong University, which was primarily designed for research in affective computing [13]. This dataset comprises electroencephalogram (EEG) signals collected from 15 participants (7 males and 8 females), recorded using a 62-channel EEG system, arranged according to the international 10–20 standard, and covering major regions of the brain. During the experiments, the participants watched 15 carefully selected film clips (encompassing positive, neutral, and negative emotions), each lasting approximately 4 min, to induce coherent emotional responses. Consequently, each participant’s data file contains 15 segments of preprocessed EEG data (channels × time-series data). The experiments were conducted three times for each participant, with intervals of approximately 1 week or longer between sessions [13].

3.2. Preprocessing

This study utilizes the “Preprocessed_EEG” data files provided by the SEED dataset, as these EEG data files have already been downsampled and preprocessed: the data were downsampled to 200 Hz and filtered with a 0–75 Hz bandpass filter [13]. On this basis, this study further applied a 0.1–46 Hz bandpass filter.

3.3. Methods

3.3.1. Stockwell Transform

Stockwell transform, also known as the S-transform, combines the advantages of the Fourier transform and the wavelet transform, providing a means for achieving the multi-resolution analysis of signals. It is particularly well-suited for the analysis of non-stationary signals [24,25].
Stockwell transform can be mathematically defined as follows:
S(\tau, f) = \int_{-\infty}^{+\infty} x(t)\, w(\tau - t)\, e^{-j 2\pi f t}\, dt
In the given expression, x(t) denotes the original signal, and the window function w(t) is typically a Gaussian window. Here, τ represents the time position, and f represents the frequency. This integral expression provides the complex value of the signal at a given time and frequency, while the magnitude represents the energy level at that specific time and frequency [26,27].
The formula demonstrates that the Stockwell transform technique exhibits multi-resolution analysis capabilities [27]. Through the time-frequency decomposition of non-stationary signals, the technique generates a time-frequency representation of the signal [28]. The spectrum characteristics of the signal vary over time, thereby enabling the visualization of frequency component variations at different time points [29]. Unlike the short-time Fourier transform (STFT), the S-transform employs a frequency-dependent Gaussian window function, comprising a narrower window for high frequencies and a wider window for low frequencies [27,30]. Consequently, at high frequencies, the S-transform achieves higher time resolution through a narrower time window, while at low frequencies, it maintains better frequency resolution using a wider time window [31]. This adaptive window function enables the S-transform to more accurately capture transient features in non-stationary signals, making it a valuable tool for analyzing such signals in numerous fields, including EEG and ECG [32,33,34].
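As a concrete illustration, the frequency-domain formulation of the discrete S-transform (each frequency "voice" is the inverse FFT of the shifted signal spectrum multiplied by a Gaussian window whose width depends on that frequency) can be sketched in a few lines of NumPy. This is a from-scratch sketch for illustration only, not the MNE implementation used later in this study:

```python
import numpy as np

def stockwell_transform(x):
    """Discrete S-transform of a real signal (positive frequencies only).

    Returns an (N//2 + 1, N) complex array: rows are frequency voices,
    columns are time positions.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    X = np.fft.fft(x)
    m = np.fft.fftfreq(N) * N              # frequency indices -N/2 .. N/2 - 1
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0] = np.mean(x)                      # zero-frequency voice: the signal mean
    for n in range(1, N // 2 + 1):
        # frequency-dependent Gaussian voice window: wider in frequency
        # (narrower in time) as the voice frequency n grows
        gauss = np.exp(-2.0 * np.pi**2 * m**2 / n**2)
        S[n] = np.fft.ifft(np.roll(X, -n) * gauss)
    return S
```

For a pure 10 Hz tone sampled at 200 Hz over 1 s, the voice at frequency index 10 carries almost all of the energy, while the narrowing window keeps high-frequency voices well localized in time.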

3.3.2. Stockwell Transform Entropy

Shannon entropy serves as a measure of information uncertainty. Time-frequency signals processed using the Stockwell transform technique yield a new feature vector by computing the differential entropy in the time domain for each frequency band through a sliding time window approach. This feature vector is termed “Stockwell entropy” [14].
By calculating Stockwell entropy, we can quantify the complexity and randomness of EEG signals across different frequency bands over certain time periods. The computational process of Stockwell entropy using a sliding window approach is depicted in Figure 1.
Under the assumption that the time series X follows a Gaussian distribution N(\mu, \sigma^2), the formula for differential entropy shows that, for a fixed-length EEG sequence, the differential entropy in a specific frequency band equals the logarithm of the energy spectrum [35]:
f(X) = -\int_{-\infty}^{+\infty} p(x) \log(p(x))\, dx = \frac{1}{2} \log(2\pi e \sigma^2)
where p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}.
Suppose that, in a certain epoch, the Stockwell transform of one channel's EEG signal yields a time series of length N in the time-frequency domain, denoted S = [s_1, s_2, \ldots, s_N], and consider a sliding window of width k, where k \le N. The sliding window then processes this time series as follows.
For each position i, where 1 \le i \le N - k + 1, the sub-sequence covered by the sliding window is S_i = [s_i, s_{i+1}, \ldots, s_{i+k-1}].
Step 1. Calculate the average value of all elements within the window Si.
Step 2. Calculate the standard deviation of the sub-sequence covered by the sliding window Si.
Step 3. Apply the differential entropy formula above to each sub-sequence covered by the sliding window, to obtain a new sequence T = [t_1, t_2, \ldots, t_{N-k+1}], whose elements are given by:
t_i = f([s_i, s_{i+1}, \ldots, s_{i+k-1}]) = \frac{1}{2} \log(2\pi e \sigma_i^2)
The obtained sequence T = [t_1, t_2, \ldots, t_{N-k+1}] is the Stockwell entropy of that frequency band.
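The three steps above translate directly into NumPy. The function below is a minimal illustrative sketch of the windowed Gaussian differential entropy, not the exact implementation used in the study:

```python
import numpy as np

def sliding_differential_entropy(s, k):
    """Steps 1-3 above: Gaussian differential entropy of every
    length-k window of a time-frequency series s."""
    s = np.asarray(s, dtype=float)
    n = len(s)
    out = np.empty(n - k + 1)
    for i in range(n - k + 1):
        sigma2 = np.var(s[i:i + k])                         # Steps 1-2: window variance
        out[i] = 0.5 * np.log(2 * np.pi * np.e * sigma2)    # Step 3: entropy
    return out
```

A useful sanity check is the scaling property of differential entropy, f(aX) = f(X) + log a: multiplying the series by a constant shifts every windowed entropy value by the logarithm of that constant, so the feature responds logarithmically to amplitude changes.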

3.3.3. Common Spatial Pattern

The CSP is a feature extraction algorithm that highlights spatial distribution patterns in the EEG signals associated with specific tasks [36,37]. The core principle of the CSP algorithm involves finding a set of spatial filters that maximize the variance between two types of trials [38]. When these spatial filters are applied to the original EEG signals, the signals are projected into a new feature space, generating features that are optimized for classification. In this transformed space, the samples from different categories become more distinguishable [39,40]. Additionally, CSP reduces the dimensionality of the original EEG data by extracting discriminative features while preserving classification-relevant information. Finally, the extracted feature vectors are fed into machine learning classifiers, such as support vector machines (SVM) or random forests, for training and testing to classify the different task states. In this study, Stockwell entropy is processed using the CSP algorithm to generate eight distinct feature vectors for subsequent analysis [40].
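In its two-class form, CSP reduces to a generalized eigenvalue problem on the class-average covariance matrices: eigenvectors with the largest eigenvalues maximize variance for one class, and those with the smallest maximize it for the other. The sketch below is an illustrative minimal implementation, not the exact code used in this study:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_components=8):
    """Minimal two-class CSP.

    trials_a, trials_b: sequences of (channels x samples) trials.
    Returns an (n_components x channels) filter matrix W; a trial is
    projected into the CSP space as W @ trial.
    """
    def avg_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))   # trace-normalize each trial
        return np.mean(covs, axis=0)

    Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
    # Generalized eigenproblem Ca w = lambda (Ca + Cb) w: eigenvalues
    # near 1 favor class A, eigenvalues near 0 favor class B.
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    half = n_components // 2
    # keep filters from both ends of the eigenvalue spectrum
    picks = np.concatenate([order[:half], order[-half:]])
    return vecs[:, picks].T
```

After projection, the log-variance of each filtered channel is the usual classifier input; the eight components mentioned above correspond to taking four filters from each end of the spectrum.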

3.3.4. Feature Extraction Process Combining Stockwell Entropy and CSP

To obtain certain features of EEG activity based on Stockwell entropy and CSP, in this study, the EEG time periods for one experiment for each subject (including 15 EEG segments) were first decomposed into several epochs. In this study, the length of each epoch was set to 3 s, and each epoch served as a sample, corresponding to an emotional state code (1—positive, 0—neutral, and −1—negative). Subsequently, time-frequency decomposition based on Stockwell transform was performed on each channel of every epoch [41]. For this paper, the ‘epochs.compute_tfr’ function from the MNE library was used with the following parameters: ‘method = “stockwell”’, ‘freqs = (0.1, 46.0)’, and ‘width = 1’ [42]. Then, the coefficients of the Stockwell transform were grouped, and the time-frequency domain was divided into six different frequency sub-bands, namely, the full frequency band (0.1, 46) Hz, Delta (0.1, 4) Hz, Theta (4, 8) Hz, Alpha (8, 12) Hz, Beta (12, 30) Hz, and Gamma (30, 46) Hz frequency bands. Next, the differential entropy value was calculated for each “frequency sub-band-time” unit on each channel of every epoch. The result of this step was to obtain a new “channel × frequency-sub-band × Stockwell entropy” matrix. Subsequently, the CSP algorithm was executed separately on each frequency sub-band to obtain the corresponding emotional feature vectors of each frequency sub-band on each channel [41]. At the same time, the effect of dimensionality reduction was also achieved. Finally, the generated feature vectors were input into classifiers such as SVM or random forest for the classification of emotional states. Figure 2 shows the flowchart of the Stockwell entropy-CSP feature extraction method.

3.4. Classification

In the first step of this study, stratified k-fold cross-validation was adopted to evaluate the effectiveness of the “Stockwell entropy–CSP combination model”; at the same time, this avoided data leakage in the second-step classification comparison. This study used the common stratified 5-fold cross-validation technique, dividing the training dataset into 5 subsets; in each iteration, a different subset served as the test set and the remaining subsets as the training set. First, the “positive, neutral, and negative” trials of each subject were paired to form three groups of experiments, namely, “Positive vs. Neutral”, “Positive vs. Negative”, and “Neutral vs. Negative”, to verify the model’s performance. Meanwhile, the influence of different frequency bands on the classification of emotional states was examined. Then, the frequency band with the best evaluation performance was tested under different window widths and emotional classifications. The final result of the experiment was the average recognition rate of the 15 subjects in the three groups of experiments.
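Stratified splitting means every fold preserves the per-class proportions of the emotion labels. A minimal NumPy stand-in for this procedure (standard implementations such as scikit-learn's StratifiedKFold serve the same purpose; the seed and splitting details here are illustrative) looks like:

```python
import numpy as np

def stratified_kfold_indices(y, n_splits=5, seed=0):
    """Yield (train_idx, test_idx) pairs in which every fold preserves
    the per-class proportions of the labels y."""
    y = np.asarray(y)
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(n_splits)]
    # shuffle each class separately, then deal its indices across folds
    for cls in np.unique(y):
        idx = rng.permutation(np.flatnonzero(y == cls))
        for i, chunk in enumerate(np.array_split(idx, n_splits)):
            folds[i].extend(chunk.tolist())
    for i in range(n_splits):
        test = np.sort(np.array(folds[i]))
        train = np.sort(np.concatenate(
            [folds[j] for j in range(n_splits) if j != i]))
        yield train, test
```

Each sample appears in exactly one test fold, so the five test-set evaluations together cover the whole dataset without leakage between train and test splits.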
In the second step, two classifiers—SVM and random forest (RF)—were employed to compare the model’s classification performance on the test set data. SVM performs exceptionally well in high-dimensional spaces and can effectively address nonlinear classification problems using kernel functions. In contrast, random forest, as an ensemble learning method, demonstrates strong robustness against noise and overfitting [42]. By comparing the recognition rates of these two classifiers, the stability and reliability of the classification results were comprehensively validated. Four experimental setups were designed: Positive vs. Neutral vs. Negative, Positive vs. Neutral, Positive vs. Negative, and Neutral vs. Negative. In each setup, the data were randomly divided into training and test sets, with 80% being allocated to the training set and the remaining 20% to the test set. The final results were reported as the average recognition rates across the four experimental groups for all 15 subjects.

4. Results

4.1. Impact of Different Frequency Bands on Emotional State Binary Classification

As described in Section 3.4, this experiment employed stratified five-fold cross-validation. Three experimental setups—Positive vs. Neutral, Positive vs. Negative, and Neutral vs. Negative—were examined to assess the impact of different frequency bands on emotional state classification. The impact of different frequency bands is evaluated for each experimental setup. The classifier employed here is an RBF-kernel SVM, with its parameters set to kernel = ‘rbf’, C = 20, and Gamma = ‘scale’.
Table 1 summarizes the recognition accuracy and standard deviation obtained using stratified five-fold cross-validation for three binary classification tasks: Positive vs. Neutral, Negative vs. Neutral, and Positive vs. Negative. The results are reported across different window widths (W_5, W_10, W_20) and frequency bands (full frequency band, Delta, Theta, Alpha, Beta, and Gamma). Abbreviations are defined as follows: Pos = positive, Neu = neutral, and Neg = negative.
Figure 3 illustrates the recognition accuracy of pairwise classifications among three emotion categories (Positive vs. Neutral, Negative vs. Neutral, Positive vs. Negative) across different frequency bands: the full frequency band, Delta, Theta, Alpha, Beta, and Gamma. The x-axis denotes the frequency bands, while the y-axis indicates the recognition accuracy. Specifically, Figure 3a corresponds to a sliding window width of 5 during Stockwell entropy calculation, Figure 3b corresponds to a sliding window width of 10, and Figure 3c corresponds to a sliding window width of 20.
Table 1 and the three images in Figure 3 indicate that the Gamma frequency band achieved the highest accuracy across all three emotional combinations, with the full frequency band following closely. The classification accuracies of the Delta and Theta frequency bands were generally low, indicating significant differences in classification performance across the different frequency bands and tasks. Selecting an appropriate frequency band is crucial for enhancing the accuracy of EEG-based emotion classification.

4.2. Influence of Gamma Frequency Band on the Classification of Four Emotional Combinations

Since the Gamma frequency band exhibited the highest accuracy, the classification performance of this band was further evaluated for four emotional combinations: three-class classification (Positive, Negative, and Neutral), Positive vs. Neutral, Negative vs. Neutral, and Positive vs. Negative. This evaluation was conducted under different sliding window widths (W_5, W_10, and W_20). The classifier employed was an RBF-kernel SVM with the parameters set as follows: kernel = ‘rbf’, C = 20, and Gamma = ‘scale’, as shown in Figure 4. The x-axis denotes the window width, while the y-axis indicates the recognition accuracy. W_5, W_10, and W_20 correspond to sliding window widths of 5, 10, and 20, respectively, during Stockwell entropy calculation.
Figure 4 shows that, for the Gamma frequency band, increasing the window width from W_5 to W_20 had minimal impact on classification accuracy, thereby demonstrating the relatively stable performance of the Stockwell entropy-CSP algorithm in EEG-based emotion recognition. The accuracy of the Gamma frequency band in two-class classification tasks (Positive vs. Neutral, Negative vs. Neutral, and Positive vs. Negative) was generally high, with the lowest accuracy reaching 92.8%. Moreover, the accuracies of Positive vs. Neutral and Positive vs. Negative both remained above 96%. When the window width was 20, the classification accuracy of Positive vs. Neutral reached its highest value of 96.8%. Generally, the CSP algorithm is more suitable for two-class classification scenarios. However, as shown in Figure 4, the CSP algorithm was also applied to evaluate the three-class classification case (Positive vs. Negative vs. Neutral). Although the three-class classification accuracy was lower than that for the two-class case, the lowest level of accuracy still reached 88.7%.
As shown above, as the window width increased from 5 to 20, the model’s classification accuracy tended to improve slightly; overall, however, the window width had a limited impact on classification accuracy. Most classification tasks demonstrated consistent performance across all window widths. For two-class tasks (Positive vs. Neutral, Negative vs. Neutral, and Positive vs. Negative), high accuracy was achieved at all window widths. The three-class classification task (Positive vs. Negative vs. Neutral) achieved lower accuracy than the two-class tasks but remained above 88% across all window widths.

4.3. Influence of Different Classification Methods on Emotional State Recognition

In the first two sections, the model’s performance was evaluated using stratified five-fold cross-validation. In this section, the classification performance of two classifiers—RBF-SVM (kernel = ‘rbf’, C = 20, Gamma = ‘scale’) and random forest (n_estimators = 256)—on the test set data was compared. In the experiment, the emotional states of each subject (positive, neutral, and negative) were used to design four experimental setups: Positive vs. Neutral vs. Negative, Positive vs. Neutral, Positive vs. Negative, and Neutral vs. Negative. In each setup, the data were randomly divided into training and test sets, with 80% of the data allocated to the training set and the remaining 20% to the test set. The final results were reported as the average recognition rates across the four experimental groups for all 15 subjects.
The experimental results demonstrate that the varying window widths have minimal impact on classification accuracy. Figure 5 compares the classification accuracy across different frequency bands for binary tasks (Positive vs. Neutral, Negative vs. Neutral, and Positive vs. Negative) and the ternary task (Positive vs. Negative vs. Neutral). The analysis was performed using a sliding window width of 20. The x-axis denotes the frequency bands (full frequency band, Delta, Theta, Alpha, Beta, and Gamma), while the y-axis indicates the classification accuracy. Figure 5 reveals that the overall performance of SVM and RF was similar across all frequency bands. RF outperformed SVM in the Delta, Theta, and Alpha bands, whereas SVM achieved slightly higher accuracy in the Gamma band. Notably, the Gamma band yielded the highest accuracy under both classifiers. Binary classification tasks generally achieved greater accuracy than ternary tasks. Nevertheless, both classifiers exceeded 88% accuracy in the Gamma band for ternary classification tasks.

5. Discussion

5.1. Experimental Conclusions

This study’s innovative nature is demonstrated by applying differential entropy to the time-frequency domain decomposed by the Stockwell transform and by proposing an EEG-based emotion extraction method that integrates Stockwell entropy with the CSP algorithm.
The experiments demonstrate the following:
  • The classification accuracy was highest in the Gamma frequency band.
  • Increasing the sliding window width from W_5 to W_20 had a minimal impact on classification accuracy, demonstrating that the Stockwell entropy–CSP algorithm exhibits relatively stable performance in EEG-based emotion recognition.
  • The accuracy of binary classification tasks—namely, Positive vs. Neutral, Negative vs. Neutral, and Positive vs. Negative—was generally high. Among these tasks, Positive vs. Neutral and Positive vs. Negative achieved the highest recognition rates.
  • Although the CSP algorithm is more suitable for binary classification tasks, it also demonstrated strong performance in the three-class task (Positive vs. Negative vs. Neutral).
In prior neurophysiological research, the overall body of evidence also supported an association between Gamma oscillations and emotional states [43]. Yang et al. (2020) observed increased Gamma connection density between the prefrontal, temporal, parietal, and occipital regions during emotional processing [44]. Luther et al. (2023) reported increased Gamma power over posterior areas for unpleasant compared to pleasant and neutral pictures [45]. These findings suggest that Gamma oscillations play a role in various aspects of emotional processing, including the perception of emotional stimuli, the cognitive regulation of emotions, and the interaction between emotions and other cognitive processes [46].

5.2. Analysis of the Reasons Behind the Experimental Results

5.2.1. Influence of Signal Frequency and Amplitude on Stockwell Entropy

This study primarily focuses on the analysis of swept-frequency signals. To better simulate the behavior of Stockwell entropy for real EEG signals, the frequency range of the swept-frequency signal was set to 0–46 Hz, with a sampling rate of 200 Hz. Figure 6 illustrates the influence of different signal states on Stockwell entropy. Figure 6a demonstrates the effect of increasing frequency with constant amplitude on Stockwell entropy. As the frequency of the swept-frequency signal increases, the entropy values of high-frequency signals gradually stabilize, whereas those of low-frequency signals exhibit significant fluctuations. Figure 6b depicts the scenario where both signal frequency and amplitude increase. The results indicated that the entropy values of high-frequency signals increased approximately linearly and remained relatively concentrated, while those of the low-frequency signals fluctuated significantly. Figure 6c represents the scenario where signal frequency increases while amplitude decreases. It can be seen that the entropy values of high-frequency signals decreased approximately linearly and remained relatively concentrated, whereas those of low-frequency signals continued to fluctuate significantly. Collectively, Figure 6a–c demonstrates that as the frequency increases, the entropy values across different window widths converge toward stability.
As shown in Figure 6d,e, we used the controlled variable method to analyze the effects of amplitude and frequency changes on Stockwell entropy. Figure 6d illustrates the scenario where the signal amplitude increased while the frequency remained constant, with the signal frequency fixed at 5 Hz. The results indicated that when the signal frequency was held constant and the amplitude increased, the entropy value exhibited oscillatory growth, accompanied by significant fluctuations. Figure 6e depicts the scenario where the signal amplitude remained unchanged while the frequency varied. It is evident that low-frequency signals exhibit substantial fluctuations in entropy values, whereas high-frequency signals demonstrate relatively stable behavior.
Table 2 compares the standard deviations of Stockwell entropy for the signal sin(2πft) across different frequencies and window widths (W = 5, 10, 20). The table lists only the standard deviations at the boundary frequencies of different frequency bands. The results indicate that as the frequency increased, the standard deviations generally decreased. When the frequency reached the Gamma band, the standard deviations reported in Table 2 typically dropped below 1, thereby showing that Stockwell entropy values are relatively stable and suitable for effective emotional classification. Figure 6e compares the results for frequencies of 13 Hz and 30 Hz.
From Figure 6 and Table 2, the following conclusions can be drawn:
  • The entropy values of high-frequency signals are relatively stable, while those of low-frequency signals fluctuate significantly.
  • Both an increase and a decrease in amplitude can cause changes in the entropy values of high-frequency signals, and these changes are approximately linear. Therefore, the entropy values of high-frequency signals respond well to amplitude changes and can be used to detect such changes.
  • As the frequency increases, the values under different window conditions tend to stabilize, indicating that the selection of window width has little impact on the Stockwell entropy values of high-frequency signals. This demonstrates that Stockwell entropy values are highly stable for classification and recognition.
These conclusions indicate that the entropy values in the high-frequency Gamma band remain relatively stable, thereby contributing to the identification of emotional states. This is why the classification accuracy in this band is relatively high.
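The window-variance argument above can be reproduced with a short simulation that recomputes the sliding-window differential entropy of Section 3.3.2 for pure sinusoids: a 40 Hz (Gamma-range) tone sampled at 200 Hz spans four full cycles in a 20-sample window, so its windowed variance, and hence entropy, is nearly constant, whereas a 2 Hz (Delta-range) tone spans only a fifth of a cycle per window, so its entropy swings strongly with window position. The frequencies and window width below are illustrative choices, not the exact sweep used for Figure 6:

```python
import numpy as np

def sliding_de(s, k):
    """Sliding-window differential entropy under the Gaussian assumption."""
    return np.array([0.5 * np.log(2 * np.pi * np.e * np.var(s[i:i + k]))
                     for i in range(len(s) - k + 1)])

fs, dur, k = 200, 5, 20          # sampling rate (Hz), duration (s), window width
t = np.arange(int(fs * dur)) / fs

def entropy_std(f):
    """Fluctuation (std) of the windowed entropy of sin(2*pi*f*t)."""
    return np.std(sliding_de(np.sin(2 * np.pi * f * t), k))

# A 20-sample window spans 4 full cycles at 40 Hz but only 0.2 of a
# cycle at 2 Hz, so the low-frequency entropy fluctuates far more.
low_std, high_std = entropy_std(2.0), entropy_std(40.0)
```

This mirrors the trend in Table 2: the entropy fluctuation shrinks with frequency, which is what makes Gamma-band Stockwell entropy a stable feature for classification.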

5.2.2. Influence of the CSP Algorithm on Emotional State Classification

In this study, the CSP algorithm was applied to decompose Stockwell entropy into eight components. This study uses the CSP component features of the “Positive vs. Negative” binary classification state in the Gamma frequency band of the 15th subject as an example. For clarity, Figure 7 displays the curves of three representative components. It can be observed from Figure 7 that negative emotions correspond to low values on the orange curve and high values on the blue curve, whereas positive emotions correspond to high values on the orange curve and low values on the blue curve. Thus, the CSP-transformed Stockwell entropy features exhibit high discriminability for emotional EEG signals, effectively distinguishing between different emotional states and achieving high recognition rates.

5.3. Deficiencies and Prospects

In this study, the proposed emotion recognition method combining Stockwell entropy and CSP was compared with previous results by Ruo-Nan Duan, Wei-Long Zheng, and Bao-Liang Lu. Duan, Zhu, and Lu investigated the classification accuracy of four features—the energy spectrum (ES), differential entropy (DE), differential asymmetry (DASM), and rational asymmetry (RASM)—for the Positive vs. Negative binary task, obtaining average accuracies of 76.56%, 84.22%, 80.96%, and 83.28%, respectively [11]. In contrast, the SVM and RF classifiers in this study reached 96.1% and 95.7% in the Gamma frequency band for the same task. Zheng, Lu, and their colleagues developed an EEG-based emotion recognition model using deep belief networks (DBNs) trained on differential entropy features extracted from multi-channel EEG data. Using the SEED dataset to classify three emotional states (positive, neutral, and negative), they achieved a maximum accuracy of 86.65%, with average accuracies for DBN, SVM, logistic regression (LR), and K-nearest neighbors (KNN) of 86.08%, 83.99%, 82.70%, and 72.60%, respectively [13]. In this study, the SVM and RF classifiers reached 88.5% and 88.0% for the three-class task in the Gamma frequency band.
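For context, the SVM/RF evaluation protocol behind accuracy figures like these can be sketched with scikit-learn. The feature matrix below is synthetic (a stand-in for the CSP-projected Stockwell entropy features), and the hyperparameters (RBF-kernel SVC, 100-tree forest, 5-fold cross-validation) are assumptions, since the paper's exact settings are not restated in this section.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for CSP-projected Stockwell entropy features: 8 features per trial.
X, y = make_classification(n_samples=300, n_features=8, n_informative=4,
                           n_classes=2, random_state=42)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
rf = RandomForestClassifier(n_estimators=100, random_state=42)

for name, clf in [("SVM", svm), ("RF", rf)]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean 5-fold accuracy = {acc:.3f}")
```

Averaging cross-validated accuracy per subject, then across subjects, matches the within-subject evaluation described below.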
Although the combination of Stockwell entropy and CSP has demonstrated strong performance in classifying emotional EEG signals, this study still has certain limitations. In existing emotion recognition research, individual differences remain a significant challenge: emotion-related signal patterns vary substantially across individuals, so a model may achieve high performance in within-subject tests yet perform poorly in cross-subject scenarios. The final experimental results reported in this study are average recognition rates across 15 subjects; however, the emotional data used in each classification test came from a single individual. Future work will therefore focus on subject-independent emotion recognition and on refining the proposed method accordingly. In addition, investigating the spatial distribution of entropy features across the scalp may help identify the key cortical regions associated with emotional changes, a promising direction for future research.

6. Conclusions

This study proposes a method for extracting emotional EEG features by integrating Stockwell entropy with the CSP technique. The Stockwell transform, a time-frequency analysis tool, provides a localized time-frequency representation of non-stationary EEG signals, while Stockwell entropy captures entropy features in the time-frequency domain. The CSP algorithm, a widely used feature extraction method, achieves dimensionality reduction and is frequently applied to spatial filtering in binary motor imagery tasks in brain–computer interface (BCI) systems. This study integrates these two techniques to enhance the accuracy and stability of emotional EEG recognition.
The experimental results indicate that the proposed combined method exhibits both excellent classification performance and strong stability for emotional EEG signals in the Gamma (30–46 Hz) frequency band. The method achieves high classification accuracy in binary classification tasks, including Positive vs. Neutral, Negative vs. Neutral, and Positive vs. Negative. Furthermore, it also achieves satisfactory classification results in the three-class task (Positive vs. Negative vs. Neutral). Additionally, this study investigates and analyzes the impact of varying window sizes and frequency bands on classification accuracy.

Author Contributions

Conceptualization, Y.L.; methodology, Y.L.; validation, Y.L.; data curation, Y.L.; writing—original draft preparation, Y.L.; writing—review and editing, Y.L. and J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China under Grant 62377018, the National Key R&D Program of China under Grant 2024YFC3308300, and the Self-Determined Research Funds of CCNU from the Colleges' Basic Research and Operation of MOE under Grant CCNU24ZZ154.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Alarcão, S.M.; Fonseca, M.J. Emotions Recognition Using EEG Signals: A Survey. IEEE Trans. Affect. Comput. 2019, 10, 374–393. [Google Scholar] [CrossRef]
  2. Im, C.; Seo, J.M. A review of electrodes for the electrical brain signal recording. Biomed. Eng. Lett. 2016, 6, 104–112. [Google Scholar] [CrossRef]
  3. Abdulridha, F.; Albaker, B.M. Non-invasive real-time multimodal deception detection using machine learning and parallel computing techniques. Soc. Netw. Anal. Min. 2024, 14, 97. [Google Scholar] [CrossRef]
  4. Wankar, R.V.; Shah, P.; Sutar, R. Feature extraction and selection methods for motor imagery EEG signals: A review. In Proceedings of the 2017 International Conference on Intelligent Computing and Control (I2C2), Coimbatore, India, 15–16 June 2017; pp. 1–9. [Google Scholar]
  5. Chanel, G.; Rebetez, C.; Bétrancourt, M.; Pun, T. Emotion Assessment From Physiological Signals for Adaptation of Game Difficulty. IEEE Trans. Syst. Man, Cybern. Part A Syst. Hum. 2011, 41, 1052–1063. [Google Scholar] [CrossRef]
  6. Shi, S.; Liu, W. Interactive Multi-Agent Convolutional Broad Learning System for EEG Emotion Recognition. Expert Syst. Appl. 2025, 260, 125420. [Google Scholar] [CrossRef]
  7. Wang, C.; Li, Y.; Wang, L.; Liu, S.; Yang, S. A study of EEG non-stationarity on inducing false memory in different emotional states. Neurosci. Lett. 2023, 809, 137306. [Google Scholar] [CrossRef]
  8. Kim, J.Y.; Kim, H.G. Emotion Recognition Using EEG Signals and Audiovisual Features with Contrastive Learning. Bioengineering 2024, 11, 997. [Google Scholar] [CrossRef]
  9. Lu, Y.; Wang, M.; Wu, W.; Han, Y.; Zhang, Q.; Chen, S. Dynamic entropy-based pattern learning to identify emotions from EEG signals across individuals. Measurement 2020, 150, 107003. [Google Scholar] [CrossRef]
  10. Yao, L.; Lu, Y.; Qian, Y.; He, C.; Wang, M. High-Accuracy Classification of Multiple Distinct Human Emotions Using EEG Differential Entropy Features and ResNet18. Appl. Sci. 2024, 14, 6175. [Google Scholar] [CrossRef]
  11. Duan, R.N.; Zhu, J.Y.; Lu, B.L. Differential Entropy Feature for EEG-based Emotion Classification. In Proceedings of the 6th International IEEE EMBS Conference on Neural Engineering (NER), San Diego, CA, USA, 6–8 November 2013; pp. 81–84. [Google Scholar]
  12. García-Martínez, B.; Martínez-Rodrigo, A.; Zangróniz Cantabrana, R.; Pastor García, J.M.; Alcaraz, R. Application of Entropy-Based Metrics to Identify Emotional Distress from Electroencephalographic Recordings. Entropy 2016, 18, 221. [Google Scholar] [CrossRef]
  13. Zheng, W.-L.; Lu, B.-L. Investigating Critical Frequency Bands and Channels for EEG-based Emotion Recognition with Deep Neural Networks. IEEE Trans. Auton. Ment. Dev. 2015, 7, 162–175. [Google Scholar] [CrossRef]
  14. Wu, M.; Wan, T.; Wan, X.; Fang, Z.; Du, Y. A New Localization Method for Epileptic Seizure Onset Zones Based on Time-Frequency and Clustering Analysis. Pattern Recognit. 2020, 111, 107687. [Google Scholar] [CrossRef]
  15. Moukadem, A.; Bouguila, Z.; Abdeslam, D.O.; Dieterlen, A. A new optimized Stockwell transform applied on synthetic and real non-stationary signals. Digit. Signal Process. 2015, 46, 226–238. [Google Scholar] [CrossRef]
  16. Zhong, X.; Liu, G.; Dong, X.; Li, C.; Li, H.; Cui, H.; Zhou, W. Automatic Seizure Detection Based on Stockwell Transform and Transformer. Sensors 2024, 24, 77. [Google Scholar] [CrossRef]
  17. Dash, S.; Dash, D.K.; Tripathy, R.K.; Pachori, R.B. Time–frequency domain machine learning for detection of epilepsy using wearable EEG sensor signals recorded during physical activities. Biomed. Signal Process. Control 2025, 100, 107041. [Google Scholar] [CrossRef]
  18. Blanco-Díaz, C.F.; Guerrero-Mendez, C.D.; Delisle-Rodriguez, D.; Jaramillo-Isaza, S.; Ruiz-Olaya, A.F.; Frizera-Neto, A.; de Souza, A.F.; Bastos-Filho, T. Evaluation of temporal, spatial and spectral filtering in CSP-based methods for decoding pedaling-based motor tasks using EEG signals. Biomed. Phys. Eng. Express 2024, 10, 035003. [Google Scholar] [CrossRef]
  19. Jiang, Q.; Zhang, Y.; Ge, G.; Xie, Z. An Adaptive CSP and Clustering Classification for Online Motor Imagery EEG. IEEE Access 2020, 8, 156117–156128. [Google Scholar] [CrossRef]
  20. Venu, K.; Natesan, P. Hybrid optimization assisted channel selection of EEG for deep learning model-based classification of motor imagery task. Biomedizinische Technik. Biomed. Eng./Biomed. Tech. 2023, 69, 125–140. [Google Scholar] [CrossRef]
  21. Sethi, S.; Upadhyay, R.; Singh, H.S. Stockwell-common spatial pattern technique for motor imagery-based Brain Computer Interface design. Comput. Electr. Eng. 2018, 71, 492–504. [Google Scholar] [CrossRef]
  22. Mousavi, E.A.; Maller, J.J.; Fitzgerald, P.B.; Lithgow, B.J. Wavelet Common Spatial Pattern in asynchronous offline brain computer interfaces. Biomed. Signal Process. Control 2011, 6, 121–128. [Google Scholar] [CrossRef]
  23. Chacon-Murguia, M.I.; Rivas-Posada, E. Feature Extraction Evaluation for Two Motor Imagery Recognition Based on Common Spatial Patterns, Time-Frequency Transformations and SVM. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020. [Google Scholar]
  24. Poh, K.K.; Marziliano, P. Analysis of Neonatal EEG Signals using Stockwell Transform. In Proceedings of the International Conference of the IEEE Engineering in Medicine & Biology Society, Lyon, France, 22–26 August 2007; pp. 594–597. [Google Scholar]
  25. Kulkarni, D.; Dixit, V.V. EEG-based emotion classification Model: Combined model with improved score level fusion. Biomed. Signal Process. Control 2024, 95, 106352. [Google Scholar] [CrossRef]
  26. Gupta, B.; Verma, A.K. Linear canonical Stockwell transform and the associated multiresolution analysis. Math. Methods Appl. Sci. 2024, 47, 9287–9312. [Google Scholar] [CrossRef]
  27. Stockwell, R.G.; Mansinha, L.; Lowe, R.P. Localisation of the complex spectrum: The S Transform. IEEE Trans. Signal Process. 1996, 44, 998–1001. [Google Scholar] [CrossRef]
  28. Bajaj, A.; Kumar, S. Design of Novel Time–Frequency Tool for Non-stationary α-Stable Environment and its Application in EEG Epileptic Classification. Arab. J. Sci. Eng. 2024, 49, 15863–15881. [Google Scholar] [CrossRef]
  29. Shin, Y.; Hwang, S.; Lee, S.-B.; Son, H.; Chu, K.; Jung, K.-Y.; Lee, S.K.; Park, K.-I.; Kim, Y.-G. Using spectral and temporal filters with EEG signal to predict the temporal lobe epilepsy outcome after antiseizure medication via machine learning. Sci. Rep. 2023, 13, 22532. [Google Scholar] [CrossRef]
  30. Jibon, F.A.; Miraz, M.H.; Khandaker, M.U.; Rashdan, M.; Salman, M.; Tasbir, A.; Nishar, N.H.; Siddiqui, F.H. Epileptic seizure detection from electroencephalogram (EEG) signals using linear graph convolutional network and DenseNet based hybrid framework. J. Radiat. Res. Appl. Sci. 2023, 16, 3. [Google Scholar] [CrossRef]
  31. Guerrero, C.M.; Trigueros, A.M.; Franco, J.I. Time-frequency EEG analysis in epilepsy: What is more suitable? In Proceedings of the Fifth IEEE International Symposium on Signal Processing and Information Technology, Athens, Greece, 18–21 December 2005; pp. 202–207. [Google Scholar] [CrossRef]
  32. Rao, B.M.; Kumar, A.; Bachwani, N.; Marwaha, P. Detection of atrial fibrillation based on Stockwell transformation using convolutional neural networks. Int. J. Inf. Technol. 2023, 15, 1937–1947. [Google Scholar] [CrossRef]
  33. Kumar, N.; Raj, S. An Adaptive Scheme for Real-Time Detection of Patient-Specific Arrhythmias Using Single-Channel Wearable ECG Sensor. IEEE Sens. Lett. 2024, 8, 7001504. [Google Scholar] [CrossRef]
  34. Burnos, S.; Hilfiker, P.; Sürücü, O.; Scholkmann, F.; Krayenbühl, N.; Grunwald, T.; Sarnthein, J. Human intracranial high frequency oscillations (HFOs) detected by automatic time-frequency analysis. PLoS ONE 2014, 9, e94381. [Google Scholar] [CrossRef]
  35. Shi, L.-C.; Jiao, Y.-Y.; Lu, B.-L. Differential entropy feature for EEG-based vigilance estimation. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 6627–6630. [Google Scholar] [CrossRef]
  36. Jiang, Q.; Zhang, Y.; Hu, X.; Wang, W.; Ge, G.Y. Geometry-aware Common Spatial Patterns for Motor Imagery-based Brain-Computer Interfaces. IAENG Int. J. Appl. Math. 2024, 54, 1476–1489. [Google Scholar]
  37. Jiang, X.; Meng, L.; Chen, X.; Wu, D. A CSP-based retraining framework for motor imagery based brain-computer interfaces. Sci. China Inf. Sci. 2024, 67, 189403. [Google Scholar] [CrossRef]
  38. Grosse-Wentrup, M.; Buss, M. Multiclass Common Spatial Patterns and Information Theoretic Feature Extraction. IEEE Trans. Biomed. Eng. 2008, 55, 1991–2000. [Google Scholar] [CrossRef] [PubMed]
  39. Mishuhina, V.; Jiang, X. Complex Common Spatial Patterns on Time-Frequency Decomposed EEG for Brain-Computer Interface. Pattern Recognit. 2021, 115, 107918. [Google Scholar] [CrossRef]
  40. Lu, Y.; Wang, H.; Lu, Z.; Niu, J.; Liu, C. Gait pattern recognition based on electroencephalogram signals with common spatial pattern and graph attention networks. Eng. Appl. Artif. Intell. 2025, 141, 109680. [Google Scholar] [CrossRef]
  41. Gramfort, A.; Luessi, M.; Larson, E.; Engemann, D.A.; Strohmeier, D.; Brodbeck, C.; Goj, R.; Jas, M.; Brooks, T.; Parkkonen, L.; et al. MEG and EEG data analysis with MNE-Python. Front. Neurosci. 2013, 7, 267. [Google Scholar] [CrossRef]
  42. Huang, Y.; Zha, W.X. Comparison on Classification Performance Between Random Forests and Support Vector Machine. Software 2012, 33, 107–110. [Google Scholar]
  43. Kang, J.-H.; Jeong, J.W.; Kim, H.T.; Kim, S.H.; Kim, S.-P. Representation of cognitive reappraisal goals in frontal gamma oscillations. PLoS ONE 2014, 9, e113375. [Google Scholar] [CrossRef]
  44. Yang, K.; Tong, L.; Shu, J.; Zhuang, N.; Yan, B.; Zeng, Y. High Gamma Band EEG Closely Related to Emotion: Evidence From Functional Network. Front. Hum. Neurosci. 2020, 14, 89. [Google Scholar] [CrossRef]
  45. Luther, L.; Horschig, J.M.; van Peer, J.M.; Roelofs, K.; Jensen, O.; Hagenaars, M.A. Oscillatory brain responses to emotional stimuli are effects related to events rather than states. Front. Hum. Neurosci. 2023, 16, 868549. [Google Scholar] [CrossRef]
  46. Kang, J.H.; Ahn, H.M.; Jeong, J.W.; Hwang, I.; Kim, H.T.; Kim, S.H.; Kim, S.P. The modulation of parietal gamma oscillations in the human electroencephalogram with cognitive reappraisal. Neuroreport 2012, 23, 995–999. [Google Scholar] [CrossRef]
Figure 1. Calculation process of Stockwell entropy with sliding windows.
Figure 2. Flowchart of the Stockwell entropy-CSP feature extraction method.
Figure 3. Classification of the three emotional-state combinations in different frequency bands: (a) sliding window width of 5; (b) sliding window width of 10; (c) sliding window width of 20.
Figure 4. Classification of the combinations of four emotional states.
Figure 5. Classification of emotional states by SVM and RF in different frequency bands: (a) Positive vs. Neutral; (b) Negative vs. Neutral; (c) Positive vs. Negative; (d) Positive vs. Negative vs. Neutral.
Figure 6. Influence of signal frequency and amplitude on Stockwell entropy: (a) the frequency of the swept-frequency signal increases; (b) both the frequency and amplitude of the swept-frequency signal increase; (c) the frequency of the swept-frequency signal increases, while its amplitude decreases; (d) the frequency of the signal is fixed, while the amplitude increases; (e) the amplitude of the signal is fixed, but the frequencies are different.
Figure 7. CSP components in the “Positive vs. Negative” state.
Table 1. Classification (mean ± standard deviation, %) of the three emotional combinations according to different frequency bands and different window widths.

| Bands | W = 5: Neu vs. Neg | W = 5: Pos vs. Neg | W = 5: Pos vs. Neu | W = 10: Neu vs. Neg | W = 10: Pos vs. Neg | W = 10: Pos vs. Neu | W = 20: Neu vs. Neg | W = 20: Pos vs. Neg | W = 20: Pos vs. Neu |
|---|---|---|---|---|---|---|---|---|---|
| All | 90.1 ± 1.1 | 95.2 ± 0.9 | 95.8 ± 0.8 | 90.2 ± 1.2 | 95.3 ± 0.9 | 95.9 ± 0.8 | 90.3 ± 1.2 | 95.5 ± 0.8 | 95.9 ± 0.7 |
| Delta | 70.9 ± 1.6 | 71.9 ± 1.8 | 71.1 ± 2.0 | 70.9 ± 1.8 | 72.1 ± 1.7 | 71.0 ± 1.9 | 70.7 ± 1.7 | 72.2 ± 1.6 | 70.9 ± 2.1 |
| Theta | 67.7 ± 1.9 | 71.5 ± 1.7 | 71.6 ± 1.8 | 67.9 ± 1.7 | 71.3 ± 1.6 | 71.9 ± 1.7 | 67.8 ± 2.0 | 71.1 ± 1.9 | 71.6 ± 1.7 |
| Alpha | 72.2 ± 2.0 | 78.9 ± 1.5 | 79.4 ± 1.6 | 72.4 ± 2.2 | 79.0 ± 1.6 | 79.1 ± 1.5 | 72.3 ± 2.3 | 79.0 ± 1.8 | 79.2 ± 1.5 |
| Beta | 87.1 ± 1.3 | 94.3 ± 1.1 | 94.1 ± 1.0 | 87.0 ± 1.3 | 94.3 ± 1.1 | 94.1 ± 1.0 | 87.1 ± 1.3 | 94.4 ± 0.9 | 94.1 ± 1.0 |
| Gamma | 92.8 ± 1.2 | 96.2 ± 0.9 | 96.7 ± 0.8 | 92.9 ± 1.3 | 96.2 ± 0.8 | 96.7 ± 0.7 | 92.8 ± 1.3 | 96.2 ± 0.8 | 96.8 ± 0.8 |
Table 2. Standard deviations of Stockwell entropy at different frequencies with different window widths.

| Bands | Frequency (Hz) | Std (W = 5) | Std (W = 10) | Std (W = 20) |
|---|---|---|---|---|
| Delta | 1 | 16.62 | 15.43 | 13.66 |
| Delta | 3 | 14.66 | 12.29 | 9.00 |
| Theta | 4 | 13.82 | 11.09 | 7.26 |
| Theta | 7 | 11.93 | 8.17 | 3.04 |
| Alpha | 8 | 11.37 | 7.33 | 1.88 |
| Alpha | 12 | 9.39 | 4.40 | 0.76 |
| Beta | 13 | 8.94 | 3.75 | 0.59 |
| Beta | 30 | 2.82 | 0.36 | 0.00 |
| Gamma | 31 | 2.51 | 0.61 | 0.24 |
| Gamma | 36 | 1.03 | 0.84 | 0.29 |
| Gamma | 41 | 0.23 | 0.22 | 0.21 |
| Gamma | 42 | 0.43 | 0.41 | 0.33 |
| Gamma | 43 | 0.62 | 0.55 | 0.32 |
| Gamma | 44 | 0.79 | 0.63 | 0.17 |
| Gamma | 45 | 0.93 | 0.64 | 0.04 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
