Article

Real-Time EEG-Based Emotion Recognition

Xiangkun Yu, Zhengjie Li, Zhibang Zang and Yinhua Liu
1 College of Computer Science and Technology, Qingdao University, Qingdao 266071, China
2 School of Automation, Qingdao University, Qingdao 266071, China
3 Shandong Key Laboratory of Industrial Control Technology, Qingdao 266071, China
4 Institute for Future, Qingdao University, Qingdao 266071, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(18), 7853; https://doi.org/10.3390/s23187853
Submission received: 10 August 2023 / Revised: 8 September 2023 / Accepted: 9 September 2023 / Published: 13 September 2023
(This article belongs to the Section Biomedical Sensors)

Abstract

Most studies have demonstrated that EEG can be applied to emotion recognition. In EEG-based emotion recognition, real-time performance is an important requirement. In this paper, the real-time problem of EEG-based emotion recognition is explained and analyzed. A short time window and an attention mechanism are then designed for EEG signals to follow emotional changes over time. Long short-term memory with an additive attention mechanism is used for emotion recognition so that emotional states can be updated in a timely manner, and the model is applied to the SEED and SEED-IV datasets to verify the feasibility of real-time emotion recognition. The results show that the model performs relatively well in terms of real-time performance, with accuracy rates of 85.40% on SEED and 74.26% on SEED-IV, but the accuracy has not reached the ideal level due to data labeling issues and other losses incurred in the pursuit of real-time performance.

1. Introduction

Affective computing is a relatively young research domain that addresses the challenge of enabling computer systems to accurately process, recognize, and understand human-expressed emotional information for natural human–computer interaction, making it of great importance in the field of artificial intelligence [1,2]. Advancing affective computing requires interdisciplinary efforts, and its research outcomes contribute to progress in various fields, including computer science and psychology. Emotion recognition is a facet of affective computing that is increasingly attracting researchers with interdisciplinary backgrounds. Emotions, as psychological and physiological states accompanying cognitive and conscious processes, play a crucial role in human society. Advances in artificial intelligence have laid the foundation for affect-aware artificial intelligence, which aims to endow machines with emotional capabilities and to enable richer human–computer interaction than earlier, isolated forms of interaction. Realizing such affect-aware artificial intelligence hinges on giving machines the capability to perceive, understand, and regulate emotions.
There are various methods for recognizing emotions, based on both non-physiological and physiological signals. Non-physiological signals include facial expressions and body posture, which are used in psychology to infer a person's internal emotional state. Non-physiological signals have the advantage of being easily accessible, but they also have limitations. For example, facial expressions can be deceptive: in cases of fraud, individuals can maintain their usual expression even when confronted with evidence, making it difficult to discern their inner emotional activity from facial expressions alone. Acquiring physiological signals is more complex but yields greater accuracy. There are several types of physiological signals, such as electroencephalography (EEG) and electrocardiography (ECG). Among these, EEG is widely used for emotion detection [3,4]. EEG captures signals directly from the surface of the cerebral cortex, offering a direct and comprehensive means of recognizing emotions with excellent temporal resolution [5,6,7]. Compared to other detection methods, EEG has advantages including simplicity, portability, ease of use, and non-invasiveness. Compared to other physiological signals such as ECG, skin temperature, and skin conductance, EEG signals can achieve higher classification accuracy [8]. Furthermore, EEG measures the actual response of the brain rather than relying on subjective facial expressions, making it a more objective measure.
To recognize emotions using EEG, it is necessary to first quantify emotional states. Currently, there are two types of emotional models based on EEG: discrete models and dimensional models. Discrete models consist of a limited number of discrete basic emotions, such as happiness, sadness, surprise, and fear. The SEED-IV dataset is a commonly used discrete model EEG emotion recognition dataset, which includes four emotions: neutral, happy, sad, and fearful. Dimensional models mainly refer to the valence and arousal dimensions. Valence represents the positive or negative aspect of an emotion, while arousal represents its intensity. Dominance and liking are also used as supplementary dimensions. Dominance refers to the degree of control of the emotion, while liking indicates the degree of pleasure associated with the emotion. The DEAP dataset, widely used for EEG emotion recognition, uses these four dimensions as emotion rating standards.
There are many aspects of EEG-based emotion recognition that can be studied; in this paper, we focus mainly on the real-time aspect. As mentioned earlier, EEG has good temporal resolution [5,6,7], so real-time processing is an important research area for EEG emotion recognition, and significant achievements have been made in previous studies. Anh et al. [9] developed a real-time emotion recognition system to identify two valence classes and two arousal classes, whose combinations yield four basic emotions (happiness, relaxation, sadness, and anger) plus a neutral state. The average accuracy of emotion recognition across all subjects was 70.5% (ranging from 66% to 76%). W.-C. Fang et al. [10] proposed a real-time EEG-based emotion recognition hardware system architecture based on a multiphase convolutional neural network (CNN) algorithm. In this work, six EEG channels (FP1, FP2, F3, F4, F7, and F8) were selected, and EEG images were generated from the fusion of spectrograms. The average accuracy for valence and arousal was 83.36% and 76.67%, respectively. J. W. Li et al. [11] proposed a technique called Brain Rhythm Sequencing (BRS), which interprets EEG signals based on the dominant brain rhythm with maximum instantaneous power at each 0.2 s timestamp. Results from Music Emotion Recognition (MER) experiments and three emotion datasets (SEED, DEAP, and MAHNOB) show that the classification accuracy of single-channel data with a duration of 10 s ranges from 70% to 82%. Z. Li et al. [12] proposed an improved feature selection algorithm based on EEG signals for recognizing participants' emotional states and designed an online emotion recognition brain–computer interface (BCI) system combining this feature selection method; the average accuracy for four-level emotion recognition reached 76.67%. Y.-J. Liu et al. [13] proposed a real-time movie-induced emotion recognition system that identifies individuals' emotional states by analyzing their brain waves. Overall accuracy reached 92.26% for distinguishing high-arousal, positive emotions from neutral ones, and 86.63% for distinguishing positive emotions from negative ones.
Based on the above research, the work of this paper is as follows:
(1) The real-time problem of EEG-based emotion recognition is explained and fully dissected.
(2) To address the real-time problem, we propose finding an appropriate time window length to improve real-time performance, and carry out extensive experiments to determine it.
(3) To demonstrate real-time performance, a model combining LSTM with an attention mechanism is evaluated on the SEED and SEED-IV datasets and compared with other methods.

2. Problem Statement

Emotion is continuous and variable [14], which makes emotion recognition difficult while emotions are changing. EEG is an electrical signal that reflects brain activity; it can be measured quickly and in real time, which makes it suitable for capturing rapid emotional changes.
During emotion recognition, changes in emotion should be captured as early as possible. In current research on EEG-based emotion recognition, a typical approach uses one minute of data as a sample, with each sample corresponding to a single emotional label. As a result, only one emotion is recognized per one-minute sample. To improve recognition efficiency and identify new emotions earlier, existing methods divide the one-minute data into multiple segments of equal length. J. W. Li et al. [11] proposed a technique called Brain Rhythm Sequencing (BRS), which achieved a classification accuracy of 70–82% for single-channel data lasting only 10 s. However, because each emotion lasts a different amount of time [15,16,17], a 10 s sample may contain multiple emotions that cannot all be identified, so 10 s segmentation is still not suitable for real-time emotion recognition.
With existing methods, an emotion can be recognized only when it occupies a dominant proportion of a data segment. The results focus on the predominant emotion, and real-time performance cannot be guaranteed. As shown in Figure 1, a data segment contains two emotions: sadness and happiness. Initially, sadness is recognized first, but when happiness appears, it is not identified in a timely manner because of its very short duration. When sadness and happiness occupy the same amount of time within the segment window, the result is random. Only when happiness has been present for a sufficient time can it be recognized. There is thus a delay Δt between the appearance of happiness and its identification, which represents the time wasted during the recognition process. Real-time emotion recognition aims to minimize Δt as much as possible.
As mentioned above, when the sample data are segmented into multiple parts of equal length with time windows, Δt can be reduced by changing the length of the segments, and an appropriate value should be selected as the window length. If the window is too short, emotional states can be updated more quickly, but the data contain too few emotion-related features, which may be insufficient for accurate emotion recognition. If the window is too long, the recognition outcome is affected by averaging over longer time intervals, preventing real-time performance. Only with an appropriate length can emotional changes be accurately captured in real time without being smoothed out by such averaging, enabling faster analysis and generation of recognition results. In addition, real-time performance requires timely recognition of the latest emotions, so paying more attention to the most recent data is also a key problem. When identifying newly emerging emotions, the weight of the new data should be increased to minimize the impact of past data on the recognition results and to ensure the accuracy of emotion recognition.

3. Methodology

3.1. Emotion Recognition Model

Addressing the real-time problems mentioned above requires a suitable EEG emotion recognition model, whose key parameters include the size of the time window and the sliding step. Previous research has indicated that the window length for emotion recognition should not exceed 3 s. If the window is longer than this, many emotions whose duration is significantly shorter than 3 s may go unrecognized, which does not meet real-time requirements. Additionally, when sliding the window, the step size should not exceed the window length. An excessively large step size can lead to a loss of detailed information, resulting in inaccurate results. The step size should be set to 50% to 80% of the window length to preserve more emotional variation information and short-term features in the signal and to reduce edge effects.
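To make this windowing scheme concrete, the following Python sketch segments a continuous EEG array into overlapping windows; the 62-channel layout, 200 Hz sampling rate, and helper name are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def sliding_windows(eeg, fs=200, win_sec=1.0, step_ratio=0.5):
    """Segment a (channels, samples) EEG array into overlapping windows.

    step_ratio is the step size as a fraction of the window length;
    values of 0.5-0.8 correspond to the 50-80% stride discussed above.
    """
    win = int(win_sec * fs)           # samples per window
    step = int(win * step_ratio)      # samples advanced between windows
    n_channels, n_samples = eeg.shape
    windows = []
    for start in range(0, n_samples - win + 1, step):
        windows.append(eeg[:, start:start + win])
    return np.stack(windows)          # (n_windows, channels, win)

# Example: 60 s of 62-channel EEG at 200 Hz -> 1 s windows with 0.5 s stride
dummy = np.random.randn(62, 60 * 200)
print(sliding_windows(dummy).shape)   # (119, 62, 200)
```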
For emotion recognition models, attention mechanisms can significantly improve recognition ability by weighting the data. When identifying continuous signal data within sliding time windows, more attention should be given to the emotion that has just appeared at the end of the time window; the attention mechanism is used to balance the weights assigned to past and recent data. In recent years, attention mechanisms have shown excellent performance in various fields, such as image processing and natural language processing [18]. Among the available attention mechanisms, we choose additive attention. It calculates attention weights by learning the similarity between different time points in the EEG signal sequence. During weight calculation, the key vectors and query vectors at each time point are weighted and combined, and the result is used as the numerator of the attention weight. This mechanism allows key information in the EEG signal sequence to be learned, weighted, combined, and output.
The modeling approach should incorporate EEG spatial, temporal, and frequency features. As mentioned above, focusing on the emergence of new emotions requires the model to possess the ability to combine attention mechanisms. Meanwhile, recognizing emotions in new data depends on the past data, so the model should retain and access previous data. EEG data are noisy and contain various interference signals, such as muscle movements. In addition to preprocessing the data to reduce interference, the model must be capable of handling noise and outliers. EEG detection devices are multi-channel, which allows obtaining more accurate and detailed information. Thus, the model needs to preserve the relationship between all EEG channels. In summary, we introduce LSTM to achieve this.
We propose a model suitable for EEG real-time emotion recognition, as shown in Figure 2. This model uses shorter time windows to capture detailed information and focuses on emotional changes. It includes the processing module, STFT module, modeling module, and recognition module. Firstly, in the processing module, the data are downsampled to 200 Hz, and a band-pass filter is applied to remove signals outside the 8–45 Hz range. In the STFT module, the data are processed using the Short-Time Fourier Transform (STFT) to segment and extract features from each EEG signal channel. The processed data are then input to the modeling module. The data pass through the LSTM layer first and then enter the attention layer. After obtaining the output from the attention layer, it is weighted, fused with the output from the LSTM layer, and subjected to flattening operations. Finally, the recognition results are obtained through the softmax layer of the recognition module.
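A minimal Keras sketch of the modeling and recognition modules is shown below. The paper describes this pipeline only at the block level, so the choice of tf.keras, the self-attention over the LSTM outputs, the concatenation used to fuse the attention and LSTM outputs, and the input feature shapes are assumptions for illustration (the 128 units, 0.5 dropout, and 0.001 learning rate follow Section 4.2).

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(time_steps, n_features, n_classes=3, units=128, dropout=0.5):
    """LSTM + additive-attention classifier; an illustrative sketch."""
    inputs = layers.Input(shape=(time_steps, n_features))
    lstm_out = layers.LSTM(units, return_sequences=True, dropout=dropout)(inputs)
    # Self-attention over the LSTM outputs (query = value = lstm_out)
    attn_out = layers.AdditiveAttention()([lstm_out, lstm_out])
    # Fuse the attention output with the LSTM output, then flatten
    fused = layers.Concatenate()([lstm_out, attn_out])
    flat = layers.Flatten()(fused)
    outputs = layers.Dense(n_classes, activation="softmax")(flat)
    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Illustrative shapes: 9 STFT frames per window, 62 channels x 23 frequency bins
model = build_model(time_steps=9, n_features=62 * 23, n_classes=3)
model.summary()
```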

3.2. Feature Extraction

Feature extraction is a crucial step in EEG-based emotion recognition. Extracting representative and discriminative features from EEG signals facilitates subsequent classification and recognition. During the recognition process, both temporal and spectral features need to be extracted since the features of EEG signals vary over time. Additionally, it is essential to perform feature extraction separately for each EEG signal channel without concatenating the extracted features, to maintain the independence of each channel and preserve more detailed information at every time point. To achieve these objectives, we employ the Short-Time Fourier Transform (STFT) for feature extraction. By applying time-domain windowing to EEG signals and performing frequency-domain Fourier Transform on the windowed signals, we obtain the spectral distribution of EEG signals at various time instances and frequencies, revealing the short-term frequency change characteristics of the EEG signals. STFT is a commonly used method for time-frequency analysis, which decomposes signals into short-time frequency components. The formula is as follows:
F(τ, ω) = ∫ f(t) w(t − τ) e^{−jωt} dt
where f(t) is the input signal and w(t − τ) is a window function. In previous research, different window lengths have been used for feature extraction in EEG signal processing. Ouyang et al. [19] studied various window lengths for EEG-based emotion recognition and found that the optimal window length for emotion recognition is 1–2 s. We conduct multiple experiments using different time window lengths and ultimately select a one-second time window with a 50% overlap.
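As a sketch of this feature-extraction step, the snippet below applies scipy.signal.stft per channel with a one-second window and 50% overlap; restricting the output to the 8–45 Hz band mirrors the filtering described in Section 4.1, and since the exact feature layout used by the authors is not specified, the shapes here are assumptions.

```python
import numpy as np
from scipy.signal import stft

def stft_features(eeg, fs=200, win_sec=1.0, overlap=0.5):
    """Per-channel STFT magnitudes for a (channels, samples) EEG array.

    Channels are kept separate rather than concatenated, as described above.
    Returns an array of shape (channels, freq_bins_in_band, time_frames).
    """
    nperseg = int(win_sec * fs)          # 1 s analysis window
    noverlap = int(nperseg * overlap)    # 50% overlap
    feats = []
    for channel in eeg:
        f, t, Zxx = stft(channel, fs=fs, nperseg=nperseg, noverlap=noverlap)
        band = (f >= 8) & (f <= 45)      # keep the 8-45 Hz band used elsewhere
        feats.append(np.abs(Zxx[band]))
    return np.stack(feats)

# 10 s of 62-channel EEG at 200 Hz
feats = stft_features(np.random.randn(62, 2000))
print(feats.shape)   # (62, freq_bins_in_band, time_frames)
```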

3.3. LSTM with Attention Mechanism

After careful consideration, we adopt a network model that combines LSTM with an additive attention mechanism. This model has excellent capabilities for handling time-series data such as EEG signals, allowing it to sequentially capture temporal information from the input signal and adapt well to the characteristics of EEG signals.
LSTM [20] is a variant of the RNN [21] and is known for effectively processing sequential data; its effectiveness in extracting temporal information from biological signals has been demonstrated. An LSTM comprises a cell state that propagates and stores temporal information over time, together with input, forget, and output gates. The LSTM formulas are as follows:
i_t = σ(W_{xi} x_t + W_{hi} h_{t−1} + b_i)
o_t = σ(W_{xo} x_t + W_{ho} h_{t−1} + b_o)
f_t = σ(W_{xf} x_t + W_{hf} h_{t−1} + b_f)
C̃_t = tanh(W_{xc} x_t + W_{hc} h_{t−1} + b_c)
C_t = f_t ⊙ C_{t−1} + i_t ⊙ C̃_t
h_t = o_t ⊙ tanh(C_t)
Here, W_{xi}, W_{hi}, b_i, W_{xo}, W_{ho}, b_o, W_{xf}, W_{hf}, b_f, W_{xc}, W_{hc}, and b_c are learned parameters, h_{t−1} is the hidden state of the previous time step, x_t is the input at the current time step, i_t is the input gate vector, o_t is the output gate vector, f_t is the forget gate vector, C̃_t is the candidate cell state at the current time step, C_t is the cell state at the current time step, h_t is the hidden state at the current time step, σ denotes the sigmoid function, tanh denotes the hyperbolic tangent function, and ⊙ denotes element-wise multiplication.
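For concreteness, a minimal NumPy implementation of one LSTM time step following the gate formulas above is sketched here; it is meant to mirror the equations, not to reproduce the trained network, and the parameter dictionary is a hypothetical container for the learned weights.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, params):
    """One LSTM step; params holds the W_x*, W_h*, and b_* arrays."""
    i_t = sigmoid(params["W_xi"] @ x_t + params["W_hi"] @ h_prev + params["b_i"])
    o_t = sigmoid(params["W_xo"] @ x_t + params["W_ho"] @ h_prev + params["b_o"])
    f_t = sigmoid(params["W_xf"] @ x_t + params["W_hf"] @ h_prev + params["b_f"])
    c_tilde = np.tanh(params["W_xc"] @ x_t + params["W_hc"] @ h_prev + params["b_c"])
    c_t = f_t * c_prev + i_t * c_tilde      # element-wise (⊙) combination
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t
```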
The additive attention mechanism calculates attention weights by learning the similarity between different time points in the EEG signal sequence. During weight calculation, key vectors and query vectors at every time point are weighted and combined, and the result is used as the numerator of the attention weight. This method allows learning, weighting, combining, and outputting of key information in the EEG signal sequence.
A = softmax(w^T X) ∈ R^n
Z = X A^T ∈ R^d
The attention weight is within the range of [0, 1], and the sum of weights is equal to 1.
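The following snippet implements the two equations above as written, with X holding d-dimensional features at n time points and a randomly initialized scoring vector w standing in for the learned parameter; in the actual network, w would be learned jointly with the LSTM.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(X, w):
    """A = softmax(w^T X), Z = X A^T, as in the equations above.

    X: (d, n) features at n time points; w: (d,) scoring vector.
    """
    A = softmax(w @ X)   # (n,) weights in [0, 1] that sum to 1
    Z = X @ A            # (d,) attention-weighted summary of the sequence
    return A, Z

rng = np.random.default_rng(0)
X = rng.standard_normal((128, 9))   # e.g. 128-dim LSTM outputs at 9 time steps
w = rng.standard_normal(128)        # stands in for the learned weight vector
A, Z = attention_pool(X, w)
print(A.sum(), Z.shape)             # ~1.0, (128,)
```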

4. Experimental Results and Discussion

4.1. Datasets

To validate the performance of the proposed algorithm and demonstrate the feasibility of real-time emotion recognition, we conducted experiments on the SEED [22,23] and SEED-IV [24] datasets. The SEED dataset consists of EEG and eye-tracking data from 12 subjects, along with EEG data from another 3 subjects. The data were collected while the subjects watched selected movie clips that elicited positive, negative, and neutral emotions. The SEED-IV dataset is an evolution of the SEED dataset, increasing the number of emotional categories from three to four: happiness, sadness, fear, and neutral. Unlike the SEED dataset, SEED-IV is a multi-modal dataset designed for emotion recognition: it provides not only EEG signals but also eye movement features obtained from SMI eye-tracking glasses. These two datasets were chosen because they are based on discrete models, which are preferable for real-time emotion recognition research compared to dimensional models. Additionally, both datasets cover a variety of emotions, which strengthens the experimental results.
During data collection, EEG signals are susceptible to external interference, resulting in significant noise within the EEG data. To mitigate the impact of noise, it is necessary to denoise the signals. The noise can originate from various sources, such as sensors, electromagnetic interference, and other unexpected signal sources. In this study, we downsampled the EEG signal data to 200 Hz. Extensive research has shown that the electrical activity of cortical neurons is relatively weak, and selecting a frequency range suitable for EEG emotion recognition is beneficial for the experiment. There are typically five types of brain waves: Delta, Theta, Alpha, Beta, and Gamma. Delta waves usually occur between 0.5 Hz and 4 Hz and are associated with deep sleep and physical recovery; they are often observed in the EEG of infants with incomplete brain development and in the deep sleep of adults with certain brain disorders. Theta waves typically occur between 4 Hz and 8 Hz and are associated with sleep, relaxation, meditation, and creative thinking. Alpha waves usually occur between 8 Hz and 13 Hz and are mainly observed during relaxation, stillness, eye closure, and deep relaxation. Beta waves typically occur between 13 Hz and 30 Hz and are primarily observed during high cognitive load states, such as thinking, attention, and anxiety. Gamma waves usually occur above 30 Hz and are associated with higher cognitive functions such as learning, attention, consciousness, and perception. Considering these factors, we used a band-pass filter to select data within the 8–45 Hz range for our experiments.
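A sketch of this preprocessing stage using SciPy is given below; the 1000 Hz raw sampling rate and the 4th-order Butterworth filter are assumptions, since the text specifies only the 200 Hz target rate and the 8–45 Hz pass band.

```python
import numpy as np
from scipy.signal import butter, filtfilt, resample_poly

def preprocess(eeg, fs_in=1000, fs_out=200, band=(8.0, 45.0), order=4):
    """Downsample to 200 Hz and band-pass filter to 8-45 Hz, channel by channel.

    eeg: (channels, samples) array; fs_in of 1000 Hz is an assumed raw rate.
    """
    # Polyphase resampling from fs_in to fs_out (e.g. 1000 Hz -> 200 Hz)
    eeg = resample_poly(eeg, up=fs_out, down=fs_in, axis=1)
    # Zero-phase Butterworth band-pass keeping the 8-45 Hz range
    nyq = fs_out / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    return filtfilt(b, a, eeg, axis=1)

filtered = preprocess(np.random.randn(62, 60 * 1000))
print(filtered.shape)   # (62, 12000)
```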

4.2. Experimental Setting

The experimental effects of different parameters were compared through multiple sets of experiments, and the following parameter combinations were tried:
Number of nodes: 32, 62, 128
Learning rate: 0.01, 0.001, 0.005
Dropout rate: 0.2, 0.3, 0.5
Window length: 0.5 s, 1 s, 1.5 s, 2 s
Window overlap rate: 20%, 30%, 50%
The experimental results indicate that the one-second window length performs better. Compared with a two-second window, it can more accurately capture subtle temporal variations and dynamic changes in emotions. A window that is too short, however, contains less data, causing information loss and failing to reflect the signal's dynamic changes, which is why 0.5 s performs worse than 1 s. Additionally, the results demonstrate that a 50% window overlap rate better preserves signal variations and short-term features. The network was trained using the Adam optimization algorithm with 128 nodes, a window length of 1 s, a window overlap rate of 50%, a learning rate of 0.001, a dropout rate of 0.5, and a maximum of 500 epochs. Furthermore, the EarlyStopping callback was employed, which stops training if the validation accuracy does not increase by at least 0.0001 over the past 50 epochs or if the validation loss ceases to decrease.
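The training configuration can be expressed with Keras callbacks as in the sketch below; the dummy data, the simplified stand-in model, the batch handling, and the interpretation of the two stopping criteria as two separate EarlyStopping callbacks are assumptions, since the exact implementation is not reported.

```python
import numpy as np
import tensorflow as tf

# Dummy stand-ins for the windowed features and labels (shapes are illustrative)
X = np.random.randn(256, 9, 128).astype("float32")
y = np.random.randint(0, 3, size=256)

# Simplified stand-in for the LSTM + attention network described above
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(128, input_shape=(9, 128)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Stop when validation accuracy fails to improve by 1e-4 over 50 epochs,
# or when the validation loss stops decreasing, as described in the text.
callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor="val_accuracy", mode="max",
                                     min_delta=1e-4, patience=50,
                                     restore_best_weights=True),
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", mode="min", patience=50),
]

model.fit(X, y, validation_split=0.2, epochs=500, callbacks=callbacks, verbose=0)
```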

4.3. Results

4.3.1. Real-Time Verification

Compared with many previous studies, our experiment shows higher real-time performance. For a 60 s data segment, using a 10 s time window without overlap, 6 emotion instances can be recognized; using a 1 s time window without overlap, 60 instances can be recognized. With a 50% overlap rate, the recognition result is updated every 0.5 s, significantly improving the efficiency of emotion recognition. As shown in Figure 3, the upper panel shows recognition with a 5 s time window, which can identify only one dominant emotion; the lower panel shows that, with a 1 s time window and a 50% overlap rate, while the attention mechanism gives higher weight to the latest data, nine emotions can be identified, greatly improving recognition efficiency.
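The update rates quoted above follow directly from the window arithmetic; the short helper below (hypothetical, for illustration) reproduces the 6 and 60 recognition counts and shows that 1 s windows with a 50% overlap yield an output every 0.5 s, or 119 outputs over a 60 s segment.

```python
def n_updates(segment_sec, win_sec, overlap):
    """Number of recognition outputs for one segment, given window length and overlap."""
    step = win_sec * (1.0 - overlap)
    return int((segment_sec - win_sec) / step) + 1

print(n_updates(60, 10, 0.0))   # 6 outputs with 10 s windows, no overlap
print(n_updates(60, 1, 0.0))    # 60 outputs with 1 s windows, no overlap
print(n_updates(60, 1, 0.5))    # 119 outputs with 1 s windows, 50% overlap
```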

4.3.2. Comparison with Methods

The experimental results on the SEED dataset are shown in Table 1. The model used in our study outperforms KNN [22] and CNN [25] in accuracy, reaching 85.40%. This clearly demonstrates the benefit of using shorter time windows and focusing on the latest data during emotion recognition, and shows the feasibility of real-time EEG emotion recognition. However, the results are not as good as those of DGCNN [26], GELM [27], BODF [28], and FGCN [29]. This may be due to the trade-offs made to achieve real-time performance, which result in some loss in other aspects.
The accuracy and loss over the first 60 epochs of model training are shown in Figure 4. From the figure, it can be observed that the loss is still decreasing, indicating that the EarlyStopping callback has not been invoked at this point and training will continue. To complete the analysis of the results, many studies use the F1-score in addition to accuracy [30]. After training, the F1-score on the SEED test set is 0.854, demonstrating that the model has achieved a good balance between precision and recall. This suggests that the model can correctly identify samples from the three categories while keeping misclassifications low.
The confusion matrix is shown in Figure 5. From it, we can observe that the recognition rate for the neutral emotion is the highest, reaching 90.8%, followed by the negative emotion at 82.9%, while the recognition rate for the positive emotion is the lowest, at 82.4%. The higher recognition rate for neutral emotions may be related to the high proportion of neutral emotion samples in the data, or to the model having better recognition ability for neutral emotion features.
The experimental results on the SEED-IV dataset are shown in Table 2. The performance of the model is compared with SVM [31], BiDANN [32], and BiHDM [33]. SVM is a classical machine learning method, while the other methods are more advanced: in the SVM approach, the EEG features are directly input into a support vector machine for emotion state prediction, whereas the more advanced methods compute brain features before feeding them into the network. From Table 2, it can be observed that our model achieves relatively high accuracy, reaching 74.26%, which is only slightly lower than the 74.35% accuracy of BiHDM.
The training process of the model on SEED-IV is shown in Figure 6, displaying the accuracy and loss. From the figure, it can be observed that the loss has started to fluctuate, and the EarlyStopping callback was invoked shortly thereafter, terminating training. After training, the model achieved an F1-score of 0.707 on the SEED-IV test set, indicating favorable overall performance with room for improvement. In the future, the model will be further refined by adjusting model parameters, using more complex network structures, and enlarging the training dataset.
The confusion matrix is shown in Figure 7. From it, it can be observed that the model performs best at recognizing the fear emotion, achieving an accuracy of 78.1%, followed by the sadness and neutral emotions, while the recognition rate for the happy emotion is the lowest, at only 71.3%. The higher recognition rate for fear could be attributed both to the characteristics of the data themselves and to the possibility that fear exhibits more prominent features in the EEG data; this is an area worth investigating further. Different emotions may show distinct trends in EEG data, and if the features of fear are more recognizable, this can help predict emotions during emotional transitions, thereby greatly advancing the development of real-time EEG-based emotion recognition.
Two sets of experiments were conducted on the SEED and SEED-IV datasets; the SEED experiments achieved higher recognition accuracy because the dataset contains only three emotions. However, the accuracy in both sets of experiments did not reach the expected level, indicating that applying the results in real-life scenarios still presents challenges. To improve accuracy, the first step is to find suitable datasets. The datasets used in the experiments consist of 60 s samples, each labeled with a single emotion. When partitioning the data into windows, the emotion assigned to each windowed segment is taken directly from the original label, whether 1 s or 3 s windows are used. However, the true emotions within a windowed segment may not match the original label, which significantly affects the accuracy of the results. Therefore, collecting suitable datasets is necessary for future research. In addition, due to the limitations of current brainwave detection devices, emotion recognition can only be performed after data collection. In the future, these equipment limitations need to be overcome so that emotions can be recognized in real time during acquisition, enabling the application of real-time emotion recognition in everyday life.

5. Conclusions

This paper elaborates on the real-time problems of EEG-based emotion recognition. To achieve real-time emotion recognition, Δt needs to be minimized, and each emotion in every data segment should be accurately identified, rather than only the dominant emotion of that segment. This requires finding more appropriate time window lengths for data segmentation and placing greater emphasis on the latest data during recognition, reducing the impact of past data on the results. Furthermore, we apply a model that combines LSTM and an attention mechanism and validate it on the SEED and SEED-IV datasets. The model focuses more on the latest data, giving it greater weight during recognition. Experimental results indicate the feasibility of real-time EEG-based emotion recognition. However, the results did not meet the expected goals, since there are inherent losses in realizing real-time performance. The method used in this article requires signal preprocessing, which increases the workload and makes practical application difficult. Lai et al. [34] propose a new convolutional neural network (CNN) architecture for EEG signals that does not require complex signal preprocessing, feature extraction, or feature selection stages; reducing these stages can effectively improve efficiency and is one of the key points to consider in the future. Future work will involve exploring a more suitable time window length and, based on that, predicting and categorizing emotional changes. Early follow-up work may focus on emotions with significant EEG variation, such as fear and sadness: the experimental process would induce subjects' emotions to fluctuate between fear and sadness, leveraging the pronounced trends in these changes to predict emotions during the transition phase. Once accurate predictions are achieved, emotions with smaller trend changes can be considered.

Author Contributions

Conceptualization, X.Y. and Z.L.; methodology, Y.L. and X.Y.; software, X.Y.; validation, X.Y., Z.L. and Z.Z.; formal analysis, Y.L.; investigation, X.Y.; resources, X.Y.; data curation, Z.Z.; writing—original draft preparation, X.Y.; writing—review and editing, Y.L., X.Y. and Z.L.; visualization, X.Y. and Z.L.; supervision, Y.L.; project administration, X.Y. and Z.Z.; funding acquisition, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

The research is funded by the National Key Research and Development Project (Grant No.: 2020YFB1313604).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the research teams who collected and made the datasets available publicly.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Picard, R.W. Building HAL: Computers that sense, recognize, and respond to human emotion. In Proceedings of the Human Vision and Electronic Imaging VI, San Jose, CA, USA, 8 June 2001; pp. 518–523. [Google Scholar] [CrossRef]
  2. Acampora, G.; Cook, D.J.; Rashidi, P.; Vasilakos, A.V. A Survey on Ambient Intelligence in Health Care. Proc. IEEE Inst. Electr. Electron. Eng. 2013, 101, 2470–2494. [Google Scholar] [CrossRef]
  3. Zhang, X.; Liu, J.; Shen, J. Emotion Recognition from Multimodal Physiological Signals Using a Regularized Deep Fusion of Kernel Machine. IEEE Trans. Cybern. 2021, 51, 4386–4399. [Google Scholar] [CrossRef]
  4. Pandey, P.; Seeja, K.R. Subject independent emotion recognition system for people with facial deformity: An EEG based approach. J. Ambient Intell. Human. Comput. 2021, 12, 2311–2320. [Google Scholar] [CrossRef]
  5. Balconi, M.; Lucchiari, C. EEG correlates (event-related desynchronization) of emotional face elaboration: A temporal analysis. Neurosci. Lett. 2006, 392, 118–123. [Google Scholar] [CrossRef]
  6. Bekkedal, M.Y.; Rossi, J., 3rd; Panksepp, J. Human brain EEG indices of emotions: Delineating responses to affective vocalizations by measuring frontal theta event-related synchronization. Neurosci. Biobehav. Rev. 2011, 35, 1959–1970. [Google Scholar] [CrossRef]
  7. Davidson, P.R.; Jones, R.D.; Peiris, M.T. EEG-based lapse detection with high temporal resolution. IEEE Trans. Biomed. Eng. 2007, 54, 832–839. [Google Scholar] [CrossRef]
  8. Nie, D.; Wang, X.-W.; Shi, L.-C.; Lu, B.-L. EEG-based emotion recognition during watching movies. In Proceedings of the 2011 5th International IEEE/EMBS Conference on Neural Engineering, Cancun, Mexico, 27 April–1 May 2011; pp. 667–670. [Google Scholar] [CrossRef]
  9. Anh, V.H.; Van, M.N.; Ha, B.B.; Quyet, T.H. A real-time model based Support Vector Machine for emotion recognition through EEG. In Proceedings of the 2012 International Conference on Control, Automation and Information Sciences (ICCAIS), Saigon, Vietnam, 26–29 November 2012; pp. 191–196. [Google Scholar] [CrossRef]
  10. Fang, W.-C.; Wang, K.-Y.; Fahier, N.; Ho, Y.-L.; Huang, Y.-D. Development and Validation of an EEG-Based Real-Time Emotion Recognition System Using Edge AI Computing Platform with Convolutional Neural Network System-on-Chip Design. IEEE J. Emerg. Sel. Top. Circuits Syst. 2019, 9, 645–657. [Google Scholar] [CrossRef]
  11. Li, J.W.; Barma, S.; Mak, P.U. Single-Channel Selection for EEG-Based Emotion Recognition Using Brain Rhythm Sequencing. IEEE J. Biomed. Health Inform. 2022, 26, 2493–2503. [Google Scholar] [CrossRef]
  12. Li, Z.; Qiu, L.; Li, R. Enhancing BCI-Based Emotion Recognition Using an Improved Particle Swarm Optimization for Feature Selection. Sensors 2020, 20, 3028. [Google Scholar] [CrossRef]
  13. Liu, Y.-J.; Yu, M.; Zhao, G.; Song, J.; Ge, Y.; Shi, Y. Real-Time Movie-Induced Discrete Emotion Recognition from EEG Signals. IEEE Trans. Affect. Comput. 2018, 9, 550–562. [Google Scholar] [CrossRef]
  14. Cole, P.M.; Ramsook, K.A.; Ram, N. Emotion dysregulation as a dynamic process. Dev. Psychopathol. 2019, 31, 1191–1201. [Google Scholar] [CrossRef]
  15. Verduyn, P.; Delvaux, E.; Van Coillie, H.; Tuerlinckx, F.; Van Mechelen, I. Predicting the duration of emotional experience: Two experience sampling studies. Emotion 2009, 9, 83–91. [Google Scholar] [CrossRef] [PubMed]
  16. Verduyn, P.; Van Mechelen, I.; Kross, E.; Chezzi, C.; Van Bever, F. The relationship between self-distancing and the duration of negative and positive emotional experiences in daily life. Emotion 2012, 12, 1248–1263. [Google Scholar] [CrossRef] [PubMed]
  17. Verduyn, P.; Van Mechelen, I.; Tuerlinckx, F. The relation between event processing and the duration of emotional experience. Emotion 2011, 11, 20–28. [Google Scholar] [CrossRef] [PubMed]
  18. Parikh, A.; Täckström, O.; Das, D.; Uszkoreit, J. A Decomposable Attention Model for Natural Language Inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, TX, USA, 6 June 2016; pp. 2249–2255. [Google Scholar] [CrossRef]
  19. Ouyang, D.; Yuan, Y.; Li, G.; Guo, Z. The Effect of Time Window Length on EEG-Based Emotion Recognition. Sensors 2022, 22, 4939. [Google Scholar] [CrossRef]
  20. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  21. Vinyals, O.; Toshev, A.; Bengio, S.; Erhan, D. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3156–3164. [Google Scholar] [CrossRef]
  22. Zheng, W.-L.; Lu, B.-L. Investigating Critical Frequency Bands and Channels for EEG-Based Emotion Recognition with Deep Neural Networks. IEEE Trans. Auton. Ment. Dev. 2015, 7, 162–175. [Google Scholar] [CrossRef]
  23. Duan, R.-N.; Zhu, J.-Y.; Lu, B.-L. Differential entropy feature for EEG-based emotion classification. In Proceedings of the 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER), San Diego, CA, USA, 6–8 November 2013; pp. 81–84. [Google Scholar] [CrossRef]
  24. Zheng, W.-L.; Liu, W.; Lu, Y.; Lu, B.-L.; Cichocki, A. EmotionMeter: A Multimodal Framework for Recognizing Human Emotions. IEEE Trans. Cybern. 2019, 49, 1110–1122. [Google Scholar] [CrossRef]
  25. Cimtay, Y.; Ekmekcioglu, E. Investigating the Use of Pretrained Convolutional Neural Network on Cross-Subject and Cross-Dataset EEG Emotion Recognition. Sensors 2020, 20, 2034. [Google Scholar] [CrossRef]
  26. Song, T.; Zheng, W.; Song, P.; Cui, Z. EEG Emotion Recognition Using Dynamical Graph Convolutional Neural Networks. IEEE Trans. Affect. Comput. 2020, 11, 532–541. [Google Scholar] [CrossRef]
  27. Zheng, W.-L.; Zhu, J.-Y.; Lu, B.-L. Identifying Stable Patterns over Time for Emotion Recognition from EEG. IEEE Trans. Affect. Comput. 2019, 10, 417–429. [Google Scholar] [CrossRef]
  28. Asghar, M.A.; Khan, M.J.; Fawad; Amin, Y.; Rizwan, M.; Rahman, M.; Badnava, S.; Mirjavadi, S.S. EEG-Based Multi-Modal Emotion Recognition using Bag of Deep Features: An Optimal Feature Selection Approach. Sensors 2019, 19, 5218. [Google Scholar] [CrossRef] [PubMed]
  29. Li, M.; Qiu, M.; Kong, W.; Zhu, L.; Ding, Y. Fusion Graph Representation of EEG for Emotion Recognition. Sensors 2023, 23, 1404. [Google Scholar] [CrossRef] [PubMed]
  30. Rabcan, J.; Levashenko, V.; Zaitseva, E.; Kvassay, M. Review of Methods for EEG Signal Classification and Development of New Fuzzy Classification-Based Approach. IEEE Access 2020, 8, 189720–189734. [Google Scholar] [CrossRef]
  31. Wang, X.W.; Nie, D.; Lu, B.L. EEG-based emotion recognition using frequency domain features and support vector machines. In Proceedings of the International Conference on Neural Information Processing (ICONIP 2011), Shanghai, China, 13–17 November 2011; pp. 734–743. [Google Scholar] [CrossRef]
  32. Li, Y.; Zheng, W.; Cui, Z.; Zhang, T.; Zong, Y. A novel neural network model based on cerebral hemispheric asymmetry for EEG emotion recognition. In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI’18), Stockholm, Sweden, 13–19 July 2018; pp. 1561–1567. [Google Scholar]
  33. Li, Y. A Novel Bi-Hemispheric Discrepancy Model for EEG Emotion Recognition. IEEE Trans. Cogn. Dev. Syst. 2021, 13, 354–367. [Google Scholar] [CrossRef]
  34. Lai, C.Q.; Ibrahim, H.; Suandi, S.A.; Abdullah, M.Z. Convolutional Neural Network for Closed-Set Identification from Resting State Electroencephalography. Mathematics 2022, 10, 3442. [Google Scholar] [CrossRef]
Figure 1. Recognition results under different proportions of new emotions.
Figure 2. Framework for real-time EEG-based emotion recognition.
Figure 3. Recognition results with different time window lengths.
Figure 4. Model training loss changes with epoch on the SEED dataset.
Figure 5. The confusion matrix of experimental results based on the SEED dataset.
Figure 6. Model training loss changes with epoch on the SEED-IV dataset.
Figure 7. The confusion matrix of experimental results based on the SEED-IV dataset.
Table 1. The model used was compared with other methods on the SEED dataset, and the best result is highlighted in bold.

Studies        Accuracy
KNN [22]       72.60%
CNN [25]       78.34%
DGCNN [26]     90.40%
GELM [27]      91.07%
BODF [28]      93.80%
FGCN [29]      94.10%
This work      85.40%
Table 2. The model used was compared with other methods on the SEED-IV dataset, and the best result is highlighted in bold.

Studies        Accuracy
SVM [31]       56.61%
BiDANN [32]    70.29%
BiHDM [33]     74.35%
This work      74.26%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
