Method for Emotion Recognition of EEG Signals Based on Recursive Graph and Spatiotemporal Attention Mechanism
Abstract
1. Introduction
2. Materials and Methods
2.1. Methodology
2.1.1. Preprocessing
2.1.2. TCSA
2.1.3. MBConv-TCSA
2.1.4. FusedMBConv
2.1.5. TCSA-Efficientnet
1. Base Layer
2. Backbone Layer
3. Head Layer
**Algorithm 1** Process of TCSA-Efficientnet

```
 1: Input: x: (B, C, H, W)
 2: x ← ConvBNAct                        # Start TCSA-Efficientnet blocks
 3: for i in range(2):  x ← FusedMBConv
 4: for i in range(4):  x ← FusedMBConv
 5: for i in range(4):  x ← FusedMBConv
 6: for i in range(6):  x ← MBConv-TCSA
 7: for i in range(9):  x ← MBConv-TCSA
 8: for i in range(15): x ← MBConv-TCSA  # End TCSA-Efficientnet blocks
 9: x ← AdaptiveAvgPool2d(1)
10: x ← Flatten(x)
11: x ← Linear(num_features, num_classes)
12: Output: y: (B, num_classes)
```
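The stage schedule of Algorithm 1 can be sketched in plain Python. The block names here are stand-in strings rather than real convolutional modules; this only illustrates the layout of the backbone (2 + 4 + 4 FusedMBConv stages followed by 6 + 9 + 15 MBConv-TCSA stages), not the actual model implementation.

```python
# Illustrative sketch of the TCSA-Efficientnet backbone schedule (Algorithm 1).
# Each entry is (block type, number of repeats); block types are placeholders.

STAGES = [
    ("FusedMBConv", 2),
    ("FusedMBConv", 4),
    ("FusedMBConv", 4),
    ("MBConv-TCSA", 6),
    ("MBConv-TCSA", 9),
    ("MBConv-TCSA", 15),
]

def expand_schedule(stages):
    """Flatten (block_type, repeats) pairs into the full block sequence."""
    return [name for name, reps in stages for _ in range(reps)]

blocks = expand_schedule(STAGES)
print(len(blocks))                      # 40 backbone blocks in total
print(blocks.count("MBConv-TCSA"))      # 30 of them carry the TCSA attention
```

Counting the repeats shows that attention-augmented blocks make up the deeper three quarters of the backbone, while the shallow stages stay as plain fused convolutions.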
2.2. Experiment
2.2.1. Dataset Introduction
1. DEAP
2. DREAMER
2.2.2. Exchange Channels
2.2.3. Experiment Details
2.2.4. Evaluating Indicator
1. Accuracy
2. F1-score
3. AUC (Area Under Curve)
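For binary valence/arousal labels, the three indicators above can be written out in a few lines of plain Python. These are the textbook definitions for illustration only, not the evaluation code used in the experiments; in practice a library such as scikit-learn would normally be used.

```python
# Textbook definitions of the three evaluation metrics (binary case).

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

def auc(y_true, y_score):
    """AUC as the probability that a random positive outranks a random
    negative (ties count 1/2) -- the Mann-Whitney U formulation."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

AUC is threshold-independent, which is why it complements accuracy and F1-score when class ratios differ between valence and arousal splits.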
2.2.5. Experiment Design
1. Baseline Models
2. State-of-the-art Models
3. Ablation Experiment
3. Results
3.1. Experiment Results
3.2. Comparison of the DEAP Dataset
3.3. Comparison of the DREAMER Dataset
3.4. Comparison of State-of-the-Art Models
3.5. Ablation Experiment
1. For the CNN model, integrating TCSA resulted in an average accuracy increase of approximately 33.125 percentage points (Valence: +35.28 pp; Arousal: +30.97 pp). Concurrently, the F1-score and AUC improved by approximately 0.69 and 0.43, respectively.
2. The Vgg model exhibited an average accuracy gain of approximately 33.43 percentage points, with F1-score and AUC improvements of approximately 0.65 and 0.42, respectively.
3. Although ResNet-18 demonstrated relatively strong baseline performance, incorporating TCSA still led to an average accuracy improvement of 0.9 percentage points, alongside F1-score and AUC gains of approximately 0.01 and 0.002.
4. Efficientnet achieved an accuracy increase of 0.7 percentage points, with F1-score and AUC improvements of approximately 0.006 and 0.001.
1. The CNN model showed substantial improvements after TCSA integration: Valence accuracy increased from 63.6% to 75.91% (+12.31 pp), and Arousal accuracy rose from 77.65% to 85.08% (+7.43 pp).
2. The Vgg model demonstrated a similar trend: Valence accuracy increased from 63.0% to 79.43% (+16.43 pp), and Arousal accuracy improved from 76.57% to 87.5% (+10.93 pp). This indicates that TCSA yields particularly large gains for models with weaker baseline performance.
3. ResNet-18 achieved Valence and Arousal accuracy gains of 4.43 and 2.27 percentage points, respectively.
4. Efficientnet, which exhibited the strongest baseline performance without TCSA, still attained Valence and Arousal accuracy improvements of 2.65 and 2.60 percentage points after TCSA integration.
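The percentage-point figures quoted above follow directly from the accuracies in the ablation tables; a small script makes the arithmetic explicit (values in percent, taken from the DREAMER results).

```python
# Percentage-point gains after TCSA integration, DREAMER dataset.
# (before, after) accuracy pairs copied from the ablation results.

cnn = {"valence": (63.6, 75.91), "arousal": (77.65, 85.08)}
vgg = {"valence": (63.0, 79.43), "arousal": (76.57, 87.5)}

def pp_gain(before, after):
    """Improvement in percentage points, rounded to 2 decimals."""
    return round(after - before, 2)

print(pp_gain(*cnn["valence"]))   # 12.31
print(pp_gain(*cnn["arousal"]))   # 7.43
print(pp_gain(*vgg["valence"]))   # 16.43
print(pp_gain(*vgg["arousal"]))   # 10.93
```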
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Li, X.; Zhang, Y.; Tiwari, P.; Song, D.; Hu, B.; Yang, M.; Zhao, Z.; Kumar, N.; Marttinen, P. EEG Based Emotion Recognition: A Tutorial and Review. ACM Comput. Surv. 2023, 55, 79.
- Nikolova, D.; Petkova, P.; Manolova, A.; Georgieva, P. ECG-based Emotion Recognition: Overview of Methods and Applications. In Proceedings of ANNA '18: Advances in Neural Networks and Applications, St. Konstantin and Elena Resort, Bulgaria, 15–17 September 2018; pp. 1–5.
- Wu, G.; Liu, G.; Hao, M. The Analysis of Emotion Recognition from GSR Based on PSO. In Proceedings of the 2010 International Symposium on Intelligence Information Processing and Trusted Computing, Huanggang, China, 28–29 October 2010; pp. 360–363.
- Wang, L.; Hao, J.; Zhou, T.H. ECG Multi-Emotion Recognition Based on Heart Rate Variability Signal Features Mining. Sensors 2023, 23, 8636.
- Nawaz, R.; Cheah, K.H.; Nisar, H.; Yap, V.V. Comparison of different feature extraction methods for EEG-based emotion recognition. Biocybern. Biomed. Eng. 2020, 40, 910–926.
- Zheng, W.-L.; Zhu, J.-Y.; Peng, Y.; Lu, B.-L. EEG-based emotion classification using deep belief networks. In Proceedings of the 2014 IEEE International Conference on Multimedia and Expo (ICME), Chengdu, China, 14–18 July 2014; pp. 1–6.
- Petrantonakis, P.C.; Hadjileontiadis, L.J. Emotion recognition from EEG using higher order crossings. IEEE Trans. Inf. Technol. Biomed. 2010, 14, 186–197.
- Lin, Y.P.; Wang, C.H.; Jung, T.P.; Wu, T.L.; Jeng, S.K.; Duann, J.R.; Chen, J.H. EEG-based emotion recognition in music listening. IEEE Trans. Biomed. Eng. 2010, 57, 1798–1806.
- Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.-S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. DEAP: A database for emotion analysis using physiological signals. IEEE Trans. Affect. Comput. 2012, 3, 18–31.
- Jenke, R.; Peer, A.; Buss, M. Feature extraction and selection for emotion recognition from EEG. IEEE Trans. Affect. Comput. 2014, 5, 327–339.
- Subasi, A. EEG signal classification using wavelet feature extraction and a mixture of expert model. Expert Syst. Appl. 2007, 32, 1084–1093.
- Li, M.; Xu, H.; Liu, X.; Lu, S. Emotion recognition from multichannel EEG signals using k-nearest neighbor classification. Technol. Health Care 2018, 26, 509–519.
- Atkinson, J.; Campos, D. Improving BCI-based emotion recognition by combining EEG feature selection and kernel classifiers. Expert Syst. Appl. 2016, 47, 35–41.
- Chanel, G.; Rebetez, C.; Bétrancourt, M.; Pun, T. Emotion assessment from physiological signals for adaptation of game difficulty. IEEE Trans. Syst. Man Cybern. 2009, 41, 1052–1063.
- Li, X.; Song, D.; Zhang, P.; Yu, G.; Hu, B. Emotion recognition from multi-channel EEG data through convolutional recurrent neural network. In Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Shenzhen, China, 15–18 December 2016; pp. 352–359.
- Li, D.; Xie, L.; Chai, B.; Wang, Z.; Yang, H. Spatial-frequency convolutional self-attention network for EEG emotion recognition. Appl. Soft Comput. 2022, 122, 108740.
- Zhu, Y.; Guo, Y.; Zhu, W.; Di, L.; Yin, Z. Subject-independent emotion recognition of EEG signals using graph attention-based spatial-temporal pattern learning. In Proceedings of the 2022 41st Chinese Control Conference (CCC), Hefei, China, 25–27 July 2022; pp. 7070–7075.
- Luo, Y.; Lu, B.-L. EEG Data Augmentation for Emotion Recognition Using a Conditional Wasserstein GAN. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018.
- Soleymani, M.; Lichtenauer, J.; Pun, T.; Pantic, M. A multimodal database for affect recognition and implicit tagging. IEEE Trans. Affect. Comput. 2012, 3, 42–55.
- Zhang, X.; Wang, M.-J.; Guo, X.-D. Multi-modal Emotion Recognition Based on Deep Learning in Speech, Video and Text. In Proceedings of the 2020 IEEE 5th International Conference on Signal and Image Processing (ICSIP), Nanjing, China, 23–25 October 2020; pp. 328–333.
- Lan, Y.-T.; Liu, W.; Lu, B.-L. Multimodal Emotion Recognition Using Deep Generalized Canonical Correlation Analysis with an Attention Mechanism. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–6.
- Tan, M.; Le, Q. EfficientNetV2: Smaller Models and Faster Training. In Proceedings of the International Conference on Machine Learning (ICML), Virtual Event, 18–24 July 2021; PMLR, 2021; Volume 139, pp. 10096–10106. Available online: https://proceedings.mlr.press/v139/tan21a.html (accessed on 24 March 2026).
- Homan, R.W.; Herman, J.; Purdy, P. Cerebral location of international 10–20 system electrode placement. Electroencephalogr. Clin. Neurophysiol. 1987, 66, 376–382.
- Thiel, M.; Romano, M.C.; Kurths, J. How much information is contained in a recurrence plot? Phys. Lett. A 2004, 330, 343–349.
- Katsigiannis, S.; Ramzan, N. DREAMER: A Database for Emotion Recognition Through EEG and ECG Signals from Wireless Low-cost Off-the-Shelf Devices. IEEE J. Biomed. Health Inform. 2018, 22, 98–107.
- Fan, C.; Wang, J.; Huang, W.; Yang, X.; Pei, G.; Li, T.; Lv, Z. Light-weight residual convolution-based capsule network for EEG emotion recognition. Adv. Eng. Inform. 2024, 61, 102522.
- Yang, Y.; Wu, Q.; Fu, Y.; Chen, X. Continuous convolutional neural network with 3D input for EEG-based emotion recognition. In Proceedings of Neural Information Processing: 25th International Conference (ICONIP 2018), Siem Reap, Cambodia, 13–16 December 2018; Part VII; pp. 433–443.
- Suykens, J.; Lukas, L.; Van Dooren, P.; De Moor, B.; Vandewalle, J. Least squares support vector machine classifiers: A large scale algorithm. In Proceedings of the European Conference on Circuit Theory and Design (ECCTD), St. Julians, Malta, 29 August–2 September 1999; Volume 99, pp. 1–6.
- Ji, S.; Xu, W.; Yang, M.; Yu, K. 3D convolutional neural networks for human action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 221–231.
- Song, T.; Zheng, W.; Song, P.; Cui, Z. EEG emotion recognition using dynamical graph convolutional neural networks. IEEE Trans. Affect. Comput. 2018, 11, 532–541.
- Li, C.; Wang, F.; Zhao, Z.; Wang, H.; Schuller, B.W. Attention-Based Temporal Graph Representation Learning for EEG-Based Emotion Recognition. IEEE J. Biomed. Health Inform. 2024, 28, 5755–5767.
- Guo, W.; Wang, Y. Convolutional gated recurrent unit-driven multidimensional dynamic graph neural network for subject-independent emotion recognition. Expert Syst. Appl. 2024, 238, 121889.
- Zhang, Z.; Liu, Y.; Zhong, S.-H. GANSER: A Self-Supervised Data Augmentation Framework for EEG-Based Emotion Recognition. IEEE Trans. Affect. Comput. 2023, 14, 2048–2063.
- Rudakov, E.; Laurent, L.; Cousin, V.; Roshdi, A.; Fournier, R.; Nait-Ali, A.; Beyrouthy, T.; Al Kork, S. Multi-Task CNN model for emotion recognition from EEG Brain maps. In Proceedings of the 2021 4th International Conference on Bio-Engineering for Smart Technologies (BioSMART), Paris, France, 8–10 December 2021; pp. 1–4.
- Dhara, T.; Singh, P.K.; Mahmud, M. A Fuzzy Ensemble-Based Deep Learning Model for EEG-Based Emotion Recognition. Cogn. Comput. 2024, 16, 1364–1378.
- Xu, Y.; Du, Y.; Li, L.; Lai, H.; Zou, J.; Zhou, T.; Xiao, L.; Liu, L.; Ma, P. AMDET: Attention Based Multiple Dimensions EEG Transformer for Emotion Recognition. IEEE Trans. Affect. Comput. 2024, 15, 1067–1077.
- Li, C.; Zhang, Z.; Zhang, X.; Huang, G.; Liu, Y.; Chen, X. EEG-based emotion recognition via transformer neural architecture search. IEEE Trans. Ind. Inform. 2023, 19, 6016–6025.
- Liu, S.; Zhao, Y.; An, Y.; Zhao, J.; Wang, S.-H.; Yan, J. GLFANet: A global to local feature aggregation network for EEG emotion recognition. Biomed. Signal Process. Control 2023, 85, 104799.
- Liu, J.; He, L.; Chen, H.; Jiang, D. Directional Spatial and Spectral Attention Network (DSSA Net) for EEG-based emotion recognition. Front. Neurorobot. 2025, 18, 1481746.
- Liu, W.; Qiu, J.-L.; Zheng, W.-L.; Lu, B.-L. Multimodal emotion recognition using deep canonical correlation analysis. arXiv 2019, arXiv:1908.05349.
- Zhang, D.; Yao, L.; Chen, K.; Monaghan, J. A convolutional recurrent attention model for subject-independent EEG signal analysis. IEEE Signal Process. Lett. 2019, 26, 715–719.
- Liu, Y.; Ding, Y.; Li, C.; Cheng, J.; Song, R.; Wan, F.; Chen, X. Multi-channel EEG-based emotion recognition via a multi-level features guided capsule network. Comput. Biol. Med. 2020, 123, 103927.
- Li, Q.; Zhang, T.; Chen, C.L.P.; Zhang, X.; Hu, B. DGC-Link: Dual-Gate Chebyshev Linkage Network on EEG Emotion Recognition. IEEE Trans. Affect. Comput. 2025, 16, 3499–3511.


















| Subjects | Videos | Channels | Sampling Rate | Rating Dimensions |
|---|---|---|---|---|
| 32 | 40 | 40 | 128 Hz | V/A/D/L |
| Subjects | Videos | Channels | Sampling Rate | Rating Dimensions |
|---|---|---|---|---|
| 23 | 18 | 16 | 128 Hz | V/A/D |
| Electrode Channel | Cerebral Cortex Partition |
|---|---|
| Fp1, Fp2 | Frontal lobe |
| AF3, F3, F7, AF4, F4, F8, Fz | Left frontal lobe → right frontal lobe → midline |
| FC5, C3, T7, FC6, C4, T8, FC1, FC2, Cz | Left central area → right central area → midline |
| CP5, P3, P7, CP6, P4, P8, CP1, CP2, Pz | Left parietal lobe → right parietal lobe → midline |
| PO3, O1, PO4, O2, Oz | Occipital lobe |
| Electrode Channel | Cerebral Cortex Partition |
|---|---|
| AF3, F3, F7, AF4, F4, F8 | Left frontal lobe → right frontal lobe |
| FC5, T7, FC6, T8 | Left central area → right central area |
| P7, P8 | Left parietal lobe → right parietal lobe |
| O1, O2 | Occipital lobe |
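The two montages above overlap in exactly the 14 DREAMER channels. As an illustrative sketch only (the actual channel-exchange procedure is the one described in Section 2.2.2), selecting the shared channels from the 32-channel DEAP montage looks like this:

```python
# Channel lists copied from the montage tables above.
DEAP_CHANNELS = [
    "Fp1", "Fp2", "AF3", "F3", "F7", "AF4", "F4", "F8", "Fz",
    "FC5", "C3", "T7", "FC6", "C4", "T8", "FC1", "FC2", "Cz",
    "CP5", "P3", "P7", "CP6", "P4", "P8", "CP1", "CP2", "Pz",
    "PO3", "O1", "PO4", "O2", "Oz",
]
DREAMER_CHANNELS = [
    "AF3", "F3", "F7", "AF4", "F4", "F8",
    "FC5", "T7", "FC6", "T8", "P7", "P8", "O1", "O2",
]

# Keep only DEAP channels that also exist in DREAMER, preserving DEAP order.
dreamer_set = set(DREAMER_CHANNELS)
shared = [ch for ch in DEAP_CHANNELS if ch in dreamer_set]

print(len(DEAP_CHANNELS), len(DREAMER_CHANNELS), len(shared))  # 32 14 14
```

Every DREAMER electrode appears in the DEAP montage, so restricting DEAP to this shared subset gives directly comparable spatial inputs across the two datasets.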
| Metric | Valence | Arousal |
|---|---|---|
| Accuracy/STD | 99.11%/0.25 | 99.33%/0.58 |
| F1-score | 0.98 | 0.99 |
| AUC | 0.99 | 0.99 |
| Metric | Valence | Arousal |
|---|---|---|
| Accuracy/STD | 98.08%/0.93 | 97.49%/0.21 |
| F1-score | 0.97 | 0.92 |
| AUC | 0.99 | 0.98 |
| Models | Valence Acc | Valence F1-Score | Valence AUC | Arousal Acc | Arousal F1-Score | Arousal AUC |
|---|---|---|---|---|---|---|
| DT [27] | 68.28% | - | - | 71.16% | - | - |
| SVM [28] | 86.6% | - | - | 87.43% | - | - |
| MLP [27] | 87.73% | - | - | 88.88% | - | - |
| 3DCNN [29] | 89.45% | - | - | 90.42% | - | - |
| DGCNN [30] | 92.55% | - | - | 93.5% | - | - |
| TCSA-Efficientnet (ours) | 99.11% | 0.98 | 0.99 | 99.33% | 0.99 | 0.99 |
| Models | Valence Acc | Valence F1-Score | Valence AUC | Arousal Acc | Arousal F1-Score | Arousal AUC |
|---|---|---|---|---|---|---|
| DT [27] | 68.28% | - | - | 71.16% | - | - |
| SVM [28] | 86.6% | - | - | 87.43% | - | - |
| MLP [27] | 87.73% | - | - | 88.88% | - | - |
| 3DCNN [29] | 89.45% | - | - | 90.42% | - | - |
| DGCNN [30] | 92.55% | - | - | 93.5% | - | - |
| TCSA-Efficientnet (ours) | 98.08% | 0.97 | 0.99 | 97.49% | 0.92 | 0.98 |
| Models | Valence (Acc%/STD) | Arousal (Acc%/STD) |
|---|---|---|
| ATGRNet [31] | 78.22/18.33 | 76.46/19.48 |
| CGRU-MDGN [32] | 89.45/- | 90.24/- |
| GANSER [33] | 93.86/- | 94/- |
| MT-CNN [34] | 96.28/- | 96.62/- |
| Gompertz Fuzzy Ensemble [35] | 95.78/- | 95.97/- |
| AMDET [36] | 97.48/0.99 | 96.85/1.66 |
| LresCapsule [26] | 97.45/1.49 | 97.58/1.31 |
| Supernet [37] | 94.88/- | 93.39/- |
| GLFANet [38] | 94.53/- | 94.51/- |
| DSSA Net [39] | 94.97/4.23 | 94.73/3.27 |
| TCSA-Efficientnet (ours) | 99.11/0.25 | 99.33/0.58 |
| Models | Valence (Acc%/STD) | Arousal (Acc%/STD) |
|---|---|---|
| Supernet [37] | 94.88/- | 93.39/- |
| GLFANet [38] | 94.57/- | 94.82/- |
| DEEP-CCA [40] | 90.57/- | 88.99/- |
| CRAM [41] | 92.27/- | 93.03/- |
| MLF-CapsNet [42] | 93.94/0.37 | 94.29/0.43 |
| DGC-Link [43] | 98.58/1.74 | 92.04/5.23 |
| TCSA-Efficientnet (ours) | 98.08/0.93 | 97.49/0.21 |
| Models | Valence Acc | Valence F1-Score | Valence AUC | Arousal Acc | Arousal F1-Score | Arousal AUC |
|---|---|---|---|---|---|---|
| CNN | 63.3% | 0.2917 | 0.5633 | 67.18% | 0.3017 | 0.5834 |
| CNN + TCSA | 98.58% | 0.9857 | 0.9987 | 98.15% | 0.9762 | 0.9970 |
| Vgg | 62.41% | 0.3379 | 0.5709 | 68.4% | 0.3571 | 0.5812 |
| Vgg + TCSA | 98.62% | 0.9853 | 0.9985 | 99.05% | 0.9905 | 0.9991 |
| Resnet-18 | 97.59% | 0.9961 | 0.9961 | 97.71% | 0.9737 | 0.9959 |
| Resnet-18 + TCSA | 98.41% | 0.9840 | 0.9983 | 98.69% | 0.9856 | 0.9982 |
| Efficientnet | 98.39% | 0.9844 | 0.9985 | 98.71% | 0.9866 | 0.9979 |
| TCSA-Efficientnet (ours) | 99.11% | 0.9882 | 0.9987 | 99.33% | 0.9918 | 0.9993 |
| Models | Valence Acc | Valence F1-Score | Valence AUC | Arousal Acc | Arousal F1-Score | Arousal AUC |
|---|---|---|---|---|---|---|
| CNN | 63.6% | 0.2326 | 0.5217 | 77.65% | 0.09 | 0.5991 |
| CNN + TCSA | 75.91% | 0.6521 | 0.8117 | 85.08% | 0.5658 | 0.8417 |
| Vgg | 63.0% | 0.1804 | 0.5078 | 76.57% | 0.04 | 0.566 |
| Vgg + TCSA | 79.43% | 0.716 | 0.8508 | 87.5% | 0.6 | 0.88 |
| Resnet-18 | 73.26% | 0.7752 | - | 81.83% | 0.4886 | 0.7619 |
| Resnet-18 + TCSA | 77.69% | 0.6886 | 0.8239 | 84.1% | 0.5151 | 0.8015 |
| Efficientnet | 95.43% | 0.9403 | 0.9789 | 94.89% | 0.8879 | 0.959 |
| TCSA-Efficientnet (ours) | 98.08% | 0.9752 | 0.991 | 97.49% | 0.925 | 0.9806 |
| Models | DEAP Valence | DEAP Arousal | DREAMER Valence | DREAMER Arousal | Parameters | FLOPs |
|---|---|---|---|---|---|---|
| Efficientnet | 98.39% | 98.71% | 95.43% | 94.89% | 20.31 M | 2.90 G |
| EfficientNet + Temporal Convolution | 98.38% | 99.09% | 94.39% | 96.6% | 22.28 M | 3.07 G |
| EfficientNet + Channel Attention | 81.58% | 99.28% | 86.47% | 97.23% | 20.97 M | 2.95 G |
| EfficientNet + Multi-scale Spatial Attention | 99.4% | 84% | 73.76% | 83.43% | 48.53 M | 5.32 G |
| EfficientNet + Temporal Convolution + Channel Attention | 77.18% | 99.02% | 94.33% | 96.43% | 22.94 M | 3.12 G |
| EfficientNet + Temporal Convolution + Multi-scale Spatial Attention | 79.22% | 79.2% | 70.77% | 81.29% | 50.51 M | 5.49 G |
| EfficientNet + Channel Attention + Multi-scale Spatial Attention | 99.13% | 99.14% | 71.24% | 82.73% | 49.19 M | 5.38 G |
| EfficientNet + full TCSA module | 99.11% | 99.33% | 98.08% | 97.49% | 51.17 M | 5.55 G |
Cite as:
Huang, D.; Xu, L.; Li, Y. Method for Emotion Recognition of EEG Signals Based on Recursive Graph and Spatiotemporal Attention Mechanism. Brain Sci. 2026, 16, 377. https://doi.org/10.3390/brainsci16040377

