Virtual Electroencephalogram Acquisition: A Review on Electroencephalogram Generative Methods
Abstract
1. Introduction
2. Data Augmentation
3. Search Method
4. Basic Concepts
4.1. VAE
4.2. GAN
4.3. Diffusion Model
4.4. Classical Paradigms in BCI
4.5. Evaluation Metrics
5. VAE for EEG
5.1. Review of Related Work
5.1.1. VAE Models for Motor Imagery
5.1.2. VAE Models for Emotion Recognition
5.1.3. VAE Models for External Stimulation
5.1.4. VAE Models for Epilepsy
5.1.5. VAE Models for Other EEG Applications
5.2. Conclusions
6. GAN for EEG
6.1. Review of Related Work
6.1.1. GAN Models for Motor Imagery
6.1.2. GAN Models for Emotion Recognition
6.1.3. GAN Models for External Stimulation
6.1.4. GAN Models for Epilepsy
| Study | Objective | Dataset | GAN Type | Evaluation Metrics | Results |
|---|---|---|---|---|---|
| Wei et al., 2019 [118] | Propose a novel automatic epileptic EEG detection method | CHB-MIT Scalp | WGAN-GP | Classification accuracy, sensitivity, specificity | (vs. raw data) Acc: +0.0351; Sen: +0.0143; Spec: +0.0359 |
| Pascual et al., 2021 [120] | Generate epileptic EEG signals with privacy-preserving characteristics | EPILEPSIAE | EpilepsyGAN (cWGAN) | Classification accuracy, synthetic-data recall, geometric mean (Sen and Spec) | Acc: +1.3%; median recall: +3.2%; geometric mean: +1.3% |
| Usman et al., 2021 [121] | Generate preictal samples to address the class imbalance problem | CHB-MIT Scalp | GAN | Anticipation time, sensitivity, specificity | Average anticipation time: 32 min; Sen: 93%; Spec: 92.5% |
| Salazar et al., 2021 [122] | Oversample the training set of a classifier with extreme data scarcity | Private dataset (Barcelona test) | GAN + vMRF | Probability of error | (vs. SMOTE) −0.4 |
| Rasheed et al., 2021 [123] | Improve seizure prediction performance | CHB-MIT + Epilepsyecosystem | DCGAN | Sensitivity, false positive rate, accuracy, specificity | (CHB-MIT) AUC: +6%; (Epilepsyecosystem) AUC: +10%, Sen: +15% |
| Xu et al., 2022 [124] | Generate synthetic multi-channel EEG preictal samples | CHB-MIT | DCWGAN | FDRMSE, FID, WD, prediction accuracy, AUC | (vs. DCGAN) FDRMSE: −1.71, FID: −1, WD: −0.21; (vs. all-real data) Acc: +5.0, AUC: +0.028 |
6.1.5. GAN Models for Other EEG Applications
6.2. Conclusions
7. Diffusion Models for EEG
7.1. Review of Related Work
7.2. Conclusions
8. Summary and Outlook
- Ensuring the validity of generated data is an important issue. Most current research uses metrics such as FID, JS divergence, MMD, and IS to evaluate the similarity between generated and real data. To date, however, no metric directly assesses the quality of generated EEG signals or correlates reliably with downstream model performance, so designing better evaluation metrics remains an open problem. Future research should also focus on extracting more representative features from generated EEG signals and on establishing boundary conditions for the feature distribution of the generated data.
- From lab to real world. Although various generative models have made significant progress in the field of BCI in recent years, the vast majority of studies still rely on datasets collected under highly controlled laboratory conditions. According to the review by Altaheri et al. [9], EEG signals are inherently noisy, non-stationary, and highly susceptible to individual differences and environmental influences. Laboratory-collected data typically contain fewer physiological artifacts and less environmental interference, which does not accurately reflect the complexity of real-world applications. Moreover, although many deep learning models claim to work with raw EEG signals, in practice, they still heavily depend on preprocessed data, such as artifact removal and band-pass filtering, limiting their adaptability in unstructured environments. Therefore, designing robust MI-BCI systems that can operate under real-world conditions, which are characterized by high noise, dynamic variability, and low signal-to-noise ratios, remains a critical and unresolved challenge. Future research should place greater emphasis on collecting EEG data in naturalistic settings and developing models with strong generalization capabilities to enhance their practicality and reliability in real-world applications.
- The impact of individual differences. EEG signals differ substantially across subjects, yet current research mostly focuses on complex network architecture design while neglecting differences in the target data distribution. Future work could therefore develop models that explicitly account for inter-subject variability and can handle data from different subjects. Some existing studies have already employed unsupervised end-to-end inter-subject transfer methods to address this problem [74].
- The incorporation of other modalities. Other biological signals, such as ECG or EMG, share characteristics with EEG, including non-linearity, non-stationarity, and temporal correlation. Research has already transformed EEG signals into fMRI signals using multi-dimensional feature extraction and deep learning methods [168]. Future studies could combine EEG with other biological signals (such as eye-movement or electromyography data) or non-biological signals (such as images or speech) for further improvements.
- The strategy of using generated data. Some studies have explored how the proportion of generated data affects performance and have shown that the benefit of augmentation does not grow linearly with the amount of generated data [65]. Determining how much data to generate for the greatest gain in classifier performance therefore remains a direction for future exploration.
- Hybrid models. The three models reviewed in this paper are popular deep generative models, each with its own strengths, weaknesses, and suitable application scenarios. Future research could consider combining them, for example coupling a VAE with a diffusion model so that diffusion operates in a low-dimensional latent space, improving computational efficiency while capturing low-dimensional representations of neurophysiological recordings. Such hybrids could leverage the strengths of several generative models and further enhance the quality and diversity of synthetic data. Current research has already combined cGAN with VAE to translate scalp EEG signals into higher-quality intracranial EEG [135].
- Model interpretability. VAE, GAN, and diffusion models are all data-driven: rather than relying on an understanding of the system's intrinsic physical or biological mechanisms, they learn from large amounts of data to generate realistic samples. Future research should focus on improving the interpretability of these models to better understand their decision-making processes and the characteristics of the data they generate. Some studies have already employed Layer-Wise Relevance Propagation (LRP) to explain such models' decisions [135].
- The integration of large models. The emergence of large-scale EEG foundation models such as LaBraM [169], CBraMod [170] and NeuroLM [171] has not only advanced representation learning and multi-task transfer but also offered new inspiration for EEG data augmentation. These models, trained on massive unlabeled EEG datasets using techniques like masked modeling, autoregressive prediction, and instruction tuning, have demonstrated strong capabilities in learning structured spatiotemporal dependencies and in implicitly capturing the generative properties of EEG dynamics. From a modeling perspective, such architectures provide the foundation for new forms of data augmentation: masked patch modeling enables missing data reconstruction or pseudosample creation; neural tokenization combined with language-model-style decoders can support conditional EEG synthesis; and pre-trained latent representations can serve as priors for downstream generative modules. Looking ahead, these foundation models could be extended to support reconstruction-based generation, label-conditional generation, or cross-task transfer augmentation, offering more flexible and efficient solutions for data augmentation tasks.
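The similarity metrics discussed in the evaluation point above can be computed directly from feature vectors. As an illustrative sketch only (not any reviewed study's implementation), a biased kernel two-sample MMD estimate [146] between "real" and "generated" feature sets, with toy Gaussian vectors standing in for learned EEG features:

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased squared-MMD estimate with an RBF kernel.

    X: (n, d) feature vectors from real EEG epochs (here: toy data).
    Y: (m, d) feature vectors from generated EEG epochs (here: toy data).
    """
    def k(A, B):
        # Pairwise squared Euclidean distances -> RBF kernel matrix.
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, (200, 8))   # stand-in for real EEG features
close = rng.normal(0.0, 1.0, (200, 8))  # generator matching the real distribution
far = rng.normal(2.0, 1.0, (200, 8))    # generator with a shifted distribution

# A worse generator yields a larger distance to the real feature set.
assert rbf_mmd2(real, far) > rbf_mmd2(real, close)
```

The biased estimator is always non-negative and vanishes only when the two sample sets coincide, which makes it convenient as a relative ranking of generators rather than an absolute quality score.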
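The ratio-sweep idea from the point on generated-data strategy can be made concrete with a minimal protocol: train on the real set plus varying amounts of synthetic data and pick the ratio that maximizes held-out accuracy. Toy Gaussian features and a nearest-centroid classifier stand in here for real EEG features and a real decoder; all numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n, shift, noise=1.0):
    """Two-class toy 'EEG feature' data: class 0 at the origin, class 1 shifted."""
    X = rng.normal(0.0, noise, (2 * n, 4))
    X[n:] += shift
    y = np.repeat([0, 1], n)
    return X, y

def centroid_accuracy(Xtr, ytr, Xte, yte):
    # Classify each test point by its nearest class centroid.
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - c1, axis=1) < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return (pred == yte).mean()

X_real, y_real = make_data(20, shift=1.5)            # scarce real training data
X_test, y_test = make_data(500, shift=1.5)           # held-out evaluation set
X_gen, y_gen = make_data(500, shift=1.5, noise=1.3)  # slightly imperfect generator

# Sweep the generated-to-real ratio, keeping classes balanced, and record accuracy.
scores = {}
for ratio in [0.0, 0.5, 1.0, 2.0, 5.0]:
    half = int(ratio * len(y_real)) // 2             # generated samples per class
    Xtr = np.vstack([X_real, X_gen[y_gen == 0][:half], X_gen[y_gen == 1][:half]])
    ytr = np.concatenate([y_real, np.zeros(half, int), np.ones(half, int)])
    scores[ratio] = centroid_accuracy(Xtr, ytr, X_test, y_test)

best = max(scores, key=scores.get)  # ratio with the highest held-out accuracy
```

In practice the sweep would be repeated over cross-validation folds, since the optimal ratio depends on both generator quality and the size of the real training set.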
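The masked-patch modeling mentioned in the last point can be sketched without a trained model: mask random temporal patches of an epoch, then let a reconstruction module fill them in to form a pseudo-sample. Linear interpolation below is a deliberately crude stand-in for a pretrained foundation-model decoder, and all shapes and parameters are illustrative assumptions:

```python
import numpy as np

def mask_patches(epoch, n_patches=3, patch_len=25, rng=None):
    """Zero out random temporal patches; return the masked epoch and boolean mask.

    epoch: (channels, samples) array for one EEG trial.
    """
    rng = rng if rng is not None else np.random.default_rng()
    masked = epoch.copy()
    mask = np.zeros(epoch.shape[1], dtype=bool)
    for _ in range(n_patches):
        start = rng.integers(0, epoch.shape[1] - patch_len)
        mask[start:start + patch_len] = True
    masked[:, mask] = 0.0
    return masked, mask

def interpolate_fill(masked, mask):
    """Stand-in 'decoder': fill masked samples per channel by linear interpolation."""
    idx = np.arange(masked.shape[1])
    out = masked.copy()
    for ch in range(masked.shape[0]):
        out[ch, mask] = np.interp(idx[mask], idx[~mask], masked[ch, ~mask])
    return out

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 250)                                # 1 s at 250 Hz
epoch = np.sin(2 * np.pi * 10 * t)[None, :].repeat(4, 0)  # 4-channel 10 Hz toy "EEG"
masked, mask = mask_patches(epoch, rng=rng)
pseudo = interpolate_fill(masked, mask)                   # reconstruction-based pseudo-sample
```

Replacing `interpolate_fill` with a pretrained masked-modeling decoder would turn the same pipeline into the reconstruction-based augmentation the foundation-model point envisions.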
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Biasiucci, A.; Franceschiello, B.; Murray, M.M. Electroencephalography. Curr. Biol. 2019, 29, R80–R85. [Google Scholar] [PubMed]
- McFarland, D.J.; Wolpaw, J.R. EEG-based brain–computer interfaces. Curr. Opin. Biomed. Eng. 2017, 4, 194–200. [Google Scholar] [PubMed]
- Vidal, J.J. Toward direct brain-computer communication. Annu. Rev. Biophys. Bioeng. 1973, 2, 157–180. [Google Scholar]
- Habashi, A.G.; Azab, A.M.; Eldawlatly, S.; Aly, G.M. Generative adversarial networks in EEG analysis: An overview. J. Neuroeng. Rehabil. 2023, 20, 40. [Google Scholar] [PubMed]
- Ko, W.; Jeon, E.; Jeong, S.; Phyo, J.; Suk, H.I. A survey on deep learning-based short/zero-calibration approaches for EEG-based brain–computer interfaces. Front. Hum. Neurosci. 2021, 15, 643386. [Google Scholar]
- Lashgari, E.; Liang, D.; Maoz, U. Data augmentation for deep-learning-based electroencephalography. J. Neurosci. Methods 2020, 346, 108885. [Google Scholar]
- Nia, A.F.; Tang, V.; Talou, G.M.; Billinghurst, M. Synthesizing affective neurophysiological signals using generative models: A review paper. J. Neurosci. Methods 2024, 406, 110129. [Google Scholar]
- Carrle, F.P.; Hollenbenders, Y.; Reichenbach, A. Generation of synthetic EEG data for training algorithms supporting the diagnosis of major depressive disorder. Front. Neurosci. 2023, 17, 1219133. [Google Scholar]
- Altaheri, H.; Muhammad, G.; Alsulaiman, M.; Amin, S.U.; Altuwaijri, G.A.; Abdul, W.; Bencherif, M.A.; Faisal, M. Deep learning techniques for classification of electroencephalogram (EEG) motor imagery (MI) signals: A review. Neural Comput. Appl. 2023, 35, 14681–14722. [Google Scholar]
- Rosenberg, R.S.; Van Hout, S. The American Academy of Sleep Medicine inter-scorer reliability program: Sleep stage scoring. J. Clin. Sleep Med. 2013, 9, 81–87. [Google Scholar]
- Rommel, C.; Paillard, J.; Moreau, T.; Gramfort, A. Data augmentation for learning predictive models on EEG: A systematic comparison. J. Neural Eng. 2022, 19, 066020. [Google Scholar]
- Alhussein, M.; Muhammad, G.; Hossain, M.S. EEG Pathology Detection Based on Deep Learning. IEEE Access 2019, 7, 27781–27788. [Google Scholar] [CrossRef]
- Fu, R.; Wang, Y.; Jia, C. A new data augmentation method for EEG features based on the hybrid model of broad-deep networks. Expert Syst. Appl. 2022, 202, 117386. [Google Scholar]
- Schirrmeister, R.T.; Springenberg, J.T.; Fiederer, L.D.J.; Glasstetter, M.; Eggensperger, K.; Tangermann, M.; Hutter, F.; Burgard, W.; Ball, T. Deep learning with convolutional neural networks for EEG decoding and visualization. Hum. Brain Mapp. 2017, 38, 5391–5420. [Google Scholar] [PubMed]
- Freer, D.; Yang, G.Z. Data augmentation for self-paced motor imagery classification with C-LSTM. J. Neural Eng. 2020, 17, 016041. [Google Scholar]
- Lotte, F.; Bougrain, L.; Cichocki, A.; Clerc, M.; Congedo, M.; Rakotomamonjy, A.; Yger, F. A review of classification algorithms for EEG-based brain–computer interfaces: A 10 year update. J. Neural Eng. 2018, 15, 031005. [Google Scholar]
- Mohsenvand, M.N.; Izadi, M.R.; Maes, P. Contrastive representation learning for electroencephalogram classification. In Proceedings of the Machine Learning for Health, Durham, NC, USA, 7–8 August 2020; PMLR: Birmingham, UK, 2020; pp. 238–253. [Google Scholar]
- Rommel, C.; Moreau, T.; Paillard, J.; Gramfort, A. CADDA: Class-wise automatic differentiable data augmentation for EEG signals. arXiv 2021, arXiv:2106.13695. [Google Scholar]
- Schwabedal, J.T.; Snyder, J.C.; Cakmak, A.; Nemati, S.; Clifford, G.D. Addressing class imbalance in classification problems of noisy signals by using fourier transform surrogates. arXiv 2018, arXiv:1806.08675. [Google Scholar]
- Cheng, J.Y.; Goh, H.; Dogrusoz, K.; Tuzel, O.; Azemi, E. Subject-aware contrastive learning for biosignals. arXiv 2020, arXiv:2007.04871. [Google Scholar]
- Zhang, K.; Xu, G.; Han, Z.; Ma, K.; Zheng, X.; Chen, L.; Duan, N.; Zhang, S. Data augmentation for motor imagery signal classification based on a hybrid neural network. Sensors 2020, 20, 4485. [Google Scholar] [CrossRef]
- Shovon, T.H.; Al Nazi, Z.; Dash, S.; Hossain, M.F. Classification of motor imagery EEG signals with multi-input convolutional neural network by augmenting STFT. In Proceedings of the 2019 5th International Conference on Advances in Electrical Engineering (ICAEE), Dhaka, Bangladesh, 26–28 September 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 398–403. [Google Scholar]
- Sakai, A.; Minoda, Y.; Morikawa, K. Data augmentation methods for machine-learning-based classification of bio-signals. In Proceedings of the 2017 10th Biomedical Engineering International Conference (BMEiCON), Hokkaido, Japan, 31 August–2 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–4. [Google Scholar]
- Deiss, O.; Biswal, S.; Jin, J.; Sun, H.; Westover, M.B.; Sun, J. HAMLET: Interpretable human and machine co-learning technique. arXiv 2018, arXiv:1803.09702. [Google Scholar]
- Saeed, A.; Grangier, D.; Pietquin, O.; Zeghidour, N. Learning from heterogeneous EEG signals with differentiable channel reordering. In Proceedings of the ICASSP 2021–2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, 6–11 June 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1255–1259. [Google Scholar]
- Kingma, D.P.; Welling, M. Auto-encoding variational bayes. arXiv 2013, arXiv:1312.6114. [Google Scholar]
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. In Advances in Neural Information Processing Systems 27 (NIPS 2014); Curran Associates, Inc.: Red Hook, NY, USA, 2014. [Google Scholar]
- Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A.C. Improved training of wasserstein gans. Adv. Neural Inf. Process. Syst. 2017, 30, 5769–5779. [Google Scholar]
- Jeannerod, M. The representing brain: Neural correlates of motor intention and imagery. Behav. Brain Sci. 1994, 17, 187–202. [Google Scholar]
- Adolphs, R.; Anderson, D. The Neuroscience of Emotion: A New Synthesis; Princeton University Press: Princeton, NJ, USA, 2018. [Google Scholar]
- Jafari, M.; Shoeibi, A.; Khodatars, M.; Bagherzadeh, S.; Shalbaf, A.; García, D.L.; Gorriz, J.M.; Acharya, U.R. Emotion recognition in EEG signals using deep learning methods: A review. Comput. Biol. Med. 2023, 165, 107450. [Google Scholar]
- Abiri, R.; Borhani, S.; Sellers, E.W.; Jiang, Y.; Zhao, X. A comprehensive review of EEG-based brain–computer interface paradigms. J. Neural Eng. 2019, 16, 011001. [Google Scholar]
- Wang, M.; Daly, I.; Allison, B.Z.; Jin, J.; Zhang, Y.; Chen, L.; Wang, X. A new hybrid BCI paradigm based on P300 and SSVEP. J. Neurosci. Methods 2015, 244, 16–25. [Google Scholar] [PubMed]
- Xu, D.; Tang, F.; Li, Y.; Zhang, Q.; Feng, X. An Analysis of Deep Learning Models in SSVEP-Based BCI: A Survey. Brain Sci. 2023, 13, 483. [Google Scholar]
- Sazgar, M.; Young, M.G.; Sazgar, M.; Young, M.G. Seizures and epilepsy. In Absolute Epilepsy and EEG Rotation Review: Essentials for Trainees; Springer: Berlin/Heidelberg, Germany, 2019; pp. 9–46. [Google Scholar]
- Fisher, R.S. The new classification of seizures by the international league against epilepsy 2017. Curr. Neurol. Neurosci. Rep. 2017, 17, 1–6. [Google Scholar]
- Aznan, N.K.N.; Atapour-Abarghouei, A.; Bonner, S.; Connolly, J.D.; Al Moubayed, N.; Breckon, T.P. Simulating brain signals: Creating synthetic eeg data via neural-based generative models for improved ssvep classification. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–8. [Google Scholar]
- Aznan, N.K.N.; Connolly, J.D.; Moubayed, N.A.; Breckon, T. Using Variable Natural Environment Brain-Computer Interface Stimuli for Real-time Humanoid Robot Navigation. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 4889–4895. [Google Scholar]
- Luo, Y.; Zhu, L.Z.; Wan, Z.Y.; Lu, B.L. Data augmentation for enhancing EEG-based emotion recognition with deep generative models. J. Neural Eng. 2020, 17, 056021. [Google Scholar]
- Krishna, G.; Tran, C.; Carnahan, M.; Tewfik, A. Constrained variational autoencoder for improving EEG based speech recognition systems. arXiv 2020, arXiv:2006.02902. [Google Scholar]
- Krishna, G.; Tran, C.; Carnahan, M.; Han, Y.; Tewfik, A.H. Improving eeg based continuous speech recognition. arXiv 2019, arXiv:1911.11610. [Google Scholar]
- Krishna, G.; Tran, C.; Carnahan, M.; Tewfik, A. Advancing speech recognition with no speech or with noisy speech. In Proceedings of the 2019 27th European Signal Processing Conference (EUSIPCO), A Coruña, Spain, 2–6 September 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–5. [Google Scholar]
- Özdenizci, O.; Erdoğmuş, D. On the use of generative deep neural networks to synthesize artificial multichannel EEG signals. In Proceedings of the 2021 10th International IEEE/EMBS Conference on Neural Engineering (NER), Virtual, 4–6 May 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 427–430. [Google Scholar]
- Yang, J.; Yu, H.; Shen, T.; Song, Y.; Chen, Z. 4-class mi-eeg signal generation and recognition with cvae-gan. Appl. Sci. 2021, 11, 1798. [Google Scholar]
- Bao, G.; Yan, B.; Tong, L.; Shu, J.; Wang, L.; Yang, K.; Zeng, Y. Data augmentation for EEG-based emotion recognition using generative adversarial networks. Front. Comput. Neurosci. 2021, 15, 723843. [Google Scholar]
- Bethge, D.; Hallgarten, P.; Grosse-Puppendahl, T.; Kari, M.; Chuang, L.L.; Özdenizci, O.; Schmidt, A. EEG2Vec: Learning affective EEG representations via variational autoencoders. In Proceedings of the 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Prague, Czech Republic, 9–12 October 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 3150–3157. [Google Scholar]
- George, O.; Smith, R.; Madiraju, P.; Yahyasoltani, N.; Ahamed, S.I. Data augmentation strategies for EEG-based motor imagery decoding. Heliyon 2022, 8, e10240. [Google Scholar]
- Cho, H.; Ahn, M.; Ahn, S.; Kwon, M.; Jun, S.C. EEG datasets for motor imagery brain–computer interface. GigaScience 2017, 6, gix034. [Google Scholar]
- Kaya, M.; Binli, M.K.; Ozbay, E.; Yanar, H.; Mishchenko, Y. A large electroencephalographic motor imagery dataset for electroencephalographic brain computer interfaces. Sci. Data 2018, 5, 1–16. [Google Scholar]
- Wang, Y.; Qiu, S.; Li, D.; Du, C.; Lu, B.L.; He, H. Multi-modal domain adaptation variational autoencoder for EEG-based emotion recognition. IEEE/CAA J. Autom. Sin. 2022, 9, 1612–1626. [Google Scholar]
- Li, H.; Yu, S.; Principe, J. Causal recurrent variational autoencoder for medical time series generation. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 8562–8570. [Google Scholar]
- Kramer, M.A.; Kolaczyk, E.D.; Kirsch, H.E. Emergent network topology at seizure onset in humans. Epilepsy Res. 2008, 79, 173–186. [Google Scholar]
- Zancanaro, A.; Zoppis, I.; Manzoni, S.; Cisotto, G. veegnet: A new deep learning model to classify and generate eeg. In Proceedings of the 9th International Conference on Information and Communication Technologies for Ageing Well and e-Health, ICT4AWE 2023, Prague, Czech Republic, 22–24 April 2023; Science and Technology Publications: Tokyo, Japan, 2023; Volume 2023, pp. 245–252. [Google Scholar]
- Ahmed, T.; Longo, L. Interpreting disentangled representations of person-specific convolutional variational autoencoders of spatially preserving eeg topographic maps via clustering and visual plausibility. Information 2023, 14, 489. [Google Scholar] [CrossRef]
- Tian, C.; Ma, Y.; Cammon, J.; Fang, F.; Zhang, Y.; Meng, M. Dual-encoder VAE-GAN with spatiotemporal features for emotional EEG data augmentation. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 2018–2027. [Google Scholar] [CrossRef] [PubMed]
- Lawhern, V.J.; Solon, A.J.; Waytowich, N.R.; Gordon, S.M.; Hung, C.P.; Lance, B.J. EEGNet: A compact convolutional neural network for EEG-based brain–computer interfaces. J. Neural Eng. 2018, 15, 056013. [Google Scholar] [CrossRef]
- Zheng, W.L.; Lu, B.L. Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks. IEEE Trans. Auton. Ment. Dev. 2015, 7, 162–175. [Google Scholar] [CrossRef]
- Zheng, W.L.; Liu, W.; Lu, Y.; Lu, B.L.; Cichocki, A. Emotionmeter: A multimodal framework for recognizing human emotions. IEEE Trans. Cybern. 2018, 49, 1110–1122. [Google Scholar]
- Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. Deap: A database for emotion analysis; using physiological signals. IEEE Trans. Affect. Comput. 2011, 3, 18–31. [Google Scholar] [CrossRef]
- Abdelfattah, S.M.; Abdelrahman, G.M.; Wang, M. Augmenting the size of EEG datasets using generative adversarial networks. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–6. [Google Scholar]
- Goldberger, A.L.; Amaral, L.A.; Glass, L.; Hausdorff, J.M.; Ivanov, P.C.; Mark, R.G.; Mietus, J.E.; Moody, G.B.; Peng, C.K.; Stanley, H.E. PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation 2000, 101, e215–e220. [Google Scholar] [CrossRef]
- Hartmann, K.G.; Schirrmeister, R.T.; Ball, T. EEG-GAN: Generative adversarial networks for electroencephalograhic (EEG) brain signals. arXiv 2018, arXiv:1806.01875. [Google Scholar]
- Zhang, Q.; Liu, Y. Improving brain computer interface performance by data augmentation with conditional deep convolutional generative adversarial networks. arXiv 2018, arXiv:1806.07108. [Google Scholar]
- Roy, S.; Dora, S.; McCreadie, K.; Prasad, G. MIEEG-GAN: Generating artificial motor imagery electroencephalography signals. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–8. [Google Scholar] [CrossRef]
- Debie, E.; Moustafa, N.; Whitty, M.T. A privacy-preserving generative adversarial network method for securing EEG brain signals. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–8. [Google Scholar] [CrossRef]
- Brunner, C.; Leeb, R.; Müller-Putz, G.; Schlögl, A.; Pfurtscheller, G. BCI Competition 2008–Graz Data Set A; Institute for Knowledge Discovery (Laboratory of Brain-Computer Interfaces), Graz University of Technology: Graz, Austria, 2008; Volume 16, pp. 1–6. [Google Scholar]
- Luo, T.J.; Fan, Y.; Chen, L.; Guo, G.; Zhou, C. EEG signal reconstruction using a generative adversarial network with wasserstein distance and temporal-spatial-frequency loss. Front. Neuroinform. 2020, 14, 15. [Google Scholar] [CrossRef]
- Fahimi, F.; Dosen, S.; Ang, K.K.; Mrachacz-Kersting, N.; Guan, C. Generative adversarial networks-based data augmentation for brain–computer interface. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 4039–4051. [Google Scholar] [CrossRef]
- Song, Y.; Yang, L.; Jia, X.; Xie, L. Common spatial generative adversarial networks based EEG data augmentation for cross-subject brain-computer interface. arXiv 2021, arXiv:2102.04456. [Google Scholar]
- Xu, F.; Rong, F.; Leng, J.; Sun, T.; Zhang, Y.; Siddharth, S.; Jung, T.P. Classification of left-versus right-hand motor imagery in stroke patients using supplementary data generated by CycleGAN. IEEE Trans. Neural Syst. Rehabil. Eng. 2021, 29, 2417–2424. [Google Scholar] [CrossRef]
- Xie, J.; Chen, S.; Zhang, Y.; Gao, D.; Liu, T. Combining generative adversarial networks and multi-output CNN for motor imagery classification. J. Neural Eng. 2021, 18, 046026. [Google Scholar] [CrossRef]
- Raoof, I.; Gupta, M.K. A conditional input-based GAN for generating spatio-temporal motor imagery electroencephalograph data. Neural Comput. Appl. 2023, 35, 21841–21861. [Google Scholar] [CrossRef]
- Dong, Y.; Tang, X.; Tan, F.; Li, Q.; Wang, Y.; Zhang, H.; Xie, J.; Liang, W.; Li, G.; Fang, P. An Approach for EEG Data Augmentation Based on Deep Convolutional Generative Adversarial Network. In Proceedings of the 2022 IEEE International Conference on Cyborg and Bionic Systems (CBS), Wuhan, China, 24–26 March 2023; pp. 347–351. [Google Scholar] [CrossRef]
- Yin, K.; Lim, E.Y.; Lee, S.W. GITGAN: Generative inter-subject transfer for EEG motor imagery analysis. Pattern Recognit. 2024, 146, 110015. [Google Scholar] [CrossRef]
- Lee, M.H.; Kwon, O.Y.; Kim, Y.J.; Kim, H.K.; Lee, Y.E.; Williamson, J.; Fazli, S.; Lee, S.W. EEG dataset and OpenBMI toolbox for three BCI paradigms: An investigation into BCI illiteracy. GigaScience 2019, 8, giz002. [Google Scholar] [CrossRef]
- Leeb, R.; Brunner, C.; Müller-Putz, G.; Schlögl, A.; Pfurtscheller, G. BCI Competition 2008–Graz Data Set B; Graz University of Technology: Graz, Austria, 2008; Volume 16, pp. 1–6. [Google Scholar]
- Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein generative adversarial networks. In Proceedings of the International Conference on Machine Learning, Sydney, NSW, Australia, 6–11 August 2017; PMLR: Birmingham, UK, 2017; pp. 214–223. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Luo, T.J.; Lv, J.; Chao, F.; Zhou, C. Effect of different movement speed modes on human action observation: An EEG study. Front. Neurosci. 2018, 12, 219. [Google Scholar] [CrossRef] [PubMed]
- Luciw, M.D.; Jarocka, E.; Edin, B.B. Multi-channel EEG recordings during 3936 grasp and lift trials with varying weight and friction. Sci. Data 2014, 1, 1–11. [Google Scholar] [CrossRef]
- Tangermann, M.; Müller, K.R.; Aertsen, A.; Birbaumer, N.; Braun, C.; Brunner, C.; Leeb, R.; Mehring, C.; Miller, K.J.; Müller-Putz, G.R.; et al. Review of the BCI competition IV. Front. Neurosci. 2012, 6, 55. [Google Scholar] [CrossRef]
- Gu, X.; Cai, W.; Gao, M.; Jiang, Y.; Ning, X.; Qian, P. Multi-source domain transfer discriminative dictionary learning modeling for electroencephalogram-based emotion recognition. IEEE Trans. Comput. Soc. Syst. 2022, 9, 1604–1612. [Google Scholar] [CrossRef]
- Luo, Y.; Lu, B.L. EEG data augmentation for emotion recognition using a conditional Wasserstein GAN. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 2535–2538. [Google Scholar]
- Chang, S.; Jun, H. Hybrid deep-learning model to recognise emotional responses of users towards architectural design alternatives. J. Asian Archit. Build. Eng. 2019, 18, 381–391. [Google Scholar] [CrossRef]
- Chang, S.W.; Dong, W.H.; Jun, H.J. An EEG-based Deep Neural Network Classification Model for Recognizing Emotion of Users in Early Phase of Design. J. Archit. Inst. Korea Plan. Des. 2018, 34, 85–94. [Google Scholar]
- Dong, Y.; Ren, F. Multi-reservoirs EEG signal feature sensing and recognition method based on generative adversarial networks. Comput. Commun. 2020, 164, 177–184. [Google Scholar] [CrossRef]
- Zhang, A.; Su, L.; Zhang, Y.; Fu, Y.; Wu, L.; Liang, S. EEG data augmentation for emotion recognition with a multiple generator conditional Wasserstein GAN. Complex Intell. Syst. 2021, 8, 3059–3071. [Google Scholar] [CrossRef]
- Liang, Z.; Zhou, R.; Zhang, L.; Li, L.; Huang, G.; Zhang, Z.; Ishii, S. EEGFuseNet: Hybrid unsupervised deep feature characterization and fusion for high-dimensional EEG with an application to emotion recognition. IEEE Trans. Neural Syst. Rehabil. Eng. 2021, 29, 1913–1925. [Google Scholar] [CrossRef]
- Pan, B.; Zheng, W. Emotion recognition based on EEG using generative adversarial nets and convolutional neural network. Comput. Math. Methods Med. 2021, 2021, 2520394. [Google Scholar] [CrossRef]
- Liu, Q.; Hao, J.; Guo, Y. EEG data augmentation for emotion recognition with a task-driven GAN. Algorithms 2023, 16, 118. [Google Scholar] [CrossRef]
- Qiao, W.; Sun, L.; Wu, J.; Wang, P.; Li, J.; Zhao, M. EEG emotion recognition model based on attention and GAN. IEEE Access 2024, 12, 32308–32319. [Google Scholar] [CrossRef]
- Gretton, A.; Borgwardt, K.; Rasch, M.; Schölkopf, B.; Smola, A. A kernel method for the two-sample-problem. Adv. Neural Inf. Process. Syst. 2006, 19, 513–520. [Google Scholar]
- Li, X.; Song, D.; Zhang, P.; Zhang, Y.; Hou, Y.; Hu, B. Exploring EEG features in cross-subject emotion recognition. Front. Neurosci. 2018, 12, 162. [Google Scholar] [CrossRef]
- Li, Y.; Huang, J.; Wang, H.; Zhong, N. Study of emotion recognition based on fusion multi-modal bio-signal with SAE and LSTM recurrent neural network. J. Commun. 2017, 38, 109–120. [Google Scholar]
- Zheng, W.L.; Zhu, J.Y.; Lu, B.L. Identifying stable patterns over time for emotion recognition from EEG. IEEE Trans. Affect. Comput. 2017, 10, 417–429. [Google Scholar] [CrossRef]
- Ganin, Y.; Lempitsky, V. Unsupervised domain adaptation by backpropagation. In Proceedings of the International Conference on Machine Learning, Lille, France, 7–9 July 2015; PMLR: Birmingham, UK, 2015; pp. 1180–1189. [Google Scholar]
- Odena, A.; Olah, C.; Shlens, J. Conditional image synthesis with auxiliary classifier gans. In Proceedings of the International Conference on Machine Learning, Sydney, NSW, Australia, 6–11 August 2017; PMLR: Birmingham, UK, 2017; pp. 2642–2651. [Google Scholar]
- Zhou, D.; Huang, J.; Schölkopf, B. Learning with hypergraphs: Clustering, classification, and embedding. Adv. Neural Inf. Process. Syst. 2006, 19, 1601–1608. [Google Scholar]
- Katsigiannis, S.; Ramzan, N. DREAMER: A database for emotion recognition through EEG and ECG signals from wireless low-cost off-the-shelf devices. IEEE J. Biomed. Health Inform. 2017, 22, 98–107. [Google Scholar] [CrossRef]
- Panwar, S.; Rad, P.; Quarles, J.; Huang, Y. Generating EEG signals of an RSVP experiment by a class conditioned wasserstein generative adversarial network. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1304–1310. [Google Scholar]
- Panwar, S.; Rad, P.; Jung, T.P.; Huang, Y. Modeling EEG data distribution with a Wasserstein generative adversarial network to predict RSVP events. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 1720–1730. [Google Scholar] [CrossRef]
- Kunanbayev, K.; Abibullaev, B.; Zollanvari, A. Data augmentation for p300-based brain-computer interfaces using generative adversarial networks. In Proceedings of the 2021 9th International Winter Conference on Brain-Computer Interface (BCI), Gangwon, South Korea, 22–24 February 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–7. [Google Scholar]
- Aricò, P.; Aloise, F.; Schettini, F.; Salinari, S.; Mattia, D.; Cincotti, F. Influence of P300 latency jitter on event related potential-based brain–computer interface performance. J. Neural Eng. 2014, 11, 035008. [Google Scholar] [CrossRef] [PubMed]
- Guo, T.; Zhang, L.; Ding, R.; Zhang, D.; Ding, J.; Ma, M.; Xia, L. Constrained generative model for EEG signals generation. In Proceedings of the Neural Information Processing: 28th International Conference, ICONIP 2021, Sanur, Bali, Indonesia, 8–12 December 2021; Proceedings, Part III 28. Springer: Berlin/Heidelberg, Germany, 2021; pp. 596–607. [Google Scholar]
- Yin, X.; Han, Y.; Sun, H.; Xu, Z.; Yu, H.; Duan, X. Multi-attention generative adversarial network for multivariate time series prediction. IEEE Access 2021, 9, 57351–57363. [Google Scholar] [CrossRef]
- Aceves-Fernandez, M. EEG Steady-State Visual Evoked Potential Signals; UCI Machine Learning Repository; University of California: Irvine, CA, USA, 2018. [Google Scholar] [CrossRef]
- Yin, X.; Han, Y.; Xu, Z.; Liu, J. VAECGAN: A generating framework for long-term prediction in multivariate time series. Cybersecurity 2021, 4, 22. [Google Scholar] [CrossRef]
- Xu, M.; Chen, Y.; Wang, Y.; Wang, D.; Liu, Z.; Zhang, L. BWGAN-GP: An EEG data generation method for class imbalance problem in RSVP tasks. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 30, 251–263. [Google Scholar] [CrossRef]
- Kwon, J.; Im, C.H. Novel Signal-to-Signal translation method based on StarGAN to generate artificial EEG for SSVEP-based brain-computer interfaces. Expert Syst. Appl. 2022, 203, 117574. [Google Scholar] [CrossRef]
- Pan, Y.; Li, N.; Zhang, Y.; Xu, P.; Yao, D. Short-length SSVEP data extension by a novel generative adversarial networks based framework. Cogn. Neurodyn. 2024, 18, 2925–2945. [Google Scholar] [CrossRef] [PubMed]
- Dong, C.; Loy, C.C.; He, K.; Tang, X. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 295–307. [Google Scholar] [CrossRef]
- Fahimi, F.; Zhang, Z.; Goh, W.B.; Ang, K.K.; Guan, C. Towards EEG generation using GANs for BCI applications. In Proceedings of the 2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), Chicago, IL, USA, 19–22 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–4. [Google Scholar]
- Donahue, C.; McAuley, J.; Puckette, M. Adversarial audio synthesis. arXiv 2018, arXiv:1802.04208. [Google Scholar]
- Korczowski, L.; Cederhout, M.; Andreev, A.; Cattan, G.; Rodrigues, P.L.C.; Gautheret, V.; Congedo, M. Brain Invaders Calibration-Less P300-Based BCI with Modulation of Flash Duration Dataset (bi2015a); Research Report; GIPSA-Lab: Grenoble, France, 2019. [Google Scholar]
- Nakanishi, M.; Wang, Y.; Wang, Y.T.; Mitsukura, Y.; Jung, T.P. A high-speed brain speller using steady-state visual evoked potentials. Int. J. Neural Syst. 2014, 24, 1450019. [Google Scholar] [CrossRef]
- Chen, X.; Wang, Y.; Gao, S.; Jung, T.P.; Gao, X. Filter bank canonical correlation analysis for implementing a high-speed SSVEP-based brain–computer interface. J. Neural Eng. 2015, 12, 046008. [Google Scholar] [CrossRef]
- Saminu, S.; Xu, G.; Zhang, S.; Abd El Kader, I.; Aliyu, H.A.; Jabire, A.H.; Ahmed, Y.K.; Adamu, M.J. Applications of artificial intelligence in automatic detection of epileptic seizures using EEG signals: A review. In Proceedings of the Artificial Intelligence and Applications, Wuhan, China, 18–20 November 2023; Volume 1, pp. 11–25. [Google Scholar]
- Wei, Z.; Zou, J.; Zhang, J.; Xu, J. Automatic epileptic EEG detection using convolutional neural network with improvements in time-domain. Biomed. Signal Process. Control 2019, 53, 101551. [Google Scholar] [CrossRef]
- Shoeb, A.H. Application of Machine Learning to Epileptic Seizure Onset Detection and Treatment. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2009. [Google Scholar]
- Pascual, D.; Amirshahi, A.; Aminifar, A.; Atienza, D.; Ryvlin, P.; Wattenhofer, R. Epilepsygan: Synthetic epileptic brain activities with privacy preservation. IEEE Trans. Biomed. Eng. 2020, 68, 2435–2446. [Google Scholar] [CrossRef] [PubMed]
- Usman, S.M.; Khalid, S.; Bashir, Z. Epileptic seizure prediction using scalp electroencephalogram signals. Biocybern. Biomed. Eng. 2021, 41, 211–220. [Google Scholar] [CrossRef]
- Salazar, A.; Vergara, L.; Safont, G. Generative Adversarial Networks and Markov Random Fields for oversampling very small training sets. Expert Syst. Appl. 2021, 163, 113819. [Google Scholar] [CrossRef]
- Rasheed, K.; Qadir, J.; O’Brien, T.J.; Kuhlmann, L.; Razi, A. A generative model to synthesize EEG data for epileptic seizure prediction. IEEE Trans. Neural Syst. Rehabil. Eng. 2021, 29, 2322–2332. [Google Scholar] [CrossRef]
- Xu, Y.; Yang, J.; Sawan, M. Multichannel synthetic preictal EEG signals to enhance the prediction of epileptic seizures. IEEE Trans. Biomed. Eng. 2022, 69, 3516–3525. [Google Scholar] [CrossRef] [PubMed]
- Ihle, M.; Feldwisch-Drentrup, H.; Teixeira, C.A.; Witon, A.; Schelter, B.; Timmer, J.; Schulze-Bonhage, A. EPILEPSIAE–A European epilepsy database. Comput. Methods Programs Biomed. 2012, 106, 127–138. [Google Scholar] [CrossRef]
- Quintana, M.; Pena-Casanova, J.; Sánchez-Benavides, G.; Langohr, K.; Manero, R.M.; Aguilar, M.; Badenes, D.; Molinuevo, J.L.; Robles, A.; Barquero, M.S.; et al. Spanish multicenter normative studies (Neuronorma project): Norms for the abbreviated Barcelona Test. Arch. Clin. Neuropsychol. 2011, 26, 144–157. [Google Scholar] [CrossRef]
- Cook, M.J.; O’Brien, T.J.; Berkovic, S.F.; Murphy, M.; Morokoff, A.; Fabinyi, G.; D’Souza, W.; Yerra, R.; Archer, J.; Litewka, L.; et al. Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: A first-in-man study. Lancet Neurol. 2013, 12, 563–571. [Google Scholar] [CrossRef] [PubMed]
- Yao, Y.; Plested, J.; Gedeon, T. Information-preserving feature filter for short-term EEG signals. Neurocomputing 2020, 408, 91–99. [Google Scholar] [CrossRef]
- Hazra, D.; Byun, Y.C. SynSigGAN: Generative adversarial networks for synthetic biomedical signal generation. Biology 2020, 9, 441. [Google Scholar] [CrossRef]
- Tazrin, T.; Rahman, Q.A.; Fouda, M.M.; Fadlullah, Z.M. LiHEA: Migrating EEG analytics to ultra-edge IoT devices with logic-in-headbands. IEEE Access 2021, 9, 138834–138848. [Google Scholar] [CrossRef]
- Krishna, G.; Tran, C.; Carnahan, M.; Han, Y.; Tewfik, A.H. Generating EEG features from acoustic features. In Proceedings of the 2020 28th European Signal Processing Conference (EUSIPCO), Amsterdam, The Netherlands, 18–21 January 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1100–1104. [Google Scholar]
- Lee, W.; Lee, J.; Kim, Y. Contextual imputation with missing sequence of EEG signals using generative adversarial networks. IEEE Access 2021, 9, 151753–151765. [Google Scholar] [CrossRef]
- An, Y.; Lam, H.K.; Ling, S.H. Auto-denoising for EEG signals using generative adversarial network. Sensors 2022, 22, 1750. [Google Scholar] [CrossRef]
- Sawangjai, P.; Trakulruangroj, M.; Boonnag, C.; Piriyajitakonkij, M.; Tripathy, R.K.; Sudhawiyangkul, T.; Wilaiprasitporn, T. EEGANet: Removal of ocular artifacts from the EEG signal using generative adversarial networks. IEEE J. Biomed. Health Inform. 2021, 26, 4913–4924. [Google Scholar] [CrossRef]
- Abdi-Sargezeh, B.; Shirani, S.; Valentin, A.; Alarcon, G.; Sanei, S. EEG-to-EEG: Scalp-to-Intracranial EEG Translation using a Combination of Variational Autoencoder and Generative Adversarial Networks. Sensors 2025, 25, 494. [Google Scholar] [CrossRef] [PubMed]
- Yin, J.; Liu, A.; Li, C.; Qian, R.; Chen, X. A GAN guided parallel CNN and transformer network for EEG denoising. IEEE J. Biomed. Health Inform. 2023, 1–12. [Google Scholar] [CrossRef]
- Wickramaratne, S.D.; Parekh, A. SleepSIM: Conditional GAN-based non-REM sleep EEG Signal Generator. In Proceedings of the 2023 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Sydney, Australia, 24–27 July 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–4. [Google Scholar]
- Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232. [Google Scholar]
- Li, Y.; Dzirasa, K.; Carin, L.; Carlson, D.E. Targeting EEG/LFP synchrony with neural nets. Adv. Neural Inf. Process. Syst. 2017, 30, 4620–4630. [Google Scholar]
- Krishna, G.; Tran, C.; Han, Y.; Carnahan, M.; Tewfik, A.H. Speech synthesis using EEG. In Proceedings of the ICASSP 2020—2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Virtual, 4–9 May 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1235–1238. [Google Scholar]
- Krishna, G.; Han, Y.; Tran, C.; Carnahan, M.; Tewfik, A.H. State-of-the-art speech recognition using eeg and towards decoding of speech spectrum from eeg. arXiv 2019, arXiv:1908.05743. [Google Scholar]
- Supratak, A.; Dong, H.; Wu, C.; Guo, Y. DeepSleepNet: A model for automatic sleep stage scoring based on raw single-channel EEG. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 1998–2008. [Google Scholar] [CrossRef]
- Mousavi, S.; Afghah, F.; Acharya, U.R. SleepEEGNet: Automated sleep stage scoring with sequence to sequence deep learning approach. PLoS ONE 2019, 14, e0216456. [Google Scholar] [CrossRef]
- Li, C.; Wand, M. Precomputed real-time texture synthesis with markovian generative adversarial networks. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part III 14. Springer: Berlin/Heidelberg, Germany, 2016; pp. 702–716. [Google Scholar]
- Zhang, H.; Zhao, M.; Wei, C.; Mantini, D.; Li, Z.; Liu, Q. EEGdenoiseNet: A benchmark dataset for deep learning solutions of EEG denoising. J. Neural Eng. 2021, 18, 056057. [Google Scholar] [CrossRef]
- Moody, G.B.; Mark, R.G. The impact of the MIT-BIH Arrhythmia Database. IEEE Eng. Med. Biol. Mag. 2001, 20, 45–50. [Google Scholar] [CrossRef]
- Klados, M.A.; Bamidis, P.D. A semi-simulated EEG/EOG dataset for the comparison of EOG artifact rejection techniques. Data Brief 2016, 8, 1004–1006. [Google Scholar] [CrossRef]
- Chen, X.; Wang, R.; Zee, P.; Lutsey, P.L.; Javaheri, S.; Alcántara, C.; Jackson, C.L.; Williams, M.A.; Redline, S. Racial/ethnic differences in sleep disturbances: The Multi-Ethnic Study of Atherosclerosis (MESA). Sleep 2015, 38, 877–888. [Google Scholar] [CrossRef]
- Tosato, G.; Dalbagno, C.M.; Fumagalli, F. EEG synthetic data generation using probabilistic diffusion models. arXiv 2023, arXiv:2303.06068. [Google Scholar]
- Shu, K.; Zhao, Y.; Wu, L.; Liu, A.; Qian, R.; Chen, X. Data augmentation for seizure prediction with generative diffusion model. arXiv 2023, arXiv:2306.08256. [Google Scholar] [CrossRef]
- Aristimunha, B.; de Camargo, R.Y.; Chevallier, S.; Lucena, O.; Thomas, A.G.; Cardoso, M.J.; Pinaya, W.H.L.; Dafflon, J. Synthetic Sleep EEG Signal Generation using Latent Diffusion Models. In Proceedings of the DGM4H 2023—1st Workshop on Deep Generative Models for Health at NeurIPS 2023, New Orleans, LA, USA, 15 December 2023. [Google Scholar]
- Sharma, G.; Dhall, A.; Subramanian, R. MEDiC: Mitigating EEG Data Scarcity Via Class-Conditioned Diffusion Model. In Proceedings of the DGM4H 2023—1st Workshop on Deep Generative Models for Health at NeurIPS 2023, New Orleans, LA, USA, 16 December 2023. [Google Scholar]
- Torma, S.; Szegletes, L. EEGWave: A Denoising Diffusion Probabilistic Approach for EEG Signal Generation. EasyChair Preprint 2023, 10275. Available online: https://easychair.org/publications/preprint/TBvM (accessed on 16 February 2025).
- Torma, S.; Tevesz, J.; Szegletes, L. Generating Visually Evoked Potentials Using a Diffusion Probabilistic Model. In Proceedings of the 2023 14th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Budapest, Hungary, 22–23 September 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 000061–000066. [Google Scholar]
- Soingern, N.; Sinsamersuk, A.; Chatnuntawech, I.; Silpasuwanchai, C. Data Augmentation for EEG Motor Imagery Classification Using Diffusion Model. In Proceedings of the International Conference on Data Science and Artificial Intelligence, Bangkok, Thailand, 27–30 November 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 111–126. [Google Scholar]
- Zhou, T.; Chen, X.; Shen, Y.; Nieuwoudt, M.; Pun, C.M.; Wang, S. Generative AI Enables EEG Data Augmentation for Alzheimer’s Disease Detection Via Diffusion Model. In Proceedings of the 2023 IEEE International Symposium on Product Compliance Engineering-Asia (ISPCE-ASIA), Shanghai, China, 3–5 November 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–6. [Google Scholar]
- Vetter, J.; Macke, J.H.; Gao, R. Generating realistic neurophysiological time series with denoising diffusion probabilistic models. bioRxiv 2023. [Google Scholar] [CrossRef]
- Wang, Y.; Zhao, S.; Jiang, H.; Li, S.; Luo, B.; Li, T.; Pan, G. DiffMDD: A Diffusion-based Deep Learning Framework for MDD Diagnosis Using EEG. IEEE Trans. Neural Syst. Rehabil. Eng. 2024, 32, 728–738. [Google Scholar] [CrossRef] [PubMed]
- Mumtaz, W. MDD Patients and Healthy Controls EEG Data (New). Figshare Dataset. 2016. Available online: https://figshare.com/articles/dataset/EEG_Data_New/4244171/2 (accessed on 16 February 2025). [CrossRef]
- Thoduparambil, P.P.; Dominic, A.; Varghese, S.M. EEG-based deep learning model for the automatic detection of clinical depression. Phys. Eng. Sci. Med. 2020, 43, 1349–1360. [Google Scholar]
- Wang, T.; Zhong, S.H. Fingerprinting in EEG Model IP Protection Using Diffusion Model. In Proceedings of the 2024 International Conference on Multimedia Retrieval, Phuket, Thailand, 10–14 June 2024; pp. 120–128. [Google Scholar]
- Klein, G.; Guetschel, P.; Silvestri, G.; Tangermann, M. Synthesizing eeg signals from event-related potential paradigms with conditional diffusion models. arXiv 2024, arXiv:2403.18486. [Google Scholar]
- Wang, F.; Wu, S.; Zhang, W.; Xu, Z.; Zhang, Y.; Wu, C.; Coleman, S. Emotion recognition with convolutional neural network and EEG-based EFDMs. Neuropsychologia 2020, 146, 107506. [Google Scholar] [CrossRef]
- Kemp, B.; Zwinderman, A.H.; Tuk, B.; Kamphuisen, H.A.; Oberye, J.J. Analysis of a sleep-dependent neuronal feedback loop: The slow-wave microcontinuity of the EEG. IEEE Trans. Biomed. Eng. 2000, 47, 1185–1194. [Google Scholar] [CrossRef]
- Quan, S.F.; Howard, B.V.; Iber, C.; Kiley, J.P.; Nieto, F.J.; O’Connor, G.T.; Rapoport, D.M.; Redline, S.; Robbins, J.; Samet, J.M.; et al. The sleep heart health study: Design, rationale, and methods. Sleep 1997, 20, 1077–1085. [Google Scholar]
- Robbins, K.; Su, K.M.; Hairston, W.D. An 18-subject EEG data collection using a visual-oddball task, designed for benchmarking algorithms and headset performance comparisons. Data Brief 2018, 16, 227–230. [Google Scholar] [CrossRef] [PubMed]
- Miltiadous, A.; Tzimourta, K.D.; Afrantou, T.; Ioannidis, P.; Grigoriadis, N.; Tsalikakis, D.G.; Angelidis, P.; Tsipouras, M.G.; Glavas, E.; Giannakeas, N.; et al. A dataset of scalp EEG recordings of Alzheimer’s disease, frontotemporal dementia and healthy subjects from routine EEG. Data 2023, 8, 95. [Google Scholar] [CrossRef]
- Li, Y.; Lou, A.; Xu, Z.; Zhang, S.; Wang, S.; Englot, D.J.; Kolouri, S.; Moyer, D.; Bayrak, R.G.; Chang, C. NeuroBOLT: Resting-state EEG-to-fMRI Synthesis with Multi-dimensional Feature Mapping. arXiv 2024, arXiv:2410.05341. [Google Scholar]
- Jiang, W.B.; Zhao, L.M.; Lu, B.L. Large brain model for learning generic representations with tremendous EEG data in BCI. arXiv 2024, arXiv:2405.18765. [Google Scholar]
- Wang, J.; Zhao, S.; Luo, Z.; Zhou, Y.; Jiang, H.; Li, S.; Li, T.; Pan, G. CBraMod: A Criss-Cross Brain Foundation Model for EEG Decoding. arXiv 2024, arXiv:2412.07236. [Google Scholar]
- Jiang, W.B.; Wang, Y.; Lu, B.L.; Li, D. NeuroLM: A Universal Multi-task Foundation Model for Bridging the Gap between Language and EEG Signals. arXiv 2024, arXiv:2409.00101. [Google Scholar]
Author | Geo. | VAE | GAN | Diffusion | Tra. | MI | ER | Ext. | Hea. |
---|---|---|---|---|---|---|---|---|---|
Ko et al. [5] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
Lashgari et al. [6] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
Habashi et al. [4] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |||
Tang et al. [7] | ✓ | ✓ | ✓ | ✓ | |||||
Carrle et al. [8] | ✓ | ✓ | |||||||
Altaheri et al. [9] | ✓ | ✓ | ✓ | ✓ | |||||
Ours | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Category | Purpose | Common Metrics |
---|---|---|
Classification/ Detection/ Prediction/ Reconstruction | Evaluate downstream task performance | Accuracy, AUC (Area under Curve), Cohen’s kappa, F1-score, FPR (False Positive Rate), Precision, Recall, Sensitivity, Specificity, WER (Word Error Rate) |
Interpretability/ Visualization | Support qualitative evaluation | ERD/ERS maps, PCA (Principal Component Analysis), Spectral map, Topographic maps, t-SNE (t-distributed Stochastic Neighbor Embedding) |
Task-Specific Metrics | Quantify paradigm-specific performance (e.g., communication efficiency, emotion dimensions) | ITR (Information Transfer Rate), Valence/Arousal Scores |
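Most of the downstream metrics listed above reduce to simple confusion-matrix arithmetic. As an illustrative sketch (ours, not drawn from any reviewed study), a minimal NumPy implementation for the binary case:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall), specificity, precision, F1 from 0/1 labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    acc = (tp + tn) / len(y_true)
    sen = tp / (tp + fn) if tp + fn else 0.0    # sensitivity = recall
    spec = tn / (tn + fp) if tn + fp else 0.0   # specificity
    prec = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * prec * sen / (prec + sen) if prec + sen else 0.0
    return acc, sen, spec, prec, f1

m = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
print(m)  # all five metrics equal 2/3 for this toy example
```

Multi-class variants (and AUC or Cohen’s kappa) follow the same pattern but are usually taken from an existing library rather than re-implemented.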
Category | Purpose | Common Metrics |
---|---|---|
Generative Quality | Evaluate the difference between generated data and real data | ABA (Amplitude Bias Analysis), FID (Fréchet Inception Distance), IS (Inception Score), I value (Mutual Information-based Index), JSD (Jensen–Shannon Divergence), KS test (Kolmogorov–Smirnov test), KL divergence (Kullback–Leibler Divergence), MAE (Mean Absolute Error), MAPE (Mean Absolute Percentage Error), MSE (Mean Squared Error), MS-SSIM (Multi-Scale Structural Similarity Index Measure), MMD (Maximum Mean Discrepancy), NMI (Normalized Mutual Information), PAD (Phase–Amplitude Distance), PCC (Pearson Correlation Coefficient), PLD (Power Level Distance), PRD (Percent Root Mean Square Difference), PSD (Power Spectral Density), RMSE (Root Mean Square Error), R value (Pearson’s Correlation Coefficient), R² score (Coefficient of Determination), SNR (Signal-to-Noise Ratio), SWD (Sliced Wasserstein Distance), SMAPE (Symmetric Mean Absolute Percentage Error), SD-MD (Spectral-Domain Mean Distance), WD (Wasserstein Distance) |
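Several of the distribution-level metrics above (MMD, WD, KS test) can be computed directly from sample sets. The sketch below is illustrative only: it uses a biased Gaussian-kernel MMD² estimator and random stand-ins for real and generated EEG feature vectors, not data or code from any reviewed study.

```python
import numpy as np
from scipy.stats import wasserstein_distance, ks_2samp

def gaussian_mmd2(x, y, sigma=1.0):
    """Biased Gaussian-kernel MMD^2 between sample sets of shape (n, d)."""
    def k(a, b):
        # pairwise squared Euclidean distances, then RBF kernel
        d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
        return np.exp(-d2 / (2 * sigma**2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
real = rng.standard_normal((128, 64))        # 128 epochs x 64 features (toy data)
fake = rng.standard_normal((128, 64)) * 1.1  # slightly mismatched "generated" data

mmd2 = gaussian_mmd2(real, fake)
wd = wasserstein_distance(real[:, 0], fake[:, 0])  # 1-D WD on one feature
ks = ks_2samp(real.ravel(), fake.ravel())
print(f"MMD^2={mmd2:.4f}  WD={wd:.4f}  KS p={ks.pvalue:.3f}")
```

In practice the kernel bandwidth, the feature space (raw samples, PSD bins, or learned embeddings), and the aggregation over channels all strongly affect these scores, which is one reason the reviewed studies report several metrics side by side.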
Study | BCI Paradigm | Dataset | Model Type | Evaluation Metrics | Results |
---|---|---|---|---|---|
Aznan et al., 2019 [37] | SSVEP | Private dataset [38] | VAE | Classification accuracy | +0.37 |
Luo et al., 2020 [39] | Emotion recognition | SEED, DEAP | sVAE | Average recognition accuracy | SVM: +3.0% DNN: +4.4% |
Krishna et al., 2020 [40] | Speech recognition | Private dataset [41,42] | RNN-VAE | Classification accuracy, WER | +4.55%, −5.2% (120) |
Ozan et al., 2021 [43] | Motor imagery | PhysioBank | CVAE | ERD/ERS maps, TFRs | NA |
Yang et al., 2021 [44] | Motor imagery | Private dataset [44] + BCI competition IV 2a | CVAE-GAN | IS, FID, SWD, classification accuracy | (with respect to real data) IS: −0.121, FID: +11.364, SWD: +0.067, D1: mean ~+4% D2: mean ~+2.5% |
Bao et al., 2021 [45] | Emotion recognition | SEED, SEED IV | VAE-D2GAN | IS, FID, MMD, classification accuracy | (with respect to VAE) SEED: +0.670/ −17.197/−0.522, SEED-IV: +0.475/ −398.504/−0.678, SEED: +1.5%, SEED-IV: +3.5% |
Bethge et al., 2022 [46] | Emotion recognition | SEED | EEG2Vec (CVAE) | Classification accuracy | +3% |
George et al., 2022 [47] | Motor imagery | Public dataset [48,49] | AVG, RT, RF, NS, CPS, cVAE | Classification accuracy, topographic head plots, FID, t-SNE plots | (Dataset I) ACC: +0.57%, FID: ~160, (Dataset II) ACC: −0.81%, FID: ~40 |
Wang et al., 2022 [50] | Emotion recognition | SEED, SEED IV | MMDA-VAE | Mean accuracy | (with respect to BDAE) SEED: +9.51%, SEED IV: +8.96% |
Li et al., 2023 [51] | Epilepsy | Public dataset [52] | CR-VAE | MMD, RMSE, AUROC | MMD: 0.05, RMSE: 0.024 |
Zancanaro et al., 2023 [53] | Motor imagery | BCI competition IV 2a | vEEGNet | MRCP | NA |
Ahmed et al., 2023 [54] | Emotion recognition | DEAP | CNN-VAE | SSIM, MAE, MSE, MAPE | 1.0, 1.03 × , 1.9 × , 4.2 × |
Tian et al., 2023 [55] | Emotion recognition | SEED | DEVAE-GAN | Classification accuracy | +5.00% |
Study | Objective | Dataset | GAN Type | Evaluation Metrics | Results |
---|---|---|---|---|---|
Abdelfattah et al., 2018 [60] | Improve the performance of EEG signal classification models | PhysioNet [61] | RGAN | Classification accuracy | +36.1% |
Hartmann et al., 2018 [62] | Improve the stability of training | Private dataset | EEG-GAN (WGAN) | IS, FID, ED, SWD | 1.363, 9.523, −0.056, 0.078 |
Zhang et al., 2018 [63] | Improve the model’s classification performance | BCI competition II dataset III | cDCGAN | Classification accuracy | 82.86% |
Roy et al., 2020 [64] | Generate artificial EEG data for motor imagery | BCI competition IV dataset 2b | MIEEG-GAN (Bi-LSTM) | STFT spectrograms, first-/second-order characteristics, PSD | Qualitative analysis |
Debie et al., 2020 [65] | Generate and classify EEG data while protecting data privacy | Graz A 2008 [66] | Privacy-preserving GAN | Classification accuracy, privacy budget, privacy guarantee | 4–10% |
Luo et al., 2020 [67] | Improve reconstruction quality and reduce costs | BCI competition IV dataset 2a, AO dataset, GAL dataset | WGAN (TSF-MSE loss function) | Classification accuracy | MI: +2.03%, AO: +4.1%, GAL: +4.11% |
Zhang et al., 2020 [21] | Improve the model’s classification performance | BCI competition IV datasets (2b + 1) | CNN-DCGAN | Classification accuracy, kappa value, FID | 2b: +12.6%, 0.677, 98.2 1: +8.7%, 0.564, 126.4 |
Fahimi et al., 2021 [68] | Improve the model’s classification performance | Private dataset | DCGAN | Classification accuracy | Diverted attention: +7.32% Focused attention: +5.45% IVa: +3.57% |
Song et al., 2021 [69] | Improve the accuracy of cross-subject EEG signal classification | BCI competition IV dataset 2a | CS-GAN | Classification accuracy | +15.85% (3000 fake samples) |
Xu et al., 2021 [70] | Improve the model’s classification performance | Private dataset | CycleGAN | Classification accuracy | +18.3% |
Xie et al., 2021 [71] | Improve the model’s classification performance | BCI competition IV datasets (2a + 2b) | LGANs + MoCNN + Attention | Classification accuracy, R value, I value, kappa value | 2a (with respect to raw data): LGAN: +8.23% Att-LGAN: +9.34% 2b (with respect to raw data): Att-LGAN: +5.64% ~6.6% |
Raoof et al., 2023 [72] | Generate spatiotemporal MI EEG data | PhysioNet | cGAN + encoder + decoder | Classification accuracy, KL Divergence, KS test, PCA, t-SNE | +9.1% |
Dong et al., 2023 [73] | Address the issues of EEG data scarcity or imbalance in the BCI field | Physionet | DCGAN | FFT, CWT, spectral map | Qualitative analysis |
Yin et al., 2024 [74] | Improve the performance of cross-subject EEG data transfer and classification | BCI competition IV dataset 2a + OpenBMI [75] | GITGAN | Classification accuracy, F1-score, kappa value | (2a) Acc: 82.9% F1: 80.4 Kappa: 58.1 (OpenBMI) Acc: 84.0% F1: 81.4 Kappa: 58.3 |
Study | Objective | Dataset | GAN Type | Evaluation Metrics | Results |
---|---|---|---|---|---|
Luo et al., 2018 [83] | Improve the accuracies of emotion recognition models | SEED + DEAP | cWGAN | Classification accuracy, Wasserstein distance, MMD | SEED: +2.97% DEAP-Arousal: +9.15% DEAP-Valence: +20.13% |
Chang et al., 2019 [84] | Recognize the emotional responses of users towards given architectural design | Private dataset [85] | GAN | Classification accuracy | +0.5% |
Luo et al., 2020 [39] | Improve the accuracies of emotion recognition models | SEED + DEAP | cWGAN + sWGAN | Classification accuracy | (SEED) cWGAN + DNN: +8.3% sWGAN + DNN: +10.2% (DEAP) cWGAN + SVM: +3.5% sWGAN + SVM: +5.4% |
Dong et al., 2020 [86] | Improve the accuracies of emotion recognition models | DEAP | MCLFS-GAN | Classification accuracy | (with respect to CNN+LSTM) SAP MCLFS-GAN: +14.95% LOSO MCLFS-GAN: +19.52% |
Zhang et al., 2021 [87] | Address the issue of insufficient high-quality training data | SEED | MG-CWGAN | Classification accuracy, Wasserstein distance, MMD, t-SNE | SVM: +5% KNN: +2.5% |
Liang et al., 2021 [88] | Extract valid and reliable features from high-dimensional EEG | MAHNOB-HCI + SEED + DEAP | EEGFuseNet | Recognition accuracy, F1-score, NMI | +7.69%, +5.07, +0.1512 (two-class), +0.0824 (three-class) |
Pan et al., 2021 [89] | Solve the problem of EEG sample shortage and sample category imbalance | MAHNOB-HCI + DEAP | PSD-GAN | Recognition accuracy | Two-classification task: 6.5% and 6.71% Four-classification task: 10.92% and 14.47% |
Liu et al., 2023 [90] | Generate high-quality artificial data | DEAP | CWGAN | Classification accuracy, Wasserstein distance, MMD | (Tasknet) 10.07%, 8.41, 10.76% |
Qiao et al., 2024 [91] | Address the weak and easily disturbed features of EEG signals | DREAMER + SEED | GAN + attention network | Recognition accuracy | (with respect to SVM) SEED: +38.14% DREAMER: +30.69% |
Study | Objective | Dataset | GAN Type | Evaluation Metrics | Results |
---|---|---|---|---|---|
Panwar et al., 2019 [100] | Data augmentation for different cognitive events in RSVP experiments | BCIT X2 | cWGAN-GP | Classifier AUC | Same subject: +0.81% (2CNN), +3.28% (3CNN) cross-subject: 3.1% (2CNN), +5.18% (3CNN) |
Aznan et al., 2019 [37] | Improve the performance of SSVEP classification models | Video-Stimuli Dataset + NAO Dataset | DCGAN, WGAN | Classification accuracy | DCGAN: +3% WGAN: +2% |
Panwar et al., 2020 [101] | Generate EEG signals and predict RSVP events | BCIT X2 | WGAN-GP + CC-WGAN-GP | Classifier AUC, GMM log-likelihood scores | (with respect to EEGNet) +5.83% |
Kunanbayev et al., 2021 [102] | Generate artificial training data for the classification of P300 in EEG | Arico et al. [103] | DCGAN + WGAN-GP | Classification accuracy, t-SNE | (Subject-specific) WGAN-GP: +0.61% (Subject-independent) WGAN-GP: +0.98% |
Guo et al., 2021 [104] | Minimize the issues of insufficient diversity and poor similarity | Bi2015a | WaveGAN | MS, SWD, STFT | (with respect to CC-WGAN-GP) MS: +0.2858 SWD: −0.1977 |
Yin et al., 2021 [105] | Generate high-quality time series prediction data | SSVEP dataset [106] | MAGAN | MSE, RMSE, MAE, MAPE, SMAPE, score | (with respect to MARNN) MSE: +0.0077 RMSE: +0.0063 MAE: −0.0455 MAPE: −3.06% SMAPE: −0.0244 score: −0.0043 |
Yin et al., 2021 [107] | Improve the accuracy of long-term prediction | SSVEP dataset [106] | VAECGAN | MAE, RMSE | (with respect to VAE) MAE: −0.0233 RMSE: −0.035 |
Xu et al., 2022 [108] | Solve class imbalance problem in RSVP tasks | Private dataset | BWGAN-GP | Classification accuracy, AUC, F1-score, Cohen’s kappa | (with respect to WGAN-GP-RSVP) Acc: +0.0391 AUC: +0.0247 F1: +0.0809 Kappa: +0.0458 |
Kwon et al., 2022 [109] | Convert resting-state EEG signals into specific task-state signals | Private dataset | StarGAN | Classification accuracy, ITR | (with respect to FBCCA) Acc: +3.44% ITR: +8.29 bit/min |
Pan et al., 2024 [110] | Transform short-length signals into long-length artificial signals | Direction dataset + Dial dataset | TEGAN | Classification accuracy, ITR | (Direction SSVEP) Acc: +39.28% ITR: +36.36 bits/min (Dial SSVEP) Acc: +37.04% ITR: +35 bits/min |
Study | Objective | Dataset | GAN Type | Evaluation Metrics | Results |
---|---|---|---|---|---|
Yao et al., 2018 [128] | Filter out unwanted features from EEG | UCI | CycleGAN | Classification accuracy | −66.3% |
Hazra et al., 2020 [129] | Generate data to protect patient privacy | Siena Scalp EEG Database | SynSigGAN | RMSE, PRD, MAE, FD, PCC | RMSE: 0.0314 PRD: 5.985% MAE: 0.0475 FD: 0.982 PCC: 0.997 |
Tazrin et al., 2021 [130] | Enhance the training data to improve the model’s performance | Confused Student EEG Dataset | DCGAN | Classification accuracy | +20% |
Krishna et al., 2020 [131] | Predict various types of EEG features from acoustic features | Private Dataset | RNN-GAN | RMSE | 0.36 |
Lee et al., 2021 [132] | Fill in missing data in EEG signal sequences | Sleep-EDF Database | SIG-GAN | IS, FID, classification accuracy | (With respect to EEGGAN) IS: +0.75 FID: −5.11 Acc: 75.75% |
An et al., 2022 [133] | Denoise the multi-channel EEG signal automatically | HaLT | WGAN | Correlation, RMSE | 0.7771, 0.0757 |
Sawangjai et al., 2022 [134] | Remove ocular artifacts from EEG without EOG channels or eye-blink detection algorithms | BCI Competition IV 2b + EEG Eye Artifact Dataset + Multimodal Signal Dataset | GAN (ResNet) | PCC, RMSE, classification accuracy, F1-score | (With respect to CNN-AE) PCC: −0.011 RMSE: −3.023 (With respect to raw data) Acc: +3.1% F1: +3.7% |
Abdi-Sargezeh et al., 2023 [135] | Map the sEEG to iEEG to enhance the sEEG resolution | Private Dataset | VAE-cGAN | Classification accuracy, sensitivity, specificity | (With respect to LSR) Acc: +7% (intra), +11% (inter) Sen: +7% Spec: +11% |
Yin et al., 2023 [136] | Remove various physiological artifacts in EEG signals | Semi-simulated EEG/EOG + MIT-BIH arrhythmia + EEGDenoiseNet + BioSource | GAN (CNN–Transformer) | RRMSE, CC, SNR | RRMSE: −0.008 CC: +0.002 SNR: +0.313 |
Wickramaratne et al., 2023 [137] | Generate unique samples of non-REM sleep EEG signals | MESA | CGAN | Relative spectral power | Visual inspection |
Study | Objective | Dataset | Model Type | Evaluation Metrics | Results |
---|---|---|---|---|---|
Tosato et al., 2023 [149] | Address the issue of insufficient quality and quantity of EEG data | SEED | DDPM | Classification accuracy | +1.55% |
Shu et al., 2023 [150] | Address the issue of data imbalance in seizure prediction | CHB-MIT + Kaggle | DiffEEG | Sensitivity, FPR, AUC | (CHB-MIT) Sens: +4.9% FPR: −0.113 AUC: +0.083 (Kaggle) Sens: +8.1% FPR: −0.047 AUC: +0.062 |
Aristimunha et al., 2023 [151] | Generate EEG signals of sleep stages with the correct neural oscillation | Sleep EDFx + SHHS | AE-KL + LDM | MS-SSIM, FID, PSD | (Sleep EDFx) FID: −11.625 MS-SSIM: +0.310 (SHHS) FID: −0.768 MS-SSIM: +0.370 |
Sharma et al., 2023 [152] | Generate synthetic EEG embeddings to address the shortage of data | Public dataset | MEDiC | Precision, recall, F1-score, JSD score | (With respect to VAE) Precision: +0.19 Recall: +0.22 F1: +0.23 |
Torma et al., 2023 [153] | Generate multi-channel EEG signals with P300 components | BCI Competition III dataset II | EEGWave | IS, FID, SWD | (With respect to WGAN) IS: +0.0004 FID: −0.2623 SWD: −0.1966 |
Torma et al., 2023 [154] | Generate high-quality visually evoked potentials | VEPESS | DPM | SWD, IS, FID, dGMM | SWD: −72.3609 IS: +0.21 FID: −11.32 dGMM: −4988.2779 |
Soingern et al., 2023 [155] | Data augmentation method for motor imagery classification | BCI Competition IV 2a | WaveGrad | Classification accuracy, KL divergence | (With respect to noise) Acc: +17.17% KL: −158 |
Zhou et al., 2023 [156] | Improve the detection of Alzheimer’s disease | Public dataset | Diff-EEG | Accuracy, Precision, Recall | (With respect to VAE-GAN) Acc: +2.2% Precision: +3.0% Recall: +5.3% |
Vetter et al., 2023 [157] | Generate realistic neuro-physiological time series | Public dataset | DDPM | Evoked potentials, PSD | Qualitative analysis |
Wang et al., 2024 [158] | Diagnose MDD using EEG data | Mumtaz, 2016 [159], Arizona, 2020 [160] | DiffMDD | ACC, F1-score, recall, precision, subject-wise accuracy | (Mumtaz 2016) F1-score: 91.25% ACC: 91.22% Subject-wise: 94.06% (Arizona 2020) F1-score: 83.29% ACC: 83.71% Subject-wise: 85.86% |
Wang et al., 2024 [161] | Protect the intellectual property of EEG-based models | DEAP | CDDPM | AUC, PCA visualization, topographic maps | AUC: 0.91 (average) |
Klein et al., 2024 [162] | Mitigate the class imbalance problem in ERP | Visual ERP dataset | DDPM (VP SDE) | ABA, SWD, MSE, JSD, FID, PLD, PAD, SD-MD | (With respect to baseline) FID: 0.0093 ABA: −0.001 PAD: −0.35 PLD: −0.026 SD-MD: −2.77 SWD: −0.44 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
You, Z.; Guo, Y.; Zhang, X.; Zhao, Y. Virtual Electroencephalogram Acquisition: A Review on Electroencephalogram Generative Methods. Sensors 2025, 25, 3178. https://doi.org/10.3390/s25103178