TSFNet: Temporal-Spatial Fusion Network for Hybrid Brain-Computer Interface
Abstract
1. Introduction
- A novel algorithm for hybrid BCI, called TSFNet, is proposed to enable effective complementarity between EEG and fNIRS signals, thereby improving performance in subject-dependent cross-session classification experiments.
- An EEG-fNIRS-guided fusion (EFGF) layer is designed to perform more effective feature fusion through EEG-guided temporal feature learning and fNIRS-guided spatial feature learning, generating a hybrid attention map for complementary integration of spatiotemporal information.
- A cross-attention-based feature enhancement (CAFE) layer is introduced, which further refines the multimodal feature representations via a cross-attention mechanism, selectively enhancing informative components while suppressing noise and redundancy (a minimal sketch of such a mechanism follows this list).
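To make the CAFE idea concrete, the following is a minimal sketch of cross-attention-based refinement between two modality feature streams. It is illustrative only: the token shapes, the residual connection, and the use of PyTorch's `nn.MultiheadAttention` are our assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class CrossAttentionEnhance(nn.Module):
    """Illustrative cross-attention refinement: one modality's features (query)
    attend to the other modality's features (key/value). Dimensions are assumed."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feats: torch.Tensor, context_feats: torch.Tensor) -> torch.Tensor:
        # query_feats: (batch, N_q, dim), e.g., flattened EEG feature tokens
        # context_feats: (batch, N_k, dim), e.g., flattened fNIRS feature tokens
        enhanced, _ = self.attn(query_feats, context_feats, context_feats)
        # The residual connection preserves the original features while attention
        # up-weights informative cross-modal components.
        return self.norm(query_feats + enhanced)

# Example with hypothetical token counts for the two modalities.
eeg_tokens = torch.randn(2, 200, 64)
fnirs_tokens = torch.randn(2, 88, 64)
print(CrossAttentionEnhance()(eeg_tokens, fnirs_tokens).shape)  # torch.Size([2, 200, 64])
```

If the enhancement is applied bidirectionally, the same module can be reused with the roles of the two modalities swapped.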
2. Dataset
2.1. Dataset Description
2.2. Data Preprocessing
2.3. Three-Dimensional Tensor Transformation
3. Method
3.1. Three-Dimensional CNN Structure
3.2. EEG-fNIRS-Guided Fusion Layer
3.3. Cross-Attention-Based Feature Enhancement Layer
3.4. Classifier
3.5. Loss Function
3.5.1. Classification Loss
3.5.2. EFGF Regularization Loss
3.5.3. CAFE Regularization Loss
3.6. Experimental Setup
4. Results
4.1. Performance Comparison
4.1.1. Overall Performance
4.1.2. Performance Across Time Windows
4.2. Ablation Study
4.2.1. Ablation on Sub-Layers of TSFNet
4.2.2. Ablation on Regularization Loss
4.3. Computational Complexity of TSFNet
5. Limitations and Future Work
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Ramadan, R.A.; Vasilakos, A.V. Brain Computer Interface: Control Signals Review. Neurocomputing 2017, 223, 26–44.
- Saha, S.; Mamun, K.A.; Ahmed, K.; Mostafa, R.; Naik, G.R.; Darvishi, S.; Khandoker, A.H.; Baumert, M. Progress in Brain Computer Interface: Challenges and Opportunities. Front. Syst. Neurosci. 2021, 15, 578875.
- Gu, X.; Cao, Z.; Jolfaei, A.; Xu, P.; Wu, D.; Jung, T.-P.; Lin, C.-T. EEG-Based Brain-Computer Interfaces (BCIs): A Survey of Recent Studies on Signal Sensing Technologies and Computational Intelligence Approaches and Their Applications. IEEE/ACM Trans. Comput. Biol. Bioinform. 2021, 18, 1645–1666.
- Lawhern, V.J.; Solon, A.J.; Waytowich, N.R.; Gordon, S.M.; Hung, C.P.; Lance, B.J. EEGNet: A Compact Convolutional Neural Network for EEG-Based Brain–Computer Interfaces. J. Neural Eng. 2018, 15, 056013.
- Naseer, N.; Hong, K.-S. fNIRS-Based Brain-Computer Interfaces: A Review. Front. Hum. Neurosci. 2015, 9, 172.
- Li, X.; Chen, J.; Shi, N.; Yang, C.; Gao, P.; Chen, X.; Wang, Y.; Gao, S.; Gao, X. A Hybrid Steady-State Visual Evoked Response-Based Brain-Computer Interface with MEG and EEG. Expert Syst. Appl. 2023, 223, 119736.
- Ahn, M.; Ahn, S.; Hong, J.H.; Cho, H.; Kim, K.; Kim, B.S.; Chang, J.W.; Jun, S.C. Gamma Band Activity Associated with BCI Performance: Simultaneous MEG/EEG Study. Front. Hum. Neurosci. 2013, 7, 848.
- Du, B.; Cheng, X.; Duan, Y.; Ning, H. fMRI Brain Decoding and Its Applications in Brain–Computer Interface: A Survey. Brain Sci. 2022, 12, 228.
- Zich, C.; Debener, S.; Kranczioch, C.; Bleichner, M.G.; Gutberlet, I.; De Vos, M. Real-Time EEG Feedback during Simultaneous EEG–fMRI Identifies the Cortical Signature of Motor Imagery. NeuroImage 2015, 114, 438–447.
- Belkhiria, C.; Boudir, A.; Hurter, C.; Peysakhovich, V. EOG-Based Human–Computer Interface: 2000–2020 Review. Sensors 2022, 22, 4914.
- Mai, X.; Ai, J.; Ji, M.; Zhu, X.; Meng, J. A Hybrid BCI Combining SSVEP and EOG and Its Application for Continuous Wheelchair Control. Biomed. Signal Process. Control 2024, 88, 105530.
- Altaheri, H.; Muhammad, G.; Alsulaiman, M.; Amin, S.U.; Altuwaijri, G.A.; Abdul, W.; Bencherif, M.A.; Faisal, M. Deep Learning Techniques for Classification of Electroencephalogram (EEG) Motor Imagery (MI) Signals: A Review. Neural Comput. Appl. 2023, 35, 14681–14722.
- Liu, Z.; Shore, J.; Wang, M.; Yuan, F.; Buss, A.; Zhao, X. A Systematic Review on Hybrid EEG/fNIRS in Brain-Computer Interface. Biomed. Signal Process. Control 2021, 68, 102595.
- Jiang, X.; Bian, G.-B.; Tian, Z. Removal of Artifacts from EEG Signals: A Review. Sensors 2019, 19, 987.
- Ahn, S.; Jun, S.C. Multi-Modal Integration of EEG-fNIRS for Brain-Computer Interfaces—Current Limitations and Future Directions. Front. Hum. Neurosci. 2017, 11, 503.
- Hong, K.-S.; Khan, M.J.; Hong, M.J. Feature Extraction and Classification Methods for Hybrid fNIRS-EEG Brain-Computer Interfaces. Front. Hum. Neurosci. 2018, 12, 246.
- Putze, F.; Hesslinger, S.; Tse, C.-Y.; Huang, Y.; Herff, C.; Guan, C.; Schultz, T. Hybrid fNIRS-EEG Based Classification of Auditory and Visual Perception Processes. Front. Neurosci. 2014, 8, 373.
- Chiarelli, A.M.; Croce, P.; Merla, A.; Zappasodi, F. Deep Learning for Hybrid EEG-fNIRS Brain–Computer Interface: Application to Motor Imagery Classification. J. Neural Eng. 2018, 15, 036028.
- Saadati, M.; Nelson, J.; Ayaz, H. Multimodal fNIRS-EEG Classification Using Deep Learning Algorithms for Brain-Computer Interface Purposes. In International Conference on Applied Human Factors and Ergonomics; Springer International Publishing: Cham, Switzerland, 2019.
- Ghonchi, H.; Fateh, M.; Abolghasemi, V.; Ferdowsi, S.; Rezvani, M. Deep Recurrent–Convolutional Neural Network for Classification of Simultaneous EEG–fNIRS Signals. IET Signal Process. 2020, 14, 142–153.
- Al-Shargie, F.; Tang, T.B.; Kiguchi, M. Stress Assessment Based on Decision Fusion of EEG and fNIRS Signals. IEEE Access 2017, 5, 19889–19896.
- Bourguignon, N.J.; Bue, S.L.; Guerrero-Mosquera, C.; Borragán, G. Bimodal EEG-fNIRS in Neuroergonomics: Current Evidence and Prospects for Future Research. Front. Neuroergon. 2022, 3, 934234.
- Cai, H.; Qu, Z.; Li, Z.; Zhang, Y.; Hu, X.; Hu, B. Feature-Level Fusion Approaches Based on Multimodal EEG Data for Depression Recognition. Inform. Fusion 2020, 59, 127–138.
- Sun, Z.; Huang, Z.; Duan, F.; Liu, Y. A Novel Multimodal Approach for Hybrid Brain–Computer Interface. IEEE Access 2020, 8, 89909–89918.
- Pandey, P.; McLinden, J.; Rahimi, N.; Kumar, C.; Shao, M.; Spencer, K.M.; Ostadabbas, S.; Shahriari, Y. fNIRSNET: A Multi-View Spatio-Temporal Convolutional Neural Network Fusion for Functional Near-Infrared Spectroscopy-Based Auditory Event Classification. Eng. Appl. Artif. Intell. 2024, 137, 109256.
- Liu, M.; Yang, B.; Meng, L.; Zhang, Y.; Gao, S.; Zan, P.; Xia, X. STA-Net: Spatial–Temporal Alignment Network for Hybrid EEG-fNIRS Decoding. Inform. Fusion 2025, 119, 103023.
- Arif, A.; Wang, Y.; Yin, R.; Zhang, X.; Helmy, A. EF-Net: Mental State Recognition by Analyzing Multimodal EEG-fNIRS via CNN. Sensors 2024, 24, 1889.
- Gao, Y.; Jia, B.; Houston, M.; Zhang, Y. Hybrid EEG-fNIRS Brain Computer Interface Based on Common Spatial Pattern by Using EEG-Informed General Linear Model. IEEE Trans. Instrum. Meas. 2023, 72, 1–10.
- Ghasimi, A.; Shamekhi, S. Assessment of Cognitive Workload Using Simultaneous EEG and fNIRS: A Comparison of Feature Combinations. Comput. Electr. Eng. 2024, 119, 109619.
- Poria, S.; Cambria, E.; Bajpai, R.; Hussain, A. A Review of Affective Computing: From Unimodal Analysis to Multimodal Fusion. Inform. Fusion 2017, 37, 98–125.
- Yuan, X.; Zhang, Y.; Rolfe, P. IIMCNet: Intra- and Inter-Modality Correlation Network for Hybrid EEG-fNIRS Brain-Computer Interface. IEEE J. Biomed. Health Inform. 2025, 1–12.
- Nia, A.F.; Tang, V.; Maso Talou, G.D.; Billinghurst, M. Decoding Emotions through Personalized Multi-Modal fNIRS-EEG Systems: Exploring Deterministic Fusion Techniques. Biomed. Signal Process. Control 2025, 105, 107632.
- Qin, Y.; Li, B.; Wang, W.; Shi, X.; Peng, C.; Wang, X.; Wang, H. ECA-FusionNet: A Hybrid EEG-fNIRS Signals Network for MI Classification. J. Neural Eng. 2025, 22, 016030.
- Rabbani, M.H.R.; Islam, S.M.R. Deep Learning Networks Based Decision Fusion Model of EEG and fNIRS for Classification of Cognitive Tasks. Cogn. Neurodynamics 2024, 18, 1489–1506.
- Si, X.; Zhang, S.; Yang, Z.; Yu, J.; Ming, D. A Bidirectional Cross-Modal Transformer Representation Learning Model for EEG-fNIRS Multimodal Affective BCI. Expert Syst. Appl. 2025, 266, 126081.
- Debie, E.; Fernandez Rojas, R.; Fidock, J.; Barlow, M.; Kasmarik, K.; Anavatti, S.; Garratt, M.; Abbass, H.A. Multimodal Fusion for Objective Assessment of Cognitive Workload: A Review. IEEE Trans. Cybern. 2021, 51, 1542–1555.
- Li, R.; Yang, D.; Fang, F.; Hong, K.-S.; Reiss, A.L.; Zhang, Y. Concurrent fNIRS and EEG for Brain Function Investigation: A Systematic, Methodology-Focused Review. Sensors 2022, 22, 5865.
- Li, R.; Zhao, C.; Wang, C.; Wang, J.; Zhang, Y. Enhancing fNIRS Analysis Using EEG Rhythmic Signatures: An EEG-Informed fNIRS Analysis Study. IEEE Trans. Biomed. Eng. 2020, 67, 2789–2797.
- Kwak, Y.; Song, W.-J.; Kim, S.-E. FGANet: fNIRS-Guided Attention Network for Hybrid EEG-fNIRS Brain-Computer Interfaces. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 30, 329–339.
- Xu, H.; She, Q.; Meng, M.; Gao, Y.; Zhang, Y. EFDFNet: A Multimodal Deep Fusion Network Based on Feature Disentanglement for Attention State Classification. Biomed. Signal Process. Control 2025, 109, 108042.
- Wang, M.; Gu, J.; Yang, L.; Chang, H.; Gao, D. DMPNet: Dual-Modal Parallel Network with Spatiotemporal Fusion for Cognitive Task Classification. SIViP 2025, 19, 748.
- Qiu, L.; Zhong, Y.; He, Z.; Pan, J. Improved Classification Performance of EEG-fNIRS Multimodal Brain-Computer Interface Based on Multi-Domain Features and Multi-Level Progressive Learning. Front. Hum. Neurosci. 2022, 16, 973959.
- Zhang, Y.; Qiu, S.; He, H. Multimodal Motor Imagery Decoding Method Based on Temporal Spatial Feature Alignment and Fusion. J. Neural Eng. 2023, 20, 026009.
- Shin, J.; Von Lühmann, A.; Blankertz, B.; Kim, D.-W.; Jeong, J.; Hwang, H.-J.; Müller, K.-R. Open Access Dataset for EEG+NIRS Single-Trial Classification. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 1735–1745.
- Shin, J.; Von Lühmann, A.; Kim, D.-W.; Mehnert, J.; Hwang, H.-J.; Müller, K.-R. Simultaneous Acquisition of EEG and NIRS during Cognitive Tasks for an Open Access Dataset. Sci. Data 2018, 5, 180003.
- Lee, T.-W.; Girolami, M.; Sejnowski, T.J. Independent Component Analysis Using an Extended Infomax Algorithm for Mixed Subgaussian and Supergaussian Sources. Neural Comput. 1999, 11, 417–441.
- Baker, W.B.; Parthasarathy, A.B.; Busch, D.R.; Mesquita, R.C.; Greenberg, J.H.; Yodh, A.G. Modified Beer-Lambert Law for Blood Flow. Biomed. Opt. Express 2014, 5, 4053.
- Kwak, Y.; Kong, K.; Song, W.-J.; Min, B.-K.; Kim, S.-E. Multilevel Feature Fusion with 3D Convolutional Neural Network for EEG-Based Workload Estimation. IEEE Access 2020, 8, 16009–16021.
- Zhang, Y.; Zhang, T.; Wu, C.; Tao, R. Multi-Scale Spatiotemporal Feature Fusion Network for Video Saliency Prediction. IEEE Trans. Multimed. 2024, 26, 4183–4193.
- Dou, Q.; Yu, L.; Chen, H.; Jin, Y.; Yang, X.; Qin, J.; Heng, P.-A. 3D Deeply Supervised Network for Automated Segmentation of Volumetric Medical Images. Med. Image Anal. 2017, 41, 40–54.
- Ji, S.; Zhang, C.; Xu, A.; Shi, Y.; Duan, Y. 3D Convolutional Neural Networks for Crop Classification with Multi-Temporal Remote Sensing Images. Remote Sens. 2018, 10, 75.
- Clevert, D.-A.; Unterthiner, T.; Hochreiter, S. Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs). arXiv 2016, arXiv:1511.07289.
- Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of the International Conference on Machine Learning, PMLR, Lille, France, 6–11 July 2015; pp. 448–456.
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. In Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017.
- Cooney, C.; Folli, R.; Coyle, D. A Bimodal Deep Learning Architecture for EEG-fNIRS Decoding of Overt and Imagined Speech. IEEE Trans. Biomed. Eng. 2022, 69, 1983–1994.
- He, Q.; Feng, L.; Jiang, G.; Xie, P. Multimodal Multitask Neural Network for Motor Imagery Classification with EEG and fNIRS Signals. IEEE Sens. J. 2022, 22, 20695–20706.
- Qiu, L.; Feng, W.; Ying, Z.; Pan, J. EFMLNet: Fusion Model Based on End-to-End Mutual Information Learning for Hybrid EEG-fNIRS Brain-Computer Interface Applications. In Proceedings of the Annual Meeting of the Cognitive Science Society, Rotterdam, The Netherlands, 24–27 July 2024.
- Yu, B.; Cao, L.; Jia, J.; Fan, C.; Dong, Y.; Zhu, C. E-FNet: A EEG-fNIRS Dual-Stream Model for Brain–Computer Interfaces. Biomed. Signal Process. Control 2025, 100, 106943.
| Branch | Layer | Kernel Size | Stride | Output Channels | Output Dimension |
|---|---|---|---|---|---|
| EEG | Conv1 | 4 × 4 × 12 | 2 × 2 × 6 | 16 | 8 × 8 × 100 × 16 |
| EEG | Conv2 | 2 × 2 × 6 | 2 × 2 × 2 | 32 | 4 × 4 × 50 × 32 |
| fNIRS | Conv1 | 4 × 4 × 6 | 2 × 2 × 2 | 16 | 11 × 8 × 8 × 15 × 16 |
| fNIRS | Conv2 | 2 × 2 × 3 | 2 × 2 × 2 | 32 | 11 × 4 × 4 × 8 × 32 |
| Fusion | Conv1 | 4 × 4 × 12 | 2 × 2 × 6 | 16 | 8 × 8 × 100 × 16 |
| Fusion | EFGF1 Conv1 | 2 × 2 × 6 | 1 × 1 × 1 | 1 | 8 × 8 × 100 × 1 |
| Fusion | EFGF1 Conv2 | 2 × 2 × 3 | 1 × 1 × 1 | 1 | 11 × 8 × 8 × 15 × 1 |
| Fusion | Conv2 | 2 × 2 × 6 | 2 × 2 × 2 | 32 | 4 × 4 × 50 × 32 |
| Fusion | EFGF2 Conv1 | 2 × 2 × 3 | 1 × 1 × 1 | 1 | 4 × 4 × 50 × 1 |
| Fusion | EFGF2 Conv2 | 2 × 2 × 3 | 1 × 1 × 1 | 1 | 11 × 4 × 4 × 8 × 1 |
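As a reading aid for the table above, here is a hedged PyTorch sketch of the EEG branch using the listed kernel sizes, strides, and channel counts. The 16 × 16 × 600 input grid and the padding values are our assumptions, chosen so the outputs reproduce the tabulated dimensions; ELU and batch normalization appear in the reference list [52,53], but the exact layer ordering here is illustrative.

```python
import torch
import torch.nn as nn

class EEGBranch(nn.Module):
    """3D-CNN EEG branch matching the table's kernels/strides/channels.
    The input grid (16 x 16 spatial, 600 time samples) and the paddings
    are assumptions made so that the output shapes match the table."""

    def __init__(self):
        super().__init__()
        self.conv1 = nn.Sequential(  # table row: Conv1, kernel 4x4x12, stride 2x2x6, 16 ch
            nn.Conv3d(1, 16, kernel_size=(4, 4, 12), stride=(2, 2, 6), padding=(1, 1, 3)),
            nn.BatchNorm3d(16),
            nn.ELU(),
        )
        self.conv2 = nn.Sequential(  # table row: Conv2, kernel 2x2x6, stride 2x2x2, 32 ch
            nn.Conv3d(16, 32, kernel_size=(2, 2, 6), stride=(2, 2, 2), padding=(0, 0, 2)),
            nn.BatchNorm3d(32),
            nn.ELU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.conv1(x)      # -> (batch, 16, 8, 8, 100), as in the table
        return self.conv2(x)   # -> (batch, 32, 4, 4, 50), as in the table

print(EEGBranch()(torch.randn(2, 1, 16, 16, 600)).shape)  # torch.Size([2, 32, 4, 4, 50])
```

The fNIRS and fusion branches would follow the same pattern using their respective rows of the table; we do not model the extra leading dimension (11) of the fNIRS tensors here, since its meaning is not specified in this excerpt.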
| Method | MI Accuracy (%) | MI Kappa | MA Accuracy (%) | MA Kappa | WG Accuracy (%) | WG Kappa |
|---|---|---|---|---|---|---|
| FGANet [39] | 64.95 ± 8.65 | 0.30 ± 0.17 | 76.58 ± 9.00 | 0.53 ± 0.18 | 71.33 ± 9.01 | 0.43 ± 0.18 |
| EF-Net [27] | 61.80 ± 8.04 | 0.24 ± 0.16 | 75.59 ± 9.55 | 0.51 ± 0.19 | 68.51 ± 9.30 | 0.37 ± 0.19 |
| BiMNC [56] | 67.25 ± 7.37 | 0.34 ± 0.15 | 82.36 ± 9.24 | 0.65 ± 0.18 | 76.86 ± 10.22 | 0.54 ± 0.20 |
| M2NN [57] | 66.21 ± 9.85 | 0.32 ± 0.20 | 82.72 ± 7.30 | 0.65 ± 0.15 | 76.07 ± 9.87 | 0.52 ± 0.20 |
| EFMLNet [58] | 66.92 ± 8.99 | 0.34 ± 0.18 | 83.14 ± 7.64 | 0.66 ± 0.15 | 77.49 ± 9.05 | 0.55 ± 0.18 |
| E-FNet [59] | 64.91 ± 7.51 | 0.30 ± 0.15 | 83.66 ± 7.87 | 0.67 ± 0.16 | 76.74 ± 9.89 | 0.53 ± 0.20 |
| STA-Net [26] | 69.03 ± 9.46 | 0.38 ± 0.20 | 84.77 ± 7.83 | 0.70 ± 0.16 | 78.99 ± 8.77 | 0.58 ± 0.18 |
| TSFNet | 70.18 ± 10.84 | 0.40 ± 0.21 | 86.26 ± 7.45 | 0.73 ± 0.15 | 81.13 ± 7.18 | 0.62 ± 0.14 |
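One quick consistency check on these numbers: all three paradigms (MI, MA, WG) are two-class tasks, and for balanced binary classification Cohen's kappa reduces to kappa = 2 × accuracy − 1 (chance level 50%). The balanced-class assumption is ours; under it, the reported TSFNet kappa values follow directly from the accuracies:

```python
# Balanced binary classification: Cohen's kappa = 2 * accuracy - 1 (chance = 0.5).
# These values reproduce the TSFNet row of the table above.
for task, acc in [("MI", 0.7018), ("MA", 0.8626), ("WG", 0.8113)]:
    print(f"{task}: kappa = {2 * acc - 1:.2f}")
# MI: kappa = 0.40, MA: kappa = 0.73, WG: kappa = 0.62
```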
| Subject | MI TSFNet (%) | MI STA-Net (%) | MA TSFNet (%) | MA STA-Net (%) | WG TSFNet (%) | WG STA-Net (%) |
|---|---|---|---|---|---|---|
| 01 | 78.83 | 78.50 | 91.50 | 89.67 | 79.33 | 81.33 |
| 02 | 80.00 | 78.33 | 85.50 | 86.17 | 85.83 | 88.67 |
| 03 | 81.33 | 79.83 | 93.17 | 92.50 | 88.83 | 87.00 |
| 04 | 64.33 | 70.33 | 91.00 | 87.50 | 81.83 | 81.33 |
| 05 | 59.33 | 58.33 | 81.83 | 77.33 | 81.83 | 81.67 |
| 06 | 57.00 | 57.17 | 93.83 | 91.67 | 69.33 | 62.17 |
| 07 | 63.50 | 62.50 | 88.00 | 87.83 | 86.67 | 78.33 |
| 08 | 60.33 | 60.83 | 95.50 | 93.67 | 70.50 | 70.83 |
| 09 | 89.67 | 86.00 | 73.00 | 80.67 | 84.17 | 84.50 |
| 10 | 56.67 | 59.33 | 89.67 | 85.67 | 79.33 | 74.00 |
| 11 | 58.17 | 59.17 | 73.50 | 76.00 | 71.17 | 63.67 |
| 12 | 64.33 | 64.67 | 80.67 | 75.67 | 84.83 | 83.00 |
| 13 | 68.00 | 66.50 | 89.00 | 92.83 | 69.67 | 69.67 |
| 14 | 62.00 | 62.33 | 85.67 | 85.67 | 88.17 | 89.00 |
| 15 | 71.50 | 69.00 | 78.33 | 77.67 | 67.50 | 66.67 |
| 16 | 74.67 | 72.50 | 73.83 | 72.50 | 92.33 | 90.17 |
| 17 | 77.00 | 70.83 | 92.50 | 93.33 | 80.50 | 81.67 |
| 18 | 61.00 | 66.50 | 94.17 | 94.67 | 80.17 | 76.17 |
| 19 | 86.50 | 77.67 | 84.17 | 79.33 | 83.00 | 77.67 |
| 20 | 60.83 | 56.17 | 64.67 | 62.17 | 79.33 | 70.50 |
| 21 | 71.00 | 67.33 | 89.67 | 84.33 | 69.17 | 61.67 |
| 22 | 63.17 | 59.67 | 87.83 | 81.50 | 90.00 | 88.50 |
| 23 | 83.33 | 74.17 | 84.00 | 83.00 | 88.83 | 87.00 |
| 24 | 55.67 | 61.33 | 91.50 | 90.83 | 85.00 | 85.33 |
| 25 | 84.00 | 82.83 | 86.83 | 75.50 | 84.83 | 85.50 |
| 26 | 89.50 | 87.83 | 94.67 | 94.00 | 87.17 | 87.67 |
| 27 | 84.50 | 86.50 | 87.83 | 82.50 | | |
| 28 | 59.17 | 59.50 | 86.83 | 93.50 | | |
| 29 | 69.83 | 66.17 | 93.00 | 90.67 | | |
| Task | Model | Accuracy (%) | Kappa |
|---|---|---|---|
| MI | No EFGF | 66.38 ± 13.36 | 0.33 ± 0.27 |
| MI | No CAFE | 66.15 ± 13.46 | 0.33 ± 0.27 |
| MI | TSFNet | 70.18 ± 10.84 | 0.40 ± 0.21 |
| MA | No EFGF | 83.11 ± 9.40 | 0.66 ± 0.19 |
| MA | No CAFE | 84.28 ± 6.89 | 0.69 ± 0.14 |
| MA | TSFNet | 86.26 ± 7.45 | 0.73 ± 0.15 |
| WG | No EFGF | 79.25 ± 7.48 | 0.59 ± 0.15 |
| WG | No CAFE | 78.18 ± 9.01 | 0.56 ± 0.18 |
| WG | TSFNet | 81.13 ± 7.18 | 0.62 ± 0.14 |
| Task | Loss Function | Accuracy (%) | Kappa |
|---|---|---|---|
| MI | No L_EFGF | 66.41 ± 10.80 | 0.33 ± 0.22 |
| MI | No L_CAFE | 66.56 ± 12.77 | 0.33 ± 0.26 |
| MI | Full loss L | 70.18 ± 10.84 | 0.40 ± 0.21 |
| MA | No L_EFGF | 83.25 ± 8.56 | 0.67 ± 0.17 |
| MA | No L_CAFE | 84.44 ± 7.79 | 0.69 ± 0.16 |
| MA | Full loss L | 86.26 ± 7.45 | 0.73 ± 0.15 |
| WG | No L_EFGF | 75.94 ± 9.55 | 0.52 ± 0.19 |
| WG | No L_CAFE | 79.22 ± 8.43 | 0.58 ± 0.17 |
| WG | Full loss L | 81.13 ± 7.18 | 0.62 ± 0.14 |
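The pattern above (removing either regularizer hurts, with L_EFGF mattering most for WG) corresponds to a composite training objective of the form L = L_cls + λ1·L_EFGF + λ2·L_CAFE. The sketch below shows only the generic shape of such an objective: the cross-entropy term and the λ weights are placeholder assumptions, while the actual regularizer definitions are those of Sections 3.5.2 and 3.5.3.

```python
import torch
import torch.nn.functional as F

def total_loss(logits, labels, l_efgf, l_cafe, lam1=0.1, lam2=0.1):
    """Composite objective sketch: classification loss plus the two regularizers.
    lam1/lam2 and the cross-entropy term are placeholder assumptions; the EFGF
    and CAFE regularizer values would come from the corresponding layers."""
    l_cls = F.cross_entropy(logits, labels)
    return l_cls + lam1 * l_efgf + lam2 * l_cafe

# Toy usage with random logits and dummy regularizer values.
logits = torch.randn(8, 2)
labels = torch.randint(0, 2, (8,))
print(total_loss(logits, labels, l_efgf=torch.tensor(0.5), l_cafe=torch.tensor(0.3)))
```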
| Model | Parameters | T_train (ms) | T_infer (ms) |
|---|---|---|---|
| TSFNet | 3,335,576 | 594.21 | 25.88 |
| STA-Net | 3,325,150 | 565.66 | 24.72 |
| No EFGF | 3,334,224 | 513.81 | 25.18 |
| No CAFE | 569,238 | 556.61 | 25.33 |