Impact of Temporal Window Shift on EEG-Based Machine Learning Models for Cognitive Fatigue Detection
Abstract
1. Introduction
- RQ1 (shift–performance sensitivity): When the window length is fixed, how does varying the temporal window shift influence overall accuracy and macro-F1 across the six models considered?
- RQ2 (class-wise effects): How does the shift alter per-class precision and recall, and the structure of confusions, with particular attention to the moderate-fatigue level and across-participant variability?
- RQ3 (data volume versus independence): How does the shift modulate the effective sample count and temporal redundancy, and to what extent do these factors mediate the observed performance differences?
- RQ4 (model dependence): Are the effects of shift consistent across classical learners and a lightweight transformer, or do interactions between model capacity and shift emerge under the same segmentation and features?
2. Related Work
3. Materials and Methods
3.1. Dataset Characteristics
3.2. Processing Pipeline
- Bandpass filtering;
- Artifact handling using Independent Component Analysis (ICA);
- Window-level statistical feature extraction on the analysis windows defined in Section 3.3;
- Normalization aligned with the evaluation protocol.
3.2.1. Bandpass Filtering
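As a minimal illustration of this step (not necessarily the configuration used in the study), the sketch below applies a zero-phase Butterworth band-pass to a multi-channel EEG array with SciPy; the 0.5–45 Hz passband, the fourth-order design, and the 128 Hz sampling rate are assumptions chosen for the example.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_eeg(eeg, fs=128.0, low=0.5, high=45.0, order=4):
    """Zero-phase Butterworth band-pass applied channel-wise.

    eeg: array of shape (n_channels, n_samples); fs in Hz.
    """
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    # Forward-backward filtering avoids phase distortion of the EEG waveforms.
    return sosfiltfilt(sos, eeg, axis=-1)

# Example: 14 channels, 150 s at 128 Hz (random values, shapes only).
raw = np.random.default_rng(0).standard_normal((14, 150 * 128))
filtered = bandpass_eeg(raw)
```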
3.2.2. Artifact Handling Using Independent Component Analysis
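A hedged sketch of ICA-based artifact handling with MNE-Python follows; the channel names, the use of a frontal channel as an EOG proxy, and the component-rejection criterion are assumptions made for illustration and may differ from the study's procedure.

```python
import numpy as np
import mne

# Hypothetical 14-channel, 128 Hz recording (Emotiv-style channel names assumed).
ch_names = ["AF3", "F7", "F3", "FC5", "T7", "P7", "O1",
            "O2", "P8", "T8", "FC6", "F4", "F8", "AF4"]
info = mne.create_info(ch_names, sfreq=128.0, ch_types="eeg")
raw = mne.io.RawArray(np.random.randn(14, 150 * 128) * 1e-5, info)

# A 1 Hz high-pass copy is commonly used to stabilize the ICA decomposition.
raw_for_ica = raw.copy().filter(l_freq=1.0, h_freq=None)
ica = mne.preprocessing.ICA(n_components=14, random_state=97, max_iter="auto")
ica.fit(raw_for_ica)

# Flag ocular components via correlation with a frontal channel used as an EOG proxy,
# then reconstruct the recording without them.
eog_idx, _ = ica.find_bads_eog(raw_for_ica, ch_name="AF3")
ica.exclude = eog_idx
cleaned = ica.apply(raw.copy())
```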
3.2.3. Wavelet-Based Feature Extraction
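The wavelet family, decomposition depth, and sub-band features used are specific to the study; the sketch below is only a generic example with PyWavelets, assuming a `db4` mother wavelet and a 4-level decomposition so that, at 128 Hz, the sub-bands roughly track the classical EEG rhythms.

```python
import numpy as np
import pywt

def wavelet_band_energies(window, wavelet="db4", level=4):
    """Relative energy per DWT sub-band for one single-channel window.

    At 128 Hz, a 4-level 'db4' decomposition yields approximate bands
    cA4 ~ 0-4 Hz, cD4 ~ 4-8 Hz, cD3 ~ 8-16 Hz, cD2 ~ 16-32 Hz, cD1 ~ 32-64 Hz,
    which loosely correspond to delta/theta/alpha/beta/gamma.
    """
    coeffs = pywt.wavedec(window, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

window = np.random.randn(512)          # one 4 s window at 128 Hz
rel_energy = wavelet_band_energies(window)
```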
3.2.4. Statistical Feature Extraction
- Central tendency and dispersion: mean, standard deviation, and variance;
- Range and energy-related measures: peak-to-peak amplitude (ptp), minimum, maximum, mean-square, and root mean square (RMS);
- Indices of extrema: index of the minimum, index of the maximum (useful to indicate within-window timing of salient excursions);
- Shape descriptors: skewness and kurtosis (higher-order moments summarizing asymmetry and tail heaviness);
- Absolute successive differences: a simple variability proxy sensitive to short-lived fluctuations within the window (a sketch computing these descriptors follows this list).
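A minimal sketch of these window-level descriptors for a single channel is shown below; whether the successive-difference measure is summed or averaged is not specified here, so the mean is used as an assumption.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def window_statistics(x):
    """Window-level statistical descriptors for one channel (1-D array)."""
    return {
        "mean": np.mean(x),
        "std": np.std(x),
        "var": np.var(x),
        "ptp": np.ptp(x),                      # peak-to-peak amplitude
        "min": np.min(x),
        "max": np.max(x),
        "mean_square": np.mean(x ** 2),
        "rms": np.sqrt(np.mean(x ** 2)),
        "argmin": int(np.argmin(x)),           # index of the minimum
        "argmax": int(np.argmax(x)),           # index of the maximum
        "skewness": skew(x),
        "kurtosis": kurtosis(x),
        # Variability proxy: mean of absolute successive differences.
        "abs_diff": np.mean(np.abs(np.diff(x))),
    }

features = window_statistics(np.random.randn(512))   # one 4 s window at 128 Hz
```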
3.3. Windowing and Temporal Shift
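A sketch of the overlapping segmentation is given below. The 512-sample (4 s) window and the 150 s recording length are inferred from the overlap percentages and per-recording window counts reported in the segmentation table, and should be read as assumptions of this example rather than a restatement of the protocol.

```python
import numpy as np

def segment(signal, win_len=512, shift=128):
    """Slice a (n_channels, n_samples) recording into overlapping windows.

    Returns an array of shape (n_windows, n_channels, win_len), where
    n_windows = floor((n_samples - win_len) / shift) + 1.
    """
    n_samples = signal.shape[-1]
    starts = np.arange(0, n_samples - win_len + 1, shift)
    return np.stack([signal[..., s:s + win_len] for s in starts])

recording = np.random.randn(14, 150 * 128)   # 150 s of 14-channel EEG at 128 Hz
for shift in (128, 64, 32):
    windows = segment(recording, shift=shift)
    overlap = 100 * (1 - shift / 512)
    print(shift, windows.shape[0], f"{overlap:.2f}%")   # 147 / 293 / 585 windows
```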
3.4. Classifier Models
3.4.1. Traditional Machine Learning Models
3.4.2. Transformer Model
3.5. Evaluation
4. Results and Discussion
4.1. Effect of Temporal Shift on Aggregate Metrics (RQ1)
4.2. Class-Wise Effects (RQ2)
4.3. Data Volume and Sample Independence (RQ3)
4.4. Model Dependence (RQ4)
4.5. Relation to Prior Reports and Practical Implications
- the evaluation regime (within-subject or trial-wise versus subject-wise; see the sketch after this list),
- segmentation choices (window length and step/overlap),
- label granularity and class mapping, and
- feature/model families and tuning.
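The first of these factors interacts directly with the window shift: heavily overlapping windows from the same recording are near-duplicates, so a window-wise random split lets highly correlated segments straddle the train/test boundary, whereas a subject-wise (grouped) split keeps all windows of a participant on one side. The sketch below contrasts the two protocols with scikit-learn; the feature matrix, labels, and grouping key are placeholders, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

# Placeholder data: 13,230 windows (shift = 128), 70 features, 3 fatigue classes.
rng = np.random.default_rng(0)
X = rng.standard_normal((13230, 70))
y = rng.integers(0, 3, size=13230)
groups = np.repeat(np.arange(90), 147)   # recording id; grouping by participant is stricter

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# Window-wise split: overlapping windows from the same recording can appear on both sides.
f1_windowwise = cross_val_score(clf, X, y, scoring="f1_macro",
                                cv=KFold(n_splits=5, shuffle=True, random_state=0))

# Subject-/recording-wise split: each group is held out as a whole.
f1_groupwise = cross_val_score(clf, X, y, groups=groups, scoring="f1_macro",
                               cv=GroupKFold(n_splits=5))

print(f1_windowwise.mean(), f1_groupwise.mean())
```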
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Feltman, K.A.; Vogl, J.F.; McAtee, A.; Kelley, A.M. Measuring aviator workload using EEG: An individualized approach to workload manipulation. Front. Neuroergon. 2024, 5, 1397586.
- Aricò, P.; Borghini, G.; Di Flumeri, G.; Colosimo, A.; Bonelli, S.; Golfetti, A.; Pozzi, S.; Imbert, J.P.; Granger, G.; Benhacene, R.; et al. Adaptive Automation Triggered by EEG-Based Mental Workload Index: A Passive Brain–Computer Interface Application in Realistic Air Traffic Control Environment. Front. Hum. Neurosci. 2016, 10, 539.
- Lees, T.; Chalmers, T.; Burton, D.; Zilberg, E.; Penzel, T.; Lal, S. Psychophysiology of Monotonous Driving, Fatigue and Sleepiness in Train and Non-Professional Drivers: Driver Safety Implications. Behav. Sci. 2023, 13, 788.
- Afzal, U.; Prouzeau, A.; Lee, L.; Dwyer, T.; Bichinepally, S.; Liebman, A.; Goodwin, S. Investigating Cognitive Load in Energy Network Control Rooms: Recommendations for Future Designs. Front. Psychol. 2022, 13, 812677.
- Lim, W.L.; Sourina, O.; Wang, L. STEW: Simultaneous Task EEG Workload Dataset. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 2106–2114.
- Brookshire, G.; Kasper, J.; Blauch, N.M.; Wu, Y.C.; Glatt, R.; Merrill, D.A.; Gerrol, S.; Yoder, K.J.; Quirk, C.; Lucero, C. Data leakage in deep learning studies of translational EEG. Front. Neurosci. 2024, 18, 1373515.
- Lee, H.T.; Cheon, H.R.; Lee, S.H.; Shim, M.; Hwang, H.J. Risk of data leakage in estimating the diagnostic performance of a deep-learning-based CAD system for psychiatric disorders. Sci. Rep. 2023, 13, 16633.
- Del Pup, F.; Zanola, A.; Tshimanga, L.F.; Bertoldo, A.; Finos, L.; Atzori, M. The role of data partitioning on the performance of EEG-based deep learning models in supervised cross-subject analysis: A preliminary study. Comput. Biol. Med. 2025, 196, 110608.
- Falih, M.H.; Alshamasin, M.S.; Abukhurma, R.; Khdour, T. Impact of Sliding Window Overlap Ratio on EEG-Based ASD Diagnosis Using Machine Learning Techniques. Appl. Sci. 2024, 14, 1702.
- Alghanim, M.; Attar, H.; Rezaee, K.; Khosravi, M.; Solyman, A.; Kanan, M.A. A Hybrid Deep Neural Network Approach to Recognize Brain Fatigue. Comput. Intell. Neurosci. 2024, 2024, 9898333.
- Christou, V.; Miltiadous, A.; Tsoulos, I.; Karvounis, E.; Tzimourta, K.D.; Tsipouras, M.G.; Anastasopoulos, N.; Tzallas, A.T.; Giannakeas, N. Evaluating the Window Size’s Role in Automatic EEG Epilepsy Detection. Sensors 2022, 22, 9233.
- Yuvaraj, R.; Samyuktha, S.; Fogarty, J.; Huang, J.S.; Tan, S.; Wong, T.K. Optimal EEG Time Window Length for Boredom Classification using Combined Non-linear Features. In Proceedings of the 2024 32nd European Signal Processing Conference (EUSIPCO), Lyon, France, 26–30 August 2024; pp. 1756–1761, ISBN 978-9-4645-9361-7.
- Safari, M.; Shalbaf, R.; Bagherzadeh, S.; Shalbaf, A. Classification of mental workload using brain connectivity and machine learning on electroencephalogram data. Sci. Rep. 2024, 14, 9153.
- Sun, C.; Mou, C. Survey on the research direction of EEG-based signal processing. Front. Neurosci. 2023, 17, 1203059.
- Hamzah, H.A.; Atia, A.; Serag, A.; Elmisery, A.M.; Ware, A.; Khan, S.; Ma, J. EEG-based emotion recognition systems: Comprehensive review of feature extraction methods. Heliyon 2024, 10, e31485.
- Vafaei, E.; Lee, J. Transformers in EEG Analysis: A Review of Architectures and Applications in Motor Imagery, Seizure, and Emotion Classification. Sensors 2025, 25, 1293.
- Pfeffer, M.A.; Ling, S.S.H.; Wong, J.K.W. Exploring the frontier: Transformer-based models in EEG signal analysis for brain–computer interfaces. Comput. Biol. Med. 2024, 178, 108705.
- Demirezen, G.; Taşkaya Temizel, T.; Brouwer, A.M. Reproducible machine learning research in mental workload classification using EEG. Front. Neuroergon. 2024, 5, 1346794.
- Al Imran, M.A.; Nasirzadeh, F.; Karmakar, C. Designing a practical fatigue detection system: A review on recent developments and challenges. J. Saf. Res. 2024, 90, 100–114.
- Afzal, M.A.; Gu, Z.; Bukhari, S.U.; Afzal, B. Brainwaves in the Cloud: Cognitive Workload Monitoring Using Deep Gated Neural Network and Industrial Internet of Things. Appl. Sci. 2024, 14, 5830.
- Zeynali, M.; Seyedarabi, H.; Afrouzian, R. Classification of EEG signals using Transformer-based deep learning and ensemble models. Biomed. Signal Process. Control 2023, 86, 105130.
- Yao, X.; Li, T.; Ding, P.; Wang, F.; Zhao, L.; Gong, A.; Nan, W.; Fu, Y. Emotion Classification Based on Transformer and CNN for EEG Spatial–Temporal Feature Learning. Brain Sci. 2024, 14, 268.
- Si, X.; Huang, D.; Sun, Y.; Ming, D. Temporal Aware Mixed Attention-based Convolution and Transformer Network (MACTN) for EEG Emotion Recognition. Comput. Biol. Med. 2024, 181, 108973.
- Cheng, Z.; Du, W.; Li, Y.; Zhang, Z.; Zheng, W. EEG-based emotion recognition using multi-scale dynamic 1D CNN and gated Transformer. Sci. Rep. 2024, 14, 31319.
- Xu, Y.; Sha, Y.; Chen, F.; Chen, X.; Wu, X.; Xu, P. AMDET: Attention-Based Multiple Dimensions EEG Transformer. IEEE Trans. Affect. Comput. 2024, 15, 2293–2307.
- Liang, S.; Li, L.; Zu, W.; Feng, W.; Hang, W. Adaptive deep feature representation learning for cross-subject EEG decoding. BMC Bioinform. 2024, 25, 393.
- Joshi, A.; Kale, S.; Chandel, S.; Pal, D.K. Likert Scale: Explored and Explained. Br. J. Appl. Sci. Technol. 2015, 7, 396–403.
- de Cheveigné, A.; Nelken, I. Filters: When, Why, and How (Not) to Use Them. Neuron 2019, 102, 280–293.
- Singh, A.K.; Krishnan, S. Trends in EEG signal feature extraction applications. Front. Artif. Intell. 2023, 5, 1072801.
- Ranjan, R.; Chandra Sahana, B.; Kumar Bhandari, A. Ocular artifact elimination from electroencephalography signals: A systematic review. Biocybern. Biomed. Eng. 2021, 41, 960–996.
- Frølich, L.; Dowding, I. Removal of muscular artifacts in EEG signals: A comparison of linear decomposition methods. Brain Inform. 2018, 5, 13–22.
- Kang, T.; Thielen, J.; Vidaurre, C.; Lemm, S. I see artifacts: ICA-based EEG artifact removal does not improve deep network decoding across three BCI tasks. J. Neural Eng. 2024, 21, 066036.
- Mutanen, T.P.; Mäkinen, J.; Ilmoniemi, R.; Rocchi, L.; Nieminen, J.O. A simulation study: Comparing independent component analysis and signal-space projection–source-informed reconstruction for rejecting muscle artifacts evoked by TMS. Front. Hum. Neurosci. 2024, 18, 1324958.
- Guo, T.; Zhang, T.; Lim, E.; López-Benítez, M.; Ma, F.; Yu, L. A Review of Wavelet Analysis and Its Applications: Challenges and Opportunities. IEEE Access 2022, 10, 58869–58903.
- Acharya, S.; Patel, N.; Subramanian, R. A systematic review of EEG-based automated mental stress quantification. Appl. Soft Comput. 2025, 159, 103368.
- Nayak, A.B.; Shah, A.; Maheshwari, S.; Anand, V.; Chakraborty, S.; Kumar, T.S. An empirical wavelet transform-based approach for motion artifact removal in electroencephalogram signals. Decis. Anal. J. 2024, 10, 100420.
- Mehmood, I.; Li, H.; Umer, W.; Arsalan, A.; Anwer, S.; Mirza, M.A.; Ma, J.; Antwi-Afari, M.F. Multimodal integration for data-driven classification of mental fatigue during construction equipment operations: Incorporating electroencephalography, electrodermal activity, and video signals. Dev. Built Environ. 2023, 15, 100198.
- Campos, M.S.R.; McCracken, H.S.; Uribe-Quevedo, A.; Grant, B.L.; Yielder, P.C.; Murphy, B.A. A Machine Learning Approach to Classifying EEG Data Collected with or without Haptic Feedback during a Simulated Drilling Task. Brain Sci. 2024, 14, 894.
- Guo, H.; Chen, S.; Zhou, Y.; Xu, T.; Zhang, Y.; Ding, H. A hybrid critical channels and optimal feature subset selection framework for EEG fatigue recognition. Sci. Rep. 2025, 15, 2139.
- Avola, D.; Cascio, M.; Cinque, L.; Fagioli, A.; Foresti, G.L.; Marini, M.R.; Pannone, D. Analyzing EEG data with machine and deep learning: A benchmark. In Proceedings of the International Conference on Image Analysis and Processing; Springer: Cham, Switzerland, 2022; pp. 335–345.
- Siddhad, G.; Gupta, A.; Dogra, D.P.; Roy, P.P. Efficacy of Transformer Networks for Classification of Raw EEG Data. Biomed. Signal Process. Control 2024, 87, 105488.
Shift S (samples) | Shift (s) | Overlap (%) | Windows per Recording | Total Windows |
---|---|---|---|---|
128 | 1.00 | 75.00 | 147 | 13,230 |
64 | 0.50 | 87.50 | 293 | 26,370 |
32 | 0.25 | 93.75 | 585 | 52,650 |
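The window counts and overlap percentages above follow from the window length W, shift S, and recording length L; taking W = 512 samples and L = 19,200 samples (the values implied by the 75% overlap and 147 windows of the first row, i.e. 4 s windows over 150 s recordings at 128 Hz):

```latex
N_{\text{windows}} = \left\lfloor \frac{L - W}{S} \right\rfloor + 1,
\qquad
\text{overlap} = \left(1 - \frac{S}{W}\right) \times 100\%
```

For S = 128 this gives (19,200 − 512)/128 + 1 = 147 windows per recording and (1 − 128/512) × 100% = 75% overlap, matching the first row; the other rows follow in the same way, and the total-window column is the per-recording count multiplied by the 90 recordings.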
Shift S (samples) | Low (N = 42) | Moderate (N = 23) | High (N = 25) | Total (N = 90) |
---|---|---|---|---|
128 | 6174 | 3381 | 3675 | 13,230 |
64 | 12,306 | 6739 | 7325 | 26,370 |
32 | 24,570 | 13,455 | 14,625 | 52,650 |
Model | Key Settings (Capacity Descriptor) | Parameters |
---|---|---|
Decision Tree (DT) | Gini; no max depth; stop at pure/leaf = 1 | n/a (data-dependent) |
Random Forest (RF) | 100 trees; Gini; max_features = | n/a (data-dependent) |
SVM (RBF) | ; γ = scale | n/a (support vectors)
kNN | ; Euclidean distance | n/a (uses training set) |
MLP | ; ReLU; dropout 0.5; Adam 0.001 | ≈18,600 |
Transformer | Embed ; 1 encoder block; 4 heads; FF width 64; dropout 0.1; Adam 0.001 | ≈23,500 |
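For concreteness, the sketch below shows one way the six classifiers could be instantiated in Python; hyperparameters the table leaves unspecified (the SVM's C, the number of neighbours k, the MLP hidden widths, and the transformer embedding width) are placeholders, and the Keras transformer is a generic single-block encoder rather than the authors' exact architecture.

```python
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from tensorflow.keras import layers

N_FEATURES, N_CLASSES = 70, 3            # placeholder dimensionalities

# Classical learners (scikit-learn); C, k, and max_features are placeholders.
classical_models = {
    "DT": DecisionTreeClassifier(criterion="gini", max_depth=None, min_samples_leaf=1),
    "RF": RandomForestClassifier(n_estimators=100, criterion="gini", max_features="sqrt"),
    "SVM": SVC(kernel="rbf", C=1.0, gamma="scale"),
    "kNN": KNeighborsClassifier(n_neighbors=5, metric="euclidean"),
}

def build_mlp(n_features=N_FEATURES, n_classes=N_CLASSES):
    """Feed-forward network with ReLU, dropout 0.5, and Adam(0.001); widths are placeholders."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(n_features,)),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

def build_transformer(n_features=N_FEATURES, n_classes=N_CLASSES, d_model=32):
    """Single encoder block: 4 heads, feed-forward width 64, dropout 0.1, Adam(0.001)."""
    inputs = tf.keras.Input(shape=(n_features, 1))       # each feature treated as a token
    x = layers.Dense(d_model)(inputs)                     # linear embedding (width is a placeholder)
    attn = layers.MultiHeadAttention(num_heads=4, key_dim=d_model // 4, dropout=0.1)(x, x)
    x = layers.LayerNormalization()(x + attn)
    ff = layers.Dense(64, activation="relu")(x)
    ff = layers.Dropout(0.1)(layers.Dense(d_model)(ff))
    x = layers.LayerNormalization()(x + ff)
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

mlp, transformer = build_mlp(), build_transformer()
```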
Model | Acc (S = 128) | Prec (S = 128) | Rec (S = 128) | F1 (S = 128) | Acc (S = 64) | Prec (S = 64) | Rec (S = 64) | F1 (S = 64) | Acc (S = 32) | Prec (S = 32) | Rec (S = 32) | F1 (S = 32)
---|---|---|---|---|---|---|---|---|---|---|---|---
Decision Tree | 0.88 | 0.87 | 0.87 | 0.87 | 0.93 | 0.93 | 0.93 | 0.93 | 0.97 | 0.97 | 0.97 | 0.97 |
Random Forest | 0.97 | 0.98 | 0.97 | 0.97 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 |
SVM | 0.46 | 0.72 | 0.34 | 0.22 | 0.47 | 0.77 | 0.34 | 0.23 | 0.47 | 0.56 | 0.36 | 0.31 |
kNN | 0.90 | 0.89 | 0.89 | 0.89 | 0.96 | 0.96 | 0.95 | 0.96 | 0.98 | 0.98 | 0.98 | 0.98 |
MLP | 0.62 | 0.60 | 0.58 | 0.57 | 0.74 | 0.73 | 0.72 | 0.71 | 0.80 | 0.78 | 0.78 | 0.78 |
Transformer | 0.64 | 0.61 | 0.58 | 0.58 | 0.73 | 0.72 | 0.69 | 0.69 | 0.77 | 0.76 | 0.74 | 0.74 |