Article

EEG-Based Attention Classification for Enhanced Learning Experience

1 College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110819, China
2 School of Mechanical Engineering and Automation, Northeastern University, Shenyang 110819, China
3 School of Software, Tsinghua University, Beijing 100084, China
4 Sino-Pak Center for Artificial Intelligence, School of Computing, Pak-Austria Fachhochschule Institute of Applied Sciences and Technology (PAF-IAST), Haripur 22620, Pakistan
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(15), 8668; https://doi.org/10.3390/app15158668
Submission received: 9 May 2025 / Revised: 3 July 2025 / Accepted: 18 July 2025 / Published: 5 August 2025
(This article belongs to the Special Issue EEG Horizons: Exploring Neural Dynamics and Neurocognitive Processes)

Abstract

This paper presents a novel EEG-based learning system designed to enhance the efficiency and effectiveness of studying by dynamically adjusting the difficulty level of learning materials based on real-time attention levels. In the training phase, EEG signals corresponding to high and low concentration levels are recorded while participants engage in quizzes to learn and memorize Chinese characters. The attention levels are determined based on performance metrics derived from the quiz results. Following extensive preprocessing, the EEG data undergo several feature extraction steps: removal of artifacts due to eye blinks and facial movements, segregation of waves based on their frequencies, similarity indexing with respect to delay, binary thresholding, and principal component analysis (PCA). These extracted features are then fed into a k-NN classifier, which accurately distinguishes between high and low attention brain wave patterns, with the labels derived from the quiz performance indicating high or low attention. During the implementation phase, the system continuously monitors the user’s EEG signals while studying. When low attention levels are detected, the system increases the repetition frequency and reduces the difficulty of the flashcards to refocus the user’s attention. Conversely, when high concentration levels are identified, the system escalates the difficulty level of the flashcards to maximize the learning challenge. This adaptive approach ensures a more effective learning experience by maintaining optimal cognitive engagement, resulting in improved learning rates, reduced stress, and increased overall learning efficiency. Our results indicate that this EEG-based adaptive learning system holds significant potential for personalized education, fostering better retention and understanding of Chinese characters.

1. Introduction

Rapid developments in neurotechnology and machine learning have opened new possibilities for improving digital learning environments. However, a persistent challenge in education remains: traditional instructional systems often treat learners as homogenous, static entities, neglecting the moment-to-moment cognitive fluctuations—especially attentional variations—that are critical for effective learning. This gap is particularly pronounced in cognitively demanding domains such as second-language acquisition, where sustained focus significantly impacts memory encoding and retention [1].
Despite the availability of EEG (electroencephalography) as a real-time brain monitoring tool [2,3], most existing learning systems that incorporate EEG signals fall short in several areas. They either rely on passive monitoring, analyze data offline, or lack real-time feedback loops that could personalize the learning experience. Moreover, they often treat attention as a unidimensional signal, overlooking the complex inter-regional brain dynamics that underlie attentional states [4,5].
Recent studies have attempted to detect attentional states from EEG data, using techniques ranging from frequency band power analysis to machine learning classifiers [6,7,8]. However, these efforts typically lack integration across four critical dimensions: (1) optimized frequency band selection tailored to cognitive states; (2) real-time instructional adaptation based on EEG-derived attention levels; (3) brain connectivity modeling using graph-theoretic metrics like Phase Lag Index (PLI); and (4) interpretable classification methods such as k-NN that offer both accuracy and transparency.
To address these limitations, this study presents a fully integrated, EEG-based neuroadaptive learning system designed for Mandarin vocabulary acquisition. The proposed framework dynamically adjusts task difficulty based on real-time classification of attention into high-attention (HA) and low-attention (LA) states. The training phase uses quiz performance to label EEG segments, followed by wavelet-based feature extraction and artifact removal. These features are classified using a k-NN model, and connectivity patterns are captured via the Hilbert transform to generate functional correlation matrices [9,10].
A distinctive contribution of this work lies in its use of thresholded functional brain networks to extract graph features, enabling deeper insights into attentional dynamics. While graph metrics such as clustering coefficient and path length have been used in other fields [9,11], they remain underutilized in adaptive education systems. By incorporating these features into the attention classification pipeline, our system offers a network-informed approach that goes beyond simple band-power metrics.
During real-time deployment, the system detects HA and LA states and modulates task difficulty accordingly—reducing cognitive load during LA and increasing challenge during HA to sustain engagement. This closed-loop feedback mechanism represents a novel application of adaptive learning aligned with cognitive neurodynamics.

Related Work

EEG-based attention monitoring has gained traction in educational neuroscience due to its ability to reveal real-time cognitive states. Apicella et al. [6] proposed an alpha-band-based system for measuring learner engagement but lacked adaptive control. Lingelbach et al. [7] incorporated eye-tracking to guide tutoring strategies, though feedback remained passive and limited to specific frequency bands.
Supervised learning methods have been extensively applied to decode attentional states from physiological data. For example, Gogna et al. [12] employed the k-nearest neighbors (k-NN) algorithm for binary workload classification, while Rehman et al. [8] and Alhagry et al. [13] utilized deep learning models such as long short-term memory (LSTM) networks. However, these approaches often suffer from limited interpretability and lack integration with adaptive feedback mechanisms.
Graph-theoretic modeling of brain connectivity has been investigated in various domains, including deception detection and industrial safety [9,11]. Small-world network metrics, which reflect an optimal balance between local specialization and global integration, have been shown to be essential for evaluating brain efficiency [14]. In the context of education, Renton et al. [15] and Gamboa et al. [16] introduced network-based features for classifying attention states. Nevertheless, their implementations were not embedded within closed-loop or neuroadaptive learning systems.
Neuroadaptive systems that respond to attentional fluctuations are still emerging. Verma et al. [17] presented an attention-aware deep learning model, but its interpretability and integration with educational content were limited. Tuckute et al. [18] demonstrated real-time decoding but did not incorporate content adaptation or graph-based connectivity features.
In contrast, our work integrates these critical components into a unified, real-time neuroadaptive system for language learning. It uniquely combines frequency-optimized EEG features, graph-theoretic connectivity metrics, and interpretable k-NN classification to adjust task difficulty dynamically based on HA and LA states. As shown in Figure 1, the EAC system captures and classifies real-time EEG signals to adapt flashcard difficulty levels during learning sessions. To the best of our knowledge, no prior work offers such an end-to-end, network-informed adaptive framework for second-language acquisition.
Key contributions of this paper include:
  • Development of a real-time EEG-based neuroadaptive learning framework integrating frequency band selection, graph-theoretic connectivity, and interpretable classification.
  • Empirical validation showing gamma band superiority for HA/LA classification, supported by connectivity metrics.
  • Demonstration of improved engagement and learning efficiency through dynamic task adjustment based on real-time EEG feedback.
  • A practical implementation tailored for Mandarin vocabulary learning, with potential generalizability to broader educational contexts.
By combining neuroscience, signal processing, and educational theory, this study advances the state of neuroadaptive learning and offers a scalable solution for intelligent tutoring systems. The remainder of this paper is structured as follows:
Recent developments in EEG-based attention classification have been reviewed in this section. Section 2 describes the study’s methodology, including preprocessing, feature extraction, k-nearest neighbors (k-NN) classification, and the real-time neurofeedback technique. Section 3 reports performance metrics and results on the efficiency of k-NN for attention classification and real-time neurofeedback. Section 4 discusses the implications of the findings for understanding attention mechanisms and their influence on pedagogical approaches and neurocognitive research. Section 5 concludes the study, emphasizing how real-time adjustments based on EEG data can enhance educational research and learning experiences.

2. Materials and Methods

2.1. Task Design and Stimuli

Nineteen healthy, right-handed adults aged 25–40 participated, all with beginner-level (CEFR A1) Chinese proficiency assessed using a standardized language proficiency test. Participants’ native languages included English (70%), Urdu (20%), and other languages (10%). None had prior formal training in Chinese or exposure to phonetic systems such as Pinyin.
Motivational levels and visual memory were assessed using pre-task surveys. Only participants scoring above 60% on the engagement scale were included to minimize variability. The task was implemented using STIM software and followed a methodology adapted from previous research [19]. The experiment assessed attention by combining active learning phases with subsequent recall tasks as shown in Figure 2. The Ethics Committee of the Institute approved this study, and all subjects gave written informed consent before beginning the experiment.
Participants were exposed to Chinese vocabulary words under two experimental conditions: (i) Chinese characters presented with Pinyin to assist pronunciation, and (ii) Chinese characters presented without Pinyin, requiring participants to memorize pronunciation and meaning without phonetic support. Each learning session was followed by a recall phase to assess memory retention and attention levels.
Key performance measures included:
  • Response accuracy: Correct answers during recall were considered indicative of high attention (HA), while incorrect answers signaled low attention (LA).
A comprehensive overview of the data analysis process is illustrated in Figure 3. This figure outlines each step from EEG signal acquisition to feature extraction and classification.
The EEG signals were recorded using a 30-channel electrode setup following the international 10–20 system. The specific electrode locations included: Fp1, Fp2, F3, F4, C3, C4, P3, P4, O1, O2, F7, F8, T7, T8, P7, P8, Fz, Cz, Pz, Oz, FC1, FC2, CP1, CP2, FC5, FC6, CP5, CP6, PO3, and PO4.
Data acquisition was carried out using a 30-channel Brain-Computer Interface (BCI) system equipped with silver/silver chloride (Ag/AgCl) electrodes. These electrodes were selected for their high signal fidelity and low noise characteristics, ensuring reliable measurements for attention-related EEG analysis [19]. Electrode impedance was maintained below 5 kΩ using a built-in impedance meter integrated into the acquisition system, with conductive gel applied to ensure proper connectivity [9,19].
Figure 4 shows the denoised EEG signal from channel P3 during high attention (HA) and low attention (LA) conditions. The EEG signals were sampled at 1000 Hz to achieve a balance between high temporal resolution and computational efficiency, enabling the detection of neural activity up to 100 Hz. Baseline correction was applied using the −100 to 0 ms prestimulus interval to eliminate low-frequency drifts, as commonly required in event-related EEG analyses.
The details of our preprocessing pipeline based on the EEGLAB toolbox [20] are given as follows:

2.2. Noise Removal and Artifact Correction

EEG preprocessing was conducted using the EEGLAB toolbox (version 2022.1) implemented in MATLAB R2022a, following standardized pipelines for artifact removal and signal enhancement.
  • Filtering: A 4th-order Butterworth bandpass filter (1–40 Hz) was applied to the EEG signals to retain relevant frequency components while removing low-frequency drifts and high-frequency noise. Additionally, a 50 Hz notch filter was used to suppress power line interference.
  • Bad Channel Detection: Noisy EEG channels were identified through a combination of visual inspection and automated thresholding, using an absolute amplitude criterion of ±80 µV.
  • Independent Component Analysis (ICA): ICA decomposition was performed using the runica algorithm in EEGLAB. The number of components was set equal to the number of EEG channels (after bad channel exclusion). Components with high correlation ( r > 0.8 ) to vertical and horizontal EOG signals were removed to eliminate ocular artifacts, following the guidelines of Delorme et al. (2007) [21].
  • Channel Interpolation: Following ICA, bad channels were interpolated using spherical spline interpolation in EEGLAB. This order was chosen to preserve the full rank of the data, ensuring accurate component separation, as recommended by Kim et al. (2023) [22].
  • Baseline Correction: Baseline correction was applied using the −100 to 0 ms prestimulus interval relative to stimulus onset, to eliminate low-frequency drifts and enable consistent epoch comparisons.
  • Epoch Rejection: EEG epochs exceeding an absolute amplitude of ± 100 µV were automatically rejected. This step removed approximately 5% of the total data.
  • Re-referencing: All EEG signals were re-referenced to the common average reference to reduce spatial bias and enhance overall signal quality.
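As a concrete illustration, the filtering, baseline-correction, and epoch-rejection steps above can be sketched in a few lines of NumPy/SciPy. This is a minimal single-channel sketch, not the paper's EEGLAB pipeline; the function names, the notch quality factor Q, and the assumption that each epoch starts at −100 ms are ours:

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

def filter_channel(x, fs=1000.0):
    """4th-order Butterworth band-pass (1-40 Hz) plus 50 Hz notch, zero-phase."""
    b, a = butter(4, [1.0, 40.0], btype="bandpass", fs=fs)
    x = filtfilt(b, a, x)
    b, a = iirnotch(50.0, Q=30.0, fs=fs)  # Q = 30 is an illustrative choice
    return filtfilt(b, a, x)

def baseline_correct(epoch, fs=1000.0, prestim_ms=100):
    """Subtract the mean of the -100..0 ms prestimulus window.

    Assumes the epoch's first samples cover the prestimulus interval.
    """
    n = int(fs * prestim_ms / 1000)
    return epoch - epoch[:n].mean()

def reject_epochs(epochs, limit_uv=100.0):
    """Drop epochs whose absolute amplitude exceeds +/-100 uV; epochs: (n, samples)."""
    keep = np.abs(epochs).max(axis=1) <= limit_uv
    return epochs[keep]
```

ICA-based ocular artifact removal and spherical spline interpolation are omitted here; in practice those steps would be delegated to EEGLAB or an equivalent toolbox, as in the paper.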

2.3. Frequency Division Using Wavelet Transform

Figure 5 demonstrates the use of wavelet decomposition to distinguish frequency patterns across attention states.
Following interpolation and re-referencing, the EEG signals were decomposed into frequency-specific components using the discrete wavelet transform (DWT). A 4th-order Daubechies wavelet (db4) was employed to perform a 5-level decomposition. This procedure enabled the extraction of standard EEG bands, including Delta (0.5–4 Hz), Theta (4–8 Hz), Alpha (8–12 Hz), Beta (12–30 Hz), and Gamma (30–50 Hz). Such frequency partitioning provided detailed insights into neural dynamics, supporting more targeted analyses of both time-domain and frequency-domain features.
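A 5-level db4 decomposition of this kind can be sketched with the PyWavelets library. Note that at a 1000 Hz sampling rate the raw wavelet sub-bands (approximately 0–15.6 Hz for A5, 15.6–31.25 Hz for D5, and so on) do not line up exactly with the named EEG bands, so some band-limiting or mapping step is implied; the sub-band naming and framing below are our illustrative assumptions:

```python
import numpy as np
import pywt

def dwt_subbands(x, wavelet="db4", level=5):
    """Decompose x with a 5-level db4 DWT and reconstruct each sub-band.

    Returns {name: signal} for A5 (lowest frequencies) and D5..D1
    (increasingly high-frequency detail levels). Because the DWT is
    linear, summing all sub-band signals recovers the original signal.
    """
    coeffs = pywt.wavedec(x, wavelet, level=level)
    names = ["A5"] + [f"D{level - i}" for i in range(level)]  # A5, D5, ..., D1
    out = {}
    for i, name in enumerate(names):
        # Zero out every coefficient array except the i-th, then reconstruct.
        sel = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        out[name] = pywt.waverec(sel, wavelet)[: len(x)]
    return out
```

The perfect-reconstruction property (the sub-bands sum back to the input) is a useful sanity check when validating such a decomposition.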

2.4. Feature Extraction

Phase Correlation and Weighted Networks

The phase lag index (PLI) quantifies the phase synchronization between two signals $x(t)$ and $y(t)$, providing a robust measure of neural connectivity [9]. Using the Hilbert transform, the instantaneous phases of the signals are computed, and the phase difference is analyzed to calculate PLI as:
$$\mathrm{PLI}(x, y) = \left| \frac{1}{N} \sum_{n=1}^{N} \operatorname{sgn}\left[ \phi_x(t_n) - \phi_y(t_n) \right] \right|,$$
where $\phi_x(t_n)$ and $\phi_y(t_n)$ represent the instantaneous phases of signals $x(t)$ and $y(t)$ at time $t_n$, and $N$ refers to the total number of sampling points. PLI values range from 0 (no synchronization) to 1 (perfect synchronization).
Connectivity matrices derived from PLI represent weighted networks, where each edge weight reflects the synchronization strength between electrode pairs.
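A direct NumPy/SciPy sketch of this computation follows; the phase difference is wrapped to (−π, π] before taking the sign, as is standard for PLI, and the function names are ours:

```python
import numpy as np
from scipy.signal import hilbert

def pli(x, y):
    """Phase Lag Index between two equal-length 1-D signals.

    0 = no consistent phase lead/lag (including identical signals),
    1 = one signal consistently leads or lags the other.
    """
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    dphi = np.angle(np.exp(1j * dphi))  # wrap phase difference to (-pi, pi]
    return np.abs(np.mean(np.sign(dphi)))

def pli_matrix(data):
    """Symmetric pairwise PLI matrix for data of shape (channels, samples)."""
    n = data.shape[0]
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            w[i, j] = w[j, i] = pli(data[i], data[j])
    return w
```

Applied to the 30-channel recordings described above, `pli_matrix` yields the 30 × 30 weighted connectivity matrix used in the subsequent network construction.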

2.5. Network Construction and Thresholding

To convert the weighted brain functional network (BFN) into a binary network, a fixed threshold was applied to the adjacency matrix. This threshold, selected empirically, retained the most significant neural connections while excluding weaker or less relevant ones. The choice of threshold was informed by dataset properties and prior literature on functional brain network analysis [9]. This approach ensured that the retained connections reflected meaningful neural interactions for subsequent network feature analysis.

2.6. Brain Functional Network Analysis (BFNs)

Binary brain functional networks (BFNs) were constructed from the thresholded adjacency matrices and analyzed using graph-theoretic metrics. The nodes of the network represent EEG electrodes, and the edges represent functional connections based on the synchronization of EEG signals between pairs of electrodes.
The following graph-theoretic metrics were calculated:
  • Network Degree (ND): $ND_i = \sum_{j=1}^{N} A_{ij}$, which represents the total number of edges connected to node $i$, where $A_{ij}$ is the corresponding element in the adjacency matrix.
  • Clustering Coefficient (Cc): $Cc_i = \frac{2 T_i}{k_i (k_i - 1)}$, where $T_i$ is the number of triangles through node $i$, and $k_i$ is the degree of node $i$. This metric evaluates the local interconnectivity or cliquishness of a node’s neighborhood.
  • Path Length (PL): $PL = \frac{1}{N(N-1)} \sum_{i \neq j} d_{ij}$, the average of the shortest path lengths $d_{ij}$ between all pairs of nodes $i$ and $j$. It reflects the network’s overall routing efficiency.
  • Global Efficiency (GE): $GE = \frac{1}{N(N-1)} \sum_{i \neq j} \frac{1}{d_{ij}}$, which quantifies the efficiency of information exchange across the whole network by considering the inverse of the shortest path lengths.
  • Network Density (NDn): $NDn = \frac{2E}{N(N-1)}$, where $E$ is the number of existing edges and $N$ is the number of nodes. This metric describes how densely the network is connected.
  • Small-World Network Coefficient (SW): The small-world property was quantified using the small-worldness coefficient:
    $$SW = \frac{\gamma}{\lambda}, \qquad \gamma = \frac{C_{\mathrm{real}}}{C_{\mathrm{rand}}}, \qquad \lambda = \frac{L_{\mathrm{real}}}{L_{\mathrm{rand}}},$$
    where $C_{\mathrm{real}}$ and $L_{\mathrm{real}}$ are the clustering coefficient and characteristic path length of the actual brain network, and $C_{\mathrm{rand}}$, $L_{\mathrm{rand}}$ represent the same metrics computed from randomized networks that preserve node degree distributions.
  • Small-Worldness Criterion: The network is considered to exhibit small-world characteristics when:
    $$\frac{C_{\mathrm{real}}}{L_{\mathrm{real}}} > \frac{C_{\mathrm{rand}}}{L_{\mathrm{rand}}},$$
    indicating higher clustering and similar or shorter path lengths than a random graph.
This comparison assesses whether the constructed brain network exhibits small-world properties relative to randomized surrogate networks.
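The thresholding step of Section 2.5 and the metrics above can be sketched with NetworkX. The threshold value and function names are illustrative; small-worldness additionally requires degree-preserving random surrogates (e.g. `networkx.sigma`), which is omitted here for brevity:

```python
import numpy as np
import networkx as nx

def binarize(weights, th):
    """Keep connections whose PLI exceeds the threshold th; zero the diagonal."""
    adj = (weights > th).astype(int)
    np.fill_diagonal(adj, 0)
    return adj

def bfn_metrics(adj):
    """Graph-theoretic metrics for one binary brain functional network.

    Assumes a connected graph for the path-length calculation.
    """
    g = nx.from_numpy_array(adj)
    return {
        "degree": dict(g.degree()),
        "clustering": nx.average_clustering(g),
        "path_length": nx.average_shortest_path_length(g),
        "global_efficiency": nx.global_efficiency(g),
        "density": nx.density(g),
    }
```

On a fully connected network, clustering, path length, efficiency, and density all equal 1, which makes such a graph a convenient sanity check for the implementation.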

2.7. Statistical Analysis

To compare neural responses between HA and LA states, we performed paired sample t-tests and one-way repeated measures ANOVA separately for each cognitive task. Statistical analyses were performed across four frequency bands (theta, alpha, beta, gamma) and multiple network-level metrics (e.g., clustering coefficient, characteristic path length, global efficiency, local efficiency, modularity, and small-worldness). To control for multiple comparisons, we applied the Bonferroni correction within each task. Based on the number of planned comparisons, the significance threshold was adjusted to α = 0.005. Only results below this corrected threshold were considered statistically significant.
Effect sizes were calculated using Cohen’s d. Positive values of d indicate higher values in the (HA) state compared to the (LA) state, while negative values indicate the reverse. Cohen’s d thresholds were interpreted as follows: small ( | d | < 0.2 ), medium ( 0.2 | d | < 0.5 ), and large ( | d | 0.5 ).
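For paired HA-vs-LA comparisons within participants, one common form of Cohen's d divides the mean difference by the standard deviation of the paired differences; the paper does not state which variant it used, so the denominator here is an assumption:

```python
import numpy as np

def cohens_d_paired(ha, la):
    """Cohen's d for paired samples: mean(HA - LA) / SD of the differences.

    Positive d -> higher values in the HA state, matching the sign
    convention of Section 2.7. The paired-differences denominator is
    our assumption, not a detail reported in the paper.
    """
    diff = np.asarray(ha, dtype=float) - np.asarray(la, dtype=float)
    return diff.mean() / diff.std(ddof=1)
```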

2.8. Feature Reduction and Classification

PCA and k-NN Classifier: Principal Component Analysis (PCA) was employed to reduce the dimensionality of the EEG features, extracting key components while minimizing noise; this step is essential for managing high-dimensional data and enhancing the efficacy of subsequent classification algorithms. After feature reduction, the extracted principal components were used to train a k-nearest neighbors (k-NN) classifier.
To capture these properties, we performed network analysis (NA) on the 30 EEG channels, calculating six key metrics: clustering coefficient, path length, network degree, network density, global efficiency, and small-worldness (SWN). Analyzing these properties across frequency bands and attentional states (HA and LA) allowed us to identify significant variations in network organization and synchronization.
Classification performance was evaluated using 5-fold cross-validation, ensuring that training and testing sets remained stratified to balance high and low attention conditions across folds. Metrics including accuracy, precision, recall, and F1-score were computed as averages across folds.
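The classification stage described above can be sketched with scikit-learn. The number of principal components, the choice of k = 5 neighbors, and the added standardization step are illustrative assumptions, since the paper does not report these hyperparameters:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate

def evaluate_attention_classifier(features, labels, n_components=10, k=5):
    """PCA -> k-NN evaluated with stratified 5-fold cross-validation.

    Stratification keeps the HA/LA balance constant across folds;
    returns fold-averaged accuracy, precision, recall, and F1.
    """
    clf = make_pipeline(
        StandardScaler(),
        PCA(n_components=n_components),
        KNeighborsClassifier(n_neighbors=k),
    )
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_validate(clf, features, labels, cv=cv,
                            scoring=["accuracy", "precision", "recall", "f1"])
    return {m: scores[f"test_{m}"].mean()
            for m in ["accuracy", "precision", "recall", "f1"]}
```

Labels here are the binary HA/LA annotations derived from quiz performance, and the feature matrix would hold the network metrics and wavelet features described in the preceding subsections.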

2.9. Implementing Real-Time Neurofeedback-Based Learning Using the EAC Model

The EAC model was trained to classify attention states (HA vs. LA) based on correct and incorrect responses during the recall phase. The process involved preprocessing, wavelet feature extraction, constructing correlation matrices, applying thresholding to create binary networks, and classifying attention states. The system continuously monitored EEG signals in real-time to assess and adapt to the user’s attention levels during the study.
The system continuously monitored the user’s EEG signals throughout the study sessions to assess attention levels. As shown in Figure 6, the system dynamically detected the user’s attention states. Based on this real-time analysis, it adjusted the learning process by increasing the repetition frequency and reducing the flashcard difficulty when attention levels decreased in order to refocus the user and optimize learning engagement.
Conversely, when high concentration levels were identified, the system raised the difficulty to maintain an optimal learning challenge. This adaptive approach aimed to keep students engaged at an optimal cognitive level, thereby improving learning efficiency and reducing stress.
Mathematically, to adjust the difficulty of the quiz based on the concentration levels of the students, we define a binary variable C, where C = 1 indicates high attention (HA) and C = 0 indicates low attention (LA). The performance of the model was evaluated on the basis of classification accuracy, enabling an adaptive learning system tailored to individual attention states.
Let D represent the difficulty level of the quiz, ranging from 1 to 10. The adjustment function is defined as follows:
$$D_{\mathrm{new}} = D + \Delta D, \qquad \Delta D = k \left( C - C_{\mathrm{threshold}} \right),$$
where $k$ is a scaling factor, and $C_{\mathrm{threshold}} = 0.5$. When $C > C_{\mathrm{threshold}}$, the difficulty $D$ increases, and when $C < C_{\mathrm{threshold}}$, it decreases. This formulation allows the system to iteratively adjust the quiz difficulty based on real-time attention levels, thereby optimizing the learning challenge dynamically.
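The update rule can be written directly in code. The scaling factor k = 2.0 and the clamping to the paper's 1–10 difficulty range are illustrative choices, as the paper defines the rule but not k's value:

```python
def adjust_difficulty(d, c, k=2.0, c_threshold=0.5, d_min=1.0, d_max=10.0):
    """Closed-loop difficulty update: D_new = D + k * (C - C_threshold).

    c is the classified attention state (1 = HA, 0 = LA). The result is
    clamped to the 1-10 difficulty scale used for the flashcards; k = 2.0
    is an assumed value, not one reported in the paper.
    """
    return max(d_min, min(d_max, d + k * (c - c_threshold)))
```

With these defaults, each HA detection raises the difficulty by one step and each LA detection lowers it by one step, saturating at the scale's endpoints.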

2.10. Data Analysis

Data analysis was performed to evaluate the effectiveness of the adaptive learning system from the EAC model. Metrics such as quiz performance scores, response times, and EEG-derived attention levels were analyzed. Statistical methods, including t-tests and ANOVA, were used to compare performance and attention levels between different phases of the experiment. The same group of participants was used for both the k-NN classifier training and the real-time neurofeedback-based learning system implementation. This consistency ensured that the results were directly comparable and that individual differences in attention responses and learning outcomes could be accurately assessed across both experimental conditions. Each participant’s EEG data were collected under controlled conditions, providing a comprehensive dataset for both the training of the k-NN classifier and the evaluation of the neurofeedback system’s effectiveness [23].

3. Results

This study examines the performance of the k-nearest neighbors (k-NN) classifier, real-time neurofeedback efficacy, EEG frequency band analysis, and network analysis features. These areas highlight the effectiveness of integrating machine learning and neurofeedback techniques to enhance learning.

3.1. Training the k-NN Classifier

Integrating attention mechanisms into the k-NN classifier significantly enhanced classification accuracy. The feature extraction process effectively distinguished between high and low attention states, and the optimal EEG frequency band for real-time processing was identified, contributing to the development of more efficient neurofeedback systems.
Network analysis revealed that high attention states were associated with a more integrated and efficient neural network structure. Wavelet packet decomposition facilitated a detailed breakdown of EEG signals into four frequency bands: Theta (3–7 Hz), Alpha (7–15 Hz), Beta (15–30 Hz), and Gamma (30–50 Hz), as illustrated in Figure 7. These results indicated that neural synchronization patterns across these bands varied significantly between attention states.
Table 1 presents the mean synchronization values, standard deviations, p-values, and Cohen’s d values for the Theta, Alpha, Beta, and Gamma frequency bands, highlighting significant differences between high attention (HA) and low attention (LA) states.
Notably, HA showed higher synchronization in the Theta band, while LA exhibited greater synchronization in the Alpha, Beta, and Gamma bands. Cohen’s d values indicate small effect sizes across all frequency bands, with the Theta band showing the smallest effect. In contrast, the Beta and Gamma bands show slightly larger, though still small, negative effects. These findings suggest that while EEG frequency bands are influenced by attention states, the practical significance of these differences remains modest.
Figure 8 illustrates these neural synchronization patterns across the four frequency bands within a weighted 30 × 30 network, based on the 10–20 electrode placement system. Significant differences between HA and LA conditions are highlighted, with synchronization matrices color-coded from blue (H = 0) to red (H = 1), indicating strong correlations between specific channel pairs.
In this study, the synchronization matrix was converted into an unweighted graph for each participant by selecting a threshold value $Th$ from a range of 0.01 to 0.6, with increments of 0.001 [24]. The average ratio between $C_{\mathrm{real}}$ and $L_{\mathrm{real}}$ was calculated as a function of $Th$, as shown in Figure 9. Statistical analysis indicated that the most significant differences between high and low attention states were observed at the following thresholds: 0.42 (p = 0.001) for Theta, 0.31 (p = 0.002) for Alpha, 0.24 (p = 0.005) for Beta, and 0.28 (p = 0.005) for Gamma. Specifically, the ratio $C_{\mathrm{real}}/L_{\mathrm{real}}$ was slightly higher in the high attention state for the Theta, Alpha, and Beta bands, while the Gamma band exhibited the largest discrepancy between the two attention states. Binary matrix comparisons, shown in Figure 10, revealed complex network structures across various brain regions [25].
Key network metrics such as clustering coefficient, path length, and network degree, depicted in Table 2, showed distinct variations, particularly within the Beta and Gamma bands.
These metrics provide insights into the neural network’s structure and function, highlighting differences in connectivity patterns under varying cognitive states. The bar graph in Figure 11 illustrates the comparison of global efficiency, small-worldness (SWN), and network density between high attention (HA) and low attention (LA) task conditions across Theta, Alpha, Beta, and Gamma frequency bands. Error bars emphasize significant differences (p < 0.05) between conditions. These results underscore robust differences in network topology and efficiency between HA and LA states, emphasizing the impact of attentional states on cognitive network organization.
The results reveal distinct neural synchronization patterns across attentional states. HA networks show higher global efficiency and small-worldness, indicating enhanced information transfer and a better balance between local specialization and global integration, particularly noticeable in the Theta and Gamma bands. Moreover, HA networks demonstrate denser connectivity across all frequency bands compared to LA networks, suggesting improved network robustness and functional integration during tasks that demand sustained attention. Figure 12 depicts topological differences between high and low attention states, defining brain region properties.
Alpha and Beta bands exhibit complex patterns involving temporal, parietal, occipital, and frontal regions. In contrast, the Theta band shows predominant activity in the right hemisphere, while the Gamma band displays increased connections in the left hemisphere. Overall, these findings support the hypothesis that attention states influence neural synchronization patterns and network properties, shaping the functional organization of the brain under different cognitive conditions.
Table 3 presents a comparison of classification performance between NA + KNN and NA + PCA + KNN across all frequency bands. We compared the performance of the two methods across EEG frequency bands (Theta, Alpha, Beta, and Gamma) and network metrics (global efficiency [GE], small-worldness [SWN], and network density [ND]) for high attention (HA) and low attention (LA) tasks. Paired t-tests with Bonferroni correction were conducted to compare the methods, while repeated measures ANOVA was used to assess differences in network metrics between the HA and LA conditions.
In the Beta band, however, both methods performed similarly, with no significant difference (p-value = 0.150). Overall, PCA enhanced classification performance, particularly in the Theta, Alpha, and Gamma bands.

3.2. Real-Time Neurofeedback-Based Learning

The implementation of real-time neurofeedback resulted in substantial improvements in adaptive learning. The system dynamically adjusted learning parameters based on real-time EEG data, enhancing engagement and optimizing cognitive load. Participants experienced better learning outcomes and reduced stress levels due to these adaptive difficulty adjustments [23].
These findings demonstrate the model’s efficacy in distinguishing between attention levels, which is crucial for real-time quiz difficulty adjustment as mentioned in Figure 6. The model utilizes EEG data to classify attention states into high and low categories, leveraging advanced signal processing and machine learning techniques.
The k-NN classifier within the EAC model was trained using features extracted from preprocessed EEG signals. Initially, features from all major frequency bands—Theta, Alpha, Beta, and Gamma—were used to determine the most suitable band for attention state classification. Among these, the gamma band emerged as the most effective predictor, exhibiting the highest classification performance. This finding aligns with gamma’s known role in supporting high-level cognitive functions such as attention, memory encoding, and sensory integration. In contrast, theta, alpha, and beta bands demonstrated relatively lower discriminative power.
As shown in Table 4, the EAC model achieved an average classification accuracy of 87% for both high and low attention states. For high attention states, the model demonstrated a precision of 91%, recall of 87%, and an F1 score of 88%. For low attention states, the precision was slightly lower at 83%, recall at 85%, and an F1 score of 84%.
These results indicate that the EAC model is proficient in distinguishing between high and low attention states, which is crucial for dynamically adjusting quiz difficulty in real-time to match the learner’s cognitive state [10]. This adaptive approach aims to optimize the learning experience by maintaining an appropriate level of challenge, thus enhancing engagement and potentially improving learning outcomes. Figure 13 presents three boxplots illustrating the effects of real-time adjustments by the adaptive learning system on quiz performance scores, response times, and EEG-derived attention levels. The data show that participants achieved significantly higher quiz scores and shorter response times during periods of high attention, as detected and managed by the system. EEG-derived attention levels also indicated higher values in high attention conditions. Statistical analyses confirmed significant differences between low and high attention states for all metrics (p < 0.01), demonstrating that the system’s real-time adjustments effectively improved learning efficiency and engagement by optimizing task difficulty based on continuous attention monitoring.
As shown in Table 5, the results indicate a significant improvement in the percentage of correct answers after implementing the EAC model for quiz difficulty adjustment. The average correct answers increased from 71% to 89%, and the standard deviation decreased from 15% to 10%, indicating more consistent performance across participants.

4. Discussion

This study presents a comprehensive investigation into attentional dynamics during second-language learning using an EEG-based neuroadaptive system. By emphasizing gamma-band EEG features and integrating graph-theoretic metrics, the proposed framework offers both theoretical and practical advances over traditional and prior EEG-based learning systems. Gamma activity is known to reflect attentional binding and memory integration, which likely explains its superior performance. Unlike previous neurofeedback models, which primarily focused on alpha-band oscillations, our system targets gamma-band activity, which is more directly associated with active cognitive processing and sustained attention. This frequency-specific tuning allowed for finer discrimination between HA and LA states, enabling more precise real-time instructional adjustments. Classification was achieved through a hybrid approach combining wavelet-based feature extraction, PCA for dimensionality reduction, and an interpretable k-NN classifier.
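A minimal sketch of such a band-feature → PCA → k-NN pipeline follows. Simple band-pass filtering stands in for the paper's wavelet packet decomposition, and the signals, labels, and sampling rate are entirely synthetic assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

FS = 250  # assumed sampling rate (Hz)

def band_power(sig, lo, hi, fs=FS):
    """Log band power via band-pass filtering (proxy for wavelet features)."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.log(np.mean(filtfilt(b, a, sig) ** 2))

def features(sig):
    # Theta, Alpha, Beta, Gamma band powers, following the band edges in Figure 7
    return [band_power(sig, lo, hi) for lo, hi in [(3, 7), (7, 15), (15, 30), (30, 50)]]

# Synthetic trials: HA trials (label 1) carry a stronger 40 Hz gamma component
rng = np.random.default_rng(1)
t = np.arange(FS * 2) / FS
X, y = [], []
for label in (0, 1):                       # 0 = LA, 1 = HA
    for _ in range(40):
        sig = (1.0 + label) * np.sin(2 * np.pi * 40 * t) + rng.normal(0, 1, t.size)
        X.append(features(sig)); y.append(label)

clf = make_pipeline(StandardScaler(), PCA(n_components=3), KNeighborsClassifier(5))
clf.fit(X[::2], y[::2])                    # even trials for training, odd held out
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```

Because the synthetic classes differ only in gamma power, the pipeline separates them almost perfectly; real EEG features are far noisier, which is where the PCA step earns its keep.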
A key innovation of this work lies in the integration of graph-theoretic features derived from thresholded functional connectivity matrices, obtained via the Hilbert transform and Phase Lag Index (PLI). Metrics such as clustering coefficient and path length [17] captured the small-world properties of neural networks during varying attentional states. This network-informed approach enhanced interpretability and provided a neurophysiological basis for adaptive feedback mechanisms. Our system also improves upon black-box deep learning models by offering transparency and physiological plausibility. Unlike models that optimize only for accuracy, our framework is grounded in cognitive theory, specifically aligning instructional changes with inferred attentional states in a closed-loop architecture. For instance, flashcard difficulty was reduced in LA states to avoid disengagement and increased during HA to maximize learning gains—realizing a dynamic adaptation strategy rooted in cognitive load theory. Compared to recent neuroadaptive proposals [17], this work is distinctive in its simultaneous application of gamma-band analysis, functional brain network features, and real-time feedback deployment. Moreover, preprocessing techniques such as artifact removal and wavelet denoising preserved signal fidelity and minimized the influence of ocular or muscle artifacts [26]. PLI-based connectivity analyses validated the hypothesis that high attention is associated with more efficient brain network topologies [14], providing both a diagnostic tool and a feedback mechanism.
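The PLI and binary-threshold network computation can be sketched on synthetic data as follows; the channel count, threshold choice, and metric set are placeholders rather than the study's parameters:

```python
import numpy as np
from scipy.signal import hilbert

def pli(x, y):
    """Phase Lag Index between two signals via the Hilbert transform."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return abs(np.mean(np.sign(np.sin(phase_diff))))

rng = np.random.default_rng(2)
n_ch, n_samp = 8, 1024
data = rng.normal(size=(n_ch, n_samp))  # stand-in for band-filtered EEG channels

# PLI connectivity matrix over all channel pairs
W = np.zeros((n_ch, n_ch))
for i in range(n_ch):
    for j in range(i + 1, n_ch):
        W[i, j] = W[j, i] = pli(data[i], data[j])

# Binary thresholding at the median off-diagonal PLI (threshold is illustrative)
thr = np.median(W[np.triu_indices(n_ch, 1)])
A = (W > thr).astype(int)

degree = A.sum(axis=1)
density = A.sum() / (n_ch * (n_ch - 1))
triangles = np.diag(A @ A @ A) / 2             # closed triangles through each node
possible = degree * (degree - 1) / 2
clustering = np.mean(np.divide(triangles, possible,
                               out=np.zeros(n_ch), where=possible > 0))
print(f"density = {density:.2f}, mean clustering = {clustering:.2f}")
```

PLI counts only consistent non-zero phase lags, so it is insensitive to zero-lag volume conduction, which is why it suits scalp EEG connectivity.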
Taken together, these contributions mark a significant advancement in adaptive educational technologies. Our system bridges the gap between cognitive neuroscience and instructional design, offering a transparent, data-driven approach to personalized learning.

Limitations and Future Work

This study’s generalizability is constrained by its participant pool of 19 right-handed, beginner-level Chinese learners. Future research should include participants from more diverse linguistic, cultural, and cognitive backgrounds to validate broader applicability.
While gamma-band features demonstrated strong classification power, multiband models and cross-frequency coupling (e.g., gamma-theta interactions) could enhance accuracy and robustness. These relationships may provide a richer representation of attention and cognitive load. Furthermore, exploring advanced architectures like convolutional neural networks (CNNs) or graph neural networks (GNNs) may improve the model’s ability to capture spatiotemporal patterns.
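As one illustration of the suggested cross-frequency analysis, theta-gamma phase-amplitude coupling can be quantified with a mean-vector-length measure. The signal below is synthetic and coupled by construction; the band edges and sampling rate are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250
t = np.arange(fs * 4) / fs
theta = 2 * np.pi * 5 * t
# Gamma bursts whose amplitude rides the theta phase (coupling by construction)
sig = np.sin(theta) + (1 + np.sin(theta)) * 0.3 * np.sin(2 * np.pi * 40 * t)

def bandpass(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(sig, 3, 7)))   # theta phase
amp = np.abs(hilbert(bandpass(sig, 30, 50)))     # gamma amplitude envelope
mvl = np.abs(np.mean(amp * np.exp(1j * phase)))  # mean vector length
print(f"theta-gamma coupling strength: {mvl:.3f}")
```

A non-zero mean vector length indicates that gamma amplitude is systematically modulated by theta phase; uncoupled signals yield values near zero.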
The current study assessed short-term effects within a single experimental session. Longitudinal studies are needed to investigate long-term benefits such as retention, transfer of learning, and user adaptation over time. Additionally, usability testing, learner trust, and motivational aspects should be evaluated to guide deployment in real-world educational platforms.
Finally, system scalability and compatibility with wearable EEG headsets should be explored for broader classroom integration and mobile learning applications.

5. Conclusions

This research demonstrates the effectiveness of a real-time EEG-based adaptive learning system that dynamically modulates instructional difficulty based on attention classification. Leveraging machine learning and graph-theoretic EEG analysis, the system reliably distinguished between HA and LA states and adjusted content difficulty accordingly.
The k-NN classifier, in conjunction with gamma-band EEG features and functional connectivity metrics, provided strong classification accuracy. This allowed the system to personalize learning in real time, increasing engagement and optimizing cognitive load.
In summary, this work contributes a scalable, interpretable, and neurophysiologically grounded solution for adaptive learning. Its successful implementation for Mandarin vocabulary instruction paves the way for future integration of EEG-based systems in broader educational contexts.

Author Contributions

M.K.S. conceived the study, designed the methodology, conducted the investigation, curated the data, performed formal analysis, prepared the graphics, and wrote the original draft. H.W. supervised the project, acquired funding, provided resources, validated the results, and reviewed and edited the manuscript. A.A.S. developed the software, created visualizations, assisted with graphics, and contributed to validation. S.Q. reviewed the manuscript. M.A.G. contributed to data curation. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China, grant number 2021YFF0306400502.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Northeastern University (Protocol Code: NEU-EC-2024B033S; Approval Date: 30 May 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.

Acknowledgments

We thank the participants and volunteers involved in EEG data collection, as well as the lab team for technical support during the experiments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, D.; Huang, H.; Bao, X.; Pan, J.; Li, Y. An EEG-based attention recognition method: Fusion of time domain, frequency domain, and non-linear dynamics features. Front. Neurosci. 2023, 17, 1194554.
  2. Debettencourt, M.T.; Cohen, J.D.; Lee, R.F.; Norman, K.A.; Turk-Browne, N.B. Closed-loop training of attention with real-time brain imaging. Nat. Neurosci. 2015, 18, 470–475.
  3. Hagmann, P.; Cammoun, L.; Gigandet, X.; Meuli, R.; Honey, C.J.; Wedeen, V.J.; Sporns, O. Mapping the structural core of human cerebral cortex. PLoS Biol. 2008, 6, e159.
  4. Walter, C.; Rosenstiel, W.; Bogdan, M.; Gerjets, P.; Spüler, M. Online EEG-based workload adaptation of an arithmetic learning environment. Front. Hum. Neurosci. 2017, 11, 286.
  5. Mills, C.; Fridman, I.; Soussou, W.; Waghray, D.; Olney, A.M.; D’Mello, S.K. Put your thinking cap on: Detecting cognitive load using EEG during learning. In Proceedings of the 7th International Learning Analytics & Knowledge Conference, Vancouver, BC, Canada, 13–17 March 2017; pp. 80–89.
  6. Apicella, A.; Arpaia, P.; Frosolone, M.; Improta, G.; Moccaldi, N.; Pollastro, A. EEG-based measurement system for student engagement detection in Learning 4.0. Int. J. Adv. Comput. Sci. Appl. 2021, 12, 5857.
  7. Lingelbach, K.; Gado, S.; Bauer, W. Neuro-adaptive tutoring systems. Competence Dev. Learn. Assist. Syst. Data-Driven Future 2021, 243–260.
  8. Rehman, A.U.; Shi, X.; Ullah, F.; Wang, Z.; Ma, C. Measuring student attention based on EEG brain signals using deep reinforcement learning. Expert Syst. Appl. 2025, 269, 126426.
  9. Wang, H.; Chang, W.; Zhang, C. Functional brain network and multichannel analysis for the P300-based brain computer interface system of lying detection. Expert Syst. Appl. 2016, 53, 117–128.
  10. Zhang, J.; Wang, T.; Hu, D.; Cao, J. EEG channel and spectrum weighting based attention state evaluation. In Proceedings of the 2023 China Automation Congress (CAC), Chongqing, China, 17–19 November 2023; pp. 8597–8601.
  11. Hua, C.; Wang, H.; Wang, H.; Lu, S.; Liu, C.; Khalid, S.M. A novel method of building functional brain network using deep learning algorithm with application in proficiency detection. Int. J. Neural Syst. 2019, 29, 1850015.
  12. Gogna, Y.; Tiwari, S.; Singla, R. Towards a versatile mental workload modeling using neurometric indices. Biomed. Eng./Biomed. Technik 2023, 68, 297–316.
  13. Alhagry, S.; Fahmy, A.A.; El-Khoribi, R.A. Emotion recognition based on EEG using LSTM recurrent neural network. Int. J. Adv. Comput. Sci. Appl. 2017, 8, 355–361.
  14. Watts, D.J.; Strogatz, S.H. Collective dynamics of ‘small-world’ networks. Nature 1998, 393, 440–442.
  15. Renton, A.I.; Painter, D.R.; Mattingley, J.B. Optimising the classification of feature-based attention in frequency-tagged electroencephalography data. Sci. Data 2022, 9, 296.
  16. Gamboa, P.; Varandas, R.; Rodrigues, J.; Cepeda, C.; Quaresma, C.; Gamboa, H. Attention classification based on biosignals during standard cognitive tasks for occupational domains. Computers 2022, 11, 49.
  17. Verma, D.; Bhalla, S.; Santosh, S.V.S.; Yadav, S.; Parnami, A.; Shukla, J. AttentioNet: Monitoring student attention type in learning with EEG-based measurement system. In Proceedings of the 2023 11th International Conference on Affective Computing and Intelligent Interaction (ACII), Cambridge, MA, USA, 10–13 September 2023; pp. 1–8.
  18. Tuckute, G.; Hansen, S.T.; Kjaer, T.W.; Hansen, L.K. Real-time decoding of attentional states using closed-loop EEG neurofeedback. Neural Comput. 2021, 33, 967–1004.
  19. Syed, M.K.; Wang, H. EEG analysis of the brain language processing oriented to intelligent teaching robot. In Proceedings of the 2018 IEEE International Conference on Intelligence and Safety for Robotics (ISR), Shenyang, China, 24–27 August 2018; pp. 278–281.
  20. EEGLAB: An Open Source Toolbox for Analysis of Single-Trial EEG Dynamics Including Independent Component Analysis. Available online: https://sccn.ucsd.edu/eeglab/ (accessed on 16 December 2024).
  21. Delorme, A.; Sejnowski, T.; Makeig, S. Enhanced detection of artifacts in EEG data using higher-order statistics and independent component analysis. Neuroimage 2007, 34, 1443–1449.
  22. Kim, H.; Luo, J.; Chu, S.; Cannard, C.; Hoffmann, S.; Miyakoshi, M. ICA’s bug: How ghost ICs emerge from effective rank deficiency caused by EEG electrode interpolation and incorrect re-referencing. Front. Signal Process. 2023, 3, 1064138.
  23. Niimura, Y.; Takemoto, J.; Kai, A.; Nakagawa, S. Attention-based CNN and relative phase feature modeling for improved imagined speech recognition. In Proceedings of the 2023 Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Taipei, Taiwan, 31 October–3 November 2023; pp. 8–14.
  24. Stam, C.J. Characterization of anatomical and functional connectivity in the brain: A complex networks perspective. Int. J. Psychophysiol. 2010, 77, 186–194.
  25. Sawicki, J.; Hartmann, L.; Bader, R.; Schöll, E. Modelling the perception of music in brain network dynamics. Front. Netw. Physiol. 2022, 2, 910920.
  26. Huang, D.; Wang, Y.; Fan, L.; Yu, Y.; Zhao, Z.; Zeng, P.; Wang, K.; Li, N.; Shen, H. Decoding subject-driven cognitive states from EEG signals for cognitive brain–computer interface. Brain Sci. 2024, 14, 498.
Figure 1. System Design of the EEG-Based Attention Classification (EAC) Model.
Figure 2. 32-channel EEG recording setup using a Quik-Cap system. (a) Electrode placement according to the international 10–20 system for comprehensive brain activity monitoring. (b) EEG signal acquisition during sequential experimental conditions: rest, learning, rest, testing, and rest phases.
Figure 3. Preprocessing steps, feature extraction, feature reduction, and feature classification pipeline used in this study.
Figure 4. EEG signal from channel P3 during high attention (HA) and low attention (LA) conditions, after denoising.
Figure 5. Wavelet decomposition shows frequency discrimination between attention levels.
Figure 6. This flowchart illustrates how the adaptive learning system operates, showing how the system monitors EEG signals in real-time to identify attention levels. It details the actions taken when low or high attention levels are detected, such as increasing repetition and reducing difficulty for low attention or increasing difficulty for high attention.
Figure 7. Wavelet packet decomposition of the EEG signal into Theta (3–7 Hz), Alpha (7–15 Hz), Beta (15–30 Hz), and Gamma (30–50 Hz) frequency bands.
Figure 8. Weighted network graph illustrating neural synchronization patterns across Theta, Alpha, Beta, and Gamma bands for high attention (HA) and low attention (LA) states.
Figure 9. (a) Averaged normalized ratio of L real (path length) to C real (clustering coefficient) across Theta, Alpha, Beta, and Gamma bands, plotted against thresholds (0.01–0.6). (b) F-Score values for network metric parameters in Theta, Alpha, Beta, and Gamma bands. Statistical significance was assessed using paired t-tests with Bonferroni corrections.
Figure 10. Binary matrices of the averaged non-linear synchronization matrices for high and low attention states across Theta, Alpha, Beta, and Gamma frequency bands. Individual thresholds are set for each band, and synchronization strength is shown on a two-color scale, where yellow indicates H = 0 and black indicates H = 1.
Figure 11. Bar graph depicting global efficiency (GE), small-worldness (SWN), and network density (ND) for high attention (HA) and low attention (LA) tasks across the Theta, Alpha, Beta, and Gamma frequency bands. Error bars emphasize significant differences (p < 0.05) between conditions.
Figure 12. Graph of binary matrices from Figure 10, showing a top-down view of 30 electrodes arranged by the 10–20 system. Red lines indicate channel pairs with higher synchronization in high attention (HA) states, highlighting enhanced functional connectivity.
Figure 13. Boxplots illustrating quiz performance scores, response times, and EEG-derived attention levels under low and high attention conditions, along with normalized values of neural connectivity metrics across conditions.
Table 1. Mean, standard deviation (Std. Dev.), p-value, Cohen’s d, and comparison between HA and LA conditions.

Variable | Condition | Mean | Std. Dev. | p-Value | Cohen’s d | Comparison
Theta | HA | 0.5212 | 0.2528 | 3.28 × 10⁻⁸ | 0.031 | HA > LA
Theta | LA | 0.5136 | 0.2356 | 2.38 × 10⁻²⁰ | |
Alpha | HA | 0.4462 | 0.2551 | 4.79 × 10⁻⁶⁷ | 0.024 | HA < LA
Alpha | LA | 0.4520 | 0.2354 | 1.03 × 10⁻²¹ | |
Beta | HA | 0.3656 | 0.2455 | 2.54 × 10⁻¹³ | 0.068 | HA < LA
Beta | LA | 0.3819 | 0.2368 | 1.64 × 10⁻¹¹ | |
Gamma | HA | 0.3620 | 0.2420 | 1.41 × 10⁻⁷⁴ | 0.098 | HA < LA
Gamma | LA | 0.3853 | 0.2349 | 1.00 × 10⁻²⁹ | |
Table 2. Comparison of HA and LA w.r.t. different network parameters.

Parameter | Theta (HA / LA) | Alpha (HA / LA) | Beta (HA / LA) | Gamma (HA / LA) | p-Value
CC | 0.7767 / 0.7337 | 0.8322 / 0.6636 | 0.8695 / 0.7899 | 0.8650 / 0.8270 | 0.06
PL | 1.3070 / 1.3643 | 1.1636 / 1.3135 | 1.1269 / 1.1773 | 1.1528 / 1.2123 | 0.04
C_real | 1.1184 / 1.0754 | 0.9995 / 1.1095 | 1.0148 / 1.0291 | 1.0379 / 0.9997 | 0.06
PL_real | 1.0005 / 1.0235 | 1.0004 / 1.0005 | 0.9996 / 1.0000 | 1.0004 / 1.0046 | 0.03
Degree | 15.53 / 14.73 | 20.87 / 15.20 | 22.47 / 20.27 | 21.33 / 19.13 | 0.02

Statistical significance (p < 0.05) was assessed using paired t-tests with Bonferroni correction applied for multiple comparisons.
Table 3. Comparison of performance metrics for NA + KNN and NA + PCA + KNN across EEG frequency bands. Asterisks (*) denote the higher value between the two methods.

Band | NA + KNN: Acc. / Prec. / Rec. / F1 | NA + PCA + KNN: Acc. / Prec. / Rec. / F1
Theta | 71.67 * / 72.89 * / 69.00 / 70.89 | 80.00 * / 78.00 / 81.25 * / 80.00 *
Alpha | 74.83 / 73.96 / 76.67 * / 75.29 * | 80.00 * / 80.00 * / 80.00 * / 80.00 *
Beta | 83.17 * / 85.41 * / 80.00 / 82.62 | 83.00 / 80.33 / 84.86 / 83.00
Gamma | 89.67 * / 92.20 * / 86.67 * / 89.35 * | 91.00 * / 89.00 * / 92.71 * / 91.00 *

* Statistical significance (p < 0.05) was assessed using paired t-tests with Bonferroni correction applied for multiple comparisons.
Table 4. Details of the EEG adaptive classification model.

Aspect | Details
Model Name | EEG Adaptive Classification (EAC) Model
Purpose | Real-time quiz difficulty adjustment based on attention levels
Performance Metrics | Classification accuracy, precision, recall, and F1-score

Attention State | Accuracy | Precision | Recall | F1-Score
High Attention (HA) | 87% | 91% | 87% | 88%
Low Attention (LA) | 87% | 83% | 85% | 84%
Table 5. Quiz performance before and after EAC adjustment.

Metric | Before EAC Adjustment | After EAC Adjustment
Average Correct Answers (%) | 71% | 89%
Standard Deviation (%) | 15% | 10%

