Article

Cross-Subject Cognitive State Assessment for Unmanned System Operators Based on Brain Functional Connectivity

1 Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing 400064, China
2 School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710072, China
* Author to whom correspondence should be addressed.
Drones 2025, 9(11), 808; https://doi.org/10.3390/drones9110808
Submission received: 5 October 2025 / Revised: 10 November 2025 / Accepted: 14 November 2025 / Published: 19 November 2025

Highlights

What are the main findings?
  • A novel cognitive state assessment method using brain functional connectivity matrices and a dual attention module-based convolutional neural network (DAM-CNN) achieves a high mean cross-subject classification accuracy of 98.76%.
  • The proposed DAM-CNN model demonstrates superior robustness with significantly lower accuracy variance (0.0113) across different subjects compared to existing state-of-the-art methods.
What is the implication of the main finding?
  • The proposed approach enables timely and accurate identification of operator cognitive decline, enhancing human–machine interaction reliability and flight safety in unmanned systems.
  • The method’s strong cross-subject performance reduces the need for individual calibration, facilitating more practical and adaptive real-world deployment.

Abstract

During the operation of Unmanned Aerial Vehicles (UAVs), the cognitive state of operators is prone to decline, posing a risk to task performance. However, many existing cognitive state assessment methods rely directly on raw electroencephalography (EEG) signals and exhibit limited robustness when applied across different individuals. To address this limitation and to leverage the spatial information and inter-electrode relationships effectively captured by brain functional connectivity networks, this paper proposes an assessment method based on these networks. Data from ten participants under three cognitive states were used to train and test various models on a per-subject basis, where each participant’s data was partitioned into separate training and testing sets. The results demonstrate that the proposed method achieves a mean recognition accuracy of 98.76% with a variance of 0.0113, representing an improvement of at least 7.01% in accuracy and a reduction of at least 0.0191 in variance compared to conventional approaches. This approach facilitates timely cognitive state identification, thereby enhancing the reliability of human–machine interaction in unmanned systems.

1. Introduction

In recent years, through the approach of manned–unmanned teaming, unmanned aerial vehicles (UAVs) have been playing increasingly important roles in various fields, such as emergency rescue, maritime enforcement, agricultural irrigation, environmental monitoring, traffic management, public safety, and healthcare [1]. This expanded reliance, however, imposes heightened demands on human operators. Prolonged operation and sustained attention can easily lead to operators’ mental fatigue, resulting in slower reaction times, spatial disorientation, impaired judgment, and reduced control precision. These factors increase the probability of operational errors and accidents, thereby posing a serious threat to overall system safety [2].
The operational risks are exacerbated in dynamic operational scenarios, such as sudden weather changes, signal interference, or equipment failure, which can compromise UAV stability and endurance. A recent study applying the Human Factors Analysis and Classification System (HFACS) to 77 UAV accidents found that up to 55% of the leading causes were operator decision errors [3]. These errors often occurred when operators, facing an unexpected hazard, took manual control or overrode automation due to uncertainty, sometimes exacerbating the situation instead of recovering from it. Additionally, Ghasri and Maghrebi analyzed 138 UAV incident records from Australia and found that operators’ loss of awareness is a critical factor [4]. Accordingly, Civil Aviation Safety Authority (CASA) regulations require operators to hold a remote pilot licence to fly drones heavier than 2 kg, which in turn indicates a proactive approach to human factors. Similar incidents were identified and analyzed in [5,6].
In one such case, the National Transportation Safety Board (NTSB) investigation found that spatial disorientation and operator stress may lead to incorrect operations, ultimately causing the accident. The NTSB concluded that the accident might have been avoidable if timely intervention had been implemented to help the pilot regain a normal cognitive state and operate correctly. Therefore, to mitigate such risks, it is essential for operators to maintain a high level of cognitive readiness, enabling them to anticipate hazards and sustain situational awareness. Consequently, the real-time monitoring and assessment of the operator’s cognitive state is crucial for facilitating timely interventions, enhancing decision-making, and ultimately ensuring mission efficacy and safety.
Cognitive state, defined as the internal state generated during an individual’s interaction with external objects [7], effectively reflects an operator’s control capacity. Therefore, real-time assessment of cognitive state holds significant practical importance, as it enables timely intervention to prevent serious flight safety incidents [8]. Current methodologies for cognitive state recognition largely follow a dichotomy, relying on either non-physiological or physiological signals. The former, which involves modalities such as text, voice, facial expressions, and gestures, constitutes the foundation of most existing studies [9,10,11]. Nevertheless, this approach is inherently limited in reliability, as it is susceptible to deliberate concealment, where individuals may voluntarily mask their facial cues or modulate their tone. In comparison, physiological signals offer superior reliability due to their involuntary nature, which prevents subjects from deliberately masking them [12].
Electroencephalography (EEG) has emerged as the preferred non-invasive modality for cognitive state assessment among all available physiological signals, recognized for its reliability and effectiveness. This recognition stems from its ability to directly and effectively capture the brain’s electrical activity, a property that has led to its widespread application in the field of cognitive state research [13,14,15]. With the advancement of sensor networks [16,17], intelligent sensing systems [18,19,20], and energy-efficient biomedical systems [21,22], EEG-based methods have gained feasibility across varied application domains. There is a growing recognition that the performance of evaluation models is highly dependent on the quality of the input EEG signal. This has spurred the parallel development of advanced EEG pre-processing and artifact-removal pipelines tailored for portable, low-intrusiveness systems [23,24,25,26,27]. Such robust pre-processing is paramount for the practical deployment of cognitive monitoring in operational environments like UAV control. Consequently, a significant body of work now utilizes these quality-assured signals as the foundation for reliable cognitive state evaluation.
Yang et al. [28] introduced a real-time emotion detection algorithm for UAV operators, leveraging 2D feature maps and CNN analysis. The approach transforms 1D EEG signals into 2D representations via Differential Entropy (DE) extraction, 2D mapping, and sparse computation; the CNN then exploits its capability for automatic deep feature learning to decode the embedded emotional information, accomplishing successful three-state classification. Shi et al. [29] proposed a multi-source domain adaptation method for EEG-based emotion recognition. It first transfers multiple source domains to the target domain individually to mitigate interference from diverse EEG sources; it then employs a feature processing mechanism to progressively disentangle domain-specific interference factors such as physiological differences and emotional fluctuations; finally, it introduces a domain-specific classifier based on a Long Short-Term Memory network to capture temporal dependencies and enhance the model’s capacity for complex feature expression.
Jiang et al. [30] adopted an attention-based multi-scale feature fusion network (AM-MSFFN) and achieved state-of-the-art performance in EEG-based emotion recognition. Specifically, the classification accuracy for both arousal and valence dimensions exceeded 99% on the DEAP dataset. By integrating multi-scale feature extraction and the attention mechanism, this model effectively addressed subject-specific variations and noise interference, thereby resolving the generalization issue. Tang et al. [31] proposed a graph representation learning framework, which can automatically construct and optimize the graph structure of EEG signals for classification. This model demonstrated outstanding performance in both subject-dependent and subject-independent experiments, exhibiting strong generalization ability. Xie et al. [32] converted EEG signals into time-frequency images and applied a convolutional neural network, achieving a high classification accuracy of 88.83% in the automatic sleep staging task.
Although EEG-based cognitive state assessment has proven to be effective, EEG signals exhibit limitations, such as non-stationarity, instability, and nonlinearity. Coupled with differences in subjects’ head shapes, brain patterns, and habits, EEG signals become highly complex, significantly impacting the generalization capability and accuracy of assessment methods [14]. In contrast, brain functional connectivity quantifies the strength and directionality of functional connections, reflecting associations between different brain regions, thereby providing deeper insights into information flow and integration within the brain, offering better accuracy [33].
However, cross-subject generalization is crucial for addressing the aforementioned issue of individual differences. Due to the significant inter-subject variability inherent in EEG signals, models trained on data from one subject often fail to generalize effectively to others, substantially limiting the practical deployment of cognitive state assessment models. To mitigate this, researchers have proposed various cross-subject approaches. For instance, Xiong et al. [34] adopted a multi-source domain adaptation method to reduce interference from diverse EEG sources, while Zhang et al. [35] combined source microstate analysis with style transfer mapping, achieving performance in cross-subject emotion recognition that surpassed traditional features such as differential entropy. Nevertheless, existing methods still exhibit limitations: most approaches relying on raw EEG signals or conventional features may not adequately capture the fundamental characteristic of inter-regional neural coordination within the brain.
Therefore, this paper addresses the aforementioned concerns and establishes a cognitive state evaluation model for UAV operators, namely the dual attention module-based convolutional neural network (DAM-CNN). Specifically, this study adopts brain functional connectivity for evaluation rather than standard EEG signals, providing richer spatial, spectral, and inter-regional brain activation information. In addition, this study proposes a dual-attention mechanism that integrates a Position Attention Module (PAM) and a Channel Attention Module (CAM) to capture contextual dependencies. The model was then empirically validated across different subjects to demonstrate its generalization ability. Experimental verification shows that the recognition accuracy of this method is significantly improved, demonstrating its practical value for promoting dynamic human–UAV collaboration and ensuring smooth interaction. The detailed cognitive state assessment workflow is presented in Figure 1.

2. Model Establishment

2.1. Cognitive State Classification Process

During EEG signal acquisition, usable brain functional connectivity matrices cannot be obtained directly. It is necessary to filter noise from the raw signals, select the required channels, and finally compute the correlation coefficient matrix from the processed signals. This matrix is then used as the dataset input to a CNN model for training, resulting in a classifiable model.
Electrodes were positioned based on the international 10–20 system using a 20-channel Ag/AgCl active EEG system. The electrodes used were: Fp1, Fpz, Fp2, F7, F3, Fz, F4, F8, T7, C3, Cz, C4, T8, P7, P3, Pz, P4, P8, O1, and O2. Throughout the acquisition step, the electrode impedance was maintained below 5 kΩ, as per the specifications of the Winfull Instruments Co., Ltd. (Shanghai, China) system. This threshold is within the established standard for high-quality EEG recordings, ensuring a strong signal-to-noise ratio and the overall reliability of the acquired neural data. The captured EEG signals record voltage readings from these 20 electrodes across N sampling points, ultimately forming a 20 × N matrix.
It is important to note that during the EEG signal acquisition process, data corruption may occur due to poor electrode contact, which can manifest as significant baseline drift or various artifacts. Additionally, electromyographic effects originating from eye blinks or movements can introduce substantial signal noise. Therefore, preprocessing is an essential step to enhance the overall data quality. For this study, we utilized the EEGLab toolbox (Swartz Center for Computational Neuroscience, San Diego, CA, USA) for the preprocessing of the EEG data, which included procedures for electrode localization, data filtering, interpolation of faulty channels, baseline correction, rejection of artifact-contaminated segments, and finally, artifact removal via Independent Component Analysis (ICA). This study used ICLabel (Swartz Center for Computational Neuroscience, San Diego, CA, USA) to pre-classify all components. Then, these automated labels were verified through manual review. Any discrepancies between the automated classification and the manual inspection were resolved in favor of the manual assessment. This strategy ensures both the consistency of a standardized pipeline and the accuracy of expert oversight. The detailed processing workflow is depicted in Figure 2.
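For illustration, the following is a minimal sketch of a comparable preprocessing pipeline in MNE-Python; the study itself used the MATLAB-based EEGLab toolbox, and the file name, filter band, and component selections below are placeholder assumptions rather than the paper’s exact settings.

```python
# Minimal sketch of an equivalent preprocessing pipeline in MNE-Python.
# The paper's pipeline used EEGLab/ICLabel in MATLAB; everything concrete
# here (file name, 1-40 Hz band, component counts) is an assumption.
import mne

raw = mne.io.read_raw_eeglab("subject01.set", preload=True)  # hypothetical file
raw.set_montage("standard_1020")          # electrode localization (10-20 system)
raw.filter(l_freq=1.0, h_freq=40.0)       # band-pass filtering (assumed band)

# ICA-based artifact removal; component labels would be verified manually,
# mirroring the ICLabel-plus-manual-review strategy described above.
ica = mne.preprocessing.ICA(n_components=15, random_state=42)
ica.fit(raw)
ica.exclude = [0, 1]                      # e.g., ocular components flagged on review
clean = ica.apply(raw.copy())             # artifact-corrected recording
```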
After EEG signal preprocessing, brain functional connectivity was studied by computing the correlation between the 20 EEG electrodes, providing data support for subsequent cognitive state assessment modeling. This paper employs the Pearson correlation coefficient for functional connectivity construction, as it is a well-established and effective metric for assessing linear synchrony between EEG signals from different brain regions, as demonstrated in numerous cognitive state evaluation studies [36,37,38,39]. The Pearson method is particularly suitable for our objectives, as it provides a robust measure of the quantitative dependencies in the time-domain signals, which are indicative of coordinated neural activity, capturing a broad, integrated view of brain network coordination relevant to the sustained cognitive states involved in UAV operation. The Pearson correlation coefficient between the i-th and j-th electrodes is calculated as follows:
$$\rho_{i,j} = \frac{\sum_{t=1}^{T} z_i(t)\, z_j(t)}{\sqrt{\sum_{t=1}^{T} z_i(t)^2}\, \sqrt{\sum_{t=1}^{T} z_j(t)^2}},$$
where $z_i(t)$ denotes the EEG time-domain signal of the i-th electrode at the t-th sampling point, and $T$ denotes the total number of sampling points. The correlation coefficients between all 20 electrodes form a brain functional connectivity matrix, serving as the input for the cognitive state classification model.
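As a concrete sketch of this step, the snippet below builds one 20 × 20 connectivity matrix with NumPy; the random input is a stand-in for a real preprocessed segment, and the min-max normalization to [0, 1] anticipates the CNN input preparation described in Section 2.2.

```python
# Sketch: one 20-electrode window yields one 20 x 20 connectivity matrix.
import numpy as np

def connectivity_matrix(window: np.ndarray) -> np.ndarray:
    """Pairwise Pearson correlations between electrode signals, shape (20, 20)."""
    return np.corrcoef(window)

def normalize(m: np.ndarray) -> np.ndarray:
    """Min-max scale a matrix to [0, 1]."""
    return (m - m.min()) / (m.max() - m.min())

window = np.random.randn(20, 1000)               # stand-in (electrodes x samples)
m_norm = normalize(connectivity_matrix(window))  # CNN-ready 20 x 20 input
```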

2.2. CNN Model Establishment

Common models for state classification using EEG signals include EEGNet and ShallowConvNet [40,41]. The EEGNet architecture is based on depth-wise separable convolution, sequentially comprising a temporal convolutional layer, a spatial convolutional layer (depth-wise convolution), and a feature mixing layer (pointwise convolution), supplemented with normalization constraints and dropout regularization. Although EEGNet has a compact structure, its performance remains notably sensitive to several key hyperparameters, so a degree of hyperparameter tuning is still required to achieve optimal performance on new datasets. The ShallowConvNet architecture consists of two convolutional layers (temporal convolution followed by spatial convolution), a square nonlinear activation function (f(x) = x²), an average pooling layer, and a logarithmic nonlinear function. Because ShallowConvNet was specifically designed for oscillatory signal classification, extracting features related to log band power, it carries certain inherent limitations.
Common analytical methods for brain functional connectivity matrices include Support Vector Machine (SVM) [42,43,44], Random Forests (RF) [45], and Convolutional Neural Networks (CNN) [46]. While SVM is often suitable for small-sample learning scenarios, it typically involves complex parameter configuration and exhibits poor performance with large datasets, including prolonged training times and insufficient robustness. RF requires constructing numerous random trees, consuming substantial computational resources and resulting in extended prediction durations during the inference phase. In contrast, CNNs feature relatively fewer parameters, demonstrate faster training speeds, and maintain strong generalization capabilities, while also possessing certain robustness to input matrix transformations due to their inherent translation and rotation invariance properties. Therefore, this study selects a CNN-based model for classifying brain functional connectivity matrices, with SVM, RF, EEGNet and ShallowConvNet all serving as baseline models for comparison purposes.
Using the entire signal over a period as input is overly complex and hinders effective feature extraction. This paper instead uses the brain functional connectivity matrices between the 20 electrodes within a time window as feature inputs, which reduces the data volume while better reflecting activity relationships between brain regions.
Let the original brain functional connectivity matrix be:
$$M \in \mathbb{R}^{N \times N},$$
where $N$ is the number of electrodes ($N = 20$ in this paper). After normalization, we obtain the normalized matrix $M_{\mathrm{norm}} \in [0, 1]^{N \times N}$. To meet CNN input requirements, each matrix is reshaped into a tensor with an explicit channel dimension, so that a batch of such tensors forms the 4D input (batch size, channels, height, width):
$$X_i = \mathrm{reshape}\left(M_{\mathrm{norm}}^{(i)}\right) \in \mathbb{R}^{1 \times 20 \times 20}.$$
The final dataset contains $N_s$ samples (written $N_s$ to avoid overlap with the electrode count $N$):
$$\left\{\left(X_i, \mathrm{label}_i\right)\right\}_{i=1}^{N_s}.$$
We first input these samples into the CNN’s convolutional feature extraction layers, consisting of two convolution–pooling stages:
$$Z = \mathrm{MaxPool}_2\left(\mathrm{ReLU}\left(W_{\mathrm{conv2}} * \mathrm{MaxPool}_2\left(\mathrm{ReLU}\left(W_{\mathrm{conv1}} * X_{\mathrm{batch}} + b_{\mathrm{conv1}}\right)\right) + b_{\mathrm{conv2}}\right)\right),$$
where $W_{\mathrm{conv1}} \in \mathbb{R}^{32 \times 1 \times 3 \times 3}$ and $W_{\mathrm{conv2}} \in \mathbb{R}^{64 \times 32 \times 3 \times 3}$ are the convolutional kernel weights for the first and second layers, respectively. Both layers use 3 × 3 kernels (with zero-padding of 1, which preserves the spatial dimensions) to model functional interactions between neighboring brain regions, and $*$ denotes 2D convolution. $b_{\mathrm{conv1}} \in \mathbb{R}^{32}$ and $b_{\mathrm{conv2}} \in \mathbb{R}^{64}$ are the biases of the two convolutional operations. Both pooling layers employ max pooling with a kernel size of 2. The final output is $Z \in \mathbb{R}^{B \times 64 \times 5 \times 5}$, where $B$ is the batch size.
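A minimal PyTorch sketch of this feature extractor is given below; it matches the stated kernel shapes and the 64 × 5 × 5 output (the zero-padding of 1 noted above is what makes the dimensions work out), and it is an illustration rather than the authors’ released code.

```python
# Sketch of the two-stage convolutional feature extractor described above.
import torch
import torch.nn as nn

feature_extractor = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),   # W_conv1: 32 x 1 x 3 x 3
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 20 x 20 -> 10 x 10
    nn.Conv2d(32, 64, kernel_size=3, padding=1),  # W_conv2: 64 x 32 x 3 x 3
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 10 x 10 -> 5 x 5
)

x = torch.randn(8, 1, 20, 20)      # a batch of normalized connectivity matrices
z = feature_extractor(x)
assert z.shape == (8, 64, 5, 5)    # Z in R^{B x 64 x 5 x 5}
```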
After obtaining the feature output Z, classification is performed through fully connected layers. In the methodology presented in this paper, the fully connected component first flattens the input tensor into a one-dimensional vector and then employs two linear transformation layers to achieve progressive dimensionality reduction and final classification. However, a standard CNN architecture alone yields suboptimal classification performance. This limitation arises because meaningful information is not uniformly distributed across the functional connectivity matrix: it tends to be concentrated in specific regions, while numerous irrelevant or redundant connections degrade the model’s effectiveness and generalization capability [47]. To resolve this, we introduce an attention mechanism and construct a CNN framework incorporating a Dual Attention Module, yielding the DAM-CNN architecture. This integration allows the model to dynamically and selectively concentrate on the most informative neurophysiological features while suppressing non-essential distractions, thereby enhancing its representational power. The complete architecture of the proposed model is presented in Figure 3.

2.3. Dual Attention Module (DAM)

To model long-range functional connectivity between brain regions, a self-attention mechanism is introduced. Self-attention is an internal attention mechanism that associates different positions within a single sequence, encoding sequence data based on importance scores [48]. It is popular for improving long-range dependency modeling [49]. The attention function maps a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, with weights assigned based on the compatibility function between the query and the corresponding key.
The DAM with a self-attention mechanism combines a Position Attention Module (PAM) and a Channel Attention Module (CAM) to enhance feature representation in segmentation tasks. PAM learns spatial interdependencies of features, while CAM models channel interdependencies. Together, they capture contextual dependencies over local features, improving segmentation results.
Taking the spatial attention module (upper part of Figure 4) as an example, we first apply a convolutional layer to obtain dimension-reduced features. These features are then input to the Position Attention Module, generating new features containing long-range spatial context through three steps: First, a spatial attention matrix is generated, modeling spatial relationships between any two electrodes. Next, matrix multiplication is performed between this attention matrix and the original features. Finally, an element-wise summation is performed between the resulting matrix and the original features to obtain the final representation reflecting long-range context. The output for position attention features is given by:
$$E_p = \alpha \cdot \mathrm{reshape}\left(D \cdot \mathrm{softmax}\left(B^{T} C\right)^{T}\right) + A,$$
where $A$ is the input feature map, $B = \omega_B A$, $C = \omega_C A$, and $D = \omega_D A$ are embeddings of $A$ obtained with learnable weights, and $\alpha$ is a learnable scale parameter.
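A sketch of PAM following this formulation (and the DANet design it mirrors) is given below; the channel-reduction factor of 8 in the query/key embeddings is an assumption borrowed from DANet, not a parameter stated in the paper.

```python
# Sketch of the Position Attention Module: E_p = alpha * reshape(D . S^T) + A,
# with S = softmax(B^T C) over spatial positions.
import torch
import torch.nn as nn

class PositionAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)  # omega_B
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)    # omega_C
        self.value = nn.Conv2d(channels, channels, kernel_size=1)       # omega_D
        self.alpha = nn.Parameter(torch.zeros(1))                       # learnable scale

    def forward(self, a: torch.Tensor) -> torch.Tensor:
        n, c, h, w = a.shape
        b = self.query(a).view(n, -1, h * w)     # B: N x C' x HW
        c_emb = self.key(a).view(n, -1, h * w)   # C: N x C' x HW
        d = self.value(a).view(n, -1, h * w)     # D: N x C  x HW
        attn = torch.softmax(b.transpose(1, 2) @ c_emb, dim=-1)  # softmax(B^T C)
        e = (d @ attn.transpose(1, 2)).view(n, c, h, w)          # reshape(D . S^T)
        return self.alpha * e + a                                # E_p

pam = PositionAttention(64)
out = pam(torch.randn(8, 64, 5, 5))   # output shape matches the input
```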
Simultaneously, long-range contextual information along the channel dimension is captured by the Channel Attention Module. The process of capturing channel relationships is similar to PAM, except the channel attention matrix is computed in the channel dimension. The output for channel attention features is given by:
$$E_c = \beta \cdot \mathrm{reshape}\left(\mathrm{softmax}\left(A \cdot A^{T}\right)^{T} \cdot A\right) + A,$$
where $A$ is the input and $\beta$ is a learnable weight initialized to 0.
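A companion sketch of CAM is shown below; as in the formulation above, the channel attention matrix is computed directly from the input $A$, with no convolutional embeddings.

```python
# Sketch of the Channel Attention Module: E_c = beta * reshape(X^T . A) + A,
# with X = softmax(A A^T) computed between channels.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.beta = nn.Parameter(torch.zeros(1))  # weight learned starting from 0

    def forward(self, a: torch.Tensor) -> torch.Tensor:
        n, c, h, w = a.shape
        a_flat = a.view(n, c, -1)                                      # A: N x C x HW
        attn = torch.softmax(a_flat @ a_flat.transpose(1, 2), dim=-1)  # softmax(A A^T)
        e = (attn.transpose(1, 2) @ a_flat).view(n, c, h, w)           # reshape(X^T . A)
        return self.beta * e + a                                       # E_c

cam = ChannelAttention()
out = cam(torch.randn(8, 64, 5, 5))   # output shape matches the input
```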
Finally, the outputs of the two attention modules are aggregated to obtain contextually enhanced feature representations.

3. Experiment and Analysis

3.1. Subjects

With ethics approval obtained from the Medical and Experimental Animal Ethics Committee of Northwestern Polytechnical University (Project ID: 20250201), this study recruited 10 subjects aged between 22 and 24. Written informed consent was obtained from all participants. Participant selection followed a standardized and rigorous protocol requiring that the following criteria be met:
(1) All participants have normal hearing.
(2) All participants have normal or corrected-to-normal vision and are free from color vision deficiencies.
(3) All participants had adequate sleep before the experiments.
(4) All participants were asked to avoid strenuous exercise before the experiments.
(5) All participants are in good health, with no history of mental, intellectual, or neurological illness and no physical disabilities.
(6) All participants are right-handed.
(7) All participants are computer-literate.
(8) All participants are highly educated.
(9) All participants are capable of independently completing the experimental tasks.
Notably, the participant pool was defined by the target application, namely the operation of a UAV swarm, which is typically performed by personnel with similar training and experience levels, leading to a degree of inherent homogeneity in the user population. Hence, to ensure participant proficiency with the experimental task, all individuals completed a structured training course prior to data collection. This consisted of two hours of supervised practice using the UAV swarm simulation, continuing until each participant achieved a predefined performance benchmark by completing a human–UAV teaming collaboration task. Although participants were not professionally certified operators, this targeted training protocol ensured they possessed the operational competence needed to perform the experimental tasks reliably. Consequently, the cognitive load responses measured during the study reflect valid reactions to the task demands. The specific details and scores of the participants are shown in Table 1.
The overall experiment lasted 12 min for each participant, and experimental sessions were randomly scheduled across morning, midday, and evening time periods.

3.2. Experimental Task and Procedure

The experiment required participants to complete two tasks: a primary task involving the operation of 50 UAVs to complete a cruising mission, and a secondary task consisting of mental arithmetic exercises. This arithmetic task is a well-established, validated, and standardized method in the neuroergonomics literature for inducing different levels of cognitive load [50,51,52]. Additionally, it ensures reproducibility for the incremental manipulation of working memory and attentional demand, which is essential for isolating specific neural correlates of cognitive states.
The primary task simulated the operational demands on UAV operators, requiring participants to complete the mission within 2 min. This 2 min duration was chosen to balance several critical factors, as it was designed to mitigate participant fatigue that could confound EEG signals across multiple task repetitions, while also ensuring a sufficient time window for stable data acquisition to capture sustained cognitive engagement. Through extensive testing, this specific duration was established as the optimal period that allowed participants to meaningfully engage with the swarm guidance task while simultaneously managing the arithmetic distractor, thereby successfully inducing the targeted high-workload state.
The 50 UAVs were divided into 4 groups, and participants were required to verbally command these 4 groups to avoid radar detection while following designated routes. The secondary task comprised three difficulty levels, each requiring participants to complete 5 sets of random two-digit arithmetic problems within a fixed time limit. With each increasing difficulty level, the allotted time decreased by 5 s. The varying difficulty levels of the secondary task were designed to apply different levels of pressure on participants and objectively reflect their cognitive state through calculation accuracy rates.
Prior to the experiment, the researcher set the secondary task difficulty by determining the available time for each arithmetic problem. Participants familiarized themselves with interaction methods in the simulated task and practiced the cruising procedure. After completing preparation, participants wore EEG caps. Once the task began, participants commanded 4 UAV groups to cruise along designated routes. The system recorded group number, frequency, and timestamp whenever UAVs deviated from the prescribed route or failed to avoid radar detection. During primary task execution, the researcher randomly initiated secondary tasks during selected periods while simultaneously recording EEG signals. When secondary tasks were activated, participants needed to maintain normal UAV trajectory while performing mental calculations of two-digit arithmetic expressions displayed on the left side of the screen, entering answers via keyboard within the specified time limit. Arithmetic expressions were presented at equal time intervals.
Upon secondary task completion, the experimental program automatically calculated participants’ response accuracy rates and printed primary task error reports. Researchers then terminated EEG recording, and participants completed NASA-TLX questionnaires. Objective scores were calculated based on mental arithmetic accuracy and primary task error rates, while subjective scores were derived from NASA-TLX questionnaire results. Here, the NASA-TLX questionnaire was not used as a direct input feature for the proposed model. Instead, it served ground-truth validation and experimental design purposes. Specifically, the NASA-TLX scores were collected to confirm that the designed tasks (UAV guidance, mental arithmetic, and their combination) successfully induced different levels of subjective cognitive load in the participants. In addition, the task performance and NASA-TLX scores were integrated to compute a composite cognitive state rating. This approach ensures that the final cognitive state labels (i.e., Low, Medium, High load) were not based solely on task efficacy, which can be ambiguous, but were also grounded in the participants’ subjective experiences. The NASA-TLX questionnaire was therefore instrumental in creating a more robust and valid ground truth for model training. After all 10 participants completed their experiments, the final state ratings were sorted in descending order and equally divided into three groups, each assigned a distinct cognitive state label.
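The following sketch illustrates this labeling scheme under stated assumptions: the composite rating is taken as an equal-weight combination of the objective and subjective scores (the paper does not specify the weighting), sorted in descending order, and split into three equal groups.

```python
# Illustrative sketch of the rating-to-label scheme described above.
# Weighting, score ranges, and the state-to-group mapping are assumptions.
import numpy as np

rng = np.random.default_rng(0)
objective_scores = rng.uniform(0, 100, 10)   # stand-in task-performance scores
nasa_tlx_scores = rng.uniform(0, 100, 10)    # stand-in NASA-TLX workload scores

composite = 0.5 * objective_scores + 0.5 * nasa_tlx_scores  # assumed equal weighting
order = np.argsort(-composite)               # indices sorted by descending rating
groups = np.array_split(order, 3)            # three equal-sized groups

labels = np.empty(len(composite), dtype=int)
for state, idx in enumerate(groups):         # 0/1/2 mapped to load levels (assumed)
    labels[idx] = state
```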
Researchers compiled all EEG signal files with corresponding cognitive state labels into a complete dataset for model training and testing.

3.3. Experimental Interface

The experimental interface layout is structured in the following configuration: The central area prominently displays the UAV navigation map, with red sections clearly indicating radar detection zones and yellow areas representing designated UAV route boundaries. The right section presents multiple types of situational awareness information, while the left side features a dedicated digital arithmetic module. This particular module contains a green indicator box where target arithmetic expressions for calculation will be displayed after experiment initiation. Mental arithmetic difficulty is selected by the experimenter using an orange drop-down menu, with task initiation and system reset controlled via the green “start” and red “restart” buttons, respectively. Participants are required to input their mental calculation results in the yellow entry box positioned below the display area and must confirm their answers by pressing the “Enter” key on the computer keyboard. The complete detailed layout is illustrated in Figure 5.

3.4. Results and Analysis

Following preprocessing of the acquired EEG signals, the data were segmented using a 5 s sliding window with a 0.5 s step size. For each temporal window, pairwise Pearson correlation coefficients were computed across all 20 electrode signals to construct functional connectivity matrices.
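A minimal sketch of this segmentation step is given below; the sampling rate is an assumed placeholder, since it is not stated in this section.

```python
# Sketch: 5 s sliding windows with a 0.5 s step, one connectivity matrix each.
import numpy as np

def sliding_windows(eeg: np.ndarray, fs: int, win_s: float = 5.0, step_s: float = 0.5):
    """Yield (20, win) segments from a (20, N) recording."""
    win, step = int(win_s * fs), int(step_s * fs)
    for start in range(0, eeg.shape[1] - win + 1, step):
        yield eeg[:, start:start + win]

fs = 250                                    # assumed sampling rate in Hz
eeg = np.random.randn(20, 60 * fs)          # stand-in for a 1 min recording
conn = [np.corrcoef(w) for w in sliding_windows(eeg, fs)]  # 20 x 20 per window
```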
For the baseline models, classification was implemented using the preprocessed EEG signals as input. For the DAM-CNN model, the functional connectivity matrices served as input. We selected two traditional machine learning methods (SVM and RF) along with three deep learning approaches (CNN, EEGNet, and ShallowConvNet) as baseline models. This study specifically designed the cross-subject validation to test generalization within this realistic operational context through leave-one-subject-out cross-validation. A comparative analysis was performed between these five baseline models and the proposed model in terms of average accuracy and robustness, as illustrated in Figure 6.
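The evaluation protocol can be summarized by the leave-one-subject-out sketch below; `make_model`, `train`, and `evaluate` are hypothetical placeholders for the actual model construction and training routines.

```python
# Sketch of leave-one-subject-out cross-validation: each fold trains on
# nine subjects and tests on the held-out one.
import numpy as np

def loso_accuracies(X_by_subject, y_by_subject, make_model, train, evaluate):
    accs = []
    for s in range(len(X_by_subject)):               # held-out test subject s
        train_X = np.concatenate([X for i, X in enumerate(X_by_subject) if i != s])
        train_y = np.concatenate([y for i, y in enumerate(y_by_subject) if i != s])
        model = make_model()
        train(model, train_X, train_y)
        accs.append(evaluate(model, X_by_subject[s], y_by_subject[s]))
    return np.mean(accs), np.var(accs)               # mean accuracy and variance
```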
On this dataset, the proposed DAM-CNN method achieved a mean three-class classification accuracy of 98.76% across 10 subjects. The traditional machine learning methods, SVM and RF, yielded mean accuracies of 86.30% and 86.90%, respectively. The deep learning methods EEGNet and ShallowConvNet attained mean accuracies of 93.18% and 93.09% across the 10 subjects, while the CNN approach employed by Yang et al. (using raw EEG signals as input) [28] achieved a mean accuracy of 91.75% on the 10-subject dataset. Under identical experimental conditions, the proposed method demonstrated accuracy improvements of 12.46%, 11.86%, 5.58%, and 5.67% relative to these four baseline models, and a 7.01% mean accuracy improvement compared to Yang et al.’s model [28]. This validates that the cross-subject cognitive state assessment framework based on brain functional connectivity effectively processes EEG signals while fully leveraging task context information. As shown in Figure 7, model accuracy varies across subjects. We calculated the accuracy variance across the 10 subjects: SVM exhibited a variance of 0.0130, RF 0.0257, EEGNet 0.0225, ShallowConvNet 0.0302, Yang et al.’s CNN method 0.0304, and the proposed DAM-CNN model 0.0113. Comparatively, the variances were reduced by 0.0017, 0.0144, 0.0112, 0.0189, and 0.0191, respectively. These results demonstrate that DAM-CNN exhibits superior stability across different subjects, indicating enhanced generalization capability. The specific test metrics for each model are shown in Table 2. Meanwhile, we calculated the overall prediction results of the DAM-CNN model on the 10-subject dataset and plotted the corresponding confusion matrix, as shown in Figure 8.
The application of neurophysiological monitoring, particularly through EEG, holds significant promise for enhancing safety and performance in aviation and unmanned systems operations [58,59,60]. Unlike subjective self-reports or performance metrics alone, EEG provides an objective, continuous, and real-time window into an operator’s cognitive state. This capability is crucial in high-stakes environments where cognitive overload, fatigue, or a loss of situational awareness can lead to critical errors. The key advantages driving this research include the potential for real-time cognitive state assessment, which could enable adaptive systems to mitigate overload by simplifying interfaces or reallocating tasks. Furthermore, it facilitates a shift towards proactive safety paradigms by identifying cognitive degradation before it results in an operational failure, and it provides a quantitative basis for personalized training by revealing individual neurocognitive responses to complex scenarios. Figure 9 quantifies the magnitude of the performance differences between the proposed DAM-CNN and the baseline models. The matrix reveals that DAM-CNN achieves extremely large effect sizes when compared to SVM (d = 9.73), RF (d = 5.68), and CNN (d = 2.90), demonstrating its substantially superior performance. Meanwhile, EEGNet and ShallowConvNet show large negative effect sizes relative to DAM-CNN (−2.98 and −2.36, respectively), again indicating their inferior performance. This visualization corroborates that DAM-CNN not only outperforms all baseline models significantly but does so by a large margin, aligning with the quantitative accuracy and variance analyses presented earlier. Our work directly contributes to this evolving field by developing a robust framework for cross-subject cognitive state assessment, a necessary step toward the practical deployment of such brain-aware systems in real-world aviation contexts.
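For reference, the effect-size measure visualized in Figure 9 can be computed as Cohen’s d with a pooled standard deviation, as in the sketch below applied to per-subject accuracy arrays.

```python
# Sketch: Cohen's d with a pooled standard deviation, over per-subject accuracies.
import numpy as np

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd   # positive: a outperforms b on average
```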
Traditional EEG-based cognitive state assessment methods suffer from excessive data volume, numerous interfering factors, and insufficient feature representation in deep learning frameworks, failing to adequately capture inter-electrode correlations. These limitations often lead to inaccurate assessment of operators’ cognitive states, making it difficult to promptly adjust their operational readiness. In contrast, the cognitive state classification model developed in this study consistently and accurately determines operators’ cognitive states by comprehensively capturing functional connectivity relationships between electrodes. This approach provides reliable support for successful real-time UAV mission operations, significantly enhancing operators’ decision-making accuracy and overall system interaction fluency, thereby facilitating dynamic human–UAV collaboration and seamless task execution.
Notably, the performance of the proposed architecture might be impacted by a more diverse operator population. Specifically, age diversity could introduce variability in baseline neural rhythms and cognitive processing speed, potentially requiring model adaptation for different age cohorts within a broader operator population. Varying fatigue levels might also act as a confounding factor, altering EEG signatures in ways that could be misclassified as other cognitive states. In addition, differences in operational skill could affect the cognitive load experienced for the same task, meaning our model may need to be conditioned on expertise level for optimal accuracy in a heterogeneous group.
The ultimate goal of this work is to integrate the proposed EEG-based cognitive state evaluation model directly into the UAV operational loop. Given that UAV operators work in a more stable environment than pilots, and considering the non-intrusive nature of EEG signal acquisition, the integration process is relatively straightforward. This work envisions that operators wear a lightweight EEG headset that transmits brainwave signals to the ground control station software. The built-in DAM-CNN model would then process the brain functional connectivity data in real time, providing a continuous assessment of the operator’s cognitive state. This output could then be visualized on the console interface via a simple, intuitive indicator. In an advanced autonomous supervision framework, this real-time cognitive readout could trigger adaptive system responses, such as simplifying the interface during high mental workload, issuing alerts for cognitive fatigue, or even suggesting a transfer of control to a secondary operator to prevent performance degradation and enhance overall mission safety and efficacy. In subsequent research, new software will be developed specifically for the UAV console. This program will adapt the interface and modulate the amount of information displayed based on the cognitive state classification results, thereby aligning the human–machine interaction with the operator’s current cognitive capacity. When necessary, the system will issue warnings to prompt the operator to suspend operations at an appropriate time, notify relevant personnel to intervene promptly, or even assume control of the UAV to execute avoidance maneuvers and prevent safety-critical incidents. A sample demonstration of the implementation is shown in Figure 10.

4. Conclusions

To achieve a more robust and accurate assessment of the cognitive states of unmanned system operators, this study transforms EEG signals into brain functional connectivity matrices, highlighting the strength of association between neural activities in different cerebral regions. By constructing a specialized convolutional neural network model integrated with a Dual Attention Module (DAM), we effectively enhanced the classification accuracy of EEG-based cognitive states while simultaneously reducing inter-subject variance across participants. These findings establish a crucial technical foundation for enabling smooth and efficient human–UAV interaction in future operational scenarios.
The findings of this study have direct and significant implications for enhancing UAV operational reliability, safety assurance, and the development of adaptive autonomy. By enabling a robust, cross-subject, real-time assessment of the operator’s cognitive state, our DAM-CNN model provides a critical missing data stream for next-generation human–machine systems. This capability directly contributes to operational reliability by allowing early detection of cognitive decline, such as fatigue or overload, before it leads to performance errors. From a safety assurance perspective, this real-time cognitive monitoring acts as a vital safeguard, creating an opportunity for proactive intervention. This could range from alerting the operator and supervisory personnel to the system temporarily simplifying the interface or assuming lower-level control tasks to prevent mishaps. Furthermore, this work lays a foundational stone for sophisticated human–machine teaming architectures. The reliable cognitive state output can serve as a key input for adaptive autonomous systems, enabling a UAV’s level of automation and information presentation to dynamically align with the operator’s current cognitive capacity. Ultimately, this paves the way for more resilient and collaborative human–machine teams and can also be integrated into future training and evaluation frameworks to objectively assess and enhance operator proficiency under various cognitive demands.
However, this study has certain limitations. The limited number of participants, with their ages concentrated within a narrow range, a gender imbalance, and relatively homogeneous physical conditions, constrains our ability to more comprehensively validate the model’s accuracy and robustness. Although the model does not require parameter recalibration for different subjects, retraining still remains necessary. In future research, we will expand both the sample size and the diversity of participants, while developing more robust models capable of generalizing effectively across different subjects from a single training instance.

Author Contributions

Conceptualization, F.Z. and X.H.; methodology, F.Z.; software, F.Z.; validation, F.Z. and K.J.; formal analysis, X.H.; resources, J.C.; writing—original draft preparation, F.Z. and X.Z.; writing—review and editing, X.Z. and J.C.; visualization, K.J.; supervision, X.H.; project administration, J.C.; funding acquisition, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by China Postdoctoral Science Foundation, grant numbers 2024M764253 and GZB20240985, and National Natural Science Foundation of China, grant number 6257075360.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Medical and Experimental Animal Ethics Committee of Northwestern Polytechnical University (Project ID: 20250201) on 5 March 2025.

Informed Consent Statement

Written informed consent was obtained from the participants.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
UAV  Unmanned Aerial Vehicle
EEG  Electroencephalogram
DAM  Dual Attention Module
ICA  Independent Component Analysis
SVM  Support Vector Machine
RF   Random Forest
PAM  Position Attention Module
CAM  Channel Attention Module

References

  1. AL-Dosari, K.; Hunaiti, Z.; Balachandran, W. Systematic review on civilian drones in safety and security applications. Drones 2023, 7, 210. [Google Scholar] [CrossRef]
  2. Bansler, J.P.; Havn, E. Pilot implementation of health information systems: Issues and challenges. Int. J. Med. Inform. 2010, 79, 637–648. [Google Scholar] [CrossRef] [PubMed]
  3. Grindley, B.; Phillips, K.; Parnell, K.; Cherrett, T.; Scanlan, J.; Plant, K. Over a decade of UAV incidents: A human factors analysis of causal factors. Appl. Ergon. 2024, 121, 104355. [Google Scholar] [CrossRef]
  4. Ghasri, M.; Maghrebi, M. Factors affecting unmanned aerial vehicles’ safety: A post-occurrence exploratory data analysis of drones’ accidents and incidents in Australia. Saf. Sci. 2021, 139, 105273. [Google Scholar] [CrossRef]
  5. Asghari, O.; Ivaki, N.; Madeira, H. UAV Operations Safety Assessment: A Systematic Literature Review. ACM Comput. Surv. 2025, 57, 1–37. [Google Scholar] [CrossRef]
  6. Wiegmann, D.A.; Shappell, S.A. Human error analysis of commercial aviation accidents: Application of the Human Factors Analysis and Classification System (HFACS). Aviat. Space Environ. Med. 2001, 72, 1006–1016. [Google Scholar]
  7. Akhanda, M.; Islam, S.; Rahman, M. Detection of cognitive state for brain–computer interfaces. In Proceedings of the 2013 International Conference on Electrical Information and Communication Technologies (EICT), Khulna, Bangladesh, 20–22 December 2013; IEEE: Piscataway, NJ, USA, 2014; pp. 1–6. [Google Scholar]
  8. Wilson, N.; Guragain, B.; Verma, A.; Archer, L.; Tavakolian, K. Blending human and machine: Feasibility of measuring fatigue through the aviation headset. Hum. Factors 2020, 62, 553–564. [Google Scholar] [CrossRef]
  9. Liu, Y.; Sourina, O.; Nguyen, M.K. Real-time EEG-based human emotion recognition and visualization. In Proceedings of the 2010 International Conference on Cyberworlds, Singapore, 20–22 October 2010; pp. 262–269. [Google Scholar]
  10. Anderson, K.; McOwan, P.W. A real-time automated system for the recognition of human facial expressions. IEEE Trans. Syst. Man. Cybern. Part B 2006, 36, 96–105. [Google Scholar] [CrossRef]
  11. Ang, J.; Dhillon, R.; Krupski, A.; Shriberg, E.; Stolcke, A. Prosody-based automatic detection of annoyance and frustration in human–computer dialog. In Proceedings of the 7th International Conference on Spoken Language Processing (ICSLP 2002), Denver, CO, USA, 16–20 September 2002; pp. 2037–2040. [Google Scholar]
  12. Vijayan, A.E.; Sen, D.; Sudheer, A.P. EEG-based emotion recognition using statistical measures and auto-regressive modeling. In Proceedings of the 2015 IEEE International Conference on Computational Intelligence & Communication Technology, Ghaziabad, India, 13–14 February 2015; pp. 587–591. [Google Scholar]
  13. Herwig, U.; Satrapi, P.; Schönfeldt-Lecuona, C. Using the international 10–20 EEG system for positioning of transcranial magnetic stimulation. Brain Topogr. 2003, 16, 95–99. [Google Scholar] [CrossRef]
  14. Cherian, R.; Kanaga, E.G. Theoretical and methodological analysis of EEG based seizure detection and prediction: An exhaustive review. J. Neurosci. Methods 2022, 369, 109483. [Google Scholar] [CrossRef]
  15. Singh, A.K.; Krishnan, S. Trends in EEG signal feature extraction applications. Front. Artif. Intell. 2023, 5, 1072801. [Google Scholar] [CrossRef]
  16. Pirbhulal, S.; Zhang, H.; Wu, W.; Mukhopadhyay, S.C.; Zhang, Y.-T. Heartbeats based biometric random binary sequences generation to secure wireless body sensor networks. IEEE Trans. Biomed. Eng. 2018, 65, 2751–2759. [Google Scholar] [CrossRef]
  17. Pirbhulal, S.; Zhang, H.; Mukhopadhyay, S.C.; Li, C.; Wang, Y.; Li, G.; Wu, W.; Zhang, Y.-T. An efficient biometric-based algorithm using heart rate variability for securing body sensor networks. Sensors 2015, 15, 15067–15089. [Google Scholar] [CrossRef]
  18. Li, M.; Lu, B.-L. Emotion classification based on gamma-band EEG. In Proceedings of the 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, St. Paul, MN, USA, 2–6 September 2009; pp. 1223–1226. [Google Scholar]
  19. Wu, W.; Pirbhulal, S.; Sangaiah, A.K.; Mukhopadhyay, S.C.; Li, G. Optimization of signal quality over comfortability of textile electrodes for ECG monitoring in fog computing based medical applications. Future Gener. Comput. Syst. 2018, 86, 515–526. [Google Scholar] [CrossRef]
  20. Pirbhulal, S.; Zhang, H.; Alahi, M.E.; Ghayvat, H.; Mukhopadhyay, S.C.; Zhang, Y.-T.; Wu, W. A novel secure IoT-based smart home automation system using a wireless sensor network. Sensors 2017, 17, 69. [Google Scholar] [CrossRef] [PubMed]
  21. Sodhro, A.H.; Pirbhulal, S.; Qaraqe, M.; Lohano, S.; Sodhro, G.H.; Junejo, N.U.R.; Luo, Z. Power control algorithms for media transmission in remote healthcare systems. IEEE Access 2018, 6, 42384–42393. [Google Scholar] [CrossRef]
  22. Acharya, U.R.; Sree, S.V.; Swapna, G.; Martis, R.J.; Suri, J.S. Automated EEG analysis of epilepsy: A review. Knowl.-Based Syst. 2013, 45, 147–165. [Google Scholar] [CrossRef]
  23. Ronca, V.; Flumeri, G.D.; Giorgi, A.; Vozzi, A.; Capotorto, R.; Germano, D.; Sciaraffa, N.; Borghini, G.; Babiloni, F.; Aricò, P. o-CLEAN: A novel multi-stage algorithm for the ocular artifacts’ correction from EEG data in out-of-the-lab applications. J. Neural Eng. 2024, 21, 056023. [Google Scholar] [CrossRef]
  24. Kobler, R.J.; Sburlea, A.I.; Dias, C.L.; Schwarz, A.; Hirata, M.; Müller-Putz, G.R. Corneo-retinal-dipole and eyelid-related eye artifacts can be corrected offline and online in electroencephalographic and magnetoencephalographic signals. NeuroImage 2020, 218, 117000. [Google Scholar] [CrossRef]
  25. Park, J.; Park, J.; Shin, D.; Choi, Y. A BCI Based Alerting System for Attention Recovery of UAV Operators. Sensors 2021, 21, 2447. [Google Scholar] [CrossRef]
  26. Li, Q.; Molloy, O.; El-Fiqi, H.; Eves, G. Applications of Machine Learning in Assessing Cognitive Load of Uncrewed Aerial System Operators and in Enhancing Training: A Systematic Review. Drones 2025, 9, 760. [Google Scholar] [CrossRef]
  27. Deng, T.; Huo, Z.; Zhang, L.; Dong, Z.; Niu, L.; Kang, X.; Huang, X. A VR-based BCI interactive system for UAV swarm control. Biomed. Signal Process. Control 2023, 85, 104944. [Google Scholar] [CrossRef]
  28. Yang, Y.; Liu, C. The emotional status testing of UAV operators based on the two-dimensional feature maps and CNN analysis. Comput. Meas. Control 2024, 32, 96–102. [Google Scholar]
  29. Shi, P.; Wang, H.; Liu, L. Cross-subject and cross-session EEG-based approach to emotion recognition. Comput. Appl. Res. 2025, 42, 156–164. [Google Scholar]
  30. Jiang, Y.; Xie, S.; Xie, X.; Cui, Y.; Tang, H. Emotion recognition via multiscale feature fusion network and attention mechanism. IEEE Sens. J. 2023, 23, 10790–10800. [Google Scholar] [CrossRef]
  31. Tang, H.; Xie, S.; Xie, X.; Cui, Y.; Li, B.; Zheng, D.; Tian, Z. Multi-domain based dynamic graph representation learning for EEG emotion recognition. IEEE J. Biomed. Health Inform. 2024, 28, 5227–5238. [Google Scholar] [CrossRef]
  32. Xie, S.; Li, Y.; Xie, X.; Wang, W.; Duan, X. The analysis and classification of sleep stage using deep learning network from single-channel EEG signal. In Proceedings of the Neural Information Processing (International Conference on Neural Information Processing (ICONIP)), Guangzhou, China, 14–18 October 2017; Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, E.S.M., Eds.; Springer: Cham, Switzerland, 2017; pp. 752–758. [Google Scholar]
  33. Fu, Z.; Zhu, H.; Zhao, Y.; Huan, R.; Zhang, Y.; Chen, S.; Pan, Y. GMAEEG: A self-supervised graph masked autoencoder for EEG representation learning. IEEE J. Biomed. Health Inform. 2024, 28, 6486–6497. [Google Scholar] [CrossRef]
  34. Xiong, H.; Chen, J.J.; Wang, M.H.; Zhang, L.; Lin, F. Enhancing neural activation in older adults: Action observation-primed swallowing imagery reveals age-related connectivity patterns. IEEE Trans. Neural Syst. Rehabil. Eng. 2025, 33, 1574–1584. [Google Scholar] [CrossRef]
  35. Zhang, L.; Xiao, D.; Guo, X.; Li, F.; Liang, W.; Zhou, B. Cross-subject emotion EEG signal recognition based on source microstate analysis. Front. Neurosci. 2023, 17, 1288580. [Google Scholar] [CrossRef]
  36. Pang, Y.; Wei, Q.; Zhao, S.; Li, N.; Li, Z.; Lu, F.; Pang, J.; Zhang, R.; Wang, K.; Chu, C.; et al. Enhanced default mode network functional connectivity links with electroconvulsive therapy response in major depressive disorder. J. Affect. Disord. 2022, 306, 47–54. [Google Scholar] [CrossRef]
  37. Kong, X.; Kong, R.; Orban, C.; Wang, P.; Zhang, S.; Anderson, K.M.; Holmes, A.J.; Murray, J.D.; Deco, G.; van den Heuvel, M.; et al. Sensory-motor cortices shape functional connectivity dynamics in the human brain. Nat. Commun. 2021, 12, 6373. [Google Scholar] [CrossRef]
  38. Yen, C.; Lin, C.L.; Chiang, M.C. Exploring the frontiers of neuroimaging: A review of recent advances in understanding brain functioning and disorders. Life 2023, 13, 1472. [Google Scholar] [CrossRef] [PubMed]
  39. Apicella, A.; Arpaia, P.; D’Errico, G.; Marocco, D.; Mastrati, G.; Moccaldi, N.; Prevete, R. Toward cross-subject and cross-session generalization in EEG-based emotion recognition: Systematic review, taxonomy, and methods. Neurocomputing 2024, 604, 128354. [Google Scholar] [CrossRef]
  40. Lawhern, V.J.; Solon, A.J.; Waytowich, N.R.; Gordon, S.M.; Hung, C.P.; Lance, B.J. EEGNet: A compact convolutional neural network for EEG-based brain–computer interfaces. J. Neural Eng. 2018, 15, 056013. [Google Scholar] [CrossRef]
  41. Kumar, R.; Chug, A.; Singh, A.P. An efficient plant leaf disease detection model using Shallow-ConvNet. Appl. Ecol. Environ. Res. 2023, 21, 3193–3211. [Google Scholar] [CrossRef]
  42. Song, B.; Park, K. Comparison of outdoor compost pile detection using unmanned aerial vehicle images and various machine learning techniques. Drones 2021, 5, 31. [Google Scholar] [CrossRef]
  43. Zhu, X.; Yuan, F.; Zhou, G.; Nie, J.; Wang, D.; Hu, P.; Ouyang, L.; Kong, L.; Liao, W. Cross-network interaction for diagnosis of major depressive disorder based on resting state functional connectivity. Brain Imaging Behav. 2021, 15, 1279–1289. [Google Scholar] [CrossRef]
  44. Ho, C.S.; Chan, Y.; Tan, T.W.; Tay, G.W.; Tang, T. Improving the diagnostic accuracy for major depressive disorder using machine learning algorithms integrating clinical and near-infrared spectroscopy data. J. Psychiatr. Res. 2022, 147, 194–202. [Google Scholar] [CrossRef]
  45. Plitt, M.; Barnes, K.A.; Martin, A. Functional connectivity classification of autism identifies highly predictive brain features but falls short of biomarker standards. NeuroImage Clin. 2015, 7, 359–366. [Google Scholar] [CrossRef]
  46. Hu, D. An introductory survey on attention mechanisms in NLP problems. In Intelligent Systems and Applications, Proceedings of the 2019 Intelligent Systems Conference (IntelliSys) Volume 2, London, UK, 5–7 September 2019; Bi, Y., Bhatia, R., Kapoor, S., Eds.; Springer: London, UK, 2019; pp. 432–448. [Google Scholar]
  47. Görlich, F.; Marks, E.; Mahlein, A.K.; König, K.; Lottes, P.; Stachniss, C. UAV-based classification of cercospora leaf spot using RGB images. Drones 2021, 5, 34. [Google Scholar] [CrossRef]
  48. Daniluk, M.; Rocktäschel, T.; Welbl, J.; Riedel, S. Frustratingly short attention spans in neural language modeling. arXiv 2017, arXiv:1702.04521. [Google Scholar] [CrossRef]
  49. Chen, L.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848. [Google Scholar] [CrossRef]
  50. Katmah, R.; Al-Shargie, F.; Tariq, U.; Babiloni, F.; Al-Mughairbi, F.; Al-Nashash, H. A Review on Mental Stress Assessment Methods Using EEG Signals. Sensors 2021, 21, 5043. [Google Scholar] [CrossRef] [PubMed]
  51. Varshney, A.; Ghosh, S.K.; Padhy, S.; Tripathy, R.K.; Acharya, U.R. Automated Classification of Mental Arithmetic Tasks Using Recurrent Neural Network and Entropy Features Obtained from Multi-Channel EEG Signals. Electronics 2021, 10, 1079. [Google Scholar] [CrossRef]
  52. Sharma, L.D.; Chhabra, H.; Chauhan, U.; Saraswat, R.K.; Sunkaria, R.K. Mental arithmetic task load recognition using EEG signal and Bayesian optimized K-nearest neighbor. Int. J. Inf. Technol. 2021, 13, 2363–2369. [Google Scholar] [CrossRef]
  53. Wang, Y.; Li, Z.; Zhang, Y.; Long, Y.; Xie, X.; Wu, T. Classification of partial seizures based on functional connectivity: A MEG study with support vector machine. Front. Neuroinform. 2022, 16, 934480. [Google Scholar] [CrossRef]
  54. Wu, X.K.; Yan, Y.; Jia, Z.H.; Bai, X.L.; Wang, L. Mental arithmetic task classification based on topological representation of EEG-based functional connectivity. Appl. Res. Comput. 2022, 39, 356–360. [Google Scholar]
  55. Zyma, I.; Tukaev, S.; Seleznov, I.; Kiyono, K.; Popov, A.; Chernykh, M.; Shpenkov, O. Electroencephalograms during mental arithmetic task performance. Data 2019, 4, 14. [Google Scholar] [CrossRef]
  56. Raza, H.; Chowdhury, A.; Bhattacharyya, S.; Samothrakis, S. Single-trial EEG classification with EEGNet and neural structured learning for improving BCI performance. In Proceedings of the 2020 International Joint Conference on Neural Networks, Glasgow, UK, 19–24 July 2020; pp. 1–8. [Google Scholar]
  57. Wang, T.; Dong, E.; Du, S.; Jia, C. A shallow convolutional neural network for classifying MI-EEG. In Proceedings of the China Automation Congress (CAC), Hangzhou, China, 22–24 November 2019; pp. 5837–5841. [Google Scholar]
  58. Borghini, G.; Aricò, P.; Di, F.G.; Ronca, V.; Giorgi, A.; Sciaraffa, N.; Conca, C.; Stefani, S.; Verde, P.; Landolfi, A.; et al. Air Force Pilot Expertise Assessment during Unusual Attitude Recovery Flight. Safety 2022, 8, 38. [Google Scholar] [CrossRef]
  59. Chen, J.; Chen, A.; Jiang, B.; Zhang, X. Physiological records-based situation awareness evaluation under aviation context: A comparative analysis. Heliyon 2024, 10, e26409. [Google Scholar] [CrossRef]
  60. Chen, A.; Xie, F.; Wang, J.; Chen, J. Intelligent optimization method of human–computer interaction interface for UAV cluster attack mission. Electronics 2023, 12, 4426. [Google Scholar] [CrossRef]
Figure 1. The cognitive status assessment method employed in this study.
Figure 2. The preprocessing pipeline for EEG signals.
Figure 3. The proposed DAM-CNN architecture.
Figure 4. The DAM structure being used.
Figure 5. Drone mission execution interface.
Figure 6. Boxplot for the four classifiers’ performance.
Figure 7. Performance of four models on different subjects.
Figure 8. DAM-CNN output heatmap visualization.
Figure 9. Cohen’s d effect size matrix for model performance comparison.
Figure 10. Sample model integration demonstration.
Table 1. Description of subject situation.

| Index | Age | Gender | Experimental Session | Score      |
|-------|-----|--------|----------------------|------------|
| 1     | 22  | Female | Morning              | 89, 52, 17 |
| 2     | 24  | Male   | Morning              | 92, 48, 23 |
| 3     | 23  | Male   | Afternoon            | 85, 55, 19 |
| 4     | 22  | Male   | Evening              | 95, 45, 12 |
| 5     | 24  | Female | Evening              | 88, 50, 21 |
| 6     | 23  | Male   | Morning              | 91, 49, 16 |
| 7     | 23  | Female | Afternoon            | 87, 53, 18 |
| 8     | 24  | Male   | Afternoon            | 93, 47, 20 |
| 9     | 22  | Female | Evening              | 86, 51, 14 |
| 10    | 23  | Male   | Evening              | 90, 46, 16 |
Table 2. Soft comparison of the proposed DAM-CNN and the state-of-the-art architectures.

| Reference         | Method         | Acc    | Precision | Recall | F1     | AUC    |
|-------------------|----------------|--------|-----------|--------|--------|--------|
| Yang et al. [28]  | CNN            | 91.75% | 92.26%    | 91.64% | 91.95% | 91.07% |
| Wang et al. [53]  | SVM            | 86.30% | 86.44%    | 86.27% | 86.27% | 90.33% |
| Wu et al. [54,55] | RF             | 86.90% | 87.34%    | 86.94% | 87.14% | 84.92% |
| Raza et al. [56]  | EEGNet         | 93.18% | 93.48%    | 93.08% | 93.28% | 93.20% |
| Wang et al. [57]  | ShallowConvNet | 93.09% | 93.28%    | 93.03% | 93.15% | 93.44% |
| Current Study     | DAM-CNN        | 98.76% | 98.89%    | 98.53% | 98.71% | 98.54% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
