1. Introduction
The introduction of Remote Digital Towers (RDTs) represents a major technological transformation in the global Air Traffic Management (ATM) system, enabling air traffic controllers (ATCOs) to remotely manage airport operations through real-time panoramic video, sensor fusion, and interactive display interfaces [1,2]. Among the human factors that affect operational safety in RDT environments, fatigue remains one of the most significant challenges. Shift-based scheduling often disrupts circadian rhythms and contributes to chronic fatigue [3]. Moreover, compared with conventional towers, RDT workspaces lack natural lighting and visual depth cues [4], which have been shown to impair ocular accommodation and increase fatigue-related symptoms [5,6,7,8]. Combined with prolonged screen exposure and elevated cognitive demands, ATCOs therefore face a high risk of visual and psychological fatigue. Empirical studies confirm that fatigue increases memory and perception errors, making it a critical threat to aviation safety [9].
Fatigue assessment methods can be broadly categorized into subjective and objective approaches. Subjective methods rely on rating scales such as the Samn–Perelli (SP) scale, the Karolinska Sleepiness Scale (KSS), and the NASA Task Load Index (NASA-TLX) [10,11,12]. However, subjective measures are limited by self-report biases and intra-individual variability [13]. Objective approaches, including electroencephalography (EEG), heart rate variability (HRV), and eye tracking, provide quantifiable physiological indicators of fatigue [14,15]. Among them, EEG has emerged as a particularly promising modality because it directly measures neurophysiological activity.
Nevertheless, most prior work on EEG-based fatigue detection has been conducted in controlled laboratory or driving scenarios, whereas real-world ATC operations involve continuous communication and multitasking, making the detection problem both more complex and more demanding in terms of ecological validity [16].
This work specifically addresses fatigue monitoring for ATCOs operating in remote tower environments. The main contributions of this study are as follows:
Construction of a task-specific EEG dataset under simulated remote tower conditions, reflecting realistic ATCO–pilot interactions and dynamic workloads.
Design of an EEG-based functional brain-region graph to capture spatial relationships between channels for effective feature learning.
Integration of center loss into the training objective to enhance feature discriminability, thereby improving cross-subject generalization.
To situate our approach within the broader research landscape, the following section reviews related works on EEG-based fatigue detection, highlighting existing gaps that motivate the design of STG-CLNet.
2. Related Works
EEG-based fatigue detection has been widely investigated in aviation, driving, and industrial contexts, and a variety of deep learning frameworks have been developed in recent years. Existing methods can be broadly categorized into CNN- and autoencoder-based models, graph neural networks (GNNs) and spatio-temporal approaches, attention-based models, transfer learning strategies, and multimodal fusion frameworks.
2.1. CNN- and Autoencoder-Based Models
Convolutional neural networks (CNNs) have been widely used to extract spatial features from EEG signals [17,18,19]. For example, Deng et al. proposed a deep sparse contractive autoencoder for pilot fatigue recognition, outperforming traditional machine learning baselines [20]. Similarly, Rundo et al. reported nearly perfect drowsiness classification results using stacked autoencoders [21]. More recently, Wang et al. extended CNN-based modeling to agricultural contexts by developing a 3D-CNN with attention (CBAM) for beekeeper fatigue assessment, achieving 94.9% accuracy [22].
2.2. Graph Neural Networks and Spatio-Temporal Approaches
Graph-based approaches have demonstrated strong potential in capturing the spatial dependencies of EEG channels. Several studies showed that graph convolutional networks (GCNs) can effectively model functional connectivity in EEG [23,24,25,26]. Lin et al. further constructed dynamic functional brain networks for driving fatigue recognition using GNNs, showing robust performance under noise [27]. In addition, spatio-temporal CNN variants were explored by Gao et al. [28], who proposed the ESTCNN and achieved 97.37% accuracy, while Gao et al. introduced a recurrence network–CNN hybrid model that reached 92.95% [29]. Hybrid deep networks such as the Inception-dilated ResNet proposed by Alghanim et al. also achieved 98.87% accuracy on benchmark datasets [30].
2.3. Recurrent and Attention-Based Models
Recurrent neural networks (RNNs), especially Long Short-Term Memory (LSTM) architectures, have proven effective in capturing temporal dependencies in EEG signals [31,32,33]. In parallel, attention-based deep learning models have recently gained traction [34,35,36,37]. By focusing on the most informative temporal or spatial features, these models improve both classification accuracy and interpretability in EEG-based cognitive state monitoring.
2.4. Transfer Learning and Domain Adaptation
Given the substantial variability in EEG signals across individuals, transfer learning and domain adaptation have been explored to enhance cross-subject generalization. Liu et al. introduced inter-subject transfer learning strategies, reporting 73.01% accuracy with multi-channel EEG data [38]. Zeng et al. proposed a GAN-enhanced DANN framework (GDANN), achieving 91.63% accuracy in cross-subject fatigue detection [39]. A comprehensive survey by Wang et al. further highlighted the growing importance of transfer learning in bridging subject-specific variability and supporting practical deployment [40].
2.5. Multimodal Fusion Approaches
Beyond single-modality EEG, multimodal approaches have been developed to exploit complementary physiological signals. Yu et al. proposed RecMF, a multimodal framework combining EEG and eye-movement signals through an attention-enabled CNN–LSTM architecture [41]. The model significantly improved fatigue recognition performance for air traffic controllers (ATCOs), but at the cost of higher computational complexity and additional sensor integration.
2.6. Summary and Comparison
In summary, state-of-the-art EEG-based fatigue detection models have reported accuracies ranging from approximately 73% to 98%, depending on the architecture, dataset, and application domain. CNN- and autoencoder-based models provide strong within-subject performance but are limited in cross-subject generalization. GNNs and spatio-temporal networks achieve high accuracies yet are computationally demanding, while RNNs and attention-based models improve temporal dynamics modeling but may increase complexity. Transfer learning frameworks improve generalization but sometimes at the cost of accuracy in real-world conditions. Multimodal fusion approaches deliver superior detection performance but rely on extra sensors, complicating deployment.
Compared with these methods, the proposed STG-CLNet achieves a balanced performance with 96.73% accuracy, 92.01% recall, and 87.15% F1-score under simulated remote tower scenarios. By jointly modeling spatial and temporal EEG dynamics and incorporating center loss for discriminative feature learning, STG-CLNet ensures both robustness and computational efficiency. This balance makes it particularly suitable for real-time fatigue monitoring in air traffic control environments. Building on these insights, the following sections describe the experimental design and data collection (Section 3) and the architecture and methodological design of STG-CLNet (Section 4) in detail.
3. Experiments
This section describes the participants, the experimental setup, procedures, data acquisition, and labeling strategies employed in this study. A remote tower simulation environment was developed to replicate realistic air traffic control (ATC) operations, allowing participants to perform high- and low-traffic control tasks under controlled yet ecologically valid conditions. Electroencephalography (EEG) signals were continuously recorded using a multi-channel system, and fatigue labels were derived from a combination of validated subjective scales. The methodology ensures reliable physiological data collection, precise scenario control, and robust fatigue labeling for subsequent model training and evaluation.
3.1. Participants
In total, 34 qualified participants were recruited for the study, consisting of 30 males and 4 females. All were undergraduate students majoring in Air Traffic Control at the Civil Aviation Flight University of China (CAFUC). Each had completed basic ATC training and passed the relevant operational assessments. All participants met the Class I medical certificate requirements issued by the Civil Aviation Administration of China (CAAC). Each individual had normal or corrected-to-normal vision and no reported history of ophthalmic or neurological disorders. To minimize confounding factors, they were instructed to abstain from caffeine, nicotine, and other central nervous system stimulants for at least 24 h prior to the experiment.
In order to ensure the inclusion of both rested and fatigued cognitive states, the participants were evenly divided into two groups. The first group, labeled as “pseudo-normal”, was required to have at least 6 h of sleep before the session, whereas the second group, labeled as “pseudo-fatigue”, had less than 6 h of sleep the night prior to the experiment. This grouping strategy allowed the simulation to capture a wider range of fatigue-related neural activity under operationally relevant conditions.
3.2. Experimental Setup
The experiment utilized the Tower Client system, a high-fidelity simulation platform designed for remote tower control. This system supports the rendering of detailed airfield environments, real-time traffic processing, and interactive air traffic control operations. Through the instructor interface, researchers were able to customize operational scenarios with precision. Participants engaged with the simulation via a remote tower interface, performing core ATC functions such as delivering verbal clearances, handling electronic flight strips, and coordinating ground taxi procedures.
To replicate real-world remote tower conditions, the simulation was conducted in a sealed control room with a consistent environment and uniform screen illumination to ensure a stable visual context. Simulated pilot–controller interactions were implemented via a voice communication interface. Experimenters, stationed in an acoustically isolated room, acted as pilots and communicated with participants using realistic phraseology, closely mimicking standard air–ground radio procedures.
To ensure ecological validity, a high-fidelity remote tower simulator was employed in this study. The simulator provided a 120° panoramic display of the airport runway and ground traffic in real time, enabling participants to monitor dynamic operations similar to those in actual remote tower control centers. A standard voice communication system, based on ICAO phraseology, was integrated to allow continuous exchanges between controllers and pilots during the scenarios. Task conditions were designed to reflect realistic variations in traffic load, including both high-density and low-density periods. Prior to the experiment, all participants underwent a short training session in standard ATC communication procedures to ensure operational consistency and familiarity with the simulator interface.
The virtual setting, referred to as “Hansa Airport”, was constructed based on the layouts and operational characteristics of Wuhan Tianhe (ZHHH) and Changsha Huanghua (ZGHA) international airports, aiming to reflect realistic apron control dynamics. Scenarios included complete sequences of aircraft movement such as arrival, taxiing, gate assignment, and departure scheduling. To maintain consistent task demands across the low-traffic and high-traffic conditions, a fixed traffic schedule was employed. Several pre-experiment trial runs were conducted to confirm the consistency and reliability of the scenario timing and structure. The overall setup of the simulated remote tower control environment and the EEG acquisition procedure is illustrated in Figure 1.
To ensure familiarity with ATC–pilot communication, all participants underwent a short training session in standard ICAO phraseology prior to the experiment. Instructors supervised the sessions to ensure compliance with communication protocols. A typical example of the exchanges during the simulation is given below:
Pilot: “Hansa Tower CCA1234 approaching holding point runway 04.”
ATCO: “CCA1234 runway 04, line up and wait behind.”
Pilot: “Runway 04, lining up and wait, CCA1234.”
Pilot: “CCA1234 ready for departure.”
ATCO: “CCA1234, surface wind 324 degrees 8 m/s, QNH 1012, runway 04, cleared for takeoff.”
Pilot: “LKO-1D Departure, runway 04, cleared for takeoff, CCA1234.”
ATCO: “CCA1234, contact Hansa Departure on 121.2.”
Pilot: “121.2, good day, CCA1234.”
3.3. Procedure
After arriving at the simulated remote tower laboratory, participants were given a comprehensive briefing on the experimental procedures and signed informed consent. They completed a training session before the formal trial. EEG equipment was fitted five minutes prior to the start of each session. In addition to physiological signals, participants completed three subjective fatigue questionnaires both before and after each trial: the NASA-TLX, the Samn–Perelli (SP) Fatigue Scale, and the Situational Awareness Rating Technique (SART). The NASA-TLX form used for subjective fatigue assessment is illustrated in Figure 2.
During each task, participants maintained a standard seated posture while three synchronized monitors displayed the simulated tower view, including aircraft, runways, taxiways, and stands. All air–ground communication was conducted via simulated radio, with experimenters playing the role of pilots.
Each participant completed two experimental scenarios on the same day: the low-traffic session at 09:00 and the high-traffic session at 15:00. A 90 min rest break, including a supervised nap opportunity, was scheduled between the two sessions to minimize residual fatigue and restore alertness. Two participants undertook the experiment simultaneously during each session. No effort was made to maintain a completely silent environment, in order to simulate the operational background noise and ambient conditions typical of real-world remote tower settings. This within-subject design allowed for controlled comparisons while mitigating potential circadian or fatigue-related confounds.
Low-Traffic Scenario: Involved 2 arriving and 3 departing aircraft. At any time, no more than 2 aircraft were present on the taxiways, resulting in a relatively low workload that resembled daily remote tower operations. Each low-traffic scenario lasted approximately 30 min.
High-Traffic Scenario: Involved 4 arrivals and 5 departures. Up to 5 aircraft could be active on taxiways simultaneously, increasing ground traffic complexity and cognitive demand. This scenario simulated periods of elevated workload after extended monitoring in remote tower operations. Each high-traffic scenario lasted approximately 60 min.
3.4. Dataset and Labeling
EEG signals were recorded using a 32-channel cap. However, only 14 channels (AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, and AF4) were selected for analysis, as shown in Figure 3. These channels were chosen based on their established associations with fatigue, attention, and cognitive control, covering frontal, temporal, parietal, and occipital regions. Reducing the number of channels minimized motion artifacts and noise from irrelevant regions (e.g., around the ears or face) and facilitated practical deployment in wearable systems.
While initial session grouping (pseudo-normal and pseudo-fatigue) was based on participants’ self-reported sleep duration, these assignments only served as pseudo-labels to ensure the inclusion of both fatigued and non-fatigued cognitive states during data collection. Final fatigue labels were determined using a decision strategy based on participants’ responses to three validated subjective fatigue assessments: the NASA-TLX, the SP scale, and the SART. This multi-criteria decision rule was designed to incorporate multiple dimensions of cognitive fatigue, combining perceived exertion, mental workload, and attentional state. The resulting fatigue labels were temporally aligned with the EEG segments for use in model training and evaluation. The participant-level distributions of the three subjective fatigue scales are presented in Figure 4.
To ensure robust labeling, we used three subjective instruments: the Samn–Perelli (SP) fatigue scale, the Situation Awareness Rating Technique (SART), and the NASA Task Load Index (NASA-TLX). Before assigning labels, we examined the association and discriminative ability of these instruments with respect to the fatigue state. The statistical results are summarized in Table 1, which shows that SP and SART strongly discriminated between fatigue and normal conditions (both p < 0.001). NASA-TLX also showed a significant difference between the two groups (p = 0.024), though the effect size was smaller than for SP and SART. A limitation of our protocol is that SP scores exhibited limited variation within the one-hour experimental sessions, which may reduce their sensitivity to task-induced fluctuations in fatigue. To mitigate these limitations, we integrated the three complementary scales (SP, SART, and NASA-TLX) to establish a more robust labeling framework. The final labels were determined using a multi-criteria rule in which the criterion on one primary scale had to be satisfied together with the criterion on at least one of the other two scales.
This labeling approach was designed to integrate multiple cognitive dimensions of fatigue—perceived exertion, mental workload, and attentional capacity—thus improving the reliability and ecological validity of ground truth annotations used for model training and evaluation. Once binary fatigue labels were generated based on subjective scores, the corresponding EEG signals underwent a series of preprocessing steps to ensure data quality and facilitate effective model training.
A bandpass filter ranging from 1 to 40 Hz was applied to remove artifacts and electromagnetic interference, such as noise from wireless communication devices. Furthermore, data were downsampled from 256 Hz to 64 Hz to reduce computational complexity while retaining essential temporal features. Finally, a sliding window of 5 s with an 80% overlap was applied to segment the continuous EEG signals into training samples suitable for spatio-temporal modeling. Each EEG segment was represented as a tensor of size 14 × 320 (14 spatial channels × 320 temporal points at 64 Hz). Each resulting segment was assigned a binary label: 0 indicating a normal state and 1 indicating a fatigued state.
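A minimal preprocessing sketch of this pipeline is given below (Python/SciPy), assuming the raw recording is available as a (14 × n_samples) array at 256 Hz; the 4th-order Butterworth design and the zero-phase filtering choice are illustrative assumptions, as the exact filter design is not specified here.

```python
# Minimal preprocessing sketch: 1-40 Hz band-pass, 256 -> 64 Hz downsampling,
# and 5 s sliding windows with 80% overlap (assumed Butterworth filter design).
import numpy as np
from scipy.signal import butter, filtfilt, decimate

FS_RAW, FS_TARGET = 256, 64          # original and target sampling rates
WIN_SEC, OVERLAP = 5, 0.8            # 5 s windows with 80% overlap

def preprocess(raw):                 # raw: (14, n_samples) float array
    # 1-40 Hz band-pass (zero-phase) to suppress drift and high-frequency interference
    b, a = butter(4, [1, 40], btype="bandpass", fs=FS_RAW)
    filtered = filtfilt(b, a, raw, axis=-1)
    # Downsample 256 Hz -> 64 Hz (factor 4) with built-in anti-aliasing
    down = decimate(filtered, FS_RAW // FS_TARGET, axis=-1, zero_phase=True)
    # Sliding 5 s windows (320 samples) with 80% overlap (64-sample step)
    win = WIN_SEC * FS_TARGET
    step = int(win * (1 - OVERLAP))
    starts = range(0, down.shape[-1] - win + 1, step)
    return np.stack([down[:, s:s + win] for s in starts])   # (n_segments, 14, 320)
```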
After preprocessing and labeling, the final dataset consisted of 64,932 EEG segments (23,124 labeled as fatigue and 41,808 as normal) from all 34 participants. To address inter-subject variability, we adopted a participant-wise Group K-Fold protocol (K = 5), ensuring that all samples from a given participant appear exclusively in either the training or the validation set in each fold. This avoids data leakage and provides a realistic estimate of cross-subject generalization for ATCO monitoring. In addition to accuracy, we report precision, recall, and F1-score, and early stopping is based on the validation F1-score. All results are averaged across folds.
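The participant-wise splitting protocol can be sketched as follows; X, y, and groups are assumed arrays of EEG segments, binary labels, and participant IDs, and a logistic-regression stand-in replaces STG-CLNet purely to keep the example self-contained while illustrating the leakage-free Group K-Fold evaluation.

```python
# Participant-wise 5-fold cross-validation sketch (no subject appears in
# both training and validation sets of the same fold).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def cross_subject_eval(X, y, groups, n_splits=5):
    """X: (n_seg, 14, 320) segments, y: binary labels, groups: participant IDs."""
    gkf = GroupKFold(n_splits=n_splits)
    scores = []
    for tr, va in gkf.split(X, y, groups=groups):        # participant-disjoint folds
        clf = LogisticRegression(max_iter=1000)          # stand-in classifier
        clf.fit(X[tr].reshape(len(tr), -1), y[tr])
        pred = clf.predict(X[va].reshape(len(va), -1))
        scores.append((accuracy_score(y[va], pred),
                       precision_score(y[va], pred),
                       recall_score(y[va], pred),
                       f1_score(y[va], pred)))
    return np.mean(scores, axis=0)                       # fold-averaged metrics
```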
4. Method
4.1. Network Architecture Overview
This section outlines the proposed STG-CLNet, a spatio-temporal graph fusion network with center loss for EEG-based fatigue detection in remote tower air traffic controllers (ATCOs). The model is composed of three core modules: spatial feature extraction using a graph convolutional network (GCN), temporal sequence modeling using a multi-layer LSTM architecture, and a fusion–classification pipeline enhanced with a center loss objective. The complete framework is illustrated in Figure 5.
To model spatial dependencies among EEG channels, each input sample is formulated as a graph $G = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ represents the EEG electrodes and $\mathcal{E}$ denotes their functional connectivity. The adjacency matrix $A \in \mathbb{R}^{V \times V}$ captures the pairwise relations among the $V$ channels, where $V$ is the number of selected EEG electrodes. A three-stage GCN module is employed to extract spatial representations by successively applying the graph convolution operation
$$H^{(l+1)} = \sigma\!\left(\hat{A}\, H^{(l)} W^{(l)}\right),$$
where $\hat{A}$ is the normalized adjacency matrix with self-loops, $W^{(l)}$ is the trainable weight matrix of layer $l$, and $\sigma$ denotes the ReLU activation. This structure enables learning hierarchical spatial features sensitive to the functional topology of the brain.
The spatial module outputs an embedding vector for each sample, which is passed to the fusion block.
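The following PyTorch sketch illustrates a three-stage graph convolution of this form; the hidden width (64), the use of each channel's 320-sample window as its node feature vector, and the mean pooling over channels used to obtain the embedding are assumptions made for illustration rather than the exact configuration.

```python
# Sketch of the three-stage spatial GCN: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj):                     # x: (B, V, F), adj: (V, V)
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)   # add self-loops
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt                   # symmetric normalization
        return torch.relu(a_norm @ self.weight(x))

class SpatialGCN(nn.Module):
    """Three stacked graph convolutions over the V = 14 EEG channels."""
    def __init__(self, in_dim=320, hidden=64):
        super().__init__()
        self.layers = nn.ModuleList([GraphConv(in_dim, hidden),
                                     GraphConv(hidden, hidden),
                                     GraphConv(hidden, hidden)])

    def forward(self, x, adj):                     # x: (B, 14, 320)
        for layer in self.layers:
            x = layer(x, adj)
        return x.mean(dim=1)                       # (B, hidden) spatial embedding
```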
The model was trained using the Adam optimizer with an initial learning rate of 0.001 and a batch size of 64. Training was performed for up to 200 epochs, with early stopping (patience = 30) applied to prevent overfitting. These hyperparameters were empirically chosen to ensure stable convergence and reliable performance. This integrated dataset and labeling strategy provided the basis for training and evaluating the proposed STG-CLNet, ensuring both ecological validity and cross-subject generalizability.
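A compact sketch of this training configuration is shown below; `model` stands for the full STG-CLNet instance, and train_step / val_step are hypothetical caller-supplied callables (one training pass and one validation pass returning the epoch's F1-score), introduced only to keep the example self-contained.

```python
# Adam (lr = 0.001) with early stopping on the validation F1-score (patience = 30).
import torch

def train_with_early_stopping(model, train_step, val_step,
                              max_epochs=200, patience=30, lr=1e-3):
    """train_step(model, optimizer) runs one epoch; val_step(model) returns
    the validation F1-score for that epoch (both caller-supplied)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    best_f1, wait = 0.0, 0
    for _ in range(max_epochs):                    # up to 200 epochs
        train_step(model, optimizer)
        val_f1 = val_step(model)
        if val_f1 > best_f1:
            best_f1, wait = val_f1, 0
            torch.save(model.state_dict(), "best_stg_clnet.pt")
        else:
            wait += 1
            if wait >= patience:                   # stop after 30 stagnant epochs
                break
    return best_f1
```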
4.2. Functional Connectivity-Based Graph Construction
To capture dynamic interactions among EEG channels, we constructed the adjacency matrix of the GCN using functional connectivity measures rather than relying on fixed or purely distance-based topologies. Specifically, the Phase Lag Index (PLI) was employed to quantify phase synchronization between two EEG channels while mitigating the influence of volume conduction and common sources.
For two EEG channels $i$ and $j$, the instantaneous phases $\phi_i(t)$ and $\phi_j(t)$ were obtained via the Hilbert transform. The PLI between channel $i$ and channel $j$ was then defined as
$$\mathrm{PLI}_{ij} = \left| \frac{1}{T} \sum_{t=1}^{T} \operatorname{sign}\!\big(\phi_i(t) - \phi_j(t)\big) \right|,$$
where $T$ denotes the number of time samples and the phase difference is wrapped to $(-\pi, \pi]$. The resulting functional connectivity matrix contains values in the range $[0, 1]$, reflecting the strength of phase coupling between channels.
To construct the adjacency matrix $A$, we applied a thresholding scheme such that
$$A_{ij} = \begin{cases} \mathrm{PLI}_{ij}, & \mathrm{PLI}_{ij} \geq \tau, \\ 0, & \text{otherwise,} \end{cases}$$
where $\tau$ is a connectivity threshold that controls the sparsity of the graph. In this study, $\tau$ was set empirically to retain the most informative connections while avoiding spurious weak couplings.
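A sketch of this PLI-based adjacency construction is given below (NumPy/SciPy); the threshold value of 0.2 is purely illustrative, since $\tau$ is set empirically in this study.

```python
# PLI-based adjacency sketch: Hilbert phases, pairwise phase-lag index,
# and thresholding to obtain a sparse channel graph.
import numpy as np
from scipy.signal import hilbert

def pli_adjacency(segment, threshold=0.2):
    """segment: (V, T) EEG segment; returns a sparse (V, V) adjacency matrix."""
    phases = np.angle(hilbert(segment, axis=-1))     # instantaneous phase per channel
    V = segment.shape[0]
    pli = np.zeros((V, V))
    for i in range(V):
        for j in range(i + 1, V):
            dphi = phases[i] - phases[j]
            # sign of sin(dphi) handles phase wrapping to (-pi, pi]
            pli[i, j] = pli[j, i] = np.abs(np.mean(np.sign(np.sin(dphi))))
    adj = np.where(pli >= threshold, pli, 0.0)       # drop weak couplings
    np.fill_diagonal(adj, 0.0)
    return adj
```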
Furthermore, to incorporate prior spatial information, $A$ was initialized according to the electrode topology defined by the international 10–20 system, encoding inter-electrode physical proximity. During training, $A$ was treated as learnable, enabling adaptive refinement of inter-channel relationships. This hybrid strategy, which combines functional connectivity, anatomical plausibility, and data-driven optimization, enhances the ability of the GCN to capture task-relevant brain dynamics associated with fatigue.
4.3. Temporal Feature Extraction via LSTM
To capture the dynamic evolution of EEG signals over time, a temporal feature extractor is constructed using a stack of three unidirectional LSTM layers. This hierarchical design allows the network to progressively learn short-, mid-, and long-term temporal dependencies from raw EEG signals. The input tensor of size $N \times V \times T$ is reshaped into a channel-wise sequence format and passed through the stacked LSTM blocks, producing a compact temporal feature embedding, where $N$ is the batch size, $T$ is the temporal length, and $V$ is the number of EEG channels.
This design omits bidirectionality to reduce computational complexity while preserving the ability to model fatigue-related neural fluctuations. The use of multiple LSTM layers, instead of a simple convolutional alternative, enables the network to capture long-range dependencies that are critical for differentiating fatigue states, which often evolve gradually rather than being reflected in only local temporal windows. The final LSTM layer outputs a fixed-dimensional temporal representation for each EEG segment.
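A minimal PyTorch sketch of this temporal extractor is shown below; the hidden size of 64 is an assumed value, and each time step's 14-channel sample is fed as the per-step input feature vector.

```python
# Three-layer unidirectional LSTM temporal extractor for (B, 14, 320) segments.
import torch
import torch.nn as nn

class TemporalLSTM(nn.Module):
    def __init__(self, n_channels=14, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            num_layers=3, batch_first=True)   # unidirectional stack

    def forward(self, x):                  # x: (B, V, T) segment tensor
        x = x.permute(0, 2, 1)             # -> (B, T, V) for sequence modeling
        _, (h_n, _) = self.lstm(x)
        return h_n[-1]                     # (B, hidden) final state of the last layer
```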
Furthermore, the temporal representations are optimized jointly with a center loss constraint. While the cross-entropy objective enforces inter-class separation, it does not explicitly minimize intra-class variability, which is substantial in EEG due to inter-subject differences and noise. The center loss term encourages compactness within each class while preserving inter-class margins, thereby enhancing the discriminability and stability of the temporal embeddings. This complementary design, combining hierarchical LSTM modeling with center loss regularization, strengthens the robustness of the proposed STG-CLNet for cross-subject fatigue detection. The effectiveness of this design is further demonstrated in Section 5.
4.4. Feature Fusion and Discriminative Learning with Center Loss
The spatial representations extracted by the GCN and the temporal embeddings obtained from the stacked LSTM layers are concatenated to form a unified feature vector that encodes both structural and sequential EEG characteristics. This fused representation is then passed through a fully connected classification layer to predict fatigue or normal states.
To further enhance the discriminative quality of the learned feature space, a center loss term is introduced alongside the cross-entropy objective. The center loss enforces compact intra-class feature distributions and improves inter-class separation, and is defined as
$$\mathcal{L}_{C} = \frac{1}{2} \sum_{i=1}^{m} \left\| x_i - c_{y_i} \right\|_2^2,$$
where $x_i$ denotes the feature vector of the $i$-th sample, $c_{y_i}$ is the learned center corresponding to class $y_i$, and $m$ is the mini-batch size. The total loss function is defined as the combination of the cross-entropy loss ($\mathcal{L}_{CE}$) and the center loss ($\mathcal{L}_{C}$):
$$\mathcal{L} = \mathcal{L}_{CE} + \lambda\, \mathcal{L}_{C},$$
where $\lambda$ is a balancing coefficient empirically set to 0.005. This joint optimization encourages the model to generate robust and well-separated feature representations, thereby strengthening generalization to unseen subjects.
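A minimal PyTorch sketch of the center loss term, with trainable class centers optimized jointly with the network, is given below; the feature dimension of 128 is an assumption for illustration.

```python
# Center loss sketch: squared distance of each sample's fused feature to its
# class center, averaged over the mini-batch; centers are trainable parameters.
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    def __init__(self, num_classes=2, feat_dim=128):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):          # features: (B, D), labels: (B,)
        diff = features - self.centers[labels]    # distance to each sample's class center
        return 0.5 * diff.pow(2).sum(dim=1).mean()

# Joint objective: L = L_CE + lambda * L_C with lambda = 0.005, e.g.
# loss = nn.CrossEntropyLoss()(logits, labels) + 0.005 * CenterLoss()(fused, labels)
```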
4.5. Optimization Strategy
To provide a clear overview of the training procedure, the optimizing process of STG-CLNet is summarized in Algorithm 1. This algorithm outlines the initialization, feature extraction, feature fusion, and joint optimization with cross-entropy and center loss.
Algorithm 1 Optimizing procedure of the STG-CLNet
Input: EEG dataset $\mathcal{D}$, total number of epochs $E$, batch size $B$, model parameters $\theta$
Output: Optimized model parameters $\theta$ and learned feature centers $C$
1: Initialize model parameters $\theta$ and class centers $C$
2: for $e = 1$ to $E$ do
3:  for each mini-batch do
4:   Sample a batch $(X, y)$ from $\mathcal{D}$
5:   Extract spatial features $F_s$ from $X$ using the GCN
6:   Extract temporal features $F_t$ from $X$ using the three-layer stacked LSTM
7:   Fuse $F_s$ and $F_t$ to obtain $F$
8:   Feed $F$ into the classifier to predict $\hat{y}$
9:   Compute the cross-entropy loss $\mathcal{L}_{CE}$
10:   Compute the center loss $\mathcal{L}_{C}$ using the class centers $C$
11:   Compute the total loss $\mathcal{L} = \mathcal{L}_{CE} + \lambda\,\mathcal{L}_{C}$
12:   Update $\theta$ and $C$ using SGD based on $\mathcal{L}$
13:  end for
14: end for
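The following sketch ties the preceding modules together into a single training iteration in the spirit of Algorithm 1; it reuses the SpatialGCN, TemporalLSTM, and CenterLoss sketches above, assumes a fused feature width of 128, and uses the Adam settings reported in Section 4.1 in place of the generic gradient update written in Algorithm 1.

```python
# One training iteration: spatial and temporal feature extraction, fusion,
# classification, and a joint cross-entropy + center loss update.
import torch
import torch.nn as nn

gcn, lstm = SpatialGCN(), TemporalLSTM()              # sketches defined above
center_loss = CenterLoss(num_classes=2, feat_dim=128)
classifier = nn.Linear(128, 2)
params = (list(gcn.parameters()) + list(lstm.parameters())
          + list(classifier.parameters()) + list(center_loss.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_iteration(x, adj, y, lam=0.005):            # x: (B, 14, 320) float, y: (B,) long
    f_s = gcn(x, adj)                                 # spatial embedding (B, 64)
    f_t = lstm(x)                                     # temporal embedding (B, 64)
    fused = torch.cat([f_s, f_t], dim=1)              # fused feature (B, 128)
    logits = classifier(fused)
    loss = ce(logits, y) + lam * center_loss(fused, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                  # updates weights and class centers
    return loss.item()
```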
4.6. Evaluation Metrics
To quantitatively evaluate model performance, four standard classification metrics are employed: accuracy, precision, recall, and F1-score. Their definitions are given as follows:
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad \mathrm{Precision} = \frac{TP}{TP + FP},$$
$$\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad \mathrm{F1} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}},$$
where $TP$ (true positives) is the number of correctly predicted fatigue samples, $TN$ (true negatives) the number of correctly predicted non-fatigue samples, $FP$ (false positives) the number of non-fatigue samples incorrectly predicted as fatigue, and $FN$ (false negatives) the number of fatigue samples incorrectly classified as non-fatigue.
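For reference, a direct NumPy implementation of these confusion-matrix-based metrics is sketched below (assuming binary labels with 1 = fatigue and non-degenerate predictions so that no denominator is zero).

```python
# Confusion-matrix metrics for binary fatigue detection (1 = fatigue, 0 = normal).
import numpy as np

def fatigue_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))   # correctly predicted fatigue
    tn = np.sum((y_pred == 0) & (y_true == 0))   # correctly predicted normal
    fp = np.sum((y_pred == 1) & (y_true == 0))   # normal predicted as fatigue
    fn = np.sum((y_pred == 0) & (y_true == 1))   # fatigue predicted as normal
    acc = (tp + tn) / (tp + tn + fp + fn)
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * prec * rec / (prec + rec)
    return acc, prec, rec, f1
```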
5. Results
5.1. Overall Performance
The evaluation of STG-CLNet and the baseline models was conducted using four standard metrics: accuracy, precision, recall, and F1-score, as defined in Section 4.6. These metrics jointly provide a balanced assessment of predictive capability and robustness. As summarized in Table 2, the proposed STG-CLNet achieves superior results across all evaluation metrics, indicating enhanced discriminability and generalization compared with conventional baselines.
The proposed STG-CLNet achieves the best performance across all evaluation metrics. Specifically, it obtains an accuracy of 0.9673, a recall of 0.9201, a precision of 0.8378, and an F1-score of 0.8715, demonstrating strong capability in both detecting fatigue and maintaining class balance. Compared with CNN–LSTM, which shows high precision (0.8738) but relatively low recall (0.7307), our method achieves a more balanced trade-off, indicating better robustness in recognizing true fatigue cases without sacrificing specificity. The overall performance comparison across all models is summarized in Table 2. In addition, the STG-CLNet variant without center loss achieved an accuracy of 0.8642 and an F1-score of 0.8388, both notably lower than the full model (accuracy = 0.9673, F1-score = 0.8715). This decline highlights the importance of center loss in enhancing intra-class compactness and inter-class separability, thereby improving the robustness of fatigue detection.
Further validation of the improvements over the baseline models was carried out using paired t-tests across participants’ F1-scores. As shown in Figure 6, the proposed STG-CLNet achieved significantly higher performance than all baseline models. In particular, the differences with CNN–LSTM, CNN–Attention, EEGNet, and GCN were statistically significant (p < 0.001). These results confirm that the observed performance gains are robust and not due to random variation.
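A minimal sketch of this significance test is shown below (SciPy); f1_ours and f1_baseline are assumed to be length-34 arrays holding each participant's F1-score for the two models being compared.

```python
# Paired t-test over per-participant F1-scores of two models.
from scipy.stats import ttest_rel

def compare_models(f1_ours, f1_baseline):
    t_stat, p_value = ttest_rel(f1_ours, f1_baseline)   # paired across participants
    return t_stat, p_value
```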
To further assess cross-subject generalization, we analyzed the per-subject results for all 34 participants under both low- and high-traffic scenarios.
Figure 7 presents the F1-score distribution of five compared models using a combination of boxplots and swarmplots. Each point corresponds to one participant’s F1-score, while the black markers with error bars indicate the mean ± standard deviation for each model. The results reveal that STG-CLNet achieves the highest mean F1-score (0.87) and the narrowest dispersion, suggesting superior robustness across subjects. In contrast, CNN-Attention and EEGNet display larger variability and lower average performance, highlighting their limited generalization ability. Despite the large inter-individual variability in EEG responses, the proposed STG-CLNet maintains consistently higher F1-scores across participants. This demonstrates its robustness and stability in cross-subject scenarios, which is crucial for real-world ATCO fatigue detection.
5.2. Robustness Analysis
The robustness of the proposed framework was further evaluated under controlled noise perturbations to simulate real-world ATC multitasking and distractions. Three scenarios were examined: channel dropout, temporal masking, and additive Gaussian noise. As shown in Figure 8, STG-CLNet consistently outperformed the baseline models across all conditions. Although performance declined as noise severity increased, the reduction was considerably smaller than that of the baselines, indicating that STG-CLNet maintains stable and discriminative feature representations under signal corruption. These findings demonstrate the strong robustness and practical applicability of STG-CLNet in noisy and dynamic ATC environments.
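The three perturbation types can be sketched as follows; the dropout rate, masking ratio, and SNR values shown are illustrative defaults, whereas the experiments sweep them over several severity levels.

```python
# Perturbation sketches for the robustness analysis: channel dropout,
# temporal masking, and additive Gaussian noise on a (V, T) EEG segment.
import numpy as np

rng = np.random.default_rng(0)

def channel_dropout(x, drop_rate=0.2):
    """Zero out a random subset of channels."""
    mask = rng.random(x.shape[0]) >= drop_rate
    return x * mask[:, None]

def temporal_mask(x, mask_ratio=0.2):
    """Zero out a contiguous block of time samples."""
    T = x.shape[1]
    width = int(T * mask_ratio)
    start = rng.integers(0, T - width + 1)
    out = x.copy()
    out[:, start:start + width] = 0.0
    return out

def add_gaussian_noise(x, snr_db=10):
    """Add white Gaussian noise at a target signal-to-noise ratio (dB)."""
    sig_power = np.mean(x ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    return x + rng.normal(0.0, np.sqrt(noise_power), size=x.shape)
```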
This robustness can be attributed to two design aspects of STG-CLNet. First, the graph convolutional module leverages functional connectivity among EEG channels rather than treating them as independent inputs. As a result, when a subset of channels is corrupted or dropped, the network can still propagate informative signals through intact connections, thereby mitigating the effect of missing or noisy electrodes. Second, the hierarchical LSTM layers capture long-range temporal dependencies, which smooth out transient disturbances caused by temporal masking or additive noise. Consequently, noise perturbations primarily reduce local information, but global temporal dynamics remain preserved. In contrast, CNN-based baselines rely more heavily on local patterns, making them more sensitive to localized corruption. These properties explain why STG-CLNet demonstrates smaller performance degradation across all perturbation scenarios.
5.3. Function of Center Loss
To evaluate the discriminative capability of the learned feature space, t-distributed Stochastic Neighbor Embedding (t-SNE) was employed to project the high-dimensional feature vectors into a two-dimensional space for visual inspection. As illustrated in Figure 9, the left subfigure depicts the feature distribution learned without center loss, where the two classes (fatigued vs. non-fatigued) exhibit substantial overlap, reflecting poor intra-class compactness and weak inter-class separability. In contrast, the right subfigure shows the feature distribution with center loss, where samples form compact clusters within each class and display clearer boundaries across classes. This result demonstrates that the incorporation of center loss significantly enhances the discriminability of the latent representation, which in turn supports more reliable classification and improves generalization to unseen subjects with high EEG variability.
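A sketch of the corresponding t-SNE projection is given below (scikit-learn/Matplotlib); `features` is assumed to be the matrix of fused embeddings, `labels` the binary fatigue labels, and the perplexity of 30 is an assumed default rather than the exact setting used here.

```python
# 2-D t-SNE projection of the fused feature space, colored by fatigue label.
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(features, labels):
    emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
    plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=4, cmap="coolwarm")
    plt.title("t-SNE of fused EEG features")
    plt.show()
```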
6. Discussion
The primary objective of this study was to establish a robust EEG-based fatigue detection framework specifically designed for remote tower air traffic controllers (ATCOs). The proposed STG-CLNet integrates graph-based spatial modeling with temporal dynamics learning through stacked LSTM layers. The addition of the center loss term enhances discriminability by reducing intra-class variability and improving inter-class separation. Together, these components enabled the model to achieve superior performance compared with conventional baselines.
In comparison with CNN–LSTM, CNN–Attention, and EEGNet, which either neglect spatial correlations or capture only limited temporal dynamics, STG-CLNet leverages neurophysiologically meaningful functional connectivity. This design allows the network to achieve a balanced trade-off between recall and precision, which is critical in safety-sensitive contexts where missed detections pose greater risks than false alarms. Nevertheless, excessive false positives could undermine operator trust and increase unnecessary workload; our results demonstrate that STG-CLNet maintains a low false positive rate while substantially reducing false negatives, thereby ensuring both safety and operational reliability. The center loss component further contributes by mitigating inter-subject variability, as supported by the feature distribution patterns visualized through t-SNE.
From an application perspective, STG-CLNet was designed to remain computationally efficient while ensuring operational feasibility. The reduction to 14 channels and the lightweight structure enable real-time inference on both CPUs and GPUs, confirming its suitability for deployment in air traffic control environments. This balance of accuracy, generalization, and efficiency positions the framework as a practical solution for real-time fatigue monitoring.
Several limitations should be acknowledged. First, the tri-criteria labeling scheme based on SP, SART, and NASA-TLX improved construct validity but was constrained by the relative stability of SP scores within single-session experiments. This may reduce sensitivity to short-term fatigue fluctuations. Future research should explore continuous or probabilistic labeling to represent fatigue as a spectrum rather than a binary construct. Second, although the high-fidelity simulator preserved critical ecological factors such as panoramic monitoring and ICAO-standard communication, it could not fully reproduce the unpredictability of real operations. We are currently conducting eye-tracking data collection experiments, and future work will extend the proposed framework to multimodal integration. Combining EEG with additional modalities such as eye tracking or ECG is expected to leverage complementary information, thereby further improving both the accuracy and interpretability of fatigue detection systems.
In summary, STG-CLNet advances both theoretical and applied research by demonstrating that spatio-temporal modeling combined with discriminative feature learning can significantly improve fatigue detection performance. Its robustness, interpretability, and operational feasibility suggest strong potential for integration into remote tower environments and other safety-critical domains.
7. Conclusions
This study presented STG-CLNet, a spatio-temporal graph network with center loss regularization, for EEG-based fatigue detection in remote tower air traffic controllers. The framework effectively captured both spatial dependencies among EEG channels and temporal dynamics of neural activity, while the center loss term enhanced discriminability and stability of the learned representations.
A dedicated dataset was collected under simulated remote tower conditions to evaluate the system across realistic cognitive workloads. The results confirmed that STG-CLNet outperformed conventional baseline models across multiple evaluation metrics, highlighting its robustness and generalization ability.
The findings demonstrate the feasibility of single-modality EEG for fatigue detection in operationally relevant ATC settings. This contributes to the advancement of objective fatigue monitoring technologies that can improve aviation safety and support sustainable Air Traffic Management. Future work will focus on expanding participant diversity, incorporating multimodal physiological signals, and validating the approach in live operational environments.
Author Contributions
Conceptualization, P.L.; methodology, P.L.; validation, P.L. and R.H.; formal analysis, L.Z.; investigation, Q.Z. (Qingwei Zhong); resources, Y.L.; data curation, P.L. and R.H.; writing—original draft preparation, P.L.; writing—review and editing, L.Z., P.L. and Y.L.; visualization, P.L.; supervision, Y.L. and Q.Z. (Qinghai Zuo); project administration, Y.A.; funding acquisition, W.P. All authors have read and agreed to the published version of the manuscript.
Funding
This research was supported by the National Natural Science Foundation of China (U2333207), the Sichuan Science and Technology Program (2025YFHZ0023), the Sichuan Flight Engineering Technology Research Center Foundation (GY2024-22C), the Fundamental Research Funds for the Central Universities (25CAFUC09018, 25CAFUC01012, 25CAFUC04065), and the R&D Program of the CAAC Key Laboratory of Flight Techniques and Flight Safety (No. FZ2025ZX08).
Institutional Review Board Statement
The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the Civil Aviation Flight University of China.
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement
Due to confidentiality agreements and the need to protect participant privacy, the datasets generated and analyzed during the current study are not publicly available. Interested researchers may contact the corresponding author at peilinluo0519@gmail.com to request access under appropriate conditions.
Conflicts of Interest
Author Linfeng Zhong was employed by the company Chengdu GoldtTel Industry Group Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
ACC | Accuracy |
ATCO | Air Traffic Controller |
EEG | Electroencephalography |
F1 | F1-Score |
GCN | Graph Convolutional Network |
LSTM | Long Short-Term Memory |
NASA-TLX | NASA Task Load Index |
PLI | Phase Lag Index |
RDT | Remote Digital Tower |
SART | Situational Awareness Rating Technique |
SP | Samn-Perelli Fatigue Scale |
SSS | Stanford Sleepiness Scale |
STG-CLNet | Spatio-Temporal Graph with Center Loss Network |
t-SNE | t-distributed Stochastic Neighbor Embedding |
References
- Yamazaki, K.; Inoue, S.; Aoyama, H.; Kimiaki, T. Remote digital tower to support air traffic control systems. In Advances in Human Factors of Transportation, Proceedings of the 15th International Conference on Applied Human Factors and Ergonomics and the Affiliated Conferences, Nice, France, 24–27 July 2024; AHFE International: Orlando, FL, USA, 2024; Volume 148. [Google Scholar]
- Kearney, P.; Li, W.C. Multiple remote tower for Single European Sky: The evolution from initial operational concept to regulatory approved implementation. Transp. Res. Part A Policy Pract. 2018, 116, 15–30. [Google Scholar] [CrossRef]
- Zhang, J.; Chen, Z.; Liu, W.; Ding, P.; Wu, Q. A field study of work type influence on air traffic controllers’ fatigue based on data-driven PERCLOS detection. Int. J. Environ. Res. Public Health 2021, 18, 11937. [Google Scholar] [CrossRef]
- Lu, T.T.; An, Z.; Shen, H.; Wang, Y.; Li, W.C. Comparative Analysis of Remote Tower Controllers and Traditional Physical Tower Controllers Based on Human Ergonomics in China. In Proceedings of the International Conference on Human-Computer Interaction, Washington, DC, USA, 29 June–4 July 2024; Springer: Berlin/Heidelberg, Germany, 2024; pp. 259–268. [Google Scholar]
- Mocci, F.; Serra, A.; Corrias, G. Psychological factors and visual fatigue in working with video display terminals. Occup. Environ. Med. 2001, 58, 267–271. [Google Scholar] [CrossRef]
- Fürstenau, N.; Papenfuss, A.; Jakobi, J. Virtual and Remote Control Tower; Springer: Cham, Switzerland, 2016. [Google Scholar]
- Durgut, M. The Effects of Lighting Conditions on Visual Comfort, Satisfaction and Motivation Levels of Architecture Students. Master’s Thesis, Izmir Institute of Technology (Turkey), Urla, Turkey, 2021. [Google Scholar]
- Balta, E.; Psarrakis, A.; Vatakis, A. The effects of increased mental workload of air traffic controllers on time perception: Behavioral and physiological evidence. Appl. Ergon. 2024, 115, 104162. [Google Scholar] [CrossRef] [PubMed]
- Williamson, A.; Lombardi, D.A.; Folkard, S.; Stutts, J.; Courtney, T.K.; Connor, J.L. The link between fatigue and safety. Accid. Anal. Prev. 2011, 43, 498–515. [Google Scholar] [CrossRef] [PubMed]
- Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in Psychology; North-Holland: Amsterdam, The Netherlands, 1988; Volume 52, pp. 139–183. [Google Scholar]
- Chang, Y.H.; Yang, H.H.; Hsu, W.J. Effects of work shifts on fatigue levels of air traffic controllers. J. Air Transp. Manag. 2019, 76, 1–9. [Google Scholar] [CrossRef]
- Triyanti, V.; Azis, H.; Iridiastadi, H.; Yassierli. Workload and fatigue assessment on air traffic controller. In Proceedings of the 12th International Seminar on Industrial Engineering & Management (ISIEM): “Industrial Intelligence System on Engineering, Information, and Management”, Batu, Malang, Indonesia, 17–19 March 2020. [Google Scholar]
- Endsley, M.R. The divergence of objective and subjective situation awareness: A meta-analysis. J. Cogn. Eng. Decis. Mak. 2020, 14, 34–53. [Google Scholar] [CrossRef]
- Hui, L.; Pei, Z.; Quan, S.; Ke, X.; Zhe, S. Cognitive Workload Detection of Air Traffic Controllers Based on mRMR and Fewer EEG Channels. Brain Sci. 2024, 14, 811. [Google Scholar] [CrossRef] [PubMed]
- Honn, K.A.; Satterfield, B.C.; McCauley, P.; Caldwell, J.L.; Van Dongen, H.P. Fatiguing effect of multiple take-offs and landings in regional airline operations. Accid. Anal. Prev. 2016, 86, 199–208. [Google Scholar] [CrossRef]
- Luo, Y.; Liu, W.; Li, H.; Lu, Y.; Lu, B.L. A cross-scenario and cross-subject domain adaptation method for driving fatigue detection. J. Neural Eng. 2024, 21, 046004. [Google Scholar] [CrossRef]
- Lawhern, V.J.; Solon, A.J.; Waytowich, N.R.; Gordon, S.M.; Hung, C.P.; Lance, B.J. EEGNet: A compact convolutional neural network for EEG-based brain–computer interfaces. J. Neural Eng. 2018, 15, 056013. [Google Scholar] [CrossRef]
- Chaabene, S.; Bouaziz, B.; Boudaya, A.; Hökelmann, A.; Ammar, A.; Chaari, L. Convolutional neural network for drowsiness detection using EEG signals. Sensors 2021, 21, 1734. [Google Scholar] [CrossRef]
- Cui, J.; Lan, Z.; Liu, Y.; Li, R.; Li, F.; Sourina, O.; Müller-Wittig, W. A compact and interpretable convolutional neural network for cross-subject driver drowsiness detection from single-channel EEG. Methods 2022, 202, 173–184. [Google Scholar] [CrossRef] [PubMed]
- Deng, P.Y.; Qiu, X.Y.; Tang, Z.; Zhang, W.M.; Zhu, L.M.; Ren, H.; Zhou, G.R.; Sheng, R.S. Detecting fatigue status of pilots based on deep learning network using EEG signals. IEEE Trans. Cogn. Dev. Syst. 2020, 13, 575–585. [Google Scholar] [CrossRef]
- Rundo, F.; Rinella, S.; Massimino, S.; Coco, M.; Fallica, G.; Parenti, R.; Conoci, S.; Perciavalle, V. An innovative deep learning algorithm for drowsiness detection from EEG signal. Computation 2019, 7, 13. [Google Scholar] [CrossRef]
- Wang, P.; Nam, J.S.; Han, X. Development of a comprehensive fatigue detection model for beekeeping activities based on deep learning and EEG signals. Comput. Electron. Agric. 2024, 225, 109265. [Google Scholar] [CrossRef]
- Demir, A.; Koike-Akino, T.; Wang, Y.; Haruna, M.; Erdogmus, D. EEG-GNN: Graph neural networks for classification of electroencephalogram (EEG) signals. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Virtual Conference, 1–5 November 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1061–1067. [Google Scholar]
- Liu, C.; Zhou, X.; Xiao, J.; Zhu, Z.; Zhai, L.; Jia, Z.; Liu, Y. VSGT: Variational spatial and gaussian temporal graph models for EEG-based emotion recognition. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, Jeju, Republic of Korea, 3–9 August 2024; pp. 3078–3086. [Google Scholar]
- Hu, F.; Zhang, L.; Yang, X.; Zhang, W.A. EEG-Based driver Fatigue Detection using Spatio-Temporal Fusion network with brain region partitioning strategy. IEEE Trans. Intell. Transp. Syst. 2024, 25, 9618–9630. [Google Scholar] [CrossRef]
- Xu, M.; Chen, Q.; Chen, D.; Ding, Y.; Wang, J.; Gu, P.; Pan, Y.; Huang, D.; Zhang, X.; Guo, J. Exact Fit Attention in Node-Holistic Graph Convolutional Network for Improved EEG-Based Driver Fatigue Detection. arXiv 2025, arXiv:2501.15062. [Google Scholar]
- Lin, Z.; Qiu, T.; Liu, P.; Zhang, L.; Zhang, S.; Mu, Z. Fatigue driving recognition based on deep learning and graph neural network. Biomed. Signal Process. Control 2021, 68, 102598. [Google Scholar] [CrossRef]
- Gao, Z.; Wang, X.; Yang, Y.; Mu, C.; Cai, Q.; Dang, W.; Zuo, S. EEG-based spatio–temporal convolutional neural network for driver fatigue evaluation. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 2755–2763. [Google Scholar] [CrossRef]
- Gao, Z.K.; Li, Y.L.; Yang, Y.X.; Ma, C. A recurrence network-based convolutional neural network for fatigue driving detection from EEG. Chaos Interdiscip. J. Nonlinear Sci. 2019, 29, 113126. [Google Scholar] [CrossRef] [PubMed]
- Alghanim, M.; Attar, H.; Rezaee, K.; Khosravi, M.; Solyman, A.; Kanan, M.A. A hybrid deep neural network approach to recognize driving fatigue based on EEG signals. Int. J. Intell. Syst. 2024, 2024, 9898333. [Google Scholar] [CrossRef]
- Davidson, P.; Jones, R.; Peiris, M. Detecting behavioral microsleeps using EEG and LSTM recurrent neural networks. In Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, 31 August–3 September 2005; IEEE: Piscataway, NJ, USA, 2006; pp. 5754–5757. [Google Scholar]
- Mehmood, I.; Li, H.; Qarout, Y.; Umer, W.; Anwer, S.; Wu, H.; Hussain, M.; Antwi-Afari, M.F. Deep learning-based construction equipment operators’ mental fatigue classification using wearable EEG sensor data. Adv. Eng. Inform. 2023, 56, 101978. [Google Scholar] [CrossRef]
- Lee, D.H.; Kim, S.J.; Kim, S.H. Decoding Fatigue Levels of Pilots Using EEG Signals with Hybrid Deep Neural Networks. In Proceedings of the 2025 13th International Conference on Brain-Computer Interface (BCI), Gangwon, Republic of Korea, 24–26 February 2025; IEEE: Piscataway, NJ, USA, 2025; pp. 1–4. [Google Scholar]
- Cai, S.; Su, E.; Xie, L.; Li, H. EEG-based auditory attention detection via frequency and channel neural attention. IEEE Trans. Hum.-Mach. Syst. 2021, 52, 256–266. [Google Scholar] [CrossRef]
- Wang, Y.; Fang, Z.; Sun, X.; Lin, X.; Niu, L.; Ma, W. An adaptive driver fatigue classification framework using EEG and attention-based hybrid neural network with individual feature subsets. Biomed. Signal Process. Control 2023, 85, 105045. [Google Scholar] [CrossRef]
- Ardabili, S.Z.; Bahmani, S.; Lahijan, L.Z.; Khaleghi, N.; Sheykhivand, S.; Danishvar, S. A novel approach for automatic detection of driver fatigue using EEG signals based on graph convolutional networks. Sensors 2024, 24, 364. [Google Scholar] [CrossRef]
- Guo, Y.; Yang, K.; Wu, Y. A Multi-Modality Attention Network for Driver Fatigue Detection Based on Frontal EEG, EDA and PPG Signals. IEEE J. Biomed. Health Inform. 2025, 29, 4009–4022. [Google Scholar] [CrossRef]
- Liu, Y.; Lan, Z.; Cui, J.; Sourina, O.; Müller-Wittig, W. Inter-subject transfer learning for EEG-based mental fatigue recognition. Adv. Eng. Inform. 2020, 46, 101157. [Google Scholar] [CrossRef]
- Zeng, H.; Li, X.; Borghini, G.; Zhao, Y.; Aricò, P.; Di Flumeri, G.; Sciaraffa, N.; Zakaria, W.; Kong, W.; Babiloni, F. An EEG-based transfer learning method for cross-subject fatigue mental state prediction. Sensors 2021, 21, 2369. [Google Scholar] [CrossRef]
- Wang, F.; Wan, Y.; Li, M.; Huang, H.; Li, L.; Hou, X.; Pan, J.; Wen, Z.; Li, J. Recent Advances in Fatigue Detection Algorithm Based on EEG. Intell. Autom. Soft Comput. 2023, 35, 3573–3586. [Google Scholar] [CrossRef]
- Yu, X.; Chen, C.H.; Yang, H. Air traffic controllers’ mental fatigue recognition: A multi-sensor information fusion-based deep learning approach. Adv. Eng. Inform. 2023, 57, 102123. [Google Scholar] [CrossRef]
Figure 1.
Data collection environment used in the simulated remote tower experiment. The figure illustrates the panoramic display, ATCOs’ workstation, and EEG acquisition setup.
Figure 2.
NASA-TLX form employed to assess post-session cognitive workload and fatigue through six subjective subscales.
Figure 3.
Electrode layout of the 32-channel EEG system. The 14 highlighted electrodes (blue circles) were selected for analysis, covering frontal, temporal, parietal, and occipital regions.
Figure 4.
Score distributions of the three subjective scales used for fatigue labeling: (a) NASA–TLX, (b) Samn–Perelli, and (c) SART. Orange, red, and blue bars correspond to the respective scales in subfigures (a–c).
Figure 5.
Overall architecture of the proposed STG-CLNet, including functional connectivity-based graph construction, spatio-temporal modeling with stacked LSTMs, and discriminative learning with center loss.
Figure 6.
F1-score comparison between STG-CLNet and baseline models with statistical significance. Blue bars represent baseline models, and the orange bar represents the proposed STG-CLNet. Error bars indicate standard deviation across participants. All comparisons between STG-CLNet and baseline models reached statistical significance (*** p < 0.001).
Figure 7.
Per-subject F1-score distribution of all models. Each dot represents one participant, with boxplots showing the overall distribution. STG-CLNet achieves the highest mean and the most stable performance across participants. Colors correspond to different models as labeled on the x-axis.
Figure 8.
Robustness analysis of STG-CLNet and baseline models under different perturbation conditions simulating noisy ATC multitasking and distraction scenarios. (a) Random channel dropout at varying levels, (b) temporal masking with different masking ratios, and (c) additive Gaussian noise at different signal-to-noise ratios (SNRs). Both F1-score (solid lines) and Accuracy (dashed lines) are shown. All legend entries correspond to plotted lines, though some overlap due to similar values.
Figure 9.
t-SNE visualization comparing feature distributions with and without center loss.
Table 1.
Group comparison of subjective scales between normal and fatigue states. SP and SART showed significant differences (p < 0.001), and NASA–TLX also reached significance (p = 0.024) but was considered mainly as a workload indicator in the labeling strategy.
| Scale | Normal (Mean) | Fatigue (Mean) | p-Value |
|---|---|---|---|
| SP | 2.61 | 5.25 | <0.001 |
| SART | 35.96 | 21.83 | <0.001 |
| NASA-TLX | 54.61 | 63.08 | 0.024 |
Table 2.
Overall performance of compared models.
| Models | Precision | Accuracy | Recall | F1-Score |
|---|---|---|---|---|
| CNN-LSTM | 0.8738 | 0.8020 | 0.7307 | 0.7484 |
| CNN-Attention | 0.7173 | 0.6866 | 0.6880 | 0.6761 |
| EEGNet | 0.6130 | 0.5873 | 0.6224 | 0.5885 |
| GCN | 0.8474 | 0.7952 | 0.7406 | 0.7543 |
| STG-CLNet w/o Center Loss | 0.8167 | 0.8642 | 0.9022 | 0.8388 |
| STG-CLNet | 0.8378 | 0.9673 | 0.9201 | 0.8715 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).