Article

DPAD: Distribution-Driven Perturbation-Adaptive Defense for UAV Time-Series Regression Under Hybrid Adversarial Attacks

1
School of Cybersecurity, Northwestern Polytechnical University, Xi’an 710072, China
2
Control System Research Company of AECC, Wuxi 214024, China
3
Data Communication Technology Research Institute, Beijing 100089, China
*
Author to whom correspondence should be addressed.
Drones 2025, 9(12), 828; https://doi.org/10.3390/drones9120828
Submission received: 24 October 2025 / Revised: 22 November 2025 / Accepted: 24 November 2025 / Published: 28 November 2025

Highlights

What are the main findings?
  • A novel Distribution-driven Perturbation-Adaptive Defense (DPAD) framework is proposed to enhance the robustness of UAV time-series regression models against hybrid adversarial attacks.
  • The framework integrates a Gaussian Mixture Model (GMM)-based perturbation strength predictor with a dynamic defense selection mechanism, achieving adaptive correction under varying perturbation strengths.
What are the implications of the main finding?
  • DPAD significantly improves UAV model resilience and reliability in safety-critical missions, reducing prediction errors by up to 80% while maintaining real-time inference speed.
  • The proposed approach provides a generalizable defense strategy for other deep learning-based time-series regression applications in aerial systems.

Abstract

Time-series regression models are essential components in unmanned aerial vehicles (UAVs) for accurate trajectory and state prediction. Nevertheless, they remain vulnerable to hybrid adversarial attacks, which can compromise mission performance and cause substantial economic loss. To address this challenge, we propose the Distribution-driven Perturbation-Adaptive Defense (DPAD) framework. DPAD improves perturbation detection with Gaussian Mixture Model (GMM)-based feature augmentation, which raises the R² of perturbation strength prediction from 0.685 to 0.943, and dynamically chooses a suitable defense sub-model or the original model for adaptive correction. Experiments on UAV_Delivery show that DPAD significantly enhances robustness, achieving about an 80% reduction in prediction errors under hybrid attacks while maintaining high accuracy on clean samples with an inference speed of 2.744 ms per sample. The proposed framework thus offers a scalable and effective solution for defending UAV time-series regression models against complex adversarial scenarios.

1. Introduction

In recent years, unmanned aerial vehicles (UAVs) have demonstrated strong potential in applications such as logistics delivery, infrastructure inspection, and disaster response [1,2,3]. These systems increasingly rely on Deep Neural Networks (DNNs) [4] for time-series regression tasks such as state prediction and trajectory planning [5]. However, a UAV operating in a complex and dynamic environment is exposed to several sources of perturbation: sensor noise, communication disruptions, and adversarial interference [6]. Because of the high sensitivity of DNNs to input variations, even a small perturbation can lead to critical failures [7,8]. In particular, adversarial attacks such as the Fast Gradient Sign Method (FGSM) [9], Projected Gradient Descent (PGD) [10], or the Carlini & Wagner (CW) attack [11] can manipulate UAV monitoring systems, causing misdiagnosis of defects, incorrect adjustments of the UAV’s trajectory, or an inappropriate fault response. For instance, in a power line inspection scenario, tampered vibration signals may mask structural damage or trigger false alarms, forcing the system to initiate an emergency landing with a consequent impact on mission reliability [12]. Moreover, as UAVs are increasingly interconnected in Internet-of-Drones (IoD) networks, where multiple drones collaborate and share sensor data in real time, the security and robustness of each individual UAV model directly affect the reliability of the overall network [13]. In such IoD scenarios, adversarial attacks on even a single UAV can propagate errors through cooperative control or shared decision-making, amplifying mission-critical risks. These challenges raise the need to improve the robustness of UAV state estimation models for safe and dependable operation under real-world conditions [14].
Different strategies have been proposed to enhance model robustness, such as adversarial training, input denoising, structural modification, and certifiable defense [15,16,17]. Among them, adversarial training has gradually stood out as one of the most effective approaches, as it injects adversarial examples during model training and allows the model to learn more robust representations resistant to known attack patterns [18]. However, adversarial training still faces several challenges, the foremost being the fundamental trade-off between accuracy and robustness; others include heavy computational overhead and limited generalization to unseen or adaptive attacks [7,19]. To overcome these limitations, Ensemble Adversarial Training (EAT) explicitly combines multiple adversarial generators or sub-models during training to expand defense coverage [20]. Although ensemble-based methods have further improved robustness in classification models [21,22,23], their performance in time-series regression models under hybrid perturbations remains less explored.
Adversarial attacks in time-series regression exhibit characteristics such as temporal dependencies, visible anomalous patterns over time [24], and different levels of destructive effect depending on perturbation strength and attack type, including FGSM [9], CW [11], the Basic Iterative Method (BIM) [25], PGD [10], and Auto Projected Gradient Descent (APGD) [26]. Existing defense methods, including the TRadeoff-inspired Adversarial DEfense via Surrogate-loss minimization (TRADES) method [27] and plain adversarial training, often suffer from the inherent accuracy–robustness trade-off [22,28], poor generalization to unseen attacks, and performance degradation under hybrid attack conditions where multiple attack types and magnitudes coexist. These challenges are more pronounced in time-series regression tasks and motivate systematic solutions capable of handling multiple and hybrid adversarial situations.
In this context, we propose the Distribution-driven Perturbation-Adaptive Defense (DPAD), a new framework that addresses multi-type and multi-strength hybrid adversarial attacks in UAV time-series regression tasks. The DPAD framework comprises two stages: (1) a front-end perturbation strength predictor that estimates the perturbation strength of a given sample (e.g., ε ∈ {0.01, 0.05, 0.1}), and (2) a back-end dynamic defense mechanism that chooses or adjusts appropriate sub-models based on the predicted perturbation strength. To improve prediction stability and interpretability, the front-end leverages Gaussian Mixture Models (GMMs) [29] to model the input and output distributions, and the log-likelihood and responsibility values from the GMMs are provided as feature-expansion inputs for the perturbation predictor and the back-end decision module. This distribution-driven design enables the model to distinguish “normal distribution patterns” from “anomalous perturbation patterns” and hence enhances its ability to handle various adversarial perturbations in complex environments. The main contributions of this paper are as follows:
(1)
We design a Distribution-driven Perturbation-Adaptive Defense (DPAD) framework tailored for UAV time-series regression tasks. The framework efficiently responds to multi-type and multi-strength hybrid adversarial attacks via front-end perturbation prediction and dynamic scheduling of back-end layered defense sub-models.
(2)
We systematically integrate GMM-based log-likelihood and responsibility values into the perturbation discrimination and defense decision process. This, combined with the perturbation strength predictor, significantly enhances the stability of perturbation strength prediction and the interpretability of defense strategies.
(3)
We construct a multi-method, multi-strength hybrid attack benchmark on the UAV_Delivery [30] dataset and compare DPAD with the original model, EAT, and ETR (EAT combined with TRADES [27]) models. The results demonstrate that DPAD reduces the average MSE by approximately 80% under hybrid adversarial samples while maintaining accuracy on clean samples, with an inference time of about 2.744 ms per sample, balancing robustness and near-real-time performance.
Compared with representative methods such as EAT and ETR, which rely on a single robust model trained on a fixed combination of clean and adversarial data or with divergence regularization and often lose accuracy on clean samples as adversarial strength increases, DPAD adopts a fundamentally different, distribution-driven, sample-wise adaptive design. By leveraging GMM-based input–output features to enhance perturbation discriminability, DPAD can accurately predict perturbation strength and dynamically select an appropriate defense sub-model, maintaining clean sample accuracy while significantly improving robustness against hybrid adversarial attacks.
The key notations used throughout this article are listed in Table 1.

2. Related Methods

2.1. Adversarial Attacks

2.1.1. FGSM

The Fast Gradient Sign Method (FGSM) [9] is an early and computationally efficient adversarial attack strategy. It perturbs inputs in the direction of the gradient of the loss with respect to the input:
x_adv = x + ε · sign(∇_x L(x, y)),
where x is the clean input, ϵ controls perturbation magnitude, and L denotes the prediction loss. FGSM requires only a single gradient calculation, making it fast but often less effective against models with smooth decision boundaries or adversarial training.
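As a concrete illustration, the single-step FGSM update can be sketched in NumPy against a toy linear regressor (an illustrative stand-in for the UAV model; for a DNN the input gradient would come from autodiff, and all values here are hypothetical):

```python
import numpy as np

def fgsm_attack(x, y, w, eps):
    """One-step FGSM against a toy linear regressor f(x) = w @ x with
    squared-error loss L = (w @ x - y)^2. The input gradient is analytic:
    dL/dx = 2 * (w @ x - y) * w (a DNN would obtain it via autodiff)."""
    grad = 2.0 * (w @ x - y) * w
    return x + eps * np.sign(grad)

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
y = w @ x - 0.5                      # nonzero residual -> nonzero gradient
x_adv = fgsm_attack(x, y, w, eps=0.1)
clean_loss = (w @ x - y) ** 2        # 0.25
adv_loss = (w @ x_adv - y) ** 2      # loss grows after the attack
```

Even with ε = 0.1, the loss roughly triples here, illustrating how a small sign-aligned perturbation can inflate regression error.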

2.1.2. CW

The Carlini & Wagner (CW) attack [11] formulates adversarial generation as a constrained optimization problem, minimizing perturbation size while forcing output deviation:
min_δ ‖δ‖₂ + c · L(x + δ, y),
where δ denotes the perturbation, ‖δ‖₂ is its ℓ₂-norm, and c is a balancing coefficient. CW is highly effective, producing subtle yet strong perturbations, but its reliance on iterative optimization leads to high computational cost, limiting its practicality for large-scale or real-time scenarios.

2.1.3. BIM

The Basic Iterative Method (BIM) [25] extends FGSM by applying it iteratively with a small step size α, constraining the perturbation within an ε-ball:
x_adv^(t+1) = Clip_{x,ε}( x_adv^(t) + α · sign(∇_x L(x_adv^(t), y)) ).
BIM generates stronger perturbations than FGSM by gradual refinement. However, as it still relies on the gradient sign, it may fail to fully exploit adversarial directions in high-dimensional regression spaces.
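A minimal NumPy sketch of BIM on the same kind of toy linear regressor (illustrative only, not the paper's model) makes the per-step clipping into the ε-ball explicit:

```python
import numpy as np

def bim_attack(x, y, w, eps, alpha, steps):
    """BIM on a toy linear regressor f(x) = w @ x: iterate the FGSM step
    with step size alpha, clipping each iterate into the l_inf eps-ball
    around the clean input x (the Clip_{x,eps} operator)."""
    x_adv = x.copy()
    for _ in range(steps):
        grad = 2.0 * (w @ x_adv - y) * w       # analytic input gradient
        x_adv = x_adv + alpha * np.sign(grad)  # FGSM-style step
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
y = w @ x - 0.5
x_adv = bim_attack(x, y, w, eps=0.1, alpha=0.03, steps=10)
```

After enough steps, the iterate saturates at the corner of the ε-ball that maximizes the loss for this convex toy problem.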

2.1.4. PGD

Projected Gradient Descent (PGD) [10] generalizes BIM by explicitly projecting each iterative update onto the allowed perturbation set:
x_adv^(t+1) = Proj_{B∞(x,ε)}( x_adv^(t) + α · sign(∇_x L(x_adv^(t), y)) ),
where B∞(x, ε) is the ℓ∞-ball centered at x. PGD is widely regarded as a strong first-order adversary and serves as a benchmark for adversarial robustness. For regression, it effectively induces controlled but substantial deviations, though at increased computational expense.
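The random start and projection can be sketched as follows (again on a toy linear regressor; the key point is that projection onto the ℓ∞-ball reduces to elementwise clipping):

```python
import numpy as np

def pgd_attack(x, y, w, eps, alpha, steps, rng):
    """PGD on a toy linear regressor f(x) = w @ x: a uniform random start
    inside the l_inf eps-ball, then iterated sign-gradient steps; the
    projection Proj_{B_inf(x, eps)} is elementwise clipping."""
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)  # random start
    for _ in range(steps):
        grad = 2.0 * (w @ x_adv - y) * w
        x_adv = np.clip(x_adv + alpha * np.sign(grad), x - eps, x + eps)
    return x_adv

rng = np.random.default_rng(42)
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
y = w @ x - 0.5
x_adv = pgd_attack(x, y, w, eps=0.1, alpha=0.03, steps=10, rng=rng)
```

For this convex toy loss, PGD converges to the same loss-maximizing corner of the ball as BIM regardless of the random start.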

2.1.5. APGD

Auto-PGD (APGD) [26] further improves PGD by incorporating adaptive step sizes and momentum, enhancing both convergence and attack strength. Under an ℓ₂-norm constraint, the update rule is
x_adv^(t+1) = Proj_{B₂(x,ε)}( x_adv^(t) + α_t · ∇_x L(x_adv^(t), y) / ‖∇_x L(x_adv^(t), y)‖₂ ),
where α_t is a time-varying adaptive step size. APGD achieves high attack success rates, particularly for non-linear regression models, though it requires more complex implementation and tuning.

2.2. Adversarial Defense Methods

To mitigate adversarial vulnerability, several defense strategies have been developed, ranging from adversarial training to trade-off-based optimization.

2.2.1. Adversarial Training

Adversarial Training (AT) [10] improves robustness by solving a min–max optimization problem:
min_θ E_{(x,y)∼D} [ max_{δ∈S} L(f_θ(x + δ), y) ],
where (x, y) ∼ D denotes a sample from the training data distribution, and δ ∈ S is a perturbation constrained within a predefined set S.
By exposing the model to adversarial samples during training, AT achieves strong robustness against known perturbations. However, it increases training cost and often reduces accuracy on clean data, particularly in regression tasks sensitive to output precision.
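A toy sketch of this min–max loop for a linear regressor, with a one-step FGSM surrogate for the inner maximization (function names, data, and hyperparameters are all illustrative assumptions, not the paper's setup):

```python
import numpy as np

def adversarial_training(X, Y, eps, lr, epochs, seed=0):
    """Sketch of min-max adversarial training for a linear regressor
    f(x) = w @ x. Inner max: a one-step FGSM surrogate for the worst-case
    perturbation; outer min: plain SGD on w over the perturbed samples."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(epochs):
        for x, y in zip(X, Y):
            g_x = 2.0 * (w @ x - y) * w          # input gradient
            x_adv = x + eps * np.sign(g_x)       # inner maximization (FGSM)
            g_w = 2.0 * (w @ x_adv - y) * x_adv  # weight gradient on adv sample
            w -= lr * g_w                        # outer minimization
    return w

rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5])
X = rng.uniform(-1.0, 1.0, size=(50, 3))
Y = X @ w_true
w_rob = adversarial_training(X, Y, eps=0.05, lr=0.05, epochs=50)
clean_mse = float(np.mean((X @ w_rob - Y) ** 2))
```

The trained weights fit the clean data closely while having been optimized against perturbed inputs; the residual clean-data error reflects the robustness cost noted above.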

2.2.2. Ensemble Adversarial Training

Ensemble Adversarial Training (EAT) [20] diversifies robustness by incorporating perturbations from multiple models. Two main variants exist: (1) adversarial sample generation from multiple pre-trained models, and (2) aggregation of predictions from independently trained sub-models. The first variant is adopted in this work, where clean and adversarial examples are mixed to balance accuracy and robustness:
min_θ E_{(x,y)∼D} [ η · L(f_θ(x), y) + (1 − η) · L(f_θ(x + δ), y) ],
with η controlling the trade-off. While EAT strengthens defense against diverse perturbations, it introduces higher training and inference complexity.
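The mixed clean/adversarial objective can be sketched as a single parameter update (toy linear model; a lone FGSM generator stands in for the ensemble of generators, so this is a simplified assumption, not the full EAT procedure):

```python
import numpy as np

def eat_step(w, x, y, eta, eps, lr):
    """One EAT-style update on a toy linear regressor: descend the mixed
    loss eta * L(clean) + (1 - eta) * L(adv), with the adversarial point
    produced by a single (FGSM) generator for brevity."""
    x_adv = x + eps * np.sign(2.0 * (w @ x - y) * w)
    g = (eta * 2.0 * (w @ x - y) * x
         + (1.0 - eta) * 2.0 * (w @ x_adv - y) * x_adv)
    return w - lr * g

rng = np.random.default_rng(7)
w_true = np.array([1.0, -2.0, 0.5])
X = rng.uniform(-1.0, 1.0, size=(40, 3))
Y = X @ w_true
w = np.zeros(3)
for _ in range(60):
    for x, y in zip(X, Y):
        w = eat_step(w, x, y, eta=0.5, eps=0.05, lr=0.05)
clean_mse = float(np.mean((X @ w - Y) ** 2))
```

Here η = 0.5 weights clean and adversarial losses equally; sweeping η trades clean accuracy against robustness, as in the experiments later in the paper.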

2.2.3. TRADES

The TRadeoff-inspired Adversarial DEfense via Surrogate-loss minimization (TRADES) method [27] formalizes the robustness–accuracy trade-off by penalizing discrepancies between outputs on clean and adversarial inputs:
L_TRADES = E_{(x,y)∼D} [ L(f_θ(x), y) + β · KL( f_θ(x) ‖ f_θ(x_adv) ) ],
where β is a balancing hyperparameter. TRADES often achieves a better compromise between clean accuracy and robustness, though it requires careful tuning and incurs additional training cost.

2.3. Gaussian Mixture Model

The Gaussian Mixture Model (GMM) [29] is a probabilistic generative model that expresses data distribution as a weighted sum of K Gaussian components:
p(x) = Σ_{k=1}^{K} π_k · N(x | μ_k, Σ_k),
where π_k represents the mixture weights and N(x | μ_k, Σ_k) denotes a Gaussian density with mean μ_k and covariance Σ_k.
GMMs are trained using the Expectation-Maximization (EM) algorithm, and are widely applied in clustering, density estimation, and anomaly detection. While effective for capturing multi-modal distributions, GMMs are sensitive to initialization and the number of components, and training can be computationally expensive in high-dimensional spaces.
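The two quantities DPAD later consumes from a fitted GMM, the log-likelihood log p(x) and the per-component responsibilities, can be computed from fixed parameters as follows (NumPy sketch with diagonal covariances; the EM fitting itself, e.g. via scikit-learn's GaussianMixture, is omitted, and the example parameters are made up):

```python
import numpy as np

def gmm_log_likelihood_and_resp(x, pis, mus, sigmas):
    """Log-likelihood log p(x) and responsibilities for a diagonal-covariance
    GMM with fixed parameters pis (weights), mus (means), sigmas (variances)."""
    log_comp = []
    for pi, mu, var in zip(pis, mus, sigmas):
        # log of pi_k * N(x | mu_k, diag(var_k))
        ll = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        log_comp.append(np.log(pi) + ll)
    log_comp = np.array(log_comp)
    m = log_comp.max()
    log_px = m + np.log(np.exp(log_comp - m).sum())  # stable log-sum-exp
    resp = np.exp(log_comp - log_px)                 # responsibilities sum to 1
    return log_px, resp

# Two 1-D components, far apart: a point at the first mean gets resp[0] ~ 1.
pis = [0.5, 0.5]
mus = [np.array([0.0]), np.array([5.0])]
sigmas = [np.array([1.0]), np.array([1.0])]
log_px, resp = gmm_log_likelihood_and_resp(np.array([0.0]), pis, mus, sigmas)
```

The log-sum-exp trick avoids underflow when components are far from the query point, which matters for the low-likelihood adversarial inputs DPAD is designed to flag.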

2.4. Problem Definition

Consider a time-series regression model
f_θ : ℝ^d → ℝ^m,
where for an input x with ground truth y, the prediction is
ŷ = f_θ(x), (x, y) ∼ D.
Adversarial perturbations δ can modify the input to generate adversarial samples:
x_adv = x + δ, ‖δ‖_p ≤ ε,
with the objective of significantly enlarging prediction error:
L(f_θ(x_adv), y) ≫ L(f_θ(x), y).
In realistic settings, adversaries often combine multiple attack types and perturbation magnitudes to construct hybrid adversarial datasets:
A_hyb = { x + δ | δ ∈ G(x, ε, AttackType) }.
Thus, the defense objective is to minimize prediction error under such hybrid conditions:
min_f E_{(x,y)∼D, x_adv∈A_hyb} [ L(f(x_adv), y) ].
Hybrid adversarial attacks introduce several challenges to time-series regression models:
(1)
Temporal accumulation: Even small perturbations at early steps may cascade over time, resulting in magnified deviations and degraded long-term predictions.
(2)
Heterogeneous perturbations: Attacks that blend different forms and magnitudes of perturbations become highly complex and destructive, significantly complicating both detection and defense.
(3)
Robustness–accuracy trade-off: Increasing robustness tends to reduce accuracy on unperturbed sequences, which presents a critical challenge for maintaining reliable regression performance.

3. Proposed Method

In this work, we introduce the Distribution-driven Perturbation-Adaptive Defense (DPAD) framework, designed to improve the robustness of UAV time-series regression models under multi-type and multi-strength hybrid adversarial attacks. As shown in Figure 1, the DPAD framework unifies probabilistic distribution modeling, feature augmentation, perturbation strength prediction, and hierarchical sub-model defense into an end-to-end adaptive pipeline.
The overall architecture consists of four primary components, each addressing a distinct stage of the defense process:
(1)
Modeling credible input–output distributions: Input and output spaces are modeled using Gaussian Mixture Models (GMMs) to represent the statistical characteristics of clean data and provide distributional references for later modules.
(2)
Training defense sub-models for varying perturbation strengths: A base model is trained on clean samples, and multiple sub-models are trained on adversarial data generated under different perturbation strengths.
(3)
Feature augmentation and perturbation strength prediction: Log-likelihood and responsibility features derived from the GMMs are combined with the base model outputs to train a predictor that estimates the perturbation strength of each input sample.
(4)
Perturbation-adaptive defense: During inference, the framework selects an appropriate sub-model according to the predicted perturbation level to generate corrected outputs.
In summary, DPAD integrates distribution modeling, multi-strength sub-model training, feature-based perturbation prediction, and adaptive defense selection into a unified framework that achieves robustness and adaptability in UAV time-series regression tasks.

3.1. Modeling of Input–Output Credible Distribution

Adversarial perturbations are often imperceptible in raw input space, making it difficult to distinguish clean from adversarial samples or to quantify perturbation strength. To address this, we introduce probabilistic input–output distribution modeling using GMMs.
Input distribution modeling:
p_in(x) = Σ_{k=1}^{K} π_k · N(x | μ_k, Σ_k),
Output distribution modeling:
p_out(y) = Σ_{k=1}^{K} π_k · N(y | μ_k, Σ_k),
where N(· | μ_k, Σ_k) denotes the probability density function of the k-th Gaussian component, with mean vector μ_k and covariance matrix Σ_k. The coefficient π_k represents the mixture weight of the k-th component, subject to Σ_{k=1}^{K} π_k = 1.

3.2. Training Defense Sub-Models for Different Perturbation Strengths

To explicitly defend against attacks of varying perturbation magnitudes, we train a set of defense sub-models {f_k}, each tailored to a perturbation level ε_k, on top of the base model f_0.
Base model training (clean samples only):
f_0 = argmin_θ (1/N_0) Σ_{i=1}^{N_0} ‖f_θ(x_i) − y_i‖²,
where ( x i , y i ) denotes clean samples from the training set.
Defense sub-models (adversarial data):
f_k = argmin_θ (1/N_k) Σ_{i=1}^{N_k} ‖f_θ(x_i^adv(ε_k)) − y_i‖²,
where x_i^adv(ε_k) denotes adversarial samples generated under perturbation strength ε_k using multiple attack methods (e.g., FGSM, BIM, PGD, CW, APGD).
This strategy ensures that each sub-model is optimized for a specific perturbation level, thereby enhancing robustness under hybrid attack scenarios.

3.3. Training of the Perturbation Strength Prediction Model

To quantify attack severity, we design a prediction model g_ψ to estimate perturbation strength from augmented features.
The extended feature vector is constructed as
Z = [ x, ŷ, log p_in(x), resp_in(x), log p_out(ŷ), resp_out(ŷ) ],
where ŷ = f_0(x) denotes the output of the base model, and resp(·) represents the responsibility values obtained from the corresponding GMMs.
The prediction network then maps Z to an estimated perturbation level according to
g_ψ(Z_i) → ε̂_i,
and is trained by Mean Squared Error (MSE) loss:
L_MSE = (1/N) Σ_{i=1}^{N} (ε_i − ε̂_i)².
Through this design, the model learns to approximate perturbation severity in a continuous manner, providing a quantitative basis for adaptive sub-model selection during inference.
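A sketch of the augmented-feature construction feeding this predictor (NumPy; `f0` and the two GMM callables are illustrative placeholders for the trained base model and fitted GMMs, and the dimensions are made up for the shape check):

```python
import numpy as np

def build_augmented_feature(x, f0, in_gmm, out_gmm):
    """Builds Z = [x, y_hat, log p_in(x), resp_in(x), log p_out(y_hat),
    resp_out(y_hat)]. f0 is the base model; in_gmm/out_gmm are callables
    returning (log-likelihood, responsibility vector), standing in for
    fitted GMMs."""
    y_hat = f0(x)
    ll_in, resp_in = in_gmm(x)
    ll_out, resp_out = out_gmm(y_hat)
    return np.concatenate([x, y_hat, [ll_in], resp_in, [ll_out], resp_out])

# Shape check with dummy components: d = 3 inputs, m = 2 outputs,
# 2 GMM components on each side -> len(Z) = 3 + 2 + 1 + 2 + 1 + 2 = 11.
f0 = lambda x: x[:2] * 2.0
dummy_gmm = lambda v: (-1.0, np.array([0.7, 0.3]))
Z = build_augmented_feature(np.array([0.1, 0.2, 0.3]), f0, dummy_gmm, dummy_gmm)
```

The resulting vector Z is what g_ψ consumes; in the paper's setup the GMM-derived entries are exactly the log-likelihood and responsibility features described above.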

3.4. Inference and Application of the DPAD Framework

During inference, the input sequence {x_t^hyb}, t = 1, …, T, may contain both clean and adversarial samples with varying perturbation strengths.
For each sample x_t, the base model produces ŷ_t = f_0(x_t). Input–output log-likelihoods and responsibilities are then extracted via the GMMs, forming the augmented feature vector:
Z_t = [ x_t, ŷ_t, log p_in(x_t), resp_in(x_t), log p_out(ŷ_t), resp_out(ŷ_t) ].
The augmented feature Z_t is then passed to the perturbation prediction network g_ψ to estimate the perturbation strength:
ε̂_t = g_ψ(Z_t).
The predicted strength is then discretized to select the appropriate defense sub-model f_{k̂_t}:
k̂_t = 0, if ε̂_t < 0.005; 0.01, if 0.005 ≤ ε̂_t < 0.025; 0.05, if 0.025 ≤ ε̂_t < 0.07; 0.1, if ε̂_t ≥ 0.07.
Here, we map the predicted perturbation strength to discrete bins (0, 0.01, 0.05, 0.1). Since all features in the dataset are normalized to the range [−1, 1], these perturbation levels roughly correspond to 1%, 5%, and 10% changes in the original data. These thresholds align with the sub-models trained in the subsequent experiments and are chosen as a rough guideline based on perturbation magnitude. They can be adjusted in practical applications to optimize the performance of DPAD.
Finally, the selected sub-model receives the augmented feature vector and outputs the defended result:
y_t^def = f_{k̂_t}(Z_t).
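The discretization rule maps directly to a small selection routine (Python sketch; the dummy sub-model table and names are illustrative, with level 0.0 standing for routing back to the base model):

```python
def select_submodel(eps_hat, submodels):
    """Maps a predicted perturbation strength to one of the discrete bins
    (thresholds as in the discretization rule above; adjustable in practice)
    and returns the matching defense sub-model."""
    if eps_hat < 0.005:
        level = 0.0          # near-clean input -> base model
    elif eps_hat < 0.025:
        level = 0.01
    elif eps_hat < 0.07:
        level = 0.05
    else:
        level = 0.1
    return level, submodels[level]

# Dummy sub-model table keyed by training perturbation level (illustrative).
submodels = {0.0: "f0", 0.01: "f_0.01", 0.05: "f_0.05", 0.1: "f_0.1"}
level, model = select_submodel(0.03, submodels)
```

Because the predictor outputs a continuous ε̂, this binning step is the only place discrete thresholds enter, which is why adjusting them (as in the sensitivity study later) is cheap.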

4. Experiments and Results

4.1. Experimental Setup

4.1.1. Environment Configuration

The experiments were conducted under the following hardware and software configurations shown in Table 2.

4.1.2. Dataset Configuration

The UAV Delivery dataset [30] simulates drone flight trajectories under varying wind speeds, velocities, and altitudes, with folder names formatted as “droneSpeed_windSpeed_altitude” to distinguish different conditions. The original UAV Delivery dataset contains independent folders representing different flight conditions, without predefined training/test splits. To evaluate the generalization ability of the model, we divided the dataset into two candidate datasets based on flight conditions, ensuring they are mutually independent:
  • Candidate Dataset 1: folders 10_5_100, 15_10_100, 20_5_100, 20_10_100 (70%), and 20_10_200.
  • Candidate Dataset 2: folders 20_10_100 (30%) and 20_10_180.
  • Note: The 20_10_100 folder was split 70%/30% into the two candidate datasets, with UAV IDs split at 13,864.
As summarized in Table 3, the final training set was constructed from Candidate Dataset 1 by selecting 100 UAVs via evenly spaced UAV IDs, yielding 2345 independent flight records. The test set includes 100 UAVs from Candidate Dataset 2 with the same IDs as in the training set, plus an additional 50 UAVs selected from Candidate Dataset 1 (IDs selected evenly between 1 and 51, excluding overlap with training IDs), resulting in 1830 flight records. This split ensures that the training and test sets are relatively independent in terms of UAVs and partially in terms of operating conditions, while the test set includes both known and unseen conditions to fairly assess the model’s generalization ability.
All features in the dataset were normalized to the range [−1, 1]. The final dataset overview, including the number of samples and parameter descriptions, is shown in Table 4.

4.1.3. GMM Parameter Configuration

Gaussian Mixture Models (GMMs) were trained separately on the inputs and outputs of the UAV_Split training set to augment the distributional features of adversarial samples. To ensure appropriate parameter selection for the GMMs, the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) were employed, with smaller values indicating better model fit. Considering the base model configuration (Section 4.1.4) and the dataset feature dimensions (Table 4), the model input dimension is 14 and the output dimension is 9. Multiple GMMs with varying numbers of components were trained: for the input-side GMM, the candidate range was K = 1, 2, …, 16; for the output-side GMM, the candidate range was K = 1, 2, …, 12. The corresponding AIC and BIC curves as functions of the component number are plotted in Figure 2.
The GMMs with 15 and 12 components were selected for the input and output of the UAV_Split training set, respectively, as they achieved the minimum AIC and BIC values, and were subsequently used for feature augmentation.

4.1.4. Base Model Configuration

The UAV performance model is defined as x_s = F_uav(w, _x_s). In this study, the DNN model is trained on the training splits of UAV_Split as the base model, which also serves as the target model for adversarial attacks. The network architecture of the base model is summarized in Table 5, with ReLU used as the activation function. The training hyperparameters are listed in Table 6.

4.1.5. Adversarial Example Generation Configuration

To comprehensively evaluate the performance of the proposed framework, we adopt several widely used adversarial attack methods, including FGSM, CW, BIM, PGD, and APGD. Based on the base models described in Section 4.1.4 and the datasets introduced in Section 4.1.2, adversarial datasets with varying perturbation magnitudes are generated.
Table 7 provides the full configuration of all adversarial attacks used to construct the adversarial dataset. For clarity, we explicitly list the norm constraint adopted by each method (ℓ∞ for FGSM, BIM, and PGD; ℓ₂ for CW and APGD), as ε values are not directly comparable across different norms. We further indicate whether each attack uses random initialization (only PGD employs a random start by adding uniform noise within the ε-ball), and confirm that gradients are always taken with respect to the input. These details ensure transparent reproducibility of the adversarial data generation process.

4.1.6. Sub-Model Configuration

The network architecture of all defense sub-models followed that of the base model described in Section 4.1.4. For each perturbation strength (0.01, 0.05, 0.1), a corresponding defense sub-model was trained on a hybrid training set comprising adversarial samples generated by multiple attack methods (FGSM, CW, BIM, PGD, and APGD) as detailed in Section 4.1.5. The hyperparameter settings for sub-model training are summarized in Table 8, with early stopping disabled.

4.1.7. Perturbation Strength Prediction Model Training

To enable accurate estimation of perturbation strength in hybrid adversarial samples, we designed and trained a multilayer perceptron (MLP)-based perturbation strength prediction model. The model takes as input the augmented feature vectors constructed by concatenating input/output GMM-derived statistics, and outputs a continuous regression value representing perturbation strength. Unlike conventional discrete classification approaches, this design adopts continuous prediction followed by interval partitioning based on predefined thresholds to map the output to discrete strength levels. This strategy mitigates ambiguity at class boundaries and provides a smoother characterization of perturbation magnitude.
For dataset construction, clean samples were assigned a label of ε = 0, and adversarial samples were generated using FGSM, BIM, PGD, CW, and APGD at perturbation strengths ε ∈ {0.01, 0.05, 0.1}. The corresponding ε values were used as supervision signals, and we followed standard practice by optimizing the network with MSE loss. As summarized in Table 9, all hidden layers adopted ReLU activations, while a Sigmoid function was applied at the output layer. The detailed hyperparameter settings used for model training are provided in Table 10.

4.1.8. Evaluation Metrics

The Mean Squared Error (MSE) measures the average squared deviation between predicted and true values and directly reflects the model’s prediction accuracy. It is formulated as
MSE = (1/N) Σ_{i=1}^{N} (y_i − ŷ_i)²,
where N denotes the total number of samples, y_i is the ground-truth value, and ŷ_i represents the corresponding model prediction. A smaller MSE indicates that the model produces predictions closer to the actual observations.
The Coefficient of Determination ( R 2 ) represents the proportion of variance in the dependent variable that can be explained by the model, providing a measure of its overall fit. It is computed as
R² = 1 − Σ_{i=1}^{N} (y_i − ŷ_i)² / Σ_{i=1}^{N} (y_i − ȳ)²,
where ȳ is the mean of the observed values. Values of R² approaching 1 suggest that the model accounts for most of the variability in the data and therefore achieves a strong goodness of fit.
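Both metrics can be computed directly (NumPy sketch; the sample values are arbitrary):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error over all samples."""
    return float(np.mean((y_true - y_pred) ** 2))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = float(np.sum((y_true - y_pred) ** 2))
    ss_tot = float(np.sum((y_true - np.mean(y_true)) ** 2))
    return 1.0 - ss_res / ss_tot

y_true = np.array([1.0, 2.0, 3.0, 4.0])
perfect = r2(y_true, y_true)                      # perfect fit
baseline = r2(y_true, np.full(4, y_true.mean()))  # mean predictor
```

A perfect predictor yields R² = 1, while always predicting the mean yields R² = 0, which is the sense in which R² measures explained variance.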

4.2. Experimental Results and Analysis

4.2.1. Robustness and Computational Efficiency Comparison

To examine the robustness of DPAD in time-series regression, we carried out a series of comparative experiments against the baseline regression model (Perf_UAV_DNN), as well as two representative defense strategies, EAT and ETR. Both EAT and ETR adopted the same training dataset as DPAD, containing a balanced mix of clean and adversarial samples. All models shared an identical network backbone with the base model (Section 4.1.4) and used training configurations consistent with the DPAD sub-models (Section 4.1.6). Specifically, five EAT variants were trained with adversarial strengths η ∈ {0.1, 0.3, 0.5, 0.7, 0.9}, denoted as DNN_η^EAT, while another five ETR variants were trained under TRADES loss coefficients λ ∈ {1, 3, 5, 7, 9}, denoted as DNN_λ^ETR. To evaluate robustness under diverse adversarial conditions, we built a hybrid adversarial test set by randomly sampling adversarial examples generated with perturbation strengths ε ∈ {0.01, 0.05, 0.1} and attack types (FGSM, CW, BIM, PGD, APGD). Sampling followed the order of the original test set with a fixed random seed of 42, and the resulting hybrid adversarial set matches the size of the clean test set reported in Table 4.
Table 11 summarizes the evaluation results on both clean and hybrid adversarial test datasets, including the original model, DPAD, and all compared EAT (η ∈ {0.1, 0.3, 0.5, 0.7, 0.9}) and ETR (λ ∈ {1, 3, 5, 7, 9}) variants. All EAT and ETR models share the same backbone architecture, optimizer, training epochs, early stopping, and clean/adversarial data composition. The original model, while highly accurate on clean data, degrades significantly under hybrid adversarial perturbations and is therefore vulnerable. In contrast, DPAD maintains accuracy comparable to the original model on clean samples while substantially enhancing robustness against hybrid attacks, reducing the prediction error by about 80%. This indicates the effectiveness of DPAD in complex adversarial environments. To evaluate the sensitivity of the discretization thresholds, we slightly adjusted the boundaries in Equation (25) to 0.005/0.02/0.07. On the hybrid adversarial test set, DPAD then achieved an MSE of 1.826 × 10⁻³ and R² = 0.99465, and on the clean test set, an MSE of 0.77654 × 10⁻³ with R² = 0.99772. These results show that small changes to the thresholds cause only minor performance variations, indicating that the bin ranges can be tuned according to practical requirements to further improve DPAD’s overall performance.
Compared with EAT and ETR, therefore, DPAD achieves a better balance between robustness and generalization. EAT improves adversarial robustness at small parameter settings but incurs a significant loss in clean-data accuracy, and its performance degrades as the adversarial strength increases. Adding TRADES regularization in ETR improves robustness, yet ETR underperforms DPAD on both clean and adversarial samples. Clearly, the perturbation strength prediction and hierarchical sub-model defense mechanisms make DPAD better balanced between robustness and prediction accuracy than these state-of-the-art methods.
In addition, computational efficiency is critical for practical deployment. Table 12 compares the average inference time of the different models on the hybrid test set, which contains 250,034 samples. EAT and ETR achieve very low inference times suitable for real-time applications, whereas DPAD incurs higher overhead from the extra feature extraction, perturbation strength prediction, and hierarchical model selection. However, the resulting average inference time of 2.744 ms per sample remains within quasi-real-time requirements for UAV applications, indicating that the increased robustness of DPAD justifies its additional computational cost.
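A per-sample timing measurement of the kind underlying Table 12 can be sketched as follows; the `predict` callable is a stand-in for the full DPAD pipeline (feature extraction, strength prediction, and sub-model forward pass), not the paper's exact benchmarking harness.

```python
import time
import numpy as np

def mean_inference_time_ms(predict, batch, repeats=5):
    """Average wall-clock inference time per sample, in milliseconds.

    Timing the whole pipeline callable over several repeats and dividing
    by the batch size reproduces a per-sample overhead figure comparable
    to the 2.744 ms reported for DPAD.
    """
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        predict(batch)
        times.append(time.perf_counter() - t0)
    return 1000.0 * float(np.mean(times)) / len(batch)

# Toy stand-in for a model forward pass (14 inputs -> 9 outputs):
batch = np.random.rand(1024, 14)
per_sample_ms = mean_inference_time_ms(lambda b: b @ np.random.rand(14, 9), batch)
assert per_sample_ms >= 0.0
```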

4.2.2. Feature Augmentation Comparison

To assess the impact of distribution-driven feature augmentation, we evaluated the perturbation strength prediction model under three input configurations: (1) Z 1 = [ x ] , (2) Z 2 = [ x , y ^ ] , and (3) Z 3 = [ x , y ^ , log p in ( x ) , resp in ( x ) , log p out ( y ^ ) , resp out ( y ^ ) ] . The corresponding MSE and R 2 values are summarized in Table 13.
When only the raw inputs (Z1) are used, the model performs worst. Adding the model's own outputs (Z2) improves the fit to some degree, but the gain remains limited. Once the GMM-derived features, specifically the log-likelihood and responsibility values, are included (Z3), performance increases notably, with the MSE reduced to 0.81 × 10^-4 and R2 raised to 0.9434. These observations suggest that the distribution-based features provide complementary statistical information that helps the predictor capture perturbation intensity more accurately, underscoring the necessity of the distribution-driven design adopted in DPAD.
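The Z3 augmentation can be sketched with scikit-learn's GaussianMixture. The data and component counts below are synthetic stand-ins (the paper selects component counts via AIC/BIC, Figure 2), so the resulting feature dimension differs from the 52-dimensional predictor input of Table 9.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Stand-in training data: 14-dim inputs x and 9-dim model outputs y_hat,
# matching the base model's input/output widths (Table 5).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 14))
Y_train = rng.normal(size=(500, 9))
gmm_in = GaussianMixture(n_components=3, random_state=0).fit(X_train)
gmm_out = GaussianMixture(n_components=3, random_state=0).fit(Y_train)

def augment(x, y_hat):
    """Build the Z3 feature vector [x, y_hat, log p_in, resp_in, log p_out, resp_out]."""
    x, y_hat = np.atleast_2d(x), np.atleast_2d(y_hat)
    feats = [
        x, y_hat,
        gmm_in.score_samples(x)[:, None],      # log-likelihood under the input-side GMM
        gmm_in.predict_proba(x),               # component responsibilities (input side)
        gmm_out.score_samples(y_hat)[:, None], # log-likelihood under the output-side GMM
        gmm_out.predict_proba(y_hat),          # component responsibilities (output side)
    ]
    return np.hstack(feats)

z3 = augment(X_train[0], Y_train[0])
# 14 + 9 + 1 + 3 + 1 + 3 = 31 dimensions in this toy setting
assert z3.shape == (1, 31)
```

The augmented vector is what the perturbation strength predictor consumes; dropping the last four feature groups recovers the Z2 and Z1 configurations compared in Table 13.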

5. Discussion

The proposed DPAD framework effectively defends against multi-type, multi-strength hybrid adversarial attacks in UAV time-series regression tasks by integrating front-end perturbation strength prediction with back-end hierarchical sub-model defense. The experimental results show that DPAD reduces the average MSE by about 80% compared with the original model on hybrid adversarial samples, while achieving nearly the same prediction accuracy on clean samples. The key factor behind this performance is the GMM-based feature augmentation: incorporating the log-likelihood and responsibility values of the input–output distributions substantially enhances the feature representation for perturbation strength prediction, reducing its MSE from 4.53 × 10^-4 to 0.81 × 10^-4 and raising R2 from 0.6847 to 0.9434. This improvement indicates that GMM-based feature extraction not only strengthens the discriminative power of perturbation strength classification but also improves the precision of sub-model selection, enabling dynamic adaptation and fine-grained defense in complex adversarial environments.
Despite its effectiveness, DPAD still has limitations. Its inference time is relatively high, at about 2.744 ms per data point; although this is within the real-time control requirements of typical UAVs, it may cause latency bottlenecks in batch inference or multi-sensor fusion scenarios. Moreover, DPAD depends on training several sub-models and on GMM distributional assumptions; its generalization to unknown attack types and its robustness under high-dimensional, non-Gaussian data distributions require further validation. To alleviate the computational overhead, future work could explore lightweight surrogate models that approximate the GMM log-likelihood and responsibility computations, as well as batch-wise parallel processing of feature augmentation on modern GPUs, which can significantly reduce per-sample inference time without affecting defense performance. Additionally, the current evaluation uses a publicly available simulated UAV delivery dataset, which, while reflecting realistic flight trajectories under varying speeds, altitudes, and wind conditions, is still limited compared with real-world UAV operations. Future studies will collect real UAV delivery trajectories to further validate DPAD's performance under authentic operational conditions, enhancing the framework's applicability to practical deployment scenarios.
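The batch-wise GMM computation suggested above can be illustrated for the diagonal-covariance case. This is a sketch of the batching idea, not the on-board implementation; it is checked against scikit-learn's per-model scoring on synthetic data.

```python
import numpy as np
from scipy.special import logsumexp
from sklearn.mixture import GaussianMixture

def batched_gmm_loglik(X, weights, means, variances):
    """Vectorized log-likelihood of a diagonal-covariance GMM over a batch.

    Shapes: X (N, D), weights (K,), means (K, D), variances (K, D).
    A single vectorized pass over the batch replaces per-sample scoring,
    which is the kind of parallelism that would cut per-sample overhead.
    """
    diff = X[:, None, :] - means[None, :, :]  # (N, K, D)
    # Per-component log-density of a diagonal Gaussian, summed over dimensions.
    log_comp = -0.5 * np.sum(diff**2 / variances + np.log(2.0 * np.pi * variances), axis=-1)
    # Mixture log-likelihood via log-sum-exp over components.
    return logsumexp(np.log(weights) + log_comp, axis=1)  # (N,)

# Sanity check against scikit-learn on synthetic data:
rng = np.random.default_rng(1)
X = rng.normal(size=(64, 4))
gm = GaussianMixture(n_components=3, covariance_type="diag", random_state=0).fit(X)
ours = batched_gmm_loglik(X, gm.weights_, gm.means_, gm.covariances_)
assert np.allclose(ours, gm.score_samples(X), atol=1e-6)
```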
Future directions include jointly optimizing the perturbation prediction and hierarchical defense modules for low latency and high scalability. The GMM could be replaced by more expressive distribution modeling techniques, such as variational Bayesian methods [31] or deep energy-based models [32], to represent richer input–output feature relationships and potentially improve perturbation discrimination. Online/continual learning paradigms [33] offer another promising direction, allowing DPAD to evolve with changing adversarial patterns and maintain high performance in dynamic operational environments. Moreover, although the current design focuses primarily on adversarial robustness, future extensions of DPAD may incorporate privacy-preserving mechanisms. For example, perturbation prediction and distributional feature extraction can be performed directly on board UAVs to avoid transmitting raw sensor data, while techniques such as federated learning [34], differential privacy [35], or encrypted model inference [36] could be introduced to protect sensitive flight or environment information. Integrating these privacy-preserving strategies would broaden the applicability of DPAD to large-scale UAV and IoD systems where data confidentiality and mission security are critical.

6. Conclusions

This paper has proposed the Distribution-driven Perturbation-Adaptive Defense (DPAD) framework for UAV time-series regression under multi-type, multi-strength hybrid adversarial attacks. By combining perturbation strength prediction, hierarchical sub-model defense, and GMM-based input–output feature augmentation, DPAD has achieved dynamic adaptation to complex attacks while maintaining high prediction accuracy on clean data.
Experimental evaluations demonstrate that GMM-based feature augmentation significantly enhances predictive performance. In perturbation strength estimation, the MSE decreased from 4.53 × 10 4 to 0.81 × 10 4 , and R 2 improved from 0.685 to 0.943. Under hybrid adversarial samples, DPAD reduced the average MSE by approximately 80% compared with the base model, achieving 1.824 × 10 3 in MSE and R 2 = 0.995 , while maintaining almost identical accuracy on clean data (MSE: 7.511 × 10 4 ; R 2 : 0.998 ). In contrast, existing adversarial training approaches including EAT and ETR exhibited noticeably higher prediction errors.
We attribute the superior performance of DPAD to the use of GMM-based distributional features, which enhance perturbation strength estimation and improve sub-model selection for hierarchical defense. Although additional processing is introduced by feature extraction and model selection, the framework achieves an average inference time of 2.744 ms per data point, which is sufficient for near-real-time UAV control.
In summary, DPAD provides a robust and scalable defense framework for security-critical time-series applications. It achieves a practical balance among robustness, accuracy, and computational efficiency in complex and adversarial operational environments.

Author Contributions

Conceptualization, B.X. and Z.L.; methodology, B.X. and Z.L.; software, B.X. and Z.D.; validation, B.X., Z.D. and X.H.; investigation, B.X., K.H. and H.Z.; resources, B.X. and Z.L.; data curation, B.X., J.W. and Y.L.; writing—original draft preparation, B.X., Z.D. and Y.L.; writing—review and editing, B.X., K.H. and X.H.; visualization, B.X. and Y.Z.; supervision, Z.L. and X.L.; project administration, Z.L. and X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The UAV Delivery dataset used in this study is publicly available online at https://drive.google.com/drive/folders/18qwp2zaRoBjtkId5sz83vArWgZxrZLCi (accessed on 4 March 2025).

DURC Statement

Current research is limited to the field of artificial intelligence security and trustworthy machine learning, focusing on adversarial defense frameworks that enhance the robustness and reliability of AI models. This work is beneficial for improving the safety of AI applications in critical domains such as autonomous systems, healthcare, and aerospace, and does not pose a threat to public health or national security. The authors acknowledge the dual-use potential of research involving adversarial examples and confirm that all necessary precautions have been taken to prevent potential misuse. As an ethical responsibility, the authors strictly adhere to relevant national and international laws and guidelines related to Dual-Use Research of Concern (DURC). The authors advocate for responsible dissemination, transparency, and regulatory compliance to ensure the research is used solely for defensive and beneficial purposes.

Conflicts of Interest

Author Haolin Zhu was employed by Control System Research Company of AECC. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Betti Sorbelli, F. UAV-Based Delivery Systems: A Systematic Review, Current Trends, and Research Challenges. ACM J. Auton. Transp. Syst. 2024, 1, 1–40. [Google Scholar] [CrossRef]
  2. Meng, W.; Yang, Y.; Zang, J.; Li, H.; Lu, R. DTUAV: A novel cloud–based digital twin system for unmanned aerial vehicles. SIMULATION 2022, 99, 69–87. [Google Scholar] [CrossRef]
  3. Souanef, T.; Al-Rubaye, S.; Tsourdos, A.; Ayo, S.; Panagiotakopoulos, D. Digital Twin Development for the Airspace of the Future. Drones 2023, 7, 484. [Google Scholar] [CrossRef]
  4. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  5. Yang, N.K.; San, K.T.; Chang, Y.S. A Novel Approach for Real Time Monitoring System to Manage UAV Delivery. In Proceedings of the 2016 5th IIAI International Congress on Advanced Applied Informatics (IIAI-AAI), Kumamoto, Japan, 10–14 July 2016; pp. 1054–1057. [Google Scholar] [CrossRef]
  6. Cai, Z.; Liu, Z.; Kou, L. Reliable UAV Monitoring System Using Deep Learning Approaches. IEEE Trans. Reliab. 2022, 71, 973–983. [Google Scholar] [CrossRef]
  7. Costa, J.C.; Roxo, T.; Proença, H.; Inácio, P.R.M. How Deep Learning Sees the World: A Survey on Adversarial Attacks & Defenses. IEEE Access 2024, 12, 61113–61136. [Google Scholar] [CrossRef]
  8. Xu, B.; Liu, Z.; Zhu, H.; Dong, B.; Zhao, B.; Yan, B.; Wei, J. A Novel Adversarial Attack Method for Time-Series Regression Models in IIoT-Based Digital Twins. IEEE Internet Things J. 2025, 12, 29278–29290. [Google Scholar] [CrossRef]
  9. Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and Harnessing Adversarial Examples. arXiv 2014, arXiv:1412.6572. [Google Scholar] [CrossRef]
  10. Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards Deep Learning Models Resistant to Adversarial Attacks. arXiv 2017, arXiv:1706.06083. [Google Scholar] [CrossRef]
  11. Carlini, N.; Wagner, D. Towards Evaluating the Robustness of Neural Networks. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–26 May 2017; pp. 39–57. [Google Scholar] [CrossRef]
  12. Wang, Z.; Gao, Q.; Xu, J.; Li, D. A Review of UAV Power Line Inspection. In Advances in Guidance, Navigation and Control: Proceedings of 2020 International Conference on Guidance, Tianjin, China, 23–25 October 2020; Springer: Singapore, 2021; pp. 3147–3159. [Google Scholar] [CrossRef]
  13. Felix, O. Securing the skies: A comprehensive survey on internet of drones security challenges and solutions. Architecture 2023, 45, 46. [Google Scholar] [CrossRef]
  14. Khan, S. Robustness, Resilience, and Scalability of State Estimation Algorithms. Ph.D. Thesis, Purdue University Graduate School, West Lafayette, Indiana, 2023. [Google Scholar] [CrossRef]
  15. Zhao, M.; Zhang, L.; Ye, J.; Lu, H.; Yin, B.; Wang, X. Adversarial Training: A Survey. arXiv 2024, arXiv:2410.15042. [Google Scholar] [CrossRef]
  16. Xie, C.; Wu, Y.; van der Maaten, L.; Yuille, A.; He, K. Feature Denoising for Improving Adversarial Robustness. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 501–509. [Google Scholar]
  17. Raghunathan, A.; Steinhardt, J.; Liang, P. Certified Defenses against Adversarial Examples. arXiv 2018, arXiv:1801.09344. [Google Scholar] [CrossRef]
  18. Bai, T.; Luo, J.; Zhao, J.; Wen, B.; Wang, Q. Recent Advances in Adversarial Training for Adversarial Robustness. arXiv 2021, arXiv:2102.01356. [Google Scholar] [CrossRef]
  19. Zhao, W.; Alwidian, S.; Mahmoud, Q.H. Adversarial Training Methods for Deep Learning: A Systematic Review. Algorithms 2022, 15, 283. [Google Scholar] [CrossRef]
  20. Tramèr, F.; Kurakin, A.; Papernot, N.; Goodfellow, I.; Boneh, D.; McDaniel, P. Ensemble Adversarial Training: Attacks and Defenses. arXiv 2017, arXiv:1705.07204. [Google Scholar] [CrossRef]
  21. Wang, H.; Wang, Y. Self-Ensemble Adversarial Training for Improved Robustness. arXiv 2022, arXiv:2203.09678. [Google Scholar] [CrossRef]
  22. Deng, Y.; Mu, T. Understanding and Improving Ensemble Adversarial Defense. Adv. Neural Inf. Process. Syst. 2023, 36, 58075–58087. [Google Scholar]
  23. Mode, G.R.; Hoque, K.A. Adversarial Examples in Deep Learning for Multivariate Time Series Regression. In Proceedings of the 2020 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Washington, DC, USA, 13–15 October 2020. [Google Scholar] [CrossRef]
  24. Pialla, G.; Ismail Fawaz, H.; Devanne, M.; Weber, J.; Idoumghar, L.; Muller, P.A.; Bergmeir, C.; Schmidt, D.F.; Webb, G.I.; Forestier, G. Time series adversarial attacks: An investigation of smooth perturbations and defense approaches. Int. J. Data Sci. Anal. 2023, 19, 129–139. [Google Scholar] [CrossRef]
  25. Kurakin, A.; Goodfellow, I.J.; Bengio, S. Adversarial examples in the physical world. In Artificial Intelligence Safety and Security; Chapman and Hall/CRC: Boca Raton, FL, USA, 2018; pp. 99–112. [Google Scholar]
  26. Croce, F.; Hein, M. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In Proceedings of the 37th International Conference on Machine Learning, Online, 13–18 July 2020; Daumé, H., III, Singh, A., Eds.; PMLR, Proceedings of Machine Learning Research. JMLR: Brookline, MA, USA, 2020; Volume 119, pp. 2206–2216. [Google Scholar]
  27. Zhang, H.; Yu, Y.; Jiao, J.; Xing, E.; Ghaoui, L.E.; Jordan, M. Theoretically Principled Trade-off between Robustness and Accuracy. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; Chaudhuri, K., Salakhutdinov, R., Eds.; PMLR, Proceedings of Machine Learning Research. JMLR: Brookline, MA, USA, 2019; Volume 97, pp. 7472–7482. [Google Scholar]
  28. Silva, S.H.; Najafirad, P. Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey. arXiv 2020, arXiv:2007.00753. [Google Scholar] [CrossRef]
  29. Reynolds, D.A.; Veeraraghavan, A.; Ramanathan, N.; Yam, C.Y.; Nixon, M.S.; Elgammal, A.; Boyd, J.E.; Little, J.J.; Lynnerup, N.; Larsen, P.K.; et al. Gaussian mixture models. Encycl. Biom. 2009, 741, 3. [Google Scholar]
  30. Rigoni, G.; Pinotti, C.M.; Bhumika; Das, D.; Das, S.K. Delivery with UAVs: A simulated dataset via ATS. In Proceedings of the 2022 IEEE 95th Vehicular Technology Conference: (VTC2022-Spring), Helsinki, Finland, 19–22 June 2022; pp. 1–6. [Google Scholar] [CrossRef]
  31. Ni, P.; Li, J.; Hao, H.; Han, Q.; Du, X. Probabilistic model updating via variational Bayesian inference and adaptive Gaussian process modeling. Comput. Methods Appl. Mech. Eng. 2021, 383, 113915. [Google Scholar] [CrossRef]
  32. Hendriks, J.N.; Gustafsson, F.K.; Ribeiro, A.H.; Wills, A.G.; Schön, T.B. Deep Energy-Based NARX Models. IFAC-PapersOnLine 2021, 54, 505–510. [Google Scholar] [CrossRef]
  33. Bidaki, S.A.; Mohammadkhah, A.; Rezaee, K.; Hassani, F.; Eskandari, S.; Salahi, M.; Ghassemi, M.M. Online Continual Learning: A Systematic Literature Review of Approaches, Challenges, and Benchmarks. arXiv 2025, arXiv:2501.04897. [Google Scholar] [CrossRef]
  34. Al Farsi, A.; Khan, A.; Mughal, M.; Bait-Suwailam, M.M. Privacy and security challenges in federated learning for uav systems: A systematic review. IEEE Access 2025, 13, 86599–86615. [Google Scholar] [CrossRef]
  35. Ntizikira, E.; Lei, W.; Alblehai, F.; Saleem, K.; Lodhi, M.A. Secure and Privacy-Preserving Intrusion Detection and Prevention in the Internet of Unmanned Aerial Vehicles. Sensors 2023, 23, 8077. [Google Scholar] [CrossRef]
  36. Arjun, R.K.; Charanjit, J.; Nalini, R. Towards Building Secure UAV Navigation with FHE-aware Knowledge Distillation. arXiv 2024, arXiv:2411.00403. [Google Scholar] [CrossRef]
Figure 1. Distribution feature-driven perturbation-adaptive adversarial defense framework.
Figure 2. AIC and BIC curves with varying numbers of GMM components: (a) Input-side GMM models trained on the UAV_Split training set. (b) Output-side GMM models trained on the UAV_Split training set.
Table 1. List of key notations.
Notation | Definition
x | Model input or clean sample
x_adv | Adversarial sample
y | Model output (ground truth label or regression target)
f | Model function
f_θ | Model parameterized by θ
θ | Model parameters
δ | Adversarial perturbation
‖δ‖_p | p-norm of perturbation
ϵ | Maximum perturbation magnitude
S | Perturbation constraint set (e.g., ℓ_p-ball)
L | Loss function
∇ | Gradient operator
sign(·) | Sign function
Clip(x, ϵ) | Clipping function to keep perturbation bounded
B(x, ϵ) | ℓ_p-ball centered at x with radius ϵ
η | Trade-off coefficient between clean and adversarial loss in Equation (7)
β | Regularization coefficient in TRADES loss (Equation (8))
KL(p‖q) | Kullback–Leibler divergence between distributions p and q
D | Training data distribution
E_{(x,y)∼D}[·] | Expectation over data samples from distribution D
p(x) | Probability density function of sample x
π_k | Mixture weight of the k-th Gaussian component (Σ_k π_k = 1)
μ_k | Mean vector of the k-th Gaussian component
Σ_k | Covariance matrix of the k-th Gaussian component
K | Total number of Gaussian components
‖·‖_2 | Euclidean norm
‖·‖_∞ | Maximum norm
Table 2. Experimental environment configuration.
Category | Configuration
Hardware | CPU: Intel i7-1360P (16 cores, 2.20 GHz); Memory: 32 GB; Storage: 1 TB SSD; Operating System: Windows 11 (64-bit)
Software | Programming Language: Python 3.8; Deep Learning Framework: PyTorch 2.4; Libraries: NumPy 1.22.4, Scikit-learn 1.3.2, Matplotlib 3.5.0, Pandas 1.3.4; Development Tools: VSCode 1.87.2 with Python and Jupyter extensions; Environment: Anaconda 4.8.2, Jupyter Client 7.3.4, Jupyter Core 4.12.0, IPython 8.4.0, ipykernel 6.14.0, ipywidgets 8.1.2
Table 3. Training and test set composition.
Dataset | Composition | Flight Records
Training Set | 100 UAVs from Candidate 1 | 2345
Test Set | 50 UAVs from Candidate 1 + 100 UAVs from Candidate 2 | 1830
Table 4. Dataset overview: sample size and parameter description.
Dataset | N-Train | N-Test | Parameter Description
UAV_Split | 291,954 | 250,034 | w: Operating conditions, includes 5-dimensional parameters such as wind_speed, wind_dir, temp, etc.
 | | | x̄_s: The previous state, includes 9-dimensional parameters such as lat, lon, alt, tas, cas, etc.
 | | | x_s: The current state, includes 9-dimensional parameters such as lat, lon, alt, tas, cas, etc.
Table 5. Configuration of network structures for the base model.
Model Name | Network Structure
Perf_UAV_DNN | [14, 150, 150, 150, 100, 9]
Table 6. Hyperparameters for base model training.
Parameter | Perf_UAV_DNN
Loss Function | MSE
Optimizer | Adam (lr = 5 × 10^-4)
Learning Rate Scheduler | StepLR (step_size = 1 epoch, γ = 0.95)
Epochs | 100
Batch Size | 256
Early Stopping Patience | 3
Table 7. Parameter settings and norm constraints for adversarial sample generation.
Method | Norm | ϵ | Iterations | Step Size α | Random Start | Others/Notes
FGSM | ℓ∞ | 0.01, 0.05, 0.1 | / | / | No | Gradients w.r.t. input
CW | ℓ2 | 0.01, 0.05, 0.1 | 20 | 0.02 | No | LR = 0.01
BIM | ℓ∞ | 0.01 | 15 | 0.001 | No | Gradients w.r.t. input
 | | 0.05 | 20 | 0.003 | |
 | | 0.1 | 30 | 0.004 | |
PGD | ℓ∞ | 0.01 | 15 | 0.001 | Yes | Uniform init. in [−ϵ, ϵ]
 | | 0.05 | 20 | 0.003 | |
 | | 0.1 | 30 | 0.004 | |
APGD | ℓ2 | 0.01 | 15 | 0.001 | No | α ×= 1.1 if Δloss < 10^-6
 | | 0.05 | 20 | 0.003 | |
 | | 0.1 | 30 | 0.004 | |
Table 8. Hyperparameters for sub-model training.
Parameter | Perf_UAV_DNN
Loss Function | MSE
Optimizer | Adam (lr = 5 × 10^-4)
Learning Rate Scheduler | CosineAnnealingLR (eta_min = 1 × 10^-6)
Epochs | 100
Batch Size | 256
Table 9. Configuration of network structures for the perturbation strength prediction model.
Model Name | Network Structure
Epsilon_Predict_DNN | [52, 128, 256, 128, 64, 1]
Table 10. Hyperparameters for perturbation strength prediction model training.
Parameter | Epsilon_Predict_DNN
Loss Function | MSE
Optimizer | Adam (lr = 5 × 10^-4)
Learning Rate Scheduler | CosineAnnealingLR (eta_min = 1 × 10^-6)
Epochs | 30
Batch Size | 256
Table 11. MSE and R 2 evaluation metrics for different models on clean and hybrid adversarial samples.
Model Name | Hybrid MSE (×10^-3) | Hybrid R2 | Clean MSE (×10^-3) | Clean R2
Perf_UAV_DNN | 9.51937 | 0.97209 | 0.74592 | 0.99781
DPAD | 1.82378 | 0.99465 | 0.75114 | 0.99780
DNN^EAT (η = 0.1) | 1.87219 | 0.99451 | 1.29076 | 0.99621
DNN^EAT (η = 0.3) | 1.94561 | 0.99429 | 1.28924 | 0.99622
DNN^EAT (η = 0.5) | 1.88074 | 0.99448 | 1.12079 | 0.99671
DNN^EAT (η = 0.7) | 1.90210 | 0.99442 | 1.00955 | 0.99704
DNN^EAT (η = 0.9) | 2.34659 | 0.99312 | 1.02693 | 0.99699
DNN^ETR (λ = 1.0) | 2.10947 | 0.99381 | 1.17201 | 0.99656
DNN^ETR (λ = 3.0) | 2.09383 | 0.99386 | 1.41517 | 0.99585
DNN^ETR (λ = 5.0) | 2.17484 | 0.99362 | 1.54641 | 0.99546
DNN^ETR (λ = 7.0) | 2.40992 | 0.99293 | 1.83781 | 0.99461
DNN^ETR (λ = 9.0) | 2.55466 | 0.99251 | 2.00799 | 0.99411
Note: Red text indicates the best metric in the column, and blue text indicates the second-best metric.
Table 12. Inference time overhead for different models on the hybrid test dataset.
Model Name | DPAD Framework | EAT Model | ETR Model
Time Cost (s) | 686.100 ± 11.377 | 2.288 ± 0.096 | 2.250 ± 0.155
Table 13. The MSE and R 2 metrics of perturbation strength prediction models with different input feature compositions on the test set.
Features | Z1 | Z2 | Z3
MSE | 4.53 × 10^-4 | 3.53 × 10^-4 | 0.81 × 10^-4
R2 | 0.6847 | 0.7545 | 0.9434
