Article

DBI-Attack: Dynamic Bi-Level Integrated Attack for Intensive Multi-Scale UAV Object Detection

1 School of Information and Navigation, Air Force Engineering University, Xi’an 710082, China
2 School of Computer Science, Northwestern Polytechnical University, Xi’an 710129, China
3 Department of Computer Science, The University of Manchester, Manchester M13 9PL, UK
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(14), 2570; https://doi.org/10.3390/rs16142570
Submission received: 14 May 2024 / Revised: 8 July 2024 / Accepted: 11 July 2024 / Published: 13 July 2024
(This article belongs to the Special Issue Intelligent Remote Sensing Data Interpretation)

Abstract

Benefiting from the robust feature representation capability of convolutional neural networks (CNNs), object detection for intelligent high-altitude UAV remote sensing has developed rapidly. In this field, adversarial examples (AEs) pose serious security risks to deep learning-based systems. Due to the limitations of object size, image degradation, and scene brightness, adding adversarial perturbations to small and dense objects is extremely challenging. To study the threat of AEs to UAV object detection, a dynamic bi-level integrated attack (DBI-Attack) is proposed for intensive multi-scale UAV object detection. Firstly, we use the dynamic iterative attack (DIA) method to generate perturbations at the classification level by improving the momentum iterative fast gradient sign method (MIM). Secondly, the bi-level adversarial attack method (BAAM) is constructed to add global perturbations at the decision level to complete the white-box attack. Finally, the integration black-box attack method (IBAM) is combined to realize black-box mislabeling and fabrication attacks. We experiment on a real drone traffic vehicle detection dataset to better evaluate the attack effectiveness. The experimental results show that the proposed method can achieve mislabeling and fabrication attacks on UAV object detectors under black-box conditions. Furthermore, adversarial training is applied to improve model robustness. This work aims to draw more attention to the adversarial and defensive aspects of UAV target detection models.

1. Introduction

The object detection of remote high-altitude unmanned aerial vehicles (UAVs) is a fundamental and challenging problem in the age of the internet of things (IoT) and is widely used in military and civilian fields [1]. As an IoT platform, it also provides value-added IoT services for exploration, inspection, and collaborative attack tasks. However, object detection algorithms based on deep learning face potential security risks. Attackers can interfere with the UAV object recognition capability by injecting adversarial examples (AEs) from the digital and physical domains. Figure 1 shows how an attacker can penetrate a UAV target detection mission. The wireless image transmission technology (such as Wi-Fi) commonly used between UAVs and ground control terminals has vulnerabilities [2,3]. We consider injecting adversarial samples into the image transmission system by exploiting these transmission vulnerabilities during the image transmission process. The injected attack samples interfere with the target detection algorithm to achieve real-time attacks on target detection applications. Subsequently, the UAV misjudges the objects, resulting in problems such as accidents, road congestion, and faulty decisions [4,5,6]. The pixels of UAV objects in dense regions are confined to a small local space that occupies only a small proportion of the image, so the generated perturbation can only modify a small number of pixels. For physical attacks, the generated adversarial patches may have larger perturbed pixel values, reducing the attack concealment compared with digital ones [7,8]. Meanwhile, physical attacks can hardly be performed on object regions that contain only a few pixels of tiny targets. Therefore, the vulnerabilities of the object detector can be effectively explored by conducting digital adversarial attacks.
Recent research indicates that UAV object detection algorithms are affected by complex weather, lighting conditions, and flight altitude, which brings both opportunities and challenges for implementing adversarial attacks. Specifically, the objects in UAV images may become blurred or illegible in foggy weather, so sensor noise and unstable light can better hide the attack traces [9]. Meanwhile, some drones must hover at high altitudes for urban traffic planning, violation monitoring, and emergency treatment tasks. The tiny traffic objects captured at high altitudes occupy only a few pixels and are easily confused with pixels in the background region, which makes accurate and effective attacks in UAV images difficult. Recently, the focus on adversarial attacks and defense methods for object detection has been growing [10,11]. These attacks can be classified into white-box and black-box attacks [12,13]. In the white-box setting, attackers can capture details of the victim models, including infrastructures, parameters, gradient values, and backpropagation processes. White-box attacks constitute the basic component of black-box attacks [14,15]. Black-box models provide neither parameters nor gradient information, so researchers have proposed various black-box attack methods that work with limited data [16,17,18,19,20].
The proposed dynamic bi-level integrated attack (DBI-Attack) can be further divided into targeted mislabeling and fabrication. The mislabeling attack aims to guide the object detection system into making mistaken predictions of the target labels through minor perturbations. This attack leads to misleading results in practical object detection applications, posing a threat to system reliability and security [21,22]. The fabrication attack aims to make the detector recognize false objects as real ones by forging or fabricating perturbed target instances. Different from the mislabeling attack, this attack needs to generate many false objects while maintaining the ground-truth object detection results in order to deceive the object detection system [23].
In this article, the dynamic bi-level integrated attack (DBI-Attack) is proposed to implement adversarial attacks for UAV object detection on the unmanned aerial vehicle detection tracking (UAVDT) dataset. The universal attack model has white-box and black-box modules for mislabeling and fabrication attacks against dense small objects. Firstly, we employ the momentum iterative fast gradient sign method (MIM) to introduce momentum. The dynamic iterative attack (DIA) improves MIM with a dynamic iteration step to prevent the iteration process from stalling or over-iterating. Secondly, the bi-level adversarial attack method (BAAM) is applied at the decision level of Faster RCNN [24] and YOLOv3 [25] to complete the white-box attack and further improve the attack effect. Finally, the weight balance and weight optimization modules are combined to build the integration black-box attack method (IBAM), which enables the attacks to transfer to black-box models including RetinaNet [26], SSD [27], and Sparse RCNN [28]. Specifically, the main contributions of this work can be summarized as follows:
  • The white-box attack module DIA employs the guidance of the classification loss to generate primary adversarial examples from the internal feature space of the bounding box to deceive the classifier, which avoids the problems of iteration stalling and skipping extreme points caused by fixed step sizes.
  • The white-box attack module BAAM further improves the adversarial examples at the decision level by using the RPN classification loss of the two-stage model and the multi-class confidence loss of the one-stage model to strengthen the attack capability of the model.
  • The black-box attack module IBAM integrates the weight balance and weight optimization modules to combine the perturbations obtained from the white-box models for the black-box model without gradient information. The predefined perturbations generated by the agent white-box models improve the transfer performance of the black-box attack.
  • The proposed model fully exploits the performance of the white-box attacks to improve the effect of black-box attacks. Hence, DBI-Attack does not need to design proprietary adversarial examples for different black-box target detection models. Our attack combines query with transfer to improve the applicability of the white-box attack integrations to black-box models.
The rest of the article is organized as follows: Section 2 presents the related works, and Section 3 introduces the proposed attack framework. The experiments and analysis are detailed in Section 4, and finally, the conclusion is provided in Section 5.

2. Related Work

Adversarial attacks have been applied to object classification, object detection, and semantic segmentation by adding imperceptible perturbations. These existing attack methods play crucial roles in various attack scenarios, such as autonomous driving, SAR image intelligence interpretation, and aerial object detection. Considering the characteristics of adversarial example (AE) generation models, they can be further discussed in terms of white-box and black-box attack methods. However, most of these methods are not feasible for small and dense UAV objects.
White-Box Attack: White-box attacks have full access to the target detection model and its internal structure. Attackers can deceive the target detectors by analyzing the weights, gradients, and other information of the model to generate targeted adversarial examples. Madry et al. [29] studied the saddle point formulation of the corresponding optimization problem and used projected gradient descent (PGD) as a general attack method against the local information of the network. Xie et al. [30] pioneered the migration of adversarial example generation into the field of target detection and proposed the Dense Adversarial Generation (DAG) method. This method assigns an adversarial label to each Region of Interest (RoI) and performs iterative gradient backpropagation to generate adversarial examples. Since one-stage object detection algorithms do not generate proposal boxes, this method can only attack two-stage object detection algorithms. Wei et al. [31] proposed a unified and efficient adversary (UAE) model for better transferability and attack effectiveness on images and videos. However, the adversarial examples generated by UAE exhibit significant visual differences from the original images, which reduces the concealment of the attack and can lower the success rate. Du et al. [32] proposed a universal local adversarial network (ULAN) to generate local adversarial examples. Specifically, layer-wise relevance propagation (LRP) is used to compute the attention that determines the target region. ULAN then utilizes U-Net to generate local perturbations specifically targeting the identified region. Recently, Wang et al. [33] proposed an improved universal adversarial perturbation (UAP) method that generates disturbances by attacking RPNs based on the proposed detectors. Li et al. [34] described a robust adversarial perturbation (RAP) method to attack RPN-based object detectors. They designed and optimized a loss function that combines the label loss and a new shape loss. Wu et al. [35] realized the printed circuit board (PCB) attack using a man-in-the-middle attack; its key contribution is dividing the attack target into three parts (probability, confidence, and bounding box) and constructing, in effect, a universal adaptive attack. Chow et al. [36] designed a series of sophisticated attack strategies of the targeted adversarial objectness gradient (TOG) for modern deep learning driven target detectors, realizing object vanishing, fabrication, and mislabeling. Zhang et al. [37] proposed a targeted feature space attack (TFA) that attacks the internal feature layer of the detection model rather than the final output layer used in traditional mislabeling attacks. However, these methods can hardly be conducted on black-box models. Meanwhile, there is an urgent need for a feasible universal white-box attack algorithm that can support black-box attacks on both one-stage and two-stage object detectors.
Black-Box Attack: In real scenarios, attackers often face unknown black-box models and cannot obtain any internal information. Many efforts have been devoted to black-box attack methods to address this issue. Zhang et al. [38] proposed contextual adversarial perturbation (CAP) to provide better attacking ability by considering context information, exploring more aggressive attacking principles, and extending the attack to weakly supervised object detectors (WSODs). Kuang et al. [20] proposed a novel black-box attack called the evaporate attack (EA) for object detection models. The model incorporates pixel-wise optimal position guidance and random Gaussian noise into the velocity iteration formula. Wang et al. [39] proposed an improved attack method for the discrete cosine transform based on a boundary attack augmentation mechanism and applied it to offline and online attacks on black-box object detectors. Li et al. [40] proposed a novel attack method called the adaptive square attack (ASA), which bypasses the configuration of the target recognition model. Specifically, the ASA method employs an efficient sampling strategy that can generate perturbations with less query time. Cai et al. [14] proposed an ensemble-based black-box attack (EBAD) on dense prediction. This method can generate a single disturbance based on limited query feedback to deceive the black-box detection model. These black-box attacks are designed for large targets that occupy most of the image pixels. Hence, they are difficult to apply effectively to small and dense targets. Table 1 provides an overview of adversarial attack methods for object detection.

3. Methodology

The small and dense targets in UAV target detection usually occupy few pixels in the images, and the target positions are very close to each other. This brings difficulty to the implementation of adversarial attacks and the design of perturbations. The challenges include two aspects: (1) how to improve the attack effect of the perturbation on a limited number of pixels given the small size of the targets, and (2) how to reduce the mutual interference of perturbations between different targets given the density of the targets. Many advanced attack methods are based on the single classification loss function [30,31,32], while the other loss functions of great significance are ignored during the generation of adversarial examples. Hence, we separate the whole loss function into classification and decision losses. Firstly, in order to improve the attack effect on the limited pixels of small targets, the DIA is designed to reach the optimal point of the loss function more accurately. This makes the adversarial examples (AEs) generated by the reversed gradient have a more serious impact on the detection models, improving the attack effect on small targets. Secondly, in addition to achieving the wrong positioning of the targets, using the loss function at the decision level to guide the update of the AEs can locate the attack jamming position more accurately. Therefore, the bi-level design of DBI-Attack can reduce the mutual influence of the AEs between adjacent dense targets and improve the attack effect.
The overall framework of the dynamic bi-level integrated attack (DBI-Attack) method is displayed in Figure 2. The attack is conducted at the classification and decision levels separately, which improves the attack success rate and reduces the perturbation generation time. At the classification level, we construct a dynamic iterative attack (DIA) module which introduces momentum iteration and dynamic steps to accelerate the iteration based on the momentum iterative fast gradient sign method (MIM). The targeted bi-level adversarial attack method (BAAM) module is conducted at the decision level for one- and two-stage target detection algorithms to realize the mislabeling and fabrication functions of DBI-Attack. Finally, the integration black-box attack method (IBAM) is applied to improve the portability of the attack algorithm by carrying out white-box attacks on multiple agent models and generating a single adversarial example by adding the acquired perturbations according to balanced weights, achieving effective attacks on the black-box models.

3.1. Momentum Iterative Fast Gradient Sign Method

The momentum iterative fast gradient sign method (MIM) algorithm is the base model of the dynamic iterative attack (DIA), which is proposed in reference [41] for generating adversarial examples (AEs) for image classification. The algorithm introduces momentum to construct a momentum iterative gradient strategy based on the iterative fast gradient sign method (I-FGSM) [42]. This method can accelerate the generation process of AEs using the previous iteration’s perturbations. Compared with one-step gradient descent algorithms and optimization-based attack algorithms, the model dramatically improves the efficiency of attacks. Therefore, this study takes MIM as the primary attack framework for the classification level of dynamic iterative attack (DIA). The attack on the classification level is shown in Figure 3.
For a given object $x$, $f_c(x)$ is the clean example category, and $f_c(x')$ represents the AE category. Assuming that the attacked model is a white-box model, MIM can be used to construct a mislabeling attack framework for DIA. Firstly, the iterative gradient attack method is applied repeatedly with a small step size. Then, the pixel values of the intermediate perturbation are clipped to ensure that they remain within the value range of the original image. The iterative attack process is shown as follows:
$$X_{n+1} = \mathrm{Clip}_{X,\epsilon}\left\{ X_n + \alpha \cdot \mathrm{sign}\left( \nabla_{x} L\left(X_n, y_{\mathrm{true}}\right) \right) \right\} \qquad (1)$$
where $X_n$ represents the AE generated in the $n$-th iteration, and $y_{\mathrm{true}}$ represents the detection label of the clean example. The parameter $\alpha$ represents the step size, and $n$ indexes the iterations, ranging from 0 to $N-1$. $\mathrm{Clip}_{X,\epsilon}\{\cdot\}$ represents the clipping of the AE after each iteration to satisfy the $L_\infty$ constraint. To attach momentum to the iterative gradient attack method, MIM accelerates the gradient descent by accumulating a velocity vector along the gradient direction of the loss function. This method can effectively alleviate the problems of local optimal solutions and overfitting during the iteration process. The momentum update is as follows:
$$g_{n+1} = \mu \cdot g_n + \frac{\nabla_{x_n} L\left(x_n, y_{\mathrm{true}}\right)}{\left\| \nabla_{x_n} L\left(x_n, y_{\mathrm{true}}\right) \right\|_1} \qquad (2)$$
$$X_{n+1} = \mathrm{Clip}_{X,\epsilon}\left\{ X_n + \alpha \cdot \mathrm{sign}\left( g_{n+1} \right) \right\} \qquad (3)$$
where $g_n$ is the accumulated gradient generated by the previous $n$ iterations, $\mu$ is the decay factor of $g_n$, and $\|\cdot\|_1$ is the sum of the absolute values of the elements in the vector. After obtaining the momentum value, the model is iteratively attacked with it, and the attack update is given by Equation (3).
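For concreteness, a minimal PyTorch sketch of the MIM update in Equations (2) and (3) is given below. The classifier interface, the loss function, and the step size alpha = eps/steps are illustrative assumptions rather than the exact configuration used in this work.

```python
import torch

def mim_attack(model, x, y_true, loss_fn, eps=10 / 255, steps=10, mu=1.0):
    """Momentum iterative FGSM sketch following Eqs. (2)-(3)."""
    alpha = eps / steps                      # assumed fixed step size
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)                  # accumulated momentum, g_0 = 0
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y_true)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Eq. (2): accumulate momentum with the L1-normalized gradient.
        g = mu * g + grad / grad.abs().sum().clamp_min(1e-12)
        # Eq. (3): signed step, then clip back into the eps-ball and valid range.
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv
```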

3.2. Dynamic Iterative Attack

For the MIM method, the iteration step size is fixed. Therefore, the iteration step of MIM remains the same while the iteration direction changes during the generation of adversarial examples (AEs). Significantly, a fixed iteration step may cause the iteration process to fail to move forward or to over-iterate [43,44,45]. To change the step size dynamically with the iteration, a dynamic iteration step can be designed to change the perturbation size along the iteration direction so that the AEs can better approach the optimal point of the loss function within a limited number of iterations. In this section, the dynamic iterative attack (DIA) is proposed based on MIM at the classification level, which dynamically generates the primary AEs to deceive the classification of target categories in the bounding boxes. For targeted attacks with target output $y'$, we have to solve the following minimization problem:
$$x' = \arg\min_{x'} L_c\left( f\left(x'\right), y' \right) \qquad (4)$$
where $L_c$ is the cross-entropy loss at the classification level, $f(x)$ is the label of the clean example, and $y'$ is the label of the adversarial example. The loss over all targets is calculated as follows:
$$L_c = -\frac{1}{M} \sum_{i=1}^{M} \sum_{j=1}^{M} y_i \log p\left(x_j\right), \quad \text{subject to} \quad \left\| x - x' \right\|_{\infty} < \varepsilon \qquad (5)$$
where the real label is $y_i = f(x_i)$, $p(x_j)$ is the prediction probability distribution of the recognition model for the perturbed input $x_j$, and $M$ denotes the number of targets. To ensure that the perturbation of the generated AEs is not noticeable, we impose the difference limitation $\varepsilon$ between the original image and the adversarial image.
When the iteration point approaches an extremum point of the model loss function, the absolute value of the gradient is relatively small. To prevent the iteration point from directly jumping over the extremum point due to a large step size, the iteration step size needs to be reduced. On the contrary, when the iteration point is close to the middle of two adjacent extremum points, the absolute value of the gradient is relatively large, and its projection value on the horizontal axis is small. In order to quickly reach the nearest extremum point from the iteration point, the iteration step size should be appropriately increased. Therefore, the gradient of the loss function $\nabla_{x_n} L_c$ at the input point can be used to adjust the iteration step size positively. In addition, to reflect the direction and intensity of the change of the loss function, we use the gradient difference between the current iteration point and the previous iteration point as supplementary information for the current iteration step size, so as to correct the iteration process using historical information. The process of dynamic iteration is given in Figure 4. The iteration step size is set by Equation (6). In terms of the step size limitation, the dynamic iteration step size $\alpha_n$ is normalized to reduce the step size as in Equation (7):
$$\nabla_{x_n} L_c + \left( \nabla_{x_n} L_c - \nabla_{x_{n-1}} L_c \right) = 2\nabla_{x_n} L_c - \nabla_{x_{n-1}} L_c \qquad (6)$$
$$\alpha_n = \frac{\left\| 2\nabla_{x_n} L_c - \nabla_{x_{n-1}} L_c \right\|_2}{\left\| 2\nabla_{x_n} L_c - \nabla_{x_{n-1}} L_c \right\|_1} \qquad (7)$$
To avoid the deviation of the dynamic iteration step size from the initial fixed step size being overly small or large, the step size is restricted to the interval $\left[ 0.5 \times \frac{\varepsilon}{N}, 1.5 \times \frac{\varepsilon}{N} \right]$, avoiding the situations of $\alpha_n \gg \varepsilon$ or $\alpha_n \ll \varepsilon$. When the loss function value of the AE generated by the final iteration is greater than the discrimination threshold, the recognition model misclassifies the object in the bounding box, and the AE can be considered to have successfully attacked the recognition model. The process of DIA can be summarized as follows:
$$\begin{cases} \alpha_n = \dfrac{\left\| 2\nabla_{x_n} L_c - \nabla_{x_{n-1}} L_c \right\|_2}{\left\| 2\nabla_{x_n} L_c - \nabla_{x_{n-1}} L_c \right\|_1} \\[2ex] g_{n+1} = \mu \cdot g_n + \dfrac{\nabla_{x_n} L_c}{\left\| \nabla_{x_n} L_c \right\|_1} \\[2ex] x_{n+1} = \mathrm{Clip}_{x,\varepsilon}\left\{ x_n + \lambda \alpha_n \cdot \mathrm{sign}\left( g_{n+1} \right) \right\} \end{cases} \qquad (8)$$
where $\lambda$ indicates the disturbance factor of the classification level, which represents the proportion of the total disturbance allocated to the classification level when generating the perturbation.
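The following PyTorch sketch illustrates the dynamic step-size rule of Equations (6)-(8) as reconstructed above. How the raw norm ratio is mapped onto the interval [0.5ε/N, 1.5ε/N] is an assumption, since the text only states that the step is normalized into this range.

```python
import torch

def dynamic_step(grad_cur, grad_prev, eps, n_steps):
    """DIA step-size sketch: combine the current classification gradient with
    the difference to the previous iterate (2*g_n - g_{n-1}), take the L2/L1
    norm ratio, and clamp the result into [0.5*eps/N, 1.5*eps/N]."""
    combined = 2.0 * grad_cur - grad_prev
    ratio = combined.norm(p=2) / combined.abs().sum().clamp_min(1e-12)
    base = eps / n_steps
    # Rescale the ratio to the order of the fixed step; the sqrt(d) factor is
    # a guess that keeps the value near `base` before clamping.
    alpha = ratio * combined.numel() ** 0.5 * base
    return alpha.clamp(0.5 * base, 1.5 * base)
```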

3.3. Bi-Level Adversarial Attack Method

Differing from the classification task, dense object detection involves more loss functions for different models due to the complexity of the output space and the diversity of the architectures. Except for the classification loss at the classification level, the remaining loss functions of each white-box model are viewed as the decision-level optimization loss, which guides the completion of the bi-level adversarial attack method (BAAM) through backpropagation. BAAM further improves the attack performance by using the bounding box and region proposal network (RPN) losses of the two-stage models and the multi-class confidence loss $L_{conf}$ of the one-stage models, starting from the primary AEs of DIA [46,47]. It is worth noting that the regression losses $L_{bbox}$, $L_{xy}$, and $L_{wh}$ of the bounding box are crucial for better implementing fabrication attacks [48]. The structure of BAAM is displayed in Figure 5.
At the decision level, the primary AEs $x_c$ and the gradient $g_c$ generated by DIA are viewed as new inputs to calculate the decision-level loss function. According to the model structure, the decision-level loss functions $L_d$ for the two-stage and one-stage models are as follows:
$$L_d = \begin{cases} L_{two\_stage} = L_{bbox} + L_{rpn\_cla} + L_{rpn\_bbox} \\ L_{one\_stage} = L_{conf} + L_{xy} + L_{wh} \end{cases} \qquad (9)$$
where $L_{two\_stage}$ and $L_{one\_stage}$ are the summed remaining loss functions of the two- and one-stage models excluding classification, $L_{bbox}$ represents the regression loss function of the bounding box, and $L_{rpn\_cla}$ and $L_{rpn\_bbox}$ represent the RPN losses of the two-stage models. For one-stage models, $L_{conf}$ is the confidence loss function, and $L_{xy}$ and $L_{wh}$ calculate the bounding box losses. The $L_1$ loss function is employed as the regression loss at the decision level:
$$Loss_{L_1} = \frac{\sum_{i=1}^{N} \left| f\left(x_i\right) - y_i \right|}{M} \qquad (10)$$
where $y_i$ represents the true value, $f(x_i)$ represents the logit output of the decision level, and $M$ represents the number of targets. The direction of the decision-level adversaries is as follows:
$$\mathrm{sign}\left( g_{n+1} \right) = \mathrm{sign}\left( g_c^{n+1} + \frac{\nabla_{x_d} L_d}{\left\| \nabla_{x_d} L_d \right\|_1} \right) \qquad (11)$$
where $g_c^{n+1}$ and $\nabla_{x_d} L_d$ represent the gradients of the classification-level and decision-level loss functions, respectively, and $g_{n+1}$ is the accumulated gradient of the $n$-th iteration. After completing the production of the two-level perturbation, the adversarial examples for the targeted mislabeling and fabrication attacks are updated as follows:
$$x_{n+1} = \mathrm{Clip}_{X,\epsilon}\left\{ X_n - (1-\lambda)\, \alpha_n\, \mathrm{sign}\left( g_{n+1} \right) \right\} \qquad (12)$$
$$x_{n+1} = \mathrm{Clip}_{X,\epsilon}\left\{ X_n + (1-\lambda)\, \alpha_n\, \mathrm{sign}\left( g_{n+1} \right) \right\} \qquad (13)$$
Equations (12) and (13) give the calculations of the mislabeling and fabrication adversarial examples, where $(1-\lambda)\,\alpha_n$ determines the magnitude of the residual perturbation from the decision level, and $X_n$ is the output of the DIA method. When $\lambda = 0$, the perturbation of the classification level is 0, and the bi-level adversarial attack degenerates into a decision-level adversarial attack. When $0 < \lambda < 1$, BAAM utilizes the residual perturbation from the decision level to enhance the attack performance after generating the initial classification adversarial examples.
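A hedged PyTorch sketch of the decision-level refinement in Equations (11)-(13) follows; the function signature and the way the classification momentum g_cls is passed in are assumptions for illustration, not the authors' exact implementation.

```python
import torch

def baam_update(x_adv, x, g_cls, grad_dec, alpha_n, lam=0.5, eps=10 / 255,
                mode="mislabel"):
    """Fuse the classification-level momentum with the L1-normalized
    decision-level gradient (Eq. (11)) and apply the residual
    (1 - lambda) * alpha_n perturbation (Eqs. (12)-(13))."""
    g = g_cls + grad_dec / grad_dec.abs().sum().clamp_min(1e-12)
    step = (1.0 - lam) * alpha_n * g.sign()
    # Mislabeling subtracts the residual perturbation; fabrication adds it.
    x_new = x_adv - step if mode == "mislabel" else x_adv + step
    return torch.max(torch.min(x_new, x + eps), x - eps).clamp(0, 1)
```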

3.4. Integration Black-Box Attack Method

Black-box models have unknown structures and parameters. Inspired by ensemble learning methods [49,50], the integrated perturbations $P = \{p_1, p_2, \ldots, p_S\}$ of $S$ agent white-box models $M = \{m_1, m_2, \ldots, m_S\}$ are utilized to generate a black-box attack on the victim model $m_v$. If there is only one agent model in the integration, the attack becomes a simple transfer attack, with which it is difficult to successfully realize the black-box attack. The architecture of the integration black-box attack method (IBAM) is given in Figure 6. The method consists of two key parts: weight balance and weight optimization.
The classification- and decision-level loss functions of the $j$-th model are represented as $L_c^j$ and $L_d^j$. The weighted average of the loss values produced by the different agent models is employed to guide the calculation of the final black-box perturbations. Firstly, we calculate the adversarial examples from the classification level of the different models as follows:
$$x_c^j = \arg\min_{x'} L_c^j\left( f\left(x'\right), y' \right) \qquad (14)$$
where $x_c^j$ is the perturbation generated by the $j$-th agent model with the DIA method, $f(x)$ represents the label of the clean sample, and $y'$ is the target label of the final white-box AEs detected by the agent models. Then, we can generate AEs by solving the following optimization problem:
$$z(W) = \arg\min_{x'} \sum_{j=1}^{S} w_j L_d^j\left( d_j, Y' \right) \qquad (15)$$
$$d_j = F_j\left( x_c^j \right) \qquad (16)$$
where $z(W)$ is the black-box adversarial sample with a weight sequence $W = \{w_1, w_2, \ldots, w_S\}$, and $d_j$ represents the output of the decision level, including the bounding box, RPN proposal, and confidence, generated by feeding the classification adversarial examples $x_c^j$ and clean samples into an ensemble of $S$ models $F = \{F_1, F_2, \ldots, F_S\}$. The setting of the $W$ values plays a significant role in improving the success rate of the transferred attacks in integrated models.
In IBAM, if the adversarial attack succeeds in attacking the various white-box models, it is more easily transferred to the black-box victim model. However, most attack methods have only been validated on classification models that share the same cross-entropy loss and produce similar loss values. In contrast, the loss functions of the object detectors in the integration may vary greatly and cover a wide range. In this case, models with large loss terms dominate the optimization process, reducing the attack success rate on models with small loss terms. To overcome this problem, we propose an effective solution to balance the weights assigned to each model. For each perturbation $p_j = F_j(x_c^j)$ and target output $y'$, we adjust the weights of the agent model losses as follows:
$$w_j = \frac{\sum_{k=1}^{S} L_d^k\left( F_k\left(x_c^k\right), y' \right)}{S \times L_d^j\left( F_j\left(x_c^j\right), y' \right)} \qquad (17)$$
The weight balancing is performed on the white-box models, where the losses can be measured accurately. The purpose of weight balancing is to ensure that all agent models can be effectively attacked, making the generated examples more adversarial to the black-box victim models.
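A small sketch of the weight balancing in Equation (17) is shown below, assuming the decision-level losses of the agent models have already been evaluated on their own adversarial examples; the numerical guard is an implementation detail, not part of the paper.

```python
def balance_weights(decision_losses):
    """Give each agent model a weight inversely proportional to its own
    decision-level loss, normalized by the average loss (Eq. (17)), so that
    no single model dominates the ensemble objective."""
    s = len(decision_losses)
    total = sum(decision_losses)
    return [total / (s * max(loss, 1e-12)) for loss in decision_losses]
```

For example, agent losses of [4.0, 1.0] would yield weights [0.625, 2.5], boosting the model that is currently harder to attack.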
We further observe that the transfer-based attack can be further improved by optimizing the ensemble weights based on the victim model, the input image, and the target output, so as to produce perturbations that reduce the victim model loss at both the classification and decision levels. We also apply BAAM to the black-box model to optimize the parameters. Specifically, we can change the individual $w_j$ to generate perturbations that reduce the victim model loss $L_{blackbox}$ without calculating the gradient values of the black box. To achieve this goal, we need to solve the following optimization problem with respect to $W$:
$$W^{\ast} = \arg\min_{W} L_{blackbox}\left( x'(W), y' \right) \qquad (18)$$
where $L_{blackbox}$ is the decision-level loss function of the black-box model, $x'(W)$ is the adversarial example produced with the weight sequence $W$, and $y'$ is the target output. Equation (18) depicts an ensemble optimization problem which can be solved by an alternating minimization routine: the weights are updated from the victim model's loss feedback, and the adversarial example is re-crafted with the updated weights. Finally, the optimized parameter sequence $W$ is utilized to produce the final black-box attack $x'(W)$. Algorithm 1 summarizes the detailed steps of DBI-Attack.
Algorithm 1 Dynamic bi-level integrated attack.
Input: Original clean image x; GT label l; GT bounding box; loss function L_c of the classifier; loss function L_d of the decision level in the agent models; loss function L_blackbox of the decision level in the victim model.
Input: Number of iterations N; number of agent models S; weights of the agent models w_1, w_2, ..., w_S; perturbation constraint ε; momentum decay factor μ; classification-level perturbation factor λ.
Output: An adversarial example x′ with ‖x − x′‖_∞ ≤ ε.
  1:  x′_0 = x; g_0 = 0;
  2:  for j = 1 to S do
  3:       for n = 0 to N − 1 do
  4:          Input x′_n to the classifier, calculate L_c according to Equation (5), and determine the direction of the classification-level perturbation according to Equation (7);
  5:          Calculate the perturbation size of each iteration by Equation (7);
  6:          Update the adversarial example by Equation (8);
  7:          Calculate the decision loss L_d according to Equation (9), and determine the direction of the bi-level perturbation according to Equations (12) and (13);
  8:          Calculate the final adversarial example by solving the optimization problem z(W) = arg min_{x′} Σ_{j=1}^{S} w_j L_d^j(d_j, Y′);
  9:          Weight balance and optimization by Equations (17) and (18);
10:     end for
11:  end for
12:  return x′ = x′(W)
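The alternating weight optimization of Equation (18) can be approximated with a simple query-based coordinate search, sketched below under the assumption that `craft_fn(w)` reruns the white-box ensemble attack with weights `w` and that `victim_loss_fn` returns the victim's decision-level loss for a candidate adversarial example; both callables, the step size, and the iteration budget are illustrative.

```python
def optimize_ensemble_weights(victim_loss_fn, craft_fn, w, step=0.1, iters=5):
    """Query-only coordinate search over the ensemble weights (Eq. (18)):
    nudge one weight at a time, re-craft the adversarial example, and keep
    the change only if the victim's loss decreases. No victim gradients used."""
    best_loss = victim_loss_fn(craft_fn(w))
    for _ in range(iters):
        for j in range(len(w)):
            for delta in (step, -step):
                cand = list(w)
                cand[j] = max(cand[j] + delta, 0.0)   # keep weights non-negative
                loss = victim_loss_fn(craft_fn(cand))
                if loss < best_loss:
                    w, best_loss = cand, loss
    return w, best_loss
```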

4. Experiments and Analysis

4.1. Dataset

The unmanned aerial vehicle detection tracking (UAVDT) dataset is designed to evaluate the performance of object detection algorithms in the field of UAV vision [51,52,53,54]. The dataset is created by the Chinese University of Hong Kong. Furthermore, the construction of the attack set enables the evaluation of the robustness of object detection algorithms against attacks targeting small objects in high-altitude and high-density scenarios. The dataset covers diverse and complex scenes, including bridges, main roads, toll stations, highways, and intersections. Additionally, to investigate the impact of different weather and lighting conditions on UAV object detection, the dataset includes scenarios with fog and nighttime scenes. The UAVDT dataset contains 80,000 frames of images with a resolution of 1080 × 540 pixels including three object categories of cars, buses, and trucks. The purpose of the paper is to attack small objects detected in high-density aerial images, so 40,409 images are selected from the dataset. Among them, 30,307 images are segmented as the training set, while 10,102 images are used to compute gradients and construct the attack set. The UAV image samples of the UAVDT dataset are displayed in Figure 7.

4.2. Implement Details

The detection and attack models, both white-box and black-box, are all deployed on the same server. On this server, PyTorch is used as the implementation framework with an RTX3090 GPU and an Intel Xeon CPU (Intel, Santa Clara, CA, USA). Throughout the experiments, the learning rate of the SGD optimizer is set to 0.0001. To construct the proposed adversarial attacks, we employ several widely used one- and two-stage object detectors as target models: the agent models, YOLOv3 and Faster RCNN, are attacked as white boxes, and the victim models, RetinaNet, SSD, and Sparse RCNN, are attacked as black boxes. For the YOLOv3 and SSD detectors, the number of training epochs is set to 150. For Faster RCNN, Sparse RCNN, and RetinaNet, the number of training epochs is set to 30.

4.3. Criteria

To compare and analyze the performance of the different object detection attack methods fairly, we use the mean average precision (mAP) to measure the detection accuracy and the structural similarity index (SSIM) to measure the imperceptibility of the adversarial examples. The mAP is the mean of the AP values over all object categories in the dataset, and the AP is the average precision value over the PR curve [55,56]. The mAP metric can be formulated as follows:
$$mAP = \frac{1}{N} \sum_{n=1}^{N} AP_n \qquad (19)$$
$$AP = \int_{0}^{1} P(R)\, dR \qquad (20)$$
where N represents the number of object categories, and P and R represent the precision and recall values, respectively. The precision and recall metrics can be formulated as follows:
$$Precision = \frac{TP}{TP + FP} \qquad (21)$$
$$Recall = \frac{TP}{TP + FN} \qquad (22)$$
where $TP$ represents the number of positive samples that are correctly detected, $FP$ represents the number of negative samples that are incorrectly detected as positive, and $FN$ represents the number of positive samples that are incorrectly detected as negative. The mAP criterion alone cannot accurately evaluate the attack performance on fabricated targets. Hence, the fabrication ratio (FR) is calculated as follows to evaluate the effect of the fabrication attack:
$$FR = \frac{F}{F + R} \qquad (23)$$
where $F$ and $R$ represent the numbers of fabricated and real targets, respectively. For the real targets, the intersection over union (IoU) threshold is set to 0.50. A higher $FR$ value represents a better fabrication attack effect.
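The helper below turns matched detection counts into the precision, recall, FR, and mAP criteria of Equations (19)-(23); counting TP/FP/FN and fabricated targets at the 0.50 IoU threshold is assumed to happen upstream, and the trapezoidal integration of the PR curve is an assumption, since benchmarks differ in their exact interpolation rules.

```python
import numpy as np

def detection_metrics(tp, fp, fn, n_fabricated, n_real):
    """Precision, recall (Eqs. (21)-(22)), and fabrication ratio (Eq. (23))."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    fr = n_fabricated / (n_fabricated + n_real) if (n_fabricated + n_real) else 0.0
    return precision, recall, fr

def mean_average_precision(pr_curves):
    """mAP (Eq. (19)): average over categories of the area under each sampled
    PR curve (Eq. (20)). `pr_curves` is a list of (precisions, recalls) pairs."""
    aps = []
    for p, r in pr_curves:
        order = np.argsort(r)
        aps.append(np.trapz(np.asarray(p, float)[order], np.asarray(r, float)[order]))
    return float(np.mean(aps))
```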
The SSIM can be used to evaluate the similarity between the generated adversarial samples and the original samples. The SSIM values calculated between the perturbed and original samples are significant for measuring the concealment of AEs, i.e., avoiding detection by human eyes [57,58]. A higher SSIM value indicates a higher similarity between the generated and original samples, meaning that the generated sample is closer to the original and has better imperceptibility. The calculation formula for SSIM is as follows:
$$SSIM\left(x, y\right) = \frac{\left( 2\mu_x \mu_y + C_1 \right)\left( 2\sigma_{xy} + C_2 \right)}{\left( \mu_x^2 + \mu_y^2 + C_1 \right)\left( \sigma_x^2 + \sigma_y^2 + C_2 \right)} \qquad (24)$$
where $x$ and $y$ are the two images to be compared, $\mu_x$ and $\mu_y$ are their means, $\sigma_x^2$ and $\sigma_y^2$ are their variances, $\sigma_{xy}$ is their covariance, and $C_1$ and $C_2$ are two constants.
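A simplified, single-window version of Equation (24) is sketched below for images scaled to [0, 1]; standard SSIM implementations average this quantity over sliding Gaussian windows, so this global variant is only meant to make the formula concrete.

```python
import torch

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Global SSIM following Eq. (24); C1 and C2 use common default values."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x = x.var(unbiased=False)
    var_y = y.var(unbiased=False)
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).item()
```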

4.4. Parameter Discussion

To achieve a balance between the effectiveness and the concealment of the attack, the key parameter Q, which represents the number of queries per iteration, needs to be optimized. We conduct black-box attacks on SSD with the mislabeling and fabrication functions under different query values. Furthermore, the mAP values are utilized to verify the attack effect, and the SSIM values are calculated to evaluate the imperceptibility of the adversarial attack. Figure 8 displays the effect of the parameter Q on the mAP and SSIM values. During the optimization, the mAP and SSIM values both decrease continuously as the number of queries increases from 0 to 25. For the mislabeling function, once the number of queries reaches 20, the mAP remains almost unchanged while the SSIM value continues to decay dramatically. Hence, the parameter Q for the mislabeling function is set to 20. Similarly, the parameter Q for the fabrication function is set to 15. Figure 9 indicates that the cars are more easily detected as buses as the number of queries increases, while Figure 10 indicates that more fabricated targets are generated with the increase in queries.
The choice of the agent number S also has a great influence on the attack effect on black-box models. We successively expand an ensemble of YOLOv3, Faster RCNN, FCOS, and Cascade RCNN as agent models, and SSD is selected as the victim model. Table 2 gives the experimental results with different agent numbers. When the parameter S is set to 1, the model is no longer an integrated model, and the transferability of DBI-Attack decays sharply. For the mislabeling attack, the results indicate that adding agent models can improve the otherwise limited attack effect, but it also increases the time cost. Figure 11 displays the mislabeling attack results on the black-box SSD with different numbers of agents. For the fabrication attack, the results indicate that increasing the number of agents leads to a decline in the attack effect. Figure 12 displays the fabrication attack results on the black-box SSD with different numbers of agents. The visualization results also show that increasing the number of agents decreases the number of fabricated targets. In order to balance the attack effect and time cost, the parameter S is set to 2.

4.5. Experimental Results

To comprehensively analyze the attack performance, the experiments are conducted from white-box and black-box perspectives. Specifically, the commonly used YOLOv3 and Faster RCNN are viewed as white-box models. These agent models are attacked to generate weighted perturbations for black-box attacks on victim models, including RetinaNet, SSD, and Sparse RCNN, after weight balance and weight optimization. The corresponding mAP values of the clean samples are 73.70%, 69.23%, 67.61%, 71.68%, and 69.33%, with detection time costs of 0.11 s, 0.32 s, 0.10 s, 0.12 s, and 0.27 s. Except for the queries optimized in the parameter discussion, the attack functions share the same default parameters. We generate adversarial images under the $L_\infty$ norm with perturbation level $\epsilon = 10$ and $\tau = 10$ weight-update iterations. For the BAAM module, the parameter $\lambda$ is set to 0.5, which determines the magnitude of the residual disturbance at the decision level. The number of queries is set to 20 for the mislabeling attack and 15 for the fabrication attack. The agent numbers are all set to 2.
The mislabeling attack is a target-specific attack aimed at changing the target category while ensuring that the target position is still detected. Table 3 lists the quantitative results of the agent and victim models before and after the mislabeling attack. The mAP values of the agent models decline to 3.78% and 1.96%, and those of the victim models decline to 7.69%, 5.27%, and 5.52%. The mAP values indicate that the targets in white-box models are at greater risk compared with black-box models. The time costs are the sum of the generation time and detection time of the adversarial sample. The time costs rise to 2.17 s, 3.87 s, 4.24 s, 4.04 s, and 6.99 s, respectively. The weight balance and weight optimization increase the attack time of the black-box models. In terms of SSIM, the corresponding values are 0.848, 0.817, 0.879, 0.868, and 0.844. The SSIM values illustrate that the black-box adversarial samples are usually similar to the clean samples, which makes them harder to notice by human eyes. Figure 13 gives the visualization of the mislabeling attack results on the UAVDT dataset. In this figure, the targets in the adversarial images are detected with the wrong label; the cars are mislabeled and detected as buses. The experimental results indicate that our attack method has powerful attack capability for both the one- and two-stage models, regardless of whether the model structures and parameters are available.
The fabrication attack ensures that the real target positions can still be located while fabricating many false targets, causing confusion in the target detection algorithm. Table 4 lists the quantitative results of the agent and victim models before and after the fabrication attack. The mAP values of the agent models decline to 5.87% and 7.31%, and those of the victim models decline to 11.56%, 8.45%, and 6.68%. The mAP values indicate that the targets in white-box models are at greater risk compared with black-box models. The time costs rise to 2.34 s, 3.92 s, 4.85 s, 5.17 s, and 7.29 s, respectively. The corresponding SSIM values are 0.799, 0.786, 0.831, 0.833, and 0.815. The SSIM values illustrate that the fabrication attack has worse concealment compared with the mislabeling attack. Figure 14 gives the visualization results of the fabrication attack on the UAVDT dataset. In this figure, many non-existent bus targets have been fabricated, while the positions of the real targets can still be detected. The results indicate that the fabrication attack method is also suitable for both the one- and two-stage models without access to the model gradients and parameters.

4.6. Ablation Study

The proposed dynamic bi-level integrated attack (DBI-Attack) model is composed of the momentum iterative fast gradient sign method (MIM), the dynamic iterative attack (DIA), the bi-level adversarial attack method (BAAM), and the integration black-box attack method (IBAM). In this section, we evaluate the effect of each module on the black-box attack. Significantly, the IBAM module is essential for black-box attacks. The SSD model serves as the victim model and is disturbed by the AE combinations of the various modules.
The ablation studies of the mislabeling attack on SSD are shown in Table 5. The MIM module alone only reduces the mAP value to 61.57%. After adding DIA, the mAP value is reduced to 12.23%. The application of BAAM further reduces the mAP value to 5.27%. Figure 15 gives the detection results of the mislabeling attack with the corresponding feature maps. For the fabrication attacks, Table 6 lists the numerical results of the ablation studies. The results demonstrate that although MIM can reduce the mAP to 66.98%, the application of DIA further decreases the mAP to 16.52%. Moreover, the use of BAAM brings the attack model to an mAP value of 8.45%. Figure 16 gives the detection results of the fabrication attacks with the corresponding feature maps. The ablation results indicate that the integration of the various modules enhances the effectiveness of the mislabeling and fabrication attacks.

4.7. Comparison with State-of-the-Art Methods

4.7.1. Comparison with Other White-Box Attack Methods

In this study, we implement black-box attacks by integrating attacks against multiple white-box models. We evaluate the performance of the DBI-Attack (white-box attack) model composed of MIM, DIA, and BAAM from the perspectives of the mislabeling and fabrication functions. Due to the scarcity of suitable comparative models, we endeavor to implement a subset of them on UAVDT. For the mislabeling function, we compare the model with three advanced white-box attack models: TOG-Mislabel [36], DAG [30], and PGD [29]. For the fabrication function, we compare the model with PCB [35], TOG-Fabrication [36], and TFA-Fabrication [37]. The TOG-Mislabel and TOG-Fabrication attacks rely solely on the class loss function of the detection output and have a low time cost. Their queries are set to 30 with a maximum destruction of $\gamma = 0.03$ and a step size of $\delta = 0.008$. The DAG attack is an effective attack algorithm with a simple and reliable structure that ultimately achieves multi-target attacks by attacking the region proposal networks (RPNs). For DAG and PGD, the queries are set to 160 and 40, respectively, and the other parameters are set to their defaults.
Table 7 provides the statistics of the comparative experimental results. Compared to TOG-Mislabel, DBI-Attack (white-box attack) results in decreases of 0.68% and 1.78% in the mAP values for YOLOv3 and Faster RCNN. Compared to DAG, the proposed method results in decreases of 67.87% and 6.47% in the mAP values, respectively. Compared to PGD, the proposed method results in decreases of 0.38% and 3.63% in the mAP values, respectively. Our method has the best attack effect. With regard to time costs, TOG-Mislabel achieves the fastest attack speed, with decreases of 1.25 s and 2.62 s compared to our model on YOLOv3 and Faster RCNN. In addition, DBI-Attack has to generate more perturbed pixels to achieve the best attack effect. Hence, DAG has the highest SSIM values of 0.936 and 0.941, which are 0.091 and 0.124 higher than DBI-Attack (white-box attack). Figure 17 provides the visualized detection results of the clean and adversarial samples. The attack results indicate that although TOG-Mislabel can quickly cause serious interference to the detection model on the UAVDT dataset, it causes a large number of targets to be lost, which is inconsistent with the purpose of mislabeling attacks. The DAG algorithm only has a serious impact on two-stage object detection algorithms, and its perturbation generation is time-consuming. Therefore, DBI-Attack (white-box attack) is more suitable as the fundamental model for black-box mislabeling attacks.
Table 8 provides the statistics of the comparative experimental results. DBI-Attack (white-box attack) results in increases of 2.48% and 3.66% in the mAP values for YOLOv3 and Faster RCNN compared to PCB. Our method results in increases of 3.51% and 4.88% in the mAP values compared to TOG-Fabrication, and increases of 3.38% and 4.38% compared to TFA-Fabrication. Significantly, the FR value is the crucial criterion for fabrication; the value gaps reach 0.559 and 0.233 between DBI-Attack and PCB, 0.525 and 0.172 for TOG-Fabrication, and 0.434 and 0.259 for TFA-Fabrication. Fabricating more targets costs more time and requires more perturbed pixels. TOG-Fabrication achieves the fastest attack speed in terms of time cost, with reductions of 1.67 s and 3.04 s compared to our models on YOLOv3 and Faster RCNN, respectively. The SSIM values of PCB are 0.064 and 0.061 higher than those of DBI-Attack (white-box attack), those of TOG-Fabrication are 0.050 and 0.062 higher, and those of TFA-Fabrication are 0.031 and 0.070 higher. Figure 18 provides the visual detection results for both the clean and adversarial samples. The results indicate that although TOG-Fabrication has higher SSIM values, it cannot produce as many fabricated targets. Our model can fabricate numerous false targets while ensuring that the regions of the real targets are still detected. Therefore, DBI-Attack (white-box attack) is more suitable as the basic model for black-box fabrication attacks.

4.7.2. Comparison with Other Black-Box Methods

The black-box attack is the central function of the DBI-Attack algorithm, which fills the need for mislabeling and fabrication attacks against multiple small targets. Existing black-box adversarial attacks are difficult to apply to UAV object detection. In order to more reliably evaluate the black-box mislabeling attack capability of DBI-Attack, we construct the RAP+IBAM [34], EBAD [14], and TFA-Mislabeling+IBAM [37] comparison models. To more reliably evaluate the black-box fabrication attack capability of DBI-Attack, we use PCB [35], TOG-Fabrication [36], and TFA-Fabrication [37] combined with IBAM as comparative models. Significantly, our proposed IBAM, as a plug-and-play module, endows the RAP, TOG, and TFA-Mislabeling models with black-box attack capabilities. RAP is designed to disrupt the region proposal network (RPN) so that it generates wrong proposals. The EBAD model employs the projected gradient descent (PGD) method to generate a single disturbance for black-box attacks [14,59]. Although this model has an advanced attack effect on large targets, it misses a large number of dense small targets. The queries of RAP, EBAD, and TFA-Mislabeling are set to 30, 10, and 40, respectively, and the rest of the parameters are set to their defaults. The queries of PCB, TOG-Fabrication, and TFA-Fabrication are set to 10, 30, and 30, respectively.
Table 9 provides the statistics of the black-box mislabeling attack on RetinaNet, SSD, and Sparse RCNN. The mAP values of the models attacked by our method are 45.89%, 45.85%, and 13.15% lower than RAP+IBAM, 18.54%, 13.97%, and 18.67% lower than EBAD, and 4.23%, 6.84%, and 9.15% lower than TFA-Mislabeling+IBAM. Beyond the attack effect, our model devotes more time to generating higher-quality adversarial samples. EBAD costs the least time, which is 1.10 s, 0.78 s, and 2.48 s less than our model. The corresponding SSIM values are 0.879, 0.868, and 0.834, which are 0.076, 0.071, and 0.044 higher than RAP+IBAM. With regard to EBAD, the SSIM value gaps are 0.046, 0.008, and 0.007. Compared with TFA-Mislabeling+IBAM, the SSIM value gaps are 0.005, −0.015, and 0.013. Figure 19 gives the visual detection results for both the clean and adversarial samples. This figure shows that RAP+IBAM only generates effective attacks on isolated targets and performs poorly when attacking dense targets. The overall results illustrate that although EBAD achieves the fastest attack speed, it can only successfully attack some of the targets. For the SSD detector, EBAD reduces the mAP values by losing numerous targets. The experimental results indicate that DBI-Attack is feasible and effective in mislabeling small targets.
Table 10 displays the statistics of the black-box fabrication attack on RetinaNet, SSD, and Sparse RCNN. The mAP values of the models attacked by our method are 4.32%, 0.12%, and 0.52% lower than PCB+IBAM, and 5.30%, 1.07%, and 0.11% lower than TOG-Fabrication+IBAM. Compared with TFA-Fabrication+IBAM, the mAP values of RetinaNet and SSD are 5.61% and 2.14% lower with our method, while the mAP value of Sparse RCNN is 0.50% higher than DBI-Attack. Significantly, the key FR values of our method are 0.376, 0.544, and 0.279 higher than PCB+IBAM, 0.275, 0.478, and 0.209 higher than TOG-Fabrication+IBAM, and 0.228, 0.373, and 0.301 higher than TFA-Fabrication+IBAM. The time cost of the fastest model, PCB+IBAM, is 3.26 s, 3.84 s, and 4.45 s lower than that of our model. Specifically, our model generates more perturbed targets; the corresponding SSIM values are 0.791, 0.773, and 0.755, which are 0.047, 0.053, and 0.066 lower than TOG-Fabrication+IBAM. The SSIM gaps between PCB and our method are 0.876, 0.093, and 0.074, and the SSIM gaps between TFA-Fabrication+IBAM and our method are 0.033, 0.044, and 0.082. Figure 20 gives the visual detection results of the comparisons. It shows that TOG-Fabrication+IBAM can generate a series of false and disorderly targets. However, it cannot recognize the correct targets, which is inconsistent with the definition of the fabrication attack. The proposed DBI-Attack can fabricate dense false targets while identifying the true targets. Hence, DBI-Attack provides a feasible fabrication attack tool for dense and tiny targets.

4.8. Adversarial Training

To further evaluate the defense effectiveness of adversarial training (AT) [60,61,62] against the black-box DBI-Attack, five adversarial training processes were set up for the mislabeling and fabrication functions to study the robustness of the SSD model under different numbers of queries. Table 11 provides the adversarial training results against the black-box DBI-Attack. The statistics show that, for the mislabeling function, the mAP values are increased by 3.15%, 3.90%, 7.74%, 13.58%, and 14.27% under the different query settings. For the fabrication function, the mAP values are increased by 4.61%, 5.79%, 10.81%, 10.93%, and 12.35%.
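A minimal sketch of a direct adversarial training loop is given below, assuming a torchvision-style detector that returns a dictionary of losses in training mode and a black-box attack generator `attack_fn`; both interfaces are assumptions for illustration, not the authors' exact training pipeline.

```python
def adversarial_training_epoch(detector, loader, optimizer, attack_fn):
    """One epoch of direct adversarial training (DAT): regenerate adversarial
    examples with the current model state and train the detector on them."""
    detector.train()
    for images, targets in loader:
        adv_images = attack_fn(detector, images, targets)  # craft AEs per batch
        loss_dict = detector(adv_images, targets)           # detection losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```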
The experimental results show that adversarial training is more effective against the fabrication attack than against the mislabeling attack. Meanwhile, as the number of queries increases, the overall defense effect of adversarial training also improves. Nevertheless, adversarial training has only a limited defensive effect. On the one hand, the proposed multi-level attack can still effectively attack the model even after adversarial training. On the other hand, this section only uses the basic direct adversarial training (DAT) strategy [63]. We will consider more training strategies in future work, including fast adversarial training (FAT) [64], adversarial probabilistic training (APT) [65], adversarial training based on dropped weights [66], and adversarial training methods based on the information bottleneck [67].

5. Conclusions

This study provides a novel black-box attack tool, named the dynamic bi-level integrated attack (DBI-Attack), for dense and tiny UAV targets. It attacks victim (black-box) models by exploiting the adversarial transferability of agent (white-box) models at the classification and decision levels. Specifically, the DIA is formed to circumvent issues such as attack iteration termination and the skipping of extreme points caused by fixed step sizes. In addition, the BAAM is employed at the decision level to further improve the attack effectiveness. Finally, the IBAM is embedded to achieve the migration of attacks from white-box to black-box models. For the black-box RetinaNet, SSD, and Sparse RCNN, the mislabeling attacks reduce the mAP to 7.69%, 5.27%, and 5.52%, respectively, effectively altering the target category without losing targets. The fabrication attacks increase the FR to 0.397, 0.587, and 0.337, respectively, fabricating numerous false targets.
The experimental results show that DBI-Attack can be applied as a feasible attack for mislabeling and fabricating dense and tiny UAV targets on both one-stage and two-stage models. The proposed attack studies the vulnerability of object detection models more comprehensively and provides an important research basis for model security. The integration model effectively uses the perturbations generated from the white-box models, which means that many previous studies on white-box attacks can provide important inspiration for black-box attacks.
In the future, we aim to improve the effectiveness of the proposed attack by addressing two main limitations. On the one hand, the victim models are all CNN-based, without considering models such as vision transformers (ViTs). On the other hand, the evaluation of adversarial training (AT) indicates its limited defensive efficacy, and more powerful defensive mechanisms are still needed. Defense methods, including robust model structures and advanced AT algorithms, will be investigated against these attacks.

Author Contributions

Conceptualization, Z.Z. and X.Y.; methodology, Z.Z. and Z.W.; software, Z.Z. and Z.W.; validation, Z.W. and X.Y.; formal analysis, Z.Z., Z.W. and X.Y.; writing—original draft preparation, Z.Z., Z.W. and X.Y.; writing—review and editing, X.Y. and B.W.; visualization, Z.Z. and X.Y.; supervision, Z.W. and B.W.; funding acquisition, Z.W., X.Y. and B.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Natural Science Foundation of China under Grant 62172338, in part by the National Natural Science Foundation of China under Grant 61671465, in part by the Scientific Research Program Funded by the Education Department of Shaanxi Provincial Government under Grant 23JK0701, in part by the Young Talent Fund of the Association for Science and Technology in Shaanxi under Grant 20240105, and in part by the China Scholarship Council under Grant 202206090122.

Data Availability Statement

Data associated with this research are available online. The UAVDT dataset is available for download at https://sites.google.com/view/grli-uavdt/ accessed on 26 March 2018.

Conflicts of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as potential conflicts of interest.

References

  1. Li, A.; Ni, S.; Chen, Y.; Chen, J. Cross-Modal Object Detection Via UAV. IEEE Trans. Veh. Technol. 2023, 72, 10894–10905. [Google Scholar] [CrossRef]
  2. Lu, Z.; Sun, H.; Xu, Y. Adversarial Robustness Enhancement of UAV-Oriented Automatic Image Recognition Based on Deep Ensemble Models. Remote Sens. 2020, 21, 3007. [Google Scholar] [CrossRef]
  3. Lu, Z.; Sun, H.; Ji, K.; Kuang, G. Adversarial Robust Aerial Image Recognition Based on Reactive-Proactive Defense Framework with Deep Ensembles. Remote Sens. 2023, 15, 4660. [Google Scholar] [CrossRef]
  4. Messenger, R.; Islam, M.; Whitlock, M. Real-Time Traffic End-of-Queue Detection and Tracking in UAV Video. J. Syst. Eng. Electron. 2023, 21, 493–505. [Google Scholar] [CrossRef]
  5. Li, X.; Wu, J. Developing a More Reliable Framework for Extracting Traffic Data from a UAV Video. IEEE Trans. Intell. Transp. Syst. 2023, 24, 12272–12283. [Google Scholar] [CrossRef]
  6. Ren, H.; Huang, T.; Yan, H. Adversarial examples: Attacks and defenses in the physical world. Int. J. Mach. Learn. Cybern. 2021, 12, 3325–3336. [Google Scholar]
  7. Wei, X.; Guo, Y.; Yu, J. Adversarial Sticker: A Stealthy Attack Method in the Physical World. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 2711–2725. [Google Scholar] [CrossRef] [PubMed]
  8. Sun, X.; Cheng, G.; Pei, L.; Li, H.; Han, J. Threatening patch attacks on object detection in optical remote sensing images. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–10. [Google Scholar] [CrossRef]
  9. Sun, L.; Chang, J.; Zhang, J.; Fan, B.; He, Z. Adaptive image dehazing and object tracking in UAV videos based on the template updating siamese network. IEEE Sens. J. 2023, 23, 12320–12333. [Google Scholar] [CrossRef]
  10. Xu, Y.; Sun, H.; Chen, J.; Lei, L.; Kuang, G.; Ji, K. Robust remote sensing scene classification by adversarial self-supervised learning. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 4936–4939. [Google Scholar]
  11. Xu, Y.; Sun, H.; Chen, J.; Lei, L.; Ji, K.; Kuang, G. Adversarial Self-Supervised Learning for Robust SAR Target Recognition. Remote Sens. 2021, 13, 4158. [Google Scholar] [CrossRef]
  12. Zhao, S.; Wang, W.; Du, Z.; Chen, J.; Duan, Z. A Black-Box Adversarial Attack Method via Nesterov Accelerated Gradient and Rewiring Towards Attacking Graph Neural Networks. IEEE Trans. Big Data 2023, 9, 1586–1597. [Google Scholar] [CrossRef]
  13. Zhou, S.; Liu, C.; Ye, D.; Zhu, T.; Zhou, W.; Yu, P.S. Adversarial attacks and defenses in deep learning: From a perspective of cybersecurity. ACM Comput. Surv. 2022, 55, 1–39. [Google Scholar] [CrossRef]
  14. Cai, Z.; Tan, Y.; Asif, M.S. Ensemble-based blackbox attacks on dense prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 18–22 June 2023; pp. 4045–4055. [Google Scholar]
  15. Wei, X.; Yuan, M. Adversarial pan-sharpening attacks for object detection in remote sensing. Pattern Recognit. 2023, 139, 109466–109471. [Google Scholar] [CrossRef]
  16. Tian, J.; Wang, B.; Guo, R.; Wang, Z.; Cao, K.; Wang, X. Adversarial attacks and defenses for deep-learning-based unmanned aerial vehicles. IEEE Internet Things J. 2022, 9, 22399–22409. [Google Scholar] [CrossRef]
  17. Wang, Y.; Wang, K.; Zhu, Z.; Wang, F.-Y. Adversarial attacks on faster RCNN object detector. Neurocomputing 2020, 382, 87–95. [Google Scholar] [CrossRef]
  18. Mumcu, F.; Yilmaz, Y. Sequential architecture-agnostic black-box attack design and analysis. Pattern Recognit. 2023, 15, 110066–110072. [Google Scholar] [CrossRef]
  19. Tian, J.; Shen, C.; Wang, B.; Xia, X.; Zhang, M.; Lin, C.; Li, Q. LESSON: Multi-Label Adversarial False Data Injection Attack for Deep Learning Locational Detection. In IEEE Transactions on Dependable and Secure Computing; IEEE: Piscataway, NJ, USA, 2024. [Google Scholar]
  20. Kuang, X.; Gao, X.; Wang, L.; Zhao, G.; Ke, L.; Zhang, Q. A discrete cosine transform-based query efficient attack on black-box object detectors. Inf. Sci. 2021, 546, 596–607. [Google Scholar] [CrossRef]
  21. Shibly, K.H.; Hossain, M.D.; Inoue, H.; Taenaka, Y.; Kadobayashi, Y. Towards autonomous driving model resistant to adversarial attack. Appl. Artif. Intell. 2023, 37, 2193461–2193470. [Google Scholar] [CrossRef]
  22. Zhu, H.; Zhu, Y.; Zheng, H.; Ren, Y.; Jiang, W. LIGAA: Generative adversarial attack method based on low-frequency information. Comput. Secur. 2023, 125, 103057–103070. [Google Scholar] [CrossRef]
  23. Wang, Z.; Zhang, C. Attacking object detector by simultaneously learning perturbations and locations. Neural Process. Lett. 2023, 55, 2761–2776. [Google Scholar] [CrossRef]
  24. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28, 112–118. [Google Scholar] [CrossRef] [PubMed]
  25. Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  26. Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2980–2988. [Google Scholar]
  27. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In Computer Vision ECCV 2016: 14th European Conference, Proceedings, Part I, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 21–37. [Google Scholar]
  28. Sun, P.; Zhang, R.; Jiang, Y.; Kong, T.; Xu, C.; Zhan, W.; Tomizuka, M.; Li, L.; Yuan, Z.; Wang, C.; et al. Sparse R-CNN: End-to-end object detection with learnable proposals. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 14454–14463. [Google Scholar]
  29. Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards deep learning models resistant to adversarial attacks. arXiv 2017, arXiv:1706.06083. [Google Scholar]
  30. Xie, C.; Wang, J.; Zhang, Z.; Zhou, Y.; Xie, L.; Yuille, A. Adversarial examples for semantic segmentation and object detection. In Proceedings of the IEEE International Conference on Computer Vision (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1369–1378. [Google Scholar]
  31. Wei, X.; Liang, S.; Chen, N.; Cao, X. Transferable adversarial attacks for image and video object detection. arXiv 2018, arXiv:1811.12641. [Google Scholar]
  32. Du, M.; Bi, D.; Du, M.; Xu, X.; Wu, Z. ULAN: A universal local adversarial network for SAR target recognition based on layer-wise relevance propagation. Remote Sens. 2022, 15, 21. [Google Scholar] [CrossRef]
  33. Wang, D.; Yao, W.; Jiang, T.; Chen, X. Improving Transferability of Universal Adversarial Perturbation with Feature Disruption. IEEE Trans. Image Process. 2024, 33, 722–737. [Google Scholar] [CrossRef] [PubMed]
  34. Li, Y.; Tian, D.; Chang, M.; Bian, X.; Lyu, S. Robust adversarial perturbation on deep proposal-based models. arXiv 2018, arXiv:1809.05962. [Google Scholar]
  35. Wu, H.; Rowlands, S.; Wahlstrom, J. A Man-in-the-Middle Attack against Object Detection Systems. arXiv 2024, arXiv:2208.07174. [Google Scholar]
  36. Chow, K.H.; Liu, L.; Loper, M.; Bae, J.; Gursoy, M.E.; Truex, S.; Wei, W.; Wu, Y. Adversarial objectness gradient attacks in real-time object detection systems. In Proceedings of the 2020 Second IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA), Atlanta, GA, USA, 28–31 October 2020; pp. 263–272. [Google Scholar]
  37. Zhang, X.; Sun, C.; Han, H. Object-fabrication Targeted Attack for Object Detection. arXiv 2022, arXiv:2212.06431. [Google Scholar]
  38. Zhang, H.; Zhou, W.; Li, H. Contextual adversarial attacks for object detection. In Proceedings of the 2020 IEEE International Conference on Multimedia and Expo (ICME), London, UK, 6–10 July 2020; pp. 1–6. [Google Scholar]
  39. Wang, Y.; Tan, Y.; Zhang, W.; Zhao, Y.; Kuang, X. An adversarial attack on DNN-based black-box object detectors. J. Netw. Comput. Appl. 2020, 161, 102634. [Google Scholar] [CrossRef]
  40. Li, Y.; Xu, X.; Xiao, J.; Li, S.; Shen, H.T. Adaptive square attack: Fooling autonomous cars with adversarial traffic signs. IEEE Internet Things J. 2020, 8, 6337–6347. [Google Scholar] [CrossRef]
  41. Dong, Y.; Liao, F.; Pang, T.; Su, H.; Zhu, J.; Hu, X.; Li, J. Boosting adversarial attacks with momentum. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 9185–9193. [Google Scholar]
  42. Wang, S.; Liu, W.; Chang, C.-H. A new lightweight in situ adversarial sample detector for edge deep neural network. IEEE J. Emerg. Sel. Top. Circuits Syst. 2021, 11, 252–266. [Google Scholar] [CrossRef]
  43. Yin, M.; Li, S.; Cai, Z.; Song, C.; Asif, M.S.; Roy-Chowdhury, A.K.; Krishnamurthy, S.V. Exploiting multi-object relationships for detecting adversarial attacks in complex scenes. In Proceedings of the IEEE/CVF International Conference on Computer Vision (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 7858–7867. [Google Scholar]
  44. Shin, D.; Kim, G.; Jo, J.; Park, J. Low Complexity Gradient Computation Techniques to Accelerate Deep Neural Network Training. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 5743–5759. [Google Scholar] [CrossRef]
  45. Zhang, F.; Meng, T.; Xiang, D.; Ma, F.; Sun, X.; Zhou, Y. Adversarial deception against SAR target recognition network. IEEE J. Select.Top. Appl. Earth Obs. Remote Sens. 2022, 15, 4507–4520. [Google Scholar] [CrossRef]
  46. Shih, K.-H.; Chiu, C.-T.; Lin, J.-A.; Bu, Y.-Y. Real-time object detection with reduced region proposal network via multi-feature concatenation. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 2164–2173. [Google Scholar] [CrossRef]
  47. Ma, X.; Niu, Y.; Gu, L.; Wang, Y.; Zhao, Y.; Bailey, J.; Lu, F. Understanding adversarial attacks on deep learning based medical image analysis systems. Pattern Recognit. 2021, 110, 107332. [Google Scholar] [CrossRef]
  48. Liu, M.; Zhang, Z.; Chen, Y.; Ge, J.; Zhao, N. Adversarial attack and defense on deep learning for air transportation communication jamming. IEEE Trans. Intell. Transp. Syst. 2023, 25, 973–986. [Google Scholar] [CrossRef]
  49. Jing, C.; Wu, Y.; Cui, C. Ensemble dynamic behavior detection method for adversarial malware. Future Gener. Comput. Syst. 2022, 130, 193–206. [Google Scholar] [CrossRef]
  50. Li, D.; Zhang, J.; Huang, K. Universal adversarial perturbations against object detection. Pattern Recognit. 2021, 110, 107584. [Google Scholar] [CrossRef]
  51. Li, X.; Li, X.; Li, Z.; Xiong, X.; Khyam, M.O.; Sun, C. Robust vehicle detection in high-resolution aerial images with imbalanced data. IEEE Trans. Artif. Intell. 2021, 2, 238–250. [Google Scholar] [CrossRef]
  52. Wu, H.; He, Z.; Gao, M. Gcevt: Learning global context embedding for vehicle tracking in unmanned aerial vehicle videos. IEEE Geosci. Remote Sens. Lett. 2022, 20, 1–5. [Google Scholar] [CrossRef]
  53. Xu, J.; Li, Y.; Wang, S. AdaZoom: Towards scale-aware large scene object detection. IEEE Trans. Multimed. 2022, 25, 4598–4609. [Google Scholar] [CrossRef]
  54. Zhang, Y.; Zheng, Y. Object tracking in UAV videos by multi-feature correlation filters with saliency proposals. IEEE J. Select. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 5538–5548. [Google Scholar] [CrossRef]
  55. Yuan, Y.; Zhang, Y. OLCN: An optimized low coupling network for small objects detection. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5. [Google Scholar] [CrossRef]
  56. Li, Y.; Pang, Y.; Cao, J.; Shen, J.; Shao, L. Improving single shot object detection with feature scale unmixing. IEEE Trans. Image Process. 2021, 30, 2708–2721. [Google Scholar] [CrossRef]
  57. Wang, W.; Li, F.; Ng, M.K. Structural similarity-based nonlocal variational models for image restoration. IEEE Trans. Image Process. 2019, 28, 4260–4272. [Google Scholar] [CrossRef]
  58. Zhou, Z.; Sun, Y.; Sun, Q.; Li, C.; Ren, Z. Only once attack: Fooling the tracker with adversarial template. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 3173–3184. [Google Scholar] [CrossRef]
  59. Lanfredi, B.; Schroeder, J.; Tasdizen, T. Quantifying the preferential direction of the model gradient in adversarial training with projected gradient descent. Pattern Recognit. 2023, 139, 109430. [Google Scholar] [CrossRef]
  60. Chen, C.; Ye, D.; He, Y.; Tang, L.; Xu, Y. Improving adversarial robustness with adversarial augmentations. IEEE Internet Things J. 2024, 11, 5105–5117. [Google Scholar] [CrossRef]
  61. Huang, Z.; Fan, Y.; Liu, C.; Zhang, W.; Zhang, Y.; Salzmann, M.; Süsstrunk, S.; Wang, J. Fast adversarial training with adaptive step size. IEEE Trans. Image Process. 2023, 32, 6102–6114. [Google Scholar] [CrossRef] [PubMed]
  62. de Araujo-Filho, P.F.; Kaddoum, G.; Ben Nasr, M.C.; Arcoverde, H.F.; Campelo, D.R. Defending Wireless Receivers Against Adversarial Attacks on Modulation Classifiers. IEEE Internet Things J. 2023, 10, 19153–19162. [Google Scholar] [CrossRef]
  63. Li, Z.; Xia, P.; Tao, R.; Niu, H.; Li, B. A new perspective on stabilizing GANs training: Direct adversarial training. IEEE Trans. Emerg. Top. Comput. Intell. 2023, 7, 178–189. [Google Scholar] [CrossRef]
  64. Jia, X.; Zhang, Y.; Wu, B.; Wang, J.; Cao, X. Boosting fast adversarial training with learnable Adversarial Initialization. IEEE Trans. Image Process. 2022, 31, 4417–4430. [Google Scholar] [CrossRef]
  65. Dong, J.; Yang, L.; Wang, Y.; Xie, X.; Lai, J. Toward intrinsic adversarial robustness through probabilistic training. IEEE Trans. Image Process. 2023, 32, 3862–3872. [Google Scholar] [CrossRef]
  66. Ni, S.; Li, J.; Yang, M.; Kao, H.-Y. DropAttack: A random dropped weight attack adversarial training for natural language understanding. IEEE/ACM Trans. Audio Speech Language Process. 2024, 32, 364–373. [Google Scholar] [CrossRef]
  67. Xu, M.; Zhang, T.; Li, Z.; Zhang, D. InfoAT: Improving adversarial training using the information bottleneck principle. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 1255–1264. [Google Scholar] [CrossRef]
Figure 1. The influence of adversarial attack on the object detector.
Figure 2. The flowchart of dynamic bi-level integrated attack.
Figure 3. The attack on the classification level of object detection.
Figure 4. The sampling process of dynamic iteration.
Figure 5. The structure of the bi-level adversarial attack method.
Figure 6. The architecture of integration black-box attack method.
Figure 7. The UAV image samples of the UAVDT dataset.
Figure 8. The optimization process of the parameter Q for mislabeling and fabrication attacks.
Figure 9. The mislabeling attack results in the black-box SSD with different queries.
Figure 10. The fabrication attack results in the black-box SSD with different queries.
Figure 11. The mislabeling attack results in the black-box SSD with different agent numbers.
Figure 12. The fabrication attack results in the black-box SSD with different agent numbers.
Figure 13. The experimental results of the mislabeling attack. (a) YOLOv3; (b) Faster RCNN; (c) RetinaNet; (d) SSD; (e) Sparse RCNN; (f) attack on YOLOv3; (g) attack on Faster RCNN; (h) attack on RetinaNet; (i) attack on SSD; (j) attack on Sparse RCNN.
Figure 14. The experimental results of the fabrication attack. (a) YOLOv3; (b) Faster RCNN; (c) RetinaNet; (d) SSD; (e) Sparse RCNN; (f) attack on YOLOv3; (g) attack on Faster RCNN; (h) attack on RetinaNet; (i) attack on SSD; (j) attack on Sparse RCNN.
Figure 15. The ablation study of the mislabeling attack on SSD with feature map. (a) Clean model; (b) MIM+IBAM; (c) MIM+DIA+IBAM; (d) DBI-Attack.
Figure 16. The ablation study of the fabrication attack on SSD with feature map. (a) Clean model; (b) MIM+IBAM; (c) MIM+DIA+IBAM; (d) DBI-Attack.
Figure 17. The visualized detection results of comparison after white-box mislabeling attacks. (a) Clean model; (b) TOG-Mislabel; (c) DAG; (d) PGD; (e) DBI-Attack (white box).
Figure 18. The visualized detection results of comparison after white-box fabrication attacks. (a) Clean model; (b) PCB; (c) TOG-Fabrication; (d) TFA-Fabrication; (e) DBI-Attack (white box).
Figure 19. The visualized detection results of comparison after black-box mislabeling attacks. (a) Clean model; (b) RAP+IBAM; (c) EBAD; (d) PGD+IBAM; (e) DBI-Attack (Black-Box).
Figure 20. The visualized detection results of comparison after black-box fabrication attacks. (a) Clean model; (b) PCB+IBAM; (c) TOG-Fabrication+IBAM; (d) TFA-fabrication+IBAM; (e) DBI-Attack (Black-Box).
Table 1. The overview of adversarial attack methods for target detection.

Method | Reference | White-Box | Black-Box | Mislabeling | Fabrication
PGD | Madry et al. [29] |  |  |  | 
DAG | Xie et al. [30] |  |  |  | 
UAE | Wei et al. [31] |  |  |  | 
ULAN | Du et al. [32] |  |  |  | 
UAP | Qin et al. [33] |  |  |  | 
RAP | Li et al. [34] |  |  |  | 
PCB | Wu et al. [35] |  |  |  | 
TOG | Chow et al. [36] |  |  |  | 
TFA | Zhang et al. [37] |  |  |  | 
CAP | Zhang et al. [38] |  |  |  | 
WSOD | Kuang et al. [20] |  |  |  | 
EA | Wang et al. [39] |  |  |  | 
ASA | Li et al. [40] |  |  |  | 
EBAD | Cai et al. [14] |  |  |  | 
Table 2. The experimental results with different agent numbers.

Function | Metric | S = 0 | S = 1 | S = 2 | S = 3 | S = 4
Mislabeling Attack | mAP (%) | 71.68 | 58.37 | 5.27 | 5.22 | 4.89
Mislabeling Attack | Time Cost (s) | 0.12 | 3.12 | 4.04 | 7.26 | 10.34
Fabrication Attack | mAP (%) | 71.68 | 63.43 | 8.45 | 10.57 | 9.74
Fabrication Attack | Time Cost (s) | 0.12 | 3.59 | 5.17 | 11.62 | 15.74
Table 3. The statistics of the mislabeling attack results on the UAVDT dataset.

Type | Models | mAP (%) Benign | mAP (%) Adv. | Time Cost (s) Benign | Time Cost (s) Adv. | SSIM
Agent Models (White Box) | YOLOv3 | 73.70 | 3.78 | 0.11 | 2.17 | 0.848
Agent Models (White Box) | Faster RCNN | 69.23 | 1.96 | 0.32 | 3.87 | 0.817
Victim Models (Black-Box) | RetinaNet | 67.61 | 7.69 | 0.10 | 4.24 | 0.879
Victim Models (Black-Box) | SSD | 71.68 | 5.27 | 0.12 | 4.04 | 0.868
Victim Models (Black-Box) | Sparse RCNN | 69.33 | 5.52 | 0.27 | 6.99 | 0.844
Table 4. The statistics of the fabrication attack results on the UAVDT dataset.

Type | Models | mAP (%) Benign | mAP (%) Adv. | Time Cost (s) Benign | Time Cost (s) Adv. | SSIM
Agent Models (White Box) | YOLOv3 | 73.70 | 5.87 | 0.11 | 2.34 | 0.799
Agent Models (White Box) | Faster RCNN | 69.23 | 7.31 | 0.32 | 3.92 | 0.786
Victim Models (Black-Box) | RetinaNet | 67.61 | 11.56 | 0.10 | 4.85 | 0.831
Victim Models (Black-Box) | SSD | 71.68 | 8.45 | 0.12 | 5.17 | 0.833
Victim Models (Black-Box) | Sparse RCNN | 69.33 | 6.68 | 0.27 | 7.29 | 0.815
Table 5. The ablation study of the mislabeling attack on SSD.

Clean | MIM | DIA | BAAM | IBAM | SSD mAP (%)
✓ | - | - | - | - | 71.68
✓ | ✓ | - | - | ✓ | 61.57
✓ | ✓ | ✓ | - | ✓ | 12.23
✓ | ✓ | ✓ | ✓ | ✓ | 5.27
Table 6. The ablation study of the fabrication attack on SSD.

Clean | MIM | DIA | BAAM | IBAM | SSD mAP (%)
✓ | - | - | - | - | 71.68
✓ | ✓ | - | - | ✓ | 66.98
✓ | ✓ | ✓ | - | ✓ | 16.52
✓ | ✓ | ✓ | ✓ | ✓ | 8.45
Table 7. The comparisons of white-box mislabeling attacks.

Attack | Model | mAP (%) | Time Cost (s) | SSIM | Queries
Clean Model | YOLOv3 | 73.70 | 0.11 | 1.000 | 0
Clean Model | Faster RCNN | 69.23 | 0.32 | 1.000 | 0
TOG-Mislabel | YOLOv3 | 4.46 | 0.92 | 0.834 | 30
TOG-Mislabel | Faster RCNN | 3.74 | 1.25 | 0.828 | 30
DAG | YOLOv3 | 71.65 | 9.37 | 0.936 | 160
DAG | Faster RCNN | 8.43 | 9.49 | 0.941 | 160
PGD | YOLOv3 | 4.16 | 6.28 | 0.879 | 40
PGD | Faster RCNN | 5.59 | 6.44 | 0.856 | 40
DBI-Attack (White Box) | YOLOv3 | 3.78 | 2.17 | 0.845 | 20
DBI-Attack (White Box) | Faster RCNN | 1.96 | 3.87 | 0.817 | 20
Table 8. The comparisons of white-box fabrication attacks.

Attack | Model | mAP (%) | Time Cost (s) | FR | SSIM | Queries
Clean Model | YOLOv3 | 73.70 | 0.11 | 0.000 | 1.000 | 0
Clean Model | Faster RCNN | 69.23 | 0.32 | 0.000 | 1.000 | 0
PCB | YOLOv3 | 3.39 | 0.78 | 0.112 | 0.862 | 10
PCB | Faster RCNN | 3.65 | 1.09 | 0.077 | 0.835 | 10
TOG-Fabrication | YOLOv3 | 2.36 | 0.67 | 0.146 | 0.849 | 30
TOG-Fabrication | Faster RCNN | 2.43 | 0.88 | 0.138 | 0.836 | 30
TFA-Fabrication | YOLOv3 | 2.49 | 0.83 | 0.237 | 0.830 | 30
TFA-Fabrication | Faster RCNN | 2.93 | 1.47 | 0.051 | 0.844 | 30
DBI-Attack (White Box) | YOLOv3 | 5.87 | 2.34 | 0.671 | 0.799 | 15
DBI-Attack (White Box) | Faster RCNN | 7.31 | 3.92 | 0.310 | 0.774 | 15
Table 9. The comparisons of black-box mislabeling attacks.

Attack | Model | mAP (%) | Time Cost (s) | SSIM | Queries
Clean Model | RetinaNet | 67.61 | 0.10 | 1.000 | 0
Clean Model | SSD | 71.68 | 0.12 | 1.000 | 0
Clean Model | Sparse RCNN | 69.33 | 0.27 | 1.000 | 0
RAP+IBAM | RetinaNet | 53.58 | 3.09 | 0.803 | 30
RAP+IBAM | SSD | 51.12 | 3.37 | 0.797 | 30
RAP+IBAM | Sparse RCNN | 18.67 | 6.74 | 0.790 | 30
EBAD | RetinaNet | 26.23 | 3.14 | 0.833 | 10
EBAD | SSD | 19.24 | 3.26 | 0.860 | 10
EBAD | Sparse RCNN | 24.19 | 4.51 | 0.827 | 10
TFA-Mislabeling+IBAM | RetinaNet | 11.92 | 12.26 | 0.874 | 40
TFA-Mislabeling+IBAM | SSD | 12.11 | 14.87 | 0.883 | 40
TFA-Mislabeling+IBAM | Sparse RCNN | 14.67 | 15.79 | 0.821 | 40
DBI-Attack (Black-Box) | RetinaNet | 7.69 | 4.24 | 0.879 | 20
DBI-Attack (Black-Box) | SSD | 5.27 | 4.04 | 0.868 | 20
DBI-Attack (Black-Box) | Sparse RCNN | 5.52 | 6.99 | 0.834 | 20
Table 10. The comparison of black-box fabrication attacks.

Attack | Model | mAP (%) | Time Cost (s) | FR | SSIM | Queries
Clean Model | RetinaNet | 67.61 | 0.10 | 0.000 | 1.000 | 0
Clean Model | SSD | 71.68 | 0.12 | 0.000 | 1.000 | 0
Clean Model | Sparse RCNN | 69.33 | 0.27 | 0.000 | 1.000 | 0
PCB+IBAM | RetinaNet | 7.24 | 1.59 | 0.021 | 0.876 | 10
PCB+IBAM | SSD | 8.33 | 1.33 | 0.043 | 0.866 | 10
PCB+IBAM | Sparse RCNN | 6.16 | 2.84 | 0.059 | 0.829 | 10
TOG-Fabrication+IBAM | RetinaNet | 6.26 | 2.29 | 0.122 | 0.838 | 30
TOG-Fabrication+IBAM | SSD | 7.38 | 2.57 | 0.109 | 0.826 | 30
TOG-Fabrication+IBAM | Sparse RCNN | 6.57 | 5.04 | 0.128 | 0.821 | 30
TFA-Fabrication+IBAM | RetinaNet | 5.95 | 4.72 | 0.169 | 0.824 | 30
TFA-Fabrication+IBAM | SSD | 6.31 | 4.59 | 0.214 | 0.817 | 30
TFA-Fabrication+IBAM | Sparse RCNN | 7.18 | 6.30 | 0.036 | 0.837 | 30
DBI-Attack (Black-Box) | RetinaNet | 11.56 | 4.85 | 0.397 | 0.791 | 15
DBI-Attack (Black-Box) | SSD | 8.45 | 5.17 | 0.587 | 0.773 | 15
DBI-Attack (Black-Box) | Sparse RCNN | 6.68 | 7.29 | 0.337 | 0.755 | 15
Table 11. The adversarial training against the black-box DBI-Attack (mAP, %).

Function | Defense | Queries = 5 | Queries = 10 | Queries = 15 | Queries = 20 | Queries = 25
Mislabeling | No Defense | 58.34 | 35.92 | 18.43 | 5.27 | 3.35
Mislabeling | AT | 61.49 | 39.82 | 26.17 | 18.83 | 17.62
Fabrication | No Defense | 41.68 | 21.92 | 8.45 | 6.62 | 6.13
Fabrication | AT | 46.29 | 27.71 | 18.26 | 17.55 | 18.48
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
