Article

Transferable Targeted Adversarial Attack on Synthetic Aperture Radar (SAR) Image Recognition

1 School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
2 School of Computing, Kyung Hee University, Yongin 17113, Republic of Korea
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(1), 146; https://doi.org/10.3390/rs17010146
Submission received: 24 October 2024 / Revised: 12 December 2024 / Accepted: 18 December 2024 / Published: 3 January 2025

Abstract

Deep learning models have been widely applied to synthetic aperture radar (SAR) target recognition, offering end-to-end feature extraction that significantly enhances recognition performance. However, recent studies show that optical image recognition models are broadly vulnerable to adversarial examples, which fool the models by adding imperceptible perturbations to the input. Although the targeted adversarial attack (TAA) has been realized in the white box setup with full access to the SAR model’s knowledge, it is less practical in real-world scenarios where white box access to the target model is not available. To the best of our knowledge, our work is the first to explore transferable TAA on SAR models. Since contrastive learning (CL) is commonly applied to enhance a model’s generalization, we utilize it to improve the generalization of adversarial examples generated on a source model to unseen target models in the black box scenario. Thus, we propose the contrastive learning-based targeted adversarial attack, termed CL-TAA. Extensive experiments demonstrate that our proposed CL-TAA can significantly improve the transferability of adversarial examples and fool SAR models in the black box scenario.

1. Introduction

With its rapid development, deep learning has been widely applied across various fields, including synthetic aperture radar (SAR) target recognition [1,2]. Unlike traditional SAR target recognition [3,4], which typically relies on a two-step process of manual feature extraction followed by classification, deep learning methods [5,6,7,8] offer an end-to-end approach with automatic feature extraction, providing significant advantages in target recognition. Recently, deep convolutional neural network (DCNN) models, such as AlexNet [9], VGGNet [10], ResNet [11], and AMS-CNN [5], have been widely used for SAR target recognition. Convolutional kernels allow these models to extract features effectively, thus significantly improving performance. Notably, AMS-CNN combines convolutional kernels with an attention mechanism and multi-stream feature extraction, enabling it to capture discriminative features.
While deep learning has surpassed traditional methods and achieved remarkable success in target recognition, multiple studies [12,13,14] have demonstrated that these models are highly susceptible to adversarial attacks, where even subtle, imperceptible perturbations added to the input can fool the model. In a targeted adversarial attack (TAA), the model’s predicted label is manipulated to match a specific target label. A non-targeted adversarial attack (non-TAA) is considered successful if the model predicts any label different from the original, assuming the initial prediction was correct. For a TAA to succeed, however, the model must predict the exact target label, making it a more challenging task than a non-TAA. Moreover, transferable TAA is even more challenging because the attacker lacks white box access to the target models and must generate the adversarial examples on a source model.
Although research on transferable TAA has mainly been conducted on optical images, it remains unclear whether SAR target recognition models are vulnerable to targeted adversarial attacks. To address this, we conduct the first comprehensive study investigating whether adversarial examples can fool SAR target recognition models into a specified target class in the black box scenario. Unlike three-channel color images, SAR images are single-channel grayscale. Adversarial perturbations therefore tend to overfit the source model more easily, which makes it harder for the generated adversarial examples to transfer effectively to other target models. Additionally, SAR images tend to contain more noise, further complicating the challenge of achieving a transferable adversarial attack. In other words, what makes transferable TAA particularly challenging is the poor generalization of adversarial examples, which struggle to transfer from the source model to unseen target models. Contrastive learning (CL) [15], a widely used pre-training technique, has demonstrated its ability to improve a model’s generalization across various tasks by encouraging the anchor sample to attract the positive sample and repel the negative samples. This helps the model capture essential features more effectively and improves its ability to handle hard samples. Since CL can improve a model’s generalization to hard samples, we conjecture that it can also benefit the generalization of adversarial examples. Therefore, we incorporate CL into our training process to enhance the transferability of TAA. Specifically, we use InfoNCE [16], one of CL’s most effective loss functions, to optimize adversarial examples. Leveraging the model’s logits, which have been shown to represent the features of an image, we calculate the InfoNCE loss based on the logits of the anchor, positive, and negative samples.
Overall, with the increasing importance of SAR target recognition, it is essential to understand the robustness of these models. Previous research [5,6,7] has shown that these models are susceptible to adversarial attacks, primarily in a white box setting where full access to the model’s structure and weights is available. In contrast, this work focuses on evaluating the adversarial robustness of SAR target recognition models in a more practical and challenging black box setting. Our contributions are summarized as follows:
  • To the best of our knowledge, our work conducts the first comprehensive investigation of the transferability of targeted adversarial attacks against SAR target recognition models.
  • We identify the key challenge of transferable TAA as the limited generalization of adversarial examples and mitigate it by leveraging contrastive learning during training.
  • Extensive experiments verify that our proposed CL-TAA can significantly enhance the performance of transferable TAA.
The rest of the paper is organized as follows: Section 2 discusses related work on adversarial attack and contrastive learning. Section 3 presents the methodology, including classical recognition models, seminal black box attack methods, and our proposed CL-TAA. Section 4 details the experiments, and Section 5 concludes the paper.

2. Related Work

2.1. Adversarial Attack

Deep neural networks are vulnerable to adversarial examples, as revealed in [12,17,18,19,20], where even imperceptible perturbations to the input can fool the model into predicting wrong results. With the revelation of this intriguing property, numerous studies [21,22,23,24,25] have been conducted on model robustness; these studies can be classified by the attacker’s goal and the attacker’s knowledge. From the perspective of the attacker’s goal, adversarial attacks can be divided into non-targeted and targeted adversarial attacks [26,27,28,29,30,31,32,33]. Non-targeted adversarial attacks (non-TAAs) aim to make the model predict any label different from the ground truth [21,22,34]. Targeted attacks, however, pursue a specific output and are considered successful only when the model’s output matches a predefined target label [23,24]. Regarding the attacker’s knowledge, adversarial attacks can be divided into white box and black box attacks [35,36,37]. In the white box setup, the attacker can fully access the model’s structure, parameters, and gradient information, enabling them to exploit the model’s specific vulnerabilities and craft highly tailored attacks [26]. By contrast, the black box setting, which is more practical for real-world scenarios, limits the attacker to the model’s output alone, leaving them without any information about the model itself [38]. The Fast Gradient Sign Method (FGSM) [23] and Projected Gradient Descent (PGD) [24], known for their effectiveness in generating adversarial examples, are widely used in the white box setting. Iterative FGSM (I-FGSM) [39], an extension of FGSM, adopts multiple steps to generate the perturbation and enhance the attack success rate; it should be noted that PGD is sometimes referred to as I-FGSM [39]. The Carlini and Wagner (C&W) attack [40], a more sophisticated white box method, generates highly effective adversarial examples with minimal perturbations. Although these methods are effective in a white box setting, their performance is poor in a black box scenario. Therefore, multiple works [13,14,41] have been proposed to improve the transferability of adversarial examples in the black box setup. Momentum Iterative FGSM (MI-FGSM) [13] incorporates momentum when updating the adversarial examples and significantly improves their transferability across different models. Similarly, Translation-Invariant FGSM (TI-FGSM) [14] introduces translation invariance to generate more transferable adversarial examples, further enhancing their effectiveness. Diverse-Input FGSM (DI-FGSM) [41] applies random transformations to the input, such as scaling and cropping, to improve the generalization of adversarial examples across different models. In our work, we focus on the more challenging targeted adversarial attack under the black box setting for SAR target recognition models.

2.2. Contrastive Learning

Contrastive learning (CL) [15,42], a milestone technology for self-supervised learning, is used to learn augmentation-invariant representations. The core idea of CL is to pull positive pairs together while pushing negative pairs apart, enabling models to learn effective representations. The work in [43] creates a set of surrogate labels via data augmentation, allowing the network to capture effective image features. Unlike this parametric formulation, another work [44] adopts a non-parametric approach by exploiting the apparent visual similarity among categories. In addition, it introduces a memory bank to store representations of individual instances, allowing each sample to be compared efficiently against a wide array of negatives. Similar to using a memory bank, MoCo [45] introduces a momentum encoder with a queue and a moving-averaged dynamic dictionary, which maintains a large and consistent dictionary of negative samples. Besides utilizing a memory bank, leveraging in-batch negative samples can also avoid additional storage, simplifying the training process. SimCLR [46] samples negative examples implicitly from the other samples and their augmented views in the same batch, effectively learning meaningful representations. Another work [47] enhances the feature representation of contrastive models by exploring the importance of local and global feature alignment, capturing both fine-grained and high-level information. Given the strong generalization capabilities of CL, it has been successfully applied in various tasks, such as image classification [43], object detection [45], segmentation [46], and 3D representation [16]. In this work, we utilize CL techniques to improve the generalization of adversarial examples.

3. Methodology

3.1. Background and Related Models

Transferable targeted adversarial attack. We first introduce the targeted adversarial attack for the classification task and define $f_t$ as the target model. Let $D$ represent the labeled image dataset with image–label pairs $(x, y) \in D$, where $x$ represents the image and $y$ is the corresponding ground truth label. The image $x$ has dimensions $H \times W \times C$, with $H$ being the height, $W$ the width, and $C$ the number of channels. Let $\delta$ denote the perturbation; the optimization goal is to generate the adversarial example $x_{adv} = x_{clean} + \delta$, which can fool the model $f_t$ into predicting a specific label $y_t$. Given that the target model is a black box, the purpose of a targeted transfer attack is to generate the adversarial example with a white box source model $f_s$. Instead of directly optimizing $x_{adv}$ to minimize the loss between the target model’s output and the target label, the transferable targeted attack minimizes the loss between the source model’s output and the target label, which can be formulated as follows:
$$\delta^* = \arg\min_{\delta \in S} L_{CE}\big(f_s(x_{clean} + \delta),\, y_t\big),$$
where $\delta \in S$ constrains the maximum perturbation magnitude, ensuring that the perturbation is imperceptible, and $L_{CE}$ is the cross-entropy loss function. After generating the adversarial example $x_{adv}$ with the source model, it can directly attack the target model with the goal of $f_t(x_{adv}) = y_t$. To enhance the transferability of targeted adversarial attacks, we propose to improve this generation process, as detailed in Section 3.2.
Advances in transferable attacks. A transferable attack involves iterative optimization on a single image based on the source model, without requiring model training or additional data. As mentioned in Section 2.1, attacks can be classified as non-targeted or targeted depending on whether a specific target label is pursued. We first introduce the classical methods for transferable targeted attacks. The Iterative Fast Gradient Sign Method (I-FGSM) [48], widely used for transferable targeted attacks, extends the Fast Gradient Sign Method (FGSM) [23] by taking multiple steps with a small step size instead of a single step. After each step, the pixel values are clipped to ensure they remain within an $\epsilon$-neighborhood of the original image. The attack optimization of I-FGSM can be expressed as follows:
$$x_0 = x, \qquad x_{i+1} = x_i - \alpha \cdot \mathrm{sign}\big(\nabla_x J(x_i, y_t)\big),$$
where $x_i$ is the perturbed image at the $i$-th iteration, and $J(\cdot, \cdot)$ is the loss function. To ensure that the perturbation is imperceptible, the maximum allowed magnitude is limited, i.e., it satisfies $\|x_i - x\|_p \le \epsilon$.
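For concreteness, the update above maps to a few lines of PyTorch. The following is a minimal sketch of targeted I-FGSM rather than our exact experimental code; the model handle and the hyperparameter defaults (budget 16/255, step size 2/255, 20 iterations, following Section 3.2) are illustrative.

```python
import torch
import torch.nn.functional as F

def i_fgsm_targeted(model, x_clean, y_target, eps=16/255, alpha=2/255, steps=20):
    """Minimal sketch of targeted I-FGSM; hyperparameter defaults are illustrative."""
    x_adv = x_clean.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y_target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Descend on the targeted loss so the prediction moves toward y_target.
        x_adv = x_adv.detach() - alpha * grad.sign()
        # Project back into the eps-ball around the clean image and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x_clean - eps), x_clean + eps).clamp(0.0, 1.0)
    return x_adv
```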
Momentum Iterative FGSM (MI-FGSM) [13], built upon I-FGSM, utilizes a momentum-based iterative gradient method to improve the transferability of the generated adversarial examples. The momentum method [49] accumulates a velocity vector in the gradient direction of the loss function across iterations to accelerate gradient descent. The accumulated gradients help the optimization barrel through narrow valleys, stabilizing the update directions and reducing the chance of becoming stuck in poor local minima [50,51]. The attack optimization of MI-FGSM can be expressed as follows:
$$g_{i+1} = \mu \cdot g_i + \frac{\nabla_x J(x_i, y_t)}{\|\nabla_x J(x_i, y_t)\|_1}, \qquad x_{i+1} = x_i - \alpha \cdot \mathrm{sign}(g_{i+1}),$$
where $g_i$ and $\mu$ denote the accumulated gradient at the $i$-th iteration and the decay factor, respectively. A similar technique that uses Nesterov momentum instead of standard momentum to enhance transferable attacks is explored in [52].
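As a sketch, the only change relative to the I-FGSM loop above is the momentum accumulator; the variable names mirror the notation in the equation and are illustrative.

```python
# Sketch: inside the I-FGSM loop above, keep a running accumulator g (initialized to zeros)
# and replace the plain sign step with the MI-FGSM momentum update.
g = mu * g + grad / grad.abs().sum().clamp_min(1e-12)   # accumulate the L1-normalized gradient
x_adv = x_adv.detach() - alpha * g.sign()                # step with the sign of the accumulated gradient
```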
Translation-Invariant FGSM (TI-FGSM) [14] is proposed to boost the transferability of adversarial examples to the target model. It addresses the issue of adversarial examples overfitting to a specific model by optimizing them over randomly translated input images, drawing inspiration from the data augmentation techniques typically used to prevent overfitting during model training. Since calculating gradients for multiple translated images is computationally intensive, the authors propose an efficient approach: instead of repeatedly translating the images, a convolutional kernel is applied to the gradient of the original image to compute locally smoothed gradients. The attack optimization of TI-FGSM can be expressed as follows:
$$x_{i+1} = x_i - \alpha \cdot \mathrm{sign}\big(W * \nabla_x J(x_i, y_t)\big),$$
where $W$ represents the convolutional kernel used in TI-FGSM and $*$ denotes convolution. It has been demonstrated in [53] that the transferability of TI-FGSM benefits from using a smaller kernel size to optimize adversarial examples.
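A sketch of this smoothing step is shown below: the gradient is convolved channel-wise with a small kernel $W$ (e.g., a Gaussian) before taking its sign. The helper name, kernel choice, and tensor shapes are assumptions for illustration.

```python
import torch.nn.functional as F

def smooth_gradient(grad, kernel):
    """Sketch of TI-FGSM gradient smoothing: depthwise-convolve the gradient with a small
    kernel W before the sign step. grad has shape (N, C, H, W); kernel is a (k, k) tensor."""
    c = grad.shape[1]
    w = kernel.to(grad).expand(c, 1, *kernel.shape).contiguous()  # one kernel copy per channel
    smoothed = F.conv2d(grad, w, padding=kernel.shape[-1] // 2, groups=c)
    return smoothed  # replaces grad in the sign update above
```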
Diverse-Input FGSM (DI-FGSM) [41], similar to TI-FGSM, is also inspired by data augmentation [9,10,54]; DI-FGSM improves the transferability of adversarial examples by creating diverse input patterns through resizing, cropping, and rotating. Unlike TI-FGSM, which adopts a fixed augmentation parameter across iterations, DI-FGSM performs a different augmentation of the input at each iteration to increase the diversity of input images. The attack optimization of DI-FGSM can be expressed as follows:
$$x_{i+1} = x_i - \alpha \cdot \mathrm{sign}\big(\nabla_x J(T(x_i; p), y_t)\big),$$
where $T(\cdot; \cdot)$ is a stochastic transformation function. $T(x_i; p)$ can be formulated as follows:
$$T(x_i; p) = \begin{cases} T(x_i) & \text{with probability } p, \\ x_i & \text{with probability } 1 - p. \end{cases}$$
$T(\cdot)$ is implemented by resizing the input images to a random size and applying random padding, where zeros are added around the images in a randomized way [55].
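A sketch of this random resize-and-pad transform is given below; the output size matches our 128 × 128 SAR chips, while the lower size bound and probability p are illustrative values rather than those of [41].

```python
import random
import torch.nn.functional as F

def input_diversity(x, out_size=128, min_size=112, p=0.7):
    """Sketch of the DI-FGSM transform T(x; p): with probability p, resize the input to a
    random smaller size and zero-pad it back to out_size at a random offset; otherwise
    return x unchanged. x has shape (N, C, H, W)."""
    if random.random() > p:
        return x
    size = random.randint(min_size, out_size - 1)
    resized = F.interpolate(x, size=(size, size), mode="nearest")
    pad = out_size - size
    left, top = random.randint(0, pad), random.randint(0, pad)
    # F.pad takes (left, right, top, bottom) padding for the last two dimensions.
    return F.pad(resized, (left, pad - left, top, pad - top), value=0.0)
```

The transformed tensor is then fed to the source model when computing the gradient in the update above.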
Deep neural network models. AlexNet [9] is a milestone in improving classification accuracy with neural networks. Unlike traditional machine learning algorithms [56], which rely on multiple stages to extract features, AlexNet performs automatic, end-to-end feature extraction with a network composed of five convolutional layers and three fully connected layers. The success of AlexNet is largely attributed to its innovative structure, which includes a dropout layer, a shift from the Sigmoid function to ReLU for the activation layer, and the replacement of average pooling with max pooling. Since AlexNet excels at extracting features from examples, it has also been applied to the SAR automatic target recognition task [57].
VGGNet [10], proposed by the Visual Geometry Group (VGG) of Oxford University, is a convolutional neural network built on the ideas of AlexNet. It demonstrated that deeper neural networks can improve performance to some extent. One key difference from AlexNet is that VGGNet removes the LRN layer, which was found to have minimal impact on network performance. Additionally, instead of larger 5 × 5 convolutional kernels, VGGNet opts for smaller 3 × 3 kernels. This choice maintains the same receptive field while introducing more non-linear transformations, and it also reduces the number of parameters in the convolutional layers by about 45%. VGGNet has also been applied to SAR target recognition and achieves competitive results [58].
ResNet [11], a seminal model for deep neural networks (DNNs), addresses the difficulty of training increasingly deep networks. Unlike the relatively shallow AlexNet and VGGNet, ResNet can reach 152 layers. While deeper neural networks theoretically excel at capturing complex and detailed image features, they become increasingly difficult to train due to gradient vanishing and explosion. ResNet mitigates this issue by using residual blocks with shortcut connections, where a low-level feature map (an intermediate mapping in the network) is added directly to the input of a higher-level layer.
EfficientNet [59] improves image classification performance by applying a scaling strategy to all dimensions of a CNN, including depth, width, and resolution. The scaling strategy relies on a compound coefficient that adjusts depth, width, and resolution uniformly, ensuring a balanced expansion of the network while optimizing performance. As computational resources increase, EfficientNet scales all dimensions proportionally according to predefined constants, maintaining its efficiency and effectiveness. This combination of a scalable design and a systematic scaling methodology has set new benchmarks in CNN performance, demonstrating its adaptability across diverse tasks.
MobileNet [60] is a lightweight CNN model built on depth-wise separable convolutions. It splits a standard convolution into two operations: a depth-wise convolution for spatial filtering and a point-wise (1 × 1) convolution for combining features, as sketched below. This design dramatically lowers computational complexity and reduces model size without compromising accuracy. MobileNet also provides two adjustable hyperparameters for flexibility: the width multiplier uniformly reduces the number of channels in each layer, while the resolution multiplier decreases the input image resolution, further lowering computational demands. These parameters allow the model to be tailored to specific application requirements, making it well suited for resource-constrained environments.
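To illustrate the depth-wise separable factorization described above, a minimal PyTorch block might look as follows; the Conv–BN–ReLU ordering is a common convention and the channel sizes are arbitrary.

```python
import torch.nn as nn

def depthwise_separable_block(c_in, c_out, stride=1):
    """Sketch of a MobileNet-style block: a per-channel 3x3 depth-wise convolution for
    spatial filtering, followed by a 1x1 point-wise convolution that mixes channels."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, kernel_size=3, stride=stride, padding=1, groups=c_in, bias=False),
        nn.BatchNorm2d(c_in),
        nn.ReLU(inplace=True),
        nn.Conv2d(c_in, c_out, kernel_size=1, bias=False),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )
```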
The AMS-CNN (attention-based multi-stream CNN), introduced in [5], is a pioneering work leveraging CNNs for SAR image recognition. The model is structured into three key components: the convolutional module, the stream module, and the classification module. The convolutional module, consisting of convolutional layers, pooling layers, a spatial attention layer, and a channel attention layer, extracts low-level features. By using these simple yet effective structures, AMS-CNN considerably improves SAR image recognition performance while using fewer parameters than ResNet.
ConvNeXt [61] is a recently developed CNN designed to bridge the performance gap between traditional CNNs and vision transformers. It integrates design principles inspired by vision transformers, such as large kernel sizes, layer normalization, and inverted bottleneck blocks, while maintaining the simplicity and efficiency of traditional ConvNets. It utilizes GELU activation functions and employs larger convolutional kernels to expand the receptive field, which mimics the global self-attention capabilities of transformers. Layer normalization is used in place of batch normalization, and the architecture adopts a simplified design with fewer activation and normalization layers for streamlined efficiency.

3.2. Our Proposed Method

Performing an adversarial attack on a white box model is easier because the attacker can access the model’s structure and weights. Without access to this information, a transferable adversarial attack relies on a source model: the adversarial example generated on the source model is applied directly to the black box models. In this work, we attribute the challenge of this task to the poor generalization of the generated adversarial examples. CL [15] is a widely used pre-training technique for improving a model’s generalization capability before it is trained on the target task. Inspired by this success, we apply CL to the generation process of adversarial examples so that they transfer better to unseen models.
The difference between traditional CL and our proposed CL-TAA is illustrated in Figure 1. Contrastive learning is widely used in unsupervised learning to increase a model’s generalization to unseen hard samples. In essence, it improves generalization by maximizing the similarity between the key and positive features while contrasting the key against the negative features. We conjecture that the same mechanism can also enhance the generalization of adversarial examples. Unlike previous works [46], which utilize CL for visual representation learning to enhance a model’s generalization to hard examples, our work specifically focuses on improving the generalization of adversarial examples. In other words, we aim to utilize CL so that adversarial examples generated on a source model can effectively attack a range of different black box models. Another key distinction from previous studies is that we incorporate CL during the optimization of the adversarial examples, whereas earlier works usually employ CL in the pre-training phase. Typically, CL is used in self-supervised settings, where models are trained on unlabeled data and then fine-tuned for downstream tasks. In our case, CL serves as a technique to improve the generalization of adversarial examples rather than for pre-training.
CL-based TAA Method. Classical CL typically involves three types of samples: the anchor sample, positive samples, and negative samples, whose representations are usually features from an encoder. The anchor sample is the sample of interest, the positive samples are variations or augmentations derived from the anchor sample, and the negative samples are other randomly selected samples. We adopt the same practice in our work to generate targeted adversarial examples; what differs from classical CL is the choice of these samples.
It is commonly understood that DNNs perform feature extraction, where logit values indicate the presence of features in the image and the highest logit value identifies the ground truth class. The logit refers to the output of the model before the final softmax layer [29]. In [29], the authors demonstrate that the logit can be viewed as a feature representation of the image. Since classical DNN models are typically designed without a separate encoder, we adopt the logit for our CL. Specifically, we use the logit of the adversarial image as the anchor sample, because it is the input of interest in this context. It has been demonstrated that the effect of adversarial examples is attributed to the perturbation dominating the clean image content [29]. To reinforce this perturbation dominance, we create the positive sample by blending a random clean image with the adversarial example. The InfoNCE loss [16] has been used in multiple works and has become the de facto standard loss for CL; thus, we adopt it in our work. Following the notation in [45], we denote the logits of the anchor sample, positive sample, and negative samples as $q$, $k_+$, and $k_-$, respectively. To eliminate scale discrepancies, these logits are L2-normalized. Using these normalized logits, the InfoNCE loss applied in contrastive learning-based TAA generation can be formulated as follows:
$$L_{InfoNCE} = -\log \frac{\exp(q \cdot k_+ / \tau)}{\exp(q \cdot k_+ / \tau) + \sum_{i=1}^{K} \exp(q \cdot k_i^- / \tau)},$$
where $\tau$ is the temperature, which controls the sharpness of the similarity distribution between feature vectors and thus affects how the loss weighs positive and negative pairs. Unlike the classical CL practice of generating the negative samples once and reusing them, we draw different negative samples at each iteration to increase the generalization of the adversarial examples. In other words, the negative set $\{k_i^-\}$ in Equation (7) differs across iterations.
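A minimal sketch of Equation (7) operating on L2-normalized logits is given below; the helper name, tensor shapes, and default temperature are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def info_nce(q, k_pos, k_negs, tau=0.1):
    """Sketch of Equation (7). q and k_pos are logits of shape (D,), k_negs has shape (K, D);
    all are L2-normalized before the dot products. tau is the temperature."""
    q, k_pos, k_negs = (F.normalize(t, dim=-1) for t in (q, k_pos, k_negs))
    pos = torch.exp(torch.dot(q, k_pos) / tau)        # similarity to the positive sample
    neg = torch.exp(k_negs @ q / tau).sum()           # similarities to the K negative samples
    return -torch.log(pos / (pos + neg))
```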
Practical Implementation. Contrastive learning requires a large number of SAR images for positive and negative samples, while SAR images are limited and expensive to obtain. To address this issue, we use patches randomly cropped from the to-be-attacked clean image to generate the positive sample. For the negative samples, we randomly select the negative images once and likewise use patches randomly cropped from them at each iteration. For these patches, we apply torchvision’s RandomResizedCrop with the random aspect ratio disabled. We term our proposed method the CL-based targeted adversarial attack (CL-TAA); it drastically reduces the number of clean images required. CL-TAA is summarized in Algorithm 1 and builds on the basic TAA method by incorporating a regularization term, which helps improve the generalization of adversarial examples. Following [62], we set the maximum allowable perturbation magnitude to 16/255, with a step size of 2/255.
Algorithm 1 CL-TAA
Input: Source model $f_s$; input image $x$; images for negative samples $X_{neg} = \{x_{neg}^1, x_{neg}^2, \dots, x_{neg}^M\}$; target label $y_t$; allowed maximum perturbation magnitude $\epsilon$; total number of iterations $T$; step size $\alpha$; loss-weight hyperparameter $\lambda$; cross-entropy loss function $L_{CE}(\cdot, \cdot)$.
Output: Adversarial example $x^*$ with $\|x^* - x\|_\infty \le \epsilon$.
 1: $x_0^* = x$;
 2: for $t = 0, 1, 2, \dots, T-1$ do
 3:   $x_{add} = \mathrm{RandomResizedCrop}(x)$; $x_{pos} = x_{add} + x_t^*$;
 4:   $X_{neg} = \mathrm{RandomResizedCrop}(X_{neg})$;
 5:   $logit_{pos} = f_s(x_{pos})$;
 6:   $logit_{adv} = f_s(x_t^*)$;
 7:   $logit_{neg} = f_s(X_{neg})$;
 8:   Calculate the loss $L_{total} = L_{CE}(f_s(x_t^*), y_t) + \lambda L_{InfoNCE}(logit_{adv}, logit_{pos}, logit_{neg})$;
 9:   Obtain the gradient $g_t = \nabla_x L_{total}$;
10:   Update $x_{t+1}^* = \mathrm{Clip}_x^{\epsilon}\{x_t^* - \alpha \cdot \mathrm{sign}(g_t)\}$;
11: end for
12: return $x^* = x_T^*$.
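Putting Algorithm 1 together, the PyTorch sketch below shows one possible implementation. It reuses the info_nce sketch above, the RandomResizedCrop call with a fixed aspect ratio reflects our reading of the practical implementation, and the defaults follow the settings reported in Section 4.2; it is illustrative rather than our exact training code.

```python
import torch
import torch.nn.functional as F
from torchvision import transforms

def cl_taa(f_s, x, x_negs, y_t, eps=16/255, alpha=2/255, T=20, lam=0.01, tau=0.1):
    """Sketch of Algorithm 1. f_s: white box source model; x: clean image (1, C, H, W);
    x_negs: batch of negative images (M, C, H, W); y_t: target label tensor of shape (1,)."""
    crop = transforms.RandomResizedCrop(x.shape[-2:], ratio=(1.0, 1.0))  # aspect ratio disabled
    x_adv = x.clone().detach()
    for _ in range(T):
        x_adv.requires_grad_(True)
        x_pos = crop(x) + x_adv                 # blend a random crop with the current example
        logit_adv = f_s(x_adv)                  # anchor logit
        logit_pos = f_s(x_pos)                  # positive logit
        logit_neg = f_s(crop(x_negs))           # fresh negative crops every iteration (one crop per batch here)
        loss = F.cross_entropy(logit_adv, y_t) + lam * info_nce(
            logit_adv.squeeze(0), logit_pos.squeeze(0), logit_neg, tau)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() - alpha * grad.sign()
        # Clip to the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv
```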

4. Experiments

4.1. Experimental Setup and Dataset

We conduct our experiments on the moving and stationary target acquisition and recognition (MSTAR) dataset [63,64], with a resolution of 0.3 m. The MSTAR dataset, provided by the Air Force Research Laboratory (AFRL) and the Defense Advanced Research Projects Agency (DARPA), is widely used for SAR image recognition and includes ten military targets. These SAR images are acquired by an X-band radar under HH polarization, with the imaging depression angle set at two configurations: 15° and 17°. As shown in Figure 2, the MSTAR dataset contains ten different military targets, 2S1, BMP2, BRDM2, BTR60, BTR70, D7, T62, T72, ZIL131, and ZSU234, with their corresponding optical images also provided.
We crop the MSTAR SAR images to 128 × 128 pixels, obtaining a total of 5093 images, each labeled as one of the ten target types. In addition, we normalize the images to the range [0, 1] for stable training. To evaluate the proposed method, we adopt SAR images with different imaging depression angles for the training and test sets: a 17° depression angle for the training set and a 15° depression angle for the test set. The details of the ten target types are shown in Table 1, with a total of 2741 images in the training set and 2346 in the test set.
To generate reliable adversarial examples and evaluate the effectiveness of our proposed method, we trained several classical and SAR-based recognition models, including AlexNet, VGG16, ResNet, AMS-CNN, EfficientNet, MobileNet, and ConvNeXt, on the MSTAR dataset. The training and test accuracies of these models are shown in Table 2. We can observe that the test accuracy of all models exceeds 80%, providing a solid foundation for our follow-up experiments.

4.2. Results on Transferable Targeted Attack

As described in Section 3.1, we introduced seven different model architectures, including AlexNet, VGG16, ResNet18, AMS-CNN, EfficientNet, MobileNet, and ConvNeXt, for SAR image recognition. Given that AMS-CNN outperforms the other models, with an accuracy of 98.14%, we adopt it as the source model and report results on all other models. The setup for all these experiments is as follows: the weight of the CL loss is 0.01, the number of negative samples is 50, the perturbation budget is 16/255, the step size is 2/255, and the number of iterations is 20. We randomly select 1000 images from the MSTAR dataset to conduct the experiments. In addition to evaluating our proposed method on the MSTAR dataset, we also provide results on CARABAS-II, another open-source SAR image dataset. The results of CL-TAA on these two datasets are detailed in Table 3 for MSTAR and Table 4 for CARABAS-II. We observe that CL-TAA outperforms TAA by a large margin on both SAR datasets. Taking the AlexNet model as an example, CL-TAA achieves an accuracy of 17.9% on the MSTAR dataset, compared to 14.9% with I-FGSM. This indicates that our proposed CL loss significantly improves the performance of TAA. We also present the confusion matrices of I-FGSM and CL-TAA in Table 5 and Table 6, respectively. I-FGSM causes the predicted labels to cluster on BRDM2, which decreases performance. In contrast, CL-TAA is more effective at predicting the target label.
Visualization results. Figure 3 presents side-by-side comparisons of attacks across different target models and shows that our proposed CL-TAA significantly outperforms I-FGSM on various target models. The heatmaps generated by Grad-CAM [65] for both clean and adversarial images are shown in Figure 4; they indicate that the adversarial perturbation can significantly change the focus areas across different models. We also visualize the logits used in CL in Figure 5, where a clear distinction emerges: the negative sample logits have a very different distribution from the key (anchor) logits, while the positive sample logits closely resemble them. In addition, we randomly select four adversarial examples for visualization in Figure 6, where $x_{adv} = x_{clean} + x_{perturbation}$. Since the perturbation magnitude is limited to 16/255, we scale it to (0, 255) for better visualization. It is challenging to distinguish the clean images from their adversarial counterparts, suggesting that the adversarial examples generated by CL-TAA are not easily detectable.

4.3. Comparison with SOTA Techniques

Transferable targeted adversarial attacks (TAAs) have long been considered challenging in computer vision. The difficulty lies in crafting adversarial examples that can fool both a specific model and other unseen models, and this challenge is further amplified when the goal is to force a specific target label rather than merely mislead the model. As mentioned in Section 2.1, several works [13,14,41,66] have explored ways to improve the transferability of generated adversarial examples, where the seminal methods include MI-FGSM [13], TI-FGSM [14], and DI-FGSM [41]. Essentially, our proposed CL loss is similar to these three methods because they all regularize the input gradient. We experiment with these three methods, with results shown in Table 7. Surprisingly, we find that the benefits of MI-FGSM and TI-FGSM for improving transferability to black box models are negligible, whereas DI-FGSM yields a noticeable improvement in black box transferability, though it is smaller than the improvement achieved by our proposed CL loss. While all three methods are effective on optical images, their performance varies when applied to SAR images, with DI-FGSM performing best. We attribute this to the fact that MI-FGSM and TI-FGSM primarily modify the gradients, having less effect on overfitting for SAR images, whereas DI-FGSM directly enhances the diversity of the input SAR images, reducing overfitting to the source model. A more detailed investigation of this phenomenon is left for future work.

4.4. Ablation Study

Weight of CL loss. The weight of the CL loss is a crucial parameter that determines its contribution to the total loss. Adjusting this weight directly affects how the CL loss influences the optimization process: a well-balanced weight can enhance performance, while an improper setting may degrade it. To identify the optimal weight, we conduct an ablation study on this parameter, shown in Table 8. We observe that the performance of TAA first increases and then decreases as the weight of the CL loss grows from 0 to 100. Specifically, the performance gradually increases as the weight rises from 0 to 0.01 and then decreases from 0.01 to 100. This indicates that an overly large weight detracts from the primary optimization goal, resulting in reduced performance. In our paper, we set the weight of the CL loss to 0.01.
Number of negative samples. The negative samples in contrastive learning are designed to push the anchor away, creating a repelling effect, whereas the positive sample attracts the anchor. This means that the anchor focuses more on the features shared with the positive sample, and diverse negative samples help the anchor learn a more generalized feature representation. Therefore, we conduct an ablation study on the number of negative samples (1, 10, 50, 100), shown in Table 9. We observe that increasing the number of negative samples significantly improves the performance of targeted adversarial attacks (TAAs), indicating that incorporating diverse negative sample representations benefits TAA optimization. The variety of negative samples likely helps the adversarial examples focus on more discriminative features, strengthening their generalization ability. To further increase the diversity of negative sample representations, we adopt different negative samples at each iteration.
Temperature. It has been demonstrated that the temperature has a significant influence on the performance of contrastive learning [67,68]. An ablation study on different temperatures is shown in Table 10. We observe that the performance of TAA decreases when the temperature is set to a small value. A lower temperature causes the optimization to focus more on hard negative samples [67,68], and, as noted in [68], a smaller temperature effectively corresponds to a smaller number of negative samples. Consequently, a lower temperature is expected to hurt performance when a large number of negative samples is needed, as it reduces the effective diversity of negatives. In our work, we set the temperature to 0.1.
Size of perturbation budget. The perturbation budget determines the maximum allowable perturbation magnitude; it limits how much the input can be modified while keeping the perturbation imperceptible. Following [48], we experiment with four perturbation budgets, 4/255, 8/255, 12/255, and 16/255, with the results presented in Table 11. As expected, larger perturbation budgets lead to better performance, but at the cost of more visible perturbations. It is worth noting that many previous works [14,41] commonly use a perturbation budget of 16/255, as it strikes a balance between effectiveness and subtlety. For this reason, we adopt the same budget in our work to ensure comparability and maintain imperceptibility while maximizing attack performance.
Step size. Our CL-TAA method builds on I-FGSM to update the perturbation, leveraging the benefits of a multi-step attack. The step size plays a critical role in this process, as it significantly affects the effectiveness of the attack. A larger step size can lead to faster convergence but makes it harder to settle on the optimal point, whereas a smaller step size allows finer adjustments but requires more iterations and increases the risk of getting stuck in a local optimum. A proper step size is therefore essential for balancing attack efficiency and accuracy. The results for different step sizes are shown in Table 12, where a step size of 2/255 performs best. By default, we set the step size to 2/255.
Number of iterations. We investigate the impact of the number of optimization iterations on the performance of adversarial examples. We experiment with various numbers of iterations and present the results against AlexNet, VGG16, ResNet, EfficientNet, MobileNet, and ConvNeXt in Figure 7, which shows how the optimization length influences the effectiveness of adversarial attacks across different model architectures. We observe a clear improvement in performance as the number of iterations increases from 1 to 20. Beyond this point, the performance of transferable TAA saturates or slightly decreases with additional iterations. Since 20 iterations yield the best performance, we adopt this setting in our experiments.

5. Conclusions

This work offers the first comprehensive study on TAA against SAR target recognition models in a black box setting, where the attacker has no access to the target models. Given the limited generalization of adversarial examples generated by the classical I-FGSM method, we propose leveraging CL to enhance their generalization and effectiveness. Unlike classical CL used for pre-training, we apply it during the optimization stage by leveraging the logits of different samples. Specifically, we use the logit of the anchor sample to attract the positive sample while repelling the negative samples, significantly improving the transferability of adversarial examples. Extensive experiments confirm the high transferability and effectiveness of the proposed CL-TAA. The success of our attack method highlights the importance of developing adversarially robust SAR target recognition models.

Author Contributions

Conceptualization, C.Z. and X.H.; methodology, S.Z.; software, S.Z. and D.H.; validation, S.Z., C.H. and Y.H.; formal analysis, S.Z. and D.H.; investigation, S.Z.; resources, C.Z.; data curation, C.H.; writing—original draft preparation, S.Z. and C.L.; writing—review and editing, S.Z. and C.Z.; visualization, C.H.; supervision, C.Z.; project administration, C.Z. and X.H.; funding acquisition, C.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partly supported by Start-up Funding for Newly Introduced Talents in Shenzhen (CA11409031), Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (IITP-2022-II220078, Explainable Logical Reasoning for Medical Knowledge Generation) and under the ITRC (Information Technology Research Center) support program (IITP-2024-RS-2023-00259004) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).

Data Availability Statement

The research data is available at https://www.sdms.afrl.af.mil/index.php?collection=mstar&page=targets (accessed on 1 October 2024).

Acknowledgments

We thank the above grants for providing help in APC. We also thank the reviewers and editors for their constructive comments in improving the quality of our work.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SAR	Synthetic aperture radar
TAA	Targeted adversarial attack
CL	Contrastive learning
CL-TAA	Contrastive learning-based targeted adversarial attack
DCNN	Deep convolutional neural network
non-TAA	Non-targeted adversarial attack
FGSM	Fast Gradient Sign Method
PGD	Projected Gradient Descent
I-FGSM	Iterative Fast Gradient Sign Method
MI-FGSM	Momentum Iterative Fast Gradient Sign Method
TI-FGSM	Translation-Invariant Fast Gradient Sign Method
DI-FGSM	Diverse-Input Fast Gradient Sign Method
MoCo	Momentum Contrast
LRN	Local response normalization
VGG	Visual Geometry Group
DNNs	Deep neural networks
MSTAR	Moving and Stationary Target Acquisition and Recognition
AFRL	Air Force Research Laboratory
DARPA	Defense Advanced Research Projects Agency

References

  1. Chen, S.; Wang, H. SAR target recognition based on deep learning. In Proceedings of the 2014 International Conference on Data Science and Advanced Analytics (DSAA), Shanghai, China, 30 October–1 November 2014; pp. 541–547. [Google Scholar]
  2. Gao, F.; Yue, Z.; Wang, J.; Sun, J.; Yang, E.; Zhou, H. A novel active semisupervised convolutional neural network algorithm for SAR image recognition. Comput. Intell. Neurosci. 2017, 2017, 3105053. [Google Scholar] [CrossRef]
  3. Anagnostopoulos, G.C. SVM-based target recognition from synthetic aperture radar images using target region outline descriptors. Nonlinear Anal. Theory Methods Appl. 2009, 71, e2934–e2939. [Google Scholar] [CrossRef]
  4. Cha, M.; Majumdar, A.; Kung, H.T.; Barber, J. Improving SAR automatic target recognition using simulated images under deep residual refinements. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 2606–2610. [Google Scholar]
  5. Zheng, S.; Hao, X.; Zhang, C.; Zhou, W.; Duan, L. Towards Lightweight Deep Classification for Low-Resolution Synthetic Aperture Radar (SAR) Images: An Empirical Study. Remote Sens. 2023, 15, 3312. [Google Scholar] [CrossRef]
  6. Yi, G.; Hao, X.; Yan, X.; Dai, J.; Liu, Y.; Han, Y. Automatic modulation recognition of radiation source signals based on two-dimensional data matrix and improved residual neural network. Def. Technol. 2024, 33, 364–373. [Google Scholar] [CrossRef]
  7. Yi, G.; Hao, X.; Yan, X.; Wang, J.; Dai, J. Automatic Modulation Recognition for Radio Frequency Proximity Sensor Signals Based on Masked Autoencoders and Transfer Learning. IEEE Trans. Aerosp. Electron. Syst. 2024, 60, 8700–8712. [Google Scholar] [CrossRef]
  8. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 8–10 June 2015; pp. 1–9. [Google Scholar]
  9. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25. [Google Scholar] [CrossRef]
  10. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  11. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  12. Szegedy, C. Intriguing properties of neural networks. arXiv 2013, arXiv:1312.6199. [Google Scholar]
  13. Dong, Y.; Liao, F.; Pang, T.; Su, H.; Zhu, J.; Hu, X.; Li, J. Boosting adversarial attacks with momentum. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 9185–9193. [Google Scholar]
  14. Dong, Y.; Pang, T.; Su, H.; Zhu, J. Evading defenses to transferable adversarial examples by translation-invariant attacks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4312–4321. [Google Scholar]
  15. Hadsell, R.; Chopra, S.; LeCun, Y. Dimensionality reduction by learning an invariant mapping. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; Volume 2, pp. 1735–1742. [Google Scholar]
  16. Oord, A.v.d.; Li, Y.; Vinyals, O. Representation learning with contrastive predictive coding. arXiv 2018, arXiv:1807.03748. [Google Scholar]
  17. Biggio, B.; Fumera, G.; Roli, F. Security evaluation of pattern classifiers under attack. IEEE Trans. Knowl. Data Eng. 2013, 26, 984–996. [Google Scholar] [CrossRef]
  18. Brendel, W.; Rauber, J.; Bethge, M. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. arXiv 2017, arXiv:1712.04248. [Google Scholar]
  19. Ilyas, A.; Engstrom, L.; Athalye, A.; Lin, J. Black-box adversarial attacks with limited queries and information. In Proceedings of the International Conference on Machine Learning, Beijing, China, 14–16 November 2018; pp. 2137–2146. [Google Scholar]
  20. Ilyas, A.; Engstrom, L.; Madry, A. Prior convictions: Black-box adversarial attacks with bandits and priors. arXiv 2018, arXiv:1807.07978. [Google Scholar]
  21. Papernot, N.; McDaniel, P.; Jha, S.; Fredrikson, M.; Celik, Z.B.; Swami, A. The limitations of deep learning in adversarial settings. In Proceedings of the 2016 IEEE European Symposium on Security and Privacy (EuroS&P), Saarbrücken, Germany, 21–24 March 2016; pp. 372–387. [Google Scholar]
  22. Moosavi-Dezfooli, S.M.; Fawzi, A.; Frossard, P. Deepfool: A simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 2574–2582. [Google Scholar]
  23. Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and harnessing adversarial examples. arXiv 2014, arXiv:1412.6572. [Google Scholar]
  24. Mądry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards deep learning models resistant to adversarial attacks. Stat 2017, 1050. [Google Scholar]
  25. Moosavi-Dezfooli, S.M.; Fawzi, A.; Fawzi, O.; Frossard, P. Universal adversarial perturbations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1765–1773. [Google Scholar]
  26. Chakraborty, A.; Alam, M.; Dey, V.; Chattopadhyay, A.; Mukhopadhyay, D. Adversarial attacks and defences: A survey. arXiv 2018, arXiv:1810.00069. [Google Scholar] [CrossRef]
  27. Croce, F.; Hein, M. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In Proceedings of the International Conference on Machine Learning, Shenzhen, China, 15–17 February 2020; pp. 2206–2216. [Google Scholar]
  28. Zhang, S.; Zuo, D.; Yang, Y.; Zhang, X. A transferable adversarial belief attack with salient region perturbation restriction. IEEE Trans. Multimed. 2022, 25, 4296–4306. [Google Scholar] [CrossRef]
  29. Zhang, C.; Benz, P.; Imtiaz, T.; Kweon, I.S. Understanding adversarial examples from the mutual influence of images and perturbations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Online, 14–19 June 2020; pp. 14521–14530. [Google Scholar]
  30. Zhang, C.; Benz, P.; Karjauv, A.; Cho, J.W.; Zhang, K.; Kweon, I.S. Investigating top-k white-box and transferable black box attack. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 15085–15094. [Google Scholar]
  31. Wang, X.; Chen, H.; Sun, P.; Li, J.; Zhang, A.; Liu, W.; Jiang, N. AdvST: Generating Unrestricted Adversarial Images via Style Transfer. IEEE Trans. Multimed. 2023, 26, 4846–4858. [Google Scholar] [CrossRef]
  32. Wang, J.; Zhao, J.; Yin, Q.; Luo, X.; Zheng, Y.; Shi, Y.Q.; Jha, S.K. SmsNet: A new deep convolutional neural network model for adversarial example detection. IEEE Trans. Multimed. 2021, 24, 230–244. [Google Scholar] [CrossRef]
  33. Cheng, Y.; Guo, Q.; Juefei-Xu, F.; Lin, S.W.; Feng, W.; Lin, W.; Liu, Y. Pasadena: Perceptually aware and stealthy adversarial denoise attack. IEEE Trans. Multimed. 2021, 24, 3807–3822. [Google Scholar] [CrossRef]
  34. Rathore, P.; Basak, A.; Nistala, S.H.; Runkana, V. Untargeted, targeted and universal adversarial attacks and defenses on time series. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–8. [Google Scholar]
  35. Lin, G.; Pan, Z.; Zhou, X.; Duan, Y.; Bai, W.; Zhan, D.; Zhu, L.; Zhao, G.; Li, T. Boosting adversarial transferability with shallow-feature attack on SAR images. Remote Sens. 2023, 15, 2699. [Google Scholar] [CrossRef]
  36. Peng, B.; Peng, B.; Yong, S.; Liu, L. An empirical study of fully black-box and universal adversarial attack for SAR target recognition. Remote Sens. 2022, 14, 4017. [Google Scholar] [CrossRef]
  37. Du, C.; Zhang, L. Adversarial attack for SAR target recognition based on UNet-generative adversarial network. Remote Sens. 2021, 13, 4358. [Google Scholar] [CrossRef]
  38. Huang, X.; Lu, Z.; Peng, B. Enhancing Transferability with Intra-Class Transformations and Inter-Class Nonlinear Fusion on SAR Images. Remote Sens. 2024, 16, 2539. [Google Scholar] [CrossRef]
  39. Kurakin, A.; Goodfellow, I.; Bengio, S. Adversarial machine learning at scale. arXiv 2016, arXiv:1611.01236. [Google Scholar]
  40. Carlini, N.; Wagner, D. Towards evaluating the robustness of neural networks. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–26 May 2017; pp. 39–57. [Google Scholar]
Figure 1. Illustration of traditional contrastive learning (CL) and our proposed contrastive learning-based targeted adversarial attack (CL-TAA). There are two main differences between traditional CL and our method. First, traditional CL aims to improve the model’s generalization to hard samples, while our CL-TAA focuses on enhancing the generalization of adversarial examples across different black box models. Second, traditional CL is typically used during the pre-training stage, whereas our CL-TAA is applied while the adversarial examples are generated on the source model.
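To make this setup concrete, the following is a minimal, illustrative sketch of a CL-TAA-style attack loop. It is not the authors' released implementation: the I-FGSM-style update, the use of source-model logits for the contrastive term, and the choice of positives (clean target-class samples) and negatives (clean samples from other classes) are assumptions, and the default hyperparameters simply mirror the best-performing settings in the ablation tables (λ = 10^-2, T = 0.1, ϵ = 16/255, α = 2/255).

import torch
import torch.nn.functional as F

def cl_taa_attack(model, x, y_target, pos_logits, neg_logits,
                  eps=16 / 255, alpha=2 / 255, steps=100,
                  cl_weight=1e-2, temperature=0.1):
    """Illustrative CL-TAA-style sketch (assumed formulation, not released code).

    x:          (B, C, H, W) clean SAR images in [0, 1]
    y_target:   (B,) target labels to be induced on the black box models
    pos_logits: (B, K) source-model logits of clean target-class samples (positives)
    neg_logits: (N, K) source-model logits of clean samples from other classes (negatives)
    """
    pos = F.normalize(pos_logits, dim=1)
    neg = F.normalize(neg_logits, dim=1)
    x_adv = x.clone().detach()

    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)

        # Targeted classification loss: pull the prediction towards the target label.
        ce_loss = F.cross_entropy(logits, y_target)

        # InfoNCE-style term: attract the adversarial logits to the positive
        # (target-class) logits and repel them from the N negatives.
        z = F.normalize(logits, dim=1)
        pos_sim = (z * pos).sum(dim=1, keepdim=True)   # (B, 1)
        neg_sim = z @ neg.t()                          # (B, N)
        sims = torch.cat([pos_sim, neg_sim], dim=1) / temperature
        nce_target = torch.zeros(x.size(0), dtype=torch.long, device=x.device)
        info_nce = F.cross_entropy(sims, nce_target)

        loss = ce_loss + cl_weight * info_nce
        grad = torch.autograd.grad(loss, x_adv)[0]

        # Gradient descent on the targeted loss, then projection onto the
        # eps-ball around the clean image and the valid pixel range.
        x_adv = x_adv.detach() - alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)

    return x_adv

In this formulation, the InfoNCE term acts as a regularizer on top of the targeted cross-entropy loss, which corresponds to the role of the CL loss weight λ studied in Table 8.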
Figure 2. SAR images for ten classes in the MSTAR dataset and their corresponding optical images.
Figure 3. Targeted transfer success rates (%) on different models.
Figure 4. Visualization of the heatmap for different models.
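The heatmaps in Figure 4 are class-activation-style visualizations of where each model attends on the SAR image. A minimal Grad-CAM sketch of one common way such maps can be produced follows; the layer choice and function name are illustrative assumptions, not the paper's exact procedure.

import torch
import torch.nn.functional as F

def grad_cam(model, x, class_idx, target_layer):
    """Compute a Grad-CAM map for a single image x of shape (1, C, H, W)."""
    feats, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))

    logits = model(x)                                  # (1, num_classes)
    model.zero_grad()
    logits[0, class_idx].backward()                    # gradients w.r.t. the chosen class
    h1.remove(); h2.remove()

    a = feats[0]                                       # (1, K, h, w) activations
    w = grads[0].mean(dim=(2, 3), keepdim=True)        # channel-wise importance weights
    cam = F.relu((w * a).sum(dim=1))                   # (1, h, w)
    return cam / (cam.max() + 1e-8)                    # normalized to [0, 1]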
Figure 5. Visualization of logits used for CL-TAA.
Figure 6. Adversarial examples generated by CL-TAA.
Figure 7. Effect of the number of iterations. We report the results for different numbers of iterations under the black box setting, using AMS-CNN as the source model.
Table 1. Number of samples in the training and test sets for the ten types of military targets.

Class     Training Set   Test Set
2S1       299            274
BMP2      233            195
BRDM2     298            195
BTR60     256            195
BTR70     233            196
D7        299            274
T62       299            273
T72       232            196
ZIL131    299            274
ZSU234    299            274
Table 2. Training and test accuracy of deep learning models (%).

Model          Dataset   Training Accuracy   Testing Accuracy
AlexNet        MSTAR     100                 96.78
VGG16          MSTAR     100                 97.36
ResNet18       MSTAR     100                 96.57
EfficientNet   MSTAR     100                 94.84
MobileNet      MSTAR     100                 83.80
ConvNeXt       MSTAR     100                 88.66
AMS-CNN        MSTAR     100                 98.14
Table 3. Quantitative evaluation of the black box targeted attack using AMS-CNN as the source model on the MSTAR dataset (%). Columns are the target models.

Attack    AlexNet   VGG16   ResNet18   EfficientNet   MobileNet   ConvNeXt
FGSM      10.9      10.5    10.3       10.9           11.2        10.9
I-FGSM    14.9      15.5    13.6       13.2           11.7        15.2
CL-TAA    17.9      20.0    17.3       14.3           15.2        17.5
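As a reading aid for Tables 3 and 4: each entry is the percentage of adversarial examples, generated on the AMS-CNN source model, that the listed black box target model classifies as the assigned target label. A minimal sketch of this metric, with an illustrative data loader name:

import torch

@torch.no_grad()
def targeted_success_rate(target_model, adv_loader):
    """adv_loader yields (adversarial image batch, assigned target label batch)."""
    hits, total = 0, 0
    target_model.eval()
    for x_adv, y_target in adv_loader:
        preds = target_model(x_adv).argmax(dim=1)
        hits += (preds == y_target).sum().item()     # success only on the exact target label
        total += y_target.numel()
    return 100.0 * hits / total                      # percentage, as in Tables 3 and 4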
Table 4. Quantitative evaluation of the black box targeted attack using AMS-CNN as the source model on the CARABAS-II dataset (%). Columns are the target models.

Attack    AlexNet   VGG16   ResNet18   EfficientNet   MobileNet   ConvNeXt
FGSM      10.46     10.28   11.25      7.14           14.28       15.87
I-FGSM    14.67     11.90   16.67      10.31          18.42       28.73
CL-TAA    15.87     12.69   19.04      11.11          20.63       30.95
Table 5. Confusion matrix of I-FGSM (%). We present the results using AMS-CNN as the source model. Rows are predicted labels; columns are target labels.

Predicted   BTR70   ZSU234   ZIL131   BMP2   BRDM2   BTR60   2S1   T62   T72   D7
BTR70       0.4     0.2      0.4      0.3    0.0     0.8     0.4   0.4   0.2   0.6
ZSU234      1.6     2.3      1.7      1.0    0.8     0.4     1.4   1.2   0.6   0.7
ZIL131      1.1     2.1      0.5      1.5    0.6     0.4     0.7   0.7   0.6   0.5
BMP2        0.3     0.1      0.5      0.6    0.0     0.6     0.2   0.1   0.8   0.3
BRDM2       3.4     2.4      2.6      4.3    4.9     2.5     2.7   2.2   3.7   2.9
BTR60       1.5     0.3      0.3      0.0    0.4     0.6     0.7   1.0   0.6   0.3
2S1         0.7     1.7      2.7      1.0    0.7     2.0     1.9   1.0   1.8   2.1
T62         1.2     0.6      2.4      0.6    1.5     1.7     1.0   2.7   1.2   1.6
T72         0.0     0.3      0.1      0.0    0.0     0.1     0.0   0.1   0.1   0.2
D7          0.2     0.3      0.4      0.9    0.5     0.0     0.5   0.0   0.4   0.9
Table 6. Confusion matrix of CL-TAA (%). We present the results using AMS-CNN as the source model. Rows are predicted labels; columns are target labels.

Predicted   BTR70   ZSU234   ZIL131   BMP2   BRDM2   BTR60   2S1   T62   T72   D7
BTR70       1.5     0.3      1.0      1.1    0.1     1.1     1.4   0.8   0.5   0.8
ZSU234      1.6     3.6      2.3      1.3    1.0     0.7     1.7   1.6   1.1   0.3
ZIL131      1.4     1.2      1.3      1.5    1.4     0.9     1.2   1.0   0.7   0.5
BMP2        1.1     0.1      0.5      1.4    0.6     1.0     0.2   0.8   1.7   0.2
BRDM2       1.6     1.3      0.7      1.7    2.7     0.9     0.9   1.1   1.3   1.7
BTR60       1.6     0.2      0.3      0.1    0.3     1.1     1.2   0.7   0.6   0.1
2S1         0.2     1.7      2.7      1.0    1.3     1.6     1.9   1.2   1.9   1.9
T62         0.9     0.6      1.1      0.4    0.7     1.3     0.2   1.9   1.2   1.5
T72         0.1     1.0      1.0      0.7    1.0     0.5     0.5   0.3   0.8   0.2
D7          0.4     0.5      0.8      1.1    0.7     0.1     0.4   0.1   0.3   1.7
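Tables 5 and 6 tabulate, for each assigned target label (column), how the black box model's predictions (rows) are distributed, with all entries normalized by the total number of adversarial examples so that the whole matrix sums to roughly 100%. The diagonal entries therefore correspond to targeted successes: they sum to 14.9% for I-FGSM and 17.9% for CL-TAA, consistent with the transfer success rates reported above. A minimal sketch under that assumption, with an illustrative loader and class ordering:

import torch

@torch.no_grad()
def targeted_confusion_matrix(target_model, adv_loader, num_classes=10):
    """Rows: predicted labels; columns: assigned target labels (percentages)."""
    cm = torch.zeros(num_classes, num_classes)
    target_model.eval()
    for x_adv, y_target in adv_loader:
        preds = target_model(x_adv).argmax(dim=1)
        for p, t in zip(preds, y_target):
            cm[p, t] += 1
    return 100.0 * cm / cm.sum()        # all entries sum to 100; diagonal = targeted successes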
Table 7. Targeted adversarial attack on the MSTAR dataset (%). We present the results using AMS-CNN as the source model. Columns are the target models.

Attack     AlexNet   VGG16   ResNet18   EfficientNet   MobileNet   ConvNeXt
I-FGSM     14.9      15.5    13.6       13.2           11.7        15.2
MI-FGSM    15.2      13.1    13.5       11.6           9.2         16.0
TI-FGSM    15.2      14.8    13.4       12.7           12.4        14.9
DI-FGSM    16.8      15.9    13.9       13.2           12.6        15.5
CL-TAA     17.9      20.0    17.3       14.3           15.2        17.5
Table 8. Effect of the CL loss weight. We report the results with different values of the regularization parameter λ under the black box setting.

λ        AlexNet   VGG16   ResNet18   EfficientNet   MobileNet   ConvNeXt
0        14.9      15.5    13.6       13.2           11.7        15.2
10^-2    17.9      20.0    17.3       14.3           15.2        17.5
1        16.4      18.8    15.7       13.4           13.8        15.7
10^2     15.5      16.9    14.7       12.4           13.3        14.5
Table 9. Effect of the number of negative samples. We report the results with different numbers of negative samples N under the black box setting.

N     AlexNet   VGG16   ResNet18   EfficientNet   MobileNet   ConvNeXt
1     15.0      16.6    14.9       12.0           12.8        15.5
10    17.6      19.6    16.2       13.1           14.2        16.7
50    17.9      20.0    17.3       14.3           15.2        17.5
100   18.2      20.9    17.9       14.5           15.4        17.7
Table 10. Effect of the temperature of InfoNCE. We report the results with different InfoNCE temperatures T under the black box setting.

T      AlexNet   VGG16   ResNet18   EfficientNet   MobileNet   ConvNeXt
0.01   16.6      17.7    14.7       12.4           13.7        15.0
0.05   16.7      19.3    16.0       13.4           14.2        16.3
0.1    17.9      20.0    17.3       14.3           15.2        17.5
0.5    16.8      19.9    17.2       14.2           14.5        17.3
Table 11. Effect of the perturbation budget. We report the results with different values of the maximum perturbation ϵ under the black box setting.

ϵ        AlexNet   VGG16   ResNet18   EfficientNet   MobileNet   ConvNeXt
4/255    11.8      12.3    11.0       11.8           12.2        11.9
8/255    13.9      16.1    13.0       13.1           12.8        13.6
12/255   16.7      17.3    15.1       14.1           15.0        15.6
16/255   17.9      20.0    17.3       14.3           15.2        17.5
Table 12. Effect of the step size. We report the results with different step sizes α under the black box setting.

α       AlexNet   VGG16   ResNet18   EfficientNet   MobileNet   ConvNeXt
1/255   16.0      18.2    16.3       14.1           15.0        17.0
2/255   17.9      20.0    17.3       14.3           15.2        17.5
4/255   17.2      19.4    16.5       13.2           13.8        17.0
6/255   17.0      19.0    15.6       12.6           13.0        16.5
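For reference, ϵ (Table 11) and α (Table 12) play their usual roles in the iterative targeted update that CL-TAA is assumed to build on, i.e., the standard targeted I-FGSM step (written here in LaTeX notation):

x^{\mathrm{adv}}_{t+1} = \operatorname{Clip}_{x,\epsilon}\!\left( x^{\mathrm{adv}}_{t} - \alpha \cdot \operatorname{sign}\!\left( \nabla_{x} \mathcal{L}\!\left( f(x^{\mathrm{adv}}_{t}),\, y_{\mathrm{target}} \right) \right) \right), \qquad \left\| x^{\mathrm{adv}}_{t} - x \right\|_{\infty} \le \epsilon,

where Clip_{x,ϵ} projects the perturbed image back into the ℓ∞ ball of radius ϵ around the clean image x. With the best-performing setting α = 2/255 and ϵ = 16/255, a pixel can reach the budget boundary after as few as 8 iterations; Table 12 shows that larger step sizes (4/255 and 6/255) slightly reduce the transfer success rates.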
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
