Article

Incremental SAR Automatic Target Recognition with Divergence-Constrained Class-Specific Dictionary Learning

by Xiaojie Ma 1, Xusong Bu 1, Dezhao Zhang 1, Zhaohui Wang 1,2,* and Jing Li 1

1 China Satellite Network Digital Technology Co., Ltd., Xiong’an New Area 071800, China
2 School of Computer Science, China University of Geosciences (Wuhan), Wuhan 430074, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(12), 2090; https://doi.org/10.3390/rs17122090
Submission received: 30 April 2025 / Revised: 29 May 2025 / Accepted: 5 June 2025 / Published: 18 June 2025

Abstract:
Synthetic aperture radar (SAR) automatic target recognition (ATR) plays a pivotal role in SAR image interpretation. While existing approaches predominantly rely on batch learning paradigms, their practical deployment is constrained by the sequential arrival of training data and high retraining costs. To overcome this challenge, this paper introduces a divergence-constrained incremental dictionary learning framework that enables progressive model updates without full data reprocessing. First, the method learns a class-specific dictionary for each target category via sub-dictionary learning, where the learning process for a given class does not involve data from other classes. Second, an intra-class divergence constraint is incorporated during sub-dictionary learning to address the significant intra-class variations and minor inter-class differences of SAR targets. Third, the sparse representation coefficients of the target to be classified are solved across all sub-dictionaries, followed by the computation of the corresponding reconstruction errors and intra-class divergence metrics to achieve classification. Finally, when targets of new categories are obtained, the corresponding class-specific dictionaries are learned and added to the existing dictionary set, completing the incremental update of the SAR ATR system. Experimental results on the MSTAR dataset indicate that our method attains over 96.62% accuracy across various incremental scenarios. Compared with other state-of-the-art methods, it demonstrates better recognition performance and robustness.

1. Introduction

Synthetic aperture radar (SAR) offers all-day, all-weather, high-resolution, and large-scale observation. These characteristics have made SAR an important means of battlefield surveillance and reconnaissance [1,2,3,4]. In various military applications, SAR automatic target recognition (ATR) has attracted increasing attention from scholars. With the development of machine learning, various SAR ATR methods have been proposed, including traditional methods [5,6] and deep-learning-based methods [7,8]. These methods generally use a batch training strategy to learn from all training data at once. After the feature extractors and classifiers are obtained, the performance of these methods is verified on the test data. Experimental results show that they achieve high recognition accuracy; for example, the accuracy on the MSTAR dataset can easily exceed 99% [9,10].
Nevertheless, the aforementioned methods suffer from substantial limitations. Specifically, the batch learning strategy is only applicable to situations where the training dataset remains constant. In real-world battlefield settings, acquiring all training data simultaneously is often unfeasible. Upon the acquisition of new training data, it becomes necessary to merge the existing and new training samples and then retrain the SAR ATR model using the combined dataset. This process demands a significant amount of time and computational resources, imposing a heavy burden on the update of the SAR ATR system. Consequently, substituting the batch learning strategy with incremental learning is imperative for the swift updating of the SAR ATR system.
Incremental learning, also referred to as continuous or lifelong learning, was initially introduced by Coppock et al. in 1962 [11]. It enables a system to continuously assimilate new knowledge from incoming data while retaining the majority of previously acquired knowledge. The significance of incremental learning is manifested in two key aspects [12,13,14]:
(1) In real-world applications, data is typically acquired incrementally; thus, upon encountering novel data, a pre-trained system must undergo adjustments to incorporate the knowledge embedded within the new data.
(2) Modifying an existing trained system generally incurs lower time and computational costs compared to retraining the system from scratch.
Evidently, incremental learning constitutes a process of progressive accumulation and iterative refinement, closely mirroring the human learning paradigm.
As shown in Figure 1, incremental learning can update the SAR ATR system at a very low cost. As far as we know, many scholars have carried out work on incremental SAR ATR. For instance, Dang et al. introduced class boundary selection-based incremental learning (CBesIL). This method preserves prior recognition capabilities by selectively safeguarding class boundary exemplars as new classes are incrementally added [15]. Subsequently, Dang et al. enhanced incremental nonnegative matrix factorization (INMF) by incorporating sparse constraints, resulting in a modified version denoted as Lp-INMF, which exhibits superior recognition performance compared to the original INMF [16]. Tang et al. developed high-plasticity error correction incremental learning (HPecIL), leveraging models trained on previous tasks to rectify cumulative errors and mitigate model degradation [17]. Meanwhile, Liu et al. proposed a class-incremental SAR ATR method. Grounded in a continuous teacher–student learning framework, this deep learning approach effectively alleviates catastrophic forgetting [18] in neural networks [19]. Guo et al. presented POSELM, an optimized version of the extreme learning machine (ELM); specifically, POSELM applies the particle swarm optimization algorithm to refine the initial weights, thereby enhancing recognition accuracy [20]. Tao et al. devised a multiscale incremental dictionary learning method, building upon and improving the LC-KSVD algorithm [21]. Based on self-sustainment guidance representation, Pan et al. proposed an incremental SAR ATR method that uses a dynamic query navigation module and a structural extension module to preserve the model's learning ability for both new and old class targets [22]. Zhang et al. proposed a multi-stage regularization-based SAR ATR method, called SCF-CIL.
SCF-CIL applies an overfitting training strategy and a multi-stage regularization method to ensure the model's incremental learning capability [23]. Cao et al. proposed the L2,1-constrained deep incremental NMF (L2,1-DINMF) method for SAR ATR. By introducing the L2,1 constraint into deep nonnegative matrix factorization (DNMF) and defining incremental update rules, this method significantly improves incremental recognition performance [24]. Gao et al. implemented incremental SAR ATR using strong separability features (SSF-IL). SSF-IL employs intra-class and inter-class scatter to design a feature separability loss; additionally, a classifier bias correction method based on boundary features is devised to achieve target classification [25]. Yu et al. proposed a multilevel adaptive knowledge distillation network (MLAKDN) to realize incremental SAR ATR. MLAKDN combines an adaptive weighted distillation strategy, a feature distillation method, a model rebalancing technique, and a weighted incremental classification loss to guarantee the model's incremental recognition ability [26].
Furthermore, because of the complexity and challenge of the incremental SAR ATR problem, scholars usually only discuss incremental learning under specific settings. For example, the newly added data in References [15,16,17,18,19,20,22,23,24,25,26] belong to new classes, while Reference [21] deals with new data of existing categories. In this paper, we study the same setting as References [15,16,17,18,19,20,22,23,24,25,26], namely class incremental learning.
The above methods perform well, but there is still room for improvement. In recent years, sparse representation has been widely used in SAR ATR with good results [27,28,29]. The sparse representation classifier directly takes the training samples of the various classes as atoms to form a dictionary; test samples are then classified according to their sparse representations under this dictionary. This approach is naturally suited to class incremental learning: when samples of new classes are obtained, they can be used directly to expand the dictionary. However, SAR target images are characterized by large intra-class distances and small inter-class differences. A dictionary constructed directly from training samples has difficulty adapting to this situation and performs poorly in SAR ATR. We therefore improve on it with class-specific dictionaries [30]. In addition, regularization terms can be added during dictionary learning to further improve recognition performance.
Motivated by the aforementioned concepts, this paper employs class-specific dictionaries to realize incremental learning for SAR ATR. Initially, the objective function of dictionary learning is regularized by the intra-class divergence [31], aiming to acquire a more discriminative dictionary. Subsequently, an enhanced dictionary learning approach is utilized to derive the class-specific dictionary for each target class. Next, the sparse representations of test samples under all class-specific dictionaries are computed. Based on these class-specific dictionaries, two metrics, namely reconstruction error and intra-class divergence, are calculated. Ultimately, these metrics are jointly considered for target recognition. Once the data of a new class is obtained, it suffices to learn its corresponding class-specific dictionary independently. Experiments conducted on the MSTAR dataset have verified the efficacy of the proposed method.
To summarize, the main contributions of this paper are as follows:
(1) A novel incremental SAR ATR method utilizing class-specific dictionaries is developed. This approach enables incremental learning without the necessity of retaining historical data, optimizing storage and computational resources.
(2) Intra-class divergence is integrated into the dictionary learning framework as a constraint, significantly enhancing the discriminative power of class-specific dictionaries. This innovation improves the ability to distinguish between different target classes.
(3) Comparative analysis demonstrates that the proposed method outperforms existing incremental learning techniques in SAR ATR tasks, demonstrating superior recognition accuracy and robustness.
The remaining sections of this paper are organized as follows. Section 2 reviews related studies, including incremental learning and sparse representation classifiers. Section 3 elaborates the detailed implementation of our proposed incremental SAR ATR method using class-specific dictionaries. Section 4 conducts multiple experiments on the MSTAR dataset, presenting and analyzing the corresponding experimental results. Finally, Section 5 summarizes the research conclusions and prospects for future work.

2. Related Works

This research primarily concerns incremental learning and sparse representation classifiers. This section elucidates the fundamental strategies of incremental learning and the operational principles of sparse representation classifiers, laying the groundwork for the subsequent discussions.

2.1. Incremental Learning

Incremental learning is used to solve the problem of the high computational cost of model updates, and it is not only for deep learning methods but also for traditional methods. In detail, incremental learning requires that the model can continuously learn the new knowledge and update itself with a small computation cost while retaining the previously learned information [32,33]. According to different tasks, incremental learning can be divided into task incremental learning, domain incremental learning, and class incremental learning [34]. Here, we only discuss class incremental learning, which continuously learns knowledge from new class targets [35,36].
In recent years, the implementation methods of incremental learning can be roughly divided into three categories: model-structure-based methods, replay-based methods, and regularization-based methods. Model-structure-based methods continuously modify the model structure during incremental learning, adding new structures for incremental tasks; RKR [37], SVDD [38], one-class SVM [39] and ICAC [40] all belong to this category. Replay-based methods (such as iCaRL [41], EVM [42] and CBesIL [15]) store representative exemplars of old class targets. During incremental learning, these exemplars are mixed with the new class targets to retrain the model. Such methods often outperform other incremental learning methods, but they face the problem of selecting high-quality exemplars. Regularization-based methods use knowledge distillation as a regularization term to constrain the model; they require storing the weights of the old model as the source of knowledge distillation. In addition, regularization-based methods are often used in combination with replay-based methods.
In fact, replay-based methods are the most widely used in incremental learning, while our proposed method belongs to the category of model-structure-based methods. In the following, we refer to class incremental learning simply as incremental learning for convenience.

2.2. Sparse Representation Classifier

Sparse representation has been widely used in the field of signal processing. The purpose of sparse representation is to find the sparsest representation of test samples using a linear combination of these training samples [43].
Consider a training sample set $Y = [Y_1, Y_2, \ldots, Y_K] \in \mathbb{R}^{m \times n}$ with $K$ classes. Here, $m$ represents the length of each sample vector, while $n$ denotes the total number of samples. $Y_i = [y_{i1}, y_{i2}, \ldots, y_{in_i}] \in \mathbb{R}^{m \times n_i}$ represents the training samples of class $i$, and $y_{ij}$ is the $j$-th sample of $Y_i$. $y$ is a column vector representing the test sample. The sparse representation problem can be denoted by the following equation:
$\hat{\alpha} = \arg\min_{\alpha} \| y - Y\alpha \|_2^2 \quad \text{s.t.} \quad \| \alpha \|_0 \le T$ (1)
where $\alpha$ is the sparse representation coefficient vector; $T$ is the sparsity threshold; $\| \cdot \|_0$ denotes the $L_0$ norm; and $\| \alpha \|_0$ counts the number of non-zero elements in $\alpha$. The $L_0$-norm optimization in Equation (1) is an NP-hard problem; therefore, in practical applications, the $L_1$ norm is usually used instead. In this case, Equation (1) can be expressed as
$\hat{\alpha} = \arg\min_{\alpha} \| y - Y\alpha \|_2^2 + \gamma \| \alpha \|_1$ (2)
where the parameter γ is a factor used to balance reconstruction error and sparsity.
As for the sparse representation classifier, it uses the reconstruction errors of the various classes to predict the class of the test sample $y$. The detailed process is presented in Equation (3).
$\mathrm{identity}(y) = \arg\min_{i} \mathrm{error}_i = \arg\min_{i} \| y - Y_i \hat{\alpha}_i \|_2, \quad i = 1, 2, \ldots, K$ (3)
where $\hat{\alpha}_i$ is the part of the sparse coefficient vector corresponding to the $i$-th class, and $\mathrm{identity}(y)$ denotes the predicted class of the test sample $y$. $\mathrm{error}_i = \| y - Y_i \hat{\alpha}_i \|_2$ represents the reconstruction error obtained when $y$ is represented by $Y_i$ alone. Finally, the test sample $y$ is assigned to the class with the minimum reconstruction error.
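As a concrete illustration, Equations (2) and (3) can be sketched in a few lines of NumPy. This is a minimal sketch rather than the paper's implementation: the $L_1$ problem in Equation (2) is solved here with plain ISTA, and the function names and parameter defaults are our own.

```python
import numpy as np

def sparse_code(y, Y, gamma=0.1, n_iter=500):
    """Solve Eq. (2), min_a ||y - Y a||_2^2 + gamma * ||a||_1, with plain ISTA."""
    a = np.zeros(Y.shape[1])
    L = np.linalg.norm(Y, 2) ** 2            # squared spectral norm of Y
    for _ in range(n_iter):
        z = a - Y.T @ (Y @ a - y) / L        # gradient step on the quadratic term
        a = np.sign(z) * np.maximum(np.abs(z) - gamma / (2 * L), 0.0)  # soft threshold
    return a

def src_classify(y, Y, labels, gamma=0.1):
    """Eq. (3): code y over the whole dictionary, then compare per-class residuals."""
    a = sparse_code(y, Y, gamma)
    classes = np.unique(labels)
    errors = [np.linalg.norm(y - Y[:, labels == c] @ a[labels == c]) for c in classes]
    return classes[int(np.argmin(errors))]
```

Any sparse solver (OMP, FISTA, LARS) could replace the ISTA loop; only the per-class residual comparison is essential to the classifier.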

3. Methods

In this section, we introduce the proposed incremental SAR target recognition method based on class-specific dictionaries. The schematic diagram of our method is shown in Figure 2. First, for each class of targets, we learn a dictionary individually using the improved dictionary learning method; these dictionaries are called class-specific dictionaries. Then, they are collected into a dictionary set $D = \{ D_1, D_2, \ldots, D_C \}$, where $D_i$ represents the sub-dictionary corresponding to the $i$-th class and $C$ denotes the number of classes. Finally, the target is classified by jointly using the sparse representation coefficient and the reconstruction error. When targets from a new category are encountered, the class-specific dictionary $D_{new}$ is learned in the same way, and the dictionary set is updated as $D = \{ D, D_{new} \}$. The following subsections introduce class-specific dictionary learning, the target recognition strategy, and incremental learning.

3.1. Class-Specific Dictionary Learning

In this paper, the sub-dictionary of each target class is built by dictionary learning; these sub-dictionaries are also called class-specific dictionaries.
For the targets of class $i$, the training set is $Y_i = \{ y_{i1}, y_{i2}, \ldots, y_{iN} \}$, where $y_{ij}$ represents the $j$-th sample in $Y_i$ and $N$ denotes the number of samples. The process of dictionary learning can be represented by Equation (4).
$\min_{D_i, X_i} \| Y_i - D_i X_i \|_F^2 + \lambda \| X_i \|_1$ (4)
In Equation (4), $X_i = \{ x_{i1}, x_{i2}, \ldots, x_{iN} \}$ represents the sparse coefficient matrix of $Y_i$ under the dictionary $D_i = \{ d_{i1}, d_{i2}, \ldots, d_{in_i} \}$, $n_i$ indicates the number of atoms in $D_i$, and $\lambda$ is a weight factor. The dictionary learning process is completed by alternately updating $D_i$ and $X_i$.
Conventional dictionary learning methods only focus on reconstruction performance and rarely consider the discriminative properties of the sparse coefficients, which is not conducive to the classification task. To address this problem, we draw on the idea of FDDL and add constraints during dictionary learning. There are two ways to enhance the classification ability of dictionaries: (1) increasing inter-class differences and (2) reducing intra-class differences. Since we adopt a class-specific dictionary learning strategy and do not introduce information from other classes, only the intra-class differences can be reduced in this paper. In this case, the objective function of class-specific dictionary learning can be expressed as Equation (5).
$\min_{D_i, X_i} \| Y_i - D_i X_i \|_F^2 + \lambda_1 \| X_i \|_1 + \lambda_2 f(X_i) \quad \text{s.t.} \quad \| d_{ik} \|_2 = 1$ (5)
Here, $f(X_i)$ represents the intra-class divergence, and $\lambda_1$ and $\lambda_2$ are weight factors that balance sparsity and intra-class divergence. For a class-specific dictionary, intra-class differences can be reduced by minimizing the intra-class divergence of the sparse coefficient matrix $X_i$. This process can be expressed as
$\min_{X_i} f(X_i) = \min_{X_i} \sum_{j=1}^{N} (x_{ij} - m_i)(x_{ij} - m_i)^T, \quad x_{ij} \in X_i$ (6)
In Equation (6), $m_i$ is the mean vector of $X_i$, i.e., $m_i = \sum_{j=1}^{N} x_{ij} / N$. We build a matrix $M_i$ of the same size as $X_i$ whose columns are all $m_i$. Therefore, $f(X_i) = \| X_i - M_i \|_F^2$, and Equation (5) can be rewritten as
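The algebraic step from Equation (6) to the Frobenius-norm form used in Equation (7) is easy to verify numerically: the scalar divergence is the trace of the summed outer products in Equation (6). A quick NumPy check (the variable names and random data are ours):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.standard_normal((8, 5))              # coefficient matrix X_i (atoms x samples)
m = X.mean(axis=1, keepdims=True)            # class mean vector m_i
M = np.tile(m, (1, X.shape[1]))              # M_i: every column equals m_i

# Eq. (6): trace of the summed outer products (x_ij - m_i)(x_ij - m_i)^T
scatter = sum((X[:, [j]] - m) @ (X[:, [j]] - m).T for j in range(X.shape[1]))
f_trace = np.trace(scatter)

# The penalty in Eq. (7): squared Frobenius norm ||X_i - M_i||_F^2
f_frob = np.linalg.norm(X - M, "fro") ** 2

print(np.isclose(f_trace, f_frob))           # the two forms agree
```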
$\min_{D_i, X_i} \| Y_i - D_i X_i \|_F^2 + \lambda_1 \| X_i \|_1 + \lambda_2 \| X_i - M_i \|_F^2 \quad \text{s.t.} \quad \| d_{ik} \|_2 = 1$ (7)
Equation (7) requires optimizing both $D_i$ and $X_i$. We adopt an alternating optimization strategy; that is, we fix $D_i$ to optimize $X_i$, and then fix $X_i$ to optimize $D_i$.
First, fix $D_i$ to optimize $X_i$. When the dictionary $D_i$ is fixed, the objective function reduces to
$\min_{X_i} \| Y_i - D_i X_i \|_F^2 + \lambda_1 \| X_i \|_1 + \lambda_2 \| X_i - M_i \|_F^2$ (8)
Equation (8) can be solved by the iterative projection method to obtain the sparse coefficient matrix $X_i$.
Second, fix $X_i$ and optimize $D_i$. When $X_i$ is fixed, $\lambda_1 \| X_i \|_1 + \lambda_2 \| X_i - M_i \|_F^2$ is a constant, so Equation (7) can be simplified as
$\min_{D_i} \| Y_i - D_i X_i \|_F^2 \quad \text{s.t.} \quad \| d_{ik} \|_2 = 1$ (9)
In this case, the objective function is the same as that in conventional dictionary learning, and dictionary D i can be updated according to Algorithm 1.
Algorithm 1: Fix $X_i$ and optimize $D_i$
Input: $D_i = \{ d_{i1}, d_{i2}, \ldots, d_{in_i} \}$ //Initialized class-specific dictionary
Input: $Y_i = \{ y_{i1}, y_{i2}, \ldots, y_{iN} \}$ //Training data of the $i$-th class
Input: $X_i = \{ x_{i1}, x_{i2}, \ldots, x_{iN} \}$ //Sparse coefficient matrix
1:  For $k = 1, 2, 3, \ldots, n_i$ do
2:     It is known that $D_i X_i = d_{ik} x_i^k + \sum_{j \ne k} d_{ij} x_i^j$, where $x_i^j$ denotes the $j$-th row of $X_i$ ($j = 1, 2, \ldots, n_i$). Let $E_{ik} = Y_i - \sum_{j \ne k} d_{ij} x_i^j$; then Equation (9) can be rewritten as $\min_{d_{ik}} \| E_{ik} - d_{ik} x_i^k \|_F^2 \ \text{s.t.} \ \| d_{ik} \|_2 = 1$.
3:     Solve $d_{ik}$ by the least squares method.
4:     Normalize $d_{ik}$: $d_{ik} = d_{ik} / \| d_{ik} \|_2$.
5:  end for
Output:  D i = { d i 1 , d i 2 , , d i n i } //The updated class-specific dictionary
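Algorithm 1 can be sketched in NumPy as follows (variable names are ours; the row notation $x_i^k$ becomes `X[k, :]`, and we skip unused atoms to avoid dividing by zero, a guard the pseudocode leaves implicit):

```python
import numpy as np

def update_dictionary(D, X, Y):
    """Algorithm 1: with the coefficients X fixed, refresh each atom d_k in turn."""
    D = D.copy()
    for k in range(D.shape[1]):
        x_k = X[k, :]                          # coefficient row x^k using atom k
        if x_k @ x_k < 1e-12:                  # atom unused: leave it unchanged
            continue
        # Residual with atom k removed: E_k = Y - sum_{j != k} d_j x^j
        E_k = Y - D @ X + np.outer(D[:, k], x_k)
        d_k = E_k @ x_k / (x_k @ x_k)          # least-squares atom (step 3)
        nrm = np.linalg.norm(d_k)
        if nrm > 1e-12:
            D[:, k] = d_k / nrm                # unit-norm constraint (step 4)
    return D
```

Each atom update minimizes $\| E_{ik} - d_{ik} x_i^k \|_F^2$ over the unit sphere (the least-squares direction and the constrained optimum coincide after normalization), so the reconstruction error is non-increasing across a sweep.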
SAR target recognition includes two steps: feature extraction and classification. Incremental SAR ATR methods require incremental feature extraction, which is itself a subject worthy of study. In this paper, we directly convert SAR images into vectors and then construct the training and test sets; the method therefore involves no separate feature extraction process.

3.2. Target Classification Scheme

As shown in Figure 2, after obtaining the class-specific dictionaries of the various classes, we collect them into a dictionary set for classification. The reconstruction error and the discriminant information of the coefficients (also referred to as the intra-class difference in this paper) are used during class-specific dictionary learning, and these two items are also used jointly for classification during testing. Specifically, the sparse representation coefficient of a test sample $y$ under dictionary $D_i$ is solved according to Equation (10).
$\hat{x} = \arg\min_{x} \| y - D_i x \|_2^2 + \gamma \dfrac{\| x - m_i \|_2^2}{\| x \|_2}$ (10)
In Equation (10), $m_i$ is the mean vector of the sparse representation coefficients obtained from the training samples of the $i$-th class, and $\gamma$ is a weight factor. In general, the least squares method is used to solve for $\hat{x}$, which significantly reduces the computational complexity.
On the basis of $D_i$, $m_i$ and $\hat{x}$, we calculate the measure $e_i$ for classification. As shown in Equation (11), it includes two parts: the reconstruction error term $\| y - D_i \hat{x} \|_2^2$ and the coefficient discriminant term $\| \hat{x} - m_i \|_2^2 / \| \hat{x} \|_2$. $\gamma$ is the weight factor balancing them, the same as in Equation (10).
$e_i = \| y - D_i \hat{x} \|_2^2 + \gamma \dfrac{\| \hat{x} - m_i \|_2^2}{\| \hat{x} \|_2}$ (11)
Finally, the measures of the test sample $y$ under all class-specific dictionaries are calculated. Then, as shown in Equation (12), the test sample $y$ is assigned to the category with the minimum $e_i$.
$\mathrm{identity}(y) = \arg\min_{i = 1, 2, \ldots, C} e_i$ (12)
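The classification scheme of Equations (10)-(12) can be sketched as follows. One simplification is ours: following the least-squares shortcut mentioned after Equation (10), the coefficient is obtained from the regularized normal equations $(D_i^T D_i + \gamma I)x = D_i^T y + \gamma m_i$, i.e., the $\| x \|_2$ denominator is treated as a constant during the solve.

```python
import numpy as np

def code_with_divergence(y, D_i, m_i, gamma=0.01):
    """Approximate Eq. (10) by one regularized least-squares solve
    (the ||x||_2 denominator is treated as constant -- our simplification)."""
    A = D_i.T @ D_i + gamma * np.eye(D_i.shape[1])
    b = D_i.T @ y + gamma * m_i
    return np.linalg.solve(A, b)

def classify(y, dictionaries, means, gamma=0.01):
    """Eqs. (11)-(12): the class with the smallest combined measure e_i wins."""
    scores = []
    for D_i, m_i in zip(dictionaries, means):
        x = code_with_divergence(y, D_i, m_i, gamma)
        recon = np.linalg.norm(y - D_i @ x) ** 2                  # reconstruction term
        diverg = np.linalg.norm(x - m_i) ** 2 / max(np.linalg.norm(x), 1e-12)
        scores.append(recon + gamma * diverg)                     # Eq. (11)
    return int(np.argmin(scores))                                 # Eq. (12)
```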

3.3. Incremental Dictionary Learning

After obtaining the samples of a new class, we take Equation (7) as the objective function to learn the class-specific dictionary of these samples, denoted $D_{New}$. Then, the dictionary set is updated as $D = \{ D_1, D_2, \ldots, D_C, D_{New} \}$. On the basis of the updated dictionary set $D$, the sparse representation coefficients of the test samples are solved.
We comprehensively consider the reconstruction error term and the coefficient discriminant term to classify the targets.
In this paper, the learning process of a class-specific dictionary does not need to consider information from other classes, which makes it naturally suited to the class incremental learning scenario considered in this paper.
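Structurally, the incremental step of Section 3.3 reduces to appending a sub-dictionary to a list. A minimal sketch (our own class and method names, with prediction simplified to a plain least-squares reconstruction error rather than the full measure of Equation (11)):

```python
import numpy as np

class IncrementalDictionarySet:
    """Sketch of Section 3.3: the model is a growing list of class-specific
    dictionaries, and learning a new class never revisits old classes' data."""

    def __init__(self):
        self.dictionaries = []                   # D = {D_1, ..., D_C}

    def add_class(self, D_new):
        self.dictionaries.append(D_new)          # D = {D, D_new}; old entries untouched

    def predict(self, y):
        # Smallest least-squares reconstruction error across sub-dictionaries
        # (a simplification of Eq. (11) that drops the divergence term).
        errors = [np.linalg.norm(y - D @ np.linalg.lstsq(D, y, rcond=None)[0])
                  for D in self.dictionaries]
        return int(np.argmin(errors))
```

Because `add_class` never touches existing entries, the update cost is exactly the cost of learning one sub-dictionary, independent of how many classes have been seen.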

4. Experiments and Results

In this section, we conduct several experiments on the MSTAR dataset to evaluate the effectiveness of our method. First, some implementation details are introduced, including the dataset and evaluation metrics. Second, the experimental scenario settings and the incremental recognition performance under different scenarios are presented; in this part, we also compare our method with several state-of-the-art methods. Finally, we discuss the influence of parameters on the experimental results.

4.1. Implementation Details

4.1.1. Dataset

We conduct experiments on the MSTAR (Moving and Stationary Target Acquisition and Recognition) dataset [44]. The MSTAR dataset was established by the Air Force Research Laboratory (AFRL) and the Defense Advanced Research Projects Agency (DARPA), and it has been widely used in SAR target recognition research. The dataset contains SAR images under the Standard Operating Condition (SOC) [45] and Extended Operating Conditions (EOC) [46]; these images were acquired in the X-band with HH polarization, and both their range and azimuth resolutions are 0.3 m. Here, we use the images under SOC to conduct experiments. The SAR images of these targets are shown in Figure 3; they cover 10 classes of military vehicles: BMP2, BTR70, T72, BTR60, 2S1, BRDM2, D7, T62, ZIL131, and ZSU234. The azimuth angles of these images range from 0 to 360 degrees. In our experiments, images with depression angles of 17° and 15° are used to establish the training and testing sets, respectively. The configuration of the ten target classes is presented in Table 1.
In our experiments, all images are resized to 64 × 64 by center cropping. These images are then reshaped into vectors to construct the training and test sets.
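The preprocessing just described amounts to a center crop followed by vectorization; a small sketch is given below. The unit-norm step at the end is our own assumption (common in sparse-representation ATR pipelines), not something the paper states.

```python
import numpy as np

def image_to_vector(image, size=64):
    """Center-crop a SAR magnitude image to size x size, then flatten it."""
    h, w = image.shape
    top, left = (h - size) // 2, (w - size) // 2
    crop = image[top:top + size, left:left + size]
    vec = crop.astype(np.float64).reshape(-1)     # 64*64 -> 4096-dim sample vector
    return vec / (np.linalg.norm(vec) + 1e-12)    # unit-norm (our assumption)
```

Stacking such vectors column-wise yields the per-class training matrices $Y_i$ used throughout Section 3.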

4.1.2. Evaluation Protocol

The performance of our method is evaluated by recognition accuracy. Six state-of-the-art methods, including Replay [47], iCaRL [41], Wa [48], Podnet [49], ICAC [40], and CBesIL [15], were used to carry out comparative experiments. In our experiments, the backbones of iCaRL, Replay, Wa, ICAC and Podnet are all Resnet18. A brief description of these methods is as follows.
  • Replay: Replay retains a small amount of old category data as an instance set. Then, the instance set is used for training in the process of updating new data to review old knowledge.
  • iCaRL: iCaRL builds and manages an exemplar set, which is a representative sample set of old data. After the representation learning of these data, iCaRL classifies the sample with the nearest-mean-of-exemplars rule.
  • Wa: Wa is an improvement of iCaRL. On the basis of iCaRL, Wa normalizes the classifier weights; this operation eliminates the bias introduced during the model updating process.
  • Podnet: Inspired by the representation learning, Podnet improves the distillation loss with pooled outputs distillation (POD). In addition, it represents every class with several proxy vectors, so it has better performance in incremental learning.
  • ICAC: ICAC adaptively adds new anchored class centers for new classes, and the features of the new classes are clustered around the corresponding centers. For old classes, ICAC retains key samples of each class. In addition, ICAC proposes a Separable Learning (SL) strategy to address the class imbalance between new and old classes.
  • CBesIL: To reduce data storage pressure, CBesIL proposes a class boundary selection method to build the exemplar set. When performing incremental learning, CBesIL employs a boundary-based reconstruction method to rebuild the key data to avoid catastrophic forgetting.
As presented in Table 2, we set up three different scenarios on the MSTAR dataset to carry out incremental learning experiments: Scenario_A, Scenario_B and Scenario_C. All three scenarios start with 4 initial classes, while their incremental step sizes are 1, 2 and 3, respectively. The settings of the three scenarios are consistent with those in Reference [40].

4.2. Incremental Recognition Performance

The proposed method is compared with the above-mentioned state-of-the-art methods on the MSTAR dataset. In addition, the recognition accuracy of Joint Training is taken as the baseline. Among these methods, CBesIL is a traditional method, while the others are based on deep learning. As for the hyper-parameter settings, the exemplar size in Replay, iCaRL, and Podnet is set to 30, consistent with Reference [40]. The number of atoms in each class-specific dictionary is set to 40, while the weight factors $\lambda_1$ and $\lambda_2$ are set to 0.1 and 0.01, respectively.
It should be noted that Joint Training takes Resnet18 as the backbone to extract targets’ features; then, it makes use of a softmax classifier to recognize these targets. Furthermore, Joint Training adopts a batch training strategy. This means that Joint Training makes use of all training data to update the SAR ATR system when encountering data of new classes. In theory, the recognition accuracy of Joint Training should be significantly better than these incremental learning methods.
We conducted experiments on the three above-mentioned scenarios. The overall recognition accuracies of these methods are presented in Table 3. Furthermore, we have visualized these accuracies in Figure 4 for a more intuitive comparison. The codes of ICAC and CBesIL have not been published, so we directly cited the experimental results of Reference [40]. The other data in Table 3 were obtained from our own experiments.
It is obvious that Joint Training has the best recognition ability in the three scenarios, and its accuracies all exceed 0.9917. Joint Training performs almost as well as expected, so we regard its accuracies as the benchmark. In general, as the number of classes gradually increases, the recognition accuracies of the incremental learning methods show a downward trend. Taking Scenario_A as an example, we analyze the performance of these incremental learning methods. First, the final accuracies (Task_7) of Replay, iCaRL, Wa, Podnet, ICAC, CBesIL, and our method are 0.9592, 0.9604, 0.9559, 0.9633, 0.9176, 0.7215, and 0.9736, respectively; that is, our method is 0.0144, 0.0132, 0.0177, 0.0103, 0.0560, and 0.2521 higher than the other methods. Second, the average accuracies over the seven tasks for Replay, iCaRL, Wa, Podnet, ICAC, and CBesIL are 0.9482, 0.9579, 0.9552, 0.9633, 0.9571, and 0.7929, while that of our method is 0.9767; our method clearly achieves the highest average accuracy. Third, our method outperforms the other incremental learning methods on every task from Task_1 to Task_7 in Scenario_A.
As for Scenario_B, first, the average recognition accuracies over the four tasks in Replay, iCaRL, Wa, Podnet, ICAC, CBesIL, and our method are 0.9493, 0.9624, 0.9607, 0.9600, 0.9698, 0.8145 and 0.9766, respectively; our method performs best in terms of average accuracy. Second, our method achieves the highest recognition accuracy on three tasks (Task_1, Task_3 and Task_4), and it only performs slightly worse than iCaRL on Task_2. Furthermore, analyzing the results of Scenario_C leads to conclusions similar to those for Scenario_B. In summary, our method performs better on incremental SAR target recognition.
It should be noted that, for each method, the recognition accuracies of Task_1 in Scenario_A, Scenario_B and Scenario_C are identical. For example, the accuracies of iCaRL’s Task_1 are 0.9834 in the three scenarios. This is because their training data are exactly the same. Furthermore, in our method, Task_3 of Scenario_A and Task_2 of Scenario_B have the same accuracy (they are both 0.9662). Similar situations include Task_4 of Scenario_A and Task_2 of Scenario_C (0.9565), Task_5 of Scenario_A, Task_3 of Scenario_B (0.9680), etc. This is because our method trains these dictionaries separately for each class of targets. The recognition accuracy is only related to the total number of classes, and it has nothing to do with the step size during incremental learning.
In contrast to the overall accuracies in Table 3, we further analyze the recognition accuracy of each target class in Task_7 (Scenario_A). The results are presented in Table 4, which compares six methods: Joint Training, Replay, iCaRL, Wa, Podnet and our method. As the benchmark, Joint Training achieves the best overall accuracy. Our method performs better than Replay, iCaRL, Wa and Podnet on five target classes (BMP2, BTR70, T72, BTR60 and D7). Furthermore, we find that several incremental learning methods achieve markedly lower accuracy on BRDM2 than on the other classes. For example, the accuracies of Replay, iCaRL, Wa and our method on BRDM2 are 0.0942, 0.0480, 0.0617 and 0.0977 lower than their respective overall accuracies.
Furthermore, we compare the training time consumed by Replay, iCaRL, Wa, Podnet, and our method. The time taken to complete the seven tasks of Scenario_A is presented in Table 5. Replay requires the least time, 310 s, followed by our method with 457 s. iCaRL, Wa and Podnet take 28 s, 67 s and 472 s more than our method, respectively. Overall, iCaRL, Wa and our method consume similar amounts of time, and our method offers a modest advantage in training time. All methods were run on a computer with an Intel Core i7-8700K CPU, an NVIDIA GTX 1080Ti GPU and 16 GB of memory.

4.3. Parametric Analysis

Here, we analyze the effect of four parameters on the experimental results: the number of iterations, the number of atoms, and the weight factors λ1 and λ2. The evaluation is carried out mainly from the perspective of recognition accuracy.

4.3.1. The Effect of Iterations

Taking BMP2 as an example, we first conduct a qualitative analysis of the loss during dictionary learning. Here, the number of dictionary atoms is 40, and the factors λ1 and λ2 are set to 0.1 and 0.01, respectively. As shown in Figure 5, when iteration > 30, the loss value barely changes. Specifically, at iteration = 30 and iteration = 60, the loss values are 66.07 and 65.79, respectively; the difference of only 0.28 is negligible. We therefore consider the model to have converged at iteration = 30. In addition, the loss curves of the other class-specific dictionaries follow essentially the same trend as Figure 5.
Quantitatively, we train the class-specific dictionaries with 5, 10, 15, 30, 45 and 60 iterations, respectively, and measure the recognition accuracy under each setting. Taking Scenario_A as an example, the accuracy is shown in Figure 6. The accuracy changes noticeably at 5 and 10 iterations, but fluctuates only slightly at 15, 30, 45 and 60 iterations. This indirectly confirms that the model has converged by iteration = 30. We therefore use iteration = 45 in our experiments.
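The plateau criterion used to choose the iteration count can be made explicit with a simple heuristic: stop once the loss can never again drop by more than a tolerance. The loss values and tolerance below are illustrative only, not the data of Figure 5.

```python
def first_plateau(losses, tol=0.5):
    """Return the first index after which the loss never drops by more
    than `tol` again (a simple convergence heuristic)."""
    for i in range(len(losses)):
        if losses[i] - min(losses[i:]) <= tol:
            return i
    return len(losses) - 1

# Illustrative loss curve: fast decay, then a plateau (not Figure 5 data).
losses = [120.0, 90.0, 75.0, 68.0, 66.3, 66.07, 65.9, 65.79]
print(first_plateau(losses))  # -> 5
```

Training a few extra iterations past the detected plateau (as done here with iteration = 45 versus convergence near 30) is a cheap safety margin.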

4.3.2. The Effect of Atom Number

A class-specific dictionary is composed of multiple atoms. In theory, a larger number of atoms gives the dictionary a better representation ability, but it also significantly increases the computational complexity. Here, we set the number of atoms to 30, 40, 50, 60, 70 and 80 and train the class-specific dictionaries for each setting. Based on these dictionaries, the recognition accuracy under Scenario_A is calculated and shown in Figure 7. In this experiment, the number of iterations is 45, and the weight factors λ1 and λ2 are set to 0.1 and 0.01, respectively.
It can be found that the recognition accuracies of the resulting models across the seven tasks are essentially the same for all tested atom numbers. This leads to two conclusions. First, 40 atoms are already sufficient for our experiments. Second, the proposed method is robust and insensitive to the number of atoms. Based on the above analysis, we set the number of atoms to 40 in the preceding experiments.

4.3.3. The Effect of Weight Factors λ 1 and λ 2

The weight factors λ1 and λ2 control the weights of the sparsity and intra-class divergence terms, respectively. As presented in Table 6, we define four cases to analyze their effect. On this basis, we conduct several experiments in Scenario_A; the recognition accuracy of the four cases is shown in Figure 8.
In addition, it should be noted that, in this experiment, we set iteration = 45, and the number of atoms in each class-specific dictionary is set to 40.
On the whole, Case_D achieves the highest recognition accuracy on each task, with Case_A second, while Case_B and Case_C perform worse. In Case_B and Case_C, intra-class divergence takes a larger proportion of the objective function, whereas sparsity dominates in Case_A and Case_D. We therefore conclude that sparsity has a greater impact on recognition accuracy than intra-class divergence. Moreover, a small proportion of intra-class divergence in the objective function improves the recognition accuracy, while a large proportion degrades the experimental results.
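The roles of λ1 and λ2 can be summarized with a generic objective of the following form, where, for class c, X_c collects the training samples, D_c and A_c are the sub-dictionary and its coefficient matrix, α_i is the coefficient vector of sample i, and ᾱ_c is the class mean of the coefficients. This is an illustrative formulation of a sparsity term weighted by λ1 and an intra-class divergence term weighted by λ2; the paper's exact objective may differ in its regularizer details.

```latex
\min_{D_c,\,A_c}\;
\underbrace{\lVert X_c - D_c A_c \rVert_F^2}_{\text{reconstruction}}
\;+\; \lambda_1 \underbrace{\lVert A_c \rVert_1}_{\text{sparsity}}
\;+\; \lambda_2 \underbrace{\sum_i \lVert \alpha_i - \bar{\alpha}_c \rVert_2^2}_{\text{intra-class divergence}}
```

Under this reading, Case_A (λ2 = 0) drops the divergence term entirely, while Case_B (λ1/λ2 = 1) gives it the same weight as sparsity, which matches the observed degradation when divergence is over-weighted.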

5. Conclusions

In order to realize the rapid and low-cost updating of SAR ATR systems, this paper proposes an incremental SAR ATR method based on class-specific dictionaries. To this end, we use intra-class divergence to constrain the dictionary learning process, and the class-specific dictionary of each target class is learned separately. These dictionaries are organized into a dictionary set for classification. During testing, the sparse representation coefficients of each sample are calculated under all dictionaries; based on these coefficients, two indicators, the reconstruction error and the intra-class divergence, are computed, and the target is classified accordingly. When data of a new class arrive, it suffices to learn the corresponding class-specific dictionary and add it to the dictionary set, which completes the incremental update. Experimental results on the MSTAR dataset demonstrate the effectiveness of our method. In summary, since learning a class-specific dictionary does not require samples from other classes, the method is well suited to class-incremental learning.
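The test-time pipeline summarized above can be sketched as follows. This is a minimal illustration under stated assumptions: ridge-regularized coding stands in for the paper's actual sparse-coefficient solver, and only the reconstruction-error indicator is used (the intra-class divergence indicator is omitted).

```python
import numpy as np

def code(D, y, lam=0.1):
    """Ridge-regularized coding of sample y over sub-dictionary D
    (a simple stand-in for the paper's sparse-coefficient solver)."""
    k = D.shape[1]
    return np.linalg.solve(D.T @ D + lam * np.eye(k), D.T @ y)

def classify(dictionaries, y):
    """Assign y to the class whose sub-dictionary reconstructs it best."""
    errors = {}
    for label, D in dictionaries.items():
        a = code(D, y)
        errors[label] = np.linalg.norm(y - D @ a)  # reconstruction error
    return min(errors, key=errors.get)

# Toy example with two well-separated classes.
D_a = np.eye(6)[:, :3]   # class "A" spans the first 3 coordinate axes
D_b = np.eye(6)[:, 3:]   # class "B" spans the last 3 coordinate axes
y = np.array([1.0, 0.5, 0.2, 0.0, 0.0, 0.0])  # lies in class A's span
print(classify({"A": D_a, "B": D_b}, y))  # -> A
```

Because the decision rule only iterates over whatever sub-dictionaries currently exist, adding a new class to the dictionary set immediately extends the classifier without any retraining.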
For future research, our method will be enhanced in two key aspects.
(1) Incremental feature extraction [50,51]: The current approach of learning dictionaries directly from vectorized SAR images, without feature extraction, incurs high computational costs. We will explore incremental feature extraction techniques to mitigate this issue. In addition, ensuring that the extracted features capture target-specific information rather than SAR image background information is a key focus for future work.
(2) Open-world recognition [52,53]: Class-incremental learning leverages novel class targets to rapidly update SAR ATR systems, while open-set recognition [54,55] facilitates the quick discovery of unknown class targets. These two approaches are inseparable in their applications, and their combination has been termed open-world recognition [52,53]. In future work, we will investigate SAR target recognition in open-world scenarios so that SAR systems can adapt efficiently to dynamic environments.

Author Contributions

Conceptualization, X.M. and X.B.; methodology, X.M.; software, X.M.; validation, D.Z. and Z.W.; formal analysis, X.M.; investigation, X.B.; resources, X.B.; data curation, X.M.; writing—original draft preparation, Z.W. and J.L.; writing—review and editing, X.M. and Z.W.; visualization, Z.W. and D.Z.; supervision, J.L.; project administration, Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

Authors Xiaojie Ma, Xusong Bu, Dezhao Zhang, Zhaohui Wang and Jing Li were employed by the company China Satellite Network Digital Technology Co., Ltd.

References

  1. Kechagias-Stamatis, O.; Aouf, N. Automatic Target Recognition on Synthetic Aperture Radar Imagery: A Survey. IEEE Aerosp. Electron. Syst. Mag. 2021, 36, 56–81. [Google Scholar] [CrossRef]
  2. Sun, Z.; Leng, X.; Zhang, X.; Zhou, Z.; Xiong, B.; Ji, K. Arbitrary-Direction SAR Ship Detection Method for Multi-Scale Imbalance. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5208921. [Google Scholar] [CrossRef]
  3. Guan, T.; Chang, S.; Wang, C.; Jia, X. SAR Small Ship Detection Based on Enhanced YOLO Network. Remote Sens. 2025, 17, 839. [Google Scholar] [CrossRef]
  4. Deng, Y.; Tang, S.; Chang, S.; Zhang, H.; Liu, D.; Wang, W. A Novel Scheme for Range Ambiguity Suppression of Spaceborne SAR Based on Underdetermined Blind Source Separation. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5207915. [Google Scholar] [CrossRef]
  5. Zhang, X.; Zhang, S.; Sun, Z.; Liu, C.; Sun, Y.; Ji, K. Cross-Sensor SAR Image Target Detection Based on Dynamic Feature Discrimination and Center-Aware Calibration. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5209417. [Google Scholar] [CrossRef]
  6. Dong, G.; Liu, H.; Chanussot, J. Keypoint-Based Local Descriptors for Target Recognition in SAR Images: A Comparative Analysis. IEEE Geosci. Remote Sens. Mag. 2021, 9, 139–166. [Google Scholar] [CrossRef]
  7. Wang, K.; Zhang, G.; Xu, Y.; Leung, H. SAR Target Recognition Based on Probabilistic Meta-Learning. IEEE Geosci. Remote Sens. Lett. 2021, 18, 682–686. [Google Scholar] [CrossRef]
  8. Pei, J.; Huang, Y.; Huo, W.; Zhang, Y.; Yang, J.; Yeo, T. SAR Automatic Target Recognition Based on Multi-view Deep Learning Framework. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2196–2210. [Google Scholar] [CrossRef]
  9. Chen, S.; Wang, H.; Xu, F.; Jin, Y. Target Classification Using the Deep Convolutional Networks for SAR Images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4806–4817. [Google Scholar] [CrossRef]
  10. Feng, S.; Ji, K.; Zhang, L.; Ma, X.; Kuang, G. SAR Target Classification Based on Integration of ASC Parts Model and Deep Learning Algorithm. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 10213–10225. [Google Scholar] [CrossRef]
  11. Coppock, W.; Freund, E. All-or-none versus incremental learning of errorless shock escapes by the rat. Science 1962, 135, 318–319. [Google Scholar] [CrossRef] [PubMed]
  12. Asthana, A.; Zafeiriou, S.; Cheng, S.; Pantic, M. Incremental Face Alignment in the Wild. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014. [Google Scholar]
  13. Pratama, M.; Anavatti, S.; Angelov, P.; Lughofer, E. Panfis: A novel incremental learning machine. IEEE Trans. Neural Netw. Learn. Syst. 2013, 25, 55–68. [Google Scholar] [CrossRef] [PubMed]
  14. Ayoobi, H.; Cao, M.; Verbrugge, R.; Verheij, B. Argumentation-Based Online Incremental Learning. IEEE Trans. Autom. Sci. Eng. 2022, 19, 3419–3433. [Google Scholar] [CrossRef]
  15. Dang, S.; Cao, Z.; Cui, Z.; Pi, Y.; Liu, N. Class Boundary Exemplar Selection Based Incremental Learning for Automatic Target Recognition. IEEE Trans. Geosci. Remote Sens. 2020, 58, 5782–5792. [Google Scholar] [CrossRef]
  16. Dang, S.; Cui, Z.; Cao, Z.; Liu, N. SAR Target Recognition via Incremental Nonnegative Matrix Factorization. Remote Sens. 2018, 10, 374. [Google Scholar] [CrossRef]
  17. Tang, J.; Xiang, D.; Zhang, F.; Ma, F.; Zhou, Y.; Li, H. Incremental SAR Automatic Target Recognition with Error Correction and High Plasticity. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 1327–1339. [Google Scholar] [CrossRef]
  18. De Lange, M.; Aljundi, R.; Masana, M.; Parisot, S.; Jia, X.; Leonardis, A.; Slabaugh, G.; Tuytelaars, T. A Continual Learning Survey: Defying Forgetting in Classification Tasks. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 3366–3385. [Google Scholar]
  19. Liu, Y.; Zhang, F.; Ma, F.; Yin, Q.; Zhou, Y. Incremental Multitask SAR Target Recognition with Dominant Neuron Preservation. In Proceedings of the 2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 754–757. [Google Scholar]
  20. Guo, C.; Qiu, Z.; Sun, R. Synthetic Aperture Radar Target Recognition Based on Incremental Learning Algorithm. Electron. Opt. Control. 2019, 26, 31–34. [Google Scholar]
  21. Tao, L.; Jiang, X.; Li, Z.; Liu, X.; Zhou, Z. Multiscale Incremental Dictionary Learning with Label Constraint for SAR Object Recognition. IEEE Geosci. Remote Sens. Lett. 2019, 16, 80–84. [Google Scholar] [CrossRef]
  22. Pan, Q.; Liao, K.; He, X.; Bu, Z.; Huang, J. A Class-Incremental Learning Method for SAR Images Based on Self-Sustainment Guidance Representation. Remote Sens. 2023, 15, 2631. [Google Scholar] [CrossRef]
  23. Zhang, Y.; Xing, M.; Zhang, J.; Vitale, S. SCF-CIL: A Multi-Stage Regularization-Based SAR Class-Incremental Learning Method Fused with Electromagnetic Scattering Features. Remote Sens. 2025, 17, 1586. [Google Scholar] [CrossRef]
  24. Cao, C.; Chou, R.; Zhang, H.; Li, X.; Luo, T.; Liu, B. L2,1-Constrained Deep Incremental NMF Approach for SAR Automatic Target Recognition. IEEE Geosci. Remote Sens. Lett. 2024, 21, 4003905. [Google Scholar] [CrossRef]
  25. Gao, F.; Kong, L.; Lang, R.; Sun, J.; Wang, J.; Hussain, A.; Zhou, H. SAR Target Incremental Recognition Based on Features with Strong Separability. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5202813. [Google Scholar] [CrossRef]
  26. Yu, X.; Dong, F.; Ren, H.; Zhang, C.; Zou, L.; Zhou, Y. Multilevel Adaptive Knowledge Distillation Network for Incremental SAR Target Recognition. IEEE Geosci. Remote Sens. Lett. 2023, 20, 4004405. [Google Scholar] [CrossRef]
  27. He, Z.; Xiao, H.; Tian, Z. Multi-View Tensor Sparse Representation Model for SAR Target Recognition. IEEE Access 2019, 7, 48256–48265. [Google Scholar] [CrossRef]
  28. Dong, G.; Kuang, G.; Wang, N.; Wang, W. Classification via Sparse Representation of Steerable Wavelet Frames on Grassmann Manifold: Application to Target Recognition in SAR Image. IEEE Trans. Image Process 2017, 26, 2892–2904. [Google Scholar] [CrossRef]
  29. He, Z.; Xiao, H.; Gao, C.; Tian, Z.; Chen, S. Fusion of Sparse Model Based on Randomly Erased Image for SAR Occluded Target Recognition. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7829–7844. [Google Scholar] [CrossRef]
  30. Pan, F.; Zhang, Z.; Liu, B.; Xie, J. Class-Specific Sparse Principal Component Analysis for Visual Classification. IEEE Access 2020, 8, 110033–110047. [Google Scholar] [CrossRef]
  31. Yang, M.; Zhang, L.; Feng, X.; Zhang, D. Sparse Representation Based Fisher Discrimination Dictionary Learning for Image Classification. Int. J. Comput. Vis. 2014, 109, 209–232. [Google Scholar] [CrossRef]
  32. Mai, Z.; Li, R.; Jeong, J.; Quispe, D.; Kim, H.; Sanner, S. Online Continual Learning in Image Classification: An Empirical Survey. Neurocomputing 2022, 469, 28–51. [Google Scholar] [CrossRef]
  33. Chen, C.; Liu, Z. Broad Learning System: An Effective and Efficient Incremental Learning System Without the Need for Deep Architecture. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 10–24. [Google Scholar] [CrossRef] [PubMed]
  34. van de Ven, G.M.; Tolias, A.S. Three scenarios for continual learning. arXiv 2019, arXiv:1904.07734. [Google Scholar]
  35. Masana, M.; Liu, X.; Twardowski, B.; Menta, M.; Bagdanov, A.; van de Weijer, J. Class-Incremental Learning: Survey and Performance Evaluation on Image Classification. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 5513–5533. [Google Scholar] [CrossRef]
  36. Tahir, G.; Loo, C. An Open-Ended Continual Learning for Food Recognition Using Class Incremental Extreme Learning Machines. IEEE Access 2020, 8, 82328–82346. [Google Scholar] [CrossRef]
  37. Singh, P.; Mazumder, P.; Rai, P.; Namboodiri, V. Rectification-based Knowledge Retention for Continual Learning. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 15282–15291. [Google Scholar]
  38. Lee, K.; Kim, D.; Lee, K.; Lee, D. Density-induced support vector data description. IEEE Trans. Neural Netw. Learn. Syst. 2007, 18, 284–289. [Google Scholar] [CrossRef]
  39. Xiao, Y.; Wang, H.; Xu, W. Parameter selection of Gaussian kernel for one-class SVM. IEEE Trans. Cybern. 2015, 45, 927–939. [Google Scholar]
  40. Li, B.; Cui, Z.; Cao, Z.; Yang, J. Incremental Learning Based on Anchored Class Centers for SAR Automatic Target Recognition. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5235313. [Google Scholar] [CrossRef]
  41. Rebuffi, S.; Kolesnikov, A.; Sperl, G.; Lampert, C. iCaRL: Incremental Classifier and Representation Learning. In Proceedings of the 2017 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5533–5542. [Google Scholar]
  42. Rudd, E.; Jain, L.; Scheirer, W.; Boult, T. The Extreme Value Machine. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 762–768. [Google Scholar] [CrossRef]
  43. Wright, J.; Yang, A.; Ganesh, A.; Sastry, S.; Ma, Y. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 21–27. [Google Scholar] [CrossRef]
  44. The Air Force Moving and Stationary Target Recognition Database. Available online: https://www.sdms.afrl.af.mil/index.php?collection=mstar (accessed on 4 June 2025).
  45. Ross, T.; Worrell, S.; Velten, V.; Mossing, J.; Bryant, M. Standard SAR ATR evaluation experiments using the MSTAR public release data set. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery V, Proceedings of the Aerospace/Defense Sensing and Controls, Orlando, FL, USA, 13–17 April 1998; Volume 3370, pp. 566–573. [Google Scholar]
  46. Keydel, E.; Lee, S.; Moore, J. MSTAR extended operating conditions: A tutorial. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery III, Proceedings of the Aerospace/Defense Sensing and Controls, Orlando, FL, USA, 8–12 April 1996; Volume 2757, pp. 228–242. [Google Scholar]
  47. Rakshit, S.; Mohanty, A.; Chavhan, R.; Banerjee, B.; Roig, G.; Chaudhuri, S. FRIDA—Generative feature replay for incremental domain adaptation. Comput. Vis. Image Underst. 2022, 217, 103367. [Google Scholar] [CrossRef]
  48. Zhao, B.; Xiao, X.; Gan, G.; Zhang, B.; Xia, S. Maintaining discrimination and fairness in class incremental learning. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 13208–13217. [Google Scholar]
  49. Douillard, A.; Cord, M.; Ollion, C.; Robert, T.; Valle, E. PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning. In Proceedings of the 2020 European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; pp. 86–102. [Google Scholar]
  50. Sang, B.; Chen, H.; Yang, L.; Li, T.; Xu, W. Incremental Feature Selection Using a Conditional Entropy Based on Fuzzy Dominance Neighborhood Rough Sets. IEEE Trans. Fuzzy Syst. 2022, 30, 1683–1697. [Google Scholar] [CrossRef]
  51. Choi, Y.; Ozawa, S.; Lee, M. Incremental two-dimensional kernel principal component analysis. Neurocomputing 2014, 134, 280–288. [Google Scholar] [CrossRef]
  52. Bendale, A.; Boult, T. Towards Open World Recognition. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1893–1902. [Google Scholar]
  53. Koch, T.; Liebezeit, F.; Riess, C.; Christlein, V.; Köhler, T. Exploring the Open World Using Incremental Extreme Value Machines. In Proceedings of the 2022 IEEE International Conference on Pattern Recognition, Montreal, QC, Canada, 21–25 August 2022; pp. 2792–2799. [Google Scholar]
  54. Ma, X.; Ji, K.; Zhang, L.; Feng, S.; Xiong, B.; Kuang, G. An Open Set Recognition Method for SAR Targets Based on Multitask Learning. IEEE Geosci. Remote Sens. Lett. 2022, 19, 4014005. [Google Scholar] [CrossRef]
  55. Geng, X.; Dong, G.; Xia, Z.; Liu, H. SAR Target Recognition via Random Sampling Combination in Open-World Environments. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 331–343. [Google Scholar] [CrossRef]
Figure 1. Incremental learning for the SAR ATR model. The model can be updated at a minimal cost when it encounters the targets from new classes.
Figure 2. Schematic diagram of our proposed method.
Figure 3. SAR images of ten classes of vehicle targets.
Figure 4. Recognition accuracies of different methods under three scenarios. (a) Scenario_A; (b) Scenario_B; (c) Scenario_C.
Figure 5. The loss curve during dictionary learning (BMP2).
Figure 6. The recognition accuracy under different iterations (Scenario_A).
Figure 7. The recognition accuracy under different atom numbers (Scenario_A).
Figure 8. The recognition accuracy of several cases in Scenario_A.
Table 1. Configuration of ten-class targets under SOC.
Target    Training          Test
          Num    Angle      Num    Angle
BMP2      233    17°        195    15°
BTR70     233    17°        196    15°
T72       233    17°        196    15°
BTR60     256    17°        195    15°
2S1       299    17°        274    15°
BRDM2     298    17°        274    15°
D7        299    17°        274    15°
T62       299    17°        273    15°
ZIL131    299    17°        274    15°
ZSU234    299    17°        274    15°
Table 2. Three different incremental learning scenarios for MSTAR dataset.
Scenario A: Task_1 = {BMP2, BTR70, T72, BTR60}; Task_2 = {2S1}; Task_3 = {BRDM2}; Task_4 = {D7}; Task_5 = {T62}; Task_6 = {ZIL131}; Task_7 = {ZSU234}
Scenario B: Task_1 = {BMP2, BTR70, T72, BTR60}; Task_2 = {2S1, BRDM2}; Task_3 = {D7, T62}; Task_4 = {ZIL131, ZSU234}
Scenario C: Task_1 = {BMP2, BTR70, T72, BTR60}; Task_2 = {2S1, BRDM2, D7}; Task_3 = {T62, ZIL131, ZSU234}
Table 3. The overall recognition accuracy of several methods under three scenarios.
Scenario_A
Method          Task_1   Task_2   Task_3   Task_4   Task_5   Task_6   Task_7
Replay          0.9770   0.9527   0.9414   0.9370   0.9345   0.9358   0.9592
iCaRL           0.9834   0.9612   0.9414   0.9514   0.9457   0.9619   0.9604
Wa              0.9834   0.9669   0.9496   0.9383   0.9414   0.9507   0.9559
Podnet          0.9859   0.9754   0.9564   0.9545   0.9558   0.9521   0.9633
ICAC            0.9949   0.9804   0.9676   0.9565   0.9483   0.9342   0.9176
CBesIL          0.9806   0.8280   0.7744   0.7633   0.7511   0.7313   0.7215
Our Method      0.9987   0.9896   0.9662   0.9707   0.9680   0.9707   0.9736
Joint Training  0.9962   0.9962   0.9917   0.9925   0.9968   0.9963   0.9951

Scenario_B
Method          Task_1   Task_2   Task_3   Task_4
Replay          0.9770   0.9534   0.9212   0.9456
iCaRL           0.9834   0.9692   0.9435   0.9534
Wa              0.9834   0.9624   0.9505   0.9464
Podnet          0.9859   0.9609   0.9547   0.9386
ICAC            0.9949   0.9735   0.9630   0.9477
CBesIL          0.9806   0.8113   0.7639   0.7021
Our Method      0.9987   0.9662   0.9680   0.9736
Joint Training  0.9962   0.9917   0.9968   0.9951

Scenario_C
Method          Task_1   Task_2   Task_3
Replay          0.9770   0.9713   0.9612
iCaRL           0.9834   0.9769   0.9715
Wa              0.9834   0.9688   0.9666
Podnet          0.9859   0.9657   0.9604
ICAC            0.9949   0.9713   0.9564
CBesIL          0.9806   0.7749   0.7175
Our Method      0.9987   0.9707   0.9736
Joint Training  0.9962   0.9925   0.9951
Table 4. The recognition accuracy of several methods in Task_7.
Method          BMP2     BTR70    T72      BTR60    2S1      BRDM2    D7       T62      ZIL131   ZSU234   Overall
Joint Training  0.9897   1.0000   1.0000   1.0000   0.9818   0.9891   1.0000   0.9963   0.9964   1.0000   0.9951
Replay          0.8821   0.9694   0.9592   0.9436   0.9891   0.8650   0.9891   0.9817   0.9891   1.0000   0.9592
iCaRL           0.9282   0.9694   0.9796   0.8462   0.9635   0.9124   0.9781   0.9963   0.9964   1.0000   0.9604
Wa              0.9641   0.9694   0.9643   0.9026   0.9343   0.8942   0.9380   0.9853   1.0000   1.0000   0.9559
Podnet          0.9744   0.9694   0.9898   0.9538   0.9489   0.9088   0.9270   0.9707   1.0000   1.0000   0.9633
Our Method      1.0000   1.0000   1.0000   0.9949   0.9635   0.8759   0.9927   0.9524   0.9891   0.9964   0.9736
Table 5. Time consumption of several methods in Scenario_A.
Method     Replay   iCaRL   Wa    Podnet   Our Method
Time (s)   310      485     524   929      457
Table 6. Combination of λ1 and λ2 in four different cases.
         Case_A   Case_B   Case_C   Case_D
λ1       0.1      0.1      0.1      0.1
λ2       0        0.1      0.02     0.01
λ1/λ2    -        1        5        10
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
