Article

Inverse Synthetic Aperture Radar Sparse Imaging Recovery Technique Based on Improved Alternating Direction Method of Multipliers

National Key Laboratory of Space Awareness, Space Engineering University, No.1 Bayi Road, Huairou, Beijing 101400, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(9), 2943; https://doi.org/10.3390/s25092943
Submission received: 26 February 2025 / Revised: 18 April 2025 / Accepted: 25 April 2025 / Published: 7 May 2025
(This article belongs to the Section Radar Sensors)

Abstract

Inverse synthetic aperture radar (ISAR) technology is widely used in target recognition. This research addresses the image reconstruction error of sparse imaging in bistatic radar systems. We study sparse imaging techniques and propose a sparse imaging recovery algorithm based on an improved Alternating Direction Method of Multipliers. The algorithm accelerates convergence by dynamically adjusting the iterative parameters during the iteration process. Experiments show that, across different noise levels and sparsity settings, the proposed algorithm achieves a lower relative recovery error than the standard ADMMs (Alternating Direction Method of Multipliers).

1. Introduction

The advancement of radar detection technology has led to the widespread application of inverse synthetic aperture radar sparse imaging recovery techniques based on compressed sensing theory. The development of compressed sensing, together with progress in solution algorithms [1] and research on regularization methods, has driven extensive study [2] of sparse signal processing in radar detection. The theoretical framework for transforming signals from the time domain to the frequency domain gave rise to the concept of sparse signal recovery. Despite numerous challenges, technological advances have produced a variety of solution algorithms. At the same time, sparse coding algorithms and the demands of radar signal processing have driven each other forward, yielding a series of applications and innovations in the radar field. Against this backdrop, the field developed as follows.
Sparse signal recovery transforms time-domain signals into band-limited frequency-domain signals using the Fourier and cosine transforms. The evolution of transform theory, the wavelet transform, and compressed sensing theory has enabled sparse recovery to surpass the Nyquist sampling law, recovering the original signal from fewer observations than the theoretical sampling value. Sparse recovery is typically posed as the minimization of an objective function consisting of a restoration-error term and a sparsity-inducing term. The regularization term, also known as the sparsity-inducing term, primarily enforces sparsity through the L0 norm. However, the L0 norm is non-convex, and minimizing it is NP-hard [3]. To address this, Mallat et al. [4] first applied the matching pursuit (MP) algorithm to the signal sparse decomposition task, followed by the improved orthogonal matching pursuit (OMP) algorithm [5], the iterative hard thresholding (IHT) algorithm [6], and the hard thresholding pursuit (HTP) algorithm [7]. On the other hand, to handle the non-convexity of the L0 norm, Donoho et al. [8] proved the equivalence between the minimum-L0-norm solution and the minimum-L1-norm solution. The resulting convex relaxation with the L1 norm is known as the LASSO (least absolute shrinkage and selection operator) problem [9], for which a series of algorithms exist, such as the least-angle regression (LARS) algorithm [10].
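To make the greedy family concrete, the following is a minimal orthogonal matching pursuit sketch in NumPy. It is illustrative only: the variable names and the stopping rule of exactly K atom selections are assumptions, not the exact formulation of [5].

```python
import numpy as np

def omp(D, s, K):
    """Greedy OMP: pick K columns of dictionary D to approximate signal s."""
    residual = s.copy()
    support = []
    for _ in range(K):
        # Choose the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit on the chosen support, then refresh the residual.
        coef, *_ = np.linalg.lstsq(D[:, support], s, rcond=None)
        residual = s - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Toy check: a 2-sparse vector is recovered from 32 random measurements.
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)       # unit-norm dictionary columns
x_true = np.zeros(64)
x_true[[5, 40]] = [1.5, -2.0]
x_hat = omp(D, D @ x_true, K=2)
```

With a well-conditioned random dictionary and a truly sparse signal, the two active atoms are identified and the least-squares step then reproduces their amplitudes.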
The rapid progress of sparse coding algorithms and the growing need for sparse recovery in radar signal processing highlight the value of sparse coding technology for improving radar imaging performance. Sparse coding helps overcome challenges in radar signal processing, such as detection and anti-jamming performance, to achieve high-quality imaging [11]. At the same time, the motion of some targets can itself produce sparse phenomena, for which several solution algorithms have been proposed [12]. New radar systems have also emerged, including frequency-domain sparse radar and bistatic radar. For the latter, the authors of [13] point out that the bistatic transceiver geometry and the need to constantly switch beams while observing the target easily cause echo loss and sparse apertures during observation. Various algorithms, such as the ROMP algorithm, have been proposed to improve inverse synthetic aperture radar image resolution [14]. The literature [15] proposes a Bayesian bistatic ISAR algorithm for imaging under a low signal-to-noise ratio. Qiu et al. [16] proposed a model combining low-rank and sparse prior constraints for inverse synthetic aperture radar imaging under different sparse data patterns, and developed two reconstruction algorithms within the framework of the alternating direction method of multipliers. Zhang et al. [17] proposed a sparse ISAR autofocus imaging algorithm based on sparse Bayesian learning. In the literature [18], the Cauchy–Newton algorithm was proposed to solve a weighted L1-norm-constrained optimization problem for target image reconstruction.
This study addresses sparse imaging recovery in inverse synthetic aperture radar by establishing a sparse aperture imaging model and proposing an ISAR sparse imaging recovery algorithm based on an improved Alternating Direction Method of Multipliers. The algorithm's effectiveness is evaluated by analyzing recovery results under varying noise levels and sparsity.
The contributions of this paper are concluded as follows:
  • The sparse imaging model of the inverse synthetic aperture radar is transformed into a constrained problem.
  • An improved Alternating Direction Method of Multipliers in which the control parameters are updated across iterations to accelerate convergence.

2. Sparse Aperture Imaging Model

The physical ISAR imaging process is shown in Figure 1. The echo signal $S_a$ can be derived from the ISAR image $X$ and the echo model matrix $D_a$.
The radar echo model used in this paper, as described in the literature [13], assumes that there are Q pulses in the full-aperture echo signal with a cumulative rotation angle of Δθ, and the definition of the sparse basis matrix is as follows:
$$D_a\big|_{Q\times M}=\begin{bmatrix}1&1&\cdots&1\\1&\omega_1^1&\cdots&\omega_1^{M-1}\\\vdots&\vdots&\ddots&\vdots\\1&\omega_{Q-1}^1&\cdots&\omega_{Q-1}^{M-1}\end{bmatrix}\tag{1}$$
Among them, $N$ is the number of range units in the ISAR image, $M$ is the number of Doppler units, and $Q$ is the number of valid pulses. $\omega_q^m=\exp\left(j2\pi\frac{mN}{M}\frac{\theta(q)}{\Delta\theta}\cos\frac{\beta(q)}{2}\right)$, in which subscript $q$ is the pulse index and superscript $m$ is the Doppler-unit index. $\theta(q)$ and $\beta(q)$ are, respectively, the rotation angle and the bistatic angle as they change over time during the imaging period. The two-dimensional target is imaged into $N$ range units and $M$ Doppler units by matrix $D_a$; in practice, however, the echo signal received by the bistatic ISAR is corrupted by noise, so the echo signal model is
$$S_a=D_aX+\varepsilon_0\tag{2}$$
Formula (2) is the observation model of the ISAR image, describing the echoes from the targets; more details can be found in the literature [13]. In Formula (2), $S_a$ is the full-aperture 2D echo data after motion compensation and phase compensation, $X$ is the signal to be recovered, and $\varepsilon_0$ is the measurement noise. In an actual measurement, there is no guarantee that all of the aperture data are completely received. Assuming the number of valid pulses that can be received is $K$ ($K\le Q$), the actual received fusion of the effective data is
$$S=I(D_a)X+\varepsilon=DX+\varepsilon\tag{3}$$
Here $I(\cdot)$ is the column-selection function of the matrix: the selected columns are kept and the remaining columns are set to 0. In this paper, $X$ is recovered mainly from the pair $S$ and $D$, and clearly the larger the number of effective pulses $K$, the better the signal recovery. For the noise $\varepsilon$, this study assumes Gaussian white noise [13], whose probability density function is
$$P(\varepsilon)=\left(\frac{1}{2\pi\sigma^2}\right)^{NK}\exp\left(-\frac{1}{2\sigma^2}\|\varepsilon\|_2^2\right)\tag{4}$$
and, given $X$,

$$P(S|X)=\left(\frac{1}{2\pi\sigma^2}\right)^{NK}\exp\left(-\frac{1}{2\sigma^2}\|S-DX\|_2^2\right)\tag{5}$$
Since radar signals are assumed to be well sparse in this study, it can be assumed that each pixel point $x_{ij}=X(i,j)$ in the target image obeys a Laplace distribution with parameter $l_{ij}$. Thus,
$$P(x_{ij})=\frac{l_{ij}}{2}\exp\left(-l_{ij}|x_{ij}|\right)\tag{6}$$
Assuming that the individual pixel points are independently distributed, the probability density function of $X$ is
$$P(X)=\prod_{i=1}^{N}\prod_{j=1}^{M}\frac{l_{ij}}{2}\exp\left(-\sum_{i=1}^{N}\sum_{j=1}^{M}l_{ij}|x_{ij}|\right)\tag{7}$$
To estimate X, according to the Bayesian criterion,
$$\bar{X}=\arg\max P(X|S)=\arg\max P(S|X)P(X)=\arg\max\left(\log P(S|X)+\log P(X)\right)\tag{8}$$
Substituting and simplifying the above formulas yields
$$\bar{X}=\arg\max\left(-\frac{1}{2\sigma^2}\|S-DX\|_F^2-\|L\odot X\|_1\right)=\arg\min\left(\frac{1}{2}\|S-DX\|_F^2+\sigma^2\|L\odot X\|_1\right)\tag{9}$$
where

$$L=\begin{bmatrix}l_{11}&l_{12}&\cdots&l_{1M}\\l_{21}&l_{22}&\cdots&l_{2M}\\\vdots&\vdots&\ddots&\vdots\\l_{N1}&l_{N2}&\cdots&l_{NM}\end{bmatrix}\tag{10}$$

$\odot$ denotes element-wise multiplication of matrices, and $\|U\|_F$ is the Frobenius norm of a matrix $U$, defined as $\|U\|_F=\sqrt{\sum_{i,j}|U(i,j)|^2}$, where $U(i,j)$ is the element with index $(i,j)$.
In the optimization process, $\bar{X}$ is solved column by column. The matrix $L$ is called the regularized sparsity parameter and balances the recovery-error term against the regularization term: if the noise level is higher, the corresponding element of $L$ should take a larger value; if lower, a smaller value. In this study, we assume all elements of $L$ take the same value, so the problem can be further simplified.
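The column-selection operator $I(\cdot)$ in Formula (3) can be sketched as follows; this is a minimal NumPy illustration in which the function name and the index list are hypothetical.

```python
import numpy as np

def column_select(Da, valid):
    """I(.): keep the columns of Da indexed by `valid` and zero out the rest."""
    D = np.zeros_like(Da)
    D[:, valid] = Da[:, valid]
    return D

Da = np.arange(12.0).reshape(3, 4)   # a toy 3x4 basis matrix
D = column_select(Da, [0, 2])        # only columns 0 and 2 survive
```

Zeroing (rather than deleting) the unselected columns keeps the matrix shape fixed, so the recovery problem can be written against the same dimensions for any set of valid pulses.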

3. Inverse Synthetic Aperture Radar Sparse Imaging Recovery Based on Improved Alternating Direction Method of Multipliers

To solve the above optimization problem, this paper proposes a solution based on an improved Alternating Direction Method of Multipliers (ADMMs), derived from the traditional method [19], in which the problem is first transformed into a constrained problem.
$$\min_{X\in\mathbb{C}^{M\times N}}\frac{1}{2}\|S-DX\|_F^2+\lambda\|U\|_1\quad\text{s.t. }X=U\tag{11}$$
The Lagrange multiplier method is then applied, adding the quadratic term $\|X-U\|_2^2$ to the objective function of the above problem. The algorithm alternately minimizes over the primal variables $X$ and $U$ and maximizes over the dual variable (a version of the method of multipliers). The pseudo-code is given in Algorithm 1.
Algorithm 1. Inverse synthetic aperture radar sparse imaging recovery algorithm based on Improved Alternating Direction Method of Multipliers
Input: $D\in\mathbb{C}^{Q\times M}$, $S\in\mathbb{C}^{Q\times N}$, $\mu>0$ (multiplier), $\lambda>0$ (noise parameter)
Output: $X\in\mathbb{C}^{M\times N}$
1. Begin
2.   k = 0
3.   X^(0) = D^H S
4.   U^(0) = X, V^(0) = 0
5.   F = (D^H D + μI)^(−1)
6.   While not converged do
7.     U^(k+1) = soft(X^(k) − V^(k), λ/μ)
8.     X^(k+1) = F(D^H S + μ(U^(k+1) + V^(k)))
9.     V^(k+1) = V^(k) − (X^(k+1) − U^(k+1))
10.    k = k + 1
11.  end
12.  X = X^(k)
13. end
The complexity of this algorithm for encoding $N$ signals with the same matrix is $O(KQ^2+NKQ)$. Here, $\text{soft}(x,\tau)=\max(|x|-\tau,0)\cdot x/|x|$ is the soft-threshold operator, applied to each element of the matrix. When $N$ is large, step 7 of the algorithm is the most time-consuming step, with a complexity of $O(Q^2)$. By utilizing the left eigenvectors and eigenvalues of the matrix $D^H$, this step can be simplified to an operation with complexity $O(KQ)$, where $K$ is the sparsity of the signal representation; that is, only $K$ columns of the dictionary $D$ are used to represent the signal $S$. If a large $K$ is used, the signal is recovered nearly perfectly, but so is the noise; otherwise, the signal is recovered less accurately, but the noise is suppressed as well. Although the value of $\mu>0$ affects the final convergence rate, the convergence of the algorithm is independent of its value. The selection of $\mu$ is discussed in detail in the literature [18].
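A direct NumPy transcription of Algorithm 1 might look as follows. It is a sketch under two assumptions: a fixed iteration budget stands in for the "not converged" test, and the soft threshold is applied element-wise exactly as defined above.

```python
import numpy as np

def soft(x, tau):
    """Element-wise soft threshold: max(|x| - tau, 0) * x / |x|."""
    mag = np.abs(x)
    return np.where(mag > tau, (1 - tau / np.maximum(mag, 1e-12)) * x, 0)

def admm_recover(D, S, mu=1.0, lam=0.1, iters=100):
    """Algorithm 1: recover X from S = DX + noise via the ADMM iterations."""
    X = D.conj().T @ S                       # step 3: matched-filter initialization
    U, V = X.copy(), np.zeros_like(X)        # step 4
    F = np.linalg.inv(D.conj().T @ D + mu * np.eye(D.shape[1]))  # step 5, cached
    for _ in range(iters):                   # step 6: fixed budget replaces the test
        U = soft(X - V, lam / mu)            # step 7
        X = F @ (D.conj().T @ S + mu * (U + V))  # step 8
        V = V - (X - U)                      # step 9
    return X

# Sanity check with D = I: the iteration converges to the soft threshold of S,
# which is the closed-form LASSO solution in this special case.
S = np.array([[2.0], [-1.0], [0.2]])
X_hat = admm_recover(np.eye(3), S, mu=1.0, lam=0.5, iters=60)
```

Caching $F=(D^HD+\mu I)^{-1}$ outside the loop is what makes the per-iteration cost a matrix multiply rather than a fresh solve; this only works because $\mu$ is held fixed inside this loop.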
In this study, the parameter $\mu$ is changed during the iteration process to achieve faster convergence. The focus is on comparing the change between the optimization variable $X$ and its dual-constrained copy $U$, and comparing $U$ with its value recorded at iteration counts that are multiples of 10. The following algorithm shows how $\mu$ is updated depending on the relationship between these two changes.
Our proposal combines Algorithms 1 and 2. It effectively accelerates convergence and ensures that the objective function quickly reaches the optimized objective value. The control parameter $p$ is a freely chosen real number, taking the value 10 in this paper. The smaller $p$ is, the more frequently the parameters $\mu$ and $V$ are updated.
$$\min_{X\in\mathbb{C}^{M\times N}}\frac{1}{2}\|S-DX\|_F^2+\lambda\|U\|_1+\mu\|X-U\|_2^2\tag{12}$$
In Algorithm 2, the parameter $\mu$ is changed according to the relationship between $r_1$ and $r_2$. If $r_1>r_2$, meaning the difference between $X$ and $U$ is larger than that between $U$ and $U_0$, $\mu$ should be increased to put more penalty on the third term of the objective function in Formula (12), and vice versa. The convergence of the method therefore benefits from the tuning of $\mu$.
Although $\mu$ and $V$ are changed across iterations, the complexity of the algorithm is unchanged; in each iteration the penalty is adjusted so that the variables $U$ and $X$ move toward the optimum.
Algorithm 2. Algorithm for updating the parameters of Improved Alternating Direction Method of Multipliers
Input: $i$ (iteration number), $X$, $U$, $V$, $\mu>0$ (multiplier), $p>0$ (control parameter)
Output: $\mu$ (updated multiplier), $V$ (iterative variable)
1. Begin
2.   if i mod 10 = 0
3.     U_0 = U
4.   end
5.   r_1 = ‖X − U‖_F
6.   r_2 = ‖U − U_0‖_F
7.   μ = μ
8.   V = V
9.   if r_1 > p·r_2
10.    μ = 2μ
11.    V = V/2
12.  end
13.  if r_2 > p·r_1
14.    μ = μ/2
15.    V = 2V
16.  end
17. end
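The update rule of Algorithm 2 can be transcribed directly; in this sketch the caller is assumed to snapshot U into U0 every 10 iterations (step 2), and the function name is illustrative.

```python
import numpy as np

def update_mu(X, U, U0, V, mu, p=10.0):
    """Algorithm 2, steps 5-16: rescale mu and V based on the two residuals."""
    r1 = np.linalg.norm(X - U)    # ||X - U||_F: mismatch between X and U
    r2 = np.linalg.norm(U - U0)   # ||U - U0||_F: progress of U since the snapshot
    if r1 > p * r2:               # X lags behind U: strengthen the coupling penalty
        mu, V = 2.0 * mu, V / 2.0
    elif r2 > p * r1:             # U still moving freely: relax the penalty
        mu, V = mu / 2.0, 2.0 * V
    return mu, V

# Example: X far from U while U has stalled, so mu is doubled and V halved.
X, U, U0 = np.ones((2, 2)), np.zeros((2, 2)), np.zeros((2, 2))
mu_new, V_new = update_mu(X, U, U0, np.ones((2, 2)), mu=1.0)
```

Note that rescaling V whenever μ changes keeps the product μV (the effective dual term) consistent across the switch, which is why the two rescalings always move in opposite directions.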

4. Results and Discussion

The experiments use simulated inverse SAR imaging results of three different airplane models, whose ISAR image sizes are 64 × 128, 128 × 64, and 128 × 128, respectively; the first number is the number of cross-range units, denoted $M$, and the second is the number of Doppler units, denoted $N$. The axes are in units of image pixels. The measurement matrix $D$ has size $Q\times M$ and, in the following experiments, is initialized as a random matrix with $Q=128$. The original sparsity $K$ of the three models is 64, 128, and 128, respectively. The original inverse SAR images, denoted $X$, are shown in Figure 1:
First, the convergence speed of the improved algorithm is compared with that of the original Alternating Direction Method of Multipliers. For each model airplane, a line of measurements is selected, and the change in the objective function value with the number of iterations is shown in Figure 2.
Figure 2 shows the convergence curves of models 1–3 at the 10th, 30th, and 60th measurement points with equal signal-to-noise ratios, where the horizontal axis is the number of iterations and the vertical axis is the value of the objective function. Comparing the results, the convergence speed of the improved algorithm is higher than that of the original Alternating Direction Method of Multipliers. The degree of improvement varies from case to case; e.g., for model 2, observation 30, the improvement starts at the 12th iteration step and is small relative to the other two cases.
Figure 3 shows the convergence speed of each algorithm for different noise cases for model 1. The objective function converges to larger values in the noisier case (signal-to-noise ratio of 10). Conversely, with less noise, the objective function converges to a smaller value, as shown in Figure 4. In terms of convergence speed, the low-noise case converges faster and is already close to the final value at the 10th iteration, whereas the noisier case (signal-to-noise ratio of 10) converges more slowly, approaching the final value at about the 25th iteration.
To quantitatively analyze the effect of sparse recovery, let $X$ be the original image and $\bar{X}$ the processed image. The relative recovery error (RRE) is defined as
$$r=\|X-\bar{X}\|_F/\|X\|_F\tag{13}$$
where $\|X\|_F$ is the Frobenius norm, defined as
$$\|X\|_F=\sqrt{\sum_i\sum_j|x_{ij}|^2}\tag{14}$$
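In code, the RRE of Formulas (13) and (14) is a one-liner, since NumPy's matrix norm defaults to the Frobenius norm:

```python
import numpy as np

def rre(X, X_hat):
    """Relative recovery error r = ||X - X_hat||_F / ||X||_F."""
    return np.linalg.norm(X - X_hat) / np.linalg.norm(X)

# A perfect recovery gives r = 0; an all-zero recovery gives r = 1.
X = np.array([[1.0, 2.0], [3.0, 4.0]])
```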
For model 1, the relative recovery errors at different noise levels and measurement sparsities are shown in Table 1 below, and the imaging results in Figure 5 and Figure 6 below. In this experiment, our proposal achieves the same results as the ADMMs for the different SNRs and sparsities $K$: since our proposal is an improvement of the ADMMs, its worst-case results coincide with the ADMMs when $p$ in Algorithm 2 is very large. For the same SNR, the smallest RRE is achieved at the largest sparsity $K$, since the fitting error is small in this setting. A detailed analysis of the effect of the SNR and $K$ on the RRE follows in the subsequent experiments.
For model 2, the relative recovery errors at different noise levels and measurement sparsities are shown in Table 2 below. The figures marked in bold are the best results among all compared algorithms. The imaging results are shown in Figure 7, Figure 8, Figure 9 and Figure 10 below. In this experiment, our proposal achieves the best results, especially for the experiments with large sparsity $K$ (the last column of Table 2). From the white rectangles in Figure 8 and Figure 9, we can conclude that our proposal suppresses the noise better than the other algorithms.
For model 3, the relative recovery errors at different noise levels and measurement sparsities are shown in Table 3 below. The figures marked in bold are the best results among all compared algorithms. The imaging results are shown in Figure 11, Figure 12, Figure 13 and Figure 14 below. In this experiment, the same conclusion can be drawn as for plane model 2.
Figure 15 compares the recovery error for different noise and sparsity values. Figure 15a shows the recovery error for model 2 at SNR = 5 for different sparsity values. As $K$ increases (i.e., the representation becomes less sparse), the recovery error first decreases, because more atoms fit the signal better, and then increases: since the noise is severe in the SNR = 5 case, the recovery process starts to recover the noise as well, so the error relative to the original signal gradually grows. Among the three algorithms, the OMP algorithm has the worst recovery and the proposed algorithm the best. In Figure 15b, SNR = 50, the noise is relatively small, so the recovery error decreases steadily as $K$ increases and more of the received signal is used to recover the original. The proposed algorithm obtains a smaller relative recovery error than the other two algorithms.
The OMP method is greedy and does not necessarily converge to the optimum, so its RRE is large. The ADMMs and our proposal are convex approximations, and our proposal tunes the parameters to obtain a better solution to the original constrained problem than the ADMMs, so it achieves the lowest RRE.
Figure 16 shows the relative recovery error curves of the different algorithms for different signal-to-noise ratios. Figure 16a shows the relative recovery error for model 2 with sparsity K = 10. As the noise decreases, each algorithm stabilizes at a certain relative recovery error value, which is the error due to sparsity. The value at which the proposed algorithm stabilizes is smaller than those of the other two algorithms, so it achieves better sparse recovery. Figure 16b shows the relative recovery error for model 3 with sparsity K = 5, which likewise stabilizes at a certain value as the noise decreases; again, the proposed algorithm obtains a smaller relative recovery error.
Figure 17 shows, for model 2, the relative recovery error curves for different λ values at different noise levels with sparsity K = 20. It can be concluded that the optimal λ value is 5000 when the noise is relatively large: a smaller relative recovery error is achieved by increasing sparsity to suppress the noise. As the noise level decreases, the optimal λ value also decreases. When the signal-to-noise ratio exceeds 15, a λ value between 1 and 100 yields a near-optimal relative recovery error.
As shown in Equation (11), the parameter λ balances the fitting error and noise suppression. If the SNR is large, such as 30 dB, a small λ should be selected to give a large weight to the fitting error in the objective function, so that minimizing the objective yields a small fitting error. Otherwise, a large λ should be used for a small SNR, since a large λ suppresses the severe noise.
The effect of the proposed algorithm is analyzed below for different parameter update frequencies, where the number i denotes that the parameters are updated every i iterations. The recovery time is measured for the model 2, K = 10 case. From Table 4, the minimum time is consumed when the parameters are changed every 10 iterations. As i increases, the parameters change less frequently and the per-update computation decreases, but the convergence speed also decreases, so there exists an optimal i that minimizes the total time. In this experiment, the optimum is i = 10.

5. Conclusions

This study addresses sparse imaging in the context of inverse synthetic aperture radar (ISAR). First, the sparse aperture imaging model is examined; then an ISAR sparse imaging recovery algorithm based on the improved Alternating Direction Method of Multipliers (ADMMs) is introduced. The algorithm's convergence is enhanced through the adjustment of iterative parameters. Experimental results demonstrate that the algorithm achieves recovery with a significantly reduced relative recovery error. Furthermore, the paper evaluates the impact of noise and sparsity on the recovery process and conducts a sensitivity analysis of the algorithm's parameters.

Author Contributions

H.H. provided research ideas and completed simulation experiments and writing; W.Z. translated and edited papers; R.Y. and D.L. refined and reviewed the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by The Foundation of The National Key Laboratory of Space Awareness, grant number STA2024KGJ0102, and the project of discipline construction with grant number 145AXL250004000X.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author. The data are not publicly available due to the request of the funder.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sahoo, S.K.; Makur, A. Signal recovery from random measurements via extended orthogonal matching pursuit. IEEE Trans. Signal Process. 2015, 63, 2572–2581. [Google Scholar] [CrossRef]
  2. Quan, Y.; Wu, Y.; Duan, L.; Xu, G.; Xue, M.; Liu, Z.; Xing, M. A review of radar signal processing based on sparse recovery. J. Radars 2024, 13, 46–67. [Google Scholar]
  3. Natarajan, K. Sparse approximate solutions to linear systems. SIAM J. Comput. 1995, 24, 227–234. [Google Scholar] [CrossRef]
  4. Mallat, S.G.; Zhang, Z. Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process. 1993, 41, 3397–3415. [Google Scholar] [CrossRef]
  5. Pati, Y.C.; Krishnaprasad, P. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In Proceedings of the 27th Annual Asilomar Conference on Signals, Systems and Computers, Los Alamitos, CA, USA, 1–3 November 1993; pp. 40–44. [Google Scholar]
  6. Blumensath, T.; Davies, M. Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 2009, 27, 265–274. [Google Scholar] [CrossRef]
  7. Foucart, S. Hard thresholding pursuit: An algorithm for compressive sensing. SIAM J. Numer. Anal. 2011, 49, 2543–2563. [Google Scholar] [CrossRef]
  8. Donoho, D.L.; Elad, M. Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization. Proc. Natl. Acad. Sci. USA 2003, 100, 2197–2202. [Google Scholar] [CrossRef]
  9. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. (Ser. B) 1996, 58, 267–288. [Google Scholar] [CrossRef]
  10. Efron, B.; Hastie, T.; Johnstone, I.; Tibshirani, R. Least angle regression. Ann. Stat. 2004, 32, 407–499. [Google Scholar] [CrossRef]
  11. Wen, F.; Zhang, G.; Tao, Y.; Liu, S. Adaptive compressive sensing toward low signal-to-noise ratio scene. Acta Phys. Sin. 2015, 64, 084301. [Google Scholar]
  12. Guo, X.; Liu, F.; Huang, D. Migration through Resolution Cell Correction and Sparse Aperture ISAR Imaging for Maneuvering Target Based on Whale Optimization Algorithm—Fast Iterative Shrinkage Thresholding Algorithm. Sensors 2024, 24, 2148. [Google Scholar] [CrossRef]
  13. Xue, D.; Zhu, X.; Hu, W.; Guo, B.; Zeng, H. Bi-ISAR imaging based on weighted L1 norm optimization algorithm. Syst. Eng. Electron. 2021, 43, 944–953. [Google Scholar]
  14. Hu, W.; Zhu, X.; Guo, B.; Xue, D. A bistatic ISAR sparse aperture high resolution Imaging algorithm based on ROMP algorithm. In Proceedings of the IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference, Chengdu, China, 15–17 March 2019; pp. 490–494. [Google Scholar]
  15. Zhang, S.; Liu, Y.; Li, X. Bayesian bistatic ISAR imaging for targets with complex motion under low SNR condition. IEEE Trans. Image Process. 2018, 27, 2447–2460. [Google Scholar] [CrossRef] [PubMed]
  16. Qiu, W.; Zhou, J.; Fu, Q. Jointly Using Low-Rank and Sparsity Priors for Sparse Inverse Synthetic Aperture Radar Imaging. IEEE Trans. Image Process. 2020, 29, 100–115. [Google Scholar] [CrossRef] [PubMed]
  17. Zhang, S.; Liu, Y.; Li, X. Fast sparse aperture ISAR autofocusing and imaging via ADMM based sparse Bayesian learning. IEEE Trans. Image Process. 2020, 29, 3213–3226. [Google Scholar] [CrossRef] [PubMed]
  18. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar]
  19. Bioucas-Dias, J.; Figueiredo, M. Alternating direction algorithms for constrained sparse regression: Application to hyperspectral unmixing. In Proceedings of the 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Reykjavik, Iceland, 14–16 June 2010; pp. 1–4. [Google Scholar]
  20. Chen, V.C.; Martorella, M.C. Inverse Synthetic Aperture Radar Imaging: Principles, Algorithms, and Applications; Scitech Publishing, an Imprint of the IET: Raleigh, NC, USA, 2014. [Google Scholar]
Figure 1. The ISAR imaging system [20].
Figure 2. The simulated imaging results of inverse SAR of three different models of airplanes.
Figure 3. Variation of objective function values at different observation points for different models.
Figure 4. Convergence rate under varying noise conditions.
Figure 5. SNR = 50, K = 5 recovery effect for model 1.
Figure 6. SNR = 10, K = 5 recovery effect for model 1.
Figure 7. SNR = 50, K = 5 recovery effect for model 2.
Figure 8. SNR = 10, K = 5 recovery effect for model 2.
Figure 9. SNR = 50, K = 20 recovery effect for model 2.
Figure 10. SNR = 10, K = 20 recovery effect for model 2.
Figure 11. SNR = 50, K = 5 recovery effect for model 3.
Figure 12. SNR = 10, K = 5 recovery effect for model 3.
Figure 13. SNR = 50, K = 20 recovery effect for model 3.
Figure 14. SNR = 10, K = 20 recovery effect for model 3.
Figure 15. Comparison results of recovery errors for model 2 with different noise and sparsity.
Figure 16. Relative recovery error plots for different signal-to-noise ratios.
Figure 17. Model 2, relative recovery error plots for different λ values at different noise levels in the sparsity K = 20 case.
Table 1. Comparison of relative recovery errors for different noise and sparsity scenarios (model 1).

| Noise | Method | K = 3 | K = 5 | K = 10 | K = 20 |
|---|---|---|---|---|---|
| SNR = 80 | OMP | 0.6339 | 0.5103 | 0.3426 | 0.2066 |
| | ADMMs | 0.6225 | 0.4981 | 0.3287 | 0.1942 |
| | proposed | 0.6225 | 0.4981 | 0.3287 | 0.1942 |
| SNR = 50 | OMP | 0.6333 | 0.5103 | 0.3426 | 0.2063 |
| | ADMMs | 0.6226 | 0.4981 | 0.3287 | 0.1943 |
| | proposed | 0.6226 | 0.4981 | 0.3287 | 0.1943 |
| SNR = 30 | OMP | 0.6333 | 0.5107 | 0.3427 | 0.2085 |
| | ADMMs | 0.6227 | 0.4984 | 0.3293 | 0.1956 |
| | proposed | 0.6227 | 0.4984 | 0.3293 | 0.1956 |
| SNR = 10 | OMP | 0.6490 | 0.5330 | 0.4037 | 0.3564 |
| | ADMMs | 0.6328 | 0.5162 | 0.3813 | 0.3256 |
| | proposed | 0.6328 | 0.5162 | 0.3813 | 0.3256 |
Table 2. Comparison of relative recovery errors for different noise and sparsity scenarios (model 2). Bold figures are the best results among the compared algorithms.

| Noise | Method | K = 3 | K = 5 | K = 10 | K = 20 |
|---|---|---|---|---|---|
| SNR = 80 | OMP | 0.7375 | 0.6480 | 0.5044 | 0.4071 |
| | ADMMs | 0.7173 | 0.6148 | 0.4936 | 0.4642 |
| | proposed | **0.7098** | **0.6078** | **0.4537** | **0.3498** |
| SNR = 50 | OMP | 0.7375 | 0.6486 | 0.5042 | 0.4075 |
| | ADMMs | 0.7173 | 0.6149 | 0.4936 | 0.4615 |
| | proposed | **0.7098** | **0.6075** | **0.4537** | **0.3502** |
| SNR = 30 | OMP | 0.7377 | 0.6485 | 0.5100 | 0.4090 |
| | ADMMs | 0.7176 | 0.6139 | 0.4921 | 0.4596 |
| | proposed | **0.7078** | **0.6073** | **0.4541** | **0.3509** |
| SNR = 10 | OMP | 0.7539 | 0.6827 | 0.5667 | 0.5592 |
| | ADMMs | 0.7277 | 0.6363 | 0.5531 | 0.5378 |
| | proposed | **0.7237** | **0.6268** | **0.5050** | **0.4852** |
Table 3. Comparison of relative recovery errors for different noise and sparsity scenarios (model 3). Bold figures are the best results among the compared algorithms.

| Noise | Method | K = 3 | K = 5 | K = 10 | K = 20 |
|---|---|---|---|---|---|
| SNR = 80 | OMP | 0.6736 | 0.5453 | 0.3119 | 0.1524 |
| | ADMMs | 0.6692 | 0.5313 | 0.3018 | 0.1711 |
| | proposed | **0.6672** | **0.5302** | **0.2983** | **0.1449** |
| SNR = 50 | OMP | 0.6736 | 0.5453 | 0.3119 | 0.1524 |
| | ADMMs | 0.6692 | 0.5313 | 0.3018 | 0.1711 |
| | proposed | **0.6672** | **0.5301** | **0.2984** | **0.1449** |
| SNR = 30 | OMP | 0.6739 | 0.5454 | 0.3118 | 0.1535 |
| | ADMMs | 0.6693 | 0.5315 | 0.3022 | 0.1727 |
| | proposed | **0.6673** | **0.5304** | **0.2990** | **0.1463** |
| SNR = 10 | OMP | 0.6877 | 0.5704 | 0.3763 | 0.3167 |
| | ADMMs | 0.6789 | 0.5512 | 0.3544 | 0.2948 |
| | proposed | **0.6774** | **0.5476** | **0.3527** | **0.2815** |
Table 4. Parameter update frequency and time consumption.

| i | 2 | 5 | 10 | 20 |
|---|---|---|---|---|
| t | 1.3550 | 1.3194 | 1.1585 | 1.4594 |

Share and Cite

Hao, H.; Zhu, W.; Yu, R.; Liu, D. Inverse Synthetic Aperture Radar Sparse Imaging Recovery Technique Based on Improved Alternating Direction Method of Multipliers. Sensors 2025, 25, 2943. https://doi.org/10.3390/s25092943
