Article

Noise-Robust ISAR Translational Motion Compensation via HLPT-GSCFT

1 Air and Missile Defense College, Air Force Engineering University, Xi’an 710051, China
2 Science College, Armed Police Engineering University, Xi’an 710051, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(24), 6201; https://doi.org/10.3390/rs14246201
Submission received: 18 November 2022 / Revised: 30 November 2022 / Accepted: 4 December 2022 / Published: 7 December 2022
(This article belongs to the Special Issue Advances of SAR Data Applications)

Abstract

Translational motion compensation is a prerequisite of inverse synthetic aperture radar (ISAR) imaging. Translational motion compensation for datasets with low signal-to-noise ratio (SNR) is important but challenging. In this work, we propose a noise-robust translational motion compensation method based on the high-order local polynomial transform–generalized scaled Fourier transform (HLPT-GSCFT). We first model the translational motion as a fourth-order polynomial according to order-of-magnitude analysis, and then design HLPT-GSCFT for translational parameter estimation and parametric translational motion compensation. Specifically, HLPT is designed to estimate the acceleration and third-order acceleration of the translational motion, and GSCFT is introduced to estimate the second-order acceleration. Both HLPT and GSCFT have a strong ability for cross-term suppression. In addition, we use a minimum weighted entropy algorithm to estimate the velocity of the translational motion, which improves the noise robustness of the parameter estimation. Experimental results based on a measured dataset prove that the proposed method is effective and noise-robust.

1. Introduction

Translational motion compensation is a preliminary step in inverse synthetic aperture radar (ISAR) imaging. In the early days of ISAR imaging, limited by the performance of microelectronic devices, most imaging radars used narrowband tracking–broadband imaging systems, and stretch processing was adopted to perform pulse compression on the echoes. Under this condition, the translational error cannot be modeled because a random phase error is introduced into the echoes. Therefore, early translational motion compensation methods were dedicated to compensating the translational error of each pulse separately. Such methods are called non-parametric translational motion compensation methods and are still widely used today. Specifically, the translational error affects both the range profile and the phase, so non-parametric methods first align the range profile and then correct the phase. Common range alignment methods include maximum correlation-based range alignment (MCRA) [1] and minimum average range profile entropy (ARPE) [2,3]. Existing phase correction methods involve signal subspace-based methods [4,5,6,7], dominant scatterer-based methods [8,9,10] and minimum entropy or maximum contrast methods [11,12,13,14,15]. However, most non-parametric methods struggle to achieve ideal results in low signal-to-noise ratio (SNR) environments. With the rise of artificial intelligence technology, some scholars have applied neural networks to translational motion compensation, for example, using deep recurrent neural networks for range alignment at low SNR [16] or combining neural networks with compressed sensing to accomplish autofocusing [17,18]. However, such methods require considerable effort to train.
With the development of microelectronics technology, echoes with a coherent translational error can be obtained from modern radar. Under this condition, the translational error can be modeled according to the movement of the target relative to the radar. Currently, most scholars model the translation as a polynomial, and the joint compensation of range profile and phase can be realized by estimating the translational polynomial parameters. Such methods are called parametric translational motion compensation methods. Since estimating several translational parameters is far easier than estimating hundreds of phase errors, parametric methods are generally more robust. Some scholars draw on the idea of the minimum entropy algorithm in the non-parametric methods and propose using the gradient descent algorithm [19,20], or swarm intelligence algorithms [21,22,23,24,25], to optimize the ISAR image quality and thereby estimate the translational parameters. Other scholars use time–frequency distributions to realize non-search estimation of the translational parameters [26]. However, the accuracy and robustness of parametric translational motion compensation methods need to be further improved. On the one hand, parameter estimation based on the image quality of range-Doppler (RD) imaging results easily converges to a local optimum at low SNR; on the other hand, the existing time–frequency transform-based methods, such as the fractional Fourier transform-based method [26] and the high-order ambiguity function-based method [27], can only be applied to third-order polynomial models and therefore have major limitations.
We propose a noise-robust translational motion compensation method based on the high-order local polynomial transform–generalized scaled Fourier transform (HLPT-GSCFT). Through mathematical derivation and order-of-magnitude analysis, we determine that the translational error should be modeled as a fourth-order polynomial. We then design HLPT-GSCFT to estimate the translational parameters. Specifically, HLPT is designed to estimate the acceleration and the third-order acceleration, while GSCFT is used to estimate the jerk. Additionally, we design a minimum weighted entropy (MWE) algorithm to estimate the radial velocity. HLPT-GSCFT is robust due to its low degree of nonlinearity. Experimental results on a Yak-42 measured dataset prove that the proposed method is effective and noise-robust.

2. Signal Model

In this section, we derive expressions for the target echoes and detail how we model translational motion as a fourth-order polynomial through order-of-magnitude analysis.
Currently, most imaging radars transmit a wideband linear frequency modulation (LFM) signal
$$s(\hat{t}, t_m) = \mathrm{rect}\!\left(\frac{\hat{t}}{T_p}\right) \exp\!\left(j 2\pi f_c t\right) \exp\!\left(j \pi \mu \hat{t}^2\right) \tag{1}$$

where $t = \hat{t} + t_m$ is the full time, $T_p$ is the pulse width of the transmitted signal, $f_c$ is the carrier frequency, and $\mu$ is the chirp rate. $\mathrm{rect}(t)$ is the rectangular window function; it takes the value 1 when $t \in [0, 1]$ and 0 otherwise. $\hat{t}$ is the time sequence within a single pulse, usually called fast time. $t_m = n T_{pr}$ is the slow time, where $n$ is the pulse index and $T_{pr}$ is the pulse repetition time.
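As a minimal numeric sketch of the transmitted LFM pulse in Eq. (1) (all parameter values below are illustrative, not the paper's radar settings; the carrier term is omitted since it is removed by down-conversion):

```python
import numpy as np

Tp = 10e-6          # pulse width (s), illustrative
B = 400e6           # swept bandwidth (Hz), illustrative
mu = B / Tp         # chirp rate (Hz/s)
fs = 2 * B          # complex baseband sampling rate (Hz)

t_hat = np.arange(8000) / fs                       # fast time within one pulse
rect = ((t_hat / Tp) >= 0) & ((t_hat / Tp) <= 1)   # rect(t_hat / Tp)
# Baseband LFM term exp(j*pi*mu*t_hat^2) of Eq. (1).
s = rect * np.exp(1j * np.pi * mu * t_hat**2)
```

The instantaneous frequency $\mu \hat{t}$ sweeps linearly from 0 to $B$ over the pulse, which is what pulse compression later exploits.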
The transmitted signal is reflected by the target and generates an echo. If the total number of scattering centers of the target is $N$, the echo can be written as

$$s_r(\hat{t}, t_m) = \sum_{p=1}^{N} \sigma_p \, \mathrm{rect}\!\left(\frac{\hat{t} - 2R_p(t_m)/c}{T_p}\right) \exp\!\left[j 2\pi f_c \left(t - \frac{2R_p(t_m)}{c}\right)\right] \exp\!\left[j \pi \mu \left(\hat{t} - \frac{2R_p(t_m)}{c}\right)^2\right] \tag{2}$$

where $c$ is the speed of light and $R_p(t_m)$ is the distance between the radar and the scattering center $p$.
After pulse compression, (2) can be written as

$$S_r(f_r, t_m) = \sum_{p=1}^{N} \sigma_p \exp\!\left[-j \frac{4\pi}{c}(f_r + f_c) R_p(t_m)\right] \tag{3}$$

where $f_r = \mu \hat{t}$ is the fast-time frequency. According to the basic theory of ISAR imaging [28], $R_p(t_m)$ can be divided into a rotational motion history and a translational motion history. When the target moves smoothly, the rotational motion history $R_{p,rot}(t_m)$ can be approximated as
$$R_{p,rot}(t_m) = x_p \omega t_m + y_p \tag{4}$$

where $\omega$ is the rotational angular velocity of the target, $x_p$ is the azimuth coordinate of the scattering center $p$ and $y_p$ is the range coordinate of the scattering center $p$. The y-axis of this imaging coordinate system is the radar line-of-sight direction, and the origin of this coordinate system is the geometric center of the target.
Next, we analyze the translational motion of the target. During ISAR imaging, the motion of the target relative to the radar is shown in Figure 1.
According to Figure 1, $R_{trans}(t_m)$ can be expressed as

$$R_{trans}(t_m) = \sqrt{R_0^2 + V^2 t_m^2 - 2 R_0 V t_m \sin\theta_0} \tag{5}$$
Using Taylor series expansion, (5) can be rewritten as

$$\mathrm{Term}(t_m) = \left(\frac{V t_m}{R_0}\right)^2 - \frac{2 V t_m}{R_0} \sin\theta_0 \tag{6}$$

$$R_{trans}(t_m) = R_0 \left(1 + \frac{1}{2}\mathrm{Term}(t_m) - \frac{1}{8}\mathrm{Term}^2(t_m) + \cdots \right) \tag{7}$$
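As a quick numeric check of the truncation used in (7) (generic small values of $\mathrm{Term}$, not tied to a specific radar scenario), the second-order expansion of $\sqrt{1 + \mathrm{Term}}$ is accurate to third order in $\mathrm{Term}$:

```python
import numpy as np

# For |Term| << 1, sqrt(1 + Term) ~= 1 + Term/2 - Term^2/8 with an
# O(Term^3) residual, which motivates dropping higher terms in Eq. (7).
term = np.linspace(-0.05, 0.05, 101)
exact = np.sqrt(1.0 + term)
second_order = 1.0 + term / 2 - term**2 / 8
err = np.abs(exact - second_order)
print(err.max())   # on the order of |Term|^3 / 16
```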
Then we should determine the order of the Taylor series expansion. Obviously, the first-order term of (7) must be preserved. The question is whether the higher-order terms of (7) need to be preserved and, if so, to which order. Therefore, we use order-of-magnitude analysis to determine the order of (7). Specifically, the error phase caused by the $k$th term in (7) can be ignored if it changes by only a small amount during the imaging process. According to the law of sines, $\mathrm{Term}(t_m)$ can be rewritten as

$$\mathrm{Term}(t_m) = \left(\frac{\sin\varphi_r}{\cos(\theta_0 - \varphi_r)}\right)^2 - \frac{2\sin\varphi_r}{\cos(\theta_0 - \varphi_r)} \sin\theta_0 \tag{8}$$

where $\varphi_r$ is the rotated angle, which is usually taken as 3°–5° [28].
The mapping from $\theta_0$ to $\mathrm{Term}(t_m)$ is shown in Figure 2 (with $\varphi_r = 3°$).
We can find from (8) and Figure 2 that the absolute value of $\mathrm{Term}(t_m)$ decreases with the increase of $\theta_0$ or the decrease of $\varphi_r$. When $3° \le \varphi_r \le 5°$ and $\theta_0 \ge 60°$, the value of $(\mathrm{Term}(t_m))^2/2$ satisfies

$$3.4 \times 10^{-7} \le (\mathrm{Term}(t_m))^2 / 2 \le 2.9 \times 10^{-5} \tag{9}$$

Considering that the order of magnitude of $R_0$ is $10^4\ \mathrm{m}$ and the order of magnitude of the wavelength is $10^{-2}\ \mathrm{m}$, the effect of the squared term in (7) on the phase of the echo cannot be ignored in most cases. In comparison, the maximum of the absolute value of $(\mathrm{Term}(t_m))^3/4$ is $1.1 \times 10^{-7}$ when $3° \le \varphi_r \le 5°$ and $\theta_0 \ge 60°$. Therefore, the cubic and higher terms in (7) can be ignored.
Based on the above analyses, $R_{trans}(t_m)$ can be written as

$$R_{trans}(t_m) = R_0 - (V \sin\theta_0) t_m + \frac{(V \cos\theta_0)^2}{2 R_0} t_m^2 + \frac{V^3 \sin\theta_0}{4 R_0^2} t_m^3 - \frac{V^4}{8 R_0^3} t_m^4 \tag{10}$$

It can be found that the model of the translational motion is a fourth-order polynomial, and the echo can be expressed as

$$S_r(f_r, t_m) = \sum_{p=1}^{N} \sigma_p \exp\!\left[-j \frac{4\pi}{c}(f_r + f_c) \left(R_0 + y_p + (v + x_p \omega) t_m\right)\right] \exp\!\left[-j \frac{4\pi}{c}(f_r + f_c) \left(a_1 t_m^2 + a_2 t_m^3 + a_3 t_m^4\right)\right] \tag{11}$$

$$\begin{cases} v = -V \sin\theta_0 \\[1mm] a_1 = \dfrac{(V \cos\theta_0)^2}{2 R_0} \\[1mm] a_2 = \dfrac{V^3 \sin\theta_0}{4 R_0^2} \\[1mm] a_3 = -\dfrac{V^4}{8 R_0^3} \end{cases} \tag{12}$$
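The coefficients above can be evaluated directly. The sketch below uses the simulation scenario of Section 5 ($R_0 = 5$ km, $V = 500$ m/s, $\theta_0 = 2°$) and assumes the sign convention $v = -V\sin\theta_0$, $a_3 = -V^4/(8R_0^3)$ implied by the Taylor expansion of (5); the paper quotes the coefficient magnitudes:

```python
import numpy as np

# Translational polynomial coefficients per Eq. (12), Section 5 scenario.
R0, V = 5e3, 500.0
theta0 = np.deg2rad(2.0)

v = -V * np.sin(theta0)                   # linear term (radial velocity)
a1 = (V * np.cos(theta0))**2 / (2 * R0)   # quadratic term
a2 = V**3 * np.sin(theta0) / (4 * R0**2)  # cubic term
a3 = -V**4 / (8 * R0**3)                  # quartic term
print(v, a1, a2, a3)
```

The magnitudes agree with the values quoted in Section 5 (about 17.5, 24.9, 0.043 and 0.062).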

3. Translational Parameter Estimation Based on HLPT-GSCFT

To eliminate the translational error, we need to estimate $v$, $a_1$, $a_2$ and $a_3$. In this section, we describe the specific steps of HLPT-GSCFT and show how to estimate $a_1$ and $a_3$ using HLPT and $a_2$ using GSCFT. In addition, we analyze how HLPT-GSCFT suppresses the cross terms.
According to [29,30,31], the local polynomial transform (LPT) can be defined as

$$LPT(f_{\tau_m}, \zeta) = \mathrm{FT}_{\tau_m}\!\left\{\left[S(t_m + \tau_m) S^*(t_m - \tau_m) \exp\!\left(-j\pi \frac{1}{3}\zeta \tau_m^3\right)\right]\bigg|_{t_m = 0}\right\} \tag{13}$$

where $\tau_m$ is the delay variable, $\zeta$ is the quadratic chirp rate and $\mathrm{FT}$ denotes the Fourier transform. LPT can perform non-search estimation of the frequency and quadratic chirp rate of cubic phase signals. Meanwhile, it has a low degree of nonlinearity and a strong cross-term suppression ability. Referring to LPT, we propose HLPT.
We define the kernel function of HLPT as

$$R_{LPT}(f_r, t_m, \tau_m) = S_r(f_r, t_m + \tau_m + \varepsilon)\, S_r^*(f_r, t_m + \tau_m - \varepsilon)\, S_r(f_r, t_m - \tau_m - \varepsilon)\, S_r^*(f_r, t_m - \tau_m + \varepsilon) \tag{14}$$

where $\varepsilon$ is the delay constant. The specific expression of $R_{LPT}(f_r, t_m, \tau_m)$ can be written as

$$\begin{aligned} R_{LPT}(f_r, t_m, \tau_m) = {} & \sum_{p=1}^{N} A_p^4 \exp\!\left[-j \frac{4\pi}{c}(f_r + f_c)\, 8\varepsilon a_1 \tau_m\right] \exp\!\left[-j \frac{4\pi}{c}(f_r + f_c)\, 24\varepsilon a_2 \tau_m t_m\right] \\ & \times \exp\!\left[-j \frac{4\pi}{c}(f_r + f_c)\, \varepsilon a_3 \left(48 \tau_m t_m^2 + 16 \tau_m^3\right)\right] + Cross(f_r, t_m, \tau_m) \end{aligned} \tag{15}$$

where $A_p$ is the amplitude of scattering center $p$ and $Cross(f_r, t_m, \tau_m)$ denotes the cross terms of $R_{LPT}(f_r, t_m, \tau_m)$.
Then we perform an inverse Fourier transform (IFT) of $R_{LPT}(f_r, t_m, \tau_m)$ over $f_r$; the result for the first part of (15) (the “self-terms”) can be expressed as

$$\begin{aligned} R_{LPT,S}(t_r, t_m, \tau_m) = {} & \sum_{p=1}^{N} A_p^4\, \delta\!\left[B\left(t_r + \frac{16\varepsilon a_1 \tau_m + 48\varepsilon a_2 \tau_m t_m + 32\varepsilon a_3 (3\tau_m t_m^2 + \tau_m^3)}{c}\right)\right] \\ & \times \exp\!\left[-j \frac{4\pi f_c}{c}\left(8\varepsilon a_1 \tau_m + 24\varepsilon a_2 \tau_m t_m + 16\varepsilon a_3 (3\tau_m t_m^2 + \tau_m^3)\right)\right] \end{aligned} \tag{16}$$

where $t_r$ is the modified fast time, $B$ is the signal bandwidth and $\delta$ is the Dirac function.
According to the analyses of Section 2, the impact of $a_1$, $a_2$ and $a_3$ on the range profile is not significant. If we set a small value for $\varepsilon$, their impact on the range profile is reduced further, and the range profile offset caused by $a_1$, $a_2$ and $a_3$ can be ignored. Therefore, we can approximate that the energy of the self-terms is focused in the range bin corresponding to $t_r = 0$. Based on the above analysis, taking $t_r = 0$, the result can be written as

$$R_{LPT,S}(t_m, \tau_m)\big|_{t_r = 0} \approx \sum_{p=1}^{N} A_p^4 \exp\!\left[-j \frac{4\pi f_c}{c}\left(8\varepsilon a_1 \tau_m + 24\varepsilon a_2 \tau_m t_m + 16\varepsilon a_3 (3\tau_m t_m^2 + \tau_m^3)\right)\right] \tag{17}$$
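The reduction from the quartic phase in (11) to the bilinear form in (17) can be verified numerically. The sketch below applies the kernel of (14) to a single unit-amplitude scatterer with an arbitrary fourth-order phase (all numbers illustrative; $K$ stands for $4\pi f_c / c$). The exact kernel algebra also produces a tiny $16 a_3 \varepsilon^3 \tau_m$ term that the small-$\varepsilon$ approximation discards; it is kept here so the check is exact:

```python
import numpy as np

rng = np.random.default_rng(0)
a1, a2, a3 = rng.uniform(-1, 1, 3)
b0, b1 = rng.uniform(-1, 1, 2)   # constant and linear phase terms (they cancel)
K, eps = 0.7, 0.05

def s(t):
    """Unit-amplitude signal with a fourth-order phase (one scatterer)."""
    return np.exp(-1j * K * (b0 + b1 * t + a1 * t**2 + a2 * t**3 + a3 * t**4))

tm, taum = 0.3, 0.2
kernel = (s(tm + taum + eps) * np.conj(s(tm + taum - eps))
          * s(tm - taum - eps) * np.conj(s(tm - taum + eps)))

# Predicted phase per Eq. (17), plus the exact eps^3 correction term.
pred = -K * (8 * eps * a1 * taum
             + 24 * eps * a2 * taum * tm
             + 16 * eps * a3 * (3 * taum * tm**2 + taum**3)
             + 16 * a3 * eps**3 * taum)
print(np.angle(kernel * np.exp(-1j * pred)))  # ~0: kernel phase matches
```

Note that the constant and linear phase terms (including $y_p$ and $v + x_p\omega$) cancel exactly, which is why (17) depends only on $a_1$, $a_2$ and $a_3$.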
Then we perform LPT on (17), and the result can be expressed as

$$\begin{aligned} LPT(f_{\tau_m}, \zeta) & = \int \left(R_{LPT,S}(t_m, \tau_m)\big|_{t_r = 0}\right)\Big|_{t_m = 0} \exp\!\left(-j 2\pi (f_{\tau_m} \tau_m + \zeta \tau_m^3)\right) d\tau_m \\ & = \sum_{p=1}^{N} A_p^4 \int \exp\!\left(-j 2\pi \left[\left(f_{\tau_m} + \frac{16 f_c \varepsilon a_1}{c}\right)\tau_m + \left(\zeta + \frac{32 f_c \varepsilon a_3}{c}\right)\tau_m^3\right]\right) d\tau_m \end{aligned} \tag{18}$$
According to the stationary phase method [28], we can obtain

$$\left|LPT(f_{\tau_m}, \zeta)\right| = \left|\frac{1}{\left(f_{\tau_m} + \dfrac{16 f_c \varepsilon a_1}{c}\right)\left(\zeta + \dfrac{32 f_c \varepsilon a_3}{c}\right)}\right| \tag{19}$$

It can be found that (19) reaches a maximum only when $f_{\tau_m} = -16 f_c \varepsilon a_1 / c$ and $\zeta = -32 f_c \varepsilon a_3 / c$. Therefore, the estimates of $a_1$ and $a_3$ can be obtained by peak search as

$$\begin{cases} \hat{a}_1 = -\dfrac{c\, f_{\tau_m, \max}}{16 f_c \varepsilon} \\[2mm] \hat{a}_3 = -\dfrac{c\, \zeta_{\max}}{32 f_c \varepsilon} \end{cases} \tag{20}$$

where $f_{\tau_m, \max}$ and $\zeta_{\max}$ represent the coordinates of the maximum value.
Next, we analyze the ability of HLPT to suppress the cross terms. $Cross(f_r, t_m, \tau_m)$ in (15) has three kinds of components, which can be expressed as

$$\begin{aligned} cross_1(f_r, t_m, \tau_m) = {} & A_p A_q A_r A_s \exp\!\left[-j \frac{4\pi}{c}(f_c + f_r)(y_p - y_q - y_r + y_s)\right] \\ & \times \exp\!\left[-j \frac{4\pi}{c}(f_c + f_r)(x_p - x_q - x_r + x_s)\,\omega t_m\right] \\ & \times \exp\!\left[-j \frac{4\pi}{c}(f_c + f_r)(x_p - x_q + x_r - x_s)\,\omega \tau_m\right] self(f_r, t_m, \tau_m) \end{aligned} \tag{21}$$

$$cross_2(f_r, t_m, \tau_m) = A_p^2 A_q^2 \exp\!\left[-j \frac{8\pi}{c}(f_c + f_r)(x_p - x_q)\,\omega \tau_m\right] self(f_r, t_m, \tau_m) \tag{22}$$

$$cross_3(f_r, t_m, \tau_m) = A_p A_q A_r^2 \exp\!\left[-j \frac{4\pi}{c}(f_c + f_r)(y_p - y_q)\right] \exp\!\left[-j \frac{8\pi}{c}(f_c + f_r)(x_p - x_q)\,\omega (t_m + \tau_m)\right] self(f_r, t_m, \tau_m) \tag{23}$$

$$self(f_r, t_m, \tau_m) = \exp\!\left[-j \frac{4\pi}{c}(f_r + f_c)\, 8\varepsilon a_1 \tau_m\right] \exp\!\left[-j \frac{4\pi}{c}(f_r + f_c)\, 24\varepsilon a_2 \tau_m t_m\right] \exp\!\left[-j \frac{4\pi}{c}(f_r + f_c)\, \varepsilon a_3 (48 \tau_m t_m^2 + 16 \tau_m^3)\right] \tag{24}$$

where $p$, $q$, $r$ and $s$ are four different scattering centers.
It can be found that $x_p$, $x_q$, $x_r$ and $x_s$ cause migration through resolution cells (MTRC) in $cross_1(f_r, t_m, \tau_m)$. This means that only a small part of $cross_1(f_r, t_m, \tau_m)$ has the opportunity to participate in LPT after we perform the IFT and take $t_r = 0$. Under this condition, $cross_1(f_r, t_m, \tau_m)$ is unable to focus its energy through LPT. Even if $x_p - x_q - x_r + x_s = 0$ and $x_p - x_q + x_r - x_s = 0$, $cross_1(f_r, t_m, \tau_m)$ will not be focused by LPT because its energy is focused in the range bin corresponding to $t_r = -2(y_p - y_q - y_r + y_s)/c$. Similarly, $cross_2(f_r, t_m, \tau_m)$ and $cross_3(f_r, t_m, \tau_m)$ will not be focused by LPT. Therefore, HLPT has excellent cross-term suppression ability.
Then we adopt GSCFT to estimate $a_2$. According to [32,33], GSCFT can be defined as

$$T\!\left(f_{[\xi \Delta_m^e \Upsilon_m^f]}, \Delta_m\right) = \int_{\Upsilon_m^f} g(\Delta_m) \exp\!\left(j 2\pi \psi \Delta_m^e \Upsilon_m^f\right) \exp\!\left(-j 2\pi \xi \Delta_m^e \Upsilon_m^f f_{[\xi \Delta_m^e \Upsilon_m^f]}\right) d\!\left(\Delta_m^e \Upsilon_m^f\right) = g(\Delta_m)\, \delta\!\left(f_{[\xi \Delta_m^e \Upsilon_m^f]} - \frac{\psi}{\xi}\right) \tag{25}$$

where $\xi$ represents the zoom factor used to avoid spectrum aliasing, $f_{[\xi \Delta_m^e \Upsilon_m^f]}$ is the scaled frequency domain with respect to $\Upsilon_m^f$, $e$ and $f$ are constant powers, $g(\Delta_m)$ is a function of $\Delta_m$ and $\psi$ is an empirical parameter.
We use $\hat{a}_3$ to construct a compensation function as

$$F(t_m, \tau_m) = \exp\!\left[j \frac{64\pi f_c}{c} \varepsilon \hat{a}_3 (3\tau_m t_m^2 + \tau_m^3)\right] \tag{26}$$

Compensating (17) with (26), the result can be written as

$$R'_{LPT,S}(t_m, \tau_m) = R_{LPT,S}(t_m, \tau_m)\big|_{t_r = 0}\, F(t_m, \tau_m) = \sum_{p=1}^{N} A_p^4 \exp\!\left[-j \frac{4\pi f_c}{c}\left(8\varepsilon a_1 \tau_m + 24\varepsilon a_2 \tau_m t_m\right)\right] \tag{27}$$
Performing GSCFT on (27), we can obtain

$$R_{LPT-GSCFT}\!\left(f_{[\xi \tau_m t_m]}, \tau_m\right) = \int R'_{LPT,S}(t_m, \tau_m) \exp\!\left(-j 2\pi \xi \tau_m t_m f_{[\xi \tau_m t_m]}\right) d(\tau_m t_m) = \sum_{p=1}^{N} A_p^4\, \delta\!\left(f_{[\xi \tau_m t_m]} + \frac{48 f_c \varepsilon a_2}{c \xi}\right) \exp\!\left[-j \frac{32\pi f_c}{c} \varepsilon a_1 \tau_m\right] \tag{28}$$

Taking the absolute value of (28) gives

$$\left|R_{LPT-GSCFT}\!\left(f_{[\xi \tau_m t_m]}, \tau_m\right)\right| = \left|\sum_{p=1}^{N} A_p^4\, \delta\!\left(f_{[\xi \tau_m t_m]} + \frac{48 f_c \varepsilon a_2}{c \xi}\right)\right| \tag{29}$$

Obviously, we can estimate $a_2$ by a peak search of (29), and the estimate of $a_2$ can be written as

$$\hat{a}_2 = -\frac{c\, \xi f_{[\xi \tau_m t_m], \max}}{48 f_c \varepsilon} \tag{30}$$
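The essence of this step is that, for a fixed $\tau_m$, the residual phase of (27) is a sinusoid in $t_m$ whose frequency is proportional to $a_2$. The sketch below is a simplified one-row sanity check, not the full GSCFT (which evaluates all $\tau_m$ jointly via scaled FFTs): it locates the peak with a zero-padded FFT and inverts the peak frequency for $a_2$. Parameters are illustrative; $\varepsilon$ is exaggerated so the tone is resolvable on a short aperture:

```python
import numpy as np

c = 3e8
fc = 5.52e9          # carrier frequency of the Section 5 radar
a2 = 0.043           # cubic translational coefficient from Section 5
eps = 0.5            # delay constant (exaggerated for frequency resolution)
taum = 1.0           # one fixed tau_m row
prt = 0.01
tm = (np.arange(256) - 128) * prt

# Coupling term of Eq. (27): exp(-j*(4*pi*fc/c)*24*eps*a2*taum*tm),
# a tone at frequency -48*fc*eps*a2*taum/c in t_m.
sig = np.exp(-1j * 4 * np.pi * fc / c * 24 * eps * a2 * taum * tm)
spec = np.fft.fft(sig, 16 * len(tm))          # zero-padded FFT over t_m
freqs = np.fft.fftfreq(16 * len(tm), d=prt)
f_peak = freqs[np.argmax(np.abs(spec))]
a2_hat = -c * f_peak / (48 * fc * eps * taum)  # invert the peak location
print(a2_hat)   # close to 0.043
```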

4. Translational Motion Compensation

Using $\hat{a}_1$, $\hat{a}_2$ and $\hat{a}_3$, we can reconstruct the translational error and compensate the original echo as

$$T(f_r, t_m) = \exp\!\left[-j \frac{4\pi}{c}(f_c + f_r)\left(\hat{a}_1 t_m^2 + \hat{a}_2 t_m^3 + \hat{a}_3 t_m^4\right)\right] \tag{31}$$

$$S_{rLT}(f_r, t_m) = S_r(f_r, t_m)\, T^*(f_r, t_m) = \sum_{p=1}^{N} \sigma_p \exp\!\left[-j \frac{4\pi}{c}(f_r + f_c)\left(R_0 + y_p + (v + x_p \omega) t_m\right)\right] \tag{32}$$
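As a minimal consistency check of this cancellation (single scatterer, $f_r = 0$, Section 5 coefficient values with the signs implied by Eq. (12), and perfect estimates assumed):

```python
import numpy as np

c, fc = 3e8, 5.52e9
a = np.array([24.9, 0.043, -0.062])   # a1, a2, a3 (Section 5 values)
tm = (np.arange(256) - 128) * 0.01

# Higher-order part of the echo phase in Eq. (11), and the matched
# compensation function of Eq. (31) built from the same coefficients.
err_phase = -4 * np.pi / c * fc * (a[0] * tm**2 + a[1] * tm**3 + a[2] * tm**4)
echo = np.exp(1j * err_phase)
T = np.exp(1j * err_phase)
residual = echo * np.conj(T)          # Eq. (32): polynomial terms cancel
print(np.max(np.abs(np.angle(residual))))  # ~0
```

With imperfect estimates, the residual phase grows with the estimation error, which is why the accuracy demonstrated in Section 5 matters.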
It can be found that the linear translational error has still not been compensated. To ensure the noise robustness of the compensation, we design a minimum weighted entropy (MWE) algorithm. The weighted entropy is defined as

$$WE(I) = \sum_{b=1}^{B} \sum_{a=1}^{A} hamm(a, b)\, \frac{|I(a, b)|^2}{E} \ln\frac{E}{|I(a, b)|^2} \tag{33}$$

where $I$ is the ISAR image, $a$ and $b$ are the sample indices in $f_r$ and $t_m$, respectively, $E$ is the total energy of $I$ and $hamm$ is the 2-D Hamming window function.
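A minimal sketch of (33), with a toy 32×32 example showing that a focused image scores lower than the same energy smeared across a line (the window choice follows the 2-D Hamming weighting named in the text):

```python
import numpy as np

def weighted_entropy(I):
    """Weighted image entropy per Eq. (33); I is a complex ISAR image."""
    A, B = I.shape
    w = np.outer(np.hamming(A), np.hamming(B))   # 2-D Hamming weights
    p = np.abs(I)**2
    p = p / p.sum()                              # |I|^2 / E
    mask = p > 0                                 # avoid log(0)
    return np.sum(w[mask] * p[mask] * np.log(1.0 / p[mask]))

# Focused point at the image centre vs. the same energy smeared on a line.
focused = np.zeros((32, 32), complex); focused[16, 16] = 1.0
smeared = np.zeros((32, 32), complex); smeared[16, :] = 1.0 / np.sqrt(32)
print(weighted_entropy(focused), weighted_entropy(smeared))
```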
The linear translational error can be compensated by solving the following optimization, where the compensated signal $S_{nonT}$ is obtained at the minimizing $v$:

$$S_{nonT}(f_r, t_m) = \min_v \left\{ WE\!\left(S_{rLT}(f_r, t_m)\, T_l(v, f_r, t_m)\right) \right\} \tag{34}$$

$$T_l(v, f_r, t_m) = \exp\!\left[j \frac{4\pi}{c}(f_r + f_c)\, v t_m\right] \tag{35}$$

$$S_{nonT}(f_r, t_m) \approx \sum_{p=1}^{N} \sigma_p \exp\!\left[-j \frac{4\pi}{c}(f_r + f_c)\left(R_0 + y_p + x_p \omega t_m\right)\right] \tag{36}$$

Many algorithms can solve one-dimensional optimization problems; even a linear search can successfully solve the problem in (34).
To sum up, the flow chart of translational motion compensation based on HLPT-GSCFT is shown in Figure 3.

5. Experiment of Yak-42 Measured Dataset

In this section, we adopt a Yak-42 measured dataset to validate the performance of the proposed method. The shape of the Yak-42 is shown in Figure 4a. A C-band radar records the dataset with a bandwidth of 400 MHz, a PRF of 100 Hz and a carrier frequency of 5.52 GHz. The dataset contains 256 pulses, where each pulse has 256 sampling points. The RD imaging result of the dataset is shown in Figure 4b.
Next, we add a translational error to the dataset. We set $R_0 = 5\ \mathrm{km}$, $V = 500\ \mathrm{m/s}$ and $\theta_0 = 2°$; the expression of the translational motion added to the dataset is the same as (5). The range profile of the dataset after adding the translational error is shown in Figure 5a. According to (12), we can obtain $v = -17.5$, $a_1 = 24.9$, $a_2 = 0.043$ and $a_3 = -0.062$.
We adopt the proposed method to compensate for the Yak-42 dataset. The result of each step is shown in Figure 5b–f.
It can be seen that the range profile with the translational error is significantly bent. In the process of the proposed method, HLPT obtains $\hat{a}_1 = 24.85$ and $\hat{a}_3 = -0.06104$, GSCFT obtains $\hat{a}_2 = 0.0444$ and MWE obtains $\hat{v} = -17.46$. The relative errors of the translational parameter estimates are 0.2%, 1.5%, 3.2% and 0.2%, respectively. After translational motion compensation by the proposed method, the range profile of the dataset is straight and the RD imaging result is very close to Figure 4b. This proves that the proposed method is able to estimate the translational parameters and compensate the translational error accurately.
Next, we further validate the effectiveness and the noise robustness of the proposed method by comparing it with other translational motion compensation methods. We adopt particle swarm optimization (PSO) [24] and phase difference–Keystone transform–fractional Fourier transform (PD-KT-FrFT) [26] as a control group. Under the conditions of SNR = 0 dB, −3 dB and −6 dB, the results of the above three methods and the ideal result (Imaging result of the original dataset, illustrated in Figure 4b) are shown in Figure 6, Figure 7 and Figure 8, respectively. Additionally, we provide the entropy of each imaging result in Table 1 to quantify the imaging results. In general, the lower the entropy, the better the image quality.
The entropy of the results of PSO is higher than that of the other methods at all SNRs. Moreover, the results of PSO become increasingly defocused as the SNR decreases. This is because parameter estimation based on the image quality of RD imaging results is not noise-robust at low SNR. PD-KT-FrFT has better performance and noise robustness than PSO because it does not depend on the quality of the RD image. However, the results of PD-KT-FrFT still have a certain degree of defocusing because this method is based on a third-order polynomial translation model and is unable to compensate for the translational error caused by $a_3$. In comparison, the entropy of the results of the proposed method is closest to that of the ideal results. Moreover, the results of the proposed method are well-focused at all SNRs. These results show that the proposed method is more noise-robust than the traditional methods.
Finally, we validate the performance of the proposed method under the conditions of extremely low SNR and explore the performance limits of the proposed method. The results of the proposed method under the conditions of SNR = −9, −10, −11 and −12 dB are shown in Figure 9. Additionally, we provide the results of translational parameter estimation in Table 2.
The relative errors of the translational parameter estimation are lower than 3.2% when SNR is higher than −11 dB. Each scattering center in Figure 9a,b is well-focused. Under the condition of SNR = −11 dB, the error in translational parameter estimation becomes larger. However, this change is not significant in the final result. Under the condition of SNR = −12 dB, the results of the translational parameter estimation have a significant deviation. This causes a certain degree of defocusing for the result of the proposed method. Therefore, the proposed method is effective and accurate when SNR is higher than −12 dB.

6. Conclusions

This work proposes a noise-robust translational motion compensation method based on HLPT-GSCFT. Specifically, we first model the translational motion as a fourth-order polynomial according to order-of-magnitude analysis and then design HLPT-GSCFT to estimate the translational parameters. In the process of HLPT-GSCFT, HLPT is designed to estimate the acceleration and the third-order acceleration, GSCFT is adopted to estimate the jerk and MWE is designed to estimate the speed. Experimental results based on a Yak-42 measured dataset demonstrate the effectiveness and noise robustness of the proposed method. It must be noted that there are still some shortcomings in the proposed method. On the one hand, as with other parametric translational motion compensation methods, the proposed method cannot compensate for random translational errors; on the other hand, the proposed method has limited ability to compensate for sparse aperture echoes. In future research, we hope to extend the proposed method from conventional targets to micro-motion targets for higher-order translational compensation and ISAR imaging.

Author Contributions

Conceptualization, D.H.; methodology, F.L.; software, F.L.; validation, F.L., D.H. and X.G.; formal analysis, C.F.; investigation, F.L.; resources, D.H.; data curation, X.G.; writing—original draft preparation, F.L.; writing—review and editing, C.F.; visualization, F.L.; supervision, D.H.; project administration, C.F.; funding acquisition, D.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Postdoctoral Science Foundation of China under Grant 2019M661508, Shaanxi Provincial Fund Youth Project of China under Grant 2019JQ-497, and Aviation Science Fund of China under Grant 201920096001 (Funder: Darong Huang).

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to the request of the funder.

Acknowledgments

This work was supported by the National Postdoctoral Science Foundation of China under grant 2019M661508, the Shaanxi Provincial Fund Youth Project of China under grant 2019JQ-497 and the Aviation Science Fund of China under grant 201920096001.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, C.C.; Andrews, H.C. Target-motion-induced radar imaging. IEEE Trans. Aerosp. Electron. Syst. 1980, 16, 2–14. [Google Scholar] [CrossRef]
  2. Zhu, D.; Wang, L.; Yu, Y. Robust ISAR range alignment via minimizing the entropy of the average range profile. IEEE Geosci. Remote Sens. Lett. 2009, 6, 204–208. [Google Scholar]
  3. Sauer, T.; Schroth, A. Robust range alignment algorithm via Hough transform in an ISAR imaging system. IEEE Trans. Aerosp. Electron. Syst. 1995, 31, 1173–1177. [Google Scholar] [CrossRef]
  4. Lee, S.-H.; Bae, J.-H.; Kang, M.-S.; Kim, C.-H.; Kim, K.-T. ISAR autofocus by minimizing entropy of eigenimages. In Proceedings of the 2016 IEEE Radar Conference (RadarConf), Philadelphia, PA, USA, 2–6 May 2016. [Google Scholar]
  5. Xu, J.; Cai, J.; Sun, Y.; Xia, X.-G.; Farina, A.; Long, T. Efficient ISAR Phase Autofocus Based on Eigenvalue Decomposition. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2195–2199. [Google Scholar] [CrossRef]
  6. Cao, P.; Xing, M.; Sun, G.; Li, Y.; Bao, Z. Minimum Entropy via Subspace for ISAR Autofocus. IEEE Geosci. Remote Sens. Lett. 2009, 7, 205–209. [Google Scholar] [CrossRef]
  7. Cai, J.-J.; Xu, J.; Wang, G.; Xia, X.-G.; Long, T.; Bian, M.-M. An effective ISAR autofocus algorithm based on single eigenvector. In Proceedings of the 2016 CIE International Conference on Radar (RADAR), Guangzhou, China, 10–13 October 2016; pp. 1–5. [Google Scholar] [CrossRef]
  8. Ye, W.; Yeo, T.S.; Bao, Z. Weighted least-squares estimation of phase errors for SAR/ISAR autofocus. IEEE Trans. Geosci. Remote Sens. 1999, 37, 2487–2494. [Google Scholar] [CrossRef]
  9. Wahl, D.; Eichel, P.; Ghiglia, D.; Jakowatz, C. Phase gradient autofocus-a robust tool for high resolution SAR phase correction. IEEE Trans. Aerosp. Electron. Syst. 1994, 30, 827–835. [Google Scholar] [CrossRef]
  10. Chan, H.; Yeo, T.S. Comments on “Non-iterative quality phase-gradient autofocus (QPGA) algorithm for spotlight SAR imagery”. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2517. [Google Scholar] [CrossRef]
  11. Cai, J.; Martorella, M.; Chang, S.; Liu, Q.; Ding, Z.; Long, T. Efficient Nonparametric ISAR Autofocus Algorithm Based on Contrast Maximization and Newton’s Method. IEEE Sens. J. 2020, 21, 4474–4487. [Google Scholar] [CrossRef]
  12. Chen, J.; Xing, M.; Yu, H.; Liang, B.; Peng, J.; Sun, G.-C. Motion Compensation/Autofocus in Airborne Synthetic Aperture Radar: A Review. IEEE Geosci. Remote Sens. Mag. 2021, 10, 185–206. [Google Scholar] [CrossRef]
  13. Marston, T.M.; Plotnick, D.S. Semiparametric Statistical Stripmap Synthetic Aperture Autofocusing. IEEE Trans. Geosci. Remote Sens. 2014, 53, 2086–2095. [Google Scholar] [CrossRef]
  14. Chen, J.; Zhang, J.; Jin, Y.; Yu, H.; Liang, B.; Yang, D.-G. Real-Time Processing of Spaceborne SAR Data With Nonlinear Trajectory Based on Variable PRF. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–12. [Google Scholar] [CrossRef]
  15. Li, X.; Kong, L.; Cui, G.; Yi, W.; Yang, Y. ISAR imaging of maneuvering target with complex motions based on ACCF–LVD. Digit. Signal Process. 2015, 46, 191–200. [Google Scholar] [CrossRef]
  16. Yuan, Y.; Luo, Y.; Kang, L.; Ni, J.; Zhang, Q. Range Alignment in ISAR Imaging Based on Deep Recurrent Neural Network. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  17. Li, X.; Bai, X.; Zhou, F. High-Resolution ISAR Imaging and Autofocusing via 2D-ADMM-Net. Remote Sens. 2021, 13, 2326. [Google Scholar] [CrossRef]
  18. Wei, S.; Liang, J.; Wang, M.; Shi, J.; Zhang, X.; Ran, J. AF-AMPNet: A Deep Learning Approach for Sparse Aperture ISAR Imaging and Autofocusing. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–14. [Google Scholar] [CrossRef]
  19. Gao, Y.; Xing, M.; Li, Y.; Sun, W.; Zhang, Z. Joint Translational Motion Compensation Method for ISAR Imagery Under Low SNR Condition Using Dynamic Image Sharpness Metric Optimization. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–15. [Google Scholar] [CrossRef]
  20. Shao, S.; Zhang, L.; Liu, H.; Zhou, Y. Accelerated translational motion compensation with contrast maximization optimization algorithm for inverse synthetic aperture radar imaging. IET Radar Sonar Navig. 2018, 13, 316–325. [Google Scholar] [CrossRef]
  21. Fu, J.; Xing, M.; Amin, M.; Sun, G. ISAR Translational Motion Compensation with Simultaneous Range Alignment and Phase Adjustment in Low SNR Environments. In Proceedings of the 2021 IEEE Radar Conference (RadarConf21), Atlanta, GA, USA, 7–14 May 2021; pp. 1–6. [Google Scholar] [CrossRef]
  22. Zhang, L.; Sheng, J.-L.; Duan, J.; Xing, M.-D.; Qiao, Z.-J.; Bao, Z. Translational motion compensation for ISAR imaging under low SNR by minimum entropy. EURASIP J. Adv. Signal Process. 2013, 2013, 33. [Google Scholar] [CrossRef]
  23. Ustun, D.; Toktas, A. Translational Motion Compensation for ISAR Images Through a Multicriteria Decision Using Surrogate-Based Optimization. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4365–4374. [Google Scholar] [CrossRef]
  24. Liu, L.; Zhou, F.; Tao, M.; Sun, P.; Zhang, Z. Adaptive Translational Motion Compensation Method for ISAR Imaging Under Low SNR Based on Particle Swarm Optimization. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 5146–5157. [Google Scholar] [CrossRef]
  25. Peng, S.B.; Xu, J.; Peng, Y.N. Parametric inverse synthetic aperture radar manoeuvring target motion compensation based on particle swarm optimizer. IET Radar Sonar Navig. 2011, 5, 305–314. [Google Scholar] [CrossRef]
  26. Li, D.; Zhan, M.; Liu, H.; Liao, Y.; Liao, G. A Robust Translational Motion Compensation Method for ISAR Imaging Based on Keystone Transform and Fractional Fourier Transform Under Low SNR Environment. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 2140–2156. [Google Scholar] [CrossRef]
  27. Zhuo, Z.; Du, L.; Lu, X.; Ren, K.; Li, L. Ambiguity Function based High-Order Translational Motion Compensation. IEEE Trans. Aerosp. Electron. Syst. 2022, 1–8, early access. [Google Scholar] [CrossRef]
  28. Xing, M.; Wu, R.; Lan, J.; Bao, Z. Migration Through Resolution Cell Compensation in ISAR Imaging. IEEE Geosci. Remote Sens. Lett. 2004, 1, 141–144. [Google Scholar] [CrossRef]
  29. Stanković, L. Local polynomial Wigner distribution. Signal Process. 1997, 59, 123–128. [Google Scholar]
  30. Wang, Y.; Kang, J.; Jiang, Y. ISAR Imaging of Maneuvering Target Based on the Local Polynomial Wigner Distribution and Integrated High-Order Ambiguity Function for Cubic Phase Signal Model. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2971–2991. [Google Scholar] [CrossRef]
  31. Liu, F.; Feng, C.; Huang, D.; Guo, X. Fast ISAR imaging method for complex manoeuvring target based on local polynomial transform-fast chirp Fourier transform. IET Radar Sonar Navig. 2021, 15, 666–676. [Google Scholar] [CrossRef]
  32. Lv, Q.; Su, T.; He, X. An ISAR Imaging Algorithm for Nonuniformly Rotating Targets With Low SNR Based on Modified Bilinear Parameter Estimation of Cubic Phase Signal. IEEE Trans. Aerosp. Electron. Syst. 2018, 54, 3108–3124. [Google Scholar] [CrossRef]
  33. Zheng, J.; Su, T.; Zhu, W.; Zhang, L.; Liu, Z.; Liu, Q.H. ISAR Imaging of Nonuniformly Rotating Target Based on a Fast Parameter Estimation Algorithm of Cubic Phase Signal. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4727–4740. [Google Scholar] [CrossRef]
Figure 1. The motion of the target relative to the radar, where R0 is the initial distance between the radar and the target, Rtrans(tm) is the translational motion of the target, V is the speed of the target, and θ0 is the initial oblique angle.
Figure 2. The mapping from θ0 to Term(tm).
Figure 3. The flow chart of translational motion compensation based on HLPT-GSCFT.
Figure 4. The Yak-42 dataset. (a) The shape of the Yak-42. (b) The RD imaging result of the dataset.
Figure 5. The range profile of the dataset after adding the translational error and the result of each step. (a) The range profile of the dataset after adding the translational error. (b) The result of HLPT. (c) The result of GSCFT. (d) The result of MWE. (e) The range profile after translational motion compensation. (f) The RD imaging result after translational motion compensation.
Figure 6. Translational motion compensation results of different methods under SNR = 0 dB. (a) Ideal result. (b) Result of PSO. (c) Result of PD-KT-FrFT. (d) Result of the proposed method.
Figure 7. Translational motion compensation results of different methods under SNR = −3 dB. (a) Ideal result. (b) Result of PSO. (c) Result of PD-KT-FrFT. (d) Result of the proposed method.
Figure 8. Translational motion compensation results of different methods under SNR = −6 dB. (a) Ideal result. (b) Result of PSO. (c) Result of PD-KT-FrFT. (d) Result of the proposed method.
Figure 9. The results of the proposed method under the conditions of SNR = −9, −10, −11, and −12 dB. (a) SNR = −9 dB. (b) SNR = −10 dB. (c) SNR = −11 dB. (d) SNR = −12 dB.
Table 1. The entropy of each imaging result.

              Ideal Result   PSO       PD-KT-FrFT   Proposed Method
SNR = 0 dB    8.2354         8.4554    8.3455       8.2651
SNR = −3 dB   9.5212         9.8930    9.6251       9.5321
SNR = −6 dB   10.4501        10.6651   10.5021      10.4589
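The entropy values in Table 1 are the standard focus measure for ISAR imagery: the image intensity is normalized to a probability distribution and its Shannon entropy is computed, with lower entropy indicating a better-focused image. A minimal sketch of this metric (assuming natural-log entropy over the squared magnitude; the paper's exact normalization is not specified here):

```python
import numpy as np

def image_entropy(img: np.ndarray) -> float:
    """Shannon entropy of an ISAR image.

    The squared magnitude is normalized to a probability
    distribution p, and E = -sum(p * ln p) is returned.
    Lower entropy indicates a better-focused image.
    """
    power = np.abs(img) ** 2
    p = power / power.sum()
    p = p[p > 0]                      # avoid log(0)
    return float(-(p * np.log(p)).sum())

# A well-focused image (energy concentrated in few cells) has
# lower entropy than a defocused one (energy spread out).
focused = np.zeros((64, 64)); focused[32, 32] = 1.0
defocused = np.ones((64, 64))
assert image_entropy(focused) < image_entropy(defocused)
```

This is why, in Table 1, the proposed method's entropy sits closest to the ideal result at every SNR: better translational motion compensation concentrates the scatterer energy and drives the entropy down.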
Table 2. Results of translational parameter estimation.

              v        a1      a2       a3
SNR = −9 dB   −17.43   24.85   0.0444   −0.061
SNR = −10 dB  −17.48   24.85   0.0444   −0.061
SNR = −11 dB  −17.46   25.13   0.0421   −0.064
SNR = −12 dB  −17.55   25.72   0.0457   −0.057
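Once the four parameters of Table 2 are estimated, the translational range history can be reconstructed and removed from each range-compressed pulse as a phase correction. A sketch under assumed conventions (Taylor-series factorial coefficients for the fourth-order polynomial, and an illustrative X-band wavelength and slow-time axis; the paper's own coefficient scaling may differ):

```python
import numpy as np

# Estimated parameters from Table 2, SNR = -9 dB row.
v, a1, a2, a3 = -17.43, 24.85, 0.0444, -0.061

def r_trans(tm: np.ndarray) -> np.ndarray:
    """Fourth-order polynomial translational motion R_trans(t_m)."""
    return v * tm + a1 * tm**2 / 2 + a2 * tm**3 / 6 + a3 * tm**4 / 24

# Phase error induced by the translational motion at wavelength wl
# (0.03 m is an assumed X-band value), and its conjugate, which is
# multiplied onto each range-compressed pulse to compensate it.
wl = 0.03
tm = np.linspace(0.0, 2.0, 256)                        # assumed slow-time axis
phase_error = np.exp(-1j * 4 * np.pi / wl * r_trans(tm))
compensation = np.conj(phase_error)
```

The near-identical rows of Table 2 across SNR = −9 to −12 dB indicate that this reconstructed R_trans(t_m), and hence the compensation phase, stays stable as the noise level rises.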
Liu, F.; Huang, D.; Guo, X.; Feng, C. Noise-Robust ISAR Translational Motion Compensation via HLPT-GSCFT. Remote Sens. 2022, 14, 6201. https://doi.org/10.3390/rs14246201