Technical Note

A High-Precision Motion Errors Compensation Method Based on Sub-Image Reconstruction for HRWS SAR Imaging

School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(4), 1033; https://doi.org/10.3390/rs14041033
Submission received: 10 January 2022 / Revised: 12 February 2022 / Accepted: 18 February 2022 / Published: 21 February 2022
(This article belongs to the Section Remote Sensing Communications)

Abstract
High-resolution wide-swath (HRWS) synthetic aperture radar (SAR) plays an important role in remote sensing observation. However, the motion errors caused by the carrier platform’s instability severely degrade the performance of HRWS SAR imaging. Conventional motion errors compensation methods have two drawbacks, i.e., (1) ignoring the spatial variation of the phase errors of pixels along the range direction of the scene, which leads to lower compensation accuracy, and (2) performing compensation after echo reconstruction, which fails to consider the difference in motion errors between channels, resulting in poor imaging performance in the azimuth direction. In this paper, to overcome these two drawbacks, a high-precision motion errors compensation method based on sub-image reconstruction (SI-MEC) for HRWS SAR imaging is proposed. The proposed method consists of three steps. Firstly, the motion errors of the platform are estimated by maximizing the intensity of strong points in multiple regions. Secondly, combined with the multichannel geometry, the equivalent phase centers (EPCs) used for sub-image imaging are corrected, and sub-image imaging is performed before reconstruction. Thirdly, the reconstruction is performed using the sub-images. The proposed method has two advantages, i.e., (1) it compensates for the spatially varying phase errors in the range direction by correcting the EPCs, improving the imaging quality, and (2) it compensates for the motion errors of each channel in sub-image imaging before reconstruction, enhancing the imaging quality in the azimuth direction. Moreover, experimental results are provided to demonstrate that the proposed method outperforms PGA and BP-FMSA.

1. Introduction

High-resolution wide-swath (HRWS) synthetic aperture radar (SAR) can not only perform day-and-night and weather-independent earth observations but also obtain high-resolution and wide-swath microwave images [1,2,3,4]. High resolution provides richer image detail, which benefits tasks such as polarimetric SAR environmental monitoring [5,6,7] and target detection [8], and HRWS SAR is widely used in military and civilian fields. A traditional SAR system contains a single channel and cannot achieve HRWS imaging, because the low pulse repetition frequency (PRF) required to obtain a wide swath conflicts with the high PRF required to avoid azimuth Doppler ambiguity. The azimuth multichannel SAR system design is considered a promising technique to solve this problem [9,10,11], and many reconstruction algorithms have been proposed to reconstruct the multichannel SAR echo for unambiguous imaging [12,13,14,15]. In recent years, various airborne and hypersonic vehicle-borne multichannel SARs [16,17] have been extensively studied. However, due to platform vibration and atmospheric turbulence [18,19], the motion trajectory of the platform deviates from the desired one, resulting in motion errors. Because SAR forms a large synthetic aperture in the azimuth direction through motion and achieves azimuth compression by coherently summing the echoes at each pulse repetition time [20], high-precision imaging depends on the motion trajectory. These motion errors severely degrade the image quality, and motion errors compensation (MEC) must be considered in the imaging process.
A common MEC method is to use the antenna attitude information obtained by an inertial measurement unit (IMU) for motion compensation [21]. However, the measurement accuracy of the IMU still cannot meet the demand for high-precision motion compensation [22]. Another extensively studied class of MEC methods is autofocus algorithms [22,23,24,25,26], which use the echo data for motion error estimation. Phase gradient autofocus (PGA) is a widely used nonparametric autofocus algorithm that performs phase correction on SAR images [27]. In addition, various autofocus methods based on different optimization criteria have been developed [22,24,25,26]. In [24], an autofocus method based on an image sharpness metric was presented, which is compatible with the backprojection algorithm and supports flexible collection and imaging geometries. However, as this algorithm needs to compute and store the per-pulse backprojected values of all pixels, it suffers from a heavy computation burden. To improve imaging efficiency and quality, improved autofocus algorithms [26,28] have been proposed. Nevertheless, these autofocus methods have two drawbacks. First, they mainly rely on phase error estimation while assuming the same phase errors for all pixels of the SAR image. This ignores the spatial variation of the phase errors of pixels along the range direction of the scene, reducing compensation accuracy and image quality. Since HRWS SAR images have a large width in the range direction, this effect is particularly pronounced. Second, as these autofocus methods are designed for single-channel SAR, they perform compensation after echo reconstruction, treating the multichannel reconstructed echo as a single-channel echo [16]. This ignores the difference in motion errors between channels, resulting in poor imaging performance in the azimuth direction.
To overcome the abovementioned two drawbacks of the existing SAR motion errors compensation methods, a high-precision motion errors compensation method based on sub-image reconstruction (SI-MEC) for HRWS SAR imaging is proposed. The proposed method consists of three steps. Firstly, coarse imaging with motion errors is performed, and strong points in partial regions of the coarse imaging result are selected. The motion errors of the platform are estimated by maximizing the intensity of the strong points. Secondly, according to the estimated platform motion errors and the multichannel geometry, the EPCs used for sub-image imaging are corrected, and sub-image imaging is performed before reconstruction. Thirdly, the reconstruction is performed using the sub-images.
The main contributions of this paper are as follows:
  • The proposed method can compensate for the spatially varying phase errors in the range direction by estimating and correcting the EPCs. This increases the accuracy of the phase error compensation for each pixel and improves the imaging quality.
  • The proposed algorithm compensates for the motion errors of each channel before reconstruction, improving the imaging quality in the azimuth direction. Moreover, this compensation is implemented when calculating the distance history during sub-image imaging, without additionally calculating a compensation phase for each pixel, which simplifies the processing.
The remainder of this paper is organized as follows. Section 2 presents the signal model of HRWS SAR with motion errors and analyzes the problem of the conventional motion errors compensation method. In Section 3, a high-precision motion compensation method based on sub-image reconstruction for HRWS SAR is proposed. Experimental results are given in Section 4. Section 5 concludes this paper.

2. Signal Model and Problem Analysis

In this section, we first introduce the signal model of HRWS SAR with motion errors. Then, we analyze the problems of conventional motion errors compensation methods applied to HRWS SAR.

2.1. Signal Model of HRWS SAR with Motion Errors

The geometric relation of the multichannel HRWS SAR system with motion errors is illustrated in Figure 1. The y-axis denotes the platform movement direction, and the platform velocity is v. P_{T1}, P_{T2}, and P_{T3} represent targets at different positions in the observation scene. The system shown in Figure 1 has four channels, in which Tx is used as a transmitter, and Rx1, Rx2, Rx3, and Rx4 are used as receivers to simultaneously receive echoes in each pulse repetition period. According to the principle of the equivalent phase center [9], the EPCs are located at the midpoints between the transmitter and the receivers, shown as dots in Figure 1. The dashed line indicates the platform trajectory without motion errors, whose EPCs are denoted as \tilde{P}_A. The solid line indicates the platform trajectory with motion errors, whose EPCs are denoted as P_A.
Assuming that the system transmits chirp signals, the echo is represented by s(\tau, t). For HRWS SAR, t_{k,n} denotes the non-uniform azimuth sampling time of the EPC corresponding to the n-th channel in the k-th pulse repetition period, and s(\tau, t_{k,n}) denotes the non-uniformly sampled echo. If the non-uniformly sampled echo is used directly for imaging, it results in azimuth ambiguity and significantly degrades the image quality. To solve this problem, reconstruction algorithms are usually adopted to restore uniformly sampled signals before imaging. According to the generalized sampling theorem [29], the uniformly sampled echo signal can be restored by
s(\tau, t'_{k,n}) = \sum_{k'=k-L/2}^{k+L/2} \sum_{n'=1}^{N} s(\tau, t_{k',n'}) \, \Psi_{k'n'}(t'_{k,n})    (1)
where t'_{k,n} denotes the uniform azimuth sampling time, and \Psi_{k'n'}(\cdot) is the reconstruction coefficient.
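As a concrete illustration of Equation (1), the following Python/NumPy sketch restores a uniformly sampled echo as a weighted sum of the non-uniformly sampled multichannel echo over a window of L+1 pulses. The array layout and the precomputed coefficient array psi are assumptions made for the example; the derivation of the reconstruction coefficients themselves follows [29] (or [12]) and is not shown here.

```python
import numpy as np

def reconstruct_uniform_echo(echo, psi, L):
    """Restore the uniformly sampled echo from the non-uniform multichannel echo, Eq. (1).

    echo : (K, N, Nr) complex array, non-uniformly sampled echo s(tau, t_{k,n})
    psi  : (K, N, L+1, N) reconstruction coefficients Psi evaluated at the uniform
           sampling times (assumed precomputed, e.g., following [29] or [12])
    L    : window length; each output sample uses pulses k-L/2 .. k+L/2
    """
    K, N, Nr = echo.shape
    out = np.zeros_like(echo)
    # edge pulses without a full window are left unreconstructed in this sketch
    for k in range(L // 2, K - L // 2):
        window = echo[k - L // 2 : k + L // 2 + 1]     # (L+1, N, Nr) neighbouring pulses
        for n in range(N):
            w = psi[k, n]                              # (L+1, N) weights for this output sample
            out[k, n] = np.tensordot(w, window, axes=([0, 1], [0, 1]))
    return out
```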
Assuming that the imaging scene can be approximated by discrete pixels, the position of each pixel can be expressed as P_T(m) = [x_m, y_m, z_m]^T, m = 1, 2, ..., M. Let \tilde{P}_A(t_{k,n}) = [\tilde{x}_{k,n}, \tilde{y}_{k,n}, \tilde{z}_{k,n}]^T denote the position of the measured EPC of channel n at slow time t_{k,n}. The backprojection (BP) algorithm is employed to process the uniformly sampled echo signal; its principle is presented in [30]. According to the BP algorithm, the filtered BP component of the m-th pixel at slow time t_{k,n} can be expressed as
b_{knm} = s(\hat{\tau}(t_{k,n}, m), t_{k,n}) \cdot e^{j 2\pi f_c \hat{\tau}(t_{k,n}, m)}    (2)
where \hat{\tau}(t_{k,n}, m) is the two-way echo delay from the position of the measured EPC of channel n to the m-th pixel at slow time t_{k,n}. It can be expressed as
\hat{\tau}(t_{k,n}, m) = \frac{2}{c} \left| \tilde{P}_A(t_{k,n}) - P_T(m) \right|    (3)
The filtered BP value of the m-th pixel accumulated over all slow time can be expressed as
B_m = \sum_{k=1}^{K} \sum_{n=1}^{N} b_{knm}    (4)
This is the imaging result without motion errors. Unfortunately, due to the presence of motion errors in practical applications, the non-uniformly sampled echo suffers from phase errors, which cause the SAR image to be defocused. Denoting the compensation phase as \phi_{kn}, the procedure of BP imaging with phase error compensation, instead of Equation (4), is written as
B_m = \sum_{k=1}^{K} \sum_{n=1}^{N} b_{knm} \, e^{j \phi_{kn}}    (5)
This is the conventional motion errors compensation method.
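To make the conventional procedure in Equations (2)-(5) concrete, the following Python/NumPy sketch backprojects a reconstructed, range-compressed echo and applies a single compensation phase per azimuth sample to every pixel. The array layouts, the nearest-bin range interpolation, and the assumption that the first range sample corresponds to zero delay are illustrative choices, not the authors' implementation.

```python
import numpy as np

def bp_compensated(echo, tau_hat, phi, fs, fc):
    """Backprojection with one compensation phase per azimuth sample, Eqs. (2)-(5).

    echo    : (KN, Nr) complex array, reconstructed (uniformly sampled) range-compressed echo
    tau_hat : (KN, M)  two-way delays from the measured EPCs to the M pixels
    phi     : (KN,)    compensation phase phi_{kn}, shared by all pixels (Eq. (5))
    fs, fc  : range sampling rate and carrier frequency
    """
    n_az, n_rg = echo.shape
    image = np.zeros(tau_hat.shape[1], dtype=complex)
    for i in range(n_az):
        # nearest range bin of each pixel's delay (assumes the first sample is at zero delay)
        idx = np.clip(np.round(tau_hat[i] * fs).astype(int), 0, n_rg - 1)
        b = echo[i, idx] * np.exp(1j * 2 * np.pi * fc * tau_hat[i])   # filtered BP component, Eq. (2)
        image += b * np.exp(1j * phi[i])                              # range-invariant compensation, Eq. (5)
    return image
```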

2.2. Problem Analysis

There are two problems with the conventional motion errors compensation method for HRWS SAR imaging.
The first problem is ignoring the spatial variation of the phase errors of pixels along the range direction of the scene. The pixel phase errors caused by the platform motion errors are spatially variant; that is, the phase errors differ for different pixels in the observation scene. However, Equation (5) shows that the conventional phase compensation method ignores the spatial variation of the phase errors in the range direction and applies the same compensation phase to all pixels in the observation scene, so only pixels whose phase errors change little can be compensated for. In the motion error compensation of HRWS SAR, however, the phase errors of the pixels in the observation scene vary greatly in the range direction because of the wide swath. Hence, most of the pixels in the range direction cannot be accurately compensated for, which results in the deterioration of the image quality.
The second problem is ignoring the difference in motion errors between channels. From Equations (2) and (5), we can see that the conventional compensation method performs motion error compensation on the echo after reconstruction, treating the multichannel reconstructed echo as a single-channel echo. It ignores the difference in motion errors between channels, so the motion errors of each channel cannot be completely compensated for, and the image quality deteriorates in the azimuth direction.

3. HRWS SAR High-Precision Motion Compensation Method

In order to solve the two problems of the conventional algorithms, we propose a high-precision motion compensation method based on sub-image reconstruction (SI-MEC) for HRWS SAR. The flow of the proposed method is shown in Figure 2.

3.1. Motion Errors Estimation Based on the Maximum Intensity of Strong Points in Multi-Region

Since the antennas of the multichannel SAR system are fixed on the flight platform, their relative positions are known. According to the antennas' relative positions and the imaging geometry of the multichannel SAR system, the motion errors of one channel can be used to calculate the motion errors of the other channels. Therefore, we can obtain the multichannel motion errors by estimating the motion errors of one channel, which improves efficiency because only the data of one channel are used for estimation.
In fact, any one of the channels can be selected to estimate the motion errors; because the relative positions of the four channels are known, the motion errors of the other channels can be calculated from those of any selected channel. In this paper, since channel 1 can both transmit and receive signals, its equivalent phase center coincides with its antenna position. For convenience, we choose channel 1 to estimate the motion errors and treat them as the motion errors of the platform. The BP value of the m-th pixel using the channel 1 echo can be expressed as
B_m = \sum_{k=1}^{K} s(\hat{\tau}(t_{k,1}, m), t_{k,1}) \cdot e^{j 2\pi f_c \hat{\tau}(t_{k,1}, m)}    (6)
where s(\tau, t_{k,1}) is the echo data of channel 1.
The influence of motion errors on the amplitude of the BP value can be ignored, but their influence on the compensation phase calculated from the two-way echo delay needs to be considered. When there are motion errors, the accurate BP imaging result can be expressed as
B_m = \sum_{k=1}^{K} s(\hat{\tau}(t_{k,1}, m), t_{k,1}) \cdot e^{j 2\pi f_c \tilde{\tau}(t_{k,1}, m)}    (7)
where \tilde{\tau}(t_{k,1}, m) is the two-way echo delay from the EPC with motion errors of channel 1 to the m-th pixel at slow time t_{k,1}. It can be expressed as
\tilde{\tau}(t_{k,1}, m) = \frac{2}{c} \left| \tilde{P}_A(t_{k,1}) + \Delta \hat{P}_A(t_k) - P_T(m) \right|    (8)
where \Delta \hat{P}_A(t_k) = [\Delta \hat{x}_k, \Delta \hat{y}_k, \Delta \hat{z}_k]^T is the estimate of the EPC error of channel 1, which is also taken as the motion error of the platform.
In order to accurately estimate the motion errors, multiple strong points in two regions are selected for the estimation. These two regions are far apart in the range direction, so the difference between the phase errors of the two regions is large, i.e., the spatial variation phenomenon is obvious. In practical applications, the regions close to the nearest and furthest boundaries of the HRWS image in the range direction are selected, as shown in Figure 3. If all pixels in these two regions were used for estimation, the computation burden would be heavy. Therefore, only the strong point targets in these two regions are selected for estimation. Let \{B_m;\ m_1 < m < m_2\} and \{B_m;\ m_3 < m < m_4\} denote the BP values of the strong point targets in region 1 and region 2, respectively.
Since the motion errors degrade the image quality, we can estimate them by maximizing the image quality, which requires an appropriate criterion for reflecting the image quality. Image intensity, image sharpness, and image entropy can be employed as evaluation criteria for SAR image quality. Because employing the image intensity as the criterion simplifies the derivation of the objective function, it is used in this paper. The image intensity of the strong points in the two regions can be expressed as
F(\Delta \hat{P}_A) = \sum_{m=m_1}^{m_2} |B_m|^2 + \alpha \sum_{m=m_3}^{m_4} |B_m|^2    (9)
Inspired by the maximum sharpness autofocus algorithm [28], we design the motion errors estimation model based on maximizing the image intensity of multiple strong points as
\Delta \hat{P}_A^{\mathrm{opt}} = \arg\max_{\Delta \hat{P}_A} F(\Delta \hat{P}_A) = \arg\min_{\Delta \hat{P}_A} \left\{ -\left( \sum_{m=m_1}^{m_2} |B_m|^2 + \alpha \sum_{m=m_3}^{m_4} |B_m|^2 \right) \right\}    (10)
where \Delta \hat{P}_A^{\mathrm{opt}} is the optimal estimate of the motion errors \Delta P_A, and an unconstrained optimization approach can be used to solve it. \alpha is the weighting coefficient. When \alpha \to 0, only region 1 is effectively used for estimation; when \alpha \to \infty, only region 2 is effectively used. To make effective use of the strong points in multiple regions, the weighting coefficient is adjusted so that the image intensities of the strong points in different regions are of the same order of magnitude.
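As an illustration of Equations (9) and (10), the sketch below estimates a height-error sequence for channel 1 by maximizing the intensity of strong points in the two regions with an unconstrained solver. The low-order polynomial parameterization of the height error, the nearest-bin backprojection, and all variable names are assumptions made to keep the example short; they are not the estimation details used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def bp_ch1(echo, epc, pixels, fc, fs):
    """Channel-1 BP values of selected pixels, Eq. (7), with a nearest-bin range lookup."""
    c = 299792458.0
    B = np.zeros(len(pixels), dtype=complex)
    for k in range(echo.shape[0]):
        tau = 2.0 * np.linalg.norm(epc[k] - pixels, axis=1) / c
        idx = np.clip(np.round(tau * fs).astype(int), 0, echo.shape[1] - 1)
        B += echo[k, idx] * np.exp(1j * 2 * np.pi * fc * tau)
    return B

def intensity(coeffs, echo, epc0, pix1, pix2, t, fc, fs, alpha):
    epc = epc0.copy()
    epc[:, 2] += np.polyval(coeffs, t)                 # height-error model (assumption)
    B1 = bp_ch1(echo, epc, pix1, fc, fs)               # strong points in region 1
    B2 = bp_ch1(echo, epc, pix2, fc, fs)               # strong points in region 2
    return np.sum(np.abs(B1) ** 2) + alpha * np.sum(np.abs(B2) ** 2)   # Eq. (9)

def estimate_motion_error(echo, epc0, pix1, pix2, t, fc, fs, alpha, order=3):
    # Eq. (10): maximize F by minimizing its negative with an unconstrained solver
    res = minimize(lambda x: -intensity(x, echo, epc0, pix1, pix2, t, fc, fs, alpha),
                   np.zeros(order + 1), method="Nelder-Mead")
    return np.polyval(res.x, t)                        # estimated height error per pulse
```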

3.2. Sub-Image EPCs Correction

For HRWS SAR imaging, reconstruction methods are usually necessary to reconstruct the multichannel SAR echo into uniformly sampled echoes, so the choice of reconstruction method is very important. The sub-image reconstruction method [12] is an excellent choice, in which the distance history of each pixel of the sub-image can be calculated and the sub-image imaging is performed before the reconstruction. Due to these characteristics, this reconstruction method is adopted in this paper. According to the sub-image reconstruction method, the reconstructed image can be obtained by a weighted summation of sub-images, so we first perform sub-image imaging. The sub-image can be expressed as
I_{n,l}(m) = \sum_{k = 1 + L/2 + l}^{K - L/2 + l} s(\hat{\tau}(t_{k,n}, m), t_{k,n}) \, e^{j \omega \hat{\tau}(t_{k,n}, m)}    (11)
where \omega = 2\pi f_c is the angular carrier frequency and l is the sub-image shift index. Due to the influence of motion errors, the echo used in sub-image imaging contains phase errors, which degrade the imaging quality. According to the principle of BP, let \varphi_{knm} denote the correction phase of the echo containing the motion errors at azimuth time t_{k,n} for the m-th pixel. The sub-image imaging with phase compensation can be expressed as
I_{n,l}(m) = \sum_{k = 1 + L/2 + l}^{K - L/2 + l} s(\hat{\tau}(t_{k,n}, m), t_{k,n}) \, e^{j \varphi_{knm}} \, e^{j \omega \hat{\tau}(t_{k,n}, m)}    (12)
where
\varphi_{knm} = \omega \left\{ \hat{\tau}(t_{k,n}, m) - \tilde{\tau}(t_k, m) \right\} = \frac{2\omega}{c} \left| \tilde{P}_A(t_{k,n}) - P_T(m) \right| - \frac{2\omega}{c} \left| \tilde{P}_A(t_{k,n}) + \Delta P_A(t_k) - P_T(m) \right|    (13)
From Equation (12), it can be seen that the phase compensation is performed on the echo at each azimuth time for each pixel, which greatly increases the computational burden. To solve this problem, the phase error correction of the echo is integrated into the phase term e^{j \omega \hat{\tau}(t_{k,n}, m)} in Equation (12).
Due to the long distance between the observation scene and the motion platform, \varphi_{knm} can be approximated as
\varphi_{knm} \approx \frac{2\omega}{c} \left| \tilde{P}_A(t_{k,n}) + \Delta P_A(t_k) - P_T(m) \right| - \frac{2\omega}{c} \left| \tilde{P}_A(t_{k,n}) - P_T(m) \right|    (14)
The sub-image imaging with EPC correction can then be re-expressed as
I_{n,l}(m) = \sum_{k = 1 + L/2 + l}^{K - L/2 + l} s(\hat{\tau}(t_{k,n}, m), t_{k,n}) \cdot \exp\left\{ j \frac{2\omega}{c} \left| \tilde{P}_A(t_{k,n}) + \Delta P_A(t_k) - P_T(m) \right| \right\}    (15)
This means that the phase errors of each pixel of the sub-image can be accurately compensated for by correcting the EPCs at time t_{k,n} when calculating the distance history. The compensation phase does not need to be calculated separately as in Equation (13), which simplifies the processing.
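A minimal sketch of the EPC-corrected sub-image in Equation (15) is given below: the echo sample is still selected with the delay to the measured EPC, while the phase term is computed from the EPC shifted by the estimated platform error. The index bookkeeping, the nearest-bin range lookup, and the zero-delay-at-first-sample convention are illustrative assumptions.

```python
import numpy as np

def sub_image_corrected(echo_n, epc_meas_n, dPA, pixels, omega, fs, L, l):
    """EPC-corrected sub-image of channel n, a sketch of Eq. (15).

    echo_n     : (K, Nr) range-compressed echo of channel n
    epc_meas_n : (K, 3)  measured EPC positions of channel n
    dPA        : (K, 3)  estimated platform (channel-1 EPC) errors Delta P_A(t_k)
    pixels     : (M, 3)  pixel positions P_T(m)
    omega      : angular carrier frequency (2*pi*fc); fs: range sampling rate
    L, l       : reconstruction window length and sub-image shift
    """
    c = 299792458.0
    K, Nr = echo_n.shape
    img = np.zeros(len(pixels), dtype=complex)
    for k in range(max(0, L // 2 + l), min(K, K - L // 2 + l)):   # Eq. (15) summation limits
        # delay to the *measured* EPC selects the echo sample ...
        tau_hat = 2.0 * np.linalg.norm(epc_meas_n[k] - pixels, axis=1) / c
        idx = np.clip(np.round(tau_hat * fs).astype(int), 0, Nr - 1)
        # ... while the phase uses the EPC corrected by the estimated platform error
        r_corr = np.linalg.norm(epc_meas_n[k] + dPA[k] - pixels, axis=1)
        img += echo_n[k, idx] * np.exp(1j * 2.0 * omega * r_corr / c)
    return img
```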

3.3. Reconstruction

The reconstructed image can be expressed as a weighted summation of the sub-images with EPC correction [12], which can be written as
I(m) = \sum_{n=1}^{N} \sum_{l=-L/2}^{L/2} I_{n,l}(m) \, \Psi_{n,l}    (16)
where I_{n,l}(m) represents the sub-image of channel n with shift l, and \Psi_{n,l} is the corresponding reconstruction coefficient.
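The weighted summation in Equation (16) can be written compactly as below. The reconstruction coefficients \Psi are assumed to be precomputed according to [12]; the array layout is the only assumption added here.

```python
import numpy as np

def reconstruct_from_sub_images(sub_images, psi):
    """Weighted summation of EPC-corrected sub-images, Eq. (16).

    sub_images : (N, L+1, M) complex array, I_{n,l}(m) for shifts l = -L/2 .. L/2
    psi        : (N, L+1)    complex reconstruction coefficients Psi_{n,l}
    """
    # sum over channel index n and shift index l for every pixel m
    return np.einsum("nl,nlm->m", psi, sub_images)
```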
Based on the previous analysis, the main steps of the proposed method are summarized in Algorithm 1.
Algorithm 1. High-precision Motion Errors Compensation Method Based on Sub-image Reconstruction for HRWS SAR Imaging
Inputs: The non-uniformly sampled echo s(\tau, t_{k,n}) and the measured EPCs \tilde{P}_A(t_{k,n}).
      Step 1: Obtain the SAR image B_m with the measured EPCs by Equation (6).
      Step 2: Select strong points \{B_m;\ m_1 < m < m_2\} and \{B_m;\ m_3 < m < m_4\} in multiple regions of the SAR image, and calculate \alpha = \sum_{m=m_1}^{m_2} |B_m|^2 / \sum_{m=m_3}^{m_4} |B_m|^2.
      Step 3: Estimate the motion errors \Delta \hat{P}_A^{\mathrm{opt}} by maximizing the image intensity in Equation (10).
      Step 4: Correct the EPC errors for each sub-image, and obtain the sub-images I_{n,l}(m) by Equation (15).
      Step 5: Reconstruct the image from the sub-images by Equation (16), and obtain the well-focused SAR imaging result I(m).
Outputs: The high-precision image I(m) with motion errors compensated.

4. Experimental Results

In this section, the simulated multichannel SAR data processing is performed to verify the effectiveness of the proposed method.

4.1. Point Target Simulation

We simulated a four-channel SAR system, whose parameters are shown in Table 1. As errors in the height direction have a greater impact on the imaging quality, uniformly distributed random errors between −0.5λ and 0.5λ were added to the height of the platform's motion trajectory to simulate the motion errors. We set three point targets P_{T1}, P_{T2}, and P_{T3} in the observation scene, with coordinates [50,000, 0, 0] m, [60,000, 0, 0] m, and [70,000, 0, 0] m, respectively.
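For reference, a small sketch of how such motion errors can be generated in simulation is shown below: uniform random height deviations in [−0.5λ, 0.5λ] are added to a nominal straight trajectory. The number of pulses and the nominal-trajectory construction are illustrative assumptions.

```python
import numpy as np

c = 299792458.0
fc = 9.6e9                          # carrier frequency (Table 1)
lam = c / fc
prf, v, K = 700.0, 1900.0, 4096     # PRF and platform velocity from Table 1; pulse count assumed

t = np.arange(K) / prf
nominal = np.stack([np.zeros(K), v * t, 20e3 * np.ones(K)], axis=1)   # nominal trajectory at 20 km height
rng = np.random.default_rng(0)
dz = rng.uniform(-0.5 * lam, 0.5 * lam, size=K)                       # uniform height errors
trajectory = nominal + np.stack([np.zeros(K), np.zeros(K), dz], axis=1)
```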
To demonstrate the effectiveness of the proposed method, PGA [27] and BP-FMSA [28] were used to process the reconstructed multichannel data for comparison. Figure 4 shows the imaging results of the point targets at 50 km, 60 km, and 70 km in the range direction obtained with the different compensation methods. As shown in Figure 4a, the three point targets were seriously defocused in the azimuth direction. Figure 4b–d show the imaging results of PGA [27], BP-FMSA [28], and SI-MEC, respectively. All three methods selected local areas for estimation. In Figure 4b,c, PGA and BP-FMSA utilized the area around point target P_{T1} at 50 km to estimate the compensation phase and performed compensation after reconstruction. In Figure 4d, the 1 m × 1 m area around point target P_{T1} at 50 km and the 1 m × 1 m area around point target P_{T2} at 60 km were selected to estimate the EPCs. From Figure 4b, it can be seen that the imaging quality of the point target at 50 km was good, but the imaging quality of the point targets at 60 km and 70 km was worse; PGA could not compensate for the areas in the range direction that were not selected for phase estimation. Compared with Figure 4a,b, the imaging quality of the three point targets in Figure 4c was improved, but the sidelobe level was not suppressed well. Figure 4d shows the imaging result of SI-MEC. It can be seen that not only P_{T1} at 50 km and P_{T2} at 60 km but also P_{T3} at 70 km were well focused. This means that SI-MEC can compensate for the areas in the range direction that were not selected for EPC estimation, which improves the imaging quality.
Comparing the image quality of the targets at the same range in Figure 4a–d, we found that the proposed method performed better in the azimuth direction. Moreover, Figure 5 shows the azimuth profiles of the point target at 50 km obtained by the different methods in Figure 4. It indicates that, compared with the other two methods, SI-MEC effectively suppressed the sidelobe level. The proposed method thus improved the imaging quality in the azimuth direction.
Table 2 lists the azimuth peak sidelobe ratio (PSLR) and azimuth integrated sidelobe ratio (ISLR). First, we compared the performance of each method at different ranges. The PSLR and ISLR of PGA were the smallest at 50 km, which shows that PGA only improved the image quality at 50 km. The PSLR of BP-FMSA was almost the same at 50 km, 60 km, and 70 km, but its ISLR at 70 km was higher than at the other ranges. This shows that BP-FMSA improved the image quality at 50 km, 60 km, and 70 km, but its performance dropped at 70 km. The PSLR and ISLR of SI-MEC were almost the same at 50 km, 60 km, and 70 km, which shows that SI-MEC improved the image quality of all pixels in the range direction.
Then, we compared the performance of the different methods at the same range. At 50 km, the ISLR of SI-MEC was lower than that of PGA, while the PSLR of SI-MEC was higher than that of PGA; both the ISLR and PSLR of SI-MEC were better than those of BP-FMSA. Combined with Figure 4, this shows that although PGA concentrated the energy of the main lobe, it could not suppress the sidelobe level well, whereas SI-MEC improved both indicators and obtained a better image quality. At 60 km and 70 km, the PSLR and ISLR of SI-MEC were the best, which shows that SI-MEC improved the image quality in the azimuth direction.

4.2. Complex Scene Simulation

To further verify the effectiveness of the proposed algorithm, we performed HRWS SAR imaging of a complex scene. The HRWS SAR system has four channels arranged with equal spacing along the azimuth, as shown in Figure 6; the spacing between two adjacent channels is d = 1 m. Channel 1 can transmit and receive signals, while channels 2, 3, and 4 only receive signals. Figure 6 shows the azimuth sampling pattern of this four-channel SAR system. The system parameters and the motion errors were the same as those of the point target simulation shown in Table 1. The amplitude of a real SAR image was used as the scattering coefficient to generate the echo. The size of the whole imaging scene was 20 km × 2 km, and the center of the scene was at 60 km in the range direction. Figure 7 shows the complex scene imaging results with the different compensation methods. In order to display the image features more clearly, a normalization strategy was applied to the amplitude of the imaging results: a threshold was set to 0.3 times the maximum pixel value; pixel values smaller than the threshold were normalized, and pixel values larger than the threshold were set to 1. Figure 7a shows the imaging result without MEC. The imaging result compensated for by PGA is shown in Figure 7b; the motion errors were not completely compensated for, and the image is still defocused. Figure 7c shows the imaging result of BP-FMSA; the image quality was improved, but the sidelobes are obvious. The imaging result of SI-MEC is shown in Figure 7d, which shows that all pixels could be compensated for, yielding a high-precision image.
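The display normalization described above can be sketched as follows; the variable names and the exact scaling of the sub-threshold values are assumptions.

```python
import numpy as np

def normalise_for_display(image, ratio=0.3):
    """Clip strong pixels to 1 and scale the rest into [0, 1) for visualization."""
    amp = np.abs(image)
    thr = ratio * amp.max()
    return np.where(amp >= thr, 1.0, amp / thr)
```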
PGA and BP-FMSA utilized a local area, marked by the orange boxes in Figure 7b,c, to estimate the compensation phase. This compensation phase mainly improved the imaging quality of the selected area, while the imaging results in other areas of the scene had higher sidelobe levels. For BP-FMSA, because the selection of the dominant-scatterer region affects the imaging performance [28], we present the optimal result under the current experimental conditions. For the proposed method, strong points in two local areas, marked by the orange boxes in Figure 7d, were selected as the input of the estimation. We can see that not only the selected areas were focused, but the imaging quality of the whole scene was improved. This demonstrates that the proposed method improved the image quality of the whole scene.
Table 3 lists the image entropy [31] of the results in Figure 7. The image entropy of the proposed method was the lowest, i.e., the best, among the compared methods. This demonstrates that the proposed algorithm improves the image quality more than the other methods.
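For reference, a common form of the image entropy, the entropy of the normalized intensity distribution, is sketched below; the exact definition used in Table 3 follows [31], so this particular form is an assumption.

```python
import numpy as np

def image_entropy(image, eps=1e-12):
    """Entropy of the normalized intensity distribution; lower means better focus."""
    intensity = np.abs(image) ** 2
    p = intensity / (intensity.sum() + eps)
    return float(-np.sum(p * np.log(p + eps)))
```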
Figure 8 shows the enlarged imaging results of the building and the point target in Figure 7, which are marked by the red boxes and the orange circle, respectively. We can see that it was difficult for PGA to compensate for the severely defocused multichannel SAR image, and BP-FMSA could not suppress the sidelobe level well, whereas the sidelobe level of the SI-MEC result was significantly suppressed. This shows that the proposed method achieved better imaging quality in the azimuth direction. Figure 9 shows the azimuth profiles of the point target in Figure 8 obtained by the different methods; the proposed method has a lower sidelobe level than the other algorithms.

5. Conclusions

In this paper, we proposed a high-precision motion errors compensation method based on sub-image reconstruction for HRWS SAR. The proposed method achieves high-precision compensation of the phase errors of each pixel in the observation scene by estimating and correcting the EPCs, improving the compensation accuracy. It improves the image quality not only of the area selected for estimation but also of the other areas in the observation scene. Moreover, the motion errors of each channel are compensated for in the sub-image imaging before reconstruction, which improves the imaging quality in the azimuth direction.

Author Contributions

All the authors contributed extensively to the preparation of this manuscript. L.Z. and X.Z. conceived the methods and performed the experiments; L.P. provided support for the experiment and offered suggestions on revision; T.Z. provided suggestions for modification; J.S. and S.W. supervised the research and commented on the manuscript; and L.Z. wrote the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant No. 61571099, 61671113, and 61501098.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We thank the anonymous reviewers for their comments towards improving this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, S.X.; Xing, M.D.; Xia, X.G.; Liu, Y.Y.; Guo, R.; Bao, Z. A Robust Channel-Calibration Algorithm for Multi-Channel in Azimuth HRWS SAR Imaging Based on Local Maximum-Likelihood Weighted Minimum Entropy. IEEE Trans. Image Process. 2013, 22, 5294–5305.
  2. Yang, J.; Qiu, X.; Zhong, L.; Shang, M.; Ding, C. A Simultaneous Imaging Scheme of Stationary Clutter and Moving Targets for Maritime Scenarios with the First Chinese Dual-Channel Spaceborne SAR Sensor. Remote Sens. 2019, 11, 2275.
  3. Xu, W.; Yu, Q.; Fang, C.; Huang, P.; Tan, W.; Qi, Y. Onboard Digital Beamformer with Multi-Frequency and Multi-Group Time Delays for High-Resolution Wide-Swath SAR. Remote Sens. 2021, 13, 4354.
  4. Mittermayer, J.; Krieger, G.; Bojarski, A.; Zonno, M.; Villano, M.; Pinheiro, M.; Bachmann, M.; Buckreuss, S.; Moreira, A. MirrorSAR: An HRWS Add-On for Single-Pass Multi-Baseline SAR Interferometry. IEEE Trans. Geosci. Remote Sens. 2021.
  5. Muhuri, A.; Manickam, S.; Bhattacharya, A.; Snehmani. Snow Cover Mapping Using Polarization Fraction Variation With Temporal RADARSAT-2 C-Band Full-Polarimetric SAR Data Over the Indian Himalayas. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2192–2209.
  6. Touzi, R. Target Scattering Decomposition in Terms of Roll-Invariant Target Parameters. IEEE Trans. Geosci. Remote Sens. 2007, 45, 73–84.
  7. Van Zyl, J.J.; Zebker, H.A.; Elachi, C. Imaging radar polarization signatures: Theory and observation. Radio Sci. 1987, 22, 529–543.
  8. Zhang, T.; Zhang, X.; Shi, J.; Wei, S. Depthwise Separable Convolution Neural Network for High-Speed SAR Ship Detection. Remote Sens. 2019, 11, 2483.
  9. Krieger, G.; Gebert, N.; Moreira, A. Unambiguous SAR signal reconstruction from nonuniform displaced phase center sampling. IEEE Geosci. Remote Sens. Lett. 2004, 1, 260–264.
  10. Yang, T.; Li, Z.; Suo, Z.; Liu, Y.; Bao, Z. Performance analysis for multichannel HRWS SAR systems based on STAP approach. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1409–1413.
  11. Nicolas, G.; Gerhard, K.; Alberto, M. Digital Beamforming on Receive: Techniques and Optimization Strategies for High-Resolution Wide-Swath SAR Imaging. IEEE Trans. Aerosp. Electron. Syst. 2009, 45, 564–592.
  12. Zhou, L.; Zhang, X.; Zhan, X.; Pu, L.; Zhang, T.; Shi, J.; Wei, S. A Novel Sub-Image Local Area Minimum Entropy Reconstruction Method for HRWS SAR Adaptive Unambiguous Imaging. Remote Sens. 2021, 13, 3115.
  13. Guo, J.; Chen, J.; Liu, W.; Li, C.; Yang, W. An Improved Airborne Multichannel SAR Imaging Method With Motion Compensation and Range-Variant Channel Mismatch Correction. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 5414–5423.
  14. Zhao, S.; Wang, R.; Deng, Y.; Zhang, Z.; Li, N.; Guo, L.; Wang, W. Modifications on Multichannel Reconstruction Algorithm for SAR Processing Based on Periodic Nonuniform Sampling Theory and Nonuniform Fast Fourier Transform. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 4998–5006.
  15. Li, N.; Zhang, H.; Zhao, J.; Wu, L.; Guo, Z. An Azimuth Signal-Reconstruction Method Based on Two-Step Projection Technology for Spaceborne Azimuth Multi-Channel High-Resolution and Wide-Swath SAR. Remote Sens. 2021, 13, 4988.
  16. Huang, H.; Huang, P.; Liu, X.; Xia, X.G.; Deng, Y.; Fan, H.; Liao, G. A Novel Channel Errors Calibration Algorithm for Multichannel High-Resolution and Wide-Swath SAR Imaging. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5201619.
  17. Rui, Z.; Sun, J.; Hu, Y.; Qi, Y. Multichannel High Resolution Wide Swath SAR Imaging for Hypersonic Air Vehicle with Curved Trajectory. Sensors 2018, 18, 411.
  18. Chen, Z.; Zhang, Z.; Qiu, J.; Zhou, Y.; Wang, R. A Novel Motion Compensation Scheme for 2-D Multichannel SAR Systems With Quaternion Posture Calculation. IEEE Trans. Geosci. Remote Sens. 2020, 59, 9350–9360.
  19. Ding, Z.; Liu, L.; Zeng, T.; Yang, W.; Long, T. Improved Motion Compensation Approach for Squint Airborne SAR. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4378–4387.
  20. Bhattacharya, A.; Muhuri, A.; De, S.; Manickam, S.; Frery, A.C. Modifying the Yamaguchi Four-Component Decomposition Scattering Powers Using a Stochastic Distance. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3497–3506.
  21. Kennedy, T. Strapdown inertial measurement units for motion compensation for synthetic aperture radars. IEEE Aerosp. Electron. Syst. Mag. 1988, 3, 32–35.
  22. Hu, K.; Zhang, X.; He, S.; Zhao, H.; Shi, J. A Less-Memory and High-Efficiency Autofocus Back Projection Algorithm for SAR Imaging. IEEE Geosci. Remote Sens. Lett. 2015, 12, 890–894.
  23. Eichel, P.H.; Jakowatz, C.V. Phase-gradient algorithm as an optimal estimator of the phase derivative. Opt. Lett. 1989, 14, 1101–1103.
  24. Ash, J.N. An Autofocus Method for Backprojection Imagery in Synthetic Aperture Radar. IEEE Geosci. Remote Sens. Lett. 2012, 9, 104–108.
  25. Fletcher, I.; Watts, C.; Miller, E.; Rabinkin, D. Minimum entropy autofocus for 3D SAR images from a UAV platform. In Proceedings of the 2016 IEEE Radar Conference (RadarConf), Philadelphia, PA, USA, 2–6 May 2016; pp. 1–5.
  26. Zhou, L.; Zhang, X.; Wang, Y.; Wang, C.; Wei, S. Precise Autofocus for SAR Imaging Based on Joint Multi-Region Optimization. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019.
  27. Wahl, D.; Eichel, P.; Ghiglia, D.; Jakowatz, C. Phase gradient autofocus-a robust tool for high resolution SAR phase correction. IEEE Trans. Aerosp. Electron. Syst. 1994, 30, 827–835.
  28. Wei, S.; Zhou, L.; Zhang, X.; Shi, J. Fast back-projection autofocus for linear array SAR 3-D imaging via maximum sharpness. In Proceedings of the 2018 IEEE Radar Conference (RadarConf18), Oklahoma City, OK, USA, 23–27 April 2018; pp. 525–530.
  29. Yen, J. On nonuniform sampling of bandwidth-limited signals. IRE Trans. Circuit Theory 1956, 3, 251–257.
  30. Shi, J.; Zhang, X.; Yang, J.; Wen, C. APC Trajectory Design for One-Active Linear-Array Three-Dimensional Imaging SAR. IEEE Trans. Geosci. Remote Sens. 2010, 48, 1470–1486.
  31. Pu, W. SAE-Net: A Deep Neural Network for SAR Autofocus. IEEE Trans. Geosci. Remote Sens. 2022.
Figure 1. The geometry of the HRWS SAR system with motion errors.
Figure 2. The flow of the HRWS SAR high-precision motion compensation method based on sub-image reconstruction.
Figure 3. The location of the selected regions for estimation.
Figure 4. Imaging results of point targets at 50 km, 60 km, and 70 km by using different compensation methods. (a) Imaging result without MEC. (b) Imaging result by PGA [27]. (c) Imaging result by BP-FMSA [28]. (d) Imaging result by SI-MEC (Ours).
Figure 5. The azimuth profile of the point target at 50 km by the different methods shown in Figure 4.
Figure 6. The azimuth sampling pattern of the four-channel SAR system.
Figure 7. Complex scene imaging results by using different compensation methods. (a) Imaging result without MEC. (b) Imaging result by PGA [27]. (c) Imaging result by BP-FMSA [28]. (d) Imaging result by SI-MEC (Ours).
Figure 8. The imaging result of the building in Figure 7. (a) Imaging result without MEC. (b) Imaging result by PGA [27]. (c) Imaging result by BP-FMSA [28]. (d) Imaging result by SI-MEC (Ours).
Figure 9. The azimuth profile of the point target in Figure 8 using different methods. (a) Full azimuth data and (b) partial azimuth data.
Table 1. Multichannel SAR simulation parameters.

Parameters              Value
Carrier frequency       9.6 GHz
Signal bandwidth        150 MHz
Range sampling rate     500 Hz
PRF                     700 Hz
Azimuth bandwidth       1900 Hz
Platform height         20 km
Platform velocity       1900 m/s
Number of channels      4
Table 2. Imaging quality indicators of the point targets in Figure 4.

Method           PSLR (50 km)   ISLR (50 km)   PSLR (60 km)   ISLR (60 km)   PSLR (70 km)   ISLR (70 km)
No MEC           −6.49 dB       8.74 dB        −9.35 dB       4.74 dB        −10.60 dB      2.33 dB
PGA [27]         −16.38 dB      −3.47 dB       −1.04 dB       14.61 dB       −2.32 dB       14.08 dB
BP-FMSA [28]     −12.64 dB      −8.13 dB       −12.64 dB      −8.13 dB       −12.47 dB      −6.51 dB
SI-MEC (Ours)    −13.26 dB      −9.83 dB       −13.27 dB      −9.83 dB       −13.27 dB      −9.83 dB
Table 3. Imaging quality indicator of Figure 7.

Method          No MEC     PGA [27]   BP-FMSA [28]   SI-MEC (Ours)
Image Entropy   16.4974    16.4958    15.8944        15.8259