Technical Note

Spatially Variant Error Elimination for High-Resolution UAV SAR with Extremely Small Incident Angle

1 National Key Laboratory of Radar Signal Processing, Xidian University, Xi’an 710071, China
2 China Academy of Space Technology (Xi’an), Xi’an 710100, China
3 Beijing Institute of Control and Electronics Technology, Beijing 100038, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(14), 3700; https://doi.org/10.3390/rs15143700
Submission received: 31 May 2023 / Revised: 19 July 2023 / Accepted: 20 July 2023 / Published: 24 July 2023

Abstract
Airborne synthetic aperture radar (SAR) is susceptible to atmospheric disturbance and other factors that introduce position offsets of the antenna phase center, i.e., motion error. In close-range detection scenarios, the large elevation angle may make it impossible to directly observe areas near the underlying plane, resulting in observation blind spots. When the illumination elevation angle is extremely large, the influence of range-variant envelope error and phase modulations becomes more serious, and traditional two-step motion compensation (MOCO) methods may fail to provide accurate imaging. In addition, conventional phase gradient autofocus (PGA) algorithms suffer from reduced performance in scenes with few strong scattering points. To address these practical challenges, we analyze the motion error of UAV SAR under a large elevation angle and propose an improved phase-weighted estimation PGA algorithm that handles high-order range-variant motion error. Building on this algorithm, we introduce a combined focusing method that applies a statistical threshold to select the appropriate compensation strategy. Unlike traditional MOCO methods, the proposed method compensates spatially variant motion error more accurately in scenes with few strong scattering points, indicating its wider applicability. Its effectiveness is verified by simulation and real-data experimental results.

1. Introduction

Airborne synthetic aperture radar (SAR) is a valuable tool for remote sensing and mapping, providing high-resolution two-dimensional (2-D) images that improve detection performance [1,2,3,4]. Compared to traditional optical remote sensing, airborne SAR can be used for detection during the day, at night, and in adverse weather conditions, making it a flexible and reliable monitoring technology [5,6,7]. Recent advancements in unmanned aerial vehicle (UAV) technology have led to the development of micro-SAR devices that can be carried by drones, offering ease of operation, rapid deployment, and low cost, particularly on lightweight drones. UAV SAR can be used in hazardous conditions, such as during natural disasters or fires, to reduce the risks for rescue personnel [8,9,10,11,12].
A stable flight status is crucial for all kinds of airborne SAR systems to effectively synthesize the Doppler bandwidth, and motion-induced error can compromise both resolution and overall system performance [13,14,15,16]. In practice, flight paths are often non-linear due to atmospheric airflow, and the resulting motion error significantly affects the Doppler characteristics of the echo data, including the Doppler centroid and the Doppler chirp rate, which determine the azimuth position and depth of field, respectively. These errors manifest as residual range cell migration (RCM) and nonlinear phase error (NPE) of the target [17,18,19,20]. As a result, motion error cannot be neglected during airborne SAR imaging processing.
For stable aircraft, such as transport planes, the impact of motion error on SAR performance is generally negligible [21,22,23]. For drones, however, atmospheric turbulence causes significantly larger motion-induced error than for manned aircraft [24]. Moreover, due to payload limitations, drone SARs are typically designed at higher frequencies to shorten the wavelength and reduce the size and weight of the RF device, so the phase error caused by a given motion error is more significant [25]. In addition, compared with traditional airborne SAR systems, drone SAR systems have a shorter detection range and require a larger antenna pitch angle to cover the ground scene. This results in significant spatial variability of the motion error, which can greatly degrade imaging quality, particularly in high-resolution applications.
With the continuous improvement of radar resolution, the demand for enhanced accuracy in SAR motion compensation has grown [26]. Currently, the measurement accuracy of inertial navigation systems (INS) or global positioning systems (GPS) [27] often cannot meet the requirements of high-resolution UAV SAR motion compensation. This limitation restricts the use of motion compensation algorithms based purely on navigation information, and motion compensation therefore emerges as a critical factor in obtaining high-resolution images with UAV SAR systems. Previous research has primarily focused on error properties and estimation methods, such as the phase gradient autofocus (PGA) technique for spotlight mode [28] and motion compensation methods for strip mode [29]. However, these approaches have limitations in scenarios with extremely small angles of incidence. Therefore, autofocus approaches are recommended for implementing motion compensation in drone SAR systems, particularly for close-range scenarios with extremely small angles of incidence, and further research into high-resolution imaging and motion compensation algorithms tailored to such scenarios is necessary.
Various studies have investigated the SAR autofocus problem, with phase gradient autofocus (PGA) being one of the most well-known techniques [30,31]. Quality PGA (QPGA) reduces the requirement on the number of salient points in two dimensions, and increasing the number of salient points can improve the precision of phase gradient estimation and the robustness of the algorithm [32]. Different weighting strategies have been proposed to address low signal-to-clutter ratio (SCR) features in phase gradient estimation and to enhance the contribution of high-quality features. However, most existing algorithms compensate only for the spatially invariant motion error and do not address the spatial variation within the scene, which may limit their practical performance. At large elevation angles, the observation distance to the target is short, i.e., the slant range is smaller than at small elevation angles, so motion error has a correspondingly stronger impact on the echo signals, and a more precise motion compensation method is needed. Moreover, for the same ground extent, the slant-range span in the range dimension is smaller at large elevation angles than at small ones, which makes the range spatial variation of the echo more pronounced for imaging targets located in different range cells. Existing algorithms that neglect this range-variant behavior therefore fail to address the issue adequately, and a more accurate higher-order autofocusing algorithm is necessary.
This study proposes a MOCO algorithm for UAV SAR systems that addresses practical issues, including range-variant motion error and PGA failure. It first establishes a geometric model of the system and analyzes the potential issues related to motion error and inadequate scattering points. It then develops a motion compensation algorithm based on an improved phase-weighted estimation PGA algorithm, which estimates both the spatially invariant and the spatially variant phase error and performs full-aperture phase stitching. Finally, a combined autofocus method is proposed to address the issue of insufficient strong scattering points in the scene: it selects between autofocus methods based on the proportion of strong scattering points, using a threshold, to improve the spatially variant performance of MOCO. Experimental results show that the proposed method has wider applicability and higher imaging precision than traditional methods.
In summary, the innovation and contribution of this work is a MOCO strategy designed for UAV SAR high-resolution imaging at extremely small incident angles. Its core is the statistical threshold selection, which yields a combined autofocus method applicable to arbitrary imaging scenes. By selecting the appropriate processing method, the proposed approach addresses the poor performance of traditional methods when there are few strong scattering points in the imaging scene. Meanwhile, to ensure the accuracy of MOCO when the incident angle is extremely small, an improved PGA algorithm that considers the effect of high-order phase errors is utilized, which further enhances the robustness and effectiveness of the proposed approach.
The rest of this paper is organized as follows: Section 2 analyzes the airborne SAR motion error both geometrically and mathematically. Section 3 presents in detail an improved combined MOCO approach based on spatial variation, consisting of three parts: a statistical threshold selection, an improved phase-weighted estimation PGA algorithm, and an auxiliary algorithm. Section 4 provides the experimental results, including simulation and real data processing, and Section 5 concludes with a summary of the main findings.

2. Modeling

2.1. Geometric Model

The geometric model of airborne SAR imaging with motion error caused by atmospheric turbulence is shown in Figure 1.
In a spatial coordinate system based on the right-handed convention, the origin $O$ is set at the ground projection of the synthetic aperture center. The platform is assumed to fly along the $x$-axis at a constant velocity $v$ and altitude $H$; the area of interest on the ground lies in the right field of view of the aircraft. $M$ denotes an arbitrary point target in the imaging scene, $R_b$ denotes the closest range of $M$, and $\beta$ is the corresponding incident angle, i.e., $\beta = \arccos(H/R_b)$.
It is almost impossible for an aircraft to maintain a constant attitude. Unstable motion leads to uncertain error, causing the real air path to deviate from the expected path, shown by the red solid line and the blue dashed line in Figure 1, respectively. $A$ and $A'$ denote arbitrary positions of the platform on the real path and the expected one, respectively. The instantaneous positions on the real and expected paths can thus be expressed in the spatial coordinates $x$, $y$, and $z$ as $[x(\eta), y(\eta), z(\eta)]$ and $[v\eta, 0, H]$, where $\eta$ denotes the slow time and $v$ denotes the imaging velocity. Additionally, the location of $M$ can be defined as $[x_n, y_n, z_n]$. Thus, the instantaneous range from $A$ to $M$ can be expressed as follows:
$$
R(\eta, R_b) = \sqrt{[x(\eta)-x_n]^2 + [y(\eta)-y_n]^2 + [z(\eta)-z_n]^2}
= \sqrt{[x(\eta)-x_n]^2 + y(\eta)^2 + z(\eta)^2 + R_b^2 - 2y(\eta)R_b\sin\beta - 2z(\eta)R_b\cos\beta}
\approx \sqrt{[x(\eta)-x_n]^2 + R_b^2 - 2y(\eta)R_b\sin\beta - 2z(\eta)R_b\cos\beta}
\tag{1}
$$
where $\eta$ is the azimuth slow time, $R_b = \sqrt{y_n^2 + z_n^2}$, $y_n = R_b\sin\beta$, and $z_n = R_b\cos\beta$. Meanwhile, Equation (1) can be expanded by the Taylor series as
$$
R(\eta) \approx R_b + \frac{[x(\eta)-x_n]^2}{2R_b} - y(\eta)\sin\beta - z(\eta)\cos\beta
\tag{2}
$$
Thus, $R(\eta, R_b)$, corresponding to the azimuth and range dimensions, is decomposed into three components, which are explained in detail as follows:
  • The first component, $R_b$, denotes the slant range of $M$ at the aperture center and determines the range position of $M$ in the SAR image;
  • Note that the second component with respect to $R_b$ mainly depends on the $x$-axis motion status and varies with the azimuth position of $M$. This term determines the azimuth position of $M$ in the SAR image. In reality, motion error along the $x$-axis may corrupt the Doppler centroid and further shift the result. In addition, apart from phase error, the non-uniform pulse intervals caused by motion error along the $x$-axis result in azimuth non-uniform sampling;
  • The third component, affected by $y(\eta)$ and $z(\eta)$, is called the cross-path error, and it is the main component to be compensated during imaging processing. This is because the velocities along the $y$-axis and $z$-axis change as the platform approaches the target, thereby corrupting the Doppler parameters, such as the Doppler centroid and the Doppler chirp rate; the result is accordingly shifted and defocused. Moreover, it should be noted that the dependence on $\beta$ reflects the spatially variant nature of this component, leading to extra range cell migration (RCM) and non-linear phase error (NPE). As a result, this component has a great impact on processing.
By means of Equation (2), the range history error $\Delta R(\eta)$ between the real trajectory $R(\eta)$ and the expected one $R'(\eta)$ can be expressed as
$$
\begin{aligned}
\Delta R(\eta) &= R(\eta) - R'(\eta) \\
&= R_b + \frac{[x(\eta)-x_n]^2}{2R_b} - y(\eta)\sin\beta - z(\eta)\cos\beta - \left[ R_b + \frac{(v\eta - x_n)^2}{2R_b} \right] \\
&= \frac{[2v\eta - 2x_n + \Delta x(\eta)]\,\Delta x(\eta)}{2R_b} - y(\eta)\sin\beta - z(\eta)\cos\beta
\end{aligned}
\tag{3}
$$
where $\Delta x(\eta)$ denotes the position error along the $x$-axis at any moment.
Figure 2 shows the range error in the range and azimuth directions caused by $\Delta x(\eta)$ in a typical UAV SAR application with different reference ranges $R_s$, where the velocity along the $x$-axis varies from −1 m/s to 1 m/s during the entire aperture. It is apparent that the range error induced by $\Delta x(\eta)$ is too small to cause defocusing in the images, even in high-resolution cases. Thus, these errors can be ignored.
Hence, Equation (3) can be simplified as
$$
\Delta R(\eta) = -y(\eta)\sin\beta - z(\eta)\cos\beta
\tag{4}
$$
However, $\Delta R(\eta)$ is determined not only by the $y$-axis and $z$-axis motion error but also by $\beta$. This implies that $\Delta R(\eta)$ in Equation (4) is cross-coupled and spatially variant.
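To make the magnitude of this error concrete, the following minimal Python sketch evaluates Equation (4) for an assumed deviation profile; the sinusoidal $y$- and $z$-deviations and their amplitudes are illustrative stand-ins rather than the authors' measured data, while $H$, $R_b$, and the carrier frequency follow Table 1.

```python
import numpy as np

# Cross-path error of Eq. (4) for an assumed trajectory deviation.
# The deviation profiles below are illustrative, not measured data.
H = 3000.0                       # platform altitude (m), Table 1
Rb = 4000.0                      # closest slant range (m), Table 1
beta = np.arccos(H / Rb)         # incident angle, Section 2.1

eta = np.linspace(-1.0, 1.0, 1001)             # slow time (s)
y_dev = 0.05 * np.sin(2 * np.pi * 0.4 * eta)   # assumed y-axis deviation (m)
z_dev = 0.08 * np.cos(2 * np.pi * 0.3 * eta)   # assumed z-axis deviation (m)

delta_R = -y_dev * np.sin(beta) - z_dev * np.cos(beta)   # Eq. (4)

wavelength = 3e8 / 35e9                        # Ka-band carrier, Table 1
phase_err = 4 * np.pi * delta_R / wavelength   # two-way phase error (rad)
print(f"peak range error: {np.abs(delta_R).max() * 100:.2f} cm")
print(f"peak phase error: {np.abs(phase_err).max():.1f} rad")
```

Even centimeter-level deviations translate into many radians of two-way phase error at a 35 GHz carrier, which is why the cross-path term dominates the compensation problem.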

2.2. Spatially Variant Error Analysis

Before developing motion compensation algorithms, it is essential to analyze the spatially variant error that results from motion error. In UAV applications, these errors exhibit more pronounced characteristics than in traditional airborne SAR imaging. This disparity can be attributed to the large antenna pitch required to cover the ground scene, which is limited by the detection power. Figure 3 shows the relationship between antenna pitch and slant range. Generally, when the phase error resulting from spatially variant error is larger than $\pi/4$, it has a noticeable impact on the imaging process; we refer to such distances as the "near" detection area. Conversely, when the phase error within the scene is less than $\pi/4$, the impact on imaging can be disregarded, and such distances are termed the "far" detection area.
In Figure 3, the red and green labels indicate the near and far detection areas, respectively. Although the amplitudes in the red and green detection areas are the same, it is evident that the rate of change in the red detection area is significantly greater than that in the green detection area under a large elevation angle. This signifies that the phase error induced by spatially variant error in the far detection area can be disregarded.
To simplify the analysis, let $\mathbf{r}_s$ denote the reference range vector from the platform to the center of the area of interest at the antenna phase center (APC), and let $\mathbf{r}$ denote the range vector from the platform to $M$, which can be expressed as
$$
|\mathbf{r}| = |\mathbf{r}_s + \Delta\mathbf{r}|
\tag{5}
$$
where $\Delta\mathbf{r}$ is the range vector from $M$ to the scene center, whose magnitude $|\Delta\mathbf{r}|$ can be regarded as the spatially variant part of $M$ relative to the center.
To further analyze the spatial variation induced by motion error, $\sin\beta$ and $\cos\beta$ in Equation (4) can be expanded by the Taylor series as
$$
\begin{aligned}
\cos\beta &= \frac{H}{|\mathbf{r}_s + \Delta\mathbf{r}|} = \frac{H}{|\mathbf{r}_s|} - \frac{H}{|\mathbf{r}_s|^2}|\Delta\mathbf{r}| + \frac{H}{|\mathbf{r}_s|^3}|\Delta\mathbf{r}|^2 - \frac{H}{|\mathbf{r}_s|^4}|\Delta\mathbf{r}|^3 + \frac{H}{|\mathbf{r}_s|^5}|\Delta\mathbf{r}|^4 - \cdots \\
\sin\beta &= \sqrt{1 - \cos^2\beta} \approx 1 - \frac{H^2}{2|\mathbf{r}_s|^2} - \frac{H^4}{8|\mathbf{r}_s|^4} + \left( \frac{H^2}{|\mathbf{r}_s|^3} + \frac{H^4}{2|\mathbf{r}_s|^5} \right)|\Delta\mathbf{r}| - \left( \frac{3H^2}{2|\mathbf{r}_s|^4} + \frac{5H^4}{4|\mathbf{r}_s|^6} \right)|\Delta\mathbf{r}|^2 \\
&\quad + \left( \frac{2H^2}{|\mathbf{r}_s|^5} + \frac{5H^4}{2|\mathbf{r}_s|^7} \right)|\Delta\mathbf{r}|^3 - \left( \frac{5H^2}{2|\mathbf{r}_s|^6} + \frac{35H^4}{8|\mathbf{r}_s|^8} \right)|\Delta\mathbf{r}|^4 + \cdots
\end{aligned}
\tag{6}
$$
Figure 4a,b display the spatially variant error in the azimuth and range directions for different range cases based on Equation (6), where the velocities along the $y$-axis and $z$-axis vary from −1 m/s to 1 m/s during the whole aperture. It is clear that the spatially variant error increases as the reference range decreases.
The effects of spatial variation caused by motion error in UAV SAR are more pronounced than in conventional airborne SAR platforms, particularly in high-resolution applications, and thus cannot be overlooked.
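As a quick sanity check on Equation (6), the short sketch below (our own illustrative check, with $H$ from Table 1 and two assumed reference ranges) compares the exact $\cos\beta$ against its fourth-order truncation over a ±250 m swath.

```python
import numpy as np

# Numerical check of the fourth-order cos(beta) expansion in Eq. (6).
# H follows Table 1; the reference ranges and swath are assumptions.
H = 3000.0
for rs in (5000.0, 10000.0):                  # reference slant ranges (m)
    dr = np.linspace(-250.0, 250.0, 501)      # spatially variant offset (m)
    cos_exact = H / (rs + dr)
    cos_series = (H/rs - H/rs**2 * dr + H/rs**3 * dr**2
                  - H/rs**4 * dr**3 + H/rs**5 * dr**4)
    err = np.abs(cos_exact - cos_series).max()
    print(f"rs = {rs:7.0f} m: max truncation error = {err:.2e}")
```

The truncation error shrinks rapidly with increasing reference range, mirroring the trend in Figure 4: the shorter the slant range, the stronger the residual spatial variation.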

2.3. Discussion of PGA Performance

Autofocus techniques are essential for improving the depth of field in practical airborne SAR processing, because, due to the presence of system noise, the NPE caused by motion error cannot be fully compensated using motion sensors. Additionally, high-frequency errors, which are associated with fine-scale variations in the motion trajectory, cannot be accurately measured by INS or GPS. Among the nonparametric autofocus methods, PGA has gained widespread usage in most airborne SAR systems due to its excellent performance. The critical step in PGA is to obtain the NPE from the phase error gradient.
For further analysis, let the data of the $n$-th range cell after windowing and shifting be denoted by $g_n(\eta)$, its inverse Fourier transform (IFT) by $G_n(f_\eta)$, and the scatterer-dependent phase function by $\theta_n(f_\eta)$. The linear unbiased minimum variance (LUMV) estimate of the phase error gradient is then given by [27]:
$$
\hat{\dot{\phi}}(f_\eta) = \frac{\displaystyle\sum_n \operatorname{Im}\left[ G_n^*(f_\eta)\,\dot{G}_n(f_\eta) \right]}{\displaystyle\sum_n |G_n(f_\eta)|^2} = \dot{\phi}(f_\eta) + \frac{\displaystyle\sum_n |G_n(f_\eta)|^2\,\dot{\theta}_n(f_\eta)}{\displaystyle\sum_n |G_n(f_\eta)|^2}
\tag{7}
$$
where $\sum_n(\cdot)$ denotes summation over the selected range cells, $\dot{G}_n(f_\eta)$ denotes the first-order derivative of $G_n(f_\eta)$, $G_n^*(f_\eta)$ denotes the conjugate of $G_n(f_\eta)$, $\dot{\phi}(f_\eta)$ denotes the first-order derivative of the phase error $\phi(f_\eta)$, and $\dot{\theta}_n(f_\eta)$ denotes the first-order derivative of $\theta_n(f_\eta)$. Based on Equation (7), the NPE estimate can approach the true value through iterative correction. However, there are two weaknesses:
First, according to Equation (7), PGA is a discrete point-type autofocus algorithm that averages over a number of samples (i.e., range cells) and thus neglects range-dependent error during processing. However, as previously analyzed, the range-variant error becomes more prominent with a larger antenna pitch than with a smaller one. Therefore, the traditional PGA may not be suitable for such scenarios.
Second, classic PGA relies on selecting strong range cells as samples to ensure accuracy, which implies that the ability to focus an image depends entirely on the presence of dominant reflectors. This approach lacks robustness, especially for featureless areas. As depicted in Figure 5, the area within the yellow dashed box represents a scene with few strong scattering points. In most drone SAR applications, it is impossible to obtain enough features from the small area of interest. Therefore, the performance of PGA should be improved.
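For reference before moving on, a minimal Python sketch of the LUMV phase-gradient estimator in Equation (7) is given below; the discrete-derivative step and the omission of sample selection and iteration are our simplifications of a full PGA implementation.

```python
import numpy as np

def lumv_phase_gradient(G):
    """LUMV phase-gradient estimate of Eq. (7), simplified sketch.

    G: complex array (K, L) of K windowed/shifted range cells in the
    Doppler domain. Returns a length L-1 gradient; sample selection,
    integration, and iteration are omitted here.
    """
    G_dot = np.diff(G, axis=1)              # discrete first-order derivative
    num = np.sum(np.imag(np.conj(G[:, :-1]) * G_dot), axis=0)
    den = np.sum(np.abs(G[:, :-1]) ** 2, axis=0)
    return num / np.maximum(den, 1e-12)     # guard against empty bins

# The NPE estimate follows by cumulative summation of the gradient:
# phi_hat = np.concatenate(([0.0], np.cumsum(lumv_phase_gradient(G))))
```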

3. Approach

In this section, we propose a combined autofocus approach that leverages statistical techniques to address the limitations of existing autofocus methods. Our proposed approach utilizes a brightness-counting method to obtain clearer and higher-quality images. To further enhance the stability and quality of the algorithm, we introduce a statistical threshold.

3.1. Statistical Threshold Selection

As previously discussed, the accurate selection of strong scattering points is crucial for the successful implementation of PGA. In practice, however, weak or spurious scattering points are often selected, which may prevent the proper determination of suitable scattering points and reduce image accuracy. It is therefore imperative to improve the selection of strong scattering points to enhance image quality.
The brightness of a SAR image is typically represented by the intensity of the scattering points after mean quantization. To accurately quantify the intensity of each point, we developed a statistical threshold based on the statistical histogram approach. Specifically, the image brightness is partitioned into a range from 0 to a maximum value $h$, determined according to the specific situation. We then analyze the intensity distribution of the points in the image and derive a probability density function of the intensity distribution.
To identify the differences in probability density functions of target strength distribution for different scenarios, statistical analysis was performed on the target strength distribution of various scenes.
Assuming a SAR image has $m$ azimuth samples and $n$ range samples, the intensity value of each pixel is first computed and mean-quantized. In this paper, the gray interval $h$ is set to 255, and the number of pixels whose intensity falls in the $k$-th interval is denoted as $x_k$ $(0 \le k \le 255)$. Therefore:
$$
x_0 + x_1 + \cdots + x_{255} = mn
\tag{8}
$$
Then, the probability density function of the intensity distribution over the image is denoted as $f(k)$:
$$
f(k) = \frac{x_k}{mn}, \quad 0 \le k \le 255
\tag{9}
$$
Additionally, the corresponding threshold of the image can be calculated from the obtained probability density function, i.e.:
$$
T = \sum_{k=0}^{k_0} f(k)
\tag{10}
$$
where $k_0$ can be regarded as the image intensity demarcation level, which is determined based on the specific circumstances encountered in the SAR image analysis. Equation (10) gives the proportion $T$ of weak pixels: if this proportion is smaller than a value $p$, the image contains an abundance of strong scatterers, and compensation is performed with the improved phase-weighted estimation PGA algorithm. Conversely, if the proportion is larger than or equal to $p$, compensation is performed with the auxiliary algorithm.
Unlike the traditional algorithm, the imaging method with threshold selection addresses the poor performance of the traditional PGA algorithm, which relies heavily on the selection of strong scatterers in the image. This threshold-based imaging method offers improved performance and can be applied to a wider range of scenarios.
In this method, the imaging process is enhanced by counting all scattering points in the scene and analyzing the brightness distribution, as sketched below. From the above analysis, the proportion of strong scatterers in the image is obtained; by comparing this proportion with the threshold value, different compensation strategies are selected to achieve high-resolution imaging. In practice, this method improves the imaging performance of the traditional autofocusing algorithm, as verified by the real-data experiment in Section 4.
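A minimal sketch of the threshold test of Equations (8)–(10) follows; the max-normalized 256-level quantization is our stand-in for the paper's mean quantization, and the defaults $k_0 = 150$ and $p = 0.9$ follow the values used in Section 4.

```python
import numpy as np

def needs_auxiliary_algorithm(img, k0=150, p=0.9):
    """Statistical threshold of Eqs. (8)-(10), simplified sketch."""
    # Quantize pixel magnitudes to 256 gray levels (max-normalized here,
    # standing in for the mean quantization described in the text)
    gray = np.round(255.0 * np.abs(img) / np.abs(img).max()).astype(int)
    # Eq. (9): probability of each intensity level, counts x_k / (m n)
    f = np.bincount(gray.ravel(), minlength=256) / gray.size
    T = f[:k0 + 1].sum()        # Eq. (10): proportion of weak pixels
    # T >= p: few strong scatterers, so fall back to the auxiliary method
    return T >= p, T
```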

3.2. Improved Phase-Weighted Estimation PGA

After analyzing the motion error occurring at large elevation angles, it becomes apparent that the phase error resulting from the spatial variation of the motion error cannot be ignored. The conventional PGA algorithm, which assumes that motion error does not vary spatially with range, is unsuitable for imaging blind spots at large elevation angles. In this section, we propose an improved phase-weighted estimation PGA algorithm and a novel MOCO method based on it. To handle the spatially variant error, we select range cells with high contrast relative to the background.
To acquire the phase error, a spatially variant matrix is first constructed using least squares estimation. At large elevation angles, it is necessary to analyze the impact of the high-order range spatial variation. Substituting Equation (6) into Equation (4) establishes a relationship between motion error and range that encompasses the first-order to fourth-order spatially variant terms. Using the simulation parameters in Table 1, Figure 6 illustrates the phase error contributed by the range spatial variation up to the fourth order; the contour values in Figure 6 are in units of $\pi$. In general, phase errors can be ignored when they are less than $\pi/4$. The simulation results show that the phase errors of the first-order to third-order range spatial variation significantly exceed $\pi/4$, while the phase error of the fourth-order range spatial variation is considerably less than $\pi/4$. In other words, the first-order to third-order range spatial variation should not be considered negligible.
Consequently, the phase error can be modeled as a third-order polynomial with respect to range, i.e.:
$$
\Phi(\eta, R_b) = b_0(\eta) + b_1(\eta)\,\Delta r + b_2(\eta)\,\Delta r^2 + b_3(\eta)\,\Delta r^3
\tag{11}
$$
where $\eta$ represents the azimuth slow time and $\Delta r$ represents the difference between the slant range of an arbitrary target in the scene and that of the scene center. $b_0$, $b_1$, $b_2$, and $b_3$ represent the constant, first-order, second-order, and third-order coefficients of the phase error, respectively. The phase error function is spatially variant in range due to the presence of motion error.
Once the expression of the phase error is obtained, a weighted least squares estimation of $b_0(\eta)$, $b_1(\eta)$, $b_2(\eta)$, and $b_3(\eta)$ can be formulated as:
$$
\hat{B} = (A^T M A)^{-1} A^T M \Phi = \left[ \begin{matrix} \hat{b}_0(\cdot) \\ \hat{b}_1(\cdot) \\ \hat{b}_2(\cdot) \\ \hat{b}_3(\cdot) \end{matrix} \right]_{4 \times L}
\tag{12}
$$
where $\hat{b}_0(\cdot)$, $\hat{b}_1(\cdot)$, $\hat{b}_2(\cdot)$, and $\hat{b}_3(\cdot)$ are the gradient estimates of $b_0(\eta)$, $b_1(\eta)$, $b_2(\eta)$, and $b_3(\eta)$, respectively; $L$ represents the azimuth length of the sample; and $M = \operatorname{diag}[m_1, m_2, \ldots, m_K]$ denotes the contrast weighting matrix between the target and background, where $m_k$ is the contrast within the $k$-th range cell. $A$ denotes the matrix of the spatially variant range, which can be expressed as:
$$
A = \left[ \begin{matrix} 1 & \Delta r(1) & \Delta r(1)^2 & \Delta r(1)^3 \\ \vdots & \vdots & \vdots & \vdots \\ 1 & \Delta r(K) & \Delta r(K)^2 & \Delta r(K)^3 \end{matrix} \right]_{K \times 4}
\tag{13}
$$
The phase gradient estimation matrix of the selected samples is expressed as:
$$
\Phi = \left[ \begin{matrix} \hat{\phi}(1,\cdot) \\ \vdots \\ \hat{\phi}(K,\cdot) \end{matrix} \right]_{K \times L}
\tag{14}
$$
where $\hat{\phi}(k,\cdot)$ is the phase gradient estimate of the $k$-th sample cell.
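The weighted least-squares step of Equations (12)–(14) reduces to a few matrix operations; the sketch below is a minimal rendering under the assumption that the phase-gradient matrix, spatially variant ranges, and contrast weights are already available.

```python
import numpy as np

def fit_spatially_variant_coeffs(phi_grad, dr, m):
    """Weighted least-squares fit of Eq. (12), minimal sketch.

    phi_grad: (K, L) phase-gradient matrix of Eq. (14)
    dr:       (K,) spatially variant ranges of the selected cells
    m:        (K,) target-to-background contrast weights
    Returns the (4, L) coefficient gradients [b0; b1; b2; b3].
    """
    A = np.vander(dr, N=4, increasing=True)   # Eq. (13): [1, dr, dr^2, dr^3]
    M = np.diag(m)                            # contrast weighting matrix
    AtM = A.T @ M
    return np.linalg.solve(AtM @ A, AtM @ phi_grad)
```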
The phase gradient estimation algorithm above treats the phase error as spatially invariant within each range sub-aperture and estimates the phase gradient of each sub-aperture with high precision through maximum likelihood estimation (MLE). The polynomial coefficients can then be estimated using least squares estimation, thereby realizing the estimation of the spatially variant phase error in range. Based on the improved PGA algorithm, the corresponding MOCO method for high-resolution SAR can effectively achieve both phase and envelope compensation while accurately compensating for spatially variant error.
It is recommended to use the overlapped sub-apertures strategy in real data processing to enhance the accuracy and robustness of the improved phase-weighted estimation PGA algorithm. The improved PGA algorithm can achieve highly accurate phase error estimation with high operational efficiency and robustness.

3.3. Auxiliary Algorithm

In this subsection, an auxiliary algorithm based on the quadratic phase error model [33,34] is designed to address the failure of the PGA algorithm in cases where too few features are available for accurate phase error estimation. This auxiliary algorithm can effectively compensate for such a deficiency with high estimation accuracy.
Unlike the PGA algorithm, the auxiliary algorithm is independent of the number of features. It is based on a parametric model and can effectively estimate the Doppler modulation frequency while accurately compensating for the quadratic phase error. Moreover, it produces better focusing of scattering points when the motion error is small. The auxiliary algorithm achieves high-quality results with low computational complexity, as it only requires operations such as FFT, IFFT, correlation, and complex multiplication [35,36,37]. Additionally, the algorithm does not require multiple iterations and adapts easily to various imaging scenarios. It also exhibits robustness in estimating the Doppler velocity parameter.
Assuming that the residual phase error is spatially invariant, the algorithm extracts the instantaneous Doppler velocities within a sub-aperture and then doubly integrates them to obtain the complete phase function of the aperture. The specific algorithm is described as follows:
Let the estimated spatially invariant phase error be $\phi_{ne}$; the corresponding spatially invariant range migration is $\Delta R(\eta) = \phi_{ne}\lambda/(4\pi)$, and the phase function for coarse compensation is:
$$
H(\eta) = \exp\left( j\,\frac{4\pi (f_r + f_c)\,\Delta R(\eta)}{c} \right)
\tag{15}
$$
The change in the azimuth frequency modulation rate can be obtained using the auxiliary algorithm. Since the second-order integration of the frequency modulation rate with respect to time yields the phase, the phase error can be obtained by integrating this variation $\Delta K_a$ twice. Note that the first-order integral of the frequency modulation rate yields the frequency; removing its linear component helps prevent a linear offset of the image orientation. The phase error $\phi_{ne}$ can then be expressed as:
$$
\phi_{ne}(n) = 2\pi \sum_{n=0}^{N} \Delta f_a(n)\,\Delta T_a^2 = 2\pi \sum_{n=0}^{N} \sum_{m=0}^{n} \Delta K_a(m)\,\Delta T_a^2
\tag{16}
$$
where $N$ represents the number of azimuth samples, $\Delta T_a$ is the azimuth sampling interval, and $\Delta f_a$ represents the frequency shift. Through this method, the spatially invariant phase error can be accurately compensated even if the inertial navigation system (INS) fails.
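A compact sketch of the double integration in Equation (16) follows; the endpoint-based linear detrend is our simple stand-in for the linear-component removal described above, not necessarily the authors' exact procedure.

```python
import numpy as np

def phase_from_doppler_rate(dKa, dTa):
    """Double integration of Eq. (16), minimal sketch.

    dKa: (N,) estimated Doppler-rate deviation (Hz/s)
    dTa: azimuth sampling interval (s)
    """
    df = np.cumsum(dKa) * dTa                  # first integral: frequency
    # Remove the linear component so the image is not shifted in azimuth
    df -= np.linspace(df[0], df[-1], df.size)
    return 2.0 * np.pi * np.cumsum(df) * dTa   # second integral: phase (rad)
```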

3.4. Flowchart of Imaging Approach

For complex scenes, the conventional autofocus algorithm based on the image criterion may not adequately account for the spatial variability of motion error and may fail to meet focusing requirements. To address these issues, a threshold-based combined imaging algorithm for high-resolution SAR is proposed.
Drawing upon the derivation in the preceding section, a statistical threshold is selected to design a composite autofocus algorithm with a broader scope of application. This approach enables the use of the PGA algorithm in scenes where the image's brightness features exceed the threshold. In contrast, for scenes without distinct features, where the image's brightness falls below the designated threshold, such as deserts and grasslands, the auxiliary algorithm is utilized for imaging.
The flow chart of the combined autofocus algorithm proposed in this paper is shown in Figure 7. This combined focusing approach ensures both sufficient focusing accuracy in scenes with an abundance of strong points and stable focusing in scenes with sparse scatterers. Additionally, this approach has a wider range of applications and higher stability.
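In code form, the flow of Figure 7 amounts to one threshold test followed by a branch. The sketch below reuses the `needs_auxiliary_algorithm` helper sketched in Section 3.1; the two branch functions are hypothetical placeholders for the methods of Sections 3.2 and 3.3, not the authors' implementations.

```python
def combined_autofocus(image, pga_branch, auxiliary_branch, k0=150, p=0.9):
    """Combined autofocus of Figure 7, minimal sketch.

    pga_branch / auxiliary_branch: callables standing in for the improved
    phase-weighted PGA (Section 3.2) and the auxiliary algorithm
    (Section 3.3); both are placeholders here.
    """
    use_aux, T = needs_auxiliary_algorithm(image, k0, p)  # Section 3.1 test
    return auxiliary_branch(image) if use_aux else pga_branch(image)
```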

4. Experiment

In this section, the simulation and real-data processing results are presented to demonstrate the performance of the proposed approach.

4.1. Simulation Results

In this subsection, a simulation experiment with point targets is conducted to evaluate the performance of the proposed algorithm. An area of 500 × 500 m is selected for the experiment, as shown in Figure 8. Two point targets are placed at the center and at the edge of the imaging scene, respectively. The simulation parameters are shown in Table 1, and the platform flies along the $x$-axis.
Figure 9a–c illustrate the three-dimensional instantaneous velocities of the UAV used in the simulations, with varying orders of acceleration. The results demonstrate that the motion state of the platform is unstable, and thus the motion error cannot be neglected. Moreover, due to the large antenna pitch, the spatial variation is significant, as evidenced by Figure 10. Therefore, the spatial variability cannot be overlooked.
In the experiment, the image intensity demarcation level $k_0$ is set to 150, and the value $p$ is set to 0.9. The probability function $f(k)$ is then summed from 0 to 150. If the resulting proportion is greater than 0.9, it can be inferred that all point targets in the scene are sufficiently weak. This procedure provides a theoretical foundation for determining the threshold value and designing the subsequent combined autofocus algorithm.
Figure 9 displays the velocity error added in the three directions, indicating that the velocity changes are relatively uneven. Figure 11 presents the direct imaging results obtained using the conventional algorithm: the central point target achieves a relatively good focusing effect, while there is some degree of defocusing at the edge point. Figure 12 shows the imaging results using the improved algorithm; Figure 12a displays the focusing performance of the central point, and Figure 12b that of the edge point. The azimuth imaging quality values with motion error are listed in Table 2.
Comparing the focusing results in Figure 11 and Figure 12 with the azimuth focusing parameters in Table 2, both the conventional method and the proposed method focus the central point target well. However, as is evident from Figure 11b, the conventional method leaves some degree of defocusing at the edge point, and its PSLR and ISLR indices deviate significantly from the ideal values, which can be attributed to ignoring the effect of the high-order spatially variant motion error. In contrast, after applying our proposed method, as illustrated in Figure 12b, the edge point is well focused, and the PSLR and ISLR indicators are basically consistent with the ideal values. After compensation, all point targets are well focused. The simulation results indicate that both the central and edge points meet the focusing requirements, verifying the effectiveness and advantages of the proposed method.
In comparison, Figure 13a,b show the residual phase error of the traditional approach from [38] and of the proposed approach, respectively. It can be seen that, under the condition of an extremely small incident angle, the residual phase error of the traditional approach is larger than $\pi/4$ and cannot be ignored, while the residual phase error of the proposed approach is notably smaller than $\pi/4$.
Furthermore, Figure 14a,b show the imaging results of the traditional approach from [38] and of the proposed approach, respectively. It is evident that the traditional approach fails to achieve satisfactory focusing in Figure 14a due to its non-negligible residual phase error, while the approach proposed in this paper performs well, which further validates the effectiveness and innovativeness of our work.

4.2. Real Data Experiment Results

The proposed compensation method is highly effective for stripmap SAR motion compensation and can accurately estimate spatially variant phase error. The performance of the autofocus algorithm, as well as its comparison with the traditional autofocus algorithm [38], was then verified using measured data obtained from a flight experiment with a specific type of radar. The primary system parameters used in the experiment are provided in Table 1.
Figure 15a,b display the imaging results for large scenes processed using the traditional and proposed algorithms, respectively. Furthermore, Figure 16a,b show the imaging results for small scenes, with selected areas of interest magnified.
Figure 16a shows the imaging results processed using the traditional approach, which produces defocused results at the edge of the scene and relatively poor focusing of strong scattering points. These results suggest that spatially variant motion error still has a significant impact on imaging. Figure 16b presents the imaging results obtained using the proposed spatially variant processing approach, which displays significant improvements in the focusing effect, especially at the edges. The results confirm the effectiveness of the proposed algorithm in compensating for spatially variant motion error and improving the overall imaging quality.
To further evaluate the autofocusing performance of the proposed algorithm, the entropy value was utilized as a quantitative index of image focusing. Generally, the more blurred the image, the greater the uncertainty and the higher the image entropy. The entropy values of the images processed using the traditional approach and the proposed approach were calculated to be 3.6019 and 3.5419, respectively, indicating that the proposed algorithm effectively corrects the spatially variant phase error. The enhancement in the focusing effect, particularly at the edges, confirms that the proposed algorithm is suitable for this scene.
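For reproducibility, the sketch below shows one common definition of image entropy computed from the normalized intensity histogram; the bin count is our assumption, and the exact definition used by the authors may differ.

```python
import numpy as np

def image_entropy(img, bins=256):
    """Entropy of a magnitude image from its intensity histogram."""
    hist, _ = np.histogram(np.abs(img), bins=bins)
    prob = hist / hist.sum()
    prob = prob[prob > 0]            # drop empty bins before the log
    return float(-np.sum(prob * np.log(prob)))

# A lower entropy indicates a better-focused image, consistent with the
# drop from 3.6019 to 3.5419 reported above.
```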

5. Conclusions

Motion error, largely induced by atmospheric disturbance, poses a significant challenge in the design of remote detection and imaging methods for UAV SAR applications. In this study, we analyzed the practical issues arising from motion error, including envelope error, phase modulations, and NPE, to establish the imaging model of airborne SAR. A statistical threshold was utilized for feature selection, and we developed an improved phase-weighted estimation PGA algorithm that accurately approximates the phase error induced by spatial variation. A composite autofocus approach was then developed by combining the improved phase-weighted estimation PGA algorithm with an auxiliary algorithm, building upon the threshold; the auxiliary algorithm compensates for scenes with few features. The effectiveness and applicability of the approach are verified through both simulation and real data experiments.

Author Contributions

Conceptualization, X.Z., S.T. and Y.R.; methodology, X.Z., S.T., Y.R. and J.H.; software, X.Z. and S.T.; validation, S.T., T.J., J.Z., Y.L. and Q.D.; writing—original draft preparation, X.Z. and S.T.; writing—review and editing, Y.R., C.J. and J.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 61971329, Grant 61701393, Grant 62001062, and Grant 61671361; in part by the Natural Science Basis Research Plan in Shaanxi Province of China under Grant 2020ZDLGY02-08; and in part by the Fundamental Research Funds for the Central Universities under Grant ZYTS23153.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cumming, I.G.; Wong, F.H. Digital Processing of Synthetic Aperture Radar Data: Algorithms and Implementation; Artech House: Boston, MA, USA, 2005; pp. 113–122.
  2. Spudis, P.D. Mini-SAR: An Imaging Radar on India's Chandrayaan-1 Mission to the Moon; NASA: Houston, TX, USA, 2010.
  3. Prats-Iraola, P.; Scheiber, R.; Rodriguez-Cassola, M.; Mittermayer, J.; Wollstadt, S.; De Zan, F.; Brautigam, B.; Schwerdt, M.; Reigber, A.; Moreira, A. On the Processing of Very High Resolution Spaceborne SAR Data. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6003–6016.
  4. Fornaro, G.; Franceschetti, G.; Perna, S. Motion compensation errors: Effects on the accuracy of airborne SAR images. IEEE Trans. Aerosp. Electron. Syst. 2005, 41, 1338–1352.
  5. Ren, Y.; Tang, S.; Guo, P.; Zhang, L.; So, H.C. 2-D Spatially Variant Motion Error Compensation for High-Resolution Airborne SAR Based on Range-Doppler Expansion Approach. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–13.
  6. Chen, J.; Liang, B.; Zhang, J.; Yang, D.-G.; Deng, Y.; Xing, M. Efficiency and Robustness Improvement of Airborne SAR Motion Compensation With High Resolution and Wide Swath. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
  7. Wiley, C. Synthetic aperture radars. IEEE Trans. Aerosp. Electron. Syst. 1985, 21, 440–443.
  8. Fransson, J.E.S.; Walter, F.; Ulander, L.M.H. Estimation of forest parameters using CARABAS-II VHF SAR data. IEEE Trans. Geosci. Remote Sens. 2000, 38, 720–727.
  9. Li, N.; Niu, S.; Guo, Z.; Liu, Y.; Chen, J. Raw Data-Based Motion Compensation for High-Resolution Sliding Spotlight Synthetic Aperture Radar. Sensors 2018, 18, 842.
  10. Chang, F.; Li, D.; Dong, Z. Elevation Spatial Variation Error Compensation in Complex Scene and Elevation Inversion by Autofocus Method in GEO SAR. Remote Sens. 2021, 13, 2916.
  11. Hovanessian, S.A. Introduction to Synthetic Array and Imaging Radar; Artech House: Dedham, MA, USA, 1980; pp. 53–77.
  12. Bamler, R.; Hartl, P. Synthetic aperture radar interferometry. Inverse Probl. 1998, 14, R1.
  13. Kirk, J.C. Motion compensation for synthetic aperture radar. IEEE Trans. Aerosp. Electron. Syst. 1975, 3, 338–348.
  14. Curlander, J.C.; McDonough, R.N. Synthetic Aperture Radar: Systems and Signal Processing; Wiley: New York, NY, USA, 1991; pp. 178–212.
  15. Carrara, W.G.; Goodman, R.S. Spotlight Synthetic Aperture Radar: Signal Processing Algorithms; Artech House: Boston, MA, USA, 1995; pp. 245–254.
  16. Farrell, J.L.; Mims, J.H.; Sorrell, A. Effects of navigation errors in maneuvering SAR. IEEE Trans. Aerosp. Electron. Syst. 1984, 5, 363–400.
  17. Xing, M.; Jiang, X.; Wu, R.; Zhou, F.; Bao, Z. Motion compensation for UAV SAR based on raw radar data. IEEE Trans. Geosci. Remote Sens. 2009, 47, 2870–2883.
  18. Niho, Y.G. Phase Difference Auto Focusing for Synthetic Aperture Radar Imaging. U.S. Patent 4,999,635, 12 March 1991.
  19. Chen, J.; Xing, M.; Sun, G.; Li, Z. A 2-D Space-Variant Motion Estimation and Compensation Method for Ultrahigh-Resolution Airborne Stepped-Frequency SAR With Long Integration Time. IEEE Trans. Geosci. Remote Sens. 2017, 55, 6390–6401.
  20. Tang, S.; Zhang, L.; Guo, P.; Liu, G.; Sun, G.C. Acceleration Model Analyses and Imaging Algorithm for Highly Squinted Airborne Spotlight-Mode SAR with Maneuvers. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 1120–1131.
  21. Moreira, A.; Prats-Iraola, P.; Younis, M.; Krieger, G.; Hajnsek, I.; Papathanassiou, K.P. A tutorial on synthetic aperture radar. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36.
  22. Yi, T.; He, Z.; He, F.; Dong, Z.; Wu, M.; Song, Y. A Compensation Method for Airborne SAR with Varying Accelerated Motion Error. Remote Sens. 2018, 10, 1124.
  23. Fornaro, G. Trajectory deviations in airborne SAR: Analysis and compensation. IEEE Trans. Aerosp. Electron. Syst. 1999, 35, 997–1009.
  24. Khaikin, V.B.; Radzikhovsky, V.N.; Kuzmin, S.E. A compact highly sensitive radiometer for thermal sounding of atmosphere in 5 MM band. In Proceedings of the 2008 Microwaves, Radar and Remote Sensing Symposium, Kiev, Ukraine, 22–24 September 2008; pp. 66–70.
  25. Tang, S.; Zhang, L.; Guo, P.; Zhao, Y. An omega-K algorithm for highly squinted missile-borne SAR with constant acceleration. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1569–1573.
  26. Ren, Y.; Tang, S.; Dong, Q. An Improved Spatially Variant MOCO Approach Based on an MDA for High-Resolution UAV SAR Imaging with Large Measurement Errors. Remote Sens. 2022, 14, 2670.
  27. Li, L.; Asif, R.; Mao, S. Improvement of rank one phase estimation (ROPE) autofocusing technique. In Proceedings of the ICSP '98, 1998 Fourth International Conference on Signal Processing, Seattle, WA, USA, 6–10 July 1998; Volume 2, pp. 1461–1464.
  28. Wahl, D.E.; Eichel, P.H.; Ghiglia, D.C. Phase Gradient Autofocus: A Robust Tool for High Resolution SAR Phase Correction. IEEE Trans. Aerosp. Electron. Syst. 1994, 30, 827–835.
  29. Fan, B.; Jiang, Z.; Chen, L.; Li, H.; He, Y. Motion Compensation of UAV Airborne High Resolution Stripmap SAR. Aero Weapon. 2019, 26, 50–55.
  30. Zhu, D.; Jiang, R.; Mao, X.; Zhu, Z. Multi-Subaperture PGA for SAR Autofocusing. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 468–488.
  31. Chen, J.; Yu, H.; Xu, G. Airborne SAR Autofocus Based on Blurry Imagery Classification. Remote Sens. 2021, 13, 3872.
  32. Chan, H.L.; Yeo, T.S. Noniterative quality phase-gradient autofocus (QPGA) algorithm for spotlight SAR imagery. IEEE Trans. Geosci. Remote Sens. 1998, 36, 1531–1539.
  33. Wang, G.; Zhang, M.; Huang, Y.; Zhang, L.; Wang, F. Robust Two-Dimensional Spatial-Variant Map-Drift Algorithm for UAV SAR Autofocusing. Remote Sens. 2019, 11, 340.
  34. Zhang, L.; Hu, M.; Wang, G. Range-dependent map-drift algorithm for focusing UAV SAR imagery. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1158–1162.
  35. Bezvesilniy, O.; Gorovyi, I.; Vavriv, D. Estimation of phase errors in SAR data by local-quadratic map-drift autofocus. In Proceedings of the 2012 13th International Radar Symposium, Warsaw, Poland, 23–25 May 2012; pp. 376–381.
  36. Huang, Y.; Liu, F.; Chen, Z.; Li, J.; Hong, W. An improved map-drift algorithm for unmanned aerial vehicle SAR imaging. IEEE Geosci. Remote Sens. Lett. 2021, 18, 1–5.
  37. Zhu, D. SAR signal based motion compensation through combining PGA and 2-D map drift. In Proceedings of the 2009 2nd Asian-Pacific Conference on Synthetic Aperture Radar, Xi'an, China, 26–30 October 2009; pp. 435–438.
  38. Zhang, L.; Qiao, Z.; Xing, M.; Yang, L.; Bao, Z. A robust motion compensation approach for UAV SAR imagery. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3202–3218.
Figure 1. SAR geometric model with motion error.
Figure 2. The range error caused by the x-axis motion error. (a) Rs = 10 km. (b) Rs = 5 km.
Figure 3. The relationship of antenna pitch with respect to the slant range.
Figure 4. Spatially variant error: (a) Rs = 10 km. (b) Rs = 5 km.
Figure 5. Focusing quality of the classic PGA for a featureless scene.
Figure 6. Phase error from first-order to fourth-order range spatial variation: (a) first-order; (b) second-order; (c) third-order; (d) fourth-order.
Figure 7. Flow chart of the combined autofocusing algorithm.
Figure 8. Simulation scene.
Figure 9. Simulated velocities along three axes: (a) x-axis; (b) y-axis; (c) z-axis.
Figure 10. Spatially variant error caused by motion error.
Figure 11. Imaging results of the traditional approach: (a) center point; (b) edge point.
Figure 12. Imaging results of the proposed approach: (a) center point; (b) edge point.
Figure 13. Residual phase error of (a) the traditional approach; (b) the proposed approach.
Figure 14. Imaging results of (a) the traditional approach; (b) the proposed approach.
Figure 15. Real-data imaging results in large scenes: (a) traditional algorithm; (b) proposed algorithm.
Figure 16. Real-data imaging results in small scenes: (a) traditional algorithm; (b) proposed algorithm.
Table 1. Simulation parameters.

Parameter | Value
Carrier Frequency | 35 GHz
Pulse Repetition Frequency | 625 Hz
Bandwidth | 1200 MHz
Pulse Width | 0.54 μs
Sampling Frequency | 1440 MHz
Reference Slant Range | 4000 m
Height | 3000 m
Squint Angle | 0 rad
Velocity | 40 m/s
Table 2. Azimuth imaging quality with motion error (IRW in m; PSLR and ISLR in dB).

Target | Proposed Algorithm (IRW / PSLR / ISLR) | Traditional Algorithm (IRW / PSLR / ISLR)
Center Point | 0.20 / −13.84 / −10.28 | 0.28 / −13.56 / −10.67
Edge Point | 0.20 / −13.78 / −10.07 | 0.21 / −10.53 / −8.58