1. Introduction
Synthetic Aperture Radar (SAR) has been widely used in military reconnaissance, terrain mapping, environmental monitoring, and other fields [1,2] because of its all-weather capability. Stripmap SAR imaging algorithms assume that the radar platform moves uniformly in a straight line, but in reality, due to factors such as airflow disturbance and piloting error, the actual track is an irregular curve. The resulting phase errors in the azimuth and range directions cause defocusing, distortion, and other phenomena that degrade image quality [3,4]. Flight trajectory and attitude data provided by INS and GNSS are usually used for motion compensation [5,6] to remove the dominant errors. However, because of the limited accuracy of the inertial data and the vibration of the payload itself, motion compensation cannot eliminate all errors. Autofocus technology addresses this problem by exploiting the characteristics of the echo data themselves to compensate for residual motion errors so that the image can be focused further [7].
Figure 1 shows where motion compensation (MoCo) [8] and autofocus technology are typically applied, and to what extent they improve image quality.
According to how the motion error is modeled, standard autofocus algorithms are usually divided into two categories: parametric methods and non-parametric methods.
(1) Parametric autofocus methods: Parametric methods build a parametric model of the motion errors and then estimate the parameters of that model, mainly via Map-Drift (MD) [9] and Shift-And-Correlate (SAC) [10]. These algorithms are simple and fast, but they are effective only for quadratic phase errors, not for linear or higher-order (above second-order) phases [11]. In addition, there are autofocus algorithms based on image optimization, which search for the set of motion-error model parameters that yields the best image quality under a criterion chosen according to the application. The main criteria include contrast optimization (CO) [12], sharpness [13,14,15], and minimum entropy (ME) [16,17,18]. Since there is no analytical mapping between image quality and the model parameters, these methods require an iterative search to satisfy the optimization constraints: the higher the phase-error order, the greater the computational complexity and the longer the run time.
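As a toy illustration of the image-optimization idea (not code from any of the cited papers; all signal parameters below are made up), the sketch defocuses a single point target with a quadratic phase error and recovers the error coefficient by a brute-force minimum-entropy search:

```python
import numpy as np

# Minimum-entropy parametric autofocus sketch: search one quadratic
# phase-error coefficient; the coefficient minimizing image entropy
# should match the injected error. Parameters are illustrative.
N = 256
t = np.linspace(-0.5, 0.5, N, endpoint=False)
chirp = np.exp(1j * np.pi * 100 * t**2)            # ideal azimuth LFM signal
ref = np.conj(chirp)                                # matched-filter reference

true_a2 = 40.0                                      # injected quadratic coefficient
raw = chirp * np.exp(1j * np.pi * true_a2 * t**2)   # defocused echo

def entropy(img):
    p = np.abs(img)**2
    p = p / p.sum()
    return -np.sum(p * np.log(p + 1e-12))

def focus(a2):
    # compensate candidate error, then compress in azimuth
    comp = raw * np.exp(-1j * np.pi * a2 * t**2) * ref
    return np.fft.fftshift(np.fft.fft(comp))

candidates = np.linspace(0, 80, 161)                # 0.5 step
best_a2 = min(candidates, key=lambda a2: entropy(focus(a2)))
print(best_a2)   # -> 40.0, the injected coefficient
```

Even this one-parameter search evaluates the image 161 times; each additional error order multiplies the search space, which is the time-cost growth noted above.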
(2) Non-parametric autofocus methods: Non-parametric methods do not require a parametric model of the motion error because they extract the phase or phase gradient of the motion error directly from the raw radar data. The main representative is the Phase Gradient Algorithm (PGA) [19,20,21], which does not depend on a specific model and estimates the phase error from the characteristics of the data themselves. In theory, it can estimate phase errors of second order and above through iteration. Because of this advantage, many improved variants have been studied in recent years, but some problems remain unsolved. For example, the Stripmap Phase Gradient Algorithm (SPGA) [22,23] introduces PGA into strip-mode SAR imaging. Although it is simple and easy to implement, it does not remedy the essential shortcoming of PGA: it still cannot directly estimate the linear phase error, and a linear phase error causes the target to drift [24,25]. SPGA extracts and splices the phase errors of point targets at different positions. When the phase history of the image is long, the linear terms of the phase error in different regions differ, producing discontinuities at the phase-stitching boundaries and large errors in the overlapped parts; this leaves the image unfocused and requires several iterations to suppress [26]. On the one hand, multiple iterations consume considerable computing resources and reduce focusing efficiency, making real-time processing difficult. On the other hand, even after iteration, the residual linear error cannot be guaranteed to be at the same level globally. As a result, although the image is focused, it is distorted in the azimuthal direction, which causes problems in applications such as SAR scene-matching navigation [27,28], where image distortion degrades navigation accuracy. To mitigate these problems, SPGA normally subtracts the mean of the phase-error gradient between two adjacent points, averages the phase error over the overlapped part when stitching, and then relies on iteration to suppress the linear error as much as possible. However, the effectiveness of this approach is very limited.
In this paper, we analyze the effect of the linear phase on SPGA, namely that it causes image distortion and increases the number of iterations. We propose a modified autofocus algorithm based on SPGA that can remove the linear phase error. We use the continuity of the phase error to estimate and restore the true relative position of each strong point target so that the linear error is preserved in the phase-error extraction stage. Because the true position of each strong point can be estimated, the phase error can be recovered to the greatest extent, so no phase discontinuity appears in the phase-splicing stage; in theory, a single iteration achieves the ideal result. Compared with the traditional SPGA algorithm, the proposed algorithm uses only a small additional computational cost to achieve iteration-free focusing without image distortion, which is helpful for SAR real-time autofocus imaging.
2. Theory
As shown in Figure 2, the trajectory of the aircraft is curved by airflow disturbance or self-vibration, which introduces a phase error into the azimuth data that varies with azimuth time. Assume that a residual phase error remains after MoCo. To explore why SPGA requires multiple iterations and cannot estimate the linear phase, we start from the mathematical process of point-target focusing and defocusing. Let the peak of a point target lie at a given azimuth position, with the following expression before pulse compression: When it is disturbed by the phase error, there is:
As shown in Figure 3, the introduction of a phase error affects the pulse-compression result in several ways, such as main-lobe broadening, side-lobe rising, side-lobe asymmetry, and peak-point drift. These phenomena appear in the image as defocusing and distortion. When the exact working parameters, such as the chirp rate K, are known, this process is reversible; that is, we can extract the phase error through inverse compression and dechirp operations and compensate it into the original formula to obtain the focused pulse-compressed signal again (the braces denote the pulse-compression operation).
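The reversibility argument can be sketched in a few lines: multiplying the error-contaminated azimuth chirp by the conjugate of the ideal reference (the dechirp step) cancels the LFM term and leaves the phase error itself. The chirp rate and error polynomial below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Dechirp sketch: conjugate multiplication by the ideal chirp strips
# the LFM term, leaving only the injected phase error.
N = 512
t = np.linspace(-0.5, 0.5, N, endpoint=False)
K = 200.0                                   # assumed azimuth chirp rate
ideal = np.exp(1j * np.pi * K * t**2)

phase_err = 6.0 * t**3 + 2.0 * t            # cubic + linear error (radians)
echo = ideal * np.exp(1j * phase_err)       # error-contaminated signal

dechirped = echo * np.conj(ideal)           # LFM term cancels exactly
recovered = np.unwrap(np.angle(dechirped))

print(np.max(np.abs(recovered - phase_err)))   # numerically ~0
```

With the phase error in hand, compensating it back into the echo restores the focused compression result, which is exactly the loop that autofocus closes.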
The fundamental idea is to inversely solve for the phase-error gradient by using the point-target characteristics in the image, obtain the phase-error surface through integration and fitting, and focus the image by compensating for the phase errors.
In theory, any error of second order or above can be estimated through iteration, but the linear phase is difficult to estimate because it appears as a shift of the point target in the azimuth direction, which prevents us from obtaining the correct peak position. Carrying out the Legendre polynomial expansion of the phase error, keeping the first-order term explicitly and collecting the higher-order terms in a remainder, we obtain: Substituting Equation (3) into Equation (2), we obtain:
By comparing Equations (2) and (4), we see that because of the linear component of the phase error, the peak point is shifted away from its real position by an unknown amount. Since this shift is an indeterminate constant, we cannot confirm the true position of the peak, and the extracted phase-error information loses its linear component. That is, if the correct point-target position cannot be selected, the linear error cannot be extracted; and if the linear error cannot be estimated, the target cannot be returned to its true position. The two problems affect each other.
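The peak drift caused by a linear phase can be demonstrated with the standard Fourier shift property (a generic illustration with made-up parameters, not the paper's code):

```python
import numpy as np

# A linear phase error shifts the compressed peak in azimuth, so the
# apparent point-target position no longer equals its true position.
N = 256
n = np.arange(N)
t = n / N
K = 80.0
chirp = np.exp(1j * np.pi * K * t**2)       # illustrative azimuth chirp

def compress(sig):
    # matched-filter azimuth compression
    return np.abs(np.fft.fft(sig * np.conj(chirp)))

m = 7                                        # integer cycles of linear phase
shifted = chirp * np.exp(1j * 2 * np.pi * m * t)

print(np.argmax(compress(chirp)))     # 0 : true position
print(np.argmax(compress(shifted)))   # 7 : peak drifted by m bins
```

Since only the peak position changes (the main-lobe shape is untouched), the drift is invisible to any estimator that re-centers on the measured peak, which is precisely why plain PGA/SPGA discards the linear term.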
As shown in Figure 4, suppose there are two point targets whose synthetic apertures partially overlap. The green solid line is the true phase error passing through the two point targets. According to Equations (3) and (4), the phase errors estimated by SPGA for the two targets (blue and yellow dotted lines) lack the linear components, and the concatenated phase-error curve (red solid line) therefore differs considerably from the actual phase error.
3. Method
In processing a large amount of real data, we found that under most conditions the linear phase error shifts a point target only within a limited range in the azimuthal direction, which means that the true position of a point target lies near the selected point. As long as we traverse all the points within a certain range around the selected point, the point closest to the true position can be screened out under suitable conditions. According to the continuity of the phase error, the motion errors of the overlapped parts of two adjacent synthetic apertures should be the same. When the selected point is correct, the coincidence of the phase errors over the overlapped part is highest; that is, the phase difference over the overlap of the two phase-error functions is smallest, or equivalently, the correlation between the phase-error functions is greatest. Therefore, for each selected point target, the points within a certain azimuthal range around it are taken as candidates for its true position. Error matching over the overlapped part of the phase errors of adjacent points then screens out the candidate closest to the true position, from which the linear phase error of that point can be extracted.
Figure 5 is a flow chart of our approach.
3.1. Point Selection
The extraction of the linear phase depends on the phase-error function over the overlapped part of the synthetic apertures of two adjacent points. Therefore, in addition to ensuring good point-target characteristics, point selection should distribute the selected points as evenly as possible over the entire imaging region so that adjacent points have overlapping synthetic apertures and the condition for phase-error matching is met.
In many cases, because of the different scattering characteristics of ground objects, the strong points in an image may be concentrated in a few range gates or range cells. Using only the maximum peak of each range gate or range cell as the point target filters out some ideal point targets, so the selected targets are not evenly distributed in the azimuthal direction. Therefore, this paper uses CFAR detection results [29,30] instead of the range-cell maximum for preliminary screening and uses a contrast rule for secondary filtering; that is, the amplitude of the signal generated by an ideal point target is considered consistent within the synthetic aperture, as shown in Equation (5), where the integration interval is the synthetic-aperture time (or the signal duration at that point if it is shorter than one synthetic aperture), and k is the number of points retained by the two-dimensional selection.
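A minimal sketch of this two-stage screening follows, with made-up window sizes and thresholds; the cell-averaging CFAR and the peak-to-local-mean contrast measure here are simplifications for illustration, not the paper's exact rules:

```python
import numpy as np

# Stage 1: 1-D cell-averaging CFAR over an amplitude profile.
# Stage 2: a simple contrast check on each CFAR hit.
rng = np.random.default_rng(0)
profile = rng.rayleigh(1.0, 400)          # clutter amplitude profile
profile[[60, 150, 290]] += 15.0           # inject three strong point targets

def ca_cfar(x, guard=2, train=16, factor=4.0):
    """Flag cells exceeding factor * local clutter mean."""
    hits = []
    for i in range(train + guard, len(x) - train - guard):
        left = x[i - guard - train : i - guard]
        right = x[i + guard + 1 : i + guard + 1 + train]
        if x[i] > factor * np.mean(np.concatenate([left, right])):
            hits.append(i)
    return hits

def local_contrast(x, i, half=8):
    """Peak-to-local-mean ratio used as the secondary filter."""
    win = x[max(0, i - half) : i + half + 1]
    return x[i] / (np.mean(win) + 1e-12)

points = [i for i in ca_cfar(profile) if local_contrast(profile, i) > 3.0]
print(points)   # the three injected targets survive both filters
```

Because the CFAR threshold adapts to the local clutter level, strong points in different range cells are detected on equal footing, which keeps the selected targets spread over the scene rather than clustered in the brightest range gates.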
3.2. Phase Error Extraction
This part is not the key point of this article; it briefly introduces the basic concepts and purposes of the operations involved in phase-error extraction and outlines the process from point target to phase error. For a more detailed introduction, see [7].
(1) Intercepting: Once a point target is determined, data of one synthetic-aperture length are intercepted in the azimuthal direction with the point target at the center. When fewer data than one synthetic aperture are available, only the available part is kept.
(2) Inverse pulse compression: This is the inverse process of pulse compression. Let the azimuthal coordinate of the image be u and the range coordinate be v, and consider the signal intercepted and centered at the selected point. With the reference function of the range gate defined in the azimuth frequency domain and t denoting the azimuth time-domain coordinate, the signal after inverse pulse compression satisfies the following equation:
(3) Dechirp: The inversely compressed signal is the superposition of the chirp signal carrying the phase error and the chirp signals of the surrounding point targets. "Dechirp" removes the LFM signal and keeps the phase-error information. The dechirped signal is given by the following (a superscript * denotes the complex conjugate):
(4) Windowing: The dechirped signal contains all the phase-error information of the selected point target, but it is also affected by the information of other nearby point targets and by noise, so windowing is needed to suppress these effects as much as possible.
3.3. Real Location Estimation of Point Target
There are N selected point targets from the point-selection step. To simplify the analysis, consider only two adjacent selected point targets (hereinafter referred to as the main points) in the same range gate, whose synthetic apertures overlap. It is assumed that the two main points have been shifted within a limited number of points in the azimuthal direction by the linear phase error (the bound W on this range is set as an even number for ease of calculation). Denote the synthetic-aperture time, the range of azimuth time, and the azimuth chirp rate K. Without the linear phase-error component, the azimuth times of the two main points are known and their azimuth peaks fall at the corresponding positions; AB is the phase-overlap region of the two main points. When a certain phase error is experienced, the two main points after interception and inverse compression are:
Assume that the times corresponding to the true positions of the points u1 and u2 are known. The ideal reference signals generated from them are:
After "dechirp" [7] using Equations (8) and (9), and introducing shorthand for the two dechirped signals, the phase-error functions can be obtained:
The conjugate multiplication of the overlapped parts is denoted as:
According to the continuity of the phase error, the two phase-error functions should be highly coincident over the overlap. Their interrelationship is shown in Figure 6, which also implies the following relationship, in which the operator denotes taking the phase:
As can be seen from the above equation, the phase of the conjugate product is a line with a known slope, and satisfying the equation requires that slope condition to hold. In practice, however, because of azimuth sampling the measured peak position is discrete, whereas the true position is a continuous variable; the residual offset therefore floats within one sampling interval, and an exact match is very rare. Under the above assumptions, when the two candidates are the real positions, let: Then: where the slope involves the pulse repetition frequency, namely the azimuth sampling rate.
That is, after conjugate multiplication of the phase errors of the overlapped parts of two adjacent points, the resulting phase is linear, and its slope should satisfy the above equation. Accordingly, taking the measured peak as the origin, the points within the range W on either side (W is called the redundant range of selection points) are traversed with a step length of 1, and the corresponding azimuth times are recorded. The same operation is then repeated for the second point, and the candidate combination that satisfies Equation (15) with the minimum residual is taken as the estimated true positions of the two points. This can be expressed as a matrix:
where the two operators are defined as finding the slope and the phase of an element, respectively.
In the matrix, the smallest element satisfying Equation (15) is located at row index i and column index j, which give the real positions of the two points. For ease of understanding, Figure 7 shows a typical relationship among the candidate combinations. The minimum value on the curve occurs at index 22 and satisfies Equation (15), so here i is 22 and j is 3. If no element of the matrix satisfies Equation (15), the range W is too small, and W and the traversal step size can be increased appropriately.
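The search in this subsection can be sketched as follows. This toy version scores each candidate pair by the coincidence of the overlapped phase-error estimates (a simplification of the Equation (15) slope criterion); the chirp rate, aperture length, error curve, and drift values are illustrative assumptions:

```python
import numpy as np

# Real-position search sketch: a wrong candidate center adds a linear
# term to the extracted phase error, so the candidate pair whose
# overlapped estimates coincide best marks the true positions.
N = 1024
t = np.arange(N) / N
K = 300.0                                   # assumed azimuth chirp rate
phi = 10 * np.sin(2 * np.pi * 1.3 * t)      # simulated "true" phase error

ap = 300                                    # aperture length in samples
t1_idx, t2_idx = 300, 500                   # true peak positions
d1, d2 = 3, -2                              # unknown drifts from linear error

def extracted(center_idx, cand_idx):
    """Phase error seen when the aperture truly centered at center_idx
    is dechirped with a reference centered at candidate cand_idx:
    the mismatch adds a linear term 2*pi*K*delta*(t - t_center)."""
    i0, i1 = center_idx - ap // 2, center_idx + ap // 2
    delta = (cand_idx - center_idx) / N
    est = np.full(N, np.nan)
    est[i0:i1] = phi[i0:i1] + 2 * np.pi * K * delta * (t[i0:i1] - t[center_idx])
    return est

overlap = slice(t2_idx - ap // 2, t1_idx + ap // 2)   # shared aperture region
W = 4                                                 # redundant search range

best, best_pair = np.inf, None
for c1 in range(t1_idx + d1 - W, t1_idx + d1 + W + 1):
    for c2 in range(t2_idx + d2 - W, t2_idx + d2 + W + 1):
        diff = extracted(t1_idx, c1)[overlap] - extracted(t2_idx, c2)[overlap]
        score = np.mean(np.abs(diff))       # coincidence of overlapped errors
        if score < best:
            best, best_pair = score, (c1, c2)

print(best_pair)   # recovers the true centers (300, 500)
```

The exhaustive search costs only (2W+1)^2 cheap overlap comparisons per point pair, which is the "small computational cost" traded for avoiding iteration.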
3.4. Phase Error Estimation
Taking one of the points as an example, the Legendre polynomial expansion of the phase error in Equation (10) at the estimated true position is performed [31], listing the first-order term explicitly.
The phase-error estimate can then be obtained by differentiating and integrating the phase of the above equation: It can be seen that the phase error estimated at the real position of the point target completely retains the linear error of the system, but a new linear error term is introduced. Compensating the above formula into the signal, we obtain:
The above algorithm addresses only the error matching between two adjacent points. In actual processing, the main points are sorted from largest to smallest by azimuth index, and the above operations are carried out on each pair of adjacent points. Except for the first and last points, every point is therefore estimated twice; that is, two sets of phase-error gradients are obtained when the estimation is accurate. The two sets should be roughly the same, so their average can be taken.
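The averaging of doubly estimated gradients can be sketched as follows (a hypothetical, noise-free toy layout in which two per-aperture gradient estimates overlap on a global azimuth grid):

```python
import numpy as np

# Two overlapping gradient estimates are averaged where both are
# defined, then integrated to a single phase-error curve.
N = 200
g_true = np.cos(np.linspace(0, 4, N))          # toy phase-error gradient

est1 = np.full(N, np.nan); est1[0:120] = g_true[0:120]     # aperture 1
est2 = np.full(N, np.nan); est2[80:200] = g_true[80:200]   # aperture 2

merged = np.nanmean(np.vstack([est1, est2]), axis=0)  # average on overlap 80:120
phase = np.cumsum(merged)                      # integrate gradient -> phase

print(np.allclose(merged, g_true))             # True for this noise-free toy
```

With real data the two estimates differ by noise, and the overlap average suppresses it; because the linear component was preserved upstream, no discontinuity appears at the stitch.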
3.5. Phase Error Correction
It can be observed from Equation (14) that although the original linear phase error is eliminated, the main points can shift by at most two extra points. If the original shift value is much larger than two points, the benefit of our method is obvious; if it is less than two points, the method introduces unnecessary error. In addition, if N main points are selected, with their specific azimuth times and peak times recorded, the shift of each part will be inconsistent because these offsets differ after the phase errors are combined.
Therefore, two correction methods are proposed:
(1) Interpolation: According to Equation (20), the extra part of the estimated phase error is positively correlated with the sub-sample offset, whose maximum value is inversely proportional to the azimuth sampling rate. Therefore, after estimating the true position of the point target, the original data can be interpolated and the above operation repeated for a more precise estimate, thus reducing the upper bound of the offset and bringing the additional shift down to an acceptable range.
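The interpolation remedy rests on a standard property: upsampling (here by FFT zero-padding, a generic technique rather than the paper's implementation, with made-up parameters) refines the measured peak position and shrinks the sub-sample offset:

```python
import numpy as np

# Peak localization before and after 8x spectral interpolation.
N, up = 64, 8
n = np.arange(N)
f0 = 10.3                                    # true (fractional) peak position
sig = np.exp(2j * np.pi * f0 * n / N)

coarse = np.argmax(np.abs(np.fft.fft(sig)))             # integer-bin estimate
fine = np.argmax(np.abs(np.fft.fft(sig, N * up))) / up  # zero-padded estimate

print(coarse, fine)   # the fine estimate lands much nearer the true 10.3
```

The residual offset after interpolation is bounded by half the upsampled bin spacing, so increasing the interpolation factor directly tightens the Equation (20) bound.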
(2) Iterative correction: According to Equations (17) and (19), the phase-error gradient of a point is: By Equation (14), the required correction term can be obtained as the slope of the phase curve formed by conjugate multiplication of the two phase-error curves, and the phase-error gradient can be corrected by working through the data point by point.
It can be observed from the above equation that the corrected error gradient has a fixed offset term, which in theory ensures that the overall offset of the image is consistent and that the positions of different strong targets are not distorted relative to one another.
In summary, the key steps of the proposed algorithm are given in Table 1.
6. Conclusions
According to the continuity of the phase error, this paper proposes a modified, iteration-free Stripmap Phase Gradient Algorithm based on removing the linear phase. Through an analysis of the mathematical process of phase-error extraction, we derive a method to estimate the real position of each point target, which retains the linear error during phase-error extraction and thus eliminates the error generated in the phase-stitching process, making the estimated phase-error curve more accurate and smooth and avoiding the frequent iterations of traditional SPGA. At the same time, our approach keeps the global linear phase-error level consistent, so the image scene as a whole suffers no relative offset. The algorithm overcomes the defects of SPGA, which has difficulty removing linear phase errors and requires iteration, resulting in low operational efficiency. Under normal circumstances, our approach without iteration achieves the focusing effect of SPGA after six iterations. The validity and efficiency of the approach are verified by the processing of simulated and real data. By analyzing the time cost of the two algorithms, we obtain the applicability boundary of our approach. It provides a solution for SAR real-time autofocus applications and has practical significance. It is worth noting that the algorithm is suitable for small squint angles; when the squint angle is too large, the algorithm needs further improvement.