Article

A High-Resolution Spotlight Imaging Algorithm via Modified Second-Order Space-Variant Wavefront Curvature Correction for MEO/HM-BiSAR

by Hang Ren 1, Zheng Lu 2, Gaopeng Li 1, Yun Zhang 1,*, Xueying Yang 1, Yalin Guo 3, Long Li 1, Xin Qi 4, Qinglong Hua 1, Chang Ding 5, Huilin Mu 6 and Yong Du 7

1 School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
2 Institute of Remote Sensing Satellite, China Academy of Space Technology, Beijing 100094, China
3 School of Mathematics and Statistics, Beijing Institute of Technology, Beijing 100081, China
4 Beijing Institute of Astronautical Systems Engineering, Beijing 100076, China
5 Shaanxi Key Laboratory of Artificially Structured Functional Materials, Air Force Engineering University, Xi’an 710051, China
6 Air Defense and Antimissile School, Air Force Engineering University, Xi’an 710051, China
7 China Mobile Chengdu Institute of Research and Development, Chengdu 610213, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(24), 4768; https://doi.org/10.3390/rs16244768
Submission received: 31 October 2024 / Revised: 14 December 2024 / Accepted: 18 December 2024 / Published: 20 December 2024
(This article belongs to the Special Issue Advanced HRWS Spaceborne SAR: System Design and Signal Processing)

Abstract:
A bistatic synthetic aperture radar (BiSAR) system with a Medium-Earth-Orbit (MEO) SAR transmitter and high-maneuvering receiver (MEO/HM-BiSAR) can achieve a wide swath and high resolution. However, due to the complex orbit characteristics and the nonlinear trajectory of the receiver, MEO/HM-BiSAR high-resolution imaging faces two major challenges. First, the complex geometric configuration of the BiSAR platforms is difficult to model accurately, and the ‘non-stop-go’ effects should also be considered. Second, non-negligible wavefront curvature caused by the nonlinear trajectories introduces residual phase errors. The existing spaceborne BiSAR imaging algorithms often suffer from image defocusing if applied to MEO/HM-BiSAR. To address these problems, a novel high-resolution imaging algorithm named MSSWCC (Modified Second-Order Space-Variant Wavefront Curvature Correction) is proposed. First, a high-precision range model is established based on an analysis of MEO SAR’s orbital characteristics and the receiver’s curved trajectory. Based on the echo model, the wavefront curvature error is then addressed by two-dimensional Taylor expansion to obtain the analytical expressions for the high-order phase errors. By analyzing the phase errors in the wavenumber domain, the compensation functions can be designed. The MSSWCC algorithm not only corrects the geometric distortion through reverse projection, but it also compensates for the second-order residual spatial-variant phase errors by the analytical expressions for the two-dimensional phase errors. It can achieve high-resolution imaging ability in large imaging scenes with low computational load. Simulations and real experiments validate the high-resolution imaging capabilities of the proposed MSSWCC algorithm in MEO/HM-BiSAR.

1. Introduction

Medium-Earth-Orbit Synthetic Aperture Radar (MEO SAR) offers a balanced solution by providing moderate spatial resolution with high temporal resolution. This makes it a viable alternative to both geosynchronous orbit (GEO) SAR and low-Earth-orbit (LEO) SAR systems [1,2]. Compared to LEO SAR, MEO SAR offers a much wider beam coverage, extending over several hundred kilometers, as well as a shorter revisit interval, typically within minutes. In contrast to GEO SAR, MEO SAR achieves higher spatial resolutions while also benefiting from lower launch costs [3]. These unique characteristics highlight the significant potential of MEO SAR for a wide range of applications, including ocean ice surveillance, soil moisture monitoring, vegetation observation, and oceanic studies [4,5,6].
Due to the unique characteristics of MEO SAR, a bistatic SAR system with an MEO SAR illuminator and high-maneuvering platform receivers (MEO/HM-BiSAR) has been found to offer numerous advantages [7]. First, with its wide swath and moderate spatial resolution, the MEO SAR could provide excellent observation capability for the receivers [2]. Second, high-maneuvering receivers (spacecraft, high-speed unmanned aerial vehicles, and missiles) usually operate on nonlinear trajectories, which gives them higher degrees of freedom in flight trajectories and greater flexibility compared to conventional receivers [8]. This flexibility enables them to adapt to various complex terrain detection and accomplish different mission tasks. Meanwhile, the receiver platform does not actively send electromagnetic waves, and the echo is reflected by the target from the MEO SAR. These characteristics enable the high-maneuvering platform to have strong concealment and anti-interference capabilities [9]. In addition, the passive reception characteristic saves volume and weight in the high-maneuvering platform [10].
However, accurate imaging with MEO/HM-BiSAR systems faces two major issues. The first is that the existing BiSAR range models have trouble describing the complex slant range in MEO/HM-BiSAR precisely. The research on existing spaceborne/maneuvering platform BiSAR mainly focuses on BiSAR systems with an LEO illuminator [11,12] or a GEO illuminator [13,14,15,16,17,18], where the orbit characteristics and range model are significantly different from those of the MEO SAR system. The above research cannot be applied to a BiSAR system with an MEO illuminator. Additionally, the high maneuverability and curved trajectory of the receiver further complicate the modeling of the slant range. The range models of high-maneuvering BiSAR have been studied in the literature [19,20]. These range models are based on the ‘go-stop-go’ (SAG) approximation, which is invalid for MEO/HM-BiSAR. The second issue regarding high-resolution imaging with MEO/HM-BiSAR is that the complex motion trajectories of the BiSAR platforms and the uncertainty of the point target spectrum make conventional imaging algorithms poorly applicable, aside from certain point-by-point methods such as the back-projection algorithm (BPA) that are limited by their large computational loads. In MEO/HM-BiSAR, the receiver can reach several times the velocity of sound, and its acceleration can be several times greater than gravity. These special motion characteristics cause large high-order phase errors, which would significantly reduce the resolution of the SAR image and even result in defocusing. Additionally, the high-order phase errors increase with the size of the imaging scene, restricting the feasible extent of the imaging region [21,22]. Therefore, the range model and high-resolution imaging algorithm for MEO/HM-BiSAR need to be studied.
Currently, the existing imaging algorithms for bistatic SAR can be mainly summarized in terms of the time-domain algorithm (TDA), frequency-domain algorithm (FDA), and polar format algorithm (PFA). Each of these approaches has its own strengths and limitations, particularly when applied to MEO/HM-BiSAR systems. The most typical TDAs are the BPA and the fast factorized BPA (FFBPA) [23,24,25,26]. While the BPA provides high precision, it is restricted by a significant computational load, especially when applied to high-maneuverability platforms. In contrast, the FFBPA offers reduced computational complexity but sacrifices precision. As a result, TDAs are generally not suitable for MEO/HM-BiSAR applications due to their computational demands and performance trade-offs. On the other hand, FDA algorithms, including the Range-Doppler Algorithm (RDA) [27,28], Chirp Scaling Algorithm (CSA), Nonlinear Chirp Scaling (NCS) algorithm [29,30,31], and Omega-K algorithm [32], are known for their lower computational complexity. However, these algorithms typically rely on an analytical point target reference spectrum (PTRS), which is difficult to derive, especially when the BiSAR platforms follow complex flight paths. Furthermore, algorithms based on an analytical PTRS face challenges in compensating for high-order spatial-variant phase errors, which are critical for MEO/HM-BiSAR systems. These limitations further restrict the applicability of FDA approaches in MEO/HM-BiSAR.
Accurate compensation for wavefront curvature errors in the polar format algorithm (PFA) for BiSAR systems, especially in MEO/HM-BiSAR, remains a significant challenge. Compared to the previous two types of algorithms, the PFA can adapt well to various configurations of BiSARs like the BPA while maintaining computational complexity comparable to that of the Omega-K algorithm, making it widely applied in spotlight BiSAR systems [21]. However, the algorithm is based on the far-field planar wavefront assumption, but when applied to the complex motion of BiSAR platforms, the higher velocity and acceleration of the platform result in significantly larger curvature errors, making it difficult to develop an analytical solution for the wavefront curvature error and compensate for it accurately. Several algorithms have been raised to deal with the wavefront curvature errors of PFA images in BiSAR system. To correct the wavefront curvature errors, a universal post-filter was designed by Wang et al. by calculating the phase error coefficients [33]. But in complex bistatic geometric configurations, deriving the post-filter coefficients in the wavenumber domain becomes more difficult. Therefore, the algorithm derived in [33] is only applicable to several specific BiSAR configurations like the translational invariant BiSAR, where the geometry of the system remains relatively stable. To address this issue, an algorithm based on the closed-form filter for arbitrary configurations was proposed by Zhang et al. [34], but this algorithm becomes ineffective when defocusing of the BiSAR image is severe. To address this issue, Miao et al. [21] and Wang et al. [35] successively proposed PFA algorithms based on two-dimensional (2D) phase compensation filters and bistatic parametric PFAs to compensate for wavefront curvature. However, these algorithms require image partitioning and sub-image processing, significantly increasing their complexity. But for high-maneuverability BiSAR applications like high-speed unmanned aerial vehicles and missiles, the computational load becomes prohibitive. Han et al. introduced an improved post-filtering algorithm, which could mitigate any-order wavefront curvature errors [36]. The limitation of this method is its reliance on the accuracy of point target position information for filter design. For example, in complex BiSAR systems where the trajectory is highly irregular and the point target motion is complex, this reliance on precise target position data makes the method less effective. To handle this, a new wavefront curvature compensation algorithm based on dimensionality reduction was proposed in [37]. While this algorithm significantly improves the efficiency and accuracy of wavefront curvature error compensation, its performance is still influenced by the sub-block size and trajectory complexity. The quality of the BiSAR images focused by the algorithm degrades under highly complex trajectories like MEO/HM-BiSAR systems. The above analysis demonstrates the requirement for designing advanced wavefront curvature compensation algorithms for BiSAR PFAs that are applicable to MEO/HM-BiSAR.
In this paper, to achieve high-resolution imaging in MEO/HM-BiSAR, an innovative high-resolution imaging algorithm utilizing a modified second-order space-variant wavefront curvature correction (MSSWCC) is proposed. First, a high-precision echo model is deduced for MEO/HM-BiSAR to describe its range history precisely. Based on the echo model, the proposed algorithm utilizes a fourth-order series inversion method to derive the complex relationship between the azimuth time and spatial frequency, expressing the echo phase as an explicit function of spatial frequency. The wavefront curvature error is then addressed by a two-dimensional Taylor expansion in the wavenumber domain with respect to spatial frequency, obtaining the analytical expressions for the high-order phase errors induced by wavefront curvature. The algorithm not only corrects the geometric distortion through reverse projection, but it also compensates for the second-order residual spatial-variant phase errors introduced by wavefront curvature. Simulations and real-data processing results verify that the high-order phase errors induced by wavefront curvature are fully corrected by the proposed MSSWCC algorithm, which illustrates its effectiveness and applicability in high-resolution MEO/HM-BiSAR imaging.
The structure of this paper is as follows. The range and echo models for MEO/HM-BiSAR are established in Section 2. The analysis of the effect of the wavefront curvature error is presented in Section 3; the derivations show that the complex flight paths of the BiSAR platforms aggravate the 2D wavefront curvature errors. To handle this problem, the proposed MSSWCC algorithm is developed in Section 4. Simulations and real BiSAR experiment results are employed in Section 5 to verify the validity and superiority of the proposed MSSWCC algorithm. Section 6 presents the discussion, and Section 7 gives the conclusions.

2. The Geometry and Range Models

Due to the unique orbital characteristics of MEO SAR satellites and the high-maneuverability flight of the receiver, the geometry and range models of the MEO/HM-BiSAR system are significantly different from the conventional BiSAR system. To describe the range history of the MEO/HM-BiSAR precisely, we first analyze the motion parameters and the process of coordinate system transformation in the MEO SAR to convert the BiSAR platforms into one coordinate system. Then, an improved ‘non-stop-go’ range model, as well as the associated echo model, are designed for MEO/HM-BiSAR.

2.1. MEO SAR Characteristic Analysis and Coordinate System Transformation

In the MEO/HM-BiSAR geometric configuration, the MEO SAR moves along an elliptical orbit as the transmitter, which is usually described in the Earth-centered, Earth-fixed (ECEF) coordinates. The geometric configuration is shown in Figure 1a, where $OXYZ$ denotes the ECEF coordinate system of the MEO SAR. The Earth’s center is represented by $O$. The $X$ axis is set along the direction of the Greenwich meridian. The $Z$ axis aligns with the angular momentum direction of the Earth. The MEO SAR moves along path AB, where $i$ represents the orbit inclination, and $\Omega$ is the ascending node. However, because of the rotation of the Earth, the imaging scene of the MEO SAR moves, and it is difficult to describe the MEO SAR’s motion parameters in ECEF coordinates. Additionally, the high-maneuvering receivers are primarily described in the local coordinate system (LCS). Hence, we should transform the MEO SAR from ECEF coordinates to the LCS before establishing the range model. By converting the MEO SAR’s state to the LCS, we can obtain precise information about the satellite’s slant range, velocity, and acceleration, and the effect of the Earth’s rotation on the movement of the MEO SAR imaging scene is eliminated. Consequently, it becomes much easier to establish a geometric model of the MEO SAR transmitter and the high-maneuvering receiver within the same coordinate system, allowing for an accurate slant range description.
The process of the coordinate transformation is shown in Figure 1a. The beam center point is the origin of the LCS, where the $z$ axis is directed along the line connecting it to the Earth’s center $O$, and the $y$ axis lies within the ground plane and is perpendicular to the $z$ axis. Assume that the longitude and latitude of the beam center point are $(\theta_{\mathrm{lo}}, \theta_{\mathrm{la}})$. The ECEF coordinate system is first rotated counterclockwise about the $Z$ axis by $\theta_{\mathrm{lo}}$. Next, it is rotated clockwise about the $Y$ axis by $\pi/2 - \theta_{\mathrm{la}}$. Then, the coordinate system is translated by the Earth’s radius along the $Z$ axis to obtain the LCS. After the coordinate transformation, the position vector $\mathbf{R}_{\mathrm{Tc}}$ and the motion vectors $(\mathbf{V}_t, \mathbf{A}_t)$ of the MEO SAR can be expressed as
$$\mathbf{R}_{\mathrm{Tc}} = \mathbf{M}_1 \mathbf{M}_0 \mathbf{R}_s - \left[0,\, 0,\, R_e\right]^{\mathrm{T}}, \quad \mathbf{V}_t = \mathbf{M}_1 \mathbf{M}_0 \mathbf{V}_s, \quad \mathbf{A}_t = \mathbf{M}_1 \mathbf{M}_0 \mathbf{A}_s$$
where the rotation matrices M 0 and M 1 are expressed as
$$\mathbf{M}_0 = \begin{bmatrix} \cos\theta_{\mathrm{lo}} & \sin\theta_{\mathrm{lo}} & 0 \\ -\sin\theta_{\mathrm{lo}} & \cos\theta_{\mathrm{lo}} & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad \mathbf{M}_1 = \begin{bmatrix} \sin\theta_{\mathrm{la}} & 0 & -\cos\theta_{\mathrm{la}} \\ 0 & 1 & 0 \\ \cos\theta_{\mathrm{la}} & 0 & \sin\theta_{\mathrm{la}} \end{bmatrix}$$
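To make the transformation concrete, a minimal numerical sketch is given below. The satellite state vectors and beam-center coordinates are hypothetical placeholders rather than the values of Table 1, and the sign conventions of $\mathbf{M}_0$ and $\mathbf{M}_1$ follow Equation (2) as reconstructed above.

```python
import numpy as np

def ecef_to_lcs(r_s, v_s, a_s, theta_lo, theta_la, r_earth=6371.0e3):
    """Convert the MEO SAR state from ECEF to the LCS centered at the beam
    center (theta_lo, theta_la), following Equations (1) and (2)."""
    # Rotation about the Z axis by the beam-center longitude theta_lo
    m0 = np.array([[ np.cos(theta_lo),  np.sin(theta_lo), 0.0],
                   [-np.sin(theta_lo),  np.cos(theta_lo), 0.0],
                   [ 0.0,               0.0,              1.0]])
    # Rotation about the Y axis related to the beam-center latitude theta_la
    m1 = np.array([[ np.sin(theta_la), 0.0, -np.cos(theta_la)],
                   [ 0.0,              1.0,  0.0],
                   [ np.cos(theta_la), 0.0,  np.sin(theta_la)]])
    rot = m1 @ m0
    # Translate by the Earth's radius along the z axis so the beam center is the origin
    r_tc = rot @ r_s - np.array([0.0, 0.0, r_earth])
    return r_tc, rot @ v_s, rot @ a_s

# Hypothetical ECEF position/velocity/acceleration of the MEO satellite (placeholders)
r_s = np.array([1.2e7, 3.0e6, 4.0e6])      # m
v_s = np.array([-1.5e3, 4.2e3, 1.1e3])     # m/s
a_s = np.array([-0.9, -0.2, -0.3])         # m/s^2
r_tc, v_t, a_t = ecef_to_lcs(r_s, v_s, a_s, np.deg2rad(110.0), np.deg2rad(40.0))
print(r_tc, v_t, a_t)
```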
The MEO SAR’s motion vectors in the LCS are calculated by Equations (1) and (2) with the parameters in Table 1. Figure 2 shows the beam scanning speed and the relative speed of the MEO SAR in different true anomalies. Due to the Earth’s rotation, the max beam scanning speed is 4530 m/s, and the max relative speed of the satellite is about 1690 m/s in the equator. The satellite’s relative speed is faster than the beam scanning speed. Hence, with the same resolution, the MEO SAR has a longer synthetic aperture time than the LEO SAR.
After the coordinate transformation, the MEO SAR and the high-maneuvering receiver are in the same coordinate system. The range model in the LCS is presented in Figure 1b, where the center of the beam overlap region is the coordinate system’s origin. An arbitrary point target P is situated at $(x_p, y_p, 0)$, where the $y$ axis corresponds to the receiver’s velocity direction. Supposing the composite beam center crossing time $t_a$ to be zero, the MEO SAR is situated at $(x_t, y_t, z_t)$, and its 3D initial velocity and acceleration are $(v_{tx}, v_{ty}, v_{tz})$ and $(a_{tx}, a_{ty}, a_{tz})$. The receiver is located at the initial position $(x_r, y_r, z_r)$, descending along a curved trajectory, and its 3D initial velocity and acceleration vectors are $(v_{rx}, v_{ry}, v_{rz})$ and $(a_{rx}, a_{ry}, a_{rz})$, respectively.

2.2. Motion Range Model and Echo Model

The high altitude of the MEO SAR and the high maneuverability of the receiver result in longer signal propagation compared with BiSAR platforms that have low altitudes and slow speeds. It is necessary to consider the movement of the high-maneuvering receiver during the signal propagation in MEO/HM-BiSAR. The echo models based on the ‘go-stop-go’ assumption are not applicable to MEO/HM-BiSAR. The actual propagation path and the propagation path founded on the ‘go-stop-go’ approximation for MEO/HM-BiSAR are shown in Figure 1b. To address the imaging errors caused by the ‘stop-go’ assumption, the proposed ‘non-stop-go’ echo model can be divided into two parts. The first part considers the motion of the receiver during signal propagation. The second part involves solving for the propagation delay, where a Taylor expansion is employed to approximate the model. The time reference is taken at the MEO SAR in the actual propagation delay model, where $\tau$ is the accurate propagation delay, and $R_R(t_a + \tau)$ is the actual range history of the receiver at the time base $t_a$. In the proposed ‘non-stop-go’ echo model, the motion of the receiver needs to be considered while the signal is propagating. The actual bistatic instantaneous range history for an arbitrary target P of MEO/HM-BiSAR can be expressed as
$$R_{\mathrm{bi}}(t_a + \tau) = R_T(t_a) + R_R(t_a + \tau)$$
with
$$\tau = \frac{R_T(t_a) + R_R(t_a + \tau)}{c}$$
where τ is the accurate propagation delay. R T is the instantaneous slant range of the transmitter, and R R is the instantaneous slant range of the receiver, which is defined as
$$R_T(t_a) = \sqrt{x_t^2(t_a) + y_t^2(t_a) + z_t^2(t_a)}$$
$$R_R(t_a) = \sqrt{x_r^2(t_a) + y_r^2(t_a) + z_r^2(t_a)}$$
Due to the slow acceleration variation of the MEO SAR, it is reasonable to describe the motion state of MEO satellites using a constant acceleration model over the synthetic aperture time [38,39,40]. For the high-maneuverability receiver, the constant acceleration model is also usually used [41,42]. We consider the high-maneuverability receiver to have three-dimensional high velocity and acceleration, which is a very complex condition. Based on the above analysis, the expressions of $x_t$, $x_r$, $y_t$, $y_r$, $z_t$, and $z_r$ are
$$\begin{cases} x_t(t_a) = x_t - x_p + v_{tx} t_a + \tfrac{1}{2} a_{tx} t_a^2 \\ y_t(t_a) = y_t - y_p + v_{ty} t_a + \tfrac{1}{2} a_{ty} t_a^2 \\ z_t(t_a) = z_t + v_{tz} t_a + \tfrac{1}{2} a_{tz} t_a^2 \\ x_r(t_a) = x_r - x_p + v_{rx} t_a + \tfrac{1}{2} a_{rx} t_a^2 \\ y_r(t_a) = y_r - y_p + v_{ry} t_a + \tfrac{1}{2} a_{ry} t_a^2 \\ z_r(t_a) = z_r + v_{rz} t_a + \tfrac{1}{2} a_{rz} t_a^2 \end{cases}$$
However, due to the curved trajectories of the transmitter and receiver, as well as the square root operations in $R_T(t_a)$ and $R_R(t_a)$, $\tau$ is governed by a fourth-order equation, and there is no explicit solution for $\tau$. Therefore, a fourth-order Taylor expansion is first conducted to provide an approximate explicit solution with sufficient precision, which is given by
$$R_T(t_a) = R_{\mathrm{Tc}} + b_{1T} t_a + b_{2T} t_a^2 + b_{3T} t_a^3 + b_{4T} t_a^4 + \Delta r_T(t_a)$$
$$R_R(t_a) = R_{\mathrm{Rc}} + b_{1R} t_a + b_{2R} t_a^2 + b_{3R} t_a^3 + b_{4R} t_a^4 + \Delta r_R(t_a)$$
where $b_{nT}$ is the Taylor expansion coefficient of the transmitter, and $b_{nR}$ is that of the receiver. $R_{\mathrm{Tc}}$ and $R_{\mathrm{Rc}}$ denote the range histories at $t_a = 0$. $\Delta r_T(t_a)$ and $\Delta r_R(t_a)$ define the higher-order Taylor expansion series. Appendix A gives the expressions of the above parameters. The ‘go-stop-go’ approximate bistatic range history is the sum of the ranges from the two platforms to the arbitrary target, which can be written as
$$R_{\mathrm{sag}}(t_a) = R_T(t_a) + R_R(t_a)$$
Therefore, the propagation delay based on ‘go-stop-go’ is $\tau_{\mathrm{sag}} = R_{\mathrm{sag}}(t_a)/c$. Combining Equations (3)–(9), we can obtain the mathematical expression of $\tau$. However, the solution of the fourth-order equation is complicated. Considering that the movement of the receiver during the propagation delay $\tau$ is far smaller than the receiver’s range history to the targets $R_R(t_a)$, $\tau$ can be approximated by the propagation delay $\tau_{\mathrm{sag}}$, and the ‘non-stop-go’ slant range model is
$$R_{\mathrm{bi}}(t_a) = R_T(t_a) + R_R(t_a + \tau) \approx R_T(t_a) + R_R(t_a + \tau_{\mathrm{sag}}) \approx K_0 + K_1 t_a + K_2 t_a^2 + K_3 t_a^3 + K_4 t_a^4$$
where K n is the coefficient of Taylor expansion at the Doppler central time of the proposed model. The above parameters are given in Appendix A.
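As a numerical cross-check of the delay handling above, the exact delay in Equation (4) can be solved by fixed-point iteration and the resulting ‘non-stop-go’ bistatic range compared with the ‘go-stop-go’ sum of Equation (10). The sketch below uses hypothetical platform states (not the configuration of Tables 1 and 2) and a target at the LCS origin.

```python
import numpy as np

c = 3.0e8  # speed of light (m/s)

def inst_range(p0, v, a, t):
    """Range from a constant-acceleration platform to a target at the origin."""
    return np.linalg.norm(p0 + v * t + 0.5 * a * t**2)

# Hypothetical transmitter (MEO) and receiver (high-maneuvering) states in the LCS
pT, vT, aT = np.array([0.0, -1.1e7, 1.5e7]), np.array([0.0, 4.0e3, 1.0e3]), np.array([0.0, -0.5, -0.8])
pR, vR, aR = np.array([2.0e3, -1.5e4, 8.0e3]), np.array([20.0, 920.0, 50.0]), np.array([0.0, 50.0, 50.0])

def bistatic_range_nonstopgo(t_a, n_iter=5):
    """R_T(t_a) + R_R(t_a + tau) with tau solved from Eq. (4) by fixed-point iteration."""
    r_t = inst_range(pT, vT, aT, t_a)
    tau = (r_t + inst_range(pR, vR, aR, t_a)) / c       # 'go-stop-go' initial guess
    for _ in range(n_iter):
        tau = (r_t + inst_range(pR, vR, aR, t_a + tau)) / c
    return r_t + inst_range(pR, vR, aR, t_a + tau)

t = np.linspace(-1.0, 1.0, 201)                          # slow time within the aperture (s)
r_exact = np.array([bistatic_range_nonstopgo(ti) for ti in t])
r_sag = np.array([inst_range(pT, vT, aT, ti) + inst_range(pR, vR, aR, ti) for ti in t])
print('max |non-stop-go - go-stop-go| range difference (m):', np.abs(r_exact - r_sag).max())
```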
The error of the proposed model in (11) is analyzed during the synthetic aperture time. For a comparison, the errors of the BiSAR range model using ‘go-stop-go’ approximation in (10) and the reference ‘non-stop-go’ BiSAR range model in [43] are also analyzed. The MEO/HM-BiSAR’s simulation parameters are listed in Table 1 and Table 2. Figure 3a presents the discrepancy between the actual and the estimated instant bistatic range; the high-maneuverability platform works in high velocity and acceleration in squinted mode. It is clear that under the ‘go-stop-go’ range model, the maximum range error is 26 m, which is much larger than that with the reference ‘non-stop-go’ model and our proposed accurate ‘non-stop-go’ model. The high-altitude transmitter and the high-maneuverability SAR receiver are primarily responsible for the huge range error. It could be found that the range error of our proposed accurate ‘non-stop-go’ model is much smaller than the reference ‘non-stop-go’ model, which certifies the precision of our proposed range model.
From the analysis of the range model in (11), our proposed ‘non-stop-go’ echo delay is
$$\Delta t_{\mathrm{bi}}(t_a) = \frac{R_{\mathrm{bi}}(t_a)}{c} = \Delta t_0 + \Delta t_1 t_a + \Delta t_2 t_a^2 + \Delta t_3 t_a^3 + \Delta t_4 t_a^4$$
where $\Delta t_n = K_n / c$, $n = 0, 1, 2, 3, 4$. We suppose that the signal transmitted by the MEO SAR illumination source is a linear frequency modulation (LFM) pulse. Hence, after demodulation, the echo is written as
$$s(\hat{t}; t_a) = \omega_r\!\left(\hat{t} - \Delta t_{\mathrm{bi}}(t_a)\right) \omega_a(t_a) \exp\!\left(j \pi \gamma \left(\hat{t} - \Delta t_{\mathrm{bi}}(t_a)\right)^2\right) \cdot \exp\!\left(-j 2 \pi f_c \Delta t_{\mathrm{bi}}(t_a)\right)$$
where $\omega_r$ is the range window function of the echo, and $\omega_a$ is the azimuth window function. $c$ is the speed of light, and $\gamma$ is the chirp rate. $\hat{t}$ is the fast time, $t_a$ is the slow time, and $f_c$ is the carrier frequency.
The total phase error induced by the slant range error could be written as
$$\Delta\varphi(f_r, t_a) = \frac{2\pi (f_r + f_c)}{c} \Delta R(t_a)$$
where $\Delta R(t_a)$ is the error of the slant range, and $f_r$ is the range frequency. To validate the accuracy of the proposed echo model, a detailed error analysis according to Equation (14) is conducted on the proposed model, the ‘go-stop-go’ echo model, and the BiSAR echo model in [43]. The MEO satellite is at the perigee location. $(v_{R0}, a_{R0})$ are the velocity and acceleration of the high-maneuvering receiver in Table 2, which are set to (20, 920, 50) m/s and (0, 50, 50) m/s$^2$. Figure 3b presents the phase errors of the echo model using the ‘go-stop-go’ approximation in different geometric configurations of the MEO/HM-BiSAR. We find that the phase error of the echo model with the ‘go-stop-go’ approximation is much larger than $\pi/4$ in every situation, so it is not suitable for the MEO SAR system. From Figure 3c, we can see that only with a low acceleration of (0, 5, 5) m/s$^2$ are the phase errors of the model in [43] less than $\pi/4$. The comparison result shows that the high-order terms of the echo model for MEO/HM-BiSAR cannot be neglected. Figure 3d gives the phase errors of our proposed ‘non-stop-go’ echo model of MEO/HM-BiSAR in different cases. Note that the phase errors of the proposed model are less than $\pi/4$ in all situations, which clearly demonstrates the validity of the approximation in MEO/HM-BiSAR. Since the proposed model uses a fourth-order Taylor approximation, the phase error remains very small, only 0.14 rad under a dive acceleration of (0, 50, 5) m/s$^2$, making it suitable for the majority of scenarios in MEO/HM-BiSAR. The proposed model remains applicable when the phase error $\Delta\varphi$ is less than $\pi/4$. However, under extremely complex and intense motion trajectories, such as when the acceleration varies significantly during the synthetic aperture time, the model might fail to accurately describe the receiver’s slant range, and it thus faces limitations. The analysis of the model’s accuracy proves the precision of the proposed model and its applicability to MEO/HM-BiSAR.
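For a quick sense of how tight the range-model requirement is, the $\pi/4$ criterion of Equation (14) can be evaluated numerically; the sketch below uses the 5.4 GHz carrier of the C-band experiment in Section 5.2 and treats the residual range error as a free parameter.

```python
import numpy as np

c, fc = 3.0e8, 5.4e9   # speed of light and an assumed C-band carrier frequency (Hz)

def phase_error(delta_r, fr=0.0):
    """Phase error of Eq. (14) for a residual slant-range error delta_r (m)."""
    return 2 * np.pi * (fr + fc) / c * delta_r

# Range error that just reaches the pi/4 focusing criterion at band center (f_r = 0)
delta_r_limit = c / (8 * fc)
print('range error at the pi/4 limit: %.2f mm' % (delta_r_limit * 1e3))    # ~6.9 mm
print('phase error for a 26 m stop-and-go range error: %.1f rad' % phase_error(26.0))
```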

3. The Polar Formatting Process for BiSAR

In this section, based on the proposed echo model, we conduct a detailed analysis of the BiSAR polar formatting process considering wavefront curvature errors. First, we transform the echo in Equation (13) to the range frequency domain using fast Fourier transform (FFT), which is shown as
$$S(f_r, t_a; x_p, y_p) = W_r(f_r)\, \omega_a(t_a) \exp\!\left(-j \pi \frac{f_r^2}{\gamma}\right) \exp\!\left(-j \frac{2\pi}{c}(f_r + f_c) R_{\mathrm{bi}}(t_a; x_p, y_p)\right)$$
where W r corresponds to the frequency domain envelope of ω r . Pulse compression and space-invariant motion compensation are conducted by multiplying the compensation function of the reference point shown below:
$$H_1(f_r, t_a) = \exp\!\left(j \pi \frac{f_r^2}{\gamma}\right) \exp\!\left(j \frac{2\pi (f_r + f_c)}{c} R_{\mathrm{bi}}(t_a; 0, 0)\right)$$
We obtain
$$S(f_r, t_a; x_p, y_p) = W_r(f_r)\, \omega_a(t_a) \exp\!\left(j \frac{2\pi}{c} (f_r + f_c) \Delta R_{\mathrm{bi}}(t_a; x_p, y_p)\right)$$
The differential distance can be written as
$$\Delta R_{\mathrm{bi}}(t_a; x_p, y_p) = R_{\mathrm{bi}}(t_a; 0, 0) - R_{\mathrm{bi}}(t_a; x_p, y_p) \approx x_p f_x(t_a) + y_p f_y(t_a) + \Delta e(t_a)$$
where
$$f_x(t_a) = \frac{x_{tc}(t_a)}{R_{\mathrm{Tc}}(t_a)} + \frac{x_{rc}(t_a + \Delta t_{\mathrm{bi}})}{R_{\mathrm{Rc}}(t_a + \Delta t_{\mathrm{bi}})}, \quad f_y(t_a) = \frac{y_{tc}(t_a)}{R_{\mathrm{Tc}}(t_a)} + \frac{y_{rc}(t_a + \Delta t_{\mathrm{bi}})}{R_{\mathrm{Rc}}(t_a + \Delta t_{\mathrm{bi}})}$$
where $\Delta e(t_a)$ is the range error resulting from the plane wavefront approximation. $x_{tc}(t_a)$, $x_{rc}(t_a)$, $y_{tc}(t_a)$, and $y_{rc}(t_a)$ are the values of $x_t(t_a)$, $x_r(t_a)$, $y_t(t_a)$, and $y_r(t_a)$ at $(x_p, y_p) = (0, 0)$. By substituting Equation (18) into Equation (17), the echo can be rewritten as
$$S_1 = A \exp\!\left(j \left(k_x x_p + k_y y_p + \frac{2\pi}{c}(f_r + f_c)\, \Delta e(t_a)\right)\right)$$
where A represents the window functions. k x and k y denote the spatial frequency in the x and y directions, respectively, which can be written as
$$k_x = \frac{2\pi}{c}(f_r + f_c)\, f_x(t_a), \quad k_y = \frac{2\pi}{c}(f_r + f_c)\, f_y(t_a)$$
The echo data in the ( k x , k y ) domain are non-uniformly sampled. After resampling the data in the range and azimuth, the uniformly sampled data are obtained. The BiSAR PFA image could be achieved through the operation of 2D inverse FFT (IFFT). However, as is apparent from Equation (20), the wavefront curvature error would introduce 2D errors in the wavenumber domain, eventually resulting in image distortion. From the above analysis, the phase error is
$$\Delta\varphi(f_r, t_a) = \frac{2\pi}{c}(f_r + f_c)\, \Delta e(t_a)$$
To reveal the composition of the wavefront curvature error, a Taylor expansion of (22) is conducted at $t_a = 0$, which is written as
$$\Delta\varphi = \Delta\varphi_0 + \Delta\varphi_1 t_a + \Delta\varphi_2 t_a^2$$
where
$$\Delta\varphi_0 = \frac{2\pi}{c}(f_r + f_c)\, \Delta e(t_a)\big|_{t_a=0}, \quad \Delta\varphi_1 = \frac{2\pi}{c}(f_r + f_c)\, \Delta e'(t_a)\big|_{t_a=0}, \quad \Delta\varphi_2 = \frac{\pi}{c}(f_r + f_c)\, \Delta e''(t_a)\big|_{t_a=0}$$
From the Equation (23), each term has a different effect on the image. Δ φ 0 and Δ φ 1 would bring target position shifts in the BiSAR PFA image, while the quadratic term leads to a defocused SAR image. By using the parameters in Table 1 and Table 2, the absolute values of the linear, second-order, and higher-order phase errors induced by wavefront curvature are simulated, as shown in Figure 4. It is evident that the linear and second-order phase errors exceed π / 4 during the synthetic aperture time, which will affect the target’s focusing position and lead to defocusing. Therefore, they must be considered in the MEO/HM-BiSAR. For the higher-order phase error in Figure 4c, the absolute values of the higher-order errors are much smaller than π / 4 . Thus, the higher-order wavefront curvature errors can be neglected. From the above simulation results, wavefront curvature compensation is necessary to gain precise focusing on the BiSAR PFA images. However, because of the complicated data acquisition configuration of the PFA image in MEO/HM-BiSAR, designing phase error coefficient filters is challenging. Therefore, in the next section, we will analyze and compensate for the exact coefficients of the wavefront curvature errors.
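For completeness, a simplified sketch of the polar-format resampling step discussed at the start of this section is given below: the non-uniform $(k_x, k_y)$ samples associated with Equation (21) are interpolated onto a uniform rectangular grid, and a 2D IFFT then forms the PFA image. A generic 2D interpolation is used here for brevity, rather than the separable range/azimuth interpolation of an operational processor, and the inputs are assumed to be precomputed arrays.

```python
import numpy as np
from scipy.interpolate import griddata

def polar_format(samples, kx, ky, n_grid=512):
    """Resample phase-history samples given at non-uniform (kx, ky) positions onto a
    uniform wavenumber grid, then form the PFA image with a 2D inverse FFT."""
    kx_g = np.linspace(kx.min(), kx.max(), n_grid)
    ky_g = np.linspace(ky.min(), ky.max(), n_grid)
    gx, gy = np.meshgrid(kx_g, ky_g, indexing='ij')
    pts = np.column_stack([kx.ravel(), ky.ravel()])
    # Interpolate real and imaginary parts separately onto the uniform grid
    re = griddata(pts, samples.real.ravel(), (gx, gy), method='linear', fill_value=0.0)
    im = griddata(pts, samples.imag.ravel(), (gx, gy), method='linear', fill_value=0.0)
    grid = re + 1j * im
    image = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid)))
    return image
```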

4. The MSSWCC Imaging Algorithm for MEO/HM-BiSAR

In this section, the complex relationship between the azimuth time and the spatial frequency in MEO/HM-BiSAR is first derived theoretically. Then, the exact coefficients of the wavefront curvature errors are analyzed. Based on this analysis, the MSSWCC imaging algorithm is derived.

4.1. The Compensation of Wavefront Curvature Error

Under conditions where the trajectories of the MEO SAR and the high-maneuvering receiver are curved, the non-negligible wavefront error causes image defocusing and a certain degree of geometric distortion. Therefore, analyzing the wavefront curvature error terms is necessary to improve the image-focusing quality. The actual phase of the signal can be rewritten as
$$\Phi = K_r \Delta R_{\mathrm{bi}}(t_a; x_p, y_p)$$
where $K_r = \frac{2\pi}{c}(f_r + f_c)$. From (21), $K_r$ can also be expressed as
$$K_r = \frac{k_x}{f_x(t_a)}$$
Substituting (26) into (25), we obtain
$$\Phi(k_x, t_a) = \frac{k_x\, \Delta R_{\mathrm{bi}}(t_a; x_p, y_p)}{f_x(t_a)}$$
From (27), we obtain the expression for the phase in the $(k_x, t_a)$ domain. However, due to the complex relationship between this domain and the PFA image domain, it is difficult to compensate for the wavefront curvature errors directly. We need to perform another variable substitution to express the phase explicitly as a function of spatial frequency.
After the polar format mapping, the signal will be converted to the ( k x , k y ) domain. To eliminate the wavefront curvature errors in this domain, it is necessary to obtain the inverse function of Equation (21) and express the phase as an explicit function of spatial frequency. However, the inverse function is generally difficult to gain in polar format mapping. Nevertheless, due to the two equations having a shared variable in (21), we can use series reversion to derive the inverse function concerning t a . First, to simplify the solution of the inverse function of t a , a fourth-order Taylor expansion of f x t a and f y t a at t a = 0 can be conducted, which is shown as
$$f_x(t_a) = \kappa_0^{f_x} + \kappa_1^{f_x} t_a + \kappa_2^{f_x} t_a^2 + \kappa_3^{f_x} t_a^3 + \kappa_4^{f_x} t_a^4$$
$$f_y(t_a) = \kappa_0^{f_y} + \kappa_1^{f_y} t_a + \kappa_2^{f_y} t_a^2 + \kappa_3^{f_y} t_a^3 + \kappa_4^{f_y} t_a^4$$
where κ i f n = κ ti f n + κ ri f n , i = 0 , 1 , 2 , 3 , 4 . The value of n is x and y, representing the components of the platforms’ position and motion information in the x and y directions. The coefficients of κ ti f n and κ ri f n are shown in Appendix B. According to Equations (21), (28) and (29), the relationship between k x , k y , and t a can be expressed as
$$\frac{k_x}{k_y} = \frac{f_x(t_a)}{f_y(t_a)} \approx m_0 + m_1 t_a + m_2 t_a^2 + m_3 t_a^3 + m_4 t_a^4$$
with
$$\begin{aligned}
m_0 &= \frac{\kappa_0^{f_x}}{\kappa_0^{f_y}} \\
m_1 &= \frac{\kappa_1^{f_x}\kappa_0^{f_y} - \kappa_0^{f_x}\kappa_1^{f_y}}{\left(\kappa_0^{f_y}\right)^2} \\
m_2 &= \frac{\kappa_2^{f_x}\kappa_0^{f_y} - \kappa_0^{f_x}\kappa_2^{f_y} - \kappa_1^{f_x}\kappa_1^{f_y}}{\left(\kappa_0^{f_y}\right)^2} + \frac{\kappa_0^{f_x}\left(\kappa_1^{f_y}\right)^2}{\left(\kappa_0^{f_y}\right)^3} \\
m_3 &= \frac{\kappa_3^{f_x}\kappa_0^{f_y} - \kappa_2^{f_x}\kappa_1^{f_y} - \kappa_1^{f_x}\kappa_2^{f_y} - \kappa_0^{f_x}\kappa_3^{f_y}}{\left(\kappa_0^{f_y}\right)^2} + \frac{2\kappa_0^{f_x}\kappa_1^{f_y}\kappa_2^{f_y} + \kappa_1^{f_x}\left(\kappa_1^{f_y}\right)^2}{\left(\kappa_0^{f_y}\right)^3} - \frac{\kappa_0^{f_x}\left(\kappa_1^{f_y}\right)^3}{\left(\kappa_0^{f_y}\right)^4} \\
m_4 &= \frac{\kappa_4^{f_x}\kappa_0^{f_y} - \kappa_3^{f_x}\kappa_1^{f_y} - \kappa_2^{f_x}\kappa_2^{f_y} - \kappa_1^{f_x}\kappa_3^{f_y} - \kappa_0^{f_x}\kappa_4^{f_y}}{\left(\kappa_0^{f_y}\right)^2} + \frac{2\kappa_1^{f_x}\kappa_1^{f_y}\kappa_2^{f_y} + \kappa_2^{f_x}\left(\kappa_1^{f_y}\right)^2 + \kappa_0^{f_x}\left(\kappa_2^{f_y}\right)^2 + 2\kappa_0^{f_x}\kappa_1^{f_y}\kappa_3^{f_y}}{\left(\kappa_0^{f_y}\right)^3} - \frac{\kappa_1^{f_x}\left(\kappa_1^{f_y}\right)^3 + 3\kappa_0^{f_x}\left(\kappa_1^{f_y}\right)^2\kappa_2^{f_y}}{\left(\kappa_0^{f_y}\right)^4} + \frac{\kappa_0^{f_x}\left(\kappa_1^{f_y}\right)^4}{\left(\kappa_0^{f_y}\right)^5}
\end{aligned}$$
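The quotient-expansion coefficients above can be cross-checked symbolically; the short SymPy sketch below expands $f_x(t_a)/f_y(t_a)$ about $t_a = 0$ and prints the resulting $m_n$ (the symbol names are placeholders for $\kappa_i^{f_x}$ and $\kappa_i^{f_y}$).

```python
import sympy as sp

t = sp.symbols('t')
kx = sp.symbols('kx0:5')   # kappa_0^fx ... kappa_4^fx
ky = sp.symbols('ky0:5')   # kappa_0^fy ... kappa_4^fy

f_x = sum(kx[i] * t**i for i in range(5))
f_y = sum(ky[i] * t**i for i in range(5))

# Taylor coefficients of f_x/f_y about t = 0, i.e. the m_n of Eq. (31)
ratio = sp.series(f_x / f_y, t, 0, 5).removeO()
for n in range(5):
    print(f'm_{n} =', sp.simplify(ratio.coeff(t, n)))
```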
Equation (30) can be rewritten as
$$M = \frac{k_x}{k_y} - m_0 \approx m_1 t_a + m_2 t_a^2 + m_3 t_a^3 + m_4 t_a^4$$
Using the series reversion method, we obtain
$$t_a \approx n_1 M + n_2 M^2 + n_3 M^3 + n_4 M^4$$
where
$$n_1 = \frac{1}{m_1}, \quad n_2 = -\frac{m_2}{m_1^3}, \quad n_3 = \frac{2 m_2^2 - m_1 m_3}{m_1^5}, \quad n_4 = -\frac{5 m_2^3 + m_1^2 m_4 - 5 m_1 m_2 m_3}{m_1^7}$$
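The reversion coefficients in Equation (34) can likewise be verified by substituting the reverted series back into the forward series; a minimal SymPy sketch is shown below.

```python
import sympy as sp

t, M = sp.symbols('t M')
m1, m2, m3, m4 = sp.symbols('m1 m2 m3 m4')

# Candidate reversion coefficients of Eq. (34)
n1 = 1 / m1
n2 = -m2 / m1**3
n3 = (2 * m2**2 - m1 * m3) / m1**5
n4 = -(5 * m2**3 + m1**2 * m4 - 5 * m1 * m2 * m3) / m1**7

# Substitute t(M) into the forward series M = m1*t + m2*t^2 + m3*t^3 + m4*t^4 (Eq. (32));
# the residual should vanish up to fourth order in M.
t_of_M = n1 * M + n2 * M**2 + n3 * M**3 + n4 * M**4
forward = m1 * t_of_M + m2 * t_of_M**2 + m3 * t_of_M**3 + m4 * t_of_M**4
residual = sp.series(forward, M, 0, 5).removeO() - M
print(sp.simplify(residual))   # expected: 0
```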
From (33), the inverse function expressions for t a in terms of k x and k y are obtained. It is hard to gain an analytical solution in the wavenumber domain by substituting (33) into (27) directly. Hence, the Taylor approximation is conducted to solve the problem in the proposed algorithm. From Equations (11) and (18), Δ R bi t a ; x p , y p can be expressed as
$$\Delta R_{\mathrm{bi}}(t_a; x_p, y_p) = \Delta K_0(x_p, y_p) + \Delta K_1(x_p, y_p)\, t_a + \Delta K_2(x_p, y_p)\, t_a^2 + \Delta K_3(x_p, y_p)\, t_a^3 + \Delta K_4(x_p, y_p)\, t_a^4$$
where Δ K n x p , y p , n = 0 , 1 , 2 , 3 , 4 is the difference of K n ( 0 , 0 ) and K n ( x p , y p ) . Let υ t a = Δ R bi t a / f x t a and substitute (35) and (28) into υ t a . Then, the fourth-order Taylor expansion is conducted in the result, which is written as
$$\upsilon(t_a; x_p, y_p) = \frac{\Delta R_{\mathrm{bi}}(t_a; x_p, y_p)}{f_x(t_a)} \approx b_0(x_p, y_p) + b_1(x_p, y_p)\, t_a + b_2(x_p, y_p)\, t_a^2 + b_3(x_p, y_p)\, t_a^3 + b_4(x_p, y_p)\, t_a^4$$
with
$$\begin{aligned}
b_0 &= \frac{\Delta K_0}{\kappa_0^{f_x}} \\
b_1 &= \frac{\Delta K_1 \kappa_0^{f_x} - \Delta K_0 \kappa_1^{f_x}}{\left(\kappa_0^{f_x}\right)^2} \\
b_2 &= \frac{\Delta K_2 \kappa_0^{f_x} - \Delta K_1 \kappa_1^{f_x} - \Delta K_0 \kappa_2^{f_x}}{\left(\kappa_0^{f_x}\right)^2} + \frac{\Delta K_0 \left(\kappa_1^{f_x}\right)^2}{\left(\kappa_0^{f_x}\right)^3} \\
b_3 &= \frac{\Delta K_3 \kappa_0^{f_x} - \Delta K_2 \kappa_1^{f_x} - \Delta K_1 \kappa_2^{f_x} - \Delta K_0 \kappa_3^{f_x}}{\left(\kappa_0^{f_x}\right)^2} + \frac{2\Delta K_0 \kappa_1^{f_x}\kappa_2^{f_x} + \Delta K_1 \left(\kappa_1^{f_x}\right)^2}{\left(\kappa_0^{f_x}\right)^3} - \frac{\Delta K_0 \left(\kappa_1^{f_x}\right)^3}{\left(\kappa_0^{f_x}\right)^4} \\
b_4 &= \frac{\Delta K_4 \kappa_0^{f_x} - \Delta K_3 \kappa_1^{f_x} - \Delta K_2 \kappa_2^{f_x} - \Delta K_1 \kappa_3^{f_x} - \Delta K_0 \kappa_4^{f_x}}{\left(\kappa_0^{f_x}\right)^2} + \frac{2\Delta K_1 \kappa_1^{f_x}\kappa_2^{f_x} + \Delta K_2 \left(\kappa_1^{f_x}\right)^2 + \Delta K_0 \left(\kappa_2^{f_x}\right)^2 + 2\Delta K_0 \kappa_1^{f_x}\kappa_3^{f_x}}{\left(\kappa_0^{f_x}\right)^3} - \frac{\Delta K_1 \left(\kappa_1^{f_x}\right)^3 + 3\Delta K_0 \left(\kappa_1^{f_x}\right)^2 \kappa_2^{f_x}}{\left(\kappa_0^{f_x}\right)^4} + \frac{\Delta K_0 \left(\kappa_1^{f_x}\right)^4}{\left(\kappa_0^{f_x}\right)^5}
\end{aligned}$$
Substitute the function of the azimuth time with respect to the spatial frequency in (30) and (36) into (27), and the expression for the actual phase in ( k x , k y ) domain is obtained, which is shown as
$$\Phi(k_x, k_y; x_p, y_p) = k_x \left[ b_0(x_p, y_p) + b_1(x_p, y_p)\, t_a(M) + b_2(x_p, y_p)\, t_a^2(M) + b_3(x_p, y_p)\, t_a^3(M) + b_4(x_p, y_p)\, t_a^4(M) \right]$$
Up to now, the phase can be expressed as an explicit function of spatial frequency, and a two-dimensional Taylor expansion of the phase concerning spatial frequency ( k x , k y ) can be performed.
$$\Phi = a_{00} + a_{10}(k_x - k_{x0}) + a_{01}(k_y - k_{y0}) + a_{20}(k_x - k_{x0})^2 + a_{02}(k_y - k_{y0})^2 + a_{11}(k_x - k_{x0})(k_y - k_{y0}) + \cdots$$
with
$$k_{x0} = k_x\big|_{t_a = 0,\, f_r = 0}, \quad k_{y0} = k_y\big|_{t_a = 0,\, f_r = 0}, \quad a_{ij} = \frac{1}{i!\,j!}\, \frac{\partial^{\,i+j} \Phi}{\partial k_x^i\, \partial k_y^j}\bigg|_{t_a = 0,\, f_r = 0}$$
The coefficient matrix can be expressed as
$$\mathbf{A} = \begin{bmatrix} a_{00} & a_{01} & a_{02} & \cdots & a_{0i} \\ a_{10} & a_{11} & a_{12} & \cdots & a_{1i} \\ a_{20} & a_{21} & a_{22} & \cdots & a_{2i} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{i0} & a_{i1} & a_{i2} & \cdots & a_{ii} \end{bmatrix}$$
It can be observed that the wavefront curvature effect not only introduces phase errors but also causes an offset in the position of the point targets. The position of target $P(x_p, y_p)$ in the PFA image is $(a_{10}, a_{01})$. Geometric distortion correction should therefore be conducted. It is important to note that if the geometric distortion correction is performed first, it will change the side lobe positions and affect the filtering process. Therefore, it is necessary to construct a spatially variant filter to eliminate the second-order errors first. The phase error correction would enhance the depth of focus and return the targets to their real positions.
Next, the second-order phase errors resulting from the space-variant wavefront curvature are simulated using the parameters in Table 1 and Table 2, which are presented in Figure 5. Figure 5a illustrates that the phase errors caused by the 2D coupled phase error in the imaging scene are all less than π / 4 , which can thus be neglected in the MEO/HM-BiSAR. The max absolute value of the second-order phase errors resulting from a 02 in Figure 5c is about 11 rad, making it the primary component of the space-variant wavefront curvature error. In Figure 5b, the second-order phase error relevant to a 20 is also larger than π / 4 . Therefore, the second-order wavefront curvature errors should be corrected in the MEO/HM-BiSAR. Additionally, as presented in Figure 5d, the maximum value of the remaining higher-order phase errors is approximately 0.0016 rad, which illustrates that the higher-order phase errors can be neglected.
Based on the above analysis, it is essential to correct the second-order phase errors resulting from wavefront curvature to ensure the image quality, while the remaining higher-order and second-order coupling errors can be neglected. The second-order wavefront curvature correction function is constructed as follows:
$$H_{\mathrm{wbc\_y}}(x, k_y; x_p, y_p) = \exp\!\left(j\, a_{02}(x_p, y_p)\, k_y^2\right)$$
$$H_{\mathrm{wbc\_x}}(k_x, y; x_p, y_p) = \exp\!\left(j\, a_{20}(x_p, y_p)\, k_x^2\right)$$
where the expressions of a 02 and a 20 are presented in Appendix C.
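A schematic of how these filters can be applied is sketched below. The phase coefficients are assumed to be supplied by user-provided callables (e.g., evaluated from the Appendix C expressions per image row or column); treating $a_{02}$ as a function of the column index and $a_{20}$ as a function of the row index is a simplification of the full two-dimensional space variance, so this is an illustration rather than the complete MSSWCC filtering chain.

```python
import numpy as np

def apply_second_order_correction(img, dx, dy, a20_of_row, a02_of_col):
    """Hybrid-domain application of the correction filters of Eqs. (42)-(43):
    each column is filtered in the (x, ky) domain and each row in the (kx, y) domain.
    a20_of_row(iy) and a02_of_col(ix) return the assumed phase coefficients;
    the exponential sign convention follows Eqs. (42)-(43)."""
    ny, nx = img.shape
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    out = img.astype(complex)
    # Column-wise correction along the y / ky axis
    for ix in range(nx):
        spec = np.fft.fft(out[:, ix])
        out[:, ix] = np.fft.ifft(spec * np.exp(1j * a02_of_col(ix) * ky**2))
    # Row-wise correction along the x / kx axis
    for iy in range(ny):
        spec = np.fft.fft(out[iy, :])
        out[iy, :] = np.fft.ifft(spec * np.exp(1j * a20_of_row(iy) * kx**2))
    return out
```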

4.2. Geometric Correction and Projection

After obtaining the expression for second-order wavefront curvature compensation, we will now conduct a detailed analysis of the position distortion of scattering points. By Equation (41), the analytical solution for the actual focusing position of the target can be obtained, which is shown as
$$x_p' = a_{10} = b_0 + \frac{k_{x0}}{k_{y0}} \left[ b_1 \left(C_2 - k_{y0} C_1\right) - b_2 C_1 \left(2 C_2 - k_{y0} C_1\right) + b_3 C_1^2 \left(3 C_2 - k_{y0} C_1\right) - b_4 C_1^3 \left(4 C_2 - k_{y0} C_1\right) \right]$$
$$y_p' = a_{01} = \frac{k_{x0}}{k_{y0}^2} \left[ b_1 C_2 - 2 b_2 k_{x0} C_1 C_2 + 3 b_3 k_{x0} C_1^2 C_2 - 4 b_4 k_{x0} C_1^3 C_2 \right]$$
where the expressions of $C_1$, $C_2$, and $C_3$ are presented in Appendix C. The above equations indicate that after BiSAR PFA imaging, the point situated at $(x_p, y_p)$ is focused at $(x_p', y_p')$. To provide a more intuitive demonstration of the positional distortion phenomenon, a simulation using a point grid is conducted, with its parameters listed in Table 1 and Table 2. Figure 6 shows the simulation results. It is evident that due to first-order phase errors, the target’s position is offset from its true position. As the displacement of these points becomes larger, the degree of image distortion in the slant range plane becomes more severe. This eventually alters the geometric shape and relative positioning of the imaging area. Therefore, geometric distortion correction is required after completing the spatially variant filtering.
Due to the effect of wavefront curvature, the terrain features cannot be precisely presented in the SAR image. In the proposed algorithm, reverse projection theory is employed for projection and distortion correction. The scattering point positions are located by using the relationships in (44) and (45). By utilizing these relationships, the phase information can be restored [44]. The process is outlined in Algorithm 1.
Algorithm 1 Geometric Distortion Correction Method
1: Input: BiSAR PFA imagery with positional distortion
2: Step 1: Create the pixel grid in the target coordinate system on the basis of the image point numbers and a fixed resolution interval.
3: Step 2: Calculate the shifted position $(x_p', y_p')$ of each target within the pixel grid by Equations (44) and (45).
4: Step 3: Use an $M \times M$ data block centered at $(x_p', y_p')$ as a reference and conduct interpolation to obtain the amplitude and phase information.
5: Step 4: Iterate through all grid points to obtain a well-focused and distortion-free ground image.
6: Output: Focused imagery
In practice, various interpolation kernels and the data block size can be flexibly selected according to the accuracy requirements. The flowchart of the MSSWCC algorithm is shown in Figure 7.
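A compact sketch of Algorithm 1 is given below. The mapping from ground-grid pixels to their distorted PFA-image positions is assumed to be provided by a user-supplied callable implementing Equations (44) and (45), and a cubic-spline interpolator stands in for the $M \times M$ block interpolation of Step 3.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def geometric_correction(pfa_img, shift_model, n_out):
    """Reverse-projection correction (Algorithm 1): for every pixel of the output
    ground grid, evaluate its shifted position (x_p', y_p') in the PFA image and
    interpolate the complex PFA image there."""
    rows, cols = np.meshgrid(np.arange(n_out), np.arange(n_out), indexing='ij')
    src_r, src_c = shift_model(rows, cols)   # fractional PFA-image coordinates
    # Interpolate real and imaginary parts separately to preserve the phase information
    re = map_coordinates(pfa_img.real, [src_r, src_c], order=3, mode='constant', cval=0.0)
    im = map_coordinates(pfa_img.imag, [src_r, src_c], order=3, mode='constant', cval=0.0)
    return re + 1j * im
```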

4.3. Computation Cost

For practical engineering applications in high-maneuvering platform SAR systems, it is essential to assess the computational cost of our proposed MSSWCC algorithm. It is widely recognized that executing an N-point FFT/IFFT involves approximately $5N \log_2 N$ floating-point operations (FLOPs); similarly, an N-point complex phase multiplication entails $6N$ FLOPs, and sinc interpolation with an M-point kernel requires about $2(2M-1)N$ FLOPs.
In the traditional BiSAR PFA algorithm, there are two range FFT/IFFTs, one azimuth FFT/IFFT, one complex multiplication, and two interpolations. We assume that $N_a$ and $N_r$ represent the azimuth and range sampling points, respectively. The total computation cost of the BiSAR PFA is
$$C_{\mathrm{BiPFA}} = 10 N_r N_a \log_2 N_r + 5 N_r N_a \log_2 N_a + 6 N_r N_a + 4(2M - 1) N_r N_a$$
Suppose that the BP algorithm of the bistatic SAR uses the same sampling points in two dimensions, and the computation cost of the BP algorithm in [45] is
$$C_{\mathrm{BP}} = 8 N_a N_r^2 + (5M + 5) N_r N_a \log_2 N_r + 5M \log_2 (M) + 6 N_r N_a$$
The fast factorized back-projection algorithm reduces the computational burden by sub-aperture imaging and fusing sub-aperture images in polar coordinates through image-domain interpolation. Suppose that the number of the sub-apertures is n; then, the FLOPs of the FFBP are defined as in [46]
C FFBP = 8 N r N a L a n + 16 N r N a log 2 N
From Figure 7, there are four range FFT/IFFTs, one azimuth FFT/IFFT, three complex multiplications, and two interpolations in the proposed MSSWCC algorithm. The computational load is
$$C_{\mathrm{MSSWCC}} = 20 N_r N_a \log_2 N_r + 5 N_r N_a \log_2 N_a + 18 N_r N_a + 4(2M - 1) N_r N_a$$
Observing (49) and (47), it is obvious that the computation cost of the proposed algorithm is $O(N^2 \log_2 N)$, which is far less than the $O(N^3)$ of the BPA. As is apparent from Equation (48), although the FFBP greatly lowers the computational complexity, the large number of interpolation operations in the image domain decreases the imaging accuracy. Based on the above analysis, the computational complexity of the proposed algorithm is much lower than that of the BP and FFBP algorithms.
Comparing Equations (46) and (49), it is clear that the proposed MSSWCC algorithm requires just two additional range FFT/IFFTs and two additional phase multiplications compared to the BiSAR PFA. The two algorithms have the same order of magnitude of computational load, while the proposed MSSWCC algorithm is capable of achieving high-resolution focusing in large imaging scenes. Therefore, the proposed MSSWCC algorithm is practical for real applications.
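As an illustration of these estimates, the sketch below plugs Equations (46), (47), and (49) into a small script, assuming an 8-point interpolation kernel and an 8192 × 8192 data set; the numbers are indicative only.

```python
import numpy as np

def flops_bipfa(nr, na, m=8):
    return 10*nr*na*np.log2(nr) + 5*nr*na*np.log2(na) + 6*nr*na + 4*(2*m - 1)*nr*na

def flops_bp(nr, na, m=8):
    return 8*na*nr**2 + (5*m + 5)*nr*na*np.log2(nr) + 5*m*np.log2(m) + 6*nr*na

def flops_msswcc(nr, na, m=8):
    return 20*nr*na*np.log2(nr) + 5*nr*na*np.log2(na) + 18*nr*na + 4*(2*m - 1)*nr*na

nr = na = 8192
print('BiSAR PFA: %.2e FLOPs' % flops_bipfa(nr, na))
print('BPA      : %.2e FLOPs' % flops_bp(nr, na))
print('MSSWCC   : %.2e FLOPs' % flops_msswcc(nr, na))
```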

5. Experimental Results

In this section, simulations and real BiSAR experiments have been conducted to validate the effectiveness of the proposed MSSWCC algorithm. First, numerical simulations of the scatter point targets are constructed for the MEO/HM-BiSAR. A detailed analysis of the focusing BiSAR image demonstrates the accuracy of the proposed MSSWCC algorithm. Subsequently, satellite-airborne imaging experiments are carried out to further prove the practicability of the proposed MSSWCC algorithm.

5.1. Simulation of Point Targets

The simulation parameters for point targets are listed in Table 1 and Table 2, and the corresponding imaging scene is shown in Figure 8. The simulated imaging scene includes 25 point targets uniformly distributed over a 4 × 4 km area. The size of the scene far exceeds the limitations of the BiSAR PFA [47]. Three point targets in the imaging area have been marked and analyzed in detail, where P2 and P3 are the edge targets with the most significant wavefront curvature. P1 is regarded as a reference point, which is located in the scene’s center. Additionally, the reference bistatic SAR algorithm in [48] has been used for comparison, which is denoted as ‘Xiong’s algorithm’ for convenience. The PFA based on sub-image segmentation can theoretically be extended to handle scenes that are arbitrarily large [49,50]. Hence, the PFA based on sub-image segmentation in [51] has also been used to further demonstrate the improvement of the proposed MSSWCC algorithm. It is noted that, for a more comprehensive comparison of the algorithms’ performance, the errors introduced by the ‘go-stop-go’ effect have been corrected during the imaging process in Xiong’s algorithm and the PFA based on sub-image segmentation. In addition, to enable a comparison under bistatic SAR imaging, we applied the bistatic PFA for imaging after sub-image segmentation in the PFA based on sub-image segmentation.
The imaging results for the whole imaging scene of Xiong’s algorithm and the proposed MSSWCC algorithm are shown in Figure 9. As shown in Figure 9a, only the center target is well-focused. We know the spatial variance of the imaging scene relative to the center target is severe. Additionally, due to the influence of squint angle in the BiSAR platforms, the imaging result has been rotated by a certain angle, where the position distribution is no longer the same as in the original scene. According to the PFA based on sub-image segmentation in [51], five levels of segmentation and 256 sub-images were required, which greatly increased the computation load. According to the five levels of segmentation, the imaging result is shown in Figure 9b. We know that by the 256 sub-image segmentation, the point targets in the imaging scene are focused well. However, this was achieved at the expense of the efficiency of the algorithm. Since there are 256 sub-images to do PFA operations on, it is several hundred times more computationally intensive than the traditional PFA algorithm. In addition, position distortion appeared in the PFA with sub-image segmentation, as shown in Table 3. We can see that the PFA with sub-image segmentation reached a deviation of 134 m, while the proposed algorithm was not distorted due to the simultaneous correction of the distortion and defocus of the scene. Figure 9c presents the imaging result by using the proposed MSSWCC algorithm. Clearly, the focus quality improved significantly with the proposed MSSWCC algorithm. The proposed MSSWCC algorithm already corrected the second-order space-variant wavefront curvature errors induced by the complex geometric configuration and the large scene size. The algorithm achieved high-resolution imaging within the scene by correcting the wavefront curvature errors. The comparison results also prove the effectiveness and improvement of the proposed algorithm.
In addition, to certify the effectiveness of our proposed MSSWCC algorithm, the marked points in Figure 8 were employed to assess imaging quality in detail. Figure 10 displays the contour plots of P1, P2, and P3 using Xiong’s algorithm. Figure 11 shows the contour plots of P1, P2, and P3 using the PFA based on sub-image segmentation. Figure 12 shows the contour plots of P1, P2, and P3 using the proposed MSSWCC algorithm. We can see that P1 was well-focused, while the other two points demonstrated poor performance in BiSAR focusing, as shown in Figure 10b,c. That is because Xiong’s algorithm ignores the spatial variance characteristics within the imaging scene. A comparison of the azimuth pulse responses of the three points obtained by Xiong’s and the proposed MSSWCC algorithms is presented in Figure 13. The center point’s pulse response profile has satisfactory results for the two algorithms. However, the imaging results of the edge points P2 and P3 are defocused severely in Xiong’s algorithm. Only the proposed MSSWCC algorithm could maintain good focusing for both the center and edge targets throughout the imaging scene with a low computation load. This is due to the proposed MSSWCC algorithm correcting the second-order space-variant wavefront curvature errors, as shown in Figure 12. It should be noted that there is an angle between the main lobe expansion direction in Figure 12 and the coordinate axis. The expansion direction of the scattering center energy in the BiSAR was primarily determined by the motion trajectory and the relative positions of the transmitter, receiver, and target.
Moreover, as demonstrated in Figure 13b,c, the broadening of the main lobe of points P2 and P3 occurred in the Xiong’s algorithm. As a comparison, there is a noticeable reduction in the main lobe, as well as side lobes for points P2 and P3 in the proposed MSSWCC algorithm. The wavefront curvature errors have been sufficiently eliminated, resulting in the main and side lobes of all marked points forming an ideal well-separated ‘cross’ shape. Additionally, compared to Xiong’s algorithm, it is evident that the imaging results remained well-focused even with an increase in the target positions. This outcome validates the effectiveness of our proposed MSSWCC algorithm. We noticed that the azimuth profiles of the PFA based on sub-image segmentation and the proposed algorithm are nearly identical, as seen in Figure 13. This is because both algorithms effectively eliminate the influence of the wavefront curvature errors and ultimately use the PFA algorithm for focusing. Although the two methods adopt different ideas—one based on sub-image segmentation and the other on compensating for spatially variant wavefront curvature errors across the whole imaging scene—they both effectively eliminated the impact of wavefront curvature errors and produced focused BiSAR PFA images. The maximum phase errors caused by the wavefront curvature of both algorithms are on the order of 0.01 rad, which has a negligible impact on imaging. As a result, the azimuth profiles of the two methods appear nearly indistinguishable.
Furthermore, to quantitatively assess the efficacy of our proposed MSSWCC algorithm, we selected the peak sidelobe ratio (PSLR), integrated sidelobe ratio (ISLR), and the 2D resolutions as performance criteria, where the results are shown in Table 4. As indicated by Table 4, the PSLR and ISLR for the marked points obtained through our proposed MSSWCC algorithm align closely with the values of BPA. Although the proposed algorithm uses the approximation process, these deviations had minimal impact on the imaging quality. With the proposed MSSWCC algorithm, both the center and edge targets have a high 2D resolution. The results indicate that wavefront curvature errors have been effectively addressed by this method. In contrast, when employing Xiong’s algorithm, the azimuth parameters measured at corresponding points showed a gradual decline due to unresolved space-variant phase errors. For edge targets in Xiong’s algorithm, the azimuth resolutions reached 15.35 m. Thus, based on the simulation results, our proposed MSSWCC algorithm provided high-resolution accurate imaging ability for the MEO/HM-BiSAR systems.
The drawbacks of the proposed algorithm in imaging performance stem from its approximation processes, including the approximation in the slant range modeling and the second-order wavefront curvature error analysis. As a result, it cannot achieve the perfect imaging performance of approximation-free methods like the BP algorithm. A further limitation of the proposed algorithm is that it does not consider higher-order motion parameters when establishing the echo model. When the maneuvering platform undergoes more complex motion, such as rapid changes in acceleration during the synthetic aperture time, the proposed algorithm may fail.

5.2. Real BiSAR Experiment

Furthermore, to validate the practicability of the proposed MSSWCC algorithm, real BiSAR experiments were carried out. The spaceborne BiSAR system operates at 5.4 GHz, with the GF-3 transmitter functioning in sliding spotlight mode and the airborne SAR receiver operating in squint-looking mode. Figure 14a presents the SAR imaging results of the BiSAR PFA algorithm without the MSSWCC operation, while Figure 14b shows the results of the proposed MSSWCC algorithm. To obtain a more comprehensive performance evaluation of the proposed algorithm, the imaging results of the real-measured data processed by the BPA were obtained, as shown in Figure 14c. Two regions have been selected for comparison in Figure 15a–f, where region A is near the scene center, and region B is on the edge of the imaging scene. As apparent from Figure 15, region A could be effectively focused by both algorithms. This is because the wavefront curvature errors in region A could be largely ignored due to the closeness to the scene center. Comparing Figure 15d with Figure 15e, the proposed MSSWCC algorithm achieved good focusing results and high resolution on the edge of the imaging scene, allowing for clear identification of object features. Comparing Figure 15e with Figure 15f, the imaging results of the proposed MSSWCC algorithm are close to those of the BP algorithm. In contrast, region B processed with the BiSAR PFA algorithm exhibited defocusing in the azimuth direction, where the outline of the horizontal road has become blurred. The reason is that the BiSAR PFA algorithm failed to correct the wavefront curvature errors caused by non-ideal platform motion and large imaging scenes.
To quantitatively evaluate the performance of the proposed MSSWCC algorithm on the real BiSAR data, we calculated the image entropy and image contrast for two regions. As is apparent from region A, the values of the image entropy in the proposed algorithm and the algorithm without MSSWCC operation came out to 5.927 and 5.955. The image entropy of the proposed algorithm is only 0.028 lower than that of the BiSAR PFA without MSSWCC operation in region A, while in region B, the image entropy of the proposed algorithm has been reduced by 0.333 compared to the algorithm without MSSWCC operation, indicating a more significant improvement in image focusing. This is because region A is located near the center of the scene, where wavefront curvature errors are relatively small, whereas region B is in the scene’s edge, where wavefront curvature errors increase. The wavefront curvature correction in region B showed a more pronounced effect, leading to a more noticeable improvement in focusing performance. The results in Table 5 indicate that, compared to the BiSAR PFA algorithm, the BiSAR images processed with the proposed MSSWCC algorithm have lower image entropy and higher image contrast in regions A and B, which are close to the values of the BPA, resulting in better image quality. These results demonstrate that the algorithm effectively corrects azimuth variance in edge scenes and shows favorable performance on real BiSAR data. The processing of real data further confirms the effectiveness and practicality of the proposed MSSWCC algorithm.
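The exact entropy and contrast definitions behind Table 5 are not restated here; the sketch below uses the common definitions (Shannon entropy of the normalized intensity distribution and the standard-deviation-to-mean ratio of the intensity), which is how such focus metrics are typically computed for SAR images.

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy of the normalized image intensity (a common SAR focus metric)."""
    p = np.abs(img)**2
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def image_contrast(img):
    """Ratio of the standard deviation to the mean of the image intensity."""
    inten = np.abs(img)**2
    return float(inten.std() / inten.mean())
```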

6. Discussion

This paper proposed an innovative high-resolution imaging algorithm based on a modified second-order space-variant wavefront curvature correction. By analyzing the orbital characteristics of MEO SAR and the motion of high-maneuvering platforms, a high-precision echo model for MEO/HM-BiSAR systems has been established. Based on this model, we theoretically derived the analytical expressions of higher-order phase errors caused by wavefront curvature in the wavenumber domain and analyzed their variations across the scene. Using this analysis, a phase error compensation function was designed to correct position distortions and defocusing caused by wavefront curvature errors.
The existing BiSAR imaging algorithms for complex trajectories struggle to balance accuracy and efficiency. For example, while the BP algorithm performed well in the experimental results, its computational burden significantly reduces its efficiency. Compared to existing algorithms, the proposed MSSWCC algorithm not only achieves high-resolution imaging capabilities over large scenes but also maintains computational complexity on the same order as traditional bistatic PFAs.
A limitation of our work is that the proposed algorithm assumes a three-dimensional constant-acceleration model for the high-maneuvering platform. Although this assumption meets the requirements of most high-maneuvering platforms, it may fail to describe the receiver's slant range accurately under more complex motion, for example when jerk must also be considered, potentially resulting in image defocusing. Future work could incorporate jerk into the echo model for more precise modeling; compensation functions could then be designed according to the specific scene parameters to enable high-resolution imaging for high-maneuvering platforms with arbitrary trajectories in MEO/HM-BiSAR.
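As a minimal numerical illustration of this limitation, the sketch below compares the constant-acceleration receiver trajectory assumed in this paper with a hypothetical third-order trajectory that adds a jerk term, and reports the resulting slant-range deviation to a scene-center target. The receiver position, velocity, and acceleration are taken from the simulation parameters in Table 2, while the jerk value and the aperture length are illustrative assumptions.

```python
import numpy as np

t = np.linspace(0.0, 2.0, 1001)            # slow time over an assumed 2 s aperture
p0 = np.array([-10e3, -40e3, 60e3])        # receiver position (m), from Table 2
v0 = np.array([20.0, 920.0, -500.0])       # receiver velocity (m/s), from Table 2
a0 = np.array([0.0, 50.0, -50.0])          # receiver acceleration (m/s^2), from Table 2
j0 = np.array([0.0, 5.0, -5.0])            # hypothetical jerk (m/s^3), illustrative only

# Constant-acceleration (second-order) trajectory model used in this paper
pos_2nd = p0 + np.outer(t, v0) + 0.5 * np.outer(t**2, a0)
# Third-order trajectory model that additionally includes the jerk term
pos_3rd = pos_2nd + np.outer(t**3, j0) / 6.0

# Slant-range deviation to a scene-center target at the origin
dr = np.abs(np.linalg.norm(pos_3rd, axis=1) - np.linalg.norm(pos_2nd, axis=1))
print(f"maximum slant-range deviation over the aperture: {dr.max():.3f} m")
```

When this deviation becomes comparable to a fraction of the wavelength (about 5.6 cm at 5.4 GHz), the residual phase error can no longer be neglected and the jerk term should be retained in the echo model.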

7. Conclusions

In this paper, a novel MSSWCC imaging algorithm has been proposed to achieve high-resolution imaging in MEO/HM-BiSAR. First, a high-precision echo model was derived to describe the slant range histories of MEO/HM-BiSAR precisely. Next, based on the echo model, a fourth-order series inversion was employed to express the phase explicitly as a function of spatial frequency. Using a 2D Taylor expansion, the analytical expressions of the high-order phase errors in the wavenumber domain were obtained. Subsequently, based on the analysis of the wavefront curvature errors in MEO/HM-BiSAR, a refocusing function for the correction of the second-order space-variant phase errors was designed. Finally, accurately focused two-dimensional SAR images were obtained by space-variant inverse filtering. The point-target simulations verify the validity of the proposed MSSWCC algorithm for MEO/HM-BiSAR, which yields high 2D resolution, and the real-data experiment further certifies its effectiveness and practicality for high-resolution BiSAR imaging.

Author Contributions

Conceptualization and methodology, H.R. and Y.Z.; writing—original draft preparation, H.R.; writing—review and editing, H.R., Y.Z., Z.L., G.L., X.Y., Y.G., L.L., X.Q., Q.H., C.D., H.M. and Y.D.; supervision, Y.Z. All authors read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China under Grants 61971163, 62201612, 62371170, 6240011408, and 62401615, as well as the China Postdoctoral Science Foundation via Grant 2024M754185.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

The polynomial series coefficients $k_{nT}$ and $k_{nR}$ ($n$ = 1, 2, 3, 4) in (4) and (8) are expressed as
$$
\begin{aligned}
k_{1T} &= 2\left[ (x_t - x_p)\,v_{tx} + (y_t - y_p)\,v_{ty} + v_{tz}\,z_t \right]\\
k_{2T} &= v_{tx}^2 + v_{ty}^2 + v_{tz}^2 + (x_t - x_p)\,a_{tx} + (y_t - y_p)\,a_{ty} + a_{tz}\,z_t\\
k_{3T} &= v_{tx}\,a_{tx} + v_{ty}\,a_{ty} + v_{tz}\,a_{tz}\\
k_{4T} &= \frac{a_{tx}^2 + a_{ty}^2 + a_{tz}^2}{4}\\
k_{1R} &= 2\left[ (x_r - x_p)\,v_{rx} + (y_r - y_p)\,v_{ry} + v_{rz}\,z_r \right]\\
k_{2R} &= v_{rx}^2 + v_{ry}^2 + v_{rz}^2 + (x_r - x_p)\,a_{rx} + (y_r - y_p)\,a_{ry} + a_{rz}\,z_r\\
k_{3R} &= v_{rx}\,a_{rx} + v_{ry}\,a_{ry} + v_{rz}\,a_{rz}\\
k_{4R} &= \frac{a_{rx}^2 + a_{ry}^2 + a_{rz}^2}{4}
\end{aligned}
\tag{A1}
$$
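As a quick numerical illustration of (A1), the following sketch evaluates the transmitter-side coefficients $k_{1T}$–$k_{4T}$ for a given transmitter state and target position; the function name and the state values are placeholders rather than parameters taken from the paper, and the receiver-side coefficients follow in exactly the same way.

```python
import numpy as np

def transmitter_coeffs(pt, vt, at, target):
    """Polynomial coefficients k_1T..k_4T of (A1) for a transmitter state and a ground target."""
    xt, yt, zt = pt
    vtx, vty, vtz = vt
    atx, aty, atz = at
    xp, yp = target
    k1 = 2.0 * ((xt - xp) * vtx + (yt - yp) * vty + vtz * zt)
    k2 = vtx**2 + vty**2 + vtz**2 + (xt - xp) * atx + (yt - yp) * aty + atz * zt
    k3 = vtx * atx + vty * aty + vtz * atz
    k4 = (atx**2 + aty**2 + atz**2) / 4.0
    return k1, k2, k3, k4

# Placeholder MEO transmitter state in the local scene coordinate system (illustrative values)
pt = np.array([1.2e6, -0.8e6, 9.5e6])     # position (m)
vt = np.array([4.9e3, 1.1e3, -0.3e3])     # velocity (m/s)
at = np.array([-3.5, 0.8, -5.9])          # acceleration (m/s^2)
print(transmitter_coeffs(pt, vt, at, target=(0.0, 0.0)))
```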
The Taylor expansion coefficients $K_n$ ($n$ = 0, 1, 2, 3, 4) in (11) are expressed as
K 0 = R sc + k 1 R R sc c + k 2 R R sc 2 c 2 + k 3 R R sc 3 c 3 + k 4 R R sc 4 c 4 K 1 = k 1 T ( k 1 R + k 1 T c + 1 ) k 1 R + 2 k 2 R R sc c + 3 k 3 R R sc 2 c 2 + 4 k 4 R R s q 3 c 3 K 2 = k 2 T + ( k 1 R + k 1 T c + 1 ) 2 ( k 2 R + 6 k 4 R R sc 2 c 2 + 3 k 3 R R s c c ) + k 2 R + k 2 T c ( k 1 R + 2 k 2 R R s c c + 3 k 3 R R sc 2 c 2 + 2 k 4 R R sc 3 c 3 ) K 3 = k 3 T + ( k 1 R + k 1 T c + 1 ) 3 ( k 3 R + 4 k 4 R R sc c ) + k 2 R + k 2 T c ( k 1 R + k 1 T c + 1 ) ( 6 k 3 R R sc c + 12 k 4 R R sc 2 c 2 ) + k 3 R + k 3 T c k 1 R + 2 k 2 R + 2 k 2 R R sc c + 2 k 2 R k 1 R + k 1 T c + 3 k 3 R R sc 2 c 2 + 4 k 4 R R sc 3 c 3 K 4 = k 4 T + ( k 1 R + k 1 T c + 1 ) 2 k 2 R + 6 k 3 R R sc k 3 R + k 3 T c 2 + ( k 1 R + k 1 T c + 1 ) ( 3 k 3 R R sc c + 12 k 4 R R sc k 2 R + k 2 T c 2 ) + 12 k 4 R R sc 2 c 2 + k 4 R ( k 1 R + k 1 T c + 1 ) 3 + ( k 2 R + k 2 T c ) 2 ( k 2 R + 3 k 3 R R sc c + 6 k 4 R R sc 2 c 2 ) + k 4 R + k 4 T c ( k 1 R + k 2 R R sc c + 3 k 3 R R sc 2 c 2 + 4 k 4 R R sc 3 c 3 )
where $R_{sc} = R_{Tc} + R_{Rc}$ is the bistatic slant range at $t_a = 0$.

Appendix B

The coefficients of the fourth-order Taylor expansion of $f_x(t_a)$ and $f_y(t_a)$ at $t_a = 0$ can be written as
κ t 0 f n = n t R T c 0 κ t 1 f n = v tn R Tc 0 k 1 Tc n t R Tc 0 2 κ t 2 f n = a tn 2 R Tc 0 k 2 Tc n t + k 1 Tc v tn R Tc 0 2 + k 1 Tc 2 n t R Tc 0 3 κ t 3 f n = a tn k 1 Tc + 2 v tn k 2 Tc + 2 n t k 3 Tc 2 R Tc 0 + v tn k 1 Tc 2 + 2 n t k 1 Tc k 2 Tc R Tc 0 3 k 1 Tc 3 n t R Tc 0 4 κ t 4 f n = 2 n t k 4 T c + 2 v t n k 3 Tc + a tn k 2 Tc 2 R Tc 0 2 v tn k 1 Tc 3 + 3 n t k 1 Tc 2 k 2 Tc R T c 0 4 + n t k 1 Tc 4 R T c 0 5 + 4 n t k 1 Tc k 3 Tc + 2 n t k 2 Tc 2 + 4 v tn k 1 Tc k 2 Tc + a tn k 1 Tc 2 2 R Tc 0 3
where $k_{iTc}$ ($i$ = 1, 2, 3, 4) is the value of $k_{iT}$ in (A1) at the point $(x_p, y_p) = (0, 0)$, and $R_{Tc0}$ is the transmitter slant range at $t_a = 0$ and $(x_p, y_p) = (0, 0)$. Since the ‘non-stop-go’ effect has been taken into account, the corresponding coefficients for the receiver take a different form from those of the transmitter and are written as
κ r 0 f n = a rn Δ t 0 2 + 2 v rn Δ t 0 + 2 n r 2 R ^ Rc κ r 1 f n = v rn + a rn Δ t 0 + Δ t 1 v rn + a rn Δ t 0 Δ t 1 R ^ Rc a rn k ^ rc 1 Δ t 0 2 + 2 k ^ rc 1 v rn Δ t 0 + 2 k ^ rc 1 n r 2 R ^ Rc 2 κ r 2 f n = a rn Δ t 0 2 + 2 v rn Δ t 0 + 2 n r 2 R ^ Rc 3 k ^ r c 1 2 + a rn Δ t 1 2 + 2 a rn Δ t 1 2 + a rn + 2 Δ t 2 v rn + 2 a rn Δ t 0 Δ t 2 2 R ^ Rc 2 k rc 1 v rn + 2 k r c 2 n r + 2 a r n Δ t 0 k ^ rc 1 + 2 Δ t 0 k ^ r c 2 v r n + 2 Δ t 1 k ^ rc 1 v rn + a r n Δ t 0 2 k ^ rc 2 + 2 a rn Δ t 0 Δ t 1 k ^ rc 1 2 R ^ Rc 2 κ r 3 f n = 2 a rn Δ t 2 + Δ t 3 v rn + a rn Δ t 0 Δ t 3 + a rn Δ t 1 Δ t 2 R ^ Rc 2 k ^ rc 2 v rn + 2 k ^ rc 3 n r + 2 Δ t 0 k ^ rc 3 v rn + 2 Δ t 1 k ^ rc 2 v rn + 2 Δ t 2 k ^ rc 1 v rn R ^ Rc 2 a rn k ^ rc 1 + 2 Δ t 0 k ^ rc 2 + Δ t 1 2 k ^ rc 1 + Δ t 0 2 k ^ rc 3 + 2 Δ t 0 Δ t 1 k ^ rc 2 + 2 Δ t 0 Δ t 2 k ^ rc 1 + 2 Δ t 1 k ^ rc 1 R ^ Rc 2 + 2 ( k ^ rc 1 2 v rn + Δ t 1 k ^ rc 1 2 v rn + Δ t 0 k ^ rc 1 k ^ rc 2 v rn + 2 k ^ rc 1 k ^ rc 2 n r + a rn Δ t 0 k ^ rc 1 2 + 2 + a rn Δ t 0 Δ t 1 k ^ rc 1 2 + a rn Δ t 0 2 k ^ rc 1 k ^ rc 2 ) R ^ Rc 3 a rn Δ t 0 2 + 2 v rn Δ t 0 + 2 n r R ^ Rc 4 k ^ rc 1 3 κ r 4 f n = a rn Δ t 2 2 + 2 a rn Δ t 3 + 2 Δ t 4 v rn + 2 a rn Δ t 0 Δ t 4 + 2 a rn Δ t 1 Δ t 3 2 R ^ Rc a rn 2 Δ t 2 2 k ^ rc 1 k ^ rc 3 + Δ t 0 2 k ^ rc 4 + 2 Δ t 0 Δ t 1 k ^ rc 3 + 2 Δ t 0 Δ t 3 k ^ rc 1 + 2 Δ t 1 Δ t 2 k ^ rc 1 + 2 Δ t 0 k ^ rc 3 + 2 Δ t 2 k ^ rc 1 2 R ^ Rc 2 a rn k ^ rc 1 2 1 + 2 Δ t 1 + 2 Δ t 0 Δ t 2 2 R ^ Rc 2 Δ t 0 2 k ^ rc 2 2 + Δ t 1 2 k ^ rc 2 + 2 Δ t 1 k ^ rc 2 + 2 Δ t 0 Δ t 2 k ^ rc 2 + k ^ rc 2 2 R ^ Rc 2 k ^ rc 3 v rn + k ^ rc 4 n r + Δ t 0 k ^ rc 4 v rn + Δ t 1 k ^ rc 3 v rn + Δ t 2 k ^ rc 2 v rn + Δ t 3 k ^ rc 1 v rn R ^ Rc 2 + v rn Δ t 0 k ^ rc 2 2 + Δ t 2 k ^ rc 1 2 + 2 k ^ rc 1 k ^ rc 2 + 2 Δ t 0 k ^ rc 1 k ^ rc 3 + 2 Δ t 1 k ^ rc 1 k ^ rc 2 R ^ Rc 3 + ( 2 k ^ rc 2 2 n r + a rn Δ t 1 2 k ^ rc 1 2 + 4 k ^ rc 1 k ^ rc 3 n r + 4 a rn Δ t 0 k ^ rc 1 k ^ rc 2 + 4 a rn Δ t 0 Δ t 1 k ^ rc 1 k ^ rc 2 ) 2 R ^ Rc 3 2 k ^ rc 1 v rn 2 Δ t 1 k ^ rc 1 v rn 6 k ^ rc 2 n r 6 Δ t 0 k ^ rc 2 v rn 2 R ^ Rc 4 k ^ rc 1 2 3 Δ t 0 2 k ^ rc 2 + 2 Δ t 0 Δ t 1 k ^ rc 1 + 2 Δ t 0 k ^ rc 1 2 R ^ Rc 4 a rn k ^ rc 1 2 a rn Δ t 0 2 k ^ rc 1 4 + 2 v rn Δ t 0 k ^ rc 1 4 + 2 n r k ^ rc 1 4 2 R ^ Rc 5
where $\hat{k}_{iR}$ ($i$ = 1, 2, 3, 4) denote the Taylor expansion coefficients of the ‘non-stop-go’ range history of the receiver $R_R(t_a + \Delta\tau_{bi})$ at $t_a = 0$, and $\hat{k}_{nRc}$ ($n$ = 1, 2, 3, 4) is the value of $\hat{k}_{nR}$ at the point $(x_p, y_p) = (0, 0)$. $\hat{R}_{Rc0}$ is the ‘non-stop-go’ range history of the receiver at $t_a = 0$ and $(x_p, y_p) = (0, 0)$. The expressions of $\hat{R}_{Rc0}$ and $\hat{k}_{nRc}$ ($n$ = 1, 2, 3, 4) are
R ^ Rc 0 = R rc 0 + k 1 Rc Δ t 0 + k 2 Rc Δ t 0 2 + k 3 Rc Δ t 0 3 + k 4 Rc Δ t 0 4 k ^ rc 1 = ( Δ t 1 + 1 ) ( k 1 Rc + 2 k 2 Rc Δ t 0 + 3 k 3 Rc Δ t 0 2 + 4 k 4 Rc Δ t 0 3 ) k ^ rc 2 = k 1 Rc Δ t 2 + k 2 Rc ( ( Δ t 1 + 1 ) 2 + 2 Δ t 0 Δ t 2 ) + k 3 Rc ( Δ t 0 ( ( Δ t 1 + 1 ) 2 + 2 Δ t 0 Δ t 2 ) + Δ t 0 2 Δ t 2 + 2 Δ t 0 ( Δ t 1 + 1 ) 2 ) + k 4 Rc ( Δ t 0 ( Δ t 0 ( ( Δ t 1 + 1 ) 2 + 2 Δ t 0 Δ t 2 ) + Δ t 0 2 Δ t 2 + 2 Δ t 0 ( Δ t 1 + 1 ) 2 ) + 3 Δ t 0 2 ( Δ t 1 + 1 ) 2 + Δ t 0 3 Δ t 2 ) k ^ rc 3 = k 1 Rc Δ t 3 + 2 k 2 Rc Δ t 0 Δ t 3 + Δ t 2 + Δ t 1 Δ t 2 + k 3 Rc 1 + 3 Δ t 1 + 3 Δ t 1 2 + Δ t 1 3 + 6 Δ t 0 Δ t 2 + 3 Δ t 0 2 Δ t 3 + 6 Δ t 0 Δ t 1 Δ t 2 + k 4 Rc 4 Δ t 0 + 12 Δ t 0 Δ t 1 + 4 Δ t 0 3 Δ t 3 + 12 Δ t 0 2 Δ t 1 Δ t 2 + 12 Δ t 0 Δ t 1 2 + 4 Δ t 0 Δ t 1 3 + 12 Δ t 0 2 Δ t 2 k ^ rc 4 = k 1 Rc Δ t 4 + k 2 Rc 2 Δ t 3 + Δ t 2 2 + 2 Δ t 0 Δ t 4 + 2 Δ t 1 Δ t 3 + k 3 Rc 3 Δ t 0 2 Δ t 4 + 6 Δ t 0 Δ t 1 Δ t 3 + 6 Δ t 0 Δ t 3 + 6 Δ t 1 Δ t 2 + 3 Δ t 0 Δ t 2 2 + 3 Δ t 1 2 Δ t 2 + 3 Δ t 2 + k 4 Rc 1 + 12 Δ t 0 2 Δ t 3 + Δ t 4 + 4 Δ t 1 + 6 Δ t 1 2 4 Δ t 1 3 + Δ t 1 4 + 6 Δ t 0 2 Δ t 2 2 + 12 Δ t 0 Δ t 2 + 4 Δ t 0 3 Δ t 4 + 12 Δ t 0 Δ t 1 2 Δ t 2 + 12 Δ t 0 2 Δ t 1 Δ t 3 + 24 Δ t 0 Δ t 1 Δ t 2

Appendix C

The coefficients $a_{20}$ and $a_{02}$ of the refocusing function are
a 20 = b 1 / k y 0 2 k y 0 C 2 + k x 0 C 3 + b 2 / k y 0 2 k x 0 C 2 2 2 k x 0 C 3 C 1 2 k y 0 C 1 C 2 b 3 C 1 / k y 0 2 C 2 2 + C 3 C 1 + 3 k y 0 C 1 C 2 + b 4 C 1 / k y 0 2 4 C 1 C 2 2 + 2 C 2 2 4 C 3 C 4 k y 0 C 1 2 C 2 a 02 = k x 0 b 1 n 2 A + n 4 2 A q 2 + B n 3 A q k x 0 q 2 / k y 0 3 + B / 2 + n 1 k x 0 / k y 0 3 b 2 2 C 1 n 2 A + n 4 2 A q 2 + B n 3 A q k x 0 q 2 / k y 0 3 + B / 2 + k x 0 n 1 / k y 0 3 B C 1 2 / 4 q 4 + b 3 2 C 1 n 2 A + n 4 2 A q 2 + B n 3 A q k x 0 q 2 / k y 0 3 + B / 2 + n 1 k x 0 / k y 0 3 3 B / 4 q 4 + b 4 C 1 4 B / q 4 4 C 1 n 2 A + n 4 2 A q 2 + B n 3 A q k x 0 q 2 / k y 0 3 + B / 2 + k x 0 n 1 / k y 0 3
where q = m 0 k x 0 / k y 0 . The values of C 1 , C 2 , C 3 , A and B can be written as
$$
\begin{aligned}
C_1 &= n_1 q - n_2 q^2 + n_3 q^3 - n_4 q^4\\
C_2 &= n_1 - 2 n_2 q + 3 n_3 q^2 - 4 n_4 q^3\\
C_3 &= n_2 - 3 n_3 q + 6 n_4 q^2\\
A &= k_{x0}^2 / k_{y0}^4 - 2 k_{x0} q / k_{y0}^3\\
B &= 4 k_{x0}^2 q^2 / k_{y0}^4
\end{aligned}
$$

References

  1. Matar, J.; Lopez-Dekker, P.; Krieger, G. Potentials and Limitations of MEO SAR. In Proceedings of the European Conference on Synthetic Aperture Radar, Hamburg, Germany, 6–9 June 2016; pp. 1–5. [Google Scholar]
  2. Matar, J.; Rodriguez-Cassola, M.; Krieger, G.; López-Dekker, P.; Moreira, A. MEO SAR: System Concepts and Analysis. IEEE Trans. Geosci. Remote Sens. 2020, 58, 1313–1324. [Google Scholar] [CrossRef]
  3. Liu, W.; Sun, G.C.; Xia, X.G.; You, D.; Xing, M.; Bao, Z. Highly Squinted MEO SAR Focusing Based on Extended Omega-K Algorithm and Modified Joint Time and Doppler Resampling. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9188–9200. [Google Scholar] [CrossRef]
  4. Chen, J.; Xing, M.; Sun, G.C.; Gao, Y.; Liu, W.; Guo, L.; Lan, Y. Focusing of Medium-Earth-Orbit SAR Using an ASE-Velocity Model Based on MOCO Principle. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3963–3975. [Google Scholar] [CrossRef]
  5. Liu, W.; Sun, G.C.; Xing, M.; Li, H.; Bao, Z. Focusing of MEO SAR Data Based on Principle of Optimal Imaging Coordinate System. IEEE Trans. Geosci. Remote Sens. 2020, 58, 5477–5489. [Google Scholar] [CrossRef]
  6. Matar, J.; Rodriguez-Cassola, M.; Krieger, G.; Moreira, A. On the Equivalence of LEO-SAR Constellations and Complex High-Orbit SAR Systems for the Monitoring of Large-Scale Processes. IEEE Geosci. Remote Sens. Lett. 2024, 21, 8500205. [Google Scholar] [CrossRef]
  7. Zhang, Y.; Ren, H.; Lu, Z.; Yang, X.; Li, G. Focusing of Highly Squinted Bistatic SAR With MEO Transmitter and High Maneuvering Platform Receiver in Curved Trajectory. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5227522. [Google Scholar] [CrossRef]
  8. Song, X.; Li, Y.; Wu, C.; Sun, Z.; Cen, X.; Zhang, T. A New Frequency-Domain Imaging for High-maneuverability Bistatic Forward-looking SAR. In Proceedings of the 2021 CIE International Conference on Radar (Radar), Haikou, China, 15–19 December 2021; pp. 778–782. [Google Scholar] [CrossRef]
  9. Hu, X.; Xie, H.; Yi, S.; Zhang, L.; Lu, Z. An Improved NLCS Algorithm Based on Series Reversion and Elliptical Model Using Geosynchronous Spaceborne—Airborne UHF UWB Bistatic SAR for Oceanic Scene Imaging. Remote Sens. 2024, 16, 1131. [Google Scholar]
  10. Wang, Z.; Liu, F.; Zeng, T.; Wang, C. A Novel Motion Compensation Algorithm Based on Motion Sensitivity Analysis for Mini-UAV-Based BiSAR System. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5205813. [Google Scholar] [CrossRef]
  11. Zhang, S.; Liu, F.; Wang, Z.; Wang, C.; Lv, R.; Yao, D. A LEO Spaceborne-Airborne Bistatic SAR Imaging Experiment. In Proceedings of the IEEE International Conference on Signal Processing, Communications and Computing, ICSPCC, Bali, Indonesia, 19–22 August 2022; pp. 1–5. [Google Scholar] [CrossRef]
  12. Tang, S.; Guo, P.; Zhang, L.; Lin, C. Modeling and precise processing for spaceborne transmitter/missile-borne receiver SAR signals. Remote Sens. 2019, 11, 346. [Google Scholar]
  13. Sun, Z.; Wu, J.; Li, Z.; An, H.; He, X. Geosynchronous Spaceborne–Airborne Bistatic SAR Data Focusing Using a Novel Range Model Based on One-Stationary Equivalence. IEEE Trans. Geosci. Remote Sens. 2021, 59, 1214–1230. [Google Scholar] [CrossRef]
  14. Zhang, Y.; Xiong, W.; Dong, X.; Hu, C. A Novel Azimuth Spectrum Reconstruction and Imaging Method for Moving Targets in Geosynchronous Spaceborne–Airborne Bistatic Multichannel SAR. IEEE Trans. Geosci. Remote Sens. 2020, 58, 5976–5991. [Google Scholar] [CrossRef]
  15. An, H.; Wu, J.; Teh, K.C.; Sun, Z.; Yang, J. Geosynchronous Spaceborne–Airborne Bistatic SAR Imaging Based on Fast Low-Rank and Sparse Matrices Recovery. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5207714. [Google Scholar] [CrossRef]
  16. Tang, W.; Huang, B.; Wang, W.Q.; Zhang, S.; Liu, W.; Wang, Y. A Novel Imaging Algorithm for Forward-looking GEO/Missile-borne Bistatic SAR. In Proceedings of the Asia-Pacific Conference on Synthetic Aperture Radar APSAR, Xiamen, China, 26–29 November 2019; pp. 1–4. [Google Scholar] [CrossRef]
  17. Zhang, S.; Gao, Y.; Xing, M.; Guo, R.; Chen, J.; Liu, Y. Ground Moving Target Indication for the Geosynchronous-Low Earth Orbit Bistatic Multichannel SAR System. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2021, 14, 5072–5090. [Google Scholar] [CrossRef]
  18. Wu, J.; Sun, Z.; An, H.; Qu, J.; Yang, J. Azimuth Signal Multichannel Reconstruction and Channel Configuration Design for Geosynchronous Spaceborne–Airborne Bistatic SAR. IEEE Trans. Geosci. Remote Sens. 2019, 57, 1861–1872. [Google Scholar] [CrossRef]
  19. Ding, J.; Li, Y.; Li, M.; Wang, J. Focusing High Maneuvering Bistatic Forward-Looking SAR With Stationary Transmitter Using Extended Keystone Transform and Modified Frequency Nonlinear Chirp Scaling. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 2476–2492. [Google Scholar] [CrossRef]
  20. Song, X.; Li, Y.; Zhang, T.; Li, L.; Gu, T. Focusing High-Maneuverability Bistatic Forward-Looking SAR Using Extended Azimuth Nonlinear Chirp Scaling Algorithm. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5240814. [Google Scholar] [CrossRef]
  21. Miao, Y.; Wu, J.; Li, Z.; Yang, J. A Generalized Wavefront-Curvature-Corrected Polar Format Algorithm to Focus Bistatic SAR Under Complicated Flight Paths. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2020, 13, 3757–3771. [Google Scholar] [CrossRef]
  22. Xie, H.; Hu, J.; Duan, K.; Wang, G. High-Efficiency and High-Precision Reconstruction Strategy for P-Band Ultra-Wideband Bistatic Synthetic Aperture Radar Raw Data Including Motion Errors. IEEE Access 2020, 8, 31143–31158. [Google Scholar] [CrossRef]
  23. Wang, Y.; Liu, Y.; Li, Z.; Suo, Z.; Fang, C.; Chen, J. High-Resolution Wide-Swath Imaging of Spaceborne Multichannel Bistatic SAR with Inclined Geosynchronous Illuminator. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2380–2384. [Google Scholar]
  24. Pu, W.; Wu, J.; Huang, Y.; Yang, J.; Yang, H. Fast Factorized Backprojection Imaging Algorithm Integrated With Motion Trajectory Estimation for Bistatic Forward-Looking SAR. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 3949–3965. [Google Scholar]
  25. Zhou, S.; Yang, L.; Zhao, L.; Wang, Y.; Xing, M. A New Fast Factorized Back Projection Algorithm for Bistatic Forward-Looking SAR Imaging Based on Orthogonal Elliptical Polar Coordinate. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1508–1520. [Google Scholar]
  26. Hu, X.; Xie, H.; Zhang, L.; Hu, J.; He, J.; Yi, S.; Jiang, H.; Xie, K. Fast Factorized Backprojection Algorithm in Orthogonal Elliptical Coordinate System for Ocean Scenes Imaging Using Geosynchronous Spaceborne—Airborne VHF UWB Bistatic SAR. Remote Sens. 2023, 15, 2215. [Google Scholar]
  27. Yuan, Y.; Chen, S.; Zhao, H. An Improved RD Algorithm for Maneuvering Bistatic Forward-Looking SAR Imaging with a Fixed Transmitter. Sensors 2017, 17, 1152. [Google Scholar]
  28. Li, C.; Zhang, H.; Deng, Y.; Wang, R.; Liu, K.; Liu, D.; Jin, G.; Zhang, Y. Focusing the L-Band Spaceborne Bistatic SAR Mission Data Using a Modified RD Algorithm. IEEE Trans. Geosci. Remote Sens. 2020, 58, 294–306. [Google Scholar] [CrossRef]
  29. Wong, F.H.; Cumming, I.G.; Neo, Y.L. Focusing Bistatic SAR Data Using the Nonlinear Chirp Scaling Algorithm. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2493–2505. [Google Scholar] [CrossRef]
  30. Chen, S.; Yuan, Y.; Zhang, S.; Zhao, H.; Chen, Y. A New Imaging Algorithm for Forward-Looking Missile-Borne Bistatic SAR. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 1543–1552. [Google Scholar] [CrossRef]
  31. Deng, Y. Focus Improvement of Airborne High-Squint Bistatic SAR Data Using Modified Azimuth NLCS Algorithm Based on Lagrange Inversion Theorem. Remote Sens. 2021, 13, 1916. [Google Scholar]
  32. Wang, Z.; Guo, Q.; Tian, X.; Chang, T.; Cui, H.L. Millimeter-Wave Image Reconstruction Algorithm for One-Stationary Bistatic SAR. IEEE Trans. Microw. Theory Tech. 2020, 68, 1185–1194. [Google Scholar] [CrossRef]
  33. Wang, X.; Zhu, D.; Mao, X.; Zhu, Z. Space-Variant Filtering for Wavefront Curvature Correction in Polar Formatted Bistatic SAR Image. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 940–950. [Google Scholar] [CrossRef]
  34. Zhang, Q.; Wu, J.; Li, Z.; Miao, Y.; Huang, Y.; Yang, J. PFA for Bistatic Forward-Looking SAR Mounted on High-Speed Maneuvering Platforms. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6018–6036. [Google Scholar] [CrossRef]
  35. Wang, F.; Zhang, L.; Cao, Y.; Yeo, T.S.; Lu, J.; Han, J.; Peng, Z. High-Resolution Bistatic Spotlight SAR Imagery With General Configuration and Accelerated Track. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5213218. [Google Scholar] [CrossRef]
  36. Han, S.; Zhu, D.; Mao, X. A Modified Space-Variant Phase Filtering Algorithm of PFA for Bistatic SAR. IEEE Geosci. Remote Sens. Lett. 2022, 19, 4008005. [Google Scholar] [CrossRef]
  37. Shi, T.; Mao, X.; Jakobsson, A.; Liu, Y. Efficient BiSAR PFA Wavefront Curvature Compensation for Arbitrary Radar Flight Trajectories. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5221514. [Google Scholar] [CrossRef]
  38. Huang, L.; Qiu, X.; Hu, D.; Ding, C. An advanced 2-D spectrum for high-resolution and MEO spaceborne SAR. In Proceedings of the 2009 2nd Asian-Pacific Conference on Synthetic Aperture Radar, Xi’an, China, 26–30 October 2009. [Google Scholar]
  39. Qian, G.; Wang, Y. Analysis of Modeling and 2-D Resolution of Satellite–Missile Borne Bistatic Forward-Looking SAR. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5222314. [Google Scholar] [CrossRef]
  40. Huo, T.; Li, Y.; Yang, C.; Cao, C.; Wang, Y. A Novel Imaging Method for MEO SAR-GMTI Systems. In Proceedings of the IGARSS 2022-2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; pp. 2498–2501. [Google Scholar] [CrossRef]
  41. Tang, S.; Zhang, L.; Guo, P.; Liu, G.; Sun, G.C. Acceleration Model Analyses and Imaging Algorithm for Highly Squinted Airborne Spotlight-Mode SAR with Maneuvers. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 1120–1131. [Google Scholar] [CrossRef]
  42. Zheng, Z.; Tan, G.; Jiang, D. A Bidirectional Resampling Imaging Algorithm for High Maneuvering Bistatic Forward-Looking SAR Based on Chebyshev Orthogonal Decomposition. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5211512. [Google Scholar] [CrossRef]
  43. An, H.; Wu, J.; Teh, K.C.; Sun, Z.; Yang, J. Nonambiguous Image Formation for Low-Earth-Orbit SAR with Geosynchronous Illumination Based on Multireceiving and CAMP. IEEE Trans. Geosci. Remote Sens. 2021, 59, 348–362. [Google Scholar] [CrossRef]
  44. Deng, H.; Li, Y.; Liu, M.; Mei, H.; Quan, Y. A Space-Variant Phase Filtering Imaging Algorithm for Missile-Borne BiSAR With Arbitrary Configuration and Curved Track. IEEE Sens. J. 2018, 18, 3311–3326. [Google Scholar] [CrossRef]
  45. Guo, Y.; Yu, Z.; Li, J.; Li, C. Focusing Spotlight-Mode Bistatic GEO SAR with a Stationary Receiver Using Time-Doppler Resampling. IEEE Sens. J. 2020, 20, 10766–10778. [Google Scholar] [CrossRef]
  46. An, H.; Wu, J.; He, Z.; Li, Z.; Yang, J. Geosynchronous Spaceborne–Airborne Multichannel Bistatic SAR Imaging Using Weighted Fast Factorized Backprojection Method. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1590–1594. [Google Scholar] [CrossRef]
  47. Gorham, L.; Rigling, B. Fast corrections for polar format algorithm with a curved flight path. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 2815–2824. [Google Scholar] [CrossRef]
  48. Xiong, T.; Li, Y.; Li, Q.; Wu, K.; Zhang, L.; Zhang, Y.; Mao, S.; Han, L. Using an Equivalence-Based Approach to Derive 2-D Spectrum of BiSAR Data and Implementation Into an RDA Processor. IEEE Trans. Geosci. Remote Sens. 2021, 59, 4765–4774. [Google Scholar] [CrossRef]
  49. Xin, N. Research on Key Technique of Highly Squinted Sliding Spotlight SAR Imaging with Varied Receiving Range Bin. J. Electron. Inf. Technol. 2016, 38, 3122–3128. [Google Scholar]
  50. Xin, N.; Shijian, S.; Hui, Y.; Ying, L.; Long, Z.; Wanming, L. A wide-field SAR polar format algorithm based on quadtree sub-image segmentation. In Proceedings of the IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 9355–9358. [Google Scholar] [CrossRef]
  51. Nie, X.; Zhuang, L.; Shen, S. A Quadtree Beam-Segmenting Based Wide-Swath SAR Polar Format Algorithm. IEEE Access 2020, 8, 147682–147691. [Google Scholar] [CrossRef]
Figure 1. The geometry model and range model of MEO/HM-BiSAR. (a) The Earth-centered, Earth-fixed coordinate conversion. (b) The range model of MEO/HM-BiSAR in the local coordinate system.
Figure 2. Beam scanning speed and the relative speed of MEO SAR at different true anomalies.
Figure 3. Range and phase errors of the range model using the ‘go-stop-go’ approximation, the BiSAR echo model in [43], and the proposed range model for MEO/HM-BiSAR. (a) Range errors of the three models for MEO/HM-BiSAR. (b) Phase errors of the range model using the ‘go-stop-go’ approximation for different velocities and accelerations. (c) Phase errors of the BiSAR echo model in [43] for different velocities and accelerations. (d) Phase errors of the proposed model for different velocities and accelerations.
Figure 4. The absolute values of the linear, second-order, and higher-order phase errors induced by wavefront curvature. (a) The linear phase error. (b) The second-order phase error. (c) The higher-order phase error.
Figure 5. The phase errors caused by the space-variant wavefront curvature errors in the wavenumber domain. (a) Two-dimensional coupled phase error. (b) Quadratic phase error in the $k_x$ direction induced by $a_{20}$. (c) Quadratic phase error in the $k_y$ direction induced by $a_{02}$. (d) Higher-order phase error.
Figure 6. The position distortion of the imaging area.
Figure 7. The flowchart of the proposed MSSWCC algorithm in MEO/HM-BiSAR.
Figure 8. The imaging scene of the point-target simulation.
Figure 9. Simulation results of the whole imaging scene in MEO/HM-BiSAR obtained by different algorithms. (a) Xiong's algorithm. (b) The PFA based on sub-image segmentation [51]. (c) The proposed MSSWCC algorithm.
Figure 10. Contour plot of the marked points in Figure 8 using Xiong’s algorithm. (a) The focused result of P1. (b) The focused result of P2. (c) The focused result of P3.
Figure 11. Contour plot of the marked points in Figure 8 using the PFA based on sub-image segmentation. (a) The focused result of P1. (b) The focused result of P2. (c) The focused result of P3.
Figure 12. Contour plot of the marked points in Figure 8 using the proposed MSSWCC algorithm. (a) The focused result of P1. (b) The focused result of P2. (c) The focused result of P3.
Figure 13. Comparison of the azimuth pulse responses of the marked points in Figure 8 for different algorithms. (a) The azimuth pulse responses of the different algorithms at P1. (b) The azimuth pulse responses of the different algorithms at P2. (c) The azimuth pulse responses of the different algorithms at P3.
Figure 14. The data-processing results of the real BiSAR experiment. (a) Without the MSSWCC operation. (b) With the MSSWCC operation. (c) With the BP algorithm.
Figure 15. Expanded image of regions A and B. (a) Region A without the MSSWCC operation. (b) Region A with the MSSWCC operation. (c) Region A with the BPA. (d) Region B without the MSSWCC operation. (e) Region B with the MSSWCC operation. (f) Region B with the BPA.
Table 1. Orbital parameters of MEO SAR.

Orbital Elements                  Values
Semimajor axis (km)               16,371
Inclination (deg)                 60
Eccentricity                      0.003
Argument of perigee (deg)         90
Ascending node (deg)              105
True anomaly (deg)                0–360
Antenna look-down angle (deg)     10
Beam direction angle (deg)        90
Table 2. Parameters of the simulated MEO/HM-BiSAR system.

Parameters                        Values
Receiver position (km)            (−10, −40, 60)
Receiver velocity (m/s)           (20, 920, −500)
Receiver acceleration (m/s²)      (0, 50, −50)
Carrier frequency (GHz)           5.4
Bandwidth (MHz)                   200
Sampling frequency (MHz)          250
PRF (Hz)                          3000
Pulse duration (µs)               2
Table 3. The coordinates of the marked points in the sub-image PFA and the proposed MSSWCC algorithm.

Algorithm                       Coordinate of P1   Coordinate of P2      Coordinate of P3
The sub-image PFA algorithm     (0 m, 0 m)         (−1870 m, −2023 m)    (2135 m, 1974 m)
The proposed algorithm          (0 m, 0 m)         (−2000 m, −2000 m)    (2000 m, 2000 m)
Table 4. Measured parameters of the selected targets.

Algorithm             Point   PSLR/dB   ISLR/dB   Res-A/m   Res-R/m
Xiong's algorithm     P1      −15.56    −12.86    1.599     0.840
                      P2      −1.702    −5.592    12.85     0.840
                      P3      −1.492    −6.506    15.35     0.840
Proposed algorithm    P1      −15.67    −12.89    1.599     0.840
                      P2      −15.59    −12.92    1.599     0.840
                      P3      −15.42    −12.88    1.600     0.840
BP algorithm          P1      −15.70    −12.89    1.600     0.840
                      P2      −15.70    −12.89    1.600     0.840
                      P3      −15.70    −12.89    1.600     0.840
Table 5. Imaging performance of regions A and B.

Algorithm            Region A                          Region B
                     Image Entropy   Image Contrast    Image Entropy   Image Contrast
Without MSSWCC       5.955           0.656             6.683           0.722
Proposed algorithm   5.927           0.659             6.350           0.750
BP algorithm         5.927           0.659             6.346           0.751
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
