Efficient Time-Domain Imaging Processing for One-Stationary Bistatic Forward-Looking SAR Including Motion Errors

With the rapid development of one-stationary bistatic forward-looking synthetic aperture radar (OS-BFSAR) technology, the huge amount of remote sensing data presents challenges for real-time imaging processing. In this paper, an efficient time-domain algorithm (ETDA) that accounts for motion errors in OS-BFSAR imaging processing is presented. This method can not only precisely handle the large spatial variances, serious range-azimuth coupling and motion errors, but also greatly improve the imaging efficiency compared with the direct time-domain algorithm (DTDA). Besides, it represents the subimages on polar grids in the ground plane instead of the slant-range plane, and derives the sampling requirements considering motion errors for these polar grids to offer a near-optimum tradeoff between imaging precision and efficiency. First, the OS-BFSAR imaging geometry is built, and the DTDA for OS-BFSAR imaging is provided. Second, the polar grids of the subimages are defined, and the subaperture imaging in the ETDA is derived. The sampling requirements for the polar grids are derived from the point of view of the bandwidth. Finally, the implementation and computational load of the proposed ETDA are analyzed. Experimental results based on simulated and measured data validate that the proposed ETDA outperforms the DTDA in terms of efficiency improvement.


Introduction
Nowadays, synthetic aperture radar (SAR) offers a major advantage for high-resolution imaging in all-time and all-weather conditions, and plays a very significant role in remote sensing, geosciences, and surveillance and reconnaissance applications; thus, it is widely investigated in both civilian and military fields [1][2][3][4][5][6].
Bistatic forward-looking SAR (BFSAR) [7] is a special bistatic SAR (BSAR) system [8,9], in which the radar system works in the forward-looking mode rather than the side-looking mode of the traditional BSAR system. It not only inherits the advantages of the BSAR system, such as reduced vulnerability for military applications, exploitation of additional information and improved detectability of stealth targets, but also carries out high-resolution scene imaging in the forward direction [10]; therefore, it has gained wide attention in missile navigation, battlefield reconnaissance, and forward-looking imaging. Recently, several countries have carried out experiments on BFSAR imaging processing, and some excellent results were obtained, such as the BFSAR experiment with a fixed ground-based receiver working in the forward-looking mode [11,12], the BFSAR experiment with a vehicle-based transmitter and a receiver on board an ultra-light aircraft [13][14][15], and the BFSAR experiment using the TerraSAR-X as the transmitter and the airborne phased array multifunctional imaging radar (PAMIR) as the receiver [16,17]. One-stationary BFSAR (OS-BFSAR) is regarded as a SAR system combining the one-stationary BSAR (OS-BSAR) and BFSAR systems, which has a stationary radar (transmitter or receiver) fixed on top of a high tower or mountain and a moving radar (receiver or transmitter) placed on a vehicle/airborne/spaceborne platform. The OS-BFSAR system not only inherits the advantages of the OS-BSAR and BFSAR systems, but is also subject to the difficulties of imaging processing due to its special configuration, i.e., the huge amount of echo data, large spatial variance, serious range-azimuth coupling and complicated motion errors. These make the precise imaging processing of OS-BFSAR data more complicated.
Like monostatic SAR imaging algorithms, BSAR imaging algorithms can also be categorized into two groups: frequency-domain algorithms (FDAs) and time-domain algorithms (TDAs). The FDAs generally aim to minimize the time of the imaging processing. However, this aim can induce many limitations concerning the bandwidth, integration time, motion errors, assumptions and approximations in the imaging processing, real-time imaging processing, memory requirements and so on, which may restrict the application of the FDAs. Thus, FDAs are only available for some particular SAR imaging processing cases. Recently, monostatic FDAs [1][2][3][4], which include the range Doppler algorithm (RDA) [18][19][20], the Omega-k algorithm (OKA) [21][22][23][24], the chirp scaling algorithm (CSA) [25][26][27] and the nonlinear CSA (NLCSA) [28][29][30][31], have been extended for BSAR imaging processing. Among the above-mentioned methods, RDA, OKA and CSA are only available for azimuth-invariant BSAR imaging processing, so they cannot always satisfy the precise imaging processing for all BSAR configurations in practice, especially for azimuth-variant BSAR systems. The NLCSA and its modifications have been used to implement imaging processing for different BSAR configurations, but there are some approximations in handling the spatial variance, range-azimuth coupling and motion errors, which may cause large phase errors in some particular BSAR imaging processing cases. Thus, the FDAs are only valid for a limited number of BSAR systems, and they do not satisfy the precise imaging processing for the OS-BFSAR due to its large spatial variances, serious range-azimuth coupling and complicated motion errors. For the OS-BFSAR, if the minimum phase error and the highest resolution are required, processing such BSAR data must rely on TDAs, which can avoid the limitations of the FDAs.
The TDAs include the direct TDA (DTDA), like the backprojection algorithm (BPA) [32], and the efficient TDA (ETDA), like the fast BPA (FBPA) [33][34][35] and the fast factorized BPA (FFBPA) [36]. The DTDA [32] is considered a linear transformation that reconstructs the SAR scene from the radar echoes, and thus it can be applied directly to monostatic and bistatic imaging processing with perfect focusing performance. Importantly, it can precisely accommodate the above-mentioned problems in OS-BFSAR imaging processing. Moreover, it offers the further advantage of precise handling of irregular sampling, which is particularly useful in the OS-BFSAR system. However, the DTDA has a high computational load, which may prevent its use as a standard method for monostatic and bistatic imaging processing. To reduce this computational load, an efficient implementation of the TDA has been applied to monostatic SAR imaging processing (i.e., the FBPA [33]), which is based on a two-step split of the synthetic aperture. The FBPA method, including the derivation of Nyquist requirements for linear-track SAR imaging processing, was presented in [34]. A quad-tree-based FBPA for arbitrary-motion SAR imaging processing was proposed in [35], offering the idea of splitting the imaging processing into multiple stages. All the developments of the monostatic FBPA converged into the FFBPA [36], which is an optimum method benefiting from multiple-stage factorizations working in an efficient geometry in terms of image sampling. These ETDAs are based on subaperture processing techniques, which keep all the advantages of the DTDA but with a reduced computational load. With the rapid development of BSAR technologies in recent years, the monostatic ETDAs have been extended to BSAR imaging processing, and they are classified into two kinds: the bistatic FBPA [37][38][39][40][41][42] and the bistatic FFBPA [43][44][45][46][47][48].
A study on the ability of the bistatic FBPA to handle the bistatic range history precisely was developed in [37]. In [38], a bistatic FBPA based on subapertures and subimages was presented for BSAR imaging processing, which required an intermediate processing step involving beamforming from the radar echoes. The phase errors caused by the approximations in this bistatic FBPA were analyzed in [39] to provide a good trade-off between the phase error and the computational load in imaging processing.
Another bistatic FBPA based on subapertures and polar grids was proposed in [40], which required mapping the radar echoes onto polar grids instead of beamforming from the radar echoes as an intermediate processing step. In this research, the reconstruction of the SAR scene was recommended in a ground plane instead of a slant-range plane, and the polar range and polar angular coordinates necessarily depended on both the transmitter and receiver tracks; the sampling requirements for the polar grids were derived in [41] for the linear-trajectory BSAR system. A bistatic FBPA for OS-BSAR imaging processing was proposed in [42], which represented the subimages in the Cartesian ground plane. However, the oversampling ratios of the subimages in the azimuth and range directions were uncertain, so the sampling requirements for the subimages were not optimal. The bistatic FFBPA was first applied to OS-BSAR imaging processing in [43], but only experimental results were given. Application of the bistatic FFBPA to process BSAR data was presented in [44], which gave the basic principles of the bistatic FFBPA, whereas the details of its implementation were not given.
Another bistatic FFBPA has been applied to a spaceborne-airborne BSAR case [45], and it first represented the subimages on elliptical polar grids to reduce the computational load. However, the sampling requirement for the elliptical polar grids was derived only for the linear-track BSAR system and for the preferred BSAR case where the reference radar has the higher angular velocity. Considering the motion errors, the bistatic FFBPA with the derivation of the sampling requirement for the elliptical polar grids was developed in [46,47]. However, for the bistatic FFBPA given in [45][46][47], the elliptical polar range and polar angular coordinates were only referenced to the transmitter or receiver track, so the applicability of these sampling requirements was limited in some instances. The authors of [38] presented a bistatic FFBPA for linear-trajectory BSAR imaging processing in [48], which gave the requirements for splitting the subapertures and subimages, while the sampling requirements of the beams for the corresponding subimages were not given. In [49], the bistatic FBPA and FFBPA based on subapertures and local polar coordinates were proposed for general bistatic airborne SAR systems, which was in fact a synthesis of the research given in [40] and [41]. Similarly, the motion errors were not considered in the derivation of the sampling requirements for the polar grids, which cannot offer a near-optimum tradeoff between imaging precision and efficiency in BSAR imaging processing in practice. It is known that the bistatic ETDAs in [37][38][39][40][41][42][43][44][45][46][47][48][49] were developed for traditional bistatic side-looking SAR (BSSAR) imaging processing. However, to our knowledge, the ETDA for OS-BFSAR imaging processing has hardly been investigated in earlier publications, so it may still be desirable for practical OS-BFSAR data processing.
Based on these previous works, this paper explores an ETDA considering motion errors for OS-BFSAR imaging processing based on subaperture and polar grid processing. This method represents the subimages on polar grids in the ground plane instead of the slant-range plane, and it is referenced to the positions of both the moving and stationary radars. It can not only accurately accommodate the large spatial variances, serious range-azimuth coupling and motion errors, but also greatly improve the imaging efficiency with respect to the DTDA. First, the OS-BFSAR imaging geometry is built, and the DTDA for OS-BFSAR imaging processing is provided, which lays the foundation for the proposed ETDA. Second, the polar grids of the subimages are defined, and then the subaperture imaging processing in the ETDA is derived. The sampling requirements considering motion errors for the polar grids are derived from the point of view of the bandwidth, which can offer a near-optimum tradeoff between imaging precision and efficiency. Third, the implementation and computational load of the proposed ETDA are analyzed, and then the speed-up factor of the proposed ETDA with respect to the DTDA is derived. Finally, the presented ETDA is tested and validated by experimental results based on simulated and measured OS-BFSAR data.
This paper is organized as follows: Section 2 reviews the bistatic DTDA for the OS-BFSAR imaging processing. The details of the proposed ETDA, including the definition of the polar grids, subaperture imaging processing, sampling requirements for the polar grids, implementation and computational load, are presented in Section 3. Experimental results and the corresponding analysis based on simulated and measured data are given in Section 4. Conclusions are drawn in Section 5.

Imaging Geometry
The imaging geometry of the OS-BFSAR system including the motion errors is shown in Figure 1. The straight line l_1 is the ideal track of the moving radar, and its actual track is the curve l_2. The position of the moving radar is r_M(η) = (x_M(η), y_M(η), z_M(η)) at the slow time (azimuth time) η, while the position of the stationary radar is r_S = (x_S, 0, z_S). Suppose that the illuminating beam of the moving radar is always covered by that of the stationary radar in order to ensure the synchronization of this OS-BFSAR system, and that the moving radar operates in the forward-looking spotlight mode. P is assumed to be an arbitrary scattering target in the scene, and its position is r_P. The distances from the moving and stationary radars to the scattering target P at the slow time η are R_M(η, r_P) and R_S(r_P), respectively. Therefore, the bistatic distance from the scattering target P to the moving and stationary radars at the slow time η is:

R(η, r_P) = R_M(η, r_P) + R_S(r_P)    (1)

Provided that the transmitted signal is p(τ), the received signal of the target P is:

s(τ, η) = σ_P p(τ − R(η, r_P)/c_0)    (2)

where τ is the fast time, σ_P is the scattering coefficient of the scattering target P, and c_0 is the speed of light. Therefore, the range-compressed signal of the scattering target P is:

s_rc(τ, η) = σ_P p_rc[B(τ − R(η, r_P)/c_0)] exp(−j2π f R(η, r_P)/c_0)    (3)

where p_rc[·] is the range-compressed pulse, f is the radar frequency, and B is the transmitted signal bandwidth.
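The bistatic range history of Equation (1) drives the whole processing chain. As a minimal illustration, the sketch below evaluates R(η, r_P) = R_M(η, r_P) + R_S(r_P) for an assumed straight ideal track and placeholder positions; all numbers are hypothetical and not taken from the experiments in this paper.

```python
import numpy as np

def bistatic_range(r_M, r_S, r_P):
    """Bistatic distance: moving radar -> target plus target -> stationary radar."""
    R_M = np.linalg.norm(r_M - r_P, axis=-1)  # moving-radar distance R_M(eta, r_P)
    R_S = np.linalg.norm(r_S - r_P)           # fixed stationary-radar distance R_S(r_P)
    return R_M + R_S

# Example: assumed straight ideal track l1 for the moving radar, stationary radar on a tower.
eta = np.linspace(-1.0, 1.0, 201)                                        # slow time (s)
r_M = np.stack([np.zeros_like(eta), 100.0 * eta,
                np.full_like(eta, 500.0)], axis=-1)                      # moving radar track
r_S = np.array([2000.0, 0.0, 300.0])                                     # stationary radar
r_P = np.array([1000.0, 50.0, 0.0])                                      # ground target P

R = bistatic_range(r_M, r_S, r_P)   # R(eta, r_P), one value per pulse
```

The minimum of R occurs where the moving radar passes abeam of the target's y coordinate, while the stationary term stays constant, which is exactly the asymmetry that the later derivations have to accommodate.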

DTDA for OS-BFSAR Imaging
The DTDA (i.e., the BPA) can be considered as a direct transformation process from the radar echoes into a complex SAR image, and therefore it can be applied directly to OS-BFSAR imaging processing without any modification. Unlike the monostatic DTDA, the backprojection (BP) of the radar echoes in the bistatic DTDA for OS-BFSAR imaging processing is carried out over an ellipsoidal basis. In Figure 1, a and b are the semi-major and semi-minor axes of the dashed ellipse, whose foci are determined by the positions of the considered moving and stationary radars. Based on a and b, the linear eccentricity can be defined as c = √(a² − b²). r = (x, y, 0) is assumed to be an arbitrary sample in the scene, and then the value of the SAR image at the sample r calculated by the bistatic DTDA is given by [32]:

I(r) = ∫_{η_c − T/2}^{η_c + T/2} s_rc(R(η, r)/c_0, η) exp(j2π f R(η, r)/c_0) dη    (4)

where η_c is the synthetic aperture center time of the moving radar, f is the radar frequency, and T is the integration time.
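As a rough illustration of the bistatic BP described above, the sketch below backprojects range-compressed echoes by interpolating each data line at the bistatic delay of every image sample and rotating the phase back. The signal model, the carrier frequency `fc` and the linear interpolation are simplifying assumptions, not the exact processing chain of [32].

```python
import numpy as np

c0 = 3e8     # speed of light
fc = 1e10    # assumed radar frequency f (Hz), illustrative only

def bistatic_bpa(s_rc, tau, eta, r_M, r_S, grid):
    """Sketch of the bistatic BPA.

    s_rc : (n_eta, n_tau) complex range-compressed data
    tau  : (n_tau,) fast-time axis; eta: (n_eta,) slow-time axis
    r_M  : (n_eta, 3) moving radar positions; r_S: (3,) stationary radar
    grid : (n_samples, 3) image sample positions
    """
    image = np.zeros(grid.shape[:-1], dtype=complex)
    for i in range(len(eta)):                               # integrate over the aperture
        R = (np.linalg.norm(r_M[i] - grid, axis=-1)
             + np.linalg.norm(r_S - grid, axis=-1))         # bistatic range R(eta, r)
        delay = R / c0
        # interpolate the BP data line at the bistatic delay of every sample
        line = (np.interp(delay, tau, s_rc[i].real)
                + 1j * np.interp(delay, tau, s_rc[i].imag))
        image += line * np.exp(2j * np.pi * fc * delay)     # phase compensation
    return image
```

For a simulated point target, samples on the target accumulate coherently while displaced samples decorrelate, which is the focusing mechanism that the ETDA later reproduces subaperture by subaperture.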

ETDA for OS-BFSAR Imaging Processing
To reduce the computational load of the bistatic DTDA, an efficient implementation of the bistatic DTDA (i.e., the ETDA) for OS-BFSAR imaging processing is presented. The proposed ETDA is developed from the research described in [40,41] based on subaperture and polar grid processing, but the motion errors are considered in the derivation of the sampling requirements for the polar grids in the proposed ETDA, which can offer a near-optimum tradeoff between imaging precision and efficiency. Similar to the ETDA in [40,41], polar grids in the ground plane instead of the slant-range plane are highly recommended for calculating the subimages in the proposed ETDA, since there is no exact slant-range plane for BSAR configurations. Moreover, having the polar grids of the subimages and the SAR image grids in the same plane also simplifies the calculation of the travel distance of a radar pulse from the transmitter via the scene to the receiver.
Similar to the ETDA for BSSAR imaging processing in [46,47], the proposed ETDA is able to accommodate the non-ideal track of the moving radar (i.e., to compensate the motion error of the moving radar) without increasing the computational load. In the first processing stage, the motion error can be accurately compensated in the subaperture imaging using the bistatic ETDA. In other words, the motion error is corrected for each BP data line by computing the bistatic range from each subaperture position of the moving radar via the polar grid of the subimages to the position of the stationary radar. In the successive processing stages, a higher-resolution subimage is interpolated from the lower-resolution subimages of the previous stage. Therefore, finer motion-error compensation is achieved by the proposed ETDA as the resolution successively improves through the processing stages [46,47].

DTDA with Subaperture and Polar Grid Processing
The OS-BFSAR imaging geometry for the bistatic DTDA with subaperture and polar grid processing is shown in Figure 2. For the n-th subaperture of the moving radar, A_Mn is the n-th subaperture center of the moving radar at the n-th subaperture center time η_n. The position vector of the moving radar at the slow time η_n is r_M(η_n) = (x_M(η_n), y_M(η_n), z_M(η_n)), and its projection in the X-Y plane is A_Mgn with the position vector r_Mg(η_n) = (x_M(η_n), y_M(η_n), 0). The distances from the moving and stationary radars to the sample r at the slow time η_n are R_Mn and R_Sn, and their projections in the X-Y plane are R_Mgn and R_Sgn, respectively. a_n, b_n and c_n have similar physical meanings to a, b and c in Figure 1, respectively, and the projection of the linear eccentricity c_n is c_gn. Similar to the monostatic DTDA, the polar coordinates (ρ_n, θ_n) of the sample r are defined in the ground plane as follows [41].
First, the origin of the polar grid is defined as the projection of the center point of the link line between the stationary radar position B and the considered moving-radar subaperture center position A_Mn. Second, the polar range ρ_n is defined as the distance between the origin of the polar grid and the sample r, and the polar angle θ_n is defined as the angle from the linear eccentricity projection c_gn to the polar range ρ_n. Thus, the polar coordinates (ρ_n, θ_n) of the sample r are determined by: The angle θ_n in Equation (5) is calculated based on the law of cosines. Using the above equation, we can obtain the Cartesian coordinates of the sample r, which are given by: It can be found that the sign ± in Equation (6) depends on the relative positions of the subaperture center A_Mn and the sample r in the polar subimage. Here, γ_n is the angle from the linear eccentricity projection c_gn to the X axis, which can be given by: Similarly, the polar coordinates (ρ_np, θ_np) of the scattering target P can be defined. Letting R(η, ρ_np, θ_np) = R(η, r_P) and R(η, ρ_n, θ_n) = R(η, r), the value of the polar subimage at the sample (ρ_n, θ_n) for the n-th subaperture imaging processing is calculated as:

I_n(ρ_n, θ_n) = ∫_{η_n − T_n/2}^{η_n + T_n/2} s_rc(R(η, ρ_n, θ_n)/c_0, η) exp(j2π f R(η, ρ_n, θ_n)/c_0) dη    (8)

where T_n is the integration time of the n-th subaperture.
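To make the grid definition concrete, the sketch below maps a ground sample to and from a polar grid whose origin is the projected midpoint of the baseline between the subaperture center and the stationary radar. The wrapping and sign conventions here are illustrative assumptions rather than the exact closed forms of Equations (5) and (6).

```python
import numpy as np

def to_polar(sample_xy, r_Mg_n, r_Sg):
    """Return (rho_n, theta_n) of a ground sample for the n-th subaperture.

    r_Mg_n: projected subaperture center A_Mgn; r_Sg: projected stationary radar.
    """
    origin = 0.5 * (r_Mg_n + r_Sg)               # projected midpoint of the link line
    baseline = r_Sg - r_Mg_n                      # direction of c_gn in the ground plane
    d = sample_xy - origin
    rho = np.hypot(d[0], d[1])                    # polar range rho_n
    theta = np.arctan2(d[1], d[0]) - np.arctan2(baseline[1], baseline[0])
    return rho, (theta + np.pi) % (2 * np.pi) - np.pi   # wrap theta_n into (-pi, pi]

def to_cartesian(rho, theta, r_Mg_n, r_Sg):
    """Inverse mapping back to ground-plane Cartesian coordinates."""
    origin = 0.5 * (r_Mg_n + r_Sg)
    baseline = r_Sg - r_Mg_n
    gamma = np.arctan2(baseline[1], baseline[0])  # angle gamma_n of c_gn from the X axis
    return origin + rho * np.array([np.cos(theta + gamma), np.sin(theta + gamma)])
```

The round trip is exact, which is what allows the final polar-to-Cartesian (P2C) interpolation stage to recover the image on a regular ground grid.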

Sampling Requirements for Polar Grids Considering Motion Errors
From the point of view of the bandwidth, the sampling requirements for the polar grids can be derived by calculating the bistatic range from the moving and stationary radars to the sample (ρ_n, θ_n). The bistatic range calculation for the sample (ρ_n, θ_n) in the n-th subaperture imaging processing is shown in Figure 3. A_Mη is the moving radar position at the slow time η, and the n-th subaperture integration time is η ∈ (η_n − T_n/2, η_n + T_n/2). l_2g is the projection of the real flight track l_2 of the moving radar in the X-Y plane, and A_Mgη is the projection of the moving radar position A_Mη in the X-Y plane.
µ_Mη is the distance between the moving-radar position projections A_Mgn and A_Mgη along the Y axis direction, and δ_Mη is the distance between them along the X axis direction; therefore, the length of the straight line A_Mgn A_Mgη is d_Mη = √(µ_Mη² + δ_Mη²). ϑ_Mη is the angle between the straight line A_Mgn A_Mgη and the range projection R_Mgn, and ψ_Mη is the angle between the straight line A_Mgn A_Mgη and the linear eccentricity projection c_gn. φ_Mn is defined as the angle between the range projection R_Mgn and the linear eccentricity projection c_gn.
From the imaging geometry in Figure 3, the bistatic distance from the moving and stationary radars to the sample (ρ_n, θ_n) at the slow time η can be computed and expanded using the Taylor series, which is given by: where R_M(η, ρ_n, θ_n) is the distance from the moving radar position A_Mη to the sample (ρ_n, θ_n), and R_S(ρ_n, θ_n) is the distance from the stationary radar position to the sample (ρ_n, θ_n). For the far-field OS-BFSAR imaging, it is reasonable to assume that the range d_Mη and the height z_M(η) are much smaller than the range R_Mgn, and that the height z_S is much smaller than the range R_Sgn. Therefore, all the terms except the first three terms of Equation (9) are approximately zero. Taking only the first three terms of Equation (9) into account, Equation (9) can be approximated as: According to [41], R_Mgn + R_Sgn is nearly constant with respect to the polar angle θ_n but not with respect to the polar range ρ_n for the n-th subaperture imaging processing. From the imaging geometry in Figure 3, cos(ϑ_Mη) in Equation (10) can then be calculated as follows: Substituting Equation (11) into Equation (10), Equation (10) can be written as: The two-dimensional Fourier transform of the subimage I_n(ρ_n, θ_n) in Equation (8) with respect to the polar range ρ_n and the polar angle θ_n is given by: where k_ρn and k_θn are the wavenumbers corresponding to the polar range ρ_n and the polar angle θ_n, respectively. According to [34], this Fourier transform can be computed accurately using the stationary phase principle.
Substituting Equation (8) into Equation (13), the stationary phase condition is given by: From Equations (10), (12) and (14), the wavenumber k_ρn can be calculated by: Since the range d_Mη is much smaller than the polar range ρ_n for far-field OS-BFSAR imaging processing, the third term in Equation (15) is approximately zero, so it can be neglected in general. Then, Equation (15) can be approximated as: Let δ_n be defined as the ratio of c_n to ρ_n (i.e., δ_n = c_n/ρ_n), and then the function H(δ_n, θ_n) is: From Figure 4, we can clearly find that the minimum and maximum of the function H(δ_n, θ_n) are given by: and: Based on the maximum and minimum of the radar frequency f, the bound of the wavenumber k_ρn can be given by: Then, the bandwidth of I_FTn(k_ρn, k_θn) with respect to the wavenumber k_ρn is given by: Finally, the sampling requirement for the polar range ρ_n is therefore derived as: For the monostatic SAR system with co-located transmitter and receiver, by setting δ_n = 0, the sampling requirement of the polar range used in the monostatic ETDA can be simplified as: which is the same as the equation derived and given in [36].
Similarly, based on Equations (10), (12) and (14), the wavenumber k_θn is calculated by: Since the first term in Equation (24) is nearly constant with respect to the polar angle θ_n, its first derivative with respect to θ_n is zero, so the wavenumber k_θn is approximated by: In this paper, to find the bound of the wavenumber k_θn in Equation (25), we investigate some extreme cases, i.e., the polar angle values θ_n = 0, θ_n = π/2 and θ_n = π. When the cases θ_n = 0 and θ_n = π are considered, Equation (25) becomes: and: Thus, the bound of the wavenumber k_θn in Equations (26) and (27) can simply be found from the values of the factors f, d_Mη and sin(ψ_Mη), which can be expressed as: and: where d_Mn is the maximum of the length d_Mη.
Similarly, when the case θ_n = π/2 is considered, Equation (25) becomes: where g(ψ_Mη) is the trigonometric function g(ψ_Mη) = cos(ψ_Mη) − δ_n sin(ψ_Mη). The extremum of this trigonometric function can be estimated from the roots of its first derivative with respect to the angle ψ_Mη. Then, we have: Therefore, the bound of the function g(ψ_Mη) is given by: As a result, the bound of the wavenumber k_θn in Equation (30) is obtained as: Therefore, the bandwidth of I_FTn(k_ρn, k_θn) with respect to the wavenumber k_θn for the above extreme cases can be given by: From Equation (35), it can be seen that B(k_θn(θ_n = π)) ≥ B(k_θn(θ_n = 0)) and B(k_θn(θ_n = π)) ≥ B(k_θn(θ_n = π/2)); thus, the sampling requirement for the polar angle θ_n is derived from the bandwidth B(k_θn(θ_n = π)), which is given by: where l_Mn is the length of the n-th subaperture along the Y axis direction, and δ_Mn,max is the maximum of the range δ_Mn. From Equation (36), it can be found that the efficiency of the proposed ETDA is reduced, in comparison with the bistatic DTDA, when the deviation from the ideal flight track becomes large. For monostatic SAR imaging processing, by setting δ_n = 0 and d_Mn = 2d_n (d_n is the length of the n-th subaperture for the monostatic SAR), the polar angle sampling requirement used in the monostatic ETDA can be simplified as: which is also the same as the equation derived and given in [36].
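The derived bounds feed a standard Nyquist argument: each polar grid spacing is the reciprocal of the corresponding wavenumber support. The toy bookkeeping below illustrates only this step, with simplified placeholder bounds (k_ρn ≈ (4πf/c_0)·H and a symmetric k_θn band proportional to d_Mn); these are hedged stand-ins, not the paper's exact Equations (22) and (36).

```python
import numpy as np

c0 = 3e8  # speed of light

def nyquist_spacing(k_lo, k_hi):
    """Grid spacing allowed by a spectral support [k_lo, k_hi]: delta = 2*pi/B(k)."""
    return 2 * np.pi / (k_hi - k_lo)

# Polar range: wavenumber band from the frequency band and assumed H(delta_n, theta_n) bounds.
f_min, f_max = 9.525e9, 9.675e9        # assumed X-band, 150 MHz bandwidth
H_min, H_max = 0.9, 1.0                # placeholder bounds of H(delta_n, theta_n)
d_rho = nyquist_spacing(4 * np.pi * f_min / c0 * H_min,
                        4 * np.pi * f_max / c0 * H_max)

# Polar angle: assumed symmetric band proportional to the maximum subaperture extent d_Mn.
d_Mn = 25.0                            # assumed maximum length d_Mn (m)
d_theta = nyquist_spacing(-2 * np.pi * f_max / c0 * d_Mn,
                          2 * np.pi * f_max / c0 * d_Mn)
```

A larger subaperture extent d_Mn (e.g., from track deviations) widens the angular wavenumber band and therefore forces a finer polar angle grid, which is exactly the efficiency penalty noted after Equation (36).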

Algorithm Implementation
The implementation of the proposed ETDA for the OS-BFSAR imaging processing is similar to that of the ETDA for the traditional one-stationary BSSAR (OS-BSSAR) imaging processing given in [46], but it calculates the subimages on the polar grids in the ground plane instead of the elliptical polar grids in the slant-range plane, which can be referenced to the positions of both moving and stationary radars. Figure 5 shows the implementation of the proposed ETDA for the OS-BFSAR imaging processing, and it contains two parts: the raw data factorization and SAR image generation, which are marked by the dashed rectangles with the different colors. The former includes the factorization of the received echo data and moving radar track data (i.e., the moving radar synthetic aperture), while the latter includes calculating the polar grids and Cartesian grid, performing the BP on the polar grids, interpolating the polar subimages to the polar subimages (P2P), and interpolating the polar subimages to the Cartesian image (P2C).
Similarly, the proposed ETDA improves the imaging efficiency by dividing the full synthetic aperture of the moving radar, which has L aperture positions.
In other words, the full synthetic aperture is split recursively K times (stages) by a factor of F_k in the k-th (1 ≤ k ≤ K) stage, until ∏_{k=1}^{K} F_k subapertures of size L_K = L/∏_{k=1}^{K} F_k are finally reached. Then, we have: where F_k is defined as the reduction in the number of aperture positions for the moving radar during the k-th processing stage. For simplification, we assume a constant factorization of the full aperture positions of the moving radar during all processing stages, i.e., F_k = l for all k and K = log_l(L/L_K). Therefore, Equation (38) can be simplified as: In the first stage, the full synthetic aperture of the moving radar is split into l^K small subapertures, which similarly requires a split of the range-compressed echo data. Taking the n-th subaperture imaging processing in the first stage as an example, the polar grids of the subimage are defined according to the derived sampling requirements. The regular BP for the n-th subaperture is performed on the corresponding polar grids over an elliptical mapping, and then accumulated coherently to generate the coarse polar subimage. The parameters ρ_n, θ_n, ρ_np, θ_np, η_n and T_n in Equation (8) are rewritten as ρ_n^1, θ_n^1, ρ_np^1, θ_np^1, η_n^1 and T_n^1, respectively. Hence, the n-th polar subimage at the first stage is given by: The remaining stages form a recursive procedure. For the k-th stage (2 ≤ k ≤ K), the polar subimages at the k-th stage are generated from the polar subimages formed at the (k − 1)-th stage. First, every l subapertures of the moving radar at the (k − 1)-th stage are combined into a new subaperture at the k-th stage. The origin of the new polar grid is defined as the projection of the center point of the link line between the stationary radar and the new subaperture center of the moving radar, and then the new polar grids (ρ_q^k, θ_q^k) are defined according to the derived sampling requirements.
To generate the q-th polar subimage at the k-th stage, every l corresponding polar subimages at the (k − 1)-th stage are interpolated onto the polar grids (ρ_q^k, θ_q^k) and then accumulated coherently, which is: where I_q^k is the q-th polar subimage at the k-th stage, and I_p^{k−1} is the p-th polar subimage at the (k − 1)-th stage. (ρ_{p,cor}^{k−1}, θ_{p,cor}^{k−1}) is the corresponding position of the polar grid point (ρ_q^k, θ_q^k) in the polar subimage I_p^{k−1}. However, in practice, the position (ρ_{p,cor}^{k−1}, θ_{p,cor}^{k−1}) may not fall exactly on a discrete sample position of the polar subimage I_p^{k−1}. According to [40,41], the values at the position (ρ_{p,cor}^{k−1}, θ_{p,cor}^{k−1}) should be interpolated from those of the surrounding samples in the l polar subimages I_p^{k−1} by different interpolation methods [50]. Please note that, before the interpolation is performed, upsampling of the polar subimages I_p^{k−1} in the polar range and polar angle is usually required in the proposed ETDA, since it directly affects the final SAR image quality [50].
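A minimal sketch of this interpolation step, assuming a two-dimensional linear (bilinear) look-up on a uniformly sampled polar grid; the function name and the toy grid are illustrative, and the real P2P step would first map the new grid point into the old subaperture's polar frame and upsample the subimage:

```python
import numpy as np

def bilinear_polar(img, rho_axis, theta_axis, rho_q, theta_q):
    # Locate the query point in the uniform (rho, theta) grid and blend
    # the four surrounding (possibly complex) samples.
    dr = rho_axis[1] - rho_axis[0]
    dt = theta_axis[1] - theta_axis[0]
    i = np.clip((rho_q - rho_axis[0]) / dr, 0, len(rho_axis) - 2)
    j = np.clip((theta_q - theta_axis[0]) / dt, 0, len(theta_axis) - 2)
    i0, j0 = int(i), int(j)
    fi, fj = i - i0, j - j0
    return ((1 - fi) * (1 - fj) * img[i0, j0] + fi * (1 - fj) * img[i0 + 1, j0]
            + (1 - fi) * fj * img[i0, j0 + 1] + fi * fj * img[i0 + 1, j0 + 1])
```

For a subimage that varies linearly over the grid, the bilinear look-up is exact, which makes it a convenient sanity check for the grid bookkeeping.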
According to [46], if the sampling requirements for the polar grids are satisfied for all processing stages, the grid interpolation is the main source of error in the proposed ETDA, and it is also the computational bottleneck of the proposed algorithm. It is well known that the quality of the SAR image is a trade-off between the imaging precision and efficiency. Fortunately, high precision interpolators for the ETDA can easily be obtained from [50]. Two-dimensional linear interpolations are used in the proposed ETDA due to their high efficiency: the low-pass linear interpolation is used in the polar angle since the angular signal is low-pass, and the band-pass linear interpolation is used in the polar range since the transmitted signal is band-pass [46]. Of course, there are also other interpolation methods; for example, the cubic spline interpolation can be used in the polar angle, and the fast Fourier transform (FFT) and inverse FFT (IFFT) in combination with zero padding can be used in the polar range, which is another alternative.
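The FFT/IFFT zero-padding alternative for the upsampling can be sketched as follows; the function and the test tone are illustrative (even-length signals and integer upsampling factors are assumed):

```python
import numpy as np

def fft_upsample(signal, factor):
    # Zero-pad the centered spectrum, then inverse-transform; scaling by
    # 'factor' preserves the original amplitude on the denser grid.
    n = len(signal)
    spec = np.fft.fftshift(np.fft.fft(signal))
    pad = (factor * n - n) // 2
    spec = np.pad(spec, (pad, pad))
    return np.fft.ifft(np.fft.ifftshift(spec)) * factor

# A pure tone is reproduced exactly on the denser grid:
x = np.cos(2 * np.pi * 3 * np.arange(32) / 32)
y = fft_upsample(x, 4)   # 128 samples of the same cosine
```

Because the tone is band-limited, the zero-padded spectrum reconstructs it exactly at the new sample positions, which is why this route is attractive for the band-pass polar-range data.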
In the final stage, the final Cartesian SAR image is generated from all polar subimages at the K-th stage. First, all aperture positions of the moving radar are combined into a whole aperture. Then, the Cartesian grid (x,y) is defined according to the resolutions of the final image. Finally, the l polar subimages at the K-th stage are interpolated into the Cartesian grid and summed coherently to reconstruct the Cartesian OS-BFSAR image, which is given by: where I(x,y) is the Cartesian OS-BFSAR image and (ρ_{p,cor}^K, θ_{p,cor}^K) is the corresponding position of the Cartesian grid point (x,y) in the p-th subimage I_p^K at the K-th stage.
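The aperture bookkeeping across the stages can be sketched as follows, using the constant factor l; the values L = 2^24, L_K = 16 and l = 4 match those used in the later analysis of Figure 6, and the function name is illustrative:

```python
import math

def subimage_counts(L, L_K, l):
    # Constant-factor split: K stages with F_k = l, so K = log_l(L / L_K).
    K = int(round(math.log(L / L_K, l)))
    assert L_K * l ** K == L, "L must equal L_K * l**K for a constant factorization"
    counts = [l ** K]                  # stage 1: l^K coarse polar subimages
    for k in range(2, K + 1):          # stages 2..K: combine every l subimages
        counts.append(counts[-1] // l)
    return K, counts                   # counts[-1] subimages remain before P2C

K, counts = subimage_counts(L=2 ** 24, L_K=16, l=4)
print(K, counts[0], counts[-1])   # 10 stages; 4**10 subimages at stage 1; 4 at stage K
```

Note that exactly l subimages survive the K-th stage, which is consistent with the final P2C step interpolating l polar subimages into the Cartesian grid.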

Computational Load
Similar to the ETDA for the traditional OS-BSAR imaging processing presented in [46], the computational load (number of operations) of the proposed ETDA for the OS-BFSAR imaging processing mainly includes the operations for calculating the polar grids and the Cartesian grid, the operations for performing the BP on the polar grids, the operations for the P2P interpolation, and the operations for the P2C interpolation during all processing stages.
Suppose that the final Cartesian SAR image has sizes M_x and M_y in the X and Y axes directions, respectively, while the polar subimage at the K-th stage has sizes M_ρ and M_θ in the polar range and polar angle directions, respectively. For the first stage, the size of the polar subimage is assumed to be M_ρ·(M_θ/l^{K−1}), and therefore the operation number for calculating the polar grids, O_{1,Grid}, at this stage is given by: Similarly, the operation number for performing the BP on the polar grids, O_{BP}, at this stage can be calculated by: Thus, the total operation number of the imaging processing at the first stage is: For the k-th stage (2 ≤ k ≤ K), the size of the polar subimage is assumed to be M_ρ·(M_θ/l^{K−k}); thus, the operation number for calculating the polar grids, O_{2,Grid}, at this stage is given by: Similarly, the operation number for interpolating the polar subimages into the polar subimages, O_{P2P}, at this stage is given by: Therefore, the total operation number of the imaging processing at this stage is: For the final stage, the size of the Cartesian image is assumed to be M_x·M_y; thus, the operation number for calculating the Cartesian grid, O_{3,Grid}, at this stage is given by: Then, the operation number for interpolating the polar subimages into the Cartesian image, O_{P2C}, is: Thus, the total operation number of the imaging processing at the final stage is: As a result, the total operation number of the proposed ETDA is given by: We assume that M_ρ = µ_ρ M_x and M_θ = µ_θ M_y, and then Equation (52) can be approximated as: Analogously, the operation number of the bistatic DTDA can be calculated by: The speed-up factor of the proposed ETDA with respect to the bistatic DTDA is defined as: The value of the speed-up factor κ_ETDA is determined by the factors L, l, K, µ_θ and µ_ρ, where the values of µ_θ and µ_ρ are in general larger than or equal to one.
When the radar pulse number L (i.e., the length of the synthetic aperture of the moving radar) increases, only the factor K in Equation (55) increases in K = log l (L/L K ), where the factors l, µ θ and µ ρ are assumed to be the constant values. Figure 6 shows the logarithm (base 2) of the speed-up factor with respect to the radar pulse number L for the different values of factors µ θ and µ ρ , where L K = 16 and l = 4 are assumed. From Figure 6, it is clearly found that the logarithm values of these speed-up factors marked by different colors are almost directly proportional to log 2 L in some range.
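For intuition only, the sketch below builds an order-of-magnitude speed-up estimate from the per-stage costs described above, with all constant factors dropped; the cost model is an assumption for illustration, not the paper's exact Equation (55):

```python
import math

def speedup_estimate(L, L_K, l, mu_rho, mu_theta):
    # Cost per final image pixel, constants dropped:
    #   DTDA: every pixel integrates over all L pulses.
    #   ETDA: stage-1 BP over L_K pulses per coarse polar sample,
    #         (K-1) P2P stages blending l subimages each, and one
    #         final P2C pass over l subimages.
    K = int(round(math.log(L / L_K, l)))
    o_dtda = L
    o_etda = mu_rho * mu_theta * (l * L_K + (K - 1) * l * l) + l
    return o_dtda / o_etda

print(speedup_estimate(L=2 ** 24, L_K=16, l=4, mu_rho=1.0, mu_theta=1.0))
```

Even this crude model reproduces the two qualitative trends in Figure 6: the speed-up grows roughly with L/log L, and a larger product µ_ρ·µ_θ shrinks it.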
The larger the product of the factors µ_θ and µ_ρ, the smaller the value of the speed-up factor. The red line indicates the speed-up factor under the condition µ_θ = µ_ρ = 1, for which the value of the speed-up factor is larger than one for almost all synthetic apertures. However, in practical SAR imaging, the factors µ_θ and µ_ρ are both larger than one, and therefore the speed-up factor is smaller than one for small synthetic apertures (see the zoomed image in the green ellipse in Figure 6). The reason may be that the proposed ETDA spends more time on the calculation and interpolation of the polar grids than the bistatic DTDA does. Compared with the bistatic DTDA, the proposed ETDA offers a good acceleration for moderate and high resolution OS-BFSAR imaging processing, but does not offer any acceleration for low resolution OS-BFSAR imaging processing.

Experimental Results
In this section, in order to verify the validity of the proposed ETDA for the OS-BFSAR imaging processing, experimental results based on both simulated and measured data are shown and analyzed to compare the performance of the proposed ETDA with that of the bistatic DTDA. The SAR scene reconstructed using the bistatic DTDA is used as the reference for the comparison, since this algorithm is the most accurate method for the imaging processing, without any approximation in theory. The simulated OS-BFSAR data of a scene including several discrete scattering targets and the measured OS-BFSAR data acquired by an OS-BFSAR experiment at P-band are processed by the two algorithms, and the imaging results are illustrated in this section.

Simulated Data Results
The imaging geometry of the simulated OS-BFSAR system including the motion errors is the same as that in Figure 7. The parameters of the simulated OS-BFSAR system are shown in Table 1. The stationary radar is located on the top of a high tower, and its position is (0,0,20) m. The ideal flight track of the moving radar is parallel to the Y-axis direction, and its initial position is (1650,0,100) m at the slow time η = 0. In this OS-BFSAR system, both the moving and stationary radars work in the forward-looking and spotlight modes. Suppose that the illuminating beam of the transmitter is always covered by that of the receiver to ensure the synchronization of the OS-BFSAR system. Based on the parameters listed in Table 1, it is easy to compute that the moving radar synthetic aperture time T_a is 6.5 s. Motion errors are added to the ideal flight track of the moving radar: the motion error in the X axis direction is δM_x = 5sin(2π(1/T_a)η) + 0.3η, the motion error in the Y axis direction is δM_y = 2sin(2π(0.3/T_a)η) + 0.1η, and the motion error in the Z axis direction is δM_z = 3sin(2π(0.5/T_a)η) + 0.2η. The simulated ground scene contains nine discrete point-like scatterers, labeled A-I, arranged in 3 rows and 3 columns and equally spaced in an area of 300 m × 300 m in the azimuth and range directions, respectively. The point-like scatterer E is located at the scene center position (1650,0,0) m. Both the range and azimuth intervals of all point-like scatterers are 100 m in the ground plane, and the radar cross sections of all point-like scatterers are assumed and normalized to 1 m² for simplification. The samples of the Cartesian image grid are assumed to be 0.8 m × 0.6 m in the azimuth and range directions, respectively. The distribution of the point-like scatterers is shown in Figure 8a.
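The simulated motion errors defined above can be generated directly; only the slow-time sampling (1001 samples over the aperture time) is an assumption here:

```python
import numpy as np

T_a = 6.5                                 # synthetic aperture time [s]
eta = np.linspace(0.0, T_a, 1001)         # slow time samples (count is illustrative)

# Motion errors added to the ideal flight track, exactly as defined above:
dMx = 5.0 * np.sin(2 * np.pi * (1.0 / T_a) * eta) + 0.3 * eta
dMy = 2.0 * np.sin(2 * np.pi * (0.3 / T_a) * eta) + 0.1 * eta
dMz = 3.0 * np.sin(2 * np.pi * (0.5 / T_a) * eta) + 0.2 * eta
```

Each component combines a sinusoidal oscillation with a linear drift, so the deviation from the ideal track is zero at η = 0 and grows over the aperture.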
For evaluation purposes, the effects of radio frequency interference (RFI), jamming, thermal noise, clutter, multipath, incidence/reflection angles, local reflection and so on are not considered in this simulation. Moreover, in order to observe the performance of the reconstructed SAR images more clearly and to make a fair comparison, no weighting function or sidelobe control approach is used in this simulation.

As observed from Figure 8b, the simulated SAR scene is well reconstructed by the DTDA, and all point-like scatterers appear in the SAR image as points. As observed from Figure 8c, all point-like scatterers in the simulated SAR scene are also well focused by the proposed ETDA, and likewise appear in the SAR image as points. The focusing of all point-like scatterers in Figure 8c is very similar to that in Figure 8b. Visually, there is nearly no difference between the SAR images given in Figure 8b,c, which indicates that the proposed ETDA is very effective for the OS-BFSAR imaging processing. Figures 9 and 10 show the contours of the imaging results of the point-like scatterers C, E and G labeled in Figure 8a, which are extracted from Figure 8b,c, respectively. From Figure 9, we may observe the general ultrawideband (UWB) features of all selected point-like scatterers, such as the orthogonal and nonorthogonal sidelobes. From Figure 10, the contours of the imaging results of all selected point-like scatterers are very similar to those shown in Figure 9. Point targets with the typical features of a point-like scatterer illuminated by a UWB SAR system can also be observed in the SAR image given in Figure 10. However, the focusing quality of all selected point-like scatterers in Figure 10 is slightly degraded in comparison to the reference in Figure 9, since there may still be small phase errors caused by interpolations in the proposed ETDA.
The effects are invisible at the high contour levels, and some small influences are observed at the lower contour levels, i.e., only the sidelobes of the focused point-like scatterers suffer from the effect of phase errors. The phase errors are relatively small (usually smaller than or equal to π/8) and therefore do not strongly affect the peak signal level and its surroundings for the focused point-like scatterers.
As shown in Figures 11-13, both the amplitude and phase profiles of the imaging results of all selected point-like scatterers obtained by the two algorithms are very similar. In the mainlobe areas of the amplitude profiles for all selected point-like scatterers, it is almost impossible to see any difference between the two algorithms. However, in the sidelobe areas of the amplitude profiles, there is still a slight difference between the two algorithms, which may be caused by the effects of the phase errors in the proposed ETDA. Similar behavior can be observed in the phase profiles for all selected point-like scatterers. Therefore, such influences show that the phase errors in the proposed ETDA may slightly degrade the focusing quality of the OS-BFSAR image compared with the bistatic DTDA.
To quantitatively analyze the imaging performance of the proposed ETDA in comparison to the bistatic DTDA, quality measurements for the OS-BFSAR image, such as the spatial resolution, peak sidelobe ratio (PSLR) and integrated sidelobe ratio (ISLR), can be calculated based on the amplitude profiles in Figures 11-13. Spatial resolutions in the azimuth and range directions can be defined by the −3 dB width of the mainlobe of the focused point-like scatterer, i.e., the distance between the two points where the intensity is one half of the peak intensity.
PSLR can be defined as the ratio of the peak intensity in the sidelobe area to the peak intensity in the mainlobe area, and ISLR can be defined as the ratio of the energy of the sidelobes to that of the mainlobe. The results of these measured parameters are listed in Table 2. As observed from Table 2, all measured parameters obtained by the two algorithms are very similar. The resolutions in the azimuth and range directions of all selected point-like scatterers obtained by the proposed ETDA are slightly worse than those of the bistatic DTDA. Fortunately, the PSLRs and ISLRs of all selected point-like scatterers obtained by the proposed ETDA are very similar to those of the bistatic DTDA.
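The quality measures defined above can be computed from a one-dimensional amplitude profile as follows; delimiting the mainlobe by the first nulls around the peak is a common convention that the text does not spell out, and the function name is illustrative:

```python
import numpy as np

def pslr_islr_res(profile, dx):
    """Resolution (-3 dB mainlobe width), PSLR (peak sidelobe / mainlobe
    peak) and ISLR (sidelobe energy / mainlobe energy) of a 1-D amplitude
    profile sampled with spacing dx."""
    p = np.abs(profile) ** 2
    k = int(np.argmax(p))
    # First local minima (nulls) on each side of the peak delimit the mainlobe:
    left = k
    while left > 0 and p[left - 1] < p[left]:
        left -= 1
    right = k
    while right < len(p) - 1 and p[right + 1] < p[right]:
        right += 1
    # -3 dB resolution: width over which the intensity is >= half the peak
    above = np.where(p >= 0.5 * p[k])[0]
    res = (above[-1] - above[0]) * dx
    main = p[left:right + 1]
    side = np.concatenate([p[:left], p[right + 1:]])
    pslr_db = 10 * np.log10(side.max() / p[k])
    islr_db = 10 * np.log10(side.sum() / main.sum())
    return res, pslr_db, islr_db
```

As a sanity check, an ideal sinc response gives the textbook −13.26 dB PSLR and a −3 dB width of about 0.886 in normalized units.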
Based on the above-mentioned analysis of the imaging results, we can conclude that the imaging result of the proposed ETDA is very similar to that of the bistatic DTDA. In order to prove the imaging efficiency of the proposed ETDA, the operation numbers of the two imaging methods are estimated in this simulation case. The simulated data are processed by the two algorithms using the following parameters: M_x = 500, M_y = 375 and L = 2^24, consistent with the analysis shown in Figure 6. According to [45], if the proposed ETDA is parallelized on a Graphics Processing Unit (GPU), it will further accelerate the OS-BFSAR imaging process.

Measured Data Results
In order to further prove the feasibility of the proposed ETDA, measured data acquired by a real OS-BFSAR system are processed by the two algorithms, and the imaging results are shown and analyzed. The imaging experiment of the OS-BFSAR system was carried out at P-band in late 2015, in which a ground-based stationary radar was constructed to operate in conjunction with an existing vehicle-based monostatic SAR system. The imaging geometry of the real OS-BFSAR system is shown in Figure 14, and the real OS-BFSAR system is given in Figure 15. The stationary radar is located on the top of a tripod at a high position. The moving radar is fixed on the top of a vehicle in a square; the dashed line is the ideal track of the vehicle, while the solid curve is its actual track. The vehicle-based moving radar continually illuminates the scene, which includes several targets, and transmits the chirp signal (P-band) in the forward-looking and side-looking modes, while the ground-based stationary radar illuminates the same region and receives the bistatic scattered signal of the scene. Both the vehicle-based and ground-based antennas have a 3 dB azimuth beamwidth of approximately 35° to 50°, and a 3 dB elevation beamwidth of approximately 75° to 85°. To maintain the time synchronization of the OS-BFSAR system, the one pulse per second (1PPS) output signal from a global positioning system (GPS) receiver incorporated into the moving and stationary radars is used as a timing reference. Besides, to maintain the frequency synchronization of the OS-BFSAR system, a voltage-controlled oscillator (VCO) generates a standard sine signal; after amplification and frequency conversion, this signal is transferred into a phase detector, where it is phase-detected against the 1PPS signal. The phase error from the phase detector is used to discipline the VCO, and the output signal from this in turn stabilizes the VCO via the digital phase-locked loop (DPLL).
The VCO produces coherent reference signals in the moving and stationary radars, and these are used to drive the frequency synthesizer to produce the various system signals. This technique can revise the time and frequency reference signals in real time to satisfy the system synchronization requirements.
Sensors 2016, 16, 1907 21 of 27 75° to 85°. To maintain the time synchronization of the OS-BFSAR system, the one pulse per second (1PPS) output signal from a global positioning system (GPS) receiver incorporated into the moving and stationary radars is used as a timing reference. Besides, to maintain the frequency synchronization the OS-BFSAR system, the voltage-controlled oscillator (VCO) generate a standard sine signal, after amplification and frequency conversion, this signal is transferred into a phase detector, and it is phase-detected by the 1PPS signal. The phase error from the phase detector is used to discipline the VCO, and the output signal from this in turn stabilizes the VCO via the digital phase-locked loop (DPLL). VCO produces coherent reference signals in the moving and stationary radars, and these are used to drive the frequency synthesizer to produce the various system signals. This technique can real-timing revise the time and frequency reference signals to satisfy system synchronization.  The acquisition parameters of the real OS-BFSAR system are shown in Table 3. Figure 16 gives the position of the vehicle-based radar from the GPS data, including the actual position in the X and Y axes directions, the motion errors in the X, Y and Z axes directions, and so on. The vehicle moves parallel to the X-axis direction with an average speed about 12.  75° to 85°. To maintain the time synchronization of the OS-BFSAR system, the one pulse per second (1PPS) output signal from a global positioning system (GPS) receiver incorporated into the moving and stationary radars is used as a timing reference. Besides, to maintain the frequency synchronization the OS-BFSAR system, the voltage-controlled oscillator (VCO) generate a standard sine signal, after amplification and frequency conversion, this signal is transferred into a phase detector, and it is phase-detected by the 1PPS signal. 
The phase error from the phase detector is used to discipline the VCO, and the output signal from this in turn stabilizes the VCO via the digital phase-locked loop (DPLL). VCO produces coherent reference signals in the moving and stationary radars, and these are used to drive the frequency synthesizer to produce the various system signals. This technique can real-timing revise the time and frequency reference signals to satisfy system synchronization.  The acquisition parameters of the real OS-BFSAR system are shown in Table 3. Figure 16 gives the position of the vehicle-based radar from the GPS data, including the actual position in the X and Y axes directions, the motion errors in the X, Y and Z axes directions, and so on. The vehicle moves parallel to the X-axis direction with an average speed about 12.   The acquisition parameters of the real OS-BFSAR system are shown in Table 3. Figure 16 gives the position of the vehicle-based radar from the GPS data, including the actual position in the X and Y axes directions, the motion errors in the X, Y and Z axes directions, and so on. The vehicle moves parallel to the X-axis direction with an average speed about 12. As the scene being imaged must lie in the line-of-sight of the stationary radar, this constrained operation of the radar system to the imaging of test-sites within only a few decameters from the tripod. One data site was acquired for the imaging processing: the scene was a flat area, and its size is about 40 m in the X axis direction and 35 m in the Y axis direction. Various scattering targets, including three metallic cylinders and one metallic trihedral reflector were deployed at the test-site prior to this experiment, and their positions in the ground plane are about (5.5,5.5,0) m, (0,0,0) m, (−5.5,−5.5,0) m and (7.5,−17.5,0) m, respectively. 
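The 1PPS-disciplined oscillator scheme described above can be illustrated as a simple discrete control loop: once per second, the phase detector compares the VCO phase against the 1PPS reference, and a proportional-integral loop filter (the core of a DPLL) steers the VCO frequency. This is only a minimal sketch; the function name, loop gains, and initial frequency offset are illustrative assumptions, not the parameters of the real system.

```python
def discipline_vco(n_seconds=300, f0_err=5e-7, kp=0.2, ki=0.01):
    """Simulate a free-running oscillator with fractional frequency error
    f0_err being disciplined to a 1PPS reference by a PI loop filter.
    Returns the phase error (in cycles) seen at each 1PPS tick."""
    phase_err = 0.0   # cycles, as measured by the phase detector
    integ = 0.0       # integrator state of the loop filter
    history = []
    for _ in range(n_seconds):
        # disciplined VCO frequency = free-running error minus the correction
        freq = f0_err - (kp * phase_err + integ)
        phase_err += freq           # phase accumulates over one second
        integ += ki * phase_err     # integral branch removes the steady bias
        history.append(phase_err)
    return history
```

With these gains the loop is a stable type-II tracking loop: the integrator converges to the free-running frequency error, so the per-second phase error decays toward zero rather than growing without bound as it would for an undisciplined oscillator.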
As the vehicle-based radar system moves parallel to the X-axis direction, the bistatic measured data were naturally collected from a wide variety of azimuth look directions. Figure 17 gives the bistatic scattered signal of the scene in the time domain and the frequency domain. As observed from Figure 17, the scattered signal of the targets is clearly visible (see the regions in the green ellipses in Figure 17).
Next, the measured data of the scene are processed by the bistatic DTDA and the proposed ETDA. Before the imaging processing is performed by the two methods, the measured data of the OS-BFSAR system (including the GPS data of the vehicle-based radar) should be preprocessed. First, the GPS data should be transformed into the Cartesian coordinate system to obtain the local position of the vehicle-based radar. Second, the scattered data of the scene should be downsampled and filtered in the slow time. Finally, the scattered data must be range compressed by the matched filter function recorded before the experiment, and the RFI in the scattered data should then be suppressed by the frequency-spectrum equalization method. In the imaging processing, the direct-path signal from the vehicle-based radar to the ground-based radar should be used to reduce or correct the time error of the scattered data caused by the uncertain time delay of the OS-BFSAR system.

Figure 18 shows the imaging results of this measured data obtained by the bistatic DTDA and the proposed ETDA. It can be seen in Figure 18 that the illuminated scene is well reconstructed by both algorithms: the metallic cylinders and the metallic trihedral reflector are well focused, and the UWB features (i.e., the orthogonal and nonorthogonal sidelobes) can be clearly observed. Moreover, the image in Figure 18b obtained by the proposed ETDA is very close to that in Figure 18a obtained by the bistatic DTDA. However, the focusing quality of the image in Figure 18b is slightly degraded in comparison with that of Figure 18a, since small phase errors may still be caused by the interpolations in the proposed ETDA. Fortunately, only the sidelobes of the focused image in Figure 18b suffer from the effect of these phase errors.
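The range-compression step in the preprocessing chain above amounts to correlating each echo with a replica of the transmitted chirp. The following is a minimal, self-contained sketch of that idea; the chirp length, fractional bandwidth, and target delay are invented for illustration, whereas the real system uses the matched filter function recorded before the experiment.

```python
import cmath

def lfm_chirp(n, bw_over_fs=0.5):
    """Baseband linear-FM chirp; bw_over_fs is the fractional bandwidth
    (instantaneous frequency sweeps from 0 to bw_over_fs cycles/sample)."""
    rate = bw_over_fs / n
    return [cmath.exp(1j * cmath.pi * rate * i * i) for i in range(n)]

def range_compress(echo, replica):
    """Correlate the echo with the recorded replica (matched filtering)
    and return the magnitude of the compressed range profile."""
    n = len(replica)
    out = []
    for m in range(len(echo) - n + 1):
        acc = 0j
        for i in range(n):
            acc += echo[m + i] * replica[i].conjugate()
        out.append(abs(acc))
    return out

# a single point target at a delay of 40 samples
replica = lfm_chirp(64)
echo = [0j] * 40 + replica + [0j] * 56
profile = range_compress(echo, replica)
peak = max(range(len(profile)), key=profile.__getitem__)  # peak at the target delay
```

After compression the energy spread across the chirp collapses into a sharp peak at the target delay, which is what makes the subsequent time-domain backprojection of range-compressed data possible.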
In order to quantitatively evaluate the focusing quality of the SAR images in Figure 18, the resolutions of the focused trihedral reflector in the X- and Y-axis directions are measured from the width of the amplitude profiles at the −3 dB point. The measured resolutions in the X- and Y-axis directions obtained by the bistatic DTDA are 0.35 m and 1.73 m, respectively, while those obtained by the proposed ETDA are 0.37 m and 1.87 m, respectively. The measured resolutions of the two algorithms are thus very similar, which indirectly proves the validity of the proposed ETDA. Finally, for the sake of brevity, only the reconstruction time of the scene by the two algorithms is measured. The reconstruction time of the scene using the bistatic DTDA is 4687 s, while that using the proposed ETDA is 386 s (a speedup of roughly 12×).
It can be concluded that the proposed ETDA is much faster than the bistatic DTDA while the achieved accuracies are about the same for the real OS-BFSAR imaging processing.
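The −3 dB width measurement used for the resolution figures above can be reproduced with a short helper that finds the crossings of the amplitude profile around its peak by linear interpolation. The profile in the example is a synthetic |sinc| mainlobe (whose −3 dB width is about 0.886 resolution cells), not the measured trihedral response.

```python
import math

def width_3db(profile, spacing):
    """Width of the mainlobe of an amplitude profile at the -3 dB points
    (amplitude 1/sqrt(2) of the peak), found by linear interpolation
    around the peak; spacing is the pixel size in meters."""
    peak = max(range(len(profile)), key=profile.__getitem__)
    level = profile[peak] / math.sqrt(2.0)
    # walk left from the peak to the -3 dB crossing
    i = peak
    while i > 0 and profile[i] > level:
        i -= 1
    left = i + (level - profile[i]) / (profile[i + 1] - profile[i])
    # walk right from the peak to the -3 dB crossing
    j = peak
    while j < len(profile) - 1 and profile[j] > level:
        j += 1
    right = j - 1 + (level - profile[j - 1]) / (profile[j] - profile[j - 1])
    return (right - left) * spacing

# synthetic |sinc| amplitude profile sampled at 0.01 m spacing
xs = [-2.0 + 0.01 * i for i in range(401)]
profile = [abs(math.sin(math.pi * x) / (math.pi * x)) if abs(x) > 1e-9 else 1.0
           for x in xs]
w = width_3db(profile, 0.01)  # close to the theoretical 0.886 m
```

Applying the same helper to the X- and Y-axis cuts through a focused point target gives exactly the kind of resolution numbers reported for the trihedral reflector.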

Conclusions
In this paper, an efficient time-domain imaging method called ETDA is presented, which gives better imaging efficiency than the bistatic DTDA for OS-BFSAR imaging processing. This method still inherits the advantages of the bistatic DTDA, such as the precise accommodation of the large spatial variances, serious range-azimuth coupling and motion errors. Besides, it represents the subimages on polar grids in the ground plane instead of the slant-range plane, so that they can be accurately referenced to the positions of both the moving and stationary radars. Moreover, it derives the sampling requirements considering the motion errors for the polar grids to offer a near-optimum tradeoff between imaging precision and efficiency. Experimental results based on both simulated and measured data confirm that the proposed algorithm outperforms the bistatic DTDA in terms of efficiency. The proposed ETDA for BFSAR imaging processing with more complex configurations will be the next focus of our research.
Traditional Chinese Medicine (Changsha, China). The authors also wish to extend their sincere thanks to the anonymous reviewers and the academic editor for their careful reading and valuable suggestions that helped improve the paper quality.
Author Contributions: Hongtu Xie is responsible for all the theoretical work, the performing the simulation, the implementation of the experiment, the processing and analysis of the experiment data, and the writing of the manuscript; Shaoying Shi and Hui Xiao conceived and designed the experiment; Chao Xie, Feng Wang and Qunle Fang revised the paper.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:

1PPS
One pulse per second