Article

Efficient Time-Domain Imaging Processing for One-Stationary Bistatic Forward-Looking SAR Including Motion Errors

1 Department of Air/Space-based Early Warning Equipment, Air Force Early Warning Academy, Wuhan 430019, China
2 College of Electronic Science and Engineering, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Sensors 2016, 16(11), 1907; https://doi.org/10.3390/s16111907
Submission received: 26 May 2016 / Revised: 1 September 2016 / Accepted: 24 September 2016 / Published: 12 November 2016

Abstract:
With the rapid development of one-stationary bistatic forward-looking synthetic aperture radar (OS-BFSAR) technology, the huge amount of remote sensing data presents challenges for real-time imaging processing. In this paper, an efficient time-domain algorithm (ETDA) that considers the motion errors in OS-BFSAR imaging processing is presented. This method can not only precisely handle the large spatial variances, serious range-azimuth coupling and motion errors, but can also greatly improve the imaging efficiency compared with the direct time-domain algorithm (DTDA). In addition, it represents the subimages on polar grids in the ground plane instead of the slant-range plane, and derives the sampling requirements for the polar grids considering motion errors, to offer a near-optimum tradeoff between imaging precision and efficiency. First, the OS-BFSAR imaging geometry is built, and the DTDA for OS-BFSAR imaging is provided. Second, the polar grids of the subimages are defined, and the subaperture imaging in the ETDA is derived. The sampling requirements for the polar grids are derived from the point of view of the bandwidth. Finally, the implementation and computational load of the proposed ETDA are analyzed. Experimental results based on simulated and measured data validate that the proposed ETDA outperforms the DTDA in terms of the efficiency improvement.

1. Introduction

Nowadays, synthetic aperture radar (SAR) offers a major advantage for high-resolution imaging in all-time and all-weather conditions, and plays a very significant role in remote sensing, geosciences, and surveillance and reconnaissance applications; thus, it is widely investigated in both the civilian and military fields [1,2,3,4,5,6].
Bistatic forward-looking SAR (BFSAR) [7] is a special bistatic SAR (BSAR) system [8,9] in which the radar works in the forward-looking mode rather than the side-looking mode of the traditional BSAR system. It not only inherits the advantages of the BSAR system, such as reduced vulnerability for military applications, exploitation of additional information and improved detectability of stealth targets, but also carries out high-resolution scene imaging in the forward direction [10]; it has therefore gained wide attention in missile navigation, battlefield reconnaissance, and forward-looking imaging. Recently, several countries have carried out experiments on BFSAR imaging processing, and some excellent results were obtained, such as the BFSAR experiment with a fixed ground-based receiver working in the forward-looking mode [11,12], the BFSAR experiment with a vehicle-based transmitter and a receiver on board an ultra-light aircraft [13,14,15], and the BFSAR experiment using the TerraSAR-X as the transmitter and the airborne phased array multifunctional imaging radar (PAMIR) as the receiver [16,17]. One-stationary BFSAR (OS-BFSAR) is regarded as a SAR system combining the one-stationary BSAR (OS-BSAR) and BFSAR systems, which has a stationary radar (transmitter or receiver) fixed on the top of a high tower or mountain and a moving radar (receiver or transmitter) placed on a vehicle/airborne/spaceborne platform. The OS-BFSAR system not only inherits the advantages of the OS-BSAR and BFSAR systems, but is also subject to the difficulties of imaging processing caused by its special configuration, i.e., the huge amount of echo data, large spatial variance, serious range-azimuth coupling and complicated motion errors. These make the precise imaging processing of the OS-BFSAR data more complicated.
Like monostatic SAR imaging algorithms, BSAR imaging algorithms can also be categorized into two groups: frequency-domain algorithms (FDAs) and time-domain algorithms (TDAs). The FDAs generally aim to minimize the time of the imaging processing. However, this aim induces many limitations, such as constraints on the bandwidth, integration time, motion errors, the assumptions and approximations in the imaging processing, real-time imaging processing, memory requirements and so on, which may restrict the application of the FDAs. Thus, FDAs are only available for some particular SAR imaging processing cases. Recently, monostatic FDAs [1,2,3,4], which include the range Doppler algorithm (RDA) [18,19,20], the Omega-k algorithm (OKA) [21,22,23,24], the chirp scaling algorithm (CSA) [25,26,27] and the nonlinear CSA (NLCSA) [28,29,30,31], have been extended to BSAR imaging processing. Among the above-mentioned methods, the RDA, OKA and CSA are only available for azimuth-invariant BSAR imaging processing, so they cannot always satisfy the precise imaging processing requirements of all BSAR configurations in practice, especially for azimuth-variant BSAR systems. The NLCSA and its modifications have been used to implement imaging processing for different BSAR configurations, but they involve approximations in handling the spatial variance, range-azimuth coupling and motion errors, which may cause large phase errors in some particular BSAR imaging processing cases. Thus, the FDAs are only valid for a limited number of BSAR systems and do not satisfy the precise imaging processing requirements of the OS-BFSAR, due to its large spatial variances, serious range-azimuth coupling and complicated motion errors. For the OS-BFSAR, if the minimum phase error and the highest resolution are required, processing such BSAR data must rely on TDAs, which avoid the limitations of the FDAs.
The TDAs include the direct TDA (DTDA), like the backprojection algorithm (BPA) [32], and the efficient TDA (ETDA), like the fast BPA (FBPA) [33,34,35] and the fast factorized BPA (FFBPA) [36]. The DTDA [32] can be considered a linear transformation that reconstructs the SAR scene from the radar echoes, and thus it can be applied directly to monostatic and bistatic imaging processing with perfect focusing performance. Importantly, it can precisely accommodate the abovementioned problems in OS-BFSAR imaging processing. Moreover, it offers the further advantage of the precise handling of irregular sampling, which is particularly useful in the OS-BFSAR system. However, the DTDA has a high computational load, which may prevent its use as a standard method for monostatic and bistatic imaging processing. To reduce this computational load, an efficient implementation of the TDA has been applied to monostatic SAR imaging processing (i.e., the FBPA [33]), which is based on a two-step split of the synthetic aperture. The FBPA method, including the derivation of the Nyquist requirements for linear-track SAR imaging processing, was presented in [34]. A quad-tree-based FBPA for arbitrary-motion SAR imaging processing was proposed in [35], offering the idea of splitting the imaging processing into multiple stages. All the developments of the monostatic FBPA converged into the FFBPA [36], which is an optimum method benefiting from multiple-stage factorizations working in a geometry that is efficient in terms of image sampling. These ETDAs are based on subaperture processing techniques, which keep all the advantages of the DTDA but with a reduced computational load. With the rapid development of BSAR technologies in recent years, the monostatic ETDAs have been extended to BSAR imaging processing, and they are classified into two kinds: the bistatic FBPA [37,38,39,40,41,42] and the bistatic FFBPA [43,44,45,46,47,48]. A study on the ability of the bistatic FBPA to handle the bistatic range history precisely was developed in [37]. In [38], a bistatic FBPA based on subapertures and subimages was presented for BSAR imaging processing, which required an intermediate processing step involving beamforming from the radar echoes. The phase errors caused by the approximations in this bistatic FBPA were analyzed in [39] to provide a good trade-off between the phase error and the computational load in imaging processing.
Another bistatic FBPA based on the subaperture and polar grid was proposed in [40], which required mapping the radar echoes onto polar grids as an intermediate processing step instead of beamforming from the radar echoes. In this research, the reconstruction of the SAR scene was recommended in a ground plane instead of in a slant-range plane, and the polar range coordinate and polar angular coordinate necessarily depended on both the transmitter and receiver tracks; however, the sampling requirements for the polar grids were derived in [41] only for the linear-trajectory BSAR system. A bistatic FBPA for OS-BSAR imaging processing was proposed in [42], which also represented the subimages on a Cartesian ground-plane grid. However, the oversampling ratios of the subimages in the azimuth and range directions were uncertain, so the sampling requirements for the subimages were not optimal. The bistatic FFBPA was first applied to OS-BSAR imaging processing in [43], but only experimental results were given. The application of the bistatic FFBPA to processing BSAR data was presented in [44], which gave the basic principles of the bistatic FFBPA, whereas the details of its implementation were not given.
Another bistatic FFBPA was applied to a spaceborne-airborne BSAR case [45], and it first represented the subimages on elliptical polar grids to reduce the computational load. However, the sampling requirement for the elliptical polar grids was derived only for the linear-track BSAR system, and only for the preferred BSAR case in which the grids are referenced to the radar with the higher angular velocity. Considering the motion errors, a bistatic FFBPA with a derivation of the sampling requirement for the elliptical polar grids was developed in [46,47]. However, for the bistatic FFBPAs given in [45,46,47], the elliptical polar range coordinate and polar angular coordinate were referenced only to the transmitter or receiver track, so the applicability of these sampling requirements was limited in some instances. The authors of [38] presented a bistatic FFBPA for linear-trajectory BSAR imaging processing in [48], which gave the requirements for splitting the subapertures and subimages, while the sampling requirements of the beams for the corresponding subimages were not given. In [49], a bistatic FBPA and FFBPA based on subapertures and local polar coordinates were proposed for general bistatic airborne SAR systems, which in fact was a synthesis of the research given in [40] and [41]. Similarly, the motion errors were not considered in the derivation of the sampling requirements for the polar grids, which cannot offer a near-optimum tradeoff between imaging precision and efficiency in practical BSAR imaging processing. It is known that the bistatic ETDAs in [37,38,39,40,41,42,43,44,45,46,47,48,49] were developed for traditional bistatic side-looking SAR (BSSAR) imaging processing. However, to our knowledge, an ETDA for OS-BFSAR imaging processing has hardly been investigated in earlier publications, so it remains desirable for practical OS-BFSAR data processing.
Based on these previous works, this paper explores an ETDA that considers motion errors for OS-BFSAR imaging processing, based on subaperture and polar grid processing. This method represents the subimages on polar grids in the ground plane instead of the slant-range plane, and it is referenced to the positions of both the moving and stationary radars. It can not only accurately accommodate the large spatial variances, serious range-azimuth coupling and motion errors, but also greatly improve the imaging efficiency with respect to the DTDA. First, the OS-BFSAR imaging geometry is built, and the DTDA for OS-BFSAR imaging processing is provided, which lays the foundation for the proposed ETDA. Second, the polar grids of the subimages are defined, and then the subaperture imaging processing in the ETDA is derived. The sampling requirements for the polar grids, considering motion errors, are derived from the point of view of the bandwidth, which can offer a near-optimum tradeoff between imaging precision and efficiency. Third, the implementation and computational load of the proposed ETDA are analyzed, and then the speed-up factor of the proposed ETDA with respect to the DTDA is derived. Finally, the presented ETDA is tested and validated by experimental results based on simulated and measured OS-BFSAR data.
This paper is organized as follows: Section 2 reviews the bistatic DTDA for the OS-BFSAR imaging processing. The details of the proposed ETDA, including the definition of the polar grids, subaperture imaging processing, sampling requirements for the polar grids, implementation and computational load, are presented in Section 3. Experimental results and the corresponding analysis based on simulated and measured data are given in Section 4. Conclusions are drawn in Section 5.

2. DTDA for OS-BFSAR Imaging Processing

2.1. Imaging Geometry

The imaging geometry of the OS-BFSAR system including the motion errors is shown in Figure 1. The straight line l1 is the ideal track of the moving radar, and its actual track is the curve l2. The position of the moving radar is rM(η) = (xM(η),yM(η),zM(η)) at the slow time (azimuth time) η, while the position of the stationary radar is rS = (xS,0,zS). Suppose that the illuminating beam of the moving radar is always covered by that of the stationary radar to ensure the synchronization of this OS-BFSAR system, and that the moving radar operates in the forward-looking spotlight mode. P is assumed to be an arbitrary scattering target in the scene, and its position is rP. The distances from the moving and stationary radars to the scattering target P at the slow time η are RM(η,rP) and RS(rP), respectively. Therefore, the bistatic distance from the scattering target P to the moving and stationary radars at the slow time η is:
$R(\eta, \mathbf{r}_P) = R_M(\eta, \mathbf{r}_P) + R_S(\mathbf{r}_P) = \left| \mathbf{r}_P - \mathbf{r}_M(\eta) \right| + \left| \mathbf{r}_P - \mathbf{r}_S \right| \qquad (1)$
Provided that the transmitted signal is p(τ), the received signal of the target P is:
$s(\tau, \eta) = \sigma_P\, p\left[ \tau - R(\eta, \mathbf{r}_P)/c_0 \right] \qquad (2)$
where τ is the fast time, σP is the scattering coefficient of the scattering target P, and c0 is the speed of light. Therefore, the range-compressed signal of the scattering target P is:
$s_{rc}(\tau, \eta) = \sigma_P\, p_{rc}\left[ B\left( \tau - R(\eta, \mathbf{r}_P)/c_0 \right) \right] \qquad (3)$
where prc[·] is the range-compressed pulse, and B is the transmitted signal bandwidth.

2.2. DTDA for OS-BFSAR Imaging

The DTDA (i.e., the BPA) can be considered a direct transformation from the radar echoes into a complex SAR image, and therefore it can be applied directly to OS-BFSAR imaging processing without any modification. Unlike the monostatic DTDA, the backprojection (BP) of the radar echoes in the bistatic DTDA for OS-BFSAR imaging processing is carried out over an ellipsoidal basis. In Figure 1, a and b are the major and minor semi-axes of the dashed ellipse, whose foci are determined by the positions of the considered moving and stationary radars. Based on a and b, the linear eccentricity can be defined as $c = \sqrt{a^2 - b^2}$. r = (x,y,0) is assumed to be an arbitrary sample in the scene, and then the value of the SAR image at the sample r calculated by the bistatic DTDA is given by [32]:
$I(\mathbf{r}) = \int_{\eta_c - T/2}^{\eta_c + T/2} s_{rc}\left( R(\eta,\mathbf{r})/c_0,\, \eta \right) \exp\left[ j 2\pi f R(\eta,\mathbf{r})/c_0 \right] \mathrm{d}\eta = \int_{\eta_c - T/2}^{\eta_c + T/2} \sigma_P\, p_{rc}\left[ B\left( R(\eta,\mathbf{r}) - R(\eta,\mathbf{r}_P) \right)/c_0 \right] \exp\left[ j 2\pi f R(\eta,\mathbf{r})/c_0 \right] \mathrm{d}\eta \qquad (4)$
where ηc is the synthetic aperture center time of the moving radar, f is the radar frequency, and T is the integration time.
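As a concrete illustration of Equation (4), the following minimal Python sketch backprojects the range-compressed echoes onto a ground grid over the bistatic (ellipsoidal) range. The array layout, the uniform fast-time axis and the function name are assumptions made for illustration, not the authors' implementation.

import numpy as np

def bistatic_backprojection(src, tau, r_m, r_s, grid, f_c, c0=3e8):
    """src:  (N_eta, N_tau) range-compressed echoes s_rc(tau, eta)
    tau:  (N_tau,) fast-time axis [s]
    r_m:  (N_eta, 3) moving-radar positions r_M(eta), one per pulse
    r_s:  (3,) stationary-radar position r_S
    grid: (N_pix, 3) ground-plane samples r = (x, y, 0)
    f_c:  radar frequency f"""
    image = np.zeros(grid.shape[0], dtype=complex)
    for n in range(r_m.shape[0]):                      # integrate over the aperture
        # bistatic range R(eta, r) = |r - r_M(eta)| + |r - r_S|
        R = (np.linalg.norm(grid - r_m[n], axis=1)
             + np.linalg.norm(grid - r_s, axis=1))
        delay = R / c0
        # pick the range-compressed sample at the bistatic delay (linear interp.)
        sample = (np.interp(delay, tau, src[n].real)
                  + 1j * np.interp(delay, tau, src[n].imag))
        # phase compensation exp(+j 2*pi*f*R/c0) and coherent accumulation
        image += sample * np.exp(1j * 2 * np.pi * f_c * delay)
    return image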

3. ETDA for OS-BFSAR Imaging Processing

To reduce the computational load of the bistatic DTDA, an efficient implementation of the bistatic DTDA (i.e., the ETDA) for OS-BFSAR imaging processing is presented. The proposed ETDA is developed from the research described in [40,41] based on subaperture and polar grid processing, but the motion errors are considered in the derivation of the sampling requirements for the polar grids in the proposed ETDA, which can offer a near-optimum tradeoff between imaging precision and efficiency. Similar to the ETDA in [40,41], polar grids in the ground plane instead of the slant-range plane are highly recommended for calculating the subimages in the proposed ETDA, since there is no exact slant-range plane for BSAR configurations. Moreover, defining the polar grids of the subimages and the SAR image grid in the same plane also simplifies the calculation of the travel distance of a radar pulse from the transmitter and receiver to the scene.
Similar to the ETDA for the BSSAR imaging processing in [46,47], the proposed ETDA is able to accommodate the non-ideal track of the moving radar (i.e., compensating the motion error of the moving radar), but does not increase the computational load. In the first processing stage, the motion error can be accurately compensated in the subaperture imaging using the bistatic ETDA. In other words, the motion error is corrected for each BP data line by computing the bistatic range from each subaperture position of the moving radar via the polar grid of subimages to the position of the stationary radar. In the successive processing stage, the higher resolution subimage is interpolated from the lower resolution subimages in the previous stage. Therefore, the finer motion error compensation is included by the proposed ETDA as the resolution successively improves through the processing stages [46,47].

3.1. DTDA with Subaperture and Polar Grid Processing

The OS-BFSAR imaging geometry for the bistatic DTDA with subaperture and polar grid processing is shown in Figure 2. For the n-th subaperture of the moving radar, AMn is the n-th subaperture center of the moving radar at the n-th subaperture center time ηn. The position vector of the moving radar at the slow time ηn is rM(ηn) = (xM(ηn),yM(ηn),zM(ηn)), and its projection in the X-Y plane is AMgn with the position vector rMg(ηn) = (xM(ηn),yM(ηn),0). The distances from the moving and stationary radars to the sample r at the slow time ηn are RMn and RSn, and their projections in the X-Y plane are RMgn and RSgn, respectively. an, bn and cn have physical meanings similar to those of a, b and c in Figure 1, respectively, and the projection of the linear eccentricity cn is cgn. Similar to the monostatic DTDA, the polar coordinates (ρn,θn) of the sample r are defined in the ground plane as follows [41]. First, the origin of the polar grid is defined as the projection of the center point of the link line between the stationary radar position B and the considered moving radar subaperture center position AMn. Second, the polar range ρn is defined as the distance between the origin of the polar grid and the sample r, and the polar angle θn is defined as the angle from the linear eccentricity projection cgn to the polar range ρn. Thus, the polar coordinates (ρn,θn) of the sample r are determined by:
$\begin{cases} \rho_n = \sqrt{\left[ \dfrac{x_S + x_M(\eta_n)}{2} - x \right]^2 + \left[ \dfrac{y_M(\eta_n)}{2} - y \right]^2} \\ \theta_n = \arccos\left( \dfrac{\rho_n^2 + c_{gn}^2 - \left[ (x_S - x)^2 + y^2 \right]}{2 \rho_n c_{gn}} \right), \quad \theta_n \in [0, \pi] \end{cases} \qquad (5)$
It is well known that the angle θn in Equation (5) is calculated based on the law of cosines. Using the above equation, we can obtain the Cartesian coordinates of the sample r, which are given by:
$\begin{cases} x = \dfrac{x_S + x_M(\eta_n)}{2} + \rho_n \cos\left( \pi \pm (\gamma_n + \theta_n) \right) \\ y = \dfrac{y_M(\eta_n)}{2} + \rho_n \sin\left( \pi \pm (\gamma_n + \theta_n) \right) \end{cases} \quad \text{where} \quad \begin{cases} + & \text{when } y_M(\eta_n) \geq 0 \\ - & \text{when } y_M(\eta_n) < 0 \end{cases} \qquad (6)$
It can be found that the sign ± in Equation (6) depends on the relative positions between the subaperture center AMn and the sample r in the polar subimage. Here, γn is the angle from the linear eccentricity projection cgn to the X axis, which can be given by:
$\gamma_n = \arccos\left( \frac{x_S^2 + 4 c_{gn}^2 - x_M^2(\eta_n) - y_M^2(\eta_n)}{4 x_S c_{gn}} \right) \qquad (7)$
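A short sketch of Equation (5) may make the geometry concrete. The helper name and the way the projected linear eccentricity cgn is obtained (half the ground distance between the two radars, as implied by the geometry description) are assumptions made for illustration; the inverse mapping of Equations (6) and (7) is not reproduced here.

import numpy as np

def polar_coords(x, y, x_m, y_m, x_s):
    """Map a ground sample r = (x, y, 0) to (rho_n, theta_n) for the n-th
    subaperture, given the moving-radar subaperture centre projection
    (x_m, y_m, 0) and the stationary radar ground projection (x_s, 0, 0)."""
    # origin of the polar grid: projected midpoint of the line A_Mn -- B
    ox, oy = (x_s + x_m) / 2.0, y_m / 2.0
    rho = np.hypot(ox - x, oy - y)
    # projected linear eccentricity: half the ground distance between the radars
    c_g = 0.5 * np.hypot(x_s - x_m, y_m)
    # law of cosines with the ground range from the stationary radar to r
    d_s = np.hypot(x_s - x, y)
    cos_theta = (rho**2 + c_g**2 - d_s**2) / (2.0 * rho * c_g)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))   # theta_n in [0, pi]
    return rho, theta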
Similarly, the polar coordinates (ρnP,θnP) of the scattering target P can be defined. Letting R(η,ρnP,θnP) = R(η,rP) and R(η,ρn,θn) = R(η,r), the value of the polar subimage at the sample (ρn,θn) for the n-th subaperture imaging processing is calculated as:
$I_n(\rho_n,\theta_n) = \int_{\eta_n - T_n/2}^{\eta_n + T_n/2} s_{rc}\left[ \frac{R(\eta,\rho_n,\theta_n)}{c_0},\, \eta \right] \exp\left[ j 2\pi f \frac{R(\eta,\rho_n,\theta_n)}{c_0} \right] \mathrm{d}\eta = \int_{\eta_n - T_n/2}^{\eta_n + T_n/2} \sigma_P\, p_{rc}\left[ \frac{B\left( R(\eta,\rho_n,\theta_n) - R(\eta,\rho_{nP},\theta_{nP}) \right)}{c_0} \right] \exp\left[ j 2\pi f \frac{R(\eta,\rho_n,\theta_n)}{c_0} \right] \mathrm{d}\eta \qquad (8)$
where Tn is the integration time of the n-th subaperture.

3.2. Sampling Requirements for Polar Grids Considering Motion Errors

From the point of view of the bandwidth, the sampling requirements for the polar grids can be derived by calculating the bistatic range from the moving and stationary radars to the sample (ρn,θn). The bistatic range calculation for the sample (ρn,θn) in the n-th subaperture imaging processing is shown in Figure 3. A is the moving radar position at the slow time η, and the n-th subaperture integration time is η ∈ (ηn − Tn/2, ηn + Tn/2). l2g is the projection of the real flight track l2 of the moving radar in the X-Y plane, and AMgη is the projection of the moving radar position A in the X-Y plane. μMη is the distance between the moving radar position projections AMgn and AMgη along the Y axis direction, and δMη is the distance between them along the X axis direction; therefore, the length of the straight line AMgnAMgη is dMη = √(μMη² + δMη²). ϑMη is the angle between the straight line AMgnAMgη and the range projection RMgn, and ψMη is the angle between the straight line AMgnAMgη and the linear eccentricity projection cgn. φMn is defined as the angle between the range projection RMgn and the linear eccentricity projection cgn.
From the imaging geometry in Figure 3, the bistatic distance from the moving and stationary radars to the sample (ρn,θn) at the slow time η can be computed and expanded using the Taylor series, which is given by:
$R(\eta,\rho_n,\theta_n) = R_M(\eta,\rho_n,\theta_n) + R_S(\rho_n,\theta_n) = \sqrt{R_{Mgn}^2 + d_{M\eta}^2 - 2 R_{Mgn} d_{M\eta} \cos(\vartheta_{M\eta}) + z_M^2(\eta)} + \sqrt{R_{Sgn}^2 + z_S^2} = R_{Mgn} + R_{Sgn} - d_{M\eta}\cos(\vartheta_{M\eta}) + \frac{d_{M\eta}^2 \sin^2(\vartheta_{M\eta})}{2 R_{Mgn}} + \frac{z_M^2(\eta)}{2 R_{Mgn}} + \frac{z_S^2}{2 R_{Sgn}} + \cdots \qquad (9)$
where RM(η,ρn,θn) is the distance from the moving radar position A to the sample (ρn,θn), and RS(ρn,θn) is the distance from the stationary radar position to the sample (ρn,θn).
For far-field OS-BFSAR imaging, it is reasonable to assume that the range dMη and the height zM(η) are much smaller than the range RMgn, and that the height zS is much smaller than the range RSgn. Therefore, all the terms except the first three terms of Equation (9) are approximately zero. Taking only the first three terms of Equation (9) into account, Equation (9) can be approximated as:
$R(\eta,\rho_n,\theta_n) \approx R_{Mgn} + R_{Sgn} - d_{M\eta}\cos(\vartheta_{M\eta}) = \sqrt{\rho_n^2 + c_{gn}^2 + 2\rho_n c_{gn}\cos(\theta_n)} + \sqrt{\rho_n^2 + c_{gn}^2 - 2\rho_n c_{gn}\cos(\theta_n)} - d_{M\eta}\cos(\vartheta_{M\eta}) \qquad (10)$
According to [41], it can be clearly found that RMgn + RSgn is nearly constant with respect to the polar angle θn, but not with respect to the polar range ρn, for the n-th subaperture imaging processing. From the imaging geometry in Figure 3, cos(ϑMη) in Equation (10) can then be calculated as follows:
$\cos(\vartheta_{M\eta}) = \cos(\varphi_{Mn} - \psi_{M\eta}) = \cos(\varphi_{Mn})\cos(\psi_{M\eta}) + \sin(\varphi_{Mn})\sin(\psi_{M\eta}) = \frac{R_{Mgn}^2 + c_{gn}^2 - \rho_n^2}{2 R_{Mgn} c_{gn}}\cos(\psi_{M\eta}) + \frac{\rho_n \sin(\theta_n)}{R_{Mgn}}\sin(\psi_{M\eta}) = \frac{\left( c_{gn} + \rho_n \cos(\theta_n) \right)\cos(\psi_{M\eta})}{\sqrt{\rho_n^2 + c_{gn}^2 + 2\rho_n c_{gn}\cos(\theta_n)}} + \frac{\rho_n \sin(\theta_n)\sin(\psi_{M\eta})}{\sqrt{\rho_n^2 + c_{gn}^2 + 2\rho_n c_{gn}\cos(\theta_n)}} = \frac{c_{gn}\cos(\psi_{M\eta}) + \rho_n \cos(\theta_n - \psi_{M\eta})}{\sqrt{\rho_n^2 + c_{gn}^2 + 2\rho_n c_{gn}\cos(\theta_n)}} \qquad (11)$
Substituting Equation (11) into Equation (10), the expression in Equation (10) can be written as:
$R(\eta,\rho_n,\theta_n) \approx \sqrt{\rho_n^2 + c_{gn}^2 + 2\rho_n c_{gn}\cos(\theta_n)} + \sqrt{\rho_n^2 + c_{gn}^2 - 2\rho_n c_{gn}\cos(\theta_n)} - d_{M\eta}\, \frac{c_{gn}\cos(\psi_{M\eta}) + \rho_n \cos(\theta_n - \psi_{M\eta})}{\sqrt{\rho_n^2 + c_{gn}^2 + 2\rho_n c_{gn}\cos(\theta_n)}} \qquad (12)$
The two-dimensional Fourier transform of the subimage In(ρn,θn) in Equation (8) with respect to the polar range ρn and polar angle θn is given by:
$I_{FTn}(k_{\rho_n}, k_{\theta_n}) = \iint I_n(\rho_n, \theta_n) \exp\left[ -j 2\pi \left( k_{\rho_n}\rho_n + k_{\theta_n}\theta_n \right) \right] \mathrm{d}\rho_n\, \mathrm{d}\theta_n \qquad (13)$
where kρn and kθn are the wavenumbers corresponding to the polar range ρn and polar angle θn, respectively. According to [34], the Fourier transform can be computed accurately using the stationary phase principle. Substituting Equation (8) into Equation (13), the stationary phase condition is given by:
$\begin{cases} \dfrac{\partial}{\partial \rho_n}\left( 2\pi f R(\eta,\rho_n,\theta_n)/c_0 - 2\pi k_{\rho_n}\rho_n \right) = 0 \\ \dfrac{\partial}{\partial \theta_n}\left( 2\pi f R(\eta,\rho_n,\theta_n)/c_0 - 2\pi k_{\theta_n}\theta_n \right) = 0 \end{cases} \qquad (14)$
From Equations (10), (12) and (14), the wavenumber kρn can be calculated by:
$k_{\rho_n} = \frac{f}{c_0}\frac{\partial R(\eta,\rho_n,\theta_n)}{\partial \rho_n} = \frac{f}{c_0}\left[ \frac{\rho_n + c_{gn}\cos(\theta_n)}{\sqrt{\rho_n^2 + c_{gn}^2 + 2\rho_n c_{gn}\cos(\theta_n)}} + \frac{\rho_n - c_{gn}\cos(\theta_n)}{\sqrt{\rho_n^2 + c_{gn}^2 - 2\rho_n c_{gn}\cos(\theta_n)}} - \frac{d_{M\eta} c_{gn}\left[ \cos(\theta_n - \psi_{M\eta})\left( c_{gn} + \rho_n\cos(\theta_n) \right) - \cos(\psi_{M\eta})\left( \rho_n + c_{gn}\cos(\theta_n) \right) \right]}{\sqrt{\left( \rho_n^2 + c_{gn}^2 + 2\rho_n c_{gn}\cos(\theta_n) \right)^3}} \right] \qquad (15)$
Due to the fact that the range dMη is much smaller than the polar range ρn for far-field OS-BFSAR imaging processing, the third term in Equation (15) is approximately zero, so it can generally be neglected. Then, Equation (15) can be approximated as:
$k_{\rho_n} \approx \frac{f}{c_0}\left[ \frac{\rho_n + c_{gn}\cos(\theta_n)}{\sqrt{\rho_n^2 + c_{gn}^2 + 2\rho_n c_{gn}\cos(\theta_n)}} + \frac{\rho_n - c_{gn}\cos(\theta_n)}{\sqrt{\rho_n^2 + c_{gn}^2 - 2\rho_n c_{gn}\cos(\theta_n)}} \right] = \frac{f}{c_0} H(\delta_n, \theta_n) \qquad (16)$
Let δn be defined as the ratio of cgn to ρn (i.e., δn = cgn/ρn), and then the function H(δn,θn) is:
$H(\delta_n, \theta_n) = \frac{1 + \delta_n\cos(\theta_n)}{\sqrt{1 + \delta_n^2 + 2\delta_n\cos(\theta_n)}} + \frac{1 - \delta_n\cos(\theta_n)}{\sqrt{1 + \delta_n^2 - 2\delta_n\cos(\theta_n)}}, \quad \delta_n \geq 0 \qquad (17)$
Figure 4 gives the plot of the function H(δn,θn) for different values of the angle θn.
From Figure 4, we can clearly find that the minimum and maximum of the function H(δn,θn) are given by:
$H_{\max}(\delta_n, \theta_n) = \begin{cases} H(\delta_n, 0) \text{ or } H(\delta_n, \pi), & 0 \leq \delta_n \leq 1 \\ H(\delta_n, \pi/2), & \delta_n > 1 \end{cases} = \begin{cases} 2, & 0 \leq \delta_n \leq 1 \\ \dfrac{2}{\sqrt{1 + \delta_n^2}}, & \delta_n > 1 \end{cases} \qquad (18)$
and:
$H_{\min}(\delta_n, \theta_n) = \begin{cases} H(\delta_n, \pi/2), & 0 \leq \delta_n \leq 1 \\ H(\delta_n, 0) \text{ or } H(\delta_n, \pi), & \delta_n > 1 \end{cases} = \begin{cases} \dfrac{2}{\sqrt{1 + \delta_n^2}}, & 0 \leq \delta_n \leq 1 \\ 0, & \delta_n > 1 \end{cases} \qquad (19)$
Based on the maximum and minimum of the radar frequency f, the bound of the wavenumber kρn can be given by:
$\frac{f_{\min}}{c_0} H_{\min}(\delta_n, \theta_n) \leq k_{\rho_n} \leq \frac{f_{\max}}{c_0} H_{\max}(\delta_n, \theta_n) \qquad (20)$
Then, the bandwidth of IFTn(kρn, kθn) with respect to the wavenumber kρn is given by:
$B(k_{\rho_n}) = \begin{cases} \dfrac{2}{c_0}\left( f_{\max} - \dfrac{f_{\min}}{\sqrt{1 + \delta_n^2}} \right), & 0 \leq \delta_n \leq 1 \\ \dfrac{2 f_{\max}}{c_0 \sqrt{1 + \delta_n^2}}, & \delta_n > 1 \end{cases} \qquad (21)$
Finally, the sampling requirement for the polar range ρn is therefore derived by:
$\Delta\rho_n \leq \frac{1}{B(k_{\rho_n})} = \begin{cases} \dfrac{c_0 \sqrt{1 + \delta_n^2}}{2\left( \sqrt{1 + \delta_n^2}\, f_{\max} - f_{\min} \right)}, & 0 \leq \delta_n \leq 1 \\ \dfrac{c_0 \sqrt{1 + \delta_n^2}}{2 f_{\max}}, & \delta_n > 1 \end{cases} \qquad (22)$
For the monostatic SAR system with the co-located transmitter and receiver, by setting δn= 0, the sampling requirement of the polar range used in the monostatic ETDA can be simplified as:
$\Delta\rho_n \leq \frac{c_0}{2\left( f_{\max} - f_{\min} \right)} \qquad (23)$
which is the same as the equation derived and given in [36].
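A minimal sketch of the polar-range bound in Equations (22) and (23) follows; the function name and the example frequencies are illustrative assumptions, not parameters of the experiments in this paper.

def delta_rho_max(f_min, f_max, delta_n, c0=3e8):
    """Largest allowed polar-range spacing, Eq. (22), for delta_n = c_gn / rho_n."""
    s = (1.0 + delta_n**2) ** 0.5
    if delta_n <= 1.0:
        return c0 * s / (2.0 * (s * f_max - f_min))
    return c0 * s / (2.0 * f_max)

# monostatic check (delta_n = 0): reduces to c0 / (2*(f_max - f_min)), Eq. (23)
print(delta_rho_max(200e6, 700e6, 0.0))   # 0.3 m for an assumed 500 MHz bandwidth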
Similarly, based on Equations (10), (12) and (14), the wavenumber kθn is calculated by:
$k_{\theta_n} = \frac{f}{c_0}\frac{\partial R(\eta,\rho_n,\theta_n)}{\partial \theta_n} = \frac{f}{c_0}\frac{\partial}{\partial \theta_n}\left[ R_{Mgn} + R_{Sgn} - d_{M\eta}\, \frac{c_{gn}\cos(\psi_{M\eta}) + \rho_n\cos(\theta_n - \psi_{M\eta})}{\sqrt{\rho_n^2 + c_{gn}^2 + 2\rho_n c_{gn}\cos(\theta_n)}} \right] \qquad (24)$
Since the sum RMgn + RSgn in Equation (24) is nearly constant with respect to the polar angle θn, its first derivative with respect to θn is zero, so the wavenumber kθn is approximated by:
$k_{\theta_n} = \frac{f d_{M\eta} \rho_n}{c_0}\, \frac{\left[ \rho_n\sin(\theta_n - \psi_{M\eta}) - c_{gn}\sin(\psi_{M\eta}) \right]\left[ \rho_n + c_{gn}\cos(\theta_n) \right]}{\sqrt{\left( \rho_n^2 + c_{gn}^2 + 2\rho_n c_{gn}\cos(\theta_n) \right)^3}} \approx \frac{f d_{M\eta} \rho_n}{c_0}\, \frac{\left[ \rho_n\sin(\theta_n - \psi_{M\eta}) - c_{gn}\sin(\psi_{M\eta}) \right]\left[ \rho_n + c_{gn}\cos(\theta_n) \right]}{\left[ \rho_n^2 + c_{gn}^2 + 2\rho_n c_{gn}\cos(\theta_n) \right]\sqrt{\rho_n^2 + c_{gn}^2\cos^2(\theta_n) + 2\rho_n c_{gn}\cos(\theta_n)}} \approx \frac{f d_{M\eta} \rho_n}{c_0}\, \frac{\rho_n\sin(\theta_n - \psi_{M\eta}) - c_{gn}\sin(\psi_{M\eta})}{\rho_n^2 + c_{gn}^2 + 2\rho_n c_{gn}\cos(\theta_n)} \approx \frac{f d_{M\eta}\left[ \sin(\theta_n - \psi_{M\eta}) - \delta_n\sin(\psi_{M\eta}) \right]}{c_0\left[ 1 + \delta_n^2 + 2\delta_n\cos(\theta_n) \right]} \qquad (25)$
In this paper, to find the bound of the wavenumber kθn in Equation (25), we can investigate it in some extreme cases, i.e., when the polar angle takes the values θn = 0, θn = π/2 and θn = π. When the cases θn = 0 and θn = π are considered, Equation (25) becomes:
$k_{\theta_n}(\theta_n = 0) = -\frac{f d_{M\eta}\sin(\psi_{M\eta})}{c_0(1 + \delta_n)} \qquad (26)$
and:
$k_{\theta_n}(\theta_n = \pi) = \frac{f d_{M\eta}\sin(\psi_{M\eta})}{c_0(1 - \delta_n)} \qquad (27)$
Thus, the bound of the wavenumber kθn in Equations (26) and (27) can simply be found from the values of the factors f, dMη and sin(ψMη), which can be expressed as:
$-\frac{f_{\max} d_{Mn}}{c_0(1 + \delta_n)} \leq k_{\theta_n}(\theta_n = 0) \leq \frac{f_{\max} d_{Mn}}{c_0(1 + \delta_n)} \qquad (28)$
and:
$-\frac{f_{\max} d_{Mn}}{c_0\left| 1 - \delta_n \right|} \leq k_{\theta_n}(\theta_n = \pi) \leq \frac{f_{\max} d_{Mn}}{c_0\left| 1 - \delta_n \right|} \qquad (29)$
where dMn is the maximum of the length dMη.
Similarly, when the case θn = π/2 is considered, Equation (25) becomes:
$k_{\theta_n}(\theta_n = \pi/2) = \frac{f d_{M\eta}\left[ \cos(\psi_{M\eta}) - \delta_n\sin(\psi_{M\eta}) \right]}{c_0(1 + \delta_n^2)} = \frac{f d_{M\eta}\, g(\psi_{M\eta})}{c_0(1 + \delta_n^2)} \qquad (30)$
where g(ψMη) is the trigonometric function g(ψMη) = cos(ψMη) − δn sin(ψMη). The extremum of this trigonometric function can be found from the roots of its first derivative with respect to the angle ψMη, i.e.:
$\frac{\partial\left[ g(\psi_{M\eta}) \right]}{\partial \psi_{M\eta}} = 0 \qquad (31)$
Then, we have:
$\psi_{M\eta} = -\arctan(\delta_n) \qquad (32)$
Therefore, the bound of the function g(ψMη) is given by:
$-\sqrt{1 + \delta_n^2} \leq g(\psi_{M\eta}) \leq \sqrt{1 + \delta_n^2} \qquad (33)$
As a result, the bound of the wavenumber kθn in Equation (30) is obtained as:
$-\frac{f_{\max} d_{Mn}}{c_0\sqrt{1 + \delta_n^2}} \leq k_{\theta_n}(\theta_n = \pi/2) \leq \frac{f_{\max} d_{Mn}}{c_0\sqrt{1 + \delta_n^2}} \qquad (34)$
Therefore, the bandwidth of IFTn(kρn, kθn) with respect to the wavenumber kθn for the above extreme cases can be given by:
$\begin{cases} B\left( k_{\theta_n}(\theta_n = 0) \right) = \dfrac{2 f_{\max} d_{Mn}}{c_0(1 + \delta_n)} \\ B\left( k_{\theta_n}(\theta_n = \pi) \right) = \dfrac{2 f_{\max} d_{Mn}}{c_0\left| 1 - \delta_n \right|} \\ B\left( k_{\theta_n}(\theta_n = \pi/2) \right) = \dfrac{2 f_{\max} d_{Mn}}{c_0\sqrt{1 + \delta_n^2}} \end{cases} \qquad (35)$
From Equation (35), it can be seen that B(kθn(θn = π)) ≥ B(kθn(θn = 0)) and B(kθn(θn = π)) ≥ B(kθn(θn = π/2)); thus, the sampling requirement for the polar angle θn is derived from the bandwidth B(kθn(θn = π)), which is given by:
$\Delta\theta_n \leq \frac{1}{B\left( k_{\theta_n}(\theta_n = \pi) \right)} = \frac{c_0\left| 1 - \delta_n \right|}{2 f_{\max} d_{Mn}} = \frac{c_0\left| 1 - \delta_n \right|}{f_{\max}\sqrt{l_{Mn}^2 + 4\delta_{Mn,\max}^2}} \qquad (36)$
where lMn is the length of the n-th subaperture along the Y axis direction, and δMn,max is the maximum of the deviation δMη within the n-th subaperture. From Equation (36), it can be found that the efficiency of the proposed ETDA, in comparison with the bistatic DTDA, decreases when the deviation from the ideal flight track becomes large.
For monostatic SAR imaging processing, by setting δn = 0 and dMn = 2dn (where dn is the length of the n-th subaperture for the monostatic SAR), the polar angle sampling requirement used in the monostatic ETDA can be simplified as:
$\Delta\theta_n \leq \frac{c_0}{2 f_{\max}\sqrt{l_{Mn}^2 + 4\delta_{Mn,\max}^2}} \qquad (37)$
which is also the same as the equation derived and given in [36].
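Similarly, a minimal sketch of the polar-angle bound in Equation (36) shows how a larger track deviation δMn,max tightens the bound; the function name and the example numbers are illustrative assumptions.

import math

def delta_theta_max(f_max, l_mn, delta_mn_max, delta_n, c0=3e8):
    """Largest allowed polar-angle spacing for the n-th subaperture, Eq. (36).
    l_mn:          subaperture length along the Y axis [m]
    delta_mn_max:  maximum cross-track deviation of the real track [m]
    delta_n:       c_gn / rho_n for the sample under consideration"""
    two_d_mn = math.sqrt(l_mn**2 + 4.0 * delta_mn_max**2)   # = 2 * d_Mn
    return c0 * abs(1.0 - delta_n) / (f_max * two_d_mn)

# a larger deviation tightens the bound, i.e. more angular samples are needed
# and the speed-up over the DTDA shrinks (illustrative values only)
print(delta_theta_max(700e6, 10.0, 0.0, 0.5), delta_theta_max(700e6, 10.0, 5.0, 0.5))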

3.3. Algorithm Implementation

The implementation of the proposed ETDA for the OS-BFSAR imaging processing is similar to that of the ETDA for the traditional one-stationary BSSAR (OS-BSSAR) imaging processing given in [46], but it calculates the subimages on the polar grids in the ground plane instead of the elliptical polar grids in the slant-range plane, which can be referenced to the positions of both moving and stationary radars. Figure 5 shows the implementation of the proposed ETDA for the OS-BFSAR imaging processing, and it contains two parts: the raw data factorization and SAR image generation, which are marked by the dashed rectangles with the different colors. The former includes the factorization of the received echo data and moving radar track data (i.e., the moving radar synthetic aperture), while the latter includes calculating the polar grids and Cartesian grid, performing the BP on the polar grids, interpolating the polar subimages to the polar subimages (P2P), and interpolating the polar subimages to the Cartesian image (P2C).
Similarly, the proposed ETDA improves the imaging efficiency by recursively dividing the full synthetic aperture of the moving radar, which contains L aperture positions. In other words, the full synthetic aperture is split recursively over K stages by a factor of Fk in the k-th (1 ≤ k ≤ K) stage, until F1F2···FK subapertures of size LK = L/(F1F2···FK) are finally reached. Then, we have:
$L = L_K \prod_{k=1}^{K} F_k, \quad k = 1, \ldots, K \qquad (38)$
where Fk is defined as the factor by which the number of aperture positions of the moving radar is reduced during the k-th processing stage. For simplicity, we assume a constant factorization of the aperture positions of the moving radar during all processing stages, i.e., Fk = l for all k and K = logl(L/LK). Therefore, Equation (38) can be simplified as:
$L = L_K\, l^K \qquad (39)$
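The factorization of Equation (39) fixes the number of stages as K = logl(L/LK); a small sketch (with an integer-consistency check that is an added assumption) is:

import math

def num_stages(L, L_K, l):
    """Number of factorization stages K = log_l(L / L_K) for a full aperture
    of L pulses, final subapertures of L_K pulses and a constant factor l."""
    K = round(math.log(L / L_K, l))
    if L_K * l**K != L:
        raise ValueError("L must equal L_K * l**K for an integer K")
    return K

# e.g. with the Section 4.1 parameters L = 2**24, L_K = 16, l = 4, this gives K = 10
print(num_stages(2**24, 16, 4))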
In the first stage, the full synthetic aperture of the moving radar is split into l^K small subapertures, which similarly requires a split of the range-compressed echo data. Taking the n-th subaperture imaging processing in the first stage as an example, the polar grids of the subimage are defined according to the derived sampling requirements. The regular BP for the n-th subaperture is performed on the corresponding polar grids over an elliptical mapping, and then accumulated coherently to generate the coarse polar subimage. The parameters ρn, θn, ρnP, θnP, ηn and Tn in Equation (8) are rewritten as ρn¹, θn¹, ρnP¹, θnP¹, ηn¹ and Tn¹, respectively. Hence, the n-th polar subimage at the first stage is given by:
$I_n^1(\rho_n^1, \theta_n^1) = \int_{\eta_n^1 - T_n^1/2}^{\eta_n^1 + T_n^1/2} \sigma_P\, p_{rc}\left[ \frac{B\left( R(\eta, \rho_n^1, \theta_n^1) - R(\eta, \rho_{nP}^1, \theta_{nP}^1) \right)}{c_0} \right] \exp\left[ j 2\pi f \frac{R(\eta, \rho_n^1, \theta_n^1)}{c_0} \right] \mathrm{d}\eta \qquad (40)$
The second stage is a recursive procedure. For the k-th stage (2 ≤ k ≤ K), the polar subimages at the k-th stage are generated from the polar subimages formed in the (k − 1)-th stage. First, every l subapertures of the moving radar at the (k − 1)-th stage are combined into a new subaperture at the k-th stage. The origin of the new polar grid is defined as the projection of the center point of the link line between the stationary radar and the new subaperture center of the moving radar, and then the new polar grid (ρq^k, θq^k) is defined according to the derived sampling requirements. To generate the q-th polar subimage at the k-th stage, every l corresponding polar subimages at the (k − 1)-th stage are interpolated onto the polar grid (ρq^k, θq^k) and then accumulated coherently, which is:
$I_q^k(\rho_q^k, \theta_q^k) = \sum_{p = 1 + (q-1)l}^{q l} I_p^{k-1}\left( \rho_{p,cor}^{k-1}, \theta_{p,cor}^{k-1} \right) \qquad (41)$
where Iq^k is the q-th polar subimage at the k-th stage, and Ip^(k−1) is the p-th polar subimage at the (k − 1)-th stage. (ρp,cor^(k−1), θp,cor^(k−1)) is the position in the polar subimage Ip^(k−1) that corresponds to the polar grid sample (ρq^k, θq^k). However, in practical terms, the position (ρp,cor^(k−1), θp,cor^(k−1)) may not fall exactly on a discrete sample position in the polar subimage Ip^(k−1). According to [40,41], the value at the position (ρp,cor^(k−1), θp,cor^(k−1)) in each polar subimage Ip^(k−1) should be interpolated from its surrounding samples in the l polar subimages Ip^(k−1) using different interpolation methods [50]. Please note that, before the interpolation is performed, upsampling of the polar subimages Ip^(k−1) in the polar range and polar angle is usually required in the proposed ETDA, since it directly affects the final SAR image quality [50]. A minimal sketch of this P2P combination is given after the next paragraph.
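The sketch below illustrates the P2P combination of Equation (41). The map_to_parent_polar helper, which expresses the new polar grid in the coordinates of each parent grid, is hypothetical, and scipy's map_coordinates stands in for the low-pass/band-pass interpolators discussed in the next paragraph; subimages are assumed to be stored as (N_rho, N_theta) complex arrays.

import numpy as np
from scipy.ndimage import map_coordinates

def combine_subimages(parents, parent_grids, new_grid, map_to_parent_polar):
    """parents:      list of l complex subimages I_p^(k-1), each (N_rho, N_theta)
    parent_grids: per-parent (rho_axis, theta_axis) describing their sampling
    new_grid:     (rho_q, theta_q) meshes of the stage-k polar grid
    returns the coherently accumulated stage-k subimage I_q^k."""
    out = np.zeros(new_grid[0].shape, dtype=complex)
    for img, (rho_ax, theta_ax) in zip(parents, parent_grids):
        # (rho, theta) of every new sample expressed in this parent's grid
        rho_cor, theta_cor = map_to_parent_polar(new_grid, rho_ax, theta_ax)
        # fractional indices into the parent subimage
        i = np.interp(rho_cor, rho_ax, np.arange(rho_ax.size))
        j = np.interp(theta_cor, theta_ax, np.arange(theta_ax.size))
        # bilinear interpolation of the complex subimage, then coherent sum
        out += (map_coordinates(img.real, [i, j], order=1)
                + 1j * map_coordinates(img.imag, [i, j], order=1))
    return out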
According to [46], if the sampling requirements for the polar grids are satisfied for all processing stages, the grid interpolation is the main source of error in the proposed ETDA, and it is also the computational bottleneck of the proposed algorithm. It is well known that the quality of the SAR image is a trade-off between imaging precision and efficiency. Fortunately, high-precision interpolators suitable for the ETDA can easily be obtained from [50]. Two-dimensional linear interpolation is used in the proposed ETDA due to its high efficiency: low-pass linear interpolation is used in the polar angle since the angular signal is low-pass, and band-pass linear interpolation is used in the polar range since the transmitted signal is band-pass [46]. Of course, there are also other interpolation methods; for example, cubic spline interpolation can be used in the polar angle, and the fast Fourier transform (FFT) and inverse FFT (IFFT) in combination with zero padding can be used in the polar range, which is another alternative.
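One common way to realize the band-pass linear interpolation along the polar range, assumed here since the text does not spell out the implementation, is to remove the carrier phase, interpolate the slowly varying complex envelope linearly, and restore the carrier at the new positions.

import numpy as np

def bandpass_linear_interp(signal, rho_old, rho_new, k_c):
    """signal:  complex samples on the old polar-range grid rho_old
    k_c:     effective centre wavenumber along the polar range (assumed known)"""
    envelope = signal * np.exp(-1j * k_c * rho_old)           # demodulate
    env_new = (np.interp(rho_new, rho_old, envelope.real)
               + 1j * np.interp(rho_new, rho_old, envelope.imag))
    return env_new * np.exp(1j * k_c * rho_new)               # remodulate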
In the final stage, the final Cartesian SAR image is generated from all polar subimages at the K-th stage. First, all aperture positions of the moving radar are combined into a whole aperture. Then, the Cartesian grid (x,y) is defined according to the resolutions of the final image. Finally, l polar subimages at the K-th stage are interpolated into the Cartesian grid, and then summed coherently to reconstruct the Cartesian OS-BFSAR image, which is given by:
$I(x, y) = \sum_{p=1}^{l} I_p^K\left( \rho_{p,cor}^K, \theta_{p,cor}^K \right) \qquad (42)$
where I(x,y) is the Cartesian OS-BFSAR image and (ρp,cor^K, θp,cor^K) is the position in the p-th subimage Ip^K at the K-th stage that corresponds to the Cartesian grid sample (x,y).

3.4. Computational Load

Similar to the ETDA for the traditional OS-BSSAR imaging processing presented in [46], the computational load (number of operations) of the proposed ETDA for the OS-BFSAR imaging processing mainly includes the operation number of calculating the polar grids and Cartesian grid, the operation number of performing the BP on the polar grids, the operation number of the P2P interpolation, and the operation number of the P2C interpolation during all processing stages.
Suppose that the final Cartesian SAR image has the size Mx × My in the X and Y axis directions, respectively, while the polar subimage at the K-th stage has the size Mρ × Mθ in the polar range and polar angle directions, respectively. For the first stage, the size of the polar subimage is assumed to be Mρ·(Mθ/l^(K−1)), and therefore the operation number of calculating the polar grids O1,Grid at this stage is given by:
$O_{1,Grid} = l^K M_\rho \left( M_\theta / l^{K-1} \right) = l\, M_\rho M_\theta \qquad (43)$
Similarly, the operation number of performing the BP on the polar grids OBP at this stage can be calculated by:
$O_{BP} = l^K \left( L / l^K \right) M_\rho \left( M_\theta / l^{K-1} \right) = \left( L / l^{K-1} \right) M_\rho M_\theta \qquad (44)$
Thus, the total operation number of the imaging processing at the first stage is:
$O_1 = O_{1,Grid} + O_{BP} = \left( l + L / l^{K-1} \right) M_\rho M_\theta \qquad (45)$
For the k-th stage (2 ≤ kK), the size of the polar subimage is assumed to Mρ·(Mθ/lKk), thus the operation number of calculating the polar grids O2,Grid at this stage is given by:
$O_{2,Grid} = \sum_{k=2}^{K} l^{K-k+1} M_\rho \left( M_\theta / l^{K-k} \right) = (K-1)\, l\, M_\rho M_\theta \qquad (46)$
Similarly, the operation number of interpolating the polar subimages into the polar subimages OP2P in this stage is given by:
$O_{P2P} = \sum_{k=2}^{K} l^{K-k+1}\, l\, M_\rho \left( M_\theta / l^{K-k} \right) = (K-1)\, l^2 M_\rho M_\theta \qquad (47)$
Therefore, the total operation number of the imaging processing at this stage is:
$O_2 = O_{2,Grid} + O_{P2P} = (K-1)\, l\, (1 + l)\, M_\rho M_\theta \qquad (48)$
For the final stage, the size of the Cartesian image is assumed to be Mx·My, thus the operation number of calculating the Cartesian grid O3,Grid at this stage is given by:
$O_{3,Grid} = M_x M_y \qquad (49)$
Thus, the operation number of interpolating the polar subimages into the Cartesian image OP2C is:
$O_{P2C} = l\, M_x M_y \qquad (50)$
Thus, the total operation number of the imaging processing at the final stage is:
$O_3 = O_{3,Grid} + O_{P2C} = (1 + l)\, M_x M_y \qquad (51)$
As a result, the total operation number of the proposed ETDA is given by:
$O_{ETDA} = O_1 + O_2 + O_3 = M_x M_y (1 + l) + M_\rho M_\theta \left( K l + (K-1) l^2 + L / l^{K-1} \right) \qquad (52)$
We assume that Mρ = μρMx and Mθ = μθMy, and then Equation (52) can be approximated as:
$O_{ETDA} \approx M_x M_y \left[ 1 + l + \mu_\rho \mu_\theta \left( K l + (K-1) l^2 + L / l^{K-1} \right) \right] \qquad (53)$
Analogously, the operation number of the bistatic DTDA can be calculated by:
$O_{DTDA} = O_{1,Grid} + O_{BP} = M_x M_y + L\, M_x M_y = M_x M_y (1 + L) \qquad (54)$
The speed-up factor of the proposed ETDA with respect to the bistatic DTDA is defined as:
$\kappa_{ETDA} = \frac{O_{DTDA}}{O_{ETDA}} = \frac{1 + L}{1 + l + \mu_\rho \mu_\theta \left( K l + (K-1) l^2 + L / l^{K-1} \right)} \qquad (55)$
The value of the speed-up factor κETDA is determined by the factors L, l, K, μθ and μρ, and the values of the factors μθ and μρ are in general both larger than or equal to one. When the radar pulse number L (i.e., the length of the synthetic aperture of the moving radar) increases, only the factor K in Equation (55) increases, since K = logl(L/LK), while the factors l, μθ and μρ are assumed to be constant. Figure 6 shows the logarithm (base 2) of the speed-up factor with respect to the radar pulse number L for different values of the factors μθ and μρ, where LK = 16 and l = 4 are assumed. From Figure 6, it is clearly found that the logarithm values of these speed-up factors, marked by different colors, are almost directly proportional to log2L over some range.
The larger the product of the factors μθ and μρ, the smaller the value of the speed-up factor. The red line indicates the speed-up factor under the condition μθ = μρ = 1, for which the speed-up factor is larger than one for almost all synthetic apertures. However, in practical SAR imaging, the factors μθ and μρ are both larger than one, and therefore the speed-up factor is smaller than one for small synthetic apertures (see the zoomed region in the green ellipse in Figure 6). The reason may be that the proposed ETDA spends more time on the calculation and interpolation of the polar grids than the bistatic DTDA does. Compared with the bistatic DTDA, the proposed ETDA gives a good acceleration for moderate- and high-resolution OS-BFSAR imaging processing, but does not offer any acceleration for low-resolution OS-BFSAR imaging processing.
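The closed-form operation counts of Equations (53)-(55) are easy to evaluate numerically; the following sketch (with an illustrative function name) reproduces the trend of Figure 6, i.e., the speed-up grows with the aperture length L for fixed LK, l, μρ and μθ.

import math

def speedup_factor(L, L_K, l, mu_rho, mu_theta, M_x, M_y):
    """Speed-up of the ETDA over the DTDA from Equations (53)-(55)."""
    K = round(math.log(L / L_K, l))            # number of stages, Eq. (39)
    o_etda = M_x * M_y * (1 + l + mu_rho * mu_theta
                          * (K * l + (K - 1) * l**2 + L / l**(K - 1)))
    o_dtda = M_x * M_y * (1 + L)               # Eq. (54)
    return o_dtda / o_etda

# the speed-up grows with L for fixed L_K = 16 and l = 4 (cf. Figure 6)
for L in (2**12, 2**16, 2**20, 2**24):
    print(L, speedup_factor(L, 16, 4, 1.2, 1.2, 500, 375))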

4. Experimental Results

In this section, in order to verify the validity of the proposed ETDA for OS-BFSAR imaging processing, experimental results based on both simulated and measured data are shown and analyzed to compare the performance of the proposed ETDA with that of the bistatic DTDA. The SAR scene reconstructed using the bistatic DTDA is used as the reference for the comparison, since this algorithm is theoretically the most accurate imaging method, without any approximation. The simulated OS-BFSAR data of a scene containing several discrete scattering targets and the measured OS-BFSAR data acquired by an OS-BFSAR experiment at P-band are processed by the two algorithms, and the imaging results are illustrated in this section.

4.1. Simulated Data Results

The imaging geometry of the simulated OS-BFSAR system including the motion errors is the same as that shown in Figure 7. The parameters of the simulated OS-BFSAR system are shown in Table 1. The stationary radar is located on the top of a high tower, and its position is (0,0,20) m. The ideal flight track of the moving radar is parallel to the Y-axis direction, and its initial position is (1650,0,100) m at the slow time η = 0. In this OS-BFSAR system, both moving and stationary radars work in the forward-looking spotlight mode. Suppose that the illuminating beam of the transmitter is always covered by that of the receiver to ensure the synchronization of the OS-BFSAR system. Based on the parameters listed in Table 1, it is easy to compute that the moving radar synthetic aperture time Ta is 6.5 s. The motion error is added to the ideal flight track of the moving radar. The motion error in the X axis direction is δMx = 5sin(2π(1/Ta)η) + 0.3η, the motion error in the Y axis direction is δMy = 2sin(2π(0.3/Ta)η) + 0.1η, and the motion error in the Z axis direction is δMz = 3sin(2π(0.5/Ta)η) + 0.2η. The simulated ground scene contains nine discrete point-like scatterers labeled A-I, arranged in 3 rows and 3 columns and equally spaced in an area of 300 m × 300 m in the azimuth and range directions, respectively. The point-like scatterer E is located at the scene center position (1650,0,0) m. Both the range and azimuth intervals of all point-like scatterers are 100 m in the ground plane, and the radar cross sections of all point-like scatterers are assumed and normalized to be 1 m² for simplification. The samples of the Cartesian image grid are assumed to be 0.8 m × 0.6 m in the azimuth and range directions, respectively. The distribution of the point-like scatterers is shown in Figure 8a. For evaluation purposes, the effects of radio frequency interference (RFI), jamming, thermal noise, clutter, multipath, incidence/reflection angles, local reflection and so on are not considered in this simulation. Moreover, in order to observe the performance of the reconstructed SAR images more clearly and to make a fair comparison, no weighting function or sidelobe control approach is used in this simulation.
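For reference, the simulated motion errors and actual track described above can be generated as follows; the slow-time sampling and the along-track velocity are placeholders, since Table 1 is not reproduced here.

import numpy as np

T_a = 6.5                                     # synthetic aperture time [s]
eta = np.linspace(0.0, T_a, 4096)             # slow-time axis (sampling assumed)

# motion errors added to the ideal track (metres), as given above
d_Mx = 5.0 * np.sin(2 * np.pi * (1.0 / T_a) * eta) + 0.3 * eta
d_My = 2.0 * np.sin(2 * np.pi * (0.3 / T_a) * eta) + 0.1 * eta
d_Mz = 3.0 * np.sin(2 * np.pi * (0.5 / T_a) * eta) + 0.2 * eta

# actual track = ideal track (parallel to the Y axis, starting at (1650, 0, 100) m)
# plus the errors; the platform velocity v is a hypothetical placeholder value
v = 10.0                                      # [m/s]; assumption, see Table 1
track = np.stack([1650.0 + d_Mx, v * eta + d_My, 100.0 + d_Mz], axis=1)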
Figure 8b,c give the imaging results of the simulated SAR scene obtained by the bistatic DTDA and proposed ETDA, respectively. From Figure 8b, it is seen that the simulated SAR scene is well reconstructed by the DTDA, and all point-like scatterers appear in the SAR image as points. As observed from Figure 8c, all point-like scatterers in the simulated SAR scene are also well focused by the proposed ETDA, and also appear in the SAR image as points. It can be found that the focusing of all point-like scatterers in Figure 8c is very similar to that in Figure 8b. Visually, there is nearly no difference between SAR images given in Figure 8b,c, which indicates that the proposed ETDA is very effective for the OS-BFSAR imaging processing.
Figure 9 and Figure 10 show the contours of the imaging results of the point-like scatterers C, E and G labeled in Figure 8a, which are extracted from Figure 8b,c, respectively. From Figure 9, we may observe the general ultrawideband (UWB) features of all selected point-like scatterers, such as the orthogonal and nonorthogonal sidelobes. By observing Figure 10, it is seen that the contours of the imaging results of all selected point-like scatterers are very similar to those shown in Figure 9. Also, point targets with the typical features of a point-like scatterer illuminated by a UWB SAR system can be observed in the SAR image given in Figure 10. However, the focusing quality of all selected point-like scatterers in Figure 10 is slightly degraded in comparison with the reference in Figure 9, since there may still be small phase errors caused by the interpolations in the proposed ETDA. The effects are invisible at the high contour levels and some small influences are observed at the lower contour levels, i.e., only the sidelobes of the focused point-like scatterers suffer from the effect of the phase errors. The phase errors are relatively small (usually smaller than or equal to π/8) and therefore do not strongly affect the peak signal level and its surroundings for the focused point-like scatterers.
To perform a more illustrative evaluation, both the amplitude and phase profiles of the imaging results of all selected point-like scatterers in the azimuth and range directions are extracted from Figure 9 and Figure 10 and then plotted in Figure 11, Figure 12 and Figure 13. The blue solid line indicates the profiles obtained with the bistatic DTDA, while the red dashed line represents the profiles obtained with the proposed ETDA. As observed from Figure 11, Figure 12 and Figure 13, both the amplitude and phase profiles of the imaging results of all selected point-like scatterers obtained by the two algorithms are very similar. In the mainlobe areas of the amplitude profiles for all selected point-like scatterers, it is almost impossible to see any difference between the two algorithms. However, in the sidelobe areas of the amplitude profiles, there is still a slight difference between the two algorithms, which may be caused by the effects of the phase errors in the proposed ETDA. Similar behavior can be observed in the phase profiles for all selected point-like scatterers. Therefore, these influences clearly show that the phase errors in the proposed ETDA may slightly degrade the focusing quality of the OS-BFSAR image compared with the bistatic DTDA.
To quantitatively analyze the imaging performance of the proposed ETDA in comparison with the bistatic DTDA, the quality measurements for the OS-BFSAR image can be calculated based on the amplitude profiles in Figure 11, Figure 12 and Figure 13, such as the spatial resolution, peak sidelobe ratio (PSLR) and integrated sidelobe ratio (ISLR). The spatial resolutions in the azimuth and range directions can be defined by the −3 dB width of the mainlobe of the focused point-like scatterer, i.e., the distance between the two points where the intensity is one half of the peak intensity.
The PSLR can be defined by the ratio of the peak intensity in the sidelobe area to the peak intensity in the mainlobe area, and the ISLR can be defined by the ratio of the energy of the sidelobes to that of the mainlobe. The results of these measured parameters are listed in Table 2. As observed from Table 2, all measured parameters obtained by the two algorithms are very similar. The resolutions in the azimuth and range directions of all selected point-like scatterers obtained by the proposed ETDA are slightly worse than those of the bistatic DTDA. Fortunately, the PSLRs and ISLRs of all selected point-like scatterers obtained by the proposed ETDA are very similar to those obtained by the bistatic DTDA. Based on the above-mentioned analysis of the imaging results, we can conclude that the imaging result of the proposed ETDA is very similar to that of the bistatic DTDA.
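A simple way (one possible implementation, not necessarily the authors') to extract the −3 dB resolution, PSLR and ISLR from an oversampled amplitude cut through a focused point target is sketched below; the mainlobe is taken as the region between the first nulls around the peak, and the peak is assumed not to lie at the edge of the cut.

import numpy as np

def resolution_pslr_islr(profile, spacing):
    """profile: real amplitude cut through a focused point target
    spacing: sample spacing of the cut [m]; returns (-3 dB width, PSLR, ISLR)"""
    p = np.asarray(profile, dtype=float)
    power = p**2
    k = int(np.argmax(p))
    # -3 dB resolution: width of the region where the intensity exceeds half the peak
    res = np.count_nonzero(power >= 0.5 * power[k]) * spacing
    # first nulls (local minima) left and right of the peak bound the mainlobe
    left = k - 1
    while left > 0 and p[left - 1] < p[left]:
        left -= 1
    right = k + 1
    while right < p.size - 1 and p[right + 1] < p[right]:
        right += 1
    side = np.r_[power[:left], power[right + 1:]]
    pslr = 10 * np.log10(side.max() / power[k])
    islr = 10 * np.log10(side.sum() / power[left:right + 1].sum())
    return res, pslr, islr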
In order to prove the imaging efficiency of the proposed ETDA, the operation number of the two imaging methods is estimated for this simulation case. The simulated data are processed by the two algorithms using the following parameters: Mx = 500, My = 375, L = 2^24, LK = 16, l = 4, μθ = μρ = 1.2, thus K = logl(L/LK) = 10. From Equations (53)-(55), the number of operations of the bistatic DTDA is ODTDA = MxMy(1 + L) ≈ 3.15 × 10^12, while the operation number of the proposed ETDA is OETDA = MxMy[1 + l + μρμθ(Kl + (K−1)l² + L/l^(K−1))] ≈ 1.05 × 10^8. Then, the speed-up factor of the proposed ETDA with respect to the bistatic DTDA is κETDA = ODTDA/OETDA ≈ 3.01 × 10^4, and the logarithm (base 2) of the speed-up factor κETDA is about 14.87. Besides, the processing time of the two algorithms is measured under the same conditions, which depends directly on the interpolation method, upsampling factor and computational platform. The radar echo data are interpolated as follows: one-dimensional complex linear interpolation with an upsampling factor of 4 for the BP operation at the first stage of the proposed ETDA, as well as for the bistatic DTDA; two-dimensional complex linear interpolation with an upsampling factor of 4 for the P2P and P2C operations at the successive stages of the proposed ETDA, since it has high precision and efficiency and good adaptability to band-limited signals. They are programmed in MATLAB version 7.10.0 on a personal computer (PC) with a 2.93 GHz dual-core central processing unit (CPU) and 2.00 GB of random access memory (RAM). The processing times of the bistatic DTDA and the proposed ETDA are 665.8 s and 46.2 s, respectively. Compared with the bistatic DTDA, the imaging speed of the proposed ETDA is improved by about 14.5 times, which corresponds to the theoretical results in Figure 6. According to [45], if the proposed ETDA is parallelized on a graphics processing unit (GPU), it will further accelerate the OS-BFSAR imaging process.

4.2. Measured Data Results

In order to further prove the feasibility of the proposed ETDA, the measured data acquired by a real OS-BFSAR system are processed by the two algorithms, and the imaging results are shown and analyzed. The imaging experiment of the OS-BFSAR system was carried out at P-band in late 2015, in which a ground-based stationary radar was constructed to operate in conjunction with an existing vehicle-based monostatic SAR system. The imaging geometry of the real OS-BFSAR system is shown in Figure 14, and the real OS-BFSAR system is given in Figure 15. The stationary radar is located on the top of a tripod at a high position. The moving radar is fixed on the top of a vehicle in a square; the dashed line is the ideal track of the vehicle, while the solid curve is its actual track. The vehicle-based moving radar continually illuminates the scene, which includes several targets, and transmits the chirp signal (P-band) in the forward-looking and side-looking modes, while the ground-based stationary radar illuminates the same region and receives the bistatic scattered signal of the scene. Both the vehicle-based and ground-based antennas have a 3 dB azimuth beamwidth of approximately 35° to 50°, and a 3 dB elevation beamwidth of approximately 75° to 85°. To maintain the time synchronization of the OS-BFSAR system, the one pulse per second (1PPS) output signal from a global positioning system (GPS) receiver incorporated into the moving and stationary radars is used as a timing reference. Besides, to maintain the frequency synchronization of the OS-BFSAR system, a voltage-controlled oscillator (VCO) generates a standard sine signal; after amplification and frequency conversion, this signal is fed into a phase detector, where it is phase-detected against the 1PPS signal. The phase error from the phase detector is used to discipline the VCO, and its output signal in turn stabilizes the VCO via the digital phase-locked loop (DPLL). The VCO produces coherent reference signals in the moving and stationary radars, and these are used to drive the frequency synthesizer to produce the various system signals. This technique revises the time and frequency reference signals in real time to satisfy the system synchronization.
The acquisition parameters of the real OS-BFSAR system are shown in Table 3. Figure 16 gives the position of the vehicle-based radar from the GPS data, including the actual position in the X and Y axis directions, the motion errors in the X, Y and Z axis directions, and so on. The vehicle moves parallel to the X-axis direction with an average speed of about 12.8 km/h (3.55 m/s), and the average height of the vehicle-based antenna is about 4 m. The position of the ground-based antenna is about (−54,0,6) m, and the initial position of the vehicle-based antenna is about (−40,26,4) m. As the scene being imaged must lie in the line-of-sight of the stationary radar, this constrained the operation of the radar system to the imaging of test sites within only a few decameters of the tripod. One data set was acquired for the imaging processing: the scene is a flat area of about 40 m in the X axis direction and 35 m in the Y axis direction. Various scattering targets, including three metallic cylinders and one metallic trihedral reflector, were deployed at the test site prior to this experiment, and their positions in the ground plane are about (5.5,5.5,0) m, (0,0,0) m, (−5.5,−5.5,0) m and (7.5,−17.5,0) m, respectively. As the vehicle-based radar system moves parallel to the X-axis direction, the bistatic measured data were naturally collected from a wide variety of azimuth look directions. Figure 17 gives the bistatic scattered signal of the scene in the time domain and frequency domain. As observed from Figure 17, it can be clearly found that the scattered signal of the targets is very obvious (see the region in the green ellipses in Figure 17).
Next, the measured data of the scene are processed by the bistatic DTDA and the proposed ETDA. Before the imaging processing is performed by the two methods, the measured data of the OS-BFSAR system (including the GPS data of the vehicle-based radar) should be preprocessed. First, the GPS data should be transformed into the Cartesian coordinate system to obtain the local position of the vehicle-based radar. Second, the scattered data of the scene should be downsampled and filtered in the slow time. Finally, the scattered data must be range compressed with the matched filter function recorded before this experiment, and then the RFI in the scattered data should be suppressed by the frequency spectrum equalization method. In the imaging processing, the direct-path signal from the vehicle-based radar to the ground-based radar should be used to reduce or correct the time error of the scattered data caused by the uncertain time delay of the OS-BFSAR system. Figure 18 shows the imaging results of this measured data obtained by the bistatic DTDA and the proposed ETDA. In Figure 18, it is seen that the illuminated scene is well reconstructed by both algorithms, the metallic cylinders and metallic trihedral reflector are well focused, and the UWB features (i.e., the orthogonal and nonorthogonal sidelobes) can be clearly observed. Besides, it can be found that the image in Figure 18b obtained by the proposed ETDA is very close to that in Figure 18a obtained by the bistatic DTDA. However, the focusing quality of the image in Figure 18b is slightly degraded in comparison with that of the image in Figure 18a, since there may still be small phase errors caused by the interpolations in the proposed ETDA. Fortunately, only the sidelobes of the focused image in Figure 18b may suffer from the effect of the phase errors.
In order to quantitatively evaluate the focusing quality of the SAR images in Figure 18, the resolutions of the focused trihedral reflector in the X- and Y-axis directions are measured from the widths of the amplitude profiles at the −3 dB points. The resolutions in the X- and Y-axis directions measured from the bistatic DTDA image are 0.35 m and 1.73 m, respectively, while those measured from the proposed ETDA image are 0.37 m and 1.87 m, respectively. The resolutions obtained by the two algorithms are therefore very similar, which indirectly proves the validity of the proposed ETDA. Finally, for the sake of brevity, only the scene reconstruction time of the two algorithms is compared. The reconstruction time of the scene using the bistatic DTDA is 4687 s, while that using the proposed ETDA is 386 s. It can be concluded that the proposed ETDA is much faster than the bistatic DTDA (by a factor of about 12) while achieving nearly the same accuracy for the real OS-BFSAR imaging processing.
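For reference, the −3 dB width measurement can be reproduced with a short routine such as the sketch below, which assumes FFT-based upsampling of an amplitude profile and uses a toy sinc-shaped profile with an example grid spacing; the helper name resolution_3db and all numeric values are illustrative and are not taken from the processing chain of the paper.

```python
import numpy as np

def resolution_3db(profile, spacing, upsample=16):
    """Return the -3 dB width (same units as `spacing`) of an amplitude profile,
    after FFT-based upsampling of the profile through the point target."""
    n = profile.size
    big = n * upsample
    spec = np.fft.fftshift(np.fft.fft(profile))
    padded = np.zeros(big, dtype=complex)
    padded[(big - n) // 2:(big + n) // 2] = spec       # zero-pad in frequency
    fine = np.abs(np.fft.ifft(np.fft.ifftshift(padded)))
    fine /= fine.max()
    above = np.where(fine >= 10 ** (-3 / 20))[0]       # samples within -3 dB
    return (above[-1] - above[0]) * spacing / upsample

# toy usage: a sinc-like profile sampled every 0.1 m (-3 dB width ~0.31 m)
x = np.arange(-64, 64) * 0.1
profile = np.abs(np.sinc(x / 0.35))
print("measured -3 dB width: %.3f m" % resolution_3db(profile, 0.1))
```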

5. Conclusions

In this paper, an efficient time-domain imaging method called the ETDA is presented, which offers better imaging efficiency than the bistatic DTDA for OS-BFSAR imaging processing. This method inherits the advantages of the bistatic DTDA, such as the precise accommodation of the large spatial variances, the serious range-azimuth coupling and the motion errors. Besides, it represents the subimages on polar grids in the ground plane instead of the slant-range plane, so that they can be accurately referenced to the positions of both the moving and stationary radars. Moreover, it derives the sampling requirements for the polar grids considering the motion errors, to offer a near-optimum tradeoff between imaging precision and efficiency. Experimental results based on both simulated and measured data confirm that the proposed algorithm outperforms the bistatic DTDA in terms of efficiency. Extending the proposed ETDA to BFSAR imaging processing with more complex configurations will be the focus of our future research.

Acknowledgments

The work in this paper was supported in part by the National Natural Science Foundation of China under Grant No. 61571447 and Grant No. 61302194, and in part by the Research Project of the National University of Defense Technology under Grant No. JC14-04-02. The authors are with the Air Force Early Warning Academy (Wuhan, China), the College of Electronic Science and Engineering, National University of Defense Technology (Changsha, China), and the Affiliated Hospital of the Hunan Institute of Traditional Chinese Medicine (Changsha, China). The authors also wish to extend their sincere thanks to the anonymous reviewers and the academic editor for their careful reading and valuable suggestions, which helped improve the quality of the paper.

Author Contributions

Hongtu Xie is responsible for all the theoretical work, the simulation, the implementation of the experiment, the processing and analysis of the experimental data, and the writing of the manuscript; Shaoying Shi and Hui Xiao conceived and designed the experiment; Chao Xie, Feng Wang and Qunle Fang revised the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
1PPS: One pulse per second
BP: Backprojection
SAR: Synthetic aperture radar
BSAR: Bistatic SAR
OS-BSAR: One-stationary BSAR
BFSAR: Bistatic forward-looking SAR
OS-BFSAR: One-stationary BFSAR
BLSAR: Side-looking SAR
OS-BSSAR: One-stationary BSSAR
PAMIR: Phased array multifunctional imaging radar
UWB: Ultrawideband
PSLR: Peak sidelobe ratio
ISLR: Integrated sidelobe ratio
RFI: Radio frequency interference
FDA: Frequency-domain algorithm
TDA: Time-domain algorithm
DTDA: Direct TDA
ETDA: Efficient TDA
BPA: Backprojection algorithm
FBPA: Fast BPA
FFBPA: Fast factorized BPA
RDA: Range Doppler algorithm
OKA: Omega-k algorithm
CSA: Chirp scaling algorithm
NLCSA: Nonlinear CSA
P2P: Polar subimages into polar subimages
P2C: Polar subimages into the Cartesian image
FFT: Fast Fourier transform
IFFT: Inverse FFT
GPS: Global positioning system
VCO: Voltage-controlled oscillator
DPLL: Digital phase-locked loop
PC: Personal Computer
CPU: Central Processing Unit
GPU: Graphic Processing Unit
RAM: Random Access Memory

References

  1. Cumming, I.G.; Wong, F.H. Digital Processing of Synthetic Aperture Radar Data: Algorithm and Implementation; Artech House Publishers: Norwood, MA, USA, 2005. [Google Scholar]
  2. An, D.X.; Huang, X.T.; Jin, T.; Zhou, Z.M. Extended nonlinear chirp scaling algorithm for high-resolution highly squint SAR data focusing. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3761–3773. [Google Scholar] [CrossRef]
  3. An, D.X.; Huang, X.T.; Jin, T.; Zhou, Z.M. Extended two-step focusing approach for squinted spotlight SAR imaging. IEEE Trans. Geosci. Remote Sens. 2012, 50, 2889–2900. [Google Scholar] [CrossRef]
  4. An, D.X.; Li, Y.H.; Huang, X.T.; Li, X.Y.; Zhou, Z.M. Performance evaluation of frequency-domain algorithms for chirped low frequency UWB SAR data processing. IEEE J. Sel. Topics Appl. Earth Observ. 2014, 7, 678–690. [Google Scholar]
  5. Xie, H.T.; An, D.X.; Huang, X.T.; Zhou, Z.M. Efficient raw signal generation based on equivalent scatterer and subaperture processing for SAR with arbitrary motion. Radioengineering 2014, 23, 1169–1178. [Google Scholar]
  6. Xie, H.T.; An, D.X.; Huang, X.T.; Zhou, Z.M. Spatial resolution analysis of low frequency ultrawidebeam-ultrawideband synthetic aperture radar based on wavenumber domain support of echo data. J. Appl. Remote Sens. 2015, 9, 095033. [Google Scholar] [CrossRef]
  7. Walterscheid, I.; Espeter, T.; Klare, J.; Brenner, A.R.; Ender, J.H.G. Potential and limitations of forward-looking bistatic SAR. In Proceedings of the 2010 IEEE International Geoscience and Remote Sensing Symposium, Honolulu, HI, USA, 25–30 July 2010; pp. 216–219.
  8. Xie, H.T.; An, D.X.; Huang, X.T.; Zhou, Z.M. Research on spatial resolution of one-stationary bistatic ultrahigh frequency ultrawidebeam-ultrawideband SAR. IEEE J. Sel. Topics Appl. Earth Observ. 2015, 8, 1782–1798. [Google Scholar] [CrossRef]
  9. Xie, H.T.; An, D.X.; Huang, X.T.; Zhou, Z.M. Efficient raw signal generation based on equivalent scatterer and subaperture processing for one-stationary bistatic SAR including motion errors. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3360–3377. [Google Scholar] [CrossRef]
  10. Zhang, H.R.; Wang, Y.; Li, J.W. New application of parameter-adjusting polar format algorithm in spotlight forward-looking bistatic SAR processing. In Proceedings of the 2013 Asia-Pacific Conference on Synthetic Aperture Radar, Tsukuba, Japan, 23–27 September 2013; pp. 384–387.
  11. Balke, J. Field test of bistatic forward-looking synthetic aperture radar. In Proceedings of the 2005 IEEE International Radar Conference, Arlington, VA, USA, 9–12 May 2005; pp. 424–429.
  12. Balke, J. SAR image formation for forward-looking radar receivers in bistatic geometry by airborne illumination. In Proceedings of the 2008 IEEE Radar Conference, Rome, Italy, 26–30 May 2008; pp. 1–5.
  13. Walterscheid, I.; Brenner, A.R.; Klare, J. Radar imaging with very low grazing angles in a bistatic forward-looking configuration. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 327–330.
  14. Walterscheid, I.; Brenner, A.R.; Klare, J. Bistatic radar imaging of an airfield in forward direction. In Proceedings of the 2012 European Conference on Synthetic Aperture Radar, Nuremberg, Germany, 23–26 April 2012; pp. 227–230.
  15. Walterscheid, I.; Papke, B. Bistatic forward-looking SAR imaging of a runway using a compact receiver on board an ultralight aircraft. In Proceedings of the 2013 International Radar Symposium (IRS), Dresden, Germany, 24–26 June 2013; pp. 461–466.
  16. Walterscheid, I.; Espeter, T.; Klare, J.; Brenner, A.R. Bistatic spaceborne-airborne forward-looking SAR. In Proceedings of the 2010 European Conference on Synthetic Aperture Radar, Aachen, Germany, 7–10 June 2010; pp. 1–4.
  17. Espeter, T.; Walterscheid, I.; Klare, J.; Brenner, A.R.; Ender, J.H.G. Bistatic forward-looking SAR: Results of a spaceborne-airborne experiment. IEEE Geosci. Remote Sens. Lett. 2011, 8, 765–768. [Google Scholar] [CrossRef]
  18. Ender, J.H.G.; Walterscheid, I.; Brenner, A.R. New aspects of bistatic SAR: Processing and experiments. In Proceedings of the 2004 IEEE International Geoscience and Remote Sensing Symposium, Anchorage, AK, USA, 20–24 September 2004; pp. 1758–1762.
  19. Yew, L.N.; Wong, F.H.; Cumming, I.G. Processing of azimuth-invariant bistatic SAR data using the range Doppler algorithm. IEEE Trans. Geosci. Remote Sens. 2008, 46, 14–21. [Google Scholar]
  20. Jun, S.; Zhang, X.; Yang, J. Principle and methods on bistatic SAR signal processing via time correlation. IEEE Trans. Geosci. Remote Sens. 2008, 46, 3163–3178. [Google Scholar] [CrossRef]
  21. Soumekh, M. Bistatic synthetic aperture radar imaging using wide-bandwidth continuous-wave sources. In Proceedings of the 1998 SPIE Radar Process, Technology and Application III, San Diego, CA, USA, 14–16 October 1998; pp. 99–109.
  22. Walterscheid, I.; Ender, J.H.G.; Brenner, A.R.; Loffeld, O. Bistatic SAR processing using an omega-k type algorithm. In Proceedings of the 2005 IEEE International Geoscience and Remote Sensing Symposium, Seoul, Korea, 25–29 July 2005; pp. 1064–1067.
  23. Qiu, X.; Hu, D.; Ding, C. An omega-k algorithm with phase error compensation for bistatic SAR of a translational invariant case. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2224–2232. [Google Scholar]
  24. Shin, H.; Lim, J. Omega-k algorithm for airborne spatial invariant bistatic spotlight SAR imaging. IEEE Trans. Geosci. Remote Sens. 2009, 47, 238–250. [Google Scholar] [CrossRef]
  25. Rodriguez-Cassola, M.; Krieger, G.; Wendler, M. Azimuth-invariant, bistatic airborne SAR processing strategies based on monostatic algorithms. In Proceedings of the 2005 IEEE International Geoscience and Remote Sensing Symposium, Seoul, Korea, 25–29 July 2005; pp. 1047–1050.
  26. Li, F.; Li, S.; Zhao, Y. Focusing azimuth-invariant bistatic SAR data with chirp scaling. IEEE Geosci. Remote Sens. Lett. 2008, 5, 484–486. [Google Scholar]
  27. Wang, R.; Loffeld, O.; Nies, H.; Knedlik, S.; Ender, J.H.G. Chirp scaling algorithm for bistatic SAR data in the constant offset configuration. IEEE Trans. Geosci. Remote Sens. 2009, 47, 952–964. [Google Scholar] [CrossRef]
  28. Wong, F.H.; Cumming, I.G.; Yew, L.N. Focusing bistatic SAR data using the nonlinear chirp scaling algorithm. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2493–2505. [Google Scholar] [CrossRef]
  29. Qiu, X.; Hu, D.; Ding, C. An improved NLCS algorithm with capability analysis for one-stationary bistatic SAR. IEEE Trans. Geosci. Remote Sens. 2008, 46, 3179–3186. [Google Scholar] [CrossRef]
  30. Li, Z.Y.; Wu, J.J.; Li, W.C.; Huang, Y.L.; Yang, J.Y. One-stationary bistatic side-looking SAR imaging algorithm based on extended keystone transforms and nonlinear chirp scaling. IEEE Geosci. Remote Sens. Lett. 2012, 10, 211–215. [Google Scholar]
  31. Zeng, T.; Wang, R.; Li, F.; Long, T. A modified nonlinear chirp scaling algorithm for spaceborne/stationary bistatic SAR based on series reversion. IEEE Trans. Geosci. Remote Sens. 2013, 51, 3108–3118. [Google Scholar] [CrossRef]
  32. Bauck, J.L.; Jenkins, W.K. Convolution-backprojection image reconstruction for bistatic synthetic aperture radar. In Proceedings of the 1989 IEEE International Symposium on Circuit System, Portland, OR, USA, 8–11 May 1989; pp. 631–634.
  33. Seger, O.; Herberthson, M.; Hellsten, H. Real-time SAR processing of low frequency ultra wide band radar data. In Proceedings of the 1998 European Conference on Synthetic Aperture Radar, Friedrichshafen, Germany, 25–27 May 1998; pp. 489–492.
  34. Yegulalp, A.F. Fast backprojection algorithm for synthetic aperture radar. In Proceedings of the 1999 IEEE Radar Conference, Waltham, MA, USA, 20–22 April 1999; pp. 60–65.
  35. McCorkle, J.; Rofheart, M. An order N²log(N) backprojector algorithm for focusing wide-angle wide-bandwidth arbitrary-motion synthetic aperture radar. In Proceedings of the 1996 SPIE Aerosense Conference, Orlando, FL, USA, 8–12 June 1996; pp. 25–36.
  36. Ulander, L.M.H.; Hellsten, H.; Stenström, G. Synthetic-aperture radar processing using fast factorized backprojection. IEEE Trans. Aerosp. Electron. Syst. 2003, 39, 760–776. [Google Scholar] [CrossRef]
  37. Ding, Y.; Munson, D.C. A fast back-projection algorithm for bistatic SAR imaging. In Proceedings of the 2002 International Conference on Image Processing, Rochester, NY, USA, 22–25 September 2002; pp. 449–452.
  38. Vu, V.T.; Sjögren, T.K.; Pettersson, M.I. Fast backprojection algorithm for UWB bistatic SAR. In Proceedings of the 2011 IEEE Radar Conference, Kansas City, MO, USA, 23–27 May 2011; pp. 431–434.
  39. Vu, V.T.; Sjögren, T.K.; Pettersson, M.I. Phase error calculation for fast time-domain bistatic SAR algorithms. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 631–639. [Google Scholar] [CrossRef]
  40. Vu, V.T.; Sjögren, T.K.; Pettersson, M.I. SAR imaging in ground plane using fast backprojection for mono-and bistatic cases. In Proceedings of the 2012 IEEE Radar Conference, Atlanta, GA, USA, 7–11 May 2012; pp. 184–189.
  41. Vu, V.T.; Sjögren, T.K.; Pettersson, M.I. Nyquist sampling requirements for polar grids in bistatic time-domain algorithms. IEEE Trans. Signal Process. 2015, 63, 457–465. [Google Scholar] [CrossRef]
  42. Shao, Y.F.; Wang, R.; Deng, Y.K.; Liu, Y.; Chen, R.; Liu, G.; Loffeld, O. Fast backprojection algorithm for bistatic SAR imaging. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1080–1084. [Google Scholar] [CrossRef]
  43. Ulander, L.M.H.; Flood, B.; Froelind, P.-O.; Jonsson, T.; Gustavsson, A.; Rasmusson, G.; Stenstroem, G.; Barmettler, A.; Meier, E. Bistatic experiment with ultra-wideband VHF synthetic aperture radar. In Proceedings of the 2008 European Conference on Synthetic Aperture Radar, Friedrichshafen, Germany, 2–5 June 2008; pp. 1–4.
  44. Ulander, L.M.H.; Frölind, P.-O.; Gustavsson, A.; Murdin, D.; Stenström, G. Fast factorized back-projection for bistatic SAR processing. In Proceedings of the 2010 European Conference on Synthetic Aperture Radar, Aachen, Germany, 7–10 June 2010; pp. 1002–1005.
  45. Rodriguez-Cassola, M.; Prats, P.; Krieger, G.; Moreira, A. Efficient time-domain image formation with precise topography accommodation for general bistatic SAR configurations. IEEE Trans. Aerosp. Electron. Syst. 2011, 47, 2949–2966. [Google Scholar] [CrossRef] [Green Version]
  46. Xie, H.T.; An, D.X.; Huang, X.T.; Li, X.Y.; Zhou, Z.M. Fast factorised backprojection algorithm in elliptical polar coordinate for one-stationary bistatic very high frequency/ultrahigh frequency ultra wideband synthetic aperture radar with arbitrary motion. IET Radar Sonar Navig. 2014, 8, 946–956. [Google Scholar] [CrossRef]
  47. Xie, H.T.; An, D.X.; Huang, X.T.; Zhou, Z.M. Fast time-domain imaging in elliptical polar coordinate for general bistatic VHF/UHF ultra-wideband SAR with arbitrary motion. IEEE J. Sel. Topics Appl. Earth Observ. 2015, 8, 879–895. [Google Scholar] [CrossRef]
  48. Vu, V.T.; Sjögren, T.K.; Pettersson, M.I. Fast time-domain algorithms for UWB bistatic SAR processing. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 1982–1994. [Google Scholar] [CrossRef]
  49. Vu, V.T.; Sjögren, T.K.; Pettersson, M.I. Fast backprojection algorithms based on subapertures and local polar coordinates for general bistatic airborne SAR systems. IEEE Trans. Geosci. Remote Sens. 2016, 54, 2706–2712. [Google Scholar] [CrossRef]
  50. Frölind, P.-O.; Ulander, L.M.H. Evaluation of angular interpolation kernels in fast back-projection SAR processing. IET Radar Sonar Navig. 2006, 153, 243–249. [Google Scholar] [CrossRef]
Figure 1. Imaging geometry of the OS-BFSAR system including the motion errors.
Figure 2. DTDA with the subaperture and polar grid processing.
Figure 3. Bistatic range calculation in the proposed ETDA.
Figure 4. Function H(δn, θn) for different values of the angle θn.
Figure 5. Implementation of the proposed ETDA for the OS-BFSAR imaging processing.
Figure 6. The logarithm (base 2) of the speed-up factor of the proposed ETDA for different values of the factors μθ and μρ.
Figure 7. Imaging geometry of the simulated OS-BFSAR system.
Figure 8. Imaging results of the simulated data obtained by the different algorithms. (a) Distribution of the point-like scatterers; (b) Bistatic DTDA; (c) Proposed ETDA.
Figure 9. Contours of the imaging results obtained by the bistatic DTDA for all selected point-like scatterers in Figure 8a. (a) Scatterer C; (b) Scatterer E; (c) Scatterer G.
Figure 10. Contours of the imaging results obtained by the proposed ETDA for all selected point-like scatterers in Figure 8a. (a) Scatterer C; (b) Scatterer E; (c) Scatterer G.
Figure 11. Comparison between the focused results obtained by the bistatic DTDA and the proposed ETDA for the point-like scatterer C. (a) Azimuth amplitude; (b) Azimuth phase; (c) Range amplitude; (d) Range phase.
Figure 12. Comparison between the focused results obtained by the bistatic DTDA and the proposed ETDA for the point-like scatterer E. (a) Azimuth amplitude; (b) Azimuth phase; (c) Range amplitude; (d) Range phase.
Figure 13. Comparison between the focused results obtained by the bistatic DTDA and the proposed ETDA for the point-like scatterer G. (a) Azimuth amplitude; (b) Azimuth phase; (c) Range amplitude; (d) Range phase.
Figure 14. Imaging geometry of the real OS-BFSAR system.
Figure 15. Real OS-BFSAR system. (a) Vehicle-based moving radar; (b) Ground-based stationary radar.
Figure 16. Position of the vehicle-based radar from the GPS data.
Figure 17. Bistatic scattered signal of the scene. (a) Bistatic scattered signal from one transmitted radar pulse, including the real part (blue line) and the imaginary part (red line); (b) The sum of the frequency spectra of the bistatic scattered signal from 100 transmitted radar pulses. Note that the scattered signal of the targets lies within the green ellipses.
Figure 18. Imaging results of the measured data obtained by the different algorithms. (a) Bistatic DTDA; (b) Proposed ETDA.
Table 1. Parameters of the simulated OS-BFSAR system.

Parameter                      Value
Carrier frequency              700 MHz
Signal bandwidth               200 MHz
Sampling frequency             220 MHz
Pulse duration                 1 μs
Pulse repetition frequency     120 Hz
Stationary radar position      (0, 0, 20) m
Moving radar ideal speed       45 m/s
Moving radar ideal altitude    100 m
Table 2. Measured parameters of the selected point-like scatterers.

Algorithm        Measured Parameter         Scatterer C    Scatterer E    Scatterer G
Bistatic DTDA    Resolution (m), Azimuth    0.9470         0.8863         0.8240
                 Resolution (m), Range      0.6702         0.6703         0.6705
                 PSLR (dB), Azimuth         −13.53         −13.60         −13.76
                 PSLR (dB), Range           −13.22         −12.26         −13.26
                 ISLR (dB), Azimuth         −10.51         −10.41         −9.68
                 ISLR (dB), Range           −10.21         −10.43         −9.73
Proposed ETDA    Resolution (m), Azimuth    0.9486         0.8873         0.8288
                 Resolution (m), Range      0.6711         0.6713         0.6716
                 PSLR (dB), Azimuth         −13.47         −13.36         −13.87
                 PSLR (dB), Range           −15.34         −15.19         −15.01
                 ISLR (dB), Azimuth         −10.62         −10.46         −9.71
                 ISLR (dB), Range           −10.32         −10.49         −9.84
Table 3. Acquisition parameters of the real OS-BFSAR system.

Parameter                        Value
Signal frequency                 P-band
Sampling frequency               220 MHz
Pulse repetition frequency       500 Hz
Pulse duration                   100 ns
Stationary radar position        (−54, 0, 6) m
Moving radar initial position    (−40, 26, 4) m
Moving radar ideal speed         12.8 km/h
Moving radar ideal altitude      4 m
