Article

Precision Downward-Looking 3D Synthetic Aperture Radar Imaging with Sparse Linear Array and Platform Motion Parameters Estimation

1 The Institute of Information and Navigation, Air Force Engineering University, Xi'an 710077, China
2 The Collaborative Innovation Center of Information Sensing and Understanding, Xi'an 710077, China
3 The Key Laboratory for Information Science of Electromagnetic Waves (Ministry of Education), Fudan University, Shanghai 200433, China
4 Department of Electrical and Computer Engineering, National University of Singapore, Singapore 117583, Singapore
5 Science and Technology on Microwave Imaging Laboratory, Institute of Electronics, Chinese Academy of Sciences, Beijing 100190, China
* Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(12), 1957; https://doi.org/10.3390/rs10121957
Submission received: 31 October 2018 / Revised: 30 November 2018 / Accepted: 2 December 2018 / Published: 5 December 2018
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

The downward-looking sparse linear array three-dimensional synthetic aperture radar (DLSLA 3D SAR) has attracted a great deal of attention due to its ability to obtain three-dimensional (3D) images. However, if the velocity and yaw rate of the platform are not measured with sufficient accuracy, the azimuth signal cannot be compressed and the 3D image of the scene cannot be obtained. In this paper, we propose a method for platform motion parameter estimation and downward-looking 3D SAR imaging. A DLSLA 3D SAR imaging model including the yaw rate was established. We then calculated the Doppler frequency modulation, which is related to the cross-track coordinates rather than the azimuth coordinates; thus, cross-track signal reconstruction was realized first. Furthermore, based on the minimum entropy criterion (MEC), the velocity and yaw rate of the platform were accurately estimated, and azimuth signal compression was realized. Moreover, a deformation correction procedure was designed to improve the quality of the image. Simulation results are given to demonstrate the validity of the proposed method.

1. Introduction

Three-dimensional synthetic aperture radar (3D SAR) imaging can obtain 3D images of targets and therefore provides more abundant target information than traditional SAR imaging, which often suffers from shading and layover effects [1,2,3]. Compared with other 3D SAR techniques, e.g., SAR tomography [4,5], which is a multi-baseline extension requiring many passes over the same area, downward-looking sparse linear array 3D SAR (DLSLA 3D SAR) can obtain the 3D image in a single pass and works in a more flexible mode [6,7]. The 3D resolution is acquired by pulse compression with a wideband chirp signal, along-track aperture synthesis via the platform motion, and cross-track aperture synthesis via a physical sparse linear array [8,9,10]. In a practical system, the array distribution is usually non-uniform and sparse due to some inevitable factors, e.g., installation restrictions, wing vibration, etc. [11,12].
Existing DLSLA 3D SAR research has focused on three main aspects: imaging methods, array optimization, and improvement of the cross-track resolution. Uniform linear array imaging algorithms are usually based on beam-forming theory and the multiple signal classification (MUSIC) algorithm to realize 3D imaging [13,14]. Sparse array methods exploit the sparsity of the 3D scene and employ compressed sensing (CS) [15,16] and regularization methods [17] for DLSLA 3D SAR imaging [18,19]. Under the traditional CS framework, the continuous scene must be discretized; however, the coordinates to be estimated do not usually fall on the discrete grid points, which causes the off-grid effect [20,21]. To mitigate the off-grid effect, continuous CS (CCS) based on atomic norm minimization (ANM) has been applied to DLSLA 3D SAR [22], and a multilayer first-order approximation model is proposed in [23] to deal with the same issue. On the other hand, for sparse array methods, designing arrays that yield better imaging performance is a key research topic. An array design method based on the spatial convolution principle is proposed in [24], and an array optimization method based on the minimum average mutual coherence of the observation matrix is proposed in [25]. A particular issue in DLSLA 3D SAR imaging is that the cross-track resolution is relatively low, because the length of the array is limited by the platform size. Thus, a large amount of research focuses on improving the cross-track resolution, e.g., the two-dimensional smoothed L0 (2D SL0) algorithm [26], the Bayesian compressed sensing algorithm [27], etc.
However, the above works do not consider signal processing under non-ideal conditions. In practical applications, the actual path of the platform usually deviates from its ideal path. A joint multi-channel autofocusing technique is proposed in [28] to estimate the motion error, but its performance degrades when the synthetic aperture time is short. A method based on wavenumber-domain sub-blocks is proposed in [29] to compensate for the yaw angle error, but it requires the velocity and yaw angle to be known, which restricts its application. Moreover, the yaw angle causes image deformation, an aspect that was also not considered. In fact, motion compensation differs between traditional SAR and DLSLA 3D SAR. Conventional SAR usually works in a long-range imaging mode, characterized by a long synthetic aperture and a long synthetic aperture time; it is therefore necessary to compensate for the motion error with sub-aperture techniques [30,31]. In DLSLA 3D SAR, however, the cross-track resolution depends on the wavelength of the transmitted signal, the length of the array, and the flying height of the platform. Because the length of the array is limited by the platform size, the flying height cannot be too great if the cross-track resolution is to remain acceptable. With the flying height limited, the corresponding synthetic aperture length is relatively short, resulting in a very short synthetic aperture time, especially for fast-moving platforms. Thus, the platform motion can be considered constant during the synthetic aperture time. Once the motion parameters of the platform are obtained, they can be used to construct a compensation function that compensates for the motion errors and realizes 3D imaging of the scene.
Based on the above analysis, this paper considers the situation where the platform has a yaw rate in flight, and where the velocity and yaw rate obtained by the airborne measuring equipment are inaccurate. We propose the following solutions: first, an imaging model of DLSLA 3D SAR with yaw rate is established; second, the Doppler frequency modulation is calculated, which turns out to depend on the modulated cross-track coordinates (rather than the modulated azimuth coordinates). As a result, cross-track signal processing is carried out to obtain the cross-track coordinates before azimuth signal processing. Based on the minimum entropy criterion (MEC), the velocity and yaw rate of the platform are estimated, and azimuth signal compression is realized. Moreover, a deformation correction procedure is designed to improve the quality of the image. Compared with existing methods, the proposed method estimates the platform motion parameters precisely and achieves better imaging results. Simulation results demonstrate the validity of the proposed method.
The remainder of this paper is organized as follows: The imaging model is established and the Doppler frequency modulation is analyzed in Section 2; in Section 3, cross-track signal reconstruction is discussed; parameter estimation and the azimuth compression based on MEC are described in Section 4; deformation correction is presented in Section 5; simulation experiments are carried out in Section 6; and some conclusions are drawn in Section 7.

2. DLSLA 3D SAR Imaging Model

The imaging geometry of airborne DLSLA 3D SAR is shown in Figure 1. In the downward-looking working mode, the beam illuminates the area below the platform. The platform flies at altitude $H$ with velocity $v$. The flight path (azimuth direction) is parallel to the X-axis. The sparse linear array is obtained by random selection from a uniform linear array, which is mounted underneath the wings along the cross-track direction (Y-axis) and symmetrical about the Z-axis. The Z-axis denotes the height direction (range direction), which is also the line of sight of the radar; thus, the height resolution depends on the transmitted wideband chirp signal. The azimuth resolution depends on the synthetic aperture formed by the platform motion, and the cross-track resolution depends on the real aperture formed by the sparse linear array. Suppose the index vector of the uniform linear array with spacing $2d$ is $\mathbf{N} = [1, 2, \ldots, N]$; the value of $d$ can be set as half of the wavelength [32]. The hollow circle in the middle of Figure 1 represents the transmitting array and the solid circles on both sides represent the receiving arrays. According to the equivalent phase center (EPC) principle [33,34], the $N$ arrays yield $N$ EPCs. By random selection from the uniform linear array, $P$ sparse EPCs with indices $n_p \in \mathbf{N}$ are obtained, and the sparse EPC index vector can be denoted as $\mathbf{T} = [n_1, n_2, \ldots, n_P]$, with $\mathbf{T} \subset \mathbf{N}$. Thus, the $p$th EPC is located at $A_p(x_m, y_{n_p}, H)$ at slow time $t_m$, where $x_m = v t_m$ and $y_{n_p} = (n_p - (N-1)/2)\,d$. For the $p$th EPC, the ideal instantaneous distance from the array to the $k$th scatterer $(x_k, y_k, z_k)$ in the imaging scene is:
$$R_B(t_m) = \sqrt{(v t_m - x_k)^2 + (y_{n_p} - y_k)^2 + (H - z_k)^2} \tag{1}$$
Actually, due to the influence of air disturbance, the platform may deviate from its ideal path, which affects the DLSLA 3D SAR imaging. Assume there is a yaw rate $\omega$ in flight. The angle $\theta$ in Figure 1 denotes the intersection angle between the ideal and actual paths. The actual instantaneous distance from the $p$th EPC to the $k$th scatterer $(x_k, y_k, z_k)$ is:
$$R_{pk}(t_m) = \sqrt{(v_x t_m - y_{n_p}\sin\theta - x_k)^2 + (y_{n_p}\cos\theta + v_y t_m - y_k)^2 + (H - z_k)^2} \approx R_k + \frac{v^2 t_m^2 - 2 x_k v t_m \cos\theta - 2 y_k v t_m \sin\theta}{2R_k} + \frac{y_{n_p}^2 - 2 y_{n_p} y_k \cos\theta + 2 y_{n_p} x_k \sin\theta}{2R_k} + \frac{x_k^2}{2R_k} \tag{2}$$
where $v_x = v\cos\theta$, $v_y = v\sin\theta$, and $R_k = \sqrt{(H - z_k)^2 + y_k^2}$.
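The geometry and approximation above can be sanity-checked numerically. The sketch below (under assumed parameter values, not values from the paper) builds the sparse EPC coordinates $y_{n_p}$ and compares the exact distance with its second-order expansion of Equation (2):

```python
import numpy as np

# Sparse EPC positions y_{n_p} = (n_p - (N-1)/2) d for a random subset of a
# uniform array, plus a numerical check of the second-order expansion of R_pk.
# All parameter values are illustrative assumptions, not the paper's.
N, P = 64, 56
lam = 0.0179                       # wavelength [m] (assumed)
d = lam / 2                        # element spacing parameter, d = lambda/2 [32]
v, H = 60.0, 500.0                 # platform velocity [m/s], altitude [m]

rng = np.random.default_rng(0)
n_p = np.sort(rng.choice(np.arange(1, N + 1), size=P, replace=False))
y_np_all = (n_p - (N - 1) / 2) * d          # sparse EPC cross-track coordinates

# one EPC, one scatterer, one slow time
theta = np.deg2rad(3.0)                     # instantaneous yaw angle
x_k, y_k, z_k = 20.0, 15.0, 5.0             # scatterer position (assumed)
y_np, t_m = y_np_all[10], 0.2

R_k = np.sqrt((H - z_k) ** 2 + y_k ** 2)
vx, vy = v * np.cos(theta), v * np.sin(theta)

exact = np.sqrt((vx * t_m - y_np * np.sin(theta) - x_k) ** 2
                + (y_np * np.cos(theta) + vy * t_m - y_k) ** 2
                + (H - z_k) ** 2)
approx = (R_k
          + (v**2 * t_m**2 - 2 * x_k * v * t_m * np.cos(theta)
             - 2 * y_k * v * t_m * np.sin(theta)) / (2 * R_k)
          + (y_np**2 - 2 * y_np * y_k * np.cos(theta)
             + 2 * y_np * x_k * np.sin(theta)) / (2 * R_k)
          + x_k**2 / (2 * R_k))
print(abs(exact - approx))   # far below the wavelength
```

The residual is orders of magnitude below the wavelength, so the expansion is phase-accurate for geometries of this kind.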
Assume the radar transmits a linear frequency modulation (LFM) signal with center frequency $f_c$. In the far field, the received signal of the $p$th EPC can be expressed as:
$$s_p(\hat{t}, t_m) = \iiint_D \sigma_k \exp\left(-j 4\pi f_c R_{pk}(t_m)/c\right) \exp\left(j\pi K_r \left(\hat{t} - 2 R_{pk}(t_m)/c\right)^2\right) dx_k\, dy_k\, dz_k \tag{3}$$
where D is the imaging region, σ k is the reflectance of the kth scatterer, t ^ is the fast time, c is the electromagnetic wave speed, and K r is the chirp rate.
Errors caused by EPC can be compensated using the method in [19]. When the range compression is completed and the errors caused by EPC are compensated, the echo signal in the time domain after scene discretization can be expressed as:
$$s_p(t_m; R_k) = \sum_k \mathrm{sinc}\left(\hat{t} - \frac{2 R_k}{c}\right) \sigma_k \exp\left(-j \frac{4\pi f_c}{c} R_{pk}(t_m)\right) \tag{4}$$
Thus, the signal represented by Equation (4) can be seen as a series of two-dimensional signals with different heights. Then, the two-dimensional signal of azimuth and cross-track of the ith range (height) cell is:
$$s_p(t_m; R_i) = \sum_{l=1}^{L} \sigma_l \exp\left(-j \frac{4\pi f_c}{c} \cdot \frac{v^2 t_m^2 - 2 x_l v t_m \cos\theta - 2 y_l v t_m \sin\theta}{2R_i}\right) \exp\left(-j \frac{4\pi f_c}{c} \cdot \frac{y_{n_p}^2 - 2 y_{n_p} y_l \cos\theta + 2 y_{n_p} x_l \sin\theta}{2R_i} + j\varphi_l\right) \tag{5}$$
where $\varphi_l$ is the constant phase term associated with $R_i + x_l^2/(2R_i)$, $R_i$ represents the range value of the $i$th range cell, and $L$ represents the number of scatterers in the $i$th range cell.
Assume the yaw rate is $\omega$ and the initial yaw angle is $\theta_0$; the instantaneous yaw angle is then $\theta = \omega t_m + \theta_0$. The instantaneous Doppler frequency can be obtained as follows:
$$f_d(t_m) = -\frac{2}{\lambda}\frac{d R_{pk}(t_m)}{d t_m} = -\frac{2}{\lambda R_k}\left(v^2 t_m - x_k v \cos\theta + x_k v t_m \omega \sin\theta - y_k v \sin\theta\right) - \frac{2}{\lambda R_k}\left(-y_k v t_m \omega \cos\theta + y_{n_p} y_k \omega \sin\theta + x_k y_{n_p} \omega \cos\theta\right) \tag{6}$$
where λ = c / f c is the wavelength. Furthermore, the Doppler frequency modulation can be obtained by the following expression:
$$\gamma = -\frac{2}{\lambda}\frac{d^2 R_{pk}(t_m)}{d t_m^2} = -\frac{2}{\lambda R_k}\left(v^2 + x_k v \omega \sin\theta + x_k v \omega \sin\theta + x_k v t_m \omega^2 \cos\theta - y_k v \omega \cos\theta\right) - \frac{2}{\lambda R_k}\left(-y_k v \omega \cos\theta + y_k v t_m \omega^2 \sin\theta + y_{n_p} y_k \omega^2 \cos\theta - x_k y_{n_p} \omega^2 \sin\theta\right) \tag{7}$$
Referring to the definitions of the instantaneous Doppler frequency and the Doppler frequency modulation, the instantaneous frequency $f_{cd}(y_n)$ and frequency modulation $K_c$ of the cross-track signal can be denoted as:
$$f_{cd}(y_n) = -\frac{2}{\lambda}\frac{d R_{nk}(t_m)}{d y_n} = -\frac{2}{\lambda R_k}\left(y_n - y_k \cos\theta + x_k \sin\theta\right) \tag{8}$$
$$K_c = -\frac{2}{\lambda}\frac{d^2 R_{nk}(t_m)}{d y_n^2} = -\frac{2}{\lambda R_k} \tag{9}$$
Equation (9) shows that K c is a constant, so the cross-track signal can be processed directly. According to Equations (6) and (8), the Doppler frequency center f d ( 0 ) and the cross-track frequency center f c d ( 0 ) are:
$$f_d(0) = \frac{2}{\lambda R_k}\left[(v - \omega y_{n_p})(x_k \cos\theta_0 + y_k \sin\theta_0)\right] \tag{10}$$
$$f_{cd}(0) = -\frac{2}{\lambda}\left.\frac{d R_{nk}(t_m)}{d y_n}\right|_{y_n = 0} = \frac{2}{\lambda R_k}\left(y_k \cos\theta - x_k \sin\theta\right) \tag{11}$$
Equations (10) and (11) indicate that the focused image is deformed. More specifically, the azimuth coordinate and cross-track coordinate are modulated. That is, after image focusing, deformation correction is required to improve the quality of the image.
Assuming that the platform flies smoothly, the yaw rate $\omega$ remains small and $\theta$ changes slowly. From experience, typical values are $\omega = 0.05$ rad/s and $\omega^2 = 0.0025$ (rad/s)$^2$ [35,36]. Thus, the terms containing the factor $\omega^2$ can be ignored, and Equation (7) can be approximately expressed as:
$$\gamma = -\frac{2}{\lambda R_k}\left[v^2 - 2 v\omega\left(y_k \cos\theta - x_k \sin\theta\right)\right] \tag{12}$$
Equation (12) indicates that the Doppler frequency modulation is space-variant, which is related not only to velocity and yaw rate, but also to the coordinates of the target.
Denote
$$x'_k = x_k \cos\theta_0 + y_k \sin\theta_0 \tag{13}$$
$$y'_k = y_k \cos\theta_0 - x_k \sin\theta_0 \tag{14}$$
then
$$x_k \cos\theta + y_k \sin\theta = \cos(\omega t_m)(x_k \cos\theta_0 + y_k \sin\theta_0) + \sin(\omega t_m)(y_k \cos\theta_0 - x_k \sin\theta_0) \approx (x_k \cos\theta_0 + y_k \sin\theta_0) + \omega t_m (y_k \cos\theta_0 - x_k \sin\theta_0) = x'_k + \omega t_m y'_k \tag{15}$$
$$y_k \cos\theta - x_k \sin\theta = \cos(\omega t_m)(y_k \cos\theta_0 - x_k \sin\theta_0) - \sin(\omega t_m)(x_k \cos\theta_0 + y_k \sin\theta_0) \approx (y_k \cos\theta_0 - x_k \sin\theta_0) - \omega t_m (x_k \cos\theta_0 + y_k \sin\theta_0) = y'_k - \omega t_m x'_k \tag{16}$$
Then, Equations (11) and (12) can be further approximated as:
$$f_{cd}(0) \approx \frac{2}{\lambda R_k}\left(y'_k - \omega t_m x'_k\right) \tag{17}$$
$$\gamma \approx -\frac{2}{\lambda R_k}\left(v^2 - 2 v\omega y'_k\right) \tag{18}$$
According to Equations (9), (10), (17), and (18), the two-dimensional signal of azimuth and cross-track of the ith range cell can be constructed as:
$$s_p(t_m; R_i) = \sum_{l=1}^{L} \sigma_l \exp\left(-j 2\pi \frac{(v^2 - 2 v\omega y'_l) t_m^2 - 2(x'_l v + \omega y_{n_p} x'_l) t_m}{\lambda R_i} - j 2\pi \frac{y_{n_p}^2 - 2 y_{n_p}(y'_l - \omega t_m x'_l)}{\lambda R_i}\right) = \sum_{l=1}^{L} \sigma_l \exp\left(-j 2\pi \frac{(v^2 - 2 v\omega y'_l) t_m^2 - 2 x'_l v t_m}{\lambda R_i}\right) \exp\left(-j 2\pi \frac{y_{n_p}^2 - 2 y_{n_p} y'_l}{\lambda R_i}\right) \tag{19}$$
where the first phase term carries the azimuth information and the second carries the cross-track information. Equation (18) shows that the Doppler frequency modulation $\gamma$ is related to the modulated cross-track coordinates and not to the modulated azimuth coordinates. Moreover, the azimuth information is absent from the second phase term of Equation (19), so the cross-track signal can be processed to obtain the modulated cross-track coordinates of the scatterers before azimuth signal processing.
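As a quick check of the approximations in this section, the FM rate of Equation (18) can be compared with a numerical second derivative of the exact range history; all parameter values below are assumptions for illustration.

```python
import numpy as np

# Compare gamma ≈ -(2/(lam R_k))(v^2 - 2 v w y'_k) (Eq. (18)) with -(2/lam)
# times a numerical second derivative of the exact range. Values are assumed.
lam, v, H, omega = 0.0179, 60.0, 500.0, 0.05       # omega = 0.05 rad/s (typical)
theta0 = np.deg2rad(3.0)
x_k, y_k, z_k, y_np = 20.0, 15.0, 5.0, 0.25

R_k = np.sqrt((H - z_k) ** 2 + y_k ** 2)

def R(t):
    """Exact instantaneous range with time-varying yaw angle theta(t)."""
    th = omega * t + theta0
    return np.sqrt((v * np.cos(th) * t - y_np * np.sin(th) - x_k) ** 2
                   + (y_np * np.cos(th) + v * np.sin(th) * t - y_k) ** 2
                   + (H - z_k) ** 2)

h = 1e-4                                            # finite-difference step
gamma_exact = -(2 / lam) * (R(h) - 2 * R(0.0) + R(-h)) / h**2

y_pk = y_k * np.cos(theta0) - x_k * np.sin(theta0)  # y'_k, Eq. (14)
gamma_approx = -(2 / (lam * R_k)) * (v**2 - 2 * v * omega * y_pk)

print(gamma_exact, gamma_approx)   # agree to within about 1%
```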

3. Cross-Track Signal Reconstruction with CS

For the azimuth and cross-track signals, each slow time sampling tm can be seen as a snapshot. According to Equation (19), the cross-track signal of a single snapshot can be expressed as:
$$s_p(R_i; m) = \sum_{l=1}^{L} \sigma_l \exp\left(-j 2\pi \frac{y_{n_p}^2 - 2 y_{n_p} y'_l}{\lambda R_i}\right) \exp\left(j f_l(m)\right) = \sum_{l=1}^{L} o_{ml} \exp\left(-j 2\pi \frac{y_{n_p}^2 - 2 y_{n_p} y'_l}{\lambda R_i}\right) \tag{20}$$
where $L$ denotes the total number of scatterers in the $i$th range cell, $m = 1, 2, \ldots, M$, $f_l(m)$ denotes the azimuth phase corresponding to the slow time $t_m$, and $o_{ml} = \sigma_l \exp(j f_l(m))$. After removing the target-independent quadratic phase term, Equation (20) can be rewritten as:
$$s_p(R_i; m) = \sum_{l=1}^{L} o_{ml} \exp\left(j \frac{4\pi y_{n_p} y'_l}{\lambda R_i}\right) \tag{21}$$
Generally, in 3D SAR imaging, large parts of the 3D scene contain no targets, which means that the cross-track signal to be reconstructed is sparse. Thus, the cross-track signal can be processed with the CS method.
Firstly, the cross-track scene needs to be discretized. The cross-track imaging scene $[-y_0, y_0]$ can be divided into $Q$ equal intervals with grid coordinates $\mathbf{y} = [y_1, \ldots, y_Q]$, where $y_1 = -y_0$, $y_Q = y_0$, $y_0$ is half the width of the region of interest, and the grid interval is $\Delta y = 2 y_0 / Q$. Then, the dictionary matrix of the cross-track signal can be denoted as:
$$\mathbf{A} = [\mathbf{a}(y_1), \ldots, \mathbf{a}(y_Q)] = \begin{bmatrix} \exp(j 4\pi y_{n_1} y_1/(\lambda R_i)) & \exp(j 4\pi y_{n_1} y_2/(\lambda R_i)) & \cdots & \exp(j 4\pi y_{n_1} y_Q/(\lambda R_i)) \\ \exp(j 4\pi y_{n_2} y_1/(\lambda R_i)) & \exp(j 4\pi y_{n_2} y_2/(\lambda R_i)) & \cdots & \exp(j 4\pi y_{n_2} y_Q/(\lambda R_i)) \\ \vdots & \vdots & \ddots & \vdots \\ \exp(j 4\pi y_{n_P} y_1/(\lambda R_i)) & \exp(j 4\pi y_{n_P} y_2/(\lambda R_i)) & \cdots & \exp(j 4\pi y_{n_P} y_Q/(\lambda R_i)) \end{bmatrix} \in \mathbb{C}^{P \times Q} \tag{22}$$
where $\mathbf{a}(y_q) = [a_1(y_q), \ldots, a_P(y_q)]^T$ and $a_p(y_q) = \exp\left(j 4\pi y_{n_p} y_q/(\lambda R_i)\right)$.
Letting $\mathbf{s}(R_i; m) = [s_1(R_i; m), s_2(R_i; m), \ldots, s_P(R_i; m)]^T$ and $\mathbf{o}_m = [o_{m1}, o_{m2}, \ldots, o_{mQ}]^T$, the signal $s_p(R_i; m)$ can be further expressed as:
$$\mathbf{s}(R_i; m) = \mathbf{A}\mathbf{o}_m + \mathbf{w} \tag{23}$$
where $\mathbf{w} = [w_1, w_2, \ldots, w_P]^T$ represents the noise. Equation (23) can be solved with the CS method in [23].
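A minimal single-snapshot sketch of Equations (22) and (23): the dictionary is built on a coarse, roughly Rayleigh-spaced grid, and a basic iterative shrinkage-thresholding (IST) loop stands in for the CS solver of [23]. The array sizes, wavelength, range, and scene below are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Single-snapshot cross-track recovery, s = A o + w, via a basic IST loop.
# All parameter values are illustrative assumptions.
lam, R_i = 0.0179, 500.0
N, P, Q = 64, 48, 64
d = lam / 2
rng = np.random.default_rng(1)
n_p = np.sort(rng.choice(np.arange(1, N + 1), size=P, replace=False))
y_np = (n_p - (N - 1) / 2) * d
y_grid = (np.arange(Q) - Q / 2) * lam * R_i / (2 * N * d)   # ~Rayleigh spacing

# Dictionary of Eq. (22): A[p, q] = exp(j 4 pi y_{n_p} y_q / (lam R_i))
A = np.exp(1j * 4 * np.pi * np.outer(y_np, y_grid) / (lam * R_i))

o_true = np.zeros(Q, complex)
o_true[[20, 35, 50]] = [1.0, 0.8 * np.exp(1j * 0.5), 0.6]    # sparse scene
s = A @ o_true + 0.05 * (rng.standard_normal(P) + 1j * rng.standard_normal(P))

# IST iteration: o <- soft_threshold(o + A^H (s - A o) / Lc, tau / Lc)
Lc = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
tau = 5.0                               # l1 regularisation weight (assumed)
o = np.zeros(Q, complex)
for _ in range(300):
    r = o + A.conj().T @ (s - A @ o) / Lc
    o = np.exp(1j * np.angle(r)) * np.maximum(np.abs(r) - tau / Lc, 0.0)

print(np.flatnonzero(np.abs(o) > 0.3))  # recovered grid indices
```

On a near-orthogonal grid like this one, the IST loop recovers the support of the three simulated scatterers; on a finer grid the columns of $\mathbf{A}$ become coherent and a more careful solver (such as the method of [23]) is needed.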

4. Minimum Entropy Criterion for Azimuth Compression

Image entropy closely relates to the quality of image focus. It is generally acknowledged that SAR images with a better focusing quality have smaller entropy.
According to Equation (18), the Doppler frequency modulation is related to the velocity, yaw rate, and modulated cross-track coordinates. The modulated cross-track coordinates were obtained in Section 3, so the velocity and yaw rate are the parameters to be estimated next. Meanwhile, in the same range and cross-track cells, targets located in different azimuth cells have the same Doppler frequency modulation.
After range compression and cross-track reconstruction, for scatterers in the ith range cell and lth cross-track cell, the azimuth signal can be denoted as:
$$s(t_m; R_i, y_l) = \sum_{g=1}^{G} \sigma_g \exp\left(-j 2\pi \frac{(v^2 - 2 v\omega y'_g) t_m^2 - 2 v t_m x'_g}{\lambda R_i}\right) \tag{24}$$
where G is the number of scatterers in the ith range cell and lth cross-track cell.
According to the expression of Doppler frequency modulation, the compensation function can be constructed as:
$$H_a(\eta) = \exp\left(-j\pi \frac{\lambda R_i}{2\eta^2} f_a^2\right) \tag{25}$$
where $\eta$ is the phase compensation factor, which represents the Doppler frequency modulation, and $\mathbf{f}_a = [f_{a1}, f_{a2}, \ldots, f_{aM}]$ represents the Doppler frequency.
The phase compensation operation can be completed by matrix operations. Let $\mathbf{D}(\eta)$ denote the signal after compensation, which depends on $\eta$; then $\mathbf{D}(\eta)$ can be denoted as:
$$\mathbf{D}(\eta) = \varpi_a\left[\left(\omega_a \mathbf{s}(R_i, y_l)\right) \odot \mathbf{H}_a(\eta)\right] \tag{26}$$
where $\omega_a$ denotes the discrete Fourier transform (DFT) matrix of size $M$, $\varpi_a$ denotes the inverse DFT (IDFT) matrix of size $M$, $\odot$ represents the Hadamard product, $\mathbf{s}(R_i, y_l) = [s(t_1; R_i, y_l), s(t_2; R_i, y_l), \ldots, s(t_M; R_i, y_l)]^T$, and $\mathbf{H}_a(\eta) = [H_{a1}(\eta), H_{a2}(\eta), \ldots, H_{aM}(\eta)]^T$. According to the image entropy definition, the image entropy of $\mathbf{D}(\eta)$ can be defined as:
$$E = -\sum_m |D_m(\eta)|^2 \ln\left(|D_m(\eta)|^2\right) \tag{27}$$
where $D_m(\eta)$ represents the $m$th element of the vector $\mathbf{D}(\eta)$, $m = 1, 2, \ldots, M$, and $|\cdot|$ denotes the modulus of a complex number. The image entropy can be used to evaluate the image focus quality; that is, when the entropy reaches its minimum, the corresponding phase compensation factor $\eta$ is the required value. The result can be obtained by solving the following optimization problem:
$$\min_\eta \; -\sum_m |D_m(\eta)|^2 \ln\left(|D_m(\eta)|^2\right) \quad \mathrm{s.t.} \quad \mathbf{D}(\eta) = \varpi_a\left[\left(\omega_a \mathbf{s}(R_i, y_l)\right) \odot \mathbf{H}_a(\eta)\right] \tag{28}$$
It is not easy to solve the above optimization problem directly. To simplify the optimization problem, a substitution function can be constructed to replace the objective function of Equation (28). The substitution function is designed as follows:
$$\Xi(\eta; \eta^{(u)}) = -\sum_m |D_m(\eta)|^2 \ln\left(|D_m(\eta^{(u)})|^2\right) \tag{29}$$
where $\eta^{(u)}$ is the value at the $u$th iteration, which is known at that point. Furthermore, the accurate phase compensation factor $\eta$ can be obtained by an iterative algorithm, i.e., the minimum entropy problem can be solved with the following iteration:
$$\eta^{(u+1)} = \arg\min_\eta \; \Xi(\eta; \eta^{(u)}) \tag{30}$$
Through solving the above problem, the convergence value $\eta_{opt}$ of the phase compensation factor $\eta$ can be obtained. Then, azimuth compression can be carried out with the compensation function $\mathbf{H}_a(\eta_{opt})$. The flowchart is shown in Figure 2.
In Figure 2, $v_0$ is the velocity obtained by the airborne measuring equipment. The detailed derivation of $A^{(u)}$ and $B^{(u)}$ is given in Appendix A. $\mu$ represents the step length, which is selected by the following search: assuming the set of candidate step lengths is $\boldsymbol{\mu} = [\mu_1, \ldots, \mu_I]$, each step length $\mu_i$ yields an image entropy value, and the step length corresponding to the minimum entropy is taken as the step length for this iteration.
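To illustrate the criterion (though not the paper's iterative update of Figure 2), the sketch below sweeps the compensation factor $\eta$ over a grid, applies $H_a(\eta)$ in the Doppler domain as in Equation (26), and keeps the $\eta$ that minimizes the entropy of Equation (27). All numeric values are assumptions.

```python
import numpy as np

# Toy MEC autofocus: grid search over the compensation factor eta instead of
# the paper's iterative solver. PRF, range, wavelength, targets are assumed.
lam, R_i, M, prf = 0.0179, 500.0, 256, 1000.0
t_m = (np.arange(M) - M / 2) / prf
eta_true = 59.0                         # plays the role of sqrt(v^2 - 2 v w y')
x_targets = [-3.0, 0.0, 4.0]            # azimuth positions of the scatterers

# azimuth signal in the spirit of Eq. (24) for one range / cross-track cell
s = sum(np.exp(-1j * 2 * np.pi * (eta_true**2 * t_m**2 - 2 * x0 * eta_true * t_m)
               / (lam * R_i)) for x0 in x_targets)

f_a = np.fft.fftfreq(M, 1 / prf)        # Doppler frequency axis [Hz]

def entropy(eta):
    H_a = np.exp(-1j * np.pi * lam * R_i * f_a**2 / (2 * eta**2))   # Eq. (25)
    D = np.fft.ifft(np.fft.fft(s) * H_a)                            # Eq. (26)
    p = np.abs(D) ** 2
    return -np.sum(p * np.log(p + 1e-12))                           # Eq. (27)

etas = np.linspace(55.0, 63.0, 161)
eta_hat = etas[int(np.argmin([entropy(e) for e in etas]))]
print(eta_hat)   # close to eta_true
```

The entropy is smallest when the quadratic Doppler phase is fully removed, which is why the minimizer lands near the true value; the iterative update of Figure 2 reaches the same point without an exhaustive sweep.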

5. Deformation Correction

After the above operations, a 3D image is obtained, albeit with the scatterers' coordinates still modulated by the yaw angle. For a scatterer $k$ located at $(x_k, y_k, z_k)$, the coordinates obtained from the focused image are $(x'_k, y'_k, z_k)$, i.e., the azimuth and cross-track coordinates are modulated. The 3D image is thus deformed and needs to be corrected.
It is noted that the starting time $T_{k\_s}$ and the ending time $T_{k\_e}$ of scatterer $k$ can be obtained from the signal $s(t_m; R_i, y_l)$ before azimuth compression.
Then, from $(x'_k, y'_k, z_k)$ and the velocity $v$, the theoretical starting time $T'_{k\_s}$ and the theoretical ending time $T'_{k\_e}$ can be expressed as in Equations (31) and (32), respectively:
$$T'_{k\_s} = (x'_k - L_{sar}/2)/v \tag{31}$$
$$T'_{k\_e} = (x'_k + L_{sar}/2)/v \tag{32}$$
where $L_{sar}$ is the synthetic aperture length. Therefore, the relationship between the azimuth coordinates before and after modulation can be expressed as:
$$x_k - x'_k = v(T_{k\_s} - T'_{k\_s}) \tag{33}$$
or
$$x_k - x'_k = v(T_{k\_e} - T'_{k\_e}) \tag{34}$$
Combining Equations (13) and (14) with (33) or (34) gives three equations in the three unknown parameters, namely $x_k$, $y_k$, and the initial yaw angle $\theta_0$. By solving this system of equations, the unknown parameters can be obtained. Thus, the accurate coordinates $(x_k, y_k, z_k)$ are recovered and the deformation correction is completed.
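The correction step can be sketched as follows: given the modulated coordinates $(x'_k, y'_k)$ read off the focused image and the measured starting time $T_{k\_s}$, Equation (33) gives $x_k$ directly, and Equations (13) and (14), being a pure rotation that preserves $x_k^2 + y_k^2$, then give $y_k$ and $\theta_0$. The numeric values below are made up for the demonstration, and $y_k > 0$ is assumed.

```python
import numpy as np

# Deformation correction from Eqs. (13), (14) and (33).
# All numeric values are illustrative assumptions.
v, L_sar, theta0_true = 60.0, 12.0, np.deg2rad(3.0)
x_k_true, y_k_true = 20.0, 15.0

# forward model: modulated coordinates and echo start times
xp = x_k_true * np.cos(theta0_true) + y_k_true * np.sin(theta0_true)  # Eq. (13)
yp = y_k_true * np.cos(theta0_true) - x_k_true * np.sin(theta0_true)  # Eq. (14)
T_ks = (x_k_true - L_sar / 2) / v            # measured starting time
Tp_ks = (xp - L_sar / 2) / v                 # theoretical starting time, Eq. (31)

# inversion
x_k = xp + v * (T_ks - Tp_ks)                # Eq. (33)
rho = np.hypot(xp, yp)                       # the rotation preserves the radius
y_k = np.sqrt(max(rho**2 - x_k**2, 0.0))     # assumes y_k > 0
theta0 = np.arctan2(y_k, x_k) - np.arctan2(yp, xp)

print(x_k, y_k, np.rad2deg(theta0))
```

Running the sketch recovers the true $(x_k, y_k)$ and $\theta_0$ exactly in this noiseless setting; with measurement noise on $T_{k\_s}$, the estimates degrade gracefully with the timing error.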
The flow chart of the proposed method is shown in Figure 3.

6. Experiments and Results

In this section, some experiments are presented to illustrate the performance of the proposed method.

6.1. DLSLA 3D SAR Imaging of Isolated Targets

In this subsection, an isolated-target simulation is presented to verify the proposed DLSLA 3D SAR imaging method. The simulation parameters are listed in Table 1. The interval of the linear array is d = 0.009 m. There were 20 isolated targets with unit reflectivity in the Cartesian coordinate system, as shown in Figure 4.
In the simulation experiments, it was assumed that the velocity and yaw rate obtained by the airborne measuring equipment were 62 m/s and 0 °/s, respectively. Noise was added to the signal after range compression, with a signal-to-noise ratio (SNR) of 5 dB. The simulation results are shown in Figure 5. The imaging result after range compression and cross-track reconstruction with the iterative shrinkage thresholding (IST) algorithm is shown in Figure 5a. The ratio of randomly selected array elements to the total uniform linear array elements was 0.875. Figure 5b shows the imaging result after azimuth compression with the traditional method (TM), whose motion compensation function was derived from the parameters obtained by the airborne measuring equipment; azimuth broadening is evident, because these parameters were inaccurate. Figure 5c was obtained by autofocusing with the MapDrift (MD) method [24]; compared with Figure 5b, its quality is greatly improved. Figure 5d shows the imaging result obtained by the MEC before deformation correction. The azimuth signal is completely compressed, which demonstrates that the proposed MEC method can obtain a focused imaging result. Meanwhile, the velocity and yaw rate were estimated as v = 59.91 m/s and ω = 2.09 °/s, respectively, showing that the proposed method can accurately estimate the parameter values and compensate for the motion error.
To evaluate the azimuth imaging quality of the proposed method, the azimuth sectional images of the rectangular and ellipse areas of Figure 5 are shown in Figure 6a,b. It shows that the azimuth signal obtained by the TM had an obvious broadening phenomenon. The poor quality of the TM was due to parameter inaccuracies. The imaging result obtained by the MD method was better than that of TM. However, a prominent isolated target in each azimuth sectional image was necessary for satisfactory imaging results with the MD method. When the isolated targets in the scene were uniformly distributed, the estimation performance decreased, as shown in Figure 6b. The quality of the image obtained by the MEC was the best of the three methods. Figure 6 shows that the MEC can achieve azimuth signal compression and compensate for motion error. That is, the proposed method is valid.
The 3 dB widths of the five targets obtained with the MD and MEC methods are shown in Table 2, where "improvement" is defined as the ratio of the 3 dB width of MD to that of MEC. Targets 1 and 2 correspond to targets in Figure 6a; targets 3, 4, and 5 correspond to targets in Figure 6b. Table 2 shows that the performance of the MEC method was greatly improved.
According to the analysis, the azimuth and cross-track coordinates obtained are modulated. The 2D projection of Figure 5a onto the azimuth and cross-track plane is shown in Figure 7a, together with the ending time $T_{k\_e}$ = 1.008 s of the echo signal. The 2D projection of Figure 5d onto the azimuth and cross-track plane is shown in Figure 7b. The obtained coordinates are modulated, with region 1 more severely affected than region 2, because the modulation increases with the coordinates. Meanwhile, according to Figure 7b, the modulated azimuth coordinate $x'_k$ was obtained, from which the theoretical echo signal ending time $T'_{k\_e}$ was computed. Finally, according to Equations (13), (14), and (45), the accurate coordinates $(x_k, y_k, z_k)$ were obtained, and the initial yaw angle $\theta_0$ was estimated as 2.92°. Figure 7c shows the 2D projection result after deformation correction; the azimuth coordinates have been corrected. However, due to the relatively poor cross-track resolution limited by the length of the antenna array, the correction effect on the cross-track coordinates was not distinctive, so a deviation remains between the estimated and real values of the cross-track coordinates.
Figure 8 shows the coordinates of target 3 (Figure 7c) before and after deformation correction. The azimuth and cross-track coordinates of the five targets before and after deformation correction are shown in Table 3. Taking target 1 as an example, the improvement in azimuth is defined as (distance to actual position before correction)/(distance to actual position after correction) = |−2.81 − (−3)|/|−2.91 − (−3)| = 2.11. A deformation correction is effective when the improvement value is greater than 1; the larger the improvement value, the greater the gain in imaging accuracy. Table 3 shows that the improvements of the azimuth coordinate are significant. Due to the low resolution, the improvements in cross-track are not obvious. In general, the deformation correction method is valid.
During azimuth compression, the range cell and cross-track cell are fixed. Figure 9 shows the corresponding image entropy convergence of an azimuth signal under different SNRs. In each iteration, the step length $\mu$ was determined by the searching method. The convergence threshold was set as $\varepsilon = 1 \times 10^{-4}$.

6.2. DLSLA 3D SAR Imaging of Distributed Extended Targets

In this subsection, the DEM data of an airborne CSAR 2D image [37] were used to represent a distributed scene. The scene had an extent of 200 m × 200 m × 35 m in the Cartesian coordinate system, with the radar system parameters listed in Table 1. The azimuth and cross-track coordinates of the scatterers were both uniformly distributed in [−100 m, 100 m] with 1 m intervals. The ideal 3D distributed scene is shown in Figure 10a, and its 2D projection onto the azimuth and cross-track plane is shown in Figure 10b. According to the system parameters, the cross-track Rayleigh resolution δc was 6.25 m. The SNR was 5 dB after range compression, and the ratio of randomly selected array elements to the total uniform linear array elements was 0.875. The 3D image reconstructed by the proposed method is shown in Figure 11a, and Figure 11b gives the corresponding image obtained by the traditional method, i.e., using the parameters obtained by the airborne measuring equipment (a velocity of 62 m/s and a yaw rate of 0 °/s) for azimuth compression. The imaging result of the traditional method clearly suffers from broadening and loses many image details; on the contrary, the proposed method obtains a good 3D image of the scene. Based on the minimum entropy method, the velocity and yaw rate were estimated as v = 59.90 m/s and ω = 2.06 °/s, respectively. Further, the projection images onto the azimuth and cross-track plane obtained by the two methods are illustrated in Figure 12a,b; in both, the target coordinates are affected by the deformation. Figure 13 shows the azimuth sectional image of a corresponding range and cross-track cell in Figure 12a,b. It is clear that the TM result (Figure 12b) suffers from the broadening phenomenon.
However, because of the influence of range sidelobes, the azimuth sectional image of the distributed targets (MEC, Figure 12a) was not as ideal as that of the isolated targets (shown previously in Figure 6). Using the deformation correction method, the corrected result of Figure 12a is shown in Figure 14. The obtained initial yaw angle $\theta_0$ was 3.08°, very close to the real value. Comparing Figure 14 with Figure 10b shows that the modulated coordinates in Figure 12a were corrected. In addition, the azimuth and cross-track coordinates of the four scatterers in Figure 10b before and after deformation correction are shown in Table 4; the improvements of the azimuth coordinate are significant.
Finally, the topographic profile of the ideal scene is shown in Figure 15a. Since the focused image was deformed in the azimuth and cross-track plane, the topographic profile was also deformed, as shown in Figure 15b. Figure 15c shows the topographic profile after deformation correction; it is very similar to Figure 15a. Figure 15d shows the elevation errors of the corresponding positions in Figure 15a,c. All the errors were less than one range resolution unit (0.75 m). That is, the proposed method was able to obtain accurate 3D scene images.

7. Conclusions

In the DLSLA 3D SAR imaging model with yaw angle, the Doppler frequency modulation is spatially variant and is related to the modulated cross-track coordinates rather than the modulated azimuth coordinates, so the focused 3D image is deformed in the azimuth and cross-track plane. In this paper, we proposed a method to estimate the platform motion parameters, which can be used to construct compensation functions to compress the azimuth signal and compensate for the motion error of the platform. The deformation of the focused 3D image can then be corrected by the designed deformation correction procedure. It must be mentioned that the proposed method can also be extended to multiple-input multiple-output (MIMO) radar systems, although the current paper analyzed a single transmitter element for clarity. In a MIMO array, it is necessary to compensate for the error introduced by the equivalent phase center (EPC), which is related to the platform velocity. We will consider how to compensate for motion errors in the MIMO array in future work.

Author Contributions

Conceptualization, Q.L. and Q.Z.; Data curation, W.H.; Funding acquisition, Y.L., Q.Z. and T.S.Y.; Investigation, Q.L.; Methodology, Q.L.; Resources, Q.Z. and W.H.; Supervision, T.S.Y.; Validation, Y.L.; Writing—original draft, Q.L. and Y.L.; Writing—review & editing, T.S.Y.

Funding

This research was funded by the National Natural Science Foundation of China under Grant No. 61471386 and 61571457; the Ministry of Education, Singapore under Grant No. MOE2016-T2-1-070; and the China Scholarship Council (CSC) under Grant No. 201703170012.

Acknowledgments

The authors would like to thank the handling Associate Editor and the anonymous reviewers for their valuable comments and suggestions for this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

In this appendix, we derive the calculation of $d\Xi(\eta;\eta^{(u)})/d\eta$ and $d^2\Xi(\eta;\eta^{(u)})/d\eta^2$.

Firstly, we derive the derivative of the modulus of a complex-valued function. For $a(\xi) = a_r(\xi) + j a_i(\xi)$, where $a_r(\xi)$ is the real part, $a_i(\xi)$ is the imaginary part, and $\xi$ is a variable, the modulus is $|a(\xi)| = \sqrt{a_r^2(\xi) + a_i^2(\xi)}$. The first-order derivative of $|a(\xi)|$ with respect to $\xi$ is:

$$\frac{d|a(\xi)|}{d\xi} = \frac{1}{2\sqrt{a_r^2(\xi)+a_i^2(\xi)}}\left(2a_r(\xi)\frac{da_r(\xi)}{d\xi} + 2a_i(\xi)\frac{da_i(\xi)}{d\xi}\right) = \frac{1}{|a(\xi)|}\left(a_r(\xi)\frac{da_r(\xi)}{d\xi} + a_i(\xi)\frac{da_i(\xi)}{d\xi}\right)$$
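This identity is easy to verify numerically. The sketch below compares the analytic expression against a central finite difference, using an arbitrarily chosen test function a(ξ) = cos ξ + j sin 2ξ (the test function is illustrative, not from the paper):

```python
import numpy as np

# Verify d|a(xi)|/dxi = (a_r a_r' + a_i a_i') / |a| by finite differences,
# using the arbitrary test function a(xi) = cos(xi) + j*sin(2*xi).
def a(xi):
    return np.cos(xi) + 1j * np.sin(2.0 * xi)

def analytic_derivative(xi):
    ar, ai = np.cos(xi), np.sin(2.0 * xi)
    dar, dai = -np.sin(xi), 2.0 * np.cos(2.0 * xi)  # a_r', a_i'
    return (ar * dar + ai * dai) / np.hypot(ar, ai)

xi, h = 0.7, 1e-6
numeric = (abs(a(xi + h)) - abs(a(xi - h))) / (2.0 * h)
assert abs(numeric - analytic_derivative(xi)) < 1e-6
```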
Then, the first-order derivative of $\Xi(\eta;\eta^{(u)})$ with respect to $\eta$ can be obtained by:

$$A^{(u)} = \frac{d\Xi(\eta;\eta^{(u)})}{d\eta} = 2\sum_m |D_m(\eta)|\ln\!\left(|D_m(\eta^{(u)})|^2\right)\frac{d|D_m(\eta)|}{d\eta} = \sum_m 2\ln\!\left(|D_m(\eta^{(u)})|^2\right)\left[D_{m\_r}(\eta)\frac{dD_{m\_r}(\eta)}{d\eta} + D_{m\_i}(\eta)\frac{dD_{m\_i}(\eta)}{d\eta}\right]$$

$$B^{(u)} = \frac{d^2\Xi(\eta;\eta^{(u)})}{d\eta^2} = \sum_m 2\ln\!\left(|D_m(\eta^{(u)})|^2\right)\left[\left(\frac{dD_{m\_r}(\eta)}{d\eta}\right)^2 + D_{m\_r}(\eta)\frac{d^2D_{m\_r}(\eta)}{d\eta^2} + \left(\frac{dD_{m\_i}(\eta)}{d\eta}\right)^2 + D_{m\_i}(\eta)\frac{d^2D_{m\_i}(\eta)}{d\eta^2}\right]$$
where $D_{m\_r}(\eta)$ and $D_{m\_i}(\eta)$ are the real and imaginary parts of $D_m(\eta)$, respectively. Meanwhile, the element $D_m(\eta)$ is:

$$D_m(\eta) = \sum_q \varpi_a(m,q)\left[\omega_a(m,q)\, s_q(R_i, y_l)\right] H_a(q)$$

where $\omega_a(m,q)$ and $\varpi_a(m,q)$ represent the $m$th-row, $q$th-column elements of the matrices $\omega_a$ and $\varpi_a$, respectively, and $s_q(R_i,y_l)$ and $H_a(q)$ represent the $q$th elements of the vectors $s(R_i,y_l)$ and $H_a$, respectively. $\omega_a$, $\varpi_a$ and $s(R_i,y_l)$ are known. Denote:

$$I_{mq} = \varpi_a(m,q)\,\omega_a(m,q)\, s_q(R_i,y_l) = I_{mq\_r} + j I_{mq\_i}$$

where $I_{mq\_r}$ and $I_{mq\_i}$ are the real and imaginary parts of $I_{mq}$, respectively. Meanwhile, the compensation function can be expressed as $H_a(q) = \cos(\varphi_q(\eta)) + j\sin(\varphi_q(\eta))$, where $\varphi_q(\eta) = \frac{\pi\lambda R_i}{2\eta^2} f_{aq}^2$. Then:

$$D_m(\eta) = \sum_q (I_{mq\_r} + jI_{mq\_i})\left[\cos(\varphi_q(\eta)) + j\sin(\varphi_q(\eta))\right] = \sum_q \left\{\left[I_{mq\_r}\cos(\varphi_q(\eta)) - I_{mq\_i}\sin(\varphi_q(\eta))\right] + j\left[I_{mq\_r}\sin(\varphi_q(\eta)) + I_{mq\_i}\cos(\varphi_q(\eta))\right]\right\}$$
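As a quick numerical sanity check, this real/imaginary expansion must agree with direct complex summation; a minimal sketch with randomly generated stand-in values for $I_{mq}$ and $\varphi_q(\eta)$ (not SAR data):

```python
import numpy as np

# Check that the real/imaginary decomposition of D_m matches the direct
# complex summation sum_q I_mq * (cos(phi_q) + j*sin(phi_q)).
# The I_mq and phi_q values here are random stand-ins, not SAR data.
rng = np.random.default_rng(0)
I_mq = rng.normal(size=8) + 1j * rng.normal(size=8)
phi = rng.normal(size=8)

D_direct = np.sum(I_mq * (np.cos(phi) + 1j * np.sin(phi)))
D_r = np.sum(I_mq.real * np.cos(phi) - I_mq.imag * np.sin(phi))
D_i = np.sum(I_mq.real * np.sin(phi) + I_mq.imag * np.cos(phi))

assert np.isclose(D_direct, D_r + 1j * D_i)
```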
According to Equation (40), the real part $D_{m\_r}(\eta)$ and the imaginary part $D_{m\_i}(\eta)$ of $D_m(\eta)$ can be expressed as:

$$D_{m\_r}(\eta) = \sum_q\left[I_{mq\_r}\cos(\varphi_q(\eta)) - I_{mq\_i}\sin(\varphi_q(\eta))\right]$$

$$D_{m\_i}(\eta) = \sum_q\left[I_{mq\_r}\sin(\varphi_q(\eta)) + I_{mq\_i}\cos(\varphi_q(\eta))\right]$$
Then, the first and second derivatives of $D_{m\_r}(\eta)$ and $D_{m\_i}(\eta)$ can be obtained by the following equations:

$$\frac{dD_{m\_r}(\eta)}{d\eta} = -\sum_q\left[I_{mq\_r}\sin(\varphi_q(\eta)) + I_{mq\_i}\cos(\varphi_q(\eta))\right]\frac{d\varphi_q(\eta)}{d\eta}$$

$$\frac{dD_{m\_i}(\eta)}{d\eta} = \sum_q\left[I_{mq\_r}\cos(\varphi_q(\eta)) - I_{mq\_i}\sin(\varphi_q(\eta))\right]\frac{d\varphi_q(\eta)}{d\eta}$$

$$\frac{d^2D_{m\_r}(\eta)}{d\eta^2} = \sum_q\left\{-\left[I_{mq\_r}\cos(\varphi_q(\eta)) - I_{mq\_i}\sin(\varphi_q(\eta))\right]\left(\frac{d\varphi_q(\eta)}{d\eta}\right)^2 - \left[I_{mq\_r}\sin(\varphi_q(\eta)) + I_{mq\_i}\cos(\varphi_q(\eta))\right]\frac{d^2\varphi_q(\eta)}{d\eta^2}\right\}$$

$$\frac{d^2D_{m\_i}(\eta)}{d\eta^2} = \sum_q\left\{-\left[I_{mq\_r}\sin(\varphi_q(\eta)) + I_{mq\_i}\cos(\varphi_q(\eta))\right]\left(\frac{d\varphi_q(\eta)}{d\eta}\right)^2 + \left[I_{mq\_r}\cos(\varphi_q(\eta)) - I_{mq\_i}\sin(\varphi_q(\eta))\right]\frac{d^2\varphi_q(\eta)}{d\eta^2}\right\}$$

where $\frac{d\varphi_q(\eta)}{d\eta} = -\frac{\pi\lambda R_i}{\eta^3} f_{aq}^2$ and $\frac{d^2\varphi_q(\eta)}{d\eta^2} = \frac{3\pi\lambda R_i}{\eta^4} f_{aq}^2$. Then, $A^{(u)}$ and $B^{(u)}$ can be calculated, and $\eta^{(u+1)}$ can be obtained.
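The appendix stops short of stating the update rule explicitly; a natural choice consistent with computing both the first derivative $A^{(u)}$ and the second derivative $B^{(u)}$ is a Newton-type iteration $\eta^{(u+1)} = \eta^{(u)} - A^{(u)}/B^{(u)}$. The sketch below demonstrates this update on a toy convex cost with its minimum at η = 60; the quadratic cost is an illustrative stand-in for the entropy Ξ, not the paper's implementation.

```python
# Hedged sketch of a Newton-type update eta_{u+1} = eta_u - A/B, where
# A and B play the roles of dXi/deta and d2Xi/deta2. The quadratic cost
# (eta - 60)^2 is a stand-in for the image entropy, with its minimum at
# an assumed true platform velocity of 60 m/s.
def cost_derivatives(eta):
    A = 2.0 * (eta - 60.0)   # first derivative of (eta - 60)^2
    B = 2.0                  # second derivative
    return A, B

eta = 62.0                   # initial guess, e.g. from navigation equipment
for _ in range(10):
    A, B = cost_derivatives(eta)
    eta -= A / B             # Newton step

assert abs(eta - 60.0) < 1e-9
```

For the true (non-quadratic) entropy cost, the same update converges locally rather than in one step, which is why the iteration runs until the entropy stops decreasing (cf. the convergence curve in Figure 9).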

References

  1. Klare, J.; Cerutti, D.; Brenner, A.; Ender, J. Image quality analysis of the vibrating sparse MIMO antenna array of the airborne 3D imaging radar ARTINO. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Barcelona, Spain, 23–28 July 2007; pp. 5310–5314. [Google Scholar]
  2. Lin, Y.; Hong, W.; Tan, W.X.; Wu, Y.R. Extension of range migration algorithm to squint circular SAR imaging. IEEE Geosci. Remote Sens. Lett. 2011, 8, 651–655. [Google Scholar] [CrossRef]
  3. Zhu, X.X.; Bamler, R. Very High Resolution Spaceborne SAR Tomography in Urban Environment. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4296–4308. [Google Scholar] [CrossRef] [Green Version]
  4. Reigber, A.; Moreira, A. First Demonstration of Airborne SAR Tomography Using Multibaseline L-Band Data. IEEE Trans. Geosci. Remote Sens. 2000, 38, 2142–2152. [Google Scholar] [CrossRef]
  5. Liang, L.; Li, X.W.; Ferro-Famil, L.; Guo, H.D.; Zhang, L.; Wu, W.J. Urban area tomography using a sparse representation based two-dimensional spectral analysis technique. Remote Sens. 2018, 10, 109. [Google Scholar] [CrossRef]
  6. Shi, J.; Zhang, X.L.; Yang, J.Y.; Wang, Y.B. Surface-tracing-based LASAR 3-D imaging method via multiresolution approximation. IEEE Trans. Geosci. Remote Sens. 2008, 46, 3719–3730. [Google Scholar] [CrossRef]
  7. Han, K.Y.; Wang, Y.P.; Tan, W.X.; Hong, W. Efficient pseudopolar format algorithm for down-looking linear-array SAR 3-D imaging. IEEE Geosci. Remote Sens. Lett. 2015, 12, 572–576. [Google Scholar] [CrossRef]
  8. Du, L.; Wang, Y.P.; Hong, W.; Tan, W.X.; Wu, Y.R. A three-dimensional range migration algorithm for downward-looking 3-D-SAR with single-transmitting and multiple-receiving linear array antennas. EURASIP J. Adv. Signal Process 2010, 957916, 1–15. [Google Scholar] [CrossRef]
  9. Liu, Q.Y.; Zhang, Q.; Gu, F.F.; Cheng, Y.C.; Kang, L.; Qu, X.Y. Downward-looking linear array 3D SAR imaging based on multiple measurement vectors model and continuous compressive sensing. J. Sens. 2017, 2017, 6207828. [Google Scholar] [CrossRef]
  10. Zhang, S.Q.; Zhu, Y.T.; Kuang, G.Y. Imaging of downward-looking linear array three-dimensional SAR based on FFT-MUSIC. IEEE Geosci. Remote Sens. Lett. 2015, 12, 885–889. [Google Scholar] [CrossRef]
  11. Wei, S.J.; Zhang, X.L.; Shi, J.; Liao, K.F. Sparse array microwave 3-D imaging: Compressed sensing recovery and experimental study. Prog. Electromagn. Res. 2012, 135, 161–181. [Google Scholar] [CrossRef]
  12. Peng, X.M.; Tan, W.X.; Wang, Y.; Hong, W.; Wu, Y.R. Convolution back-projection imaging algorithm for downward-looking sparse linear array three dimensional synthetic aperture radar. Prog. Electromagn. Res. 2012, 129, 287–313. [Google Scholar] [CrossRef]
  13. Chen, C.; Zhang, X. A new super-resolution 3-D SAR imaging method based on MUSIC algorithm. In Proceedings of the IEEE RadarCon (RADAR), Kansas City, MO, USA, 23–27 May 2011; pp. 525–529. [Google Scholar]
  14. Gu, F.F.; Zhang, Q.; Chi, L.; Cheng, Y.A.; Li, S. A Novel motion compensating method for mimo-sar imaging based on compressed sensing. IEEE Sens. J. 2015, 15, 2157–2165. [Google Scholar] [CrossRef]
  15. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  16. Candès, E.J.; Wakin, M.B. An introduction to compressive sampling. IEEE Signal Process. Mag. 2008, 25, 21–30. [Google Scholar] [CrossRef]
  17. Kwak, N. Principal component analysis based on L1-norm maximization. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 1672–1680. [Google Scholar] [CrossRef] [PubMed]
  18. Peng, X.M.; Tan, W.X.; Hong, W.; Jiang, C.L.; Bao, Q.; Wang, Y.P. Airborne SLADL 3-D SAR image reconstruction by combination of polar formatting and L1 regularization. IEEE Trans. Geosci. Remote Sens. 2016, 54, 213–226. [Google Scholar] [CrossRef]
  19. Zhang, S.Q.; Zhu, Y.T.; Dong, G.G.; Kuang, G.Y. Truncated SVD-based compressive sensing for downward-looking three-dimensional sar imaging with uniform/nonuniform linear array. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1853–1857. [Google Scholar] [CrossRef]
  20. Tang, G.G.; Bhaskar, B.N.; Shah, P.; Recht, B. Compressed sensing off the grid. IEEE Trans. Inf. Theory 2013, 59, 7465–7490. [Google Scholar] [CrossRef]
  21. Yang, Z.; Xie, L.H. Enhancing sparsity and resolution via reweighted atomic norm minimization. IEEE Trans. Signal Process. 2016, 64, 1–12. [Google Scholar] [CrossRef]
  22. Bao, Q.; Han, K.Y.; Peng, X.M.; Hong, W.; Zhang, B.C.; Tan, W.X. DLSLA 3-D SAR imaging algorithm for off-grid targets based on pseudo-polar formatting and atomic norm minimization. Sci. China 2016, 59, 062310. [Google Scholar] [CrossRef]
  23. Liu, Q.Y.; Zhang, Q.; Luo, Y.; Li, K.M.; Sun, L. Fast algorithm for sparse signal reconstruction based on off-grid model. IET Radar Sonar Navig. 2018, 12, 390–397. [Google Scholar] [CrossRef]
  24. Wu, Z.B.; Zhu, Y.T.; Su, Y.; Li, Y.; Song, X.J. MIMO array design for airborne linear array 3D SAR imaging. J. Electron. Inf. Technol. 2013, 35, 2672–2677. [Google Scholar] [CrossRef]
  25. Bao, Q.; Jiang, C.L.; Lin, Y.; Tan, W.X.; Wang, Z.R.; Hong, W. Measurement matrix optimization and mismatch problem compensation for DLSLA 3-D SAR cross-track reconstruction. Sensors 2016, 16, 1333. [Google Scholar] [CrossRef] [PubMed]
  26. Zhang, S.Q.; Dong, G.G.; Kuang, G.Y. Superresolution downward-looking linear array three-dimensional SAR imaging based on two-dimensional compressive sensing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 2184–2196. [Google Scholar] [CrossRef]
  27. Ren, X.; Chen, L.; Yang, T. 3D imaging algorithm for down-looking MIMO array SAR based on Bayesian compressive sensing. Int. J. Antennas Propag. 2014, 2014, 612326. [Google Scholar] [CrossRef]
  28. Yang, Z.M.; Sun, G.C.; Xing, M.D.; Bao, Z. Motion compensation for airborne 3-D SAR based on joint multi-channel auto-focusing technology. J. Electron. Inf. Technol. 2012, 34, 1581–1588. [Google Scholar] [CrossRef]
  29. Ding, Z.Y.; Tan, W.X.; Wang, Y.P.; Hong, W.; Wu, Y.R. Yaw angle error compensation for airborne 3-D SAR based on wavenumber-domain subblock. J. Radars 2015, 4, 467–473. [Google Scholar] [CrossRef]
  30. Macedo, K.A.C.D.; Scheiber, R. Precise tomography and aperture dependent motion compensation for airborne SAR. IEEE Geosci. Remote Sens. Lett. 2005, 2, 172–176. [Google Scholar] [CrossRef]
  31. Xing, M.D.; Jiang, X.W.; Wu, R.B.; Zhou, F.; Bao, Z. Motion compensation for UAV SAR based on raw radar data. IEEE Trans. Geosci. Remote Sens. 2009, 47, 2870–2883. [Google Scholar] [CrossRef]
  32. Du, L.; Wang, Y.P.; Hong, W.; Wu, Y.R. Analysis of 3D-SAR based on angle compression principle. In Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Boston, MA, USA, 7–11 July 2008; pp. 1324–1327. [Google Scholar]
  33. Li, J.; Stoica, P.; Zhen, X.Y. Signal Synthesis and Receiver Design for MIMO Radar Imaging. IEEE Signal Process. 2008, 56, 3959–3968. [Google Scholar] [CrossRef]
  34. Wang, L.B.; Xu, J.; Huang, F.K.; Peng, Y.N. Analysis and Compensation of Equivalent Phase Center Error in MIMO-SAR. Acta Electron. Sin. 2009, 12, 2687–2693. [Google Scholar] [CrossRef]
  35. Chen, Y.; Li, G.; Zhang, Q.; Zhang, Q.J.; Xia, X.G. Motion compensation for airborne SAR via parametric sparse representation. IEEE Trans. Geosci. Remote Sens. 2017, 55, 551–562. [Google Scholar] [CrossRef]
  36. Fan, B.K.; Ding, Z.G.; Gao, W.B.; Long, T. An improved motion compensation method for high resolution UAV SAR imaging. Sci. China 2014, 57, 122301:1–122301:13. [Google Scholar] [CrossRef]
  37. Lin, Y.; Hong, W.; Tan, W.X.; Wang, Y.P.; Xiang, M.S. Airborne circular SAR imaging: Results at P-band. In Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Munich, Germany, 22–27 July 2012; pp. 5594–5597. [Google Scholar]
Figure 1. DLSLA 3D SAR imaging geometry model.
Figure 2. The flow chart of MEC.
Figure 3. The flow chart of the proposed method.
Figure 4. 3D isolated targets model.
Figure 5. Imaging result: (a) 3D imaging result after range compression and cross-track reconstruction; (b) 3D imaging result by traditional method; (c) 3D imaging result by MapDrift method; and (d) 3D imaging result by the proposed method.
Figure 6. The azimuth sectional image of three methods: (a) The azimuth sectional image of the rectangular area in Figure 5; and (b) the azimuth sectional image of the ellipse area in Figure 5.
Figure 7. 2D projection onto azimuth and cross-track plane: (a) Before azimuth compression; (b) before deformation correction; and (c) after deformation correction.
Figure 8. The azimuth and cross-track coordinates before and after deformation correction of target 3.
Figure 9. Image entropy convergence curve.
Figure 10. The ideal scene: (a) The ideal 3D distributed scene; and (b) the 2D projection of ideal 3D scene onto azimuth and cross-track plane.
Figure 11. 3D imaging result of scene: (a) The proposed method; and (b) the traditional method.
Figure 12. 2D projection onto azimuth and cross-track plane: (a) The proposed method; and (b) the traditional method.
Figure 13. The azimuth sectional image of MEC and TM methods.
Figure 14. 2D projection onto azimuth and cross-track plane after deformation correction.
Figure 15. Topographic profile: (a) The topographic profile of the ideal scene; (b) the topographic profile before deformation correction; (c) the topographic profile after deformation correction; and (d) the errors of the corresponding positions in Figure 15a,c.
Table 1. Parameters of platform and antenna.

| Parameters | Value | Parameters | Value |
|---|---|---|---|
| Carrier frequency fc (GHz) | 17 | Pulse repeat frequency PRF (Hz) | 1000 |
| Bandwidth Br (MHz) | 200 | Pulse duration Tp (μs) | 1 |
| Height of platform H (m) | 1500 | Number of receiving antennas N | 210 |
| Velocity of platform v (m/s) | 60 | Cross-track resolution δc (m) | 6.25 |
| Yaw rate ω (°/s) | 2 | Initial yaw angle θ0 (°) | 3 |
Table 2. The 3 dB width of MD and MEC.

| Targets | MD | MEC | Improvement |
|---|---|---|---|
| 1 | 0.59 m | 0.20 m | 2.95 |
| 2 | 0.63 m | 0.25 m | 2.52 |
| 3 | 1.53 m | 0.20 m | 7.65 |
| 4 | 1.68 m | 0.19 m | 8.84 |
| 5 | 1.57 m | 0.22 m | 7.13 |
Table 3. The azimuth and cross-track coordinates before and after deformation correction.

| Targets | Actual Coordinates (Azimuth, Cross-Track) | Coordinates before Correction | Coordinates after Correction | Improvement Value |
|---|---|---|---|---|
| 1 | (−3, −6) m | (−2.81, −7.32) m | (−2.91, −7.30) m | (2.11, 1.01) |
| 2 | (−3, 6) m | (−3.15, 7.32) m | (−3.03, 7.25) m | (5, 1.05) |
| 3 | (29, 29) m | (29.91, 28.26) m | (28.87, 29.48) m | (7, 1.54) |
| 4 | (32, 29) m | (33.28, 28.26) m | (31.94, 29.58) m | (21.33, 1.27) |
| 5 | (35, 29) m | (36.38, 28.26) m | (35.01, 29.68) m | (138, 1.08) |
Table 4. The azimuth and cross-track coordinates before and after deformation correction.

| Targets | Actual Coordinates (Azimuth, Cross-Track) | Coordinates before Correction | Coordinates after Correction | Improvement Value |
|---|---|---|---|---|
| 1 | (95, −97) m | (80.08, −105.4) m | (95.34, −98.25) m | (43.88, 6.72) |
| 2 | (48, 100) m | (59.7, 97.04) m | (47.84, 101.5) m | (73.12, 1.97) |
| 3 | (16, −57) m | (12.32, −59.48) m | (16.28, −58.49) m | (13.14, 1.66) |
| 4 | (−50, −12) m | (−49.86, −11.48) m | (−49.87, −11.51) m | (1.07, 1.06) |

Liu, Q.; Luo, Y.; Zhang, Q.; Hong, W.; Yeo, T.S. Precision Downward-Looking 3D Synthetic Aperture Radar Imaging with Sparse Linear Array and Platform Motion Parameters Estimation. Remote Sens. 2018, 10, 1957. https://doi.org/10.3390/rs10121957
