Article

MIMO Radar Accurate 3-D Imaging and Motion Parameter Estimation for Target with Complex Motions

Ziying Hu, Wei Wang, Fuwang Dong and Ping Huang
Automation College, Harbin Engineering University, Nantong Street 145, Harbin 150001, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(18), 3961; https://doi.org/10.3390/s19183961
Submission received: 12 August 2019 / Revised: 4 September 2019 / Accepted: 11 September 2019 / Published: 13 September 2019
(This article belongs to the Section Remote Sensors)

Abstract
In this paper, an accurate three-dimensional (3-D) multiple-input multiple-output (MIMO) radar localization and imaging method with motion parameter estimation is proposed for targets with complex motions. To characterize the target accurately, a multi-dimensional signal model is established that includes the target 3-D position, translation velocity, and rotational angular velocity. For simplicity, the signal model is transformed into three joint two-dimensional (2-D) parametric models by analyzing the motion characteristics. Then a gridless method based on atomic norm optimization is proposed to improve precision while avoiding the basis mismatch of traditional compressive sensing (CS) techniques. Once the covariance matrix is obtained by solving the corresponding semi-definite program (SDP), estimation of signal parameters via rotational invariance techniques (ESPRIT) can be used to estimate the positions, and the motion parameters can then be obtained by the Least Squares (LS) method. Afterwards, pairing correction is carried out to remove registration errors by setting judgment conditions derived from a resolution performance analysis, which further improves the accuracy. In this way, high-precision imaging can be realized without a spectral search process, and slight changes of target posture can be detected accurately. Simulation results show that the proposed method realizes accurate localization and imaging while estimating the motion parameters efficiently.

1. Introduction

Owing to its accurate localization and imaging performance, radar is widely applied in many imaging fields. Unfortunately, localization errors increase and the image becomes distorted when complex target motions are taken into account, particularly slowly time-varying motions containing both translations and rotations [1,2].
Under this circumstance, relevant studies mostly concentrate on synthetic aperture radar (SAR), inverse synthetic aperture radar (ISAR), and 3-D interferometric inverse synthetic aperture radar (3-D InISAR) [3,4]. Nevertheless, the imaging accuracy and real-time performance degrade if the synthetic aperture time cannot be optimized reasonably. Following this, several improvements have been proposed, including optimization of imaging accuracy [5,6,7,8,9], selection of the optimal imaging time [10], and improvements of imaging efficiency by CS and sparse sampling techniques [11,12,13]. However, there are still inevitable defects. On the one hand, the platform is usually required to keep the baselines unchanged during the synthetic aperture time, which is impractical in reality. On the other hand, the geometric models are usually one- or two-dimensional, which is insufficient to characterize the target accurately, and the coordinate coincidence phenomenon increases imaging errors when 2-D estimation results are directly expanded to 3-D [14]. In addition, the migration through resolution cell (MTRC) problem caused by neglecting basis mismatch further reduces the accuracy of localization and imaging.
Owing to its multi-transmitting multi-receiving mechanism, MIMO radar has a larger aperture, more array degrees of freedom, and shorter imaging time [15,16,17], which makes it more appropriate for efficient imaging. Therefore, the effects of target rotations have been systematically studied with MIMO radar. In [18], MIMO radar is proposed to solve the problems of low imaging efficiency and poor imaging quality of ISAR, but the method is limited by the computational complexity of its exhaustive search process. In [19], a multi-channel Doppler computing method is proposed and a data fitting method is used to extract motion features. However, the fitting error is large due to the approximate matching process, so the method cannot meet the requirement of high imaging precision. Based on this, the work in [20] improves the estimation accuracy of micro-Doppler parameters with a novel space geometric distribution model. In these papers, the performance of MIMO radar imaging cannot be improved fundamentally because most of them focus only on improvements of the geometric model or the data fitting process, rather than analyzing and optimizing the imaging or estimation techniques in theory. In contrast, the studies in [21] greatly improve the imaging quality by combining SAR or ISAR processing techniques with MIMO radar. In this way, the two radar systems complement each other, improve the imaging efficiency, and reduce the transmitting power. Following this, Zhao et al. [22] present a short-term shift orthogonal waveform which is more effective for parameter estimation. A distributed MIMO SAR/ISAR system is then designed and a focusing technique is developed in [23], which greatly improves the imaging resolution. In [24], the law and period of target motion are obtained by synthesizing the rotation axis and comparing the target coordinates at different sampling times. Nevertheless, the synthetic aperture time is still needed, or a large number of array elements is required to refine the grids, so real-time performance is still difficult to guarantee and problems such as phase wrapping need to be solved. Moreover, the neglect of basis mismatch further increases the imaging errors and deteriorates the imaging performance. Although many current methods, such as the sparse adaptive calibration recovery via iterative maximum a posteriori (SACR-iMAP) method in [25] and the sparsity-cognizant total least-squares (S-TLS) method in [26], can improve the imaging precision of MIMO radar by addressing the basis mismatch problem, the imaging errors cannot be eliminated completely.
This paper presents an accurate MIMO radar 3-D localization and imaging method with motion parameter estimation for maneuvering targets. A multi-dimensional echo model is first established to characterize the target precisely, containing all the position and motion parameters. It is then transformed into three two-dimensional (2-D) parametric models to facilitate analysis by simplifying and analyzing the specific motion features. To improve precision and eliminate the basis mismatch problem, a gridless method based on atomic norm optimization is presented. After constructing the covariance matrix by solving the SDP, the motion and position parameters can be calculated directly without a spectral search process. In addition, pairing judgment and correction are carried out to remove registration errors according to a resolution analysis, so that accurate imaging can be realized with precise estimation results. Finally, simulations show that the proposed method achieves more accurate imaging with efficient estimation of motion parameters compared to other methods.
This paper is organized as follows. In Section 2, a multi-dimensional signal model is built and transformed into joint 2-D models. In Section 3, a gridless method is introduced to achieve efficient imaging, the resolution performance is analyzed, and pairing disorders are corrected to improve accuracy. In Section 4, simulation results and discussion are given to illustrate the performance of the proposed method. Finally, Section 5 gives some conclusions.
Notations: In the rest of the paper, small boldface letters denote column vectors and capital boldface letters denote matrices. $\|\cdot\|_{\mathcal{A}}$ denotes the atomic norm. $T(\cdot)$ denotes a Toeplitz matrix. $\otimes$ denotes the Kronecker product. $(\cdot)^{T}$, $(\cdot)^{*}$, $(\cdot)^{H}$, $(\cdot)^{-1}$ and $(\cdot)^{+}$ denote the transpose, conjugate, conjugate transpose, inverse, and pseudo-inverse operations, respectively.

2. MIMO Radar Signal Model

2.1. Multi-Dimensional Echo Model

An appropriate array structure is necessary for MIMO radar 3-D imaging and parameter estimation [17]; the array layout used in this paper is shown in Figure 1. A uniform linear array consisting of $M$ transmitters is placed along the X-axis direction, where $T_m$ represents the $m$-th transmitter with $m = 0, 1, \ldots, M-1$. The signal $s_m(t) = p_m(t)\exp[j(2\pi f_c t + \varphi_m)]$ is the Hadamard orthogonal-phase encoded signal of $T_m$, where $p_m(t)$ and $f_c$ are the envelope and carrier frequency, and signal orthogonality is ensured by adjusting the phase $\varphi_m$. An $N \times L$ uniform planar array is designed as the receiving array. The Y and Z axial directions are taken as the array line directions, and $R_{nl}$ denotes the receiver in the $n$-th row and $l$-th column, where $n = 0, 1, \ldots, N-1$ and $l = 0, 1, \ldots, L-1$. In this paper, a large target with translations and rotations is considered, so all scattering points share the same motion state. Moreover, they all obey the Swerling II distribution and the scattering coefficients remain unchanged within one pulse period.
Following the array model, the echo signal at the $R_{nl}$ receiver can be expressed as
$$ d_{n,l}(t) = \sum_{k=1}^{K}\sum_{m=0}^{M-1}\sigma_k \cdot \exp\!\left[\,j2\pi(f_c+f_d)\bigl(t-\tau_{m,n,l}^{k}\bigr)\right] \tag{1} $$
where $\sigma_k$ is the scattering coefficient of the $k$-th scattering point, $\tau_{m,n,l}^{k} = (T_m^k + R_{nl}^k)/c$ is the delay time, and $c$ is the propagation speed of the electromagnetic signal. $T_m^k$ and $R_{nl}^k$ represent the distance from $T_m$ to the $k$-th scattering point and the distance from the $k$-th scattering point to $R_{nl}$, respectively. $f_d = 2V_d/\lambda$ is the Doppler frequency caused by the target motions, $\lambda$ is the signal wavelength, and $V_d$ is the synthesis of the translation velocity and the rotational angular velocity. After removing the carrier, the receiving signal at $R_{nl}$ from $T_m$ can be written as
$$ d_{m,n,l}(t) = \sum_{k}^{K}\sigma_k \cdot \exp\!\left[\,j2\pi f_d t - j2\pi(f_c+f_d)\cdot\tau_{m,n,l}^{k}\right] \tag{2} $$
Since $V_d \ll c$, the model in (2) can be simplified as
$$ d_{m,n,l}(t) = \sum_{k}^{K}\sigma_k \cdot \exp\!\left(j2\pi f_d t\right)\cdot \exp\!\left[-j2\pi\bigl(T_m^k + R_{nl}^k\bigr)/\lambda\right] \tag{3} $$
Taking $P$ as the reference center of the region, the echo $d_{m,n,l}^{P}(t)$ reflected from $P$ can be used as a reference signal to compensate the target echo
$$ D_{m,n,l}(t) = d_{m,n,l}(t)\cdot \bigl[d_{m,n,l}^{P}(t)\bigr]^{*} = \sum_{k}^{K}\sigma_k \cdot \exp\!\left(j2\pi f_d t\right)\cdot \exp\!\left[-j2\pi\bigl(T_m^k + R_{nl}^k - T_m^P - R_{nl}^P\bigr)/\lambda\right] \tag{4} $$
Then, according to the geometric model shown in Figure 1 and taking $\Delta R$ to represent the range deviation term in (4), it can finally be written as follows (the proof is given in Appendix A.1):
$$ \Delta R \approx \left[\bigl(\overrightarrow{T_m P}\bigr)^{\circ} + \bigl(\overrightarrow{R_{nl} P}\bigr)^{\circ}\right]\cdot \overrightarrow{Pk} \tag{5} $$
where $(\overrightarrow{T_m P})^{\circ}$ and $(\overrightarrow{R_{nl} P})^{\circ}$ respectively represent the unit direction vectors from $T_m$ and $R_{nl}$ to the center $P$. To describe the echo intuitively, we set $(P_X, P_Y, P_Z)$ and $(P_X + x_k, P_Y + y_k, P_Z + z_k)$ as the coordinates of $P$ and of point $k$, $d_X$ as the internal spacing of the transmitting array, and $d_Y$ and $d_Z$ as the row and column spacings inside the receiving array. Considering that the coordinates of the transmitter $T_0$ are $(r_X, 0, 0)$ and the coordinates of the receiver $R_{00}$ are $(0, r_Y, r_Z)$, the coordinates of $T_m$ and $R_{nl}$ are $(r_X + m d_X, 0, 0)$ and $(0, r_Y + n d_Y, r_Z + l d_Z)$, respectively. Following this, we get
$$ \bigl(\overrightarrow{T_m P}\bigr)^{\circ} \approx \frac{(P_X - r_X - m d_X,\; P_Y,\; P_Z)}{R_0}, \qquad \bigl(\overrightarrow{R_{nl} P}\bigr)^{\circ} \approx \frac{(P_X,\; P_Y - r_Y - n d_Y,\; P_Z - r_Z - l d_Z)}{R_0} $$
where $R_0$ is the reference distance from $P$ to the coordinate origin $O$. A pulse transmitted by the radar is divided into $Q$ samplings, $t_q = q \cdot T_p/Q$ $(q = 0, 1, \ldots, Q-1)$ is the sampling time, and $T_p$ is the pulse width. Thus, the echo model can be written as
$$ \mathbf{D} = \sum_{k=1}^{K}\sigma_k \cdot \bigl(\mathbf{a}_{f}\otimes\mathbf{a}_{x_k}\otimes\mathbf{a}_{y_k}\otimes\mathbf{a}_{z_k}\bigr) \tag{6} $$
where
$$ \begin{aligned} \mathbf{a}_{f} &= \bigl[a_f(0)\;\; a_f(1)\;\cdots\; a_f(Q-1)\bigr]^{T}, & a_f(q) &= \exp\!\bigl(j2\pi f_d t_q\bigr) \\ \mathbf{a}_{x_k} &= \bigl[a_{x_k}(0)\;\; a_{x_k}(1)\;\cdots\; a_{x_k}(M-1)\bigr]^{T}, & a_{x_k}(m) &= \exp\!\left[-j2\pi\frac{2P_X - (r_X + m d_X)}{\lambda R_0}\, x_k\right] \\ \mathbf{a}_{y_k} &= \bigl[a_{y_k}(0)\;\; a_{y_k}(1)\;\cdots\; a_{y_k}(N-1)\bigr]^{T}, & a_{y_k}(n) &= \exp\!\left[-j2\pi\frac{2P_Y - (r_Y + n d_Y)}{\lambda R_0}\, y_k\right] \\ \mathbf{a}_{z_k} &= \bigl[a_{z_k}(0)\;\; a_{z_k}(1)\;\cdots\; a_{z_k}(L-1)\bigr]^{T}, & a_{z_k}(l) &= \exp\!\left[-j2\pi\frac{2P_Z - (r_Z + l d_Z)}{\lambda R_0}\, z_k\right] \end{aligned} \tag{7} $$
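To make the construction of (6)-(7) concrete, the following is a minimal numerical sketch that assembles the multi-dimensional echo from Kronecker products of the steering vectors, using the radar settings later listed in Table 1. The scatterer positions, the phase sign convention, and all names are our own assumptions for illustration, not part of the paper.

```python
import numpy as np

# Sketch of the echo model in (6)-(7). Radar settings follow Table 1;
# scatterer positions, the sign convention, and all names are assumptions.
c, fc = 3e8, 35e9
lam = c / fc                                  # wavelength
M, N, L, Q = 10, 10, 10, 30                   # transmitters, receiver rows/cols, samplings
dX, dY, dZ = 3.0, 4.0, 4.0                    # element spacings (m)
rX, rY, rZ = 1.0, 0.5, 0.5                    # T0 at (rX,0,0), R00 at (0,rY,rZ)
PX, PY, PZ = 5e3, 6e3, 7e3                    # reference center P
R0 = np.sqrt(PX**2 + PY**2 + PZ**2)           # reference distance to the origin
Tp = 600e-6                                   # pulse width
tq = np.arange(Q) * Tp / Q                    # sampling instants t_q

def steering(P, r, d, n_el, coord):
    """Element n: exp[-j*2*pi*(2P - r - n*d)/(lam*R0) * coord], Eq. (7)."""
    n = np.arange(n_el)
    return np.exp(-1j * 2 * np.pi * (2 * P - r - n * d) / (lam * R0) * coord)

def echo(scatterers, Vd):
    """D = sum_k sigma_k * (a_f kron a_xk kron a_yk kron a_zk), Eq. (6)."""
    a_f = np.exp(1j * 2 * np.pi * (2 * Vd / lam) * tq)   # Doppler term, f_d = 2*V_d/lam
    D = np.zeros(Q * M * N * L, dtype=complex)
    for xk, yk, zk, sig in scatterers:
        a_x = steering(PX, rX, dX, M, xk)
        a_y = steering(PY, rY, dY, N, yk)
        a_z = steering(PZ, rZ, dZ, L, zk)
        D += sig * np.kron(a_f, np.kron(a_x, np.kron(a_y, a_z)))
    return D

# three unit-reflectivity scatterers around P, synthesized radial speed 5 m/s
D = echo([(2.0, -1.5, 0.5, 1.0), (-3.0, 2.0, -1.0, 1.0), (0.0, 0.0, 0.0, 1.0)], Vd=5.0)
print(D.shape)        # (Q*M*N*L,) = (30000,)
```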
However, it is difficult to extract the parameters directly from the model in (6) because of its large dimension. Moreover, $V_d$ is also hard to deal with because it is the synthesis of the translation velocity and the 3-D rotation velocity. Therefore, we transform this model into 2-D parametric models for simplicity.

2.2. Joint 2-D Parameter Models

Since $f_d t_q = 2V_d t_q/\lambda = 2\Delta R/\lambda$, the range term produced by the target motions can be expressed as follows, where the translation velocity $V$ and the 3-D rotational angular velocities $\omega_x$, $\omega_y$, $\omega_z$ are considered:
$$ \begin{aligned} \Delta R &= V_d \cdot t_q \\ \Delta R_X &= \Delta R\cdot\frac{P_X}{R_0} = \bigl(V\cdot t_q + \Delta R_{\omega}\bigr)\cdot\frac{P_X}{R_0} = V\cdot t_q\cdot\frac{P_X}{R_0} + \Delta x \\ \Delta R_Y &= \Delta R\cdot\frac{P_Y}{R_0} = \bigl(V\cdot t_q + \Delta R_{\omega}\bigr)\cdot\frac{P_Y}{R_0} = V\cdot t_q\cdot\frac{P_Y}{R_0} + \Delta y \\ \Delta R_Z &= \Delta R\cdot\frac{P_Z}{R_0} = \bigl(V\cdot t_q + \Delta R_{\omega}\bigr)\cdot\frac{P_Z}{R_0} = V\cdot t_q\cdot\frac{P_Z}{R_0} + \Delta z \end{aligned} \tag{8} $$
where $\Delta R_{\omega}$ is caused by the target rotations and its projections are $\Delta x$, $\Delta y$ and $\Delta z$, respectively. $\Delta R_X$, $\Delta R_Y$ and $\Delta R_Z$ are the projections of $\Delta R$ in the three dimensions. Obviously, these deviation terms are only related to their own dimensions and do not affect each other.
It is noted that the velocity parameters are considered invariable due to the short processing interval of MIMO radar, i.e., $V$, $\omega_x$, $\omega_y$ and $\omega_z$ are all constant during the $Q$ samplings of one pulse. For the roll, pitch, and yaw rotations of the target shown in Figure 1, basic navigation theory gives the following rotation matrices
$$ \mathrm{roll}\bigl(\theta_r(t)\bigr) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_r(t) & \sin\theta_r(t) \\ 0 & -\sin\theta_r(t) & \cos\theta_r(t) \end{bmatrix},\quad \mathrm{pitch}\bigl(\theta_p(t)\bigr) = \begin{bmatrix} \cos\theta_p(t) & 0 & -\sin\theta_p(t) \\ 0 & 1 & 0 \\ \sin\theta_p(t) & 0 & \cos\theta_p(t) \end{bmatrix},\quad \mathrm{yaw}\bigl(\theta_y(t)\bigr) = \begin{bmatrix} \cos\theta_y(t) & \sin\theta_y(t) & 0 \\ -\sin\theta_y(t) & \cos\theta_y(t) & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{9} $$
where $\theta_r(t)$, $\theta_p(t)$ and $\theta_y(t)$ represent the time-varying angles caused by the roll, pitch, and yaw rotations. Based on this, $\Delta x$, $\Delta y$ and $\Delta z$ in Equation (8) can finally be expressed as follows; the proof is presented in Appendix A.2.
$$ \begin{aligned} \Delta x(t_q) &= \omega_z\cdot y_k\cdot t_q - \omega_y\cdot z_k\cdot t_q \\ \Delta y(t_q) &= \omega_x\cdot z_k\cdot t_q - \omega_z\cdot x_k\cdot t_q \\ \Delta z(t_q) &= \omega_y\cdot x_k\cdot t_q - \omega_x\cdot y_k\cdot t_q \end{aligned} \tag{10} $$
Finally, based on Equations (8) and (10), the model in (6) can be transformed into the following three joint 2-D parametric models, with some composite variables represented by the new parameters $\alpha_k$, $\beta_k$, $\gamma_k$
$$ \begin{aligned} \mathbf{D}_x &= \sum_{k=1}^{K}\sigma_k\cdot\bigl(\mathbf{a}_{x_k}\otimes\mathbf{a}_{\alpha_k}\bigr), & \mathbf{a}_{\alpha_k} &= \bigl[a_{\alpha_k}(0)\;\; a_{\alpha_k}(1)\;\cdots\; a_{\alpha_k}(Q-1)\bigr]^{T} \\ \mathbf{D}_y &= \sum_{k=1}^{K}\sigma_k\cdot\bigl(\mathbf{a}_{y_k}\otimes\mathbf{a}_{\beta_k}\bigr), & \mathbf{a}_{\beta_k} &= \bigl[a_{\beta_k}(0)\;\; a_{\beta_k}(1)\;\cdots\; a_{\beta_k}(Q-1)\bigr]^{T} \\ \mathbf{D}_z &= \sum_{k=1}^{K}\sigma_k\cdot\bigl(\mathbf{a}_{z_k}\otimes\mathbf{a}_{\gamma_k}\bigr), & \mathbf{a}_{\gamma_k} &= \bigl[a_{\gamma_k}(0)\;\; a_{\gamma_k}(1)\;\cdots\; a_{\gamma_k}(Q-1)\bigr]^{T} \end{aligned} \tag{11} $$
where
$$ \begin{aligned} a_{\alpha_k}(q) &= \exp\!\left(j2\pi\frac{2\alpha_k\cdot t_q}{\lambda}\right), & \alpha_k &= \omega_z\cdot y_k - \omega_y\cdot z_k + V\cdot\frac{P_X}{R_0} \\ a_{\beta_k}(q) &= \exp\!\left(j2\pi\frac{2\beta_k\cdot t_q}{\lambda}\right), & \beta_k &= \omega_x\cdot z_k - \omega_z\cdot x_k + V\cdot\frac{P_Y}{R_0} \\ a_{\gamma_k}(q) &= \exp\!\left(j2\pi\frac{2\gamma_k\cdot t_q}{\lambda}\right), & \gamma_k &= \omega_y\cdot x_k - \omega_x\cdot y_k + V\cdot\frac{P_Z}{R_0} \end{aligned} \tag{12} $$
Hence, through the above decoupling of the Doppler shift term, we obtain the parameter models of X, Y, and Z in a low-dimensional space. Whereas the motion parameters cannot be estimated directly from the Doppler frequency in (6), it becomes feasible to obtain estimates of the target location and motion parameters from the models in (11). At the same time, the high complexity caused by the large dimension is also avoided. Since the three models in (11) share the same structure, the processing in the X direction is taken as an example below.
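As a sketch of the decoupling in (11)-(12), the snippet below computes $\alpha_k$ for each scatterer from an assumed set of motion parameters and assembles the X-dimension data $\mathbf{D}_x$; the sign convention and all names follow our reconstruction above and are assumptions, not prescriptions of the paper.

```python
import numpy as np

# Sketch of the decoupled X-dimension model in (11)-(12); names and the
# sign convention follow our reconstruction and are assumptions.
lam = 3e8 / 35e9
PX, rX, dX, M = 5e3, 1.0, 3.0, 10
R0 = np.sqrt(5e3**2 + 6e3**2 + 7e3**2)
Q, Tp = 30, 600e-6
tq = np.arange(Q) * Tp / Q

def alpha_of(point, omega, V):
    """alpha_k = omega_z*y_k - omega_y*z_k + V*P_X/R_0, Eq. (12)."""
    _, yk, zk = point
    wx, wy, wz = omega
    return wz * yk - wy * zk + V * PX / R0

def Dx_model(points, sigmas, omega, V):
    """D_x = sum_k sigma_k * (a_xk kron a_alpha_k), Eq. (11)."""
    m = np.arange(M)
    Dx = np.zeros(M * Q, dtype=complex)
    for pt, sig in zip(points, sigmas):
        a_x = np.exp(-1j * 2 * np.pi * (2 * PX - rX - m * dX) / (lam * R0) * pt[0])
        a_a = np.exp(1j * 2 * np.pi * 2 * alpha_of(pt, omega, V) * tq / lam)
        Dx += sig * np.kron(a_x, a_a)
    return Dx

# two scatterers; motion set to the t_A case of Table 2, taking
# (omega_x, omega_y, omega_z) as the (roll, pitch, yaw) rates (an assumption)
Dx = Dx_model([(2.0, -1.5, 0.5), (-3.0, 2.0, -1.0)], [1.0, 1.0],
              omega=(0.2, 0.1, 0.3), V=4.8)
```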

3. Accurate Imaging and Motion Estimations

3.1. 2-D Parameters Estimation without Basis Mismatch

For radar imaging, a traditional sparse recovery method such as Orthogonal Matching Pursuit (OMP) suffers from basis mismatch, because the construction of the sparse dictionary depends on discretizing a continuous variable. In view of this, a feasible way to avoid this phenomenon is to take the singular value decomposition (SVD) of the echo covariance matrix and then extract the eigenvalues from the signal subspace. However, it is impractical to extract all $K$ eigenvector columns to construct the signal subspace because the echo in (11) is one-dimensional. Therefore, to solve this problem and avoid basis mismatch, a gridless method based on atomic norm optimization is proposed in this section.
As a penalty function for convex optimization problems, the atomic norm is convenient for solving underdetermined linear inverse problems [27,28,29,30]. For the X-dimension echo model in (11), we define the atoms
$$ \mathbf{a}(x,\alpha) = \exp\!\left[-j2\pi\frac{2P_X - (r_X + \bar{\mathbf{m}}\, d_X)}{\lambda R_0}\, x\right]\otimes\exp\!\left[\,j2\pi\frac{2\bar{\mathbf{q}}}{\lambda}\,\alpha\right] \tag{13} $$
where $\mathbf{a}(x,\alpha)\in\mathbb{C}^{MQ\times 1}$, $\bar{\mathbf{m}} = [0\;\,1\;\cdots\;M-1]^{T}$ and $\bar{\mathbf{q}} = [t_0\;\,t_1\;\cdots\;t_{Q-1}]^{T}$. So the model can be written as
$$ \mathbf{D}_x = \sum_{k}^{K}\sigma_k\cdot\mathbf{a}(x_k,\alpha_k) \tag{14} $$
Then the atomic set is defined as $\mathcal{A} = \{\mathbf{a}(x,\alpha),\; x\in[x_{min}, x_{max}],\; \alpha\in[\alpha_{min},\alpha_{max}]\}$, where $[x_{min}, x_{max}]$ and $[\alpha_{min}, \alpha_{max}]$ represent the ranges of $x$ and $\alpha$. Obviously, the basic components in $\mathcal{A}$ construct the full echo $\mathbf{D}_x$. In this model, $\mathcal{A}$ and $\sigma$ are considered continuous. Therefore, the atomic norm of the echo can be expressed as
$$ \|\mathbf{D}_x\|_{\mathcal{A}} = \inf\left\{\sum_{k=1}^{K}|\sigma_k| \;:\; \mathbf{D}_x = \sum_{k}^{K}\sigma_k\cdot\mathbf{a}(x_k,\alpha_k)\right\} \tag{15} $$
where $\sigma_k = |\sigma_k|e^{j\phi_k}$, with $|\sigma_k|$ and $\phi_k$ the amplitude and initial phase. However, it is impractical to construct the covariance matrix of the echo directly because it is not Toeplitz. Considering this, the dimension of the echo is extended: $\boldsymbol{\Lambda} = E\{\boldsymbol{\sigma}\boldsymbol{\sigma}^{H}\} = \mathrm{diag}(|\sigma_1|^2,\ldots,|\sigma_K|^2)$, so that $\boldsymbol{\sigma}$ can be replaced by an equivalent diagonal matrix. Thus, we get the rank-$K$ matrix
$$ \mathbf{R} = E\{\mathbf{D}_x\mathbf{D}_x^{H}\} = \mathbf{A}(x,\alpha)\,\boldsymbol{\Lambda}\,\mathbf{A}(x,\alpha)^{H} = \sum_{k}^{K}|\sigma_k|^{2}\cdot\mathbf{a}(x_k,\alpha_k)\mathbf{a}(x_k,\alpha_k)^{H} \tag{16} $$
where $\mathbf{A}(x,\alpha) = [\mathbf{a}(x_{min},\alpha_{min}),\ldots,\mathbf{a}(x_{max},\alpha_{max})]$, which is obviously a Vandermonde matrix with infinitely many columns. Then, with Gaussian noise considered in $\mathbf{D}_x$, the covariance matrix can be constructed based on the optimization problem
$$ \hat{\mathbf{D}} = \arg\min_{\boldsymbol{\mu}}\;\frac{1}{2}\|\boldsymbol{\mu}-\mathbf{D}_x\|_{F}^{2} + \frac{\rho}{2}\|\boldsymbol{\mu}\|_{\mathcal{A}} \tag{17} $$
where $\boldsymbol{\mu}$ is the denoised echo and $\rho$ is the regularization coefficient. Accordingly, this atomic norm minimization problem can be transformed into an approximate semi-definite program (SDP) as
$$ \begin{aligned} &\text{minimize} && \frac{1}{2}\|\boldsymbol{\mu}-\mathbf{D}_x\|_{F}^{2} + \frac{\rho}{2}\left[\frac{1}{MQ}\,\mathrm{trace}\bigl(T(\mathbf{s})\bigr) + \xi\right] \\ &\text{subject to} && \begin{bmatrix} T(\mathbf{s}) & \boldsymbol{\mu} \\ \boldsymbol{\mu}^{H} & \xi \end{bmatrix}\succeq 0 \end{aligned} \tag{18} $$
where $\mathbf{s} = \sum_{k}^{K}|\sigma_k|^{2}\cdot\mathbf{a}(x_k,\alpha_k)$, $\xi = \sum_{k=1}^{K}|\sigma_k|^{2}$, and $T(\mathbf{s})$ denotes the Toeplitz covariance matrix whose first column is $\mathbf{s}$. Furthermore, it has been proved that the noise can be suppressed efficiently by the optimization condition, i.e., the obtained $\mathbf{s}$ mainly contains the scattered sampling data even at low SNR.
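For reference, here is a compact sketch of how the SDP in (18) could be posed with a generic convex solver (CVXPY), with the Toeplitz structure of $T(\mathbf{s})$ imposed through an explicit shift constraint. The solver choice, variable names, and this particular formulation are our own assumptions, not the paper's implementation.

```python
import numpy as np
import cvxpy as cp

def atomic_norm_denoise(Dx, rho):
    """Sketch of the SDP in (18): jointly recover the denoised echo mu and the
    Hermitian Toeplitz surrogate T(s) from the noisy X-dimension echo Dx."""
    n = Dx.size                                    # n = M*Q
    Z = cp.Variable((n + 1, n + 1), hermitian=True)
    T = Z[:n, :n]                                  # plays the role of T(s)
    mu = Z[:n, n]                                  # denoised echo vector
    xi = cp.real(Z[n, n])
    constraints = [Z >> 0,                         # the block matrix in (18) is PSD
                   T[:-1, :-1] == T[1:, 1:]]       # enforce Toeplitz structure
    obj = 0.5 * cp.sum_squares(mu - Dx) \
        + 0.5 * rho * (cp.real(cp.trace(T)) / n + xi)
    cp.Problem(cp.Minimize(obj), constraints).solve()
    return np.array(T.value), np.array(mu.value)
```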
According to the Toeplitz covariance matrix $T(\mathbf{s})$, the parameters can be estimated by the ESPRIT algorithm. By taking the singular value decomposition of $T(\mathbf{s})$, the $K$ largest singular values are found and the corresponding signal subspace $\mathbf{U}_s$ is obtained; then two subspaces of $\mathbf{U}_s$ are formed as
$$ \mathbf{U}_{s1} = \mathbf{W}_1\cdot\mathbf{U}_s,\qquad \mathbf{U}_{s2} = \mathbf{W}_2\cdot\mathbf{U}_s \tag{19} $$
where $\mathbf{W}_1 = [\,\mathbf{I}_{(M-1)\times(M-1)}\;\;\mathbf{0}_{(M-1)\times 1}\,]$ and $\mathbf{W}_2 = [\,\mathbf{0}_{(M-1)\times 1}\;\;\mathbf{I}_{(M-1)\times(M-1)}\,]$. Following this, we get
$$ \boldsymbol{\Psi} = \mathbf{U}_{s1}^{+}\cdot\mathbf{U}_{s2} \tag{20} $$
Then the parameters $x$ and $\alpha$ can be obtained directly according to (13) from the $K$ eigenvalues extracted from $\boldsymbol{\Psi}$ by eigenvalue decomposition. With these results, the scattering coefficient distribution of the points in the X-$\alpha$ plane can be expressed as
$$ \boldsymbol{\sigma}_x = \bigl(\mathbf{A}_0^{H}\mathbf{A}_0\bigr)^{-1}\mathbf{A}_0^{H}\mathbf{D}_x \tag{21} $$
where $\mathbf{A}_0\in\mathbb{C}^{MQ\times K}$ is constructed by selecting the corresponding columns of $\mathbf{A}(x,\alpha)$ according to the parameter extraction results $\{x_1, x_2, \ldots, x_K\}$ and $\{\alpha_1, \alpha_2, \ldots, \alpha_K\}$ from (20).
Following this, all estimation results of the three models in (11) can be obtained without any spectral search process
$$ \mathbf{F}_X = \begin{bmatrix} x_1 & \alpha_1 & \sigma_{x_1} \\ x_2 & \alpha_2 & \sigma_{x_2} \\ \vdots & \vdots & \vdots \\ x_K & \alpha_K & \sigma_{x_K} \end{bmatrix},\qquad \mathbf{F}_Y = \begin{bmatrix} y_1 & \beta_1 & \sigma_{y_1} \\ y_2 & \beta_2 & \sigma_{y_2} \\ \vdots & \vdots & \vdots \\ y_K & \beta_K & \sigma_{y_K} \end{bmatrix},\qquad \mathbf{F}_Z = \begin{bmatrix} z_1 & \gamma_1 & \sigma_{z_1} \\ z_2 & \gamma_2 & \sigma_{z_2} \\ \vdots & \vdots & \vdots \\ z_K & \gamma_K & \sigma_{z_K} \end{bmatrix} \tag{22} $$
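The ESPRIT step in (19)-(21) can be illustrated in simplified one-dimensional form as follows: the signal subspace is taken from the leading singular vectors of the Toeplitz matrix, the rotational operator $\boldsymbol{\Psi}$ is formed from its two shifted sub-blocks, and the amplitudes follow from the LS projection in (21). This is a generic sketch with assumed names, not the paper's exact two-dimensional implementation.

```python
import numpy as np

def esprit_from_toeplitz(T_s, K):
    """Simplified 1-D illustration of (19)-(21): recover K exponential
    frequencies from a (noise-free) Toeplitz covariance matrix T(s)."""
    U, _, _ = np.linalg.svd(T_s)
    Us = U[:, :K]                        # signal subspace (K leading singular vectors)
    Us1, Us2 = Us[:-1, :], Us[1:, :]     # W1*Us and W2*Us: drop last / first row
    Psi = np.linalg.pinv(Us1) @ Us2      # Eq. (20)
    return np.angle(np.linalg.eigvals(Psi)) / (2 * np.pi)   # normalized frequencies

def ls_amplitudes(A0, Dx):
    """sigma_x = (A0^H A0)^{-1} A0^H D_x, Eq. (21)."""
    return np.linalg.solve(A0.conj().T @ A0, A0.conj().T @ Dx)

# self-check with two synthetic exponentials at normalized frequencies 0.11, 0.27
n, freqs = 32, np.array([0.11, 0.27])
A = np.exp(1j * 2 * np.pi * np.outer(np.arange(n), freqs))
T = A @ A.conj().T / n                   # rank-2 Toeplitz covariance-like matrix
print(np.sort(esprit_from_toeplitz(T, 2)))   # -> approximately [0.11, 0.27]
```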

3.2. Target 3-D Imaging with Motion Parameters Estimated

According to (22), $\mathbf{F}_X$, $\mathbf{F}_Y$ and $\mathbf{F}_Z$ only represent the distributions of scattering points in the X-$\alpha$, Y-$\beta$ and Z-$\gamma$ planes, but the ordering of the points differs among these matrices. For example, the $k$-th point may be located on the $a$-th row in $\mathbf{F}_X$, but on the $b$-th row in $\mathbf{F}_Y$ and the $c$-th row in $\mathbf{F}_Z$, where $a$, $b$, $c$ differ because the row ordering is not necessarily one-to-one among $\mathbf{F}_X$, $\mathbf{F}_Y$ and $\mathbf{F}_Z$. So the true 3-D imaging result cannot be obtained unless the coordinates in $\mathbf{F}_X$, $\mathbf{F}_Y$ and $\mathbf{F}_Z$ are paired accurately. Based on this, a set is constructed by combining all of the X, Y and Z coordinates in (22), which surely contains the coordinates of the $K$ true scattering points.
$$ \begin{aligned} \boldsymbol{\Gamma} &= [\mathbf{g}_{1,1,1},\ldots,\mathbf{g}_{\varsigma,\varepsilon,\zeta},\ldots,\mathbf{g}_{K,K,K}] \\ \mathbf{g}_{\varsigma,\varepsilon,\zeta} &= \mathbf{a}_{x_\varsigma}\otimes\mathbf{a}_{y_\varepsilon}\otimes\mathbf{a}_{z_\zeta},\qquad \varsigma = 1,2,\ldots,K,\;\; \varepsilon = 1,2,\ldots,K,\;\; \zeta = 1,2,\ldots,K \end{aligned} \tag{23} $$
where
$$ \begin{aligned} \mathbf{a}_{x_\varsigma} &= \left[\exp\!\left(-j2\pi\frac{2P_X - r_X}{\lambda R_0}\,x_\varsigma\right)\;\cdots\;\exp\!\left(-j2\pi\frac{2P_X - r_X - (M-1)d_X}{\lambda R_0}\,x_\varsigma\right)\right]^{T} \\ \mathbf{a}_{y_\varepsilon} &= \left[\exp\!\left(-j2\pi\frac{2P_Y - r_Y}{\lambda R_0}\,y_\varepsilon\right)\;\cdots\;\exp\!\left(-j2\pi\frac{2P_Y - r_Y - (N-1)d_Y}{\lambda R_0}\,y_\varepsilon\right)\right]^{T} \\ \mathbf{a}_{z_\zeta} &= \left[\exp\!\left(-j2\pi\frac{2P_Z - r_Z}{\lambda R_0}\,z_\zeta\right)\;\cdots\;\exp\!\left(-j2\pi\frac{2P_Z - r_Z - (L-1)d_Z}{\lambda R_0}\,z_\zeta\right)\right]^{T} \end{aligned} \tag{24} $$
Then a set is defined as $\boldsymbol{\chi} = [\boldsymbol{\sigma}_{1,1,1},\ldots,\boldsymbol{\sigma}_{\varsigma,\varepsilon,\zeta},\ldots,\boldsymbol{\sigma}_{K,K,K}]$. For the elements of $\boldsymbol{\sigma}_{\varsigma,\varepsilon,\zeta}$, $\sigma_{\varsigma_0,\varepsilon_0,\zeta_0} = (\sigma_{x_\varsigma} + \sigma_{y_\varepsilon} + \sigma_{z_\zeta})/3$ only if $\varsigma_0 = \varsigma$, $\varepsilon_0 = \varepsilon$ and $\zeta_0 = \zeta$; otherwise $\sigma_{\varsigma_0,\varepsilon_0,\zeta_0} = 0$. Then the true positions can be obtained by the following optimization problem
$$ \min\;\bigl\|\mathbf{D} - \boldsymbol{\Gamma}\cdot\boldsymbol{\sigma}_{\varsigma,\varepsilon,\zeta}\bigr\|_{2}^{2},\qquad \boldsymbol{\sigma}_{\varsigma,\varepsilon,\zeta}\in\boldsymbol{\chi} \tag{25} $$
where $\mathbf{D} = [D_{1,1,1}(t_1),\ldots,D_{M,N,L}(t_1)]^{T}$. All the parameters in this problem are known, so it is actually a process of finding the $K$ minimum values from a finite set consisting of $K^3$ elements. Therefore, the 3-D imaging result is obtained
$$ \mathbf{T} = [\hat{\mathbf{X}}\;\;\hat{\mathbf{Y}}\;\;\hat{\mathbf{Z}}\;\;\hat{\boldsymbol{\sigma}}] \tag{26} $$
where $\mathbf{T}\in\mathbb{C}^{K\times 4}$, and each row of $\mathbf{T}$ contains the 3-D position and scattering coefficient of one scattering point. $\hat{\mathbf{X}} = \mathbf{J}_1\mathbf{X}$, $\hat{\mathbf{Y}} = \mathbf{J}_2\mathbf{Y}$ and $\hat{\mathbf{Z}} = \mathbf{J}_3\mathbf{Z}$, where $\mathbf{X} = [x_1\;x_2\;\cdots\;x_K]^{T}$, $\mathbf{Y} = [y_1\;y_2\;\cdots\;y_K]^{T}$, $\mathbf{Z} = [z_1\;z_2\;\cdots\;z_K]^{T}$, and $\mathbf{J}_1$, $\mathbf{J}_2$, $\mathbf{J}_3$ can be regarded as position selection matrices determined by the results of (25).
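A brute-force sketch of the pairing search in (23)-(25) is given below: every candidate triple of rows from $\mathbf{F}_X$, $\mathbf{F}_Y$, $\mathbf{F}_Z$ is scored by the residual its atom $\mathbf{g}_{\varsigma,\varepsilon,\zeta}$ leaves against one snapshot of the measured echo, and the $K$ best-matching triples are kept. The inputs (lists of the steering vectors of (24) and the scattering coefficients of (22)) and all names are assumptions for illustration.

```python
import numpy as np
from itertools import product

def pair_coordinates(D, ax, ay, az, amps, K):
    """Sketch of (23)-(25). ax/ay/az: lists of the K steering vectors a_x, a_y, a_z
    of Eq. (24); amps = (sigma_x, sigma_y, sigma_z) from Eq. (22); D: one snapshot.
    Returns the K row-index triples that pair F_X, F_Y and F_Z."""
    scores = []
    for i, j, l in product(range(K), repeat=3):
        g = np.kron(ax[i], np.kron(ay[j], az[l]))            # g_{i,j,l} in (23)
        amp = (amps[0][i] + amps[1][j] + amps[2][l]) / 3.0    # nonzero entry of sigma_{i,j,l}
        scores.append((np.linalg.norm(D - amp * g) ** 2, (i, j, l)))
    scores.sort(key=lambda t: t[0])
    return [idx for _, idx in scores[:K]]
```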
Nevertheless, the obtained results are not yet the most accurate, because the effects of the Doppler frequency shift have been ignored. In this paper, accurate motion estimation plays a significant role in position compensation and target identification. Thus, efficient estimation of the motion parameters is carried out according to the relationships among the parameters in (11) and (26), which gives
$$ \boldsymbol{\varphi} = \boldsymbol{\omega}\cdot\boldsymbol{\Phi},\qquad \boldsymbol{\varphi} = \bigl[\hat{\boldsymbol{\alpha}}^{T}\;\;\hat{\boldsymbol{\beta}}^{T}\;\;\hat{\boldsymbol{\gamma}}^{T}\bigr]\in\mathbb{C}^{1\times 3K},\qquad \boldsymbol{\omega} = \bigl[\omega_x\;\;\omega_y\;\;\omega_z\;\;V\bigr]\in\mathbb{C}^{1\times 4} $$
$$ \boldsymbol{\Phi} = \begin{bmatrix} \mathbf{0} & \hat{\mathbf{Z}}^{T} & -\hat{\mathbf{Y}}^{T} \\ -\hat{\mathbf{Z}}^{T} & \mathbf{0} & \hat{\mathbf{X}}^{T} \\ \hat{\mathbf{Y}}^{T} & -\hat{\mathbf{X}}^{T} & \mathbf{0} \\ \boldsymbol{\Theta}_1 & \boldsymbol{\Theta}_2 & \boldsymbol{\Theta}_3 \end{bmatrix}\in\mathbb{C}^{4\times 3K} \tag{27} $$
where $\hat{\boldsymbol{\alpha}} = \mathbf{J}_1\boldsymbol{\alpha}$, $\hat{\boldsymbol{\beta}} = \mathbf{J}_2\boldsymbol{\beta}$, $\hat{\boldsymbol{\gamma}} = \mathbf{J}_3\boldsymbol{\gamma}$, with $\boldsymbol{\alpha}$, $\boldsymbol{\beta}$, $\boldsymbol{\gamma}$ the second columns of $\mathbf{F}_X$, $\mathbf{F}_Y$, $\mathbf{F}_Z$, and
$$ \boldsymbol{\Theta}_1 = \Bigl[\tfrac{P_X}{R_0}\;\cdots\;\tfrac{P_X}{R_0}\Bigr]\in\mathbb{C}^{1\times K},\quad \boldsymbol{\Theta}_2 = \Bigl[\tfrac{P_Y}{R_0}\;\cdots\;\tfrac{P_Y}{R_0}\Bigr]\in\mathbb{C}^{1\times K},\quad \boldsymbol{\Theta}_3 = \Bigl[\tfrac{P_Z}{R_0}\;\cdots\;\tfrac{P_Z}{R_0}\Bigr]\in\mathbb{C}^{1\times K} \tag{28} $$
Therefore, the Least Squares (LS) method can be used directly to solve the problem in (27), and the motion parameter vector is estimated as
$$ \boldsymbol{\omega} = \boldsymbol{\varphi}\,\boldsymbol{\Phi}^{H}\bigl(\boldsymbol{\Phi}\boldsymbol{\Phi}^{H}\bigr)^{-1} \tag{29} $$
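Given paired coordinates and the corresponding $\hat{\boldsymbol{\alpha}}$, $\hat{\boldsymbol{\beta}}$, $\hat{\boldsymbol{\gamma}}$ values, the LS step in (27)-(29) reduces to a few lines; the sketch below follows the sign convention of the reconstructed $\boldsymbol{\Phi}$ above and uses assumed names.

```python
import numpy as np

def estimate_motion(alpha, beta, gamma, X, Y, Z, P, R0):
    """LS solution of (27)-(29): returns [omega_x, omega_y, omega_z, V].
    alpha/beta/gamma and X/Y/Z are the paired length-K vectors; P = (P_X, P_Y, P_Z)."""
    X, Y, Z = map(np.asarray, (X, Y, Z))
    alpha, beta, gamma = map(np.asarray, (alpha, beta, gamma))
    PX, PY, PZ = P
    K = len(X)
    zeros, ones = np.zeros(K), np.ones(K)
    # rows of Phi correspond to (omega_x, omega_y, omega_z, V), Eq. (27)
    Phi = np.vstack([
        np.concatenate([zeros,           Z,              -Y]),
        np.concatenate([-Z,              zeros,           X]),
        np.concatenate([Y,               -X,              zeros]),
        np.concatenate([ones * PX / R0,  ones * PY / R0,  ones * PZ / R0]),
    ])
    phi = np.concatenate([alpha, beta, gamma])
    return phi @ Phi.T @ np.linalg.inv(Phi @ Phi.T)          # Eq. (29)
```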

3.3. Pairing Correction

The estimation precision of the motion parameters is mostly determined by (27), so it must be guaranteed that the results of (22) and (26) are correct. However, model errors can lead to the failure of parameter estimation. After pairing, the coordinates of two points in one resolution cell may be disordered. For example, let $(x_a, y_a, z_a)$ and $(x_b, y_b, z_b)$ be the coordinates of points $a$ and $b$ in the same X resolution cell, with $y_a, z_a$ far from $y_b, z_b$ and $x_a$ close to $x_b$. In this case, it is hard to distinguish them in the pairing process because of the strong coherence between them, so the final pairing result may be $(x_b, y_a, z_a)$ or $(x_a, y_b, z_b)$, which breaks the structure of $\mathbf{J}_1$. As a result, the estimation of $\omega_y$ and $\omega_z$ in (29) becomes inaccurate because $\hat{\boldsymbol{\alpha}} = \mathbf{J}_1\boldsymbol{\alpha}$ in (27). Similarly, when the $y$ or $z$ coordinates are difficult to distinguish, the estimation of the motion parameters is also affected. Therefore, pairing correction is carried out in this section to remove this registration error and improve the accuracy of the method, particularly the estimation accuracy of the motion parameters.
First, the imaging resolution performance of the model is studied by analyzing the point spread function theoretically. Here we define the point spread function of the X model as
$$ P_{sf}(k,k_0) = \frac{1}{\kappa}\bigl\langle\mathbf{a}(x_k,\alpha_k),\,\mathbf{a}(x_{k_0},\alpha_{k_0})\bigr\rangle \tag{30} $$
where $\kappa$ is a normalization parameter that ensures the maximum value of $P_{sf}(k,k_0)$ is 1. Following this, we take the mathematical simplification
$$ \begin{aligned} P_{sf}(k,k_0) &= \frac{1}{\kappa}\left|\sum_{m=1}^{M}\sum_{q=1}^{Q}\exp\!\left[-j2\pi\frac{2P_X-(r_X+md_X)}{\lambda R_0}(x_k-x_{k_0})\right]\cdot\exp\!\left[\,j2\pi\frac{2T_p\cdot q}{\lambda Q}(\alpha_k-\alpha_{k_0})\right]\right| \\ &= \frac{1}{\kappa}\left|\frac{\sin\!\left(\pi\frac{M d_X}{\lambda R_0}(x_k-x_{k_0})\right)}{\sin\!\left(\pi\frac{d_X}{\lambda R_0}(x_k-x_{k_0})\right)}\right|\cdot\left|\frac{\sin\!\left(\frac{2\pi T_p}{\lambda}(\alpha_k-\alpha_{k_0})\right)}{\sin\!\left(\frac{2\pi T_p}{\lambda Q}(\alpha_k-\alpha_{k_0})\right)}\right| \\ &\approx \left|\operatorname{sinc}\!\left(\frac{M d_X}{\lambda R_0}(x_k-x_{k_0})\right)\right|\cdot\left|\operatorname{sinc}\!\left(\frac{2T_p}{\lambda}(\alpha_k-\alpha_{k_0})\right)\right| \end{aligned} \tag{31} $$
where $T_p$ is the pulse width and $t_q = qT_p/Q$. Then the limit resolution in the X-$\alpha$ plane is
$$ \rho_x = \frac{\lambda R_0}{M d_X},\qquad \rho_\alpha = \frac{\lambda}{2T_p} \tag{32} $$
that is, when the distance between two points in the X direction is less than $\rho_x$, their main lobes overlap and their X coordinates become difficult to distinguish. As a result, the estimation of the motion parameters becomes inaccurate.
In a similar way, the resolutions of the Y-$\beta$ and Z-$\gamma$ planes can also be obtained
$$ \rho_y = \frac{\lambda R_0}{N d_Y},\qquad \rho_\beta = \frac{\lambda}{2T_p},\qquad \rho_z = \frac{\lambda R_0}{L d_Z},\qquad \rho_\gamma = \frac{\lambda}{2T_p} \tag{33} $$
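As a worked example, plugging the Table 1 settings into (32)-(33) gives the resolution cells directly; the short script below simply evaluates the formulas.

```python
import numpy as np

# Resolution cells of (32)-(33) evaluated with the Table 1 settings:
# fc = 35 GHz, M = N = L = 10, dX = 3 m, dY = dZ = 4 m, Tp = 600 us,
# reference center at (5, 6, 7) km.  A worked example, not a new result.
lam = 3e8 / 35e9
R0 = np.linalg.norm([5e3, 6e3, 7e3])
M = N = L = 10
dX, dY, dZ = 3.0, 4.0, 4.0
Tp = 600e-6

rho_x = lam * R0 / (M * dX)          # ~3.0 m
rho_y = lam * R0 / (N * dY)          # ~2.2 m
rho_z = lam * R0 / (L * dZ)          # ~2.2 m
rho_alpha = lam / (2 * Tp)           # ~7.1 m/s, shared by rho_beta and rho_gamma
print(rho_x, rho_y, rho_z, rho_alpha)
```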
In the following, a coarse estimation is developed by judging and removing the points that fall in the same cell. First, we need to determine whether there are points that are difficult to distinguish. Taking the X dimension as an example, for any pair of points $a$ and $b$, a judgment condition is set as
$$ |x_a - x_b| \geq \rho_x \tag{34} $$
If the condition is not met, there is more than one point in a single cell, and these points are taken out of the pairing result. It is thus a process of repeatedly eliminating indistinguishable target points by pairing judgment; the process converges when all the remaining target points satisfy the condition in (34). Then, the parametric coarse estimation is performed with the target points satisfying (34)
$$ \boldsymbol{\varphi}' = \boldsymbol{\omega}\cdot\boldsymbol{\Phi}' \tag{35} $$
where $\boldsymbol{\varphi}'\in\mathbb{C}^{1\times 3K'}$, $\boldsymbol{\omega}\in\mathbb{C}^{1\times 4}$, $\boldsymbol{\Phi}'\in\mathbb{C}^{4\times 3K'}$, and $K'$ denotes the number of points satisfying (34).
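One simple way to realize the judgment in (34) is sketched below: a point is kept for the coarse estimation (35) only if its coordinate in the considered dimension is at least one resolution cell away from every other point. The paper describes this as an iterative elimination; a single pass against all points is used here for brevity, and the names are assumptions.

```python
import numpy as np

def pairing_judgment(coords, rho):
    """Sketch of the judgment in (34): keep only the points whose coordinate in
    this dimension is at least one resolution cell rho away from all others."""
    coords = np.asarray(coords, dtype=float)
    keep = []
    for i, ci in enumerate(coords):
        gaps = np.abs(coords - ci)
        gaps[i] = np.inf                 # ignore the point's distance to itself
        if gaps.min() >= rho:
            keep.append(i)
    return keep                          # indices used for the coarse estimation (35)

# points 0-2 fall inside one ~3 m X cell and are removed; only point 3 survives
print(pairing_judgment([0.0, 1.2, 1.4, 7.5], rho=3.0))   # -> [3]
```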
Through the pairing judgment and correction in (34) and (35), the coordinates of the points lying in the same cell can be distinguished according to the coarse estimate of $\boldsymbol{\omega}$. The position selection matrix $\mathbf{J}_1$ is then corrected, Equations (26) and (27) are updated, and Equation (29) is recomputed with the corrected results. In this way, the registration error is removed and high accuracy of localization and motion parameter estimation is guaranteed.
As a summary, the flow chart of the whole method is shown in Figure 2.
It is noted that the main computational load of the algorithm comes from three parts. The first is the solution of the atomic norm optimization problem by SDPT3 or ADMM [29]. The second comes from the singular value decomposition and eigenvalue decomposition when the ESPRIT algorithm is used to calculate the target positions, which is $O\bigl(2(M^3+N^3+L^3)Q^3 + 6K^3\bigr)$. The third comes from the pairing and motion parameter estimation process, which is $O(MNLK^3)$. Thus, the computational complexity of the algorithm is mainly affected by the numbers of antennas, samplings, and targets.
The method is applicable to many scenarios such as ships and airplanes, and accurate imaging can be realized in a stable application environment. However, the performance deteriorates once the application environment worsens, for example over a sea surface full of clutter and sea waves, which will be studied further in future research.

4. Simulation Results and Discussion

In this section, relevant simulation results are shown to verify that the proposed method can realize accurate imaging and motion estimation for a target with complex motions. In the simulations, a ship target with radial translation and 3-D rotations is considered. Scattering points are set on the ship hull as shown in Figure 3, and the scattering coefficients are all set to 1. Following this, imaging results and motion estimation results are shown in Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9.
First, to verify the feasibility of the proposed method, the ship target is imaged at two moments $t_A$ and $t_B$ with different motion parameters; the radar parameters are set as in Table 1 and the motion parameters at $t_A$ and $t_B$ are set as in Table 2.
Figure 4 shows the 3-D imaging results of the proposed method at SNR = 0 dB, and Figure 5 and Figure 6 show their 2-D projections. In these figures, accurate localization and imaging can be observed intuitively. The deviations between the true scattering points and the estimation results are small, and some points coincide in the figures where the estimates are very close to the real positions, which proves the feasibility and high accuracy of the proposed method. Moreover, Table 3 shows the motion estimation results at $t_A$ and $t_B$. The high estimation accuracy of the proposed method can be verified by comparing Table 3 with Table 2, which indicates that slight changes of target posture can be detected efficiently with the proposed imaging method.
Table 4, Table 5 and Table 6 show the time performance for different values of $M$, $Q$, and $K$, which represent the numbers of transmitters, samplings, and targets, respectively. For simplicity, the effects of the receivers are ignored because, according to the computational analysis, they are similar to those of the transmitters, so the receiver numbers $N$, $L$ are kept constant as given in Table 1. The remaining parameters also follow the settings in Table 1. As shown in the tables, the running time of the algorithm increases noticeably as $M$, $Q$, and $K$ increase, so the time efficiency decreases accordingly, which coincides with the computational analysis of the method.
Figure 7 shows the relationship between the imaging error and the SNR. In Figure 7a, the MIMO-ISAR method [21] and the modified OMP method [13] are taken for comparison. Our method has the smallest imaging errors in the figure, which verifies its higher accuracy compared with the other imaging methods. In Figure 7b, the SACR-iMAP method [25] and the S-TLS method [26] are simulated as references. These methods cannot further improve the imaging accuracy because they only focus on improving the 2-D resolution and the basis mismatch errors cannot be removed completely. In contrast, the proposed method has higher precision. On one hand, it benefits from the denoising performance of the SDP; on the other hand, the target positions are calculated directly without any spectral search process, which avoids basis mismatch.
Figure 8 shows the comparison of motion estimation accuracy, again with the MIMO-ISAR method and the modified OMP method as references. It can be seen that the proposed method has more precise motion estimation performance. In fact, the high precision is guaranteed by the gridless method and the pairing correction, where the former avoids basis mismatch and the latter removes registration errors.
In addition, to further show the performance of the proposed method under basis mismatch, Figure 9 presents the error curves of localization and motion estimation in more detail. 500 Monte Carlo experiments are performed with 6 random scattering points and random motion parameters, and the SNR is varied from −10 dB to 14 dB in 1 dB steps. The high precision of the method is efficiently illustrated by the results in Figure 9a,b.

5. Conclusions

In this paper, we have presented a 3-D MIMO radar imaging method with motion parameter estimation for targets with complex motions. The method reduces the processing difficulty by building joint 2-D parameter models. Efficient imaging and accurate motion parameter estimation are then guaranteed by the gridless method and the pairing correction process, which eliminate basis mismatch and remove registration errors. Simulation results show that the proposed method is well suited to accurate localization and imaging of targets with complex motions.

Author Contributions

Conceptualization, Z.H. and W.W.; Methodology, Z.H., W.W., and P.H.; Software, Z.H.; Validation, Z.H., W.W. and F.D.; Formal analysis, Z.H. and F.D.; Writing—original draft preparation, Z.H.; Writing—review and editing, Z.H., W.W., F.D. and P.H.

Funding

This work is supported by the National Natural Science Foundation of China (61571148, 61871143), the Fundamental Research Funds for the Central Universities (HEUCFG201823, 3072019CF0402), the Heilongjiang Natural Science Foundation (LH2019F006), and the Research and Development Project of Application Technology in Harbin (2017R-AQXJ095).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Appendix A.1. Proof of (5)

As stated above, $\Delta R$ represents the range deviation term in (4), so it can be further processed as
$$ \Delta R = T_m^k + R_{nl}^k - \bigl(T_m^P + R_{nl}^P\bigr) = \bigl(T_m^k - T_m^P\bigr) + \bigl(R_{nl}^k - R_{nl}^P\bigr) \tag{A1} $$
In this formula, the moduli of the vectors $\overrightarrow{T_m k}$, $\overrightarrow{R_{nl} k}$, $\overrightarrow{T_m P}$ and $\overrightarrow{R_{nl} P}$ are used to represent the corresponding range terms; we then obtain the following approximations
$$ \begin{aligned} T_m^k - T_m^P &= \bigl\|\overrightarrow{T_m P} + \overrightarrow{Pk}\bigr\| - \bigl\|\overrightarrow{T_m P}\bigr\| \approx \bigl(\overrightarrow{T_m P}\bigr)^{\circ}\cdot\overrightarrow{Pk} \\ R_{nl}^k - R_{nl}^P &= \bigl\|\overrightarrow{R_{nl} P} + \overrightarrow{Pk}\bigr\| - \bigl\|\overrightarrow{R_{nl} P}\bigr\| \approx \bigl(\overrightarrow{R_{nl} P}\bigr)^{\circ}\cdot\overrightarrow{Pk} \end{aligned} \tag{A2} $$
where $(\overrightarrow{T_m P})^{\circ}$ and $(\overrightarrow{R_{nl} P})^{\circ}$ respectively represent the unit direction vectors from $T_m$ and $R_{nl}$ to the center $P$; then Equation (A1) can be expressed as
$$ \Delta R \approx \left[\bigl(\overrightarrow{T_m P}\bigr)^{\circ} + \bigl(\overrightarrow{R_{nl} P}\bigr)^{\circ}\right]\cdot\overrightarrow{Pk} \tag{A3} $$

Appendix A.2. Proof of (10)

According to the rotation matrices, the time-varying characteristics of $\Delta x$, $\Delta y$ and $\Delta z$ in Equation (8) can be expressed as
$$ \begin{bmatrix} \Delta x(t) \\ \Delta y(t) \\ \Delta z(t) \end{bmatrix} = \boldsymbol{\Omega}\cdot\begin{bmatrix} x_k \\ y_k \\ z_k \end{bmatrix} \tag{A4} $$
where $\boldsymbol{\Omega} = \mathrm{roll}\bigl(\theta_r(t)\bigr)\cdot\mathrm{pitch}\bigl(\theta_p(t)\bigr)\cdot\mathrm{yaw}\bigl(\theta_y(t)\bigr) - \mathbf{I}_{3\times 3}$, so that we obtain
$$ \begin{aligned} x_k + \Delta x(t_q) &= x_k\cos\theta_p(t_q)\cos\theta_y(t_q) + y_k\cos\theta_p(t_q)\sin\theta_y(t_q) - z_k\sin\theta_p(t_q) \\ y_k + \Delta y(t_q) &= y_k\cos\theta_y(t_q)\cos\theta_r(t_q) + z_k\cos\theta_y(t_q)\sin\theta_r(t_q) - x_k\sin\theta_y(t_q) \\ z_k + \Delta z(t_q) &= z_k\cos\theta_r(t_q)\cos\theta_p(t_q) + x_k\cos\theta_r(t_q)\sin\theta_p(t_q) - y_k\sin\theta_r(t_q) \end{aligned} \tag{A5} $$
Because the rotational velocities are invariable over the short processing interval of MIMO radar, i.e., $\omega_x$, $\omega_y$, $\omega_z$ are all constant during the $Q$ samplings of one pulse, the instantaneous angle is $\theta_i(t_q) = \omega_i\cdot t_q$ $(i = r, p, y)$. Moreover, using $\sin\theta_i(t_q)\approx\theta_i(t_q) = \omega_i t_q$ and $\cos\theta_i(t_q)\approx 1 - \theta_i^2(t_q)/2 = 1 - \omega_i^2 t_q^2/2$, and ignoring higher-order terms, we obtain
$$ \begin{aligned} \Delta x(t_q) &= \omega_z\cdot y_k\cdot t_q - \omega_y\cdot z_k\cdot t_q \\ \Delta y(t_q) &= \omega_x\cdot z_k\cdot t_q - \omega_z\cdot x_k\cdot t_q \\ \Delta z(t_q) &= \omega_y\cdot x_k\cdot t_q - \omega_x\cdot y_k\cdot t_q \end{aligned} \tag{A6} $$
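A quick numerical check of (A6) against the exact rotation matrices in (9) can be done as follows; the matrix sign convention is the one reconstructed above, and taking $\omega_x$, $\omega_y$, $\omega_z$ as the roll, pitch and yaw rates of the $t_A$ case in Table 2 is our assumption.

```python
import numpy as np

# Numerical check of the small-angle result (A6) against the exact matrices in (9).
def roll(a):  return np.array([[1, 0, 0], [0, np.cos(a), np.sin(a)], [0, -np.sin(a), np.cos(a)]])
def pitch(a): return np.array([[np.cos(a), 0, -np.sin(a)], [0, 1, 0], [np.sin(a), 0, np.cos(a)]])
def yaw(a):   return np.array([[np.cos(a), np.sin(a), 0], [-np.sin(a), np.cos(a), 0], [0, 0, 1]])

wx, wy, wz = 0.2, 0.1, 0.3              # rad/s (roll, pitch, yaw rates, t_A case of Table 2)
tq = 1e-4                               # one sampling instant inside the pulse
r = np.array([2.0, -1.5, 0.5])          # scatterer offset (x_k, y_k, z_k) from P

exact = (roll(wx * tq) @ pitch(wy * tq) @ yaw(wz * tq) - np.eye(3)) @ r    # Eq. (A4)
approx = tq * np.array([wz * r[1] - wy * r[2],                             # Eq. (A6)
                        wx * r[2] - wz * r[0],
                        wy * r[0] - wx * r[1]])
print(np.max(np.abs(exact - approx)))   # second-order small for the short dwell time
```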

References

  1. Chen, V.; Li, F.; Ho, S.; Wechsler, H. Micro-Doppler effect in radar: phenomenon, model, and simulation study. IEEE Trans. Aerosp. Electron. Syst. 2006, 42, 2–21. [Google Scholar] [CrossRef]
  2. Reed, A.; Milgram, J. Ship wakes and their radar images. Annu. Rev. Fluid Mech. 2002, 34, 469–502. [Google Scholar] [CrossRef]
  3. Wang, S.; Wang, M.; Yang, S.; Jiao, L. New Hierarchical Saliency Filtering for Fast Ship Detection in High-Resolution SAR Images. IEEE Trans. Geosci. Remote Sens. 2016, 55, 351–362. [Google Scholar] [CrossRef]
  4. Liu, P.; Jin, Y. Simulation of Synthetic Aperture Radar Imaging of Dynamic Wakes of Submerged Body. IET Radar Sonar Navig. 2016, 11, 481–489. [Google Scholar] [CrossRef]
  5. Liu, P.; Jin, Y. A Study of Ship Rotation Effects on SAR Image. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3132–3144. [Google Scholar] [CrossRef]
  6. Li, Y.; Wu, R.; Xing, M.; Bao, Z. Inverse synthetic aperture radar imaging of ship target with complex motion. IET Radar Sonar Navig. 2008, 2, 395–403. [Google Scholar] [CrossRef]
  7. Wang, C.; Li, S.; Wang, Y. Inverse synthetic aperture radar imaging of ship targets with complex motion based on match Fourier transform for cubic chirps model. IET Radar Sonar Navig. 2013, 7, 994–1003. [Google Scholar] [CrossRef]
  8. Zheng, J.; Su, T.; Zhang, L.; Zhu, W.; Liu, Q.H. ISAR Imaging of Targets With Complex Motion Based on the Chirp Rate–Quadratic Chirp Rate Distribution. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7276–7289. [Google Scholar] [CrossRef]
  9. Xu, G.; Xing, M.D.; Zhang, L.; Duan, J.; Chen, Q.Q.; Bao, Z. Sparse Apertures ISAR Imaging and Scaling for Maneuvering Targets. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2014, 7, 2942–2956. [Google Scholar] [CrossRef]
  10. Xu, G.; Yang, L.; Bi, G.; Xing, M.D. Maneuvering target imaging and scaling by using sparse inverse synthetic aperture. Signal Process. 2017, 137, 149–159. [Google Scholar] [CrossRef]
  11. Zhang, S.Q.; Dong, G.G.; Kuang, G.Y. Superresolution Downward-Looking Linear Array Three-Dimensional SAR Imaging Based on Two-Dimensional Compressive Sensing. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2016, 9, 2184–2196. [Google Scholar] [CrossRef]
  12. Wang, Y.; Li, X. Three-Dimensional Interferometric ISAR Imaging for the Ship Target Under the Bi-Static Configuration. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2016, 9, 1505–1520. [Google Scholar] [CrossRef]
  13. Xu, G.; Xing, M.D.; Xia, X.G.; Zhang, L.; Chen, Q.; Bao, Z. 3D Geometry and Motion Estimations of Maneuvering Targets for Interferometric ISAR With Sparse Aperture. IEEE Trans. Image Process. 2016, 25, 2005–2020. [Google Scholar] [CrossRef]
  14. Wang, Y.; Chen, X. F. 3-D Interferometric Inverse Synthetic Aperture Radar Imaging of Ship Target With Complex Motion. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3693–3708. [Google Scholar] [CrossRef]
  15. Fishler, E.; Haimovich, A.; Blum, R. MIMO radar: an idea whose time has come. In Proceedings of the IEEE Radar Conference, Philadelphia, PA, USA, 23–27 May 2004; pp. 71–78. [Google Scholar]
  16. Haimovich, A.; Blum, R.; Cimini, L. MIMO radar with widely separated antennas. IEEE Signal Process. Mag. 2008, 25, 116–129. [Google Scholar] [CrossRef]
  17. Ma, C.; Yeo, T.; Tan, C. Three-dimensional imaging of targets using collocated MIMO radar. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3009–3021. [Google Scholar] [CrossRef]
  18. Wang, D.; Chen, G.; Wu, N.; Ma, X. Y. Efficient target identification for MIMO high-resolution imaging radar via plane-rotation-invariant feature. In Proceedings of the IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), Ajman, UAE, 14–17 December 2009; pp. 350–354. [Google Scholar]
  19. Luo, Y.; He, J.; Liang, X.; Zhang, Q. Three-dimensional micro-doppler signature extraction in MIMO radar. In Proceedings of the International Conference on Signal Processing System, Dalian, China, 5–7 July 2010. [Google Scholar]
  20. Li, X.; Tian, B.; Feng, C.; Xu, X. Extration Micro-Doppler Feature of Precession Targets in OFDM Linear Frequence Modulation MIMO radar. In Proceedings of the 5th International Conference on Systems and Informatics (ICSAI), Nanjing, China, 10–12 November 2018; pp. 883–888. [Google Scholar]
  21. Ma, C.; Yeo, T.S.; Tan, C.S.; Li, J.; Shang, Y. Three-Dimensional Imaging Using Colocated MIMO Radar and ISAR Technique. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3189–3201. [Google Scholar] [CrossRef]
  22. Zhao, G.; Zhuang, Z.; Nie, L.; Fu, Y. Imaging and micro-Doppler analysis of vibrating target in multi-input–multi-output synthetic aperture radar. IET Radar Sonar Navig. 2015, 9, 1360–1365. [Google Scholar] [CrossRef]
  23. Pastina, D.; Santi, F.; Brucciarelli, M. MIMO Distributed Imaging of Rotating Targets for Improved 2-D Resolution. IEEE Geosci. Remote Sens. Lett. 2015, 12, 190–194. [Google Scholar] [CrossRef]
  24. Martinez, J.; Thurn, K.; Vossiek, M. MIMO radar for supporting automated rendezvous maneuvers with non-cooperative satellites. In Proceedings of the IEEE Radar Conference, Seattle, WA, USA, 8–12 May 2017; pp. 0497–0501. [Google Scholar]
  25. He, X.; Liu, C.; Liu, B.; Wang, D.J. Sparse Frequency Diverse MIMO Radar Imaging for Off-Grid Target Based on Adaptive Iterative MAP. Remote Sens. 2013, 5, 631–647. [Google Scholar] [CrossRef] [Green Version]
  26. Zhu, H.; Leus, G.; Giannakis, G.B. Sparsity-cognizant total least-squares for perturbed compressive sampling. IEEE Trans. Signal Process. 2011, 59, 2002–2016. [Google Scholar] [CrossRef]
  27. Tang, G.; Bhaskar, B. N.; Shah, P.; Recht, B. Compressed Sensing Off the Grid. IEEE Trans. Inf. Theory 2013, 59, 7465–7490. [Google Scholar] [CrossRef] [Green Version]
  28. Xiao, H.; Yang, L.; Zhou, J. Super-resolution radar imaging using fast continuous compressed sensing. Electron. Lett. 2015, 51, 2043–2045. [Google Scholar]
  29. Hu, X.; Tong, N.; Zhang, Y.; He, X.; Wang, Y. Moving Target’s HRRP Synthesis With Sparse Frequency-Stepped Chirp Signal via Atomic Norm Minimization. IEEE Signal Process. Lett. 2016, 23, 1212–1215. [Google Scholar] [CrossRef]
  30. Yang, Z.; Xie, L.; Stoica, P. Vandermonde Decomposition of Multilevel Toeplitz Matrices With Application to Multidimensional Super-Resolution. IEEE Trans. Inf. Theory 2016, 62, 3685–3701. [Google Scholar] [CrossRef]
Figure 1. Geometry of MIMO radar array.
Figure 2. The flow chart of the proposed method.
Figure 3. Distribution of scattering points on the ship hull.
Figure 4. 3-D MIMO radar imaging results. (a) Imaging result at $t_A$; (b) imaging result at $t_B$.
Figure 5. 2-D projections of the MIMO radar 3-D imaging results at $t_A$. (a) Projection in the XY plane; (b) projection in the XZ plane; (c) projection in the YZ plane.
Figure 6. 2-D projections of the MIMO radar 3-D imaging results at $t_B$. (a) Projection in the XY plane; (b) projection in the XZ plane; (c) projection in the YZ plane.
Figure 7. Error performance of 3-D imaging. (a) Comparison with the MIMO-ISAR method and the modified OMP method; (b) comparison with the SACR-iMAP method and the S-TLS method.
Figure 8. Error performance of the motion estimation results. (a) Comparison of roll angular velocity; (b) comparison of pitch angular velocity; (c) comparison of yaw angular velocity.
Figure 9. Error performance of the proposed method over 500 Monte Carlo experiments. (a) Error curve of 3-D localization; (b) error curve of motion parameter estimation.
Table 1. Parameters for MIMO radar imaging.

Parameters                                               Values
Transmitting elements number                             10
Receiving elements number                                10 × 10
Internal spacing of transmitting array                   3 m
Internal spacing in row and column of receiving array    4 m
Coordinate of $T_0$                                      (1 m, 0 m, 0 m)
Coordinate of $R_{00}$                                   (0 m, 0.5 m, 0.5 m)
Carrier frequency                                        35 GHz
Sampling times                                           30
Pulse width                                              600 µs
X distance                                               5 km
Y distance                                               6 km
Z distance                                               7 km
Table 2. Motion parameters for the target at different times.

Motion parameter                                  Value at $t_A$    Value at $t_B$
Radial translation velocity (m/s)                 4.8               8.7
Angular velocity of pitch rotation (rad/s)        0.1               0.3
Angular velocity of roll rotation (rad/s)         0.2               0.4
Angular velocity of yaw rotation (rad/s)          0.3               0.5
Table 3. Motion parameter estimation results.

Motion parameter                                  Estimate at $t_A$    Estimate at $t_B$
Radial translation velocity (m/s)                 4.8252               8.6541
Angular velocity of pitch rotation (rad/s)        0.1136               0.2879
Angular velocity of roll rotation (rad/s)         0.2087               0.3897
Angular velocity of yaw rotation (rad/s)          0.3201               0.5221
Table 4. Time performance with different numbers of transmitters.

Number of transmitters    5      10     15     20     25     30
Running time (s)          0.98   1.07   1.21   1.69   2.36   3.30
Table 5. Time performance with different numbers of samplings.

Number of samplings       5      10     15     20     25     30
Running time (s)          0.84   0.96   1.17   1.50   2.19   2.98
Table 6. Time performance with different numbers of targets.

Number of targets    1      2      3      4      5      6      7      8      9      10
Running time (s)     0.72   0.80   0.84   0.90   0.94   1.02   1.05   1.14   1.30   1.52
