Technical Note

Compressed Sensing Imaging with Compensation of Motion Errors for MIMO Radar

Haoran Li, Shuangxun Li, Zhi Li, Yongpeng Dai and Tian Jin
1 College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China
2 Communication NCO Academy, Army Engineering University of PLA, Chongqing 400035, China
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(23), 4909; https://doi.org/10.3390/rs13234909
Submission received: 29 September 2021 / Revised: 1 December 2021 / Accepted: 1 December 2021 / Published: 3 December 2021

Abstract

Using a multiple-input multiple-output (MIMO) radar for environment sensing is gaining attention in unmanned ground vehicles (UGVs). As the UGV moves, the positions of the MIMO array elements inevitably deviate from their ideal imaging positions. Although compressed sensing (CS) imaging can provide high-resolution results and reduce system complexity, inaccurate MIMO array element positions lead to defocused images. In this paper, a method is proposed to realize MIMO array motion error compensation and sparse imaging simultaneously. It utilizes a block coordinate descent (BCD) scheme that iteratively estimates the motion errors of the transmitting and receiving elements while synchronously achieving autofocus imaging. The method estimates and compensates the motion errors of the transmitters and receivers directly, rather than approximating them as phase errors in the data. The validity of the proposed method is verified by simulations and by measured experiments in a smoky environment.

1. Introduction

The application of unmanned ground vehicles (UGVs) enables operations in areas inaccessible to humans due to chemical, biological, thermal, and other environmental hazards [1]. MIMO radar becomes an alternative to a full array when only a small number of array elements can be used while still meeting the demands of high azimuth and elevation resolution [2]. For an antenna of a predetermined size, a MIMO array is often applied to accomplish rapid imaging and to cut down the number of array elements, economizing on hardware cost [3,4,5,6,7]. Therefore, advanced UGVs are equipped with various types of MIMO radar [8].
In [9], a method called radar coincidence imaging is studied, which increases the diversity of the radiated waveforms. Time-reversal imaging has been applied to MIMO radar imaging problems [10,11,12]. In [13], an imaging method combining range migration and back projection (BP) is proposed for arbitrary scanning paths; however, the azimuth resolution is limited by the length of the receiving array. In [14,15], different spectral estimation algorithms are used to enhance azimuth resolution and suppress sidelobes. The theory of compressed sensing (CS) [16] makes it possible to solve the underdetermined problem. In [17], a segmented random sparse method based on CS is presented to ensure the accuracy of 3-D reconstruction. CS has also been introduced to radar-related applications such as ground-penetrating radar [18], through-the-wall imaging [19], and inverse SAR (ISAR) [20].
A basic difficulty in MIMO radar imaging is imperfect knowledge of the real position of the array. Providing real-time, accurate vehicle posture information is one of the key technologies for achieving conditional and even highly autonomous driving [21]. During the movement of the UGV, unknown road conditions, such as the road inclination angle, tire-road friction coefficient, and road slope angle [22], lead to motion errors of the MIMO array. In a real environment, inertial navigation systems (INS) or Global Positioning System (GPS) circuits generally provide reasonably accurate, but not sufficiently precise, positions. The residual uncertainty can be resolved by data-driven autofocus algorithms [23,24,25].
An extensive body of literature addresses the radar autofocus problem by estimating a surrogate collection of phase errors in the measured signal instead of the position errors [26,27,28,29,30]. In [31], an autofocus method is proposed for compressively sampled SAR. In [32,33], autofocus techniques are proposed to correct the phase errors. In [34], CS imaging with compensation of the observation position error is proposed to reconstruct the image and correct the errors in the SAR configuration. A joint sparsity-based imaging and motion error estimation algorithm is utilized to obtain focused images [35]. A blind deconvolution method is proposed to acquire autofocused images from observations subject to a position error [36]; however, it only handles the case where all antennas are influenced by an identical position error. Table 1 summarizes the categories of methods used to solve autofocus problems.
We propose a method to compensate for the motion errors of the MIMO radar array in CS imaging. The problem is modeled as an optimization in which the cost function involves the motion errors of the transmitters and receivers as well as the reflectivity coefficients of the targets. The main contributions are as follows:
(1) We analyze the essential relationship between the array motion errors and CS imaging. The proposed method estimates the MIMO array motion errors and reconstructs the image without any approximation.
(2) The optimization problem is solved by a BCD method, which cycles through steps of target reconstruction and MIMO array motion error estimation and compensation. The motion errors of the transmitters and receivers are estimated by gradient-based optimization algorithms.
(3) Based on the accurate estimation of the motion errors, super-resolution imaging can be achieved. Compared with optical sensors, the radar provides better environmental perception in special circumstances such as smoke-filled scenes.
This paper consists of five sections. Section 2 introduces the proposed method: the geometry and signal models for MIMO radar imaging are described in Section 2.1, and CS imaging with motion error compensation, together with its computational complexity, is presented in Section 2.2. Section 3 presents simulation and experimental results. Section 4 provides the discussion, and Section 5 concludes the paper.
Throughout the text, lowercase boldface letters such as $\mathbf{y}$ denote vectors and uppercase boldface letters such as $\mathbf{A}$ denote matrices. Superscripts $T$ and $H$ refer to the transpose and the Hermitian transpose of a matrix, respectively. The $\ell_1$ norm of a vector $\mathbf{d}$ is the sum of the absolute values of its entries, i.e., $\|\mathbf{d}\|_1 = \sum_i |d(i)|$. The $\ell_2$ norm of a vector $\mathbf{d}$ is the square root of the sum of its squared magnitudes, i.e., $\|\mathbf{d}\|_2 = \sqrt{\sum_i |d(i)|^2}$.

2. Materials and Methods

2.1. MIMO Radar Imaging Model

2.1.1. Geometry Model

Consider a MIMO array with $M$ transmitters and $N$ receivers. Figure 1 illustrates the MIMO array mounted on the UGV, where $x$ is the azimuth direction and $y$ is the forward direction. Suppose the ideal position vector of the $m$th transmitting element is $\bar{\mathbf{p}}_{t_m} = [x_{t_m}, y_{t_m}, z_{t_m}]^T$ and that of the $n$th receiving element is $\bar{\mathbf{p}}_{r_n} = [x_{r_n}, y_{r_n}, z_{r_n}]^T$, where $x_{t_m}, y_{t_m}, z_{t_m}, x_{r_n}, y_{r_n}, z_{r_n}$ are coordinates in the Cartesian coordinate system. The real-time position vectors of the $m$th transmitting element and the $n$th receiving element are denoted as $\mathbf{p}_{t_m}$ and $\mathbf{p}_{r_n}$, respectively:

$$\begin{cases} \mathbf{p}_{t_m} = \bar{\mathbf{p}}_{t_m} + \mathbf{e}_{t_m} \\ \mathbf{p}_{r_n} = \bar{\mathbf{p}}_{r_n} + \mathbf{e}_{r_n} \end{cases} \quad (1)$$

where $\mathbf{e}_{t_m} = [\Delta x_{t_m}, \Delta y_{t_m}, \Delta z_{t_m}]^T$ and $\mathbf{e}_{r_n} = [\Delta x_{r_n}, \Delta y_{r_n}, \Delta z_{r_n}]^T$ denote the real motion error vectors of the $m$th transmitting element and the $n$th receiving element, respectively. Let the position vector of the $k$th target be $\mathbf{p}_k = [x_k, y_k, z_k]^T$. The instantaneous two-way range of target $\mathbf{p}_k$ for the $m$th transmitting element and the $n$th receiving element can be expressed as

$$R_{tr}(m,n,k) = R_t(m,k) + R_r(n,k) \quad (2)$$

where $R_t(m,k)$ and $R_r(n,k)$ denote the instantaneous real ranges from the $m$th transmitting element to target $\mathbf{p}_k$ and from target $\mathbf{p}_k$ to the $n$th receiving element, respectively:

$$R_t(m,k) = \left\| \mathbf{p}_{t_m} - \mathbf{p}_k \right\|_2 \quad (3)$$

$$R_r(n,k) = \left\| \mathbf{p}_{r_n} - \mathbf{p}_k \right\|_2 \quad (4)$$

The hypothetical two-way range for target $\mathbf{p}_k$ without any motion errors is

$$\bar{R}_{tr}(m,n,k) = \bar{R}_t(m,k) + \bar{R}_r(n,k) \quad (5)$$

where $\bar{R}_t(m,k)$ and $\bar{R}_r(n,k)$ denote the ideal ranges from the $m$th transmitting element to target $\mathbf{p}_k$ and from target $\mathbf{p}_k$ to the $n$th receiving element, respectively:

$$\bar{R}_t(m,k) = R_t(m,k)\big|_{\mathbf{e}_{t_m} = [0,0,0]^T} \quad (6)$$

$$\bar{R}_r(n,k) = R_r(n,k)\big|_{\mathbf{e}_{r_n} = [0,0,0]^T} \quad (7)$$
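As an illustration of (1)-(7), the following minimal Python/NumPy sketch evaluates the real and ideal two-way ranges; the function and variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def two_way_range(pt_ideal, pr_ideal, pk, et=None, er=None):
    """Eqs. (2)-(4): R_tr = ||p_tm - p_k||_2 + ||p_rn - p_k||_2.
    With et = er = 0 this reduces to the ideal range of Eqs. (5)-(7)."""
    et = np.zeros(3) if et is None else np.asarray(et, float)
    er = np.zeros(3) if er is None else np.asarray(er, float)
    pt = np.asarray(pt_ideal, float) + et   # Eq. (1): real Tx position
    pr = np.asarray(pr_ideal, float) + er   # Eq. (1): real Rx position
    return np.linalg.norm(pt - pk) + np.linalg.norm(pr - pk)

# Example: a wavelength/8 error in x (0.0125 m at 3 GHz) perturbs the range.
pt0, pr0, pk = np.array([0.0, 0, 0]), np.array([0.1, 0, 0]), np.array([0.0, 4, 0])
print(two_way_range(pt0, pr0, pk))                       # ideal range
print(two_way_range(pt0, pr0, pk, et=[0.0125, 0, 0]))    # range with error
```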

2.1.2. Signal Model

Assume that a stepped-frequency waveform is transmitted by the MIMO radar. The data received at the $n$th receiving element from the $m$th transmitting element can be expressed as

$$y(m,n,l) = \iiint_G d(x,y,z)\, \exp\left[\, j 2\pi f_l R_{tr}(m,n,x,y,z)/c \right] dx\, dy\, dz \quad (8)$$

where $x$, $y$, and $z$ are the coordinates of the target; $d(x,y,z)$ is the reflectivity coefficient of the target at $(x,y,z)$; $R_{tr}(m,n,x,y,z)$ is the two-way range of the target at $(x,y,z)$ for the $m$th transmitter and the $n$th receiver; $f_l$ is the value of the $l$th frequency; $c$ is the speed of light; and $G$ is the region illuminated by the beam.
Based on (8), the discrete expression of the echo of the $n$th receiving element from the $m$th transmitting element is

$$y(m,n,l) = \sum_{k=1}^{K} d(k)\, \exp\left[\, j 2\pi f_l R_{tr}(m,n,k)/c \right] \quad (9)$$

where $K$ is the total number of grid points after discretization of the scene, $d(k)$ is the reflectivity coefficient of the $k$th point, and $R_{tr}(m,n,k)$ is the two-way range of the $k$th point for the $m$th transmitter and the $n$th receiver, as given in (2).
Equation (9) can be expressed in matrix form as

$$\mathbf{y} = \mathbf{A}\,\mathbf{d} \quad (10)$$

where $\mathbf{y}$ is an $MNL \times 1$ signal vector, $\mathbf{A}$ is an $MNL \times K$ measurement matrix, and $\mathbf{d}$ is a $K \times 1$ target vector; here $L$ is the total number of frequencies, $N$ the total number of receivers, and $M$ the total number of transmitters. The vector/matrix terms in (10) are

$$\mathbf{y} = \left[\, y(1,1,1), \ldots, y(M,1,1), y(1,2,1), \ldots, y(M,N,1), y(1,1,2), \ldots, y(M,N,L) \,\right]^T \quad (11)$$

$$\mathbf{A} = \begin{bmatrix} A[R_{tr}(1,1,1),f_1] & A[R_{tr}(1,1,2),f_1] & \cdots & A[R_{tr}(1,1,K),f_1] \\ \vdots & \vdots & & \vdots \\ A[R_{tr}(M,N,1),f_1] & A[R_{tr}(M,N,2),f_1] & \cdots & A[R_{tr}(M,N,K),f_1] \\ \vdots & \vdots & & \vdots \\ A[R_{tr}(M,N,1),f_L] & A[R_{tr}(M,N,2),f_L] & \cdots & A[R_{tr}(M,N,K),f_L] \end{bmatrix} \quad (12)$$

$$A[R_{tr}(m,n,k),f_l] = \exp\left[\, j 2\pi f_l R_{tr}(m,n,k)/c \right] \quad (13)$$

$$\mathbf{d} = \left[\, d(1), d(2), \ldots, d(K) \,\right]^T \quad (14)$$
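For concreteness, a small sketch of the discrete model (9)-(14) follows; the array layout, imaging grid, and frequency plan are illustrative assumptions, and only the row ordering shared by y and A matters.

```python
import numpy as np

c = 3e8
f = 3e9 + 4e6 * np.arange(64)                  # L stepped frequencies (Hz)
tx = np.c_[np.linspace(-0.35, 0.35, 8), np.zeros(8), np.zeros(8)]   # M x 3
rx = np.c_[np.linspace(-0.30, 0.30, 8), np.zeros(8), np.zeros(8)]   # N x 3
gx, gy = np.meshgrid(np.linspace(-1, 1, 20), np.linspace(3, 5, 20))
grid = np.c_[gx.ravel(), gy.ravel(), np.zeros(gx.size)]             # K x 3

# Two-way ranges R_tr(m,n,k) for every (m,n) pair and grid point, Eq. (2)
Rt = np.linalg.norm(tx[:, None, :] - grid[None, :, :], axis=-1)     # M x K
Rr = np.linalg.norm(rx[:, None, :] - grid[None, :, :], axis=-1)     # N x K
Rtr = (Rt[:, None, :] + Rr[None, :, :]).reshape(-1, len(grid))      # MN x K

# Entries A[R, f_l] = exp(j 2 pi f_l R / c), Eq. (13), stacked over l as in (12)
A = np.exp(1j * 2 * np.pi * f[:, None, None] * Rtr[None, :, :] / c)
A = A.reshape(-1, len(grid))                   # (MNL) x K measurement matrix

d = np.zeros(len(grid), complex)
d[190] = 1.0                                   # a single point target
y = A @ d                                      # noiseless echo, Eq. (10)
```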

2.2. CS Imaging with Motion Errors Compensation

In this section, CS imaging is achieved by jointly estimating the radar cross section (RCS) information and the motion errors. As depicted in (10), $\mathbf{y}$ is the received signal and $\mathbf{A}$ is the measurement matrix. Since the positions of the MIMO array elements cannot be acquired precisely, $\mathbf{A}$ often involves errors, which degrade the reconstruction of the targets $\mathbf{d}$.
To account for the inaccurate positions of the MIMO array, we denote $\mathbf{A}$ as a function of the errors, i.e., $\mathbf{A} = \mathbf{A}(\mathbf{e}_t, \mathbf{e}_r)$, where $\mathbf{e}_t$ and $\mathbf{e}_r$ denote the transmitter and receiver motion errors, respectively:

$$\mathbf{e}_t = \left[\, \Delta x_{t_1}, \Delta y_{t_1}, \Delta z_{t_1}, \ldots, \Delta x_{t_M}, \Delta y_{t_M}, \Delta z_{t_M} \,\right]^T \quad (15)$$

$$\mathbf{e}_r = \left[\, \Delta x_{r_1}, \Delta y_{r_1}, \Delta z_{r_1}, \ldots, \Delta x_{r_N}, \Delta y_{r_N}, \Delta z_{r_N} \,\right]^T \quad (16)$$
The model in (10) can then be modified to

$$\mathbf{y} = \mathbf{A}(\mathbf{e}_t, \mathbf{e}_r)\,\mathbf{d} \quad (17)$$

Considering the array motion errors, we must estimate them in addition to performing the imaging. We express the joint imaging and motion error estimation as the minimization of the cost function

$$J(\mathbf{d}, \mathbf{e}_t, \mathbf{e}_r) = \left\| \mathbf{y} - \mathbf{A}(\mathbf{e}_t, \mathbf{e}_r)\,\mathbf{d} \right\|_2^2 + \lambda \left\| \mathbf{d} \right\|_1 \quad (18)$$
where λ is the regularization parameter, which balances the imaging fidelity and the sparsity of the solution.
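A one-function sketch of the cost (18) reads as follows; A_of is a stand-in (an assumption, not a fixed API) for rebuilding the measurement matrix from the current error estimates.

```python
import numpy as np

def cost_J(y, A_of, d, e_t, e_r, lam):
    """Eq. (18): J = ||y - A(e_t, e_r) d||_2^2 + lam * ||d||_1."""
    r = y - A_of(e_t, e_r) @ d                 # data residual
    return np.vdot(r, r).real + lam * np.abs(d).sum()
```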
Because of the difference in propagation paths, the method proposed in [30] for SAR cannot accurately estimate the array motion errors in the MIMO case. A BCD method is exploited to solve (18), which cycles through steps of target reconstruction and array motion error estimation and compensation. The algorithm is summarized below, and Figure 2 shows its flow chart.
Algorithm 1 Compressed Sensing Imaging with Compensation of Motion Errors for MIMO Radar
Initialize: $i = 0$, $(\mathbf{e}_t)^0 = \mathbf{0}$, $(\mathbf{e}_r)^0 = \mathbf{0}$
Step 1: $\mathbf{d}^{i+1} = \arg\min_{\mathbf{d}}\, J[\mathbf{d}, (\mathbf{e}_t)^i, (\mathbf{e}_r)^i]$
Step 2: $(\mathbf{e}_t)^{i+1} = \arg\min_{\mathbf{e}_t}\, J[\mathbf{d}^{i+1}, \mathbf{e}_t, (\mathbf{e}_r)^i]$
Step 3: $(\mathbf{e}_r)^{i+1} = \arg\min_{\mathbf{e}_r}\, J[\mathbf{d}^{i+1}, (\mathbf{e}_t)^{i+1}, \mathbf{e}_r]$
Step 4: Let $i = i + 1$ and return to Step 1. Terminate when $e = \|\mathbf{d}^{i+1} - \mathbf{d}^i\|_2 / \|\mathbf{d}^i\|_2$ falls below a preset threshold.
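A skeleton of Algorithm 1 as a BCD loop is sketched below; reconstruct, estimate_tx_errors, and estimate_rx_errors are placeholder callables standing in for Sections 2.2.1 and 2.2.2, not the authors' implementation.

```python
import numpy as np

def bcd_autofocus(y, A_of, reconstruct, estimate_tx_errors, estimate_rx_errors,
                  M, N, tol=1e-3, max_iter=50):
    e_t, e_r = np.zeros(3 * M), np.zeros(3 * N)   # (e_t)^0 = 0, (e_r)^0 = 0
    d_prev = None
    for _ in range(max_iter):
        d = reconstruct(y, A_of(e_t, e_r))                 # Step 1 (OMP)
        e_t = estimate_tx_errors(y, A_of, d, e_t, e_r)     # Step 2
        e_r = estimate_rx_errors(y, A_of, d, e_t, e_r)     # Step 3
        if d_prev is not None and (
                np.linalg.norm(d - d_prev) / np.linalg.norm(d_prev) < tol):
            break                                  # termination criterion e
        d_prev = d                                 # Step 4: next iteration
    return d, e_t, e_r
```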

2.2.1. Target Reconstruction

In Step 1, the targets are reconstructed with the given MIMO array motion errors:

$$\mathbf{d}^{i+1} = \arg\min_{\mathbf{d}}\, J[\mathbf{d}, (\mathbf{e}_t)^i, (\mathbf{e}_r)^i] = \arg\min_{\mathbf{d}} \left\{ \left\| \mathbf{y} - \mathbf{A}[(\mathbf{e}_t)^i, (\mathbf{e}_r)^i]\,\mathbf{d} \right\|_2^2 + \lambda \left\| \mathbf{d} \right\|_1 \right\} \quad (19)$$
This type of problem can be solved by sparse recovery approaches such as orthogonal matching pursuit (OMP) [37] or matching pursuit. We utilize OMP for the reconstruction, since it can be applied without knowing the magnitude of the data error in advance.
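Since common library OMP routines are real-valued, a minimal complex-valued OMP in the spirit of [37] can be sketched as follows (an assumed helper, not the authors' code):

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Greedy sparse recovery: pick the atom best correlated with the
    residual, then least-squares refit on the selected support."""
    An = A / np.linalg.norm(A, axis=0)            # normalized dictionary
    resid, support = y.copy(), []
    for _ in range(n_nonzero):
        k = int(np.argmax(np.abs(An.conj().T @ resid)))
        if k not in support:
            support.append(k)
        x, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ x
    d = np.zeros(A.shape[1], complex)
    d[support] = x
    return d
```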

2.2.2. MIMO Array Motion Errors Estimation

In Step 2, given the receiver motion errors $(\mathbf{e}_r)^i$ estimated in the $i$th iteration and the reflectivity coefficient vector $\mathbf{d}^{i+1}$, the optimization problem is

$$(\mathbf{e}_t)^{i+1} = \arg\min_{\mathbf{e}_t}\, J[\mathbf{d}^{i+1}, \mathbf{e}_t, (\mathbf{e}_r)^i] = \arg\min_{\mathbf{e}_t} \left\{ \left\| \mathbf{y} - \mathbf{A}(\mathbf{e}_t, (\mathbf{e}_r)^i)\,\mathbf{d}^{i+1} \right\|_2^2 + \lambda \left\| \mathbf{d}^{i+1} \right\|_1 \right\} \quad (20)$$

Since $\lambda \|\mathbf{d}^{i+1}\|_1$ is a constant with respect to $\mathbf{e}_t$, (20) can be rewritten as

$$(\mathbf{e}_t)^{i+1} = \arg\min_{\mathbf{e}_t} \left\| \mathbf{y} - \mathbf{A}(\mathbf{e}_t, (\mathbf{e}_r)^i)\,\mathbf{d}^{i+1} \right\|_2^2 \quad (21)$$

We denote the cost function by $H^{i+1}(\mathbf{e}_t)$:

$$H^{i+1}(\mathbf{e}_t) = \left\| \mathbf{y} - \mathbf{A}(\mathbf{e}_t, (\mathbf{e}_r)^i)\,\mathbf{d}^{i+1} \right\|_2^2 = \sum_{m=1}^{M} \sum_{n=1}^{N} \sum_{l=1}^{L} \left| y(m,n,f_l) - \sum_{k=1}^{K} A[R_{tr}(m,n,k),f_l]\, d_k^{i+1} \right|^2 \quad (22)$$

where $d_k^{i+1}$ is the $k$th element of $\mathbf{d}^{i+1}$.
In (22), $H^{i+1}(\mathbf{e}_t)$ comprises $MN$ subprocesses. In the $mn$th subprocess, the cost function depends only on the motion errors of the $m$th transmitter, $\mathbf{e}_{t_m} = [\Delta x_{t_m}, \Delta y_{t_m}, \Delta z_{t_m}]^T$, since the estimated receiver motion errors $(\mathbf{e}_r)^i$ are given. Therefore, letting $H_{mn}^{i+1}(\mathbf{e}_{t_m})$ denote the $mn$th subprocess,

$$H_{mn}^{i+1}(\mathbf{e}_{t_m}) = \sum_{l=1}^{L} \left| y(m,n,f_l) - \sum_{k=1}^{K} A[R_{tr}(m,n,k),f_l]\, d_k^{i+1} \right|^2 \quad (23)$$
To solve (23), we use a gradient descent method, which relies on the gradient being computable explicitly. We derive the gradient of $H_{mn}^{i+1}(\mathbf{e}_{t_m})$ with respect to $\mathbf{e}_{t_m}$:

$$\frac{\partial H_{mn}^{i+1}(\mathbf{e}_{t_m})}{\partial \mathbf{e}_{t_m}} = \left[ \frac{\partial H_{mn}^{i+1}(\mathbf{e}_{t_m})}{\partial \Delta x_{t_m}},\; \frac{\partial H_{mn}^{i+1}(\mathbf{e}_{t_m})}{\partial \Delta y_{t_m}},\; \frac{\partial H_{mn}^{i+1}(\mathbf{e}_{t_m})}{\partial \Delta z_{t_m}} \right]^T \quad (24)$$

Applying the chain rule for composite functions, and noting that $H_{mn}^{i+1}$ depends on $\mathbf{e}_{t_m}$ through all of the two-way ranges $R_{tr}(m,n,k)$, $k = 1, \ldots, K$, the partial derivatives are

$$\frac{\partial H_{mn}^{i+1}(\mathbf{e}_{t_m})}{\partial \Delta x_{t_m}} = \sum_{k=1}^{K} \frac{\partial H_{mn}^{i+1}(\mathbf{e}_{t_m})}{\partial R_{tr}(m,n,k)} \frac{\partial R_{tr}(m,n,k)}{\partial \Delta x_{t_m}} \quad (25)$$

$$\frac{\partial H_{mn}^{i+1}(\mathbf{e}_{t_m})}{\partial \Delta y_{t_m}} = \sum_{k=1}^{K} \frac{\partial H_{mn}^{i+1}(\mathbf{e}_{t_m})}{\partial R_{tr}(m,n,k)} \frac{\partial R_{tr}(m,n,k)}{\partial \Delta y_{t_m}} \quad (26)$$

$$\frac{\partial H_{mn}^{i+1}(\mathbf{e}_{t_m})}{\partial \Delta z_{t_m}} = \sum_{k=1}^{K} \frac{\partial H_{mn}^{i+1}(\mathbf{e}_{t_m})}{\partial R_{tr}(m,n,k)} \frac{\partial R_{tr}(m,n,k)}{\partial \Delta z_{t_m}} \quad (27)$$
Using (2) and (3), we get

$$\frac{\partial R_{tr}(m,n,k)}{\partial \Delta x_{t_m}} = \frac{x_{t_m} + \Delta x_{t_m} - x_k}{R_t(m,k)} \quad (28)$$

$$\frac{\partial R_{tr}(m,n,k)}{\partial \Delta y_{t_m}} = \frac{y_{t_m} + \Delta y_{t_m} - y_k}{R_t(m,k)} \quad (29)$$

$$\frac{\partial R_{tr}(m,n,k)}{\partial \Delta z_{t_m}} = \frac{z_{t_m} + \Delta z_{t_m} - z_k}{R_t(m,k)} \quad (30)$$
The factor $\partial H_{mn}^{i+1}(\mathbf{e}_{t_m}) / \partial R_{tr}(m,n,k)$ appearing in (25)-(27) is derived in Appendix A. Combining (25)-(30) with (A1), the gradient of $H_{mn}^{i+1}(\mathbf{e}_{t_m})$ can be written explicitly. A Nesterov-accelerated adaptive moment (Nadam) [38] method is then utilized to solve (23). Once (23) is solved for each subprocess, the solution of (21) is obtained by averaging the subprocess estimates, and Step 2 of Algorithm 1 is realized, as sketched below.
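A sketch of the gradient of one (m, n) subprocess, combining (24)-(30) with (A3) and (A6), is given below; any first-order optimizer (Nadam in the paper) can consume it. The variable names are assumptions for illustration.

```python
import numpy as np

def grad_subprocess(y_mn, f, pt_ideal, et_m, pr, d, grid, c=3e8):
    """y_mn: (L,) echo of pair (m,n); f: (L,) frequencies; pt_ideal: ideal Tx
    position; et_m: current Tx error estimate; pr: (compensated) Rx position;
    d: (K,) current reflectivity estimate; grid: (K,3) scene points."""
    ptm = pt_ideal + et_m                               # Eq. (1)
    Rt = np.linalg.norm(ptm - grid, axis=1)             # (K,), Eq. (3)
    Rtr = Rt + np.linalg.norm(pr - grid, axis=1)        # (K,), Eqs. (2), (4)
    A = np.exp(1j * 2 * np.pi * np.outer(f, Rtr) / c)   # (L,K), Eq. (13)
    s = y_mn - A @ d                                    # residual, Eq. (A4)
    dA_dR = (1j * 2 * np.pi / c) * f[:, None] * A       # Eq. (A6)
    # dH/dR_tr(m,n,k) = -2 Re{ sum_l s_l^* d_k dA/dR }, Eqs. (A1), (A3)
    dH_dR = -2.0 * np.real(np.conj(s) @ (dA_dR * d[None, :]))   # (K,)
    dR_de = (ptm - grid) / Rt[:, None]                  # (K,3), Eqs. (28)-(30)
    return dH_dR @ dR_de                                # (3,), Eqs. (24)-(27)
```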
Similarly, step 3 in Algorithm 1 is realized by the same means.
When the termination condition is fulfilled, the transmitter motion errors $\mathbf{e}_t$, the receiver motion errors $\mathbf{e}_r$, and the reflectivity coefficients $\mathbf{d}$ have been estimated precisely.

2.2.3. Computational Complexity

We analyze the computational complexity of each step in this section. OMP is utilized in Step 1 to reconstruct images; its complexity is of order $O(qMNLK)$, where $q$ denotes the number of targets. In Step 2, the Nadam method is utilized to estimate the MIMO array motion errors, and the gradient computation dominates the complexity: it is of order $O(LK)$ per subprocess. Supposing Step 2 runs $p$ sub-iterations, its complexity is $O(pMNLK)$. Step 3 has the same complexity as Step 2. Thereby, the overall complexity is $O[(q + 2p)MNLK]$ per iteration of Algorithm 1. Table 2 summarizes the complexity terms.

3. Results

3.1. Simulation

Table 3 shows the simulation parameters.
First, we place seven targets in the scene. Second, the MIMO array motion errors are simulated as uniformly distributed random errors whose extent is $1/8$ of the wavelength. Fully sampled data are generated for BP imaging. To exploit the MIMO array information and give each subprocess the same amount of data for estimating the motion errors, we adopt the following sparse sampling strategy: all transmitters and receivers are used, and the frequencies are selected at random, with the same indices for every subprocess, as sketched below.
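The frequency selection itself is one line; a minimal sketch (with an assumed data layout) is:

```python
import numpy as np

rng = np.random.default_rng(0)
L_full, L_sel = 512, 64
freq_idx = np.sort(rng.choice(L_full, size=L_sel, replace=False))
# Using the same freq_idx for every (m, n) subprocess keeps the data volume
# per pair identical, e.g. y_sub = y_full.reshape(L_full, -1)[freq_idx]
```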
Figure 3 contrasts the imaging results without error compensation and with the proposed method. Figure 3a shows the result without compensation of the motion errors, which is defocused on account of the array motion errors. In Figure 3b, the targets are reconstructed accurately by the proposed method.
The estimation precision of the proposed method for the MIMO array motion errors is evaluated next, further emphasizing the superiority of the method. Figure 4 shows the true and estimated errors. Figure 4a compares the estimated and true errors of the $x$ dimension of the 8 transmitting elements, where the horizontal coordinate is the index of the transmitting element. Figure 4d shows the same comparison for the $x$ dimension of the 8 receiving elements, where the horizontal coordinate is the index of the receiving element. Together, Figure 4a,d give the error estimation accuracy of the $x$ dimension of the transmitting and receiving arrays. The remaining two columns depict the estimation precision in the $y$ and $z$ dimensions, respectively. The results show that the estimated errors are in good agreement with the real errors.
We define the data error as $\|\mathbf{y} - \mathbf{A}[(\mathbf{e}_t)^i, (\mathbf{e}_r)^i]\,\mathbf{d}^i\|_2^2$, where $(\mathbf{e}_t)^i$ and $(\mathbf{e}_r)^i$ are the transmitter and receiver motion errors estimated at iteration $i$ and $\mathbf{d}^i$ is the estimate of $\mathbf{d}$ in the $i$th iteration. We define the target reconstruction error as $\|\mathbf{d}^i - \mathbf{d}_0\|_1$, where $\mathbf{d}_0$ is the actual value of $\mathbf{d}$. We define the root mean square error (RMSE) of the estimated motion errors as

$$\mathrm{RMSE} = \sqrt{\frac{1}{T} \left\| \mathbf{e}^i - \mathbf{e}_0 \right\|_2^2} \quad (31)$$

where $\mathbf{e}^i$ is the estimate of $\mathbf{e}_t$ or $\mathbf{e}_r$ in the $i$th iteration, $\mathbf{e}_0$ is the corresponding true value, and $T$ is the number of transmitters or receivers.
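For reference, straightforward implementations of the three metrics (a sketch with assumed argument shapes) are:

```python
import numpy as np

def data_error(y, A_i, d_i):
    """||y - A[(e_t)^i, (e_r)^i] d^i||_2^2, with A_i built from the estimates."""
    r = y - A_i @ d_i
    return np.vdot(r, r).real

def reconstruction_error(d_i, d_true):
    """||d^i - d_0||_1."""
    return np.abs(d_i - d_true).sum()

def rmse(e_i, e_true, T):
    """Eq. (31): sqrt(||e^i - e_0||_2^2 / T), T = number of Tx or Rx."""
    return np.sqrt(np.sum(np.abs(e_i - e_true) ** 2) / T)
```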
To evaluate the convergence of the proposed method, the reduction of the data error, the reconstruction error, and the RMSE of $\mathbf{e}_t$ and $\mathbf{e}_r$ over the iterations is illustrated in Figure 5. Since the rate of change of $\mathbf{d}$ falls below the preset threshold, the method terminates at the 41st iteration. Figure 5a shows the target reconstruction error versus iteration: once the number of iterations exceeds 5, the error tends to zero. Figure 5b shows the data error, which likewise decreases toward zero after about 5 iterations. The rapid reduction of the RMSE of $\mathbf{e}_t$ and $\mathbf{e}_r$ is illustrated in Figure 5c,d, respectively.
In addition, the robustness and accuracy of the method under noise are evaluated. Gaussian white noise with different SNRs is added to the original data, and 15 repetitions of the simulation are performed. As illustrated in Figure 6a, the RMSE of the average estimated motion errors is less than 0.035 m under all simulated SNR conditions. In Figure 6b,c, when the SNR exceeds 6 dB, the target reconstruction error is smaller than 1 and the data error is smaller than 800. The simulations show that the method is robust to noise and retains good reconstruction and estimation precision even under low SNR conditions.

3.2. Experiment

A MIMO radar with a stepped-frequency waveform is installed on the UGV for data collection in this experiment. The radar has 10 transmitters and 10 receivers. Figure 7a shows the MIMO radar and the camera mounted above it. Figure 7b shows the radar and camera in an indoor artificial smoke scene. Figure 7c shows the diagram of the corner reflector distribution. Figure 7d shows the optical image from the camera, in which the targets are invisible; the distribution of the three corner reflectors is illustrated by an additional optical image in the upper right corner. The returns from the reflectors are much stronger than those from the surrounding areas, so the scenario can be regarded as sparse. During the experiment, the UGV keeps moving. When the UGV passes the designated position, the geometric center of target 1 is at $(0, 2.5\ \mathrm{m})$, and the other two targets have geometric centers at $(0, 4\ \mathrm{m})$ and $(0.3\ \mathrm{m}, 4\ \mathrm{m})$. The theoretical azimuth resolution is 0.36 m at a range of 4 m. We first acquire the full data for BP imaging and then use part of the data for CS imaging. The experimental parameters are shown in Table 4.
A comparison of the imaging results of the different methods is illustrated in Figure 8. The BP imaging result is shown in Figure 8a, the result of conventional CS reconstruction without compensation of the MIMO radar array motion errors in Figure 8b, and the result of the proposed method in Figure 8c. Compared with the optical image in Figure 7d, all the radar imaging results in Figure 8 show that the ability to perceive the environment is significantly improved in a smoky scene. In Figure 8a, targets 2 and 3 are aliased together because their separation is smaller than the azimuth resolution. In Figure 8b, targets 1 and 2 are defocused owing to the radar array motion errors. In Figure 8c, the imaging quality is enhanced by compensating for the motion errors, so that targets 2 and 3 can be easily distinguished. This demonstrates that the proposed method achieves autofocus and super-resolution imaging.

4. Discussion

While the MIMO radar installed on the UGV is moving, array motion errors are inevitable. In [34], a method is proposed to deal with the observation error under the SAR structure. A blind deconvolution method is proposed to acquire autofocused images [36], but it only applies when all antennas are influenced by an identical position error. Our proposed method accurately estimates and compensates for the motion errors of the transmitters and receivers while synchronously achieving autofocus imaging.
Figure 4 shows that the estimated motion errors are in good agreement with the true errors. Figure 8 shows that, compared with the traditional imaging method, the proposed algorithm yields super-resolution imaging in the presence of motion errors. In the smoky environment, the distribution of the targets is accurately recovered by autofocus imaging, which greatly improves the environmental perception ability compared with the optical sensor.
Future efforts will verify the validity of the method in complex environments such as the wild. Autofocus imaging of moving targets is a promising research direction that would widen the scope of application of the algorithm. Incorporating joint low-rank and sparse priors into autofocus imaging may improve noise suppression. Another direction is to exploit the rigid-body constraints of the array while estimating the motion errors, which may improve both speed and accuracy.

5. Conclusions

We have presented a method to compensate for the motion errors of the MIMO radar array in CS imaging. The method realizes the estimation of the errors of the transmitters and receivers of the MIMO array and the reconstruction of the target image simultaneously. It builds on the essential relationship between the array motion errors and the imaging model, using a BCD scheme that iterates through target reconstruction and estimation and compensation of the array motion errors; a gradient-based optimization method is utilized to estimate the motion errors. The proposed method enhances environmental perception, since it accurately estimates the MIMO array motion errors and significantly improves the reconstruction results. Its validity is verified by simulations and measurements.

Author Contributions

Conceptualization, H.L. and T.J.; methodology, H.L.; software, H.L. and Z.L.; validation, H.L. and Y.D.; formal analysis, H.L. and S.L.; investigation, H.L.; resources, H.L.; data curation, H.L.; writing—original draft preparation, H.L.; writing—review and editing, H.L. and Z.L.; visualization, Y.D.; supervision, T.J.; project administration, T.J.; funding acquisition, T.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Natural Science Foundation of China under grant number 61971430.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in the manuscript:
BCD  Block coordinate descent
BP  Back projection
CS  Compressed sensing
GPS  Global Positioning System
INS  Inertial navigation system
ISAR  Inverse synthetic aperture radar
MIMO  Multiple-input multiple-output
Nadam  Nesterov-accelerated adaptive moment
OMP  Orthogonal matching pursuit
RCS  Radar cross section
RMSE  Root mean square error
SAR  Synthetic aperture radar
SNR  Signal-to-noise ratio
UGV  Unmanned ground vehicle

Appendix A

In this appendix, we derive the calculation of $\partial H_{mn}^{i+1}(\mathbf{e}_{t_m}) / \partial R_{tr}(m,n,k)$. Using (23), we have

$$\frac{\partial H_{mn}^{i+1}(\mathbf{e}_{t_m})}{\partial R_{tr}(m,n,k)} = \sum_{l=1}^{L} \frac{\partial \left| y(m,n,f_l) - \sum_{k'=1}^{K} A[R_{tr}(m,n,k'),f_l]\, d_{k'}^{i+1} \right|^2}{\partial R_{tr}(m,n,k)} \quad (A1)$$

Expanding the squared modulus in (A1) gives

$$\left| y(m,n,f_l) - \sum_{k'=1}^{K} A[R_{tr}(m,n,k'),f_l]\, d_{k'}^{i+1} \right|^2 = y(m,n,f_l)\, y^*(m,n,f_l) - y^*(m,n,f_l) \sum_{k'=1}^{K} A[R_{tr}(m,n,k'),f_l]\, d_{k'}^{i+1} - y(m,n,f_l) \sum_{k'=1}^{K} A^*[R_{tr}(m,n,k'),f_l]\, (d_{k'}^{i+1})^* + \sum_{k'=1}^{K} A[R_{tr}(m,n,k'),f_l]\, d_{k'}^{i+1} \sum_{k''=1}^{K} A^*[R_{tr}(m,n,k''),f_l]\, (d_{k''}^{i+1})^* \quad (A2)$$

Then, differentiating (A2) with respect to $R_{tr}(m,n,k)$, we have

$$\begin{aligned} \frac{\partial \left| y(m,n,f_l) - \sum_{k'=1}^{K} A[R_{tr}(m,n,k'),f_l]\, d_{k'}^{i+1} \right|^2}{\partial R_{tr}(m,n,k)} &= -y^*(m,n,f_l)\, d_k^{i+1} \frac{\partial A[R_{tr}(m,n,k),f_l]}{\partial R_{tr}(m,n,k)} - y(m,n,f_l)\, (d_k^{i+1})^* \frac{\partial A^*[R_{tr}(m,n,k),f_l]}{\partial R_{tr}(m,n,k)} \\ &\quad + d_k^{i+1} \frac{\partial A[R_{tr}(m,n,k),f_l]}{\partial R_{tr}(m,n,k)} \sum_{k'=1}^{K} A^*[R_{tr}(m,n,k'),f_l]\, (d_{k'}^{i+1})^* + (d_k^{i+1})^* \frac{\partial A^*[R_{tr}(m,n,k),f_l]}{\partial R_{tr}(m,n,k)} \sum_{k'=1}^{K} A[R_{tr}(m,n,k'),f_l]\, d_{k'}^{i+1} \\ &= -2\,\mathrm{Re}\left\{ s^*(m,n,f_l)\, d_k^{i+1}\, \frac{\partial A[R_{tr}(m,n,k),f_l]}{\partial R_{tr}(m,n,k)} \right\} \end{aligned} \quad (A3)$$

where

$$s(m,n,f_l) = y(m,n,f_l) - \sum_{k'=1}^{K} A[R_{tr}(m,n,k'),f_l]\, d_{k'}^{i+1} \quad (A4)$$

Using the expression of $A[R_{tr}(m,n,k),f_l]$ in (13), we have

$$A[R_{tr}(m,n,k),f_l] = \exp\left[\, j 2\pi f_l R_{tr}(m,n,k)/c \right] \quad (A5)$$

$$\frac{\partial A[R_{tr}(m,n,k),f_l]}{\partial R_{tr}(m,n,k)} = \frac{j 2\pi f_l}{c} \exp\left[\, j 2\pi f_l R_{tr}(m,n,k)/c \right] \quad (A6)$$
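A quick numerical check of (A6) (our own sanity test, not from the paper) compares the analytic derivative with a central finite difference:

```python
import numpy as np

c, f, R, h = 3e8, 3e9, 8.0, 1e-9
A = lambda r: np.exp(1j * 2 * np.pi * f * r / c)
analytic = (1j * 2 * np.pi * f / c) * A(R)        # Eq. (A6)
numeric = (A(R + h) - A(R - h)) / (2 * h)         # central difference
assert abs(analytic - numeric) < 1e-3 * abs(analytic)
```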

References

1. Czapla, T.; Wrona, J. Technology development of military applications of unmanned ground vehicles. Stud. Comput. Intell. 2013, 481, 293–309.
2. Bekkerman, I.; Tabrikian, J. Target detection and localization using MIMO radars and sonars. IEEE Trans. Signal Process. 2006, 54, 3873–3883.
3. Fischer, C.; Younis, M.; Wiesbeck, W. Multistatic GPR data acquisition and imaging. Int. Geosci. Remote Sens. Symp. 2002, 1, 328–330.
4. Bradley, M.R.; Witten, T.R.; Duncan, M.; McCummins, R. Mine detection with a forward-looking ground-penetrating synthetic aperture radar. In Detection and Remediation Technologies for Mines and Minelike Targets VIII; International Society for Optics and Photonics: Bellingham, WA, USA, 2003; Volume 5089, p. 334.
5. Ressler, M.; Nguyen, L.; Koenig, F.; Wong, D.; Smith, G. The Army Research Laboratory (ARL) synchronous impulse reconstruction (SIRE) forward-looking radar. Unmanned Syst. Technol. IX 2007, 6561, 656105.
6. Counts, T.; Gurbuz, A.C.; Scott, W.R.; McClellan, J.H.; Kim, K. Multistatic ground-penetrating radar experiments. IEEE Trans. Geosci. Remote Sens. 2007, 45, 2544–2553.
7. Jin, T.; Lou, J.; Zhou, Z. Extraction of landmine features using a forward-looking ground-penetrating radar with MIMO array. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4135–4144.
8. Bilik, I.; Longman, O.; Villeval, S.; Tabrikian, J. The Rise of Radar for Autonomous Vehicles: Signal processing solutions and future research directions. IEEE Signal Process. Mag. 2019, 36, 20–31.
9. Cheng, Y.; Zhou, X.; Xu, X.; Qin, Y.; Wang, H. Radar Coincidence Imaging with Stochastic Frequency Modulated Array. IEEE J. Sel. Top. Signal Process. 2017, 11, 414–427.
10. Ciuonzo, D. On time-reversal imaging by statistical testing. IEEE Signal Process. Lett. 2017, 24, 1024–1028.
11. Ciuonzo, D.; Romano, G.; Solimene, R. Performance analysis of time-reversal MUSIC. IEEE Trans. Signal Process. 2015, 63, 2650–2662.
12. Devaney, A.J. Time reversal imaging of obscured targets from multistatic data. IEEE Trans. Antennas Propag. 2005, 53, 1600–1610.
13. Zhu, R.; Zhou, J.; Cheng, B.; Fu, Q.; Jiang, G. Sequential Frequency-Domain Imaging Algorithm for Near-Field MIMO-SAR with Arbitrary Scanning Paths. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 2967–2975.
14. Gini, F.; Lombardini, F.; Montanari, M. Layover solution in multibaseline SAR interferometry. IEEE Trans. Aerosp. Electron. Syst. 2002, 38, 1344–1356.
15. Chen, C.; Xiaoling, Z. A new super-resolution 3D-SAR imaging method based on MUSIC algorithm. In Proceedings of the 2011 IEEE RadarCon (RADAR), Kansas City, MO, USA, 23–27 May 2011; pp. 525–529.
16. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
17. Li, H.; Jin, T.; Dai, Y.P. Segmented random sparse MIMO-SAR 3-D imaging based on compressed sensing. In Proceedings of the IET International Radar Conference (IET IRC 2020), Online, 4–6 November 2020; pp. 317–322.
18. Suksmono, A.B.; Bharata, E.; Lestari, A.A.; Yarovoy, A.G.; Ligthart, L.P. Compressive stepped-frequency continuous-wave ground-penetrating radar. IEEE Geosci. Remote Sens. Lett. 2010, 7, 665–669.
19. Zhu, X.X.; Bamler, R. Tomographic SAR inversion by L1-norm regularization—The compressive sensing approach. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3839–3846.
20. Zhang, L.; Qiao, Z.J.; Xing, M.; Li, Y.; Bao, Z. High-resolution ISAR imaging with sparse stepped-frequency waveforms. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4630–4651.
21. Singh, K.B.; Arat, M.A.; Taheri, S. Literature review and fundamental approaches for vehicle and tire state estimation. Veh. Syst. Dyn. 2019, 57, 1643–1665.
22. Guo, H.; Cao, D.; Chen, H.; Lv, C.; Wang, H.; Yang, S. Vehicle dynamic state estimation: State of the art schemes and perspectives. IEEE/CAA J. Autom. Sin. 2018, 5, 418–431.
23. Wahl, D.E.; Eichel, P.H.; Ghiglia, D.C.; Jakowatz, C.V. Phase Gradient Autofocus—A Robust Tool for High Resolution SAR Phase Correction. IEEE Trans. Aerosp. Electron. Syst. 1994, 30, 827–835.
24. Kolman, J. PACE: An autofocus algorithm for SAR. In Proceedings of the IEEE International Radar Conference, Arlington, VA, USA, 9–12 May 2005; pp. 310–314.
25. Yang, J.; Huang, X.; Jin, T.; Xue, G.; Zhou, Z. An interpolated phase adjustment by contrast enhancement algorithm for SAR. IEEE Geosci. Remote Sens. Lett. 2011, 8, 211–215.
26. Xi, L.I. Autofocusing of ISAR images based on entropy minimization. IEEE Trans. Aerosp. Electron. Syst. 1999, 35, 1240–1252.
27. Ye, W.; Yeo, T.S. Weighted least-squares estimation of phase errors for SAR/ISAR autofocus. IEEE Trans. Geosci. Remote Sens. 1999, 37, 2487–2494.
28. Cho, H.J.; Munson, D.C. Overcoming polar-format issues in multichannel SAR autofocus. In Proceedings of the 2008 42nd Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 26–29 October 2008; pp. 523–527.
29. Liu, K.H.; Munson, D.C. Fourier-domain multichannel autofocus for synthetic aperture radar. IEEE Trans. Image Process. 2011, 20, 3544–3552.
30. Nguyen, M.P.; Ammar, S.B. Second order motion compensation for squinted spotlight synthetic aperture radar. In Proceedings of the 2013 Asia-Pacific Conference on Synthetic Aperture Radar (APSAR), Tsukuba, Japan, 23–27 September 2013; pp. 202–205.
31. Kelly, S.I.; Yaghoobi, M.; Davies, M.E. Auto-focus for Compressively Sampled SAR. In Proceedings of the 1st International Workshop on Compressed Sensing Applied to Radar (CoSeRa 2012), Bonn, Germany, 14–16 May 2012.
32. Du, X.; Duan, C.; Hu, W. Sparse representation based autofocusing technique for ISAR images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 1826–1835.
33. Ender, J.H.G. Autofocusing ISAR images via sparse representation. In Proceedings of the 9th European Conference on Synthetic Aperture Radar, Nuremberg, Germany, 23–26 April 2012; pp. 203–206.
34. Yang, J.; Huang, X.; Thompson, J.; Jin, T.; Zhou, Z. Compressed sensing radar imaging with compensation of observation position error. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4608–4620.
35. Pu, W.; Wu, J.; Wang, X.; Huang, Y.; Zha, Y.; Yang, J. Joint Sparsity-Based Imaging and Motion Error Estimation for BFSAR. IEEE Trans. Geosci. Remote Sens. 2019, 57, 1393–1408.
36. Mansour, H.; Liu, D.; Kamilov, U.S.; Boufounos, P.T. Sparse Blind Deconvolution for Distributed Radar Autofocus Imaging. IEEE Trans. Comput. Imaging 2018, 4, 537–551.
37. Tropp, J.; Gilbert, A. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 2007, 53, 4655–4666.
38. Dozat, T. Incorporating Nesterov Momentum into Adam. In Proceedings of the 4th International Conference on Learning Representations, Workshop Track, San Juan, Puerto Rico, 2–4 May 2016; pp. 1–4.
Figure 1. Imaging geometry of MIMO Radar.
Figure 2. Flow chart of Algorithm 1.
Figure 3. Imaging Results Contrast. (a) Results without compensation of errors; (b) Results of the proposed method.
Figure 4. MIMO array motion errors estimation. (a) x dimension of Transmitters. (b) y dimension of Transmitters. (c) z dimension of Transmitters. (d) x dimension of Receivers. (e) y dimension of Receivers. (f) z dimension of Receivers.
Figure 5. Data error, Reconstruction error, and RMSE of e t and e r across the iteration. (a) Data error. (b) Reconstruction error. (c) RMSE of e t . (d) RMSE of e r .
Figure 6. Proposed method performance under different SNRs. (a) RMSE of the average of six directions motion errors. (b) Target reconstruction error. (c) Data error.
Figure 7. Optical images of experimental scenes. (a) MIMO radar mounted on the UGV. (b) UGV in the fog. (c) Diagram of the corner reflector distribution. (d) Optical image of three corner reflectors in a smoky scene.
Figure 8. MIMO radar experimental results. (a) Result of BP. (b) Result without compensation of errors. (c) Result of the proposed method.
Table 1. Categories of methods to solve autofocus problems.
Methods | Details | References
Phase errors | Estimating a substituted collection of phase errors in the measured signal | [26,27,28,29,30,31,32,33]
Motion errors | Estimating the motion errors in the SAR structure | [34]
Motion errors | Estimating the motion errors in bistatic SAR | [35]
Motion errors | Supposing the transmitter and receiver are affected by the same motion error | [36]
Motion errors | Estimating the motion errors of the transmitters and receivers of the MIMO array | Our method
Table 2. Complexity terms.
Terms | Value
Number of targets | $q$
Complexity of OMP | $O(MNLK)$
Complexity of Step 1 | $O(qMNLK)$
Iterations in Step 2 | $p$
Complexity of Nadam | $O(MNLK)$
Complexity of Step 2 | $O(pMNLK)$
Complexity of Step 3 | $O(pMNLK)$
Overall computational complexity | $O[(q + 2p)MNLK]$
Table 3. Simulation parameters.
Parameters | Value
Center Frequency | 3 GHz
Bandwidth | 2.048 GHz
Frequency Interval | 4 MHz
Number of Frequencies | 512
Number of Transmitters | 8
Number of Receivers | 8
Selected Frequencies | 64
Scene Azimuth Points | 40
Scene Range Points | 40
Table 4. Experimental parameters for MIMO radar.
Parameters | Value
Center Frequency | 2.3 GHz
Bandwidth | 1.024 GHz
Frequency Interval | 4 MHz
Number of Frequencies | 256
Number of Transmitters | 10
Number of Receivers | 10
Selected Frequencies | 64
Scene Azimuth Points | 40
Scene Range Points | 40
