Article

An Imaging Plane Calibration Method for MIMO Radar Imaging

Key Laboratory of Electromagnetic Space Information, Chinese Academy of Sciences, University of Science and Technology of China, Hefei 230026, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(23), 5261; https://doi.org/10.3390/s19235261
Submission received: 10 November 2019 / Revised: 26 November 2019 / Accepted: 27 November 2019 / Published: 29 November 2019
(This article belongs to the Special Issue Microwave Sensing and Imaging)

Abstract

In two-dimensional (2D) cross-range multiple-input multiple-output (MIMO) radar imaging of aerial targets, the non-cooperative movement of the targets means that the estimated imaging plane parameters, namely the center and the posture angles of the imaging plane, may deviate from their true values, which defocuses the final image. This problem is referred to as imaging plane mismatch in this paper. To address it, the deviation of the spatial spectrum filling region caused by imaging plane mismatch is first analyzed, together with the errors of the corresponding spatial spectral values. The calibration operation that restores a focused image once the imaging plane parameters are accurately known is then derived. On this basis, an imaging plane calibration algorithm is proposed that uses particle swarm optimization to search for the imaging plane parameters. Finally, simulations demonstrate that the proposed algorithm accurately estimates the imaging plane parameters and achieves good image focusing performance.

1. Introduction

Aerial target imaging is an important research direction in radar imaging, playing a crucial role in military applications such as air defense [1] and anti-missile defense [2]. Multiple-input multiple-output (MIMO) radar is a radar technique that employs multiple transmitters and receivers. By transmitting orthogonal space–time block codes [3] or frequency diversity signals [4], a MIMO radar with M transmitters and N receivers can form a virtual array whose aperture is up to M times that of the receive array, which greatly reduces hardware cost. In addition, MIMO radar can image an aerial target with only one snapshot, and thus offers a great advantage in image acquisition time [5,6].
Since the MIMO radar technique was proposed, there has been a strong demand for high-performance imaging algorithms. Research on MIMO radar imaging algorithms mainly focuses on two aspects. The first is wavenumber-domain imaging methods, whose performance mainly depends on the spatial spectrum filling region, which is generally required to be filled uniformly so that the fast Fourier transform (FFT) can be applied [7,8]. In this regard, Yarovoy et al. have conducted extensive studies and verified the feasibility of such algorithms in security screening [9], through-wall [10], and ground-penetrating imaging applications [7]. In addition to the traditional wavenumber-domain methods, iterative optimization methods have also been applied to MIMO radar imaging. For example, Li's team at the University of Florida proposed several iterative imaging algorithms, such as the iterative adaptive approach (IAA) [11] and sparse learning via iterative minimization (SLIM) [12], which have been successfully applied and verified in MIMO radar scenarios.
In MIMO radar imaging, the model mismatch caused by system errors or array spatial position errors degrades the imaging quality. Therefore, the study of model error calibration algorithms is an important research direction for high-quality MIMO radar imaging. Many studies have addressed the calibration of phase errors [13,14], carrier frequency deviations [15], array position errors [16], off-grid problems [17,18], and so on. The degradation of MIMO radar resolution under phase error is analyzed from the perspective of the point spread function (PSF) in [13], and the sparse imaging via expectation maximization (SIEM) algorithm is proposed, which alternately estimates the phase errors and the target image and obtains better imaging quality. Similarly, the degradation of MIMO radar resolution under carrier frequency deviation is analyzed from the PSF perspective in [15], and an iterative algorithm employing a strategy similar to that of [13] is proposed, achieving good imaging results. MIMO imaging with array position errors is studied in [16], while the off-grid problem of MIMO radar imaging is studied in [17,18]. The algorithms proposed in [16,17,18] all employ sparse optimization, alternately estimating the target image and the errors during the iterations, so that clear images are finally obtained.
Most of the above methods are designed for two-dimensional (2D) imaging in the range and cross-range directions. However, they cannot be applied directly when the target plane is parallel to the cross-range direction. In fact, 2D cross-range imaging methods based on the spatial spectrum require the coordinate origin to be set at the center of the imaging plane [5,19]. In practical applications, since the target is non-cooperative, the scene center and the posture angles of the target plane must be estimated so that the final image is focused on the scene center and the target plane. However, there are always deviations between the estimated imaging plane parameters and the real values, resulting in an unfocused image and poor imaging quality. This problem is called the imaging plane mismatch problem in this paper.
To solve this problem, the deviation of the spatial spectrum filling region caused by imaging plane mismatch is first analyzed, and the location errors between the estimated and real spatial spectral points under imaging plane mismatch are deduced, together with the errors of the corresponding spatial spectral values. Subsequently, an imaging plane calibration algorithm (IPCA) is proposed to estimate the imaging plane parameters and calibrate the locations and values of the spatial spectral points. Aiming to minimize the image entropy while promoting target sparsity, IPCA uses particle swarm optimization (PSO) [20,21] to search for the imaging plane center deviation and the posture angle deviations, and then calibrates the locations of the spatial spectral points according to these parameters, so as to obtain an image of better quality.
This paper is organized as follows. Section 2 introduces the spatial spectral imaging model of MIMO radar, analyzes the imaging plane mismatch problem, and deduces the deviations between the estimated and real spatial spectral point positions under imaging plane mismatch. Section 3 provides the design and the detailed flow of IPCA. In Section 4, the validity of the proposed algorithm, its robustness to noise, and its tolerance to mismatched parameters are verified by simulations. Section 5 concludes the paper.

2. Problem Formulation of Imaging Plane Mismatch

In this section, the spatial spectral imaging model of MIMO radar is reviewed first, and then the model mismatch problem caused by imaging plane mismatch is analyzed. Under imaging plane mismatch, the deviations between the positions of the obtained spatial spectral points and the positions of the real spectral points are analyzed, as well as the values of the corresponding spatial spectral points. Afterwards, the calibration operation that yields a focused image once the imaging plane parameters are obtained is derived. Note that the radar system discussed here is the frequency diversity MIMO (f-MIMO) radar [4,22].

2.1. Space Spectral Imaging Model of Multiple-Input Multiple-Output (MIMO) Radar

The spatial spectral imaging model of MIMO radar is briefly reviewed in this subsection.
Figure 1 illustrates the general 2D cross-range imaging scenario of MIMO radar. Let $(x, y, z)$ be Cartesian coordinates with the origin $O$ located at the center of the imaging plane, and suppose the 2D target lies on the imaging plane. The locations of the $p$-th transmitting antenna and the $q$-th receiving antenna are denoted as $\mathbf{r}_p = \left( r_p, \theta_p, \varphi_p \right)$ and $\mathbf{r}_q = \left( r_q, \theta_q, \varphi_q \right)$ in spherical coordinates, respectively.
Without loss of generality, and neglecting the free-space propagation loss, the echo received at the $q$-th receiver from the $p$-th transmitter is given by [5]:

$$s_{p,q}(t) = \iint \sigma(x_T, y_T) \exp\left[ j 2\pi f_p \left( t - \frac{R_{p,T}}{c} - \frac{R_{q,T}}{c} \right) \right] dx_T \, dy_T \tag{1}$$

where $\sigma(x_T, y_T)$ denotes the reflectivity of the scatterer at $(x_T, y_T)$ on the imaging plane, $f_p$ is the transmitting frequency of the $p$-th transmitter, and $c$ is the speed of light. $R_{p,T} = \left| \mathbf{r}_p - \mathbf{r}_T \right|$, $R_{q,T} = \left| \mathbf{r}_q - \mathbf{r}_T \right|$, and $\mathbf{r}_T = (x_T, y_T, 0)$.
Then, down conversion is applied to the received signal, which can be achieved by multiplying it by the following reference signal:
$$s_{ref}(t) = \exp\left[ -j 2\pi f_p \left( t - \frac{r_p}{c} - \frac{r_q}{c} \right) \right] \tag{2}$$
In the far field, the following approximations can be used:

$$R_{p,T} = \left| \mathbf{r}_p - \mathbf{r}_T \right| \approx r_p - \mathbf{r}_T \cdot \hat{\mathbf{e}}_p, \qquad R_{q,T} = \left| \mathbf{r}_q - \mathbf{r}_T \right| \approx r_q - \mathbf{r}_T \cdot \hat{\mathbf{e}}_q \tag{3}$$

where $\hat{\mathbf{e}}_p = \mathbf{r}_p / \left| \mathbf{r}_p \right|$ and $\hat{\mathbf{e}}_q = \mathbf{r}_q / \left| \mathbf{r}_q \right|$.
Thus we obtain:

$$s_{p,q}(t) \cdot s_{ref}(t) = \iint \sigma(x_T, y_T) \exp\left[ j 2\pi f_p \left( t - \frac{R_{p,T}}{c} - \frac{R_{q,T}}{c} \right) \right] \exp\left[ -j 2\pi f_p \left( t - \frac{r_p}{c} - \frac{r_q}{c} \right) \right] dx_T \, dy_T = \iint \sigma(x_T, y_T) \exp\left( j 2\pi \mathbf{K}_{p,q} \cdot \mathbf{r}_T \right) dx_T \, dy_T \tag{4}$$
where $\mathbf{K}_{p,q} = \left( k^x_{p,q}, k^y_{p,q} \right)$, with $k^x_{p,q}$ and $k^y_{p,q}$ given by:

$$k^x_{p,q} = \frac{f_p}{c} \left( \cos\theta_p \sin\varphi_p + \cos\theta_q \sin\varphi_q \right), \qquad k^y_{p,q} = \frac{f_p}{c} \left( \cos\theta_p \cos\varphi_p + \cos\theta_q \cos\varphi_q \right) \tag{5}$$
Therefore, the value of the 2D spatial spectrum at the point $\left( k^x_{p,q}, k^y_{p,q} \right)$ of the imaging plane is obtained:

$$G\left( k^x_{p,q}, k^y_{p,q} \right) = \iint \sigma(x_T, y_T) \exp\left[ j 2\pi \left( x_T k^x_{p,q} + y_T k^y_{p,q} \right) \right] dx_T \, dy_T \tag{6}$$
Finally, common algorithms can be applied to the spatial spectral point values to obtain the target image, such as the inverse fast Fourier transform (IFFT), the back-projection (BP) algorithm [23], and the non-uniform fast Fourier transform (NUFFT) [8].
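As a concrete illustration of Equations (5) and (6), the following Python sketch computes spatial-spectrum sample locations and values for a toy scene containing a few point scatterers, and forms a coarse image by a direct adjoint (matched-filter) summation standing in for the gridded IFFT, BP, or NUFFT step; the antenna angles, frequency sweep, and scatterer list are illustrative placeholders rather than the parameters used later in the paper.

```python
import numpy as np

# Illustrative geometry: angles in radians, frequencies in Hz (placeholders).
theta_p, phi_p = np.deg2rad(30.0), np.deg2rad(10.0)    # one transmitter direction
theta_q, phi_q = np.deg2rad(28.0), np.deg2rad(-5.0)    # one receiver direction
freqs = np.linspace(4.9e9, 5.1e9, 64)                  # f_p sweep (frequency diversity)
c = 3e8

# Equation (5): spatial-spectrum sample locations for every (f_p, Tx, Rx) pair.
kx = freqs / c * (np.cos(theta_p) * np.sin(phi_p) + np.cos(theta_q) * np.sin(phi_q))
ky = freqs / c * (np.cos(theta_p) * np.cos(phi_p) + np.cos(theta_q) * np.cos(phi_q))

# Equation (6): spectrum values for a toy scene of discrete scatterers.
scatterers = [(-2.0, 1.0, 1.0), (3.0, -1.5, 0.7)]       # (x_T, y_T, reflectivity)
G = np.zeros_like(kx, dtype=complex)
for xT, yT, amp in scatterers:
    G += amp * np.exp(1j * 2 * np.pi * (xT * kx + yT * ky))

# Adjoint (matched-filter) reconstruction on a coarse grid, standing in for IFFT/BP/NUFFT.
x = np.linspace(-5, 5, 64)
y = np.linspace(-5, 5, 64)
X, Y = np.meshgrid(x, y, indexing="ij")
image = np.abs(np.tensordot(G, np.exp(-1j * 2 * np.pi *
                (np.multiply.outer(kx, X) + np.multiply.outer(ky, Y))), axes=1))
print(image.shape, image.max())
```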

2.2. Analysis of Model Mismatch Problem Caused by Imaging Plane Mismatch

In actual MIMO radar imaging applications, especially in aerial target imaging, the center and the posture angles of the imaging plane are uncertain because of the target's non-cooperative movement. When the imaging plane parameters have deviations, the filling region of the obtained spatial spectrum deviates from the real region, which causes the image to be unfocused.
Figure 2 shows the imaging geometry of the scenario with imaging plane mismatch. Let $\alpha$ denote the estimated target plane and $\beta$ the real target plane. A coordinate system $O x_\alpha y_\alpha z_\alpha$ is set up with its origin at the center of the estimated target plane and the plane $x_\alpha O y_\alpha$ coinciding with the estimated target plane; in this coordinate system, the location vectors of the $p$-th transmitting antenna and the $q$-th receiving antenna are $\mathbf{r}_p = \left( r_p, \theta_p, \varphi_p \right)$ and $\mathbf{r}_q = \left( r_q, \theta_q, \varphi_q \right)$, respectively. Likewise, a coordinate system $O x_\beta y_\beta z_\beta$ is set up with its origin at the center of the real target plane and the plane $x_\beta O y_\beta$ coinciding with the real target plane; in this coordinate system, the location vectors of the $p$-th transmitting antenna and the $q$-th receiving antenna are $\mathbf{r}'_p = \left( r'_p, \theta'_p, \varphi'_p \right)$ and $\mathbf{r}'_q = \left( r'_q, \theta'_q, \varphi'_q \right)$, respectively. The change from plane $\alpha$ to plane $\beta$ consists of a translation and a change of the posture angles. The direction vector $\mathbf{d} = \left( d_\beta, \theta_\beta, \varphi_\beta \right)$ represents the translation of the origin of $O x_\beta y_\beta z_\beta$ relative to the origin of $O x_\alpha y_\alpha z_\alpha$. Define $(\delta, \mu, \xi)$ as the posture angles of plane $\beta$ in coordinate system $O x_\alpha y_\alpha z_\alpha$, where $\delta$ denotes the angle between the axis $x_\beta$ and the plane $x_\alpha O y_\alpha$, $\mu$ denotes the angle between the projection of the axis $x_\beta$ on the plane $x_\alpha O z_\alpha$ and the axis $x_\alpha$, and $\xi$ denotes the angle between the plane $y_\beta O z_\beta$ and the plane $y_\alpha O z_\alpha$.
Moreover, $\theta'_p, \theta'_q, \varphi'_p, \varphi'_q$ are related to $\theta_p, \theta_q, \varphi_p, \varphi_q$ by:

$$\theta'_p = \theta_p + \delta \sin\varphi_p + \mu \cos\varphi_p, \qquad \varphi'_p = \varphi_p + \xi + \delta \cos\varphi_p + \mu \sin\varphi_p$$
$$\theta'_q = \theta_q + \delta \sin\varphi_q + \mu \cos\varphi_q, \qquad \varphi'_q = \varphi_q + \xi + \delta \cos\varphi_q + \mu \sin\varphi_q \tag{7}$$
Hence, $r'_p$ and $r'_q$ can be computed from $\mathbf{r}_p$, $\mathbf{r}_q$, and $\mathbf{d}$, namely:

$$r'_p = \left| \mathbf{r}_p - \mathbf{d} \right| = \sqrt{ r_p^2 + d_\beta^2 - 2 r_p d_\beta \left[ \cos\theta_p \cos\theta_\beta \cos\left( \varphi_p - \varphi_\beta \right) + \sin\theta_p \sin\theta_\beta \right] }$$
$$r'_q = \left| \mathbf{r}_q - \mathbf{d} \right| = \sqrt{ r_q^2 + d_\beta^2 - 2 r_q d_\beta \left[ \cos\theta_q \cos\theta_\beta \cos\left( \varphi_q - \varphi_\beta \right) + \sin\theta_q \sin\theta_\beta \right] } \tag{8}$$
Substituting Equation (8) into Equation (3), the delays become:

$$\tau'_{p,T} = \frac{R'_{p,T}}{c} = \frac{ r'_p - \left( x_T \cos\theta'_p \sin\varphi'_p + y_T \cos\theta'_p \cos\varphi'_p \right) }{c}, \qquad \tau'_{q,T} = \frac{R'_{q,T}}{c} = \frac{ r'_q - \left( x_T \cos\theta'_q \sin\varphi'_q + y_T \cos\theta'_q \cos\varphi'_q \right) }{c} \tag{9}$$

where $r'_p$ and $r'_q$ are given by Equation (8) and the primed angles by Equation (7).
Since the real imaging plane parameters are not accurately known, the reference signal $s_{ref}(t)$ remains that of Equation (2). Substituting Equation (9) into Equation (4) and using the far-field approximation (with $d_\beta \ll r_p, r_q$) yields:

$$s_{p,q}(t) \cdot s_{ref}(t) = \iint \sigma(x_T, y_T) \exp\left\{ j \frac{2\pi f_p}{c} \Big( d_\beta \left[ \cos\theta_p \cos\theta_\beta \cos\left( \varphi_p - \varphi_\beta \right) + \sin\theta_p \sin\theta_\beta \right] + d_\beta \left[ \cos\theta_q \cos\theta_\beta \cos\left( \varphi_q - \varphi_\beta \right) + \sin\theta_q \sin\theta_\beta \right] + x_T \left[ \cos\theta'_p \sin\varphi'_p + \cos\theta'_q \sin\varphi'_q \right] + y_T \left[ \cos\theta'_p \cos\varphi'_p + \cos\theta'_q \cos\varphi'_q \right] \Big) \right\} dx_T \, dy_T \tag{10}$$

where the primed angles are given by Equation (7).
Therefore, Equation (6) actually yields:

$$G\left( k^x_{p,q}, k^y_{p,q} \right) = \iint \sigma(x_T, y_T) \exp\left\{ j 2\pi \left( x_T k^x_{p,q} + y_T k^y_{p,q} \right) + j \frac{2\pi f_p}{c} \Big( d_\beta \left[ \cos\theta_p \cos\theta_\beta \cos\left( \varphi_p - \varphi_\beta \right) + \sin\theta_p \sin\theta_\beta \right] + d_\beta \left[ \cos\theta_q \cos\theta_\beta \cos\left( \varphi_q - \varphi_\beta \right) + \sin\theta_q \sin\theta_\beta \right] + x_T \left[ \left( \cos\theta'_p \sin\varphi'_p + \cos\theta'_q \sin\varphi'_q \right) - \left( \cos\theta_p \sin\varphi_p + \cos\theta_q \sin\varphi_q \right) \right] + y_T \left[ \left( \cos\theta'_p \cos\varphi'_p + \cos\theta'_q \cos\varphi'_q \right) - \left( \cos\theta_p \cos\varphi_p + \cos\theta_q \cos\varphi_q \right) \right] \Big) \right\} dx_T \, dy_T \tag{11}$$
The locations of the spatial spectral points obtained by the above processing therefore do not match the real spatial spectrum filling region, and the obtained spectral values carry the extra phase terms in Equation (11). As a result, the image is not focused on the real target plane.

2.3. The Calibration Operation

In order to obtain more accurate imaging results, it is necessary to search for the imaging plane parameters, so as to eliminate the mismatch between the estimated and real spatial spectrum filling regions, as well as the mismatch of the corresponding spatial spectral values. Once the imaging plane parameters are obtained, the following calibration operation can be applied to obtain a focused image.
When the parameters $\mathbf{d} = \left( d_\beta, \theta_\beta, \varphi_\beta \right)$ and $(\delta, \mu, \xi)$ have been obtained, the reference signal can be calibrated as:

$$s_{ref}\left( t, r'_p, r'_q \right) = \exp\left[ -j 2\pi f_p \left( t - \frac{r'_p + r'_q}{c} \right) \right] \tag{12}$$
After the same coherent processing as in Equation (10), but with the calibrated reference signal, the final spatial spectral value is:

$$G\left( k'^x_{p,q}, k'^y_{p,q} \right) = \iint_T \sigma(x_T, y_T) \, e^{ j 2\pi \left( x_T k'^x_{p,q} + y_T k'^y_{p,q} \right) } \, dx_T \, dy_T \tag{13}$$
where $T$ denotes the imaging area and $k'^x_{p,q}$, $k'^y_{p,q}$ are defined in Equation (14).
$$k'^x_{p,q} = \frac{f_p}{c}\left( \cos\theta'_p \sin\varphi'_p + \cos\theta'_q \sin\varphi'_q \right) = \frac{f_p}{c}\Big[ \cos\left( \theta_p + \delta\sin\varphi_p + \mu\cos\varphi_p \right) \sin\left( \varphi_p + \xi + \delta\cos\varphi_p + \mu\sin\varphi_p \right) + \cos\left( \theta_q + \delta\sin\varphi_q + \mu\cos\varphi_q \right) \sin\left( \varphi_q + \xi + \delta\cos\varphi_q + \mu\sin\varphi_q \right) \Big]$$
$$k'^y_{p,q} = \frac{f_p}{c}\left( \cos\theta'_p \cos\varphi'_p + \cos\theta'_q \cos\varphi'_q \right) = \frac{f_p}{c}\Big[ \cos\left( \theta_p + \delta\sin\varphi_p + \mu\cos\varphi_p \right) \cos\left( \varphi_p + \xi + \delta\cos\varphi_p + \mu\sin\varphi_p \right) + \cos\left( \theta_q + \delta\sin\varphi_q + \mu\cos\varphi_q \right) \cos\left( \varphi_q + \xi + \delta\cos\varphi_q + \mu\sin\varphi_q \right) \Big] \tag{14}$$
Ultimately, the focused image can be obtained using $\left( k'^x_{p,q}, k'^y_{p,q} \right)$ and the corresponding spatial spectral values $G\left( k'^x_{p,q}, k'^y_{p,q} \right)$.
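As a sketch of how this calibration can be organized in code, the helper functions below recompute the calibrated sample locations of Equation (14) and the antenna ranges of Equation (8) needed to rebuild the reference signal of Equation (12), given candidate imaging plane parameters; the function names and the flat array layout (one entry per frequency/transmitter/receiver combination) are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def calibrated_wavenumbers(fp, theta_p, phi_p, theta_q, phi_q, delta, mu, xi, c=3e8):
    """Equation (14): spatial-spectrum sample locations after posture-angle calibration.

    All angles are in radians; fp, theta_p, ... may be NumPy arrays of equal shape,
    one entry per (frequency, Tx, Rx) combination.
    """
    tp = theta_p + delta * np.sin(phi_p) + mu * np.cos(phi_p)      # theta'_p
    pp = phi_p + xi + delta * np.cos(phi_p) + mu * np.sin(phi_p)   # phi'_p
    tq = theta_q + delta * np.sin(phi_q) + mu * np.cos(phi_q)      # theta'_q
    pq = phi_q + xi + delta * np.cos(phi_q) + mu * np.sin(phi_q)   # phi'_q
    kx = fp / c * (np.cos(tp) * np.sin(pp) + np.cos(tq) * np.sin(pq))
    ky = fp / c * (np.cos(tp) * np.cos(pp) + np.cos(tq) * np.cos(pq))
    return kx, ky

def calibrated_range(r, theta, phi, d_beta, theta_beta, phi_beta):
    """Equation (8): antenna range to the real plane center, used to rebuild s_ref."""
    cos_term = (np.cos(theta) * np.cos(theta_beta) * np.cos(phi - phi_beta)
                + np.sin(theta) * np.sin(theta_beta))
    return np.sqrt(r**2 + d_beta**2 - 2.0 * r * d_beta * cos_term)
```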

3. The Proposed Imaging Plane Calibration Algorithm (IPCA)

When there are deviations between the estimated imaging plane parameters $\left( d_\beta, \theta_\beta, \varphi_\beta, \delta, \mu, \xi \right)$ and their real values, the spatial spectrum mismatch problem arises. This mismatch prevents the image from focusing on the real imaging plane and seriously degrades the overall image quality.
In order to achieve better imaging performance, it is necessary to search for the real imaging plane parameters, namely the six parameters $\left( d_\beta, \theta_\beta, \varphi_\beta, \delta, \mu, \xi \right)$.
However, the search directions are unclear at the outset and keep changing during the search. A well-chosen objective function steers the search in desirable directions.
Generally, image entropy can be used to measure the focusing performance of an image. In radar imaging, it is defined as follows:
$$E = - \sum_{k=0}^{M-1} \sum_{n=0}^{N-1} \frac{ \left| \sigma\left( x_k, y_n \right) \right|^2 }{S} \ln \frac{ \left| \sigma\left( x_k, y_n \right) \right|^2 }{S} \tag{15}$$
where $S = \sum_{k=0}^{M-1} \sum_{n=0}^{N-1} \left| \sigma\left( x_k, y_n \right) \right|^2$ and the image is discretized into $M \times N$ grid cells; $\sigma\left( x_k, y_n \right)$ denotes the scattering coefficient of the cell in the $k$-th row and $n$-th column. At the same time, in aerial imaging applications the targets usually have sparse characteristics.
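As a point of reference, the entropy of Equation (15) can be evaluated from a complex-valued image as in the minimal sketch below (a small constant is added to avoid taking the logarithm of zero in empty cells).

```python
import numpy as np

def image_entropy(sigma, eps=1e-12):
    """Equation (15): Shannon-type entropy of the normalized image intensity."""
    power = np.abs(sigma) ** 2                   # |sigma(x_k, y_n)|^2
    p = power / (power.sum() + eps)              # normalized intensity distribution
    return float(-(p * np.log(p + eps)).sum())   # lower entropy = better-focused image
```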
Therefore, both the focusing performance and the sparsity of the target are taken into account, and the objective is to minimize the following function:
$$f\left( \mathbf{d}, \delta, \mu, \xi \right) = - \sum_{k=0}^{M-1} \sum_{n=0}^{N-1} \frac{ \left| \sigma\left( x_k, y_n \right) \right|^2 }{S} \ln \frac{ \left| \sigma\left( x_k, y_n \right) \right|^2 }{S} + \gamma \sum_{k=0}^{M-1} \sum_{n=0}^{N-1} \left| \sigma\left( x_k, y_n \right) \right| \tag{16}$$
where $\sigma\left( x_k, y_n \right) = \mathrm{IFFT}\left[ G_{p,q}\left( k'^x, k'^y, \mathbf{d}, \delta, \mu, \xi \right) \right]$ and $\gamma$ is a tunable parameter that adjusts the relative weights of the image entropy and the sparsity term.
In this way, the estimation of $\left( \mathbf{d}, \delta, \mu, \xi \right)$ can be transformed into the following optimization problem:

$$\left( \hat{\mathbf{d}}, \hat{\delta}, \hat{\mu}, \hat{\xi} \right) = \arg\min_{ \mathbf{d}, \delta, \mu, \xi } f\left( \mathbf{d}, \delta, \mu, \xi \right) \qquad \mathrm{s.t.} \quad \sigma\left( x_k, y_n \right) = \mathrm{IFFT}\left[ G_{p,q}\left( k'^x, k'^y, \mathbf{d}, \delta, \mu, \xi \right) \right] \tag{17}$$
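Using the image_entropy helper sketched above, the objective of Equations (16) and (17) can be evaluated for one candidate parameter vector as follows; the gridding-plus-IFFT reconstruction is passed in as a callable, so the argument names are placeholders for illustration rather than part of the paper's implementation.

```python
import numpy as np

def ipca_objective(params, reconstruct, gamma=1.0):
    """Equation (16) for one candidate params = (d_beta, theta_beta, phi_beta, delta, mu, xi).

    reconstruct: callable that calibrates the spectrum for these parameters, regrids it,
    and applies an IFFT, returning the complex image sigma (the constraint in Equation (17)).
    """
    sigma = reconstruct(params)          # sigma(x_k, y_n)
    entropy = image_entropy(sigma)       # focusing term, Equation (15)
    sparsity = np.abs(sigma).sum()       # l1 sparsity term
    return entropy + gamma * sparsity
```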
However, the above optimization problem is not convex. Among non-convex optimization approaches, metaheuristic optimization [24] is becoming increasingly popular, and particle swarm optimization (PSO) is one of the most popular metaheuristic algorithms; it has been successfully applied to radar imaging problems [25,26]. Hence, the imaging plane calibration algorithm (IPCA) is proposed, which uses PSO to solve the optimization problem in Equation (17). In IPCA, each particle $\mathbf{p}_i = \left( d_\beta^i, \theta_\beta^i, \varphi_\beta^i, \delta^i, \mu^i, \xi^i \right)$ represents one estimate of $\left( d_\beta, \theta_\beta, \varphi_\beta, \delta, \mu, \xi \right)$, while the velocity $\mathbf{v}_i$ of each particle represents one search direction. In PSO, each particle remembers its personal best position, and the global best position is also recorded. In each iteration, the velocity of each particle is updated by combining the individual and group movement states, and the particles acquire new positions by continually updating their velocities; eventually, all particles converge toward the global optimum [20,21].
The detailed algorithm of IPCA is described in Algorithm 1 and the flow chart is illustrated in Figure 3.
Algorithm 1: IPCA
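A hedged Python sketch of the IPCA search loop is given below. It follows the standard PSO update equations with the inertia-weight schedule and acceleration coefficients used in Section 4.1; the search bounds and the objective callable, which is assumed to wrap the spectrum calibration, regridding, IFFT, and Equation (16), are illustrative assumptions rather than the exact implementation of Algorithm 1.

```python
import numpy as np

def ipca_pso(objective, lower, upper, n_particles=100, k_max=100, c1=1.0, c2=1.0, seed=0):
    """Sketch of the PSO search in IPCA.

    objective: callable mapping a 6-vector (d_beta, theta_beta, phi_beta, delta, mu, xi)
               to the value of Equation (16) (lower is better).
    lower, upper: length-6 arrays bounding the search space.
    """
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    pos = rng.uniform(lower, upper, size=(n_particles, dim))      # particle positions
    vel = np.zeros_like(pos)                                      # particle velocities
    pbest_pos = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    g = np.argmin(pbest_val)
    gbest_pos, gbest_val = pbest_pos[g].copy(), pbest_val[g]

    for k in range(k_max):
        omega = 0.8 - 0.5 * k / k_max                             # inertia weight schedule
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        vel = (omega * vel
               + c1 * r1 * (pbest_pos - pos)                      # cognitive (individual) term
               + c2 * r2 * (gbest_pos - pos))                     # social (group) term
        pos = np.clip(pos + vel, lower, upper)                    # keep particles in bounds
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest_pos[improved], pbest_val[improved] = pos[improved], vals[improved]
        g = np.argmin(pbest_val)
        if pbest_val[g] < gbest_val:
            gbest_pos, gbest_val = pbest_pos[g].copy(), pbest_val[g]
    return gbest_pos, gbest_val
```

In the setting of Section 4, the bounds would be placed around the initial estimates of the six parameters, and the objective callable would wrap the calibration and reconstruction steps of Section 2.3.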

4. Simulations

In this section, several simulations are carried out to verify the effectiveness of IPCA.
The imaging distance is set to 1 km in the simulations, and the target is a B727 airplane whose point-scattering model is shown in Figure 4a. The imaging plane, approximately parallel to the radar antenna array, is 60 m × 60 m in size. The radar consists of 31 × 31 transmitters and 4 receivers, all located on the same transceiver plane, as illustrated in Figure 4b. The radar array is also 60 m × 60 m in size and the radar operates in C-band. The detailed simulation parameters are given in Table 1.
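For readers who wish to reproduce the geometry, a minimal sketch of one possible transceiver layout is given below; the uniform 31 × 31 transmitter grid follows Table 1, while the corner placement of the four receivers and the orientation of the scene center are assumptions made only for illustration, since the exact layout is shown in Figure 4b.

```python
import numpy as np

# Illustrative transceiver layout: 31 x 31 transmitters on a uniform grid spanning the
# 60 m x 60 m array, plus 4 receivers assumed at the array corners (placeholder choice).
tx_coords = np.linspace(-30.0, 30.0, 31)
tx_x, tx_y = np.meshgrid(tx_coords, tx_coords, indexing="ij")
tx_positions = np.stack([tx_x.ravel(), tx_y.ravel(), np.zeros(31 * 31)], axis=1)

rx_positions = np.array([[-30.0, -30.0, 0.0], [-30.0, 30.0, 0.0],
                         [30.0, -30.0, 0.0], [30.0, 30.0, 0.0]])

# Imaging plane center assumed 1 km away along the array normal (z axis).
scene_center = np.array([0.0, 0.0, 1000.0])
print(tx_positions.shape, rx_positions.shape)
```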

4.1. Imaging Simulation

To verify the effectiveness of the proposed method, the following simulation was carried out. The system parameters are listed in Table 1. Both the center and the posture angles of the imaging plane have errors; the real parameters are given in Table 2.
According to the derivation in Section 2 and the deviation parameters of the imaging plane, the estimated and real filling regions of the spatial spectrum are illustrated in Figure 5a,b, respectively. When there are deviations of the imaging plane parameters, the estimated filling region of the spatial spectrum differs markedly from the real one. Figure 5c shows the image recovered by the IFFT algorithm when the deviations of the imaging plane parameters are unknown. Even for slight deviations of the imaging plane parameters, e.g., posture angle deviations within $1^\circ$, the image is badly defocused and the target's contour is not clear.
In the following, the proposed IPCA is used to search for the six parameters $\left( d_\beta, \theta_\beta, \varphi_\beta, \delta, \mu, \xi \right)$, and the calibration operation is then applied to obtain the final image. The number of particles $I$ in IPCA is set to 100, and the maximum number of iterations $K_{max}$ is also set to 100. The other PSO parameters are set as $\omega_k = 0.8 - 0.5 \times k / K_{max}$ and $c_1 = c_2 = 1$.
According to Figure 6, the image acquires clearer target outlines and better focusing performance as the iterations proceed, with fewer noisy points around the target. In the final image, each scattering point of the target can be clearly identified. Meanwhile, as shown in Table 3, the image entropy obtained with IPCA is smaller than that obtained with the IFFT alone, which indicates better focusing performance.
Figure 7 shows the value of the objective function and the image entropy during the IPCA iterations. When the number of iterations exceeds 80, the value of the objective function becomes stable, that is, the search gradually converges.
From the above discussion, the proposed method converges effectively; meanwhile, the entropy of the reconstructed image is lower and the image quality is higher.
Figure 8a shows the estimated $\mathbf{d} = (d_x, d_y, d_z)$ during the iterations. As the number of iterations increases, $d_z$ gradually approaches and finally converges to the real value, while the values of $d_x$ and $d_y$ still deviate from the real values. This is because the deviations $d_x$ and $d_y$ only translate the image within the imaging plane, which has no effect on the image focusing or the target sparsity. Therefore, $d_x$ and $d_y$ do not affect the imaging quality, and IPCA cannot guarantee their convergence to the real values. Likewise, Figure 8b shows the estimated imaging plane posture angles $(\delta, \mu, \xi)$ versus the number of iterations. As the number of iterations increases, $\delta$ and $\mu$ converge to the real values while $\xi$ does not. This is because an estimation error of the rotation angle $\xi$, whose rotation axis is parallel to the line of sight, only rotates the image within the imaging plane, which has no influence on the image entropy or the target sparsity. Therefore, $\xi$ does not affect the imaging quality and IPCA cannot guarantee its convergence to the real value.
Meanwhile, since PSO is a metaheuristic optimization algorithm, the computation time of IPCA is hard to predict. Therefore, 10 Monte Carlo trials were run to measure it. The computation times of the 10 trials range from 954 s to 2017 s, with an average of 1617 s.
To sum up, the algorithm proposed in this paper obtains more accurate imaging plane parameters, resulting in better image focusing performance and higher imaging quality.

4.2. Simulations with Different Tunable Parameter γ

In Equation (16), the tunable parameter $\gamma$ adjusts the relative weights of the image entropy and the sparsity term. In order to analyze the influence of $\gamma$, the following simulations are carried out.
The simulation parameters are the same as those in Table 1 and Table 2, and $\gamma$ is set to $[0.1, 0.5, 1, 3, 5, 10, 50]$. The imaging results are illustrated in Figure 9, and the corresponding image entropies are given in Table 4.
It can be seen from Figure 9 that the reconstructed images are all well focused and the image entropies are all lower than 4 for the different values of $\gamma$. It can therefore be concluded that $\gamma$ has little influence on the final imaging results and can be selected from a wide range in IPCA.

4.3. Simulations under Different Signal-to-Noise Ratios (SNRs)

In order to verify the robustness of IPCA to noise, the following simulations are carried out at different echo signal-to-noise ratios (SNRs), set from 5 dB to 30 dB. The other simulation parameters are the same as those in Table 1 and Table 2. The simulation results are shown in Figure 10.
The value of the objective function $f(\mathbf{d}, \delta, \mu, \xi)$ during the iterations is illustrated in Figure 10a. As shown, the objective function decreases over the iterations and finally converges. When the SNR is above 15 dB, the final values of $f(\mathbf{d}, \delta, \mu, \xi)$ are close to each other.
The image entropy during the iterations is illustrated in Figure 10b. It decreases over the iterations, and when the SNR is equal to or above 15 dB, the final image entropy is lower than 4, which indicates good image focusing performance.

4.4. Simulations under Different Parameter Ranges

In order to verify that the proposed method tolerates a certain range of deviations of the imaging parameters, the following simulations are conducted. As shown in Section 4.1, the center deviation parameters $d_x$ and $d_y$ of the imaging plane only shift the image within the imaging plane and have no influence on the imaging quality or the focusing performance. In addition, the posture angle $\xi$ only rotates the image within the imaging plane, so the focusing performance is likewise unaffected. Therefore, only $d_z$, $\delta$, and $\mu$ are considered in the following simulations.
In the simulations, $d_z$, $\delta$, and $\mu$ are drawn uniformly at random from $[-\Delta d_z/2, \Delta d_z/2]$, $[-\Delta\delta/2, \Delta\delta/2]$, and $[-\Delta\mu/2, \Delta\mu/2]$, respectively. $\Delta d_z$ is set to $[0.1, 0.2, 0.3, 0.5, 0.8, 1.0]$ m, while $\Delta\delta$ and $\Delta\mu$ are both set to $[1, 2, 3, 5, 8, 10] \times \pi/180$ rad. For each $\Delta d_z$, $\Delta\delta$, and $\Delta\mu$, 10 Monte Carlo trials are run. Meanwhile, when $d_z$, $\delta$, and $\mu$ are varied, $d_x$, $d_y$, and $\xi$ are set to zero.
The errors between the estimated $d_z$, $\delta$, $\mu$ and the real values, and the image entropy obtained from the 10 Monte Carlo trials with different $\Delta d_z$, $\Delta\delta$, $\Delta\mu$, are shown as boxplots in Figure 11. On each box, the central mark indicates the median, and the bottom and top edges of the box indicate the 25th and 75th percentiles, respectively. The whiskers extend to the most extreme data points not considered outliers, and the outliers are plotted individually with red '+' symbols. According to Figure 11a–c, the errors between the final estimated $d_z$ and the real values are within $\pm 0.01$ m; the errors between the final estimated $\delta$ and the real value are within $\pm 0.35^\circ$ when $\Delta\delta$ is smaller than $3^\circ$; and the errors between the final estimated $\mu$ and the real values are within $\pm 0.35^\circ$ when $\Delta\mu$ is smaller than $5^\circ$. In addition, according to Figure 11d–f, the image entropies are all lower than 4 when $\Delta d_z$, $\Delta\delta$, and $\Delta\mu$ are chosen within 1 m, $3^\circ$, and $5^\circ$, respectively.
In short, IPCA obtains final estimates of $d_z$, $\delta$, and $\mu$ with errors of at most $\pm 0.01$ m, $\pm 0.35^\circ$, and $\pm 0.35^\circ$ when $\Delta d_z$, $\Delta\delta$, and $\Delta\mu$ are chosen within 1 m, $3^\circ$, and $5^\circ$, respectively, and the image entropy is smaller than 4.

5. Conclusions

In this paper, the imaging plane mismatch problem of 2D cross-range MIMO radar imaging is analyzed, and the deviations between the estimated and real spatial spectral point locations are deduced, together with the errors of the corresponding spatial spectral values. To solve this problem, IPCA is proposed. Aiming to minimize the image entropy and promote image sparsity, PSO is used in IPCA to obtain an image with better focusing performance. Simulation results verify the effectiveness of the proposed algorithm and its robustness to noise. Furthermore, for different imaging plane parameters, the proposed algorithm can recover the real imaging plane parameters and then perform the imaging plane calibration operation to obtain a focused image. The method proposed in this paper solves the imaging plane mismatch problem and obtains high-quality MIMO radar images.

Author Contributions

All authors contributed extensively to the work presented in this paper. Y.G. proposed the original idea and designed the study; B.Y. and Z.W. performed the simulations and wrote the paper; Y.G. supervised the analysis and edited the manuscript; R.X. provided valuable suggestions to improve this study.

Funding

This research was funded by the National Natural Science Foundation of China under Grants No. 61771446 and No. 61431016.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Constancias, L.; Cattenoz, M.; Brouard, P.; Brun, A. Coherent collocated MIMO radar demonstration for air defence applications. In Proceedings of the 2013 IEEE Radar Conference (RadarCon13), Ottawa, ON, Canada, 29 April–3 May 2013; pp. 1–6. [Google Scholar]
  2. Chen, X.P.; Zeng, X.N. Superiority Analysis of MIMO Radar in Aerial Defence and Anti-missile Battle. Telecommun. Eng. 2009, 10. [Google Scholar]
  3. Fishler, E.; Haimovich, A.; Blum, R.; Chizhik, D.; Cimini, L.; Valenzuela, R. MIMO radar: An idea whose time has come. In Proceedings of the IEEE Radar Conference, Philadelphia, PA, USA, 26–29 April 2004; pp. 71–78. [Google Scholar]
  4. Li, X.R.; Zhang, Z.; Mao, W.X.; Wang, X.M.; Lu, J.; Wang, W.S. A study of frequency diversity MIMO radar beamforming. In Proceedings of the IEEE International Conference on Signal Processing, Beijing, China, 24–28 October 2010. [Google Scholar]
  5. Wang, D.W.; Ma, X.Y.; Su, Y. Two-dimensional imaging via a narrowband MIMO radar system with two perpendicular linear arrays. IEEE Trans. Image Process. 2009, 19, 1269–1279. [Google Scholar] [CrossRef] [PubMed]
  6. Hu, X.; Tong, N.; Song, B.; Ding, S.; Zhao, X. Joint sparsity-driven three-dimensional imaging method for multiple-input multiple-output radar with sparse antenna array. IET Radar Sonar Navig. 2016, 11, 709–720. [Google Scholar] [CrossRef]
  7. Sakamoto, T.; Sato, T.; Aubry, P.J.; Yarovoy, A.G. Ultra-wideband radar imaging using a hybrid of Kirchhoff migration and Stolt FK migration with an inverse boundary scattering transform. IEEE Trans. Antennas Propag. 2015, 63, 3502–3512. [Google Scholar] [CrossRef]
  8. Wang, J.; Cetinkaya, H.; Yarovoy, A. NUFFT based frequency-wavenumber domain focusing under MIMO array configurations. In Proceedings of the 2014 IEEE Radar Conference, Cincinnati, OH, USA, 19–23 May 2014; pp. 1–5. [Google Scholar]
  9. Savelyev, T.; Yarovoy, A. 3D imaging by fast deconvolution algorithm in short-range UWB radar for concealed weapon detection. Int. J. Microw. Wirel. Technol. 2013, 5, 381–389. [Google Scholar] [CrossRef]
  10. Zhuge, X.; Yarovoy, A.G.; Savelyev, T.; Ligthart, L. Modified Kirchhoff migration for UWB MIMO array-based radar imaging. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2692–2703. [Google Scholar] [CrossRef]
  11. Roberts, W.; Stoica, P.; Li, J.; Yardibi, T.; Sadjadi, F.A. Iterative adaptive approaches to MIMO radar imaging. IEEE J. Sel. Top. Signal Process. 2010, 4, 5–20. [Google Scholar] [CrossRef]
  12. Tan, X.; Roberts, W.; Li, J.; Stoica, P. Sparse learning via iterative minimization with application to MIMO radar imaging. IEEE Trans. Signal Process. 2010, 59, 1088–1101. [Google Scholar] [CrossRef]
  13. Ding, L.; Chen, W. MIMO radar sparse imaging with phase mismatch. IEEE Geosci. Remote Sens. Lett. 2014, 12, 816–820. [Google Scholar] [CrossRef]
  14. Yun, L.; Zhao, H.; Du, M. A MIMO radar quadrature and multi-channel amplitude-phase error combined correction method based on cross-correlation. In Proceedings of the Ninth International Conference on Graphic and Image Processing (ICGIP 2017), Qingdao, China, 13–15 October 2017; International Society for Optics and Photonics: Bellingham, WA, USA, 2018; Volume 10615, p. 1061555. [Google Scholar]
  15. Ding, L.; Chen, W.; Zhang, W.; Poor, H.V. MIMO radar imaging with imperfect carrier synchronization: A point spread function analysis. IEEE Trans. Aerosp. Electron. Syst. 2015, 51, 2236–2247. [Google Scholar] [CrossRef]
  16. Liu, C.; Yan, J.; Chen, W. Sparse self-calibration by MAP method for MIMO radar imaging. In Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, 25–30 March 2012; pp. 2469–2472. [Google Scholar]
  17. Tan, Z.; Nehorai, A. Sparse direction of arrival estimation using co-prime arrays with off-grid targets. IEEE Signal Process. Lett. 2014, 21, 26–29. [Google Scholar] [CrossRef]
  18. He, X.; Liu, C.; Liu, B.; Wang, D. Sparse frequency diverse MIMO radar imaging for off-grid target based on adaptive iterative MAP. Remote Sens. 2013, 5, 631–647. [Google Scholar] [CrossRef]
  19. Duan, G.Q.; Dang, W.W.; Xiao, Y.M.; Yi, S. Three-Dimensional Imaging via Wideband MIMO Radar System. IEEE Geosci. Remote Sens. Lett. 2010, 7, 445–449. [Google Scholar] [CrossRef]
  20. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43. [Google Scholar]
  21. Kennedy, J. Particle swarm optimization. In Encyclopedia of Machine Learning; Springer: Boston, MA, USA, 2010; pp. 760–766. [Google Scholar]
  22. Gao, C.; Teh, K.C.; Liu, A. Orthogonal Frequency Diversity Waveform with Range-Doppler Optimization for MIMO Radar. IEEE Signal Process. Lett. 2014, 21, 1201–1205. [Google Scholar] [CrossRef]
  23. Chen, A.L.; Wang, D.W.; Ma, X.Y. An improved BP algorithm for high-resolution MIMO imaging radar. In Proceedings of the 2010 International Conference on Audio, Language and Image Processing, Shanghai, China, 23–25 November 2010; pp. 1663–1667. [Google Scholar]
  24. Yang, X.S. Metaheuristic optimization: Algorithm analysis and open problems. In Proceedings of the 10th International Conference on Experimental Algorithms, Crete, Greece, 5–7 May 2011; pp. 21–32. [Google Scholar]
  25. Liu, L.; Zhou, F.; Tao, M.; Zhang, Z. A Novel Method for Multi-Targets ISAR Imaging Based on Particle Swarm Optimization and Modified CLEAN Technique. IEEE Sens. J. 2015, 16, 97–108. [Google Scholar] [CrossRef]
  26. Luo, C.; Wang, G.; Lu, G.; Wang, D. Recovery of moving targets for a novel super-resolution imaging radar with PSO-SRC. In Proceedings of the 2016 CIE International Conference on Radar (RADAR), Guangzhou, China, 10–13 October 2016. [Google Scholar]
Figure 1. Space spectral imaging model of multiple-input multiple-output (MIMO) radar.
Figure 2. Imaging geometry of MIMO radar with imaging plane mismatch.
Figure 3. Flow chart of the imaging plane calibration algorithm (IPCA).
Figure 4. Target and antenna array. (a) Scattering point distribution of the B727 airplane; (b) locations of the transmit and receive antennas.
Figure 5. Spatial spectrum and reconstructed image. (a) The estimated filling region of the spatial spectrum; (b) the real filling region of the spatial spectrum; (c) image reconstructed using the estimated spatial spectrum values.
Figure 6. Reconstructed images at different iterations: (a–f) images at the 10th, 20th, 30th, 50th, 80th, and 100th iterations, respectively.
Figure 7. Value of the objective function and image entropy during iterations. (a) Value of the objective function during iterations; (b) image entropy during iterations.
Figure 8. Values of $\mathbf{d} = (d_x, d_y, d_z)$ and $(\delta, \mu, \xi)$ during iterations. (a) $\mathbf{d} = (d_x, d_y, d_z)$; (b) $(\delta, \mu, \xi)$.
Figure 9. Reconstructed images with different $\gamma$: (a) $\gamma = 0.1$, (b) $\gamma = 1$, (c) $\gamma = 3$, (d) $\gamma = 5$, (e) $\gamma = 10$, (f) $\gamma = 50$.
Figure 10. (a) Value of the objective function $f(\mathbf{d}, \delta, \mu, \xi)$ during iterations under different SNRs; (b) image entropy during iterations under different SNRs.
Figure 11. Errors between the estimated $d_z$, $\delta$, $\mu$ and the real values, and image entropy, over 10 Monte Carlo trials. (a–c) Errors between the estimated $d_z$, $\delta$, $\mu$ and the real values for different $\Delta d_z$, $\Delta\delta$, and $\Delta\mu$; (d–f) image entropy for different $\Delta d_z$, $\Delta\delta$, and $\Delta\mu$.
Table 1. Simulation parameters.

Parameter | Value
Imaging distance | 1 km
Size of the imaging plane | 60 m × 60 m
Size of the radar antenna array | 60 m × 60 m
Number of transmitters | 31 × 31
Number of receivers | 4
Carrier frequency | 5 GHz
Bandwidth | 200 MHz
Table 2. Imaging plane parameters.

Parameter | Value
Deviation of the center of the imaging plane | $\mathbf{d} = (d_\beta, \theta_\beta, \varphi_\beta) = (1.5\ \mathrm{m}, 0.785\ \mathrm{rad}, 0.340\ \mathrm{rad})$
Deviation of the posture angles of the imaging plane | $(\delta, \mu, \xi) = (\pi/180, \pi/180, \pi/180)\ \mathrm{rad}$
Table 3. Image entropy.

Adopted algorithm | IFFT | The proposed IPCA
Image entropy | 5.66 | 3.729
Table 4. Image entropy with different $\gamma$.

$\gamma$ | 0.1 | 0.5 | 1 | 3 | 5 | 10 | 50
Image entropy | 3.76 | 3.70 | 3.63 | 3.73 | 3.73 | 3.74 | 3.75
