Article

MIMO Radar Imaging Method with Non-Orthogonal Waveforms Based on Deep Learning

Hongbing Li and Qunfei Zhang
1 School of Marine Science and Technology, Northwest Polytechnical University, Xi’an 710072, China
2 Early Warning and Detection Department, Air Force Engineering University, Xi’an 710051, China
* Author to whom correspondence should be addressed.
Algorithms 2022, 15(9), 306; https://doi.org/10.3390/a15090306
Submission received: 12 July 2022 / Revised: 18 August 2022 / Accepted: 26 August 2022 / Published: 28 August 2022

Abstract

Transmitting orthogonal waveforms is the basis for fully exploiting the advantages of MIMO radar imaging technology, but the commonly used waveforms sharing the same frequency band cannot meet the orthogonality requirement, which produces serious coupling noise in traditional imaging methods and degrades the imaging results. In order to effectively suppress the mutual coupling interference caused by non-orthogonal waveforms, a new non-orthogonal waveform MIMO radar imaging method based on deep learning is proposed in this paper: exploiting the powerful nonlinear fitting ability of deep learning, the mapping relationship between the non-orthogonal waveform MIMO radar echo and the ideal target image is automatically learned by constructing a deep imaging network and training it on a large amount of simulated data. The learned imaging network can effectively suppress the coupling interference between non-ideally orthogonal waveforms and improve the imaging quality of MIMO radar. Finally, the effectiveness of the proposed method is verified by experiments with point scattering model data and electromagnetic scattering calculation data.

1. Introduction

Compared with optical imaging, radar imaging offers all-weather detection and can effectively capture the morphological characteristics of targets, so it has a wide range of applications in both military and civil fields. Multiple-input multiple-output (MIMO) radar imaging is a relatively new radar imaging technology. By transmitting mutually orthogonal signals and sorting the signals at the receivers, it obtains far more observation channels and degrees of freedom than the actual number of transmitting and receiving array elements. Compared with traditional synthetic aperture radar (SAR) imaging and inverse synthetic aperture radar (ISAR) imaging, MIMO radar imaging has significant advantages, such as a high data sampling rate, no need for complex motion compensation, and easy-to-realize three-dimensional imaging. In recent years, it has attracted extensive attention, and many studies have addressed imaging models, imaging methods, error correction, and other aspects [1,2,3].
Transmitting orthogonal waveforms is the basis for fully exploiting the advantages of MIMO radar imaging technology. Through time diversity or frequency diversity, ideal orthogonality between different transmitted waveforms is easy to achieve, but this reduces the data rate and spectral efficiency of MIMO radar imaging. At present, transmitting orthogonal waveforms simultaneously in the same frequency band, which makes full use of the MIMO radar system, is a research hotspot in the MIMO radar field. However, studies have shown that ideally orthogonal waveforms require the spectra of different transmitted signals not to overlap [4], and the commonly used waveforms sharing the same time and frequency resources do not meet this requirement. This leads to mutual coupling noise among the different waveform components and high integrated sidelobes after matched filtering, which seriously affects the imaging quality of MIMO radar.
In order to solve this problem, researchers have designed multiple MIMO waveforms from the perspective of waveform design, such as multi-phase coded signals [5], discrete frequency coded signals [6], zero-correlation-zone phase coded signals [7], positive and negative linear frequency modulation signals [8], OFDM chirp signals [9], and space-time coded waveforms [10]. Although the influence of non-orthogonal waveform coupling can be suppressed to a certain extent through waveform optimization, completely orthogonal waveforms sharing the same frequency band do not exist, so waveform design cannot completely solve this problem. From the perspective of signal processing, some scholars have designed receiving filters [11], introduced waveform polarization information [12], and adopted the CLEAN technique [13] to improve the imaging performance with non-orthogonal waveforms, but the actual effect is limited. Sparse recovery or compressed sensing imaging is a new imaging approach proposed in recent years that differs from matched-filter imaging theory. By constructing a linear mapping between the MIMO radar echo signal and the target's one-dimensional range profile [14], two-dimensional profile [15], or three-dimensional profile [16,17], and exploiting the sparsity of the target image, a sparse optimization algorithm directly reconstructs a high-resolution, low-sidelobe target image, thereby avoiding the mutual coupling noise generated when non-orthogonal waveforms are processed by matched filtering. However, this type of sparse-recovery-based non-orthogonal waveform MIMO radar imaging is sensitive to model errors, requires careful selection of the hyper-parameters of the optimization algorithm, and suffers from sparse basis mismatch, resulting in poor sparse imaging performance in practice. In addition, in the case of high-resolution and high-dimensional imaging, the sparse optimization procedure has high computational complexity and large demands on data sampling and storage, which makes it difficult to meet the needs of real-time and low-cost imaging.
In recent years, deep learning technology based on big data has attracted extensive attention. It can automatically mine hidden structural information and internal laws from data, and has strong nonlinear mapping ability. In practical applications, only one forward propagation of the network is needed, so the processing efficiency is very high, and the network parameters are learned automatically through training without manual intervention. In view of these advantages, deep learning has also been introduced into the field of radar imaging and has developed rapidly [18,19]. A complex-valued convolutional neural network (CNN) for sparse ISAR image enhancement is constructed in [20]. In [21], the authors further construct a fully convolutional neural network to improve the imaging quality of sparse ISAR. In [22], a complex-valued CNN is applied to radar image enhancement, and the ideal point scattering center model is used to build data sets to train the network, which improves the resolution of ISAR images. For the same purpose, a deep residual network for enhancing low-resolution ISAR images is constructed in [23]. The authors in [24] introduce a generative model into ISAR imaging and enhance the recovery of weak scattering points with the help of a generative adversarial network. In addition to these image enhancement methods, some researchers unfold traditional sparse reconstruction algorithms into deep networks for radar imaging. In [25], the iterative soft-thresholding algorithm is unfolded into a network for ISAR imaging, which combines a CNN with the sparse reconstruction process to improve the ISAR imaging result. In [26], the authors propose a general method for constructing a hierarchical sparse ISAR imaging network that uses mixed channels to process real and imaginary data separately, which is more suitable for complex-valued echo processing. In [27,28], ISAR autofocus imaging is realized by adding an initial phase compensation operation in the sub-network. In [29], the 2D-ADMM algorithm is unfolded into a network to realize ISAR autofocus imaging under two-dimensional sparse conditions. Beyond ISAR imaging, the authors in [30,31] have studied deep unfolding networks suitable for 2D SAR sparse imaging and 3D SAR sparse imaging, respectively. In [32], the authors combine the deep-unfolding-based learning imaging method with an image domain enhancement method to improve the adaptability of the imaging network to model errors.
It can be seen that, in recent years, deep learning has made great progress in the field of ISAR and SAR imaging and can improve their imaging results to a certain extent. However, there is no public report on non-orthogonal waveform MIMO radar imaging with deep learning. In this paper, a non-ideal orthogonal waveform MIMO radar imaging method based on deep learning is proposed. With the help of the powerful nonlinear fitting ability of deep learning, the mapping relationship between the non-orthogonal waveform MIMO radar echo and the ideal target image is automatically learned by constructing a MIMO radar deep imaging network and training it on a large amount of simulated data. The learned imaging network can effectively suppress the coupling interference between non-ideally orthogonal transmitted waveforms and improve the imaging quality of MIMO radar. The differences between the proposed method and existing methods for suppressing the mutual coupling among different waveforms are summarized in Table 1. The effectiveness and efficiency of the proposed method are verified by experiments on point scattering model data and electromagnetic scattering calculation data.

2. Imaging Model of Non-Orthogonal Waveform MIMO Radar

2.1. MIMO Radar Echo Signal Model

Consider a linear-array MIMO radar with $M$ transmitters and $N$ receivers. The spacings between the transmitting array elements and between the receiving array elements are $Nd$ and $d$, respectively, so the MIMO array can be treated as equivalent to a uniform transceiver array. Let $\varphi_m(t)$ denote the $m$th baseband transmitted signal. Consider a far-field target containing $U$ scattering points, and let $Q$ denote the $q$th scattering point on the target. $T_{mQ}$ and $R_{nQ}$ denote the distances from the scattering point to the $m$th transmitting array element and the $n$th receiving array element, respectively. The baseband echo signal received by the $n$th receiving array element can be expressed as
$$y_n(t) = \sum_{q=0}^{U-1} \sum_{m=0}^{M-1} \tilde{\sigma}_{q,mn}\, \varphi_m(t - \tau_{q,mn}) \tag{1}$$
where $n = 0, 1, \ldots, N-1$, $\tau_{q,mn} = (T_{mQ} + R_{nQ})/c$, $c$ is the speed of light, $\tilde{\sigma}_{q,mn} = \sigma_q \exp\!\left[-\mathrm{j}2\pi(T_{mQ} + R_{nQ})/\lambda\right]$, $\sigma_q$ denotes the scattering coefficient of the scattering point, and $\lambda$ is the working wavelength of the radar. When the line-of-sight angle of the target is known, the echo envelopes of the different transmit-receive channels can be aligned by envelope pre-compensation. In this case, $\tau_{q,mn}$ in (1) no longer changes with $m$ and $n$, so it can be rewritten as $\tau_q$.
Assume that the range imaging scope is $\Delta R$; the number of sampling points in $\Delta R$ is then $K = 2\Delta R f_s / c$ for a sampling frequency $f_s$. Similarly, by sampling $\varphi_m(t)$ at the frequency $f_s$, we obtain a transmit waveform vector $\boldsymbol{\varphi}_m$ of length $L$. Assume that the scattering point $Q$ is located in the $k$th range grid, $k = 0, 1, \ldots, K-1$. The range kernel matrix of the $k$th range grid is defined as
$$\mathbf{T}_k = \begin{pmatrix} \mathbf{0}_{k \times L} \\ \mathbf{I}_{L \times L} \\ \mathbf{0}_{(K-k) \times L} \end{pmatrix} \in \mathbb{R}^{(K+L) \times L},$$
and then the echo signal of the scattering point $Q$ received by the $n$th receiving array element can be expressed in vector form as
$$\mathbf{y}_n^q = \mathbf{T}_k \mathbf{S} \boldsymbol{\theta}_n^k \tag{2}$$
where $\boldsymbol{\theta}_n^k = [\tilde{\sigma}_{q,0n}, \tilde{\sigma}_{q,1n}, \ldots, \tilde{\sigma}_{q,(M-1)n}]^{\mathrm{T}} \in \mathbb{C}^{M \times 1}$ and $\mathbf{S} = [\boldsymbol{\varphi}_0, \boldsymbol{\varphi}_1, \ldots, \boldsymbol{\varphi}_{M-1}] \in \mathbb{C}^{L \times M}$.
The target echo is the superposition of the echoes of all scattering points on the target. Let $\mathbf{T} = [\mathbf{T}_0, \mathbf{T}_1, \ldots, \mathbf{T}_{K-1}] \in \mathbb{R}^{(K+L) \times LK}$ and define $\mathbf{T}_{\mathbf{S}} \triangleq [\mathbf{T}_0\mathbf{S}, \mathbf{T}_1\mathbf{S}, \ldots, \mathbf{T}_{K-1}\mathbf{S}] \in \mathbb{C}^{(K+L) \times MK}$; then the echo signal of the whole imaging scene received at $R_n$ can be expressed as
$$\mathbf{y}_n = \mathbf{T}_{\mathbf{S}} \boldsymbol{\theta}_n + \boldsymbol{\varepsilon} \tag{3}$$
where $\boldsymbol{\theta}_n = [(\boldsymbol{\theta}_n^0)^{\mathrm{T}}, (\boldsymbol{\theta}_n^1)^{\mathrm{T}}, \ldots, (\boldsymbol{\theta}_n^{K-1})^{\mathrm{T}}]^{\mathrm{T}} \in \mathbb{C}^{MK \times 1}$ is the complex range profile of the target corresponding to $R_n$, and $\boldsymbol{\varepsilon}$ denotes the received noise.
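To make the vectorized model concrete, the following NumPy sketch (our own illustration, with hypothetical function names) builds the range kernel matrices $\mathbf{T}_k$ and simulates the echo of Equation (3) by summing the per-grid contributions $\mathbf{T}_k\mathbf{S}\boldsymbol{\theta}_n^k$; it is a direct, unoptimized rendering of the model.

```python
import numpy as np

def range_kernel(k, K, L):
    """Range kernel matrix T_k of Section 2.1: a (K+L) x L matrix whose identity
    block starts at row k, so that T_k @ phi places a waveform copy at the kth
    range grid of the received echo."""
    T_k = np.zeros((K + L, L))
    T_k[k:k + L, :] = np.eye(L)
    return T_k

def mimo_echo(S, theta_n, K, noise_std=0.0, rng=None):
    """Echo at the nth receiver, y_n = T_S theta_n + eps (Equation (3)).
    S       : L x M complex matrix of sampled waveforms [phi_0, ..., phi_{M-1}]
    theta_n : length-MK complex vector, stacked [theta_n^0; ...; theta_n^{K-1}]"""
    L, M = S.shape
    rng = np.random.default_rng() if rng is None else rng
    y_n = np.zeros(K + L, dtype=complex)
    for k in range(K):
        theta_nk = theta_n[k * M:(k + 1) * M]          # M coefficients of grid k
        y_n += range_kernel(k, K, L) @ (S @ theta_nk)  # T_k S theta_n^k
    if noise_std > 0:                                  # additive complex noise eps
        noise = rng.standard_normal(K + L) + 1j * rng.standard_normal(K + L)
        y_n += noise_std * noise / np.sqrt(2)
    return y_n
```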

2.2. Coupling Interference Induced by Non-Orthogonal Waveforms

To realize MIMO radar imaging, the waveform components of the MIMO radar echo must first be separated. The matched filter method is usually adopted: a bank of matched filters separates the signals of the different waveforms from the mixed echo and simultaneously reconstructs the complex range profile of the target. Ignoring the influence of noise, the separation result of the echo after the $m$th matched filter can be expressed as
$$\mathbf{y}_{mn} = (\mathbf{T}\boldsymbol{\varphi}_m)^{*} \mathbf{y}_n = (\mathbf{T}\boldsymbol{\varphi}_m)^{*} \mathbf{T}_{\mathbf{S}} \boldsymbol{\theta}_n \tag{4}$$
$\mathbf{y}_{mn}$ denotes the waveform component in the echo corresponding to the $m$th transmitted waveform $\boldsymbol{\varphi}_m$, where
$$\mathbf{T}\boldsymbol{\varphi}_m = [\mathbf{T}_0, \mathbf{T}_1, \ldots, \mathbf{T}_{K-1}]\boldsymbol{\varphi}_m = [\mathbf{T}_0\boldsymbol{\varphi}_m, \mathbf{T}_1\boldsymbol{\varphi}_m, \ldots, \mathbf{T}_{K-1}\boldsymbol{\varphi}_m] \in \mathbb{C}^{(K+L) \times K} \tag{5}$$
Substituting Equation (5) into Equation (4), we can obtain
$$\mathbf{y}_{mn} = \mathbf{D}_m \boldsymbol{\theta}_n \tag{6}$$
where $\mathbf{D}_m = [\mathbf{T}_0\boldsymbol{\varphi}_m, \mathbf{T}_1\boldsymbol{\varphi}_m, \ldots, \mathbf{T}_{K-1}\boldsymbol{\varphi}_m]^{*} [\mathbf{T}_0\mathbf{S}, \mathbf{T}_1\mathbf{S}, \ldots, \mathbf{T}_{K-1}\mathbf{S}] \in \mathbb{C}^{K \times MK}$.
In order to analyze the reconstruction effect, consider a point target containing only one scattering point $Q$. Assuming that $Q$ is located on the $k$th range grid, the scattering coefficient vector to be recovered can be expressed as $\boldsymbol{\theta}_n = [\mathbf{0}, \ldots, (\boldsymbol{\theta}_n^k)^{\mathrm{T}}, \ldots, \mathbf{0}]^{\mathrm{T}} \in \mathbb{C}^{MK \times 1}$, where $\boldsymbol{\theta}_n^k = [\tilde{\sigma}_{q,0n}, \tilde{\sigma}_{q,1n}, \ldots, \tilde{\sigma}_{q,(M-1)n}]^{\mathrm{T}} \in \mathbb{C}^{M \times 1}$. In this case, (6) can be further expressed as
$$\mathbf{y}_{mn} = [\mathbf{T}_0\boldsymbol{\varphi}_m, \ldots, \mathbf{T}_{K-1}\boldsymbol{\varphi}_m]^{*} [\mathbf{T}_0\mathbf{S}, \ldots, \mathbf{T}_{K-1}\mathbf{S}]\, [\mathbf{0}, \ldots, (\boldsymbol{\theta}_n^k)^{\mathrm{T}}, \ldots, \mathbf{0}]^{\mathrm{T}} = [\mathbf{T}_0\boldsymbol{\varphi}_m, \ldots, \mathbf{T}_{K-1}\boldsymbol{\varphi}_m]^{*} \mathbf{T}_k \mathbf{S} \boldsymbol{\theta}_n^k \tag{7}$$
The element of $\mathbf{y}_{mn}$ at the $k$th range grid, where the target is located, is
$$y_{mn}(k) = (\mathbf{T}_k\boldsymbol{\varphi}_m)^{*} \mathbf{T}_k \mathbf{S} \boldsymbol{\theta}_n^k = \boldsymbol{\varphi}_m^{*} (\mathbf{T}_k)^{\mathrm{T}} \mathbf{T}_k \mathbf{S} \boldsymbol{\theta}_n^k = \boldsymbol{\varphi}_m^{*} \mathbf{S} \boldsymbol{\theta}_n^k = [\boldsymbol{\varphi}_m^{*}\boldsymbol{\varphi}_0, \boldsymbol{\varphi}_m^{*}\boldsymbol{\varphi}_1, \ldots, \boldsymbol{\varphi}_m^{*}\boldsymbol{\varphi}_{M-1}]\, [\tilde{\sigma}_{q,0n}, \tilde{\sigma}_{q,1n}, \ldots, \tilde{\sigma}_{q,(M-1)n}]^{\mathrm{T}} \tag{8}$$
When the transmitted waveforms satisfy the ideal orthogonality condition ($\boldsymbol{\varphi}_m^{*}\boldsymbol{\varphi}_{m'} = 1$ when $m' = m$ and $\boldsymbol{\varphi}_m^{*}\boldsymbol{\varphi}_{m'} = 0$ when $m' \neq m$), we obtain $y_{mn}(k) = \tilde{\sigma}_{q,mn}$. This means that the separation result contains only the target signal corresponding to the $m$th transmitted waveform and no components of the other waveforms. MIMO radar imaging can then be realized by collecting the complex scattering coefficients $\tilde{\sigma}_{q,mn}$ obtained from the different transmit-receive channels and compressing them in the azimuth direction.
The above analysis assumes that the transmitted waveforms are ideally orthogonal, but in practice, waveforms occupying the same time and frequency cannot meet this condition. As can be seen from (8), the separation result then contains not only the signal component $\tilde{\sigma}_{q,mn}$ corresponding to the $m$th transmitted waveform, but also a large amount of coupling interference from the other transmitted waveforms, which leads to a serious decline in the reconstruction quality of the target image.
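The coupling term in (8) is easy to illustrate numerically. In the sketch below (our illustration; the random polyphase codes are an assumption and are not the waveforms used in Section 4), the Gram matrix of M = 4 unit-energy, same-frequency waveforms of length 450 has a unit diagonal but clearly non-zero off-diagonal entries, and these off-diagonal values are exactly the coefficients through which the other waveforms' scattering coefficients leak into $y_{mn}(k)$.

```python
import numpy as np

# Numeric illustration of Equation (8): with same-frequency waveforms, the cross
# terms phi_m^* phi_m' (m != m') are not zero, so the matched-filter output at
# the target's range grid mixes contributions from all M waveforms.
rng = np.random.default_rng(0)
M, L = 4, 450                                    # number of waveforms and code length
phases = rng.uniform(0, 2 * np.pi, size=(L, M))  # random polyphase codes (illustrative only)
S = np.exp(1j * phases) / np.sqrt(L)             # unit-energy waveform matrix [phi_0, ..., phi_{M-1}]

G = S.conj().T @ S                               # G[m, m'] = phi_m^* phi_m'
print(np.round(np.abs(G), 3))
# The diagonal is 1 (autocorrelation peak); the off-diagonal magnitudes are about
# 1/sqrt(L) ~ 0.05, so y_mn(k) = sigma_q,mn + sum_{m' != m} (phi_m^* phi_m') sigma_q,m'n.
```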

3. Imaging Method of Non-Orthogonal Waveform MIMO Radar Based on Deep Learning

In this paper, deep learning is introduced to solve the problem of non-orthogonal waveform coupling suppression, and a new non-orthogonal waveform MIMO radar imaging framework based on a deep network is proposed. The main idea is to realize the mapping from the complex echo of non-orthogonal waveform MIMO radar to a high-resolution target image through a deep network; the key is to design a network structure suited to non-orthogonal waveform MIMO radar imaging. Network structures obtained by deeply unfolding sparse reconstruction algorithms can handle complex echoes effectively, but for non-orthogonal MIMO radar imaging the dimension of the sparse reconstruction model is too high and such a network is difficult to realize. Convolutional neural networks and recurrent neural networks, widely used in general deep learning, do not match the data characteristics of the non-orthogonal waveform MIMO radar echo, and they struggle to learn the complicated mapping from the complex echo to the target image. The proposed framework therefore closely integrates MIMO radar imaging theory with advanced deep network modules: the imaging theory provides the basis for network construction and parameter initialization, while the deep network modules improve the nonlinear fitting ability of the imaging network. The overall network structure is divided into two parts, called the 'echo transformation module' and the 'feature imaging module'. The echo transformation module extracts handcrafted features: it transforms the complex echo using traditional MIMO radar imaging theory (matched filtering, sparse recovery, time-frequency analysis, etc.), converting the complex echo domain, in which the information is only implicitly expressed, into a feature domain in which it is explicitly expressed. The feature imaging module performs automatic feature extraction: it uses advanced building blocks from the deep learning field to form a network that completes the nonlinear mapping from echo features to the target image. The overall structure of the framework is shown in Figure 1 and mainly includes three parts: training data generation based on the MIMO radar imaging model, deep network forward propagation, and error back-propagation based on the reconstruction loss. Based on this framework, a non-orthogonal waveform MIMO radar imaging method based on deep learning is designed and introduced below from three aspects: the deep network structure, training data generation, and the network training strategy.
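A minimal structural sketch of this two-module composition, assuming a PyTorch implementation, is given below; the class name MIMOLINet and the constructor arguments are our own naming, not an API from the paper.

```python
import torch
import torch.nn as nn

class MIMOLINet(nn.Module):
    """Skeleton of the proposed framework: a fixed, physics-based echo
    transformation module followed by a learnable feature imaging CNN.
    The constructor arguments are placeholders for the two modules described
    in the text; this is a structural sketch, not the authors' code."""

    def __init__(self, echo_transform, feature_imaging):
        super().__init__()
        self.echo_transform = echo_transform    # matched filtering / azimuth compression, no learnable parameters
        self.feature_imaging = feature_imaging  # encoder-decoder CNN of Section 3.1

    def forward(self, echo):
        # echo: complex MIMO radar echo tensor; transform it into a coarse complex image
        coarse = self.echo_transform(echo)      # 'explicit' feature domain
        # stack real and imaginary parts as two input channels for the CNN
        x = torch.stack((coarse.real, coarse.imag), dim=1)
        return self.feature_imaging(x)          # reconstructed target image
```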

3.1. Deep Network Structure

According to the specific characteristics of non-orthogonal waveform MIMO radar imaging, this paper constructs a MIMO radar learning imaging network (MIMO-LI-Net), whose structure is shown in Figure 2. The echo transformation module is constructed according to the range-Doppler imaging algorithm and includes a matched filter network and an azimuth compression network: the matched filter network performs range compression of the target echo using the set of non-orthogonal transmitted waveforms as input, and the azimuth compression network, configured according to the MIMO array structure, completes the two-dimensional compression of the target echo through echo rearrangement, azimuth compression, and related processing. In the specific implementation, a uniform equivalent transceiver array and a frequency domain matched filter are designed, which improves the efficiency of the echo transformation with the help of the fast Fourier transform (FFT). It should be noted that, due to the coupling interference between non-orthogonal waveforms and possible migration through resolution cells, this two-dimensional compression result cannot achieve the ideal imaging effect. Nevertheless, the echo transformation process still extracts effective target features from the complex MIMO radar echo, which provides a good initial input for the subsequent feature imaging process. The feature imaging module is constructed from a convolutional neural network and can be divided into an encoding part and a decoding part, as shown in Figure 2. In the figure, Conv and ConvTrans represent the convolution layer and deconvolution layer, respectively, and (3 × 3, 1) indicates that the size of the convolution or deconvolution kernel is 3 × 3 with a stride of 1. Each convolution or deconvolution operation is followed by batch normalization (BN) and a ReLU activation function. The network contains five convolution layers, whose numbers of input/output channels are 2/32, 32/64, 64/64, 64/64, and 64/128, respectively, and five deconvolution layers, whose numbers of input/output channels are 128/64, 64/64, 64/64, 64/32, and 32/1, respectively. MaxPool and MaxUnpool represent the maximum pooling layer and the maximum unpooling layer, respectively, and (2 × 2, 2) indicates that the pooling or unpooling window is 2 × 2 with a stride of 2. The maximum pooling layers and the corresponding maximum unpooling layers share the same location indices; the network includes four pooling layers and four unpooling layers. Unlike general natural image processing, the real and imaginary parts of the echo transformation result are fed to the CNN as two separate channels. Compared with inputting single-channel magnitude data, the dual-channel input contains the complete information of the transformed echo, which is conducive to image feature extraction. In addition, the feature imaging module establishes skip connections between the encoding and decoding parts of the network, which helps retain image details in the reconstruction process.
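The following PyTorch sketch assembles the feature imaging module with the stated channel counts, 3 × 3 kernels, BN + ReLU blocks, shared pooling indices, and encoder-decoder skip connections. The exact placement of the four pooling layers and the form of the skip connections (element-wise addition here) are assumptions on our part, since they are only shown graphically in Figure 2; treat this as a minimal sketch rather than the authors' implementation.

```python
import torch
import torch.nn as nn

def conv_bn_relu(c_in, c_out):
    # 3x3 convolution with stride 1, followed by batch normalization and ReLU
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, stride=1, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

def deconv_bn_relu(c_in, c_out):
    # 3x3 transposed convolution (deconvolution) with stride 1, then BN and ReLU
    return nn.Sequential(
        nn.ConvTranspose2d(c_in, c_out, kernel_size=3, stride=1, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class FeatureImagingModule(nn.Module):
    """Encoder-decoder CNN with shared max-pooling indices and skip connections.
    Channel counts follow the text (2/32, 32/64, 64/64, 64/64, 64/128 and
    128/64, 64/64, 64/64, 64/32, 32/1); pooling placement and additive skip
    connections are assumptions."""

    def __init__(self):
        super().__init__()
        enc = [(2, 32), (32, 64), (64, 64), (64, 64), (64, 128)]
        dec = [(128, 64), (64, 64), (64, 64), (64, 32), (32, 1)]
        self.enc = nn.ModuleList(conv_bn_relu(ci, co) for ci, co in enc)
        self.dec = nn.ModuleList(deconv_bn_relu(ci, co) for ci, co in dec)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
        self.unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)

    def forward(self, x):
        # x: (batch, 2, H, W) -- real and imaginary parts of the echo transformation result
        indices, skips = [], []
        for i, layer in enumerate(self.enc):
            x = layer(x)
            if i < 4:                       # pool after the first four conv blocks
                skips.append(x)             # keep pre-pooling features for skip connections
                x, idx = self.pool(x)
                indices.append(idx)
        for i, layer in enumerate(self.dec):
            x = layer(x)
            if i < 4:                       # unpool with the matching pooling indices
                x = self.unpool(x, indices[-(i + 1)])
                x = x + skips[-(i + 1)]     # skip connection from encoder to decoder
        return x                            # (batch, 1, H, W) reconstructed image
```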

3.2. Training Data Generation

In order to train the above MIMO-LI-Net, sufficient training samples need to be generated. Considering that it is difficult to obtain a large amount of measured MIMO radar data in practice, this paper constructs the training data set from simulated data. As can be seen from Figure 2, the training data include the MIMO radar target echo and the ideal target image. The simulated MIMO radar target echo is generated from the parameterized attribute scattering center model and the MIMO radar echo signal model in Section 2.1. For a target with multiple scattering centers, its frequency-domain echo at the $n$th receiver of the MIMO radar can be generated by the following formula
$$\mathbf{y}_n = \sum_{m} \sum_{q} \mathbf{e}_{q,m,n} \odot \boldsymbol{\varphi}_m \tag{9}$$
where $\mathbf{y}_n$ is the vector of echo samples at the different frequencies, $\boldsymbol{\varphi}_m$ is the frequency-domain vector form of the transmitted waveform $\varphi_m(t)$, $\mathbf{e}_{q,m,n}$ is the frequency-dependent vector of the scattering field generated by the $q$th scattering center in the channel formed by the $m$th transmitter and the $n$th receiver, and $\odot$ denotes the element-wise multiplication of two vectors. Considering the colocated MIMO imaging scene, the scattering center has stable scattering characteristics within the observation angle range of the MIMO array, and according to the attribute scattering center model [33], $\mathbf{e}_{q,m,n}$ can be generated by the following formula
$$\mathbf{e}_{q,m,n} = \sigma_q \left(\mathrm{j}\frac{\mathbf{f}}{f_c}\right)^{\alpha_q} \mathrm{sinc}\!\left(\frac{2\pi \mathbf{f}}{c} L_q \sin\!\left(\phi_{q,m,n} - \bar{\phi}_q\right)\right) \exp\!\left(-2\pi \mathbf{f}\,\gamma_q \sin\phi_{q,m,n}\right) \exp\!\left(-\mathrm{j}\frac{4\pi \mathbf{f}}{c}\Delta R_{q,m,n}\right) \tag{10}$$
where $\mathbf{f}$ is the vector containing all the frequency points of the echo, $f_c$ is the center frequency, and $\phi_{q,m,n}$ is the azimuth angle between the $q$th scattering center and the equivalent transceiver element of the $m$th transmitter and $n$th receiver. $\Delta R_{q,m,n}$ is the sum of the path lengths from the $q$th scattering center to the $m$th transmitter and to the $n$th receiver. $\{\sigma_q, \alpha_q, L_q, \bar{\phi}_q, \gamma_q\}$ are the parameters of the $q$th scattering center: $\sigma_q$ is the scattering coefficient; $\alpha_q$ describes the frequency dependence related to the scatterer geometry; $L_q$ and $\bar{\phi}_q$ are the length and orientation angle of a distributed scattering center; $\gamma_q$ describes the aspect dependence of a localized scattering center. Through the above method, MIMO radar echoes of targets containing both localized and distributed scattering mechanisms can be simulated.
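A direct NumPy rendering of Equation (10) for a single scattering center and channel might look as follows; the sign conventions follow the standard attribute scattering center model [33], and the function signature is our own.

```python
import numpy as np

def asc_echo(f, f_c, c, sigma_q, alpha_q, L_q, phi_bar_q, gamma_q,
             phi_qmn, delta_R_qmn):
    """Frequency-domain scattering vector e_{q,m,n} of one attributed scattering
    center for one transmit/receive channel (Equation (10)).
    f : array of echo frequency points (Hz); the remaining arguments are the
    scalars defined in the text."""
    freq_term = (1j * f / f_c) ** alpha_q                        # frequency dependence (j f / f_c)^alpha
    arg = 2 * np.pi * f / c * L_q * np.sin(phi_qmn - phi_bar_q)  # distributed-center sinc argument
    sinc_term = np.sinc(arg / np.pi)                             # np.sinc(x) = sin(pi x)/(pi x)
    aspect_term = np.exp(-2 * np.pi * f * gamma_q * np.sin(phi_qmn))  # localized-center aspect dependence
    phase_term = np.exp(-1j * 4 * np.pi * f / c * delta_R_qmn)   # two-way propagation phase
    return sigma_q * freq_term * sinc_term * aspect_term * phase_term
```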
In addition to the MIMO radar target echo, network training also requires the corresponding output target image. In this paper, the imaging result obtained under ideal orthogonal waveform conditions is used as the target image. The specific procedure is as follows: the echo of the equivalent transceiver array is used in place of the MIMO array echo in Formula (9), so that the echoes of the different transmitted waveforms are not aliased and no coupling interference arises; the back-projection algorithm is then used to image the equivalent transceiver array echo, which yields a focused image even in the presence of migration through resolution cells; finally, the sidelobes are suppressed by two-dimensional windowing, and the image resolution is improved by doubling the array length and the signal bandwidth.
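For reference, a minimal frequency-domain back-projection sketch of this label-generation step is given below. It omits the two-dimensional windowing and the array/bandwidth doubling described above, and the assumed data layout (per-channel frequency-domain echoes with known transmitter and receiver positions) is ours, not the authors'.

```python
import numpy as np

def backprojection(echo_f, freqs, tx_pos, rx_pos, grid_x, grid_y):
    """Frequency-domain back-projection over a 2D imaging grid.
    echo_f[ch, k] : echo of channel ch at frequency freqs[k] (Hz);
    tx_pos[ch], rx_pos[ch] : (x, y) positions of that channel's transmitter/receiver;
    grid_x, grid_y : image grid coordinates (m)."""
    c = 3e8
    X, Y = np.meshgrid(grid_x, grid_y)
    image = np.zeros(X.shape, dtype=complex)
    for ch in range(echo_f.shape[0]):
        # two-way path from the transmitter to each pixel and back to the receiver
        d_tx = np.hypot(X - tx_pos[ch][0], Y - tx_pos[ch][1])
        d_rx = np.hypot(X - rx_pos[ch][0], Y - rx_pos[ch][1])
        delay = (d_tx + d_rx) / c
        for k, f in enumerate(freqs):
            # remove the propagation phase and accumulate coherently
            image += echo_f[ch, k] * np.exp(1j * 2 * np.pi * f * delay)
    return np.abs(image)
```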

3.3. Network Training Strategy

Through effective training, the imaging network can automatically learn the mapping from the non-orthogonal waveform echo to the ideal target image from a large number of training samples. In MIMO-LI-Net, the echo transformation module contains no learnable parameters, so only the CNN parameters in the feature imaging module need to be learned. In this paper, the mean square error between the reconstructed target image and the ideal target image is used as the loss function, and the back-propagation algorithm is used to train the network.
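A minimal training sketch under these choices is shown below. It builds on the MIMOLINet and FeatureImagingModule sketches above, assumes a hypothetical train_loader yielding echo/ideal-image pairs, and uses the hyperparameters reported in Section 4 (Adam, learning rate 0.0001, batch size 32, 100 epochs); it is not the authors' training code.

```python
import torch
import torch.nn as nn

# Only the CNN parameters of the feature imaging module are optimized; the echo
# transformation module is fixed. 'echo_transform' and 'train_loader' are
# placeholders (the routine from Section 3.1 and a DataLoader of echo/ideal-image
# pairs, respectively); hyperparameters follow Section 4.
net = MIMOLINet(echo_transform, FeatureImagingModule())
criterion = nn.MSELoss()                       # mean square error reconstruction loss
optimizer = torch.optim.Adam(net.feature_imaging.parameters(), lr=1e-4)

for epoch in range(100):                       # 100 training epochs
    for echo, ideal_image in train_loader:     # mini-batches of size 32
        optimizer.zero_grad()
        recon = net(echo)                      # forward propagation
        loss = criterion(recon, ideal_image)   # MSE to the ideal target image
        loss.backward()                        # error back-propagation
        optimizer.step()
```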

4. Experimental Results

This section first trains the proposed imaging network on generated simulation data and then verifies the proposed imaging method using two sets of test data. The simulation adopts a MIMO linear array with 4 transmitters and 20 receivers. The intervals between transmitting and receiving array elements are 3 m and 60 m, respectively, the radar center frequency is 10 GHz, and the transmitted waveforms are 4 polyphase-coded signals with 450 symbols. The signal bandwidth is 300 MHz and the time duration is 1.5 μs. The range scope is set to 50 m, which corresponds to a time duration of 0.333 μs. The sampling interval in frequency is 0.545 MHz and the frequency scope is set to 768 MHz, which corresponds to a time sampling interval of 1.3 ns; therefore, the number of frequency points is 768 ÷ 0.545 ≈ 1410. The autocorrelation sidelobes of the four signals are low, but the cross-correlation sidelobes are high, which means the waveforms do not meet ideal orthogonality. The target is placed 5 km from the center of the array, and the corresponding theoretical range and azimuth imaging resolutions are 0.5 m and 0.625 m, respectively. The training samples are generated by the method in Section 3.2. The number of scatterers of each target is drawn uniformly at random from the range 0~500, and all scatterers lie within a 50 m × 50 m area. The scattering coefficient of each scatterer is drawn from the standard complex Gaussian distribution. Each scatterer is set as a distributed or localized scattering center with equal probability, and the length of a distributed scattering center is a uniform random number in the range 0~2 m. The size of the ideal target image is set to 256 × 256. A total of 40,000 training sample pairs of MIMO radar echoes and target images are generated to form the training data set for the MIMO-LI-Net shown in Figure 2. The training platform is an Intel Core i9-9900 CPU and an NVIDIA GeForce RTX 2080 Ti GPU with 11 GB of memory. Under the PyTorch framework, the Adam optimizer is used for training with a learning rate of 0.0001. The batch size is set to 32, which occupies 4.8 GB of GPU memory. One training epoch takes around 13 min, and the number of training epochs is set to 100, so the total training time is around 21.67 h. In the following, the trained imaging network is verified on constructed test data.
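As a consistency check on the stated parameters (assuming the frequency sampling interval is the reciprocal of the total echo duration, i.e., the pulse length plus the range-window delay, which is our reading of the setup):

$$\Delta f = \frac{1}{1.5\,\mu\mathrm{s} + 0.333\,\mu\mathrm{s}} \approx 0.545\ \mathrm{MHz}, \qquad N_f = \frac{768\ \mathrm{MHz}}{0.545\ \mathrm{MHz}} \approx 1410, \qquad \rho_r = \frac{c}{2B} = \frac{3\times 10^{8}\ \mathrm{m/s}}{2 \times 300\ \mathrm{MHz}} = 0.5\ \mathrm{m}.$$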

4.1. Experiment of Point Target Simulation Data

Firstly, an aircraft target model composed of ideal scattering points is considered. The imaging parameters are consistent with the training data. The method in Section 3.2 is used to generate the MIMO radar echo and the ideal target image, which is shown in Figure 3. Four existing MIMO radar imaging methods are selected for comparison. (1) The matched filtering method: range compression is carried out based on Equation (4), and FFT is used to realize azimuth compression. (2) The 1D-SR method: a one-dimensional sparse recovery (1D-SR) model is constructed by the method proposed in [16] to directly solve for the two-dimensional image of the target. It should be noted that, because of the high dimension of the one-dimensional vector model (in this experiment, the dimension of the sensing matrix reaches 11,000 × 8000), the sparse solving algorithm in [16], which involves matrix inversion, is too inefficient. Therefore, the more efficient orthogonal matching pursuit (OMP) algorithm is used to solve the sparse model, and the sparsity for OMP is set to 400. (3) The MB-SL0 method: the MB-SL0 algorithm proposed in [14] is used for range compression, and azimuth compression is then realized through FFT. (4) The CNN method: a CNN network is trained for MIMO radar imaging. The network input is the imaging result of the matched filtering method, and the network structure is similar to MIMO-LI-Net but does not include skip connections between network layers. The training method is consistent with that of the proposed imaging network.
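For reference, a generic OMP routine of the kind used for the 1D-SR comparison is sketched below (our sketch of the standard algorithm, not the exact implementation used in the experiments):

```python
import numpy as np

def omp(A, y, sparsity):
    """Generic orthogonal matching pursuit: greedily select columns of the
    sensing matrix A and least-squares fit on the selected support.
    A : (P, Q) complex sensing matrix, y : length-P measurement vector."""
    residual = y.copy()
    support = []
    for _ in range(sparsity):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(A.conj().T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares solution on the current support, then update the residual
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = x_s
    return x
```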
The imaging results of the four comparison methods are shown in Figure 4. As can be seen from Figure 4a, due to the non-ideal orthogonality of the transmitted waveforms, there is serious mutual coupling noise in the imaging result of the matched filtering method. Figure 4b shows the imaging result of the 1D-SR method. The 1D-SR method suppresses the mutual coupling noise well, but, due to the basis mismatch of the sparse reconstruction model, the scattering points in the sparsely reconstructed image are somewhat shifted in position compared with the ideal image. Moreover, the sparsity parameter of the OMP algorithm must be set manually: when the sparsity is too small, scattering points are lost, and when it is too large, the computational complexity increases sharply. As can be seen from Figure 4c, the MB-SL0 method effectively suppresses the range coupling interference caused by the non-orthogonality of the waveforms, but the sidelobes are high in the azimuth direction. Moreover, because range and azimuth are processed separately, scattering points far from the scene center in azimuth that undergo migration through range cells appear defocused in the imaging result. Figure 4d shows the imaging result of the CNN method, which suppresses the coupling interference caused by the non-orthogonal waveforms fairly well. However, due to the lack of a targeted network structure design, the network recovers weak scattering points poorly. The imaging result of the trained MIMO-LI-Net is shown in Figure 5. It can be seen that most scattering points on the target are well recovered and the coupling interference between non-orthogonal waveforms is effectively suppressed.
In order to quantitatively analyze the above imaging results, the normalized mean square error (NMSE) and the target-to-background ratio (TBR) [32] are used as evaluation indicators; for the TBR, the threshold is set to 0.0001. The index values of each method are listed in Table 2. The matched filtering method gives the worst indices, while the proposed method achieves the best NMSE and TBR. To compare computational efficiency, Table 2 also lists the running time of each method on the CPU. The matched filtering method is the most efficient. The running time of the 1D-SR method is the longest, reaching nearly 100 s, and as the dimension of the imaging model and the sparsity of the OMP algorithm increase, its demands on storage space and computing resources grow sharply. The running times of the MB-SL0 method, the CNN method, and the proposed method are much lower, all on the order of 0.1 s. Compared with the MB-SL0 method, the running efficiency of the CNN method and the proposed method is less affected by increases in the imaging model dimension.
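The two indicators can be computed as in the following sketch. The NMSE follows the usual normalized squared-error definition; the exact TBR formula of [32] is not reproduced in the text, so the version below, which takes the target region from the ideal image using the stated 0.0001 threshold, is our assumption.

```python
import numpy as np

def nmse(img, ref):
    """Normalized mean square error between a reconstructed image and the ideal
    reference image (both peak-normalized magnitudes)."""
    img = np.abs(img) / np.abs(img).max()
    ref = np.abs(ref) / np.abs(ref).max()
    return np.sum((img - ref) ** 2) / np.sum(ref ** 2)

def tbr(img, ref, threshold=1e-4):
    """Target-to-background ratio in dB. The target region is taken as the pixels
    of the ideal image whose normalized magnitude exceeds the threshold (0.0001
    in Section 4.1); this definition is an assumption, not the formula of [32]."""
    img = np.abs(img) / np.abs(img).max()
    mask = (np.abs(ref) / np.abs(ref).max()) > threshold
    return 10 * np.log10(np.sum(img[mask] ** 2) / np.sum(img[~mask] ** 2))
```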

4.2. Experiment on Electromagnetic Calculation Data

In this part, the performance of the proposed imaging method is further tested using electromagnetic calculation data. The CAD model of the target is shown in Figure 6. Physical optics and the method of moments are used as the electromagnetic scattering calculation methods to generate the target scattering field, and the MIMO radar echo of the target is then generated using Equation (9). The imaging results of the comparison methods and the proposed imaging method are shown in Figure 7 and Figure 8, respectively. It can be seen that, for a target that does not follow the ideal point scattering model, the proposed method can still effectively suppress the mutual coupling noise of the non-orthogonal waveforms while retaining the main scattering centers of the target. Compared with the results of the comparison methods, the result of the proposed method has lower sidelobes and fewer false points.

5. Discussion

In this manuscript, we mainly focus on an error specific to MIMO array radar, i.e., waveform coupling. Besides this specific error, there are general errors common to all array radars, including channel gain and phase errors, array element position errors, carrier errors, imaging model errors, noise, etc. Since both array element position errors and carrier errors eventually manifest as channel phase errors, they can be treated as channel phase errors. For MIMO array imaging, channel phase errors usually have a large impact, and a large phase error may even lead to imaging failure. Noise usually has a slight influence and reduces the signal-to-noise ratio of the image to some extent. The mutual coupling among waveforms raises the range sidelobes and eventually degrades the image quality. Imaging model errors may result in a deformed image, a reduced signal-to-noise ratio, and so on. In this work, we propose a new method to deal with the waveform coupling error. For the random general errors, error correction methods are still necessary, and many studies have focused on correcting such errors, for example the method in [2]. One possible way to combine existing error correction methods with the proposed imaging network is to embed the correction methods into the 'echo transformation module' in Figure 2; in this way, the correction methods can be applied directly with little change. Another potential approach, following [27,28,29], is to combine the error correction step with the network in the 'feature imaging module', which, however, requires significant changes to the network structure.
In this work, we train the proposed network on simulated data, so its performance may degrade in various real scenarios in practice. One strategy to improve performance on real scenarios is to build a more accurate simulated system model and produce more accurate simulated training data. It is also important for a deep-learning-based method to collect as much measured data as possible; however, obtaining enough measured data is often very difficult in practice. Transfer learning, one of the most important techniques in deep learning, can transfer knowledge from simulated data to real data: good performance on real test data can be obtained by fine-tuning the pre-trained network (trained on a large amount of simulated data) with only a small amount of real training data. Therefore, it is possible to extend the proposed network, trained on simulated data, to different scenarios in real microwave imaging.

6. Conclusions

In this paper, deep learning technology is introduced into the field of MIMO radar imaging, and a new MIMO radar imaging method suitable for non-orthogonal waveforms is proposed. The method only requires transmitting common non-orthogonal waveforms; the echo is post-processed by the trained deep network, which effectively suppresses the coupling interference between different waveforms and efficiently obtains a high-resolution image of the target. The experimental results verify that, by training the proposed imaging network on a simulated data set, good imaging results can be obtained on both point scattering target data and electromagnetic calculation data. In follow-up work, we will continue to study how to combine existing imaging theory with deep networks, build more advanced echo transformation modules, improve the quality of target echo feature extraction, and further improve the recovery of weak scattering points by the imaging network. Moreover, near-field microwave tomography, which is closely related to radar imaging, has wide application prospects, such as medical diagnosis and industrial testing; it would therefore also be valuable to extend the proposed method to the field of near-field microwave tomography.

Author Contributions

Methodology, H.L.; resources, Q.Z.; writing—review and editing, H.L.; funding acquisition, Q.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant 61531015.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets generated during the current study are not publicly available but are available from the first author on reasonable request.

Acknowledgments

The authors thank Xiaowei Hu for many valuable suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ma, C.; Yeo, T.S.; Tan, C.S.; Liu, Z. Three-dimensional imaging of targets using colocated MIMO radar. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3009–3021.
2. Roberts, W.; Stoica, P.; Li, J.; Yardibi, T.; Sadjadi, F.A. Iterative adaptive approaches to MIMO radar imaging. IEEE J. Sel. Topics Signal Process. 2010, 4, 5–20.
3. Ding, L.; Chen, W.; Zhang, W.; Poor, H.V. MIMO radar imaging with imperfect carrier synchronization: A point spread function analysis. IEEE Trans. Aerosp. Electron. Syst. 2015, 51, 2236–2247.
4. Krieger, G. MIMO-SAR: Opportunities and pitfalls. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2628–2645.
5. Deng, H. Polyphase code design for orthogonal netted radar systems. IEEE Trans. Signal Process. 2004, 52, 3126–3135.
6. Deng, H. Discrete frequency-coding waveform design for netted radar systems. IEEE Signal Process. Lett. 2004, 11, 1070–9908.
7. Xu, L.; Liang, Q. Zero correlation zone sequence pair sets for MIMO radar. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 2100–2113.
8. Wang, W.Q. MIMO SAR chirp modulation diversity waveform design. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1644–1648.
9. Cheng, S.J.; Wang, W.Q.; So, H.C. MIMO radar OFDM chirp waveform diversity design with sparse modeling and joint optimization. Multidim. Syst. Sign. Process. 2017, 28, 237–249.
10. Wang, J.; Liang, X.D.; Chen, L.Y.; Li, K. A novel space-time coding scheme used for MIMO-SAR systems. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1556–1560.
11. Li, J.; Stoica, P.; Zheng, X.Y. Signal synthesis and receiver design for MIMO radar imaging. IEEE Trans. Signal Process. 2008, 56, 3959–3968.
12. Meng, C.; Xu, J.; Xia, X.G.; Liu, F.; Long, T.; Mao, E.; Yang, J.; Peng, Y. MIMO-SAR waveforms separation in same frequency area based on virtual polarization filter. Sci. China Inf. Sci. 2015, 58, 1–12.
13. Wang, J.; Liang, X.; Ding, C.; Chen, L.; Wang, Z.; Li, K. A novel scheme for ambiguous energy suppression in MIMO-SAR systems. IEEE Geosci. Remote Sens. Lett. 2015, 12, 344–348.
14. Hu, X.W.; Tong, N.N.; Zhang, Y.S. MIMO radar imaging with non-orthogonal waveforms based on joint-block sparse recovery. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5985–5996.
15. Ma, C.; Yeo, T.; Ng, B.P. MIMO radar imaging based on multidimensional linear equations and sparse signal recovery. IET Radar Sonar Navig. 2018, 12, 3–10.
16. Ma, C.; Yeo, T.; Zhao, Y. MIMO radar 3D imaging based on combined amplitude and total variation cost function with sequential order one negative exponential form. IEEE Trans. Image Process. 2014, 23, 2168–2183.
17. Hu, X.; Tong, N.; Guo, Y.; Ding, S. MIMO radar 3D imaging based on multi-dimensional sparse recovery and signal support prior information. IEEE Sens. J. 2018, 18, 3152–3162.
18. Luo, Y.; Ni, J.; Zhang, Q. Synthetic aperture radar learning-imaging method based on data-driven technique and artificial intelligence. J. Radars 2020, 9, 107–122.
19. Zhang, Y.; Mu, H.; Jiang, Y.; Ding, C. Overview of radar imaging techniques based on deep learning. Radar Sci. Technol. 2021, 19, 467–478.
20. Hu, C.; Wang, L.; Li, Z.; Sun, L.; Loffeld, O. Inverse synthetic aperture radar imaging using complex-value deep neural network. J. Eng. 2019, 2019, 7096–7099.
21. Hu, C.; Wang, L.; Li, Z.; Zhu, D. Inverse synthetic aperture radar imaging using a fully convolutional neural network. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1203–1207.
22. Gao, J.; Deng, B.; Qin, Y.; Wang, H.; Li, X. Enhanced radar imaging using a complex-valued convolutional neural network. IEEE Geosci. Remote Sens. Lett. 2019, 16, 35–39.
23. Gao, X.; Qin, D.; Gao, J. Resolution enhancement for inverse synthetic aperture radar images using a deep residual network. Microw. Opt. Technol. Lett. 2020, 62, 1588–1593.
24. Qin, D.; Gao, X. Enhancing ISAR resolution by a generative adversarial network. IEEE Geosci. Remote Sens. Lett. 2021, 18, 127–131.
25. Wei, S.; Liang, J.; Wang, M.; Zeng, X.; Shi, J.; Zhang, X. CIST: An improved ISAR imaging method using convolution neural network. Remote Sens. 2020, 12, 2641.
26. Liang, J.; Wei, S.; Wang, M.; Shi, J.; Zhang, X. Sparsity-driven ISAR imaging via hierarchical channel-mixed framework. IEEE Sens. J. 2021, 21, 19222–19235.
27. Li, R.; Zhang, S.; Zhang, C.; Liu, Y.; Li, X. Deep learning approach for sparse aperture ISAR imaging and autofocusing based on complex-valued ADMM-Net. IEEE Sens. J. 2021, 21, 3437–3451.
28. Wei, S.; Liang, J.; Wang, M.; Shi, J.; Zhang, X.; Ran, J. AF-AMPNet: A deep learning approach for sparse aperture ISAR imaging and autofocusing. IEEE Trans. Geosci. Remote Sens. 2021, 60, 3073123.
29. Li, X.; Bai, X.; Zhou, F. High-resolution ISAR imaging and autofocusing via 2D-ADMM-Net. Remote Sens. 2021, 13, 2326.
30. Zhao, S.; Ni, J.; Liang, J.; Xiong, S.; Luo, Y. End-to-end SAR deep learning imaging method based on sparse optimization. Remote Sens. 2021, 13, 4429.
31. Wang, M.; Wei, S.; Liang, J.; Zhou, Z.; Qu, Q.; Shi, J.; Zhang, X. TPSSI-Net: Fast and enhanced two-path iterative network for 3D SAR sparse imaging. IEEE Trans. Image Process. 2021, 30, 7317–7332.
32. Hu, X.; Xu, F.; Guo, Y.; Feng, W.; Jin, Y.-Q. MDLI-Net: Model-driven learning imaging network for high-resolution microwave imaging with large rotating angle and sparse sampling. IEEE Trans. Geosci. Remote Sens. 2022, 60, 3110579.
33. Gerry, M.J.; Potter, L.C.; Gupta, I.J.; Merwe, A.V.D. A parametric model for synthetic aperture radar measurements. IEEE Trans. Antennas Propag. 1999, 47, 1179–1188.
Figure 1. Non-orthogonal waveform MIMO radar imaging framework based on deep network.
Figure 2. Network structure of MIMO-LI-Net.
Figure 3. Ideal target image.
Figure 4. Imaging results of point target simulation data via comparison methods: (a) matched filtering method; (b) 1D-SR method; (c) MB-SL0 method; (d) CNN method.
Figure 5. Imaging result of point target simulation data via MIMO-LI-Net.
Figure 6. Target CAD model.
Figure 7. Imaging results of electromagnetic calculation data via comparison methods: (a) matched filtering method; (b) 1D-SR method; (c) MB-SL0 method; (d) CNN method.
Figure 8. Imaging result of electromagnetic calculation data via MIMO-LI-Net.
Table 1. Comparison of methods of suppressing the mutual coupling among different waveforms.
Methods | Pros | Cons
Waveform design methods [6,7,8,9,10] | Wide research and various alternative waveforms. | There are no completely orthogonal waveforms with the same frequency; mutual coupling of the applied waveforms always exists.
Signal processing methods [11,12,13] | Suitable for existing non-orthogonal waveforms. | The actual effect is limited.
Sparse recovery methods [14,15,16,17] | Suitable for existing non-orthogonal waveforms; good imaging effect. | Sensitive to model errors and basis mismatch; high computational complexity and large storage requirements.
Deep learning method (the proposal) | Suitable for existing non-orthogonal waveforms; good imaging effect; high computational efficiency; small storage requirements. | A specially designed imaging network is necessary.
Table 2. Performance comparison of different methods.
Method | Matched Filtering | 1D-SR | MB-SL0 | CNN | MIMO-LI-Net
NMSE | 0.7314 | 0.6428 | 0.6277 | 0.5456 | 0.3328
TBR | 22.5791 | 39.3271 | 36.1028 | 39.3617 | 52.1375
Running time (s) | 0.0094 | 98.1733 | 0.1035 | 0.1107 | 0.1147
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
