Joint Smoothed l0-Norm DOA Estimation Algorithm for Multiple Measurement Vectors in MIMO Radar

Direction-of-arrival (DOA) estimation is usually confronted with a multiple measurement vector (MMV) case. In this paper, a novel fast sparse DOA estimation algorithm, named the joint smoothed l0-norm algorithm, is proposed for multiple measurement vectors in multiple-input multiple-output (MIMO) radar. To eliminate white or colored Gaussian noise, the new method first obtains a low-complexity data matrix based on high-order cumulants. Then, the proposed algorithm designs a joint smoothed function tailored to the MMV case, on which a joint smoothed l0-norm sparse representation framework is built. Finally, a gradient-based sparse signal reconstruction is designed for the MMV-based joint smoothed function, from which the DOA estimates are obtained. The proposed method is a fast sparse representation algorithm that solves the MMV problem and performs well under both white and colored Gaussian noise. It is about two orders of magnitude faster than l1-norm minimization based methods, such as l1-SVD (singular value decomposition), RV (real-valued) l1-SVD and RV l1-SRACV (sparse representation array covariance vectors), and achieves better DOA estimation performance.


Introduction
Colocated multiple-input multiple-output (MIMO) radar has attracted growing interest recently because it achieves higher resolution and better parameter identifiability than conventional phased-array radar [1]. As one type of colocated MIMO radar, monostatic MIMO radar is equipped with closely located transmit and receive arrays, which results in the same angle for the direction-of-departure (DOD) and the direction-of-arrival (DOA) [2]. For DOA estimation, a large number of methods have been proposed, most of which are based on the signal and noise subspaces [3][4][5][6], such as the representative MUSIC (multiple signal classification) algorithm [3] and the ESPRIT (estimation of signal parameters via rotational invariance techniques) algorithm [4].
In sensor array signal processing, sparse representation is an important parameter estimation technique. When reconstructing the sparse signal, to avoid the NP-hard (non-deterministic polynomial-time hard) l0-norm minimization, methods based on the relaxed l1-norm minimization [7], the focal underdetermined system solution (FOCUSS) [8] and sparse Bayesian learning (SBL) [9] have been proposed. These algorithms were originally designed for the single measurement vector (SMV) problem and were later extended to the multiple measurement vector (MMV) case by algorithms such as M-FOCUSS and M-SBL [10,11]. Recently, applying the sparse representation technique to DOA estimation has attracted increasing attention. Compared with alternative methods, sparse representation-based DOA estimation can achieve higher angle resolution and better adapt to challenging scenarios such as low signal-to-noise ratio (SNR), as discussed in [7]. Among the abovementioned sparse signal reconstruction methods, l1-norm minimization is widely used for sparse DOA estimation because of its solution accuracy and reasonable computing speed [7,10,12]. DOA estimation algorithms using l1-norm minimization include l1-SRACV (sparse representation array covariance vectors) [13], real-valued (RV) l1-SRACV [14], and an iterative algorithm with reweighted l1-norm minimization [15]; all of these involve the SMV problem when recovering the signal. For the MMV case, l1-SVD (singular value decomposition) [7], RV l1-SVD [16], reweighted l1-SVD [17] and CMSR (covariance matrix sparse representation) [10] were proposed for DOA estimation.
However, the computational complexity of l1-norm minimization is relatively large compared with that of the fast sparse representation method known as the smoothed l0-norm algorithm, which was proposed in [18,19] for the SMV setting and applied in [20,21] to imaging and power-line communications. To achieve fast sparse DOA estimation, by designing a reweighted continuous function in the SMV case, the reweighted smoothed l0-norm (RSL0) algorithm was proposed in [22] based on the covariance vector. With enlarged array aperture and better angle estimation performance, the RSL0 algorithm [22] is about two orders of magnitude faster than the l1-norm minimization based methods.
The DOA estimation methods mentioned above all assume the ideal case of white Gaussian noise. In practice, however, the noise is often correlated, so the additive noise is spatially modeled as colored rather than white Gaussian noise. It has been verified that, under colored noise, the performance of conventional algorithms, including the methods mentioned above, degrades seriously, except for specially designed methods such as those based on fourth-order cumulants (FOC) [12,23,24]. These FOC-based methods are computationally expensive when applied to DOA estimation in MIMO radar. In [25], the reduced-dimension (RD) FOC-based sparse representation method (RD l1-SRFOC) was proposed for DOA estimation in MIMO radar with array errors. RD l1-SRFOC is an l1-norm minimization based sparse representation method that uses high-order cumulants to handle colored noise and simultaneously addresses mutual coupling.
In summary, existing sparse DOA estimation leaves the following problems open: (I) for the l1-norm minimization based methods, the computational complexity needs to be reduced and the computing time shortened; (II) the fast sparse DOA estimation algorithm, i.e., the reweighted smoothed l0-norm algorithm [22], is based on the covariance vector and hence addresses only the SMV problem, so a fast sparse algorithm tailored to the MMV case remains to be developed; and (III) no fast sparse algorithm handling colored Gaussian noise has been available so far.
To solve the three problems stated above, a joint smoothed l0-norm algorithm for DOA estimation in MIMO radar is proposed in this paper. The new sparse algorithm first obtains a low-complexity FOC-based data matrix to eliminate white or colored Gaussian noise. Secondly, it designs a joint smoothed function tailored to the MMV case. Thirdly, with the MMV-based joint smoothed function, a joint smoothed l0-norm sparse representation framework is constructed. Finally, the corresponding gradient-based sparse signal reconstruction is designed, from which the DOA estimates are obtained. The proposed algorithm is a fast sparse DOA estimation algorithm that solves the multiple measurement vector problem and performs well in both white and colored Gaussian noise environments. It is about two orders of magnitude faster than the l1-norm minimization based methods, because it approximates the l0-norm by a joint smoothed function and performs a gradient-based steepest ascent, avoiding the convex optimization required by l1-norm minimization. The proposed algorithm also provides better DOA estimation performance than the l1-SVD, RV l1-SVD and RV l1-SRACV methods.
The rest of this paper is organized as follows. Section 2 presents the MIMO radar system and array signal models. In Section 3, the implementation process of the proposed algorithm is described in detail. Section 4 gives some related remarks and discussions regarding the advantages, the extended applications, the computational complexity and the noise elimination of the proposed algorithm. The experimental results of different methods are compared in Section 5. Finally, Section 6 concludes this paper.

Problem Formulation
A narrowband monostatic MIMO radar system is considered, as shown in Figure 1. The transmit and receive arrays of this system are equipped with M transmit and N receive antennas, respectively. Both arrays are half-wavelength spaced uniform linear arrays (ULAs), and array errors, including mutual coupling and gain-phase errors, are assumed negligible. The transmit antennas emit M orthogonal narrowband waveforms, such as BPSK (binary phase shift keying) modulated waveforms. In the far field, there are P uncorrelated targets, regarded as point scatterers at the same range, and it is assumed that P ≤ M + N − 2 [22,25,26]. The N receive antennas are thus impinged by the echo signals reflected by the P targets. Because of the closely located arrays in the monostatic MIMO radar, the DOA and DOD of the pth target are the same and can be denoted by θ_p. Then, after matched filtering, the N × 1 dimensional complex envelope of the output of the mth carrier matched filter is expressed as [14,27]

x_m(t) = Σ_{p=1}^{P} a_tm(θ_p) a_r(θ_p) s_p(t) + n_m(t),    (1)

where a_r(θ_p) = [1, e^{jπ sin(θ_p)}, e^{jπ2 sin(θ_p)}, …, e^{jπ(N−1) sin(θ_p)}]^T is the receive steering vector, a_tm is the mth element of the transmit steering vector a_t(θ_p) = [1, e^{jπ sin(θ_p)}, e^{jπ2 sin(θ_p)}, …, e^{jπ(M−1) sin(θ_p)}]^T, s_p(t) contains the target reflection coefficient and the transmitted baseband signal (e.g., a non-circular signal), and n_m(t) is the noise vector after the mth matched filter. Stacking the outputs of all M matched filters, the received data vector is given by [2,25,28]

x(t) = A s(t) + n(t),    (2)

where x(t) ∈ C^{MN×1} and n(t) = [n_1^T(t), …, n_M^T(t)]^T ∈ C^{MN×1} is the zero-mean Gaussian white or colored noise vector. s(t) = [s_1(t), s_2(t), …, s_P(t)]^T ∈ C^{P×1} is the reflected signal vector after the M carrier matched filters; it is assumed to be statistically independent, non-Gaussian and zero-mean.
The target model is the classical Swerling case II: the radar cross section (RCS) fluctuations are constant during a snapshot period and vary independently from snapshot to snapshot [29]. n(t) and s(t) are independent of each other. Moreover, A ∈ C^{MN×P} is the transmit-receive steering matrix,

A = [a(θ_1), a(θ_2), …, a(θ_P)],    (3)

where a(θ_p) = a_t(θ_p) ⊗ a_r(θ_p) ∈ C^{MN×1} is the transmit-receive steering vector for p = 1, 2, …, P. Therefore, by collecting J snapshots, the MN × J dimensional received data matrix in monostatic MIMO radar is represented as

X = AS + N,    (4)

where S = [s(t_1), …, s(t_J)] ∈ C^{P×J} and N = [n(t_1), …, n(t_J)] ∈ C^{MN×J} are the signal matrix and the complex Gaussian noise matrix, respectively. Based on the received data X, sparse DOA estimation can be viewed as a signal reconstruction subject to [14]

X = A_θ S_θ + N,    (5)

where A_θ = [a(θ̄_1), …, a(θ̄_L)] is the complete dictionary over the discrete sample grid {θ̄_i}_{i=1}^{L}, L ≫ P, and S_θ has the same row support as S. Let s_θ = [s(θ̄_1), …, s(θ̄_L)] be a sparse vector whose ith element equals the l2-norm of the ith row of S_θ. To obtain the sparsest solution of Equation (5), the ideal constraint is the l0-norm criterion, which minimizes the number of nonzero entries of s_θ:

min ||s_θ||_0  subject to  X = A_θ S_θ + N.    (6)

Unfortunately, the l0-norm minimization in Equation (6) is NP-hard. To circumvent this, conventional sparse DOA estimation methods [7,10,13-15,17] obtain the DOAs via l1-norm minimization, whose computational complexity is, however, relatively large. In [22], by solving the array covariance vector based SMV sparse signal reconstruction problem, the reweighted smoothed l0-norm algorithm greatly improves the computation speed. To develop a fast sparse DOA estimation algorithm tailored to the MMV problem arising in MIMO radar systems, a joint smoothed l0-norm algorithm is proposed in the following.
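As a concrete illustration of the steering-vector structure above, the following minimal numpy sketch (function names are illustrative, not from the paper) builds a_r, a_t and the Kronecker transmit-receive steering vector a(θ_p) = a_t(θ_p) ⊗ a_r(θ_p) for a half-wavelength ULA:

```python
import numpy as np

def ula_steering(theta_deg, num_antennas):
    # half-wavelength ULA: [1, e^{j*pi*sin(theta)}, ..., e^{j*pi*(num-1)*sin(theta)}]^T
    theta = np.deg2rad(theta_deg)
    return np.exp(1j * np.pi * np.arange(num_antennas) * np.sin(theta))

def tx_rx_steering(theta_deg, M, N):
    # a(theta) = a_t(theta) kron a_r(theta): an MN x 1 vector with entries z^(m+n)
    return np.kron(ula_steering(theta_deg, M), ula_steering(theta_deg, N))
```

Since every entry of a(θ) is a power z^{m+n} of z = e^{jπ sin(θ)}, only M + N − 1 of its MN entries are distinct, which is exactly the redundancy the dimension reduction of the next section exploits.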

High-Order Cumulants Based Data Matrix Construction
Based on Equation (3), with z = e^{jπ sin(θ_p)}, the elements of the steering vector a(θ_p) are of the form

a(θ_p) = [z^{m+n}], m = 0, 1, …, M − 1, n = 0, 1, …, N − 1.    (7)

It can be observed that there are only M + N − 1 distinct elements in a(θ_p); thus, the dimension of the received data vector x(t) can be reduced. Let b(θ_p) be a new steering vector composed of the distinct elements:

b(θ_p) = [1, z, z², …, z^{M+N−2}]^T ∈ C^{(M+N−1)×1}.    (8)

According to the element structures of a(θ_p) and b(θ_p), their relationship can be expressed as [25]

a(θ_p) = G b(θ_p),    (9)

where G ∈ R^{MN×(M+N−1)} is the selection matrix whose ((m − 1)N + n, m + n − 1)th element is one, for m = 1, …, M and n = 1, …, N, and whose other elements are zero. Based on the relationship in Equation (9), a reduced-dimensional matrix can be defined as [25]

R = (G^H G)^{−1/2} G^H.    (10)

Therefore, the dimensional reduction of the received data is carried out as follows [25]:

x̃(t) = R x(t) = F B s(t) + ñ(t),    (11)

where x̃(t) = Rx(t), B = [b(θ_1), …, b(θ_P)] and ñ(t) = Rn(t) are the new low-dimensional received data vector, transmit-receive steering matrix and noise vector, respectively. In addition, F = (G^H G)^{1/2} is a known diagonal matrix, and its (i, i)th element can be expressed as

F(i, i) = √i for 1 ≤ i ≤ β − 1,  √β for β ≤ i ≤ β + τ − 2,  √(M + N − i) for β + τ − 1 ≤ i ≤ M + N − 1,    (12)

with β = min(M, N) and τ = |M − N| + 2; compactly, F(i, i) = √min(i, β, M + N − i). Note that the noise vector ñ(t) remains Gaussian after the dimensional reduction, since Gaussianity is preserved under linear transformations. To eliminate the Gaussian white or colored noise in x̃(t), fourth-order cumulants are exploited over the J collected snapshots. The FOC is defined as follows [12,24,25]:

C_4x̃(k_1, k_2, k_3, k_4) = cum{x̃_{k_1}(t), x̃*_{k_2}(t), x̃_{k_3}(t), x̃*_{k_4}(t)}    (13)
= Σ_{p_1, p_2, p_3, p_4} cum{s_{p_1}, s*_{p_2}, s_{p_3}, s*_{p_4}} [Fb(θ_{p_1})]_{k_1} [Fb*(θ_{p_2})]_{k_2} [Fb(θ_{p_3})]_{k_3} [Fb*(θ_{p_4})]_{k_4} + C_4ñ(k_1, k_2, k_3, k_4),    (14)

where C_4ñ(k_1, k_2, k_3, k_4) represents the fourth-order cumulant of ñ corresponding to the indices (k_1, k_2, k_3, k_4), and it vanishes because ñ is Gaussian. For the signal assumed in Equation (2), the fourth-order cumulant satisfies

cum{s_{p_1}, s*_{p_2}, s_{p_3}, s*_{p_4}} = c_p if p_1 = p_2 = p_3 = p_4 = p, and 0 otherwise,    (15)

where c_p = cum{s_p, s*_p, s_p, s*_p} is the FOC of the signal of the pth target, and 1 ≤ p_i ≤ P for i = 1, 2, 3, 4. Based on Equations (14) and (15), an FOC-based data matrix Y can be constructed, with its (k_1, k_2)th element obtained from

Y(k_1, k_2) = Σ_{p=1}^{P} c_p [Fb(θ_p)]_{k_1} [Fb*(θ_p)]_{k_2},    (16)

where [Fb(θ_p)]_{k_1} and [Fb*(θ_p)]_{k_2} are the k_1th and k_2th elements of Fb(θ_p) and Fb*(θ_p). As a result, the data matrix Y can be expressed as

Y = F B D_s (F B)^H,    (17)

where D_s = diag(c_1, c_2, …, c_P) is a diagonal matrix composed of the fourth-order cumulants of s_p, p = 1, 2, …, P. Then, the singular value decomposition of Y is performed:

Y = U Λ V^H,    (18)

where U and V collect the left and right singular vectors, and Λ = diag(λ_1, λ_2, …, λ_{M+N−1}) is the singular value matrix with λ_1 ≥ λ_2 ≥ … ≥ λ_{M+N−1}. The signal subspace V_s is obtained by extracting the first P right singular vectors, corresponding to (λ_1, λ_2, …, λ_P), while the last M + N − 1 − P right singular vectors span the noise subspace. The low-dimensional data matrix is then formed as

Y_s = Y V_s,    (19)

where Y_s ∈ C^{(M+N−1)×P}.
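The snapshot-based estimate of a fourth-order cumulant such as Equation (14) replaces expectations with sample means. A minimal sketch, assuming the common conjugation pattern cum{a, b*, c, d*} for zero-mean sequences (the paper's exact display is not recoverable here, so this convention is an assumption):

```python
import numpy as np

def cum4(a, b, c, d):
    """Sample fourth-order cumulant cum{a, b*, c, d*} of zero-mean sequences."""
    m = np.mean
    return (m(a * np.conj(b) * c * np.conj(d))
            - m(a * np.conj(b)) * m(c * np.conj(d))
            - m(a * c) * m(np.conj(b) * np.conj(d))
            - m(a * np.conj(d)) * m(c * np.conj(b)))
```

For circular complex Gaussian noise every such cumulant tends to zero as the number of snapshots grows, which is why the FOC-based matrix Y suppresses both white and colored Gaussian noise, while a non-Gaussian signal such as BPSK keeps a nonzero cumulant (cum{s, s*, s, s*} = −2 for unit-power BPSK).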

Designs of Joint Smoothed Function and Joint Smoothed l 0 -Norm Framework for MMV Case
In the sparse representation framework for DOA estimation, a complete dictionary containing all DOAs of interest is required. Thus, let {θ̄_i}_{i=1}^{L} be the discrete sample grid. According to Equation (19), the complete dictionary can be constructed as FB_θ with

B_θ = [b(θ̄_1), b(θ̄_2), …, b(θ̄_L)].

DOA estimation is then considered as a sparse signal reconstruction problem subject to

Y_s = F B_θ T_θ,    (20)

where T_θ ∈ C^{L×P} is a sparse matrix with the same row support as T = D_s (F B)^H V_s. DOA estimation can thus be achieved by measuring the sparsity of T_θ. Note that, in the sparse signal recovery problem in Equation (20), the low-dimensional data Y_s is a matrix rather than a vector; recovering T_θ therefore involves the MMV problem. To develop a fast sparse DOA estimation algorithm that can solve the MMV problem, the proposed joint smoothed l0-norm algorithm designs a new joint smoothed function and then derives its steepest ascent scheme, as follows. To obtain a smoothed estimate of the sparsity of T_θ, we first design a joint continuous function

f_σ[T_θ(i, :)] = exp(−Σ_{p=1}^{P} |T_θ(i, p)|² / (2σ²)),    (21)

where T_θ(i, p) is the (i, p)th element of T_θ for p = 1, 2, …, P and i = 1, 2, …, L, and P is the number of columns of the data matrix Y_s, i.e., the number of columns of the sparse matrix T_θ. A weighted version of the designed joint continuous function is then used:

f_wσ[T_θ(i, :)] = exp(−r_wi Σ_{p=1}^{P} |T_θ(i, p)|² / (2σ²)),    (22)

where r_wi is a weight coefficient obtained from Equation (23) via the orthogonality between the dictionary atom Fb(θ̄_i) and the noise subspace, and the preset parameter σ adjusts the smoothness of f_wσ[T_θ(i, :)]. For a true target DOA θ_p, 0 < r_wp ≪ 1 because of the orthogonality between the signal subspace and the noise subspace [5]. Therefore, the approximate value of the joint smoothed function f_wσ[T_θ(i, :)] corresponding to θ̄_i is

f_wσ[T_θ(i, :)] ≈ 1 if θ̄_i is a true target DOA or T_θ(i, :) ≈ 0, and ≈ 0 otherwise,    (24)

where a small σ is required to guarantee the approximation. Based on Equation (24), the sparsity of T_θ can be measured by

||T_θ^{l2}||_0 ≈ L − F_wσ(T_θ^{l2}),    (25)

where F_wσ(T_θ^{l2}) = Σ_{i=1}^{L} f_wσ[T_θ(i, :)] and T_θ^{l2} is the L × 1 vector whose ith element is ||T_θ(i, :)||_2.
According to Equation (25), to recover the sparse matrix T_θ, the minimization of ||T_θ^{l2}||_0 can be turned into the maximization of F_wσ(T_θ^{l2}). As a consequence, the MMV-based joint smoothed l0-norm sparse representation framework is represented as

max F_wσ(T_θ^{l2})  subject to  Y_s = F B_θ T_θ.    (26)
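The approximation in Equation (25) is easy to sketch numerically. The unweighted case (r_wi = 1 for all i, an illustrative simplification) counts the nonzero rows of T_θ via the row-wise Gaussian kernel:

```python
import numpy as np

def joint_smoothed_f(T, sigma):
    # f_sigma[T(i,:)] = exp(-||T(i,:)||_2^2 / (2 sigma^2)): ~1 for zero rows, ~0 for active rows
    row_energy = np.sum(np.abs(T) ** 2, axis=1)
    return np.exp(-row_energy / (2.0 * sigma ** 2))

def row_l0_estimate(T, sigma):
    # ||T^{l2}||_0 is approximated by L - F_sigma(T^{l2})
    L = T.shape[0]
    return L - np.sum(joint_smoothed_f(T, sigma))
```

With σ small relative to the active row energies the estimate approaches the exact row-support size; as σ grows the surface becomes smoother but the count biases low, which is exactly the accuracy/smoothness trade-off that motivates the decreasing σ sequence used in the reconstruction.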

Design of the MMV-Based Signal Reconstruction in the Joint Smoothed l 0 -Norm Algorithm
For the joint smoothed function F_wσ(T_θ^{l2}), a steepest ascent based on its gradient is designed to recover the sparse signal. The sparse signal reconstruction in Equation (26), which involves the MMV problem, needs an initial matrix to start the steepest ascent. The initial matrix U_θ0 can be chosen as the solution of Equation (26) with σ → ∞, i.e., the minimum l2-norm solution

U_θ0 = (F B_θ)^+ Y_s,    (27)

where (FB_θ)^+ is the pseudo-inverse of FB_θ. When recovering the sparse signal in Equation (26), the selected value of σ affects both the computation time and the solution accuracy. The larger σ is, the smoother F_wσ(T_θ^{l2}) will be: a larger σ yields fewer local maxima and makes F_wσ(T_θ^{l2}) easier to maximize, which speeds up the computation. However, a large σ degrades the approximation in Equation (25) and thereby the precision of the signal reconstruction. To guarantee both speed and accuracy, a decreasing sequence for σ, i.e., [σ_1, σ_2, …, σ_K] with σ_1 ≥ σ_2 ≥ … ≥ σ_K, known as the graduated non-convexity (GNC) approach [18,22], is exploited. Although a large σ is desired to avoid local maxima and improve speed, the value of σ_1 may be set to one to four times the maximum absolute value of U_θ0: in f_wσ[T_θ(i, :)], when σ > 4 max{|U_θ0|}, σ acts virtually like infinity for all elements, since the exponential terms are all approximately one. To keep the steepest ascent close to the actual maximum and less likely to get trapped in local maxima, σ must change slowly; this guarantees that the GNC approach works. In general, the decreasing factor α satisfies 0.5 ≤ α < 1, and σ_K should be a small value determined by the desired accuracy; a parameter σ_off can be used as the lower limit of the decreasing σ sequence.
For each σ_k in the decreasing σ sequence, Q iterations are carried out to maximize F_wσ(T_θ^{l2}). In each iteration, T_θ is first updated as

T_θ ← T_θ + μ̃ ∇F_wσ(T_θ^{l2}),    (28)

where μ̃ = μσ² is a step-size parameter with μ ≥ 1, and ∇F_wσ(T_θ^{l2}) is the gradient of F_wσ(T_θ^{l2}) with respect to T_θ. In the following, the update ΔT_θ = μ̃ ∇F_wσ(T_θ^{l2}) is derived in detail. Write T_θ row-wise as

T_θ = [t_1^T, t_2^T, …, t_L^T]^T,    (29)

where t_i ∈ C^{1×P} is the ith row of T_θ. According to Equation (28), t_i is updated as

t_i ← t_i + μ̃ ∇_{t_i} F_wσ(T_θ^{l2}),    (30)

where ∇_{t_i} F_wσ(T_θ^{l2}) is the 1 × P gradient vector of F_wσ(T_θ^{l2}) with respect to t_i = [T_θ(i, 1), T_θ(i, 2), …, T_θ(i, P)]. Its pth element is

∇_{t_i} F_wσ(T_θ^{l2})(p) = ∂F_wσ(T_θ^{l2})/∂t_ip,    (31)

where t_ip = ã + b̃j is the pth element of t_i, with real component ã and imaginary component b̃, and ∂F_wσ(T_θ^{l2})/∂t_ip = ∂F_wσ(T_θ^{l2})/∂ã + [∂F_wσ(T_θ^{l2})/∂b̃]j denotes the first-order derivative of F_wσ(T_θ^{l2}). As a result, ∇_{t_i} F_wσ(T_θ^{l2})(p) is derived as

∇_{t_i} F_wσ(T_θ^{l2})(p) = −(r_wi t_ip / σ²) f_wσ[T_θ(i, :)].    (32)

Combining Equation (30) with Equation (32), the pth element of Δt_i is obtained, that is,

Δt_ip = −μ r_wi t_ip exp(−r_wi ||t_i||_2² / (2σ²)).    (33)

With Δt_ip, the 1 × P dimensional vector Δt_i is written as

Δt_i = [Δt_i1, Δt_i2, …, Δt_iP],    (34)

for i = 1, 2, …, L. As a result, based on Equations (33) and (34), the sparse matrix to be recovered is first updated in each iteration as

T_θ ← T_θ + ΔT_θ,    (35)

with

ΔT_θ = [Δt_1^T, Δt_2^T, …, Δt_L^T]^T ∈ C^{L×P}.    (36)

Then, the iteration projects T_θ back onto the feasible set T_θ = {T_θ | Y_s = FB_θ T_θ} of Equation (20):

T_θ ← T_θ − (FB_θ)^+ (FB_θ T_θ − Y_s).    (37)

Based on Equations (27)-(37), after Q iterations for σ = σ_k, the MMV-based sparse signal reconstruction in Equation (26) proceeds to maximize the joint smoothed function F_wσ(T_θ^{l2}) for σ = σ_{k+1}. Let T_θf be the output of T_θ after maximizing F_wσ(T_θ^{l2}) for σ = σ_K; the final sparse matrix is recovered as T_θf. To evaluate the reconstruction of each row of T_θf, and then achieve the DOA estimation, let (T_θf)^{l2} be the L × 1 vector with [(T_θf)^{l2}](i) = ||T_θf(i, :)||_2. The target DOAs are then obtained by searching the spectrum of (T_θf)^{l2}. The proposed joint smoothed l0-norm algorithm is summarized as follows:

Step 1: Compute the fourth-order cumulants based matrix Y_s from Equations (11), (16) and (19).
Step 2: Construct the complete dictionary FB_θ over the grid {θ̄_i}_{i=1}^{L} and compute the initial matrix U_θ0 from Equation (27).
Step 3: For each σ_k in the decreasing sequence [σ_1, …, σ_K], perform Q iterations of the gradient update and projection in Equations (28)-(37).
Step 4: Search the spectrum of (T_θf)^{l2} to obtain the target DOAs.
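The whole reconstruction loop can be sketched as follows. This is a minimal unweighted version (r_wi = 1) with a generic dictionary B, using μ = 2 rather than the paper's μ = 2.5 for this illustration; it is a sketch of the GNC/steepest-ascent idea, not the paper's exact routine:

```python
import numpy as np

def joint_sl0(B, Y, sigma_off=7e-4, alpha=0.5, mu=2.0, Q=3):
    """MMV joint smoothed-l0 sketch: maximize sum_i exp(-||t_i||^2 / (2 sigma^2))
    subject to Y = B T, over a decreasing sigma sequence (GNC)."""
    B_pinv = np.linalg.pinv(B)
    T = B_pinv @ Y                              # minimum l2-norm start, cf. Eq. (27)
    sigma = 4.0 * np.max(np.abs(T))             # sigma_1 = 4 max|U_0|
    while sigma > sigma_off:
        for _ in range(Q):
            row_e = np.sum(np.abs(T) ** 2, axis=1, keepdims=True)
            T = T - mu * T * np.exp(-row_e / (2.0 * sigma ** 2))  # gradient-ascent step
            T = T - B_pinv @ (B @ T - Y)        # project back onto {T : Y = B T}
        sigma *= alpha
    return T
```

Each inner iteration costs a handful of matrix products, roughly O(L·P·(M+N)) operations, which is the practical source of the large speed advantage over convex l1 solvers reported in the simulations.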

Related Remarks
Remark 1. With the joint smoothed function F_wσ(T_θ^{l2}), the proposed joint smoothed l0-norm sparse signal reconstruction algorithm is a new fast sparse DOA estimation algorithm tailored to the MMV problem. By designing a specific joint continuous function that exploits all of the information in the data matrix Y_s, and then deriving its steepest ascent process to achieve the maximization, the sparse solutions can be obtained. As an extension of the smoothed l0-norm family of algorithms, the proposed joint continuous function f_σ[T_θ(i, :)] in Equation (21) is applicable to any MMV problem of the form Y_s = B̃_θ T_θ, where B̃_θ is the complete dictionary and T_θ is the L × P sparse matrix to be reconstructed. In that setting, the gradient-based maximization of Σ_{i=1}^{L} f_σ[T_θ(i, :)] follows the derivation of Equations (27)-(37), with Δt_ip = −μ t_ip exp(−||t_i||_2²/(2σ²)). Since L is much larger than P, M + N − 1, Q and K, the computation speed of the proposed joint smoothed l0-norm algorithm is much faster than that of the l1-norm minimization based methods.

Remark 3.
The proposed method can deal with both white and colored Gaussian noise, which makes it desirable for practical applications. This is because, in the proposed joint smoothed l0-norm algorithm, high-order cumulants are utilized to process the reduced-dimensional data x̃(t). Note that, for the dimension reduction in Equation (11), the reduced-dimensional matrix R satisfies RR^H = I_{M+N−1}. Hence, if the noise vector n(t) is white Gaussian with covariance matrix ρ²I_MN, it remains white Gaussian with covariance matrix Rρ²I_MN R^H = ρ²I_{M+N−1} after the transformation [2,25]. On the other hand, if the noise vector n(t) is colored Gaussian with covariance matrix V_a, its covariance matrix becomes RV_a R^H after the dimensional reduction. In this case, ñ(t) is still Gaussian, since Gaussianity is invariant under linear transformations. Therefore, even though the dimensional reduction changes the noise statistics, the high-order cumulant processing in Equation (16) can still eliminate the noise, because the fourth-order cumulants of any Gaussian process are zero.
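The covariance claims in this remark can be verified numerically. The sketch below builds the selection matrix G implied by a(θ_p) = G b(θ_p) (the entry of a for antenna pair (m, n) picks entry m + n of b, zero-based) and takes R = (G^T G)^{−1/2} G^T, consistent with F = (G^H G)^{1/2}; the explicit form of R is an assumption made to match RR^H = I_{M+N−1}:

```python
import numpy as np

def build_G(M, N):
    # selection matrix: a(theta) entry for antenna pair (m, n) picks b(theta) entry m+n
    G = np.zeros((M * N, M + N - 1))
    for m in range(M):
        for n in range(N):
            G[m * N + n, m + n] = 1.0
    return G

M, N = 6, 6
G = build_G(M, N)
counts = np.diag(G.T @ G)                   # G^T G is diagonal; entry i counts pairs with m+n=i
R = np.diag(1.0 / np.sqrt(counts)) @ G.T    # R = (G^T G)^{-1/2} G^T
```

With this structure, RR^T = I follows immediately, and a(θ) = G b(θ) holds for any θ, so a white noise covariance ρ²I_MN is mapped to ρ²I_{M+N−1} exactly as stated.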

Simulation Results
In this section, simulations are carried out to evaluate the performance and computation speed of the proposed method. The results are compared with those of the l1-SVD [7], RV l1-SVD [16], RV l1-SRACV [14] and reweighted smoothed l0-norm (RSL0) [22] algorithms. In all simulations, a narrowband monostatic MIMO radar system with ULAs is considered. The transmit and receive arrays are both half-wavelength spaced, with M = 6 and N = 6 antennas, respectively. The transmitted signals are assumed to be BPSK modulated [27,30] and mutually orthogonal. The noise is either white Gaussian with covariance matrix R = I_MN, or colored Gaussian with covariance elements R(k_1, k_2) = 0.75^{|k_1−k_2|} e^{jπ(k_1−k_2)/2}. The number of targets P is assumed to be known a priori, e.g., obtained by the minimum description length (MDL) or Akaike information criterion (AIC) principle [31].
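For reference, the colored noise used in these simulations can be generated by coloring circular white Gaussian snapshots with the Cholesky factor of the stated covariance (a standard construction; function names are illustrative):

```python
import numpy as np

def colored_cov(dim, rho=0.75):
    # R(k1, k2) = rho^{|k1-k2|} e^{j*pi*(k1-k2)/2}: Hermitian Toeplitz, positive definite
    d = np.arange(dim)[:, None] - np.arange(dim)[None, :]
    return rho ** np.abs(d) * np.exp(1j * np.pi * d / 2.0)

def draw_colored_noise(dim, snapshots, rng):
    # color unit-variance circular white noise: n = L w gives E[n n^H] = L L^H = R
    L = np.linalg.cholesky(colored_cov(dim))
    w = (rng.standard_normal((dim, snapshots))
         + 1j * rng.standard_normal((dim, snapshots))) / np.sqrt(2.0)
    return L @ w
```

The sample covariance of the generated snapshots converges to the specified R as the snapshot count grows, which is how the colored noise scenarios in Figures 2 and 4-7 can be reproduced.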
When constructing the complete dictionary, the discrete sample grid is set from −90° to 90° with uniform interval ∆θ = 0.05° for the proposed algorithm, as well as for the l1-SVD, RV l1-SVD, RV l1-SRACV and RSL0 algorithms. The parameters σ_1, μ and σ_off are fixed at σ_1 = 4max{|U_θ0|}, μ = 2.5 and σ_off = 0.0007 for the proposed algorithm and the RSL0 algorithm. The effects of the remaining parameters, i.e., the decreasing factor α and the iteration number Q, are evaluated in the following experiments. The signal-to-noise ratio (SNR) and the root mean square error (RMSE) are defined, respectively, as SNR = 10 log10(σ_s²/σ_n²), with σ_s² the signal power and σ_n² the noise power, and

RMSE = √( (1/(P·I)) Σ_{i=1}^{I} Σ_{p=1}^{P} (θ̂_{p,i} − θ_p)² ),

where θ̂_{p,i} is the angle estimate of the pth target DOA θ_p in the ith of I Monte Carlo trials. Figure 2 depicts the spatial spectrum of the proposed method for white noise and colored noise, where SNR = 5 dB and SNR = −10 dB are considered. Three targets are located at θ_1 = −8.2°, θ_2 = 0° and θ_3 = 21°. The spatial spectrum is plotted as 10 log10(|p̄|/max(|p̄|)), where p̄ is the row-wise l2-norm spectrum of the final sparse solution. Figure 2 shows that the proposed joint smoothed l0-norm algorithm is effective for the MMV case of DOA estimation in monostatic MIMO radar and is suitable for practical radar systems with colored Gaussian noise. Figure 3a,b evaluates the effects of the parameters α and Q on the DOA estimation performance of the proposed method, with Q = 3 and α = 0.5, respectively, and J = 1000. Three uncorrelated targets with DOAs θ_1 = −8.2°, θ_2 = 0° and θ_3 = 21° are considered. Figure 3 shows that the angle estimation performance of the proposed method is unchanged as α varies from 0.5 to 0.9 and Q from 3 to 35. In Figure 3b, the RMSE decreases as Q increases for Q ≤ 3 and reaches an asymptotic value at Q = 3.
For the iterations of the steepest ascent algorithm, only a region near the maximum of F_wσ(T_θ^{l2}) is needed; thus, a fixed small positive integer Q = 3 is sufficient.
Corresponding to Figure 3, the effects of the parameters α and Q on the average computation time of the proposed method are shown in Tables 1 and 2; the simulation conditions are the same as those of Figure 3. As the implementation of the analyzed methods is not affected by the type of noise, the computation time is the same for white and colored Gaussian noise. It can be observed that the computation slows down as α and Q increase, yet larger α and Q do not improve the estimation performance, as shown in Figure 3a,b. Small values of α and Q therefore suffice to guarantee both estimation accuracy and computation speed; in the following experiments, α = 0.5 and Q = 3. Figure 4a,b shows the RMSE of DOA estimation versus SNR for white noise and colored noise, with J = 1000 and three targets at θ_1 = −8.2°, θ_2 = 0° and θ_3 = 21°. From Figure 4a,b, the RSL0 algorithm achieves the best DOA estimation for white noise, while for colored noise the proposed algorithm and the RSL0 algorithm provide better angle estimation performance than the l1-SVD, RV l1-SVD and RV l1-SRACV methods. The processing time of signal reconstruction is compared in Table 3, where J = 1000 and SNR = 0 dB, for different numbers of antennas and targets. For P = 2, the DOAs are θ_2 = 0° and θ_3 = 21°; for P = 3, they are θ_1 = −8.2°, θ_2 = 0° and θ_3 = 21°. Table 3 verifies that, compared with the l1-norm minimization based methods, the proposed algorithm has much lower computational complexity and is about two orders of magnitude faster. Figure 5a,b illustrates the RMSE versus the number of snapshots for Gaussian white and colored noise, where SNR = 0 dB and the three target DOAs are θ_1 = −8.2°, θ_2 = 0° and θ_3 = 21°. For white noise, the RSL0 algorithm attains the minimum RMSE.
For colored noise, the proposed algorithm provides the best angle estimation performance as the number of snapshots varies from J = 450 to J = 4050, whereas the RV l1-SRACV method performs worse than the other algorithms in this case. Figure 6a,b shows the DOA estimation performance of the different sparse methods versus angle separation for Gaussian white and colored noise, where J = 1000 and the two target DOAs are θ_1 = 0° and θ_2 = θ_1 + δθ, with the separation δθ varying from 4° to 16°. For white noise (Figure 6a), the RSL0 algorithm in [22] performs best, and the proposed algorithm provides better estimation performance than the l1-SVD, RV l1-SVD and RV l1-SRACV methods. For colored noise (Figure 6b), the proposed algorithm stands out among the analyzed methods, while the performance of RV l1-SRACV is severely degraded. Figure 7a,b shows the target resolution probability versus SNR for the different sparse methods, where J = 1000 and three targets at θ_1 = −8.2°, θ_2 = 0° and θ_3 = 21° are considered. The ith Monte Carlo trial is regarded as successful if the estimates θ̂_1, θ̂_2 and θ̂_3 satisfy max{|θ̂_1 − θ_1|, |θ̂_2 − θ_2|, |θ̂_3 − θ_3|} ≤ 0.1°. Figure 7a,b verifies that the RSL0 algorithm performs best for white noise, and that the proposed algorithm provides the highest target resolution probability for colored noise. Colored noise causes serious performance degradation for all analyzed methods except the proposed algorithm.
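The success criterion used for Figure 7 translates directly into code; matching estimates to true DOAs by sorting is an illustrative choice (any fixed pairing works for well-separated targets):

```python
import numpy as np

def trial_success(est_deg, true_deg, tol=0.1):
    # a trial succeeds iff every per-target absolute error is within tol degrees
    est = np.sort(np.asarray(est_deg, dtype=float))
    true = np.sort(np.asarray(true_deg, dtype=float))
    return bool(np.max(np.abs(est - true)) <= tol)
```

The resolution probability at a given SNR is then the fraction of Monte Carlo trials for which trial_success returns True.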

Conclusions
Direction-of-arrival estimation is usually confronted with the MMV case. In this paper, for DOA estimation in MIMO radar, we have proposed a joint smoothed l0-norm algorithm tailored to the multiple measurement vector case. By designing a new joint smoothed function and its gradient-based sparse signal reconstruction, the proposed method is a fast sparse DOA estimation algorithm that solves the MMV problem; in addition, it is applicable to colored Gaussian noise. The extended applications and the computational complexity of the proposed method have been analyzed. The experimental results verify that the proposed algorithm is about two orders of magnitude faster than the l1-norm minimization based methods, such as l1-SVD, RV l1-SVD and RV l1-SRACV, and performs well for both white and colored Gaussian noise.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this paper: