Article

Asymptotic Performance Analysis of Maximum Likelihood Algorithm for Direction-of-Arrival Estimation: Explicit Expression of Estimation Error and Mean Square Error

1 Department of Information and Communication Engineering, Sejong University, Seoul 05006, Korea
2 INITECH Co., Digital-ro 26-gil, Guro-gu, Seoul 08389, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(7), 2415; https://doi.org/10.3390/app10072415
Submission received: 23 January 2020 / Revised: 7 March 2020 / Accepted: 26 March 2020 / Published: 1 April 2020
(This article belongs to the Special Issue Recent Advances in Electronic Warfare Networks and Scenarios)

Abstract: This paper proposes a new method for obtaining explicit expressions of the quantities associated with the performance analysis of the maximum likelihood (ML) direction-of-arrival (DOA) algorithm in the presence of additive Gaussian noise on the antenna elements. The motivation is a quantitative analysis of the ML DOA algorithm in the case of multiple incident signals. We present a simple method, based on the Taylor series expansion, to derive a closed-form expression of the mean square error (MSE) of the DOA estimate. Based on the Taylor series expansion and an additional approximation, we obtain explicit expressions of the MSEs of the azimuth estimates of all incident signals. The validity of the derived expressions is shown by comparing the analytic results with simulation results.

1. Introduction

There has been a great deal of research on direction-of-arrival (DOA) estimation [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]. Our interest in this paper is the performance analysis of the maximum likelihood (ML)-based DOA estimation algorithm.
In [8], a performance analysis of the ML DOA estimation algorithm for low signal-to-noise ratio (SNR) and a small number of snapshots is presented. A threshold effect in the ML DOA algorithm is exploited, and the authors derive approximations to the mean square error and the probability of outlier.
If the noise variance at each sensor in the array antenna system is equal, the noise covariance matrix is a multiple of the identity matrix. In [9], nonuniform white noise, whose covariance matrix is an arbitrary diagonal matrix, is considered; a new ML DOA algorithm for nonuniform noise is proposed, and a performance analysis of the proposed algorithm is also presented.
In [10], the authors addressed the DOA estimation using sparse sensor arrays, where the sensor noises can be uncorrelated between different subarrays due to large intersubarray spacings. The authors proposed a new maximum-likelihood estimator, which can be extended to the uncalibrated arrays with sensor gain and phase mismatch.
A new computationally efficient ML DOA algorithm exploiting spatial aliasing is proposed in [11]. Generally, spatial aliasing is undesirable since it degrades the DOA estimation accuracy. In [11], the authors adopted a nested array structure with a doubly spaced aperture. The computational burden of the ML DOA estimation algorithm is reduced by the highly compressed search range and the small number of candidate angles to be searched. The authors also presented Monte Carlo simulation based mean square error (MSE). However, analytic performance analysis of the proposed scheme is not presented in [11].
A new ML DOA algorithm for use with a uniform linear array is proposed in [12]. The scheme is superior to the conventional ML algorithm when the true DOAs of two incident signals are very close to each other. The formulation of the new DOA algorithm is based on an asymptotic approximation of the unconditional maximum likelihood (UML) procedure when two closely spaced signals are incident on the ULA. A Taylor approximation is also adopted in the derivation of the new algorithm. The empirical MSE is illustrated to validate the proposed scheme. However, the authors do not present an analytic performance analysis of the new DOA algorithm.
To overcome the large computational complexity of implementing the ML DOA estimation algorithm, an efficient ML DOA estimator based on a spatially overcomplete array output formulation is proposed in [13]. Empirical performance from Monte Carlo simulation is presented to illustrate the superiority of the proposed scheme over other DOA algorithms. An analytic performance analysis is not given.
Although the ML estimator is known to be optimal in DOA estimation, its computational cost can be quite prohibitive, especially for a large number of incident signals. To solve this problem, in [14], three kinds of natural computing algorithms, differential evolution, clonal selection algorithm, and the particle swarm optimization, are applied for implementation of the multivariable nonlinear optimization of the cost function of the ML DOA estimation algorithm. It turns out that all three natural computing algorithms are capable of optimizing the ML DOA cost function, irrespective of the number of incident signals and their nature. In addition, the number of points evaluated by natural computing algorithms is much smaller than that associated with exhaustive grid search-based algorithms, justifying the application of these natural computing algorithms to the optimization of the cost function of the ML DOA estimation algorithm.
In [15], a new implementation of ML DOA estimation, which outperforms other DOA algorithms for closely spaced incident signals, is proposed. The concept of Monte Carlo importance sampling is applied. The superiority of the proposed scheme comes from its better convergence to a global maximum in comparison with other iterative approaches. Although an analytical performance analysis of the proposed scheme is not presented, the empirical performance of the proposed algorithm and of the other DOA algorithms is given. Note that Monte Carlo simulation is employed to obtain empirical performance in terms of the MSE of the DOA estimate.
A heuristic optimization algorithm, called gravitational search algorithm, is presented to optimize the cost function of the ML DOA estimation algorithm for a uniform circular array [16]. It is empirically shown that the proposed algorithm is superior to the MUSIC algorithm and particle swarm optimization-based ML algorithm. Analytic performance analysis of the proposed scheme is not presented in [16].
To reduce the computational burden of optimizing the cost function of the ML DOA estimation algorithm, the artificial bee colony (ABC) algorithm is applied to maximize the cost function [17]. It is empirically shown that the proposed scheme is superior to other ML-based DOA estimation methods in terms of computational efficiency and statistical performance. An analytic performance analysis of the proposed scheme is not presented in [17].
DOA estimation of narrowband sources in unknown nonuniform white noise is considered in [18]. The stepwise concentration of the log-likelihood function with respect to the signal parameters and noise parameters is obtained by alternating minimization of the Kullback–Leibler divergence. Closed-form expressions for the signal parameters and noise parameters are derived, implying that the proposed scheme results in significant reduction in computational complexity in comparison with exhaustive multidimensional search-based ML DOA algorithms.
In [19], a new wideband ML DOA estimation algorithm for an unknown nonuniform sensor noise is proposed to reduce the performance degradation due to nonuniformity of the noise. Two associated implementation schemes are proposed: one is iterative and the other is non-iterative. Simulation results show that the performance of two processing algorithms is consistent with the Cramer–Rao lower bound. Analytic performance analysis, more specifically the Cramer–Rao lower bound, of the proposed algorithm is presented in [19].
In this paper, we are concerned with a quantitative study of how much estimation error is induced by additive Gaussian noise on the array antennas. More specifically, the mean square error (MSE) of the direction-of-arrival estimate is derived in terms of the standard deviation of the additive noise. A performance analysis of azimuth estimation using a uniform linear array (ULA) is presented.
In this paper, an estimate with no superscript denotes the estimate of the original ML algorithm; note that no approximation is used in obtaining it. An estimate with the superscript $(u=1)$ denotes the estimate obtained using the first approximation, and an estimate with the superscript $(u=1,v)$ denotes the estimate obtained using both the first and the second approximations.
The difference between the estimate with no superscript and the estimate with the superscript $(u=1)$ quantifies the error due to the first approximation. Similarly, the difference between the estimate with the superscript $(u=1)$ and the estimate with the superscript $(u=1,v)$ quantifies the error due to the second approximation. Based on this intuition, by comparing these three estimates, we can easily determine which approximation results in the dominant approximation error. This insight cannot be obtained from the schemes presented in the previous studies [7,8,9,19].
In this paper, Gaussian noise is used to model measurement uncertainty. The effect of Gaussian noise on the accuracy of the azimuth estimate is rigorously derived. Furthermore, an explicit expression of the MSE of the azimuth estimate is also derived. In comparison with the previous studies on the performance analysis of the maximum likelihood algorithm [7,8,9,19], a more explicit representation of the MSE of the azimuth estimate is proposed in this paper.
Many previous studies on the ML DOA estimation algorithm focused on how its performance can be improved by proposing new algorithms or by modifying the ML DOA estimation algorithm [9,10,11,12,13,14,15,16,17,18,19]. Note that our contribution does not lie in how much improvement can be achieved by an improved ML DOA algorithm. Rather, it lies in reducing the computational cost of obtaining the MSE of an existing ML DOA algorithm by adopting an analytic approach instead of a Monte Carlo simulation-based MSE under measurement uncertainty, which is assumed to be Gaussian distributed. That is, we describe how the analytic MSE can be obtained with much less computational complexity than the Monte Carlo simulation-based MSE.
In this paper, the derivation is based on the Taylor series expansion of the sample covariance matrix since the cost function of the ML DOA estimation algorithm can be explicitly written in terms of the sample covariance matrix. The difference between the sample covariance matrix associated with noisy measurement and that associated with noiseless sample covariance matrix is explicitly expressed in terms of additive noises on the antenna arrays. Azimuth estimation error is explicitly expressed in terms of the additive noises. Finally, the MSE of the azimuth estimate is given in terms of the statistics of an additive noise. To the best of our knowledge, no previous study presented these explicit expressions of the azimuth estimation error and the MSE of the azimuth estimate in terms of the statistics of an additive noise.
The proposed scheme can be used to predict how accurate the estimate of the ML DOA estimation algorithm is without a computationally intensive Monte Carlo simulation. The performance of the ML algorithm depends on various parameters, including the number of snapshots, the number of antenna elements in the array, the inter-element spacing between adjacent antenna elements, and the SNR, so Monte Carlo simulations over different values of these parameters can be computationally intensive. The scheme presented in this paper can therefore be adopted to predict the accuracy of the ML DOA algorithm for different values of the various parameters.

2. Maximum Likelihood Algorithm

In this section, the maximum likelihood (ML) algorithm for use with a uniform linear array (ULA) is briefly described.
In the case of a ULA, for a signal incident from $\theta_c$, the array vector associated with the $m$-th antenna can be written as
$$a_m(\theta_c) = \exp\left( j\,\frac{2\pi}{\lambda}\,(m-1)\,\Delta \sin\theta_c \right), \quad (1)$$
where $\lambda$ is the wavelength and $\Delta$ is the distance between two neighboring elements.
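The array vector in (1) can be sketched in a few lines of NumPy. This is a minimal illustration, not part of the paper's derivation; the function name and the half-wavelength spacing ($\Delta = \lambda/2$) are assumptions made here for concreteness.

```python
import numpy as np

def steering_vector(theta_deg, M, delta_over_lambda=0.5):
    """ULA array vector of Equation (1): a_m = exp(j*2*pi/lambda*(m-1)*Delta*sin(theta)).

    delta_over_lambda is the assumed element spacing Delta/lambda (here lambda/2).
    """
    m = np.arange(M)  # corresponds to (m - 1) for m = 1, ..., M
    phase = 2.0 * np.pi * delta_over_lambda * m * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * phase)

a = steering_vector(20.0, M=10)
```

Each entry is a pure phase term, so the vector has unit-modulus entries and its first entry is 1.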
Using (1), $\mathbf{A}(\theta_1, \theta_2, \ldots, \theta_d)$ is defined as
$$\mathbf{A}(\theta_1, \theta_2, \ldots, \theta_d) = \begin{bmatrix} a_1(\theta_1) & a_1(\theta_2) & \cdots & a_1(\theta_d) \\ \vdots & \vdots & \ddots & \vdots \\ a_M(\theta_1) & a_M(\theta_2) & \cdots & a_M(\theta_d) \end{bmatrix}, \quad (2)$$
where $d$ is the number of incident signals.
The projection matrix onto the column space of $\mathbf{A}(\theta_1, \theta_2, \ldots, \theta_d)$ can be expressed as [6]
$$\mathbf{P}_{\mathbf{A}}(\theta_1, \theta_2, \ldots, \theta_d) = \mathbf{A}\left( \mathbf{A}^H \mathbf{A} \right)^{-1} \mathbf{A}^H, \quad (3)$$
where the dependence of $\mathbf{A}$ on $(\theta_1, \ldots, \theta_d)$ is suppressed on the right-hand side for brevity.
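The projector in (3) is straightforward to form numerically. The following sketch (an illustration under the assumption of a half-wavelength ULA; the helper names are not from the paper) builds the projector for two hypothetical angles and relies on a linear solve rather than an explicit matrix inverse.

```python
import numpy as np

def steering_vector(theta_deg, M, delta_over_lambda=0.5):
    # ULA array vector of Equation (1), Delta = lambda/2 assumed
    m = np.arange(M)
    return np.exp(1j * 2.0 * np.pi * delta_over_lambda * m
                  * np.sin(np.deg2rad(theta_deg)))

def projection_matrix(A):
    """Equation (3): P_A = A (A^H A)^{-1} A^H, the orthogonal projector onto range(A)."""
    AH = A.conj().T
    return A @ np.linalg.solve(AH @ A, AH)

A = np.column_stack([steering_vector(t, M=8) for t in (20.0, 40.0)])
P = projection_matrix(A)
```

As a sanity check, an orthogonal projector is idempotent and Hermitian, and its trace equals the rank of $\mathbf{A}$ (here $d = 2$).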
The noiseless signals incident on the array antenna elements can be written as, for $i = 1, \ldots, L$,
$$\mathbf{x}(t_i) = \mathbf{A}(\theta_1, \ldots, \theta_d)\,\mathbf{s}(t_i), \qquad \begin{bmatrix} x_1(t_i) \\ \vdots \\ x_M(t_i) \end{bmatrix} = \mathbf{A}(\theta_1, \ldots, \theta_d) \begin{bmatrix} s_1(t_i) \\ s_2(t_i) \\ \vdots \\ s_d(t_i) \end{bmatrix}. \quad (4)$$
The noisy incident signals can be expressed as
$$\bar{\mathbf{x}}(t_i) = \mathbf{A}(\theta_1, \ldots, \theta_d)\,\mathbf{s}(t_i) + \mathbf{n}(t_i), \qquad \begin{bmatrix} \bar{x}_1(t_i) \\ \vdots \\ \bar{x}_M(t_i) \end{bmatrix} = \mathbf{A}(\theta_1, \ldots, \theta_d) \begin{bmatrix} s_1(t_i) \\ s_2(t_i) \\ \vdots \\ s_d(t_i) \end{bmatrix} + \begin{bmatrix} n_1(t_i) \\ \vdots \\ n_M(t_i) \end{bmatrix}. \quad (5)$$
It is assumed that the entries of the Gaussian noise vector are independent and identically distributed Gaussian random variables. Note that the noise is complex-valued and that the real and imaginary parts of each entry are independent Gaussian random variables with non-zero mean $\mu$. The variance of the real part, denoted by $\sigma^2/2$, is equal to the variance of the imaginary part.
Let $L$ denote the number of snapshots. The noiseless sample covariance matrix $\hat{\mathbf{R}}$ is given by
$$\hat{\mathbf{R}} = \frac{1}{L} \sum_{i=1}^{L} \mathbf{x}(t_i)\,\mathbf{x}(t_i)^H = \begin{bmatrix} \hat{R}_{11} & \cdots & \hat{R}_{1M} \\ \vdots & \ddots & \vdots \\ \hat{R}_{M1} & \cdots & \hat{R}_{MM} \end{bmatrix}, \quad (6)$$
where $\mathbf{x}(t_i)$ is given by (4).
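The averaging in (6) is a single matrix product over the snapshot matrix. The sketch below fills the snapshots with hypothetical unit-variance complex data purely for illustration; in the paper the snapshots come from the signal model (4).

```python
import numpy as np

rng = np.random.default_rng(0)
M, L = 4, 1000  # hypothetical array size and snapshot count

# Snapshot matrix: one column per x(t_i); circular complex Gaussian stand-in data.
X = (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))) / np.sqrt(2.0)

# Equation (6): (1/L) * sum_i x(t_i) x(t_i)^H, computed as one outer-product sum.
R_hat = (X @ X.conj().T) / L
```

The result is Hermitian by construction, and its diagonal estimates the per-element power (close to 1 here).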
From (5), the noisy sample covariance matrix is
$$\hat{\bar{\mathbf{R}}} = \frac{1}{L} \sum_{i=1}^{L} \bar{\mathbf{x}}(t_i)\,\bar{\mathbf{x}}(t_i)^H = \begin{bmatrix} \hat{\bar{R}}_{11} & \cdots & \hat{\bar{R}}_{1M} \\ \vdots & \ddots & \vdots \\ \hat{\bar{R}}_{M1} & \cdots & \hat{\bar{R}}_{MM} \end{bmatrix}. \quad (7)$$
$\delta\mathbf{R}$ is defined as
$$\delta\mathbf{R} = \hat{\bar{\mathbf{R}}} - \hat{\mathbf{R}}. \quad (8)$$
In the ML algorithm, the estimates $\hat{\theta}_1, \hat{\theta}_2, \ldots, \hat{\theta}_d$ are obtained from
$$\left( \hat{\theta}_1, \hat{\theta}_2, \ldots, \hat{\theta}_d \right) = \arg\max_{\theta_1, \theta_2, \ldots, \theta_d} \operatorname{tr}\left( \mathbf{P}_{\mathbf{A}}(\theta_1, \theta_2, \ldots, \theta_d)\, \hat{\bar{\mathbf{R}}} \right), \quad (9)$$
where $\hat{\theta}_c = \theta_c^0 + \delta\theta_c$ for $c = 1, \ldots, d$, and $\operatorname{tr}\left( \mathbf{P}_{\mathbf{A}}\, \hat{\bar{\mathbf{R}}} \right)$ can be written as
$$\operatorname{tr}\left( \mathbf{P}_{\mathbf{A}}\, \hat{\bar{\mathbf{R}}} \right) = \operatorname{tr}\left( \begin{bmatrix} P_{11} & \cdots & P_{1M} \\ \vdots & \ddots & \vdots \\ P_{M1} & \cdots & P_{MM} \end{bmatrix} \begin{bmatrix} \hat{\bar{R}}_{11} & \cdots & \hat{\bar{R}}_{1M} \\ \vdots & \ddots & \vdots \\ \hat{\bar{R}}_{M1} & \cdots & \hat{\bar{R}}_{MM} \end{bmatrix} \right) = \sum_{k=1}^{M} \sum_{l=1}^{M} P_{kl}(\theta_1, \ldots, \theta_d)\, \hat{\bar{R}}_{lk}, \quad (10)$$
where $M$ is the number of antenna elements and the arguments $(\theta_1, \ldots, \theta_d)$ of $P_{kl}$ are suppressed in the matrix for brevity.
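The ML cost in (9) and (10) can be evaluated directly for candidate angle pairs. The following sketch generates hypothetical data for two signals from 20 and 40 degrees (the signal powers, noise level, and half-wavelength spacing are assumptions made here, not values from the paper) and confirms that the cost is larger at the true pair than at mismatched pairs.

```python
import numpy as np

rng = np.random.default_rng(1)
M, L, d = 10, 200, 2
true_deg = (20.0, 40.0)  # hypothetical true azimuths

def steering_vector(theta_deg):
    # Equation (1) with Delta = lambda/2 assumed
    m = np.arange(M)
    return np.exp(1j * np.pi * m * np.sin(np.deg2rad(theta_deg)))

# Noisy snapshots per Equation (5): x_bar = A s + n
A0 = np.column_stack([steering_vector(t) for t in true_deg])
S = (rng.standard_normal((d, L)) + 1j * rng.standard_normal((d, L))) / np.sqrt(2.0)
N = 0.01 * (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L)))
X = A0 @ S + N
R_bar = (X @ X.conj().T) / L  # noisy sample covariance, Equation (7)

def ml_cost(t1, t2):
    """tr(P_A(t1, t2) * R_bar), the ML cost of Equations (9)-(10)."""
    A = np.column_stack([steering_vector(t1), steering_vector(t2)])
    AH = A.conj().T
    P = A @ np.linalg.solve(AH @ A, AH)
    return np.real(np.trace(P @ R_bar))

c_true = ml_cost(20.0, 40.0)
c_off = ml_cost(25.0, 45.0)
```

A grid search over angle pairs maximizing `ml_cost` would implement (9) directly; the full paper instead analyzes the estimator around the true angles.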

3. Closed-Form Expression of Estimation Error

From (1), we get
$$a_m(\theta_c)\, a_m^*(\theta_c) = 1. \quad (11)$$
From (11), we obtain
$$\mathbf{A}(\theta_1, \ldots, \theta_d)^H \mathbf{A}(\theta_1, \ldots, \theta_d) = \begin{bmatrix} M & B(\theta_1,\theta_2) & B(\theta_1,\theta_3) & \cdots & B(\theta_1,\theta_d) \\ B^*(\theta_1,\theta_2) & M & B(\theta_2,\theta_3) & \cdots & B(\theta_2,\theta_d) \\ B^*(\theta_1,\theta_3) & B^*(\theta_2,\theta_3) & M & \cdots & B(\theta_3,\theta_d) \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ B^*(\theta_1,\theta_d) & B^*(\theta_2,\theta_d) & B^*(\theta_3,\theta_d) & \cdots & M \end{bmatrix}, \quad (12)$$
where, from (1), $B(\theta_k, \theta_l)$ is defined as
$$B(\theta_k, \theta_l) \equiv \sum_{m=1}^{M} \exp\left( j\,\frac{2\pi}{\lambda}\,(m-1)\,\Delta \left( \sin\theta_l - \sin\theta_k \right) \right) = \frac{1 - \exp\left( j\,\frac{2\pi}{\lambda}\, M \Delta \left( \sin\theta_l - \sin\theta_k \right) \right)}{1 - \exp\left( j\,\frac{2\pi}{\lambda}\, \Delta \left( \sin\theta_l - \sin\theta_k \right) \right)}. \quad (13)$$
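The second equality in (13) is the finite geometric series $\sum_{m=0}^{M-1} r^m = (1 - r^M)/(1 - r)$ with $r = \exp(j\,2\pi\Delta(\sin\theta_l - \sin\theta_k)/\lambda)$. A quick numerical check (half-wavelength spacing assumed, angle values hypothetical):

```python
import numpy as np

def B_direct(tk_deg, tl_deg, M):
    """B(theta_k, theta_l) as the direct sum in Equation (13), Delta = lambda/2."""
    m = np.arange(M)
    phi = np.pi * (np.sin(np.deg2rad(tl_deg)) - np.sin(np.deg2rad(tk_deg)))
    return np.sum(np.exp(1j * phi * m))

def B_closed(tk_deg, tl_deg, M):
    """The geometric-series closed form of Equation (13)."""
    phi = np.pi * (np.sin(np.deg2rad(tl_deg)) - np.sin(np.deg2rad(tk_deg)))
    r = np.exp(1j * phi)
    if np.isclose(r, 1.0):
        # theta_k = theta_l: the ratio is indeterminate and the sum is M,
        # consistent with the diagonal entries of (12)
        return complex(M)
    return (1.0 - r**M) / (1.0 - r)
```

The guard for $r = 1$ matters: on the diagonal of (12) the ratio form is $0/0$, while the sum is exactly $M$.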
Using (12), the $(k,l)$ entry of $\left( \mathbf{A}^H \mathbf{A} \right)^{-1}$ can be written as
$$\left[ \left( \mathbf{A}^H \mathbf{A} \right)^{-1} \right]_{k,l} = \frac{1}{\det\left( \mathbf{A}^H \mathbf{A} \right)} \left[ \operatorname{adj}\left( \mathbf{A}^H \mathbf{A} \right) \right]_{k,l}. \quad (14)$$
$\operatorname{adj}(\cdot)$ denotes the adjugate of a matrix. The $(k,l)$-th element of the adjugate of $\mathbf{A}^H \mathbf{A}$ can be expressed as
$$\left[ \operatorname{adj}\left( \mathbf{A}^H \mathbf{A} \right) \right]_{k,l} = (-1)^{k+l} \det\left( \left[ \mathbf{A}^H \mathbf{A} \right]_{(l,k)} \right), \quad (15)$$
where $\left[ \mathbf{A}^H \mathbf{A} \right]_{(l,k)}$ denotes the submatrix of $\mathbf{A}^H \mathbf{A}$ obtained by deleting the $l$-th row and the $k$-th column, with entries $M$, $B(\theta_k, \theta_l)$, and $B^*(\theta_k, \theta_l)$ as in (12). The determinant can be obtained in many ways, one of which is a cofactor expansion.
Generally, the number of incident signals is $d$. From (14) and (15), an explicit expression of the entry at the $k$-th row and $l$-th column of $\left( \mathbf{A}^H \mathbf{A} \right)^{-1}$ can be obtained. However, it is very complicated to express the determinant in (15) in terms of the entries of $\mathbf{A}^H \mathbf{A}$ for all $k = 1, \ldots, d$ and $l = 1, \ldots, d$. In addition, due to the very complicated expressions, writing each entry of $\left( \mathbf{A}^H \mathbf{A} \right)^{-1}$ in terms of the entries of $\mathbf{A}^H \mathbf{A}$ would impair the readability of this paper. Therefore, the number of signals in this paper is set to two.
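The adjugate route of (14) and (15) can be verified numerically against a direct inverse. This is an illustrative check only; the matrix below is a random Hermitian stand-in for $\mathbf{A}^H \mathbf{A}$, not the structured matrix of (12).

```python
import numpy as np

def adjugate(H):
    """adj(H) per Equation (15): entry (k, l) is (-1)^(k+l) times the
    determinant of H with row l and column k deleted."""
    n = H.shape[0]
    C = np.zeros_like(H)
    for k in range(n):
        for l in range(n):
            minor = np.delete(np.delete(H, l, axis=0), k, axis=1)
            C[k, l] = (-1) ** (k + l) * np.linalg.det(minor)
    return C

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
H = A.conj().T @ A  # plays the role of A^H A (Hermitian, almost surely invertible)
inv_via_adj = adjugate(H) / np.linalg.det(H)  # Equation (14)
```

For $d = 2$ this reduces to the familiar $2 \times 2$ formula, which is why the paper restricts attention to two signals.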
Using (12) in (3), the entries of $\mathbf{P}_{\mathbf{A}}(\theta_1, \theta_2)$ can be written as
$$P_{kl}(\theta_1, \theta_2) = \frac{1}{\det(\theta_1, \theta_2)} \left[ a_k(\theta_1)\, M - a_k(\theta_2)\, B^*(\theta_1, \theta_2) \right] a_l^*(\theta_1) + \frac{1}{\det(\theta_1, \theta_2)} \left[ a_k(\theta_2)\, M - a_k(\theta_1)\, B(\theta_1, \theta_2) \right] a_l^*(\theta_2), \quad (16)$$
where $\det(\theta_1, \theta_2)$ is given in Appendix B.
The numerator of $P_{kl}(\theta_1, \theta_2)$ is defined as $Q_{kl}(\theta_1, \theta_2)$:
$$Q_{kl}(\theta_1, \theta_2) = \left[ a_k(\theta_1)\, M - a_k(\theta_2)\, B^*(\theta_1, \theta_2) \right] a_l^*(\theta_1) + \left[ a_k(\theta_2)\, M - a_k(\theta_1)\, B(\theta_1, \theta_2) \right] a_l^*(\theta_2). \quad (17)$$
Let $\theta_1^0$ and $\theta_2^0$ denote the true incident angles of the two incident signals, and let $\hat{\theta}_1$ and $\hat{\theta}_2$ denote the estimates of the two incident angles. From (9) and (10), with $f_{kl}(\theta_1, \theta_2)$ and $g_{kl}(\theta_1, \theta_2)$ defined in Appendix I, the estimates $\hat{\theta}_1, \hat{\theta}_2$ satisfy
$$\left. \frac{\partial}{\partial\theta_1} \operatorname{tr}\left( \mathbf{P}_{\mathbf{A}}(\theta_1, \theta_2)\, \hat{\bar{\mathbf{R}}} \right) \right|_{\substack{\theta_1 = \hat{\theta}_1 \\ \theta_2 = \hat{\theta}_2}} = \sum_{k=1}^{M} \sum_{l=1}^{M} \left. \frac{\partial P_{kl}(\theta_1, \theta_2)}{\partial\theta_1} \right|_{\substack{\theta_1 = \hat{\theta}_1 \\ \theta_2 = \hat{\theta}_2}} \hat{\bar{R}}_{lk} = \sum_{k=1}^{M} \sum_{l=1}^{M} f_{kl}\left( \hat{\theta}_1, \hat{\theta}_2 \right) \hat{\bar{R}}_{lk} = 0 \quad (18)$$
$$\left. \frac{\partial}{\partial\theta_2} \operatorname{tr}\left( \mathbf{P}_{\mathbf{A}}(\theta_1, \theta_2)\, \hat{\bar{\mathbf{R}}} \right) \right|_{\substack{\theta_1 = \hat{\theta}_1 \\ \theta_2 = \hat{\theta}_2}} = \sum_{k=1}^{M} \sum_{l=1}^{M} \left. \frac{\partial P_{kl}(\theta_1, \theta_2)}{\partial\theta_2} \right|_{\substack{\theta_1 = \hat{\theta}_1 \\ \theta_2 = \hat{\theta}_2}} \hat{\bar{R}}_{lk} = \sum_{k=1}^{M} \sum_{l=1}^{M} g_{kl}\left( \hat{\theta}_1, \hat{\theta}_2 \right) \hat{\bar{R}}_{lk} = 0. \quad (19)$$
The derivatives of the ML cost function with respect to $\theta_1$ and $\theta_2$ are zero at the true incident angles for the noiseless sample covariance matrix:
$$\left. \frac{\partial}{\partial\theta_1} \operatorname{tr}\left( \mathbf{P}_{\mathbf{A}}(\theta_1, \theta_2)\, \hat{\mathbf{R}} \right) \right|_{\substack{\theta_1 = \theta_1^0 \\ \theta_2 = \theta_2^0}} = \sum_{k=1}^{M} \sum_{l=1}^{M} \left. \frac{\partial P_{kl}(\theta_1, \theta_2)}{\partial\theta_1} \right|_{\substack{\theta_1 = \theta_1^0 \\ \theta_2 = \theta_2^0}} \hat{R}_{lk} = \sum_{k=1}^{M} \sum_{l=1}^{M} f_{kl}\left( \theta_1^0, \theta_2^0 \right) \hat{R}_{lk} = 0 \quad (20)$$
$$\left. \frac{\partial}{\partial\theta_2} \operatorname{tr}\left( \mathbf{P}_{\mathbf{A}}(\theta_1, \theta_2)\, \hat{\mathbf{R}} \right) \right|_{\substack{\theta_1 = \theta_1^0 \\ \theta_2 = \theta_2^0}} = \sum_{k=1}^{M} \sum_{l=1}^{M} \left. \frac{\partial P_{kl}(\theta_1, \theta_2)}{\partial\theta_2} \right|_{\substack{\theta_1 = \theta_1^0 \\ \theta_2 = \theta_2^0}} \hat{R}_{lk} = \sum_{k=1}^{M} \sum_{l=1}^{M} g_{kl}\left( \theta_1^0, \theta_2^0 \right) \hat{R}_{lk} = 0. \quad (21)$$
Substituting $f_{kl}(\theta_1, \theta_2)$ and $g_{kl}(\theta_1, \theta_2)$ with the Taylor series approximations in Appendix J, and using
$$\hat{\bar{R}}_{lk} = \hat{R}_{lk} + \delta R_{lk}, \quad (22)$$
we have
$$\sum_{k=1}^{M} \sum_{l=1}^{M} f_{kl}\left( \theta_1^0, \theta_2^0 \right) \left( \hat{R}_{lk} + \delta R_{lk} \right) + \sum_{k=1}^{M} \sum_{l=1}^{M} \delta\theta_1 \left. \frac{\partial f_{kl}(\theta_1, \theta_2)}{\partial\theta_1} \right|_{\substack{\theta_1 = \theta_1^0 \\ \theta_2 = \theta_2^0}} \left( \hat{R}_{lk} + \delta R_{lk} \right) + \sum_{k=1}^{M} \sum_{l=1}^{M} \delta\theta_2 \left. \frac{\partial f_{kl}(\theta_1, \theta_2)}{\partial\theta_2} \right|_{\substack{\theta_1 = \theta_1^0 \\ \theta_2 = \theta_2^0}} \left( \hat{R}_{lk} + \delta R_{lk} \right) = 0 \quad (23)$$
$$\sum_{k=1}^{M} \sum_{l=1}^{M} g_{kl}\left( \theta_1^0, \theta_2^0 \right) \left( \hat{R}_{lk} + \delta R_{lk} \right) + \sum_{k=1}^{M} \sum_{l=1}^{M} \delta\theta_1 \left. \frac{\partial g_{kl}(\theta_1, \theta_2)}{\partial\theta_1} \right|_{\substack{\theta_1 = \theta_1^0 \\ \theta_2 = \theta_2^0}} \left( \hat{R}_{lk} + \delta R_{lk} \right) + \sum_{k=1}^{M} \sum_{l=1}^{M} \delta\theta_2 \left. \frac{\partial g_{kl}(\theta_1, \theta_2)}{\partial\theta_2} \right|_{\substack{\theta_1 = \theta_1^0 \\ \theta_2 = \theta_2^0}} \left( \hat{R}_{lk} + \delta R_{lk} \right) = 0, \quad (24)$$
where the first order derivatives of f k l θ 1 , θ 2 and g k l θ 1 , θ 2 with respect to θ 1 and θ 2 are given in Appendix A.
Using (20) and (21) in (23) and (24) and rearranging terms yields
$$\begin{bmatrix} \displaystyle\sum_{k,l} \left. \frac{\partial f_{kl}}{\partial\theta_1} \right|_0 \left( \hat{R}_{lk} + \delta R_{lk} \right) & \displaystyle\sum_{k,l} \left. \frac{\partial f_{kl}}{\partial\theta_2} \right|_0 \left( \hat{R}_{lk} + \delta R_{lk} \right) \\ \displaystyle\sum_{k,l} \left. \frac{\partial g_{kl}}{\partial\theta_1} \right|_0 \left( \hat{R}_{lk} + \delta R_{lk} \right) & \displaystyle\sum_{k,l} \left. \frac{\partial g_{kl}}{\partial\theta_2} \right|_0 \left( \hat{R}_{lk} + \delta R_{lk} \right) \end{bmatrix} \begin{bmatrix} \delta\theta_1^{(u=1)} \\ \delta\theta_2^{(u=1)} \end{bmatrix} = -\begin{bmatrix} \displaystyle\sum_{k,l} f_{kl}\left( \theta_1^0, \theta_2^0 \right) \delta R_{lk} \\ \displaystyle\sum_{k,l} g_{kl}\left( \theta_1^0, \theta_2^0 \right) \delta R_{lk} \end{bmatrix}, \quad (25)$$
where $|_0$ denotes evaluation at $\theta_1 = \theta_1^0$, $\theta_2 = \theta_2^0$, and $\sum_{k,l}$ abbreviates $\sum_{k=1}^{M} \sum_{l=1}^{M}$.
We define $\mathbf{C}^{(u=1)}$ and $\mathbf{b}$ as follows:
$$\mathbf{C}^{(u=1)} \equiv \begin{bmatrix} \displaystyle\sum_{k,l} \left. \frac{\partial f_{kl}}{\partial\theta_1} \right|_0 \left( \hat{R}_{lk} + \delta R_{lk} \right) & \displaystyle\sum_{k,l} \left. \frac{\partial f_{kl}}{\partial\theta_2} \right|_0 \left( \hat{R}_{lk} + \delta R_{lk} \right) \\ \displaystyle\sum_{k,l} \left. \frac{\partial g_{kl}}{\partial\theta_1} \right|_0 \left( \hat{R}_{lk} + \delta R_{lk} \right) & \displaystyle\sum_{k,l} \left. \frac{\partial g_{kl}}{\partial\theta_2} \right|_0 \left( \hat{R}_{lk} + \delta R_{lk} \right) \end{bmatrix} \quad (26)$$
$$\mathbf{b} \equiv -\begin{bmatrix} \displaystyle\sum_{k,l} f_{kl}\left( \theta_1^0, \theta_2^0 \right) \delta R_{lk} \\ \displaystyle\sum_{k,l} g_{kl}\left( \theta_1^0, \theta_2^0 \right) \delta R_{lk} \end{bmatrix}. \quad (27)$$
Using (26) and (27), (25) can be written as
$$\mathbf{C}^{(u=1)} \begin{bmatrix} \delta\theta_1^{(u=1)} \\ \delta\theta_2^{(u=1)} \end{bmatrix} = \mathbf{b}. \quad (28)$$
The solution of (28) and the associated estimates are given by
$$\begin{bmatrix} \delta\theta_1^{(u=1)} \\ \delta\theta_2^{(u=1)} \end{bmatrix} = \left( \mathbf{C}^{(u=1)} \right)^{-1} \mathbf{b} \quad (29)$$
$$\hat{\theta}_1^{(u=1)} = \theta_1^0 + \delta\theta_1^{(u=1)}, \qquad \hat{\theta}_2^{(u=1)} = \theta_2^0 + \delta\theta_2^{(u=1)}, \quad (30)$$
where the superscript $(u=1)$ indicates that the first-order Taylor expansion is used. This approximation is called the U approximation.
At high SNR, $\hat{R}_{lk}$ is much larger than $\delta R_{lk}$, so
$$\hat{R}_{lk} + \delta R_{lk} \approx \hat{R}_{lk}. \quad (31)$$
This approximation is called the V approximation.
Using (31) in (25) yields
$$\begin{bmatrix} \displaystyle\sum_{k,l} \left. \frac{\partial f_{kl}}{\partial\theta_1} \right|_0 \hat{R}_{lk} & \displaystyle\sum_{k,l} \left. \frac{\partial f_{kl}}{\partial\theta_2} \right|_0 \hat{R}_{lk} \\ \displaystyle\sum_{k,l} \left. \frac{\partial g_{kl}}{\partial\theta_1} \right|_0 \hat{R}_{lk} & \displaystyle\sum_{k,l} \left. \frac{\partial g_{kl}}{\partial\theta_2} \right|_0 \hat{R}_{lk} \end{bmatrix} \begin{bmatrix} \delta\theta_1^{(u=1,v)} \\ \delta\theta_2^{(u=1,v)} \end{bmatrix} = -\begin{bmatrix} \displaystyle\sum_{k,l} f_{kl}\left( \theta_1^0, \theta_2^0 \right) \delta R_{lk} \\ \displaystyle\sum_{k,l} g_{kl}\left( \theta_1^0, \theta_2^0 \right) \delta R_{lk} \end{bmatrix}, \quad (32)$$
where the superscript $(u=1,v)$ indicates that both the U approximation and the V approximation are used to get the estimates.
Let $\mathbf{C}^{(u=1,v)}$ be defined by
$$\mathbf{C}^{(u=1,v)} \equiv \begin{bmatrix} \displaystyle\sum_{k,l} \left. \frac{\partial f_{kl}}{\partial\theta_1} \right|_0 \hat{R}_{lk} & \displaystyle\sum_{k,l} \left. \frac{\partial f_{kl}}{\partial\theta_2} \right|_0 \hat{R}_{lk} \\ \displaystyle\sum_{k,l} \left. \frac{\partial g_{kl}}{\partial\theta_1} \right|_0 \hat{R}_{lk} & \displaystyle\sum_{k,l} \left. \frac{\partial g_{kl}}{\partial\theta_2} \right|_0 \hat{R}_{lk} \end{bmatrix}. \quad (33)$$
The solution of (32) and the associated estimates are given by
$$\begin{bmatrix} \delta\theta_1^{(u=1,v)} \\ \delta\theta_2^{(u=1,v)} \end{bmatrix} = \left( \mathbf{C}^{(u=1,v)} \right)^{-1} \mathbf{b} \quad (34)$$
$$\hat{\theta}_1^{(u=1,v)} = \theta_1^0 + \delta\theta_1^{(u=1,v)}, \qquad \hat{\theta}_2^{(u=1,v)} = \theta_2^0 + \delta\theta_2^{(u=1,v)}. \quad (35)$$
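The linearized solves above reduce the estimation-error computation to a single 2×2 system at the true angles. The sketch below mimics this structure numerically: it forms the gradient of the ML cost and its Jacobian by central differences (a numerical stand-in for the analytic $f_{kl}$, $g_{kl}$ sums; the signal model, noise level, and spacing are hypothetical) and takes one linear-solve correction from the true angles.

```python
import numpy as np

rng = np.random.default_rng(3)
M, L = 10, 200
t0 = np.array([20.0, 40.0])  # hypothetical true azimuths in degrees

def steer(t):
    # Equation (1), Delta = lambda/2 assumed
    return np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(t)))

A0 = np.column_stack([steer(t) for t in t0])
S = (rng.standard_normal((2, L)) + 1j * rng.standard_normal((2, L))) / np.sqrt(2.0)
N = 0.05 * (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L)))
X = A0 @ S + N
R_bar = (X @ X.conj().T) / L  # noisy sample covariance

def cost(t):
    A = np.column_stack([steer(ti) for ti in t])
    AH = A.conj().T
    P = A @ np.linalg.solve(AH @ A, AH)
    return np.real(np.trace(P @ R_bar))

h = 1e-4  # finite-difference step in degrees
def grad(t):
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2); e[i] = h
        g[i] = (cost(t + e) - cost(t - e)) / (2.0 * h)
    return g

# Jacobian of the gradient at the true angles (plays the role of C)
C = np.zeros((2, 2))
for i in range(2):
    e = np.zeros(2); e[i] = h
    C[:, i] = (grad(t0 + e) - grad(t0 - e)) / (2.0 * h)

dtheta = np.linalg.solve(C, -grad(t0))  # one 2x2 solve, as in (29)/(34)
theta_hat = t0 + dtheta                 # as in (30)/(35)
```

At moderate noise the correction is a small fraction of a degree, which is the regime in which the first-order (U) approximation is accurate.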

4. Closed-Form Expression of the Mean Square Error of $\hat{\theta}_1^{(u=1,v)}$ and $\hat{\theta}_2^{(u=1,v)}$

From (34), the analytic mean square errors (MSEs) of $\delta\theta_1^{(u=1,v)}$ and $\delta\theta_2^{(u=1,v)}$ are given by
$$E\left[ \left( \delta\theta_1^{(u=1,v)} \right)^2 \right] = E\left[ \left| \delta\theta_1^{(u=1,v)} \right|^2 \right] = E\left[ \delta\theta_1^{(u=1,v)} \left( \delta\theta_1^{(u=1,v)} \right)^* \right] = \left[ \left( \mathbf{C}^{(u=1,v)} \right)^{-1} E\left[ \mathbf{b}\mathbf{b}^H \right] \left( \mathbf{C}^{(u=1,v)\,H} \right)^{-1} \right]_{11} \quad (36)$$
$$E\left[ \left( \delta\theta_2^{(u=1,v)} \right)^2 \right] = E\left[ \left| \delta\theta_2^{(u=1,v)} \right|^2 \right] = E\left[ \delta\theta_2^{(u=1,v)} \left( \delta\theta_2^{(u=1,v)} \right)^* \right] = \left[ \left( \mathbf{C}^{(u=1,v)} \right)^{-1} E\left[ \mathbf{b}\mathbf{b}^H \right] \left( \mathbf{C}^{(u=1,v)\,H} \right)^{-1} \right]_{22}, \quad (37)$$
where the subscript 11 denotes the entry at the first row and first column, and the subscript 22 is defined similarly. From (27), $E\left[ \mathbf{b}\mathbf{b}^H \right]$ can be expressed as
$$\begin{aligned}
\left[ E\left[ \mathbf{b}\mathbf{b}^H \right] \right]_{11} &= \sum_{k=1}^{M} \sum_{l=1}^{M} \sum_{k'=1}^{M} \sum_{l'=1}^{M} f_{kl}\left( \theta_1^0, \theta_2^0 \right) f_{k'l'}^{*}\left( \theta_1^0, \theta_2^0 \right) E\left[ \delta R_{lk}\, \delta R_{l'k'}^{*} \right] \\
\left[ E\left[ \mathbf{b}\mathbf{b}^H \right] \right]_{12} &= \sum_{k=1}^{M} \sum_{l=1}^{M} \sum_{k'=1}^{M} \sum_{l'=1}^{M} f_{kl}\left( \theta_1^0, \theta_2^0 \right) g_{k'l'}^{*}\left( \theta_1^0, \theta_2^0 \right) E\left[ \delta R_{lk}\, \delta R_{l'k'}^{*} \right] \\
\left[ E\left[ \mathbf{b}\mathbf{b}^H \right] \right]_{21} &= \sum_{k=1}^{M} \sum_{l=1}^{M} \sum_{k'=1}^{M} \sum_{l'=1}^{M} g_{kl}\left( \theta_1^0, \theta_2^0 \right) f_{k'l'}^{*}\left( \theta_1^0, \theta_2^0 \right) E\left[ \delta R_{lk}\, \delta R_{l'k'}^{*} \right] \\
\left[ E\left[ \mathbf{b}\mathbf{b}^H \right] \right]_{22} &= \sum_{k=1}^{M} \sum_{l=1}^{M} \sum_{k'=1}^{M} \sum_{l'=1}^{M} g_{kl}\left( \theta_1^0, \theta_2^0 \right) g_{k'l'}^{*}\left( \theta_1^0, \theta_2^0 \right) E\left[ \delta R_{lk}\, \delta R_{l'k'}^{*} \right],
\end{aligned} \quad (38)$$
where $E\left[ \delta R_{lk}\, \delta R_{l'k'}^{*} \right]$ is given in Appendix G.

5. Numerical Results

The U approximation and the V approximation were described in Section 3, and the analytic mean square error (MSE) was derived in Section 4. The empirical MSEs of the azimuth estimates are defined as
$$\text{Simulation } E\left[ \left( \hat{\theta}_1 - \theta_1^0 \right)^2 \right] = \frac{1}{W} \sum_{w=1}^{W} \left( \hat{\theta}_{1(w)} - \theta_1^0 \right)^2 \quad (39)$$
$$\text{Simulation } E\left[ \left( \hat{\theta}_1^{(u=1)} - \theta_1^0 \right)^2 \right] = \frac{1}{W} \sum_{w=1}^{W} \left( \hat{\theta}_{1(w)}^{(u=1)} - \theta_1^0 \right)^2 \quad (40)$$
$$\text{Simulation } E\left[ \left( \hat{\theta}_1^{(u=1,v)} - \theta_1^0 \right)^2 \right] = \frac{1}{W} \sum_{w=1}^{W} \left( \hat{\theta}_{1(w)}^{(u=1,v)} - \theta_1^0 \right)^2 \quad (41)$$
$$\text{Simulation } E\left[ \left( \hat{\theta}_2 - \theta_2^0 \right)^2 \right] = \frac{1}{W} \sum_{w=1}^{W} \left( \hat{\theta}_{2(w)} - \theta_2^0 \right)^2 \quad (42)$$
$$\text{Simulation } E\left[ \left( \hat{\theta}_2^{(u=1)} - \theta_2^0 \right)^2 \right] = \frac{1}{W} \sum_{w=1}^{W} \left( \hat{\theta}_{2(w)}^{(u=1)} - \theta_2^0 \right)^2 \quad (43)$$
$$\text{Simulation } E\left[ \left( \hat{\theta}_2^{(u=1,v)} - \theta_2^0 \right)^2 \right] = \frac{1}{W} \sum_{w=1}^{W} \left( \hat{\theta}_{2(w)}^{(u=1,v)} - \theta_2^0 \right)^2 \quad (44)$$
where W denotes the number of repetitions. The subscript (w) denotes the estimate associated with the w-th repetition out of W repetitions.
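The empirical MSE definitions above are simple sample averages over repetitions. As an illustration only, with hypothetical repeated estimates (Gaussian draws standing in for outputs of the ML algorithm):

```python
import numpy as np

rng = np.random.default_rng(4)
theta_true = 20.0  # hypothetical true azimuth in degrees
W = 500            # number of repetitions

# Hypothetical repeated estimates: true angle plus small random error
theta_hat_w = theta_true + rng.normal(0.0, 0.1, size=W)

# Empirical MSE as in Equation (39): (1/W) * sum_w (theta_hat_(w) - theta_0)^2
mse = np.mean((theta_hat_w - theta_true) ** 2)
```

With an error standard deviation of 0.1 degrees, the empirical MSE concentrates near 0.01 deg², and its accuracy improves only slowly with $W$, which is why the analytic MSE of Section 4 is attractive.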
$\hat{\theta}_1$ and $\hat{\theta}_2$ in (39) and (42) are given by (9). Similarly, $\hat{\theta}_1^{(u=1)}$ and $\hat{\theta}_2^{(u=1)}$ in (40) and (43) are given by (29) and (30). $\hat{\theta}_1^{(u=1,v)}$ and $\hat{\theta}_2^{(u=1,v)}$ in (41) and (44) are obtained from (34) and (35).
In Figure 1, Figure 2, Figure 3 and Figure 4, we illustrate the accuracy of estimation of azimuths. In the simulation, an additive noise is assumed to be zero-mean Gaussian-distributed. In Figure 1, Figure 2, Figure 3 and Figure 4, the results with ‘Simulation E θ ^ 1 θ 1 0 2 ’, ‘Simulation E θ ^ 1 u = 1 θ 1 0 2 ’, ‘Simulation E θ ^ 1 u = 1 , v θ 1 0 2 ’, and ‘Analytic E θ ^ 1 u = 1 , v θ 1 0 2 ’ are obtained from (39), (40), (41), and (36), respectively.
Similarly, the results with ‘Simulation E θ ^ 2 θ 2 0 2 ’, ‘Simulation E θ ^ 2 u = 1 θ 2 0 2 ’, ‘Simulation E θ ^ 2 u = 1 , v θ 2 0 2 ’, and ‘Analytic E θ ^ 2 u = 1 , v θ 2 0 2 ’ are obtained from (42), (43), (44) and (36), respectively.
For all the results in Figure 1, Figure 2, Figure 3 and Figure 4, the difference between ‘Simulation $E[(\hat{\theta}_1 - \theta_1^0)^2]$’ and ‘Simulation $E[(\hat{\theta}_1^{(u=1)} - \theta_1^0)^2]$’ is much larger than that between ‘Simulation $E[(\hat{\theta}_1^{(u=1)} - \theta_1^0)^2]$’ and ‘Simulation $E[(\hat{\theta}_1^{(u=1,v)} - \theta_1^0)^2]$’, which implies that the U approximation results in much greater error than the V approximation. Therefore, to improve the accuracy of the analysis, a second-order Taylor expansion, which corresponds to $u = 2$, can be used.
Indeed, in all the results in Figure 1, Figure 2, Figure 3 and Figure 4, ‘Simulation $E[(\hat{\theta}_1^{(u=1)} - \theta_1^0)^2]$’ and ‘Simulation $E[(\hat{\theta}_1^{(u=1,v)} - \theta_1^0)^2]$’ are approximately equal.
To quantify how computationally efficient the proposed scheme is, the execution time is measured both for the analytically derived MSE and for the Monte Carlo simulation-based MSE.
The number of incident signals is two, with the signals incident from 20° and 40°. The number of antenna elements is 10, and the number of snapshots is 1000. In obtaining the Monte Carlo simulation-based MSE, since the computational complexity is nearly proportional to the number of repetitions, the number of repetitions varies from 100 to 1000 in increments of 100.
Figure 5 shows how computationally efficient the proposed algorithm is. The execution times of the simulation-based MSEs and analytic MSEs are illustrated with respect to the number of repetitions. Note that the execution time for analytically derived MSE is essentially independent of the number of repetitions.
It is clearly shown in Figure 5 that execution time for the Monte Carlo simulation-based MSE is much greater than that for the analytically derived MSE even for the number of repetitions of 100. Figure 5 illustrates that getting analytically derived MSE is much less computationally intensive than getting Monte Carlo simulation based MSE, which justifies why the analytically derived MSEs should be employed for performance analysis.

6. Conclusions and Summary

In Figure 6, the proposed algorithm is outlined. Note that Equations (A)–(D) in this section refer to Equations (A)–(D) in Figure 6, respectively. The constraints used for derivation of estimation errors are equations (A) and (B): equation (A) is valid since, in a noiseless environment, no estimation error occurs in the azimuth estimates. The estimation error due to an additive noise in noisy environment is formulated as equation (B).
To quantify the estimation error due to an additive noise, equations (A) and (B) and two approximations of U approximation and V approximation are used. Note that the covariance matrix in equation (A) and that in equation (B) are associated with noiseless response and noisy response, respectively. Applying the Taylor series approximation in equation (B) and using equation (A) in the approximated expression, the azimuth estimation error in equation (C) can be derived. To get an explicit expression of the MSE of the azimuth estimate, V approximation is applied, and the closed-form expression of the MSE of the estimate is obtained from equation (D).
To the best of our knowledge, no previous study used the explicit equations (A) and (B) to derive the azimuth error in equation (C). One of the novelties of this paper is that the derivation is based on the observation that the azimuth estimation error appearing in equation (B) can be analytically derived since equation (A) holds for the noiseless covariance matrix.
In summary, applying U approximation to equation (B) and using the constraint in equation (A), the estimation error in equation (C) can be obtained. The MSE of the azimuth estimation error in equation (D) is obtained from equation (C) and V approximation. Note that, to get equation (D) from equation (C), the statistics of an additive noise should also be exploited.
Quantitative study on the estimation error for direction-of-arrival estimation in terms of standard deviation of an additive noise has been addressed in this paper.
In this paper, for the case of estimating the azimuths of multiple incident signals, a closed-form expression of the MSE of the DOA estimate for the ML algorithm has been derived by stepwise approximations. The antenna array is assumed to be a ULA. The closed form of the MSE has been derived by using the Taylor series approximation, which linearizes the nonlinear parts of the array vector, together with an additional approximation based on the assumption that the estimation error is very small at high SNR. The closed form of the MSE has been verified through numerical results: all of the stepwise approximated simulation results and the results obtained from the closed form of the MSE show good agreement.
Although the formulation and the numerical results for two incident signals are presented in this paper, an extension to multiple incident signals is quite intuitively clear and straightforward.
In this paper, we rigorously derive how the MSE of the ML algorithm for direction-of-arrival estimation can be expressed in terms of various parameters, which include the number of sensor elements, the number of incident signals, the number of snapshots, and the variance of the additive noises on the antenna elements. Although, for convenience, the additive noise is assumed to be non-zero-mean Gaussian distributed, the derivation in the Appendices can be extended to any other noise model as long as the moments of the random variable are analytically available.

Author Contributions

J.W.P. and K.-H.L. made a Matlab implementation of the proposed algorithm. J.-H.L. formulated the proposed algorithm. J.-H.L., J.W.P., and K.-H.L. wrote an initial draft, and J.-H.L. and J.W.P. revised the manuscript. J.W.P. and K.-H.L. derived the results in the appendices. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2018R1D1A1B07048294). The authors gratefully acknowledge the support from Electronic Warfare Research Center at Gwangju Institute of Science and Technology (GIST), originally funded by Defense Acquisition Program Administration (DAPA) and Agency for Defense Development (ADD).

Conflicts of Interest

The authors declare no conflict of interest.

Notation

$(\cdot)^H$: Hermitian (conjugate) transpose
$\bar{(\cdot)}$: noisy quantity
$\delta(\cdot)$: difference between a noisy quantity and the corresponding noiseless quantity
$\mathbf{A}(\theta_1, \theta_2)$: matrix whose columns are the array vectors for $\theta_1$ and $\theta_2$
$\hat{\theta}_c$: estimate of the azimuth of the $c$-th incident signal $(c = 1, 2)$
$\theta_c^0$: true azimuth of the $c$-th incident signal $(c = 1, 2)$
$\bar{\mathbf{x}}(t_i)$: noisy signals on the antenna array at $t = t_i$
$\hat{\mathbf{R}}$: sample covariance matrix of the noiseless signal
$\hat{R}_{lk}$: the $(l, k)$ entry of $\hat{\mathbf{R}}$
$\hat{\bar{\mathbf{R}}}$: sample covariance matrix of the noisy signal
$\hat{\bar{R}}_{lk}$: the $(l, k)$ entry of $\hat{\bar{\mathbf{R}}}$
$\delta\mathbf{R}$: difference between $\hat{\bar{\mathbf{R}}}$ and $\hat{\mathbf{R}}$
$\delta R_{lk}$: difference between $\hat{\bar{R}}_{lk}$ and $\hat{R}_{lk}$
$(\cdot)^{(u=1)}$: first-order U approximation of $(\cdot)$ based on the Taylor expansion
$\hat{\theta}_c^{(u=1)}$: first-order U approximation of $\hat{\theta}_c$ $(c = 1, 2)$
$\delta\theta_c^{(u=1)}$: difference between $\hat{\theta}_c^{(u=1)}$ and $\theta_c^0$ $(c = 1, 2)$
$\hat{\theta}_c^{(u=1,v)}$: V approximation of $\hat{\theta}_c^{(u=1)}$ $(c = 1, 2)$
$\delta\theta_c^{(u=1,v)}$: difference between $\hat{\theta}_c^{(u=1,v)}$ and $\theta_c^0$ $(c = 1, 2)$
$\hat{\theta}_{c(w)}$: $\hat{\theta}_c$ associated with the $w$-th repetition out of $W$ repetitions $(c = 1, 2)$
$\hat{\theta}_{c(w)}^{(u=1)}$: $\hat{\theta}_c^{(u=1)}$ associated with the $w$-th repetition out of $W$ repetitions $(c = 1, 2)$
$\hat{\theta}_{c(w)}^{(u=1,v)}$: $\hat{\theta}_c^{(u=1,v)}$ associated with the $w$-th repetition out of $W$ repetitions $(c = 1, 2)$

Abbreviations

The following abbreviations are used in this manuscript:
DOA  Direction-of-Arrival
ML  Maximum Likelihood
MSE  Mean Square Error
SNR  Signal-to-Noise Ratio
ULA  Uniform Linear Array

Appendix A. First Order Derivative of fkl (θ1,θ2) and gkl (θ1,θ2)

(For brevity, the arguments $(\theta_1,\theta_2)$ of $Q_{kl}$, $B$, and $\det$ are suppressed in the following.)
$\frac{\partial Q_{kl}}{\partial\theta_1} = \left[\frac{da_k(\theta_1)}{d\theta_1}M - a_k(\theta_2)\frac{\partial B^*}{\partial\theta_1}\right]a_l^*(\theta_1) + \frac{da_l^*(\theta_1)}{d\theta_1}\left[a_k(\theta_1)M - a_k(\theta_2)B^*\right] - \left[\frac{da_k(\theta_1)}{d\theta_1}B + a_k(\theta_1)\frac{\partial B}{\partial\theta_1}\right]a_l^*(\theta_2)$
$\frac{\partial Q_{kl}}{\partial\theta_2} = -\left[\frac{da_k(\theta_2)}{d\theta_2}B^* + a_k(\theta_2)\frac{\partial B^*}{\partial\theta_2}\right]a_l^*(\theta_1) + a_l^*(\theta_2)\left[\frac{da_k(\theta_2)}{d\theta_2}M - a_k(\theta_1)\frac{\partial B}{\partial\theta_2}\right] + \left[a_k(\theta_2)M - a_k(\theta_1)B\right]\frac{da_l^*(\theta_2)}{d\theta_2}$
$\frac{\partial^2 Q_{kl}}{\partial\theta_1^2} = \left[\frac{d^2a_k(\theta_1)}{d\theta_1^2}M - a_k(\theta_2)\frac{\partial^2 B^*}{\partial\theta_1^2}\right]a_l^*(\theta_1) + 2\left[\frac{da_k(\theta_1)}{d\theta_1}M - a_k(\theta_2)\frac{\partial B^*}{\partial\theta_1}\right]\frac{da_l^*(\theta_1)}{d\theta_1} + \left[a_k(\theta_1)M - a_k(\theta_2)B^*\right]\frac{d^2a_l^*(\theta_1)}{d\theta_1^2} - \left[\frac{d^2a_k(\theta_1)}{d\theta_1^2}B + 2\frac{da_k(\theta_1)}{d\theta_1}\frac{\partial B}{\partial\theta_1} + a_k(\theta_1)\frac{\partial^2 B}{\partial\theta_1^2}\right]a_l^*(\theta_2)$
$\frac{\partial^2 Q_{kl}}{\partial\theta_2^2} = -\left[\frac{d^2a_k(\theta_2)}{d\theta_2^2}B^* + 2\frac{da_k(\theta_2)}{d\theta_2}\frac{\partial B^*}{\partial\theta_2} + a_k(\theta_2)\frac{\partial^2 B^*}{\partial\theta_2^2}\right]a_l^*(\theta_1) + \left[\frac{d^2a_k(\theta_2)}{d\theta_2^2}M - a_k(\theta_1)\frac{\partial^2 B}{\partial\theta_2^2}\right]a_l^*(\theta_2) + 2\left[\frac{da_k(\theta_2)}{d\theta_2}M - a_k(\theta_1)\frac{\partial B}{\partial\theta_2}\right]\frac{da_l^*(\theta_2)}{d\theta_2} + \left[a_k(\theta_2)M - a_k(\theta_1)B\right]\frac{d^2a_l^*(\theta_2)}{d\theta_2^2}$
$\frac{\partial^2 Q_{kl}}{\partial\theta_1\partial\theta_2} = -\left[\frac{da_k(\theta_2)}{d\theta_2}\frac{\partial B^*}{\partial\theta_1} + a_k(\theta_2)\frac{\partial^2 B^*}{\partial\theta_1\partial\theta_2}\right]a_l^*(\theta_1) - \left[\frac{da_k(\theta_2)}{d\theta_2}B^* + a_k(\theta_2)\frac{\partial B^*}{\partial\theta_2}\right]\frac{da_l^*(\theta_1)}{d\theta_1} - \left[\frac{da_k(\theta_1)}{d\theta_1}\frac{\partial B}{\partial\theta_2} + a_k(\theta_1)\frac{\partial^2 B}{\partial\theta_1\partial\theta_2}\right]a_l^*(\theta_2) - \left[\frac{da_k(\theta_1)}{d\theta_1}B + a_k(\theta_1)\frac{\partial B}{\partial\theta_1}\right]\frac{da_l^*(\theta_2)}{d\theta_2}$
$\frac{\partial f_{kl}}{\partial\theta_1} = \frac{\left[\frac{\partial^2 Q_{kl}}{\partial\theta_1^2}\det - \frac{\partial^2\det}{\partial\theta_1^2}Q_{kl}\right]\det^2 - \left[\frac{\partial Q_{kl}}{\partial\theta_1}\det - \frac{\partial\det}{\partial\theta_1}Q_{kl}\right]\frac{\partial\det^2}{\partial\theta_1}}{\det^4}$
$\frac{\partial f_{kl}}{\partial\theta_2} = \frac{\left[\frac{\partial^2 Q_{kl}}{\partial\theta_1\partial\theta_2}\det + \frac{\partial Q_{kl}}{\partial\theta_1}\frac{\partial\det}{\partial\theta_2} - \frac{\partial^2\det}{\partial\theta_1\partial\theta_2}Q_{kl} - \frac{\partial\det}{\partial\theta_1}\frac{\partial Q_{kl}}{\partial\theta_2}\right]\det^2 - \left[\frac{\partial Q_{kl}}{\partial\theta_1}\det - \frac{\partial\det}{\partial\theta_1}Q_{kl}\right]\frac{\partial\det^2}{\partial\theta_2}}{\det^4}$
$\frac{\partial g_{kl}}{\partial\theta_1} = \frac{\left[\frac{\partial^2 Q_{kl}}{\partial\theta_1\partial\theta_2}\det + \frac{\partial Q_{kl}}{\partial\theta_2}\frac{\partial\det}{\partial\theta_1} - \frac{\partial^2\det}{\partial\theta_1\partial\theta_2}Q_{kl} - \frac{\partial\det}{\partial\theta_2}\frac{\partial Q_{kl}}{\partial\theta_1}\right]\det^2 - \left[\frac{\partial Q_{kl}}{\partial\theta_2}\det - \frac{\partial\det}{\partial\theta_2}Q_{kl}\right]\frac{\partial\det^2}{\partial\theta_1}}{\det^4}$
$\frac{\partial g_{kl}}{\partial\theta_2} = \frac{\left[\frac{\partial^2 Q_{kl}}{\partial\theta_2^2}\det - \frac{\partial^2\det}{\partial\theta_2^2}Q_{kl}\right]\det^2 - \left[\frac{\partial Q_{kl}}{\partial\theta_2}\det - \frac{\partial\det}{\partial\theta_2}Q_{kl}\right]\frac{\partial\det^2}{\partial\theta_2}}{\det^4}$
Note that the first and the second order derivatives of det θ 1 , θ 2 are given in Appendix B and first and second order derivatives of B θ 1 , θ 2 are given in Appendix C.

Appendix B. First and Second Order Derivatives of Det (θ1,θ2)

For compactness, let $\phi \triangleq \frac{2\pi}{\lambda}\Delta(\sin\theta_1-\sin\theta_2)$. Then
$\det(\theta_1,\theta_2) = M^2 - \frac{1-\cos(M\phi)}{1-\cos\phi} \triangleq M^2 - \frac{H(\theta_1,\theta_2)}{G(\theta_1,\theta_2)}$
$\frac{\partial H}{\partial\theta_1} = \frac{2\pi}{\lambda}\Delta M\cos\theta_1\,\sin(M\phi)$
$\frac{\partial H}{\partial\theta_2} = -\frac{2\pi}{\lambda}\Delta M\cos\theta_2\,\sin(M\phi)$
$\frac{\partial G}{\partial\theta_1} = \frac{2\pi}{\lambda}\Delta\cos\theta_1\,\sin\phi$
$\frac{\partial G}{\partial\theta_2} = -\frac{2\pi}{\lambda}\Delta\cos\theta_2\,\sin\phi$
$\frac{\partial^2 H}{\partial\theta_1^2} = -\frac{2\pi}{\lambda}\Delta M\sin\theta_1\,\sin(M\phi) + \left(\frac{2\pi}{\lambda}\Delta M\cos\theta_1\right)^2\cos(M\phi)$
$\frac{\partial^2 H}{\partial\theta_2^2} = \frac{2\pi}{\lambda}\Delta M\sin\theta_2\,\sin(M\phi) + \left(\frac{2\pi}{\lambda}\Delta M\cos\theta_2\right)^2\cos(M\phi)$
$\frac{\partial^2 H}{\partial\theta_1\partial\theta_2} = -\left(\frac{2\pi}{\lambda}\Delta M\right)^2\cos\theta_1\cos\theta_2\,\cos(M\phi)$
$\frac{\partial^2 G}{\partial\theta_1^2} = -\frac{2\pi}{\lambda}\Delta\sin\theta_1\,\sin\phi + \left(\frac{2\pi}{\lambda}\Delta\cos\theta_1\right)^2\cos\phi$
$\frac{\partial^2 G}{\partial\theta_2^2} = \frac{2\pi}{\lambda}\Delta\sin\theta_2\,\sin\phi + \left(\frac{2\pi}{\lambda}\Delta\cos\theta_2\right)^2\cos\phi$
$\frac{\partial^2 G}{\partial\theta_1\partial\theta_2} = -\left(\frac{2\pi}{\lambda}\Delta\right)^2\cos\theta_1\cos\theta_2\,\cos\phi$
$\frac{\partial\det}{\partial\theta_1} = \frac{-\frac{\partial H}{\partial\theta_1}G + \frac{\partial G}{\partial\theta_1}H}{G^2}$
$\frac{\partial\det}{\partial\theta_2} = \frac{-\frac{\partial H}{\partial\theta_2}G + \frac{\partial G}{\partial\theta_2}H}{G^2}$
$\frac{\partial^2\det}{\partial\theta_1^2} = \frac{\left[-\frac{\partial^2 H}{\partial\theta_1^2}G + \frac{\partial^2 G}{\partial\theta_1^2}H\right]G^2 + \left[\frac{\partial H}{\partial\theta_1}G - \frac{\partial G}{\partial\theta_1}H\right]2G\frac{\partial G}{\partial\theta_1}}{G^4}$
$\frac{\partial^2\det}{\partial\theta_2^2} = \frac{\left[-\frac{\partial^2 H}{\partial\theta_2^2}G + \frac{\partial^2 G}{\partial\theta_2^2}H\right]G^2 + \left[\frac{\partial H}{\partial\theta_2}G - \frac{\partial G}{\partial\theta_2}H\right]2G\frac{\partial G}{\partial\theta_2}}{G^4}$
$\frac{\partial^2\det}{\partial\theta_1\partial\theta_2} = \frac{\left[-\frac{\partial^2 H}{\partial\theta_1\partial\theta_2}G - \frac{\partial H}{\partial\theta_1}\frac{\partial G}{\partial\theta_2} + \frac{\partial^2 G}{\partial\theta_1\partial\theta_2}H + \frac{\partial G}{\partial\theta_1}\frac{\partial H}{\partial\theta_2}\right]G^2 + \left[\frac{\partial H}{\partial\theta_1}G - \frac{\partial G}{\partial\theta_1}H\right]2G\frac{\partial G}{\partial\theta_2}}{G^4}$
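The closed form of $\det(\theta_1,\theta_2)$ above can be checked numerically: for a ULA, $\det(\mathbf{A}^H\mathbf{A}) = M^2 - |a^H(\theta_1)a(\theta_2)|^2$, and the magnitude-squared geometric sum equals $(1-\cos M\phi)/(1-\cos\phi)$. The sketch below compares the direct determinant with the closed form (parameter values are illustrative):

```python
import numpy as np

# Numerical check of the closed form used above:
#   det(A^H A) = M^2 - (1 - cos(M*phi)) / (1 - cos(phi)),
# with phi = (2*pi/lambda) * Delta * (sin(theta1) - sin(theta2)), for a ULA.

M = 7
lam, Delta = 1.0, 0.45             # illustrative wavelength and spacing
theta1, theta2 = np.radians(12.0), np.radians(-23.0)

def a(theta):
    k = np.arange(M)
    return np.exp(1j * 2 * np.pi / lam * k * Delta * np.sin(theta))

A = np.column_stack([a(theta1), a(theta2)])
det_direct = np.linalg.det(A.conj().T @ A).real

phi = 2 * np.pi / lam * Delta * (np.sin(theta1) - np.sin(theta2))
det_closed = M**2 - (1 - np.cos(M * phi)) / (1 - np.cos(phi))

print(abs(det_direct - det_closed))   # agreement to machine precision
```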

Appendix C. First and Second Order Derivatives of B (θ1,θ2)

For compactness, let $\varphi \triangleq \frac{2\pi}{\lambda}\Delta(\sin\theta_2-\sin\theta_1)$. Then
$B(\theta_1,\theta_2) = \frac{1-\exp(jM\varphi)}{1-\exp(j\varphi)} \triangleq \frac{N(\theta_1,\theta_2)}{D(\theta_1,\theta_2)}$
$\frac{\partial N}{\partial\theta_1} = j\frac{2\pi}{\lambda}\Delta M\cos\theta_1 \exp(jM\varphi)$
$\frac{\partial N}{\partial\theta_2} = -j\frac{2\pi}{\lambda}\Delta M\cos\theta_2 \exp(jM\varphi)$
$\frac{\partial^2 N}{\partial\theta_1^2} = -j\frac{2\pi}{\lambda}\Delta M\sin\theta_1 \exp(jM\varphi) + \left(\frac{2\pi}{\lambda}\Delta M\cos\theta_1\right)^2 \exp(jM\varphi)$
$\frac{\partial^2 N}{\partial\theta_2^2} = j\frac{2\pi}{\lambda}\Delta M\sin\theta_2 \exp(jM\varphi) + \left(\frac{2\pi}{\lambda}\Delta M\cos\theta_2\right)^2 \exp(jM\varphi)$
$\frac{\partial^2 N}{\partial\theta_1\partial\theta_2} = -\left(\frac{2\pi}{\lambda}\Delta M\right)^2 \cos\theta_1\cos\theta_2 \exp(jM\varphi)$
$\frac{\partial D}{\partial\theta_1} = j\frac{2\pi}{\lambda}\Delta\cos\theta_1 \exp(j\varphi)$
$\frac{\partial D}{\partial\theta_2} = -j\frac{2\pi}{\lambda}\Delta\cos\theta_2 \exp(j\varphi)$
$\frac{\partial^2 D}{\partial\theta_1^2} = -j\frac{2\pi}{\lambda}\Delta\sin\theta_1 \exp(j\varphi) + \left(\frac{2\pi}{\lambda}\Delta\cos\theta_1\right)^2 \exp(j\varphi)$
$\frac{\partial^2 D}{\partial\theta_2^2} = j\frac{2\pi}{\lambda}\Delta\sin\theta_2 \exp(j\varphi) + \left(\frac{2\pi}{\lambda}\Delta\cos\theta_2\right)^2 \exp(j\varphi)$
$\frac{\partial^2 D}{\partial\theta_1\partial\theta_2} = -\left(\frac{2\pi}{\lambda}\Delta\right)^2 \cos\theta_1\cos\theta_2 \exp(j\varphi)$
$D^2(\theta_1,\theta_2) = \left[1-\exp(j\varphi)\right]^2$
$\frac{\partial D^2}{\partial\theta_1} = 2\left[1-\exp(j\varphi)\right] j\frac{2\pi}{\lambda}\Delta\cos\theta_1 \exp(j\varphi)$
$\frac{\partial D^2}{\partial\theta_2} = -2\left[1-\exp(j\varphi)\right] j\frac{2\pi}{\lambda}\Delta\cos\theta_2 \exp(j\varphi)$
$\frac{\partial B}{\partial\theta_1} = \frac{\frac{\partial N}{\partial\theta_1}D - \frac{\partial D}{\partial\theta_1}N}{D^2}$
$\frac{\partial B}{\partial\theta_2} = \frac{\frac{\partial N}{\partial\theta_2}D - \frac{\partial D}{\partial\theta_2}N}{D^2}$
$\frac{\partial^2 B}{\partial\theta_1^2} = \frac{\left[\frac{\partial^2 N}{\partial\theta_1^2}D - \frac{\partial^2 D}{\partial\theta_1^2}N\right]D^2 - \left[\frac{\partial N}{\partial\theta_1}D - \frac{\partial D}{\partial\theta_1}N\right]\frac{\partial D^2}{\partial\theta_1}}{D^4}$
$\frac{\partial^2 B}{\partial\theta_2^2} = \frac{\left[\frac{\partial^2 N}{\partial\theta_2^2}D - \frac{\partial^2 D}{\partial\theta_2^2}N\right]D^2 - \left[\frac{\partial N}{\partial\theta_2}D - \frac{\partial D}{\partial\theta_2}N\right]\frac{\partial D^2}{\partial\theta_2}}{D^4}$
$\frac{\partial^2 B}{\partial\theta_1\partial\theta_2} = \frac{\left[\frac{\partial^2 N}{\partial\theta_1\partial\theta_2}D + \frac{\partial N}{\partial\theta_1}\frac{\partial D}{\partial\theta_2} - \frac{\partial^2 D}{\partial\theta_1\partial\theta_2}N - \frac{\partial D}{\partial\theta_1}\frac{\partial N}{\partial\theta_2}\right]D^2 - \left[\frac{\partial N}{\partial\theta_1}D - \frac{\partial D}{\partial\theta_1}N\right]\frac{\partial D^2}{\partial\theta_2}}{D^4}$
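A quick numerical check of Appendix C: if $B(\theta_1,\theta_2)$ is the geometric-sum inner product $a^H(\theta_1)a(\theta_2)$ written as $N/D$, both the closed form and the quotient-rule first derivative can be validated against a finite difference. The identification of $B$ with this inner product is an assumption made for the sketch; parameter values are illustrative.

```python
import numpy as np

# Sketch check of B(theta1, theta2) = N/D as the closed form of the inner
# product a(theta1)^H a(theta2), and of dB/dtheta1 via a central difference.

M, lam, Delta = 6, 1.0, 0.5
c = 2 * np.pi / lam * Delta

def a(theta):
    return np.exp(1j * c * np.arange(M) * np.sin(theta))

def B(t1, t2):
    phi = c * (np.sin(t2) - np.sin(t1))
    return (1 - np.exp(1j * M * phi)) / (1 - np.exp(1j * phi))

t1, t2 = np.radians(15.0), np.radians(-30.0)
assert abs(B(t1, t2) - a(t1).conj() @ a(t2)) < 1e-10   # geometric-sum identity

# dB/dtheta1 from the quotient rule, as in Appendix C
phi = c * (np.sin(t2) - np.sin(t1))
N = 1 - np.exp(1j * M * phi)
D = 1 - np.exp(1j * phi)
dN = 1j * c * M * np.cos(t1) * np.exp(1j * M * phi)
dD = 1j * c * np.cos(t1) * np.exp(1j * phi)
dB_analytic = (dN * D - dD * N) / D**2

h = 1e-6
dB_numeric = (B(t1 + h, t2) - B(t1 - h, t2)) / (2 * h)
print(abs(dB_analytic - dB_numeric))   # small central-difference residual
```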
The first order and the second order derivatives of a k θ 1 and a k θ 2 are given in Appendix D.

Appendix D. First and Second Order Derivatives of ak (θ1) and ak (θ2)

$\frac{da_k(\theta_1)}{d\theta_1} = j\frac{2\pi}{\lambda}(k-1)\Delta\cos\theta_1 \exp\left(j\frac{2\pi}{\lambda}(k-1)\Delta\sin\theta_1\right)$
$\frac{da_k(\theta_2)}{d\theta_2} = j\frac{2\pi}{\lambda}(k-1)\Delta\cos\theta_2 \exp\left(j\frac{2\pi}{\lambda}(k-1)\Delta\sin\theta_2\right)$
$\frac{d^2a_k(\theta_1)}{d\theta_1^2} = -j\frac{2\pi}{\lambda}(k-1)\Delta\sin\theta_1 \exp\left(j\frac{2\pi}{\lambda}(k-1)\Delta\sin\theta_1\right) - \left(\frac{2\pi}{\lambda}(k-1)\Delta\cos\theta_1\right)^2 \exp\left(j\frac{2\pi}{\lambda}(k-1)\Delta\sin\theta_1\right)$
$\frac{d^2a_k(\theta_2)}{d\theta_2^2} = -j\frac{2\pi}{\lambda}(k-1)\Delta\sin\theta_2 \exp\left(j\frac{2\pi}{\lambda}(k-1)\Delta\sin\theta_2\right) - \left(\frac{2\pi}{\lambda}(k-1)\Delta\cos\theta_2\right)^2 \exp\left(j\frac{2\pi}{\lambda}(k-1)\Delta\sin\theta_2\right)$
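The derivative expressions in Appendix D are easy to validate with central differences; the sketch below does so for an assumed element response $a_k(\theta)=\exp(j\frac{2\pi}{\lambda}(k-1)\Delta\sin\theta)$ with illustrative parameter values.

```python
import numpy as np

# Finite-difference check of the first and second derivatives of
# a_k(theta) = exp(j*(2*pi/lambda)*(k-1)*Delta*sin(theta)) given above.

lam, Delta, k = 1.0, 0.5, 4        # element index k (illustrative values)
c = 2 * np.pi / lam * (k - 1) * Delta

def a_k(theta):
    return np.exp(1j * c * np.sin(theta))

theta = np.radians(25.0)
d1 = 1j * c * np.cos(theta) * a_k(theta)
d2 = -1j * c * np.sin(theta) * a_k(theta) - (c * np.cos(theta))**2 * a_k(theta)

h = 1e-5
d1_num = (a_k(theta + h) - a_k(theta - h)) / (2 * h)
d2_num = (a_k(theta + h) - 2 * a_k(theta) + a_k(theta - h)) / h**2

print(abs(d1 - d1_num), abs(d2 - d2_num))   # both residuals are small
```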

Appendix E. Derivations of fkl (θ1,θ2) and gkl (θ1,θ2)

Let $f_{kl}(\theta_1,\theta_2)$ and $g_{kl}(\theta_1,\theta_2)$ denote the partial derivatives of $P_{kl}(\theta_1,\theta_2)$ with respect to $\theta_1$ and $\theta_2$, respectively:
$f_{kl}(\theta_1,\theta_2) \triangleq \frac{\partial P_{kl}(\theta_1,\theta_2)}{\partial\theta_1} = \frac{1}{\det^2(\theta_1,\theta_2)}\left[\frac{\partial Q_{kl}(\theta_1,\theta_2)}{\partial\theta_1}\det(\theta_1,\theta_2) - \frac{\partial\det(\theta_1,\theta_2)}{\partial\theta_1}Q_{kl}(\theta_1,\theta_2)\right]$
$g_{kl}(\theta_1,\theta_2) \triangleq \frac{\partial P_{kl}(\theta_1,\theta_2)}{\partial\theta_2} = \frac{1}{\det^2(\theta_1,\theta_2)}\left[\frac{\partial Q_{kl}(\theta_1,\theta_2)}{\partial\theta_2}\det(\theta_1,\theta_2) - \frac{\partial\det(\theta_1,\theta_2)}{\partial\theta_2}Q_{kl}(\theta_1,\theta_2)\right].$

Appendix F. Derivations of fkl (θ1,θ2) and gkl (θ1,θ2) with the Taylor Series Approximation

Applying the Taylor series expansion in (A49) and (A50) yields
$f_{kl}(\hat{\theta}_1,\hat{\theta}_2) = f_{kl}(\theta_1^0,\theta_2^0) + \delta\theta_1^{u=1}\left.\frac{\partial f_{kl}(\theta_1,\theta_2)}{\partial\theta_1}\right|_{\theta_1=\theta_1^0,\theta_2=\theta_2^0} + \delta\theta_2^{u=1}\left.\frac{\partial f_{kl}(\theta_1,\theta_2)}{\partial\theta_2}\right|_{\theta_1=\theta_1^0,\theta_2=\theta_2^0}$
$g_{kl}(\hat{\theta}_1,\hat{\theta}_2) = g_{kl}(\theta_1^0,\theta_2^0) + \delta\theta_1^{u=1}\left.\frac{\partial g_{kl}(\theta_1,\theta_2)}{\partial\theta_1}\right|_{\theta_1=\theta_1^0,\theta_2=\theta_2^0} + \delta\theta_2^{u=1}\left.\frac{\partial g_{kl}(\theta_1,\theta_2)}{\partial\theta_2}\right|_{\theta_1=\theta_1^0,\theta_2=\theta_2^0}$
where $\delta\theta_1^{u=1}$ and $\delta\theta_2^{u=1}$ denote the estimation errors for the first and the second incident signal, respectively.

Appendix G. Derivation of $E[\delta R_{lk}\,\delta R_{l'k'}^*]$

$\bar{\hat{R}}_{lk} = \frac{1}{L}\sum_{i=1}^{L}\left[x_l(t_i)x_k^*(t_i) + x_l(t_i)n_k^*(t_i) + n_l(t_i)x_k^*(t_i) + n_l(t_i)n_k^*(t_i)\right].$
$\hat{R}_{lk} = \frac{1}{L}\sum_{i=1}^{L}x_l(t_i)x_k^*(t_i).$
$\delta R_{lk} = \bar{\hat{R}}_{lk} - \hat{R}_{lk} = \frac{1}{L}\sum_{i=1}^{L}\left[x_l(t_i)n_k^*(t_i) + n_l(t_i)x_k^*(t_i) + n_l(t_i)n_k^*(t_i)\right].$
From (A55), $E[\delta R_{lk}\,\delta R_{l'k'}^*]$ is given by
$E[\delta R_{lk}\,\delta R_{l'k'}^*] = \frac{1}{L^2}\sum_{i=1}^{L}\sum_{i'=1}^{L}\Big\{ x_l(t_i)x_{l'}^*(t_{i'})E[n_k^*(t_i)n_{k'}(t_{i'})] + x_l(t_i)x_{k'}(t_{i'})E[n_k^*(t_i)n_{l'}^*(t_{i'})] + x_l(t_i)E[n_k^*(t_i)n_{l'}^*(t_{i'})n_{k'}(t_{i'})] + x_k^*(t_i)x_{l'}^*(t_{i'})E[n_l(t_i)n_{k'}(t_{i'})] + x_k^*(t_i)x_{k'}(t_{i'})E[n_l(t_i)n_{l'}^*(t_{i'})] + x_k^*(t_i)E[n_l(t_i)n_{l'}^*(t_{i'})n_{k'}(t_{i'})] + x_{l'}^*(t_{i'})E[n_l(t_i)n_k^*(t_i)n_{k'}(t_{i'})] + x_{k'}(t_{i'})E[n_l(t_i)n_k^*(t_i)n_{l'}^*(t_{i'})] + E[n_l(t_i)n_k^*(t_i)n_{l'}^*(t_{i'})n_{k'}(t_{i'})]\Big\}$
where the second moments, third moments, and the fourth moments in (A56) are derived in Appendix H, Appendix I and Appendix J, respectively.
Finally, when $i = i'$, $E[\delta R_{lk}\,\delta R_{l'k'}^*]$ is given by
E δ R l k δ R l k * = 1 L 2 i = 1 L i = 1 L x l t i x l * t i σ 2 + 2 μ 2 + x l t i x k * t i 2 j μ 2 + x l t i 3 2 σ 2 + μ 2 μ + 1 2 σ 2 + μ 2 μ + j 1 2 σ 2 + μ 2 μ + j 3 2 σ 2 + μ 2 μ + x k * t i x l * t i 2 j μ 2 + x k * t i x k t i σ 2 + 2 μ 2 + x k * t i 3 2 σ 2 + μ 2 μ + 1 2 σ 2 + μ 2 μ + j 1 2 σ 2 + μ 2 μ + j 3 2 σ 2 + μ 2 μ + x l * t i 3 2 σ 2 + μ 2 μ + 1 2 σ 2 + μ 2 μ + j 1 2 σ 2 + μ 2 μ + j 3 2 σ 2 + μ 2 μ + x k t i 3 2 σ 2 + μ 2 μ + 1 2 σ 2 + μ 2 μ + j 1 2 σ 2 + μ 2 μ + j 3 2 σ 2 + μ 2 μ + 2 μ 4 + 3 μ 2 σ 2 + 3 4 σ 4 + 2 σ 2 2 + μ 2 2
for l = k and k = l and l = k .
E δ R l k δ R l k * = 1 L 2 i = 1 L i = 1 L x l t i x l * t i 2 μ 2 + x l t i x k * t i 2 j μ 2 + x l t i 2 μ 3 + 2 j μ 3 + x k * t i x l * t i 2 j μ 2 + x k * t i x k t i σ 2 + 2 μ 2 + x k * t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + x l * t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + x k t i 3 2 σ 2 + μ 2 μ + σ 2 2 + μ 2 μ + j σ 2 2 + μ 2 μ + j 3 2 σ 2 + μ 2 μ + 2 3 2 σ 2 + μ 2 μ + 2 σ 2 2 + μ 2 μ 2
for l = k and k = l and l k .
E δ R l k δ R l k * = 1 L 2 i = 1 L i = 1 L x l t i x l * t i σ 2 + 2 μ 2 + x l t i x k * t i 2 j μ 2 + x l t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + x k * t i x l * t i 2 j μ 2 + x k * t i x k t i 2 μ 2 + x k * t i 2 μ 3 + 2 j μ 3 + x l * t i 3 2 σ 2 + μ 2 μ + σ 2 2 + μ 2 μ + j σ 2 2 + μ 2 μ + j 3 2 σ 2 + μ 2 μ + x k t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + 2 3 2 σ 2 + μ 2 μ + 2 σ 2 2 + μ 2 μ 2
for l = k and k = k and l k .
E δ R l k δ R l k * = 1 L 2 i = 1 L i = 1 L x l t i x l * t i 2 μ 2 + x l t i x k * t i 2 j μ 2 + x l t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + x k * t i x l * t i 2 j μ 2 + x k * t i x k t i σ 2 + 2 μ 2 + x k * t i 3 2 σ 2 + μ 2 μ + σ 2 2 + μ 2 μ + j σ 2 2 + μ 2 μ + j 3 2 σ 2 + μ 2 μ + x l * t i 2 μ 3 + 2 j μ 3 + x k t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + 2 3 2 σ 2 + μ 2 μ + 2 σ 2 2 + μ 2 μ 2
for l = l and l = k and k k .
E δ R l k δ R l k * = 1 L 2 i = 1 L i = 1 L x l t i x l * t i σ 2 + 2 μ 2 + x l t i x k * t i 2 j μ 2 + x l t i 3 2 σ 2 + μ 2 μ + σ 2 2 + μ 2 μ + j σ 2 2 + μ 2 μ + j 3 2 σ 2 + μ 2 μ + x k * t i x l * t i 2 j μ 2 + x k * t i x k t i 2 μ 2 + x k * t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + x l * t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + x k t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + 2 3 2 σ 2 + μ 2 μ + 2 σ 2 2 + μ 2 μ 2
for l = k and l = k and l k .
E δ R l k δ R l k * = 1 L 2 i = 1 L i = 1 L x l t i x l * t i 2 μ 2 + x l t i x k * t i 2 j μ 2 + x l t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + x k * t i x l * t i 2 j μ 2 + x k * t i x k t i 2 μ 2 + x k * t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + x l * t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + x k t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + 4 σ 2 2 + μ 2 2
for l = k and k l and l = k .
E δ R l k δ R l k * = 1 L 2 i = 1 L i = 1 L x l t i x l * t i σ 2 + 2 μ 2 + x l t i x k * t i 2 j μ 2 + x l t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + x k * t i x l * t i 2 j μ 2 + x k * t i x k t i σ 2 + 2 μ 2 + x k * t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + x l * t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + x k t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + 4 σ 2 2 + μ 2 2
for l = l and k l and k = k .
E δ R l k δ R l k * = 1 L 2 i = 1 L i = 1 L x l t i x l * t i 2 μ 2 + x l t i x k * t i 2 j μ 2 + x l t i 2 μ 3 + 2 j μ 3 + x k * t i x l * t i 2 j μ 2 + x k * t i x k t i 2 μ 2 + x k * t i 2 μ 3 + 2 j μ 3 + x l * t i 2 μ 3 + 2 j μ 3 + x k t i 2 μ 3 + 2 j μ 3 + 4 μ 4
for l = k and k k and k = l .
E δ R l k δ R l k * = 1 L 2 i = 1 L i = 1 L x l t i x l * t i 2 μ 2 + x l t i x k * t i 2 j μ 2 + x l t i 2 μ 3 + 2 j μ 3 + x k * t i x l * t i 2 j μ 2 + x k * t i x k t i 2 μ 2 + x k * t i 2 μ 3 + 2 j μ 3 + x l * t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + x k t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + 4 σ 2 2 + μ 2 μ 2
for l = k and k l and l k and k l .
E δ R l k δ R l k * = 1 L 2 i = 1 L i = 1 L x l t i x l * t i 2 μ 2 + x l t i x k * t i 2 j μ 2 + x l t i 2 μ 3 + 2 j μ 3 + x k * t i x l * t i 2 j μ 2 + x k * t i x k t i σ 2 + 2 μ 2 + x k * t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + x l * t i 2 μ 3 + 2 j μ 3 + x k t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + 4 σ 2 2 + μ 2 μ 2
for l = l and l k and k k and k l .
E δ R l k δ R l k * = 1 L 2 i = 1 L i = 1 L x l t i x l * t i 2 μ 2 + x l t i x k * t i 2 j μ 2 + x l t i 2 μ 3 + 2 j μ 3 + x k * t i x l * t i 2 j μ 2 + x k * t i x k t i 2 μ 2 + x k * t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + x l * t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + x k t i 2 μ 3 + 2 j μ 3 + 4 μ 4
for l = k and k k and k l and l l .
E δ R l k δ R l k * = 1 L 2 i = 1 L i = 1 L x l t i x l * t i 2 μ 2 + x l t i x k * t i 2 j μ 2 + x l t i 2 μ 3 + 2 j μ 3 + x k * t i x l * t i 2 j μ 2 + x k * t i x k t i 2 μ 2 + x k * t i 2 μ 3 + 2 j μ 3 + x l * t i 2 μ 3 + 2 j μ 3 + x k t i 2 μ 3 + 2 j μ 3 + 4 μ 4
for k = l and l l and l k and k k .
E δ R l k δ R l k * = 1 L 2 i = 1 L i = 1 L x l t i x l * t i σ 2 + 2 μ 2 + x l t i x k * t i 2 j μ 2 + x l t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + x k * t i x l * t i 2 j μ 2 + x k * t i x k t i 2 μ 2 + x k * t i 2 μ 3 + 2 j μ 3 + x l * t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + x k t i 2 μ 3 + 2 j μ 3 + 4 σ 2 2 + μ 2 μ 2
for k = k and k l and l l and l k .
E δ R l k δ R l k * = 1 L 2 i = 1 L i = 1 L x l t i x l * t i 2 μ 2 + x l t i x k * t i 2 j μ 2 + x l t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + x k * t i x l * t i 2 j μ 2 + x k * t i x k t i 2 μ 2 + x k * t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + x l * t i 2 μ 3 + 2 j μ 3 + x k t i 2 μ 3 + 2 j μ 3 + 4 σ 2 2 + μ 2 μ 2
for l = k and k l and l k and k l .
E δ R l k δ R l k * = 1 L 2 i = 1 L i = 1 L x l t i x l * t i 2 μ 2 + x l t i x k * t i 2 j μ 2 + x l t i 2 μ 3 + 2 j μ 3 + x k * t i x l * t i 2 j μ 2 + x k * t i x k t i 2 μ 2 + x k * t i 2 μ 3 + 2 j μ 3 + x l * t i 2 μ 3 + 2 j μ 3 + x k t i 2 μ 3 + 2 j μ 3 + 6 μ 4
otherwise. When $i \neq i'$, $E[\delta R_{lk}\,\delta R_{l'k'}^*]$ is given by
E δ R l k δ R l k * = 1 L 2 i = 1 L i = 1 L x l t i x l * t i 2 μ 2 + x l t i x k * t i 2 j μ 2 + x l t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + x k * t i x l * t i 2 j μ 2 + x k * t i x k t i 2 μ 2 + x k * t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + x l * t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + x k t i 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ + 4 σ 2 2 + μ 2 2
for l = k and k = l and l = k .
E δ R l k δ R l k * = 1 L 2 i = 1 L i = 1 L x l t i x l * t i 2 μ 2 + x l t i x k * t i 2 j μ 2 + x l t i 2 μ 3 + 2 j μ 3 + x k * t i x l * t i 2 j μ 2 + x k * t i x k t i 2 μ 2 + x k * t i 2 μ 3 + 2 j μ 3 + x l * t i 2 μ 3 + 2 j μ 3 + x k t i 2 μ 3 + 2 j μ 3 + 4 σ 2 2 + μ 2 2
for l = k and k l and l = k .
E δ R l k δ R l k * = 1 L 2 i = 1 L i = 1 L x l t i x l * t i 2 μ 2 + x l t i x k * t i 2 j μ 2 + x l t i 2 μ 3 + 2 j μ 3 + x k * t i x l * t i 2 j μ 2 + x k * t i x k t i 2 μ 2 + x k * t i 2 μ 3 + 2 j μ 3 + x l * t i 2 μ 3 + 2 j μ 3 + x k t i 2 μ 3 + 2 j μ 3 + 4 σ 2 2 + μ 2 2
for l = l and l k and k = k .
E δ R l k δ R l k * = 1 L 2 i = 1 L i = 1 L x l t i x l * t i 2 μ 2 + x l t i x k * t i 2 j μ 2 + x l t i 2 μ 3 + 2 j μ 3 + x k * t i x l * t i 2 j μ 2 + x k * t i x k t i 2 μ 2 + x k * t i 2 μ 3 + 2 j μ 3 + x l * t i 2 μ 3 + 2 j μ 3 + x k t i 2 μ 3 + 2 j μ 3 + 4 σ 2 2 + μ 2 2
for l = k and k l and l = k .
E δ R l k δ R l k * = 1 L 2 i = 1 L i = 1 L x l t i x l * t i 2 μ 2 + x l t i x k * t i 2 j μ 2 + x l t i 2 μ 3 + 2 j μ 3 + x k * t i x l * t i 2 j μ 2 + x k * t i x k t i 2 μ 2 + x k * t i 2 μ 3 + 2 j μ 3 + x l * t i 2 μ 3 + 2 j μ 3 + x k t i 2 μ 3 + 2 j μ 3 + 4 μ 4
for l l k k .
E δ R l k δ R l k * = 1 L 2 i = 1 L i = 1 L x l t i x l * t i 2 μ 2 + x l t i x k * t i 2 j μ 2 + x l t i 2 μ 3 + 2 j μ 3 + x k * t i x l * t i 2 j μ 2 + x k * t i x k t i 2 μ 2 + x k * t i 2 μ 3 + 2 j μ 3 + x l * t i 2 μ 3 + 2 j μ 3 + x k t i 2 μ 3 + 2 j μ 3 + 4 σ 2 2 + μ 2 μ 2
otherwise.

Appendix H. Fourth Order Non-Central Moment of Non-Zero-Mean Complex Gaussian Random Variables with Variance σ2

$E[n_l(t_i)n_k^*(t_i)n_{l'}^*(t_{i'})n_{k'}(t_{i'})] = E\big[\left(\mathrm{Re}[n_l(t_i)]+j\mathrm{Im}[n_l(t_i)]\right)\left(\mathrm{Re}[n_k(t_i)]-j\mathrm{Im}[n_k(t_i)]\right)\left(\mathrm{Re}[n_{l'}(t_{i'})]-j\mathrm{Im}[n_{l'}(t_{i'})]\right)\left(\mathrm{Re}[n_{k'}(t_{i'})]+j\mathrm{Im}[n_{k'}(t_{i'})]\right)\big].$
(a) $i = i'$
For $i = i'$, (A78) can be written as
E n l t i n k * t i n l * t i n k t i
= E Re n l t i Re n k t i Re n l t i Re n k t i + E Re n l t i Re n k t i E Im n l t i Im n k t i E Re n l t i Re n k t i E Im n k t i Im n l t i + E Im n l t i Im n k t i E Re n l t i Re n k t i + E Re n k t i Re n k t i E Im n l t i Im n l t i E Re n k t i Re n l t i E Im n l t i Im n k t i + E Re n l t i Re n l t i E Im n k t i Im n k t i + E Im n l t i Im n k t i Im n l t i Im n k t i .
The first term of (A79) is given by
E Re n l t i Re n k t i Re n l t i Re n k t i
= E Re n l t i Re n k t i Re n l t i Re n k t i = μ 4 + 3 μ 2 σ 2 + 3 4 σ 4 l = k   and   k = l   and   l = k E Re n l t i Re n k t i Re n l t i E Re n k t i = μ 3 2 σ 2 + μ 2 l = k   and   k = l   and   l k E Re n l t i Re n k t i Re n k t i E Re n l t i = μ 3 2 σ 2 + μ 2 l = k   and   k = k   and   k l E Re n l t i Re n l t i Re n k t i E Re n k t i = μ 3 2 σ 2 + μ 2 l = l   and   l = k   and   k k E Re n k t i Re n l t i Re n k t i E Re n l t i = μ 3 2 σ 2 + μ 2 k = l   and   l = k   and   k l E Re n l t i Re n k t i E Re n l t i Re n k t i = σ 2 2 + μ 2 2 l = k   and   k l   and   l = k E Re n l t i Re n l t i E Re n k t i Re n k t i = σ 2 2 + μ 2 2 l = l   and   l k   and   k = k E Re n l t i Re n k t i E Re n k t i Re n l t i = σ 2 2 + μ 2 2 l = k   and   k k   and   k = l E Re n l t i Re n k t i E Re n l t i E Re n k t i = σ 2 2 + μ 2 μ 2 l = k   and   k l   and   l k   and   k l E Re n l t i Re n l t i E Re n k t i E Re n k t i = σ 2 2 + μ 2 μ 2 l = l   and   l k   and   k k   and   k l E Re n l t i Re n k t i E Re n k t i E Re n l t i = σ 2 2 + μ 2 μ 2 l = k   and   k k   and   k l   and   l l E Re n k t i Re n l t i E Re n k t i E Re n l t i = σ 2 2 + μ 2 μ 2 k = l   and   l l   and   l k   and   k k E Re n k t i Re n l t i E Re n k t i E Re n l t i = σ 2 2 + μ 2 μ 2 k = k   and   k l   and   l l   and   l k E Re n l t i Re n k t i E Re n l t i E Re n k t i = σ 2 2 + μ 2 μ 2 l = k   and   k l   and   l k   and   l k E Re n l t i E Re n k t i E Re n l t i E Re n k t i = μ 4 otherwise .
Using the same scheme in getting (A80), the other terms of (A79) are given by
E Re n l t i Re n k t i E Im n l t i Im n k t i
= E Re n l t i Re n k t i E Im n l t i Im n k t i = σ 2 2 + μ 2 2     l = k   and   k = l   and   l = k E Re n l t i Re n k t i E Im n l t i Im n k t i = σ 2 2 + μ 2 2     l = k   and   k l   and   l = k E Re n l t i Re n k t i E Im n l t i Im n k t i = μ 4        l k   and   k l E Re n l t i Re n k t i E Im n l t i Im n k t i = σ 2 2 + μ 2 μ 2     otherwise .
E Re n l t i Re n k t i E Im n k t i Im n l t i
= E Re n l t i Re n k t i E Im n k t i Im n l t i = σ 2 2 + μ 2 2     l = k   and   k = l   and   l = k E Re n l t i Re n k t i E Im n k t i Im n l t i = σ 2 2 + μ 2 2     l = k   and   k k   and   k = l E Re n l t i Re n k t i E Im n k t i Im n l t i = μ 4        l k   and   k l   E Re n l t i Re n k t i E Im n k t i Im n l t i = σ 2 2 + μ 2 μ 2     otherwise .
E Im n l t i Im n k t i ] E [ Re n l t i Re n k t i
= E Im n l t i Im n k t i E Re n l t i Re n k t i = σ 2 2 + μ 2 2     l = k   and   k = l   and   l = k E Im n l t i Im n k t i E Re n l t i Re n k t i = σ 2 2 + μ 2 2     l = k   and   k l   and   l = k E Im n l t i Im n k t i ] E [ Re n l t i Re n k t i = μ 4        l k   and   l k E Im n l t i Im n k t i E Re n l t i Re n k t i = σ 2 2 + μ 2 μ 2     otherwise .
E Re n k t i Re n k t i E Im n l t i Im n l t i
= E Re n k t i Re n k t i E Im n l t i Im n l t i = σ 2 2 + μ 2 2     l = k   and   k = l   and   l = k E Re n k t i Re n k t i E Im n l t i Im n l t i = σ 2 2 + μ 2 2     l = l   and   k l   and   k = k E Re n k t i Re n k t i E Im n l t i Im n l t i = μ 4        l l   and   k k E Re n k t i Re n k t i E Im n l t i Im n l t i = σ 2 2 + μ 2 μ 2     otherwise .
E Re n k t i Re n l t i E Im n l t i Im n k t i
= E Re n k t i Re n l t i E Im n l t i Im n k t i = σ 2 2 + μ 2 2     l = k   and   k = l   and   l = k E Re n k t i Re n l t i E Im n l t i Im n k t i = σ 2 2 + μ 2 2     l = k   and   k k   and   k = l E Re n k t i Re n l t i E Im n l t i Im n k t i = μ 4        k l   and   l k E Re n k t i Re n l t i E Im n l t i Im n k t i = σ 2 2 + μ 2 μ 2     otherwise .
E Re n l t i Re n l t i E Im n k t i Im n k t i
= E Re n l t i Re n l t i E Im n k t i Im n k t i = σ 2 2 + μ 2 2     l = k   and   k = l   and   l = k E Re n l t i Re n l t i E Im n k t i Im n k t i = σ 2 2 + μ 2 2     l = l   and   l k   and   k = k E Re n l t i Re n l t i E Im n k t i Im n k t i = μ 4        l l   and   k k E Re n l t i Re n l t i E Im n k t i Im n k t i = σ 2 2 + μ 2 μ 2     otherwise .
E Im n l t i Im n k t i Im n l t i Im n k t i
= E Im n l t i Im n k t i Im n l t i Im n k t i = μ 4 + 3 μ 2 σ 2 + 3 4 σ 4     l = k   and   k = l   and   l = k E Im n l t i Im n k t i E Im n l t i Im n k t i = σ 2 2 + μ 2 2     l = k   and   k l   and   l = k E Im n l t i Im n l t i E Im n k t i Im n k t i = σ 2 2 + μ 2 2     l = l   and   l k   and   k = k E Im n l t i Im n k t i E Im n k t i Im n l t i = σ 2 2 + μ 2 2     l = k   and   k k   and   k = l E Im n l t i Im n k t i E Im n k t i Im n l t i = 3 2 σ 2 + μ 2 μ     l = k   and   k = l   and   l k E Im n l t i Im n k t i E Im n k t i Im n l t i = 3 2 σ 2 + μ 2 μ     l = k   and   k = k   and   l k E Im n l t i Im n k t i E Im n k t i Im n l t i = 3 2 σ 2 + μ 2 μ     l = l   and   l = k   and   k k E Im n l t i Im n k t i E Im n k t i Im n l t i = 3 2 σ 2 + μ 2 μ     l = k   and   l = k   and   l k E Im n l t i Im n k t i E Im n k t i Im n l t i = σ 2 2 + μ 2 μ 2     l = k   and   k l   and   l k   and   k l E Im n l t i Im n k t i E Im n k t i Im n l t i = σ 2 2 + μ 2 μ 2     l = l   and   l k   and   k k   and   k l E Im n l t i Im n k t i E Im n k t i Im n l t i = σ 2 2 + μ 2 μ 2     l = k   and   k k   and   k l   and   l l E Im n l t i Im n k t i E Im n k t i Im n l t i = σ 2 2 + μ 2 μ 2     k = l   and   l l   and   l k   and   k k E Im n l t i Im n k t i E Im n k t i Im n l t i = σ 2 2 + μ 2 μ 2     k = k   and   k l   and   l l   and   l k E Im n l t i Im n k t i E Im n k t i Im n l t i = σ 2 2 + μ 2 μ 2     l = k   and   k l   and   l k   and   l k E Im n l t i Im n k t i Im n l t i Im n k t i = μ 4             otherwise .
From (A80)–(A87), E n l t i n k * t i n l * t i n k t i in (A79) is given by
E n l t i n k * t i n l * t i n k t i = 2 μ 4 + 3 μ 2 σ 2 + 3 4 σ 4 + 2 σ 2 2 + μ 2 2 l = k   and   k = l   and   l = k 2 3 2 σ 2 + μ 2 μ + 2 σ 2 2 + μ 2 μ 2 l = k   and   k = l   and   l k 2 3 2 σ 2 + μ 2 μ + 2 σ 2 2 + μ 2 μ 2 l = k   and   k = k   and   l k 2 3 2 σ 2 + μ 2 μ + 2 σ 2 2 + μ 2 μ 2 l = l   and   l = k   and   k k 2 3 2 σ 2 + μ 2 μ + 2 σ 2 2 + μ 2 μ 2 l = k   and   l = k   and   l k 4 σ 2 2 + μ 2 2 l = k   and   k l   and   l = k 4 σ 2 2 + μ 2 2 l = l   and   l k   and   k = k 4 μ 4 l = k   and   k k   and   k = l 4 σ 2 2 + μ 2 μ 2 l = k   and   k l   and   l k   and   k l 4 σ 2 2 + μ 2 μ 2 l = l   and   l k   and   k k   and   k l 4 μ 4 l = k   and   k k   and   k l   and   l l 4 μ 4 k = l   and   l l   and   l k   and   k k 4 σ 2 2 + μ 2 μ 2 k = k   and   k l   and   l l   and   l k 4 σ 2 2 + μ 2 μ 2 l = k   and   k l   and   l k   and   l k 6 μ 4 otherwise .
(b) $i \neq i'$
E n l t i n k * t i n l * t i n k t i
= E Re n l t i Re n k t i Re n l t i Re n k t i + E Re n l t i Re n k t i Im n l t i Im n k t i + E Im n l t i Im n k t i Re n l t i Re n k t i + E Im n l t i Im n k t i Im n l t i Im n k t i .
In a similar way to that used to obtain (A88), for $i \neq i'$, $E[n_l(t_i)n_k^*(t_i)n_{l'}^*(t_{i'})n_{k'}(t_{i'})]$ is given by
E n l t i n k * t i n l * t i n k t i = σ 2 2 + μ 2 2 + σ 2 2 + μ 2 2 + σ 2 2 + μ 2 2 + σ 2 2 + μ 2 2 = 4 σ 2 2 + μ 2 2 l = k   and   k = l   and   l = k σ 2 2 + μ 2 2 + σ 2 2 + μ 2 2 + σ 2 2 + μ 2 2 + σ 2 2 + μ 2 2 = 4 σ 2 2 + μ 2 2 l = k   and   k l   and   l = k 4 μ 4 l k l k σ 2 2 + μ 2 μ 2 + σ 2 2 + μ 2 μ 2 + σ 2 2 + μ 2 μ 2 + σ 2 2 + μ 2 μ 2 = 4 σ 2 2 + μ 2 μ 2 otherwise .
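The first case of (A88) (all indices equal, $i=i'$) reduces to $E[|n|^4]$, which a Monte-Carlo run can confirm; the parameter values below are illustrative.

```python
import numpy as np

# Monte-Carlo sanity check (sketch) of the first case of (A88):
# for all indices equal and i = i',
#   E[|n|^4] = 2*(mu^4 + 3*mu^2*sigma^2 + 0.75*sigma^4) + 2*(sigma^2/2 + mu^2)^2,
# where Re(n) and Im(n) are i.i.d. N(mu, sigma^2/2).

mu, sigma = 0.5, 1.0
rng = np.random.default_rng(1)
N = 1_000_000
n = (rng.normal(mu, sigma / np.sqrt(2), N)
     + 1j * rng.normal(mu, sigma / np.sqrt(2), N))

mc = np.mean(np.abs(n) ** 4)
closed = 2 * (mu**4 + 3 * mu**2 * sigma**2 + 0.75 * sigma**4) \
         + 2 * (sigma**2 / 2 + mu**2) ** 2
print(mc, closed)   # agree to Monte-Carlo accuracy (closed = 4.25 here)
```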

Appendix I. Third Order Non-Central Moment of Non-Zero-Mean Complex Gaussian Random Variable with Variance σ2

We define ten cases depending on how a , b , c , d , and e are related:
Case   I
    a = b and b = c and e = d
Case   II
    a = b and b c and e = d
Case   III
    a b and b = c and e = d
Case   IV
    a = c and c b and e = d
Case   V
    a b and b c and e = d
Case   VI
    a = b and b = c and e d
Case   VII
    a = b and b c and e d
Case   VIII
    a b and b = c and e d
Case   IX
    a = c and c b and e d
Case   X
    a b and b c and e d
  • E n a ( t d ) n b * ( t e ) n c ( t e )
      E n a ( t d ) n b * ( t e ) n c ( t e ) =   E Re n a ( t d ) + j Im n a ( t d ) Re n b ( t e ) j Im n b ( t e ) Re n c ( t e ) + j Im n c ( t e ) =   E Re n a ( t d ) Re n b ( t e ) Re n c ( t e ) + j E Re n a ( t d ) Re n b ( t e ) Im n c ( t e ) j E Re n a ( t d ) Im n b ( t e ) Re n c ( t e ) + j E Im n a ( t d ) Re n b ( t e ) Re n c ( t e ) + E Re n a ( t d ) Im n b ( t e ) Im n c ( t e ) E Im n a ( t d ) Re n b ( t e ) Im n c ( t e ) + E Im n a ( t d ) Im n b ( t e ) Re n c ( t e ) + j E Im n a ( t d ) Im n b ( t e ) Im n c ( t e ) .
    E n a ( t d ) n b * ( t e ) n c ( t e ) for Case I can be written as:
      E n a ( t d ) n b * ( t e ) n c ( t e ) = E n a ( t d ) n a * ( t d ) n a ( t d ] =   E Re n a ( t d ) Re n a ( t d ) Re n a ( t d ) + j E Re n a ( t d ) Re n a ( t d ) E Im n a ( t d )   j E Re n a ( t d ) Re n a ( t d ) E Im n a ( t d ) + j E Im n a ( t d ) E Re n a ( t d ) Re n a ( t d )   + E Re n a ( t d ) E Im n a ( t d ) Im n a ( t d ) E Im n a ( t d ) Im n a ( t d ) E Re n a ( t d )   + E Im n a ( t d ) Im n a ( t d ) E Re n a ( t d ) + j E Im n a ( t d ) Im n a ( t d ) Im n a ( t d ) =     μ 3 2 σ 2 + μ 2 + j σ 2 2 + μ 2 × μ j σ 2 2 + μ 2 × μ + j σ 2 2 + μ 2 × μ + σ 2 2 + μ 2 × μ   σ 2 2 + μ 2 × μ + σ 2 2 + μ 2 × μ + j × μ 3 2 σ 2 + μ 2 =   μ 3 2 σ 2 + μ 2 + μ σ 2 2 + μ 2 + j μ σ 2 2 + μ 2 + j μ 3 2 σ 2 + μ 2 .
Note that, in deriving (A91) and (A92), we used the fact that the real part and the imaginary part of the noise are independent and identically distributed as $\mathcal{N}(\mu, \sigma^2/2)$.
    Using the same manipulation used in obtaining (A92), for Case II–Case X,
    E n a ( t d ) n b * ( t e ) n c ( t e ) can be shown as
    E n a ( t d ) n b * ( t e ) n c ( t e ) = 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ     for   Case   II , Case   III , Case   VI E n a ( t d ) n b * ( t e ) n c ( t e )   =   2 μ 3   +   2 j μ 3        for   Case   IV , Case   V , Case   VII Case   X .
    From (A92) and (A93), in Case I–Case X, E n a ( t d ) n b * ( t e ) n c ( t e ) can be defined as
    E n a ( t d ) n b * ( t e ) n c ( t e ) = 3 2 σ 2 + μ 2 μ + σ 2 2 + μ 2 μ + j σ 2 2 + μ 2 μ + j 3 2 σ 2 + μ 2 μ for   Case   I E n a ( t d ) n b * ( t e ) n c ( t e ) = 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ    for   Case   II , Case   III , Case   VI E n a ( t d ) n b * ( t e ) n c ( t e ) = 2 μ 3 + 2 j μ 3      for   Case   IV , Case   V , Case   VII Case   X . .
  • E n a * ( t d ) n b * ( t e ) n c ( t e )
      E n a * ( t d ) n b * ( t e ) n c ( t e ) =   E Re n a ( t d ) j Im n a ( t d ) Re n b ( t e ) j Im n b ( t e ) Re n c ( t e ) + j Im n c ( t e ) =   E Re n a ( t d ) Re n b ( t e ) Re n c ( t e ) + j E Re n a ( t d ) Re n b ( t e ) Im n c ( t e ) j E Re n a ( t d ) Im n b ( t e ) Re n c ( t e ) j E Im n a ( t d ) Re n b ( t e ) Re n c ( t e ) + E Re n a ( t d ) Im n b ( t e ) Im n c ( t e ) + E Im n a ( t d ) Re n b ( t e ) Im n c ( t e ) E Im n a ( t d ) Im n b ( t e ) Re n c ( t e ) j E Im n a ( t d ) Im n b ( t e ) Im n c ( t e ) .
    In a similar way to get (A94), we get
    E n a * ( t d ) n b * ( t e ) n c ( t e ) = 3 2 σ 2 + μ 2 μ + σ 2 2 + μ 2 μ + j σ 2 2 + μ 2 μ + j 3 2 σ 2 + μ 2 μ for   Case   I E n a * ( t d ) n b * ( t e ) n c ( t e ) = 2 σ 2 2 + μ 2 μ + 2 j σ 2 2 + μ 2 μ    for   Case   III , Case   IV , Case   VI E n a * ( t d ) n b * ( t e ) n c ( t e ) = 2 μ 3 + 2 j μ 3       for   Case   II , Case   V , Case   VII Case   X .
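The Case I result (A92) can likewise be spot-checked by Monte Carlo (illustrative parameters):

```python
import numpy as np

# Monte-Carlo sanity check (sketch) of (A92): for a = b = c and d = e,
#   E[n n* n] = (1.5*sigma^2 + mu^2)*mu + (0.5*sigma^2 + mu^2)*mu
#               + j*[(0.5*sigma^2 + mu^2)*mu + (1.5*sigma^2 + mu^2)*mu],
# with Re(n) and Im(n) i.i.d. N(mu, sigma^2/2).

mu, sigma = 0.5, 1.0
rng = np.random.default_rng(2)
N = 1_000_000
n = (rng.normal(mu, sigma / np.sqrt(2), N)
     + 1j * rng.normal(mu, sigma / np.sqrt(2), N))

mc = np.mean(n * np.conj(n) * n)
re = (1.5 * sigma**2 + mu**2) * mu + (0.5 * sigma**2 + mu**2) * mu
closed = re + 1j * re
print(mc, closed)   # both close to 1.25 + 1.25j for these parameters
```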

Appendix J. Second Order Non-Central Moment of Non-Zero-Mean Complex Gaussian Random Variable with Variance σ2

Depending on how a, b, d and e are related, we define four cases:
Case   I
    a = b and d = e
Case   II
    a b and d = e
Case   III
    a = b and d e
Case   IV
    a b and d e
  • E n a t d n b * t e
    E n a t d n b * t e = E Re n a t d + j Im n a t d Re n b t e j Im n b t e = E Re n a t d Re n b t e j Re n a t d Im n b t e + j Im n a t d Re n b t e + Im n a t d Im n b t e
    For Case I, E n a t d n b * t e is given by
$E[n_a(t_d)n_b^*(t_e)] = E[n_a(t_d)n_a^*(t_d)] = E[\mathrm{Re}[n_a(t_d)]^2] + E[\mathrm{Im}[n_a(t_d)]^2] - jE[\mathrm{Re}[n_a(t_d)]]E[\mathrm{Im}[n_a(t_d)]] + jE[\mathrm{Im}[n_a(t_d)]]E[\mathrm{Re}[n_a(t_d)]] = \left(\frac{\sigma^2}{2}+\mu^2\right) + \left(\frac{\sigma^2}{2}+\mu^2\right) + j\mu^2 - j\mu^2 = \sigma^2 + 2\mu^2.$
    Similarly, it can be shown that E n a t d n b * t e is identically 2 μ 2 for Case II–Case IV:
    E n a t d n b * t e = 2 μ 2 for Case   II Case   IV .
Note that, in deriving (A97)–(A99), we used the fact that the real part and the imaginary part of the noise are independent and identically distributed as $\mathcal{N}(\mu, \sigma^2/2)$.
  • E n a * t d n b * t e
Using the same algebraic manipulation used in evaluating $E[n_a(t_d)n_b^*(t_e)]$, it can be shown that $E[n_a^*(t_d)n_b^*(t_e)]$ is identically $-2j\mu^2$ for Case I–Case IV:
$E[n_a^*(t_d)n_b^*(t_e)] = -2j\mu^2$ for Case I–Case IV.
  • E n a t d n b t e
    Using the same algebraic manipulation used in evaluating E n a t d n b * t e , it can be shown that E n a t d n b t e is equal to 2 j μ 2 for Case I–Case IV:
    E n a t d n b t e = 2 j μ 2 for Case   I Case   IV .
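The second-order moments of Appendix J can be spot-checked by Monte Carlo. Note that $E[n_a^*(t_d)n_b^*(t_e)]=(\mu-j\mu)^2=-2j\mu^2$ carries a minus sign, while $E[n_a(t_d)n_b(t_e)]=(\mu+j\mu)^2=+2j\mu^2$; the sketch checks all three moments with illustrative parameters.

```python
import numpy as np

# Monte-Carlo sanity check (sketch) of the second-order moments above:
# with Re(n) and Im(n) i.i.d. N(mu, sigma^2/2),
#   E[n n*]  = sigma^2 + 2*mu^2  (same sensor, same snapshot; Case I)
#   E[n m*]  = 2*mu^2            (independent samples)
#   E[n* m*] = (mu - j*mu)^2 = -2j*mu^2
#   E[n m]   = (mu + j*mu)^2 =  2j*mu^2

mu, sigma = 0.5, 1.0
rng = np.random.default_rng(3)
N = 1_000_000

def cnoise():
    return (rng.normal(mu, sigma / np.sqrt(2), N)
            + 1j * rng.normal(mu, sigma / np.sqrt(2), N))

n, m = cnoise(), cnoise()                # independent noise samples

print(np.mean(n * np.conj(n)))           # ~ sigma^2 + 2*mu^2 = 1.5
print(np.mean(n * np.conj(m)))           # ~ 2*mu^2 = 0.5
print(np.mean(np.conj(n) * np.conj(m)))  # ~ -2j*mu^2 = -0.5j
print(np.mean(n * m))                    # ~ 2j*mu^2 = 0.5j
```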

References

  1. Del Rio, J.E.F.; Catedra-Perez, M.F. A comparison between matrix pencil and Root-MUSIC for direction-of-arrival estimation making use of uniform linear arrays. Digit. Signal Process. 1997, 7, 153–162. [Google Scholar] [CrossRef]
  2. Silverstein, S.D.; Zoltowski, M.D. The mathematical basis for element and Fourier beamspace MUSIC and root-MUSIC algorithms. Digit. Signal Process. 1991, 1, 161–175. [Google Scholar] [CrossRef]
  3. Weng, Z.; Djuric, P.M. A search-free DOA estimation algorithm for coprime arrays. Digit. Signal Process. 2014, 24, 27–33. [Google Scholar] [CrossRef] [Green Version]
  4. He, Z.-Q.; Shi, Z.-P.; Huang, L. Covariance sparsity-aware DOA estimation for nonuniform noise. Digit. Signal Process. 2014, 24, 75–81. [Google Scholar] [CrossRef]
  5. Li, J.F.; Zhang, X.F. Two-dimensional angle estimation for monostatic MIMO arbitrary array with velocity receive sensors and unknown locations. Digit. Signal Process. 2014, 24, 34–41. [Google Scholar] [CrossRef]
  6. Anton, H. Elementary Linear Algebra, 11th ed.; Wiley: Hoboken, NJ, USA, 2019. [Google Scholar]
  7. Stoica, P.; Gershman, A.B. Maximum-likelihood DOA estimation by data-supported grid search. IEEE Signal Process. Lett. 1999, 6, 273–275. [Google Scholar] [CrossRef]
  8. Athley, F. Threshold region performance of maximum likelihood direction of arrival estimators. IEEE Trans. Signal Process. 2005, 53, 1359–1373. [Google Scholar] [CrossRef]
  9. Pesavento, M.; Gershman, A.B. Maximum-likelihood direction-of-arrival estimation in the presence of unknown nonuniform noise. IEEE Trans. Signal Process. 2001, 49, 1310–1324. [Google Scholar] [CrossRef]
  10. Vorobyov, S.A.; Gershman, A.B.; Wong, K.M. Maximum likelihood direction-of-arrival estimation in unknown noise fields using sparse sensor arrays. IEEE Trans. Signal Process. 2005, 53, 34–43. [Google Scholar] [CrossRef]
  11. Shin, J.W.; Lee, Y.-J.; Kim, H.-N. Reduced-complexity maximum likelihood direction-of-arrival estimation based on spatial aliasing. IEEE Trans. Signal Process. 2014, 62, 6568–6581. [Google Scholar] [CrossRef]
  12. Vincent, F.; Besson, O.; Chaumette, E. Approximate unconditional maximum likelihood direction of arrival estimation for two closely spaced targets. IEEE Signal Process. Lett. 2015, 22, 86–89. [Google Scholar] [CrossRef] [Green Version]
  13. Liu, Z.-M.; Huang, Z.-T.; Zhou, Y.-Y. An efficient maximum likelihood method for direction-of-arrival estimation via sparse Bayesian learning. IEEE Trans. Wirel. Commun. 2012, 11, 3607–3617. [Google Scholar] [CrossRef]
  14. Boccato, L.; Krummenauer, R.; Attux, R.; Lopes, A. Application of natural computing algorithms to maximum likelihood estimation of direction of arrival. Signal Process. 2012, 92, 1338–1352. [Google Scholar] [CrossRef]
  15. Wang, H.; Kay, S.; Saha, S. An importance sampling maximum likelihood direction of arrival estimator. IEEE Trans. Signal Process. 2008, 56, 5082–5092. [Google Scholar] [CrossRef]
  16. Magdy, A.; Mahmoud, K.R.; Abdel-Gawad, S.G.; Ibrahim, I.I. Direction of arrival estimation based on maximum likelihood criteria using gravitational search algorithm. In Proceedings of the Progress in Electromagnetics Research Symposium, Taipei, Taiwan, 25–28 March 2013; pp. 1162–1167. [Google Scholar]
  17. Zhang, Z.; Lin, J.; Shi, Y. Application of artificial bee colony algorithm to maximum likelihood DOA estimation. J. Bionic Eng. 2013, 10, 100–109. [Google Scholar] [CrossRef]
  18. Seghouane, A.K. A Kullback–Leibler methodology for unconditional ML DOA estimation in unknown nonuniform noise. IEEE Trans. Aerosp. Electron. Syst. 2011, 47, 3012–3021. [Google Scholar] [CrossRef]
  19. Chen, C.E.; Lorenzelli, F.; Hudson, R.E.; Yao, K. Maximum likelihood DOA estimation of multiple wideband sources in the presence of nonuniform sensor noise. EURASIP J. Adv. Signal Process. 2008, 14, 1–12. [Google Scholar]
Figure 1. Analytic and simulated MSEs (Mean Square Errors) of θ 1 and θ 2 with respect to SNR (Signal to Noise Ratio).
Figure 2. Analytic and simulated MSEs of θ 1 and θ 2 with respect to the number of snapshots (SNR = 0 dB).
Figure 3. Analytic and simulated MSEs of θ 1 and θ 2 with respect to the number of snapshots (SNR = 5 dB).
Figure 4. Analytic and simulated MSEs of θ 1 and θ 2 with respect to the number of snapshots (SNR = 10 dB).
Figure 5. Comparison of the execution times of the Monte Carlo simulation-based MSEs and the analytic MSEs.
Figure 6. Outline of the proposed performance analysis scheme.

Paik, J.W.; Lee, K.-H.; Lee, J.-H. Asymptotic Performance Analysis of Maximum Likelihood Algorithm for Direction-of-Arrival Estimation: Explicit Expression of Estimation Error and Mean Square Error. Appl. Sci. 2020, 10, 2415. https://doi.org/10.3390/app10072415

