Article

Performance Analysis of the Direct Position Determination Method in the Presence of Array Model Errors

1 National Digital Switching System Engineering & Technological Research Center, Zhengzhou 450002, China
2 Zhengzhou Information Science and Technology Institute, Zhengzhou 450002, China
* Author to whom correspondence should be addressed.
Sensors 2017, 17(7), 1550; https://doi.org/10.3390/s17071550
Submission received: 8 May 2017 / Revised: 22 June 2017 / Accepted: 29 June 2017 / Published: 2 July 2017

Abstract: The direct position determination approach was recently presented as a promising technique for the localization of a transmitting source with accuracy higher than that of the conventional two-step localization method. In this paper, the theoretical performance of the direct position determination estimator proposed by Weiss is examined for situations in which array model errors are present. Our study starts from a matrix eigen-perturbation result, which expresses the perturbation of the eigenvalues as a function of the disturbance added to a Hermitian matrix. The first-order asymptotic expression of the positioning errors is presented, from which an analytical expression for the mean square error of the direct localization follows. Additionally, explicit formulas for computing the probability of successful localization are deduced. Finally, Cramér-Rao bound expressions for the position estimate are derived for two cases: (1) array model errors are absent and (2) array model errors are present. The obtained Cramér-Rao bounds provide insights into the effects of the array model errors on the localization accuracy. Simulation results support and corroborate the theoretical developments made in this paper.

1. Introduction

Techniques for emitter localization using direction of arrival (DOA) measurements [1,2,3,4,5] play an important role in many areas, including vehicle navigation, localization and tracking of acoustic sources, and location services in satellite communications. In such localization systems, a single moving observer or multiple stationary observers are used to determine the positions of the emitters. Generally, each observer is equipped with an antenna array for measuring the DOAs of the transmitted sources, and the emitter can then be located at the intersection of a set of lines of bearing [6,7,8]. The location procedure described above is typically called the two-step method. In the first step, the signal parameters (e.g., DOA [1,2,3,4,5], time difference of arrival (TDOA) [9,10], time of arrival (TOA) [11,12], frequency difference of arrival (FDOA) [13,14], frequency of arrival (FOA) [15], and received signal strength (RSS) [16,17]) are separately measured at several stations. In the second step, a central station uses these measurements to estimate the position coordinates of the sources. The two-step procedure is also known as the decentralized approach [18]. Although the two-step procedure is widely applied in modern localization systems, it generally does not yield a statistically optimal position estimate, because the signal parameters are estimated while ignoring the constraint that all measurements must correspond to a common source position. As a result, information loss between the two steps is unavoidable. Although it can be proved by the extended invariance principle (EXIP) [19] that the two-step method provides an asymptotically efficient estimate under certain conditions, these conditions are rarely met in practical scenarios.
To improve on the accuracy of two-step location methods, a promising technique called the direct position determination (DPD) approach has been proposed over the past few years. DPD is a centralized, single-step estimation technique in which the estimator uses exactly the same data as classical two-step methods but searches for the source location directly. Generally, the DPD method outperforms conventional two-step methods at low signal-to-noise ratios and when relatively few samples are available; moreover, it does not encounter the measurement-association problem. More importantly, the DPD technique can be applied to many wireless positioning systems. Specifically, DPD methods for locating a narrowband radio emitter based on Doppler shifts are presented in [20,21], and DPD methods for locating a wideband source based on time delays are proposed in [22,23,24]. Furthermore, DPD estimators using both the Doppler frequency and the time delay are developed in [25,26,27,28]. Note that in the DPD methods mentioned above, multiple platforms, each equipped with a single-antenna receiver, are used for position determination, and as a result, the DOA information of the impinging signals cannot be exploited. In [29], a DPD method based on multiple static stations, each equipped with an antenna array, was first proposed. In this single-step location method, the array response is modeled as a function of the source position, and only a two-dimensional search is required even though there are many nuisance parameters in the array signal model. Following the work of [29], other DPD estimators for specific localization scenarios have been developed in the literature. In particular, DPD methods for multiple radio emitters are presented in [30,31], and some high-resolution DPD methods are given in [32,33]. DPD estimators for the cases of known waveforms and multipath environments are developed in [34] and [35,36], respectively. In addition, DPD methods tailored to special signals (e.g., orthogonal frequency division multiplexing signals, cyclostationary signals, and intermittent emissions) are proposed in [37,38,39]. It is noteworthy that all experimental results in [20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39] demonstrate that the single-step approach outperforms the two-step method for a low signal-to-noise ratio (SNR) and a small number of samples. Although this kind of localization method may require more computation and communication bandwidth, modern information technologies [40,41,42] can be used to overcome these difficulties; for example, cloud computing and cloud storage [40,41] can reduce the computational load, and compressive sensing [42] helps reduce the required communication bandwidth.
In the field of array signal processing, super-resolution DOA estimation methods are known to be sensitive to uncertainties in the array manifold. In recent decades, much attention has been paid to analyzing the sensitivity of classical DOA estimation algorithms to array model errors. In [43,44,45,46,47,48,49,50,51], the statistical performance of the multiple signal classification algorithm and its extensions in the presence of array model errors is studied. An analysis of the estimation of signal parameters via rotational invariance techniques under random sensor uncertainties is performed in [52], and a sensitivity analysis of the weighted subspace fitting algorithm under the combined effects of array model errors and finite samples is presented in [53]. The statistical performance of the maximum likelihood algorithm is also investigated in [54,55] under the assumption that array calibration errors exist. Additionally, efficient parameter estimation algorithms have been proposed for uncalibrated arrays [56,57] and partly calibrated arrays [58,59,60].
Array model errors are typically caused by gain/phase uncertainties, mutual coupling, and sensor position perturbations. Note that all the DPD methods presented in [29,30,31,32,33,35,36,37,38,39] also rely on accurate knowledge of the array manifold, and it is therefore reasonable to expect that their localization accuracy is severely degraded by array uncertainties. Although the estimation performance of the DPD method in the presence of array model errors is rigorously analyzed in [34,61,62], these theoretical studies consider only the case in which the signal waveforms are known, which is rarely realistic for non-cooperative communications. In this paper, the location performance of the DPD method in the presence of array model errors is examined when the signal waveforms are not known in advance. Our theoretical analysis focuses on the DPD estimator in [29] because of its fundamental role in the field of direct localization. Because the objective function of this DPD estimator is formulated as the maximum eigenvalue of a Hermitian matrix, the theoretical development begins with a matrix eigen-perturbation result, which expresses the perturbation of the eigenvalues as a function of the disturbance added to the Hermitian matrix. Subsequently, the first-order asymptotic expression of the localization errors is given, from which an analytical formula for the mean square error (MSE) of the DPD estimator follows. Furthermore, two exact formulas for calculating the probability of successful localization are deduced, which offers another statistical perspective on the estimation performance. Finally, Cramér-Rao bound (CRB) expressions for the position estimate are derived for two cases: (a) array model errors do not exist and (b) array model errors are present and follow a Gaussian distribution. The obtained CRBs provide further insights into how array model errors affect the localization performance.
The remainder of this paper is organized as follows. Section 2 lists the notational conventions used throughout the paper. In Section 3, the signal model for direct localization is formulated. Section 4 briefly describes the DPD method first proposed in [29]. Section 5 discusses the statistical assumptions on the array model errors and their effects. In Section 6, the analytical formula for the MSE of the DPD method is derived in the presence of array model errors. Section 7 provides two explicit formulas for calculating the probability of successful localization. In Section 8, the CRB expressions for the position estimate are derived for the two cases. Numerical simulations are presented in Section 9 to investigate the usefulness of the theoretical expressions for performance prediction. Conclusions are drawn in Section 10. The proofs of the main results are given in the Appendices.

2. Notation and Nomenclature

The notational conventions that will be used throughout this paper are summarized in Table 1. The variables and parameters that are used in this paper will be defined when they first appear in the following.

3. Signal Models for Direct Position Determination

3.1. Time-Domain Signal Model

Consider an emitter and N base stations intercepting the transmitted signal. Each base station is equipped with an antenna array consisting of M elements. The transmitter’s position is denoted by an L × 1 vector of coordinates p; in practice, L equals two or three. We consider the case without multipath or non-line-of-sight (NLOS) propagation. The complex envelope of the signal observed by the nth base station is then modeled as [29]
x_n(t) = β_n a_n(p) s(t − τ_n(p) − t_0) + ε_n(t),   1 ≤ n ≤ N,
where
  • a_n(p) is the nth array response to the signal transmitted from position p,
  • s(t − τ_n(p) − t_0) is the unknown signal waveform transmitted at the unknown time t_0,
  • τ_n(p) is the signal propagation time from the emitter to the nth base station (i.e., distance divided by the signal propagation speed),
  • β_n is an unknown complex scalar representing the channel attenuation between the transmitter and the nth base station,
  • ε_n(t) is temporally white, circularly symmetric complex Gaussian random noise with zero mean and covariance matrix σ_ε² I_M.
Assuming that the observation vector x_n(t) is sampled with period T, the kth sample can be expressed as
x_{n,k} = β_n a_n(p) s(kT − τ_n(p) − t_0) + ε_{n,k},   1 ≤ k ≤ K,
where K is the number of snapshots.

3.2. Frequency-Domain Signal Model

To determine the emitter position directly from all observations, it is desirable to separate the propagation delay τ_n(p) and the transmit time t_0 from the signal waveform. This is easily achieved using the frequency-domain representation of the problem. Taking the discrete Fourier transform (DFT) of (2) produces [29]
x̄_{n,k} = β_n a_n(p) s̄_k exp{−jω_k(τ_n(p) + t_0)} + ε̄_{n,k},   1 ≤ n ≤ N, 1 ≤ k ≤ K,
where
  • ω_k = 2π(k − 1)/(KT) is the kth known discrete frequency point,
  • s ¯ k is the kth Fourier coefficient of the unknown signal corresponding to frequency ω k ,
  • ε ¯ n , k is the kth Fourier coefficient of the random noise corresponding to frequency ω k .
It must be emphasized that the unknown deterministic parameter set in (3) consists of p, t_0, β_n, and s̄_k; however, only the location vector p is of interest to the DPD approach. In addition, because the DFT is an orthogonal linear transformation, the distribution of the random noise vector ε̄_{n,k} is the same as that of ε_{n,k}, with first- and second-order moments given by
E[ε̄_{n,k}] = O_{M×1},   E[ε̄_{n,k} ε̄_{n,k}^T] = O_{M×M},   E[ε̄_{n,k} ε̄_{n,k}^H] = σ_ε² I_M,   E[ε̄_{n,k} ε̄_{n,l}^T] = E[ε̄_{n,k} ε̄_{n,l}^H] = O_{M×M}   (1 ≤ k, l ≤ K; k ≠ l).
Note that the DPD technique studied below is derived from (3).
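To make the frequency-domain model concrete, the following minimal NumPy sketch generates the observations x̄_{n,k} of (3) for a single base station. The function name and arguments are illustrative rather than taken from the paper, and the array response a_n(p) is assumed to be supplied by the caller.

```python
import numpy as np

def freq_domain_snapshots(a_n, beta_n, tau_n, t0, s_bar, T, sigma_eps, rng=None):
    """Generate x_bar_{n,k}, k = 1..K, per Equation (3) for one base station.

    a_n       : (M,) complex array response a_n(p)
    beta_n    : complex channel attenuation
    tau_n, t0 : propagation delay and transmit time [s]
    s_bar     : (K,) Fourier coefficients of the unknown waveform
    T         : sampling period [s]; omega_k = 2*pi*(k-1)/(K*T)
    sigma_eps : sensor-noise standard deviation
    """
    rng = np.random.default_rng() if rng is None else rng
    K, M = s_bar.size, a_n.size
    omega = 2.0 * np.pi * np.arange(K) / (K * T)
    phase = np.exp(-1j * omega * (tau_n + t0))      # the delay appears only as a phase ramp
    noise = (sigma_eps / np.sqrt(2.0)) * (rng.standard_normal((M, K))
                                          + 1j * rng.standard_normal((M, K)))
    # Column k holds beta_n * a_n(p) * s_bar_k * exp(-j*omega_k*(tau_n + t0)) + noise
    return beta_n * np.outer(a_n, s_bar * phase) + noise
```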

4. Direct Position Determination Method

This section introduces the DPD method presented in [29]. The optimization model for direct localization is established according to the least square criterion, which can be formulated as
min_{p,{β_n},{s̄_k}} Σ_{n=1}^{N} Σ_{k=1}^{K} ||x̄_{n,k} − β_n a_n(p) s̄_k exp{−jω_k(τ_n(p) + t_0)}||²₂ = min_{p,{β_n},{s̄_k}} Σ_{n=1}^{N} ||x̄_n − (s̄_n ⊗ a_n(p)) β_n||²₂,
where
x̄_n = [x̄_{n,1}^H  x̄_{n,2}^H  …  x̄_{n,K}^H]^H,
s̄_n = [s̄_1 exp{−jω_1(τ_n(p) + t_0)}  s̄_2 exp{−jω_2(τ_n(p) + t_0)}  …  s̄_K exp{−jω_K(τ_n(p) + t_0)}]^T.
Obviously, (5) is a multidimensional nonlinear minimization problem. A direct minimization involves a search over the parameter space and is computationally prohibitive. The technique of the separation of variables can be applied to simplify the optimization problem.
First, the channel attenuation scalar β n that minimizes (5) is given by
β_n = (s̄_n ⊗ a_n(p))^H x̄_n / (||a_n(p)||²₂ ||s̄_n||²₂).
It can be assumed, without loss of generality, that ||a_n(p)||₂ = ||s̄_n||₂ = 1. Then, substituting (7) into (5) and applying algebraic manipulations leads to the concentrated problem [29]
max_{p, s̄}   s̄^H ( Σ_{n=1}^{N} A_n^H(p) x̄_n x̄_n^H A_n(p) ) s̄,
where
A_n(p) = blkdiag[a_{n,1}(p)  a_{n,2}(p)  …  a_{n,K}(p)],
s̄ = [s̄_1 exp{−jω_1 t_0}  s̄_2 exp{−jω_2 t_0}  …  s̄_K exp{−jω_K t_0}]^T
with a_{n,k}(p) = a_n(p) exp{−jω_k τ_n(p)}. According to quadratic form theory, the cost function in (8) is maximized by selecting the vector s̄ as the eigenvector corresponding to the largest eigenvalue of the matrix Σ_{n=1}^{N} A_n^H(p) x̄_n x̄_n^H A_n(p). Therefore, (8) reduces to
max_p J(p) = max_p λ_max{B(p) B^H(p)} = max_p λ_max{B^H(p) B(p)},
where
B(p) = [A_1^H(p) x̄_1  A_2^H(p) x̄_2  …  A_N^H(p) x̄_N] = A^H(p) X̄
with
A(p) = [A_1^H(p)  A_2^H(p)  …  A_N^H(p)]^H,   X̄ = blkdiag[x̄_1  x̄_2  …  x̄_N].
It is important to stress that the second equality in (10) holds because, for any matrix Z, the non-zero eigenvalues of Z^H Z and Z Z^H are identical [63]. Moreover, note that the dimensions of the matrices B(p)B^H(p) and B^H(p)B(p) are K × K and N × N, respectively. In practice, K is typically much greater than N, and it is therefore more computationally efficient to perform the eigendecomposition on B^H(p)B(p) instead of B(p)B^H(p). Because the cost function in (10) has no closed-form maximizer in p, the most straightforward way of solving (10) is to perform a grid search, as recommended in [29].
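The estimator in (10)–(12) can be summarized in a few lines of code. The sketch below is a hedged illustration: the helpers `steering(n, p)` and `tau(n, p)` are hypothetical placeholders for the array response a_n(p) and the delay τ_n(p), and a plain grid search is used as recommended in [29].

```python
import numpy as np

def dpd_cost(p, xbar_list, steering, tau, omegas):
    """J(p) = lambda_max{B^H(p) B(p)} from Equation (10).

    xbar_list : list of N arrays of shape (K, M); row k holds x_bar_{n,k}
    steering  : steering(n, p) -> (M,) array response a_n(p)   [assumed helper]
    tau       : tau(n, p)      -> propagation delay tau_n(p)   [assumed helper]
    omegas    : (K,) discrete frequencies omega_k
    """
    cols = []
    for n, xb in enumerate(xbar_list):
        a = steering(n, p)
        a = a / np.linalg.norm(a)                  # normalization ||a_n(p)||_2 = 1
        phase = np.exp(-1j * omegas * tau(n, p))   # a_{n,k}(p) = a_n(p) exp(-j*omega_k*tau_n(p))
        # k-th entry of the column A_n^H(p) x_bar_n equals a_{n,k}(p)^H x_bar_{n,k}
        cols.append(np.conj(phase) * (xb @ np.conj(a)))
    B = np.stack(cols, axis=1)                     # the K x N matrix B(p) of Equation (11)
    # Decompose the N x N product, which is cheaper than the K x K one when K >> N
    return np.linalg.eigvalsh(B.conj().T @ B)[-1]

def dpd_grid_search(grid, xbar_list, steering, tau, omegas):
    """Pick the candidate position on the grid that maximizes J(p)."""
    return max(grid, key=lambda q: dpd_cost(q, xbar_list, steering, tau, omegas))
```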
Note that when the location is estimated in multipath environments, the localization accuracy may improve markedly if the information contained in the non-line-of-sight signal components is exploited with the aid of appropriate channel modeling [35,36]. In that case, the signal models in (1) and (3) and the estimation criterion in (5) must be adjusted to obtain a suitable solution for the multipath model. Our performance analysis method also applies to the case of multipath propagation, but we only consider the single-path signal model in this paper owing to limited space.

5. Statistical Assumption and Effects of Array Model Errors

Assume that the actual array response, which differs from the nominal value, can be expressed as
â_n(p) = a_n(p) + φ̃_n,
where φ ˜ n is the array model error. It must be emphasized that φ ˜ n is modeled as a stochastic variable rather than a deterministic variable throughout this paper. Moreover, there exist a variety of statistical assumptions that could be used to describe φ ˜ n in the literature. To make our results applicable to a more general situation, { φ ˜ n } 1 n N is modeled as a set of independent complex Gaussian vectors with first- and second-order moments given by [43,44,45,46,47,48,49,50,51,52,53,54,55]
E[φ̃_n] = O_{M×1},   E[φ̃_n φ̃_n^T] = Φ_n^(1),   E[φ̃_n φ̃_n^H] = Φ_n^(2),   E[φ̃_n φ̃_m^T] = E[φ̃_n φ̃_m^H] = O_{M×M}   (1 ≤ n, m ≤ N; n ≠ m).
Furthermore, the array model error φ̃_n is uncorrelated with the sensor noise {ε̄_{n,k}}_{1≤k≤K} for each base station. It is noteworthy that (14) will be used to determine the MSE and the CRB of the DPD estimator investigated in this paper.
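For reference, a minimal sketch of how such errors can be drawn in a simulation is given below for the common circularly symmetric special case Φ_n^(1) = O and Φ_n^(2) = σ_φ̃² I_M, which is also the model used later in (103); the function name is illustrative.

```python
import numpy as np

def draw_array_errors(M, N, sigma_phi, rng=None):
    """Draw independent errors phi_n, n = 1..N, satisfying Equation (14) with
    Phi_n^(1) = O and Phi_n^(2) = sigma_phi^2 * I_M (circularly symmetric case)."""
    rng = np.random.default_rng() if rng is None else rng
    phi = (sigma_phi / np.sqrt(2.0)) * (rng.standard_normal((N, M))
                                        + 1j * rng.standard_normal((N, M)))
    return phi   # row n is phi_n; the perturbed response of (13) is a_n(p) + phi[n]
```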
When array model errors exist, the frequency-domain signal model in (3) becomes
x̄̂_{n,k} = x̄_{n,k,0} + β_n φ̃_n s̄_k exp{−jω_k(τ_n(p) + t_0)} + ε̄_{n,k},
where x ¯ n , k , 0 is the true value of x ¯ ^ n , k in the absence of sensor noise and array model errors, and can be expressed as
x̄_{n,k,0} = β_n a_n(p) s̄_k exp{−jω_k(τ_n(p) + t_0)}.
Defining the vectors and matrices
x̄̂_n = [x̄̂_{n,1}^H  x̄̂_{n,2}^H  …  x̄̂_{n,K}^H]^H,   x̄_{n,0} = [x̄_{n,1,0}^H  x̄_{n,2,0}^H  …  x̄_{n,K,0}^H]^H,   ε̄_n = [ε̄_{n,1}^H  ε̄_{n,2}^H  …  ε̄_{n,K}^H]^H,
X̄̂ = blkdiag[x̄̂_1  x̄̂_2  …  x̄̂_N],   X̄_0 = blkdiag[x̄_{1,0}  x̄_{2,0}  …  x̄_{N,0}],   Ē = blkdiag[ε̄_1  ε̄_2  …  ε̄_N],
it is easily verified from (15) and (17) that
x̄̂_n = x̄_{n,0} + ε̄_n + (r_n ⊗ I_M) φ̃_n,   1 ≤ n ≤ N,
where
r_n = β_n [s̄_1 exp{−jω_1(τ_n(p) + t_0)}  s̄_2 exp{−jω_2(τ_n(p) + t_0)}  …  s̄_K exp{−jω_K(τ_n(p) + t_0)}]^T.
From (17) and (18) we get
blkdiag[x̄̂_1  x̄̂_2  …  x̄̂_N] = blkdiag[x̄_{1,0}  x̄_{2,0}  …  x̄_{N,0}] + blkdiag[ε̄_1  ε̄_2  …  ε̄_N] + blkdiag[(r_1 ⊗ I_M)φ̃_1  (r_2 ⊗ I_M)φ̃_2  …  (r_N ⊗ I_M)φ̃_N]   ⟹   X̄̂ = X̄_0 + Ē + Ψ̃,
where
Ψ̃ = blkdiag[(r_1 ⊗ I_M)φ̃_1  (r_2 ⊗ I_M)φ̃_2  …  (r_N ⊗ I_M)φ̃_N].
In the presence of array model errors, the emitter position is actually determined by
max_p Ĵ(p) = max_p λ_max{B̂^H(p) B̂(p)},
where B̂(p) = A^H(p) X̄̂. We denote the optimal solution of (22) by p̂ and its estimation error by p̃ = p̂ − p. The estimation error p̃ clearly depends on both the sensor noise and the array model errors. In the subsequent sections, the statistical performance of p̃ is derived under the combined effects of these two sources of error.
For convenience in later formulae, we proceed by defining two error vectors
ε̄_c = [ε̄^T  ε̄^H]^T,   φ̃_c = [φ̃^T  φ̃^H]^T,
where
ε̄ = [ε̄_1^H  ε̄_2^H  …  ε̄_N^H]^H = Ē 1_{N×1},   φ̃ = [φ̃_1^H  φ̃_2^H  …  φ̃_N^H]^H.
Obviously, ε ¯ c and φ ˜ c are related to sensor noise and array model errors, respectively. Further, we define two permutation matrices
Π_ε̄ = [O_{MNK×MNK}  I_{MNK}; I_{MNK}  O_{MNK×MNK}],   Π_φ̃ = [O_{MN×MN}  I_{MN}; I_{MN}  O_{MN×MN}].
It can then be easily checked from (23) and (25) that ε̄_c* = Π_ε̄ ε̄_c and φ̃_c* = Π_φ̃ φ̃_c. In addition, it is straightforward to deduce from (17), (21), and (24) that Ē = O(||ε̄||₂) and Ψ̃ = O(||φ̃||₂).

6. MSE of Direct Position Determination Method in Presence of Array Model Errors

In this section, the MSE for the DPD method stated above is addressed in the presence of uncertainties in the model of the array manifold.

6.1. Perturbation Analysis on the Eigenvalues of Positive Semidefinite Matrix

Because the cost function in (22) is expressed as the maximal eigenvalue of some positive semidefinite matrix, an eigenvalue perturbation result is formally stated in a proposition as follows.
Proposition 1.
Let Z ∈ C^{N×N} be a positive semidefinite matrix with eigenvalues λ_1 ≤ λ_2 ≤ … ≤ λ_N associated with unit eigenvectors u_1, u_2, …, u_N, respectively, and let λ_n differ from all the other eigenvalues. Assume Z is corrupted by a Hermitian error matrix Z̃ ∈ C^{N×N}, and denote the corresponding perturbed matrix by Ẑ = Z + Z̃ ∈ C^{N×N}. If the eigenvalues of Ẑ are denoted λ̂_1 ≤ λ̂_2 ≤ … ≤ λ̂_N, then the relationship between λ̂_n and λ_n can be described by
λ̂_n = λ_n + u_n^H Z̃ u_n + u_n^H Z̃ U_n Z̃ u_n + o(||Z̃||²₂),
where
U_n = Σ_{i=1, i≠n}^{N} u_i u_i^H / (λ_n − λ_i).
The proof of Proposition 1 can be found in [21]. Note that Proposition 1 plays a fundamental role in our subsequent analysis.
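Proposition 1 is easy to verify numerically. The sketch below perturbs a random positive semidefinite matrix by a small Hermitian matrix and checks that the second-order expansion (26)–(27) tracks the exact largest eigenvalue up to a third-order residual; the numbers involved are arbitrary test values.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Z = A @ A.conj().T                                  # positive semidefinite test matrix
lam, U = np.linalg.eigh(Z)                          # ascending eigenvalues, unit eigenvectors

E = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Z_tilde = 1e-3 * (E + E.conj().T)                   # small Hermitian perturbation

n = N - 1                                           # track the largest eigenvalue
u_n = U[:, n]
U_n = sum(np.outer(U[:, i], U[:, i].conj()) / (lam[n] - lam[i])
          for i in range(N) if i != n)              # Equation (27)
approx = (lam[n]
          + np.real(u_n.conj() @ Z_tilde @ u_n)
          + np.real(u_n.conj() @ Z_tilde @ U_n @ Z_tilde @ u_n))   # Equation (26)
exact = np.linalg.eigvalsh(Z + Z_tilde)[n]
print(abs(exact - approx))                          # residual is o(||Z_tilde||^2), i.e. tiny
```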

6.2. Second-Order Perturbation Analysis on the Cost Function

Generally, first-order analysis is applied to predict the statistical performance of an estimator. The reason is that this analysis method gives the linear relationship between the estimation errors and measurement noise as well as model errors. As a result, the theoretical MSE of the estimator can be obtained according to statistical assumptions of the two sources of error. Moreover, first-order analysis is valid in most cases, provided that the error levels are not too high. In this paper, we employ this approach to derive the performance of the DPD estimator described above. For this purpose, first-order perturbation analysis is performed on the first derivative of the objective function in (22), or alternatively, second-order perturbation analysis is performed on the cost function in (22). Herein, because the analytical expression for the derivative of the cost function is rather complex, we prefer the second approach.
First, performing second-order perturbation analysis on matrix B ^ ( p ^ ) = A H ( p ^ ) X ¯ ^ leads to
B̂(p̂) = A^H(p̂) X̄̂ = B_0 + B̃^(1) + B̃^(2) + o(||ξ̃||²₂),
where ξ̃ = [p̃^T  ε̄^T  φ̃^T]^T collects all the error vectors, and
B_0 = A^H(p) X̄_0,
B̃^(1) = A^H(p) Ē + A^H(p) Ψ̃ + Σ_{l=1}^{L} <p̃>_l Ȧ_l^H(p) X̄_0 = O(||ξ̃||₂),
B̃^(2) = Σ_{l=1}^{L} <p̃>_l Ȧ_l^H(p) Ē + Σ_{l=1}^{L} <p̃>_l Ȧ_l^H(p) Ψ̃ + (1/2) Σ_{l₁=1}^{L} Σ_{l₂=1}^{L} <p̃>_{l₁} <p̃>_{l₂} Ä_{l₁l₂}^H(p) X̄_0 = O(||ξ̃||²₂)
with
Ȧ_l(p) = ∂A(p)/∂<p>_l,   Ä_{l₁l₂}(p) = ∂²A(p)/(∂<p>_{l₁} ∂<p>_{l₂}).
The explicit expressions for A ˙ l ( p ) and A ¨ l 1 l 2 ( p ) are given in Appendix A. It is seen from (28) and (29) that B ˜ ( 1 ) and B ˜ ( 2 ) collect all first- and second-order perturbation terms, respectively. It is deduced from (28) that
B̂^H(p̂) B̂(p̂) = C_0 + C̃^(1) + C̃^(2) + o(||ξ̃||²₂),
where
C_0 = B_0^H B_0,   C̃^(1) = B_0^H B̃^(1) + (B̃^(1))^H B_0 = O(||ξ̃||₂),   C̃^(2) = (B̃^(1))^H B̃^(1) + B_0^H B̃^(2) + (B̃^(2))^H B_0 = O(||ξ̃||²₂).
From (31) and (32) we observe that C ˜ ( 1 ) and C ˜ ( 2 ) consist of all first- and second-order perturbation terms, respectively.
Let λ_1 ≤ λ_2 ≤ … ≤ λ_N and u_1, u_2, …, u_N denote the eigenvalues and the corresponding unit eigenvectors of the matrix C_0, respectively. Additionally, it is reasonable to assume that the source location parameters are identifiable, which means that C_0 has a unique maximal eigenvalue λ_N. Eigenvalue perturbation theory is extensively applied to performance analysis of DOA estimation in array signal processing; to the best of our knowledge, there is no mathematical tool that can be used to prove that the eigenvalues of C_0 are distinct, but extensive numerical investigations show that the probability of repeated eigenvalues is small enough to be ignored. As a result, we define the matrix
U_N = Σ_{n=1}^{N−1} u_n u_n^H / (λ_N − λ_n).
By combining Proposition 1 and (31), the cost-function value at point p ^ is given by
Ĵ(p̂) = λ_max{B̂^H(p̂) B̂(p̂)} = λ_max{C_0} + u_N^H (C̃^(1) + C̃^(2)) u_N + u_N^H C̃^(1) U_N C̃^(1) u_N + o(||ξ̃||²₂) = λ_N + λ̃_N^(1) + λ̃_N^(2) + o(||ξ̃||²₂),
where
λ̃_N^(1) = u_N^H B_0^H B̃^(1) u_N + (u_N^H B_0^H B̃^(1) u_N)^H = O(||ξ̃||₂),
λ̃_N^(2) = u_N^H B_0^H B̃^(1) U_N B_0^H B̃^(1) u_N + (u_N^H B_0^H B̃^(1) U_N B_0^H B̃^(1) u_N)^H + u_N^H (B̃^(1))^H (I_K + B_0 U_N B_0^H) B̃^(1) u_N + u_N^H B_0^H B̃^(1) U_N (B̃^(1))^H B_0 u_N + u_N^H B_0^H B̃^(2) u_N + (u_N^H B_0^H B̃^(2) u_N)^H = O(||ξ̃||²₂).
It is seen from (35) that λ ˜ N ( 1 ) and λ ˜ N ( 2 ) group together all the first- and second-order error terms, respectively. The proof of (34) and (35) can be found in Appendix B. In the following, we express λ ˜ N ( 1 ) and λ ˜ N ( 2 ) as functions of ε ¯ c , φ ˜ c , and p ˜ .
First, inserting the second equality in (29) into the first equality in (35) produces
λ̃_N^(1) = h_1^H(p) ε̄_c + h_2^H(p) φ̃_c + h_3^T(p) p̃,
where
{ h 1 ( p ) = f 1 [ B 0 u N , u N ] + Π ε ¯ ( f 1 [ B 0 u N , u N ] ) , h 2 ( p ) = f 2 [ B 0 u N , u N ] + Π φ ˜ ( f 2 [ B 0 u N , u N ] ) , h 3 ( p ) = 2 Re { f 3 [ B 0 u N , u N ] } ,
in which { f k [ , ] } 1 k 3 can be regarded as a set of vector functions, whose functional forms are given by
{ f 1 [ z 1 , z 2 ] = [ diag [ z 2 1 M K × 1 ] A ( p ) z 1 O M N K × 1 ] , f 2 [ z 1 , z 2 ] = [ blkdiag [ < z 2 > 1 ( r 1 H I M )     < z 2 > 2 ( r 2 H I M )         < z 2 > N ( r N H I M ) ] A ( p ) z 1 O M N × 1 ] , f 3 [ z 1 , z 2 ] = [ z 1 H A ˙ 1 H ( p ) X ¯ 0 z 2     z 1 H A ˙ 2 H ( p ) X ¯ 0 z 2         z 1 H A ˙ L H ( p ) X ¯ 0 z 2 ] H , ( z 1 C K × 1 , z 2 C N × 1 ) .
The proof of (36) to (38) is provided in Appendix C. Secondly, substituting the second and third equalities in (29) into the second equality in (35) leads to
λ ˜ N ( 2 ) = ε ¯ c H H 1 ( p ) ε ¯ c + φ ˜ c H H 2 ( p ) φ ˜ c + p ˜ T H 3 ( p ) p ˜ + ε ¯ c H H 4 ( p ) φ ˜ c + ε ¯ c H H 5 ( p ) p ˜ + φ ˜ c H H 6 ( p ) p ˜ ,
where
{ H 1 ( p ) = F a 1 [ B 0 u N , U N B 0 H , u N ] + ( F a 1 [ B 0 u N , U N B 0 H , u N ] ) H + F b 1 [ u N , I K + B 0 U N B 0 H , u N ]    + F c 1 [ B 0 u N , U N , B 0 u N ] , H 2 ( p ) = F a 2 [ B 0 u N , U N B 0 H , u N ] + ( F a 2 [ B 0 u N , U N B 0 H , u N ] ) H + F b 2 [ u N , I K + B 0 U N B 0 H , u N ]    + F c 2 [ B 0 u N , U N , B 0 u N ] , H 3 ( p ) = F a 3 [ B 0 u N , U N B 0 H , u N ] + ( F a 3 [ B 0 u N , U N B 0 H , u N ] ) H + F b 3 [ u N , I K + B 0 U N B 0 H , u N ]    + F c 3 [ B 0 u N , U N , B 0 u N ] + G 3 [ B 0 u N , u N ] + ( G 3 [ B 0 u N , u N ] ) , H 4 ( p ) = F a 4 [ B 0 u N , U N B 0 H , u N ] + Π ε ¯ ( F a 4 [ B 0 u N , U N B 0 H , u N ] ) Π φ ˜ + F b 4 [ u N , I K + B 0 U N B 0 H , u N ]    + F c 4 [ B 0 u N , U N , B 0 u N ] , H 5 ( p ) = F a 5 [ B 0 u N , U N B 0 H , u N ] + Π ε ¯ ( F a 5 [ B 0 u N , U N B 0 H , u N ] ) + F b 5 [ u N , I K + B 0 U N B 0 H , u N ]    + F c 5 [ B 0 u N , U N , B 0 u N ] + G 1 [ B 0 u N , u N ] + Π ε ¯ ( G 1 [ B 0 u N , u N ] ) , H 6 ( p ) = F a 6 [ B 0 u N , U N B 0 H , u N ] + Π φ ˜ ( F a 6 [ B 0 u N , U N B 0 H , u N ] ) + F b 6 [ u N , I K + B 0 U N B 0 H , u N ]    + F c 6 [ B 0 u N , U N , B 0 u N ] + G 2 [ B 0 u N , u N ] + Π φ ˜ ( G 2 [ B 0 u N , u N ] ) ,
in which { F a k [ , , ] } 1 k 6 , { F b k [ , , ] } 1 k 6 , { F c k [ , , ] } 1 k 6 , and { G k [ , ] } 1 k 3 can be viewed as matrix functions, which are given by
{ F a 1 [ z 1 , Z , z 2 ] = k = 1 K Π ε ¯ ( f 1 [ z 1 , Z i K ( k ) ] ) ( f 1 [ i K ( k ) , z 2 ] ) H , F a 2 [ z 1 , Z , z 2 ] = k = 1 K Π φ ˜ ( f 2 [ z 1 , Z i K ( k ) ] ) ( f 2 [ i K ( k ) , z 2 ] ) H , F a 3 [ z 1 , Z , z 2 ] = k = 1 K ( f 3 [ z 1 , Z i K ( k ) ] ) ( f 3 [ i K ( k ) , z 2 ] ) H , F a 4 [ z 1 , Z , z 2 ] = k = 1 K Π ε ¯ ( ( f 1 [ z 1 , Z i K ( k ) ] ) ( f 2 [ i K ( k ) , z 2 ] ) H + ( f 1 [ i K ( k ) , z 2 ] ) ( f 2 [ z 1 , Z i K ( k ) ] ) H ) , F a 5 [ z 1 , Z , z 2 ] = k = 1 K Π ε ¯ ( ( f 1 [ z 1 , Z i K ( k ) ] ) ( f 3 [ i K ( k ) , z 2 ] ) H + ( f 1 [ i K ( k ) , z 2 ] ) ( f 3 [ z 1 , Z i K ( k ) ] ) H ) , F a 6 [ z 1 , Z , z 2 ] = k = 1 K Π φ ˜ ( ( f 2 [ z 1 , Z i K ( k ) ] ) ( f 3 [ i K ( k ) , z 2 ] ) H + ( f 2 [ i K ( k ) , z 2 ] ) ( f 3 [ z 1 , Z i K ( k ) ] ) H ) , ( z 1 C K × 1 , z 2 C N × 1 , Z C N × K ) ,
{ F b 1 [ z 1 , Z , z 2 ] = k = 1 K f 1 [ Z i K ( k ) , z 1 ] ( f 1 [ i K ( k ) , z 2 ] ) H , F b 2 [ z 1 , Z , z 2 ] = k = 1 K f 2 [ Z i K ( k ) , z 1 ] ( f 2 [ i K ( k ) , z 2 ] ) H , F b 3 [ z 1 , Z , z 2 ] = k = 1 K f 3 [ Z i K ( k ) , z 1 ] ( f 3 [ i K ( k ) , z 2 ] ) H , F b 4 [ z 1 , Z , z 2 ] = k = 1 K ( f 1 [ Z i K ( k ) , z 1 ] ( f 2 [ i K ( k ) , z 2 ] ) H + Π ε ¯ c ( f 1 [ i K ( k ) , z 2 ] ) ( f 2 [ Z i K ( k ) , z 1 ] ) T Π φ ˜ ) , F b 5 [ z 1 , Z , z 2 ] = k = 1 K ( f 1 [ Z i K ( k ) , z 1 ] ( f 3 [ i K ( k ) , z 2 ] ) H + Π ε ¯ c ( f 1 [ i K ( k ) , z 2 ] ) ( f 3 [ Z i K ( k ) , z 1 ] ) T ) , F b 6 [ z 1 , Z , z 2 ] = k = 1 K ( f 2 [ Z i K ( k ) , z 1 ] ( f 3 [ i K ( k ) , z 2 ] ) H + Π φ ˜ ( f 2 [ i K ( k ) , z 2 ] ) ( f 3 [ Z i K ( k ) , z 1 ] ) T ) , ( z 1 C N × 1 , z 2 C N × 1 , Z C K × K ) ,
{ F c 1 [ z 1 , Z , z 2 ] = n = 1 N f 1 [ z 2 , i N ( n ) ] ( f 1 [ z 1 , Z i N ( n ) ] ) H , F c 2 [ z 1 , Z , z 2 ] = n = 1 N f 2 [ z 2 , i N ( n ) ] ( f 2 [ z 1 , Z i N ( n ) ] ) H , F c 3 [ z 1 , Z , z 2 ] = n = 1 N f 3 [ z 2 , i N ( n ) ] ( f 3 [ z 1 , Z i N ( n ) ] ) H , F c 4 [ z 1 , Z , z 2 ] = n = 1 N ( f 1 [ z 2 , i N ( n ) ] ( f 2 [ z 1 , Z i N ( n ) ] ) H + Π ε ¯ c ( f 1 [ z 1 , Z i N ( n ) ] ) ( f 2 [ z 2 , i N ( n ) ] ) T Π φ ˜ c ) , F c 5 [ z 1 , Z , z 2 ] = n = 1 N ( f 1 [ z 2 , i N ( n ) ] ( f 3 [ z 1 , Z i N ( n ) ] ) H + Π ε ¯ c ( f 1 [ z 1 , Z i N ( n ) ] ) ( f 3 [ z 2 , i N ( n ) ] ) T ) , F c 6 [ z 1 , Z , z 2 ] = n = 1 N ( f 2 [ z 2 , i N ( n ) ] ( f 3 [ z 1 , Z i N ( n ) ] ) H + Π φ ˜ c ( f 2 [ z 1 , Z i N ( n ) ] ) ( f 3 [ z 2 , i N ( n ) ] ) T ) , ( z 1 C K × 1 , z 2 C K × 1 , Z C N × N ) ,
{ G 1 [ z 1 , z 2 ] = [ O M N K × L diag [ z 2 1 M K × 1 ] [ A ˙ 1 ( p ) z 1     A ˙ 2 ( p ) z 1         A ˙ L ( p ) z 1 ] ] , G 2 [ z 1 , z 2 ] = [ O M N × L blkdiag [ < z 2 > 1 ( r 1 T I M )     < z 2 > 2 ( r 2 T I M )         < z 2 > N ( r N T I M ) ] × [ A ˙ 1 ( p ) z 1     A ˙ 2 ( p ) z 1         A ˙ L ( p ) z 1 ] ] , G 3 [ z 1 , z 2 ] = 1 2 [ z 1 H A ¨ 11 H ( p ) X ¯ 0 z 2 z 1 H A ¨ 12 H ( p ) X ¯ 0 z 2 z 1 H A ¨ 1 L H ( p ) X ¯ 0 z 2 z 1 H A ¨ 21 H ( p ) X ¯ 0 z 2 z 1 H A ¨ 22 H ( p ) X ¯ 0 z 2 z 1 H A ¨ 2 L H ( p ) X ¯ 0 z 2 z 1 H A ¨ L 1 H ( p ) X ¯ 0 z 2 z 1 H A ¨ L 2 H ( p ) X ¯ 0 z 2 z 1 H A ¨ L L H ( p ) X ¯ 0 z 2 ] , ( z 1 C K × 1 , z 2 C K × 1 ) .
The proof of (39) to (44) is provided in Appendix D. Substituting (36) and (39) back into (34) yields
Ĵ(p̂) = λ_max{B̂^H(p̂) B̂(p̂)} ≈ λ_N + h_1^H(p) ε̄_c + h_2^H(p) φ̃_c + h_3^T(p) p̃ + ε̄_c^H H_1(p) ε̄_c + φ̃_c^H H_2(p) φ̃_c + p̃^T H_3(p) p̃ + ε̄_c^H H_4(p) φ̃_c + ε̄_c^H H_5(p) p̃ + φ̃_c^H H_6(p) p̃.
Evidently, Equation (45) can be considered as the second-order perturbation expression with respect to the error vectors ε ¯ c , φ ˜ c , and p ˜ . From (45), we get the linear relationship between the localization error p ˜ and sensor noise ε ¯ c as well as array model error φ ˜ c . The MSE of the DPD estimator can then be derived according to the statistical assumptions on the two sources of error.

6.3. MSE of Direct Position Determination Method

In light of the maximum principle, the true position p and estimated position p ^ satisfy the relations
∂Ĵ(p)/∂p |_{ε̄ = O_{MNK×1}, φ̃ = O_{MN×1}} = O_{L×1},   ∂Ĵ(p̂)/∂p̂ = O_{L×1}.
Obviously, the first equality in (46) leads to
∂Ĵ(p)/∂p |_{ε̄ = O_{MNK×1}, φ̃ = O_{MN×1}} = h_3(p) = 2Re{f_3[B_0 u_N, u_N]} = O_{L×1}.
Additionally, using (45) and the second equality in (46), the localization error p ˜ = p ^ p is obtained by
p̃ = arg max_{z ∈ R^{L×1}} { h_1^H(p) ε̄_c + h_2^H(p) φ̃_c + h_3^T(p) z + ε̄_c^H H_1(p) ε̄_c + φ̃_c^H H_2(p) φ̃_c + z^T H_3(p) z + ε̄_c^H H_4(p) φ̃_c + ε̄_c^H H_5(p) z + φ̃_c^H H_6(p) z } = arg max_{z ∈ R^{L×1}} { h_3^T(p) z + z^T H_3(p) z + ε̄_c^H H_5(p) z + φ̃_c^H H_6(p) z },
which further implies
p̃ = −(1/2) H_3^{−1}(p) H_5^T(p) ε̄_c − (1/2) H_3^{−1}(p) H_6^T(p) φ̃_c − (1/2) H_3^{−1}(p) h_3(p) = −(1/2) H_3^{−1}(p) H_5^T(p) ε̄_c − (1/2) H_3^{−1}(p) H_6^T(p) φ̃_c = O(||[ε̄^T  φ̃^T]^T||₂).
The second equality in (49) follows from (47). In (49), the linear relationship between the localization error p ˜ and the sensor noise ε ¯ c as well as the array model error φ ˜ c is formulated. It is easily observed from (49) that the positioning error vector p ˜ consists of two terms. The first term is associated with the sensor noise, which can be described as
p̃_1 = −(1/2) H_3^{−1}(p) H_5^T(p) ε̄_c = O(||ε̄||₂).
The second term is due to the array model errors, which can be written as
p̃_2 = −(1/2) H_3^{−1}(p) H_6^T(p) φ̃_c = O(||φ̃||₂).
According to the statistical assumptions in Section 3 and Section 5, it is concluded that the localization error p ˜ is asymptotically Gaussian distributed with a zero mean and a covariance matrix given by
P = E[p̃ p̃^T] = (1/4) H_3^{−1}(p) H_5^T(p) E[ε̄_c ε̄_c^H] H_5(p) H_3^{−T}(p) + (1/4) H_3^{−1}(p) H_6^T(p) E[φ̃_c φ̃_c^H] H_6(p) H_3^{−T}(p),
where the second equality follows from (49) and the fact that ε ¯ c and φ ˜ c are statistically independent. Furthermore, (4), (14), and (23) together imply that
E[ε̄_c ε̄_c^H] = [O_{MNK×MNK}  σ_ε² I_{MNK}; σ_ε² I_{MNK}  O_{MNK×MNK}],
E[φ̃_c φ̃_c^H] = [blkdiag[Φ_1^(1)  Φ_2^(1)  …  Φ_N^(1)]  blkdiag[Φ_1^(2)  Φ_2^(2)  …  Φ_N^(2)]; blkdiag[Φ_1^(2)  Φ_2^(2)  …  Φ_N^(2)]  blkdiag[Φ_1^(1)  Φ_2^(1)  …  Φ_N^(1)]].
Inserting (53) back into (52) leads to
P = (1/4) H_3^{−1}(p) H_5^T(p) [O_{MNK×MNK}  σ_ε² I_{MNK}; σ_ε² I_{MNK}  O_{MNK×MNK}] H_5(p) H_3^{−T}(p) + (1/4) H_3^{−1}(p) H_6^T(p) [blkdiag[Φ_1^(1)  …  Φ_N^(1)]  blkdiag[Φ_1^(2)  …  Φ_N^(2)]; blkdiag[Φ_1^(2)  …  Φ_N^(2)]  blkdiag[Φ_1^(1)  …  Φ_N^(1)]] H_6(p) H_3^{−T}(p).
From (54) we see that the covariance matrix P is composed of two parts. The first part, due to the sensor noises, is expressed as
P_1 = (1/4) H_3^{−1}(p) H_5^T(p) [O_{MNK×MNK}  σ_ε² I_{MNK}; σ_ε² I_{MNK}  O_{MNK×MNK}] H_5(p) H_3^{−T}(p).
The second part, due to the array model errors, is given by
P_2 = (1/4) H_3^{−1}(p) H_6^T(p) [blkdiag[Φ_1^(1)  …  Φ_N^(1)]  blkdiag[Φ_1^(2)  …  Φ_N^(2)]; blkdiag[Φ_1^(2)  …  Φ_N^(2)]  blkdiag[Φ_1^(1)  …  Φ_N^(1)]] H_6(p) H_3^{−T}(p).
Remark 1.
It is evident that the trace of P can be viewed as the MSE of localization errors under the combined effects of sensor noise and array model errors.
Remark 2.
When Φ_n^(1) → O and Φ_n^(2) → O, the trace of P can be viewed as the MSE of the localization errors when no array model errors are present. Moreover, the trace of P then approaches the CRB for the case of no array model errors, as will be shown in Section 8.1. This is because the DPD method studied here is derived from the maximum likelihood (ML) criterion, which provides an asymptotically efficient solution.
Remark 3.
When σ_ε² → 0, the trace of P can be used to quantify the sensitivity of the positioning accuracy to array model errors, and it represents the additional estimation error resulting from uncertainties in the array manifold.
Remark 4.
It is easily seen from (55) and (56) that both P_1 and P_2 rely on the matrix H_3(p), which is the p-corner of the Hessian matrix of the cost function. If this matrix has large eigenvalues, the positioning accuracy tends to be high; conversely, if it is nearly singular, the location error may be extremely large.
Remark 5.
From (54), it is observed that covariance matrix P is related to H 3 ( p ) , H 5 ( p ) , and H 6 ( p ) . According to (38) and (40)–(44), the ijth element of matrix H 3 ( p ) is given by
< H 3 ( p ) > i j = u N H ( X ¯ 0 H A ˙ i ( p ) B 0 U N X ¯ 0 H A ˙ j ( p ) B 0 + B 0 H A ˙ i H ( p ) X ¯ 0 U N B 0 H A ˙ j H ( p ) X ¯ 0 + B 0 H A ˙ j H ( p ) X ¯ 0 U N X ¯ 0 H A ˙ i ( p ) B 0 + X ¯ 0 H A ˙ i ( p ) ( I K + B 0 U N B 0 H ) A ˙ j H ( p ) X ¯ 0 ) u N + Re { u N H B 0 H A ¨ i j H ( p ) X ¯ 0 u N } .
In addition, the expressions for the matrices H_5(p) and H_6(p) can be obtained from (38) and (40)–(44); the two formulas are lengthy and are omitted here to save space.
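Once H_3(p), H_5(p), and H_6(p) have been evaluated, assembling the theoretical covariance is a one-liner per term. The sketch below simply transcribes (54)–(56) as written above; the inputs are assumed to be precomputed, and the error covariances are those of (53).

```python
import numpy as np

def localization_covariance(H3, H5, H6, C_eps, C_phi):
    """P = P1 + P2 per Equations (54)-(56).

    H3, H5, H6 : precomputed H3(p), H5(p), H6(p) from Equation (40)  [assumed inputs]
    C_eps      : E[eps_c eps_c^H]  from Equation (53)
    C_phi      : E[phi_c phi_c^H]  from Equation (53)
    """
    H3inv = np.linalg.inv(H3)
    P1 = 0.25 * H3inv @ H5.T @ C_eps @ H5 @ H3inv.T   # sensor-noise part, Equation (55)
    P2 = 0.25 * H3inv @ H6.T @ C_phi @ H6 @ H3inv.T   # array-model-error part, Equation (56)
    P = np.real(P1 + P2)                              # the localization covariance is real
    return P, np.trace(P)                             # trace(P) is the theoretical MSE (Remark 1)
```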

7. Success Probability of Direct Position Determination Method in Presence of Array Model Errors

The aim of this section is to deduce the success probability (SP) of the DPD method when array model errors exist. Two quantitative criterions are introduced to justify whether the localization is successful. Additionally, two analytical expressions for the SP of positioning are derived.

7.1. The First Success Probability of Direct Position Determination

Definition 1.
If the condition “|<p̃>_1| ≤ Δ_1, |<p̃>_2| ≤ Δ_2, …, |<p̃>_L| ≤ Δ_L” is satisfied, then the localization is successful.
It must be emphasized that the set of parameters {Δ_l}_{1≤l≤L} in Definition 1 should be chosen appropriately for the practical scenario. Differences among these parameters reflect the relative importance of localization accuracy in different directions; if every direction is equally important, the parameters can be set to the same value.
According to Definition 1, the joint probability density function of positioning error vector p ˜ is required for the calculation of the first localization SP. Applying the results in Section 6.3, the probability density function of random vector p ˜ is given by
f_{p̃}(z) = (2π)^{−L/2} |det[P]|^{−1/2} exp{−z^T P^{−1} z / 2}.
Consequently, the first localization SP can be determined by
Pr{|<p̃>_1| ≤ Δ_1, |<p̃>_2| ≤ Δ_2, …, |<p̃>_L| ≤ Δ_L} = ∫_{−Δ_L}^{Δ_L} … ∫_{−Δ_2}^{Δ_2} ∫_{−Δ_1}^{Δ_1} (2π)^{−L/2} |det[P]|^{−1/2} exp{−z^T P^{−1} z / 2} dz_1 dz_2 … dz_L.
It is apparent from (59) that the first SP can be approximately obtained via numerical integration over a cube in high dimensional Euclidean space.
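For low dimensions the integral in (59) can be evaluated directly with standard multivariate-normal routines; the sketch below does this with SciPy and inclusion-exclusion over the corners of the cube (an illustration of the numerical route, not the analytical formulas derived next).

```python
import numpy as np
from scipy.stats import multivariate_normal

def first_success_probability(P, deltas):
    """Pr{|<p_tilde>_l| <= Delta_l for all l} from Equation (59), with p_tilde ~ N(0, P)."""
    L = len(deltas)
    mvn = multivariate_normal(mean=np.zeros(L), cov=P)
    deltas = np.asarray(deltas, dtype=float)
    prob = 0.0
    # Inclusion-exclusion over the 2^L corners of the cube [-Delta_1, Delta_1] x ... x [-Delta_L, Delta_L]
    for corner in range(2 ** L):
        signs = np.array([1.0 if (corner >> l) & 1 else -1.0 for l in range(L)])
        prob += np.prod(signs) * mvn.cdf(signs * deltas)
    return prob
```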
However, the high-dimensional numerical integration is not attractive from a computational viewpoint. If possible, it is preferable to get an explicit formula. Obviously, this is a non-trivial task and we only consider two-dimensional (2-D) localization scenarios (i.e., L = 2 ) for simplicity of mathematical analysis. First, an explicit formula with which to evaluate the joint probability of the Gaussian distribution is formally concluded in a proposition as below.
Proposition 2.
Consider two joint Gaussian random variables z 1 and z 2 . The mean and variance of z 1 are m 1 and v 11 , respectively. The mean and variance of z 2 are m 2 and v 22 , respectively. In addition, the covariance of the two random variables is v 12 . It follows that
Pr { z 1 α 1 , z 2 α 2 } = Γ 0 [ α 10 / v 11 ] Γ 0 [ ( α 2 E [ z ¯ 2 ] ) / var [ z ¯ 2 ] ] ,
where
{ E [ z ¯ 2 ] = m 2 v 12 exp { α 10 2 / ( 2 v 11 ) } 2 π v 11 Γ 0 [ α 10 / v 11 ] var [ z ¯ 2 ] = v 22 v 12 2 2 π v 11 Γ 0 [ α 10 / v 11 ] ( α 10 exp { α 10 2 / ( 2 v 11 ) } v 11 + exp { α 10 2 / v 11 } 2 π Γ 0 [ α 10 / v 11 ] )
with α_10 = α_1 − m_1 and Γ_0[x] = ∫_x^{+∞} (1/√(2π)) exp{−t²/2} dt.
Appendix E shows the proof of Proposition 2, which is along the lines of incomplete conditional moments theory presented in [46]. Note that Proposition 2 plays a significant role in the subsequent derivation process.
When L = 2 , it can be verified by algebraic manipulation that
Pr{|<p̃>_1| ≤ Δ_1, |<p̃>_2| ≤ Δ_2} = Pr{−Δ_1 ≤ <p̃>_1 ≤ Δ_1} − Pr{<p̃>_2 ≥ Δ_2} − Pr{<p̃>_2 ≤ −Δ_2} + Pr{<p̃>_1 ≥ Δ_1, <p̃>_2 ≥ Δ_2} + Pr{<p̃>_1 ≥ Δ_1, <p̃>_2 ≤ −Δ_2} + Pr{<p̃>_1 ≤ −Δ_1, <p̃>_2 ≥ Δ_2} + Pr{<p̃>_1 ≤ −Δ_1, <p̃>_2 ≤ −Δ_2}.
The proof of (62) is shown in Appendix F. Applying the result in Proposition 2 and the definition of Γ 0 [ x ] , we have
Pr { | < p ˜ > 1 | Δ 1 , | < p ˜ > 2 | Δ 2 } = Γ 0 [ Δ 1 / < P > 11 ] Γ 0 [ Δ 1 / < P > 11 ] 2 Γ 0 [ Δ 2 / < P > 22 ] + 2 Γ 0 [ Δ 1 / < P > 11 ] ( Γ 0 [ ( Δ 2 + κ 1 ) / κ 2 ] + Γ 0 [ ( Δ 2 κ 1 ) / κ 2 ] ) ,
where
{ κ 1 = < P > 12 exp { Δ 1 2 / ( 2 < P > 11 ) } 2 π < P > 11 Γ 0 [ Δ 1 / < P > 11 ] , κ 2 = < P > 22 ( < P > 12 ) 2 2 π < P > 11 Γ 0 [ Δ 1 / < P > 11 ] ( Δ 1 exp { Δ 1 2 / ( 2 < P > 11 ) } < P > 11 + exp { Δ 1 2 / < P > 11 } 2 π Γ 0 [ Δ 1 / < P > 11 ] ) .
Remark 6.
The value of Γ 0 [ x ] for arbitrary x is available from a table given in a textbook on probability theory.
Remark 7.
It must be pointed out that the above analytical results cannot be directly applied to the three-dimensional (3-D) case; i.e., L = 3 . This can even be regarded as an open problem. Nevertheless, we can use numerical methods to compute this kind of SP in 3-D space. Indeed, there exist a number of efficient numerical integration methods with which to calculate the probability in (59), such as the Richardson extrapolation algorithm, Simpson algorithm, and Monte Carlo algorithm.

7.2. The Second Success Probability of Direct Position Determination

Definition 2.
If the condition “√((1/L) Σ_{l=1}^{L} <p̃>_l²) ≤ Δ” is satisfied, then the localization is successful.
It is readily seen from Definition 2 that the second SP of positioning equals Pr{||p̃||²₂ ≤ LΔ²}. To proceed, let us express p̃ as p̃ =_d P^{1/2} p̃_0, where p̃_0 is a zero-mean Gaussian random vector with covariance matrix I_L, and =_d indicates that both sides have the same probability distribution. Consequently, ||p̃||²₂ can be formulated as a quadratic form in p̃_0:
||p̃||²₂ =_d p̃_0^T P p̃_0.
In light of the relationship between the cumulative distribution function and characteristic function [64], we have
Pr{||p̃||²₂ ≤ LΔ²} = 1/2 − (1/π) ∫_0^{+∞} (1/t) Im{exp{−jLΔ²t} φ_{||p̃||²₂}(t)} dt,
where φ | | p ˜ | | 2 2 ( t ) denotes the characteristic function of | | p ˜ | | 2 2 . Suppose that matrix P has eigenvalues γ 1 , γ 2 , , γ L . Applying the property of the characteristic function, it can be proved that
φ_{||p̃||²₂}(t) = Π_{l=1}^{L} (1 + 4γ_l² t²)^{−1/4} exp{j arctan(2γ_l t)/2}.
The substitution of (67) into (66) produces
Pr{||p̃||²₂ ≤ LΔ²} = 1/2 − (1/π) ∫_0^{+∞} (1/t) · sin(δ_1(t))/δ_2(t) dt,
where
δ_1(t) = Σ_{l=1}^{L} arctan(2γ_l t)/2 − LΔ² t,   δ_2(t) = Π_{l=1}^{L} (1 + 4γ_l² t²)^{1/4}.
Remark 8.
It is clear from (68) that a one-dimensional numerical integration over [0, +∞) is required to evaluate the second SP. To this end, the behavior of the integrand should be analyzed as t → 0 and t → +∞.
Remark 9.
Applying L’Hospital’s rule leads to
lim_{t→0} sin{δ_1(t)}/(t δ_2(t)) = lim_{t→0} cos{δ_1(t)} δ̇_1(t)/(δ_2(t) + t δ̇_2(t)) = δ̇_1(0) = Σ_{l=1}^{L} γ_l − LΔ².
Remark 10.
The numerator of the integrand is bounded and the denominator tends to infinity as t → +∞; therefore, the integrand becomes arbitrarily close to zero as t → +∞. The upper integration limit in (68) can then be replaced by a sufficiently large positive number for the sake of simplicity.
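Following Remarks 8–10, the second SP can be evaluated with a truncated one-dimensional quadrature. The sketch below uses a plain trapezoidal rule; the truncation point and grid size are tuning choices of this illustration, not values taken from the paper.

```python
import numpy as np

def second_success_probability(P, delta, t_max=200.0, n_pts=200_000):
    """Pr{||p_tilde||^2 <= L*delta^2} via the one-dimensional integral of Equation (68)."""
    gam = np.linalg.eigvalsh(P)                 # eigenvalues gamma_1..gamma_L of P
    L = gam.size
    thr = L * delta ** 2
    t = np.linspace(1e-9, t_max, n_pts)
    d1 = 0.5 * np.sum(np.arctan(2.0 * np.outer(t, gam)), axis=1) - thr * t      # delta_1(t)
    d2 = np.prod((1.0 + 4.0 * np.outer(t ** 2, gam ** 2)) ** 0.25, axis=1)      # delta_2(t)
    integral = np.trapz(np.sin(d1) / (t * d2), t)
    return 0.5 - integral / np.pi
```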
Remark 11.
It can be rigorously proved that the first SP is always smaller than the second SP, provided that Δ_1 = Δ_2 = … = Δ_L = Δ. The reason is that the first probability is computed by numerical integration over a cube, while the second probability equals the integral over the circumscribed sphere of that cube.
As a byproduct of (68), we can present a new method of determining the radius of the circular error probable (CEP), which was first defined in [65]. Denoting the radius of the CEP by r_CEP, it follows from its definition and (68) that
1/2 = Pr{||p̃||²₂ ≤ r_CEP²} = 1/2 − (1/π) ∫_0^{+∞} (1/t) · sin(Σ_{l=1}^{L} arctan(2γ_l t)/2 − r_CEP² t) / Π_{l=1}^{L} (1 + 4γ_l² t²)^{1/4} dt,
which implies that
∫_0^{+∞} (1/t) · sin(Σ_{l=1}^{L} arctan(2γ_l t)/2 − r_CEP² t) / Π_{l=1}^{L} (1 + 4γ_l² t²)^{1/4} dt = 0.
As a consequence, a reasonable criterion for calculating r CEP is given by
min_x ( ∫_0^{+∞} (1/t) · sin(Σ_{l=1}^{L} arctan(2γ_l t)/2 − x² t) / Π_{l=1}^{L} (1 + 4γ_l² t²)^{1/4} dt )²,
which can be solved via a one-dimensional grid search. In addition, it is noteworthy that although the solution for estimating r CEP is presented in [65], it is only applicable to 2-D localization scenarios. In contrast, the method proposed here is suitable for not only 2-D localization but also the 3-D scenario.
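A corresponding sketch of the CEP criterion (73) is given below: the integral of (72) is evaluated on a fixed t-grid and the candidate radius that drives it closest to zero is returned. The grid parameters are again illustrative choices.

```python
import numpy as np

def cep_radius(P, r_grid, t_max=200.0, n_pts=200_000):
    """Radius of circular error probable via the one-dimensional search of Equation (73)."""
    gam = np.linalg.eigvalsh(P)
    t = np.linspace(1e-9, t_max, n_pts)
    base = 0.5 * np.sum(np.arctan(2.0 * np.outer(t, gam)), axis=1)              # sum of arctan terms
    d2 = np.prod((1.0 + 4.0 * np.outer(t ** 2, gam ** 2)) ** 0.25, axis=1)

    def integral(r):
        return np.trapz(np.sin(base - r ** 2 * t) / (t * d2), t)                # Equation (72)

    return min(r_grid, key=lambda r: integral(r) ** 2)
```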

8. Cramér-Rao Bound on Covariance Matrix of Localization Errors

The CRB is a commonly used lower bound on the estimation error covariance of any unbiased estimator. In other words, the difference between the covariance and the CRB is a positive semi-definite matrix. Moreover, the CRB is expected to be a good predictor for the performance of the maximum likelihood estimator (MLE) at a moderate noise level. In this section, we derive the CRB for the estimate of the transmitter’s position in two cases: (1) array model errors are absent and (2) array model errors are present. To this end, we first introduce the following proposition whose proof can be found in [66].
Proposition 3.
Assume that the CRB matrix for the real vector z is CRB(z), and define a new real vector z′ = Jz, where J is an invertible matrix. Then the CRB matrix for z′ is given by CRB(z′) = J CRB(z) J^T.

8.1. Cramér-Rao Bound on Position Estimate in Absence of Array Model Errors

This subsection is devoted to deriving the CRB for localization in the absence of array model errors. We begin by introducing a parameter vector that gathers all unknowns
η_a = [σ_ε²  p^T  (Re{β})^T  (Im{β})^T  (Re{s̄})^T  (Im{s̄})^T]^T = [σ_ε²  μ_a^T]^T,
where
μ_a = [p^T  β_r^T  (Re{s̄})^T  (Im{s̄})^T]^T
with β_r = [(Re{β})^T  (Im{β})^T]^T. To proceed, the data vector is defined as
x ¯ = [ x ¯ 1 H     x ¯ 2 H         x ¯ N H ] H = [ x ¯ 1 , 1 H     x ¯ 1 , 2 H         x ¯ 1 , K H     x ¯ 2 , 1 H     x ¯ 2 , 2 H         x ¯ 2 , K H             x ¯ N , 1 H     x ¯ N , 2 H         x ¯ N , K H ] H ,
whose mean vector is given by
x ¯ 0 = E [ x ¯ ] = [ x ¯ 1 , 1 , 0 H     x ¯ 1 , 2 , 0 H         x ¯ 1 , K , 0 H     x ¯ 2 , 1 , 0 H     x ¯ 2 , 2 , 0 H         x ¯ 2 , K , 0 H         x ¯ N , 1 , 0 H     x ¯ N , 2 , 0 H         x ¯ N , K , 0 H ] H .
Then, applying the results in [66,67], the CRB matrix for vector μ a can be obtained by
CRB(μ_a) = (σ_ε²/2) (Re{Ω_{μ_a}^H Ω_{μ_a}})^{−1},
where
Ω_{μ_a} = ∂x̄_0/∂μ_a^T = [Ω_p  Ω_{Re{β}}  Ω_{Im{β}}  Ω_{Re{s̄}}  Ω_{Im{s̄}}].
Using (16) and (77) and performing algebraic manipulations, the sub-matrices in (79) are described as
{ Ω p = x ¯ 0 p T = diag [ β 1 M K × 1 ] diag [ 1 N × 1 s ¯ 1 M × 1 ] a ( p ) p T , Ω Re { β } = x ¯ 0 ( Re { β } ) T = diag [ 1 N × 1 s ¯ 1 M × 1 ] blkdiag [ a 1 ( p )    a 2 ( p )       a N ( p ) ] , Ω Im { β } = x ¯ 0 ( Im { β } ) T = j x ¯ 0 ( Re { β } ) T = j diag [ 1 N × 1 s ¯ 1 M × 1 ] blkdiag [ a 1 ( p )    a 2 ( p )       a N ( p ) ] , Ω Re { s ¯ } = x ¯ 0 ( Re { s ¯ } ) T = diag [ β 1 M K × 1 ] A ( p ) , Ω Im { s ¯ } = x ¯ 0 ( Im { s ¯ } ) T = j x ¯ 0 ( Re { s ¯ } ) T = j diag [ β 1 M K × 1 ] A ( p ) ,
where
{ a ( p ) = [ a 1 H ( p )    a 2 H ( p )       a N H ( p ) ] H , a ( p ) p T = [ ( a 1 ( p ) p T ) H    ( a 2 ( p ) p T ) H       ( a N ( p ) p T ) H ] H , a n ( p ) = [ a n , 1 H ( p )    a n , 2 H ( p )       a n , K H ( p ) ] H = A n ( p ) 1 K × 1 ( 1 n N ) .
Note that only the p corner of the CRB matrix is of interest here. However, it is easily observed from (78) that matrix CRB ( μ a ) does not exhibit a block-diagonal structure, because there might be correlation between the parameters. Hence, it is somewhat difficult to obtain the CRB for position vector p . To overcome this difficulty, we adopt the idea of [59,67] to redefine a parameter vector whose CRB matrix becomes block-diagonal. The new parameter vector is defined as
μ ¯ a = [ p T    β r T    ( Re { s ¯ } + Re { W 1 } p + Re { W 2 } β r ) T    ( Im { s ¯ } + Im { W 1 } p + Im { W 2 } β r ) T ] T ,
where
W_1 = Ω_{Re{s̄}}^† Ω_p,   W_2 = Ω_{Re{s̄}}^† [Ω_{Re{β}}  Ω_{Im{β}}]   (where (·)^† denotes the Moore–Penrose pseudo-inverse).
It is worth highlighting that because the vector μ̄_a includes the source location parameters, it is meaningful to derive the CRB matrix for μ̄_a. In addition, there is a one-to-one mapping between the new vector μ̄_a and the old vector μ_a, and the relationship between them can be written in matrix form as
μ ¯ a = J μ a = [ I O O O O I O O Re { W 1 } Re { W 2 } I O Im { W 1 } Im { W 2 } O I ] μ a ,
where
J = [ I O O O O I O O Re { W 1 } Re { W 2 } I O Im { W 1 } Im { W 2 } O I ] .
Then, combining the results in Proposition 3 and (84), the CRB matrix for μ ¯ a is given by
CRB(μ̄_a) = J CRB(μ_a) J^T = (σ_ε²/2) (Re{(Ω_{μ_a} J^{−1})^H (Ω_{μ_a} J^{−1})})^{−1},
where
J^{−1} = [I  O  O  O; O  I  O  O; −Re{W_1}  −Re{W_2}  I  O; −Im{W_1}  −Im{W_2}  O  I].
Combining (79), (83), and (87) leads to the orthogonal projection matrix
Ω_{μ_a} J^{−1} = [Τ[Ω_{Re{s̄}}] Ω_p   Τ[Ω_{Re{s̄}}] [Ω_{Re{β}}  Ω_{Im{β}}]   Ω_{Re{s̄}}   jΩ_{Re{s̄}}],
where
Τ[Ω_{Re{s̄}}] = I − Ω_{Re{s̄}} Ω_{Re{s̄}}^† = I − Ω_{Re{s̄}} (Ω_{Re{s̄}}^H Ω_{Re{s̄}})^{−1} Ω_{Re{s̄}}^H.
Inserting (88) back into (86) gives
CRB(μ̄_a) = (σ_ε²/2) [V_1  O; O  V_2]^{−1},
where
{ V 1 = [ Re { Ω p H Τ [ Ω Re { s ¯ } ] Ω p } Re { Ω p H Τ [ Ω Re { s ¯ } ] [ Ω Re { β }    Ω Im { β } ] } Re { [ Ω Re { β }    Ω Im { β } ] H Τ [ Ω Re { s ¯ } ] Ω p } Re { [ Ω Re { β }    Ω Im { β } ] H Τ [ Ω Re { s ¯ } ] [ Ω Re { β }    Ω Im { β } ] } ] , V 2 = [ Re { Ω Re { s ¯ } H Ω Re { s ¯ } } Im { Ω Re { s ¯ } H Ω Re { s ¯ } } Im { Ω Re { s ¯ } H Ω Re { s ¯ } } Re { Ω Re { s ¯ } H Ω Re { s ¯ } } ] .
We define three matrices
{ V 1 , 1 = Ω p H Ω p Ω p H Ω Re { s ¯ } ( Ω Re { s ¯ } H Ω Re { s ¯ } ) 1 Ω Re { s ¯ } H Ω p , V 1 , 2 = [ 1    j ] ( Ω p H Ω Re { β } Ω p H Ω Re { s ¯ } ( Ω Re { s ¯ } H Ω Re { s ¯ } ) 1 Ω Re { s ¯ } H Ω Re { β } ) , V 1 , 3 = [ 1 j j 1 ] ( Ω Re { β } H Ω Re { β } Ω Re { β } H Ω Re { s ¯ } ( Ω Re { s ¯ } H Ω Re { s ¯ } ) 1 Ω Re { s ¯ } H Ω Re { β } ) .
The details of calculating the matrices in (92) are provided in Appendix G. Invoking the partitioned matrix inversion formula, the CRB matrix for position vector p is given by
CRB(p) = (σ_ε²/2) ( (Re{V_{1,1}})^{−1} + (Re{V_{1,1}})^{−1} Re{V_{1,2}} (Re{V_{1,3}} − Re{V_{1,2}^H} (Re{V_{1,1}})^{−1} Re{V_{1,2}})^{−1} Re{V_{1,2}^H} (Re{V_{1,1}})^{−1} ).
Remark 12.
The diagonal elements of CRB ( p ) give the bounds for the estimation variance of the components in p when the array manifold is perfectly calibrated.
Remark 13.
The trace of CRB ( p ) is the bound for the localization MSE in the absence of array model errors.
Remark 14.
Although there is no rigorous proof, it is expected that the trace of P 1 is asymptotically close to that of CRB ( p ) . The reason for this is that the least square estimator in (5) is equivalent to the MLE, which is statistically efficient under the Gaussian noise model.
Remark 15.
By comparing the trace of CRB ( p ) with that of P , we can assess the expected degradation of the emitter location accuracy with respect to the amount of array model error. If the difference is significant, it can be concluded that the DPD method in [29] is sensitive to array model errors.
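Numerically, the p-corner of the CRB in the error-free case is most easily obtained by forming the full Fisher information of (78) and inverting it, which is equivalent to the partitioned expression (93). The sketch below assumes the Jacobian blocks of (79)–(80) have already been evaluated; it is an illustration rather than the authors' implementation.

```python
import numpy as np

def crb_position(sigma_eps, Omega_p, Omega_rest):
    """p-corner of CRB(mu_a) in Equation (78); equivalent to Equation (93).

    Omega_p    : (M*N*K, L) derivative of x_bar_0 with respect to p
    Omega_rest : remaining Jacobian columns [Omega_Re{beta}, Omega_Im{beta},
                 Omega_Re{s_bar}, Omega_Im{s_bar}] stacked side by side
    """
    Omega = np.hstack([Omega_p, Omega_rest])
    fim = (2.0 / sigma_eps ** 2) * np.real(Omega.conj().T @ Omega)   # Fisher information matrix
    L = Omega_p.shape[1]
    return np.linalg.inv(fim)[:L, :L]                                # CRB(p): top-left L x L block
```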

8.2. Cramér-Rao Bound on Position Estimate in Presence of Array Model Errors

The goal of this subsection is to derive the CRB for the position estimate in the presence of array uncertainties. Because in this case the full parameter set contains both the deterministic parameters p, β, s̄, and σ_ε² and the stochastic parameter φ̃, the CRB derivation should follow the Bayesian framework [68,69,70]; note that the CRB derivation can also accommodate stochastic parameters, as in [68,69,70]. To this end, a new parameter vector comprising all the deterministic and stochastic unknowns is introduced
η b = [ σ ε 2    p T    ( Re { β } ) T    ( Im { β } ) T    ( Re { s ¯ } ) T    ( Im { s ¯ } ) T    ( Re { φ ˜ } ) T    ( Im { φ ˜ } ) T ] T = [ σ ε 2    μ b T ] T ,
where
μ b = [ p T    ( Re { β } ) T    ( Im { β } ) T    ( Re { s ¯ } ) T    ( Im { s ¯ } ) T    ( Re { φ ˜ } ) T    ( Im { φ ˜ } ) T ] T .
By performing similar algebraic manipulation in [68,69], the CRB matrix for vector μ b is formulated as
CRB(μ_b) = ( (2/σ_ε²) Re{Ω_{μ_b}^H Ω_{μ_b}} + [O  O; O  Φ^{−1}] )^{−1},
where
Ω μ b = x ¯ 0 μ b T = [ Ω p    Ω Re { β }    Ω Im { β }    Ω Re { s ¯ }    Ω Im { s ¯ }    Ω Re { φ ˜ }    Ω Im { φ ˜ } ] ,
Φ = E[ [Re{φ̃}^T  Im{φ̃}^T]^T [Re{φ̃}^T  Im{φ̃}^T] ] = (1/2) [ blkdiag[Re{Φ_1^(1)+Φ_1^(2)}  …  Re{Φ_N^(1)+Φ_N^(2)}]   blkdiag[Im{Φ_1^(1)−Φ_1^(2)}  …  Im{Φ_N^(1)−Φ_N^(2)}] ; blkdiag[Im{Φ_1^(1)+Φ_1^(2)}  …  Im{Φ_N^(1)+Φ_N^(2)}]   blkdiag[Re{Φ_1^(2)−Φ_1^(1)}  …  Re{Φ_N^(2)−Φ_N^(1)}] ].
Note that (98) comes from the statistical assumption in (14). Appendix H provides the proof of (96).
Owing to the second term in the bracket of (96), it is impossible to get a CRB matrix with block diagonality as in (90) by linear transformation. As a result, the CRB matrix for position estimation can only be obtained from (96), although it may be computationally complex. Meanwhile, because the expressions for matrices Ω p , Ω Re { β } , Ω Im { β } , Ω Re { s ¯ } , and Ω Im { s ¯ } are given in (80), here we only need to deduce the expressions for matrices Ω Re { φ ˜ } and Ω Im { φ ˜ } . Applying (16) and (77) and performing algebraic manipulations gives
Ω_{Re{φ̃}} = ∂x̄_0/∂(Re{φ̃})^T = diag[β ⊗ 1_{MK×1}] blkdiag[s̄_1 ⊗ I_M   s̄_2 ⊗ I_M   …   s̄_N ⊗ I_M],
Ω_{Im{φ̃}} = ∂x̄_0/∂(Im{φ̃})^T = j ∂x̄_0/∂(Re{φ̃})^T = j diag[β ⊗ 1_{MK×1}] blkdiag[s̄_1 ⊗ I_M   s̄_2 ⊗ I_M   …   s̄_N ⊗ I_M].
Substituting (97) and (98) into (96) leads to
CRB ( μ b ) = ( 2 σ ε 2 Re { Ω p H Ω p Ω p H Ω Re { β } Ω p H Ω Im { β } Ω p H Ω Re { s ¯ } Ω p H Ω Im { s ¯ } Ω p H Ω Re { φ ˜ } Ω p H Ω Im { φ ˜ } Ω Re { β } H Ω p Ω Re { β } H Ω Re { β } Ω Re { β } H Ω Im { β } Ω Re { β } H Ω Re { s ¯ } Ω Re { β } H Ω Im { s ¯ } Ω Re { β } H Ω Re { φ ˜ } Ω Re { β } H Ω Im { φ ˜ } Ω Im { β } H Ω p Ω Im { β } H Ω Re { β } Ω Im { β } H Ω Im { β } Ω Im { β } H Ω Re { s ¯ } Ω Im { β } H Ω Im { s ¯ } Ω Im { β } H Ω Re { φ ˜ } Ω Im { β } H Ω Im { φ ˜ } Ω Re { s ¯ } H Ω p Ω Re { s ¯ } H Ω Re { β } Ω Re { s ¯ } H Ω Im { β } Ω Re { s ¯ } H Ω Re { s ¯ } Ω Re { s ¯ } H Ω Im { s ¯ } Ω Re { s ¯ } H Ω Re { φ ˜ } Ω Re { s ¯ } H Ω Im { φ ˜ } Ω Im { s ¯ } H Ω p Ω Im { s ¯ } H Ω Re { β } Ω Im { s ¯ } H Ω Im { β } Ω Im { s ¯ } H Ω Re { s ¯ } Ω Im { s ¯ } H Ω Im { s ¯ } Ω Im { s ¯ } H Ω Re { φ ˜ } Ω Im { s ¯ } H Ω Im { φ ˜ } Ω Re { φ ˜ } H Ω p Ω Re { φ ˜ } H Ω Re { β } Ω Re { φ ˜ } H Ω Im { β } Ω Re { φ ˜ } H Ω Re { s ¯ } Ω Re { φ ˜ } H Ω Im { s ¯ } Ω Re { φ ˜ } H Ω Re { φ ˜ } Ω Re { φ ˜ } H Ω Im { φ ˜ } Ω Im { φ ˜ } H Ω p Ω Im { φ ˜ } H Ω Re { β } Ω Im { φ ˜ } H Ω Im { β } Ω Im { φ ˜ } H Ω Re { s ¯ } Ω Im { φ ˜ } H Ω Im { s ¯ } Ω Im { φ ˜ } H Ω Re { φ ˜ } Ω Im { φ ˜ } H Ω Im { φ ˜ } } + [ O O O Φ 1 ] ) 1 = ( 2 σ ε 2 Re { Z 1 Z 2 Z 2 H Z 3 } + [ O O O Φ 1 ] ) 1 ,
where
{ Z 1 = Ω p H Ω p , Z 2 = [ [ 1    j ] ( Ω p H Ω Re { β } ) [ 1    j ] ( Ω p H Ω Re { s ¯ } ) [ 1    j ] ( Ω p H Ω Re { φ ˜ } ) ] , Z 3 = [ [ 1 j j 1 ] ( Ω Re { β } H Ω Re { β } ) [ 1 j j 1 ] ( Ω Re { β } H Ω Re { s ¯ } ) [ 1 j j 1 ] ( Ω Re { β } H Ω Re { φ ˜ } ) [ 1 j j 1 ] ( Ω Re { s ¯ } H Ω Re { β } ) [ 1 j j 1 ] ( Ω Re { s ¯ } H Ω Re { s ¯ } ) [ 1 j j 1 ] ( Ω Re { s ¯ } H Ω Re { φ ˜ } ) [ 1 j j 1 ] ( Ω Re { φ ˜ } H Ω Re { β } ) [ 1 j j 1 ] ( Ω Re { φ ˜ } H Ω Re { s ¯ } ) [ 1 j j 1 ] ( Ω Re { φ ˜ } H Ω Re { φ ˜ } ) ] .
The details of calculating the matrices in (101) appear in Appendix I. Through the application of the partitioned matrix inversion formula, the CRB matrix for position vector p is given by
CRB e ( p ) = σ ε 2 2 ( ( Re { Z 1 } ) 1 + ( Re { Z 1 } ) 1 Re { Z 2 } ( Re { Z 3 } Re { Z 2 H } ( Re { Z 1 } ) 1 Re { Z 2 } + [ O O O σ ε 2 Φ 1 / 2 ] ) 1 × Re { Z 2 H } ( Re { Z 1 } ) 1 ) .
Note that the subscript “e” in (102) distinguishes this matrix from CRB ( p ) , which corresponds to the case where the array manifold is accurately known.
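The partitioned-matrix inversion behind (102) can be checked numerically. The sketch below (again with placeholder matrices, not the paper's model) splits a random Re { Ω H Ω } into blocks playing the roles of Re { Z 1 } , Re { Z 2 } , and Re { Z 3 } , evaluates the Schur-complement expression of (102), and verifies that it coincides with the position block of the inverse of the full hybrid FIM.

import numpy as np

rng = np.random.default_rng(1)
sigma_eps2 = 0.01
L_pos, n_rest, n_phi, n_obs = 2, 12, 6, 60            # placeholder sizes (position dim., remaining real parameters, error parameters, data length)

Omega = (rng.standard_normal((n_obs, L_pos + n_rest))
         + 1j * rng.standard_normal((n_obs, L_pos + n_rest)))
G = np.real(Omega.conj().T @ Omega)                   # plays the role of Re{Omega^H Omega}
Z1, Z2, Z3 = G[:L_pos, :L_pos], G[:L_pos, L_pos:], G[L_pos:, L_pos:]

B = rng.standard_normal((n_phi, n_phi))
Phi = 0.01 * (B @ B.T + n_phi * np.eye(n_phi))
prior = np.zeros((n_rest, n_rest))
prior[-n_phi:, -n_phi:] = np.linalg.inv(Phi)          # Phi^{-1} acts only on the Re/Im{phi_tilde} block

# position block of the inverse of the full hybrid FIM
F = (2.0 / sigma_eps2) * G
F[L_pos:, L_pos:] += prior
crb_p_direct = np.linalg.inv(F)[:L_pos, :L_pos]

# Schur-complement form corresponding to (102)
Z1i = np.linalg.inv(Z1)
mid = np.linalg.inv(Z3 - Z2.T @ Z1i @ Z2 + 0.5 * sigma_eps2 * prior)
crb_p_schur = 0.5 * sigma_eps2 * (Z1i + Z1i @ Z2 @ mid @ Z2.T @ Z1i)

print(np.allclose(crb_p_direct, crb_p_schur))         # should print True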
Remark 16.
The trace of CRB e ( p ) is the bound for the localization MSE when array model errors exist.
Remark 17.
It is apparent that the trace of CRB e ( p ) is larger than that of CRB ( p ) , because the array model errors increase the uncertainty in the parameter estimation.
Remark 18.
It can be readily proved that CRB e ( p ) reduces to CRB ( p ) when Φ n ( 1 ) and Φ n ( 2 ) tend to O . Therefore, the CRB results derived in the presence of array model errors subsume those for the case of no array model errors.
Remark 19.
Although there is no strict proof, it is not hard to conclude that the trace of P is greater than that of CRB e ( p ) . The reason is that the DPD estimator discussed here does not take the array model errors into account and, thus, it is not statistically efficient for this case. Hence, a comparison of the trace of CRB e ( p ) with that of P allows us to decide whether a new DPD method that accounts for the array model errors is necessary to improve the emitter location accuracy.

9. Simulation Results

This section presents a set of Monte Carlo simulations to support the theoretical development in the previous sections. The empirical performances of the DPD method with and without array model errors are given, and they are compared both to the theoretical prediction values given in Section 6 and Section 7 and to the CRBs presented in Section 8. The simulated values are averaged over 5000 independent trials. Moreover, the root-mean-square-error (RMSE), SP of localization, and radius of CEP are used to assess and compare the performance.

9.1. Discussion on RMSE of Direct Localization

This subsection focuses on the RMSE of the DPD method. Two sets of experiments are reported to illustrate the usefulness of the obtained results.

9.1.1. The First Set of Experiments

In the first set of experiments, the location estimation is performed on a 2-D plane and a simple array error model is used, which corresponds to case 1 in Section 4 in [44]. Specifically, φ ˜ n follows a circularly symmetric complex Gaussian distribution with second-order moments given by
E [ φ ˜ n φ ˜ n T ] = Φ n ( 1 ) = O M × M , E [ φ ˜ n φ ˜ n H ] = Φ n ( 2 ) = σ φ ˜ 2 I M ( 1 n N ) ,
where σ φ ˜ is the standard deviation of the array model error.
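A circularly symmetric complex Gaussian error vector with the moments given above can be drawn as follows (a minimal Python/NumPy sketch; the values of M and σ φ ˜ are placeholders):

import numpy as np

rng = np.random.default_rng(2)
M = 5                 # sensors per array (placeholder)
sigma_phi = 0.1       # standard deviation of the array model error (placeholder)

# circularly symmetric draw: E[phi phi^T] = O and E[phi phi^H] = sigma_phi^2 * I
def draw_phi(size=()):
    shape = (*size, M)
    return (sigma_phi / np.sqrt(2.0)) * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

phi = draw_phi()                                      # one error vector phi_tilde_n

# empirical check of the two second-order moments
many = draw_phi((100000,))
print(np.round(np.mean(np.abs(many) ** 2), 4))        # close to sigma_phi^2 = 0.01
print(np.round(np.abs(np.mean(many ** 2)), 4))        # close to 0 (circularity, i.e. Phi_n^(1) = O)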
The location geometry of the first set of experiments is shown in Figure 1, where the base stations and the transmitter lie in the same plane. We consider four base stations with coordinates [0, 1000] m, [0,0] m, [0,1000] m, and [0, 3000] m, while the emitter position is fixed at [2000, 2000] m. The transmitted waveforms are realizations of a Gaussian random process and are unknown to the receivers. Each base station is equipped with a uniform linear array. The channel attenuation magnitude is fixed at 1, and the channel phase is selected at random from a uniform distribution over [−π, π). In addition, unless stated otherwise, we use the settings (1) K = 64 samples; (2) SNR of 5 dB; (3) M = 5 sensors; (4) σ φ ˜ = 0.1 ; and (5) sensor elements separated by a half wavelength. Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 display the RMSEs of the DPD method as functions of the SNR of the emitter signal, the standard deviation of the array model error σ φ ˜ , the number of array elements M , the ratio of the intersensor spacing to the wavelength, and the number of snapshots K .
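To give a flavor of how such RMSE curves are produced, the sketch below runs a small Monte Carlo experiment for a 2-D direct localization with uniform linear arrays. It uses an illustrative geometry and, in place of the eigenvalue cost J ( p ) of [29], a simplified surrogate cost (the sum over stations of the normalized projection of each sample covariance onto the candidate steering vector); all numerical values, the station layout, and the ULA response model are assumptions made for the sketch, not the exact settings of this section.

import numpy as np

rng = np.random.default_rng(3)

# illustrative geometry and settings (placeholders, not the values used in the paper)
stations = np.array([[-2000.0, 0.0], [0.0, 0.0], [2000.0, 0.0], [0.0, 2000.0]])
p_true = np.array([1500.0, 2500.0])
M, K = 5, 64                      # sensors per ULA, snapshots
snr_db, sigma_phi = 5.0, 0.1      # SNR and array-error standard deviation
d_over_lambda = 0.5               # half-wavelength element spacing

def ula_response(station, p):
    # far-field ULA response to the bearing from `station` towards position `p`
    theta = np.arctan2(p[1] - station[1], p[0] - station[0])
    return np.exp(2j * np.pi * d_over_lambda * np.arange(M) * np.sin(theta))

def sample_covariances(p):
    # one Monte Carlo realization: per-station sample covariances with array model errors
    s = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)   # unknown waveform
    sigma_n = 10.0 ** (-snr_db / 20.0)
    covs = []
    for st in stations:
        a = ula_response(st, p)
        a = a + (sigma_phi / np.sqrt(2)) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
        beta = np.exp(2j * np.pi * rng.random())                              # unit-magnitude channel
        X = beta * np.outer(a, s) + sigma_n * (rng.standard_normal((M, K))
                                               + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
        covs.append(X @ X.conj().T / K)
    return covs

def cost(covs, p):
    # surrogate direct-localization cost: normalized covariance projections onto the
    # nominal (error-free) candidate steering vectors, mirroring the mismatch studied here
    val = 0.0
    for st, R in zip(stations, covs):
        a = ula_response(st, p)
        val += np.real(a.conj() @ R @ a) / (np.linalg.norm(a) ** 2)
    return val

gx = np.linspace(500.0, 2500.0, 41)                   # coarse 50 m grid around the source
gy = np.linspace(1500.0, 3500.0, 41)

def locate(covs):
    c = np.array([[cost(covs, np.array([x, y])) for y in gy] for x in gx])
    ix, iy = np.unravel_index(np.argmax(c), c.shape)
    return np.array([gx[ix], gy[iy]])

sq_err = [np.sum((locate(sample_covariances(p_true)) - p_true) ** 2) for _ in range(25)]
print("empirical RMSE [m]:", np.sqrt(np.mean(sq_err)))  # a short run; increase the trial count for smooth curves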
Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 reveal that the theoretical RMSE provided by (54) is in close agreement with the simulation results in the presence of array model errors. Consequently, the validity of the theoretical study in Section 6 is confirmed. Furthermore, when array model errors are absent, the empirical RMSE is very close to the CRB given by (92) and to the theoretical RMSE in (55), which implies that the DPD method presented in [29] is asymptotically efficient provided that the array is accurately calibrated. It is also seen that, as expected, the presence of array model errors leads to considerable deterioration in the location accuracy. Furthermore, Figure 2 and Figure 6 show that the RMSE of the DPD method remains approximately constant no matter how much the SNR or the number of samples increases. The reason is that when the SNR or the sample size is large enough, the effects of sensor noise become negligible and the localization errors are dominated by the array model errors, whose effects this DPD method cannot mitigate. Additionally, we find that the RMSE in the presence of array uncertainties is significantly larger than the CRB provided by (102), especially as the standard deviation σ φ ˜ increases (see Figure 3). Consequently, a new DPD method that accounts for array model errors is needed to improve the location accuracy.

9.1.2. The Second Set of Experiments

In the second set of experiments, the source location is estimated in 3-D space and we assume that the array error is caused by sensor gain and phase uncertainties, which corresponds to case 2 in Section 4 in [44]. The second-order moments of φ ˜ n can then be expressed as
{ E [ φ ˜ n φ ˜ n T ] = Φ n ( 1 ) = ( σ φ ˜ 1 2 σ φ ˜ 2 2 ) diag [ a n ( p ) a n ( p ) ] , E [ φ ˜ n φ ˜ n H ] = Φ n ( 2 ) = ( σ φ ˜ 1 2 + σ φ ˜ 2 2 ) diag [ a n ( p ) a n ( p ) ] , ( 1 n N ) ,
where σ φ ˜ 1 and σ φ ˜ 2 are the standard deviations of the sensor gain and phase perturbations, respectively. Moreover, we assume σ φ ˜ 1 = 2 σ φ ˜ 2 hereafter; thus, when σ φ ˜ 1 is changed, σ φ ˜ 2 changes accordingly.
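One way to draw array errors with this gain/phase structure is φ ˜ n = a n ( p ) ⊙ ( g ˜ + j q ˜ ) with independent real Gaussian gain and phase perturbations g ˜ and q ˜ . The construction below reproduces the transpose moment above exactly and the Hermitian moment with a n ( p ) ⊙ a n ∗ ( p ) in its diagonal; it is an illustrative generation scheme with a placeholder nominal response, not necessarily the one used to produce the figures.

import numpy as np

rng = np.random.default_rng(4)
M = 5
sigma_g = 0.1                 # sensor gain perturbation std (sigma_phi_1, placeholder)
sigma_p = sigma_g / 2.0       # phase perturbation std (sigma_phi_2), per the assumption above

a = np.exp(2j * np.pi * 0.5 * np.arange(M) * np.sin(0.3))   # placeholder nominal response a_n(p)

# phi = a ⊙ (gain error + j * phase error); over many draws this gives
# E[phi phi^T] = (sigma_g^2 - sigma_p^2) diag(a ⊙ a) and E[phi phi^H] = (sigma_g^2 + sigma_p^2) diag(a ⊙ conj(a))
g = sigma_g * rng.standard_normal(M)
q = sigma_p * rng.standard_normal(M)
phi = a * (g + 1j * q)
a_perturbed = a + phi
print(np.round(a_perturbed, 3))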
Figure 7 illustrates the geometry for the source localization in the second set of experiments, which is a 3-D scenario. The source is positioned at [1000, 500, 1500] m, and the coordinates of the three base stations are set to [0, 2000, 0] m, [0, 0, 0] m, and [0, −2000, 0] m. Each base station is equipped with a uniform circular array. The envelope of the transmitted signal and the array model errors are generated in the same manner as before. Additionally, unless stated otherwise, we adopt the settings (1) K = 64 samples; (2) SNR of 5 dB; (3) M = 5 sensors; (4) σ φ ˜ 1 = 0.1 ; and (5) an array radius equal to the wavelength. Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12 show the RMSEs of the DPD method when varying the SNR of the emitter signal, the standard deviation of sensor gain perturbation σ φ ˜ 1 , the number of array elements M , the ratio of the array radius to the wavelength, and the number of snapshots K .
The results presented in Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12 coincide with those presented in Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6, although the dimensionality of the localization scenario and the array error model differ. Owing to limited space, we do not repeat the discussion here. We simply highlight that the good agreement between the empirical and theoretical RMSE once again demonstrates the effectiveness of the theoretical development in Section 6.

9.2. Discussion on Success Probability of Direct Localization

This subsection focuses on the SP of the DPD method. Two sets of experiments are carried out to validate the obtained probability formulas, and the simulation parameters are the same as those in Section 9.1.

9.2.1. The First Set of Experiments

Both the localization scenario and the array error model for the first set of experiments are the same as those in Section 9.1.1. Moreover, the parameters Δ 1 and Δ 2 , which are used to specify the first SP, are set to the same value of 40, and the parameter Δ , which is related to the second SP, is also selected as 40. Because the localization scenario is on a 2-D plane, the theoretical value of the first SP can be obtained with (63). In Figure 13, Figure 14 and Figure 15, we plot the two kinds of SP of the DPD method against the SNR of the emitter signal, standard deviation of the array model error σ φ ˜ , and number of snapshots K .
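For reference, the kind of rectangle probability evaluated by (63) can be computed directly for a bivariate Gaussian position error. The sketch below does this for a zero-mean error with a placeholder covariance (the actual covariance would come from the MSE expression (54)) and cross-checks the result by Monte Carlo.

import numpy as np
from scipy.stats import multivariate_normal

delta1, delta2 = 40.0, 40.0
cov = np.array([[400.0, 120.0],
                [120.0, 300.0]])        # placeholder covariance of the position error p_tilde
mvn = multivariate_normal(mean=[0.0, 0.0], cov=cov)

def rect_prob(a1, a2):
    # Pr{ -a1 <= x <= a1, -a2 <= y <= a2 } by inclusion-exclusion on the joint CDF
    F = lambda x, y: mvn.cdf([x, y])
    return F(a1, a2) - F(-a1, a2) - F(a1, -a2) + F(-a1, -a2)

sp1 = rect_prob(delta1, delta2)

# Monte Carlo cross-check
samples = mvn.rvs(size=200000, random_state=5)
sp1_mc = np.mean((np.abs(samples[:, 0]) <= delta1) & (np.abs(samples[:, 1]) <= delta2))
print(round(sp1, 4), round(sp1_mc, 4))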
Figure 13, Figure 14 and Figure 15 reveal that there is a close match between the analytical results and the simulation results and hence the validity of (63) and (68) can be supported. Furthermore, the simulated values in the absence of array model errors approach the lower bound calculated with the CRB in (93), which further indicates that the DPD estimator can achieve the CRB accuracy as long as the array is perfectly calibrated. However, when array model errors exist, the empirical values deviate significantly from the lower bound. Moreover, the difference increases with the standard deviation of array model error (see Figure 14). We thus need to develop a new DPD estimator with improved robustness against array model errors. Furthermore, it is seen that the first SP is always smaller than the second SP, which is consistent with the analysis in Remark 11.

9.2.2. The Second Set of Experiments

Both the localization scenario and the array error model for the second set of experiments are the same as those in Section 9.1.2. Because this is a 3-D localization scenario, the theoretical value of the first SP must be computed by numerical integration; here, the Richardson extrapolation algorithm is used. Figure 16, Figure 17 and Figure 18 depict the two kinds of SP of the DPD method as functions of the SNR of the emitter signal, the standard deviation of sensor gain perturbation σ φ ˜ 1 , and the number of snapshots K .
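Richardson extrapolation combines quadrature estimates of known error order to cancel the leading error terms. The following sketch is a generic, minimal (Romberg-style) implementation applied, purely for illustration, to the standard normal integral; it is not the specific multidimensional integration routine used for the figures.

import numpy as np
from math import erf, sqrt

def trapezoid(f, a, b, n):
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return (b - a) / n * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def richardson(f, a, b, levels=6):
    # Romberg-style Richardson extrapolation of trapezoidal estimates (leading error of order h^2)
    T = [[trapezoid(f, a, b, 2 ** k) for k in range(levels)]]
    for j in range(1, levels):
        T.append([(4 ** j * T[j - 1][k + 1] - T[j - 1][k]) / (4 ** j - 1)
                  for k in range(levels - j)])
    return T[-1][0]

f = lambda t: np.exp(-t ** 2 / 2.0) / np.sqrt(2.0 * np.pi)
approx = 0.5 + richardson(f, 0.0, 3.0)                # the standard normal CDF at 3
exact = 0.5 * (1.0 + erf(3.0 / sqrt(2.0)))
print(approx, exact)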
For Figure 16, Figure 17 and Figure 18, we make observations similar to those for Figure 13, Figure 14 and Figure 15. We simply emphasize that the good agreement between the empirical and analytical SP once again validates the probability formulas obtained in Section 7.

9.3. Discussion on Radius of CEP

This subsection discusses the radius of CEP of the DPD method. Two simulation experiments are conducted to illustrate the validity of (73), which is used to estimate the radius of CEP. The first and second simulation settings are the same as those in Figure 2 and Figure 8, respectively. In the following two figures, the radius of CEP of the DPD method in the two experiments is plotted as a function of the SNR of the emitter signal.
Figure 19 and Figure 20 show that the simulation results agree well with the analytical results calculated with (73), and therefore the validity of (73) is corroborated. Moreover, we observe that the increase in the radius of CEP caused by the array model errors is significant, especially when the SNR of the emitter signal is sufficiently high. Furthermore, when array model errors exist, the radius of CEP remains approximately constant no matter how much the SNR increases. Therefore, a DPD method that is robust to uncertainties in the array manifold is required.
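The idea behind (73), choosing the radius at which the localization probability equals 0.5, can be mimicked numerically. In the sketch below, the probability Pr { | | p ˜ | | ≤ r } for a zero-mean Gaussian error with a placeholder covariance is evaluated by eigen-decomposition and one-dimensional numerical integration (standing in for an Imhof-type evaluation of the quadratic-form probability [64]), the CEP radius is found by bisection, and the result is compared with the empirical median radial error.

import numpy as np
from scipy.stats import chi2, norm

rng = np.random.default_rng(6)
cov = np.array([[500.0, 100.0],
                [100.0, 400.0]])                      # placeholder position-error covariance

lam = np.linalg.eigvalsh(cov)                         # eigenvalues of the error covariance (ascending)

def prob_within(r):
    # Pr{ lam0*u0^2 + lam1*u1^2 <= r^2 } for independent standard normal u0, u1,
    # computed by integrating the inner chi-square CDF over the outer Gaussian component
    t = np.linspace(-r / np.sqrt(lam[1]), r / np.sqrt(lam[1]), 4001)
    f = chi2.cdf((r ** 2 - lam[1] * t ** 2) / lam[0], df=1) * norm.pdf(t)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

# bisection for the model-based CEP radius: Pr{||p_tilde|| <= r} = 0.5
lo, hi = 0.0, 10.0 * np.sqrt(np.trace(cov))
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if prob_within(mid) < 0.5 else (lo, mid)
cep_model = 0.5 * (lo + hi)

# empirical CEP from simulated errors (median radial error)
errors = rng.multivariate_normal([0.0, 0.0], cov, size=200000)
cep_empirical = np.median(np.linalg.norm(errors, axis=1))
print(round(cep_model, 2), round(cep_empirical, 2))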

10. Conclusions

In this paper, the statistical performance of the DPD estimator presented in [29] is analytically studied for the case in which array model errors are present and the signal waveforms are unknown. The theoretical analysis begins with a matrix eigen-perturbation result, which expresses the perturbation of the eigenvalues as a function of the disturbance added to a Hermitian matrix. Then, the first-order asymptotic expression of the localization errors is given, from which the analytical expression for the MSE of the DPD estimator is obtained. In addition, closed-form expressions for the probabilities of a successful localization are deduced, which offer another theoretical perspective on the localization accuracy. The obtained probability formula also provides a new criterion for estimating the radius of CEP. Finally, CRB expressions for the position estimation are derived for two cases: (a) array model errors do not exist, and (b) array model errors are present and drawn from a Gaussian distribution. Several simulation experiments are performed to confirm the usefulness of the obtained results. The experimental results show that uncertainties in the array manifold model can seriously degrade the source location accuracy of the DPD method. Our future work will therefore focus on developing a new DPD method that is more robust against array model errors.

Acknowledgments

The authors would like to thank all the anonymous reviewers for their valuable comments and suggestions, which vastly improved the content and presentation of this paper. The authors also acknowledge support from the National Natural Science Foundation of China (Grant No. 61201381 and No. 61401513), the China Postdoctoral Science Foundation (Grant No. 2016M592989), the Self-Topic Foundation of Information Engineering University (Grant No. 2016600701), and the Outstanding Youth Foundation of Information Engineering University (Grant No. 2016603201).

Author Contributions

Ding Wang wrote the manuscript, and Hongyi Yu helped with the writing, data analysis, and publication process. Zhidong Wu and Cheng Wang were in charge of the experiments and their results. All authors have contributed to the scientific discussion.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A—Detailed Derivation of Matrices in (30)

It follows from the first equality in (9) and the first equality in (12) that
A ˙ l ( p ) = A ( p ) < p > l = [ A 1 ( p ) < p > l A 2 ( p ) < p > l A N ( p ) < p > l ] = [ blkdiag [ a 1 , 1 ( p ) < p > l    a 1 , 2 ( p ) < p > l       a 1 , K ( p ) < p > l ] blkdiag [ a 2 , 1 ( p ) < p > l    a 2 , 2 ( p ) < p > l       a 2 , K ( p ) < p > l ] blkdiag [ a N , 1 ( p ) < p > l    a N , 2 ( p ) < p > l       a N , K ( p ) < p > l ] ] ,
A ¨ l 1 l 2 ( p ) = 2 A ( p ) < p > l 1 < p > l 2 = [ 2 A 1 ( p ) < p > l 1 < p > l 2 2 A 2 ( p ) < p > l 1 < p > l 2 2 A N ( p ) < p > l 1 < p > l 2 ] = [ blkdiag [ 2 a 1 , 1 ( p ) < p > l 1 < p > l 2    2 a 1 , 2 ( p ) < p > l 1 < p > l 2       2 a 1 , K ( p ) < p > l 1 < p > l 2 ] blkdiag [ 2 a 2 , 1 ( p ) < p > l 1 < p > l 2    2 a 2 , 2 ( p ) < p > l 1 < p > l 2       2 a 2 , K ( p ) < p > l 1 < p > l 2 ] blkdiag [ 2 a N , 1 ( p ) < p > l 1 < p > l 2    2 a N , 2 ( p ) < p > l 1 < p > l 2       2 a N , K ( p ) < p > l 1 < p > l 2 ] ] ,
where
a n , k ( p ) < p > l = exp { j ω k τ n ( p ) } ( a n ( p ) < p > l j ω k a n ( p ) τ n ( p ) < p > l ) ,
2 a n , k ( p ) < p > l 1 < p > l 2 = exp { j ω k τ n ( p ) } ( 2 a n ( p ) < p > l 1 < p > l 2 j ω k ( a n ( p ) < p > l 2 τ n ( p ) < p > l 1 + a n ( p ) 2 τ n ( p ) < p > l 1 < p > l 2 ) ) j ω k exp { j ω k τ n ( p ) } τ n ( p ) < p > l 2 ( a n ( p ) < p > l 1 j ω k a n ( p ) τ n ( p ) < p > l 1 ) .
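The product-rule differentiation above can be checked by finite differences. The sketch below assumes a n , k ( p ) = a n ( p ) exp { − j ω k τ n ( p ) } with a far-field ULA response and a straight-line propagation delay; these modeling choices, and all numerical values, are illustrative assumptions rather than the paper's exact model.

import numpy as np

c = 3e8
M, d, wk = 5, 0.5, 2 * np.pi * 1.0e3          # sensors, spacing in wavelengths, one frequency bin (placeholders)
station = np.array([0.0, 0.0])

def tau(p):                                   # straight-line propagation delay (illustrative)
    return np.linalg.norm(p - station) / c

def a_nom(p):                                 # far-field ULA response to the bearing of p (illustrative)
    theta = np.arctan2(p[1] - station[1], p[0] - station[0])
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

def a_nk(p):                                  # assumed composite response a_{n,k}(p)
    return a_nom(p) * np.exp(-1j * wk * tau(p))

def grad_numeric(f, p, h=1e-3):
    # central finite-difference Jacobian, one column per position coordinate
    g = []
    for l in range(p.size):
        e = np.zeros_like(p); e[l] = h
        g.append((f(p + e) - f(p - e)) / (2 * h))
    return np.stack(g, axis=-1)

p = np.array([2000.0, 1500.0])
# product rule: d a_{n,k}/dp = exp(-j w tau) * (d a_n/dp - j w a_n (d tau/dp)^T)
Ja = grad_numeric(a_nom, p)
dtau = grad_numeric(lambda q: np.array([tau(q)]), p)[0]
analytic = np.exp(-1j * wk * tau(p)) * (Ja - 1j * wk * np.outer(a_nom(p), dtau))
print(np.max(np.abs(analytic - grad_numeric(a_nk, p))))   # should be very small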

Appendix B—Proof of (34) and (35)

Applying Proposition 1 and (31) leads to
J ^ ( p ^ ) = λ max { B ^ H ( p ^ ) B ^ ( p ^ ) } = λ max { C 0 } + u N H ( C 0 + C ˜ ( 1 ) + C ˜ ( 2 ) + o ( | | ξ ˜ | | 2 2 ) ) u N + u N H ( C 0 + C ˜ ( 1 ) + C ˜ ( 2 ) + o ( | | ξ ˜ | | 2 2 ) ) U N ( C 0 + C ˜ ( 1 ) + C ˜ ( 2 ) + o ( | | ξ ˜ | | 2 2 ) ) u N + o ( | | C ˜ ( 1 ) + C ˜ ( 2 ) + o ( | | ξ ˜ | | 2 2 ) | | 2 2 ) = λ max { C 0 } + u N H ( C ˜ ( 1 ) + C ˜ ( 2 ) ) u N + u N H C ˜ ( 1 ) U N C ˜ ( 1 ) u N + o ( | | ξ ˜ | | 2 2 ) = λ N + λ ˜ N ( 1 ) + λ ˜ N ( 2 ) + o ( | | ξ ˜ | | 2 2 ) ,
where λ ˜ N ( 1 ) and λ ˜ N ( 2 ) consist of all first- and second-order error terms, respectively. It then follows that
{ λ ˜ N ( 1 ) = u N H C ˜ ( 1 ) u N = O ( | | ξ ˜ | | 2 ) , λ ˜ N ( 2 ) = u N H C ˜ ( 2 ) u N + u N H C ˜ ( 1 ) U N C ˜ ( 1 ) u N = O ( | | ξ ˜ | | 2 2 ) .
Inserting the second equality in (32) into the first equality in (A6) leads to
λ ˜ N ( 1 ) = u N H ( B 0 H B ˜ ( 1 ) + B ˜ ( 1 ) H B 0 ) u N = u N H B 0 H B ˜ ( 1 ) u N + ( u N H B 0 H B ˜ ( 1 ) u N ) H = O ( | | ξ ˜ | | 2 ) .
Substituting the second and third equalities in (32) into the second equality in (A6) leads to
λ ˜ N ( 2 ) = u N H ( B ˜ ( 1 ) H B ˜ ( 1 ) + B 0 H B ˜ ( 2 ) + B ˜ ( 2 ) H B 0 ) u N + u N H ( B 0 H B ˜ ( 1 ) + B ˜ ( 1 ) H B 0 ) U N ( B 0 H B ˜ ( 1 ) + B ˜ ( 1 ) H B 0 ) u N       = u N H B 0 H B ˜ ( 1 ) U N B 0 H B ˜ ( 1 ) u N + ( u N H B 0 H B ˜ ( 1 ) U N B 0 H B ˜ ( 1 ) u N ) H + u N H B ˜ ( 1 ) H ( I K + B 0 U N B 0 H ) B ˜ ( 1 ) u N           + u N H B 0 H B ˜ ( 1 ) U N B ˜ ( 1 ) H B 0 u N + u N H B 0 H B ˜ ( 2 ) u N + ( u N H B 0 H B ˜ ( 2 ) u N ) H = O ( | | ξ ˜ | | 2 2 ) .
Combining (A7) and (A8) completes the proof.
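The eigenvalue expansion used above can be verified numerically on a small Hermitian example. In the sketch below the perturbation is exactly linear in a scale factor ε (so the second-order part C ˜ ( 2 ) vanishes), and U N is taken to be the reduced resolvent ( λ N I − C 0 ) † , consistent with standard eigen-perturbation results (an assumption about the notation here); the first-order residual then shrinks like ε 2 and the residual after the second-order correction like ε 3 .

import numpy as np

rng = np.random.default_rng(7)
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C0 = A @ A.conj().T                                   # Hermitian positive semi-definite matrix
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (H + H.conj().T) / 2                              # Hermitian perturbation direction

lam, U = np.linalg.eigh(C0)
lam_N, u_N = lam[-1], U[:, -1]                        # largest eigenvalue and its eigenvector
U_N = np.linalg.pinv(lam_N * np.eye(n) - C0)          # reduced resolvent (zero in the u_N direction)

for eps in (1e-2, 1e-3):
    dC = eps * H
    exact = np.linalg.eigvalsh(C0 + dC)[-1]
    first = np.real(u_N.conj() @ dC @ u_N)            # first-order eigenvalue perturbation
    second = np.real(u_N.conj() @ dC @ U_N @ dC @ u_N)  # second-order correction
    print(eps, abs(exact - (lam_N + first)), abs(exact - (lam_N + first + second)))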

Appendix C—Proof of (36) to (38)

From the second equality in (29), it follows for any vectors z 1 ∈ C K × 1 and z 2 ∈ C N × 1 that
z 1 H B ˜ ( 1 ) z 2 = z 1 H A H ( p ) E ¯ z 2 + z 1 H A H ( p ) Ψ ˜ z 2 + l = 1 L < p ˜ > l z 1 H A ˙ l H ( p ) X ¯ 0 z 2 .
According to the last equality in (17), it can be readily checked that
z 1 H A H ( p ) E ¯ z 2 = z 1 H A H ( p ) diag [ z 2 1 M K × 1 ] ε ¯ = ( f 1 [ z 1 , z 2 ] ) H ε ¯ c ,
where f 1 [ z 1 , z 2 ] is given in the first equality in (38). With (21), we have
z 1 H A H ( p ) Ψ ˜ z 2 = z 1 H A H ( p ) blkdiag [ < z 2 > 1 ( r 1 I M )      < z 2 > 2 ( r 2 I M )           < z 2 > N ( r N I M ) ] φ ˜ = ( f 2 [ z 1 , z 2 ] ) H φ ˜ c ,
where f 2 [ z 1 , z 2 ] is given in the second equality in (38). In addition, it can be easily verified that
l = 1 L < p ˜ > l z 1 H A ˙ l H ( p ) X ¯ 0 z 2 = [ z 1 H A ˙ 1 H ( p ) X ¯ 0 z 2     z 1 H A ˙ 2 H ( p ) X ¯ 0 z 2         z 1 H A ˙ L H ( p ) X ¯ 0 z 2 ] p ˜ = ( f 3 [ z 1 , z 2 ] ) H p ˜ ,
where f 3 [ z 1 , z 2 ] is given in the third equality in (38). Combining (A9) to (A12) yields
z 1 H B ˜ ( 1 ) z 2 = ( f 1 [ z 1 , z 2 ] ) H ε ¯ c + ( f 2 [ z 1 , z 2 ] ) H φ ˜ c + ( f 3 [ z 1 , z 2 ] ) H p ˜ .
It follows easily from (A13) that
u N H B 0 H B ˜ ( 1 ) u N = ( f 1 [ B 0 u N , u N ] ) H ε ¯ c + ( f 2 [ B 0 u N , u N ] ) H φ ˜ c + ( f 3 [ B 0 u N , u N ] ) H p ˜ ,
which, combined with (23) and (25), gives
( u N H B 0 H B ˜ ( 1 ) u N ) H = ( f 1 [ B 0 u N , u N ] ) T ε ¯ c + ( f 2 [ B 0 u N , u N ] ) T φ ˜ c + ( f 3 [ B 0 u N , u N ] ) T p ˜ = ( f 1 [ B 0 u N , u N ] ) T Π ε ¯ ε ¯ c + ( f 2 [ B 0 u N , u N ] ) T Π φ ˜ φ ˜ c + ( f 3 [ B 0 u N , u N ] ) T p ˜ = ( Π ε ¯ ( f 1 [ B 0 u N , u N ] ) ) H ε ¯ c + ( Π φ ˜ ( f 2 [ B 0 u N , u N ] ) ) H φ ˜ c + ( f 3 [ B 0 u N , u N ] ) T p ˜ .
Combining (A14), (A15), and the first equality in (35) completes the proof.

Appendix D—Proof of (39) to (44)

For any vectors z 1 ∈ C K × 1 and z 2 ∈ C N × 1 and matrix Z ∈ C N × K , it is straightforward to obtain that
z 1 H B ˜ ( 1 ) Z B ˜ ( 1 ) z 2 = k = 1 K ( z 1 H B ˜ ( 1 ) Z i K ( k ) ) ( i K ( k ) H B ˜ ( 1 ) z 2 ) .
Inserting (A13) into (A16) produces
z 1 H B ˜ ( 1 ) Z B ˜ ( 1 ) z 2 = k = 1 K ( ( f 1 [ z 1 , Z i K ( k ) ] ) H ε ¯ c + ( f 2 [ z 1 , Z i K ( k ) ] ) H φ ˜ c + ( f 3 [ z 1 , Z i K ( k ) ] ) H p ˜ ) ( ( f 1 [ i K ( k ) , z 2 ] ) H ε ¯ c + ( f 2 [ i K ( k ) , z 2 ] ) H φ ˜ c + ( f 3 [ i K ( k ) , z 2 ] ) H p ˜ ) = ε ¯ c H ( k = 1 K Π ε ¯ ( f 1 [ z 1 , Z i K ( k ) ] ) ( f 1 [ i K ( k ) , z 2 ] ) H ) ε ¯ c + φ ˜ c H ( k = 1 K Π φ ˜ ( f 2 [ z 1 , Z i K ( k ) ] ) ( f 2 [ i K ( k ) , z 2 ] ) H ) φ ˜ c + p ˜ T ( k = 1 K ( f 3 [ z 1 , Z i K ( k ) ] ) ( f 3 [ i K ( k ) , z 2 ] ) H ) p ˜ + ε ¯ c H ( k = 1 K Π ε ¯ ( ( f 1 [ z 1 , Z i K ( k ) ] ) ( f 2 [ i K ( k ) , z 2 ] ) H + ( f 1 [ i K ( k ) , z 2 ] ) ( f 2 [ z 1 , Z i K ( k ) ] ) H ) ) φ ˜ c + ε ¯ c H ( k = 1 K Π ε ¯ ( ( f 1 [ z 1 , Z i K ( k ) ] ) ( f 3 [ i K ( k ) , z 2 ] ) H + ( f 1 [ i K ( k ) , z 2 ] ) ( f 3 [ z 1 , Z i K ( k ) ] ) H ) ) p ˜ + φ ˜ c H ( k = 1 K Π φ ˜ ( ( f 2 [ z 1 , Z i K ( k ) ] ) ( f 3 [ i K ( k ) , z 2 ] ) H + ( f 2 [ i K ( k ) , z 2 ] ) ( f 3 [ z 1 , Z i K ( k ) ] ) H ) ) p ˜ = ε ¯ c H F a 1 [ z 1 , Z , z 2 ] ε ¯ c + φ ˜ c H F a 2 [ z 1 , Z , z 2 ] φ ˜ c + p ˜ T F a 3 [ z 1 , Z , z 2 ] p ˜ + ε ¯ c H F a 4 [ z 1 , Z , z 2 ] φ ˜ c + ε ¯ c H F a 5 [ z 1 , Z , z 2 ] p ˜ + φ ˜ c H F a 6 [ z 1 , Z , z 2 ] p ˜ ,
where { F a k [ , , ] } 1 k 6 are given in (41).
For any vectors z 1 ∈ C N × 1 and z 2 ∈ C N × 1 and matrix Z ∈ C K × K , it can be readily checked that
z 1 H B ˜ ( 1 ) H Z B ˜ ( 1 ) z 2 = k = 1 K ( z 1 H B ˜ ( 1 ) H Z i K ( k ) ) ( i K ( k ) H B ˜ ( 1 ) z 2 ) = k = 1 K ( ( Z i K ( k ) ) H B ˜ ( 1 ) z 1 ) H ( i K ( k ) H B ˜ ( 1 ) z 2 ) .
Putting (A13) into (A18) gives
z 1 H B ˜ ( 1 ) H Z B ˜ ( 1 ) z 2 = k = 1 K ( ( f 1 [ Z i K ( k ) , z 1 ] ) H ε ¯ c + ( f 2 [ Z i K ( k ) , z 1 ] ) H φ ˜ c + ( f 3 [ Z i K ( k ) , z 1 ] ) H p ˜ ) H ( ( f 1 [ i K ( k ) , z 2 ] ) H ε ¯ c + ( f 2 [ i K ( k ) , z 2 ] ) H φ ˜ c + ( f 3 [ i K ( k ) , z 2 ] ) H p ˜ ) = ε ¯ c H ( k = 1 K f 1 [ Z i K ( k ) , z 1 ] ( f 1 [ i K ( k ) , z 2 ] ) H ) ε ¯ c + φ ˜ c H ( k = 1 K f 2 [ Z i K ( k ) , z 1 ] ( f 2 [ i K ( k ) , z 2 ] ) H ) φ ˜ c + p ˜ T ( k = 1 K f 3 [ Z i K ( k ) , z 1 ] ( f 3 [ i K ( k ) , z 2 ] ) H ) p ˜ + ε ¯ c H ( k = 1 K ( f 1 [ Z i K ( k ) , z 1 ] ( f 2 [ i K ( k ) , z 2 ] ) H + Π ε ¯ c ( f 1 [ i K ( k ) , z 2 ] ) ( f 2 [ Z i K ( k ) , z 1 ] ) T Π φ ˜ ) ) φ ˜ c + ε ¯ c H ( k = 1 K ( f 1 [ Z i K ( k ) , z 1 ] ( f 3 [ i K ( k ) , z 2 ] ) H + Π ε ¯ c ( f 1 [ i K ( k ) , z 2 ] ) ( f 3 [ Z i K ( k ) , z 1 ] ) T ) ) p ˜ + φ ˜ c H ( k = 1 K ( f 2 [ Z i K ( k ) , z 1 ] ( f 3 [ i K ( k ) , z 2 ] ) H + Π φ ˜ ( f 2 [ i K ( k ) , z 2 ] ) ( f 3 [ Z i K ( k ) , z 1 ] ) T ) ) p ˜ = ε ¯ c H F b 1 [ z 1 , Z , z 2 ] ε ¯ c + φ ˜ c H F b 2 [ z 1 , Z , z 2 ] φ ˜ c + p ˜ T F b 3 [ z 1 , Z , z 2 ] p ˜ + ε ¯ c H F b 4 [ z 1 , Z , z 2 ] φ ˜ c + ε ¯ c H F b 5 [ z 1 , Z , z 2 ] p ˜ + φ ˜ c H F b 6 [ z 1 , Z , z 2 ] p ˜ ,
where { F b k [ , , ] } 1 k 6 are given in (42).
For any vectors z 1 ∈ C K × 1 and z 2 ∈ C K × 1 and matrix Z ∈ C N × N , it is straightforward to deduce that
z 1 H B ˜ ( 1 ) Z B ˜ ( 1 ) H z 2 = n = 1 N ( z 1 H B ˜ ( 1 ) Z i N ( n ) ) ( i N ( n ) H B ˜ ( 1 ) H z 2 ) = n = 1 N ( z 1 H B ˜ ( 1 ) Z i N ( n ) ) ( z 2 H B ˜ ( 1 ) i N ( n ) ) H .
The substitution of (A13) into (A20) leads to
z 1 H B ˜ ( 1 ) Z B ˜ ( 1 ) H z 2 = n = 1 N ( ( f 1 [ z 1 , Z i N ( n ) ] ) H ε ¯ c + ( f 2 [ z 1 , Z i N ( n ) ] ) H φ ˜ c + ( f 3 [ z 1 , Z i N ( n ) ] ) H p ˜ ) ( ( f 1 [ z 2 , i N ( n ) ] ) H ε ¯ c + ( f 2 [ z 2 , i N ( n ) ] ) H φ ˜ c + ( f 3 [ z 2 , i N ( n ) ] ) H p ˜ ) H = ε ¯ c H ( n = 1 N f 1 [ z 2 , i N ( n ) ] ( f 1 [ z 1 , Z i N ( n ) ] ) H ) ε ¯ c + φ ˜ c H ( n = 1 N f 2 [ z 2 , i N ( n ) ] ( f 2 [ z 1 , Z i N ( n ) ] ) H ) φ ˜ c + p ˜ T ( n = 1 N f 3 [ z 2 , i N ( n ) ] ( f 3 [ z 1 , Z i N ( n ) ] ) H ) p ˜ + ε ¯ c H ( n = 1 N ( f 1 [ z 2 , i N ( n ) ] ( f 2 [ z 1 , Z i N ( n ) ] ) H + Π ε ¯ c ( f 1 [ z 1 , Z i N ( n ) ] ) ( f 2 [ z 2 , i N ( n ) ] ) T Π φ ˜ c ) ) φ ˜ c + ε ¯ c H ( n = 1 N ( f 1 [ z 2 , i N ( n ) ] ( f 3 [ z 1 , Z i N ( n ) ] ) H + Π ε ¯ c ( f 1 [ z 1 , Z i N ( n ) ] ) ( f 3 [ z 2 , i N ( n ) ] ) T ) ) p ˜ + φ ˜ c H ( n = 1 N ( f 2 [ z 2 , i N ( n ) ] ( f 3 [ z 1 , Z i N ( n ) ] ) H + Π φ ˜ c ( f 2 [ z 1 , Z i N ( n ) ] ) ( f 3 [ z 2 , i N ( n ) ] ) T ) ) p ˜ = ε ¯ c H F c 1 [ z 1 , Z , z 2 ] ε ¯ c + φ ˜ c H F c 2 [ z 1 , Z , z 2 ] φ ˜ c + p ˜ T F c 3 [ z 1 , Z , z 2 ] p ˜ + ε ¯ c H F c 4 [ z 1 , Z , z 2 ] φ ˜ c + ε ¯ c H F c 5 [ z 1 , Z , z 2 ] p ˜ + φ ˜ c H F c 6 [ z 1 , Z , z 2 ] p ˜ ,
where { F c k [ , , ] } 1 k 6 are given in (43).
For any vectors z 1 ∈ C K × 1 and z 2 ∈ C N × 1 , it can be easily verified from the third equality in (29) that
z 1 H B ˜ ( 2 ) z 2 = l = 1 L < p ˜ > l z 1 H A ˙ l H ( p ) E ¯ z 2 + l = 1 L < p ˜ > l z 1 H A ˙ l H ( p ) Ψ ˜ z 2 + 1 2 l 1 = 1 L l 2 = 1 L < p ˜ > l 1 < p ˜ > l 2 z 1 H A ¨ l 1 l 2 H ( p ) X ¯ 0 z 2 .
According to the last equality in (17), we get
l = 1 L < p ˜ > l z 1 H A ˙ l H ( p ) E ¯ z 2 = ε ¯ T diag [ z 2 1 M K × 1 ] [ A ˙ 1 ( p ) z 1     A ˙ 2 ( p ) z 1         A ˙ L ( p ) z 1 ] p ˜ = ε ¯ c H G 1 [ z 1 , z 2 ] p ˜ ,
where G 1 [ z 1 , z 2 ] is given in the first equality in (44). It follows from (21) that
l = 1 L < p ˜ > l z 1 H A ˙ l H ( p ) Ψ ˜ z 2 = φ ˜ T blkdiag [ < z 2 > 1 ( r 1 T I M )     < z 2 > 2 ( r 2 T I M )         < z 2 > N ( r N T I M ) ] [ A ˙ 1 ( p ) z 1     A ˙ 2 ( p ) z 1         A ˙ L ( p ) z 1 ] p ˜ = φ ˜ c H G 2 [ z 1 , z 2 ] p ˜ ,
where G 2 [ z 1 , z 2 ] is given in the second equality in (44). In addition, it can be easily verified that
1 2 l 1 = 1 L l 2 = 1 L < p ˜ > l 1 < p ˜ > l 2 z 1 H A ¨ l 1 l 2 H ( p ) X ¯ 0 z 2 = 1 2 p ˜ T [ z 1 H A ¨ 11 H ( p ) X ¯ 0 z 2 z 1 H A ¨ 12 H ( p ) X ¯ 0 z 2 z 1 H A ¨ 1 L H ( p ) X ¯ 0 z 2 z 1 H A ¨ 21 H ( p ) X ¯ 0 z 2 z 1 H A ¨ 22 H ( p ) X ¯ 0 z 2 z 1 H A ¨ 2 L H ( p ) X ¯ 0 z 2 z 1 H A ¨ L 1 H ( p ) X ¯ 0 z 2 z 1 H A ¨ L 2 H ( p ) X ¯ 0 z 2 z 1 H A ¨ L L H ( p ) X ¯ 0 z 2 ] p ˜ = p ˜ T G 3 [ z 1 , z 2 ] p ˜ ,
where G 3 [ z 1 , z 2 ] is given in the third equality in (44). Combining (A22) to (A25) gives
z 1 H B ˜ ( 2 ) z 2 = ε ¯ c H G 1 [ z 1 , z 2 ] p ˜ + φ ˜ c H G 2 [ z 1 , z 2 ] p ˜ + p ˜ T G 3 [ z 1 , z 2 ] p ˜ .
With (A17) we have
u N H B 0 H B ˜ ( 1 ) U N B 0 H B ˜ ( 1 ) u N = ε ¯ c H F a 1 [ B 0 u N , U N B 0 H , u N ] ε ¯ c + φ ˜ c H F a 2 [ B 0 u N , U N B 0 H , u N ] φ ˜ c + p ˜ T F a 3 [ B 0 u N , U N B 0 H , u N ] p ˜ + ε ¯ c H F a 4 [ B 0 u N , U N B 0 H , u N ] φ ˜ c + ε ¯ c H F a 5 [ B 0 u N , U N B 0 H , u N ] p ˜ + φ ˜ c H F a 6 [ B 0 u N , U N B 0 H , u N ] p ˜ ,
which, together with (23) and (25), gives
( u N H B 0 H B ˜ ( 1 ) U N B 0 H B ˜ ( 1 ) u N ) H = ε ¯ c H ( F a 1 [ B 0 u N , U N B 0 H , u N ] ) H ε ¯ c + φ ˜ c H ( F a 2 [ B 0 u N , U N B 0 H , u N ] ) H φ ˜ c + p ˜ T ( F a 3 [ B 0 u N , U N B 0 H , u N ] ) H p ˜ + ε ¯ c T ( F a 4 [ B 0 u N , U N B 0 H , u N ] ) φ ˜ c + ε ¯ c T ( F a 5 [ B 0 u N , U N B 0 H , u N ] ) p ˜ + φ ˜ c T ( F a 6 [ B 0 u N , U N B 0 H , u N ] ) p ˜ = ε ¯ c H ( F a 1 [ B 0 u N , U N B 0 H , u N ] ) H ε ¯ c + φ ˜ c H ( F a 2 [ B 0 u N , U N B 0 H , u N ] ) H φ ˜ c + p ˜ T ( F a 3 [ B 0 u N , U N B 0 H , u N ] ) H p ˜ + ε ¯ c H Π ε ¯ ( F a 4 [ B 0 u N , U N B 0 H , u N ] ) Π φ ˜ φ ˜ c + ε ¯ c H Π ε ¯ ( F a 5 [ B 0 u N , U N B 0 H , u N ] ) p ˜ + φ ˜ c H Π φ ˜ ( F a 6 [ B 0 u N , U N B 0 H , u N ] ) p ˜ .
Applying (A19), it can be shown that
u N H B ˜ ( 1 ) H ( I K + B 0 U N B 0 H ) B ˜ ( 1 ) u N = ε ¯ c H F b 1 [ u N , I K + B 0 U N B 0 H , u N ] ε ¯ c + φ ˜ c H F b 2 [ u N , I K + B 0 U N B 0 H , u N ] φ ˜ c + p ˜ T F b 3 [ u N , I K + B 0 U N B 0 H , u N ] p ˜ + ε ¯ c H F b 4 [ u N , I K + B 0 U N B 0 H , u N ] φ ˜ c + ε ¯ c H F b 5 [ u N , I K + B 0 U N B 0 H , u N ] p ˜ + φ ˜ c H F b 6 [ u N , I K + B 0 U N B 0 H , u N ] p ˜ .
According to (A21), we have
u N H B 0 H B ˜ ( 1 ) U N B ˜ ( 1 ) H B 0 u N = ε ¯ c H F c 1 [ B 0 u N , U N , B 0 u N ] ε ¯ c + φ ˜ c H F c 2 [ B 0 u N , U N , B 0 u N ] φ ˜ c + p ˜ T F c 3 [ B 0 u N , U N , B 0 u N ] p ˜ + ε ¯ c H F c 4 [ B 0 u N , U N , B 0 u N ] φ ˜ c + ε ¯ c H F c 5 [ B 0 u N , U N , B 0 u N ] p ˜ + φ ˜ c H F c 6 [ B 0 u N , U N , B 0 u N ] p ˜ .
Additionally, it follows from (A26) that
u N H B 0 H B ˜ ( 2 ) u N = ε ¯ c H G 1 [ B 0 u N , u N ] p ˜ + φ ˜ c H G 2 [ B 0 u N , u N ] p ˜ + p ˜ T G 3 [ B 0 u N , u N ] p ˜ ,
which, together with (23) and (25), gives
( u N H B 0 H B ˜ ( 2 ) u N ) H = ε ¯ c T ( G 1 [ B 0 u N , u N ] ) p ˜ + φ ˜ c T ( G 2 [ B 0 u N , u N ] ) p ˜ + p ˜ T ( G 3 [ B 0 u N , u N ] ) p ˜ = ε ¯ c H Π ε ¯ ( G 1 [ B 0 u N , u N ] ) p ˜ + φ ˜ c H Π φ ˜ ( G 2 [ B 0 u N , u N ] ) p ˜ + p ˜ T ( G 3 [ B 0 u N , u N ] ) p ˜ .
Combining (A27) to (A32) and the second equality in (35) completes the proof.

Appendix E—Proof of Proposition 2

First, let z ¯ 2 denote the random variable z 2 conditioned on the event { z 1 ≤ α 1 } . The joint probability can then be expressed as
Pr { z 1 ≤ α 1 , z 2 ≤ α 2 } = Pr { z 1 ≤ α 1 } Pr { z ¯ 2 ≤ α 2 } .
It is obvious that
Pr { z 1 α 1 } = α 1 1 2 π v 11 exp { ( t m 1 ) 2 / ( 2 v 11 ) } d t = ( α 1 m 1 ) / v 11 1 2 π exp { t 2 / 2 } d t = Γ 0 [ α 10 / v 11 ] .
Additionally, random variable z 2 can be decomposed with classical minimum-MSE theory into
z 2 = 1 v 11 z 0 + v 12 v 11 ( z 1 m 1 ) + m 2 ,
where z 0 is drawn from a zero-mean Gaussian distribution, independent of z 1 , with variance
var [ z 0 ] = E [ z 0 2 ] = v 11 ( v 11 v 22 v 12 2 ) .
According to (A35), it can be verified that
E [ z 2 | z 1 α 1 ] = 1 v 11 E [ z 0 | z 1 α 1 ] + v 12 v 11 E [ ( z 1 m 1 ) | z 1 α 1 ] + m 2 = v 12 v 11 E [ z 10 | z 10 α 10 ] + m 2 ,
E [ z 2 2 | z 1 α 1 ] = 1 v 11 2 E [ z 0 2 | z 1 α 1 ] + v 12 2 v 11 2 E [ ( z 1 m 1 ) 2 | z 1 α 1 ] 2 m 2 v 11 E [ z 0 | z 1 α 1 ]        2 v 12 v 11 2 E [ z 0 ( z 1 m 1 ) | z 1 α 1 ] + 2 m 2 v 12 v 11 E [ ( z 1 m 1 ) | z 1 α 1 ] + m 2 2         = v 11 v 22 v 12 2 v 11 + v 12 2 v 11 2 E [ z 10 2 | z 10 α 10 ] + 2 m 2 v 12 v 11 E [ z 10 | z 10 α 10 ] + m 2 2 ,
where z 10 = z 1 m 1 and α 10 = α 1 m 1 . Applying the incomplete moment theory presented in [28], we get
E [ z 10 | z 10 α 10 ] = α 10 t 2 π v 11 exp { t 2 / ( 2 v 11 ) } d t Pr { z 10 < α 10 } = v 11 exp { α 10 2 / ( 2 v 11 ) } 2 π Γ 0 [ α 10 / v 11 ] ,
E [ z 10 2 | z 10 α 10 ] = α 10 t 2 2 π v 11 exp { t 2 / ( 2 v 11 ) } d t Pr { z 10 < α 10 } = v 11 v 11 2 π α 10 exp { α 10 2 / ( 2 v 11 ) } Γ 0 [ α 10 / v 11 ] .
Inserting (A39) back into (A37) yields
E [ z ¯ 2 ] = E [ z 2 | z 1 α 1 ] = m 2 v 12 exp { α 10 2 / ( 2 v 11 ) } 2 π v 11 Γ 0 [ α 10 / v 11 ] .
Furthermore, substituting (A39) and (A40) into (A38) leads to
E [ z ¯ 2 2 ] = E [ z 2 2 | z 1 α 1 ] = v 11 v 22 v 12 2 v 11 + v 12 2 v 11 2 E [ z 10 2 | z 10 α 10 ] + 2 m 2 v 12 v 11 E [ z 10 | z 10 α 10 ] + m 2 2 = v 22 v 12 2 2 π v 11 α 10 exp { α 10 2 / ( 2 v 11 ) } v 11 Γ 0 [ α 10 / v 11 ] 2 m 2 v 12 2 π v 11 exp { α 10 2 / ( 2 v 11 ) } Γ 0 [ α 10 / v 11 ] + m 2 2 ,
which together with (A41) gives
var [ z ¯ 2 ] = E [ z ¯ 2 2 ] ( E [ z ¯ 2 ] ) 2 = v 22 v 12 2 2 π v 11 Γ 0 [ α 10 / v 11 ] ( α 10 exp { α 10 2 / ( 2 v 11 ) } v 11 + exp { α 10 2 / v 11 } 2 π Γ 0 [ α 10 / v 11 ] ) .
Applying (A41) and (A43) produces
Pr { z 2 α 2 } = α 2 1 2 π var [ z ¯ 2 ] exp { ( t E [ z ¯ 2 ] ) 2 / ( 2 var [ z ¯ 2 ] ) } d t = ( α 2 E [ z ¯ 2 ] ) / var [ z ¯ 2 ] 1 2 π exp { t 2 / 2 } d t = Γ 0 [ ( α 2 E [ z ¯ 2 ] ) / var [ z ¯ 2 ] ] .
Combining (A33), (A34), and (A44) completes the proof.
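The conditional mean in (A41), and the corresponding conditional variance, can be verified by simulation. The sketch below uses arbitrary placeholder moments, takes Γ 0 [ ⋅ ] to be the standard normal CDF as in (A34), and writes the variance in the standard truncated-normal form.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
m1, m2 = 1.0, -0.5
v11, v22, v12 = 2.0, 1.5, 0.8                 # placeholder first- and second-order moments
alpha1 = 1.4

# draw correlated Gaussian pairs and keep z2 for the realizations with z1 <= alpha1
z = rng.multivariate_normal([m1, m2], [[v11, v12], [v12, v22]], size=500000)
kept = z[z[:, 0] <= alpha1, 1]

# closed-form conditional moments (standard truncated-normal results; the mean matches (A41))
beta = (alpha1 - m1) / np.sqrt(v11)
h = norm.pdf(beta) / norm.cdf(beta)
mean_theory = m2 - v12 * h / np.sqrt(v11)
var_theory = v22 - (v12 ** 2 / v11) * (beta * h + h ** 2)

print(round(kept.mean(), 4), round(mean_theory, 4))
print(round(kept.var(), 4), round(var_theory, 4))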

Appendix F—Proof of (62)

Making use of simple properties of probability, it can be readily verified that
Pr { | < p ˜ > 1 | Δ 1 , | < p ˜ > 2 | Δ 2 } = Pr { Δ 1 < p ˜ > 1 Δ 1 , Δ 2 < p ˜ > 2 Δ 2 } = Pr { Δ 1 < p ˜ > 1 Δ 1 } Pr { Δ 1 < p ˜ > 1 Δ 1 , < p ˜ > 2 Δ 2 } Pr { Δ 1 < p ˜ > 1 Δ 1 , < p ˜ > 2 Δ 2 } .
Likewise, we have
Pr { Δ 1 < p ˜ > 1 Δ 1 , < p ˜ > 2 Δ 2 } = Pr { < p ˜ > 2 Δ 2 } Pr { < p ˜ > 1 Δ 1 , < p ˜ > 2 Δ 2 } Pr { < p ˜ > 1 Δ 1 , < p ˜ > 2 Δ 2 } = Pr { < p ˜ > 2 Δ 2 } Pr { < p ˜ > 1 Δ 1 , < p ˜ > 2 Δ 2 } Pr { < p ˜ > 1 Δ 1 , < p ˜ > 2 Δ 2 } ,
Pr { ε 1 < p ˜ > 1 Δ 1 , < p ˜ > 2 Δ 2 } = Pr { < p ˜ > 2 Δ 2 } Pr { < p ˜ > 1 Δ 1 , < p ˜ > 2 Δ 2 } Pr { < p ˜ > 1 Δ 1 , < p ˜ > 2 Δ 2 } = Pr { < p ˜ > 2 Δ 2 } Pr { < p ˜ > 1 Δ 1 , < p ˜ > 2 Δ 2 } Pr { < p ˜ > 1 Δ 1 , < p ˜ > 2 Δ 2 } .
Inserting (A46) and (A47) back into (A45) yields (62).

Appendix G—Detailed Derivation of Matrices in (92)

Performing algebraic manipulation, and using (80), we have
Ω p H Ω p = n = 1 N k = 1 K | β n | 2 | s ¯ k | 2 ( a n , k ( p ) p T ) H a n , k ( p ) p T ,
Ω Re { s ¯ } H Ω Re { s ¯ } = n = 1 N | β n | 2 A n H ( p ) A n ( p ) = n = 1 N | β n | 2 diag [ | | a n , 1 ( p ) | | 2 2     | | a n , 2 ( p ) | | 2 2         | | a n , K ( p ) | | 2 2 ] ,
Ω Re { β } H Ω Re { β } = diag [ k = 1 K | s ¯ k | 2 | | a 1 , k ( p ) | | 2 2 k = 1 K | s ¯ k | 2 | | a 2 , k ( p ) | | 2 2 k = 1 K | s ¯ k | 2 | | a N , k ( p ) | | 2 2 ] ,
Ω p H Ω Re { s ¯ } = [ n = 1 N | β n | 2 ( a n , 1 ( p ) p T ) H a n , 1 ( p ) n = 1 N | β n | 2 ( a n , 2 ( p ) p T ) H a n , 2 ( p ) n = 1 N | β n | 2 ( a n , K ( p ) p T ) H a n , K ( p ) ] diag [ s ¯ ] ,
Ω p H Ω Re { β } = [ k = 1 K | s ¯ k | 2 ( a 1 , k ( p ) p T ) H a 1 , k ( p ) k = 1 K | s ¯ k | 2 ( a 2 , k ( p ) p T ) H a 2 , k ( p ) k = 1 K | s ¯ k | 2 ( a N , k ( p ) p T ) H a N , k ( p ) ] diag [ β ] ,
Ω Re { β } H Ω Re { s ¯ } = diag [ β ] [ A 1 H ( p ) A 1 ( p ) s ¯ A 2 H ( p ) A 2 ( p ) s ¯ A N H ( p ) A N ( p ) s ¯ ] H .
Firstly, inserting (A48), (A49), and (A51) into the first equality in (92) yields
V 1 , 1 = Ω p H Ω p Ω p H Ω Re { s ¯ } ( Ω Re { s ¯ } H Ω Re { s ¯ } ) 1 Ω Re { s ¯ } H Ω p = n = 1 N k = 1 K | β n | 2 | s ¯ k | 2 ( a n , k ( p ) p T ) H a n , k ( p ) p T k = 1 K ( | s ¯ k | 2 n = 1 N | β n | 2 | | a n , k ( p ) | | 2 2 ) ( n 1 = 1 N n 2 = 1 N | β n 1 β n 2 | 2 ( a n 1 , k ( p ) p T ) H a n 1 , k ( p ) a n 2 , k H ( p ) a n 2 , k ( p ) p T ) .
Secondly, substituting (A49), (A51), (A52), and (A53) into the second equality in (92) gives
V 1 , 2 = [ 1    j ] ( Ω p H Ω Re { β } Ω p H Ω Re { s ¯ } ( Ω Re { s ¯ } H Ω Re { s ¯ } ) 1 Ω Re { s ¯ } H Ω Re { β } ) = [ 1    j ] [ V 1 , 2 ( 1 )    V 1 , 2 ( 2 )       V 1 , 2 ( N ) ] ,
where
V 1 , 2 ( n ) = k = 1 K | s ¯ k | 2 ( a n , k ( p ) p T ) H a n , k ( p ) k = 1 K β n | s ¯ k | 2 | | a n , k ( p ) | | 2 2 n 1 = 1 N | β n 1 | 2 | | a n 1 , k ( p ) | | 2 2 n 2 = 1 N | β n 2 | 2 ( a n 2 , k ( p ) p T ) H a n 2 , k ( p ) ( 1 n N ) .
Finally, putting (A49), (A50), and (A53) into the third equality in (92) leads to
V 1 , 3 = [ 1 j j 1 ] ( Ω Re { β } H Ω Re { β } Ω Re { β } H Ω Re { s ¯ } ( Ω Re { s ¯ } H Ω Re { s ¯ } ) 1 Ω Re { s ¯ } H Ω Re { β } ) = [ 1 j j 1 ] ( diag [ k = 1 K | s ¯ k | 2 | | a 1 , k ( p ) | | 2 2 k = 1 K | s ¯ k | 2 | | a 2 , k ( p ) | | 2 2 k = 1 K | s ¯ k | 2 | | a N , k ( p ) | | 2 2 ] diag [ β ] [ A 1 H ( p ) A 1 ( p ) s ¯ A 2 H ( p ) A 2 ( p ) s ¯ A N H ( p ) A N ( p ) s ¯ ] H × diag [ ( n = 1 N | β n | 2 | | a n , 1 ( p ) | | 2 2 ) 1 ( n = 1 N | β n | 2 | | a n , 2 ( p ) | | 2 2 ) 1 ( n = 1 N | β n | 2 | | a n , K ( p ) | | 2 2 ) 1 ] × [ A 1 H ( p ) A 1 ( p ) s ¯ A 2 H ( p ) A 2 ( p ) s ¯ A N H ( p ) A N ( p ) s ¯ ] diag [ β ] ) .

Appendix H—Proof of (96)

We start by introducing a real array model error vector φ ˜ r = [ Re T { φ ˜ }    Im T { φ ˜ } ] T with probability density function given by
f φ ˜ r ( z ) = ( 2 π ) M N | det [ Φ ] | 1 / 2 exp { z T Φ 1 z / 2 } .
When the deterministic and stochastic parameters coexist, the Fisher information matrix (FIM) for vector η b is given by [68,69],
< FISH ( η b ) > i j = E [ 2 f ml ( η b | x ¯ ) < η b > i < η b > j ] + E [ 1 2 2 φ ˜ T Φ 1 φ ˜ < η b > i < η b > j ] ,
where f ml ( η b | x ¯ ) is the likelihood function of the compound data vector x ¯ . Combining (A59) with the results in [66,67], the FIM for the vector μ b can be expressed as
< FISH ( μ b ) > i j = 2 σ ε 2 < Re { Ω μ b H Ω μ b } > i j + < Φ 1 > i j δ ( i , j ) ,
where δ ( i , j ) is an indicator function such that δ ( i , j ) = 1 if both i and j correspond to elements of φ ˜ r , and δ ( i , j ) = 0 otherwise. It follows from (A60) that
CRB ( μ b ) = ( FISH ( μ b ) ) 1 = ( 2 σ ε 2 Re { Ω μ b H Ω μ b } + [ O O O Φ 1 ] ) 1 ,
which completes the proof.

Appendix I—Detailed Derivation of Matrices in (101)

Note that matrices Ω p H Ω p , Ω Re { β } H Ω Re { β } , Ω Re { s ¯ } H Ω Re { s ¯ } , Ω p H Ω Re { β } , Ω p H Ω Re { s ¯ } , and Ω Re { β } H Ω Re { s ¯ } are given in (A48) to (A53). Therefore, to calculate the matrices in (101), we only need to derive the expressions for matrices Ω Re { φ ˜ } H Ω Re { φ ˜ } , Ω p H Ω Re { φ ˜ } , Ω Re { β } H Ω Re { φ ˜ } , and Ω Re { s ¯ } H Ω Re { φ ˜ } . It follows from (99) that
Ω Re { φ ˜ } H Ω Re { φ ˜ } = blkdiag [ | β 1 | 2 | | s ¯ | | 2 2 I M | β 2 | 2 | | s ¯ | | 2 2 I M | β N | 2 | | s ¯ | | 2 2 I M ] ,
Ω p H Ω Re { φ ˜ } = [ k = 1 K | β 1 | 2 | s ¯ k | 2 exp { j ω k τ 1 ( p ) } ( a 1 , k ( p ) p T ) H k = 1 K | β 2 | 2 | s ¯ k | 2 exp { j ω k τ 2 ( p ) } ( a 2 , k ( p ) p T ) H k = 1 K | β N | 2 | s ¯ k | 2 exp { j ω k τ N ( p ) } ( a N , k ( p ) p T ) H ] ,
Ω Re { β } H Ω Re { φ ˜ } = blkdiag [ | | s ¯ | | 2 2 β 1 a 1 H ( p ) | | s ¯ | | 2 2 β 2 a 2 H ( p ) | | s ¯ | | 2 2 β N a N H ( p ) ] ,
Ω Re { s ¯ } H Ω Re { φ ˜ } = [ | β 1 | 2 A 1 H ( p ) ( s ¯ 1 I M ) | β 2 | 2 A 2 H ( p ) ( s ¯ 2 I M ) | β N | 2 A N H ( p ) ( s ¯ N I M ) ] = [ | β 1 | 2 s ¯ a 1 H ( p ) | β 2 | 2 s ¯ a 2 H ( p ) | β N | 2 s ¯ a N H ( p ) ] .

References

  1. Schmidt, R.O. Multiple emitter location and signal parameter estimation. IEEE Trans. Antennas Propag. 1986, 34, 267–280. [Google Scholar] [CrossRef]
  2. Stoica, P.; Nehorai, A. MUSIC, maximum likelihood, and Cramér-Rao bound. IEEE Trans. Acoust. Speech Signal Process. 1989, 37, 720–741. [Google Scholar] [CrossRef]
  3. Viberg, M.; Ottersten, B. Sensor array processing based on subspace fitting. IEEE Trans. Signal Process. 1991, 39, 1110–1121. [Google Scholar] [CrossRef]
  4. Liao, B.; Chan, S.C.; Huang, L.; Guo, C. Iterative methods for subspace and DOA estimation in nonuniform noise. IEEE Trans. Signal Process. 2016, 64, 3008–3020. [Google Scholar] [CrossRef]
  5. Sun, F.; Gao, B.; Chen, L.; Lan, P. A low-complexity ESPRIT-based DOA estimation method for co-prime linear arrays. Sensors 2016, 16, 1367. [Google Scholar] [CrossRef] [PubMed]
  6. Nardone, S.C.; Graham, M.L. A closed-form solution to bearings-only target motion analysis. IEEE J. Ocean. Eng. 1997, 22, 168–178. [Google Scholar] [CrossRef]
  7. Kutluyil, D. Bearings-only target localization using total least squares. Signal Process. 2005, 85, 1695–1710. [Google Scholar]
  8. Lin, Z.; Han, T.; Zheng, R.; Fu, M. Distributed localization for 2-D sensor networks with bearing-only measurements under switching topologies. IEEE Trans. Signal Process. 2016, 64, 6345–6359. [Google Scholar] [CrossRef]
  9. Yang, K.; An, J.; Bu, X.; Sun, G. Constrained total least-squares location algorithm using time-difference-of-arrival measurements. IEEE Trans. Veh. Technol. 2010, 59, 1558–1562. [Google Scholar] [CrossRef]
  10. Jiang, W.; Xu, C.; Pei, L.; Yu, W. Multidimensional Scaling-Based TDOA Localization Scheme Using an Auxiliary Line. IEEE Signal Process. Lett. 2016, 23, 546–550. [Google Scholar] [CrossRef]
  11. Ma, Z.H.; Ho, K.C. TOA localization in the presence of random sensor position errors. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Prague, Czech, 22–27 May 2011; pp. 2468–2471. [Google Scholar]
  12. Shen, H.; Ding, Z.; Dasgupta, S.; Zhao, C. Multiple source localization in wireless sensor networks based on time of arrival measurement. IEEE Trans. Signal Process. 2014, 62, 1938–1949. [Google Scholar] [CrossRef]
  13. Yu, H.G.; Huang, G.M.; Gao, J.; Liu, B. An efficient constrained weighted least squares algorithm for moving source location using TDOA and FDOA measurements. IEEE Trans. Wirel. Commun. 2012, 11, 44–47. [Google Scholar] [CrossRef]
  14. Wang, G.; Li, Y.; Ansari, N. A semidefinite relaxation method for source localization Using TDOA and FDOA Measurements. IEEE Trans. Veh. Technol. 2013, 62, 853–862. [Google Scholar] [CrossRef]
  15. Mason, J. Algebraic two-satellite TOA/FOA position solution on an ellipsoidal earth. IEEE Trans. Aerosp. Electron. Syst. 2004, 40, 1087–1092. [Google Scholar] [CrossRef]
  16. Cheung, K.W.; So, H.C.; Ma, W.K.; Chan, Y.T. Received signal strength based mobile positioning via constrained weighted least squares. In Proceedings of the IEEE International Conference on Acoustic, Speech and Signal Processing, Hong Kong, China, 6–8 April 2003; pp. 137–140. [Google Scholar]
  17. Ho, K.C.; Sun, M. An accurate algebraic closed-form solution for energy-based source localization. IEEE Trans. Audio Speech Lang. Process. 2007, 15, 2542–2550. [Google Scholar] [CrossRef]
  18. Wax, M.; Kailath, T. Decentralized processing in sensor arrays. IEEE Trans. Signal Process. 1985, 33, 1123–1129. [Google Scholar] [CrossRef]
  19. Stoica, P. On reparametrization of loss functions used in estimation and the invariance principle. Signal Process. 1989, 17, 383–387. [Google Scholar] [CrossRef]
  20. Amar, A.; Weiss, A.J. Localization of narrowband radio emitters based on Doppler frequency shifts. IEEE Trans. Signal Process. 2008, 56, 5500–5508. [Google Scholar] [CrossRef]
  21. Wang, D.; Wu, Y. Statistical performance analysis of direct position determination method based on doppler shifts in presence of model errors. Multidimens. Syst. Signal Process. 2017, 28, 149–182. [Google Scholar] [CrossRef]
  22. Tzoreff, E.; Weiss, A.J. Expectation-maximization algorithm for direct position determination. Signal Process. 2017, 97, 32–39. [Google Scholar] [CrossRef]
  23. Vankayalapati, N.; Kay, S.; Ding, Q. TDOA based direct positioning maximum likelihood estimator and the Cramer-Rao bound. IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 1616–1634. [Google Scholar] [CrossRef]
  24. Xia, W.; Liu, W.; Zhu, L.F. Distributed adaptive direct position determination based on diffusion framework. J. Syst. Eng. Electron. 2016, 27, 28–38. [Google Scholar]
  25. Weiss, A.J. Direct geolocation of wideband emitters based on delay and Doppler. IEEE Trans. Signal Process. 2011, 59, 2513–5520. [Google Scholar] [CrossRef]
  26. Pourhomayoun, M.; Fowler, M.L. Distributed computation for direct position determination emitter location. IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 2878–2889. [Google Scholar] [CrossRef]
  27. Bar-Shalom, O.; Weiss, A.J. Emitter geolocation using single moving receiver. Signal Process. 2014, 94, 70–83. [Google Scholar] [CrossRef]
  28. Li, J.Z.; Yang, L.; Guo, F.C.; Jiang, W.L. Coherent summation of multiple short-time signals for direct positioning of a wideband source based on delay and Doppler. Digit. Signal Process. 2016, 48, 58–70. [Google Scholar] [CrossRef]
  29. Weiss, A.J. Direct position determination of narrowband radio frequency transmitters. IEEE Signal Process. Lett. 2004, 11, 513–516. [Google Scholar] [CrossRef]
  30. Weiss, A.J.; Amar, A. Direct position determination of multiple radio signals. EURASIP J. Appl. Signal Process. 2005, 2005, 37–49. [Google Scholar] [CrossRef]
  31. Amar, A.; Weiss, A.J. A decoupled algorithm for geolocation of multiple emitters. Signal Process. 2007, 87, 2348–2359. [Google Scholar] [CrossRef]
  32. Tirer, T.; Weiss, A.J. High resolution direct position determination of radio frequency sources. IEEE Signal Process. Lett. 2016, 23, 192–196. [Google Scholar] [CrossRef]
  33. Tzafri, L.; Weiss, A.J. High-resolution direct position determination using MVDR. IEEE Trans. Wirel. Commun. 2016, 15, 6449–6461. [Google Scholar] [CrossRef]
  34. Amar, A.; Weiss, A.J. Direct position determination in the presence of model errors—known waveforms. Digit. Signal Process. 2006, 16, 52–83. [Google Scholar] [CrossRef]
  35. Demissie, B. Direct localization and detection of multiple sources in multi-path environments. In Proceedings of the IEEE International Conference on Information Fusion, Chicago, IL, USA, 5–8 July 2011; pp. 1–8. [Google Scholar]
  36. Papakonstantinou, K.; Slock, D. Direct location estimation for MIMO systems in multipath environments. In Proceedings of the IEEE International Conference on Global Telecommunications, New Orleans, LA, USA, 30 November–4 December 2008; pp. 1–5. [Google Scholar]
  37. Bar-Shalom, O.; Weiss, A.J. Efficient direct position determination of orthogonal frequency division multiplexing signals. IET Radar Sonar Navig. 2009, 3, 101–111. [Google Scholar] [CrossRef]
  38. Reuven, A.M.; Weiss, A.J. Direct position determination of cyclostationary signals. Signal Process. 2009, 89, 2448–2464. [Google Scholar] [CrossRef]
  39. Oispuu, M.; Nickel, U. Direct detection and position determination of multiple sources with intermittent emission. Signal Process. 2010, 90, 3056–3064. [Google Scholar] [CrossRef]
  40. Shen, J.; Shen, J.; Chen, X.F.; Huang, X.Y.; Susilo, W. An efficient public auditing protocol with novel dynamic structure for cloud data. IEEE Trans. Inf. Forensics Secur. 2017, PP, 1. [Google Scholar] [CrossRef]
  41. Fu, Z.J.; Ren, K.; Shu, J.G.; Sun, X.M.; Huang, F.X. Enabling personalized search over encrypted outsourced data with efficiency improvement. IEEE Trans. Parallel Distrib. Syst. 2016, 27, 2546–2559. [Google Scholar] [CrossRef]
  42. Sun, Y.J.; Gu, F.H. Compressive sensing of piezoelectric sensor response signal for phased array structural health monitoring. Int. J. Sens. Netw. 2017, 23, 258–264. [Google Scholar] [CrossRef]
  43. Friedlander, B. A sensitivity analysis of the MUSIC algorithm. IEEE Trans. Acoust. Speech Signal Process. 1990, 38, 1740–1751. [Google Scholar] [CrossRef]
  44. Swindlehurst, A.; Kailath, T. A performance analysis of subspace-based methods in the presence of model errors, part I: The MUSIC algorithm. IEEE Trans. Signal Process. 1992, 40, 1758–1774. [Google Scholar] [CrossRef]
  45. Ferréol, A.; Larzabal, P.; Viberg, M. On the asymptotic performance analysis of subspace DOA estimation in the presence of modeling errors: Case of MUSIC. IEEE Trans. Signal Process. 2006, 54, 907–920. [Google Scholar] [CrossRef]
  46. Ferréol, A.; Larzabal, P.; Viberg, M. On the resolution probability of MUSIC in presence of modeling errors. IEEE Trans. Signal Process. 2008, 56, 1945–1953. [Google Scholar] [CrossRef]
  47. Ferréol, A.; Larzabal, P.; Viberg, M. Statistical analysis of the MUSIC algorithm in the presence of modeling errors, taking into account the resolution probability. IEEE Trans. Signal Process. 2010, 58, 4156–4166. [Google Scholar] [CrossRef]
  48. Inghelbrecht, V.; Verhaevert, J.; van Hecke, T.; Rogier, H. The influence of random element displacement on DOA estimates obtained with (Khatri-Rao-) root-MUSIC. Sensors 2014, 14, 21258–21280. [Google Scholar] [CrossRef] [PubMed]
  49. Huang, Z.T.; Liu, Z.M.; Liu, J.; Zhou, Y.Y. Performance analysis of MUSIC for non-circular signals in the presence of mutual coupling. IET Signal Process. 2010, 4, 703–711. [Google Scholar] [CrossRef]
  50. Khodja, M.; Belouchrani, A.; Abed-Meraim, K. Performance analysis for time-frequency MUSIC algorithm in presence of both additive noise and array calibration errors. EURASIP J. Adv. Signal Process. 2012, 2012, 94–104. [Google Scholar] [CrossRef]
  51. Hari, K.V.S.; Gummadavelli, U. Effect of spatial smoothing on the performance of subspace methods in the presence of array model errors. Automatica 1994, 30, 11–26. [Google Scholar] [CrossRef]
  52. Soon, V.C.; Huang, Y.F. An analysis of ESPRIT under random sensor uncertainties. IEEE Trans. Signal Process. 1992, 40, 2353–2358. [Google Scholar] [CrossRef]
  53. Swindlehurst, A.; Kailath, T. A performance analysis of subspace-based methods in the presence of model errors: Part II-Multidimensional algorithm. IEEE Trans. Signal Process. 1993, 41, 2882–2890. [Google Scholar] [CrossRef]
  54. Friedlander, B. Sensitivity analysis of the maximum likelihood direction-finding algorithm. IEEE Trans. Aerosp. Electron. Syst. 1990, 26, 708–717. [Google Scholar] [CrossRef]
  55. Ferréol, A.; Larzabal, P.; Viberg, M. Performance prediction of maximum-likelihood direction-of-arrival estimation in the presence of modeling errors. IEEE Trans. Signal Process. 2008, 56, 4785–4793. [Google Scholar] [CrossRef]
  56. Cao, X.; Xin, J.M.; Nishio, Y.; Zheng, N.N. Spatial signature estimation with an uncalibrated uniform linear array. Sensors 2015, 15, 13899–13915. [Google Scholar] [CrossRef] [PubMed]
  57. Wang, W.J.; Ren, S.W.; Ding, Y.T.; Wang, H.Y. An efficient algorithm for direction finding against unknown mutual coupling. Sensors 2014, 14, 20064–20077. [Google Scholar] [CrossRef] [PubMed]
  58. Weiss, A.J.; Friedlander, B. DOA and steering vector estimation using a partially calibrated array. IEEE Trans. Aerosp. Electron. Syst. 1996, 32, 1047–1057. [Google Scholar] [CrossRef]
  59. Pesavento, M.; Gershman, A.B.; Wong, K.M. Direction finding in partly-calibrated sensor arrays composed of multiple subarrays. IEEE Trans. Signal Process. 2002, 55, 2103–2115. [Google Scholar] [CrossRef]
  60. Wang, B.; Wang, W.; Gu, Y.J.; Lei, S.J. Underdetermined DOA estimation of quasi-stationary signals using a partly-calibrated array. Sensors 2017, 17, 702. [Google Scholar] [CrossRef] [PubMed]
  61. Amar, A.; Weiss, A.J. Analysis of the direct position determination approach in the presence of model errors. In Proceedings of the IEEE Convention on Electrical and Electronics Engineers, Telaviv, Israel, 6–7 September 2004; pp. 408–411. [Google Scholar]
  62. Amar, A.; Weiss, A.J. Analysis of direct position determination approach in the presence of model errors. In Proceedings of the IEEE Workshop on Statistical Signal Processing, Novosibirsk, Russia, 17–20 July 2005; pp. 521–524. [Google Scholar]
  63. Rao, C.R. Linear Statistical Inference and Its Application, 2nd ed.; Wiley: New York, NY, USA, 2002. [Google Scholar]
  64. Imhof, J.P. Computing the distribution of quadratic forms in normal variables. Biometrika 1961, 48, 419–426. [Google Scholar] [CrossRef]
  65. Torrieri, D.J. Statistical theory of passive location systems. IEEE Trans. Aerosp. Electron. Syst. 1984, 20, 183–198. [Google Scholar] [CrossRef]
  66. Gu, H. Linearization method for finding Cramér-Rao bounds in signal processing. IEEE Trans. Signal Process. 2000, 48, 543–545. [Google Scholar]
  67. Stoica, P.; Larsson, E.G. Comments on “Linearization method for finding Cramér-Rao bounds in signal processing”. IEEE Trans. Signal Process. 2001, 49, 3168–3169. [Google Scholar] [CrossRef]
  68. Viberg, M.; Swindlehurst, A.L. A Bayesian approach to auto-calibration for parametric array signal processing. IEEE Trans. Signal Process. 1994, 42, 3495–3507. [Google Scholar] [CrossRef]
  69. Jansson, M.; Swindlehurst, A.L.; Ottersten, B. Weighted subspace fitting for general array error models. IEEE Trans. Signal Process. 1998, 46, 2484–2498. [Google Scholar] [CrossRef]
  70. Wang, D. Sensor array calibration in presence of mutual coupling and gain/phase errors by combining the spatial-domain and time-domain waveform information of the calibration sources. Circuits Syst. Signal Process. 2013, 32, 1257–1292. [Google Scholar] [CrossRef]
Figure 1. Location geometry for simulation.
Figure 2. Root-mean-square-error (RMSE) of direct position determination (DPD) versus signal-to-noise ratio (SNR) of the emitter signal.
Figure 3. RMSE of DPD versus standard deviation of array model error.
Figure 4. RMSE of DPD versus number of array elements.
Figure 5. RMSE of DPD versus ratio of intersensor spacing to wavelength.
Figure 6. RMSE of DPD versus number of snapshots.
Figure 7. Source location scenario for simulation.
Figure 8. RMSE of DPD as a function of SNR of the emitter signal.
Figure 9. RMSE of DPD as a function of standard deviation of sensor gain perturbation.
Figure 10. RMSE of DPD as a function of number of array elements.
Figure 11. RMSE of DPD as a function of ratio of array radius to wavelength.
Figure 12. RMSE of DPD as a function of number of snapshots.
Figure 13. Success probability (SP) of localization versus SNR of the emitter signal. (a) The first SP of localization versus SNR of the emitter signal. (b) The second SP of localization versus SNR of the emitter signal.
Figure 14. SP of localization versus standard deviation of array model error. (a) The first SP of localization versus standard deviation of array model error. (b) The second SP of localization versus standard deviation of array model error.
Figure 15. SP of localization versus number of snapshots. (a) The first SP of localization versus number of snapshots. (b) The second SP of localization versus number of snapshots.
Figure 16. SP of localization as a function of SNR of the emitter signal. (a) The first SP of localization as a function of SNR of the emitter signal. (b) The second SP of localization as a function of SNR of the emitter signal.
Figure 17. SP of localization as a function of standard deviation of sensor gain perturbation. (a) The first SP of localization as a function of standard deviation of sensor gain perturbation. (b) The second SP of localization as a function of standard deviation of sensor gain perturbation.
Figure 18. SP of localization as a function of number of snapshots. (a) The first SP of localization as a function of number of snapshots. (b) The second SP of localization as a function of number of snapshots.
Figure 19. Radius of CEP versus SNR of the emitter signal in the first experiment.
Figure 20. Radius of circular error probable (CEP) versus SNR of the emitter signal in the second experiment.
Table 1. Notational conventions.
⊗ : Kronecker product
⊙ : Schur (element-wise) product
diag [ ] : a diagonal matrix with diagonal entries formed from the vector
blkdiag [ ] : a block-diagonal matrix formed from the matrices or vectors
[ ] : Moore-Penrose inverse of the matrix
I n : n × n identity matrix
i n ( k ) : the kth column vector of I n
O n × m : n × m matrix of zeros
1 n × 1 : n × 1 vector of ones
λ max { } : the largest eigenvalue of the matrix
| | | | 2 : Euclidean norm
< > n : the nth entry of the vector
< > n m : the nmth entry of the matrix
Re { } : real part of the argument
Im { } : imaginary part of the argument
Pr { } : probability of the given event
E [ ] : mathematical expectation of the random variable
var [ ] : variance of the random variable
