Abstract
Target location is the basic application of a multistatic sonar system. Determining the position/velocity vector of a target from the related sonar observations is a nonlinear estimation problem. The presence of sensor position uncertainties turns it into an even more challenging hybrid parameter estimation problem. Conventional gradient-based iterative estimators suffer from initialization difficulties and local convergence; even when initialization and convergence are not an issue, they usually incur a large computational cost. In view of these drawbacks, we develop a computationally efficient non-iterative position/velocity estimator whose main numerical computation is weighted least squares optimization. Parameter transformation, model linearization and two-stage processing are exploited to avoid iterative computation. Through performance analysis and experimental verification, we find that the proposed estimator attains the hybrid Cramér–Rao bound and has linear computational complexity.
1. Introduction
In recent years, there has been a lively interest in target location using multistatic sonars [1,2,3,4,5,6,7,8,9,10,11,12,13]. In a multistatic sonar system, the sum of each pair of transmitter–target range and target–receiver range defines an ellipse. Then the target is at the intersection of all these ellipses [7]. The elliptical location encountered in the multistatic sonar systems has also been considered in the MIMO radar [14,15,16,17,18,19,20], multistatic radar [21,22,23,24,25] and indoor positioning systems [26,27].
A considerable amount of literature has been published on the problem of estimating the coordinates of the intersection of the ellipses, which can be statistically modelled as a nonlinear estimation problem. To resolve the essential nonlinearity in the problem, linearization is a natural idea. In particular, the measurement equations were linearized by Taylor expansion, resulting in an iterative algorithm [20]. As an alternative to Taylor expansion, introducing nuisance parameters is another route to linearization. For example, the classic spherical-interpolation [28] and spherical-intersection [29] methods were ported to the elliptical location problem [25]; however, the estimation accuracy in [25] is not optimal. Slightly more complex than the linear models, a quadratically constrained least squares model was constructed, which is generally difficult to solve effectively [27,30]. More recently, following the other major methodology for parameter estimation, a Bayesian estimator involving formidable numerical integration was presented for elliptical location [4]. Intuitively, integrating other kinds of observations helps improve the positioning accuracy. For instance, Doppler shift measurements were incorporated to improve the position estimate and additionally identify the velocity [5].
In addition to the difficulties raised by the high nonlinearity in the statistical models, another obstacle in the multistatic sonar location is that the complex ocean environments introduce uncertainties in the positions of the transmitters and receivers. Preliminary work considering sensor location errors in elliptical location was reported in the literature [6,10,13]. Recent advances have seen an efficient non-iterative estimator for the multistatic sonar location [5,6] inspired by the renowned work of [31].
Perturbation analysis of least squares problems is a major topic in numerical linear algebra. Related work has focused on establishing various error bounds [32,33,34]. We combine the basic techniques of perturbation analysis with multivariate statistics [35] to quantitatively evaluate the estimators for a nonlinear estimation problem.
On the basis of the above work, our technical contributions are summarized here.
- We establish a statistical model of determining both the position and velocity of a moving target in a multistatic sonar system using differential delays and Doppler shifts. The uncertainties in the sensor positions are carefully taken into account in our model. The performance limit is developed for this problem.
- To tackle the proposed nonlinear hybrid parameter estimation problem, we design an efficient non-iterative solution using parameter transformation, model linearization and two-stage processing.
- We further analyze the bias vector and covariance matrix of our estimator theoretically using the second/first-order perturbation analysis and multivariate statistics.
- We prove that the proposed estimator has approximate statistical efficiency and linear complexity.
The rest of this paper is organized as follows. Section 2 lists the notational conventions that will be used throughout the paper. Section 3 provides the location scenario and formulates the problem as a nonlinear estimation problem. In Section 4, we evaluate the performance limit for the proposed problem. Section 5 is devoted to developing our estimator. Then, Section 6 analyzes the bias vector and covariance matrix of our estimator up to the second/first-order random errors. Section 7 contains comprehensive Monte Carlo simulation results, and finally Section 8 draws the conclusion.
2. Notational Conventions
We will use bold lowercase letters to denote column vectors and bold uppercase letters to denote matrices. Specifically, 0 is a zero matrix, 1 is a matrix of ones, and I is an identity matrix, all of appropriate size. The operators ⊗ and ∘ represent the Kronecker product and the Hadamard product, respectively. The expression A ⪰ 0 means that A is a positive semidefinite matrix. diag(a) is the square diagonal matrix with the elements of the vector a on its main diagonal, and blkdiag(A1, …, An) is the block diagonal matrix created by aligning the matrices A1, …, An along its diagonal. When we want to access selected elements of a vector/matrix, we imitate the syntax of the MATLAB programming language. For simplicity of presentation, we use numerous symbols and notations; they are summarized in Table 1 for quick reference. For the sake of readability, the text also includes relevant explanations of these symbols and notations.
Table 1.
List of symbols and notations (□ as a placeholder).
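As a minimal illustration of these conventions in code (a sketch of our own; the variable names are arbitrary), the operators map to NumPy/SciPy as follows.

```python
import numpy as np
from scipy.linalg import block_diag

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])

K = np.kron(A, B)              # Kronecker product, the operator ⊗
H = A * B                      # Hadamard (element-wise) product, the operator ∘
D = np.diag([1.0, 2.0, 3.0])   # square diagonal matrix built from a vector
Bd = block_diag(A, B)          # block diagonal matrix built from A and B
first_col = A[:, 0]            # MATLAB-style A(:,1) becomes A[:, 0] (0-based indexing)
```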
3. Problem Formulation and Statistical Model
We now turn to the mathematical formulation of the problem. In the multistatic sonar location scenario here, the transmitters and receivers are stationary and the target is moving. Let M be the number of transmitters and N be the number of receivers. We consider a two-dimensional location scenario. The unknown position vector and velocity vector of the target are denoted by and . For simplicity, the complete unknown parameter vector will be denoted by
To characterize the sensor location errors, the position vectors of the i-th transmitter and j-th receiver are modeled as random vectors and respectively, where and . We write compactly
where and . Generally, it can be assumed that
where the nominal positions of the sensors and the covariance matrix are known [6]. Then the sensor position error vector is .
Physically, each transmitter radiates a sonar signal and all receivers observe the signals both from direct propagation and from indirect reflection of the target. Thus, the observation model of differential delay time between and is
where c is the signal propagation speed and is the observation noise of [6]. Furthermore, as the target is moving, we can also obtain the observation model of range rate (i.e., the Doppler shift measurements divided by the carrier frequency) between and , that is,
where is the observation noise of . For the notations and , see Table 1.
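As an illustrative sketch only (the displayed model equations are not reproduced above), the noise-free differential delays and range rates can be computed as below, assuming the standard bistatic-range-minus-direct-path model of [6]; the function and variable names are our own.

```python
import numpy as np

def noise_free_observations(u, v, transmitters, receivers, c=1500.0):
    """Noise-free differential delays and range rates for a moving target.

    u, v         : target position and velocity (length-2 arrays)
    transmitters : list of transmitter positions (length-2 arrays)
    receivers    : list of receiver positions (length-2 arrays)
    c            : signal propagation speed in m/s
    Assumes delay = (||u - t|| + ||u - s|| - ||t - s||) / c, as in [6], and
    range rate = d/dt (||u - t|| + ||u - s||) with stationary sensors.
    """
    delays, rates = [], []
    for t in transmitters:
        for s in receivers:
            r_tu = np.linalg.norm(u - t)      # transmitter-target range
            r_us = np.linalg.norm(u - s)      # target-receiver range
            r_ts = np.linalg.norm(t - s)      # direct-path range
            delays.append((r_tu + r_us - r_ts) / c)
            rates.append(v @ (u - t) / r_tu + v @ (u - s) / r_us)
    return np.array(delays), np.array(rates)
```

In a simulation, the observed vector would then be obtained by adding zero-mean Gaussian noise with the stated covariance, and the sensor positions would be perturbed according to Equation (3).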
For the transmitter at position , all the related observations can be collected in an observation vector
where , and for . Then, the observations related to all the transmitters can be denoted by
Furthermore, it is assumed that the conditional distribution (given ) of the observed vector is of the form
where is the ideal error-free observation vector and is the covariance matrix of . The corresponding observation error vector can be denoted by .
As part of the observation model, the following small error assumptions are claimed.
- ,
- ,
- ,
- ,
- ,
- .
The physical motivation for these assumptions is that the position uncertainty of a given transmitter is small relative to its distance to the target and its distances to all the receivers, the position uncertainty of a given receiver is small relative to its distance to the target and its distances to all the transmitters, and the relative measurement errors are small. Besides, and are assumed to be statistically independent for ease of illustration.
Given the statistical model in Equation (8), the problem is to estimate the target position vector and velocity vector , i.e., , in real time and at a reasonable computational cost. Another significant task is the theoretical analysis of the statistical performance of the designed estimator.
We conclude this section with some comments. Generally, the small error assumptions can be satisfied by increasing the observation period in obtaining the differential delay time and range rate measurements in a nonsingular location geometry. In addition, as we will see in Section 5, our estimator requires accurate knowledge of the positive definite covariance matrices and . They can usually be obtained during the calibration stage of a multistatic sonar system. Specifically, some scattering models from the environment may also help determine .
4. Hybrid Cramér–Rao Bound
In order to set a benchmark before designing an estimator, we now evaluate the Hybrid Cramér–Rao Bound (HCRB) [36,37,38,39] for the hybrid parameter estimation problem proposed in Section 3. The HCRB provides a lower bound on the error covariance matrix of the estimator of a hybrid unknown parameter vector.
In our statistical model, the wanted parameter vector and the nuisance parameter vector (i.e., the actual sensor positions) are both unknown. What makes them different is that is deterministic, and is a random parameter vector. Such models arise in many applications where we want to investigate model uncertainty or environmental mismatch. Here, we consider and together as a hybrid parameter vector
Before moving on to the estimator design, we outline the procedures for deriving the HCRB. In such a hybrid parameter case, the HCRB is calculated using the joint probability density of the observed measurement vector and the sensor position vector . The hybrid information matrix can be expressed as the sum
where represents the contribution of observations and represents the contribution of the prior knowledge on . Note that the unknown parameter vector is in the mean vector of the multivariate normal distribution in Equation (8) and is a multivariate normal random vector itself as in Equation (3). Section 3 reveals that depends on , and c. In our model, the random parameter vector does not depend on the deterministic parameter vector . Thus, and is fairly easy to get [40]. Consequently,
When the levels of sensor positions’ uncertainties are small, according to the approximation principle suggested by [41], the expected value matrix in Equation (11) can be approximated by replacing random vector with its expected value vector . Then, from the blockwise inversion of and the matrix inversion lemma, we have the HCRB for the estimation of as follows:
For numerical computation using Equation (13), and are required. More information is available in Appendix A.
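In generic notation (the symbols below are ours, introduced only to make the structure explicit), let G_θ and G_β denote the Jacobians of the noise-free observation vector with respect to the target parameters and the sensor positions (Appendix A), Q_α the observation noise covariance, and Q_β the sensor position covariance. The computation then takes the standard form

\[
\mathbf{J} \;=\; \begin{bmatrix}\mathbf{G}_{\theta}^{T}\\ \mathbf{G}_{\beta}^{T}\end{bmatrix}\mathbf{Q}_{\alpha}^{-1}\begin{bmatrix}\mathbf{G}_{\theta} & \mathbf{G}_{\beta}\end{bmatrix} \;+\; \begin{bmatrix}\mathbf{0} & \mathbf{0}\\ \mathbf{0} & \mathbf{Q}_{\beta}^{-1}\end{bmatrix},
\]
\[
\mathrm{HCRB}(\boldsymbol{\theta}) \;=\; \Big(\mathbf{G}_{\theta}^{T}\mathbf{Q}_{\alpha}^{-1}\mathbf{G}_{\theta} \;-\; \mathbf{G}_{\theta}^{T}\mathbf{Q}_{\alpha}^{-1}\mathbf{G}_{\beta}\big(\mathbf{G}_{\beta}^{T}\mathbf{Q}_{\alpha}^{-1}\mathbf{G}_{\beta} + \mathbf{Q}_{\beta}^{-1}\big)^{-1}\mathbf{G}_{\beta}^{T}\mathbf{Q}_{\alpha}^{-1}\mathbf{G}_{\theta}\Big)^{-1},
\]

with both Jacobians evaluated at the nominal sensor positions, in line with the small-error approximation mentioned above.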
5. Estimator Design
In this section, we use Taylor expansion, introduce auxiliary variables and apply multi-stage processing to deal with the nonlinear estimation problem proposed in Section 3. In particular, our algorithm can be divided into two stages, each involving an unconstrained linear weighted least squares (WLS) computation that is computationally attractive. During the algorithm design and the performance analysis of our estimator, many matrix symbols are needed to simplify the presentation; they are listed in Table 2 for easy reference. These matrices arise naturally in a general weighted least squares problem. To avoid obscuring the design of the estimator, the reader is referred to Appendix B for the details.
Table 2.
List of matrix symbols.
Based on the Conditions 1 through 4 in Section 3, it follows from the first-order Taylor’s formula that
5.1. First Stage
Without loss of generality, let . Moving from the right side to the left side of Equation (19) and squaring both sides, we see that
Applying similar procedures to Equation (21) gives
If we define an unknown parameter vector as
where
then a linear system of equations can be obtained from Equation (23) and Equation (24) as
The details of , , , and are presented in Appendix C. Note that and are the first-order and second-order approximation errors, respectively.
By ignoring the second-order error term , the WLS solution to Equation (30) is
and has covariance matrix
where , and is the zero-order approximation of . The weighting matrix is the inverse of the covariance matrix of the approximation error , that is,
The computation of is straightforward. Because of the assumed statistical independence between and in Equation (A15),
where is shown in Equation (A16).
We get Equation (32) by the first-order perturbation analysis.
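A minimal NumPy sketch of this unconstrained WLS step (a generic implementation of our own, not the exact matrices of Equation (30)) is given below; the weight is taken to be the inverse of the approximation-error covariance, so that the returned covariance has the familiar Gauss–Markov form. The same routine serves both stages of the estimator.

```python
import numpy as np

def wls_solve(A, b, W):
    """Weighted least squares: minimize (b - A x)^T W (b - A x).

    Returns x_hat = (A^T W A)^{-1} A^T W b and its covariance (A^T W A)^{-1},
    which holds when W is the inverse of the covariance of the error in b.
    """
    AtW = A.T @ W
    M = AtW @ A                          # normal-equation matrix A^T W A
    x_hat = np.linalg.solve(M, AtW @ b)  # WLS estimate
    cov = np.linalg.inv(M)               # covariance of the estimate
    return x_hat, cov
```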
5.2. Second Stage
With and its covariance matrix , the aim of the second stage is to estimate the estimation error vector introduced in the first stage. In order to use symbols similar to the first stage, we denote this estimation error vector as , i.e.,
By substituting and into , we obtain
Furthermore, plugging , , and into gives
In matrix notation, from Equation (37) and Equation (38), we have
where
i.e., the estimation error in the first stage is considered as the first-order approximation error in the second stage. This is a key point of our estimator. The details of , , and the second-order approximation error can be found in Appendix D.
By ignoring the second-order error term and following the first stage’s approach, the WLS solution to Equation (39) is
and has error covariance matrix
where , and is the zero-order approximation of . The weighting matrix is the inverse of the covariance matrix of the approximation error , that is,
Last but not least, some obstacles arise in the practical computation of our estimator. In the first stage, and , as shown in Equation (A13) and Equation (A16), involve and , which are unavailable to the algorithm. To resolve this problem, we first assign an identity matrix to and an all-zero matrix to to obtain coarse estimates of and from Equation (31), and then substitute the coarse estimates into Equation (A13) and Equation (A16) to update and . Confronted with a similar problem in the second stage in Equation (A20), we substitute and for computing , resulting in and . These approximations will be accounted for in the statistical performance analysis in Section 6.
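A generic sketch of this refinement pattern is given below (our own illustration: the callback weight_from_estimate stands in for the constructions of Equations (A13), (A16) and (A20), and the all-zero initialization of the remaining quantity is assumed to be folded into the system construction).

```python
import numpy as np

def wls_with_refined_weight(A, b, weight_from_estimate, n_refine=1):
    """Two-pass WLS: a coarse pass with an identity weight, then a re-solve
    with the weight rebuilt from the coarse estimate, as in both stages of
    the estimator.

    weight_from_estimate : callable mapping a parameter estimate to a weight matrix.
    """
    W = np.eye(b.size)                               # coarse pass: identity weight
    x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
    for _ in range(n_refine):                        # refine the weight and re-solve
        W = weight_from_estimate(x)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
    cov = np.linalg.inv(A.T @ W @ A)
    return x, cov
```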
5.3. Summary
As a guide to implementation, the flowchart of the proposed estimator is shown in Figure 1 and the algorithm of the first stage of our estimator is summarized in Algorithm 1.
| Algorithm 1 First stage of the estimator. |
Figure 1.
Flowchart of the proposed estimator. Algorithm 1 is called in the flowchart.
6. Performance Analysis
The covariance matrix and the bias vector are the two most important numerical characterizations of a vector estimator. The perturbations in the design matrices (i.e., and ) and the higher-order noise terms in the observation vector (i.e., and ) in Equation (30) and Equation (39) make the conditions of the Gauss–Markov theorem no longer hold. We therefore analyze the bias and the covariance of our estimator directly by perturbation analysis.
6.1. Bias Vector
We first derive the bias vector of our estimator in Equation (44). The bias analysis here is carried out up to the second-order statistics of the observation errors and the sensor position errors, i.e., random terms higher than second order are ignored. The matrix differential calculus presented in [42] is used intensively in this subsection.
It can be seen from Equation (44) that the total bias vector of our estimator is
where and are the bias vectors of and , respectively. The rest of our task is to compute the bias vectors in the first and second stages, i.e., and . We reiterate that random terms higher than second order are ignored at each occurrence of the approximately-equal symbol. Please refer to Table 2 for the matrix symbols involved.
The error of is
where and its related differentials can be obtained by matrix differential as follows:
Putting Equation (47) through Equation (49) into Equation (46) gives the estimation error vector in the first stage
We note that the weight matrix has no random errors in the first stage.
As in the first stage, the estimation error vector in the second stage is
Before moving on to the expression of , we note that we use rather than in the practical implementation. So, can be obtained by matrix differential as follows:
By comparing Equation (52) through Equation (54) with Equation (47) through Equation (49), we see that in Equation (55) contributes new perturbations resulting from the practical implementation, which are introduced by and . Furthermore, can be expressed in terms of by Equation (49).
Then from Equation (51), combining Equation (52) through Equation (55) together with Equation (49) we have
The last line above is by Equation (46) and Equation (49).
Finally, taking expectation of and will yield and . This involves complicated matrix partitioning and multivariate statistical analysis. Interested readers may refer to Appendix E.
6.2. Covariance Matrix
We then derive the covariance matrix of our estimator. The covariance analysis here will be up to the first-order statistics of the estimator errors, i.e., the error terms higher than the first-order are ignored.
We note that Yang et al. [5] have explicitly given the covariance matrix of the two-stage weighted least squares (TS-WLS) method as follows:
where
We now claim the following proposition.
Proposition 1.
Proof.
As has full column rank, to complete the proof, we need only show that
We can prove Equation (65) by the matrix Schwarz inequality (Lemma 1.1 in [43]). Because is positive definite, it admits a Cholesky factorization as follows:
where is a unique lower triangular matrix. Let
By the matrix Schwarz inequality, we get
Then with a straightforward verification, we have
The equations above imply that , as desired. □
We close this section by establishing the approximate statistical efficiency of our estimator as the proposition below shows.
Proposition 2.
When the small error conditions 1 through 6 are satisfied,
Proof.
Expanding Equation (73) (in Appendix F) and comparing it with Equation (A1), we can show that, under the small error conditions,
6.3. Time and Space Complexity
The computational load of our algorithm is focused on solving WLS problems. Singular value decomposition (SVD) is an efficient algorithm for solving WLS problems. With the method of truncated SVD, the time complexity is and the space complexity is for a matrix of size [44].
To facilitate the complexity analysis, we list the matrices involved in the algorithm and their sizes in Table 3. As can be seen from the table, under the assumptions that , the computational complexity of the first stage dominates the total computational complexity. For the design matrix in the first stage, and . Keeping only the highest-order term in N and its coefficient, it can be seen that our algorithm takes time and space. In summary, our algorithm has linear complexity in both time and space.
Table 3.
Size of matrices.
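As an illustration of this route (a generic sketch of our own, not the authors' implementation), a WLS problem can be whitened by a Cholesky factor of the weight and then passed to an SVD-based least squares solver; for an m-by-n design matrix with a fixed, small n, the cost is linear in m.

```python
import numpy as np

def wls_solve_svd(A, b, W):
    """Solve min_x (b - A x)^T W (b - A x) by whitening + SVD-based least squares.

    For an m-by-n design matrix with n fixed, the SVD-based solver costs
    O(m n^2) time and O(m n) space, i.e. linear in the number of rows m.
    """
    L = np.linalg.cholesky(W)                         # W = L L^T (W positive definite)
    Aw, bw = L.T @ A, L.T @ b                         # whitened design matrix and data
    x_hat, *_ = np.linalg.lstsq(Aw, bw, rcond=None)   # SVD-based least squares
    return x_hat
```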
7. Results and Discussion
In the previous section, we have theoretically analyzed the performance of our estimator. Now we ascertain the performance of our estimator via computer simulations. Our simulations are divided into four subsections. Section 7.1 compares the error covariance matrix of our estimator with HCRB and the ones of two typical estimators, i.e., the spherical-interpolation initialized Taylor series method (SI-TS) [28,45] and TS-WLS [5]. Then, surface plots of the biases are shown in Section 7.2. In Section 7.3, we empirically explore the time complexity of our estimator for locating multiple disjoint targets. Finally, we use 80 randomly generated large-scale localization scenarios to further test the proposed estimator in Section 7.4.
The first three subsections are based on the simulation settings of [6]. Specifically, the simulations use transmitters and receivers to determine the unknown position and velocity of a moving target. As in [6], the nominal positions of the sensors are known and given as follows: m, m, m, m, m, m, m, and m. Graphically, the nominal location geometry is shown in Figure 2.
Figure 2.
Nominal location geometry for computer simulations.
The additional common settings for Section 7.1 and Section 7.2 are as follows. The target is at m with velocity m/s and the signal propagation speed is m/s. The observation error covariance matrix related to the transmitter at is for , where is a given positive constant, , and [31]. The sensors’ position error covariance matrix is , where is a given positive constant.
We list the settings of the Monte Carlo simulations in Table 4 to illustrate our experiments more clearly. Using Equation (8) based on Table 4, we generate data for simulations.
Table 4.
Monte Carlo simulation settings.
7.1. Performance Comparison
We now turn to the performance comparison of several estimators. For a specific estimator of the unknown parameter vector , its performance can be measured by the root-mean-square error (RMSE), which is defined as follows.
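In generic notation (our symbols; in Figures 3 and 4 it is applied separately to the position and velocity components), the definition takes the standard form

\[
\mathrm{RMSE} \;=\; \sqrt{\frac{1}{L}\sum_{\ell=1}^{L}\big\lVert \hat{\boldsymbol{\theta}}_{\ell} - \boldsymbol{\theta} \big\rVert^{2}},
\]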
where L is the number of Monte Carlo simulations and is the ℓ-th random realization of .
The RMSEs of our estimator, SI-TS and TS-WLS are compared with the HCRB here. The simulation settings are as follows: is s, ranges from 0 m to 200 m with a step size of 20 m, and the number of Monte Carlo simulations is for each value of . The comparison curves for the position estimator and the velocity estimator are plotted in Figure 3 and Figure 4, respectively. It is evident that our estimator has the smallest RMSE and attains the HCRB accuracy at the lower noise levels when determining both the position and the velocity.
Figure 3.
RMSE and HCRB for position estimator.
Figure 4.
RMSE and HCRB for velocity estimator.
7.2. Bias Calculation
In this subsection, we evaluate the bias of our estimator. The simulation settings are as follows. is from s to s with a step size of s, and is from 20 m to 200 m with a step size of 20 m. The norms of the theoretical bias vectors of and are calculated using results from Section 6 and further visualized as surface plots in Figure 5 and Figure 6. It is consistent with intuition that the biases of both and increase with both and . It should be noted that the biases are relatively small compared with the norms of m and m/s, even if the noise levels are high, e.g., s and m.
Figure 5.
Surface plot of norm of the approximate bias of .
Figure 6.
Surface plot of norm of the approximate bias of .
7.3. Localizing Multiple Disjoint Targets
The aim of this subsection is to evaluate the computational complexity of the algorithm in the sense of scalability, since the WLS computation involved in our estimator is computationally efficient. One advantage of our estimator is that it readily extends to the location of multiple disjoint targets by concatenating the data matrices in Section 5. Let the number of disjoint targets be K; Monte Carlo experiments of joint location are performed for each value of , and the running time of the experiments is recorded. For convenience of comparison, we normalize the running time for each K by the one for . The normalized running times are plotted in Figure 7 on a log-log scale. It can be seen that the running time grows much faster than linearly with respect to the number of targets. This observation indicates that localizing multiple targets sequentially is more time-efficient than localizing them simultaneously with our estimator. Such a drawback may be rooted in the fact that our joint estimator does not share the nuisance parameters across the multiple targets.
Figure 7.
Normalized running time for locating multiple disjoint targets.
7.4. Large-Scale Simulation Experiments
The location scenario in Section 7.1 through Section 7.3 is the one examined in [6]. In order to evaluate the performance of the proposed estimator more comprehensively, we design the following large-scale random experiments. In view of the symmetry of the transmitter and receiver in the observation model, we fix the number of transmitters to 1 and increase the number of receivers from 21 to 100. The transmitter’s position is fixed at m. Both the x-coordinate and y-coordinate of each receiver’s position are uniformly distributed within the interval m. m, and s. Other settings not specified here follow Table 4. In each location scenario, we conduct Monte Carlo simulations. Then we explore the effect of the number of receivers on the bias/RMSE and the computational complexity of the proposed estimator in Figure 8 and Figure 9.
Figure 8.
Bias and RMSE of the proposed estimator versus the number of receivers.
Figure 9.
Normalized running time of the proposed estimator versus the number of receivers.
Figure 8 shows that increasing the number of receivers helps to reduce the RMSE of the estimator. It should be noted that increasing the number of receivers does not lead to a decrease in bias. This fact may imply that designing unbiased estimators is an inherently difficult problem in nonlinear estimation.
In addition, as can be seen in Figure 9, the estimator’s relative running time scales linearly as more receivers are used, once the number of receivers is large enough (e.g., here). This trend coincides with the theoretical linear complexity obtained in Section 6.3.
8. Conclusions
This paper develops a non-iterative solution to the nonlinear hybrid parameter estimation problem of determining the position and velocity of a moving target in a multistatic sonar system in the presence of sensor position uncertainties. It outperforms conventional methods such as SI-TS and TS-WLS in RMSE, and can achieve the HCRB for moderate Gaussian observation noises and sensor position errors. Our estimator involves only two WLS minimizations. Thus, it is computationally efficient, and does not need to deal with the difficulties of initialization and local convergence. Moreover, we obtain the bias vector and covariance matrix of this estimator using perturbation analysis and multivariate statistics.
Author Contributions
Conceptualization, X.W. and J.L.; methodology, X.W.; software, X.W. and Z.Y.; validation, L.Y.; writing—original draft preparation, X.W. and Z.Y.; writing—review and editing, X.W. and L.Y.; funding acquisition, X.W. and L.Y. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the National Natural Science Foundation of China (Grant No. 61703185), the Natural Science Foundation of Jiangsu Province in China (Grant No. BK20140166) and the 111 Project (Grant No. B12018).
Acknowledgments
The authors would like to thank the anonymous reviewers and the editor for their careful reviews and constructive suggestions to help us improve the quality of this paper.
Conflicts of Interest
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| HCRB | Hybrid Cramér–Rao bound |
| WLS | Weighted least-squares |
| SI-TS | Spherical-interpolation initialized Taylor series method |
| SVD | Singular value decomposition |
| TS-WLS | Two-stage weighted least squares method |
| RMSE | Root-mean-square error |
Appendix A. Jacobian Matrices for HCRB
Appendix A.1. Jacobian Matrix of Target Position and Velocity
Appendix A.2. Jacobian Matrix of Sensor Positions
Appendix B. Matrices Related to Weighted Least Squares
A weighted least squares problem is an optimization problem as follows.
where is the design matrix (of full column rank), is the observation vector, is the parameter vector, and is the (positive definite) weighting matrix. We refer to as the residual vector.
We introduce the weighted residual vector as follows.
It follows from Equation (A4) that
By the orthogonal projection principle of the least squares method,
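In generic notation (our own symbols, recorded here because the displayed equations are not reproduced above), these relations take the standard form

\[
\hat{\mathbf{x}} \;=\; \arg\min_{\mathbf{x}}\,(\mathbf{b}-\mathbf{A}\mathbf{x})^{T}\mathbf{W}(\mathbf{b}-\mathbf{A}\mathbf{x}) \;=\; (\mathbf{A}^{T}\mathbf{W}\mathbf{A})^{-1}\mathbf{A}^{T}\mathbf{W}\mathbf{b},
\qquad
\mathbf{e} \;=\; \mathbf{b}-\mathbf{A}\hat{\mathbf{x}},
\]

and, writing \(\mathbf{W} = \mathbf{L}\mathbf{L}^{T}\) for a Cholesky factor \(\mathbf{L}\), one common choice of weighted residual is \(\mathbf{L}^{T}\mathbf{e}\); the orthogonality condition then reads \(\mathbf{A}^{T}\mathbf{W}\mathbf{e} = \mathbf{0}\).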
Appendix C. Linear Model for the First Stage of Our Algorithm
Appendix D. Linear Model for the Second Stage of Our Algorithm
Appendix E. Formulas for Computing Bias Vector of Our Estimator
Appendix E.1. Formulas for Computing Bias in the First Stage
With some well-known formulas in multivariate statistics and Equation (A15), we list the related expected values as follows, where
- Let for . Then
- wherefor .
Appendix E.2. Formulas for Computing Bias in the Second Stage
All the expected values required for calculating are listed as follows, where is the mean squared error matrix of , i.e.,
- where
- wherefor .
- wherefor .
- wherefor .
References
- Zhang, Y.; Ho, K.C. Multistatic localization in the absence of transmitter position. IEEE Trans. Signal Process. 2019, 67, 4745–4760. [Google Scholar] [CrossRef]
- He, C.; Wang, Y.; Yu, W.; Song, L. Underwater target localization and synchronization for a distributed SIMO sonar with an isogradient SSP and uncertainties in receiver locations. Sensors 2019, 19, 1976. [Google Scholar] [CrossRef] [PubMed]
- Liang, J.; Chen, Y.; So, H.; Jing, Y. Circular/hyperbolic/elliptic localization via Euclidean norm elimination. Signal Process. 2018, 148, 102–113. [Google Scholar] [CrossRef]
- Peters, D.J. A Bayesian method for localization by multistatic active sonar. IEEE J. Ocean. Eng. 2017, 42, 135–142. [Google Scholar] [CrossRef]
- Yang, L.; Yang, L.; Ho, K.C. Moving target localization in multistatic sonar by differential delays and Doppler shifts. IEEE Signal Process. Lett. 2016, 23, 1160–1164. [Google Scholar] [CrossRef]
- Rui, L.; Ho, K.C. Efficient closed-form estimators for multistatic sonar localization. IEEE Trans. Aerosp. Electron. Syst. 2015, 51, 600–614. [Google Scholar] [CrossRef]
- Rui, L.; Ho, K.C. Elliptic localization: Performance study and optimum receiver placement. IEEE Trans. Signal Process. 2014, 62, 4673–4688. [Google Scholar] [CrossRef]
- Ehlers, F.; Ricci, G.; Orlando, D. Batch tracking algorithm for multistatic sonars. IET Radar Sonar Navig. 2012, 6, 746–752. [Google Scholar] [CrossRef]
- Daun, M.; Ehlers, F. Tracking algorithms for multistatic sonar systems. EURASIP J. Adv. Signal Process. 2010, 2010, 461538. [Google Scholar] [CrossRef]
- Simakov, S. Localization in airborne multistatic sonars. IEEE J. Ocean. Eng. 2008, 33, 278–288. [Google Scholar] [CrossRef]
- Coraluppi, S. Multistatic sonar localization. IEEE J. Ocean. Eng. 2006, 31, 964–974. [Google Scholar] [CrossRef]
- Coraluppi, S.; Carthel, C. Distributed tracking in multistatic sonar. IEEE Trans. Aerosp. Electron. Syst. 2005, 41, 1138–1147. [Google Scholar] [CrossRef]
- Sandys-Wunsch, M.; Hazen, M.G. Multistatic localization error due to receiver positioning errors. IEEE J. Ocean. Eng. 2002, 27, 328–334. [Google Scholar] [CrossRef]
- Shin, H.; Chung, W. Target localization using double-sided bistatic range measurements in distributed MIMO radar systems. Sensors 2019, 19, 2524. [Google Scholar] [CrossRef]
- Amiri, R.; Behnia, F.; Noroozi, A. Efficient algebraic solution for elliptic target localisation and antenna position refinement in multiple-input–multiple-output radars. IET Radar Sonar Navig. 2019, 13, 2046–2054. [Google Scholar] [CrossRef]
- Amiri, R.; Behnia, F.; Noroozi, A. Efficient joint moving target and antenna localization in distributed MIMO radars. IEEE Trans. Wirel. Commun. 2019, 18, 4425–4435. [Google Scholar] [CrossRef]
- Amiri, R.; Behnia, F.; Zamani, H. Asymptotically efficient target localization from bistatic range measurements in distributed MIMO radars. IEEE Signal Process. Lett. 2017, 24, 299–303. [Google Scholar] [CrossRef]
- Einemo, M.; So, H.C. Weighted least squares algorithm for target localization in distributed MIMO radar. Signal Process. 2015, 115, 144–150. [Google Scholar] [CrossRef]
- Dianat, M.; Taban, M.R.; Dianat, J.; Sedighi, V. Target localization using least squares estimation for MIMO radars with widely separated antennas. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 2730–2741. [Google Scholar] [CrossRef]
- Godrich, H.; Haimovich, A.M.; Blum, R.S. Target localization accuracy gain in MIMO radar-based systems. IEEE Trans. Inf. Theory 2010, 56, 2783–2803. [Google Scholar] [CrossRef]
- Zhao, Y.; Hu, D.; Zhao, Y.; Liu, Z.; Zhao, C. Refining inaccurate transmitter and receiver positions using calibration targets for target localization in multi-static passive radar. Sensors 2019, 19, 3365. [Google Scholar] [CrossRef] [PubMed]
- Wang, J.; Qin, Z.; Wei, S.; Sun, Z.; Xiang, H. Effects of nuisance variables selection on target localisation accuracy in multistatic passive radar. Electron. Lett. 2018, 54, 1139–1141. [Google Scholar] [CrossRef]
- Chalise, B.K.; Zhang, Y.D.; Amin, M.G.; Himed, B. Target localization in a multi-static passive radar system through convex optimization. Signal Process. 2014, 102, 207–215. [Google Scholar] [CrossRef]
- Gorji, A.A.; Tharmarasa, R.; Kirubarajan, T. Widely separated MIMO versus multistatic radars for target localization and tracking. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 2179–2194. [Google Scholar] [CrossRef]
- Malanowski, M.; Kulpa, K. Two methods for target localization in multistatic passive radar. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 572–580. [Google Scholar] [CrossRef]
- Yin, Z.; Jiang, X.; Yang, Z.; Zhao, N.; Chen, Y. WUB-IP: A high-precision UWB positioning scheme for indoor multiuser applications. IEEE Syst. J. 2019, 13, 279–288. [Google Scholar] [CrossRef]
- Zhou, Y.; Law, C.L.; Guan, Y.L.; Chin, F. Indoor elliptical localization based on asynchronous UWB range measurement. IEEE Trans. Instrum. Meas. 2011, 60, 248–257. [Google Scholar] [CrossRef]
- Smith, J.O.; Abel, J.S. Closed-form least-squares source location estimation from range-difference measurements. IEEE Trans. Acoust. Speech Signal Process. 1987, 35, 1661–1669. [Google Scholar] [CrossRef]
- Schau, H.; Robinson, A. Passive source localization employing intersecting spherical surfaces from time-of-arrival differences. IEEE Trans. Acoust. Speech Signal Process. 1987, 35, 1223–1225. [Google Scholar] [CrossRef]
- Amiri, R.; Behnia, F.; Sadr, M.A.M. Exact solution for elliptic localization in distributed MIMO radar systems. IEEE Trans. Veh. Technol. 2018, 67, 1075–1086. [Google Scholar] [CrossRef]
- Chan, Y.T.; Ho, K.C. A simple and efficient estimator for hyperbolic location. IEEE Trans. Signal Process. 1994, 42, 1905–1915. [Google Scholar] [CrossRef]
- Zheng, B.; Yang, Z. Perturbation analysis for mixed least squares–total least squares problems. Numer. Linear Algebra Appl. 2019, 26, e2239. [Google Scholar] [CrossRef]
- Buranay, S.C.; Iyikal, O.C. A predictor-corrector iterative method for solving linear least squares problems and perturbation error analysis. J. Inequal. Appl. 2019, 2019, 203. [Google Scholar] [CrossRef]
- Xie, P.; Xiang, H.; Wei, Y. A contribution to perturbation analysis for total least squares problems. Numer. Algorithms 2017, 75, 381–395. [Google Scholar] [CrossRef]
- Harville, D.A. Linear Models and the Relevant Distributions and Matrix Algebra; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
- Bar, S.; Tabrikian, J. The risk-unbiased Cramér–Rao bound for non-Bayesian multivariate parameter estimation. IEEE Trans. Signal Process. 2018, 66, 4920–4934. [Google Scholar] [CrossRef]
- Bergel, I.; Noam, Y. Lower bound on the localization error in infinite networks with random sensor locations. IEEE Trans. Signal Process. 2018, 66, 1228–1241. [Google Scholar] [CrossRef]
- Messer, H. The hybrid Cramér–Rao lower bound—From practice to theory. In Proceedings of the 4th IEEE Sensor Array and Multichannel Signal Processing Workshop, Waltham, MA, USA, 12–14 July 2006; pp. 304–307. [Google Scholar] [CrossRef]
- Noam, Y.; Messer, H. The hybrid Cramér–Rao bound and the generalized Gaussian linear estimation problem. In Proceedings of the 5th IEEE Sensor Array and Multichannel Signal Processing Workshop, Darmstadt, Germany, 21–23 July 2008; pp. 395–399. [Google Scholar] [CrossRef]
- Van Trees, H.L.; Bell, K.L.; Tian, Z. Detection, Estimation, and Modulation Theory Part I: Detection, Estimation, and Filtering Theory; John Wiley & Sons: Hoboken, NJ, USA, 2013. [Google Scholar]
- Rockah, Y.; Schultheiss, P. Array shape calibration using sources in unknown locations—Part I: far-field sources. IEEE Trans. Acoust. Speech Signal Process. 1987, 35, 286–299. [Google Scholar] [CrossRef]
- Magnus, J.R.; Neudecker, H. Matrix Differential Calculus with Applications in Statistics and Econometrics; John Wiley & Sons: Hoboken, NJ, USA, 2007. [Google Scholar]
- Chui, C.K.; Chen, G. Kalman Filtering with Real-Time Applications; Springer International Publishing: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
- Li, X.; Wang, S.; Cai, Y. Tutorial: Complexity analysis of Singular Value Decomposition and its variants. arXiv 2019, arXiv:1906.12085. [Google Scholar]
- Foy, W.H. Position-location solutions by Taylor-series estimation. IEEE Trans. Aerosp. Electron. Syst. 1976, 12, 187–194. [Google Scholar] [CrossRef]
- Horn, R.A.; Johnson, C.R. Matrix Analysis, 2nd ed.; Cambridge University Press: Cambridge, UK, 2013. [Google Scholar]
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).