Article

Robust Least-Square Localization Based on Relative Angular Matrix in Wireless Sensor Networks

College of Communications Engineering, Army Engineering University of PLA, Nanjing 210042, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2019, 19(11), 2627; https://doi.org/10.3390/s19112627
Submission received: 8 May 2019 / Revised: 1 June 2019 / Accepted: 3 June 2019 / Published: 10 June 2019
(This article belongs to the Special Issue Sensors Localization in Indoor Wireless Networks)

Abstract
Accurate position information plays an important role in wireless sensor networks (WSNs), and cooperative positioning based on cooperation among agents is a promising methodology for providing such information. Conventional cooperative positioning algorithms, such as least squares (LS), rely on approximate position estimates obtained from prior measurements. This paper explores the fundamental mechanism underlying the least squares algorithm's sensitivity to the selection of the initial position and approaches to dealing with such sensitivity. This topic plays an essential role in cooperative positioning, as it determines whether a cooperative positioning algorithm can be implemented ubiquitously. In particular, a sufficient but not necessary condition for the least squares cost function to be convex is found and proven. We then propose a robust algorithm for wireless sensor network positioning that transforms the cost function into a globally convex function by detecting the null space of the relative angle matrix when all the targets are located inside the convex polygons formed by their neighboring nodes. Furthermore, we improve the algorithm so that it applies in both the time of arrival (TOA) and angle of arrival/time of arrival (AOA/TOA) scenarios. Finally, the performance of the proposed approach is quantified via simulations, and the results show that the proposed method has high positioning accuracy and is robust in both line-of-sight (LOS) and non-line-of-sight (NLOS) positioning environments.

1. Introduction

In recent years, positioning and navigation technology has played an increasingly important role in many applications, such as public safety, law enforcement, rescue operations, traffic management, inventory tracking, and home automation. At the same time, location-based services also have significant commercial value [1]. The Global Navigation Satellite System (GNSS) is the most widely used navigation and positioning technology, providing services that are suitable for most applications in an open environment [2]. However, GNSS may fail to provide reliable services because of interference, or in challenging environments such as urban areas, forests, and indoor spaces, where GNSS signals are weak.
An effective way of solving this problem is to supplement and enhance GNSS with terrestrial positioning systems. At present, there are several such systems, including cellular-based, WiFi, and ultra-wideband (UWB) positioning systems. In particular, with the development of large-scale multiple-input multiple-output (MIMO) systems, positioning based on mmWave communication has become an emerging research focus and has received increasing attention [3,4].
The sensors in a terrestrial system constitute a positioning network, and we are interested in locating the sensors based solely on measurements from the multi-target scene. Depending on whether nodes exchange measurements and other data with one another, WSN positioning scenarios can be divided into cooperative and noncooperative. Compared with traditional positioning methods, cooperative (also known as collaborative) positioning has important research and application significance. For instance, cooperative positioning of connected vehicles that effectively utilizes the relative observations from vehicle-to-vehicle (V2V) devices has become a significant trend in future cooperative intelligent transportation system (ITS) applications [5]. Its advantages have been confirmed both theoretically and algorithmically [6,7]. Fisher information analysis shows that nodes can obtain better positioning accuracy and availability in cooperative scenarios [8].
For various terrestrial systems, the primary positioning methods can be divided into four categories, among which the distance-based time of arrival (TOA) and time difference of arrival (TDOA) methods are the most common [9]. At present, various classic algorithms for cooperative positioning are available, such as maximum likelihood (ML) estimation [10], the extended Kalman filter (EKF) [11], and the particle filter (PF) [12]. These algorithms use the minimum mean squared error (MMSE) as the evaluation criterion of estimation, and under certain conditions, they are essentially equivalent to the LS estimator. For instance, when the ranging error has a Gaussian distribution, the ML algorithm is equivalent to the LS estimator.

1.1. Related Works

When estimating the location of user nodes, it is necessary to select a reasonable initial value. Unfortunately, the algorithms sometimes suffer from sensitivity to this initial value. In addition, the NLOS propagation error of the signal is an important factor affecting the positioning accuracy. Thus far, some efforts have been made to solve these problems. It has been observed that if measurements based on the received signal strength (RSS) and angle of arrival (AOA) are used as auxiliary positioning methods, the NLOS effect can be suppressed and the positioning accuracy improved [13,14,15,16]. In general, hybrid positioning methods that combine multiple measurement types tend to have higher positioning accuracy and to be more robust than those based on a single measurement. Localization based on mmWave communication is a hybrid positioning method, but it differs from the traditional approach. Its advantage lies in the unique structure of MIMO systems, so it can determine the position of targets using only one base station by measuring the angles of both the transmitted and received signals [17]. This scheme reduces the cost of base station deployment, but it also has apparent disadvantages. One major problem is the diffuse reflection of the signal, which causes larger angle measurement errors. For this reason, the positioning results will not be accurate; this topic remains to be studied, and the related improvements are yet to be attained [18,19].
As for the algorithms, a variety of robust positioning algorithms are available. The localization technique based on multidimensional scaling (MDS), first proposed by Shang et al. [20], offers a new solution for node localization. Recently, several localization methods related to the classical MDS method have been applied to sensor networks. Forero and Giannakis presented a robust multidimensional scaling based on regularized least squares [21]. Focusing on the problem of localization in mixed LOS and NLOS scenarios, a novel localization algorithm called Gaussian mixture model-based non-metric multidimensional scaling (GMDS) was proposed [22]. The advantages of MDS are that the actual positions of the nodes can be obtained by setting only a few anchor nodes; moreover, the anchor nodes' deployment has no strict restriction. However, it is unreliable in large-scale networks with sparse connectivity. The work of Destino and Abreu transformed the original WLS function into a convex one by introducing the Gaussian kernel function and optimizing the smoothing parameters [23]. In the literature [24,25], the problem has been formulated by applying robust statistics techniques to squared range measurements. This provides the opportunity to find the estimate efficiently; however, this formulation is not optimal in the ML sense [23]. Another class of methods is based on convex relaxation. The paper [26] derived a maximum likelihood estimator for Laplacian noise and relaxed it to a convex program by linearizing and dropping a rank constraint. Soares et al. [27] set forth a convex underestimator of the maximum likelihood cost for sensor network localization based on the convex envelopes of its parcels. At the same time, they capitalized on the robust estimation properties of the Huber function and derived a convex relaxation [28]. The semidefinite programming (SDP) algorithm is one of the most widely used methods; it also transforms the positioning model into a convex optimization problem by applying the convex relaxation technique [29,30,31]. These approaches not only limit the errors caused by the NLOS effect, but also make the cost function convex. In other words, they are insensitive to the selection of the initial value. However, the downside of these algorithms is that the estimation accuracy decreases slightly. Besides, other approaches such as the parallel projection method (PPM) [19,32] and projection onto convex sets (POCS) [33,34] have been proposed. These two methods turn the LS cost function into a convex one. At the same time, the PPM can be used in the distributed cooperative scenario, which significantly reduces the burden of information interaction between nodes. An outlier detection method based on the maximum entropy principle and fuzzy set theory was proposed by Wang et al. [35].

1.2. Contributions

It is known that the LS cost function is nonlinear and nonconvex in the global region. If the function has more than one local optimal point, the location result of an iterative algorithm depends on the selected initial value, so the sensitivity to initial value selection is closely associated with the convexity of the cost function. Existing studies analyzed the LS source localization problem to determine the condition under which the function is convex [36,37]. Similarly, for WSN localization, the convexity, or the number of local extremum points, appears to be related to the number of targets and the ranging error. To explore this problem further, we study the LS model for WSN localization to understand theoretically how the targets and the ranging error affect the extremum points. The major contributions of the paper are as follows:
  • A sufficient but not necessary condition for the LS cost function to be convex is proposed and proven for WSN positioning.
  • We define the relative angle matrix for both noncooperative and cooperative scenarios and show that the LS function can be transformed into a globally convex function if all the targets are located inside the convex polygons formed by their adjacent nodes.
  • A robust algorithm that exploits the null space of the relative angular matrix is proposed for WSN localization. Additionally, we improve the algorithm by using angle constraints so that it can be used with both the AOA/TOA and TOA positioning methods, which extends the applicability of the method.
It is worth noting that with the development of MIMO technology, the acquisition of ranging information and angle information between two nodes becomes easier, as the technology provides hardware and technical support for the measurement of the relative angular matrix. On the other hand, the position of the virtual anchors can be calculated, so mmWave positioning can be transformed into an AOA/TOA positioning model in the traditional sense, and the null space algorithm improved in this paper can be used to solve the position.
The rest of the paper is organized as follows. In Section 2, some basic definitions and model descriptions are given. In Section 3, we analyze the convexity of the unconstrained LS positioning model and derive a sufficient but not necessary condition for the function to be convex. In Section 4, we first give the definition of the relative angular matrix, then prove some important properties, and propose a novel null space algorithm by adding the angle constraint. We perform numerical simulations to verify the correctness of the propositions and evaluate the performance of the proposed algorithm in Section 5. Finally, we conclude the paper in Section 6.

2. Definition and Scenario Description

2.1. Definition of the Nodes and Links

In the wireless network location scenario based on the TOA method, we assume there are M targets and use the set $\mathcal{F}_c = \{T_1, T_2, \ldots, T_M\}$ to enumerate them. The real coordinates of a target are $\mathbf{x}_j \in \mathbb{R}^{\eta}$, $1 \le j \le M$, $j \in \mathbb{N}^+$, where $\eta \in \mathbb{N}^+$ is the Euclidean spatial dimension of the location scene. There are N anchors, which are represented by the set $\mathcal{F}_a = \{A_1, A_2, \ldots, A_N\}$, and their real coordinates are $\mathbf{s}_i \in \mathbb{R}^{\eta}$, $1 \le i \le N$, $i \in \mathbb{N}^+$. Let the set of all nodes be $\mathcal{F}_t$; then, $\mathcal{F}_t = \mathcal{F}_a \cup \mathcal{F}_c$. Assume that there is a total of L ranging links in the positioning scenario and the set of links is $\mathcal{L} = \{l_1, l_2, \ldots, l_L\}$. In the cooperative scenario, the ranging links can be divided into two categories: ranging links between targets and anchors (AT links) and cooperative ranging links between two targets (TT links). The set of all distance observations is denoted by $\mathcal{D}_t$, the set of AT-link observations by $\mathcal{D}_a$, and the set of TT-link observations by $\mathcal{D}_c$. Obviously, we have $\mathcal{D}_t = \mathcal{D}_a \cup \mathcal{D}_c$, $\mathcal{D}_a \cap \mathcal{D}_c = \emptyset$, $|\mathcal{D}_t| = L$, and $|\mathcal{D}_a| = N$. Let $d_{K_iK_j}$ and $\hat{d}_{K_iK_j}$ denote the real and estimated distances between the nodes $K_i$ and $K_j$.

2.2. Definition of the Errors

In the TOA positioning method, since the signal is affected by noise, the multipath effect, and the NLOS effect during propagation, the observed value is not the real distance between the two nodes, and there usually exists a ranging error. Let $\boldsymbol{\varepsilon} = [\varepsilon_1\ \varepsilon_2\ \cdots\ \varepsilon_L]^T \in \mathbb{R}^L$ be the ranging error vector, where $\varepsilon_i$ represents the error on the $i$th link. In the LOS environment, the ranging error is caused entirely by noise, which usually follows a Gaussian distribution with a mean of zero and a constant variance. Here, we denote it by $\varepsilon_{los}$. In the NLOS environment, besides the noise, there also exists a positive deviation that follows a Gaussian distribution with constant mean and variance. Here, we denote it by $\varepsilon_{nlos}$. Then, the ranging error can be modeled as follows:
$$\varepsilon_i = \begin{cases} \varepsilon_{los}, & \text{if the $i$th path is LOS} \\ \varepsilon_{los} + \varepsilon_{nlos}, & \text{if the $i$th path is NLOS} \end{cases}$$
where $\varepsilon_{los} \sim \mathcal{N}(0, \sigma_{los}^2)$, $\varepsilon_{nlos} \sim \mathcal{N}(\mu_{nlos}, \sigma_{nlos}^2)$, and $\mu_{nlos} > 0$.
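As a concrete illustration of this error model, the following short sketch (our own, not code from the paper) draws one ranging error per link; the numerical parameters mirror those used later in Section 5 and are otherwise arbitrary.

```python
import numpy as np

def sample_ranging_errors(is_nlos, sigma_los=np.sqrt(3.0),
                          mu_nlos=5.0, sigma_nlos=np.sqrt(4.5), rng=None):
    """Draw one ranging error per link following the LOS/NLOS model above.

    is_nlos : boolean array, True where the i-th path is NLOS.
    LOS links get zero-mean Gaussian noise; NLOS links receive an
    additional positive-mean Gaussian bias.
    """
    rng = np.random.default_rng() if rng is None else rng
    is_nlos = np.asarray(is_nlos)
    eps = rng.normal(0.0, sigma_los, size=is_nlos.shape)                      # epsilon_los
    eps += is_nlos * rng.normal(mu_nlos, sigma_nlos, size=is_nlos.shape)      # epsilon_nlos
    return eps

# Example: four AT links, the last two affected by NLOS propagation.
print(sample_ranging_errors([False, False, True, True]))
```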
In the AOA method, we assume that there are observation errors in the relative angle matrix Ω , which will be defined in the fourth part of this paper. Hence,
$$\tilde{\Omega} = \Omega + \Delta$$
where $\tilde{\Omega}$ represents the observation of the relative angular matrix and $\Delta$ is the error matrix, an element $\delta_{ij}$ of which follows the Gaussian distribution $\delta_{ij} \sim \mathcal{N}(\mu_\alpha, \sigma_\alpha^2)$.

2.3. Noncooperative Scenario Description

In the noncooperative localization scenario, there are M targets, and there is no link between any two targets. Consider target $T_j$; its real and estimated positions are $\mathbf{x}_j$ and $\hat{\mathbf{x}}_j$; then, the distances are given by:
$$d_{T_jA_i} = \|\mathbf{s}_i - \mathbf{x}_j\|_2, \qquad \hat{d}_{T_jA_i} = \|\mathbf{s}_i - \hat{\mathbf{x}}_j\|_2$$
where $\|\cdot\|_2$ represents the Euclidean distance between two nodes. In the noncooperative scenario, there are N ranging links, denoted by $l_1, l_2, \ldots, l_N$, respectively. Let $d_i$ and $\hat{d}_i$ be the real and estimated distances corresponding to link $l_i$. Then, we can assign the values as follows:
$$d_i = \begin{cases} d_{T_1A_i}, & 1 \le i \le q_1^a \\ d_{T_jA_i}, & \sum_{m=1}^{j-1} q_m^a < i \le \sum_{m=1}^{j} q_m^a,\ j \ge 2, \end{cases}$$
$$\hat{d}_i = \begin{cases} \hat{d}_{T_1A_i}, & 1 \le i \le q_1^a \\ \hat{d}_{T_jA_i}, & \sum_{m=1}^{j-1} q_m^a < i \le \sum_{m=1}^{j} q_m^a,\ j \ge 2, \end{cases}$$
where $1 \le i \le N$, $i \in \mathbb{N}^+$, and $q_j^a$ is the number of links directly connected to node $T_j$. Assume that the distance observations are $\rho_i \in \mathcal{D}_a$, $1 \le i \le N$, $i \in \mathbb{N}^+$. If the observation error is considered, the relationship between the distance observation and the true distance is given by:
$$\rho_i = d_i + \varepsilon_i.$$
Accordingly, the unconstrained LS estimation model in the noncooperative scenario can be expressed as follows:
$$\arg\min_{\hat{d}_i} F_s = \sum_{i=1}^{N} \left(\hat{d}_i - \rho_i\right)^2.$$
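For concreteness, the sketch below evaluates this unconstrained LS cost for a single target; it is our own illustration, and the anchor coordinates and observations are placeholder values.

```python
import numpy as np

def ls_cost(x_hat, anchors, rho):
    """Unconstrained LS cost of the noncooperative model for one target.

    x_hat   : candidate target position, shape (2,)
    anchors : anchor coordinates s_i, shape (N, 2)
    rho     : distance observations rho_i, shape (N,)
    """
    d_hat = np.linalg.norm(anchors - x_hat, axis=1)   # estimated distances
    return np.sum((d_hat - rho) ** 2)

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
rho = np.array([4.5, 8.9, 10.1, 7.3])                 # placeholder observations
print(ls_cost(np.array([2.0, 4.0]), anchors, rho))
```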

2.4. Cooperative Scenario Description

The cooperative scenario is very similar to the noncooperative one. The difference is that there exist ranging and information interactions between two targets. Let x j and x ^ j be the real and estimated coordinates of node T j ; then,
$$d_{T_jA_i} = \|\mathbf{s}_i - \mathbf{x}_j\|_2, \quad \hat{d}_{T_jA_i} = \|\mathbf{s}_i - \hat{\mathbf{x}}_j\|_2, \qquad d_{T_jT_k} = \|\mathbf{x}_k - \mathbf{x}_j\|_2, \quad \hat{d}_{T_jT_k} = \|\hat{\mathbf{x}}_k - \hat{\mathbf{x}}_j\|_2, \quad k > j.$$
In the cooperative scenario, there are L ranging links in total, denoted by $l_1, l_2, \ldots, l_L$. Let $d_i$ and $\hat{d}_i$ be the real and estimated distances corresponding to link $l_i$. Next, we can assign the values as follows. If $1 \le i \le N$, $i \in \mathbb{N}^+$, each $l_i$ is an AT link, and we assign the values according to Formulas (4) and (5). If $N < i \le L$, $i \in \mathbb{N}^+$, $l_i$ is a TT link, and we assign the values as follows:
$$d_i = \begin{cases} d_{T_1T_k}, & N+1 \le i \le N + q_1^c \\ d_{T_jT_k}, & N + \sum_{m=1}^{j-1} q_m^c < i \le N + \sum_{m=1}^{j} q_m^c,\ j \ge 2, \end{cases}$$
$$\hat{d}_i = \begin{cases} \hat{d}_{T_1T_k}, & N+1 \le i \le N + q_1^c \\ \hat{d}_{T_jT_k}, & N + \sum_{m=1}^{j-1} q_m^c < i \le N + \sum_{m=1}^{j} q_m^c,\ j \ge 2. \end{cases}$$
Above, $q_j^c$ is the number of TT links with endpoints $T_j, T_k$ such that $k > j$. In addition, $T_k \in U_j$, where $U_j$ is the set of targets that have ranging links with $T_j$. In set $U_j$, the elements' subscripts are arranged in increasing order, and $T_k$ is the $g$th element, the index of which is calculated by:
$$g = \begin{cases} i - N, & j = 1 \\ i - N - \sum_{m=1}^{j-1} q_m^c, & j \ge 2. \end{cases}$$
Let $\rho_i \in \mathcal{D}_t$, $1 \le i \le L$, $i \in \mathbb{N}^+$, be the distance observations; then, the relationship between $\rho_i$ and $d_i$ satisfies Formula (6). Accordingly, in the cooperative scenario, the unconstrained LS model is constructed as:
$$\arg\min_{\hat{d}_i} F_c = \sum_{i=1}^{L} \left(\hat{d}_i - \rho_i\right)^2.$$
In this paper, we refer to F s and F c together as the LS positioning cost function and denote it by F.

3. Convex Analysis of the Model

In the previous part, we presented the unconstrained LS localization model in the cooperative and noncooperative scenarios. For the positioning problem, we are interested in finding the global optimum of the cost function. Due to the nonconvexity of the LS function, there may be multiple local optima or stagnation points in most scenes. As a result, a local minimum solution may be obtained in the iterative process, which affects the positioning accuracy significantly. In this part, we discuss the nonconvexity of the unconstrained LS localization model to study when the cost function is convex and when it is not. To simplify the analysis, we consider the positioning problem on the two-dimensional plane, i.e., $\eta = 2$; it is straightforward to generalize the conclusions to a higher dimensional space. Let $\mathbf{s}_i = [x_{s_i}\ y_{s_i}]^T$ be the coordinates of an anchor and $\mathbf{x}_j = [x_j\ y_j]^T$ be the coordinates of a target. At the same time, the corresponding estimated position is $\hat{\mathbf{x}}_j = [\hat{x}_j\ \hat{y}_j]^T$.

3.1. Analysis of the Noncooperative Scenario

It is known that the convex property of a function is directly related to its Hessian matrix, and the following theorem holds:
Theorem 1.
Second-order condition for convexity. Assume that the function f of x is twice differentiable, and let its domain be $\operatorname{dom} f$. If $\operatorname{dom} f$ is a convex set and the Hessian matrix exists, then a necessary and sufficient condition for the function to be convex on $\operatorname{dom} f$ is that, for every $x \in \operatorname{dom} f$, its Hessian matrix is positive semidefinite [38].
To simplify the problem, we first consider the scenario with only one target, labeled $T_1$. In this case, there are N links between $T_1$ and the anchors. We compute the gradient (the first-order differential) and the Hessian matrix (the second-order differential) of the cost function $F_s$. The results are shown in Equations (13) and (14).
$$\operatorname{grad} F_s = \nabla_{\hat{\mathbf{x}}_1} F_s = -2 \sum_{i=1}^{N} \begin{bmatrix} \dfrac{(\hat{d}_i - \rho_i)(x_{s_i} - \hat{x}_1)}{\hat{d}_i} \\[2mm] \dfrac{(\hat{d}_i - \rho_i)(y_{s_i} - \hat{y}_1)}{\hat{d}_i} \end{bmatrix},$$
$$\nabla^2_{\hat{\mathbf{x}}_1} F_s = 2 \sum_{i=1}^{N} \begin{bmatrix} \dfrac{\hat{d}_i - \rho_i}{\hat{d}_i} + \dfrac{\rho_i}{\hat{d}_i^3}(x_{s_i} - \hat{x}_1)^2 & \dfrac{\rho_i}{\hat{d}_i^3}(x_{s_i} - \hat{x}_1)(y_{s_i} - \hat{y}_1) \\[2mm] \dfrac{\rho_i}{\hat{d}_i^3}(x_{s_i} - \hat{x}_1)(y_{s_i} - \hat{y}_1) & \dfrac{\hat{d}_i - \rho_i}{\hat{d}_i} + \dfrac{\rho_i}{\hat{d}_i^3}(y_{s_i} - \hat{y}_1)^2 \end{bmatrix}.$$
Considering Theorem 1, we obtain the following corollary:
Corollary 1.
If $\operatorname{dom} F_s$ of the LS cost function is $\mathbb{R}^2$, it is obviously a convex set. Let $\mathcal{A}$ be the set of all $\hat{\mathbf{x}}_1$ that satisfy the condition $\nabla^2_{\hat{\mathbf{x}}_1} F_s \succeq 0$. Then, for $\hat{\mathbf{x}}_1 \in \mathcal{A}$, $F_s$ is a convex function.
Corollary 1 is a necessary and sufficient condition for $F_s$ to be convex. Finding a set $\mathcal{A}$ that satisfies the condition is equivalent to dividing the domain $\mathbb{R}^2$ so that $F_s$ is convex in each subinterval. Next, the condition $\nabla^2_{\hat{\mathbf{x}}_1} F_s \succeq 0$ will be analyzed further. Note that $\nabla^2_{\hat{\mathbf{x}}_1} F_s$ is a real symmetric matrix, and the two following lemmas hold:
Lemma 1.
The eigenvalues of a real symmetric matrix are all real numbers.
Lemma 2.
A real symmetric matrix is a semipositive definite matrix if and only if all its eigenvalues are nonnegative [39].
Lemma 2 shows that assessing whether a matrix is positive semidefinite can be transformed into determining whether its eigenvalues are nonnegative. Hence, we calculate the eigenvalues of the Hessian matrix. Let $\mathbf{J} = \nabla^2_{\hat{\mathbf{x}}_1} F_s \in \mathbb{R}^{2 \times 2}$, and let $\lambda$ be an eigenvalue of $\mathbf{J}$; then, the characteristic polynomial of $\mathbf{J}$ is:
$$G = \begin{vmatrix} \lambda - J_{11} & -J_{12} \\ -J_{21} & \lambda - J_{22} \end{vmatrix},$$
where $J_{ij}$ are the elements of $\mathbf{J}$. Letting $G = 0$, the following characteristic equation is obtained:
$$\lambda^2 - (J_{11} + J_{22})\lambda + (J_{11}J_{22} - J_{12}J_{21}) = 0.$$
This is a quadratic equation in one variable. From Lemma 1, there must be two real roots, and the discriminant satisfies $\Delta \ge 0$. From the distribution of the two roots, the necessary and sufficient condition for both roots to be nonnegative is:
$$\begin{cases} J_{11} + J_{22} > 0 \\ J_{11}J_{22} - J_{12}J_{21} \ge 0. \end{cases}$$
Therefore, we can derive the following corollary:
Corollary 2.
Set $\mathcal{A}$ consists of all $\hat{\mathbf{x}}_1$ that satisfy the inequality group (15).
Corollary 2 is an equivalent condition for the LS cost function $F_s$ to be convex. Compared with Corollary 1, Corollary 2 transforms the condition that the Hessian matrix is positive semidefinite into the solution of an inequality system, which provides a feasible method for finding the set satisfying the requirements. However, due to the nonlinear characteristics of Equation (17), it is difficult to obtain the analytic solutions of the inequalities. In the next part, we discuss a particular case and provide the proof.
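In practice, Corollary 2 can be checked numerically by evaluating the Hessian of Equation (14) at a candidate point and testing the trace and determinant inequalities, which is essentially how the convexity maps in Section 5 can be produced. The following sketch (our own illustration for a single target, with placeholder data) performs that test.

```python
import numpy as np

def hessian_Fs(x_hat, anchors, rho):
    """Hessian of F_s (Equation (14)) for a single target at x_hat."""
    diff = anchors - x_hat                  # rows: s_i - x_hat
    d_hat = np.linalg.norm(diff, axis=1)
    J = np.zeros((2, 2))
    for (dx, dy), di, ri in zip(diff, d_hat, rho):
        common = (di - ri) / di
        J += 2 * np.array([[common + ri * dx**2 / di**3, ri * dx * dy / di**3],
                           [ri * dx * dy / di**3, common + ri * dy**2 / di**3]])
    return J

def is_convex_point(x_hat, anchors, rho):
    """Inequality group of Corollary 2: trace > 0 and determinant >= 0."""
    J = hessian_Fs(np.asarray(x_hat, dtype=float), anchors, rho)
    return (J[0, 0] + J[1, 1] > 0) and (np.linalg.det(J) >= 0)

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
rho = np.array([4.5, 8.9, 10.1, 7.3])
print(is_convex_point([2.0, 4.0], anchors, rho))
```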
Proposition 1.
If there exists a set $\mathcal{C}$ in which all the estimates $\hat{\mathbf{x}}_1$ satisfy $\hat{d}_i - \rho_i \ge 0$, then the LS cost function $F_s$ is convex on $\mathcal{C}$ and $\mathcal{C} \subseteq \mathcal{A}$.
The proof of Proposition 1 is given in Appendix A.
If there is more than one target and their number is M, the Hessian matrix $\mathbf{J}$ has 2M rows and 2M columns, i.e., $\mathbf{J} \in \mathbb{R}^{2M \times 2M}$. Consider a decomposition of $\mathbf{J}$, and let $\mathbf{J} = \sum_{i=1}^{N} \mathbf{Q}_i$, where $\mathbf{Q}_i$ is the Hessian matrix corresponding to the ranging link $l_i$. It is easy to observe that $\mathbf{Q}_i$ has four nonzero elements, located on the diagonal or adjacent positions of $\mathbf{J}$. An example is shown in Figure 1b, where $D_{ij}^{k}$ represents the corresponding element of $\mathbf{J}$, which is a second-order partial derivative of $F_s$ with respect to the coordinates of a target. We translate the four elements of $\mathbf{Q}_i$ to the upper left corner via elementary row-column transformations. Let the transformed matrix be $\mathbf{Q}_i'$, as shown in Figure 1c; we observe that $\mathbf{Q}_i$ and $\mathbf{Q}_i'$ are similar matrices. The following lemma holds for similar matrices:
Lemma 3.
Similar matrices have the same eigenvalues.
This shows that $\mathbf{Q}_i$ and $\mathbf{Q}_i'$ have the same eigenvalues. Let $\mathbf{G}_i = \lambda\mathbf{I} - \mathbf{Q}_i'$; to obtain the eigenvalues, we construct the following block partition of $\mathbf{G}_i$:
$$\mathbf{G}_i = \begin{bmatrix} \mathbf{G}_{11} & \mathbf{G}_{12} \\ \mathbf{G}_{21} & \mathbf{G}_{22} \end{bmatrix}.$$
Above, $\mathbf{G}_{11} = \begin{bmatrix} \lambda - g_{11} & -g_{12} \\ -g_{21} & \lambda - g_{22} \end{bmatrix} \in \mathbb{R}^{2 \times 2}$, $\mathbf{G}_{12} = \mathbf{0} \in \mathbb{R}^{2 \times (2M-2)}$, $\mathbf{G}_{21} = \mathbf{0} \in \mathbb{R}^{(2M-2) \times 2}$, and $\mathbf{G}_{22} = \lambda\mathbf{I} \in \mathbb{R}^{(2M-2) \times (2M-2)}$, where $g_{kl}$ is the element at the $k$th row and $l$th column of $\mathbf{Q}_i'$. From the determinant theorem for block matrices, we know that:
$$|\mathbf{G}_i| = |\mathbf{G}_{11}| \cdot |\mathbf{G}_{22} - \mathbf{G}_{21}\mathbf{G}_{11}^{-1}\mathbf{G}_{12}| = |\mathbf{G}_{11}| \cdot |\mathbf{G}_{22}| = \lambda^{2M-2}|\mathbf{G}_{11}|.$$
Let $|\mathbf{G}_i| = 0$. According to the previous analysis, the two roots of $|\mathbf{G}_{11}| = 0$ are $\lambda = 1$ and $\lambda = (\hat{d}_i - \rho_i)/\hat{d}_i$, and the remaining $2M-2$ roots of $|\mathbf{G}_i| = 0$ are all zero. If $\hat{d}_i - \rho_i \ge 0$, then $\mathbf{Q}_i \succeq 0$.
Proposition 1 still holds if there is more than one target, and it is a sufficient but not necessary condition for $F_s$ to be locally convex. The nonconvex region of $F_s$ grows as $\varepsilon_i$ increases and as the number of links satisfying $\varepsilon_i > 0$ increases.

3.2. Analysis of the Cooperative Scenario

Similarly, we try to find a set of $\hat{\mathbf{x}}_j$ on which the LS cost function is convex in the cooperative scenario. We conjecture that Proposition 1 is also valid in that scenario and try to prove it. In this scenario, the ranging links consist of two parts, AT links and TT links. Inspired by Lemma 3, we decompose the Hessian matrix $\mathbf{J}$ into several submatrices $\mathbf{Q}_i$. Each $\mathbf{Q}_i$ corresponds to a function of $\hat{d}_i$, denoted by $f(\hat{d}_i) = (\hat{d}_i - \rho_i)^2$, and its elements are the second-order partial derivatives of $f(\hat{d}_i)$ with respect to the target coordinates $\hat{\mathbf{x}}_j$. In Section 3.1, it was proven that Proposition 1 is satisfied if $l_i$ is an AT link. Next, we show that Proposition 1 is also satisfied for TT links. To facilitate the analysis, we select one TT link $l_k$ in the cooperative scenario and denote the targets at its ends by $T_m$ and $T_n$, whose estimates are $(\hat{x}_m, \hat{y}_m)$ and $(\hat{x}_n, \hat{y}_n)$, respectively. If the distance measurement is $\rho_k$, then the LS cost function can be written as:
$$F_c = (\hat{d}_k - \rho_k)^2.$$
Computing the partial derivatives of $F_c$ with respect to $\hat{x}_m$, $\hat{x}_n$, $\hat{y}_m$, and $\hat{y}_n$, the gradient vector (the first-order differential) can be obtained as follows:
$$\operatorname{grad} F_c = \nabla_{\hat{\mathbf{x}}_m, \hat{\mathbf{x}}_n} F_c = \frac{2(\hat{d}_k - \rho_k)}{\hat{d}_k} \begin{bmatrix} \hat{x}_m - \hat{x}_n \\ \hat{x}_n - \hat{x}_m \\ \hat{y}_m - \hat{y}_n \\ \hat{y}_n - \hat{y}_m \end{bmatrix}.$$
Hence, the Hessian matrix is:
$$\mathbf{J} = \nabla^2_{\hat{\mathbf{x}}_m, \hat{\mathbf{x}}_n} F_c.$$
We are interested in the eigenvalues of $\mathbf{J}$; the characteristic polynomial can be calculated as follows:
$$\lambda^2 (\lambda - 4) \left(\lambda - \frac{4(\hat{d}_k - \rho_k)}{\hat{d}_k}\right) = 0.$$
Then, the four eigenvalues can be solved as follows:
$$\lambda_1 = \lambda_2 = 0, \qquad \lambda_3 = 4, \qquad \lambda_4 = \frac{4(\hat{d}_k - \rho_k)}{\hat{d}_k}.$$
If $\hat{d}_k - \rho_k \ge 0$, all the eigenvalues are nonnegative; thus, a TT link has properties similar to those of an AT link. In the following, we generalize this further: if both kinds of ranging links appear in the LS cost function, this property remains unchanged.
Consider the case of M targets in the cooperative scenario; the Hessian matrix of $F_c$ is $\mathbf{J} \in \mathbb{R}^{2M \times 2M}$. For example, the elements of the matrix are shown in Figure 1a, where $D_{ij}^{k}$ represents the corresponding element, which is a second-order partial derivative of $F_c$ with respect to the coordinates of a target. Consider a decomposition of $\mathbf{J}$, and let $\mathbf{J} = \sum_{i=1}^{L} \mathbf{Q}_i$, where $\mathbf{Q}_i$ is the Hessian matrix corresponding to the ranging link $l_i$. If $l_i$ is an AT link, the analysis and result of Section 3.1 apply. If the ranging link is a TT link, there are sixteen nonzero elements in $\mathbf{Q}_i$; an example is shown in Figure 1d. All the elements of $\mathbf{Q}_i$ are translated to the upper left corner, as shown in Figure 1e. We denote the transformed matrix by $\mathbf{Q}_i'$ and let $\mathbf{G}_i = \lambda\mathbf{I} - \mathbf{Q}_i'$. Similarly, we construct the following block partition of $\mathbf{G}_i$:
$$\mathbf{G}_i = \begin{bmatrix} \mathbf{G}_{11} & \mathbf{G}_{12} \\ \mathbf{G}_{21} & \mathbf{G}_{22} \end{bmatrix}.$$
Above, $\mathbf{G}_{11} = \begin{bmatrix} \lambda - g_{11} & \cdots & -g_{14} \\ \vdots & \ddots & \vdots \\ -g_{41} & \cdots & \lambda - g_{44} \end{bmatrix} \in \mathbb{R}^{4 \times 4}$, $\mathbf{G}_{12} = \mathbf{0} \in \mathbb{R}^{4 \times (2M-4)}$, $\mathbf{G}_{21} = \mathbf{0} \in \mathbb{R}^{(2M-4) \times 4}$, and $\mathbf{G}_{22} = \lambda\mathbf{I} \in \mathbb{R}^{(2M-4) \times (2M-4)}$, where $g_{kl}$ is the element at the $k$th row and the $l$th column of $\mathbf{Q}_i'$. Similar to Formula (23), we have:
$$|\mathbf{G}_i| = |\mathbf{G}_{11}| \cdot |\mathbf{G}_{22} - \mathbf{G}_{21}\mathbf{G}_{11}^{-1}\mathbf{G}_{12}| = |\mathbf{G}_{11}| \cdot |\mathbf{G}_{22}| = \lambda^{2M-4}|\mathbf{G}_{11}|.$$
Let $|\mathbf{G}_i| = 0$. Reviewing the analysis for TT links, the four roots of $|\mathbf{G}_{11}| = 0$ are $\lambda_1 = \lambda_2 = 0$, $\lambda_3 = 4$, and $\lambda_4 = 4(\hat{d}_i - \rho_i)/\hat{d}_i$, and the remaining $2M-4$ roots of $|\mathbf{G}_i| = 0$ are all zero. Hence, if $\hat{d}_i - \rho_i \ge 0$, then $\mathbf{Q}_i \succeq 0$.
The analysis above shows that Proposition 1 is still valid in the cooperative scenario. Similar to the noncooperative scenario, the nonconvex region of $F_c$ grows as $\varepsilon_i$ increases and as the number of links satisfying $\varepsilon_i > 0$ increases.
Proposition 1 shows that if the condition $\hat{d}_i - \rho_i \ge 0$ is satisfied, an appropriate interval in which the LS cost function F is convex can always be found. Based on this, we can describe the condition for F to be convex globally.
Proposition 2.
For all i, if $\hat{d}_i - \rho_i \ge 0$ always holds on $\hat{\mathbf{x}} = [\hat{\mathbf{x}}_1\ \hat{\mathbf{x}}_2\ \cdots\ \hat{\mathbf{x}}_M]^T \in \mathbb{R}^{2M}$, then F is convex on $\mathbb{R}^{2M}$.
Proposition 2 is a sufficient but not necessary condition for the global convexity of the LS cost function. In a practical scenario where the ranging error is greater than zero, the distance observation $\rho_i$ is positive. In this case, the condition for global convexity is not satisfied. If $\varepsilon_i \le -d_i$, we have $\rho_i \le 0$; then, $\hat{d}_i - \rho_i \ge 0$ always holds, and F is convex in the global range. Although the global convexity condition is satisfied if all $\varepsilon_i \le -d_i$, it is a low probability event that all the observation values are negative because of the independence between the distance measurement errors. Therefore, in an actual location determination scenario, it is not common for the LS cost function to be convex in the global range.

4. Null Space of the Relative Angle Matrix

From the analysis in Section 3, it is known that a positive ranging error will cause the LS cost function to be nonconvex. In this condition, when the iterative method is used to search for the optimal solution, it may fall into a local minimum, so that the result obtained is not the global optimum. There are generally two ways of solving this problem. The first is to divide the domain $\mathbb{R}^{2M}$ into several intervals so that the cost function is convex in each subinterval. Then, an appropriate initial value is selected in each subinterval, and the result is obtained by the iterative method. Proposition 1 shows that such subintervals must exist, so this method is feasible in any case. The second method is to modify the original LS localization model to make it convex in the global range. The advantage of this method is that the convexity weakens the requirement on initial value selection; that is, the solution is insensitive to the selection of the initial value. Based on Proposition 2, we propose a robust method using the relative angle matrix for WSN positioning. The basic idea of this method is to transform the LS cost function into a globally convex function by calculating the null space of the relative angle matrix. This ensures that the minimum obtained by the Gauss–Newton iteration method is the global optimum. Some further analysis will also be performed in this part.

4.1. Definition of the Relative Angle Matrix

Definition 1.
The relative angle matrix in the noncooperative scenario is defined as:
$$\Omega_s = \begin{bmatrix} 1 & \cos\theta_{1T2} & \cdots & \cos\theta_{1TN} \\ \cos\theta_{2T1} & 1 & \cdots & \cos\theta_{2TN} \\ \vdots & \vdots & \ddots & \vdots \\ \cos\theta_{1TN} & \cos\theta_{2TN} & \cdots & 1 \end{bmatrix},$$
where $\Omega_s \in \mathbb{R}^{N \times N}$ and $\theta_{iTj}$ denotes the angle between links $l_i$ and $l_j$, given by:
$$\theta_{iTj} = \begin{cases} \theta_{iTj}, & l_i \cap l_j = T_k \\ \dfrac{\pi}{2}, & l_i \cap l_j \ne T_k, \end{cases}$$
i.e., $\theta_{iTj}$ is the angle formed at the common target node $T_k$ if the two links share one, and $\pi/2$ (so that its cosine is zero) otherwise.
Assume the following formulation:
$$P_s = \begin{bmatrix} \dfrac{x_{s_1} - \hat{x}_1}{\hat{d}_1} & \dfrac{y_{s_1} - \hat{y}_1}{\hat{d}_1} & \cdots & 0 & 0 \\ \dfrac{x_{s_2} - \hat{x}_1}{\hat{d}_2} & \dfrac{y_{s_2} - \hat{y}_1}{\hat{d}_2} & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & \dfrac{x_{s_N} - \hat{x}_M}{\hat{d}_N} & \dfrac{y_{s_N} - \hat{y}_M}{\hat{d}_N} \end{bmatrix}, \qquad \varepsilon_s = \begin{bmatrix} \rho_1 - \hat{d}_1 \\ \rho_2 - \hat{d}_2 \\ \vdots \\ \rho_N - \hat{d}_N \end{bmatrix},$$
where $P_s \in \mathbb{R}^{N \times 2M}$ and $\varepsilon_s \in \mathbb{R}^{N}$. The stationarity condition $\nabla_{\hat{\mathbf{x}}_1} F_s = \mathbf{0}$ is equivalent to:
$$P_s^T \varepsilon_s = \mathbf{0}.$$
Further, the relationship between the relative angle matrix $\Omega_s$ and $P_s$ is as follows:
$$\Omega_s = P_s P_s^T.$$
Substituting (30) into (31), we obtain:
$$\Omega_s \varepsilon_s = \mathbf{0}.$$
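As a numerical sanity check of these definitions, the sketch below (our own, with hypothetical coordinates) builds $P_s$ from the unit link directions for a single target, forms $\Omega_s = P_sP_s^T$, and verifies that vectors in the null space of $\Omega_s$ indeed satisfy $\Omega_s\nu = \mathbf{0}$.

```python
import numpy as np

def relative_angle_matrix(x_hat, anchors):
    """Omega_s = P_s P_s^T for a single target.

    Row i of P_s is the unit vector from the estimate x_hat toward anchor s_i,
    so entry (i, j) of Omega_s is the cosine of the angle between links l_i and l_j.
    """
    diff = anchors - x_hat
    P = diff / np.linalg.norm(diff, axis=1, keepdims=True)   # N x 2
    return P @ P.T

def null_space_basis(omega, tol=1e-10):
    """Orthonormal basis of the null space of Omega via SVD."""
    _, s, vt = np.linalg.svd(omega)
    return vt[s < tol].T                                     # columns span null(Omega)

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
omega = relative_angle_matrix(np.array([2.0, 4.0]), anchors)
phi = null_space_basis(omega)
print(np.allclose(omega @ phi, 0))    # True: columns of phi satisfy Omega * nu = 0
```

For one target in the plane, Property 1 gives $\operatorname{rank}(\Omega_s) \le 2$, so with four anchors the basis returned here is two-dimensional.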
Imitating the noncooperative scenario, we can define the relative angle matrix in the cooperative scenario.
Definition 2.
The relative angle matrix in the cooperative scenario is defined as:
$$\Omega_c = \begin{bmatrix} 1 & \cos\theta_{1T2} & \cos\theta_{1T3} & \cdots & \cos\theta_{1TL} \\ \cos\theta_{2T1} & 1 & \cos\theta_{2T3} & \cdots & \cos\theta_{2TL} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \cos\theta_{(L-1)T1} & \cos\theta_{(L-1)T2} & \cos\theta_{(L-1)T3} & \cdots & \cos\theta_{(L-1)TL} \\ \cos\theta_{LT1} & \cos\theta_{LT2} & \cos\theta_{LT3} & \cdots & 2 \end{bmatrix},$$
where $\Omega_c \in \mathbb{R}^{L \times L}$. When $i \ne j$, $\theta_{iTj}$ is the angle between $l_i$ and $l_j$, defined by Formula (28). If $i = j$, let $\omega_{ij}$ represent the element in the $i$th row and $j$th column of $\Omega_c$: if $l_i$ is an AT link, then $\omega_{ij} = 1$; otherwise, $l_i$ is a TT link, and $\omega_{ij} = 2$. Assume that:
$$P_c = \begin{bmatrix} \dfrac{x_{s_1} - \hat{x}_1}{\hat{d}_1} & \dfrac{y_{s_1} - \hat{y}_1}{\hat{d}_1} & 0 & 0 & \cdots & 0 & 0 \\ \dfrac{x_{s_2} - \hat{x}_1}{\hat{d}_2} & \dfrac{y_{s_2} - \hat{y}_1}{\hat{d}_2} & 0 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & \dfrac{x_{s_{N-1}} - \hat{x}_M}{\hat{d}_{N-1}} & \dfrac{y_{s_{N-1}} - \hat{y}_M}{\hat{d}_{N-1}} \\ 0 & 0 & 0 & 0 & \cdots & \dfrac{x_{s_N} - \hat{x}_M}{\hat{d}_N} & \dfrac{y_{s_N} - \hat{y}_M}{\hat{d}_N} \\ \dfrac{\hat{x}_2 - \hat{x}_1}{\hat{d}_{N+1}} & \dfrac{\hat{y}_2 - \hat{y}_1}{\hat{d}_{N+1}} & \dfrac{\hat{x}_1 - \hat{x}_2}{\hat{d}_{N+1}} & \dfrac{\hat{y}_1 - \hat{y}_2}{\hat{d}_{N+1}} & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \dfrac{\hat{x}_k - \hat{x}_2}{\hat{d}_L} & \dfrac{\hat{y}_k - \hat{y}_2}{\hat{d}_L} & \cdots & 0 & 0 \end{bmatrix}, \qquad \varepsilon_c = \begin{bmatrix} \rho_1 - \hat{d}_1 \\ \rho_2 - \hat{d}_2 \\ \vdots \\ \rho_L - \hat{d}_L \end{bmatrix},$$
where $P_c \in \mathbb{R}^{L \times 2M}$ and $\varepsilon_c \in \mathbb{R}^{L}$. Then, the stationarity condition $\nabla_{\hat{\mathbf{x}}_j} F_c = \mathbf{0}$ is equivalent to:
$$P_c^T \varepsilon_c = \mathbf{0}.$$
Multiplying both ends of the equation by P c results in:
$$P_c P_c^T \varepsilon_c = \mathbf{0}.$$
In the cooperative scenario, it still holds that:
$$\Omega_c = P_c P_c^T.$$
Substituting Formula (36) into Formula (37), we obtain:
$$\Omega_c \varepsilon_c = \mathbf{0}.$$
We refer to the relative angle matrices Ω s and Ω c collectively as Ω . The following property of Ω is established generally.
Property 1.
In the two-dimensional plane, let r be the rank of Ω, i.e., $r = \operatorname{rank}(\Omega)$. Assume that there are M targets in the localization scenario, and the number of unknown variables is $\tau = 2M$; then, the inequality $r \le \tau$ is satisfied.
The proof of Property 1 is given in Appendix B.

4.2. Null Space Algorithms

From Formulas (32) and (38), we can conclude that the ranging error vector lies in the null space of Ω. It is also known from Property 1 that, once Ω has been determined, if the number of nontrivial solutions for the ranging error $\varepsilon$ is $N_B$, we have $N_B \ge 1$; if and only if $r = \tau$, then $N_B = 1$. If $r < \tau$, the solutions satisfying the equation form a set, i.e., there are infinitely many $\varepsilon$ satisfying the equation. Hence, once a basic solution system of $\varepsilon$ has been obtained, $\varepsilon$ lies in the linear space spanned by $\Phi$, where $\Phi \in \mathbb{R}^{N_A \times (\tau - r)}$. Let $\Phi = [\varphi_1\ \varphi_2\ \cdots\ \varphi_{\tau-r}]$; then, the general solution of $\varepsilon$ can be expressed as:
$$\varepsilon = \sum_{i=1}^{\tau - r} k_i \varphi_i,$$
where $k_i \in \mathbb{R}$ and $\varphi_i \in \mathbb{R}^{N_A}$. Equation (39) indicates that, for all $\varepsilon$ in the same linear space, the LS cost function F has the same local optimal points. However, convexity varies with $\varepsilon$. If $\varepsilon > 0$, F is likely to be a nonconvex function; thus, multiple local optima will exist. If $\varepsilon \le 0$, it is known from Proposition 2 that F can be transformed into a convex function in the global range. Assuming that Ω is known, we can determine $\Phi$ for a given $\varepsilon$. If there exists a linear combination of the column vectors of $\Phi$ that results in $\nu \ge 0$, then, by the same property of the local optima, we can change the LS cost function F and make it convex in the global range.
Proposition 3.
Let ν be an element of the null space of Ω such that $\nu \ge 0$; then, Equations (7) and (12) can be rewritten as:
$$\arg\min_{\hat{d}_i} F_s' = \sum_{i=1}^{N} \left(\hat{d}_i - (\rho_i - \nu_i)\right)^2,$$
$$\arg\min_{\hat{d}_i} F_c' = \sum_{i=1}^{L} \left(\hat{d}_i - (\rho_i - \nu_i)\right)^2.$$
In this paper, we refer to $F_s'$ and $F_c'$ collectively as $F'$. In Proposition 3, F and $F'$ have the same local optimal points, i.e., the objective functions are equivalent. Additionally, $F'$ is globally convex, which endows it with the characteristic of large-scale convergence; using it therefore reduces the sensitivity to the selection of the initial value. If we want to apply Proposition 3, the following problems remain to be solved:
  • How do we obtain Ω?
  • For an arbitrary Ω, does a ν satisfying the condition $\nu \ge 0$ exist?
  • When there are errors in Ω, how do we deal with them?
For the first problem, the direct method is to measure the angles, which is similar to the AOA method; in this way, Proposition 3 turns into an AOA/TOA hybrid location algorithm. Another method is to transform the distance data into the corresponding angles via the cosine law and then construct the relative angle matrix. To answer the second question, we will prove that the following property is valid:
Property 2.
In both noncooperative and cooperative scenarios, there always exists a nontrivial $\nu \ge 0$ if the node to be located is inside the convex polygon composed of its adjacent nodes, whereas no ν satisfying this condition exists if the node is outside the convex polygon.
The proof of Property 2 is given in Appendix C.
According to Property 2, Proposition 3 can be applied in both noncooperative and cooperative scenarios; however, there is a limitation that it can only be applied if all the targets are located in the convex hull composed of adjacent nodes. In contrast, Proposition 3 does not hold if there are targets outside the convex hull formed by neighboring nodes.
Regarding the third problem, the angle measurement errors will influence the final positioning result. In addition, in the process of calculating the null space of Ω, it is possible that no suitable $\nu \ge 0$ exists because of the errors. In this case, Proposition 3 will also be invalid, so the effect of these errors must be eliminated.
The paper [36] used the method of principal component analysis (PCA) to reduce the deviation of the relative angular matrix in source localization. The main steps are as follows. First, Ω is decomposed by SVD. Then, all the eigenvalues are sorted in descending order, and the large eigenvalues are selected as the main eigenvalues. At the same time, the eigenvectors corresponding to these eigenvalues are selected to reconstruct Ω. The null space of the reconstructed matrix is determined; one of its vectors is selected as the base vector $\varphi_1$ and is multiplied by a coefficient $k_1$ so as to satisfy $\nu = k_1\varphi_1 \ge 0$. The PCA method is simple and efficient; when the measurement errors are not large, it can calculate ν well. In WSN localization, however, the PCA method cannot obtain an appropriate ν as the errors and the number of targets increase. If the cosine law is used to transform the ranging data into angle data, the ranging measurement error is converted into an angle measurement error by the calculation, so similar problems also exist. We consider the main reason for this problem to be that the distance circles formed by the nodes and the range measurements may not intersect at one point. Then, the sum of the radian measures of the relative angles of each node at the same point will not equal 2π.
Let $\hat{\alpha}_1, \hat{\alpha}_2, \ldots, \hat{\alpha}_n$ be the estimated values of the angles formed at the target by its adjacent nodes, and let the corresponding angle measurements or calculated values be $\alpha_1, \alpha_2, \ldots, \alpha_n$. We define the angle least squares cost function as follows:
$$\arg\min_{\hat{\alpha}_i} F_\alpha = \sum_{i=1}^{n} \left(\hat{\alpha}_i - \alpha_i\right)^2 \quad \text{s.t.} \quad \sum_{i=1}^{n} \hat{\alpha}_i = 2\pi.$$
Formula (42) is a least squares problem with a linear equality constraint and can be transformed into an unconstrained optimization problem by introducing a Lagrange multiplier. If the Lagrange multiplier is λ, then Formula (42) is equivalent to:
$$\arg\min_{\hat{\alpha}_i} f_\alpha = \sum_{i=1}^{n} \left(\hat{\alpha}_i - \alpha_i\right)^2 + \lambda\left(\sum_{i=1}^{n} \hat{\alpha}_i - 2\pi\right).$$
We calculate the partial derivatives of $f_\alpha$:
$$\frac{\partial f_\alpha}{\partial \hat{\alpha}_1} = 2(\hat{\alpha}_1 - \alpha_1) + \lambda, \quad \ldots, \quad \frac{\partial f_\alpha}{\partial \hat{\alpha}_n} = 2(\hat{\alpha}_n - \alpha_n) + \lambda, \quad \frac{\partial f_\alpha}{\partial \lambda} = \hat{\alpha}_1 + \hat{\alpha}_2 + \cdots + \hat{\alpha}_n - 2\pi.$$
Setting every derivative in (44) to zero, we obtain the following optimal point:
$$\hat{\alpha}_1 = \alpha_1 - \frac{\lambda}{2}, \quad \ldots, \quad \hat{\alpha}_n = \alpha_n - \frac{\lambda}{2}, \qquad \lambda = \frac{2}{n}\left(\sum_{i=1}^{n} \alpha_i - 2\pi\right).$$
Using this method, the relative angles between a target and its adjacent nodes can be estimated such that the sum of the relative angles around each target equals 2π. According to the properties of the relative angular matrix, a vector $\nu \ge 0$ must then exist, so F can be transformed into a globally convex function by Proposition 3 and solved by the Gauss–Newton iteration method.
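Written out, the solution of (45) simply distributes the angular closure error $2\pi - \sum_i \alpha_i$ evenly over the n angles. The following is a minimal sketch of this correction (our own illustration with made-up measurements).

```python
import numpy as np

def correct_angles(alpha):
    """Constrained angle least squares of Equations (42)-(45).

    Distributes the closure error (2*pi minus the sum of the measured angles)
    equally, so the corrected angles around the target sum exactly to 2*pi.
    """
    alpha = np.asarray(alpha, dtype=float)
    return alpha + (2.0 * np.pi - alpha.sum()) / alpha.size

measured = np.array([1.50, 1.62, 1.55, 1.70])     # hypothetical angle estimates (rad)
corrected = correct_angles(measured)
print(corrected, corrected.sum())                  # the corrected angles sum to 2*pi
```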
If the cosine law is used to calculate the angles, it may be inapplicable: because of the ranging errors, the three side lengths of a triangle may not satisfy the law's conditions, and the method needs to be further improved. In this paper, we use a generalized cosine law. Let β denote an angle in such a triangle; then, we calculate β as follows:
$$\beta = \begin{cases} \pi, & t < -1 \\ \arccos t, & -1 \le t \le 1 \\ 0, & t > 1, \end{cases}$$
where $t$ is calculated from the three side lengths according to the cosine law. We call the respective methods the "angle-based null space algorithm" (A-NLS) and the "cosine law-based null space algorithm" (C-NLS), according to whether the angles are obtained by angle measurement or by the cosine law calculation. The main steps of the algorithm are shown in Algorithm 1.
Algorithm 1: A-NLS and C-NLS algorithms.
Input: $\mathbf{s}_i$, $\rho_i$, $1 \le i \le L$, $i \in \mathbb{N}^+$, and $\alpha_i$ (the angle observations are unnecessary for C-NLS);
Output: $\hat{\mathbf{x}}$
 1: Measure or calculate the relative angle matrix Ω, which contains errors;
 2: Estimate the angles using Equation (45);
 3: Obtain the corrected relative angle matrix;
 4: Calculate the null space of the corrected matrix, and choose appropriate $k_i$ so that $\nu \ge 0$;
 5: For each $i$, calculate $\rho_i' = \rho_i - \nu_i$; set $K > 0$, $t = 0$, and choose a random $\mathbf{x}_0$;
 6: while $\|\mathbf{x}_{t+1} - \mathbf{x}_t\| > K$ do
 7:  Update $\mathbf{x}_{t+1}$ using the Gauss–Newton iteration method;
 8:   $t = t + 1$;
 9: end while
 10: return $\hat{\mathbf{x}} = \mathbf{x}_t$
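To make the overall flow concrete, the following is a compact, self-contained sketch of a C-NLS-style pipeline for a single target, written under our own simplifying assumptions: the relative angle matrix is built from the ranging data via the generalized cosine law of Equation (46), the (near-)null-space vector is taken from the smallest singular vector and scaled so that the modified observations become nonpositive, and a damped Gauss–Newton loop (with a backtracking safeguard that Algorithm 1 does not mention) is used. It is illustrative only, not the authors' reference implementation, and the cooperative multi-target case would additionally require the block structure of $P_c$ described in Section 4.1.

```python
import numpy as np

def c_nls_single_target(anchors, rho, x0, tol=1e-6, max_iter=100):
    """Sketch of a C-NLS-style solver for a single target."""
    N = len(anchors)
    omega = np.eye(N)
    for i in range(N):
        for j in range(i + 1, N):
            a = np.linalg.norm(anchors[i] - anchors[j])        # known anchor-anchor distance
            t = (rho[i]**2 + rho[j]**2 - a**2) / (2 * rho[i] * rho[j])
            omega[i, j] = omega[j, i] = np.clip(t, -1.0, 1.0)  # generalized cosine law
    _, _, vt = np.linalg.svd(omega)
    nu = vt[-1]                                # smallest singular vector, near null space of omega
    if nu.sum() < 0:
        nu = -nu                               # orient it toward nonnegative entries
    nu = np.clip(nu, 1e-6, None)               # guard against tiny negative components
    k = np.max(rho / nu)                       # scale so that rho - k*nu <= 0 elementwise
    rho_mod = rho - k * nu                     # nonpositive "distance" observations
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        diff = anchors - x
        d_hat = np.linalg.norm(diff, axis=1)
        r = d_hat - rho_mod                    # residuals of the modified LS cost
        J = -diff / d_hat[:, None]             # Jacobian of the residuals w.r.t. x
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        alpha, cost = 1.0, np.sum(r**2)        # simple backtracking for robustness
        while alpha > 1e-4:
            d_new = np.linalg.norm(anchors - (x + alpha * step), axis=1)
            if np.sum((d_new - rho_mod)**2) < cost:
                break
            alpha *= 0.5
        x = x + alpha * step
        if np.linalg.norm(alpha * step) < tol:
            break
    return x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 10.0]])
true_x = np.array([4.0, 3.0])
rho = np.linalg.norm(anchors - true_x, axis=1) + np.array([0.3, 4.8, 0.2])  # NLOS-like errors
print(c_nls_single_target(anchors, rho, x0=np.array([50.0, 50.0])))         # far-off initial value
```

Because the modified observations are nonpositive, the modified cost is globally convex (Proposition 2), so the iteration reaches the same answer regardless of how far away the initial value is placed.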

5. Simulations and Results

In this part, we validate the propositions and the proposed algorithm using numerical simulations. The following simulations consider two typical scenarios, noncooperative and cooperative positioning, for analysis and verification.

5.1. Simulation Scenario Setting

In the noncooperative scenario, we assumed that there were four anchors and one target. In the cooperative scenario, there were two targets and six anchors. The Cartesian coordinate system was established in the two-dimensional plane. The coordinates of each anchor are shown in Table 1. Three kinds of environments were simulated: the NLOS environment, the LOS environment, and the case of negative ranging errors. We can consider the latter case as a particular condition. Although it is not common in actual positioning practice, we can regard it as the equivalent ranging error after using Proposition 3. The magnitudes of errors of each scenario in various situations are shown in Table 2 and Table 3.

5.2. Convexity Verification

First, function F s was simulated globally to observe the convexity in various environments. The convexity of F s was considered under various ranging errors. The function image, the semipositive definite condition, and the estimates calculated by the iterative algorithm were obtained. The results are shown in Figure 2.
Figure 2c,f,g shows the condition of the Hessian matrix $\mathbf{J}$ at each point in the plane; the red part represents positive semidefinite, the blue part negative semidefinite, and the green part indefinite. Theorem 1 shows that if $\mathbf{J}$ is positive semidefinite, the function is convex. It is observed from Figure 2c that if the target was in the NLOS environment, the positive semidefinite area was discontinuous, and the sum of the indefinite and negative semidefinite areas was larger. If the target was in the LOS environment, the positive semidefinite area was continuous, while the sum of the indefinite and negative semidefinite areas was smaller. Proposition 2 shows that if there is a positive ranging error, the cost function is nonconvex in the global range. Therefore, in the above two environments, $F_s$ cannot be convex in the global range.
The target position was solved for by the Gauss–Newton iteration method, with different initial values selected from different directions. The results are shown in Figure 2b,e,h. In the NLOS environment, because of the positive errors, the LS cost function $F_s$ had more than one local optimal point due to nonconvexity. When the initial value varied, the iterative algorithm converged to different minima. In the LOS environment, the ranging errors were lower. Although the LS cost function $F_s$ was also nonconvex in the global range, the number of local optimal points did not increase. In this case, the iterative algorithm converged to the same location. Hence, the original LS model was applicable in the LOS environment.
The results obtained when the ranging error was negative and satisfied the condition for the cost function $F_s$ to be convex are shown in Figure 2g,i. It is observed that $\mathbf{J}$ was positive semidefinite and $F_s$ was convex in the global range. Figure 2h shows the results of the Gauss–Newton iteration algorithm with various initial values. The circle formed by the dotted lines in the graph indicates that the ranging error was negative, and its radius equals the absolute value of the distance observation. According to the results, the algorithm eventually iterated to the same location regardless of the initial value, which confirms the global convergence in this case.
In the cooperative scenario, various initial values were selected, and the target locations were calculated by the Gauss–Newton iterative algorithm. The iterative images are shown in Figure 3a–c. In the NLOS positioning environment, because of the positive ranging errors, the cost function was not convex in the global range. Therefore, various initial values caused the iteration algorithm to converge to different minima. In the LOS positioning environment, although $F_c$ was also nonconvex in the global range, the positive ranging error was lower; hence, the algorithm could still converge to the same local optimal point. If the ranging error was negative, the cost function was convex in the global range. Thus, regardless of the selected initial value, the algorithm converged to the global optimal point.

5.3. Null Space Algorithm Performance

To compare the performance of the algorithms (LS, A-NLS, and C-NLS), the following simulations were performed in both the noncooperative and cooperative scenarios. We assumed that all the targets were located in the polygons composed of their adjacent nodes. The coordinates of each anchor are shown in Table 1. We performed simulations in both the LOS and NLOS environments. The ranging error parameters were set to $\mu_{nlos} = 5$ m, $\sigma_{los}^2 = 3$ m², and $\sigma_{nlos}^2 = 4.5$ m², and in the AOA angle measurement, the parameters of the relative angle error matrix were assumed to be $\mu_\alpha = 0.1$ and $\sigma_\alpha^2 = 0.5$. In the noncooperative scenario, the real location of the target was $\mathbf{x}_1 = [2, 4]^T$, while in the cooperative scenario, the real locations of the two targets were $\mathbf{x}_1 = [4, 3]^T$ and $\mathbf{x}_2 = [6, 5]^T$. For the LS, C-NLS, and A-NLS algorithms, an arbitrary position was selected as the initial iteration value in each run. To cover all directions of the initial value relative to the targets, the distribution $\hat{\mathbf{x}}_0 \sim \mathcal{N}(\mu_0, \Sigma_0)$ was used, where $\mu_0$ is the mean vector and $\Sigma_0$ is the covariance matrix; their values were set to $\mu_0 = [50, 50]^T$ and $\Sigma_0 = \operatorname{diag}(100, 100)$. As a reference, we chose the SDP and PPM algorithms to compare against the null space algorithms proposed in this paper. For the PPM algorithm, the initial iteration position of each target was set to the average of the coordinates of its adjacent nodes. For each scenario, 100 numerical simulations were performed. The estimated positions obtained by the algorithms were compared with the real positions of the targets, and the root mean squared errors (RMSE) were calculated according to Formula (47).
$$e_{RMSE} = \sqrt{\frac{1}{M}\sum_{j=1}^{M} \left\| \hat{\mathbf{x}}_j - \mathbf{x}_j \right\|^2}.$$
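For reference, a one-line implementation of this error metric (our own sketch, with hypothetical position estimates):

```python
import numpy as np

def rmse(x_hat, x_true):
    """Root mean squared position error over the M target estimates."""
    x_hat, x_true = np.asarray(x_hat), np.asarray(x_true)
    return np.sqrt(np.mean(np.sum((x_hat - x_true) ** 2, axis=1)))

# Example with the two cooperative-scenario targets from Section 5.3 and made-up estimates:
print(rmse([[4.3, 2.6], [6.5, 5.4]], [[4.0, 3.0], [6.0, 5.0]]))
```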
We calculated the convergence probability for the different algorithms. The simulation results are shown in Figure 4, Table 4 and Table 5. With a convergence threshold of 5 m, the graphs and tables show that, in the LOS environment, every algorithm had a high convergence probability in the noncooperative scenario, while in the cooperative scenario, the convergence probabilities of the LS and PPM algorithms became lower. In the NLOS environment, the convergence probabilities of the LS and PPM algorithms decreased severely, especially in the cooperative scenario, where they almost failed to converge. The convergence performance of the SDP algorithm in the cooperative scenario also decreased considerably. However, the C-NLS algorithm maintained a high convergence probability in all environments and scenarios. The performance of the A-NLS algorithm was similar to that of the C-NLS algorithm, but it decreased slightly in the NLOS environment of the cooperative scenario.
Afterwards, the corresponding cumulative probability distributions of errors were obtained. The results are shown in Figure 5. In particular, Figure 5a,c shows that in the LOS environment, the differences between the algorithms were not significant in both the noncooperative and cooperative scenarios. In the NLOS environment, as shown in Figure 5b,d, the differences between the algorithms were apparent. The LS and PPM algorithms showed large positioning errors in the cooperative and noncooperative scenarios. The main reason is that the positive ranging errors increased, which directly led to a sharp decline of positioning performance.
The location precision of the SDP and null space algorithms (A-NLS, C-NLS) did not decrease in the NLOS environment, which reflects the stability of these algorithms in various situations; i.e., the algorithms can obtain better location estimates in both the LOS and NLOS environments. At the same time, the null space algorithm proposed in this paper was slightly better than the traditional SDP algorithm in both noncooperative and cooperative scenarios and achieved the desired goal.

6. Conclusions

In this paper, a sufficient but not necessary condition for the global convexity of the LS cost function was specified for WSN positioning. Generally, when all the ranging errors were far less than zero, the LS cost function was convex. Next, we defined the relative angle matrix in both the noncooperative and cooperative scenarios and proved two essential properties. We observed that if all the targets were located in the convex polygons formed by their adjacent nodes, the LS cost function could be transformed into a globally convex function by constructing measurements with negative distances. Based on the analysis, we proposed a robust algorithm for WSN localization. The proposed method reduced the sensitivity of the Gauss–Newton iteration algorithm to initial value selection and made the function globally convex; in other words, the function had the characteristic of large-scale convergence. In the fifth part of the article, numerical simulations were performed to verify the propositions described in the third part, and the robust algorithm was compared with conventional methods. The results showed that the null space algorithm effectively constrained the error in the NLOS environment and obtained more accurate positioning results in both the LOS and NLOS environments.
In the future, we will study the impact of varying the topologies and the number of anchors and targets. Furthermore, we will deal with the problem of targets that are not located inside the convex polygons formed by their neighboring nodes.

Author Contributions

Y.T. and J.L. proposed and implemented the main idea. S.T. organized the overall flow of the manuscript, and Y.T. drafted the manuscript. J.Z. and W.L. refined the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (Grant No. 61601511).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Before proving the proposition, we first give the following lemma:
Lemma A1.
The sum of finitely many positive semidefinite matrices is still positive semidefinite. That is, if $\mathbf{R}_i \succeq 0$, $1 \le i \le N$, $i \in \mathbb{N}^+$, then $\mathbf{W} = \sum_{i=1}^{N} \mathbf{R}_i \succeq 0$.
Proof of Proposition 1.
Consider decomposing $\mathbf{J}$ with the representation $\mathbf{J} = \sum_{i=1}^{N} \mathbf{Q}_i$, where:
$$\mathbf{Q}_i = \begin{bmatrix} \dfrac{\hat{d}_i - \rho_i}{\hat{d}_i} + \dfrac{\rho_i}{\hat{d}_i^3}(x_{s_i} - \hat{x}_1)^2 & \dfrac{\rho_i}{\hat{d}_i^3}(x_{s_i} - \hat{x}_1)(y_{s_i} - \hat{y}_1) \\[2mm] \dfrac{\rho_i}{\hat{d}_i^3}(x_{s_i} - \hat{x}_1)(y_{s_i} - \hat{y}_1) & \dfrac{\hat{d}_i - \rho_i}{\hat{d}_i} + \dfrac{\rho_i}{\hat{d}_i^3}(y_{s_i} - \hat{y}_1)^2 \end{bmatrix}.$$
To find the eigenvalues of Q i , let:
$$|\lambda\mathbf{I} - \mathbf{Q}_i| = 0.$$
Simplifying it, we obtain:
$$\lambda^2 - \frac{2\hat{d}_i - \rho_i}{\hat{d}_i}\lambda + \frac{\hat{d}_i - \rho_i}{\hat{d}_i} = 0.$$
The two roots of λ can be obtained as follows:
$$\lambda_1 = 1, \qquad \lambda_2 = \frac{\hat{d}_i - \rho_i}{\hat{d}_i}.$$
As Formula (A4) shows, matrix $\mathbf{Q}_i$ has two eigenvalues, and $\lambda_1 > 0$ always holds. If the other eigenvalue satisfies $\lambda_2 = (\hat{d}_i - \rho_i)/\hat{d}_i \ge 0$, then $\mathbf{Q}_i \succeq 0$. From Lemma A1, we know that if $\hat{d}_i - \rho_i \ge 0$, $1 \le i \le N$, $i \in \mathbb{N}^+$, holds, then $\mathbf{J} \succeq 0$. The set of all estimates $\hat{\mathbf{x}}_1$ that satisfy this condition is $\mathcal{C}$. □

Appendix B

Proof of Property 1.
We first show that this property holds in the noncooperative scenario with one target. Let the angles formed at the target by each pair of adjacent anchor links be $\alpha_1, \alpha_2, \ldots, \alpha_N$. Then, we can compute that:
$$\Omega_s = \begin{bmatrix}
1 & \cos\alpha_1 & \cos(\alpha_1+\alpha_2) & \cdots & \cos\left(\sum_{i=1}^{N-1}\alpha_i\right) \\
\cos\left(\sum_{i=2}^{N}\alpha_i\right) & 1 & \cos\alpha_2 & \cdots & \cos\left(\sum_{i=2}^{N-1}\alpha_i\right) \\
\cos\left(\sum_{i=3}^{N}\alpha_i\right) & \cos\left(\sum_{i=3}^{N}\alpha_i+\alpha_1\right) & 1 & \cdots & \cos\left(\sum_{i=3}^{N-1}\alpha_i\right) \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\cos\alpha_N & \cos(\alpha_N+\alpha_1) & \cos(\alpha_N+\alpha_1+\alpha_2) & \cdots & 1
\end{bmatrix}$$
$$= \begin{bmatrix}
1 & \cos\alpha_1 & \cos(\alpha_1+\alpha_2) & \cdots & \cos\left(\sum_{i=1}^{N-1}\alpha_i\right) \\
\cos(-\alpha_1) & 1 & \cos(-\alpha_1+\alpha_1+\alpha_2) & \cdots & \cos\left(-\alpha_1+\sum_{i=1}^{N-1}\alpha_i\right) \\
\cos(-\alpha_1-\alpha_2) & \cos(-\alpha_1-\alpha_2+\alpha_1) & 1 & \cdots & \cos\left(-\alpha_1-\alpha_2+\sum_{i=1}^{N-1}\alpha_i\right) \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\cos\left(-\sum_{i=1}^{N-1}\alpha_i\right) & \cos\left(-\sum_{i=1}^{N-1}\alpha_i+\alpha_1\right) & \cos\left(-\sum_{i=1}^{N-1}\alpha_i+\alpha_1+\alpha_2\right) & \cdots & 1
\end{bmatrix}.$$
An elementary row transformation is performed on $\Omega_s$; then, the simplified matrix can be structured as:
$$\begin{bmatrix}
1 & \cos\alpha_1 & \cos(\alpha_1+\alpha_2) & \cdots & \cos\left(\sum_{i=1}^{N-1}\alpha_i\right) \\
0 & \sin\alpha_1 & \sin(\alpha_1+\alpha_2) & \cdots & \sin\left(\sum_{i=1}^{N-1}\alpha_i\right) \\
0 & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 0
\end{bmatrix}.$$
From this, we observe that the rank of $\Omega_s$ with one target satisfies $r \le 2$. Consider the case of more than one target in the noncooperative scenario. For each target, we can perform a similar elementary row transformation, and the number of nonzero rows is no more than two. Hence, the number of nonzero rows of $\Omega_s$ will not be greater than τ. In the cooperative scenario, the first to the Nth rows of $\Omega_c$ are similar to those of $\Omega_s$. For the rows numbered from N + 1 to L, due to the existence of the TT links, the number of nonzero elements per row is four, and if $i = j$, we have $\omega_{ij} = 2$; the reason is that a cooperative link connects two targets. If each target has at least two nodes with known locations connected to it, the elements in the respective rows can be reduced to zero by a row transformation similar to that in the noncooperative scenario. In the simplified relative angular matrix $\Omega_c$, each user node is independent of the other nodes, so it must hold that $r \le \tau$. □

Appendix C

Definition A1.
An adjacent node of T j is defined as the node that has a ranging link with T j , whether it is an anchor or a target.
Definition A2.
A point p in a convex set $\mathcal{K}$ is said to be an extreme point if it cannot be written in the form $p = tx + (1-t)y$, where x and y are distinct points of $\mathcal{K}$ and $0 < t < 1$; informally, this means p is not between two other points of $\mathcal{K}$.
Lemma A2.
Every point in a bounded closed convex set must be a convex combination of its extreme points [40].
Lemma A3.
Let $\mathcal{K}_1, \mathcal{K}_2$ be convex subsets of $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively. Then, $\mathcal{K}_1 \times \mathcal{K}_2 \subseteq \mathbb{R}^n \times \mathbb{R}^m \cong \mathbb{R}^{m+n}$ is convex. Furthermore, a point $(P_1, P_2)$ is an extreme point of $\mathcal{K}_1 \times \mathcal{K}_2$ if and only if $P_1$ is an extreme point of $\mathcal{K}_1$ and $P_2$ is an extreme point of $\mathcal{K}_2$ [40].
Figure A1. Establish the coordinate system with the user node as the origin. Draw the unit circle with the origin as the center. The points and angles defined by each link are as shown. Graph (a) denotes one target T 1 in the noncooperative scenario, and Graph (b) denotes the cooperative scenario, where one of the user nodes is T j .
Proof of Property 2.
We first assume that there is only one target node in the scenario, and the error vector is set to $\varepsilon = [\varepsilon_1\ \varepsilon_2\ \cdots\ \varepsilon_N]^T$. According to Formula (32),
$$\begin{cases} \varepsilon_1 + \sum_{i=2}^{N} \cos\left(\sum_{j=1}^{i-1}\alpha_j\right)\varepsilon_i = 0 \\ \sum_{i=2}^{N} \sin\left(\sum_{j=1}^{i-1}\alpha_j\right)\varepsilon_i = 0. \end{cases}$$
As Figure A1a shows, if a coordinate system is established with T 1 as the origin, then Formula (32) can be rewritten as follows:
$$\begin{bmatrix} 1 \\ 0 \end{bmatrix}\varepsilon_1 + \sum_{i=2}^{N} \begin{bmatrix} \cos\left(\sum_{j=1}^{i-1}\alpha_j\right) \\ \sin\left(\sum_{j=1}^{i-1}\alpha_j\right) \end{bmatrix}\varepsilon_i = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$
Take $T_1$ as the center of a unit circle, and suppose that the circle intersects $l_i$ at point $P_i$. It is easy to observe that $[1\ 0]^T$, $[\cos\alpha_1\ \sin\alpha_1]^T$, $\ldots$, $\left[\cos\left(\sum_{j=1}^{N-1}\alpha_j\right)\ \sin\left(\sum_{j=1}^{N-1}\alpha_j\right)\right]^T$ are the coordinates of the points $P_1, P_2, \ldots, P_N$ in the Cartesian coordinate system. If $P_1, P_2, \ldots, P_N$ form a convex polygon and are its extreme points, the origin $T_1$ must be a point inside that convex polygon. Lemma A2 then shows that the coordinates of $T_1$ can be written as a convex combination of $P_1, P_2, \ldots, P_N$. In other words, there exist $\xi_1, \xi_2, \ldots, \xi_N$ with $\xi_i \in [0, 1]$ and $\sum_{i=1}^{N} \xi_i = 1$ such that:
$$\begin{bmatrix} 1 \\ 0 \end{bmatrix}\xi_1 + \sum_{i=2}^{N} \begin{bmatrix} \cos\left(\sum_{j=1}^{i-1}\alpha_j\right) \\ \sin\left(\sum_{j=1}^{i-1}\alpha_j\right) \end{bmatrix}\xi_i = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$
Letting ε_i = ξ_i then shows that Property 2 holds in the single-target scenario.
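This construction can be verified numerically. The sketch below is a minimal illustration rather than the paper's simulation setup: it assumes illustrative relative angles α_i chosen so that T_1 lies inside the polygon P_1 … P_N, builds the coefficient vectors of Equation (A8), and uses a linear program to search for nonnegative weights ξ_i that sum to one and satisfy Equation (A9).

```python
# Minimal numerical sketch of the single-target case in the proof of Property 2.
# The relative angles alpha_i below are illustrative (not taken from the paper);
# they are chosen so that every gap between consecutive link bearings is < pi,
# which places T1 inside the convex polygon P1...PN on the unit circle.
import numpy as np
from scipy.optimize import linprog

alpha = np.deg2rad([100.0, 95.0, 80.0])            # alpha_1 ... alpha_{N-1}, so N = 4 links
theta = np.concatenate(([0.0], np.cumsum(alpha)))  # bearings of P_1 ... P_N seen from T_1

# Columns of A are the unit vectors (cos, sin) appearing in Equation (A8).
A = np.vstack((np.cos(theta), np.sin(theta)))      # shape (2, N)
N = A.shape[1]

# Feasibility problem: find xi >= 0 with sum(xi) = 1 and A @ xi = 0,
# i.e., a convex combination of P_1...P_N that reproduces the origin T_1.
A_eq = np.vstack((A, np.ones((1, N))))
b_eq = np.array([0.0, 0.0, 1.0])
res = linprog(c=np.zeros(N), A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, 1.0)] * N)

print("feasible:", res.success)                    # True: a nonnegative null-space vector exists
print("xi =", np.round(res.x, 4), "  A @ xi =", np.round(A @ res.x, 6))
```

Because every gap between consecutive bearings is smaller than π, the origin lies inside the convex hull of the P_i, and the feasibility reported by the linear program reproduces the nonnegative weights whose existence the proof asserts.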
If more than one target exists, suppose that the j-th user node T_j has a total of r_j^t links, consisting of r_j^a AT links and r_j^c TT links. These links divide the plane into r_j^t sectors, and every two adjacent links form an angle with T_j as the vertex. Let the radian measures of the angles at T_j be γ_{j1}, γ_{j2}, …, γ_{j(r_j^a − 1)}, γ_{j r_j^a}, …, γ_{j r_j^t}. The pairs of adjacent links forming these angles are numbered l_{s_{j1}}, l_{s_{j2}}, …, l_{s_{j r_j^t}}, l_{s_{j1}}, where s_{jk} indexes the links connected to T_j and satisfies s_{j(k+1)} > s_{jk}. Setting the error vector to ε = [ε_1, ε_2, …, ε_L]^T, Formula (36) gives:
$$
\begin{aligned}
\varepsilon_{s_{11}} + \sum_{k=2}^{r_1^{t}} \cos\!\left(\sum_{m=1}^{k-1}\gamma_{1m}\right)\varepsilon_{s_{1k}} &= 0, &
\sum_{k=2}^{r_1^{t}} \sin\!\left(\sum_{m=1}^{k-1}\gamma_{1m}\right)\varepsilon_{s_{1k}} &= 0, \\
&\;\;\vdots & &\;\;\vdots \\
\varepsilon_{s_{M1}} + \sum_{k=2}^{r_M^{t}} \cos\!\left(\sum_{m=1}^{k-1}\gamma_{Mm}\right)\varepsilon_{s_{Mk}} &= 0, &
\sum_{k=2}^{r_M^{t}} \sin\!\left(\sum_{m=1}^{k-1}\gamma_{Mm}\right)\varepsilon_{s_{Mk}} &= 0 .
\end{aligned}
$$
As shown in Figure A1b, consider any T_j and the unit circle centered at T_j, and suppose that this circle intersects the links connected to T_j at the points P_{s_{j1}}, P_{s_{j2}}, …, P_{s_{j r_j^t}}. Then, the two lines of Equation (A10) associated with T_j can be written as follows:
$$
\begin{bmatrix} 1 \\ 0 \end{bmatrix}\varepsilon_{s_{j1}}
+ \sum_{k=2}^{r_j^{t}}
\begin{bmatrix} \cos\left(\sum_{m=1}^{k-1}\gamma_{jm}\right) \\ \sin\left(\sum_{m=1}^{k-1}\gamma_{jm}\right) \end{bmatrix}\varepsilon_{s_{jk}}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix},
$$
where (1, 0), (cos γ_{j1}, sin γ_{j1}), …, (cos(Σ_{m=1}^{r_j^t − 1} γ_{jm}), sin(Σ_{m=1}^{r_j^t − 1} γ_{jm})) are the Cartesian coordinates of the points P_{s_{j1}}, P_{s_{j2}}, …, P_{s_{j r_j^t}}. These points form a convex polygon and are its extreme points; hence, the polygon together with its interior constitutes a closed convex set H_j ⊂ R^2. If all targets T_j are located inside their corresponding convex polygons, then such a set H_j, containing the origin of the local frame of T_j, exists for every j with 1 ≤ j ≤ M, j ∈ N^+.
As can be observed from Lemma A3, the Cartesian product H_t = H_1 × H_2 × … × H_M ⊂ R^η is also a convex set. Combining the extreme points x̂_j (1 ≤ j ≤ M, j ∈ N^+) of the sets H_j forms an extreme point x̂ = [x̂_1, x̂_2, …, x̂_M]^T ∈ R^{2M} of H_t. According to Lemma A2, the origin is a convex combination of extreme points of H_t; in other words, there exist ξ_1, ξ_2, …, ξ_L with ξ_i ∈ [0, 1] and Σ_{i=1}^{L} ξ_i = 1 that satisfy Equation (37). Setting ε_i = ξ_i then shows that Property 2 also holds in the multitarget scenario. □

References

1. Tian, S.; Dai, W.; Li, G.; Lv, J.; Xu, R.; Cheng, J. On the Research of Cooperative Positioning: A Review. In Proceedings of the 4th China Satellite Navigation Conference, Wuhan, China, 15–17 May 2013.
2. Parkinson, B.W. Global Positioning System: Theory and Applications, Volume II; American Institute of Aeronautics and Astronautics (AIAA): Reston, VA, USA, 1996; Volume 163.
3. Witrisal, K.; Meissner, P.; Leitinger, E.; Shen, Y.; Gustafson, C.; Tufvesson, F.; Haneda, K.; Dardari, D.; Molisch, A.F.; Conti, A. High-Accuracy Localization for Assisted Living: 5G systems will turn multipath channels from foe to friend. IEEE Signal Process. Mag. 2016, 33, 59–70.
4. Shahmansoori, A.; Garcia, G.E.; Destino, G.; Seco-Granados, G.; Wymeersch, H. Position and Orientation Estimation Through Millimeter-Wave MIMO in 5G Systems. IEEE Trans. Wirel. Commun. 2018, 17, 1822–1835.
5. Jiang, L.; Cai, B.G.; Jian, W. Cooperative Localization of Connected Vehicles: Integrating GNSS with DSRC Using a Robust Cubature Kalman Filter. IEEE Trans. Intell. Transp. Syst. 2017, 18, 2111–2125.
6. Shen, Y.; Win, M.Z. Fundamental Limits of Wideband Localization—Part I: A General Framework. IEEE Trans. Inf. Theory 2010, 56, 4956–4980.
7. Shen, Y.; Wymeersch, H.; Win, M.Z. Fundamental Limits of Wideband Localization—Part II: Cooperative Networks. IEEE Trans. Inf. Theory 2010, 56, 4981–5000.
8. Penna, F.; Caceres, M.A.; Wymeersch, H. Cramer-Rao Bound for Hybrid GNSS-Terrestrial Cooperative Positioning. IEEE Commun. Lett. 2010, 14, 1005–1007.
9. Gante, A.D.; Siller, M. A Survey of Hybrid Schemes for Location Estimation in Wireless Sensor Networks. Procedia Technol. 2013, 7, 377–383.
10. Buehrer, R.M.; Jia, T. Collaborative Position Location; Wiley-IEEE Press: New York, NY, USA, 2011; pp. 1089–1114.
11. Xiong, Z.; Sottile, F.; Caceres, M.A.; Spirito, M.A.; Garello, R. Hybrid WSN-RFID Cooperative Positioning Based on Extended Kalman Filter. In Proceedings of the 2011 IEEE-APS Topical Conference on Antennas and Propagation in Wireless Communications, Torino, Italy, 12–16 September 2011.
12. Sottile, F.; Wymeersch, H.; Caceres, M.A.; Spirito, M.A. Hybrid GNSS-Terrestrial Cooperative Positioning Based on Particle Filter. In Proceedings of the 2011 IEEE Global Telecommunications Conference, Kathmandu, Nepal, 5–9 December 2011.
13. Chen, L.; Wen, F. Received Signal Strength Based Robust Cooperative Localization with Dynamic Path Loss Model. IEEE Sens. J. 2016, 16, 1265–1270.
14. Bao, D.; Hao, Z.; Hao, C.; Liu, S.; Zhang, Y.; Feng, Z. TOA Based Localization under NLOS in Cognitive Radio Network; Springer Press: New York, NY, USA, 2016; pp. 668–679.
15. Tomic, S.; Beko, M.; Rui, D.; Montezuma, P. A Robust Bisection-Based Estimator for TOA-Based Target Localization in NLOS Environments. IEEE Commun. Lett. 2017, 21, 2488–2491.
16. Nguyen, T.L.N.; Vy, T.D.; Shin, Y. An Efficient Hybrid RSS-AoA Localization for 3D Wireless Sensor Networks. Sensors 2019, 19, 2021.
17. Ji, L.; Conan, J.; Pierre, S. Mobile Terminal Location for MIMO Communication Systems. IEEE Trans. Antennas Propag. 2007, 55, 2417–2420.
18. Goulianos, A.A.; Freire, A.L.; Barratt, T.; Mellios, E.; Beach, M. Measurements and Characterisation of Surface Scattering at 60 GHz. In Proceedings of the 2017 IEEE 86th Vehicular Technology Conference (VTC-Fall), Toronto, ON, Canada, 24–27 September 2017.
19. Vaghefi, R.M.; Buehrer, R.M.; Wymeersch, H. Collaborative Sensor Network Localization: Algorithms and Practical Issues. Proc. IEEE 2018, 106, 1089–1114.
20. Shang, Y.; Rumi, W.; Zhang, Y.; Fromherz, M. Localization from Connectivity in Sensor Networks. IEEE Trans. Parallel Distrib. Syst. 2004, 15, 961–974.
21. Forero, P.A.; Giannakis, G.B. Sparsity-Exploiting Robust Multidimensional Scaling. IEEE Trans. Signal Process. 2012, 60, 4118–4134.
22. Li, B.; Cui, W.; Wang, B. A Robust Wireless Sensor Network Localization Algorithm in Mixed LOS/NLOS Scenario. Sensors 2015, 15, 23536–23553.
23. Destino, G.; Abreu, G. On the Maximum Likelihood Approach for Source and Network Localization. IEEE Trans. Signal Process. 2011, 59, 4954–4970.
24. Beck, A.; Stoica, P.; Li, J. Exact and Approximate Solutions of Source Localization Problems. IEEE Trans. Signal Process. 2008, 56, 1770–1778.
25. Zaeemzadeh, A.; Joneidi, M.; Shahrasbi, B.; Rahnavard, N. Robust Target Localization Based on Squared Range Iterative Reweighted Least Squares. In Proceedings of the IEEE International Conference on Mobile Ad Hoc and Sensor Systems, Orlando, FL, USA, 22–25 October 2017.
26. Oguz-Ekim, P.; Gomes, J.P.; Xavier, J.; Oliveira, P. Robust Localization of Nodes and Time-Recursive Tracking in Sensor Networks Using Noisy Range Measurements. IEEE Trans. Signal Process. 2011, 59, 3930–3942.
27. Soares, C.; Xavier, J.; Gomes, J. Simple and Fast Cooperative Localization for Sensor Networks. IEEE Trans. Signal Process. 2015, 63, 4532–4543.
28. Soares, C.; Gomes, J. Dealing with Bad Apples: Robust Range-Based Network Localization via Distributed Relaxation Methods. arXiv 2016, arXiv:1610.09020v1.
29. Biswas, P.; Ye, Y. Semidefinite Programming for Ad Hoc Wireless Sensor Network Localization. In Proceedings of the International Symposium on Information Processing in Sensor Networks, Berkeley, CA, USA, 26–27 April 2004.
30. Vaghefi, R.M.; Buehrer, R.M. Cooperative Sensor Localization with NLOS Mitigation Using Semidefinite Programming. In Proceedings of the 2012 9th Workshop on Positioning Navigation and Communication, Dresden, Germany, 15–16 March 2012.
31. Ghari, P.; Shahbazian, R.; Ghorashi, S. Wireless Sensor Network Localization in Harsh Environments Using SDP Relaxation. IEEE Commun. Lett. 2016, 20, 137–140.
32. Tao, J.; Buehrer, R.M. A Set-Theoretic Approach to Collaborative Position Location for Wireless Networks. IEEE Trans. Mob. Comput. 2011, 10, 1264–1275.
33. Rydstrom, M.; Strom, E.G.; Svensson, A. Robust Sensor Network Positioning Based on Projections onto Circular and Hyperbolic Convex Sets (POCS). In Proceedings of the IEEE Workshop on Signal Processing Advances in Wireless Communications, Cannes, France, 2–5 July 2006.
34. Blatt, D.; Hero, A.O. Energy-Based Sensor Network Source Localization via Projection onto Convex Sets. IEEE Trans. Signal Process. 2006, 54, 3614–3619.
35. Wang, D.; Wan, J.; Wang, M.; Zhang, Q. An MEF-Based Localization Algorithm against Outliers in Wireless Sensor Networks. Sensors 2016, 16, 1041.
36. Destino, G.; Abreu, G. Reformulating the Least-Square Source Localization Problem with Contracted Distances. In Proceedings of the 2009 Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 1–4 November 2009.
37. Destino, G.; Abreu, G.T.F.D. Improving Source Localization in NLOS Conditions via Ranging Contraction. In Proceedings of the Workshop on Positioning Navigation and Communication, Dresden, Germany, 11–12 March 2010.
38. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004; pp. 61–65.
39. Lay, D.C. Linear Algebra and Its Applications, 2nd ed.; Addison-Wesley: New York, NY, USA, 2000; pp. 341–358.
40. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1970; p. 19.
Figure 1. A simple example of the Hessian matrix J. Graph (a) shows the arrangement of elements in one of the cooperative scenarios. Graph (b) represents Q_i associated with the target T_i, and (c) represents the matrix obtained after the row and column transformation of Q_i. Similarly, Graphs (d,e) represent the corresponding matrices associated with target–target (TT) links in the cooperative scenario.
Figure 2. The graphs related to F_s for one target in the noncooperative scenario. Graphs (a,d,g) are the images of F_s in each scene; Graphs (b,e,h) are the results of the Gauss–Newton iteration algorithm when different initial values were selected; Graphs (c,f,i) represent the positive-semidefinite distribution of the Hessian matrix J in the two-dimensional plane.
Figure 3. The results of the Gauss–Newton iteration algorithm when different initial values were selected in the cooperative scenario. Graph (a) shows the result for the NLOS positioning environment, Graph (b) shows the result for the LOS environment, and Graph (c) shows the result when the ranging error was negative.
Figure 4. Convergence of the algorithms in the different scenarios and environments. Graphs (a,b) are the results in the noncooperative scenario, where (a) represents the LOS environment and (b) represents the NLOS environment. Graphs (c,d) are the results in the cooperative scenario, representing the LOS and NLOS environments, respectively. The red boxes represent the anchors, and the blue stars represent the targets. The ranging links are drawn as green dotted lines and the convergence radius as a blue dotted circle. The solid points represent the positioning results.
Figure 5. The CDF of the minimum mean squared error (MMSE) for the algorithms in the different scenarios and environments. Graphs (a,b) are the results in the noncooperative scenario, where (a) represents the LOS environment and (b) represents the NLOS environment. Graphs (c,d) are the results in the cooperative scenario, representing the LOS and NLOS environments, respectively.
Table 1. Coordinates of the anchors.
Node Number | A_1 | A_2 | A_3 | A_4 | A_5 | A_6
Noncooperative/(m) | (10, 10) | (10, 10) | (10, 10) | (10, 10) | – | –
Cooperative/(m) | (15, 15) | (30, 0) | (15, 15) | (15, 15) | (30, 0) | (15, 15)
Table 2. Ranging errors in the noncooperative scenario.
Link Number | l_1 | l_2 | l_3 | l_4
NLOS ε_i/(m) | 10.59 | 6.53 | 7.23 | 6.93
LOS ε_i/(m) | 2.03 | 1.58 | 1.48 | 1.42
Negative ε_i/(m) | 17.42 | 24.06 | 26.57 | 20.69
Table 3. Ranging errors in the cooperative scenario.
Link Number | l_1 | l_2 | l_3 | l_4 | l_5 | l_6 | l_7
NLOS ε_i/(m) | 10.31 | 8.98 | 9.54 | 11.04 | 8.42 | 2.72 | 3.67
LOS ε_i/(m) | 1.62 | 2.26 | 3.08 | 1.32 | 4.22 | 2.13 | 2.74
Negative ε_i/(m) | 29.13 | 30.32 | 34.61 | 38.75 | 35.69 | 28.02 | 36.64
Table 4. Convergence probability in the noncooperative scenario. PPM, parallel projection method; SDP, semidefinite programming; A-NLS, angle-based null space algorithm; C-NLS, cosine law-based null space algorithm.
Algorithm | LS | PPM | SDP | A-NLS | C-NLS
LOS/% | 99 | 99 | 100 | 99 | 100
NLOS/% | 6 | 3 | 94 | 100 | 100
Table 5. Convergence probability in the cooperative scenario.
Algorithm | LS | PPM | SDP | A-NLS | C-NLS
LOS/% | 66 | 68 | 93 | 100 | 99
NLOS/% | 0 | 0 | 30 | 78 | 97
