Optimal Geometry and Motion Coordination for Multisensor Target Tracking with Bearings-Only Measurements

This paper focuses on the optimal geometry and motion coordination problem of mobile bearings-only sensors for improving target tracking performance. A general optimal sensor–target geometry with uniform sensor–target distance is derived using D-optimality for an arbitrary number n (n ≥ 2) of bearings-only sensors. The optimal geometry is characterized by the partition cases dividing n into a sum of integers no less than two. A motion coordination method is then developed to steer the sensors to the circular radius orbit (CRO) around the target, defined by the minimum sensor–target distance, and to move in a circular formation. The sensors are first driven to approach the target directly when outside the CRO. When the sensors reach the CRO, they are allocated to different subsets according to the partition cases by matching the optimal geometry. The sensor motion is then optimized under constraints to achieve the matched optimal geometry while minimizing the sum of the distances traveled by the sensors. Finally, two illustrative examples demonstrate the effectiveness of the proposed approach.


Introduction
Bearings-only target tracking is widely applied in wireless sensor networks in both civilian and military domains [1,2]. Unlike other sensors such as range-only sensors and time difference of arrival (TDOA) sensors, bearings-only sensors work in passive mode and are thus less susceptible to detection and attack. However, they are highly sensitive to range: at long range, even a small angle measurement error may lead to a large tracking error. Therefore, bearings-only target tracking has been a research area of considerable interest for decades. Meanwhile, with the development of unmanned vehicles, traditional stationary sensor platforms have evolved into mobile ones characterized by high speed and long endurance. Accordingly, flexible sensor motion coordination can be achieved, and both tracking accuracy and survivability can be significantly improved through sensor coordination.
Much previous work has been dedicated to developing different estimators for target tracking based on bearings-only measurements in two-and three-dimensional space [3][4][5][6]. The extended Kalman filter (EKF) is a classical method for the nonlinear tracking problem [7] but often diverges when the model nonlinearity is strong. The pseudolinear Kalman filter (PLKF) was introduced in [8,9], with better convergence than the EKF. However, the estimate is biased, which is highly dependent on sensor geometry [10]. Furthermore, other estimation algorithms such as the unscented Kalman filter (UKF) [11], cubature Kalman filter (CKF) [12][13][14][15], and particle filter (PF) [16,17] have been applied in bearings-only target tracking with different estimation performance advantages.
Compared with the improvement in tracking accuracy produced by estimation algorithms, sensor–target geometry plays a fundamental role in determining the accuracy of target tracking systems [18][19][20][21]. The Fisher information matrix (FIM) is a commonly used criterion for assessing target tracking accuracy. The inverse of the FIM, called the Cramér–Rao lower bound (CRLB), indicates the optimal performance of a tracking system. Three popular optimality criteria are adopted to achieve the optimal sensor configuration based on the FIM [19]. D-optimality minimizes the area of the uncertainty ellipse by maximizing the determinant of the FIM [18,20,22]; A-optimality suppresses the average variance by minimizing the trace of the CRLB [23,24]; and E-optimality minimizes the length of the largest axis of the uncertainty ellipsoid by minimizing the maximum eigenvalue of the CRLB [19]. In [25], D-optimality was adopted to optimize sensor placement for range-based target tracking. In [26], the conditions for the optimal placement of heterogeneous sensors were derived based on maximizing the information matrix, and the optimal placement for paired sensors was developed leveraging a "divide-and-conquer" strategy. In [27], A-optimality was used to solve sensor placement for 3D angle-of-arrival target localization. Geometric dilution of precision (GDOP) [28] is another criterion used to evaluate tracking accuracy. GDOP is defined as the root mean square position error and illustrates how an estimate is influenced by the sensor–target geometry [29]. The optimal deployment for multitarget localization was developed in [30] by minimizing the GDOP.
In addition to the above theoretical analysis on the sensor-target geometries, some sensor path optimization methods have been proposed for target tracking to avoid the difficulty in finding the closed-form solution. A gradient-descent-based motion planning algorithm was presented for decentralized target tracking [31]. In [32], a gradient descent optimization algorithm was proposed for single-and multisensor path planning by minimizing the mean square error in 2D space. In [33], the path optimization for passive emitter localization in 2D space was transformed into a nonlinear programming problem with the FIM as the cost function. In [34], the path optimization strategy for 3D AOA target tracking was developed by minimizing the trace of covariance matrices with gradient descent optimization and a grid search method. In [35], the optimal sensor placement for AOA sensors was derived with a Gaussian prior using D-and A-optimality. In addition, the result was extended to path optimization based on a projection algorithm.
Most of the existing work has focused on optimal deployment of multiple bearings-only sensors for target localization. Some closed-form solutions have been derived with equal angular distribution. Inspired by the "divide-and-conquer" strategy in [26], the continuum of optimal solutions for bearings-only measurements has the potential to be extended to general circumstances. Moreover, for bearings-only target tracking using mobile sensors, some studies in the literature have adopted optimization methods such as gradient descent, Gauss–Seidel relaxation, and so on. Nevertheless, the solution space is complex due to the high nonlinearity of the cost functions related to the FIM. As a result, these numerical methods may fall into local optima and fail to reach the globally optimal tracking performance. Motivated by the aforementioned aspects, this paper focuses on the optimal sensor–target geometry and motion coordination problem of mobile bearings-only sensors for target tracking. The sensors are driven to approach the target from a distance and eventually move in a circular formation to track the target.
The contributions of this paper are summarized as follows. (1) The suboptimality of approaching the target directly for bearings-only sensors to improve tracking performance is analyzed. (2) A continuum solution to the optimal sensor–target geometry is derived with uniform sensor–target distance using D-optimality for arbitrary n (n ≥ 2) bearings-only sensors; the optimal geometry is characterized by the partition cases dividing n into a sum of integers no less than two. (3) A motion coordination algorithm is developed based on matching the optimal geometry and optimizing the sensor motion to achieve the optimal target tracking performance.
The remainder of this paper is organized as follows: Section 2 presents the problem formulation. The CKF and FIM are introduced in Section 3. Section 4 reformulates the problem and investigates the optimality analysis. In Section 5, we design a motion coordination strategy based on the results in Section 4. The proposed method is verified by simulations in Section 6. Section 7 concludes this paper.
Notations: Define θ_i ∈ (−π, π], θ_ij = θ_j − θ_i ∈ (−π, π]. The two-norm of a vector x ∈ R^n is defined as ||x|| = √(x^T x). Chol(M) indicates the Cholesky decomposition of M. tr(·) and det(·) denote the trace and the determinant of the matrix contained in the brackets, respectively. |S| represents the cardinality of the set S. S\T = {e | e ∈ S and e ∉ T}.

Problem Formulation
This paper focuses on the problem of sensor motion and coordination for single-moving-target tracking with n ≥ 2 bearings-only sensors in 2D space. The target tracking geometry is depicted in Figure 1. θ_{i,k} is the angle of the line of sight (LOS) from sensor i at discrete time k. Define z_{i,k} as the measurement of θ_{i,k}; then, the measurement function is

z_{i,k} = θ_{i,k} + η_{i,k} = tan^{−1}((y_{p,k} − y_i(k))/(x_{p,k} − x_i(k))) + η_{i,k},  (1)

where p_k = [x_{p,k}, y_{p,k}]^T is the position of the target at time k; tan^{−1}(·) is the four-quadrant inverse tangent function and θ_{i,k} ∈ (−π, π]; s_i(k) = [x_i(k), y_i(k)]^T is the location of sensor i; η_{i,k} is the measurement noise, assumed to be i.i.d. Gaussian with zero mean and variance σ_i², i ∈ {1, 2, . . . , n}. The sensors are homogeneous, i.e., σ_i² = σ_θ². Writing the measurements in compact form, z_k = [z_{1,k}, z_{2,k}, . . . , z_{n,k}]^T ∈ R^n, and η_k = [η_{1,k}, η_{2,k}, . . . , η_{n,k}]^T ∈ R^n is zero-mean Gaussian measurement noise with covariance R_k = σ_θ² I, where I is an identity matrix.
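As an illustration, the bearing measurement model above can be sketched in a few lines of Python (a minimal sketch assuming NumPy; the function name and interface are ours, not part of the paper):

```python
import numpy as np

def bearing_measurement(p_k, s_i, sigma_theta, rng=None):
    """Noisy bearing z_{i,k} = theta_{i,k} + eta_{i,k}, theta in (-pi, pi]."""
    rng = rng if rng is not None else np.random.default_rng()
    dx, dy = p_k[0] - s_i[0], p_k[1] - s_i[1]
    theta = np.arctan2(dy, dx)  # four-quadrant inverse tangent
    return theta + rng.normal(0.0, sigma_theta)
```

With σ_θ = 0 the function reduces to the noiseless LOS angle, which is convenient for checking geometry code.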
Consider a target whose motion is described by the nonlinear dynamic discrete system

x_{k+1} = f(x_k) + w_k,  (2)

where x_k ∈ R^{n_x} is the state vector of the dynamic system at discrete time k; w_k ∈ R^{n_x} is process Gaussian noise with zero mean and covariance Q_k; and n_x is the dimension of the state vector. The noises w_k and η_k are mutually independent processes. The dynamic model of the mobile sensors is given by

s_i(k+1) = s_i(k) + T u_i(k),  u_i(k) = v_i(k),  (3)

where s_i(k) is the position of sensor i at discrete time k; u_i(k) is the control input for sensor i at time k; v_i(k) is the designed velocity of sensor i at time k; and T is the sampling time.
The state parameters of the target are unknown. We assume that the states of the mobile sensors and the measurements taken by them are known. Because of the noncooperative scenario, a minimum distance between the target and the sensors must be maintained. We aim to estimate the target state using the bearings-only measurements and to improve the tracking accuracy by optimizing the sensor–target geometry of the cooperative mobile sensors under practical constraints.

Assumption 1. At the beginning of the tracking process, at least two sensors are deployed at positions that are not collinear with the target to ensure the observability of the target [19,36].

Assumption 2. The mobile sensors are homogeneous with a maximum speed v_max and a maximum turn rate ϕ_max due to the limitations of their mechanical properties. The maximum speed of the sensors is greater than that of the target to ensure they can catch up with the target. The minimum distance between the sensors and the target is denoted as d_min.

Parameter Estimation
In this paper, we use the cubature Kalman filter [12] to estimate the state of the target. The CKF is a nonlinear filter that has gained prominence in the past decade, with improved performance over conventional nonlinear filters, particularly in addressing the strong nonlinearity in bearings-only target tracking.
In addition, it is known that static sensors provide limited tracking performance over a limited range. Obviously, one feasible way to improve the tracking accuracy is to move the sensors to better locations from which the target can be tracked accurately. Therefore, the FIM based on bearings-only measurements is introduced in this section for the optimality analysis in the following section.

Cubature Kalman Filter
Denote x̂_{k|k} as the estimate of x_k and P_{k|k} as the estimation error covariance obtained using the bearings-only measurements z_k. The cubature Kalman filter, in its time- and measurement-update forms, can be computed starting from x̂_{0|0} and P_{0|0}. The iteration proceeds as follows:

Step 1. Evaluate the cubature points (i = 1, 2, . . . , 2n_x)

X_{i,k−1|k−1} = x̂_{k−1|k−1} + Chol(P_{k−1|k−1}) ξ_i,

where ξ_i = √(n_x) [1]_i, and [1]_i represents the ith element of the set {e_1, . . . , e_{n_x}, −e_1, . . . , −e_{n_x}}.

Step 2. Time update, where x̂_{k|k−1} is the state prediction, and P_{k|k−1} is the predicted error covariance.
Step 3. Measurement update, where ẑ_{k|k−1} is the predicted measurement; P_{zz,k|k−1} is the innovation covariance matrix; P_{xz,k|k−1} is the cross-covariance matrix; and W_k is the Kalman gain.
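The three steps above can be condensed into a compact sketch. This is our own minimal NumPy implementation of the standard third-degree CKF, not the paper's code; f and h stand for the process and measurement functions:

```python
import numpy as np

def cubature_points(x, P):
    """2*nx points: x + sqrt(nx) * Chol(P) * (+/- e_i)."""
    nx = x.size
    L = np.linalg.cholesky(P)
    xi = np.sqrt(nx) * np.hstack([np.eye(nx), -np.eye(nx)])
    return x[:, None] + L @ xi                      # shape (nx, 2*nx)

def ckf_step(x, P, z, f, h, Q, R):
    # Time update: propagate cubature points through the process model f.
    Xp = np.column_stack([f(c) for c in cubature_points(x, P).T])
    x_pred = Xp.mean(axis=1)
    dXp = Xp - x_pred[:, None]
    P_pred = dXp @ dXp.T / Xp.shape[1] + Q
    # Measurement update: propagate fresh points through the measurement model h.
    Xm = cubature_points(x_pred, P_pred)
    Zm = np.column_stack([np.atleast_1d(h(c)) for c in Xm.T])
    z_pred = Zm.mean(axis=1)
    dZ, dX = Zm - z_pred[:, None], Xm - x_pred[:, None]
    Pzz = dZ @ dZ.T / Zm.shape[1] + R               # innovation covariance
    Pxz = dX @ dZ.T / Zm.shape[1]                   # cross-covariance
    W = Pxz @ np.linalg.inv(Pzz)                    # Kalman gain
    return x_pred + W @ (np.atleast_1d(z) - z_pred), P_pred - W @ Pzz @ W.T
```

For linear f and h the cubature transform is exact, so the step reduces to the ordinary Kalman filter, which gives a quick sanity check.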

Fisher Information Matrix
The error covariance matrix of any unbiased estimator is bounded below as

E[(x̂_k − x_k)(x̂_k − x_k)^T] ≥ J_k^{−1},

where J_k is called the FIM, which quantifies the amount of information obtained from the measurements, with the expression

J_k = E[(∇_{x_k} ln p(z_k|x_k))(∇_{x_k} ln p(z_k|x_k))^T],

where p(z_k|x_k) is the probability density function, expressed as

p(z_k|x_k) = (2πσ_θ²)^{−n/2} exp(−(1/(2σ_θ²)) Σ_{i=1}^n (z_{i,k} − θ_{i,k})²).

Given the measurement vector z_k, the FIM with respect to the target position is determined as

J_k = (1/σ_θ²) Σ_{i=1}^n (1/r_{i,k}²) [ sin²θ_{i,k}, −sinθ_{i,k}cosθ_{i,k} ; −sinθ_{i,k}cosθ_{i,k}, cos²θ_{i,k} ],  (10)

where r_{i,k} = ||p_k − s_i(k)|| represents the distance between the target position p_k and the sensor position s_i(k) at time k. Expanding (10), the following equivalent expression for the determinant of the FIM is obtained:

det(J_k) = (1/σ_θ⁴) Σ_{i=1}^{n−1} Σ_{j=i+1}^n sin²(θ_{j,k} − θ_{i,k}) / (r_{i,k}² r_{j,k}²).  (11)
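For reference, the position-only bearings FIM can be evaluated numerically as follows (a sketch under the paper's homogeneous-noise assumption; names are ours):

```python
import numpy as np

def fim_bearings(p, sensors, sigma_theta):
    """Sum of per-sensor rank-one terms (1/(sigma^2 r_i^2)) * g_i g_i^T."""
    J = np.zeros((2, 2))
    for s in sensors:
        dx, dy = p[0] - s[0], p[1] - s[1]
        r = np.hypot(dx, dy)
        c, sn = dx / r, dy / r        # cos(theta_i), sin(theta_i)
        J += np.array([[sn * sn, -sn * c],
                       [-sn * c, c * c]]) / (sigma_theta ** 2 * r ** 2)
    return J
```

Two unit-range sensors with LOS angles 90° apart give J = I and det(J) = 1 for σ_θ = 1, consistent with the pairwise sin² form of the determinant.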

Optimality Analysis
The problem of path planning and motion coordination for improving tracking performance is equivalent to finding the next waypoints at each time step by maximizing the determinant of the FIM. Two kinds of parameters influence the determinant of the FIM: the sensor–target distances r_{i,k} and the angles θ_{i,k} among the sensors. Hence, we can maximize det(J_k) by simultaneously reducing the distances between the sensors and the target and configuring the angles among the sensors.
In order to ensure the minimum distance constraint, the sensors eventually move on a circular trajectory of fixed radius around the target. Before that, the path for reaching the circular radius orbit (CRO) while improving the tracking accuracy must be considered. Thus, the design of the motion coordination for multiple sensors is divided into two stages: outside the CRO distance and on the CRO distance d_min.

Outside the CRO Distance
Consider the bearings-only tracking problem. When the range between the target and the sensor is greater than d_min, the problem of optimal sensor movement is equivalent to the optimization problem (12) of maximizing det(J_{k+1}) over the headings ∠v_i(k), where ∠v_i(k) is the angle of the velocity vector at time k, and the difference between ∠v_i(k) and ∠v_i(k−1) is bounded by ϕ_max due to the limited turn rate. Obviously, the difficulty of solving problem (12) increases with the number of mobile sensors, although it can be solved via numerical methods. We therefore turn to a suboptimal motion policy to reduce the computational complexity.
When the sensors are far away from the target, the sensors are expected to move with maximum speed v_max to approach the target. As shown in Figure 2, the location s_i(k+1) that sensor i is able to reach can be expressed by

s_i(k+1) = s_i(k) + T v_max [cos φ_{i,k}, sin φ_{i,k}]^T,

where φ_{i,k} is the heading direction of sensor i at time k.

Theorem 1. Consider the bearings-only tracking problem. When the range between the target and the sensor is greater than d_min, and the position of the target is p_{k+1} at time k+1, the suboptimal heading direction of sensor i at time k is

φ_{i,k}^0 = tan^{−1}((y_{p,k+1} − y_i(k))/(x_{p,k+1} − x_i(k))),

i.e., directly toward the target.

Proof. According to the Cauchy inequality, maximizing det(J_{k+1}) can be relaxed to minimizing the sensor–target ranges r_{i,k+1}. Consider the function

F(γ) = Σ_{i=1}^n 1/r_{i,k+1}²,

where γ = [r_{1,k+1}, r_{2,k+1}, . . . , r_{n,k+1}]^T. To achieve the maximum of F(γ), take the partial derivatives of F(γ) with respect to φ_{i,k} and set them to zero; the stationary point φ^0 corresponds to each sensor heading directly toward p_{k+1}. Let H ∈ R^{n×n} denote the Hessian matrix of F(γ) at φ^0. Its off-diagonal elements vanish, and its diagonal elements are negative, with magnitudes depending on Δr_i = √(Δx_i² + Δy_i²), where [Δx_i, Δy_i]^T = p_{k+1} − s_i(k). Obviously, H is a negative definite matrix, and as a consequence, φ^0 is the maximum point. □
Furthermore, taking the limitation of the turn rate into consideration, the heading direction of sensor i at time k is

φ_{i,k} = φ_{i,k−1} + sign(φ_{i,k}^0 − φ_{i,k−1}) min{ϕ_max, |φ_{i,k}^0 − φ_{i,k−1}|}.

Note that the determinant of the FIM increases as the range between the sensors and the target decreases when the angles among the sensors remain unchanged. In other words, the optimal heading direction is always toward the target, so we can force the sensors to approach the CRO around the target directly. The tracking accuracy is thereby improved, although it does not yet reach the optimum.
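The turn-rate-limited heading update above can be sketched as follows (our naming; phi_max is the maximum heading change per step):

```python
import numpy as np

def limited_heading(s_i, p_target, prev_heading, phi_max):
    """Heading toward the (predicted) target, with the change clipped to +/- phi_max."""
    desired = np.arctan2(p_target[1] - s_i[1], p_target[0] - s_i[0])
    delta = (desired - prev_heading + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
    return prev_heading + np.clip(delta, -phi_max, phi_max)
```

The modular wrap keeps the commanded turn on the short side of the circle before clipping, which matters when the desired and previous headings straddle ±π.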

On the CRO Distance d min
When all sensors reach the CRO around the target, which is a circle centered on the target with radius d_min, we have r_i = d_min, i = 1, . . . , n. The sensor–target geometry is depicted in Figure 3. In this section, the time step k is omitted for convenience of description. In order to simplify the analysis of the optimal sensor–target geometry, the related propositions are restated.

Proposition 1. The determinant of the FIM in (11) remains unchanged under the following three operations:
1. Switching the positions of any two sensors;
2. Rotating all the sensors around the target;
3. Flipping arbitrary sensors about the target.

Remark 1. Proposition 1 originated in [18] and was further recognized in [20]. It implies that det(J) is invariant to these geometric operations.
Without loss of generality, the sensors are assumed to be renumbered counterclockwise with θ_i ∈ (0, π] through the geometric operations in Proposition 1, which is equivalent to flipping the sensors whose actual LOS angles range from −π to 0. The target tracking system achieves the optimal estimation performance when all sensors move at the same speed as the target on the CRO in the formation, as confirmed by the following results.

Lemma 2 ([30]). Consider n bearings-only sensors tracking a single target. When all sensors are on the CRO around the target (r_i = d_min) and ∆θ_1 = ∆θ_2 = · · · = ∆θ_{n−1} = ∆θ, the Fisher information determinant given in (11) is maximized when ∆θ = π/n.

Remark 2. When n ≥ 3, there are two solutions for optimal geometry with equal angular distribution in [30], i.e., ∆θ_i = π/n or 2π/n. However, the optimal geometry with ∆θ_i = 2π/n can be obtained by flipping part of the sensors about the target in the optimal geometry with ∆θ_i = π/n. Therefore, we consider them identical optimal geometries for n sensors and retain the solution ∆θ_i = π/n, which avoids the complexity arising from two optional solutions.

The sensors in each subset S_i are placed as in Lemma 2. Then, by (11),

det(J) = (1/(σ_θ⁴ d_min⁴)) Σ_{i=1}^m Σ_{{a,b}∈Ψ_i} sin²θ_{ab} + (cross-subset terms),

where Ψ_i = {{a, b}} is the set of all combinations of a and b with a < b and a, b ∈ S_i. Since Σ_{i=1}^n cos(α + 2(i−1)π/n) = 0 (α arbitrary, n ≥ 2), the cross-subset terms sum to a constant for j ∈ S_i and ∀l ∈ S_g (i ≠ g); that is, the angles between sensors belonging to different subsets do not affect det(J).

In view of the proof of Theorem 2, the angles between the sensors not in the same subset do not affect the optimal sensor–target geometry. In addition, the sensor–target geometry remains optimal when the sensors are transformed by the geometric operations in Proposition 1. Therefore, we can classify the optimal sensor–target geometries by the set Ξ = {q_1, q_2, . . . , q_m}, which is recognized as the partition case dividing n into a sum of integers no less than 2. In other words, the optimal sensor–target geometries are regarded as identical for equivalent Ξ. Figures 4 and 5 illustrate some examples of the optimal sensor–target geometry for n = 4, 5. In Figure 4a,b, the two sensor–target geometries are considered the same because the sensors are both divided into two subsets with Ξ = {2, 2}.
Additionally, the sensors with the same Ξ = {2, 3} in Figure 5a,b are also regarded as having identical sensor–target geometry, because the optimal sensor–target geometry in Figure 5b can be obtained by flipping sensor 4 about the target in Figure 5a. The optimal sensor–target geometry for the other partition case for n = 5 is shown in Figure 5c,d; these two are regarded as identical optimal geometries with Ξ = {5}, but they differ from the optimal geometry in Figure 5a,b due to the different partition case.

Remark 3.
Although the number of optimal sensor–target geometries described in Theorem 2 is infinite due to rotation invariance, we are only concerned with the partition cases of the set S according to the classification method in this paper. The number of partition cases dividing n into a sum of positive integers no less than 2, denoted as A(n), asymptotically equals exp(π√(2(n−1)/3)) [37].
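The partition cases Ξ can be enumerated directly; the following sketch (ours) generates all partitions of n into non-decreasing parts no less than two:

```python
def partitions_min2(n, smallest=2):
    """Yield all partitions of n into non-decreasing parts >= smallest."""
    if n == 0:
        yield ()
        return
    for part in range(smallest, n + 1):
        if n - part == 1:        # a leftover of 1 cannot form a valid part
            continue
        for rest in partitions_min2(n - part, part):
            yield (part,) + rest
```

For n = 4 this yields {2, 2} and {4}; for n = 5, {2, 3} and {5}, matching the cases used in the examples of Section 4.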

Motion Coordination
In this section, we propose a motion coordination strategy for mobile sensors to improve target tracking performance. According to the analysis above, the mobile sensors are required to reach the CRO around the target as soon as possible and then coordinate with each other. Figure 6 illustrates the main steps of sensor motion coordination to achieve the optimal geometry.

Single Sensor Motion
In practice, the real state of the target is unknown. We utilize the one-step predicted position of the target p̂_{k+1|k} = [x̂_{p,k+1|k}, ŷ_{p,k+1|k}]^T instead of p_{k+1} at time k. The velocity of sensor i is designed as

v_i(k) = v_max (p̂_{k+1|k} − s_i(k)) / ||p̂_{k+1|k} − s_i(k)||, if ||p̂_{k+1|k} − s_i(k)|| > d_min,  (28)

subject to the turn-rate limit. As we want the sensors to approach the target as soon as possible, the velocities of the sensors are set to their maximum before they reach the boundary of the CRO. After the sensors reach the CRO, they are expected to follow the target along it.

Coordination Strategy
As all sensors reach the CRO around the target, they enter the coordination stage. The coordination strategy consists of matching the optimal sensor–target geometry and optimizing the sensor motion. The task of matching the optimal geometry involves allocating the sensors into subsets by comparing the current sensor–target geometry with the optimal geometry of the desired partition case Ξ. The sensor motion is then optimized to achieve the optimal geometry with minimum energy consumption based on the result of the matching.
Let ŝ_i(k+1) = [x̂_i(k+1), ŷ_i(k+1)]^T denote the expected location of sensor i at time k+1 calculated by u_i(k) in (28) as

ŝ_i(k+1) = s_i(k) + T u_i(k).

Define θ̂_i as the predicted angle

θ̂_i = tan^{−1}((ŷ_{p,k+1|k} − ŷ_i(k+1))/(x̂_{p,k+1|k} − x̂_i(k+1))),

where θ̂_i is constrained within the range of 0 to π to simplify the step of matching the optimal geometry. Matching the optimal geometry for a given Ξ = {q_1, q_2, . . . , q_m} can be described as the combinatorial problem (31) of allocating the sensors to subsets of sizes q_1, . . . , q_m so as to minimize κ, where κ is defined as the difference degree compared with the optimal sensor–target geometry. The problem is naturally a combinatorial optimization problem, which is NP-hard. An algorithm to search for an approximate solution with a given Ξ was developed based on the greedy search method and is shown in Algorithm 1.
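Since Algorithm 1 itself is a greedy search, the sketch below illustrates only the matching objective, using an exhaustive search that is feasible for small n. This is a hypothetical implementation of ours: κ is taken as the total deviation of intra-subset angular spacings from the ideal π/q, which the paper does not spell out in this form.

```python
import itertools
import numpy as np

def kappa(subset_thetas, q):
    """Deviation of sorted intra-subset spacings from the ideal pi/q."""
    th = np.sort(subset_thetas)
    return np.abs(np.diff(th) - np.pi / q).sum()

def match_geometry(thetas, Xi):
    """Assign sensors to subsets of sizes Xi, minimizing the total mismatch."""
    best = None
    for perm in itertools.permutations(range(len(thetas))):
        i, total, blocks = 0, 0.0, []
        for q in Xi:
            idx = perm[i:i + q]
            total += kappa([thetas[j] for j in idx], q)
            blocks.append(sorted(idx))
            i += q
        if best is None or total < best[0]:
            best = (total, blocks)
    return best
```

When the predicted angles already form an exact {2, 3} configuration, the mismatch of the correct assignment is zero, which the greedy search of Algorithm 1 would also recover.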
After matching the optimal sensor–target geometry, the sensors engage in motion coordination to achieve the optimal geometry, thereby improving tracking performance. For the purpose of energy conservation, sensor motion optimization can be described as the nonlinear optimization problem

min ϑ = Σ_{i=1}^n ||s_i*(k+1) − s_i(k)||,  (32)

where ϑ is the sum of the distances traveled by the sensors, and θ̂_i* is the predicted angle corresponding to the optimized waypoint s_i*(k+1), constrained to match the allocated optimal angles. The nonlinear optimization problem in (32) can be solved by "fmincon" (Optimization Toolbox) in MATLAB®. Therefore, the control input for sensor i is finally determined by

u_i(k) = (s_i*(k+1) − s_i(k)) / T.  (33)

The restriction of the turn rate can be implemented by choosing min{ϕ_max, |∠u_i(k) − ∠u_i(k−1)|}.
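By the rotation invariance of Proposition 1, the circular waypoints retain one free common rotation offset; a coarse scan over this offset gives a simple approximation of the travel-distance minimization. This is a sketch of ours, not the paper's fmincon formulation, and speed and turn-rate constraints are omitted:

```python
import numpy as np

def total_travel(sensors, target, d_min, angles):
    """Sum of distances from each sensor to its waypoint on the CRO."""
    wp = np.asarray(target, float)[None, :] + d_min * np.column_stack(
        [np.cos(angles), np.sin(angles)])
    return np.linalg.norm(np.asarray(sensors, float) - wp, axis=1).sum(), wp

def best_rotation(sensors, target, d_min, rel_angles, n_grid=360):
    """Scan the free common rotation offset left by rotation invariance."""
    best = None
    for a in np.linspace(-np.pi, np.pi, n_grid, endpoint=False):
        cost, wp = total_travel(sensors, target, d_min, np.asarray(rel_angles) + a)
        if best is None or cost < best[0]:
            best = (cost, a, wp)
    return best
```

A gradient-based solver such as fmincon refines this further under the motion constraints; the grid scan merely provides a good initial point and avoids poor local optima in the offset.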

Remark 5.
In terms of bearings-only target tracking accuracy, both the enveloping and semi-enveloping optimal sensor–target geometry configurations are equivalent. The selection between the configurations depends on the objectives of target tracking. When the sensors are expected to perform other operations, such as surveillance and recording, circumnavigation tracking is preferable, driving the sensors to completely surround the target on the CRO.

Collision Avoidance
A distance constraint is necessary to avoid collisions among the mobile sensors. Let l_min denote the minimum allowed distance between two sensors. When ||s_i(k) − s_j(k)|| < l_min, the collision avoidance algorithm is enabled, and the heading is adjusted as

∠u_i(k) ← ∠u_i(k) ± δ,  (34)

where δ is a small heading change for the sensor, and the sign of ±δ is selected to make the range between the two sensors larger. To summarize, the sensor motion coordination algorithm is presented in Algorithm 2.
Algorithm 2 Sensor motion coordination for target tracking.

Input: The estimate of the target at time k, x̂_{k|k}; the location of the sensor at time k, s_i(k).
Output: The estimate of the target at time k+1, x̂_{k+1|k+1}; the location of the sensor at time k+1, s_i(k+1).
1: Receive x̂_{k+1|k} from the estimation center;
2: Compute u_i(k) with (28), (31), (33), and (34);
3: Move to the new position s_i(k+1);
4: Take new measurements z_{k+1} of the target, and estimate the state of the target via the CKF;
5: return x̂_{k+1|k+1}, s_i(k+1).
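The collision-avoidance heading adjustment in the rule above can be sketched as follows (our naming; a unit step length is assumed for the candidate positions):

```python
import numpy as np

def avoid_heading(heading_i, s_i, s_j, delta=0.1):
    """Add +/- delta to sensor i's heading, picking the sign that increases range to j."""
    def next_pos(h):
        return np.asarray(s_i, float) + np.array([np.cos(h), np.sin(h)])
    d_plus = np.linalg.norm(next_pos(heading_i + delta) - np.asarray(s_j, float))
    d_minus = np.linalg.norm(next_pos(heading_i - delta) - np.asarray(s_j, float))
    return heading_i + delta if d_plus >= d_minus else heading_i - delta
```

Comparing the two candidate next positions, rather than the instantaneous bearing to the neighbor, keeps the rule robust when the sensors move nearly parallel.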

Simulation Experiments
In this section, we illustrate the proposed sensor motion coordination algorithm with some simulation examples. By default, all variables used in the simulation were in SI units. As introduced in Section 3.1, we used a CKF method to estimate the state of the target. For comparison, the gradient descent method in [34] and the projection method in [35] were adopted to optimize the sensor motion under the same conditions.
To compare the tracking performance, we used the root mean square error (RMSE) of the position of the target. The RMSE of position at time k is defined as

RMSE(k) = √( (1/N_c) Σ_{i=1}^{N_c} [ (x^i(k) − x̂^i(k))² + (y^i(k) − ŷ^i(k))² ] ),

where N_c is the total number of Monte Carlo runs, and [x^i(k), y^i(k)]^T and [x̂^i(k), ŷ^i(k)]^T are the true and estimated positions in the ith Monte Carlo run, respectively.

Scenario 1: We consider the problem of tracking a moving target using 5 mobile sensors in 2D space. The dynamic function of the target is described by the constant velocity model with state x_k = [x_k, ẋ_k, y_k, ẏ_k]^T and sampling time T = 0.2 s. The process noise w_k is zero-mean Gaussian with covariance matrix Q_k = diag[qM, qM], where

M = [T³/3, T²/2; T²/2, T].

The scalar parameter q = 0.1 m²/s³ denotes the process noise intensity. The measurements taken by sensor i at time k are given in (1).

There are two partition cases for n = 5, with Ξ = {2, 3} and Ξ = {5}. We first compared the tracking performance and the distance traveled by the mobile sensors when the sensors were steered to achieve these two kinds of optimal sensor–target geometries. Additionally, we included static sensors and mobile sensors whose waypoints were computed by the methods in [34,35] in the comparative experiment. Figure 7a,b show the trajectories of the 5 bearings-only sensors achieving the optimal geometry with partition cases Ξ = {5} and Ξ = {2, 3}, respectively. As shown in Figure 7b, sensors 1, 3, and 5 are assigned to the subset with three sensors and the others to the subset with two sensors after matching the optimal geometry. The sensors eventually move with the target in the optimal geometry, as expected. The achieved geometry is referenced to the estimated target position and therefore shows discrepancies from the true optimal sensor–target geometry. This discrepancy is unavoidable in practical applications since the true target position is unknown.
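The RMSE defined above can be computed per time step as follows (names are ours):

```python
import numpy as np

def rmse_position(true_xy, est_xy):
    """RMSE at one time step over Nc Monte Carlo runs; arrays are (Nc, 2)."""
    err2 = ((np.asarray(true_xy, float) - np.asarray(est_xy, float)) ** 2).sum(axis=1)
    return np.sqrt(err2.mean())
```

Applying this at each k over the Monte Carlo runs reproduces the RMSE curves compared in Figures 8 and 10.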
However, the proposed motion coordination method can enhance the estimation performance, and the circular formation approaches the true optimal geometry, thus approaching the theoretically optimal estimation accuracy, as shown by the compared RMSEs of the position illustrated in Figure 8. Obviously, the tracking performance of the mobile sensors is better than that of the static sensors. The proposed method significantly improves the tracking performance and exhibits a lower estimation error compared with the method in [34]. Meanwhile, the tracking performance of the method in [35] is close to that of the proposed method in this scenario. There is a negligible difference in the tracking performance between the two kinds of optimal geometries with Ξ = {2, 3} and Ξ = {5}. Additionally, the sums of the distances traveled by all mobile sensors to achieve the optimal geometry with Ξ = {2, 3} and Ξ = {5} are 1488.9 m and 1548.6 m, respectively. The shorter distance for Ξ = {2, 3} is attributed to the fact that the sensor–target geometry upon reaching the CRO is closer to the optimal geometry with Ξ = {2, 3}, whose κ is smaller.

Scenario 2: We consider the problem of tracking a maneuvering target using 4 mobile sensors in 2D space. The dynamic function of the target is described by a coordinated-turn model with state x_k = [x_k, ẋ_k, y_k, ẏ_k, Ω_k]^T, where Ω_k is the turn rate, and T = 1 s. The process noise w_k is zero-mean Gaussian with covariance matrix Q_k = diag[q_1Γ, q_1Γ, q_2T], where

Γ = [T³/3, T²/2; T²/2, T].

There are two partition cases for n = 4, with Ξ = {2, 2} and Ξ = {4}. However, the optimal geometry with Ξ = {4} can be obtained by rotating the sensors in one subset of the optimal geometry with Ξ = {2, 2} as a whole by a proper angle. Thus, the partition case for n = 4 is selected as Ξ = {2, 2} in Scenario 2. Figure 9 shows the trajectories of the 4 bearings-only sensors tracking the target.
In this run, sensors 1 and 3 are assigned to one subset and the others to another subset after matching the optimal geometry. Figure 10 shows the compared RMSEs of the position. Obviously, the tracking performance of the static sensors is the poorest, and it continues to degrade as the distance from the target increases. The proposed method improves the tracking performance and exhibits a lower estimation error compared with the methods in [34,35] for maneuvering-turn target tracking.

Conclusions
In this study, optimal sensor–target geometry and a motion coordination strategy were proposed for a target tracking system using mobile bearings-only sensors in 2D space. We discussed the suboptimality of approaching the target directly for bearings-only sensors to improve tracking performance. A general optimal sensor–target geometry with uniform sensor–target distance was derived using D-optimality for an arbitrary number n (n ≥ 2) of bearings-only sensors. A motion coordination algorithm was developed based on the preceding optimality analysis to achieve the optimal target tracking performance efficiently. In future work, we will investigate a distributed optimization method for mobile sensors and its extension to multitarget tracking.