Article

Optimal Geometry and Motion Coordination for Multisensor Target Tracking with Bearings-Only Measurements

School of Automation, Nanjing University of Science and Technology, Nanjing 210094, China
*
Author to whom correspondence should be addressed.
Sensors 2023, 23(14), 6408; https://doi.org/10.3390/s23146408
Submission received: 5 June 2023 / Revised: 12 July 2023 / Accepted: 12 July 2023 / Published: 14 July 2023

Abstract: This paper focuses on the optimal geometry and motion coordination problem of mobile bearings-only sensors for improving target tracking performance. A general optimal sensor–target geometry with uniform sensor–target distance is derived using D-optimality for arbitrary $n$ ($n \ge 2$) bearings-only sensors. The optimal geometry is characterized by the partition cases dividing $n$ into a sum of integers no less than two. Then, a motion coordination method is developed to steer the sensors to the circular radius orbit (CRO) around the target at the minimum sensor–target distance and to move them in a circular formation. The sensors are first driven to approach the target directly when outside the CRO. When the sensors reach the CRO, they are allocated to different subsets according to the partition cases by matching the optimal geometry. The sensor motion is then optimized under constraints to achieve the matched optimal geometry by minimizing the sum of the distances traveled by the sensors. Finally, two illustrative examples are used to demonstrate the effectiveness of the proposed approach.

1. Introduction

Bearings-only target tracking is widely applied in wireless sensor networks in both civilian and military areas [1,2]. Unlike other sensors such as range-only sensors and time difference of arrival (TDOA) sensors, bearings-only sensors work in passive mode and can more easily avoid being detected and attacked. However, they are highly sensitive to range: even a small angle measurement error may lead to a large tracking error at long range. Therefore, bearings-only target tracking has been a research area of considerable interest for decades. Meanwhile, with the development of unmanned vehicles, the traditional stationary sensor platforms have evolved into mobile ones characterized by high speed and long endurance. Accordingly, flexible sensor motion coordination can be achieved, so tracking accuracy and survivability can be significantly improved through sensor coordination.
Much previous work has been dedicated to developing different estimators for target tracking based on bearings-only measurements in two- and three-dimensional space [3,4,5,6]. The extended Kalman filter (EKF) is a classical method for the nonlinear tracking problem [7] but often diverges when the model nonlinearity is strong. The pseudolinear Kalman filter (PLKF) was introduced in [8,9], with better convergence than the EKF. However, the estimate is biased, which is highly dependent on sensor geometry [10]. Furthermore, other estimation algorithms such as the unscented Kalman filter (UKF) [11], cubature Kalman filter (CKF) [12,13,14,15], and particle filter (PF) [16,17] have been applied in bearings-only target tracking with different estimation performance advantages.
Compared with the improvement in tracking accuracy produced by the estimation algorithms, sensor–target geometry plays a fundamental role in determining the accuracy of target tracking systems [18,19,20,21]. The Fisher information matrix (FIM) is a commonly used criterion for assessing target tracking accuracy. The inverse of the FIM, called the Cramer–Rao lower bound (CRLB), indicates the optimal performance of a tracking system. Three popular optimality criteria are adopted to achieve the optimal sensor configuration based on the FIM [19]. D-optimality minimizes the area of the uncertainty ellipse by maximizing the determinant of the FIM [18,20,22]; A-optimality suppresses the average variance by minimizing the trace of the CRLB [23,24]; and E-optimality minimizes the length of the largest axis of the same ellipsoid by minimizing the maximum eigenvalue of the CRLB [19]. In [25], D-optimality was adopted to optimize sensor placement for range-based target tracking. In [26], the conditions for optimal placement of heterogeneous sensors were derived based on maximizing the information matrix, and the optimal placement for paired sensors was developed leveraging a “divide-and-conquer” strategy. In [27], A-optimality was used to solve sensor placement for 3D angle-of-arrival target localization. Geometric dilution of precision (GDOP) [28] is another criterion used to evaluate tracking accuracy. GDOP is defined as the root mean square position error and illustrates how an estimation is influenced by sensor–target geometry [29]. The optimal deployment for multitarget localization was developed in [30] by minimizing the GDOP.
In addition to the above theoretical analysis on the sensor–target geometries, some sensor path optimization methods have been proposed for target tracking to avoid the difficulty in finding the closed-form solution. A gradient-descent-based motion planning algorithm was presented for decentralized target tracking [31]. In [32], a gradient descent optimization algorithm was proposed for single- and multisensor path planning by minimizing the mean square error in 2D space. In [33], the path optimization for passive emitter localization in 2D space was transformed into a nonlinear programming problem with the FIM as the cost function. In [34], the path optimization strategy for 3D AOA target tracking was developed by minimizing the trace of covariance matrices with gradient descent optimization and a grid search method. In [35], the optimal sensor placement for AOA sensors was derived with a Gaussian prior using D- and A-optimality. In addition, the result was extended to path optimization based on a projection algorithm.
Most of the existing work has focused on optimal deployment of multiple bearings-only sensors for target localization. Some closed-form solutions have been derived with equal angular distribution. Inspired by the “divide-and-conquer” strategy in [26], the continuum of optimal solutions for bearings-only measurements has the potential to be extended to general circumstances. Moreover, for bearings-only target tracking problems using mobile sensors, some studies in the literature have adopted optimization methods such as gradient descent, Gauss–Seidel relaxation, and so on. Nevertheless, the solution space for optimization is complex due to the high nonlinearity of the cost functions related to the FIM. As a result, these numerical methods may fall into local optima and fail to reach the globally optimal tracking performance. Motivated by the aforementioned aspects, this paper focuses on the optimal sensor–target geometry and motion coordination problem of mobile bearings-only sensors for target tracking. The sensors are driven to approach the target from a distance and eventually move in a circular formation to track the target.
The contributions of this paper are summarized as follows. (1) The suboptimality of approaching the target for bearings-only sensors to improve tracking performance is analyzed. (2) A continuum solution to the optimal sensor–target geometry is derived with uniform sensor–target distance using D-optimality for arbitrary $n$ ($n \ge 2$) bearings-only sensors. The optimal geometry is characterized by the partition cases dividing $n$ into a sum of integers no less than two. (3) A motion coordination algorithm is developed, based on matching the optimal geometry and optimizing the sensor motion, to achieve the globally optimal target tracking performance.
The remainder of this paper is organized as follows: Section 2 presents the problem formulation. The CKF and FIM are introduced in Section 3. Section 4 reformulates the problem and investigates the optimality analysis. In Section 5, we design a motion coordination strategy based on the results in Section 4. The proposed method is verified by simulations in Section 6. Section 7 concludes this paper.
Notations: Define $\theta_i \in (-\pi, \pi]$ and $\theta_{ij} = \theta_j - \theta_i \in (-\pi, \pi]$. The two-norm of a vector $x \in \mathbb{R}^n$ is defined as $\|x\| = \sqrt{x^T x}$. $\mathrm{Chol}(M)$ indicates the Cholesky decomposition of $M$. $\mathrm{tr}(\cdot)$ and $\det(\cdot)$ denote the trace and the determinant of the matrix contained in the bracket, respectively. $|S|$ represents the cardinality of the set $S$. $S \setminus T = \{e \mid e \in S \text{ and } e \notin T\}$.

2. Problem Formulation

This paper focuses on the problem of sensor motion and coordination for single-moving-target tracking with $n \ge 2$ bearings-only sensors in 2D space. The target tracking geometry is depicted in Figure 1. $\theta_{i,k}$ is the angle of the line of sight (LOS) from sensor $i$ at discrete time $k$. Define $z_{i,k}$ as the measurement of $\theta_{i,k}$; then, the measurement function is

$$z_{i,k} = \theta_{i,k} + \eta_{i,k} = \tan^{-1}\frac{y_k^p - y_i(k)}{x_k^p - x_i(k)} + \eta_{i,k}$$

where $p_k = [x_k^p, y_k^p]^T$ is the position of the target at time $k$; $\tan^{-1}(\cdot)$ is the four-quadrant inverse tangent function and $\theta_{i,k} \in (-\pi, \pi]$; $s_i(k) = [x_i(k), y_i(k)]^T$ is the location of sensor $i$; and $\eta_{i,k}$ is the measurement noise, assumed to be i.i.d. Gaussian with zero mean and variance $\sigma_i^2$, $i \in \{1, 2, \ldots, n\}$. The sensors are homogeneous, i.e., $\sigma_i^2 = \sigma_\theta^2$. Write the measurements in compact form as $z_k = [z_{1,k}, z_{2,k}, \ldots, z_{n,k}]^T \in \mathbb{R}^n$, and $\eta_k = [\eta_{1,k}, \eta_{2,k}, \ldots, \eta_{n,k}]^T \in \mathbb{R}^n$ is measurement Gaussian noise with zero mean and covariance $R_k = \sigma_\theta^2 I$, where $I$ is an identity matrix.
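As a concrete illustration of the measurement model above, the following Python sketch generates one noisy bearing using the four-quadrant inverse tangent; the function name and argument layout are illustrative, not from the paper:

```python
import numpy as np

def bearing_measurement(target, sensor, sigma_theta, rng):
    """Noisy LOS bearing z_{i,k} from a sensor to the target, in (-pi, pi]."""
    dx = float(target[0]) - float(sensor[0])
    dy = float(target[1]) - float(sensor[1])
    theta = np.arctan2(dy, dx)                   # four-quadrant inverse tangent
    return theta + rng.normal(0.0, sigma_theta)  # add zero-mean Gaussian noise
```

Setting `sigma_theta = 0` recovers the true LOS angle, which is convenient for checking a geometry before adding noise.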
Consider the target whose motion is described by a nonlinear dynamic discrete system
$$x_{k+1} = f(x_k) + w_k$$

where $x_k \in \mathbb{R}^{n_x}$ is the state vector of the dynamic system at discrete time $k$; $w_k \in \mathbb{R}^{n_x}$ is process Gaussian noise with zero mean and covariance $Q_k$; and $n_x$ is the dimension of the state vector. Meanwhile, $w_k$ and $\eta_k$ are mutually independent processes.
The dynamic model of the mobile sensors is given by
$$s_i(k+1) = s_i(k) + u_i(k), \qquad u_i(k) = v_i(k)\,T$$

where $s_i(k)$ is the position of sensor $i$ at discrete time $k$; $u_i(k)$ is the control input for sensor $i$ at time $k$; $v_i(k)$ is the designed velocity of sensor $i$ at time $k$; and $T$ is the sampling time.
The state parameters of the target are unknown. We assume that the states of the mobile sensors and the measurements taken by them are known. Because the scenario is noncooperative, a minimum distance between the target and the sensors must be maintained. We aim to estimate the target state using the bearings-only measurements and to improve the tracking accuracy by optimizing the sensor–target geometry of the cooperative mobile sensors under practical constraints.
Assumption A1. 
At the beginning of the tracking process, at least two sensors are deployed in positions that are not collinear with the target to ensure the observability of the target by the sensors [19,36].
Assumption A2. 
The mobile sensors are homogeneous, with a maximum speed $v_{\max}$ and a maximum turn rate $\varphi_{\max}$ due to mechanical limitations. The maximum speed of the sensors is faster than that of the target to ensure they can catch up with the target. The minimum distance between the sensors and the target is denoted as $d_{\min}$.

3. Parameter Estimation

In this paper, we use the cubature Kalman filter [12] to estimate the state of the target. The CKF is a nonlinear filter that has gained prominence over the past decade, offering improved performance over conventional nonlinear filters, particularly in handling the strong nonlinearity of bearings-only target tracking.
In addition, it is known that static sensors provide only limited tracking performance over a restricted range. One feasible way to improve the tracking accuracy is therefore to move the sensors to better locations from which to track the target. The FIM based on bearings-only measurements is introduced in this section for the optimality analysis in the following section.

3.1. Cubature Kalman Filter

Denote $\hat{x}_{k|k}$ as the estimate of $x_k$ and $P_{k|k}$ as the estimation error covariance obtained using the bearings-only measurements $z_k$. The cubature Kalman filter, in its time- and measurement-update form, is computed starting from $\hat{x}_{0|0}$ and $P_{0|0}$. The iteration functions are as follows:
Step 1. Evaluate the cubature points ($i = 1, 2, \ldots, 2n_x$)

$$S_{k-1|k-1} = \mathrm{Chol}\left(P_{k-1|k-1}\right), \qquad X_{i,k-1|k-1} = S_{k-1|k-1}\,\xi_i + \hat{x}_{k-1|k-1}$$

where $S_{k-1|k-1}$ is the Cholesky decomposition of $P_{k-1|k-1}$; $\xi_i = \sqrt{n_x}\,[1]_i$; and $[1]_i \in \mathbb{R}^{n_x}$ is the $i$th element of the $2n_x$-element generator set

$$\left\{\begin{bmatrix}1\\0\\\vdots\\0\end{bmatrix}, \begin{bmatrix}0\\1\\\vdots\\0\end{bmatrix}, \ldots, \begin{bmatrix}0\\0\\\vdots\\1\end{bmatrix}, \begin{bmatrix}-1\\0\\\vdots\\0\end{bmatrix}, \begin{bmatrix}0\\-1\\\vdots\\0\end{bmatrix}, \ldots, \begin{bmatrix}0\\0\\\vdots\\-1\end{bmatrix}\right\}$$
Step 2. Time update
$$\begin{aligned} X_{i,k|k-1} &= f\left(X_{i,k-1|k-1}\right) \\ \hat{x}_{k|k-1} &= \frac{1}{2n_x}\sum_{i=1}^{2n_x} X_{i,k|k-1} \\ P_{k|k-1} &= \frac{1}{2n_x}\sum_{i=1}^{2n_x} X_{i,k|k-1} X_{i,k|k-1}^T - \hat{x}_{k|k-1}\hat{x}_{k|k-1}^T + Q_{k-1} \end{aligned}$$

where $\hat{x}_{k|k-1}$ is the state prediction, and $P_{k|k-1}$ is the predicted error covariance.
Step 3. Measurement update
$$\begin{aligned} S_{k|k-1} &= \mathrm{Chol}\left(P_{k|k-1}\right) \\ \chi_{i,k|k-1} &= S_{k|k-1}\,\xi_i + \hat{x}_{k|k-1} \\ Z_{i,k|k-1} &= h\left(\chi_{i,k|k-1}\right) \\ \hat{z}_{k|k-1} &= \frac{1}{2n_x}\sum_{i=1}^{2n_x} Z_{i,k|k-1} \\ P_{zz,k|k-1} &= \frac{1}{2n_x}\sum_{i=1}^{2n_x} Z_{i,k|k-1} Z_{i,k|k-1}^T - \hat{z}_{k|k-1}\hat{z}_{k|k-1}^T + R_k \\ P_{xz,k|k-1} &= \frac{1}{2n_x}\sum_{i=1}^{2n_x} \chi_{i,k|k-1} Z_{i,k|k-1}^T - \hat{x}_{k|k-1}\hat{z}_{k|k-1}^T \\ W_k &= P_{xz,k|k-1} P_{zz,k|k-1}^{-1} \\ \hat{x}_{k|k} &= \hat{x}_{k|k-1} + W_k\left(z_k - \hat{z}_{k|k-1}\right) \\ P_{k|k} &= P_{k|k-1} - W_k P_{zz,k|k-1} W_k^T \end{aligned}$$

where $\hat{z}_{k|k-1}$ is the predicted measurement; $P_{zz,k|k-1}$ is the innovation covariance matrix; $P_{xz,k|k-1}$ is the cross-covariance matrix; and $W_k$ is the Kalman gain.
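The three CKF steps above can be condensed into a single predict-plus-update routine. The sketch below is a minimal NumPy rendering under the stated assumptions (additive Gaussian noise, third-degree spherical-radial cubature rule); the function name and interface are illustrative:

```python
import numpy as np

def ckf_step(x, P, f, h, Q, R, z):
    """One cubature Kalman filter cycle: time update, then measurement update,
    using the 2*n_x cubature points xi_i = sqrt(n_x) * [1]_i."""
    nx = x.size
    xi = np.sqrt(nx) * np.hstack([np.eye(nx), -np.eye(nx)])    # generator set

    # Time update: propagate cubature points through the dynamics f
    S = np.linalg.cholesky(P)
    X = S @ xi + x[:, None]
    Xp = np.column_stack([f(X[:, i]) for i in range(2 * nx)])
    x_pred = Xp.mean(axis=1)
    P_pred = Xp @ Xp.T / (2 * nx) - np.outer(x_pred, x_pred) + Q

    # Measurement update: re-draw points, propagate through h
    S = np.linalg.cholesky(P_pred)
    Xm = S @ xi + x_pred[:, None]
    Z = np.column_stack([h(Xm[:, i]) for i in range(2 * nx)])
    z_pred = Z.mean(axis=1)
    Pzz = Z @ Z.T / (2 * nx) - np.outer(z_pred, z_pred) + R
    Pxz = Xm @ Z.T / (2 * nx) - np.outer(x_pred, z_pred)
    W = Pxz @ np.linalg.inv(Pzz)                               # Kalman gain
    x_new = x_pred + W @ (z - z_pred)
    P_new = P_pred - W @ Pzz @ W.T
    return x_new, P_new
```

For linear $f$ and $h$ the cubature rule is exact, so this routine reproduces the ordinary Kalman filter, which gives a simple way to sanity-check an implementation.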

3.2. Fisher Information Matrix

The error covariance matrix is defined as
$$P_{k|k} \triangleq E\left[(x_k - \hat{x}_{k|k})(x_k - \hat{x}_{k|k})^T\right] \ge J_k^{-1}$$
where J k is called the FIM, which quantifies the amount of information obtained from the measurements, with the expression
$$J_k = E\left[-\frac{\partial^2 \ln p(z_k|x_k)}{\partial x_k^2}\right]$$
where p ( z k | x k ) is the probability density function, expressed as
$$p(z_k|x_k) = \frac{1}{\sqrt{(2\pi)^n \det(R_k)}}\exp\left\{-\frac{1}{2}\left[z_k - h(x_k)\right]^T R_k^{-1}\left[z_k - h(x_k)\right]\right\}$$
Given the measurement vector $z_k$, the FIM is determined as

$$J_k = \frac{1}{\sigma_\theta^2}\sum_{i=1}^n \frac{1}{r_{i,k}^2}\begin{bmatrix} \cos^2(\theta_{i,k}) & -\frac{1}{2}\sin(2\theta_{i,k}) \\ -\frac{1}{2}\sin(2\theta_{i,k}) & \sin^2(\theta_{i,k}) \end{bmatrix}$$

where $r_{i,k} = \|p_k - s_i(k)\|$ represents the distance between the target position $p_k$ and the sensor position $s_i(k)$ at time $k$.
Lemma 1 
([18]). Let the FIM be expressed as in (10); then, the following expressions for the determinant of the FIM are equivalent:

$$\text{(1)}\quad \det(J_k) = \frac{1}{4\sigma_\theta^4}\left[\left(\sum_{i=1}^n \frac{1}{r_{i,k}^2}\right)^2 - \left(\sum_{i=1}^n \frac{\cos(2\theta_{i,k})}{r_{i,k}^2}\right)^2 - \left(\sum_{i=1}^n \frac{\sin(2\theta_{i,k})}{r_{i,k}^2}\right)^2\right]$$

$$\text{(2)}\quad \det(J_k) = \frac{1}{\sigma_\theta^4}\sum_{\Psi}\frac{\sin^2(\theta_{ij})}{r_{i,k}^2\, r_{j,k}^2}$$

where $\Psi = \{\{i, j\}\}$ is the set of all combinations of $i$ and $j$ with $1 \le i < j \le n$, and $\theta_{ij} = \theta_{j,k} - \theta_{i,k}$.
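The equivalence in Lemma 1 can be checked numerically. The sketch below (illustrative naming) evaluates both expressions for $\det(J_k)$ and returns them for comparison:

```python
import numpy as np
from itertools import combinations

def fim_det_two_forms(thetas, ranges, sigma):
    """Evaluate both determinant expressions of Lemma 1; they coincide."""
    thetas = np.asarray(thetas, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    s = np.sum(1.0 / ranges**2)
    c = np.sum(np.cos(2 * thetas) / ranges**2)
    d = np.sum(np.sin(2 * thetas) / ranges**2)
    form1 = (s**2 - c**2 - d**2) / (4 * sigma**4)     # expression (1)
    # expression (2): sum over all sensor pairs {i, j} with i < j
    form2 = sum(np.sin(thetas[j] - thetas[i])**2 / (ranges[i]**2 * ranges[j]**2)
                for i, j in combinations(range(len(thetas)), 2)) / sigma**4
    return form1, form2
```

Evaluating both forms on random geometries is a quick regression test for any FIM-based cost function used later in the path optimization.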

4. Optimality Analysis

The problem of path planning and motion coordination for improving tracking performance is equivalent to finding the next waypoints at each time step by maximizing the determinant of the FIM. Two kinds of parameters influence the determinant of the FIM: the sensor–target distances and the angles among the sensors. Hence, we can maximize $\det(J_k)$ by simultaneously reducing the distances between the sensors and the target and configuring the angles among the sensors.
In order to ensure the minimum distance constraint, the sensors eventually move on a circular trajectory of fixed radius around the target. Before that, the path for reaching the circular radius orbit (CRO) while improving the tracking accuracy must be designed. Thus, the design of the motion coordination for multiple sensors is divided into two stages: outside the CRO and on the CRO of radius $d_{\min}$.

4.1. Outside the CRO Distance

Consider the bearings-only tracking problem. When the range between the target and the sensor is greater than d min , the problem of the optimal sensor movement is equivalent to the following optimization problem:
$$\begin{aligned} \max\quad & \det(J_{k+1}) \\ \mathrm{s.t.}\quad & \|v_i(k)\| \le v_{\max} \\ & \left|\angle v_i(k) - \angle v_i(k-1)\right| \le \varphi_{\max} \end{aligned}$$

where $\angle v_i(k)$ is the angle of the velocity vector at time $k$, and the difference between $\angle v_i(k)$ and $\angle v_i(k-1)$ is bounded by $\varphi_{\max}$ due to the limited turn rate.
Obviously, the difficulty of solving problem (12) increases with the number of mobile sensors, although it can be solved via numerical methods. As such, we turn to a suboptimal motion strategy to reduce the computational complexity.
When the sensors are far away from the target, the sensors are expected to move with maximum speed v max to approach the target. As shown in Figure 2, the location s i ( k + 1 ) that sensor i is able to reach can be expressed by
$$\begin{aligned} x_i(k+1) &= x_i(k) + v_{\max} T \cos\phi_{i,k} \\ y_i(k+1) &= y_i(k) + v_{\max} T \sin\phi_{i,k} \end{aligned}$$

where $\phi_{i,k} \in [0, 2\pi)$ is the heading direction of sensor $i$ at time $k$. For convenience, denote $\Delta x_i \triangleq x_{k+1}^p - x_i(k)$, $\Delta y_i \triangleq y_{k+1}^p - y_i(k)$, and $d_i \triangleq v_{\max} T$.
Theorem 1. 
Consider the bearings-only tracking problem. When the range between the target and the sensor is greater than $d_{\min}$, and the position of the target is $p_{k+1}$ at time $k+1$, the suboptimal heading direction of sensor $i$ at time $k$ is

$$\phi_{i,k}^* = \tan^{-1}\frac{\Delta y_i}{\Delta x_i}$$
Proof. 
According to the Cauchy inequality,
$$\det(J_{k+1}) \le \frac{1}{4\sigma_\theta^4}\left[\left(\sum_{i=1}^n \frac{1}{r_{i,k+1}^2}\right)^2 - \sum_{i=1}^n \frac{\cos^2(2\theta_{i,k+1})}{r_{i,k+1}^4} - \sum_{i=1}^n \frac{\sin^2(2\theta_{i,k+1})}{r_{i,k+1}^4}\right] = \frac{1}{4\sigma_\theta^4}\left[\left(\sum_{i=1}^n \frac{1}{r_{i,k+1}^2}\right)^2 - \sum_{i=1}^n \frac{1}{r_{i,k+1}^4}\right] \triangleq F(\gamma)$$
Consider the function
$$F(\gamma) = \frac{1}{4\sigma_\theta^4}\left[\left(\sum_{i=1}^n \frac{1}{r_{i,k+1}^2}\right)^2 - \sum_{i=1}^n \frac{1}{r_{i,k+1}^4}\right]$$

where $\gamma = [r_{1,k+1}, r_{2,k+1}, \ldots, r_{n,k+1}]^T$.
To achieve the maximum of $F(\gamma)$, take the partial derivative of $F(\gamma)$ with respect to $\phi_{i,k}$. Then, we have

$$\frac{\partial F(\gamma)}{\partial \phi_{i,k}} = \frac{1}{4\sigma_\theta^4}\left[\sum_{j=1}^n \frac{1}{r_{j,k+1}^2}\cdot\frac{4d_i}{r_{i,k+1}^4} - \frac{4d_i}{r_{i,k+1}^6}\right]\left(\Delta y_i \cos\phi_{i,k} - \Delta x_i \sin\phi_{i,k}\right)$$
Letting $\frac{\partial F(\gamma)}{\partial \phi_{i,k}} = 0$, we obtain

$$\phi_0 = \left[\tan^{-1}\frac{\Delta y_1}{\Delta x_1},\ \tan^{-1}\frac{\Delta y_2}{\Delta x_2},\ \ldots,\ \tan^{-1}\frac{\Delta y_n}{\Delta x_n}\right]^T$$
Additionally, let $H \in \mathbb{R}^{n \times n}$ denote the Hessian matrix of $F(\gamma)$ at $\phi_0$, with elements

$$H_{ij} = \frac{\partial^2 F(\gamma)}{\partial \phi_{i,k}\,\partial \phi_{j,k}}$$
We obtain
$$H_{ij}\big|_{\phi_0} = \begin{cases} 0, & i \ne j \\ -\dfrac{d_i\,\Delta r_i}{\sigma_\theta^4\, r_{i,k+1}^4}\displaystyle\sum_{l=1}^n \frac{1}{r_{l,k+1}^2}, & i = j \end{cases}$$

where $\Delta r_i = \sqrt{\Delta x_i^2 + \Delta y_i^2}$. Obviously, $H$ is a negative definite matrix; as a consequence, $\phi_0$ is the maximum point. □
Furthermore, taking the limitation of the turn rate into consideration, the heading direction of sensor i at time k is
$$\phi_{i,k} = \begin{cases} \underline{\phi}_{i,k}, & \phi_{i,k}^* < \underline{\phi}_{i,k} \\ \phi_{i,k}^*, & \underline{\phi}_{i,k} \le \phi_{i,k}^* \le \overline{\phi}_{i,k} \\ \overline{\phi}_{i,k}, & \phi_{i,k}^* > \overline{\phi}_{i,k} \end{cases}$$

where $\overline{\phi}_{i,k} = \angle v_i(k-1) + \varphi_{\max}$ and $\underline{\phi}_{i,k} = \angle v_i(k-1) - \varphi_{\max}$.
Note that the determinant of the FIM increases as the range between the sensors and the target decreases when the angles among the sensors remain unchanged. In other words, the optimal heading direction is always toward the target, so we can force the sensors to approach the CRO around the target directly. The tracking accuracy is improved as well, although it does not reach the optimum.
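A minimal sketch of the resulting steering rule, combining the heading of Theorem 1 with the turn-rate clamp above (names are illustrative):

```python
import numpy as np

def clamped_heading(sensor, target_pred, prev_heading, phi_max):
    """Suboptimal heading of Theorem 1 (point at the predicted target),
    clamped to the turn-rate interval [prev - phi_max, prev + phi_max]."""
    desired = np.arctan2(target_pred[1] - sensor[1],
                         target_pred[0] - sensor[0])
    # wrap the requested turn into [-pi, pi) before clamping
    turn = (desired - prev_heading + np.pi) % (2 * np.pi) - np.pi
    return prev_heading + np.clip(turn, -phi_max, phi_max)
```

The wrap step matters: without it, a heading difference near $\pm\pi$ would be clamped on the wrong side and the sensor would turn the long way around.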

4.2. On the CRO Distance d min

When all sensors reach the CRO around the target, which is a circle centered on the target with radius $d_{\min}$, we have $r_i = d_{\min}$. Define $\Delta\theta_i = \theta_{i+1} - \theta_i$, $i \in \{1, 2, \ldots, n-1\}$. The sensor–target geometry is depicted in Figure 3. In this section, the time step $k$ is omitted for convenience of description.
In order to simplify the analysis of the optimal sensor–target geometry, the related propositions are first recalled.
Proposition 1. 
The determinant of the FIM in (11) remains unchanged under the following three operations:
1. Switching the positions of any two sensors;
2. Rotating all the sensors around the target;
3. Flipping arbitrary sensors about the target.
Remark 1. 
Proposition 1 originated from [18] and was recognized in [20]. It implies that $\det(J)$ is invariant under these geometric operations.
Without loss of generality, the sensors are assumed to be renumbered counterclockwise with $\theta_i \in (0, \pi]$ through the geometric operations of Proposition 1, which is equivalent to flipping about the target those sensors whose actual LOS angles lie in $(-\pi, 0]$.
The target tracking system achieves optimal estimation performance when all sensors move on the CRO in a formation at the same speed as the target, as established by the following results.
Lemma 2 
([30]). Consider $n$ bearings-only sensors tracking a single target. When all sensors are on the CRO around the target ($r_i = d_{\min}$) and $\Delta\theta_1 = \Delta\theta_2 = \cdots = \Delta\theta_{n-1} = \Delta\theta$, the Fisher information determinant given in (11) has the upper bound $\frac{n^2}{4\sigma_\theta^4 d_{\min}^4}$. The upper bound is achieved when $\Delta\theta_i = \frac{\pi}{n}$.
Remark 2. 
When $n \ge 3$, there are two solutions for the optimal geometry with equal angular distribution in [30], i.e., $\Delta\theta_i = \frac{\pi}{n}$ or $\Delta\theta_i = \frac{2\pi}{n}$. However, the optimal geometry with $\Delta\theta_i = \frac{2\pi}{n}$ can be obtained by flipping part of the sensors about the target in the optimal geometry with $\Delta\theta_i = \frac{\pi}{n}$. Therefore, we consider them identical optimal geometries for $n$ sensors and retain only the solution $\Delta\theta_i = \frac{\pi}{n}$, which avoids the complexity arising from two optional solutions.
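The bound of Lemma 2 can be verified numerically. Assuming all sensors sit on the CRO, the sketch below evaluates $\det(J)$ via the second form of Lemma 1 and compares the equal $\pi/n$ spacing against the upper bound $n^2/(4\sigma_\theta^4 d_{\min}^4)$; the parameter values are illustrative:

```python
import numpy as np
from itertools import combinations

def det_fim_on_cro(thetas, d_min, sigma):
    """det(J) when all sensor-target ranges equal d_min (Lemma 1, form (2))."""
    return sum(np.sin(tj - ti) ** 2
               for ti, tj in combinations(thetas, 2)) / (sigma**4 * d_min**4)

n, d_min, sigma = 4, 50.0, 0.1
equal = [i * np.pi / n for i in range(n)]       # equal spacing: delta_theta = pi/n
bound = n**2 / (4 * sigma**4 * d_min**4)        # upper bound of Lemma 2
```

With the equal spacing, `det_fim_on_cro(equal, d_min, sigma)` attains `bound`, while perturbed spacings fall strictly below it.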
For more general circumstances, there are fewer restrictions on $\Delta\theta_i$. Denote $S = \{1, 2, \ldots, n\}$ as the set of all sensors and $S_i = \{n_1^i, n_2^i, \ldots, n_{q_i}^i\}$ as a subset of $S$, $i \in \{1, 2, \ldots, m\}$. Denote $\Xi = \{q_1, q_2, \ldots, q_m\}$, where $q_i = |S_i|$. Then, we have the following result:
Theorem 2. 
Consider the bearings-only tracking problem. When all sensors are on the CRO around the target ($r_i = d_{\min}$), the Fisher information determinant given in (11) has the upper bound $\frac{n^2}{4\sigma_\theta^4 d_{\min}^4}$. The upper bound is achieved if the following conditions hold:

$$\bigcup_{i=1}^m S_i = S, \qquad S_i \cap S_j = \varnothing \ (i \ne j), \qquad 2 \le q_i \le n,$$
$$\theta_{n_{l+1}^i} - \theta_{n_1^i} = \frac{l}{q_i}\pi, \qquad l \in \{1, 2, \ldots, q_i - 1\}$$
Proof. 
When $r_i = d_{\min}$, then

$$\det(J) = \frac{1}{\sigma_\theta^4 d_{\min}^4}\sum_{\Psi}\sin^2(\theta_{ij}) = \frac{1}{2\sigma_\theta^4 d_{\min}^4}\left[\frac{n(n-1)}{2} - \sum_{\Psi}\cos(2\theta_{ij})\right]$$
The sensors in $S_i$ are placed as in Lemma 2. Then,

$$\sum_{\Psi_i}\cos(2\theta_{ab}) = -\frac{q_i}{2}$$

where $\Psi_i = \{\{a, b\}\}$ is the set of all combinations of $a$ and $b$ with $a < b$ and $a, b \in S_i$. Since $\sum_{i=1}^n \cos\left(\alpha + \frac{2(i-1)}{n}\pi\right) = 0$ ($\alpha$ arbitrary, $n \ge 2$), for $j \in S_i$ and $l \in S_g$ ($i \ne g$),

$$\sum_{l=1}^{q_g}\cos(2\theta_{jl}) = \sum_{l=1}^{q_g}\cos\left(2\theta_{j n_1^g} + \frac{2(l-1)}{q_g}\pi\right) = 0$$
Finally, consider the sum

$$\sum_{\Psi}\cos(2\theta_{ij}) = \sum_{\Psi'}\cos(2\theta_{ij}) + \sum_{\Psi\setminus\Psi'}\cos(2\theta_{ij}) = -\sum_{i=1}^m \frac{q_i}{2} + 0 = -\frac{n}{2}$$

where $\Psi' = \bigcup_{i=1}^m \Psi_i$.
Hence,
$$\det(J) = \frac{n^2}{4\sigma_\theta^4 d_{\min}^4}$$
   □
In view of (25) in the proof of Theorem 2, the angles between sensors not in the same subset do not affect the optimal sensor–target geometry. In addition, the geometry remains optimal when the sensors are transformed by the geometric operations in Proposition 1. Therefore, we can classify the optimal sensor–target geometries by the set $\Xi = \{q_1, q_2, \ldots, q_m\}$, which is recognized as the partition case dividing $n$ into a sum of integers no less than 2. In other words, optimal sensor–target geometries are regarded as identical for equivalent $\Xi$. Figure 4 and Figure 5 illustrate some examples of the optimal sensor–target geometry for $n = 4, 5$. In Figure 4a,b, the two sensor–target geometries are considered the same because the sensors are both divided into two subsets with $\Xi = \{2, 2\}$. Additionally, the geometries with the same $\Xi = \{2, 3\}$ in Figure 5a,b are also regarded as identical, because the optimal sensor–target geometry in Figure 5b can be obtained by flipping sensor 4 about the target in Figure 5a. The optimal sensor–target geometry with the other partition case for $n = 5$ is shown in Figure 5c,d, which is regarded as identical optimal geometry with $\Xi = \{5\}$, but it differs from the optimal geometry in Figure 5a,b due to the different partition case.
Remark 3. 
Although the number of optimal sensor–target geometries described in Theorem 2 is infinite due to rotation invariance, we are only concerned with the partition cases of the set $S$ according to the classification method in this paper. The number of partition cases dividing $n$ into a sum of integers no less than 2, denoted as $A(n)$, asymptotically equals $\frac{1}{4\sqrt{3}\,n}\exp\left(\pi\sqrt{\frac{2n}{3}}\right) - \frac{1}{4\sqrt{3}\,(n-1)}\exp\left(\pi\sqrt{\frac{2(n-1)}{3}}\right)$ [37].
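For small $n$, the partition count $A(n)$ can also be obtained exactly by enumeration. A short recursive sketch (illustrative naming) counts the unordered partitions of $n$ into parts no less than 2:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def num_partitions_min2(n, smallest=2):
    """A(n): number of unordered partitions of n into parts >= smallest.
    Parts are generated in nondecreasing order to avoid double counting."""
    if n == 0:
        return 1
    return sum(num_partitions_min2(n - k, k) for k in range(smallest, n + 1))
```

For example, it yields $A(4) = 2$ (cases $\{4\}$ and $\{2, 2\}$) and $A(5) = 2$ (cases $\{5\}$ and $\{2, 3\}$), matching the partition cases discussed above.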

5. Motion Coordination

In this section, we propose a motion coordination strategy for mobile sensors to improve target tracking performance. According to our analysis above, the mobile sensors are required to reach the CRO around the target as soon as possible and coordinate with each other. Figure 6 illustrates the main steps of sensor motion coordination to achieve optimal geometry.

5.1. Single Sensor Motion

In practice, the real state of the target is unknown. We utilize the one-step predicted position of the target, $\hat{p}_{k+1|k} = [\hat{x}_{k+1|k}^p, \hat{y}_{k+1|k}^p]^T$, instead of $p_{k+1}$ at time $k$. The velocity of sensor $i$ is designed as

$$u_i(k) = \begin{cases} v_{\max} T\,[\cos\phi_{i,k},\ \sin\phi_{i,k}]^T, & r_{i,k} > d_{\min} \\ \hat{p}_{k+1|k} - \hat{p}_{k|k}, & r_{i,k} = d_{\min} \end{cases}$$
As we want the sensors to approach the target as soon as possible, the velocities of the sensors are set to the maximum before they reach the boundary of the CRO. After the sensors reach the CRO, they are expected to follow the target while remaining on it.

5.2. Coordination Strategy

As all sensors reach the CRO around the target, they enter the coordination stage. The coordination strategy consists of matching the optimal sensor–target geometry and optimizing the sensor motion. The matching task allocates the sensors into subsets by comparing the current sensor–target geometry with the optimal geometry of the desired partition case $\Xi$. The sensor motion is then optimized to achieve the matched optimal geometry with minimum energy consumption.
Let $\hat{s}_i(k+1) = [\hat{x}_i(k+1), \hat{y}_i(k+1)]^T$ denote the expected location of sensor $i$ at time $k+1$ calculated from $u_i(k)$ in (28) as

$$\hat{s}_i(k+1) = s_i(k) + u_i(k)$$
Define $\hat{\theta}_i$ as the predicted angle

$$\hat{\theta}_i = \tan^{-1}\frac{\hat{y}_{k+1|k}^p - \hat{y}_i(k+1)}{\hat{x}_{k+1|k}^p - \hat{x}_i(k+1)}$$

where $\hat{\theta}_i$ is constrained within the range of 0 to $\pi$ to simplify the step of matching the optimal geometry.
Matching optimal geometry for a given Ξ = { q 1 , q 2 , , q m } can be described as follows:
$$\begin{aligned} \min\quad & \kappa = \sum_{i=1}^m \sum_{l=1}^{q_i-1}\left(\hat{\theta}_{n_{l+1}^i} - \hat{\theta}_{n_1^i} - \frac{l}{q_i}\pi\right)^2 \\ \mathrm{s.t.}\quad & S_i \cap S_j = \varnothing,\ i \ne j \\ & S_i = \{n_1^i, n_2^i, \ldots, n_{q_i}^i\} \subseteq S \\ & |S_i| = q_i,\ i \in \{1, 2, \ldots, m\} \\ & l \in \{1, 2, \ldots, q_i - 1\} \end{aligned}$$
where $\kappa$ is defined as the degree of difference from the optimal sensor–target geometry. The problem is naturally a combinatorial optimization problem, which is NP-hard. An algorithm to search for an approximate solution with a given $\Xi$, based on the greedy search method, is shown in Algorithm 1.
Algorithm 1 Matching optimal geometry.
Input: $S = \{1, 2, \ldots, n\}$, $\Xi = \{q_1, q_2, \ldots, q_m\}$;
Output: The sensor grouping $S_1, S_2, \ldots, S_m$;
1: for $i = 1, \ldots, m$ do
2:   for $j \in S$ do
3:     for $l = 2, \ldots, q_i$ do
4:       $L_j^l = \arg\min_{k \in S \setminus \{j\}} \left(\hat{\theta}_k - \hat{\theta}_j - \frac{l-1}{q_i}\pi\right)^2$;
5:     end for
6:     $\kappa_j = \sum_{l=2}^{q_i} \left(\hat{\theta}_{L_j^l} - \hat{\theta}_j - \frac{l-1}{q_i}\pi\right)^2$;
7:   end for
8:   Find the minimum $\kappa_j$; set $S_i \leftarrow \{j, L_j^2, \ldots, L_j^{q_i}\}$ and $S \leftarrow S \setminus S_i$;
9: end for
10: return $\{S_1, S_2, \ldots, S_m\}$.
Remark 4. 
The step of matching the optimal geometry only needs to be performed once, when the sensors all reach the CRO. The sensor coordination follows the optimal geometry matched via Algorithm 1 in the subsequent sensor movement on the CRO. Moreover, the computational complexity of Algorithm 1 is $O(mn^2)$.
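A possible Python rendering of the greedy search in Algorithm 1 is sketched below. It is one interpretation of the pseudocode above, not the authors' code; the input angles are assumed to be the predicted angles $\hat{\theta}_i$ already mapped into $[0, \pi)$:

```python
import numpy as np

def match_optimal_geometry(theta_hat, Xi):
    """Greedy grouping in the spirit of Algorithm 1: for each subset size q in Xi,
    try every remaining sensor j as the anchor, pick the q-1 partners closest to
    the equally spaced angles theta_j + (l-1)*pi/q, and keep the cheapest group."""
    remaining = set(range(len(theta_hat)))
    groups = []
    for q in Xi:
        best_cost, best_group = np.inf, None
        for j in remaining:                       # candidate anchor sensor
            group, cost, pool = [j], 0.0, remaining - {j}
            for l in range(2, q + 1):
                target = theta_hat[j] + (l - 1) * np.pi / q   # desired l-th angle
                k = min(pool, key=lambda s: (theta_hat[s] - target) ** 2)
                cost += (theta_hat[k] - target) ** 2
                group.append(k)
                pool.discard(k)
            if cost < best_cost:
                best_cost, best_group = cost, group
        groups.append(best_group)
        remaining -= set(best_group)
    return groups
```

For sizes summing to $n$, each sensor ends up in exactly one subset, and a perfectly spaced configuration is matched with zero cost.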
After matching the optimal sensor–target geometry, the sensors engage in motion coordination to achieve the optimal geometry, thereby improving tracking performance. For the purpose of energy conservation, sensor motion optimization can be described as a nonlinear optimization problem
$$\begin{aligned} \min\quad & \vartheta = \sum_{i=1}^m \sum_{j=1}^{q_i} \left\|\bar{u}_{n_j^i}(k)\right\| \\ \mathrm{s.t.}\quad & \hat{\theta}_{n_{j+1}^i}^* - \hat{\theta}_{n_1^i}^* = \frac{j}{q_i}\pi \\ & \left\|s_{n_j^i}^*(k+1) - \hat{p}_{k+1|k}\right\| = d_{\min} \\ & j \in \{1, 2, \ldots, q_i - 1\},\ i \in \{1, 2, \ldots, m\} \end{aligned}$$
where $\vartheta$ is the sum of the distances traveled by the sensors; $\bar{u}_{n_j^i}(k) = s_{n_j^i}^*(k+1) - \hat{s}_{n_j^i}(k+1)$; and $\hat{\theta}_i^*$ is the predicted angle for $s_i^*(k+1)$. The nonlinear optimization problem in (32) can be solved by “fmincon” (Optimization Toolbox) in Matlab®. Therefore, the control input for sensor $i$ is finally determined by

$$u_i(k) \leftarrow u_i(k) + \bar{u}_i(k) = s_i^*(k+1) - s_i(k)$$
The restriction of the turn rate can be implemented by limiting the heading change to $\min\{\varphi_{\max}, |\angle u_i(k) - \angle u_i(k-1)|\}$.
Remark 5. 
In terms of bearings-only target tracking accuracy, both the enveloping and semienveloping optimal sensor–target geometry configurations are considered equivalent. The selection of the configurations depends on the objectives of target tracking. When the sensors are expected to perform other operations, such as surveillance, recording, and so on, circumnavigation tracking is a more preferable approach, driving the sensors to achieve complete surrounding of a target on the CRO.

5.3. Collision Avoidance

A distance constraint is necessary to avoid collisions among the mobile sensors. Let $\rho_{\min}$ denote the minimum distance between two sensors. When $\|s_i(k) - s_j(k)\| < \rho_{\min}$, the collision avoidance algorithm is enabled, and we have
$$s_i(k+1) = \begin{bmatrix} x_i(k) + \|u_i(k)\|\cos(\angle u_i(k) \pm \delta) \\ y_i(k) + \|u_i(k)\|\sin(\angle u_i(k) \pm \delta) \end{bmatrix}, \qquad s_j(k+1) = \begin{bmatrix} x_j(k) + \|u_j(k)\|\cos(\angle u_j(k) \pm \delta) \\ y_j(k) + \|u_j(k)\|\sin(\angle u_j(k) \pm \delta) \end{bmatrix}$$

where $\delta$ is a small heading change for the sensor, and the sign of $\pm\delta$ is selected to enlarge the range between the two sensors.
To summarize, the sensor motion coordination algorithm is presented in Algorithm 2.
Algorithm 2 Sensor motion coordination for target tracking.
Input: The estimate of the target at time $k$, $\hat{x}_{k|k}$; the location of sensor $i$ at time $k$, $s_i(k)$;
Output: The estimate of the target at time $k+1$, $\hat{x}_{k+1|k+1}$; the location of sensor $i$ at time $k+1$, $s_i(k+1)$;
1: Receive $\hat{x}_{k+1|k}$ from the estimation center;
2: Compute $u_i(k)$ with (28), (31), (33), and (34);
3: Move to the new position $s_i(k+1)$;
4: Take new measurements $z_{k+1}$ of the target, and estimate the state of the target via the CKF;
5: return $\hat{x}_{k+1|k+1}$, $s_i(k+1)$.

6. Simulation Experiments

In this section, we illustrate the proposed sensor motion coordination algorithm with some simulation examples. By default, all variables used in the simulation were in SI units. As introduced in Section 3.1, we used a CKF method to estimate the state of the target. For comparison, the gradient descent method in [34] and the projection method in [35] were adopted to optimize the sensor motion under the same conditions.
To compare the tracking performance, we used the root mean square error (RMSE) of the position of the target. The RMSE of position at time k is defined as
$$\mathrm{RMSE}_p(k) = \sqrt{\frac{1}{N_c}\sum_{i=1}^{N_c}\left[\left(x_i(k) - \hat{x}_i(k)\right)^2 + \left(y_i(k) - \hat{y}_i(k)\right)^2\right]}$$

where $N_c$ is the total number of Monte Carlo runs, and $[x_i(k), y_i(k)]^T$ and $[\hat{x}_i(k), \hat{y}_i(k)]^T$ are the true and estimated positions at the $i$th Monte Carlo run, respectively.
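The RMSE definition above translates directly into code; a minimal sketch (illustrative naming) assuming the per-run true and estimated positions are stacked row-wise:

```python
import numpy as np

def rmse_position(true_xy, est_xy):
    """Position RMSE at one time step over N_c Monte Carlo runs.
    Both inputs have shape (N_c, 2): one [x, y] row per run."""
    err = np.asarray(true_xy, dtype=float) - np.asarray(est_xy, dtype=float)
    return float(np.sqrt(np.mean(np.sum(err**2, axis=1))))
```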
Scenario 1: We consider a problem of tracking a moving target using 5 mobile sensors in 2D space. The dynamic function of the target is described by the constant velocity model
$$x_{k+1} = \begin{bmatrix} 1 & T & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & T \\ 0 & 0 & 0 & 1 \end{bmatrix} x_k + w_k$$
where x k = [ x k , x ˙ k , y k , y ˙ k ] T and T = 0.2 s is the sampling time. The process noise w k is a zero-mean Gaussian with a covariance matrix Q k = diag [ q M q M ] , where
$$M = \begin{bmatrix} T^3/3 & T^2/2 \\ T^2/2 & T \end{bmatrix}$$
The scalar parameter q = 0.1 m / s 3 denotes the process noise intensity. The measurement taken by sensor i at time k is given by (1) with σ θ = 0.1 rad .
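As a sketch, the transition matrix and process noise covariance of this CV model can be assembled with a Kronecker product, exploiting their block-diagonal structure (variable names are illustrative):

```python
import numpy as np

T, q = 0.2, 0.1  # sampling time and process noise intensity from Scenario 1

# CV transition matrix for the state [x, xdot, y, ydot]^T:
# one [1 T; 0 1] block per coordinate.
F = np.kron(np.eye(2), np.array([[1.0, T],
                                 [0.0, 1.0]]))

# Process noise covariance Q_k = diag[qM, qM] with M as defined above.
M = np.array([[T**3 / 3, T**2 / 2],
              [T**2 / 2, T       ]])
Q = np.kron(np.eye(2), q * M)
```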
The true initial state of the target is x 0 = [ 50 m 3 m / s 50 m 1 m / s ] T , and its associated covariance is P 0 | 0 = diag [ 1000 m 2 100 m 2 / s 2 1000 m 2 100 m 2 / s 2 ] . The initial state estimate x 0 | 0 is randomly drawn from N ( x 0 , P 0 | 0 ) in each run. The initial positions of the 5 sensors are s 1 ( 0 ) = [ 100 m 120 m ] T , s 2 ( 0 ) = [ 150 m 50 m ] T , s 3 ( 0 ) = [ 100 m 60 m ] T , s 4 ( 0 ) = [ 100 m 120 m ] T , and s 5 ( 0 ) = [ 100 m 200 m ] T . The maximum velocity and turn rate are v max = 10 m / s and φ max = π 3 rad , respectively. The minimum sensor–target distance is d min = 50 m , and the minimum allowed distance among the sensors is ρ min = 10 m . The number of Monte Carlo runs is set to N c = 2000 .
There are two partition cases for n = 5 , namely Ξ = { 2 , 3 } and Ξ = { 5 } . We first compared the tracking performance and the distance traveled by the mobile sensors when they are steered to achieve these two optimal sensor–target geometries. Static sensors and mobile sensors whose waypoints were computed by the methods in [34,35] were also included in the comparison. Figure 7a,b show the trajectories of the 5 bearings-only sensors achieving the optimal geometry with partition cases Ξ = { 5 } and Ξ = { 2 , 3 } , respectively. As shown in Figure 7b, sensors 1, 3, and 5 are assigned to the three-sensor subset and the others to the two-sensor subset after matching the optimal geometry. The sensors eventually move with the target in the optimal geometry, as expected. Note that the optimal geometry is referenced to the estimated target position and therefore deviates from the true optimal sensor–target geometry; this discrepancy is unavoidable in practice since the true target position is unknown. Nevertheless, the proposed motion coordination method enhances the estimation performance, and the circular formation approaches the true optimal geometry, thus approaching the theoretically optimal estimation accuracy, as shown by the RMSEs of position compared in Figure 8. The tracking performance of the mobile sensors is clearly better than that of the static sensors. The proposed method significantly improves the tracking performance and exhibits a lower estimation error than the method in [34], while the performance of the method in [35] is close to that of the proposed method in this scenario. The difference in tracking performance between the two optimal geometries with Ξ = { 2 , 3 } and Ξ = { 5 } is negligible.
Additionally, the total distances traveled by all mobile sensors to achieve the optimal geometry with Ξ = { 2 , 3 } and Ξ = { 5 } are 1488.9 m and 1548.6 m , respectively. The shorter distance for Ξ = { 2 , 3 } is attributed to the fact that, when the sensors reach the CRO, the sensor–target geometry is already closer to the optimal geometry with Ξ = { 2 , 3 } , whose κ is smaller.
Scenario 2: We consider tracking a maneuvering target using 4 mobile sensors in 2D space. The target dynamics are described by the coordinated-turn model
$$x_{k+1} = \begin{bmatrix} 1 & \frac{\sin\Omega_k T}{\Omega_k} & 0 & -\frac{1-\cos\Omega_k T}{\Omega_k} & 0 \\ 0 & \cos\Omega_k T & 0 & -\sin\Omega_k T & 0 \\ 0 & \frac{1-\cos\Omega_k T}{\Omega_k} & 1 & \frac{\sin\Omega_k T}{\Omega_k} & 0 \\ 0 & \sin\Omega_k T & 0 & \cos\Omega_k T & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} x_k + w_k$$
where x k = [ x k , x ˙ k , y k , y ˙ k , Ω k ] T and T = 1 s . The process noise w k is a zero-mean Gaussian with a covariance matrix Q k = diag [ q 1 Γ q 1 Γ q 2 T ] , where
$$\Gamma = \begin{bmatrix} T^3/3 & T^2/2 \\ T^2/2 & T \end{bmatrix}$$
and q 1 = 0.1 m / s 3 and q 2 = 1.75 × 10 − 4 rad / s 2 denote the process noise intensities. The true initial state of the target is x 0 = [ 0 m 20 m 0 m 0 m 0.05 rad / s ] T , and its associated covariance is P 0 | 0 = diag [ 1000 m 2 100 m 2 / s 2 1000 m 2 100 m 2 / s 2 10 − 4 rad 2 / s 2 ] . The initial positions of the 4 sensors are randomly deployed. The remaining parameters are: σ θ = 0.05 rad , d min = 100 m , ρ min = 20 m , v max = 50 m / s , φ max = π 3 rad and N c = 2000 .
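The coordinated-turn transition matrix of Scenario 2 can be sketched as a function of the turn rate (the function name is illustrative; the matrix follows the model given above):

```python
import numpy as np

def ct_transition(omega, T=1.0):
    """Coordinated-turn transition matrix for the state
    [x, xdot, y, ydot, Omega]^T with turn rate omega (rad/s)."""
    s, c = np.sin(omega * T), np.cos(omega * T)
    return np.array([
        [1.0, s / omega,       0.0, -(1.0 - c) / omega, 0.0],
        [0.0, c,               0.0, -s,                 0.0],
        [0.0, (1.0 - c) / omega, 1.0, s / omega,        0.0],
        [0.0, s,               0.0, c,                  0.0],
        [0.0, 0.0,             0.0, 0.0,                1.0],
    ])
```

As a sanity check, for a vanishing turn rate the matrix reduces to the constant-velocity model (a guard against omega = 0 would be needed in practice).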
There are two partition cases for n = 4 , namely Ξ = { 2 , 2 } and Ξ = { 4 } . However, the optimal geometry with Ξ = { 4 } can be obtained from the optimal geometry with Ξ = { 2 , 2 } by rotating the sensors in one subset as a whole by a proper angle. Thus, the partition case Ξ = { 2 , 2 } is selected for n = 4 in Scenario 2. Figure 9 shows the trajectories of the 4 bearings-only sensors tracking the target. In this run, sensors 1 and 3 are assigned to one subset and the others to another subset after matching the optimal geometry. Figure 10 shows the compared RMSEs of position. The static sensors yield the poorest tracking performance, which continues to degrade as their distance from the target increases. The proposed method improves the tracking performance and exhibits a lower estimation error than the methods in [34,35] for maneuvering-target tracking.

7. Conclusions

In this study, optimal sensor–target geometry and a motion coordination strategy were proposed for a target tracking system using mobile bearings-only sensors in 2D space. We discussed the suboptimality of approaching the target for bearings-only sensors to improve tracking performance. A general optimal sensor–target geometry was derived with uniform sensor–target distance using D-optimality for arbitrary n ( n 2 ) bearings-only sensors. A motion coordination algorithm was developed based on the previous optimality analysis to achieve the optimal target tracking performance efficiently. In future work, we will investigate a distributed optimization method for mobile sensors and its extension to multitarget tracking.

Author Contributions

Conceptualization, S.W.; methodology, S.W.; software, S.W.; validation, S.W. and Y.L.; formal analysis, S.W. and Y.L.; investigation, S.W. and Y.L.; resources, Y.L.; data curation, S.W.; writing—original draft preparation, S.W.; writing—review and editing, S.W., Y.L. and G.Q.; supervision, Y.L.; funding acquisition, Y.L. and A.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (62171223 and 61871221).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Farina, A. Target tracking with bearings–only measurements. Signal Process. 1999, 78, 61–78. [Google Scholar] [CrossRef]
  2. Bar-Shalom, Y.; Li, X.R.; Kirubarajan, T. Estimation with Applications to Tracking and Navigation: Theory Algorithms and Software; John Wiley & Sons: Hoboken, NJ, USA, 2004. [Google Scholar]
  3. Shi, Y.; Farina, A.; Song, T.L.; Peng, D.; Guo, Y. Distributed fusion in harsh environments using multiple bearings-only sensors with out-of-sequence-refined measurements. Aerosp. Sci. Technol. 2021, 117, 106950. [Google Scholar] [CrossRef]
  4. Wei, Z.; Duan, Z.; Han, Y.; Mallick, M. A New Coarse Gating Strategy Driven Multidimensional Assignment for Two-Stage MHT of Bearings-Only Multisensor-Multitarget Tracking. Sensors 2022, 22, 1802. [Google Scholar] [CrossRef]
  5. Ding, X.; Wang, J.; Wang, C.; Xia, K.; Xin, M. Cooperative Estimation and Guidance Strategy Using Bearings-Only Measurements. J. Guid. Control Dyn. 2023, 46, 761–769. [Google Scholar] [CrossRef]
  6. Jiang, H.; Wang, X.; Deng, Y.; Zhang, Y. Event-Triggered Distributed Bias-Compensated Pseudolinear Information Filter for Bearings-Only Tracking Under Measurement Uncertainty. IEEE Sens. J. 2023, 23, 8504–8513. [Google Scholar] [CrossRef]
  7. Bar-Shalom, Y. Multitarget-Multisensor Tracking: Applications and Advances; Artech House: Norwood, MA, USA, 1993. [Google Scholar]
  8. Aidala, V.; Nardone, S. Biased Estimation Properties of the Pseudolinear Tracking Filter. IEEE Trans. Aerosp. Electron. Syst. 1982, 18, 432–441. [Google Scholar] [CrossRef]
  9. Bu, S.; Meng, A.; Zhou, G. A New Pseudolinear Filter for Bearings-Only Tracking without Requirement of Bias Compensation. Sensors 2021, 21, 5444. [Google Scholar] [CrossRef]
  10. Doğançay, K. 3D Pseudolinear Target Motion Analysis From Angle Measurements. IEEE Trans. Signal Process. 2015, 63, 1570–1580. [Google Scholar] [CrossRef]
  11. Julier, S.J.; Uhlmann, J.K. Unscented filtering and nonlinear estimation. Proc. IEEE 2004, 92, 401–422. [Google Scholar] [CrossRef]
  12. Arasaratnam, I.; Haykin, S. Cubature Kalman filters. IEEE Trans. Autom. Control 2009, 54, 1254–1269. [Google Scholar] [CrossRef] [Green Version]
  13. Ali, W.; Li, Y.; Chen, Z.; Raja, M.A.Z.; Ahmed, N.; Chen, X. Application of Spherical-Radial Cubature Bayesian Filtering and Smoothing in Bearings Only Passive Target Tracking. Entropy 2019, 21, 1088. [Google Scholar] [CrossRef] [Green Version]
  14. Liu, Z.; Ji, L.; Yang, F.; Qu, X.; Yang, Z.; Qin, D. Cubature Information Gaussian Mixture Probability Hypothesis Density Approach for Multi Extended Target Tracking. IEEE Access 2019, 7, 103678–103692. [Google Scholar] [CrossRef]
  15. Lv, Y.W.; Yang, G.H. Centralized and distributed adaptive cubature information filters for multi-sensor systems with unknown probability of measurement loss. Inf. Sci. 2023, 630, 173–189. [Google Scholar] [CrossRef]
  16. Arulampalam, M.S.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 2002, 50, 174–188. [Google Scholar] [CrossRef] [Green Version]
  17. Dunik, J.; Straka, O.; Simandl, M.; Blasch, E. Random-point-based filters: Analysis and comparison in target tracking. IEEE Trans. Aerosp. Electron. Syst. 2015, 51, 1403–1421. [Google Scholar] [CrossRef]
  18. Bishop, A.N.; Fidan, B.; Anderson, B.D.; Doğançay, K.; Pathirana, P.N. Optimality analysis of sensor–target localization geometries. Automatica 2010, 46, 479–492. [Google Scholar] [CrossRef]
  19. Yang, C.; Kaplan, L.; Blasch, E. Performance Measures of Covariance and Information Matrices in Resource Management for Target State Estimation. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 2594–2613. [Google Scholar] [CrossRef]
  20. Zhao, S.; Chen, B.M.; Lee, T.H. Optimal sensor placement for target localisation and tracking in 2D and 3D. Int. J. Control 2013, 86, 1687–1704. [Google Scholar] [CrossRef] [Green Version]
  21. Moreno-Salinas, D.; Pascoal, A.; Aranda, J. Sensor Networks for Optimal Target Localization with Bearings-Only Measurements in Constrained Three-Dimensional Scenarios. Sensors 2013, 13, 10386–10417. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  22. Ronghua, Z.; Hemin, S.; Hao, L.; Weilin, L. TDOA and track optimization of UAV swarm based on D-optimality. J. Syst. Eng. Electron. 2020, 31, 1140–1151. [Google Scholar] [CrossRef]
  23. Ucinski, D. Optimal Measurement Methods for Distributed Parameter System Identification; CRC Press: Boca Raton, FL, USA, 2004. [Google Scholar]
  24. Xu, S.; Wu, L.; Doğançay, K.; Alaee-Kerahroodi, M. A Hybrid Approach to Optimal TOA-Sensor Placement With Fixed Shared Sensors for Simultaneous Multi-Target Localization. IEEE Trans. Signal Process. 2022, 70, 1197–1212. [Google Scholar] [CrossRef]
  25. Martínez, S.; Bullo, F. Optimal sensor placement and motion coordination for target tracking. Automatica 2006, 42, 661–668. [Google Scholar] [CrossRef]
  26. Yang, C.; Kaplan, L.; Blasch, E.; Bakich, M. Optimal Placement of Heterogeneous Sensors for Targets with Gaussian Priors. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 1637–1653. [Google Scholar] [CrossRef]
  27. Xu, S.; Doğançay, K. Optimal sensor placement for 3-D angle-of-arrival target localization. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 1196–1211. [Google Scholar] [CrossRef]
  28. Sharp, I.; Yu, K.; Guo, Y.J. GDOP analysis for positioning system design. IEEE Trans. Veh. Technol. 2009, 58, 3371–3382. [Google Scholar] [CrossRef]
  29. Zhong, Y.; Wu, X.Y.; Huang, S.C. Geometric dilution of precision for bearing-only passive location in three-dimensional space. Electron. Lett. 2015, 51, 518–519. [Google Scholar] [CrossRef]
  30. Li, Y.; Qi, G.; Sheng, A. Optimal deployment of vehicles with circular formation for bearings-only multi-target localization. Automatica 2019, 105, 347–355. [Google Scholar] [CrossRef]
  31. Chung, T.H.; Burdick, J.W.; Murray, R.M. A decentralized motion coordination strategy for dynamic target tracking. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA 2006), Orlando, FL, USA, 15–19 May 2006; pp. 2416–2422. [Google Scholar]
  32. Doğançay, K. Single-and multi-platform constrained sensor path optimization for angle-of-arrival target tracking. In Proceedings of the 2010 18th European Signal Processing Conference, Aalborg, Denmark, 23–27 August 2010; pp. 835–839. [Google Scholar]
  33. Doğançay, K. UAV Path Planning for Passive Emitter Localization. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 1150–1166. [Google Scholar] [CrossRef]
  34. Xu, S.; Doğançay, K.; Hmam, H. Distributed pseudolinear estimation and UAV path optimization for 3D AOA target tracking. Signal Process. 2017, 133, 64–78. [Google Scholar] [CrossRef]
  35. Doğançay, K. Optimal Geometries for AOA Localization in the Bayesian Sense. Sensors 2022, 22, 9802. [Google Scholar] [CrossRef]
  36. Zhong, Y.; Wu, X.; Huang, S.; Li, C.; Wu, J. Optimality Analysis of sensor–target Geometries for Bearing-Only Passive Localization in Three Dimensional Space. Chin. J. Electron. 2016, 25, 391–396. [Google Scholar] [CrossRef]
  37. Andrews, G.E. Number Theory; Courier Corporation: North Chelmsford, MA, USA, 1994. [Google Scholar]
Figure 1. Target tracking geometry for n > 2 bearings-only sensors, where i > 2 .
Figure 2. Optimal sensor motion for target tracking.
Figure 3. Sensor–target geometry.
Figure 4. Optimal sensor–target geometries for n = 4 . (a) S 1 = { 1 , 2 } , S 2 = { 3 , 4 } . (b) S 1 = { 1 , 3 } , S 2 = { 2 , 4 } .
Figure 5. Optimal sensor–target geometry for n = 5 . (a) S 1 = { 1 , 3 } , S 2 = { 2 , 4 , 5 } . (b) S 1 = { 1 , 3 } , S 2 = { 2 , 4 , 5 } . (c) S 1 = { 1 , 2 , 3 , 4 , 5 } . (d) S 1 = { 1 , 2 , 3 , 4 , 5 } .
Figure 6. Sensor motion coordination to achieve optimal geometry.
Figure 7. Sensor trajectory for target tracking in Scenario 1. (a) The optimal geometry with partition case Ξ = { 5 } . (b) The optimal geometry with partition case Ξ = { 2 , 3 } .
Figure 8. Comparison of RMSE p for target tracking in Scenario 1 [34,35].
Figure 9. Sensor trajectory for target tracking in Scenario 2.
Figure 10. Comparison of RMSE p for target tracking in Scenario 2 [34,35].

Share and Cite

Wang, S.; Li, Y.; Qi, G.; Sheng, A. Optimal Geometry and Motion Coordination for Multisensor Target Tracking with Bearings-Only Measurements. Sensors 2023, 23, 6408. https://doi.org/10.3390/s23146408
