4.1. Geometric Localization Model
Anchors for localization are distributed in an indoor space. The UWB sensors on the three anchors are adjusted to the same height
${H}_{Tripod}$ using tripods. In this horizontal plane, Anchor (A) is positioned equidistant from Anchor (B) and Anchor (C). As depicted in
Figure 7, the geometry model is based on these three anchors. The tag location is denoted as point
$\mathbf{P}$. The locations of the three anchors are denoted as points
$\mathbf{A}$,
$\mathbf{B}$,
$\mathbf{C}$. The point
$\mathbf{O}$ is the midpoint of line segment
$BC$. Therefore, the localization system is defined with the x-axis in the direction of
$\overrightarrow{\mathbf{OA}}$, the y-axis in the direction of
$\overrightarrow{\mathbf{BC}}$, and the z-axis facing vertically upwards. The lines
$AP$,
$BP$,
$CP$ represent the true distances from the tag to each anchor. The projection of the tag point
$\mathbf{P}$ in the plane is the point
$\mathbf{K}$, and the coordinates
$({P}_{x},{P}_{y},{P}_{z})$ of point
$\mathbf{P}$ need to be computed.
The UWB sensors at each anchor point measure the distances (
${L}_{UW{B}_{A}\left(t\right)}$,
${L}_{UW{B}_{B}\left(t\right)}$,
${L}_{UW{B}_{C}\left(t\right)}$) to the tag point
$\mathbf{P}$ in real time. At a given moment, these measurements are denoted as
$\widehat{AP}$,
$\widehat{BP}$, and
$\widehat{CP}$, respectively. To calculate the tag’s coordinates
$({P}_{x\left(t\right)},{P}_{y\left(t\right)})$ along the x-axis and y-axis, it is necessary to project the UWB measurements in the real 3D environment onto the anchors’ plane. Equation (
9) employs the Pythagorean theorem to calculate the projected distance values from the UWB sensors, using the tag’s height estimated by the dual BMP sensors to obtain the segment $PK$ in the geometric model. The lengths obtained are
${\overline{L}}_{UW{B}_{A}\left(t\right)}$,
${\overline{L}}_{UW{B}_{B}\left(t\right)}$,
${\overline{L}}_{UW{B}_{C}\left(t\right)}$, corresponding to line segments
$\widehat{AK}$,
$\widehat{BK}$, and
$\widehat{CK}$, respectively:
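A plausible reconstruction of Equation (9), assuming the vertical offset between the tag and the anchors’ plane is $PK = {H}_{Tripod} - {H}_{dual\left(t\right)}$:

$$ {\overline{L}}_{UW{B}_{i}\left(t\right)} = \sqrt{ {L}_{UW{B}_{i}\left(t\right)}^{2} - \left( {H}_{Tripod} - {H}_{dual\left(t\right)} \right)^{2} }, \qquad i \in \{A, B, C\} $$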
Within the plane of the anchors, circles are drawn with each anchor as the center and the corresponding measured projected distance as the radius. For instance, a circle is drawn with Anchor C as the center and
$\widehat{CK}$ as the radius. Ideally, the three circles should intersect at point
$\mathbf{K}$. However, since the lengths
$\widehat{AP}$,
$\widehat{BP}$, and
$\widehat{CP}$ estimated by the UWB sensor are approximations, the circles might intersect at multiple points as depicted in
Figure 8. The three intersections
${\overline{\mathbf{P}}}_{AB}$,
${\overline{\mathbf{P}}}_{AC}$,
${\overline{\mathbf{P}}}_{BC}$ that are closest to each other are selected to form a triangle, and its centroid is used as the estimated point
$\mathbf{K}$. To determine the centroid’s position, each pair of circles needs to provide one vertex of the final triangle.
The geometric calculation starts by determining whether these circles intersect, based on the locations of points $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{C}$ and the lengths of $\widehat{AP}$, $\widehat{BP}$, and $\widehat{CP}$. When each pair of circles has two intersection points, a total of six intersection points is generated. Additionally, due to the impact of UWB data noise, the measured results for $\widehat{AP}$, $\widehat{BP}$, and $\widehat{CP}$ may cause some circle pairs not to intersect at all. In that situation, the vertex contributed by the affected pair of circles needs to be redefined. Therefore, this study addresses the potential geometric scenarios separately. As an example, the intersection of circles A and C is divided into two cases. The results from the other two circle pairs are then combined to find the triangle’s centroid.
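The pairwise intersection test that drives this case split can be sketched in code. The following is an illustrative implementation, not the authors’ code; the function name `circle_intersections` and its argument layout are assumptions:

```python
import math

def circle_intersections(c1, r1, c2, r2):
    """Return the intersection points of two circles, or [] if none.

    c1, c2: (x, y) centers in the anchors' plane; r1, r2: radii
    (the projected UWB distances).
    """
    x1, y1 = c1
    x2, y2 = c2
    d = math.hypot(x2 - x1, y2 - y1)
    # No intersection: circles too far apart, one contains the other,
    # or the centers coincide (Case 2 in the text handles these).
    if d > r1 + r2 or d < abs(r1 - r2) or d == 0:
        return []
    # Distance from c1 to the foot G of the common chord on the center line.
    a = (r1**2 - r2**2 + d**2) / (2 * d)
    h = math.sqrt(max(r1**2 - a**2, 0.0))
    gx = x1 + a * (x2 - x1) / d
    gy = y1 + a * (y2 - y1) / d
    # The two intersection points, symmetric about the center line.
    p1 = (gx + h * (y2 - y1) / d, gy - h * (x2 - x1) / d)
    p2 = (gx - h * (y2 - y1) / d, gy + h * (x2 - x1) / d)
    return [p1, p2]
```

When the circles are tangent ($d = r_1 + r_2$ or $d = |r_1 - r_2|$), the two returned points coincide at the tangent point.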
Case 1: Two circles intersect at two points.
When two circles intersect, there are two intersection points. As shown in
Figure 9, the intersection points
${\mathbf{P}}_{\mathbf{1}}$ and
${\mathbf{P}}_{\mathbf{2}}$ to be found are symmetrical about the line segment
$AC$. The area of
$\triangle AC{P}_{1}$ is calculated using Heron’s formula, and the lengths of
${P}_{1}G$ and
$CG$ are determined as shown in Equations (
10)–(12).
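A plausible reconstruction of Equations (10)–(12), writing ${r}_{A}$ and ${r}_{C}$ for the projected radii of circles A and C and $d = AC$ (these symbol names are assumptions):

$$ s = \frac{{r}_{A} + {r}_{C} + d}{2}, \qquad {S}_{\triangle AC{P}_{1}} = \sqrt{ s\left(s - {r}_{A}\right)\left(s - {r}_{C}\right)\left(s - d\right) } $$
$$ {P}_{1}G = \frac{2\,{S}_{\triangle AC{P}_{1}}}{d} $$
$$ CG = \sqrt{ {r}_{C}^{2} - {\left({P}_{1}G\right)}^{2} } $$

Here $\mathbf{G}$ is the foot of the perpendicular from ${\mathbf{P}}_{\mathbf{1}}$ onto $AC$; the second equation follows from expressing the triangle’s area with $AC$ as the base.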
Then, the coordinates of point $\mathbf{G}$ can be obtained by the principle of similar triangles, i.e., by calculating the ratio of the lengths of $CG$ to $AC$. Therefore, based on the point $\mathbf{G}$, the coordinates of the points ${\mathbf{P}}_{\mathbf{1}}$ and ${\mathbf{P}}_{\mathbf{2}}$ in this plane are solved from the perpendicular relation and the length of ${P}_{1}G$. Then, the point ${\mathbf{P}}_{\mathbf{2}}$ inside circle B is chosen as the coordinates of ${\overline{\mathbf{P}}}_{AC}$. Using the same method, ${\overline{\mathbf{P}}}_{AB}$ and ${\overline{\mathbf{P}}}_{BC}$ are determined through geometric calculation.
Case 2: Two circles are tangent or non-intersecting.
As in
Figure 10, the inclusion relationship between circle
A and circle
C is determined by the length relationship between
$\widehat{AP}$,
$\widehat{CP}$, and
$AC$. When two circles are tangent, the shared tangent point can be used as a vertex (${\overline{\mathbf{P}}}_{AC}$) of the required triangle. When two circles have no intersection point, either the circles are disjoint without containing each other, or one circle contains the other. In these cases, the point
${\overline{\mathbf{P}}}_{AC}$ is determined by the proportion of
$\widehat{AP}$ and
$\widehat{CP}$.
The cases where the two circles are tangent or non-intersecting can be calculated as Equations (
13) and (14):
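A plausible reconstruction of Equations (13) and (14) as a piecewise interpolation along line $AC$; the interpolation weights in the two containment cases are assumptions, chosen so that each case reduces to the tangent point when its inequality becomes an equality:

$$ t = \begin{cases} \dfrac{\widehat{AP}}{\widehat{AP} + \widehat{CP}}, & \widehat{AP} + \widehat{CP} \le AC \\[6pt] \dfrac{\widehat{AP} + AC + \widehat{CP}}{2\,AC}, & \widehat{AP} \ge AC + \widehat{CP} \\[6pt] \dfrac{AC - \widehat{AP} - \widehat{CP}}{2\,AC}, & \widehat{CP} \ge AC + \widehat{AP} \end{cases} $$
$$ \left( {P}_{A{C}_{x}},\, {P}_{A{C}_{y}} \right) = \left( {A}_{x} + t\left({C}_{x} - {A}_{x}\right),\; {A}_{y} + t\left({C}_{y} - {A}_{y}\right) \right) $$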
where
$({A}_{x},{A}_{y})$ represents the coordinate of point
$\mathbf{A}$.
$({C}_{x},{C}_{y})$ represents the coordinate of point
$\mathbf{C}$, and
$({P}_{A{C}_{x}},{P}_{A{C}_{y}})$ is the coordinate of point
${\overline{\mathbf{P}}}_{AC}$. The three scenarios in the piecewise function correspond to the following: the two circles do not encompass each other; circle
A contains circle
C; and circle
C contains circle
A. Here, when the judgment condition in the equation holds with equality, the two circles are tangent.
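As an illustrative sketch (not the authors’ code), the tangent/non-intersecting fallback for a circle pair can be implemented as follows; the weights used in the two containment branches are assumptions, picked so each branch yields the tangent point in the limiting case:

```python
import math

def fallback_vertex(A, C, rA, rC):
    """Vertex for a pair of tangent or non-intersecting circles.

    A, C: (x, y) anchor coordinates; rA, rC: the measured radii.
    Returns a point on line AC between the two circle boundaries.
    """
    ax, ay = A
    cx, cy = C
    d = math.hypot(cx - ax, cy - ay)
    if d >= rA + rC:
        # Disjoint or externally tangent: split AC in proportion to
        # the two radii (equals the tangent point when d == rA + rC).
        t = rA / (rA + rC)
    elif rA >= d + rC:
        # Circle A contains circle C: midpoint of the radial gap
        # beyond C (assumed interpolation).
        t = (rA + d + rC) / (2 * d)
    else:
        # Circle C contains circle A: midpoint of the gap behind A.
        t = (d - rA - rC) / (2 * d)
    return (ax + t * (cx - ax), ay + t * (cy - ay))
```

For example, two unit circles centered at (0, 0) and (4, 0) yield the midpoint (2, 0) of the gap between them.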
Similarly, the results of geometrically resolving circles
A and
B and circles
B and
C, respectively, result in a total of three intersections:
${\overline{\mathbf{P}}}_{AB}$,
${\overline{\mathbf{P}}}_{AC}$, and
${\overline{\mathbf{P}}}_{BC}$.
Figure 11 shows a possible scenario where circles
A and
C do not intersect but circle
B intersects circles
A and
C, respectively. In addition, for any other possible scenarios, it is possible to solve for three sets of two circles based on Case 1 and Case 2, respectively, thus forming a triangle that can be used to solve for the centroid
$\mathbf{K}$.
Finally, by the centroid method, the location
$({P}_{x},{P}_{y})$ of the point
$\mathbf{P}$ projected on the anchors’ plane is estimated. Then, the tag’s height
${H}_{dual\left(t\right)}$ estimated by the dual BMP sensors is used again. The tag’s 3D coordinates are denoted as:
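A plausible form of Equation (15), reconstructed from the surrounding text, combining the centroid of the three vertices with the barometric height:

$$ {P}_{x\left(t\right)} = \frac{1}{3}\left( {\overline{P}}_{A{B}_{x}} + {\overline{P}}_{A{C}_{x}} + {\overline{P}}_{B{C}_{x}} \right), \qquad {P}_{y\left(t\right)} = \frac{1}{3}\left( {\overline{P}}_{A{B}_{y}} + {\overline{P}}_{A{C}_{y}} + {\overline{P}}_{B{C}_{y}} \right), \qquad {P}_{z\left(t\right)} = {H}_{dual\left(t\right)} $$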
4.2. Optimization of Location Estimation Based on Kalman Filtering
Due to the fluctuations in the data measured by the sensors, the tag’s location calculated using the geometric model may deviate from the actual value. The Kalman filtering algorithm is an optimization estimation algorithm that is very effective in dealing with linear dynamic systems containing Gaussian noise. To optimize the location estimation, this study employs the Kalman filtering method to predict and update the optimal coordinates of the tag.
First, the timing information of the localization system’s running loop can be used to estimate the tag’s velocity
${\mathbf{v}}_{(t-1)}$ at the last moment
$(t-1)$:
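A plausible form of this velocity estimate, consistent with the definitions that follow:

$$ {\mathbf{v}}_{(t-1)} = \frac{1}{\Delta t} \left[ \Delta {P}_{x(t-1)},\; \Delta {P}_{y(t-1)},\; \Delta {P}_{z(t-1)} \right]' $$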
where the tag’s velocity
${\mathbf{v}}_{(t-1)}$ in the three dimensions consists of the components [
${v}_{x(t-1)}$,
${v}_{y(t-1)}$,
${v}_{z(t-1)}$]′. The tag’s height
${P}_{z(t-1)}$ is equal to the height estimated by the dual BMP sensors
${H}_{dual(t-1)}$.
$\Delta t$ denotes the elapsed time between loops, with the change in locations represented by [
$\Delta {P}_{x(t-1)}$,
$\Delta {P}_{y(t-1)}$,
$\Delta {P}_{z(t-1)}$]′.
Assume that the tag’s velocity is approximately constant over a very short period of time. Then, the predicted location
${\mathcal{X}}_{\left(t\right|t-1)}$ at the current moment is as follows:
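A plausible reconstruction of this prediction step, with $A = I$ for the constant-velocity model and the control input built from the velocity estimate:

$$ {\mathcal{X}}_{(t|t-1)} = A\,{\mathcal{X}}_{(t-1|t-1)} + {\mathcal{U}}_{(t)}, \qquad {\mathcal{U}}_{(t)} = {\mathbf{v}}_{(t-1)}\,\Delta t $$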
where the subscript
$\left(t\right|t-1)$ denotes the estimation of the
$\left(t\right)$ moment based on the
$(t-1)$ moment. The
${\mathcal{X}}_{(t-1|t-1)}$ is the optimal solution for the location estimated at the previous moment, [
${P}_{x(t-1)}$,
${P}_{y(t-1)}$,
${P}_{z(t-1)}$]′. And the control input
${\mathcal{U}}_{\left(t\right)}$ is the predicted location increment.
Then, the update part of the Kalman filter for the localization data is as follows:
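A standard Kalman predict–update form consistent with the symbols defined below (a reconstruction, not the original equations verbatim):

$$ {\mathcal{P}}_{(t|t-1)} = A\,{\mathcal{P}}_{(t-1|t-1)}\,A' + Q $$
$$ {\mathcal{K}}_{(t)} = {\mathcal{P}}_{(t|t-1)}\,H' \left( H\,{\mathcal{P}}_{(t|t-1)}\,H' + R \right)^{-1} $$
$$ {\widehat{\mathcal{X}}}_{(t|t)} = {\mathcal{X}}_{(t|t-1)} + {\mathcal{K}}_{(t)} \left( {\mathcal{Z}}_{(t)} - H\,{\mathcal{X}}_{(t|t-1)} \right) $$
$$ {\mathcal{P}}_{(t|t)} = \left( I - {\mathcal{K}}_{(t)}\,H \right) {\mathcal{P}}_{(t|t-1)} $$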
Herein,
$\mathcal{P}$ represents the prediction error covariance,
${\mathcal{K}}_{\left(t\right)}$ is the Kalman gain, and
${\widehat{\mathcal{X}}}_{\left(t\right|t)}$ is the state estimation.
A is the state transition matrix,
H is the observation matrix, and
I is a
$3\times 3$ identity matrix.
${\mathcal{Z}}_{\left(t\right)}$ is the actual measurement at the current moment, and its data are the tag’s coordinate data calculated by the geometric localization model in Equation (
15). Referring to the tests of UWB and BMP sensors in
Section 3, the process noise covariance matrix
Q in the Kalman filter is set to
$diag\left([1\times {10}^{-4},1\times {10}^{-4},1\times {10}^{-3}]\right)$, and the measurement noise covariance matrix
R is set to
$diag\left([4\times {10}^{-4},4\times {10}^{-4},5\times {10}^{-3}]\right)$. Finally, the positioning system’s output at the current moment is
${\widehat{\mathcal{X}}}_{\left(t\right|t)}$, i.e., the estimated location of the tag [
$Ta{g}_{x\left(t\right)}$,
$Ta{g}_{y\left(t\right)}$,
$Ta{g}_{z\left(t\right)}$]′.
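Because $A = H = I$ and $Q$, $R$ are diagonal, the 3D filter decouples into three independent scalar filters, one per axis. A minimal sketch under that observation (illustrative only; function and variable names are assumptions):

```python
# Noise settings from the text: process noise Q and measurement noise R
# per axis (x, y, z), derived from the Section 3 sensor tests.
Q = [1e-4, 1e-4, 1e-3]
R = [4e-4, 4e-4, 5e-3]

def kalman_step(x_prev, p_prev, u, z, q, r):
    """One predict/update cycle for a single axis.

    x_prev: previous optimal estimate; p_prev: error covariance;
    u: predicted increment (velocity * dt); z: geometric-model measurement.
    """
    # Predict (A = 1 for a scalar axis).
    x_pred = x_prev + u
    p_pred = p_prev + q
    # Update (H = 1).
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)  # state estimate
    p_new = (1.0 - k) * p_pred         # updated covariance
    return x_new, p_new

def kalman_3d(state, cov, increment, measurement):
    """Apply one filter cycle to the tag's (x, y, z) estimate."""
    out = [kalman_step(state[i], cov[i], increment[i], measurement[i],
                       Q[i], R[i]) for i in range(3)]
    return [s for s, _ in out], [p for _, p in out]
```

With the small noise values above, the filter weights the geometric-model measurement heavily, so each estimate moves most of the way toward the measured coordinate on every cycle.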
In addition, at the start of the localization system’s operation, the Kalman filter cannot function accurately immediately due to the lack of initial measurement data for prediction. Therefore, at the first moment, the filter does not perform computations, and its output is directly set to the input positioning data. The initial error covariance matrix is set as the diagonal matrix $diag\left([1,1,1]\right)$, and from the second moment onwards, the filter operates normally.
This section introduced the proposed indoor 3D localization method utilizing sensor fusion. UWB sensors at three anchors calculated distances to the tag, while dual BMP sensors provided height estimates, enabling the mapping of UWB sensors’ range measurements onto a 2D plane. The geometric localization model, based on these measurements, integrated various relationships and compensated for errors. The tag’s location on the anchor’s plane was determined using the centroid method, and Kalman filtering was applied to enhance the accuracy of the 3D location estimates. The scheme will be validated experimentally in the following section.