3D Indoor Position Estimation Based on a UDU Factorization Extended Kalman Filter Structure Using Beacon Distance and Inertial Measurement Unit Data

The development of the GPS (Global Positioning System) and related advances have made it possible to conceive of an outdoor positioning system with great accuracy; however, for indoor positioning, more efficient, reliable, and cost-effective technology is required. A variety of techniques are utilized for indoor positioning, such as those that are Wi-Fi-, Bluetooth-, infrared-, ultrasound-, magnetic-, and visual-marker-based. This work aims to design an accurate position estimation algorithm by combining raw distance data from ultrasonic sensors (Marvelmind Beacon) and acceleration data from an inertial measurement unit (IMU), utilizing the extended Kalman filter (EKF) with UDU factorization (expressed as the product of a triangular, a diagonal, and the transpose of the triangular matrix) approach. Initially, a position estimate is calculated through the use of a recursive least squares (RLS) method with a trilateration algorithm, utilizing raw distance data. This solution is then combined with acceleration data collected from the Marvelmind sensor, resulting in a position solution akin to that of the GPS. The data were initially collected via the ROS (Robot Operating System) platform and then via the Pixhawk development card, with tests conducted using a combination of four fixed and one mobile Marvelmind sensor, as well as three fixed and one mobile sensor. The designed algorithm is found to produce accurate results for position estimation, and is subsequently implemented on an embedded development card (Pixhawk). The tests showed that the designed algorithm gives accurate results with centimeter precision. Furthermore, the test results have shown that the UDU-EKF structure integrated into the embedded system is faster than the classical EKF.


Introduction
With the advancement of technology in recent years, positioning systems have advanced significantly; however, these systems are typically used outdoors, where GPS signals are readily available. For indoor environments, GPS signals are insufficient for accurate positioning because their quality is not at the desired level. This situation creates problems for indoor unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs). Therefore, studies have begun on position estimation for indoor environments. There are many different approaches related to indoor positioning, such as those that are motion-capture-based, lidar-based, Wi-Fi-based, Bluetooth-based, ultra-wideband (UWB)-based, and IMU-based [1][2][3]. Motion-capture-based systems like VICON and OptiTrack employ multiple high-speed cameras to determine the position of an object; however, these systems have a drawback in that they require a complex setup and challenging calibration. Lidar-based positioning approaches, such as Gmapping, Hector, and Cartographer [4][5][6], are also an option, but they only provide the two-dimensional position due to weight limitations, especially for UAVs.
Wi-Fi- and Bluetooth-based positioning techniques are widely used, with an accuracy of a few meters [7]; however, they have constraints regarding the required number of access points, costs, and energy consumption. Bluetooth Low Energy (BLE) and Wi-Fi operate on the same frequency. BLE is intended for short-range and energy-efficient communication using brief messages [8]. BLE-based location is usually achieved by placing proximity beacons at specific locations. Receivers determine their position by measuring the RSSI (received signal strength indicator, which reflects the receiver's distance from the sender) of the closest beacons.
UWB positioning is lightweight, has a straightforward design, provides stable positioning results, and can achieve a centimeter level of accuracy; however, relying solely on UWB does not fulfill the demands of high-precision indoor operations. Although an IMU is a popular sensor for determining orientation, its position estimates can accumulate errors over time due to drift [9,10].
Therefore, an algorithm is designed to perform more accurate indoor positioning estimation. This paper presents an algorithm that uses the EKF with UDU factorization to combine the data from an IMU and raw distance data to calculate an accurate position estimation. The algorithm begins by using the RLS method with a trilateration algorithm to obtain an initial position estimation. This estimation is then fused with data from the IMU's acceleration sensor.
The main contributions of this work are listed below:
• The use of a sensor fusion algorithm, utilizing the EKF structure, in conjunction with acceleration data from IMU sensors, to enhance the accuracy of position information obtained from distance data provided by beacon sensors via the RLS algorithm.
• The incorporation of a UDU factorization structure to reduce computation costs in embedded systems, in addition to the utilization of the EKF structure for sensor fusion.
• The ability of the designed algorithm to produce a solution even under suboptimal conditions, such as the use of only three beacon sensors instead of the ideal four.
In this article, detailed information will be provided on the position estimation algorithm, followed by a description of the data collection process from the real sensor, and, finally, the results of the real-time tests of the designed algorithm will be conveyed.

Position Estimation Algorithm
In this section, the structure of the algorithms designed for position estimation will be discussed. Figure 1 illustrates the general framework of the algorithm. Initially, information related to the geometric approach will be provided, and, subsequently, detailed information about the sensor fusion aspect will be conveyed. The designed algorithm is intended to be applied in real-time systems. For this reason, the EKF structure has been decomposed into UDU factorization to reduce the computation cost. Hence, the UDU-EKF structure is employed instead of the classical EKF structure in this study.

Solution Based on Three Reference Points
This study is based on a geometric method, as depicted in Figure 2, which involves three reference points, B1, B2, and B3, represented by their coordinates: (x1, y1, z1), (x2, y2, z2), and (x3, y3, z3), respectively. Additionally, the study also includes distance measurements, d1, d2, and d3, up to point A. The determination of the coordinates (x, y, z) of point A is equivalent to finding the solutions to the following system of quadratic equations [11][12][13].
(x - x1)^2 + (y - y1)^2 + (z - z1)^2 = d1^2
(x - x2)^2 + (y - y2)^2 + (z - z2)^2 = d2^2    (1)
(x - x3)^2 + (y - y3)^2 + (z - z3)^2 = d3^2

The equations in this system can be expressed as follows [11][12][13]:

x^2 + y^2 + z^2 - 2x1x - 2y1y - 2z1z = d1^2 - x1^2 - y1^2 - z1^2    (2)

(and similarly for the second and third reference points). The system of equations can be represented in matrix form as follows [11][12][13]:

A0 u = b0    (3)

where

A0 = [1, -2x1, -2y1, -2z1; 1, -2x2, -2y2, -2z2; 1, -2x3, -2y3, -2z3], u = [x^2 + y^2 + z^2, x, y, z]^T, b0 = [d1^2 - x1^2 - y1^2 - z1^2, d2^2 - x2^2 - y2^2 - z2^2, d3^2 - x3^2 - y3^2 - z3^2]^T.

The solution set of this system has two possible interpretations: in one interpretation, points B1, B2, and B3 are not in a straight line; in the other interpretation, they are in a straight line [11][12][13].
Case 1. B1, B2, and B3 are not in a straight line.
In this situation, the following propositions hold true:
• The rank of matrix A0 is 3.
• The dimension of the null space of A0 is 1.
The general solution to Equation (3) can be represented as follows:

u = up + t uh    (4)

where t is a real number coefficient, up is a particular solution of (3), and uh is a solution of the homogeneous system A0 u = 0. Both up and uh can be calculated using the Gaussian elimination method. If we substitute the expressions for up, uh, and u into Equation (4) and apply the constraint u ∈ E, namely

u(1) = u(2)^2 + u(3)^2 + u(4)^2

the following expressions are obtained:

a = uh(2)^2 + uh(3)^2 + uh(4)^2, b = 2(up(2)uh(2) + up(3)uh(3) + up(4)uh(4)) - uh(1), c = up(2)^2 + up(3)^2 + up(4)^2 - up(1)

This equation is a quadratic in the form of a t^2 + b t + c = 0, and has two solutions:

t1,2 = (-b ± sqrt(b^2 - 4ac)) / (2a)

Substituting t1 and t2 into Equation (4) yields the two candidate solutions of the equation system; the coordinates (x, y, z) of point A are the last three components of u.

Case 2. B1, B2, and B3 are in a straight line.
In this situation, the following propositions hold true:
• The rank of matrix A0 is 2.
• The dimension of the null space of A0 is 2.
The general solution to Equation (3) can be expressed as follows:

u = up + t1 uh1 + t2 uh2

where up is a particular solution of Equation (3), and uh1 as well as uh2 are two linearly independent solutions of the homogeneous system described by A0 u = 0.
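The non-collinear case (Case 1) above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation; the function name is ours, and the beacon geometry is assumed to give the row-reduced system a nonsingular pivot structure:

```python
import math

def trilaterate(beacons, dists):
    """Closed-form position from three non-collinear beacons (Case 1).

    Solves A0 u = b0 with u = [x^2+y^2+z^2, x, y, z]^T, then finds the
    values of t for which u = up + t*uh satisfies u[0] = u[1]^2+u[2]^2+u[3]^2.
    Returns the two candidate (x, y, z) solutions.
    """
    # Build the 3x4 linear system: one row per beacon.
    A = [[1.0, -2.0 * x, -2.0 * y, -2.0 * z] for (x, y, z) in beacons]
    b = [d * d - x * x - y * y - z * z
         for (x, y, z), d in zip(beacons, dists)]

    # Gauss-Jordan row reduction of [A | b] (rank 3, null space dim 1);
    # assumes the leading 3x3 block is nonsingular for this geometry.
    M = [row + [rhs] for row, rhs in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(3):
            if r != col:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]

    # Particular solution (free variable set to 0) and homogeneous solution.
    up = [M[0][4], M[1][4], M[2][4], 0.0]
    uh = [-M[0][3], -M[1][3], -M[2][3], 1.0]

    # Constraint u[0] = u[1]^2 + u[2]^2 + u[3]^2 gives a*t^2 + bq*t + c = 0.
    a = uh[1] ** 2 + uh[2] ** 2 + uh[3] ** 2
    bq = 2.0 * (up[1] * uh[1] + up[2] * uh[2] + up[3] * uh[3]) - uh[0]
    c = up[1] ** 2 + up[2] ** 2 + up[3] ** 2 - up[0]
    disc = max(bq * bq - 4.0 * a * c, 0.0)
    sols = []
    for t in ((-bq + math.sqrt(disc)) / (2 * a),
              (-bq - math.sqrt(disc)) / (2 * a)):
        u = [p + t * h for p, h in zip(up, uh)]
        sols.append(tuple(u[1:4]))
    return sols
```

When the three beacons lie in a plane, the two candidates are mirror images about that plane; the physically meaningful one can be selected with prior knowledge (for example, the mobile sensor being above the floor).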

Solution Based on More Than Three Reference Points
If there are additional distances d4, d5, ..., dn to the reference points B4, B5, ..., Bn, the system can be extended as follows [11][12][13]:

A u = b

where A and b are formed by stacking one row of the form [1, -2xi, -2yi, -2zi] and one entry di^2 - xi^2 - yi^2 - zi^2 per reference point. Expressed in this known form, the system is overdetermined, and the general solution, denoted as û, can be obtained using the least squares method as follows:

û = (A^T A)^-1 A^T b

The projection of vector b on the column space of matrix A is then p = A û. In this case, the coordinates of point p in the column space Col(A) correspond to the solution, û. Moreover, vector b consists of distances between the unknown point, A, and all the reference points; however, if the measurements are uncorrelated but have varying levels of uncertainty, the weighted least squares (WLS) method is utilized. The solution, û, can then be determined using the following equation:

û = (A^T V^-1 A)^-1 A^T V^-1 b

V is the covariance matrix of random errors. These errors refer to the variability or uncertainty in the measured distances between the unknown point and the reference points; they might stem from measurement noise (e.g., signal interference) or propagation effects (e.g., signal reflection or refraction). The RLS method uses u0 as the initial solution, which is continually updated with each new distance measurement, resulting in the creation of a new solution, u1. This approach enables the simultaneous calculation of both distance measurements and positioning, allowing position assignment to begin even when not all distances are known, thus reducing waiting time and improving the speed of the positioning calculation. Generally, for indoor positioning, a limited number of participating reference stations does not pose a significant challenge to the computational resources for multilateration. The presented algorithm utilizes a linear algebraic approach with low computational complexity, making it suitable for real-time applications.
The combination of distance data and the RLS algorithm is used to calculate the position. The following section will go into further detail on how to improve the position estimation using the EKF.
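The recursive update described above can be illustrated with one standard RLS step for a single linear row of the multilateration system A u = b. This is a generic sketch, not the paper's code; the function name and the per-measurement variance `var` are our assumptions:

```python
def rls_update(u, P, a_row, b_i, var=1.0):
    """One recursive-least-squares step for a new linear measurement
    a_row . u = b_i with assumed noise variance `var`.

    u is the current solution estimate, P its covariance; both are
    returned updated, so each new beacon distance refines the position
    without re-solving the whole system.
    """
    n = len(u)
    # Pa = P @ a_row
    Pa = [sum(P[i][j] * a_row[j] for j in range(n)) for i in range(n)]
    s = var + sum(a_row[i] * Pa[i] for i in range(n))   # innovation variance
    k = [Pa[i] / s for i in range(n)]                   # gain vector
    r = b_i - sum(a_row[i] * u[i] for i in range(n))    # measurement residual
    u = [u[i] + k[i] * r for i in range(n)]
    P = [[P[i][j] - k[i] * Pa[j] for j in range(n)] for i in range(n)]
    return u, P
```

Starting from a large initial covariance and processing all rows, u approaches the batch least squares solution; using each beacon's own error variance for `var` reproduces the WLS weighting described above.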

Sensor Fusion Algorithm
Several algorithms are used for sensor fusion, including Bayesian networks, convolutional neural networks, Dempster-Shafer theory, and the Kalman filter [14]. This system employs the Kalman filter to achieve more precise position estimation; however, the Kalman filter is optimized for linear systems, while real-world systems often exhibit non-linearity. Thus, the EKF is utilized to enhance performance [15]. The implementation of the EKF used in this study will be discussed in further detail. The state vector is defined as follows [13]:

x[k] = [p(w)[k], v(w)[k], a(w)[k]]^T

The vector p(w) represents the coordinates of the object along the x, y, and z axes in the world coordinate system. The velocity vector of the object along the x, y, and z axes in the world coordinate system is represented by v(w). Additionally, the accelerometer vector of the object in the world coordinate system is represented by a(w). The world coordinate system (w) here can also be specified as the coordinate system of the UWB sensor (Marvelmind). At this point, ∆t is defined as the sampling time interval, and (∆t)w[k] can be defined as the process noise of acceleration, representing uncertainties in the acceleration process. (∆t^2/2)w[k] and (∆t^3/6)w[k] can be represented as the process noise of velocity and the process noise of position, respectively. The equations of motion in the (k + 1) time interval can be expressed as follows [13]:

p[k + 1] = p[k] + v[k]∆t + (1/2)a[k]∆t^2 + (∆t^3/6)w[k]

The equation states that the new position at time step k + 1 is determined by the current position, the velocity multiplied by the time step, half of the acceleration multiplied by the square of the time step, and a noise term. The noise term is added to account for uncertainties or external disturbances that may affect the motion of the system:

v[k + 1] = v[k] + a[k]∆t + (∆t^2/2)w[k]

The equation states that the new velocity at time step k + 1 is determined by the current velocity, the acceleration multiplied by the time step, and a noise term:

a[k + 1] = a[k] + (∆t)w[k]

The equation states that the new acceleration at time step k + 1 is determined by the current acceleration and a noise term.
The choice of the noise term w[k] depends on the characteristics of the system and the sources of uncertainty or disturbances. As mentioned earlier, it is common to assume w[k] to be white noise. White noise has equal power at all frequencies, making it a simple and convenient choice in many applications.
The state equation can be represented in matrix form, as follows:

x[k + 1] = A x[k] + G w[k]

The matrices A and G represent the transition matrix and the noise process matrix, respectively. For each axis,

A = [1, ∆t, ∆t^2/2; 0, 1, ∆t; 0, 0, 1], G = [∆t^3/6, ∆t^2/2, ∆t]^T

The process noise vector and the Q covariance can be defined as follows [16][17][18]:

Q = E[(G w[k])(G w[k])^T] = σw^2 G G^T

The observation vector, o[k], contains the distance values used in the geometric approach and an additional noise vector:

o[k] = [d1 + n1, d2 + n2, d3 + n3, d4 + n4]^T

Since the observation vector typically contains noisy or imperfect measurements of the true state of the system, the measurement vector, H[k], represents the predicted measurements based on the current estimate of the system's state. The observation vector contains the real-world measurements obtained from sensors, while the measurement vector represents the predicted measurements based on a current estimate of the system's state. d1, ..., d4 represent the true distance measurements. In addition to this, n1, n2, n3, and n4 represent the noise associated with each measurement.
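The per-axis motion model above can be built programmatically. This is a sketch; the function name and `sigma_w` (the white-jerk noise standard deviation, a tuning parameter) are our assumptions, and the full nine-state matrices are block-diagonal stacks of these per-axis blocks for x, y, and z:

```python
def axis_model(dt, sigma_w=1.0):
    """Per-axis constant-acceleration (white-jerk) model matrices.

    Returns the 3x3 transition matrix A, the noise-mapping vector G,
    and the process noise covariance Q = sigma_w^2 * G * G^T, matching
    the position/velocity/acceleration equations of motion above.
    """
    A = [[1.0, dt, 0.5 * dt ** 2],
         [0.0, 1.0, dt],
         [0.0, 0.0, 1.0]]
    G = [dt ** 3 / 6.0, dt ** 2 / 2.0, dt]   # maps scalar noise w[k]
    Q = [[sigma_w ** 2 * gi * gj for gj in G] for gi in G]
    return A, G, Q
```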
The measurement vector is represented by H[k], and the vector of noise, with a mean of zero and a covariance matrix, is represented by n[k]. As seen in Figure 3, in the general structure, five Marvelmind ultrasonic sensors are used for the system, with one being mobile and four being stationary. The distances from the four stationary sensors to the mobile sensor can be defined as r1, r2, r3, and r4. The matrix R, a covariance matrix, is defined by the covariances of these distance data as follows:

R = diag(σr1^2, σr2^2, σr3^2, σr4^2)
The equations described in the structure of the applied EKF are utilized. The initialization stage of this EKF is one of the crucial factors, where the initial values of the covariance matrices are assigned. Subsequently, the prediction and update steps are executed in sequence [15,19].
(1) Prediction of state:

x̄[k] = f(x̂[k - 1], u[k])

where f is a nonlinear transition function and u[k] represents control inputs, if applicable.
(2) Prediction of state covariance:

P̄[k] = A[k - 1] P[k - 1] A[k - 1]^T + Q[k - 1]

where A[k - 1] is the Jacobian matrix of the nonlinear transition function, f, with respect to the state, x.
(3) Gain calculation of the EKF:

K[k] = P̄[k] H[k]^T (H[k] P̄[k] H[k]^T + R[k])^-1

where H[k] is the Jacobian matrix of the nonlinear measurement function, h.
(4) State correction:

x̂[k] = x̄[k] + K[k] (o[k] - h(x̄[k]))

(5) State covariance correction:

P[k] = (I - K[k] H[k]) P̄[k]

In the EKF, the complexity of computation is increased by calculating the Jacobian matrix. To enhance the efficiency and accuracy, and to decrease the computational cost of the EKF computations, UDU factorization is applied. The details of this algorithm are given in the next section.

UDU Factorization
UDU factorization employs the above equations. It substitutes the covariance matrix, P[k], with an upper triangular matrix with ones on the diagonal (U[k]) and a diagonal matrix, D[k], and can be expressed as follows [20]:

P[k] = U[k] D[k] U[k]^T
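The factorization itself can be computed with the classic column-by-column procedure. This is a generic sketch of the standard algorithm, not the paper's code:

```python
def udu(P):
    """Factorize a symmetric positive-definite P as P = U D U^T, with U
    unit upper triangular and D diagonal (returned as a vector).

    Columns are processed from last to first; each step peels off the
    rank-one contribution U[:, j] * D[j] * U[:, j]^T from P.
    """
    n = len(P)
    U = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    D = [0.0] * n
    P = [row[:] for row in P]          # work on a copy
    for j in range(n - 1, -1, -1):
        D[j] = P[j][j]
        for i in range(j):
            U[i][j] = P[i][j] / D[j]
        for i in range(j):
            for k in range(i + 1):
                P[i][k] -= U[i][j] * D[j] * U[k][j]
    return U, D
```

Storing only U and D (instead of the full P) preserves symmetry and positive definiteness by construction, which is the numerical motivation for the UDU-EKF.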

UDU Measurement Update
The covariance, P, can also be factorized into an LDL form, in which a lower triangular matrix is used instead of an upper triangular matrix [21]:

P = L ∆ L^T

where ∆ is the diagonal matrix, and U and D are the inverses of L^T and ∆, respectively [21].
Therefore, the measurement update becomes as follows: the m × m matrix R[k] is factorized into the LDL form as described in [21] and substituted into Equation (34), and the term U_R[k] H is expanded as in [21], which yields the final expression for the measurement update. In addition to this, ẑ[k] and z[k], which are related to x̂[k] and x̄[k], are defined in [20,21] in terms of the measurement vector, m[k].
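When the distance-measurement noises are independent, the UDU measurement update can also be applied one scalar measurement at a time using Bierman's well-known form. This is a common alternative sketch of the update described above, not necessarily the exact variant the authors implemented:

```python
def bierman_update(x, U, D, h, r, z):
    """Bierman's UDU measurement update for one scalar measurement
    z = h . x + v with Var(v) = r. Updates x, U, and D in place and
    never forms the full covariance matrix P.
    """
    n = len(x)
    a = [sum(U[i][j] * h[i] for i in range(j + 1)) for j in range(n)]  # U^T h
    b = [D[j] * a[j] for j in range(n)]                                # D U^T h
    dz = z - sum(h[i] * x[i] for i in range(n))                        # innovation
    alpha = r
    gamma = 1.0 / alpha
    for j in range(n):
        beta = alpha
        alpha += a[j] * b[j]
        lam = -a[j] * gamma          # uses 1/alpha from previous step
        gamma = 1.0 / alpha
        D[j] = beta * gamma * D[j]
        for i in range(j):
            beta_u = U[i][j]
            U[i][j] = beta_u + b[i] * lam
            b[i] = b[i] + b[j] * beta_u
    for i in range(n):
        x[i] += gamma * dz * b[i]    # K = gamma * b is the Kalman gain
    return x, U, D
```

Processing the four range measurements sequentially avoids forming and inverting the innovation covariance matrix, which is part of why UDU filters run faster on embedded targets.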

UDU Time Update
The Kalman filter time update uses a standard method: the information matrix is inverted to obtain the covariance, the covariance is propagated, and the propagated covariance matrix is then inverted to prepare for the measurement update step. The proposed algorithm utilizes this method for covariance propagation, factorizing Q[k] with the UDU parameterization [20,21]:

Q[k] = G_Q[k] ∆_Q[k] G_Q[k]^T

where ∆_Q[k] is a diagonal p × p matrix. Using the matrix inversion lemma, the inversion of the propagated covariance matrix, P[k + 1]^-1, is obtained, and the Kalman gain (K_UDU) is defined accordingly [20,21]. Since D_Q[k] is a diagonal matrix, the UDU factorization of P[k]^-1 is solved directly by using Carlson's rank-one update [22], and the time-propagated UDU factors of P̄[k + 1] are defined from this result. The time update for x̄[k + 1] is then obtained as follows:

P̄[k + 1]^-1 x̄[k + 1] = P̄[k + 1]^-1 Φ[k] P[k] P[k]^-1 x̂[k]    (52)

The equation systems specified in this section have been utilized to enhance the performance of the EKF structure in real-time applications. The outcomes of this structure will be comprehensively analyzed in Section 4 of the article. The next section will explain how data are collected from the Marvelmind sensor.

Data Acquisition
This section will provide detail on how to retrieve data from the Marvelmind sensor. The mobile Marvelmind sensor is connected to a laptop computer via the serial port. To ensure that all of the necessary components are working correctly, four Marvelmind sensors attached to the wall were activated, and their functionality was confirmed through the Marvelmind's own interface [23]. After verifying that all of the sensors were operating properly, a software tool on the ROS platform was run on the laptop to gather the data shown in Table 1.

After the data were collected in the ROS environment, the packet parsing functions that interpret the data were added to Pixhawk in order to be able to collect them in the same way from Pixhawk as well. In addition to the written codes, a physical connection must also be made between Pixhawk and the Marvelmind sensor. This physical connection schema can be seen in Figure 4 [24].

Testing Algorithm and Results
The UDU-factorized EKF structure and the geometric approach based on the RLS method were implemented in the C++ environment using MATLAB code generation support. The integration was carried out on the Pixhawk platform. The trajectories obtained from the Marvelmind sensor are presented in Table 2. The robotic platform used in this study is a quadcopter. Due to reliability concerns associated with indoor flight, the quadcopter was constrained to a fixed table, and data were collected using Marvelmind sensors. In the initial six trajectories, no motion was applied to the robot along the z-axis; however, intentional movement in the z-axis was introduced during the data collection for the last two trajectories to assess the algorithm's performance.

The UDU-factorized EKF structure and the geometric approach based on the RLS method were implemented in the C++ environment using MATLAB code generation support. The integration was carried out on the Pixhawk platform. The trajectories obtained from the Marvelmind sensor are presented in Table 2. The robotic platform used in this study is a quadcopter. Due to reliability concerns associated with indoor flight, the quadcopter was constrained to a fixed table, and data were collected using Marvelmind sensors. In the first six trajectories, no motion was applied to the robot along the z-axis; intentional z-axis movement was introduced during data collection for the last two trajectories to assess the algorithm's performance.

The position estimations calculated solely through the trilateration algorithm, and the positions derived by combining trilateration with the EKF and UDU factorization algorithms incorporating IMU data, are presented in Figures 6-13. The locations of the fixed sensors used in these figures are indicated as black dots. The minimum, maximum, and average error values for the calculated positions, computed as RMS values, are provided in Table 3.

It is evident from the figures that the position estimations obtained using UDU factorization applied to the EKF structure differ from those obtained solely through the geometric approach, with the UDU-based estimation lying closer to the reference position. This is expected because the geometric approach calculates position estimations based solely on distance data from the Marvelmind sensor, whereas the EKF-based approach employs a sensor fusion algorithm that incorporates both distance and accelerometer data to enhance accuracy.
In Figures 6-8, four fixed and one mobile Marvelmind sensors were utilized. In Figures 9-11, three fixed and one mobile Marvelmind sensors were used. In Figures 12 and 13, the four-fixed/one-mobile and three-fixed/one-mobile configurations were employed sequentially. In the first six scenarios, there was no movement along the z-axis; in the last two, motion along the z-axis was introduced. The obtained RMS values therefore differ depending on the trajectory, the number of sensors used, and the algorithmic structure.
Examining Table 3, the results obtained solely through the geometric approach fall short of the desired accuracy for all trajectories, with notably high errors; position estimations reached the desired levels only when the UDU-EKF was employed in conjunction with the geometric approach. The primary reason, as detailed above, is that the sensor fusion algorithm provides a more accurate estimate than the geometric algorithm alone: incorporating accelerometer data into the position estimation process contributes to a more accurate outcome.
Across the trajectories, the geometric approach, relying solely on trilateration, exhibits notable variability in accuracy, with maximum errors reaching up to 5.4721 m in Trajectory 6. In contrast, integrating UDU factorization into the EKF consistently enhances accuracy, as demonstrated by reduced minimum, mean, and maximum errors. Trajectory 6 stands out with a remarkable improvement, the maximum error dropping from 5.4721 m to 0.4796 m. This underscores the role of the UDU-EKF in refining position estimates, and the overall pattern highlights the algorithm's adaptability and robustness in dynamic indoor environments, reaffirming its efficacy for precise positioning in real-world applications.
In addition, the code was run multiple times to compare the processing speeds of the UDU-factorized EKF and the conventional EKF. The comparison, detailed in Table 4, reports the recorded processing times in seconds for each trajectory. The results confirm that the EKF structure with UDU factorization consistently outperforms the conventional EKF, with a minimum improvement in processing time of 18%. Trajectory-specific evaluations reveal percentage differences of 21%, 25%, and 18% for Trajectories 1, 2, and 3, respectively, emphasizing the UDU-EKF's faster execution across diverse scenarios. This reduction in computational cost is significant for time-sensitive applications and positions UDU factorization as an efficient choice for resource-constrained systems. Overall, integrating UDU factorization into the EKF structure improves both the accuracy of position estimation, as evidenced by the figures and Table 3, and the efficiency of the algorithm.
The UDU-EKF achieves reliable results even in scenarios involving motion along all three axes. Additionally, Table 4 shows that the UDU-EKF structure operates faster than the traditional EKF. Considering both aspects, the UDU-EKF provides accurate position estimation at a lower computational cost.

Conclusions
This paper presents a comprehensive study on the integration of UDU factorization into the structure of the EKF for indoor position estimation algorithms. The proposed algorithm demonstrates its effectiveness in real-time systems, showcasing improved accuracy in position estimation compared to the purely geometric approach. The decomposition of the EKF structure with UDU factorization reduces computational costs, making it particularly suitable for real-time applications.
The study focuses on a geometric method based on reference points, introducing an approach that utilizes distance measurements and quadratic equations to determine the position coordinates of a target point.
Experimental results, obtained using Marvelmind sensors and a quadcopter, provide a thorough analysis of the algorithm's performance. Trajectory-based comparisons emphasize the superiority of the UDU-EKF over the geometric approach, highlighting its ability to achieve more accurate position predictions. The integration of UDU factorization into the EKF not only enhances accuracy but also demonstrates adaptability to dynamic indoor environments.
Furthermore, the paper investigates the computational efficiency of the proposed algorithm, comparing processing speeds between the UDU-EKF and traditional EKF structures. The results consistently show that the UDU-EKF outperforms the traditional EKF in terms of processing speed. This finding is crucial for real-world applications, especially those requiring quick and accurate positioning.
In summary, the integration of UDU factorization into the EKF structure emerges as a strategic optimization for achieving higher accuracy with lower computational costs.The algorithm's adaptability to all three axes and its superior processing speed position it as a promising solution for real-time positioning systems, emphasizing its potential impact on various applications, including robotics and indoor navigation.

Figure 1.
Figure 1. General structure of position estimation.

Figure 2.
Figure 2. Reference points and interval measurements.

Case 1.
B1, B2, and B3 are not in a straight line.

Figure 3.
Figure 3. Displaying used sensors on the interface.

Figure 4.
Figure 4. Pixhawk physical wiring diagram with the Marvelmind sensor [24]. Following the schematic connection depicted in Figure 5a,b, a physical connection between Pixhawk and the Marvelmind sensor was established. Data collection was then conducted on Pixhawk, while the other Marvelmind sensors remained fixed to the wall.

Figure 5.
Figure 5. (a) Pixhawk physical connection with the Marvelmind sensor. (b) Marvelmind sensor fixed on the wall.

Figure 6.
Figure 6. Position-estimation-only trilateration algorithm and trilateration algorithm with the EKF in Trajectory 1.

Figure 7.
Figure 7. Position-estimation-only trilateration algorithm and trilateration algorithm with the EKF in Trajectory 2.

Figure 9.
Figure 9. Position-estimation-only trilateration algorithm and trilateration algorithm with the EKF in Trajectory 4.

Figure 10.
Figure 10. Position-estimation-only trilateration algorithm and trilateration algorithm with the EKF in Trajectory 5.

Figure 11.
Figure 11. Position-estimation-only trilateration algorithm and trilateration algorithm with the EKF in Trajectory 6.

Figure 12.
Figure 12. Position-estimation-only trilateration algorithm and trilateration algorithm with the EKF in Trajectory 7.

Figure 13.
Figure 13. Position-estimation-only trilateration algorithm and trilateration algorithm with the EKF in Trajectory 8.

Table 2.
Trajectories of the tests.

Table 4.
Process time difference of the EKF vs. UDU-EKF.