Improved-UWB/LiDAR-SLAM Tightly Coupled Positioning System with NLOS Identification Using a LiDAR Point Cloud in GNSS-Denied Environments

Abstract: Reliable absolute positioning is indispensable in long-term positioning systems. Although simultaneous localization and mapping based on light detection and ranging (LiDAR-SLAM) is effective in global navigation satellite system (GNSS)-denied environments, it can provide only local positioning results, with error divergence over distance. Ultrawideband (UWB) technology is an effective alternative; however, non-line-of-sight (NLOS) propagation in complex indoor environments severely affects the precision of UWB positioning, whereas LiDAR-SLAM typically provides more robust results under such conditions. For robust and high-precision positioning, we propose an improved-UWB/LiDAR-SLAM tightly coupled (TC) integrated algorithm. This method is the first to combine a LiDAR point cloud map generated via LiDAR-SLAM with position information from UWB anchors to distinguish between line-of-sight (LOS) and NLOS measurements through obstacle detection and NLOS identification (NI) in real time. Additionally, to alleviate positioning error accumulation in long-term SLAM, an improved-UWB/LiDAR-SLAM TC positioning model is constructed using UWB LOS measurements and LiDAR-SLAM positioning information. Parameter solving using a robust extended Kalman filter (REKF) to suppress the effect of UWB gross errors improves the robustness and positioning performance of the integrated system. Experimental results show that the proposed NI method using the LiDAR point cloud can efficiently and accurately identify UWB NLOS errors to improve the performance of UWB ranging and positioning in real scenarios. The TC integrated method combining NI and REKF achieves better positioning effectiveness and robustness than other comparative methods and satisfactory control of sensor errors, with a root-mean-square error of 0.094 m, realizing subdecimeter indoor positioning.


Introduction
Obtaining accurate, robust and continuous position information is an important guarantee for location-based services and applications (the internet of things, intelligent transportation, etc.), for which global navigation satellite systems (GNSSs), especially the global positioning system (GPS) and the BeiDou navigation satellite system (BDS), are common positioning technology solutions in outdoor environments. Precise point positioning (PPP) or precise point positioning-real-time kinematic (PPP-RTK) solutions can provide dm- to cm-level navigation and positioning services when the observation environment is good [1][2][3]. However, in indoor environments where GNSS signals cannot penetrate, such as large warehouses and underground parking lots, satellite signal attenuation and multipath effects degrade GNSS positioning accuracy so severely that it cannot meet the demand for indoor high-precision positioning. Therefore, with the growing demand for indoor positioning, determining how to develop and build a real-time, high-precision indoor positioning system has become an important research problem.

•
In complex environments, such as large buildings or time-varying environments, where UWB signals are heavily affected by NLOS errors, the rich geometric features enable LiDAR-SLAM to provide accurate and robust pose estimation and mapping results. Building on the advanced LiDAR-SLAM algorithm LeGO-LOAM, we propose and implement a UWB NLOS identification algorithm using the LiDAR point cloud. This method combines the position information of the UWB anchors with the environment map generated by LeGO-LOAM and distinguishes between LOS and NLOS measurements in real time by efficiently and accurately performing obstacle detection and NLOS identification along the line-of-sight direction of each anchor, improving the UWB data quality. It has good universality, as it requires neither tedious early-stage data collection nor a training phase, and it copes well with interference from dynamic obstacles in the environment. Experimental results show that this NLOS identification algorithm is reasonably effective without adding a large amount of computation.

•
To suppress the error accumulation of LiDAR-SLAM while simultaneously obtaining the positioning results in the world coordinate system, we propose a novel improved-UWB/LiDAR-SLAM tightly coupled positioning system by using UWB LOS measurements identified by the LiDAR point cloud and the positioning results of LeGO-LOAM as the input to the integrated system. Considering that in addition to NLOS propagation, UWB measurements are affected by signal multipath effects, intensity attenuation and other factors, which are likely to cause large gross errors, we use a robust extended Kalman filter (REKF) for parameter solutions and effectively suppress the influence of abnormal measurements on the filtering results by reducing the weights of outliers. A dynamic positioning experiment demonstrates the accuracy and robustness of the proposed tightly coupled integrated method combining NLOS identification and REKF.
The remainder of this paper is organized as follows. Section 2.1 gives an overview of the complete system pipeline. Section 2.2 designs the temporal synchronization and spatial calibration between sensors to ensure effective sensor fusion. The details of the UWB NLOS identification using the LiDAR point cloud algorithm are described in Sections 2.3 and 2.4. The improved-UWB/LiDAR-SLAM tightly coupled positioning algorithm is introduced in Sections 2.5 and 2.6. The detailed experimental setup and discussions are reported and analyzed in Section 3. Finally, Section 4 summarizes the work by drawing several conclusions and provides an outlook for future research.

Methodology
In this paper, we use the superscript T to denote the transpose of a vector or matrix, with lowercase bold symbols (e.g., x) for vectors and uppercase bold symbols (e.g., R) for matrices. For any vector x, ‖x‖ denotes its Euclidean norm.

System Overview
The framework of the proposed improved-UWB/LiDAR-SLAM integrated positioning system is illustrated in Figure 1. First, the system performs data synchronization of LiDAR and UWB sensors through a robot operating system (ROS) and the calibrated external reference matrix, unifying the time and spatial datum (purple blocks in Figure 1). Second, the original LiDAR point cloud is segmented to distinguish between the ground and nonground points, and the ground and nonground feature points are extracted from these point clouds for feature matching to estimate the 6-DOF relative poses of the system. The point cloud in the current frame is registered to the starting coordinate system in accordance with the transformation matrix to obtain a globally consistent map (green blocks in Figure 1). Then, the UWB NLOS data are identified by combining the UWB anchor information with the constructed point cloud map. The UWB data quality is effectively controlled by eliminating NLOS measurements and retaining LOS measurements (yellow blocks in Figure 1). Finally, the improved-UWB/LiDAR-SLAM tightly coupled model is designed. Multiple UWB LOS data and LiDAR data are fused using REKF to obtain the optimal estimation of the integrated system state. The possibility of LiDAR-SLAM divergence is effectively reduced, and the robustness of the system is improved (orange blocks in Figure 1). The red dashed blocks in Figure 1 represent the contributions of our work.

Spatiotemporal Synchronization of Sensors
Because UWB and LiDAR belong to two different types of positioning systems, temporal synchronization and spatial calibration between sensors are necessary steps for UWB NLOS identification and subsequent multisensor fusion.
For temporal synchronization, a ROS provides a well-established data encapsulation format for storing sensor information and timestamps. Therefore, we use the mechanism of ROS to assign timestamps to each sensor with the same time reference and achieve temporal synchronization for both types of sensors by aligning the timestamps of UWB and LiDAR data.
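As a concrete illustration, the nearest-timestamp pairing described above can be sketched as follows. This is a simplified stand-in for the ROS message machinery: the function name match_nearest and the raw double-precision stamp buffers are illustrative, not part of the original system.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Pair each UWB range (2 Hz) with the LiDAR frame (10 Hz) whose
// timestamp is nearest, rejecting pairs farther apart than `tol`
// seconds. Returns, per UWB sample, an index into the LiDAR stamp
// vector, or -1 when no frame falls within the tolerance.
std::vector<long> match_nearest(const std::vector<double>& uwb_t,
                                const std::vector<double>& lidar_t,
                                double tol) {
    std::vector<long> match(uwb_t.size(), -1);
    std::size_t j = 0;
    for (std::size_t i = 0; i < uwb_t.size(); ++i) {
        // Advance while the next LiDAR stamp is at least as close
        // to the current UWB stamp (both streams are time-ordered).
        while (j + 1 < lidar_t.size() &&
               std::fabs(lidar_t[j + 1] - uwb_t[i]) <=
               std::fabs(lidar_t[j] - uwb_t[i]))
            ++j;
        if (!lidar_t.empty() && std::fabs(lidar_t[j] - uwb_t[i]) <= tol)
            match[i] = static_cast<long>(j);
    }
    return match;
}
```

In practice, this is the role played by ROS timestamp alignment for the 2 Hz UWB and 10 Hz LiDAR streams; the tolerance guards against pairing across dropped frames.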
The spatial calibration is shown in Figure 2. The world coordinate system, referred to as the w-frame (i.e., the UWB coordinate system), is established by setting stations on indoor control points with a high-precision total station; the body coordinate system is referred to as the b-frame, and the LiDAR coordinate system is referred to as the l-frame. LeGO-LOAM defines the l-frame of the first frame as the global reference coordinate system, referred to as the g-frame (i.e., the map coordinate system). Considering the number of points in the point cloud map, the initial positions of the UWB anchors and the mobile tag are transformed into the g-frame for subsequent sensor data fusion to improve the conversion efficiency. Therefore, it is necessary to precisely determine the 6-DOF transformation relationship T^g_w = {R, t} between the w-frame and the g-frame, where R represents the rotation parameter and t represents the translation parameter.
The specific steps are as follows.


•
Considering that the indoor ground is mostly horizontal, the transformation relationship T^b_w between the w-frame and b-frame is obtained by observing the front and rear points on the side of the mobile platform with a high-precision total station.

•

Because the LiDAR sensor remains fixed after installation, the transformation relationship between the b-frame and l-frame needs to be calibrated only once. A feature-rich static scenario is selected, and the mobile platform is kept stationary. Multiple pairs of corresponding feature points (p^w, p^l) are obtained by observing the sharp-shaped corner feature points in the scenario from two different positions using LiDAR and a high-precision total station. The point p^w observed by the total station is transformed into p^b in accordance with T^b_w, and the corresponding feature point pairs (p^b, p^l) in the b-frame and l-frame are obtained. Thus, the transformation relationship T^l_b between the b-frame and l-frame is obtained by solving for the four parameters.

•
The transformation relationship T^l_w between the w-frame and l-frame is T^l_w = T^l_b T^b_w. Since the g-frame and the l0-frame coincide at the initial time, T^g_w = T^{l0}_w = T^l_w. Accordingly, the UWB anchor coordinates in the g-frame and the lever-arm vector l^l pointing from the LiDAR center to the UWB mobile tag in the l-frame can be obtained. In this paper, because the LiDAR center and the UWB mobile tag are on the same plumb line, l^l = (0, 0, d)^T, where d is the distance between the LiDAR center and the UWB mobile tag.
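The calibration chain above (composing T^l_w = T^l_b T^b_w, then moving anchor coordinates into the g-frame) can be sketched with plain 4 × 4 homogeneous matrices. The helper names compose, transform and yaw_trans are illustrative, and the yaw-only rotation reflects the horizontal-ground assumption stated above.

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat4 = std::array<std::array<double, 4>, 4>;
using Vec3 = std::array<double, 3>;

// 4x4 homogeneous transform product: T_a<-c = T_a<-b * T_b<-c.
Mat4 compose(const Mat4& a, const Mat4& b) {
    Mat4 c{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}

// Apply a homogeneous transform to a 3-D point (rotation + translation).
Vec3 transform(const Mat4& T, const Vec3& p) {
    Vec3 q{};
    for (int i = 0; i < 3; ++i)
        q[i] = T[i][0]*p[0] + T[i][1]*p[1] + T[i][2]*p[2] + T[i][3];
    return q;
}

// Planar transform (yaw rotation + translation), matching the mostly
// horizontal indoor ground assumed for the calibration.
Mat4 yaw_trans(double yaw, double tx, double ty, double tz) {
    Mat4 T{};
    T[0][0] = std::cos(yaw); T[0][1] = -std::sin(yaw); T[0][3] = tx;
    T[1][0] = std::sin(yaw); T[1][1] =  std::cos(yaw); T[1][3] = ty;
    T[2][2] = 1.0;           T[2][3] = tz;
    T[3][3] = 1.0;
    return T;
}
```

With these helpers, T^l_w = compose(T_l_b, T_b_w), and each anchor coordinate in the w-frame maps into the g-frame (= initial l-frame) via transform.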

Generation of LiDAR Point Cloud Map
Because the indoor ground is mostly flat, consistent with the assumption that the LeGO-LOAM robot is always on the ground, and to ensure the accuracy and computational efficiency of the UWB NLOS identification algorithm, this paper adopts the advanced LiDAR-SLAM algorithm LeGO-LOAM to build a high-precision 3-D environment map. LeGO-LOAM consists of four main modules: segmentation, feature extraction, odometry and mapping.

•
Segmentation: The point cloud obtained at time t is set to P_t = {p_1, p_2, · · · , p_n}, where p_i is a point in the point cloud P_t. The point cloud is projected into a range image with a resolution of 1800 × 16. The ground point cloud (P_p) and nonground point cloud (P_e) are extracted from the LiDAR raw data by judging the vertical dimensional characteristic.

•

Feature extraction: To extract features uniformly in all directions from the ground and segmented points, the range image is equally divided into several subimages in the horizontal direction. The smoothness of each point is calculated and compared with a smoothness threshold to extract the edge features and planar features for registration.

•
Odometry: The scan-to-scan constraint based on the extracted features is built next. Since the ground remains essentially constant between consecutive frames, the variations in the vertical dimension [t_z, t_roll, t_pitch] can be estimated from the planar features. This vertical-dimension estimate is input as the initial value into the second optimization step to reduce the number of iterations, in which the variations in the horizontal dimension [t_x, t_y, t_yaw] are calculated, improving the computational efficiency.

•
Mapping: The scan-to-map constraint is constructed by the Levenberg-Marquardt (L-M) method, and the final global map is obtained using a loop detection approach. More details of the LeGO-LOAM algorithm can be found in [32].
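The smoothness measure driving the feature extraction step can be illustrated with a simplified scalar version of the LOAM-style curvature on one scan line of the range image. The exact LeGO-LOAM formula differs in its normalization details; smoothness is an illustrative name.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// LOAM-style smoothness of point i over its 2*w ring neighbors on one
// scan line: small values indicate planar surfaces, large values edges.
// Assumes w <= i < range.size() - w.
double smoothness(const std::vector<double>& range,
                  std::size_t i, std::size_t w) {
    double diff = 0.0;
    for (std::size_t j = i - w; j <= i + w; ++j)
        if (j != i) diff += range[j] - range[i];  // sum of range differences
    return std::fabs(diff) / (2.0 * w * range[i]);
}
```

Points whose smoothness exceeds the threshold are selected as edge features; points well below it are selected as planar features.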
Considering the computational efficiency and the time-varying nature of dynamic scenarios and to minimize the interference caused by the nonrepetition of dynamic objects between adjacent frames (e.g., a vehicle or pedestrian scanned in the previous frame may not appear in the next frame), the method used in this paper registers only the LiDAR point cloud in the current frame to the environment map in accordance with the transformation matrix and updates iteratively in real time.

UWB NLOS Identification Using a LiDAR Point Cloud Map
For positioning in some indoor scenarios, as shown in Figure 3a, occlusion due to obstacles (e.g., pedestrians, vehicles, wall pillars, etc.) may cause a node to enter the NLOS status, meaning that the UWB pulse signal has no direct path and must be reflected, refracted, or transmitted through obstacles before it can be received by the target node. Such regions are called NLOS regions. Compared with the LOS region, the time delay of the signal in the NLOS region leads to a positively biased distance estimate, which seriously affects the positioning accuracy. However, in the LiDAR point cloud map, as shown in Figure 3b, the position, size, and reflective surface of each obstacle (static or dynamic) are all known, that is, the LOS region and NLOS region are known.
In the NLOS region, there are obstacles acting as blockages between nodes, which is reflected in the point cloud map as point cloud clusters between nodes. Based on this assumption, after completing the temporal synchronization and spatial calibration of the two sensors, we design a UWB NLOS identification algorithm using the LiDAR point cloud. The two types of sensors, UWB and LiDAR, are correlated, and the LiDAR point cloud map is used to identify NLOS measurements by judging whether there is obstacle occlusion between the UWB mobile tag and an anchor, i.e., whether the line-of-sight direction is in the NLOS status.
The inputs to the NLOS identification algorithm are the transformation relationship T^g_w between the w-frame and g-frame, the UWB anchor set ua_all = {ua_1, ua_2, · · · , ua_n}, the UWB mobile tag coordinate r^w_tag at the initial time, the UWB anchor coordinates r^w_anchor, the LiDAR coordinate r^g_lidar, the LiDAR point cloud map M^g and a point number threshold n_thr. The output is the UWB LOS anchor set ua_los = {ua_1, ua_2, · · · , ua_s}. The main flow of the NLOS identification algorithm is as follows.

1.
At the initial time, the g-frame coincides with the l0-frame. In accordance with the transformation relationship T^g_w between the w-frame and g-frame calculated in Section 2.2, the UWB mobile tag coordinate r^w_tag and anchor coordinates r^w_anchor in the w-frame are transformed into the g-frame (i.e., the l0-frame). The lever-arm vector l^l between the UWB mobile tag and the LiDAR sensor in the l-frame and the UWB anchor coordinates r^g_anchor = (x^g_anchor, y^g_anchor, z^g_anchor) in the g-frame are obtained.

2.
At time t, the LiDAR coordinate r^g_lidar is used to deduce the UWB mobile tag coordinate r^g_lidar,tag = (x^g_lidar,tag, y^g_lidar,tag, z^g_lidar,tag) in the g-frame in accordance with the lever-arm vector l^l.

3.

K-dimensional trees (KD-trees) and voxel grids are commonly used for map representation and corresponding searches in SLAM systems [36,39]. Considering that the KD-tree representation can find association points through a nearest neighbor search or radius search, we convert the point cloud map M^g generated in Section 2.3 into a KD-tree structure M^g_tree for subsequent obstacle detection.

4.
Based on the UWB mobile tag coordinate r^g_lidar,tag = (x^g_lidar,tag, y^g_lidar,tag, z^g_lidar,tag) and the anchor j coordinate r^g_anchor = (x^g_anchor, y^g_anchor, z^g_anchor), the line-of-sight direction, i.e., the search direction for NLOS identification (e.g., the direction of the red dashed line in Figure 4), is determined. At the same time, the distance d, vertical azimuth ω, and horizontal azimuth α between the mobile tag and anchor j are calculated.

5.
The first NLOS identification is performed along the line of sight, centered on the UWB mobile tag position.

6.

Considering the general size of indoor objects and the LiDAR vertical angular resolution, the constructed KD-tree M^g_tree is used to search the point cloud in a neighborhood with a radius of 1 m. If the number of points in the neighborhood exceeds the given threshold n_thr (in this paper, n_thr = 10), the line-of-sight direction is considered to be possibly blocked by obstacles.

7.
To prevent misjudgment (orange dashed block in Figure 4) and improve the accuracy of NLOS identification, based on the principle of collision detection, we traverse all points of the LiDAR point cloud in the neighborhood, take any three points to construct a triangle, and use Plücker coordinates to determine whether the line of sight intersects with the triangle in 3-D space. The Plücker coordinates represent a method of specifying directed lines in 3-D space using six-dimensional vectors.
Taking any two distinct points a = (a_0, a_1, a_2) and b = (b_0, b_1, b_2) to construct the Plücker coordinates l of line ab, we obtain:

l = (l_0, l_1, l_2, l_3, l_4, l_5)

where:

l_0 = a_0 b_1 − b_0 a_1, l_1 = a_0 b_2 − b_0 a_2, l_2 = a_0 − b_0, l_3 = a_1 b_2 − b_1 a_2, l_4 = a_2 − b_2, l_5 = b_1 − a_1

The side operator is defined as the permuted inner product of Plücker coordinates l_1 and l_2:

side(l_1, l_2) = l_1,0 l_2,4 + l_1,1 l_2,5 + l_1,2 l_2,3 + l_1,3 l_2,2 + l_1,4 l_2,0 + l_1,5 l_2,1

where l_1 = (l_1,0, l_1,1, l_1,2, l_1,3, l_1,4, l_1,5) and l_2 = (l_2,0, l_2,1, l_2,2, l_2,3, l_2,4, l_2,5). In our method, the line of sight L is defined by the UWB mobile tag coordinate r^g_lidar,tag and the anchor coordinate r^g_anchor, and the triangle is defined by any three points p_0, p_1 and p_2 in the LiDAR point cloud. Let:

s_1 = side(L, e_1), s_2 = side(L, e_2), s_3 = side(L, e_3)

where e_1, e_2 and e_3 are the Plücker coordinates of the directed triangle edges p_0 p_1, p_1 p_2 and p_2 p_0, respectively. If s_1, s_2, s_3 > 0 or s_1, s_2, s_3 < 0, then the line and the triangle are not coplanar, and the line passes through the triangle, as shown in Figure 5.


8.
If a triangle intersects with the line of sight, it is considered that the line-of-sight direction is blocked by obstacles, i.e., the status between the UWB mobile tag and anchor j is NLOS. This stops the detection and marks anchor j as an NLOS anchor. Otherwise, we consider that there is no obstacle occlusion at the current position and continue with the detection.

9.

The search center point is moved to the next point along the line of sight in accordance with Equation (7), taking 2 m as the step size.
Steps 6-8 are repeated to continue searching the point cloud for NLOS identification for anchor j.
10.

When m > (d + 3)/2, the NLOS identification process for anchor j ends, and anchor j is marked as an LOS anchor.

11.

Steps 4-10 are repeated to complete NLOS identification for all UWB anchors and distinguish between the LOS anchor set ua_los = {ua_1, ua_2, · · · , ua_s} and the NLOS anchor set ua_nlos = {ua_1, ua_2, · · · , ua_r}. The LOS anchor measurements will be used as input to the UWB module in the integrated positioning system.
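The Plücker side-operator test of step 7 can be sketched as follows, using one common sign convention (direction part b − a, moment part a × b); the overall sign of the side operator differs between conventions, but the same-sign test is unaffected. The names are illustrative, and a full implementation would additionally bound the test to the tag-anchor segment covered by the radius searches of steps 5-9.

```cpp
#include <array>
#include <cassert>

using Vec3 = std::array<double, 3>;

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]};
}
static double dot(const Vec3& a, const Vec3& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Pluecker coordinates of the directed line through a then b:
// direction part u = b - a, moment part v = a x b.
struct Plucker { Vec3 u, v; };

static Plucker make_line(const Vec3& a, const Vec3& b) {
    Plucker l;
    l.u = {b[0] - a[0], b[1] - a[1], b[2] - a[2]};
    l.v = cross(a, b);
    return l;
}

// Side operator (permuted inner product) of two Pluecker lines.
static double side(const Plucker& l1, const Plucker& l2) {
    return dot(l1.u, l2.v) + dot(l2.u, l1.v);
}

// Step 7: the sight line from tag to anchor pierces triangle
// (p0, p1, p2) iff it lies on the same side of all three directed
// edges, i.e., s1, s2 and s3 share one nonzero sign.
bool sight_blocked(const Vec3& tag, const Vec3& anchor,
                   const Vec3& p0, const Vec3& p1, const Vec3& p2) {
    Plucker los = make_line(tag, anchor);
    double s1 = side(los, make_line(p0, p1));
    double s2 = side(los, make_line(p1, p2));
    double s3 = side(los, make_line(p2, p0));
    return (s1 > 0 && s2 > 0 && s3 > 0) || (s1 < 0 && s2 < 0 && s3 < 0);
}
```

In the search loop of steps 5-9, this test is applied to every triangle formed from three neighborhood points; one intersection suffices to mark anchor j as NLOS.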

Improved-UWB/LiDAR-SLAM Tightly Coupled Integrated Model
In our method, to suppress the error accumulation of LiDAR-SLAM while obtaining positioning results in the world coordinate system, we introduce the measurements of the UWB LOS anchors identified by the NLOS identification algorithm in Section 2.4 for correction, eliminating the drift error of LiDAR-SLAM and providing more robust positioning results.
Considering that the number of UWB LOS measurements may not be sufficient to maintain UWB positioning, we adopt a UWB/LiDAR-SLAM tightly coupled model that takes UWB LOS measurements and LiDAR-SLAM positioning information without loop closure as the measurement inputs for the Kalman filter. This model is not restricted by the number of UWB measurements and integrates UWB and LiDAR at the measurement level. This paper names it the improved-UWB/LiDAR-SLAM tightly coupled integrated algorithm (NLOS identification + UWB/LiDAR-SLAM tightly coupled, NI-TC).
In this study, we use a constant velocity model [40] to constrain the two consecutive states. The system model in a discrete-time form can be expressed as follows:

x_k = Φ_{k,k−1} x_{k−1} + ω_{k−1}

where symbols with subscript k indicate the parameter at time k; x_k denotes the state vector; Φ_{k,k−1} denotes the state transition matrix; and ω_{k−1} denotes the white noise sequence of the system, which is modeled as a Gauss-Markov random process with the ideal zero mean assumption.
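For a position-velocity state, the constant velocity prediction step reduces to the following sketch (illustrative only; the actual state vector of the paper's filter may include further components):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// State x = [px, py, pz, vx, vy, vz]^T under the constant velocity
// model x_k = Phi_{k,k-1} x_{k-1} + w_{k-1}, where
// Phi = [[I, dt*I], [0, I]]: positions advance by velocity * dt.
std::array<double, 6> cv_predict(const std::array<double, 6>& x, double dt) {
    std::array<double, 6> xp = x;
    for (int i = 0; i < 3; ++i) xp[i] += dt * x[i + 3];  // p += v * dt
    return xp;
}
```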
In the construction of the tightly coupled measurement equation, the superscript "~" represents a quantity containing a disturbance term. The distance between the UWB mobile tag, calculated using the LiDAR-SLAM position, and the anchor is d̃_lidar,tag. The distance measured using the two-way time-of-flight (TW-TOF) method of UWB is d̃_tag. Assuming that the true distance between the UWB mobile tag and anchor is d_true,tag, we can obtain:

d̃_lidar,tag = d_true,tag + δd_lidar,tag (11)

The TW-TOF ranging information measured by UWB can be expressed as:

d̃_tag = d_true,tag + ε_d,tag (12)

where ε_d,tag denotes the measurement noise. Combining Equations (11) and (12), the measurement model for the integrated system is as follows:

z_k = H_k δx_k + v_k

where:

z_k = [d̃_lidar,tag,1 − d̃_tag,1, d̃_lidar,tag,2 − d̃_tag,2, · · · , d̃_lidar,tag,s − d̃_tag,s]^T

and symbols with subscript j indicate the relevant information of UWB LOS anchor j, j = 1, 2, · · · , s; z_k denotes the measurement vector; H_k denotes the design matrix relating the measurements to the state; δx_k denotes the error of the state vector; and v_k denotes the white noise sequence of the measurements, whose covariance matrix is R_k.
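One row of this measurement model can be sketched as below: the innovation is the LiDAR-SLAM-predicted range minus the UWB TW-TOF range, and the Jacobian of the range with respect to the tag position error is the unit vector from the anchor to the tag. RangeRow and range_row are illustrative names, and the assumption that the leading state block is the 3-D position error is ours.

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

// One innovation row of the tightly coupled model for a single LOS
// anchor: z = d_lidar,tag - d_tag, and h = d(range)/d(position),
// the unit vector pointing from the anchor toward the tag.
struct RangeRow {
    double z;   // range innovation
    Vec3 h;     // Jacobian row w.r.t. the tag position error
};

RangeRow range_row(const Vec3& tag_slam, const Vec3& anchor, double d_uwb) {
    Vec3 diff{tag_slam[0] - anchor[0],
              tag_slam[1] - anchor[1],
              tag_slam[2] - anchor[2]};
    double d = std::sqrt(diff[0]*diff[0] + diff[1]*diff[1] + diff[2]*diff[2]);
    return {d - d_uwb, {diff[0]/d, diff[1]/d, diff[2]/d}};
}
```

Stacking one such row per LOS anchor yields z_k and the corresponding rows of H_k.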

Innovation-Based Outlier-Resistant Robust EKF Algorithm
The filtering method is a common scheme for multisensor data fusion [41][42][43][44]. The standard extended Kalman filter (EKF) carries out the optimal estimation of the real-time state through prediction-measurement-correction, which minimizes the variance in the state error. To maintain the optimal system state, faulty measurements should be excluded prior to the Kalman update. In Section 2.4, we use the LiDAR point cloud to perform some degree of quality control on the UWB raw data, effectively eliminating the NLOS measurements. However, in addition to NLOS propagation, the UWB measurements are also influenced by signal multipath effects, intensity fading and other factors, especially in demanding environments, where measurement anomalies are more frequent and unpredictable.
The theory of robust estimation provides a feasible solution to this problem. It does not strictly require unbiasedness and efficiency of the parameter estimation; by reducing the weights of outliers, it can effectively suppress the influence of abnormal measurements on the filtering results and improve the robustness of the system. The derivation of the REKF is essentially the same as that of the standard EKF, the main difference being the replacement of the measurement noise covariance matrix R_k in the standard EKF with an equivalent covariance matrix R̄_k, thereby adjusting the Kalman gain K̄_k to control the adverse effect of measurement noise on parameter estimation. This paper adopts the two-factor variance inflation model [45] to construct the equivalent covariance matrix R̄_k, whose elements are:

σ̄_ij = γ_ij σ_ij

where the diagonal elements σ_ii denote the measurement noise between the UWB mobile tag and each UWB anchor; the nondiagonal elements σ_ij (i ≠ j) denote the correlation noise between the UWB mobile tag and each UWB anchor, which are considered to be 0 in this paper; and γ_ij = √(γ_ii γ_jj), where γ_ii and γ_jj are inflation factors. γ_ii can be expressed as:

γ_ii = 1, |ṽ_k,i| ≤ k_0
γ_ii = (|ṽ_k,i| / k_0) · ((k_1 − k_0) / (k_1 − |ṽ_k,i|))^2, k_0 < |ṽ_k,i| ≤ k_1
γ_ii = ∞, |ṽ_k,i| > k_1

where the weight reduction factor k_0 and the elimination factor k_1 are set thresholds, usually k_0 = 2.0 ∼ 3.0 and k_1 = 4.5 ∼ 8.5 [18,41], with the best filtering performance in this paper being achieved when k_0 = 2.0 and k_1 = 6.0; and ṽ_k,i is the standardized innovation, which can be expressed as:

ṽ_k,i = v_k,i / √((H_k P_{k,k−1} H_k^T + R_k)_ii), with v_k = z_k − H_k δx_{k,k−1}

where δx_{k,k−1} and P_{k,k−1} denote the a priori estimates of the state error and its covariance matrix, respectively. The standardized innovation ṽ_k,i can better reflect measurement anomalies.
The construction of the inflation factor shows that when a gross error occurs in a measurement, the standardized innovation exceeds the detection threshold of the system and the corresponding variance is inflated, reducing or eliminating the influence of the abnormal measurement on the parameter estimation and ensuring the reliability and positioning accuracy of the system. Otherwise, the measurement is considered normal, and the original variance remains unchanged.
After equivalently replacing the measurement noise covariance matrix, the recursive equations of the REKF are obtained as follows:

$$\bar{K}_k = P_{k,k-1} H_k^{\mathrm{T}}\left(H_k P_{k,k-1} H_k^{\mathrm{T}} + \bar{R}_k\right)^{-1}$$
$$\delta x_k = \delta x_{k,k-1} + \bar{K}_k\left(z_k - H_k\,\delta x_{k,k-1}\right)$$
$$P_k = \left(I - \bar{K}_k H_k\right) P_{k,k-1}$$

where $\bar{K}_k$ denotes the equivalent Kalman gain matrix; $\delta x_k$ and $P_k$ denote the filter estimate and its covariance matrix, respectively; and $I$ denotes the identity matrix.
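As a concrete illustration, the variance inflation and robust update described above can be sketched in Python/NumPy as follows. This is not the authors' C++ implementation: the function names are our own, and a large finite cap stands in for the "infinite" variance of the elimination case.

```python
import numpy as np

def inflation_factor(v_std, k0=2.0, k1=6.0):
    """IGG-III-style two-factor inflation for one standardized innovation."""
    a = abs(v_std)
    if a <= k0:
        return 1.0                                       # normal: keep original variance
    if a <= k1:
        return (a / k0) * ((k1 - k0) / (k1 - a)) ** 2    # suspicious: inflate variance
    return np.inf                                        # gross error: eliminate

def robust_update(dx_pred, P_pred, z, H, R, k0=2.0, k1=6.0):
    """One REKF measurement update using the equivalent covariance matrix R_bar."""
    S = H @ P_pred @ H.T + R                             # innovation covariance
    v_std = (z - H @ dx_pred) / np.sqrt(np.diag(S))      # standardized innovations
    # cap "infinite" inflation at a large finite value for numerical stability
    g = np.array([min(inflation_factor(v, k0, k1), 1e12) for v in v_std])
    gamma = np.sqrt(np.outer(g, g))                      # gamma_ij = sqrt(gamma_ii*gamma_jj)
    R_bar = gamma * R                                    # element-wise variance inflation
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R_bar)
    dx = dx_pred + K @ (z - H @ dx_pred)
    P = (np.eye(len(dx_pred)) - K @ H) @ P_pred
    return dx, P
```

With a gross error in one ranging channel, its standardized innovation exceeds $k_1$, its variance is blown up, and the corresponding Kalman gain column collapses toward zero, so the faulty measurement barely contributes to the state estimate.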

System Hardware
We implement the proposed UWB NLOS identification using the LiDAR point cloud and the improved-UWB/LiDAR-SLAM tightly coupled positioning algorithm in C++ under ROS (Melodic). To evaluate the effectiveness of the proposed algorithm, an integrated positioning experimental platform is applied in this paper, as shown in Figure 6, and two groups of experiments (static and dynamic) are conducted in underground parking lots. The datasets are acquired by a Velodyne VLP-16 3-D LiDAR and Time Domain UWB PulsON440 modules, and the ground truth data are acquired in real time by a Leica TS50 automatic tracking total station. The VLP-16 scanner has a measurement range of up to 100 m with an accuracy of ±3 cm. Its vertical and horizontal fields of view are ±15° and 360°, respectively. When the sampling rate is set to 10 Hz, the 16-line laser scanner provides a vertical angular resolution of 2° and a horizontal angular resolution of 0.2°. The PulsON440 module uses the TW-TOF method to accurately measure the distance between two units with an accuracy of 5 ± 1 cm under LOS conditions, and its sampling rate is set to 2 Hz. The ranging accuracy of the Leica TS50 automatic tracking total station is 0.6 mm + 1 ppm, and its sampling rate is set to 10 Hz. All algorithms are tested on an Intel computer (i7-10875H CPU @ 2.30 GHz, 32 GB RAM and an Nvidia GeForce RTX 2060 GPU) running ROS, allowing easy real-time operation.

Evaluation of the UWB NLOS Identification Algorithm Using Static Data
To test the performance of the UWB NLOS identification algorithm using the LiDAR point cloud, a static experiment is conducted in an underground parking lot of a shopping mall. The experimental layout is shown in Figure 7. UWB anchors 1 and 4 are in the NLOS region, with the line of sight to the UWB mobile tag blocked by a wall column and a vehicle, respectively. UWB anchors 2 and 3 are in the LOS region; anchor 2 is placed on the edge of the wall column, very close to the column but not blocked by it, a deliberate trap we set to test the effectiveness of the algorithm. The positions of the LiDAR and UWB modules are all precisely measured by the total station.

In the static experiment, the distances between the UWB mobile tag and the anchors are measured using the TW-TOF method, as shown in Figure 8, and the true distances are calculated from the total station measurements according to the Euclidean distance formula. The zero values in the purple dashed block in Figure 8 are considered to result from the loss of the current epoch measurement due to factors such as the actual environment and anchor placement, which is a normal phenomenon in UWB data acquisition.

To evaluate the NLOS errors caused by indoor obstacles, we calculate the ranging errors for the four UWB anchors after removing the zero values from the ranging values, as shown in Figure 9. Compared with the LOS scenario, the occlusion of obstacles always causes positive NLOS errors, and a comparison of UWB anchor 1 (Figure 9a) and anchor 4 (Figure 9d) makes clear that different obstacles cause different degrees of NLOS error. Despite the strong penetration capability of the UWB signal, the wall column causes larger NLOS errors than the vehicle, with a root-mean-square error (RMSE) of 3.348 m. When ranging values with positive errors are used for positioning, the positioning error inevitably increases. Therefore, to achieve better positioning accuracy, the identification of NLOS becomes particularly important.

As shown in Figure 10, our proposed NLOS identification algorithm obtains the LiDAR point clouds (purple and orange points) within a radius of 1 m by performing a radius search with a KD-tree along the line-of-sight direction. To prevent the misjudgment of treating the entire point cloud in the neighborhood as obstacle points that block the line of sight, we effectively distinguish between true obstacle points (purple points) and false obstacle points (orange points) through further collision detection, thereby distinguishing between NLOS anchors and LOS anchors and improving the accuracy of NLOS identification.

As shown in Table 1, according to the distances between the UWB mobile tag and the anchors, the theoretical numbers of searches for the four UWB anchors are 5, 7, 6 and 10. During the actual search, obstacle points are detected on the 4th search of UWB anchor 1, and collision detection confirms that the line of sight is blocked by obstacles; therefore, UWB anchor 1 is marked as an NLOS anchor, and the detection is stopped. On the 6th search of UWB anchor 2, obstacle points are detected but are determined by collision detection to be false obstacle points, i.e., the obstacle lies to the side of the line of sight without occluding it. The same situation occurs in the 3rd search of UWB anchor 4, but the result of the 8th search confirms that an obstacle blocks the line of sight, so it is also marked as an NLOS anchor. In our experiment, the NLOS identification times for the four UWB anchors are 0.634 ms, 0.404 ms, 0.034 ms and 0.506 ms, meaning that real-time NLOS identification is easily realized.

The number of NLOS anchors after NLOS identification is shown in Figure 11. Only one NLOS anchor (anchor 1) is identified in the first epoch due to insufficient information in the point cloud map. For each subsequent epoch, the proposed UWB NLOS identification algorithm accurately and efficiently distinguishes between LOS and NLOS anchors, and no further missed or false detections occur. In the static experiment, the success rate of UWB NLOS identification is up to 99.80%.
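The radius search along the line of sight and the subsequent collision detection can be sketched as follows. This is a hypothetical re-implementation of the idea described above using SciPy's `cKDTree`; the search spacing, the corridor width used for collision detection, and the function names are our own assumptions, not the authors' code.

```python
import numpy as np
from scipy.spatial import cKDTree

def identify_nlos(cloud, tag, anchor, radius=1.0, corridor=0.2):
    """Return True if the tag-anchor line of sight is blocked by map points.

    cloud:    (N, 3) LiDAR point cloud map
    tag:      (3,) UWB mobile tag position
    anchor:   (3,) UWB anchor position
    radius:   radius of each KD-tree ball query along the line of sight
    corridor: max perpendicular distance for a point to count as truly blocking
    """
    tree = cKDTree(cloud)
    d = anchor - tag
    dist = np.linalg.norm(d)
    u = d / dist                                   # unit line-of-sight direction
    n_search = int(np.ceil(dist / (2 * radius)))   # ball centers spaced 2*radius apart
    for i in range(n_search):
        center = tag + (2 * radius * i + radius) * u
        idx = tree.query_ball_point(center, radius)
        if not idx:
            continue
        # Collision detection: a neighborhood point truly blocks the link only if
        # it lies close to the segment itself, not merely beside it.
        pts = cloud[np.asarray(idx)]
        t = (pts - tag) @ u                        # projection onto the LOS direction
        on_segment = (t > 0) & (t < dist)
        perp = np.linalg.norm(pts - tag - np.outer(t, u), axis=1)
        if np.any(on_segment & (perp < corridor)):
            return True                            # true obstacle point -> NLOS anchor
    return False                                   # only false obstacle points -> LOS
```

With a 1 m search radius, a 10 m link yields 5 ball queries, matching the order of the theoretical search counts in Table 1; early return on the first true obstacle point mirrors the early-stop behavior described for anchor 1.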

Evaluation of the UWB NLOS Identification Algorithm Using Dynamic Data
To further verify the effectiveness of the UWB NLOS identification algorithm using the LiDAR point cloud, a dynamic experiment is conducted in an underground parking lot of a hospital. The experimental layout, shown in Figure 12, is obviously challenging for the UWB positioning system. We deploy four UWB anchors in a 40 m × 40 m area and use the integrated positioning experimental platform to obtain an evaluation dataset with changing mobility. The purple lines and arrows in Figure 12 indicate the driving route and direction, respectively. Pedestrians and vehicles are allowed to pass intermittently throughout the experiment, so the UWB signals are mainly affected by the fixed indoor structures (e.g., wall columns), the experimenters and passing vehicles.

Figure 13 shows the UWB raw measurements for each anchor and the UWB LOS measurements after NLOS identification. Figure 14 shows the error curves between the UWB measurements and the true values after removing the zero values, with the upper subgraphs showing detailed enlargements of the error curves. Table 2 reports the results of the ranging error evaluation, where 'Max' represents the maximum error. The UWB raw measurements clearly contain a large number of outliers, with RMSEs of 2.210 m, 1.831 m, 1.612 m and 1.075 m and maximum errors of 37.105 m, 30.562 m, 51.658 m and 28.330 m for the four anchors' ranging values, respectively. The RMSEs of the LOS measurements after NLOS identification are reduced to 0.050 m, 0.098 m, 0.100 m and 0.097 m, reductions of 97.74%, 94.65%, 93.80% and 90.98% relative to the raw measurements, respectively, significantly improving the data quality of the UWB measurements.

Figure 15 shows the number of UWB NLOS anchors identified by the NLOS identification algorithm in each epoch. The UWB raw measurements and the UWB LOS measurements are used as inputs to the UWB positioning system, and the parameters are solved using the least-squares (LS) method to obtain the positioning trajectories for LS and for LS with NLOS identification (NLOS identification + LS, NI-LS), as shown in Figure 16. Figure 17a-c shows the position errors for the LS and NI-LS algorithms in the X, Y and plane directions, respectively. Table 3 reports the results of the positioning error evaluation. 'MAE' and 'Std' in Table 3 denote the mean absolute error and standard deviation, respectively, and 'Availability' is calculated by dividing the number of positioning results by the total number of measurements.

The quality of the UWB ranging values seriously affects the positioning accuracy. Direct parameter calculation using the raw measurements reaches an RMSE of 2.752 m in the plane direction, with a maximum error of 30.483 m, which seriously deviates from the ground truth and can no longer maintain a high-precision indoor positioning result. After filtering the UWB data using our NLOS identification algorithm, the positioning accuracy is significantly improved. Our NI-LS method (red) outperforms the LS method (blue) for both error evaluation metrics, reducing the RMSEs of the position error in the X, Y and plane directions by 96.45%, 92.58% and 94.77%, respectively, and reducing the maximum errors to 0.590 m, 0.787 m and 0.826 m, respectively, which provides solid evidence for the effectiveness of the proposed method. However, since 2-D LS positioning requires at least three ranging values, the number of NLOS anchors can be no more than one, as shown by the red points in Figure 15, reducing the availability of the NI-LS method to 54.34%.

Figure 18 shows the NLOS identification time of the proposed method for each anchor in every epoch. The average identification times for the four anchors are 0.809 ms, 0.919 ms, 0.940 ms and 1.111 ms, and the maximum identification times are 14.537 ms, 2.004 ms, 5.063 ms and 4.942 ms, respectively. In our experimental scenario, the average runtime of the whole NLOS identification module is 41.099 ms, and the maximum runtime is 160.754 ms, which does not significantly increase the computation time of the system and satisfies the real-time requirements.

From both the static and dynamic experiments, it is clear that, owing to good information fusion and data association, Algorithm 1 can effectively identify NLOS errors and improve the accuracy and robustness of UWB positioning in complex indoor environments without significantly increasing the amount of computation.
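The 2-D LS positioning step used by the LS and NI-LS methods can be illustrated with a generic Gauss-Newton multilateration sketch. This is a textbook formulation standing in for the solver referred to above, not the authors' code; the anchor layout and tolerances are illustrative.

```python
import numpy as np

def ls_position(anchors, ranges, x0=None, iters=10):
    """2-D least-squares position from anchor coordinates and range measurements.

    anchors: (M, 2) anchor positions; ranges: (M,) measured distances.
    Iteratively linearizes the range equations around the current estimate.
    """
    anchors = np.asarray(anchors, float)
    ranges = np.asarray(ranges, float)
    x = anchors.mean(axis=0) if x0 is None else np.asarray(x0, float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)     # predicted ranges
        H = (x - anchors) / d[:, None]              # Jacobian of range w.r.t. position
        dz = ranges - d                             # range residuals
        dx, *_ = np.linalg.lstsq(H, dz, rcond=None) # normal-equation solution
        x = x + dx
        if np.linalg.norm(dx) < 1e-6:               # converged
            break
    return x
```

At least three ranges are needed for a unique 2-D fix, which is exactly why the NI-LS availability drops whenever more than one anchor is flagged as NLOS.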

Evaluation of the Improved-UWB/LiDAR-SLAM Tightly Coupled Positioning Algorithm
To verify the effectiveness of the proposed improved-UWB/LiDAR-SLAM tightly coupled algorithm with the REKF (NLOS identification + UWB/LiDAR-SLAM tightly coupled + REKF, NI-TC-REKF), several different positioning methods are compared on the evaluation dataset collected in Section 3.3. Considering that the data availability of the NI-LS method is only 54.34%, we take the LOS measurements identified by the NLOS identification algorithm and the positioning results of LeGO-LOAM as the direct inputs of the integrated system. In this experiment, we compare the trajectory solved by the NI-TC-REKF algorithm with those of several other methods. The ground truth is provided by the Leica TS50 automatic tracking total station. Figure 19 compares the trajectories estimated by the different methods with the ground truth. Figure 20a-c shows the position errors for the different algorithms in the X, Y and plane directions, respectively, and the specific statistical results are shown in Table 4. In the table, 'Scale' represents the ability of the algorithm to calculate global coordinates, with red, blue and green results indicating the best, second-best and third-best accuracy, respectively, for each error metric.
The comparison of NI-LS (cyan), LeGO-LOAM (magenta) and TC-EKF (orange) indicates that if the quality of the input sensor data is not controlled, abnormal measurements are likely to interfere with the integrated system and have serious negative effects, resulting in system accuracy that can be even worse than that of a single sensor. Compared with TC-EKF, the NI-TC-EKF (blue) and TC-REKF (green) algorithms substantially improve the performance of the integrated system, with significant improvements in positioning accuracy, by using the UWB LOS data identified by Algorithm 1 as input to the integrated system and by adjusting the measurement noise covariance matrix to reduce the contribution of abnormal measurements to the parameter estimation, respectively. The RMSEs of the TC-REKF algorithm in the X, Y and plane directions are reduced by 91.32%, 91.51% and 91.41%, respectively; its positioning accuracy is close to that of NI-LS and slightly inferior to that of LeGO-LOAM. The RMSEs of the NI-TC-EKF algorithm in the X, Y and plane directions are reduced by 94.05%, 93.80% and 93.90%, respectively, and its positioning performance is better than that of NI-LS (RMSE reductions of 8.86%, 39.17% and 28.47% in the X, Y and plane directions, respectively), LeGO-LOAM (reductions of 17.24%, 3.95% and 10.43%, respectively) and TC-REKF (reductions of 31.43%, 27.00% and 28.97%, respectively), which further illustrates the importance of the data quality of the measurements.
However, NI-TC-EKF is still affected by gross measurement errors in some areas. As shown in Figure 19a, due to the occlusion of the wall column, UWB anchor 3 is identified as an NLOS anchor, and the ranging errors for UWB anchors 1, 2 and 4 are 0.497 m, 0.081 m and 0.102 m, respectively. The positioning errors for NI-LS and NI-TC-EKF in the plane direction are 0.826 m and 0.463 m, respectively.
It is quite obvious that the NI-TC-REKF (red) method, which combines NLOS identification with REKF, is more accurate than the other methods for most of the evaluation metrics (as shown in Table 4). Compared with the NI-TC-EKF algorithm, the RMSEs of the NI-TC-REKF algorithm in the X, Y and plane directions are reduced by 8.33%, 8.22% and 8.74%, respectively. The improvement is lower than that of TC-REKF relative to TC-EKF, which is mainly attributed to the strict control of the NLOS identification algorithm on the quality of UWB data. Compared with the single sensor, the RMSEs of the NI-TC-REKF algorithm in the X, Y and plane directions are reduced by 16.46%, 44.17%, and 34.72% (NI-LS) and 24.14%, 11.84%, and 18.26% (LeGO-LOAM), respectively. As shown in Figure 19, the trajectory of NI-TC-REKF in multiple regions is also superior to those of the other methods and more consistent with the ground truth, such as Figure 19b,c. As shown in Figure 20, the error peaks and spike occurrence ratios of NI-TC-REKF are significantly reduced, and the positioning accuracy and robustness are greatly improved. Table 5 reports the computational load for each NI-TC-REKF module. In our tests, the LiDAR-SLAM module is the dominant computational load of NI-TC-REKF algorithm, with an average runtime of 560.849 ms. The NLOS identification module and the tightly coupled module introduce an additional time consumption of 7.33% and 0.003% of LiDAR-SLAM, respectively. In the tightly coupled module, the average runtimes of NI-TC-REKF and NI-TC-EKF are 0.017 ms and 0.016 ms, and the maximum runtimes are 0.124 ms and 0.119 ms, respectively. The positioning accuracy is improved effectively while occupying only a small amount of memory resources. 
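The error metrics used throughout Tables 3 and 4 (RMSE, MAE, Std and Max of the plane position errors) follow their standard definitions and can be computed as in the plain sketch below; this is a generic re-implementation of the definitions, not the authors' evaluation script.

```python
import numpy as np

def position_metrics(est, truth):
    """Standard position-error statistics over a trajectory.

    est, truth: (N, 2) arrays of estimated and ground-truth plane positions.
    Returns RMSE, MAE, Std and Max of the per-epoch plane errors.
    """
    err = np.linalg.norm(np.asarray(est) - np.asarray(truth), axis=1)
    return {
        "RMSE": float(np.sqrt(np.mean(err ** 2))),  # root-mean-square error
        "MAE": float(np.mean(np.abs(err))),         # mean absolute error
        "Std": float(np.std(err)),                  # standard deviation of errors
        "Max": float(np.max(err)),                  # worst-case error
    }
```

Availability, as defined in Table 3, is then simply the number of epochs with a valid position fix divided by the total number of measurement epochs.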
In summary, the NI-TC-REKF algorithm performs best when considering the accuracy, availability and ability to obtain global coordinates, fully validating the effectiveness of the proposed tightly coupled method based on the combination of NLOS identification and the REKF.

Conclusions
The presence of indoor obstacles affects the accuracy of UWB ranging to varying degrees, thereby reducing the accuracy and reliability of the positioning system. To obtain robust and reliable positioning results in complex indoor environments, we perform the following work. First, we propose a method to detect, identify and eliminate UWB NLOS errors efficiently and accurately through well-designed sensor fusion and data association, exploiting the complementarity of UWB and LiDAR to improve the performance of UWB dynamic positioning in real scenarios. Second, to make full use of both UWB and LiDAR sensors, an improved-UWB/LiDAR-SLAM tightly coupled positioning algorithm is proposed by combining the improved UWB measurements after NLOS identification with the positioning results of LeGO-LOAM. Additionally, to better suppress the effects of UWB gross errors on the integrated system, the REKF is used for accurate position estimation. The effectiveness and performance of the proposed methods are verified and evaluated through a series of experiments.

(1) Both static and dynamic experiments show that the quality of the UWB LOS measurements after NLOS identification achieves satisfactory results, and the proposed algorithm can effectively identify NLOS errors without adding a large amount of computation. Furthermore, the accuracy of the ranging values (the RMSE of the ranging values is reduced by 94.29% on average in the dynamic experiment) and the robustness of the position estimation are substantially improved, and the success rate of NLOS identification in the static experiment is up to 99.80%.

(2) The tightly coupled method NI-TC-REKF, which combines NLOS identification and the REKF, outperforms the other comparative methods in terms of positioning performance and robustness, achieving satisfactory control of sensor errors with RMSEs of 0.066 m, 0.067 m and 0.094 m in the X, Y and plane directions, respectively, and maximum errors of 0.317 m, 0.557 m and 0.566 m, respectively.
The proposed NI-TC-REKF algorithm provides a novel solution for high-precision positioning applications in GNSS-denied environments, such as automatic driving, reverse car-searching in parking lots and warehousing logistics. However, the UWB ranging values are not fully exploited in this paper, and the complete exclusion of NLOS measurements may lead to a poor geometric distribution of anchors, which can backfire and increase positioning errors. In future research, we will aim to better judge when to discard and when to use NLOS measurements to improve the availability of data and the reliability of positioning. In addition, we are considering integrating a low-cost inertial measurement unit (IMU) into the existing platform, using the high-frequency attitude information output by the IMU to improve the quality of LiDAR mapping and thereby improve the robustness and accuracy of the integrated system for higher-speed, more agile platforms.