LiDAR / RISS / GNSS Dynamic Integration for Land Vehicle Robust Positioning in Challenging GNSS Environments

The autonomous vehicle (AV) industry has a growing demand for reliable, continuous, and accurate positioning information to ensure safe traffic and various other applications. Global navigation satellite system (GNSS) receivers have been widely used for this purpose. However, GNSS positioning accuracy deteriorates drastically in challenging environments such as urban environments and downtown cores. Therefore, inertial sensors are widely deployed inside the land vehicle for various purposes, including integration with GNSS receivers to provide positioning information that can bridge potential GNSS failures. However, in dense urban areas and downtown cores where GNSS receivers may incur prolonged outages, the integrated positioning solution may become prone to severe drift, resulting in substantial position errors. Therefore, it is becoming necessary to include other sensors and systems available in future land vehicles to be integrated with both the GNSS receivers and inertial sensors to enhance the positioning performance in such challenging environments. This work aims to design and examine the performance of a multi-sensor system that fuses the GNSS receiver data not only with the three-dimensional reduced inertial sensor system (3D-RISS), but also with the three-dimensional point cloud of an onboard light detection and ranging (LiDAR) system. In this paper, a comprehensive LiDAR processing and odometry method is developed to provide a continuous and reliable positioning solution. In addition, a multi-sensor Extended Kalman filtering (EKF)-based fusion is developed to integrate the LiDAR positioning information with both GNSS and 3D-RISS and utilize the LiDAR updates to limit the drift in the positioning solution, even in challenging or denied GNSS environments.
The performance of the proposed positioning solution is examined using several road test trajectories in both Kingston and Toronto downtown areas, involving different vehicle dynamics and driving scenarios. The proposed solution provided a 64% performance improvement over the standalone inertial solution. Over a GNSS outage of 10 min and 2 km of distance traveled, our solution achieved position errors of less than 2% of the distance traveled.

Author Contributions: A.A., A.S.E.-W., H.E. and A.N.; investigation, A.A., A.S.E.-W., H.E. and A.N.; data collection, A.A., A.S.E.-W. and H.E.; writing—original draft preparation, A.A.; writing—review and editing, A.S.E.-W., H.E. and A.N.; supervision, A.N.; funding acquisition, A.N. All authors


Introduction
Reliable positioning and navigation are vital for self-driving car applications. Using position fixing (PF) techniques such as the Global Navigation Satellite System (GNSS), the vehicle can navigate easily in unknown environments. The GNSS provides an absolute long-term positioning solution when in line of sight with four or more satellites. However, reliable GNSS signals are not guaranteed in all environments due to satellite signal blockage, poor satellite geometry, and multipath in dense urban areas [1]. Therefore, the Inertial Navigation System (INS), which is a dead reckoning (DR) technique, is usually integrated with the GNSS receiver to provide a positioning solution in case of GNSS failure. The INS is an autonomous system that, unlike the GNSS, requires no external information to calculate the location information [2]. The INS consists of inertial sensors such as accelerometers and gyroscopes, which, given an initial position, velocity, and attitude, provide a positioning solution [3,4]. However, the INS positioning solution is accurate only in the short term, because errors in the sensor measurements accumulate over time and require external information to be reset [5]. The GNSS and the INS have complementary features, which led to the trend of integrating both systems using different filtering techniques to obtain a more reliable and accurate solution that mitigates each sensor's drawbacks [6]. The INS/GNSS integrated navigation system is the most popular form of integration, in which the GNSS provides measurement updates to the INS via Kalman Filtering (KF). While the GNSS measurements prevent the inertial sensors from drifting by resetting their errors, the INS smooths the GNSS solution and provides a solution in the case of a GNSS outage. Low-cost micro-electro-mechanical system (MEMS)-based inertial sensors are used to cut the cost of such a navigation system.
During the past decade, the performance of low-cost MEMS-based inertial sensors has improved so significantly that they have found many uses, mainly in the automobile industry [7].
Moreover, a three-dimensional reduced inertial sensor system (3D-RISS) was developed to be integrated with the GNSS to provide a more reliable navigation solution for land vehicles. The 3D-RISS minimizes the use of accelerometers and gyroscopes, thereby reducing the accumulated drift errors [8]. The 3D-RISS mechanization method is developed mainly for land vehicle navigation, as it also utilizes the vehicle's wheel odometer, which measures the forward speed of the vehicle used in calculating the pitch and roll angles. However, the integrated solution fails to provide a reliable navigation solution during extended outages due to the absence of a GNSS signal to reset the residual biases that accumulate over time.
AVs are anticipated to navigate not only on controlled highways but also in suburban and dense urban areas. The realization of such a goal depends on the development of robust, reliable, fully autonomous systems that can make decisions based on the perception provided by their sensors [9]. Therefore, autonomous cars house perception sensors that help position the vehicle in these situations. Generally, the most common sensors used for integration with the INS are cameras, Light Detection and Ranging (LiDAR), and Radio Detection and Ranging (RADAR). Compared to the camera, the LiDAR has higher resolution and can provide detailed structural information about the surrounding area. Also, laser scanners are not affected by ambient light like cameras and have a higher resolution than RADARs. As LiDAR sensors become cheaper and smaller, they are becoming more suitable for autonomous vehicle applications. Therefore, the work in this paper utilizes a mechanically rotating 3D-LiDAR that provides a 360° panoramic view of the surrounding environment.
As the GNSS becomes unreliable in downtown cores, studies emerged that favored DR solutions such as simultaneous localization and mapping (SLAM) and odometry methods. Those techniques enabled the calculation of the position and orientation of the vehicle based on the data obtained from the onboard sensors. LiDAR Odometry (LO) is based on the laser reflections from surrounding objects to estimate the position and orientation given a starting point. A method using LO was introduced in [10] to estimate the movement between frames and match the estimated solution to a road map using particle filtering. The authors in [11] suggested that when the LiDAR scanning rate is higher than the extrinsic motion, the standard Iterative Closest Point (ICP) can be used to compute a moving object's velocity to address the motion distortion introduced in a single-axis 3D-LiDAR.
However, the traditional ICP is computationally expensive, according to the authors in [12]. Using a LiDAR, an IMU, and a wheel odometer, the authors in [13] suggested an integration scheme for pose estimation. They used a point-to-plane ICP approach for the registration of the LiDAR point clouds and an EKF for multi-sensor fusion, as well as a point cloud data map. However, their experiments were done in a controlled and previously mapped environment with nearly static scenes. The point-to-plane approach they used is much faster in terms of computational complexity; however, using a full IMU mechanization along with the proposed ICP increases the overall computational complexity of the system. Another approach, in [14], estimates the pose by fusing LO extracted using the ICP with pseudo cellular ranges via an EKF to accurately estimate the heading of the vehicle. A GNSS/INS/LiDAR-SLAM integrated navigation system was proposed in [15], where the GNSS/INS result and the relative pose from a 3D probability map were matched using graph optimization. However, in [15], the LiDAR point cloud was not preprocessed, and thus many errors would accumulate in such a method.
In this paper, a multi-sensor fusion approach is introduced to bridge GNSS outages and provide reliable and continuous positioning and navigation information. The proposed system is based on integrating LiDAR, INS, and GNSS, supported by a method that smartly selects the best combination of these three systems to provide the most reliable positioning information. The specific contributions of this paper are the following:
• Design and implementation of a denoising method for the raw LiDAR point cloud to remove outliers, thus reducing the computational complexity and enhancing the performance.
• Design and implementation of a selection criterion that automatically selects the set of sensors/systems to be included in the fusion filter to mitigate the drift in the positioning solution.
• Assessment of the performance of the proposed system based on road test experiments to quantitatively evaluate its merits and limitations compared to a high-end reference solution.
The paper is structured as follows. In Section 2, the system architecture of the LiDAR/RISS/GNSS integrated navigation system is explained, along with the mathematical model of the reduced inertial system. In Section 3, the experimental setup is presented. Next, in Section 4, the results are discussed. Finally, Section 5 concludes the paper, along with future work prospects.

System Architecture and Mathematical Model
In prolonged and frequent GNSS outages, the INS/GNSS integration suffers from biases and drifts that grow with time. The drifts in the INS mechanization are caused by the implicit mathematical integrations associated with the algorithm. In a classic INS mechanization, all six degrees of freedom (DoF) of the IMU are used, integrating the rotation rates of the three orthogonal gyroscopes to compute the attitude angles. Therefore, for any given gyroscope error, the error in the transformation from the body frame to the local level frame increases linearly in time. Also, since the velocities are calculated by integrating the three-axis accelerometer readings rotated by the attitude angles, the linear gyroscope error becomes quadratic. Eventually, to get the 3D-position, the velocity is integrated again, and the quadratic error becomes cubic [6]. The land vehicle's motion has three characteristics, defined in [16], which show that three orthogonal gyroscopes along with a forward accelerometer are sufficient for land vehicle navigation. However, increasing the number of gyroscopes, as mentioned before, is proven to increase the inherent errors from the integration process. Therefore, to further simplify the system, a single gyroscope along with two accelerometers and the vehicle's wheel odometer is used to decrease the inherent errors in the system.

Three-Dimensional Reduced Inertial Sensor System (3D-RISS)
The 3D-RISS utilizes only the vertical gyroscope (w_z), the forward accelerometer (f_x), and the transversal accelerometer (f_y) from the IMU, along with the wheel odometer of the vehicle. The odometer measures the vehicle's forward velocity (v_od), which is differentiated between epochs to obtain the vehicle's forward acceleration (a_od). Figure 1 gives a detailed description of the mechanization process [17].
The 3D-RISS utilizes the denoised sensor measurements from (f_x, f_y, w_z) and the measured forward velocity from the odometer as the mechanization inputs. Accordingly, the navigation state vector of the 3D-RISS is given by Equation (1), where (ϕ, λ, h) corresponds to the 3D-position components, namely the latitude, longitude, and altitude, respectively; (v_e, v_n, v_u) represents the velocity components in the local level frame (LLF), namely the East, North, and Upward velocities; and (r, p, Azi) are the attitude angles: the roll, pitch, and azimuth. The pitch and roll angles are calculated at each epoch k as shown in Equations (2) and (3).
The bias in the gyroscope readings (b_z) is taken into consideration. Also, the azimuth angle is calculated at each epoch k as shown in Equation (4).
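The equation bodies referenced above did not survive extraction. As a hedged reconstruction from the symbol definitions in the text and the standard 3D-RISS derivation (the authors' exact forms may differ), Equations (1) through (4) typically read:

```latex
% (1) 3D-RISS navigation state vector
x_k = \begin{bmatrix} \varphi & \lambda & h & v_e & v_n & v_u & r & p & Azi \end{bmatrix}^T
% (2) pitch from the forward accelerometer, corrected by the odometer-derived acceleration
p_k = \sin^{-1}\!\left(\frac{f_x - a_{od}}{g}\right)
% (3) roll from the transversal accelerometer, corrected for rotation-induced acceleration
r_k = -\sin^{-1}\!\left(\frac{f_y + v_{od}\, w_z}{g \cos p_k}\right)
% (4) azimuth update from the bias-corrected vertical gyroscope, compensating
%     the Earth rotation component and the transport rate
Azi_k = Azi_{k-1} - \left(w_z - b_z - w_e \sin\varphi - \frac{v_e \tan\varphi}{R_N + h}\right)\Delta t
```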
where (R_N) is the normal radius of the Earth's ellipsoid and (w_e) is the Earth's rotation rate. Then, using the attitude angles, the forward speed measured by the odometer is projected onto the East (v_e,k), North (v_n,k), and Upward (v_u,k) velocity components in the local level frame, as illustrated in Equation (5). After that, the 3D-position states, latitude (ϕ), longitude (λ), and altitude (h), are derived at each epoch by integrating the velocities in the LLF and scaling them by the normal (R_N) and meridian (R_M) radii, as shown in Equation (6). Therefore, by reducing the number of inertial sensors used, the 3D-RISS reduces the complexity of the system. Also, using the odometer and the accelerometers, rather than the gyroscopes, to calculate the pitch and roll angles eliminates the need for numerical integration, so these angles exhibit no drift or error growth.
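Equations (5) and (6) are likewise missing from the extracted text. Under the standard RISS mechanization, they project the odometer speed through the attitude angles and integrate the LLF velocities into geodetic coordinates (a reconstruction, not the authors' verbatim equations):

```latex
% (5) odometer speed projected onto the LLF axes
v_{e,k} = v_{od}\sin(Azi_k)\cos(p_k),\quad
v_{n,k} = v_{od}\cos(Azi_k)\cos(p_k),\quad
v_{u,k} = v_{od}\sin(p_k)
% (6) geodetic position update from the LLF velocities
\varphi_k = \varphi_{k-1} + \frac{v_{n,k}\,\Delta t}{R_M + h},\qquad
\lambda_k = \lambda_{k-1} + \frac{v_{e,k}\,\Delta t}{(R_N + h)\cos\varphi_k},\qquad
h_k = h_{k-1} + v_{u,k}\,\Delta t
```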

LiDAR Odometry
Scan matching algorithms have become widely popular in recent years as they provide an understanding of the movement of the points between two consecutive scans. The ICP is widely used, as it is one of the dominant solutions for aligning three-dimensional models based on the Euclidean distance [18], providing an initial estimate of the relative pose of the vehicle. The accuracy of the registration using ICP depends on the number of points and the nature of the scene. Therefore, preprocessing steps are required, as navigating in a dynamic environment needs context awareness of the objects surrounding the vehicle, as is done in most perception pipelines [19]. Therefore, point cloud segmentation is required before using the ICP to register the point clouds. Figure 2 shows a flowchart of the point cloud preparation.

Ego Points Removal
The 3D-LiDAR used generates an organized point cloud in the form of an M-by-N-by-3 array containing the (x, y, z) coordinates of the points, in meters, in the LiDAR body frame. It has a 360° horizontal field of view (FOV) and a 30° vertical FOV. Because it is mounted on the vehicle's roof, ego vehicle points appear in each scan, which can affect the ICP algorithm's accuracy. Therefore, to determine the ego vehicle points in the point cloud, the mounting location of the LiDAR is estimated in the LiDAR body frame considering the vehicle's dimensions. The vehicle's measurements are defined as shown in Equation (7), assuming that the LiDAR is mounted horizontally with respect to the ground plane.
where (X_min, X_max) corresponds to the width of the vehicle, (Y_min, Y_max) to the length of the vehicle, and (Z_min, Z_max) to the height of the vehicle. With the aid of the vehicle's entered dimensions, a cube is constructed around the mounting location of the LiDAR. Any points lying inside the constructed cube are considered ego vehicle points and are therefore eliminated from the scene. Figure 3 shows the ego vehicle points in a point cloud labeled in red.
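The cube test above amounts to a simple bounding-box filter. A minimal sketch in Python/NumPy, with hypothetical vehicle dimensions standing in for Equation (7)'s limits:

```python
import numpy as np

def remove_ego_points(points, x_lim=(-1.0, 1.0), y_lim=(-2.5, 2.5), z_lim=(-2.0, 0.5)):
    """Remove points inside the vehicle bounding box (LiDAR body frame, meters).

    `points` is an (N, 3) array; the default limits are hypothetical vehicle
    dimensions measured from the roof-mounted LiDAR, not the paper's values.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    inside = ((x_lim[0] <= x) & (x <= x_lim[1]) &
              (y_lim[0] <= y) & (y <= y_lim[1]) &
              (z_lim[0] <= z) & (z <= z_lim[1]))
    return points[~inside]
```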

Segmentation of the Ground Points from the Point Cloud
After removing the ego vehicle points from the point cloud scene, the ground points are removed next. It is assumed that the LiDAR is mounted horizontally with respect to the ground plane and that the ground appears in at least the lowest row of the point cloud. Therefore, an elevation angle (α) is set as a threshold, and the point with the lowest orientation angle (θ_i) is labeled as a ground point. Then, the elevation angle is computed between a ground point and its four-connected neighbor points. A neighboring point is labeled as a ground point if the difference is below the specified (α). Figure 4 shows the ground points labeled in yellow, while the ego vehicle points are labeled in red.
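The seeded, neighbor-propagating labeling described above can be sketched for a single vertical column of an organized scan; the α threshold value here is illustrative, not the authors':

```python
import numpy as np

def label_ground_column(column_xyz, alpha_deg=5.0):
    """Label ground points along one vertical column of an organized scan.

    `column_xyz` is an (M, 3) array ordered lowest ring first. The lowest
    point is seeded as ground; a neighbor is labeled ground when the
    elevation angle of the segment joining it to the last accepted ground
    point stays below `alpha_deg` (an assumed threshold).
    """
    is_ground = np.zeros(len(column_xyz), dtype=bool)
    is_ground[0] = True          # seed: lowest ring assumed to hit the ground
    last = column_xyz[0]
    for i in range(1, len(column_xyz)):
        dx, dy, dz = column_xyz[i] - last
        horiz = np.hypot(dx, dy)
        angle = np.degrees(np.arctan2(abs(dz), horiz + 1e-9))
        if angle < alpha_deg:
            is_ground[i] = True
            last = column_xyz[i]
    return is_ground
```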

LiDAR Point Cloud Clustering
Point cloud clustering methods have been widely used to group points with similar spectral features into the same homogeneous pattern [20]. The 3D-LiDAR used in this work is the Velodyne PUCK LITE™ [21], which has 16 channels of individual laser pulse projectors. This LiDAR has a 2° vertical angular resolution, while the horizontal angular resolution ranges from 0.1° to 0.4° depending on the scanning speed. This considerable difference between the vertical and horizontal angular resolutions has an impact on the clustering process. The clusters are created based on the Euclidean distance between neighboring points A and B, with a threshold set for the maximum distance between two neighboring points. A potential problem occurs when two laser measurements are within the distance threshold but belong to different objects. Therefore, another constraint on the angle between the neighboring points and the LiDAR is employed. A line passing through the sensor at O and point A forms the first side of the angle; the second side is formed by a line through points A and B, as illustrated in Figure 5. An angle (β) is formed, defined as the angle between the laser beam from the sensor and the line connecting A and B. If the angle (β) is greater than the specified angle threshold, the points are grouped in the same cluster. Figure 6 shows the clustered point cloud with each cluster given a unique color. It was observed that some clusters contained only one or two points; clusters with so few points are discarded from the point cloud scene.
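The β criterion can be computed directly from two neighboring ranges and the angular step between their beams. A sketch, with an illustrative threshold (the paper does not state its value):

```python
import math

def same_cluster(r1, r2, alpha_rad, beta_threshold_rad=math.radians(10)):
    """Angle-based criterion for grouping two neighboring range returns.

    r1 and r2 are the ranges to neighboring points A and B, and alpha_rad
    is the angular step between the two beams. beta is the angle between
    the longer beam and the line AB; a small beta means the two returns lie
    almost along the same line of sight and likely belong to different
    objects. The 10-degree threshold is an assumed, illustrative value.
    """
    d_far, d_near = max(r1, r2), min(r1, r2)
    beta = math.atan2(d_near * math.sin(alpha_rad),
                      d_far - d_near * math.cos(alpha_rad))
    return beta > beta_threshold_rad
```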

Point Cloud Denoising
The point cloud may contain points that are beyond the scanning range of the LiDAR or for which the laser encountered an absorbing material that attenuated the pulse. When this happens, the LiDAR registers these points in the point cloud as either infinite (Inf) or Not a Number (NaN). Other points are noisy and irrelevant to the scene, which can cause noisy outputs from the ICP algorithm. Therefore, the point cloud is denoised to remove the outliers and noise from the scene. The denoising is based on computing the standard deviation (SD) of the average distance to the neighboring points, as shown in Equation (8): SD = sqrt((1/N) Σ_{i=1..N} (D_i − D̄)²), where D_i is the distance measured at point i, D̄ is the mean of the distances to the neighboring points, and N is the number of neighboring points. The term (D_i − D̄) is the deviation of the distance of point i from the mean. Increasing the number of neighboring points N increases the number of computations, while decreasing it makes the filter more sensitive to noise. Figure 7a shows the raw point cloud, with red circles highlighting the most distinctive points removed from the scene; Figure 7b illustrates the denoised point cloud.
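The SD-based filter can be sketched as a statistical outlier removal over mean neighbor distances; the neighbor count and SD multiplier below are illustrative, and a brute-force search is used for clarity:

```python
import numpy as np

def denoise_point_cloud(points, num_neighbors=8, std_ratio=1.0):
    """Statistical outlier removal, a sketch of Equation (8)'s idea.

    For each point, the mean distance to its `num_neighbors` nearest
    neighbors is computed; points whose mean distance deviates from the
    global mean by more than `std_ratio` standard deviations are dropped.
    O(N^2) brute-force neighbor search; parameters are assumed values.
    """
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)                 # pairwise distances
    dists.sort(axis=1)                                    # column 0 is self (0.0)
    mean_nn = dists[:, 1:num_neighbors + 1].mean(axis=1)  # mean neighbor distance
    mu, sd = mean_nn.mean(), mean_nn.std()
    keep = np.abs(mean_nn - mu) <= std_ratio * sd
    return points[keep]
```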

Point Cloud Downsampling
The 3D-LiDAR generates around 300,000 points per second [21]; therefore, any scan matching algorithm requires high computational power to process the point cloud [22]. The downsampling step is thus essential to reduce the computational complexity of the scan matching algorithm. Downsampling is achieved by applying a box grid filter to the point cloud. The grid size is chosen so that the number of points is reduced without losing any distinctive features in the point cloud scene. The downsampling step is the most critical parameter of the scan matching algorithm, as it controls the speed of the registration. Figure 8a shows the original point cloud, and Figure 8b shows the downsampled point cloud.

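A box grid filter can be sketched as voxel averaging: points falling in the same grid cell are replaced by their centroid. The grid size below is an illustrative value:

```python
import numpy as np

def box_grid_downsample(points, grid_size=0.5):
    """Box-grid (voxel) downsampling: average the points in each cell.

    `grid_size` is in meters and is an assumed value; larger cells mean
    fewer points but risk erasing distinctive features the scan matcher
    relies on.
    """
    keys = np.floor(points / grid_size).astype(np.int64)  # cell index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse).astype(float)
    out = np.zeros((len(counts), 3))
    for dim in range(3):                                  # per-cell centroid
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out
```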

Iterative Closest Point (ICP)
The ICP is the method used in this work to register the point clouds and obtain the translation and rotation between them. The ICP relies on finding the least-squares rigid transformation that minimizes the distance between two point cloud sets. Several implementations of the ICP suffer from a tendency to diverge if the initial alignment is not proper. This divergence is overcome by setting an initial point of (0, 0, 0) in the (x, y, z) coordinates of the LiDAR body frame. A six-DoF spatial rigid transformation in three-dimensional Euclidean space is estimated that preserves the Euclidean distance between the point clouds. After the preprocessing is done on the point cloud, the ICP algorithm registers the points in the fixed point cloud with the closest points in the moving point cloud to achieve the minimum Root Mean Squared (RMS) error. A point-to-plane approach, a variant of the ICP, is applied in this work. This method leverages the surface normal information by minimizing the RMS error between a point and its tangent plane, improving robustness and accuracy. The ICP algorithm output consists of a transformation matrix, as shown in Equation (9), and the RMS error of the transformation.
The T_form is a 4 × 4 matrix combining the translations and rotations that represent how objects move in the world. The translations are defined as the fourth and last row of T_form, which comprises (Δx, Δy, Δz), while the angles (θ_x, θ_y, θ_z) are the Euler angles that define the rotation about the (x, y, z) axes, respectively.
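Since the text places the translations in the fourth row, the convention is evidently row-vector (post-multiplication) form. A sketch of Equation (9) consistent with that description (the authors' exact typeset form may differ):

```latex
% (9) rigid transformation, row-vector convention: translation in the 4th row
T_{form} =
\begin{bmatrix}
 R(\theta_x,\theta_y,\theta_z) & 0_{3\times 1}\\
 \begin{matrix}\Delta x & \Delta y & \Delta z\end{matrix} & 1
\end{bmatrix},
\qquad
\begin{bmatrix} x' & y' & z' & 1 \end{bmatrix}
=
\begin{bmatrix} x & y & z & 1 \end{bmatrix} T_{form}
```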
It is expected that when the vehicle is stationary, pedestrians and cars will still be moving in the scene. This phenomenon can lead to incorrect registration of the point clouds and, as a result, significant deviations in the translations and rotations. However, it was observed that when the vehicle is stationary, the RMS error between the point clouds is minimal compared to when the vehicle is mobile. Therefore, when the ICP algorithm starts, a few epochs are first used to calculate an average RMS error for the ICP. This average is used as a threshold to detect when the registration process does not provide reliable measurements. Because the vehicle's speed and the ICP RMS error are correlated, the vehicle's odometer is used together with the RMS error to stop the ICP registration: a threshold of 1 m/s is set for the vehicle's velocity. This ensures that when the car is nearly at rest, where the RMS error should be below the average calculated at first, a spike in the RMS error at low speeds does not affect the stopping of the ICP registration. Figure 9 shows, over 180 s, the relation between the RMS error and the vehicle's speed, highlighting the calculated average RMS error of the ICP, which is 0.6 m in this scenario.
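The stopping rule described above (learned average RMS error plus a 1 m/s speed gate) can be sketched as follows; the 2x tolerance on the learned average is an assumed factor, not stated in the paper:

```python
def accept_icp_update(rms_error, speed_mps, rms_avg, speed_threshold=1.0):
    """Decide whether the current ICP registration is trustworthy.

    `rms_avg` is the average RMS error collected over the first few epochs.
    Below the 1 m/s speed gate the registration is paused, so RMS spikes
    caused by moving pedestrians or cars near a stopped vehicle cannot
    corrupt the odometry; while moving, the update is accepted only if the
    RMS error stays near the learned average (2x factor is illustrative).
    """
    if speed_mps < speed_threshold:
        return False                   # vehicle ~at rest: pause registration
    return rms_error <= 2.0 * rms_avg  # reject gross mismatches while moving
```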
The LO output is composed of the displacement and heading from the initial point (0, 0, 0) to the destination. These movements are the accumulation of the translations in the (x, y, z) directions in the LiDAR body frame and are transformed from the body frame to the LLF. The transformation to the LLF requires an initial point with curvilinear coordinates (ϕ_0, λ_0, h_0) in the LLF to be known, along with its heading (Azi). After that, the position in the rectangular coordinates (E, N, U) is calculated as in Equation (10), where (ΔP_E,k, ΔP_N,k, ΔP_Up,k) denotes the change in position in the East, North, and Upward directions at each epoch k in the LLF. The (ΔP_Up,k) is then accumulated each epoch on the initial altitude to calculate (h_LiDAR,k), as depicted in Equation (11).
Then, the (ΔP_N_k) and (ΔP_E_k) in each epoch are used to obtain the (φ_LiDAR_k) and (λ_LiDAR_k) of the geodetic coordinates in Equations (12) and (13), respectively.
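The accumulation described by Equations (11)–(13) can be sketched as below. The WGS-84 radii-of-curvature formulas used here are standard geodesy but are an assumption, since the paper's exact equations are not reproduced in this excerpt.

```python
import math

# Illustrative per-epoch accumulation of LiDAR odometry increments into
# geodetic coordinates, in the style of Equations (11)-(13).

A = 6378137.0          # WGS-84 semi-major axis (m)
E2 = 6.69437999014e-3  # WGS-84 first eccentricity squared

def accumulate_llf(lat, lon, h, dE, dN, dU):
    """One epoch of position accumulation in the local-level frame.

    lat, lon in radians; h, dE, dN, dU in metres.
    """
    s = math.sin(lat)
    r_m = A * (1 - E2) / (1 - E2 * s * s) ** 1.5   # meridian radius of curvature
    r_n = A / math.sqrt(1 - E2 * s * s)            # normal (prime vertical) radius
    h_new = h + dU                                  # cf. Eq. (11): altitude
    lat_new = lat + dN / (r_m + h_new)              # cf. Eq. (12): latitude
    lon_new = lon + dE / ((r_n + h_new) * math.cos(lat_new))  # cf. Eq. (13): longitude
    return lat_new, lon_new, h_new
```

A positive ΔP_N moves the latitude north and a positive ΔP_E moves the longitude east, as expected.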

LiDAR/RISS/GNSS Integration
The LiDAR/RISS/GNSS integrated navigation system is proposed to reset the biases of the inertial sensors in the 3D-RISS solution, as shown in Figure 10. The system utilizes the EKF in a loosely-coupled fashion for the integration. The 3D-RISS is used for prediction, while the switching algorithm toggles between the GNSS and the LiDAR to provide the measurement updates in the EKF. If both sensors fail to provide a reliable solution, the 3D-RISS works as a standalone system to provide the navigation solution.

Switching Criterion
In order to maximize the benefits of integrating GNSS, LiDAR, and inertial sensors, a method is developed to choose the system (GNSS or LiDAR) that is most reliable for integration with the RISS, so that the position measurement update switches between GNSS and LiDAR. The switching criterion of this Multi-Sensor System (MSS) relies on several factors drawn from the readings of the GNSS receiver, the ICP algorithm, and the IMU measurements, yielding a dynamic and robust integrated navigation system in urban environments. At each epoch, the GNSS receiver provides the number of satellites it is observing along with the value of the Geometric Dilution of Precision (GDOP). The GDOP value characterizes the satellite geometry observed by the receiver [1]. Also, the SD of the 3D-position provided by the GNSS indicates the reliability of the GNSS performance. Since an LC integration is utilized, the number of observed satellites plays a major role in the accuracy of the RISS/GNSS integration: the GNSS fails to provide a reliable solution if the number of observed satellites falls below four. Moreover, if the GDOP value is too high, the GNSS measurements are also unreliable. The GDOP is calculated by relating the variances of the user's position and time-bias errors (latitude, longitude, altitude, and time) to the pseudo-range error of the GNSS:

GDOP = √(q_xx + q_yy + q_zz + q_tt) / σ

where (q_xx, q_yy, q_zz, q_tt) represent the variance of the estimated user position in each axis and in the user time offset, and σ is the pseudo-range error SD. It is also observed that when the SD of the 3D-position starts to increase, the GNSS positioning solution starts to drift from the reference solution. Therefore, the switching architecture acts according to the GDOP value and the SD of the 3D-position: if the GDOP rises above two, the SD of the 3D-position provided by the GNSS receiver is checked.
If the SD is increasing over time and exceeds the specified 1 m threshold, the switching architecture changes to the LiDAR to provide the measurement updates within the EKF. The SD of each position component can be obtained by scaling the corresponding DOP by the pseudo-range error SD σ:

σ_E = σ · EDOP, σ_N = σ · NDOP, σ_U = σ · VDOP

where σ_E, σ_N, and σ_U are the SDs of the East, North, and Up components of the position error, respectively, and the diagonal elements of H correspond to the East, North, vertical, and time DOP, respectively [6]. However, the SD in this paper is provided directly from the GNSS receiver [23]. Furthermore, the switching algorithm moves away from the LiDAR based on the RMS error of the ICP algorithm: if the RMS error rises above the calculated average, the system switches to the 3D-RISS standalone, using the last reliable measurement as an initial point. The 3D-RISS solution starts to drift when a sudden dynamic is captured by the IMU. Therefore, a threshold is set on (w_z) between 0.1 and −0.1 to maintain the 3D-RISS-only solution; these bounds were specified based on observation and analysis, as readings above 0.1 or below −0.1 reflect a vehicle dynamic, either a lane change or a turn. The algorithm switches back to either the GNSS or the LiDAR to provide the measurement updates if (w_z) is not within the threshold. Figure 11 provides a detailed flowchart of the switching algorithm.
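The switching rules above (cf. the flowchart in Figure 11) can be condensed into a small decision function. This is a hedged sketch of one reading of the text: the thresholds come from the paper, while the function and enum names are illustrative assumptions.

```python
from enum import Enum

class UpdateSource(Enum):
    GNSS = "gnss"
    LIDAR = "lidar"
    RISS_ONLY = "riss_only"

GDOP_MAX = 2.0   # SD check is triggered when GDOP exceeds this
SD_3D_MAX = 1.0  # m, SD of the GNSS 3D-position
MIN_SATS = 4     # a loosely-coupled update needs at least four satellites
WZ_MAX = 0.1     # band on the z-gyro reading for staying in RISS-only mode

def select_update(num_sats, gdop, sd_3d, icp_rms, avg_rms, w_z):
    # GNSS is rejected when too few satellites are seen, or when both the
    # GDOP and the 3D-position SD indicate degraded measurements.
    gnss_ok = num_sats >= MIN_SATS and not (gdop > GDOP_MAX and sd_3d > SD_3D_MAX)
    # LiDAR is trusted while the ICP RMS error stays at/below its average.
    lidar_ok = icp_rms <= avg_rms
    if gnss_ok:
        return UpdateSource.GNSS
    if lidar_ok:
        return UpdateSource.LIDAR
    # Neither aid is reliable: coast on the 3D-RISS, but only while no
    # significant dynamics (lane change or turn) show up on the z-gyro.
    if -WZ_MAX <= w_z <= WZ_MAX:
        return UpdateSource.RISS_ONLY
    # A detected turn forces a switch back to an aiding source; with GNSS
    # already rejected here, that means the LiDAR.
    return UpdateSource.LIDAR
```

For example, nine satellites with GDOP 1.2 keep the GNSS update, whereas a lost constellation with a high ICP RMS error and a quiet z-gyro falls back to the 3D-RISS standalone.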
In the proposed MSS, the 3D-RISS is used for the prediction, while the measurement updates are provided by the output of the multi-modal switching algorithm. For the RISS/GNSS case, the system and measurement models are explained in detail in [24]. In the LiDAR/RISS case, the system model is similar to that of the RISS/GNSS, while the measurement model is different because other states are observed. Hence, the design matrix (H_k,LiDAR) changes, as the states observed by the LiDAR are the 3D-position and the azimuth. The measurement model for the LiDAR/RISS is formulated as shown in Equation (15), and the design matrix (H_k,LiDAR) at discrete time (k) is presented in Equation (16). Similar to the GNSS measurement update, the vector of measurement noise (η_k,LiDAR) has a zero-mean Gaussian distribution with covariance matrix (R_k,LiDAR) that is measured at each epoch. The diagonal terms of (R_k,LiDAR) represent the variances of the measured states, as shown in Equation (17).
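The LiDAR measurement update can be illustrated with a generic loosely-coupled EKF update step. The state ordering, matrix shapes, and sign convention below are assumptions for illustration, not the paper's exact Equations (15)–(17): the LiDAR observes the 3D-position and azimuth, so H picks those four quantities out of the error-state vector and R carries their variances on its diagonal.

```python
import numpy as np

def lidar_update(x, P, z_lidar, h_riss, H, R):
    """One loosely-coupled EKF measurement update (illustrative sketch).

    x, P    : predicted error state and covariance (from the 3D-RISS model)
    z_lidar : LiDAR-derived [lat, lon, alt, azimuth]
    h_riss  : the same four quantities from the 3D-RISS mechanization
    H, R    : design matrix and measurement-noise covariance
    """
    y = (z_lidar - h_riss) - H @ x          # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x + K @ y                       # corrected error state
    P_new = (np.eye(P.shape[0]) - K @ H) @ P
    return x_new, P_new
```

The same routine serves the GNSS update by swapping in the GNSS design and noise matrices, which is what makes the loosely-coupled toggling in Figure 10 straightforward.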

Experimental System Setup
Two real road trajectories were conducted to analyze and evaluate the performance of the proposed MSS. In both trajectories, a Toyota Sienna minivan, shown in Figure 12a, was used for data collection, with the LiDAR mounted on the roof of the vehicle as shown in Figure 12b. A custom-made mount was built to obtain the maximum vertical FOV and to prevent any vibrations. A testbed is rigidly mounted in the car's trunk, accommodating the reference solution from NovAtel along with two low-cost MEMS-based IMUs, as shown in Figure 13. Only the VTI unit (Model number: SCC1300-D04 [25]) is used to provide the IMU measurements in our solution. Moreover, a GNSS antenna supplied by Ublox (Model number: ANN-MS-0-005 [26]) is used to collect the GNSS signal used in the solution. The Ublox antenna is an active GNSS antenna that is considerably low cost and is used in most commercial cars with navigational capabilities. More detailed specifications of the IMUs used for the reference and the integration are listed in Table 1. The reference consists of an OEM4 SPAN-SE GNSS receiver (Model number: OM-2000124 [27]) with a pinwheel receiver antenna integrated with a tactical-grade SPAN-CPT IMU (Model number: OM-2000122 [28]) in an ultra-tightly coupled fashion.

Road Trajectories
The proposed MSS's performance is evaluated by the 2D maximum error and the 2D RMS error. The system is also assessed with an additional metric, the deviation along the distance traveled, calculated by dividing the RMS error by the total distance traveled over the whole outage scenario. Since the main concern is land-vehicle navigation, the metrics are limited to 2D. The data were analyzed on an Intel Core i7-8750H CPU at 2.20 GHz with 32 GB of RAM, running Windows 10, with an NVIDIA GeForce RTX 2070 Max-Q design GPU. The map overlays were created on Google Earth using an online tool called GPS Visualizer [29].
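The three metrics can be computed as follows; this is a minimal sketch with an assumed helper name, taking the per-epoch 2D position errors in metres.

```python
import math

def outage_metrics(errors_2d, distance_m):
    """Return (2D RMS error, 2D max error, deviation as % of distance traveled)."""
    rms = math.sqrt(sum(e * e for e in errors_2d) / len(errors_2d))  # 2D RMS error
    max_err = max(errors_2d)                                         # 2D maximum error
    deviation_pct = 100.0 * rms / distance_m                         # % of distance
    return rms, max_err, deviation_pct
```

As a sanity check against the numbers reported for the first Kingston outage, an RMS error of 7.8 m over 380 m traveled gives a deviation of about 2.05%.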
The results presented in this paper are divided into two trajectories. The first trajectory focuses on evaluating the performance of the LiDAR/RISS in a suburban area with artificial GNSS outages introduced. The second trajectory focuses on switching between the LiDAR, the GNSS, and the 3D-RISS standalone to achieve a robust and continuous navigation solution in different scenarios.

First Road Trajectory
The first trajectory was performed in the city of Kingston, Ontario, Canada, which is considered a suburban environment characterized by low- to mid-rise buildings. The trajectory was conducted in the afternoon, around 4 pm, on 23 May 2018. It lasted for 17 min and was 2.25 km in length. Due to the city's infrastructure, there is hardly any place without extended GNSS coverage. Therefore, two artificial outages were introduced in post-processing to evaluate the performance over different scenarios. Figure 14 shows the full trajectory, highlighting the simulated outages chosen to assess the proposed LiDAR/RISS navigation solution.
The first simulated outage, shown in Figure 15, lasted for 80 s and covered a distance of 380 m, passing through a four-way intersection and one sharp turn. Figure 15 presents the results from the LiDAR/RISS solution, the 3D-RISS standalone, and the NovAtel reference solution.
The 3D-RISS standalone resulted in an RMS error of 7.8 m, while the LiDAR/RISS resulted in an RMS error of 4.6 m, a 41% improvement. Moreover, the 3D-RISS resulted in a 2D maximum error of 11.87 m, while the LiDAR/RISS showed a slight improvement to 11 m, a 7.3% improvement. The deviation over the 380 m traveled was 2.05% for the 3D-RISS, versus only 1.21% for the LiDAR/RISS. Figure 16 shows the 2D position error over the first simulated outage.
The second simulated outage, shown in Figure 17, also lasted for 80 s and covered a distance of 225 m with two sharp turns and a four-way intersection.
The 3D-RISS standalone resulted in an RMS error of 13.4 m, while the LiDAR/RISS improved this to 1.7 m, an 87% improvement. Moreover, the 2D maximum error of the 3D-RISS was 28.03 m, while the LiDAR/RISS showed considerable improvement to 3.37 m, an 88% improvement. Over the 225 m outage, the 3D-RISS deviated by 12.45% of the distance traveled, while the LiDAR/RISS deviated by only 1.5%. The LiDAR/RISS thus produced an overall better performance than the 3D-RISS. Figure 18 shows the 2D position error in meters over the second simulated outage.
As observed in Figure 18, the 3D-RISS deviates from the reference solution due to the dynamics introduced in (w_z). However, between seconds 61 and 74, the 3D-RISS standalone solution has a lower 2D position error than the LiDAR/RISS. This observation motivated including the 3D-RISS standalone in the switching algorithm, explained in Section 3.1. Given an appropriate initial point, the 3D-RISS can perform for a short period without drifting, as will be further illustrated in the second trajectory.

Second Road Trajectory
The second road trajectory was conducted in a dense urban environment, in the city of Toronto, Ontario, Canada, which is considered a challenging urban area where the GNSS signal frequently suffers from total blockage and multipath. This therefore poses the main challenge for the proposed MSS: operating across various environments with different scenarios. The trajectory was conducted around noon on 13 June 2018. It lasted for 1 h and 14 min and was 24.4 km in length. Figure 19 shows the trajectory, highlighting the selected scenario, which starts with a good GNSS signal that subsequently degrades.
The scenario, as shown in Figure 19, lasted for 9 min and 45 s and covered a distance of 1.98 km. It starts on the highway, where a direct line of sight with the GNSS satellites was established and the GNSS provided a reliable result. The vehicle then left the highway through an exit and faced a stop sign, where it took a right turn into a tunnel. While entering the tunnel, the GNSS signal was lost. Coming out of the tunnel, the vehicle encountered an intersection with a red traffic light, then took a sharp left at the intersection, where the downtown area began to form with high-rise buildings on each side. After that, the vehicle faced a straight road with two traffic lights; at the second traffic light, a slight right turn was made onto a straight road that continued until a traffic light, where the vehicle took a right turn. Figure 20 shows the complete selected scenario and compares the navigation solutions of the 3D-RISS, the GNSS, the proposed LiDAR/RISS/GNSS solution, and the reference solution.
The 3D-RISS standalone resulted in an RMS error of 46.37 m, while the proposed MSS showed a large improvement to 16.28 m, a 64% improvement in the RMS error. Moreover, the 3D-RISS standalone resulted in a 2D maximum error of 121.8 m, while the MSS showed considerable improvement to 40 m, a 67% improvement in the 2D maximum error. Also, the deviation along the distance traveled was 6% for the standalone 3D-RISS, compared to only 2% for the MSS. Figure 21 shows the 2D position error in meters, highlighting the switching between the systems.
LiDAR/RISS/GNSS does not imply a positioning solution from all three sensors at the same time; rather, it is an overall positioning solution that toggles between the sensors according to the switching criteria. The algorithm switched between the LiDAR, the GNSS, and the 3D-RISS standalone as follows:

• For the first 147 s, the algorithm used the GNSS to provide updates in the EKF, as the environment was open sky with good satellite geometry, as shown by the GDOP values in Figure 22.

• For the next 111 s, the algorithm switched to the LiDAR to provide the measurement updates, as the RMS error of the ICP had a low value, as shown in Figure 24. The GDOP values in Figure 22 spiked as all GNSS satellites were lost while entering the tunnel; Figure 23 also shows the SD of the 3D-position beginning to increase.

• For the next 80 s, the algorithm switched to the 3D-RISS standalone to provide the navigation solution, as the RMS error of the ICP was increasing, as shown in Figure 24.

• Finally, while relying on the 3D-RISS solution alone, a sharp turn was detected through (w_z), as shown in Figure 25, and the algorithm switched back to the LiDAR. For the last 247 s, the switching criteria checked the GDOP and the SD of the GNSS and found them unsuitable for toggling back to the GNSS; therefore, the LiDAR provided the measurement updates for the system. As observed in Figures 22 and 23, the GDOP and the SD of the 3D-position provided by the GNSS remained high, which is why the GNSS was not relied upon for the measurement updates.
To further explain the scenario, Figure 26 shows its beginning. The scenario begins on the highway, where a clear line of sight with the satellites is present and the GNSS signal provides a reliable solution; this reliability can be verified from the measurements in Figures 22 and 23. However, as the vehicle exits the highway, the GNSS measurements become unreliable. Also, as observed in Figure 26, the standalone 3D-RISS begins to drift as the vehicle starts exiting the highway.
Figure 27 shows the LiDAR taking the place of the GNSS in providing the measurement updates in the EKF. As shown in Figure 24, the RMS error of the ICP decreases as the vehicle exits the highway and moves at lower speeds. Therefore, the LiDAR/RISS takes over from the RISS/GNSS at the highway exit, through a right turn where the car enters the tunnel. Inside the tunnel, the vehicle encountered a stalled car, and from this point the 3D-RISS starts to drift. The LiDAR, however, aided by the proposed stopping criterion that halts the registration when the vehicle's speed falls below the specified 1 m/s threshold, still provides excellent accuracy and does not drift until the vehicle reaches the traffic light and takes a left turn. After the left turn, due to the glass panels on the buildings, the LiDAR/RISS solution begins to drift.
The LiDAR/RISS solution began to drift when nearing buildings fronted with large glass panels. When the RMS error of the ICP rises above the set threshold, indicating that the LiDAR is beginning to drift, the algorithm checks whether the (w_z) of the IMU is within the specified range, as shown in Figure 25, and switches to the 3D-RISS standalone to provide the navigation solution. The last valid update of the LiDAR/RISS is used as an initial point for the 3D-RISS. As illustrated in Figure 28, the 3D-RISS takes over for 80 s while the LiDAR registration keeps working in the background. The switching algorithm continues monitoring the (w_z) of the IMU until it detects a value outside the specified range, as shown in Figure 25. The algorithm then switches back to the LiDAR, as it finds that the GDOP and the SD of the 3D-position from the GNSS still have high values corresponding to unreliable GNSS measurements.
However, the LiDAR with the stopping criterion proposed to stop the registration when the vehicle's speed is lower than the specified threshold of 1 m/s. The LiDAR still provides excellent accuracy and does not drift until the vehicle reaches the traffic light and then takes a left turn. After the left turn, due to the glass panels on the buildings, the LiDAR/RISS solution begins to drift.    Figure 27 shows the LiDAR taking the place of the GNSS in providing the measurement updates in the EKF. As shown in Figure 24, the RMSE of the ICP is decreasing as we exit the highway and moving on lower speeds. Therefore, the LiDAR/RISS takes over from the RISS/GNSS at the exit of the highway, then to a right turn where the car enters the tunnel. Inside the tunnel, the vehicle is faced with a stalled car, and from this point, the 3D-RISS starts to drift. However, the LiDAR with the stopping criterion proposed to stop the registration when the vehicle's speed is lower than the specified threshold of 1 m/s. The LiDAR still provides excellent accuracy and does not drift until the vehicle reaches the traffic light and then takes a left turn. After the left turn, due to the glass panels on the buildings, the LiDAR/RISS solution begins to drift.    Figures 22 and 23, the GNSS still does not provide a reliable solution. Moreover, the 3D-RISS standalone at this point due to the sensor error accumulation drifts far away from the solution. The LiDAR/RISS, however, provides a robust, reliable solution in urban areas. As shown in Figure 29, the vehicle passed three traffic lights making a right turn at the last one. After the right turn, as shown in Figure 21, the 2D-position error increases at the end. This increase was due to the large glass panels on the buildings. Therefore, this is the part where the algorithm will detect the increase in the RMS error of the ICP and act accordingly. To either switch to the standalone 3D-RISS or the GNSS.
solution. The last valid update of the LiDAR/RISS is used as an initial point for the 3D-RISS. As illustrated in Figure 28, the 3D-RISS takes over for 80 seconds while the LiDAR registration is working in the background. The switching algorithm keeps monitoring the ( z w ) of the IMU until it detects that the value is below or above the specified range, as shown in Figure 25. Then, the algorithm switches to back the LiDAR when it finds that the GDOP and the SD of 3D-position from GNSS have high values that correspond to unreliable GNSS measurements.   Figure 22 and Figure 23, the GNSS still does not provide a reliable solution. Moreover, the 3D-RISS standalone at this point due to the sensor error accumulation drifts far away from the solution. The LiDAR/RISS, however, provides a robust, reliable solution in urban areas. As shown in Figure 29, the vehicle passed three traffic lights making a right turn at the last one. After the right turn, as shown in Figure  21, the 2D-position error increases at the end. This increase was due to the large glass panels on the buildings. Therefore, this is the part where the algorithm will detect the increase in the RMS error of the ICP and act accordingly. To either switch to the standalone 3D-RISS or the GNSS. The previous results and analysis indicate that the proposed MSS mitigates the 3D-RISS drift during the GNSS outage period. Moreover, the MSS provides an overall reliable navigation solution due to its capability of switching between different systems.
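The switching criterion described above can be sketched as a simple decision function. This is only an illustrative reconstruction: of the constants below, only the 1 m/s speed threshold is stated in the text; the ICP RMSE, (w_z), GDOP, and SD thresholds (and all names) are assumed placeholders, not the paper's actual parameter values.

```python
# Hypothetical sketch of the MSS switching criterion. Only the 1 m/s speed
# threshold comes from the text; all other values are assumed placeholders.
SPEED_MIN = 1.0          # m/s: below this, LiDAR registration is paused
ICP_RMSE_MAX = 0.5       # m: assumed ICP registration-error threshold
WZ_RANGE = (-0.1, 0.1)   # rad/s: assumed admissible IMU yaw-rate range
GDOP_MAX = 5.0           # assumed GNSS geometry threshold
SD_3D_MAX = 10.0         # m: assumed GNSS 3D-position SD threshold


def select_update_source(speed, icp_rmse, w_z, gdop, sd_3d):
    """Choose which aiding source feeds the EKF measurement update."""
    if gdop < GDOP_MAX and sd_3d < SD_3D_MAX:
        return "GNSS"    # GNSS reliable: run RISS/GNSS
    if speed < SPEED_MIN:
        return "RISS"    # vehicle (nearly) stopped: pause ICP registration
    if icp_rmse < ICP_RMSE_MAX:
        return "LIDAR"   # ICP converged well: run LiDAR/RISS
    if WZ_RANGE[0] <= w_z <= WZ_RANGE[1]:
        return "RISS"    # LiDAR drifting, near-straight driving: 3D-RISS alone
    return "RISS"        # fallback: standalone 3D-RISS
```

For instance, in the highway-exit scene (good ICP fit, unreliable GNSS) the function selects the LiDAR update, while in front of the glass-paneled buildings (high ICP RMSE, still-unreliable GNSS) it falls back to the standalone 3D-RISS.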

Conclusions
In conclusion, this paper introduced a LiDAR/RISS/GNSS integration scheme that provides a navigation solution for land vehicles in challenging GNSS environments. Moreover, a switching criterion was proposed to select among the sensors so as to provide an overall robust navigation solution in different scenarios. The performance of the proposed MSS was assessed using two trajectories in two different environments.
The first trajectory was conducted in a suburban environment, which is why an artificial GNSS outage was introduced to assess the performance of the LiDAR/RISS integration. It was observed that over an 80-second outage, the LiDAR/RISS showed an improvement over the standalone 3D-RISS. Over two simulated outages without the aid of GNSS, the LiDAR/RISS showed improvements of 41% and 87%, respectively. Also, the LiDAR/RISS deviated by only 1% of the distance traveled in both outages, while the standalone 3D-RISS showed deviations of 2.05% and 12.45%, respectively.
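The deviation figures above express the accumulated position error as a percentage of the distance traveled during the outage, a standard metric for dead-reckoning drift. A minimal computation is shown below; the 41 m over 2000 m pairing is an illustrative assumption chosen to reproduce the 2.05% figure, not a value reported in the paper.

```python
def percent_deviation(position_error_m, distance_traveled_m):
    """Final position error expressed as a percentage of distance traveled."""
    return 100.0 * position_error_m / distance_traveled_m


# Illustrative: a 41 m final error over a 2000 m outage gives 2.05%
print(round(percent_deviation(41.0, 2000.0), 2))
```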
