# Coarse-to-Fine Loosely-Coupled LiDAR-Inertial Odometry for Urban Positioning and Mapping


## Abstract


## 1. Introduction

- Development of a coarse-to-fine LC-LIO pipeline based on windowed optimization, together with an adaptive covariance estimation of the LiDAR scan-to-map registration for the subsequent LiDAR/inertial integration.
- Theoretical analysis of the performance upper bounds of LC and TC LiDAR-inertial fusion, considering the error propagation from the LiDAR and inertial measurements.
- Validation of the proposed method with challenging datasets collected in urban canyons of Hong Kong. The convergence results of both LC-LIO and TC-LIO are presented to experimentally verify the theoretical analysis of the second contribution.

## 2. Overview of the Proposed LC-LIO

- The LiDAR body frame is represented as ${\{\cdot\}}^{L}$, which is fixed at the center of the LiDAR sensor.
- The IMU body frame is represented as ${\{\cdot\}}^{B}$, which is fixed at the center of the IMU sensor.
- The world frame is represented as ${\{\cdot\}}^{W}$, whose origin is at the initial position of the vehicle. It is assumed to coincide with the initial LiDAR frame.

The pose of the $k$-th frame of the LiDAR point cloud in the world frame is represented as the transformation matrix ${T}_{{L}_{k}}^{W}$, an element of the Special Euclidean group $SE(3)$ [23], which in its standard form is

$$T_{L_k}^{W}=\begin{bmatrix} R_{L_k}^{W} & t_{L_k}^{W}\\ \mathbf{0}^{T} & 1 \end{bmatrix},\qquad R_{L_k}^{W}\in SO(3),\quad t_{L_k}^{W}\in \mathbb{R}^{3}.$$
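As a minimal numerical sketch (not the paper's implementation; the function names are ours), the following shows how such an $SE(3)$ transform maps a point from the LiDAR frame into the world frame:

```python
import numpy as np

def se3_matrix(R, t):
    """Assemble a 4x4 homogeneous transform in SE(3) from R in SO(3) and t in R^3."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def transform_point(T_WL, p_L):
    """Map a point from the LiDAR frame {L} into the world frame {W}: p_W = R p_L + t."""
    p_h = np.append(p_L, 1.0)        # homogeneous coordinates
    return (T_WL @ p_h)[:3]

# Example: a 90-degree yaw rotation plus a 1 m translation along x.
yaw = np.pi / 2
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0, 0.0, 1.0]])
T = se3_matrix(R, np.array([1.0, 0.0, 0.0]))
p_W = transform_point(T, np.array([1.0, 0.0, 0.0]))  # → approx [1, 1, 0]
```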

## 3. Coarse-to-Fine Loosely-Coupled LiDAR-Inertial Integration

#### 3.1. LC-LIO Factor Graph

and ${C}^{-1}$ represents the so-called information matrix. $\rho$ represents the robust kernel used to decrease the influence of outliers. The Cauchy kernel [26] is selected, which in the standard form implemented in Ceres [28] is $\rho(s)=\log(1+s)$.
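To illustrate the effect of a Cauchy kernel, the short sketch below evaluates $\rho(s)=\log(1+s)$ and the weight $\rho'(s)=1/(1+s)$ that, in the iteratively reweighted least-squares view, scales each residual's influence on the normal equations (illustrative only; function names are ours):

```python
import numpy as np

def cauchy_rho(s):
    """Cauchy robust kernel applied to the squared residual s = r^T C^{-1} r."""
    return np.log1p(s)

def cauchy_weight(s):
    """Derivative rho'(s) = 1/(1+s): the factor by which the residual's
    contribution is down-weighted in the optimization (IRLS view)."""
    return 1.0 / (1.0 + s)

# A small inlier residual keeps nearly full weight; a large outlier is suppressed.
print(cauchy_weight(0.01))   # ≈ 0.99
print(cauchy_weight(100.0))  # ≈ 0.0099
```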

#### 3.2. IMU Measurement Modeling

#### 3.3. LiDAR Scan Matching Modeling

$u$ is obtained by solving the following overdetermined linear equation:
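As an illustrative sketch of this kind of overdetermined fit, a LOAM-style plane fit is assumed here: the five neighboring plane points are stacked into a matrix $P$ and the plane parameters $u$ satisfy $Pu=-\mathbf{1}$, solved in the least-squares sense (the exact system in the paper may differ):

```python
import numpy as np

# Five coplanar neighbor points (rows), all lying on the plane z = 1.
P = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 1.0],
              [0.5, 0.5, 1.0]])

# Solve the overdetermined system P u = -1 for the plane parameters u,
# i.e. the plane satisfying u . p + 1 = 0.
b = -np.ones(len(P))
u, *_ = np.linalg.lstsq(P, b, rcond=None)

d = 1.0 / np.linalg.norm(u)   # distance of the plane from the origin

# Point-to-plane distance of a query point, as used in the residual.
q = np.array([0.2, 0.3, 1.5])
dist = abs(np.dot(u, q) + 1.0) / np.linalg.norm(u)  # → 0.5 for the plane z = 1
```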

#### 3.4. Adaptively Weighted LiDAR-Inertial Fusion

- A monotonically decreasing function of ${Q}_{{L}_{k}}$;
- A positive function of ${Q}_{{L}_{k}}$;
- A function whose decreasing rate and range with respect to ${Q}_{{L}_{k}}$ can be regulated by the control parameters, allowing it to be employed in general scenarios.
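The paper's Equation (28) is not reproduced in this excerpt; the hypothetical logistic-style function below merely illustrates a weighting that satisfies the three properties above, with $c_1$ controlling the range, $c_2$ the decreasing rate, and $c_3$ the transition point:

```python
import numpy as np

def adaptive_weight(Q, c1=1.0, c2=5.0, c3=0.5):
    """Hypothetical weighting w(Q) (NOT the paper's Equation (28)):
    positive, monotonically decreasing in the registration quality metric Q,
    with its range, slope, and midpoint set by c1, c2, c3 respectively."""
    return c1 / (1.0 + np.exp(c2 * (Q - c3)))
```

A good registration (small average residual $Q$) then yields a weight near $c_1$, while a poor one is driven toward zero, mirroring the behavior sketched in Figure 5.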

## 4. Performance Upper Bound Analysis of Tightly Coupled and Loosely Coupled LiDAR-Inertial Integration

#### 4.1. Error Propagation of LiDAR Measurement in TC-LIO

respectively. The errors from both measurements propagate into ${J}_{{r}_{TC,stmr}}$ via the state ${x}_{k}$ shared by the two kinds of residuals ${r}_{TC,stmr}$ and ${r}_{TC,\mathcal{B}}$. The final convergence of ${r}_{TC,stmr}$ therefore relies on the quality of the IMU and the accuracy of its noise modeling. Unfortunately, the IMU noise drifts and is hard to obtain, even if it is estimated or corrected simultaneously by the LiDAR scan-to-map registration. As a result, the potential of the highly accurate raw LiDAR measurements is not fully exploited. A similar argument is presented in a study [37] that proposes a comparable coarse-to-fine LiDAR/visual integration scheme: it maintains that there is a significant difference between the accuracy levels of the visual and LiDAR measurements, so the direct joint optimization of residuals derived from the two kinds of observations cannot fully exploit the potential of the LiDAR measurements.

#### 4.2. Error Propagation in LiDAR Measurement Modeling of LC-LIO

#### 4.3. Performance Upper Bound Analysis

## 5. Experimental Results

#### 5.1. Experiment Setup

#### 5.1.1. Sensor Setups

#### 5.1.2. Evaluation Metrics

1. LiLi-OM [16]: a tightly-coupled LiDAR/inertial integration method.
2. LC-LIO-FC: the proposed coarse-to-fine loosely-coupled LiDAR/inertial integration with fixed covariance.
3. LC-LIO-AC: the proposed coarse-to-fine loosely-coupled LiDAR/inertial integration with adaptive covariance based on Equation (28).

#### 5.2. Experiments in Urban Canyon 1: The HK-Data20200314 Dataset

#### 5.2.1. Performance Analysis

#### 5.2.2. Quantitative Analysis of the Performance Upper Bound in TC-LIO and LC-LIO

#### 5.2.3. Mapping Result

#### 5.3. Experiments in the Urban Canyon 2: The HK-Data20190428 Dataset

#### 5.3.1. Performance Analysis

#### 5.3.2. Quantitative Analysis of the Performance Upper Bound in TC-LIO and LC-LIO

#### 5.3.3. Mapping Result of the Proposed LC-LIO

## 6. Conclusions and Future Perspectives

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## Appendix A

## References

1. Wen, W.; Zhang, G.; Hsu, L.-T. GNSS NLOS exclusion based on dynamic object detection using LiDAR point cloud. IEEE Trans. Intell. Transp. Syst. **2019**, 22, 853–862.
2. Breßler, J.; Reisdorf, P.; Obst, M.; Wanielik, G. GNSS positioning in non-line-of-sight context—A survey. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016; pp. 1147–1154.
3. Zhang, J.; Singh, S. Low-drift and real-time lidar odometry and mapping. Auton. Robot. **2017**, 41, 401–416.
4. Wen, W.; Zhou, Y.; Zhang, G.; Fahandezh-Saadi, S.; Bai, X.; Zhan, W.; Tomizuka, M.; Hsu, L.-T. UrbanLoco: A full sensor suite dataset for mapping and localization in urban scenes. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 2310–2316.
5. Merriaux, P.; Dupuis, Y.; Boutteau, R.; Vasseur, P.; Savatier, X. LiDAR point clouds correction acquired from a moving car based on CAN-bus data. arXiv **2017**, arXiv:1706.05886.
6. Wen, W.; Hsu, L.-T.; Zhang, G. Performance analysis of NDT-based graph SLAM for autonomous vehicle in diverse typical driving scenarios of Hong Kong. Sensors **2018**, 18, 3928.
7. Thrun, S. Probabilistic robotics. Commun. ACM **2002**, 45, 52–57.
8. Dellaert, F.; Kaess, M. Factor graphs for robot perception. Found. Trends Robot. **2017**, 6, 1–139.
9. Qin, C.; Ye, H.; Pranata, C.E.; Han, J.; Zhang, S.; Liu, M. LINS: A Lidar-Inertial State Estimator for Robust and Efficient Navigation. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 8899–8906.
10. Wen, W.; Pfeifer, T.; Bai, X.; Hsu, L.T. Factor Graph Optimization for GNSS/INS Integration: A Comparison with the Extended Kalman Filter. Navigation **2020**, 68, 315–331.
11. Ye, H.; Chen, Y.; Liu, M. Tightly coupled 3d lidar inertial odometry and mapping. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 3144–3150.
12. Qin, T.; Li, P.; Shen, S. VINS-Mono: A robust and versatile monocular visual-inertial state estimator. IEEE Trans. Robot. **2018**, 34, 1004–1020.
13. Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Rus, D. LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24–30 October 2020; pp. 5135–5142.
14. Kaess, M.; Ranganathan, A.; Dellaert, F. iSAM: Incremental smoothing and mapping. IEEE Trans. Robot. **2008**, 24, 1365–1378.
15. Balsa-Barreiro, J.; Fritsch, D. Generation of visually aesthetic and detailed 3D models of historical cities by using laser scanning and digital photogrammetry. Digit. Appl. Archaeol. Cult. Herit. **2018**, 8, 57–64.
16. Li, K.; Li, M.; Hanebeck, U.D. Towards high-performance solid-state-lidar-inertial odometry and mapping. IEEE Robot. Autom. Lett. **2021**, 6, 5167–5174.
17. Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361.
18. Shan, T.; Englot, B. LeGO-LOAM: Lightweight and ground-optimized lidar odometry and mapping on variable terrain. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4758–4765.
19. Demir, M.; Fujimura, K. Robust localization with low-mounted multiple LiDARs in urban environments. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 3288–3293.
20. Tang, J.; Chen, Y.; Niu, X.; Wang, L.; Chen, L.; Liu, J.; Shi, C.; Hyyppä, J. LiDAR scan matching aided inertial navigation system in GNSS-denied environments. Sensors **2015**, 15, 16710–16728.
21. Gao, X.; Zhang, T.; Liu, Y.; Yan, Q. 14 Lectures on Visual SLAM: From Theory to Practice; Publishing House of Electronics Industry: Beijing, China, 2017; pp. 236–241.
22. Huai, Z.; Huang, G. Robocentric visual-inertial odometry. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 6319–6326.
23. Barfoot, T.D. State Estimation for Robotics; Cambridge University Press: Cambridge, UK, 2017; pp. 205–284.
24. Sola, J. Quaternion kinematics for the error-state Kalman filter. arXiv **2017**, arXiv:1711.02508.
25. Qin, T.; Cao, S.; Pan, J.; Shen, S. A general optimization-based framework for global pose estimation with multiple sensors. arXiv **2019**, arXiv:1901.03642.
26. Zhang, Z. Parameter estimation techniques: A tutorial with application to conic fitting. Image Vis. Comput. **1997**, 15, 59–76.
27. Hu, G.; Khosoussi, K.; Huang, S. Towards a reliable SLAM back-end. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 37–43.
28. Agarwal, S.; Mierle, K. Ceres Solver. Available online: http://ceres-solver.org (accessed on 6 January 2021).
29. Moré, J.J. The Levenberg-Marquardt algorithm: Implementation and theory. In Numerical Analysis; Springer: Berlin/Heidelberg, Germany, 1978; pp. 105–116.
30. Forster, C.; Carlone, L.; Dellaert, F.; Scaramuzza, D. On-Manifold Preintegration for Real-Time Visual–Inertial Odometry. IEEE Trans. Robot. **2016**, 33, 1–21.
31. Lupton, T.; Sukkarieh, S. Visual-inertial-aided navigation for high-dynamic motion in built environments without initial conditions. IEEE Trans. Robot. **2011**, 28, 61–76.
32. Balsa Barreiro, J.; Avariento Vicent, J.P.; Lerma García, J.L. Airborne light detection and ranging (LiDAR) point density analysis. Sci. Res. Essays **2012**, 7, 3010–3019.
33. Balsa-Barreiro, J.; Lerma, J.L. Empirical study of variation in lidar point density over different land covers. Int. J. Remote Sens. **2014**, 35, 3372–3383.
34. Lin, J.; Zhang, F. Loam livox: A fast, robust, high-precision LiDAR odometry and mapping package for LiDARs of small FoV. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 3126–3131.
35. Zhang, J.; Singh, S. Laser–visual–inertial odometry and mapping with high robustness and low drift. J. Field Robot. **2018**, 35, 1242–1264.
36. Zhang, J.; Kaess, M.; Singh, S. On degeneracy of optimization-based state estimation problems. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 809–816.
37. Zhang, J.; Singh, S. Visual-lidar odometry and mapping: Low-drift, robust, and fast. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 2174–2181.
38. Hsu, L.-T.; Kubo, N.; Chen, W.; Liu, Z.; Suzuki, T.; Meguro, J. UrbanNav: An open-sourced multisensory dataset for benchmarking positioning algorithms designed for urban areas. In Proceedings of the ION GNSS+ 2021, Miami, FL, USA, 20–24 September 2021.
39. Quigley, M.; Conley, K.; Gerkey, B.; Faust, J.; Foote, T.; Leibs, J.; Wheeler, R.; Ng, A.Y. ROS: An open-source Robot Operating System. ICRA Workshop Open Source Softw. **2009**, 3, 5.
40. Grupp, M. evo: Python Package for the Evaluation of Odometry and SLAM. Available online: https://github.com/MichaelGrupp/evo (accessed on 1 March 2021).
41. Zhang, Z.; Scaramuzza, D. A tutorial on quantitative trajectory evaluation for visual (-inertial) odometry. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 7244–7251.
42. Besl, P.J.; McKay, N.D. Method for registration of 3-D shapes. In Proceedings of the Sensor Fusion IV: Control Paradigms and Data Structures, Boston, MA, USA, 12–15 November 1991; pp. 586–606.
43. Wen, W.; Hsu, L.-T. 3D LiDAR Aided GNSS and Its Tightly Coupled Integration with INS Via Factor Graph Optimization. In Proceedings of the ION GNSS+ 2020, Richmond Heights, MO, USA, 22–25 September 2020.
44. Wen, W.; Zhang, G.; Hsu, L.-T. Exclusion of GNSS NLOS receptions caused by dynamic objects in heavy traffic urban scenarios using real-time 3D point cloud: An approach without 3D maps. In Proceedings of the Position, Location and Navigation Symposium (PLANS), Monterey, CA, USA, 23–26 April 2018; pp. 158–165.
45. Wen, W.; Zhang, G.; Hsu, L.T. Correcting NLOS by 3D LiDAR and building height to improve GNSS single point positioning. Navigation **2019**, 66, 705–718.

**Figure 1.**Flowchart of the proposed coarse-to-fine loosely coupled LiDAR-inertial odometry. The yellow boxes represent the coarse process via the LC optimization. The blue box represents the fine process via the scan-to-map registration. The LC optimization provides an initial guess for the scan-to-map registration of the next frame.

**Figure 2.**Factor graph of the proposed LC-LIO. The light purple circle represents the state of the IMU frame. There are two types of factors introduced to constrain the states, namely the scan matching factor and IMU pre-integration factor.

**Figure 4.** Illustration of the point-to-line residual for an edge feature point and the point-to-plane residual for a plane feature point used in LiDAR scan-to-map registration: (**a**) point-to-line residual for an edge feature point ${f}_{e,i}^{W}$ of the current LiDAR frame; the orange line is the fitted line passing through ${f}_{c,ei}^{W}$; (**b**) point-to-plane residual for a plane point ${f}_{p,i}^{W}$ of the current LiDAR frame; the orange plane is the fitted plane using ${f}_{p,j}^{W}\in {\mathcal{P}}^{W},j=\{1,\dots ,5\}$.

**Figure 5.** Illustration of the impact of the control parameters ${c}_{1}$, ${c}_{2}$ and ${c}_{3}$ on the weighting. Q (x-axis) represents the quality of the solution provided by scan-to-map registration, derived from the average matching residual; the y-axis represents the weighting coefficient ${w}_{{L}_{k}}$. When one parameter changes, the other two remain constant. (**a**–**c**) illustrate the weighting under different values of ${c}_{1}$, ${c}_{2}$ and ${c}_{3}$, respectively.

**Figure 6.** Error propagation in the LiDAR scan-to-map registration for TC-LIO and LC-LIO. The errors of the IMU and LiDAR measurements are represented as $\Delta z$ and $\Delta f$, respectively. For TC-LIO, $X$ represents the states to be estimated during LiDAR-inertial integration; the LiDAR measurement modeling is directly affected by both $\Delta z$ and $\Delta f$. For LC-LIO, the LiDAR scan-to-map registration is disturbed only by $\Delta f$, its own error.

**Figure 7.** Impact of the errors from the IMU and LiDAR measurements on LiDAR scan-to-map registration. When the residual and the Jacobian matrix are computed during registration in TC-LIO, errors from both the IMU and the LiDAR are involved through the shared states, whereas in LC-LIO only the error from the LiDAR measurements has an impact.

**Figure 8.** The trajectory of the dataset collected in urban canyon 1: HK-Data20200314. (**a**) The ground truth aligned on Google Earth in bird's-eye view; the start point is represented by the pink circle numbered "1", and the other eight annotated places are the eight turns. (**b**) Trajectory comparison: the black trajectory is the ground truth, and the red and blue trajectories are estimated by LiLi-OM and LC-LIO-AC, respectively.

**Figure 9.** The RPE of LiLi-OM, LC-LIO-FC and LC-LIO-AC over the entire trajectory of one experiment in urban canyon 1. The numbers correspond to the nine places denoted in Figure 8a. (**a**,**b**) present the RRE and RTE, respectively. The gray circles indicate where the RTE of the three methods is comparable, as the vehicle moves slowly at the turns.

**Figure 10.** The final LiDAR scan-to-map registration residual of the feature points when the registration converges, on each time-corresponding frame of LiLi-OM and LC-LIO-AC in urban canyon 1. The black line represents the velocity of these frames calculated by LC-LIO-AC. The gray boxes marked 1 to 9 correspond to the nine places marked in Figure 8a. The top and bottom panels show the residuals for the plane and edge points, respectively.

**Figure 11.** The map generated by LC-LIO-AC in urban canyon 1, rendered with intensity values. (**a**) The entire map in bird's-eye view. (**b**) The top and bottom panels are the zoomed-in maps around the first and second places marked in (**a**), respectively. (**c**) The top and bottom panels show the environments corresponding to the first and second places, respectively.

**Figure 12.** Comparison of the zoomed-in maps generated by LC-LIO-AC without and with the coarse-to-fine process around the third place marked in Figure 11a, where the vehicle passes by twice. The top and bottom panels of both (**a**,**b**) are in bird's-eye view and approximately in front view, respectively. (**a**) The map generated without the coarse-to-fine process: when the vehicle passes the door the second time, a mismatch occurs between the point clouds registered the two times. (**b**) The map generated with the coarse-to-fine process: the point clouds registered the first and second times overlap, and the map is clearer.

**Figure 13.** The trajectory of the dataset collected in urban canyon 2: HK-Data20190428. (**a**) The ground truth aligned on Google Earth in bird's-eye view; the first marked place is the start point, and the other ten marked places are the turns. (**b**) Trajectory comparison: the black trajectory is the ground truth, and the red and blue trajectories are estimated by LiLi-OM and LC-LIO-AC, respectively.

**Figure 14.** The RPE of LiLi-OM, LC-LIO-FC and LC-LIO-AC over the entire trajectory of one experiment in urban canyon 2. The numbers correspond to the places marked in Figure 13a. (**a**) The top panel is the RRE and the bottom panel is the RTE of the three methods. (**b**) The top panel is the image captured when the RTE of LiLi-OM reaches up to 8 m; the bottom panel is the zoomed-in RTE of LiLi-OM at that point. (**c**) The images captured when the RTE of all three methods is equally large, as illustrated in (**a**).

**Figure 15.** The final LiDAR scan-to-map registration residual of the feature points after the registration converges, on each time-corresponding frame of LiLi-OM and LC-LIO-AC in urban canyon 2. The black line represents the velocity of these frames calculated by LC-LIO-AC. The segments in gray boxes indicate where the vehicle stops for a while and the velocity is zero. The top and bottom panels show the residuals for the plane and edge points, respectively.

**Figure 16.** The map generated by LC-LIO-AC in urban canyon 2, rendered with intensity values. The top-left panel presents the entire map in bird's-eye view. The four places marked on the entire map are zoomed in, with details shown in the panels marked with the same numbers. The buildings at the first place near an intersection, the vegetation at the second and third places, and the pedestrians around the fourth place are all reconstructed clearly.

**Table 1.** RPE of LiLi-OM, LC-LIO-FC and LC-LIO-AC produced by EVO [40] in urban canyon 1. The bold values represent the best precision.

| Dataset | Method | RRE Mean (°) | RRE RMSE (°) | RTE Mean (m) | RTE RMSE (m) |
|---|---|---|---|---|---|
| Urban Canyon 1 (HK-Data20200314) | LiLi-OM | 1.133 | 1.762 | 0.605 | 0.693 |
| | LC-LIO-FC | 0.885 | 1.249 | 0.271 | 0.348 |
| | LC-LIO-AC | **0.800** | **1.115** | **0.233** | **0.262** |

**Table 2.** RPE of LiLi-OM, LC-LIO-FC and LC-LIO-AC produced by EVO [40] in urban canyon 2. The bold values represent the best precision.

| Dataset | Method | RRE Mean (°) | RRE RMSE (°) | RTE Mean (m) | RTE RMSE (m) |
|---|---|---|---|---|---|
| Urban Canyon 2 (HK-Data20190428) | LiLi-OM | 0.458 | 0.878 | 0.609 | 0.891 |
| | LC-LIO-FC | 0.421 | 0.671 | 0.249 | 0.431 |
| | LC-LIO-AC | **0.331** | **0.478** | **0.182** | **0.267** |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Zhang, J.; Wen, W.; Huang, F.; Chen, X.; Hsu, L.-T.
Coarse-to-Fine Loosely-Coupled LiDAR-Inertial Odometry for Urban Positioning and Mapping. *Remote Sens.* **2021**, *13*, 2371.
https://doi.org/10.3390/rs13122371

**AMA Style**

Zhang J, Wen W, Huang F, Chen X, Hsu L-T.
Coarse-to-Fine Loosely-Coupled LiDAR-Inertial Odometry for Urban Positioning and Mapping. *Remote Sensing*. 2021; 13(12):2371.
https://doi.org/10.3390/rs13122371

**Chicago/Turabian Style**

Zhang, Jiachen, Weisong Wen, Feng Huang, Xiaodong Chen, and Li-Ta Hsu.
2021. "Coarse-to-Fine Loosely-Coupled LiDAR-Inertial Odometry for Urban Positioning and Mapping" *Remote Sensing* 13, no. 12: 2371.
https://doi.org/10.3390/rs13122371