Uniaxial Partitioning Strategy for Efficient Point Cloud Registration
Abstract
1. Introduction
2. Related Works
3. Our Contributions
- (a) The partitioning approach now recovers orientations resulting from multiple rotations around general axes, and there is flexibility in the principal axis along which cut-sectioning is performed. Unlike previous versions, the choice of axis is automatic: the algorithm selects the cutting axis after measuring the data variance along the three principal axes of the local frame (see Section 4.2.1);
- (b) The method now has two operating modes, configurations A and B, which refer to how the cutting axes are chosen: they can differ between source and target clouds (configuration A) or be the same for both (configuration B). Configuration A allows the source and target models to be partitioned along different directions, which is useful, for example, when the point clouds come from different acquisition systems;
- (c) The stop criterion is now computed automatically for every input cloud on the basis of an original proposal called micromisalignment (detailed in Section 4.2.3), so no prior ad hoc knowledge of the input models is required. To the best of the authors’ knowledge, no other work in the recent literature proposes a measure of registration goodness that is derived from the input model itself and is therefore automatically adjustable; other approaches rely instead on parameters or constants of limited scope.
4. Uniaxial Partitioning Strategy
4.1. A Look at ICP
4.1.1. Selection of Points
4.1.2. Matching
4.1.3. Error Metrics
4.2. Mathematical Formulation
4.2.1. Partitioning
- Configuration A: the partition axis of a given input model is chosen as the one with the largest data variance among the three principal axes. Source and target models can therefore be cut along different axes, which may benefit scenarios in which they differ significantly in orientation (for example, where clouds are randomly rotated [6] or captured by different sensors [40]).
- Configuration B: the data variance is calculated only on the target point cloud and the chosen axis is assigned to both input models, so this mode runs faster since the variance analysis is performed only once. It is a good choice when the ground truth is known, as well as for registering sequentially acquired shots in which the orientation changes in only one degree of freedom (for example, in-plane robot navigation in SLAM applications [41]). A minimal sketch of the variance-based axis selection used by both configurations is given below.
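The sketch below is illustrative only: function names such as `choose_cut_axis` and `partition_along_axis` are not from the paper, and it assumes that the principal axes are obtained by PCA of the centred cloud and that the slabs have equal width along the chosen axis.

```python
import numpy as np

def choose_cut_axis(cloud):
    """Return the principal axis (unit vector) with the largest data variance.

    cloud: (N, 3) array of points. PCA of the centred cloud gives the three
    principal axes; each eigenvalue of the covariance matrix is the variance
    of the data projected onto the corresponding axis.
    """
    centred = cloud - cloud.mean(axis=0)
    cov = np.cov(centred.T)                 # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, np.argmax(eigvals)]   # axis of largest variance

def partition_along_axis(cloud, axis, k):
    """Cut the cloud into k slabs of equal width along the given axis."""
    proj = cloud @ axis                     # scalar coordinate along the axis
    edges = np.linspace(proj.min(), proj.max(), k + 1)
    # assign each point to one of the k slabs delimited by the interior edges
    labels = np.clip(np.digitize(proj, edges[1:-1]), 0, k - 1)
    return [cloud[labels == i] for i in range(k)]

# Configuration A: choose_cut_axis is called once per cloud (source and target).
# Configuration B: it is called only on the target; the axis is reused for the source.
```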
4.2.2. Convergence Check
4.2.3. Stop Criterion
5. Materials and Methods
- the running time (in seconds) required by each of the implemented algorithms to perform the registration;
- the RMSE (in meters), computed here between the source cloud and the target cloud after pose correction;
- the estimated pose, with orientation expressed in the equivalent angle-axis representation;
- the mean RMSE, calculated as an average over the 3D models used in each experiment. A minimal sketch of how the RMSE and the angle-axis orientation can be computed follows this list.
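The sketch below is illustrative rather than the authors' evaluation code: the nearest-neighbour pairing used for the RMSE and the function names are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def rmse_after_registration(source_corrected, target):
    """RMSE (in meters) between each pose-corrected source point and its
    nearest neighbour in the target cloud."""
    dists, _ = cKDTree(target).query(source_corrected)
    return float(np.sqrt(np.mean(dists ** 2)))

def angle_axis(R):
    """Equivalent angle-axis representation of a 3x3 rotation matrix.

    Returns (theta, axis) with theta in radians. The axis is undefined for
    theta = 0 and is returned as the zero vector; the case theta close to pi
    needs a different extraction and is omitted here for brevity.
    """
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return 0.0, np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return float(theta), axis
```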
Implementation Details
6. Results
6.1. Simple Pairwise Registration
6.2. Registration under Combinations of Arbitrary Rotations
6.3. Downsampling Effect
6.4. Registration in the Presence of Different Levels of Gaussian Noise
6.5. Partial Registration of Point Clouds with Different Overlap Rates
6.6. Registration of Indoor Scenes
6.7. Registration of Multiple Shots of Indoor Scenes
6.8. Registration of Outdoor Scenes with Different Point Densities
7. Discussion
8. Conclusions
- The outer level of iterations favours the correspondence step of ICP and reduces the computational effort: k registration steps over (N/k)-sized point clouds take less time than a single registration of N-sized clouds (a rough cost estimate is given after this list).
- The existence of two operating modes makes the approach flexible and widens its range of possible applications: configuration A is adequate when little or no information about the scene is available, such as a large misorientation between target and source or arbitrarily oriented samples, whereas configuration B suits mild-misorientation, high-overlap scenarios, such as applications assisted by progressive scene acquisition.
- The stop criterion based on the micromisalignment concept introduced here performed well, proved to be a reliable quantitative measure of registration goodness, and is one of the major contributions of this study.
- In terms of time performance, the comparative analysis strongly favoured UPS: except for a few cases in which it was outperformed by ICP point-to-plane, UPS was at least about 3 times faster than the other approaches, and in some cases up to 300 times faster.
- In terms of registration quality, UPS outperformed many of its counterparts. Using RMSE as the quality metric, UPS was 8 times better than 3D-NDT and GICP in the outdoor scenario and 10 times better than Sparse ICP, Go-ICP and FPFH + ICP in the study of robustness to Gaussian noise.
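As a rough illustration of the cost argument above (assuming a k-d tree nearest-neighbour search in the correspondence step, a common ICP choice rather than a detail stated in this paper), partitioning each N-point cloud into k slabs reduces the per-iteration matching cost approximately as follows:

```latex
C_{\mathrm{full}} \approx N \log N,
\qquad
C_{\mathrm{part}} \approx k \cdot \frac{N}{k} \log\frac{N}{k} = N \log\frac{N}{k}
```

The saving therefore grows with k, in addition to the cheaper construction of the search structure for each slab.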
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Siqueira, R.S.; Alexandre, G.R.; Soares, J.M.; Thé, G.A.P. Triaxial Slicing for 3-D Face Recognition from Adapted Rotational Invariants Spatial Moments and Minimal Keypoints Dependence. IEEE Robot. Autom. Lett. 2018, 3, 3513–3520.
- Wang, C.H.; Peng, C.C. 3D Face Point Cloud Reconstruction and Recognition Using Depth Sensor. Sensors 2021, 21, 2587.
- Cai, L.; Xu, H.; Yang, Y.; Yu, J. Robust facial expression recognition using RGB-D images and multichannel features. Multimed. Tools Appl. 2018, 78, 28591–28607.
- Izatt, G.; Mirano, G.; Adelson, E.; Tedrake, R. Tracking objects with point clouds from vision and touch. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 4000–4007.
- Forte, M.D.N.; Neto, P.S.; Thé, G.A.P.; Nogueira, F.G. Altitude Correction of an UAV Assisted by Point Cloud Registration of LiDAR Scans. In Proceedings of the 18th International Conference on Informatics in Control, Automation and Robotics (ICINCO), Online, 6–8 July 2021.
- Yang, J.; Li, H.; Campbell, D.; Jia, Y. Go-ICP: A Globally Optimal Solution to 3D ICP Point-Set Registration. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 30, 2241–2254.
- Souza Neto, P.; Pereira, N.S.; Thé, G.A.P. Improved Cloud Partitioning Sampling for Iterative Closest Point: Qualitative and Quantitative Comparison Study. In Proceedings of the 15th International Conference on Informatics in Control, Automation and Robotics (ICINCO), Lisbon, Portugal, 29–31 July 2018; pp. 49–60.
- Choi, O.; Hwang, W. Colored Point Cloud Registration by Depth Filtering. Sensors 2021, 21, 7023.
- Pomerleau, F.; Colas, F.; Siegwart, R. A review of point cloud registration algorithms for mobile robotics. Found. Trends Robot. 2015, 4, 1–104.
- Dong, Z.; Liang, F.; Yang, B.; Xu, Y.; Zang, Y.; Li, J.; Wang, Y.; Dai, W.; Fan, H.; Hyyppä, J.; et al. Registration of large-scale terrestrial laser scanner point clouds: A review and benchmark. ISPRS J. Photogramm. Remote Sens. 2020, 163, 327–342.
- Pomerleau, F.; Liu, M.; Colas, F.; Siegwart, R. Challenging data sets for point cloud registration algorithms. Int. J. Robot. Res. 2012, 31, 1705–1711.
- Yang, J.; Dai, Y.; Li, H.; Gardner, H.; Jia, Y. Single-shot extrinsic calibration of a generically configured RGB-D camera rig from scene constraints. In Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Adelaide, Australia, 1–4 October 2013; pp. 181–188.
- Besl, P.J.; McKay, N.D. Method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 1611, 586–606.
- Pereira, N.S.; Carvalho, C.R.; Thé, G.A.P. Point cloud partitioning approach for ICP improvement. In Proceedings of the 21st International Conference on Automation and Computing (ICAC), Glasgow, UK, 11–12 September 2015; pp. 1–5.
- Pomerleau, F.; Colas, F.; Siegwart, R.; Magnenat, S. Comparing ICP variants on real-world data sets. Auton. Robot. 2013, 34, 133–148.
- Mavridis, P.; Andreadis, A.; Papaioannou, G. Efficient sparse ICP. Comput. Aided Geom. Des. 2015, 35, 16–26.
- Bouaziz, S.; Tagliasacchi, A.; Pauly, M. Sparse Iterative Closest Point. Comput. Graph. Forum 2013, 32, 1–11.
- Segal, A.; Haehnel, D.; Thrun, S. Generalized-ICP. Robot. Sci. Syst. 2009, 2, 495.
- Chen, Y.; Medioni, G. Object modelling by registration of multiple range images. Image Vis. Comput. 1992, 10, 145–155.
- Agamennoni, G.; Fontana, S.; Siegwart, R.Y.; Sorrenti, D.G. Point clouds registration with probabilistic data association. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 4092–4098.
- Magnusson, M.; Lilienthal, A.; Duckett, T. Scan registration for autonomous mining vehicles using 3D-NDT. J. Field Robot. 2007, 24, 803–827.
- Das, A.; Diu, M.; Mathew, N.; Scharfenberger, C.; Servos, J.; Wong, A.; Zelek, J.S.; Clausi, D.A.; Waslander, S.L. Mapping, planning, and sample detection strategies for autonomous exploration. J. Field Robot. 2014, 31, 75–106.
- Mellado, N.; Aiger, D.; Mitra, N.J. Super 4PCS fast global pointcloud registration via smart indexing. Comput. Graph. Forum 2014, 33, 205–215.
- Rodolà, E.; Albarelli, A.; Cremers, D.; Torsello, A. A simple and effective relevance-based point sampling for 3D shapes. Pattern Recognit. Lett. 2015, 59, 41–47.
- He, Y.; Liang, B.; Yang, J.; Li, S.; He, J. An iterative closest points algorithm for registration of 3D laser scanner point clouds with geometric features. Sensors 2017, 17, 1862.
- Li, J.; Chen, B.; Yuan, M.; Zhao, Q.; Luo, L.; Gao, X. Matching Algorithm for 3D Point Cloud Recognition and Registration Based on Multi-Statistics Histogram Descriptors. Sensors 2022, 22, 417.
- Kahaki, S.M.M.; Nordin, M.J.; Ashtari, A.H.; Zahra, S.J. Invariant feature matching for image registration application based on new dissimilarity of spatial features. PLoS ONE 2016, 11, e0149710.
- Chen, B.; Chen, H.; Song, B.; Gong, G. TIF-Reg: Point Cloud Registration with Transform-Invariant Features in SE(3). Sensors 2021, 21, 5778.
- Rusu, R.B.; Blodow, N.; Beetz, M. Fast point feature histograms (FPFH) for 3D registration. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, 12–17 May 2009; pp. 3212–3217.
- Aoki, Y.; Goforth, H.; Srivatsan, R.A.; Lucey, S. PointNetLK: Robust and efficient point cloud registration using PointNet. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 7163–7172.
- Kurobe, A.; Sekikawa, Y.; Ishikawa, K.; Saito, H. CorsNet: 3D point cloud registration by deep neural network. IEEE Robot. Autom. Lett. 2020, 5, 3960–3966.
- Bello, S.A.; Yu, S.; Wang, C.; Adam, J.M.; Li, J. Review: Deep learning on 3D point clouds. Remote Sens. 2020, 12, 1729.
- Salas-Moreno, R.F.; Newcombe, R.A.; Strasdat, H.; Kelly, P.H.; Davison, A.J. SLAM++: Simultaneous localisation and mapping at the level of objects. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 1352–1359.
- Fernández-Moral, E.; Rives, P.; Arévalo, V.; González-Jiménez, J. Scene structure registration for localization and mapping. Robot. Auton. Syst. 2016, 75, 649–660.
- Zhang, Z. Iterative point matching for registration of free-form curves and surfaces. Int. J. Comput. Vis. 1994, 13, 119–152.
- Vitter, J.S. Faster methods for random sampling. Commun. ACM 1984, 27, 703–718.
- Holz, D.; Ichim, A.E.; Tombari, F.; Rusu, R.B.; Behnke, S. Registration with the point cloud library: A modular framework for aligning in 3-D. IEEE Robot. Autom. Mag. 2015, 22, 110–124.
- Elseberg, J.; Magnenat, S.; Siegwart, R.; Nüchter, A. Comparison of nearest-neighbor-search strategies and implementations for efficient shape registration. J. Softw. Eng. Robot. 2012, 3, 2–12.
- Rusu, R.B.; Cousins, S. 3D is here: Point Cloud Library (PCL). In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011; pp. 1–4.
- Tazir, M.L.; Gokhool, T.; Checchin, P.; Malaterre, L.; Trassoudaine, L. CICP: Cluster Iterative Closest Point for sparse–dense point cloud registration. Robot. Auton. Syst. 2018, 108, 66–86.
- Li, X.; Du, S.; Li, G.; Li, H. Integrate point-cloud segmentation with 3D lidar scan-matching for mobile robot localization and mapping. Sensors 2020, 20, 237.
- Horn, B.K. Closed-form solution of absolute orientation using unit quaternions. JOSA A 1987, 4, 629–642.
- Yang, J.; Li, H.; Jia, Y. Go-ICP: Solving 3D registration efficiently and globally optimally. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 1457–1464.
- Turk, G.; Levoy, M. Zippered polygon meshes from range images. In Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques, Orlando, FL, USA, 24–29 July 1994; pp. 311–318.
- Rusinkiewicz, S.; Levoy, M. Efficient variants of the ICP algorithm. In Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling, Quebec City, QC, Canada, 28 May–1 June 2001; pp. 145–152.
- Aleotti, J.; Rizzini, D.L.; Caselli, S. Perception and grasping of object parts from active robot exploration. J. Intell. Robot. Syst. 2014, 76, 401–425.
- Statue Model Repository. Available online: https://lgg.epfl.ch/statues_dataset.php (accessed on 29 January 2022).
- The Stanford 3D Scanning Repository. Available online: https://graphics.stanford.edu/data/3Dscanrep/ (accessed on 29 January 2022).
- Razer Stargazer Support. Available online: https://support.razer.com/gaming-headsets-and-audio/razer-stargazer/ (accessed on 29 January 2022).
- Wang, X.; Zhu, X.; Ying, S.; Shen, C. An Accelerated and Robust Partial Registration Algorithm for Point Clouds. IEEE Access 2020, 8, 156504–156518.
- Aiger, D.; Mitra, N.J.; Cohen-Or, D. 4-points congruent sets for robust pairwise surface registration. ACM SIGGRAPH 2008, 27, 1–10.
- Costanzo, M.; Maria, G.D.; Lettera, G.; Natale, C.; Pirozzi, S. Flexible Motion Planning for Object Manipulation in Cluttered Scenes. In Proceedings of the 15th International Conference on Informatics in Control, Automation and Robotics (ICINCO), Porto, Portugal, 29–31 July 2018; Volume 2, pp. 978–989.
- Cheng, L.; Chen, S.; Liu, X.; Xu, H.; Wu, Y.; Li, M.; Chen, Y. Registration of laser scanning point clouds: A review. Sensors 2018, 18, 1641.
- He, L.; Wang, X.; Zhang, H. M2DP: A novel 3D point cloud descriptor and its application in loop closure detection. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 231–237.
| Object | Dataset | Density | Size | Scene | Dataset | Density | Size |
|---|---|---|---|---|---|---|---|
| Bunny | [48] | 40 k | 644.3 kB | Lab. 1 | Ours | 56 k | 683.9 kB |
| Dragon | [48] | 35 k | 1.2 MB | Lab. 2 | Ours | 72 k | 879.9 kB |
| Buddha | [48] | 75 k | 2.2 MB | Office | [23] | 200 k | 3.6 MB |
| Horse | [46] | 3 k | 98.3 kB | Stage | [23] | 69 k | 694.2 kB |
| Hammer | [46] | 2 k | 74.2 kB | House | [39] | 83 k | 466.1 kB |
| Aquarius | [47] | 64 k | 784.8 kB | Gazebo 1 | [11] | 153 k/67 k | 5.1 MB |
| Bear | [47] | 27 k | 328.5 kB | Gazebo 2 | [11] | 155 k/66 k | 5.2 MB |
| Eagle | [47] | 68 k | 836.2 kB | UFC | Ours | 1.2 M/828 k | 9.8 MB |
| Method | Bunny | Dragon | Buddha | Horse | Hammer |
|---|---|---|---|---|---|
| CP-ICP | 4.922 | 3.847 | 9.085 | 0.338 | 0.202 |
| Go-ICP | 36.537 | 35.847 | 36.288 | 42.348 | 36.198 |
| FPFH + ICP | 125.438 | 91.969 | 413.457 | 5.526 | 3.086 |
|  | 22.543 | 23.260 | 53.506 | 2.628 | 1.454 |
|  | 72.726 | 55.799 | 179.481 | 7.486 | 4.227 |
|  | 8.202 | 6.903 | 15.585 | 0.696 | 0.351 |
|  | 16.392 | 3.584 | 2.501 | 20.928 | 1.543 |
|  | 3.996 | 1.755 | 2.416 | 7.015 | 0.535 |
| Method | Bunny | Dragon | Buddha | Horse | Hammer |
|---|---|---|---|---|---|
| Ground Truth | 45 | 24 | 24 | 180 | 45 |
| Axis | Y | Z | Z | Z | Z |
| CP-ICP | 16.498 | 24.009 | 22.543 | 55.869 | 35.075 |
| Go-ICP | 34.480 | 61.281 | 15.612 | 42.348 | 36.198 |
| FPFH + ICP | 46.706 | 49.605 | 50.154 | 187.515 | 58.929 |
|  | 41.301 | 23.863 | 21.679 | 36.342 | 45.577 |
|  | 43.246 | 24.091 | 24.039 | 182.610 | 44.591 |
|  | 43.246 | 24.091 | 24.039 | 182.610 | 44.591 |
| Method | Bunny | Dragon | Buddha | Horse | Hammer | RMSEavg |
|---|---|---|---|---|---|---|
| CP-ICP | 0.011 | 0.002 | 0.003 | 0.029 | 0.010 | 0.011 |
| Go-ICP | 0.089 | 0.055 | 0.032 | 0.523 | 0.207 | 0.181 |
| FPFH + ICP | 0.004 | 0.003 | 0.004 | 0.004 | 0.005 | 0.004 |
|  | 0.057 | 0.002 | 0.003 | 0.027 | 0.011 | 0.020 |
|  | 0.054 | 0.002 | 0.003 | 0.026 | 0.006 | 0.018 |
|  | 0.002 | 0.002 | 0.003 | 0.020 | 0.004 | 0.006 |
|  | 0.002 | 0.003 | 0.003 | 0.003 | 0.004 | 0.003 |
|  | 0.002 | 0.003 | 0.003 | 0.003 | 0.004 | 0.003 |
| Method | Aquarius Time | Aquarius RMSE | Bear Time | Bear RMSE | Eagle Time | Eagle RMSE | RMSEavg |
|---|---|---|---|---|---|---|---|
| Go-ICP | 55.806 | 0.714 | 64.914 | 0.831 | 60.492 | 0.867 | 0.804 |
| Sparse ICP | 798.121 | 0.802 | 252.054 | 1.737 | 650.190 | 0.902 | 1.147 |
|  | 29.034 | 0.016 | 15.400 | 0.051 | 2.154 | 0.034 | 0.034 |
|  | 5.530 | 0.016 | 5.024 | 0.051 | 2.140 | 0.034 | 0.034 |
| Method | Dragon Time | Dragon RMSE | Dragon Angle | Buddha Time | Buddha RMSE | Buddha Angle | RMSEavg |
|---|---|---|---|---|---|---|---|
| Uniform | 0.298 | 0.004 | 23.683 | 0.462 | 0.004 | 13.184 | 0.004 |
| + ICP | 5.058 | 0.002 | 23.873 | 8.428 | 0.003 | 21.765 | 0.002 |
| + ICP | 2.268 | 0.002 | 23.843 | 5.140 | 0.003 | 21.490 | 0.002 |
|  | 3.584 | 0.003 | 24.091 | 2.501 | 0.003 | 24.039 | 0.003 |
|  | 1.755 | 0.003 | 24.091 | 2.416 | 0.003 | 24.039 | 0.003 |
| Method / Noise range | 0.002 | 0.0025 | 0.003 | 0.005 |
|---|---|---|---|---|
| Go-ICP | 36.811 | 36.996 | 36.736 | 36.707 |
|  | 37.867 | 37.389 | 38.721 | 37.888 |
| FPFH + ICP | 126.108 | 125.918 | 125.233 | 121.885 |
|  | 28.982 | 29.883 | 28.535 | 32.443 |
|  | 96.184 | 103.004 | 95.748 | 105.784 |
|  | 47.046 | 48.915 | 43.434 | 67.582 |
|  | 14.155 | 15.501 | 13.751 | 22.195 |
| Method / Noise range | 0.002 | 0.0025 | 0.003 | 0.005 | RMSEavg |
|---|---|---|---|---|---|
| Go-ICP | 0.041 | 0.041 | 0.046 | 0.058 | 0.046 |
|  | 0.038 | 0.036 | 0.041 | 0.047 | 0.040 |
| FPFH + ICP | 0.016 | 0.016 | 0.022 | 0.016 | 0.017 |
|  | 0.059 | 0.057 | 0.073 | 0.011 | 0.050 |
|  | 0.027 | 0.010 | 0.062 | 0.005 | 0.026 |
|  | 0.002 | 0.002 | 0.002 | 0.002 | 0.002 |
|  | 0.002 | 0.002 | 0.002 | 0.002 | 0.002 |
| Method | Lab. 1 | Lab. 2 | Office | Stage |
|---|---|---|---|---|
|  | 2.331 | 3.986 | 18.793 | 21.117 |
| Generalized ICP | 12.634 | 20.897 | 590.206 | 397.375 |
| 3D-NDT | 70.781 | 138.931 | 426.403 | 472.563 |
|  | 29.649 | 3.217 | 78.294 | 10.908 |
|  | 5.163 | 2.491 | 17.576 | 9.861 |
| Method | Lab. 1 | Lab. 2 | Office | Stage | RMSEavg |
|---|---|---|---|---|---|
|  | 0.012 | 0.019 | 0.059 | 0.045 | 0.034 |
| Generalized ICP | 0.024 | 0.038 | 0.310 | 0.175 | 0.067 |
| 3D-NDT | 0.046 | 0.047 | 0.311 | 0.167 | 0.143 |
|  | 0.012 | 0.019 | 0.049 | 0.047 | 0.032 |
|  | 0.012 | 0.019 | 0.049 | 0.047 | 0.032 |
| Method | Lab. 1 Time | Lab. 1 RMSE | Lab. 2 Time | Lab. 2 RMSE | House Time | House RMSE | RMSEavg |
|---|---|---|---|---|---|---|---|
|  | 26.209 | 0.010 | 33.857 | 0.010 | 40.108 | 0.031 | 0.017 |
| GICP | 116.53 | 0.010 | 139.341 | 0.010 | 992.158 | 0.053 | 0.024 |
| 3D-NDT | 131.26 | 0.054 | 289.184 | 0.013 | 905.530 | 0.054 | 0.040 |
|  | 47.075 | 0.010 | 5.532 | 0.010 | 85.812 | 0.035 | 0.018 |
|  | 10.321 | 0.010 | 5.434 | 0.010 | 20.792 | 0.035 | 0.018 |
Point densities: Gazebo 1: 153 k → 67 k; Gazebo 2: 155 k → 66 k; UFC: 1.2 M → 828 k.

| Method | Gazebo 1 Time | Gazebo 1 RMSE | Gazebo 2 Time | Gazebo 2 RMSE | UFC Time | UFC RMSE | RMSEavg |
|---|---|---|---|---|---|---|---|
|  | 6.163 | 0.248 | 5.802 | 0.155 | 233.987 | 0.766 | 0.390 |
| GICP | 128.213 | 0.368 | 182.603 | 0.247 | 2980.82 | 4.183 | 1.599 |
| 3D-NDT | 166.381 | 0.323 | 218.221 | 0.217 | 256.105 | 4.300 | 1.613 |
|  | 5.501 | 0.200 | 5.584 | 0.147 | 545.986 | 0.266 | 0.204 |
|  | 3.369 | 0.200 | 3.343 | 0.147 | 60.909 | 0.266 | 0.204 |
ICP [13,18,19] | NDT [21] | 4PCS [51] | FPFH app. [52] | Go-ICP [6] | CICP [40] | Wang [50] | UPS (cf. A) | |
---|---|---|---|---|---|---|---|---|
(1) Independent of prior information | × | × | × | ✓ | ✓ | ✓ | ✓ | ✓ |
(2) Independent of coarse-alignment | × | × | ✓ | ✓ | × | ✓ | ||
(3) No need for sampling | × | × | × | × | × | ✓ | ✓ | ✓ |
- (4) Does not perform registration in feature space | ✓ | ✓ | × | × | ✓ | ✓ | × | ✓
(5) Robust to loss of surface details | ✓ | × | × | × | × | ✓ | × | ✓ |
(6) Multi-scenario scope | × | × | × | × | × | × | × | ✓ |