NMC3D: Non-Overlapping Multi-Camera Calibration Based on Sparse 3D Map
Abstract
1. Introduction
- We propose a method that calibrates multi-camera systems without overlapping fields of view by constructing 3D maps of natural environments and adding fixed constraints between them.
- We propose a distance-based strategy for selecting feature matching points that improves the accuracy and robustness of the calibration.
- We demonstrate the efficacy of the method through its support for rapid deployment and low latency in multi-camera systems.
2. Related Work
2.1. Target-Based Multi-Camera Calibration Methods
2.2. Targetless Multi-Camera Calibration Methods
3. Methodology
3.1. Mapping Module
3.2. Calibration Module
- Matching of similar keyframes
- Matching of map feature points. Secondly, among the screened similar keyframes, feature points were matched according to their descriptors. A subset of the resulting matches was then selected to compute the extrinsic. To the best of our knowledge, we were the first to propose selecting feature matches by their distance from the camera for computing the extrinsic. The motivation is that matches at different distances do not contribute uniformly to the rotation and translation components of the multi-camera extrinsic: distant matches played a more significant role in estimating the rotation, while nearby matches played a more significant role in estimating the translation. Selecting matches by distance therefore produced a uniform distribution of matches over a larger spatial range, which enhanced the accuracy of the computed extrinsic. In our approach, we first set a far-distance threshold and a near-distance threshold. The matches were then classified by these two thresholds into a far-distance set and a near-distance set, with the remaining matches forming a medium-distance set. Matches at different distances, distributed homogeneously across the three sets, were selected for the subsequent computation of the multi-camera extrinsic.
- Calculation of the extrinsic transformation relation. Finally, the transformation of the multi-camera extrinsic was computed from the feature matches selected in the previous step, as shown in Equation (5), which relates the matched feature points in the high-similarity keyframe K through the extrinsic between cameras A and B. Enumerating this relation over the matches yielded a system of equations from which the extrinsic was solved. The computed multi-camera extrinsic was then used as the initial extrinsic.
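The distance-based selection of matches can be sketched as follows. This is a minimal illustration, not the paper's implementation: the threshold values, the per-bin sample count, and all function and variable names are our own assumptions.

```python
import numpy as np

def select_matches_by_distance(points, near_thresh=5.0, far_thresh=30.0,
                               n_per_bin=20, seed=0):
    """Partition matched 3D map points into near/medium/far sets by their
    distance from the camera, then sample from each set so the points used
    for extrinsic estimation cover a wide range of depths.

    points: (N, 3) array of matched map points in the camera frame.
    """
    d = np.linalg.norm(points, axis=1)                    # distance to camera
    near = points[d < near_thresh]                        # constrain translation most
    far = points[d > far_thresh]                          # constrain rotation most
    mid = points[(d >= near_thresh) & (d <= far_thresh)]

    rng = np.random.default_rng(seed)

    def sample(s):
        # Draw at most n_per_bin points uniformly at random from one set.
        if len(s) <= n_per_bin:
            return s
        return s[rng.choice(len(s), size=n_per_bin, replace=False)]

    # Concatenate the subsamples: a depth-balanced set of matches.
    return np.vstack([sample(near), sample(mid), sample(far)])
```

Capping each bin at the same count prevents the (typically numerous) mid-range matches from dominating the extrinsic estimate.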
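Equation (5) itself is not reproduced in this excerpt, so as a generic stand-in: given matched 3D map points expressed in the frames of cameras A and B, the rigid transform between the two frames can be recovered in closed form with the standard SVD-based (Kabsch) solution. All names below are ours, and this is a sketch of the common technique, not the paper's exact formulation.

```python
import numpy as np

def estimate_extrinsic(pts_a, pts_b):
    """Estimate the rigid transform (R, t) such that pts_b ~ R @ pts_a + t,
    from matched 3D points, using the SVD (Kabsch) closed-form solution.

    pts_a, pts_b: (N, 3) arrays of corresponding points, N >= 3, non-degenerate.
    """
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)       # centroids
    H = (pts_a - ca).T @ (pts_b - cb)                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                               # repair a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t
```

In a pipeline like the one described above, the result would serve as the initial extrinsic that the optimization module then refines.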
3.3. Optimization Module
4. Results
4.1. Running the Calibration Algorithm in a Simulation Environment
4.2. Running the Calibration Algorithm in a Real Environment
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Li, H.; Li, Z.; Akmandor, N. StereoVoxelNet: Real-time obstacle detection based on occupancy voxels from a stereo camera using deep neural networks. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 4826–4833.
- Kim, J. Camera-Based Net Avoidance Controls of Underwater Robots. Sensors 2024, 24, 674.
- Mi, J.; Wang, Y.; Li, C. Omni-Roach: A legged robot capable of traversing multiple types of large obstacles and self-righting. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 235–242.
- Zhang, F.; Li, L.; Xu, P.; Zhang, P. Enhanced Path Planning and Obstacle Avoidance Based on High-Precision Mapping and Positioning. Sensors 2024, 24, 3100.
- Adajania, V.; Zhou, S.; Singh, A.; Schoellig, A. AMSwarm: An alternating minimization approach for safe motion planning of quadrotor swarms in cluttered environments. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 1421–1427.
- Park, J.; Jang, I.; Kim, H. Decentralized Deadlock-free Trajectory Planning for Quadrotor Swarm in Obstacle-rich Environments. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 1428–1434.
- Xu, Z.; Xiu, Y.; Zhan, X.; Chen, B.; Shimada, K. Vision-aided UAV navigation and dynamic obstacle avoidance using gradient-based B-spline trajectory optimization. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 1214–1220.
- Zhu, Y.; An, H.; Wang, H.; Xu, R.; Wu, M.; Lu, K. RC-SLAM: Road Constrained Stereo Visual SLAM System Based on Graph Optimization. Sensors 2024, 24, 536.
- Zhang, X.; Zhu, Y.; Ding, Y.; Zhu, Y.; Stone, P.; Zhang, S. Visually grounded task and motion planning for mobile manipulation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 1925–1931.
- Guo, H.; Peng, S.; Lin, H.; Wang, Q.; Zhang, G.; Bao, H.; Zhou, X. Neural 3D scene reconstruction with the Manhattan-world assumption. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5511–5520.
- Dang, C.; Lee, S.; Alam, M.; Lee, S.; Park, M.; Seong, H.; Han, S.; Nguyen, H.; Baek, M.; Lee, J. Korean Cattle 3D Reconstruction from Multi-View 3D-Camera System in Real Environment. Sensors 2024, 24, 427.
- Zhou, Z.; Tulsiani, S. SparseFusion: Distilling view-conditioned diffusion for 3D reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 12588–12597.
- Long, G.; Kneip, L.; Li, X.; Zhang, X.; Yu, Q. Simplified mirror-based camera pose computation via rotation averaging. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1247–1255.
- Xu, Y.; Gao, F.; Zhang, Z.; Jiang, X. A Calibration Method for Non-Overlapping Cameras Based on Mirrored Phase Target. Int. J. Adv. Manuf. Technol. 2017, 104, 9–15.
- Xu, J.; Li, R.; Zhao, L.; Yu, W.; Liu, Z.; Zhang, B.; Li, Y. CamMap: Extrinsic calibration of non-overlapping cameras based on SLAM map alignment. IEEE Robot. Autom. Lett. 2022, 7, 11879–11885.
- Agrawal, M.; Davis, L.S. Camera calibration using spheres: A semi-definite programming approach. In Proceedings of the Ninth IEEE International Conference on Computer Vision, Nice, France, 13–16 October 2003; pp. 782–789.
- Ueshiba, T.; Tomita, F. Plane-based calibration algorithm for multi-camera systems via factorization of homography matrices. In Proceedings of the Ninth IEEE International Conference on Computer Vision, Nice, France, 13–16 October 2003; pp. 966–973.
- Zhang, Z. Flexible camera calibration by viewing a plane from unknown orientations. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; pp. 666–673.
- Kumar, R.; Ilie, A.; Frahm, J.; Pollefeys, M. Simple calibration of non-overlapping cameras with a mirror. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–7.
- Carrera, G.; Angeli, A.; Davison, A. SLAM-based automatic extrinsic calibration of a multi-camera rig. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 2652–2659.
- Heng, L.; Furgale, P.; Pollefeys, M. Leveraging image-based localization for infrastructure-based calibration of a multi-camera rig. J. Field Robot. 2015, 32, 775–802.
- Triggs, B.; McLauchlan, P.; Hartley, R.; Fitzgibbon, A. Bundle adjustment—A modern synthesis. In Vision Algorithms: Theory and Practice: International Workshop on Vision Algorithms; Springer: Berlin/Heidelberg, Germany, 2000; pp. 298–372.
- Campos, C.; Elvira, R. ORB-SLAM3: An accurate open-source library for visual, visual–inertial, and multimap SLAM. IEEE Trans. Robot. 2021, 37, 1874–1890.
- Zhang, Y.; Jin, R.; Zhou, Z. Understanding bag-of-words model: A statistical framework. Int. J. Mach. Learn. Cybern. 2010, 1, 43–52.
Extrinsic | Rotation (°) | Rotation Error (°) ↓ | Translation (m) | Translation Error (m) ↓ |
---|---|---|---|---|
Ground truth | [−180, 0, −180] | — | [0, 0.356, −1.932] | — |
CamMap | [−181.189, −0.514, −179.790] | [1.890, 0.514, 0.210] | [0.087, 0.297, −1.878] | [0.087, 0.059, 0.054] |
Ours | [−179.050, −0.430, −179.585] | [0.950, 0.430, 0.415] | [0.025, 0.261, −2.030] | [0.025, 0.095, 0.098] |
Variance | Rotation (°) ↓ | Translation (m) ↓ | Overall ↓ |
---|---|---|---|
CamMap | 0.631 | 0.0653 | 0.6993 |
Ours | 0.590 | 0.0726 | 0.6626 |
Extrinsic | Front and Rear Cameras | Ground Truth |
---|---|---|
Rotation (°) | [−177.399, −4.1353, −177.957] | [−180, 0, −180] |
Translation (m) | [−0.0288, −0.05166, −0.28815] | [0, 0, −0.250] |
Extrinsic | Front and Side Cameras | Ground Truth |
---|---|---|
Rotation (°) | [−100.476, 87.9655, −100.297] | [−90, 90, −90] |
Translation (m) | [0.0439, −0.0213, −0.198614] | [0, 0, −0.250] |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Dai, C.; Han, T.; Luo, Y.; Wang, M.; Cai, G.; Su, J.; Gong, Z.; Liu, N. NMC3D: Non-Overlapping Multi-Camera Calibration Based on Sparse 3D Map. Sensors 2024, 24, 5228. https://doi.org/10.3390/s24165228