Path Planning of Autonomous 3-D Scanning and Reconstruction for Robotic Multi-Model Perception System
Abstract
1. Introduction
1.1. Background and Significance
1.2. Aims and Contributions
- (1)
- This paper proposes a novel path planning approach for 3-D visual coverage scanning. By enforcing a set of practical photogrammetric constraints, it effectively improves the quality of the images the robotic system collects along its path. Unlike existing path planning algorithms, which are frontier-based, sampling-based, or a combination of both, the proposed method plans the path from the shape of the objects themselves, yielding dynamically feasible paths that follow the true shapes of the scanned objects;
- (2)
- This paper defines two new photogrammetric constraints for image acquisition that depend on the shape of the scene or object. To obtain an accurate model of the region of interest (ROI), the images captured by the cameras integrated into the scanner should satisfy the equidistant and frontal constraints. These constraints resolve the image deformation and resolution loss caused by shape variations of the scenes or objects to be reconstructed;
- (3)
- This paper presents a novel scanner and robotic system designed for coverage scanning tasks. First, we designed a scanner equipped with a LiDAR, a RealSense depth camera, and multiple cameras. Second, we combined a mobile platform with a robotic arm so that the robotic system can move freely in 3-D space. Finally, the scanner was mounted at the end of the robotic arm to form the complete scanning system, which significantly improves both photographic quality and scanning efficiency.
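The shape-based planning idea in contribution (1) can be sketched in a few lines. This is an illustrative sketch only, not the authors' implementation: `offset_waypoints` and its arguments are assumed names, and the surface samples and normals are taken as given (e.g., from the LiDAR point cloud).

```python
import numpy as np

def offset_waypoints(surface_points, surface_normals, h):
    """Shape-following waypoint generation (sketch): offset each sampled
    surface point a constant distance h along its outward normal and aim
    the camera back at the surface, so the path follows the true shape
    of the object instead of a preset grid."""
    normals = surface_normals / np.linalg.norm(surface_normals, axis=1, keepdims=True)
    positions = surface_points + h * normals   # candidate camera positions
    view_dirs = -normals                       # frontal viewing directions
    return positions, view_dirs
```

Because the waypoints inherit the surface geometry, a curved object automatically produces a curved scanning path at the chosen stand-off distance h.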
2. Problem Formulation
2.1. Assumptions
- (1)
- The ground of the three-dimensional workspace in which the robotic system operates is assumed to be flat, without potholes or bulges, so that the robotic system can move smoothly;
- (2)
- The LiDAR, RealSense, and cameras on the scanner are assumed to be pre-calibrated, and the camera and image parameters, such as the camera field of view (FOV) and image resolution, are known in advance. Likewise, the scanner and the end of the robotic arm are assumed to be pre-calibrated;
- (3)
- For the purpose of path planning, the theoretical analysis treats either an individual image, or multiple images composited into a single image, as one unit.
2.2. Definition of Photogrammetric Constraints and Problem Statement
- (1)
- Equidistant: To maintain a consistent image resolution, each waypoint should keep a constant offset distance h from the fitted plane, within a bounded position error;
- (2)
- Frontal: At each waypoint, the camera should be oriented perpendicular to the fitted plane to capture an orthophotograph, within a bounded angle error;
- (3)
- Overlap: For the 3-D reconstruction application, the images should be acquired with specified horizontal and vertical overlap rates.
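The equidistant and frontal constraints above can be expressed as a simple feasibility check on a candidate waypoint. The sketch below assumes the fitted plane is given by a point and a normal, and uses illustrative names and tolerance parameters (`eps_h`, `eps_theta_deg`) standing in for the bounded position and angle errors; it is not the paper's exact formulation.

```python
import numpy as np

def check_photogrammetric_constraints(position, view_dir, plane_point,
                                      plane_normal, h, eps_h, eps_theta_deg):
    """Return True if a waypoint satisfies the equidistant and frontal
    constraints with respect to the fitted plane."""
    n = plane_normal / np.linalg.norm(plane_normal)
    # Equidistant: the distance from the waypoint to the fitted plane
    # must stay within eps_h of the target offset h.
    dist = abs(np.dot(position - plane_point, n))
    equidistant_ok = abs(dist - h) <= eps_h
    # Frontal: the camera axis must be perpendicular to the plane, i.e.
    # anti-parallel to the plane normal within eps_theta_deg.
    v = view_dir / np.linalg.norm(view_dir)
    angle = np.degrees(np.arccos(np.clip(np.dot(v, -n), -1.0, 1.0)))
    frontal_ok = angle <= eps_theta_deg
    return equidistant_ok and frontal_ok
```

The overlap constraint is enforced separately when spacing consecutive waypoints, since it couples pairs of images rather than a single view and the plane.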
2.3. Next View Planner
2.4. Inverse Kinematics Constraints
2.5. Path Generation
2.6. Algorithm of Path Planning under Photogrammetric and Kinematics Constraints (PP-PKC)
Algorithm 1: Path Planning Algorithm under Photogrammetric and Kinematics Constraints (PP-PKC)
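The pseudocode of Algorithm 1 is not reproduced here. As a rough illustration of the structure its name implies, the sketch below filters candidate views by both constraint families and then orders the survivors into a path; the helper predicates `is_photogrammetric` and `is_ik_feasible` and the greedy nearest-neighbour ordering are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def plan_path_ppkc(candidates, is_photogrammetric, is_ik_feasible, start):
    """PP-PKC-style planning loop (sketch): keep only candidate views
    that satisfy the photogrammetric constraints AND have a valid
    inverse-kinematics solution for the arm, then order them greedily,
    nearest-neighbour first, starting from the initial pose."""
    feasible = [np.asarray(c, dtype=float) for c in candidates
                if is_photogrammetric(c) and is_ik_feasible(c)]
    path, current = [], np.asarray(start, dtype=float)
    while feasible:
        i = min(range(len(feasible)),
                key=lambda k: np.linalg.norm(feasible[k] - current))
        current = feasible.pop(i)
        path.append(current)
    return path
```

Filtering by IK feasibility before ordering ensures that every waypoint on the emitted path is actually reachable by the arm mounted on the mobile platform.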
3. Robotic Multi-Model Perception System (RMMP)
3.1. Robotic System Overall Structure
3.2. Mechanical Module and Self-Designed Scanner
3.3. Control Module
4. Experiment
4.1. Experimental Environment
4.2. Task Workflow
4.3. Multi-Model Data Perception
4.4. Three-Dimensional Real Scene Reconstruction
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
| | Num. of | Constraint h (m) | Path Length (m) | (m) | Triangular Facets |
|---|---|---|---|---|---|
| Preset path | 50 | — | 25.40 | — | — |
| Path1 | 47 | 2.5 | 21.73 | | 1,703,424 |
| Path2 | 44 | 2 | 18.47 | | 2,982,178 |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Fan, C.; Wang, H.; Cao, Z.; Chen, X.; Xu, L. Path Planning of Autonomous 3-D Scanning and Reconstruction for Robotic Multi-Model Perception System. Machines 2023, 11, 26. https://doi.org/10.3390/machines11010026