Three-Dimensional Reconstruction of Fragment Shape and Motion in Impact Scenarios
Abstract
1. Introduction
1.1. Close-Range Photogrammetric Object Surface Reconstruction Techniques
1.2. Related Work
1.3. Goals and Structure
2. Experimental Setup
3. Methodology
4. Description and Experimental Evaluation of the Proposed Methods
4.1. Non-Rotating or Low-Rotation Motion
4.1.1. Image Analysis and 3D Shape Reconstruction
4.1.2. AI-Based Object Detection and Segmentation in Overlapping and Shadowed Scenarios
4.2. Rotating Motion
5. Results
5.1. Non-Rotating or Low-Rotation Motion
5.1.1. A 3D Reconstruction Under Dust-, Shadow-, and Overlap-Free Conditions
5.1.2. Experimental Results and Performance Evaluation of Mask R-CNN in Overlapping and Shadowed Scenarios
5.2. Rotating Motion
6. Discussion
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
| --- | --- |
| SfM | Structure from Motion |
| AI | Artificial Intelligence |
| F | Fundamental Matrix |
| E | Essential Matrix |
| RMS_X | Root Mean Square error along the X direction |
| RMS_Y | Root Mean Square error along the Y direction |
| RMS_Z | Root Mean Square error along the Z direction |
References
- Leicht, L.; Beckmann, B.; Curbach, M. Influences on the structural response of beams in drop tower experiments. Civ. Eng. Des. 2021, 3, 192–209. [Google Scholar] [CrossRef]
- Gallo, A.; Muzzupappa, M.; Bruno, F. 3D reconstruction of small sized objects from a sequence of multi-focused images. J. Cult. Herit. 2014, 15, 173–182. [Google Scholar] [CrossRef]
- Patrucco, G.; Giulio Tonolo, F.; Sammartano, G.; Spanò, A. SFM-based 3D reconstruction of heritage assets using UAV thermal images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, XLIII-B1-2022, 399–406. [Google Scholar] [CrossRef]
- Engin, I.C.; Maerz, N.H. Size distribution analysis of aggregates using LiDAR scan data and an alternate algorithm. Measurement 2019, 143, 136–143. [Google Scholar] [CrossRef]
- Arayici, Y. An approach for real world data modelling with the 3D terrestrial laser scanner for built environment. Autom. Constr. 2007, 16, 816–829. [Google Scholar] [CrossRef]
- Andresen, K. Das Phasenshiftverfahren zur Moiré-Bildauswertung. Optik 1986, 72, 115–125. [Google Scholar]
- Luhmann, T.; Fraser, C.; Maas, H.G. Sensor modelling and camera calibration for close-range photogrammetry. ISPRS J. Photogramm. Remote Sens. 2015, 115, 37–46. [Google Scholar] [CrossRef]
- Stahs, T.G.; Wahl, F.M. Fast and robust range data acquisition in a low-cost environment. In Proceedings of the Close-Range Photogrammetry Meets Machine Vision, Zurich, Switzerland, 3–7 September 1990; Gruen, A., Baltsavias, E.P., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 1990; Volume 1395, p. 13951R. [Google Scholar] [CrossRef]
- Batlle, J.; Mouaddib, E.; Salvi, J. Recent progress in coded structured light as a technique to solve the correspondence problem: A survey. Pattern Recognit. 1998, 31, 963–982. [Google Scholar] [CrossRef]
- Bell, T.; Li, B.; Zhang, S. Structured Light Techniques and Applications. In Wiley Encyclopedia of Electrical and Electronics Engineering; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2016; pp. 1–24. [Google Scholar] [CrossRef]
- Lu, L.; Xi, J.; Yu, Y.; Guo, Q. New approach to improve the accuracy of 3-D shape measurement of moving object using phase shifting profilometry. Opt. Express 2013, 21, 30610–30622. [Google Scholar] [CrossRef]
- Malz, R. Adaptive Light Encoding for 3-D-Sensing with Maximum Measurement Efficiency. In Mustererkennung 1989, Proceedings of the 11. DAGM-Symposium Hamburg, Hamburg, Germany, 2–4 October 1989; Burkhardt, H., Höhne, K.H., Neumann, B., Eds.; Springer: Berlin/Heidelberg, Germany, 1989; pp. 98–105. [Google Scholar]
- Watson, E.; Gulde, M.; Hiermaier, S. Fragment Tracking in Hypervelocity Impact Experiments. Procedia Eng. 2017, 204, 170–177. [Google Scholar] [CrossRef]
- Liang, S.C.; Li, Y.; Chen, H.; Huang, J.; Liu, S. Research on the technique of identifying debris and obtaining characteristic parameters of large-scale 3D point set. Int. J. Impact Eng. 2013, 56, 27–31. [Google Scholar] [CrossRef]
- Watson, E.; Kunert, N.; Putzar, R.; Maas, H.G.; Hiermaier, S. Four-View Split-Image Fragment Tracking in Hypervelocity Impact Experiments. Int. J. Impact Eng. 2019, 135, 103405. [Google Scholar] [CrossRef]
- Putze, T.; Raguse, K.; Maas, H.G. Configuration of multi mirror systems for single high speed camera based 3D motion analysis. Proc. SPIE-Int. Soc. Opt. Eng. 2007, 6491, 64910L. [Google Scholar] [CrossRef]
- Prades-Valls, A.; Corominas, J.; Lantada, N.; Matas, G.; Núñez-Andrés, M.A. Capturing rockfall kinematic and fragmentation parameters using high-speed camera system. Eng. Geol. 2022, 302, 106629. [Google Scholar] [CrossRef]
- Weindorf, B.J.; Morrison, M.; Lowe, K.T.; Ng, W.; Loebig, J. Fragment tracking for microparticle breakage resulting from high-speed impacts. Powder Technol. 2025, 453, 120657. [Google Scholar] [CrossRef]
- Filho, W.L.; Abubakar, I.R.; Hunt, J.D.; Dinis, M.A.P. Managing space debris: Risks, mitigation measures, and sustainability challenges. Sustain. Futur. 2025, 10, 100849. [Google Scholar] [CrossRef]
- Johnson, N.; Krisko, P.; Liou, J.C.; Anz-Meador, P. NASA’s new breakup model of EVOLVE 4.0. Adv. Space Res. 2001, 28, 1377–1384. [Google Scholar] [CrossRef]
- Rozumnyi, D.; Oswald, M.R.; Ferrari, V.; Matas, J.; Pollefeys, M. DeFMO: Deblurring and Shape Recovery of Fast Moving Objects. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021. [Google Scholar] [CrossRef]
- Rozumnyi, D.; Oswald, M.R.; Ferrari, V.; Pollefeys, M. Motion-From-Blur: 3D Shape and Motion Estimation of Motion-Blurred Objects in Videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 15990–15999. [Google Scholar]
- Photron. Photron Fastcam SA-X2 Datasheet (accessed on 8 July 2025); Photron: San Diego, CA, USA; Bucks, UK; Tokyo, Japan, 2017. [Google Scholar]
- Ullman, S. The interpretation of structure from motion. Proc. R. Soc. Lond. Ser. B. Biol. Sci. 1979, 203, 405–426. [Google Scholar]
- Westoby, M.; Brasington, J.; Glasser, N.; Hambrey, M.; Reynolds, J. ‘Structure-from-Motion’ photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314. [Google Scholar] [CrossRef]
- Godding, R. Photogrammetric method for the investigation and calibration of high-resolution camera systems. In Proceedings of the Recording Systems: High-Resolution Cameras and Recording Devices and Laser Scanning and Recording Systems, Munich, Germany, 21–25 June 1993; Beiser, L., Lenz, R.K., Eds.; SPIE: Bellingham, WA, USA, 1993; Volume 1987, pp. 103–110. [Google Scholar] [CrossRef]
- Brown, D.C. Close-Range Camera Calibration. Photogramm. Eng. 1971, 37, 855–866. [Google Scholar]
- Boufares, O.; Aloui, N.; Cherif, A. Adaptive Threshold for Background Subtraction in Moving Object Detection using Stationary Wavelet Transforms 2D. Int. J. Adv. Comput. Sci. Appl. 2016, 7, 29–36. [Google Scholar] [CrossRef]
- Luhmann, T. Nahbereichsphotogrammetrie; Wichmann: Lotte, Germany, 2023. [Google Scholar]
- Kuhn, H.W. The Hungarian method for the assignment problem. Nav. Res. Logist. Q. 1955, 2, 83–97. [Google Scholar] [CrossRef]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R.B. Mask R-CNN. arXiv 2017, arXiv:1703.06870. [Google Scholar]
- Davoudkhani, M.; Mulsow, C.; Maas, H.G. Single camera 6-dof object tracking using spatial resection based techniques. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2025, XLVIII-G-2025, 351–358. [Google Scholar] [CrossRef]
- Shorten, C.; Khoshgoftaar, T.M. A survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
- Chiu, M.C.; Chen, T.M. Applying Data Augmentation and Mask R-CNN-Based Instance Segmentation Method for Mixed-Type Wafer Maps Defect Patterns Classification. IEEE Trans. Semicond. Manuf. 2021, 34, 455–463. [Google Scholar] [CrossRef]
- Duran Vergara, L.; Leicht, L.; Beckmann, B.; Maas, H.G. Longitudinal wave propagation determination in concrete specimen under impact loading by ultrahigh-speed camera image sequence and strain gauge data analysis. Meas. Sci. Technol. 2025, 36. [Google Scholar] [CrossRef]
| Parameter | Value |
| --- | --- |
| Sensor Type | CMOS, monochrome model |
| Pixel Size | 20 µm × 20 µm |
| Sensor Size | 20.48 mm × 20.48 mm |
| Maximum Resolution | 1024 × 1024 px at 12,500 fps |
| Fill Factor | 58% |
| Minimum Exposure | Global electronic shutter to 1 µs |
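The sensor dimensions in the table above follow directly from the pixel pitch and the pixel count; a minimal sketch of this consistency check (variable names are illustrative, not from the paper):

```python
# Consistency check of the Photron SA-X2 datasheet values:
# sensor edge length = pixel pitch x pixel count per edge.
pixel_pitch_um = 20.0   # 20 µm square pixels
resolution_px = 1024    # 1024 x 1024 px at full resolution

sensor_edge_mm = pixel_pitch_um * resolution_px / 1000.0
print(sensor_edge_mm)   # 20.48, matching the stated 20.48 mm x 20.48 mm sensor
```

This confirms the table entries are mutually consistent: 1024 pixels at a 20 µm pitch span exactly 20.48 mm per sensor edge.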
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Davoudkhani, M.; Maas, H.-G. Three-Dimensional Reconstruction of Fragment Shape and Motion in Impact Scenarios. Sensors 2025, 25, 5842. https://doi.org/10.3390/s25185842