3D Point Cloud Verification and Scatter Removal Using a Dual Camera/Projector Setup
Highlights
- Scattering media such as turbid water create false 3D data points when active illumination is used. This effect complicates subsequent machine vision tasks such as boundary detection, size estimation, and object segmentation.
- Our method can remove up to 99% of such false 3D points, making it an enabling methodology for robust and trustworthy 3D imaging in turbid conditions.
- The verification algorithm is O(1) per 3D point (a constant-time lookup rather than a nearest-neighbor search), making it suitable for embedded platforms and real-time systems.
- The method enables high-quality, turbidity-independent 3D imaging for underwater robotics, inspection, and autonomous operations, even in challenging environments.
- The reliable removal of scatter-induced artifacts allows accurate object boundary detection, critical for tasks like manipulation, navigation, and monitoring.
Abstract
1. Introduction
2. Theoretical Background
2.1. Short Introduction to Underwater 3D Imaging by Triangulation
2.2. Influence of Scatter on 3D Data Quality
2.2.1. Backscatter-Dominated Signal
2.2.2. Forward-Scatter-Dominated Signal
2.2.3. Three-Dimensional Data Outside of Physical Objects
3. Materials and Methods
3.1. Camera Setup and Calibration
3.2. Scatter Removal Through a Dual Camera and Projector Setup
1. Using the camera calibration between Camera 1 and Camera 2, determine the pixel in Camera 2 where each 3D point measured by Camera 1 should have been observed. As the cameras are calibrated, this is simply a rotation/translation of the 3D coordinates into the coordinate system of Camera 2, followed by a homogeneous (projective) transform of the resulting 3D coordinates onto the 2D image plane of Camera 2.
2. For each of the transformed measurements from Camera 1, find the 3D measurement in Camera 2 that is closest to it, at the pixel determined in step 1. As the pixel indices in Camera 2 at which to search for the closest measurement have been found through calibration, this search can be performed very quickly, without the need for, e.g., kd-trees [28]. Three outcomes are possible (a minimal code sketch of these two steps and the resulting classification is given after this list):
   1. A point is found in Camera 2 that is close to the transformed measurement, i.e., the 3D distance between them is below a suitable threshold (“Close”).
   2. A point is found in Camera 2, but it is far away from the transformed measurement, i.e., the 3D distance between them exceeds the threshold (“Far”).
   3. The pixel does not contain any 3D measurement (no peaks above threshold), meaning no match can be found (“None”).
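To make steps 1 and 2 concrete, the following is a minimal sketch in Python, assuming the stereo calibration is available as a rotation matrix `R`, a translation vector `t`, and the Camera 2 intrinsic matrix `K2`, and that Camera 2's measurements are stored as an image-shaped array of 3D points with NaN where no peak was detected. All function and variable names, and the single-pixel (rather than neighbourhood) lookup, are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def project_to_camera2(X1, R, t, K2):
    """Project 3D points measured by Camera 1 (given in Camera 1 coordinates)
    into Camera 2 using the stereo calibration (rotation R, translation t,
    intrinsic matrix K2). Returns integer pixel coordinates and the points
    expressed in Camera 2 coordinates."""
    X2 = X1 @ R.T + t                     # rigid transform into Camera 2's frame
    uvw = X2 @ K2.T                       # homogeneous image coordinates
    pix = uvw[:, :2] / uvw[:, 2:3]        # perspective division
    return np.round(pix).astype(int), X2

def classify_matches(X1, R, t, K2, points2, threshold):
    """Classify each Camera 1 measurement as 'Close', 'Far' or 'None'.
    points2 is an (H, W, 3) array of 3D points measured by Camera 2
    (NaN where no peak was detected), in Camera 2 coordinates."""
    pix, X2_pred = project_to_camera2(X1, R, t, K2)
    h, w = points2.shape[:2]
    labels = []
    for (u, v), xp in zip(pix, X2_pred):
        # For simplicity, only the predicted pixel is checked; a small
        # neighbourhood search around (u, v) could be used instead.
        if not (0 <= u < w and 0 <= v < h) or not np.isfinite(points2[v, u]).all():
            labels.append("None")         # no 3D measurement at that pixel
            continue
        dist = np.linalg.norm(points2[v, u] - xp)
        labels.append("Close" if dist < threshold else "Far")
    return labels
```

Because the pixel correspondence comes directly from the calibration, each point costs only one lookup and one distance computation, which is what allows the constant-time behavior highlighted earlier.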
- Only opaque surfaces are to be imaged. This means that there is only one true 3D measurement per camera pixel in either camera; any secondary peak that is detected must be due to scattering.
- Based on the consideration illustrated in Figure 7, scattered data points will differ in 3D location between the two cameras, meaning that they are highly unlikely to be “close”. The correct measurements will, however, always align closely with each other.
- Trusted1 means that the assumption of the algorithm has been met: a single data point that can be verified between the cameras.
- Trusted0 represents no data received, which can also be considered a safe state.
- Untrusted1 represents a single point that could not be verified. This will either be a scatter point or an object point that could not be verified in the second camera due to shadowing. Usually, this point will be disregarded as untrustworthy.
- Untrusted2 means that both reported points have a nearby corresponding point. Typically, this occurs when the scattering is too close to the real object point to be well separated. In this case, no reliable selection can be made.
- Untrusted3 indicates that all points are far from each other. This typically happens when only scattering is measured by both cameras. (A lookup-table sketch of how these classifications combine is given below.)
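The combination of the two per-point classifications into these trust levels follows the Point 1/Point 2 decision table reproduced later in this document. As a minimal sketch, assuming “Point 1” and “Point 2” refer to the two candidate peaks reported for a pixel (as the surrounding text suggests), the mapping can be implemented as a constant-time dictionary lookup; the Python names are illustrative, not the authors' code.

```python
# Decision table mapping the ("Close"/"Far"/"None") classifications of the two
# candidate points reported for a pixel to a trust level.
TRUST_TABLE = {
    ("None", "None"):   "Trusted0",
    ("Close", "None"):  "Trusted1",
    ("None", "Close"):  "Trusted1",
    ("Far", "None"):    "Untrusted1",
    ("None", "Far"):    "Untrusted1",
    ("Close", "Close"): "Untrusted2",
    ("Close", "Far"):   "Trusted2",
    ("Far", "Close"):   "Trusted2",
    ("Far", "Far"):     "Untrusted3",
}

def trust_level(point1_class: str, point2_class: str) -> str:
    """Constant-time (O(1)) lookup of the combined trust level for one pixel."""
    return TRUST_TABLE[(point1_class, point2_class)]

# Example: the first peak verifies in Camera 2, the second does not.
assert trust_level("Close", "Far") == "Trusted2"
```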
4. Results
4.1. Qualitative Assessment
4.2. Quantitative Assessment
5. Discussion
6. Conclusions and Summary
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Market Research Future. Underwater Robotics Market Size Share 2030|Industry Trends. Available online: https://www.marketresearchfuture.com/reports/underwater-robotics-market-7605 (accessed on 19 November 2025).
- Hu, K.; Wang, T.; Shen, C.; Weng, C.; Zhou, F.; Xia, M.; Weng, L. Overview of Underwater 3D Reconstruction Technology Based on Optical Images. J. Mar. Sci. Eng. 2023, 11, 949.
- Cho, M.; Javidi, B. Three-Dimensional Visualization of Objects in Turbid Water Using Integral Imaging. J. Disp. Technol. 2010, 6, 544–547.
- Komatsu, S.; Markman, A.; Javidi, B. Optical sensing and detection in turbid water using multidimensional integral imaging. Opt. Lett. 2018, 43, 3261–3264.
- Schulein, R.; Do, C.M.; Javidi, B. Distortion-tolerant 3D recognition of underwater objects using neural networks. J. Opt. Soc. Am. A 2010, 27, 461–468.
- Levy, D.; Peleg, A.; Pearl, N.; Rosenbaum, D.; Akkaynak, D.; Korman, S.; Treibitz, T. SeaThru-NeRF: Neural Radiance Fields in Scattering Media. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 56–65.
- Wen, B.; Trepte, M.; Aribido, J.; Kautz, J.; Gallo, O.; Birchfield, S. FoundationStereo: Zero-Shot Stereo Matching. In Proceedings of the IEEE Computer Vision and Pattern Recognition Conference, Nashville, TN, USA, 11–15 June 2025; pp. 5249–5260.
- Min, J.; Jeon, Y.; Kim, J.; Choi, M. S2M2: Scalable Stereo Matching Model for Reliable Depth Estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision 2025, Honolulu, HI, USA, 19–23 October 2025.
- Castillón, M.; Palomer, A.; Forest, J.; Ridao, P. State of the Art of Underwater Active Optical 3D Scanners. Sensors 2019, 19, 5161.
- Maccarone, A.; McCarthy, A.; Ren, X.; Warburton, R.E.; Wallace, A.M.; Moffat, J.; Petillot, Y.; Buller, G.S. Underwater depth imaging using time-correlated single-photon counting. Opt. Express 2015, 23, 33911–33926.
- Zhang, Q.; Wang, Q.; Hou, Z.; Liu, Y.; Su, X. Three-dimensional shape measurement for an underwater object based on two-dimensional grating pattern projection. Opt. Laser Technol. 2011, 43, 801–805.
- Massot-Campos, M.; Oliver-Codina, G.; Kemal, H.; Petillot, Y.; Bonin-Font, F. Structured light and stereo vision for underwater 3D reconstruction. In Proceedings of the OCEANS 2015, Genova, Italy, 18–21 May 2015; pp. 1–6.
- Tang, Y.; Zhang, Z.; Wang, X. Estimation of the Scale of Artificial Reef Sets on the Basis of Underwater 3D Reconstruction. J. Ocean Univ. China 2021, 20, 1195–1206.
- Narasimhan, S.G.; Nayar, S.K.; Sun, B.; Koppal, S.J. Structured light in scattering media. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05), Beijing, China, 17–21 October 2005; Volume 1, pp. 420–427.
- Lyu, N.; Yu, H.; Han, J.; Zheng, D. Structured light-based underwater 3-D reconstruction techniques: A comparative study. Opt. Lasers Eng. 2023, 161, 107344.
- Bruno, F.; Bianco, G.; Muzzupappa, M.; Barone, S.; Razionale, A.V. Experimentation of structured light and stereo vision for underwater 3D reconstruction. ISPRS J. Photogramm. Remote Sens. 2011, 66, 508–518.
- Risholm, P.; Kirkhus, T.; Thielemann, J.T.; Thorstensen, J. Adaptive Structured Light with Scatter Correction for High-Precision Underwater 3D Measurements. Sensors 2019, 19, 1043.
- Tetlow, S.; Spours, J. Three-dimensional measurement of underwater work sites using structured laser light. Meas. Sci. Technol. 1999, 10, 1162–1167.
- Risholm, P.; Thorstensen, J.B.; Thielemann, J.T.; Kaspersen, K.; Tschudi, J.; Yates, C.; Softley, C.; Abrosimov, I.; Alexander, J.; Haugholt, K.H. Real-time super-resolved 3D in turbid water using a fast range-gated CMOS camera. Appl. Opt. 2018, 57, 3927–3937.
- Thorstensen, J.; Zonetti, S.; Thielemann, J. Light transport in turbid water for 3D underwater imaging. Opt. Express 2024, 32, 45013.
- Risholm, P.; Thielemann, J.T.; Moore, R.; Haugholt, K.H. A scatter removal technique to enhance underwater range-gated 3D and intensity images. In Proceedings of the OCEANS 2018 MTS/IEEE Charleston, Charleston, SC, USA, 22–25 October 2018.
- Mobley, C.D. Optical Properties of Water. In Handbook of Optics: Volume IV—Optical Properties of Materials, Nonlinear Optics, Quantum Optics; Bass, M., Ed.; McGraw-Hill Education: Columbus, OH, USA, 2010.
- Garrido-Jurado, S.; Muñoz-Salinas, R.; Madrid-Cuevas, F.J.; Medina-Carnicer, R. Generation of fiducial marker dictionaries using Mixed Integer Linear Programming. Pattern Recognit. 2016, 51, 481–491.
- Zhang, Z. Flexible camera calibration by viewing a plane from unknown orientations. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Corfu, Greece, 20–25 September 1999; Volume 1, pp. 666–673.
- Pedersen, M.; Bengtson, S.H.; Gade, R.; Madsen, N.; Moeslund, T.B. Camera Calibration for Underwater 3D Reconstruction Based on Ray Tracing Using Snell’s Law. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1491–14917.
- Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, UK, 2003.
- Pollefeys, M.; Koch, R.; Van Gool, L. A simple and efficient rectification method for general motion. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Corfu, Greece, 20–27 September 1999; IEEE: Piscataway, NJ, USA, 1999; Volume 1, pp. 496–501.
- Bentley, J.L. Multidimensional binary search trees used for associative searching. Commun. ACM 1975, 18, 509–517.
- Bouquet, G.; Thorstensen, J.; Bakke, K.A.H.; Risholm, P. Design tool for TOF and SL based 3D cameras. Opt. Express 2017, 25, 27758–27769.
| None | None | None | None | None |
| None | Far | Far | Far | Far |
| None | Far | Close | Close | Far |
| None | Far | Close | Close | Far |
| None | Far | Far | Far | Far |
| Point 2 Classification | Point 1 Classification: None | Point 1 Classification: Close | Point 1 Classification: Far |
|---|---|---|---|
| None | Trusted0 | Trusted1 | Untrusted1 |
| Close | Trusted1 | Untrusted2 | Trusted2 |
| Far | Untrusted1 | Trusted2 | Untrusted3 |
| NTU | False Positive Rate (%): Our | False Positive Rate (%): Amplitude | False Negative Rate (%): Our | False Negative Rate (%): Amplitude |
|---|---|---|---|---|
| 0.3 | 0.7 | 31.0 | 0.0 | 0.0 |
| 1.6 | 0.5 | 29.6 | 0.1 | 0.0 |
| 2.4 | 0.3 | 25.6 | 0.1 | 0.0 |
| 3.5 | 0.3 | 22.2 | 0.1 | 0.0 |
| 4.5 | 0.1 | 17.2 | 0.1 | 0.0 |
| 5.3 | 0.1 | 17.4 | 0.1 | 0.0 |
| 6.5 | 0.0 | 10.5 | 0.4 | 0.2 |
| 8.0 | 0.0 | 7.6 | 0.6 | 0.4 |
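For reference, the following is a hedged sketch of how false positive and false negative rates of the kind tabulated above could be computed, assuming a boolean ground-truth label per 3D point (real surface vs. scatter artifact) and a boolean keep/reject decision from the filter; the exact ground-truth protocol and rate definitions used in the paper are not reproduced here.

```python
import numpy as np

def fp_fn_rates(is_object_point, is_kept):
    """Illustrative false positive / false negative rates for scatter removal.

    is_object_point: boolean array, True where a 3D point belongs to a real
                     surface (ground truth), False where it is a scatter artifact.
    is_kept:         boolean array, True where the filter accepted the point.
    Returns (FP rate, FN rate) in percent."""
    is_object_point = np.asarray(is_object_point, dtype=bool)
    is_kept = np.asarray(is_kept, dtype=bool)
    # FP: scatter artifacts that were (wrongly) kept.
    fp = is_kept[~is_object_point].mean() if (~is_object_point).any() else 0.0
    # FN: real surface points that were (wrongly) removed.
    fn = (~is_kept[is_object_point]).mean() if is_object_point.any() else 0.0
    return 100.0 * fp, 100.0 * fn
```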