# Single-Image-Based 3D Reconstruction of Endoscopic Images


## Abstract


## 1. Introduction

- We present a comprehensive pipeline for step-by-step 3D reconstruction using an AD-based PSFS algorithm, as illustrated in Figure 1. The pipeline is generic and applicable to any endoscopic device, provided that the image data required for 3D reconstruction, as well as for geometric and radiometric calibration, are available.
- We used JPG images from an endoscope for which RAW image data was unavailable, reflecting the many real-world applications in which access to RAW data is limited. This choice underscores the practical applicability of our approach.
- We validated the AD-based PSFS method in real-world scenarios by performing 3D reconstruction on simple primitives and comparing the results with ground truth, a validation step seldom reported in the literature. This rigorous process strengthens the credibility and reliability of our approach.
- We present simple methods for estimating the spatial irradiance and light source intensity of the endoscope, designed for scenarios where relying on multiple images for radiometric calibration is not feasible. Further details on these methods are provided in Section 2.4 of the article.
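The staged procedure outlined above (geometric calibration, radiometric calibration, unit conversion, denoising, PSFS) can be sketched as follows. Every function and numeric choice here, including the gamma-type CRF and the flat-field division, is a hypothetical placeholder for illustration, not the authors' implementation:

```python
import numpy as np

def linearize(img, gamma=2.2):
    """Invert an assumed gamma-type camera response (cf. Section 2.4.2)."""
    return np.clip(img, 0.0, 1.0) ** gamma

def flat_field(irr, spatial_irradiance):
    """Divide out an estimated spatial light distribution (cf. Section 2.4.3)."""
    return irr / np.maximum(spatial_irradiance, 1e-6)

def preprocess(jpg_image, spatial_irradiance, source_power):
    irr = linearize(jpg_image)                 # radiometric linearization
    irr = flat_field(irr, spatial_irradiance)  # light-distribution correction
    irr = irr * source_power                   # unit conversion (cf. Section 2.5)
    # Denoising (Section 2.6) and the AD-based PSFS depth solve (Section 2.1)
    # would follow on this preprocessed irradiance map.
    return irr

img = np.full((4, 4), 0.5)                     # toy normalized JPG intensities
irr = preprocess(img, np.ones((4, 4)), 1.0)
```

The point of the sketch is the ordering of the stages: the CRF must be inverted before the spatial irradiance is divided out, since both act on linear irradiance rather than on the nonlinear JPG intensities.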

## 2. Methods Overview

#### 2.1. PSFS Model

#### 2.2. Geometric Calibration

#### 2.3. Albedo Measurement

#### 2.4. Radiometric Calibration

#### 2.4.1. Light Source Intensity Measurement

#### 2.4.2. Camera Response Function

#### 2.4.3. Spatial Irradiance

#### 2.5. Unit Conversion

#### 2.6. Image Denoising

#### 2.7. Assessment Criteria

## 3. Experiments and Results

#### 3.1. Ground Truth Models

#### 3.2. Image Acquisition

#### 3.3. 3D Reconstruction

#### 3.4. Discussion

#### 3.5. Preliminary Results of WCE

## 4. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References


**Figure 2.** PSFS model with the light source at the camera center $\mathbf{O}$. $(x,y,z)$ represents the camera coordinate system, centered at $\mathbf{O}$; the $z$-axis is the optical axis, pointing towards the image plane.
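For orientation, a common near-light image irradiance model with a point source at $\mathbf{O}$ has the form below; this generic form and its notation are stated here as an assumption, while the paper's exact PSFS model is given in Section 2.1:

```latex
E(x,y) = \sigma \, I_0 \, \frac{\mathbf{n}\cdot\mathbf{l}}{r^{2}},
```

where $E$ is the image irradiance, $\sigma$ the surface albedo, $I_0$ the light source intensity, $\mathbf{n}$ the surface normal, $\mathbf{l}$ the unit direction from the surface point to the source, and $r$ the distance between them. The inverse-square falloff in $r$ is what makes the near-light model depth dependent, in contrast to distant-light shape from shading.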

**Figure 4.** Radiance intensity and albedo measurement: (**a**) albedo, (**b**) nonisotropic light, (**c**) uniform light, and (**d**) radiance power.

**Figure 5.** Results of the camera response function. (**a**) Image of the SG chart captured with an endoscope, used for estimating the CRF and the light distribution. (**b**) CRF in the red channel: the red dotted line represents the data points and the red solid line the nonlinear fit; the horizontal axis is the normalized image intensity and the vertical axis the normalized image irradiance, likewise in (**c**,**d**). (**c**) CRF in the green channel: the green dotted line represents the data points and the green solid line the nonlinear fit. (**d**) CRF in the blue channel: the blue dotted line represents the data points and the blue solid line the nonlinear fit.
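A per-channel nonlinear CRF fit of this kind can be sketched as below. The power-law form $E = I^{\gamma}$ is an illustrative assumption made here (the caption only states that a nonlinear fit is used), and the synthetic samples are hypothetical:

```python
import numpy as np

def fit_gamma(intensity, irradiance):
    """Fit an assumed power-law CRF, E = I**gamma, to one channel.

    In log space the model is log E = gamma * log I, i.e. a linear
    least-squares problem through the origin.
    """
    mask = (intensity > 0) & (irradiance > 0)   # logs need positive values
    x = np.log(intensity[mask])
    y = np.log(irradiance[mask])
    return float(np.sum(x * y) / np.sum(x * x))

I = np.linspace(0.05, 1.0, 20)   # normalized image intensities (synthetic)
E = I ** 2.2                     # noise-free irradiance samples, known gamma
gamma = fit_gamma(I, E)          # recovers 2.2 on this synthetic data
```

With real chart patches the samples would be noisy, so a robust or higher-order fit per channel may be preferable to this two-line closed form.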

**Figure 6.** Correction of the light distribution. The point where the horizontal and vertical lines intersect denotes the image center: (**a**) $\tilde{M}(\tilde{x},\tilde{y})$, (**b**) $\sum_{i=1}^{6}\frac{\mathbf{n}\cdot\mathbf{l}_{i}}{r_{i}^{2}}$, and (**c**) $M(\tilde{x},\tilde{y})=\tilde{M}(\tilde{x},\tilde{y})/\sum_{i=1}^{6}\frac{\mathbf{n}\cdot\mathbf{l}_{i}}{r_{i}^{2}}$.
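The division in panel (c), dividing the measured light distribution by the predicted near-light shading $\sum_i (\mathbf{n}\cdot\mathbf{l}_i)/r_i^2$ of a flat target, can be sketched as follows. The geometry used here (surface points, source positions, flat normal) is a hypothetical placeholder, not the paper's calibration setup:

```python
import numpy as np

def correct_light_distribution(M_raw, points, sources, normal=(0.0, 0.0, -1.0)):
    """Divide raw light map M~ by sum_i (n . l_i) / r_i^2 over point sources.

    points: (H, W, 3) array of 3D surface positions of the flat target;
    sources: list of 3D light source positions; normal: flat surface normal.
    """
    n = np.asarray(normal, dtype=float)
    shading = np.zeros(M_raw.shape)
    for s in sources:
        d = np.asarray(s, dtype=float) - points   # surface point -> source
        r = np.linalg.norm(d, axis=-1)
        l = d / r[..., None]                      # unit light directions l_i
        shading += np.einsum('...k,k->...', l, n) / r**2
    return M_raw / shading

# Sanity check: one source at the origin and a flat patch 1 m away gives a
# shading term of exactly 1, so the corrected map equals the raw map.
pts = np.zeros((2, 2, 3)); pts[..., 2] = 1.0
M = correct_light_distribution(np.full((2, 2), 2.0), pts, [(0.0, 0.0, 0.0)])
```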

**Figure 8.** Images of the artificial colon captured with an endoscope: (**a**) ROI-1, (**b**) ROI-2, and (**c**) ROI-3.

**Figure 10.** Ground truth models of the primitives. Axes are in meters: (**a**) sphere, (**b**) cube, and (**c**) pyramid.

**Figure 11.** Recovered 3D primitives. Axes are in meters: (**a**) sphere, (**b**) cube, and (**c**) pyramid.

**Figure 13.** Recovered 3D colon models. Axes are in meters: (**a**) ROI-1, (**b**) ROI-2, and (**c**) ROI-3.

**Figure 15.** Geometrically corrected cropped images utilized for 3D reconstruction: (**a**) PC-1, (**b**) PC-2, and (**c**) PC-3.

**Figure 16.** Top view of the recovered 3D human GI regions. Axes are in meters: (**a**) PC-1, (**b**) PC-2, and (**c**) PC-3.

**Figure 17.** Side view of the recovered 3D human GI regions. Axes are in meters: (**a**) PC-1, (**b**) PC-2, and (**c**) PC-3.

| Primitives | Cube | Sphere | Pyramid |
|---|---|---|---|
| ${r}_{\mathrm{RMSE}}$ | 0.0377 | 0.0465 | 0.0386 |
| ${r}_{\mathrm{MDE}}$ | 0.0828 | 0.1282 | 0.0956 |
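Relative depth-error metrics of this kind can be computed as sketched below. The exact definitions of $r_{\mathrm{RMSE}}$ and $r_{\mathrm{MDE}}$ belong to Section 2.7 and are not restated here, so the choice to normalize by the mean ground-truth depth, and the reading of MDE as a mean depth error, are assumptions made for illustration:

```python
import numpy as np

def relative_rmse(z_est, z_true):
    """Root-mean-square depth error normalized by mean ground-truth depth
    (assumed definition)."""
    return float(np.sqrt(np.mean((z_est - z_true) ** 2)) / np.mean(z_true))

def relative_mde(z_est, z_true):
    """Mean absolute depth error normalized by mean ground-truth depth
    (assumed definition)."""
    return float(np.mean(np.abs(z_est - z_true)) / np.mean(z_true))

z_true = np.full((8, 8), 0.10)   # hypothetical flat patch 10 cm from the camera
z_est = z_true + 0.004           # reconstruction with a uniform 4 mm bias
err = relative_rmse(z_est, z_true)
```

On this toy example both metrics evaluate to 0.04, i.e. a 4% relative error, which is the same order as the values reported for the primitives above.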


© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Ahmad, B.; Floor, P.A.; Farup, I.; Andersen, C.F.
Single-Image-Based 3D Reconstruction of Endoscopic Images. *J. Imaging* **2024**, *10*, 82.
https://doi.org/10.3390/jimaging10040082
