Use of Low-Cost Spherical Cameras for the Digitisation of Cultural Heritage Structures into 3D Point Clouds
Abstract
1. Introduction
2. Equirectangular Projection
3. Materials and Methods
3.1. 360° Camera Used for the Experimentation: Ricoh Theta SC2
3.2. Pipeline of the Investigation Method
3.2.1. Laboratory Test (Indoor Environment)
3.2.2. Cultural Heritage Datasets
4. Results
5. Discussion and Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Acknowledgments
Conflicts of Interest
Appendix A
Algorithm A1—Equirectangular to fisheye

    imequ2fish(path):
        img = readimg(path)
        # central half of the equirectangular image (first lens)
        emi_1 = img.crop((img.width/4, 0, 3 * img.width/4, img.height))
        # outer quarters, joined to form the half seen by the second lens
        emi_2_1 = img.crop((0, 0, img.width/4, img.height))
        emi_2_2 = img.crop((3 * img.width/4, 0, img.width, img.height))
        emi_2 = emi_2_1 + emi_2_2
        fish_1 = equ2fish(emi_1)
        fish_2 = equ2fish(emi_2)
        fish_1.save()
        fish_2.save()

where equ2fish is defined as:

    equ2fish(img):
        C = (img.width/2, img.height/2)
        foreach pixel in img:
            # normalise the fisheye coordinates to [-1, 1]
            point = ((pixel.x - C.x)/C.x, (C.y - pixel.y)/C.y)
            R = sqrt(point.x * point.x + point.y * point.y)
            if R <= 1:
                phi = R * aperture/2
                theta = atan2(point.y, point.x)
                # corresponding direction on the sphere
                P = (R * sin(phi) * cos(theta), R * cos(phi), R * sin(phi) * sin(theta))
                lon = atan2(P.y, P.x)
                lat = atan2(P.z, sqrt(P.x * P.x + P.y * P.y))
                # position of the source pixel in the equirectangular image
                origPos = (lon/PI, 2 * lat/PI)
                origPixel = (C.x + origPos.x * C.x, C.y - origPos.y * C.y)
                if origPixel.x < img.width and origPixel.y < img.height:
                    pixel = img(origPixel.x, origPixel.y)
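For reproducibility, a minimal Python sketch of Algorithm A1 is given below. It assumes Pillow and NumPy, a 180° aperture and nearest-neighbour resampling; the library choice and the output file names (fish_1.png, fish_2.png) are illustrative assumptions, not the authors' original implementation.

```python
# Minimal sketch of Algorithm A1 (assumptions: Pillow/NumPy, 180-degree aperture,
# nearest-neighbour resampling); not the authors' original implementation.
from math import pi

import numpy as np
from PIL import Image


def equ2fish(equ, aperture=pi):
    """Resample one half of an equirectangular image into a fisheye image."""
    src = np.asarray(equ)
    h, w = src.shape[0], src.shape[1]
    cx, cy = w / 2.0, h / 2.0
    out = np.zeros_like(src)

    # Normalised fisheye coordinates in [-1, 1] for every output pixel.
    ys, xs = np.mgrid[0:h, 0:w]
    px = (xs - cx) / cx
    py = (cy - ys) / cy
    r = np.sqrt(px * px + py * py)
    inside = r <= 1.0

    phi = r * aperture / 2.0      # angle from the optical axis
    theta = np.arctan2(py, px)    # angle around the optical axis

    # Direction on the sphere, as in Algorithm A1 (the factor R cancels in the angles).
    Px = r * np.sin(phi) * np.cos(theta)
    Py = r * np.cos(phi)
    Pz = r * np.sin(phi) * np.sin(theta)
    lon = np.arctan2(Py, Px)
    lat = np.arctan2(Pz, np.sqrt(Px * Px + Py * Py))

    # Back to equirectangular pixel coordinates (nearest neighbour).
    ox = np.clip(cx + (lon / pi) * cx, 0, w - 1).astype(int)
    oy = np.clip(cy - (2.0 * lat / pi) * cy, 0, h - 1).astype(int)
    out[inside] = src[oy[inside], ox[inside]]
    return Image.fromarray(out)


def imequ2fish(path):
    """Split a dual-lens equirectangular image and convert each half to a fisheye."""
    img = Image.open(path)
    w, h = img.size
    emi_1 = img.crop((w // 4, 0, 3 * w // 4, h))   # central half (first lens)
    emi_2 = Image.new(img.mode, (w // 2, h))       # outer quarters (second lens)
    emi_2.paste(img.crop((0, 0, w // 4, h)), (0, 0))
    emi_2.paste(img.crop((3 * w // 4, 0, w, h)), (w // 4, 0))
    equ2fish(emi_1).save("fish_1.png")
    equ2fish(emi_2).save("fish_2.png")
```

As in the pseudocode, this is a backward mapping with nearest-neighbour sampling; in practice bilinear interpolation would give smoother fisheye images.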
References
- Shults, R. New opportunities of low-cost photogrammetry for Culture Heritage preservation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 5.
- Sun, Z.; Zhang, Y. Accuracy evaluation of videogrammetry using a low-cost spherical camera for narrow architectural heritage: An observational study with variable baselines and blur filters. Sensors 2019, 19, 496.
- Pepe, M.; Costantino, D. Techniques, tools, platforms and algorithms in close range photogrammetry in building 3D model and 2D representation of objects and complex architectures. Comput. Aided Des. Appl. 2020, 18, 42–65.
- Trujillo-Jiménez, M.A.; Navarro, P.; Pazos, B.; Morales, L.; Ramallo, V.; Paschetta, C.; De Azevedo, S.; Ruderman, A.; Pérez, O.; Delrieux, C.; et al. Body2vec: 3D Point Cloud Reconstruction for Precise Anthropometry with Handheld Devices. J. Imaging 2020, 6, 94.
- Caron, G.; Mouaddib, E.M.; Marchand, E. 3D model based tracking for omnidirectional vision: A new spherical approach. Robot. Auton. Syst. 2012, 60, 1056–1068.
- Scaramuzza, D. Omnidirectional camera. In Computer Vision; Springer: Berlin, Germany, 2014; pp. 552–560.
- Change, R.I. New RICOH THETA Model, Capturing 360-degree Images in One Shot, is on Sale Soon—Spherical Video Function, API and SDK (Beta Version). News Release. 28 October 2014; p. 3.
- Campos, M.B.; Tommaselli, A.M.G.; Marcato Junior, J.; Honkavaara, E. Geometric model and assessment of a dual-fisheye imaging system. Photogramm. Rec. 2018, 33, 243–263.
- Argyriou, L.; Economou, D.; Bouki, V. Design methodology for 360 immersive video applications: The case study of a cultural heritage virtual tour. Pers. Ubiquitous Comput. 2020, 24, 843–859.
- Campos, M.B.; Tommaselli, A.M.G.; Castanheiro, L.F.; Oliveira, R.A.; Honkavaara, E. A fisheye image matching method boosted by recursive search space for close range photogrammetry. Remote Sens. 2019, 11, 1404.
- Liu, S.; Chen, J.; Sun, M.; Zhao, L.; Wei, X.; Guo, H. 3D Spherical Panoramic Epipolar Line Based on Essential Matrix. IEEE Access 2020, 8, 192165–192176.
- Pepe, M.; Costantino, D. UAV photogrammetry and 3D modelling of complex architecture for maintenance purposes: The case study of the masonry bridge on the Sele river, Italy. Period. Polytech. Civ. Eng. 2021, 65, 191–203.
- Pagani, A.; Stricker, D. Structure from motion using full spherical panoramic cameras. In Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Barcelona, Spain, 6–13 November 2011; pp. 375–382.
- Kingsland, K. Comparative analysis of digital photogrammetry software for cultural heritage. Digit. Appl. Archaeol. Cult. Herit. 2020, 18, e00157.
- Barbasiewicz, A.; Widerski, T.; Daliga, K. The analysis of the accuracy of spatial models using photogrammetric software: Agisoft Photoscan and Pix4D. In Proceedings of the E3S Web of Conferences; EDP Sciences: Les Ulis, France, 2018; Volume 26, p. 00012.
- Matzen, K.; Cohen, M.F.; Evans, B.; Kopf, J.; Szeliski, R. Low-Cost 360 Stereo Photography and Video Capture. ACM Trans. Graph. 2017, 36, 148.
- Hatzopoulos, J.N.; Stefanakis, D.; Georgopoulos, A.; Tapinaki, S.; Pantelis, V.; Liritzis, I. Use of various surveying technologies to 3D digital mapping and modelling of cultural heritage structures for maintenance and restoration purposes: The Tholos in Delphi, Greece. Mediterr. Archaeol. Archaeom. 2017, 17, 311–336.
- Bassier, M.; Deloof, T.; Vincke, S.; Vergauwen, M. Panoramic image application for cultural heritage. In Proceedings of the Euro-Mediterranean Conference, Nicosia, Cyprus, 29 October–3 November 2018; pp. 386–395.
- Gottardi, C.; Guerra, F. Spherical images for cultural heritage: Survey and documentation with the Nikon KM360. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Riva del Garda, Italy, 4–7 June 2018.
- Pramulyo, H.; Harto, A.B.; Mertotaroeno, S.H.; Murtiyoso, A. Towards better 3D model accuracy with spherical photogrammetry. In The Rise of Big Spatial Data; Springer: Cham, Switzerland, 2017; pp. 107–120.
- Kennedy, M.; Kopp, S. Understanding Map Projections; ESRI Press: Redlands, CA, USA, 2000.
- Swart, D.; Torrence, B. Mathematics Meets Photography: Part I: The Viewable Sphere. Math Horiz. 2011, 19, 14–17.
- Koeva, M.; Luleva, M.; Maldjanski, P. Integrating spherical panoramas and maps for visualization of cultural heritage objects using virtual reality technology. Sensors 2017, 17, 829.
- Fangi, G. Further developments of the spherical photogrammetry for cultural heritage. In Proceedings of the XXII CIPA Symposium, Kyoto, Japan, 11–15 October 2009; pp. 11–15.
- Fangi, G. The Multi-image spherical Panoramas as a tool for Architectural Survey. CIPA Herit. Doc. 2011, 21, 311–316.
- Fangi, G. Some Metric Documentation of Cultural Heritage in Poland by Spherical Photogrammetry. In New Advanced GNSS and 3D Spatial Techniques; Springer: Cham, Switzerland, 2018; pp. 189–203.
- Costantino, D.; Pepe, M.; Restuccia, A.G. Scan-to-HBIM for conservation and preservation of Cultural Heritage building: The case study of San Nicola in Montedoro church (Italy). Appl. Geomat. 2021, 1–15.
- Bourke, P. Converting Dual Fisheye Images into a Spherical (Equirectangular) Projection. Available online: http://paulbourke.net/dome/dualfish2sphere/ (accessed on 30 June 2021).
- Yao, Y.; Luo, Z.; Li, S.; Fang, T.; Quan, L. MVSNet: Depth inference for unstructured multi-view stereo. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 767–783.
- Tinkham, W.T.; Swayze, N.C. Influence of Agisoft Metashape parameters on UAS structure from motion individual tree detection from canopy height models. Forests 2021, 12, 250.
- Vosselman, G.; Maas, H.-G. Airborne and Terrestrial Laser Scanning; CRC: Boca Raton, FL, USA, 2010.
- Ahmad Fuad, N.; Yusoff, A.R.; Ismail, Z.; Majid, Z. Comparing the performance of point cloud registration methods for landslide monitoring using mobile laser scanning data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 11–21.
- Guerra, J.P.; Pinto, M.M.; Beato, C. Virtual Reality—Shows a New Vision for Tourism and Heritage. Eur. Sci. J. 2015.
- Cho, Y.-H.; Wang, Y.; Fesenmaier, D.R. Searching for Experiences: The Web-Based Virtual Tour in Tourism Marketing. J. Travel Tour. Mark. 2002, 12, 1–17.
- Scianna, A.; La Guardia, M. 3D Virtual CH Interactive Information Systems for a Smart Web Browsing Experience for Desktop PCs and Mobile Devices. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 1–7.
- Schonberger, J.L.; Frahm, J.M. Structure-from-motion revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 4104–4113.
Product | Resolution [Megapixel] | Cost [€] | Level |
---|---|---|---|
Ssstar | 16 | 200 | entry |
GoXtreme Dome 360 | 8 | 230 | entry |
Nikon KeyMission 360 | 23.9 | 280 | entry |
Xiaomi Mijia Mi Sphere 360 | 23.9 | 290 | entry |
GoPro Max | 16.6 | 550 | medium |
Ricoh Theta V | 14.4 | 550 | medium |
Garmin VIRB 360 | 15 | 700 | medium |
Vuze+ | 14.7 | 990 | medium |
Ricoh Theta Z1 | 22 | 1100 | medium |
Insta360 Pro 2 | 59 | 4000 | high |
iSTAR Fusion 360 | 50 | 5900 | high |
Insta360 Titan | 55 | 17,000 | high |
Weiss AG Civetta WAM2 | 230 | not available | high |
Specification | Description |
---|---|
Release date | 12/2019 |
External dimensions and weight | 45.2 mm (W) × 130.6 mm (H) × 22.9 mm (D), 104 g
Still image resolution | 5376 × 2688 |
Internal memory | Approx. 14 GB
Object distance | Approx. 10 cm–∞ (from front of lens) |
Exposure compensation | Still image: Manual compensation (−2.0–+2.0 EV, 1/3 EV step) |
ISO sensitivity | Still image: (Automatic) ISO 64–1600; (ISO priority mode) ISO 64–3200; (Manual mode) ISO 64–3200
Shutter speed | Still image: (Automatic) 1/25,000–1/8 s; (Shutter priority mode) 1/25,000–1/8 s; (Manual mode) 1/25,000–60 s
Compression method | Still image: JPEG (Exif Ver2.3) |
Lens configuration | 7 elements in 6 groups |
Lens F value | F2.0
Image sensor size | 1/2.3″ CMOS (×2)
Effective pixels | 12 megapixels (×2) |
Output pixels | Approx. 14 megapixels |
Markers | X (m) | Y (m) | Z (m) | Error (m) | Error (pix) |
---|---|---|---|---|---|
1 | 1.879 | −0.541 | 0.382 | 0.010 | 1.901 |
2 | 1.197 | −1.388 | 2.523 | 0.029 | 1.749 |
3 | 1.611 | −2.144 | −0.091 | 0.027 | 1.752 |
4 | 0.319 | −2.341 | 0.538 | 0.021 | 3.179 |
5 | −0.569 | −2.185 | 2.116 | 0.060 | 4.647 |
6 | −1.300 | −2.063 | −0.007 | 0.022 | 1.826 |
7 | −1.954 | −1.953 | 1.314 | 0.032 | 1.765 |
8 | −1.954 | −1.042 | 2.509 | 0.033 | 2.328 |
9 | −2.352 | −0.686 | 0.043 | 0.022 | 2.014 |
10 | −2.260 | −0.100 | 2.132 | 0.015 | 2.096 |
11 | −1.596 | 0.775 | 1.449 | 0.012 | 2.323 |
12 | −1.064 | 0.729 | 2.503 | 0.033 | 2.905 |
13 | −1.289 | 2.879 | 0.332 | 0.041 | 1.553 |
14 | −0.404 | 2.168 | 1.380 | 0.033 | 3.161 |
15 | 0.100 | 2.110 | −0.229 | 0.019 | 2.742 |
16 | 0.692 | 2.541 | 0.551 | 0.043 | 3.171 |
17 | 1.286 | 1.997 | −0.074 | 0.029 | 1.287 |
18 | 1.848 | 1.869 | 1.348 | 0.038 | 2.216 |
19 | 1.960 | 2.341 | 2.184 | 0.042 | 0.662 |
20 | 0.939 | 1.237 | 2.525 | 0.036 | 4.149 |
Features | Agisoft Photoscan | Agisoft Metashape |
---|---|---|
Tie Points | 1265 | 702 |
Dense Cloud | 3,897,181 | 1,347,738 |
Error (m) | 0.0320 | 0.0331 |
Error (pix) | 2.569 | 3.210 |
Alignment and matching processing time (s) | 70 | 52 |
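As a cross-check between the two tables above, the sketch below assumes (our assumption, as no formula is stated alongside the tables) that the aggregate error is the root mean square of the twenty per-marker errors; the result, about 0.032 m, is consistent with the 0.0320–0.0331 m reported in the software comparison.

```python
# Cross-check (assumption: the aggregate error is the RMS of the per-marker errors).
from math import sqrt

marker_errors_m = [0.010, 0.029, 0.027, 0.021, 0.060, 0.022, 0.032, 0.033, 0.022, 0.015,
                   0.012, 0.033, 0.041, 0.033, 0.019, 0.043, 0.029, 0.038, 0.042, 0.036]

rmse = sqrt(sum(e * e for e in marker_errors_m) / len(marker_errors_m))
print(f"RMSE = {rmse:.4f} m")  # ~0.0321 m, in line with the 0.032-0.033 m reported above
```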
Dataset | Configuration | Tie Points | Increase in Tie Points | Dense Cloud | Increase in Dense Cloud |
---|---|---|---|---|---|
A | 9 images at ground level | 702 | - | 1,347,738 | - |
B | 9 images at ground level + 1 image at tripod level | 726 | 4% | 1,552,328 | 15% |
C | 9 images at ground level + 4 images at tripod level | 811 | 16% | 2,091,827 | 55% |
D | 9 images at ground level + 5 images at tripod level | 905 | 29% | 2,246,343 | 67% |
E | 9 images at ground level + 9 images at tripod level | 1164 | 66% | 2,382,485 | 77% |
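The relative increases in the table above are obtained by comparing each configuration against dataset A (nine ground-level images only); the short sketch below reproduces them to within rounding. The choice of baseline is our reading of the table, not a formula stated in the text.

```python
# Percentage increase of tie points and dense-cloud points with respect to dataset A
# (baseline choice is our reading of the table; values match it to within rounding).
datasets = {
    "A": (702, 1_347_738),
    "B": (726, 1_552_328),
    "C": (811, 2_091_827),
    "D": (905, 2_246_343),
    "E": (1164, 2_382_485),
}

tie_a, dense_a = datasets["A"]
for name, (tie, dense) in datasets.items():
    tie_inc = 100.0 * (tie - tie_a) / tie_a
    dense_inc = 100.0 * (dense - dense_a) / dense_a
    print(f"{name}: tie points +{tie_inc:.0f}%, dense cloud +{dense_inc:.0f}%")
```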
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).