Toward a Data Fusion Index for the Assessment and Enhancement of 3D Multimodal Reconstruction of Built Cultural Heritage
Abstract
1. Introduction
2. State-of-the-Art
3. Context of the Study
Notre-Dame des Fontaines Chapel
- Time: Six data acquisition campaigns were conducted within four research projects between 2015 and 2021. In addition, archival resources (mostly photographs) dating back to 1950 exist, and further missions are planned in the near future.
- Scale: Different strategies and devices were applied indoors and outdoors, using terrestrial and aerial approaches, with high variability in spatial resolution (from 85 points down to 0.1 points per centimetre).
- Sensor: Range- and image-based sensing were utilized. In total, 14 sensor models corresponding to four sensor types (photographic, phase-shift, telemetric and thermal imaging sensors) were used with no fewer than 10 imaging techniques. In addition, work is planned on integrating data from spectroscopic (LIBS, XRF) and acoustic (ultrasound, SRIR) measurements and from climatic sensors (temperature, hygrometry).
- Spectral: Some imaging techniques were applied across different spectral band ranges (visible, near-infrared and ultraviolet). In this study, only multi-spectral photographic techniques were integrated; however, multi-spectral RTI and hyperspectral imaging have also been performed.
- Actors: These data were captured by seven researchers from different teams and fields of expertise, with differing experience levels and methods.
4. Methodology
- Semantisation: The semantic labelling of each point cloud source is obtained beforehand from the extensive MEMoS-based description. For a given naming value, the name of the point cloud is matched against the appellation attribute stored in the WHAT section. During the fusion step, the original cloud index is stored in a scalar field, which is used on the viewer side to retrieve and display a complete description of the source modality.
- Density estimation: The density is first estimated for each source with a variable local neighbouring radius (LNR). This helps to track the local density, which may vary according to the resolution of each point cloud source. The densities are then unified using the LNR as a scaling factor to obtain an approximation that is comparable across the multi-resolution sources. A third and last density computation is made after the fusion, allowing the number of overlapping modalities to be counted for a given Octree cell size (see the density and fusion sketch after this list).
- Octree computation and sampling: The multi-dimensional aspect of the sources (i.e., their scale and resolution) implies the need to unify the scale-space of the 3D datasets. The Octree is used as an underlying spatial grid, enabling multiple in-depth levels of detail. Sampling based on a given Octree level helps to discretize the point clouds to a comparable resolution while preserving spatial coherence through the Octree grid (a subsampling sketch follows this list).
- Features scaling and fusion: Managing the variable dimensionality of the sources (i.e., spatial extent, scale and resolution) raises many issues on the computational side. At different steps of the algorithm, feature scaling (using min–max normalization) is required to make the multi-source features comparable. Feature aggregation relies on basic arithmetic functions, with the final formula obtained through an empirical trial-and-error process. This approach yields a fusion index that preserves the local and absolute density variations while integrating the spatial overlap of the modalities (see the fusion step in the sketch after this list).
- Visual enhancement: As the densities are highly variable, the distribution of the feature values can lead to visualization issues. Since MEFI is an indicative attribute, transformations can be applied to the scalar field to improve its interpretative value (e.g., histogram equalization). This is why bilateral smoothing is applied in the last step: it improves the distribution of values and decreases the impact of outliers and noise on the gradient. Once MEFI is computed, a colour map is created and fitted to the histogram. In the end, a custom rainbow-based colour map is generated, but many others (diverging, wave, etc.) could be used and adapted to better reveal the MEFI features (see the colour-mapping sketch after this list).
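To make the density-estimation and fusion steps concrete, the following is a minimal Python sketch rather than the authors' CloudCompare-based implementation: it assumes registered clouds given as NumPy arrays, uses a KD-tree neighbour count as the density measure, unifies the per-source densities with the LNR as a scaling factor, and aggregates the min–max-scaled features by arithmetic mean. All function names, the 4 cm default analysis cell and the exact aggregation formula are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree


def local_density(points, radius):
    """Number of neighbours of each point within `radius` (the LNR)."""
    tree = cKDTree(points)
    neighbours = tree.query_ball_point(points, r=radius)
    return np.array([len(n) for n in neighbours], dtype=float)


def minmax(values):
    """Min-max normalization to [0, 1] (the feature-scaling step)."""
    vmin, vmax = values.min(), values.max()
    if vmax == vmin:
        return np.zeros_like(values, dtype=float)
    return (values - vmin) / (vmax - vmin)


def fuse_sources(sources, radii, analysis_cell=0.04):
    """Merge registered clouds and derive a simple per-point fusion index.

    sources: list of (N_i, 3) arrays of registered points (one per modality)
    radii:   local neighbouring radius (LNR) chosen for each source
    """
    merged = np.vstack(sources)
    # scalar field tracing each point back to its source modality
    source_index = np.concatenate(
        [np.full(len(pts), i) for i, pts in enumerate(sources)]
    )
    # per-source density, unified with the LNR as a scaling factor
    per_source = np.concatenate(
        [local_density(pts, r) / r for pts, r in zip(sources, radii)]
    )
    # density recomputed on the merged cloud with a common analysis cell size
    merged_density = local_density(merged, analysis_cell)
    # feature scaling, then aggregation by arithmetic mean
    fusion_index = 0.5 * (minmax(per_source) + minmax(merged_density))
    return merged, source_index, fusion_index
```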
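The Octree-based sampling can be approximated by a regular grid whose cell size is derived from the chosen subdivision level (cell size = extent of the root cube divided by 2^level). The sketch below is a simplified NumPy illustration of this idea, not the actual Octree structure used in the paper; it keeps one centroid per occupied cell.

```python
import numpy as np


def octree_level_subsample(points, level):
    """Keep one representative point (the centroid) per grid cell at `level`."""
    mins = points.min(axis=0)
    extent = (points.max(axis=0) - mins).max()      # edge of the root cube
    cell = extent / (2 ** level)                     # cell size at this level
    keys = np.floor((points - mins) / cell).astype(np.int64)

    # group points by cell and average them
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    sums = np.zeros((inverse.max() + 1, 3))
    counts = np.zeros(inverse.max() + 1)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]
```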
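Finally, the visual-enhancement step can be illustrated by equalizing the MEFI scalar field through its empirical cumulative distribution and mapping it to a colour ramp. The paper's custom rainbow-based colour map is not reproduced here; matplotlib's built-in "turbo" map is only a stand-in.

```python
import numpy as np
import matplotlib.pyplot as plt


def equalize(values):
    """Histogram equalization: map values to [0, 1] via their empirical CDF."""
    order = np.argsort(values)
    ranks = np.empty(len(values), dtype=float)
    ranks[order] = np.arange(len(values))
    return ranks / max(len(values) - 1, 1)


def colorize(values, cmap_name="turbo"):
    """Per-point RGB colours (0-255) fitted to the equalized MEFI histogram."""
    rgba = plt.get_cmap(cmap_name)(equalize(values))   # (N, 4) floats in [0, 1]
    return (rgba[:, :3] * 255).astype(np.uint8)
```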
5. Results and Discussions
5.1. A Custom Potree-Based Viewer
5.2. Limits and Future Works
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
DCH | Digital cultural heritage |
CH | Cultural heritage |
MEFI | Multimodal enhancement fusion index |
ICP | Iterative closest points |
DoC | Degree of confidence |
MoP | Measure of performance |
MEMoS | Metadata-enriched multimodal documentation system |
C2C | Cloud to cloud |
C2M | Cloud to mesh |
M3C2 | Multiscale model-to-model cloud comparison |
H-BIM | Heritage building information modelling |
IFC | Industry foundation classes |
RMS | Root mean square |
LNR | Local neighbouring radius |
PCD | Point cloud data |
GUI | Graphical user interface |
CLI | Command line interface |
PLY | Polygon file format |
NDF | Notre Dame des Fontaines |
RTI | Reflectance transformation imaging |
LIBS | Laser-induced breakdown spectroscopy |
XRF | X-ray fluorescence |
PCL | Point cloud library |
IoU | Intersection over union |
HDF5 | Hierarchical data format version 5 |
Appendix A
Algorithm A1: Pseudocode of the Multimodal Enhancement Fusion Index computational method

Data: list of N registered and overlapping Point Cloud Data (PCD)
Result: a single PCD enhanced with the Multimodal Enhancement Fusion Index (MEFI)

STEP ONE / Defining variables:
- Set the minimal subsampling cell size (e.g., 5 mm = 0.005);
- Set the final octree subdivision level (fOS), which defines the analysis cell size (e.g., 4 cm).

STEP TWO / Preparing each source data for fusion.

STEP THREE / Computing the point cloud fusion enhanced with the multimodal index:
- Merge the subsampled PCD;
- Compute the merged density with GeometricFeature.Density.NumberOfNeighbors, with R = analysis cell size;
- Equalize the merged density with ScalarField.Arithmetic;
- Combine the density features with their arithmetic mean;
- Compute the Multimodal Enhancement Fusion Index with ScalarField.Arithmetic;
- Apply ScalarField.BilateralFilter;
- Equalize MEFI with ScalarField.Arithmetic;
- Resample at MaxOctreeLevel (fOS = 4 cm).
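Algorithm A1 relies on CloudCompare's ScalarField.BilateralFilter for the smoothing step. The following is a minimal NumPy/SciPy sketch of such an edge-preserving filter on a per-point scalar field; the parameter names (sigma_spatial, sigma_value) and their default values are assumptions for illustration, not those used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree


def bilateral_scalar_filter(points, values, sigma_spatial=0.04, sigma_value=0.25):
    """Replace each point's scalar value by a weighted average of its neighbours,
    weighted by both spatial closeness and value similarity (edge-preserving)."""
    tree = cKDTree(points)
    neighbours = tree.query_ball_point(points, r=3 * sigma_spatial)
    smoothed = np.empty(len(values), dtype=float)
    for i, idx in enumerate(neighbours):
        d2 = np.sum((points[idx] - points[i]) ** 2, axis=1)      # spatial distance²
        dv2 = (values[idx] - values[i]) ** 2                      # value difference²
        w = np.exp(-d2 / (2 * sigma_spatial ** 2)) * np.exp(-dv2 / (2 * sigma_value ** 2))
        smoothed[i] = np.sum(w * values[idx]) / np.sum(w)
    return smoothed
```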
References
- Matrone, F.; Grilli, E.; Martini, M.; Paolanti, M.; Pierdicca, R.; Remondino, F. Comparing machine and deep learning methods for large 3D heritage semantic segmentation. ISPRS Int. J. Geo-Inf. 2020, 9, 535. [Google Scholar] [CrossRef]
- Poux, F.; Neuville, R.; Van Wersch, L.; Nys, G.A.; Billen, R. 3D Point Clouds in Archaeology: Advances in Acquisition, Processing and Knowledge Integration Applied to Quasi-Planar Objects. Geosciences 2017, 7, 96. [Google Scholar] [CrossRef]
- Buitelaar, P.; Cimiano, P.; Frank, A.; Hartung, M.; Racioppa, S. Ontology-based information extraction and integration from heterogeneous data sources. Int. J. Hum. Comput. Stud. 2008, 66, 759–788. [Google Scholar] [CrossRef]
- Pamart, A.; De Luca, L.; Véron, P. A metadata enriched system for the documentation of multi-modal digital imaging surveys. Stud. Digit. Herit. 2022, 6, 1–24. [Google Scholar] [CrossRef]
- Pamart, A.; Guillon, O.; Vallet, J.M.; De Luca, L. Toward a multimodal photogrammetric acquisition and processing methodology for monitoring conservation and restoration studies. In Proceedings of the 14th EUROGRAPHICS Workshop on Graphics and Cultural Heritage, Genova, Italy, 5–7 October 2016. [Google Scholar]
- Remondino, F.; Rizzi, A. Reality-based 3D documentation of natural and cultural heritage sites—techniques, problems, and examples. Appl. Geomat. 2010, 2, 85–100. [Google Scholar] [CrossRef]
- Mathys, A.; Jadinon, R.; Hallot, P. Exploiting 3D multispectral texture for a better feature identification for cultural heritage. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, IV-2/W6, 91–97. [Google Scholar] [CrossRef]
- Guidi, G.; Russo, M.; Ercoli, S.; Remondino, F.; Rizzi, A.; Menna, F. A Multi-Resolution Methodology for the 3D Modeling of Large and Complex Archeological Areas. Int. J. Archit. Comput. 2009, 7, 39–55. [Google Scholar] [CrossRef]
- Markiewicz, J.; Bochenska, A.; Kot, P.; Lapinski, S.; Muradov, M. The Integration of The Multi-Source Data for Multi-Temporal Investigation of Cultural Heritage Objects. In Proceedings of the 2021 14th International Conference on Developments in eSystems Engineering (DeSE), Sharjah, United Arab Emirates, 7–10 December 2021; IEEE: Sharjah, United Arab Emirates, 2021; pp. 63–68. [Google Scholar] [CrossRef]
- Rodríguez-Gonzálvez, P.; Guerra Campo, Á.; Muñoz-Nieto, Á.L.; Sánchez-Aparicio, L.J.; González-Aguilera, D. Diachronic reconstruction and visualization of lost cultural heritage sites. ISPRS Int. J. Geo-Inf. 2019, 8, 61. [Google Scholar] [CrossRef]
- Adamopoulos, E.; Rinaudo, F. 3D interpretation and fusion of multidisciplinary data for heritage science: A review. In Proceedings of the 27th CIPA International Symposium-Documenting the Past for a Better Future, Avila, Spain, 1–5 September 2019; International Society for Photogrammetry and Remote Sensing: Bethesda, MA, USA, 2019; Volume 42, pp. 17–24. [Google Scholar]
- Adamopoulos, E.; Tsilimantou, E.; Keramidas, V.; Apostolopoulou, M.; Karoglou, M.; Tapinaki, S.; Ioannidis, C.; Georgopoulos, A.; Moropoulou, A. Multi-Sensor Documentation of Metric and Qualitative Information of Historic Stone Structures. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 4, 1–8. [Google Scholar] [CrossRef]
- Tschauner, H.; Salinas, V.S. Stratigraphic modeling and 3D spatial analysis using photogrammetry and octree spatial decomposition. In Digital Discovery: Exploring New Frontiers in Human Heritage: CAA 2006: Computer Applications and Quantitative Methods in Archaeology, Proceedings of the 34th Conference, Fargo, ND, USA, April 2006; pp. 257–270. Available online: https://proceedings.caaconference.org/paper/cd28_tschauner_siveroni_caa2006/ (accessed on 3 April 2023).
- Pamart, A.; Ponchio, F.; Abergel, V.; Alaoui M’Darhri, A.; Corsini, M.; Dellepiane, M.; Morlet, F.; Scopigno, R.; De Luca, L. A complete framework operating spatially-oriented RTI in a 3D/2D cultural heritage documentation and analysis tool. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W9, 573–580. [Google Scholar] [CrossRef]
- Grifoni, E.; Bonizzoni, L.; Gargano, M.; Melada, J.; Ludwig, N.; Bruni, S.; Mignani, I. Hyper-dimensional Visualization of Cultural Heritage: A Novel Multi-analytical Approach on 3D Pomological Models in the Collection of the University of Milan. J. Comput. Cult. Herit. 2022, 15, 1–15. [Google Scholar] [CrossRef]
- Ramos, M.; Remondino, F. Data fusion in Cultural Heritage—A Review. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-5/W7, 359–363. [Google Scholar] [CrossRef]
- Adamopoulos, E.; Rinaudo, F. Close-Range Sensing and Data Fusion for Built Heritage Inspection and Monitoring—A Review. Remote Sens. 2021, 13, 3936. [Google Scholar] [CrossRef]
- Huang, X.; Mei, G.; Zhang, J.; Abbas, R. A comprehensive survey on point cloud registration. arXiv 2021, arXiv:2103.02690. [Google Scholar]
- Pintus, R.; Gobbetti, E.; Callieri, M.; Dellepiane, M. Techniques for seamless color registration and mapping on dense 3D models. In Sensing the Past; Springer: Cham, Switzerland, 2017; pp. 355–376. [Google Scholar]
- Klein, L.A. Sensor and Data Fusion: A Tool for Information Assessment and Decision Making (SPIE Press Monograph Vol. PM138SC); SPIE Press: Bellingham, WA, USA, 2004. [Google Scholar] [CrossRef]
- Hall, D.L.; Steinberg, A. Dirty Secrets in Multisensor Data Fusion; Technical Report; Pennsylvania State University Applied Research Laboratory: University Park, PA, USA, 2001. [Google Scholar]
- Boström, H.; Andler, S.F.; Brohede, M.; Johansson, R.; Karlsson, E.; Laere, J.V.; Niklasson, L.; Nilsson, M.; Persson, A.; Ziemke, T. On the Definition of Information Fusion as a Field of Research; IKI Technical Reports; HS-IKI-TR-07-006; Institutionen för Kommunikation och Information: Skövde, Sweden, 2007. [Google Scholar]
- Lahat, D.; Adali, T.; Jutten, C. Multimodal data fusion: An overview of methods, challenges, and prospects. Proc. IEEE 2015, 103, 1449–1477. [Google Scholar] [CrossRef]
- Khaleghi, B.; Khamis, A.; Karray, F.O.; Razavi, S.N. Multisensor data fusion: A review of the state-of-the-art. Inf. Fusion 2013, 14, 28–44. [Google Scholar] [CrossRef]
- Hall, D.L.; McNeese, M.; Llinas, J.; Mullen, T. A framework for dynamic hard/soft fusion. In Proceedings of the 2008 11th International Conference on Information Fusion, Sun City, South Africa, 1–4 November 2008; pp. 1–8. [Google Scholar]
- Antova, G. Application of Areal Change Detection Methods Using Point Clouds Data. IOP Conf. Ser. Earth Environ. Sci. 2019, 221, 012082. [Google Scholar] [CrossRef]
- James, M.R.; Robson, S.; Smith, M.W. 3-D uncertainty-based topographic change detection with structure-from-motion photogrammetry: Precision maps for ground control and directly georeferenced surveys: 3-D uncertainty-based change detection for SfM surveys. Earth Surf. Process. Landf. 2017, 42, 1769–1788. [Google Scholar] [CrossRef]
- Hänsch, R.; Weber, T.; Hellwich, O. Comparison of 3D interest point detectors and descriptors for point cloud fusion. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, II-3, 57–64. [Google Scholar] [CrossRef]
- Farella, E.M.; Torresani, A.; Remondino, F. Quality Features for the Integration of Terrestrial and UAV Images. ISPRS—Int. Arch. Photogramm. Rem. Sens. Spatial Inf. Sci. 2019, XLII-2-W9, 339–346. [Google Scholar] [CrossRef]
- Li, Y.; Liu, P.; Li, H.; Huang, F. A Comparison Method for 3D Laser Point Clouds in Displacement Change Detection for Arch Dams. ISPRS Int. J. Geo-Inf. 2021, 10, 184. [Google Scholar] [CrossRef]
- Weinmann, M.; Jutzi, B.; Mallet, C.; Weinmann, M. Geometric features and their relevance for 3D point cloud classification. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, IV-1/W1, 157–164. [Google Scholar] [CrossRef]
- Ioannou, Y.; Taati, B.; Harrap, R.; Greenspan, M. Difference of normals as a multi-scale operator in unorganized point clouds. In Proceedings of the 2012 Second International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission, Zurich, Switzerland, 13–15 October 2012; pp. 501–508. [Google Scholar]
- Hackel, T.; Wegner, J.D.; Schindler, K. Joint classification and contour extraction of large 3D point clouds. ISPRS J. Photogramm. Remote Sens. 2017, 130, 231–245. [Google Scholar] [CrossRef]
- Li, W.; Wang, C.; Zai, D.; Huang, P.; Liu, W.; Wen, C.; Li, J. A Volumetric Fusing Method for TLS and SFM Point Clouds. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 3349–3357. [Google Scholar] [CrossRef]
- Li, S.; Ge, X.; Hu, H.; Zhu, Q. Laplacian fusion approach of multi-source point clouds for detail enhancement. ISPRS J. Photogramm. Remote Sens. 2021, 171, 385–396. [Google Scholar] [CrossRef]
- Yang, X.; Grussenmeyer, P.; Koehl, M.; Macher, H.; Murtiyoso, A.; Landes, T. Review of Built Heritage Modelling: Integration of HBIM and Other Information Techniques. J. Cult. Heritage 2020, 46, 350–360. [Google Scholar] [CrossRef]
- Nishanbaev, I.; Champion, E.; McMeekin, D.A. A Web GIS-Based Integration of 3D Digital Models with Linked Open Data for Cultural Heritage Exploration. ISPRS Int. J. Geo-Inform. 2021, 10, 684. [Google Scholar] [CrossRef]
- Ortega-Alvarado, L.M.; García-Fernández, Á.L.; Conde-Rodríguez, F.; Jurado-Rodríguez, J.M. Integrated and Interactive 4D System for Archaeological Stratigraphy. Archaeol. Anthropol. Sci. 2022, 14, 203. [Google Scholar] [CrossRef]
- Jaspe-Villanueva, A.; Ahsan, M.; Pintus, R.; Giachetti, A.; Marton, F.; Gobbetti, E. Web-based exploration of annotated multi-layered relightable image models. J. Comput. Cult. Heritage (JOCCH) 2021, 14, 1–29. [Google Scholar] [CrossRef]
- Manuel, A.; Abergel, V. Aïoli, a Reality-Based Annotation Cloud Platform for the Collaborative Documentation of Cultural Heritage Artefacts. In Proceedings of the Un Patrimoine Pour L’avenir, Une Science Pour le Patrimoine, Paris, France, 15–16 March 2022. [Google Scholar]
- Dutailly, B.; Portais, J.C.; Granier, X. RIS3D: A Referenced Information System in 3D. J. Comput. Cult. Heritage 2023, 15, 1–20. [Google Scholar] [CrossRef]
- Soler, F.; Melero, F.J.; Luzón, M.V. A Complete 3D Information System for Cultural Heritage Documentation. J. Cult. Heritage 2017, 23, 49–57. [Google Scholar] [CrossRef]
- Richards-Rissetto, H.; von Schwerin, J. A catch 22 of 3D data sustainability: Lessons in 3D archaeological data management & accessibility. Digit. Appl. Archaeol. Cult. Herit. 2017, 6, 38–48. [Google Scholar]
- Nova Arévalo, N.; González, R.A.; Beltrán, L.C.; Nieto, C.E. A Knowledge Management System for Sharing Knowledge About Cultural Heritage Projects. SSRN Elect. J. 2023. Available online: https://ssrn.com/abstract=4330691 (accessed on 3 April 2023). [CrossRef]
- Liu, J.; Ram, S. Improving the Domain Independence of Data Provenance Ontologies: A Demonstration Using Conceptual Graphs and the W7 Model. J. Database Manag. 2017, 28, 43–62. [Google Scholar] [CrossRef]
- Ram, S.; Liu, J. A semiotics framework for analyzing data provenance research. J. Comput. Sci. Eng. 2008, 2, 221–248. [Google Scholar] [CrossRef]
- Georgiev, I.; Georgiev, I. An Information Technology Framework for the Development of an Embedded Computer System for the Remote and Non-Destructive Study of Sensitive Archaeology Sites. Computation 2017, 5, 21. [Google Scholar] [CrossRef]
- Girardeau-Montaut, D. CloudCompare (Version 2.12.4) [GPL Software]. 2022. Available online: https://www.danielgm.net/cc/ (accessed on 4 April 2023).
- Crameri, F.; Shephard, G.E.; Heron, P.J. The misuse of colour in science communication. Nat. Commun. 2020, 11, 5444. [Google Scholar] [CrossRef]
- Liu, Y.; Heer, J. Somewhere Over the Rainbow: An Empirical Assessment of Quantitative Colormaps. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 1–12. [Google Scholar] [CrossRef]
- Bares, A.; Keefe, D.F.; Samsel, F. Close Reading for Visualization Evaluation. IEEE Comput. Graphics Appl. 2020, 40, 84–95. [Google Scholar] [CrossRef]
- D’Andrea, A.; Fernie, K. 3D ICONS metadata schema for 3D objects. Newsl. Archeol. CISA 2013, 4, 159–181. [Google Scholar]
- D’Andrea, A.; Fernie, K. CARARE 2.0: A metadata schema for 3D cultural objects. In Proceedings of the 2013 Digital Heritage International Congress (DigitalHeritage), Marseille, France, 28 October 2013–1 November 2013; Volume 2, pp. 137–143. [Google Scholar] [CrossRef]
- Hermon, S.; Niccolucci, F.; Ronzino, P. A Metadata Schema for Cultural Heritage Documentation. In Electronic Imaging & the Visual Arts: EVA 2012 Florence, 9–11 May 2012; Firenze University Press: Firenze, Italy, 2012. [Google Scholar] [CrossRef]
- Tournon, S.; Baillet, V.; Chayani, M.; Dutailly, B.; Granier, X.; Grimaud, V. The French National 3D Data Repository for Humanities: Features, Feedback and Open Questions. In Proceedings of the Computer Applications and Quantitative Methods in Archaeology (CAA) 2021, Lymassol (Virtual), Cyprus, 14–18 June 2021. [Google Scholar]
- Meghini, C.; Scopigno, R.; Richards, J.; Wright, H.; Geser, G.; Cuy, S.; Fihn, J.; Fanini, B.; Hollander, H.; Niccolucci, F.; et al. ARIADNE: A Research Infrastructure for Archaeology. J. Comput. Cult. Herit. (JOCCH) 2017, 10, 1–27. [Google Scholar] [CrossRef]
- Petras, V.; Hill, T.; Stiller, J.; Gäde, M. Europeana—A Search Engine for Digitised Cultural Heritage Material. Datenbank-Spektrum 2017, 17, 41–46. [Google Scholar] [CrossRef]
- Boutsi, A.M.; Ioannidis, C.; Soile, S. An Integrated Approach to 3D Web Visualization of Cultural Heritage Heterogeneous Datasets. Remote Sens. 2019, 11, 2508. [Google Scholar] [CrossRef]
- Schutz, M. Potree: Rendering Large Point Clouds in Web Browsers. Master’s Thesis, Institute of Computer Graphics and Algorithms, Vienna University of Technology, Vienna, Austria, 2016. [Google Scholar]
- Potenziani, M.; Callieri, M.; Dellepiane, M.; Corsini, M.; Ponchio, F.; Scopigno, R. 3DHOP: 3D Heritage Online Presenter. Comput. Graph. 2015, 52, 129–141. [Google Scholar] [CrossRef]
- CesiumJS: 3D Geospatial Visualization for the Web. 2022. Available online: https://cesium.com/platform/cesiumjs/ (accessed on 2 September 2022).
- Bergerot, L.; Blaise, J.Y.; Pamart, A.; Dudek, I. Visual Cross-Examination of Architectural and Acoustic Data: The 3d Integrator Experiment. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 46, 81–88. [Google Scholar] [CrossRef]
- Schutz, M. GitHub—Potree/PotreeConverter: Create Multi Res Point Cloud to Use with Potree. 2022. Available online: https://github.com/potree/PotreeConverter (accessed on 2 September 2022).
- Blaise, J.Y.; Dudek, I.; Pamart, A.; Bergerot, L.; Vidal, A.; Fargeot, S.; Aramaki, M.; Ystad, S.; Kronland-Martinet, R. Acquisition and Integration of Spatial and Acoustic Features: A Workflow Tailored to Small-Scale Heritage Architecture; IMEKO International Measurement Confederation: Budapest, Hungary, 2022. [Google Scholar] [CrossRef]
- Chang, C.T.; Gorissen, B.; Melchior, S. Fast oriented bounding box optimization on the rotation group SO (3,R). ACM Trans. Graphics (TOG) 2011, 30, 1–16. [Google Scholar]
- Xu, J.; Ma, Y.; He, S.; Zhu, J. 3D-GIoU: 3D Generalized Intersection over Union for Object Detection in Point Cloud. Sensors 2019, 19, 4093. [Google Scholar] [CrossRef]
- Zhang, Y.; Li, K.; Chen, X.; Zhang, S.; Geng, G. A multi feature fusion method for reassembly of 3D cultural heritage artifacts. J. Cult. Herit. 2018, 33, 191–200. [Google Scholar] [CrossRef]
- Buglio, D.L.; Derycke, D. Reduce to Understand: A Challenge for Analysis and Three-dimensional Documentation of Architecture. In Environmental Representation: Bridging the Drawings and Historiography of Mediterranean Vernacular Architecture; Lodz University of Technology: Lodz, Poland, 2015. [Google Scholar]
- Verhoeven, G.J.; Santner, M.; Trinks, I. From 2D (to 3D) to 2.5D: Not All Gridded Digital Surfaces Are Created Equally. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, VIII-M-1-2021, 171–178. [Google Scholar] [CrossRef]
- Jusoh, S.; Almajali, S. A systematic review on fusion techniques and approaches used in applications. IEEE Access 2020, 8, 14424–14439. [Google Scholar] [CrossRef]
- Poux, F.; Billen, R. Voxel-Based 3D Point Cloud Semantic Segmentation: Unsupervised Geometric and Relationship Featuring vs. Deep Learning Methods. ISPRS Int. J. Geo-Inf. 2019, 8, 213. [Google Scholar] [CrossRef]
- Croce, V.; Caroti, G.; Piemonte, A.; De Luca, L.; Véron, P. H-BIM and Artificial Intelligence: Classification of Architectural Heritage for Semi-Automatic Scan-to-BIM Reconstruction. Sensors 2023, 23, 2497. [Google Scholar] [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).