The conservation of historic buildings is an essential part of the preservation of cultural heritage. Ageing structures therefore need to be inspected at regular intervals so that deterioration can be counteracted with a well-planned preservation strategy. As an outcome of a visual inspection that covers the entire interior and exterior building surface, damages and anomalies are documented by annotations in plans or in captured images. A comprehensive dataset comprises, on the one hand, the actual state and condition of a building and, on the other hand, the history of acquired data and performed evaluations, so that they can be compared to extract information on deformation or damage progression.
The field of data acquisition is increasingly supported by digital technology, such as image-based photogrammetric 3D reconstruction [1] or laser scanning, to obtain a highly detailed 3D dataset as a basis for condition assessment or for the planning of restoration works supported by automated processes. While Unmanned Aircraft Systems (UASs) are used to acquire images of façades of high and hard-to-reach building regions [2], this approach is rather used to capture indoor or outdoor areas of low building height, where the UAS flight has technical limitations. Unlike laser scanning data, the photogrammetric reconstruction process estimates a camera position and orientation, the so-called extrinsic parameters, for each image along with the point cloud that is georeferenced by Ground Control Points (GCPs). This registration enables a location-based filtering of the image dataset and the mapping of information from the image plane onto a 3D surface [4]. Additionally, by triangulating such point clouds, surface meshes are derived [3]. Whereas the point cloud stores colour information as a defined colour value per point, the mesh stores it as a texture derived from the image data. The general result of this process is a detailed geometrical 3D representation of the captured building.
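To illustrate the relation between the registered images and the 3D data, the following minimal sketch shows the standard pinhole projection of a georeferenced 3D point into an image using the estimated extrinsic and intrinsic parameters; the numeric values are purely illustrative, and lens distortion is omitted.

```python
import numpy as np

def project_point(X_world, R, t, K):
    """Project a georeferenced 3D point into a registered image.

    R, t : extrinsic parameters (world-to-camera rotation and translation)
           estimated for this image by the photogrammetric reconstruction.
    K    : 3x3 intrinsic camera matrix (focal length, principal point).
    Returns the pixel coordinates (u, v) and the depth in the camera frame.
    """
    X_cam = R @ X_world + t          # transform the point into the camera frame
    u, v, w = K @ X_cam              # apply the pinhole camera model
    return np.array([u / w, v / w]), X_cam[2]

# Purely illustrative camera parameters and point
K = np.array([[2400.0, 0.0, 960.0],
              [0.0, 2400.0, 540.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pixel, depth = project_point(np.array([1.2, 0.4, 5.0]), R, t, K)
```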
However, the 3D nature of the acquired data is usually not carried over into the methods of damage assessment. There, owing to the more familiar manual labelling of 2D information [8], the focus is on 2D data, such as drawings from Computer-Aided Design (CAD) systems, rectified images, orthophotos, or the raw image data. Furthermore, the automated labelling of image datasets by image segmentation employing deep learning techniques produces 2D information in the first place [10]. A transformation of 2D annotations to 3D geometries is thus necessary for 3D methods to be applied to the annotation data, such as integration in Geographic Information Systems (GISs), the extraction of damage dimensions through accurate measurements, or the evaluation of affected building elements [13]. In Grilli et al. (2018) [5] and Adamopoulos and Rinaudo (2021) [14], a workflow for mapping image segmentation labels onto 3D point clouds was proposed to transfer annotations from images and orthophotos to a semantically enriched point cloud. A forward- and back-projection of annotations to registered images from a photogrammetric reconstruction process, which allows for the inclusion of additional imagery, was described in Manuel et al. (2014) [4]. There, the estimated camera positions were used to project an image-based annotation onto a building surface and to identify images that contain the same 3D annotation. Other inspection systems support direct 3D annotations along with high-performing web-based visualisation and an underlying database for the inspection of the data [6]. Furthermore, in Malinverni et al. (2019) [15], a building information model edited in a CAD system was used to apply a 3D annotation workflow with the goal of quantity determination and further planning.
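As a rough illustration of the back-projection idea described above, the following sketch selects the registered images whose field of view contains a given 3D annotation point. It assumes simple pinhole cameras without lens distortion and neglects occlusion by the building surface, so it is only a simplified reading of the cited approaches, not their implementation.

```python
import numpy as np

def images_seeing_point(X_world, cameras, width, height):
    """Return the ids of registered images whose view contains the 3D point.

    cameras: iterable of (image_id, R, t, K) tuples from the reconstruction.
    Visibility is approximated by a positive depth and in-bounds pixel
    coordinates; occlusion testing against the surface is omitted here.
    """
    visible = []
    for image_id, R, t, K in cameras:
        X_cam = R @ X_world + t
        if X_cam[2] <= 0:                        # point lies behind the camera
            continue
        u, v, w = K @ X_cam
        u, v = u / w, v / w
        if 0 <= u < width and 0 <= v < height:   # inside the image frame
            visible.append(image_id)
    return visible
```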
The re-modelling of acquired 3D point clouds or meshes to achieve simplified building geometries is an important step towards the semantic enrichment of the dataset, and it is widely applied in the field of Historic (or Heritage) Building Information Modelling (HBIM) [15]. Depending on the targeted level of detail, the re-modelling process includes geometries ranging from rough building sections defined as bounding boxes up to detailed volumetric building element models. A prior segmentation of the point cloud according to derived spatial criteria to identify building elements can support this process, as shown in Croce et al. (2021) [20]. Furthermore, segmentation using deep learning techniques [21] or voxel-based methods [22], as well as the derivation of building geometries [24], enables the automated transformation of point clouds into semantically enriched and simplified models.
Inspection data that are acquired periodically from the same object additionally allow for the comparison of different states. Chiabrando et al. (2017) [25] applied such multitemporal comparative processing to identify post-earthquake damages of a church building in point cloud datasets of different states. Another example is the identification of significant structural deformation of bridge piers due to temperature effects presented in Hallermann et al. (2018) [26]. In Vetrivel et al. (2016) [27], a voxel-based method for the comparison of pre- and post-earthquake point clouds led to the identification of damaged areas. The application of voxel-based methods enables the development of algorithms that are not bound to a specific type of geometry, as also shown for topology analysis in Borrmann and Rank (2009) [28]. However, these studies investigated the identification of damages or deformation in multitemporal datasets, but did not compare the damage entities themselves. Three-dimensional annotations from different states, as shown in the literature and independent of the identification method (e.g., manual, image, or point cloud segmentation), geometrically serve as a basis to identify localised changes of decays and possibly to derive damage progression.
This article proposes a methodological workflow for the integration of image-based 2D annotations into semantically enriched 3D models. Additionally, the obtained 3D annotations are assigned to the building elements to create a linked 3D dataset that serves as a basis for condition state evaluations. Finally, this process is repeated for three different annotated states, where the annotations of each state are assigned to each other to obtain a state history. The assigned annotation states are then compared, and localised geometric changes are extracted to quantify their increase over the compared states.
The article is structured as follows: Section 2.1 describes the background of the building, data acquisition, and first processing steps to reconstruct the 3D data, as well as the workflow. In Section 2.3, important data characteristics used in the workflow and additional modelling tasks, such as the definition of building sections, are explained. Methods for the integration of 2D annotations into 3D models and the application of assignment methods for the linking of 3D annotations, building elements, and different states are explained in detail in Section 2.4. Finally, the resulting linked dataset and the computed state comparisons for the extraction of local damage increases are presented in Section 3.
The digital inspection data collected in the main hall of the Wehrkirche Döblitz were used to perform the integration of image-based 2D annotations into 3D models and to apply automated procedures for the computation of localised geometric changes of damages. For the transfer of the 2D annotations, the triangulated sparse point cloud from the photogrammetric 3D reconstruction was used as the ray casting target. Thus, the accuracy of the resulting 3D annotations was strictly related to the accuracy of the photogrammetric reconstruction, leading to different offsets between the multitemporal datasets. The offset could be reduced by a more accurate local registration using methods such as the Iterative Closest Point (ICP) algorithm or by a compensation of structural deformations through simulated displacement fields. Another way to avoid such offsets would be the projection of the 2D annotations onto a common surface, e.g., of a manually created CAD model, but then the actual 3D geometries may differ significantly from the projected ones.
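The article does not prescribe a specific implementation of the ray casting step; the following sketch shows one possible realisation with the Open3D ray casting API, assuming the triangulated sparse point cloud is available as a triangle mesh and the pixel coordinates and camera parameters stem from the photogrammetric reconstruction.

```python
import numpy as np
import open3d as o3d

def annotation_pixel_to_3d(u, v, R, t, K, mesh):
    """Cast a ray through an annotated pixel onto the triangulated
    sparse point cloud (here given as a legacy Open3D TriangleMesh)."""
    scene = o3d.t.geometry.RaycastingScene()
    scene.add_triangles(o3d.t.geometry.TriangleMesh.from_legacy(mesh))

    cam_centre = -R.T @ t                               # camera position in world space
    direction = R.T @ np.linalg.inv(K) @ np.array([u, v, 1.0])
    direction /= np.linalg.norm(direction)              # viewing ray through the pixel

    rays = o3d.core.Tensor([[*cam_centre, *direction]], dtype=o3d.core.Dtype.Float32)
    hit = scene.cast_rays(rays)
    t_hit = hit['t_hit'][0].item()
    if not np.isfinite(t_hit):
        return None                                     # the ray missed the surface
    return cam_centre + t_hit * direction               # 3D position of the annotation
```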
The applied voxel-based methods used to assign damages to building elements, images to damages, and damages to damages from previous inspections produced reasonable results. In particular, compared to coarser methods using axis-aligned bounding boxes (AABBs) or the shortest distance, the voxel-based methods are beneficial in that they avoid erroneous assignments.
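As a minimal sketch of this assignment idea, the following code discretises the damage and element geometries into occupied voxel index sets and assigns a damage to every building element with which it shares voxels. The voxel size and the representation of the geometries as point samples are illustrative assumptions, not the exact parameters of the presented method.

```python
import numpy as np

def occupied_voxels(points, voxel_size):
    """Indices of the voxels occupied by a point set (damage or element geometry)."""
    return {tuple(idx) for idx in np.floor(points / voxel_size).astype(int)}

def assign_damage_to_elements(damage_pts, elements, voxel_size=0.05):
    """Assign a damage geometry to the building elements whose occupied voxels
    it shares; `elements` maps an element id to its sampled surface points."""
    damage_vox = occupied_voxels(damage_pts, voxel_size)
    assignments = {}
    for element_id, element_pts in elements.items():
        shared = damage_vox & occupied_voxels(element_pts, voxel_size)
        if shared:
            assignments[element_id] = len(shared)   # overlap measured in voxels
    return assignments
```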
The extraction of localised geometrical changes of multitemporal damage geometries was also carried out using a voxel-based approach. There, the accuracy of the smallest detectable change was again related to the registration of the two photogrammetric reconstructions on which the annotations were based. With increasing offset, the accuracy of the change detection decreased. For damage geometries of large extent, numerous voxels had to be generated with the presented method, which significantly increased the computation time.
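The multitemporal comparison can likewise be sketched as a set operation on occupied voxels: voxels covered by the later state but not by the earlier one indicate localised growth of the damage. The voxel size and the rough area estimate below are illustrative assumptions rather than the parameters used in the study.

```python
import numpy as np

def damage_increase(points_state_a, points_state_b, voxel_size=0.02):
    """Voxels occupied by the later state B but not by the earlier state A,
    interpreted as localised growth of the damage geometry."""
    vox_a = {tuple(i) for i in np.floor(points_state_a / voxel_size).astype(int)}
    vox_b = {tuple(i) for i in np.floor(points_state_b / voxel_size).astype(int)}
    new_voxels = vox_b - vox_a
    grown_area = len(new_voxels) * voxel_size ** 2   # rough surface-area estimate
    return new_voxels, grown_area
```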
In summary, a high degree of automation of the comparison of time-varying inspection data and annotations could be achieved. This leads to a sorted and linked data collection that allows for effective further processing.
This article presented a methodological workflow for the damage documentation of historic buildings alongside a validation study of a church building in the German region of Vogtland. For this purpose, high-resolution images of the building surface were acquired, and the sparse point cloud as well as the camera orientations were computed by a photogrammetric reconstruction. In addition, the walls, the ceiling, and the floor of the main hall of the church building were manually modelled as simplified CAD geometries. On the captured images, the 2D annotation of visual damages on these building elements was conducted.
In the automated process presented for the evaluation of these data, the point cloud and the corresponding image data were first segmented on the basis of the modelled building elements. For each segment, the 2D annotations of the damages were transferred into corresponding 3D geometries and linked to the dataset in order to store the relationships between images, building elements, and damages in a data model. In addition, voxel-based methods were used to automatically identify and localise the geometrical changes over three different states from the generated 3D geometries of the damages.
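A linked data model of this kind could, for instance, be sketched as follows; the class and attribute names are hypothetical and only illustrate how images, building elements, damages, and their predecessor states might be related, not the data model actually implemented in the study.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Damage:
    damage_id: str
    state: str                                            # inspection epoch, e.g. "2021"
    geometry_3d: object                                    # transferred 3D annotation geometry
    image_ids: List[str] = field(default_factory=list)    # images that show the damage
    predecessor_id: Optional[str] = None                  # same damage in the previous state

@dataclass
class BuildingElement:
    element_id: str
    name: str                                              # e.g. "north wall of the main hall"
    damages: List[Damage] = field(default_factory=list)   # assigned damage annotations
```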
The choice of the voxel size for the discretisation of the damage geometries was found to be critical for the accuracy of the annotation assignments and comparisons. Therefore, the voxel size should preferably be adapted to the dimensions of the damage and determined adaptively for different elements of a dataset in a further development of the method. To overcome the decrease in performance when processing large objects, the algorithm could analyse the geometries in fixed grids, which could be evaluated in a parallel process in order to save computational resources.
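A possible direction for such a further development is sketched below: the voxel size is derived from the extent of the damage geometry, and large geometries are partitioned into fixed grid tiles that could then be voxelised and compared independently, e.g. in parallel worker processes. The chosen fraction, minimum resolution, and tile size are assumptions for illustration only.

```python
import numpy as np

def adaptive_voxel_size(points, fraction=0.01, minimum=0.005):
    """Derive the voxel size from the damage extent, e.g. as a fraction of the
    largest bounding-box edge, but never below a minimum resolution (metres)."""
    extent = points.max(axis=0) - points.min(axis=0)
    return max(float(extent.max()) * fraction, minimum)

def split_into_tiles(points, tile_size=2.0):
    """Partition a large geometry into fixed grid tiles; each tile could then
    be voxelised and compared independently, e.g. in parallel worker processes."""
    keys = np.floor(points / tile_size).astype(int)
    tiles = {}
    for key, point in zip(map(tuple, keys), points):
        tiles.setdefault(key, []).append(point)
    return {key: np.asarray(tile) for key, tile in tiles.items()}
```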
The paper highlighted the potential that digital image processing, 3D reconstruction, and systematic condition information modelling have for digital documentation and assessment workflows in the context of heritage preservation. Beyond this, the presented methods are also applicable to the field of infrastructure inspection, such as bridges or tunnels, which are surveyed at defined intervals, or to condition assessment after natural disasters in order to plan rebuilding processes and evaluate the degree of damage. For a subsequent categorisation of the condition assessment of a structure, it is necessary to take into account indicators of changes in damage geometries over different states. Condition scores or assessment criteria should include damage progression and the condition history. In the case of continuous data acquisition of a structure, the values of identified geometrical changes could possibly also serve as a prognosis of damage progression.