Article

Surface Defect Detection of Nanjing City Wall Based on UAV Oblique Photogrammetry and TLS

1 Department of Geomatics, Nanjing Forestry University, Nanjing 210037, China
2 School of Geography, Nanjing Normal University, Nanjing 210046, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(8), 2089; https://doi.org/10.3390/rs15082089
Submission received: 1 March 2023 / Revised: 31 March 2023 / Accepted: 13 April 2023 / Published: 15 April 2023

Abstract:
With its long history, ancient architecture has high cultural value, artistic achievement, and scientific value. The Nanjing City Wall was constructed in the mid-to-late 14th century, and it ranks first among the world’s city walls in terms of both length and size, whether historically or in the contemporary era. However, such sites are subject to long-term degradation and are sensitive to disturbances from the surrounding landscape, which can lead to deterioration of the architecture. Therefore, it is urgent to detect defects on the Nanjing City Wall and to repair and protect it. In this paper, a novel method is proposed to detect surface defects of the city wall using unmanned aerial vehicle (UAV) oblique photogrammetry and terrestrial laser scanning (TLS) data. On the one hand, UAV oblique photogrammetry was used to collect image data of the city wall, and a three-dimensional (3D) model of the wall was created from the oblique images. With this model, 43 cracks with lengths greater than 30 cm and 15 shedding surfaces with an area greater than 300 cm2 were effectively detected on the wall. On the other hand, the point cloud data obtained by TLS were first preprocessed; then, the KNN algorithm was used to construct a local neighborhood for each sampling point, and each neighborhood was fitted using the least squares method. Next, five features of the point cloud were calculated, and the results were visualized. Based on the visualization results, surface defects of the wall were identified, and 18 cracks with lengths greater than 30 cm and 5 shedding surfaces with an area greater than 300 cm2 were detected. To verify the accuracy of these two techniques in measuring cracks, the coordinates of some cracks were surveyed using a prism-free total station, and the crack lengths were calculated. The root mean square errors (RMSE) of crack lengths based on the UAV oblique photogrammetry model and the TLS point cloud model were 0.73 cm and 0.34 cm, respectively. The results showed that both techniques can detect defects on the wall surface, and their measurement accuracy meets the accuracy requirements of surface defect detection for the city wall. Considering their low cost and high efficiency, these two techniques support the mapping and conservation of historical buildings, which is of great significance for the conservation and repair of ancient buildings.

1. Introduction

With its long history and cultural value, ancient Chinese architecture occupies an important place in the history of world architecture [1]. Ancient architecture is highly acclaimed for its exquisite artistic achievements and scientific values. Nowadays, with the rapid development of relevant technologies, people are seeking efficient and accurate methods to monitor ancient buildings and establish their three-dimensional (3D) digital models [2,3]. These methods protect heritage information and provide data support for the conservation, management, and defect analysis of cultural heritage.
Because ancient buildings are important historical relics, their monitoring and defect detection is a very important cultural preservation task. Through detection and monitoring, defect information, such as wall cracks, wall shedding, and wall deformation, can be found. Traditionally, methods such as manual inspection or periodic instrument observation are used to detect and monitor ancient buildings. However, these methods are time-consuming and have low inspection accuracy, and are therefore inefficient. Based on remote sensing technology, geospatial distribution information can be obtained quickly and efficiently. The image information of buildings can be acquired by sensors mounted on spaceborne, airborne, and ground-based platforms [4,5,6]. However, costs rise greatly when aviation and satellite platforms are used. In recent years, some new techniques have been applied to the detection and monitoring of defects in cultural relics such as ancient buildings. Since ancient buildings generally have a large scale, the cost and efficiency of survey methods need to be considered when acquiring 3D data. To achieve this goal, 3D terrestrial laser scanning (TLS) and unmanned aerial vehicle (UAV) photogrammetry are the methods commonly used by most researchers [7,8,9].
TLS uses a laser to measure the 3D spatial information of objects within a certain distance from the ground, which can break through the limitations of traditional single-point measurement methods and has the advantages of the non-contact, efficient, and high-precision acquisition of 3D point cloud data on the surface of objects [10]. In recent years, TLS has been widely used in many fields, such as 3D modeling, ancient building conservation, deformation monitoring, and forest structure investigation [11].
The application of 3D laser scanning technology to the defect monitoring of ancient buildings and other cultural relics [12,13,14] has greatly improved the efficiency and objectivity of monitoring. By conducting keyword searches (e.g., UAV photogrammetry, TLS, ancient building, defect measurement, etc.) on Google Scholar and Web of Science, we collected the relevant literature on ancient building defect detection from recent years. Nowak et al. [15] obtained the building geometry of a historic building by TLS measurements, then performed a detailed diagnosis of the building and determined the cause of its damage, demonstrating the feasibility of applying TLS to the structural diagnosis of buildings. Oniga et al. [16] applied a hand-held mobile laser scanner (HMLS) and TLS to the 3D modeling of complex cultural heritage scenes. Ebolese et al. [17] used TLS to scan large archaeological remains to obtain a 3D model of a Roman domus with high detail and area coverage. Alkadri et al. [18] proposed a TLS-based integrated method to investigate the surface properties of heritage buildings using the attribute information stored in the point cloud data, such as XYZ coordinates, RGB information, and reflection intensity. Fracture surface analysis and the calculation of material properties were performed to identify vulnerable structures in the building. The authors believe that the data obtained by TLS can be combined with thermal cameras to calibrate the parameters and consider the use of deep learning to further refine the study in the future. Sánchez-Aparicio et al. [19] combined the TLS technique with radiometric and geometric data to propose a new method to identify damage in historic buildings.
Lercari [20] verified the feasibility of TLS and the multiscale model-to-model cloud comparison (M3C2) surface change detection method in monitoring and conserving ancient buildings, which can provide effective information for future conservation interventions. Mugnai et al. [21] conducted two TLS surveys of an ancient bridge in 2011 and 2017 and used relevant software to conduct a comparative surface analysis of the bridge to identify its damage and deformation. Other scholars have used TLS point cloud intensity data to detect damaged areas of historical buildings [22,23]. Dayal et al. [24] combined TLS with ground images to perform 3D reconstruction of heritage buildings and monitored and detected damage to the buildings. Tucci et al. [25] combined TLS, deviation analysis (DA), and finite element (FE) numerical simulation to evaluate the health of a historic tower. Roselli et al. [26] performed non-destructive tests (NDTs) of a Roman archaeological site using several techniques, combining 3D laser scanning and stereo-photogrammetry for 3D reconstruction and obtaining information such as thermal infrared images, air temperature, and relative humidity. The integration of these results allows for a more comprehensive understanding of the structural behavior and condition of the building while allowing for less invasive measurements.
Thanks to its flexibility and maneuverability, UAV oblique photogrammetry can obtain image information from multiple angles and capture point clouds of the upper parts of ancient buildings, which is beneficial for reconstructing large scenes [27,28]. Image processing methods based on oblique photogrammetry have a great impact on the accuracy of data analysis; nowadays, the unmanned aerial vehicle structure from motion (UAV-SfM) workflow is widely used. The structure from motion (SfM) algorithm is an offline algorithm that can construct a 3D model from unordered images [29]; the process includes calculating the 3D coordinates of corresponding points from multiple images and recovering the camera parameters of each image. The SfM algorithm requires no precise prior camera pose, matches images well, and is widely used in image-based 3D reconstruction.
UAV oblique photogrammetry technology can be used in natural environment monitoring [30], topographic surveys [31], disaster assessment [32], and so on. Oblique photogrammetry technology has also been applied in the protection of cultural relics [33]. Konstantinos et al. [34] reconstructed 3D models of archaeological sites based on UAV photogrammetry and created high-resolution orthophotos and a digital surface model (DSM), verifying that the 3D models created from UAV imagery and their derivatives are accurate enough for precise measurements. Bolognesi et al. [35] analyzed the potential and limitations of RPAS (remotely piloted aircraft systems) photogrammetric systems in the reconstruction of architectural cultural heritage. Chen et al. [36] proposed a method based on the combination of UAV oblique photogrammetry and 3D laser scanning technology to virtually repair damaged ancient buildings. Aicardi et al. [37] used the oblique images obtained by UAV for the 3D reconstruction of historical buildings and tested different dense image-matching algorithms implemented in software such as Agisoft Photoscan, Pix4D, VisualSFM, and ContextCapture. Vetrivel et al. [38] used oblique images to generate 3D point clouds of ancient buildings and to identify and assess the damage to the buildings. Martínez-Carricondo et al. [39] developed HBIM (historic building information modeling) of historic buildings based on UAV oblique photogrammetry and texturized the HBIM models to achieve realistic results. Ulvi [40] generated the 3D model, digital surface model (DSM), and orthophoto of a cultural site using UAV photogrammetry, which is less expensive and faster than other measurement methods for 3D reconstruction. Carvajal-Ramírez et al. [41] applied UAV oblique photogrammetry based on SfM and Multi-View Stereo (MVS) algorithms to a virtual reconstruction of an archaeological site, contributing to the recovery and preservation of this cultural heritage. Manajitprasert et al.
[42] used the UAV-SfM method to conduct a 3D reconstruction of the two pagodas and verified the accuracy of the UAV-SfM model by comparing it with the point cloud results obtained by TLS. Mongelli et al. [43] applied techniques, such as UAV photogrammetry and FE modeling, to assess bridge crack patterns and their structural health by integrating a multidisciplinary approach.
The Nanjing City Wall surrounded Nanjing when the city was the capital of China’s early Ming Dynasty; it was constructed in the mid-to-late 14th century. Due to its long service life, weathering, and constant changes in external conditions, the wall has been damaged in many places. In particular, cracks allowed rainwater to flow deep into the bricks, which led to further erosion of the wall and promoted damage by vegetation. Over a long period, the influence of climate and environmental changes has weathered the wall surface, causing bricks to fall off and hollows to form in the wall. Despite continuous maintenance, many dangerous conditions and hidden dangers remain.
In order to accurately investigate and understand the defect information and the degree of defects on the surface of the city wall, this paper used UAV oblique photogrammetry and TLS to acquire data and perform defect detection on the Nanjing City Wall. Using the oblique images obtained by the UAV, a 3D model of the city wall was constructed, through which various defects on the wall could be seen directly, and the crack lengths and detached areas could be measured, improving the efficiency of wall inspection. However, some parts of the model were deformed due to occlusion by the surrounding trees, and data for these areas were supplemented using TLS. After preprocessing the obtained point clouds, various features of the point clouds were calculated, and the results were visualized. Wall defects were extracted using the visualization results, and wall cracks were measured. To verify the accuracy of these two techniques for measuring cracks, the endpoint coordinates of some cracks were collected using a prism-free total station, and the lengths of the cracks were calculated. The measurement accuracy of the two techniques was verified by calculating the errors, and both meet the accuracy requirements of surface defect detection for the city wall. The novelty of this paper is that various defects on the wall can be identified and extracted without contact using these two techniques. Compared with traditional manual inspection, they can locate and measure defects quickly, and the results are reliable.
The structure of this paper is as follows. Section 2 presents an overview of the study area and briefly describes the UAV oblique photogrammetry and TLS data acquisition and processing workflow. Section 3 presents the data processing results in detail, together with the analysis and discussion. Section 4 provides conclusions and prospects.

2. Materials and Methods

2.1. Study Area: Nanjing City Wall (from Jiefangmen to Xuanwumen)

The Nanjing City Wall surrounded Nanjing when the city was the capital of China’s early Ming Dynasty. It was constructed in the mid-to-late 14th century. The wall was composed of four parts: the palace city wall, the imperial city wall, the inner city wall, and the outer city wall. The inner city wall has 13 gates and is 35.267 km long; today, 25.091 km remain [44]. It is the longest masonry city wall in the world that is still standing. The outer city wall has 18 gates and a length of 60 km, of which 45 km remain. The Nanjing City Wall ranks first among the world’s city walls in terms of both length and size, whether historically or in the contemporary era. Figure 1 shows a part of the Nanjing City Wall.
In this study, the section from Jiefangmen to Xuanwumen, about 1500 m in total length, was chosen as a representative. The selection of this area was mainly based on a field survey and communication with related personnel. According to these personnel, compared with other sections of the wall, this section is relatively well protected and has not undergone much restoration. Moreover, this section of the wall is located in a hilly area rather than in the mountain range like other sections, and the continuity of the wall is better, which facilitates data collection. In addition, this section of the wall exhibits various typical defects, such as cracks, wall shedding, and damage by vegetation growth (Figure 2), which facilitates subsequent research.

2.2. UAV Oblique Photogrammetry Surveying

In this study, a DJI Phantom 4 Pro UAV (Figure 3) equipped with an FC6310 camera (Figure 4), a small multi-rotor high-precision UAV for low-altitude photogrammetry, was used. The specific camera parameters are shown in Table 1. In order to ensure the accuracy of the UAV photogrammetry images, eight ground control points (GCPs) were laid out in the study area, and their coordinates were surveyed using GPS-RTK technology in the WGS-84 coordinate system. The coordinates of the GCPs are shown in Table 2, and their distribution is shown in Figure 5. Because of the considerable length of the wall, three missions were conducted for data collection, and a total of 8585 images were acquired. Since there are trees and buildings around the city wall, all three missions were flown at the lowest possible height to improve image clarity while avoiding obstacles, so the three flights were at different heights. Since the DJI Phantom 4 Pro UAV is equipped with a single lens, the five-way flight mode was selected for route planning. That is, each mission is divided into five routes: the first is directly above the survey area, with the lens pointing vertically downward; the remaining four routes are located to the east, west, north, and south of the survey area, with the lens angle set to 45° by default. The route layouts and flight parameters of the three flights are shown in Figure 6.

2.3. TLS Surveying

Since tall trees around the city wall obscured parts of the UAV oblique photogrammetry, some areas of the 3D model were deformed; therefore, this study used the FARO Focus3D X330 scanner (Figure 7) to collect point cloud data of the city wall in these areas. Figure 8 shows part of the target points of the survey area. The instrument system includes the scanner itself, the built-in camera, the rotating platform, the tripod, and the spherical targets.
The scanner can scan 360° horizontally and 300° vertically by means of the rotating platform and built-in rotating camera; its maximum measuring distance reaches 330 m, and its measuring speed reaches 976,000 points per second, enabling high-speed, long-range, high-volume acquisition of 3D spatial information for large scenes. A total of 15 stations were erected for this data collection, and each station was scanned once. The scanner parameters were set to a horizontal angle of −180° to 180°, a vertical angle of −20° to 60°, and a scan resolution of 1/4.

2.4. Total Station Surveying

To verify the accuracy of crack measurement based on the UAV oblique photogrammetry model and the TLS point cloud model, a Topcon prism-free total station was used to measure the cracks. A total of 23 typical cracks were selected for the measurement of endpoint coordinates, and the field survey is shown in Figure 9.

2.5. Data Processing

The photogrammetric process is based on the automatic modeling of oblique images using ContextCapture, one of the software packages commonly used for processing oblique images, which automatically processes the images and generates a well-textured 3D model based on the SfM algorithm [45]. After the original images, GCP data, and POS (position and orientation system) data are imported into ContextCapture, the software automatically completes key point extraction, image pair selection, initialization of the exterior orientation elements, tie point matching, and bundle adjustment, thereby completing the aerial triangulation [46]. After aerial triangulation, the software performs automatic texture mapping and produces a fine 3D model. Calculated using the control points, the horizontal RMSE of the model was 0.52 cm, and the vertical RMSE was 0.83 cm. The data processing workflow for the oblique images is shown in Figure 10.
The point cloud data obtained by TLS were first preprocessed: the scans of all stations were registered together, the surrounding non-essential point clouds were removed, and then the separation of ground and non-ground points and data segmentation were performed. Calculated using the control points, the horizontal RMSE of the point cloud model was 0.27 cm, and the vertical RMSE was 0.25 cm.
Using the processed point cloud data, the wall defects were extracted as follows: first, a local neighborhood was constructed for each sampled point using the KNN algorithm, and this local neighborhood was fitted using the least squares method; second, the covariance matrix of each point’s neighborhood was calculated and its eigenvalues and eigenvectors were extracted in order to compute point cloud features such as the normal vector and curvature; finally, the results of the feature calculation were visualized. According to the comparison between the visualization results and actual photographs, the defects were classified and extracted, and the cracks in the city wall were measured. The data processing workflow for the point cloud data is shown in Figure 11.
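The neighborhood construction and plane fitting steps above can be sketched as follows. This is a minimal illustration, not the authors’ implementation; the function name and the use of SciPy’s k-d tree are our own assumptions. The normal of the least-squares plane through a neighborhood is the right singular vector of the centered neighborhood with the smallest singular value:

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_local_planes(points, k=8):
    """Fit a least-squares plane to the k nearest neighbors of each point
    and return the unit normal of each fitted plane."""
    tree = cKDTree(points)                 # spatial index over all points
    _, idx = tree.query(points, k=k)       # k-nearest-neighbor indices (incl. the point itself)
    normals = np.empty_like(points, dtype=float)
    for i, nbrs in enumerate(idx):
        nbh = points[nbrs]
        centered = nbh - nbh.mean(axis=0)  # shift neighborhood to its centroid
        # The right singular vector with the smallest singular value is the
        # normal of the least-squares plane through the neighborhood.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        normals[i] = vt[-1]
    return normals
```

For points sampled from a flat wall face, the returned normals all point (up to sign) along the face’s perpendicular; the curvature and other features are then derived from the same covariance eigen-decomposition.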
In order to more easily analyze the defect distribution under different features and compare the visualization results, a piece of point cloud data of about 6 × 12 m was segmented for the subsequent experiments. This area was selected because it has a dense distribution of defects and is located in the middle and upper part of the city wall, where it is less affected by occlusion from ground objects and the point cloud data are more complete. As shown in Figure 12, the colored point cloud is the segmented data.

3. Results and Discussion

3.1. City Wall Surface Defect Detection Based on UAV Oblique Photogrammetry

The purpose of this project is to build a measurable 3D model with geodetic information to realize the defect inspection of the city wall. The clear texture of the walls allows defects to be located and marked quickly, creating convenient conditions for the maintenance and repair of the walls. The digitization of the ancient city wall provides data support for the protection, management, and defect analysis of cultural heritage information and makes it possible to optimize the protection work. The oblique images, POS data, and GCP data were imported into the ContextCapture software, which automatically processed the images based on the SfM algorithm to obtain a high-precision 3D model with clear texture. Figure 13 shows the complete 3D model of the whole study area. Figure 14 and Figure 15 show some of the clear texture information of the wall.
After obtaining a high-precision 3D model of the city wall, various defects, such as cracks, wall shedding, and damage by vegetation growth, can be intuitively seen from the model, as shown in Figure 16. There are 43 cracks with lengths greater than 30 cm and 15 shedding surfaces with an area greater than 300 cm2 in total. Using the “Measurements” function of Context Capture, the longest crack was measured to be 300.05 cm (Figure 17), and the maximum shedding area was 12.37 m2 (Figure 18).

3.2. City Wall Surface Defect Detection Based on TLS

3.2.1. Determination of K Value

Given the relatively small amount of data in this experiment, the raw resolution point cloud was used for processing and analysis in order to better identify and extract the wall defects. If the data volume is large, the point cloud can be downsampled at a 0.5 sampling rate using a random sampling method, and the thinned point cloud can be used for subsequent experiments to improve computational efficiency.
In this paper, the K-Nearest Neighbor (KNN) algorithm [47] was used to construct a local neighborhood for each sampling point. KNN is a supervised learning algorithm whose working mechanism is as follows: given a test sample, the distances between the test sample and all training samples are calculated according to a chosen distance measure (most commonly the Euclidean distance), and the K nearest training samples are found. The category of the test sample is then decided based on the information of these K training samples; the most basic method is majority voting, i.e., the class occurring most often among the K nearest neighbors is assigned to the new sample.
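The majority-voting rule described above can be illustrated with a minimal sketch (an illustrative example, not code from the paper; the function name and the toy class labels are our own):

```python
import numpy as np
from collections import Counter

def knn_classify(train_x, train_y, test_pt, k=3):
    """Assign test_pt the majority class among its k nearest training
    samples, using the Euclidean distance as the distance measure."""
    dists = np.linalg.norm(np.asarray(train_x, dtype=float) - np.asarray(test_pt, dtype=float), axis=1)
    nearest = np.argsort(dists)[:k]            # indices of the k closest samples
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]          # majority vote
```

In this paper, only the neighbor search (not the classification step) is needed: the K nearest points form the local neighborhood used for plane fitting.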
After constructing the local neighborhood, the least squares plane of this neighborhood is fitted, and the general expression of the plane is solved. The normal vector of the fitted plane is considered the normal vector of the sampled point, and the curvature of the plane is considered the curvature of the point. In addition to the normal vector and curvature, the three features of structure tensor “eigenentropy”, structure tensor omnivariance, and structure tensor linearity were calculated according to the formula in reference [48].
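Assuming reference [48] follows the common eigenvalue-based definitions of these structure tensor features (our assumption; the exact formulas in [48] may differ), they can be computed from the sorted covariance eigenvalues of a neighborhood:

```python
import numpy as np

def structure_tensor_features(neighborhood):
    """Curvature, eigenentropy, omnivariance, and linearity of one local
    neighborhood, computed from its covariance eigenvalues (l1 >= l2 >= l3)."""
    pts = np.asarray(neighborhood, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)           # 3x3 covariance matrix
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]     # eigenvalues, descending
    lam = np.clip(lam, 0.0, None)                    # guard against tiny negatives
    l1, l2, l3 = lam
    e = lam / lam.sum()                              # normalized eigenvalues
    return {
        "curvature": l3 / lam.sum(),                 # surface variation
        "eigenentropy": float(-np.sum(e * np.log(e + 1e-12))),
        "omnivariance": float(np.cbrt(e.prod())),
        "linearity": (l1 - l2) / l1,
    }
```

A linear structure such as a crack edge yields linearity near 1 and omnivariance near 0, while a volumetric cluster (e.g., vegetation) drives the omnivariance and eigenentropy up, which is consistent with the visual behavior reported below.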
The determination of the K value is crucial for the subsequent conduct of the experiment. In this paper, in order to determine the best value of K, the results obtained by setting K = 4, 6, 8, and 10 are compared. Figure 19 shows the point cloud colored by the normal vector feature when K takes different values. Similarly, Figure 20, Figure 21, Figure 22 and Figure 23 show the point cloud colored by curvature, structure tensor “eigenentropy”, structure tensor omnivariance, and structure tensor linearity, respectively.
The computational results of each feature were analyzed based on the calculation speed, the ability to identify brick-to-brick splices, and the identification of wall defects, and K = 8 was finally determined as the best value (see Table 3). When K = 8 and 10, both can identify the wall defects clearly, but when K = 10, the calculation efficiency is lower; so, after comprehensive consideration, K = 8 was finally selected.

3.2.2. Defect Information Extraction

After the K value was determined, the wall defects were extracted and analyzed by comparing the point cloud visualization of each feature with real photographs, with the main purpose of determining under which features the wall defects are more easily identified and extracted. The visualization results of each feature are shown below. The color bars in Figure 24, Figure 25, Figure 26, Figure 27 and Figure 28 are color legends formed from the values of the corresponding feature calculated at each point, ordered from largest to smallest.
Figure 24a shows the point cloud visualization obtained by computing the normal vectors. Because of the way the normal vectors are computed, they are oriented both inward and outward. This results in a dark color where the bricks have fallen off and a brighter color in other areas, as seen in the red box in Figure 24b, which shows a fall-off defect of the wall. The joints between the bricks can be seen relatively clearly in the normal vector visualization, and each brick can be roughly distinguished. The green box in Figure 24b shows an area with large gaps between the bricks, which are considered to be transverse cracks in the wall. The areas where the surface appears blurred are those covered by vegetation, as shown in the green circle in Figure 24b.
Figure 25a shows the point cloud visualization obtained by computing the curvature. By comparing with the photos taken in the field, the distribution of plants on the walls can be clearly identified under the display of curvature features, as shown in the red circle in Figure 25b. The wall shedding defect can also be shown, as the red box in Figure 25b shows a brick shedding; the more green spots at the shedding place indicate the more serious shedding. Gaps between bricks can also be identified; the larger the gap, the brighter it is on the graph, as shown by the green box in Figure 25b.
Figure 26a shows the point cloud visualization obtained by computing the structural tensor “eigenentropy”. From the figure, it can be seen that each brick is not well distinguished in the visualization of “eigenentropy”; the connection between bricks cannot be seen clearly, and the areas covered with vegetation are not clearly discernible. However, the shedding defects on the surface of the wall can still be identified. The areas where the wall hollows and shedding occurs are shown as clusters of green dots, as shown in the red box in Figure 26b, and the normal bricks without shedding are shown in the green box.
Figure 27a shows the point cloud visualization obtained by computing the structural tensor omnivariance. The omnivariance is the most intuitive of all the features in terms of expressing information. The green lines are evident at the joints between the bricks, and the green box in Figure 27b shows an area with a large gap between the bricks. There are obvious clusters of green dots where the wall falls off, as shown in the red box in Figure 27b.
Figure 28a shows the point cloud visualization obtained by computing the structural tensor linearity. The other three features in reference [48], namely, structure tensor anisotropy, structure tensor planarity, and structure tensor sphericity are also calculated. However, due to the similarity of the visualization results, only linearity results are shown as the representatives in this paper.
From the figure, it is possible to identify the gaps between the brick joints and to clearly identify the shedding of the wall. The green box in Figure 28b shows large gaps around the bricks, caused by the long-term exposure of the bricks to external influences and changes in the force conditions of the wall bricks; these bricks are clearly very different from the normal bricks in the yellow box. The red box in Figure 28b shows the shedding defects of the wall bricks.
By comparing the point cloud visualization results of each feature, it can be concluded that the normal vector feature is the most obvious among all features to show the shedding defect of the wall, and the visualization result of structure tensor omnivariance is the clearest to identify the connection between bricks. Moreover, using the curvature feature, the vegetation defect of the wall can be clearly shown. In contrast, the ability to identify each defect of the wall using the structural tensor “eigenentropy” feature is the weakest.
To further study the crack defects of the city wall and to compare the results of the above feature visualizations with the rendering methods that come with the CloudCompare (CC) software, a section of the city wall point cloud with cracks was selected, and five different methods were used to render the point cloud. The specific comparison results are shown in Figure 29.
Figure 29a shows the city wall point cloud displayed in its original gray scale values. Figure 29b shows the result of elevation rendering. This method renders the morphology of the point cloud based on the elevation information of the data and can intuitively show the morphology of natural objects such as terrain. The disadvantage is that for complex point cloud data, the elevation rendering may be obscured and confused, which affects the rendering effect. Figure 29c shows the result of point index rendering. This method renders the morphology of the point cloud based on the location and index information of the data and can display complex structures and morphology in the point cloud more accurately. However, it requires preprocessing of the point cloud data, and the processing time may be longer for large point clouds. Figure 29d shows the result of arithmetic rendering. This method can perform various operations on the point cloud data, including addition, subtraction, multiplication, division, comparison, etc., and can transform the data into various different forms, which makes it more flexible. Then, according to the previous experimental results, the two features that show wall defects most clearly were selected: Figure 29e,f show the point clouds colored by the structure tensor omnivariance and curvature features, respectively. Overall, the different rendering methods have their own advantages and disadvantages, and the specific choice depends on the user’s needs for processing the point cloud data. As can be seen from the comparison in Figure 29, the point index rendering shows the cracks most clearly. This is because the data in this experiment are city walls, whose structure is neat: regular rectangular bricks are stacked on top of each other, making the geometry more regular overall.
Therefore, the cracks were finally measured on the point cloud rendered with the point index method; the measurement results are shown in Figure 30. Figure 30a is a schematic diagram of the locations of two cracks, and Figure 30b,c show the length measurements of these cracks: crack a is 1.668 m long and crack b is 1.355 m long. In total, 18 cracks with lengths greater than 30 cm were detected with this method.
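A crack traced on the rendered point cloud can be measured as a 3D polyline: the summed distance between successively picked points along the crack. This is an illustrative sketch of that computation, not the authors' measurement tool; the sample coordinates are hypothetical.

```python
# Sketch (illustrative): crack length as the total length of a 3D polyline
# digitised by picking points along the crack on the rendered point cloud.
import numpy as np

def polyline_length(vertices):
    """Total length of a 3D polyline given its ordered vertices (n, 3)."""
    v = np.asarray(vertices, dtype=float)
    # Sum of Euclidean distances between consecutive picked points
    return float(np.linalg.norm(np.diff(v, axis=0), axis=1).sum())

# e.g. a crack digitised with three hypothetical points (metres):
# polyline_length([[0, 0, 0], [1, 0, 0], [1, 0.5, 0]])  -> 1.5
```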

3.3. Accuracy Assessment

To check the accuracy of measuring cracks from the UAV oblique photogrammetry model and the TLS point cloud model, the endpoint coordinates of 23 typical cracks were collected with a prism-free total station, and the length of each crack was calculated. Seventeen of the 23 cracks were measured from the UAV oblique photogrammetry model, and the remaining six were measured from the TLS point cloud model. Table 4 gives the differences and the RMSE between the crack lengths measured on the models and those measured by the total station. As Table 4 shows, the RMSEs of crack measurement based on the UAV oblique photogrammetry model and the TLS point cloud model are 0.73 cm and 0.34 cm, respectively. These results show that crack measurement based on both models meets the accuracy requirements for surface defect detection of the city wall.
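The accuracy metric used here is the root-mean-square error between the model-measured and total-station crack lengths. A minimal sketch of the computation (the sample values below are illustrative, not the Table 4 data):

```python
# Sketch: RMSE between model-derived and total-station crack lengths.
import math

def rmse(model_lengths, reference_lengths):
    """Root-mean-square error of paired length measurements (same units)."""
    diffs = [m - r for m, r in zip(model_lengths, reference_lengths)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Illustrative call with hypothetical lengths in cm:
# rmse([150.2, 98.7], [149.8, 99.1])
```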

4. Conclusions

This paper takes the Nanjing City Wall as the research object and studies defect detection for ancient buildings based on UAV oblique photogrammetry and 3D laser scanning, providing data support for the conservation and repair of ancient buildings. By processing the oblique images obtained by the UAV, a high-precision 3D model of the city wall with realistic texture was constructed. On this model, defects such as cracks and wall shedding can be identified visually and easily, and the length of cracks and the area of wall detachment can be measured directly, which saves considerable time and effort compared with traditional manual visual inspection. However, some parts of the model were incomplete or deformed because of occlusion by the surrounding trees, so point clouds of these areas were acquired with TLS. After preprocessing the point clouds, a local neighborhood was constructed for each sampled point using the KNN algorithm and fitted with the least squares method. Different features of the point cloud were then calculated and visualized, and the ability of the visualizations of the different features to identify defects was analyzed and compared. The two features with the best defect recognition ability were then compared with the rendering methods built into the CC software. Since the city wall is built of regular rectangular bricks with a neat overall structure, the point cloud under point index rendering shows the wall cracks most clearly, so this rendering was used to measure the cracks. To verify the accuracy of the two techniques for crack length measurement, the endpoint coordinates of some of the cracks were measured with a prism-free total station, and the crack lengths were calculated.
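The neighborhood-fitting step summarized above can be sketched as a total least squares plane fit: the best-fit plane of a KNN neighborhood passes through the centroid, with its normal given by the direction of least variance. This is an assumed formulation, not the authors' exact code.

```python
# Sketch (assumed formulation): least-squares plane fit to a local
# KNN neighbourhood of the point cloud.
import numpy as np

def fit_plane(neighbourhood):
    """Return (centroid, unit normal) of the best-fit plane
    through a set of 3D points, via total least squares."""
    pts = np.asarray(neighbourhood, dtype=float)
    centroid = pts.mean(axis=0)
    # SVD of the centred points: the last right-singular vector is the
    # direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]
```

The residuals of each point from this fitted plane are what make geometric features sensitive to cracks and shedding: points on an intact brick face lie close to the plane, while points on a defect deviate from it.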
The RMSEs of crack measurement based on the UAV oblique photogrammetry model and the TLS point cloud model were 0.73 cm and 0.34 cm, respectively, which meets the accuracy requirements for surface defect detection of the walls. The experimental results prove that both techniques are helpful for detecting wall defects: they identify defects so that the relevant departments can be informed in time, which facilitates the protection and repair of the wall. UAV oblique photogrammetry and TLS can provide reliable data support for future ancient architecture conservation and have very high application value in this field.
However, some problems remain; for example, defect identification and measurement still require manual intervention, which can affect the accuracy of the results to some extent. One future research direction is to combine deep learning methods for defect recognition and measurement, so as to recognize cracks and wall shedding automatically and quickly obtain information such as crack length and shedding area.
Another problem is that both techniques suffer from a certain degree of missing data caused by the shielding of tall trees around the city wall. To address this, a multi-source data fusion approach is proposed for future study: (1) ground images assist the 3D reconstruction from oblique images, compensating for the large amount of side data of the ground structure that the oblique images cannot capture because of shielding; (2) TLS assists the 3D reconstruction from oblique images, with the dense point clouds collected by TLS filling the blind spots of the oblique images to obtain a more complete and clearer 3D model. The model results and workflow of this project can bring convenience to production work, and it is hoped that scholars will continue to explore better workflows and algorithms for building 3D models of ancient buildings and identifying their defects in follow-up studies.

Author Contributions

Conceptualization, Y.S.; methodology, Y.S., J.W., H.W. and Y.W.; data acquisition and curation, H.W. and Y.D.; writing—original draft, J.W. and Y.W.; writing—review and editing, Y.S., J.W. and H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 41971415 and Grant 42271450, in part by the Natural Science Foundation of Jiangsu Province under Grant BK20201387, and in part by Qing Lan Project of Jiangsu Province (QL2021), China.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors want to acknowledge the Nanjing City Wall Management Committee for the collection of Nanjing City Wall image data.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Part of the Nanjing City Wall.
Figure 2. Various defects of city wall: (a) cracks; (b) wall shedding; (c) damage by vegetation growth.
Figure 3. DJI Phantom 4 Pro UAV.
Figure 4. FC6310 camera.
Figure 5. Ground control point distribution map.
Figure 6. Route layouts and flight parameters of each of the three flights carried out by UAV.
Figure 7. Faro3D X330 scanner.
Figure 8. Part of the target points of the surveying area.
Figure 9. Surveying the cracks with the prism-free total station.
Figure 10. Data processing process of oblique images.
Figure 11. Flow chart of point cloud data for wall defect extraction.
Figure 12. Data segmentation diagram. (The color in the figure is rendered with the intensity information of the point cloud, and the size of the cropping area is about 6 × 12 m).
Figure 13. The 3D model of the study area.
Figure 14. Orthophotomosaic of the city wall. (The top view shows the corridor at the top of the city wall for tourists to walk on).
Figure 15. The texture of Nanjing City Wall’s inner wall.
Figure 16. Various defects in 3D model of the city wall: (a) cracks; (b) wall shedding; (c) damage by vegetation growth.
Figure 17. The longest crack in inner wall.
Figure 18. (a) The maximum shedding area of inner wall. (b) The perimeter and area of shedding part.
Figure 19. Point cloud marked by Normal Vector. (a) K = 4. (b) K = 6. (c) K = 8. (d) K = 10.
Figure 20. Point cloud marked by Curvature. (a) K = 4. (b) K = 6. (c) K = 8. (d) K = 10.
Figure 21. Point cloud marked by Structure Tensor “Eigenentropy”. (a) K = 4. (b) K = 6. (c) K = 8. (d) K = 10.
Figure 22. Point cloud marked by Structure Tensor Omnivariance. (a) K = 4. (b) K = 6. (c) K = 8. (d) K = 10.
Figure 23. Point cloud marked by Structure Tensor Linearity. (a) K = 4. (b) K = 6. (c) K = 8. (d) K = 10.
Figure 24. (a) Visualization of Normal Vector. (b) Defect analysis diagram. (The red boxes indicate the detachment of bricks, the green boxes indicate cracks in the wall, and the green circle is the area where the wall has been damaged by vegetation).
Figure 25. (a) Visualization of Curvature. (b) Defect analysis diagram. (The red box indicates the detachment of bricks; the green box indicates cracks in the wall, and the red circle is the area where the wall has been damaged by vegetation).
Figure 26. (a) Visualization of Structural Tensor “Eigenentropy”. (b) Defect analysis diagram. (The red boxes indicate bricks that have fallen off, and the green box indicates normal bricks that have not fallen off).
Figure 27. (a) Visualization of Structural Tensor Omnivariance. (b) Defect analysis diagram. (The red box indicates the detachment of bricks; the green box indicates cracks in the wall).
Figure 28. (a) Visualization of Structural Tensor Linearity. (b) Defect analysis diagram. (The red boxes indicate the detachment of bricks; the green boxes indicate cracks in the wall, and the yellow box indicates the normal bricks without defects).
Figure 29. Part of the wall point cloud displayed in the original grayscale value.
Figure 30. Crack measurement results. (a) Location diagram of two cracks. (b) Length measurement result of crack a. (c) Length measurement result of crack b.
Table 1. Camera parameters.

Image sensor: 1-inch CMOS; 20 million effective pixels
Lens: FOV 84°; 8.8 mm/24 mm (35 mm format equivalent); f/2.8–f/11 with autofocus (focus distance 1 m–infinity)
ISO range: 100–3200 (auto); 100–12,800 (manual)
Mechanical shutter speed: 8–1/2000 s
Electronic shutter speed: 8–1/8000 s
Photo size: 3:2 aspect ratio, 5472 × 3648; 4:3 aspect ratio, 4864 × 3648; 16:9 aspect ratio, 5472 × 3078
Image format: JPEG; DNG (RAW); JPEG + DNG
Table 2. Coordinates of ground control points.

Name   X (m)         Y (m)           H (m)
KZD1   385,841.700   3,549,621.522   20.060
KZD2   385,908.479   3,549,632.503   18.698
KZD3   385,597.541   3,549,975.841   12.437
KZD4   385,584.285   3,549,947.849   20.656
KZD5   385,119.681   3,550,061.545   11.895
KZD6   385,098.888   3,550,053.672   22.799
KZD7   384,990.770   3,550,572.047   13.436
KZD8   385,073.266   3,550,513.305   12.498
Table 3. Feature calculation speed, differentiation between bricks, and defect identification ability under different values of K.

Value of K   Calculation Speed   Differentiation between Bricks   Defect Identification
4            Fast                Poor                             Poor
6            Relatively fast     Relatively poor                  Relatively poor
8            Relatively slow     Relatively good                  Relatively good
10           Slow                Good                             Good
Table 4. The difference and RMSE of crack surveying.

Model       Maximum Difference (cm)   Minimum Difference (cm)   RMSE (cm)
UAV Model   2.9                       0.5                       0.73
TLS Model   1.7                       0.3                       0.34

