Article

A New Adaptive Method for the Extraction of Steel Design Structures from an Integrated Point Cloud

by Pawel Burdziakowski 1,* and Angelika Zakrzewska 2
1 Department of Geodesy, Faculty of Civil and Environmental Engineering, Gdansk University of Technology, Narutowicza 11-12, 80-233 Gdansk, Poland
2 Geopartner Spolka Z Ograniczoną Odpowiedzialnością Spolka Komandytowa, ul. Rakoczego 31, 80-171 Gdańsk, Poland
* Author to whom correspondence should be addressed.
Sensors 2021, 21(10), 3416; https://doi.org/10.3390/s21103416
Submission received: 19 April 2021 / Revised: 5 May 2021 / Accepted: 10 May 2021 / Published: 14 May 2021
(This article belongs to the Collection Vision Sensors and Systems in Structural Health Monitoring)

Abstract:
Measurement technologies for reality modelling, together with the appropriate data processing algorithms, are currently undergoing continuous and intensive development. The most popular methods include remote sensing techniques based on digital cameras recording reflected light, and active methods in which the device emits its own beam. This research paper presents the process of integrating terrestrial laser scanning (TLS) data and image data from an unmanned aerial vehicle (UAV), aimed at the spatial mapping of a complicated steel structure, together with a new automatic structure extraction method. We propose an innovative method to minimize the data size and automatically extract a set of points (in the form of structural elements) that is vital from the perspective of engineering and comparative analyses. The outcome of the research is a complete technology for the acquisition of precise information on complex and tall steel structures. The developed technology includes a data integration method, a redundant data elimination method, integrated photogrammetric data filtration and a new adaptive method of structure edge extraction. In order to extract significant geometric structures, a new automatic and adaptive algorithm for edge extraction from a random point cloud was developed and is presented herein. The proposed algorithm was tested on real measurement data. The developed algorithm substantially reduces the amount of redundant data and correctly extracts stable edges representing the geometric structures of a studied object without losing important data and information. The new algorithm automatically adapts to the received data and requires no pre-set initial parameters; the detection threshold is also selected adaptively based on the acquired data.

1. Introduction

Today, measurement technologies for spatial modelling are under continuous and vigorous development. The most popular methods include photogrammetric techniques based on digital visible light cameras and laser scanning. The development of these sensors drives engineers and scientists to devise new measurement methods and associated applications. These methods are increasingly applied in civil engineering [1,2,3], environmental engineering [4,5,6], construction [7,8,9,10] and architecture [11,12], thus intensively stimulating the further progress of these technologies.
When using the aforementioned photogrammetric techniques, the differences arising from the various types of sensors used should be taken into account. On the one hand, we are dealing with a passive sensor: a photo camera, whose images constitute a basis for further geometric studies; on the other, an active sensor: a laser scanner that collects information on the surrounding terrain, most usually in the red band. Different sensors and different methods for the acquisition of spatial information mean that the resulting data also differ. Both technologies have their pros and cons, which are described in more detail in [13]. Their simultaneous use cross-eliminates the restrictions of both sensors. Information from two or more sensory sources can be fused or integrated, which supports the modelling process and minimizes the modelling issues arising from the physics of a given sensor.
Quite often, the data acquired using an unmanned aerial vehicle (UAV) constitute a perfect complement to the terrestrial laser scanning (TLS) data. Therefore, these techniques can be deemed complementary.
In general, as indicated by the review of the source literature below and the nature of both technologies, it should be concluded that TLS information is used to generate a true geometric model (quantitative data), while visible light camera or multispectral camera data additionally provide qualitative and quantitative data. The source literature already contains multiple methods for fusing UAV and TLS data, and the number of their applications is constantly growing. The authors of [14,15,16] developed an improved method for assessing landslide risk based on a generated 3D surface model. UAV photos were used within the research to assess slope-forming rock cracks. The synergistic use of photogrammetric products and their fusion is often the case in the assessment of landslide risks, which is demonstrated by [17,18]. The authors of [19] concluded that the method for acquiring photos from a UAV is characterized by higher accuracy in modelling key forest properties during the regeneration phase. In their publication [20], the authors compared data acquired via stationary laser scanning and data from a scanner on board a UAV, a concept created by the Austrian company Riegl. UAV Laser Scanning (ULS) proved to be more efficient, faster and more accurate in the case of forest areas than the stationary method that was recognized as the reference in this study. As also noted, airborne laser scanning (ALS) provided lower-density clouds that, in the case of forest areas, failed to guarantee sufficient data density, which is vital for this type of object. The combination of image and laser scan data is widely used in forestry. In the cases of [21,22], data from two different sources significantly improved the elaboration quality and the ultimate point accuracy.
Low-altitude photogrammetry (or UAV photogrammetry) was found to be excellent for an accurate analysis of coastline and littoral areas [23,24,25]. The study [26] thoroughly assessed the accuracy of the applied digital surface model (DSM) that was aimed at detecting changes in a coastal area. The authors of [25] also presented a filtration method involving UAV data that was intended to enhance the matching of coastal area models. The publication [27] comprehensively described a method of fusing sensory data for coastal protection systems.
Salach et al., in [28], thoroughly analysed the accuracies achieved with UAV Laser Scanning (ULS) and UAV photogrammetry, concluding that the Digital Terrain Model (DTM) generated by ULS was significantly more accurate and enabled the elimination of inaccuracies related to terrain vegetation. The authors indicate that laser technology has clear advantages over photogrammetric models in situations where vegetation can be a problem during terrain surface reconstruction. In contrast, in the case of terrain not covered by vegetation, UAV photogrammetry enables surface model determination with an accuracy at the level of 1 cm [29].
Information on the natural environment can also be enhanced owing to use of multispectral sensors and the integration of these with spatial data. Salehi et al., in [30], reviewed a methodology for integrating multispectral camera and scanning laser data for the evaluation of sea cliffs in the Arctic region. Bujakowski et al., in [31], stated that the data from ALS and multispectral photography constituted grounds for the assessment of embankment stability.
Very good results are also achieved by combining scans and a photogrammetric model when studying engineering structures. Such data provide increased amounts of information and enable the precise stocktaking of cultural heritage structures [32]. Furthermore, owing to numerical and spatial models, the damage and degradation of cultural structures are assessable [33]. The research [34] was conducted from a similar perspective, in which laser scans were used to develop orthoimages to be used as a base to detect structural cracking. Moreover, scans and images can be integrated in order to obtain even more information, which in the case of structural assessment is an innovative method, and was described for the first time in [35].
The analysis of the geometry obtained from a point cloud was described in [35], which, just like the previously presented publications, focused on converting the cloud into orthoimages, then subjected these to analyses (e.g., edge detection). It should be noted that these methods develop a two-dimensional image representation (orthoimage), which is then assessed.
A very interesting publication [36] discussed the possible use of photogrammetric data for the supplementation of airborne laser scanning (ALS) data. Airborne scanning is characterized by the generation of a relatively low density of points; hence, high-resolution photos are perfect to complement the missing data. It is worth mentioning that the authors of [37] suggested reconstructing characteristic geometric structures (building roof outlines in this case) using integrated spatial data. Extracting only vital geometric structures enables the achievement of a significant data volume reduction.
It should be recognized that point clouds and high-resolution imagery carry large amounts of information. Their volume, therefore, can be limited only to what is essential, e.g., by isolating vital geometric structures. This issue was addressed in studies such as [38], in which Serna et al. used huge point clouds to extract only the objects that were important from the modelling perspective (building facades in this case). Xie et al. in [38] also presented an urban area building shape extraction method. In addition, they discussed methods of filtering and preparing the data for analyses.
In the case of the stocktaking of engineering structures, high-accuracy spacing mapping for the purposes of reconstruction or comparison is a very important issue. Publications [39,40,41,42] have thoroughly described the comprehensive use of measuring devices in order to improve the accuracy. What is more, they list and develop appropriate algorithms for the evaluation of structural performance.
Very often, the mapping accuracy in such analyses must be at a level of 1 mm; however, in the case of the object described in this article (a complex steel structure), its dynamic operation and erection precision must fall within a tolerance of 1 cm. Achieving such a result is quite complicated; therefore, in our article, we propose an innovative method for combining data in order to achieve the required outcome.
Integrated spatial data has a very large number of points. The integration of TLS and UAV clouds results in a number of points that commonly exceeds several million. In most engineering applications, such dense point clouds are not required, and only some characteristic elements of the structure—such as its edges—are analyzed [43,44,45,46]. Additionally, as in the presented case, the constructed object is compared with the design data in CAD (Computer Aided Design) software. Such CAD projects contain mainly lines, representing the edges of the object and its elements. Therefore, it seems reasonable to implement a method to extract only such characteristic features of an engineering structure from a fully integrated point cloud.
As the literature analysis indicates, edge extraction techniques for point clouds can be divided into methods using robust statistics [47,48,49,50], surface segmentation [51,52], line segmentation [53], region growing methods [54,55,56,57] and neural methods [58,59]. The application of these methods ranges widely, including robotics [60], reverse engineering [61,62], manufacturing industries [63,64,65] and cartography [46]. One feature common to the above-mentioned methods is sensitivity to the noise present in the point cloud. Because point clouds derived from real measurements of engineering objects generally carry a large amount of noise, the selected method should have some noise robustness, and the process of preparing the cloud for analysis should also take this fact into account.
This study integrated TLS data with UAV image data in order to reconstruct a complex spatial steel structure and then minimize the volume of data and automatically extract vital structural elements from the perspective of engineering analyses. The outcome of the research was the development of a technology for the acquisition of precise complex spatial information related to a high steel structure. This contains such elements as a method for integrating data and using it to extract vital structures, as well as methods for eliminating redundant data and for filtering integrated photogrammetric data. Ultimately, the subsequently applied structure extraction algorithms isolate structural elements that can be easily compared with best steel structure design practices, and consequently evaluate them in terms of execution. In the work, the developed final product, owing to the minimization of the volume of spatial information and the isolation of vital elements, was compared with a theoretical 3D model of the structure.
This study presents the following new solutions in the field of spatial measurements and data analysis:
  • The development and presentation of a complete integration technology for spatial data generated from two sensory sources: TLS data and airborne photogrammetry data obtained through UAV flights.
  • The comparative analysis of the developed models and the accuracy analysis of the integration process.
  • The development and testing of a new adaptive and automatic algorithm for the extraction of the edges of geometric structures from point clouds.
  • A new algorithm used to develop a reduced spatial model of a building’s steel structure.
Within this context, the paper has been organized as follows: the first section is the Introduction, which presents the motivation and background of this study; the second section, Materials and Methods, describes the tools and methods used to process the data, and presents the developed extraction algorithm. The third section discusses the results and quality obtained. The paper ends with a section entitled “Conclusions”, which summarizes the most important aspects of the study.

2. Materials and Methods

2.1. Object History and Description

The subject matter of the study was the Palm House of the Oliwa Park in Gdansk (Figure 1), constructed in the second half of the 18th century. It is located within the Adam Mickiewicz Park, which occupies an area of almost 10 ha. This park used to be a monastery garden established by the Cistercians and inspired by the French garden art of the Baroque. The palm house located therein acted as a winter garden housing exotic plants [66].
The inside of the building houses palms, cacti, aloes, philodendrons and banana trees in near-natural conditions. The palm house, as an element of a post-Cistercian complex, was entered into the register of monuments in 1971. The date palm therein is 180 years old, and it is the only such specimen in Poland. Prior to its renovation, the facility consisted of an eastern, single-storey brick building. Its cylindrical body was constructed in 1954. The dome, the southern section and parts of the western section of which are glazed, was 15 m high (Figure 2).
The palm house structure was demolished in September 2017 in order to replace it with a taller building that would accommodate the height of the date palm, which had been distorting the roof structure since 2013. The new structure is cylindrical, with a glass rotunda 24 m high and 17 m wide. The volume of the building is 4.4 thousand m3. Of note, 1400 supporting points were installed on the steel structure, each of which was individually fitted.

2.2. Process Description

A work methodology and algorithm were developed in order to process the measurement data and isolate the geometric structures of the studied building (Figure 3). The individual stages of the algorithm below are thoroughly discussed in the further sections of this research paper.

2.3. Data Acquisition

The data acquisition process was conducted using terrestrial laser scanning and UAV flight image acquisition. The fact that the upper section of the facility was unavailable to a laser scanner necessitated the use of a UAV with a non-metric camera. Figure 4 shows a graphical data acquisition diagram. TLS stations were uniformly distributed around the building; in this case, the laser scanner was set up at 17 stations. The distance of the scanner from the measured object was determined experimentally, as a compromise between the available space and the theoretical density of the measurement points. The essence of determining this distance is to choose a stand-off from the structure that yields combined coverage: TLS data for the bottom part of the structure and UAV data for the upper part.
Each UAV flight followed a circle with several different radii ($r_{c1}$, $r_{c2}$) at correspondingly different altitudes ($h_{AGL1}$, $h_{AGL2}$). Additionally, several vertical flights were conducted in order to photograph the structure below the dome. Figure 4 contains a diagram with the circular flight trajectories marked in red, which constitute the theoretical minimum, as well as the vertical flight trajectories that are advocated for the scanning of such structures. In practice, flying over numerous concentric radii is recommended. The objective of such a flight plan is to maximize the overlap of the photos and to multiply the projecting rays for a selected area. Two independent flights were applied in the case in question: the first covered seven concentric trajectories over the structure in a clockwise direction, while the second included nine counter-clockwise concentric trajectories. Some of the trajectories were executed automatically using the available UAV flight automation functions, whereas those at a short distance over the building were flown manually, as manual flight control over such a structure improves air operation safety.
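The circular flight plan described above can be sketched as a simple waypoint generator. The centre, radii, altitudes and waypoint count below are illustrative assumptions, not the mission parameters actually flown:

```python
import math

def circular_waypoints(center_xy, radius, altitude, n_points=36, clockwise=True):
    """Generate (x, y, z) waypoints on a circle around the structure.

    All parameter values are illustrative; a real mission would come from
    the UAV's flight-planning software.
    """
    cx, cy = center_xy
    step = -1 if clockwise else 1
    waypoints = []
    for i in range(n_points):
        theta = step * 2.0 * math.pi * i / n_points
        waypoints.append((cx + radius * math.cos(theta),
                          cy + radius * math.sin(theta),
                          altitude))
    return waypoints

# Two concentric rings at different radii/altitudes (values are assumptions)
ring1 = circular_waypoints((0.0, 0.0), radius=20.0, altitude=30.0)
ring2 = circular_waypoints((0.0, 0.0), radius=12.0, altitude=26.0, clockwise=False)
```

Flying several such concentric rings, plus vertical passes, maximizes photo overlap and multiplies the projecting rays for each part of the structure.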

2.3.1. UAV Photogrammetry: Initial Data Processing

The photogrammetric flight was conducted using a DJI Mavic Pro (Shenzhen DJI Sciences and Technologies Ltd., Shenzhen, China) UAV. Such a UAV is representative of the commercially available aerial vehicles designed and intended primarily for recreational flying. It was equipped with an integrated non-metric camera. A total of 1180 photos bearing metadata with the current UAV position were taken during the two flights. The data was saved in EXIF (Exchangeable Image File Format). The results were processed using the commercial Bentley ContextCapture (Bentley Systems Inc., Exton, PA, USA) software (Table 1), and the processing result was exported to a point cloud in the *.las format (Figure 5). The UAV image data were processed using the direct georeferencing method, which means that each image contained location data recorded by the UAV's on-board global navigation satellite system receiver. Due to the height of the structure and its design, ground control points could not be placed on the object. With the direct georeferencing method, the object was modelled according to its actual scale.

2.3.2. TLS Initial Data Processing

The laser scanning was conducted in a continuous mode using a Leica P30 (Leica Geosystems AG: Part of Hexagon, Sankt Gallen, Switzerland) scanner (Table 2). The measurement stations (17 in total) were placed on the ground, evenly around the structure. The measurements were taken using the option of recording up to a million points per second.
The robust estimation method and the well-known ICP (Iterative Closest Point) algorithm were used in order to align the scans from the individual stations. This method aims to appropriately filter the points in order to determine automatic reference points in a station's point cloud, and then combine them relative to the subsequent stations. The method utilizes an algorithm described in [16], in which the author aligned stationary stations relative to airborne ones using the least squares method. Another interesting modification of the ICP algorithm is presented in [67]. The scan alignment results are shown in Table 3 as the translations PX, PY, PZ (scan shifts along the individual axes), as well as Roll, Pitch and Yaw (inter-rotation of the stations). PKT is the automatically computed number of reference points taken into account in the calculations. The processing result, in the form of a point cloud, was exported to the *.las format.

2.4. Point Cloud Filtration

The point clouds generated within the previous stage have a certain amount of redundant data that is irrelevant from the point of view of the extracted structures, and a certain amount of noise and random data (Figure 5). For this reason, the developed point clouds were pre-filtered. As demonstrated in [68], cloud pre-filtration is very important and enables the isolation of vital infrastructure elements. Pre-filtration was also applied in [69]. Pre-filtration consists of four stages: noise filtering, cloth simulation filtering [70] (CSF), data reduction and statistical outlier removal (SOR) filtering. The same stages were applied for each of the acquired point clouds and are recommended prior to the cloud integration stage.
A Surface Distance-Based Filter [70] was applied in the case of the point cloud acquired using a UAV. This filter eliminates outliers (considered noise) that do not fall within a defined distance from the local surface, as determined inside a kernel window defined by a search radius. In this way, it is possible to eliminate noise, i.e., points lying beyond the minimum distance $D_{min}$, defined as:
$$D_{min} = \overline{sd_k} + n\sigma$$
where $\overline{sd_k}$ is the mean distance from the local surface determined by the k points adjacent to an indicated central point, n is a user-defined coefficient that usually takes a value of 1–3, and $\sigma$ is the standard deviation of the distance from the flat surface. It should be noted that setting overly aggressive parameters for this method can lead to excessive point cloud filtration; the process can be run iteratively in order to avoid this. Such filtration also tends to remove rounded surfaces and edges. In the case in question, the $\overline{sd_k}$ value was set at 0.006474 m, while n adopted the value of 1. This operation enabled the elimination of 90,290,876 outliers. After this stage, the number of points in the UAV cloud was reduced to 92,214,210 (Table 4) (Figure 6a).
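The surface distance-based filtration step can be illustrated with a short sketch. The local-plane fit via SVD and the brute-force neighbour search are simplifications for clarity, not the exact implementation used by the processing software:

```python
import numpy as np

def surface_distance_filter(points, k=8, n=1.0):
    """Remove points farther than mean + n*sigma from a local best-fit plane.

    Simplified sketch: for each point, a plane is fitted (via SVD) to its
    k nearest neighbours, and the point-to-plane distance is thresholded
    with D_min = mean + n*sigma. Brute-force neighbour search is used for
    clarity; a KD-tree would be used on real clouds.
    """
    pts = np.asarray(points, dtype=float)
    dists = np.empty(len(pts))
    for i, p in enumerate(pts):
        d2 = np.sum((pts - p) ** 2, axis=1)
        nbrs = pts[np.argsort(d2)[1:k + 1]]           # k nearest, excluding self
        centroid = nbrs.mean(axis=0)
        # plane normal = right singular vector of the smallest singular value
        _, _, vt = np.linalg.svd(nbrs - centroid)
        normal = vt[-1]
        dists[i] = abs(np.dot(p - centroid, normal))  # point-to-plane distance
    d_min = dists.mean() + n * dists.std()            # D_min = mean + n*sigma
    return pts[dists <= d_min]
```

On a planar patch with one point lifted well above the surface, the lifted point exceeds the adaptive threshold and is discarded while the planar points are retained.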
A neighbourhood distance filter was used in the case of the TLS cloud in order to eliminate outliers. This filter examines the k nearest neighbours of each point within the tested cloud; points whose mean neighbour distance is higher than the sum of the mean distance and the standard deviation are classified as outliers. This can be expressed as follows:
$$D_{min} = \overline{d_k} + n\sigma$$
where $\overline{d_k}$ is the mean distance of the k points adjacent to the measured (centre) point and n is a user-defined coefficient that usually takes a value of 1–3. The elimination of the outliers for the studied case was conducted for k = 6 neighbours and n = 1. The use of the algorithm resulted in the removal of 42,889,276 points deemed noise from the TLS cloud. The number of points after this operation was 60,791,121 (Table 4) (Figure 6b).
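The neighbourhood distance filter can likewise be sketched in a few lines; again, the brute-force search is a simplification suitable only for small example clouds:

```python
import numpy as np

def knn_distance_filter(points, k=6, n=1.0):
    """Neighbourhood-distance outlier removal (sketch).

    For every point, the mean distance to its k nearest neighbours is
    computed; points whose value exceeds the global mean plus n standard
    deviations (D_min = mean + n*sigma) are classified as outliers.
    """
    pts = np.asarray(points, dtype=float)
    mean_d = np.empty(len(pts))
    for i, p in enumerate(pts):
        d = np.sqrt(np.sum((pts - p) ** 2, axis=1))
        mean_d[i] = np.sort(d)[1:k + 1].mean()   # skip the zero distance to itself
    d_min = mean_d.mean() + n * mean_d.std()
    return pts[mean_d <= d_min]
```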
The next stage of the pre-filtration is the removal of the points representing the Earth’s surface and other objects located in the vicinity of the studied structure. The cloth simulation filter (CSF) followed by the manual elimination of small ambient objects was applied for this purpose. The CSF technique [70] enables the segmentation of point clouds and their division into points representing the ground and other elements placed on it.
Cloth simulation is a collision detection algorithm. These are used in computer graphics and computer simulations in order to find movement restrictions in 2D and 3D scenes. In general, a collision detection algorithm answers the following question: is moving any object in a given direction possible or are there obstacles in its path, i.e., other moving or stationary objects? Collisions between various fragments of the same object should also be detected as part of the cloth simulation. Certain modifications were introduced in order for this algorithm to be used for point cloud filtering. Collisions are detected by comparing the heights of the simulated cloth particle and the terrain. As soon as a particle reaches ground level, it is immobilized. The simulation provides an approximation of the real terrain, and then the distances between original cloud points and the simulated particles are calculated using an algorithm for the calculation of the distances between clouds. Points with distances smaller than a defined distance threshold are classified as ground, while the others constitute measurement (terrain) objects.
The practical implementation of the CSF algorithm requires the definition of three parameters. The first is the cloth resolution, which relates to the grid size. The next value is the number of iterations; usually, 500 iterations are sufficient. The last parameter is the classification threshold, which defines the distance between points and the simulated terrain. In order to filter both clouds (UAV and TLS), we assumed the following parameter values: a grid size of 2, 500 iterations and a classification threshold of 0.5. This eliminated the points classified as the ground surface, and the total number of points in both clouds was once again reduced (Table 4).
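A full cloth simulation is beyond a short example, but the final classification step of CSF can be approximated with a grid-based sketch. The per-cell minimum height used below is a simplified stand-in (an assumption) for the simulated cloth surface; it omits the particle interconnections and iterations described above:

```python
import numpy as np

def ground_filter_grid(points, cell=1.0, threshold=0.5):
    """Grid-based ground/object split, a simplified stand-in for CSF.

    The full CSF algorithm drops an interconnected cloth onto the inverted
    cloud; here the 'cloth' is approximated by the minimum height in each
    grid cell. What is kept is the final classification step: points within
    `threshold` of the approximated surface are labelled ground, the rest
    are labelled objects.
    """
    pts = np.asarray(points, dtype=float)
    ij = np.floor(pts[:, :2] / cell).astype(int)
    ground_z = {}
    for (i, j), z in zip(map(tuple, ij), pts[:, 2]):
        ground_z[(i, j)] = min(z, ground_z.get((i, j), np.inf))
    dist = np.array([z - ground_z[tuple(c)] for c, z in zip(ij, pts[:, 2])])
    is_ground = dist <= threshold
    return pts[is_ground], pts[~is_ground]
```

Points belonging to the structure sit well above the approximated surface of their grid cell and are therefore separated from the ground class.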
After eliminating the ground surface, objects located in the vicinity of the studied structure were removed manually. They included a bucket truck and elements of technical infrastructure that the analysis did not cover. After this operation, the UAV and TLS point clouds were deemed fully cleaned and ready for another density balancing operation (Figure 7).
The next step in preparing the clouds for integration is balancing their density. The cloud of higher density should be reduced by a determined density reduction factor ($R_D$). For the purposes of this study, the reduction factor was defined as

$$R_D = 100 \, \frac{D_{PC_{LOW}}}{D_{PC_{HI}}}$$

where $D_{PC_{HI}}$ and $D_{PC_{LOW}}$ are the mean densities of the clouds with higher and lower density, respectively. The mean cloud density ($D_{PC}$) was defined as the average of the local surface densities ($D_i$) computed for the k-neighbours of each studied point within a radius r:

$$D_{PC} = \frac{1}{n_T} \sum_{i=1}^{n_T} D_i, \qquad D_i = \frac{n_i}{\pi r^2}$$

where $D_i$ is the local cloud surface density (points/m2), $n_i$ is the number of points adjacent to the studied point i within radius r (m), and $n_T$ is the total number of points in the cloud.
Using the expressions above, the mean density and the UAV cloud reduction factor were calculated for both data sets: UAV and TLS. Consequently, the mean UAV data density amounted to 4973.59 (points/m2), the mean TLS data density was 1624.03 (points/m2), and the reduction factor was 32.65%. As a result, the number of points in the UAV cloud was reduced and the densities of both clouds were balanced. The number of points in the UAV cloud after this operation was 24,160,311 (Table 4).
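The density-balancing computation can be reproduced numerically. The subsampling strategy below (uniform random thinning) is an assumption, since the text does not specify how the reduction factor was applied to the denser cloud:

```python
import numpy as np

def mean_surface_density(points, k=8):
    """Mean surface density D_PC = (1/n_T) * sum(D_i), with D_i = n_i / (pi r^2),
    where r is the planimetric distance to the k-th neighbour (sketch)."""
    pts = np.asarray(points, dtype=float)
    dens = np.empty(len(pts))
    for i, p in enumerate(pts):
        d = np.sqrt(np.sum((pts[:, :2] - p[:2]) ** 2, axis=1))
        r = np.sort(d)[k]                 # radius enclosing k neighbours
        dens[i] = k / (np.pi * r ** 2)    # points per m^2
    return dens.mean()

def balance_density(dense_cloud, d_hi, d_low, rng=None):
    """Randomly keep R_D = 100*(d_low/d_hi) percent of the denser cloud
    (uniform thinning is an assumption, not the paper's stated method)."""
    rng = rng or np.random.default_rng(0)
    r_d = d_low / d_hi                    # e.g. 1624.03 / 4973.59 = 0.3265
    keep = rng.random(len(dense_cloud)) < r_d
    return np.asarray(dense_cloud)[keep]

# Reduction factor from the densities reported in the text:
r_d = 100.0 * (1624.03 / 4973.59)         # approximately 32.65 %
```

Plugging in the reported densities reproduces the 32.65% reduction factor stated above.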
The ultimate stage in preparing the data for integration is filtration based on a statistical filter [71,72]. This filter assumes that an outlier is a point located further than an adopted threshold, defined via the mean and standard deviation of the distance distribution over the k-neighbours of each cloud point. Let point $m_i$ with coordinates $(x_i, y_i, z_i)$ in $\mathbb{R}^3$ belong to point cloud $M$ with a total number of points $M_p$:

$$M = \{m_i\}, \quad i = 1, \ldots, M_p, \quad m_i = (x_i, y_i, z_i)$$

Let $m_q$ denote a studied point, $m_q \in M$, and $m_n$ its neighbouring points, $m_n \in M$. Then the nearest neighbourhood $M_n$ of k points adjacent to the studied point $m_q$, $M_n = \{m_n^1, \ldots, m_n^k\}$, satisfies the condition:

$$\frac{1}{k} \sum_{n=1}^{k} \| m_n - m_q \|_p \le d_m$$

where $d_m$ is the maximum adopted distance between the studied point and $m_n \in M_n$, and $p \ge 1$ (here $p = 2$ is adopted).
In consequence, the mean distance of the studied point $m_q$ to all k points in its neighbourhood is

$$d_i = \frac{1}{k} \sum_{n=1}^{k} \sqrt{(m_n - m_q)^2}$$
and, over all points $m_i$, the mean value of $d_i$ is

$$\mu = \frac{1}{M_p} \sum_{i \in M_p} d_i$$
The standard deviation for the studied set M can be defined as

$$\xi = \sqrt{\frac{1}{M_p} \sum_{i \in M_p} (d_i - \mu)^2}$$
Thus, the resultant point cloud $M_o$, without outliers, is defined as follows:

$$M_o = \{ m_q \in M \mid (\mu - \alpha \xi) \le d_i \le (\mu + \alpha \xi) \}$$

where $\alpha$ is an experimentally determined multiplier for a given point cloud.
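The statistical (SOR) filter defined by the equations above can be sketched directly; the brute-force neighbour search is a simplification for small example clouds:

```python
import numpy as np

def sor_filter(points, k=6, alpha=1.0):
    """Statistical outlier removal following the equations above (sketch).

    d_i = mean distance to the k nearest neighbours; mu and xi are the
    global mean and standard deviation of the d_i values; points with d_i
    outside [mu - alpha*xi, mu + alpha*xi] are discarded.
    """
    pts = np.asarray(points, dtype=float)
    d = np.empty(len(pts))
    for i, p in enumerate(pts):
        dist = np.sqrt(np.sum((pts - p) ** 2, axis=1))
        d[i] = np.sort(dist)[1:k + 1].mean()      # skip the distance to itself
    mu, xi = d.mean(), d.std()
    keep = (d >= mu - alpha * xi) & (d <= mu + alpha * xi)
    return pts[keep]
```

Note that, unlike the one-sided neighbourhood filter, the band is two-sided, exactly as in the definition of $M_o$ above.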
The aforementioned statistical filter was applied only once for any given cloud. In the case of the UAV data, we adopted k = 6 and α = 1, and k = 8 and α = 4 for the TLS data, which enabled the ultimate elimination of the outliers (Table 4). The results, in the form of a cloud image, are shown in Figure 8, which indicates that the UAV cloud maps the geometry in the upper part of the object, especially near the peak rosette, clearly better. The TLS cloud does not capture the complete object geometry in this section. Once TLS cloud noise and irrelevant data were removed, the shortcomings of this model were revealed. This was a predictable situation, because the scanner was positioned at the bottom of the object, such that it was not physically possible to fully map the object in this area.

2.5. Point Cloud Integration

Point cloud integration is the final process in preparing the data for geometric structure extraction. In this case, the integration successively uses the 4-Point Congruent Sets (4PCS) [73] and Iterative Closest Point (ICP) [74,75,76] algorithms. Integration here in fact involves the determination of the elementary rotation matrices $R_X(\theta)$, $R_Y(\theta)$, $R_Z(\theta)$ and the 3D coordinates of the translation vector $(T_X, T_Y, T_Z)$. This procedure is often encountered in similar tasks [77].
Cloud integration is conducted in two stages: coarse matching first, followed by precise matching. As described above, the data was significantly filtered and denoised. However, it should be noted that the data sources differ, and that the modelled surfaces of the structural elements have slightly different shapes depending on the data source. TLS cloud objects have sharp and clear shapes. Metal section cross-sections are very sharp; however, due to occlusions, some of the closed sections have only one part modelled (usually the outer one) that is directly illuminated by the laser beam. The UAV-based model has slightly more rounded section edges. The cross-sections of the metal sections are geometrically correct, with rounded and smoother edges, which is directly related to the characteristics and accuracy of photogrammetric modelling. The phenomenon of occlusion did not have such a significant impact on the data volume, and most sections were completely modelled. Minimizing occlusion results directly from the number of stations taking the photographs: in practice, these were hundreds of positions, whereas in the case of TLS, there were 17 stations. It follows that the 4PCS algorithm, as preliminary matching, fits perfectly in this case, as was also demonstrated in [53]. As a consequence, the outcome of preliminary matching of the balanced cloud with the 4PCS algorithm was the following values of the rotation matrix R and translation vector t:
$$M = R \cdot S + t$$
$$R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} = \begin{bmatrix} 0.821028828621 & 0.570886731148 & 0.0 \\ -0.570886731148 & 0.821028828621 & 0.0 \\ 0.0 & 0.0 & 1.0 \end{bmatrix}$$
$$t = \begin{bmatrix} T_X \\ T_Y \\ T_Z \end{bmatrix} = \begin{bmatrix} 9.172649383545 \\ 10.196824073792 \\ 0.000000000000 \end{bmatrix}$$
where S and M represent the source cloud and the target cloud (model), respectively.
Coarse matching was conducted using the 4-Point Congruent Sets (4PCS) algorithm [73]. This technique is fast, noise-resistant and able to match point clouds containing a high number of outliers. As its authors claim, cloud pre-filtration and denoising are not strictly required. What matters during cloud filtration is preventing the loss of significant object elements: overly aggressive filtration results in a significant loss of high-frequency features, especially in UAV models. The UAV model has considerably fewer high-frequency details, which manifests as rounded edges on sharp objects and the elimination of small objects. In photogrammetric models, elements smaller than 1.5 × GSD (ground sampling distance) are often omitted. The mean GSD for the UAV model is 11 mm; therefore, objects smaller than about 16.5 mm are likely to be eliminated in the data processing and cloud pre-filtration stages.
The authors of [73], after pre-matching clouds with the 4PCS algorithm, applied precise matching with the ICP algorithm. Likewise, in this case, the ICP algorithm was used in the second stage, where the rotation matrices and translation vectors were again determined. Good cloud pre-matching is therefore important, which stems directly from the ICP algorithm's principle of operation. In our study, let S and M represent the source cloud and the target cloud (model), respectively; here, the source cloud is the TLS cloud, while the UAV cloud is treated as the target. We are therefore looking for the rigid transformation that minimizes the distance between corresponding points in the two clouds. The resultant cloud is shown below (Figure 9).
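The fine-matching step can be illustrated with a minimal point-to-point ICP in NumPy (brute-force nearest-neighbour correspondences and a Kabsch/SVD alignment step). This is a sketch of the general ICP principle only, not the implementation actually used in the processing pipeline:

```python
import numpy as np

def best_rigid_fit(src, dst):
    """Kabsch/SVD: least-squares (R, t) mapping paired points src -> dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T          # D guards against a reflection solution
    return R, cd - R @ cs

def icp(source, target, iters=10):
    """Point-to-point ICP with brute-force nearest-neighbour pairing."""
    src = source.copy()
    for _ in range(iters):
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        pairs = target[d.argmin(axis=1)]    # nearest target point for each source point
        R, t = best_rigid_fit(src, pairs)
        src = src @ R.T + t                 # move the source cloud
    R, t = best_rigid_fit(source, src)      # total transform source -> aligned result
    return R, t, src
```

Because ICP converges only locally, the 4PCS coarse alignment above is what makes this refinement reliable in practice.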

2.6. Adaptive Structure Extraction Algorithm

The objective of extracting a structural object from an integrated point cloud is the isolation of its stable representatives: the points which best represent the geometric structure of the object, regardless of their source and independently of noise. In our study, an original automatic and adaptive method, combining edge extraction from a random point cloud with adaptive thresholding, was developed to extract the target steel structure. Our method is based on the automatic extraction of edges from a point cloud, as described in [62] and modified following [63]. Furthermore, the Otsu method [78] used in [62] was replaced by adaptive thresholding [79]. The result is a new, adaptive and automatic algorithm for the extraction of edges from a point cloud. The algorithm was developed to extract the geometric structure of this particular steel building, which has a rather complicated shape; however, this does not preclude its general application to other purposes. The method is automatic and does not require any input parameters.
In the first stage of the algorithm, a normal vector n_i is calculated for each point p_i of the cloud, based on a neighbourhood determined by the k nearest points. The normal vector n_i is the eigenvector corresponding to the lowest eigenvalue of the covariance matrix defined in [80]:
$$C = \frac{1}{k}\sum_{i=1}^{k}(p_i - \bar{p})(p_i - \bar{p})^T, \qquad C \cdot v_j = \lambda_j \cdot v_j, \quad j \in \{0, 1, 2\}$$
where k is the defined number of neighbours of the query point p_i, p̄ is the centroid of the k neighbours, λ_j is the j-th eigenvalue of the covariance matrix, and v_j is the j-th eigenvector. For a given query point p_i, the k nearest neighbours can be determined using the method of [81].
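The covariance and eigen-decomposition above can be sketched as follows; a brute-force neighbour search stands in here for the efficient search of [81], and the function names are ours:

```python
import numpy as np

def k_nearest(points, i, k):
    """Indices of the k nearest neighbours of points[i] (brute force)."""
    d = np.linalg.norm(points - points[i], axis=1)
    return np.argsort(d)[1:k + 1]            # skip the query point itself

def normal_at(points, i, k):
    """Normal n_i: eigenvector of the neighbourhood covariance with the lowest eigenvalue."""
    nbrs = points[k_nearest(points, i, k)]
    c = nbrs - nbrs.mean(axis=0)
    cov = c.T @ c / k                        # C = (1/k) sum (p - pbar)(p - pbar)^T
    w, v = np.linalg.eigh(cov)               # eigenvalues returned in ascending order
    return v[:, 0]                           # eigenvector of the smallest eigenvalue
```

For points sampled from a locally flat surface, the smallest eigenvalue is near zero and its eigenvector is the surface normal.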
The neighbours of point p_i can be expressed as the set V_i = {n_1, n_2, …, n_k}; therefore, the centroid p̄_i of the set V_i can be calculated from the following formula [63]:
$$\bar{p}_i = \frac{1}{|V_i|}\sum_{j=1}^{k} n_j$$
The scalar product of the vector (p_i − p̄_i) and the normal vector n_i at point p_i can be expressed as:
$$P_d(i) = \left|(p_i - \bar{p}_i) \cdot n_i\right|$$
This value becomes smaller the closer the query point p_i lies to points forming a flat surface [62]. In contrast, the scalar product P_d adopts the highest values for points located on edges. This makes it possible to classify every point as lying on an edge or not. Sample P_d values for several cases are shown in Figure 10.
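Putting the pieces together, the per-point response P_d can be computed as below (a self-contained sketch with brute-force neighbours, not the released code):

```python
import numpy as np

def edge_response(points, k):
    """P_d(i) = |(p_i - centroid of k-neighbourhood) . n_i| for every point."""
    pd = np.empty(len(points))
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[1:k + 1]]        # k nearest neighbours
        centroid = nbrs.mean(axis=0)
        c = nbrs - centroid
        _, v = np.linalg.eigh(c.T @ c / k)           # neighbourhood covariance
        normal = v[:, 0]                             # lowest-eigenvalue eigenvector
        pd[i] = abs((p - centroid) @ normal)
    return pd
```

On an L-shaped surface, points along the fold accumulate a large P_d because the neighbourhood centroid is pulled away from the surface, while interior plane points score essentially zero.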
The next stage of the algorithm involves iterative calculations of P_d for successive values of k. In the case in question, it was assumed that k = {8, 16, 32, …, 128}, which gives a total of 16 results for one cloud. If a given edge appears in every iteration, for different k values, it can be considered a very stable feature. In other words, if a high P_d value appears at the same point p_i in all of the results, that point represents a stable edge of the structure. Thus, if the value of P_d at point p_i equals or exceeds a certain threshold T, the point represents an edge; conversely, if the value is lower than T, it is not treated as an edge. This relationship can be expressed for all iterations as:
$$F(i) = \begin{cases} 1 & \text{if } \sum_{s=1}^{n_s} P_d^{(s)}(i) \geq T \\ 0 & \text{if } \sum_{s=1}^{n_s} P_d^{(s)}(i) < T \end{cases}$$
where T is defined adaptively and globally for all potential edges using the adaptive method of [79], and n_s represents the total number of iterations.
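Given the P_d responses at each scale (one row per k value), the voting rule F(i) reduces to a thresholded sum. The sketch below uses a fixed T for illustration, whereas in the method T is selected adaptively [79]:

```python
import numpy as np

def edge_vote(pd_per_scale, T):
    """F(i): 1 where the P_d responses summed over all scales reach T, else 0."""
    total = np.asarray(pd_per_scale).sum(axis=0)   # (n_scales, n_points) -> (n_points,)
    return (total >= T).astype(np.uint8)
```

A point that responds at every scale accumulates a large sum and survives as a stable edge; a point that responds at only one scale does not.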
For the method in question, the proper determination of the threshold T is important. In order to match the value of this threshold automatically, the authors used the adaptive thresholding technique discussed in [79]. This algorithm performs its task in two stages. In the first stage, an integral image is calculated from the source image [82]. In the second stage, the integral image is used to calculate the mean of the s × s pixel window surrounding each studied image point, followed by a comparison of the pixel values: if the value of the current pixel is t percent lower than the calculated mean of its surroundings, the pixel takes the value 0 (black); otherwise, it takes the value 1 (white). In this research, t = 50%.
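A minimal version of this two-stage procedure (integral image, then a windowed-mean comparison, following Bradley and Roth [79]) could look like this; s and t are the window size and percentage from the text, and the function names are ours:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top/left border for easy window sums."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def adaptive_threshold(img, s, t=0.50):
    """Bradley-Roth: a pixel becomes 0 if it is t percent below its s x s local mean."""
    h, w = img.shape
    ii = integral_image(img)
    out = np.ones((h, w), dtype=np.uint8)
    r = s // 2
    for y in range(h):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        for x in range(w):
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            area = (y1 - y0) * (x1 - x0)
            # window sum in O(1) from the integral image
            total = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
            if img[y, x] * area < total * (1.0 - t):
                out[y, x] = 0
    return out
```

The integral image makes each window mean a constant-time lookup, which is what allows the threshold to adapt locally without a per-pixel rescan of the window.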

3. Results and Discussion

3.1. Integration Quality Assessment

The accuracy of the mutual cloud matching after integration was assessed both visually, by developing cross-sections at various levels (Figure 11), and objectively, using the methods from [83,84]. A Multiscale Model to Model Cloud Comparison (M3C2) distance map was developed for each point cloud. The results for the processed clouds are shown in Figure 12.
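M3C2 measures distances along locally estimated normals at multiple scales; as a much simpler illustration of the underlying idea of a cloud-to-cloud distance map, the plain nearest-neighbour (cloud-to-cloud) distance and its statistics can be computed as follows. This sketch is not M3C2 itself:

```python
import numpy as np

def c2c_distances(compared, reference):
    """Nearest-neighbour distance from each compared point to the reference cloud."""
    d = np.linalg.norm(compared[:, None, :] - reference[None, :, :], axis=2)
    return d.min(axis=1)
```

The mean and standard deviation of these distances play the same role as the histogram statistics reported below for the M3C2 maps.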
An analysis of the cross-sections based on the integrated point clouds at four representative levels (Figure 11) clearly indicates the precision achieved by the integration process and the point distribution. Cross-section A, developed at the top of the structure, is characterized by a significant number of UAV points, whereas the TLS points have only a trace share in the modelling of the level-A elements. The UAV cloud at level A ensures the required separation between the elements and data continuity within the element cross-section. The TLS cloud, in contrast, does not ensure modelling continuity. A concentration of TLS points is visible at level B; however, this occurs only on the outer structural elements. The UAV cloud also guarantees element modelling continuity and separation at this level. Level C exhibits a clear balance of modelling continuity between the two techniques. The TLS and UAV clouds enable the modelling of elements throughout their entire perimeter; the cross-section is relatively continuous, and data are available even for internally located structural sections. It is noteworthy that, at this level, the TLS cloud represents the modelled element significantly more clearly, and its shape is precisely reflected. The same element in the UAV cloud is clearly rounded, and its shape is not as sharp. The differences in distance at this level amount to several millimetres (a maximum of 5 mm) and result from the nature of the point cloud acquisition technique and the UAV flight plan; no peripheral flights were performed at this level. At level D, the separation ability of the UAV technique is significantly lower, yet it maintains continuity, albeit incorrectly. The UAV cloud at this level cannot model smooth elements in close proximity, because they merge into one shape. In this case, the TLS technique produced a clear structural model, similar to level C.
Analysing the M3C2 distance histogram and the fitted normal distribution (Figure 13), it can be concluded that the standard deviation is 16 mm, with a mean of 0, for the TLS cloud, which means that this cloud overlaps with the UAV cloud. Because the UAV cloud slightly deviates from the actual section course in the bottom part of the structure (as shown by cross-sections C and D in Figure 12), the distance projected onto the UAV cloud indicates a slightly higher standard deviation of 34 mm and a mean of 6 mm. These differences demonstrate that the UAV cloud slightly deviates from an ideal model, especially in the lower parts of the modelled structure. The change in section shape to a more rounded one can be observed as the number of stations decreases and the GSD increases. Conversely, TLS shows greater shape stability at the expense of data volume. In the upper structure sections, the TLS cloud (cross-sections A and B in Figure 12) maps the shape poorly or not at all; however, despite the lack of data, the captured shape is geometrically very correct.

3.2. Structure Extraction

The operation of the developed structure edge detection algorithm was validated in two stages. In the first stage, the algorithm was tested on a fragment of the source cloud, which was subjected to data reduction, i.e., a reduction of the cloud density. The second stage involved testing the algorithm on the entire source cloud (the fully integrated TLS and UAV point cloud).
In the first stage, structural extraction was validated on a test set, i.e., a representative fragment of the steel structure containing parts of a vertical supporting beam and thinner horizontal supports. Five data sets, with minimum distances between cloud points of 0.5 mm, 1 mm, 3 mm, 5 mm and 7 mm, were developed in order to determine the ability of the algorithm to extract structures and the minimum usable density of the source cloud. These sets were then processed with the developed method, and the results are shown in Figure 14. The source cloud points of a given data set are marked in magenta, and the points of the detected edges are marked in green.
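The density reduction used to build the five test sets (enforcing a minimum point-to-point distance) can be sketched as a greedy subsampling. This is a quadratic-time illustration of the principle, not the tool actually used:

```python
import numpy as np

def min_distance_subsample(points, d_min):
    """Keep a point only if every previously kept point is at least d_min away."""
    kept = []
    for p in points:
        if all(np.linalg.norm(p - q) >= d_min for q in kept):
            kept.append(p)
    return np.array(kept)
```

Raising d_min thins the cloud while preserving its spatial extent, which is exactly the trade-off the five test sets probe.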
The analysis of the results indicated that the developed algorithm correctly extracts structure edges. In the case of the source cloud (not subjected to reduction) (Figure 14a), all of the sharp edges were indicated correctly. These sharp edges originate primarily from laser scanning and are especially apparent on the horizontal reinforcement beams. UAV points form slightly smoother edges, and islands of points appear on some flat surfaces of vertical sections, which are detected as edges. This phenomenon occurs at high densities of an irregular point cloud and is clearly minimized once the minimum distance between cloud points reaches 3 mm (Figure 14d). Edges are detected correctly for clouds in which the minimum distance between points is 1–3 mm: the edges of the vertical beams and of the thinner strengthening elements are clearly marked, and no loss of data concerning the studied structure occurs at this density. Further reduction (7 mm) causes the edges of the thinner horizontal elements to no longer be detected, with a consequent visible loss of data. The described behaviour applies to the proposed number of iterations (16) and the highest value of k = 128. Because the integrated source point cloud exhibits a very high density, the number of scale levels planned herein (16) might be insufficient; a larger span of the k scale can be used for a higher density, at a clear cost in computing speed. It should be noted, however, that the integrated point cloud is not uniform, as it originates from two sources. The structure has slightly rounded section edges, so that, at high cloud density, such a rounded section element becomes a potential edge. In other words, at high density the algorithm is sensitive enough to detect even the smallest edges, especially on uneven surfaces. This unexpected property can be a great advantage when detecting cracks in particular; however, this was not the goal in this case.
Additionally, these surface irregularities originate from the type of point cloud acquisition technology applied, and are notably visible in the UAV cloud. A close-up of this phenomenon is shown in Figure 15, where the clouds are divided into UAV points (blue), TLS points (green) and detected edge points (red).
Obtaining the optimal point cloud density enabled us to carry out the final computations for the entire object. The results are shown in Figure 16. The detected edges, shown in the left view, constitute the characteristic elements of the steel spatial structure. The middle view shows the structure with the source cloud reduced to a minimum point spacing of 3 mm. On the right is a composite view of two clouds: the detected edges in green and the baseline source cloud in magenta.
The analysis of the final results shows that the essential structural elements have been preserved. The algorithm correctly isolated all of the edges of the structural elements and connections. Moreover, the peak rosette is correctly depicted in the detected edges. Overall, the detected elements enable a proper comparative assessment of the steel structure, i.e., a comparison of the design data with the data acquired by measuring the actual structure.

4. Conclusions

This study presents a comprehensive approach to processing spatial measurement data using modern techniques. The measured building was a steel structure subjected to verification. Structural verification in the course of construction involves comparing the current shape with the designed shape. Measurements using terrestrial laser scanning and low-level photogrammetry were conducted for this purpose. Because terrestrial laser scanning was unable to cover the entire structure of the building, its upper part was mapped using data from a UAV. The vehicle was used to reach the peak rosette crowning the building, where it captured imagery that was used to construct a point cloud, subsequently integrated with the cloud obtained from laser scanning.
This article presents in detail the process of acquiring measurement data from various sources, as well as their integration and the extraction of the geometric structure. The entire process involved separate and independent filtration of both point clouds, including the reduction of noise, outliers and elements of the structure's surroundings. This filtration was followed by balancing the cloud density and integrating both point clouds. The resulting integrated point cloud enabled an objective presentation of the current geometric state of the building. Because both applied technologies have very broad reality visualization abilities, the reconstructed building contained many additional elements that were unnecessary for assessing the geometry of the steel structure itself. Furthermore, the integrated cloud had over 40 million points, which reflects the actual state as fully as possible but also significantly hinders work in engineering software (a cloud for model assessment and comparison should be smaller than one million points). Simple data reduction, however, also significantly degrades the important elements of the structure itself, and such a solution was therefore not considered. In order to extract the structurally significant building elements, a new adaptive algorithm for the extraction of edges from a random point cloud was developed, tested and adopted for the whole process.
The developed adaptive algorithm was based on previously published studies, but was significantly modified. It was developed to extract the geometric structure of this particular steel building, which has a rather complicated shape; however, this does not preclude its general application to other purposes. The method is automatic and does not require any additional parameters. The applied adaptive thresholding technique enables the algorithm to operate without a user-specified threshold value, greatly facilitating the structural extraction process. The developed algorithm correctly detects the structures of building elements based on the detection of their edges. The object edges were correctly extracted from the integrated cloud for a minimum point-to-point distance of 1–3 mm. Further reduction of the data, to distances between cloud points of 7 mm and above, results in the edges of thin horizontal elements no longer being found and a visible loss of data.
In contrast to the studies quoted herein, the algorithm was developed and tested on actual measurement data, which additionally increases the value of the presented solution. It proves that the adaptive part of the algorithm operates correctly on real data that, in practice, are burdened with irregular noise, processing errors and imperfect shapes. The presented algorithm works for any kind of point cloud; as stated above, the point clouds were integrated for the completeness of the data.
One more feature of the developed method was discovered in the course of the study, which, we feel, will be of major importance in the future. In the case of very dense point clouds (a dozen or so points per mm²), the algorithm detects even the smallest edges and surface irregularities. This unexpected property could be of great advantage when laser scanning is aimed at the detection of microcracking in buildings or other structures.
In order to enable readers to conduct their own studies and apply the developed algorithm in their own work, we have made the Matlab source code and the developed script available.

Author Contributions

Conceptualization, P.B.; methodology, P.B.; software, P.B. and A.Z.; validation, P.B. and A.Z.; formal analysis, P.B. and A.Z.; investigation, P.B.; resources, P.B.; data curation, P.B. and A.Z.; writing—original draft preparation, P.B.; writing—review and editing, P.B. and A.Z.; visualization, P.B. and A.Z.; supervision, P.B.; project administration, P.B.; funding acquisition, P.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The sample data and Matlab code for the method presented in this study are openly available in the MOST Wiedzy repository (https://mostwiedzy.pl/en/) at doi:10.34808/szar-a523.

Acknowledgments

The authors would like to acknowledge Piotr Lezynski for sharing the historical photographs of the Palm House from the private collection of Krzysztof Kaminski.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wójcik, A.; Klapa, P.; Mitka, B.; Piech, I. The use of TLS and UAV methods for measurement of the repose angle of granular materials in terrain conditions. Measurement 2019, 146, 780–791. [Google Scholar] [CrossRef]
  2. Martínez-Carricondo, P.; Agüera-Vega, F.; Carvajal-Ramírez, F. Use of UAV-Photogrammetry for Quasi-Vertical Wall Surveying. Remote. Sens. 2020, 12, 2221. [Google Scholar] [CrossRef]
  3. Gruszczyński, W.; Matwij, W.; Ćwiąkała, P. Comparison of low-altitude UAV photogrammetry with terrestrial laser scanning as data-source methods for terrain covered in low vegetation. ISPRS J. Photogramm. Remote. Sens. 2017, 126, 168–179. [Google Scholar] [CrossRef]
  4. Kałuża, T.; Sojka, M.; Strzeliński, P.; Wróżyński, R. Application of Terrestrial Laser Scanning to Tree Trunk Bark Structure Characteristics Evaluation and Analysis of Their Effect on the Flow Resistance Coefficient. Water 2018, 10, 753. [Google Scholar] [CrossRef] [Green Version]
  5. Shen, Y.; Wang, J.; Lindenbergh, R.; Hofland, B.; Ferreira, V.G. Range Image Technique for Change Analysis of Rock Slopes Using Dense Point Cloud Data. Remote. Sens. 2018, 10, 1792. [Google Scholar] [CrossRef] [Green Version]
  6. Xu, H.; Li, H.; Yang, X.; Qi, S.; Zhou, J. Integration of Terrestrial Laser Scanning and NURBS Modeling for the Deformation Monitoring of an Earth-Rock Dam. Sensors 2018, 19, 22. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Gawronek, P.; Makuch, M. TLS Measurement during Static Load Testing of a Railway Bridge. ISPRS Int. J. Geo-Inf. 2019, 8, 44. [Google Scholar] [CrossRef] [Green Version]
  8. Ham, N.; Lee, S.-H. Empirical Study on Structural Safety Diagnosis of Large-Scale Civil Infrastructure Using Laser Scanning and BIM. Sustainability 2018, 10, 4024. [Google Scholar] [CrossRef] [Green Version]
  9. Suchocki, C.; Błaszczak-Bąk, W. Down-Sampling of Point Clouds for the Technical Diagnostics of Buildings and Structures. Geosciences 2019, 9, 70. [Google Scholar] [CrossRef] [Green Version]
  10. Ziolkowski, P.; Szulwic, J.; Miskiewicz, M. Deformation Analysis of a Composite Bridge during Proof Loading Using Point Cloud Processing. Sensors 2018, 18, 4332. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  11. Wang, Q.; Guo, J.; Kim, M.-K. An Application Oriented Scan-to-BIM Framework. Remote. Sens. 2019, 11, 365. [Google Scholar] [CrossRef] [Green Version]
  12. Cao, Z.; Chen, D.; Shi, Y.; Zhang, Z.; Jin, F.; Yun, T.; Xu, S.; Kang, Z.; Zhang, L. A Flexible Architecture for Extracting Metro Tunnel Cross Sections from Terrestrial Laser Scanning Point Clouds. Remote. Sens. 2019, 11, 297. [Google Scholar] [CrossRef] [Green Version]
  13. Burdziakowski, P.; Tysiac, P. Combined Close Range Photogrammetry and Terrestrial Laser Scanning for Ship Hull Modelling. Geosciences 2019, 9, 242. [Google Scholar] [CrossRef] [Green Version]
  14. Sarro, R.; Riquelme, A.; García-Davalillo, J.C.; Mateos, R.M.; Tomás, R.; Pastor, J.L.; Cano, M.; Herrera, G. Rockfall Simulation Based on UAV Photogrammetry Data Obtained during an Emergency Declaration: Application at a Cultural Heritage Site. Remote. Sens. 2018, 10, 1923. [Google Scholar] [CrossRef] [Green Version]
  15. Ossowski, R.; Przyborski, M.; Tysiac, P. Stability Assessment of Coastal Cliffs Incorporating Laser Scanning Technology and a Numerical Analysis. Remote. Sens. 2019, 11, 1951. [Google Scholar] [CrossRef] [Green Version]
  16. Tysiac, P. Bringing Bathymetry LiDAR to Coastal Zone Assessment: A Case Study in the Southern Baltic. Remote. Sens. 2020, 12, 3740. [Google Scholar] [CrossRef]
  17. Mazzanti, P.; Schilirò, L.; Martino, S.; Antonielli, B.; Brizi, E.; Brunetti, A.; Margottini, C.; Mugnozza, G.S. The Contribution of Terrestrial Laser Scanning to the Analysis of Cliff Slope Stability in Sugano (Central Italy). Remote. Sens. 2018, 10, 1475. [Google Scholar] [CrossRef] [Green Version]
  18. Paleček, V.; Kubíček, P. Assessment of Accuracy in the Identification of Rock Formations from Aerial and Terrestrial Laser-Scanning Data. ISPRS Int. J. Geo-Inf. 2018, 7, 142. [Google Scholar] [CrossRef] [Green Version]
  19. Puliti, S.; Solberg, S.; Granhus, A. Use of UAV Photogrammetric Data for Estimation of Biophysical Properties in Forest Stands Under Regeneration. Remote. Sens. 2019, 11, 233. [Google Scholar] [CrossRef] [Green Version]
  20. Brede, B.; Lau, A.; Bartholomeus, H.M.; Kooistra, L. Comparing RIEGL RiCOPTER UAV LiDAR Derived Canopy Height and DBH with Terrestrial LiDAR. Sensors 2017, 17, 2371. [Google Scholar] [CrossRef]
  21. Surový, P.; Yoshimoto, A.; Panagiotidis, D. Accuracy of Reconstruction of the Tree Stem Surface Using Terrestrial Close-Range Photogrammetry. Remote. Sens. 2016, 8, 123. [Google Scholar] [CrossRef] [Green Version]
  22. Tompalski, P.; Coops, N.C.; Marshall, P.L.; White, J.C.; Wulder, M.A.; Bailey, T. Combining Multi-Date Airborne Laser Scanning and Digital Aerial Photogrammetric Data for Forest Growth and Yield Modelling. Remote. Sens. 2018, 10, 347. [Google Scholar] [CrossRef] [Green Version]
  23. Rusnák, M.; Sládek, J.; Kidová, A.; Lehotský, M. Template for high-resolution river landscape mapping using UAV technology. Measurement 2018, 115, 139–151. [Google Scholar] [CrossRef]
  24. Agüera-Vega, F.; Carvajal-Ramírez, F.; Martínez-Carricondo, P. Assessment of photogrammetric mapping accuracy based on variation ground control points number using unmanned aerial vehicle. Meas. J. Int. Meas. Confed. 2017, 98, 221–227. [Google Scholar] [CrossRef]
  25. Burdziakowski, P.; Specht, C.; Dabrowski, P.S.; Specht, M.; Lewicka, O.; Makar, A. Using UAV Photogrammetry to Analyse Changes in the Coastal Zone Based on the Sopot Tombolo (Salient) Measurement Project. Sensors 2020, 20, 4000. [Google Scholar] [CrossRef]
  26. Long, N.; Millescamps, B.; Guillot, B.; Pouget, F.; Bertin, X. Monitoring the Topography of a Dynamic Tidal Inlet Using UAV Imagery. Remote. Sens. 2016, 8, 387. [Google Scholar] [CrossRef] [Green Version]
  27. Saponaro, M.; Pratola, L.; Capolupo, A.; Saponieri, A.; Damiani, L.; Fratino, U.; Tarantino, E. Data fusion of terrestrial laser scanner and remotely piloted aircraft systems points clouds for monitoring the coastal protection systems. Aquat. Ecosyst. Health Manag. 2020, 23, 1–7. [Google Scholar] [CrossRef]
  28. Salach, A.; Bakuła, K.; Pilarska, M.; Ostrowski, W.; Górski, K.; Kurczyński, Z. Accuracy Assessment of Point Clouds from LiDAR and Dense Image Matching Acquired Using the UAV Platform for DTM Creation. ISPRS Int. J. Geo-Inf. 2018, 7, 342. [Google Scholar] [CrossRef] [Green Version]
  29. Wierzbicki, D.; Nienaltowski, M. Accuracy Analysis of a 3D Model of Excavation, Created from Images Acquired with an Action Camera from Low Altitudes. ISPRS Int. J. Geo-Inf. 2019, 8, 83. [Google Scholar] [CrossRef] [Green Version]
  30. Salehi, S.; Lorenz, S.; Vest Sørensen, E.; Zimmermann, R.; Fensholt, R.; Heincke, B.H.; Kirsch, M.; Gloaguen, R. Integration of Vessel-Based Hyperspectral Scanning and 3D-Photogrammetry for Mobile Mapping of Steep Coastal Cliffs in the Arctic. Remote. Sens. 2018, 10, 175. [Google Scholar] [CrossRef] [Green Version]
  31. Bujakowski, F.; Falkowski, T. Hydrogeological Analysis Supported by Remote Sensing Methods as A Tool for Assessing the Safety of Embankments (Case Study from Vistula River Valley, Poland). Water 2019, 11, 266. [Google Scholar] [CrossRef] [Green Version]
  32. Napolitano, R.; Hess, M.; Glisic, B. Integrating Non-Destructive Testing, Laser Scanning, and Numerical Modeling for Damage Assessment: The Room of the Elements. Heritage 2019, 2, 151–168. [Google Scholar] [CrossRef] [Green Version]
  33. De Regis, M.; Consolino, L.; Bartalini, S.; De Natale, P. Waveguided Approach for Difference Frequency Generation of Broadly-Tunable Continuous-Wave Terahertz Radiation. Appl. Sci. 2018, 8, 2374. [Google Scholar] [CrossRef] [Green Version]
  34. Markiewicz, J.S.; Podlasiak, P.; Zawieska, D. A New Approach to the Generation of Orthoimages of Cultural Heritage Objects—Integrating TLS and Image Data. Remote. Sens. 2015, 7, 16963–16985. [Google Scholar] [CrossRef] [Green Version]
  35. Corso, J.; Roca, J.; Buill, F. Geometric Analysis on Stone Façades with Terrestrial Laser Scanner Technology. Geosciences 2017, 7, 103. [Google Scholar] [CrossRef] [Green Version]
  36. Jarząbek-Rychard, M.; Maas, H.-G. Geometric Refinement of ALS-Data Derived Building Models Using Monoscopic Aerial Images. Remote. Sens. 2017, 9, 282. [Google Scholar] [CrossRef] [Green Version]
  37. Serna, A.; Marcotegui, B.; Hernández, J. Segmentation of Façades from Urban 3D Point Clouds Using Geometrical and Morphological Attribute-Based Operators. ISPRS Int. J. Geo-Inf. 2016, 5, 6. [Google Scholar] [CrossRef]
  38. Xie, L.; Zhu, Q.; Hu, H.; Wu, B.; Li, Y.; Zhang, Y.; Zhong, R. Hierarchical Regularization of Building Boundaries in Noisy Aerial Laser Scanning and Photogrammetric Point Clouds. Remote. Sens. 2018, 10, 1996. [Google Scholar] [CrossRef] [Green Version]
  39. Laefer, D.F.; Truong-Hong, L.; Carr, H.; Singh, M. Crack detection limits in unit based masonry with terrestrial laser scanning. NDT E Int. 2014, 62, 66–76. [Google Scholar] [CrossRef] [Green Version]
  40. Korumaz, M.; Betti, M.; Conti, A.; Tucci, G.; Bartoli, G.; Bonora, V.; Korumaz, A.G.; Fiorini, L. An integrated Terrestrial Laser Scanner (TLS), Deviation Analysis (DA) and Finite Element (FE) approach for health assessment of historical structures. A minaret case study. Eng. Struct. 2017, 153, 224–238. [Google Scholar] [CrossRef]
  41. Miśkiewicz, M.; Pyrzowski, Ł.; Sobczyk, B. Short and Long Term Measurements in Assessment of FRP Composite Footbridge Behavior. Materials 2020, 13, 525. [Google Scholar] [CrossRef] [Green Version]
  42. Miśkiewicz, M.; Sobczyk, B.; Tysiac, P. Non-Destructive Testing of the Longest Span Soil-Steel Bridge in Europe—Field Measurements and FEM Calculations. Materials 2020, 13, 3652. [Google Scholar] [CrossRef] [PubMed]
  43. Gong, M.; Zhang, Z.; Zeng, D. A New Simplification Algorithm for Scattered Point Clouds with Feature Preservation. Symmetry 2021, 13, 399. [Google Scholar] [CrossRef]
  44. Han, H.; Han, X.; Sun, F.; Huang, C. Point cloud simplification with preserved edge based on normal vector. Optik 2015, 126, 2157–2162. [Google Scholar] [CrossRef]
  45. Zhang, K.; Qiao, S.; Wang, X.; Yang, Y.; Zhang, Y. Feature-Preserved Point Cloud Simplification Based on Natural Quadric Shape Models. Appl. Sci. 2019, 9, 2130. [Google Scholar] [CrossRef] [Green Version]
  46. Song, H.; Feng, H.-Y. A progressive point cloud simplification algorithm with preserved sharp edge data. Int. J. Adv. Manuf. Technol. 2009, 45, 583–592. [Google Scholar] [CrossRef]
  47. Fleishman, S.; Cohen-Or, D.; Silva, C.T. Robust moving least-squares fitting with sharp features. ACM Trans. Graph. 2005, 24, 544–552. [Google Scholar] [CrossRef]
  48. Daniels, J., II; Ochotta, T.; Ha, L.K.; Silva, C.T. Spline-based feature curves from point-sampled geometry. Vis. Comput. 2008, 24, 449–462. [Google Scholar] [CrossRef] [Green Version]
  49. Öztireli, A.C.; Guennebaud, G.; Gross, M. Feature Preserving Point Set Surfaces based on Non-Linear Kernel Regression. Comput. Graph. Forum 2009, 28, 493–501. [Google Scholar] [CrossRef] [Green Version]
  50. Xia, S.; Wang, R. A Fast Edge Extraction Method for Mobile Lidar Point Clouds. IEEE Geosci. Remote. Sens. Lett. 2017, 14, 1288–1292. [Google Scholar] [CrossRef]
  51. Demarsin, K.; Vanderstraeten, D.; Volodine, T.; Roose, D. Detection of closed sharp edges in point clouds using normal estimation and graph theory. Comput. Des. 2007, 39, 276–283. [Google Scholar] [CrossRef]
  52. Xu, J.; Zhou, M.; Wu, Z.; Shui, W.; Ali, S. Robust surface segmentation and edge feature lines extraction from fractured fragments of relics. J. Comput. Des. Eng. 2015, 2, 79–87. [Google Scholar] [CrossRef] [Green Version]
  53. Lin, Y.; Wang, C.; Cheng, J.; Chen, B.; Jia, F.; Chen, Z.; Li, J. Line segment extraction for large scale unorganized point clouds. ISPRS J. Photogramm. Remote. Sens. 2015, 102, 172–183. [Google Scholar] [CrossRef]
  54. Weber, C.; Hahmann, S.; Hagen, H. Sharp feature detection in point clouds. In Proceedings of the 2010 Shape Modeling International Conference, Aix-en-Provence, France, 21–23 June 2010; pp. 175–186. [Google Scholar]
  55. Weber, C.; Hahmann, S.; Hagen, H. Methods for Feature Detection in Point Clouds. In Proceedings of the OpenAccess Series in Informatics, Kaiserslautern, Germany, 10–11 June 2011. [Google Scholar]
  56. Gumhold, S.; Macleod, R.; Wang, X. Feature Extraction from Point Clouds. In Proceedings of the 10th International Meshing Roundtable, Newport Beach, CA, USA, 7–11 October 2001. [Google Scholar]
  57. Feng, C.; Taguchi, Y.; Kamat, V.R. Fast plane extraction in organized point clouds using agglomerative hierarchical clustering. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 6218–6225. [Google Scholar]
  58. Raina, P.; Mudur, S.; Popa, T. Sharpness fields in point clouds using deep learning. Comput. Graph. 2019, 78, 37–53. [Google Scholar] [CrossRef]
  59. Raina, P.; Mudur, S.; Popa, T. MLS2: Sharpness Field Extraction Using CNN for Surface Reconstruction. In Proceedings of Graphics Interface 2018, Toronto, ON, Canada, 9–11 May 2018. [Google Scholar]
  60. Wang, Y.; Du, Z.; Gao, Y.; Li, M.; Dong, W. An Approach to Edge Extraction Based on 3D Point Cloud for Robotic Chamfering. J. Phys. Conf. Ser. 2019, 1267. [Google Scholar] [CrossRef] [Green Version]
  61. Daniels, J.I.; Ha, L.K.; Ochotta, T.; Silva, C.T. Robust Smooth Feature Extraction from Point Clouds. In Proceedings of the IEEE International Conference on Shape Modeling and Applications 2007 (SMI ’07), Minneapolis, MN, USA, 13–15 June 2007; Institute of Electrical and Electronics Engineers: Piscataway, NJ, USA; pp. 123–136. [Google Scholar]
  62. Tran, T.-T.; Cao, V.-T.; Nguyen, V.T.; Ali, S.; Laurendeau, D. Automatic Method for Sharp Feature Extraction from 3D Data of Man-made Objects. In Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Lisbon, Portugal, 5–8 January 2014; SciTePress—Science and Technology Publications: Setúbal, Portugal, 2014; pp. 112–119. [Google Scholar]
  63. Ahmed, S.M.; Tan, Y.Z.; Chew, C.M.; Al Mamun, A.; Wong, F.S. Edge and Corner Detection for Unorganized 3D Point Clouds with Application to Robotic Welding. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2018; pp. 7350–7355. [Google Scholar]
  64. Xiao, R.; Xu, Y.; Hou, Z.; Chen, C.; Chen, S. An adaptive feature extraction algorithm for multiple typical seam tracking based on vision sensor in robotic arc welding. Sens. Actuators A Phys. 2019, 297, 111533. [Google Scholar] [CrossRef]
  65. Zhao, W.; Zhao, C.; Wen, Y.; Xiao, S. An Adaptive Corner Extraction Method of Point Cloud for Machine Vision Measuring System. In Proceedings of the 2010 International Conference on Machine Vision and Human-machine Interface, Kaifeng, China, 24–25 April 2010; Institute of Electrical and Electronics Engineers: Piscataway, NJ, USA, 2010; pp. 80–83. [Google Scholar]
  66. Dyrekcja Rozbudowy Miasta Gdanska. Rewitalizacja i Przebudowa Kompleksu Budynków Palmiarni. Available online: https://www.drmg.gdansk.pl/index.php/bup-realizowane/288-rewitalizacja-i-przebudowa-kompleksu-budynkow-palmiarni-w-ogrodzie-botanicznym-w-parku-opackim-im-adama-mickiewicza-w-gdansku-oliwie-etap-i (accessed on 15 October 2020).
  67. Marchel, Ł.; Specht, C.; Specht, M. Testing the Accuracy of the Modified ICP Algorithm with Multimodal Weighting Factors. Energies 2020, 13, 5939. [Google Scholar] [CrossRef]
  68. Chen, S.; Truong-Hong, L.C.; O’Keeffe, E.; Laefer, D.F.; Mangina, E. Outlier Detection of Point Clouds Generating from Low-Cost UAVs for Bridge Inspection. In Proceedings of the Life-Cycle Analysis and Assessment in Civil Engineering, Ghent, Belgium, 28–31 October 2018; Frangopol, D.M., Caspeele, R., Taerwe, L., Eds.; CRC Press/Balkema: Boca Raton, FL, USA, 2019; pp. 1969–1975. [Google Scholar]
  69. Szabó, Z.; Tóth, C.A.; Holb, I.; Szabó, S. Aerial Laser Scanning Data as a Source of Terrain Modeling in a Fluvial Environment: Biasing Factors of Terrain Height Accuracy. Sensors 2020, 20, 2063. [Google Scholar] [CrossRef] [Green Version]
  70. Zhang, W.; Qi, J.; Wan, P.; Wang, H.; Xie, D.; Wang, X.; Yan, G. An Easy-to-Use Airborne LiDAR Data Filtering Method Based on Cloth Simulation. Remote Sens. 2016, 8, 501. [Google Scholar] [CrossRef]
  71. Rusu, R.B.; Marton, Z.C.; Blodow, N.; Dolha, M.; Beetz, M. Towards 3D Point cloud based object maps for household environments. Robot. Auton. Syst. 2008, 56, 927–941. [Google Scholar] [CrossRef]
  72. Balta, H.; Velagic, J.; Bosschaerts, W.; De Cubber, G.; Siciliano, B. Fast Statistical Outlier Removal Based Method for Large 3D Point Clouds of Outdoor Environments. IFAC-PapersOnLine 2018, 51, 348–353. [Google Scholar] [CrossRef]
  73. Aiger, D.; Mitra, N.J.; Cohen-Or, D. 4-points congruent sets for robust pairwise surface registration. ACM Trans. Graph. 2008, 27, 1–10. [Google Scholar] [CrossRef] [Green Version]
  74. Besl, P.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
  75. Prochazkova, J.; Martisek, D. Notes on Iterative Closest Point Algorithm. In Proceedings of the 17th Conference on Applied Mathematics (Aplimat 2018), Bratislava, Slovakia, 6–8 February 2018; Slovak University of Technology, Publishing House SPEKTRUM STU: Bratislava, Slovakia, 2018; p. 876. [Google Scholar]
  76. Chen, Y.; Medioni, G. Object modeling by registration of multiple range images. In Proceedings of the 1991 IEEE International Conference on Robotics and Automation, Sacramento, CA, USA, 9–11 April 1991; pp. 2724–2729. [Google Scholar]
  77. He, Y.; Liang, B.; Yang, J.; Li, S.; He, J. An Iterative Closest Points Algorithm for Registration of 3D Laser Scanner Point Clouds with Geometric Features. Sensors 2017, 17, 1862. [Google Scholar] [CrossRef] [Green Version]
  78. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef] [Green Version]
  79. Bradley, D.; Roth, G. Adaptive Thresholding using the Integral Image. J. Graph. Tools 2007, 12, 13–21. [Google Scholar] [CrossRef]
  80. Hoppe, H.; Derose, T.; Duchamp, T.; McDonald, J.; Stuetzle, W. Surface reconstruction from unorganized points. ACM SIGGRAPH Comput. Graph. 1992, 26, 71–78. [Google Scholar] [CrossRef]
  81. Friedman, J.H.; Bentley, J.L.; Finkel, R.A. An Algorithm for Finding Best Matches in Logarithmic Expected Time. ACM Trans. Math. Softw. 1977, 3, 209–226. [Google Scholar] [CrossRef]
  82. Viola, P.; Jones, M.J. Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), Kauai, HI, USA, 8–14 December 2001; Volume 1, p. 3. [Google Scholar]
  83. James, M.; Robson, S.; D’Oleire-Oltmanns, S.; Niethammer, U. Optimising UAV topographic surveys processed with structure-from-motion: Ground control quality, quantity and bundle adjustment. Geomorphology 2017, 280, 51–66. [Google Scholar] [CrossRef] [Green Version]
  84. James, M.R.; Robson, S.; Smith, M.W. 3-D uncertainty-based topographic change detection with structure-from-motion photogrammetry: Precision maps for ground control and directly georeferenced surveys. Earth Surf. Process. Landforms 2017, 42, 1769–1788. [Google Scholar] [CrossRef]
Figure 1. (a) Oliwa Park Palm House, 1972–1978 (photo credit: Andrzej Zborski). (b) Site location (WGS-84).
Figure 2. (a) Palm house prior to reconstruction, 2017; (b) during reconstruction, 2018, the period of measurement and construction process inspection; (c) the glazed and commissioned building [66] (reproduced with permission from Dyrekcja Rozbudowy Miasta Gdanska).
Figure 3. Data processing algorithm (PC: point cloud; *.JPG: JPG file format; *.LAS: LAS file format).
Figure 4. TLS and UAV data acquisition diagram for high structures.
Figure 5. Developed primary point clouds for UAV (a) and TLS (b).
Figure 6. Point clouds after noise filtration for UAV (a) and TLS (b).
Figure 7. Point clouds after removing the ground and accompanying objects for UAV (a) and TLS (b).
Figure 8. Post-filtration point clouds: UAV, (a) top view and (c) side view; TLS, (b) top view and (d) side view.
Figure 9. Integrated point clouds from the two sources: (a) UAV cloud points visualized in the RGB palette and TLS cloud points in grayscale according to reflection intensity; (b) blue: UAV cloud; red: TLS cloud.
Figure 10. Pd values for the different tested point clouds: (a) sample object 1, (b) sample object 2, (c) sample object 3.
Figure 11. Cross-sections of the integrated point clouds: blue, the UAV cloud; red, the TLS point cloud (values in meters).
Figure 12. The M3C2 distance for integrated point clouds: (a) the distance projected on TLS points; (b) the distance projected on UAV points.
Figure 13. The M3C2 distance for the integrated point clouds: (a) the distance projected onto TLS points; (b) the distance projected onto UAV points.
Figure 14. Results for a set of reduced data: (a) no reduction, (b) 0.5 mm, (c) 1 mm, (d) 3 mm, (e) 5 mm, (f) 7 mm (magenta: source points; green: edges detected).
Figure 15. Structure extraction results by the source of the point origin: (a) no reduction, (b) 0.5 mm, (c) 1 mm, (d) 3 mm, (e) 5 mm, (f) 7 mm (green: TLS points; blue: UAV points; red: edges detected).
Figure 16. Structure detection results: (a) detected edges (results), (b) source cloud, (c) composite view (source point cloud in magenta, edges (results) in green).
Table 1. Accuracy-related data of the developed photogrammetric model.
Series | Distance to Object | Ground Resolution | Reprojection Error
1 | 1–15 m | 11 mm/pix | 0.71 pix
2 | 1–15 m | 2.4 mm/pix | 0.77 pix

Camera locations and error estimates (mean):

Series | X Error (m) | Y Error (m) | Z Error (m)
1 | 0.00127 | 0.00137 | 0.00128
2 | 0.00082 | 0.00084 | 0.00092
Table 2. TLS technical data—Leica P30.
Technical Data | Leica P30
Measurement speed | Up to 1,000,000 points per second
Range accuracy | 1.2 mm + 10 ppm over the entire range
Angular accuracy | 8″ horizontally; 8″ vertically
3D position accuracy | 3 mm at 50 m; 6 mm at 100 m
Laser wavelength | 1550 nm (invisible)/658 nm (visible)
Distance noise | 0.4 mm RMS at 10 m; 0.5 mm RMS at 50 m
Horizontal field of view | 360°
Vertical field of view | 270°
Table 3. Station transformation parameters during stocktaking work involving a steel engineering structure.
Name | PX | PY | PZ | Roll | Pitch | Yaw | Scale | PKT
Stan1 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.0 | 188
Stan2 | 0.000 | 0.000 | 0.000 | 0.000 | 0.011 | 0.000 | 0.0 | 293
Stan3 | 0.001 | 0.000 | 0.003 | 0.007 | −0.015 | 0.005 | 0.0 | 364
Stan4 | 0.001 | 0.001 | 0.004 | 0.011 | −0.015 | 0.005 | 0.0 | 428
Stan5 | 0.001 | 0.001 | 0.007 | 0.002 | 0.024 | 0.005 | 0.0 | 330
Stan6 | 0.000 | 0.001 | 0.008 | 0.017 | −0.003 | 0.001 | 0.0 | 359
Stan7 | −0.001 | 0.000 | 0.011 | 0.015 | −0.021 | 0.002 | 0.0 | 306
Stan8 | −0.002 | 0.000 | 0.008 | 0.025 | −0.009 | 0.009 | 0.0 | 238
Stan9 | −0.001 | −0.001 | 0.007 | 0.014 | 0.017 | 0.006 | 0.0 | 304
Stan10 | −0.001 | −0.001 | 0.003 | 0.005 | 0.012 | 0.003 | 0.0 | 228
Stan11 | −0.001 | −0.001 | 0.005 | −0.003 | 0.005 | 0.002 | 0.0 | 245
Stan12 | −0.001 | −0.002 | 0.002 | −0.006 | −0.018 | −0.011 | 0.0 | 142
Stan13 | 0.000 | −0.002 | 0.003 | 0.001 | 0.003 | 0.005 | 0.0 | 138
Stan14 | −0.001 | −0.001 | 0.006 | −0.009 | −0.008 | 0.004 | 0.0 | 311
Stan15 | −0.004 | −0.001 | 0.004 | −0.015 | −0.003 | −0.004 | 0.0 | 32
Stan16 | −0.009 | 0.000 | 0.009 | 0.001 | −0.024 | 0.010 | 0.0 | 536
Stan17 | −0.010 | 0.000 | 0.009 | −0.002 | −0.025 | 0.014 | 0.0 | 539
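The per-station parameters in Table 3 describe a rigid-body transform: a translation (PX, PY, PZ) and three rotations (Roll, Pitch, Yaw). As a minimal sketch of how such parameters are applied to a station's point cloud, assuming metres for the translations, degrees for the rotations, and a Z–Y–X application order (neither the units nor the rotation convention are stated in the table, and the function name is ours):

```python
import numpy as np

def apply_station_transform(points, px, py, pz, roll, pitch, yaw):
    """Rigid-body transform of an N x 3 point array: rotate by
    roll/pitch/yaw (degrees, about the X/Y/Z axes, applied in
    Z-Y-X order) and translate by (px, py, pz)."""
    r, p, y = np.radians([roll, pitch, yaw])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(r), -np.sin(r)],
                   [0, np.sin(r),  np.cos(r)]])
    Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0],
                   [np.sin(y),  np.cos(y), 0],
                   [0, 0, 1]])
    R = Rz @ Ry @ Rx                     # combined rotation matrix
    return points @ R.T + np.array([px, py, pz])
```

Under this convention, Stan1's all-zero translations and rotations leave its cloud unchanged, and the non-zero rows apply only millimetre-level shifts and small rotations, consistent with fine registration corrections.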
Table 4. Number of points in the individual clouds, after each filtration stage.
Filtration Phase | UAV Point Cloud | TLS Point Cloud
Initial | 182,505,086 | 103,680,397
Noise filter | 92,214,210 | 60,791,121
CSF | 81,005,411 | 37,192,129
Manual cleaning | 69,029,458 | 24,033,077
Reduction | 24,160,311 | 24,033,077
SOR | 18,806,444 | 23,875,659
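The final filtration phase in Table 4 is statistical outlier removal (SOR) in the sense of Rusu et al. [71]: a point is discarded when its mean distance to its k nearest neighbours exceeds the global mean by a chosen number of standard deviations. A minimal sketch in Python (NumPy + SciPy), with k and the standard-deviation multiplier as assumed illustrative parameters, not the settings used in the paper:

```python
import numpy as np
from scipy.spatial import cKDTree

def sor_filter(points, k=8, std_ratio=1.0):
    """Statistical outlier removal: keep points whose mean distance
    to their k nearest neighbours is within mean + std_ratio * std
    of the cloud-wide distribution of such distances."""
    tree = cKDTree(points)
    # query k+1 neighbours: the first hit is the point itself (distance 0)
    dists, _ = tree.query(points, k=k + 1)
    mean_d = dists[:, 1:].mean(axis=1)   # mean distance to the k neighbours
    threshold = mean_d.mean() + std_ratio * mean_d.std()
    return points[mean_d <= threshold]
```

Applied to a dense cluster with one stray point far away, the stray point's neighbour distances dominate the distribution and it is removed, while the cluster survives largely intact, mirroring the modest point-count reduction seen in the SOR row of Table 4.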