On the Use of the OptD Method for Building Diagnostics

Terrestrial laser scanner (TLS) measurements can be used to assess the technical condition of buildings and structures; in particular, high-resolution TLS measurements should be taken in order to detect defects in building walls. This consequently results in the creation of a huge amount of data in a very short time. Although high-resolution measurements are typically needed only in certain areas of interest, e.g., to detect cracks, reducing redundant information in regions of low interest is of fundamental importance in order to enable computationally efficient and effective analysis of the dataset. In this work, data reduction is performed using the Optimum Dataset (OptD) method, which allows the amount of data to be significantly reduced while preserving the geometrical information of the region of interest. As a result, more points are retained on areas corresponding to cracks and cavities than on flat and homogeneous surfaces. This approach allows for a thorough analysis of surface discontinuities in building walls. In this investigation, the TLS datasets were acquired by means of the time-of-flight scanners Riegl VZ-400i and Leica ScanStation C10. The results obtained by reducing the TLS datasets by means of OptD show that this method is a viable solution for data reduction in building and structure diagnostics, thus enabling the implementation of computationally more efficient diagnostic strategies.

Cultural heritage sites, which are spread all around the world, should be protected, monitored and renewed. The use of a proper remote sensing documentation technique is of fundamental importance in order to obtain 3D models of cultural heritage sites with high accuracy and detail while reducing the risk of damage. To this aim, TLS technology can be conveniently employed. Indeed, it provides the ability to collect data at a rate of more than one million points per second with millimetre accuracy. Furthermore, TLS can register the radiometric information of the returned laser beam signal, the so-called intensity. The intensity value can be used, for instance, for defect detection on wall surfaces, e.g., cracks and cavities [16,17], and for assessing humidity saturation and moisture movement in buildings [18,19]. As demonstrated by these examples, intensity information can be a useful tool to assess the technical state, as well as the need for restoration, of historical buildings [20].
In recent years, TLS has gained popularity in several applications related to cultural heritage conservation. Often, other measuring techniques in cultural heritage documentation such as ground penetrating radar (GPR), seismic tomography and chemical analyses of plaster can be used to supplement TLS measurements [21]. Typical symptoms of the poor state of conservation of historical buildings are cracks, cavities and various discontinuities on the building surfaces [22]. Therefore, the collection of high-resolution point clouds on cavities and cracks is very important to monitor the conservation status of buildings. The ability to test the geometry of the building and simultaneously detect visible cracks and cavities is very useful during a building's technical inspection.
The need to detect even minor defects on wall surfaces imposes the acquisition of TLS measurements at very high resolutions. However, this often leads to very large datasets, which are consequently difficult to analyse efficiently. This motivates the use of automatic optimization methods for reducing the size of such datasets. Typically, data reduction on large datasets is performed using random subsampling methods, which can cause a partial loss of the information of interest. Although the subsampling strategy is simple and computationally extremely efficient, the consequent potential information loss may be unacceptable for accurate analysis and diagnostics. It should also be noted that many researchers have used other approaches to reduce large datasets, for instance, down-sampling of point clouds through mesh simplification [23,24], mathematical reduction of point clouds based on the surface curvature radius [25,26] and a planar-based adaptive method [27].
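As a point of reference, the random subsampling baseline discussed above can be sketched in a few lines (an illustrative Python sketch; the function name and the (N, 3) array layout are assumptions, not part of the original study):

```python
import numpy as np

def random_subsample(points: np.ndarray, fraction: float, seed: int = 0) -> np.ndarray:
    """Keep a random `fraction` of the rows of an (N, 3) point array.

    Simple and fast, but blind to geometry: points on cracks and cavities
    are discarded at exactly the same rate as points on flat wall areas.
    """
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(round(fraction * len(points))))
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[np.sort(idx)]

# Example: reduce a synthetic 1,000,000-point cloud to 2%
cloud = np.random.rand(1_000_000, 3)
reduced = random_subsample(cloud, 0.02)  # 20,000 points remain
```

Because the selection is uniform, a crack covering 1% of the surface keeps only 1% of the retained points, which is exactly the information loss the methods below try to avoid.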
Among the data reduction methods proposed in the above-mentioned literature, this work considers the use of the Optimum Dataset (OptD) reduction method, which reduces the number of points while carefully controlling the potential loss of useful information; i.e., the method is expected to strongly reduce the number of points on flat surfaces while retaining points on defective/damaged areas (cracks and cavities). The retained points on wall defects highlight their location. This makes it easier to identify defects of building walls, as well as to apply well-known non-destructive testing methods for further testing, such as the Schmidt hammer test and the ultrasonic pulse velocity test [28,29].
The OptD method was originally designed to reduce large datasets of light detection and ranging (LiDAR) measurements, such as in applications related to the generation of digital terrain models (DTMs) [30][31][32]. This paper investigates the potential of the OptD method to optimise point clouds for building and structure diagnostics, and presents the obtained results on scans of a historical building.

Theoretical Background of OptD Method
The theoretical principles of the algorithm's operation and the applications of the OptD method have been presented in [30,31]. The OptD method was modified for wall defect detection, and consists of iteratively processed stages. The OptD-single variant has been used in this study.
In step 1, the algorithm reads the input TLS dataset in *.txt format. Depending on the content of the input file, the user modifies the configuration file. In step 2, the OptD method asks the user to set a proper optimization criterion (f). Such a criterion can specify, for instance, a percentage of the total number of points in the original point cloud, leading to a data reduction of exactly the percentage value specified by f. This is the most commonly adopted optimization criterion, as the result of the optimization is the set of the most characteristic points in the dataset: the higher the degree of reduction, the more visible the changes on the object will be. Multiple optimization criteria can also be considered, if needed, in the OptD-multi case [30].
In the next step, step 3, the OptD method starts an automatic and iterative process of data examination and point selection until the desired goal is met. First, the data domain is partitioned into strips of width L on the horizontal plane (see Figure 1). The width L of the strip is important, as the degree of reduction depends (among other factors) on this parameter. The initial value of L depends on the density of points in the dataset and is taken as the average distance between points. The strips can be horizontal or vertical, depending on the defect characteristics. The width of the measurement strip is automatically calculated and adjusted in subsequent iterations (without the user's participation). Then, in step 4, point selection is performed separately on each of these strips by means of a cartographic generalization method [33], whose data reduction rate is strictly related to the current value of the tolerance parameter (t). Since the initial values of L and t automatically set by the OptD method usually do not allow the desired optimization objective to be reached, the above process is repeated, with the values of these parameters automatically reset, until the obtained results satisfy the chosen criterion. Since this working procedure automatically determines the proper values of L and t, the only human interaction required by the OptD method is the selection of the optimization criterion.
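The iterative procedure of steps 3 and 4 can be sketched as follows. This is an illustrative Python reconstruction, not the authors' Java implementation: the per-strip selection is simplified to a point-to-chord deviation test standing in for the cartographic generalization of [33], the strip width L is held fixed, and all names and the synthetic test wall are hypothetical.

```python
import numpy as np

def generalize_strip(strip, t):
    """Generalize one measurement strip: drop interior points whose (y, z)
    deviation from the chord between their neighbours is below tolerance t."""
    if len(strip) <= 2:
        return strip
    prev, cur, nxt = strip[:-2, 1:], strip[1:-1, 1:], strip[2:, 1:]
    chord, rel = nxt - prev, cur - prev
    dev = np.abs(chord[:, 0] * rel[:, 1] - chord[:, 1] * rel[:, 0])
    dev /= np.maximum(np.linalg.norm(chord, axis=1), 1e-12)
    keep = np.concatenate(([True], dev > t, [True]))  # strip endpoints always survive
    return strip[keep]

def optd_single(points, f, L, n_iter=40):
    """Bisect the tolerance t until about a fraction f of the points survives
    strip-wise generalization (strips of width L are cut along the x axis)."""
    strip_id = np.floor(points[:, 0] / L).astype(int)
    strips = []
    for s in np.unique(strip_id):
        strip = points[strip_id == s]
        strips.append(strip[np.argsort(strip[:, 1])])  # order along the strip
    lo, hi = 0.0, float(np.ptp(points[:, 2])) + 1e-9
    for _ in range(n_iter):
        t = 0.5 * (lo + hi)
        reduced = np.vstack([generalize_strip(s, t) for s in strips])
        if len(reduced) > f * len(points):
            lo = t  # too many points kept -> raise the tolerance
        else:
            hi = t  # too few points kept -> lower the tolerance
    return reduced

# Synthetic wall: flat surface with a 5 cm deep "crack" near y = 0.5
rng = np.random.default_rng(1)
x, y = rng.uniform(0.0, 1.0, (2, 20000))
z = np.where(np.abs(y - 0.5) < 0.01, -0.05, 0.0) + rng.normal(0.0, 0.0002, 20000)
cloud = np.column_stack([x, y, z])
reduced = optd_single(cloud, f=0.05, L=0.05)
```

Under these assumptions the number of kept points decreases monotonically as t grows, which is what makes the iterative adjustment of t converge to the requested percentage; points on the crack edges show large deviations and are retained preferentially.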
The rationale of the point selection principle of the OptD method is the idea of preserving more points at locations where larger variations of the measured variables (e.g., height, intensity of the laser beam) occur. Vice versa, fewer points are kept on smooth areas, e.g., where the object shape can be locally well approximated with a planar surface.
Consequently, the application of OptD produces point clouds with non-homogeneous densities; the degree of data reduction is highly dependent on the local regularity of the object shape (i.e., on the information contained in that area of the 3D model). It is also worth noticing that, differently from trivial subsampling methods, OptD checks the usefulness of each point in the model: the tolerance parameter is used to determine whether a point should be preserved or discarded. The reader is referred to [30,31] for a more detailed description of the OptD method.
The result of step 4 depends on the value of the tolerance parameter. The OptD processing ends in step 5, when the generalization method has been applied to all measurement strips and the saved dataset meets the optimization criterion set in step 2. The width of the measurement strip and the tolerance parameter determine the degree of reduction; therefore, these values are changed during the iterations until the output dataset meets the optimization criterion. Figure 2 summarizes the workflow of the OptD method, taking into account all the parameters that affect the reduction results.
In the scheme, the parameters that depend on the user and those that do not are indicated. In the iterative process, a dataset that meets the optimization criterion is selected. It should be mentioned that with TLS, either a single wall or the entire object is measured. To apply the OptD method, each wall must be processed separately in its local wall coordinate system. It is important that the wall with defects is in the correct coordinate system, so that the division into measurement strips is performed correctly.
The use of the OptD method has a positive impact on the following aspects:
• Geometry visibility. Improvement of the visibility and readability of certain shape details. After applying OptD, it is possible to better distinguish object shapes that might originally be hardly visible because of the presence of a large amount of data [34];
• Processing time. Dataset reduction enables a computationally more efficient execution of the time-consuming analysis of the acquired data [30].
In previous papers using the OptD method, the main reported advantage was processing speed, especially when a DTM is generated from a reduced point cloud.
In [30], the results showed that with the OptD method, the preparation of the data for DTM construction is less time-consuming. The time required for the execution of the OptD method can be considered negligible in the whole process of preparing the data for DTM construction. The work [35] has shown that the approach based on the OptD method is less time- and labour-consuming than the approach based on DTM generalization. This results from the fact that before the DTM is generalized, it must first be built from the original point cloud. If the OptD method is used, the dataset is quickly reduced and the DTM is generated on the basis of the optimal dataset. During DTM generalization, the time needed to reduce the dataset was about 1200 s, whereas reduction using the OptD method took up to 20 s. In both approaches, datasets can be reduced by up to 98%.
Furthermore, in the case of MLS data, for a dataset consisting of 20 million points the OptD method took about 72 s (for an optimization criterion of 50%) and 121 s (for an optimization criterion of 90%) [36].

Objects of Research and Used Equipment
In this work, two different time-of-flight terrestrial laser scanners, Riegl VZ-400i and Leica ScanStation C10, were used.
The Riegl VZ-400i TLS uses a narrow infrared laser beam. The laser pulse repetition rate ranges from 100 kHz to 1200 kHz. The maximum measurement range of this TLS is up to 800 m for a laser pulse repetition rate of 100 kHz. The scanner works with a maximum measurement rate of 500,000 points/s for a laser pulse repetition rate of 1200 kHz. The laser beam divergence is 0.35 mrad. The angle measurement resolution is better than 0.0007°, and the range accuracy at 100 m is 5 mm.
The Leica ScanStation C10 TLS uses a visible green laser beam (wavelength of 532 nm). The maximum and minimum measurement ranges are approximately 300 m and 0.1 m, respectively. The Leica ScanStation C10 has a scan speed of up to 50,000 points/s. The laser beam width at 50 m is 4.5 mm (full width at half maximum). The angular accuracy is 60 µrad, while the range and position measurement accuracies at 50 m are 4 mm and 6 mm, respectively.
The first case study considered in this work is an old tobacco factory in Cracow. The building is part of the Dolne Młyny complex, which is under the supervision of the conservator. The interior of the building has been restored, while the exterior is in poor technical condition. A part of a wall with damaged plaster and a concrete structural element with cracks (Figure 3) were used to carry out the tests. The measurements were conducted with the Riegl VZ-400i TLS at a distance of 10 m from the wall, with the laser pulse repetition rate set to 1200 kHz. The angular measurement resolution, both horizontal and vertical, was set on the scanner to 0.01°.

The second case study is the historic retaining wall strengthening the bluff located in Olsztyn (Figure 4). The TLS survey was conducted with a Leica ScanStation C10 from two stations in order to properly scan the entire deep cavity. The two point clouds were registered using special targets and merged in the Cyclone software.
The registration results are shown in Table 1.

Data Processing Using the OptD Method
The CloudCompare software was used to visualize the data. Detailed characteristics of the datasets used as case studies in this work are provided in Table 2. The TLS datasets were processed by means of the OptD method. The authors used their own software, written in the Java programming language (v.9), to reduce the datasets. The percentage of points to be retained after the data reduction was used as the optimization criterion in OptD. During processing, the Douglas-Peucker generalization method was used. For comparison, the OptD method was run with six different settings, i.e., with the following percentages of points to be retained: 20%, 10%, 5%, 2%, 1% and 0.5%. Table 3 reports the values of the processing parameters, namely L and t, automatically determined by OptD to satisfy the optimization requirements.
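For reference, the Douglas-Peucker generalization named above can be sketched as follows. This is an illustrative Python version for a single 2D profile, not the authors' Java software, and it omits the strip bookkeeping of OptD; the synthetic step profile is hypothetical.

```python
import numpy as np

def douglas_peucker(line: np.ndarray, t: float) -> np.ndarray:
    """Simplify a 2D polyline: keep a vertex only if it lies farther than
    the tolerance t from the chord spanning the current segment."""
    if len(line) <= 2:
        return line
    start, end = line[0], line[-1]
    chord = end - start
    norm = np.linalg.norm(chord)
    rel = line - start
    if norm == 0.0:
        dist = np.linalg.norm(rel, axis=1)
    else:
        # perpendicular distance of every vertex from the start-end chord
        dist = np.abs(chord[0] * rel[:, 1] - chord[1] * rel[:, 0]) / norm
    i = int(np.argmax(dist))
    if dist[i] <= t:
        return np.vstack([start, end])        # whole span is within tolerance
    left = douglas_peucker(line[: i + 1], t)  # recurse on both halves
    right = douglas_peucker(line[i:], t)
    return np.vstack([left[:-1], right])      # the split vertex appears once

# A step-like profile: flat wall with a 2 cm deep cavity between x = 0.4 and 0.6
x = np.linspace(0.0, 1.0, 101)
z = np.where((x > 0.4) & (x < 0.6), -0.02, 0.0)
profile = np.column_stack([x, z])
simplified = douglas_peucker(profile, t=0.001)
```

The tolerance t plays exactly the role described in the previous section: collinear points on the flat runs collapse onto their chord, while the cavity edges, which deviate from every chord by more than t, are retained.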

Table 3 presents L and t for the last iteration. For all test objects, the value of L in the last iteration was the same for all adopted optimization criteria: 0.001 m for test area 1 (wall with damaged plaster) and test area 2 (concrete element with cracks), and 0.005 m for test area 3 (damaged retaining brick wall). However, the value of t changed depending on the optimization criterion. The largest number of iterations was needed when processing test area 2 with the optimization criterion p = 20%, while the fewest were needed for test area 3 with p = 5%. The largest t value was obtained for test area 3 at p = 0.5%. This is due to the fact that this object was the most diverse in terms of geometry.
The data reduction results obtained in test areas 1, 2 and 3 are shown in Figures 5-7, respectively.
The main advantage of the OptD method is that it typically ensures a low data reduction rate on areas corresponding to defects (cavities, cracks or other surface discontinuities) and, vice versa, a high reduction rate on smooth/regular areas (without defects). This can clearly be seen by visual inspection of the presented examples (Figures 5-7). For instance, the number of points in the 2% and 1% datasets in Figure 5 was largely reduced on the flat areas, while many more points were left on the defects of the wall.
A similar observation can be made for the 5%, 2% and 1% datasets in Figures 6 and 7. It is also notable that in all the considered cases (see Figures 5-7), the 0.5% dataset probably does not allow proper defect detection. This is a direct consequence of the dramatic data reduction and of the specific behaviour of the OptD algorithm, which always preserves a significant number of points on the borders of the considered area.
A-A profiles were extracted for the three cases in order to analyze the obtained results carefully; they are shown in Figures 8-10. The profiles show strips of 0.008 m width.
It is worth noticing that points close to sudden profile changes should be preserved by the data reduction method in order to maintain the possibility of defect detection on the wall surface. Positions corresponding to such sudden changes are marked with dashed lines to ease the readability of the figures.
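Cutting such a 0.008 m wide profile strip out of a point cloud is straightforward; a minimal sketch (hypothetical Python; the axis convention, names and synthetic crack are assumptions):

```python
import numpy as np

def extract_profile(points: np.ndarray, y0: float, width: float = 0.008) -> np.ndarray:
    """Cut an A-A cross-section strip of the given width, centred at y = y0,
    and order its points along x for plotting as a depth profile."""
    strip = points[np.abs(points[:, 1] - y0) <= width / 2.0]
    return strip[np.argsort(strip[:, 0])]

# Synthetic wall: a narrow crack (1 cm deep) crossing the section at x = 0.3
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, (50_000, 3))
pts[:, 2] = np.where(np.abs(pts[:, 0] - 0.3) < 0.005, -0.01, 0.0)
profile = extract_profile(pts, y0=0.5)
```

Plotting the returned (x, z) pairs then reproduces the kind of profile shown in Figures 8-10, with the crack appearing as a sudden depth change near x = 0.3.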
In Figure 8, the dashed lines are located at the mortar layers. In most cases, the mortar layers have been damaged. The analysis of the individual profiles shows that the OptD reduction method left more points in these places than on the flat surfaces, as expected.
The profiles shown in Figure 9 correspond to a defect area in the concrete element. As shown in this figure (e.g., the 2% dataset), the OptD method clearly retains a larger number of points on areas associated with profile variations, which are more informative for detecting defects.
The profiles shown in Figure 10 correspond to a defect area in the brick wall. By analyzing these profiles, it can be seen that the 5% dataset allows the defect to be correctly diagnosed. Thus, the OptD method also works well in this case.

Discussion
Since OptD retains a larger number of points on defect areas, it clearly allows high data reduction rates while still preserving the possibility of detecting cavities and cracks. Nevertheless, the selection of a very high down-sampling rate may cause the loss of useful information about object defects.
A more in-depth analysis of three samples taken from the considered datasets was carried out in order to better investigate this aspect (Figure 11). Each sample contains a flat and homogeneous area (FHA) and a defect area (DA). The quantitative comparisons between the original and reduced point clouds for the FHA and DA are presented in Table 4. As a result of a visual assessment, it was found that the 2% dataset is sufficient to execute a proper diagnosis of both the wall with damaged plaster and the concrete element with cracks. In contrast, the 5% dataset should be used in the case of the damaged retaining brick wall. Figure 11 reports the percentage of points kept in the FHA and DA in all considered cases.
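The FHA/DA comparison of Table 4 amounts to counting, inside each marked region, how many of the original points survive the reduction. A minimal sketch (hypothetical Python; the box coordinates, names and toy data are illustrative, not values from the study):

```python
import numpy as np

def retention_rate(original: np.ndarray, reduced: np.ndarray, box) -> float:
    """Percentage of the original points inside `box` that survive the
    reduction; box = (xmin, xmax, ymin, ymax), points are (N, 3) arrays."""
    def count_inside(p):
        m = ((p[:, 0] >= box[0]) & (p[:, 0] <= box[1]) &
             (p[:, 1] >= box[2]) & (p[:, 1] <= box[3]))
        return int(m.sum())
    n_orig = count_inside(original)
    return 100.0 * count_inside(reduced) / n_orig if n_orig else 0.0

# Toy example: the reduction kept every point in the defect box,
# but only half of the points in the flat, homogeneous box
orig = np.array([[0.1, 0.1, 0.0], [0.2, 0.2, 0.0],
                 [0.8, 0.8, 0.0], [0.9, 0.9, 0.0]])
red = np.array([[0.1, 0.1, 0.0], [0.2, 0.2, 0.0], [0.8, 0.8, 0.0]])
da_rate = retention_rate(orig, red, (0.0, 0.5, 0.0, 0.5))   # defect area: 100.0
fha_rate = retention_rate(orig, red, (0.5, 1.0, 0.5, 1.0))  # flat area: 50.0
```

A DA retention rate well above the FHA rate is exactly the signature of geometry-aware reduction that Table 4 documents for the OptD datasets.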

Figure 11. Tested areas for three samples.
Figure 12 presents a spatial visualization for test area 3, as this object was the most diverse in terms of geometry and had the largest defects. In particular, it shows the distribution of points in the homogeneous area and in the defect area. The figure was prepared based on the datasets obtained with the optimization criterion of 5% (as in Figure 11) and the level above, i.e., 10%. OptD processing results in a different degree of reduction in different areas of the object: in places that are not very varied, few points remained (red colour in Figure 12), while on complex surfaces more points were retained (blue colour in Figure 12). Thanks to this property of the OptD method, object defects can be found while reducing the size of the dataset. Table 4 and Figure 13 experimentally confirm that the data reduction on the flat and homogeneous areas is greater than on the defect areas, as expected. Nevertheless, the reduction degree in these areas differs for different datasets.
For instance, in the first case study, for the 5% dataset, 2.696% of the original points were retained in the FHA, while 9.828% were retained in the DA, i.e., approximately four times more points were kept in the DA than in the FHA (DA%/FHA% ≈ 4). For the 2% dataset, only 0.168% of the points were retained in the FHA and 4.121% in the DA, leading to a roughly 24-times ratio between the two. Similar considerations apply to the other cases reported in Table 4. Analysing the graphs presented in Figure 13, it can be seen that the largest discrepancies in retained points between the homogeneous area and the defect area occur for the concrete element (test area 2). This means that, for this case, more unnecessary points (in areas with little variation) were removed relative to the points showing the defects. To conclude, according to the analysis conducted in this work, the OptD method can be effectively used for point cloud down-sampling in the context of identifying building defects. The shrunk dataset obtained from the OptD method can be more easily visually analysed or processed further using algorithms for automatic point cloud classification.
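The DA%/FHA% ratios quoted above follow directly from the Table 4 percentages for the first case study:

```python
# Retained-point percentages for the first case study (from Table 4).
fha_5, da_5 = 2.696, 9.828   # 5% dataset: FHA and DA
fha_2, da_2 = 0.168, 4.121   # 2% dataset: FHA and DA

ratio_5 = da_5 / fha_5   # ≈ 3.6, i.e. roughly four times more points in the DA
ratio_2 = da_2 / fha_2   # ≈ 24.5, i.e. the ~24-times ratio noted in the text
```

The stronger the down-sampling, the larger this ratio becomes, which quantifies how OptD concentrates the surviving points on the defect areas.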


Conclusions
The paper presents the application of the OptD method for the optimized size reduction of point clouds in the diagnosis and monitoring of historical buildings. The results reported in this paper show that, thanks to its careful optimized point selection, OptD makes it possible to obtain a significantly smaller dataset while still highlighting defects and discontinuities in the wall with damaged plaster, in the concrete element, and in the brick wall.
Based on the results obtained in the considered case studies, the following conclusions can be drawn:

•
The reduced dataset obtained with OptD has a significantly lower point density on regular areas (wall without defects) than on defects (cavities and cracks).

•
The obtained results show that OptD can be effectively used for optimizing point clouds for diagnostic measurements of buildings and other structures.

•
The results of this work indicate the possibility of using OptD as a tool for easing the detection of defects in buildings and structures. Our future work will be dedicated to the investigation of this aspect.

•
The main disadvantage of the OptD method is that it may retain a large number of points at the border of the region of interest.

•
The authors are working to implement the OptD method in point cloud data processing software.