Article

Symmetry Analysis of Oriental Polygonal Pagodas Using 3D Point Clouds for Cultural Heritage

Ting On Chan, Linyuan Xia, Yimin Chen, Wei Lang, Tingting Chen, Yeran Sun, Jing Wang, Qianxia Li and Ruxu Du

1 School of Geography and Planning, Sun Yat-sen University, Guangzhou 510275, China
2 Guangdong Provincial Key Laboratory of Urbanization and Geo-Simulation, Sun Yat-sen University, Guangzhou 510275, China
3 China Regional Coordinated Development and Rural Construction Institute, Sun Yat-sen University, Guangzhou 510275, China
4 Department of Geography, College of Science, Swansea University, Swansea SA2 8PP, UK
* Author to whom correspondence should be addressed.
Sensors 2021, 21(4), 1228; https://doi.org/10.3390/s21041228
Submission received: 14 December 2020 / Revised: 23 January 2021 / Accepted: 8 February 2021 / Published: 9 February 2021
(This article belongs to the Section Remote Sensors)

Abstract: Ancient pagodas are usually parts of popular tourist spots in many oriental countries due to their unique historical backgrounds. They are usually polygonal structures comprising multiple floors separated by eaves. In this paper, we propose a new method to investigate both the rotational and reflectional symmetry of such polygonal pagodas by developing novel geometric models that are fitted to 3D point clouds obtained from photogrammetric reconstruction. The geometric model consists of multiple polygonal pyramid/prism models sharing a common central axis. The method was verified with four datasets collected by an unmanned aerial vehicle (UAV) and a hand-held digital camera. The results indicate that the models fit accurately to the pagodas' point clouds. The symmetry was evaluated by rotating and reflecting the pagodas' point clouds after a complete leveling of the point cloud was achieved using the estimated central axes. The results show RMSEs of 5.04 cm and 5.20 cm deviating from the perfect (theoretical) rotational and reflectional symmetries, respectively. We therefore conclude that the examined pagodas are highly symmetric, both rotationally and reflectionally. The concept presented in the paper not only works for polygonal pagodas, but can also be readily transferred to other pagoda-like objects such as transmission towers.

1. Introduction

Ancient polygonal pagodas are important components of architectural and historical studies in many oriental countries. Most of these polygonal pagodas were built with symmetries (both reflectional and rotational) due to the oriental cultural background [1]. According to Lu [2], symmetry in architecture has been found in official documents created during the Warring States Period (770 BC–221 BC) in China. Therefore, investigating the pagodas' symmetry or asymmetry is an essential process for cultural and architectural studies. On the other hand, the structural health of historical buildings in terms of symmetry is becoming more and more important for cultural heritage and tourism [3]. The symmetry can be investigated by first applying state-of-the-art laser scanning [4] or digital photogrammetry [5] to record the pagoda structures as a tremendous number of points in a three-dimensional (3D) coordinate system. These points are known as point clouds.
Using point clouds for the digitalization of historical sites and buildings is now very common in cultural heritage because of the increased availability of affordable sensors, data collection platforms (e.g., the unmanned aerial vehicle (UAV)), and the associated processing algorithms [6,7]. Thanks to high-density point clouds, which can reach scales of a billion points, the symmetries of buildings and other 3D objects have been investigated intensively during the past decade. Berner et al. [8] detected symmetry from point clouds by analyzing a graph of surface features obtained by a randomized subgraph searching algorithm based on the random sample consensus (RANSAC). The method detected the reoccurring components of the structures, and the iterative closest point (ICP) algorithm was then used to verify the results. Combès et al. [9] modified the basic form of the ICP to estimate a plane defining the reflectional symmetry [10] of a point cloud.
Jiang et al. [11] transformed the point clouds of objects into curve skeletons and then computed the symmetry electors (a set of skeleton node pairs) to create a symmetry correspondence matrix. The matrix was then processed with spectral analysis. This method is accurate but could cause extra computational load, especially for objects that are highly symmetric. Li et al. [12] proposed a method that utilizes both global and local symmetry to inpaint the original point cloud in order to reconstruct an ancient architecture. Shao et al. [13] proposed a method based on the affinity propagation algorithm to model the symmetry of incomplete point clouds of statues. The affinity propagation algorithm explores the similarity between points and allows those points to estimate their best exemplar.
Xue et al. [14] applied a slice-based method to point clouds of buildings to examine their symmetry. A derivative-free optimization is applied to accelerate the computation for each slice to estimate the axis position. This method was further developed and reported in Xue et al. [15]. The method was built with the octree algorithm, so each point fell into a voxel with a weight, and the points were then used to compute a set of descriptors that were input into an optimization process. Li et al. [16] applied geometric fitting techniques and a voting algorithm to point clouds to compute the central axis for the symmetry. Cheng et al. [17] detected symmetry from point clouds and used it for point cloud registration based on sine function fittings in which the symmetry parameters were extracted from slices of the point cloud.
Before the symmetry can be investigated, the point clouds of the historical buildings need to be acquired and processed via a series of procedures. For example, Liang et al. [18] integrated laser scanning and UAV-based photogrammetry to create a complete set of point clouds of a classical Chinese garden in Suzhou, China. They created 3D meshes from the point clouds of many smaller components inside the garden for analysis but did not geometrically model any of them to deliver accurate parameters or to investigate the symmetry. Similarly, Jo and Hong [19] surveyed an old temple with laser scanning and UAV techniques in Gongju, South Korea. They acquired a complete point cloud but did not attempt to geometrically model the temple structure and therefore its symmetry. Using oblique UAV photogrammetry, Martínez-Carricondo et al. [20] reconstructed a point cloud of a historical site in Almería, Spain. The point cloud was then used as input for a building information model (BIM), but no information about the symmetry of the targets was reported. Manajitprasert et al. [21] reconstructed an ancient pagoda from UAV imagery in Ayutthaya, Thailand; the pagoda's symmetry was not reported. Nevertheless, Chan et al. [22] geometrically modeled a part of an old chapel in Australia as a square pyramid using point clouds obtained from a hand-held scanner, but they failed to model the entire chapel and thus were not able to evaluate the symmetry. Some other examples of point cloud-based heritage documentation can be found in [23,24,25].
Despite the rich literature on the symmetry or structural analysis of point clouds, few if any studies have focused on investigating the symmetry of this particular and common type of oriental architecture. In this paper, we develop a model-based method that processes the point clouds to estimate the central axis and then evaluates the symmetry of a typical oriental pagoda for cultural heritage documentation. We propose a novel geometric model for a typical polygonal pagoda that is fitted to point clouds obtained from photogrammetric reconstruction based on UAV and hand-held digital imagery. The model can simultaneously estimate the parameters of the central axis and of the major components of the entire pagoda regardless of the high structural complexity. The resultant parameters can then be used to transform the point cloud to a nominal (original) position where the pagoda is completely leveled, so the symmetry can be evaluated with the ICP algorithm in terms of the matching accuracy.

2. Methods

The main idea of the proposed method is that the point cloud of the entire pagoda (obtained from photogrammetric reconstruction) is first processed and segmented into several individual parts. All the individual parts are then adjusted in a single least-squares process to estimate the model parameters. Finally, the model parameters are used to transform the entire pagoda to its nominal position, where its central axis coincides with the Z-axis of the defined coordinate system, so the pagoda becomes completely leveled.
As shown in Figure 1, the proposed method consists of three main components: (1) preprocessing; (2) model parameter estimation; and (3) symmetry analysis. The first component mainly deals with the pagoda extraction from the original point cloud. The second component is the estimation of the pagoda model parameters using the least-squares method, including the computation of initial values for the model. A new geometric model for a polygonal pagoda is developed and presented in detail. The last component is the analysis of the pagoda's rotational and reflectional symmetries based on the errors obtained by point cloud matching of the rotated and reflected pagoda. The details of all the main components are discussed in the following subsections. In this paper, even though we demonstrate our method using a hexagonal pagoda [26] as an example, the method can be applied to other polygonal pagodas by simply adjusting a single model variable, the number of sides of the polygon (n).

2.1. Preprocessing

2.1.1. Data Collection

The pagodas are surveyed using photogrammetric techniques. Sets of digital images are first collected by a UAV-mounted or a hand-held camera. The images should be captured such that the camera is moved around the pagoda at different heights (UAV) and different elevation angles (hand-held). The collected images can then be processed by commercial photogrammetric software packages such as ContextCapture (previously known as Smart3D), Pix4Dmapper and Agisoft Metashape (previously known as Agisoft Photoscan) [27].

2.1.2. Pagoda Extraction

Before extracting the pagoda from the point cloud, the ground should be removed to reduce the computation. This is achieved using the cloth simulation filter (CSF) first proposed by Zhang et al. [28]. Many oriental ancient pagodas consist of multiple floors (main bodies), which are polygonal prisms (mostly hexagonal and octagonal) separated by short eaves, as shown in Figure 2. Similar to Luo and Wang [29], the original point cloud is sliced into many thin layers (e.g., 2 cm) so that the entire point cloud can be processed slice by slice. All the points in each slice are projected onto a two-dimensional (2D) plane. Since the pagoda is an isolated building, its cross-section on each slice is a polygon (usually a hexagon or octagon). Therefore, 2D circle fitting is used to find the approximate center of the polygon (pagoda slice) on each slice [30]. As the height of the slice increases, the radius estimated from the 2D circle changes rapidly at the transitions, so the slices belonging to the roof, the eaves, and the main bodies (multiple floors) can be identified.
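The following is a minimal sketch of this slicing and per-slice circle fitting step, assuming the pagoda points are an N × 3 NumPy array with the ground already removed; the function names, the default slice thickness, and the algebraic (Kåsa) circle fit are illustrative choices, not the authors' implementation.

```python
import numpy as np

def fit_circle_2d(xy):
    """Algebraic (Kasa) least-squares circle fit: returns center (a, b) and radius r."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    rhs = x ** 2 + y ** 2
    (a, b, d), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(d + a ** 2 + b ** 2)

def slice_radii(points, thickness=0.02):
    """Slice the cloud into thin horizontal layers and fit a 2D circle to each slice.

    Returns per-slice mid heights, approximate centers and radii; abrupt radius
    changes along Z indicate transitions between roof, eaves and main bodies.
    """
    z = points[:, 2]
    edges = np.arange(z.min(), z.max() + thickness, thickness)
    results = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sl = points[(z >= lo) & (z < hi)]
        if len(sl) < 10:          # skip nearly empty slices
            continue
        a, b, r = fit_circle_2d(sl[:, :2])
        results.append((0.5 * (lo + hi), a, b, r))
    return np.array(results)
```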

2.2. Model Parameter Estimation

2.2.1. Planar Patch Extraction Based on the PCA

Many roofs of oriental ancient pagodas are covered by roof tiles, which are usually non-planar; therefore, those roof tiles should be filtered out so that only the points lying on planar patches of the roof are kept. This can be achieved by applying a filtering method [31,32] based on the principal component analysis (PCA). The k-nearest neighbor algorithm is first employed for each point, j, to search for other points within a small radius (e.g., 2 cm) of its neighborhood. Then the covariance matrix, C, is computed as
$$\mathbf{C} = \frac{1}{p}\sum_{i=1}^{p}\left(\mathbf{r}_i - \mathbf{r}_c\right)\left(\mathbf{r}_i - \mathbf{r}_c\right)^{T} \qquad (1)$$
where p is the number of points within a sphere having a small radius and centered at the position of point j, ri denotes the coordinates of the i-th point within the sphere, and rc is the centroid of all those points. Then, the eigenvalues of this covariance matrix are computed as λ1, λ2, and λ3. For a planar patch, two of the (normalized) eigenvalues are almost equal and one is small (e.g., λ1 ≈ λ2, λ3 ≈ 0).
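A compact sketch of this PCA-based planarity filter is given below, assuming the points form an N × 3 NumPy array; the radius, neighbor-count, and eigenvalue thresholds are illustrative assumptions, and np.cov normalizes by p − 1 rather than the p of Equation (1), which does not affect the eigenvalue ratios.

```python
import numpy as np
from scipy.spatial import cKDTree

def planar_mask(points, radius=0.02, min_neighbors=8, flat_thresh=0.05):
    """Keep points whose small neighborhood is approximately planar.

    For each point, the covariance of its neighbors within `radius` (Equation (1))
    is decomposed; a planar patch has two similar eigenvalues and one near zero,
    so the smallest normalized eigenvalue is compared against `flat_thresh`.
    """
    tree = cKDTree(points)
    keep = np.zeros(len(points), dtype=bool)
    for j, p in enumerate(points):
        idx = tree.query_ball_point(p, radius)
        if len(idx) < min_neighbors:
            continue
        cov = np.cov(points[idx].T)               # 3x3 covariance matrix C
        evals = np.sort(np.linalg.eigvalsh(cov))  # ascending eigenvalues
        evals = evals / evals.sum()               # normalize
        keep[j] = evals[0] < flat_thresh          # smallest eigenvalue ~ 0 -> planar
    return keep
```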

2.2.2. Geometric Model

Ancient pagodas usually consist of a repeated component, namely the floor, and the dimensions of each floor usually decrease for the higher floors. The eaves spread outward from the main bodies in the form of shallow polygonal pyramids, while the roof is usually a sharper polygonal pyramid compared to the eaves. As a result, the pagoda roof, eaves, and main bodies (floors) can be geometrically modeled with 3D models adapted from Chan et al. [33].

Polygonal Prism/Pyramid for Different Components of the Pagodas

The geometric models of the polygonal pyramid and prism can be used for different parts of the pagodas, as illustrated in Figure 3. The geometric model of the polygonal pyramid (roof/eaves) is given as
$$f(\mathbf{x},\mathbf{l}) = R_0 - kZ' - X'\tan\left[\left(1 - \frac{1}{n}\right)\cdot 90°\right] - Y' = 0 \qquad (2)$$
where
$$\begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix} = \mathbf{R}_3\left[(q-1)\cdot\frac{360°}{n} + \Psi\right]\mathbf{R}_2(\Phi)\,\mathbf{R}_1(\Omega)\begin{bmatrix} X - X_c \\ Y - Y_c \\ Z \end{bmatrix} \qquad (3)$$
Additionally,
$$q = \left\lceil \frac{\theta\, n}{360°} \right\rceil \qquad (4)$$
where x and l are column vectors of the model parameters and the observations, respectively; X, Y, and Z are the coordinates of points of the pagodas in the object space; X′, Y′, and Z′ are the coordinates of points of the pagodas in the model space; R1, R2, and R3 are the rotation matrices about the X-axis, Y-axis, and Z-axis, respectively; n is set to 6 (the number of sides of the polygon) for a hexagonal pagoda, and q is the sextant number computed from the point's horizontal angle θ, indicating which sextant (Figure 4) the point belongs to. For all the different parts of the pagoda, the model parameters include: the center of the prism (Xc, Yc); the rotation angles (Ω, Φ, Ψ) about the X-axis, Y-axis, and Z-axis, respectively; the polygonal radius (R0); and the taper factor (k), which is the gradient governing the radius decrement at larger Z by subtracting a portion of Z from R0. There are seven parameters in total.
For a prism (main body), the taper factor can be set to 0, so Equation (2) is simplified as
$$f(\mathbf{x},\mathbf{l}) = R_0 - X'\tan\left[\left(1 - \frac{1}{n}\right)\cdot 90°\right] - Y' = 0 \qquad (5)$$
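To make the model concrete, the sketch below evaluates the pyramid/prism residual for a single point, following Equations (2)-(5) as reconstructed above; the rotation-matrix conventions, the degree-based interface, and the way θ and q are derived from the point's position are our assumptions rather than the authors' implementation.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def pyramid_residual(point, Xc, Yc, Omega, Phi, Psi, R0, k, n=6):
    """Residual of the polygonal pyramid model (Equations (2)-(4)) for one point.

    Setting k = 0 gives the prism model (Equation (5)). The sextant number q is
    derived here from the point's horizontal angle about the shifted center.
    """
    X, Y, Z = point
    theta = np.degrees(np.arctan2(Y - Yc, X - Xc)) % 360.0   # horizontal angle
    q = int(np.ceil(theta * n / 360.0)) or 1                 # sextant number (Equation (4))
    # Equation (3): rotate the reduced coordinates into the model space
    Rz = rot_z(np.radians((q - 1) * 360.0 / n + Psi))
    Xp, Yp, Zp = Rz @ rot_y(np.radians(Phi)) @ rot_x(np.radians(Omega)) @ \
        np.array([X - Xc, Y - Yc, Z])
    # Equation (2): signed misfit of the point from the canonical polygon face
    face_angle = np.radians((1.0 - 1.0 / n) * 90.0)
    return (R0 - k * Zp) - Xp * np.tan(face_angle) - Yp
```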

Geometric Model for the Entire Pagoda

Instead of modeling different parts individually, the entire pagoda should be modeled with a common central axis (different parts share a common set of center and rotation parameters but have different shapes and dimensions), as illustrated in Figure 5. Since the radius of the main bodies decreases at higher floors, multiple radius parameters have to be solved in the model. Similarly, a different taper factor is considered for each part except the main bodies (k is set to 0 for the prisms and is not solved). The geometric model of the polygonal pagoda for a given point i, lying on part j (either pyramid or prism), is given as
$$f(\mathbf{x},\mathbf{l}) = R_{0j} - k_j Z_i' - X_i'\tan\left[\left(1 - \frac{1}{n}\right)\cdot 90°\right] - Y_i' = 0 \qquad (6)$$
where
$$\begin{bmatrix} X_i' \\ Y_i' \\ Z_i' \end{bmatrix} = \mathbf{R}_3\left[(q_i-1)\cdot\frac{360°}{n} + \Psi\right]\mathbf{R}_2(\Phi)\,\mathbf{R}_1(\Omega)\begin{bmatrix} X_i - X_c \\ Y_i - Y_c \\ Z_i - Z_b \end{bmatrix} \qquad (7)$$
Note that Zb is the Z coordinate of the base of the pagoda. It is added to the model, even though it is not solved by the least-squares, in order to translate the pagoda onto the XY-plane; otherwise, the estimated taper factors and polygonal radii would not fall within realistic ranges. Zb cannot be solved because it is fully correlated with the taper factors of the roof and eaves, so it is simply taken as the minimum Z value of the entire pagoda. Before the processing, the entire pagoda is also first translated to its centroid so that Xc and Yc become small numbers.

2.2.3. Least-Squares Estimation

As the parameters and the observations are not separable in the model (Equation (6)), the Gauss–Helmert adjustment model [34], also known as the combined model, is employed to solve the parameters. The linearized adjustment model is expressed as
$$\mathbf{A}\hat{\boldsymbol{\delta}} + \mathbf{B}\hat{\mathbf{v}} + \mathbf{w} = \mathbf{0} \qquad (8)$$
where δ̂ is the correction vector for the parameters, which can be further broken down into
$$\hat{\boldsymbol{\delta}}_p = \begin{bmatrix} X_c & Y_c & \Omega & \Phi & \Psi \end{bmatrix}^{T} \qquad (9)$$
for the central axis; and
$$\hat{\boldsymbol{\delta}}_t = \begin{bmatrix} k_1 & R_1 & k_2 & R_2 & \cdots & k_g & R_g \end{bmatrix}^{T} \qquad (10)$$
for the pagoda shape parameters, where g is the total number of parts. A is the design matrix of partial derivatives of the model with respect to the parameters of the pagoda model; B is the design matrix of partial derivatives of the model with respect to the observations; v̂ is the residual vector; and w is the misclosure vector of the pagoda model.
Using the same subscripts as the correction vector, the design matrix A is broken down in the same way, and the blocks are arranged in the following normal equations:
$$\begin{bmatrix} \mathbf{A}_p^{T}\left(\mathbf{B}\mathbf{P}^{-1}\mathbf{B}^{T}\right)^{-1}\mathbf{A}_p & \mathbf{A}_p^{T}\left(\mathbf{B}\mathbf{P}^{-1}\mathbf{B}^{T}\right)^{-1}\mathbf{A}_t \\ \mathbf{A}_t^{T}\left(\mathbf{B}\mathbf{P}^{-1}\mathbf{B}^{T}\right)^{-1}\mathbf{A}_p & \mathbf{A}_t^{T}\left(\mathbf{B}\mathbf{P}^{-1}\mathbf{B}^{T}\right)^{-1}\mathbf{A}_t \end{bmatrix}\begin{bmatrix} \hat{\boldsymbol{\delta}}_p \\ \hat{\boldsymbol{\delta}}_t \end{bmatrix} + \begin{bmatrix} \mathbf{A}_p^{T}\left(\mathbf{B}\mathbf{P}^{-1}\mathbf{B}^{T}\right)^{-1}\mathbf{w} \\ \mathbf{A}_t^{T}\left(\mathbf{B}\mathbf{P}^{-1}\mathbf{B}^{T}\right)^{-1}\mathbf{w} \end{bmatrix} = \begin{bmatrix} \mathbf{0} \\ \mathbf{0} \end{bmatrix} \qquad (11)$$
where P is the weight matrix for the observations, a diagonal matrix filled with the inverse of the squares of the standard deviations for the observations:
$$\mathbf{P} = \mathrm{diag}\left(\frac{1}{\sigma_{x_1}^{2}},\ \frac{1}{\sigma_{y_1}^{2}},\ \frac{1}{\sigma_{z_1}^{2}},\ \cdots,\ \frac{1}{\sigma_{x_m}^{2}},\ \frac{1}{\sigma_{y_m}^{2}},\ \frac{1}{\sigma_{z_m}^{2}}\right) \qquad (12)$$
where m is the number of points. The standard deviations for the observations can be set based on the precision of the point cloud coordinates, e.g., 2 cm.
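As a minimal linear-algebra sketch, the step below assembles and solves the blocked normal equations of Equation (11) for one iteration, assuming the design matrices A_p and A_t, the matrix B, the weight matrix P, and the misclosure vector w have already been evaluated at the current parameter values; the function name and the explicit matrix inversions are illustrative simplifications.

```python
import numpy as np

def gauss_helmert_step(A_p, A_t, B, P, w):
    """Solve one Gauss-Helmert (combined model) iteration for the corrections.

    Implements Equation (11): the equivalent weight matrix M = (B P^-1 B^T)^-1
    is applied to both parameter blocks, and the stacked system is solved for
    the axis corrections (delta_p) and the shape corrections (delta_t).
    """
    M = np.linalg.inv(B @ np.linalg.inv(P) @ B.T)   # (B P^-1 B^T)^-1
    A = np.hstack([A_p, A_t])                       # full design matrix [A_p | A_t]
    N = A.T @ M @ A                                 # blocked normal matrix
    u = A.T @ M @ w
    delta = np.linalg.solve(N, -u)                  # N * delta + u = 0
    v = -np.linalg.inv(P) @ B.T @ M @ (A @ delta + w)   # residual vector (Equation (8))
    n_p = A_p.shape[1]
    return delta[:n_p], delta[n_p:], v              # delta_p, delta_t, residuals
```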

2.2.4. Initial Value Estimation

For the least-squares method, a set of approximate values of the parameters (known as initial values) is required as input to the system to guarantee the convergence of the solution. The initial values for each part of the pagoda should be estimated independently and then aggregated to form the solution vector for the estimation. The initial values of the parameters (except Ψ) for the solution vector can be estimated by fitting the observations to the circular cone or cylinder model (Equation (13)). The definitions of the parameters are the same as in Equations (2) and (3). For a cylinder model, k is set to zero.
$$f(\mathbf{x},\mathbf{l}) = X'^{2} + Y'^{2} - \left(R_0 - kZ'\right)^{2} = 0 \qquad (13)$$
where
$$\begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix} = \mathbf{R}_2(\Phi)\,\mathbf{R}_1(\Omega)\begin{bmatrix} X - X_c \\ Y - Y_c \\ Z \end{bmatrix} \qquad (14)$$
The initial values for the circular cone model can all be set to zero (except for R0). The initial value of R0 can be obtained from empirical knowledge.
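A possible sketch of this initialization step with SciPy is shown below, reusing the rot_x and rot_y helpers defined in the earlier sketch in Section 2.2.2; the parameter ordering, the use of scipy.optimize.least_squares, and the function names are our assumptions rather than the authors' code.

```python
import numpy as np
from scipy.optimize import least_squares

def cone_residuals(params, pts, cylinder=False):
    """Algebraic residuals of the circular cone/cylinder model (Equations (13)-(14))."""
    Xc, Yc, Omega, Phi, R0, k = params
    if cylinder:
        k = 0.0
    R = rot_y(np.radians(Phi)) @ rot_x(np.radians(Omega))   # helpers from Section 2.2.2 sketch
    shifted = pts - np.array([Xc, Yc, 0.0])
    Xp, Yp, Zp = R @ shifted.T
    return Xp ** 2 + Yp ** 2 - (R0 - k * Zp) ** 2

def initial_values(pts, R0_guess, cylinder=False):
    """Approximate Xc, Yc, Omega, Phi, R0, k for one pagoda part (Psi starts at 0)."""
    x0 = np.array([0.0, 0.0, 0.0, 0.0, R0_guess, 0.0])       # all zero except R0
    sol = least_squares(cone_residuals, x0, args=(pts, cylinder))
    return sol.x
```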

2.3. Symmetry Analysis

We developed a method to analyze the symmetry by first using the estimated parameters to transform the pagoda to its nominal position, where the pagoda is completely leveled (perpendicular to the XY-plane) and centered at (0, 0) on the XY-plane. Then, we rotate the entire pagoda and reflect half of it to analyze the rotational and reflectional symmetry, respectively, by calculating the matching errors after the rotation/reflection. Theoretically, a pagoda possessing perfect rotational and reflectional symmetry would result in zero matching errors.

2.3.1. Rigid Body Transformation

After the model parameters are estimated, they are used to transform the pagoda to its nominal position (Equation (15)). The Z-axis thereby becomes the axis of symmetry for the rotational symmetry, and the XZ/YZ-planes become the planes of symmetry for the reflectional symmetry.
$$\begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix} = \mathbf{R}_3(\Psi)\,\mathbf{R}_2(\Phi)\,\mathbf{R}_1(\Omega)\begin{bmatrix} X - X_c \\ Y - Y_c \\ Z - Z_b \end{bmatrix} \qquad (15)$$
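A minimal sketch of this leveling step is given below, assuming the estimated parameters are available in degrees and reusing the rot_x, rot_y and rot_z helpers from the earlier sketch in Section 2.2.2; the function name is illustrative.

```python
import numpy as np

def level_pagoda(points, Xc, Yc, Zb, Omega, Phi, Psi):
    """Transform the pagoda cloud to its nominal, leveled position (Equation (15)).

    After this transformation the estimated central axis coincides with the Z-axis,
    so rotational symmetry can be tested about Z and reflectional symmetry about
    the XZ/YZ-planes.
    """
    R = rot_z(np.radians(Psi)) @ rot_y(np.radians(Phi)) @ rot_x(np.radians(Omega))
    shifted = points - np.array([Xc, Yc, Zb])   # reduce to the estimated center and base height
    return (R @ shifted.T).T
```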

2.3.2. ICP-Based Sector Matching

A hexagonal pagoda has a rotational symmetry of order 6 (n = 6) about its central axis, so the pagoda should be identical after a rotation by any multiple of 360°/6 = 60° if it possesses perfect rotational symmetry. Using the well-known ICP method [35], we match the pagoda before and after rotations of i × 60°, where i = 1, 2, …, 5, and compute the matching errors in terms of the root-mean-square error (RMSE). More specifically, we compute the RMSE for both the clockwise and the anticlockwise rotations of i × 60°. The overall RMSE for the rotational symmetry (RMSErot) is then defined as
$$\mathrm{RMSE}_{rot} = \frac{\mathrm{RMSE}_{CW} + \mathrm{RMSE}_{ACW}}{2} \qquad (16)$$
where
$$\mathrm{RMSE}_{CW} = \frac{1}{5}\sum_{i=1}^{5}\mathrm{RMSE}_{ICP}\left(0°,\, -i\cdot 60°\right) \qquad (17)$$
and
$$\mathrm{RMSE}_{ACW} = \frac{1}{5}\sum_{i=1}^{5}\mathrm{RMSE}_{ICP}\left(0°,\, i\cdot 60°\right) \qquad (18)$$
where RMSECW and RMSEACW stand for the RMSE of the ICP-based sector point matching in the clockwise and anticlockwise directions, respectively. RMSEICP(0°, −i·60°) and RMSEICP(0°, i·60°) stand for the RMSE of the ICP between the pagoda point cloud at its nominal position (rotation of 0°) and the same point cloud after a rotation about the Z-axis by −i·60° and by i·60°, respectively.
Similarly, for the reflectional symmetry, we define the matching errors of the ICP between the first half and the second half of the pagoda point cloud, divided by a plane of symmetry (the XZ-plane or the YZ-plane):
$$\mathrm{RMSE}_{ref} = \frac{\mathrm{RMSE}_{XZ} + \mathrm{RMSE}_{YZ}}{2} \qquad (19)$$
where RMSEXZ is the RMSE of the ICP-based matching between the first half (Y ≥ 0) and the mirrored second half (Y ≤ 0) of the pagoda point cloud divided by the XZ-plane, while RMSEYZ is the RMSE of the ICP-based matching between the first half (X ≥ 0) and the mirrored second half (X ≤ 0) of the pagoda point cloud divided by the YZ-plane.
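The sketch below approximates these matching errors. Unlike the paper's full ICP matching [35], it uses a one-shot nearest-neighbour RMSE computed with a k-d tree (no iterative alignment refinement), so it is a simplified stand-in for Equations (16)-(19); the function names and the leveled N × 3 input array are our assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_rmse(source, target):
    """RMSE of nearest-neighbour distances from source points to target points."""
    d, _ = cKDTree(target).query(source)
    return float(np.sqrt(np.mean(d ** 2)))

def rotational_rmse(leveled, n=6):
    """Approximate RMSE_rot (Equations (16)-(18)) for an n-fold symmetric pagoda."""
    rmses = []
    for sign in (+1, -1):                                   # anticlockwise / clockwise
        for i in range(1, n):                               # i * 60 deg for n = 6
            a = np.radians(sign * i * 360.0 / n)
            c, s = np.cos(a), np.sin(a)
            Rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
            rmses.append(nn_rmse((Rz @ leveled.T).T, leveled))
    return float(np.mean(rmses))

def reflectional_rmse(leveled):
    """Approximate RMSE_ref (Equation (19)) using the XZ- and YZ-planes."""
    out = []
    for axis in (1, 0):                                     # flip Y (XZ-plane), then X (YZ-plane)
        mirrored = leveled.copy()
        mirrored[:, axis] *= -1.0
        half = leveled[leveled[:, axis] >= 0]               # first half
        other = mirrored[mirrored[:, axis] >= 0]            # mirrored second half
        out.append(nn_rmse(half, other))
    return float(np.mean(out))
```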

3. Experiment

For the experiment, we focused on a typical type of Chinese pagoda, the "Wen" pagoda, because it is relatively small and short (commonly three floors in Guangdong Province, China). This type of pagoda is usually hexagonal. We captured images for photogrammetric reconstruction of four different pagodas (the details are listed in Table 1) in Guangzhou, the capital of Guangdong Province, China, with a DJI Mavic 2 Pro UAV and a Nikon D5600 digital camera (Figure 6). The camera embedded on the Mavic 2 Pro and the D5600 have resolutions of 12 and 24.16 million pixels, respectively.
All the images were input into the ContextCapture software to estimate the point clouds (Figure 7) for further processing with our proposed method. The images are first placed in a folder and then loaded into the software. After that, the camera parameters, such as the sensor dimension and focal length, should be selected for the subsequent reconstruction. Control point coordinates can be input into the software at this stage if they are available, so that the point cloud is generated at a real scale. Three-dimensional previews are generated for visualization after the camera parameters (and the control points) are selected. If the previews are confirmed, the process of 3D reconstruction is started.
After the first step of our proposed method is completed, the pagoda point clouds are extracted and divided into different parts (Figure 8): Part 1 is the roof; Parts 2, 4, and 6 are eaves (Part 6 does not exist for Pagoda D); Parts 3, 5, and 7 are the main bodies (Part 7 does not exist for Pagoda D). Parts 1–3 are fitted both to the conventional circular cone/cylinder and to the hexagonal pyramid/prism model. Then, all seven parts (five parts for Pagoda D) are used to fit the geometric model for the entire pagoda. The fitting results were then used for the subsequent symmetry analysis. The point cloud of Pagoda D was reconstructed using images without geotagging by the global positioning system (GPS); therefore, its reconstructed point cloud was not at real scale. The scale of Pagoda D was intentionally left uncorrected to verify whether the method works for a pagoda that is not at true scale (not defined by ground truth) due to the use of non-geotagged images. ContextCapture generates such point clouds in its own defined unit but with correct relative positions based on the inner constraints of the bundle adjustment, so we assume that the defined unit is the meter for Pagoda D, for consistency with the presentation of the results obtained from the other pagodas (A, B, and C). In other words, Pagoda D is reconstructed as a bigger pagoda if the unit is assumed to be a meter, but its shape and degree of symmetry are preserved in the defined coordinate system.

4. Results

4.1. Fitting Estimates Compared to the Conventional Cylindrical/Conic Models

The least-squares estimates and their precisions (σ) from the circular cone/cylinder fitting (CCF) and the hexagonal pyramid/prism fitting (HPF) for Part 1 (roof), Part 2 (eave), and Part 3 (main body) are tabulated in Table 2, Table 3 and Table 4, respectively. It can be seen that the estimated parameters from the CCF and the HPF deviate significantly from each other for Part 2, compared with Parts 1 and 3, for all four pagodas. This is attributed to the fact that the eaves are short, so the observations are concentrated in a short Z range. Parts 1 and 3 have much larger Z ranges; therefore, the CCF can perform better there, though still worse than the HPF for the same sets of observations, as the parts are indeed hexagonal. The differences in performance between the CCF and the HPF are also indicated by their precisions. It can be seen from Table 2, Table 3 and Table 4 that the precisions of the HPF are higher than those of the CCF (σ is smaller) because the observations fit the HPF better. As a result, the hexagonal models are a more suitable choice for polygonal pagoda modeling than the circular models, which were previously the only or one of only a few options for this type of modeling task.
Another way to evaluate the performance of the fittings is to analyze the residuals. The z residuals of the fittings for Parts 1–3 are shown in Figure 9, Figure 10 and Figure 11, respectively. It can be seen that the z residuals were significantly reduced by using the HPF instead of the CCF for Parts 1–3 of all four pagodas. For Part 1, the z residuals were up to 40 cm for the CCF, but they were reduced to centimeter levels, so the residuals were improved by approximately 75%. On the other hand, the residuals shown in Figure 11 are smaller than those in Figure 9 and Figure 10. This is because the outer walls (made of bricks) of the main body are much flatter than the roof or eaves, which are essentially covered by non-planar roof tiles.

4.2. Geometric Fitting of the Proposed Pagoda Model

The least-squares estimates and their precisions for the fittings of the proposed model for the entire pagoda are tabulated in Table 5 for all four pagodas. It can be seen that all the estimates of the central axes are consistent across the four pagodas and take reasonable values (e.g., the tilt angles are small, ≤2°).
To assess the accuracy of the estimates, the estimated parameters were used to simulate the best-fit pagodas. The simulated pagodas were then superimposed on the original point clouds of all four pagodas, as shown in Figure 12. The high consistency between the simulated (estimated) pagodas and the original pagodas indicates the high rigor and robustness of the proposed model and the associated processing methods. On the other hand, this high consistency also suggests that the architecture of this type of pagoda was highly standardized in old times (Pagoda A was built about 300 years ago). The architecture of these pagodas is shown to be highly hexagonal (polygonal).

4.3. Symmetry Analysis

The extent of symmetry (rotational and reflectional) of all four pagodas was quantified using the RMSE values (Table 6) obtained from the ICP as discussed in Section 2. From the table, it can be seen that RMSErot and RMSEref are at the centimeter level. They are close to each other; however, it is not straightforward to draw any conclusion about the relationship between the pagodas' rotational and reflectional symmetries. Since the RMSEs are only at the centimeter level, all four pagodas are highly symmetric, both rotationally and reflectionally. Since most of the lens-related errors in the photogrammetric reconstruction had already been compensated by the lens distortion model built into the software (ContextCapture), we can assume that the symmetry was examined with an accurate geometric model for the pagoda. This is illustrated by plotting the sextant numbers (Figure 13) obtained along with the solutions from the least-squares fitting of the models. In Figure 13, the six colors are evenly distributed and appear in a symmetric pattern. This is consistent with the old-time architectural philosophy in oriental countries, which advocated perfect symmetry (i.e., a good balance). The symmetry analysis depends on the point cloud reconstruction accuracy, which is a function of multiple factors such as the accuracy of the sensors and the sensing media. It is difficult to isolate all those factors, but they can be investigated in future work.

5. Conclusions

In this paper, we presented a new method for symmetry analysis of typical polygonal pagodas based on 3D point clouds for cultural heritage. The most important component of the method is a new geometric model for the polygonal pagoda, which makes rigorous least-squares estimation of the pagoda's central axis possible. The geometric model consists of multiple individual models of polygonal pyramids and prisms. After the central axis is estimated, the pagoda can be leveled so that the symmetry can be assessed using the RMSE values delivered by the ICP-based matching. The results suggest that the deviation from perfect symmetry causes approximately 5 cm of point matching error after the point clouds are rotated (reflected) about (across) the estimated axis (plane) of symmetry. The results show that the proposed model is rigorous and robust for four different polygonal pagodas with similar architectural styles. Even though we only demonstrated applications for polygonal pagodas, the concept developed in this paper can be readily extended to other pagoda-like structures, such as structural modeling of transmission and telecommunication towers.

Author Contributions

Conceptualization, T.O.C. and L.X.; methodology, T.O.C. and L.X.; software, T.O.C.; validation, Y.C., W.L., T.C., Q.L. and J.W.; formal analysis, Y.S., T.C., Q.L., and W.L.; data collection, R.D.; writing—original draft preparation, T.O.C.; writing—review and editing, T.O.C., L.X., Y.C., W.L., T.C., Y.S., and J.W.; visualization, T.O.C., R.D. and Y.S.; supervision, T.O.C.; funding acquisition, T.O.C., Y.C., and L.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key Research and Development Program of China (grant number 2017YFB0504103), the Fundamental Research Funds for the Central Universities (20lgzd09), and Key Research and Development Program of Guangdong Province (No. 2020B0101130009).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Acknowledgments

The anonymous reviewers are acknowledged for their valuable comments. The authors would like to thank Jiahui Tang and Mengwei Liu for their support on raw data curation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tang, X.X.; Chen, C.J. The Symmetry in the Traditional Chinese Architecture. Appl. Mech. Mater. 2014, 488, 669–675. [Google Scholar] [CrossRef]
  2. Lu, Y. A History of Chinese Science and Technology; Springer: Berlin, Germany, 2015; Volume 3, pp. 1–624. [Google Scholar]
  3. Heritage, E. 3D Laser Scanning for Heritage: Advice and Guidance to Users on Laser Scanning in Archaeology and Architecture; Historic England: Swindon, UK, 2007; pp. 1–44. [Google Scholar]
  4. Lindenbergh, R. Engineering applications. In Airborne and Terrestrial Laser Scanning; Vosselman, G., Maas, H.-G., Eds.; Whittles Publishing: Scotland, UK, 2010; pp. 237–269. [Google Scholar]
  5. Luhmann, T.; Robson, S.; Kyle, S.A.; Harley, I.A. Close Range Photogrammetry: Principles, Techniques and Applications; Blackwell Publishing Ltd: Oxford, UK, 2006. [Google Scholar]
  6. Remondino, F. Heritage Recording and 3D Modeling with Photogrammetry and 3D Scanning. Remote Sens. 2011, 3, 1104–1138. [Google Scholar] [CrossRef] [Green Version]
  7. Aicardi, I.; Chiabrando, F.; Lingua, A.M.; Noardo, F. Recent trends in cultural heritage 3D survey: The photogrammetric computer vision approach. J. Cult. Herit. 2017, 32, 257–266. [Google Scholar] [CrossRef]
  8. Berner, A.; Bokeloh, M.; Wand, M.; Schilling, A.; Seidel, H.P. A graph-based approach to symmetry detection. In Proceedings of the IEEE/ EG Symposium on Volume and Point-Based Graphics (2008), Los Angeles, CA, USA, 10–11 August 2008; Volume 23, pp. 1–8. [Google Scholar]
  9. Combes, B.; Hennessy, R.; Waddington, J.; Roberts, N.; Prima, S. Automatic symmetry plane estimation of bilateral objects in point clouds. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar] [CrossRef] [Green Version]
  10. Benoît, C.; Sylvain, P. New algorithms to map asymmetries of 3D surfaces. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2008, New York, NY, USA, 6–10 September 2008; Volume 11. [Google Scholar] [CrossRef]
  11. Jiang, W.; Xu, K.; Cheng, Z.Q.; Zhang, H. Skeleton-based intrinsic symmetry detection on point clouds. Graph. Models 2013, 75, 177–188. [Google Scholar] [CrossRef]
  12. Li, E.; Zhang, X.P.; Chen, Y.Y. Symmetry based Chinese Ancient Architecture Reconstruction from Incomplete Point Cloud. In Proceedings of the 5th International Conference on Digital Home, Guangzhou, China, 28–30 November 2014; pp. 157–161. [Google Scholar] [CrossRef]
  13. Shao, J.; Gong, T.; Yang, T.; Xu, J. Incomplete Surface Reconstruction for Statue Point Cloud based on Symmetry. In Proceedings of the ITM Web of Conferences, Hangzhou, China, 29–31 July 2016; Volume 7, p. 07002. [Google Scholar]
  14. Xue, F.; Chen, K.; Lu, W. Architectural symmetry detection from 3D urban point clouds: A derivative-free optimization (DFO) approach. In Advances in Informatics and Computing in Civil and Construction Engineering; Springer: Berlin/Heidelberg, Germany, 2019; pp. 513–519. [Google Scholar]
  15. Xue, F.; Lu, W.; Webster, C.J.; Chen, K. A derivative-free optimization-based approach for detecting architectural symmetries from 3D point clouds. ISPRS J. Photogramm. Remote Sens. 2019, 148, 32–40. [Google Scholar] [CrossRef]
  16. Li, H.; Zhu, Q.; Huang, M.; Guo, Y.; Qin, J. Pose Estimation of Sweet Pepper through Symmetry Axis Detection. Sensors 2018, 18, 3083. [Google Scholar] [CrossRef] [Green Version]
  17. Cheng, L.; Wu, Y.; Chen, S.; Zong, W.; Yuan, Y.; Sun, Y.; Zhuang, Q.; Li, M. A Symmetry-Based Method for LiDAR Point Registration. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 285–299. [Google Scholar] [CrossRef]
  18. Liang, H.; Li, W.; Lai, S.; Zhu, L.; Jiang, W.; Zhang, Q. The integration of terrestrial laser scanning and terrestrial and unmanned aerial vehicle digital photogrammetry for the documentation of Chinese classical gardens –A case study of Huanxiu Shanzhuang, Suzhou, China. J. Cult. Herit. 2018, 33, 222–230. [Google Scholar] [CrossRef]
  19. Jo, Y.; Hong, S. Three-Dimensional Digital Documentation of Cultural Heritage Site Based on the Convergence of Terrestrial Laser Scanning and Unmanned Aerial Vehicle Photogrammetry. Int. J. Geo-Inf. 2019, 8, 53. [Google Scholar] [CrossRef] [Green Version]
  20. Martínez-Carricondo, P.; Carvajal-Ramírez, F.; Yero-Paneque, L.; Agüera-Vega, F. Combination of nadiral and oblique UAV photogrammetry and HBIM for the virtual reconstruction of cultural heritage. Case study of Cortijo del Fraile in Níjar, Almería (Spain). Build. Res. Inf. 2019, 48, 140–159. [Google Scholar] [CrossRef]
  21. Manajitprasert, S.; Tripathi, N.K.; Arunplod, S. Three-dimensional (3D) Modeling of cultural heritage site using UAV imagery: A case study of the pagodas in Wat Maha That, Thailand. Appl. Sci. 2019, 9, 3640. [Google Scholar] [CrossRef] [Green Version]
  22. Chan, T.O.; Lichti, D.D.; Belton, D.; Klingseisen, B.; Helmholz, P. Survey Accuracy Analysis of a Hand-held Mobile LiDAR Device for Cultural Heritage Documentation. Photogramm. Fernerkund. Geoinf. 2016, 2016, 153–165. [Google Scholar] [CrossRef]
  23. Erenoglu, R.C.; Akcay, O.; Erenoglu, O. An UAS-Assisted multi-Sensor approach for 3D modeling and reconstruction of cultural heritage site. J. Cult. Herit. 2017, 26, 79–90. [Google Scholar] [CrossRef]
  24. Herrero-Tejedor, T.R.; Arqués Soler, F.; López-Cuervo Medina, S.; de la O Cabrera, M.R.; Martín Romero, J.L. Documenting a cultural landscape using point-cloud 3d models obtained with geomatic integration techniques. The case of the El Encín atomic garden, Madrid (Spain). PLoS ONE 2020, 15, e0235169. [Google Scholar] [CrossRef] [PubMed]
  25. Andriasyan, M.; Moyano, J.; Nieto-Julián, J.E.; Antón, D. From Point Cloud Data to Building Information Modelling: An Automatic Parametric Workflow for Heritage. Remote Sens. 2020, 12, 1094. [Google Scholar] [CrossRef] [Green Version]
  26. Chan, T.O.; Xia, L.; Tang, J.; Liu, M.; Lang, W.; Chen, T.; Xiao, H. Central axis estimation for ancient Chinese pagodas based on geometric modelling and UAV-based photogrammetry. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2020, 43, 751–756. [Google Scholar] [CrossRef]
  27. Yanagi, H.; Chikatsu, H. Performance evaluation of 3D modeling software for UAV photogrammetry. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 147–152. [Google Scholar] [CrossRef]
  28. Zhang, W.; Qi, J.; Wan, P.; Wang, H.; Xie, D.; Wang, X.; Yan, G. An Easy-to-Use Airborne LiDAR Data Filtering Method Based on Cloth Simulation. Remote Sens. 2016, 8, 501. [Google Scholar] [CrossRef]
  29. Luo, D.; Wang, Y. Rapid extracting pillars by slicing point clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37 Pt B3, 215–218. [Google Scholar]
  30. Chan, T.O.; Lichti, D.D. Automatic In Situ Calibration of a Spinning Beam LiDAR System in Static and Kinematic Modes. Remote Sens. 2015, 7, 10480–10500. [Google Scholar] [CrossRef] [Green Version]
  31. El-Halawany, S.I.; Lichti, D.D. Detecting road poles from mobile terrestrial laser scanning data. GIsci. Remote Sens. 2013, 50, 704–722. [Google Scholar] [CrossRef]
  32. Demantké, J.; Mallet, C.; David, N.; Vallet, B. Dimensionality based scale selection in 3D LiDAR point cloud. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2011, 38, 97–102. [Google Scholar] [CrossRef] [Green Version]
  33. Chan, T.O.; Lichti, D.D.; Belton, D.; Nguyen, H.L. Automatic point cloud registration using a single octagonal lamp pole. Photogramm. Eng. Remote Sens. 2016, 82, 257–269. [Google Scholar] [CrossRef]
  34. Förstner, W.; Wrobel, B. Mathematical Concepts in Photogrammetry. In Manual of Photogrammetry, 5th ed.; McGlone, J.C., Mikhail, E.M., Bethel, J., Mullen, R., Eds.; American Society of Photogrammetry and Remote Sensing: Bethesda, MD, USA, 2004; pp. 15–180. [Google Scholar]
  35. Besl, P.J.; Mckay, H.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
Figure 1. Workflow of the proposed method for the symmetry analysis of the polygonal pagoda.
Figure 2. A typical polygonal pagoda in China (Chigang Pagoda, Guangzhou).
Figure 3. Model parameters of the main components of hexagonal pagoda: (a) a roof; (b) an eave; and (c) a main body (floor).
Figure 4. Sextant number (q) for the hexagonal pagoda model.
Figure 5. Model parameters of the proposed geometric model for the entire hexagonal pagoda.
Figure 6. Survey equipment for the experiment: (a) the DJI Mavic 2 Pro UAV and (b) Nikon D5600 digital camera.
Figure 7. Pagoda point clouds: (a) Pagoda A; (b) Pagoda B; (c) Pagoda C; and (d) Pagoda D.
Figure 8. Pagoda point clouds divided into different parts for the proposed model: (a) Pagoda A; (b) Pagoda B; (c) Pagoda C; and (d) Pagoda D.
Figure 9. Residuals of z for the CCF (red) and the HPF (blue) for Part 1: (a) Pagoda A; (b) Pagoda B; (c) Pagoda C; and (d) Pagoda D.
Figure 10. Residuals of z for the CCF (red) and the HPF (blue) for Part 2: (a) Pagoda A; (b) Pagoda B; (c) Pagoda C; and (d) Pagoda D.
Figure 11. Residuals of z for the CCF (red) and the HPF (blue) for Part 3: (a) Pagoda A; (b) Pagoda B; (c) Pagoda C; and (d) Pagoda D.
Figure 12. The best fit pagoda (red) and the original point clouds (yellow): (a) Pagoda A; (b) Pagoda B; (c) Pagoda C; and (d) Pagoda D.
Figure 13. Sextant number obtained from the least-squares estimation as a byproduct for the symmetry analysis: (a) Pagoda A; (b) Pagoda B; (c) Pagoda C; and (d) Pagoda D. Red: q = 1; Cyan: q = 2; Blue: q = 3; Green: q = 4; Yellow: q = 5; Magenta: q = 6.
Table 1. Details of the survey for Pagodas A, B, C, and D.

Pagoda Name | Location | Approx. No. of Images | No. of Floors | Survey Platform | Sensor Dimension/Focal Length | Fly Height | Capture Period
A (Wenfeng) | Pengyu District, Guangzhou, China | 102 | 3 | UAV | 13.20 mm/10.26 mm | 3–8 m | July 2017
B (Wenchang) | Pengyu District, Guangzhou, China | 99 | 3 | UAV | 13.20 mm/10.26 mm | 3–8 m | Oct 2020
C (Shenjing Wen) | Huangpu District, Guangzhou, China | 117 | 3 | UAV | 13.20 mm/10.26 mm | 3–8 m | Oct 2020
D (Liwan Wen) | Liwan District, Guangzhou, China | 37 | 2 | Hand-held | 23.50 mm/18.00 mm | – | Oct 2020
Table 2. Estimated parameters and their precisions of the circular cone/cylinder fitting (CCF) and the hexagonal pyramid/prism fitting (HPF) for Part 1. Each cell gives Est. ± σ.

Param. | Pagoda A, CCF | Pagoda A, HPF | Pagoda B, CCF | Pagoda B, HPF | Pagoda C, CCF | Pagoda C, HPF | Pagoda D, CCF | Pagoda D, HPF
Xc (m) | −0.108 ± 4.9 × 10^−5 | −0.108 ± 3.0 × 10^−7 | 0.035 ± 3.6 × 10^−5 | 0.036 ± 2.2 × 10^−7 | −0.013 ± 4.2 × 10^−5 | −0.019 ± 2.4 × 10^−7 | 0.235 ± 1.9 × 10^−5 | 0.245 ± 1.1 × 10^−7
Yc (m) | −0.018 ± 5.0 × 10^−5 | −0.018 ± 2.9 × 10^−7 | −0.088 ± 3.6 × 10^−5 | −0.090 ± 2.1 × 10^−7 | −0.043 ± 4.3 × 10^−5 | −0.046 ± 2.4 × 10^−7 | −0.243 ± 1.9 × 10^−5 | −0.246 ± 1.1 × 10^−7
Ω (°) | −0.558 ± 1.6 × 10^−3 | −0.448 ± 9.9 × 10^−6 | 0.401 ± 2.8 × 10^−5 | 0.401 ± 1.7 × 10^−7 | 1.661 ± 3.0 × 10^−5 | 1.604 ± 1.8 × 10^−7 | 0.573 ± 6.7 × 10^−6 | 0.401 ± 4.1 × 10^−8
Φ (°) | −1.803 ± 1.66 × 10^−3 | −1.792 ± 9.9 × 10^−6 | 1.031 ± 2.8 × 10^−5 | 1.089 ± 1.7 × 10^−7 | 0.458 ± 3.1 × 10^−5 | 0.229 ± 1.9 × 10^−7 | 3.896 ± 6.8 × 10^−6 | 0.516 ± 4.2 × 10^−8
Ψ (°) | n/a | −12.832 ± 1.6 × 10^−5 | n/a | −11.517 ± 2.6 × 10^−7 | n/a | −1.203 ± 2.5 × 10^−7 | n/a | −2.177 ± 6.2 × 10^−8
k | 0.661 ± 2.8 × 10^−5 | 0.719 ± 2.0 × 10^−7 | 0.535 ± 2.5 × 10^−5 | 0.581 ± 1.8 × 10^−7 | 0.649 ± 3.1 × 10^−5 | 0.694 ± 2.2 × 10^−7 | 0.520 ± 6.2 × 10^−6 | 0.566 ± 4.4 × 10^−8
R0 (m) | 1.672 ± 2.7 × 10^−5 | 1.824 ± 1.9 × 10^−7 | 1.540 ± 2.2 × 10^−5 | 1.674 ± 1.5 × 10^−7 | 1.589 ± 2.2 × 10^−5 | 1.727 ± 1.6 × 10^−7 | 3.020 ± 1.1 × 10^−5 | 3.319 ± 7.6 × 10^−8
Table 3. Estimated parameters and their precisions of the CCF and the HPF for Part 2. Each cell gives Est. ± σ.

Param. | Pagoda A, CCF | Pagoda A, HPF | Pagoda B, CCF | Pagoda B, HPF | Pagoda C, CCF | Pagoda C, HPF | Pagoda D, CCF | Pagoda D, HPF
Xc (m) | −0.286 ± 1.7 × 10^−3 | −0.099 ± 2.0 × 10^−6 | 0.006 ± 5.9 × 10^−4 | 0.155 ± 1.3 × 10^−6 | −0.133 ± 9.3 × 10^−4 | 0.061 ± 1.5 × 10^−6 | 1.591 ± 4.8 × 10^−4 | 0.871 ± 9.7 × 10^−7
Yc (m) | 0.351 ± 1.7 × 10^−3 | −0.006 ± 1.9 × 10^−6 | −0.121 ± 6.2 × 10^−4 | −0.120 ± 1.2 × 10^−6 | −0.223 ± 8.8 × 10^−4 | 0.014 ± 1.6 × 10^−6 | −1.134 ± 4.8 × 10^−4 | −0.851 ± 9.7 × 10^−7
Ω (°) | −3.328 ± 1.4 × 10^−2 | −0.552 ± 3.6 × 10^−5 | 0.745 ± 1.8 × 10^−4 | 0.630 ± 4.8 × 10^−7 | 4.011 ± 1.6 × 10^−4 | 1.948 ± 4.6 × 10^−7 | −2.292 ± 6.7 × 10^−5 | 0.859 ± 2.8 × 10^−7
Φ (°) | −3.947 ± 1.4 × 10^−2 | −2.683 ± 3.7 × 10^−5 | −2.349 ± 1.7 × 10^−4 | 0.172 ± 4.8 × 10^−7 | −2.578 ± 1.7 × 10^−4 | −0.745 ± 4.4 × 10^−7 | 5.500 ± 6.9 × 10^−5 | 0.802 ± 2.9 × 10^−7
Ψ (°) | n/a | −12.839 ± 2.8 × 10^−5 | n/a | −9.339 ± 3.6 × 10^−7 | n/a | −3.266 ± 4.1 × 10^−7 | n/a | −1.948 ± 1.6 × 10^−7
k | 2.483 ± 1.9 × 10^−3 | 1.237 ± 5.5 × 10^−6 | 1.307 ± 3.5 × 10^−4 | 1.145 ± 2.0 × 10^−6 | 2.026 ± 8.1 × 10^−4 | 1.472 ± 3.4 × 10^−6 | 1.481 ± 2.7 × 10^−4 | 0.816 ± 1.0 × 10^−6
R0 (m) | 2.817 ± 5.5 × 10^−4 | 3.159 ± 5.4 × 10^−7 | 2.644 ± 7.8 × 10^−5 | 2.913 ± 3.5 × 10^−7 | 2.674 ± 2.4 × 10^−4 | 2.988 ± 4.2 × 10^−7 | 4.608 ± 1.4 × 10^−4 | 5.126 ± 3.2 × 10^−7
Table 4. Estimated parameters and their precisions of the CCF and the HPF for Part 3. Each cell gives Est. ± σ.

Param. | Pagoda A, CCF | Pagoda A, HPF | Pagoda B, CCF | Pagoda B, HPF | Pagoda C, CCF | Pagoda C, HPF | Pagoda D, CCF | Pagoda D, HPF
Xc (m) | 0.003 ± 1.0 × 10^−4 | 0.000 ± 2.6 × 10^−7 | 0.098 ± 4.2 × 10^−5 | 0.088 ± 2.6 × 10^−7 | −0.102 ± 3.3 × 10^−5 | −0.104 ± 2.1 × 10^−7 | 0.862 ± 8.5 × 10^−6 | 0.885 ± 5.3 × 10^−8
Yc (m) | −0.019 ± 1.0 × 10^−4 | −0.018 ± 2.6 × 10^−7 | −0.203 ± 4.3 × 10^−5 | −0.199 ± 2.7 × 10^−7 | −0.033 ± 3.2 × 10^−5 | −0.035 ± 2.0 × 10^−7 | −0.896 ± 8.5 × 10^−6 | −0.929 ± 5.3 × 10^−8
Ω (°) | 0.222 ± 8.6 × 10^−3 | 0.241 ± 2.2 × 10^−5 | 0.057 ± 5.7 × 10^−5 | 0.401 ± 3.5 × 10^−7 | 1.203 ± 3.1 × 10^−5 | 1.261 ± 2.0 × 10^−7 | −0.229 ± 5.6 × 10^−6 | −0.401 ± 3.0 × 10^−8
Φ (°) | −1.506 ± 8.5 × 10^−3 | −1.424 ± 2.1 × 10^−5 | 0.115 ± 5.6 × 10^−5 | 0.344 ± 3.5 × 10^−7 | 0.458 ± 3.2 × 10^−5 | 0.401 ± 2.0 × 10^−7 | −0.057 ± 5.5 × 10^−6 | −0.115 ± 3.0 × 10^−8
Ψ (°) | n/a | −12.863 ± 1.3 × 10^−5 | n/a | −9.053 ± 2.3 × 10^−7 | n/a | −3.381 ± 1.7 × 10^−7 | n/a | −1.834 ± 2.0 × 10^−8
k | n/a | n/a | 0.002 ± 4.1 × 10^−5 | n/a | −0.004 ± 2.2 × 10^−5 | n/a | 0.001 ± 4.5 × 10^−6 | n/a
R0 (m) | 2.426 ± 7.3 × 10^−5 | 2.653 ± 2.1 × 10^−7 | 2.531 ± 3.0 × 10^−5 | 2.769 ± 2.2 × 10^−7 | 2.654 ± 2.3 × 10^−5 | 2.909 ± 1.7 × 10^−7 | 5.046 ± 6.8 × 10^−6 | 5.522 ± 4.8 × 10^−8
Table 5. Estimated parameters and their precisions of the fittings of the proposed model for the entire pagoda. Each cell gives Est. ± σ.

Param. | Pagoda A | Pagoda B | Pagoda C | Pagoda D
Xc (m) | 0.049 ± 3.5 × 10^−4 | −0.007 ± 8.7 × 10^−4 | −0.137 ± 4.0 × 10^−4 | 1.004 ± 4.8 × 10^−4
Yc (m) | −0.015 ± 3.5 × 10^−4 | 0.076 ± 9.6 × 10^−4 | 0.141 ± 4.0 × 10^−4 | −0.902 ± 5.0 × 10^−4
Ω (°) | 0.097 ± 4.5 × 10^−3 | 1.089 ± 1.1 × 10^−4 | 1.203 ± 4.3 × 10^−5 | −0.172 ± 6.5 × 10^−5
Φ (°) | −1.901 ± 4.4 × 10^−3 | 0.516 ± 1.1 × 10^−4 | 0.458 ± 4.3 × 10^−5 | −0.115 ± 6.3 × 10^−5
Ψ (°) | −12.800 ± 1.7 × 10^−2 | −9.282 ± 3.9 × 10^−4 | −3.438 ± 1.6 × 10^−4 | −1.834 ± 1.1 × 10^−4
k1 | 0.721 ± 6.6 × 10^−4 | 0.586 ± 1.6 × 10^−3 | 0.705 ± 6.5 × 10^−4 | 0.569 ± 1.2 × 10^−3
R1 (m) | 5.474 ± 3.3 × 10^−3 | 10.074 ± 2.3 × 10^−2 | 12.491 ± 1.0 × 10^−2 | 13.926 ± 2.1 × 10^−2
k2 | 1.216 ± 1.5 × 10^−2 | 1.197 ± 2.3 × 10^−2 | 1.544 ± 1.3 × 10^−2 | 1.008 ± 1.7 × 10^−2
R2 (m) | 7.374 ± 5.0 × 10^−2 | 18.063 ± 2.9 × 10^−1 | 24.267 ± 1.8 × 10^−1 | 20.986 ± 2.5 × 10^−1
R3 (m) | 2.653 ± 5.3 × 10^−4 | 2.769 ± 5.9 × 10^−4 | 2.908 ± 2.3 × 10^−4 | 5.523 ± 3.4 × 10^−4
k4 | 1.667 ± 2.1 × 10^−2 | 2.426 ± 3.1 × 10^−1 | 1.928 ± 4.9 × 10^−2 | 1.694 ± 3.4 × 10^−2
R4 (m) | 2.679 ± 8.5 × 10^−3 | 24.733 ± 2.7 × 10^0 | 20.255 ± 4.3 × 10^−1 | 18.653 ± 2.5 × 10^−1
R5 (m) | 2.768 ± 5.4 × 10^−4 | 2.774 ± 7.1 × 10^−4 | 2.959 ± 2.9 × 10^−4 | 5.520 ± 2.9 × 10^−4
k6 | 1.608 ± 2.7 × 10^−2 | 3.016 ± 3.0 × 10^−1 | 1.692 ± 3.6 × 10^−2 | n/a
R6 (m) | −3.840 ± 1.2 × 10^−1 | 17.982 ± 1.5 × 10^0 | 10.896 ± 1.6 × 10^−1 | n/a
R7 (m) | 2.878 ± 5.3 × 10^−4 | 2.854 ± 6.7 × 10^−4 | 3.050 ± 3.3 × 10^−4 | n/a
Table 6. Root-mean-square error (RMSE) of the iterative closest point (ICP) matching for the symmetry analysis.

Metric | Pagoda A | Pagoda B | Pagoda C | Pagoda D | Mean
RMSErot (m) | 0.0424 | 0.0572 | 0.0389 | 0.0632 | 0.0504
RMSEref (m) | 0.0486 | 0.0548 | 0.0451 | 0.0597 | 0.0520
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
