Article

Highly Efficient Deep Learning-Enabled Parameterization and 3D Reconstruction of Traditional Chinese Roof Structures

1 Nantong Key Laboratory of Spatial Information Technology R&D and Application, College of Geographic Science, Nantong University, Nantong 226019, China
2 Jiangsu Yangtze River Economic Belt Research Institute, Nantong University, Nantong 226019, China
3 Nan Tong Surveying & Mapping Institute Co., Ltd., Nantong 226019, China
* Author to whom correspondence should be addressed.
Sensors 2026, 26(3), 1054; https://doi.org/10.3390/s26031054
Submission received: 18 December 2025 / Revised: 27 January 2026 / Accepted: 4 February 2026 / Published: 5 February 2026
(This article belongs to the Topic 3D Documentation of Natural and Cultural Heritage)

Abstract

Ancient Chinese architecture, with its symmetrical structures, curved roofs, and upturned eaves, presents a unique architectural aesthetic and is a treasure of Chinese culture. Recently, unmanned aerial vehicle (UAV) oblique photogrammetry and laser scanning have greatly facilitated the realistic replication of ancient buildings and have become crucial data sources for the historical building information modelling (HBIM) of such buildings. However, the curved surfaces and upturned eaves of traditional Chinese roofs make parameter extraction and geometric model representation difficult. Because symmetry is typical of ancient Chinese architecture, recognizing the symmetrical structure of a traditional Chinese roof and using “mirror replication” to quickly generate the other half of the model can effectively reduce the number of parameters and the modelling effort. Accurate symmetry detection and highly efficient parameter extraction are therefore crucial for the HBIM of traditional Chinese roofs. In this study, a deep learning network, TCRSym-Net, is proposed to identify symmetry from point clouds of traditional Chinese roofs. Each roof point cloud is then relocated and reoriented to obtain longitudinal and cross sections, and parametric modelling scripts coded in Dynamo model the roofs via curve lofting and solid Boolean operations. The experimental results reveal that TCRSym-Net detects roof symmetry effectively, and five different types of traditional Chinese roofs are successfully recreated, confirming the reliability of the method.

1. Introduction

As a carrier of Chinese civilization and art, ancient Chinese architecture has become a symbol of Eastern aesthetics because of its unique artistic form and profound cultural connotations. The main material used in ancient Chinese architecture was wood. Owing to the fragility of wooden structures and natural or man-made disasters, many ancient Chinese buildings have been lost to history [1]. Therefore, 3D digital modelling technology is becoming increasingly necessary for understanding and repairing these buildings. With the rapid renewal and development of cities and the increasing risk of irreversible damage to historical buildings, integrating the cultural heritage of ancient architecture with digital archives has become an inevitable trend [2].
Traditional surveying and recording methods are not sufficient to satisfy the demands of highly efficient modelling of historical buildings, and parameterizing nongeometric attributes, such as historical information and material data, is difficult [3,4]. With its advantages in parametric modelling and information integration, historical building information modelling (HBIM) technology has emerged as an essential tool for the preservation of ancient architectures [5,6].
Early studies focused primarily on constructing modular systems [7,8] by following the regulations and principles of the cai-fen system to construct BIM components, and parametrically modelling ancient architectural structures through key control parameters. These approaches meet the requirements of most standardized ancient architectural models and are typically used for model construction during the design phase. However, constructing systems of this type on the basis of rules requires the effective acquisition of control parameters, and traditional tape measures and total stations require surveyors to possess professional knowledge of ancient architecture. The emergence of laser scanning and oblique photogrammetry technologies has effectively improved the efficiency of field data collection, and these methods are being increasingly applied in the field of heritage building protection [9,10]. However, efficiently obtaining parameters and quickly constructing BIM models on the basis of these parameters are problems that urgently need to be solved [11].
The structures of ancient Chinese architecture can be roughly classified into three parts: foundation, body, and roof. Creating an accurate roof model is an important step in the 3D modelling process. Common roof types of ancient buildings include hipped roofs, gable and hip roofs, flush/overhanging gable roofs, and four-corner tents. The roofs of ancient Chinese buildings usually have symmetrical structures, and a curved roof shape combined with different cornice warping angles forms a unique architectural style. However, the curved surfaces and upturned eaves of traditional Chinese roofs make parameter extraction and geometric model representation difficult for HBIM. Symmetry aids in reducing complexity by representing an entity with less information [12]. By identifying symmetrical structures, the number of parameters in the geometric representation of curved roofs can be effectively reduced: the other half of the model can be generated through techniques such as “mirror replication”, thereby reducing the workload of repetitive modelling.
While symmetry identification using images has become increasingly popular, symmetry detection in ancient Chinese architecture using point clouds is still highly challenging and has undergone very little investigation [13,14]. The emergence of deep learning methods has greatly improved the ability to extract features [15,16], and many studies have used deep learning techniques to study symmetry detection methods [4]. However, there is currently a lack of relevant datasets for traditional Chinese roof modelling, and existing symmetry detection networks are typically designed for specific applications and struggle to meet the requirements of ancient architectural parameter extraction. Therefore, in this study, a deep learning-based symmetry detection and 3D modelling method for traditional Chinese roofs is proposed. The contributions of this study are threefold:
(1) A deep neural network, TCRSym-Net, is proposed for recognizing the symmetry of point clouds of traditional Chinese roofs;
(2) A local roof coordinate system is determined on the basis of symmetry information, expressions for the longitudinal and cross sections and the parameters of the upturned eaves of the roof are defined, and the defined roof parameters are extracted from point clouds;
(3) The curved surfaces of various traditional Chinese roof types are constructed into BIM using Dynamo modelling scripts.
The rest of this paper is structured as follows. The related works are reviewed in Section 2. The proposed symmetry detection method is described in detail in Section 3. The experiments and discussion are presented in Section 4 and Section 5, respectively. The paper is concluded in Section 6.

2. Related Works

As a nonrenewable carrier of cultural heritage, ancient Chinese architecture carries the historical progression of Chinese civilization and the wisdom of craftsmen. In recent years, 3D laser scanning and oblique photogrammetry technologies have been widely used to collect data from ancient architectural relics. However, this type of work is time-consuming and labour-intensive, and there is an urgent need to implement automated modelling methods. The surface modelling of roof structures, which are crucial components of ancient architecture, still faces numerous challenges, primarily because of the difficulties in extracting geometric parameters and constructing geometric models for roof surfaces. The symmetrical characteristics of ancient Chinese architecture enable the effective simplification of model parameter expressions through symmetry detection, which is crucial for high-precision geometric parameter extraction and model reconstruction. Thus, various aspects that are closely related to this study, including symmetry detection and the 3D modelling of traditional Chinese roofs, are briefly reviewed.

2.1. Symmetry Detection

Symmetry is characterized by visual harmony and satisfies human cognitive instincts and emotional needs. Symmetry is like the “grammar rules” of nature and defines the order of basic laws [14]. Symmetry detection is an important research topic in computer vision, and many scholars have studied symmetry detection in two-dimensional images [17]. Researchers have used symmetry correspondences to reconstruct shapes in various representations, such as points [18] and curves [19]. Ecins et al. [18] proposed an ICP-like method for segmenting symmetric objects and retrieving their symmetries from 3D point clouds of natural settings. The method alternates between identifying correspondences between symmetric locations and fine-tuning the candidate symmetry plane on the basis of these correspondences.
In recent years, by leveraging vast amounts of training data and the expressive capability of deep neural networks, significant progress has been made in 3D point cloud symmetry detection. As the first unsupervised symmetry detection method, PRS-Net [20] can simultaneously identify planar reflection symmetry and rotational symmetry. Nevertheless, this approach supports only a restricted number of rotation axes and reflection planes, and as the number of predicted planes and rotation axes increases, the number of network parameters increases approximately linearly [21]. Ji et al. [22] converted the problem of identifying symmetry in 3D point clouds into a problem of classifying symmetry points. This approach labels the points on the symmetry plane as positive samples and trains a multiscale deep neural network based on the PointNet++ architecture [23]; a weighted cross-entropy loss function is employed to address the imbalance between positive and negative samples. A preliminary symmetry plane equation is then computed using the RANSAC algorithm and the least-squares approach on the basis of the point-by-point classification results. SymmetryNet, proposed by Shi et al. [24], improves symmetry detection with pointwise multitask prediction: it predicts the symmetric position of each point and the foot points on the symmetry plane, and clusters the predicted symmetries during inference.
Although symmetry detection has received widespread attention, research has often focused on specific objects or datasets. Research on, and datasets for, the symmetry detection of traditional Chinese roofs are lacking, which makes meeting the needs of high-precision parameter extraction and reverse BIM for ancient Chinese buildings difficult [25]. With respect to the complex geometric shapes of ancient building roofs, traditional symmetry detection algorithms still have limitations in terms of computational efficiency and noise interference removal. Therefore, developing deep learning methods for symmetry detection in ancient buildings that meet the requirements for ancient building modelling is very important. Owing to the widespread existence of symmetry in ancient Chinese architecture, accurately and efficiently identifying the symmetry of buildings in the 3D point clouds of ancient architectural scenes is highly important for BIM object modelling.

2.2. Three-Dimensional Modelling of Traditional Chinese Roofs

Building model reconstruction is a vibrant and interdisciplinary research focus, drawing significant attention from the fields of photogrammetry, computer vision, and remote sensing [6]. Early research on the 3D modelling of ancient Chinese architecture focused mainly on constructing a parameterized component library of ancient buildings and generating BIM models of ancient buildings by controlling family models through key parameters [26]. Shen et al. [27] introduced a methodology for parameterizing the design and construction regulations for traditional curved roofs as outlined in the Yingzao Fashi. The method converts the design and construction principles for intricate curved roofs into mathematical models, allowing designers to create such architectural forms. A rule-based method was introduced by Liu et al. [8] for the creation of Song-era ancient Chinese architecture. The technique formalizes construction standards for various architectural styles and parameterizes the wooden parts of structures based on the hierarchical topology of structural patterns and the unique module system found in traditional Chinese architecture. Hu et al. [7] provided an editable initial frame and an automated level of detail (LOD) approach for modelling various styles of ancient Chinese architecture. This method can automatically simplify building models without the need for prefabricated low-poly proxies.
An increasing number of projects have used laser scanning technology or oblique photogrammetry for ancient architectural surveys [11,28] and have reconstructed models through reverse engineering modelling. The specific BIM families that have been constructed can meet the requirements for 3D modelling of ancient Chinese architecture, but require high-precision acquisition of control parameters. Extracting control parameters from point clouds or oblique photogrammetry models remains a highly complex process. This is because point clouds are scattered, unstructured data without semantic information, which makes identifying the geometric shapes of instances and extracting parameters difficult.
Segmenting each surface and replacing point clouds with geometric primitives are necessary steps in reconstructing a geometric model. When laser point clouds or oblique photogrammetry are used for data acquisition and reverse modelling of ancient Chinese architecture, two key issues need to be addressed: (1) the semantic segmentation of point clouds [29,30] and (2) primitive shape detection or geometric parameter extraction. Li et al. [1] presented a semantic classification and 3D model expression for Chinese roofs. This method involves a two-level semantic decomposition of the roof based on the characteristics of ancient Chinese-style architecture. Ji et al. [31] proposed an improved DGCNN-based network for roof extraction, which aims to extract roof structures from point clouds automatically. Huo et al. [32] proposed a novel method for accurate 3D modelling of roof decorative components specific to Ming and Qing official-style architecture. This approach involves the establishment of a standardized template library containing a variety of decorative elements. Dong et al. [33] presented an automatic classification technique for identifying roof types in Ming and Qing Dynasty official-style architecture. The approach utilizes a hierarchical semantic network framework to extract key geometric features and analyses ridge structures through an attributed relational graph, with classification thresholds based on historical construction rules.
Roof modelling is an important component of the modelling of ancient Chinese architecture. In terms of extracting the geometric parameters of roof surfaces, existing parameter extraction methods focus mainly on regular geometric structures [8], such as planes [34] and cylinders [4]. The roof structure in traditional ancient Chinese architecture is usually a curved surface structure, which is more complex to model than a flat roof. Effectively extracting roof parameters from three-dimensional point clouds and constructing curved roof structures is difficult [35]. To meet modelling needs, researchers have used the nonuniform rational B-spline (NURBS) to represent curved surfaces. Barazzetti et al. [36] obtained accurate BIM models from point cloud data by reconstructing complex and irregular objects on the basis of NURBS curves and surfaces.
Traditional methods usually involve manual or semiautomatic parameter extraction, which is not convenient for the automatic modelling of traditional Chinese roofs. Thus, developing a flexible and accurate method for parameter extraction for traditional Chinese roofs with different shapes is important.

3. Materials and Methods

3.1. Overview

In this section, we present the methodological framework for the three-dimensional (3D) reconstruction of traditional Chinese architectural roof structures. As illustrated in Figure 1, the overall workflow is systematically organized into four major phases. First, point cloud preprocessing, which includes noise filtering and roof segmentation, is performed. Second, a deep learning network, namely, TCRSym-Net, is proposed for the symmetry detection of ancient building roofs. Third, parameter extraction and curved surface representation for roof surfaces are conducted. Finally, 3D modelling of roofs is performed using Dynamo.

3.2. TCRSym-Net for Symmetry Detection for Traditional Chinese Roofs

3.2.1. Network Architecture

In this study, TCRSym-Net is proposed for symmetry detection. The overall architecture of TCRSym-Net is shown in Figure 2. The input of the network is a three-dimensional roof point cloud P = {p_i}, i ∈ [1, N]. The input point cloud is first centralized and normalized. The normalized data are then embedded into a new feature space. These embedded features are processed through four cascaded offset attention (OA) modules to learn discriminative pointwise features. The learned geometric features are concatenated and aggregated via a spatially weighted pooling layer to obtain a global feature representation. Finally, these features are fed into multiple task-specific prediction branches to estimate the centre point on the symmetry plane, the foot points, and the normal vectors.
The embedding module enhances the local feature extraction capability by incorporating sampling and grouping (SG) layers and Linear-ReLU (LR) layers [12]. The SG layer first uses the farthest point sampling (FPS) algorithm to downsample the input point cloud and uses the k-NN algorithm to search for the nearest neighbours of each sampled point. It then calculates the difference between the features of the neighbouring points and those of the sampled points by pointwise subtraction; this difference is concatenated with the sampled-point features and input into the LR layers. Finally, the local features of the point cloud are obtained through max pooling.
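The SG step described above can be sketched in a few lines of NumPy. This is a minimal illustration of FPS downsampling, k-NN grouping, and pointwise feature subtraction, not the authors' implementation; the function names and the max pooling used as a stand-in for the LR layers are our own assumptions.

```python
import numpy as np

def farthest_point_sampling(points, n_samples):
    """Iteratively pick the point farthest from the already-chosen set."""
    n = points.shape[0]
    chosen = np.zeros(n_samples, dtype=int)
    min_dist = np.full(n, np.inf)
    chosen[0] = 0
    for i in range(1, n_samples):
        d = np.linalg.norm(points - points[chosen[i - 1]], axis=1)
        min_dist = np.minimum(min_dist, d)
        chosen[i] = int(np.argmax(min_dist))
    return chosen

def sample_and_group(points, feats, n_samples, k):
    """SG layer sketch: FPS downsampling, k-NN grouping, relative features."""
    idx = farthest_point_sampling(points, n_samples)
    centers = points[idx]                             # (n_samples, 3)
    # k nearest neighbours of each sampled centre
    d2 = ((points[None, :, :] - centers[:, None, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, :k]               # (n_samples, k)
    # difference between neighbour and centre features, concatenated
    # with the (repeated) centre features, as described in the text
    diff = feats[knn] - feats[idx][:, None, :]
    center_rep = np.repeat(feats[idx][:, None, :], k, axis=1)
    grouped = np.concatenate([diff, center_rep], axis=-1)
    # max pooling over each neighbourhood (stand-in for the LR layers)
    return centers, grouped.max(axis=1)
```

With 3D coordinates used as input features, a 128-point cloud downsampled to 32 centres with k = 8 yields a (32, 6) local feature matrix.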
Following PCT [37], the offset attention (OA) module uses an offset-attention mechanism to generate optimized attention features on the basis of contextual information. It first calculates the self-attention (SA) features from input features, then calculates the offset between the SA features and input features. The offset feeds the LR network and concatenates with input to ultimately obtain the OA features.
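A minimal single-head sketch of the offset-attention idea follows: self-attention features are computed, their offset from the input is transformed, and a residual connection adds the result back. The plain softmax attention and the ReLU used as a stand-in for the LR network are illustrative assumptions, not the exact PCT formulation.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def offset_attention(x, wq, wk, wv):
    """Offset-attention sketch: SA features, offset from input,
    LR-like transform, residual add. x: (n_points, d)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[1]), axis=-1)
    sa = attn @ v                 # self-attention features
    offset = x - sa               # offset between input and SA features
    lr = np.maximum(offset, 0.0)  # stand-in for the Linear-ReLU network
    return x + lr                 # residual connection back to the input
```

Stacking four such modules, as in TCRSym-Net, amounts to applying this transform repeatedly with separately learned weights.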
Multitask prediction branch networks contain three branches that predict the centre points on the symmetry plane, foot points, and normal vectors. Each branch contains four MLP layers and is implemented using Conv1d convolution and the ReLU function [38] to output the final predictions. All branches share an underlying feature extraction network, and each branch updates the parameters independently.

3.2.2. Loss Function

For the input roof point cloud P, the ground-truth symmetry plane is expressed as ŝ_ref = {ĉ_s, n̂_s}, where ĉ_s denotes an arbitrary point on the symmetry plane and n̂_s represents the corresponding normal vector. TCRSym-Net predicts the set of foot points O_s = {o_i^s}, i ∈ [1, N], for the input point cloud P. Each point p_i ∈ P has a perpendicular foot point o_i^s on the reference symmetry plane s_ref; specifically, o_i^s is the orthogonal projection of p_i onto the symmetry plane. The ground-truth value of o_i^s is derived from this projection relationship: ô_i^s = p_i − n̂_s((p_i − ĉ_s) · n̂_s).
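The foot-point projection can be written directly in code. A small NumPy helper (the function name is ours) implementing ô = p − n̂((p − ĉ) · n̂):

```python
import numpy as np

def foot_point(p, c, n):
    """Orthogonal projection of point p onto the plane through c
    with normal n: o = p - n * ((p - c) . n)."""
    n = n / np.linalg.norm(n)      # ensure a unit normal
    return p - n * np.dot(p - c, n)
```

For example, projecting (1, 2, 3) onto the plane z = 0 yields (1, 2, 0), and the residual (o − c) · n is zero by construction.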
A composite loss function that combines geometric alignment and symmetry constraints is used. The multibranch prediction model predicts the normal vectors Q_s = {n_i^s}, i ∈ [1, N], of the symmetry plane, points C_s = {c_i^s}, i ∈ [1, N], on the plane, and the foot points O_s for the input point cloud P. For each point p_i, the total symmetry-prediction loss is

L_i = L_i^foot + L_i^c + w · L_i^conf

The loss of the entire network is the sum of the per-branch losses. The loss for each branch comprises the symmetry plane point loss L_i^c, the foot point loss L_i^foot, and the confidence loss L_i^conf:

L_i^foot = d²(o_i^s, ô_i^s)

L_i^c = d²(c_i^s, ĉ_s)

L_i^conf = (1/N) Σ_{j∈N} CE(p_ij, p̂_ij) + d²(n_i^s, n̂_s)

The average Euclidean distances between the predicted points and the target points are calculated for L_i^c and L_i^foot, and the confidence loss is obtained by summing the symmetric-point loss CE(p_ij, p̂_ij) and the normal vector loss d²(n_i^s, n̂_s), where CE(p_ij, p̂_ij) is the cross-entropy loss and d²(n_i^s, n̂_s) is the mean angular error between the predicted normals and the ground truth. The default value of the weight w is 0.5. During inference, the DBSCAN algorithm [24] is used to cluster the pointwise-estimated normal vectors, with the cluster-centre vectors serving as the final predicted normal vectors. This enables the identification of multiple symmetry planes.
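The composition of the per-point loss can be illustrated numerically. The helper below (names ours) assumes squared Euclidean distances for the two point terms and takes the confidence term as a precomputed scalar; it is a sketch of the weighting scheme, not the training code.

```python
import numpy as np

def composite_loss(o_pred, o_gt, c_pred, c_gt, conf_loss, w=0.5):
    """Per-point total loss sketch: L = L_foot + L_c + w * L_conf,
    with squared Euclidean distances for the point terms."""
    l_foot = np.sum((o_pred - o_gt) ** 2)   # foot point loss
    l_c = np.sum((c_pred - c_gt) ** 2)      # symmetry plane point loss
    return l_foot + l_c + w * conf_loss     # default weight w = 0.5
```

With a foot-point error of 1, a zero plane-point error, and a confidence loss of 2, the total is 1 + 0 + 0.5 × 2 = 2.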

3.3. Extraction of Roof Parameters

In previous studies, researchers have parameterized the design and construction rules that are specified in Yingzao Fashi for these roofs and transformed the complex curved roof design and construction rules into mathematical models to model curved roofs. In the research by Shen et al. [27], the correlation between the roof rise and the positioning of the front and rear purlins was fundamentally governed by the architectural grade of the structure. Furthermore, they determined two key parameters: the total step coefficient m and the roof curvature. These calculations were directly derived from the length of each individual rafter span, defined as the distance between adjacent structural seams. The rising and bending rules reduce the height between the rafters at different positions on the roof to form a beautiful and functional roof surface. The core of this approach lies in the calculation of the height of each important node on the roof to achieve a natural transition of the roof curve from the ridge to the eaves.
However, this method struggles to meet the demand for rapid roof surface modelling from drone scanning data, as beam span parameters are difficult to extract from top-view scans. Additionally, many historic buildings feature more flexible designs and do not satisfy traditional regulatory constraints. Therefore, building upon the research by Shen et al. [27], in this study, the use of roof point curve fitting in reverse modelling to obtain roof parameters is proposed. For large-scale scanning of ancient architecture and rapid modelling with drones, this method is straightforward and flexible.

3.3.1. Definitions of the Local Coordinate System and Roof Parameters

In this study, five different roof types of traditional ancient Chinese buildings, namely, hipped roofs, gable and hip roofs, flush/overhanging gable roofs, four-corner tents, and double-eave gable and hip roofs, are determined to meet the parameter types for parametric modelling.
A local coordinate system is established to parameterize the roof geometry. The origin is set at the midpoint of the main ridge, with the x-axis aligned along the ridge direction and the z-axis oriented vertically upward. The longitudinal section is defined by a plane through the origin perpendicular to the y-axis, while the cross section is defined by a plane through the origin perpendicular to the x-axis. The geometric shape of the roof is described using parameters that are extracted from the two sections. The geometric shape of the roof observed from the front and side is shown in Figure 3.
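Relocating and reorienting a roof cloud into this local frame amounts to a translation plus a change of basis. A NumPy sketch follows, assuming a horizontal ridge so that the vertical z-axis can be kept fixed; the function name is ours.

```python
import numpy as np

def to_local_frame(points, origin, ridge_dir):
    """Transform points into the local roof frame: origin at the ridge
    midpoint, x-axis along the ridge, z-axis vertical (horizontal ridge
    assumed), y-axis completing a right-handed basis."""
    x = ridge_dir / np.linalg.norm(ridge_dir)
    z = np.array([0.0, 0.0, 1.0])
    y = np.cross(z, x)
    y = y / np.linalg.norm(y)
    R = np.stack([x, y, z])          # rows are the new basis vectors
    return (points - origin) @ R.T   # translate, then rotate
```

A point one unit along the ridge direction from the origin maps to (1, 0, 0) in the local frame, as expected.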
For the flush/overhanging gable roof, the longitudinal section contains the main ridge, the cross section contains the front slope–back slope curve, and the curve is stored via the fitted coordinate points. For the hipped roof, the longitudinal section shows the main ridge and two symmetrical slope curves, whereas the cross section shows the front slope and back slope curves. For the gable and hip roof, the longitudinal section shows the main ridge and two symmetrical slope curves, whereas the cross section shows the front slope and back slope curves. For the four-corner-tent roof, the longitudinal section shows two symmetrical slope curves, and the cross section shows the front slope and back slope curves. For the double-eave gable and hip roof, the longitudinal section shows the main ridge and two symmetrical slope curves for the upper eave, as well as two slope curves for the lower eave. The cross section shows the front and rear slope curves of the upper eave and the front and rear slope curves of the lower eaves (Figure 4).
The upturned eaves are the corner parts of the eaves in traditional Chinese architecture and are named for their upwards curve resembling a bird’s wing. They primarily connect the adjacent sloping eaves of the roof. To effectively construct a three-dimensional model of the architectural upturned eaves, the parameters that are recorded for each upturned eave include the following: the rising point, the offset distance dx from the starting point of the eave in the longitudinal section and the offset distance dy from the starting point of the eave in the cross section.

3.3.2. Roof Parameter Extraction Method

To create a roof model with the defined roof parameters, these parameters must be extracted from the point clouds. Two main types of parameters are extracted for a curvilinear roof: roof surface parameters and upturned eave parameters. After symmetry detection via TCRSym-Net, a symmetry plane s = {c, n_s} perpendicular to the main ridge is chosen to segment the symmetry-plane section point cloud. The distance from a point p_i to the symmetry plane is d_i^s = |(p_i − c) · n_s|, with ‖n_s‖ = 1, and points where d_i^s < d form the symmetry-plane section point cloud (Figure 5a). The highest point p_hi in this section is selected as the origin p_ori of the coordinate system (Figure 6a,b). For ridges with offsets, the offset value h is manually set, and the origin is shifted to p_ori = p_hi − h · (0, 0, 1).
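Slicing a thin section around the symmetry plane is a one-line distance filter. A NumPy sketch with a hypothetical slice half-thickness d (function name ours):

```python
import numpy as np

def plane_section(points, c, n, d=0.05):
    """Keep points within distance d of the plane {c, n} to form a thin
    section slice; d is an assumed half-thickness in metres."""
    n = n / np.linalg.norm(n)
    dist = np.abs((points - c) @ n)   # |(p_i - c) . n_s| per point
    return points[dist < d]
```

The highest point of the returned slice can then be taken as the origin candidate p_hi.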
The origin p_ori and the symmetry-plane normal vector are used to apply a rotation to the roof point cloud so that the ridge direction aligns with the x-axis. Afterwards, a plane perpendicular to the x-axis is used to create a cross section P_cs, and a plane perpendicular to the y-axis is used to create a longitudinal section P_ls (Figure 5b).
The main ridge parameters are derived from the longitudinal section point cloud. Ridge points are identified by projecting the roof point cloud onto a line L along the x-axis, subject to the condition d_i^L < d. Given two points p_C and p_D on line L, the projection of p_i onto L is denoted p_i^L:

p_i^L = p_C + (((p_i − p_C) · (p_D − p_C)) / ‖p_D − p_C‖²) (p_D − p_C)
The projected points on the straight line along the x-axis form a segment that can be used as the main ridge segment, as shown in Figure 6d. The coordinates of the main ridge point are stored on the basis of the offset from point C. To obtain the smooth curves of the roof, we used B-splines to fit the points in the sections. B-splines are mathematical representations that can accurately model complex two-dimensional or three-dimensional free-form organic curves. As shown in Figure 6c,d, the curved line is fitted using B-splines from the section point cloud, and ten refitted points are saved as the roof surface parameters.
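The line-projection step translates directly into code. A NumPy helper (name ours) projecting a point onto the line through p_C and p_D:

```python
import numpy as np

def project_to_line(p, p_c, p_d):
    """Project p onto the line through p_c and p_d:
    p_L = p_c + t * (p_d - p_c), t = ((p - p_c) . u) / (u . u)."""
    u = p_d - p_c
    t = np.dot(p - p_c, u) / np.dot(u, u)
    return p_c + t * u
```

For a line along the x-axis through the origin, the point (2, 3, 4) projects to (2, 0, 0).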
The upturned eave parameters include the rising point and the offset distances dx and dy. The trend plane of the roof slope is used to segment the upturned eave portion (Figure 6e); the highest point of the segmented point cloud is taken as the rising point, and the offset distances dx and dy are calculated. The offset distance dx is measured from the rising point along the x-direction to the starting point of the eave, and dy is measured from the rising point along the y-direction to the starting point of the eave. Regarding the overhang distance in ancient architecture, we determine two planes parallel to the x-axis and y-axis at a distance d_overhang inward from the eaves and obtain the transition points. As shown in Figure 6f, the transition points are denoted by G and H. The curve of the eaves is determined by the rising point, transition points, and eave points.
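Extracting the rising point and its offsets can be sketched as follows. The eave cloud is assumed to be already segmented, and the function and variable names are ours; this is an illustration of the definitions above, not the authors' code.

```python
import numpy as np

def eave_parameters(eave_points, eave_start):
    """Rising point = highest point of the segmented eave cloud;
    dx, dy = its offsets from the eave starting point along x and y."""
    rising = eave_points[np.argmax(eave_points[:, 2])]  # max-z point
    dx = rising[0] - eave_start[0]
    dy = rising[1] - eave_start[1]
    return rising, dx, dy
```

The returned triple directly populates the per-eave parameter record described in Section 3.3.1.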

3.4. Three-Dimensional Modelling of Roofs on the Basis of Dynamo

In this section, a parametric modelling method for traditional Chinese roofs based on Dynamo is introduced. Dynamo allows users to engage in a visual programming workflow in which they connect elements to establish relationships and define the sequence of actions that form custom algorithms. As a traditional Chinese roof usually has two symmetry planes, “mirror replication” can be used to quickly generate the other half of the model. In this case, a 1/4 foundation model is constructed as the base model and then mirrored to restore the overall roof model during the modelling process. For example, for the parameterized modelling of a 1/4 foundation model of a single-eave gable and hip roof, the roof is divided into four parts: the roof front slope, sloping gable surface, upturned eave corners, and pediment surface (Figure 7).
The curve lofting method is used for roof front slope and sloping gable surface modelling. In the curve lofting method, a profile is swept along a path and is moved and aligned perpendicular to the path (Figure 8). A swept surface is defined as the surface that is generated by moving a profile curve along a trajectory curve. For a gable and hip roof, the fitted spline from the cross section P c s serves as the lofting curve, and the main ridge serves as the path that creates the front slope surface of the roof; the fitted B-spline from the longitudinal section P l s is used as the lofting curve, and the eave curve is used as the path to create the sloping gable surface.
To model upturned eaves, we use a Boolean operation method to model the complex upturned eave surfaces (Figure 9). In this method, two adjacent roof surfaces A and B are laid out, and surface B is thickened by 5 m, whereas the other surface is thickened by 0.001 m. A and B’ are then used for solid Boolean operations to model one of the upturned eave surfaces. This process is repeated: surface A is thickened by 5 m, surface B is thickened by 0.001 m, and then the thickened surfaces A’ and B are used for Boolean operations to model another upturned eave surface. Multiple parts of the model are combined to complete the 1/4 foundation model of the roof, and the complete roof model is generated by mirroring the 1/4 foundation model twice.
The automatic BIM program was developed on the basis of Dynamo. The interpolated roof slope spline is created by applying the NurbsCurve.ByPoints method to a collection of points. The roof slope spline is then swept along the eave curve to create surfaces with Surface.BySweep, and each surface is transformed into a solid with Surface.Thicken. The upturned eave models are constructed from the Boolean difference between two solids, and the intersection points between two thickened surfaces are determined with Geometry.Intersect. The multiple parts of the model are then combined to complete the 1/4 foundation model of the roof. Finally, the complete roof model is generated by mirroring the 1/4 foundation model twice with Geometry.Mirror.

4. Results

4.1. Datasets

The experimental dataset was obtained by oblique photogrammetry and includes point cloud data for ancient buildings in areas such as Datong, Shanghai, and Nantong. The original point clouds were acquired by oblique photogrammetry [39], with ground resolutions ranging from 2 cm to 10 cm; the noise level depends on the accuracy of the photogrammetric 3D reconstruction. The point cloud density ranges from 43.8 points/m³ to 21,413.6 points/m³. As shown in Figure 10, the roof styles include the flush gable roof, the hip roof, the single-eave gable and hip roof, the four-corner tent roof, and the double-eave gable and hip roof. The dataset contains approximately 330 unique roofs. The number of roofs of each type is given in Table 1: 100 flush gable roofs, 54 hipped roofs, 100 single-eave gable and hip roofs, 28 four-corner tents, and 48 double-eave gable and hip roofs. During the preprocessing phase, we uniformly sampled 40,960 points from each roof.
Because the acquired data are vertically oriented, annotation was performed on the projected image. The original point cloud of each ancient building was segmented to obtain the roof point cloud, which was then projected onto the XOY plane to obtain the projected image I. An annotation tool was used to draw a 2D line segment l along the symmetry axis on the image. The direction vector D of the segment l was calculated and rotated by 90 degrees around the z-axis, yielding a new vector N that serves as the normal vector of the symmetry plane.
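The annotation step maps a drawn 2D segment to a plane normal by a 90° rotation about the z-axis; a minimal sketch (the endpoint coordinates are illustrative):

```python
import math


def symmetry_normal(p0, p1):
    """Normal vector of the symmetry plane from an annotated 2D segment.

    Normalize the segment direction D = p1 - p0, then rotate it by
    90 degrees about the z-axis, i.e. (x, y) -> (-y, x).
    """
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    dx, dy = dx / length, dy / length
    return (-dy, dx, 0.0)


# A segment drawn along the x-axis yields a normal along +y
n = symmetry_normal((0.0, 0.0), (3.0, 0.0))
```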
Dataset augmentation was conducted to generate additional samples from the existing training samples so that the network could learn as many features as possible and generalize better. Random rotations around the z-axis were applied to each point cloud to obtain 3300 augmented point clouds for training. We selected 80% of the point clouds as the training set and reserved the remaining 20% for validation. We additionally selected 10 roof point clouds as the test dataset and performed BIM modelling experiments on them.

4.2. Symmetry Detection for Roof Point Clouds

4.2.1. Implementation Details

During network training, the input points were centred and normalized, and 4096 points were sampled as input to the proposed network. The network was trained for 200 epochs with a batch size b = 8, using the stochastic gradient descent (SGD) [40] optimizer with the loss described above and an initial learning rate lr = 0.001. The point cloud data were fed into the network to extract features; after the global features were fused, multiple branches predicted the symmetry parameters. All experiments were performed on a computing platform equipped with an Intel Core i7-10750H CPU, 32 GB of RAM, and an NVIDIA GeForce RTX 4070S GPU.
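The centring and normalization applied to the input points can be sketched as follows (plain Python; the real pipeline applies the same transform to 4096-point batches):

```python
def center_and_normalize(points):
    """Centre a point cloud at its centroid and scale it so that the
    farthest point lies on the unit sphere."""
    n = len(points)
    c = tuple(sum(p[i] for p in points) / n for i in range(3))
    shifted = [tuple(p[i] - c[i] for i in range(3)) for p in points]
    scale = max((x * x + y * y + z * z) ** 0.5 for (x, y, z) in shifted)
    return [tuple(v / scale for v in p) for p in shifted]


# Two points on the x-axis: centroid at (1, 0, 0), scale 1
normed = center_and_normalize([(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)])
```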

4.2.2. Evaluation Metric

To evaluate and compare the proposed methods, we used the PR (precision–recall) curve to represent the performance of the trained models; these curves were generated by varying the threshold on the predicted confidence value [24]. To determine whether a predicted symmetry constituted a true-positive or false-positive result, we calculated the symmetry error from the difference between the predicted symmetry and the true symmetry. For an object with points $P = \{p_i\}, i \in [1, N]$, the dense symmetry error between the predicted symmetry $\hat{S}_{ref}$ and the true symmetry $S_{ref}$ was calculated as follows:

$$E_{ref} = \frac{1}{N\rho} \sum_{i=1}^{N} \left\| T_{ref}(p_i) - \hat{T}_{ref}(p_i) \right\|_2$$

where $T_{ref}$ and $\hat{T}_{ref}$ represent the symmetric transformations of $S_{ref}$ and $\hat{S}_{ref}$, respectively, and $\rho$ represents the maximum distance from the points in $P$ to the predicted symmetry plane $\hat{S}_{ref}$. A predicted symmetry plane is considered a true positive when it fulfils both $E_{ref} < \varepsilon$ and $\cos^{-1}\!\left( \frac{n_s \cdot \hat{n}_s}{\|n_s\| \, \|\hat{n}_s\|} \right) < \theta$.
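Under the definitions above, the dense symmetry error can be computed as in this sketch (planes are given as a unit normal n and offset d, so that reflecting a point p across the plane is p − 2(n·p + d)n; the demo values are illustrative):

```python
import math


def reflect(p, n, d):
    """Reflect point p across the plane n.x + d = 0 (n of unit length)."""
    t = p[0] * n[0] + p[1] * n[1] + p[2] * n[2] + d
    return tuple(p[i] - 2.0 * t * n[i] for i in range(3))


def dense_symmetry_error(points, true_plane, pred_plane):
    """E_ref: mean distance between the true and predicted reflections
    of each point, normalized by rho, the maximum distance from the
    points to the predicted symmetry plane."""
    n_t, d_t = true_plane
    n_p, d_p = pred_plane
    rho = max(abs(p[0] * n_p[0] + p[1] * n_p[1] + p[2] * n_p[2] + d_p)
              for p in points)
    err = sum(math.dist(reflect(p, n_t, d_t), reflect(p, n_p, d_p))
              for p in points) / len(points)
    return err / rho


# A perfect prediction gives zero error
pts = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
plane = ((1.0, 0.0, 0.0), 0.0)  # the plane x = 0
e = dense_symmetry_error(pts, plane, plane)
```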

4.2.3. Quantitative Results

As shown in Table 2, the proposed method achieves a PR-AUC (area under the precision–recall curve) of 0.672 and a highest F1-score of 0.762, outperforming the counterparts built on PointNet [41] and PointMLP [42] backbones. The PR curves for the different backbones are shown in Figure 11. Because the proposed method predicts multiple normal vectors simultaneously, a prediction may match several ground-truth values, which elevates the PR curve.
In the following experiments, we applied the proposed symmetry detection method to the 10 roofs in the test dataset and compared the results with those of the methods using PointNet [41] and PointMLP [42] as backbones. The first step in the evaluation was to match the predicted symmetry planes with the ground truth: a prediction was considered a correct match if the angle between its normal vector and the ground-truth normal was smaller than 5°. The angle between the predicted and ground-truth normal vectors and the distance from the centre point of the predicted symmetry plane to the true symmetry plane were then used as evaluation metrics for the different deep learning networks on the test dataset. The average angle error of the proposed method is 0.7563°, which is smaller than the 1.2685° of the PointNet backbone and the 1.3966° of the PointMLP backbone. The average distance from the predicted centre point of the symmetry plane to the true symmetry plane is 0.1043 m, which is smaller than 0.1480 m for the PointNet backbone and 0.1874 m for the PointMLP backbone; the median distance error of 0.0570 m is also the smallest (Table 3). These results confirm that the proposed method predicts the symmetry parameters more accurately.

4.3. Roof Parameter Extraction

On the basis of the symmetry plane extracted by the TCRSym-Net neural network, the points close to the symmetry plane were selected as the section point cloud to determine the origin of the local coordinate system; the distance threshold d was set to 0.05 m. The highest point in the section point cloud was set as the origin, which was used to determine the x-axis. For ridges with offsets, the offset values were set manually. The origin and the normal vector of the symmetry plane were then used both to determine a quadrant angle and to rotate the roof point cloud into the direction of the x-axis.
Next, the planes perpendicular to the x-axis and y-axis were used to construct a cross section and a longitudinal section, respectively. The main ridge lines and ridge points were extracted from the longitudinal section. A curve was fitted to the section point cloud with B-splines, and 10 refitted points were saved for roof surface modelling. Similarly, 10 refitted points from the side eaves were extracted at equal intervals from the cross section.
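Selecting the section point cloud amounts to keeping the points whose perpendicular distance to the symmetry plane is below the threshold d; a minimal sketch (illustrative coordinates):

```python
def section_points(points, origin, normal, d=0.05):
    """Points whose perpendicular distance to the plane through `origin`
    with unit `normal` is below the threshold d (metres)."""
    return [p for p in points
            if abs(sum((p[i] - origin[i]) * normal[i] for i in range(3))) < d]


# Plane x = 0: keep points within 0.05 m of it
section = section_points(
    [(0.01, 1.0, 2.0), (0.2, 0.0, 0.0), (-0.03, 5.0, 1.0)],
    (0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```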
The upturned eave parameters include the rising point and the offset distances dx and dy. The trend plane of the roof slope was used to segment the upturned eave portion; the highest point of the segmented point cloud was taken as the rising point, and the offset distances dx and dy were calculated. The offset distance dx was measured from the rising point along the x-direction to the starting point of the eave, and dy was measured analogously along the y-direction. The extracted parameters were exported as *.csv files to fulfil the 3D modelling requirements of Dynamo. The extracted parameters for the five different roofs are shown in Table 4.
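The offset computation and the *.csv export can be sketched as follows (the coordinates and column names are illustrative, not the exact file schema used by the Dynamo scripts):

```python
import csv
import io


def eave_offsets(rising_point, eave_start):
    """Offsets dx, dy from the rising point to the starting point of
    the eave, measured along the x- and y-directions."""
    return (eave_start[0] - rising_point[0],
            eave_start[1] - rising_point[1])


# Hypothetical rising point and eave start point (metres)
dx, dy = eave_offsets((4.33, 0.24), (4.87, 0.78))

# Write the parameters in a CSV layout for Dynamo to consume
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["parameter", "value"])
writer.writerow(["dx", round(dx, 4)])
writer.writerow(["dy", round(dy, 4)])
```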

4.4. Three-Dimensional Modelling Results

After all the roof parameters were extracted, the five Dynamo scripts developed for traditional Chinese roof modelling were run. Two groups of roof point clouds, group A and group B, were tested to verify the modelling capability of the proposed method. Each group contains five roof point clouds: a flush gable roof, a hip roof, a single-eave gable and hip roof, a four-corner tent roof, and a double-eave gable and hip roof. All of the models were successfully created via the proposed method, as shown in Table 5 and Table 6. The index $M_{Acc}$ [43] was used to verify the accuracy of the modelled roof surfaces and is defined as follows:

$$M_{Acc} = \mathrm{Med}\left( \left\| \pi_j^{T} p_i \right\| \right), \quad \text{if } \left\| \pi_j^{T} p_i \right\| \le r$$

where $\left\| \pi_j^{T} p_i \right\|$ measures the perpendicular distance from a vertex $p_i$ in the source model to a plane $\pi_j$ in the manually created reference model. A cut-off distance r (varied between 1 and 15 cm in this study) was introduced to limit the influence of incomplete or inaccurate regions in the source model. To evaluate the overall alignment between the source surfaces and the reference model, the Med function computes the median Euclidean distance from the sample points on the source surfaces to the nearest surfaces in the reference model. The accuracy values $M_{Acc}$ of the modelled roof surfaces for the test dataset are shown in Table 7.
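Given the point-to-plane distances, $M_{Acc}$ reduces to a median with a cut-off; a minimal sketch (the distance values are illustrative):

```python
from statistics import median


def m_acc(distances, r=0.10):
    """Median of the point-to-plane distances, keeping only samples
    within the cut-off distance r (metres)."""
    kept = [d for d in distances if d <= r]
    return median(kept)


# The 0.5 m outlier is excluded by the 10 cm cut-off
acc = m_acc([0.01, 0.02, 0.04, 0.5])
```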
As shown in Figure 12, at a 10 cm cut-off distance, the surface accuracy for all test sites remained below 5 cm. Within test group A, site A1, which featured a flush gable roof, achieved the highest accuracy at 1.72 cm. In contrast, site B1, which also had a flush gable roof, yielded a slightly lower accuracy of 2.25 cm under the same conditions; the extracted symmetry plane for B1 is less accurate than that for A1 because the point cloud distribution of B1 is uneven. Furthermore, in both experiments, the accuracy was lowest for the four-corner tents A4 and B4; this outcome may have occurred because the surface determined by the eave point, transition point, and upturned eave corner point is not sufficiently fine, and more points need to be interpolated to obtain a more accurate surface. Due to the presence of complex roof decorations, the surface accuracy of the gable and hip roof at test site A3 was lower than that of the hipped roof at test site A2; the same phenomenon occurred in test group B.

5. Discussion

Since symmetry detection plays a key role in roof reconstruction, a comparison experiment on symmetry detection was performed in Section 4.2 to verify the effectiveness of the proposed method. Compared with networks using a PointNet backbone or a PointMLP backbone, the proposed symmetry detection method achieves better results. This is because the offset attention mechanism captures the point cloud features of each point well, which improves the ability to extract symmetry information, and the optimal assignment module and loss function proposed in this work enable the network to output accurate symmetries. Supplementary Materials are available online at https://github.com/yhexie/TCRSym-Net.git (accessed on 1 February 2026).
Additional experiments were designed to test the robustness of the proposed method in the presence of noise and outliers (Figure 13a). Gaussian noise with standard deviations of σ = 0.05 m, 0.1 m, and 0.25 m was added to each point of the test roof point clouds. The proposed method still detects the symmetries correctly, but the accuracy of the detected symmetry planes decreases as the noise increases. To test the robustness of the proposed method to incomplete point clouds, we removed 5%, 10%, and 20% of the points from each roof point cloud. The proposed method can still extract the symmetry information well when 5% of the points are removed, but when 10% or even 20% are removed, the extracted symmetry plane shifts significantly and the accuracy drops markedly, which does not meet the needs of modelling (Figure 13b). The proposed method therefore remains sensitive to missing data: when the density distribution of the point cloud is very inhomogeneous, the predicted centre point of the symmetry plane is affected and shifts significantly.
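The noise perturbation used in the robustness test can be reproduced as follows (seeded for repeatability; the two-point cloud is a toy example, and σ matches the smallest value in the experiment):

```python
import random


def add_gaussian_noise(points, sigma, seed=42):
    """Perturb every coordinate of every point with zero-mean Gaussian
    noise of standard deviation sigma (metres)."""
    rng = random.Random(seed)
    return [tuple(c + rng.gauss(0.0, sigma) for c in p) for p in points]


cloud = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.5)]
noisy = add_gaussian_noise(cloud, sigma=0.05)
```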
The roofs considered in this study do not cover all types of traditional Chinese roofs, such as helmet roofs, truncated roofs, and eight-corner tents (Figure 14); an expansion of the training dataset is therefore needed. The modelling of complex ancient building structures, such as cross roofs, Baosha roofs, and joined roofs (Figure 15), has not been fully discussed, and composite roof modelling for even more complex roof types has not been achieved; these require additional consideration in future work.

6. Conclusions

In this study, a deep learning network, namely, TCRSym-Net, was proposed to identify symmetry in the point clouds of traditional Chinese roofs. The network detects the symmetry plane of a roof point cloud, which is then used to relocate and reorient the point cloud, thereby determining the longitudinal and cross sections and extracting the section parameters. Roof modelling scripts were developed in Dynamo to construct models of various traditional Chinese roof types. The proposed method can significantly improve the automation level of BIM and reduce the cost and time investment required for the digital protection of ancient buildings, thus establishing a more robust digital foundation for the protection, restoration, and preservation of cultural heritage.
In the future, we will expand the existing training sample dataset and explore geometric shape identification and parameter extraction problems for complete ancient Chinese architecture modelling.

Supplementary Materials

The source code and experimental dataset for this study are available online at https://github.com/yhexie/TCRSym-Net.git (accessed on 1 February 2026).

Author Contributions

Conceptualization, R.O. and F.Y.; data curation, L.C., L.Q. and Y.H.; formal analysis, L.L.; investigation, R.O. and M.C.; methodology, R.O. and F.Y.; project administration, F.Y.; validation, L.C., L.Q. and Y.H.; resources, M.C. and L.L.; software, F.Y.; supervision, F.Y. and C.Z.; writing—original draft, R.O.; writing—review and editing, F.Y. and C.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This study is funded by the College Students’ Innovative Entrepreneurial Training Plan Program (no. S202510304184), the Project of Nantong Science and Technology Bureau (no. MS2023065), the Postgraduate Research & Practice Innovation Program of Jiangsu Province (no. SJCX25_2043), and the National Natural Science Foundation of China (no. 42001322).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors thank the anonymous reviewers for their insightful suggestions and comments.

Conflicts of Interest

Author Lili Li was employed by the company Nan Tong Surveying & Mapping Institute Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Li, L.; Tang, L.; Zhu, H.; Zhang, H.; Yang, F.; Qin, W. Semantic 3D Modeling Based on CityGML for Ancient Chinese-Style Architectural Roofs of Digital Heritage. ISPRS Int. J. Geo-Inf. 2017, 6, 132. [Google Scholar] [CrossRef]
  2. Spring, A.P. History of Laser Scanning, Part 2: The Later Phase of Industrial and Heritage Applications. Photogramm. Eng. Remote Sens. 2020, 86, 479–501. [Google Scholar] [CrossRef]
  3. Santos, D.; Sousa, H.S.; Cabaleiro, M.; Branco, J.M. HBIM Application in Historic Timber Structures: A Systematic Review. Int. J. Archit. Herit. 2022, 17, 1331–1347. [Google Scholar] [CrossRef]
  4. Croce, V.; Caroti, G.; Piemonte, A.; De Luca, L.; Veron, P. H-BIM and Artificial Intelligence: Classification of Architectural Heritage for Semi-Automatic Scan-to-BIM Reconstruction. Sensors 2023, 23, 2497. [Google Scholar] [CrossRef]
  5. Pocobelli, D.P.; Boehm, J.; Bryan, P.; Still, J.; Grau-Bové, J. BIM for heritage science: A review. Herit. Sci. 2018, 6, 30. [Google Scholar] [CrossRef]
  6. Ding, J.; Liang, M.; Chen, W. Integration of BIM and Chinese Architectural Heritage: A Bibliometric Analysis Research. Buildings 2023, 13, 593. [Google Scholar] [CrossRef]
  7. Hu, Z.; Qin, X. Extended interactive and procedural modeling method for ancient chinese architecture. Multimed. Tools Appl. 2021, 80, 5773–5807. [Google Scholar] [CrossRef]
  8. Liu, J.; Wu, Z.-K. Rule-Based Generation of Ancient Chinese Architecture from the Song Dynasty. J. Comput. Cult. Herit. 2015, 9, 1–22. [Google Scholar] [CrossRef]
  9. Liu, E.; Luo, C.; Yang, C.; Huang, Y. Research on 3D Laser Scanning Reconstruction of Ancient Buildings Combined with BIM Technology. J. Comput. Commun. 2023, 11, 233–240. [Google Scholar] [CrossRef]
  10. Yang, X.; Grussenmeyer, P.; Koehl, M.; Macher, H.; Murtiyoso, A.; Landes, T. Review of built heritage modelling: Integration of HBIM and other information techniques. J. Cult. Herit. 2020, 46, 350–360. [Google Scholar] [CrossRef]
  11. Li, Y.; Zhao, L.; Chen, Y.; Zhang, N.; Fan, H.; Zhang, Z. 3D LiDAR and multi-technology collaboration for preservation of built heritage in China: A review. Int. J. Appl. Earth Obs. Geoinf. 2023, 116, 103156. [Google Scholar] [CrossRef]
  12. Sipiran, I.; Romanengo, C.; Falcidieno, B.; Biasotti, S. SHREC 2023: Detection of symmetries on 3D point clouds representing simple shapes. In Proceedings of the Eurographics Workshop on 3D Object Retrieval (2023), Lille, France, 31 August–1 September 2023. [Google Scholar]
  13. Xue, F.; Lu, W.; Webster, C.J.; Chen, K. A derivative-free optimization-based approach for detecting architectural symmetries from 3D point clouds. ISPRS J. Photogramm. Remote Sens. 2019, 148, 32–40. [Google Scholar] [CrossRef]
  14. Mitra, N.J.; Pauly, M.; Wand, M.; Ceylan, D. Symmetry in 3D Geometry: Extraction and Applications. Comput. Graph. Forum 2013, 32, 1–23. [Google Scholar] [CrossRef]
  15. Pellis, E.; Masiero, A.; Betti, M.; Tucci, G.; Grussenmeyer, P. A Deep Learning Multiview Approach for the Semantic Segmentation of Heritage Building Point Clouds. Int. J. Archit. Herit. 2025, 19, 3117–3139. [Google Scholar] [CrossRef]
  16. Li, B.; Lim, Y.L.; Li, W. Systematic review: A bibliometric analysis of building technology and its potential applications to artificial intelligence in the field of cultural heritage conservation from 2013 to 2023. J. Asian Archit. Build. Eng. 2025, 1–23. [Google Scholar] [CrossRef]
  17. Nguyen, T.P.; Truong, H.P.; Nguyen, T.T.; Kim, Y.-G. Reflection symmetry detection of shapes based on shape signatures. Pattern Recognit. 2022, 128, 108667. [Google Scholar] [CrossRef]
  18. Ecins, A.; Fermuller, C.; Aloimonos, Y. Seeing Behind the Scene: Using Symmetry to Reason About Objects in Cluttered Environments. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 7193–7200. [Google Scholar]
  19. Bokeloh, M.; Berner, A.; Wand, M.; Seidel, H.-P.; Schilling, A. Symmetry Detection Using Feature Lines. Comput. Graph. Forum 2009, 28, 697–706. [Google Scholar] [CrossRef]
  20. Gao, L.; Zhang, L.-X.; Meng, H.-Y.; Ren, Y.-H.; Lai, Y.-K.; Kobbelt, L. PRS-Net: Planar Reflective Symmetry Detection Net for 3D Models. arXiv 2019, arXiv:1910.06511. [Google Scholar] [CrossRef]
  21. Li, R.-W.; Zhang, L.-X.; Li, C.; Lai, Y.-K.; Gao, L. E3Sym: Leveraging E(3) Invariance for Unsupervised 3D Planar Reflective Symmetry Detection. In Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023; pp. 14497–14507. [Google Scholar]
  22. Ji, P.; Liu, X. A fast and efficient 3D reflection symmetry detector based on neural networks. Multimed. Tools Appl. 2019, 78, 35471–35492. [Google Scholar] [CrossRef]
  23. Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5105–5114. [Google Scholar]
  24. Shi, Y.; Huang, J.; Zhang, H.; Xu, X.; Rusinkiewicz, S.; Xu, K. SymmetryNet: Learning to predict reflectional and rotational symmetries of 3D shapes from single-view RGB-D images. ACM Trans. Graph. 2020, 39, 213. [Google Scholar] [CrossRef]
  25. Li, Y.; Liu, C.; Lou, Y.; Shen, T.; Wu, Y.; Guo, J.; Li, Y.; Zhang, M. Integrating colored LiDAR and YOLO semantic segmentation for design feature extraction in Chinese ancient architecture. NPJ Herit. Sci. 2025, 13, 316. [Google Scholar] [CrossRef]
  26. Khan, I.A.; Xie, X.; Wang, Q.; Mohd Noor, S.N.F.B.; Rad, D. Parameterization of Chinese Ancient Architecture on the Basis of Modulo Relationships. SHS Web Conf. 2023, 171, 03031. [Google Scholar] [CrossRef]
  27. Shen, Y.; Zhang, E.; Feng, Y.; Liu, S.; Wang, J. Parameterizing the Curvilinear Roofs of Traditional Chinese Architecture. Nexus Netw. J. 2020, 23, 475–492. [Google Scholar] [CrossRef]
  28. Liu, S.; Bin Mamat, M.J. Application of 3D laser scanning technology for mapping and accuracy assessment of the point cloud model for the Great Achievement Palace heritage building. Herit. Sci. 2024, 12, 153. [Google Scholar] [CrossRef]
  29. Zhao, J.; Hua, X.; Yang, J.; Yin, L.; Liu, Z.; Wang, X. A Review of Point Cloud Segmentation of Architectural Cultural Heritage. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, X-1/W1-2023, 247–254. [Google Scholar] [CrossRef]
  30. Yang, S.; Hou, M.; Li, S. Three-Dimensional Point Cloud Semantic Segmentation for Cultural Heritage: A Comprehensive Review. Remote Sens. 2023, 15, 548. [Google Scholar] [CrossRef]
  31. Ji, Y.; Dong, Y.; Hou, M.; Qi, Y.; Li, A. An Extraction Method for Roof Point Cloud of Ancient Building Using Deep Learning Framework. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, XLVI-M-1-2021, 321–327. [Google Scholar] [CrossRef]
  32. Huo, P.; Hou, M.; Dong, Y.; Li, A.; Ji, Y.; Li, S. A Method for 3D Reconstruction of the Ming and Qing Official-Style Roof Using a Decorative Components Template Library. ISPRS Int. J. Geo-Inf. 2020, 9, 570. [Google Scholar] [CrossRef]
  33. Dong, Y.; Hou, M.; Xu, B.; Li, Y.; Ji, Y. Ming and Qing Dynasty Official-Style Architecture Roof Types Classification Based on the 3D Point Cloud. ISPRS Int. J. Geo-Inf. 2021, 10, 650. [Google Scholar] [CrossRef]
  34. Lailiang, C.; Kan, W.; Qian, F.; Ruyu, Z. Fast 3D Modeling Chinese Ancient Architectures Base on Points Cloud. In Proceedings of the 2010 International Conference on Computational Intelligence and Software Engineering, Wuhan, China, 10–12 December 2010. [Google Scholar]
  35. Barazzetti, L.; Banfi, F.; Brumana, R.; Previtali, M. Creation of Parametric BIM Objects from Point Clouds Using Nurbs. Photogramm. Rec. 2015, 30, 339–362. [Google Scholar] [CrossRef]
  36. Barazzetti, L. Parametric as-built model generation of complex shapes from point clouds. Adv. Eng. Inform. 2016, 30, 298–311. [Google Scholar] [CrossRef]
  37. Guo, M.-H.; Cai, J.-X.; Liu, Z.-N.; Mu, T.-J.; Martin, R.R.; Hu, S.-M. PCT: Point Cloud Transformer. arXiv 2021, arXiv:2012.09688. [Google Scholar] [CrossRef]
  38. Agarap, A.F. Deep Learning using Rectified Linear Units (ReLU). arXiv 2019, arXiv:1803.08375. [Google Scholar] [CrossRef]
  39. Zhou, W.; Fu, X.; Deng, Y.; Yan, J.; Zhou, J.; Liu, P. The Extraction of Roof Feature Lines of Traditional Chinese Village Buildings Based on UAV Dense Matching Point Clouds. Buildings 2024, 14, 1180. [Google Scholar] [CrossRef]
  40. Bottou, L. Large-Scale Machine Learning with Stochastic Gradient Descent. In Proceedings of the COMPSTAT’2010, Paris, France, 22–27 August 2010; pp. 177–186. [Google Scholar]
  41. Charles, R.Q.; Su, H.; Kaichun, M.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 77–85. [Google Scholar]
  42. Ma, X.; Qin, C.; You, H.; Ran, H.; Fu, Y.R. Rethinking Network Design and Local Geometry in Point Cloud: A Simple Residual MLP Framework. arXiv 2022, arXiv:2202.07123. [Google Scholar] [CrossRef]
  43. Tran, H.; Khoshelham, K.; Kealy, A. Geometric comparison and quality evaluation of 3D models of indoor environments. ISPRS J. Photogramm. Remote Sens. 2019, 149, 29–39. [Google Scholar] [CrossRef]
Figure 1. Flowchart illustrating the proposed roof reconstruction method.
Figure 2. TCRSym-Net architecture.
Figure 3. Defined local coordinate system for roof parameter description.
Figure 4. Roof types and parameter illustrations for the longitudinal section and cross section.
Figure 5. Relocation algorithm and longitudinal and cross section acquisition algorithm. (a) Algorithm to relocate the local coordinate system; (b) algorithm to acquire the longitudinal section and cross section.
Figure 6. Roof parameter extraction. (a) Symmetry plane, (b) relocation of the local coordinate system, (c) curve fitting in the cross section, (d) main ridge and curve fitting in the longitudinal section, (e) determination of the upturned eave region, and (f) rising point and T-points that are extracted for the corner of the upturned eave.
Figure 7. 1/4 foundation modelling, in which the roof is divided into three parts: the front roof slope, sloping gable surface, and upturned eaves corners.
Figure 8. Sweeping a profile curve along a path with curve lofting.
Figure 9. Boolean operator for creating the model of the upturned eave corner.
Figure 10. Examples from the roof symmetry detection dataset.
Figure 11. PR curves for various backbones.
Figure 12. Accuracy evaluation of the roof surfaces; the dashed lines indicate the $M_{Acc}$ values at a 10 cm cut-off distance. (a) The surface accuracy for test sites in group A; (b) the surface accuracy for test sites in group B.
Figure 13. The experiments for testing the robustness of the proposed method. (a) Gaussian noise with standard deviations of σ = 0.05 m, 0.1 m, and 0.25 m was added to each point of the test roof point clouds; (b) 5%, 10%, and 20% of the points were removed from the roof point cloud.
Figure 14. Helmet roof, truncated roof and eight-corner tents.
Figure 15. Cross roof, Baosha roof and joined roof.
Table 1. Parameters for the training data set for symmetry detection of traditional Chinese roofs.
ID | Item | Parameters
1 | Number of sampling points per roof | 40,960
2 | Minimum size | [1.8532, 2.0453, 0.5046]
3 | Maximum size | [57.9948, 6.2010, 2.6024]
4 | Point cloud density range | 43.8~21,413.6 points/m³
5 | Range of diagonal lengths | 2.805~63.691 m
6 | Original ground resolution of drone images | 2–10 cm

ID | Roof Type | Number of Unique Roof Point Clouds
1 | Flush gable roofs | 100
2 | Hipped roofs | 54
3 | Single-eave gable and hip roofs | 100
4 | Four-corner tents | 28
5 | Double-eave gable and hip roofs | 48
Total | | 330
Table 2. Evaluation index of PR-AUC and Highest F1-score on different backbones.
Index | PointNet Backbone | PointMLP Backbone | Proposed Method
PR-AUC | 0.486 | 0.294 | 0.672
Highest F1-score | 0.614 | 0.464 | 0.762
Table 3. Evaluation of symmetry detection of the test dataset.
Background Prediction (The Predicted Normal Vector’s Angle Error Smaller Than  θ = 5 ° )
Test Sites Symmetry Best Matched Prediction Proposed Method PointNet Backbone [41]PointMLP Backbone [42]
Angle Error
(Degree)
Distance Error
(m)
Angle Error
(Degree)
Distance Error
(m)
Angle Error
(Degree)
Distance Error
(m)
A1Sym1Pred-10.41990.04030.65000.43651.28510.4355
Sym2Pred-20.13980.08740.13790.00382.05330.0298
B1Sym1Pred-10.24460.05611.88130.02954.23450.1655
Sym2Pred-21.55760.09542.59770.17683.09620.0091
A2Sym1Pred-10.92260.00331.78740.00222.44670.1344
Sym2Pred-20.13350.00902.11770.01740.40420.0079
B2Sym1Pred-11.15320.05790.42550.08280.31230.1071
Sym2Pred-20.90410.01800.67200.03000.26030.0231
A3Sym1Pred-10.78980.33932.08430.06390.07280.7751
Sym2Pred-20.89330.00401.90090.24473.72430.1840
B3Sym1Pred-11.12950.45601.26130.28252.77390.3464
Sym2Pred-20.32960.12262.14220.06670.22390.1111
A4Sym1Pred-10.32750.06251.01550.05080.87610.0318
Sym2Pred-20.88220.02052.23270.29312.76430.1546
B4Sym1Pred-11.24800.04680.85010.01210.04840.1385
Sym2Pred-21.46040.04051.04420.04860.34910.1072
A5Sym1Pred-10.49900.03340.09280.22470.77440.2128
Sym2Pred-20.98280.42390.02720.65571.01020.6411
B5Sym1Pred-10.19440.08041.02030.07560.32380.1317
Sym2Pred-20.91510.08841.42880.16180.89740.0013
Mean0.75630.10431.26850.14801.39660.1874
Median0.88770.05701.15270.07110.88670.1330
Standard deviation0.44300.13560.79200.16931.32540.2105
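The angle and distance errors above compare each predicted symmetry plane with its reference. A minimal sketch of the two metrics, assuming each plane is stored in the form n·x = d (the paper's exact error definitions may differ slightly from this reading of the table):

```python
import numpy as np

def symmetry_plane_errors(n_pred, d_pred, n_ref, d_ref):
    """Angle error (degrees) between two plane normals and the absolute
    offset difference between planes written as n . x = d."""
    n_pred = np.asarray(n_pred, dtype=float)
    n_ref = np.asarray(n_ref, dtype=float)
    s = np.linalg.norm(n_pred)
    n_pred, d_pred = n_pred / s, d_pred / s   # rescale to unit-normal form
    s = np.linalg.norm(n_ref)
    n_ref, d_ref = n_ref / s, d_ref / s
    if np.dot(n_pred, n_ref) < 0.0:           # normals are sign-ambiguous: align them
        n_pred, d_pred = -n_pred, -d_pred
    cosang = np.clip(np.dot(n_pred, n_ref), -1.0, 1.0)
    return float(np.degrees(np.arccos(cosang))), abs(d_pred - d_ref)
```

The sign flip matters: a symmetry plane's normal has no preferred orientation, so the raw dot product must be aligned before either metric is computed.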
Table 4. Roof parameter extraction experiments.
Case 1 (Sensors 26 01054 i001)
Main Axis: [−9.121803, −0.113617, 1.018555], [9.361763, −0.113617, 1.018555]
Second Axis: [0.119980, −5.030745, 1.018555], [0.119980, 4.803511, 1.018555]
Fitted Curve:
[0.000000, −1.915039], [0.541100, −1.735352], [1.082200, −1.540039], [1.623300, −1.274414], [2.164400, −1.030273], [2.705500, −0.733398], [3.246600, −0.478516], [3.787699, −0.157715], [4.328799, 0.238770], [4.869899, 0.775879], [5.410999, 1.124023]
Case 2 (Sensors 26 01054 i002)
Main Axis: [−3.825644, 0.720184, 1.682293], [4.312551, 0.720184, 1.682293]
Main Ridge: [2.651347, 1.682293], [5.486848, 1.682293]
Fitted Curve:
[0.000000, −0.562422], [0.265135, −0.412106], [0.530269, −0.267233], [0.795404, −0.108004], [1.060539, 0.058395], [1.325673, 0.233313], [1.590808, 0.429060], [1.855943, 0.627842], [2.121077, 0.847433], [2.386212, 1.156660], [2.651347, 1.682293]
Second Axis: [−0.243454, −3.420081, 1.682293], [0.243454, 4.860450, 1.682293]
Fitted Curve:
[0.000000, −0.503857], [0.414027, −0.442457], [0.828053, −0.300898], [1.242080, −0.159391], [1.656106, −0.012772], [2.070133, 0.176414], [2.484159, 0.377151], [2.898186, 0.608349], [3.312212, 0.846260], [3.726239, 1.086050], [4.140265, 1.688667]
Bounding box: [−3.825644, −3.420081, 0.000000], [4.312551, 4.860450, 0.000000]
Eaves: [−3.871060, −3.436253, 0.001334], [0.045417, 0.016172]
Case 3 (Sensors 26 01054 i003)
Main Axis: [−6.421559, 0.074219, 1.260220], [6.507496, 0.074219, 1.260220]
Main Ridge: [1.933882, 1.260220], [10.995173, 1.260220]
Fitted Curve:
[0.000000, −1.254971], [0.193388, −1.148760], [0.386776, −1.090052], [0.580165, −0.985477], [0.773553, −0.886642], [0.966941, −0.800716], [1.160329, −0.683926], [1.353717, −0.619263], [1.547105, −0.521505], [1.740494, −0.437271], [1.933882, −0.353037]
Second Axis: [−0.042969, −4.214520, 1.260220], [0.042969, 4.362957, 1.260220]
Fitted Curve:
[0.000000, −1.304916], [0.428874, −1.058765], [0.857748, −0.863235], [1.286621, −0.609316], [1.715495, −0.427336], [2.144369, −0.229176], [2.573243, 0.004543], [3.002117, 0.355425], [3.430991, 0.691423], [3.859864, 1.068165], [4.288738, 1.262224]
Bounding box: [−6.421559, −4.214520, 0.000000], [6.507496, 4.362957, 0.000000]
Eaves: [−7.169045, −4.983853, 0.525608], [0.747486, 0.769333]
Case 4 (Sensors 26 01054 i004)
Main Axis: [−3.748123, −0.243225, 1.535645], [6.030334, −0.243225, 1.535645]
Fitted Curve:
[0.000000, −1.846191], [0.488923, −1.656006], [0.977846, −1.466797], [1.466769, −1.226074], [1.955692, −0.946533], [2.444614, −0.652832], [2.933537, −0.415771], [3.422460, −0.069092], [3.911383, 0.210693], [4.400306, 0.758301], [4.889229, 1.070313]
Second Axis: [−1.141106, −5.823246, 1.535645], [1.141106, 5.336796, 1.535645]
Fitted Curve:
[0.000000, −1.688232], [0.558002, −1.688232], [1.116004, −1.501221], [1.674006, −1.214844], [2.232008, −0.945557], [2.790011, −0.662109], [3.348013, −0.403564], [3.906015, −0.104248], [4.464017, 0.204834], [5.022019, 0.504150], [5.580021, 1.065674]
Bounding box: [−3.748123, −5.823246, 0.000000], [6.030334, 5.336796, 0.000000]
Eaves: [−4.085910, −5.973135, 9.121094], [0.337787, 0.149889]
Case 5 (Sensors 26 01054 i005), second eave
Main Axis: [−16.647789, 0.006954, 5.755520], [15.180134, 0.006954, 5.755520]
Fitted Curve:
[0.000000, −5.952494], [0.447442, −5.630781], [0.894884, −5.410862], [1.342326, −5.201662], [1.789768, −4.973425], [2.237210, −4.746714], [2.684652, −4.499012], [3.132094, −4.255314], [3.579536, −3.974346], [4.026978, −3.686543], [4.474420, −3.418728]
Second Axis: [−0.733828, −12.860601, 5.755520], [−0.733828, 12.874510, 5.755520]
Fitted Curve:
[0.000000, −5.772617], [0.463336, −5.615183], [0.926671, −5.396662], [1.390007, −5.192884], [1.853342, −4.907066], [2.316678, −4.659809], [2.780014, −4.419453], [3.243349, −4.155888], [3.706685, −3.886074], [4.170021, −3.609570], [4.633356, −2.834946]
Bounding box: [−16.647789, −12.860601, 0.000000], [15.180134, 12.874510, 0.000000]
Eaves: [−17.279955, −13.404373, 6.180473], [0.632166, 0.543772]
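Each "Fitted Curve" in Table 4 is a section profile resampled at 11 evenly spaced offsets along its axis. As an illustrative sketch only (the fitting method is not stated in this excerpt, so the polynomial model and degree here are assumptions), fitting and resampling a roof section could look like:

```python
import numpy as np

def fit_section_curve(offsets, heights, degree=3, samples=11):
    """Fit a polynomial to (offset, height) points extracted from a roof
    section and resample it at evenly spaced offsets, mirroring the
    11-point 'Fitted Curve' lists of Table 4."""
    coeffs = np.polyfit(offsets, heights, degree)       # least-squares fit
    x = np.linspace(min(offsets), max(offsets), samples)
    return np.column_stack([x, np.polyval(coeffs, x)])  # (samples, 2) array
```

A low-degree fit smooths measurement noise in the section points while still capturing the concave sweep characteristic of traditional Chinese roof profiles.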
Table 5. Experimental results of roof modelling for test dataset A.

ID | Roof Type | Pictures | Original Point Clouds | Roof Point Clouds | Roof Models
A1 | Flush gable roof | Sensors 26 01054 i006 | Sensors 26 01054 i007 | Sensors 26 01054 i008 | Sensors 26 01054 i009
A2 | Hipped roof | Sensors 26 01054 i010 | Sensors 26 01054 i011 | Sensors 26 01054 i012 | Sensors 26 01054 i013
A3 | Single-eave gable and hip roof | Sensors 26 01054 i014 | Sensors 26 01054 i015 | Sensors 26 01054 i016 | Sensors 26 01054 i017
A4 | Four-corner tent | Sensors 26 01054 i018 | Sensors 26 01054 i019 | Sensors 26 01054 i020 | Sensors 26 01054 i021
A5 | Double-eave gable and hip roof | Sensors 26 01054 i022 | Sensors 26 01054 i023 | Sensors 26 01054 i024 | Sensors 26 01054 i025
Table 6. Experimental results of roof modelling for test dataset B.

ID | Roof Type | Pictures | Original Point Clouds | Roof Point Clouds | Roof Models
B1 | Flush gable roof | Sensors 26 01054 i026 | Sensors 26 01054 i027 | Sensors 26 01054 i028 | Sensors 26 01054 i029
B2 | Hipped roof | Sensors 26 01054 i030 | Sensors 26 01054 i031 | Sensors 26 01054 i032 | Sensors 26 01054 i033
B3 | Single-eave gable and hip roof | Sensors 26 01054 i034 | Sensors 26 01054 i035 | Sensors 26 01054 i036 | Sensors 26 01054 i037
B4 | Four-corner tent | Sensors 26 01054 i038 | Sensors 26 01054 i039 | Sensors 26 01054 i040 | Sensors 26 01054 i041
B5 | Double-eave gable and hip roof | Sensors 26 01054 i042 | Sensors 26 01054 i043 | Sensors 26 01054 i044 | Sensors 26 01054 i045
Table 7. The accuracy of the modelled roof surfaces for the test dataset. All MAcc values are in cm.

Cut-Off Distance (cm) | A1 | A2 | A3 | A4 | A5 | B1 | B2 | B3 | B4 | B5
1 | 0.453 | 0.514 | 0.663 | 0.394 | 0.612 | 0.452 | 0.473 | 0.497 | 0.354 | 0.409
2 | 0.751 | 1.030 | 0.898 | 0.817 | 0.979 | 0.913 | 0.976 | 0.691 | 0.704 | 0.938
3 | 0.986 | 1.458 | 0.941 | 1.453 | 1.490 | 1.319 | 1.316 | 1.357 | 1.160 | 1.595
4 | 1.103 | 1.882 | 1.296 | 2.198 | 2.131 | 1.566 | 1.763 | 1.851 | 1.640 | 1.969
5 | 1.166 | 2.245 | 1.787 | 2.817 | 2.573 | 1.870 | 2.000 | 2.251 | 2.410 | 2.339
6 | 1.230 | 2.492 | 2.094 | 3.288 | 2.778 | 2.170 | 2.108 | 2.299 | 3.107 | 2.609
7 | 1.288 | 2.695 | 2.348 | 3.686 | 2.980 | 2.426 | 2.175 | 2.535 | 3.875 | 2.869
8 | 1.381 | 2.851 | 2.801 | 4.070 | 3.165 | 2.526 | 2.226 | 2.707 | 4.483 | 3.121
9 | 1.513 | 2.980 | 3.246 | 4.373 | 3.296 | 2.548 | 2.265 | 2.799 | 4.877 | 3.321
10 | 1.723 | 3.111 | 3.606 | 4.620 | 3.417 | 2.555 | 2.335 | 2.848 | 5.143 | 3.321
11 | 1.971 | 3.220 | 3.958 | 4.883 | 3.644 | 2.555 | 2.422 | 2.894 | 5.261 | 3.344
12 | 2.232 | 3.339 | 4.047 | 5.126 | 3.750 | 2.555 | 2.516 | 2.937 | 5.319 | 3.476
13 | 2.469 | 3.450 | 4.397 | 5.324 | 3.750 | 2.555 | 2.568 | 2.974 | 5.360 | 3.490
14 | 2.731 | 3.581 | 4.477 | 5.506 | 3.750 | 2.555 | 2.617 | 3.176 | 5.386 | 3.502
15 | 3.079 | 3.709 | 4.506 | 5.647 | 3.806 | 2.555 | 2.688 | 3.212 | 5.407 | 3.513
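Table 7 suggests that MAcc averages point-to-model distances no larger than the cut-off, which is why every column grows monotonically with the cut-off distance. A brute-force sketch under that assumption (the paper may instead use a KD-tree and true point-to-surface distances):

```python
import numpy as np

def m_acc(cloud, model_samples, cutoff):
    """Mean accuracy (MAcc): average nearest-neighbour distance (same units
    as the inputs) from each roof point to points sampled on the modelled
    surface, keeping only distances at or below the cut-off."""
    cloud = np.asarray(cloud, dtype=float)
    model_samples = np.asarray(model_samples, dtype=float)
    # pairwise distances: (n_cloud, n_model) via broadcasting
    d = np.linalg.norm(cloud[:, None, :] - model_samples[None, :, :], axis=2)
    nearest = d.min(axis=1)              # distance to the closest model sample
    kept = nearest[nearest <= cutoff]    # discard outliers beyond the cut-off
    return float(kept.mean()) if kept.size else float("nan")
```

The O(n·m) broadcast is fine for a sketch; real roof clouds would need a spatial index such as `scipy.spatial.cKDTree` to stay tractable.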
Ou, R.; Yang, F.; Li, L.; Cheng, L.; Qian, L.; He, Y.; Che, M.; Zhang, C. Highly Efficient Deep Learning-Enabled Parameterization and 3D Reconstruction of Traditional Chinese Roof Structures. Sensors 2026, 26, 1054. https://doi.org/10.3390/s26031054
