Automated Dimension Recognition and BIM Modeling of Frame Structures Based on 3D Point Clouds
Abstract
1. Introduction
2. Methodology
2.1. Semantic Segmentation of Point Clouds
2.1.1. PointNet++ Network
2.1.2. Dataset Preparation and Annotation
2.1.3. Implementation Details
2.2. Geometric Information Extraction
2.2.1. Point Cloud Clustering
2.2.2. Dimensionality Reduction
2.2.3. Automated Dimension Extraction
Algorithm 1. Calculation of Cross-Section Edge Lengths
Input: a: array of x-coefficients of the boundary-line equations; b: array of y-coefficients of the boundary-line equations (each line expressed in the normalized form aᵢx + bᵢy = 1)
Output: L: edge lengths of the cross-section
1: // Initialization
2: P ← ∅: intersection coordinates
3: E ← ∅: line-index pairs of the intersections
4: Icaled ← ∅: indices of edges already computed
5: n: the number of lines
6: L ← ∅: computed edge lengths
7: // Compute intersection coordinates
8: for each i in [1, 2, …, n − 1] do:
9:   for each j in [i + 1, …, n] do:
10:    (x, y) ← intersection point of lines i and j
11:    if lines i and j are not parallel and (x, y) lies within the section bounds then:
12:      add (x, y) to P, add (i, j) to E
13:    end
14:  end
15: end
16: m ← |P| // The number of intersections
17: // Compute edge lengths
18: for each p in [1, 2, …, m − 1] do:
19:   for each q in [p + 1, …, m] do:
20:     d ← Euclidean distance between P[p] and P[q]
21:     if there exists a common line index in (E[p], E[q]) then:
22:       Ico-edge ← common line index in (E[p], E[q])
23:       if Ico-edge not in Icaled then:
24:         add Ico-edge to Icaled
25:         add d to L
26:       end
27:     end
28:   end
29: end
30: return L
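A runnable sketch of Algorithm 1 in Python. It assumes each boundary line is stored in the normalized form aᵢx + bᵢy = 1, which is consistent with the algorithm taking only the x- and y-coefficient arrays as input; the function name `cross_section_edge_lengths` and the magnitude bound used as the intersection-validity filter are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def cross_section_edge_lengths(a, b, tol=1e-9, bound=1e3):
    """Edge lengths of a convex cross-section bounded by lines
    a[i]*x + b[i]*y = 1 (normalized line equations)."""
    n = len(a)
    pts, pairs = [], []
    # Step 1: pairwise intersections of the boundary lines
    for i in range(n - 1):
        for j in range(i + 1, n):
            det = a[i] * b[j] - a[j] * b[i]
            if abs(det) < tol:                # parallel lines never intersect
                continue
            x = (b[j] - b[i]) / det           # Cramer's rule for the 2x2 system
            y = (a[i] - a[j]) / det
            if abs(x) < bound and abs(y) < bound:  # assumed validity filter
                pts.append((x, y))
                pairs.append((i, j))
    # Step 2: an edge joins two intersections that share one boundary line
    done, lengths = set(), []
    for p in range(len(pts) - 1):
        for q in range(p + 1, len(pts)):
            common = set(pairs[p]) & set(pairs[q])
            if common:
                edge = common.pop()
                if edge not in done:
                    done.add(edge)
                    dx = pts[p][0] - pts[q][0]
                    dy = pts[p][1] - pts[q][1]
                    lengths.append(float(np.hypot(dx, dy)))
    return lengths
```

For a rectangular section the four returned lengths are the two pairs of edge lengths; e.g., the lines x = 1, x = 3, y = 1, y = 2 yield edges of 1 and 2.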
2.3. Automated BIM Modeling
3. Results and Analysis
3.1. Semantic Segmentation Performance
3.2. Geometric Information Extraction and BIM Reconstruction Results
3.2.1. Computational Efficiency Evaluation
3.2.2. Accuracy Evaluation
4. Discussion
4.1. Applicability to Full-Scale Conditions
4.2. Data Completeness
4.3. Application Scope and Limitations
5. Conclusions
- (1) Establishment of an Automated Scan-to-BIM Workflow: A complete technical route was developed, seamlessly linking point cloud semantic segmentation (PointNet++), individual component isolation (FEC clustering), and geometric parameter extraction (PCA and RANSAC). This workflow effectively addresses the challenge of reconstructing frame members, which differs significantly from planar wall reconstruction. An automated Python script based on IfcOpenShell was further developed to map extracted parameters to standard IfcBeam and IfcColumn entities, enabling the direct generation of editable BIM models compatible with mainstream software such as Revit.
- (2) High-Performance Semantic Segmentation: The trained PointNet++ network demonstrated exceptional robustness in classifying structural components. Quantitative evaluations on the test set yielded an Overall Accuracy (OA) of 97.8%, a Mean Accuracy (mAcc) of 94.92%, and a Mean Intersection over Union (mIoU) of 90.8%. Notably, critical structural elements such as beams, columns, and slabs all maintained IoU scores exceeding 90%, proving that the data augmentation strategy effectively mitigated class imbalance and ensured reliable recognition even in the presence of noise.
- (3) Precise Dimension Extraction and Geometric Accuracy: Experimental validation confirmed that the proposed geometric extraction algorithm satisfies the accuracy requirements for engineering reverse modeling. For structural components with complete data coverage, the average dimensional error for columns ranged from 1.4 mm to 2.0 mm, while beams exhibited average errors between 1.6 mm and 2.3 mm, excluding specific occluded cases. These results indicate that the integration of PCA dimensionality reduction and RANSAC boundary fitting can effectively resist point cloud noise and accurately restore the as-built dimensions of existing structures.
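For reference, the metrics reported in (2) can be computed from a per-class confusion matrix as in the following minimal NumPy sketch; the function name `segmentation_metrics` and the rows-as-ground-truth layout are assumptions, not the authors' evaluation code.

```python
import numpy as np

def segmentation_metrics(conf):
    """OA, mAcc and mIoU from a confusion matrix conf[true, pred]."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)                       # correctly classified points per class
    oa = tp.sum() / conf.sum()               # Overall Accuracy
    macc = (tp / conf.sum(axis=1)).mean()    # mean of per-class recall
    union = conf.sum(axis=1) + conf.sum(axis=0) - tp
    miou = (tp / union).mean()               # mean Intersection over Union
    return oa, macc, miou
```

For a two-class matrix [[9, 1], [1, 9]] this gives OA = mAcc = 0.9 and mIoU = 9/11.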
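The PCA dimensionality-reduction step mentioned in (3) can be sketched as follows: the member's longitudinal axis is taken as the largest-variance direction of the centered point cloud, and the points are projected onto the orthogonal plane to expose the cross-section. The function name and the SVD-based formulation are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def project_cross_section(points):
    """Project a member's points onto the plane orthogonal to its
    principal (longitudinal) axis, returning 2-D cross-section coords."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Principal axes via SVD of the centered cloud (equivalent to PCA)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]     # largest-variance direction = member axis
    plane = vt[1:]   # remaining two axes span the cross-section plane
    return centered @ plane.T, axis
```

On a synthetic column aligned with z, the spans of the projected 2-D coordinates recover the cross-section dimensions.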
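Similarly, the RANSAC boundary fitting in (3) can be illustrated with a generic 2-D line fit on projected cross-section points; the iteration count, inlier threshold, and least-squares refinement below are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def ransac_line(pts, n_iter=200, thresh=0.01, rng=None):
    """Fit a 2-D line to pts by RANSAC; returns (a, b, c) with
    a*x + b*y + c = 0 and the boolean inlier mask."""
    rng = np.random.default_rng(rng)
    pts = np.asarray(pts, dtype=float)
    best_inliers = None
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        p, q = pts[i], pts[j]
        d = q - p
        norm = np.hypot(d[0], d[1])
        if norm < 1e-12:
            continue
        nvec = np.array([-d[1], d[0]]) / norm   # unit normal of candidate line
        dist = np.abs((pts - p) @ nvec)         # point-to-line distances
        inliers = dist < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine on the inlier set: direction from PCA, normal from rotation
    p0 = pts[best_inliers].mean(axis=0)
    _, _, vt = np.linalg.svd(pts[best_inliers] - p0)
    d = vt[0]
    nvec = np.array([-d[1], d[0]])
    return (nvec[0], nvec[1], -nvec @ p0), best_inliers
```

With a seeded generator the fit is deterministic; points far from the line (e.g., noise from an adjacent member) are rejected as outliers.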
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Wei, X.; Liu, Y.; Zuo, X.; Zhong, J.; Yuan, Y.; Wang, Y.; Li, C.; Zou, Y. Automated Arch Profile Extraction from Point Clouds and Its Application in Arch Bridge Construction Monitoring. Buildings 2025, 15, 2912.
- Wu, H.; Ma, M.; Yang, Y.; Han, L.; Wu, S. On-Site Measuring Robot Technology for Post-Construction Quality Assessment of Building Projects. Buildings 2024, 14, 3085.
- Jiang, S.; Yang, Y.; Gu, S.; Li, J.; Hou, Y. Bridge Geometric Shape Measurement Using LiDAR–Camera Fusion Mapping and Learning-Based Segmentation Method. Buildings 2025, 15, 1458.
- Reuland, Y.; Lestuzzi, P.; Smith, I.F.C. An engineering approach to model-class selection for measurement-supported post-earthquake assessment. Eng. Struct. 2019, 197, 109408.
- Maboudi, M.; Backhaus, J.; Mai, I.; Ghassoun, Y.; Khedar, Y.; Lowke, D.; Riedel, B.; Bestmann, U.; Gerke, M. Very high resolution bridge deformation monitoring using UAV-based photogrammetry. J. Civ. Struct. Health Monit. 2025, 15, 3489–3508.
- Wang, D.; Liu, J.; Jiang, H.; Liu, P.; Jiang, Q. Existing Buildings Recognition and BIM Generation Based on Multi-Plane Segmentation and Deep Learning. Buildings 2025, 15, 691.
- Piekarczuk, A.; Mazurek, A.; Szer, J.; Szer, I. A Case Study of 3D Scanning Techniques in Civil Engineering Using the Terrestrial Laser Scanning Technique. Buildings 2024, 14, 3703.
- Zhao, J.; Chen, J.; Liang, Y.; Xu, Z. Feature Selection-Based Method for Scaffolding Assembly Quality Inspection Using Point Cloud Data. Buildings 2024, 14, 2518.
- Patil, J.; Kalantari, M. Automatic Scan-to-BIM—The Impact of Semantic Segmentation Accuracy. Buildings 2025, 15, 1126.
- Keitaanniemi, A.; Virtanen, J.-P.; Rönnholm, P.; Kukko, A.; Rantanen, T.; Vaaja, M. The Combined Use of SLAM Laser Scanning and TLS for the 3D Indoor Mapping. Buildings 2021, 11, 386.
- Nowak, R.; Orłowicz, R.; Rutkowski, R. Use of TLS (LiDAR) for Building Diagnostics with the Example of a Historic Building in Karlino. Buildings 2020, 10, 24.
- Masiero, A.; Fissore, F.; Guarnieri, A.; Pirotti, F.; Visintini, D.; Vettore, A. Performance evaluation of two indoor mapping systems: Low-Cost UWB-aided photogrammetry and backpack laser scanning. Appl. Sci. 2018, 8, 416.
- Li, J.; Peng, Y.; Tang, Z.; Li, Z. Three-Dimensional Reconstruction of Railway Bridges Based on Unmanned Aerial Vehicle–Terrestrial Laser Scanner Point Cloud Fusion. Buildings 2023, 13, 2841.
- Zhou, Z.; Gong, J.; Guo, M. Image-Based 3D Reconstruction for Posthurricane Residential Building Damage Assessment. J. Comput. Civ. Eng. 2016, 30, 04015015.
- Hao, D.; Li, Y.; Liu, H.; Xu, Z.; Zhang, J.; Ren, J.; Wu, J. Deformation monitoring of large steel structure based on terrestrial laser scanning technology. Measurement 2025, 248, 116962.
- Ma, J.W.; Czerniawski, T.; Leite, F. Semantic segmentation of point clouds of building interiors with deep learning: Augmenting training datasets with synthetic BIM-based point clouds. Autom. Constr. 2020, 113, 103144.
- Perez-Perez, Y.; Golparvar-Fard, M.; El-Rayes, K. Segmentation of point clouds via joint semantic and geometric features for 3D modeling of the built environment. Autom. Constr. 2021, 125, 103584.
- Schnabel, R.; Wahl, R.; Klein, R. Efficient RANSAC for point-cloud shape detection. In Computer Graphics Forum; Wiley: Hoboken, NJ, USA, 2007; Volume 26, pp. 214–226.
- Adams, R.; Bischof, L. Seeded region growing. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 641–647.
- Yang, J.; Zhang, D.; Frangi, A.F.; Yang, J.-Y. Two-Dimensional PCA: A New Approach to Appearance-Based Face Representation and Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 131–137.
- Charles, R.Q.; Su, H.; Kaichun, M.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017.
- Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017.
- Hamid-Lakzaeian, F. Point cloud segmentation and classification of structural elements in multi-planar masonry building facades. Autom. Constr. 2020, 118, 103232.
- Zhou, Z.; Gong, J. Automated Analysis of Mobile LiDAR Data for Component-Level Damage Assessment of Building Structures during Large Coastal Storm Events. Comput. Aided Civ. Eng. 2018, 33, 373–392.
- Kim, H.; Yoon, J.; Sim, S. Automated bridge component recognition from point clouds using deep learning. Struct. Control Health Monit. 2020, 27, e2591.
- Lu, R.; Brilakis, I.; Middleton, C.R. Detection of Structural Components in Point Clouds of Existing RC Bridges. Comput. Aided Civ. Eng. 2019, 34, 191–212.
- Pan, Y.; Dong, Y.; Wang, D.; Chen, A.; Ye, Z. Three-Dimensional Reconstruction of Structural Surface Model of Heritage Bridges Using UAV-Based Photogrammetric Point Clouds. Remote Sens. 2019, 11, 1204.
- Yi, C.; Lu, D.; Xie, Q.; Xu, J.; Wang, J. Tunnel Deformation Inspection via Global Spatial Axis Extraction from 3D Raw Point Cloud. Sensors 2020, 20, 6815.
- Dabrowski, P.S. Novel PCSE-based approach of inclined structures geometry analysis on the example of the Leaning Tower of Pisa. Measurement 2022, 189, 110462.
- Yang, S.; Hou, M.; Li, S. Three-Dimensional Point Cloud Semantic Segmentation for Cultural Heritage: A Comprehensive Review. Remote Sens. 2023, 15, 548.
- Lee, J.S.; Park, J.; Ryu, Y.-M. Semantic segmentation of bridge components based on hierarchical point cloud model. Autom. Constr. 2021, 130, 103847.
- Wang, Q.; Kim, M.-K.; Cheng, J.C.P.; Sohn, H. Automated quality assessment of precast concrete elements with geometry irregularities using terrestrial laser scanning. Autom. Constr. 2016, 68, 170–182.
- Kim, M.-K.; Sohn, H.; Chang, C.-C. Automated dimensional quality assessment of precast concrete panels using terrestrial laser scanning. Autom. Constr. 2014, 45, 163–177.
- Kim, M.-K.; Thedja, J.P.P.; Wang, Q. Automated dimensional quality assessment for formwork and rebar of reinforced concrete components using 3D point cloud data. Autom. Constr. 2020, 112, 103077.
- Bosché, F.; Ahmed, M.; Turkan, Y.; Haas, C.T.; Haas, R. The value of integrating Scan-to-BIM and Scan-vs-BIM techniques for construction monitoring using laser scanning and BIM: The case of cylindrical MEP components. Autom. Constr. 2015, 49, 201–213.
- Shu, J.; Zeng, Z.; Li, W.; Zhou, S.; Zhang, C.; Xu, C.; Zhang, H. Automatic geometric digital twin of box girder bridge using a laser-scanned point cloud. Autom. Constr. 2024, 168, 105781.
- Pantoja-Rosero, B.G.; Achanta, R.; Kozinski, M.; Fua, P.; Perez-Cruz, F.; Beyer, K. Generating LOD3 building models from structure-from-motion and semantic segmentation. Autom. Constr. 2022, 141, 104430.
- Jing, Y.; Sheil, B.; Acikgoz, S. Segmentation of large-scale masonry arch bridge point clouds with a synthetic simulator and the BridgeNet neural network. Autom. Constr. 2022, 142, 104459.
- Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic Graph CNN for Learning on Point Clouds. ACM Trans. Graph. 2019, 38, 1–12.
- Zhao, H.; Jiang, L.; Jia, J.; Torr, P.H.S.; Koltun, V. Point Transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021.
- Thomas, H.; Qi, C.R.; Deschaud, J.-E.; Marcotegui, B.; Goulette, F.; Guibas, L. KPConv: Flexible and Deformable Convolution for Point Clouds. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019.
- Cao, Y.; Wang, Y.; Xue, Y.; Zhang, H.; Lao, Y. FEC: Fast Euclidean Clustering for Point Cloud Segmentation. Drones 2022, 6, 325.
| Serial Number | Actual w (mm) | Actual l (mm) | Proposed w1 | Proposed l1 | Proposed w2 | Proposed l2 | Abs. Error w1 | Abs. Error l1 | Abs. Error w2 | Abs. Error l2 | Rel. Error w1 | Rel. Error l1 | Rel. Error w2 | Rel. Error l2 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 30.3 | 57.9 | 31.1 | 61.3 | 32.4 | 59.6 | 0.9 | 3.4 | 2.1 | 1.7 | 2.97% | 5.87% | 6.93% | 2.94% |
| 2 | 29.7 | 59.9 | 33.9 | 61.7 | 33.7 | 57.1 | 4.2 | 1.9 | 4 | 2.8 | 14.14% | 3.17% | 13.47% | 4.67% |
| 3 | 30.4 | 59.6 | 32.2 | 60.7 | 29.7 | 61.9 | 1.8 | 1.1 | 0.7 | 2.3 | 5.92% | 1.85% | 2.30% | 3.86% |
| 4 | 31.1 | 61.8 | 33.8 | 63.1 | 32.8 | 60.8 | 2.7 | 1.3 | 1.6 | 1 | 8.68% | 2.10% | 5.14% | 1.62% |
| 5 | 31.3 | 59.6 | 34.4 | 59.8 | 34.7 | 60.2 | 3.1 | 0.2 | 3.4 | 0.6 | 9.90% | 0.34% | 10.86% | 1.01% |
| 6 | 30.8 | 61.2 | 31.7 | 62.1 | 30.3 | 59.6 | 0.9 | 0.9 | 0.5 | 1.6 | 2.92% | 1.47% | 1.62% | 2.61% |
| 7 | 31.7 | 59.4 | 34.3 | 61.6 | 32.6 | 60.4 | 2.6 | 2.2 | 1 | 1 | 8.20% | 3.70% | 3.15% | 1.68% |
| 8 | 31.3 | 61 | 30.8 | 61.6 | 31 | 59.2 | 0.6 | 0.6 | 0.3 | 1.8 | 1.92% | 0.98% | 0.96% | 2.95% |
| 9 | 29.9 | 59.5 | 30.5 | 61.3 | 28.8 | 60.4 | 0.6 | 1.8 | 1.1 | 0.9 | 2.01% | 3.03% | 3.68% | 1.51% |
| 10 | 30.4 | 62.9 | 29 | 63 | 30.5 | 62.8 | 1.4 | 0.1 | 0.1 | 0.1 | 4.61% | 0.16% | 0.33% | 0.16% |
| 11 | 29.6 | 60.8 | 33.9 | 61.9 | 34.1 | 59.3 | 4.3 | 1.1 | 4.5 | 1.5 | 14.53% | 1.81% | 15.20% | 2.47% |
| 12 | 28.9 | 62.5 | 30.8 | 61.4 | 30.5 | 58.5 | 1.9 | 1.2 | 1.6 | 4 | 6.57% | 1.92% | 5.54% | 6.40% |
| 13 | 30.8 | 59.1 | 33.7 | 61.2 | 31.2 | 60.5 | 2.9 | 2.1 | 0.4 | 1.4 | 9.42% | 3.55% | 1.30% | 2.37% |
| 14 | 31 | 57.8 | 35.4 | 62.4 | 27.1 | 61 | 4.4 | 4.6 | 3.9 | 3.2 | 14.19% | 7.96% | 12.58% | 5.54% |
| 15 | 31.1 | 62 | 29 | 61.6 | 32.1 | 61.6 | 2 | 0.4 | 1.1 | 0.4 | 6.43% | 0.65% | 3.54% | 0.65% |
| 16 | 30.5 | 62.3 | 34.8 | 62.9 | 33 | 59.9 | 4.3 | 0.6 | 2.5 | 2.4 | 14.10% | 0.96% | 8.20% | 3.85% |
| 17 | 30.1 | 59.5 | 32.2 | 61.5 | 27.8 | 61.7 | 2.1 | 2 | 2.3 | 2.2 | 6.98% | 3.36% | 7.64% | 3.70% |
| 18 | 31.6 | 58.3 | 37.7 | 62 | 33.7 | 59.6 | 4.1 | 3.7 | 1.1 | 1.3 | 12.97% | 6.35% | 3.48% | 2.23% |
| 19 | 30.8 | 62 | 33.5 | 60.6 | 28.3 | 60.2 | 2.7 | 1.5 | 2.5 | 1.8 | 8.77% | 2.42% | 8.12% | 2.90% |
| 20 | 31.9 | 61.6 | 34 | 64.6 | 35 | 64 | 2.1 | 3.1 | 3.1 | 2.5 | 6.58% | 5.03% | 9.72% | 4.06% |
| 21 | 31.5 | 61.5 | 32.8 | 62.1 | 30.3 | 59.1 | 1.3 | 0.7 | 1.2 | 2.3 | 4.13% | 1.14% | 3.81% | 3.74% |
| 22 | 30.3 | 60 | 30.7 | 62.1 | 29.2 | 61.8 | 0.4 | 2 | 1.1 | 1.8 | 1.32% | 3.33% | 3.63% | 3.00% |
| 23 | 29.3 | 58.9 | 34.1 | 61.8 | 32.3 | 58.8 | 4.8 | 2.9 | 3 | 0.1 | 16.38% | 4.92% | 10.24% | 0.17% |
| 24 | 30 | 63 | 3.3 | 64.7 | 5.3 | 64.1 | 26.7 | 1.7 | 24.7 | 1.1 | 89.00% | 2.70% | 82.33% | 1.75% |
| 25 | 30.9 | 61.4 | 32.1 | 64.5 | 32.1 | 63.2 | 1.2 | 3.2 | 1.2 | 1.8 | 3.88% | 5.21% | 3.88% | 2.93% |
| 26 | 31.5 | 59.1 | 31.3 | 59.1 | 32.3 | 61.3 | 0.3 | 0 | 0.8 | 2.2 | 0.95% | 0.00% | 2.54% | 3.72% |
| 27 | 30.8 | 60.9 | 3.6 | 62.1 | 3.6 | 62.3 | 27.2 | 1.2 | 27.2 | 1.4 | 88.31% | 1.97% | 88.31% | 2.30% |
| 28 | 30.9 | 60.7 | 32.9 | 62.1 | 32.5 | 62.3 | 2 | 1.3 | 1.6 | 1.6 | 6.47% | 2.14% | 5.18% | 2.64% |
| 29 | 31.2 | 62 | 31.9 | 60.3 | 31.6 | 60.2 | 0.7 | 1.8 | 0.5 | 1.8 | 2.24% | 2.90% | 1.60% | 2.90% |
| 30 | 31.3 | 60.4 | 34 | 62.4 | 32.5 | 60.4 | 2.7 | 2 | 1.2 | 0 | 8.63% | 3.31% | 3.83% | 0.00% |
| 31 | 29.2 | 62 | 33.5 | 64.6 | 33 | 61.9 | 4.3 | 2.6 | 3.8 | 0.1 | 14.73% | 4.19% | 13.01% | 0.16% |
| 32 | 29.8 | 60.8 | 4.2 | 64.3 | 1.9 | 65 | 25.6 | 3.4 | 27.9 | 4.2 | 85.91% | 5.59% | 93.62% | 6.91% |
| Serial Number | Actual w1 (mm) | Actual l1 (mm) | Actual w2 (mm) | Actual l2 (mm) | Proposed w1 | Proposed l1 | Proposed w2 | Proposed l2 | Abs. Error w1 | Abs. Error l1 | Abs. Error w2 | Abs. Error l2 | Rel. Error w1 | Rel. Error l1 | Rel. Error w2 | Rel. Error l2 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 52.9 | 61.2 | 52.9 | 61.2 | 55 | 59.7 | 52.2 | 60.4 | 2.1 | 1.5 | 0.7 | 0.7 | 3.97% | 2.45% | 1.32% | 1.14% |
| 2 | 51.3 | 61.4 | 51.3 | 61.4 | 52.4 | 60.6 | 52.8 | 58.5 | 1 | 0.8 | 1.5 | 2.9 | 1.95% | 1.30% | 2.92% | 4.72% |
| 3 | 51 | 62.2 | 51 | 62.2 | 51.2 | 61 | 51.3 | 60.9 | 0.3 | 1.2 | 0.3 | 1.3 | 0.59% | 1.93% | 0.59% | 2.09% |
| 4 | 50.5 | 59.8 | 50.5 | 59.8 | 53.2 | 61.4 | 52.2 | 59.7 | 2.7 | 1.6 | 1.8 | 0.1 | 5.35% | 2.68% | 3.56% | 0.17% |
| 5 | 50.4 | 61.3 | 50.4 | 61.3 | 51.4 | 62.4 | 53 | 62.5 | 1.1 | 1.1 | 2.6 | 1.1 | 2.18% | 1.79% | 5.16% | 1.79% |
| 6 | 51.6 | 60 | 51.6 | 60 | 54.6 | 58 | 50.1 | 59.2 | 3 | 1.9 | 1.5 | 0.7 | 5.81% | 3.17% | 2.91% | 1.17% |
| 7 | 51.3 | 62 | 51.3 | 62 | 54 | 60.5 | 52.9 | 61.3 | 2.7 | 1.5 | 1.6 | 0.8 | 5.26% | 2.42% | 3.12% | 1.29% |
| 8 | 52.4 | 61.7 | 52.4 | 61.7 | 54.6 | 58.8 | 53.5 | 63.8 | 2.3 | 3 | 1.2 | 2.1 | 4.39% | 4.86% | 2.29% | 3.40% |
| 9 | 51.9 | 61 | 51.9 | 61 | 53.6 | 60.2 | 51.4 | 60.4 | 1.7 | 0.8 | 0.5 | 0.6 | 3.28% | 1.31% | 0.96% | 0.98% |
| 10 | 49.6 | 60.4 | 49.6 | 60.4 | 53.9 | 60.2 | 54.9 | 59.6 | 4.3 | 0.2 | 5.3 | 0.9 | 8.67% | 0.33% | 10.69% | 1.49% |
| 11 | 49.7 | 61.5 | 49.7 | 61.5 | 52.2 | 62.7 | 51.7 | 60.6 | 2.5 | 1.2 | 2 | 0.9 | 5.03% | 1.95% | 4.02% | 1.46% |
| 12 | 52.3 | 62.7 | 52.3 | 62.7 | 54.3 | 60 | 54.4 | 62.3 | 2 | 2.6 | 2.1 | 0.3 | 3.82% | 4.15% | 4.02% | 0.48% |
| 13 | 50 | 60.4 | 50 | 60.4 | 52.9 | 58.4 | 53.4 | 61.2 | 2.8 | 2 | 3.4 | 0.8 | 5.60% | 3.31% | 6.80% | 1.32% |
| 14 | 51.6 | 62.9 | 51.6 | 62.9 | 52.4 | 62.9 | 51.8 | 61.9 | 0.8 | 0 | 0.2 | 1 | 1.55% | 0.00% | 0.39% | 1.59% |
| 15 | 50.3 | 61.4 | 50.3 | 61.4 | 52.5 | 64.1 | 52.7 | 63.6 | 2.2 | 2.7 | 2.5 | 2.2 | 4.37% | 4.40% | 4.97% | 3.58% |
| 16 | 52.8 | 60.8 | 52.8 | 60.8 | 51.9 | 60.4 | 51.1 | 58.9 | 0.8 | 0.3 | 1.6 | 1.9 | 1.52% | 0.49% | 3.03% | 3.13% |
| 17 | 52 | 60.5 | 52 | 60.5 | 51.8 | 59.8 | 54.2 | 61.2 | 0.2 | 0.7 | 2.2 | 0.7 | 0.38% | 1.16% | 4.23% | 1.16% |
| 18 | 51.2 | 62.4 | 51.2 | 62.4 | 52.8 | 59.8 | 53.1 | 63.2 | 1.6 | 2.7 | 1.9 | 0.8 | 3.13% | 4.33% | 3.71% | 1.28% |
| 19 | 51.1 | 60.3 | 51.1 | 60.3 | 51.8 | 60.2 | 53.4 | 60.9 | 0.7 | 0.1 | 2.3 | 0.6 | 1.37% | 0.17% | 4.50% | 1.00% |
| 20 | 50.2 | 62.6 | 50.2 | 62.6 | 52.5 | 61.2 | 53 | 60.4 | 2.3 | 1.4 | 2.8 | 2.2 | 4.58% | 2.24% | 5.58% | 3.51% |
| 21 | 51.8 | 59.8 | 51.8 | 59.8 | 53 | 60 | 53.7 | 61.3 | 1.2 | 0.2 | 1.9 | 1.5 | 2.32% | 0.33% | 3.67% | 2.51% |
| 22 | 50.3 | 61.4 | 50.3 | 61.4 | 52.9 | 64.4 | 52.8 | 64 | 2.5 | 2.9 | 2.4 | 2.6 | 4.97% | 4.72% | 4.77% | 4.23% |
| 23 | 51 | 62.6 | 51 | 62.6 | 50.9 | 62.6 | 52.3 | 59.8 | 0.1 | 0 | 1.2 | 2.8 | 0.20% | 0.00% | 2.35% | 4.47% |
| 24 | 50.1 | 62.1 | 50.1 | 62.1 | 53.6 | 62.3 | 52.5 | 60.3 | 3.5 | 0.2 | 2.3 | 1.8 | 6.99% | 0.32% | 4.59% | 2.90% |
| 25 | 52.7 | 60.4 | 52.7 | 60.4 | 53.6 | 63.1 | 54.1 | 62.3 | 0.9 | 2.7 | 1.4 | 1.9 | 1.71% | 4.47% | 2.66% | 3.15% |
| 26 | 51.2 | 59.9 | 51.2 | 59.9 | 53.4 | 63.7 | 53.5 | 59.2 | 2.2 | 3.8 | 2.2 | 0.7 | 4.30% | 6.34% | 4.30% | 1.17% |
| 27 | 52 | 59.9 | 52 | 59.9 | 53 | 59.6 | 48.7 | 62.3 | 1 | 0.3 | 3.3 | 2.5 | 1.92% | 0.50% | 6.35% | 4.17% |
| 28 | 50.8 | 61.2 | 50.8 | 61.2 | 52.4 | 60.2 | 49.8 | 62.1 | 1.6 | 0.9 | 1.1 | 0.9 | 3.15% | 1.47% | 2.17% | 1.47% |
| 29 | 50.1 | 62.2 | 50.1 | 62.2 | 53.1 | 60.7 | 54.2 | 62.7 | 3 | 1.5 | 4.1 | 0.5 | 5.99% | 2.41% | 8.18% | 0.80% |
| 30 | 50.5 | 61.8 | 50.5 | 61.8 | 49.8 | 64 | 52.8 | 65.2 | 0.7 | 2.2 | 2.3 | 3.5 | 1.39% | 3.56% | 4.55% | 5.66% |
| 31 | 49.9 | 61.5 | 49.9 | 61.5 | 53.1 | 61.3 | 51.3 | 59.6 | 3.2 | 0.2 | 1.4 | 1.9 | 6.41% | 0.33% | 2.81% | 3.09% |
| 32 | 49.6 | 62.7 | 49.6 | 62.7 | 51.9 | 59.1 | 50.9 | 63.4 | 2.3 | 3.6 | 1.3 | 0.7 | 4.64% | 5.74% | 2.62% | 1.12% |
| Average | 51.1 | 61.3 | 51.1 | 61.3 | 52.8 | 61 | 52.5 | 61.3 | 1.9 | 1.4 | 2 | 1.4 | 3.72% | 2.28% | 3.91% | 2.28% |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Zhang, F.; Liu, J.; Li, P.; Chen, L.; Xiong, Q. Automated Dimension Recognition and BIM Modeling of Frame Structures Based on 3D Point Clouds. Electronics 2026, 15, 293. https://doi.org/10.3390/electronics15020293

