Article

Generation of Structural Components for Indoor Spaces from Point Clouds

by Junhyuk Lee 1,†, Yutaka Ohtake 1,*,†, Takashi Nakano 2 and Daisuke Sato 2

1 School of Precision Engineering, The University of Tokyo, Tokyo 113-8654, Japan
2 DataLabs, Inc., Tokyo 103-0024, Japan
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sensors 2025, 25(10), 3012; https://doi.org/10.3390/s25103012
Submission received: 26 March 2025 / Revised: 5 May 2025 / Accepted: 8 May 2025 / Published: 10 May 2025
(This article belongs to the Section Sensing and Imaging)

Abstract

Point clouds from laser scanners have been widely used in recent research on indoor modeling methods. Currently, particularly in data-driven modeling methods, the data must be preprocessed to separate structural from nonstructural components before modeling. In this paper, we propose an indoor modeling method that does not require this classification of structural and nonstructural components. A pre-mesh is generated to construct the adjacency relations of the point cloud, and planar components are extracted using planar-based region growing. Then, the distance field of each plane is calculated, and voxel data referred to as a surface confidence map are obtained. Subsequently, the inside and outside of the indoor model are classified using a graph-cut algorithm. Finally, indoor models with watertight meshes are generated via dual contouring and mesh refinement. The experimental results showed that the point-to-mesh error ranged from approximately 2 mm to 50 mm depending on the dataset. Furthermore, completeness, measured as the proportion of the original point cloud successfully reconstructed into the mesh, approached 1.0 for single-room datasets and reached around 0.95 for certain multiroom and synthetic datasets. These results demonstrate the effectiveness of the proposed method in automatically removing nonstructural components and generating clean structural meshes.
Keywords: 3D indoor modeling; 3D reconstruction; planar-based region growing; graph-cut; unsigned distance fields
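
The abstract describes a multi-stage pipeline (plane extraction, distance fields on a voxel grid, graph cut, dual contouring). The sketch below illustrates only the first two stages under stated assumptions: it substitutes Open3D's iterative RANSAC plane segmentation for the paper's planar-based region growing and samples an unsigned distance field on a regular voxel grid as a rough analogue of the surface confidence map. The function names, thresholds, and the input file scan.ply are hypothetical, and the graph-cut labeling and dual-contouring steps are omitted; this is not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): extract planar patches
# from an indoor scan and sample an unsigned distance field on a voxel grid.
# Assumes Open3D is installed; "scan.ply" is a hypothetical input file.
import numpy as np
import open3d as o3d


def extract_planes(pcd, max_planes=10, dist_thresh=0.02, min_inliers=2000):
    """Iterative RANSAC plane extraction, used here as a stand-in for the
    planar-based region growing described in the abstract."""
    planes, rest = [], pcd
    for _ in range(max_planes):
        if len(rest.points) < min_inliers:
            break
        model, inliers = rest.segment_plane(distance_threshold=dist_thresh,
                                            ransac_n=3,
                                            num_iterations=1000)
        if len(inliers) < min_inliers:
            break
        planes.append((model, rest.select_by_index(inliers)))
        rest = rest.select_by_index(inliers, invert=True)
    return planes


def unsigned_distance_field(planes, bounds_min, bounds_max, voxel=0.05):
    """Unsigned distance from each voxel center to the nearest extracted
    plane patch (a rough analogue of the surface confidence map)."""
    xs = np.arange(bounds_min[0], bounds_max[0], voxel)
    ys = np.arange(bounds_min[1], bounds_max[1], voxel)
    zs = np.arange(bounds_min[2], bounds_max[2], voxel)
    grid = np.stack(np.meshgrid(xs, ys, zs, indexing="ij"), axis=-1)
    centers = grid.reshape(-1, 3)
    dist = np.full(len(centers), np.inf)
    for _, patch in planes:
        # nearest-neighbor distance to the supporting points of each patch
        kdtree = o3d.geometry.KDTreeFlann(patch)
        for i, c in enumerate(centers):
            _, _, d2 = kdtree.search_knn_vector_3d(c, 1)
            dist[i] = min(dist[i], np.sqrt(d2[0]))
    return dist.reshape(grid.shape[:3])


if __name__ == "__main__":
    pcd = o3d.io.read_point_cloud("scan.ply")  # hypothetical input scan
    planes = extract_planes(pcd)
    lo = np.asarray(pcd.get_min_bound())
    hi = np.asarray(pcd.get_max_bound())
    udf = unsigned_distance_field(planes, lo, hi)
    # In the full pipeline, a graph cut over this voxel grid would label
    # inside/outside, followed by dual contouring and mesh refinement.
    print(f"{len(planes)} planes, UDF grid shape {udf.shape}")
```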

