
Search Results (10)

Search Parameters:
Keywords = floorplan reconstruction

26 pages, 4166 KB  
Article
FP-MAE: A Self-Supervised Model for Floorplan Generation with Incomplete Inputs
by Jing Zhong, Ran Luo, Peilin Li, Tianrui Li, Pengyu Zeng, Zhifeng Lei, Tianjing Feng and Jun Yin
Buildings 2026, 16(3), 558; https://doi.org/10.3390/buildings16030558 - 29 Jan 2026
Viewed by 510
Abstract
Floor plans are a central representational component of architectural design, operating in close relation to sections, elevations, and three-dimensional reasoning to support the production and understanding of architectural space. In this context, we address the bounded computational task of completing incomplete floor plan representations as a form of early-stage design assistance, rather than treating the floor plan as an isolated architectural object. Within this workflow, being able to automatically complete a floor plan from an unfinished draft is highly valuable because it allows architects to generate preliminary schemes more quickly, streamline early discussions, and reduce the repetitive workload involved in revisions. To meet this need, we present FP-MAE, a self-supervised learning framework designed for floor plan completion. This study proposes three core contributions: (1) We developed FloorplanNet, a dedicated dataset that includes 8000 floorplans consisting of both schematic line drawings and color-coded plans, providing diverse yet consistent examples of residential layouts. (2) On top of this dataset, FP-MAE applies the Masked Autoencoder (MAE) strategy. By deliberately masking sections of a plan and using a lightweight Vision Transformer (ViT) to reconstruct the missing regions, the model learns to capture the global structural patterns of floor plans from limited local information. (3) We evaluated FP-MAE across multiple masking scenarios and compared its performance with state-of-the-art baselines. Beyond controlled experiments, we also tested the model on real sketches produced during the early stages of design projects, which demonstrated its robustness under practical conditions. The results show that FP-MAE can produce complete plans that are both accurate and functionally coherent, even when starting from highly incomplete inputs. FP-MAE is a practical and scalable solution for automated floor plan generation. 
It can be integrated into design software as a supportive tool to speed up concept development and option exploration, and it also points toward broader opportunities for applying AI in architectural automation. While the current framework operates on two-dimensional plan representations, future extensions may integrate multi-view information such as sections or three-dimensional models to better reflect the relational nature of architectural design representations. Full article
(This article belongs to the Special Issue Artificial Intelligence in Architecture and Interior Design)
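The masked-autoencoder strategy described in the abstract above can be illustrated in a few lines of NumPy. This is a minimal sketch of the masking step only; the patch size, mask ratio, and array shapes are arbitrary assumptions, not values from the paper, and the actual FP-MAE model reconstructs the hidden patches with a Vision Transformer rather than leaving them zeroed:

```python
import numpy as np

def mask_patches(plan, patch=4, mask_ratio=0.5, seed=0):
    """Split a 2D floor-plan raster into square patches and zero out a random
    subset, mimicking the MAE masking strategy (illustrative sketch only)."""
    h, w = plan.shape
    assert h % patch == 0 and w % patch == 0
    rng = np.random.default_rng(seed)
    ph, pw = h // patch, w // patch
    n = ph * pw
    idx = rng.permutation(n)[: int(n * mask_ratio)]  # patches to hide
    keep = np.ones(n, dtype=bool)
    keep[idx] = False
    masked = plan.copy()
    for i in idx:
        r, c = divmod(i, pw)
        masked[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0.0
    return masked, keep

plan = np.ones((16, 16))
masked, keep = mask_patches(plan, patch=4, mask_ratio=0.5)
# half of the 16 patches are zeroed; an encoder would see only the kept ones
```

In the real model, the encoder processes only the visible patches and a lightweight decoder predicts the masked ones, which is what lets it learn global layout structure from local fragments.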

19 pages, 15310 KB  
Article
A New Framework for Generating Indoor 3D Digital Models from Point Clouds
by Xiang Gao, Ronghao Yang, Xuewen Chen, Junxiang Tan, Yan Liu, Zhaohua Wang, Jiahao Tan and Huan Liu
Remote Sens. 2024, 16(18), 3462; https://doi.org/10.3390/rs16183462 - 18 Sep 2024
Cited by 6 | Viewed by 4517
Abstract
Three-dimensional indoor models have wide applications in fields such as indoor navigation, civil engineering, virtual reality, and so on. With the development of LiDAR technology, automatic reconstruction of indoor models from point clouds has gained significant attention. We propose a new framework for generating indoor 3D digital models from point clouds. The proposed method first generates a room instance map of an indoor scene. Walls are detected and projected onto a horizontal plane to form line segments. These segments are extended, intersected, and, by solving an integer programming problem, line segments are selected to create room polygons. The polygons are converted into a raster image, and image connectivity detection is used to generate a room instance map. Then the roofs of the point cloud are extracted and used to perform an overlap analysis with the generated room instance map to segment the entire roof point cloud, obtaining the roof for each room. Room boundaries are defined by extracting and regularizing the roof point cloud boundaries. Finally, by detecting doors and windows in the scene in two steps, we generate the floor plans and 3D models separately. Experiments with the Giblayout dataset show that our method is robust to clutter and furniture point clouds, achieving high-accuracy models that match real scenes. The mean precision and recall for the floorplans are both 0.93, and the Point–Surface Distance (PSD) and standard deviation of the PSD for the 3D models are 0.044 m and 0.066 m, respectively. Full article
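The first stage described above, projecting detected wall points onto the horizontal plane before line fitting, can be sketched as a simple rasterization. This is an illustrative stand-in, not the authors' implementation; the cell size and grid origin are arbitrary choices:

```python
import numpy as np

def project_walls(points, cell=1.0):
    """Project 3D wall points onto the horizontal (x, y) plane and bin them
    into an occupancy/count grid -- the step that precedes fitting line
    segments to walls. Illustrative sketch only."""
    xy = points[:, :2]
    mins = xy.min(axis=0)
    ij = np.floor((xy - mins) / cell).astype(int)
    grid = np.zeros(ij.max(axis=0) + 1, dtype=int)
    np.add.at(grid, (ij[:, 0], ij[:, 1]), 1)  # accumulate point counts per cell
    return grid

# a toy "wall": points along the line y = 0, sampled at three heights
pts = np.array([[float(x), 0.0, z] for x in range(10) for z in (0.0, 1.0, 2.0)])
grid = project_walls(pts, cell=1.0)
# every cell along the wall accumulates the three height samples
```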

22 pages, 20832 KB  
Article
3D Visual Reconstruction as Prior Information for First Responder Localization and Visualization
by Susanna Kaiser, Magdalena Linkiewicz, Henry Meißner and Dirk Baumbach
Sensors 2023, 23(18), 7785; https://doi.org/10.3390/s23187785 - 10 Sep 2023
Cited by 3 | Viewed by 2109
Abstract
In professional use cases like police or fire brigade missions, coordinated and systematic force management is crucial for achieving operational success during intervention by the emergency personnel. A real-time situation picture enhances the coordination of the team. This situation picture includes not only an overview of the environment but also the positions, i.e., localization, of the emergency forces. The overview of the environment can be obtained either from known situation pictures like floorplans or by scanning the environment with the aid of visual sensors. The self-localization problem can be solved outdoors using the Global Navigation Satellite System (GNSS), but it is not fully solved indoors, where the GNSS signal might not be received or might be degraded. In this paper, we propose a novel combination of an inertial localization technique based on simultaneous localization and mapping (SLAM) with 3D building scans, which are used as prior information, for geo-referencing the positions, obtaining a situation picture, and finally visualizing the results with an appropriate visualization tool. We developed a new method for converting point clouds into a hexagonal prism map specifically designed for our SLAM algorithm. With this combination, we could keep the equipment for first responders as lightweight as required. We showed that the positioning led to an average accuracy of less than 1 m indoors, and the final visualization including the building layout obtained by the 3D building reconstruction will be advantageous for coordinating first responder operations. Full article
(This article belongs to the Special Issue Advanced Inertial Sensors, Navigation, and Fusion)
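The hexagonal prism map mentioned in the abstract rests on binning 2D positions into a hexagonal grid. Below is a minimal sketch of one standard axial-coordinate conversion (pointy-top layout with cube rounding); the paper's exact discretization and the prism (height) dimension are not reproduced here:

```python
import math

def point_to_hex(x, y, size=1.0):
    """Map a 2D point to axial hex-grid coordinates (q, r) in a pointy-top
    layout -- the kind of binning used to turn a point cloud into a
    hexagonal map. Illustrative sketch, not the paper's implementation."""
    qf = (math.sqrt(3) / 3 * x - y / 3) / size   # fractional axial coords
    rf = (2 * y / 3) / size
    # cube rounding: snap fractional cube coords to the nearest hex cell
    xf, zf, yf = qf, rf, -qf - rf
    rx, ry, rz = round(xf), round(yf), round(zf)
    dx, dy, dz = abs(rx - xf), abs(ry - yf), abs(rz - zf)
    if dx > dy and dx > dz:
        rx = -ry - rz
    elif dy > dz:
        ry = -rx - rz
    else:
        rz = -rx - ry
    return rx, rz  # axial (q, r)

# the origin falls in the central cell (0, 0)
assert point_to_hex(0.0, 0.0) == (0, 0)
```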

14 pages, 3864 KB  
Article
Reconstructing Floorplans from Point Clouds Using GAN
by Tianxing Jin, Jiayan Zhuang, Jiangjian Xiao, Ningyuan Xu and Shihao Qin
J. Imaging 2023, 9(2), 39; https://doi.org/10.3390/jimaging9020039 - 8 Feb 2023
Cited by 7 | Viewed by 4605
Abstract
This paper proposes a method for reconstructing floorplans from indoor point clouds. Unlike existing corner and line primitive detection algorithms, this method uses a generative adversarial network to learn the complex distribution of indoor layout graphics, and repairs incomplete room masks into more regular segmentation areas. Automatic learning of the structural information of layout graphics can reduce the dependence on geometric priors, and replacing complex optimization algorithms with Deep Neural Networks (DNN) can improve the efficiency of data processing. The proposed method can retain more shape information from the original data and improve the accuracy of the overall structure details. On this basis, the method further uses an edge optimization algorithm to eliminate pixel-level edge artifacts that neural networks cannot perceive. Finally, combined with the constraint information of the overall layout, the method can generate compact floorplans with rich semantic information. Experimental results indicate that the algorithm is robust and accurate on complex 3D indoor datasets; its performance is competitive with that of existing methods. Full article
(This article belongs to the Special Issue Geometry Reconstruction from Images)
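The final edge optimization step, removing pixel-level artifacts that the network cannot perceive, can be approximated with a simple 3x3 majority filter on the room mask. This is a hedged stand-in for illustration only; `majority_smooth` and its vote threshold are assumptions, not the paper's algorithm:

```python
import numpy as np

def majority_smooth(mask):
    """Fill pinholes and remove speckles in a binary room mask with a 3x3
    majority vote. Illustrative stand-in for edge cleanup, not the paper's
    edge optimization algorithm."""
    h, w = mask.shape
    padded = np.pad(mask.astype(int), 1)
    # sum each pixel's 3x3 neighborhood via the nine shifted views
    votes = sum(padded[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return (votes >= 5).astype(int)  # keep pixels where the majority agrees

room = np.ones((6, 6), dtype=int)
room[3, 3] = 0                      # a one-pixel hole inside the room
clean = majority_smooth(room)       # the interior pinhole is filled
```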

16 pages, 9517 KB  
Article
Building Floorplan Reconstruction Based on Integer Linear Programming
by Qiting Wang, Zunjie Zhu, Ruolin Chen, Wei Xia and Chenggang Yan
Remote Sens. 2022, 14(18), 4675; https://doi.org/10.3390/rs14184675 - 19 Sep 2022
Cited by 8 | Viewed by 4299
Abstract
The reconstruction of the floorplan for a building requires the creation of a two-dimensional floorplan from a 3D model. This task is widely employed in interior design and decoration. In reality, the structures of indoor environments are complex, with much clutter and occlusion, making it difficult to reconstruct a complete and accurate floorplan. It is well known that a suitable dataset is key to driving an effective algorithm, yet existing floorplan reconstruction datasets are synthetic and small. Without a reliable accumulation of real datasets, the robustness of methods in real-scene reconstruction is weakened. In this paper, we first annotate a large-scale realistic benchmark, which contains RGBD image sequences and 3D models of 80 indoor scenes covering more than 10,000 square meters. We also introduce a framework for floorplan reconstruction with mesh-based point cloud normalization. The loose-Manhattan constraint is enforced in our optimization process, and the optimal floorplan is reconstructed via constraint integer programming. The experimental results on public and our own datasets demonstrate that the proposed method outperforms FloorNet and Floor-SP. Full article
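The constraint integer programming step named in the abstract amounts to a 0/1 selection over candidate wall segments. The toy brute-force version below conveys the idea (maximize evidence covered minus a complexity penalty); it is illustrative only, and real systems use an ILP solver with richer cost terms:

```python
from itertools import product

def select_segments(coverage, penalty=1.0):
    """Toy 0/1 integer program: pick the subset of candidate segments that
    maximizes (evidence cells covered) - penalty * (segments used).
    Brute force is fine for a handful of candidates; real pipelines hand
    this to an ILP solver."""
    best_score, best_pick = float("-inf"), None
    n = len(coverage)
    for pick in product((0, 1), repeat=n):
        covered = set()
        for keep, cells in zip(pick, coverage):
            if keep:
                covered |= cells
        score = len(covered) - penalty * sum(pick)
        if score > best_score:
            best_score, best_pick = score, pick
    return best_pick, best_score

# three candidates: two short segments vs. one long segment covering both
coverage = [{1, 2, 3}, {4, 5, 6}, {1, 2, 3, 4, 5, 6}]
pick, score = select_segments(coverage, penalty=1.0)
# the single long segment wins: same coverage, lower complexity
```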

12 pages, 1763 KB  
Article
Automatic 2D Floorplan CAD Generation from 3D Point Clouds
by Uuganbayar Gankhuyag and Ji-Hyeong Han
Appl. Sci. 2020, 10(8), 2817; https://doi.org/10.3390/app10082817 - 19 Apr 2020
Cited by 29 | Viewed by 12630
Abstract
In the architecture, engineering, and construction (AEC) industry, creating an indoor model of existing buildings has been a challenging task since the introduction of building information modeling (BIM). Because the process of BIM is primarily manual and implies a high possibility of error, the automated creation of indoor models remains an ongoing research topic. In this paper, we propose a fully automated method to generate 2D floorplan computer-aided designs (CADs) from 3D point clouds. The proposed method consists of two main parts. The first is to detect planes in buildings, such as walls, floors, and ceilings, from unstructured 3D point clouds and to classify them based on the Manhattan-World (MW) assumption. The second is to generate 3D BIM in the Industry Foundation Classes (IFC) format and a 2D floorplan CAD using the proposed line-detection algorithm. We tested the proposed method on 3D point cloud data from a university building, residential houses, and apartments, and evaluated the geometric quality of the wall reconstruction. We also offer the source code for the proposed method on GitHub. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
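The first part of the pipeline, classifying detected planes under the Manhattan-World assumption, reduces to checking which coordinate axis a plane's unit normal aligns with. A minimal sketch follows; the threshold and label strings are illustrative assumptions, not the paper's values:

```python
import numpy as np

def classify_plane(normal, tol=0.9):
    """Classify a detected plane as wall / floor-or-ceiling / other from its
    normal vector under the Manhattan-World assumption (axis-aligned scenes).
    Illustrative sketch: threshold and labels are assumptions."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    if abs(n[2]) > tol:                        # normal points up/down
        return "floor/ceiling"
    if abs(n[0]) > tol or abs(n[1]) > tol:     # normal along x or y
        return "wall"
    return "other"                             # slanted surface or clutter

assert classify_plane((0, 0, 1)) == "floor/ceiling"
```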

15 pages, 15436 KB  
Article
Indoor Reconstruction from Floorplan Images with a Deep Learning Approach
by Hanme Jang, Kiyun Yu and JongHyeon Yang
ISPRS Int. J. Geo-Inf. 2020, 9(2), 65; https://doi.org/10.3390/ijgi9020065 - 21 Jan 2020
Cited by 53 | Viewed by 8938
Abstract
Although interest in indoor space modeling is increasing, the quantity of indoor spatial data available is currently very scarce compared to its demand. Many studies have been carried out to acquire indoor spatial information from floorplan images because they are relatively cheap and easy to access. However, existing studies do not take international standards and usability into consideration; they consider only 2D geometry. This study aims to generate basic data that can be converted to indoor spatial information using the IndoorGML (Indoor Geography Markup Language) thick-wall model or CityGML (City Geography Markup Language) level of detail 2 by creating vector-formed data while preserving wall thickness. To achieve this, recent convolutional neural networks are used on floorplan images to detect wall and door pixels. Additionally, centerline and corner detection algorithms were applied to convert wall and door images into vector data. In this manner, we obtained high-quality raster segmentation results and reliable vector data with a node-edge structure and thickness attributes that enabled the structures of vertical, horizontal, and diagonal wall segments to be determined with precision. Some of the vector results were converted into CityGML and IndoorGML form and visualized, demonstrating the validity of our work. Full article
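The raster-to-vector step, recovering a wall's centerline position and thickness from segmented pixels, can be sketched for the axis-aligned case as run-length analysis of one row of the wall mask. Illustrative only; the paper's centerline and corner detection also handles diagonal walls:

```python
import numpy as np

def wall_runs_to_segments(mask, row):
    """Scan one raster row of a binary wall mask and turn each contiguous run
    of wall pixels into (centerline x, thickness in pixels). Simplified
    sketch for axis-aligned walls only."""
    cols = np.flatnonzero(mask[row])
    if cols.size == 0:
        return []
    breaks = np.where(np.diff(cols) > 1)[0] + 1   # gaps split the runs
    segments = []
    for run in np.split(cols, breaks):
        thickness = run[-1] - run[0] + 1
        center = (run[0] + run[-1]) / 2.0
        segments.append((center, thickness))
    return segments

mask = np.zeros((5, 12), dtype=int)
mask[:, 2:5] = 1     # a 3-pixel-thick vertical wall
mask[:, 8:10] = 1    # a 2-pixel-thick vertical wall
segs = wall_runs_to_segments(mask, row=0)
```

Preserving the thickness value alongside the centerline is what lets the vector output feed the IndoorGML thick-wall model rather than a zero-width line drawing.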

30 pages, 8432 KB  
Article
Structural 3D Reconstruction of Indoor Space for 5G Signal Simulation with Mobile Laser Scanning Point Clouds
by Yang Cui, Qingquan Li and Zhen Dong
Remote Sens. 2019, 11(19), 2262; https://doi.org/10.3390/rs11192262 - 27 Sep 2019
Cited by 25 | Viewed by 5441
Abstract
3D modelling of indoor environments is essential in smart city applications such as building information modelling (BIM), spatial location services, energy consumption estimation, and signal simulation. Fast and stable reconstruction of 3D models from point clouds has already attracted considerable research interest. However, in complex indoor environments, automated reconstruction of detailed 3D models still remains a serious challenge. To address these issues, this paper presents a novel method that couples linear structures with three-dimensional geometric surfaces to automatically reconstruct 3D models using point cloud data from mobile laser scanning. In our proposed approach, fully automatic room segmentation is performed on the unstructured point clouds via multi-label graph cuts with semantic constraints, which can overcome over-segmentation in long corridors. Then, the horizontal slices of the point cloud for each room are projected onto a plane to form a binary image, which is followed by line extraction and regularization to generate floorplan lines. The 3D structured models are reconstructed by multi-label graph cuts, designed to combine segmented room, line, and surface elements as semantic constraints. Finally, this paper proposes a novel application, 5G signal simulation based on the output structural model, which aims at determining the optimal locations of 5G small base stations in large-scale indoor scenes. Four datasets collected using handheld and backpack laser scanning systems in different locations were used to evaluate the proposed method. The results indicate that our proposed methodology provides an accurate and efficient reconstruction of detailed structured models from complex indoor scenes. Full article
(This article belongs to the Special Issue Remote Sensing based Building Extraction)
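The slicing step described above, projecting a horizontal slab of each room's points into a binary image before line extraction, can be sketched as follows. Cell size and slab bounds are arbitrary assumptions here, not the paper's parameters:

```python
import numpy as np

def slice_to_image(points, z_lo, z_hi, cell=0.5):
    """Take a horizontal slab [z_lo, z_hi) of a point cloud and rasterize it
    into a binary occupancy image, as done per room before fitting floorplan
    lines. Illustrative sketch; grid origin and resolution are arbitrary."""
    slab = points[(points[:, 2] >= z_lo) & (points[:, 2] < z_hi)]
    if slab.size == 0:
        return np.zeros((1, 1), dtype=int)
    mins = slab[:, :2].min(axis=0)
    ij = np.floor((slab[:, :2] - mins) / cell).astype(int)
    img = np.zeros(ij.max(axis=0) + 1, dtype=int)
    img[ij[:, 0], ij[:, 1]] = 1   # occupied cells become wall-evidence pixels
    return img

pts = np.array([[0.0, 0.0, 1.2], [1.0, 0.0, 1.3], [0.0, 1.0, 5.0]])
img = slice_to_image(pts, z_lo=1.0, z_hi=1.5, cell=0.5)
# the third point lies above the slab and is excluded from the image
```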

23 pages, 6128 KB  
Article
An Accurate Visual-Inertial Integrated Geo-Tagging Method for Crowdsourcing-Based Indoor Localization
by Tao Liu, Xing Zhang, Qingquan Li, Zhixiang Fang and Nadeem Tahir
Remote Sens. 2019, 11(16), 1912; https://doi.org/10.3390/rs11161912 - 16 Aug 2019
Cited by 10 | Viewed by 4388
Abstract
One of the unavoidable bottlenecks in the public application of passive-signal (e.g., received signal strength, magnetic) fingerprinting-based indoor localization technologies is the extensive human effort required to construct and update the database for indoor positioning. In this paper, we propose an accurate visual-inertial integrated geo-tagging method that can be used to collect fingerprints and construct the radio map by exploiting the crowdsourced trajectories of smartphone users. By integrating multisource information from the smartphone sensors (e.g., camera, accelerometer, and gyroscope), this system can accurately reconstruct the geometry of trajectories. An algorithm is proposed to estimate the spatial location of trajectories in the reference coordinate system and construct the radio map and geo-tagged image database for indoor positioning. With the help of several initial reference points, this algorithm can be implemented in an unknown indoor environment without any prior knowledge of the floorplan or the initial location of the crowdsourced trajectories. The experimental results show that the average calibration error of the fingerprints is 0.67 m. A weighted k-nearest neighbor method (without any optimization) and an image matching method are used to evaluate the performance of the constructed multisource database. The average localization errors of received signal strength (RSS)-based indoor positioning and image-based positioning are 3.2 m and 1.2 m, respectively, showing that the quality of the constructed indoor radio map is at the same level as that of maps constructed by site surveying. Compared with traditional site-survey-based database construction, this system can greatly reduce the human labor cost, with minimal external information. Full article
(This article belongs to the Special Issue Mobile Mapping Technologies)
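The weighted k-nearest neighbor evaluation named in the abstract is a standard baseline: match the observed RSS vector against the radio-map entries and average the k best positions with inverse-distance weights. A minimal sketch (the paper's exact parameters and any optimizations are not reproduced):

```python
import numpy as np

def wknn_locate(fingerprint, db_rss, db_pos, k=3):
    """Weighted k-nearest-neighbor positioning: find the k radio-map entries
    whose RSS vectors are closest to the observation and average their known
    positions, weighted by inverse signal distance. Standard baseline sketch."""
    d = np.linalg.norm(db_rss - fingerprint, axis=1)  # signal-space distances
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)                         # inverse-distance weights
    w /= w.sum()
    return w @ db_pos[idx]                            # weighted mean position

db_rss = np.array([[-50.0, -60.0], [-55.0, -65.0], [-80.0, -90.0]])
db_pos = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 10.0]])
est = wknn_locate(np.array([-50.0, -60.0]), db_rss, db_pos, k=2)
# an exact fingerprint match pulls the estimate to that entry's position
```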

33 pages, 11112 KB  
Article
Procedural Modeling of Buildings Composed of Arbitrarily-Shaped Floor-Plans: Background, Progress, Contributions and Challenges of a Methodology Oriented to Cultural Heritage
by Telmo Adão, Luís Pádua, Pedro Marques, Joaquim João Sousa, Emanuel Peres and Luís Magalhães
Computers 2019, 8(2), 38; https://doi.org/10.3390/computers8020038 - 11 May 2019
Cited by 17 | Viewed by 11077
Abstract
The production of virtual models is of high pertinence in research and business fields such as architecture, archeology, or video games, whose requirements might range between expeditious virtual building generation for extensively populating computer-based synthesized environments and hypothesis testing through digital reconstructions. There are some known approaches to achieve the production/reconstruction of virtual models, namely digital settlements and buildings. Manual modeling requires highly skilled manpower and a considerable amount of time to achieve the desired digital contents, in a process composed of many stages that are typically repeated over time. Both image-based and range scanning approaches are more suitable for digital preservation of well-conserved structures. However, they usually require trained human resources to prepare field operations and manipulate expensive equipment (e.g., 3D scanners) and advanced software tools (e.g., photogrammetric applications). To tackle the issues presented by previous approaches, a class of cost-effective, efficient, and scarce-data-tolerant techniques, known as procedural modeling, has been developed, aiming at the semi- or fully-automatic production of virtual environments composed of hollow buildings exclusively represented by outer façades, or traversable buildings with interiors, either for expeditious generation or reconstruction. Despite the many achievements of existing procedural modeling approaches, the production of virtual buildings with both interiors and exteriors composed of non-rectangular shapes (convex or concave n-gons) at the floor-plan level is still seldom addressed.
Therefore, a methodology (and respective system) capable of semi-automatically producing ontology-based traversable buildings composed of arbitrarily-shaped floor-plans has been proposed and continuously developed, and is under analysis in this paper, along with its contributions towards the accomplishment of other virtual reality (VR) and augmented reality (AR) projects/works oriented to digital applications for cultural heritage. Recent roof production-related enhancements resorting to the well-established straight skeleton approach are also addressed, as well as forthcoming challenges. The aim is to consolidate this procedural modeling methodology as a valuable computer graphics work and discuss its future directions. Full article
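The core extrusion step of procedural building generation, turning an arbitrarily-shaped (convex or concave) floor-plan polygon into wall faces, can be sketched in a few lines. This is illustrative only; the methodology under analysis adds ontology-based semantics, interiors, and straight-skeleton roof production on top of such a primitive:

```python
def extrude_floorplan(polygon, height):
    """Turn a floor-plan polygon (list of (x, y) vertices, convex or concave)
    into one quad wall face per edge by extruding it vertically. Roofs
    (e.g., via the straight skeleton) are out of scope for this sketch."""
    walls = []
    n = len(polygon)
    for i in range(n):
        (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
        walls.append([
            (x0, y0, 0.0), (x1, y1, 0.0),        # bottom edge of the wall
            (x1, y1, height), (x0, y0, height),  # top edge of the wall
        ])
    return walls

# an L-shaped (concave) floor-plan: six vertices yield six wall quads
lshape = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
walls = extrude_floorplan(lshape, height=3.0)
```

Because the loop walks edges rather than assuming rectangles, the same code handles any simple n-gon, which is exactly the arbitrarily-shaped floor-plan case the methodology targets.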
