Article

Reconstructing Geometrical Models of Indoor Environments Based on Point Clouds

by Maximilian Kellner 1,2,*, Bastian Stahl 1 and Alexander Reiterer 1,2
1 Fraunhofer Institute for Physical Measurement Techniques IPM, 79110 Freiburg, Germany
2 Department of Sustainable Systems Engineering INATECH, Albert Ludwigs University Freiburg, 79110 Freiburg, Germany
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(18), 4421; https://doi.org/10.3390/rs15184421
Submission received: 15 June 2023 / Revised: 18 August 2023 / Accepted: 6 September 2023 / Published: 8 September 2023

Abstract:
In this paper, we present a workflow that combines supervised and unsupervised methods for the reconstruction of geometric models with architectural information from unordered 3D data. Our method uses a downsampling strategy to enrich features to provide scalability for large datasets, increase robustness, and be independent of the sensor used. A Neural Network is then used to segment the resulting point cloud into basic structures. This removes furniture and clutter and preserves the relevant walls, ceilings, floors, and openings. A 2D projection combined with a graph structure is used to find a Region of Interest within the cleaned point cloud, indicating a potential room. Each detected region is projected back into a 3D data patch to refine the room candidates and allow for more complex room structures. The resulting patches are fitted with a polygon using geometric approaches. In addition, architectural features, such as windows and doors, are added to the polygon. To demonstrate that the presented approach works and that the network provides usable results, even with changing data sources, we tested the approach in different real-world scenarios with different sensor systems.

Graphical Abstract

1. Introduction

Nowadays, a large number of 3D sensors based on different measurement principles are available. The accuracy and the number of measurements such sensors can perform in a given time have increased enormously in recent years. However, the processing of the collected data has not kept up with this pace, which makes the demand for automated 3D data processing extremely high. The range of applications extends from autonomous driving [1,2,3] and infrastructure mapping [4,5,6,7] to biomedical analysis [8].
In most cases, point clouds are used to represent the collected 3D data. Compared to images, for instance, the processing of point clouds has several disadvantages. First of all, point clouds are very sparse, and most of the covered environment is empty. Furthermore, the point density varies within the same point cloud: it is typically higher close to the sensor than further away from it. Thirdly, point clouds are irregular, meaning that the number of points differs between point clouds. Moreover, point clouds are unstructured, which implies that each point is independent and the distance between adjacent points varies. In addition, point clouds are unordered and invariant to permutation.
Each 3D sensor produces data with different characteristics, resolutions, and noise levels. This makes it challenging to process and analyze the data without adapting the applied methods or algorithms. In order to cover as many of the sensors in use today as possible, it must be ensured that the methods and algorithms generalize. An example of four point clouds captured by four different sensors is shown in Figure 1. Not only are the colors rendered differently, but the point density and the geometries also differ. The difference is most obvious at the windows: for the first two sensors, the glass pane is barely visible in the point cloud, while for the latter two sensors it is completely visible.
Despite the disadvantages mentioned, point clouds have enormous potential due to their high information content. This potential gave rise to the idea of Scan-to-BIM [9], the process of creating a 3D model from point cloud data obtained by laser scanning or other types of 3D scanning technology. The goal of Scan-to-BIM is to extract accurate and detailed 3D building models from point cloud data, which can be used for a variety of applications, such as building renovation, construction management, and facility management. Ideally, the digital model is as similar to the real object as possible. If the real object changes over time, or no digital representation exists at all, Scan-to-BIM uses 3D sensors to generate the digital BIM. Today, a common approach is to manually extract the information from the 3D data and create the model by hand. Depending on the size and complexity of the object, this procedure can be very time-consuming.
The main purpose of this work is to automate the idea of Scan-to-BIM as much as possible without depending on a specific sensor. Little work has so far addressed generalization across different 3D sensors; the current state of the art often requires a particular sensor or is specialized for one segment of the pathway from the point cloud to the geometric model. Our approach can be divided into five parts. First, to handle large scenes, we downsample and unify the given point cloud. This partially compensates for the initial differences in point density between the sensors. To keep as much geometrical information as possible, we introduce a feature-preserving downsampling strategy. In the second part, we use a Neural Network (NN) to segment the preprocessed points into relevant classes. In our case, we are only interested in the classes floor, ceiling, wall, door, door leaf, window, and clutter, as these are sufficient to describe the main structure of the building. In the next step, the segmented point cloud is divided into potential room candidates, so that the more complex room reconstruction only has to operate on small subsets containing the points belonging to a single room. In the fourth step, polygons are fitted to the room candidates in order to further reduce the point cloud and to bring it into a more readable output format. The last step enriches each reconstructed room with architectural features. For this first study, we focused on windows and doors; the approach can, of course, be extended to other objects as well.

2. Related Work

The current state-of-the-art techniques for Scan-to-BIM typically combine machine learning and computer vision techniques to extract building components. Methods such as edge detection, feature matching, and region growing are used to extract features from the point cloud data, which can then be used to segment the point cloud into different building components. In [10], 2.5D plans are extracted from laser scans by triangulating a 2D sampling of the wall positions and separating these triangles into interior and exterior sets; the boundary lines between the sets define the walls. In [11], wall candidates are extracted by heat propagation to allow the fitting of polygons. However, both approaches only handle environments with a single ceiling and floor and do not detect further attributes. In [12], parametric building models with additional features are reconstructed from indoor scans by deriving wall candidates from vertical surfaces observed in the scans and detecting wall openings. Supervised and unsupervised methods were combined in [13,14,15]. The first introduces a semi-automatic approach for reconstructing heritage buildings from point clouds using a random forest classifier combined with subparts annotated by a human. The second first cleans the point cloud and samples new relevant points in a preprocessing step, then creates a depth image and detects walls and doors using a Convolutional Neural Network (CNN). The last uses a 3D NN to classify the points and to extract the surfaces afterward.
In [16], the blueprint is analyzed to detect walls, which are then used to extract rooms. The study in [17] extracts geometric information for buildings from Manhattan-world urban scenes, i.e., scenes built on a Cartesian grid [18]. In [19], the reconstruction of polygons from point clouds is formulated as a binary optimization problem over a set of face candidates.
To tackle the task of 3D segmentation, different approaches exist today; a general overview is given in [20]. Mainly, these approaches can be divided into projection-based, discretization-based, and point-based methods. For the segmentation part, we used three different architectures and compared them in terms of generalization. Two of the selected approaches are point-based and one is discretization-based. We used RandLA-Net [21], in which pointwise Multilayer Perceptrons (MLPs) are used to learn 3D features directly from the point cloud, as well as to downsample it. Point Transformer [22] belongs to the same category as RandLA-Net, but it uses self-attention, which is intrinsically a set operator and therefore well suited to 3D point clouds, which are essentially sets of points with positional attributes. Since all the relevant classes are large, we decided to also use a 3D convolution; for the base structure, the U-Net [23] architecture is used.
To make our algorithm as broadly applicable as possible, we avoid any dependency on the 3D sensor used. This means we cannot rely on additional point attributes, such as colors or reflectance values, and have to focus on the purely geometrical output. In [24], geometrical features based on the eigenvalues $\lambda_1$, $\lambda_2$, and $\lambda_3$ of the covariance matrix in a given neighborhood, with $\lambda_1 \geq \lambda_2 \geq \lambda_3 \geq 0$, are used to describe the local 3D structure. In [25], these features are used to detect contours within large-scale data.

3. Method

3.1. Feature-Enriched Downsampling

We use voxel grid downsampling to reduce the number of points and to obtain a more uniform distribution. To preserve the geometrical information, we compute geometrical features during the downsampling. Each voxel defines the neighborhood of the contained points, which avoids an explicit nearest-neighbor search. The covariance matrix $\Sigma$ is calculated for each voxel at position $(\alpha, \beta, \gamma)$ in the given 3D environment:
$$\Sigma_{\alpha,\beta,\gamma} = \frac{1}{n} \sum_{i}^{n} (p_i - \bar{p})(p_i - \bar{p})^T,$$
with $\bar{p} = \frac{1}{n} \sum_{i}^{n} p_i$ and the $n$ points within the voxel. The features are defined by [24] and are summarized in Table 1.
Our described downsampling strategy yields $P_d \in \mathbb{R}^{m \times 11}$ with $m$ new points. The procedure is shown in Figure 2.
The calculated covariance matrices $\Sigma_{\alpha,\beta,\gamma}$ depend on the size of the voxels. For this reason, we scale the calculated eigenvalues with the voxel size $v_s$. Further, we apply a linear transformation to the final features to obtain values within the range $[0, 1]$.
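For illustration, the following NumPy sketch outlines how such a feature-enriched voxel downsampling could be implemented. The function name and the zero-padding of sparse voxels are our own choices, the eigenvalue scaling follows the description above only loosely, and the final rescaling of the features to $[0, 1]$ is omitted.

```python
import numpy as np

def voxel_downsample_with_features(points, voxel_size=0.1):
    """Voxel-grid downsampling with eigenvalue-based geometric features
    (illustrative sketch, not the authors' implementation)."""
    pts = np.asarray(points, dtype=float)
    # Assign every point to a voxel (alpha, beta, gamma).
    voxel_idx = np.floor(pts / voxel_size).astype(np.int64)
    _, inverse = np.unique(voxel_idx, axis=0, return_inverse=True)

    out = []
    for v in range(inverse.max() + 1):
        vox_pts = pts[inverse == v]
        centroid = vox_pts.mean(axis=0)
        if len(vox_pts) < 3:
            out.append(np.concatenate([centroid, np.zeros(8)]))  # too few points
            continue
        cov = np.cov((vox_pts - centroid).T, bias=True)          # 3x3 covariance
        lam = np.sort(np.linalg.eigvalsh(cov))[::-1]             # l1 >= l2 >= l3
        lam = np.clip(lam, 1e-12, None) / voxel_size             # scale with voxel size
        l1, l2, l3 = lam
        feats = np.array([
            (l2 - l3) / l1,                      # planarity
            (l1 - l2) / l1,                      # linearity
            l3 / l1,                             # sphericity
            l3 / lam.sum(),                      # surface variation
            lam.sum(),                           # sum of eigenvalues
            (l1 * l2 * l3) ** (1.0 / 3.0),       # omnivariance
            -(lam * np.log(lam)).sum(),          # eigenentropy
            (l1 - l3) / l1,                      # anisotropy
        ])
        out.append(np.concatenate([centroid, feats]))
    return np.asarray(out)                        # shape (m, 11): xyz + 8 features
```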

3.2. Semantic Segmentation

To describe a room, only a few classes are needed. Therefore, we are only interested in the following classes: floor, ceiling, wall, door, door leaf, window, and clutter. For the classes door and window, only the frame around the opening is relevant, not the actual panel or glass.
The input to the NN is the point cloud, where each point is represented by its 3D coordinates and the features described in the previous section. The voxel size for downsampling and feature computation was set to 10 cm. We chose this size to be able to compute the features even in less dense point clouds, and because the classes relevant to us can still be described at this resolution.
For the choice of the architecture parameters, we followed the specifications in the respective publications as closely as possible. For the point-based methods, we chose the following downsampling rates: $N \rightarrow \frac{N}{4} \rightarrow \frac{N}{16} \rightarrow \frac{N}{64} \rightarrow \frac{N}{256}$. In contrast, the per-point feature dimension was increased with each layer: $16 \rightarrow 64 \rightarrow 128 \rightarrow 256 \rightarrow 512$. For the discretization-based method, we halved the grid size after every layer and doubled the feature dimension, starting with 32.

3.3. Regions of Interest

To find potential room candidates, all points except those belonging to walls are removed from the predicted point cloud. We do not need a high resolution for this part and temporarily reduce the remaining wall points further. Using the assumption that walls are perpendicular to the ground plane, we reduce the dimension by projecting all points onto the x-y plane. The basic idea behind our algorithm is to follow the wall line until we reach the starting point again.
To perform this, we transform the points into an undirected graph $G = (V, E)$. Each remaining point is represented by a node, $V = \{1, \dots, n\}$ for $n$ points. We build the graph iteratively, starting at a random point $p_i$, $i \in \{1, \dots, n\}$, and allow $V_i$ to connect only to the closest point. The selected closest point is removed from the selectable points and becomes the new start. The resulting edges $E \in \mathbb{R}^{n \times 2}$ have the corresponding weights $w_{ij} = d(V_i, V_j)$, where $d$ is the Euclidean distance $d(V_i, V_j) = \|V_i - V_j\|$.
Due to the way the graph has been constructed, the distance between two connected nodes may be much larger than the distance to the actual underlying neighboring point. To resolve such incorrect assignments, all connections with $w_{ij} > d_{max}$, with $d_{max} = 0.5$ m, are removed; this divides the whole graph $G$ into subgraphs $G_m$.
All nodes are then used to rebuild $G$ with more suitable edges. A distance matrix $D = (d_{ij})$ with $d_{ij} = \|p_i - p_j\|$ is calculated, with the diagonal set to a large value ($d_{ii} = 1 \times 10^{6}$) so that a node is never matched to itself. Next, all nodes $i$ and $j$ with $d_{ij} < d_{max}$ that are not within the same subgraph ($V_i \in G_m$, $V_j \notin G_m$), or that have a path length $\pi_m(V_i, V_j)$ of more than 10 nodes, are connected with each other. Note that the newly created graph can have nodes with more than two edges. All subgraphs consisting of a single node are considered outliers and removed. The remaining nodes with a single edge are connected if there is a node with a distance $d_{ij} < d_{max}$, with $d_{max} = 1.5$ m, and a path length $\pi_m(V_i, V_j)$ of more than 10 nodes.
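A compact sketch of the initial chaining and splitting steps, using NetworkX, is shown below. The brute-force nearest-neighbor search, the function name, and the omission of the reconnection step are simplifications of ours.

```python
import numpy as np
import networkx as nx

def build_wall_graph(points_2d, d_max=0.5):
    """Chain the projected wall points into an undirected graph and split it
    at edges longer than d_max (illustrative sketch of Section 3.3)."""
    pts = np.asarray(points_2d, dtype=float)
    n = len(pts)
    G = nx.Graph()
    G.add_nodes_from(range(n))

    unvisited = list(range(1, n))
    current = 0                                        # arbitrary start point
    while unvisited:
        # The closest remaining point becomes the next node of the chain.
        d = np.linalg.norm(pts[unvisited] - pts[current], axis=1)
        k = int(np.argmin(d))
        nxt, w = unvisited.pop(k), float(d[k])
        G.add_edge(current, nxt, weight=w)
        current = nxt

    # Remove over-long edges; the connected components are the subgraphs G_m.
    G.remove_edges_from([(u, v) for u, v, w in G.edges(data="weight") if w > d_max])
    subgraphs = [G.subgraph(c).copy() for c in nx.connected_components(G)]
    return G, subgraphs
```

The subsequent reconnection with the relaxed thresholds described above would then operate on the returned subgraphs.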
Cycles within $G$ indicate potential room candidates. To find the shortest cycles within $G$, we select a random node $V_i$ and temporarily remove the edge $E_{i,j}$. To check whether a cycle exists, Dijkstra's algorithm [26] is used to find the path $\pi_m(V_i, V_j)$. If a path $\pi_m$ exists, it most probably describes a room candidate $\Omega$. The procedure is shown in Algorithm 1.
Algorithm 1 Find shortest cycle in graph
Require: $G$, $V_i$
Ensure: path describing a potential room candidate
1: $[V] \leftarrow getConnectedNodes(V_i)$
2: $V_{start} \leftarrow V[0]$
3: $V_{stop} \leftarrow V[1]$
4: $removeEdgeBetweenNodes(G, V_{start}, V_{stop})$
5: $path \leftarrow Dijkstra(G, V_{start}, V_{stop})$
6: $addEdgeBetweenNodes(G, V_{start}, V_{stop})$
7: return $path$
Each detected cycle is analyzed and the area of the polygon described by its nodes is calculated. We use the formulation described in [27]:
$$Area(\Omega) = \frac{1}{2} \left| \sum_{k} p_k \times p_{k+1} \right|,$$
with $n$ points and $p_{n+1} = p_0$. Rooms with $Area(\Omega) < 1$ m$^2$ are considered outliers and removed.
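Algorithm 1 and the area check translate almost directly into NetworkX and NumPy. The sketch below follows the prose description (removing the edge between $V_i$ and one of its neighbors before running Dijkstra); function names are illustrative.

```python
import numpy as np
import networkx as nx

def find_room_cycle(G, v_i):
    """Temporarily remove one edge at v_i and search for the shortest remaining
    path between its endpoints; edge + path then form a cycle (room candidate)."""
    for v_j in list(G.neighbors(v_i)):
        edge_data = G.get_edge_data(v_i, v_j)
        G.remove_edge(v_i, v_j)
        try:
            path = nx.dijkstra_path(G, v_i, v_j, weight="weight")
        except nx.NetworkXNoPath:
            path = None
        G.add_edge(v_i, v_j, **edge_data)        # restore the graph
        if path is not None:
            return path                           # node sequence of the cycle
    return None

def polygon_area(contour_xy):
    """Shoelace formula, Equation (2); contour_xy is the ordered (x, y) cycle."""
    x, y = np.asarray(contour_xy, dtype=float).T
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(np.roll(x, -1), y))

# Candidates below 1 m^2 would be rejected as outliers:
# if polygon_area(contour) < 1.0: discard the candidate
```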

3.4. 3D Room Reconstruction

The following procedure will be performed on each potential room candidate separately. This means that the reconstruction only has to process easier subparts instead of the entire point cloud. We retain the assumption made in Section 3.3 for the first step.
We use RANSAC [28] to fit lines to 99% of the points within the defined contour. All points assigned to a line are used in the later processing: they allow us to calculate the root mean squared error $\epsilon_{RMSE}$ between the fitted plane and the assigned points to quantify the quality of the fit, to bound the height of a face by the given z-values, and to determine how well the line is covered by points:
$$\epsilon_{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( \frac{|a\, p_i^x + b\, p_i^y + c\, p_i^z + d|}{\sqrt{a^2 + b^2 + c^2}} \right)^2}.$$
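A minimal 2D RANSAC line fit, as it could be used for the wall points of a room candidate, might look as follows. Parameter values and names are illustrative, and the outer loop over repeated fits is omitted.

```python
import numpy as np

def ransac_line(points_2d, n_iter=500, inlier_thresh=0.05, rng=None):
    """Fit a single 2D line with RANSAC and report the inlier RMSE (sketch)."""
    rng = np.random.default_rng(rng)
    pts = np.asarray(points_2d, dtype=float)
    best_inliers, best_line = None, None
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        direction = pts[j] - pts[i]
        norm = np.linalg.norm(direction)
        if norm < 1e-9:
            continue
        normal = np.array([-direction[1], direction[0]]) / norm   # unit normal
        dist = np.abs((pts - pts[i]) @ normal)                    # point-to-line distance
        inliers = dist < inlier_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_line = inliers, (pts[i], direction / norm, normal)
    p0, direction, normal = best_line
    rmse = np.sqrt(np.mean(((pts[best_inliers] - p0) @ normal) ** 2))
    return p0, direction, best_inliers, rmse
```

In the full procedure, the detected inliers would be removed and the fit repeated until roughly 99% of the points are assigned to a line.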
The fitted lines are bounded into segments using the points they contain. If a jump larger than a threshold occurs, the line segment is split. A jump is detected by sorting the distances between one bounding point and all points lying on the segment in ascending order; a large forward difference $(\Delta^{+} f)(i) = f_{i+1} - f_i$ indicates a jump.
The cosine similarity between the direction vectors of the segments is used to check for parallel segments. If two line segments are parallel with a distance $d(l_i, l_j) < 0.2$ m and the bounds of the smaller segment are fully covered by the bounds of the larger one, the smaller segment is removed.
The remaining line segments are used to search for intersections. To make sure that intersections between line segments are possible, we extend each segment in both directions ($d_1$, $d_2$) by $\theta_s$:
$$d_1 = p_1 - p_2, \quad \hat{p}_1 = p_1 + \frac{d_1}{\|d_1\|} \theta_s, \qquad d_2 = p_2 - p_1, \quad \hat{p}_2 = p_2 + \frac{d_2}{\|d_2\|} \theta_s.$$
Since line segment $k$ might be intersected by multiple other segments, each intersection point is stored in $P_{int}^{k} = \{p_1, \dots\}$. If a bounding point of $k$ is close to one of the intersection points in $P_{int}^{k}$, the bounding point is replaced by the intersection point; otherwise, the bounding points are added to the set $P_{int}^{k}$ as well. The line segment $k$, as well as the points lying on it, is divided into $card(P_{int}^{k}) - 1$ new segments.
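The segment extension of Equation (4) and the subsequent intersection test can be sketched as follows (plain NumPy, illustrative names; the grouping of intersections per segment is omitted).

```python
import numpy as np

def extend_segment(p1, p2, theta_s=0.2):
    """Lengthen a 2D segment by theta_s on both ends (cf. Equation (4))."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d1, d2 = p1 - p2, p2 - p1
    p1_hat = p1 + d1 / np.linalg.norm(d1) * theta_s
    p2_hat = p2 + d2 / np.linalg.norm(d2) * theta_s
    return p1_hat, p2_hat

def segment_intersection(a1, a2, b1, b2, eps=1e-12):
    """Intersection point of two 2D segments, or None if they are parallel
    or do not cross within their (extended) bounds."""
    a1, a2, b1, b2 = (np.asarray(p, float) for p in (a1, a2, b1, b2))
    r, s = a2 - a1, b2 - b1
    denom = r[0] * s[1] - r[1] * s[0]            # 2D cross product
    if abs(denom) < eps:
        return None                               # parallel or collinear
    q = b1 - a1
    t = (q[0] * s[1] - q[1] * s[0]) / denom
    u = (q[0] * r[1] - q[1] * r[0]) / denom
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return a1 + t * r
    return None
```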
The idea of following the line is used to generate the final contour path. If the angle $\theta_{angle}$ between two connected segments is smaller than 30°, the two segments are fused and refined. This value was selected because such acute angles are rather atypical in the construction industry. The described procedure is visualized in Figure 3.
The next step is to close the wall faces with the corresponding ceiling and floor. The procedure for bounding the walls to the ceiling and to the floor is equivalent, so we describe it for the ceiling points in the following.
We use RANSAC to fit the largest plane to the points belonging to the class ceiling. The height of each face is calculated from the intersection of the lines defining the wall faces with the detected plane. Since the contour outlining the room is already known, we create an n-gon from the intersection points. If a detected room does not have a sufficient number of points belonging to the floor or the ceiling, it is rejected. This can occur if, for example, an inner courtyard or reflections within a room have been incorrectly detected as a room.
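The ceiling plane fit can, for example, be carried out with Open3D's built-in RANSAC. The sketch below assumes the ceiling points are already isolated; the threshold values are illustrative and not taken from the paper.

```python
import numpy as np
import open3d as o3d

def fit_ceiling_plane(ceiling_points, dist_thresh=0.05):
    """Fit the dominant plane into the ceiling points with RANSAC (sketch)."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.asarray(ceiling_points, dtype=float))
    plane, inlier_idx = pcd.segment_plane(distance_threshold=dist_thresh,
                                          ransac_n=3, num_iterations=1000)
    return plane, inlier_idx        # plane = (a, b, c, d) with a*x + b*y + c*z + d = 0

def ceiling_height_at(x, y, plane):
    """Solve the plane equation for z at a wall corner (x, y)."""
    a, b, c, d = plane
    return -(a * x + b * y + d) / c
```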

3.5. Projecting Additional Architectural Features

As we defined further classes, such as openings, which can indicate a door or a window, we can fit them into the polygon as well. Within each room candidate, we create instances for the classes window and door using the DBSCAN clustering algorithm [29]. Afterward, we create a Bounding Box (BB) around each instance and turn it into a cuboid. Since we do not assume that the points are axis-aligned, we use Principal Component Analysis [30] to create an Oriented Bounding Box (OBB) from the orthographic perspective. Using this perspective reduces the 3D problem to a 2D one, since only the x and y values are used to calculate the covariance matrix; the min and max values of z define the height of the OBB. One has to keep in mind that, due to this assumption, windows in roof slopes, for instance, cannot be described precisely.
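A possible realization of this instance extraction with Scikit-Learn is sketched below; the clustering parameters and the dictionary layout are our own choices.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA

def opening_boxes(points, eps=0.2, min_samples=10):
    """Cluster window/door points into instances and wrap each instance in an
    oriented bounding box: PCA in the x-y plane, min/max z for the height."""
    pts = np.asarray(points, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    boxes = []
    for lab in set(labels) - {-1}:                     # -1 marks DBSCAN noise
        inst = pts[labels == lab]
        pca = PCA(n_components=2).fit(inst[:, :2])     # x-y only (orthographic view)
        local = pca.transform(inst[:, :2])
        mins, maxs = local.min(axis=0), local.max(axis=0)
        corners_xy = pca.inverse_transform([[mins[0], mins[1]], [maxs[0], mins[1]],
                                            [maxs[0], maxs[1]], [mins[0], maxs[1]]])
        boxes.append({"corners_xy": corners_xy,
                      "z_min": float(inst[:, 2].min()),
                      "z_max": float(inst[:, 2].max())})
    return boxes
```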

4. Experiments

We studied the behavior of our proposed workflow on different data sources from real-world scenarios to verify generalization. For this reason, we used data from different sensors and different environments. The S3DIS dataset [31], whose data were collected with a Matterport camera, covers three different buildings of mainly educational and office use. The Redwood dataset [32] offers an apartment reconstructed from RGB-D video. The sensor manufacturer NavVis provides an example point cloud created with their sensor VLX [33]. Further, we used the RTC360 and the BLK2GO developed by Leica Geosystems AG to record two different indoor scenes in Freiburg: one is a public building and the other a typical German apartment. Additional reconstructed buildings from the ISPRS benchmark [34] are shown in Appendix A.

4.1. Feature-Enriched Downsampling

We compared our downsampling implementation, which simultaneously calculates the geometric features, with the downsampling of Open3D [35]. Likewise, we computed the k Nearest Neighbors (kNN) with the libraries Scikit-Learn [36] and Open3D to compare the runtimes. The results are shown in Table 2. It should be mentioned that, in contrast to our implementation, the kNN timings do not include the computation of the eigenvalues of the covariance matrices. In summary, the downsampling time triples, but we obtain an approximation of the geometric properties that would otherwise be lost. The resulting features are visualized in Figure 4.

4.2. Network Generalization

All networks are trained on the S3DIS dataset, using Area 5 for validation. For generalization purposes, we additionally collected and annotated some of our own data using the RTC360 scanner. We selected these two different types of sensors because their point clouds differ strongly. As mentioned earlier, we are not interested in furniture and other objects but in the actual structural elements of the building. For this reason, we remapped all irrelevant classes: the class beam is merged into ceiling, column is merged into wall, and the remaining classes are merged into clutter. To obtain the actual opening of the door, we split the class door into two classes: door and door leaf. For the evaluation of each model, we used the mean Intersection over Union (mIoU).
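Expressed as code, this remapping is a simple lookup; the S3DIS class names below are assumptions for illustration.

```python
# Hypothetical remapping of original S3DIS labels onto the classes used here.
CLASS_REMAP = {
    "beam": "ceiling",
    "column": "wall",
    "table": "clutter", "chair": "clutter", "sofa": "clutter",
    "bookcase": "clutter", "board": "clutter",
    # "door" is additionally split into "door" (frame/opening) and "door leaf".
}
```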
As the classes within the datasets are highly imbalanced, we use a weight $w_c = \frac{1}{f_c}$ for each class, where $f_c$ is the class frequency, leading to the weighted cross-entropy loss $L_{wce}$:
$$L_{wce} = -\frac{1}{\sum_{c \in C} w_c} \sum_{c \in C} w_c \, y_c \log \hat{y}_c.$$
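In a framework such as PyTorch (not necessarily the one used by the authors), this loss could be written as follows.

```python
import torch
import torch.nn.functional as F

def weighted_ce_loss(logits, targets, class_freq):
    """Weighted cross-entropy with w_c = 1 / f_c, a sketch of Equation (5).
    logits: (N, C) per-point scores, targets: (N,) integer labels,
    class_freq: (C,) relative frequency of each class in the training data."""
    w = 1.0 / torch.as_tensor(class_freq, dtype=logits.dtype, device=logits.device)
    # With reduction="mean", PyTorch divides by the summed weights of the
    # selected targets, which plays the role of the normalisation in Equation (5).
    return F.cross_entropy(logits, targets, weight=w, reduction="mean")
```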
To avoid overfitting, the data are augmented. First, we drop a random fraction of up to $U(0, 0.3)$ of the points. Afterward, the x, y, and z positions of each point are shifted by a value drawn from $U(-2, 2)$, the point cloud is rotated around the z-axis by an angle between 30° and 330°, each point cloud is scaled by a value drawn from $U(0.8, 1.2)$, and we add noise drawn from $N(\mu = 0, \sigma^2 = 0.01)$. Each augmentation except the first is applied independently with a probability of 0.5. Since the data from S3DIS are originally divided into rooms, but we also want to apply the trained network to the individual viewpoints of the scanner, we divided the entire point cloud of one area into random parts. This way, we obtain point clouds that contain several rooms, or at least parts of them, in a single scan.
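A sketch of this augmentation applied to the xyz coordinates is given below; the geometric features would be recomputed afterwards, and the exact parameter handling of the original implementation may differ.

```python
import numpy as np

def augment(points, rng=None, p=0.5):
    """Training-time augmentation of a point cloud (illustrative sketch)."""
    rng = np.random.default_rng(rng)
    pts = np.asarray(points, dtype=float).copy()

    # Always drop a random fraction of up to 30% of the points.
    keep = rng.random(len(pts)) > rng.uniform(0.0, 0.3)
    pts = pts[keep]

    if rng.random() < p:                                   # random translation
        pts += rng.uniform(-2.0, 2.0, size=3)
    if rng.random() < p:                                   # rotation around z
        a = np.deg2rad(rng.uniform(30.0, 330.0))
        R = np.array([[np.cos(a), -np.sin(a), 0.0],
                      [np.sin(a),  np.cos(a), 0.0],
                      [0.0,        0.0,       1.0]])
        pts = pts @ R.T
    if rng.random() < p:                                   # isotropic scaling
        pts *= rng.uniform(0.8, 1.2)
    if rng.random() < p:                                   # Gaussian jitter
        pts += rng.normal(0.0, 0.1, size=pts.shape)        # sigma^2 = 0.01
    return pts
```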
The training and evaluation results are shown in Table 3. The first thing to mention is that using the calculated geometric features instead of the color improves the results for the classes relevant to us. This means that all the architectures used can recognize patterns based on geometry alone, and there is no need for color information. Another important point is that the windows are not properly recognizable, using either the color or the geometric features, when the data are captured by a different sensor. This applies to all the NNs used and indicates that the window geometry differs too much between the datasets themselves.
Since the approach using 3D convolutions combined with the geometrical features generalized best, we use it for the remainder of the pipeline. Keep in mind that all of the networks could be further optimized and different architectures could be used as well, but this is not the main focus of our work.

4.3. Regions of Interest

For generalization, we applied the algorithm to all the named datasets. As our models were trained on the S3DIS data, we only applied our method to the unseen validation set (Area 5) and merged the separated rooms to remove any prior knowledge. Further, we neither axis-aligned the point clouds nor removed the reflections for the RTC360 and BLK2Go data. The results can be seen in Figure 5 and Table 4. Each row shows a different dataset and sensor; the Redwood, the S3DIS, the RTC360, the BLK2Go, and the VLX data are displayed in order. The first column shows the prediction; the next two columns show the initially created graph and the separated subgraphs visualized in different colors. The following column shows the reconnected graph, which allows nodes to have more than two edges. The final contour, described by the detected cycles, is shown in the last column.
We would like to emphasize that the algorithm was able to recognize the rooms even with a poor prediction (BLK2Go). The hallway in the S3DIS data was not detected, but it was also not closed, since we separated only a part of it from the data. The output for the whole of Area 5 can be seen in Appendix A.
The post-processing simply checks the number of points belonging to the classes ceiling and floor. If a detected room does not have a ceiling or a floor, it is rejected, as it might belong to an inner courtyard or might have been created by reflections. Table 4 illustrates this: after the post-processing, the algorithm was capable of detecting all the given rooms without any prior knowledge, except for the RTC360 data. There is a lot of clutter in the long hallway of this building, which results in large gaps in the walls and weak predictions that are difficult for the graph to compensate. In cases where the resulting gap in the walls is larger than the width of the hallway, the graph may close the hallway at that point. For this reason, the hallway in the building has been divided into six sections instead of three.

4.4. 3D Room Reconstruction

All the detected room candidates for the different datasets are reconstructed and shown in Figure 6. We ordered the rows in the same way as described in Section 4.3. The first column shows the full prediction. All the points belonging to the same contour detected by our proposed graph algorithm are colored in the same color in column two. This is the input to the described room reconstruction algorithm. The output generated by the room reconstruction is shown in the third column. The last column shows the raw input point cloud and the reconstructed rooms together.
To easily analyze the accuracy of the reconstruction process, the point cloud to mesh distance is calculated for the extracted rooms and the points that were used for the extraction (ceiling, ground, and wall). Note that this is not equivalent to the ground truth geometry of the rooms, but serves as an indicator of how well the process describes the given points. The histograms for the distribution of the calculated distances are shown in Figure 7. In all cases, the distance between the reconstructed spaces and the points is very small. There is a second peak in the Redwood data. This can be explained by the fact that one room has two different ceiling heights, of which the smaller one is neglected, resulting in several points being incorrectly described.
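Such a point-cloud-to-mesh distance can, for example, be computed with Open3D's raycasting scene (available in recent Open3D releases); the function below is an illustrative sketch, not the evaluation code used for Figure 7.

```python
import numpy as np
import open3d as o3d

def cloud_to_mesh_distances(points, mesh):
    """Unsigned distance from each point to the reconstructed room mesh."""
    scene = o3d.t.geometry.RaycastingScene()
    scene.add_triangles(o3d.t.geometry.TriangleMesh.from_legacy(mesh))
    query = o3d.core.Tensor(np.asarray(points, dtype=np.float32))
    return scene.compute_distance(query).numpy()

# A histogram of these distances (as in Figure 7) can then be drawn with,
# e.g., numpy.histogram or matplotlib.pyplot.hist.
```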
To investigate how accurate the reconstruction actually is, we took a closer look at three different rooms. We manually reconstructed two rooms and exported one that is given by the ISPRS benchmark. The result is shown in Figure 8. The automatic reconstruction is shown in green, the ground truth in red and gray, and the points that have been detected as wall are displayed in blue. It is noticeable that small edges are not taken into account by the algorithm. This is because not all the wall points are used for the reconstruction and because of the way RANSAC fits the points. The threshold of not using all the points for the characterization of the geometry is a trade-off between the use of possibly wrong predictions and the neglect of small details.

4.5. Additional Features

As shown in Table 3, the networks generalize well for the classes ceiling, floor, and wall, which allows the subsequent algorithms to reconstruct the 3D room. Even though the door openings are not detected with the same IoU as the previous classes, the algorithm is able to describe the openings for different sensors. Nevertheless, there is a lack of generalization for the class window: its recognition only works for data recorded with the same type of sensor used for training. Example results for each sensor are shown in Figure 9.
The main reason for the class window not being detected in the other datasets is the captured geometry. As mentioned in Section 1, the point cloud depends on the sensor; Figure 1 illustrates the difference. Looking at the window, it is clear that its geometry differs depending on the sensor. The laser sensors used by Leica Geosystems measure through the glass, and a window is thus primarily described as a hole in the wall. In contrast, the Matterport camera used in the acquisition of the S3DIS dataset captures points on the window itself. The window is then not described by an opening, but by a plane offset from the wall.

4.6. Limitations

In this section, we would like to discuss the limiting factors. Since the pipeline is based on the detection of walls, and we assume that a room is closed by walls, the procedure only works if a sufficient number of points for the walls has been detected. This leads to problems as soon as a wall consists only of windows. Even if windows and doors are included in the description of the room, it can happen that there are too few points and the graph does not recognize the room as closed. Corridors, like the one from the S3DIS dataset in Appendix A, which have neither a defined beginning nor a defined end, cannot be closed with the graph either. This is because the graph, when running along the outer walls, never arrives at the inner walls of the corridor, but at its original beginning. Thus, only one large room, which theoretically includes all the rooms inside the corridor, would be included. It follows that the graph as implemented is not able to recognize rooms that are enclosed on all sides by another room.
The graph itself is also able to recognize rooms with round walls. However, since the subsequent reconstruction currently only allows planes and fuses them if the angle between them is too small, round walls cannot be reconstructed. During reconstruction, each room is considered separately, so that rooms with different ceiling and floor heights can be described. However, only one ceiling and one floor are allowed per room. For this reason, neither roof slopes nor rooms spanning several floors can be described.

5. Discussion

The large number of available 3D sensors makes it possible to address different applications. However, the most common output format, the point cloud, is usually not directly usable by the consumer. We have developed a method, intended for indoor environments, that converts the point cloud into a minimal geometric format while preserving architectural properties.
We have tested our method on five different real-world datasets collected by five different sensors. The scenes used have been selected to be as realistic as possible. This means that all the recorded rooms are inhabited and, therefore, furnished and cluttered. We chose data from different continents showing apartments, educational rooms, and offices. Not only the data but also the sensors vary greatly in their characteristics. This way, we are able to show how far our approach generalizes and, thus, offer a wide range of applications.
In this work, it has been shown that the differences between the sensors are not insignificant and that, even if the focus is put on the 3D geometry alone, these differences have a direct influence on the trained NN. This means that the recognition of the trained classes is not transferable for all the classes; in our case, this concerns the windows. Nevertheless, the generalization for the recognition of the remaining classes could be demonstrated and, thus, the NN represents a suitable basis for the separation and filtering of the 3D data.
In most cases, the graph detected the correct number of rooms and was able to reduce the number of points used for the polygon. Other advantages of the method are that it provides direct information about the floor area of the rooms, that the detection is not limited to planar walls, and that there is no need to know the position or trajectory of the sensor. A disadvantage, however, is that in cases of large gaps within the detected walls, for example, due to very large objects between the scanner and the wall, the graph is sometimes unable to close the gap, and the room is then incorrectly detected.
Due to the preprocessing steps, only smaller parts of the point cloud had to be reconstructed, and many irrelevant points were removed. Thus, we were able to apply the geometric approach of plane finding with RANSAC to the point clouds from different sources and to generalize the method. We restricted the geometric primitives to planes; therefore, round walls cannot be reconstructed.
Another point we have not addressed so far is that we did not manually preprocess the data we use. In the point clouds contained in the S3DIS or Redwood dataset, for example, there are hardly any outliers, and no false 3D points are captured due to reflections. With optical sensors, reflections from window panes are a very common problem and are sometimes very challenging to remove. The pipeline shown can handle a large number of mirrored points; however, if the proportion of reflections is too high, for example, due to extremely large window areas, false rooms are detected.

6. Conclusions and Outlook

We have presented an approach that uses point clouds of indoor environments to extract a digital model. We exploited the strengths of a Neural Network in terms of generalization and pattern recognition, thus providing a basis for applying geometric approaches to simplified problems and a highly reduced number of points. The two parts of this hybrid approach complement each other, making it very broadly applicable. Furthermore, the approach scales with the given data and can thus be further improved.
One can clearly see that, even in the case of poor performance of the Neural Network, the subsequent algorithms are capable of delivering acceptable results. We have shown that the approach can be applied to different sensor sources and different real-world scenarios. If the sensor characteristics differ too much from the sensor on which the Neural Network was trained, it is not possible to add the windows to the digital model. Doors, on the other hand, could be recognized better and were transferred to the digital model for all data except the BLK2Go data.
In further work, the graph could be further optimized. This might make it possible to build the reconstruction directly from the generated polygon patch. Another option would be to improve the room fitting by allowing multiple ceilings and floors. This way, the roof slopes or rooms with multiple floors, like a staircase, can be covered as well. Allowing further geometrical objects like circles, for instance, within the RANSAC procedure would make it possible to reconstruct round walls.
An extension of the architectural elements would also be conceivable. A possible option would be to include pipes, for example. This could also make the generated output usable for other users and describe the point cloud more precisely.

Author Contributions

Conceptualization, M.K.; methodology, M.K.; software, M.K.; validation, M.K., B.S. and A.R.; formal analysis, M.K.; investigation, M.K.; resources, A.R.; data curation, M.K.; writing—original draft preparation, M.K. and B.S.; writing—review and editing, M.K., B.S. and A.R.; visualization, M.K.; supervision, B.S. and A.R.; project administration, M.K.; funding acquisition, B.S. and A.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) by the project “Modeling of civil engineering structures with particular attention to incomplete and uncertain measurement data by using explainable machine learning-MoCES, (Project no. 501457924)” and the BMBF by the project “Partially automated creation of object-based inventory models using multi-data fusion of multimodal data streams and existing inventory data-mdfBIM+”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data, apart from the RTC360 and BLK2Go data, are publicly available and referenced in the article.

Acknowledgments

The authors would like to thank the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) and the Bundesministerium für Bildung und Forschung (BMBF, Federal Ministry of Education and Research).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
NN	Neural Network
CNN	Convolutional Neural Network
MLP	Multilayer Perceptron
RANSAC	RANdom SAmple Consensus
DBSCAN	Density-Based Spatial Clustering of Applications with Noise
BB	Bounding Box
OBB	Oriented Bounding Box
mIoU	mean Intersection over Union
kNN	k Nearest Neighbor
RoI	Region of Interest

Appendix A

We provide further results here. Figure A1 shows the full reconstruction of both apartments with all rooms: the first is the Redwood apartment and the second the Freiburg apartment. Note that, for the second one, we removed the reflection points to keep the dimensions small.
Figure A1. Full apartment with features. Reflections were not included to keep image size small.
The detected rooms for the entirety of Area 5 from the S3DIS dataset are shown in Figure A2. The top image shows the points detected as a wall and the contours describing the candidate rooms with the area in square meters. The lower image shows the reconstructed rooms with the additional doors (yellow) and windows (red).
Figure A2. Full S3DIS Area 5.
In Figure A3, Figure A4 and Figure A5, the results from the ISPRS benchmark are shown.
Figure A3. TUB1.
Figure A4. TUB2.
Figure A5. UoM.

References

  1. Chen, X.; Ma, H.; Wan, J.; Li, B.; Xia, T. Multi-View 3D Object Detection Network for Autonomous Driving. arXiv 2016, arXiv:1611.07759. [Google Scholar]
  2. Liang, M.; Yang, B.; Wang, S.; Urtasun, R. Deep Continuous Fusion for Multi-Sensor 3D Object Detection. arXiv 2020, arXiv:2012.10992. [Google Scholar]
  3. Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; Schiele, B. The Cityscapes Dataset for Semantic Urban Scene Understanding. arXiv 2016, arXiv:1604.01685v2. [Google Scholar]
  4. Merkle, D.; Frey, C.; Reiterer, A. Fusion of ground penetrating radar and laser scanning for infrastructure mapping. J. Appl. Geod. 2021, 15, 31–45. [Google Scholar] [CrossRef]
  5. Reiterer, A.; Wäschle, K.; Störk, D.; Leydecker, A.; Gitzen, N. Fully Automated Segmentation of 2D and 3D Mobile Mapping Data for Reliable Modeling of Surface Structures Using Deep Learning. Remote Sens. 2020, 12, 2530. [Google Scholar] [CrossRef]
  6. Merkle, D.; Schmitt, A.; Reiterer, A. Concept of an autonomous mobile robotic system for bridge inspection. In Proceedings of the SPIE Remote Sensing 2020, Edinburgh, UK, 21–24 September 2020. [Google Scholar] [CrossRef]
  7. von Olshausen, P.; Roetner, M.; Koch, C.; Reiterer, A. Multimodal measurement system for road analysis and surveying of road surroundings. In Proceedings of the Automated Visual Inspection and Machine Vision IV; Beyerer, J., Heizmann, M., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2021; Volume 11787, pp. 72–78. [Google Scholar]
  8. Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation. arXiv 2016, arXiv:1606.06650. [Google Scholar]
  9. Bosché, F.; Ahmed, M.; Turkan, Y.; Haas, C.T.; Haas, R. The value of integrating Scan-to-BIM and Scan-vs-BIM techniques for construction monitoring using laser scanning and BIM: The case of cylindrical MEP components. Autom. Constr. 2015, 49, 201–213. [Google Scholar] [CrossRef]
  10. Turner, E.L.; Zakhor, A. Floor plan generation and room labeling of indoor environments from laser range data. In Proceedings of the 2014 International Conference on Computer Graphics Theory and Applications (GRAPP), Lisbon, Portugal, 5–8 January 2014; pp. 1–12. [Google Scholar]
  11. Mura, C.; Mattausch, O.; Jaspe Villanueva, A.; Gobbetti, E.; Pajarola, R. Automatic room detection and reconstruction in cluttered indoor environments with complex room layouts. Comput. Graph. 2014, 44, 20–32. [Google Scholar] [CrossRef]
  12. Ochmann, S.; Vock, R.; Wessel, R.; Klein, R. Automatic reconstruction of parametric building models from indoor point clouds. Comput. Graph. 2016, 54, 94–103. [Google Scholar] [CrossRef]
  13. Croce, V.; Caroti, G.; De Luca, L.; Jacquot, K.; Piemonte, A.; Véron, P. From the Semantic Point Cloud to Heritage-Building Information Modeling: A Semiautomatic Approach Exploiting Machine Learning. Remote Sens. 2021, 13, 461. [Google Scholar] [CrossRef]
  14. Gankhuyag, U.; Han, J.-H. Automatic BIM Indoor Modelling from Unstructured Point Clouds Using a Convolutional Neural Network. Intell. Autom. Soft Comput. 2021, 28, 133–152. [Google Scholar] [CrossRef]
  15. Tang, S.; Li, X.; Zheng, X.; Wu, B.; Wang, W.; Zhang, Y. BIM generation from 3D point clouds by combining 3D deep learning and improved morphological approach. Autom. Constr. 2022, 141, 104422. [Google Scholar] [CrossRef]
  16. Ahmed, S.; Liwicki, M.; Weber, M.; Dengel, A. Automatic Room Detection and Room Labeling from Architectural Floor Plans. In Proceedings of the 2012 10th IAPR International Workshop on Document Analysis Systems, Gold Coast, QLD, Australia, 27–29 March 2012; pp. 339–343. [Google Scholar] [CrossRef]
  17. Li, M.; Wonka, P.; Nan, L. Manhattan-World Urban Reconstruction from Point Clouds. In Proceedings of the Computer Vision–ECCV 2016, Amsterdam, The Netherlands, 11–14 October 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 54–69. [Google Scholar]
  18. Coughlan, J.M.; Yuille, A.L. The Manhattan World Assumption: Regularities in Scene Statistics which Enable Bayesian Inference. In Proceedings of the NIPS, Denver, CO, USA, 27 November–2 December 2000; pp. 809–815. [Google Scholar]
  19. Nan, L.; Wonka, P. PolyFit: Polygonal Surface Reconstruction from Point Clouds. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2372–2380. [Google Scholar] [CrossRef]
  20. Guo, Y.; Wang, H.; Hu, Q.; Liu, H.; Liu, L.; Bennamoun, M. Deep Learning for 3D Point Clouds: A Survey. arXiv 2019, arXiv:1912.12033. [Google Scholar] [CrossRef]
  21. Hu, Q.; Yang, B.; Xie, L.; Rosa, S.; Guo, Y.; Wang, Z.; Trigoni, N.; Markham, A. RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds. arXiv 2020, arXiv:1911.11236. [Google Scholar]
  22. Zhao, H.; Jiang, L.; Jia, J.; Torr, P.H.; Koltun, V. Point transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 16259–16268. [Google Scholar]
  23. Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention (MICCAI), Athens, Greece, 17–21 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; Volume 9901, pp. 424–432. [Google Scholar]
  24. Weinmann, M.; Jutzi, B.; Mallet, C. Feature relevance assessment for the semantic interpretation of 3D point cloud data. In Proceedings of the ISPRS Workshop Laser Scanning 2013, ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Antalya, Turkey, 11–13 November 2013; Volume II-5/W2, pp. 313–318. [Google Scholar] [CrossRef]
  25. Hackel, T.; Wegner, J.; Schindler, K. Contour Detection in Unstructured 3D Point Clouds. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1610–1618. [Google Scholar] [CrossRef]
  26. Dijkstra, E. A Note on Two Problems in Connexion with Graphs. Numer. Math. 1959, 1, 269–271. [Google Scholar] [CrossRef]
  27. Goldman, R.N. IV.1—Area of planar polygons and volume of polyhedra. In Graphics Gems II; Arvo, J., Ed.; Morgan Kaufmann: San Diego, CA, USA, 1991; pp. 170–171. [Google Scholar] [CrossRef]
  28. Schnabel, R.; Wahl, R.; Klein, R. Efficient RANSAC for Point-Cloud Shape Detection. Comput. Graph. Forum 2007, 26, 214–226. [Google Scholar] [CrossRef]
  29. Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. Density-Based Clustering in Spatial Databases: The Algorithm GDBSCAN and Its Applications. In Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining, Portland, OR, USA, 2–4 August 1996. [Google Scholar]
  30. Hotelling, H. Analysis of a complex of statistical variables into principal components. J. Educ. Psychol. 1933, 24, 498–520. [Google Scholar] [CrossRef]
  31. Armeni, I.; Sener, O.; Zamir, A.R.; Jiang, H.; Brilakis, I.; Fischer, M.; Savarese, S. 3D Semantic Parsing of Large-Scale Indoor Spaces. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  32. Park, J.; Zhou, Q.Y.; Koltun, V. Colored Point Cloud Registration Revisited. In Proceedings of the ICCV, Venice, Italy, 22–29 October 2017. [Google Scholar]
  33. NavVis. NavVis VLX Point Cloud Data. Available online: https://www.navvis.com/resources/specifications/navvis-vlx-point-cloud-office (accessed on 16 November 2022).
  34. Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D. The isprs benchmark on indoor modelling. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W7, 367–372. [Google Scholar] [CrossRef]
  35. Zhou, Q.Y.; Park, J.; Koltun, V. Open3D: A Modern Library for 3D Data Processing. arXiv 2018, arXiv:1801.09847. [Google Scholar]
  36. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
Figure 1. Visualization of point clouds captured by different sensors. (a) Leica Geosystems RTC360, (b) Leica Geosystems BLK2GO, (c) Apple iPad Pro, and (d) Matterport Pro2.
Figure 2. Stages showing the feature-enriched downsampling strategy. (a) Original points $P_{in} \in \mathbb{R}^{n \times 3}$, (b) voxelized points, (c) calculated covariance, and (d) resulting point cloud $P_{out} \in \mathbb{R}^{m \times 11}$.
Figure 3. Stages showing the different parts of room fitting algorithm. The top row shows the projection into 2D while the row below shows the step reprojected into 3D. The prediction and remaining points belonging to walls are shown in (a). In (b), the detected lines (2D) and the resulting faces (3D) are shown. The bounded line segments and faces are visualized in (c). The closed line segments and final faces can be seen in (d).
Figure 4. Visualization of the different calculated features. The color is scaled from blue over green and yellow to red for feature values in the range $[0, 1]$. The ceiling, as well as one wall, is removed for easier visualization. The feature names are described in Table 1.
Figure 5. Different stages for extracting possible room candidates. The Redwood, the S3DIS, the RTC360, the BLK2Go, and the VLX data are displayed in order from top to bottom. The prediction of all points can be seen in (a); the ceiling has been removed for easier visualization. In (b), the created graph and the corresponding edges are shown. All edges longer than a threshold are removed, creating multiple subgraphs in (c). Better connections are searched for and a new graph is constructed (d). Cycles are searched for in the resulting graph and path patches are used to define the room contours (e).
Figure 6. Different stages for whole pipeline. The Redwood, the S3DIS, the RTC360, the BLK2Go, and the VLX data are displayed in order from top to bottom. The prediction of the defined base elements is shown in (a). In (b), all points are assigned to RoIs by the described graph search. The RoIs fitted by polygons are visualized in (c). The fitted polygons and the raw points are shown in (d).
Figure 7. Distance between points belonging to classes used for the polygon extraction and the final extracted room polygons. The number of points is normalized with respect to the number of points in each dataset.
Figure 8. Quantitative comparison between the automatically generated (green) and the manual, human-reconstructed geometry (red). The points predicted by the NN as a wall are displayed in blue. (a) Easy room, (b) small room with fine details, and (c) large floor with corners.
Figure 9. Different sensor inputs and the corresponding reconstructed room with additional features. Each row shows a point cloud from a different sensor. Starting with the Redwood, the S3DIS, the RTC360, and, finally, the VLX data. The first column (a) shows the colorized point cloud for easier visualization. The second column (b) is the final digital model, which was created by our proposed method. In the last column (c), we have superimposed the regenerated model and the original point cloud.
Table 1. Geometrical features calculated by the eigenvalues of the covariance matrix.
Planarity $f_p$ = $(\lambda_2 - \lambda_3) / \lambda_1$
Linearity $f_l$ = $(\lambda_1 - \lambda_2) / \lambda_1$
Sphericity $f_s$ = $\lambda_3 / \lambda_1$
Surface variation $f_v$ = $\lambda_3 / (\lambda_1 + \lambda_2 + \lambda_3)$
Sum of eigenvalues $f_{\Sigma}$ = $\lambda_1 + \lambda_2 + \lambda_3$
Omnivariance $f_o$ = $(\lambda_1 \cdot \lambda_2 \cdot \lambda_3)^{1/3}$
Eigenentropy $f_e$ = $-\sum_{i=1}^{3} \lambda_i \cdot \ln(\lambda_i)$
Anisotropy $f_a$ = $(\lambda_1 - \lambda_3) / \lambda_1$
Table 2. Runtime in ms using Open3D voxel downsampling, and our approach to downsample and include the feature calculation and the k = 8 Nearest Neighbor calculation using Open3D and Scikit-Learn. For downsampling, we chose a voxel size of 0.1 m. All experiments were conducted using S3DIS Area 5 data, which contains 68 point clouds with a total amount of 78,719,063 points. Note that the calculation for the covariance matrix and the eigenvalues are not included within the kNN time but in our approach.
        Open3D    Ours     Open3D kNN    Scikit-Learn kNN
min       7.57    18.62        116.67              704.27
avrg     50.11    59.35        928.12             5197.86
max     218.76    94.44        866.96            22,565.6
Table 3. Semantic segmentation results for different architectures. The first part shows the results using color information and the second part shows the results using the proposed geometrical features. The first section for the given input names the IoU on S3DIS evaluated on Area 5. The second section indicates the results of our internally generated data using an RTC360 scanner.
Method             Clutter  Ceiling  Floor  Wall  Door  Window  Door Leaf  mIoU
RGB input, evaluated on S3DIS Area 5
PointTransformer    0.80     0.90    0.98   0.74  0.49   0.47     0.57     0.71
RandLA-Net          0.84     0.93    0.98   0.81  0.34   0.48     0.08     0.65
3DConv              0.63     0.84    0.86   0.62  0.36   0.21     0.20     0.53
RGB input, evaluated on RTC360 data
PointTransformer    0.40     0.55    0.83   0.67  0.19   0.02     0.13     0.40
RandLA-Net          0.39     0.64    0.95   0.76  0.12   0.01     0.00     0.41
3DConv              0.60     0.82    0.82   0.73  0.12   0.03     0.05     0.45
Geo input, evaluated on S3DIS Area 5
PointTransformer    0.82     0.92    0.98   0.77  0.54   0.48     0.67     0.74
RandLA-Net          0.85     0.93    0.98   0.78  0.45   0.34     0.66     0.71
3DConv              0.63     0.86    0.86   0.64  0.33   0.29     0.37     0.57
Geo input, evaluated on RTC360 data
PointTransformer    0.42     0.53    0.73   0.73  0.20   0.02     0.05     0.39
RandLA-Net          0.48     0.66    0.95   0.63  0.14   0.02     0.10     0.43
3DConv              0.59     0.81    0.87   0.77  0.17   0.04     0.11     0.48
Table 4. The number of detected regions of interest compared to the real number of existing rooms.
                        Redwood  S3DIS  RTC360  BLK2Go  VLX
Rooms                      3       12     16       6     6
Detected                   3       14     21       6     7
After post-processing      3       12     19       6     6
