Sensors 2012, 12(12), 16099-16115; doi:10.3390/s121216099

Article
Automatic Method for Building Indoor Boundary Models from Dense Point Clouds Collected by Laser Scanners
Enrique Valero 1,2,*, Antonio Adán 2 and Carlos Cerrada 1
1 School of Computer Engineering, Universidad Nacional de Educación a Distancia (UNED), C/Juan del Rosal, 16, 28040 Madrid, Spain; E-Mail: ccerrada@issi.uned.es
2 3D Visual Computing and Robotics Lab, Universidad de Castilla-La Mancha (UCLM), Paseo de la Universidad, 4, 13071 Ciudad Real, Spain; E-Mail: Antonio.Adan@uclm.es
* Author to whom correspondence should be addressed; E-Mail: evalero@issi.uned.es; Tel.: +34-91-398-6477.
Received: 31 August 2012; in revised form: 30 October 2012 / Accepted: 8 November 2012 / Published: 22 November 2012

Abstract

In this paper we present a method that automatically yields Boundary Representation Models (B-rep) of interiors after processing dense point clouds collected by laser scanners from key locations throughout an existing facility. Our objective is focused on providing single models that contain the shape, location and relationship of primitive structural elements of inhabited scenarios such as walls, ceilings and floors. We propose a discretization of the space in order to accurately segment the 3D data and generate complete B-rep models of interiors in which faces, edges and vertices are coherently connected. The approach has been tested in real scenarios with data coming from laser scanners, yielding promising results. We have thoroughly evaluated those results by analyzing how reliably these elements can be detected and how accurately they are modeled.
Keywords: 3D modeling; B-rep models; laser scanners; 3D data processing

1. Introduction

During the last decade laser scanners have gained popularity in architecture, engineering, construction and facility management (AEC/FM). Other measurement methods such as total stations, measuring tapes and stereo-camera-based prototypes are too time-consuming or inaccurate compared to scanners, particularly in large-scale environments. Moreover, the high density provided by a single scan (which can exceed several million points) makes this technology very suitable for the 3D modeling of facilities.

Most of the time, the dense raw data provided by scanners are manipulated by a designer or processed by an engineer in order to create a simplified model of the scenario. This well-known reverse engineering process is applied to 3D model creation. Thus, a simplified model provides a high-level representation of the scenario that ranges from a single CAD model, in which a wall is represented as a set of independent planar surfaces, to a Building Information Model (BIM), in which a wall is treated as a volumetric object composed of multiple surfaces with several relevant properties such as color, material, cost, etc. Much of the emphasis in previous works has been on creating visually realistic non-parametric models rather than accurate parametric ones. Some examples in this category include the methods of El-Hakim et al. [1], which is focused on indoor environments, and of Früh et al. [2] and Remondino et al. [3], which are focused on outdoor environments. In these cases, a great part of the modeling process is supervised by the modeler, so it cannot be said that the models are obtained in an automatic manner.

In this paper we introduce a method that automatically yields Boundary Representation Models (B-rep) of interiors after processing dense point clouds collected by laser scanners from key locations throughout an existing facility. The 3D model then represents the current state of the building, known as the “as-is condition”. This model does not necessarily coincide with either the designed model (“as-designed condition”) or the built model (“as-built condition”). In fact, the facility could have been slightly modified or restored from its initial design. On other occasions, the design drawings are inaccessible or simply do not exist. Thus, the automatic creation of “as-is models” of inhabited scenarios is a challenging research field which is gaining attention from different application areas including architecture, engineering, robotics, etc. [4].

Although there are not many publications dealing with the automatic creation of complete 3D models, several interesting works related to individual stages of this process have appeared in the literature in recent years. A review of techniques for the automatic reconstruction of as-built building information models can be found in [4]. The process of creating a single semantic model can vary depending on the input and the expected output. Generally, the automatic modeling process can be divided into three steps: data acquisition, data processing and data modeling, and most of the published papers concentrate on the second stage, data processing.

In this framework, 3D data processing means processing millions of unstructured 3D points in order to obtain higher-level data structures. Several approaches that convert 3D data into high-level representations in the building context can be found over the last years. We can distinguish here between proposals to detect and model single objects or particular parts of large scenarios and those that completely model indoor and outdoor environments.

As regards the first sort of proposals, one of the earliest works to obtain 3D models from laser scanner data in a local context is that of Kwon et al. [5]. They introduce a set of algorithms for fitting sparse point clouds to a set of single volumetric primitives (cuboids, cylinders and spheres), which can be extended to groups of primitives belonging to the same object. An automated recognition/retrieval approach for 3D CAD objects in the construction context is presented in [6]. In [7] a semiautomatic method to match existing 3D models to data of industrial building steel structures is proposed. The author develops a variant of the ICP algorithm for data registration in order to recognize CAD model objects in large site laser scans. The method is semiautomatic because a coarse registration needs to be performed manually. The same author presents in [8] a plane-based registration system for coarse registration of laser scanner data with 3D models in the context of the AEC/FM industry. The approach is based on finding planes in the point cloud and matching them with the ones extracted from the 3D models. Nevertheless, the matching process is performed by hand. Other authors [9] propose the combination of point clouds, acquired by means of laser scanners, and photogrammetry in order to generate 3D models of a building under construction. The work of Rusu et al. [10] correctly recognizes and localizes relevant kitchen objects including cupboards, kitchen appliances and tables. They interpret the improved point clouds in terms of rectangular planes and 3D geometric shapes. One of the innovations consists of a novel multi-dimensional tuple representation for point clouds and robust, efficient and accurate techniques for computing these representations, which facilitate the creation of hierarchical object models.

Detailed models of parts of walls and building façades are obtained in [2,11–13]. In [11,12] the data processing goes from detecting windows through low-density data regions to discovering other data patterns in the façade. In [13] important façade elements such as walls and roofs are distinguished as features. Then, knowledge about the features’ sizes, positions, orientations and topology is used to recognize these features in a segmented laser point cloud. Früh et al. [2] develop a set of data processing algorithms for generating textured façade meshes of cities from a series of scans, corresponding to vertical 2D surfaces, obtained by a laser scanner. Thrun et al. developed a plane extraction method based on the expectation-maximization algorithm [14]. Other researchers have proposed plane-sweep approaches to find planar regions [15,16]. Valero et al. [17] focus on the modeling of the linear moldings that typically surround doorways and windows and divide ceilings from walls and walls from floors.

With respect to the automatic creation of complete indoor models, geometric surfaces or volumetric primitives can be fitted to a 3D point cloud to model walls, doors, ceilings, columns, beams and other structures of interest. In its simplest form, the modeled primitives are annotated with labels (e.g., “wall”). At a higher modeling degree, spatial and functional relationships between nearby structures and spaces are established. Interesting developments can be seen in [18] and [19]. In these works, Adan et al. identify and model the main structural components of an indoor environment (walls, floors, ceilings, windows and doorways) despite the presence of significant clutter and occlusion, which occur frequently in natural indoor environments, but only for rectangular rooms. In the present work, by contrast, we present an approach to identify walls in rooms with more complicated geometries. Okorn et al. [20] present an automated method for creating accurate 2D floor plan models of building interiors. They project the points onto a 2D ground plane and create a histogram of point density, from which line segments corresponding to walls are extracted using a Hough transform. The first steps of our proposal are inspired by the same strategy, since we also project the points to extract the approximate 2D location of the walls. Nevertheless, this is only used to later retrieve the corresponding 3D data and precisely delimit the points belonging to the walls.

This work aims to make progress in the automatic creation of 3D models from point clouds provided by scanners [21]. The contributions of this work lie in three points: accuracy of the model, coherence of the representation and evaluation of the method. First, we propose a voxelization (discretization) of the space with which to accurately define the planes fitted to the points belonging to the ceiling, floor and walls of the facility. Second, we generate a complete boundary representation model of the interior in which faces, edges and vertices are coherently connected. Third, once the method is designed and implemented, we evaluate its performance by measuring the geometric modeling accuracy, the recognition accuracy and the relationship modeling accuracy. The following sections explain the main stages needed to achieve a complete B-rep model of interiors and present the results obtained.

2. 3D Data Segmentation

2.1. Floor and Ceiling Segmentation

The first stage in the 3D model creation process consists of efficiently segmenting the data obtained by the scanner from different positions. Here we assume that preprocessing stages like outliers/noise filtering and data registration are done, so that our input is an unstructured large cloud of points.

One of the problems that arise in automatic wall detection is the voxelization of the space: which voxel size is best and where the origin of the voxel grid should be set. In [18,19] this aspect is not tackled. In the present work, we optimize the voxel size, allowing us to fit the ceiling, floor and walls of rectangular rooms into voxel planes more precisely and simultaneously. Thus, a voxel plane contains the majority of the data of each wall.

First of all, we address the identification and segmentation of the floor and the ceiling of the room. The method proposed in this section assumes that, as is usual in construction, floor and ceiling are parallel structures. From here on, the word “wall” will be used indistinctly to refer to any flat indoor structure (ceiling, floor or wall). Our approach is based on creating an optimum discretization of the space (from here on called the “voxel space”) and then accurately defining the planes that contain the maximum number of points lying in the floor and ceiling.

Formally, the voxel space can be defined in a universal coordinate system (UCS) by means of the voxel size (ε, δ, σ) and the coordinates of the voxel centroid (vx, vy, vz), vz being the voxel’s height, according to the construction context.

Assuming cubic voxels, our objective is then to determine the minimum voxel size, characterized by the parameter ε, and the height coordinate vz of the voxel plane that contains the maximum number of points belonging to a wall. Once the voxelization of the space is carried out, most of the points belonging, for example, to the floor are contained in a narrow parallelepiped M whose height is ε (see Figure 1(a)). The uncertainty in ε can be limited by means of the flatness specifications provided in the construction standards.

The trade-off between the size of the voxels and the number of points contained in M is regulated by the objective function (1), in which p(ε, vz) is the percentage of the data contained in M. Therefore, this function evaluates the percentage of the wall’s data contained in a voxel plane versus the voxel’s size. Figure 1 shows the voxelization of the space according to the definition of the volume M for the floor (Ma) and the ceiling (Mb):

$$F(\varepsilon, v_z) = \frac{p^2(\varepsilon, v_z)}{\varepsilon} \quad (1)$$

In order to obtain the maximum value of function F, attaining the maximum number of the wall’s data points inside a voxel plane, restricted ranges are defined for the variables ε and vz. On the one hand, ε is delimited by the range [ε1, ε2], in which ε1 corresponds to the precision of the scanner (in our case, 1 cm) and ε2 is determined by the flatness tolerance allowed in the international specification DIN 18202.

On the other hand, vz is initialized at the position m, which corresponds to the maximum value of the data distribution around the wall. For each value of ε in the range [ε1, ε2], vz is evaluated over the range [m − ε, m + ε] and function F is then calculated. Once this process is finished, the maximum value of F (max F in Algorithm 1) provides the optimum values ε’ and v’z. The algorithm is shown below.

When the algorithm is separately applied for ceiling and floor, two different voxel space configurations are generated. Let (v’z,a, ε’a) and (v’z,b, ε’b) be the position and voxel size parameters calculated for the floor and the ceiling of the room. We propose a new function G which integrates both voxelization proposals.

Algorithm 1. Calculation of optimum values ε’ and v’z.
F(ε, vz) ← 3D data distribution in axis Z
m ← fitted Gaussian function
max F ← F(ε1, m)
for each ε ← ε1 to ε2 do
  for each z ← m − ε to m + ε do
    calculate F(ε, z)
    if F(ε, z) > max F then
      max F ← F(ε, z); ε’ ← ε; v’z ← z
    end
  end
end
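The search of Algorithm 1 can be sketched in Python as follows. This is a minimal sketch: the histogram mode stands in for the paper's fitted Gaussian when initializing m, and the grid resolutions `n_eps` and `n_v` are illustrative assumptions, not part of the original method.

```python
import numpy as np

def optimum_voxel_plane(z_coords, eps1=0.01, eps2=0.05, n_eps=9, n_v=41):
    """Algorithm 1 sketch: find the slab height eps' and centre v_z' that
    maximise F(eps, v_z) = p(eps, v_z)^2 / eps, where p is the fraction
    of points whose z coordinate lies in [v_z - eps/2, v_z + eps/2].
    eps1/eps2 bound the slab thickness (scanner precision vs. the DIN
    18202 flatness tolerance)."""
    n = len(z_coords)
    # m: position of the maximum of the data distribution; a histogram
    # mode is used here in place of the fitted Gaussian of the paper
    hist, edges = np.histogram(z_coords, bins=200)
    k = np.argmax(hist)
    m = 0.5 * (edges[k] + edges[k + 1])
    best = (-np.inf, None, None)          # (max F, eps', v_z')
    for eps in np.linspace(eps1, eps2, n_eps):
        for v_z in np.linspace(m - eps, m + eps, n_v):
            p = np.count_nonzero(np.abs(z_coords - v_z) <= eps / 2) / n
            F = p * p / eps
            if F > best[0]:
                best = (F, eps, v_z)
    return best[1], best[2]
```

In practice `z_coords` would be the z column of the registered point cloud; the double loop mirrors the nested `for each` of the pseudocode above.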

Equation (2) imposes one unique voxel size E and the positions of the voxel planes V’z,a and V’z,b, attaining the maximum value of G, which provides the best simultaneous voxel planes for ceiling and floor. The new function to optimize is as follows:

$$\operatorname*{arg\,max}_{\Omega}\ \left\{ G(E, V'_{z,a}, V'_{z,b}) \right\} \quad (2)$$
$$\Omega = \left\{ E \in [\varepsilon_1, \varepsilon_2],\ V'_{z,a} \in \left[ v'_{z,a} - \tfrac{\varepsilon'_a}{2},\, v'_{z,a} + \tfrac{\varepsilon'_a}{2} \right],\ V'_{z,b} \in \left[ v'_{z,b} - \tfrac{\varepsilon'_b}{2},\, v'_{z,b} + \tfrac{\varepsilon'_b}{2} \right] \right\}$$
$$G(E, V'_{z,a}, V'_{z,b}) = p_a(E, V'_{z,a}) + p_b(E, V'_{z,b})$$
pa and pb are the occupation percentages for the parameters ε’a and ε’b. The pseudocode of the algorithm which obtains G is detailed in Algorithm 2.

Algorithm 2. Obtaining the function G.
for each ε ← ε1 to ε2 do
  for each z ← za − ε/2 to za + ε/2 do
    calculate ceiling voxel centroid b
    calculate number of points in intervals a and b
    calculate percentages of points pa and pb
  end
  for each z ← zb − ε/2 to zb + ε/2 do
    calculate floor voxel centroid a
    calculate number of points in intervals a and b
    calculate percentages of points pa and pb
  end
end
G = maximum(sum(pa, pb))
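A minimal Python sketch of this joint optimization follows. For each candidate common size E it searches the two slab centres independently, which is valid here because pa and pb depend on disjoint subsets of points; the step counts and default bounds are assumptions for illustration.

```python
import numpy as np

def joint_voxel_plane(z, vza, eps_a, vzb, eps_b,
                      eps1=0.01, eps2=0.05, steps=25):
    """Algorithm 2 sketch: pick one common voxel size E and slab centres
    V_za (floor) and V_zb (ceiling) maximising G = p_a + p_b.  (vza,
    eps_a) and (vzb, eps_b) are the per-wall optima from Algorithm 1."""
    n = len(z)

    def occupancy(c, E):
        # fraction of points inside the slab [c - E/2, c + E/2]
        return np.count_nonzero(np.abs(z - c) <= E / 2) / n

    best = (-np.inf, None, None, None)    # (G, E, V_za, V_zb)
    for E in np.linspace(eps1, eps2, steps):
        pa, ca = max((occupancy(c, E), c)
                     for c in np.linspace(vza - eps_a / 2,
                                          vza + eps_a / 2, steps))
        pb, cb = max((occupancy(c, E), c)
                     for c in np.linspace(vzb - eps_b / 2,
                                          vzb + eps_b / 2, steps))
        if pa + pb > best[0]:
            best = (pa + pb, E, ca, cb)
    return best[1], best[2], best[3]
```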

2.2. Performance Tests

The approach detailed above has been tested on simulated and real data. Figure 1 shows an example of the results obtained under simulation. In Figure 1(b) a front view of a simulated point cloud of a room is depicted. Two dense point regions can be seen, corresponding to the floor and ceiling, along with sparse regions which simulate the rest of the sensed points in the room. The maximum values of two Gaussian functions fitted to the data distribution determine the initial values of vz,a and vz,b. Figure 1(c) shows the voxel planes projected onto the plane YZ, the 3D data and two slices in red and blue that contain the majority of the sensed points. The red voxel plane contains points of the floor and the blue one contains points of the ceiling. We tested the algorithm over twenty simulated rooms. The average occupation percentages for ceiling and floor were 96.9% and 92.1%, respectively.

Figure 1(d) presents the result for a real case. We illustrate the 3D data sensed by a laser scanner from five positions in an inhabited classroom. The data segments corresponding to the ceiling and floor are painted in cyan and red. The total point cloud was composed of 1.5 million points, and the size of the segmented regions was 187,000 points for the ceiling and 95,000 points for the floor.

2.3. Walls Segmentation

2.3.1. Rectangular Indoor Plans

In this section we present an approach to segment the 3D data corresponding to each of the walls of a rectangular indoor plan. A rectangular plan is the easiest case to deal with. The strategy explained in Section 2.1 can be extended here by considering three pairs of parallel voxel planes, so that Equation (2) generalizes naturally. The objective is to find six parallelepipeds (slices) with centers ci, i = 1,…,6 and a common width ε which contain the maximum number of points belonging to the walls of the room. Formally, the objective is:

$$\operatorname*{arg\,max}_{\varepsilon,\, c_i}\ G(\varepsilon, c_1, c_2, \ldots, c_6) = \sum_{i=1}^{6} p_i(\varepsilon, c_i) \quad (3)$$
$$\varepsilon \in [\varepsilon_{\min}, \varepsilon_{\max}], \quad c_i \in \left[ z_i - \tfrac{\varepsilon_i}{2},\, z_i + \tfrac{\varepsilon_i}{2} \right], \quad i = 1, \ldots, 6$$
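Under the additional assumption of an axis-aligned room, the objective of Equation (3) can be evaluated as below. The `axes` argument, encoding which coordinate axis is orthogonal to each wall, is our own bookkeeping for the sketch, not the paper's notation.

```python
import numpy as np

def six_slab_objective(points, centres, axes, eps):
    """G(eps, c_1..c_6) of Equation (3): summed occupation fractions of
    six axis-aligned slabs of common width eps.  points is (n, 3);
    axes[i] is the coordinate axis (0=x, 1=y, 2=z) orthogonal to wall i
    and centres[i] the candidate slab centre c_i."""
    n = len(points)
    return sum(np.count_nonzero(np.abs(points[:, ax] - c) <= eps / 2) / n
               for c, ax in zip(centres, axes))
```

Maximizing this over a grid of (eps, c_1,…,c_6) candidates, as in Algorithms 1 and 2, yields the six wall slabs of a rectangular room.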

2.3.2. Arbitrary Indoor Plans

Assuming that floor and ceiling lie in parallel planes, the approach proposed in Section 2.1 can always be used to detect the floor and the ceiling in any non-rectangular room of a building. Figure 2(b) illustrates the extraction of the points belonging to the floor and ceiling of an arbitrary plan. However, the identification of the walls is a more complex task.

The projection of the 3D data from a specific viewpoint allows us to obtain a normalized binary image in which each pixel is either occupied by one or more 3D points (white pixels in Figure 3(b)) or empty. From here on, we will denote by I the projected image of the data from a top view. This image helps us to obtain a coarse location and position of the walls, which will later be refined. The segmentation process is as follows.

After creating the image I, the boundary of the room is extracted and, through a Hough transform algorithm, the set of edges corresponding to the walls in the projected image is detected in a 2D context. As the reader may suppose, if a wall or the connection between two walls is completely occluded by a piece of furniture or a constructive component, the extracted boundary fits these components rather than the walls. Once the room boundaries are demarcated, the points outside the boundaries are removed automatically. Afterwards, we compute the intersections between edges and obtain the corners in the image. Figure 3 shows the steps of the segmentation process: (a) 3D point cloud viewed from the top of the room; (b) discretization of the view and generation of the binary image I; (c) boundary extraction in I; (d) edge and corner detection.

Assuming vertical walls, the segments and corners in the image correspond to planes and edges in the 3D context, and these planes are used to segment the 3D points that lie on each wall. Thus, we calculate the mean square distance of the point cloud to the candidate wall planes and classify each point into a wall. Figure 4 shows the segmentation of points belonging to the walls for two different rooms.
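The final classification step can be sketched as a nearest-plane assignment. The `max_dist` rejection threshold is an assumption we add for illustration, so that clutter points far from every plane do not get attached to a wall.

```python
import numpy as np

def assign_points_to_walls(points, planes, max_dist=0.05):
    """Classify each 3D point to its nearest candidate wall plane.
    points is (n, 3); planes is a list of (n_hat, d) with unit normal
    n_hat and offset d, so the signed point-plane distance is
    n_hat . p + d.  Points farther than max_dist from every plane are
    left unlabelled (-1)."""
    # (n_points, n_planes) matrix of absolute distances
    D = np.abs(np.stack([points @ n + d for n, d in planes], axis=1))
    labels = np.argmin(D, axis=1)
    # reject points that are far from all candidate planes
    labels[D[np.arange(len(points)), labels] > max_dist] = -1
    return labels
```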

3. Creation of B-Rep Models

The segmentation stage provides the set of points belonging to each wall (including floor and ceiling) of the indoor scene. The following step consists of converting this raw information into a high-level surface representation. Among the universe of 3D representation models, we have chosen the boundary representation (B-rep) model [22]. In B-rep, a shape is described by a set of surface elements along with the connectivity information that describes the topological relationships between the elements.

The process to achieve a B-rep of an interior space starts by calculating the plane that best fits each set of segmented points associated with a wall. To do this, we have used the Singular Value Decomposition (SVD) technique [23]. Through SVD, the closeness between the plane and the segmented points is easily calculated. The steps to calculate the plane equation that best fits a generic set of points are shown below.

Each point cloud P = (p1, p2, …, pn), corresponding to a wall, can be fitted to a plane defined by the equation:

$$\Pi : Ax + By + Cz + D = 0 \quad (4)$$

The best-fit plane is the one that minimizes the sum of the squared distances between every point pi and the plane Π. Therefore, we can calculate each fitted plane by minimizing the expression:

$$\sum_{i=1}^{n} \frac{\left| Ax_i + By_i + Cz_i + D \right|^2}{A^2 + B^2 + C^2} \quad (5)$$

Setting the partial derivative with respect to D equal to zero, we obtain:

$$D = -(Ax_0 + By_0 + Cz_0) \quad (6)$$

in which p0 = (x0, y0, z0) is the centroid of P. Substituting (6) into (5):

$$\sum_{i=1}^{n} \frac{\left| A(x_i - x_0) + B(y_i - y_0) + C(z_i - z_0) \right|^2}{A^2 + B^2 + C^2} \quad (7)$$

Let us introduce the matrix M = [p1 − p0  p2 − p0  …  pn − p0]T, in which pi = (xi, yi, zi) and p0 = (x0, y0, z0), and the vector v = [A B C]T. We can then state the problem in matrix form:

$$\frac{(v^T M^T)(M v)}{v^T v} \quad (8)$$

This expression is called a Rayleigh quotient and is minimized by the eigenvector of MTM that corresponds to its smallest eigenvalue. Next, we use the singular value decomposition of M = USVT, in which the columns of V are the right singular vectors of M and the eigenvectors of MTM. Minimizing Equation (8) therefore provides the normal of the plane Π, nΠ = [A B C]T. The parameter D is calculated from Equation (6).
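The SVD fitting procedure of Equations (4)–(8) translates directly to a few lines of NumPy:

```python
import numpy as np

def fit_plane_svd(P):
    """Least-squares plane through the (n, 3) point set P, following
    Equations (4)-(8): centre the data, take the right singular vector
    of M associated with the smallest singular value as the unit normal
    [A, B, C], and recover D from the centroid via Equation (6)."""
    p0 = P.mean(axis=0)                       # centroid of the points
    M = P - p0
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    n = Vt[-1]           # singular values are sorted, so the last row
                         # corresponds to the smallest one
    D = -n @ p0          # Equation (6)
    return n, D          # plane: n . [x, y, z] + D = 0
```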

The last stage consists of calculating the intersections between connected planes. Note that the topological relationships between the walls were established when the edges and corners were extracted in image I. Thus, we know which planes have to be intersected and, therefore, we can find the 3D edges and corners of the room. For instance, the ceiling and floor planes pairwise intersect with each wall plane and define the edges at the top and bottom of the room, and vertices are extracted by intersecting three planes.
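Each vertex can then be recovered by intersecting three planes, i.e., solving a 3×3 linear system; a sketch:

```python
import numpy as np

def plane_intersection_vertex(planes):
    """Vertex shared by three planes (n_i, D_i), each satisfying
    n_i . p + D_i = 0: stack the normals into N and solve N p = -D.
    np.linalg.solve raises LinAlgError when the planes are (near-)
    parallel and N is singular."""
    N = np.array([n for n, _ in planes])     # 3x3 matrix of normals
    d = np.array([D for _, D in planes])
    return np.linalg.solve(N, -d)
```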

Figure 5 illustrates an example of the results of our approach. Part (a) represents the planes fitted to the walls and part (b) shows the set of labeled vertices of the room. Part (c) contains the relationship graph, in which adjacent faces in the diagram share one edge.

4. Results

Our approach was tested on panoramic range data from inhabited interiors. Note that we are dealing with very complex scenarios, in which furniture and other objects contribute clutter and occlusions. They contain a wide variety of objects that occlude not only the walls, but also the ceiling and floor. Three to six laser scans were taken per room. A FARO Photon laser scanner provided 3–8 million points per room. Figures 7 and 8 show the resulting 3D B-rep models for two rooms.

In this section we present the results obtained for two different inhabited indoors. These interiors do not have rectangular but arbitrary plans. The first room corresponds to the Virtual Reality Lab at the Escuela Técnica Superior de Ingenieros Industriales (UCLM). The second one is the living room of a private flat.

After defining the B-rep model, we aim to investigate the accuracy of the obtained models. Firstly, we determine the error committed when we represent the walls of the rooms by means of planes. We thus measure the quadratic distance of every point of the walls to the corresponding plane. Distances between the sensed points and the planes are represented as colormaps in Figure 6(a) for different walls of the lab and in Figure 6(b) for two walls belonging to the living room. Different regions with notable color variation can be seen. These areas are due to typical objects hanging on the wall (pictures, posters, and so on), moldings and doorframes. Of course, some of the errors come from the fact that the walls, ceiling and floor are not totally flat. The corresponding mean errors for both inhabited interiors are presented in Table 1, which reports the mean distance between each 3D point and its corresponding plane.

The mean error of the fitted planes ranged between 0.59 and 4.64 cm for the lab and between 0.19 and 6.5 cm for the living room. In any case, for the majority of the walls the mean error is below 2 cm, despite their being severely occluded by tables and chairs. On the other hand, the worst-fitting wall was number 2 of the lab. In this case, there was a big projection panel which largely occluded the wall. Most points sensed in this part of the room corresponded to the panel, so the calculated plane fitted the panel instead of the wall.

We focused most of our analysis on understanding the performance of our modeling results, since this aspect of the algorithm is considerably less studied than planar wall modeling. We considered two aspects of the performance: first, how reliably the walls can be detected, and second, how accurately they are modeled. To answer the first question, we compared the detected walls with the walls of the ground truth model. No failures were reported in this respect and all existing walls were correctly detected. Failed detections would mainly occur under severe occlusion.

The second question was tackled by first generating the geometric ground truth of the scenes. We constructed the ground truth models of the rooms by hand with the help of a Leica DISTO™ A6 laser tape measure, which provides 1 millimeter accuracy. In order to assess the error committed, the ground truth models were then compared with our 3D models (see Figures 7 and 8). The results are summarized in Table 2.

We first compared the lengths of the vertical and horizontal edges of each face of the ground truth models with ours; the differences between them are denoted dv and dh in Table 2. The value of dv was similar for all pairs of walls: around 0.87 cm for the lab and 0.80 cm for the living room. The smallest value of dh in the living room was for wall number 3. In this case, the mean value was less than 2 cm in both rooms.

In order to compare the accuracy of the orientation of the faces, we calculated the difference between the respective normal vectors (α in Table 2). The smallest faces yielded high values of α, which distorted the average value. Thus, although the mean values of α were 1.59° and 1.85° for the lab and the living room respectively, in the majority of the faces α was less than these values.

Having presented the results for these two inhabited interiors, we can establish some differences between them. The living room has more and smaller walls than the laboratory, which might lead the reader to believe that the segmentation process is particularly complicated in this case. However, if we compare the mean values of the different parameters in Tables 1 and 2, the deviations from the ground truth are lower in the living room’s case. This result can be due to the fact that, in the laboratory, a multitude of pieces of furniture occlude the walls and, consequently, the segmentation process may yield more imprecise segments there.

5. Conclusions

The automatic creation of “as-is models” of inhabited scenarios is a challenging research field which is gaining attention from different applications in AEC/FM contexts. In this paper an approach for automatically creating Boundary Representation Models (B-rep) from dense point clouds collected by laser scanners has been presented.

The method proposed here is based on segmenting the data by optimizing the discretization of the space. We make a 2D projection of the data in the voxel space and coarsely determine the segments in which the points lie. The voxel size is adjusted to the walls, taking into account the flatness tolerances used in construction, improving on the approach of previous works [18,19] in which predefined voxel sizes were used. Then the boundary representation model is generated by intersecting the planes that contain the 3D segments, calculating the faces, edges and vertices, and establishing the relationships between components. This idea has been tested on arbitrarily shaped plans, providing excellent results.

The results lead us to state that the majority of the walls’ points were correctly segmented. We have also evaluated the performance and accuracy of our method by comparing the ground truth and the query B-rep models. Overall modeling accuracy for the AEC domain typically needs to be within 2.5 cm of the ground truth, so the precision of our model (between 0.8 cm and 1.88 cm for vertical and horizontal edges) is clearly within the standard.

This research is a part of a larger project which aims at the automated reverse engineering of buildings, including more complex semantic models. Future improvements to the method will be addressed along two lines: extending the method to non-flat walls and generating B-rep models that include details (paintings, panels, etc.) and parts of the walls (moldings, doorframes, window frames, etc.) in inhabited buildings. As another potential application, these 3D models can be helpful to create, in an automatic manner, virtual scenes in which digitized objects are introduced in order to generate virtual exhibitions like the ones shown in [24–26].

Acknowledgments

This research has been carried out under contract with the Spanish CICYT through the DPI-2008-05444, DPI 2009-14024-C02-01 and DPI2011-26094 projects. It also belongs to the activities carried out in the frame of the RoboCity2030-II excellence research network of the CAM (ref. S2009/DPI-1559).

References

  1. El-Hakim, S.F.; Boulanger, P.; Blais, F.; Beraldin, J.A. A system for indoor 3D mapping and virtual environments. Proc. SPIE 1997, 3174, 21–35.
  2. Früh, C.; Jain, S.; Zakhor, A. Data processing algorithms for generating textured 3D building facade meshes from laser scans and camera images. Int. J. Comput. Vis 2005, 61, 159–184, doi:10.1023/B:VISI.0000043756.03810.dd.
  3. Remondino, F.; El-Hakim, S.; Gonzo, L. 3D Virtual Reconstruction and Visualization of Complex Architectures. Proceedings of 3rd International Workshop on 3D Virtual Reconstruction and Visualization of Complex Architectures (3D-Arch), Trento, Italy, 25–28 February 2009.
  4. Tang, P.; Huber, D.; Akinci, B.; Lipman, R.; Lytle, A. Automatic reconstruction of as-built building information models from laser-scanned point clouds: A review of related techniques. Automat. Constr 2010, 19, 829–843, doi:10.1016/j.autcon.2010.06.007.
  5. Kwon, S.; Kim, F.; Haas, C.; Liapi, K.A. Fitting range data to primitives for rapid local 3D modeling using sparse point range clouds. Automat. Constr 2004, 13, 67–81, doi:10.1016/j.autcon.2003.08.007.
  6. Bosche, F.; Haas, C. Automated retrieval of 3D CAD model objects in construction range images. Automat. Constr 2008, 17, 499–512, doi:10.1016/j.autcon.2007.09.001.
  7. Bosche, F. Automated recognition of 3D CAD model objects in laser scans and calculation of as-built dimensions for dimensional compliance control in construction. Adv. Eng. Informat 2010, 24, 107–118.
  8. Bosche, F. Plane-based registration of construction laser scans with 3D/4D building models. Adv. Eng. Informat 2012, 26, 90–102, doi:10.1016/j.aei.2011.08.009.
  9. El-Omari, S.; Moselhi, O. Integrating automated data acquisition technologies for progress reporting of construction projects. Automat. Constr 2011, 20, 699–705, doi:10.1016/j.autcon.2010.12.001.
  10. Rusu, R.B.; Marton, Z.C.; Blodow, N.; Dolha, M.; Beetz, M. Towards 3D point cloud based object maps for household environments. Rob. Auton. Syst 2008, 56, 927–941, doi:10.1016/j.robot.2008.08.005.
  11. Bohm, J.; Becker, S.; Haala, N. Model Refinement by Integrated Processing of Laser Scanning and Photogrammetry. Proceedings of 2nd International Workshop on 3D Virtual Reconstruction and Visualization of Complex Architectures (3D-Arch), Zurich, Switzerland, 12–13 July 2007.
  12. Bohm, J. Façade Detail from Incomplete Range Data. Proceedings of the ISPRS Congress, Beijing, China, 3–11 July 2008.
  13. Pu, S.; Vosselman, G. Knowledge based reconstruction of building models from terrestrial laser scanning data. ISPRS J. Photogramm 2009, 64, 575–584, doi:10.1016/j.isprsjprs.2009.04.001.
  14. Thrun, S.; Martin, C.; Liu, Y.; Hähnel, D.; Emery-Montemerlo, R.; Chakrabarti, D.; Burgard, W. A realtime expectation-maximization algorithm for acquiring multiplanar maps of indoor environments with mobile robots. IEEE Trans. Robot 2004, 20, 433–443, doi:10.1109/TRA.2004.825520.
  15. Hähnel, D.; Burgard, W.; Thrun, S. Learning compact 3D models of indoor and outdoor environments with a mobile robot. Rob. Auton. Syst 2003, 44, 15–27, doi:10.1016/S0921-8890(03)00007-1.
  16. Budroni, A.; Böhm, J. Toward Automatic Reconstruction of Interiors from Laser Data. Proceedings of Virtual Reconstruction and Visualization of Complex Architectures (3D-Arch), Venice, Italy, 22–24 August 2005.
  17. Valero, E.; Adan, A.; Huber, D.; Cerrada, C. Detection, Modeling, and Classification of Moldings for Automated Reverse Engineering of Buildings from 3D Data. Proceedings of International Symposium on Automation and Robotics in Construction (ISARC), Seoul, Korea, 2–4 March 2011; pp. 546–551.
  18. Adan, A.; Xiong, X.; Akinci, B.; Huber, D. Automatic Creation of Semantically Rich 3D Building Models from Laser Scanner Data. Proceedings of the International Symposium on Automation and Robotics in Construction (ISARC), Seoul, Korea, 2–4 March 2011; pp. 342–347.
  19. Adan, A.; Huber, D. 3D Reconstruction of Interior Wall Surfaces under Occlusion and Clutter. Proceedings of International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT), Hangzhou, China, 16–20 May 2011; pp. 275–281.
  20. Okorn, B.E.; Xiong, X.; Akinci, B.; Huber, D. Toward Automated Modeling of Floor Plans. Proceedings of the Fourth International Symposium on 3D Data Processing, Visualization and Transmission, Paris, France, 18–20 May 2010.
  21. Valero, E.; Adan, A.; Cerrada, C. Automatic construction of 3D basic-semantic models of inhabited interiors using laser scanners and RFID sensors. Sensors 2012, 12, 5705–5724, doi:10.3390/s120505705.
  22. Mortenson, M. Geometric Modeling, 1st ed ed.; John Wiley and Sons: New York, NY, USA, 1985.
  23. Golub, G.; Reinsch, C. Singular value decomposition and least square solutions. Numer. Math 1970, 14, 403–420, doi:10.1007/BF02163027.
  24. Corcoran, F.; Demaine, J.; Picard, M.; Dicaire, L.; Taylor, J. Inuit3D: An Interactive Virtual 3D Web Exhibition. Proceedings of the Conference on Museums and the Web, Boston, MA, USA, 17–20 April 2002.
  25. Walczak, K.; Cellary, W.; White, M. Virtual museum exhibitions. IEEE Computer 2006, 39, 93–95.
  26. Monnerat, M.; Romano, P.; Grillo, O.; Haguenauer, C.; Azevedo, S.; Cunha, G. The Dinos Virtuais Project: A virtual approach of a real exhibition. IADIS Int. J. WWW/Internet 2010, 8, 136–150.
Figure 1. (a) Representation of the voxelized space. (b) and (c) Simulated data of the interior of a room and final discretization of the space; blue and red mark the planes of voxels containing the majority of the floor and ceiling points. (d) Real segmentation: point cloud sensed in a room, with segmented points belonging to the ceiling (cyan) and floor (blue).
Figure 2. Segmentation of floor and ceiling in rectangular (a) and arbitrary (b) plans. (a) shows the segments of the four walls of the rectangular room presented in Figure 1(d).
Figure 3. Stages in the wall segmentation process. (a) Visualization of the point cloud from a zenithal viewpoint. (b) Binary image generated after discretization. (c) Boundary detection. (d) Defining edges and corners in the image.
Figure 4. Retrieval of 3D points corresponding to the walls.
Figure 5. (a) Planes fitting the walls. Note that the red planes do not represent the walls themselves but the planes which fit them; they are merely used to illustrate the intersections of such planes. (b) Labeled vertices of the room. (c) Decomposition of a solid into simple objects. Relationship between topological elements in the test room.
Figure 6. Deviation of the data to the fitted planes for two walls of the lab (a) and two walls of the living room (b). Colorbars are coded in centimeters.
Figure 7. Images of the tested lab (top). Our model and the ground truth model (bottom).
Figure 8. Planar image of the living room (top). Our model and the ground truth model (bottom).
Table 1. Mean deviation of the scanned data to fitted walls.

Lab
Wall:   1     2     3     4     5     6     7     8   Mean
[cm]: 1.13  4.64  1.71  0.59  0.81  1.44  1.23  1.16  1.58

Living room
Wall:   1     2     3     4     5     6     7     8     9    10    11    12    13    14    15    16   Mean
[cm]: 0.55  1.11  0.82  0.19  6.52  4.60  0.24  1.86  0.33  0.57  0.49  4.11  0.44  0.40  0.28  1.08  1.47
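As an illustration of how figures like the mean deviations in Table 1 can be obtained, the sketch below fits a plane to a wall's points via SVD (a total least-squares fit in the spirit of [23]) and reports the mean absolute point-to-plane distance. The function names and the synthetic noise level are our own assumptions for the example, not the paper's implementation.

```python
import numpy as np

def fit_plane(points):
    """Fit a plane to an (N, 3) point array by SVD (total least squares).

    Returns the plane's centroid and unit normal: the right singular
    vector with the smallest singular value of the centered data.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt[-1]

def mean_deviation(points):
    """Mean absolute point-to-plane distance (the quantity in Table 1)."""
    centroid, normal = fit_plane(points)
    return np.abs((points - centroid) @ normal).mean()

# Synthetic wall: points near the plane x = 2 m, with uniform noise of
# up to 1 cm in the normal direction (an assumed noise model).
rng = np.random.default_rng(0)
pts = np.column_stack([
    2.0 + rng.uniform(-0.01, 0.01, 1000),  # x (meters)
    rng.uniform(0.0, 5.0, 1000),           # y
    rng.uniform(0.0, 3.0, 1000),           # z
])
print(round(100 * mean_deviation(pts), 2), "cm")  # roughly 0.5 cm here
```

The mean absolute deviation of uniform ±1 cm noise is about 0.5 cm, the same order as most per-wall values reported in Table 1.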
Table 2. Parameters calculated for each pair of walls.

Lab
Wall     α (°)  dv (cm)  dh (cm)
1         1.09     0.81     0.52
2         0.09     0.80     3.21
3         1.12     0.89     1.83
4         1.58     0.90     1.96
5         6.62     0.90     1.12
6         4.83     0.89     0.60
7         0.08     0.89     0.98
8         0.45     0.90     4.86
9            –        –        –
10           –        –        –
Mean      1.59     0.87     1.88

Living room
Wall     α (°)  dv (cm)  dh (cm)
1         2.25     0.80     0.74
2        10.30     0.79     0.39
3         0.95     0.79     0.09
4         2.49     0.80     0.99
5         0.12     0.79     0.15
6         0.72     0.79     1.80
7         6.62     0.80     0.11
8         0.16     0.80     2.05
9         3.86     0.80     0.32
10        2.55     0.80     1.46
11        1.12     0.80     2.18
12        0.89     0.80     1.31
13        0.58     0.80     1.29
14        0.20     0.80     2.15
15        0.11     0.80     1.17
16        0.56     0.80     4.45
17           –        –        –
18           –        –        –
Mean      1.85     0.80     1.29
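An angle such as α in Table 2 can be computed from the unit normals of the two fitted planes of a wall pair. The helper below is a hypothetical sketch, assuming α is the sign-insensitive angle between the normals (so antiparallel normals, i.e., perfectly parallel walls, give 0°); it is not necessarily the authors' exact formulation.

```python
import numpy as np

def wall_angle_deg(normal_a, normal_b):
    """Angle in degrees between two walls, given their unit plane normals.

    The absolute value of the dot product makes the result independent of
    normal orientation; clipping guards arccos against rounding error.
    """
    c = abs(float(np.dot(normal_a, normal_b)))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

# Two nearly parallel vertical walls, 1.5 degrees apart (synthetic data).
n1 = np.array([1.0, 0.0, 0.0])
theta = np.radians(1.5)
n2 = np.array([np.cos(theta), np.sin(theta), 0.0])
print(round(wall_angle_deg(n1, n2), 2))  # prints 1.5
```

Values of α near zero, as in most rows of Table 2, indicate wall pairs that are very close to parallel.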