Sustainable Application of Hybrid Point Cloud and BIM Method for Tracking Construction Progress

Abstract: Compared to the past, the complexity of construction projects has increased as structures have become larger and taller. This has led to many unexpected problems, such as various uncertainties and risk factors, occurring with increasing frequency. Recently, research has been conducted to solve this problem by integrating automated data-collection tools into construction-progress measurement, and most of these methods use spatial sensing technology. This study therefore reviews the representative technologies applied to construction-progress data collection and identifies the unique characteristics of each technology. The basic principle of the progress tracking proposed in this study is its execution through point clouds and the attributes of BIM, which were studied in five stages: (1) acquisition of construction completion data using a point cloud, (2) production of a completed 3D model, (3) interworking of the as-planned BIM model and the as-built model, (4) construction progress tracking via the overlap of the two 3D models, and (5) verification by comparison with actual data. The results confirm that there are no technical limitations to tracking construction progress through point clouds, and that progress data can be collected with a fairly high degree of efficiency and accuracy.


Introduction
Data related to the progress of construction projects are very useful both to determine whether timelines are being kept and to assess the quality of the work done, and these data are essential to improving the productivity of construction management [1,2]. However, the progress of construction projects is currently tracked in various ways, such as scheduling, utilizing construction methods, expenditure management, and resource/quality management, and it is difficult to accurately track and record all of those activities [3].
The information required to measure the progress of a construction project can be classified in two categories. The first one is information related to the plan and design and can be acquired at the end of the design phase. The second one is information related to the current construction progress. The latter type cannot be easily collected, and continuously changes. Unfortunately, for most construction project sites, data acquisition depends on the manual recording of information on paper, and the use of photos and documents causes many constraints in time and space. Automation is considered to be the most economical solution to these data acquisition-related problems [4,5].
The goal of this study is to improve the efficiency and accuracy with which progress data is acquired, as this is important to the overall management of the progress of each construction project.
To achieve this goal, the study considers recent trends with some construction projects and proposes an alternative process for solving the problems related to data collection on project sites. This study conducts verification procedures on various buildings to confirm the validity of the proposed measures as well as to identify methods of post-processing the acquired data. The contents of each of the performance stages presented in this study are shown in Figure 1.

Existing Studies on Automated Progress Data Acquisition
Conventional processes employed to acquire data related to the progress of construction projects are inefficient, both in terms of time and cost, and this has led to many studies in the field of automation technologies [6][7][8][9]. Various mobile IT devices were initially proposed as a way to automate data acquisition because they can transmit information via the Internet. Initial representative studies involved the development of various automated methods for field data acquisition using data acquisition technologies (DATs) such as radio frequency identification (RFID), global positioning systems (GPSs), bar codes, time-lapse cameras, and ultra-wideband (UWB) [5,[10][11][12][13]. These studies generally indicated that the introduction of mobile-based IT could enhance the efficiency with which data are collected from project sites in real time. Nevertheless, in practice, the proposed methods had technical limitations and were therefore not commonly applied to construction projects. Typical problems include the cost of purchasing equipment and software for construction projects and the cost of upgrading hardware for maintenance. Furthermore, it has been confirmed that these methods have not yet moved beyond the conceptual stage in terms of automation and that the usefulness of the information collected has been poorer than that of information collected through other techniques [14].
Photogrammetry is a technique that develops a point cloud model from digital photos in order to acquire data about the progress of a construction project. The work of El-Omari and Moselhi [15] is representative of studies on photogrammetry; there, the amount of work done over a certain time was estimated based on images captured over the corresponding time interval. However, as progress data need to be stored over time, sufficient memory space for data storage must be secured.
Video-based measurement collects progress data by filming the construction project site. This method is effective because sequential video frame data can be extracted [16]. Studies on construction projects that have utilized video-based measurement for data acquisition have focused on civil engineering projects such as roads and bridges, chiefly the damage detection and safety assessment of facilities and the detection of mobile equipment [17][18][19]. However, video-based measurements are affected by many factors, including temperature changes of objects, focus, the data-capture range, and camera resolution.
In 3D laser scanning, laser light is emitted onto an object, and the distance to the object is calculated from the return time of travel of the light. This method is widely used in the engineering field [16]. Representative studies in this area have examined methods for monitoring the progress and interference of mechanical, electrical, and plumbing (MEP) work by comparing two 3D models, or have utilized other methods to create 3D models using actual progress data acquired by LIDAR [20,21]. However, as data acquisition using LIDAR requires the emission of laser light, efficiency decreases if an object has a high reflectance [22]. Besides, the high cost and limited applicability of LIDAR in complex indoor environments are obstacles to its popularity.
Augmented reality (AR) is a combination of various technologies, where virtual images from a computer are added to a real environment [23]. BIM is the representative software used for AR, and it is also applicable to visualization, simulation, information modeling, and safety testing [24,25]. The advantages of AR are that the construction progress and potential defects can be easily determined during the decision-making process, and if necessary, corrections can be made. While AR has been adopted by a large number of studies, there are still many problems related to user convenience, noise, and data interference filtering. Accordingly, practical methods of solving those problems need to be developed. Table 1 shows the characteristics of the data acquisition technologies and is based on elements that should be considered for technical use in measuring the progress in a construction project. In recent years, studies have been conducted to verify the progress by comparing as-built 3D models collected through LIDAR with those produced during the design phase [20,26]. Representative studies in this area have examined the visualization of process rate monitoring through a 4D simulation model conducted in combination with modeling based on laser scanning.
Han and Golparvar-Fard [27] developed a progressive model via laser scanning and studied the construction progress through the BIM. Patraucean et al. [28] also conducted research on the modeling method for the progressive status of a project through the BIM. Meanwhile, Adan et al. [29] focused on the recognition of objects. After segmenting the point clouds corresponding to the walls of a building, a set of candidate objects was detected independently in the color and geometric spaces, and an original consensus procedure integrated both results to infer recognition. In addition, the recognized object was positioned and inserted in the as-is semantically rich 3D model, or BIM model. Wang et al. [30] developed a technique to automatically estimate the dimensions of precast concrete bridge deck panels and create as-built building information modeling (BIM) models to store the real dimensions of the panels. Bueno et al. [31] presented a novel automatic coarse registration method that is an adaptation of as-is 3D point clouds with 3D BIM models. Rebolj et al. [32] proposed methodology including three parameters (minimum local density, minimum local accuracy, and level of scatter) to measure the quality of point cloud data for construction progress tracking. While a recent study investigated the relationship between the quality of point cloud data and the successful identification of building elements, research is still lacking that can identify the required point cloud data quality for each specific application. Therefore, Wang et al. [33] suggested three main future research directions within the scan-to-BIM framework. First, the information requirements for different BIM applications should be identified, and the quantitative relationships between the modelling accuracy or point cloud quality and the reliability of as-is BIM for its intended use should be investigated. 
Second, scan planning techniques should be further studied for cases in which an as-designed BIM does not exist and for UAV-mounted laser scanning. Third, as-is BIM reconstruction techniques should be improved with regard to accuracy, applicability, and level of automation. Puri and Turkan [34] mentioned that future work should focus on multiple larger construction projects that contain elements with complex geometrical shapes.

LIDAR-Based Point Cloud Data
Image scanning is a technique that involves optically reading images and converting them into data, information and objects, and a LIDAR device is a device that supports image scanning [35]. LIDAR emits a laser beam to an object at specific intervals, and expresses the shape of the object in a set of 3D coordinates by using the direction of the reflected laser and the distance measured [36].
Points obtained in this way have 3D X, Y, and Z coordinates, including geoinformation, and each constituent point is formed where the LIDAR laser is reflected from the object. Accordingly, although no geometric information of the object is given, the surface coordinates of the object are included, from which the length, height, and other similar attributes of the object can be acquired (Figure 2). Consequently, a point cloud that includes the geoinformation of an object can provide high-resolution data without distortion using a 3D mesh model. More information can be acquired by modeling the points obtained from each scanning task into a shape. In each scanning iteration, LIDAR can scan only objects in its direct line of sight. If another object lies between the LIDAR and the target, no scanning data are acquired for the occluded surface; where the laser beam does not reach a point from the measurement position, information about that point cannot be determined. In addition, as shown in Figure 3, LIDAR emits its light source radially and thus generates a shadow area. In other words, even if a projection plane is created perpendicular to the scanning direction of the LIDAR, there may be an overlap such as the one shown in the dotted line inside the circle. To prevent such a phenomenon from occurring behind the object to be measured, all information pertaining to the appearance of the object needs to be scanned, which means that an object should be scanned at least twice. LIDAR is classified mainly as contact or noncontact. Contact scanning is a measurement method that attaches a contact sensor called a touch probe to an object; the coordinate measuring machine (CMM) is a representative device.
Nevertheless, because the sensor directly touches the surface of the object, the object may be easily deformed, making the measurement either impossible or time consuming for materials that are prone to deformation [35].
The first principle of noncontact scanning is that 3D coordinates are formed by timing the return of a laser beam emitted to and reflected from an object surface, on the basis of the time-of-flight (TOF) measurement [37]. As this method does not require the sensor to contact the surface of the object, wide areas can be measured at much faster speeds [38]. For horizontal scanning, the TOF measurement device is installed on an axis of rotation and rotated by a certain angle; for vertical scanning, the laser reflection mirror inside the measurement device is moved by a certain angle. The second principle of noncontact scanning is laser-based triangulation, as illustrated in Figure 4. The light reflected from a target object, onto which a line-shaped laser beam is irradiated, is measured at a specific cell of a charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS). In other words, this method restores points obtained by scanning a target object into a 3D plane or figure. The distance between the laser oscillator and the optic sensor is known, and the oscillation angle is also given. Thus, in the triangle formed by the laser oscillator, the optic sensor, and the target object, the lengths of the two unknown sides can be obtained from the known side and two angles. A larger number of points can be measured within a given time period than with the TOF method; however, rotation is needed to scan the whole area. Other methods used to acquire the 3D shapes of objects are shape from shading (SFS) and the structured light system (SLS). SFS restores the 3D shape of an object by illuminating it with light and then measuring the intensity of the reflected light. SLS identifies the outer shape of an object by projecting a light source with a regular pattern onto a target object and using the shape of the reflected pattern.
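The two range principles described above can be sketched numerically. The following is a minimal illustration, not any scanner vendor's implementation: TOF converts the round-trip time of light into a distance, while triangulation solves the emitter-sensor-target triangle from the known baseline and the two base angles via the law of sines.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_s: float) -> float:
    """TOF principle: the beam travels to the surface and back,
    so the range is half the round-trip optical path."""
    return C * round_trip_s / 2.0

def triangulation_depth(baseline_m: float, angle_emitter: float,
                        angle_sensor: float) -> float:
    """Triangulation principle: with the emitter-sensor baseline and the
    two base angles known, the law of sines fixes the remaining sides.
    Returned is the perpendicular distance from the baseline to the target."""
    apex = math.pi - angle_emitter - angle_sensor          # angle at the target
    side_emitter = baseline_m * math.sin(angle_sensor) / math.sin(apex)
    return side_emitter * math.sin(angle_emitter)
```

For example, a round trip of about 66.7 ns corresponds to a 10 m range, and with a 1 m baseline and both base angles at 45 degrees the target lies 0.5 m from the baseline.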
The point cloud data of each scene, obtained by scanning a target object, need to be combined into a single coordinate system in order to measure the object's dimensions, analyze points with nonuniform curvatures, and model shapes. The alignment target is the criterion for the alignment process. Generally, the data of a single scanned scene consist of numerous points, and several hundreds of millions or billions of points result after the alignment process. Accordingly, it takes a long time to align data accurately. Depending on users' demands, however, scanning or alignment time may be prioritized over alignment accuracy, and this time varies according to the alignment method employed. Table 2 presents the characteristics of alignment methods for point cloud data. Cloud-to-cloud alignment does not require any specific target but utilizes particular points in a point cloud. After selecting the model space of the two stations to be aligned, a particular point is picked in the same place, and individual points are selected in a multi-pick mode and aligned. Here, a station is a scanning point, that is, a position at which the laser scanner is set up. When selecting the feature points of the scanned scenes obtained at each station, they need to be magnified as much as possible so that an identical point can be selected and picked, and a fixed point on a nonreflecting material should be chosen. Accurate picking is required because it affects the alignment quality and error rate. When stations are aligned, at least three pairs of identical points are needed between each scan, and the three points should form as large a triangle as possible to minimize the alignment error. Where three or more stations are to be aligned, this alignment process is repeated.
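The requirement of at least three identical point pairs between stations can be illustrated with a least-squares rigid transform. This is a generic sketch (the Kabsch algorithm via SVD), not the algorithm of any particular alignment package:

```python
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Estimate the rotation R and translation t mapping src onto dst from
    n >= 3 corresponding point pairs (each row is an x, y, z coordinate)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # cross-covariance of centered pairs
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Three collinear points would leave a rotation about the line undetermined, which is consistent with the advice above that the three picked points should form as large a triangle as possible.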
Target-to-target alignment utilizes targets to align two scanned scenes and combine them into a single scene. Targets are installed beforehand on the plane or on the bottom, wall, and edges of a target object, and the central points or edges of the targets are used for alignment. Targets must be firmly installed.
Where a shadow area is included in a scene captured by an installed LIDAR, the object needs to be scanned from a different direction so that at least three common points can be recognized between the two scan datasets; in this way, accurate alignment is possible. If a target is installed on ground that may be inclined or uneven, care is required because the alignment software may not recognize the target. In addition, because the apparent size of a target varies with the scanning position, the target needs to be larger when the target object is far from the scanner.
Visual alignment is a manual alignment performed by a user who imports two scanning stations to be aligned into the same space. With this method, after two scanning stations are aligned in the X-axis and Y-axis from the user's perspective, they are also aligned by being moved on the Z-axis and rotated. Visual alignment is most effective for the same or similar features and is also easy for beginners to master.
Cloud-to-cloud alignment and target-to-target alignment, which identify the coordinates of each point and are basically manual operations, are the representative methods for the geometric modeling of scanning data. However, if the scanned object is complex, considerable time and alignment work are required. In such cases, a dedicated reverse engineering program is usually used to automatically extract and align the parts desired by the user. Nevertheless, automatic extraction using reverse engineering software is limited with respect to the shapes it can handle, and shapes are often wrongly recognized, resulting in inaccurate data alignment. For this reason, the user needs to confirm the result of the automatic alignment and manually remove the wrongly extracted parts; in other words, manual modeling is still necessary.

Drone-Based Point Cloud Data
Drone-based photogrammetry can acquire data on large buildings and terrains. As this method is applicable to large areas, it is recognized as an alternative or supplementary approach to conventional measuring devices [39]. With this advantage, drone-based photogrammetry has been used for measurement tasks in diverse fields such as building construction, civil work, cultural property management, disaster prevention, and agriculture [40][41][42]. However, drone-based photogrammetry produces different outputs depending on the weather and the brightness of the photos. Besides, it is difficult to obtain close-up images, and a large relative error tends to occur depending on the skill of the operator and the performance of the equipment. In recent times, numerous studies have aimed to mitigate these disadvantages of drone-based photogrammetry in several ways. The majority of those studies focus on verifying the accuracy of the data and enhancing it to a suitable level. In particular, a marker is used for point matching in order to reduce the error range of drone-based scanning [37]. As shown in Figure 5, drone-based photogrammetry can extract point clouds using various software programs such as Pix4D and Context Capture (Bentley), and it can also capture hardly accessible sites at high altitudes. Thus, this method is being more widely used for data acquisition in the monitoring, management, and inspection of facilities.

Selection of Target Object and Identification of Recognition Rate
This study acquired point cloud data and verified the accuracy of the data obtained using LIDAR, which can scan both the exterior and interior of buildings, and using a drone, which can capture areas inaccessible to managers. This study also examined a method of acquiring usable data through post-processing, and finally determined the accuracy and error of data acquisition according to building shape.
In this study, three buildings were selected for point cloud data acquisition, with data obtained from the framework of those buildings, that is, from columns, girders, beams, and slabs. Building A consisted of two stories and a rooftop; its framework included 12 columns, 20 girders, 23 beams, and 17 slabs. Building B also comprised two stories and a rooftop; its framework included 25 columns, 36 girders, 40 beams, and 62 slabs. Building C consisted of five stories and a rooftop. For Building C, after point cloud data were acquired using a drone, the final data were obtained by image matching. The base data employed for accuracy verification were acquired by comparing the data generated by aligning the point clouds with physical measurements. Table 3 presents details of Buildings A, B, and C, where point cloud data were collected for accuracy verification. This study adopted visual alignment, in which data were visually aligned and rotated in the X, Y, and Z axes of the same space. After the 3D point cloud data of Building A were completely aligned, the recognition rate of data acquisition was determined for the members of the framework, that is, columns, girders, beams, and slabs. When Building A was scanned, the framework had already been completed, but the finishing work had not yet started. For this reason, data for the framework members could be easily acquired, and the scanning conditions were similar to those of real construction sites. Fifty rounds of scanning were carried out, with a total duration of 6 h. To prevent incomplete alignment, the scanning interval was set with an overlap of at least 50-60%. As a result, the recognition rate of the members was 100%, and the point cloud data could be reliably acquired using LIDAR.
Building B was not under construction but had already been completed. However, as special finishing work had not yet been conducted, the members of the framework could be identified, under conditions similar to those of real construction sites. Thirty rounds of scanning were carried out, with a total duration of 3 h. The LIDAR scanning of this building was performed with the acquisition density set to "medium." To prevent incomplete alignment, the scanning interval was set with an overlap of at least 50-60%. Even with the acquisition density set to medium, the recognition rate of the members was 100%. Thus, point cloud data acquisition using LIDAR was found to be reliable.
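The recognition rate reported above is simply the share of planned framework members that could be identified in the aligned cloud. A minimal sketch follows, with the member counts for Building A taken from the text and the function name being our own:

```python
# Framework member counts for Building A, as stated in the text.
PLANNED_A = {"columns": 12, "girders": 20, "beams": 23, "slabs": 17}

def recognition_rate(recognized: dict, planned: dict) -> float:
    """Percentage of planned framework members identified in the point cloud."""
    total = sum(planned.values())
    found = sum(min(recognized.get(kind, 0), count)
                for kind, count in planned.items())
    return 100.0 * found / total

# All 72 members of Building A were recognized, giving a rate of 100%.
```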
Building A was selected to verify the accuracy of object recognition. However, this building had a rooftop that could not be scanned using a terrestrial LIDAR. Therefore, a drone was needed to obtain aerial photos, from which data on the overall external building shape could be aligned. This study utilized a rotary-wing drone in an experiment to acquire point cloud data. The aerial shots obtained using the drone should be as accurate as possible to minimize alignment errors. However, there are limitations to data acquisition using a drone. These limitations pertain to battery life, safety, and GPS technology, making it almost impossible to acquire the high-quality data required by the user. Where accurate engineering data are needed, the quality of the scanned data should be examined against an appropriate criterion before the data are applied. For Buildings A and C, 209 and 134 photos, respectively, were obtained by operating the drone. Point cloud alignment was then carried out using those photos.

Determining Error of Aligned Data
The error of the point cloud data was determined by comparing the measurement data of Buildings A and B with the LIDAR-based alignment models of the scanned data. For the measurement data, the actual distances in each building were measured using a measuring device. For the alignment model data, the distance between point clouds was measured using a software program. The error was measured by comparing the dimensions of the external width, the distance between columns, and the height of the columns, which corresponded to the width, length, and height of the building, respectively. Table 4 presents the errors obtained by comparing measurements and LIDAR scanning results for Buildings A and B. In the case of Building A, the average error values were 0.011 m, 0.012 m, and 0.019 m for the external width, the distance between columns, and the column height, respectively. In the case of Building B, the corresponding average error values were 0.012 m, 0.011 m, and 0.012 m. According to the BIM guide for 3D imaging published by the General Services Administration (GSA) of the USA, the error must not exceed a maximum of 51 mm for urban design projects and a maximum of 13 mm for architectural designs; otherwise, practical accuracy cannot be maintained. In this study, the errors for each item, identified by comparative measurements, were between 11 mm and 19 mm. This result was remarkably close to the 13 mm recommended by the GSA for the application of point cloud data to architectural designs. The distance between the two end points of a target member in the scanned data was measured by mouse picking. As this method implies an unavoidable error, the above errors indicate that very accurate data were acquired in this study.
Consequently, based on the cases of Buildings A and B, the LIDAR-based measurement and alignment of this study is shown to be accurate.
Errors present in the point cloud data obtained using a drone were compared in the same way as in the LIDAR-based error verification. The target was Building A. As the drone could capture only the external building shape, the measurement data of external members were compared with the drone-based alignment model of the scanned data. Table 5 presents the errors between the measurements and the drone-based alignment data for Building A. The average errors for the width, length, and height of the building were 0.378 m, 0.358 m (distance between columns), and 0.072 m (column height), respectively. These values far exceed the 13 mm recommended by the GSA for the application of point cloud data to architectural designs. Such a large gap is attributable to the following intrinsic characteristics of drones. First, because a drone captures a target while flying, it is difficult to acquire accurate data. Second, images obtained by a drone need to be converted to point clouds and then imported into a software program that can measure distance; these steps introduce significant error. Accordingly, this study used the drone-based point cloud data only for the parts for which data could not be acquired using LIDAR.
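The comparisons against the GSA tolerances in the two subsections above reduce to a simple threshold check. A sketch follows, using the tolerance values quoted from the BIM guide for 3D imaging; the dictionary keys are our own labels:

```python
# Maximum allowable error from the GSA BIM guide for 3D imaging, in mm.
GSA_TOLERANCE_MM = {"urban_design": 51.0, "architectural_design": 13.0}

def within_tolerance(error_m: float, application: str) -> bool:
    """True if a measured alignment error meets the GSA limit for the use case."""
    return error_m * 1000.0 <= GSA_TOLERANCE_MM[application]

# LIDAR errors of 0.011-0.019 m sit near the 13 mm architectural limit,
# while the drone's 0.378 m width error exceeds both limits.
```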

Creation of 3D Model of Point Cloud Data for Target Object
Upon verification of the accuracy of the point cloud data acquired by drone- and LIDAR-based scanning, respectively, it was shown that the data obtained by LIDAR scanning had a higher accuracy than those acquired by the drone. However, progress data acquisition is likely to involve inaccessible areas such as rooftops, and there may be risk factors when acquiring data. In such cases, the application of LIDAR to data acquisition may be restricted, resulting in uncertain parts in the alignment of the whole point cloud. In this regard, by combining the datasets acquired by a drone and a LIDAR, the loss of data can be prevented, thus improving the accuracy of progress data for construction sites. As shown in Figure 6, to improve the accuracy of progress data, this study combined the two types of point cloud data. The mixing process can be summarized as follows.

1. As the two types of point cloud data acquired by LIDAR and by a drone have different file formats, their file attributes need to be unified before the data are mixed. In this study, the data acquired by the drone were in the p4d format and were converted to xyz coordinates, which indicate GPS coordinates, in order to be combined with the LIDAR-based data.
2. As the drone-based data converted to xyz coordinates had fixed coordinates, automatic alignment was possible without any additional alignment tasks. Accordingly, when the files were imported into the program for LIDAR, the alignment was completed automatically.
3. As the drone-based data were acquired by aerial shots, they included not only the target object but also the surrounding area. Therefore, the noise was removed to obtain only the necessary part.
4. From the fully imported drone-based data, only the part usable for the mixed data was selected, and the remaining parts were removed.
5. The final mixed data were completed by conducting cloud-to-cloud alignment between the selected drone-based data and the LIDAR-based data.
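The five mixing steps above can be sketched in code. This is a schematic outline under the assumption that both datasets have already been exported to plain ASCII xyz files; the function names and the crop box are hypothetical:

```python
import numpy as np

def load_xyz(path: str) -> np.ndarray:
    """Step 1: read a converted ASCII .xyz cloud, one 'x y z' triple per line."""
    return np.loadtxt(path, usecols=(0, 1, 2), ndmin=2)

def crop_to_box(points: np.ndarray, lo, hi) -> np.ndarray:
    """Steps 3-4: discard noise and the surrounding area by keeping only
    points inside an axis-aligned box around the target building."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    keep = np.all((points >= lo) & (points <= hi), axis=1)
    return points[keep]

def merge_clouds(lidar: np.ndarray, drone: np.ndarray) -> np.ndarray:
    """Step 5: once cloud-to-cloud alignment has placed both datasets in
    the same frame, mixing reduces to concatenating the point lists."""
    return np.vstack([lidar, drone])
```

Step 2 (automatic alignment of the fixed GPS coordinates) happens inside the LIDAR processing program and is therefore not reproduced here.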

3D Polygon Mesh Modeling
Delaunay triangulation (DT) and the Voronoi diagram (VD) are the basic concepts behind 3D polygon mesh modeling. DT connects points on a plane into triangles that divide the space such that the minimum interior angle is maximized, and the circumcircle of any triangle contains no point other than the triangle's three vertices. In other words, among the various possible triangulations, DT is the division in which each triangle is as close as possible to an equilateral triangle. Meanwhile, a VD divides a plane containing a set of points into polygons, one per point. Each pair of adjacent generating points is connected by a line segment, and the perpendicular bisector of that segment is drawn; these perpendicular bisectors partition the plane into polygons. DT and VD are in a dual relationship, and if one is known, the other can immediately be obtained. Figure 7a shows the VD and DT of the same set of points. The VD is created by sequentially linking the circumcenters of the DT triangles that share a generating point as a common vertex, and by linking the generating points of adjacent VD regions, the DT can be recovered. For 3D stereoscopic modeling from 3D point clouds, DT allows a polygon mesh to be obtained from a collection of surface points. Triangulation in 3D is called tetrahedralization or tetrahedrization [43]: the partitioning of the input domain into a collection of tetrahedra that meet only at shared faces (vertices, edges, or triangles). Polygons are typically ideal for accurately representing the results of measurements, providing an optimal surface description. However, the results of tetrahedralization are much more complicated than those of a 2D triangulation. Therefore, this study was performed using a commercial modeling software package.
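The duality described above, in which each Voronoi vertex is the circumcenter of a Delaunay triangle, can be checked with a short circumcenter computation. This is a pure-numpy sketch; a library such as scipy.spatial would supply the triangulation itself:

```python
import numpy as np

def circumcenter(a, b, c) -> np.ndarray:
    """Circumcenter of the 2D triangle abc: the intersection of the
    perpendicular bisectors of its sides. In the DT/VD duality, this point
    is the Voronoi vertex shared by the cells of generating points a, b, c."""
    a, b, c = (np.asarray(p, float) for p in (a, b, c))
    d = 2.0 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    ux = ((a @ a) * (b[1] - c[1]) + (b @ b) * (c[1] - a[1]) + (c @ c) * (a[1] - b[1])) / d
    uy = ((a @ a) * (c[0] - b[0]) + (b @ b) * (a[0] - c[0]) + (c @ c) * (b[0] - a[0])) / d
    return np.array([ux, uy])
```

The circumcenter is equidistant from all three vertices, which is exactly the defining property of a Voronoi vertex lying on the boundary of three cells.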
The Leica Cyclone platform was used for 3D point cloud data visualization and processing, and the Leica 3D Reshaper platform was used for polygon mesh model generation (Figure 7b,c). Mixed point cloud data can be configured into a 3D model using a modeling process, which generates a polygon from the outline of the point cloud. After the polygon model of each member is generated, the final 3D model is completed by an editing process. However, this modeling method cannot reflect the details of the acquired data. Construction projects usually include the installation of molds and the casting of concrete, which may cause some errors or bent surfaces that were not originally planned. Manual 3D modeling sets the surfaces of each member and allocates heights in the form of straight lines. Accordingly, detailed deviations such as a small slope or bends on a target surface cannot be modeled. Nevertheless, such deviations can be detected by comparison with the actual plan. Figure 8 illustrates a representative process of 3D modeling for a completely aligned point cloud.

Determination of Errors in the Created 3D Model
The acquisition of accurate data is the most essential part of reverse engineering based on progress data acquired from a construction site. In the process proposed in this study, accuracy rests on the 3D shapes of buildings; accordingly, the errors in the 3D shape of a target building must be identified. This study therefore evaluated the shape of the created 3D model by comparing its volume with actual data, using the amount of concrete poured during construction as the reference. Table 6 presents the locations, dates, and volumes (m3) of concrete poured in Building A. Concrete was poured 6 times, for a total volume of 522 m3. In the case of Building A, the initial data acquired by LIDAR were limited to the above-ground part, and backfilled parts such as the sub-slab concrete and foundation were excluded; in other words, the 3D model was generated from point cloud data for the above-ground part only. Accordingly, the comparison was made against 468 m3 of poured concrete, comprising the PIT, 1F, 2F, protective concrete, and rooftop.
As with other commercial software programs, the software that automatically creates a 3D model from an imported point cloud allows the length, area, and volume of each object to be measured. The volume of the 3D model of Building A was measured to be 479 m3. When the actual data were compared with the volume of the 3D model, the difference was 11 m3, corresponding to an error of less than 3%. Thus, the 3D model showed relatively little error relative to the volume derived from the actual data, demonstrating high accuracy.
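The error check reported above reduces to a simple relative-difference computation, shown here with the two volumes stated in the text (479 m3 from the as-built model, 468 m3 of above-ground poured concrete):

```python
# Relative error between the as-built model volume and the reference
# volume of above-ground poured concrete (values from the text).
as_built_model_m3 = 479.0   # volume measured from the 3D model
poured_concrete_m3 = 468.0  # PIT, 1F, 2F, protective concrete, rooftop

diff = as_built_model_m3 - poured_concrete_m3
error_pct = 100.0 * diff / poured_concrete_m3
print(f"difference: {diff:.0f} m^3 ({error_pct:.1f}%)")
# -> difference: 11 m^3 (2.4%)
assert error_pct < 3.0
```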

Visualization of Construction Progress
In order to track the progress of a project, the current status should be compared with the planned status. This study examined an overlap-based method of comparing the BIM model, which provides the as-planned data of a project, with the point cloud-based 3D model, which shows the as-built data. For the comparison, the point cloud-based 3D model must be imported into the BIM model. However, because the two 3D models were implemented on different software bases, file conversion is required before the data can be imported. Figure 9 shows the process of comparing the two models.
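Once the two models share a coordinate system, the overlap comparison amounts to measuring, for each as-built point, its deviation from the as-planned surface. The sketch below assumes the as-planned BIM surface has been sampled as a point set and that registration has already been done; both clouds are synthetic, with 1 cm of simulated construction noise.

```python
# Hedged sketch of an as-built vs. as-planned deviation check:
# nearest-neighbor distances from as-built points to a sampled
# as-planned (BIM) surface, after registration.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
as_planned = rng.uniform(0, 5, size=(1000, 3))           # sampled BIM surface (m)
as_built = as_planned + rng.normal(0, 0.01, (1000, 3))   # ~1 cm construction noise

tree = cKDTree(as_planned)
dist, _ = tree.query(as_built)   # per-point deviation (m)

print(f"mean deviation: {dist.mean() * 1000:.1f} mm, "
      f"95th percentile: {np.percentile(dist, 95) * 1000:.1f} mm")
```

Coloring the as-built points by `dist` is one simple way to produce the kind of overlap visualization the text describes, highlighting members that deviate from plan or have not yet been built.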

Conclusions
This study proposed methods that can be used to track the progress of construction projects, and each of the proposed methods was verified. For data acquisition, drone- and LIDAR-based point cloud acquisition methods were examined, and the accuracy of the data was verified through application to actual construction projects. LIDAR-based point cloud data had errors of roughly 11-19 mm, indicating a high level of accuracy, whereas the drone-based data showed considerably lower accuracy. Because the progress data are based on the 3D shapes of buildings, errors in the 3D shapes were also examined. In the case of Building A, the 3D model based on point cloud data differed from the actual data by 11 m3, a difference of less than 3%, thereby demonstrating a low error rate.
In order to track the progress of a project, the current status should be compared with the planned status. The overlapping method proposed in this study for the BIM model and the point cloud-based 3D model enabled the actual progress to be visualized and compared against the corresponding plan. It is therefore expected to allow project managers to track project progress more easily and to pinpoint the precise status wherever progress has not proceeded as planned. This supports progress management through the establishment of future construction plans and the review of schedules. Various reports and related data derived from the visualized three-dimensional models should also be of great help to project participants and stakeholders. All additional accumulated data could likewise serve as a basis for the maintenance phase after the end of the project, or for similar projects in the future.
Based on the results obtained, the data acquisition method proposed in this study appears to be very efficient and can enable project managers to assess progress and manage projects comprehensively. In particular, because decisions can be made quickly on the basis of rapid information delivery, workers' errors and the accompanying need for rework can be prevented, leading to reductions in time and cost overruns. However, this study also showed that errors and omissions in the alignment of point cloud data caused poor-quality alignment. The representative causes were the reflectivity of the surfaces struck by the laser, the measurement distance, and the atmospheric environment; the laser path was also problematic. If a problematic section can be omitted, or treated as an independent section that does not need to be aligned with others, the problem may be trivial. However, if it is an essential section that interfaces with other sections, the problem must be resolved.