Review

A Review on UAV-Based Remote Sensing Technologies for Construction and Civil Applications

1 Department of Engineering, East Carolina University, Greenville, NC 27858, USA
2 Department of Construction Management, East Carolina University, Greenville, NC 27858, USA
* Author to whom correspondence should be addressed.
Drones 2022, 6(5), 117; https://doi.org/10.3390/drones6050117
Submission received: 11 April 2022 / Revised: 25 April 2022 / Accepted: 28 April 2022 / Published: 6 May 2022
(This article belongs to the Special Issue Application of UAS in Construction)

Abstract: UAV-based technologies are evolving and improving at a rapid pace. The abundance of solutions and systems available today can make it difficult to identify the best option for construction and civil projects. The purpose of this literature review is to examine the benefits and limitations of UAV-based sensing systems in the context of construction management and civil engineering, with a focus on camera-based and laser-based systems. The risk factors associated with UAV operations at construction sites are also considered.

1. Introduction

The construction industry is one of the largest industries in the world, with about USD 10 trillion spent on construction-related activities every year. With the rapid growth of the industry, construction sites and tasks are becoming more complex and diverse than before. There is a strong demand for automation and intelligent technologies [1] to improve operational efficiency, reduce project costs, and, most importantly, ensure the safety of construction workers and infrastructure. More and more cutting-edge technologies are being introduced and put into practice, improving the sustainability of the civil and construction industry. Among these emerging technologies, one of the most promising and widely adopted for improving construction and infrastructure sustainability is the unmanned aerial vehicle (UAV). Because of their inherent advantages, including accessibility, high efficiency, and reasonable cost, UAVs are a reliable partner for addressing practical challenges and have been deployed in many different areas. To better serve practical applications, UAVs have been integrated with various types of navigation, sensing, and monitoring systems. In the civil and construction industry, UAV-based sensors are used to conduct multiple types of tasks, including construction site monitoring, infrastructure assessment, and surface and volume measurements. The sensor data collected in these tasks are usually integrated and analyzed with computer software. In this paper, we summarize the literature on UAV-based sensing applications in the civil and construction industry, covering sensing technology types, data product integration, data quality and error models, practical application cases, and safety-related issues.

2. UAV-Based Sensing Systems

Various types of sensing systems have been integrated with UAVs to conduct different types of tasks. The most commonly used sensors include HD cameras, light detection and ranging (LIDAR), infrared cameras, and other imaging/ranging systems. In this paper, we focus on two types of sensing systems integrated with UAVs that can be used in the civil and construction industry: (1) unmanned aerial vehicle (UAV)-based photogrammetry and (2) LIDAR systems.

2.1. UAV-Based Photogrammetry

UAV-based photogrammetry is primarily based on imagery collected with small onboard cameras. It typically requires ground control points (GCPs) with surveyed locations and can benefit from the recorded location and orientation of the camera. A 3D point cloud of the target area can be estimated via direct or indirect geo-referencing. Indirect geo-referencing refers to methods that assign world-frame coordinates to 3D measurements collected in a relative local reference frame. One of the most widely used UAV-based geo-referencing solutions is structure from motion (SFM). It has been proven to be superior to conventional handheld surveying methods in certain environments: projects with low vegetation, stable GPS availability, and substantial sunlight [2].
Multiple 2D images over the same area are combined, and point features are matched across them. These images are expected to have large overlap (~80%). The 3D locations of these points are then estimated in the camera frame and used to form a 3D model or point cloud. However, the camera pose (position and orientation) is not always precisely known in a world frame (a GPS frame, for example) when a small commercial UAV is used. Therefore, the 3D model created with SFM on a small UAV is typically dimensionless and cannot be directly geo-referenced; it requires additional GCPs to relate it back to the world frame. The absolute accuracy of this model depends on both the image processing quality and the GCPs.
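To make indirect geo-referencing concrete, a dimensionless SFM model can be tied to the world frame with a seven-parameter similarity transform (scale, rotation, translation) estimated from GCP correspondences. Below is a minimal sketch using the closed-form Umeyama solution; the GCP coordinates are hypothetical, and production SFM software solves this jointly with bundle adjustment rather than as a separate step.

```python
import numpy as np

def similarity_transform(model_pts, world_pts):
    """Closed-form (Umeyama) estimate of scale s, rotation R, and translation t
    such that world ~ s * R @ model + t, from matched 3D point pairs."""
    mu_m, mu_w = model_pts.mean(axis=0), world_pts.mean(axis=0)
    dm, dw = model_pts - mu_m, world_pts - mu_w
    C = dw.T @ dm / len(model_pts)              # world-model cross-covariance
    U, S, Vt = np.linalg.svd(C)
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        D[2, 2] = -1.0                          # guard against a reflection
    R = U @ D @ Vt
    var_m = (dm ** 2).sum() / len(model_pts)    # variance of the model points
    s = np.trace(np.diag(S) @ D) / var_m
    t = mu_w - s * R @ mu_m
    return s, R, t

# Hypothetical GCPs: coordinates in the relative SFM model frame vs. surveyed
# world-frame coordinates (here the model is simply scaled by 2 and shifted).
model = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0.1], [1, 1, 0]])
world = np.array([[10.0, 5, 2], [12, 5, 2], [10, 7, 2.2], [12, 7, 2]])
s, R, t = similarity_transform(model, world)
cloud_world = (s * (R @ model.T)).T + t         # geo-reference the whole cloud
```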
Some customized and commercial off-the-shelf UAVs are capable of recording the camera location and/or orientation for each image taken during a flight. In that case, camera-based direct geo-referencing is possible. It can be achieved by raytracing from a single image to a known surface, such as a digital elevation model (DEM) or another a priori terrain model; by triangulation from multiple overlapping images; or by a combination of both. Since no ground control is necessary for this method, the accuracy of 3D modeling is primarily determined by the accuracy of camera timing, orientation, and location. However, a small UAV that cannot carry a high-quality navigation sensor cannot support this approach, so direct geo-referencing is not yet common on low-cost small UAVs.
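As a toy illustration of the raytracing variant, the sketch below intersects a pixel's viewing ray with a flat terrain plane, i.e., a degenerate DEM at one known elevation. All camera parameters and the pose are placeholder values; a real pipeline traces against a full DEM and a calibrated lens model.

```python
import numpy as np

def pixel_to_ground(u, v, K, R_cam_to_world, cam_pos, ground_z):
    """Intersect the viewing ray of pixel (u, v) with the plane z = ground_z.
    K: 3x3 intrinsics; R_cam_to_world and cam_pos: camera pose from the
    onboard navigation solution; returns the 3D ground point."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in the camera frame
    d = R_cam_to_world @ ray_cam                        # ray in the world frame
    if abs(d[2]) < 1e-9:
        raise ValueError("ray parallel to the ground plane")
    lam = (ground_z - cam_pos[2]) / d[2]                # scale along the ray
    if lam <= 0:
        raise ValueError("ground plane behind the camera")
    return cam_pos + lam * d

# Hypothetical nadir-looking camera 50 m above flat ground.
K = np.array([[1000.0, 0, 640], [0, 1000, 360], [0, 0, 1]])
R = np.diag([1.0, -1.0, -1.0])                  # camera optical axis points down
print(pixel_to_ground(700, 360, K, R, np.array([0.0, 0, 50]), 0.0))
# -> [3. 0. 0.]: a pixel 60 columns right of center maps to a point 3 m away
```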
SFM does not require an a priori position and orientation of the camera or a complete camera calibration model. In fact, these can be estimated as part of the SFM output. However, if an a priori estimate of these items is available, it can be incorporated into the SFM software to further improve the quality of the data product. The core algorithm in SFM is typically based on bundle adjustment, which is essentially a triangulation process using multiple images. Triangulation is a key component in both SFM and direct geo-referencing systems. Although there are several specialized software solutions for triangulation, it is usually included as part of a commercial SFM software solution today. A good review of the bundle adjustment algorithm can be found in [3].
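At its core, triangulating one point from two calibrated views is a small linear problem. A minimal direct linear transform (DLT) sketch follows; the projection matrices and pixel measurements are illustrative, and bundle adjustment then refines all camera poses and 3D points jointly by minimizing reprojection error over many such observations.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point observed in two images.
    P1, P2: 3x4 projection matrices K [R | t]; x1, x2: pixel coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)         # null-space solution of A X = 0
    X = Vt[-1]
    return X[:3] / X[3]                 # homogeneous -> Euclidean

# Two hypothetical cameras with a 1 m baseline along x.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
print(triangulate(P1, P2, (320, 240), (240, 240)))   # -> [0. 0. 10.]
```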

2.2. UAV-Based LIDAR System

Alternatively, camera systems can be combined with, or replaced by, a direct ranging sensor, such as a UAV LIDAR system, on larger UAVs. The LIDAR senses the distance to a point in the 3D world based on the return of a laser beam. Since the beam is emitted in a known direction specified in the LIDAR body frame, the position of this point is directly measured in that frame. LIDARs are less sensitive to natural light conditions and may provide measurements in operational conditions that prohibit camera operation (such as low light). An airborne LIDAR directly measures the point cloud in the sensor frame, instead of the world frame. The point cloud is transformed into the world frame using the precise location and orientation of the LIDAR. Much like camera-based direct geo-referencing, airborne LIDAR point cloud accuracy is sensitive to timing/synchronization, LIDAR orientation, and location. Furthermore, airborne LIDAR sensors available today are generally more expensive, more power-hungry, and heavier than cameras.
An airborne or UAV LIDAR system typically includes three types of sensors: a ranging sensor (2D scanning LIDAR, 3D scanning LIDAR, or 3D imager); a positioning sensor, such as a Global Positioning System (GPS) or Global Navigation Satellite System (GNSS) receiver; and an inertial sensor that measures acceleration and rotation. These three sensors are integrated in both the data collection system and the 3D modeling procedure. The GNSS and inertial sensors are typically coupled together to provide a precise and smooth pose (position and orientation) and velocity of the LIDAR. It is good practice for the GPS/GNSS to also handle accurate time tagging and synchronization of the other sensors.
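The geo-referencing of every LIDAR return can be summarized as one transform chain: rotate the body-frame measurement by the boresight, add the lever arm, then apply the GNSS/INS pose interpolated at the return's timestamp. A hedged sketch of that chain (all calibration values and poses below are placeholders):

```python
import numpy as np

def georeference_point(p_lidar, R_boresight, lever_arm, R_body_to_world, body_pos):
    """World-frame position of one LIDAR return:
    p_world = R_body_to_world @ (R_boresight @ p_lidar + lever_arm) + body_pos.
    R_body_to_world and body_pos come from the coupled GNSS/INS solution,
    interpolated at the exact timestamp of this return."""
    p_body = R_boresight @ p_lidar + lever_arm
    return R_body_to_world @ p_body + body_pos

# Illustrative values only: level flight, a 10 m return to the side.
p_world = georeference_point(
    p_lidar=np.array([0.0, 10.0, 0.0]),
    R_boresight=np.eye(3),                 # assume a perfect boresight calibration
    lever_arm=np.array([0.0, 0.0, -0.2]),  # LIDAR mounted 20 cm below the IMU
    R_body_to_world=np.eye(3),
    body_pos=np.array([500.0, 300.0, 80.0]))
```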

3. Sensor Data Product Integration

Technologies in remote sensing, computer vision and image processing software, computational hardware, navigation, and robotics have all been developed at a rapid pace in the last few years. This section focuses on the integration, visualization, and applications of a sensor data product.

3.1. Photogrammetry Data Processing

The principles of SFM and its estimation algorithms have not changed much in the last few decades. However, high-quality cameras and sensors have become more suitable for small UAVs as they have become cheaper, smaller, lighter, and less power-hungry. SFM software and computation hardware have improved as well, and there are more commercial software choices and more powerful hardware available today. Commercial software is available from, for example, Agisoft [4], Trimble [5], and Pix4D [6], and open-source software such as CMVS [7] has also been used in scientific communities. A more complete list is discussed below. As aforementioned, modern SFM software can take known calibration, position, or orientation as inputs. However, if only inaccurate position and/or orientation are available from low-quality navigation sensors, 3D point clouds can still be optimized in the SFM software based on known error models of these measurements. Furthermore, for UAVs that have a precise location, through real-time kinematic (RTK), post-processed kinematic (PPK), or post-processed precise point positioning (PPP) solutions, but no orientation, SFM can also be used to estimate the 3D point cloud. PPP is a post-processed GNSS positioning method that, unlike RTK and PPK, does not require a local reference station, although it can be less accurate. In the presence of a precise camera position, SFM can be accomplished with fewer GCPs.
Although the principle of SFM has been well known for decades [3], efficient and robust commercial solutions took years of development in image processing software and computational hardware. Several well-known software packages available today can register images to each other and/or produce a dense 3D point cloud; von Übel [8] provided the list of software developers shown in Table 1.
Taking Agisoft, one of the most popular software products, as an example, its main processing steps are listed below, retrieved from [9].
  • Point feature detection and matching. The software detects point features in the images that are stable and repeatable. It then generates a descriptor for each point based on its appearance, often a small section of the image centered on the point. The descriptors of all the points are then matched to detect correspondences across the images. This is similar to the well-known scale-invariant feature transform (SIFT) approach [10]; a minimal sketch of this step follows the list.
  • Solving for camera intrinsic and extrinsic parameters. Agisoft uses a greedy algorithm to find approximate camera parameters and refines them with bundle adjustment. The camera/lens model is intrinsic, for example, while camera orientation is extrinsic; both types of parameters can be estimated in bundle adjustment.
  • Dense reconstruction. Different processing algorithms are available at this step to create a dense point cloud based on all the involved images. The point cloud will be treated as a surface at this stage.
  • Texture mapping. As the last step, the software models a surface by possibly cutting it into smaller pieces, and assigns color and texture extracted from images to the surface.
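The feature detection and matching step mentioned above can be illustrated with off-the-shelf tools. A minimal sketch using OpenCV's SIFT and Lowe's ratio test follows; Agisoft's actual detector is proprietary, so this is only an analogous workflow, and the image file names are placeholders.

```python
import cv2

# Load two overlapping aerial frames (hypothetical file names).
img1 = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_002.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # keypoints + 128-d descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to keep only unambiguous pairs.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} candidate correspondences between the two frames")
```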

3.2. LIDAR Data Processing

Some of the developers of the SFM packages also offer software for point cloud processing. In addition, LIDAR manufacturers such as Leica have their own software solutions [11]. There are fewer options for autonomously registering images to a 3D point cloud and for comparing or merging different sets of clouds. Examples include Autodesk [12], MeshLab [13], and CloudCompare [14].
There are a few challenges in registering images with a UAV LIDAR 3D point cloud: (1) The point cloud and imagery are both considered an ‘unstructured scan’ in [12]; manual input is often needed in registration with today’s software solutions. (2) The high noise level on each point in the point cloud makes it difficult to register a ‘free form’ object [15]. (3) Some UAV LIDARs are not able to produce a 3D ‘image’. A direct 3D image requires image-like intensity values measured in LIDAR returns, in addition to distance, and a very dense 3D point cloud. One or both of these features are unavailable in low-cost UAV LIDARs today, although LIDARs with these features have recently become more affordable. As a result, image-based and intensity-based autonomous registration methods such as in [16] are not applicable.

4. Error Models of UAV-Based Sensing

One of the most important factors to consider when using a UAV-based monitoring system is the accuracy of the point cloud product.

4.1. Photogrammetry Errors

The errors modeled in [17] included camera/lens calibration errors; motion blurriness; the altitude, pattern, and stability of flight; image overlap; and the distribution of GCPs. The main discoveries and arguments from this work include the following:
(1) Camera calibration can be estimated as part of SFM (self-calibration). However, a pre-calibrated camera/lens may be more convenient and robust. Other parameters, such as shutter speed, lens aperture, and ISO, also have a considerable impact on image quality.
(2) Small UAVs are often sensitive to wind and airframe vibration. Even mild wind during data acquisition can cause an offset in the camera pointing direction and, eventually, insufficient image overlap. Vibration can increase blurriness. Furthermore, light conditions during image acquisition add to the complexity. To compensate for low light conditions, a lower shutter speed or higher ISO is used; a lower shutter speed increases motion blur, while a higher ISO increases noise. In most target applications, a larger area of interest will probably need multiple flight acquisitions. Appearance changes, such as changes in shadows, pose another problem.
(3) The impact of flight altitude on accuracy is a little more complex. Flight altitude changes the distance, image footprint, image overlap, and geometry (slope) of the object. Errors tend to increase with distance and a steeper slope in SFM. Imaging the object from a steeper slope limits the variety of perspectives (view angles); since SFM benefits from imagery from multiple perspectives, vertical accuracy decreases due to the poor geometry. Examples that quantify the findings above can be found in [17].
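One part of the altitude effect is straightforward to quantify: the ground sample distance (GSD) of a nadir image grows linearly with flight height. A quick sketch (the lens and pixel-pitch values are illustrative, not tied to any sensor in the cited work):

```python
def ground_sample_distance_cm(altitude_m, focal_mm, pixel_pitch_um):
    """Footprint of one pixel on flat ground for a nadir-looking camera:
    GSD = altitude * pixel pitch / focal length."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3) * 100.0

# Illustrative 24 mm lens with a 4 um pixel pitch:
for h in (30, 60, 120):
    print(f"{h:4d} m -> GSD = {ground_sample_distance_cm(h, 24.0, 4.0):.2f} cm")
# 30 m -> 0.50 cm, 60 m -> 1.00 cm, 120 m -> 2.00 cm per pixel
```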
It was also noted in [18] that images did not need to be acquired from the same distance or have the same scale. The authors argued that it was better to acquire multi-scale image sets: high-altitude, large-scale images could initially capture the whole site with fewer frames, while closer images could capture the desired detail at the required resolution and precision. This is particularly important when capturing areas of detail that are physically obscured by occlusions. Ref. [18] also gave specific advice to reduce UAV SFM errors: (1) plan the mission in advance; (2) capture the whole area first before focusing on the details, but ensure that occlusions are captured adequately; (3) ensure appropriate coverage and overlap so that every point on the subject appears in at least three images acquired from spatially different locations; (4) keep the scene static (no moving objects) and the light conditions consistent; (5) avoid overexposed, underexposed, and blurred images (normally arising from slow shutter speed and/or camera movement), and avoid transparent, reflective, or homogeneous surfaces.
Users of SFM software are typically advised to place GCPs throughout the target site, both on the edges of the worksite and in the center [19]. The locations of GCPs can be surveyed using GNSS-based RTK or PPK solutions [20], total station surveys, or TLS scans [21]. A PPK survey typically has positioning errors around 1 cm (1 sigma). However, to achieve centimeter-level accuracy in the point cloud, the user is required to place up to 40 GCPs per square kilometer [2].
Ref. [22] provided a systematic overview of point cloud accuracy involving GCPs. With a sufficient number of GCPs (more than 2 GCPs per 100 images, as specified in this work), the error of the point cloud could approach double that of the GCPs. If fewer GCPs were used, this paper reported that the point cloud error could be as high as 4–8 times the GCP error, which was still in the centimeter range. Vertical errors were approximately 2.5 times the error of the horizontal components. It was also suggested that GCPs should be evenly distributed around the whole area of interest, ideally in a triangular mesh grid. For a larger project, denser GCPs were needed to achieve the same accuracy, probably because of systematic errors in SFM, which tend to grow with distance and area.
The goal of the GCP placement strategy is to minimize the distance from the point cloud to any GCP. In many scenarios or applications, it is not possible to place GCPs with this strategy. Ref. [22] also recommended the use of (1) pre-calibrated cameras rather than the self-calibration; (2) mixing different altitude flights; (3) various degrees of image convergence; and (4) known positional and orientation parameters.
The onboard pose error for direct geo-referencing was also considered in [17]. If small UAVs can carry high-quality GNSS receivers, they may be capable of RTK on the fly or of recording data for post-processing. A post-processed position through PPK or PPP could be used to help improve the accuracy with limited GCPs. It is noted that RTK and PPK can both produce centimeter-level accuracy [19]. PPK would be more accurate than RTK, but less accurate than using GCPs, especially in the vertical direction. In [23], the authors further compared PPK with PPP. Since PPP does not need an additional local reference GNSS receiver, it is more convenient and flexible. However, it was found that PPP produced worse accuracy in the vertical direction than RTK (10 cm error reported for PPP). Further, RTK requires a live data link between a reference station and the airborne receiver, which is not always possible or necessary.
Although the approaches above claimed that GCPs were not necessary if PPK positions were available for the cameras, it would be challenging to directly register the point cloud with just PPK. SFM with PPK can produce precise point clouds only in the camera body frame. Since the PPK position does not directly solve the orientation of the camera or the point cloud, an additional step is needed to align the point cloud in the correct direction.
Therefore, it is more practical to use a few GCPs even with PPK. Ref. [24] showed that a PPK–SFM workflow could provide a consistent, repeatable point cloud over time, with an accuracy of a few centimeters. A vertical bias could be corrected using one single GCP. The results were used to estimate centimeter-level topographical change detection. PPK–SFM could achieve very high spatial and temporal resolution accurately and quickly.
As the main manufacturer of commercial small UAVs, DJI has stated similar conclusions [2]. Its new UAVs support both RTK and PPK solutions. Although this could potentially reduce the required number of GCPs to zero, DJI refers to the use of ‘fewer GCPs’ and a reduction in GCP set-up time.

4.2. LIDAR and Direct Geo-Referencing Errors

Although a UAV LIDAR may have lower sensor quality than a more capable airborne laser scanner (ALS), both follow the same principle for measurements. The existing error analysis approach of ALS is based on direct geo-referencing and is largely applicable to UAV LIDAR. LIDAR measurement error, navigation and timing error, and modeling error can all contribute to the error in the LIDAR point cloud.
In this review, ‘LIDAR measurement error’ refers to the single point position error in the body frame. It is dependent on the beam width (or divergence), the reflecting surface, and the angular and range measurement made with the laser beam [25]. Beam divergence and the possible uncertainty in the scan angle are both considered angular errors in the LIDAR, whereas the reflecting surface and the measurement noise both contribute to the ranging error along the laser beam. In [26], the angular and ranging errors are both modeled as random processes. The magnitude of these errors depends on the LIDAR manufacturer. In a downward-looking laser beam, ranging error primarily contributes to the vertical position error. In practice, ranging error could also have a systematic component, such as a bias. It needs to be calibrated or bounded.
Some LIDARs are designed with narrow beams (one or several milliradians; 1 milliradian is approximately 0.06 degrees) to minimize this uncertainty, such as in [27]. Others believe that a wider beam (~10 milliradians) is more robust for a UAV LIDAR, such as in [28]. With multiple returns measured on the same beam, a wide beam may still get returns from the target or the ground after hitting occlusions such as dust, rain, and other objects [28]. Therefore, it has the potential to measure the distance to targets and the ground in a harsh environment.
The small angular error in LIDAR scales with the distance to the ground and contributes to the horizontal position error of a downward-looking laser beam. However, since the laser beam hits a sloped object at a slant angle even with a downward-looking LIDAR, the angular error also contributes to the vertical uncertainty.
The position of laser return points in the LIDAR body frame cannot be directly used in a 3D model if the LIDAR is mobile or airborne. The absolute position and orientation of the LIDAR itself in the global world frame need to be accurately measured, which should be synchronized with the measurement time of each point in the LIDAR point cloud.
The position of the LIDAR is not directly measured. Instead, it is inferred from the location of the GPS/GNSS antenna measured with RTK, PPK, or PPP. The accuracy of RTK, PPK, or PPP was discussed in the previous section and is in the range of 1 cm to 10 cm. It must be noted that the navigation system used for UAV LIDAR should be a GPS/GNSS receiver tightly coupled with the onboard inertial measurement unit, since the LIDAR system must be accompanied by accurate orientation measurements. The post-processed solution with integrated GNSS and an inertial measurement unit can be less noisy than PPK or PPP alone. The typical error magnitude is 1 cm horizontal and 2 cm vertical in [29] or, similarly, 2–5 cm in [30]. The actual values are sensor specific.
The antenna position is combined with the lever arm between the antenna and the LIDAR center of measurement to compute the LIDAR position. Any error in the lever arm, typically at the millimeter level, becomes a bias in the point cloud.
Similarly, the navigation system measures the orientation of the UAV in the world frame. It is transferred into the LIDAR orientation via known boresighting of the LIDAR. Boresighting errors can be calibrated, and any residual error will contribute to the angular errors discussed below. Ref. [31] showed that successful calibration could reduce error magnitude down to the centimeter level.
The navigation system can be very accurate at measuring roll and pitch angles: typical values are much lower than 1 degree (0.008 degrees [29] or 0.015 degrees [30]). The actual values are also sensor specific.
However, the reported true heading accuracy for these sensors can be overly optimistic and misleading. The nominal accuracy, typically close to 0.1 degrees, is achieved only after maneuvers of the UAV and fine alignment of the heading. Such maneuvers may not always be possible for small UAVs with a short flight time, or in the operational environment of a small urban worksite. Without them, the heading is initialized by vehicle velocity, gyro-compassing, compassing, or manual input, which has an accuracy of a few degrees, as reported in [32]. These heading accuracy levels are applicable to most high-end navigation systems that can fit on a small UAV.
A true heading error of a few degrees is a major concern for UAV LIDAR, although it is not a big issue for SFM. As discussed above, the SFM point cloud is calculated from overlapping images. The points from SFM are precisely located with respect to each other within the camera body frame, and the relative precision does not depend on the absolute orientation in the world frame. In fact, camera orientation can be precisely solved by matching point features in images [33].
The same does not apply to the LIDAR point cloud. In processing raw LIDAR data, the points are geo-located independently from each other. There is no relative precision as in SFM. As a result, a large angular error, such as the heading offset, causes each point to be out of its place. A point cloud formed in this case could be distorted so much that it could no longer represent the geometric shape of the target or the terrain. Therefore, the point cloud becomes meaningless with large angular errors. Smaller heading and boresighting errors would cause the points measured in different parts of a UAV flight or from different flights to be inconsistent [34].
Unfortunately, the operator of a small UAV may not know whether the UAV has completed enough maneuvers to guarantee the desired heading accuracy. In the navigation industry, measuring true heading in real time has always been a challenge. A possible solution for airborne and ground vehicles is to use a dual-antenna system. For example, VectorNav has a dual-antenna system that measures the relative location of both antennas in the GPS coordinate frame. The vector between the antennas thus provides an absolute heading, with an error of 0.3 degrees 1 sigma [35]. However, this accuracy is achieved by placing the antennas at least 1 m apart, and the heading error is inversely proportional to the distance between them. If installed on a small UAV, the maximum distance between the antennas is typically much shorter than 1 m, and the heading error approaches 1 degree 1 sigma. Therefore, the dual-antenna solution cannot help many small UAVs.
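The baseline trade-off can be sanity-checked with a small-angle approximation: the heading uncertainty is roughly the relative antenna position error divided by the antenna separation. In the sketch below, the 5 mm relative position error is an assumed value, chosen because it approximately reproduces the 0.3-degree figure at a 1 m baseline:

```python
import numpy as np

def heading_error_deg(baseline_m, rel_pos_err_m=0.005):
    """Approximate 1-sigma dual-antenna heading error:
    heading error ~ relative position error / baseline (small angles)."""
    return np.degrees(rel_pos_err_m / baseline_m)

print(heading_error_deg(1.0))   # ~0.29 deg with a 1 m baseline
print(heading_error_deg(0.3))   # ~0.95 deg with the ~0.3 m baseline of a small UAV
```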
In addition, the timing error in the navigation system is often overlooked. Ideally, the LIDAR orientation at the exact moment of measuring every single point in the point cloud must be recorded. Sometimes, the geo-registration process is simplified by using the same orientation for a batch of points, which leaves a small uncertainty in time, at the millisecond level. Any UAV rotation and vibration experienced within a few milliseconds are therefore not compensated, which contributes to the overall angular error.
Finally, the LIDAR point cloud will be processed and registered. In some applications, LIDAR points are compared against and fitted to a known model [36]. In this case, the location of the fitted 3D model does not directly reflect the noise level of each point; instead, it is affected by the bias and systematic errors in the LIDAR point cloud.
In summary, the position errors observed in the navigation system are typically limited, while the orientation errors can be significant. In an ideal case, the orientation errors mainly affect the horizontal locations of the individual points in the LIDAR point cloud. For example, an angular error of 0.1 degrees is equivalent to a horizontal error of 5 cm at 30 m away. The expected vertical error is also around the level of several centimeters. An analytical example can be found in [25], and similar behavior and performance were observed in [37].
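These numbers follow directly from small-angle error propagation, as a short sketch confirms:

```python
import numpy as np

def angular_error_on_ground_m(range_m, angle_err_deg):
    """Point displacement caused by an angular error at a given range:
    error ~ range * angle, using the small-angle approximation."""
    return range_m * np.radians(angle_err_deg)

print(angular_error_on_ground_m(30.0, 0.1))   # ~0.052 m: ~5 cm at 30 m
print(angular_error_on_ground_m(30.0, 1.0))   # ~0.52 m for a 1-degree heading error
```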

4.3. Additional Error Reduction Methods

If the angular error magnitude or the flight altitude increases, centimeter-level accuracy can no longer be guaranteed. Some of the error sources are in fact systematic, which results in a bias in the point cloud with respect to the truth and in discrepancies among subsets of the LIDAR point cloud measured from different flight paths. ALS point clouds face similar problems [34].
Overlapping observations of the same terrain or target are not necessary to form a LIDAR point cloud, but they help correct these self-discrepancies. The overlapping areas between the footprints of different flight paths (also called ‘strips’) can be used to correct the subsets of the point cloud, which makes the entire point cloud more precise in a relative sense. Ref. [34] mentioned a data-driven approach that minimizes the differences between strips for a given transformation model.
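As a toy version of such a data-driven adjustment, suppose each strip differs from its neighbors only by a vertical bias; the biases can then be solved by least squares from the mean height differences observed in the overlap areas. The transformation model in [34] is richer than this, and the offsets below are hypothetical:

```python
import numpy as np

# Mean height differences in overlap areas: (strip_i, strip_j, dz) with
# dz ~ bias_i - bias_j. Strip 0 is held fixed as the reference.
overlaps = [(0, 1, 0.04), (1, 2, -0.07), (0, 2, -0.02)]
n_strips = 3

A = np.zeros((len(overlaps) + 1, n_strips))
b = np.zeros(len(overlaps) + 1)
for k, (i, j, dz) in enumerate(overlaps):
    A[k, i], A[k, j], b[k] = 1.0, -1.0, dz
A[-1, 0] = 1.0                                   # gauge constraint: bias_0 = 0
biases, *_ = np.linalg.lstsq(A, b, rcond=None)
# Subtracting biases[s] from the heights of strip s reconciles the strips.
```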
Points and geometric features can be extracted from LIDAR data and matched with ground control points or features with surveyed locations. This approach makes the point cloud accurate in an absolute sense [34]. These points and features could be calibration targets purposely distributed in the area, which makes them equivalent to GCPs, or common objects with recognizable shapes, such as sidewalks.

5. UAV-Based Remote Sensing Construction Management Applications

5.1. UAV-Based Photogrammetry Applications

Infrastructure and material conditions are commonly estimated with various types of simulation models. To obtain more detailed information, UAV-based sensing systems have been widely used for various types of operations and applications in the construction industry [38]. The main capabilities of a UAV-based imaging system include 2D surveying, 3D mapping and modeling, progress control, onsite monitoring, inspection, and assessment. They are applicable to buildings, bridges, transportation areas, and other infrastructure systems and help improve infrastructure sustainability. A summary of these applications can be found in [39].
Ref. [40] discussed applications for safety inspections on construction sites. UAV imagery could be used to identify non-compliance with established safety requirements. By providing better visualization of working conditions, UAVs could help improve the safety inspection process on job sites. Ref. [40] developed a set of procedures and guidelines for collecting, processing, and analyzing data on safety requirements based on 2D imagery.
Construction progress monitoring could also benefit from using small UAVs. Most construction progress is simulated with computational models [41]. Instead of relying on the manual input and observation of every phase of a construction project, which is costly and time-consuming, [42] proposed integrating building information modeling (BIM), UAVs, and real-time cloud-based data modeling and analysis. This enabled an accurate comparison between the as-planned and UAS-based as-built states of the project. The limitation of this approach lies in the fact that the data generated are currently qualitative, with a visualization of the project’s progress. A software approach to automatically align and compare the BIM model and the point cloud was needed to produce quantitative and measurable data for project control and performance monitoring. Ref. [43] proposed an industry foundation classes (IFC)-based solution for UAV-enabled as-built and as-is BIM development, quality control, and smart inspections. It enabled the automated integration of as-built and as-is conditions into BIM. However, it was based on 2D images only.
Structural damage assessment can be done with 2D or 3D imagery. Ref. [44] showed examples of building scanning and monitoring using a small rotary-wing UAV. Two-dimensional UAV images were stitched together into a high-resolution imagery map, which allowed damage and cracks to be observed in the millimeter range. Additional algorithms and processing software were developed to recognize and highlight the cracks based on 2D edge detection. In [36], a 3D point cloud was formed from the multi-perspective, overlapping, very high-resolution oblique images collected with UAVs. The 3D point cloud was collected for the entire building and was combined with a detailed object-based image analysis (OBIA) of façades and roofs. Major damage could be identified in the 3D point cloud, whereas other cases were identified by OBIA-based damage indicators. However, it was recognized that the 3D point cloud was collected for individual parts of the building and required an additional algorithm to aggregate the information from these parts.
Three-dimensional mapping with UAV photogrammetry is the main application covered in this review. A review of relevant technologies can be found in [45]. In general, UAV photogrammetry can reduce the cost and the risks of mapping and surveying tasks in harsh environments. Centimeter-level accuracy is achievable, and rotary-wing UAVs are better choices for small sites. However, the endurance of small UAVs may be a potential issue considering weather and wind conditions.
Ref. [46] demonstrated the use of UAV imagery and SFM for modeling the surface and volume of earthwork in a field-realistic environment. This study also incorporated autonomous UAV flight with pre-programmed waypoints. The methodology was based on the Mikrokopter Flight Planning Tool, and the new computer program was specifically designed for the surveying aspects of aerial photogrammetry that are relevant to civil engineering. It was found that 70% longitudinal coverage and 40% transverse coverage are recommended. Although the UAV was much more convenient than traditional methods, it was recognized that the volumetric measurements could bear large errors, and the authors noted that error sources needed to be identified and mitigated. The DEM of a designated area can be created from UAV imagery and SFM [47]. The horizontal and vertical accuracy fell within the desirable threshold according to the National Standard for Spatial Data Accuracy. The DEM was used to choose a proper siting for dam construction. The authors concluded that the terrain model created with this approach was robust enough for planning purposes in construction and engineering applications.
Ref. [48] compared the efficiency of 3D mapping in terms of the ease of model development, data quality, usefulness, and limitations in two typical building cases. The ease of model development took into consideration the accessibility of the worksite for takeoff and landing; physical barriers to UAV flights; disruption to the worksite; and software processing time. The data quality considerations included the footprint, spatial resolution, and overlap of the images, as well as visual inconsistency between images due to distortion, shadowing, and gaps. The usefulness and limitations were defined for the users of the data product. The users interviewed in this work noted that the 3D maps were useful for logistics, monitoring work progress, planning, and visualization. However, these maps could not provide details at close range, and there were parts of the buildings that could not be modeled (such as the inside and top). Due to safety considerations and regulations, the UAV flight could not cover certain parts of the site to create a full 3D point cloud.
Ref. [49] demonstrated the use of UAVs for augmenting bridge inspections, using the Placer River Trail Bridge in Alaska as an example. The authors produced a 3D model of the bridge using UAV imagery and a hierarchical dense SFM algorithm. The UAV design, data capture, and data analysis were optimized together for a dense 3D model, and the results were compared against models generated through laser scanning. The 3D models created with UAV imagery provided sufficient accuracy to resolve defects and support the needs of infrastructure managers.

5.2. LIDAR Applications

LIDAR-based solutions are attracting interest within the construction industry as well [50]. UAV-based LIDAR is a relatively new technology for construction management, especially for improving construction and infrastructure sustainability. Users in this industry are more familiar with terrestrial laser scanners (TLS), mobile laser scanners (MLS) mounted on ground vehicles, and ALS mounted on large, manned aircraft.

5.2.1. TLS Applications

Ref. [51] showcased how a TLS point cloud can be integrated with total station surveying to create BIM models of existing buildings. The point cloud-based BIM model provided the ability to detect and define facade damage on buildings, as well as the ability to detect discrepancies between the existing drawings and the real situation captured with the TLS point cloud. Limitations of this method were also pointed out, including (i) the difficulty of manipulating point cloud data; (ii) the lack of a best-fitting algorithm; (iii) the lack of the ability to enforce known shapes of openings, such as windows, in the point cloud; and (iv) the lack of a standard for managing data. Refs. [52,53] emphasize that, in general, the maximum range of a scanner should be taken into account before collecting data, as the low-density point clouds taken at the maximum distance may not be sufficient for all surveying needs. Ref. [54] focused on TLS application to bridge inspection, involving geometric documentation, surface defect determination, and corrosion evaluation. Workflows based on TLS data were proposed to measure cracks and vertical deflection; they could save up to 90% of the time and could detect cracks between 1.6 mm and 4.8 mm.
TLS data are also able to assist with assessments of the saturation of building materials, which can be used for several civil engineering applications, such as monitoring bridges, landslides, dams, and tunnels [55]. Changes in roughness and color should be taken into consideration when analyzing structure moisture content. In [56], the authors were able to use TLS to assess the deformation of bridge structures, suggesting that it is a viable method for construction inspection. The TLS data collected were processed using a shape information model and an octree algorithm. It was found that this method is effective in detecting deflections greater than 4 mm. TLS data have also been effective in measuring the thickness of concrete pavement on construction sites. Ref. [57] found that surveying construction sites before and after the addition of concrete may be a more accurate and time-efficient alternative to the traditional core sampling methods conducted at construction sites.
With regard to construction site management, surface profiles have been created from TLS point cloud data by attaching a 2D profilometer to an excavator. This method is not commonly found in the literature but has been found to have an accuracy better than 10 mm; the advantages of the technique include accuracy, a high update rate, real-time measurements of the site, and a construction without moving parts [58]. With the correct algorithms, there is potential for the excavator to become fully autonomous through the use of machine learning.

5.2.2. ALS Applications

TLS measures the point cloud from a fixed location, which is inconvenient in many applications. LIDAR can instead be installed on airborne and ground vehicles and can measure point clouds while the vehicles are moving. As aforementioned, these types of LIDARs require high-quality navigation sensors (typically differential GPS/GNSS and an inertial measurement unit [15]) to measure the position and orientation of the LIDAR.
ALS has been widely used to survey the ground and create topographical models, although it would not normally be used to survey construction worksites, due to cost and other practical limitations. The authors of [59] noted in their comparison of ALS, satellite imagery, and USGS models that LIDAR technology is not the most accurate choice when surveying areas with steep slopes, ridges, or ditches. Ref. [60] described the use of aerial photography and ALS to estimate individual tree heights in forests. The main challenge of modeling the forest-covered terrain was to differentiate the LIDAR returns from the trees and the ground. This process depended on multiple returns of the laser beam, since the first return is usually from the treetops and the last strong return is from the ground. However, due to the low density of ALS returns (3–4 returns per m²) and the small footprint of the laser beam (10 cm²), the tree models were not as accurate as hoped; only meter-level accuracy was achieved. Airborne LIDAR may be effectively used for structural damage assessments. For example, airborne LIDAR has been evaluated for the damage assessment of buildings caused by hurricanes [61]. However, only severely damaged structures can be detected with this method, and high-density point cloud data are necessary.

5.2.3. MLS Applications

The application of MLS is similar to that of TLS. For example, [62] proposed using MLS in monitoring progress. MLS point cloud data and 4D design models were used to identify deviations of the performed work from the planned work. The proposed framework was tested using as-built data acquired from an ongoing bridge construction project. The percentage of completion for the as-built bridge elements was calculated and compared with the as-planned values. The differences for every element on a specific scan date were used for assessing the performance of the proposed framework. The obtained difference ranged from −7% to 6% for most elements.
Since MLS is mounted on ground vehicles, it can offer a data density similarly high to that of TLS (higher than that of ALS) and similar accuracy levels (millimeter to centimeter), while being more flexible than TLS. MLS is becoming a popular choice for mapping urban environments [15]. Available commercial systems today can produce close to or more than one million points per second at ranges of a few hundred meters. The manufacturers of these LIDARs include Faro, Velodyne, Riegl, Sick, Optech, and Leica. They have been used in mapping transportation infrastructure, building information modeling, utility surveying, and vegetation. Road markings, zebra crossings, center lines, and other features can be automatically identified from the integrated LIDAR-imagery data product. For example, the authors of [63] were able to extract road surface features from terrestrial mobile LIDAR point cloud data using an algorithm, which effectively resulted in an index of roadway features with greater than 90% correctness, suggesting that mobile LIDAR data are useful in surface reconstruction [63]. The challenges identified in using MLS include: (1) classification and recognition of objects, (2) data integration and registration, and (3) city modeling.
The issue of data integration and registration is the most relevant to this work. Although the MLS point cloud can be directly geo-referenced, errors in navigation (position and orientation) can cause discrepancies among the point cloud data sets, since the position and orientation are non-stationary. In particular, the authors of [15] noted that ‘the misalignment among sensors needs to be carefully calibrated (through either indirect or direct sensor orientation), and their time needs to be rigorously synchronized’. This is because orientation and timing errors can cause a great offset in the location of the point cloud, the same as in ALS. The MLS point cloud can be registered with respect to other sensor data, such as a reference point cloud and imagery. Multiple sets of MLS point clouds can also be registered and stitched together. However, different data sets often had to be manually registered into the same coordinate system due to navigation errors, and specially shaped artificial targets were used in the process. The precision of the MLS point cloud was verified via registration and was around 4–5 cm.
Ref. [15] provided a summary of how urban objects could be modeled with a LIDAR point cloud from TLS, MLS, and ALS. Building roofs and façades could be modeled with ALS or ground-based LIDAR. The modeling process could be data-driven, which extracted models from the point cloud; or model-driven, which verified a hypothetic model with a point cloud; or a hybrid between the two. The choice of models was a balance between geometry, topology, and semantics. Power lines could be better modeled with ALS and geometric models (a more detailed example can be found in [24]). Road surfaces could be modeled with ALS or MLS, and with various types of models. Ref. [15] called for more research into LIDAR-based bridge models.
Ref. [15] also recognized that it was more challenging to model free-form objects, such as statues, towers, fountains, and certain types of buildings. Various types of surface reconstruction methods were discussed in this work, and it was certainly possible to extract robust and accurate (centimeter-level) representation from the point cloud. However, the accuracy depended on the surface characteristics and the input data.
Although there has not been much literature on the application of UAV-based ALS, the remote sensing industry has started to pay more attention to it. UAV LIDARs were developed as adapted versions of ALS [64] and MLS [27,28,65]. As with ALS and MLS, the UAV LIDARs were tightly integrated with navigation systems, such as Trimble/Applanix [30] and NovAtel [29]. Attempts have also been made to use a hybrid of TLS data and UAV-based image processing. This technique is thought to be most useful at large earthwork sites to improve the cost-effectiveness and efficiency of construction management [66].
Due to constraints in cost, power, size, and weight, low-cost UAV LIDAR systems have limitations in range, point cloud density, ranging accuracy, and navigation accuracy. For example, the Hokuyo LIDAR in [65] has a nominal range of 30 m, which makes it suitable only for ground vehicles and UAVs flying very low to the ground. While UAV LIDAR can be useful for capturing data over large land areas, ground-based LIDAR is superior in capturing the specific details of an area [67]. GPS/GNSS receivers without RTK or differential corrections can produce large position errors, which translate into large 3D position errors in the point cloud. The orientation from low-cost IMU sensors has substantial angular errors, especially in the heading. As a result, the accuracy and resolution of low-cost UAV LIDARs are rather limited, and not long ago remote sensing experts argued that UAV LIDARs were not as effective as UAV photogrammetry in construction management [68]. Despite these constraints, a BIM–UAV LIDAR combination approach was found to be effective for construction project monitoring and quality control. This system provides real-time information that can assist in early defect detection on construction sites. The technology can be considered advantageous over traditional quality control checks when taking into consideration the safety, accessibility, and efficiency of the BIM–LIDAR system [69,70].
It is easier to obtain a high-density point cloud with photogrammetry, and high-resolution cameras are much more cost-effective than high-density LIDARs. While LIDAR sensors may be more costly and heavier than high-resolution cameras, [71] notes that the results may improve the overall quality of construction project management. More importantly, the relative precision of the 3D point cloud from SFM photogrammetry is based on the consistency within the imagery, so it is relatively convenient to achieve centimeter-level relative precision with sufficient imagery coverage. The absolute accuracy is dependent on GCPs; with sufficient GCPs, centimeter-level absolute accuracy can also be achieved. The LIDAR point cloud, on the other hand, always relies on direct geo-referencing. As discussed above, its accuracy is highly dependent on the navigation sensors, especially the angular measurements, and the errors in the 3D point cloud are amplified with distance. Limited by accuracy and range, low-cost UAV LIDARs often have to take measurements close to the ground (tens of meters). Therefore, it has been argued that low-cost UAV LIDARs only help when SFM or GCPs are not available [68].
However, it has been pointed out that there are several types of environments where UAV LIDAR enables projects that may not have been possible otherwise [72]. These include projects involving steep topography, linear-based surveys, or sites covered by dense vegetation. LIDAR direct geo-referencing minimizes the need for GCPs and is therefore suitable in environments where it is either too expensive or impossible to place GCPs. Ref. [73] tested the possibility of autonomous beyond-visual-range (BVR) flights in unknown environments with LIDAR. The three qualities that this study found to be necessary for autonomous long-distance UAV flights were BVR waypoint navigation flight, ground detection/terrain following, and obstacle detection and avoidance. More importantly, some LIDARs have multiple-return capabilities [27]. The LIDAR beams are sometimes wide enough that they can be reflected by multiple surfaces and objects, including dust, rain, foliage, and the actual target (ground), making it possible for LIDAR to see through to the ground. Therefore, the main advantage of using LIDAR is potentially differentiating the ground from vegetation.
Furthermore, recent developments in the remote sensing and navigation industries have made available higher density UAV LIDARs at a greater range (a few hundred meters) and better inertial measurement units that can measure orientation more precisely. They could be used to take volumetric or topographic measurements of the ground, with or without vegetation cover, and model roads, cuts and other surfaces, and even buildings [37].
In the last few years, custom-built LIDAR systems specially designed for modeling terrain or vegetation have been reported, such as in [74], and commercial solutions are becoming more available, such as in [75]. Obviously, UAV-based photogrammetry (SFM) and LIDAR have different limitations and requirements on the hardware (UAV airframe and sensors) and the operational environment. The expected quality of the data product also differs between the two technologies. In general, the SFM point cloud is expected to have a higher precision and higher density than that of UAV LIDAR. For example, Figure 1 shows a tent-shaped calibration target placed on the ground. The target was designed by the authors of this paper to quantify the errors in the SFM and LIDAR point clouds. It has a base of approximately 1 m by 1 m, and its height is about 0.4 m. The surface of the target was covered with white canvas and painted with blue stripe patterns. The SFM and LIDAR point clouds are illustrated in Figure 2 and Figure 3, respectively. The SFM point cloud was formed from imagery collected with a GoPro camera installed on a DJI Inspire UAV and processed in Agisoft Metashape. This point cloud was geo-registered in a local North-East-Up frame, and each point was assigned a color extracted from the airborne imagery. A custom UAV LIDAR system based on a Sick UAV LIDAR and a DJI Matrice 600 Pro UAV was used to collect the point cloud in Figure 3. This point cloud is displayed in the same North-East-Up frame as in Figure 2. The points in Figure 3 were not colored with imagery, since the UAV LIDAR does not provide color with its returns; instead, the colors in Figure 3 simply illustrate height, such that the target can be visually differentiated from the ground. Both point clouds are accurately geo-referenced and can be aligned with each other. However, Figure 2 shows more 3D details of the target, with a higher density and colored points, whereas the target in Figure 3 appears coarse and noisy, with a lower density.
UAV LIDARs and ALS are capable of measuring terrain and surfaces, with or without vegetation cover, via direct geo-referencing. The point cloud density and accuracy decrease with flight altitude, so they may not provide the same level of detail that UAV imagery can. On the other hand, the absolute accuracy of SFM is dependent on the GCPs and is more likely to be limited by the operational environment. The UAV used in this example circled the target at different altitudes multiple times to ensure that there was sufficient coverage and overlap between images and that images were collected from multiple view angles; it took several minutes to scan this small target. The UAV LIDAR requires no GCPs, and the UAV needed only one overhead flight in this example; it took only a few seconds of flight to capture the point cloud in Figure 3. The pros and cons of both technologies are summarized in Table 2, in terms of hardware, operations, and data quality.
Since SFM and LIDAR each have unique strengths, they can complement each other in some applications. A data fusion method can be used to merge the two types of point clouds to obtain and analyze data in construction settings. Ref. [76] examines the compatibility of aligning data sources from different types of point cloud collection methods. MLS and ALS data are easily aligned through geo-referencing methods [76]. TLS data may be better for smaller scanning areas compared to MLS and can be combined with ALS point cloud data. This process involves finding matching pairs of objects between the two point cloud datasets, creating Laplacian matrices, and computing the resulting correlation coefficients [76]. Point cloud data can also be overlaid with the original design models as a way of determining construction progress [77]. Similarly, overlapping point clouds from consecutive days is a possible way of assessing progress [77].
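When two clouds are already roughly geo-referenced, a common way to refine their alignment before merging is iterative closest point (ICP). Below is a hedged sketch using the open-source Open3D library; the file names and threshold are placeholders, and [76] itself uses the Laplacian-based matching described above rather than ICP.

```python
import numpy as np
import open3d as o3d

# Hypothetical inputs: an SFM cloud and a LIDAR cloud of the same site,
# both coarsely geo-referenced in the same world frame.
source = o3d.io.read_point_cloud("sfm_cloud.ply")
target = o3d.io.read_point_cloud("lidar_cloud.ply")

# Refine the residual misalignment with point-to-point ICP.
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.10,   # 10 cm correspondence search radius
    init=np.eye(4),                     # start from the coarse geo-referencing
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

source.transform(result.transformation) # apply the refined pose
merged = source + target                # fuse into a single cloud
```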
Overlaying point cloud data with imagery is an additional technique that may be used effectively. Ref. [78] examined the concept of automatic change detection with UAV image-based point clouds in the context of assessing landslide sites over time. This approach allows for the comparison of a large number of images from different dates without the need for extensive ground control point information [78]. Construction progress may also be monitored by superimposing two-dimensional photographs on 3D point models. Ref. [79] utilized this technique to visualize the construction progress schedule: aspects of a building site that were on schedule were color-coded one color, while entities that were behind schedule were coded a separate color for clear distinction [79]. This allowed for easily visible progress reports in the construction process. Alternatively, the LIDAR point cloud can be fused with available imagery to construct 3D models. The fusion is based on direct geo-referencing and can still provide more details. Ref. [80] proposed an approach to register images with an ALS point cloud for urban models; OpenGL and graphics hardware were used in the optimization process for efficient registration. Ref. [16] discussed a hybrid intensity-based approach that utilizes both statistical and functional relationships between images, particularly for registering aerial images to 3D point clouds. The statistical dependence of mutual information or the functional relationship of the correlation ratio alone was not sufficient to register photos to LIDAR reliably, but the proposed method used both and performed robust registration of urban areas. Ref. [81] discussed registering SFM 3D point clouds, 3D meshes, and geo-referenced orthophoto imagery in a fully automated manner. The data product could be used in disaster relief response and construction progress monitoring.
Ref. [82] focused on road maintenance, combining a TLS point cloud with UAV photogrammetry. The authors acknowledged the difficulties of road maintenance surveys using TLS alone: (1) pedestrians and vehicles use the road during measurements and the space available for instrument setup is limited, so positioning the TLS can be difficult; and (2) TLS provides high-density measurements only within a limited range (about 10 m). Part of the road was therefore surveyed with UAV photogrammetry and SFM, and the resulting point cloud was combined with the TLS data, which covered a bridge, including its sides and lower structure. The error for the bridge was 1.2 cm in effective length and 1.9 cm in effective width, and the three-dimensional data described the structure of the bridge with high accuracy. The combined point cloud could support a road maintenance management system that accumulates data and links inspection results and repair information in three dimensions.
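A minimal sketch of such a merge is shown below, assuming both clouds are exported in the same projected coordinate system, so that geo-referencing supplies the coarse alignment and ICP merely refines the residual offset. It uses the open-source Open3D library; the file names and the 0.5 m correspondence distance are hypothetical.

```python
import numpy as np
import open3d as o3d  # open-source point cloud library; any equivalent would do

# Hypothetical inputs, both assumed geo-referenced in the same projected CRS.
tls = o3d.io.read_point_cloud("bridge_tls.ply")
sfm = o3d.io.read_point_cloud("road_uav_sfm.ply")

# Geo-referencing already provides coarse alignment (init = identity);
# point-to-point ICP refines the remaining offset.
result = o3d.pipelines.registration.registration_icp(
    sfm, tls, max_correspondence_distance=0.5, init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
sfm.transform(result.transformation)

merged = tls + sfm                             # concatenate into a single cloud
o3d.io.write_point_cloud("road_and_bridge_merged.ply", merged)
```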
The existing literature has mainly covered the registration of imagery with TLS and ALS data. The fusion and registration of UAV LIDAR with imagery collected by an onboard camera has not yet been well documented; it is an emerging technique that will likely soon find more applications in construction and civil engineering.

6. Safety and Risk Considerations

Operating UAVs in construction applications introduces additional risks. Ref. [83] noted that ‘about 30 incidents of near-misses or crashes leading to human injury have been reported associated with the use of recreational UAVs. Unstable flying conditions, operator errors, and faulty equipment may represent potential hazards to nearby workers from the commercial use of UAVs’. That work described the use of UAVs in construction, the potential risks of their use to workers, and approaches for risk mitigation, including ‘prevention-through-design’ for small UAVs, adequate operator training, and updated occupational safety regulations.
Risks associated with small UAVs can stem from a number of technical causes, including (but not limited to) power, communications, navigation, and control. UAV operations may be autonomous, semi-autonomous, or remote-controlled [84]. In a fully autonomous or semi-autonomous operation, low-level control is governed by the onboard flight controller and navigator, which rely on GNSS (or an equivalent sensor), as discussed above. If the UAV follows a pre-loaded flight plan without the need for human intervention, it is considered fully autonomous. In a semi-autonomous operation, sometimes also referred to as a GNSS-assisted operation, the UAV follows the guidance of a remote controller, with commands transmitted via a communication channel. In a remote-controlled operation, the user directly performs low-level control functions, such as attitude or velocity control, without relying on onboard GNSS.
When a UAV is close to a building or other structure, it may lose communication with the operator. The quality of GNSS positioning in the vicinity of a construction site can also suffer from signal blockage and multipath. In an autonomous operation where GNSS has been corrupted, the onboard flight controller may command erroneous maneuvers. A properly designed UAV system will attempt to stop the operation, by landing or returning to the home location, upon loss of communications or GNSS; without the ability to ‘sense and avoid’, however, the UAV could still cause damage during this process. An obvious way to prevent communication loss is for operators to keep the UAV within the visual line of sight, as required by various regulations, including FAA Part 107 [87]. Autonomous operations should be enabled only when GNSS (or an equivalent) is available. Small UAVs with redundant navigation systems, payload capabilities, redundant rotors, and reserve battery capacity (in the case of rotary-wing UAVs) provide additional safety protection. Furthermore, small UAVs with GNSS-denied and indoor navigation capabilities, as well as sense-and-avoid capabilities, are now available.
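The failsafe behavior described above can be summarized as a simple decision policy. The sketch below is an illustrative simplification under stated assumptions, not the logic of any particular autopilot; production flight stacks implement far richer, configurable behaviors.

```python
from enum import Enum, auto

class Failsafe(Enum):
    CONTINUE = auto()
    RETURN_TO_HOME = auto()
    HOLD_POSITION = auto()
    LAND_IN_PLACE = auto()

def select_failsafe(gnss_ok: bool, comms_ok: bool, sense_avoid: bool) -> Failsafe:
    """Simplified failsafe policy: returning home requires a trustworthy
    GNSS fix; without one, holding position (if obstacle sensing exists)
    or landing in place is the safer default."""
    if gnss_ok and comms_ok:
        return Failsafe.CONTINUE
    if gnss_ok:                    # communication lost: navigate home autonomously
        return Failsafe.RETURN_TO_HOME
    if sense_avoid:                # GNSS lost but obstacle sensing available
        return Failsafe.HOLD_POSITION
    return Failsafe.LAND_IN_PLACE  # a blind descent is the least-bad option
```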
Ref. [85] recognized that UAVs have the potential to greatly increase safety and efficiency on construction job sites, particularly in safety inspections, and surveyed safety managers on their views regarding UAV implementation. In 2019, construction was found to be the second-largest economic market sector for UAVs, after agriculture. Monitoring tasks, such as watching cranes operating in the proximity of overhead power lines, were identified as the safety-related tasks most likely to benefit from using UAVs on a construction project. The three most important technical features of a UAV were found to be camera maneuverability, sense-and-avoid capability, and a real-time video feed. A list of state regulations can be found in [86], and FAA Part 107 guidelines [87] must be followed when operating small UAVs for these applications.

7. Conclusions

UAV-based remote sensing and inspection have been widely used in construction and civil engineering. This paper summarized the up-to-date performance and applications of UAV-based photogrammetry and LIDAR technologies. UAV-based technologies have demonstrated unique advantages, especially in supporting construction and infrastructure sustainability, although limitations remain in some applications. With the ongoing development of sensing technologies and their integration into UAV-based systems, some of these limitations will soon be overcome. Although the operation of UAVs can introduce risks at a construction site, especially in fully autonomous operations, UAVs can also improve the safety, efficiency, and sustainability of construction operations.

Author Contributions

Literature review, S.G., Z.Z. and G.W.; writing, S.G., Z.Z. and G.W.; editing, S.G., Z.Z. and G.W. All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to thank the North Carolina Department of Transportation (award number: RP 2020-35) for their assistance in supporting this research.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The content of the information provided in this publication does not necessarily reflect the position or the policy of the United States government, and no official endorsement should be inferred.

References

1. Tao, C.; Watts, B.; Ferraro, C.C.; Masters, F.J. A multivariate computational framework to characterize and rate virtual Portland cements. Comput.-Aided Civ. Infrastruct. Eng. 2019, 34, 266–278.
2. DJI. Next Generation Mapping—Saving Time in Construction Surveying with Drones. Available online: https://enterprise.dji.com/news/detail/next-generation-mapping (accessed on 12 January 2019).
3. Triggs, B.; McLauchlan, P.F.; Hartley, R.I.; Fitzgibbon, A.W. Bundle adjustment—A modern synthesis. In International Workshop on Vision Algorithms; Springer: Berlin/Heidelberg, Germany, 1999; pp. 298–372.
4. Agisoft. Tutorial (Beginner Level): Orthomosaic and DEM Generation with Agisoft PhotoScan Pro 1.3 (with GCPs). Available online: https://www.agisoft.com/pdf/PS_1.3%20-Tutorial%20(BL)%20-%20Orthophoto,%20DEM%20(GCPs).pdf (accessed on 12 January 2019).
5. Trimble. Inpho UASMaster. Available online: https://geospatial.trimble.com/products-and-solutions/trimble-inpho-uasmaster (accessed on 12 January 2019).
6. Pix4D Home Page. Available online: https://pix4D.com (accessed on 4 January 2022).
7. Furukawa, Y.; Ponce, J. CMVS-PMVS. Available online: https://github.com/pmoulon/CMVS-PMVS (accessed on 12 January 2019).
8. von Übel, M. Affordable and Easy 3D Scanning 2019 Best Photogrammetry Software. Available online: https://all3dp.com/1/best-photogrammetry-software (accessed on 12 January 2019).
9. Agisoft Technical Support. Algorithms Used in Photoscan. Available online: https://www.agisoft.com/forum/index.php?topic=89.0 (accessed on 4 January 2022).
10. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; pp. 1150–1157.
11. Leica. Cyclone 3D Point Cloud Processing Software. Available online: https://leica-geosystems.com/en-us/products/laser-scanners/software/leica-cyclone (accessed on 4 January 2022).
12. Autodesk Help. Registering Unstructured Scans. Available online: https://knowledge.autodesk.com/support/recap/learn-explore/caas/CloudHelp/cloudhelp/2018/ENU/Reality-Capture/files/GUID-AF55A2EB-FCE8-4982-B3D6-CEAD5732DF03-htm.html (accessed on 4 January 2022).
13. Meshlab Homepage. Available online: http://www.meshlab.net (accessed on 4 January 2022).
14. CloudCompare. 3D Point Cloud and Mesh Processing Software. Available online: https://www.danielgm.net/cc (accessed on 12 January 2019).
15. Wang, R.; Peethambaran, J.; Chen, D. Lidar point clouds to 3-D urban models: A review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 606–627.
16. Parmehr, E.G.; Fraser, C.S.; Zhang, C.; Leach, J. Automatic Registration of Aerial Images with 3D LiDAR Data Using a Hybrid Intensity-Based Method. In Proceedings of the 2012 International Conference on Digital Image Computing Techniques and Applications (DICTA), Fremantle, WA, Australia, 3–5 December 2012; pp. 1–7.
17. Nasrullah, A.R. Systematic Analysis of Unmanned Aerial Vehicle (UAV) Derived Product Quality. Master’s Thesis, University of Twente, Enschede, The Netherlands, 2016.
18. Natan, M.; Jim, C.H.; Lane, S.N. Structure from Motion (SfM) Photogrammetry. In Geomorphological Techniques; British Society of Geomorphology: London, UK, 2015; Chapter 2.
19. Pix4D. Do RTK/PPK Drones Give You Better Results than GCPs? Available online: https://assets.ctfassets.net/go54bjdzbrgi/2VpGjAxJC2aaYIipsmFswD/3bcd8d512ccfe88ff63168e15051baee/BLOG_rtk-ppk-drones-gcp-comparison.pdf (accessed on 4 January 2022).
20. Ground Control Points for Drone Mapping. Creating Quality GCPs for Mapping Contour Lines. Available online: https://www.groundcontrolpoints.com/mapping-contour-lines-using-gcps (accessed on 12 January 2019).
21. Shaw, L.; Helmholz, P.; Belton, D.; Addy, N. Comparison of UAV Lidar and imagery for beach monitoring. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 589–596.
22. Sanz-Ablanedo, E.; Chandler, J.H.; Rodríguez-Pérez, J.R.; Ordóñez, C. Accuracy of unmanned aerial vehicle (UAV) and SfM photogrammetry survey as a function of the number and location of ground control points used. Remote Sens. 2018, 10, 1606.
23. Grayson, B.; Penna, N.T.; Mills, J.P.; Grant, D.S. GPS precise point positioning for UAV photogrammetry. Photogramm. Rec. 2018, 33, 427–447.
24. Zhang, R.; Yang, B.; Xiao, W.; Liang, F.; Liu, Y.; Wang, Z. Automatic extraction of high-voltage power transmission objects from UAV lidar point clouds. Remote Sens. 2019, 11, 2600.
25. Guan, S.; Zhu, Z. UAS-Based 3D Reconstruction Imagery Error Analysis. Struct. Health Monit. 2019.
26. May, N.C.; Toth, C.K. Point positioning accuracy of airborne LiDAR systems: A rigorous analysis. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2007, 19–21.
27. Velodyne. VLP-16 User Manual, 63-9243 Rev. D. Available online: https://github.com/UCSD-E4E/aerial_lidar/blob/master/Datasheets%20and%20User%20Manuals/velodyne%20lidar%20datasheets/***VLP-16%20User%20Manual%20and%20Programming%20Guide%2063-9243%20Rev%20A.pdf (accessed on 12 January 2019).
28. Weber, H. Sick AG Whitepaper. Available online: https://cdn.sick.com/media/docs/2/22/322/Whitepaper_SICK_AG_Whitepaper_Select_the_best_technology_for_your_vision_application_en_IM0063322.PDF (accessed on 4 January 2022).
29. NovAtel. SPAN IMU-CPT. Available online: https://www.novatel.com/assets/Documents/Papers/IMU-CPT.pdf (accessed on 4 January 2022).
30. Applanix. APX-20 UAV High Performance GNSS-Inertial Solution with Dual IMUs. Available online: https://www.applanix.com/downloads/products/specs/APX20_UAV.pdf (accessed on 4 January 2022).
31. Ravi, R.; Lin, Y.J.; Elbahnasawy, M.; Shamseldin, T.; Habib, A. Bias impact analysis and calibration of terrestrial mobile lidar system with several spinning multibeam laser scanners. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5261–5275.
32. Mostafa, M.; Hutton, J.; Reid, B.; Hill, R. GPS/IMU products—The Applanix approach. In Photogrammetric Week; Wichmann: Berlin/Heidelberg, Germany, 2001; Volume 1, pp. 63–83.
33. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003.
34. Toth, C.; Grejner-Brzezinska, D.A. Airborne LiDAR Reflective Linear Feature Extraction for Strip Adjustment and Horizontal Accuracy Determination; No. FHWA/OH-2008/15; Ohio State University: Columbus, OH, USA, 2009.
35. VectorNav. Industrial Series. Available online: https://www.vectornav.com/docs/default-source/product-brochures/industrial-series-product-brochure-(12-0009).pdf (accessed on 4 January 2022).
36. Fernandez Galarreta, J.; Kerle, N.; Gerke, M. UAV-based urban structural damage assessment using object-based image analysis and semantic reasoning. Nat. Hazards Earth Syst. Sci. 2015, 15, 1087–1101.
37. Geocue. Drone LIDAR Systems (Drone LIDAR Considerations). Available online: http://www.geocue.com (accessed on 12 January 2019).
38. Tao, C.; Kutchko, B.G.; Rosenbaum, E.; Massoudi, M. A review of rheological modeling of cement slurry in oil well applications. Energies 2020, 13, 570.
39. Dastgheibifard, S.; Asnafi, M. A review on potential applications of unmanned aerial vehicle for construction industry. Sustain. Struct. Mater. 2018, 1, 44–53.
40. De Melo, R.R.S.; Costa, D.B.; Álvares, J.S.; Irizarry, J. Applicability of unmanned aerial system (UAS) for safety inspection on construction sites. Saf. Sci. 2017, 98, 174–185.
41. Tao, C.; Kutchko, B.G.; Rosenbaum, E.; Wu, W.T.; Massoudi, M. Steady flow of a cement slurry. Energies 2019, 12, 2604.
42. Moeini, S.; Oudjehane, A.; Baker, T.; Hawkins, W. Application of an interrelated UAS-BIM system for construction progress monitoring, inspection and project management. PM World J. 2017, 6, 1–13.
43. Hamledari, H.; Davari, S.; Azar, E.R.; McCabe, B.; Flager, F.; Fischer, M. UAV-enabled site-to-BIM automation: Aerial robotic- and computer vision-based development of as-built/as-is BIMs and quality control. In Construction Research Congress; ASCE: New Orleans, LA, USA, 2018; pp. 336–346.
44. Eschmann, C.; Kuo, C.M.; Kuo, C.H.; Boller, C. Unmanned aircraft systems for remote building inspection and monitoring. In Proceedings of the 6th European Workshop on Structural Health Monitoring (EWSHM 2012), Dresden, Germany, 3–6 July 2012.
45. Nex, F. UAV photogrammetry for mapping and 3d modeling—Current status and future perspectives. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2011, 38, 25–31.
46. Siebert, S.; Teizer, J. Mobile 3D mapping for surveying earthwork projects using an Unmanned Aerial Vehicle (UAV) system. Autom. Constr. 2014, 41, 1–14.
47. Ajayi, O.G.; Palmer, M.; Salubi, A.A. Modelling farmland topography for suitable site selection of dam construction using unmanned aerial vehicle (UAV) photogrammetry. Remote Sens. Appl. Soc. Environ. 2018, 11, 220–230.
48. Álvares, J.S.; Costa, D.B.; de Melo, R.R.S. Exploratory study of using unmanned aerial system imagery for construction site 3D mapping. Constr. Innov. 2018, 18, 301–320.
49. Khaloo, A.; Lattanzi, D.; Cunningham, K.; Dell’Andrea, R.; Riley, M. Unmanned aerial vehicle inspection of the Placer River Trail Bridge through image-based 3D modelling. Struct. Infrastruct. Eng. 2018, 14, 124–136.
50. Knight, R. LiDAR: Going Beyond Photogrammetry. Inside Unmanned Systems. Available online: https://insideunmannedsystems.com/lidar-going-beyond-photogrammetry (accessed on 12 January 2019).
51. Mill, T.; Alt, A.; Liias, R. Combined 3D building surveying techniques–terrestrial laser scanning (TLS) and total station surveying for BIM data management purposes. J. Civ. Eng. Manag. 2013, 19 (Suppl. S1), S23–S32.
52. Kiziltas, S.; Akinci, B.; Ergen, E.; Tang, P.; Gordon, C. Technological assessment and process implications of field data capture technologies for construction and facility/infrastructure management. J. Inf. Technol. Constr. (ITcon) 2008, 13, 134–154.
53. Randall, T. Construction engineering requirements for integrating laser scanning technology and building information modeling. J. Constr. Eng. Manag. 2011, 137, 797–805.
54. Truong-Hong, L.; Laefer, D.F. Application of terrestrial laser scanner in bridge inspection: Review and an opportunity. In Proceedings of the 37th IABSE Symposium: Engineering for Progress, Nature and People, Madrid, Spain, 3–5 September 2014; International Association for Bridge and Structural Engineering (IABSE): Zurich, Switzerland, 2014.
55. Suchocki, C.; Katzer, J. Terrestrial laser scanning harnessed for moisture detection in building materials—Problems and limitations. Autom. Constr. 2018, 94, 127–134.
56. Cha, G.; Park, S.; Oh, T. A terrestrial LiDAR-based detection of shape deformation for maintenance of bridge structures. J. Constr. Eng. Manag. 2019, 145, 04019075.
57. Walters, R.; Jaselskis, E.; Zhang, J.; Mueller, K.; Kaewmoracharoen, M. Using scanning lasers to determine the thickness of concrete pavement. J. Constr. Eng. Manag. 2008, 134, 583–591.
58. Niskanen, I.; Immonen, M.; Makkonen, T.; Keränen, P.; Tyni, P.; Hallman, L.; Hiltunen, M.; Kolli, T.; Louhisalmi, Y.; Kostamovaara, J.; et al. 4D modeling of soil surface during excavation using a solid-state 2D profilometer mounted on the arm of an excavator. Autom. Constr. 2020, 112, 103112.
59. Karan, E.P.; Sivakumar, R.; Irizarry, J.; Guhathakurta, S. Digital modeling of construction site terrain using remotely sensed data and geographic information systems analyses. J. Constr. Eng. Manag. 2014, 140, 04013067.
60. Suárez, J.C.; Ontiveros, C.; Smith, S.; Snape, S. Use of airborne LiDAR and aerial photography in the estimation of individual tree heights in forestry. Comput. Geosci. 2005, 31, 253–262.
61. Zhou, Z.; Gong, J.; Hu, X. Community-scale multi-level post-hurricane damage assessment of residential buildings using multi-temporal airborne LiDAR data. Autom. Constr. 2019, 98, 30–45.
62. Puri, N.; Turkan, Y. Bridge construction progress monitoring using lidar and 4D design models. Autom. Constr. 2020, 109, 102961.
63. Guo, J.; Tsai, M.J.; Han, J.Y. Automatic reconstruction of road surface features by using terrestrial mobile lidar. Autom. Constr. 2015, 58, 165–175.
64. Riegl. ‘Downward-Looking’ LiDAR Sensor for Unmanned Laser Scanning. Available online: http://www.riegl.com/products/unmanned-scanning/riegl-minivux-1dl (accessed on 12 January 2019).
65. Hokuyo. Scanning Laser Range Finder UTM-30LX/LN Specification. Available online: https://www.hokuyo-aut.jp/search/single.php?serial=169 (accessed on 12 January 2019).
66. Moon, D.; Chung, S.; Kwon, S.; Seo, J.; Shin, J. Comparison and utilization of point cloud generated from photogrammetry and laser scanning: 3D world model for smart heavy equipment planning. Autom. Constr. 2019, 98, 322–331.
67. Guo, F.; Jahren, C.T.; Hao, J.; Zhang, C. Implementation of CIM-related technologies within transportation projects. Int. J. Constr. Manag. 2020, 20, 510–519.
68. Geocue. Drone Mapping—SfM Versus Low Precision LIDAR. Available online: https://support.geocue.com/drone-mapping-sfm-versus-low-precision-lidar (accessed on 12 January 2019).
69. Wang, J.; Sun, W.; Shou, W.; Wang, X.; Wu, C.; Chong, H.Y.; Liu, Y.; Sun, C. Integrating BIM and LiDAR for real-time construction quality control. J. Intell. Robot. Syst. 2015, 79, 417–432.
70. Li, Y.; Liu, C. Applications of multirotor drone technologies in construction management. Int. J. Constr. Manag. 2019, 19, 401–412.
71. Zhou, S.; Gheisari, M. Unmanned aerial system applications in construction: A systematic review. Constr. Innov. 2018, 18, 453–468.
72. Tompkinson, W. Professional UAV Lidar Is (Finally) Focusing on the Ground. Available online: https://www.geoweeknews.com/blogs/professional-uav-lidar-is-finally-focusing-on-the-ground (accessed on 17 June 2019).
73. Merz, T.; Kendoul, F. Beyond visual range obstacle avoidance and infrastructure inspection by an autonomous helicopter. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 4953–4960.
74. Guo, Q.; Su, Y.; Hu, T.; Zhao, X.; Wu, F.; Li, Y.; Liu, J.; Chen, L.; Xu, G.; Lin, G.; et al. An integrated UAV-borne lidar system for 3D habitat mapping in three forest ecosystems across China. Int. J. Remote Sens. 2017, 38, 2954–2972.
75. Microdrones. Fully Integrated Systems for Professionals. Available online: https://www.microdrones.com/en/integrated-systems/mdlidar/mdlidar3000dl (accessed on 12 January 2019).
76. Yang, B.; Zang, Y.; Dong, Z.; Huang, R. An automated method to register airborne and terrestrial laser scanning point clouds. ISPRS J. Photogramm. Remote Sens. 2015, 109, 62–76.
77. Shih, N.-J.; Wu, M.-C. A 3D Point-Cloud-Based Verification of As-Built Construction Progress; Springer: Vienna, Austria, 2005.
78. Al-Rawabdeh, A.; Moussa, A.; Foroutan, M.; El-Sheimy, N.; Habib, A. Time series UAV image-based point clouds for landslide progression evaluation applications. Sensors 2017, 17, 2378.
79. Golparvar-Fard, M.; Peña-Mora, F.; Savarese, S. D4AR—A 4-dimensional augmented reality model for automating construction progress monitoring data collection, processing and communication. J. Inf. Technol. Constr. 2009, 14, 129–153.
80. Mastin, A.; Kepner, J.; Fisher, J. Automatic registration of LIDAR and optical images of urban scenes. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 2639–2646.
81. Thuy, C.T.; Watanabe, A.; Wakutsu, R. Cloud-based 3D data processing and modeling for UAV application in disaster response and construction fields. In Geotechnics for Sustainable Infrastructure Development; Springer: Singapore, 2020; pp. 1177–1182.
82. Kubota, S.; Ho, C.; Nishi, K. Construction and usage of three-dimensional data for road structures using terrestrial laser scanning and UAV with photogrammetry. In Proceedings of the International Symposium on Automation and Robotics in Construction, Banff, AB, Canada, 21–24 May 2019; IAARC Publications: London, UK, 2019; Volume 36, pp. 136–143.
83. Howard, J.; Murashov, V.; Branche, C.M. Unmanned aerial vehicles in construction and worker safety. Am. J. Ind. Med. 2018, 61, 3–10.
84. Wang, G.; Hollar, D.; Sayger, S.; Zhu, Z.; Buckeridge, J.; Li, J.; Chong, J.; Duffield, C.; Ryu, D.; Hu, W. Risk considerations in the use of unmanned aerial vehicles in the construction industry. J. Risk Anal. Crisis Response 2016, 6, 165–177.
85. Gheisari, M.; Esmaeili, B. Applications and requirements of unmanned aerial systems (UASs) for construction safety. Saf. Sci. 2019, 118, 230–240.
86. UAVCoach. Master List of Drone Laws. Available online: https://uavcoach.com/drone-laws (accessed on 12 January 2019).
87. FAA. Part 107 of the Federal Aviation Regulations. Available online: https://www.faa.gov/news/fact_sheets/news_story.cfm?newsId=20516 (accessed on 12 January 2019).
Figure 1. Airborne imagery of the calibration target.
Figure 2. SFM point cloud of the calibration target (courtesy of Mr. Nicholas Hill).
Figure 3. LIDAR point cloud of the calibration target.
Table 1. Photogrammetry software.

Software Name | Type | Operating Systems
COLMAP | Aerial, Close-range | Windows, macOS, Linux
Meshroom | Aerial, Close-range | Windows, Linux
MicMac | Aerial, Close-range | Windows, macOS, Linux
Multi-View Environment | Aerial, Close-range | Windows, macOS
OpenMVG | Aerial, Close-range | Windows, macOS, Linux
Regard3D | Aerial, Close-range | Windows, macOS, Linux
VisualSFM | Aerial, Close-range | Windows, macOS, Linux
3DF Zephyr | Aerial, Close-range | Windows
Autodesk Recap | Aerial, Close-range | Windows
Agisoft Metashape | Aerial, Close-range | Windows, macOS, Linux
Bentley ContextCapture | Aerial, Close-range | Windows
Correlator3D | Aerial | Windows
DroneDeploy | Aerial | Windows, macOS, Linux, Android, iOS
Elcovision 10 | Aerial, Close-range | Windows
iWitnessPro | Aerial, Close-range | Windows
IMAGINE Photogrammetry | Aerial | Windows
Photomodeler | Aerial, Close-range | Windows
Pix4Dmapper | Aerial | Windows, macOS, Linux
RealityCapture | Aerial, Close-range | Windows
SOCET GXP | Aerial | Windows
Trimble Inpho | Aerial, Close-range | Windows
WebODM | Aerial | Windows, macOS
Table 2. A comparison between UAV-based SFM and UAV-based LIDAR point clouds.

Item | UAV-SFM | UAV-LIDAR
Hardware
 GCP | Yes | Optional
 GNSS-IMU | Optional | Yes
 Airframe | Any | Large
 Cost | Low | High
Operations
 Robustness (light/ground conditions) | Low | High
 Flight altitude | Various | Low
 Flight time | Long | Short
Data Quality
 Precision | mm–cm | cm
 Density | High | Medium
 Imagery | Yes | No
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
