Article

On-Site 4-in-1 Alignment: Visualization and Interactive CAD Model Retrofitting Using UAV, LiDAR’s Point Cloud Data, and Video

Graduate School of Advanced Imaging Science, Multimedia, and Film, Chung-Ang University, Seoul 06974, Korea
* Author to whom correspondence should be addressed.
Sensors 2019, 19(18), 3908; https://doi.org/10.3390/s19183908
Submission received: 15 July 2019 / Revised: 4 September 2019 / Accepted: 7 September 2019 / Published: 10 September 2019

Abstract

Acquisition of 3D point cloud data (PCD) using a laser scanner and aligning it with a video frame is a new approach that is efficient for retrofitting comprehensive objects in heavy pipeline industrial facilities. This work contributes a generic framework for interactive retrofitting in a virtual environment and an unmanned aerial vehicle (UAV)-based sensory setup design to acquire PCD. The framework adopts a 4-in-1 alignment: a point cloud registration algorithm aligns the pre-processed PCD with the partial PCD, and a frame-by-frame registration method performs the video alignment. This work also proposes a virtual interactive retrofitting framework that uses predefined 3D computer-aided design (CAD) models with a customized graphical user interface (GUI) and visualizes the 4-in-1 aligned video scene from a UAV camera in a desktop environment. Trials were carried out using the proposed framework in a real environment at a water treatment facility. A qualitative and quantitative study evaluated the performance of the proposed generic framework with participants, using an appropriate questionnaire and a retrofitting task-oriented experiment. Overall, it was found that the proposed framework could be a solution for interactive 3D CAD model retrofitting on a combination of UAV sensory setup-acquired PCD and real-time camera video in heavy industrial facilities.

1. Introduction

Due to increased availability, lower prices, and the development of virtual techniques, 3D scene (data) acquisition of real-world environments has drawn much interest in related research and development work. The motivation behind these efforts is to represent real-world environments in 3D. These representations have various applications, including mapping, renovation, reverse engineering, retrofitting, industrial design, and semantic analysis of complex buildings [1,2].
A variety of scanners are available to serve the growing demand for 3D scene acquisition across applications. These scanners are based on laser or light-beam return times and can be divided into two groups: high-resolution (dense data) and low-resolution (sparse data) scanners. High-resolution scanners (e.g., Trimble) need more time to complete a full scan because they generate a highly dense grid of points; hence, they are unsuitable for real-time applications.
Meanwhile, low-resolution scanners (Microsoft Kinect V2, Hokuyo, and Velodyne LiDAR) are commonly used in real-time applications because they require less time to generate 3D data [3] and are suitable for 3D environment analysis. LiDAR scanners are commonly used on UAVs [4,5], robots [6], and autonomous cars [7]. Laser sensor technology thus provides accurate geometric information by capturing complicated surfaces with a variety of methods.
The sensors used in these applications produce massive amounts of 3D data. Typical interactions with 3D data, including visualization, retrofitting, and presentation, are among the biggest challenges at present, because interaction techniques must allow users to choose the best possible viewpoint for their analysis.
This work was motivated by the prototype implementation proposed in our previous work [8] and extends it to make the system operational in real time for the virtual retrofitting of industrial plants where human reachability is difficult (e.g., thermal, petrochemical, and nuclear power stations).
Maintaining and upgrading heavy industrial facilities often requires components to be redesigned and/or added. Validating these upgrades (retrofitting) is time-consuming, tedious, and expensive. Hence, virtual retrofitting with 3D CAD models can help make retrofit decisions faster, easier, and more economical. Intuitive interaction methodologies for the virtual retrofitting of CAD models are therefore urgently required.
Figure 1 shows the proposed novel generic framework for virtual retrofitting, intended for making interactive virtual changes and upgrading existing industrial facilities. To align the pre-processed PCD with the physical world, partial PCD was acquired from a UAV-mounted LiDAR; a CAD model was then interactively retrofitted in the virtual environment and visualized on a flat monitor.
The rest of this paper is organized as follows. Section 2 discusses earlier work related to point cloud acquisition, visualization, point cloud registration algorithms, and alignment in video. Section 3, Section 4, and Section 5 describe the proposed framework, the experiments, and the results of applying the framework to a practical case study. Finally, conclusions and future work are discussed in Section 6.

2. Related Works

Adequate visualization facilitates an easier, deeper, and better understanding of the data and the underlying physical features for users [9,10,11]. Visualization techniques are considered more intuitive, interactive, and immersive [12]. Interactive manipulation and control of visualization allows users to focus on their region of interest very quickly [13]. Interactions in a 3D environment give a more realistic symbolic representation of real-world objects, which enhances users’ memory of the environment [14].
Modern remote sensing technologies such as 3D laser scanners and image-based 3D scene construction are in increasing demand. Laser scanners have been a common tool for the acquisition of 3D object geometry across many industries, with applications in civil infrastructure design, maintenance, as-built construction verification, reverse engineering, and part inspection [15,16]. They can acquire 3D shapes with detailed geometric information. Generally, 3D data are represented in the form of point cloud models, mesh models, and geometric models, among which point cloud models are the basis [17]. Laser scanners measure distances from the sensor to nearby surfaces with millimeter-to-centimeter accuracy at speeds of thousands to hundreds of thousands of point measurements per second. However, manually acquiring PCD in heavy industrial plants with numerous pipelines is tedious, time-consuming, and laborious [18]. A UAV can provide spatial sensory information at a much higher resolution by inspecting at considerably closer range [19] and can access many environments in which human access is restricted. UAVs are widely used for applications such as aerial surveillance, environmental monitoring, traffic monitoring, and precision agriculture [20,21] and are capable of carrying payloads over long distances. Thus, a laser scanner mounted on a UAV could map an entire industrial environment and produce a comprehensive PCD.
There has been little work on virtual retrofitting of existing plant facilities in recent decades. The current framework proposes virtual retrofitting as an alternative to the use of commercial software by professionals [22]. Using CAD models in a retrofitting or upgrade process is much less expensive when a retrofit has been solid-modeled on a computer and analyzed before the part or change is implemented on-site [23]. Therefore, CAD model-based retrofitting is needed as a process to re-engineer existing complex facilities.
For the alignment of PCD, the proposed framework uses the Generalized Iterative Closest Point (G-ICP) registration algorithm [24]. Many efficient registration algorithms have been proposed over the last three to four decades; with regard to accuracy, performance, and robustness, the Iterative Closest Point (ICP) algorithm [25] is best suited to our needs. Several studies have modeled real environments using an RGB-D camera that represents the environments as point clouds. The work in [26] on 3D mapping and modeling of indoor spaces with an RGB-D camera investigates recovery from registration failures and coverage through visual inspection. The method proposed in [27] integrates information from video sequences into existing reference 3D point clouds, showing how to extract local 3D information from the video data to allow incremental growing, refinement, and updating of the existing 3D models.
In our previous work [8], we proposed a prototype framework for pipeline virtual retrofitting with a 4-in-1 alignment approach, and experiments were conducted in an indoor experimental scenario. That work used a Velodyne LiDAR both for PCD acquisition (i.e., pre-processed and partial PCD) and for PCD alignment, and an external GoPro camera for the video alignment. The basic setup and preliminary results demonstrated that the 4-in-1 alignment approach can be utilized for pipeline retrofitting applications.
Hence, the proposed framework extends the 4-in-1 alignment approach [8] to an outdoor environment with real-time PCD acquisition using a UAV and various sensory setups (Trimble TX5, UAV-mounted Velodyne LiDAR, and camera) at an on-site water treatment facility. The current framework provides precise insertion/modification of the CAD models responsible for updating existing industrial facilities in a desktop environment.

3. Proposed Retrofitting Framework

This study’s main goal is to develop a framework for virtual retrofitting in a desktop environment, providing an approach for upgrading comprehensive engineering models in heavy industrial facilities to support decision-making for, and analysis of, retrofitting projects. Traditional retrofit projects involve engineers visiting a site multiple times to take field measurements for their designs.
The proposed framework for virtual retrofitting helps reduce the errors and interference that can arise in on-site construction work. This section discusses the proposed framework.

3.1. 3D Point Cloud Acquisition

The shape of a 3D model with detailed geometric information is acquired during the scanning process; various techniques and a wide range of hardware devices can be used to acquire the 3D model.

3.1.1. Pre-Processed PCD

In this study, a water treatment facility at the Korea Institute of Construction Technology, with various pipe diameters, was used for the experiment (Figure 2). The commercial Trimble TX5 3D laser scanner, with an accuracy of up to ±2 mm, as shown in Figure 2, was used to acquire a pre-processed PCD of the experimental site. A single scan may contain several million 3D points. Since no single scanning position can capture all surfaces within a facility, scans must be obtained from multiple positions; hence, the scanner was positioned at different locations. The Trimble TX5 comes with professional software that registers and fuses the individual scans into a single complete model. Figure 3 shows the resulting pre-processed PCD.

3.1.2. UAV Setup

In this study, a DJI Matrice 100 UAV with a TB47D battery was used, and the Velodyne LiDAR was mounted on the UAV to acquire partial PCD of the scanning environment. The UAV provided stabilized flight and 13 min of hover time with a maximum payload of 1000 g; Table 1 lists the DJI Matrice 100 technical specifications [28]. The UAV also carried a Zenmuse X3 gimbal camera to provide the real-time video feed for the video alignment process, as shown in Figure 1. A DROK voltage regulator was used to share the UAV’s battery power with the Velodyne LiDAR, and an Ethernet cable transferred PCD from the Velodyne LiDAR to the Manifold. The complete hardware integration of the Velodyne sensor, Zenmuse X3 gimbal camera, and Manifold onboard computer with the UAV is shown in Figure 4.

3.1.3. Real-Time Partial PCD

The Velodyne LiDAR Puck LITE was mounted on the UAV, as shown in Figure 1 and Figure 5, to acquire partial PCD. It is a lightweight version specifically designed to meet the relatively low UAV weight restrictions. The sensor is a 16-channel LiDAR with a 360° horizontal and ±15° vertical field of view.
The sensor has low power consumption, scans the environment in 3D at up to 20 Hz while generating about 300,000 points per second with a maximum range of 100 m, and weighs 590 g, making it ideal for mounting on a UAV. The orientation of the UAV’s scanning platform at a fixed altitude was obtained through an inertial measurement unit (IMU) sensor (orientation estimation from an IMU is explained in our previous work [5]). To ensure optimal flight performance before the acquisition of partial PCD, the UAV compass was calibrated with its payload (Velodyne LiDAR, camera, and onboard computer) at the experimental site, following the DJI calibration procedure in [29] (Figure 6).
The entire scanning process was implemented on the onboard computer (DJI Manifold) mounted on the UAV using the Robot Operating System (ROS) framework [30]. It was triggered by a command sent from a remotely connected computer to the onboard computer, and the data were transferred to the remote computer via Secure Shell (SSH) for visualization and retrofitting.
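The recording side of this process is conceptually simple. Below is a minimal sketch, not the authors' exact code, of an onboard ROS node in C++ that records incoming Velodyne sweeps as PCD files for later transfer over SSH; the topic name follows the standard Velodyne ROS driver default, while the node name and output path are assumptions.

```cpp
// Minimal sketch of an onboard ROS node that records Velodyne sweeps as PCD
// files for later transfer to the remote computer over SSH.
// Topic follows the Velodyne driver default; node name and path are assumptions.
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/io/pcd_io.h>
#include <string>

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
    pcl::PointCloud<pcl::PointXYZ> cloud;
    pcl::fromROSMsg(*msg, cloud);                 // convert ROS message to a PCL cloud
    static int sweep = 0;                         // running index for saved sweeps
    pcl::io::savePCDFileBinary(
        "/home/manifold/scans/sweep_" + std::to_string(sweep++) + ".pcd", cloud);
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "partial_pcd_recorder");
    ros::NodeHandle nh;
    // The Velodyne ROS driver publishes assembled sweeps on /velodyne_points.
    ros::Subscriber sub = nh.subscribe("/velodyne_points", 1, cloudCallback);
    ros::spin();                                  // runs until stopped from the remote computer
    return 0;
}
```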

3.2. PCD Alignment

PCD alignment used G-ICP [24], a variant of the very popular Iterative Closest Point algorithm [31]. It was adopted to check the correct orientation and relative position against the pre-processed PCD in a global coordinate system. Alignment with the real world increases the accuracy of the virtual retrofitting. The key features of the G-ICP algorithm are outlined in three steps (a code sketch follows the list):
  • Determine pairs of corresponding points in pre-processed PCD and partial PCD.
  • Estimate a transformation that minimizes the distances between the correspondences.
  • Apply the transformation to pre-processed PCD to align with partial PCD.
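A minimal sketch of these three steps using the Point Cloud Library's G-ICP implementation is given below; the variable names are illustrative, and the iteration cap simply mirrors the 16 iterations reported in Table 2 rather than the framework's actual settings.

```cpp
// Minimal sketch of the G-ICP alignment step using the Point Cloud Library.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/registration/gicp.h>

Eigen::Matrix4f alignPreprocessedToPartial(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& preprocessed, // source: pre-processed PCD
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& partial)      // target: UAV-acquired partial PCD
{
    pcl::GeneralizedIterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> gicp;
    gicp.setInputSource(preprocessed);
    gicp.setInputTarget(partial);
    gicp.setMaximumIterations(16);        // Table 2 reports convergence within 16 iterations
    pcl::PointCloud<pcl::PointXYZ> aligned;
    gicp.align(aligned);                  // correspondences, transform estimation, application
    return gicp.getFinalTransformation(); // rigid transform taking the pre-processed PCD
                                          // onto the partial PCD
}
```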
The detailed PCD alignment process is given in our previous research [8]. Table 2 shows the alignment accuracy achieved with the G-ICP algorithm, which ran for 16 iterations as the pre-processed PCD was transformed to align with the physical environment. In each iteration, the transformation between the pre-processed and partial PCD updated the rotations (roll, pitch, and yaw) and reduced the distance from the initial to the aligned position. Figure 7 and Figure 8 show the PCD before and after alignment in the proposed framework and also show the customized GUI, which provides functionality such as:
  • Connect—Connects to the onboard computer from a remote computer through SSH.
  • GetData—Copies the partial PCD from the onboard computer to the remote computer.
  • LoadData—Visualizes the acquired partial PCD from the UAV sensory setup.
  • ModelData—Loads the pre-processed PCD for alignment with the partial PCD.
  • AlignCloud—Aligns the pre-processed PCD with the partial PCD using the G-ICP algorithm.
  • AlignVideo—Aligns the pre-processed PCD, partial PCD, and retrofitted CAD model with the real-time video.

3.3. Efficient Visualization and Interactive Retrofitting

The proposed framework provides a means to visualize and analyze a retrofit by interacting with CAD models and PCD for better decision making. The software setup was developed in a C++ programming environment using the visualization toolkit (VTK), an open-source software system [32], and Qt for the GUI.
The VTK visualization pipeline starts with a source, which provides the initial data input from files; this is fed into filters, which are optional and modify the data through operations such as conversion, reduction, or merging. Data from the filters are transferred to a mapper, which converts them into renderable objects. Next, actors adjust visible properties such as transparency and color. The remaining work is done by renderers and windows, which create a view-port on the screen where mouse-based interaction can take place; a minimal sketch of this pipeline is given after the list below. Here, the PCD was rendered for visualization in a desktop environment that permits user analysis and retrofitting by upgrading the existing model. Figure 3 shows the customized GUI application for visualization, which provides interaction functionality such as:
  • PointCloudData—Loads the aligned pre-processed PCD for visualization.
  • Model 1, 2—Enables the user to switch between the models for mouse-based interactive retrofitting.
  • Interaction—Enables the user to perform transformation interactions (translate, rotate, and scale).
  • Camera—Enables the user to set the best view-port for analysis.
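The following is a minimal sketch of the source, mapper, actor, renderer pipeline described above; the PLY file name and the opacity value are illustrative assumptions, not the framework's actual configuration.

```cpp
// Minimal sketch of the VTK source -> mapper -> actor -> renderer pipeline.
#include <vtkSmartPointer.h>
#include <vtkPLYReader.h>
#include <vtkPolyDataMapper.h>
#include <vtkActor.h>
#include <vtkProperty.h>
#include <vtkRenderer.h>
#include <vtkRenderWindow.h>
#include <vtkRenderWindowInteractor.h>

int main()
{
    auto reader = vtkSmartPointer<vtkPLYReader>::New();      // source: initial data from a file
    reader->SetFileName("aligned_pcd.ply");                  // assumed export of the aligned PCD

    auto mapper = vtkSmartPointer<vtkPolyDataMapper>::New(); // mapper: data -> renderable geometry
    mapper->SetInputConnection(reader->GetOutputPort());

    auto actor = vtkSmartPointer<vtkActor>::New();           // actor: visible properties
    actor->SetMapper(mapper);
    actor->GetProperty()->SetOpacity(0.8);                   // illustrative transparency setting

    auto renderer = vtkSmartPointer<vtkRenderer>::New();     // renderer + window: the view-port
    renderer->AddActor(actor);
    auto window = vtkSmartPointer<vtkRenderWindow>::New();
    window->AddRenderer(renderer);

    auto interactor = vtkSmartPointer<vtkRenderWindowInteractor>::New();
    interactor->SetRenderWindow(window);                     // mouse-based interaction

    window->Render();
    interactor->Start();
    return 0;
}
```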
Two virtual CAD models were designed for virtual retrofitting through user interactions. The models were designed using geometric information acquired with the LiDAR.

3.3.1. CAD Model 1

Figure 9 shows the predefined CAD Model 1, designed in AutoCAD for virtual retrofitting with the original pipeline facility as a reference. It introduces a T-joint, as highlighted in Figure 9, to increase water-flow efficiency. Figure 10 shows Model 1 retrofitted with the PCD through mouse-based interactions.

3.3.2. CAD Model 2

Figure 11 shows the predefined CAD Model 2; as highlighted in the figure, the original pipeline facility has been replaced by an L-joint that reduces the pipeline complexity and the time required for water to flow through the pipeline system. Figure 12 shows Model 2 retrofitted with the PCD through mouse-based interactions.

3.4. 3D Point Cloud Alignment in Video

A DJI Zenmuse X3 gimbal camera was mounted on the UAV to obtain a real-time video feed of the real-world environment. Figure 13 shows the setup prepared to receive the real-time video feed on a remote computer. In the current hardware setup, the onboard computer running ROS had direct access to the real-time video; hence, video control access was forwarded from the remote computer through SSH to the UAV remote controller’s display device (mobile/tablet).
The DJI Go application provides the functionality to stream real-time video from the UAV camera to a customized broadcasting channel. A remote computer running the Windows 10 operating system, prepared with the Real-Time Messaging Protocol (RTMP) [33], OpenCV, and Open Broadcaster Software [34], subscribes to that broadcasting channel to receive the real-time video feed.
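As a sketch of the subscription step, OpenCV's FFmpeg backend can open an RTMP stream directly; the URL below is a placeholder for the customized broadcasting channel, not the address used in the experiments.

```cpp
// Minimal sketch of subscribing to the RTMP broadcast with OpenCV (FFmpeg backend).
#include <opencv2/opencv.hpp>

int main()
{
    // Placeholder stream URL for the customized broadcasting channel.
    cv::VideoCapture cap("rtmp://192.168.0.10/live/uav", cv::CAP_FFMPEG);
    if (!cap.isOpened()) return -1;      // stream not reachable

    cv::Mat frame;
    while (cap.read(frame))              // grab real-time frames from the broadcast
    {
        cv::imshow("UAV video feed", frame);
        if (cv::waitKey(1) == 27) break; // ESC to quit
    }
    return 0;
}
```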
Calibration between the camera and LiDAR sensors is required to accurately align the PCD information in an image. To calibrate the camera with the LiDAR in the proposed framework, the sensor calibration approach presented in [35] was used. This approach uses a special 3D marker whose correspondences are easily detected in both camera and LiDAR data, enabling deterministic estimation of the translation between the two sensors. The 3D marker calibration enables fine alignment of the sensors before data acquisition from the camera and LiDAR.
The LiDAR-generated 3D PCD were automatically aligned with the 2D video frames using a frame-to-frame registration method [26]. Before frame-to-frame registration, the PCD in the image plane and the orientation and position of the object image plane were estimated as described in our previous research [8]. Because PCD alignment in the 2D image plane is time-consuming, only a few frames were periodically selected for registration with the point cloud. Figure 14a,b shows the results of the 4-in-1 alignment in video frames using the proposed framework at the on-site water treatment facility, with camera views from two different UAV hovering positions. The aligned pre-processed PCD (RGB), partial PCD (red), and retrofitted CAD model (blue) can be recognized in the video frame.
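A minimal sketch of the per-frame projection step is given below, assuming a pinhole camera model with extrinsics obtained from the LiDAR-camera calibration [35]; the function and variable names are illustrative, not the framework's actual code.

```cpp
// Minimal sketch: project LiDAR points into a selected video frame using the
// LiDAR-camera extrinsics and the camera intrinsics (pinhole model).
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Point2f> projectCloudToFrame(
    const std::vector<cv::Point3f>& lidarPoints, // 3D points in the LiDAR frame
    const cv::Mat& rvec, const cv::Mat& tvec,    // extrinsics from the calibration [35]
    const cv::Mat& K, const cv::Mat& dist)       // camera intrinsics and distortion
{
    std::vector<cv::Point2f> imagePoints;
    cv::projectPoints(lidarPoints, rvec, tvec, K, dist, imagePoints);
    return imagePoints;                          // pixel positions for overlay on the frame
}
```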

4. Retrofitting Task Oriented Evaluation

The proposed framework for retrofitting applications was trialed on-site at a water treatment facility, as shown in Figure 15. Technological advances in equipment periodically force existing water treatment plants to be upgraded, and retrofitting occurs at existing plants for many reasons, such as hydromodifications, reinforcement upgrades, and the integration of new technologies.
At present, most existing retrofitting approaches are either traditional, involving physical on-site effort, or are carried out manually by professionals with the help of commercial software. Therefore, the performance of the proposed framework was evaluated with a retrofitting task and a user satisfaction survey involving fifteen volunteers. More details about the retrofitting task and the participants are given in the following subsections.
The proposed framework was implemented on an Alienware laptop running the Windows 10 operating system with an Intel Core i9-8950HK processor, 32 GB of random access memory, and an NVIDIA GeForce GTX 1080 GPU (Nvidia Corporation, Santa Clara, CA, USA).

4.1. Participants

A group of fifteen participants (five female and ten male) volunteered to take part in the user evaluation study and successfully performed Tasks 1 and 2. The participants ranged in age from 25 to 36 years, with a mean (M) of 29.6 years and a standard deviation (SD) of 3.65. All participants were given a brief verbal description of the evaluation task; they were regular computer users (at least 3 h per day), but none had any prior experience with retrofitting.
A separate group of eight participants was invited for the comparison evaluation task (Task 3). These participants were also regular computer users, but with knowledge of retrofitting.

4.2. Retrofitting Task Procedure

Participants took part in the retrofitting task evaluation individually. Prior to starting, each participant was given a short oral presentation about the user study, including an introduction to the framework and instructions on how to use the retrofitting GUI through interactions such as translation, rotation, and scaling. All participants were required to confirm their understanding of these instructions and of the requirements of the experimental task; this familiarized them with the interactions.
After the oral introduction, participants started the task directly; no training period was provided before the formal task. The task was divided into three categories:

4.2.1. Task 1

In this task, the proposed CAD Model 1, shown in Figure 9, was displaced along either the X- or the Y-axis, with a small change in orientation relative to the PCD. Participants were asked to retrofit CAD Model 1 against the PCD using mouse-based transformation interactions, as described in Section 3.3 (Interaction functionality), in five trials.

4.2.2. Task 2

In this task, the proposed CAD Model 2, shown in Figure 11, was displaced along both the X- and Y-axes, with a larger change in orientation relative to the PCD. Here, participants needed to perform multiple rotations, translations, and scaling operations to match the size and orientation of the pipelines in the PCD of the water treatment facility. Participants were asked to perform the retrofitting task in five trials using the interaction functionality.
During these tasks (Section 4.2.1 and Section 4.2.2), as an objective measure, the actual time-to-complete the retrofit of each model, which defines efficiency given accurate (effective) completion, was recorded for each participant and trial, as shown in Table 3 and Table 4. The goal of dividing the task was to assess the suitability of the proposed framework for both simple and complex retrofitting tasks.

4.2.3. Task 3

To evaluate the performance of the proposed framework, a comparison evaluation was carried out with a control group and an experimental group. The eight separately invited participants (Section 4.1) were assigned to the control group, and eight of the original fifteen participants formed the experimental group. The goal of the experiment was the same as in Task 2 (Section 4.2.2): participants in the experimental group used the proposed framework to perform the retrofitting task, while participants in the control group performed the same task with the open-source software CloudCompare [36]. The independent variable was the retrofitting interface, allowing us to compare the proposed framework’s performance (time-to-complete) with that of CloudCompare.
Once the retrofitting tasks (Section 4.2.1, Section 4.2.2 and Section 4.2.3) were finished, we collected the participants’ qualitative feedback on the proposed framework through a questionnaire and a short interview as the subjective measure.

5. Results

In this section, we report the results of the participants’ task evaluations through objective and subjective measures.

5.1. Objective Measures for Task 1 and Task 2

5.1.1. Efficiency

Efficiency was defined as the time-to-complete each retrofitting task and was used as one of the objective measures in the evaluation of the proposed framework. Each participant performed the tasks in five trials, providing a total of 150 experiments across both CAD models. Table 3 and Table 4 show the time-to-complete for the retrofitting tasks over five trials for each model. For Model 1, the times decreased significantly from trial 1 (M = 24.56 s, SD = 3.24) to trial 5 (M = 13.68 s, SD = 2.36). Similarly, for Model 2, the times decreased significantly from trial 1 (M = 80.53 s, SD = 4.93) to trial 5 (M = 65.52 s, SD = 4.32). Figure 16 shows the mean time-to-complete for each retrofitting task over the five trials; it shows that participants became more acquainted with the system with every trial.

5.1.2. Accuracy

In addition to efficiency, accuracy was measured as a second objective measure in the evaluation of the proposed framework. For the accuracy evaluation, we considered:
  • Four key points in each CAD model and the PCD.
  • For each user interaction, the distance between corresponding key points in the CAD model and the PCD was checked against a minimum threshold distance (T_Dmin; set to 2 mm in this paper).
Participants carried out the retrofitting task through interactions until the CAD model turned blue, which occurred once the model was transformed to within T_Dmin. The color change in the CAD model represented a successful retrofit and the end of the retrofitting task.
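A minimal sketch of this acceptance test is given below, assuming Euclidean distances between the four corresponding key points; the structure and names are illustrative.

```cpp
// Minimal sketch of the accuracy check: the retrofit is accepted (and the CAD
// model recolored blue) once every key-point pair is within T_Dmin (in mm).
#include <array>
#include <cmath>

struct Point3 { double x, y, z; };

bool retrofitWithinThreshold(const std::array<Point3, 4>& cadKeyPoints,
                             const std::array<Point3, 4>& pcdKeyPoints,
                             double tdMinMm = 2.0)   // T_Dmin from the paper
{
    for (std::size_t i = 0; i < cadKeyPoints.size(); ++i)
    {
        const double dx = cadKeyPoints[i].x - pcdKeyPoints[i].x;
        const double dy = cadKeyPoints[i].y - pcdKeyPoints[i].y;
        const double dz = cadKeyPoints[i].z - pcdKeyPoints[i].z;
        if (std::sqrt(dx * dx + dy * dy + dz * dz) > tdMinMm)
            return false;   // at least one key point still outside the threshold
    }
    return true;            // all four key points within T_Dmin: task complete
}
```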

5.2. Objective Measures for Task 3

For the objective measure of the proposed framework, we observed the performance of the interface in the two groups (control and experimental) in Task 3. Table 5 shows the control group participants’ time-to-complete for retrofitting Model 2 using the CloudCompare interface; for comparison, we used the times of the first eight participants in Table 4. A paired t-test showed a highly significant difference in time-to-complete between the proposed framework and CloudCompare, t(15) = 2.78, p = 4.35 × 10⁻⁶. Figure 17 shows the overall mean time-to-complete for each retrofitting process; participants using the proposed framework interface (M_PF = 75.38 s, SD_PF = 0.36) took significantly less time on average than those using CloudCompare (M_CC = 238.63 s, SD_CC = 1.97).
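For reference, the following is a minimal sketch of the paired t-statistic computation reported above, assuming each pair consists of one time-to-complete per interface; it illustrates the named analysis and is not the authors' code.

```cpp
// Minimal sketch of the paired t-statistic: t = mean(d) / (SD(d) / sqrt(n)),
// where d is the per-pair difference in time-to-complete between interfaces.
#include <cmath>
#include <vector>

double pairedTStatistic(const std::vector<double>& proposedTimes,
                        const std::vector<double>& cloudCompareTimes)
{
    const std::size_t n = proposedTimes.size();   // assumes equal-length, paired samples
    std::vector<double> d(n);
    double mean = 0.0;
    for (std::size_t i = 0; i < n; ++i)
    {
        d[i] = proposedTimes[i] - cloudCompareTimes[i];
        mean += d[i];
    }
    mean /= n;

    double var = 0.0;                             // sample variance of the differences
    for (double di : d) var += (di - mean) * (di - mean);
    var /= (n - 1);

    return mean / std::sqrt(var / n);             // compare against t-distribution, df = n - 1
}
```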

5.3. Subjective Measure Results for Task 3: User Satisfaction and Feedback from Questionnaire

To determine user satisfaction with the proposed framework, an appropriate user satisfaction questionnaire [37,38] was adopted and completed by the participants at the end of the task. These are simple and widely used survey questions developed for the subjective measurement of system usability.
The five questions were designed to assess immersion, consistency, efficiency, interface quality, and ease of use when performing retrofitting. Participants were asked to rate their responses on a 5-point scale (from 1 (strongly disagree) to 5 (strongly agree)).
  • Q1: How well could you interact or manipulate models in the virtual environment?
  • Q2: How well could you investigate models from multiple viewpoints?
  • Q3: How helpful was the interface when performing the assigned tasks?
  • Q4: I felt very confident using the system.
  • Q5: I needed to learn a lot of things before using it.
Figure 18 shows the average ratings for the questionnaire. Almost all ratings lie between neutral and strongly agree, indicating no negative impressions of the proposed framework. There is no significant difference in the ratings for Q1; in fact, the mean rating of the proposed framework (M = 4.30, SD = 0.50) is slightly better than that of CloudCompare (M = 4.20, SD = 0.37) regarding the interaction/manipulation of models in the virtual environment. Q5 obtained the minimum rating for the proposed framework, indicating that participants did not need to learn much before engaging in retrofitting tasks compared to CloudCompare, since the proposed framework’s interface provides more intuitive interactions and simpler functionality.
In the short interview conducted at the end of the study, participants were asked to state an overall preference regarding the proposed framework. A majority of the participants felt that the proposed framework would be very useful in industrial facilities because it reduces labor cost, time, and manual effort, and that it is efficient for performing modifications and upgrades of an existing facility.

5.4. Further Discussion on Observation Results and Limitation

The results of the objective measures clearly indicate that, over the trials, users become acquainted with the framework and complete each successive trial in less time. This suggests that a user can intuitively retrofit predefined CAD models and analyze the retrofitted industrial facilities to make decisions before actual implementation.
Moreover, the user assessment was consistent with the objective results. On the other hand, during the tasks, participants noticed a delay in response while interacting with and visualizing the PCD. This is due to the large size of the PCD; addressing this limitation by introducing down-sampling methods to improve responsiveness is left as future work.

6. Conclusions and Future Work

Maintaining and upgrading plant facilities often requires components to be redesigned and/or added. Validating these upgrades (retrofitting) is time-consuming, tedious, and expensive; hence, virtual retrofitting with 3D CAD models can help make retrofit decisions faster, easier, and more economical. This paper proposes a novel generic framework for interactive 3D CAD model-based retrofitting with an efficient 4-in-1 alignment in a desktop environment before physical on-site implementation.
An on-site trial of the proposed framework was carried out at a water treatment facility to evaluate its efficiency, ease of use, and performance. A task-oriented evaluation provided objective measures, and a user satisfaction questionnaire provided subjective measures, with 15 participants over several trials. The overall results indicate that the proposed framework could be a solution for virtual interactive retrofitting, reducing labor cost, time, and manual effort before actual on-site upgrading.
Future work includes further comparative investigations and more usability studies, as well as the implementation of immersive visualization in a head-mounted display with suitable interaction methods for a better understanding of the actual environment. In the proposed framework, the UAV hovered at a stable height during the trial; future work can also include autonomous path-planning to acquire PCD from different views and the incorporation of GNSS for better positional accuracy. Further, we intend to improve the alignment of video with LiDAR data by refining the calibration between the LiDAR and the camera.

Author Contributions

Conceptualization and methodology: P.K.B.N., A.K.P., and Y.H.C.; software, validation, formal analysis, and investigation: A.K.P., P.K.B.N., C.B.; writing-original draft preparation: P.K.B.N. and A.K.P.; writing-review and editing: P.K.B.N. and A.K.P.; visualization: A.K.P., C.B., and P.K.B.N.; data curation: A.K.P. and P.K.B.N.; supervision, project administration: A.K.P. and Y.H.C.; funding acquisition: Y.H.C.

Funding

This work was supported by the National Research Foundation (NRF) grant (2016R1D1A1B03930795) funded by the Korea government (MEST) and by the Ministry of Science and ICT of Korea under the Software Star Lab. program (IITP-2018-0-00599) supervised by the Institute for Information and Communications Technology Promotion.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
UAV: Unmanned aerial vehicle
PCD: Point cloud data
CAD: Computer-aided design
G-ICP: Generalized iterative closest point
GUI: Graphical user interface
VTK: Visualization toolkit

References

  1. Ochmann, S.; Vock, R.; Wessel, R.; Tamke, M.; Klein, R. Automatic generation of structural building descriptions from 3D point cloud scans. In Proceedings of the 2014 International Conference on Computer Graphics Theory and Applications (GRAPP), Lisbon, Portugal, 5–8 January 2014; pp. 1–8. [Google Scholar]
  2. Patil, A.K.; Holi, P.; Lee, S.K.; Chai, Y.H. An adaptive approach for the reconstruction and modeling of as-built 3D pipelines from point clouds. Autom. Constr. 2017, 75, 65–78. [Google Scholar] [CrossRef]
  3. Miądlicki, K.; Pajor, M.; Saków, M. Real-time ground filtration method for a loader crane environment monitoring system using sparse LIDAR data. In Proceedings of the 2017 IEEE International Conference on INnovations in Intelligent SysTems and Applications (INISTA), Gdynia, Poland, 3–5 July 2017; pp. 207–212. [Google Scholar]
  4. de Haag, M.U.; Bartone, C.G.; Braasch, M.S. Flight-test evaluation of small form-factor LiDAR and radar sensors for sUAS detect-and-avoid applications. In Proceedings of the 2016 IEEE/AIAA 35th Digital Avionics Systems Conference (DASC), Sacramento, CA, USA, 25–29 September 2016; pp. 1–11. [Google Scholar]
  5. Kumar, G.A.; Patil, A.; Patil, R.; Park, S.; Chai, Y. A LiDAR and IMU integrated indoor navigation system for UAVs and its application in real-time pipeline classification. Sensors 2017, 17, 1268. [Google Scholar] [CrossRef] [PubMed]
  6. Lima, T.A.; do Nascimento Forte, M.D.; Nogueira, F.G.; Torrico, B.C.; de Paula, A.R. Trajectory tracking control of a mobile robot using lidar sensor for position and orientation estimation. In Proceedings of the 2016 12th IEEE International Conference on Industry Applications (INDUSCON), Curitiba, Brazil, 20–23 November 2016; pp. 1–6. [Google Scholar]
  7. Petrovskaya, A.; Thrun, S. Model based vehicle detection and tracking for autonomous urban driving. Auton. Robot. 2009, 26, 123–139. [Google Scholar] [CrossRef]
  8. Patil, A.K.; Kumar, G.A.; Kim, T.H.; Chai, Y.H. Hybrid approach for alignment of a pre-processed three-dimensional point cloud, video, and CAD model using partial point cloud in retrofitting applications. Int. J. Distrib. Sens. Netw. 2018, 14. [Google Scholar] [CrossRef] [Green Version]
  9. Bajaj, C. Data Visualization Techniques. Available online: https://books.google.co.kr/books?id=2OVRAAAAMAAJ (accessed on 8 September 2019).
  10. Card, M. Readings in Information Visualization: Using Vision to Think; Morgan Kaufmann: Burlington, MA, USA, 1999. [Google Scholar]
  11. Preiner, R.; Jeschke, S.; Wimmer, M. Auto splats: Dynamic point cloud visualization on the GPU. In Proceedings of the Eurographics Symposium on Parallel Graphics and Visualization, Cagliari, Italy, 13–14 May 2012; pp. 139–148. [Google Scholar]
  12. Burwell, C.; Jarvis, C.; Tansey, K. The potential for using 3D visualization for data exploration, error correction and analysis of LiDAR point clouds. Remote Sens. Lett. 2012, 3, 481–490. [Google Scholar] [CrossRef]
  13. Schünke, L.C.; de Oliveira, L.P.L.; Villamil, M.B. Visualization and analysis of interacting occurrences in a smart city. In Proceedings of the 2014 IEEE Symposium on Computers and Communications (ISCC), Funchal, Portugal, 23–26 June 2014; pp. 1–7. [Google Scholar]
  14. Oulasvirta, A.; Estlander, S.; Nurminen, A. Embodied interaction with a 3D versus 2D mobile map. Pers. Ubiquitous Comput. 2009, 13, 303–320. [Google Scholar] [CrossRef]
  15. Son, H.; Bosché, F.; Kim, C. As-built data acquisition and its use in production monitoring and automated layout of civil infrastructure: A survey. Adv. Eng. Inform. 2015, 29, 172–183. [Google Scholar] [CrossRef]
  16. Khaloo, A.; Lattanzi, D. Robust normal estimation and region growing segmentation of infrastructure 3D point cloud models. Adv. Eng. Inform. 2017, 34, 1–16. [Google Scholar] [CrossRef]
  17. Ma, Z.; Liu, S. A review of 3D reconstruction techniques in civil engineering and their applications. Adv. Eng. Inform. 2018, 37, 163–174. [Google Scholar] [CrossRef]
  18. Agapaki, E.; Miatt, G.; Brilakis, I. Prioritizing object types for modelling existing industrial facilities. Autom. Constr. 2018, 96, 211–223. [Google Scholar] [CrossRef]
  19. Ramalli, G.; Giovani, M.; Pacchiacucchi, F.; Manneschi, M. Pipeline monitoring with drones. Studia Universitatis Babes-Bolyai, Ambientum 2016, 61, 105–118. [Google Scholar]
  20. Savkin, A.V.; Huang, H. Asymptotically optimal deployment of drones for surveillance and monitoring. Sensors 2019, 19, 2068. [Google Scholar] [CrossRef] [PubMed]
  21. Meng, X.; Wang, W.; Leong, B. Skystitch: A cooperative multi-UAV-based real-time video surveillance system with stitching. In Proceedings of the 23rd ACM International Conference on Multimedia, Brisbane, Australia, 26–30 October 2015; pp. 261–270. [Google Scholar]
  22. LaserScan. Applications for Laser Scanning. Available online: http://www.vicosoftware.com/trimble-buildings/laser-scanning-for-construction/4-applications-for-laser-scanning (accessed on 8 September 2019).
  23. Macy, W.D.; Sherwinter, D.J.; Delph, E.R. Method for generating three-dimensional CAD models of complex products or systems. U.S. Patent 6,931,294, 16 August 2005. [Google Scholar]
  24. Segal, A.V.; Haehnel, D.; Thrun, S. Generalized-ICP. Robot. Sci. Syst. 2009, 2, 435. [Google Scholar]
  25. Donoso, F.A.; Austin, K.J.; McAree, P.R. How do ICP variants perform when used for scan matching terrain point clouds? Robot. Auton. Syst. 2017, 87, 147–161. [Google Scholar] [CrossRef] [Green Version]
  26. Du, H.; Henry, P.; Ren, X.; Cheng, M.; Goldman, D.B.; Seitz, S.M.; Fox, D. Interactive 3D modeling of indoor environments with a consumer depth camera. In Proceedings of the 13th International Conference on Ubiquitous Computing, Beijing, China, 17–21 September 2011; pp. 75–84. [Google Scholar]
  27. Vidal, E.; Piotto, N.; Cordara, G.; Burgos, F.M. Automatic video to point cloud registration in a structure-from-motion framework. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 2646–2650. [Google Scholar]
  28. DJI Matrice 100. Available online: https://store.dji.com/product/matrice-100 (accessed on 8 September 2019).
  29. DJI Matrice 100 User Manual. Available online: https://dl.djicdn.com/downloads/m100/M100_User_Manual_EN.pdf (accessed on 8 September 2019).
  30. Quigley, M.; Conley, K.; Gerkey, B.; Faust, J.; Foote, T.; Leibs, J.; Wheeler, R.; Ng, A.Y. ROS: An open-source robot operating system. In Proceedings of the ICRA Workshop on Open Source Software, Kobe, Japan, 12–13 May 2009; Volume 3, p. 5. [Google Scholar]
  31. Besl, P.J.; McKay, N.D. Method for registration of 3-D shapes. In Proceedings of the Sensor Fusion IV: Control Paradigms and Data Structures, Boston, MA, USA, 12–15 November 1991; pp. 586–606. [Google Scholar]
  32. Schroeder, W.J.; Lorensen, B.; Martin, K. The Visualization Toolkit: An Object-Oriented Approach to 3D Graphics; Kitware: New York, NY, USA, 2004. [Google Scholar]
  33. Real-Time Messaging Protocol. Available online: https://en.wikipedia.org/wiki/Real-Time_Messaging_Protocol (accessed on 8 September 2019).
  34. Open Broadcaster Software. Available online: https://en.wikipedia.org/wiki/Open_Broadcaster_Software (accessed on 8 September 2019).
  35. Veľas, M.; Španěl, M.; Materna, Z.; Herout, A. Calibration of RGB camera with velodyne LiDAR. In Proceedings of the WSCG 2014 Communication Papers, Plzen, Czech Republic, 2–5 June 2014. [Google Scholar]
  36. CloudCompare. Available online: http://www.cloudcompare.org (accessed on 8 September 2019).
  37. Witmer, B.G.; Jerome, C.J.; Singer, M.J. The factor structure of the presence questionnaire. Presence Teleoperators Virtual Environ. 2005, 14, 298–312. [Google Scholar] [CrossRef]
  38. Brooke, J. SUS-A quick and dirty usability scale. Usability Eval. Ind. 1996, 189, 4–7. [Google Scholar]
Figure 1. Overall representation of the proposed framework.
Figure 2. Pre-processed PCD acquisition using a Trimble TX5 on-site.
Figure 3. Acquired pre-processed PCD using the Trimble TX5.
Figure 4. Velodyne sensor, Manifold onboard computer, and Zenmuse X3 hardware integration with the UAV.
Figure 5. UAV on-site for the partial PCD acquisition using the Velodyne Puck LITE.
Figure 6. UAV calibration at the experimental site: (Left) holding it horizontally and (Right) vertically.
Figure 7. Pre-processed and partial PCD before alignment.
Figure 8. Pre-processed and partial PCD after alignment.
Figure 9. Predefined CAD Model 1.
Figure 10. PCD after retrofitting with CAD Model 1.
Figure 11. Predefined CAD Model 2.
Figure 12. PCD after retrofitting with CAD Model 2.
Figure 13. Software setup prepared to get the real-time video feed.
Figure 14. Pre-processed PCD, partial PCD, CAD model, and video frame: (a) 4-in-1 alignment in the first view; (b) 4-in-1 alignment in the second view.
Figure 15. Experimental setup at the on-site water treatment facility.
Figure 16. Mean time-to-complete the tasks for each model and trial by participants.
Figure 17. Mean time-to-complete Task 3 using the Proposed Framework (PF) and CloudCompare (CC).
Figure 18. Average ratings of the user satisfaction questionnaire (5-point scale: 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree) for the Proposed Framework (PF) and CloudCompare (CC).
Table 1. Technical specifications for the DJI Matrice 100.

Parameter | Value
UAV type | Quadcopter, with customizable and programmable platform
Battery | Intelligent flight battery, 5700 mAh LiPo 6S
Video output | USB, High-Definition Multimedia Interface (HDMI) Mini
Maximum speed | Ascent: 5 m/s; Descent: 4 m/s
Operating temperature | −10 °C to 40 °C
Table 2. Point cloud alignment accuracy using the G-ICP algorithm.

Iterations | Distance (mm) | Roll (°) | Pitch (°) | Yaw (°)
1 | 6115.48 | −3.5 | −1.9 | 8.7
8 | 7.83 | −2.4 | −1.15 | 6.6
16 | 3.55 | −1.3 | −1.3 | 3.8
Table 3. Task 1: Participants’ time taken for retrofitting Model 1 (seconds).

Trial | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11 | P12 | P13 | P14 | P15
1 | 24.01 | 22.12 | 30.78 | 21.74 | 26.10 | 23.85 | 32.18 | 24.86 | 26.70 | 23.86 | 23.71 | 24.06 | 22.60 | 21.71 | 20.12
2 | 21.58 | 20.75 | 23.84 | 22.81 | 23.71 | 21.93 | 23.02 | 24.10 | 26.08 | 25.10 | 21.30 | 20.90 | 21.76 | 22.90 | 21.70
3 | 17.73 | 21.65 | 25.84 | 18.46 | 24.75 | 21.48 | 21.41 | 19.71 | 21.81 | 19.78 | 18.49 | 17.61 | 16.08 | 17.43 | 18.39
4 | 13.68 | 14.56 | 20.86 | 17.32 | 19.40 | 18.74 | 17.19 | 17.84 | 19.53 | 16.40 | 15.70 | 15.08 | 13.80 | 15.94 | 14.97
5 | 10.86 | 12.27 | 17.25 | 14.92 | 16.74 | 15.08 | 15.71 | 15.60 | 16.46 | 12.08 | 13.08 | 11.08 | 11.00 | 12.00 | 10.98
Table 4. Task 2: Participants’ time taken for retrofitting Model 2 (seconds).

Trial | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11 | P12 | P13 | P14 | P15
1 | 78.17 | 82.00 | 87.10 | 76.01 | 81.50 | 79.49 | 88.89 | 83.47 | 74.17 | 72.14 | 85.60 | 84.71 | 81.64 | 77.10 | 75.92
2 | 72.06 | 78.96 | 81.90 | 75.08 | 80.84 | 78.97 | 85.15 | 81.77 | 70.34 | 67.93 | 80.17 | 81.97 | 77.41 | 73.50 | 73.64
3 | 75.74 | 74.10 | 79.71 | 71.47 | 78.47 | 76.60 | 83.94 | 77.51 | 68.98 | 62.97 | 73.41 | 76.18 | 72.61 | 69.60 | 69.36
4 | 63.18 | 70.18 | 73.40 | 66.85 | 72.14 | 70.16 | 78.97 | 73.90 | 65.35 | 62.40 | 68.73 | 72.96 | 69.47 | 65.19 | 65.49
5 | 60.06 | 63.48 | 71.84 | 64.52 | 69.60 | 67.47 | 71.31 | 69.59 | 61.49 | 59.01 | 64.73 | 70.84 | 65.10 | 61.52 | 62.17
Table 5. Task 3: Control group participants’ time taken for retrofitting Model 2 (seconds).

Trial | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8
1 | 240.36 | 235.80 | 270.84 | 284.52 | 265.50 | 246.81 | 262.81 | 290.53
2 | 215.85 | 215.70 | 250.15 | 255.85 | 260.83 | 225.63 | 268.75 | 275.62
3 | 206.70 | 200.48 | 235.19 | 250.45 | 262.51 | 220.81 | 254.36 | 263.55
4 | 195.08 | 195.62 | 230.18 | 234.58 | 255.48 | 215.76 | 252.84 | 245.81
5 | 190.50 | 190.48 | 215.38 | 229.54 | 235.64 | 210.61 | 245.73 | 242.61
