Peer-Review Record

Automatic Point Cloud Colorization of Ground-Based LiDAR Data Using Video Imagery without Position and Orientation System

Remote Sens. 2023, 15(10), 2658; https://doi.org/10.3390/rs15102658
by Junhao Xu, Chunjing Yao *, Hongchao Ma, Chen Qian and Jie Wang
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 24 April 2023 / Revised: 15 May 2023 / Accepted: 18 May 2023 / Published: 19 May 2023

Round 1

Reviewer 1 Report

The article offers a ground mobile measurement system composed of a LiDAR and a GoPro camera. It also provides a point cloud coloring workflow that uses images to obtain 3D point cloud data with spectral information. The article is very interesting, and we are really in need of low-cost systems that integrate LiDAR point clouds with RGB images, especially since the available systems are either expensive or do not offer such high-resolution RGB images. However, the article needs some modifications to be suitable for publication.

 

General comments:

1. No quantitative assessment of the coloring step was introduced. How can the authors justify their results (Line 693: how can you judge that the results are good)?

2. It would be very informative if you added a figure showing the processing workflow of the four steps.

3. What is the processing time of these steps? It is important to provide some numbers on the processing time.

4. What is the programming language used to implement these different steps?

5. All acronyms should be defined in full when they are first introduced, and the abstract, figures, and tables should each stand alone.

 

Other comments:

1. What does POS in the title stand for?

2. Line 14: What are POS and IMU?

3. Line 16: the word “simple” is not accurate in this sentence. The use of only two sensors (LiDAR and camera) might be simple, but the integration process is more complicated.

4. Line 27: What are SURF and RANSAC?

5. Lines 35-38: how did you assess the coloring accuracy? And what accuracy did you achieve to support its suitability for the mentioned applications?

6. I think the abstract is too long; check the journal requirements. There is no need for the full description of the four steps.

7. Line 247: the graphical abstract (Figure 2) is required by the journal for the webpage and should not be included in the article itself. Check with the editors.

8. Line 258: remove “As shown in Figure 3,”.

9. Figure 4: LiDAR images or LiDAR point clouds? What is the difference between (a) and (b)? Different views?

10. Do you apply Equations 5 and 6 sequentially? If so, the output of Equation 5 should be used as the input to Equation 6, and this should be clarified.

11. Figure 9: indicate what the red patches in Figure (a) are.

12. Figure 11: what is R.O.?

13. Figure 12: define Src, R.O., and LM.

14. Line 622, Line 699: Table 6 >> Table 5.

15. Table 5: indicate that the coordinates of the “control points” are measured from the point cloud.

16. Table 5: indicate that the three sub-columns are the X, Y, and Z coordinates.

17. Figure 13g: explain the meaning of the green-blue color.

18. Support the conclusions with some results.

 Minor editing of English language required.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The research in this paper presents valuable work on automatic point cloud colorization of LiDAR data with a common GoPro camera. This work can be applied to 3D city modeling as a new and highly effective solution. In this regard, the contributions of this paper are sufficient for publication in this journal and are clearly within its scope. Specifically, the work has addressed the following problems: establishing models for radial and tangential distortion to correct video images; establishing a registration method based on normalized Zernike moments to obtain the exterior orientation elements; establishing relative orientation between adjacent video images based on essential matrix decomposition and nonlinear optimization; and proposing a point cloud coloring method based on a Gaussian distribution with central region restriction. In addition, these contributions have been validated with substantial results.
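For context, a minimal illustrative sketch of the first step above, assuming OpenCV's standard Brown-Conrady distortion model (the function and variable names are hypothetical, not taken from the paper):

# Illustrative sketch (not the authors' implementation) of correcting one
# video frame with the Brown-Conrady model: radial terms k1, k2, k3 and
# tangential terms p1, p2, as applied by OpenCV.
import cv2
import numpy as np

def undistort_frame(frame, K, dist_coeffs):
    """Correct lens distortion in one video frame.

    frame: the raw image (H x W x 3 array).
    K: 3x3 camera intrinsic matrix from calibration.
    dist_coeffs: (k1, k2, p1, p2, k3), OpenCV's coefficient ordering.
    """
    return cv2.undistort(frame, K, np.asarray(dist_coeffs, dtype=np.float64))

To improve the quality of the paper, below are my minor comments: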

1) This work focuses on using only LiDAR and camera sensors to perform the colorization in a cost-effective way and mentions that a registration strategy is required. I agree that, to achieve low cost, a POS should not be used. However, I suggest the authors add some discussion or literature on using a cheap IMU in this system, which would not increase the cost much. The main reason an IMU would help is that it improves the frame-association process during registration; typical works are LIO-SAM and VINS-Mono (a minimal sketch of such an IMU-derived rotation prior follows this comment). The requirement to achieve this, however, is careful, low-cost calibration between the IMU and the LiDAR or cameras. I hope the authors will consider this suggestion and expand the discussion with the inclusion of the works: estimation on IMU yaw misalignment by fusing information of automotive onboard sensors; tightly-coupled lidar inertial odometry via smoothing and mapping. This will expand the interest of this work to readers.
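A hypothetical sketch of the rotation prior mentioned in this comment: integrating raw gyroscope rates between two frames gives a rotation estimate that can constrain the feature search. Real pipelines such as LIO-SAM and VINS-Mono use full IMU preintegration with bias and noise terms; all names below are illustrative.

# Integrate gyroscope body rates between two frames into a rotation prior.
import numpy as np

def gyro_rotation_prior(gyro_rates, dt):
    """gyro_rates: N x 3 angular rates (rad/s) sampled at interval dt (s).

    First-order integration only; a deliberate simplification for illustration.
    """
    R = np.eye(3)
    for w in gyro_rates:
        theta = np.asarray(w) * dt          # rotation vector for this sample
        angle = np.linalg.norm(theta)
        if angle < 1e-12:
            continue
        k = theta / angle                   # unit rotation axis
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])  # skew-symmetric cross-product matrix
        # Rodrigues' formula: exp(angle * K)
        R = R @ (np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K))
    return R  # prior rotation between the two frames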

2) The registration is based on SURF coarse matching, a classic feature-matching approach widely used in SLAM. Since the requirement on the registration is high, the registration between consecutive frames could also be based on semantic features from a machine-learning algorithm, such as objects or static features in the environment. In this regard, I suggest the authors justify their use of a non-semantic, feature-based registration method (sketched below for reference). The works: an automated driving systems data acquisition and analytics platform; yolov5-tassel: detecting tassels in RGB UAV imagery with improved yolov5 based on transfer learning; can be referred to when discussing the justification and the advantages of the work in this paper.
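For reference, a minimal sketch of the non-semantic SURF + RANSAC baseline discussed in this comment. SURF is patented and requires opencv-contrib-python built with the nonfree modules enabled; the thresholds are common defaults, not values from the paper.

# SURF coarse matching with Lowe's ratio test, then RANSAC outlier rejection.
import cv2
import numpy as np

def surf_ransac_matches(img1, img2, hessian=400, ratio=0.7):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian)
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)

    # Ratio test discards ambiguous nearest-neighbor matches.
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]

    # RANSAC on a homography removes the remaining outliers.
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return [m for m, keep in zip(good, mask.ravel()) if keep]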

3) For Figure 7, the authors state that “Figure 7 shows that the SIFT algorithm can extract sufficient corresponding points, but its accuracy is not high enough; the ORB algorithm has higher accuracy, but fewer corresponding points are extracted; the results of the SURF algorithm have both high accuracy and sufficient corresponding points.” However, this information is hard to discern from Figure 7. Please revise the presentation of Figure 7 or modify the corresponding text in the paper.

4) Figure 10 can be improved as well.

5) Are Equations (18)-(22) necessary in the paper, given that the contribution of this work is not a theoretical proposition?

6) Although the focus of this paper is a cost-effective colorization solution, the registration method used in this work does not include absolute pose information. This will cause drift errors to accumulate across frames during mapping or 3D modeling. In that regard, I think that for real large-scale applications, GPS information is necessary. I hope the authors can discuss the limitations of this work by including some related works: principles of GNSS, inertial, and multisensor integrated navigation systems; autonomous vehicle kinematics and dynamics synthesis for sideslip angle estimation based on consensus kalman filter; automated vehicle sideslip angle estimation considering signal measurement characteristic. These will help readers understand the contributions and limitations of this work properly.

Overall, this work is well done!

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The authors have done a great job in refining the manuscript and replying to all comments. Just a few minor comments to be considered:

1- I think POS stands for Position and Orientation System; otherwise, it should be written as PoS. Double-check and update the manuscript accordingly.

2- General comments, point 5: yes, all acronyms should be fully defined in the abstract, figures, or tables.

3- Remove “(POS)”, “(IMU)”, “(SURF)”, and “(RANSAC)” from the abstract, since each acronym is mentioned only once there.

4- Line 757: remove “for data visualization”

5- Other comments, point 1: yes, it is better to define it in the title.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
