Robust Semi-Supervised Point Cloud Registration via Latent GMM-Based Correspondence
Round 1
Reviewer 1 Report
This study proposes a novel semi-supervised point cloud registration algorithm capable of accurately estimating point correspondences and large transformations using limited prior datasets. Experimental results demonstrate its superiority over state-of-the-art methods in accuracy, runtime, handling of partial data, and robustness to different levels of noise and sparse data. Overall, the quality of the work is very good, and the paper presents a sufficient literature review of the related research topic. However, the following minor suggestions could further improve the manuscript:
1. The main motivation of this manuscript should be clearly highlighted in the introduction.
2. Graphs and tables should be placed as close to their corresponding text as possible, for example, Table 4 and Figure 7.
3. There are some typing errors in the mathematical symbols used throughout the manuscript that need to be corrected, such as those at lines 221 and 228.
4. It is suggested to add some explanations for Figure 7, Figure 8, and Figure 10 to enhance their clarity.
5. The authors may consider discussing the theoretical basis for extending this method to medical applications.
6. There are some grammar errors in the manuscript that need to be corrected. It is recommended that the author proofread the entire text.
Overall, the paper is well-organized and presented, and the proposed algorithm is a significant contribution to the field.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 2 Report
The authors presented a very interesting and innovative approach to point cloud registration that is robust to noise and has high generalization ability. The paper is well written and structured. There are, however, some minor issues that should be addressed.
What are the reasons for the slightly higher computational workload compared to other state-of-the-art methods?
What are the limitations of the approach?
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 3 Report
The paper introduces a new method for registering point clouds using deep learning models. In this approach, point clouds are treated as probability distributions and represented as Gaussian mixture models. The goal of the method is to minimize the KL divergence between these distributions within a latent space.
This work presents an exciting and fresh approach to point cloud registration, showing improvements in certain areas over existing top-tier methods. The authors conduct a thorough evaluation, comparing their technique with other leading approaches. Additional comments and concerns are outlined below.
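To make the distribution-matching idea concrete, the following is a minimal, self-contained sketch of the principle the summary above describes: fitting Gaussian models to two point clouds and measuring how the KL divergence shrinks once they are aligned. This is an illustration only, not the authors' method; for simplicity it uses single Gaussians (the one-component case of a GMM, for which KL has a closed form) in the raw coordinate space rather than a learned latent space, and the toy data, component count, and rotation angle are all my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy anisotropic "point cloud"; the target is a rotated copy of the source.
source = rng.normal(size=(2000, 3)) * np.array([3.0, 1.0, 0.3])
theta = np.pi / 8
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
target = source @ R.T

def fit_gaussian(pts):
    """Fit a single Gaussian (the one-component special case of a GMM)."""
    return pts.mean(axis=0), np.cov(pts, rowvar=False)

def kl_gaussians(mu0, S0, mu1, S1):
    """Closed-form KL(N0 || N1) between two multivariate Gaussians."""
    d = mu0.shape[0]
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

mu_s, S_s = fit_gaussian(source)
mu_t, S_t = fit_gaussian(target)

# Misaligned clouds give a clearly positive KL; applying the correct
# rotation to the source drives the divergence to (numerically) zero.
kl_before = kl_gaussians(mu_s, S_s, mu_t, S_t)
mu_a, S_a = fit_gaussian(source @ R.T)  # perfectly aligned copy
kl_after = kl_gaussians(mu_a, S_a, mu_t, S_t)
assert kl_after < kl_before
```

A registration method in this family searches over transformations (or regresses one with a network) so as to minimize such a divergence; for a full multi-component GMM the KL has no closed form and is typically bounded or estimated instead.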
Comments:
General Queries:
- Can larger weights be assigned to specific points that are not considered noise?
- The evaluation seems to lack consideration of larger point clouds, such as terrain LiDAR data. How would the proposed method perform on these types of data?
- I noticed the absence of memory consumption analysis in the performance evaluation. This information would help us understand how the method scales with larger datasets.
Major Concerns:
- Section 6.2: How were the value of k and the number of Gaussian components chosen? What was the reasoning behind selecting 10 and 20?
- Section 6.3: Why were only rotation angles in the range [0, 45] and translations in [0, 0.5] considered?
- Section 6.3.1: How were the noise clippings selected, and why were these specific values [-0.05, 0.05] used?
- Section 6.3.3: Why were transformations limited to [0, 60] and [-0.5, 0.5]? Could more extreme values have been explored?
- Some references appear to be missing from the related work section. These should be added, with an explanation of why they were not chosen for comparison (see references [1]–[8] below).
- There are a few other recent papers addressing the topic that may be relevant.
- I recommend checking and citing a review paper by Si, Haiqing, et al. [9], which provides an overview of point cloud registration algorithms.
[1] Žagar, Bare Luka, Ekim Yurtsever, Arne Peters, and Alois C. Knoll. "Point cloud registration with object-centric alignment." IEEE Access 10 (2022): 76586-76595., doi: 10.1109/ACCESS.2022.3191352
[2] Wu, Yue, Qianlin Yao, Xiaolong Fan, Maoguo Gong, Wenping Ma, and Qiguang Miao. "PANet: A point-attention based multi-scale feature fusion network for point cloud registration." IEEE Transactions on Instrumentation and Measurement (2023)., doi: 10.1109/TIM.2023.3271757
[3] Chen, Zhi, Fan Yang, and Wenbing Tao. "DetarNet: Decoupling translation and rotation by siamese network for point cloud registration." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no. 1, pp. 401-409. 2022., doi: 10.1609/aaai.v36i1.19917
[4] You, Bo, Hongyu Chen, Jiayu Li, Changfeng Li, and Hui Chen. "Fast point cloud registration algorithm based on 3DNPFH descriptor." In Photonics, vol. 9, no. 6, p. 414. MDPI, 2022., doi: 10.3390/photonics9060414
[5] Zováthi, Örkény, Balázs Nagy, and Csaba Benedek. "Point cloud registration and change detection in urban environment using an onboard Lidar sensor and MLS reference data." International Journal of Applied Earth Observation and Geoinformation 110 (2022): 102767., doi: 10.1016/j.jag.2022.102767
[6] Qin, Zheng, Hao Yu, Changjian Wang, Yulan Guo, Yuxing Peng, and Kai Xu. "Geometric transformer for fast and robust point cloud registration." In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11143-11152. 2022.
[7] Zheng, Yuchao, Yujie Li, Shuo Yang, and Huimin Lu. "Global-PBNet: A novel point cloud registration for autonomous driving." IEEE Transactions on Intelligent Transportation Systems 23, no. 11 (2022): 22312-22319., doi: 10.1109/TITS.2022.3153133
[8] Chen, Zhi, Kun Sun, Fan Yang, and Wenbing Tao. "Sc2-pcr: A second order spatial compatibility for efficient and robust point cloud registration." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13221-13231. 2022.
[9] Si, Haiqing, Jingxuan Qiu, and Yao Li. "A review of point cloud registration algorithms for laser scanners: applications in large-scale aircraft measurement." Applied Sciences 12, no. 20 (2022): 10247., doi: 10.3390/app122010247
Minor Observations:
Section 7 appears to be missing in the paper structure.
Why did the authors choose not to use homogeneous transformations, which would allow both rotation and translation to be applied through a single matrix multiplication? Is there an advantage to using non-homogeneous transformations?
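For context, the equivalence the question above refers to can be shown in a few lines: a rotation R followed by a translation t can be packed into one 4x4 homogeneous matrix and applied in a single multiplication on padded coordinates. This is a generic illustration (the rotation angle, translation, and point set are arbitrary), not code from the paper.

```python
import numpy as np

# A 3x3 rotation (here about the z-axis) and a translation vector.
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, -2.0, 0.5])

# Homogeneous form: rotation and translation in one 4x4 matrix.
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = t

points = np.random.default_rng(1).normal(size=(5, 3))

# Non-homogeneous: two separate operations per point set.
out_separate = points @ R.T + t

# Homogeneous: a single matrix multiplication on [x, y, z, 1] coordinates.
padded = np.hstack([points, np.ones((points.shape[0], 1))])
out_homogeneous = (padded @ T.T)[:, :3]

assert np.allclose(out_separate, out_homogeneous)
```

Both forms give identical results; the homogeneous form mainly simplifies composing several transformations into one matrix product.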
There should be a space between words and citations in Section 6.
All corresponding result values in tables should use a consistent number of decimal places, even if this means adding trailing zeros.
Recommendations:
Due to the significant number of missing references, I recommend a minor revision. This should include an analysis of how these references compare to the presented approach and an explanation of why they were not considered in the comparison.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 4 Report
Thank you for submitting your valuable manuscript.
Your manuscript concerns the registration of point clouds, and you have developed and demonstrated a method to improve registration accuracy. The completeness of the manuscript would be improved with only a few supplementary additions.
1. It would be helpful to add an introduction to Section 7 at p. 3, line 81.
2. ModelNet40 and 7Scene datasets were used to demonstrate your research results and for generalization.
1) A detailed explanation of the above datasets seems necessary.
2) Is there a reason you selected these datasets among the many available?
3) Are these datasets objective and valid for evaluating your methodology?
4) According to the references, these datasets appear to have been released in 2015 and 2017, which is somewhat dated. Is there any need for an update?
Thank you.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf