RSS-LIWOM: Rotating Solid-State LiDAR for Robust LiDAR-Inertial-Wheel Odometry and Mapping
Round 1
Reviewer 1 Report
To address the narrow field-of-view issue, this paper proposes a novel rotating solid-state LiDAR system that uses a servo motor to continuously rotate the solid-state LiDAR, expanding the horizontal field of view to 360°. This mechanism is interesting and novel. In addition, the authors propose a multi-sensor fusion odometry and mapping algorithm for the developed sensory system, integrating an IMU, a wheel encoder, a motor encoder, and the LiDAR within an iterated Kalman filter. Overall, the paper is comprehensive and solid, and is nearly acceptable with only some minor revisions. Please check my comments below:
- Although the contributions of this paper can be understood after reading the whole paper, it would be helpful if the authors highlighted them in the introduction.
- In the introduction, some related works about localization are missing. Please include them to increase the interest of the paper: an automated driving systems data acquisition and analytics platform; autonomous vehicle kinematics and dynamics synthesis for sideslip angle estimation based on consensus kalman filter; automated vehicle sideslip angle estimation considering signal measurement characteristic.
- It would be good if Figure 5 were introduced at the beginning of the methodology design. In this way, readers will know the structure of the algorithm in this paper.
- Would an adaptive Kalman filter be helpful for the sensor fusion? Please include some literature to mention this point: imu-based automated vehicle body sideslip angle and attitude estimation aided by gnss using parallel adaptive kalman filters; vehicle sideslip angle estimation by fusing inertial measurement unit and global navigation satellite system with heading alignment.
- For the LiDAR odometry, dynamic objects in the environment will affect localization accuracy, as these objects occupy part of the point cloud. I hope the authors could mention this point in the paper and discuss some potential measures to address the issue. Some camera- or LiDAR-based object detection works can be used: hydro-3D: hybrid object detection and tracking for cooperative perception using 3D LiDAR; yolov5-tassel: detecting tassels in rgb uav imagery with improved yolov5 based on transfer learning.
Overall, I would say this paper is well done and a good attempt to explore the potential of the solid-state LiDAR.
Author Response
Response to Reviewer 1 Comments
We would like to thank the editor and all the reviewers for all the comments and suggestions. The feedback has been constructive and allowed us to address shortcomings in the paper effectively. We have addressed the issues raised by the reviewers and incorporated their suggestions in the new version of the paper.
Please find our response to the questions raised by reviewer 1 as follows.
Point 1: Although the contributions of this paper can be understood after reading the whole paper, it would be helpful if the authors highlighted them in the introduction.
Response 1: Thank you for the helpful suggestion. We have listed our contributions in the introduction to make them clearer.
Point 2: In the introduction, some related works about localization are missing. Please include them to increase the interest of the paper: an automated driving systems data acquisition and analytics platform; autonomous vehicle kinematics and dynamics synthesis for sideslip angle estimation based on consensus kalman filter; automated vehicle sideslip angle estimation considering signal measurement characteristic.
Response 2: Thank you for pointing out these interesting related works. We have added them to our article.
Point 3: It would be good if Figure 5 were introduced at the beginning of the methodology design. In this way, readers will know the structure of the algorithm in this paper.
Response 3: Thanks for the advice. The methodology part of our article has two sections: the hardware (Rotating Sensory Platform and Experimental Robot) and the SLAM algorithm (LiDAR-Inertial-Wheel Odometry and Mapping). Figure 5 gives an overview of our SLAM algorithm and has been placed at the beginning of Section 4.
Point 4: Would an adaptive Kalman filter be helpful for the sensor fusion? Please include some literature to mention this point: imu-based automated vehicle body sideslip angle and attitude estimation aided by gnss using parallel adaptive kalman filters; vehicle sideslip angle estimation by fusing inertial measurement unit and global navigation satellite system with heading alignment.
Response 4: Thank you for the suggestions; we have included these works on adaptive Kalman filtering in the related work section.
Point 5: For the LiDAR odometry, dynamic objects in the environment will affect localization accuracy, as these objects occupy part of the point cloud. I hope the authors could mention this point in the paper and discuss some potential measures to address the issue. Some camera- or LiDAR-based object detection works can be used: hydro-3D: hybrid object detection and tracking for cooperative perception using 3D LiDAR; yolov5-tassel: detecting tassels in rgb uav imagery with improved yolov5 based on transfer learning.
Response 5: Thanks for the advice. Dynamic objects indeed influence the localization performance of the SLAM algorithm. We have added the mentioned related works and discussed them in our article. We will also consider integrating them to improve the performance of our algorithm in future work.
Reviewer 2 Report
This paper introduces a rotating solid-state LiDAR sensory platform and develops a LIWOM framework that fuses LiDAR, IMU, and wheel encoder data for odometry and mapping on ground unmanned platforms. However, there are some concerns as follows:
1. Some important references are missing in the Introduction. The authors should give more literature reviews about loop closure detection. Please add a few recent works from a broader scope to this paper.
[1] Lcdnet: Deep loop closure detection and point cloud registration for lidar slam
[2] AttDLNet: Attention-based DL Network for 3D LiDAR Place Recognition
[3] SOE-Net: A Self-Attention and Orientation Encoding Network for Point Cloud based Place Recognition
2. It would be better to give more details of the experimental datasets to help readers understand the details of the scenes.
3. In experiments, the authors should compare the odometry frame rate and computation time with other methods.
Some small errors should be corrected.
Line 281 'Our' -> our
Author Response
Response to Reviewer 2 Comments
Please find our response to the questions raised by reviewer 2 as follows.
Point 1: Some important references are missing in the Introduction. The authors should give more literature reviews about loop closure detection. Please add a few recent works from a broader scope to this paper.[1] Lcdnet: Deep loop closure detection and point cloud registration for lidar slam, [2] AttDLNet: Attention-based DL Network for 3D LiDAR Place Recognition, [3] SOE-Net: A Self-Attention and Orientation Encoding Network for Point Cloud based Place Recognition.
Response 1: Thanks for pointing out these interesting related works. We have added and discussed them in our article.
Point 2: It would be better to give more details of the experimental datasets to help readers understand the details of the scenes.
Response 2: Thank you for the nice suggestion. We have added a brief introduction to the experimental scenes and provided a link for downloading the datasets in Section 5.2 as follows: “The experimental scenes include a narrow corridor, an open space, and a stairway, whose complexity and comprehensiveness make them quite challenging for the ground-based unmanned platform SLAM problem. To further illustrate our experimental scenes, we provide links to download the experimental datasets and the ground-truth measurements based on the FocusS350 scanner.”
Point 3: In experiments, the authors should compare the odometry frame rate and computation time with other methods.
Response 3: Thank you for the nice suggestion. We have followed your advice, added the odometry frame-rate results to Table 2 and Table 3, and described them in Section 5.1 as follows: “By integrating all modules, our approach achieves the best odometry performance and runs in real time, faster than the 10 Hz frame rate of the rotating solid-state LiDAR. The runtime cost of our RSS-LIWOM is 25 ms in the campus scene and 9 ms in the stairway scene.”
Point 4: Some small errors should be corrected. Line 281 'Our' -> our.
Response 4: Thank you for carefully checking and pointing out the grammatical error; we have corrected it and carefully checked the full article for grammar.
Reviewer 3 Report
Upon reviewing the article, it becomes apparent that the paper is more technically oriented rather than strictly scientific. While it introduces a new method to tackle the limitations of solid-state LiDARs for odometry and mapping applications, it lacks the depth and theoretical rigor typically expected in scientific papers. Consequently, it may be considered beyond the scope of a journal that predominantly focuses on scientific research.
However, it is important to acknowledge that the proposed rotating solid-state LiDAR system and the fusion algorithm demonstrate practical implications and potential benefits for real-world applications. The inclusion of a servo motor to enhance the field of view is a notable contribution, as it addresses one of the significant limitations of solid-state LiDARs. The comprehensive experiments conducted in both outdoor open environments and narrow indoor environments also provide empirical evidence of the effectiveness of the proposed approach.
Considering these aspects, while the paper may not align perfectly with the objectives and scope of a scientific journal, it could still be of interest to technical audiences involved in developing and implementing LiDAR-based systems for autonomous vehicles and robotics. It offers valuable insights and engineering solutions that could inspire further research and development in the field.
To conclude, while the article may not be suitable for publication in a scientific journal due to its technical focus and limited scientific contribution, it should not be entirely dismissed. Instead, it may find relevance in technical conferences, industry publications, or specialized forums dedicated to robotics, autonomous systems, and related fields.
Author Response
Response to Reviewer 3 Comments
Please find our response to the questions raised by reviewer 3 as follows.
Point 1: Upon reviewing the article, it becomes apparent that the paper is more technically oriented rather than strictly scientific. While it introduces a new method to tackle the limitations of solid-state LiDARs for odometry and mapping applications, it lacks the depth and theoretical rigor typically expected in scientific papers. Consequently, it may be considered beyond the scope of a journal that predominantly focuses on scientific research.
Response 1: Thank you for the comments. It is true that our article focuses on tackling a real-world issue when using robots with LiDAR sensors for localization and urban environment mapping. We also note that similar works have been published in Remote Sensing, such as: A New RGB-D SLAM Method with Moving Object Detection for Dynamic Indoor Scenes; A GNSS/LiDAR/IMU Pose Estimation System Based on Collaborative Fusion of Factor Map and Filtering; and A Robust LiDAR SLAM Method for Underground Coal Mine Robot with Degenerated Scene Compensation. We have cited and discussed these works in our article. In addition, this paper is an invited contribution to the Special Issue "Lidar for Environmental Remote Sensing: Theory and Application" in Remote Sensing, which fits the required keywords and topics well.
Point 2: However, it is important to acknowledge that the proposed rotating solid-state LiDAR system and the fusion algorithm demonstrate practical implications and potential benefits for real-world applications. The inclusion of a servo motor to enhance the field of view is a notable contribution, as it addresses one of the significant limitations of solid-state LiDARs. The comprehensive experiments conducted in both outdoor open environments and narrow indoor environments also provide empirical evidence of the effectiveness of the proposed approach.
Response 2: Thank you for the comments and for affirming our work on enhancing the field of view, which addresses a significant limitation of solid-state LiDARs.
Point 3: Considering these aspects, while the paper may not align perfectly with the objectives and scope of a scientific journal, it could still be of interest to technical audiences involved in developing and implementing LiDAR-based systems for autonomous vehicles and robotics. It offers valuable insights and engineering solutions that could inspire further research and development in the field.
To conclude, while the article may not be suitable for publication in a scientific journal due to its technical focus and limited scientific contribution, it should not be entirely dismissed. Instead, it may find relevance in technical conferences, industry publications, or specialized forums dedicated to robotics, autonomous systems, and related fields.
Response 3: As we replied in Response 1, we designed and implemented an economical rotating sensory platform to overcome a significant limitation of solid-state LiDARs by enhancing the field of view, and we fuse multi-sensor measurements to build a robust and accurate odometry and mapping system. Furthermore, multiple similar works have been published in Remote Sensing. Therefore, we believe that our article matches the Special Issue "Lidar for Environmental Remote Sensing: Theory and Application" in Remote Sensing well.
Round 2
Reviewer 1 Report
The paper can be accepted now.
Reviewer 2 Report
My concerns have been addressed, so I tend to accept this paper.