Peer-Review Record

A Distributed Time-of-Flight Sensor System for Autonomous Vehicles: Architecture, Sensor Fusion, and Spiking Neural Network Perception

Electronics 2025, 14(7), 1375; https://doi.org/10.3390/electronics14071375
by Edgars Lielamurs 1,*, Ibrahim Sayed 1, Andrejs Cvetkovs 1, Rihards Novickis 1, Anatolijs Zencovs 1, Maksis Celitans 1, Andis Bizuns 1, George Dimitrakopoulos 2, Jochen Koszescha 2 and Kaspars Ozols 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 21 February 2025 / Revised: 24 March 2025 / Accepted: 26 March 2025 / Published: 29 March 2025
(This article belongs to the Section Electrical and Autonomous Vehicles)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

1) The title "Non-scanning Time-of-Flight Vision Sensors for Autonomous Vehicles: Challenges and Future Opportunities" does not fit well with the content. Most of the content describes a setup with seven ToF cameras, and the paper gives details about its architecture and data processing (3D point cloud registration, OGM, object detection based on Spiking Neural Networks, and runtime monitoring). Suggestion for the authors: make the title more specific to this interesting work.

2) Suggestion for the abstract: instead of "complete coverage of all blind zones", write "coverage of blind zones in a range of 0.5 m–6 m". Make clear in the abstract that the proposed system covers ultra-short-range (or "close-range", as mentioned in 1) applications.

3) Suggestion for the introduction: "Compared to other sensors, LiDARs are robust against light conditions" is only true relative to camera sensors, not RADAR. Revise the statement.

4) Enumerate formulas

Author Response

Thank you very much for your valuable review! You can find attached a pdf of the revised manuscript with tracked changes, including all other reviewer requests for changes. The corrections we made to address your requests are as follows:

Comment 1: The title "Non-scanning Time-of-Flight Vision Sensors for Autonomous Vehicles: Challenges and Future Opportunities" does not fit well with the content. Most of the content describes a setup with seven ToF cameras, and the paper gives details about its architecture and data processing (3D point cloud registration, OGM, object detection based on Spiking Neural Networks, and runtime monitoring). Suggestion for the authors: make the title more specific to this interesting work.

Response 1: We have improved the title, changing it to "A Distributed Time-of-Flight Sensor System for Autonomous Vehicles: Architecture, Sensor Fusion, and Spiking Neural Network Perception". We believe this better reflects the essence of the content, emphasizing ToF sensor fusion and ToF-specific neural networks.

Comment 2: Suggestion for the abstract: instead of "complete coverage of all blind zones", write "coverage of blind zones in a range of 0.5 m–6 m". Make clear in the abstract that the proposed system covers ultra-short-range (or "close-range", as mentioned in 1) applications.

Response 2: The abstract has been clarified with the statement "coverage of blind zones in a range of 0.5 m–6 m".

Comment 3: Suggestion for the introduction: "Compared to other sensors, LiDARs are robust against light conditions" is only true relative to camera sensors, not RADAR. Revise the statement.

Response 3: The first paragraph of the Introduction (lines 23, 28) has been revised. We clarified that LiDARs actively illuminate the scene but are not immune to external sunlight or rain/fog.

Comment 4: Enumerate formulas

Response 4: Equations 1–3 have been numbered.
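(For reference, numbered equations in a LaTeX manuscript are typically produced with the equation environment, which assigns numbers such as (1)–(3) automatically. The snippet below is a generic illustration using the standard ToF range relation, not an equation copied from the manuscript.)

```latex
\begin{equation}
  d = \frac{c \, \Delta t}{2}  % range from round-trip time \Delta t at light speed c
  \label{eq:tof-range}         % \ref{eq:tof-range} then yields the assigned number
\end{equation}
```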

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The article presents an interesting concept of using Time-of-Flight (ToF) sensors as an alternative to traditional LiDAR systems in autonomous vehicles. The authors thoroughly analyze the drawbacks of mechanical LiDAR systems, such as mounting limitations, blind spots, and structural complexity, proposing a solution based on a distributed ToF sensor system that allows for flexible sensor placement and increased spatial coverage. The article describes the hardware-software architecture of the system, sensor synchronization, data fusion, and the application of Spiking Neural Networks (SNN) and probabilistic grid maps for vehicle environment analysis.

The article requires several improvements that could enhance its substantive value and clarity. Firstly, it lacks a more detailed comparative analysis with other modern detection systems, such as solid-state LiDAR, which would help better assess the actual advantages of the proposed solution. The authors should also conduct a more in-depth examination of the potential limitations of their system, particularly in the context of operation under adverse weather conditions, such as rain, fog, or strong sunlight, which may affect the accuracy of ToF measurements. Another important issue is the lack of a cost and efficiency analysis: while the system offers potential benefits, it is unclear whether its large-scale implementation would be economically viable compared to existing solutions.

The article constitutes a valuable contribution to the development of perception technologies for autonomous vehicles and presents an innovative approach to sensor data fusion. However, to enhance its scientific value, the authors should expand the comparative analysis and provide a more detailed discussion of the system’s limitations.

Comments on the Quality of English Language

The overall quality of the English language in the article is adequate.

Author Response

Thank you very much for your valuable review! You can find attached a pdf of the revised manuscript with tracked changes, including all other reviewer requests for changes. The corrections we made to address your requests are as follows:

Comment 1: The article requires several improvements that could enhance its substantive value and clarity. Firstly, it lacks a more detailed comparative analysis with other modern detection systems, such as solid-state LiDAR, which would help better assess the actual advantages of the proposed solution. 

Response 1: Responding also to other reviewers' requests on the same concern, we have added Section 5, Discussion, which reflects on the advantages and drawbacks of mechanical LiDAR, solid-state LiDAR, and ToF. We believe the cost, performance, and integration comparisons complement the main content of the manuscript, specifically the technology comparison in the literature review (Subsection 2.2) as well as the side-by-side LiDAR/ToF results in Section 4 (e.g., Table 3, Figure 14).

Comment 2: The authors should also conduct a more in-depth examination of the potential limitations of their system, particularly in the context of operation under adverse weather conditions, such as rain, fog, or strong sunlight, which may affect the accuracy of ToF measurements.

Response 2: To address this concern, we have added Section 4.5, Effects of environmental conditions. This concern is indeed particularly relevant for current-generation ToF sensors, especially the 850 nm kind. In this section we have illustrated the effect of strong, direct sunlight and its impact on the loss in point count. For the effect of fog and rain, we have referenced a relevant study of IR light attenuation of automotive LiDAR in these conditions. Although we need to test this specifically in future studies, due to the same IR wavelengths, ToF operation would be similarly affected by weather at long range but less affected in the close blind-zone range. We believe this emphasizes the significant limitations of current COTS sensors, while underlining that the developed architecture itself would be compatible with future higher-dynamic-range ToF sensors.

Comment 3: Another important issue is the lack of a cost and efficiency analysis: while the system offers potential benefits, it is unclear whether its large-scale implementation would be economically viable compared to existing solutions.

Response 3: We discuss this concern in Section 5, Discussion. We estimated the ToF sensor system's price based on average COTS sensor prices and compared it with LiDAR. While currently available ToF sensors are indeed costly, and multiple sensors combined would cost close to a mechanical LiDAR, a hybrid system of ToF sensors and solid-state LiDAR would be a cost-efficient replacement for mechanical LiDAR. The arguments that solid-state sensors are more desirable for mass production and vehicle integration also support this.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

This paper suggests an adaptable software architecture that achieves good performance in distributed Time-of-Flight (ToF) sensor configurations. The system effectively integrates a hardware triggering scheme, robust 3D point cloud registration with continuous fidelity checks, probabilistic occupancy grid mapping, sophisticated Spiking Neural Network (SNN)-based object detection, and comprehensive runtime execution monitoring. Crucially, the architecture exhibits scalability, seamlessly integrating up to seven cameras while maintaining low average latency, thereby highlighting its potential for real-time applications.
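(For context: the probabilistic occupancy grid mapping summarized above is, in general, maintained in log-odds form so that successive sensor observations of a cell can be fused by simple addition. The sketch below illustrates that generic textbook scheme with assumed inverse-sensor-model constants; it is not the authors' implementation.)

```python
import numpy as np

# Generic log-odds occupancy grid update (illustrative only).
# Each cell stores the log-odds of being occupied; 0 means unknown (p = 0.5).
L_OCC, L_FREE = 0.85, -0.4      # assumed inverse-sensor-model increments
grid = np.zeros((100, 100))     # log-odds grid

def update_cell(grid, i, j, hit):
    """Fuse one observation of cell (i, j): add for a hit, subtract for a miss."""
    grid[i, j] += L_OCC if hit else L_FREE

def occupancy_prob(grid):
    """Convert log-odds back to occupancy probability: p = 1 / (1 + exp(-l))."""
    return 1.0 / (1.0 + np.exp(-grid))

update_cell(grid, 10, 20, hit=True)
print(occupancy_prob(grid)[10, 20])  # ~0.70 after one positive observation
```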

The paper is nice and I enjoyed reading it; however, I have several concerns:

  1. To more accurately represent the contents of Section 2, 'Literature Review' would be a more suitable title than the current 'Background'.
  2. The text's description of Figure 1 and the figure's caption are inconsistent. They require revision to ensure closer alignment.
  3. In Figure 2, the authors write about yellow and blue points; however, in the image itself there are light blue, green and brown points.
  4. The literature review is very comprehensive; however, the authors do not refer to the combination of different apparatus besides the LIDAR as was suggested in Y. Wiseman, "Ancillary Ultrasonic Rangefinder for Autonomous Vehicles", International Journal of Security and its Applications, Vol. 12(5), pp. 49-58, 2018, available online at https://u.cs.biu.ac.il/~wisemay/ijsia2018.pdf, and also in Premnath, S., Mukund, S., Sivasankaran, K., Sidaarth, R., & Adarsh, S., "Design of an autonomous mobile robot based on the sensor data fusion of LIDAR 360, ultrasonic sensor and wheel speed encoder", In 2019 9th IEEE International Conference on Advances in Computing and Communication (ICACC), pp. 62-65, 2019. I would encourage the authors to cite these two papers and add a paragraph about a combination of different methods in the Literature Review (Background) section.
  5. In Figure 3 and Figure 4, the author needs to specify what information goes through each arrow drawn in the figure.
  6. I was unable to locate an in-text reference to Figure 6.
  7. The image of a chair in Figure 6 appears to be unrelated to the paper's content, which centres around vehicles.
  8. In line 509, the authors write “where (xmin, ymin, zmin) and (xmax, ymax, zmax) define the bounding box of the original point cloud”. Could the authors please explain how they define these values?
  9. The comparison made in Table 2 is very important; however, why did the authors put it on a table? The data would be easier to understand if it were presented in a graph.
  10. It would be helpful to include a discussion on the potential shortcomings and avenues for enhancing the proposed model.

Author Response

Thank you very much for your valuable review! You can find attached a pdf of the revised manuscript with tracked changes, including all other reviewer requests for changes. The corrections we made to address your requests are as follows:

Comment 1: To more accurately represent the contents of Section 2, 'Literature Review' would be a more suitable title than the current 'Background'.

Response 1: We have changed the section 2 title to 'Literature Review'.

Comment 2: The text's description of Figure 1 and the figure's caption are inconsistent. They require revision to ensure closer alignment.

Response 2: Description of Figure 1 has been clarified (lines 202-206).

Comment 3: In Figure 2, the authors write about yellow and blue points; however, in the image itself there are light blue, green and brown points.

Response 3: The Figure 3 caption has been fixed.

Comment 4: The literature review is very comprehensive; however, the authors do not refer to the combination of different apparatus besides the LIDAR as was suggested in Y. Wiseman, "Ancillary Ultrasonic Rangefinder for Autonomous Vehicles", International Journal of Security and its Applications, Vol. 12(5), pp. 49-58, 2018. Available online at: https://u.cs.biu.ac.il/~wisemay/ijsia2018.pdf and also in Premnath, S., Mukund, S., Sivasankaran, K., Sidaarth, R., & Adarsh, S., "Design of an autonomous mobile robot based on the sensor data fusion of LIDAR 360, ultrasonic sensor and wheel speed encoder", In 2019 9th IEEE International Conference on Advances in Computing and Communication (ICACC), pp. 62-65, 2019.‏ I would encourage the authors to cite these two papers and add a paragraph about a combination of different methods in the Literature Review (Background) section.

Response 4: The manuscript would indeed benefit from a review of hybrid systems. We have described this on line 178.

Comment 5: In Figure 3 and Figure 4, the author needs to specify what information goes through each arrow drawn in the figure.

Response 5: We have annotated Figure 3. Additionally, the text on lines 399-409 describes the information in Figure 4 in finer detail.

Comment 6: I was unable to locate an in-text reference to Figure 6.

Response 6: Indeed, the reference was missing. However, we decided to remove the figure completely.

Comment 7: The image of a chair in Figure 6 appears to be unrelated to the paper's content, which centres around vehicles.

Response 7: We decided to remove the figure. Initially, it was intended to illustrate the steps of DGR in principle. However, we believe the DGR procedure is documented in sufficient detail in Section 3.3 and Figure 5.

Comment 8: In line 509, the authors write “where (xmin, ymin, zmin) and (xmax, ymax, zmax) define the bounding box of the original point cloud”. Could the authors please explain how they define these values?

Response 8: Indeed, these values should be carefully selected. We elaborate on the choice of these parameters on lines 525-528 and in the training and measurements Section 4.4 (lines 667-669).
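(For illustration: when such bounds are derived from the data rather than fixed in advance, they are simply the per-axis extrema of the cloud. Below is a minimal sketch assuming an N×3 NumPy array of XYZ points; the function name is illustrative, not the authors' code.)

```python
import numpy as np

def point_cloud_bounds(points):
    """Axis-aligned bounding box of an (N, 3) point cloud.

    Returns ((x_min, y_min, z_min), (x_max, y_max, z_max)),
    i.e. the per-axis minima and maxima over all points.
    """
    x_min, y_min, z_min = points.min(axis=0)
    x_max, y_max, z_max = points.max(axis=0)
    return (x_min, y_min, z_min), (x_max, y_max, z_max)

# Example with a small synthetic cloud:
# minima are (0.5, -1.2, 0.0), maxima are (2.0, 0.3, 1.1)
cloud = np.array([[0.5, -1.2, 0.0],
                  [2.0,  0.3, 1.1],
                  [1.4, -0.7, 0.4]])
print(point_cloud_bounds(cloud))
```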

Comment 9: The comparison made in Table 2 is very important; however, why did the authors put it on a table? The data would be easier to understand if it were presented in a graph.

Response 9: We have illustrated the execution time data as a bar plot in Figure 11.

Comment 10: It would be helpful to include a discussion on the potential shortcomings and avenues for enhancing the proposed model.

Response 10: We have included a dedicated paragraph (lines 713-722) that emphasizes the limitations and potential improvements of the DL models. The SNN and DNN models could certainly benefit from multiple improvements. Additionally, some limitations are also discussed in Sections 4.5 and 5 in response to other reviewers' concerns.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

This revision addresses the problems of the first version.

Reviewer 3 Report

Comments and Suggestions for Authors

The authors have addressed all my concerns. The revised manuscript is ready for publication.
