Design of Multimodal Sensor Module for Outdoor Robot Surveillance System

Round 1
Reviewer 1 Report
In this paper, the author proposes a multi-mode sensor module for multi-condition monitoring and response to abnormal conditions. The author reports that the sensing system has been running for more than four years, which supports the reliability of the system. So I suggest minor revision.
Below are some of my questions.
The multi-mode sensor module fabricated in this paper employs multiple cameras, which adds significantly to the cost. So I want to know the advantages of this sensing system compared to other devices on the market, such as manufacturing cost, image resolution, and more.
Modern sensor technology is developing towards smaller and smaller scales. What are the irreplaceable advantages of using a large sensor system in this context? Using more sensors also increases the volume and power consumption. In the context of global energy conservation and emission reduction, this sensing system will undoubtedly consume more energy. Therefore, under what conditions is a sensing module covering multiple operating conditions still needed without redundancy and waste?
Author Response
The multi-mode sensor module fabricated in this paper employs multiple cameras, which adds significantly to the cost. So I want to know the advantages of this sensing system compared to other devices on the market, such as manufacturing cost, image resolution and more.
sol) The proposed multi-modal sensor module is designed to collect sensor data 24 hours a day and use it for monitoring missions (page 2, line 88 revised). The night vision camera costs about $20, and most of the sensors are low-cost, but the thermal camera is expensive. It was adopted because it is essential for monitoring fire and objects at night (page 10, figure 20 revised).
Modern sensor technology is developing towards smaller and smaller scales. What are the irreplaceable advantages of using a large sensor system in this context? Using more sensors also increases the volume and power consumption. In the context of global energy conservation and emission reduction, this sensing system will undoubtedly consume more energy. Therefore, under what conditions is a sensing module covering multiple operating conditions still needed without redundancy and waste?
sol) I agree that energy conservation is necessary. However, monitoring with multimodal sensor data must withstand high temperatures (40°C in summer), low temperatures (-20°C in winter), and corrosion near the sea. For this purpose, modularized sensors are easier to protect. The proposed sensor module can be used for many years and supports autonomous driving even in storms.
Our test video: https://www.youtube.com/watch?v=1WD-FvlVmPU
Author Response File: Author Response.pdf
Reviewer 2 Report
The authors show the design, building process, and calibration procedure for a multimodal sensor to be used in outdoor surveillance applications. The proposal must be considered an innovative one because its main objective is not a research one. That is, the purpose of the paper is not to show a solution for a scientific knowledge problem but a device development one. This could be considered a key factor for the acceptance of the paper in relation to the journal's objectives and orientation. The paper is well structured, described, and written, but some comments must be considered:
1 - It is important to use a unified naming scheme for the parts of the sensor. For example, in Fig. 1 some subsystems appear as "Night vision camera", "Thermal camera" and "RGB & Depth Camera", while in Fig. 2 they appear as "Night vision", "Thermal image sensor" and "RGBD camera". Even though it is not a big mistake, it is recommended to be really precise in the naming convention.
2 - Subsection 3.2 is devoted to the "Multi-modal Sensor Calibration Method"; even though the method is clearly explained and detailed, there are no validation results for the calibration method. It is desirable to show a series of executed calibration procedures to prove how precise and adequate the calibration method is.
3 - Section 4 is devoted to showing the operation of the sensor, but we can't see a proper result presentation and analysis within the section. It would be very interesting, and a deep validation for the sensor, to show the correspondence between the real scenery that is going to be sensed and the complex multimodal data taken by the sensor.
Author Response
1 - It is important to use a unified naming scheme for the parts of the sensor. For example, in Fig. 1 some subsystems appear as "Night vision camera", "Thermal camera" and "RGB & Depth Camera", while in Fig. 2 they appear as "Night vision", "Thermal image sensor" and "RGBD camera". Even though it is not a big mistake, it is recommended to be really precise in the naming convention.
sol) The names have been unified.
2 - Subsection 3.2 is devoted to the "Multi-modal Sensor Calibration Method"; even though the method is clearly explained and detailed, there are no validation results for the calibration method. It is desirable to show a series of executed calibration procedures to prove how precise and adequate the calibration method is.
sol) A comparison of the calibration method has been corrected (page 8, figure 15 revised).
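To make the kind of validation Reviewer 2 asks for concrete, the following is a minimal sketch, not the authors' code, of how the precision of an extrinsic calibration between two of the module's camera modalities (e.g., the RGB and thermal cameras) could be quantified using the RMS reprojection error reported by OpenCV's stereo calibration. The checkerboard dimensions, square size, and file names are hypothetical placeholders.

```python
# Sketch only (not the authors' method): quantify RGB-thermal extrinsic
# calibration quality via the RMS reprojection error from stereo calibration.
import glob
import cv2
import numpy as np

BOARD = (9, 6)     # inner checkerboard corners (assumed)
SQUARE = 0.025     # square size in metres (assumed)

# 3-D coordinates of the board corners in the board frame
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_pts, rgb_pts, thermal_pts, img_size = [], [], [], None
for rgb_file, th_file in zip(sorted(glob.glob("rgb_*.png")),
                             sorted(glob.glob("thermal_*.png"))):
    rgb = cv2.imread(rgb_file, cv2.IMREAD_GRAYSCALE)
    th = cv2.imread(th_file, cv2.IMREAD_GRAYSCALE)
    # NOTE: detecting a checkerboard in thermal images usually needs a heated
    # board or a different target; plain detection is assumed here for brevity.
    ok_rgb, c_rgb = cv2.findChessboardCorners(rgb, BOARD)
    ok_th, c_th = cv2.findChessboardCorners(th, BOARD)
    if ok_rgb and ok_th:                      # keep frames seen by both cameras
        obj_pts.append(objp)
        rgb_pts.append(c_rgb)
        thermal_pts.append(c_th)
        img_size = rgb.shape[::-1]

# Intrinsics of each camera from the same corner detections
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, rgb_pts, img_size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, thermal_pts, img_size, None, None)

# Extrinsics between the two cameras; rms is the RMS reprojection error in
# pixels, the kind of quantitative figure a calibration-validation table needs.
rms, *_ = cv2.stereoCalibrate(obj_pts, rgb_pts, thermal_pts, K1, d1, K2, d2,
                              img_size, flags=cv2.CALIB_FIX_INTRINSIC)
print(f"RMS reprojection error over {len(obj_pts)} frames: {rms:.3f} px")
```

Reporting this error over several repeated calibration runs would show how precise and repeatable the procedure is.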
3 - Section 4 is devoted to showing the operation of the sensor, but we can't see a proper result presentation and analysis within the section. It would be very interesting, and a deep validation for the sensor, to show the correspondence between the real scenery that is going to be sensed and the complex multimodal data taken by the sensor.
sol) The observation regions during operation of the fixed and mobile agents have been revised (page 10, figure 20 revised).
48-hour test video: https://www.youtube.com/watch?v=1WD-FvlVmPU
Author Response File: Author Response.pdf
Round 2
Reviewer 1 Report
I recommend this work to be published in Electronics.
Author Response
Using the sensor data, we added human and car extraction results for day, night, and fog environments.
Author Response File: Author Response.pdf
Reviewer 2 Report
Even though calibration and validation have been improved, we are not able to validate the performance of the system, since we have no quantitative results from a series of well-designed experiments. The idea behind Section 4 must be developed more deeply, not only by presenting some additional images regarding sensor performance.
Author Response
Regarding sensor performance, we added extraction results for day, night, and fog environments.
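As an illustration of the quantitative summary Reviewer 2 asks for, the short sketch below shows how per-environment precision and recall could be tabulated from counts of true positives, false positives, and false negatives in the human and car extraction results. The counts are placeholders to be filled from the actual experiments, not reported values.

```python
# Illustrative only: placeholder detection counts per environment, showing how
# precision/recall could summarize the extraction results quantitatively.
counts = {
    "day":   {"tp": 0, "fp": 0, "fn": 0},   # fill in from the experiments
    "night": {"tp": 0, "fp": 0, "fn": 0},
    "fog":   {"tp": 0, "fp": 0, "fn": 0},
}

for env, c in counts.items():
    precision = c["tp"] / (c["tp"] + c["fp"]) if (c["tp"] + c["fp"]) else 0.0
    recall = c["tp"] / (c["tp"] + c["fn"]) if (c["tp"] + c["fn"]) else 0.0
    print(f"{env:>5}: precision={precision:.2f} recall={recall:.2f}")
```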
Author Response File: Author Response.pdf
Round 3
Reviewer 2 Report
Accepted for publication, even though the validation results could be improved to show the results provided by the sensor in a more formal way.
Author Response
Thanks for your comments.
I revised the manuscript according to your comments.
Author Response File: Author Response.pdf
This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.