Peer-Review Record

A Systematic Parametric Campaign to Benchmark Event Cameras in Computer Vision Tasks

Electronics 2025, 14(13), 2603; https://doi.org/10.3390/electronics14132603
by Dario Cazzato *, Graziano Renaldi and Flavio Bono
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 26 May 2025 / Revised: 19 June 2025 / Accepted: 23 June 2025 / Published: 27 June 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

1. Formulate and clearly present a problem statement with a corresponding novel research question, for instance: “How do bias parameters and illumination variability impact the performance of a sensor in different computer vision tasks?”

2. Alongside the citations of MVSEC, DSEC, and N-Caltech101, articulate how this work differs from those benchmarks and what they lack, such as rigorously controlled parametric variations, repeatability, and bias effects over time.

3. Please explain how this dataset underpins tasks such as event-based tracking, motion deblurring, frequency estimation, and autobiasing in real-world robotic or surveillance systems.

4. Include baseline model performance for these tasks to better illustrate the dataset's practical value.

5. Report evaluation metrics such as signal-to-noise ratio, latency benchmarks, and precision–recall for keypoint detection.
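A minimal sketch of the precision–recall computation suggested in item 5, assuming arrays of detected and ground-truth keypoint coordinates and a pixel-distance matching tolerance; the function name and the default tolerance are illustrative, not taken from the paper:

    import numpy as np

    def keypoint_precision_recall(detected, ground_truth, tol_px=3.0):
        # detected, ground_truth: (M, 2) and (N, 2) arrays of (x, y) pixel coordinates.
        detected = np.asarray(detected, dtype=float)
        ground_truth = np.asarray(ground_truth, dtype=float)
        if len(detected) == 0 or len(ground_truth) == 0:
            return 0.0, 0.0
        # Pairwise Euclidean distances between every detection and every ground-truth point.
        dist = np.linalg.norm(detected[:, None, :] - ground_truth[None, :, :], axis=2)
        matched, true_pos = set(), 0
        # Greedy one-to-one matching: visit detections from best to worst nearest distance;
        # a detection counts as a true positive if its nearest unmatched ground-truth
        # point lies within the pixel tolerance.
        for i in np.argsort(dist.min(axis=1)):
            j = int(np.argmin(dist[i]))
            if dist[i, j] <= tol_px and j not in matched:
                matched.add(j)
                true_pos += 1
        return true_pos / len(detected), true_pos / len(ground_truth)  # (precision, recall)

Sweeping the detector's confidence threshold and recomputing these values would yield a full precision–recall curve.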


Author Response

Kindly refer to the attached PDF document.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The paper addresses the issue that existing datasets rely primarily on simulated data or uncontrolled environments, which makes it difficult to systematically analyze the influence of camera parameters and lighting conditions. It proposes the JRC INVISIONS Neuromorphic Sensors Parametric Tests dataset, comprising 2156 scenes collected with two commercial event cameras (Prophesee EVK4 and DAVIS346) in three controlled scenarios (moving targets, mechanical vibration, and rotational speed estimation), with ground truth obtained from synchronized frame cameras (Allied Vision Alvium) and laser sensors. The paper provides a detailed description of the dataset's hardware configuration, parameter variations (such as ON/OFF thresholds and refractory periods), scene design, and data structure. It also analyzes sensor performance, event statistics, and acquisition artifacts (such as event loss and motion distortion) under different speeds and lighting conditions, providing a basis for the development and evaluation of event camera algorithms. However, the paper has the following shortcomings:
1. The abstract only mentions that existing datasets "rely on simulated data or loosely controlled conditions" but does not explicitly state the advantages of this dataset in terms of parameter coverage, scene types, or data scale. It is suggested to revise the abstract to highlight these innovations.
2. Although the introduction points out that existing datasets lack controlled parameter analysis, it does not mention recent progress in similar directions. It is suggested to supplement the relevant research progress and references.
3. Eq. (1) only defines the event structure but does not explain how the threshold parameters (such as the brightness change threshold) govern event generation. It is suggested to add this information; a sketch of the standard event-generation model is given after this list.
4. Regarding the description of the DAVIS sensor, the paper mentions that DAVIS combines event and frame cameras but does not explain the pixel allocation mechanism (such as how event and frame pixels are divided) or the factors that lead to low spatial resolution.
5. The paper mentions the use of 3D-printed targets (Figure 10). It is suggested to supplement an explanation of how target surface texture or geometric features affect event generation (for example, how edge density affects the event rate).
6. Please explain why only EVK4 was used to collect data and not DAVIS346.
7. The laser sensor is used to synchronize event and frame data. Please provide a detailed explanation of the synchronization accuracy (e.g., microsecond-level error) or the specific implementation method (e.g., trigger signal interface).
8. Figures 16–23 lack standard error bars or other statistical descriptors, and Figure 26 lacks a time scale on the x-axis, making it difficult to quantify the accuracy of the frequency estimation.
9. The label "Figure 14" is repeated in the mechanical vibration section (for the metal rod and the ArUco marker, respectively).
10. Only Equation (1) is numbered in the main text; the other formulas are unnumbered.
11. Some references lack volume or page numbers (such as reference 15), and the citation order does not match the order in which they are mentioned in the main text.
12. It is suggested to add a table comparing parameter ranges and scene types with existing datasets (such as the ESIM simulator and public event datasets).
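For item 3, a compact statement of the standard event-generation model from the event-camera literature, which makes the role of the contrast thresholds explicit (the notation here is assumed and is not taken from the paper's Eq. (1)):

\[
\Delta \log L(x_k, y_k) = \log L(x_k, y_k, t_k) - \log L(x_k, y_k, t_k - \Delta t_k),
\qquad
p_k =
\begin{cases}
+1, & \Delta \log L(x_k, y_k) \ge C_{\mathrm{ON}}, \\
-1, & \Delta \log L(x_k, y_k) \le -C_{\mathrm{OFF}},
\end{cases}
\]

where \(C_{\mathrm{ON}}\) and \(C_{\mathrm{OFF}}\) are the ON/OFF contrast thresholds and \(\Delta t_k\) is the time elapsed since the last event at pixel \((x_k, y_k)\). Raising the thresholds suppresses events from small brightness changes, and the refractory period additionally blocks new events at a pixel for a fixed interval after each one, so both parameters directly shape the event rate.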

Author Response

Kindly refer to the attached PDF document.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The manuscript has improved considerably.
