Article
Peer-Review Record

Graph Sampling-Based Multi-Stream Enhancement Network for Visible-Infrared Person Re-Identification

Sensors 2023, 23(18), 7948; https://doi.org/10.3390/s23187948
by Jinhua Jiang 1,†, Junjie Xiao 1,†, Renlin Wang 2, Tiansong Li 1, Wenfeng Zhang 1,*, Ruisheng Ran 1,* and Sen Xiang 3
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 23 August 2023 / Revised: 11 September 2023 / Accepted: 13 September 2023 / Published: 18 September 2023
(This article belongs to the Special Issue Multi-Modal Data Sensing and Processing)

Round 1

Reviewer 1 Report

This paper proposes a graph sampling-based multi-stream enhancement network for visible-infrared person re-identification. In addition, an innovative Cross-modality Graph Sampler (CGS) is designed for sample selection before training. Experiments are conducted to verify the effectiveness of the proposed method. This study is interesting, but some issues need to be addressed before publication.

1.      What is the difference between the proposed Cross-modality Graph Sampler and the k-Nearest Neighbor (kNN) algorithm? The proposed method is effectively a classification algorithm, similar to kNN.

2.      In Section 3.2, why are Euclidean distances selected for distance or similarity calculation? Hash algorithms are more commonly used for image similarity measurement.

3.      Some details of the proposed method should be presented explicitly, such as the basic calculation formulas, rather than only a block diagram.

4.      Lastly, I understand there are two first authors for this paper, as they contributed equally to this work. But why are there two corresponding authors? Funding acquisition and project administration were both handled by Wenfeng Zhang; why is Ruisheng Ran also a corresponding author?
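The distinction raised in comments 1 and 2 can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: kNN uses the nearest neighbors of a query point to predict its label, whereas a cross-modality graph sampler would use nearest-neighbor relations between class-level features only to decide which identities are grouped into the same training batch. Both sketches use the Euclidean distance the paper adopts; all names and values here are invented for illustration.

```python
import math

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# kNN: classify a query by majority vote among its k nearest labeled points.
def knn_predict(query, points, labels, k=3):
    order = sorted(range(len(points)), key=lambda i: dist(query, points[i]))
    votes = [labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

# Graph-style sampler: group an anchor class with its nearest other classes
# into one batch, so hard (similar) identities are trained together.
def nearest_class_batch(centroids, anchor, batch_classes=2):
    order = sorted((c for c in centroids if c != anchor),
                   key=lambda c: dist(centroids[anchor], centroids[c]))
    return [anchor] + order[:batch_classes - 1]

centroids = {"id0": [0.0, 0.0], "id1": [0.1, 0.0], "id2": [5.0, 5.0]}
print(nearest_class_batch(centroids, "id0"))  # ['id0', 'id1']
print(knn_predict([0.05, 0.0],
                  [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]],
                  ["a", "a", "b"]))  # a
```

The key difference: kNN outputs a label for a test sample at inference time, while the sampler only reorders the training data; it makes no predictions itself.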

1.      Language needs polishing. For example, in the last sentence of the Abstract, it is unclear what the 93.69% and 92.56% figures refer to. In Section 3.2, the initial word of a sentence, 'both', should be capitalized. Other issues are not listed one by one.

Author Response

Please note that the detailed responses are provided in the attachment for your reference. If you require a more in-depth understanding of our responses to the reviewer's comments and the corresponding revisions, kindly open the attachment for further details. Thank you for your time and attention.

Author Response File: Author Response.pdf

Reviewer 2 Report

This manuscript proposes concatenating contour information for person re-identification; during training, the authors group classes with similar features into the same batch to boost performance. Although the idea itself is straightforward, I think the manuscript is well written and presented, and the results are promising. Some minor comments are below.

1. I would suggest adding the citations in Table 1.

2. On Page-6, line-231, "both" -> "Both".

3. I think the experiments show that the contour information is beneficial, but the selection of the contour extraction method is questionable. It would be good to have an additional experiment using a classical edge detection algorithm, such as Canny edge detection, and compare its performance with the proposed approach.
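For reference, the kind of classical baseline the reviewer suggests can be sketched compactly. The snippet below is a minimal Sobel gradient-magnitude detector, which is the first stage of the Canny pipeline (Canny additionally applies smoothing, non-maximum suppression, and hysteresis thresholding). It is a pure-Python illustration on a toy image, not the experiment itself; in practice one would use a library implementation such as OpenCV's Canny.

```python
# Minimal Sobel edge detector: gradient magnitude per interior pixel.
def sobel_edges(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Toy image with a vertical step edge: left half dark, right half bright.
img = [[0, 0, 1, 1] for _ in range(4)]
edges = sobel_edges(img)
print(edges[1])  # strong response at the step: [0.0, 4.0, 4.0, 0.0]
```

Comparing such a gradient-based contour map against the learned contour branch would directly address the reviewer's question about whether the chosen extraction method matters.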

I think the language is fine.

Author Response

Please note that the detailed responses are provided in the attachment for your reference. If you require a more in-depth understanding of our responses to the reviewer's comments and the corresponding revisions, kindly open the attachment for further details. Thank you for your time and attention.

Author Response File: Author Response.pdf
