Peer-Review Record

Absolute and Relative Depth-Induced Network for RGB-D Salient Object Detection

Sensors 2023, 23(7), 3611; https://doi.org/10.3390/s23073611
by Yuqiu Kong 1, He Wang 2, Lingwei Kong 3,†, Yang Liu 1,*, Cuili Yao 1 and Baocai Yin 2
Submission received: 10 February 2023 / Revised: 21 March 2023 / Accepted: 23 March 2023 / Published: 30 March 2023
(This article belongs to the Section Optical Sensors)

Round 1

Reviewer 1 Report

The authors propose a Depth-Induced Network (DIN) for RGB-D salient object detection that takes full advantage of both absolute and relative depth information and further enforces the in-depth fusion of the RGB-D cross-modal features. The proposed DIN is a lightweight network, and its model size is much smaller than that of state-of-the-art algorithms. Extensive experiments on six challenging benchmarks show that the proposed method is feasible.

Overall, the submission is well written and innovative. However, I have some minor comments for improvement.

1. The baseline network in Fig. 1(d) is neither described nor referenced.

2. Fig. 2 should appear in Sec. 4.2, since the evaluated models have not yet been introduced and the performance metrics have not been defined at this point. The figure legend should also be modified, as there are no curves in the figure.

3. Sec. 2 heading: “Related work” → “Related Works”.

4. The parentheses in Eq. (7) are not balanced.

5. “Table” need not be shortened to “Tab.” in the text.

Author Response

Please see the attachment.

Reviewer 2 Report

It is a very good manuscript.

What if the depth maps are not available, or they are not very accurate? A few comments are needed that describe/compare RGB-D and RGB methods. In addition, the authors may compare their method with a work such as ‘Pyramidal attention for saliency detection, CVPR 2022’.

A few notations could be revised for consistency, e.g., in Fig. 3 and Eq. (1):

[.;.] → c

Up → U

The MLP is mentioned in Eqs. (5) and (6), but its architecture is not described anywhere.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
