Article
Peer-Review Record

Cross-Modal Reconstruction for Tactile Signal in Human–Robot Interaction

Sensors 2022, 22(17), 6517; https://doi.org/10.3390/s22176517
by Mingkai Chen * and Yu Xie
Reviewer 1:
Reviewer 2:
Submission received: 1 August 2022 / Revised: 22 August 2022 / Accepted: 25 August 2022 / Published: 29 August 2022

Round 1

Reviewer 1 Report

This paper presents an algorithm to decode force information from video frames that contain interactions with an object. The algorithm consists of a general CNN network and an attention mechanism to spatially discriminate the point of interest. It is of great interest given its fields of application in human–robot interaction. Here are some minor comments:

1) How will the prediction vary with the speed of the applied force and the size of the robotic arm?

2) The selection of the window size (about 500 frames) is not strongly justified in the paper. The work could benefit from showing the performance with different window sizes.

3) Could the authors include the accuracy of the system?

4) The authors may comment on how the system can deal with varying conditions such as different camera angles.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Dear authors,

Please have a look at my comments.

Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Done
