Article

Transfer of Learning from Vision to Touch: A Hybrid Deep Convolutional Neural Network for Visuo-Tactile 3D Object Recognition

1 School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON K1N 6N5, Canada
2 Department of Computer Science and Engineering, Université du Québec en Outaouais, Gatineau, QC J8X 3X7, Canada
* Author to whom correspondence should be addressed.
Sensors 2021, 21(1), 113; https://doi.org/10.3390/s21010113
Received: 14 November 2020 / Revised: 19 December 2020 / Accepted: 23 December 2020 / Published: 27 December 2020
(This article belongs to the Special Issue Sensors and Computer Vision Techniques for 3D Object Modeling)
Transfer of learning, or leveraging a pre-trained network and fine-tuning it to perform new tasks, has been successfully applied in a variety of machine intelligence fields, including computer vision, natural language processing, and audio/speech recognition. Drawing inspiration from neuroscience research suggesting that visual and tactile stimuli activate similar neural networks in the human brain, in this work we explore the idea of transferring learning from vision to touch in the context of 3D object recognition. In particular, deep convolutional neural networks (CNNs) pre-trained on visual images are adapted and evaluated for the classification of tactile datasets. To do so, we ran experiments with five different pre-trained CNN architectures on five different datasets acquired with different tactile sensing technologies, including BathTip, GelSight, a force-sensing resistor (FSR) array, a high-resolution virtual FSR sensor, and the tactile sensors on the Barrett robotic hand. The results confirm the transferability of learning from vision to touch for interpreting 3D models. Owing to their higher resolution, optical tactile sensors were shown to achieve higher classification rates based on visual features than technologies relying on pressure measurements. A further analysis of the weight updates in the convolutional layers measures the similarity between visual and tactile features for each tactile sensing technology. Comparing the weight updates across convolutional layers suggests that a CNN pre-trained on visual data can be efficiently repurposed to classify tactile data by updating only a few of its convolutional layers. Accordingly, we propose a hybrid architecture with a MobileNetV2 backbone that performs both visual and tactile 3D object recognition; MobileNetV2 is chosen for its small size, which makes it deployable on mobile devices. The proposed architecture achieves an accuracy of 100% on visual data and 77.63% on tactile data.
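To make the fine-tuning recipe concrete, the following is a minimal PyTorch sketch of the idea described in the abstract: take a MobileNetV2 pre-trained on visual images, freeze most of its convolutional layers, unfreeze only the last few blocks, and replace the classifier head for tactile classes. The use of ImageNet weights, the number of unfrozen blocks, and the class count are illustrative assumptions, not values reported in the paper.

```python
# Sketch: adapt a visually pre-trained MobileNetV2 to tactile classification
# by updating only a few convolutional layers. The split point (last 3 blocks)
# and NUM_TACTILE_CLASSES are hypothetical choices for illustration.
import torch
import torch.nn as nn
from torchvision import models

NUM_TACTILE_CLASSES = 10  # assumption; depends on the tactile dataset used

# ImageNet weights stand in here for the visual pre-training.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)

# Freeze all feature-extraction layers first.
for param in model.features.parameters():
    param.requires_grad = False

# Unfreeze only the last few convolutional blocks so that a small number of
# visual filters adapt to tactile inputs, mirroring the weight-update analysis.
for block in model.features[-3:]:
    for param in block.parameters():
        param.requires_grad = True

# Replace the classifier head to output tactile object classes.
model.classifier[1] = nn.Linear(model.last_channel, NUM_TACTILE_CLASSES)

# Optimize only the unfrozen parameters during fine-tuning.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```

In this sketch, freezing the early layers preserves the generic visual filters, while the few unfrozen blocks and the new head carry the vision-to-touch adaptation.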
Keywords: 3D object recognition; transfer learning; machine intelligence; convolutional neural networks; tactile sensors; force-sensing resistor; Barrett Hand

MDPI and ACS Style

Rouhafzay, G.; Cretu, A.-M.; Payeur, P. Transfer of Learning from Vision to Touch: A Hybrid Deep Convolutional Neural Network for Visuo-Tactile 3D Object Recognition. Sensors 2021, 21, 113. https://doi.org/10.3390/s21010113

AMA Style

Rouhafzay G, Cretu A-M, Payeur P. Transfer of Learning from Vision to Touch: A Hybrid Deep Convolutional Neural Network for Visuo-Tactile 3D Object Recognition. Sensors. 2021; 21(1):113. https://doi.org/10.3390/s21010113

Chicago/Turabian Style

Rouhafzay, Ghazal, Ana-Maria Cretu, and Pierre Payeur. 2021. "Transfer of Learning from Vision to Touch: A Hybrid Deep Convolutional Neural Network for Visuo-Tactile 3D Object Recognition" Sensors 21, no. 1: 113. https://doi.org/10.3390/s21010113
