Article

Texture Recognition Based on Perception Data from a Bionic Tactile Sensor

State Key Laboratory of Digital Manufacturing Equipment and Technology, School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
*
Author to whom correspondence should be addressed.
Sensors 2021, 21(15), 5224; https://doi.org/10.3390/s21155224
Submission received: 7 July 2021 / Revised: 28 July 2021 / Accepted: 29 July 2021 / Published: 2 August 2021
(This article belongs to the Special Issue Micro/Nano Energy and Flexible Sensors)

Abstract

Texture recognition is important for robots to discern the characteristics of an object's surface and adjust grasping and manipulation strategies accordingly. It remains challenging to develop texture classification approaches that are accurate without incurring high computational costs. In this work, we adopt a bionic tactile sensor to collect vibration data while sliding against materials of interest. Under a fixed contact pressure and speed, a total of 1000 sets of vibration data were collected from ten different materials. With these tactile perception data, four types of texture recognition algorithms are proposed. Three machine learning algorithms, including support vector machine, random forest, and K-nearest neighbor, are established for texture recognition, with test accuracies of 95%, 94%, and 94%, respectively. In the detection process of the machine learning algorithms, asamoto and polyester are easily confused with each other. A convolutional neural network is therefore established, which further increases the test accuracy to 98.5%. The three machine learning models and the convolutional neural network demonstrate high accuracy and excellent robustness.

1. Introduction

As an irreplaceable source of information, tactile perception is an important way for humans to understand the surrounding environment [1]. Because tactile perception is achieved through direct contact with objects, tactile data collection is sensitive only to the inherent physical characteristics of the object. This characteristic renders tactile perception highly reliable and less susceptible to external environmental interference. Similar to humans, robots require tactile perception to realize their intelligence. In robotic systems, tactile perception is achieved through the implementation of tactile sensors [2,3,4,5,6,7,8,9,10]. As with human tactile perception, robotic tactile perception covers many aspects, such as discerning the shape, size, modulus, hardness, and texture of objects, as well as contact forces and slippage during manipulation. Among them, object texture recognition is important because the characteristics of the object surface can significantly affect the grasping or manipulation strategies of robots.
In recent decades, the development of tactile sensors has greatly enhanced the ability of robotic tactile perception. Tactile sensors that measure temperature, humidity, contact force, or pressure have been demonstrated. Among these, contact force or pressure sensors are essential, as the shape, size, texture, and hardness of the object, as well as slippage during grasping, are usually judged based on the analysis of force data [11,12,13]. Contact force or pressure sensors based on various measurement mechanisms, such as piezoelectric, piezoresistive, and triboelectric sensors, have been developed [14,15,16,17]. Tactile sensors that measure single-point contact pressure [18,19,20,21,22,23,24,25,26,27,28], tactile sensor arrays with high spatial resolution [28,29,30,31,32,33,34,35,36,37,38], and sensors covering large areas [39,40,41,42,43,44] have all been reported. Commercially available tactile sensors, such as the BioTac, which can simulate part of human perception with high detection accuracy and stability, are also widely adopted for the recognition of object properties.
With the raw data collected from tactile sensors, effective algorithms are needed to classify material properties such as surface texture. Advances in machine learning [45,46,47,48] and deep learning [49,50,51,52] have provided new solutions for interpreting tactile data. Fishel et al. [45] proposed a Bayesian exploration-based texture recognition method using three descriptive indicators: traction, roughness, and fineness. Through a number of well-chosen exploratory movements, the recognition success rate reached 95.4%. However, this method required manual selection of the exploration direction multiple times, which consumed considerable time and labor. Yuan et al. [13] developed a surface material recognition algorithm based on tactile vibration and contact force. After the experimental data collection was completed, the method required no manual intervention and used only a support vector machine model to achieve texture recognition; the overall recognition rate reached 80%. Neural network models can extract data features directly and be trained to perform classification and recognition. Deep learning algorithms require little manual participation, achieve high accuracy, and are widely used in many fields. Gandarias et al. [38] used two artificial intelligence methods to recognize objects from pressure images acquired by high-resolution tactile sensors. Their experimental results showed a classification accuracy of 80% based on SURF and 91.67% based on a DCNN.
Although a variety of texture recognition methods have been proposed, it is still desirable to further improve accuracy and reduce the computational cost of the algorithms. Therefore, in this study, we adopt the BioTac SP multi-modal tactile sensor to obtain the vibration signal generated when the sensor and the material to be tested slide against each other. Based on the collected tactile signals, material textures are identified through machine learning and deep learning methods, and the classification accuracy reaches as high as 98.5%. Compared with previously reported texture recognition methods, our approach features higher recognition accuracy with low computational time. The advantages and disadvantages of machine learning and deep learning algorithms for the classification of material texture are analyzed and evaluated. This work is organized as follows. Section 2 describes the collection of the data set and the algorithms used. Section 3 presents the results and discussion. The conclusions are given in Section 4.

2. Materials and Methods

In this work, an experimental setup was built, and vibration data were collected while the sensor and the material moved relative to each other under fixed experimental conditions. Three machine learning algorithms and a deep learning algorithm are established to analyze and process the vibration data of different materials and achieve texture recognition. In this paper, the term identification refers to detection and classification. The frequency domain features of the vibration data are extracted, and three machine learning models, namely support vector machine, random forest, and K-nearest neighbor, are used to analyze these features. A seven-layer convolutional neural network model is also established and trained on the self-built data set. A general flowchart of texture recognition is shown in Figure 1.

2.1. Experimental Device and Data Collection

BioTac SP, a bionic sensor with high sensitivity and stability, was selected as the tactile sensor for vibration data collection in this study [53]. The original BioTac sensor structure is shown in Figure 2a. The biomimetic design consists of a rigid core surrounded by an elastic skin filled with liquid to provide compliance similar to the human fingertip. A photograph of the BioTac SP is shown in Figure 2b. The design principle of the BioTac SP is the same as that of the original BioTac, but with a slightly different structure. The BioTac SP is capable of detecting the full range of cutaneous sensory information that human fingers can detect: forces, micro-vibrations, and temperature [54]. The sampling rate of the sensor is 4.4 kHz. Raw data collected from the BioTac SP include voltages on impedance sensing electrodes, absolute fluid pressure (DC pressure), dynamic fluid pressure (AC pressure), temperature (DC temperature), and heat flow (AC temperature). Among them, the AC pressure signal is related to the surface roughness and texture of the material to be tested and indicates the micro-vibration generated during relative movement.
The AC pressure is measured by the BioTac SP sensor over a range of 0–4095 bits. Within a small pressure range, the AC pressure is linearly related to the actual normal force (N); the relationship between the change in AC pressure and the normal force is shown in Figure 2c. Specifically, to characterize the relationship between the AC pressure reported by the BioTac sensor and the actual applied force, a commercial force sensor (HP-50N by HANDPI) was used for calibration. The BioTac sensor was placed in contact with the force sensor so that both were subjected to the same force at the contact position. In this way, the AC pressure value indicated by the BioTac was correlated with the reading of the commercial force sensor. By continuously changing the contact condition to vary the applied force, the correlation shown in Figure 2c was established. To ensure that the elastic skin of the sensor was not severely worn during data collection, the experiment was carried out under a normal force of 0.15 N.
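To illustrate how such a calibration curve can be turned into a usable mapping, the sketch below fits a first-order model to hypothetical (force, AC-pressure) pairs; the numeric values are placeholders, not the authors' calibration data.

```python
import numpy as np

# Hypothetical calibration pairs: readings of the commercial force sensor (N)
# and the corresponding change in BioTac AC pressure (bits). Placeholder values.
force_n = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
ac_bits = np.array([210.0, 420.0, 630.0, 840.0, 1050.0])

# Within the small force range used in the paper, a linear fit is sufficient.
slope, intercept = np.polyfit(force_n, ac_bits, 1)

def force_from_ac(delta_bits):
    """Estimate the applied normal force (N) from an AC-pressure change (bits)."""
    return (delta_bits - intercept) / slope

print(force_from_ac(630.0))  # ~0.15 N for the placeholder data above
```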
A photograph of the experimental setup for tactile data acquisition is shown in Figure 2d. The setup mainly consists of a LinMot linear motor, a lifting platform, the BioTac SP multi-modal tactile sensor, and the materials to be tested. The sensor is fixed at the end of the linear motor, and the height of the lifting platform can be changed to control the pressure between the sensor and the materials to be tested. The linear motor drives the sensor to move at a constant speed and transmits the tactile data collected during the movement to the host computer for processing.
We adjusted the height of the lifting platform to control the degree of contact between the BioTac SP sensor and the materials to be tested. The data were collected under a fixed pressure and speed. During the experiment, the tactile sensor maintained good contact with the sample, i.e., it stayed in close contact without separation, and the sensor surface was not severely worn. The size of each material sample was 4.5 cm × 5.5 cm. The linear motor drove the sensor to move 3 cm at a speed of 0.01 m/s, and the vibration signal detected by the sensor was collected during this process. We then adjusted the starting position of the sensor and repeated the movement, so that vibration data were measured through multiple contacts over different areas and directions of each material. A total of 1000 sets of vibration data were collected from the ten materials. Photographs of the ten materials are shown in Figure 3, and the vibration signals of some materials are shown in Figure 4.
As shown in Figure 4, the vibration amplitude and frequency of the data collected from these materials differ because of their different surface roughness and texture. Smooth surfaces produce virtually no vibrations, while rougher surfaces produce vibrations of much greater amplitude [22]. The lambswool has a smooth surface and the polyester has a rough surface, so the vibration amplitude of polyester is much greater than that of lambswool; the roughness of the other materials lies between these two. Coarser textures produce low-frequency vibrations, while finer textures produce higher-frequency vibrations [22]. The texture of asamoto is finer than that of lambswool, so the vibration frequency of asamoto is higher than that of lambswool; the fineness of the other materials also lies between these two.

2.2. Feature Extraction of Vibration Data

The original vibration data are voluminous, and the sensor readings are unstable during the acceleration and deceleration phases. Excessively large samples contain redundant information and can easily cause overfitting during training if fed directly into the classification models. Therefore, the original data need to be preprocessed, and particular features need to be extracted to represent the characteristics of the data.
The hardware environment consists of an Intel Core i7-9750H processor at 2.60 GHz, 16 GB RAM, a 1 TB solid-state drive, and an NVIDIA GeForce RTX 2060 graphics card. The operating system is 64-bit Windows 10, and the programming software is MATLAB R2018b.
The fast Fourier transform (FFT) with a rectangular window function was applied to the original vibration signals of the different materials to obtain the frequency domain signals shown in Figure 5. The frequency domain signal of each of the ten materials is distributed within 0–140 Hz, but the intensity distributions of different materials differ considerably, so features can be extracted in the frequency domain for effective classification. The window length was set to 20 Hz, and the window was traversed from 0 to 140 Hz. The average intensity within each window was taken as a feature, so each signal generated a seven-dimensional feature vector. Feature extraction for one set of data takes only 0.47 s.
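This feature extraction step can be summarized with a short sketch. The authors implemented it in MATLAB; the version below is a Python/NumPy equivalent under the same assumptions (4.4 kHz sampling, rectangular window, 20 Hz bands from 0 to 140 Hz). The function name and the example signal length are illustrative.

```python
import numpy as np

def extract_features(signal, fs=4400, f_max=140, band_hz=20):
    """Average FFT magnitude in each 20 Hz band between 0 and 140 Hz,
    giving a seven-dimensional feature vector per vibration signal."""
    spectrum = np.abs(np.fft.rfft(signal))            # rectangular window = plain FFT
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)  # frequency axis in Hz
    features = []
    for f_lo in range(0, f_max, band_hz):             # 0-20, 20-40, ..., 120-140 Hz
        band = spectrum[(freqs >= f_lo) & (freqs < f_lo + band_hz)]
        features.append(band.mean() if band.size else 0.0)
    return np.array(features)

# Example: one simulated 3 s sliding trial sampled at 4.4 kHz
trial = np.random.randn(13200)
print(extract_features(trial).shape)                  # (7,)
```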

2.3. Texture Recognition

The vibration data features of the different materials are input to the support vector machine, random forest, K-nearest neighbor, and convolutional neural network models.
A support vector machine is used to identify different textures based on the vibration data. The vibration data collected in the experiment are non-linear and only a few features are extracted, so the radial basis function (RBF) kernel, a nonlinear kernel, is selected in this work. The model has two hyperparameters, the penalty coefficient C and the RBF kernel parameter gamma, which affect the generalization ability and the time complexity. The penalty coefficient is tuned to adjust the model's tolerance to errors, and the kernel parameter is tuned to optimize the number of support vectors. A grid search algorithm is used to tune C and gamma; the accuracy is highest when C is 30 and gamma is 0.1. Ten-fold cross-validation is used to test the accuracy of the model.
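A minimal scikit-learn sketch of this tuning procedure is shown below; the feature matrix and labels are random placeholders standing in for the 1000 × 7 data set, and the parameter grid is illustrative.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((1000, 7))        # placeholder for the 1000 x 7 feature matrix
y = rng.integers(0, 10, 1000)    # placeholder labels for the ten materials

# Grid search over C and gamma with ten-fold cross-validation.
param_grid = {"C": [1, 10, 30, 100], "gamma": [0.01, 0.1, 1.0]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=10)
search.fit(X, y)
print(search.best_params_)       # the paper reports C = 30, gamma = 0.1 as optimal
print(search.best_score_)        # mean cross-validated accuracy
```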
When using the random forest to identify different materials, multiple decision trees are established, and the final result is determined by voting among the recognition results of the decision trees. In the random forest model, the number of decision trees (n_estimators) and the maximum tree depth (max_depth) must be determined. Increasing n_estimators and max_depth may improve performance but also increases the time complexity of the algorithm, so these two hyperparameters need to be tuned appropriately. When the sample features are limited, max_depth does not need to be restricted. A grid search algorithm is used to search for these two parameters; the accuracy is highest when n_estimators is 100 and max_depth is unrestricted.
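The same grid-search pattern applies to the random forest; the sketch below again uses placeholder data and an illustrative parameter grid.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.random((1000, 7))        # placeholder feature matrix
y = rng.integers(0, 10, 1000)    # placeholder material labels

# max_depth=None leaves the depth unrestricted, the setting the paper reports
# as optimal together with 100 trees.
param_grid = {"n_estimators": [50, 100, 200], "max_depth": [None, 5, 10]}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=10)
search.fit(X, y)
print(search.best_params_)
```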
K-nearest neighbor is a supervised algorithm that can solve classification and regression problems. When using the K-nearest neighbor algorithm to identify different materials, the number of neighboring samples, called the K value, must be determined; the label of the current sample is decided by voting among the labels of its neighbors. The K value affects the accuracy and time complexity of the algorithm, so choosing an appropriate K is very important. A grid search algorithm is used to search for the number of neighbors; the accuracy is highest when K is 5. Ten-fold cross-validation was used to test the accuracy of the model.
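A corresponding sketch for tuning K is given below, again with placeholder data and an illustrative candidate list.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 7))        # placeholder feature matrix
y = rng.integers(0, 10, 1000)    # placeholder material labels

# Search the neighborhood size K; the paper reports K = 5 as optimal.
search = GridSearchCV(KNeighborsClassifier(), {"n_neighbors": [3, 5, 7, 9, 11]}, cv=10)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```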
The convolutional neural network, a typical deep neural network, is widely used in object detection and target recognition. A seven-layer convolutional neural network with two convolutional layers, two pooling layers, and three fully connected layers is established to achieve texture recognition. The original data are reconstructed before being input into the convolutional layer, so that each sample is a 3300 × 1 vector. The structure of the convolutional neural network model is shown in Table 1. The first layer is a convolutional layer with 8 convolution kernels of size 25 and a ReLU activation function; its output size is 3300 × 8. The second layer is a max pooling layer with a kernel of size 15 (see Table 1), giving an output size of 220 × 8. The third layer is a convolutional layer with 16 convolution kernels of size 25 and a ReLU activation function, giving an output size of 220 × 16. The fourth layer is a max pooling layer with a kernel of size 15, giving an output size of 14 × 16. The output is then flattened into a layer containing 224 neurons. After two further fully connected layers, 10 neurons are output, representing the 10 different materials, from which the material type is classified. The choice of hyperparameters in model training strongly affects the final training results. The RMSprop optimization algorithm is used during training, the batch size is set to 16, the number of iterations (nb_epoch) is set to 50, and the loss function is the cross-entropy loss.
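The architecture in Table 1 can be reproduced with the sketch below. The paper does not name the deep learning framework, so a Keras implementation is assumed here, and the training data are random placeholders standing in for the reconstructed 3300 × 1 samples.

```python
import numpy as np
from tensorflow.keras import layers, models, optimizers

# Seven-layer 1-D CNN following Table 1 (2 conv + 2 pooling + 3 fully connected).
model = models.Sequential([
    layers.Input(shape=(3300, 1)),                              # one vibration sample
    layers.Conv1D(8, 25, padding="same", activation="relu"),    # -> 3300 x 8
    layers.MaxPooling1D(15),                                    # -> 220 x 8
    layers.Conv1D(16, 25, padding="same", activation="relu"),   # -> 220 x 16
    layers.MaxPooling1D(15),                                    # -> 14 x 16
    layers.Flatten(),                                           # -> 224
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),                     # ten materials
])
model.compile(optimizer=optimizers.RMSprop(),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder data standing in for the 1000 reconstructed 3300 x 1 samples.
X = np.random.rand(1000, 3300, 1).astype("float32")
y = np.random.randint(0, 10, 1000)
model.fit(X, y, batch_size=16, epochs=50, validation_split=0.2)
```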

2.4. Performance Measures

The performance metrics used to compare the classifiers are precision, recall, F1 score, and overall accuracy. They are computed from the confusion matrices using Equations (1)–(4).
\[ \text{Precision} = \frac{TP}{TP + FP} \tag{1} \]
\[ \text{Recall (TPR)} = \frac{TP}{TP + FN} \tag{2} \]
\[ \text{F1 Score} = \frac{2TP}{2TP + FP + FN} \tag{3} \]
\[ \text{Accuracy} = \frac{TP + TN}{TP + FP + FN + TN} \tag{4} \]
TP refers to the positive samples that are predicted to be positive. TN refers to the negative samples that are predicted to be negative. FP refers to the negative samples that are predicted to be positive. FN refers to the positive samples that are predicted to be negative.
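The sketch below computes these quantities directly from a multi-class confusion matrix; the helper name is illustrative.

```python
import numpy as np

def per_class_metrics(conf):
    """Per-class precision, recall and F1 from a confusion matrix
    (rows = true class, columns = predicted class), following Eqs. (1)-(3)."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp      # predicted as this class but actually another
    fn = conf.sum(axis=1) - tp      # of this class but predicted as another
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    # Overall accuracy: correctly classified samples over all samples,
    # the multi-class counterpart of Eq. (4).
    accuracy = tp.sum() / conf.sum()
    return precision, recall, f1, accuracy

conf = np.array([[14, 0], [1, 9]])  # toy two-class confusion matrix
print(per_class_metrics(conf))
```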

3. Results and Discussion

The support vector machine, random forest, and K-nearest neighbor models were built on the Jupyter Notebook platform using the scikit-learn library and were used to classify the ten materials.
During training, the data must be divided into training and test sets. Ten-fold cross-validation was used for the three machine learning models: the data set was divided into ten parts, nine of which were used as training samples in turn while the remaining part was used as the test set. The average of the ten results was taken as the estimated accuracy of the algorithm. The detection results of the support vector machine, random forest, and K-nearest neighbor algorithms are shown in Table 2.
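A compact way to reproduce this evaluation protocol with scikit-learn is sketched below; the data are placeholders and the hyperparameters are those reported in Section 2.3.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((1000, 7))        # placeholder feature matrix
y = rng.integers(0, 10, 1000)    # placeholder material labels

classifiers = {
    "SVM": SVC(kernel="rbf", C=30, gamma=0.1),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10)   # ten-fold cross-validation
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```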
From the results shown in Table 2, it can be seen that all three machine learning algorithms achieve high detection accuracy. For the support vector machine, most of the test set data are accurately classified and only a few samples are misidentified. The most severely misidentified material is asamoto, with an accuracy rate of 0.86; the recognition accuracy of all other materials is above 0.9. The detection results of the random forest likewise show that the recognition accuracy of asamoto is the lowest among all materials, while that of the other materials is above 0.9, and the same holds for the K-nearest neighbor results. The confusion matrices of the three machine learning algorithms are shown in Figure 6. For all three algorithms, most of the misclassified asamoto samples are judged to be polyester, and most of the samples misjudged as asamoto are actually polyester.
Asamoto and polyester are easily confused with each other; their features are shown in Figure 7. All seven features of the two materials overlap, which is why they are easily misidentified and why the recognition accuracy of asamoto is low for the three machine learning algorithms.
Since the recognition accuracy of asamoto is low for the three machine learning algorithms, an algorithm based on a deep neural network is explored. The detection results of the convolutional neural network proposed in this study are shown in Table 3.
Table 3 shows that most materials are accurately classified, and the recognition accuracy of all materials is above 0.9. The convolutional neural network solves the problem of the low recognition accuracy of asamoto in the machine learning algorithms. The accuracy and loss curves during training and testing of the convolutional neural network are shown in Figure 8, and the recognition accuracy and training time of the four algorithms are shown in Table 4. As the number of iterations increases, both the training accuracy and the test accuracy converge to 98.5%.
According to the training and test results, among the three machine learning algorithms, the support vector machine model has the highest recognition accuracy. The training time of the K-nearest neighbor algorithm is the shortest, because its result is obtained simply by voting among the labels of the neighborhood samples, with no complicated mathematical operations on the feature vectors. The training time of the random forest model is the longest, because the sample selection and node feature selection of each tree are random, and the two hyperparameters (the number of trees and the maximum tree depth) must be explored in each training run to optimize the test accuracy. Five-fold cross-validation is adopted for all the machine learning algorithms, and the variance of the resulting accuracy is shown in Table 4. The variances are very low, indicating that the algorithms are stable and robust. The test accuracy of the convolutional neural network is higher than that of the three machine learning algorithms, and its training accuracy is 98.5%. Our method is compared with methods reported in recent years in Table 5; the proposed algorithm achieves higher recognition accuracy with low computational time.
Judging from the model training results, the convolutional neural network has very obvious advantages. When establishing the model, the convolutional neural network automatically extracts features according to the model settings, without manually defined feature types, making the establishment of the entire model simple and efficient. The convolution kernel also gives the convolutional layer the property of parameter sharing, and the pooling layers reduce the number of neurons, greatly reducing the number of parameters required to train the network. Therefore, the complexity and training time of the network are greatly reduced, and the convolutional neural network achieves higher accuracy than the machine learning methods.

4. Conclusions and Future Work

We have proposed object texture recognition methods based on tactile perception information from a bionic tactile sensor. Based on the vibration data collected by the sensor while sliding on the materials, data preprocessing and feature extraction algorithms are proposed; four models, namely support vector machine, random forest, K-nearest neighbor, and convolutional neural network, are established; and training and test verification are completed. Experimental results show that, compared with the machine learning algorithms, the convolutional neural network has better recognition accuracy, reaching 98.5%. Among the four algorithms, the K-nearest neighbor algorithm has a simple principle and involves no complicated mathematical operations; its result is obtained only by voting on the neighborhood sample labels of the samples to be tested, so it has the greatest advantage in time efficiency. In the machine learning algorithms, our feature extraction is essentially completed by addition, which reduces the time complexity of the entire pipeline. In the deep learning algorithm, our neural network has few layers, and its convolution kernels are not as complicated as those of a 3D CNN, so the time complexity is relatively low. As a result, our algorithms require a shorter time. Compared with previously reported texture recognition methods, our approach features higher recognition accuracy with low computational time. In future work, the algorithm models can be further improved to increase recognition accuracy. For deep learning algorithms, the upper limit of detection accuracy depends on the size of the data set, so the data set can be further expanded.

Author Contributions

All parts of the research were carried out by both authors. Both authors have read and agreed to the current version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

We did not report any data.

Acknowledgments

We acknowledge the support from National Natural Science Foundation of China (No. U2013213, 51820105008, 92048302), the Technology Innovation Project of Hubei Province of China under Grant 2019AEA171. We also acknowledge the support from Flexible Electronics Research Center of HUST for providing experiment facility.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kalisch, T.; Kattenstroth, J.C.; Kowalewski, R.; Tegenthoff, M.; Dinse, H.C. Cognitive and Tactile Factors Affecting Human Haptic Performance in Later Life. PLoS ONE 2012, 7, e30420. [Google Scholar] [CrossRef] [Green Version]
  2. Liu, C.K.; Qiao, H.; Su, J.H.; Zhang, P. Vision-Based 3-D Grasping of 3-D Objects with a Simple 2-D Gripper. IEEE Trans. Syst. Man Cybern. Syst. 2014, 44, 605–620. [Google Scholar]
  3. Bekiroglu, Y.; Kragic, D.; Kyrki, V. Learning Grasp Stability Based on Tactile Data and HMMs. In Proceedings of the 19th IEEE International Conference on Robot and Human Interactive Communication, Viareggio, Italy, 13–15 September 2010; pp. 132–137. [Google Scholar]
  4. Cheng, Y.; Su, C.Z.; Jia, Y.Y.; Xi, N. Data Correlation Approach for Slippage Detection in Robotic Manipulations Using Tactile Sensor Array. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg, Germany, 28 September–2 October 2015; pp. 2717–2722. [Google Scholar]
  5. Zhang, Y.F.; Liu, H. Tactile Sensor Based Varying Contact Point Manipulation Strategy for Dexterous Robot Hand Manipulating Unknown Objects. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Portugal, 7–12 October 2012; pp. 4756–4761. [Google Scholar]
  6. Taira, R.; Saga, S.; Okatani, T.; Deguchi, K. 3D Reconstruction of Reflective Surface on Reflection Type Tactile Sensor Using Constraints of Geometrical Optics. In Proceedings of the Society of Instrument and Control Engineers of Japan (SICE), Taipei, Taiwan, 18–21 August 2010; pp. 3144–3149. [Google Scholar]
  7. Yao, G.; Xu, L.; Cheng, X.W.; Li, Y.Y.; Huang, X.; Guo, W.; Liu, S.Y.; Wang, Z.L.; Wu, H. Bioinspired Triboelectric Nanogenerators as Self-Powered Electronic Skin for Robotic Tactile Sensing. Adv. Funct. Mater. 2020, 30, 1907312. [Google Scholar] [CrossRef]
  8. Wu, H.; Yang, G.G.; Zhu, K.H.; Liu, S.Y.; Guo, W.; Jiang, Z.; Li, Z. Materials, Devices, and Systems of On-Skin Electrodes for Electrophysiological Monitoring and Human–Machine Interfaces. Adv. Sci. 2020, 8, 2001938. [Google Scholar] [CrossRef] [PubMed]
  9. Zhang, C.; Liu, S.Y.; Huang, X.; Guo, W.; Li, Y.Y.; Wu, H. A stretchable dual-mode sensor array for multifunctional robotic electronic skin. Nano Energy 2019, 62, 164–170. [Google Scholar] [CrossRef]
  10. Luo, S.; Bimbo, J.; Dahiya, R.; Liu, H. Robotic tactile perception of object properties: A review. Mechatronics 2017, 48, 54–67. [Google Scholar] [CrossRef] [Green Version]
  11. Tsuji, S.; Kohama, T. Using a Convolutional Neural Network to Construct a Pen-type Tactile Sensor System for Roughness Recognition. Sens. Actuators A Phys. 2019, 291, 7–12. [Google Scholar] [CrossRef]
  12. Romano, J.M.; Kuchenbecker, K.J. Methods for Robotic Tool-mediated Haptic Surface Recognition. In Proceedings of the IEEE Haptics Symposium, Houston, TX, USA, 23–37 February 2014; pp. 49–56. [Google Scholar]
  13. Yuan, W.; Zhu, C.; Owens, A.; Srinivasan, M.A.; Adelson, E.H. Shape-independent Hardness Estimation Using Deep Learning and a GelSight Tactile Sensor. In Proceedings of the IEEE International Conference on Robotics and Automation, Marina Bay Sands, Singapore, 29 May–3 June 2017; pp. 951–958. [Google Scholar]
  14. An, B.W.; Heo, S.; Ji, S.; Bien, F.; Park, J.U. Transparent and flexible fingerprint sensor array with multiplexed detection of tactile pressure and skin temperature. Nat. Commun. 2018, 9, 2458. [Google Scholar] [CrossRef] [PubMed]
  15. Chen, H.; Bai, L.; Li, T.; Zhao, C.; Zhang, J.S.; Zhang, N.; Song, G.F.; Gan, Q.Q.; Xu, Y. Wearable and robust triboelectric nanogenerator based on crumpled gold films. Nano Energy 2018, 46, 73–80. [Google Scholar] [CrossRef]
  16. Wang, X.M.; Tao, L.Q.; Yuan, M.; Wang, Z.P.; Yu, J.B.; Xie, D.L.; Luo, F.; Chen, X.P.; Wong, C.P. Sea urchin-like microstructure pressure sensors with an ultra-broad range and high sensitivity. Nat. Commun. 2021, 12, 1776. [Google Scholar] [CrossRef]
  17. Wang, H.S.; Hong, S.K.; Han, J.H.; Jung, Y.H.; Jeong, H.K.; Im, T.H.; Jeong, C.K.; Lee, B.Y.; Kim, G.; Yoo, C.D.; et al. Biomimetic and flexible piezoelectric mobile acoustic sensors with multiresonant ultrathin structures for machine learning biometrics. Sci. Adv. 2021, 7, eabe5683. [Google Scholar] [CrossRef] [PubMed]
  18. Kroemer, O.; Lampert, C.H.; Peters, J. Learning Dynamic Tactile Sensing with Robust Vision-Based Training. IEEE Trans. Robot. 2011, 27, 545–557. [Google Scholar] [CrossRef]
  19. Fox, C.W.; Evans, M.; Pearson, M.; Prescott, T.J. Tactile SLAM with a Biomimetic Whiskered Robot. In Proceedings of the IEEE International Conference on Robotics and Automation, St. Paul, MN, USA, 14–18 May 2012; pp. 4925–4930. [Google Scholar]
  20. Kuchenbecker, K.J.; Fiene, J.; Niemeyer, G. Improving Contact Realism through Event-Based Haptic Feedback. IEEE Trans. Vis. Comput. Graph. 2006, 12, 219–230. [Google Scholar] [CrossRef]
  21. Mitchinson, B.; Pearson, M.J.; Pipe, A.G.; Prescott, T.I. Biomimetic Tactile Target Acquisition, Tracking and Capture. Robot. Auton. Syst. 2014, 62, 366–375. [Google Scholar] [CrossRef] [Green Version]
  22. Huet, L.A.; Rudnicki, J.W.; Hartmann, M.J. Tactile Sensing with Whiskers of Various Shapes: Determining the Three-Dimensional Location of Object Contact Based on Mechanical Signals at The Whisker Base. Soft Robot. 2017, 4, 88–102. [Google Scholar] [CrossRef]
  23. Giguere, P.; Dudek, G. A Simple Tactile Probe for Surface Identification by Mobile Robots. IEEE Trans. Robot. 2011, 27, 534–544. [Google Scholar] [CrossRef]
  24. Kaboli, M.; Mittendorfer, P.; Hugel, V.; Cheng, G. Humanoids Learn Object Properties from Robust Tactile Feature Descriptors via Multi-modal Artificial Skin. In Proceedings of the IEEE-RAS International Conference on Humanoid Robots, Seoul, Korea, 3–5 November 2015; pp. 187–192. [Google Scholar]
  25. Winstone, B.; Griffiths, G.; Pipe, T.; Melhuish, C.; Rossiter, J. TACTIP—Tactile Fingertip Device, Texture Analysis through Optical Tracking of Skin Features. In Proceedings of the Conference on Biomimetic and Biohybrid Systems, London, UK, 29 July–2 August 2013; pp. 323–334. [Google Scholar]
  26. Hoelscher, J.; Peters, J.; Hermans, T. Evaluation of Tactile Feature Extraction for Interactive Object Recognition. In Proceedings of the IEEE-RAS International Conference on Humanoid Robots, Seoul, Korea, 3–5 November 2015; pp. 310–317. [Google Scholar]
  27. Jamali, N.; Sammut, C. Majority Voting: Material Classification by Tactile Sensing Using Surface Texture. IEEE Trans. Robot. 2011, 27, 508–521. [Google Scholar] [CrossRef]
  28. Juan, M.G.; Jesús, G.G.; Alfonso, J.G. Enhancing Perception with Tactile Object Recognition in Adaptive Grippers for Human–Robot Interaction. Sensors 2018, 18, 692. [Google Scholar]
  29. Hui, X.; Liu, H.; Shan, L.; Seneviratne, L.D.; Althoefer, K. Fiber Optics Tactile Array Probe for Tissue Palpation during Minimally Invasive Surgery. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 2539–2544. [Google Scholar]
  30. Tenzer, Y.; Jentoft, L.P.; Howe, R. The Feel of MEMS Barometers: Inexpensive and Easily Customized Tactile Array Sensors. Robot. Autom. Mag. 2014, 21, 89–95. [Google Scholar] [CrossRef]
  31. Chorley, C.; Melhuish, C.; Pipe, T.; Rossiter, J. Development of a Tactile Sensor Based on Biologically Inspired Edge Encoding. In Proceedings of the International Conference on Advanced Robotics, Munich, Germany, 22–26 June 2009; pp. 1–6. [Google Scholar]
  32. Sato, K.; Kamiyama, K.; Kawakami, N.; Tachi, S. Finger-Shaped Gelforce: Sensor for Measuring Surface Traction Fields for Robotic Hand. IEEE Trans. Haptics 2010, 3, 37–47. [Google Scholar] [CrossRef] [PubMed]
  33. Johnson, M.K.; Cole, F.; Raj, A.; Adelson, E.H. Microgeometry Capture Using an Elastomeric Sensor. ACM Trans. Graph. 2011, 30, 46–53. [Google Scholar] [CrossRef] [Green Version]
  34. Yamaguchi, A.; Atkeson, C.G. Combining Finger Vision and Optical Tactile Sensing: Reducing and Handling Errors while Cutting Vegetables. In Proceedings of the IEEE-RAS International Conference on Humanoid Robots, Cancun, Mexico, 15–17 November 2016; pp. 1045–1051. [Google Scholar]
  35. Funabashi, S.; Morikuni, S.; Geier, A.; Schmitz, A.; Ogasa, S.; Tomo, T.P.; Somlor, S.; Sugano, S. Object Recognition Through Active Sensing Using a Multi-Fingered Robot Hand with 3D Tactile Sensors. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems, Madrid, Spain, 1–5 October 2018; pp. 2589–2595. [Google Scholar]
  36. Li, R.; Adelson, E.H. Sensing and Recognizing Surface Textures Using a GelSight Sensor. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 25–27 June 2013; pp. 1241–1247. [Google Scholar]
  37. Garcia-Garcia, A.; Zapata-Impata, B.S.; Orts-Escolano, S.; Gil, P.; Garcia-Rodriguez, J. Tactile GCN: A Graph Convolutional Network for Predicting Grasp Stability with Tactile Sensors. In Proceedings of the 2019 International Joint Conference on Neural Networks, Budapest, Hungary, 14–19 July 2019; pp. 1–8. [Google Scholar]
  38. Gandarias, J.M.; Gómez-de-Gabriel, J.M.; García-Cerezo, A.J. Human and Object Recognition with a High-resolution Tactile Sensor. In Proceedings of the 2017 IEEE Sensors, Glasgow, UK, 29 October–1 November 2017; pp. 1–3. [Google Scholar]
  39. Polat, E.O.; Balci, O.; Kakenov, N.; Uzlu, H.B.; Kocabas, C.; Dahiya, R. Synthesis of Large Area Graphene for High Performance in Flexible Optoelectronic Devices. Sci. Rep. 2015, 5, 16744. [Google Scholar] [CrossRef] [Green Version]
  40. Hoffmann, M.; Straka, Z.; Farkas, I.; Vavrecka, M.; Metta, G. Robotic Homunculus: Learning of Artificial Skin Representation in a Humanoid Robot Motivated by Primary Somatosensory Cortex. IEEE Trans. Cogn. Dev. Syst. 2017, 10, 163–176. [Google Scholar] [CrossRef]
  41. Muscari, L.; Seminara, L.; Mastrogiovanni, F.; Valle, M.; Capurro, M.; Cannata, G. Real-Time Reconstruction of Contact Shapes for Large Area Robot Skin. In Proceedings of the IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 2360–2366. [Google Scholar]
  42. Mittendorfer, P.; Cheng, G. Humanoid Multimodal Tactile-Sensing Modules. IEEE Trans. Robot. 2011, 27, 401–410. [Google Scholar] [CrossRef] [Green Version]
  43. Bartolozzi, C.; Natale, L.; Nori, F.; Metta, G. Robots with a Sense of Touch. Nat. Mater. 2016, 15, 921–925. [Google Scholar] [CrossRef] [PubMed]
  44. Kaboli, M.; Long, A.; Cheng, G. Humanoids Learn Touch Modalities Identification via Multi-modal Robotic Skin and Robust Tactile Descriptors. Adv. Robot. 2015, 29, 1411–1425. [Google Scholar] [CrossRef]
  45. Fishel, J.A.; Loeb, G.E. Bayesian Exploration for Intelligent Identification of Textures. Front. Neurorobot. 2012, 6, 4. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  46. Ho, V.A.; Araki, T.; Makikawa, M.; Hirai, S. Experimental investigation of surface identification ability of a low-profile fabric tactile sensor. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Algarve, Portugal, 7–11 October 2012; pp. 4497–4504. [Google Scholar]
  47. Dallaire, P.; Giguere, P.; Emond, D.; Chaibdraa, B. Autonomous Tactile Perception: A Combined Improved Sensing and Bayesian Nonparametric Approach. Robot. Auton. Syst. 2014, 62, 422–435. [Google Scholar] [CrossRef]
  48. Liu, H.; Song, X.; Bimbo, J.; Seneviratne, L.; Althoefer, K. Surface Material Recognition through Haptic Exploration Using an Intelligent Contact Sensing Finger. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Algarve, Portugal, 7–11 October 2012; pp. 52–57. [Google Scholar]
  49. Sm, A.; Dr, B.; Clt, B. Deep Residual Pooling Network for Texture Recognition. Pattern Recognit. 2021, 112, 107817. [Google Scholar]
  50. Florindo, J.B. DSTNet: Successive applications of the discrete Schroedinger transform for texture recognition. Inf. Sci. 2020, 507, 356–364. [Google Scholar] [CrossRef]
  51. Yuan, W.; Mo, Y.; Wang, S.; Adelson, E.H. Active clothing material perception using tactile sensing and deep learning. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 4842–4849. [Google Scholar]
  52. Polic, M.; Krajacic, I.; Lepora, N.; Orsag, M. Convolutional autoencoder for feature extraction in tactile sensing. IEEE Robot. Autom. Lett. 2019, 4, 3671–3678. [Google Scholar] [CrossRef]
  53. Su, Z.; Hausman, K.; Chebotar, Y.; Molchanov, A.; Loeb, G.E.; Sukhatme, G.S.; Schaal, S. Force estimation and slip detection/classification for grip control using a biomimetic tactile sensor. In Proceedings of the IEEE-RAS 15th International Conference on Humanoid Robots, Seoul, Korea, 3–5 November 2015; pp. 297–303. [Google Scholar]
  54. Jiminez, M.C.; Fishel, J.A. Evaluation of force, vibration and thermal tactile feedback in prosthetic limbs. In Proceedings of the IEEE International Conference on Haptic Interfaces for Virtual Environment and Teleoperator Systems (Haptics), Houston, TX, USA, 23–26 February 2014; pp. 437–441. [Google Scholar]
  55. SynTouch. Available online: https://syntouchinc.com (accessed on 18 July 2021).
  56. Strese, M.; Lee, J.Y.; Schuwerk, C.; Han, Q.; Kim, H.G.; Steinbach, E. A haptic texture database for tool-mediated texture recognition and classification. In Proceedings of the 2014 IEEE International Symposium on Haptic, Audio and Visual Environments and Games (HAVE) Proceedings, Richardson, TX, USA, 10–11 October 2014; pp. 118–123. [Google Scholar]
  57. Orii, H.; Tsuji, S.; Kouda, T.; Kohama, T. Tactile texture recognition using convolutional neural networks for time-series data of pressure and 6-axis acceleration sensor. In Proceedings of the 2017 IEEE International Conference on Industrial Technology (ICIT), Toronto, ON, Canada, 22–25 March 2017; pp. 1076–1080. [Google Scholar]
  58. Kerr, E.; McGinnity, T.M.; Coleman, S. Material recognition using tactile sensing. Expert Syst. Appl. 2018, 94, 94–111. [Google Scholar] [CrossRef] [Green Version]
  59. Gandarias, J.M.; Pastor, F.; García-Cerezo, A.J.; Gómez-de-Gabriel, J.M. Active tactile recognition of deformable objects with 3d convolutional neural networks. In Proceedings of the 2019 IEEE World Haptics Conference (WHC), Tokyo, Japan, 9–12 July 2019; pp. 551–555. [Google Scholar]
Figure 1. Flowchart of the proposed approach for texture recognition.
Figure 2. Experimental setup and measurement performance of the sensor: (a) schematic showing the structure of the original BioTac [55]; (b) photograph of the BioTac SP sensor [55]; (c) relationship between the change in AC pressure (bits) and the applied normal force (N); (d) photograph of the experimental setup.
Figure 3. Photograph of ten materials tested: (a) polyester; (b) flannel; (c) asamoto; (d) denim fabric; (e) foam; (f) double-sided; (g) cotton linen; (h) flax; (i) lambswool; (j) satin.
Figure 4. Vibration data of four materials: (a) polyester; (b) lambswool; (c) cotton linen; (d) foam.
Figure 5. Frequency domain signals of polyester (a), lambswool (c), cotton linen (e), and foam (g). Zoomed-in views of the signals in 0–200 Hz for polyester (b), lambswool (d), cotton linen (f), and foam (h).
Figure 6. The confusion matrices of the three machine learning algorithms: (a) SVM; (b) RF; (c) KNN.
Figure 7. Features of asamoto and polyester.
Figure 8. Training and test results of the convolutional neural network: (a) The change curve of training accuracy and training loss as the number of iterations increases; (b) the change curve of test accuracy and test loss as the number of iterations increases.
Table 1. Operation, input, and output of each layer in CNN.

| Layer Connection | Input Size | Operation | Kernel Size | Output Size |
|---|---|---|---|---|
| 0–1 | 3300 × 1 | Convolution | 25 × 1 × 8 | 3300 × 8 |
| 1–2 | 3300 × 8 | Max pooling | 15 × 1 | 220 × 8 |
| 2–3 | 220 × 8 | Convolution | 25 × 1 × 16 | 220 × 16 |
| 3–4 | 220 × 16 | Max pooling | 15 × 1 | 14 × 16 |
| 4–5 | 14 × 16 | Fully connected layer | 224 | 224 |
| 5–6 | 224 | Fully connected layer | 128 | 128 |
| 6–7 | 128 | Fully connected layer (softmax) | 10 | 10 |
Table 2. Detection results of different materials using machine learning algorithms.

| Algorithm | Material | TP | FP | FN | Precision | Recall | F1 |
|---|---|---|---|---|---|---|---|
| SVM | Polyester | 87 | 8 | 13 | 0.92 | 0.87 | 0.89 |
| | Flannel | 95 | 1 | 5 | 0.99 | 0.95 | 0.97 |
| | Asamoto | 89 | 18 | 11 | 0.86 | 0.89 | 0.87 |
| | Denim fabric | 98 | 5 | 2 | 0.95 | 0.98 | 0.97 |
| | Foam | 99 | 0 | 1 | 1.0 | 0.99 | 0.99 |
| | Double-sided | 97 | 6 | 3 | 0.94 | 0.97 | 0.96 |
| | Cotton linen | 98 | 5 | 2 | 0.95 | 0.98 | 0.97 |
| | Flax | 100 | 2 | 0 | 0.98 | 1.0 | 0.99 |
| | Lambswool | 95 | 2 | 5 | 0.98 | 0.95 | 0.96 |
| | Satin | 90 | 5 | 10 | 0.95 | 0.90 | 0.92 |
| RF | Polyester | 86 | 7 | 14 | 0.92 | 0.86 | 0.89 |
| | Flannel | 95 | 2 | 5 | 0.98 | 0.95 | 0.96 |
| | Asamoto | 90 | 21 | 10 | 0.81 | 0.90 | 0.85 |
| | Denim fabric | 97 | 6 | 3 | 0.94 | 0.97 | 0.96 |
| | Foam | 97 | 0 | 3 | 1.0 | 0.97 | 0.98 |
| | Double-sided | 95 | 4 | 5 | 0.96 | 0.95 | 0.95 |
| | Cotton linen | 97 | 5 | 3 | 0.95 | 0.97 | 0.96 |
| | Flax | 99 | 5 | 1 | 0.95 | 0.99 | 0.97 |
| | Lambswool | 96 | 2 | 4 | 0.98 | 0.96 | 0.97 |
| | Satin | 89 | 7 | 11 | 0.93 | 0.89 | 0.91 |
| KNN | Polyester | 88 | 7 | 12 | 0.93 | 0.88 | 0.90 |
| | Flannel | 91 | 1 | 9 | 0.99 | 0.91 | 0.95 |
| | Asamoto | 92 | 20 | 8 | 0.82 | 0.92 | 0.87 |
| | Denim fabric | 98 | 9 | 2 | 0.92 | 0.98 | 0.95 |
| | Foam | 99 | 2 | 1 | 0.98 | 0.99 | 0.99 |
| | Double-sided | 97 | 9 | 3 | 0.92 | 0.97 | 0.94 |
| | Cotton linen | 98 | 4 | 2 | 0.96 | 0.98 | 0.97 |
| | Flax | 97 | 2 | 3 | 0.98 | 0.97 | 0.97 |
| | Lambswool | 97 | 2 | 3 | 0.98 | 0.97 | 0.97 |
| | Satin | 84 | 3 | 16 | 0.97 | 0.84 | 0.90 |
Table 3. Detection results of convolutional neural network.

| Algorithm | Material | TP | FP | FN | Precision | Recall | F1 |
|---|---|---|---|---|---|---|---|
| CNN | Polyester | 14 | 0 | 0 | 1.0 | 1.0 | 1.0 |
| | Flannel | 11 | 0 | 0 | 1.0 | 1.0 | 1.0 |
| | Asamoto | 25 | 0 | 0 | 1.0 | 1.0 | 1.0 |
| | Denim fabric | 24 | 0 | 0 | 1.0 | 1.0 | 1.0 |
| | Foam | 28 | 0 | 0 | 1.0 | 1.0 | 1.0 |
| | Double-sided | 16 | 1 | 0 | 0.94 | 1.0 | 0.97 |
| | Cotton linen | 16 | 0 | 2 | 1.0 | 0.89 | 0.94 |
| | Flax | 23 | 0 | 0 | 1.0 | 1.0 | 1.0 |
| | Lambswool | 21 | 2 | 0 | 0.91 | 1.0 | 0.95 |
| | Satin | 19 | 0 | 1 | 1.0 | 0.95 | 0.97 |
Table 4. Recognition accuracy of four algorithms.

| Model | Training Accuracy (%) | Test Accuracy (%) | Time (s) |
|---|---|---|---|
| SVM | 95.88 ± 1.48 | 95 ± 2.28 | 53.9 |
| RF | 95.59 ± 1.29 | 94 ± 2.24 | 273 |
| KNN | 96.75 ± 1.68 | 94 ± 2.24 | 3.76 |
| CNN | 98.5 | 98.5 | 25.9 |
Table 5. Comparisons of texture recognition algorithms.

| Reference | Method | Sensor | Accuracy (%) | Run Time (to Train One Fold) (s) | Description |
|---|---|---|---|---|---|
| Strese et al. [56], 2014 | Gaussian mixture model | Accelerometer | 80.2 | / | Classification of 43 kinds of objects |
| Orii et al. [57], 2017 | CNN | Pressure and 6-axis accelerometer | 70.7 | / | Classification of 4 kinds of objects |
| Gandarias et al. [38], 2017 | 1. SURF | High-resolution pressure sensor | 80 | 0.01 | Classification of 8 kinds of objects |
| | 2. DCNN | | 91.7 | 0.7 | |
| Kerr et al. [58], 2018 | SVM | BioTac | 86.19 | 0.48 | Classification of 14 kinds of objects |
| Gandarias et al. [59], 2019 | 3D CNN with 3D input data | High-resolution pressure sensor | 96.3 | / | Classification of 9 kinds of deformable objects |
| Our method | CNN | BioTac SP | 98.5 | 0.032 | Classification of 10 kinds of objects |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
