
Texture Identification and Object Recognition Using a Soft Robotic Hand Innervated Bio-Inspired Proprioception

1 School of Biological Science and Medical Engineering, Beihang University, Beijing 100191, China
2 Department of Mathematics and Computer Science, Colorado College, Colorado Springs, CO 80903, USA
* Author to whom correspondence should be addressed.
Machines 2022, 10(3), 173; https://doi.org/10.3390/machines10030173
Submission received: 27 January 2022 / Revised: 19 February 2022 / Accepted: 24 February 2022 / Published: 25 February 2022
(This article belongs to the Topic Robotics and Automation in Smart Manufacturing Systems)

Abstract
In this study, we innervated bio-inspired proprioception into a soft hand, facilitating a robust perception of textures and object shapes. We detail a tendon-driven soft finger with three joints, inspired by the human finger. With tension sensors embedded in the tendons to simulate the Golgi tendon organs of the human body, 17 types of textures can be identified under uncertain rotation angles and actuator displacements. Four classifiers were used, and the highest identification accuracy was 98.3%. A three-fingered soft hand based on the bionic finger was developed, and its basic grasp capability was tested experimentally. The soft hand can distinguish 10 types of objects that vary in shape with the top grasp and side grasp, with the highest accuracies of 96.33% and 96.00%, respectively. Additionally, for six objects with close shapes, the soft hand obtained an identification accuracy of 97.69% with a scan-grasp method. This study offers a novel bionic solution for texture identification and object recognition by soft manipulators.

1. Introduction

Soft manipulators have been widely studied due to their inherent compliance during interactions with objects and the environment [1,2,3,4,5]. Most are driven by pneumatic actuators [1,2], tendons [3,4], magneto-/electro-responsive polymers [5], etc. Some studies have also realized delicate in-hand manipulations [6,7,8,9]. However, adequately endowing robots with a “sense of touch” remains an unsolved challenge [10,11]. Considerable work has focused on flexible, surface-mountable tactile sensors to realize texture recognition [12,13] and to mimic the human cutaneous mechanoreceptive system to achieve tactile sensation in robotic hands [14,15,16]. However, it is difficult to densely cover the entire manipulator with these types of sensors. Even when array sensors are used in some adaptive grippers to realize large-area perception [17,18], this leads to high cost, and the sensitive area is still confined. Moreover, recalibration is generally required for changes in contact conditions, such as the contact angle and actuation state [19]. All these drawbacks limit the application of these sensors.
Recent work has shown that proprioception is an effective means of achieving robotic hand sensory ability. Zhao et al. innervated a soft robotic hand with optical waveguides to detect the shapes and textures of objects [20], and an analogous sensor was used to measure the curvature of a soft structure in [21]. By embedding bend sensors in soft fingers, the soft hand in [22] was able to identify objects that vary in shape. Scimeca et al. reconstructed the shape of a soft finger using a hexagonal tactile array placed at the base of the cylindrical finger [23]. Additionally, a combination of machine learning and distributed proprioceptive sensors was used to perceive the shape of a soft arm [24]. All these works demonstrate the potential of proprioception in soft robots.
Compared to achieving tactile sensation with surface-mountable sensors, proprioceptive sensors are usually more compact to integrate and more robust to changes in external conditions. Given these benefits, this study developed a soft hand that mimics the human proprioception framework. As shown in Figure 1, a tension sensor embedded in the tendon of the finger records the dynamic changes in tendon force during contact, which we use to identify textures. The bionic finger can classify 17 textures under various contact angles and actuator displacements without recalibration. Based on this, a three-fingered soft hand was developed to recognize objects. In general, object recognition relies on differences in object shape, as in [22]; for objects with the same shape, this approach falls short. Some studies recognize such objects using other information, such as a combination of temperature, contact pressure, and thermal conductivity [25]; however, these methods rely on highly complex sensors and require accurate contact. In this study, we identified six cylinders with identical geometries using universal one-dimensional force sensors. This approach is robust to changes in contact conditions and can be integrated simply into other tendon-driven systems.
In this paper, Section 2 describes the design of the system and the experimental methods. The results, which demonstrate the perception ability of the soft hand, are presented in Section 3. Section 4 discusses the findings, and Section 5 concludes the paper.

2. Materials and Methods

2.1. Inspiration

The proprioception of our system, inspired by the human body, is shown in Figure 1. For humans, proprioception refers to the sense of body position and load, primarily achieved through joint, muscle, and tendon proprioceptors [26]. Muscle proprioceptors mainly comprise muscle spindles, which are enveloping structures with spiraling afferents that deliver signals to the central nervous system. The muscle spindle encodes muscle stretch, length, and the velocity of length change. In our perception system, the function of the muscle spindle is mimicked by a linear encoder within the actuator, which reports the position and velocity of the actuator.
In addition to positional feedback, force-specific information in the human body is provided by the Golgi tendon organs (GTOs), the tendon proprioceptors. Tendons are elastic structures that serve as interfaces between muscles and bones, and the Golgi tendon organs are located at the junction between the two [26]. When the tensile load on a tendon increases, as happens during resisted movements, the afferents that connect the corresponding GTO to the central nervous system increase their firing frequency to deliver this information. In our system, a one-dimensional force sensor embedded in the tendon performs the same role as a GTO; we term it the bionic Golgi tendon organ (BGTO).
Additionally, three types of joint proprioceptors reside at different levels of human joint anatomy, and they are sensitive to different stimulus frequencies. These receptors fire when the joint reaches a certain angular threshold, which is quite similar to mechanical angular encoders. Studies have shown that the information provided by joint receptors is rather ambiguous, and approximately 70% of these receptors respond only to drastic movements of the joint. Joint proprioceptors have also been regarded as less important contributors to kinesthesia and the awareness of movement and position than muscle spindles [27,28]. Therefore, bionic joint receptors were not implemented in our system.
Using a linear encoder to mimic the muscle spindle and a tension sensor to mimic the GTO, our system achieves proprioceptive perception. We integrated these elements into a bionic finger and then into a soft robotic hand.

2.2. Design Overview of the System

2.2.1. The Tendon-Driven Finger

We adopted the finger design from our previous work [29,30]: a 3D-printed hollow continuum structure. Each continuum joint is constructed from a series of notches that flex as the tendon is tightened and extend as it is released. There are three finger joints, similar to a human finger (Figure 2a). The finger is 118 mm long and 16 mm in diameter. A rubber sleeve with a thickness of 0.5 mm was attached to the distal end of the finger as a finger pad to provide more stability during grasping. The tail of the tendon was fixed with an aluminum sheet as the tendon anchor, and the through-hole in the finger pipe routes the tendon. A 0.5-mm-diameter stainless-steel wire was used as the tendon. The entire finger was 3D-printed in nylon, which is resistant to chemical substances and has adequate toughness [4]. The curved finger shapes under different tension values are shown in Figure 2b.

2.2.2. The Soft Hand

A three-fingered soft hand was developed based on the soft bionic finger (Figure 3a). Three fingers were mounted evenly around a palm. Three commercial single-axis force sensors with a range of 0–20 N and a resolution of 0.01 N were attached to the tendons and functioned as BGTOs. Three cylindrical shells were wrapped around the sensors so that they could move only vertically relative to the palm. Three linear motors served as the actuators of the soft hand. The lower-level controller is an Arduino Uno, which communicates with the actuators through a serial port; the sensors communicate with the upper computer through Modbus. The entire gripper can be mounted on a six-DOF (degree-of-freedom) robotic arm to perform grasp operations (Figure 3a).
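As a concrete illustration of the actuator command path, the following is a minimal sketch assuming a hypothetical ASCII protocol between the upper computer and the Arduino Uno; the actual firmware commands, port, and baud rate are not specified in the paper, so every identifier below is illustrative only.

```python
import serial  # pyserial

def send_tendon_displacements(port: str, displacements_mm: list[float]) -> None:
    """Send one target tendon displacement per finger to the Arduino."""
    with serial.Serial(port, baudrate=115200, timeout=1.0) as arduino:
        for motor_index, disp in enumerate(displacements_mm):
            # Hypothetical command format "M<index> <mm>\n"; the real firmware
            # protocol is not described in the paper.
            arduino.write(f"M{motor_index} {disp:.2f}\n".encode("ascii"))

# Example: actuate all three fingers to an 8.5 mm tendon displacement.
# send_tendon_displacements("/dev/ttyUSB0", [8.5, 8.5, 8.5])
```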

2.3. Experiments

This subsection describes the series of experiments conducted to evaluate the proposed system. First, we tested the basic grasp ability of the soft hand. Then, in terms of texture identification, 17 different textures were classified. Lastly, we used two approaches to recognize different objects: for objects with distinct shapes, the static forces of the three BGTOs were used for recognition, while the dynamic changes in the three BGTO signals were used to recognize objects with similar shapes.

2.3.1. Basic Grasp Ability of the Soft Hand

The basic grasp ability of the soft hand was experimentally validated using two grasp types: the top grasp and the side grasp. In the top grasp, the hand grasps objects from the top, exploiting the support surface to guide the operation, while in the side grasp, the soft hand grasps objects from the side as the object slides along a surface. A series of objects was selected for grasping, and some of the grasp scenes are shown in Figure 3b–i. The soft hand can successfully grasp fragile objects such as an egg and a bulb (Figure 3b,f) and proteiform objects such as a bunch of grapes and a paper cup (Figure 3c,d) under simple position control. Objects with different shapes and sizes (a marker, an apple and a tennis ball) can be grasped with a uniform grasp motion, indicating adequate compliance (Figure 3e,g,h). In Figure 3b–h, objects were grasped with the top grasp; a side grasp of a coke bottle is shown in Figure 3i.
The grasp forces under the top grasp and side grasp were tested. For the top grasp, a tennis ball was first fixed to a digital force meter; it was then grasped under a given actuated tendon length, and the soft hand was moved vertically until it separated from the ball. For each trial, the actuated tendon length was the same for all three fingers. The maximum reading of the force meter during this process was recorded as the grasp force. Three trials were executed for each actuated tendon length and the results were averaged. Five tendon-length states, ranging from 6.8 mm to 10.2 mm at intervals of 0.85 mm, were selected for grasping. For each grasp, the force values of the three BGTOs were recorded at the moment the grasp motion was completed and the hand was stationary. For the side grasp, the object was a coke bottle, and the same method was used, except that the soft hand grasped the bottle from the side and moved horizontally until separation.
We evaluated the grasp ability under object-position uncertainty using the methods presented in [31,32]. The top grasp and side grasp were adopted as above, and the objects were the same as those in the grasp-force test. An 11 cm × 11 cm grid was used to determine the positions of the objects. For each position, the soft hand executed a programmed grasp, and the result was recorded as a success or failure. In the top-grasp test, the robotic hand grasped the tennis ball from the top and lifted it to a certain height after the grasp was completed. In the side-grasp test, the soft hand grasped the coke bottle from the side and lifted it. A grasp was counted as successful if the object was lifted without slipping from the manipulator. To keep the number of trials within a feasible range, only one trial was conducted per position, so a total of 121 trials were recorded for each grasp type. A 3 × 3 sliding window was used to filter the results by averaging over the covered area.
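The sliding-window filtering step can be expressed compactly. The sketch below is a minimal illustration assuming a 0/1 outcome grid and nearest-edge padding at the borders, neither of which is specified in the text:

```python
import numpy as np
from scipy.ndimage import uniform_filter

# 11 x 11 grid of binary grasp outcomes, one trial per object position.
rng = np.random.default_rng(0)
successes = rng.integers(0, 2, size=(11, 11)).astype(float)  # placeholder outcomes

# 3 x 3 sliding-window average over the covered area (edge handling assumed).
success_rate = uniform_filter(successes, size=3, mode="nearest")
```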

2.3.2. Texture Identification

In this subsection, we evaluate the texture-identification capacity of the soft bionic finger. Although a number of previous works have achieved high classification accuracy for textures, they share the known limitation of requiring precise contact, so their experimental conditions are generally constrained, for example to a given actuation state or a fixed contact area between object and finger. In this paper, we impose fewer constraints on the contact conditions: texture signals were collected under random actuator displacements and five finger-rotation angles. The results demonstrate the robust texture-classification ability of the finger.
The setup in this experiment is shown in Figure 4a and primarily includes a finger module, an active linear guide, and a texture template. The finger module consists of a soft bionic finger, a tension sensor working as a BGTO, a linear motor, and a support part. The soft finger was mounted on the inner base with a rectangular locating pin and can be rotated to different angles depending on the mounting angle of the pin. In this experiment, five angles were selected: 0, 15, 30, 45 and 60 degrees. At the beginning of each trial, the linear guide carried the finger module to a given position (Figure 4b). The finger was then actuated to bend and touch the front end of the texture template (Figure 4c). To test the generalizability of the proposed system, the actuated tendon length in this phase was set to a random value within the workable range of 6.5–10.2 mm, where “workable” means that the finger remains in contact with the template. The actuator displacement in this phase was measured by the linear encoder. The finger then palpated the texture template as the linear guide moved backward at a speed of 0.15 m/s (Figure 4d). A trial was complete when the fingertip reached the tail end of the texture template (Figure 4e). The change in tendon tension was recorded by the BGTO at a frequency of 60 Hz throughout. For each finger-rotation angle, 20 trials were conducted, and a total of 120 trials were executed at this stage.
All texture templates were rectangular blocks with sizes of 100 × 30 × 10 mm (Figure 5). They were 3D printed with photosensitive resin. A total of 17 textures were adopted, which can be divided into five groups according to the texture shape, i.e., flat surface (F), circular grooves (C1–C4), rectangular grooves (R1–R4), triangular grooves (T1–T4) and sloped grooves (S1–S4). The textured surface was a 60 × 30 mm rectangle centered along the template.
Some preprocessing was required before extracting texture features. The beginning and end of each trial were cropped, as they correspond to the uniformly flat sections of the template; 200 continuous samples from the middle of the sliding process were retained for the next step. Zero-mean normalization was then performed on the cropped data to compensate for the initial contact tension on the tendon, so the resulting data represent the normalized fluctuation in tendon tension as the finger palpates different plates. To increase the randomness of the samples, windows of length 90 were randomly cropped from the continuous samples; 50 crops were applied per trial, so a total of 17,000 cropped samples were obtained for identification at this stage. We then extracted the Fourier components of the preprocessed data using a fast Fourier transform (FFT). The magnitudes of the Fourier components between 0 and 30 Hz, at an interval of 0.33 Hz, were used as candidate features for classification.
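The following is a minimal sketch of this preprocessing pipeline under the stated 60 Hz sampling rate; the exact frequency-bin spacing depends on the window length, an implementation detail not pinned down here, and the synthetic input is a placeholder:

```python
import numpy as np

FS = 60.0            # BGTO sampling rate (Hz)
SEG_LEN = 200        # continuous samples kept from the middle of each trial
CROP_LEN = 90        # random-crop window length
CROPS_PER_TRIAL = 50

def texture_features(trial: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Return FFT-magnitude feature rows (one per random crop) for one trial."""
    mid = len(trial) // 2
    segment = trial[mid - SEG_LEN // 2 : mid + SEG_LEN // 2]
    segment = segment - segment.mean()                # zero-mean normalization
    rows = []
    for _ in range(CROPS_PER_TRIAL):
        start = rng.integers(0, SEG_LEN - CROP_LEN)   # random crop position
        crop = segment[start : start + CROP_LEN]
        mags = np.abs(np.fft.rfft(crop))
        freqs = np.fft.rfftfreq(CROP_LEN, d=1.0 / FS)
        rows.append(mags[freqs <= 30.0])              # keep 0-30 Hz components
    return np.stack(rows)

# Example on a synthetic 400-sample tension trace:
rng = np.random.default_rng(0)
print(texture_features(rng.standard_normal(400), rng).shape)  # (50, n_bins)
```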
After data preprocessing, a feature selection (FS) method was used to select the most useful features. An adequate FS method is usually vital for classification tasks, as it reduces the dimension of the feature space and eliminates noise. In this task, we used the chi-squared test to obtain a subset of the features. This method computes chi-squared statistics between each non-negative feature and the class labels, and the features are then ranked by their scores. The number n of highest-ranking features retained for identification was set to eight, a tradeoff between accuracy and efficiency.
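In scikit-learn, this selection step corresponds to SelectKBest with the chi2 score function; a minimal sketch with placeholder data follows:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

# Placeholder data: non-negative FFT-magnitude candidates for 1700 cropped samples.
rng = np.random.default_rng(0)
X = rng.random((1700, 91))
y = rng.integers(0, 17, size=1700)                # 17 texture labels

selector = SelectKBest(score_func=chi2, k=8)      # chi-squared ranking, keep top 8
X_selected = selector.fit_transform(X, y)
print(selector.get_support(indices=True))         # indices of retained features
```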
Four classifiers were used to identify the 17 textures: support vector machines with a linear kernel (SVM-linear), support vector machines with a radial basis function kernel (SVM-rbf), K-nearest neighbors (KNN) and decision trees (DTs). For KNN, the five nearest neighbors were used to vote for the identity of the texture. These classifiers were all run within scikit-learn's make_pipeline. To validate the classification results systematically, we used k-fold cross validation. This method first pseudo-randomly partitions the dataset into k subsets, or folds, f1, f2, …, fk. The learning model first omits f1, trains on f2 through fk, and then validates on f1 to obtain an accuracy. The model is then reset, f2 becomes the validation set, and training and validation are repeated. This process is carried out for each of the folds, and the final accuracy is the average over all folds. k-fold cross validation is often used to avoid the bias of a deliberate choice of training and validation sets that would embellish the machine learning performance. In our experiment, k was set to ten; we ran the 10-fold cross validation and averaged the results to obtain the overall accuracy.
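A sketch of this classification setup is given below. Whether the authors' pipeline included steps beyond the classifier (e.g., scaling) is not stated, so the pipeline here wraps only the classifier, and the data are placeholders:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder data: eight selected features per cropped sample, 17 texture labels.
rng = np.random.default_rng(0)
X = rng.random((1700, 8))
y = rng.integers(0, 17, size=1700)

classifiers = {
    "SVM-linear": SVC(kernel="linear"),
    "SVM-rbf": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "DTs": DecisionTreeClassifier(),
}
for name, clf in classifiers.items():
    pipe = make_pipeline(clf)                    # wraps only the classifier
    scores = cross_val_score(pipe, X, y, cv=10)  # 10-fold cross validation
    print(f"{name}: mean accuracy = {scores.mean():.4f}")
```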
To evaluate the influence of the finger-rotation angle on classification accuracy, five data sets (DSs) were constructed from different combinations of angles. DS1 contains only the samples collected at a finger-rotation angle of 0 degrees; DS2 contains the samples at 0 and 15 degrees; DS3 at 0, 15 and 30 degrees; DS4 at 0, 15, 30 and 45 degrees; and DS5 contains all samples. The textures in all five DSs were then classified using the four classifiers.
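The construction of the five DSs might look like the following sketch, with placeholder per-angle arrays standing in for the recorded samples:

```python
import numpy as np

rng = np.random.default_rng(0)
angles = [0, 15, 30, 45, 60]
# Placeholder per-angle samples: (features, labels) recorded at each angle.
samples_by_angle = {
    a: (rng.random((340, 8)), rng.integers(0, 17, size=340)) for a in angles
}

datasets = {}
for i in range(1, len(angles) + 1):
    subset = angles[:i]                          # DS1 = {0}, DS2 = {0, 15}, ...
    X = np.concatenate([samples_by_angle[a][0] for a in subset])
    y = np.concatenate([samples_by_angle[a][1] for a in subset])
    datasets[f"DS{i}"] = (X, y)                  # classified with all four models
```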

2.3.3. Recognizing Objects That Vary in Shape

Soft manipulators can complete grasping without sophisticated control, owing to their inherent compliance. Even under a uniform actuated state, the finger configuration varies with the shape and stiffness of the object; accordingly, for tendon-driven manipulators, the tendon tensions when grasping under this condition are distinct. Given this characteristic, we can classify objects by their tendon tensions, which encode shape information. In this study, for a single grasp, a three-dimensional force space was defined by the force values of the three BGTOs; objects can then be classified according to their position in this space.
A total of 20 objects were selected for recognition, 10 for the top grasp and 10 for the side grasp (Figure 6). All of the objects in this experiment were successfully grasped by our soft hand in the basic grasp ability test. The objects selected for the top grasp were a workbox, a medicine bottle, a rubber ball, a tennis ball, a candy jar, a pepper, a Rubik’s cube, an apple, an octahedron and a tape. A uniform actuated state was applied to execute the grasp for all objects. At the beginning of each trial, the soft hand grasped the object from the top and lifted it to a height of approximately 20 cm. The static force values of the three BGTOs at the moment the grasp was completed were recorded as the configuration state. Each object was grasped 20 times, and the recorded BGTO force values were used for classification. The objects used for the side grasp were a shampoo bottle, a coke bottle, a vase, a cup, a milk bottle, a toothpaste, a rubber sac, a fan, a cuboid and a cylinder; the procedure was nearly identical except that the soft hand grasped the object from the side. The same four classifiers as in the texture classification were used to identify the objects, with distances computed via the Euclidean metric on the three-dimensional force points. For each object, ten sets of force data were used for training and the other ten for testing, and a 10-fold cross validation was performed to obtain averaged accuracies.
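Since each grasp reduces to a three-element force vector, the recognition step amounts to fitting a Euclidean KNN in this space. A minimal sketch with placeholder force data:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data: 10 objects x 20 grasps, each grasp a 3D point of BGTO forces (N).
rng = np.random.default_rng(0)
centers = rng.random((10, 3)) * 5.0                   # one force centroid per object
X = np.repeat(centers, 20, axis=0) + rng.normal(0, 0.1, (200, 3))
y = np.repeat(np.arange(10), 20)

knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
print(cross_val_score(knn, X, y, cv=10).mean())       # averaged 10-fold accuracy
```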

2.3.4. Recognizing Objects with Similar Dimensions

The method in the subsection above can identify objects that vary in shape. However, it is difficult to distinguish objects with similar geometries using this mechanism, because the hand configurations at the completion of the grasp are close, leading to confusion. In this subsection, a novel approach named “scan-grasp” is presented to distinguish objects with the same shape.
Six 3D-printed cylinders (Cy1–Cy6) with the same outer dimensions (height: 80 mm; diameter: 60 mm) were used for classification (Figure 7). Cy1–Cy3 were 3D-printed from polylactic acid (PLA) and are comparatively hard; Cy4–Cy6 are hollow and made of silicone, so they are soft compared with Cy1–Cy3. Cy1 and Cy4 have smooth surfaces, Cy2 and Cy5 have identical square bulges on their surfaces, and Cy3 and Cy6 have identical semicircular bulges on their surfaces.
We used the “scan-grasp” approach to grasp these cylinders (Figure 8). First, the cylinder was fixed and the hand was moved above it (Figure 8a). Then, a grasp was executed with a “workable” actuated state (Figure 8b); in this step, the actuated tendon lengths of the three fingers were identical and randomly assigned, where “workable” means that the fingers were actually in contact with the cylinder after the grasp was complete. Finally, the hand was moved vertically to “scan” the cylinder until it separated from the cylinder (Figure 8c). The force values of the three BGTOs were recorded throughout the entire process. For each cylinder, 20 trials were executed. A data-processing method similar to that described in Section 2.3.2 was used to acquire the candidate frequency features of each finger, and the final eight features were selected with the chi-squared method from the pooled candidate features of all three fingers. Three classifiers, SVM-rbf, KNN and DTs, were then used to identify the six cylinders.
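Pooling the candidate frequency features of the three fingers before the chi-squared selection might look like the sketch below; the per-finger candidate count and the placeholder arrays are assumptions:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

# Placeholder candidate features: one FFT-magnitude block per finger per trial.
rng = np.random.default_rng(0)
n_trials = 6 * 20                                  # 6 cylinders x 20 scan-grasps
finger_feats = [rng.random((n_trials, 46)) for _ in range(3)]
y = np.repeat(np.arange(6), 20)                    # labels Cy1-Cy6

X = np.hstack(finger_feats)                        # pool all three fingers' candidates
X_sel = SelectKBest(score_func=chi2, k=8).fit_transform(X, y)  # final 8 features
```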

3. Results

3.1. Basic Grasp Ability

The results of the grasp force experiment are shown in Figure 9a. Generally, a longer actuated tendon length leads to higher grasp and tendon forces. For the top grasp, the minimum and maximum grasp forces on the tennis ball were 0.87 N and 6.27 N, a disparity of 5.40 N. For the side grasp, the two extreme grasp forces on the coke bottle were 2.40 N and 4.65 N, a disparity of 2.25 N. The smaller force gap when grasping the bottle may be due to its deformability. For the top grasp, the magnitudes of the three BGTO force values are closer to one another than for the side grasp: the spherical shape of the tennis ball makes the forces consistent in all directions, while the cylindrical shape of the coke bottle does not. It is worth noting that at an actuated tendon length of 8.5 mm there is a nonlinearity in the top grasp, which may be caused by the tennis-ball surface not being completely uniform.
In Figure 9b, we show the grasp areas of the soft hand under the top grasp and side grasp. The hand can grasp the tested object over a contiguous range of positions under both grasp types. For the top grasp, the successful area is roughly triangular, because it is difficult for the soft hand to touch the object effectively in the gaps between the three fingers. For the side grasp, the successful area is larger, and objects in the lower half are easier to grasp than those in the upper half: during grasping, the upper part faces a single finger, which tends to knock the object over on contact, while the lower part faces two fingers, which push the object along the surface and then grasp it.

3.2. Texture Identification

The normalized tension data and extracted Fourier components of eight different textures are displayed in Figure 10; the corresponding textures are R1, F, T1, S1, C1, C2, C3 and C4 from left to right. The overall identification results are listed in Table 1. The KNN classifier achieved the best performance in this task: its average accuracy over the five DSs was 98.3%, and accuracies above 99% were obtained for both DS1 and DS2. The worst-performing classifier was SVM-linear, with an average accuracy of 59.58%. In addition, accuracy decreased slightly as the number of finger-rotation angles in the DS increased. For SVM-rbf, KNN and DTs, the accuracies of DS2 and DS1 were essentially the same, differing by only 0.76%, 0.04% and 0.02%, respectively. However, when the finger-rotation angle range in the dataset reached 30 degrees or more, the identification accuracy began to decline noticeably: compared with DS1, the accuracies of SVM-rbf, KNN and DTs decreased by 7.59%, 1.96% and 3.60% on DS3; by 8.34%, 1.88% and 3.28% on DS4; and by 14.69%, 3.83% and 6.29% on DS5, indicating that the randomness of the finger-rotation angles affected the results. The confusion matrix for KNN, which achieved the highest classification accuracy in this task, on DS5 is shown in Figure 11. For the SVM-linear classifier, the irregular change in accuracy implies that the textures are hard to separate linearly in a low-dimensional feature space.
Overall, the high accuracies using SVM-rbf, KNN and DTs indicate the high-performance texture discrimination ability of the proposed bionic finger. Considering the random contact force and finger-rotation angle variation, the results imply high robustness and the possibility of application in real environments.

3.3. Recognizing Objects That Vary in Shape

In Figure 12, we plot the three-finger tension distribution of each object in a three-dimensional space. The positions of different objects are roughly separated, reflecting their distinct hand configurations. The classification results are listed in Table 2: the highest accuracy for the top grasp was 96.33%, by KNN, and the highest accuracy for the side grasp was 96.00%, by DTs. Neither SVM-linear nor SVM-rbf performed well in this task, with accuracies below 90%. The recognition confusion matrices under the top grasp and side grasp using KNN are shown in Figure 13. For the top grasp, misclassification mainly occurred between the octahedron and the Rubik’s cube: the Rubik’s cube was misclassified as the octahedron at a rate of 16.67%, and the inverse at 5.0%; the apple was misclassified as the tape at a rate of 15.0%. For the side grasp, the coke bottle was misclassified as the cylinder at a rate of 11.76%, the toothpaste was misclassified as the rubber sac and as the cylinder at 5.88% each, and the cylinder was misclassified as the rubber sac at a rate of 18.18%. For these failed cases, Figure 12 shows that the confused objects lie close together in the three-dimensional space, indicating similar grasp configurations of the hand.

3.4. Recognizing Objects with Similar Dimensions

We set three objectives in this experiment: identifying hard cylinders (Cy1–Cy3) versus soft cylinders (Cy4–Cy6); identifying smooth cylinders (Cy1, Cy4), square-bulge cylinders (Cy2, Cy5) and semicircular-bulge cylinders (Cy3, Cy6); and identifying all six cylinders individually. The labels of each cylinder were assigned according to these three identification purposes. The raw force values and frequency amplitudes of scan-grasps of the six cylinders are shown in Figure 14, and the classification results are listed in Table 3. When the cylinders were labeled as soft or hard, the classifiers performed best, with the accuracies of SVM-rbf, KNN and DTs all greater than 98%. For the other two types of identification, the accuracies of KNN and DTs were all greater than 95%, but SVM-rbf did not perform well, with accuracies of 65.38% and 80.77%. Overall, our soft hand could recognize the six cylinders by their stiffness, their surface texture, or both in combination.

4. Discussion

The proposed sensory system is robust and has adequate environmental adaptability, and the approach can easily be applied to other tendon-driven systems. In addition, the bionic design makes it convenient to apply to prosthetic hands, because the signal can easily be converted into neuromorphic spike trains; in some previous work, researchers provided sensing to amputees using neuromorphic methods [33]. However, some problems remain in this study. For instance, the finger material could be further optimized to reduce the possibility of plastic deformation; in [4], the researchers used a mixture of nylon and acrylonitrile butadiene styrene (ABS) to achieve ideal mechanical properties, which could be applied in future research. Furthermore, the textures identified in this paper are at a coarse, visually distinguishable scale; the recognition of finer textures can be explored in future work. The characteristics of the sensor are also critical for perception, and better resolution and response speed may improve the results. In the feature-selection step, other methods could be used to select candidate features, such as features widely used in EMG signal processing [34,35].
In addition, for object recognition, the algorithm is sensitive to the position in which objects are placed, and excessive positional changes may reduce recognition accuracy; more sophisticated algorithms could address this problem in the future. We also note that increasing the number of tendons and BGTOs per finger would provide more perceptual possibilities. We will further investigate the sensory ability of this bionic approach in future work.

5. Conclusions

In this paper, we present a bionic approach to achieve reliable perception in a robotic hand. By mimicking the proprioception of the human body in a soft finger, 17 textures were accurately classified under distinct finger-rotation angles and actuator displacements, with a highest average accuracy of 98.3%. The design of the bionic framework was detailed. Based on it, a three-fingered soft hand was constructed, and its basic grasp ability was evaluated with two types of grasps. The object-recognition capacity was experimentally verified: for objects that varied in shape and objects with similar dimensions, two different approaches were adopted, with average recognition accuracies of 95.5% and 98.0%, respectively.
Our study showed that a soft finger with bionic proprioception can discriminate textures with high accuracy. The object-recognition method built on this also offers robots a new way to perceive the environment. We believe that this work will bring the abilities of robotic manipulators closer to those of a natural hand.

Author Contributions

Y.Y. and Y.W. conceived the idea and designed the study. Y.Y. and C.C. proposed the design of the experiments. Y.Y., M.G. and J.Z. performed experiments and analyzed the experimental data. Y.Y. and C.C. coded the program and prepared the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

The research was supported by the “National Key R & D Program of China” under Grant 2017YFA0701101.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets generated or analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Deimel, R.; Brock, O. A novel type of compliant and underactuated robotic hand for dexterous grasping. Int. J. Robot. Res. 2016, 35, 161–185.
  2. Hao, Y.; Biswas, S.; Hawkes, E.W.; Wang, T.; Zhu, M.; Wen, L.; Visell, Y. A Multimodal, Enveloping Soft Gripper: Shape Conformation, Bioinspired Adhesion, and Expansion-Driven Suction. IEEE Trans. Robot. 2021, 37, 350–362.
  3. Manti, M.; Hassan, T.; Passetti, G.; D’Elia, N.; Laschi, C.; Cianchetti, M. A Bioinspired Soft Robotic Gripper for Adaptable and Effective Grasping. Soft Robot. 2015, 2, 107–116.
  4. Zolfagharian, A.; Gharaie, S.; Gregory, J.; Bodaghi, M.; Kaynak, A.; Nahavandi, S. A Bioinspired Compliant 3D-Printed Soft Gripper. Soft Robot. 2021.
  5. Yarali, E.; Baniasadi, M.; Zolfagharian, A.; Chavoshi, M.; Arefi, F.; Hossain, M.; Bastola, A.; Ansari, M.; Foyouzat, A.; Dabbagh, A.; et al. Magneto-/electro-responsive polymers toward manufacturing, characterization, and biomedical/soft robotic applications. Appl. Mater. Today 2021, 26, 101306.
  6. Abondance, S.; Teeple, C.B.; Wood, R.J. A Dexterous Soft Robotic Hand for Delicate In-Hand Manipulation. IEEE Robot. Autom. Lett. 2020, 5, 5502–5509.
  7. Zhou, J.; Chen, X.; Chang, U.; Lu, J.-T.; Leung, C.C.Y.; Chen, Y.; Hu, Y.; Wang, Z. A Soft-Robotic Approach to Anthropomorphic Robotic Hand Dexterity. IEEE Access 2019, 7, 101483–101495.
  8. Zhou, J.; Yi, J.; Chen, X.; Liu, Z.; Wang, Z. BCL-13: A 13-DOF Soft Robotic Hand for Dexterous Grasping and In-Hand Manipulation. IEEE Robot. Autom. Lett. 2018, 3, 3379–3386.
  9. Xu, Z.; Todorov, E. Design of a highly biomimetic anthropomorphic robotic hand towards artificial limb regeneration. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016.
  10. Dahiya, R.S.; Metta, G.; Valle, M.; Sandini, G. Tactile Sensing—From Humans to Humanoids. IEEE Trans. Robot. 2010, 26, 1–20.
  11. Dahiya, R.; Mittendorfer, P.; Valle, M.; Cheng, G.; Lumelsky, V.J. Directions toward Effective Utilization of Tactile Skin: A Review. IEEE Sens. J. 2013, 13, 4121–4138.
  12. Rasouli, M.; Chen, Y.; Basu, A.; Kukreja, S.L.; Thakor, N.V. An Extreme Learning Machine-Based Neuromorphic Tactile Sensing System for Texture Recognition. IEEE Trans. Biomed. Circuits Syst. 2018, 12, 313–325.
  13. Cao, Y.; Li, T.; Gu, Y.; Luo, H.; Wang, S.; Zhang, T. Fingerprint-Inspired Flexible Tactile Sensor for Accurately Discerning Surface Texture. Small 2018, 14, e1703902.
  14. Jamali, N.; Sammut, C. Majority Voting: Material Classification by Tactile Sensing Using Surface Texture. IEEE Trans. Robot. 2011, 27, 508–521.
  15. Jamali, N.; Sammut, C. Slip prediction using Hidden Markov models: Multidimensional sensor data to symbolic temporal pattern learning. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012.
  16. Sankar, S.; Balamurugan, D.; Brown, A.; Ding, K.; Xu, X.; Low, J.H.; Yeow, C.H.; Thakor, N. Texture Discrimination with a Soft Biomimetic Finger Using a Flexible Neuromorphic Tactile Sensor Array That Provides Sensory Feedback. Soft Robot. 2021, 8, 577–587.
  17. Kuppuswamy, N.; Alspach, A.; Uttamchandani, A.; Creasey, S.; Ikeda, T.; Tedrake, R. Soft-bubble grippers for robust and perceptive manipulation. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020.
  18. Pastor, F.; Gandarias, J.M.; García-Cerezo, A.J.; Gómez-De-Gabriel, J.M. Using 3D Convolutional Neural Networks for Tactile Object Recognition with Robotic Palpation. Sensors 2019, 19, 5356.
  19. Amjadi, M.; Kyung, K.-U.; Park, I.; Sitti, M. Stretchable, Skin-Mountable, and Wearable Strain Sensors and Their Potential Applications: A Review. Adv. Funct. Mater. 2016, 26, 1678–1698.
  20. Zhao, H.; O’Brien, K.; Li, S.; Shepherd, R.F. Optoelectronically innervated soft prosthetic hand via stretchable optical waveguides. Sci. Robot. 2016, 1, eaai7529.
  21. To, C.; Hellebrekers, T.L.; Park, Y.-L. Highly stretchable optical sensors for pressure, strain, and curvature measurement. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–3 October 2015; pp. 5898–5903.
  22. Homberg, B.S.; Katzschmann, R.K.; Dogar, M.R.; Rus, D. Robust proprioceptive grasping with a soft robot hand. Auton. Robot. 2018, 43, 681–696.
  23. Scimeca, L.; Hughes, J.; Maiolino, P.; Iida, F. Model-Free Soft-Structure Reconstruction for Proprioception Using Tactile Arrays. IEEE Robot. Autom. Lett. 2019, 4, 2479–2484.
  24. Truby, R.L.; Della Santina, C.; Rus, D. Distributed Proprioception of 3D Configuration in Soft, Sensorized Robots via Deep Learning. IEEE Robot. Autom. Lett. 2020, 5, 3299–3306.
  25. Li, G.; Liu, S.; Wang, L.; Zhu, R. Skin-inspired quadruple tactile sensors integrated on a robot hand enable object recognition. Sci. Robot. 2020, 5, eabc8134.
  26. Tuthill, J.C.; Azim, E. Proprioception. Curr. Biol. 2018, 28, R194–R203.
  27. Grigg, P.; Finerman, G.A.; Riley, L.H. Joint-Position Sense after Total Hip Replacement. J. Bone Jt. Surg. 1973, 55, 1016–1025.
  28. Clark, F.J.; Horch, K.W.; Bach, S.M.; Larson, G.F. Contributions of cutaneous and joint receptors to static knee-position sense in man. J. Neurophysiol. 1979, 42, 877–888.
  29. Yan, Y.; Cheng, C.; Guan, M.; Zhang, J.; Wang, Y. A Soft Robotic Gripper Based on Bioinspired Fingers. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Guadalajara, Mexico, 26–30 July 2021; pp. 4570–4573.
  30. Guan, M.; Yan, Y.; Wang, Y. A Bio-inspired Variable-Stiffness Method Based on Antagonism. In Proceedings of the 2021 4th International Conference on Robotics, Control and Automation Engineering (RCAE), Wuhan, China, 4–6 November 2021; pp. 372–375.
  31. Deimel, R.; Brock, O. A compliant hand based on a novel pneumatic actuator. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 2047–2053.
  32. Kazemi, M.; Valois, J.-S.; Bagnell, J.A.; Pollard, N. Robust Object Grasping using Force Compliant Motion Primitives. In Robotics: Science and Systems VIII; MIT Press: Cambridge, MA, USA, 2012.
  33. Sankar, S.; Brown, A.; Balamurugan, D.; Nguyen, H.; Iskarous, M.; Simcox, T.; Kumar, D.; Nakagawa, A.; Thakor, N. Texture Discrimination using a Flexible Tactile Sensor Array on a Soft Biomimetic Finger. In Proceedings of the 2019 IEEE SENSORS, Montreal, QC, Canada, 27–30 October 2019; pp. 1–4.
  34. Miften, F.S.; Diykh, M.; Abdulla, S.; Siuly, S.; Green, J.H.; Deo, R.C. A new framework for classification of multi-category hand grasps using EMG signals. Artif. Intell. Med. 2021, 112, 102005.
  35. Chamakura, L.; Saha, G. An instance voting approach to feature selection. Inf. Sci. 2019, 504, 449–469.
Figure 1. The framework of the bionic design.
Figure 2. (a) The finger structure. (b) Bending motion of the finger under different actuated forces.
Figure 3. The soft robotic hand. (a) The soft hand mounted on a UR5 robotic arm. (b–h) The soft hand grasping an egg, a bunch of grapes, a paper cup, a marker, a bulb, an apple and a tennis ball using the top grasp. (i) The soft hand grasping a coke bottle using the side grasp.
Figure 4. (a) The experimental setup of texture classification. (b–e) Experimental procedure.
Figure 5. The 17 texture templates.
Figure 6. Objects for recognition.
Figure 7. Six cylinders with the same outer dimensions for recognition.
Figure 8. The process of scan-grasp. (a) Moving the soft hand to the top of the object. (b) Grasping the object. (c) Moving the hand vertically until separated from the object.
Figure 9. Grasp force and grasp area. (a) Left: the top-grasp force of a tennis ball and the side-grasp force of a coke bottle; middle: the BGTO force values for the top grasp; right: the BGTO force values for the side grasp. (b) The grasp success rates of the top grasp and side grasp under uncertain object position.
Figure 10. Time-domain features and frequency-domain features of eight different textures.
Figure 11. Confusion matrix of DS5 using the KNN classifier.
Figure 12. The distribution of objects in a three-dimensional force space. (a) Top grasp. (b) Side grasp.
Figure 13. The confusion matrices of object recognition using the KNN classifier (upper: top grasp; lower: side grasp).
Figure 14. Time-domain features and frequency-domain features of six cylinders.
Table 1. Overall classification accuracies for the five DSs using four classifiers.

Data Set   SVM-Linear   SVM-RBF   KNN      DTs
DS1        43.73%       98.97%    99.85%   98.21%
DS2        63.79%       98.21%    99.81%   98.19%
DS3        60.53%       91.38%    97.89%   94.61%
DS4        65.36%       90.63%    97.97%   94.93%
DS5        64.50%       84.28%    96.02%   91.92%
Table 2. Overall classification accuracies of object recognition using four different classifiers.

Grasp Type   SVM-Linear   SVM-RBF   KNN      DTs
Top grasp    84.21%       88.06%    96.33%   93.50%
Side grasp   79.10%       80.47%    95.82%   96.00%
Table 3. Overall accuracies for the three classification objectives using three classifiers.

Classification Objective           SVM-RBF   KNN      DTs
Stiffness                          99.23%    98.46%   99.23%
Surface texture                    65.38%    96.15%   95.38%
Integrated stiffness and texture   80.77%    97.69%   96.92%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
