Article

A Novel Event-Based Incipient Slip Detection Using Dynamic Active-Pixel Vision Sensor (DAVIS)

1. Faculty of Science, Engineering and Computing, Kingston University London, London SW15 3DW, UK
2. Robotics Institute, Khalifa University of Science and Technology, P.O. Box 127788, Abu Dhabi 999041, UAE
* Author to whom correspondence should be addressed.
Sensors 2018, 18(2), 333; https://doi.org/10.3390/s18020333
Submission received: 28 November 2017 / Revised: 19 January 2018 / Accepted: 22 January 2018 / Published: 24 January 2018
(This article belongs to the Special Issue Smart Sensors for Mechatronic and Robotic Systems)

Abstract

In this paper, a novel approach to detect incipient slip based on the contact area between a transparent silicone medium and different objects, using a neuromorphic event-based vision sensor (DAVIS), is proposed. Event-based algorithms are developed to detect incipient slip, slip, stress distribution and object vibration. Thirty-seven experiments were performed on five objects with different sizes, shapes, materials and weights to compare the precision and response time of the proposed approach. The approach is validated using a high-speed conventional camera (1000 FPS). The results indicate that the sensor can detect incipient slippage with an average latency of 44.1 ms in an unstructured environment for various objects. It is worth mentioning that the experiments were conducted in an uncontrolled experimental environment, which introduced high noise levels that significantly affected the results. Nevertheless, eleven of the experiments had a detection latency below 10 ms, which shows the capability of this method. The results are very promising and show the sensor's high potential for manipulation applications, especially in dynamic environments.


1. Introduction

Human dexterity enables various objects to be held securely without knowledge of their shape, weight, or friction coefficient. Skin receptors in the human fingers acquire information about object slippage so that minimum force is applied during precision gripping [1,2]. The capability of the human touch sense has inspired researchers to create artificial skins and various tactile sensors to enhance robots' proficiency in human–robot interaction [3], manipulation tasks [4], and surgical applications [5].
Robotic grasping techniques are divided into two main categories, power grasp and precision grasp: the former focuses on stability, while the latter requires high sensitivity and precision [6]. Simple tasks such as picking and placing specific tools in structured environments can be performed with power grasps. Delicate manipulation for surgical applications in unstructured environments, however, demands a precise grasping method in order to minimize the risk of grasping failure. Moreover, excess applied force can deform or destroy an object; therefore, grasping nonrigid or breakable objects may require a more complex grasping model. Bicchi and Kumar [7] reviewed the literature on power grasping methods and contact models, and reported significant progress in structured grasping applications. Recent grasping techniques have simplified force regulation by using tactile and position sensors in unstructured environments with high uncertainty [8,9,10]. Detecting key parameters such as angular force and local strain, and predicting incipient slip from the contact area, allow grasping applications to perform adaptively regardless of object characteristics. To achieve an acceptable and stable precision grasp, contact area properties must be acquired with low latency so that force can be fed back to the controller and the best grasping trajectories planned.
Tactile sensors are devices that recognize contact status and exploit contact events. Various types of tactile sensors have been developed, each with advantages and drawbacks regarding sensitivity, size, spatial resolution, and flexibility [11,12,13]. Piezoresistive tactile sensors consist of a conductive or semi-conductive elastomer that deforms under pressure. This deformation changes the material resistance, which can be measured to identify touch and estimate the applied force [14,15]. Furthermore, Teshigawara et al. [16] proposed an application of piezoresistive sensors to detect initial slip based on the high-frequency components of the output signal.
Capacitive tactile sensors are another well-known category, based on the variation of capacitance with load force. These sensors comprise parallel conductive plates with a dielectric in between; an applied force reduces the distance between the plates and thereby increases the capacitance, which can be measured [17]. Similarly, Muhammad et al. [18] developed a capacitive tactile sensor array to recognize the texture pattern of the contact area.
Piezoelectric materials, such as quartz, ceramics, and PolyVinyliDene Fluoride (PVDF), generate electric charge under heat or deformation. This property has led researchers to develop a new generation of tactile sensors by placing a piezoelectric layer between two electrodes to measure the voltage generated by the force load [19]. Goeger et al. [20] demonstrated the capability of PVDF tactile sensors to detect slip, using the short-time Fourier transform and a k-nearest-neighbour classifier to identify slip states. They reconstructed 8 × 128 pixel images from the PVDF output signal in order to classify the different states with the Eigenfaces algorithm. Recently, Yu et al. [21] developed a flexible piezoelectric tactile sensor array to measure dynamic forces in three axes.
Moreover, some applications measure the force vector through changes in magnetic properties. Takenawa [22] proposed a magnetic tactile sensor based on a two-dimensional array of inductors that can measure three-dimensional force vectors for the detection of slippage. Furthermore, Goka et al. [23] added four Giant Magneto Resistance (GMR) chips to characterise the load force and detect slip based on the GMR output voltage. A more recent approach is presented in [24], where integrated magnetic tactile sensors with Hall-effect sensors measure force vectors with high sensitivity and low hysteresis. Other methods, such as Resistance Temperature Detector (RTD) [25], acoustic [26], photoelastic, and friction-based [27] sensors, have been proposed to detect slippage and extract information about the object contact area [28].
Advances in optics and digital image processing over the past decades have drastically changed automation and industrial robotics. Optical tactile sensors were introduced decades ago to measure shear components of force for dexterous robotics applications [29]. Very early optical tactile sensors consisted of a transparent medium such as silicate glass or rubber, LED light sources, and a Charge-Coupled Device (CCD) camera [30,31]. A light source emits beams onto a rubber surface that is in direct contact with an object; the camera records the back-scattered beams, and force vectors are determined by comparing the incident and reflected angles. This method has been extended in recent years to achieve higher sensitivity and reduce sensor cost by using photo transistors and LEDs [32,33]. Kamiyama et al. [34] suggested an empirical method using a transparent elastic fingertip with spherical marks to track force angles and detect partial and incipient slippage. This approach has become popular, and similar sensors have been developed in [35,36,37,38,39].
Recent vision-based techniques acquire further information such as slip direction, surface texture, and object deformation. For instance, in [40], a vision-based slip sensor was developed that can identify object properties and slippage for deformable objects, applying the Speeded Up Robust Features (SURF) algorithm to extract features of the contact area and match the feature points for both homogeneous and textured materials. Although camera-based sensors have made significant progress in tactile sensing, further improvements in sensitivity, latency, and power consumption are necessary to meet industrial requirements.
Frame-based cameras have a considerable blind time between frames that limits sensor latency and sensitivity. Even though high-speed cameras can capture frames in less than one millisecond, they are less practical because they capture redundant pixels, are expensive, and consume considerable power. Furthermore, capturing frames requires appropriate lighting conditions, which limits their performance in dark environments.
State-of-the-art neuromorphic cameras, on the other hand, use silicon retina sensors to capture log-intensity brightness changes (events) in the scene rather than frames. The Dynamic Vision Sensor (DVS) is an asynchronous event-based camera with high temporal contrast sensitivity that captures events efficiently within microseconds [41,42]. For instance, Ref. [43] developed a method to track object shape for force feedback estimation. The bio-inspired DVS can capture events in low-light conditions where frame-based cameras cannot obtain images. In this paper, we propose a novel event-driven incipient slip detection method using a Dynamic Active-pixel Vision Sensor (DAVIS), which can capture both events and log-intensity frames. The developed sensor uses events to detect incipient slip, slip, and vibration by observing the object contact area, achieving high sensitivity, low latency, and low power consumption in comparison to other vision-based methods.

2. Sensor Prototype

The functionality of the suggested sensor is based on monitoring the variation and displacement of the contact area between an object and a custom-designed robot gripper. In [44], different stages of slip were observed while moving a finger over a transparent piece of glass and measuring the changes in the contact area with a high-speed camera (200 FPS). They demonstrated that slip can be identified by monitoring the changes in the contact area, as a reduction of the contact area was observed prior to slip.
In this paper, a method based on the same principle, but adjusted to operate with events, is proposed for identifying incipient slip from the relevant changes in the contact area using an event-based camera. To detect incipient slip with high sensitivity, noise must be eliminated, since the change in the local contact area can be slight at the start of incipient slip. For this purpose, the intensity trigger threshold of the sensor has been increased to reduce noise levels, so that it acts as a high-pass filter. Figure 1 shows a diagram of the proposed sensor, which includes a DAVIS camera (iniLabs, Zurich, Switzerland), a silicone medium, a light source, and a mounting platform.

2.1. Silicone Medium Specifications

A transparent elastic medium is used to provide flexibility to the sensor so that the concavity of the contact surface can be identified. The transparent silicone is molded into a rectangular prism and placed between the object and the camera. To extract the contact area between an object and the silicone surface and eliminate background noise, the DAVIS sensor is deployed at an angle θ to the vertical axis of the contact surface, as shown in Figure 1. The angular view eliminates background noise due to light refraction. Figure 2 illustrates three different views of the contact between the silicone medium and an object: Figure 2a is the top view of an Ethernet cable in contact with the silicone surface, Figure 2b shows direct observation through the silicone medium, and Figure 2c shows the output when the camera has an angular view. As a result, Figure 2c provides a clear observation of the contact area whilst filtering out the background scene due to light refraction. This approach isolates the contact area from the background, substantially improving precision and reducing computational time. A light source is placed behind the silicone medium to reduce the impact of external illumination and enhance the visibility of the contact area. Furthermore, the silicone material is diffusive, which also suppresses background noise.

2.2. Dynamic Active-Pixel Vision Sensor (DAVIS)

The DAVIS camera is a state-of-the-art hybrid camera that combines dynamic and active-pixel sensors, recording both events and intensity frames. In this paper, we adopt an event-driven solution instead of intensity frames to decrease sensor latency and optimize overall power consumption. Frame-based cameras consume constant power capturing frames even when nothing in the scene changes. In contrast, the event-based channel of the camera has a high mean temporal resolution of 12 μs over 20 pixels and a low power consumption of 5–14 mW. For instance, 4 mW is required when a hundred thousand events are triggered per second, and this figure drops to 2.6 mW when only a thousand events per second are triggered [41,42]. Typical frame-based cameras, by comparison, consume more than 1 W to capture the scene. In [45], a CCD camera, which typically consumes more power than Active-Pixel Sensors (APS), was used to detect object slip. As all camera-based approaches in the literature have used frame-based cameras to study the contact area, our proposed method consumes less power than other vision-based methods.
As mentioned earlier, the DVS camera fires events based on log-intensity brightness changes. The changes are continuously monitored, and ON/OFF events fire only when a change exceeds the ± trigger threshold. Timestamps of these asynchronous events are measured with a resolution of up to 1 μs, although timestamp resolution drops to the sub-millisecond range in practical applications. The DVS output includes a precise time measurement (timestamp), pixel position (within the 190 × 180 array), and polarity for each event. Moreover, a C-mount lens with a focal length of 4.5 mm and an iris range of F1.4 is used with the DAVIS camera. Each pixel corresponds to approximately 0.126 × 0.126 mm in this setup, given a distance of 1.85 cm from the lens to the silicone medium surface. It is worth mentioning that changing the camera distance, lens properties, camera angle, or silicone medium material can affect the sensor resolution significantly.
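The paper does not prescribe a software layout for this output; as an illustration only, the event stream can be modelled as a sequence of (timestamp, x, y, polarity) records. A minimal NumPy sketch under that assumption:

import numpy as np

# Hypothetical record layout for one DVS event: microsecond timestamp,
# pixel position within the 190 x 180 array, and ON/OFF polarity.
EVENT_DTYPE = np.dtype([
    ("t", np.int64),   # timestamp in microseconds
    ("x", np.int16),   # pixel column, 0..189
    ("y", np.int16),   # pixel row, 0..179
    ("p", np.int8),    # polarity: +1 (brighter) or -1 (darker)
])

def split_by_polarity(events: np.ndarray):
    """Separate an event stream into positive- and negative-polarity events."""
    return events[events["p"] > 0], events[events["p"] < 0]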
Interestingly, in the DAVIS camera the DVS and Active-Pixel Sensor (APS) circuits share the same photodiode, with the DAVIS adding a concurrent grey-scale frame output for each pixel. By definition, when a pixel's intensity changes to a higher or lower value than its previous intensity, the resulting event is referred to as a positive-polarity or negative-polarity event, respectively. Polarity changes are important in event-based cameras because they provide additional information about the scene; for instance, they can be used to infer the movement direction of an object. Due to the high sampling rate of the pixels, the sensor obtains events continuously, whereas frame-based cameras have a considerable blind time between frames.
In the proposed approach, the event polarity is used to identify pressure and slippage in two different stages, described in Section 3. This paper presents an event-driven sensor that detects incipient slip, slip, and vibration with low latency.

3. Image Processing

An image processing algorithm has been developed in this paper to reconstruct the contact area, estimate the stress distribution, and detect slip and vibration. As discussed in Section 2.2, the event-based camera captures events with positive and negative polarities, which can be processed to estimate the input force variation and the shape of the contact area.
A robust algorithm should filter various types of noise and generate reliable results regardless of an object's specifications. We performed initial experiments using different objects to analyse the events precisely during the grasping and releasing phases. The observations indicate that identifying incipient slip requires a certain number of events in order to distinguish noise from events of interest. Each event represents one pixel in the scene; therefore, several events are needed to obtain meaningful features. Hence, we use a fixed window of 10 ms to reconstruct a frame that includes all events triggered in that period. All processing is performed on these reconstructed frames, denoted as event-based frames in this paper. Although the experimental environment could be controlled to reduce noise and improve the sensitivity of the sensor, an ideal sensor should be robust enough to perform in unstructured environments.
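A minimal sketch of this 10 ms windowing step, assuming the hypothetical event layout from Section 2.2 and a 190 × 180 pixel array:

import numpy as np

WINDOW_US = 10_000          # 10 ms window, expressed in microsecond timestamps
SENSOR_SHAPE = (180, 190)   # rows x columns of the DVS pixel array

def reconstruct_event_frames(events: np.ndarray, start_t: int):
    """Accumulate all events in [start_t, start_t + 10 ms) into two binary
    event-based frames, one per polarity."""
    win = events[(events["t"] >= start_t) & (events["t"] < start_t + WINDOW_US)]
    pos_frame = np.zeros(SENSOR_SHAPE, dtype=np.uint8)
    neg_frame = np.zeros(SENSOR_SHAPE, dtype=np.uint8)
    pos, neg = win[win["p"] > 0], win[win["p"] < 0]
    pos_frame[pos["y"], pos["x"]] = 1
    neg_frame[neg["y"], neg["x"]] = 1
    return pos_frame, neg_frame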
The developed algorithm consists of two main parts: first, reconstructing the contact area and measuring the stress distribution; second, identifying vibration and slip. The following pseudo-code illustrates the procedure of the proposed algorithm.
(1) Contact Area Reconstruction and Stress Distribution Map
while (currentTime < startTime + 10) do  // accumulate events within a 10 ms window
begin
  • posFrame ← Read positive events
  • negFrame ← Read negative events
  • posFrame ← Morphology(posFrame)
  • negFrame ← Morphology(negFrame)
  • posFrameArea ← AreaFilter(posFrame)
  • negFrameArea ← AreaFilter(negFrame)
  if (negFrameArea > 0) then
  begin
  •     contactArea ← negFrameArea
  •     stressMap ← stressMap + negFrameArea  // accumulate detected regions
  end
end
stressConcentrationPoint ← maxPoint(stressMap)
(2) Slip and Vibration Detection
if (posFrameArea > 0) then  // incident is detected
begin
  if (posFrameArea >= negFrameArea) then incident ← Slip
  else incident ← Vibration
end

3.1. Contact Area Reconstruction and Stress Distribution Map

To measure the input force qualitatively in real time, the contact area characteristics must be analysed during the grasping phase. Increasing the input force expands the contact area and darkens this region, which triggers negative-polarity events. Conversely, during the releasing phase, loss of contact area brightens these regions and triggers positive-polarity events. Figure 3a shows a 10 ms reconstructed frame of the negative-polarity events while grasping an object, and Figure 3b shows the positive-polarity events in the releasing phase.
Several factors affect the noise level in the images, such as background noise, light variations, and silicone displacement during both the grasping and releasing phases. As shown in Figure 3, a number of events triggered outside the contact area are considered noise events. Since the silicone medium has a diffusive surface, most of the background noise is eliminated physically, and the few noise events that are triggered are widely scattered around the frame. In contrast, contact-area events cluster in specific regions. First, morphological operations based on erosion and dilation masks are applied to remove any spurious pixels; then an area-size filter identifies and segments the largest connected areas above a minimum size. Morphological opening is accomplished by applying an erosion mask followed by a dilation, while closing is performed by applying a dilation and then an erosion mask. The mask size used in the morphological operations affects the resolution of the sensor; in the current algorithm, the resolution is 8 pixels, which is equivalent to 0.13 mm² (8 × 0.126² mm² ≈ 0.13 mm²). Figure 4 demonstrates the reconstructed frame before and after the noise filtering.
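A sketch of this two-stage noise filter using OpenCV; the 3 × 3 structuring element and the 8-pixel minimum area are assumptions consistent with the stated 8-pixel resolution:

import cv2
import numpy as np

KERNEL = np.ones((3, 3), np.uint8)  # assumed erosion/dilation mask size
MIN_AREA = 8                        # minimum connected-component size, in pixels

def denoise_event_frame(frame: np.ndarray) -> np.ndarray:
    """Remove scattered noise events, keeping only clustered contact regions."""
    # Opening (erosion then dilation) removes spurious isolated pixels;
    # closing (dilation then erosion) fills small holes in the contact area.
    frame = cv2.morphologyEx(frame, cv2.MORPH_OPEN, KERNEL)
    frame = cv2.morphologyEx(frame, cv2.MORPH_CLOSE, KERNEL)
    # Area filter: keep only connected components above the minimum size.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(frame)
    out = np.zeros_like(frame)
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= MIN_AREA:
            out[labels == i] = 1
    return out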
After filtering the noise, the detected regions are used to reconstruct the contact area. Furthermore, the detected regions are accumulated into one frame to construct a stress distribution map. High values in the stress distribution map indicate stress concentration points, which can be investigated to analyse force and slip directions. Figure 5 demonstrates the qualitative reconstructed contact area with a stress distribution map scaled between 0 for the lowest and 10 for the highest stress points.
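The accumulation itself is a running sum of the filtered negative-polarity frames; a short sketch, reusing the frames produced above:

import numpy as np

def update_stress_map(stress_map: np.ndarray, neg_frame: np.ndarray) -> np.ndarray:
    """Add the filtered negative-polarity regions of one window to the map."""
    return stress_map + neg_frame

def stress_summary(stress_map: np.ndarray):
    """(row, col) of the stress concentration point, plus the map scaled 0-10."""
    point = np.unravel_index(int(np.argmax(stress_map)), stress_map.shape)
    scaled = 10.0 * stress_map / max(float(stress_map.max()), 1.0)
    return point, scaled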

3.2. Slip and Vibration

In this research, incipient slip is defined as the earliest slight loss of contact area between the object and the silicone medium, before the slip period, during which the object contact area decreases continuously until all contact with the silicone surface is lost. Detecting incipient slip allows the system to prevent slip by modifying the gripping force. In addition, it is important to identify vibration for two reasons: (1) to prevent false positives in which vibration is detected as slip; and (2) to use detected vibration to correct trajectory plans and force feedback. In this paper, vibration refers specifically to object vibrations that change the contact area. Many other sources of vibration, such as the motor, arm, and camera, can affect the results but are out of the scope of this study.
A reduction in the contact area indicates that slip is occurring, observed as positive-polarity events due to an increase in intensity. The size of the area detected from positive-polarity events quantifies the loss of contact area in each frame. This phenomenon is referred to as an 'incident' in this paper, where an incident may be either slip or vibration. During vibration, both positive-polarity and negative-polarity events are observed. Clearly, when the negative-polarity area increases substantially, a minor reduction of the contact area in other regions cannot result in object slip or dropping. Hence, if the negative-polarity area is greater than the positive-polarity area, the incident is classified as vibration; otherwise, it is classified as slip. For instance, in Figure 6b the object has lost contact area at the bottom (shown in green) while gaining contact due to pressure at the top (shown in red): the object has vibrated and the contact area has increased. In Figure 6a, the positive-polarity area is greater than the negative-polarity area, and the incident is therefore classified as slip.
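The classification therefore reduces to comparing the two filtered area sizes in each window; a minimal sketch:

def classify_incident(pos_area_px: int, neg_area_px: int) -> str:
    """Classify one 10 ms window following the rule described above."""
    if pos_area_px == 0:
        return "none"      # no loss of contact area, so no incident
    if pos_area_px >= neg_area_px:
        return "slip"      # loss of contact dominates
    return "vibration"     # contact gained elsewhere dominates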
In the proposed method, whenever positive-polarity areas are detected, further investigation is required to classify the incident. Figure 7 shows both positive- and negative-polarity areas for all detected incidents. As can be observed, the first 50 incidents are classified as vibration because the negative-polarity area is greater than the positive-polarity area. After incident number 50, slip starts to occur, and the positive-polarity areas exceed the negative-polarity areas. The positive-polarity area peaks at incident 65, indicating a substantial loss of contact area at this point. Due to noise thresholding, small changes in the positive-polarity areas are filtered out, as indicated by the dotted horizontal blue line in Figure 7.

4. Validation and Results

In this section, the experimental setup, stress distribution map, validation method using a high speed camera, and evaluation of slip sensor are presented and discussed in detail. Afterwards, the results are analysed in the discussion section.

4.1. Experimental Setup

For validation purposes, thirty-seven experiments were performed using a Baxter robot (Rethink Robotics, Boston, MA, USA). In the experimental scenario, the robot arm applied a force to an object perpendicular to the silicone surface and kept it constant for a few seconds. The force was then gradually reduced to release the object, during which slip was observed. In this phase, the object can also vibrate, which changes the object contact area. As the sensor relies only on the contact area to detect slip, it is necessary to distinguish vibration from slippage incidents. As shown in the sensor diagram (Figure 1), the sensor includes an event-based camera, an LED, a silicone medium, and an adjustable camera frame. Figure 8 shows the experimental layout of the sensor components, the high-speed camera, the Baxter robot arm, and the test object.
The adjustable sensor frame is a 3D-printed platform designed to mount the event-based camera and the silicone medium. The platform enables linear and angular movement of the camera in order to find the best viewing angle for the contact area and to filter the background noise. The LED light increases the contrast on the surface of the contact area, reducing the effect of background and environmental noise. A high-speed camera monitors the changes in the contact area independently of the event-based camera for validation purposes.
Five objects with dark colours were used in the experiments. Objects with different hardnesses, shapes, and sizes were chosen to examine the viability of the proposed method. Figure 9 shows the shape and size of the objects: a metal nut, a plastic servo head, an Integrated Circuit (IC) made of aluminium and plastic, a metal bolt, and a rubber grommet.

4.2. Stress Distribution Map

As discussed in Section 3.1, a stress distribution map is reconstructed from the negative-polarity events. The stress concentration point is defined as the maximum pressure value within the contact area. The left column of Figure 10 shows the reconstructed stress distribution maps for the five objects, normalized between 0 and 10. The second column illustrates the actual contact area captured by the concurrent frame output channel of the DAVIS camera.
The results show that the contact area is reconstructed satisfactorily. The proposed method also extracts further information, such as the stress distribution throughout the contact area. The stress distribution map identifies crucial stress points, which can be used to estimate the resulting force vector. These features provide further information about incidents, allowing the system to modify the force and trajectories more efficiently.

4.3. High Speed Camera and Synchronization

The performance of the proposed slip sensor using an event-based camera (DAVIS) can be affected by lighting noise in the contact area and background. Therefore, a conventional high-speed camera is used as an independent means of studying the contact area visually.
A high-speed camera (1000 frames per second) is used to validate the proposed method, viewing the contact area from a different angle than the event-based camera. Ground-truth contact areas are estimated by applying image thresholding to the high-speed camera frames. Figure 11 shows the area detected by the high-speed camera over time. Zone 1 is the synchronisation phase, in which an LED blinks to allow synchronisation between the high-speed and event-based cameras. Zone 2 shows the increase in contact area during the grasping phase, caused by the increasing input force from the robot. In zone 3 the applied force is constant, and zone 4 is the releasing phase, in which the contact area decreases as the input force is reduced. The start of zone 4 marks incipient slip, which is followed by slip incidents.
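The paper does not detail the synchronisation algorithm; one straightforward scheme, assumed here purely for illustration, is to locate the first LED-blink onset in each recording and align the timestamps at that instant:

import numpy as np

def blink_onset(values: np.ndarray, times: np.ndarray, threshold: float) -> float:
    """Time of the first sample whose LED signal exceeds the threshold."""
    above = values > threshold
    if not above.any():
        raise ValueError("no LED blink found")
    return float(times[int(np.argmax(above))])

def camera_offset(hs_led, hs_times, ev_led, ev_times, hs_thr, ev_thr) -> float:
    """Offset to add to event-camera timestamps to align both recordings."""
    return blink_onset(hs_led, hs_times, hs_thr) - blink_onset(ev_led, ev_times, ev_thr)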
In the releasing phase, the slip starting point (incipient slip) and the final point, where no contact area is detected, are used to validate the slip results. To identify the exact time of incipient slip, the changes in the contact area are monitored in zone 3 (constant applied force) and the minimum detected area size is found. In the releasing phase, incipient slip starts whenever the contact area decreases below this zone-3 minimum. Slip events then continue until the detected area size becomes zero.
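A sketch of this ground-truth rule, assuming per-frame contact areas computed from the thresholded high-speed frames and known zone boundaries:

import numpy as np

def incipient_slip_time(areas: np.ndarray, times: np.ndarray,
                        zone3: slice, zone4: slice) -> float:
    """First time in the releasing phase (zone 4) where the contact area
    drops below the minimum area observed under constant force (zone 3)."""
    min_hold_area = areas[zone3].min()
    below = areas[zone4] < min_hold_area
    if not below.any():
        raise ValueError("no incipient slip detected in zone 4")
    return float(times[zone4][int(np.argmax(below))])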

4.4. Evaluation of the Slip Sensor

To evaluate the sensor performance, the results obtained from the event-based camera are benchmarked against the high-speed camera. As discussed in Section 4.3, the results from the event-based camera are synchronized with the high-speed camera. The validation is based on precision and recall (i.e., sensitivity), counting the numbers of True-Positive (TP), False-Positive (FP) and False-Negative (FN) situations. True-Negatives (TNs) are ignored, as their huge number would skew the results.
To count the TP and FP situations, all frames from the start of incipient slip until the object no longer has any contact with the silicone medium are investigated. For validation purposes, the time is split into 20 ms frames. For each frame, a TP occurs when slip happens and the sensor detects it correctly; if the sensor fails to detect the slip, the incident is counted as an FN. If the sensor detects slip while no actual slip exists, the incident counts as an FP. Figure 12 shows, in blue, the ground-truth results obtained from the high-speed camera, synchronized with the proposed sensor; red lines indicate TP detections, and the abnormal gap between slip incidents during the releasing phase represents FN incidents. The time between the dotted line (start of slip) and the first slip incident is defined as the sensor latency.
Similarly, TP and FN slip incidents can be observed on the right side of Figure 13 during the releasing phase. However, three slip incidents detected during the grasping phase represent FP conditions.
Furthermore, TP, FP and FN are used to calculate the sensitivity (1), precision (2) and F1 score (3):

Sensitivity = TP / (TP + FN), (1)

Precision = TP / (TP + FP), (2)

F1 = 2·TP / (2·TP + FP + FN). (3)
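These metrics follow directly from the per-frame counts; for example (the counts below are hypothetical, chosen to land near the bolt row of Table 1):

def evaluation_metrics(tp: int, fp: int, fn: int):
    """Sensitivity, precision and F1 score from per-frame incident counts."""
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, precision, f1

# Hypothetical counts: evaluation_metrics(29, 12, 1) -> (0.97, 0.71, 0.82)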
The sensor latency is measured as the time between the start of incipient slip and the first TP detection.
The average results of the thirty-seven experiments are summarized in Table 1, including precision, sensitivity, F1 score, and latency. The F1 score reflects the relation between the average precision and sensitivity of the achieved results. The standard deviation of the latency results for each object is denoted σ.

5. Discussion

The objects can be classified into three categories based on material hardness: the nut and bolt are hard metals, the IC and servo head are plastics, and the grommet is a flexible rubber. Table 1 shows that the two objects with the highest sensitivity and lowest latency are the nut and bolt; the metal objects thus performed better, owing to friction and physical adhesion. It is worth noting that parameters such as the size of the contact area, the object shape, the direction of the input force, and the level of contrast between the silicone medium and the object can also affect the results slightly.
On the other hand, the worst results in all experiments were obtained with the plastic servo head. Only the small inner ring of the servo head is in contact with the silicone medium, producing a smaller contact area, which is likely one of the main reasons for its high latency. The rubber grommet was the smallest object and was therefore expected to yield unfavourable results. However, the grommet is soft and expands its contact area during the grasping phase; hence, its results achieved higher sensitivity and lower latency than expected.
A lower detection latency gives the robot gripper more time to respond to slippage; the detection latency for incipient slip is therefore one of the main factors in slip prevention. In [37,40], the camera sampling rates are 82 FPS (12.2 ms) and 30 FPS (33.3 ms), respectively, but the detection latencies of the sensors were not reported. Moreover, a flexible capacitive polymer tactile sensor was proposed in [46] with a sampling rate of 6 FPS (166.6 ms) and a sensor response time of 160 ms. The sensor proposed in this paper reconstructs frames from events at a sampling rate of 100 FPS (10 ms) with an average latency of 44.1 ms, an improvement in both sampling rate and latency over the methods mentioned above. To the best of the authors' knowledge, this study proposes the first approach to slip detection using an event-driven camera, based on monitoring the changes in the contact area. Using events instead of frames can improve the power consumption and latency of vision-based slip sensors. Notably, eleven of the experiments had a detection latency below 10 ms, which shows the capability of the sensor for real-time applications and future improvements. Nevertheless, the achieved latency still needs improvement, since other tactile sensors, such as that in [47], have demonstrated latencies of 1–3 ms with high resolution and a 100 FPS sampling rate.
According to Table 1, the standard deviation of the detection latency for each object varied widely across experiments. The reason is that the experiments were not performed in a strictly controlled environment and were influenced by several factors, such as vibration, force direction, and robot arm acceleration. Moreover, the experimental scenario was designed to test the novel approach to incipient slip detection; further experiments are required to evaluate the stability of the system and the performance of the sensor precisely.
In [37], vision-based slip-margin feedback was proposed for an elastic object with a displacement resolution between 10^1 and 10^3 pixel/mm. Moreover, in [48], a flexible 16 × 16 tactile sensor array was proposed for invasive surgery applications with a high spatial resolution of 1 mm and a non-steady output. Furthermore, in [49], a resolution of 0.1 mm is reported for force localisation. As mentioned in Section 2.2 and Section 3.1, each pixel corresponds to 0.126 × 0.126 mm and the sensor has an equivalent resolution of 0.13 mm². For different applications, the resolution of the system can be modified by changing the camera distance, lens properties, silicone material, and morphological operations.
It is important to note that this study considers only vibrations of the object itself. Figure 14 illustrates the vibration incidents detected by the developed approach during the grasping phase. Several vibration incidents are detected at the beginning of the grasping phase, which involves severe vibrations. In contrast, as the system moves towards the constant-force phase, the vibration of the gripper is reduced, and the sensor correctly detects fewer vibration incidents.

6. Conclusions

In this paper, a novel methodology to detect incipient slip using a neuromorphic event-based vision sensor (DAVIS) was demonstrated. The results show that the proposed methodology is very efficient and can detect incipient slip for objects of different materials and sizes with an average latency of 44.1 ms and satisfactory resolution. It can therefore be applied in industrial applications where a robot manipulator must handle a variety of objects without prior knowledge of their specifications. Challenges such as estimating force quantitatively and evaluating the proposed sensor against force sensors can be studied in the future. Furthermore, new scenarios and additional experiments are necessary to study the vibration and stability of the system precisely.

Acknowledgments

This work was funded by Kingston University, London, UK. We would like to thank iniLabs group (http://sensors.ini.uzh.ch) for making the DAVIS sensor available.

Author Contributions

All authors have made great contributions to the work. Amin Rigi, Fariborz Baghaei Naeini and Yahya Zweiri conceived and designed the experiments. Amin Rigi, Fariborz Baghaei Naeini, Dimitrios Makris and Yahya Zweiri analysed the data and revised the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Johansson, R.S.; Westling, G. Roles of glabrous skin receptors and sensorimotor memory in automatic control of precision grip when lifting rougher or more slippery objects. Exp. Brain Res. 1984, 56, 550–564. [Google Scholar] [CrossRef] [PubMed]
  2. Johansson, R.S.; Cole, K.J. Grasp stability during manipulative actions. Can. J. Physiol. Pharmacol. 1994, 72, 511–524. [Google Scholar] [CrossRef] [PubMed]
  3. Mukai, T.; Onishi, M.; Odashima, T.; Hirano, S.; Luo, Z. Development of the tactile sensor system of a human-interactive robot “RI-MAN”. IEEE Trans. Robot. 2008, 24, 505–512. [Google Scholar] [CrossRef]
  4. Chebotar, Y.; Kroemer, O.; Peters, J. Learning robot tactile sensing for object manipulation. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 3368–3375. [Google Scholar]
  5. Puangmali, P.; Althoefer, K.; Seneviratne, L.D.; Murphy, D.; Dasgupta, P. State-of-the-art in force and tactile sensing for minimally invasive surgery. IEEE Sens. J. 2008, 8, 371–380. [Google Scholar] [CrossRef]
  6. Cutkosky, M.R. On grasp choice, grasp models, and the design of hands for manufacturing tasks. IEEE Trans. Robot. Autom. 1989, 5, 269–279. [Google Scholar] [CrossRef]
  7. Bicchi, A.; Kumar, V. Robotic grasping and contact: A review. In Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, CA, USA, 24–28 April 2000. [Google Scholar]
  8. Sabetian, P.; Feizollahi, A.; Cheraghpour, F.; Moosavian, S.A.A. A compound robotic hand with two under-actuated fingers and a continuous finger. In Proceedings of the 9th IEEE International Symposium on Safety, Security, and Rescue Robotics, SSRR 2011, Kyoto, Japan, 1–5 November 2011; pp. 238–244. [Google Scholar]
  9. Wang, L.; DelPreto, J.; Bhattacharyya, S.; Weisz, J.; Allen, P.K. A highly-underactuated robotic hand with force and joint angle sensors. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 1380–1385. [Google Scholar]
  10. Deimel, R.; Brock, O. A novel type of compliant and underactuated robotic hand for dexterous grasping. Int. J. Robot. Res. 2016, 35, 161–185. [Google Scholar] [CrossRef]
  11. Maheshwari, V.; Saraf, R. Tactile devices to sense touch on a par with a human finger. Angew. Chem. Int. Ed. 2008, 47, 7808–7826. [Google Scholar] [CrossRef] [PubMed]
  12. Dahiya, R.S.; Metta, G.; Valle, M.; Sandini, G. Tactile sensing-from humans to humanoids. IEEE Trans. Robot. 2010, 26, 1–20. [Google Scholar] [CrossRef]
  13. Kappassov, Z.; Corrales, J.A.; Perdereau, V. Tactile sensing in dexterous robot hands—Review. Robot. Auton. Syst. 2015, 74, 195–220. [Google Scholar] [CrossRef]
  14. Stassi, S.; Cauda, V.; Canavese, G.; Pirri, C.F. Flexible tactile sensing based on piezoresistive composites: A review. Sensors 2014, 14, 5296–5332. [Google Scholar] [CrossRef] [PubMed]
  15. Jung, Y.; Lee, D.G.; Park, J.; Ko, H.; Lim, H. Piezoresistive tactile sensor discriminating multidirectional forces. Sensors 2015, 15, 25463–25473. [Google Scholar] [CrossRef] [PubMed]
  16. Teshigawara, S.; Tadakuma, K.; Ming, A.; Ishikawa, M.; Shimojo, M. High sensitivity initial slip sensor for dexterous grasp. In Proceedings of the IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 4867–4872. [Google Scholar]
  17. Kellogg, N.R. Capacitive Tactile Sensor. U.S. Patent 4,584,625, 22 April 1986. [Google Scholar]
  18. Muhammad, H.B.; Recchiuto, C.; Oddo, C.M.; Beccai, L.; Anthony, C.J.; Adams, M.J.; Carrozza, M.C.; Ward, M.C.L. A capacitive tactile sensor array for surface texture discrimination. Microelectron. Eng. 2011, 88, 1811–1813. [Google Scholar] [CrossRef]
  19. Ramadan, K.S.; Sameoto, D.; Evoy, S. A review of piezoelectric polymers as functional materials for electromechanical transducers. Smart Mater. Struct. 2014, 23, 033001. [Google Scholar] [CrossRef]
  20. Goeger, D.; Ecker, N.; Woern, H. Tactile sensor and algorithm to detect slip in robot grasping processes. In Proceedings of the 2008 IEEE International Conference on Robotics and Biomimetics, ROBIO 2008, Bangkok, Thailand, 22–25 February 2008; pp. 1480–1485. [Google Scholar]
  21. Yu, P.; Liu, W.; Gu, C.; Cheng, X.; Fu, X. Flexible piezoelectric tactile sensor array for dynamic three-axis force measurement. Sensors 2016, 16, 819. [Google Scholar] [CrossRef] [PubMed]
  22. Takenawa, S. A magnetic type tactile sensor using a two-dimensional array of inductors. In Proceedings of the IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 3295–3300. [Google Scholar]
  23. Goka, M.; Nakamoto, H.; Takenawa, S. A magnetic type tactile sensor by GMR elements and inductors. In Proceedings of the IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010—Conference Proceedings, Taipei, Taiwan, 18–22 October 2010; pp. 885–890. [Google Scholar]
  24. Paulino, T.; Ribeiro, P.; Neto, M.; Cardoso, S.; Schmitz, A.; Santos-Victor, J.; Bernardino, A.; Jamone, L. Low-cost 3-axis soft tactile sensors for the human-friendly robot Vizzy. In Proceedings of the IEEE International Conference on Robotics and Automation, Singapore, 29 May–3 June 2017; pp. 966–971. [Google Scholar]
  25. Francomano, M.T.; Accoto, D.; Guglielmelli, E. Experimental characterization of a flexible thermal slip sensor. Sensors 2012, 12, 15267–15280. [Google Scholar] [CrossRef] [PubMed]
  26. Shinoda, H.; Matsumoto, K.; Ando, S. Acoustic resonant tensor cell for tactile sensing. In Proceedings of the International Conference on Robotics and Automation, Albuquerque, NM, USA, 25 April 1997; Volume 4, pp. 3087–3092. [Google Scholar]
  27. Dzitac, P.; Mazid, A.M.; Ibrahim, M.Y.; Appuhamillage, G.K.; Choudhury, T.A. Friction-based slip detection in robotic grasping. In Proceedings of the IECON 2015—41st Annual Conference of the IEEE Industrial Electronics Society, Yokohama, Japan, 9–12 November 2015; pp. 4871–4874. [Google Scholar]
  28. Francomano, M.T.; Accoto, D.; Guglielmelli, E. Artificial sense of slip—A review. IEEE Sens. J. 2013, 13, 2489–2498. [Google Scholar] [CrossRef]
  29. Begej, S. Planar and finger-shaped optical tactile sensors for robotic applications. IEEE J. Robot. Autom. 1988, 4, 472–484. [Google Scholar] [CrossRef]
  30. Dario, P.; De Rossi, D. Tactile sensors and the gripping challenge. IEEE Spectr. 1985, 22, 46–53. [Google Scholar] [CrossRef]
  31. Maekawa, H.; Tanie, K.; Komoriya, K.; Kaneko, M.; Horiguchi, C.; Sugawara, T. Development of a finger-shaped tactile sensor and its evaluation by active touch. In Proceedings of the 1992 IEEE International Conference on Robotics and Automation, Nice, France, 12–14 May 1992; pp. 1327–1334. [Google Scholar]
  32. Piacenza, P.; Dang, W.; Hannigan, E.; Espinal, J.; Hussain, I.; Kymissis, I.; Ciocarlie, M. Accurate contact localization and indentation depth prediction with an optics-based tactile sensor. In Proceedings of the IEEE International Conference on Robotics and Automation, Singapore, 29 May–3 June 2017; pp. 959–965. [Google Scholar]
  33. Cirillo, A.; Cirillo, P.; De Maria, G.; Natale, C. Force/tactile sensors based on optoelectronic technology for manipulation and physical human–robot interaction. In Advanced Mechatronics and MEMS Devices II; Springer: Berlin, Germany, 2017; Chapter 6; pp. 95–131. [Google Scholar]
  34. Kamiyama, K.; Kajimoto, H.; Inami, M.; Kawakami, N.; Tachi, S. A vision-based tactile sensor. In Proceedings of the International Conference on Artificial Reality and Telexistence, Tokyo, Japan, 5–7 December 2001; pp. 127–134. [Google Scholar]
  35. Kamiyama, K.; Kajimoto, H.; Kawakami, N.; Tachi, S. Evaluation of a vision-based tactile sensor. In Proceedings of the 2004 IEEE International Conference on Robotics and Automation, New Orleans, LA, USA, 26 April–1 May 2004; pp. 1542–1547. [Google Scholar]
  36. Ueda, J.; Ishida, Y.; Kondo, M.; Ogasawara, T. Development of the NAIST-hand with vision-based tactile fingertip sensor. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; pp. 2343–2348. [Google Scholar]
  37. Ikeda, A.; Kurita, Y.; Ueda, J.; Matsumoto, Y.; Ogasawara, T. Grip force control for an elastic finger using vision-based incipient slip feedback. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No.04CH37566), Sendai, Japan, 28 September–2 October 2004; Volume 1, pp. 810–815. [Google Scholar]
  38. Vlack, K.; Mizota, T.; Kawakami, N.; Kamiyama, K.; Kajimoto, H.; Tachi, S. GelForce: A vision-based traction field computer interface. In Proceedings of the ACM CHI 2005 Conference on Human Factors in Computing Systems, Portland, OR, USA, 2–7 April 2005; Volume 2, pp. 1154–1155. [Google Scholar]
  39. Obinata, G. Contact region estimation based on a vision-based tactile sensor using a deformable touchpad. Sensors 2014, 14, 5805–5822. [Google Scholar]
  40. Zhao, K.; Li, X.; Lu, C.; Lu, G.; Wang, Y. Video-based slip sensor for multidimensional information detecting in deformable object grasp. Robot. Auton. Syst. 2017, 91, 71–82. [Google Scholar] [CrossRef]
  41. Lichtsteiner, P.; Posch, C.; Delbruck, T. Temporal contrast vision sensor. IEEE J. Solid-State Circ. 2008, 43, 566–576. [Google Scholar] [CrossRef]
  42. Yang, M.; Liu, S.C.; Delbruck, T. A dynamic vision sensor with 1% temporal contrast sensitivity and in-pixel asynchronous delta modulator for event encoding. IEEE J. Solid-State Circ. 2015, 50, 2149–2160. [Google Scholar] [CrossRef]
  43. Ni, Z.; Bolopion, A.; Agnus, J.; Benosman, R.; Régnier, S. Asynchronous event-based visual shape tracking for stable haptic feedback in microrobotics. IEEE Trans. Robot. 2012, 28, 1081–1089. [Google Scholar]
  44. Delhaye, B.; Lefèvre, P.; Thonnard, J.L. Dynamics of fingertip contact during the onset of tangential slip. J. R. Soc. Interface 2014, 11, 20140698. [Google Scholar] [CrossRef] [PubMed]
  45. Ueda, J.; Ikeda, A.; Ogasawara, T. Grip-force control of an elastic object by vision-based slip-margin feedback during the incipient slip. IEEE Trans. Robot. 2005, 21, 1139–1147. [Google Scholar] [CrossRef]
  46. Lee, H.K.; Chung, J.; Chang, S.I.; Yoon, E. Real-time measurement of the three-axis contact force distribution using a flexible capacitive polymer tactile sensor. J. Micromech. Microeng. 2011, 21, 035010. [Google Scholar] [CrossRef]
  47. Drimus, A.; Kootstra, G.; Bilberg, A.; Kragic, D. Design of a flexible tactile sensor for classification of rigid and deformable objects. Robot. Auton. Syst. 2014, 62, 3–15. [Google Scholar] [CrossRef]
  48. Goethals, P. Tactile feedback for robot assisted minimally invasive surgery: An overview. In Technical Report: 08RP012; KU Leuven: Leuven, Belgium, 14 July 2008. [Google Scholar]
  49. Lepora, N.F.; Ward-Cherrier, B. Superresolution with an optical tactile sensor. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Hamburg, Germany, 28 September–2 October 2015; pp. 2686–2691. [Google Scholar]
Figure 1. Sensor diagram.
Figure 2. (a) contact of Ethernet cable with the silicone medium from the top; (b) direct observation through the silicone (0 degrees); (c) view of contact area considering 54 degrees to eliminate background noise.
Figure 3. (a) negative-polarity events during the grasping phase; (b) positive-polarity events during the releasing phase.
Figure 4. (a) negative-polarity events before filtering the noise; (b) detected region after filtering the noise.
Figure 5. Reconstructed contact area with stress distribution map.
Figure 6. Green sections demonstrate positive-polarity areas and red regions represent negative-polarity areas. (a) incident is classified as slip; (b) incident is classified as vibration.
Figure 7. The red line shows negative-polarity area changes and the green line positive-polarity area changes; the x-axis is the incident number and the y-axis the size of the detected area.
Figure 8. Top-down (left) and sideways (right) views of the experiment setup.
Figure 9. Objects used for the experiments including sizes in millimeters from left: nut, plastic servo head, IC, bolt, rubber grommet.
Figure 10. The first column shows the reconstructed contact area based on events. The second column illustrates the actual contact area from the concurrent frame output channel of the DAVIS camera.
Figure 11. Detected area by high speed camera. (1) synchronisation phase; (2) increasing force to grasp the object; (3) holding the object; (4) releasing phase.
Figure 12. Ground truth (blue); start of incipient slip (dotted line); TP incidents (red lines); FN incidents appear as the abnormal gap after the dotted line.
Figure 13. First three slip incidents represent FP conditions.
Figure 14. Detection of vibration incidents during grasping phase.
Table 1. Algorithm average results for Precision, Sensitivity, F1 Score, and Detection Latency (σ = standard deviation).
Object            Precision   Sensitivity   F1 Score   Detection Latency (ms)
Bolt              0.71        0.97          0.80       34.5 (σ = 49.0)
Servo Head        0.63        0.71          0.65       73.6 (σ = 56.1)
Rubber Grommet    0.66        0.87          0.73       41.2 (σ = 29.2)
Nut               0.65        0.90          0.73       32.0 (σ = 48.1)
IC                0.85        0.80          0.82       39.4 (σ = 45.2)
Average           0.70        0.85          0.75       44.1 (σ = 45.5)
