
Vision for Robust Robot Manipulation

RoViT, University of Alicante, 03690 San Vicente del Raspeig (Alicante), Spain
RobInLab, Jaume I University, 12071 Castello de la Plana, Spain
Interaction Science Dept., Sungkyunkwan University, Jongno-Gu, Seoul 110-745, Korea
Authors to whom correspondence should be addressed.
Sensors 2019, 19(7), 1648;
Received: 24 December 2018 / Revised: 1 April 2019 / Accepted: 3 April 2019 / Published: 6 April 2019
(This article belongs to the Special Issue Visual Sensors)
Advances in robotics are leading to a new generation of assistant robots working in ordinary, domestic settings. This evolution raises new challenges in the tasks to be accomplished by the robots. This is the case for object manipulation, where the detect-approach-grasp loop requires a robust recovery stage, especially when the held object slides. Several proprioceptive sensors developed in recent decades, such as tactile sensors or contact switches, can be used for that purpose; nevertheless, their implementation may considerably restrict the gripper's flexibility and functionality, increasing its cost and complexity. Alternatively, vision can be used, since it is an undoubtedly rich source of information, and in particular depth vision sensors. We present an approach based on depth cameras to robustly evaluate manipulation success, continuously reporting any object loss and, consequently, allowing the robot to recover robustly from this situation. To that end, Lab-colour segmentation identifies potential robot manipulators in the image. Then, the depth information is used to detect any edge resulting from two-object contact. The combination of these techniques allows the robot to accurately detect the presence or absence of contact points between the robot manipulator and a held object. An experimental evaluation in realistic indoor environments supports our approach.
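The detection pipeline described in the abstract — colour-segmenting the manipulator in Lab space, finding depth discontinuities, and intersecting the two to locate contact points — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the chroma thresholds, and the depth-jump threshold are all assumptions for the example, and the input is assumed to be a registered Lab image and depth map of the same resolution.

```python
import numpy as np

def segment_gripper(lab_image, a_range, b_range):
    """Mask pixels whose a*/b* chroma lies in the gripper's colour range.

    lab_image: (H, W, 3) array in Lab space; ranges are illustrative.
    """
    a, b = lab_image[..., 1], lab_image[..., 2]
    return ((a >= a_range[0]) & (a <= a_range[1]) &
            (b >= b_range[0]) & (b <= b_range[1]))

def depth_edges(depth, jump=0.02):
    """Flag edges as abrupt depth discontinuities between neighbours."""
    gy, gx = np.gradient(depth.astype(float))
    return np.hypot(gx, gy) > jump

def contact_points(lab_image, depth, a_range, b_range, jump=0.02):
    """Contact candidates: depth edges directly adjacent to the gripper mask."""
    gripper = segment_gripper(lab_image, a_range, b_range)
    edges = depth_edges(depth, jump)
    # Dilate the gripper mask by one pixel (simple 4-neighbour shifts)
    # so that edges touching its boundary are captured.
    near = gripper.copy()
    near[1:, :] |= gripper[:-1, :]
    near[:-1, :] |= gripper[1:, :]
    near[:, 1:] |= gripper[:, :-1]
    near[:, :-1] |= gripper[:, 1:]
    # Keep only edge pixels bordering the gripper but outside it:
    # these are where a held object meets the manipulator.
    return edges & near & ~gripper
```

An empty result from `contact_points` would indicate no object-gripper contact, i.e., a likely object loss that should trigger the recovery stage.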
Keywords: robotics; robot manipulation; depth vision
MDPI and ACS Style

Martinez-Martin, E.; del Pobil, A.P. Vision for Robust Robot Manipulation. Sensors 2019, 19, 1648.

