Open Access Proceedings
Proceedings 2017, 1(2), 22; doi:10.3390/ecsa-3-E006

Data-Driven Representation of Soft Deformable Objects Based on Force-Torque Data and 3D Vision Measurements

Department of Computer Science and Engineering, Université du Québec en Outaouais, Gatineau, QC J8X 3X7, Canada
Presented at the 3rd International Electronic Conference on Sensors and Applications, 15–30 November 2016; Available online: https://sciforum.net/conference/ecsa-3.
* Author to whom correspondence should be addressed.
Published: 14 November 2016

Abstract

The realistic representation of deformations is still an active area of research, especially for soft objects whose behavior cannot be simply described in terms of elasticity parameters. Most existing techniques assume that the parameters describing the object's behavior are known a priori, based on assumptions about the object material, such as its isotropy or linearity, or that values for these parameters are chosen by manual tuning until the results seem plausible. This is a subjective process that cannot be employed where accuracy is expected. This paper proposes a data-driven, neural-network-based model for implicitly capturing the deformations of a soft object, without requiring any knowledge of the object material. Visual data, in the form of 3D point clouds gathered by a Kinect sensor, is collected over an object while forces are exerted by means of the probing tip of a force-torque sensor. A novel approach combining distance-based clustering, stratified sampling, and neural gas-tuned mesh simplification is then proposed to describe the particularities of the deformation. The resulting representation is denser in the region of the deformation (an average of 97% perceptual similarity with the collected data in the deformed area), while still preserving the overall shape of the object (74% similarity over the entire surface) and using, on average, only 30% of the vertices in the mesh.
Keywords: deformation; force-torque sensor; Kinect; RGB-D data; neural gas; clustering; mesh simplification; 3D object modeling
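The abstract outlines a three-stage pipeline: distance-based clustering of the point cloud around the probed contact point, stratified sampling that keeps a higher density of points near the deformation, and a neural-gas pass that tunes the retained points, which then serve as mesh vertices. The following is a minimal NumPy sketch of that idea; the function names, band edges, sampling fractions, and neural-gas parameters are all illustrative assumptions, not the authors' implementation.

```python
# Sketch of the selective-density pipeline from the abstract. Band
# thresholds, sampling fractions, and neural-gas hyperparameters are
# placeholder values chosen for illustration only.
import numpy as np

def distance_bands(points, contact, edges=(0.02, 0.05, 0.10)):
    """Cluster points into bands by Euclidean distance to the probed contact point."""
    d = np.linalg.norm(points - contact, axis=1)
    labels = np.digitize(d, edges)          # 0 = band closest to the deformation
    return [points[labels == k] for k in range(len(edges) + 1)]

def stratified_sample(bands, fractions=(0.9, 0.5, 0.2, 0.1), rng=None):
    """Keep a larger fraction of points in bands nearer the deformation."""
    rng = rng or np.random.default_rng(0)
    kept = []
    for band, f in zip(bands, fractions):
        if len(band) == 0:
            continue
        n = max(1, int(f * len(band)))      # denser sampling near the contact
        kept.append(band[rng.choice(len(band), n, replace=False)])
    return np.vstack(kept)

def neural_gas(samples, n_units=200, n_iter=10000, rng=None):
    """Fit neural-gas codebook vectors to the sampled cloud (assumes len(samples) >= n_units)."""
    rng = rng or np.random.default_rng(0)
    w = samples[rng.choice(len(samples), n_units, replace=False)].copy()
    eps0, epsT = 0.5, 0.01                  # learning-rate schedule
    lam0, lamT = n_units / 2.0, 0.1         # neighbourhood-range schedule
    for t in range(n_iter):
        frac = t / n_iter
        eps = eps0 * (epsT / eps0) ** frac
        lam = lam0 * (lamT / lam0) ** frac
        x = samples[rng.integers(len(samples))]
        # Rank every unit by its distance to the input (0 = closest).
        rank = np.argsort(np.argsort(np.linalg.norm(w - x, axis=1)))
        w += eps * np.exp(-rank / lam)[:, None] * (x - w)
    return w
```

The decaying learning rate and neighbourhood range follow the standard neural-gas schedule; because the stratified sample over-represents the deformed region, the fitted codebook places more vertices there, consistent with a simplified mesh that stays dense around the deformation while using a fraction of the original vertices.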
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Tawbe, B.; Cretu, A.-M. Data-Driven Representation of Soft Deformable Objects Based on Force-Torque Data and 3D Vision Measurements. Proceedings 2017, 1, 22.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
