Robotics and 3D Vision

A special issue of Robotics (ISSN 2218-6581).

Deadline for manuscript submissions: closed (1 April 2017) | Viewed by 68752

Special Issue Editors


Guest Editor
Department of Mechanical Engineering, National University of Singapore, Singapore 119260, Singapore
Interests: stereovision; image processing and industrial automation

Guest Editor
Department of Mechanical Engineering, National University of Singapore, Singapore 117543, Singapore
Interests: medical engineering; biomechanics; computational intelligence; computer vision and graphics; image processing and visualization; mechatronics and parallel processing

Special Issue Information

Dear Colleagues, 

Robots are widely used in many fields, such as medicine, industrial automation, and defense. Their usefulness and performance are enhanced by 3D vision, which supplies information about the working environment to robotic systems. Indeed, 3D vision could become an integral part of robotic systems in the immediate future. In this regard, it is necessary to have a full understanding of 3D vision, robot control, the workspace, and the interactions among them, so that they form a cohesive whole for the task at hand. This is similar to a human needing good hand–eye coordination and knowledge of the work environment to perform a task.

In a typical task, a robot integrated with a 3D vision system uses an on-board camera, or a camera mounted in the workspace, to capture an image of an object and determine where that object is in space relative to the robot's position; the robot can then use that positional data to manipulate the object. Visual feedback enables the robot to find the object no matter where it is moved.
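
As a minimal sketch of this hand–eye arithmetic, assuming a calibrated camera-to-base transform is available, the object pose reported in the camera frame can be composed with that transform to obtain a pose the robot can act on (all names and values below are illustrative):

```python
# A minimal sketch of the frame arithmetic: map an object pose detected in
# the camera frame into the robot base frame. The calibration and pose
# values below are illustrative assumptions, not real data.
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Assumed known from hand-eye calibration: camera frame -> robot base frame.
T_base_cam = make_transform(np.eye(3), np.array([0.5, 0.0, 0.8]))

# Object pose as reported by the 3D vision system, in the camera frame.
T_cam_obj = make_transform(np.eye(3), np.array([0.1, -0.2, 0.6]))

# Composing the two transforms gives the object pose in the robot base frame,
# which the controller can use to plan a grasp.
T_base_obj = T_base_cam @ T_cam_obj
print("Object position in robot base frame:", T_base_obj[:3, 3])
```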

For robotic systems with 3D vision, the following three areas are important: robotics, 3D vision, and scene representation and understanding. In addition, their integration is essential to the development of a cohesive system. Robotics in itself comprises many sub-areas, such as kinematics, dynamics, and control. Three-dimensional vision, also known as stereovision, consists of image understanding, correspondence, object recognition, and 3D scene reconstruction and understanding. These areas are not new in themselves, but integrating them into a complete and useful system requires good knowledge of each. The objective of this Special Issue is therefore to promote a deeper understanding of recent breakthroughs and challenges in these three areas: robotics, 3D vision, and 3D scene reconstruction and understanding. More importantly, we are interested in efforts that integrate them into complete systems in different application domains.
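
For example, in the stereovision pipeline, once correspondence yields a disparity for a matched pixel pair in a rectified camera setup, depth follows from the standard relation Z = fB/d. A short sketch with assumed camera parameters:

```python
# Illustrative triangulation for a rectified stereo pair: correspondence
# yields a disparity d (in pixels) for a matched point, and depth follows
# from Z = f * B / d. The camera parameters below are assumptions.
import numpy as np

f = 700.0               # focal length in pixels (assumed)
B = 0.12                # baseline between the cameras in metres (assumed)
cx, cy = 320.0, 240.0   # principal point (assumed)

def triangulate(u, v, disparity):
    """Recover a 3D point in the left camera frame from a matched pixel pair."""
    Z = f * B / disparity        # depth from disparity
    X = (u - cx) * Z / f         # back-project the pixel x coordinate
    Y = (v - cy) * Z / f         # back-project the pixel y coordinate
    return np.array([X, Y, Z])

print(triangulate(400.0, 260.0, disparity=35.0))  # point about 2.4 m away
```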

Topics of interest include (but are not limited to):

  • Control strategies for robot manipulation
  • Stereovision and correspondence
  • Object recognition and representation in 3D
  • 3D scene recovery and reconstruction
  • Hand–eye coordination in 3D for robots equipped with 3D vision
  • Robot sensing (tactile, visual, etc.)
  • Fusion of data sensed by other 3D sensing methods and image based sensing
  • Vision guided automatic robotic surgery
  • Applications

We particularly encourage the submission of papers that combine these topics in research areas such as 3D robot perception, robotic hand–eye collaborative manipulation, and robot manipulation based on workspace understanding.

Dr. Kah Bin Lim
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Robotics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Robotics
  • 3D vision
  • Stereovision
  • Robotic vision
  • Hand–eye coordination
  • 3D scene understanding

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)


Research

Article
Compressed Voxel-Based Mapping Using Unsupervised Learning
by Daniel Ricao Canelhas, Erik Schaffernicht, Todor Stoyanov, Achim J. Lilienthal and Andrew J. Davison
Robotics 2017, 6(3), 15; https://doi.org/10.3390/robotics6030015 - 29 Jun 2017
Cited by 10 | Viewed by 9985
Abstract
In order to deal with the scaling problem of volumetric map representations, we propose spatially local methods for high-ratio compression of 3D maps, represented as truncated signed distance fields. We show that these compressed maps can be used as meaningful descriptors for selective decompression in scenarios relevant to robotic applications. As compression methods, we compare PCA-derived low-dimensional bases to nonlinear auto-encoder networks. Selecting two application-oriented performance metrics, we evaluate the impact of different compression rates on reconstruction fidelity as well as on the task of map-aided ego-motion estimation. It is demonstrated that lossily reconstructed distance fields used as cost functions for ego-motion estimation can outperform the original maps in challenging scenarios from standard RGB-D (color plus depth) data sets, due to the rejection of high-frequency noise content.
(This article belongs to the Special Issue Robotics and 3D Vision)
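
As a rough, generic illustration of the PCA side of this comparison (a sketch under assumed block shapes and synthetic data, not the authors' implementation), flattened voxel blocks can be compressed by projection onto a learned low-dimensional basis:

```python
# Generic PCA block compression: flatten small TSDF voxel blocks, learn a
# low-dimensional basis from training blocks, then compress/decompress by
# projecting onto that basis. Shapes and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
blocks = rng.standard_normal((1000, 8 * 8 * 8))  # stand-in for flattened TSDF blocks
k = 32                                           # retained components

mean = blocks.mean(axis=0)
# Principal directions via SVD of the centred data.
_, _, Vt = np.linalg.svd(blocks - mean, full_matrices=False)
basis = Vt[:k]                                   # (k, 512) low-dimensional basis

def compress(block):
    return basis @ (block - mean)                # 512 floats -> k coefficients

def decompress(code):
    return mean + basis.T @ code                 # lossy reconstruction

code = compress(blocks[0])
err = np.linalg.norm(blocks[0] - decompress(code))
print(f"compressed to {k} coefficients, reconstruction error {err:.3f}")
```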

Article
Automated Assembly Using 3D and 2D Cameras
by Adam Leon Kleppe, Asgeir Bjørkedal, Kristoffer Larsen and Olav Egeland
Robotics 2017, 6(3), 14; https://doi.org/10.3390/robotics6030014 - 27 Jun 2017
Cited by 5 | Viewed by 8477
Abstract
2D and 3D computer vision systems are frequently used in automated production to detect and determine the position of objects. Accuracy is important in the production industry, and computer vision systems require structured environments to function optimally. For 2D vision systems, a change in surfaces, lighting, or viewpoint angle can reduce the accuracy of a method, sometimes to the point of producing erroneous results, while for 3D vision systems, the accuracy mainly depends on the 3D laser sensors. Commercially available 3D cameras lack the precision found in high-grade 3D laser scanners and are therefore not suited for accurate measurements in industrial use. In this paper, we show that it is possible to identify and locate objects using a combination of 2D and 3D cameras. A rough estimate of the object pose is first found using a commercially available 3D camera. Then, a robotic arm with an eye-in-hand 2D camera is used to determine the pose accurately. We show that this increases the accuracy to < 1 mm and < 1°. This was demonstrated in a real industrial assembly task where high accuracy is required.
(This article belongs to the Special Issue Robotics and 3D Vision)
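
A much-simplified sketch of the coarse-to-fine idea, assuming a pinhole model and a known standoff distance for the eye-in-hand camera (all names and values are hypothetical, not the authors' algorithm):

```python
# Sketch of coarse-to-fine refinement: a rough position from a 3D camera is
# refined by converting the pixel offset seen by an eye-in-hand 2D camera at
# a known standoff distance into a metric correction. Everything here is a
# simplified illustration.
import numpy as np

f = 1200.0        # 2D camera focal length in pixels (assumed)
standoff = 0.15   # camera-to-part distance when re-imaging, metres (assumed)

coarse_xy = np.array([0.402, -0.117])   # rough part position from the 3D camera

def refine(coarse_xy, pixel_error):
    """Convert an observed 2D pixel error into a metric in-plane correction."""
    metric_error = pixel_error * standoff / f   # pinhole model: x = f * X / Z
    return coarse_xy - metric_error

pixel_error = np.array([14.0, -6.0])    # feature offset measured in the image
print("refined position:", refine(coarse_xy, pixel_error))
```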

Article
Augmented Reality Guidance with Multimodality Imaging Data and Depth-Perceived Interaction for Robot-Assisted Surgery
by Rong Wen, Chin-Boon Chng and Chee-Kong Chui
Robotics 2017, 6(2), 13; https://doi.org/10.3390/robotics6020013 - 24 May 2017
Cited by 22 | Viewed by 10957
Abstract
Image-guided surgical procedures are challenged by mono image modality, two-dimensional anatomical guidance and non-intuitive human-machine interaction. The introduction of tablet-based augmented reality (AR) into surgical robots may assist surgeons in overcoming these problems. In this paper, we propose and develop a robot-assisted surgical system with interactive surgical guidance using tablet-based AR with a Kinect sensor for three-dimensional (3D) localization of patient anatomical structures and intraoperative 3D surgical tool navigation. Depth data acquired from the Kinect sensor were visualized in cone-shaped layers for 3D AR-assisted navigation. Virtual visual cues generated by the tablet were overlaid on the images of the surgical field for spatial reference. We evaluated the proposed system, and the experimental results showed that the tablet-based visual guidance system could assist surgeons in locating internal organs, with errors between 1.74 and 2.96 mm. We also demonstrated that the system was able to provide mobile augmented guidance and interaction for surgical tool navigation.
(This article belongs to the Special Issue Robotics and 3D Vision)
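
As a loose illustration of layered depth visualization in the spirit of the cone-shaped layers described above (an assumption-laden sketch, not the paper's system), a depth map can be quantized into discrete layers:

```python
# Quantize a depth map into discrete layers for AR visualization.
# The bin edges and synthetic depth data are illustrative assumptions.
import numpy as np

depth = np.random.default_rng(1).uniform(0.4, 1.2, size=(480, 640))  # metres
edges = np.linspace(0.4, 1.2, num=9)          # boundaries of 8 depth layers

layers = np.digitize(depth, edges)            # layer index per pixel (1..8)
print("pixels per layer:", np.bincount(layers.ravel())[1:9])
```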

Article
A New Combined Vision Technique for Micro Aerial Vehicle Pose Estimation
by Haiwen Yuan, Changshi Xiao, Supu Xiu, Yuanqiao Wen, Chunhui Zhou and Qiliang Li
Robotics 2017, 6(2), 6; https://doi.org/10.3390/robotics6020006 - 28 Mar 2017
Cited by 8 | Viewed by 7107
Abstract
In this work, a new combined vision technique (CVT) is proposed, comprehensively developed, and experimentally tested for stable, precise unmanned micro aerial vehicle (MAV) pose estimation. The CVT combines two measurement methods (multi- and mono-view) based on different constraint conditions. These constraints are considered simultaneously within a particle filter framework to improve the accuracy of visual positioning. The framework, which is driven by an onboard inertial module, takes the positioning results from the visual system as measurements and updates the vehicle state. Moreover, experimental testing and data analysis were carried out to verify the proposed algorithm, including the multi-camera configuration, the design and assembly of the MAV system, and marker detection and matching between different views. Our results indicate that the combined vision technique is very attractive for high-performance MAV pose estimation.
(This article belongs to the Special Issue Robotics and 3D Vision)
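
A minimal one-dimensional particle-filter cycle of the kind the abstract describes: predict with an inertial increment, weight by a visual measurement, resample. Noise levels and models are illustrative assumptions, not the paper's:

```python
# Predict / update / resample cycle of a 1D particle filter: particles are
# propagated with an inertial increment, then weighted by the likelihood of
# a visual position measurement.
import numpy as np

rng = np.random.default_rng(2)
N = 500
particles = rng.normal(0.0, 0.5, N)          # initial position hypotheses

def step(particles, imu_delta, visual_meas, sigma_p=0.02, sigma_m=0.05):
    # Predict: propagate each particle with the inertial increment plus noise.
    particles = particles + imu_delta + rng.normal(0.0, sigma_p, N)
    # Update: weight particles by likelihood of the visual measurement.
    w = np.exp(-0.5 * ((visual_meas - particles) / sigma_m) ** 2)
    w /= w.sum()
    # Resample in proportion to the weights.
    return rng.choice(particles, size=N, p=w)

particles = step(particles, imu_delta=0.10, visual_meas=0.12)
print("state estimate:", particles.mean())
```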

Article
Visual Tracking of Deformation and Classification of Non-Rigid Objects with Robot Hand Probing
by Fei Hui, Pierre Payeur and Ana-Maria Cretu
Robotics 2017, 6(1), 5; https://doi.org/10.3390/robotics6010005 - 17 Mar 2017
Cited by 14 | Viewed by 10169
Abstract
Performing tasks with a robot hand often requires complete knowledge of the manipulated object, including its properties (shape, rigidity, surface texture) and its location in the environment, in order to ensure safe and efficient manipulation. While well-established procedures exist for the manipulation of rigid objects, as well as several approaches for the manipulation of linear or planar deformable objects such as ropes or fabric, research addressing the characterization of deformable objects occupying a volume remains relatively limited. This paper proposes an approach for tracking the deformation of non-rigid objects under robot hand manipulation using RGB-D data. The purpose is to automatically classify deformable objects as rigid, elastic, plastic, or elasto-plastic, based on the material they are made of, and to support recognition of the category of such objects through a robotic probing process in order to enhance manipulation capabilities. The proposed approach advantageously combines classical color and depth image processing techniques and proposes a novel combination of the fast level set method with a log-polar mapping of the visual data to robustly detect and track the contour of a deformable object in an RGB-D data stream. Dynamic time warping is employed to characterize the object properties independently of the varying length of the tracked contour as the object deforms. The proposed solution achieves a classification rate over all categories of material of up to 98.3%. When integrated in the control loop of a robot hand, it can contribute to ensuring a stable grasp and safe manipulation that preserves the physical integrity of the object.
(This article belongs to the Special Issue Robotics and 3D Vision)
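
Dynamic time warping itself is standard; here is a textbook sketch of the DTW distance between two contour signatures of different lengths, the role DTW plays in the abstract (not the authors' implementation):

```python
# Standard dynamic-time-warping distance between two 1D sequences of
# possibly different lengths, using the unit step pattern.
import numpy as np

def dtw_distance(a, b):
    """O(len(a) * len(b)) DTW distance with absolute-difference cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of insertion, deletion, and match moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw_distance([0.0, 1.0, 2.0, 1.0], [0.0, 1.0, 1.0, 2.0, 1.0]))
```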

Article
A Matlab-Based Testbed for Integration, Evaluation and Comparison of Heterogeneous Stereo Vision Matching Algorithms
by Raul Correal, Gonzalo Pajares and Jose Jaime Ruz
Robotics 2016, 5(4), 24; https://doi.org/10.3390/robotics5040024 - 9 Nov 2016
Cited by 2 | Viewed by 8583
Abstract
Stereo matching is a heavily researched area with a prolific published literature and a broad spectrum of heterogeneous algorithms available in diverse programming languages. This paper presents a Matlab-based testbed that aims to centralize and standardize this variety of both current and prospective stereo matching approaches. The proposed testbed aims to facilitate the application of stereo-based methods to real situations. It allows algorithms to be configured and executed, and their results compared, in a fast, easy and friendly setting. Algorithms can be combined so that a series of processes can be chained and executed consecutively, using the output of one process as the input for the next; some additional filtering and image processing techniques have been included within the testbed for this purpose. A use case is included to illustrate how these processes are sequenced and their effect on the results in real applications. The testbed has been conceived as a collaborative and incremental open-source project, where its code is accessible and modifiable, with the objective of receiving contributions and releasing future versions that include new algorithms and features. It is currently available online for the research community.
(This article belongs to the Special Issue Robotics and 3D Vision)
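
The testbed itself is Matlab-based; purely to illustrate the chaining concept, here is a minimal Python sketch in which each stage consumes the previous stage's output (the stage functions are trivial stand-ins):

```python
# Chain processing stages so that each stage's output feeds the next,
# mirroring the pipeline composition the testbed provides.
import numpy as np

def median_filter(img):
    return img                     # stand-in for a pre-filtering stage

def block_matching(img):
    return np.zeros_like(img)      # stand-in for a stereo matching stage

def left_right_check(disp):
    return disp                    # stand-in for a post-processing stage

def run_pipeline(stages, data):
    for stage in stages:
        data = stage(data)         # output of one stage is input to the next
    return data

disparity = run_pipeline([median_filter, block_matching, left_right_check],
                         np.ones((4, 4)))
print(disparity.shape)
```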

Review

Review
Application of Augmented Reality and Robotic Technology in Broadcasting: A Survey
by Dingtian Yan and Huosheng Hu
Robotics 2017, 6(3), 18; https://doi.org/10.3390/robotics6030018 - 17 Aug 2017
Cited by 18 | Viewed by 12132
Abstract
As an innovative technique, Augmented Reality (AR) has been gradually deployed in the broadcast, videography and cinematography industries. Virtual graphics generated by AR are dynamic and overlaid on the surfaces of the real environment, so that the original appearance can be greatly enhanced in comparison with traditional broadcasting. In addition, AR enables broadcasters to interact with augmented virtual 3D models on a broadcasting scene in order to enhance the performance of broadcasting. Recently, advanced robotic technologies have been deployed in camera shooting systems to create robotic cameramen, so that the performance of AR broadcasting can be further improved; this development is highlighted in this paper.
(This article belongs to the Special Issue Robotics and 3D Vision)
