Special Issue "Robot Vision"

A special issue of Robotics (ISSN 2218-6581).

Deadline for manuscript submissions: closed (28 February 2014)

Special Issue Editors

Guest Editor
Prof. Dr. Jeremy L. Wyatt

Intelligent Robotics Laboratory, School of Computer Science, University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK
Fax: +44 121 414 4788
Interests: robot learning; reinforcement learning; sequential decision making under uncertainty; robot task planning; robot manipulation; machine learning; machine vision
Guest Editor
Prof. Dr. Danica Kragic

School of Computer Science and Communication, Royal Institute of Technology (KTH), Teknikringen 14, R 709 KTH, SE-100 44 Stockholm, Sweden
Interests: vision based control; vision systems; human-robot interaction; object grasping and manipulation; activity recognition
Guest Editor
Dr. Norbert Krüger

Cognitive Vision Lab, The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Niels Bohrs Alle 1, 5230 Odense M, Denmark
Interests: deep hierarchies; cognitive vision; robot manipulation; biologically motivated vision; learning in assembly processes
Guest Editor
Prof. Dr. Ales Leonardis

School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
Interests: robust and adaptive methods for computer vision; object and scene recognition and categorization; statistical visual learning; 3D object modeling; biologically motivated vision

Special Issue Information

Dear Colleagues,

Vision is one of the major senses that enables humans to act and interact in changing environments.
In a similar vein, computer vision should play an equally important role in intelligent robotics. Vision and image understanding techniques are now applied in a range of robotics applications, either as an alternative or as a supplement to other sensing modalities. There have been significant recent advances in robust visual inference for mobility and manipulation, among other tasks. This special issue welcomes submissions both on advances in the theory of vision relevant to robotics and on applications of computer vision in robotics. We particularly encourage papers with thorough experimental evaluation.

Prof. Dr. Jeremy L. Wyatt
Prof. Dr. Danica Kragic
Dr. Norbert Krüger
Prof. Dr. Ales Leonardis
Guest Editors

Submission

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (N.B. conference papers may only be submitted if the paper was not originally copyrighted and has been substantially extended and completely rewritten). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Robotics is an international peer-reviewed Open Access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. For the first couple of issues the Article Processing Charge (APC) will be waived for well-prepared manuscripts. English correction and/or formatting fees of 250 CHF (Swiss Francs) will be charged in certain cases for those articles accepted for publication that require extensive additional formatting and/or English corrections.

Keywords

  • new visual sensors
  • hardware implementations of visual processing algorithms
  • novel algorithms and architectures for vision
  • real-time visual processing
  • integration of vision and haptics
  • integration of vision and other modalities
  • recovery of 3D object shape
  • object recognition and categorization
  • perception of objects with transparent and specular surfaces
  • vision for articulated and deformable objects
  • vision for manipulation
  • visual servoing
  • active vision
  • visual tracking and vision in multi-modal tracking
  • visual navigation
  • vision in unstructured environments
  • visual place recognition and categorization
  • visual recovery of scene structure
  • visual SLAM
  • vision for human-robot interaction
  • biologically motivated vision approaches applied to robot systems
  • evaluation metrics for vision systems
  • applications of robot vision

Published Papers (2 papers)

Research

Open Access Article: IMU and Multiple RGB-D Camera Fusion for Assisting Indoor Stop-and-Go 3D Terrestrial Laser Scanning
Robotics 2014, 3(3), 247-280; doi:10.3390/robotics3030247
Received: 19 February 2014 / Revised: 24 April 2014 / Accepted: 17 June 2014 / Published: 11 July 2014
Abstract
Autonomous Simultaneous Localization and Mapping (SLAM) is an important topic in many engineering fields. Since stop-and-go systems are typically slow and full-kinematic systems may lack accuracy and integrity, this paper presents a novel hybrid “continuous stop-and-go” mobile mapping system called Scannect. A 3D terrestrial LiDAR system is integrated with a MEMS IMU and two Microsoft Kinect sensors to map indoor urban environments. The Kinects’ depth maps were processed using a new point-to-plane ICP that minimizes the reprojection error of the infrared camera and projector pair in an implicit iterative extended Kalman filter (IEKF). A new formulation of the 5-point visual odometry method is tightly coupled in the implicit IEKF without increasing the dimensions of the state space. The Scannect can map and navigate in areas with textureless walls and provides an effective means for mapping large areas with many occlusions. Mapping long corridors (total travel distance of 120 m) took approximately 30 minutes and achieved a Mean Radial Spherical Error of 17 cm before smoothing or global optimization.
(This article belongs to the Special Issue Robot Vision)
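
For readers unfamiliar with the baseline that the Scannect pipeline extends, the sketch below outlines a standard linearized point-to-plane ICP step in NumPy. It is a minimal illustration under a small-angle assumption, not the paper's method: the authors instead minimize the infrared camera/projector reprojection error inside an implicit IEKF, and all names here (point_to_plane_step, the synthetic data) are ours.

import numpy as np

def point_to_plane_step(src, dst, normals):
    # One linearized point-to-plane ICP update: find a small rotation
    # vector w and translation t minimizing
    #   sum_i ((src_i + cross(w, src_i) + t - dst_i) . n_i)^2
    J = np.hstack([np.cross(src, normals), normals])  # N x 6 Jacobian
    r = np.einsum('ij,ij->i', dst - src, normals)     # plane residuals
    x = np.linalg.lstsq(J, r, rcond=None)[0]
    w, t = x[:3], x[3:]
    # First-order rotation matrix from the small-angle vector w
    R = np.eye(3) + np.array([[0.0,  -w[2],  w[1]],
                              [w[2],   0.0, -w[0]],
                              [-w[1], w[0],   0.0]])
    return R, t

# Synthetic check: recover a small known translation
rng = np.random.default_rng(0)
src = rng.random((100, 3))                        # matched source points
dst = src + np.array([0.05, 0.0, 0.01])           # shifted target points
normals = rng.normal(size=(100, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
R, t = point_to_plane_step(src, dst, normals)     # t ~ [0.05, 0, 0.01]

Iterating such a step with freshly matched correspondences gives the classical ICP loop; the paper's contribution is to fold this alignment into a filtering framework instead.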
Open Access Article: Illumination Tolerance for Visual Navigation with the Holistic Min-Warping Method
Robotics 2014, 3(1), 22-67; doi:10.3390/robotics3010022
Received: 15 December 2013 / Revised: 24 January 2014 / Accepted: 27 January 2014 / Published: 13 February 2014
Abstract
Holistic visual navigation methods are an emerging alternative to the ubiquitous feature-based methods. Holistic methods match entire images pixel-wise instead of extracting and comparing local feature descriptors. In this paper we investigate which pixel-wise distance measures are most suitable for the holistic min-warping method with respect to illumination invariance. Two novel approaches are presented: tunable distance measures (weighted combinations of illumination-invariant and illumination-sensitive terms) and two novel forms of “sequential” correlation, which are invariant to intensity shifts but not to multiplicative changes. Navigation experiments on indoor image databases collected at the same locations but under different conditions of illumination demonstrate that tunable distance measures perform best when their two terms are mixed rather than when the illumination-invariant term is used alone. Sequential correlation performs best among all tested methods, and an approximated form performs comparably well while being much faster. Mixing in an additional illumination-sensitive term is not necessary for sequential correlation. We show that min-warping with approximated sequential correlation can successfully be applied to visual navigation of cleaning robots.
(This article belongs to the Special Issue Robot Vision)
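
The paper defines its own distance measures; the NumPy sketch below only illustrates the two constructions named in the abstract, with illustrative term choices of our own (SSD for the illumination-sensitive part, normalized cross-correlation for the invariant part, and a mean-subtracted SSD whose shift-but-not-scale invariance mirrors that of sequential correlation). Function names are ours, not the paper's.

import numpy as np

def ssd(a, b):
    # Illumination-sensitive term: mean squared intensity difference
    return np.mean((a - b) ** 2)

def ncc_distance(a, b, eps=1e-9):
    # Illumination-invariant term: one minus the normalized cross-
    # correlation, unaffected by intensity shifts and scalings
    a0, b0 = a - a.mean(), b - b.mean()
    denom = np.linalg.norm(a0) * np.linalg.norm(b0) + eps
    return 1.0 - np.dot(a0.ravel(), b0.ravel()) / denom

def tunable_distance(a, b, alpha):
    # Weighted combination of the two terms; alpha in [0, 1] controls
    # how much illumination sensitivity is mixed back in. (The two
    # terms live on different scales; a real system would rescale
    # them before blending.)
    return (1.0 - alpha) * ncc_distance(a, b) + alpha * ssd(a, b)

def shift_invariant_distance(a, b):
    # Subtracting the means cancels additive intensity offsets but not
    # multiplicative changes: the invariance profile the abstract
    # ascribes to "sequential" correlation
    a0, b0 = a - a.mean(), b - b.mean()
    return np.mean((a0 - b0) ** 2)

With alpha = 0 the tunable measure reduces to the purely illumination-invariant term; the experiments summarized above found intermediate blends to work better.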

Journal Contact

MDPI AG
Robotics Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
robotics@mdpi.com
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18