Robot Vision

A special issue of Robotics (ISSN 2218-6581).

Deadline for manuscript submissions: closed (28 February 2014)

Special Issue Editors


Dr. Jeremy L. Wyatt
Guest Editor
Intelligent Robotics Laboratory, School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
Interests: robot learning; humanoid robots; robot motion; control theory

Prof. Dr. Danica Kragic
Guest Editor
School of Computer Science and Communication, Royal Institute of Technology (KTH), Teknikringen 14, R 709 KTH, SE-100 44 Stockholm, Sweden
Interests: vision-based control; vision systems; human-robot interaction; object grasping and manipulation; activity recognition

Dr. Norbert Krüger
Guest Editor
Cognitive Vision Lab, The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Niels Bohrs Alle 1, 5230 Odense M, Denmark
Interests: deep hierarchies; cognitive vision; robot manipulation; biologically motivated vision; learning in assembly processes

Prof. Dr. Ales Leonardis
Guest Editor
School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
Interests: robust and adaptive methods for computer vision; object and scene recognition and categorization; statistical visual learning; 3D object modeling; biologically motivated vision

Special Issue Information

Dear Colleagues,

Vision is one of the major senses that enable humans to act and interact in changing environments.
In a similar vein, computer vision should play an equally important role in intelligent robotics. Vision and image understanding techniques are now applied in a range of robotics applications, either as an alternative or as a supplement to other sensing modalities, and there have been significant recent advances in robust visual inference for mobility, manipulation, and other tasks. This Special Issue welcomes submissions on both advances in the theory of vision relevant to robotics and applications of computer vision in robotics. We particularly encourage papers with thorough experimental evaluation.

Dr. Jeremy L. Wyatt
Prof. Dr. Danica Kragic
Dr. Norbert Krüger
Prof. Dr. Ales Leonardis
Guest Editors

Submission

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (N.B. Conference papers may only be submitted if the paper was not originally copyrighted and if it has been extended substantially and completely re-written). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Robotics is an international peer-reviewed Open Access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. For the first couple of issues, the Article Processing Charge (APC) will be waived for well-prepared manuscripts. In certain cases, a fee of 250 CHF (Swiss francs) will be charged for accepted articles that require extensive additional formatting and/or English corrections.

Keywords

  • new visual sensors
  • hardware implementations of visual processing algorithms
  • novel algorithms and architectures for vision
  • real-time visual processing
  • integration of vision and haptics
  • integration of vision and other modalities
  • recovery of 3D object shape
  • object recognition and categorization
  • perception of objects with transparent and specular surfaces
  • vision for articulated and deformable objects
  • vision for manipulation
  • visual servoing
  • active vision
  • visual tracking and vision in multi-modal tracking
  • visual navigation
  • vision in unstructured environments
  • visual place recognition and categorization
  • visual recovery of scene structure
  • visual SLAM
  • vision for human-robot interaction
  • biologically motivated vision approaches applied to robot systems
  • evaluation metrics for vision systems
  • applications of robot vision

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)

Research

Article
IMU and Multiple RGB-D Camera Fusion for Assisting Indoor Stop-and-Go 3D Terrestrial Laser Scanning
by Jacky C.K. Chow, Derek D. Lichti, Jeroen D. Hol, Giovanni Bellusci and Henk Luinge
Robotics 2014, 3(3), 247-280; https://doi.org/10.3390/robotics3030247 - 11 Jul 2014
Cited by 44
Abstract
Autonomous Simultaneous Localization and Mapping (SLAM) is an important topic in many engineering fields. Since stop-and-go systems are typically slow and full-kinematic systems may lack accuracy and integrity, this paper presents a novel hybrid “continuous stop-and-go” mobile mapping system called Scannect. A 3D terrestrial LiDAR system is integrated with a MEMS IMU and two Microsoft Kinect sensors to map indoor urban environments. The Kinects’ depth maps were processed using a new point-to-plane ICP that minimizes the reprojection error of the infrared camera and projector pair in an implicit iterative extended Kalman filter (IEKF). A new formulation of the 5-point visual odometry method is tightly coupled in the implicit IEKF without increasing the dimensions of the state space. The Scannect can map and navigate in areas with textureless walls and provides an effective means for mapping large areas with many occlusions. Mapping long corridors (total travel distance of 120 m) took approximately 30 minutes and achieved a Mean Radial Spherical Error of 17 cm before smoothing or global optimization.
(This article belongs to the Special Issue Robot Vision)
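
As a rough illustration of the geometric alignment step this abstract describes, the sketch below implements one linearized Gauss-Newton step of a generic point-to-plane ICP in Python with NumPy. The function name and interfaces are hypothetical; the paper's actual variant minimizes the infrared camera/projector reprojection error inside an implicit iterated extended Kalman filter, which is not reproduced here.

    import numpy as np

    def point_to_plane_icp_step(src, dst, normals):
        # One Gauss-Newton step of generic point-to-plane ICP (hypothetical
        # helper, not the paper's formulation).
        #   src     : (N, 3) source points (e.g., from a Kinect depth map)
        #   dst     : (N, 3) matched destination points
        #   normals : (N, 3) unit normals at the destination points
        # Returns a 6-vector [rx, ry, rz, tx, ty, tz]: a small-angle
        # rotation plus translation that reduces the point-to-plane error.
        # Residual: signed distance of each source point to the destination
        # tangent plane, r_i = n_i . (p_i - q_i).
        r = np.einsum('ij,ij->i', normals, src - dst)
        # Jacobian rows under the small-angle linearization: [p_i x n_i, n_i].
        J = np.hstack([np.cross(src, normals), normals])
        # Least-squares solution of J x = -r.
        x, *_ = np.linalg.lstsq(J, -r, rcond=None)
        return x

In the Scannect pipeline, a geometric term of this kind is fused with IMU and visual-odometry measurements inside the filter state rather than solved in isolation.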

Article
Illumination Tolerance for Visual Navigation with the Holistic Min-Warping Method
by Ralf Möller, Michael Horst and David Fleer
Robotics 2014, 3(1), 22-67; https://doi.org/10.3390/robotics3010022 - 13 Feb 2014
Cited by 13
Abstract
Holistic visual navigation methods are an emerging alternative to the ubiquitous feature-based methods. Holistic methods match entire images pixel-wise instead of extracting and comparing local feature descriptors. In this paper we investigate which pixel-wise distance measures are most suitable for the holistic min-warping method with respect to illumination invariance. Two novel approaches are presented: tunable distance measures (weighted combinations of illumination-invariant and illumination-sensitive terms) and two novel forms of “sequential” correlation, which are invariant against intensity shifts but not against multiplicative changes. Navigation experiments on indoor image databases collected at the same locations but under different conditions of illumination demonstrate that tunable distance measures perform optimally when they mix both portions rather than using the illumination-invariant term alone. Sequential correlation performs best among all tested methods, and an approximated form performs nearly as well but much faster. Mixing in an additional illumination-sensitive term is not necessary for sequential correlation. We show that min-warping with approximated sequential correlation can successfully be applied to visual navigation of cleaning robots.
(This article belongs to the Special Issue Robot Vision)
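
As a rough illustration of the distinction this abstract draws, the following minimal Python sketch (hypothetical names, not the paper's implementation) mixes a shift-tolerant term with a plain illumination-sensitive SSD through a tunable weight. The shift-tolerant term is modeled here as zero-mean SSD, which tolerates additive intensity shifts but not multiplicative changes; the paper's sequential correlation is a different, correlation-style measure with the same invariance class.

    import numpy as np

    def shift_invariant_distance(a, b):
        # Zero-mean SSD: unchanged if a constant is added to either image,
        # but sensitive to multiplicative intensity changes.
        return np.sum(((a - a.mean()) - (b - b.mean())) ** 2)

    def sensitive_distance(a, b):
        # Plain SSD: fully illumination-sensitive.
        return np.sum((a - b) ** 2)

    def tunable_distance(a, b, w=0.3):
        # Weighted mix of an illumination-tolerant and an illumination-
        # sensitive term, in the spirit of the paper's tunable measures
        # (the exact terms and weights used there differ).
        return (1.0 - w) * shift_invariant_distance(a, b) + w * sensitive_distance(a, b)

    # An additive shift leaves the tolerant term at zero while the
    # sensitive term grows:
    img = np.random.rand(32, 32)
    print(shift_invariant_distance(img, img + 0.2))  # ~0.0
    print(sensitive_distance(img, img + 0.2))        # > 0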
