Special Issue "Robot Vision"
A special issue of Robotics (ISSN 2218-6581).
Deadline for manuscript submissions: closed (28 February 2014)
Prof. Dr. Jeremy L. Wyatt
Intelligent Robotics Laboratory, School of Computer Science, University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK
Phone: +44 1214 144788
Fax: +44 121 414 4788
Interests: robot learning; reinforcement learning; sequential decision making under uncertainty; robot task planning; robot manipulation; machine learning; machine vision
Prof. Dr. Danica Kragic
School of Computer Science and Communication, Royal Institute of Technology (KTH), Teknikringen 14, R 709 KTH, SE-100 44 Stockholm, Sweden
Interests: vision based control; vision systems; human-robot interaction; object grasping and manipulation; activity recognition
Dr. Norbert Krüger
Cognitive Vision Lab, The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Niels Bohrs Alle 1, 5230, Odense M, Denmark
Interests: deep hierarchies; cognitive vision; robot manipulation; biologically motivated vision; learning in assembly processes
Prof. Dr. Ales Leonardis
School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
Interests: robust and adaptive methods for computer vision; object and scene recognition and categorization; statistical visual learning; 3D object modeling; biologically motivated vision
Vision is one of the major senses that enables humans to act and interact in changing environments.
In a similar vein, computer vision should play an equally important role in intelligent robotics. Vision and image understanding techniques are now applied in a range of robotics applications, either as an alternative or as a supplement to other sensing modalities, and there have been significant recent advances in robust visual inference for mobility and manipulation, among other tasks. This special issue welcomes submissions both on advances in the theory of vision relevant to robotics and on applications of computer vision in robotics. We particularly encourage papers with thorough experimental evaluation.
Prof. Dr. Jeremy L. Wyatt
Prof. Dr. Danica Kragic
Dr. Norbert Krüger
Prof. Dr. Ales Leonardis
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, proceed to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (N.B. Conference papers may only be submitted if the paper was not originally copyrighted and if it has been extended substantially and completely re-written). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Robotics is an international peer-reviewed Open Access quarterly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. For the first couple of issues the Article Processing Charge (APC) will be waived for well-prepared manuscripts. English correction and/or formatting fees of 250 CHF (Swiss Francs) will be charged in certain cases for those articles accepted for publication that require extensive additional formatting and/or English corrections.
- new visual sensors
- hardware implementations of visual processing algorithms
- novel algorithms and architectures for vision
- real-time visual processing
- integration of vision and haptics
- integration of vision and other modalities
- recovery of 3D object shape
- object recognition and categorization
- perception of objects with transparent and specular surfaces
- vision for articulated and deformable objects
- vision for manipulation
- visual servoing
- active vision
- visual tracking and vision in multi-modal tracking
- visual navigation
- vision in unstructured environments
- visual place recognition and categorization
- visual recovery of scene structure
- visual SLAM
- vision for human-robot interaction
- biologically motivated vision approaches applied to robot systems
- evaluation metrics for vision systems
- applications of robot vision
Article: IMU and Multiple RGB-D Camera Fusion for Assisting Indoor Stop-and-Go 3D Terrestrial Laser Scanning
Robotics 2014, 3(3), 247-280; doi:10.3390/robotics3030247
Received: 19 February 2014; in revised form: 24 April 2014 / Accepted: 17 June 2014 / Published: 11 July 2014
Robotics 2014, 3(1), 22-67; doi:10.3390/robotics3010022
Received: 15 December 2013; in revised form: 24 January 2014 / Accepted: 27 January 2014 / Published: 13 February 2014
The list below contains only planned manuscripts. Some of these manuscripts have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.
Type of Paper: Article
Title: An Inexpensive Image Fusion Vision System for Robot Navigation
Author(s): Lyle N. Long
Affiliation(s): The Pennsylvania State University; E-Mail: email@example.com
Abstract: A hybrid vision system was developed for robot navigation. The robot was built on an all-terrain chassis and equipped with a stereo camera, a Parallax microcontroller, a Sabertooth motor controller, two NiMH batteries, and a Samsung Q430 laptop. The vision system relied on stereo vision for navigation: raw images were captured from the stereo camera in real time, and disparity maps were created using a feature-based stereo algorithm. The disparity results were converted into depth results, which were fused with an edge map of the scene. Range values were inferred from the fused image and fed into a fuzzy logic algorithm to detect intersections; robot navigation was then based on the detected intersections. Various stereo algorithms were explored while developing the system. Most algorithms provide excellent results on standard stereo images but fail with raw images from the camera; a feature-based algorithm was found to be the most robust for the present vision system. The system was tested in various scenarios and was successful as a close-range system: it was able to detect all intersections at close range and, in the tested scenarios, to move while avoiding obstacles and walls.
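The pipeline the abstract describes (disparity map → depth map → fusion with an edge map → fuzzy range classification) can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the focal length, baseline, edge threshold, and fuzzy membership breakpoints are all hypothetical values chosen for the example, and a simple gradient-magnitude edge detector stands in for whatever edge map the paper uses.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px=700.0, baseline_m=0.12):
    """Convert a disparity map to depth via Z = f * B / d.

    focal_px and baseline_m are hypothetical camera parameters.
    Zero disparity maps to infinite depth."""
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

def edge_map(image, thresh=30.0):
    """Toy edge detector: gradient magnitude above a threshold."""
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy) > thresh

def fuse(depth, edges):
    """Fuse depth with the edge map: keep range only along edges."""
    fused = np.full(depth.shape, np.inf)
    fused[edges] = depth[edges]
    return fused

def near_membership(range_m, near=1.0, far=3.0):
    """Fuzzy 'near' grade: 1 below `near`, 0 above `far`, linear between."""
    return float(np.clip((far - range_m) / (far - near), 0.0, 1.0))

if __name__ == "__main__":
    # Synthetic disparity map with a step edge at column 4.
    disparity = np.zeros((8, 8))
    disparity[:, 4:] = 42.0
    depth = disparity_to_depth(disparity)      # 700 * 0.12 / 42 = 2.0 m
    edges = edge_map(disparity * 5.0)
    fused = fuse(depth, edges)                 # range values only at edges
    grade = near_membership(depth[0, 5])       # fuzzy 'near' grade at 2.0 m
    print(depth[0, 5], grade)
```

In a real system the disparity map would come from the feature-based stereo matcher and the fuzzy grades from several overlapping membership functions, but the data flow is the same.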
Last update: 27 May 2013