Open Access Article
Sensors 2010, 10(5), 5209-5232; doi:10.3390/s100505209

Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors

Universidad Miguel Hernández, Avda. de la Universidad s/n, Ed. Quorum V, 03202 Elche (Alicante), Spain
* Author to whom correspondence should be addressed.
Received: 11 February 2010 / Revised: 31 March 2010 / Accepted: 14 April 2010 / Published: 25 May 2010
(This article belongs to the Special Issue Intelligent Sensors - 2010)

Abstract

In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot equipped with a particular sensor moves through the environment, obtains measurements with its sensor and uses them to construct a model of the space in which it operates. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows it to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.
Keywords: visual SLAM; sensor fusion; uncertainty estimation; cooperative robots
This is an open access article distributed under the Creative Commons Attribution License (CC BY 3.0).
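The abstract describes a FastSLAM-style Rao-Blackwellized particle filter in which each particle carries pose hypotheses for the robots and an independent per-landmark estimate, with data association driven by visual descriptors. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes a simplified planar motion model, an identity measurement Jacobian, and L2 descriptor matching, and all class names, noise values and thresholds are hypothetical.

```python
# Minimal sketch of a Rao-Blackwellized particle filter for multi-robot visual SLAM.
# Each particle holds one pose hypothesis per robot and an EKF (3D mean + covariance)
# per visual landmark. Parameters and structure are illustrative assumptions.
import copy
import numpy as np

class Landmark:
    def __init__(self, mean, cov, descriptor):
        self.mean = mean              # estimated 3D position of the landmark
        self.cov = cov                # 3x3 covariance (per-landmark EKF)
        self.descriptor = descriptor  # visual appearance descriptor

class Particle:
    def __init__(self, n_robots):
        self.poses = np.zeros((n_robots, 3))  # (x, y, heading) per robot
        self.landmarks = []
        self.weight = 1.0

def predict(particle, controls, motion_noise=0.05):
    """Sample a new pose hypothesis for every robot from a simple motion model."""
    for r, (v, w) in enumerate(controls):
        x, y, th = particle.poses[r]
        v_n = v + np.random.randn() * motion_noise
        w_n = w + np.random.randn() * motion_noise
        particle.poses[r] = [x + v_n * np.cos(th), y + v_n * np.sin(th), th + w_n]

def update(particle, robot_id, z, descriptor, R=np.eye(3) * 0.1, match_thr=0.6):
    """Associate a relative 3D measurement z to a landmark, EKF-update it, reweight."""
    x, y, _ = particle.poses[robot_id]
    robot_pos = np.array([x, y, 0.0])
    # Data association by descriptor distance (assumed L2 metric and threshold).
    best, best_d = None, match_thr
    for lm in particle.landmarks:
        d = np.linalg.norm(lm.descriptor - descriptor)
        if d < best_d:
            best, best_d = lm, d
    if best is None:
        # Previously unseen landmark: initialize from the relative measurement.
        particle.landmarks.append(Landmark(robot_pos + z, R.copy(), descriptor))
        return
    # Measurement model: expected relative landmark position (identity Jacobian).
    z_hat = best.mean - robot_pos
    S = best.cov + R                       # innovation covariance
    K = best.cov @ np.linalg.inv(S)        # Kalman gain
    innov = z - z_hat
    best.mean = best.mean + K @ innov
    best.cov = (np.eye(3) - K) @ best.cov
    # Particle weight: Gaussian likelihood of the innovation.
    particle.weight *= np.exp(-0.5 * innov @ np.linalg.inv(S) @ innov) / \
                       np.sqrt((2 * np.pi) ** 3 * np.linalg.det(S))

def resample(particles):
    """Importance resampling proportional to the particle weights."""
    w = np.array([p.weight for p in particles])
    w /= w.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=w)
    new = [copy.deepcopy(particles[i]) for i in idx]
    for p in new:
        p.weight = 1.0
    return new
```

In this factorization, conditioning on the sampled robot paths makes the landmark estimates independent, which is why each landmark can be maintained with its own small EKF inside every particle; measurements from all robots in the network update the same shared map hypothesis.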

Share & Cite This Article

MDPI and ACS Style

Gil, A.; Reinoso, Ó.; Ballesta, M.; Juliá, M.; Payá, L. Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors. Sensors 2010, 10, 5209-5232.

