Open Access Article
Sensors 2016, 16(3), 311; doi:10.3390/s16030311

A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots

1 Department of Electrical and Computer Engineering, Automation and Systems Research Institute (ASRI), Seoul National University, Seoul 151-742, Korea
2 Inter-University Semiconductor Research Center (ISRC), Seoul National University, Seoul 151-742, Korea
* Author to whom correspondence should be addressed.
Academic Editor: Yajing Shen
Received: 29 December 2015 / Revised: 3 February 2016 / Accepted: 17 February 2016 / Published: 1 March 2016
(This article belongs to the Special Issue Sensors for Robots)

Abstract

This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%.
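The core geometric idea behind the IPM step described in the abstract is that, for a calibrated camera at a known height and tilt, each image pixel can be back-projected as a ray and intersected with the floor plane. The sketch below illustrates this under assumed conventions (the function name, pinhole camera model, and axis conventions are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def pixel_to_floor(u, v, K, cam_height, pitch):
    """Project an image pixel onto the floor plane (z = 0) for a camera
    mounted at height `cam_height` (metres) and tilted down by `pitch`
    (radians). Returns (x, y) floor coordinates with x forward and y left,
    or None if the ray does not intersect the floor."""
    # Back-project the pixel to a ray in camera coordinates.
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Camera axes expressed in the world frame (z up).
    z_c = np.array([np.cos(pitch), 0.0, -np.sin(pitch)])  # optical axis
    x_c = np.array([0.0, -1.0, 0.0])                      # image right
    y_c = np.cross(z_c, x_c)                              # image down
    d_world = d_cam[0] * x_c + d_cam[1] * y_c + d_cam[2] * z_c
    if d_world[2] >= 0:
        # Ray points at or above the horizon; no floor intersection.
        return None
    t = cam_height / -d_world[2]
    return t * d_world[0], t * d_world[1]

# Example: principal-point pixel for a camera 0.3 m high, tilted 30 degrees.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
x, y = pixel_to_floor(320, 240, K, 0.3, np.radians(30))  # x ~ 0.52 m ahead
```

Pixels whose projected floor positions are inconsistent between views (or with the floor appearance model) are candidates for the obstacle label, and the nearest such floor intersection yields the robot-to-obstacle distance estimated in the paper.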
Keywords: obstacle detection; monocular vision; segmentation
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Lee, T.-J.; Yi, D.-H.; Cho, D.-I. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots. Sensors 2016, 16, 311.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Sensors EISSN 1424-8220, published by MDPI AG, Basel, Switzerland.