Open Access Article
Sensors 2014, 14(10), 18670-18700; doi:10.3390/s141018670

A Context-Aware-Based Audio Guidance System for Blind People Using a Multimodal Profile Model

Electronic Engineering Department, Soongsil University, 511 Sangdo-Dong, Dongjak-Gu, Seoul 156-743, Korea
* Author to whom correspondence should be addressed.
Received: 4 August 2014 / Revised: 15 September 2014 / Accepted: 18 September 2014 / Published: 9 October 2014
(This article belongs to the Section Physical Sensors)

Abstract

A wearable guidance system is designed to provide context-dependent guidance messages to blind people while they traverse local pathways. The system is composed of three parts: moving scene analysis, walking context estimation and audio message delivery. The combination of a downward-pointing laser scanner and a camera is used to solve the challenging problem of moving scene analysis. By integrating laser data profiles and image edge profiles, a multimodal profile model is constructed to jointly estimate the ground plane, object locations and object types using a Bayesian network. The outputs of the moving scene analysis are further employed to estimate the walking context, which is defined as a fuzzy safety level inferred through a fuzzy logic model. Depending on the estimated walking context, the audio messages that best suit the current context are delivered to the user in a flexible manner. The proposed system is tested under various local pathway scenes, and the results confirm its efficiency in assisting blind people to attain autonomous mobility.
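The fuzzy safety-level inference described in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the input variables (obstacle distance, free-path width), the membership-function ranges, and the rule set are hypothetical and are not taken from the paper, which does not disclose its rule base in the abstract.

```python
def falling(x, lo, hi):
    """Shoulder membership: 1 below lo, 0 above hi, linear in between."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)


def rising(x, lo, hi):
    """Complementary shoulder membership: 0 below lo, 1 above hi."""
    return 1.0 - falling(x, lo, hi)


def safety_level(obstacle_dist_m, free_path_width_m):
    """Map scene-analysis outputs to a fuzzy safety level in [0, 1].

    Hypothetical Mamdani-style inference: fuzzify the two inputs,
    fire two rules, and defuzzify by a weighted average.
    """
    # Fuzzify inputs (breakpoints are illustrative assumptions).
    near = falling(obstacle_dist_m, 1.0, 3.0)      # obstacle is near
    far = rising(obstacle_dist_m, 1.0, 3.0)        # obstacle is far
    narrow = falling(free_path_width_m, 0.6, 1.5)  # walkable path is narrow
    wide = rising(free_path_width_m, 0.6, 1.5)     # walkable path is wide

    # Rule 1: IF near OR narrow THEN unsafe (output 0).
    danger = max(near, narrow)
    # Rule 2: IF far AND wide THEN safe (output 1).
    safe = min(far, wide)

    # Defuzzify: weighted average of the two rule outputs.
    if danger + safe == 0.0:
        return 0.5  # no rule fired; fall back to a neutral level
    return (0.0 * danger + 1.0 * safe) / (danger + safe)
```

A downstream message scheduler could then threshold this continuous safety level to decide between, say, an urgent warning and a routine orientation cue, which matches the abstract's idea of delivering messages "in a flexible manner" according to context.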
Keywords: electronic mobility aids; sensor fusion; object detection; Bayesian network; context-aware guidance; multimodal information transformation

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Supplementary material

Share & Cite This Article

MDPI and ACS Style

Lin, Q.; Han, Y. A Context-Aware-Based Audio Guidance System for Blind People Using a Multimodal Profile Model. Sensors 2014, 14, 18670-18700.


Sensors EISSN 1424-8220, published by MDPI AG, Basel, Switzerland.