A Context-Aware-Based Audio Guidance System for Blind People Using a Multimodal Profile Model
Abstract: A wearable guidance system is designed to provide context-dependent guidance messages to blind people while they traverse local pathways. The system is composed of three parts: moving scene analysis, walking context estimation, and audio message delivery. The combination of a downward-pointing laser scanner and a camera is used to solve the challenging problem of moving scene analysis. By integrating laser data profiles and image edge profiles, a multimodal profile model is constructed to jointly estimate the ground plane, object locations, and object types using a Bayesian network. The outputs of the moving scene analysis are further employed to estimate the walking context, defined as a fuzzy safety level inferred through a fuzzy logic model. Depending on the estimated walking context, the audio messages that best suit the current context are delivered to the user in a flexible manner. The proposed system is tested in various local pathway scenes, and the results confirm its effectiveness in helping blind people achieve autonomous mobility.
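The fuzzy safety-level idea described above can be sketched in a few lines. The membership functions, rule base, and input names below are hypothetical illustrations (the paper's actual fuzzy model and parameters are not reproduced here); the sketch only shows how two scene-analysis outputs, an obstacle distance and a ground-plane confidence, might be fused into a single safety score:

```python
# Hypothetical sketch of a fuzzy safety-level inference; membership
# function shapes and rules are illustrative assumptions, not the
# paper's actual model.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def safety_level(obstacle_dist_m, ground_conf):
    """Fuse obstacle distance (metres) and ground-plane confidence [0, 1]
    into a fuzzy safety score in [0, 1], where 1 means safe to proceed."""
    near = tri(obstacle_dist_m, -1.0, 0.0, 2.0)    # obstacle close ahead
    far = tri(obstacle_dist_m, 1.0, 4.0, 100.0)    # path clear ahead
    # Toy rule base: UNSAFE if an obstacle is near;
    # SAFE if the path is clear AND the ground plane is reliably detected.
    unsafe = near
    safe = min(far, ground_conf)
    # Defuzzify as the safe rule's share of the total rule activation.
    total = safe + unsafe
    return 0.5 if total == 0 else safe / total
```

A context-aware message scheduler could then threshold this score, e.g. issuing an urgent warning below 0.2 and only periodic orientation cues above 0.8.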
- Supplementary File 1: Supplementary (PDF, 2901 KB)
Cite This Article
Lin, Q.; Han, Y. A Context-Aware-Based Audio Guidance System for Blind People Using a Multimodal Profile Model. Sensors 2014, 14(10), 18670-18700.