Open Access Article
Sensors 2018, 18(5), 1506; https://doi.org/10.3390/s18051506

Unifying Terrain Awareness for the Visually Impaired through Real-Time Semantic Segmentation

1 State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, Hangzhou 310027, China
2 Department of Electronics, University of Alcalá, Madrid 28805, Spain
3 Department of Computing, Imperial College London, London SW7 2AZ, UK
4 KR-VISION Technology Co., Ltd., Hangzhou 310023, China
5 Department of Electrical and Computer Engineering, University of California, Los Angeles, CA 90095, USA
* Author to whom correspondence should be addressed.
Received: 5 April 2018 / Revised: 5 May 2018 / Accepted: 8 May 2018 / Published: 10 May 2018
(This article belongs to the Special Issue Wearable Smart Devices)

Abstract

Navigational assistance aims to help visually-impaired people travel through their environment safely and independently. The task is challenging because it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have emerged over several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of visually-impaired people to a large extent. However, running all detectors jointly increases latency and burdens the available computational resources. In this paper, we propose leveraging pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture aimed at attaining efficient semantic understanding. We have integrated the approach into a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates accuracy competitive with state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectiveness and versatility of the assistive framework.
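To illustrate the kind of fusion the abstract describes, the sketch below flags short-range hazards by combining a per-pixel semantic label map with a depth map. This is a minimal, hypothetical illustration, not the paper's implementation: the class indices, function name, and 1.5 m threshold are all assumptions for the example.

```python
import numpy as np

# Hypothetical label indices for the terrain classes named in the abstract.
TRAVERSABLE, SIDEWALK, STAIRS, WATER, OBSTACLE = range(5)

def prioritize_hazards(semantic_map, depth_map, near_thresh_m=1.5):
    """Fuse a per-pixel semantic map (H x W, int labels) with a depth map
    (H x W, metres): pixels that are both hazardous and within the
    near-range threshold are flagged for avoidance feedback."""
    near = depth_map < near_thresh_m
    hazard = np.isin(semantic_map, [STAIRS, WATER, OBSTACLE])
    return near & hazard

# Toy 2x3 scene: an obstacle at 1.0 m should be flagged; one at 3.0 m should not.
sem = np.array([[TRAVERSABLE, OBSTACLE, OBSTACLE],
                [SIDEWALK,    WATER,    TRAVERSABLE]])
dep = np.array([[0.8, 1.0, 3.0],
                [0.9, 1.2, 0.5]])
mask = prioritize_hazards(sem, dep)
# mask is True only at the near obstacle (0,1) and the near water pixel (1,1).
```

In a real pipeline the semantic map would come from the segmentation network and the depth map from the RGB-D sensor; the masking step itself stays this simple.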
Keywords: navigation assistance; semantic segmentation; traversability awareness; obstacle avoidance; RGB-D sensor; visually-impaired people
Figures

Graphical abstract

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Share & Cite This Article

MDPI and ACS Style

Yang, K.; Wang, K.; Bergasa, L.M.; Romera, E.; Hu, W.; Sun, D.; Sun, J.; Cheng, R.; Chen, T.; López, E. Unifying Terrain Awareness for the Visually Impaired through Real-Time Semantic Segmentation. Sensors 2018, 18, 1506.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Sensors EISSN 1424-8220, published by MDPI AG, Basel, Switzerland.