Open Access Article
Sensors 2018, 18(8), 2476; https://doi.org/10.3390/s18082476

Visual Localizer: Outdoor Localization Based on ConvNet Descriptor and Global Optimization for Visually Impaired Pedestrians

State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, Hangzhou 310027, China
These authors contributed equally to this work.
* Author to whom correspondence should be addressed.
Received: 13 June 2018 / Revised: 25 July 2018 / Accepted: 26 July 2018 / Published: 31 July 2018
(This article belongs to the Special Issue Sensor Technologies for Caring People with Disabilities)
Abstract

Localization systems play an important role in assisted navigation. Precise localization makes visually impaired people aware of their ambient environment and prevents them from encountering potential hazards. The majority of visual localization algorithms, which are designed for autonomous vehicles, are not completely adaptable to the scenarios of assisted navigation. Those vehicle-based approaches are vulnerable to viewpoint, appearance and route changes (between database and query images) caused by the wearable cameras of assistive devices. Facing these practical challenges, we propose Visual Localizer, which is composed of a ConvNet descriptor and global optimization, to achieve robust visual localization for assisted navigation. The performance of five prevailing ConvNets is comprehensively compared, and GoogLeNet is found to be the most invariant to environmental changes. By concatenating two compressed convolutional layers of GoogLeNet, we use only thousands of bytes to represent each image efficiently. To further improve the robustness of image matching, we utilize a network flow model as a global optimization of image matching. Extensive experiments using images captured by visually impaired volunteers illustrate that the system performs well in the context of assisted navigation.
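
As a rough illustration of the descriptor described above, the sketch below extracts activations from two intermediate GoogLeNet layers, compresses each by average pooling, and concatenates them into a compact, L2-normalized vector. The specific layers (inception4a, inception4e), the pooling-based compression, and the use of torchvision's pretrained GoogLeNet are assumptions made for illustration; the abstract does not specify the paper's exact choices.

    # Minimal sketch (not the authors' exact pipeline): a compact image
    # descriptor built from two GoogLeNet layer activations.
    # Assumed choices: torchvision GoogLeNet, layers inception4a/inception4e,
    # global-average-pooling compression.
    import torch
    import torch.nn.functional as F
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    model = models.googlenet(weights="DEFAULT").eval()

    # Capture intermediate activations with forward hooks.
    activations = {}
    def hook(name):
        def fn(module, inputs, output):
            activations[name] = output.detach()
        return fn

    model.inception4a.register_forward_hook(hook("layer_a"))  # assumed layer
    model.inception4e.register_forward_hook(hook("layer_b"))  # assumed layer

    preprocess = T.Compose([
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def describe(image_path):
        """Return a compact, L2-normalized descriptor for one image."""
        x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            model(x)
        parts = []
        for name in ("layer_a", "layer_b"):
            feat = activations[name]               # (1, C, H, W)
            feat = F.adaptive_avg_pool2d(feat, 1)  # compress to (1, C, 1, 1)
            parts.append(feat.flatten(1))
        desc = torch.cat(parts, dim=1)             # concatenated descriptor
        return F.normalize(desc, dim=1).squeeze(0)

    # Pairwise matching then reduces to, e.g., cosine similarity:
    # score = torch.dot(describe("query.jpg"), describe("db_0001.jpg"))
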
Keywords: assisted navigation; place recognition; topological localization; impaired vision; convolutional neural networks; deep feature; network flow; data association graph
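
The global optimization step can likewise be sketched as one unit of min-cost flow over a data-association graph: nodes are candidate (query, database) matches, edges link matches of consecutive query images, and edge costs penalize low descriptor similarity. The construction below (the smoothness window, the integer cost scaling, and the networkx solver) is a simplified assumption for illustration, not the paper's exact formulation.

    # Minimal sketch (assumptions flagged above): globally consistent
    # sequence matching as a unit min-cost flow on a data-association graph.
    import networkx as nx
    import numpy as np

    def match_sequences(similarity, window=3):
        """similarity[i, j]: descriptor similarity in [0, 1] between query
        image i and database image j. Returns one database index per query
        image, chosen jointly over the whole sequence."""
        Q, D = similarity.shape
        cost = np.rint((1.0 - similarity) * 1000).astype(int)  # integer costs
        G = nx.DiGraph()
        # Edges between associations of consecutive queries whose database
        # indices stay within a small window (smoothness assumption).
        for i in range(Q - 1):
            for j in range(D):
                for k in range(max(0, j - window), min(D, j + window + 1)):
                    G.add_edge((i, j), (i + 1, k),
                               weight=int(cost[i + 1, k]), capacity=1)
        # One unit of flow from source to sink selects one match per query.
        for j in range(D):
            G.add_edge("s", (0, j), weight=int(cost[0, j]), capacity=1)
            G.add_edge((Q - 1, j), "t", weight=0, capacity=1)
        G.add_node("s", demand=-1)
        G.add_node("t", demand=1)
        flow = nx.min_cost_flow(G)
        # Follow the unit of flow from source to sink to read off the matches.
        path, node = [], "s"
        while node != "t":
            node = next(v for v, f in flow[node].items() if f > 0)
            if node != "t":
                path.append(node[1])
        return path

For a single unit of flow, this is equivalent to a shortest path through the association graph, which is why one globally consistent matching sequence, rather than a set of independent per-image matches, is returned.
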
Graphical abstract

This is an open access article distributed under the Creative Commons Attribution (CC BY 4.0) License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite This Article

MDPI and ACS Style

Lin, S.; Cheng, R.; Wang, K.; Yang, K. Visual Localizer: Outdoor Localization Based on ConvNet Descriptor and Global Optimization for Visually Impaired Pedestrians. Sensors 2018, 18, 2476.
