Open Access Article
Sensors 2017, 17(11), 2473; https://doi.org/10.3390/s17112473

DEEP-SEE: Joint Object Detection, Tracking and Recognition with Application to Visually Impaired Navigational Assistance

Ruxandra Tapu 1,2,†,*, Bogdan Mocanu 1,2,† and Titus Zaharia 1

1 Advanced Research and TEchniques for Multidimensional Imaging Systems Department, Institut Mines-Télécom/Télécom SudParis, UMR CNRS MAP5 8145 and 5157 SAMOVAR, 9 rue Charles Fourier, 91000 Évry, France
2 Telecommunication Department, Faculty of ETTI, University “Politehnica” of Bucharest, Splaiul Independentei 313, 060042 Bucharest, Romania
† These authors contributed equally to this work.
* Author to whom correspondence should be addressed.
Received: 28 August 2017 / Revised: 5 October 2017 / Accepted: 25 October 2017 / Published: 28 October 2017
(This article belongs to the Special Issue Video Analysis and Tracking Using State-of-the-Art Sensors)

Abstract

In this paper, we introduce DEEP-SEE, a framework that jointly exploits computer vision algorithms and deep convolutional neural networks (CNNs) to detect, track and recognize, in real time, objects encountered during navigation in outdoor environments. A first component is an object detection technique designed to localize both static and dynamic objects without any a priori knowledge of their position, type or shape. The methodological core of the proposed approach is a novel object tracking method based on two convolutional neural networks trained offline. The key principle consists of alternating between tracking based on motion information and predicting the object location over time based on visual similarity. The tracking technique is validated on standard VOT benchmark datasets and achieves state-of-the-art results while minimizing computational complexity. The DEEP-SEE framework is then integrated into a novel assistive device designed to improve the cognition of visually impaired (VI) people and to increase their safety when navigating crowded urban scenes. The assistive device is validated on a dataset of 30 videos acquired with the help of VI users. The proposed system achieves high accuracy (>90%) and robustness (>90%) scores regardless of the scene dynamics.
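The alternating principle described in the abstract — motion-based tracking verified, and when necessary corrected, by appearance similarity — can be sketched as follows. This is an illustrative stand-in, not the paper's method: the `motion_fn` and `similarity_fn` callables here are hypothetical placeholders for the two offline-trained CNNs, and all names and thresholds are assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax2, bx2) - max(a[0], b[0]))
    iy = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

class AlternatingTracker:
    """Toy tracker alternating between a motion branch and an appearance branch."""

    def __init__(self, init_box, similarity_fn, motion_fn, sim_threshold=0.5):
        self.box = init_box
        self.similarity = similarity_fn   # placeholder for the appearance CNN
        self.motion = motion_fn           # placeholder for the motion branch
        self.sim_threshold = sim_threshold

    def step(self, frame, candidates):
        # 1) Motion branch: predict the new location from frame-to-frame motion.
        predicted = self.motion(frame, self.box)
        # 2) Appearance branch: accept the prediction only if it still looks
        #    like the target.
        if self.similarity(frame, predicted) >= self.sim_threshold:
            self.box = predicted
        else:
            # Motion is unreliable (occlusion, drift): re-localize by picking
            # the candidate region most similar to the target's appearance.
            self.box = max(candidates, key=lambda b: self.similarity(frame, b))
        return self.box
```

In this toy setting, a constant-velocity `motion_fn` and an IoU-based `similarity_fn` are enough to exercise both branches: the tracker follows the motion prediction while it remains visually consistent, and falls back to appearance-based re-localization when it does not.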
Keywords: object detection; tracking and recognition; convolutional neural networks; visually impaired users; wearable assistive device
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Share & Cite This Article

MDPI and ACS Style

Tapu, R.; Mocanu, B.; Zaharia, T. DEEP-SEE: Joint Object Detection, Tracking and Recognition with Application to Visually Impaired Navigational Assistance. Sensors 2017, 17, 2473.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Sensors EISSN 1424-8220, Published by MDPI AG, Basel, Switzerland.