Open Access Article

The Model and Training Algorithm of Compact Drone Autonomous Visual Navigation System

Department of Computer Science, Sumy State University, 40007 Sumy, Ukraine
* Authors to whom correspondence should be addressed.
Received: 4 November 2018 / Revised: 20 December 2018 / Accepted: 22 December 2018 / Published: 28 December 2018
(This article belongs to the Special Issue Data Stream Mining and Processing)
Abstract

Trainable visual navigation systems based on deep learning show potential for robustness to variations in onboard camera parameters and to challenging environments. However, a deep model requires substantial computational resources and a large labelled training set for successful training. Implementing autonomous navigation and training-based fast adaptation to a new environment on a compact drone is therefore a complicated task. This article describes an original model and training algorithms adapted to a limited volume of labelled training data and constrained computational resources. The model consists of a convolutional neural network for visual feature extraction, an extreme learning machine for estimating the position displacement, and a boosted information-extreme classifier for obstacle prediction. A growing sparse-coding neural gas algorithm is proposed for unsupervised training of the convolutional filters, together with supervised learning algorithms for constructing the decision rules, using a simulated annealing search algorithm for fine-tuning. The use of a complex criterion for parameter optimization of the feature extractor model is also considered. The resulting approach reconstructs trajectories more accurately than the well-known ORB-SLAM: for sequence 7 of the KITTI dataset, the translation error is reduced by nearly 65.6% at a frame rate of 10 frames per second. Moreover, testing on an independent outdoor TUM sequence yields a translation error not exceeding 6% and a rotation error not exceeding 3.68 degrees per 100 m. Testing was carried out on a Raspberry Pi 3+ single-board computer.
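To illustrate one building block named in the abstract, the displacement regressor can be sketched as a minimal extreme learning machine in NumPy: random, fixed hidden-layer weights and a closed-form least-squares solution for the output weights. This is a generic ELM sketch under stated assumptions, not the authors' implementation; the function names, the `tanh` activation, and the hyperparameter `n_hidden` are illustrative choices.

```python
import numpy as np

def elm_fit(X, Y, n_hidden=64, seed=0):
    """Train an extreme learning machine: random hidden layer,
    output weights solved in closed form via the pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # fixed random input weights
    b = rng.normal(size=n_hidden)                # fixed random biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                 # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Map inputs through the fixed hidden layer and learned output weights."""
    return np.tanh(X @ W + b) @ beta
```

Because only the output weights are learned, and in closed form, training is fast and cheap, which is what makes this family of models attractive on a constrained single-board computer.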
Keywords: navigation; visual odometry; convolutional neural network; neural gas; information criterion; extreme learning

This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Moskalenko, V.; Moskalenko, A.; Korobov, A.; Semashko, V. The Model and Training Algorithm of Compact Drone Autonomous Visual Navigation System. Data 2019, 4, 4.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Data EISSN 2306-5729, published by MDPI AG, Basel, Switzerland.