Open Access Article
Sensors 2017, 17(11), 2567; doi:10.3390/s17112567

Adaptive Monocular Visual–Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices

Department of Computer Science, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722, Korea
* Author to whom correspondence should be addressed.
Received: 23 August 2017 / Revised: 26 October 2017 / Accepted: 3 November 2017 / Published: 7 November 2017
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)

Abstract

Simultaneous localization and mapping (SLAM) is emerging as a prominent research topic in computer vision and a core technology for next-generation robots, autonomous navigation, and augmented reality. In augmented reality applications, fast camera pose estimation and recovery of true metric scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications on mobile devices. First, the SLAM system is implemented based on a visual–inertial odometry method that combines data from a mobile device camera and an inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, adaptive monocular visual–inertial SLAM is realized through an adaptive execution module that dynamically selects between visual–inertial odometry and optical-flow-based fast visual odometry. Experimental results show that the average translational root-mean-square error of the keyframe trajectory is approximately 0.0617 m on the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when the adaptive policy is applied at different levels. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of the performance improvement achieved by the proposed method.
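
The adaptive execution module described above selects, frame by frame, between full visual–inertial odometry and the cheaper optical-flow-based fast visual odometry. The C++ sketch below illustrates one plausible form of such a selection policy; the class name, thresholds, and switching criterion (inter-frame motion and tracked-feature count) are illustrative assumptions for exposition, not the authors' implementation.

```cpp
// Minimal sketch of an adaptive execution policy that switches between full
// visual-inertial odometry (VIO) and a lighter optical-flow-based tracker.
// All names, thresholds, and the selection criterion are assumptions made
// for illustration, not the method described in the paper.
#include <iostream>

enum class TrackerMode { VisualInertialOdometry, OpticalFlowFast };

struct FrameStats {
    double interFrameMotion;   // e.g., mean optical-flow magnitude in pixels
    int    trackedFeatures;    // features successfully tracked from the last frame
};

class AdaptiveExecutionModule {
public:
    AdaptiveExecutionModule(double motionThresh, int minFeatures)
        : motionThresh_(motionThresh), minFeatures_(minFeatures) {}

    // Use the cheaper optical-flow tracker only when motion is small and
    // tracking is stable; otherwise fall back to full VIO for robustness.
    TrackerMode select(const FrameStats& s) const {
        if (s.interFrameMotion < motionThresh_ && s.trackedFeatures >= minFeatures_)
            return TrackerMode::OpticalFlowFast;
        return TrackerMode::VisualInertialOdometry;
    }

private:
    double motionThresh_;
    int    minFeatures_;
};

int main() {
    AdaptiveExecutionModule policy(/*motionThresh=*/2.0, /*minFeatures=*/80);
    FrameStats calm{1.2, 120};   // slow motion, many tracked features
    FrameStats rapid{6.5, 40};   // fast motion, tracking degrading

    std::cout << (policy.select(calm) == TrackerMode::OpticalFlowFast
                      ? "calm frame  -> optical-flow fast tracker\n"
                      : "calm frame  -> full VIO\n");
    std::cout << (policy.select(rapid) == TrackerMode::OpticalFlowFast
                      ? "rapid frame -> optical-flow fast tracker\n"
                      : "rapid frame -> full VIO\n");
    return 0;
}
```

In this sketch the policy trades the speed of the light tracker for robustness: whenever inter-frame motion is large or the number of tracked features drops, it falls back to full visual–inertial odometry.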
Keywords: monocular simultaneous localization and mapping; visual–inertial odometry; optical flow; adaptive execution; mobile device

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Piao, J.-C.; Kim, S.-D. Adaptive Monocular Visual–Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices. Sensors 2017, 17, 2567.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
