Open Access Article
Sensors 2019, 19(8), 1773; https://doi.org/10.3390/s19081773

Mobile Robot Indoor Positioning Based on a Combination of Visual and Inertial Sensors

M. Gao 1, M. Yu 2, H. Guo 1,* and Y. Xu 3,4
1 Institute of Space Science and Technology, Nanchang University, Nanchang 330031, China
2 College of Computer Information and Engineering, Jiangxi Normal University, Nanchang 330022, China
3 School of Electrical Engineering, University of Jinan, Jinan 250022, China
4 School of Control Science and Engineering, Shandong University, Jinan 250100, China
* Author to whom correspondence should be addressed.
Received: 12 March 2019 / Revised: 1 April 2019 / Accepted: 10 April 2019 / Published: 13 April 2019
(This article belongs to the Section Physical Sensors)
PDF [3635 KB, uploaded 16 April 2019]

Abstract

Multi-sensor integrated navigation technology has been applied to the indoor navigation and positioning of robots. To address the low navigation accuracy and error accumulation of mobile robots that rely on a single sensor, this paper presents an indoor mobile robot positioning method based on a combination of visual and inertial sensors. First, the visual sensor (Kinect) is used to obtain a color image and a depth image, and feature matching is performed with an improved scale-invariant feature transform (SIFT) algorithm. The absolute orientation algorithm is then used to calculate the rotation matrix and translation vector of the robot between two consecutive image frames. An inertial measurement unit (IMU) offers high-frequency updates and rapid, accurate positioning, and can compensate for the Kinect's low update rate and limited precision; it provides three-dimensional data such as acceleration, angular velocity, magnetic field strength, and temperature in real time. The data obtained by the visual sensor are loosely coupled with those obtained by the IMU: the differences between the positions and attitudes output by the two sensors are optimally combined by an adaptive fade-out extended Kalman filter to estimate the errors. Finally, several experiments show that this method significantly improves the indoor positioning accuracy of mobile robots based on visual and inertial sensors.
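As a rough illustration of the visual front end described in the abstract, the sketch below matches SIFT features between two consecutive Kinect frames, lifts the matches to 3-D using the depth images, and solves the absolute orientation problem for the rotation matrix R and translation vector t via the Kabsch (SVD) method. This is a minimal sketch, not the paper's implementation: it uses OpenCV's stock SIFT rather than the authors' improved variant, and the camera intrinsics (fx, fy, cx, cy) are placeholder values.

```python
import cv2
import numpy as np

# Assumed Kinect intrinsics (placeholders, not values from the paper).
fx = fy = 525.0
cx, cy = 319.5, 239.5

def back_project(u, v, depth):
    """Lift pixel (u, v) with depth z (meters) to a 3-D camera-frame point."""
    z = depth[int(v), int(u)]
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def relative_pose(color1, depth1, color2, depth2):
    # SIFT keypoints and descriptors on the grayscale versions of both frames.
    gray1 = cv2.cvtColor(color1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(color2, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(gray1, None)
    k2, d2 = sift.detectAndCompute(gray2, None)

    # Brute-force matching with Lowe's ratio test.
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
            if m.distance < 0.75 * n.distance]

    # Matched 3-D point sets, keeping only pairs with valid depth.
    P = np.array([back_project(*k1[m.queryIdx].pt, depth1) for m in good])
    Q = np.array([back_project(*k2[m.trainIdx].pt, depth2) for m in good])
    valid = (P[:, 2] > 0) & (Q[:, 2] > 0)
    P, Q = P[valid], Q[valid]

    # Absolute orientation (Kabsch): centroid removal + SVD of the
    # cross-covariance, so that P ~ R Q + t maps frame 2 into frame 1.
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Qc.T @ Pc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = P.mean(axis=0) - R @ Q.mean(axis=0)
    return R, t
```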
Keywords: robot positioning; visual sensor; inertial sensor; SIFT algorithm; adaptive fade-out extended Kalman filter
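The loose coupling step can likewise be sketched. The snippet below shows one predict/update cycle of an extended Kalman filter with an adaptive fading factor, in the style of the strong-tracking filter literature; the state layout, the Jacobians F and H, and the fading-factor formula are generic assumptions rather than the paper's exact equations, and the measurement z stands for the Kinect-minus-IMU position/attitude difference described in the abstract.

```python
import numpy as np

def fading_ekf_step(x, P, z, F, H, Q, R, C0):
    """One predict/update cycle of an adaptive fading (strong-tracking) EKF.

    x, P : prior error-state estimate and covariance
    z    : measurement (Kinect pose minus IMU-propagated pose)
    F, H : state-transition and measurement Jacobians
    Q, R : process and measurement noise covariances
    C0   : running estimate of the innovation covariance
    """
    # Fading factor lambda >= 1 inflates the predicted covariance, so the
    # filter forgets stale information when the innovation grows.
    M = H @ F @ P @ F.T @ H.T
    N = C0 - H @ Q @ H.T - R
    lam = max(1.0, np.trace(N) / np.trace(M))

    # Prediction with the faded covariance.
    x_pred = F @ x
    P_pred = lam * (F @ P @ F.T) + Q

    # Standard EKF measurement update on the error state.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```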
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited (CC BY 4.0).
MDPI and ACS Style

Gao, M.; Yu, M.; Guo, H.; Xu, Y. Mobile Robot Indoor Positioning Based on a Combination of Visual and Inertial Sensors. Sensors 2019, 19, 1773.
