Open Access Article
Sensors 2017, 17(10), 2164; https://doi.org/10.3390/s17102164

Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter

1 Department of Electrical, Electronic and Computer Engineering, University of Pretoria, Pretoria 0028, South Africa
2 Department of Computer Science, City University of Hong Kong, Hong Kong, China
* Author to whom correspondence should be addressed.
Received: 7 August 2017 / Revised: 1 September 2017 / Accepted: 5 September 2017 / Published: 21 September 2017

Abstract

A single sensor alone cannot provide an accurate pose estimate for a device. This paper presents the fusion of a six-degrees-of-freedom (6-DoF) inertial sensor, comprising a 3-axis accelerometer and a 3-axis gyroscope, with vision data to obtain low-cost, accurate pose estimation for an autonomous mobile robot. For vision, a monocular object detection pipeline integrating the speeded-up robust features (SURF) and random sample consensus (RANSAC) algorithms was used to recognize a sample object in a series of captured images. Unlike conventional methods that depend on point tracking, RANSAC iteratively estimates the parameters of a mathematical model from a set of observed data containing outliers. SURF contributes further accuracy through its ability to find interest points (features) under different viewing conditions using the Hessian matrix. This approach is proposed because of its simple implementation, low cost, and improved accuracy. An extended Kalman filter (EKF) fuses the data from the inertial sensors and the camera to estimate the position and orientation of the mobile robot. All sensors were mounted on the mobile robot to achieve accurate localization. An indoor experiment was carried out to validate and evaluate the performance. Experimental results show that the proposed method is computationally fast, reliable, and robust, and can be considered for practical applications. Performance was verified against ground-truth data using root mean square errors (RMSEs).
Keywords: pose estimation; mobile robot; inertial sensors; vision; object; extended Kalman filter
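
Since this page carries only the abstract, the two sketches below are illustrative rather than the authors' implementation. The first shows the kind of SURF-plus-RANSAC object detection the abstract describes, using OpenCV's contrib module (SURF is patented and not part of the core library); the image file names and the Hessian threshold of 400 are placeholder assumptions.

```python
# Sketch: monocular object detection with SURF features and RANSAC
# homography fitting, in the spirit of the pipeline described above.
# Requires opencv-contrib-python; file names are placeholders.
import cv2
import numpy as np

object_img = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)  # reference object
scene_img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)    # current camera frame

# Detect interest points and descriptors; the Hessian threshold controls
# how strong a blob response must be before a keypoint is kept.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp_obj, des_obj = surf.detectAndCompute(object_img, None)
kp_scene, des_scene = surf.detectAndCompute(scene_img, None)

# Match descriptors and apply Lowe's ratio test to discard ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des_obj, des_scene, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

# RANSAC iteratively fits a homography to the matched points while
# rejecting outliers that do not agree with the model.
src = np.float32([kp_obj[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_scene[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
```

The inlier mask returned by findHomography marks which matches survived RANSAC, and H can then be used to locate the object in the scene. The second sketch is a generic EKF predict/update cycle for IMU/vision fusion; the paper's exact state vector and Jacobians are not given on this page, so the motion model f, measurement model h, and their Jacobians F and H are assumed to be defined elsewhere for the chosen state.

```python
# Sketch: one EKF predict/update cycle for IMU/vision fusion.
# F and H are assumed to be the Jacobians of the motion and measurement
# models, already evaluated at the current estimate.
import numpy as np

def ekf_step(x, P, u, z, f, h, F, H, Q, R):
    """x: state estimate, P: covariance, u: IMU input, z: vision measurement,
    f/h: nonlinear motion/measurement models, F/H: their Jacobians,
    Q/R: process/measurement noise covariances."""
    # Predict with the IMU-driven motion model.
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q

    # Update with the vision measurement.
    y = z - h(x_pred)                    # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```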

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

MDPI and ACS Style

Alatise, M.B.; Hancke, G.P. Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter. Sensors 2017, 17, 2164.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
