Open Access Article
Sensors 2018, 18(4), 1159

PL-VIO: Tightly-Coupled Monocular Visual–Inertial Odometry Using Point and Line Features

Yijia He 1,2,*, Ji Zhao 3, Yue Guo, Wenhao He and Kui Yuan

1 Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 ReadSense Ltd., Shanghai 200040, China
* Author to whom correspondence should be addressed.
Received: 23 March 2018 / Revised: 8 April 2018 / Accepted: 9 April 2018 / Published: 10 April 2018
(This article belongs to the Special Issue Sensor Fusion and Novel Technologies in Positioning and Navigation)
To address the problem of estimating the camera trajectory and building a structural three-dimensional (3D) map from inertial measurements and visual observations, this paper proposes point–line visual–inertial odometry (PL-VIO), a tightly-coupled monocular visual–inertial odometry system that exploits both point and line features. Compared with point features, line features provide significantly more geometric structure information about the environment. To obtain both computational simplicity and a compact representation of a 3D spatial line, Plücker coordinates and the orthonormal representation of the line are employed. To fuse the information from inertial measurement units (IMUs) and visual sensors tightly and efficiently, the states are optimized by minimizing a cost function that combines the pre-integrated IMU error term with the point and line re-projection error terms in a sliding-window optimization framework. Experiments on public datasets demonstrate that PL-VIO, which combines point and line features, outperforms several state-of-the-art VIO systems that use point features only.
Keywords: sensor fusion; visual–inertial odometry; tightly-coupled; point and line features

Figure 1

This is an open access article distributed under the Creative Commons Attribution (CC BY 4.0) License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

MDPI and ACS Style

He, Y.; Zhao, J.; Guo, Y.; He, W.; Yuan, K. PL-VIO: Tightly-Coupled Monocular Visual–Inertial Odometry Using Point and Line Features. Sensors 2018, 18, 1159.

Sensors EISSN 1424-8220, published by MDPI AG, Basel, Switzerland