Open Access Article
Remote Sens. 2015, 7(7), 9091-9121; doi:10.3390/rs70709091

Optimized 3D Street Scene Reconstruction from Driving Recorder Images

1 School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
2 Electronic Science and Engineering, National University of Defence Technology, Changsha 410000, China
† These authors contributed equally to this work.
* Author to whom correspondence should be addressed.
Academic Editors: Diego Gonzalez-Aguilera, Gonzalo Pajares Martinsanz and Prasad S. Thenkabail
Received: 13 May 2015 / Revised: 6 July 2015 / Accepted: 13 July 2015 / Published: 17 July 2015

Abstract

This paper presents an automatic region-detection-based method to reconstruct street scenes from driving recorder images. The driving recorder considered here is a dashboard camera that collects images while the motor vehicle is moving. Because typical recorders are mounted at the front of moving vehicles and face forward, the collected data contain an enormous number of moving vehicles, which can make matched points on vehicles and guardrails unreliable. Because these image data are inexpensive, widely used, and offer extensive shooting coverage, utilizing them can reduce the cost of reconstructing and updating street scenes. We therefore propose a new method, called the Mask automatic detecting method, to improve structure-from-motion reconstruction results. Note that we define vehicle and guardrail regions as the “mask” in this paper, since features on them should be masked out to avoid poor matches. After the feature points in these regions are removed, the camera poses and sparse 3D points are reconstructed from the remaining matches. Comparative experiments against typical structure-from-motion (SfM) reconstruction pipelines, such as Photosynth and VisualSFM, demonstrated that the Mask decreased the root-mean-square error (RMSE) of the pairwise matching results, which led to more accurate recovery of the relative camera poses. Removing the features in the Mask also increased the accuracy of the point clouds by nearly 30%–40% and corrected the tendency of the typical methods to reconstruct a single target building multiple times.
Keywords: street scene reconstruction; driving recorder; structure from motion; outliers; sparse 3D point clouds; artificial intelligence; classifier
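The masking step described in the abstract, dropping feature points that fall inside detected vehicle or guardrail regions before matching, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the box-based mask representation, and the coordinates are all hypothetical, and the region detector and SfM pipeline themselves are not reproduced here.

```python
def filter_keypoints(keypoints, mask_regions):
    """Drop keypoints falling inside any mask region (hypothetical helper).

    keypoints    -- list of (x, y) feature locations in image coordinates
    mask_regions -- list of (x_min, y_min, x_max, y_max) boxes covering
                    detected vehicles and guardrails
    """
    def inside(pt, box):
        x, y = pt
        x0, y0, x1, y1 = box
        return x0 <= x <= x1 and y0 <= y <= y1

    # Keep only keypoints outside every mask box; the remaining points
    # feed the pairwise matching and SfM stages.
    return [pt for pt in keypoints
            if not any(inside(pt, box) for box in mask_regions)]

# Example: one keypoint on the static scene, one on a detected vehicle.
kps = [(10, 10), (50, 60)]
vehicle_boxes = [(40, 40, 80, 80)]
print(filter_keypoints(kps, vehicle_boxes))  # -> [(10, 10)]
```

Camera poses and sparse 3D points are then estimated from matches among the surviving keypoints only, which is what the paper's experiments compare against unmasked SfM baselines.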

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Zhang, Y.; Li, Q.; Lu, H.; Liu, X.; Huang, X.; Song, C.; Huang, S.; Huang, J. Optimized 3D Street Scene Reconstruction from Driving Recorder Images. Remote Sens. 2015, 7, 9091-9121.


Remote Sens. EISSN 2072-4292. Published by MDPI AG, Basel, Switzerland.