Article

Optimized 3D Street Scene Reconstruction from Driving Recorder Images

by Yongjun Zhang 1,†, Qian Li 1,*,†, Hongshu Lu 2,†, Xinyi Liu 1, Xu Huang 1, Chao Song 1, Shan Huang 1 and Jingyi Huang 1
1 School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
2 Electronic Science and Engineering, National University of Defence Technology, Changsha 410000, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Academic Editors: Diego Gonzalez-Aguilera, Gonzalo Pajares Martinsanz and Prasad S. Thenkabail
Remote Sens. 2015, 7(7), 9091-9121; https://doi.org/10.3390/rs70709091
Received: 13 May 2015 / Revised: 6 July 2015 / Accepted: 13 July 2015 / Published: 17 July 2015
This paper presents an automatic region-detection-based method for reconstructing street scenes from driving recorder images. The driving recorder considered here is a dashboard camera that collects images while the vehicle is moving. Because typical recorders are mounted at the front of a moving vehicle and face forward, the collected data contain an enormous number of moving vehicles, which makes matched points on vehicles and guardrails unreliable. Since the low price, wide use, and extensive shooting coverage of these devices can reduce the cost of reconstructing and updating street scenes, we propose a new method, called the Mask automatic detecting method, to improve structure-from-motion reconstruction results. Note that we define vehicle and guardrail regions as the "mask" in this paper, since the features on them should be masked out to avoid poor matches. After the feature points on the mask are removed, the camera poses and sparse 3D points are reconstructed from the remaining matches. Contrast experiments with typical structure-from-motion (SfM) pipelines, such as Photosynth and VisualSFM, demonstrated that the Mask decreased the root-mean-square error (RMSE) of the pairwise matching results, which led to more accurate recovery of the relative camera poses. Removing the features on the Mask also increased the accuracy of the point clouds by nearly 30%–40% and corrected a failure mode of the typical methods, which reconstruct a single target building multiple times.
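The core filtering step described in the abstract, discarding point correspondences that fall on detected vehicle or guardrail regions before running SfM, can be sketched as follows. This is an illustrative NumPy sketch under our own assumptions; the function and variable names are ours, not from the paper, and the masks stand in for the output of the paper's automatic region detector:

```python
import numpy as np

def filter_masked_matches(kpts1, kpts2, mask1, mask2):
    """Drop correspondences where either keypoint lies inside a mask region.

    kpts1, kpts2 : (N, 2) arrays of matched (x, y) pixel coordinates.
    mask1, mask2 : boolean images, True where a pixel belongs to a detected
                   vehicle/guardrail region to be excluded from SfM.
    """
    xy1 = np.round(kpts1).astype(int)
    xy2 = np.round(kpts2).astype(int)
    # A match is bad if either endpoint falls on a masked pixel.
    bad = mask1[xy1[:, 1], xy1[:, 0]] | mask2[xy2[:, 1], xy2[:, 0]]
    return kpts1[~bad], kpts2[~bad]

# Toy example: a 10x10 image pair where the left half of image 1 is
# covered by a (hypothetical) detected vehicle mask.
mask1 = np.zeros((10, 10), dtype=bool)
mask1[:, :5] = True                      # vehicle region in image 1
mask2 = np.zeros((10, 10), dtype=bool)   # no mask in image 2

k1 = np.array([[2.0, 3.0], [8.0, 8.0]])  # first match lies on the vehicle
k2 = np.array([[2.0, 4.0], [8.0, 7.0]])
f1, f2 = filter_masked_matches(k1, k2, mask1, mask2)
print(len(f1))  # 1 match survives
```

The surviving matches would then be passed to the SfM stage (pairwise relative pose estimation and sparse triangulation) in place of the full match set.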
Keywords: street scene reconstruction; driving recorder; structure from motion; outliers; sparse 3D point clouds; artificial intelligence; classifier
Graphical abstract

MDPI and ACS Style

Zhang, Y.; Li, Q.; Lu, H.; Liu, X.; Huang, X.; Song, C.; Huang, S.; Huang, J. Optimized 3D Street Scene Reconstruction from Driving Recorder Images. Remote Sens. 2015, 7, 9091-9121. https://doi.org/10.3390/rs70709091
