Open Access Article
Sensors 2015, 15(8), 20894-20924; doi:10.3390/s150820894

Reliable Fusion of Stereo Matching and Depth Sensor for High Quality Dense Depth Maps

1 The Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, No. 6 Kexueyuan South Road, Zhongguancun, Haidian District, Beijing 100190, China
2 University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing 100049, China
* Author to whom correspondence should be addressed.
Academic Editor: Gonzalo Pajares Martinsanz
Received: 23 June 2015 / Revised: 7 August 2015 / Accepted: 17 August 2015 / Published: 21 August 2015
(This article belongs to the Special Issue Imaging: Sensors and Technologies)

Abstract

Depth estimation is a classical problem in computer vision that typically relies on either a depth sensor or stereo matching alone. The depth sensor provides real-time estimates in repetitive and textureless regions where stereo matching is not effective, whereas stereo matching obtains more accurate results in richly textured regions and at object boundaries, where the depth sensor often fails. We fuse stereo matching and the depth sensor, exploiting their complementary characteristics to improve depth estimation. Texture information is incorporated as a constraint that restricts each pixel's range of potential disparities and reduces noise in repetitive and textureless regions. Furthermore, a novel pseudo-two-layer model represents the relationship between disparities at different pixels and segments; by treating the information obtained from the depth sensor as prior knowledge, it is more robust to luminance variation. Segmentation is treated as a soft constraint to reduce the ambiguities caused by under- or over-segmentation. On the Middlebury datasets, our method achieves an average error rate of 2.61%, compared to 3.27% for the previous state-of-the-art methods, making it almost 20% more accurate than other "fused" algorithms.
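As a rough illustration of the fusion idea described above (a depth-sensor prior restricting each pixel's disparity search range), the following Python snippet shows a naive sensor-constrained block-matching pass. It is only a minimal sketch of the general concept, not the paper's multiscale pseudo-two-layer model or its fusion-move optimization; the function name fused_block_matching and parameters such as sensor_disp, search_radius, and block are illustrative assumptions.

# Minimal sketch (not the paper's algorithm): fuse a depth-sensor disparity
# prior with block-matching stereo by restricting each pixel's disparity
# search to a window around the sensor estimate. All names and parameters
# (sensor_disp, search_radius, block, ...) are illustrative assumptions.
import numpy as np

def fused_block_matching(left, right, sensor_disp, max_disp=64,
                         search_radius=4, block=5):
    """Winner-takes-all SAD matching, constrained by a sensor prior."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            prior = int(sensor_disp[y, x])
            # Search only near the sensor's estimate when the prior is valid,
            # otherwise fall back to the full disparity range.
            if prior > 0:
                lo = max(0, prior - search_radius)
                hi = min(max_disp, prior + search_radius)
            else:
                lo, hi = 0, max_disp
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(lo, hi + 1):
                if x - d - half < 0:
                    break
                tgt = right[y - half:y + half + 1,
                            x - d - half:x - d + half + 1]
                cost = np.abs(ref.astype(np.float32)
                              - tgt.astype(np.float32)).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

For the reported numbers, the roughly 20% figure follows from the relative reduction in error rate: (3.27 - 2.61) / 3.27 ≈ 0.20.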
Keywords: stereo matching; depth sensor; multiscale pseudo-two-layer model; segmentation; texture constraint; fusion move
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Liu, J.; Li, C.; Fan, X.; Wang, Z. Reliable Fusion of Stereo Matching and Depth Sensor for High Quality Dense Depth Maps. Sensors 2015, 15, 20894-20924.


Sensors EISSN 1424-8220, published by MDPI AG, Basel, Switzerland.