Line-Constrained Camera Location Estimation in Multi-Image Stereomatching
Abstract

Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take snapshots from many camera locations along a defined trajectory (usually uniformly linear or on a regular grid; we assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the location of each snapshot to be known: the disparity of an object between two images depends both on the distance from the camera to the object and on the distance between the two camera positions. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that instead uses dense correspondences, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique both visually, through the rectification of epipolar plane images, and quantitatively, through its effect on the resulting depth estimates. Our approach is a valid alternative to sparse techniques and, thanks to its highly parallelizable nature, still runs in reasonable time on a graphics card.
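The dependence of disparity on both object depth and camera baseline can be sketched with the standard fronto-parallel relation d = f·B/Z (this sketch is illustrative only and not taken from the paper; the focal length, baseline, and depth values below are hypothetical):

```python
def disparity(f_px, baseline, depth):
    """Disparity in pixels of a point at `depth` metres, seen from two
    camera positions separated by `baseline` metres along the trajectory,
    with focal length `f_px` in pixels (pinhole model, fronto-parallel)."""
    return f_px * baseline / depth

f_px = 800.0   # assumed focal length in pixels
depth = 4.0    # assumed object depth in metres

# Disparity grows linearly with the distance between camera positions,
# so an error in the estimated camera locations directly biases depth.
for baseline in (0.02, 0.04, 0.08):
    print(f"baseline {baseline} m -> disparity {disparity(f_px, baseline, depth)} px")
```

Because disparity scales linearly with the baseline, doubling the spacing between snapshot positions doubles the observed disparity for the same depth, which is why accurate camera location estimates are a prerequisite for accurate depth.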
Share & Cite This Article
Donné, S.; Goossens, B.; Philips, W. Line-Constrained Camera Location Estimation in Multi-Image Stereomatching. Sensors 2017, 17, 1939.
Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.