Depth Image Super Resolution Based on Edge-Guided Method

Depth image super-resolution (SR) is a technique that reconstructs a high-resolution (HR) depth image from a low-resolution (LR) depth image. Its purpose is to obtain HR details to meet the needs of various applications in computer vision. Conventional depth image SR methods often cause the edges in the final HR image to be blurred or ragged. To solve this problem, an edge-guided method for depth image SR is presented in this paper. To obtain high-quality edge information, a pair of sparse dictionaries is applied to reconstruct the edges of the depth image. Then, with the guidance of these high-quality edges, the depth image is interpolated using a modified joint bilateral filter. The edge-guided method preserves the sharpness of edges and effectively avoids generating blurry and ragged edges when SR is performed. Experiments showed that the proposed method obtains better results under both subjective and objective evaluation, and its reconstruction performance is superior to that of conventional depth image SR methods.


Introduction and Related Works
In recent years, with the rapid development of computer vision technology, the depth information of scenes has become increasingly essential for many applications, such as 3D reconstruction [1,2], augmented reality [3], and robot navigation [4]. Some active sensors [5], such as the Kinect and PMD (Photonic Mixer Device) cameras, can easily acquire the depth information of a scene, which is then used to create a depth image. However, due to theoretical and practical limitations, the achievable resolution of any depth imaging device is usually too low to meet the needs of many practical applications. How to improve depth image resolution is therefore an urgent problem. One way to solve it is to apply sophisticated vision sensors; however, these sensors are usually very expensive. Another way is to use a super-resolution (SR) algorithm. Compared with expensive sensors, an SR algorithm, which does not rely on hardware configuration, is evidently a low-cost approach. Inspired by the idea of color image SR, researchers have proposed many promising depth image SR methods [6-8] in recent years, which can improve the resolution of depth images effectively.
According to the reference information used, depth SR methods can be divided into four main categories: (1) interpolation; (2) SR from LR depth image frames; (3) SR through fusing a depth image and an HR color image; and (4) example-based SR.
(1) Interpolation: There are many analytic methods for image interpolation, including nearest-neighbor, bilinear, and cubic interpolation [9]. However, when interpolation is performed with a large factor, these analytic methods can cause image edges to become ragged and blurred because of the large difference in pixel values across edges. To solve this problem, Pang [10] presents an SR method based on bilinear interpolation and an adaptive sharpening filter, which can effectively suppress edge blurring. Ning [11] proposes an improved cubic interpolation algorithm, which uses cubic interpolation to compute the pixels in smooth areas and edge-vector interpolation to compute the pixels near edges. Xie [8] presents an edge-guided approach: sharp HR edges are first reconstructed through a Markov random field, and an SR depth image is then interpolated under the guidance of these edges. With the help of this edge-guided information, the sharpness of edges can be well preserved in the final SR image. Some bilateral filtering methods [12,13] can also preserve edges well.
(2) SR from LR depth image frames: The HR image can be reconstructed by fusing the complementary information among several LR depth images captured from the same scene. Schuon [14] uses an optimization framework with a bilateral total-variation regularization term to solve such an SR problem. Rajagopalan [15] constructs an energy function through a Markov random field and minimizes it to obtain the HR image. Ismaeil [16] proposes a dynamic-scene SR method for depth images to deal with non-rigid body motion between LR images. Gevrekci [17] uses a convex projection method to construct an imaging model of the depth image sequence for depth image SR.
(3) SR through fusing a depth image and an HR color image: Most commercial depth cameras can capture a depth image and a color image of the same scene simultaneously, and usually the resolution of the color image is higher than that of the depth image. Thus, the HR depth image can be reconstructed with the help of the HR color image. Ferstl [18] calculates an anisotropic total-variation diffusion tensor from the HR color image, and this tensor is then used to reconstruct the SR depth image. Yang [19] combines a bilateral filter with a median filter to adaptively compute weights from the HR image, and the depth image is then interpolated according to these weights. Lo [20] proposes a depth image SR method based on a joint trilateral filter, which considers not only the distance weight but also the pixel-value and gradient weights.
(4) Example-based SR: This approach learns the transformation between LR and HR images from an example database, and an HR depth image can then be reconstructed through the learned transformation when an LR depth image is input. Yang [21] uses a sparse coding method to learn the transformation, so that HR image patches can be represented by a sparse linear combination of HR dictionary atoms. Zeyde [6] modifies this sparse coding method and uses K-SVD [22] and orthogonal matching pursuit (OMP) [23] to train an LR-HR dictionary pair. Xie [24] proposes a pairwise dictionary training method with local coordinate constraints for depth image SR. Timofte [7] clusters the dictionary atoms into sub-dictionaries using the K-NN algorithm, so that HR patches can be represented by the most suitable sub-dictionary. Kim [25] presents an accurate color image SR method based on VGG-NET [26], which can also be applied to the depth image SR problem.
Although the above methods can effectively reconstruct SR depth images from LR inputs, several problems remain. The methods of the first category can cause discontinuous regions to become jagged and blurred. The methods of the second category are subject to the rigorous assumption that adjacent images have only slight movements in the plane parallel to the focal plane of the camera; this assumption is usually difficult to satisfy in practical scenarios. Depth image SR based on fusing a depth image and an HR color image first requires an HR color image that is fully registered with the depth image. The example-based method depends strongly on its training database; that is, differences between training databases may have a great effect on experimental results.
To address these problems, an edge-guided method for depth image SR is presented in this study. We first train a pair of sparse dictionaries to recover high-quality edge information, and then an HR depth image is interpolated under the guidance of these high-quality edges. This method is a hybrid of the example-based and interpolation-based approaches, making full use of the advantages of both. In this way, the proposed method can achieve improved results that are comparable to current state-of-the-art methods. Our approach needs neither strict assumptions nor the assistance of an HR color image, so it can be used to improve depth image resolution conveniently. At the same time, it not only achieves the goal of preserving sharp edges in depth image SR, but can also produce better color image SR.
The remainder of this paper is organized as follows. A detailed overview of the proposed method is presented in Section 2. Section 3 reports and discusses the results of the experiments. Finally, Section 4 concludes the paper.

Proposed Method
In this section, we first present the general steps of our work. Then, the way we build the LR and HR edge dictionaries is discussed. Afterward, we continue with the details of how to interpolate the HR image with a joint bilateral filter.
To keep blurred and jagged edges out of the final SR result, we present a novel depth image SR method, which employs a joint bilateral filter based on edge guidance for LR-to-HR reconstruction. The general steps of the proposed depth image SR method are summarized in Figure 1.
To avoid the computational complexity caused by the size difference between the LR image and the final HR image, we first use a simple interpolation algorithm (bicubic interpolation) to magnify the input LR image I_l to the same size as the final HR image. However, interpolation can cause blurred and jagged effects near edges, so we apply a shock filter [27] to sharpen the magnified image before further processing.
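This preprocessing can be sketched as follows. The sketch assumes a standard discrete shock filter of the form I_t = −sign(ΔI)·|∇I|, and uses a nearest-neighbor `np.kron` upscale as a stand-in for bicubic magnification; neither is the authors' exact implementation:

```python
import numpy as np

def upscale(img, factor):
    # Nearest-neighbour stand-in for bicubic magnification
    # (a real pipeline would use e.g. scipy.ndimage.zoom(img, factor, order=3)).
    return np.kron(img, np.ones((factor, factor)))

def shock_filter(img, n_iter=10, dt=0.1):
    """Sharpen blurred edges: a discretization of I_t = -sign(lap(I)) * |grad(I)|.
    Intensities move toward the nearer side of each edge, steepening the step."""
    I = img.astype(float).copy()
    for _ in range(n_iter):
        gy, gx = np.gradient(I)
        grad_mag = np.hypot(gx, gy)
        lap = (np.roll(I, 1, 0) + np.roll(I, -1, 0) +
               np.roll(I, 1, 1) + np.roll(I, -1, 1) - 4 * I)
        I -= dt * np.sign(lap) * grad_mag
    return I
```

Applied to a blurred step edge, the filter reduces the number of intermediate-valued pixels, which is the sharpening effect exploited before edge extraction.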
Edges provide essential structural information describing the objects in the scene. Thus, we first focus on recovering HR edges before reconstructing the whole image. As illustrated in Figure 1, an LR edge map E_l is extracted from the preprocessed LR image. The edge map preserves only the primary structural information and abandons the widespread smooth areas, making it highly sparse. Thus, we choose a sparse coding method to recover the HR edge map E_h.
After obtaining the HR edge map E_h, the depth image I_h is interpolated by a modified joint bilateral filter under the guidance of this high-quality edge map. The bilateral filter not only preserves edge sharpness but also further suppresses noise.
From the above introduction, the proposed method mainly includes two important parts: (1) edge recovery using a sparse coding method; and (2) edge-guided depth interpolation using a bilateral filter. The details of these two parts are discussed in the following subsections.

Edge Recovery Using Sparse Coding
In this section, we first present some notation for our work. Then, the way we build the LR and HR dictionaries for edge map recovery is discussed.

Sparse Dictionary Training
The LR and HR images are represented as z_l ∈ R^{N_l} and y_h ∈ R^{N_h}, where N_h = s^2 · N_l and s > 1 is an integer scale-up factor. The blur operator is denoted by H : R^{N_h} → R^{N_h}, and the decimation operator for a factor s is denoted by D : R^{N_h} → R^{N_l}. The acquisition model describing how an LR image is generated from an HR image is:

z_l = D H y_h + v,  (1)

where v is additive noise in the acquisition process. Given z_l, the problem is to find ŷ ∈ R^{N_h} such that ŷ ≈ y_h, i.e., such that ||ŷ − y_h||_2 tends to zero. To avoid the complexities caused by the different resolutions of z_l and y_h, it is assumed that the image z_l is scaled up by a simple interpolation operator Q : R^{N_l} → R^{N_h} (e.g., bicubic interpolation) that fills in the missing pixels between the original pixels of the input LR image. The scaled-up image is denoted by y_l and satisfies the relation:

y_l = Q z_l.  (2)

The reconstruction problem is now cast as processing y_l ∈ R^{N_h} to produce a result ŷ_h ∈ R^{N_h} that is as close as possible to the original HR image. The algorithm we propose operates on patches extracted from y_l, aiming to estimate the corresponding patches of y_h. Let p_k = R_k y be an image patch of size n × n, centered at location k and extracted from the image y by the linear operator R_k; a stride d is used for spatially shifting the patch locations. Hence, the LR and HR patches are extracted as:

p_k^l = R_k y_l,  p_k^h = R_k y_h.  (3)

It is further assumed that p_k^l and p_k^h can be represented sparsely by the same coefficient vector q_k over the dictionary pair A_l and A_h, respectively, namely:

p_k^l ≈ A_l q_k,  p_k^h ≈ A_h q_k.  (4)

To acquire such a dictionary pair A_l and A_h, we apply the joint dictionary training method proposed in ref. [7].
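The acquisition model z_l = DHy_h + v and the scale-up operator Q can be illustrated with the following sketch; the Gaussian kernel used for H, the nearest-neighbor stand-in for Q, and all parameter values are assumptions for illustration only:

```python
import numpy as np

def gaussian_blur(img, sigma=1.0, radius=2):
    # Separable Gaussian blur: a simple stand-in for the blur operator H.
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)
    return out

def decimate(img, s):
    # Decimation operator D for an integer factor s.
    return img[::s, ::s]

def acquire_lr(y_h, s, noise_sigma=0.0, rng=None):
    """Acquisition model z_l = D H y_h + v."""
    rng = rng or np.random.default_rng(0)
    z = decimate(gaussian_blur(y_h), s)
    return z + noise_sigma * rng.standard_normal(z.shape)

def scale_up(z_l, s):
    # Interpolation operator Q (nearest-neighbour stand-in for bicubic).
    return np.kron(z_l, np.ones((s, s)))
```

Note how the scaled-up image y_l = Q z_l lives back in R^{N_h}, so LR and HR patches can be extracted on the same grid.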
For the LR dictionary A_l, the K-SVD dictionary training procedure [22] is applied to the LR patches {p_k^l}, resulting in the dictionary A_l and the sparse coefficients {q_k}:

A_l, {q_k} = argmin_{A_l, {q_k}} Σ_k || p_k^l − A_l q_k ||_2^2  subject to  || q_k ||_0 ≤ L,  (5)

where || · ||_0 is the l0 pseudo-norm, used to count the nonzero entries of the vector q_k, and L is a constant that controls sparsity. The next step is the construction of the HR dictionary. Recall the assumption that each HR patch p_k^h can be approximated as p_k^h ≈ A_h q_k. The dictionary A_h is therefore sought such that this approximation is as exact as possible, i.e.,

A_h = argmin_{A_h} Σ_k || p_k^h − A_h q_k ||_2^2 = argmin_{A_h} || P_h − A_h Q ||_F^2,  (6)

where the matrix P_h contains the HR training patches {p_k^h} as its columns and, similarly, Q contains the coefficient vectors {q_k} as its columns, and || · ||_F is the Frobenius norm [28]. The solution of this least-squares problem is given in closed form (given that Q has full row rank) by:

A_h = P_h Q^T (Q Q^T)^{-1} = P_h Q^+.  (7)

The flow of training the dictionary pair is summarized in Algorithm 1.

Algorithm 1. Dictionary pair training.
Input: A set of HR training images {y_h^j}.
Output: LR-HR dictionary pair {A_l, A_h}.
Step 1. Construct the training set: use the scale-down operator to construct LR images from the HR training images, and extract pairs of matching patches that form the training database {p_k^h, p_k^l}.
Step 2. LR dictionary training: apply the K-SVD dictionary training procedure [22] to the LR patches {p_k^l}, resulting in the LR dictionary A_l; a side product is the sparse representation coefficient vectors {q_k} corresponding to the training patches.
Step 3. HR dictionary training: the HR dictionary A_h is computed from the coefficient vectors {q_k} to match the corresponding LR dictionary.
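The closed-form HR dictionary update A_h = P_h Q^T (Q Q^T)^{-1} can be checked on toy data; the dimensions below are illustrative, and the coefficient matrix is dense here for simplicity (in practice the columns of Q are sparse):

```python
import numpy as np

rng = np.random.default_rng(42)

n, m, K = 16, 8, 100                    # patch dim, dictionary atoms, patches
Q = rng.standard_normal((m, K))         # coefficient vectors as columns
A_h_true = rng.standard_normal((n, m))  # "ground truth" HR dictionary
P_h = A_h_true @ Q                      # HR training patches as columns

# Least-squares solution A_h = P_h Q^T (Q Q^T)^{-1}, i.e. P_h times the
# pseudo-inverse of Q, valid when Q has full row rank.
A_h = P_h @ Q.T @ np.linalg.inv(Q @ Q.T)

assert np.allclose(A_h, A_h_true)       # exact recovery in the noiseless case
```

With real (noisy, approximate) sparse codes the recovery is least-squares optimal rather than exact; `np.linalg.pinv(Q)` is the numerically safer equivalent.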

Edge Map Recovery
Once we have the LR-HR dictionary pair {A_l, A_h}, the high-quality edge map E_h can be represented by a sparse linear combination of HR dictionary atoms. Before starting reconstruction, we first process the input LR image I_l to obtain an LR edge map E_l. The process can be divided into the following three steps:
(1) The input image I_l is interpolated to the same size as the desired HR image using the bicubic interpolation algorithm, producing an LR image Î_l.
(2) The shock filter [27] is applied to suppress the zigzag effect produced by the up-sampling interpolation.
(3) The Canny operator is used to extract the edge map E_l from Î_l.
Then, the HR edge map E_h can be reconstructed from E_l using the LR-HR dictionary pair {A_l, A_h}. The process is described in Algorithm 2.
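Steps (1)-(3) above can be sketched as follows; this is a simplified stand-in in which gradient-magnitude thresholding replaces the Canny operator, the shock filter step is only marked by a comment, and the threshold value is an assumption:

```python
import numpy as np

def extract_lr_edge_map(I_l, s, threshold=0.25):
    """Steps (1)-(3): magnify to target size, sharpen, then edge-detect."""
    # (1) magnify to the HR grid (nearest-neighbour stand-in for bicubic)
    I_up = np.kron(I_l, np.ones((s, s)))
    # (2) a real pipeline would apply the shock filter [27] to I_up here
    # (3) simple edge detection: gradient magnitude above a relative threshold
    #     (stands in for the Canny operator)
    gy, gx = np.gradient(I_up)
    mag = np.hypot(gx, gy)
    return (mag > threshold * mag.max()).astype(np.uint8)
```

On a synthetic depth step, the detected edge pixels cluster around the true depth discontinuity, which is all the sparse-coding stage needs as input.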

Algorithm 2. HR edge map recovery.
Input: LR-HR dictionary pair {A_l, A_h} and LR edge map E_l.
Output: High-quality edge map E_h.
Step 1. Extract patches {b_k^l} from the edge map E_l.
Step 2. Represent each patch b_k^l over the atoms of the LR dictionary A_l; the side product is the corresponding sparse coefficient vectors {c_k}.
Step 3. Multiply the obtained sparse coefficients {c_k} by the HR dictionary A_h to obtain the HR patches {b_k^h}.
Step 4. Construct the high-quality edge map E_h by merging these HR patches {b_k^h}; the overlapping regions of image patches are processed by the method of Zeyde [6].
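The coding step of Algorithm 2 can be sketched with a minimal orthogonal matching pursuit (OMP) coder; this is a simplified textbook OMP, not the authors' implementation, and the dictionary sizes in the test are illustrative:

```python
import numpy as np

def omp(A, b, L):
    """Orthogonal matching pursuit: sparse-code b over dictionary A, <= L atoms."""
    residual, support = b.copy(), []
    q = np.zeros(A.shape[1])
    for _ in range(L):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        # Least-squares fit over the selected atoms
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        q[:] = 0
        q[support] = coef
        residual = b - A @ q
        if np.linalg.norm(residual) < 1e-10:
            break
    return q

def recover_hr_patch(A_l, A_h, b_l, L=3):
    # Steps 2-3 of Algorithm 2: code the LR patch, reuse coefficients with A_h.
    c = omp(A_l, b_l, L)
    return A_h @ c
```

The key point is that the coefficients c_k are computed once, on the LR dictionary, and then simply re-applied to the HR dictionary.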

Edge-Guided Depth Interpolation
In this section, we first introduce some notation used during interpolation. Then, the method of discriminating the pixel distribution is discussed.

Modified Joint Bilateral Filter
After obtaining the HR edge map E_h, the HR depth image I_h can be interpolated through a modified joint bilateral filter under the guidance of E_h. For each pixel p in the target HR depth image I_h, its value is interpolated from a local neighborhood of the LR image. The spatial weight is a zero-mean Gaussian kernel with standard deviation σ, which weights the correlation of the different pixels in the neighborhood:

f_s(x) = exp(−x^2 / (2σ^2)),  (8)

and the interpolated value is given by:

I_h(p) = (1 / k_p) Σ_{q ∈ N(p)} f_s(||p↓ − q↓||) f_r(E_h, p, q) I_l(q↓),  (9)

where N(p) is an s × s neighborhood window centered at pixel p; p↓ and q↓ denote the pixel coordinates in the LR depth image I_l corresponding to pixels p and q, and only integer coordinates are considered; and k_p = Σ_{q ∈ N(p)} f_s(||p↓ − q↓||) f_r(E_h, p, q) is a normalizing factor. f_r(·) is a binary indicator, which determines whether or not two pixels are on the same side of the edge:

f_r(E_h, p, q) = 1 if pixels p and q are on the same side of E_h, and 0 otherwise.

The concrete form of f_r(·) is obtained by discriminating the distribution of pixels p and q.
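The edge-guided interpolation can be sketched as below. This is a simplified brute-force version: the same-side test only checks the line segment between the two pixels for edge pixels (the paper refines this with Algorithm 3), LR coordinates are taken by integer division rather than restricted to exact integer positions, and the window size and σ follow the paper's settings:

```python
import numpy as np

def no_edge_between(E_h, p, q):
    """Simplified f_r: 1 if no edge pixel lies on the segment p-q."""
    n = max(abs(p[0] - q[0]), abs(p[1] - q[1])) + 1
    rr = np.linspace(p[0], q[0], n).round().astype(int)
    cc = np.linspace(p[1], q[1], n).round().astype(int)
    return 0 if E_h[rr, cc].any() else 1

def edge_guided_upsample(I_l, E_h, s, win=7, sigma=0.5):
    """Modified joint bilateral filter guided by the HR edge map E_h."""
    H, W = E_h.shape
    I_h = np.zeros((H, W))
    half = win // 2
    for pr in range(H):
        for pc in range(W):
            num = den = 0.0
            for qr in range(max(0, pr - half), min(H, pr + half + 1)):
                for qc in range(max(0, pc - half), min(W, pc + half + 1)):
                    fr = no_edge_between(E_h, (pr, pc), (qr, qc))
                    d = np.hypot((pr - qr) / s, (pc - qc) / s)  # LR-grid distance
                    w = np.exp(-d**2 / (2 * sigma**2)) * fr
                    num += w * I_l[qr // s, qc // s]
                    den += w
            # Edge pixels get no same-side neighbours; fall back to the LR value
            I_h[pr, pc] = num / den if den > 0 else I_l[pr // s, pc // s]
    return I_h
```

Because f_r zeroes out every neighbor on the far side of an edge, the interpolated values never mix the two depth levels, which is exactly what keeps the edge sharp.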

Discrimination of Pixel Distribution
First, the set C_e is used to store the pixels on the edge, and the pixels on the line segment between pixels p and q are stored in a set L. Pixels p and q are on the same side of the edge if the intersection of the sets C_e and L is empty, as shown in Figure 2a. When the intersection of C_e and L is not empty, the distribution of pixels p and q may have two situations: p and q are not on the same side of the edge in Figure 2b, but they are on the same side in Figure 2c. In this situation, we divide each neighborhood window into several sets according to the edge. The process is summarized in Algorithm 3.
As shown in Figure 2, the white lines represent edge pixels, the whole black portion is the area to be divided, and an image patch is area-divided based on the connectivity of the black area. In addition, some special edge curve forms need to be stated clearly. As shown in Figure 3, if the edge curve does not traverse the entire image patch, we treat this as a special form of connectivity, namely one where the interior space enclosed by the edge pixels is zero. The details of the algorithm are as follows.
Algorithm 3. Area division of a neighborhood window.
Input: An image patch A with edge pixels and the set C_e.
Output: Pixel sets C_i (i = 1, 2, ..., n), where i is the index of the sets and n is the total number of sets.
Step 1. Choose an initial pixel r randomly from A; subsequent pixels are visited sequentially based on their coordinates.
Step 2. If r ∉ C_e, assign it to set C_1; if r ∈ C_e, return to Step 1.
Step 3. Examine the pixels adjacent to the newly added pixels of set C_1; if an adjacent pixel does not belong to C_e, add it to C_1.
Step 4. Repeat Step 3 until set C_1 no longer changes.
Step 5. Judge the remaining pixels by the same method as for C_1.
Step 6. Area A is thus divided into the pixel sets C_i (i = 1, 2, ..., n).
After determining the distribution of pixels, it is easy to discriminate whether pixels p and q are on the same side of the edge: they are on the same side when they belong to the same pixel set, and otherwise they are not. Once the kernel functions of the bilateral filter are determined, the HR depth image can be interpolated using Equation (9). During interpolation, the Gaussian kernel f_s(·) also suppresses some noise in the depth values. In addition, with the guidance provided by the indicator f_r(·), only pixels on the same side of the edge are considered, so that edges are well preserved.
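Algorithm 3 is essentially a region-growing (connected-component) labeling of the non-edge pixels; a minimal sketch follows, where the use of 4-connectivity is an assumption:

```python
from collections import deque

import numpy as np

def divide_area(edge_map):
    """Algorithm 3: split a window into pixel sets separated by edge pixels.
    Returns a label array: 0 for edge pixels, 1..n for the connected sets C_i."""
    H, W = edge_map.shape
    labels = np.zeros((H, W), dtype=int)
    n = 0
    for r in range(H):
        for c in range(W):
            if edge_map[r, c] or labels[r, c]:
                continue                      # Step 2: skip edge/visited pixels
            n += 1                            # start a new set C_n
            queue = deque([(r, c)])
            labels[r, c] = n
            while queue:                      # Steps 3-4: grow until stable
                y, x = queue.popleft()
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < H and 0 <= nx < W
                            and not edge_map[ny, nx] and not labels[ny, nx]):
                        labels[ny, nx] = n
                        queue.append((ny, nx))
    return labels

def same_side(labels, p, q):
    # f_r: pixels are on the same side iff they belong to the same set.
    return int(labels[p] == labels[q] and labels[p] != 0)
```

With the labels precomputed per window, each f_r evaluation reduces to a constant-time comparison.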

Experiments and Analysis
In this section, we analyze the performance of the proposed depth image SR method and benchmark it through quantitative and qualitative comparisons with other state-of-the-art methods. All experiments were run in the same experimental environment.

Test Environment and Parameter Setting
In our experiments, the programming tool was MATLAB (v.2016a) [29], and the test environment was as follows. The processor was an Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40 GHz, and the computer memory size was 64.0 GB. Multithreading was used in the experiments; the proposed algorithm supports GPU computing, but it was not used. Test images were taken from the Middlebury Stereo database [30,31]. Some parameters were selected based on the smallest Root Mean Square Error (RMSE). We calculated the average RMSE of 10 test images by varying the size of the image patches from n × n = 3 × 3 to n × n = 13 × 13 per experiment; the results are depicted in Figure 4a. By comparison, we chose n × n = 9 × 9 as the image patch size. Similarly, we compared the stride d of patch selection in Figure 4b, and the stride was determined to be 2. The size of the neighborhood window was s × s = 7 × 7 when the joint bilateral filter was performed; the reliability of this value has been confirmed in [8]. The standard deviation was σ = 0.5 for f_s(·) in Equation (8). Dictionaries were trained using the database from Yang [32], which consists of 100,000 patches extracted from 30 training images.

Experimental Results and Comparative Analysis
To compare the proposed method quantitatively, we chose the RMSE, the Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity (SSIM), and the Percent of Error (PE) [8] to evaluate the experimental results. Tables 1-4 show the experimental results of 10 test images from the Middlebury Stereo database using different SR methods. These methods included: Neighbor Embedding with Locally Linear Embedding (NE + LLE) [33], Neighbor Embedding with Least Squares (NE + LS) [34], Neighbor Embedding with Non-Negative Least Squares (NE + NNLS) [35], Global Regression (GR) and Anchored Neighborhood Regression (ANR) [36], Adjusted Anchored Neighborhood Regression for Fast Super-Resolution (AANR) [7], Accurate Image Super-Resolution Using Very Deep Convolutional Networks (CNN) [25], the sparse coding method of Yang [21], the modified sparse coding method of Zeyde [6], and the edge-guided method based on a Markov random field of Xie [8].

To make the tables readable, we marked the top three reconstruction methods in the four tables: the value in bold is the best, the value with a single underline is the second best, and the value with double underlines is the third best. From the tables, we can conclude that the RMSE and PSNR values of our method both rank first in the test results. Seven SSIM values ranked first and three ranked third using our method, while three PE values of our results ranked first and seven ranked second. These objective measurements show that our method achieves good performance compared with the other methods. We also provide visual assessments on the test images "cones" and "tsukuba". The ground-truth HR images and the final SR images produced by the top five methods in the objective evaluation tables (4× scaling factor) are shown in Figures 5 and 6; note that, except for Figures 5a and 6a, all the remaining experimental images used for comparison were generated by ourselves by re-running the original algorithms.
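The objective metrics used above can be computed as follows; SSIM is omitted here, and the exact PE definition in [8] may differ — it is assumed below to be the percentage of pixels whose absolute depth error exceeds a tolerance:

```python
import numpy as np

def rmse(gt, pred):
    """Root Mean Square Error between ground truth and reconstruction."""
    diff = gt.astype(float) - pred.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(gt, pred, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB (peak = max representable value)."""
    e = rmse(gt, pred)
    return float('inf') if e == 0 else 20 * np.log10(peak / e)

def percent_error(gt, pred, tol=1.0):
    """PE (assumed form): percentage of pixels with |error| above `tol`."""
    diff = np.abs(gt.astype(float) - pred.astype(float))
    return float(np.mean(diff > tol) * 100)
```

For depth maps, PE-style "bad pixel" percentages are often more informative than RMSE, since a few grossly wrong pixels near edges matter more than small errors everywhere.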
From the above figures, we can see that a serious zigzag effect exists in the SR results of Kim et al. [25]. Zeyde [6] and Timofte [7] relieve the zigzags using sparse coding, but still introduce many artifacts around edges. The method of Xie [8] obtains good results, but the edge details are not reconstructed as well as with our method. In Figures 5f and 6f, we can see clearly that our reconstructed depth images not only avoid blurred edges, but also reduce zigzags near edges and preserve edge sharpness.

Conclusion and Future Works
Conventional SR methods can cause edges to become blurred and jagged. Aiming at this problem, this paper proposes an edge-guided SR method. First, high-quality edge information is reconstructed based on a dictionary pair generated from HR edge patches and their corresponding LR edge patches. Then, with the guidance of these recovered edges, the SR depth image is interpolated by a joint bilateral filter. The guidance of high-quality edge information improves the performance of the SR algorithm, resulting in sharper SR depth images. The quantitative and qualitative analyses of the experimental results showed the superiority of the proposed technique over conventional and state-of-the-art techniques.
The proposed method still has some limitations. Its running time is higher than that of some other methods, as shown in Table 5, and training the dictionary pair requires a database acquired from external HR-LR images. In the future, we will further improve the proposed method in the following ways.
(1) Database construction: We will construct an image pyramid by interpolating the input image across different scales, so that the training database can be extracted from the image pyramid itself.
(2) Dictionary training: We will use an optimized approach to train the sparse dictionary so that the running time can be reduced.

Figure 1. Pipeline of the edge-guided depth image SR.

Figure 2. Distinguishing two pixels near an edge.

Figure 3. Some special forms of the edge curves.


Figure 4. The sensitivity of patch size n and stride d.

Table 1. RMSE values on the Middlebury Stereo database with a scaling factor of 4.

Table 2. SSIM values on the Middlebury Stereo database with a scaling factor of 4.

Table 3. PSNR values on the Middlebury Stereo database with a scaling factor of 4.

Table 4. PE values on the Middlebury Stereo database with a scaling factor of 4.

Table 5. Running time on the Middlebury Stereo database with a scaling factor of 4.