Article

Stereo Matching Methods for Imperfectly Rectified Stereo Images

Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju 61005, Korea
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(4), 570; https://doi.org/10.3390/sym11040570
Submission received: 26 March 2019 / Revised: 17 April 2019 / Accepted: 17 April 2019 / Published: 19 April 2019

Abstract

Stereo matching has been under development for decades and is an important process for many applications. Difficulties in stereo matching include textureless regions, occlusion, illumination variation, the fattening effect, and discontinuity. These challenges are effectively solved by recently developed stereo matching algorithms. A new problem, imperfect rectification, has recently been encountered in stereo matching; it results from the high resolution of stereo images. State-of-the-art stereo matching algorithms fail to exactly reconstruct depth information from stereo images with imperfect rectification, as imperfect rectification is not explicitly taken into account. In this paper, we address the imperfect rectification problem and propose stereo matching methods based on absolute differences, squared differences, normalized cross correlation, zero-mean normalized cross correlation, and the rank and census transforms. Finally, we conduct experiments to evaluate these stereo matching methods using the Middlebury datasets. The experimental results show that the proposed stereo matching methods significantly reduce the error rate for stereo images with imperfect rectification.

1. Introduction

Stereo matching is an important process in the field of computer vision, the goal of which is to reconstruct three-dimensional (3D) information from a scene with left and right stereo images [1]. Stereo matching algorithms have been commonly applied in medical imaging and 3D imaging systems, such as satellite-based earth and space exploration, autonomous robots, and vehicle and security systems [2]. Stereo matching is a challenging task due to difficulties such as textureless regions, occlusion, illumination variation, the fattening effect, discontinuity, flying snow, sun flare, and rain blur [3,4].
Sparse stereo matching methods typically use feature descriptors, such as the scale-invariant feature transform [5] and speeded-up robust features [6], to compute sparse disparity maps, in which not all pixels have disparity values [7,8,9]. Sarkis and Diepold [10] introduced an approach to convert sparse disparity maps to dense maps. The efficient large-scale stereo matching method (ELAS) [11] operates on rectified input images, such that correspondences are restricted to the same line in both images.
In our work, we solve a different problem, in which the input stereo images have been rectified but the rectification is imperfect. Unlike ELAS, our proposed method does not assume that correspondences are restricted to the same line in both images. In addition, our proposed method is a dense stereo matching method, and it contains no interpolation step.
Scharstein et al. [12] classified stereo matching algorithms into local and global algorithms, which consist of matching cost computation, cost aggregation, depth map computation, and depth map refinement phases. The matching cost computation step is required for both types of stereo matching algorithms and is important to the accuracy of the disparity map. The output of the matching cost computation step is a disparity space image $C$ [12], in which $C_d(\mathbf{p})$ is the matching cost value of a pixel $\mathbf{p}$ in the reference image, e.g., the left image of a stereo pair, at a disparity hypothesis $d$.
Local stereo matching algorithms use cost aggregation techniques to locally smooth the matching cost values in $C$. Let $\bar{C}$ be the result of applying a cost aggregation technique to $C$. From $\bar{C}$, a disparity value for $\mathbf{p}$ can be obtained by using a winner-takes-all strategy, as follows:

$$ D_E(\mathbf{p}) = \operatorname*{arg\,min}_{d} \bar{C}(\mathbf{p}, d), \qquad (1) $$

where $D_E$ is an estimated disparity map.
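As a minimal illustration (our own sketch, not code from the paper), the winner-takes-all step of Equation (1) can be applied to a cost volume stored as a NumPy array with one slice per disparity hypothesis:

```python
import numpy as np

def winner_takes_all(cost_volume):
    """Equation (1): pick, per pixel, the disparity with minimum cost.

    cost_volume -- aggregated costs of shape (D, H, W), one (H, W)
                   slice per disparity hypothesis d = 0..D-1.
    Returns an (H, W) integer disparity map.
    """
    return np.argmin(cost_volume, axis=0)
```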
Global stereo algorithms can use global optimization methods, such as graph-cut [13] or belief propagation [14], to minimize the energy function that constrains the smoothness of the disparities between two neighboring pixels. In global stereo matching, the energy function is first defined and is then solved as an energy minimization problem. A disparity map with higher energy is more erroneous, whereas a disparity map with lower energy is more accurate. The typical form of an energy function in stereo matching is
$$ E(D_E) = E_{data}(D_E) + E_{smooth}(D_E), \qquad (2) $$

where $E_{data}$ measures the photo consistency, which is computed using a matching cost function, and $E_{smooth}$ measures the smoothness, which is defined as follows:

$$ E_{smooth}(D_E) = \sum_{\langle \mathbf{p}, \mathbf{q} \rangle \in \Omega} s(d_p, d_q) \qquad (3) $$

and

$$ s(d_p, d_q) = \begin{cases} 0 & \text{if } d_p = d_q \\ \Delta & \text{otherwise,} \end{cases} \qquad (4) $$

where $\Delta$ is a predefined penalty value that balances the smoothness and data terms, $\Omega$ is the set of neighboring pixel pairs in the reference image, and $s(\cdot)$ is a smoothness function that gives a penalty if the disparities of two pixels differ. $d_p$ and $d_q$ are the disparity values of pixels $\mathbf{p}$ and $\mathbf{q}$, respectively.
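As an illustration of the smoothness term, the following sketch evaluates Equations (3) and (4) on a 4-connected pixel grid; the neighborhood choice and the NumPy representation are our assumptions, not specified in the paper:

```python
import numpy as np

def smoothness_energy(D, delta):
    """Equations (3) and (4): sum a penalty delta over all 4-connected
    neighboring pixel pairs whose disparities differ."""
    horiz = np.sum(D[:, 1:] != D[:, :-1])  # horizontal neighbor pairs
    vert = np.sum(D[1:, :] != D[:-1, :])   # vertical neighbor pairs
    return delta * (horiz + vert)
```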
According to Hirschmuller et al. [15], radiometric differences between stereo images are inherent and inevitable even when the images are produced under controlled lighting and exposure conditions. However, advanced stereo matching cost functions [16,17] can operate robustly with stereo images of different intensity transformations. In other words, the radiometric distortion problem in stereo matching can be solved in the matching cost computation step. Textureless regions, discontinuity, and occlusion problems can be solved by cost aggregation or depth map computation processes [18].
The assumption of existing dense stereo matching algorithms is that input stereo images are perfectly rectified, such that correspondent pixels between the rectified stereo images have the same y-coordinate values. This assumption is commonly known as the frontal-parallel assumption. However, obtaining perfect rectification for a stereo pair, especially for large stereo images, is currently a challenge [19]. Therefore, when working on high resolution stereo images, stereo matching algorithms are required to consider this imperfect rectification problem, as the frontal-parallel assumption no longer holds.
A stereo pair, before being used as input for stereo matching algorithms, typically undergoes a rectification process. The rectification process aims to place correspondent pixels of the stereo images on the same frontal-parallel lines (or epipolar lines). However, according to [19], it is difficult to achieve perfect results with current rectification methods when operating on a high resolution stereo pair. Correspondent pixels in stereo images with imperfect rectification may be located on different epipolar lines [19]. This means that correspondent pixels do not satisfy the frontal-parallel assumption that all dense stereo matching algorithms require. The imperfect rectification problem is unavoidable when rectifying high resolution stereo images, even using advanced rectification methods [19]. At the same time, the need for high resolution stereo images is on the rise [18,19]. However, there is a lack of research on imperfect rectification in stereo matching, and most previous studies [20,21,22,23,24,25,26,27,28] are not aware of this problem of high resolution images.
Existing stereo matching methods are dense methods that compute disparity values for each pixel, and most algorithms implicitly or explicitly assume, based on epipolar geometry, that corresponding pixels are located on the same epipolar line. Currently, only the Middlebury dataset provides stereo images with high resolution and imperfect rectification, and these stereo images are not included in its benchmark. Therefore, existing research focuses only on low and high resolution stereo images with perfect rectification.
In this paper, we propose several novel matching cost functions that extend state-of-the-art matching cost functions to high resolution stereo images. We use the Middlebury dataset [19] to evaluate the proposed matching cost functions in local and global stereo matching frameworks. The tested local stereo matching algorithms include the absolute difference (AD)-based window algorithm, squared difference (SD)-based window algorithm, Rank-based window algorithm, Census-based window algorithm, normalized cross correlation (NCC), and zero-mean normalized cross correlation (ZNCC) [29]. According to [15,30], NCC and ZNCC can be considered local stereo algorithms, so in our experiments, we do not apply cost aggregation (via a window) for NCC and ZNCC. The tested global stereo matching algorithms include the AD and graph cut (GC) [13], SD and GC, Rank and GC, and Census and GC algorithms.

2. Matching Cost Functions

2.1. Application to Dense Stereo Matching

Existing stereo matching algorithms operate on the perfect rectification assumption that correspondent pixels are frontal-parallel. Therefore, in the matching cost computation, for a pixel $\mathbf{p}$ in the reference image $I$, candidate pixels $\mathbf{p}'$ in the target image $I'$ have the same y-coordinate value as $\mathbf{p}$ and differ only in their x-coordinate values. However, when working with high resolution stereo images, the rectification algorithm can operate imperfectly. As a result, correspondent pixels between stereo images may have different y-coordinate values, so the frontal-parallel assumption does not hold in these cases. Therefore, existing stereo matching algorithms suffer from a new problem that is introduced by the imperfect rectification process.
Let $\mathbf{p} = [x_p, y_p]^T$ be a pixel in the reference image $I$, $\mathbf{p}' = [x_{p'}, y_{p'}]^T$ be a pixel in the target image $I'$, and $\mathbf{d} = [d, r]^T$ be a disparity value. Without explicitly stating so, we use the left image as the reference image. Existing stereo matching algorithms work on the frontal-parallel assumption, so the value $r$ is always set to zero. This fixed value $r = 0$ is the main reason that existing stereo matching algorithms perform poorly on stereo images with imperfect rectification. The expansion parameter $r$ can vary within the interval $[-R, R]$, where $R$ is an expansion range. Let $M_1$ be a matching cost function in the traditional approach. A matching cost value for the pixel $\mathbf{p}$ and a disparity hypothesis $d$ is computed under the frontal-parallel assumption as follows:

$$ C(\mathbf{p}, d) = M_1(\mathbf{p}, d). \qquad (5) $$

Here, the function $M_1$ takes the coordinate of $\mathbf{p}$ in the reference image and the value $d$. The coordinate of the correspondent pixel $\mathbf{p}'$ in the target image is computed as follows:

$$ \mathbf{p}' = [x_p - d, \ y_p]^T. \qquad (6) $$

The frontal-parallel assumption can be described using Equation (6). The pixel $\mathbf{p}$ in the reference image and the correspondent pixel $\mathbf{p}'$ in the target image have the same y-coordinate value $y_p$. This means that for each pixel in the reference image, the correspondent pixel in the target image lies on the same epipolar line.

The imperfect rectification problem means that correspondent pixels between the left and right images can be located on different epipolar lines. Therefore, the search for the correspondent pixel in Equation (6) fails to correctly recover the disparity information, because $\mathbf{p}'$ is constrained to lie on the same epipolar line as $\mathbf{p}$.

To cope with imperfect rectification, the search space for the correspondent pixel $\mathbf{p}'$ must include pixels from above and below the considered line $y = y_p$ in the target image. We redesign the computation of a matching cost value as follows:

$$ C(\mathbf{p}, d) = \min_{r} M_2(\mathbf{p}, d, r), \qquad (7) $$

where $M_2$ is a matching cost function in the proposed setting, and $r$ is an integer in $[-R, R]$. The function $M_2$ takes one more input parameter $r$ that determines how much the search space should be expanded. Our idea in Equation (7) is that for each disparity hypothesis $d$, a matching cost function should consider pixels above and below the pixel $\mathbf{p}' = [x_p - d, \ y_p]^T$ in the target image, and the most similar pixel is chosen to compute a matching cost value.
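The following sketch expresses Equation (7) directly; the callable M2 and its (pixel, disparity, offset) signature are our illustrative abstraction rather than an interface defined in the paper:

```python
def expanded_matching_cost(M2, p, d, R):
    """Equation (7): minimize the matching cost over vertical offsets.

    M2 -- matching cost function taking (pixel, disparity, offset)
    p  -- pixel coordinate in the reference image
    d  -- disparity hypothesis
    R  -- expansion range; the offset r runs over [-R, R]
    """
    return min(M2(p, d, r) for r in range(-R, R + 1))
```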
In this paper, we apply this proposed setting to matching cost functions including AD and SD (pixel-wise matching cost functions), Rank and Census (transform-based matching cost functions), and NCC and ZNCC (window-based matching cost functions).

2.2. Application to Pixel-Wise Matching Cost Functions

In this subsection, two pixel-wise matching cost functions, AD and SD, are modified to adapt to high resolution stereo images. The AD and SD matching cost functions compute a matching cost value for the pixel $\mathbf{p}$ and a disparity hypothesis $d$ using the intensities of $\mathbf{p}$ and $\mathbf{p}'$. We denote the new functions ImpAD and ImpSD, respectively.

2.2.1. ImpAD

The AD matching cost function computes the absolute value of the intensity difference of a pixel pair. An AD matching cost value measures the similarity between two pixels. Matching cost values of AD are computed as follows:

$$ AD(\mathbf{p}, d) = \left| I(x_p, y_p) - I'(x_p - d, y_p) \right|, \qquad (8) $$

where $I(x_p, y_p) = I(\mathbf{p})$ is the intensity value of $\mathbf{p}$ in the reference image, and $I'(x_p - d, y_p) = I'(\mathbf{p}')$ is the intensity value of $\mathbf{p}'$ in the target image.

As a traditional matching cost function, the AD function requires only the estimated disparity information $d$ to determine the correspondent pixel $\mathbf{p}' = [x_p - d, y_p]^T$ in the target image. The resulting value of $AD(\mathbf{p}, d)$ is simply assigned to the disparity space image $C$ as follows:

$$ C(\mathbf{p}, d) = AD(\mathbf{p}, d). \qquad (9) $$

The ImpAD matching cost function requires not only the estimated disparity value $d$ but also the expansion value $r$ to determine the correspondent pixel $\mathbf{p}'_i = [x_p - d, y_p + r]^T$ in the target image. Here, we denote by $\mathbf{p}'_i$ a correspondent pixel of $\mathbf{p}$ in the proposed setting, which uses both pieces of information $d$ and $r$. An ImpAD matching cost value is computed as follows:

$$ ImpAD(\mathbf{p}, d, r) = \left| I(x_p, y_p) - I'(x_p - d, y_p + r) \right|. \qquad (10) $$

Here, $\mathbf{p}'_i$ differs from $\mathbf{p}$ by the values $d$ and $r$, where $d \in [d_{min}, d_{max}]$ and $r \in [-R, R]$. A matching cost value at pixel $\mathbf{p}$ and disparity hypothesis $d$ in $C$ is computed as follows:

$$ C(\mathbf{p}, d) = \min_{r} ImpAD(\mathbf{p}, d, r). \qquad (11) $$

Among the matching cost values for different values of $r$, the minimum is selected and assigned to $C$ as the matching cost value for $\mathbf{p}$ and $d$.
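A minimal NumPy sketch of the ImpAD cost volume (Equations (10) and (11)); the array layout and the handling of out-of-bounds candidates are our assumptions:

```python
import numpy as np

def impad_cost_volume(left, right, d_max, R):
    """Build the disparity space image C for ImpAD.

    left, right -- grayscale float images of shape (H, W).
    Returns C of shape (d_max + 1, H, W); candidates falling outside
    the target image keep an infinite cost.
    """
    H, W = left.shape
    C = np.full((d_max + 1, H, W), np.inf)
    for d in range(d_max + 1):
        for r in range(-R, R + 1):
            cost = np.full((H, W), np.inf)
            # reference rows y for which y + r stays inside the image
            ys = slice(max(0, -r), min(H, H - r))
            yt = slice(max(0, r), min(H, H + r))
            # Equation (10): target pixel is [x - d, y + r]
            cost[ys, d:] = np.abs(left[ys, d:] - right[yt, :W - d])
            # Equation (11): keep the minimum over r
            C[d] = np.minimum(C[d], cost)
    return C
```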

2.2.2. ImpSD

The SD matching cost function computes the square of the intensity difference between two pixels. An SD matching cost value is computed as follows:

$$ SD(\mathbf{p}, d) = \left( I(x_p, y_p) - I'(x_p - d, y_p) \right)^2. \qquad (12) $$

Like AD, the SD matching cost function needs only $d$ to compute the correspondent pixel $\mathbf{p}' = [x_p - d, y_p]^T$ in the target image. The resulting value of $SD(\mathbf{p}, d)$ is set to the disparity space image $C$ as follows:

$$ C(\mathbf{p}, d) = SD(\mathbf{p}, d). \qquad (13) $$

The ImpSD matching cost function needs both the estimated disparity value $d$ and the expansion value $r$ to compute the correspondent pixel $\mathbf{p}'_i = [x_p - d, y_p + r]^T$ in the target image. An ImpSD matching cost value is computed as follows:

$$ ImpSD(\mathbf{p}, d, r) = \left( I(x_p, y_p) - I'(x_p - d, y_p + r) \right)^2. \qquad (14) $$

A matching cost value at the pixel $\mathbf{p}$ and a disparity hypothesis $d$ in $C$ is computed as follows:

$$ C(\mathbf{p}, d) = \min_{r} ImpSD(\mathbf{p}, d, r). \qquad (15) $$
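ImpSD differs from the ImpAD sketch above only in the per-pixel term; a minimal sketch of Equations (14) and (15) for a single pixel, under the same assumptions:

```python
import numpy as np

def impsd_cost(left_pix, right_candidates):
    """Equations (14) and (15): squared intensity difference, minimized
    over the candidate intensities at [x - d, y + r] for r in [-R, R]."""
    return np.min((np.asarray(right_candidates) - left_pix) ** 2)
```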

2.3. Application to Transform-Based Matching Cost Functions

We introduce two transform-based matching cost functions, Rank and Census, for high resolution images. The Rank and Census matching cost functions do not depend directly on pixel intensities to compute their matching values. The functions first compute the relative order between the anchor pixel (the pixel at the center of a support window) and its neighbors. Therefore, Rank and Census can operate robustly on stereo images under radiometric distortion. We denote by ImpRank and ImpCensus the Rank and Census functions that are modified to be aware of the imperfect rectification problem.

2.3.1. ImpRank

The Rank matching cost function computes the sum of the relative orders of the pixel pairs, resulting in an integer value that describes the local structure of an image patch. The Rank function is computed as follows:

$$ Rank(\mathbf{p}, d) = \left| \sum_{\mathbf{q} \in N_p} \delta(\mathbf{p}, \mathbf{q}) - \sum_{\mathbf{q}' \in N_{p'}} \delta'(\mathbf{p}', \mathbf{q}') \right|, \qquad (16) $$

where $N_p$ and $N_{p'}$ are the sets of neighboring pixels of the pixels $\mathbf{p}$ and $\mathbf{p}'$ in the left image $I$ and the right image $I'$, respectively. The indicator functions $\delta(\cdot)$ and $\delta'(\cdot)$ are computed as follows:

$$ \delta(\mathbf{p}, \mathbf{q}) = \begin{cases} 1 & \text{if } I(\mathbf{p}) < I(\mathbf{q}) \\ 0 & \text{otherwise} \end{cases} \qquad (17) $$

and

$$ \delta'(\mathbf{p}', \mathbf{q}') = \begin{cases} 1 & \text{if } I'(\mathbf{p}') < I'(\mathbf{q}') \\ 0 & \text{otherwise,} \end{cases} \qquad (18) $$

where $\mathbf{p}' = \mathbf{p} - [d \ \ 0]^T$.

The Rank matching cost function computes correspondent candidate pixels $\mathbf{p}'$ using only $d$. Therefore, under imperfect rectification, the Rank function fails to find the correct correspondent pixels $\mathbf{p}'$ to measure matching cost values. The ImpRank matching cost function takes the imperfect rectification problem into account and uses the expansion parameter $r$ to further search for the correspondent pixels $\mathbf{p}'_i$ in the target image.

An ImpRank matching cost value is computed as follows:

$$ ImpRank(\mathbf{p}, d, r) = \left| \sum_{\mathbf{q} \in N_p} \delta(\mathbf{p}, \mathbf{q}) - \sum_{\mathbf{q}'_i \in N_{p'_i}} \delta'(\mathbf{p}'_i, \mathbf{q}'_i) \right|. \qquad (19) $$

A matching cost value at the pixel $\mathbf{p}$ and a disparity hypothesis $d$ in $C$ is computed as follows:

$$ C(\mathbf{p}, d) = \min_{r} ImpRank(\mathbf{p}, d, r). \qquad (20) $$
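A sketch of the rank transform and the ImpRank cost (Equations (19) and (20)); the use of scipy.ndimage.generic_filter and the boundary checks are our implementation choices, and x - d is assumed to stay inside the image:

```python
import numpy as np
from scipy.ndimage import generic_filter

def rank_transform(img, size=9):
    """Per pixel, count the window neighbors brighter than the center
    (the sum of the indicator function of Equation (17))."""
    def rank(window):
        center = window[window.size // 2]
        return np.sum(window > center)
    return generic_filter(img, rank, size=size)

def imprank_cost(rank_l, rank_r, y, x, d, R):
    """Equations (19) and (20): absolute rank difference, minimized
    over the vertical offsets r in [-R, R]."""
    H = rank_r.shape[0]
    return min(abs(rank_l[y, x] - rank_r[y + r, x - d])
               for r in range(-R, R + 1) if 0 <= y + r < H)
```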

2.3.2. ImpCensus

The Census matching cost function transforms the local structure of an image patch into a bit string and uses the Hamming distance to measure the similarity between two bit strings. Bit strings are encoded as follows:

$$ \xi(\mathbf{p}) = \bigotimes_{\mathbf{q} \in N_p} \delta(\mathbf{p}, \mathbf{q}) \qquad (21) $$

and

$$ \xi'(\mathbf{p}') = \bigotimes_{\mathbf{q}' \in N_{p'}} \delta'(\mathbf{p}', \mathbf{q}'), \qquad (22) $$

where $\bigotimes$ denotes bit-wise concatenation, and $\xi(\mathbf{p})$ and $\xi'(\mathbf{p}')$ are the two bit strings for $\mathbf{p}$ and $\mathbf{p}'$, respectively.

The Census function is computed as follows:

$$ Census(\mathbf{p}, d) = H\left( \xi(\mathbf{p}), \xi'(\mathbf{p}') \right), \qquad (23) $$

where $H(\cdot)$ is the Hamming distance. Taking the imperfect rectification problem into account, the proposed ImpCensus matching cost function uses the expansion parameter $r$ in its matching cost computation. An ImpCensus matching cost value is computed as follows:

$$ ImpCensus(\mathbf{p}, d, r) = H\left( \xi(\mathbf{p}), \xi'(\mathbf{p}'_i) \right). \qquad (24) $$

A matching cost value at the pixel $\mathbf{p}$ and a disparity hypothesis $d$ in $C$ is computed as follows:

$$ C(\mathbf{p}, d) = \min_{r} ImpCensus(\mathbf{p}, d, r). \qquad (25) $$
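A sketch of the census transform and the ImpCensus cost (Equations (21), (24), and (25)); np.roll wraps around at the image borders, which we accept here as a simplification:

```python
import numpy as np

def census_transform(img, radius=4):
    """Equation (21): per pixel, encode which neighbors inside a
    (2*radius+1)^2 window are brighter than the center pixel."""
    bits = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            neighbor = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
            bits.append(img < neighbor)  # the indicator of Equation (17)
    return np.stack(bits, axis=-1)       # (H, W, P - 1) boolean strings

def impcensus_cost(bits_l, bits_r, y, x, d, R):
    """Equations (24) and (25): Hamming distance to the most similar
    candidate among the vertical offsets r in [-R, R]."""
    H = bits_r.shape[0]
    return min(int(np.sum(bits_l[y, x] != bits_r[y + r, x - d]))
               for r in range(-R, R + 1) if 0 <= y + r < H)
```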

2.4. Application to Window-Based Matching Cost Functions

NCC and ZNCC require support windows and use intensity values directly in their matching cost computation. Like the sum of absolute differences (SAD), NCC and ZNCC can be considered local stereo matching methods because disparity maps from these two methods have local smoothness of disparity values [15]. NCC and ZNCC can be computed efficiently using box filtering (BF) [31] or integral image (II) [32] techniques.

2.4.1. ImpNCC

NCC can tolerate small brightness changes between stereo images due to its local normalization. An NCC matching cost value is computed as follows:

$$ NCC(\mathbf{p}, d) = \frac{\sum_{\mathbf{q} \in N_p} I(\mathbf{q}) \times I'(\mathbf{q}')}{\sqrt{\sum_{\mathbf{q} \in N_p} I(\mathbf{q})^2 \times \sum_{\mathbf{q}' \in N_{p'}} I'(\mathbf{q}')^2}}. \qquad (26) $$

We denote by ImpNCC the NCC function that is aware of the imperfect rectification problem. An ImpNCC matching cost value is computed as follows:

$$ ImpNCC(\mathbf{p}, d, r) = \frac{\sum_{\mathbf{q} \in N_p} I(\mathbf{q}) \times I'(\mathbf{q}'_i)}{\sqrt{\sum_{\mathbf{q} \in N_p} I(\mathbf{q})^2 \times \sum_{\mathbf{q}'_i \in N_{p'_i}} I'(\mathbf{q}'_i)^2}}. \qquad (27) $$

A matching cost value at the pixel $\mathbf{p}$ and a disparity hypothesis $d$ in $C$ is computed as follows:

$$ C(\mathbf{p}, d) = \min_{r} ImpNCC(\mathbf{p}, d, r). \qquad (28) $$
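A per-window sketch of Equation (27), under our assumed representation in which win_l is the reference window and win_r is the candidate window at [x - d, y + r]; the small eps guarding against division by zero is our addition:

```python
import numpy as np

def impncc_window_score(win_l, win_r, eps=1e-8):
    """Equation (27) for one window pair; the caller evaluates this for
    each offset r in [-R, R] and keeps the best-matching candidate."""
    num = np.sum(win_l * win_r)
    den = np.sqrt(np.sum(win_l ** 2) * np.sum(win_r ** 2)) + eps
    return num / den
```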

2.4.2. ImpZNCC

The brightness of stereo images can vary due to lighting and exposure conditions. To compensate, the stereo images can first be locally normalized by subtracting the mean and dividing by the standard deviation.

A ZNCC matching cost value is computed as follows:

$$ ZNCC(\mathbf{p}, d) = \frac{\sum_{\mathbf{q} \in N_p} \left( I(\mathbf{q}) - \bar{I}(\mathbf{p}) \right) \times \left( I'(\mathbf{q}') - \bar{I}'(\mathbf{p}') \right)}{\sqrt{\sum_{\mathbf{q} \in N_p} \left( I(\mathbf{q}) - \bar{I}(\mathbf{p}) \right)^2 \times \sum_{\mathbf{q}' \in N_{p'}} \left( I'(\mathbf{q}') - \bar{I}'(\mathbf{p}') \right)^2}}. \qquad (29) $$

We denote by ImpZNCC the ZNCC function that is aware of the imperfect rectification problem. An ImpZNCC matching cost value is computed as follows:

$$ ImpZNCC(\mathbf{p}, d, r) = \frac{\sum_{\mathbf{q} \in N_p} \left( I(\mathbf{q}) - \bar{I}(\mathbf{p}) \right) \times \left( I'(\mathbf{q}'_i) - \bar{I}'(\mathbf{p}'_i) \right)}{\sqrt{\sum_{\mathbf{q} \in N_p} \left( I(\mathbf{q}) - \bar{I}(\mathbf{p}) \right)^2 \times \sum_{\mathbf{q}'_i \in N_{p'_i}} \left( I'(\mathbf{q}'_i) - \bar{I}'(\mathbf{p}'_i) \right)^2}}. \qquad (30) $$

A matching cost value at pixel $\mathbf{p}$ and a disparity hypothesis $d$ in $C$ is computed as follows:

$$ C(\mathbf{p}, d) = \min_{r} ImpZNCC(\mathbf{p}, d, r). \qquad (31) $$

Like ZNCC, ImpZNCC can be computed efficiently by using the BF and II techniques. Let $\times$ and $/$ be element-wise multiplication and division of two matrices, respectively. Algorithm 1 shows the procedure to compute the ImpZNCC matching cost function; the sum over a support window in Algorithm 1 is computed quickly and efficiently using the BF technique. In Algorithm 1, a value at position $[x, y]^T$ in $K'_{d,r}$ is computed as $K'_{d,r}(x, y) = K'(x - d, y + r)$.
Algorithm 1 The procedure of the ImpZNCC matching cost function to construct $C$.

Input: left and right images $I$ and $I'$, window size $W$, expansion range $R$.
1. $\bar{I} \leftarrow$ average over $W$ for $I$ using BF
2. $\bar{I}' \leftarrow$ average over $W$ for $I'$ using BF
3. $K \leftarrow I - \bar{I}$
4. $K' \leftarrow I' - \bar{I}'$
5. $K_2 \leftarrow K \times K$
6. $K'_2 \leftarrow K' \times K'$
7. $D \leftarrow$ sum over $W$ for $K_2$ using BF
8. $D' \leftarrow$ sum over $W$ for $K'_2$ using BF
9. for $d = d_{min}$ to $d_{max}$ do
10.   for $r = -R$ to $R$ do
11.     $M \leftarrow K \times K'_{d,r}$
12.     $S \leftarrow$ sum over $W$ from $M$ using BF
13.     $C_d \leftarrow S / \sqrt{D \times D'_{d,r}}$
14.   end for
15. end for
16. return $C$
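Below is a NumPy sketch of Algorithm 1, using scipy.ndimage.uniform_filter as the box filter (a local average rather than a sum, which cancels in the ZNCC ratio). The wrap-around of np.roll at the borders and the negation of the correlation (so that a smaller value means a better match, keeping the min-over-r selection meaningful) are our simplifications:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def impzncc_cost_volume(I, Ip, W, d_min, d_max, R, eps=1e-8):
    """Algorithm 1: ImpZNCC cost volume via box filtering (BF).

    I, Ip -- left and right grayscale float images of shape (H, Wimg).
    Returns C with one slice per disparity; scores are negated ZNCC
    correlations, so lower values indicate more similar windows.
    """
    box = lambda x: uniform_filter(x, size=W)   # BF: local average
    K, Kp = I - box(I), Ip - box(Ip)            # steps 1-4: zero-mean
    D, Dp = box(K * K), box(Kp * Kp)            # steps 5-8: local energy
    H, Wimg = I.shape
    C = np.full((d_max - d_min + 1, H, Wimg), np.inf)
    for i, d in enumerate(range(d_min, d_max + 1)):      # step 9
        for r in range(-R, R + 1):                       # step 10
            # K'_{d,r}(x, y) = K'(x - d, y + r)
            Kdr = np.roll(np.roll(Kp, -r, axis=0), d, axis=1)
            Ddr = np.roll(np.roll(Dp, -r, axis=0), d, axis=1)
            S = box(K * Kdr)                             # steps 11-12
            zncc = S / np.sqrt(D * Ddr + eps)            # step 13
            C[i] = np.minimum(C[i], -zncc)               # min over r
    return C                                             # step 16
```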

3. Experimental Results

We used the Middlebury dataset [19,33] to measure the performance of the matching cost functions AD, SD, NCC, ZNCC, Rank, Census, ImpAD, ImpSD, ImpNCC, ImpZNCC, ImpRank, and ImpCensus in local and global frameworks. In the present experiments, we do not intend to compare the performance of the tested matching cost functions and stereo matching algorithms against each other. Instead, we compare the performance of the stereo matching algorithms before and after applying the modification that solves the imperfect rectification problem.

For each of the tested matching cost functions, we implemented local and global stereo matching algorithms that use the function in the matching cost computation. For the local stereo matching algorithms, we used a 15 × 15 window to aggregate the matching costs in $C$. For global stereo matching, we used graph-cut (GC) [34] to smooth $C$, with the source code of GC from [35]. We carefully and optimally chose the parameters of GC for the global stereo algorithms that use AD, SD, Rank, and Census, using stereo images with perfect rectification as training examples. The global stereo matching algorithms based on ImpAD, ImpSD, ImpRank, and ImpCensus use the same parameter values as the global algorithms based on the AD, SD, Rank, and Census matching cost functions, respectively.
According to [15,30], NCC and ZNCC can be considered local stereo matching algorithms; hence, we do not apply cost aggregation techniques or global optimization methods to NCC, ZNCC, ImpNCC, and ImpZNCC. For the Rank, Census, ImpRank, ImpCensus, NCC, ZNCC, ImpNCC, and ImpZNCC functions, which require a support window, we used a 9 × 9 window.
For AD, SD, ImpAD, and ImpSD, each pixel of the input stereo images is subtracted by a mean value computed over an image window around the pixel. As a result, these four matching cost functions can reduce the effect of illumination differences between the stereo images, and we can better measure the effect of the modification that solves the imperfect rectification problem. We used the 9 × 9 window for this mean subtraction.
The performance of these four matching cost functions is measured by using the winner-takes-all strategy on $C$. All of the matching cost algorithms were evaluated using the average percentage of erroneous pixels in all zones, except occluded areas, computed at a 2-pixel error threshold. This error threshold is the default value in Middlebury benchmark 3 [19]. The error percentage ($Err$) was computed as follows:

$$ Err\% = \frac{100}{|I_{nocc}|} \sum_{\mathbf{p} \in I_{nocc}} \begin{cases} 0, & \text{if } |D_E(\mathbf{p}) - D_G(\mathbf{p})| \le 2 \\ 1, & \text{otherwise,} \end{cases} \qquad (32) $$

where $I_{nocc}$ is the set of all nonoccluded pixels, $|I_{nocc}|$ is the number of pixels in $I_{nocc}$, and $D_G(\mathbf{p})$ and $D_E(\mathbf{p})$ are the ground truth and estimated disparities at $\mathbf{p}$, respectively.
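A sketch of this error metric, assuming the disparity maps and the nonocclusion mask are given as NumPy arrays:

```python
import numpy as np

def error_rate(D_E, D_G, nonocc, threshold=2.0):
    """Equation (32): percentage of nonoccluded pixels whose estimated
    disparity deviates from the ground truth by more than the threshold."""
    mask = nonocc.astype(bool)
    bad = np.abs(D_E[mask] - D_G[mask]) > threshold
    return 100.0 * np.mean(bad)
```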
Middlebury dataset 3 [36] provides test and training stereo images under different conditions: varying illumination and exposure, and both perfect and imperfect rectification. The training stereo images come with ground truth, whereas the test datasets do not. Middlebury benchmark 3 compares submitted stereo matching algorithms using the test dataset under these conditions. However, in this paper, we focus on solving the imperfect rectification problem of stereo images. Therefore, in our experiments, we use the training datasets, which contain stereo images with imperfect rectification and varying illumination and exposure. Table 1 presents the stereo images in the training datasets. We implemented three versions with $R = 0$, $R = 1$, and $R = 2$, respectively. With $R = 0$, the modification has no effect on a matching cost function; therefore, for example, ImpZNCC with $R = 0$ is simply ZNCC.

3.1. ImpCensus and ImpRank

We conducted experiments to evaluate the performance of the Census, Rank, ImpCensus, and ImpRank matching cost functions in local and global stereo matching approaches. We denote by ImpCensus/Win/R1 a local stereo matching algorithm that uses the ImpCensus matching cost function with $R = 1$ to construct $C$ and aggregates matching costs using a window. In addition, we denote by ImpCensus/GC/R1 a global stereo matching algorithm that uses ImpCensus with $R = 1$ and GC to globally optimize the energy function described in Equation (2). Other stereo matching algorithms are denoted similarly by changing the matching cost function and the $R$ value.
Figure 1 shows the results of the ImpCensus-based stereo matching algorithms using the Backpack stereo images with different $R$ values. The disparity maps in the second row are the results of the ImpCensus-based local algorithms, whereas the third row shows the disparity maps of the ImpCensus-based global algorithms. Census/Win and Census/GC produced the most erroneous disparity maps because they are unaware of the imperfect rectification problem. ImpCensus/GC/R1 and ImpCensus/GC/R2 reduced the error rates. The error rate reduction is clearly seen in Figure 1g,h, especially in textured image regions. These observations agree with those in [19] that the imperfect rectification problem commonly occurs in textured image regions.
Table 2 and Table 3 show the quantitative results of the local and global stereo matching algorithms that use Census and ImpCensus, and Rank and ImpRank, respectively. The ImpCensus-based stereo matching algorithms outperformed the Census-based algorithms for all the test stereo images. Similarly, the performance of the ImpRank-based stereo matching algorithms was superior to that of the Rank-based algorithms. On the Playtable stereo images, for example, the modification allows the ImpCensus-based local algorithm to reduce the error rate by up to 27.9 percentage points (65.28% for Census/Win versus 37.38% for ImpCensus/Win/R2). In the global approach, the error rate of ImpCensus/GC/R2 was 39.09 percentage points smaller than that of Census/GC (70.29% for Census/GC versus 31.20% for ImpCensus/GC/R2).
For the Census- and ImpCensus-based local and global stereo matching algorithms, the average error rates of ImpCensus/Win/R1 (39.49%) and ImpCensus/Win/R2 (38.50%) were about 6 percentage points smaller than that of Census/Win (45.46%), whereas the average error rates of ImpCensus/GC/R1 (37.74%) and ImpCensus/GC/R2 (32.42%) were more than 12 percentage points smaller than that of Census/GC (50.38%). Similarly, awareness of the imperfect rectification of high resolution images had a positive effect on the ImpRank-based stereo matching algorithms, such that the ImpRank-based algorithms with $R = 1$ and $R = 2$ had smaller average error rates than the Rank-based algorithms.

3.2. ImpAD and ImpSD

We performed experiments to evaluate the performance of AD, SD, ImpAD, and ImpSD in local and global stereo matching approaches. Table 4 and Table 5 show the quantitative results of the local and global stereo matching algorithms that use AD and ImpAD, and SD and ImpSD, respectively. For all of the test stereo images, ImpAD/Win/R1 and ImpAD/Win/R2 outperformed AD/Win, and ImpAD/GC/R1 and ImpAD/GC/R2 were superior to AD/GC. Similarly, the error rates of ImpSD/Win/R1 and ImpSD/Win/R2 were smaller than those of SD/Win, and ImpSD/GC/R1 and ImpSD/GC/R2 performed better than SD/GC for all the test stereo pairs.
We computed the average performance of each tested stereo matching algorithm over the test stereo images. For the AD- and ImpAD-based stereo matching algorithms, AD/Win and AD/GC had the largest errors in their corresponding groups, with average error rates of 54.46% and 45.72%, respectively. In contrast, ImpAD/Win/R1 and ImpAD/GC/R1 had the better performance in the local and global approaches, respectively: ImpAD/Win/R1 performed with an average error rate of 48.38%, whereas ImpAD/GC/R1 operated at 34.59% for the test stereo pairs.

For the SD- and ImpSD-based stereo matching algorithms, SD/Win and SD/GC had the largest errors in their corresponding groups, with average error rates of 54.76% and 45.47%, respectively. In contrast, ImpSD/Win/R1 and ImpSD/GC/R1 had the best performance in the local and global approaches, respectively: ImpSD/Win/R1 performed with an average error rate of 48.79%, whereas ImpSD/GC/R1 had an error rate of 35.39% over the test stereo pairs.

3.3. ImpNCC and ImpZNCC

We evaluated the performance of NCC and ZNCC with and without the modification, measuring NCC, ImpNCC, ZNCC, and ImpZNCC directly from the corresponding disparity space image $C$ using the winner-takes-all strategy. We denote by ImpNCC/R1 a matching cost function that uses ImpNCC with $R = 1$ to construct $C$.
Figure 2 shows the results of the ImpZNCC matching cost functions with different $R$ values using the Motorcycle stereo images. Figure 2a,b show the left and right images, whereas the ground truth of the left image is shown in Figure 2c. The disparity maps of ZNCC, ImpZNCC/R1, and ImpZNCC/R2 are shown in Figure 2d–f, respectively. ZNCC produced the most erroneous disparity maps, with an average error rate of 49.02%, because ZNCC ignores the imperfect rectification problem. ImpZNCC/R1 and ImpZNCC/R2 reduced the error rates, with average error rates of 43.73% and 43.64%, respectively.
Table 6 and Table 7 show the quantitative results of the NCC, ImpNCC, ZNCC, and ImpZNCC matching functions. Without the modification, NCC had the worst performance, producing more erroneous disparity maps than ImpNCC/R1 and ImpNCC/R2. Similarly, the awareness of imperfect rectification improved performance: ImpZNCC/R1 and ImpZNCC/R2 were superior to ZNCC for all of the test stereo pairs.

3.4. Stereo Image with Radiometric Distortion

Stereo matching algorithms need to operate robustly on stereo images with radiometric distortion so that they can be used for outdoor applications and road-driving images. In this subsection, we evaluated the performance of stereo matching algorithms that are aware of imperfect rectification on stereo images with both radiometric distortion and imperfect rectification. We used two Middlebury sub-datasets: one with imperfect rectification and varying exposure, and the other with imperfect rectification and varying illumination.
In the present experiments, because Census is one of the most robust matching functions for stereo images with radiometric distortions [15], we use only the ImpCensus-based global stereo matching algorithms. Figure 3 shows the results of Census/GC, ImpCensus/GC/R1, and ImpCensus/GC/R2 using two stereo pairs. The second row shows the disparity maps of the tested stereo matching algorithms using the stereo pair (a,b) with varying exposure and imperfect rectification, whereas the third row shows the disparity maps using the stereo pair (a,c) with varying illumination and imperfect rectification. The error rates of ImpCensus/GC/R1 and ImpCensus/GC/R2 were smaller than those of Census/GC on both stereo pairs.
Table 8 and Table 9 show the quantitative results of the local and global stereo matching algorithms that use ImpCensus on the two Middlebury sub-datasets. For all of the cases in the two tables, the performance of the ImpCensus-based global stereo matching algorithms was improved. Stereo images with varying illumination are often more challenging for stereo matching algorithms than stereo images with varying exposure [15]. Overall, the performance of ImpCensus/GC/R1 and ImpCensus/GC/R2 was superior to that of Census/GC for all the test stereo images.

3.5. Using Normal Stereo Images

In this subsection, we evaluated the performance of the proposed stereo matching methods using normal stereo images; in other words, we measured the Imperfect-based methods using perfectly rectified Middlebury stereo datasets.

We used sub-datasets, including Aloe, Baby1, Baby2, Baby3, Cloth1, Cloth2, Cloth3, Cloth4, Rocks1, Rocks2, Wood1, and Wood2, to evaluate the Imperfect-based methods with different $R$. Figure 4 shows the qualitative results of the ImpCensus-based method for the Aloe, Baby1, Rocks1, and Wood2 image pairs. The ImpCensus-based method explores correspondences in a larger search space determined by the expansion parameter $r$. As a result, the ImpCensus-based method degraded marginally on perfectly rectified stereo images.

Table 10 shows the error rates of the ImpCensus-based method using perfectly rectified stereo images. Clearly, the expansion parameter $r$ had no benefit for these images. Searching for correspondences in a larger search space (with $R = 1$ and $R = 2$) made the ImpCensus-based method more erroneous.

3.6. Computation Time

In order to measure the computation times of the matching cost functions, we used the Bicycle stereo images, with a resolution of 1968 × 3052 and a disparity range of 180. We experimentally investigated the matching cost functions ImpCensus, ImpRank, ImpAD, ImpSD, ImpNCC, and ImpZNCC with $R = 0$, $R = 1$, and $R = 2$. The experimental PC platform had an Intel Core i7 4.00 GHz CPU and 16 GB of memory. Table 11 shows the computation times needed for the tested matching cost functions to compute the disparity space image $C$. The tested algorithms require more computation time as the expansion range $R$ increases.

As shown in the above tables, methods with the expansion range $R = 1$ clearly reduce the error rates of their original versions. However, methods with $R = 2$ performed comparably to, or only marginally better than, those with $R = 1$.

In addition, we further evaluated the performance of the proposed local stereo matching methods for $R = 3$ and $R = 4$ using the imperfectly rectified stereo images of the Middlebury dataset, as shown in Table 12. Increasing the parameter range $R$ further had a negative effect and increased the error rates. Therefore, $R = 1$ is generally the most appropriate value.

Let $I$ be the image size and $D$ be the disparity range. AD and SD are pixel-wise methods, so their computational complexities are $O(I \times D)$. Rank and Census are window-based cost functions in which each matching cost is computed over a window containing $P$ pixels. For each window pair, Rank accumulates the relative orders between the center pixel and its neighbors; therefore, the computational complexity of Rank is $O(I \times D \times (P - 1))$. Census encodes $(P - 1)$ relative orders into a bit string and then computes a matching cost by comparing the differences between two strings; therefore, the computational complexity of Census is $O(I \times D \times P^2)$.

The proposed cost functions with the parameter range $R$ must process $K = 2R + 1$ pixels in the right image for each pixel in the left image. Therefore, the computational complexities of ImpAD and ImpSD are $O(I \times D \times K)$, and those of ImpCensus and ImpRank are $O(I \times D \times (P - 1) \times K)$.

4. Conclusions

In this paper, we applied a modification to state-of-the-art stereo matching methods in order to overcome imperfect rectification. We conducted experiments to evaluate these stereo matching methods using the Middlebury datasets. The experimental results indicate that the proposed stereo matching methods largely improve the performance of their original versions. The proposed stereo matching methods increase the computation cost of a stereo matching algorithm. Reducing this computation cost, or developing a different approach that solves the imperfect rectification problem without increasing the computation cost, is left to future work.

Author Contributions

Both authors contributed equally to this work and have read and approved the final manuscript.

Funding

This work was supported by the NRF grant funded by the Korea government (MSIT) (NRF-2018R1D1A1A09084148) and (NRF-2018R1D1A1B07049682).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Trucco, E.; Verri, A. Introductory Techniques For 3-D Computer Vision; Prentice Hall PTR: Upper Saddle River, NJ, USA, 1998. [Google Scholar]
  2. Cyganek, B.; Siebert, J.P. Introduction to 3D Computer Vision Techniques and Algorithms; Wiley: Hoboken, NJ, USA, 2009. [Google Scholar]
  3. Meister, S.; Jähne, B.; Kondermann, D. Outdoor stereo camera system for the generation of real-world benchmark data sets. Opt. Eng. 2012, 51, 021107. [Google Scholar] [CrossRef]
  4. Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012. [Google Scholar]
  5. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  6. Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded Up Robust Features. In Proceedings of the Ninth European Conference on Computer Vision, Graz, Austria, 7–13 May 2006. [Google Scholar]
  7. Medioni, G.; Nevatia, R. Segment-based stereo matching. Comput. Vis. Graph. Image Process. 1985, 31, 2–18. [Google Scholar] [CrossRef]
  8. Robert, L.; Faugeras, O. Curve-based stereo: Figural continuity and curvature. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Lahaina, HI, USA, 3–6 June 1991. [Google Scholar]
  9. Olson, C.F. Subpixel localization and uncertainty estimation using occupancy grids. In Proceedings of the IEEE International Conference on Robotics and Automation, Detroit, MI, USA, 10–15 May 1999. [Google Scholar]
  10. Sarkis, M.; Diepold, K. Sparse stereo matching using belief propagation. In Proceedings of the IEEE International Conference on Image Processing, San Diego, CA, USA, 12–15 October 2008. [Google Scholar]
  11. Geiger, A.; Roser, M.; Urtasun, R. Efficient Large-Scale Stereo Matching. In Proceedings of the Asian Conference on Computer Vision (ACCV), Queenstown, New Zealand, 8–12 November 2010. [Google Scholar]
  12. Scharstein, D.; Szeliski, R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int. J. Comput. Vis. 2002, 47, 7–42. [Google Scholar] [CrossRef]
  13. Kolmogorov, V.; Zabih, R.R. Computing Visual Correspondence with Occlusions using Graph Cuts. Proc. ICCV 2001, 2, 508–515. [Google Scholar]
  14. Felzenszwalb, P.F.; Huttenlocher, D.P. Efficient Belief Propagation for Early Vision. Int. J. Comput. Vis. 2006, 70, 41–54. [Google Scholar] [CrossRef]
  15. Hirschmuller, H.; Scharstein, D. Evaluation of stereo matching costs on images with radiometric differences. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 1582–1599. [Google Scholar] [CrossRef] [PubMed]
  16. Nguyen, V.D.; Nguyen, D.D.; Nguyen, T.T.; Dinh, V.Q.; Jeon, J.W. Support local pattern and its application to disparity improvement and texture classification. IEEE Trans. Circuits Syst. Video Technol. 2014, 24, 263–276. [Google Scholar] [CrossRef]
  17. Heo, Y.S.; Lee, K.M.; Lee, S.U. Robust Stereo Matching Using Adaptive Normalized Cross-Correlation. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 807–822. [Google Scholar] [PubMed]
  18. Hosni, A.; Rhemann, C.; Bleyer, M.; Rother, C.; Gelautz, M. Fast Cost-Volume Filtering for Visual Correspondence and Beyond. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 504–511. [Google Scholar] [CrossRef] [PubMed]
  19. Scharstein, D.; Hirschmüller, H.; Kitajima, Y.; Krathwohl, G.; Nesic, N.; Wang, X.; Westling, P. High-resolution stereo datasets with subpixel-accurate ground truth. Conf. Pattern Recognit. 2014. [Google Scholar] [CrossRef]
  20. Wang, Y.; Wang, K.; Dunn, E.; Frahm, J.-M. Stereo under sequential optimal sampling: A statistical analysis framework for search space reduction. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 23–28 June 2014; pp. 485–492. [Google Scholar]
  21. Luo, W.; Schwing, A.G.; Urtasun, R. Efficient Deep Learning for Stereo Matching. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 5695–5703. [Google Scholar]
  22. Kowalczuk, J.; Psota, E.T.; Perez, L.C. Real-Time Stereo Matching on CUDA Using an Iterative Refinement Method for Adaptive Support-Weight Correspondences. IEEE Trans. Circuits Syst. Video Technol. 2013, 23, 94–104. [Google Scholar] [CrossRef]
  23. Hirschmüller, H.; Innocent, P.R.; Garibaldi, J. Real-Time Correlation-Based Stereo Vision with Reduced Border Errors. Int. J. Comput. Vis. 2002, 47, 229–246. [Google Scholar] [CrossRef]
  24. Li, L.; Zhang, S.; Yu, X.; Zhang, L. PMSC: PatchMatch-Based Superpixel Cut for Accurate Stereo Matching. IEEE Trans. Circuits Syst. Video Technol. 2018, 28, 679–692. [Google Scholar] [CrossRef]
  25. Psota, E.T.; Kowalczuk, J.; Mittek, M.; Perez, L.C. MAP Disparity Estimation Using Hidden Markov Trees. IEEE Trans. Circuits Syst. Video Technol. 2018, 28, 2219–2227. [Google Scholar]
  26. Li, L.; Yu, X.; Zhang, S.; Zhao, X.; Zhang, L. 3D Cost Aggregation with Multiple Minimum Spanning Trees for Stereo Matching. 2017. Available online: http://ao.osa.org/abstract.cfm?URI=ao-56-12-3411 (accessed on 19 April 2019).
  27. Kim, K.R.; Kim, C.S. Adaptive smoothness constraints for efficient stereo matching using texture and edge information. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3429–3433. [Google Scholar]
  28. Nahar, S.; Joshi, M.V. A learned sparseness and IGMRF-based regularization framework for dense disparity estimation using unsupervised feature learning. IPSJ Trans. Comput. Vis. Appl. 2017, 9, 3429–3433. [Google Scholar] [CrossRef]
  29. Kanade, T.; Kano, H.; Kimura, S.; Yoshida, A.; Oda, K. Development of a video-rate stereo machine. In Proceedings of the 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems, Pittsburgh, PA, USA, 5–9 August 1995. [Google Scholar]
  30. Antunes, M.; Barreto, J.P. SymStereo: Stereo Matching using Induced Symmetry. Int. J. Comput. Vis. 2014, 109, 187–208. [Google Scholar] [CrossRef]
  31. McDonnell, M.J. Box-Filtering techniques. Comput. Graph. Image Process. 1981, 17, 65–70. [Google Scholar] [CrossRef]
  32. Crow, F. Summed-area tables for texture mapping. SIGGRAPH 1984, 18, 207–212. [Google Scholar] [CrossRef]
  33. Scharstein, D.; Szeliski, R. Middlebury Online Stereo Evaluation. 2002. Available online: http://vision.middlebury.edu/stereo (accessed on 19 April 2019).
  34. Boykov, Y.; Kolmogorov, V. An Experimental Comparison of Min-Cut/Max-Flow Algorithms for Energy Minimization in Vision. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 1124–1137. [Google Scholar] [CrossRef] [PubMed]
  35. Kolmogorov, V. Min-Cut/Max-Flow Algorithm Source Code. 2004. Available online: http://pub.ist.ac.at/vnk/software.html (accessed on 19 April 2019).
  36. Scharstein, D.; Szeliski, R. Middlebury Online Stereo Evaluation. Available online: http://vision.middlebury.edu/stereo/eval3 (accessed on 19 April 2019 ).
Figure 1. Results of the ImpCensus-based stereo matching algorithms with different R values using the Backpack stereo images with imperfect rectification. (a) Left image. (b) Right image. (c) Ground truth. (d) Disparity map of Census/Win (Err = 21.38%). (e) Disparity map of ImpCensus/Win/R1 (Err = 16.77%). (f) Disparity map of ImpCensus/Win/R2 (Err = 17.38%). (g) Disparity map of Census/GC (Err = 22.59%). (h) Disparity map of ImpCensus/GC/R1 (Err = 14.65%). (i) Disparity map of ImpCensus/GC/R2 (Err = 14.43%).
Figure 2. Results of the ImpZNCC-based stereo matching algorithms with different R values using the Motorcycle stereo images with imperfect rectification. (a) Left image. (b) Right image. (c) Ground truth. (d) Disparity map of ZNCC (Err = 35.97%). (e) Disparity map of ImpZNCC/R1 (Err = 25.78%). (f) Disparity map of ImpZNCC/R2 (Err = 26.30%).
Figure 3. Results of the Census-based stereo matching algorithms using the Sword1 stereo images with imperfect rectification and radiometric distortion. (a) Left image. (b) Right image with varying exposure. (c) Right image with varying illumination. (d–f) Disparity maps using the stereo pair (a,b): (d) Census/GC (Err = 18.87%); (e) ImpCensus/GC/R1 (Err = 14.55%); (f) ImpCensus/GC/R2 (Err = 14.39%). (g–i) Disparity maps using the stereo pair (a,c): (g) Census/GC (Err = 36.05%); (h) ImpCensus/GC/R1 (Err = 31.30%); (i) ImpCensus/GC/R2 (Err = 31.18%).
Figure 4. Results of the ImpCensus-based stereo matching algorithms with different R values using perfectly rectified images. The first column shows the left images, and the second column shows the disparity maps for Census/Win. The next two columns show the disparity maps for ImpCensus/Win/R1 and ImpCensus/Win/R2, respectively. The last column shows the ground truths.
Table 1. Stereo images with imperfect rectification in the Middlebury training datasets of version 3.

| Dataset | Height | Width | Disparity |
| --- | --- | --- | --- |
| Adirondack | 1984 | 2872 | 290 |
| Backpack | 1988 | 2948 | 260 |
| Bicycle | 1968 | 3052 | 180 |
| Cable | 1916 | 2816 | 460 |
| Classroom | 1896 | 2996 | 260 |
| Couch | 1992 | 2296 | 630 |
| Flowers | 1984 | 2888 | 640 |
| Motorcycle | 1988 | 2964 | 280 |
| Pipes | 1940 | 2940 | 300 |
| Playroom | 1904 | 2796 | 330 |
| Playtable | 1852 | 2720 | 290 |
| Recycle | 1944 | 2880 | 260 |
| Shelves | 1988 | 2952 | 240 |
| Storage | 1988 | 2792 | 660 |
| Sword1 | 2004 | 2928 | 260 |
| Sword2 | 1956 | 2884 | 370 |
| Umbrella | 2008 | 2960 | 250 |
Table 2. Error rates of the Census- and ImpCensus-based stereo matching algorithms using the imperfectly rectified stereo images of the Middlebury dataset. Bold results represent the lowest error rates among the tested methods for each sub-dataset.

| Dataset | Census/Win | ImpCensus/Win/R1 | ImpCensus/Win/R2 | Census/GC | ImpCensus/GC/R1 | ImpCensus/GC/R2 |
| --- | --- | --- | --- | --- | --- | --- |
| Adirondack | 45.30 | 38.45 | **37.62** | 52.71 | 37.67 | **31.14** |
| Backpack | 21.38 | **16.77** | 17.38 | 22.59 | 14.65 | **14.43** |
| Bicycle | 51.21 | 45.10 | **42.12** | 53.69 | 43.13 | **35.01** |
| Cable | 51.84 | 45.47 | **42.46** | 63.39 | 47.07 | **36.37** |
| Classroom | 30.41 | **26.89** | 28.45 | 38.47 | 25.49 | **18.85** |
| Couch | 30.65 | **29.42** | 30.50 | 33.34 | 28.05 | **26.83** |
| Flowers | 61.64 | 56.38 | **54.37** | 64.70 | 54.43 | **49.27** |
| Motorcycle | 29.50 | 21.63 | **20.62** | 37.30 | 21.25 | **17.52** |
| Pipes | 33.59 | 28.08 | **25.64** | 38.46 | 27.52 | **22.70** |
| Playroom | 45.23 | **43.14** | 43.64 | 49.79 | 42.09 | **39.13** |
| Playtable | 65.28 | 39.50 | **37.38** | 70.29 | 34.92 | **31.20** |
| Recycle | 44.81 | **40.27** | 40.39 | 50.74 | 36.91 | **31.72** |
| Shelves | 54.33 | 48.27 | **48.11** | 56.88 | 47.38 | **44.78** |
| Storage | 54.70 | 49.74 | **47.33** | 63.41 | 51.99 | **44.81** |
| Sword1 | 16.45 | **15.61** | 16.43 | 18.41 | 14.08 | **13.99** |
| Sword2 | 70.24 | 62.97 | **57.66** | 71.63 | 52.73 | **36.92** |
| Umbrella | 66.17 | **63.64** | 64.40 | 70.62 | 62.23 | **56.46** |
| Average | 45.46 | 39.49 | **38.50** | 50.38 | 37.74 | **32.42** |
Table 3. Error rates of the Rank- and ImpRank-based stereo matching algorithms using the imperfectly rectified stereo images of the Middlebury dataset. Bold results represent the lowest error rates among the tested methods for each sub-dataset.

| Dataset | Rank/Win | ImpRank/Win/R1 | ImpRank/Win/R2 | Rank/GC | ImpRank/GC/R1 | ImpRank/GC/R2 |
| --- | --- | --- | --- | --- | --- | --- |
| Adirondack | 59.68 | **50.66** | 54.20 | 48.30 | 34.06 | **33.27** |
| Backpack | 25.60 | **20.21** | 24.27 | 20.18 | **14.64** | 16.01 |
| Bicycle | 60.81 | 52.77 | **51.79** | 48.57 | 35.77 | **31.24** |
| Cable | 67.54 | **60.01** | 63.09 | 51.65 | **34.82** | 35.74 |
| Classroom | 41.49 | **38.41** | 48.13 | 23.91 | 11.23 | **10.29** |
| Couch | 38.48 | **35.84** | 45.36 | 30.53 | **27.18** | 32.48 |
| Flowers | 72.03 | **62.64** | 62.66 | 60.68 | 48.51 | **46.57** |
| Motorcycle | 43.21 | **29.73** | 31.26 | 30.47 | 17.86 | **17.55** |
| Pipes | 42.67 | 34.91 | **34.13** | 32.96 | 25.24 | **23.51** |
| Playroom | 55.61 | **50.74** | 54.73 | 45.05 | **38.27** | 39.77 |
| Playtable | 73.64 | 54.26 | **52.70** | 69.88 | **39.36** | 40.49 |
| Recycle | 64.45 | **53.01** | 56.38 | 43.92 | 30.58 | **29.46** |
| Shelves | 58.34 | **53.21** | 57.39 | 55.57 | **46.56** | 46.75 |
| Storage | 69.30 | **62.34** | 64.56 | 57.90 | 36.67 | **35.60** |
| Sword1 | 20.71 | **19.07** | 23.50 | 14.08 | **11.57** | 13.80 |
| Sword2 | 78.81 | **75.55** | 77.43 | 63.11 | 44.13 | **39.95** |
| Umbrella | 71.83 | **70.07** | 72.47 | 64.13 | 52.05 | **50.66** |
| Average | 55.54 | **48.44** | 51.41 | 44.76 | 32.26 | **31.95** |
Table 4. Error rates of the AD- and ImpAD-based stereo matching algorithms using the imperfectly rectified stereo images of the Middlebury dataset. Bold results represent the lowest error rates among the tested methods for each sub-dataset.

| Dataset | AD/Win | ImpAD/Win/R1 | ImpAD/Win/R2 | AD/GC | ImpAD/GC/R1 | ImpAD/GC/R2 |
| --- | --- | --- | --- | --- | --- | --- |
| Adirondack | 54.17 | **46.33** | 47.48 | 42.09 | **29.28** | 31.11 |
| Backpack | 28.57 | **22.54** | 24.39 | 38.25 | **21.59** | 22.28 |
| Bicycle | 61.27 | 55.25 | **53.39** | 50.68 | 38.03 | **35.97** |
| Cable | 65.63 | **59.14** | 60.25 | 43.09 | 33.97 | **33.32** |
| Classroom | 40.77 | **38.37** | 44.96 | 18.13 | **12.42** | 12.96 |
| Couch | 37.95 | **35.09** | 40.32 | 33.86 | **30.99** | 33.78 |
| Flowers | 69.47 | 62.56 | **62.35** | 59.56 | 52.83 | **47.05** |
| Motorcycle | 41.39 | 28.82 | **28.54** | 40.81 | 25.01 | **22.64** |
| Pipes | 43.49 | 35.38 | **33.01** | 49.42 | 30.39 | **25.62** |
| Playroom | 50.55 | **49.43** | 51.82 | 43.44 | **40.26** | 41.67 |
| Playtable | 69.14 | 50.99 | **46.05** | 67.20 | 44.29 | **35.48** |
| Recycle | 56.66 | **52.55** | 55.04 | 30.53 | **25.90** | 30.18 |
| Shelves | 57.14 | **52.39** | 54.05 | 55.84 | **49.21** | 49.84 |
| Storage | 74.25 | **68.04** | 70.28 | 68.02 | 49.34 | **45.84** |
| Sword1 | 28.13 | **23.65** | 26.50 | 34.18 | **18.98** | 19.63 |
| Sword2 | 76.54 | **73.75** | 74.28 | 54.70 | 42.99 | **36.46** |
| Umbrella | 70.67 | **68.18** | 69.36 | 47.41 | **42.60** | 43.98 |
| Average | 54.46 | **48.38** | 49.53 | 45.72 | 34.59 | **33.40** |
Table 5. Error rates of the SD- and ImpSD-based stereo matching algorithms using the imperfectly rectified stereo images of the Middlebury dataset. Bold results represent the lowest error rates among the tested methods for each sub-dataset.

| Dataset | SD/Win | ImpSD/Win/R1 | ImpSD/Win/R2 | SD/GC | ImpSD/GC/R1 | ImpSD/GC/R2 |
| --- | --- | --- | --- | --- | --- | --- |
| Adirondack | 54.19 | **46.28** | 48.20 | 41.67 | **31.37** | 37.42 |
| Backpack | 29.46 | **23.01** | 25.15 | 40.88 | **23.02** | 23.36 |
| Bicycle | 62.21 | 56.31 | **54.57** | 50.23 | 44.38 | **42.52** |
| Cable | 65.30 | **59.24** | 61.41 | 41.96 | **34.37** | 35.24 |
| Classroom | 40.46 | **38.15** | 41.93 | 16.49 | **12.59** | 13.18 |
| Couch | 38.35 | **35.92** | 42.31 | 34.86 | **32.33** | 36.47 |
| Flowers | 70.10 | **63.60** | 64.52 | 56.56 | **44.75** | 51.13 |
| Motorcycle | 42.07 | **29.45** | 29.84 | 41.33 | 25.47 | **25.17** |
| Pipes | 44.34 | 35.90 | **33.85** | 50.06 | 30.97 | **27.12** |
| Playroom | 51.04 | **50.32** | 52.23 | 42.66 | **42.39** | 45.01 |
| Playtable | 69.25 | 51.93 | **48.25** | 67.74 | 46.73 | **37.77** |
| Recycle | 56.22 | **52.67** | 54.10 | 32.77 | 30.59 | **27.42** |
| Shelves | 56.72 | **52.35** | 54.70 | 53.70 | **50.51** | 53.21 |
| Storage | 74.35 | **67.51** | 70.09 | 66.77 | **46.59** | 46.97 |
| Sword1 | 29.97 | **25.11** | 27.80 | 36.40 | **19.40** | 19.93 |
| Sword2 | 76.56 | **73.68** | 74.99 | 51.32 | 41.65 | **38.54** |
| Umbrella | 70.34 | **68.05** | 69.78 | 47.54 | 44.45 | **42.62** |
| Average | 54.76 | **48.79** | 50.22 | 45.47 | **35.39** | 35.47 |
Table 6. Error rates of the NCC and ImpNCC stereo matching algorithms using the imperfectly rectified stereo images of the Middlebury dataset. Bold results represent the lowest error rates among the tested methods for each sub-dataset.

| Dataset | NCC | ImpNCC/R1 | ImpNCC/R2 |
| --- | --- | --- | --- |
| Adirondack | 49.40 | **45.22** | 46.11 |
| Backpack | 25.42 | **21.43** | 22.11 |
| Bicycle | 58.23 | 54.09 | **53.12** |
| Cable | 54.85 | 49.64 | **48.51** |
| Classroom | 43.38 | **41.67** | 43.37 |
| Couch | **35.10** | 35.34 | 36.11 |
| Flowers | 62.18 | 59.61 | **59.31** |
| Motorcycle | 36.17 | **26.90** | 27.52 |
| Pipes | 36.61 | 30.90 | **29.05** |
| Playroom | 50.22 | 49.12 | **48.30** |
| Playtable | 66.29 | **39.50** | 40.61 |
| Recycle | 52.98 | **52.56** | 53.83 |
| Shelves | 54.60 | **48.88** | 49.72 |
| Storage | 56.47 | 53.05 | **52.93** |
| Sword1 | 25.85 | **23.91** | 24.96 |
| Sword2 | 74.35 | 70.78 | **68.46** |
| Umbrella | 72.98 | **72.26** | 72.91 |
| Average | 50.30 | **45.58** | 45.70 |
Table 7. Error rates of the ZNCC and ImpZNCC stereo matching algorithms using the imperfectly rectified stereo images of the Middlebury dataset. Bold results represent the lowest error rates among the tested methods for each sub-dataset.

| Dataset | ZNCC | ImpZNCC/R1 | ImpZNCC/R2 |
| --- | --- | --- | --- |
| Adirondack | 47.49 | **42.45** | 43.36 |
| Backpack | 24.92 | **21.05** | 21.68 |
| Bicycle | 56.77 | 51.68 | **49.75** |
| Cable | 57.76 | 51.58 | **50.26** |
| Classroom | 34.68 | **32.73** | 34.46 |
| Couch | **35.62** | 35.98 | 36.71 |
| Flowers | 63.47 | 60.16 | **59.44** |
| Motorcycle | 35.97 | **25.78** | 26.30 |
| Pipes | 38.39 | 32.07 | **30.01** |
| Playroom | 50.37 | 49.34 | **48.65** |
| Playtable | 68.12 | **40.03** | 40.98 |
| Recycle | 51.04 | 49.45 | **49.31** |
| Shelves | 55.46 | **49.05** | 49.89 |
| Storage | 55.96 | 51.23 | **50.13** |
| Sword1 | 22.46 | **21.23** | 22.03 |
| Sword2 | 70.68 | 63.53 | **58.44** |
| Umbrella | 67.68 | **65.99** | 67.05 |
| Average | 49.23 | 43.73 | **43.44** |
Table 8. Error rates of the Census- and ImpCensus-based stereo matching algorithms using the imperfectly rectified stereo images of the Middlebury dataset. The stereo images have varying exposure. Bold results represent the lowest error rates among the tested methods for each sub-dataset.

| Dataset | Census/Win | ImpCensus/Win/R1 | ImpCensus/Win/R2 | Census/GC | ImpCensus/GC/R1 | ImpCensus/GC/R2 |
| --- | --- | --- | --- | --- | --- | --- |
| Adirondack | 43.80 | 36.95 | **36.28** | 51.66 | 36.33 | **28.32** |
| Backpack | 20.21 | **17.06** | 17.68 | 21.19 | 15.21 | **14.28** |
| Bicycle | 50.25 | 44.37 | **41.46** | 54.02 | 42.58 | **34.16** |
| Cable | 49.86 | 43.36 | **40.83** | 63.30 | 46.01 | **36.62** |
| Classroom | 38.90 | **35.97** | 37.80 | 47.51 | 37.12 | **31.62** |
| Couch | 34.50 | **33.04** | 34.40 | 39.27 | 32.39 | **30.87** |
| Flowers | 63.34 | 58.66 | **56.90** | 67.66 | 58.05 | **52.68** |
| Motorcycle | 27.28 | **22.60** | 22.76 | 32.79 | 22.39 | **19.43** |
| Pipes | 34.02 | 28.30 | **25.90** | 39.63 | 27.91 | **23.09** |
| Playroom | 45.01 | **43.02** | 43.71 | 49.19 | 42.17 | **39.71** |
| Playtable | 66.23 | 40.58 | **39.07** | 70.89 | 36.36 | **33.73** |
| Recycle | 48.07 | **43.61** | 44.49 | 55.71 | 41.90 | **36.92** |
| Shelves | 54.89 | 47.91 | **47.59** | 57.75 | 47.69 | **44.95** |
| Storage | 54.14 | 49.07 | **46.77** | 62.68 | 50.95 | **44.50** |
| Sword1 | 17.06 | **16.27** | 17.09 | 18.87 | 14.55 | **14.39** |
| Sword2 | 81.65 | 76.48 | **71.25** | 83.43 | 71.80 | **56.88** |
| Umbrella | 65.12 | **64.45** | 65.98 | 68.53 | 63.52 | **59.46** |
| Average | 46.72 | 41.28 | **40.59** | 52.01 | 40.41 | **35.39** |
Table 9. Error rates of the Census- and ImpCensus-based stereo matching algorithms using the imperfectly rectified stereo images of the Middlebury dataset. The stereo images have varying illumination. Bold results represent the lowest error rates among the tested methods for each sub-dataset.

| Dataset | Census/Win | ImpCensus/Win/R1 | ImpCensus/Win/R2 | Census/GC | ImpCensus/GC/R1 | ImpCensus/GC/R2 |
| --- | --- | --- | --- | --- | --- | --- |
| Adirondack | 68.60 | **64.29** | 64.45 | 75.65 | 66.42 | **63.04** |
| Backpack | 33.48 | 33.20 | **32.69** | 36.01 | **33.10** | 33.18 |
| Bicycle | 72.53 | 71.13 | **70.62** | 74.99 | 71.79 | **70.45** |
| Cable | 82.68 | **80.65** | 80.68 | 87.24 | 82.38 | **79.88** |
| Classroom | 73.69 | **73.43** | 75.44 | 79.23 | 76.70 | **75.19** |
| Couch | 53.99 | 51.60 | **51.59** | 62.34 | 53.24 | **48.40** |
| Flowers | 76.94 | 74.54 | **74.20** | 79.06 | 74.36 | **72.13** |
| Motorcycle | 48.51 | **46.76** | 47.90 | 55.12 | 48.85 | **46.40** |
| Pipes | 58.23 | 52.11 | **50.86** | 70.17 | 58.28 | **52.28** |
| Playroom | 60.61 | **59.72** | 60.51 | 65.69 | 61.09 | **59.19** |
| Playtable | 80.49 | 76.55 | **69.63** | 83.04 | 76.80 | **62.73** |
| Recycle | 62.50 | **59.45** | 60.03 | 69.56 | 60.18 | **56.58** |
| Shelves | 66.01 | **63.44** | 64.48 | 69.55 | 63.29 | **62.24** |
| Storage | 72.36 | 70.28 | **69.48** | 78.01 | 72.96 | **68.94** |
| Sword1 | 30.36 | **30.06** | 31.84 | 36.05 | 31.30 | **31.18** |
| Sword2 | 79.17 | 75.41 | **73.05** | 81.05 | 71.01 | **59.94** |
| Umbrella | **78.88** | 78.98 | 79.64 | 81.43 | 79.58 | **79.01** |
| Average | 64.65 | 62.45 | **62.18** | 69.66 | 63.61 | **60.04** |
Table 10. Average error rates of the proposed local stereo matching algorithms with different R using the perfectly rectified stereo images of the Middlebury dataset.

| Dataset | Census/Win | ImpCensus/Win/R1 | ImpCensus/Win/R2 |
| --- | --- | --- | --- |
| Aloe | 20.293 | 21.012 | 22.103 |
| Baby1 | 14.658 | 15.018 | 15.263 |
| Baby2 | 20.262 | 20.879 | 22.654 |
| Baby3 | 20.523 | 20.880 | 21.835 |
| Bowling1 | 29.245 | 30.183 | 33.428 |
| Bowling2 | 23.512 | 24.401 | 25.628 |
| Cloth1 | 10.917 | 11.048 | 12.553 |
| Cloth2 | 18.245 | 18.603 | 19.083 |
| Cloth3 | 13.793 | 14.132 | 15.834 |
| Cloth4 | 18.586 | 18.952 | 19.463 |
| Flowerpots | 26.919 | 27.802 | 28.128 |
| Lampshade1 | 35.201 | 36.254 | 38.236 |
| Lampshade2 | 37.060 | 37.974 | 39.137 |
| Midd1 | 52.165 | 52.680 | 53.572 |
| Midd2 | 49.183 | 49.800 | 50.178 |
| Monopoly | 35.374 | 35.967 | 37.907 |
| Plastic | 62.287 | 62.492 | 67.283 |
| Rocks1 | 14.634 | 14.971 | 15.248 |
| Rocks2 | 14.426 | 14.639 | 14.817 |
| Wood1 | 18.174 | 18.532 | 19.565 |
| Wood2 | 17.150 | 17.499 | 19.058 |
| Average | 26.315 | 26.844 | 27.027 |
Table 11. Computation time (in seconds) required to compute the disparity space image $C$.

| Function | ImpAD | ImpSD | ImpNCC | ImpZNCC | ImpRank | ImpCensus |
| --- | --- | --- | --- | --- | --- | --- |
| R = 0 | 2 | 2 | 10 | 10 | 41 | 63 |
| R = 1 | 10 | 9 | 32 | 31 | 144 | 85 |
| R = 2 | 14 | 13 | 56 | 55 | 207 | 84 |
Table 12. Average error rates of the proposed local stereo matching algorithms with different R using the imperfectly rectified stereo images of the Middlebury dataset.

| Method | R = 0 | R = 1 | R = 2 | R = 3 | R = 4 |
| --- | --- | --- | --- | --- | --- |
| ImpCensus-based | 45.46 | 39.49 | 38.50 | 39.27 | 40.34 |
| ImpRank-based | 55.54 | 48.44 | 51.41 | 53.78 | 56.07 |
| ImpAD-based | 54.46 | 48.38 | 49.53 | 49.87 | 50.62 |
| ImpSD-based | 54.76 | 48.79 | 50.22 | 51.13 | 53.49 |
| ImpNCC-based | 50.30 | 45.58 | 45.70 | 45.92 | 46.53 |
| ImpZNCC-based | 49.23 | 43.73 | 43.44 | 44.06 | 45.33 |
