# An Empirical Study of Exhaustive Matching for Improving Motion Field Estimation


## Abstract


## 1. Introduction

#### 1.1. Optical Flow Estimation

#### 1.2. Robust Motion Estimation

#### 1.3. Related Works

#### 1.4. Contribution of This Version

- (a) We determine the parameters of the optical flow estimation model using particle swarm optimization (PSO).
- (b) Our proposal was evaluated on the large MPI-Sintel database using both its training and test sets.
- (c) We estimated the occluded pixels between two consecutive images based on the largest values of the optical flow error. We avoid computing exhaustive matching at these occluded pixels because the matching there is unreliable.
- (d) We divided the gradient of the image into three sets: (i) small gradients, (ii) medium gradients, and (iii) large gradients. In sets (ii) and (iii), we placed additional matches at uniform locations and evaluated the performance on MPI-Sintel.
- (e) We extended the Horn-Schunck optical flow to handle additional information coming from exhaustive search. This extended method was evaluated on the MPI-Sintel database, and the obtained results were compared with those of our proposal.
- (f) We performed exhaustive matching using colors and using gradients in order to make this experimental study more complete.
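Contribution (c) treats pixels with the largest optical flow data error as occluded and skips exhaustive matching there. A minimal sketch of that idea (the function name, array layout, and rounding-based warp are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def estimate_occlusions(I0, I1, u, theta_occ):
    """Mark pixels whose warped photometric error exceeds theta_occ.

    A sketch of contribution (c): pixels with the largest data error
    |I0(x) - I1(x + u(x))| are treated as occluded, and exhaustive
    matching is skipped there. I0, I1 are grayscale float arrays,
    u is an (H, W, 2) flow field.
    """
    H, W = I0.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Warp the target coordinates with the current flow estimate.
    xw = np.clip(np.round(xs + u[..., 0]).astype(int), 0, W - 1)
    yw = np.clip(np.round(ys + u[..., 1]).astype(int), 0, H - 1)
    error = np.abs(I0 - I1[yw, xw])   # data term |I0(x) - I1(x + u)|
    return error > theta_occ          # occlusion mask o(x)
```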

## 2. Materials and Methods

#### 2.1. Proposed Model

#### 2.2. Linearization

#### 2.2.1. Decoupling Variable

#### 2.2.2. Color Model

#### 2.2.3. Optical Flow to Handle Large Displacements

#### 2.2.4. Occlusion Estimation

#### 2.2.5. Solving the Model

- (1) Solve exhaustively: $${J}_{d}\left({\mathbf{u}}_{\mathbf{e}}\left(\mathbf{x}\right)\right)=\underset{{\mathbf{u}}_{\mathbf{e}}\left(\mathbf{x}\right)}{\mathrm{min}}{\int}_{\mathrm{\Omega}}\left|{I}_{0}\left(\mathbf{x}\right)-{I}_{1}(\mathbf{x}+{\mathbf{u}}_{\mathbf{e}}\left(\mathbf{x}\right))\right|d\mathbf{x}.$$
- (2) Fix ${\mathbf{v}}_{\mathbf{1}}\left(\mathbf{x}\right)$, ${\mathbf{v}}_{\mathbf{2}}\left(\mathbf{x}\right)$, ${\mathbf{v}}_{\mathbf{3}}\left(\mathbf{x}\right)$, ${\mathbf{v}}_{\mathbf{4}}\left(\mathbf{x}\right)$, and ${\mathbf{v}}_{\mathbf{5}}\left(\mathbf{x}\right)$ and then solve for $\mathbf{u}\left(\mathbf{x}\right)$: $$\underset{\mathbf{u}\left(\mathbf{x}\right)}{\mathrm{min}}\left\{{\int}_{\mathrm{\Omega}}\sum _{i=1}^{5}\frac{{\overline{\alpha}}_{i}\left(\mathbf{x}\right){(\mathbf{u}\left(\mathbf{x}\right)-{\mathbf{v}}_{\mathbf{i}}\left(\mathbf{x}\right))}^{2}}{2\theta}d\mathbf{x}+{\int}_{\mathrm{\Omega}}\kappa \chi \left(\mathbf{x}\right)(1-\mathbf{o}\left(\mathbf{x}\right))(\mathbf{u}\left(\mathbf{x}\right)-{\mathbf{u}}_{\mathbf{e}}\left(\mathbf{x}\right))d\mathbf{x}+{\int}_{\mathrm{\Omega}}\parallel \nabla u\parallel d\mathbf{x}\right\}.$$
- (3) Fix $\mathbf{u}\left(\mathbf{x}\right)$ and solve the problem for ${\mathbf{v}}_{\mathbf{1}}\left(\mathbf{x}\right)$, ${\mathbf{v}}_{\mathbf{2}}\left(\mathbf{x}\right)$, ${\mathbf{v}}_{\mathbf{3}}\left(\mathbf{x}\right)$, ${\mathbf{v}}_{\mathbf{4}}\left(\mathbf{x}\right)$, and ${\mathbf{v}}_{\mathbf{5}}\left(\mathbf{x}\right)$: $$\underset{{\mathbf{v}}_{\mathbf{i}}\left(\mathbf{x}\right)}{\mathrm{min}}\left\{{\int}_{\mathrm{\Omega}}\sum _{i=1}^{5}{(\mathbf{u}\left(\mathbf{x}\right)-{\mathbf{v}}_{\mathbf{i}}\left(\mathbf{x}\right))}^{2}d\mathbf{x}+{\int}_{\mathrm{\Omega}}\left|{\rho}_{i}\left({\mathbf{v}}_{\mathbf{i}}\left(\mathbf{x}\right)\right)\right|d\mathbf{x}\right\}.$$
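Step (1) can be approximated per pixel by brute-force block matching over a bounded search window. A minimal sketch, assuming a grayscale sum-of-absolute-differences cost; the parameter names `P` (patch radius) and `D` (search radius) mirror the settings reported later (e.g., $D=18$, $P=10$ for Grove2 in Figure 18):

```python
import numpy as np

def exhaustive_match(I0, I1, x, y, P=4, D=8):
    """Brute-force matching for one pixel: a sketch of step (1).

    Searches a (2D+1)^2 window in I1 for the displacement u_e that
    minimizes the summed absolute difference over a (2P+1)^2 patch
    centered at (x, y) in I0.
    """
    H, W = I0.shape
    patch0 = I0[y - P:y + P + 1, x - P:x + P + 1]
    best, best_uv = np.inf, (0, 0)
    for dy in range(-D, D + 1):
        for dx in range(-D, D + 1):
            yy, xx = y + dy, x + dx
            if P <= yy < H - P and P <= xx < W - P:
                patch1 = I1[yy - P:yy + P + 1, xx - P:xx + P + 1]
                cost = np.abs(patch0 - patch1).sum()
                if cost < best:
                    best, best_uv = cost, (dx, dy)
    return best_uv  # u_e(x) = (u1, u2)
```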

**Proposition 1.**

**Proposition 2.**

#### 2.3. An Extended Version of Horn-Schunck’s Optical Flow


## 3. Implementation and Pseudo-Code

#### 3.1. Exhaustive Search

#### 3.2. Matching Confidence Value

#### 3.3. Construction of $\chi \left(\mathbf{x}\right)$

#### 3.3.1. Uniform Location

#### 3.3.2. Random Location

#### 3.3.3. Location in Maximum Values of the Gradient Magnitude

#### 3.3.4. Location in the Large Magnitudes of the Gradient in a Uniform Location

#### 3.3.5. Location in the Medium Magnitudes of the Gradient in a Uniform Location

#### 3.3.6. $\chi \left(\mathbf{x}\right)$ Value
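The location strategies of Sections 3.3.1–3.3.3 can be sketched as mask constructors. A hypothetical helper (function name, arguments, and defaults are ours), assuming `chi` marks the pixels where exhaustive matching is performed:

```python
import numpy as np

def chi_mask(grad_mag, strategy, n_points=64, step=8, seed=0):
    """Build the indicator chi(x) of where exhaustive matching runs.

    A sketch of three strategies: a uniform grid, random locations,
    and the maxima of the gradient magnitude.
    """
    H, W = grad_mag.shape
    chi = np.zeros((H, W), dtype=bool)
    if strategy == "uniform":
        chi[::step, ::step] = True                 # regular grid
    elif strategy == "random":
        rng = np.random.default_rng(seed)
        idx = rng.choice(H * W, size=n_points, replace=False)
        chi.flat[idx] = True                       # random sample
    elif strategy == "max_gradient":
        idx = np.argsort(grad_mag, axis=None)[-n_points:]
        chi.flat[idx] = True                       # largest gradient magnitudes
    return chi
```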

#### 3.4. Pseudo Code

**Algorithm 1:** Integration of additional information coming from exhaustive matching.

*Input*: two consecutive color images ${I}_{0}\left(\mathbf{x}\right)$, ${I}_{1}\left(\mathbf{x}\right)$.

*Parameters*: α, λ, P, ${\theta}_{0}$, ${\tau}_{d}$, β, $MaxIter$, κ, $Numbe{r}_{scales}$, $Numbe{r}_{warpings}$, ${\theta}_{occ}$.

*Output*: optical flow $\mathbf{u}\left(\mathbf{x}\right)=({u}_{1}\left(\mathbf{x}\right),{u}_{2}\left(\mathbf{x}\right))$.

Down-scale ${I}_{0}\left(\mathbf{x}\right)$ and ${I}_{1}\left(\mathbf{x}\right)$.

Initialization: $\mathbf{u}\left(\mathbf{x}\right)=\mathbf{v}\left(\mathbf{x}\right)$; $w=0$.

**for** $scales\leftarrow Numbe{r}_{scales}$ **to** 1:

- Construct ψ.
- At the specific locations defined by the location strategy, compute ${\mathbf{u}}_{\mathbf{e}}\left(\mathbf{x}\right)$ using Equation (31).
- Using ${\mathbf{u}}_{\mathbf{e}}\left(\mathbf{x}\right)$, compute $c\left(\mathbf{x}\right)$ and update $\chi \left(\mathbf{x}\right)$.
- **for** $w\leftarrow 1$ **to** $Numbe{r}_{warpings}$:
  - Compute $\alpha \left(\mathbf{x}\right)$ and the occlusion map $\mathbf{o}\left(\mathbf{x}\right)$.
  - Compute ${\mathbf{v}}_{\mathbf{i}}\left(\mathbf{x}\right)$, Equations (24) and (25).
  - Compute $\mathbf{u}$, Equation (21).
  - Compute ξ, Equations (22) and (23).
  - Update ${\kappa}_{n}=\kappa {\left(0.5\right)}^{scales}$.
- **end for**
- Up-sample $\mathbf{u}\left(\mathbf{x}\right)$.

**end for**

Output $\mathbf{u}\left(\mathbf{x}\right)$.
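The up-sampling step at the end of each scale must rescale displacement magnitudes together with the grid when moving to a finer level. A minimal nearest-neighbor sketch (the function name and interpolation choice are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def upsample_flow(u, shape):
    """Nearest-neighbor up-sampling of a flow field, rescaling magnitudes.

    u is an (h, w, 2) flow field; shape = (H, W) is the finer grid.
    Displacements are multiplied by the grid scaling factor, since a
    one-pixel motion at a coarse scale spans several fine-scale pixels.
    """
    H, W = shape
    h, w = u.shape[:2]
    ys = np.arange(H) * h // H          # nearest coarse row per fine row
    xs = np.arange(W) * w // W          # nearest coarse column per fine column
    up = u[ys][:, xs].astype(float)
    up[..., 0] *= W / w                 # horizontal displacement scales with width
    up[..., 1] *= H / h                 # vertical displacement scales with height
    return up
```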

## 4. Experiments and Database

#### 4.1. Middlebury Database

#### 4.2. The MPI-Sintel Database

#### 4.3. Experiments with the Middlebury Database

#### 4.4. Parameter Estimation

#### 4.5. Experiments with MPI-Sintel

#### 4.6. Parameter Estimation of the Proposed Model with MPI-Sintel

#### 4.6.1. Parameter Model Estimation

#### 4.6.2. Exhaustive Search Parameter Estimation

#### 4.7. Parameter Estimation of the Extended Horn-Schunck

#### 4.7.1. ${\alpha}_{hs}$ Parameter

#### 4.7.2. Exhaustive Search Parameter Estimation

## 5. Results

#### 5.1. Reported Results in Middlebury

#### 5.2. Specific Location of Matching $\kappa \ne 0$

#### 5.2.1. Uniform Locations

#### 5.2.2. Random Locations

#### 5.2.3. Locations for Maximum Magnitudes of the Gradient

#### 5.3. Evaluation in MPI-Sintel

#### 5.3.1. Uniform Location

#### 5.3.2. Random Location

#### 5.3.3. Location on Maximum Gradients

#### 5.4. Summary

#### 5.4.1. Combined Uniform–Large Gradient Locations

#### 5.4.2. Combined Uniform–Medium Gradient Locations

#### 5.4.3. Results Obtained by Horn-Schunck

#### 5.4.4. Results Obtained by Horn-Schunck Using Uniform Locations

## 6. Discussion

## Funding

## Acknowledgments

## Conflicts of Interest

## References

- Sánchez, J.; Meinhardt-Llopis, E.; Facciolo, G. TV-L1 Optical Flow Estimation. Image Process. Line **2013**, 2013, 137–150.
- Horn, B.K.P.; Schunck, B.G. Determining Optical Flow. Artif. Intell. **1981**, 17, 185–204.
- Xu, L.; Jia, J.; Matsushita, Y. Motion Detail Preserving Optical Flow. In Proceedings of the IEEE CVPR, San Francisco, CA, USA, 13–18 June 2010.
- Palomares, R.P.; Haro, G.; Ballester, C. A Rotation-Invariant Regularization Term for Optical Flow Related Problems. In Lecture Notes in Computer Science, Proceedings of the ACCV’14, Singapore, 1–5 November 2014; Springer: Cham, Switzerland, 2014; Volume 9007, pp. 304–319.
- Wedel, A.; Pock, T.; Zach, C.; Bischof, H.; Cremers, D. An Improved Algorithm for TV-L1 Optical Flow. In Statistical and Geometrical Approaches to Visual Motion Analysis, Proceedings of the International Dagstuhl Seminar, Dagstuhl Castle, Germany, 13–18 July 2008; Volume 5604.
- Brox, T.; Bregler, C.; Malik, J. Large Displacement Optical Flow. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami Beach, FL, USA, 20–25 June 2009.
- Stoll, M.; Volz, S.; Bruhn, A. Adaptive Integration of Feature Matches into Variational Optical Flow Methods. In Proceedings of the 11th Asian Conference on Computer Vision, Daejeon, Korea, 5–9 November 2012.
- Lazcano, V. Some Problems in Depth Enhanced Video Processing. Ph.D. Thesis, Universitat Pompeu Fabra, Barcelona, Spain, 2016. Available online: http://www.tdx.cat/handle/10803/373917 (accessed on 17 October 2018).
- Bruhn, A.; Weickert, J.; Feddern, C.; Kohlberger, T.; Schnoerr, C. Real-time Optical Flow Computation with Variational Methods. In Proceedings of the International Conference on Computer Analysis of Images and Patterns, Groningen, The Netherlands, 25–27 August 2003; pp. 222–229.
- Weinzaepfel, P.; Revaud, J.; Harchaoui, Z.; Schmid, C. DeepFlow: Large Displacement Optical Flow with Deep Matching. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013.
- Timofte, R.; van Gool, L. SparseFlow: Sparse Matching for Small to Large Displacement Optical Flow. In Proceedings of the 2015 IEEE Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 5–9 January 2015.
- Kennedy, R.; Taylor, C.J. Optical Flow with Geometric Occlusion Estimation and Fusion of Multiple Frames. In Energy Minimization Methods in Computer Vision and Pattern Recognition, Proceedings of the International Workshop, Hong Kong, China, 13–16 January 2015; Springer: Cham, Switzerland, 2015; Volume 8932, pp. 364–377.
- Fortun, D.; Bouthemy, P.; Kervrann, C. Aggregation of Local Parametric Candidates with Exemplar-Based Occlusion Handling for Optical Flow. Comput. Vis. Image Underst. **2016**, 145, 81–94.
- Lazcano, V.; Garrido, L.; Ballester, C. Jointly Optical Flow and Occlusion Estimation for Images with Large Displacements. In Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Madeira, Portugal, 27–29 January 2018; Volume 5, pp. 588–595.
- Lazcano, V. Study of Specific Location of Exhaustive Matching in Order to Improve the Optical Flow Estimation. In Information Technology—New Generations, Proceedings of the 15th International Conference on Information Technology, Las Vegas, NV, USA, 16–18 April 2018; pp. 603–661.
- Steinbruecker, F.; Pock, T.; Cremers, D. Large Displacement Optical Flow Computation without Warping. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 185–203.
- Xiao, J.; Cheng, H.; Sawhney, H.; Rao, C.; Isnardi, M. Bilateral Filtering-based Flow Estimation with Occlusion Detection. In Proceedings of the European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; pp. 221–224.
- Meinhardt-Llopis, E.; Sanchez, J.; Kondermann, D. Horn-Schunck Optical Flow with a Multi-Scale Strategy. Image Process. Line **2013**, 2013, 151–172.
- Baker, S.; Scharstein, D.; Lewis, J.; Roth, S.; Black, M.; Szeliski, R. A Database and Evaluation Methodology for Optical Flow. Int. J. Comput. Vis. **2011**, 92, 1–31.
- Butler, D.J.; Wulff, J.; Stanley, G.B.; Black, M.J. A Naturalistic Open Source Movie for Optical Flow Evaluation. In Proceedings of the European Conference on Computer Vision (ECCV), Florence, Italy, 7–13 October 2012; Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; Part IV, LNCS 7577; pp. 611–625.
- Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948.

**Figure 1.** Occlusion estimation comparing the optical flow error with a threshold ${\theta}_{occ}$. (**a**) Frame_0049 of sequence ambush_5; (**b**) Frame_0050; (**c**) occlusion estimation.

**Figure 4.** Correspondence of patches located at the largest magnitudes of the gradient. (**a**) Gradient magnitude of the reference image. (**b**) Gradient magnitudes ordered from the minimum to the maximum. (**c**) Correspondences of the patches located at the maximum gradient magnitudes (white arrows).

**Figure 5.** (**a**) Gradient of the reference image. (**b**) Large gradient set. (**c**) Matching result of patches located on the large gradient uniform grid.

**Figure 6.** (**a**) Gradient of the reference image. (**b**) Medium gradient set. (**c**) Matching result of the patches located on the medium gradient uniform grid.

**Figure 7.** Images of the Middlebury database containing small displacements. (**a**,**b**) frame10 and frame11 of sequence Grove2; (**c**,**d**) frame10 and frame11 of sequence Grove3; (**e**,**f**) frame10 and frame11 of sequence RubberWhale; (**g**,**h**) frame10 and frame11 of sequence Hydrangea; (**i**,**j**) frame10 and frame11 of sequence Urban2; and (**k**,**l**) frame10 and frame11 of sequence Urban2.

**Figure 8.** Images extracted from the MPI-Sintel clean and final versions: frame_0014 of sequence ambush_2. (**a**) frame_0014, clean version; (**b**) frame_0014, final version.

**Figure 10.** Examples of images of the MPI-Sintel database video sequences. (**a**,**b**) frame_0010 and frame_0011, (**c**) color-coded ground truth optical flow of the cave_4 sequence, (**d**) ground truth represented with arrows. (**e**,**f**) frame_0045 and frame_0046, (**g**) color-coded ground truth, and (**h**) arrow representation of the optical flow of the cave_4 sequence. (**i**,**j**) frame_0030 and frame_0031, (**k**) color-coded ground truth optical flow, (**l**) arrow representation (in blue) of the temple_3 sequence. (**m**,**n**) frame_0006 and frame_0007, (**o**) color-coded ground truth, and (**p**) arrow representation (in green) of the ground truth optical flow of the ambush_4 sequence.

**Figure 11.** Color-coded optical flow. (**a**) Color-coded optical flow for Grove2. (**b**) Color-coded optical flow for RubberWhale. (**c**) Ground truth for the Grove2 sequence. (**d**) Ground truth for the RubberWhale sequence.

**Figure 12.** Color-coded optical flow. (**a**) Color-coded optical flow for Grove2. (**b**) Color-coded optical flow for RubberWhale. (**c**) Ground truth for the Grove2 sequence. (**d**) Ground truth for the RubberWhale sequence. (**e**) Weight map $\alpha \left(\mathbf{x}\right)$ for Grove2. (**f**) Weight map $\alpha \left(\mathbf{x}\right)$ for RubberWhale.

**Figure 13.** Image sequences used to estimate the model parameters. In the parameter estimation, we used the first two frames of each considered sequence: (**a**–**d**) the frames of the alley_1 sequence, the ground truth, and our result, respectively. (**e**,**f**) the ambush_2 sequence, (**g**) the ground truth, and (**h**) our result. (**i**,**j**) frames of the bamboo_2 sequence, (**k**) the ground truth, and (**l**) our result. (**m**,**n**) the frames of the bandage_1 sequence, (**o**) the ground truth, and (**p**) our result.

**Figure 14.** Performance of the PSO algorithm. (**a**) Performance of each individual. (**b**) Performance of the best individual in each generation.

**Figure 18.** Exhaustive matching represented with white arrows. (**a**) Exhaustive matching using $D=18$ and $P=10$ for Grove2. (**b**) Exhaustive matching using $D=6$ and $P=8$ for RubberWhale.

| Sequence Name | Number of Images |
|---|---|
| alley | 100 |
| ambush | 174 |
| bamboo | 100 |
| bandage | 100 |
| cave | 100 |
| market | 140 |
| mountain | 50 |
| shaman | 100 |
| sleeping | 100 |
| temple | 100 |
| Total images | 1064 |

| Parameter | Value |
|---|---|
| NIndividuals | 3 |
| NGeneration | 20 |
| NSequences | 8 |
| $\omega$ | 0.5 |
| ${\phi}_{g}$ | 0.5 |
| ${\phi}_{b}$ | 1.0 |
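With these settings, the PSO of Kennedy and Eberhart updates each particle's velocity toward its personal best (weight ${\phi}_{b}$) and the swarm's global best (weight ${\phi}_{g}$), damped by the inertia $\omega$. A minimal sketch (function and argument names are ours; in the paper, the objective `f` would be the average error over the training sequences, here any scalar function of a parameter vector):

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=3, n_generations=20,
                 omega=0.5, phi_b=1.0, phi_g=0.5, seed=0):
    """Minimal particle swarm optimization over a box-bounded domain."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros_like(x)                             # velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()               # global best
    for _ in range(n_generations):
        rb = rng.random((n_particles, dim))
        rg = rng.random((n_particles, dim))
        # Velocity update: inertia + pull toward personal and global bests.
        v = omega * v + phi_b * rb * (pbest - x) + phi_g * rg * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```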

| | alley_1 | ambush_4 | bamboo_2 | bandage_1 | cave_4 | market_5 | mountain_1 | temple_3 |
|---|---|---|---|---|---|---|---|---|
| $EPE$ | 0.22 | 10.21 | 0.23 | 0.78 | 3.49 | 18.37 | 9.71 | 6.50 |
| $AAE$ | 2.30 | 6.68 | 4.32 | 7.25 | 11.91 | 25.53 | 0.74 | 30.07 |
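The tables report the two standard metrics of Baker et al.: the end point error (EPE), the mean Euclidean distance between the estimated and ground truth flow vectors, and the average angular error (AAE) between the space-time vectors $({u}_{1},{u}_{2},1)$ of the estimate and the ground truth. A minimal sketch of both (the function name is illustrative):

```python
import numpy as np

def epe_aae(u, u_gt):
    """End point error and average angular error between (H, W, 2) flows.

    EPE: mean ||u - u_gt||. AAE: mean angle (degrees) between the
    space-time vectors (u1, u2, 1) and (gt1, gt2, 1).
    """
    diff = u - u_gt
    epe = np.sqrt((diff ** 2).sum(axis=-1)).mean()
    num = (u * u_gt).sum(axis=-1) + 1.0
    den = np.sqrt((u ** 2).sum(axis=-1) + 1.0) * \
          np.sqrt((u_gt ** 2).sum(axis=-1) + 1.0)
    aae = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0))).mean()
    return epe, aae
```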

| | alley_1 | ambush_4 | bamboo_2 | bandage_1 | cave_4 | market_5 | mountain_1 | temple_3 |
|---|---|---|---|---|---|---|---|---|
| $EPE$ | 0.48 | 16.72 | 0.34 | 1.81 | 5.47 | 24.82 | 1.14 | 9.47 |
| $AAE$ | 4.81 | 10.79 | 6.27 | 26.57 | 22.69 | 37.87 | 14.96 | 61.54 |

| | alley_1 | ambush_4 | bamboo_2 | bandage_1 | cave_4 | market_5 | mountain_1 | temple_3 |
|---|---|---|---|---|---|---|---|---|
| $EPE$ | 0.48 | 16.87 | 0.34 | 1.80 | 6.42 | 23.30 | 1.15 | 8.92 |
| $AAE$ | 4.77 | 10.89 | 6.22 | 26.32 | 22.94 | 34.87 | 14.97 | 55.27 |

**Table 6.** The reported performance of TV-L1 in Middlebury [1].

| Error | Dime | Grove3 | Hydra | Urban3 | Venus | Average |
|---|---|---|---|---|---|---|
| $EPE$ | 0.162 | 0.721 | 0.258 | 0.711 | 0.394 | 0.4492 |
| $AAE$ | 2.888 | 6.590 | 2.814 | 6.631 | 6.831 | 5.1508 |

| Error | Dime | Grove3 | Hydra | Urban3 | Venus | Average |
|---|---|---|---|---|---|---|
| $EPE$ | 0.0925 | 0.7090 | 0.1729 | 0.7078 | 0.3492 | 0.4063 |
| $AAE$ | 1.8248 | 6.5913 | 2.0626 | 6.9080 | 6.0818 | 4.6937 |

| Error | Dime | Grove3 | Hydra | Urban3 | Venus | Average |
|---|---|---|---|---|---|---|
| $EPE$ | 0.0975 | 0.6924 | 0.1672 | 0.4811 | 0.3034 | 0.3485 |
| $AAE$ | 1.8739 | 6.4759 | 2.0160 | 4.334 | 4.2259 | 3.7872 |

| Error | Dime | Grove3 | Hydra | Urban3 | Venus | Average |
|---|---|---|---|---|---|---|
| $EPE$ | 0.0967 | 0.6798 | 0.1666 | 0.5446 | 0.2988 | 0.3573 |
| $AAE$ | 1.8876 | 6.3781 | 2.0214 | 4.8060 | 4.1977 | 3.8581 |

**Table 10.** Average $EPE$ and $AAE$ obtained by the maximum gradient location strategy in VALIDATION_SET.

| Error | Dime | Grove3 | Hydra | Urban3 | Venus | Average |
|---|---|---|---|---|---|---|
| $EPE$ | 0.0938 | 0.7147 | 0.1772 | 0.7338 | 0.3014 | 0.4042 |
| $AAE$ | 1.8596 | 6.5793 | 2.1159 | 5.9172 | 4.3285 | 4.1601 |

**Table 11.** Summary of results: the end point error and average angular error obtained by uniform locations in the MPI-Sintel training set.

| Sequence Name | $\mathit{EPE}$ | $\mathit{AAE}$ |
|---|---|---|
| alley | 0.36 | 3.05 |
| ambush | 19.72 | 26.07 |
| bamboo | 1.68 | 13.66 |
| bandage | 0.64 | 7.01 |
| cave | 9.26 | 13.13 |
| market | 8.79 | 13.61 |
| mountain | 1.02 | 10.29 |
| shaman | 0.38 | 7.18 |
| sleeping | 0.11 | 1.79 |
| temple | 9.94 | 13.71 |
| Total | 5.58 | 10.87 |
| $EPE+AAE$ | 16.45 | |

**Table 12.** Summary of results: the end point error and average angular error obtained by random locations in the MPI-Sintel training set.

| Sequence Name | $\mathit{EPE}$ | $\mathit{AAE}$ |
|---|---|---|
| alley | 0.36 | 3.04 |
| ambush | 19.59 | 25.70 |
| bamboo | 1.69 | 13.69 |
| bandage | 0.64 | 7.02 |
| cave | 9.54 | 13.52 |
| market | 8.78 | 13.72 |
| mountain | 1.02 | 10.31 |
| shaman | 0.38 | 7.19 |
| sleeping | 0.11 | 1.79 |
| temple | 10.16 | 14.03 |
| Total | 5.62 | 10.91 |
| $EPE+AAE$ | 16.53 | |

**Table 13.** Summary of results: the end point error and average angular error obtained by locations on maximum gradients in the MPI-Sintel training set.

| Sequence Name | $\mathit{EPE}$ | $\mathit{AAE}$ |
|---|---|---|
| alley | 0.37 | 3.05 |
| ambush | 19.83 | 26.50 |
| bamboo | 1.66 | 13.56 |
| bandage | 0.65 | 7.04 |
| cave | 9.72 | 14.18 |
| market | 8.88 | 13.78 |
| mountain | 1.00 | 10.22 |
| shaman | 0.37 | 7.12 |
| sleeping | 0.11 | 1.79 |
| temple | 9.90 | 14.46 |
| Total | 5.64 | 11.10 |
| $EPE+AAE$ | 16.73 | |

| Strategy | $\mathit{EPE}$ | $\mathit{AAE}$ |
|---|---|---|
| Uniform | 5.58 | 10.87 |
| Random | 5.62 | 10.91 |
| Maximum Gradient | 5.64 | 11.10 |

**Table 15.** Summary of results: the end point error and average angular error obtained by the combination of uniform and large gradient locations in the MPI-Sintel training set.

| Sequence Name | $\mathit{EPE}$ | $\mathit{AAE}$ |
|---|---|---|
| alley | 0.36 | 3.03 |
| ambush | 19.85 | 26.43 |
| bamboo | 1.70 | 13.60 |
| bandage | 0.65 | 7.03 |
| cave | 9.84 | 14.39 |
| market | 9.10 | 14.07 |
| mountain | 0.98 | 10.24 |
| shaman | 0.38 | 7.16 |
| sleeping | 0.11 | 1.78 |
| temple | 10.03 | 14.31 |
| Total | 5.70 | 11.14 |
| $EPE+AAE$ | 16.83 | |

**Table 16.** Summary of results: the end point error and average angular error obtained by the combination of uniform and medium gradient locations in the MPI-Sintel training set.

| Sequence Name | $\mathit{EPE}$ | $\mathit{AAE}$ |
|---|---|---|
| alley | 0.63 | 5.99 |
| ambush | 26.42 | 34.81 |
| bamboo | 3.16 | 27.47 |
| bandage | 1.19 | 18.21 |
| cave | 12.49 | 19.18 |
| market | 10.76 | 19.90 |
| mountain | 1.43 | 13.57 |
| shaman | 0.67 | 14.39 |
| sleeping | 0.14 | 2.44 |
| temple | 16.21 | 26.23 |
| Total | 5.67 | 11.10 |
| $EPE+AAE$ | 16.77 | |

**Table 17.** Summary of results: the end point error and average angular error obtained by Horn-Schunck in the MPI-Sintel training set.

| Sequence Name | $\mathit{EPE}$ | $\mathit{AAE}$ |
|---|---|---|
| alley | 0.36 | 3.02 |
| ambush | 19.91 | 26.46 |
| bamboo | 1.67 | 13.59 |
| bandage | 0.65 | 7.03 |
| cave | 9.77 | 14.32 |
| market | 9.10 | 14.07 |
| mountain | 0.99 | 10.24 |
| shaman | 0.38 | 7.16 |
| sleeping | 0.11 | 1.79 |
| temple | 9.80 | 13.90 |
| Total | 7.76 | 17.74 |
| $EPE+AAE$ | 25.50 | |

**Table 18.** Summary of results: the end point error and average angular error obtained by Horn-Schunck using uniform locations in the MPI-Sintel training set.

| Sequence Name | $\mathit{EPE}$ | $\mathit{AAE}$ |
|---|---|---|
| alley | 0.64 | 6.07 |
| ambush | 26.01 | 34.88 |
| bamboo | 3.52 | 28.04 |
| bandage | 1.19 | 18.14 |
| cave | 34.44 | 45.41 |
| market | 11.74 | 20.60 |
| mountain | 1.59 | 14.33 |
| shaman | 0.66 | 11.85 |
| sleeping | 0.29 | 2.95 |
| temple | 17.33 | 27.95 |
| Total | 10.16 | 20.95 |
| $EPE+AAE$ | 30.76 | |

© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

Lazcano, V. An Empirical Study of Exhaustive Matching for Improving Motion Field Estimation. *Information* **2018**, *9*, 320. https://doi.org/10.3390/info9120320