# Multiple View Relations Using the Teaching and Learning-Based Optimization Algorithm


## Abstract


## 1. Introduction

## 2. Epipolar Geometry

#### 2.1. Feature Matching

#### 2.2. Geometric Entities of the Epipolar Geometry

#### 2.2.1. Fundamental Matrix

#### 2.2.2. Homography

## 3. Teaching and Learning Based Optimization Algorithm

Algorithm 1: Simplest form of the Teaching–Learning-Based Optimization (TLBO) algorithm.
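The two-phase scheme of Algorithm 1 can be sketched in Python. This is a minimal illustration, not the authors' implementation; the box bounds, teaching-factor handling, and greedy replacement follow the standard TLBO description, while the helper names are our own:

```python
import numpy as np

def tlbo(objective, bounds, pop_size=50, iterations=100, seed=0):
    """Minimize `objective` over the box `bounds` with basic TLBO:
    a teacher phase (move toward the best learner) followed by a
    learner phase (pairwise interaction), with no tuning parameters
    beyond population size and iteration count."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T          # bounds: [(lo, hi), ...]
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.apply_along_axis(objective, 1, pop)

    for _ in range(iterations):
        # Teacher phase: pull every learner toward the current best solution.
        teacher = pop[fit.argmin()]
        mean = pop.mean(axis=0)
        tf = rng.integers(1, 3, size=(pop_size, 1))    # teaching factor in {1, 2}
        cand = pop + rng.random((pop_size, dim)) * (teacher - tf * mean)
        cand = np.clip(cand, lo, hi)
        cand_fit = np.apply_along_axis(objective, 1, cand)
        better = cand_fit < fit
        pop[better], fit[better] = cand[better], cand_fit[better]

        # Learner phase: each learner interacts with a random partner and
        # moves toward it if the partner is better, away from it otherwise.
        partners = rng.permutation(pop_size)
        step = rng.random((pop_size, dim))
        toward = pop + step * (pop[partners] - pop)
        away = pop + step * (pop - pop[partners])
        cand = np.where((fit[partners] < fit)[:, None], toward, away)
        cand = np.clip(cand, lo, hi)
        cand_fit = np.apply_along_axis(objective, 1, cand)
        better = cand_fit < fit
        pop[better], fit[better] = cand[better], cand_fit[better]

    best = fit.argmin()
    return pop[best], fit[best]
```

On a simple test function such as the sphere, `tlbo(lambda x: float(np.sum(x * x)), [(-5.0, 5.0)] * 3)` converges to a value near zero, which matches the parameter-free character of the method described in the text.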

## 4. Epipolar Geometry Estimation Using TLBO

#### 4.1. Search Space

#### 4.2. Individual Representation

#### 4.3. Objective Function

#### 4.4. TLBO for Epipolar Geometry Estimation

## 5. Experimental Results

#### 5.1. Number of Inliers and ${E}_{r}$ Error

#### 5.2. Residual Error

## 6. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest


**Figure 1.** Epipolar geometry depiction: the fundamental-matrix case. Two cameras, ${C}_{1}$ and ${C}_{2}$, view the same scene. Each camera is represented by its centre and image plane. As shown in the figure, the centres ${C}_{1}$ and ${C}_{2}$ lie in the same plane as the 3D point X and its images x and ${x}^{\prime}$. The point-correspondence geometry is constrained as follows. The image point x back-projects to a ray in 3D space defined by the first camera centre, ${C}_{1}$, and x; the 3D point X that projects to x must lie on this ray. Since the image of this ray in the second view is the line ${l}^{\prime}$, the image of X in the second view must lie on ${l}^{\prime}$.
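The constraint illustrated in Figure 1 can be verified numerically. The sketch below assumes a hypothetical pair of normalized cameras (so the fundamental matrix reduces to $[t]_{\times}R$); the rotation, translation, and 3D point are invented for illustration only:

```python
import numpy as np

def skew(v):
    """3x3 cross-product matrix: skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Hypothetical normalized camera pair: C1 = [I | 0], C2 = [R | t].
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.2, 0.0])

F = skew(t) @ R          # fundamental matrix of this normalized pair

# Project a 3D point X into both views (homogeneous image points).
X = np.array([0.5, -0.3, 4.0])
x1 = X                   # camera 1: x  ~ [I|0] [X; 1]
x2 = R @ X + t           # camera 2: x' ~ [R|t] [X; 1]

# Epipolar constraint x'^T F x = 0 (up to floating-point round-off).
print(abs(x2 @ F @ x1))

# Equivalently: x' lies on the epipolar line l' = F x in the second view.
l2 = F @ x1
print(abs(x2 @ l2))
```

Both printed residuals are zero up to round-off, matching the incidence relation stated in the caption.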

**Figure 2.** Epipolar geometry depiction: the homography case. A point x in one image is transferred, unlike in the fundamental-matrix case, via the plane $\pi$ to a matching point ${x}^{\prime}$ in the second image. The epipolar line through ${x}^{\prime}$ is obtained by joining ${x}^{\prime}$ to the epipole ${e}^{\prime}$.
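The plane-induced transfer and the construction of the epipolar line through ${e}^{\prime}$ amount to two matrix operations. In the sketch below, the homography H and the epipole ${e}^{\prime}$ are invented values for illustration, not quantities from the paper:

```python
import numpy as np

# Hypothetical plane-induced homography and epipole of the second view.
H = np.array([[1.05, 0.02, 3.0],
              [-0.01, 0.98, -2.0],
              [1e-4, 2e-4, 1.0]])
e2 = np.array([250.0, 120.0, 1.0])

x1 = np.array([100.0, 80.0, 1.0])   # point in the first image (homogeneous)
x2 = H @ x1                         # transferred via the plane pi
x2 /= x2[2]                         # back to inhomogeneous pixel coordinates

# The epipolar line through x' is the join of x' and the epipole e'.
l2 = np.cross(e2, x2)
print(abs(l2 @ x2), abs(l2 @ e2))   # both incidence residuals are ~0
```

The cross product of two homogeneous points yields the line through them, so both incidence checks vanish up to round-off.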

**Figure 3.** Teaching–Learning-Based Optimization (TLBO)-based process for estimating geometric relations. Three main steps are followed in order to compute the geometric relations given by either the fundamental matrix or the homography. First, a preprocessing step is carried out to compute the search space. Then, the TLBO population is initialized according to the constraints of the problem. Finally, the TLBO learning process is iteratively executed to find the final solution, i.e., a fundamental matrix or a homography.
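The objective evaluated during the TLBO learning process could, for the homography case, score a candidate solution by its inlier count. The function below is a hedged sketch of such an objective; the 8-parameter encoding (with $h_{33}$ fixed to 1), the 2-pixel tolerance, and the one-sided transfer error are assumptions, not the paper's exact formulation:

```python
import numpy as np

def inlier_score(h_vec, pts1, pts2, tol=2.0):
    """Hypothetical TLBO objective: negative inlier count of a candidate
    homography encoded as 8 free parameters (h33 fixed to 1). TLBO
    minimizes, so more inliers -> lower (better) score."""
    H = np.append(h_vec, 1.0).reshape(3, 3)
    ones = np.ones((len(pts1), 1))
    p1 = np.hstack([pts1, ones])               # homogeneous points, view 1
    proj = p1 @ H.T
    proj = proj[:, :2] / proj[:, 2:3]          # H x, back to pixels
    err = np.linalg.norm(proj - pts2, axis=1)  # transfer error in view 2
    return -np.count_nonzero(err < tol)
```

An objective of this shape plugs directly into a population-based optimizer such as the TLBO sketch of Algorithm 1, with each individual encoding one candidate homography.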

**Figure 4.** Results for the fundamental-matrix estimation from test images (**a**) Room_1 and (**b**) Room_2. The first and second columns show the first and second view, respectively. The third column shows the correspondence points along with outliers contained in the dataset. The fourth column depicts a blended image of the two views with inliers found by the proposed method.

**Figure 5.** Results for the fundamental-matrix estimation from test images (**a**) Street_1 and (**b**) Street_2. The first row shows the first and second view. The second row shows the correspondence points along with outliers contained in the dataset. The third row depicts a blended image of the two views with inliers found by the proposed method.

**Figure 6.** Results for the homography estimation from the test images (**a**) Book_1 and (**b**) Book_2. The first and second columns show the first and second view, respectively. The third column shows the correspondence points along with outliers contained in the dataset. The fourth column depicts a blended image of the two views with inliers found by the proposed method.

**Figure 7.** Image pairs for algorithm comparison. With an estimated fundamental matrix, epipolar lines can be computed; every accurately matched point should lie close to its corresponding epipolar line. (**a**) The pair of original images. (**b**) The Euclidean-based matches. (**c**) The inlier points consistent with the epipolar constraint as found by the proposed method. (**d**) The epipolar lines (25% of the lines are depicted in the image).
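The point-to-epipolar-line distance underlying comparisons of this kind can be computed as follows. This is a plausible symmetric residual; the paper's exact error definition may differ:

```python
import numpy as np

def residual_error(F, pts1, pts2):
    """Symmetric epipolar residual: for each correspondence, average the
    distance from each point to the epipolar line induced by its match,
    then average over all correspondences."""
    ones = np.ones((len(pts1), 1))
    p1 = np.hstack([pts1, ones])       # homogeneous points, image 1
    p2 = np.hstack([pts2, ones])       # homogeneous points, image 2
    l2 = p1 @ F.T                      # epipolar lines in image 2
    l1 = p2 @ F                        # epipolar lines in image 1
    d2 = np.abs(np.sum(l2 * p2, axis=1)) / np.hypot(l2[:, 0], l2[:, 1])
    d1 = np.abs(np.sum(l1 * p1, axis=1)) / np.hypot(l1[:, 0], l1[:, 1])
    return float(np.mean(0.5 * (d1 + d2)))
```

For a perfect fundamental matrix and noise-free correspondences this residual is zero; with noisy matches it grows, which is the behaviour the averaged-error experiments measure.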

**Figure 8.** Results of the experimental evaluation of the methods for computing the fundamental matrix for the Calibration rig image in Figure 7. In each case, six different approaches are compared. In the experiment, n points are used to search for a fundamental matrix; for each n, 100 experiments were carried out and the residual error was averaged.

**Figure 9.** Results of the experimental evaluation of the methods for computing the fundamental matrix for the Park image in Figure 7. In each case, six different approaches are compared. In the experiment, n points are used to search for a fundamental matrix; for each n, 100 experiments were carried out and the residual error was averaged.

**Figure 10.** Results of the experimental evaluation of the methods for computing the fundamental matrix for the Corridor image in Figure 7. In each case, six different approaches are compared. In the experiment, n points are used to search for a fundamental matrix; for each n, 100 experiments were carried out and the residual error was averaged.

**Figure 11.** Results of the experimental evaluation of the methods for computing the fundamental matrix for the Book image in Figure 7. In each case, six different approaches are compared. In the experiment, n points are used to search for a fundamental matrix; for each n, 100 experiments were carried out and the residual error was averaged.

**Figure 12.** Results of the experimental evaluation of the methods for computing a homography. For each image, the number of matched points is shown. The GA and DE estimators found more matched points; however, some of them are false detections. The Clonal Selection Algorithm (CSA), Random Sample Consensus (RANSAC), and MLESAC also detect false positives. The TLBO, on the other hand, detected 103 true-positive matched points.

Parameter | Value |
---|---|
Iterations | 100 |
Population size | 50 |

Parameter | Value |
---|---|
Number of generations | 201 |
Population size | 50 |
Crossover rate | 0.85 |
Mutation rate | 0.10 |
Selection method | Roulette with sigma scaling |
Crossover method | 1-point crossover |

Parameter | Value |
---|---|
Number of epochs | 200 |
Population size | 50 |
Differential weight | 0.25 |
Crossover probability | 0.80 |

Image | TLBO IN | TLBO ${E}_{r}$ | GA IN | GA ${E}_{r}$ | DE IN | DE ${E}_{r}$ | CSA IN | CSA ${E}_{r}$ | RANSAC IN | RANSAC ${E}_{r}$ | MLESAC IN | MLESAC ${E}_{r}$ |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Room_1 | **47** | 0.86 | 42 | 0.85 | 45 | 1.12 | **47** | **0.78** | 46 | 2.80 | 43 | 2.36 |
Room_2 | 42 | **0.75** | **45** | 1.24 | **45** | 1.03 | **45** | 1.29 | 43 | 2.45 | 39 | 1.76 |
Street_1 | 64 | **0.61** | 59 | 0.91 | 58 | 0.83 | 62 | 1.03 | 62 | 1.75 | **67** | 1.85 |
Street_2 | **127** | **0.91** | 115 | 1.01 | 110 | 1.24 | 110 | 1.52 | 98 | 2.54 | 95 | 2.67 |

**Table 5.** Number of inliers IN and error ${E}_{r}$ for all approaches, considering the test images in Figure 6. The best result is highlighted in bold.

Image | TLBO IN | TLBO ${E}_{r}$ | GA IN | GA ${E}_{r}$ | DE IN | DE ${E}_{r}$ | CSA IN | CSA ${E}_{r}$ | RANSAC IN | RANSAC ${E}_{r}$ | MLESAC IN | MLESAC ${E}_{r}$ |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Book_1 | 8 | **0.25** | 8 | 0.78 | 8 | 0.85 | 8 | 0.64 | 8 | 1.068 | 8 | 1.84 |
Book_2 | 71 | **0.42** | **73** | 0.63 | 71 | 0.51 | 71 | 0.73 | 68 | 1.07 | 63 | 1.36 |

**Table 6.** Data points for the experiment with the Calibration rig image of Figure 7. The number n of points for model estimation and the average residual error ${E}_{res}$ for all methods are shown. The best result is highlighted in bold.

n | TLBO Average ${E}_{r}$ | GA Average ${E}_{r}$ | DE Average ${E}_{r}$ | CSA Average ${E}_{r}$ | RANSAC Average ${E}_{r}$ | MLESAC Average ${E}_{r}$ |
---|---|---|---|---|---|---|
8 | **0.796** | 0.825 | 0.805 | 0.842 | 0.851 | 0.802 |
10 | 0.587 | 0.583 | 0.610 | 0.585 | 0.579 | **0.576** |
15 | **0.408** | 0.463 | 0.416 | 0.504 | 0.415 | 0.455 |
20 | 0.279 | 0.282 | 0.299 | 0.286 | 0.285 | **0.269** |
25 | 0.173 | **0.171** | 0.178 | 0.182 | 0.195 | 0.181 |
27 | **0.173** | 0.174 | 0.177 | 0.180 | 0.195 | 0.184 |

**Table 7.** Data points for the experiment with the Park image of Figure 7. The number n of points for model estimation and the average residual error ${E}_{res}$ for all methods are shown. The best result is highlighted in bold.

n | TLBO Average ${E}_{r}$ | GA Average ${E}_{r}$ | DE Average ${E}_{r}$ | CSA Average ${E}_{r}$ | RANSAC Average ${E}_{r}$ | MLESAC Average ${E}_{r}$ |
---|---|---|---|---|---|---|
8 | **22.127** | 22.784 | 22.752 | 23.446 | 22.685 | 23.560 |
10 | **15.524** | 17.285 | 15.553 | 15.722 | 16.048 | 15.595 |
15 | **12.183** | 13.757 | 13.573 | 14.554 | 12.197 | 12.694 |
20 | 9.220 | 9.912 | 9.434 | **8.965** | 10.064 | 9.164 |
25 | **2.214** | 2.224 | 2.507 | 2.267 | 2.248 | 2.486 |
27 | **2.197** | 2.633 | 2.352 | 2.585 | 2.442 | 2.256 |

**Table 8.** Data points for the experiment with the Corridor image of Figure 7. The number n of points for model estimation and the average residual error ${E}_{res}$ for all methods are shown. The best result is highlighted in bold.

n | TLBO Average ${E}_{r}$ | GA Average ${E}_{r}$ | DE Average ${E}_{r}$ | CSA Average ${E}_{r}$ | RANSAC Average ${E}_{r}$ | MLESAC Average ${E}_{r}$ |
---|---|---|---|---|---|---|
8 | **8.116** | 8.252 | 8.479 | 8.378 | 8.224 | 8.791 |
10 | **6.873** | 7.124 | 6.956 | 7.243 | 7.248 | 7.289 |
15 | **5.642** | 5.748 | 6.015 | 6.215 | 5.951 | 5.696 |
20 | **2.970** | 2.982 | 3.049 | 3.512 | 3.267 | 3.226 |
25 | **1.896** | 1.929 | 1.971 | 1.964 | 1.973 | 1.982 |
27 | **1.913** | 1.942 | 1.950 | 1.964 | **1.913** | 1.951 |

**Table 9.** Data points for the experiment with the Book image of Figure 7. The number n of points for model estimation and the average residual error ${E}_{res}$ for all methods are shown. The best result is highlighted in bold.

n | TLBO Average ${E}_{r}$ | GA Average ${E}_{r}$ | DE Average ${E}_{r}$ | CSA Average ${E}_{r}$ | RANSAC Average ${E}_{r}$ | MLESAC Average ${E}_{r}$ |
---|---|---|---|---|---|---|
8 | **4.756** | 5.077 | 5.046 | 5.059 | 4.875 | 4.923 |
10 | **3.483** | 3.831 | 3.545 | 3.715 | 3.583 | 3.862 |
15 | 2.722 | **2.715** | 2.957 | 2.814 | 2.965 | 2.941 |
20 | **1.993** | 2.218 | 2.150 | 2.199 | 2.101 | 2.346 |
25 | **1.530** | 1.589 | 1.595 | 1.535 | 1.612 | 1.602 |
27 | **1.542** | 1.579 | 1.565 | 1.546 | 1.548 | 1.597 |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

López-Martínez, A.; Cuevas, F.J.
Multiple View Relations Using the Teaching and Learning-Based Optimization Algorithm. *Computers* **2020**, *9*, 101.
https://doi.org/10.3390/computers9040101
