# Evaluation of Clustering Methods in Compression of Topological Models and Visual Place Recognition Using Global Appearance Descriptors


## Abstract


## 1. Introduction

## 2. Global Appearance Descriptors

#### 2.1. Fourier Signature Descriptor

#### 2.2. Histogram of Oriented Gradients Descriptor

#### 2.3. Gist Descriptor

#### 2.4. Homomorphic Filter

## 3. Clustering Methods to Compact the Visual Information

- Learning: creating a map of the environment and compacting it. A set of omnidirectional images is captured from different positions, and a global appearance descriptor for each image is calculated. After that, a clustering method is used to determine the structure and compact the model.
- Validation: Once the map is built, the robot obtains a new image from an unknown position, calculates the descriptor, and compares it with the set of descriptors obtained in the learning step. Through this comparison, the robot must be able to estimate its position.
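The learning step above can be outlined with a short sketch (a minimal illustration assuming NumPy and scikit-learn; the function name, the use of k-means, and the toy 2-D "descriptors" are ours, not the paper's implementation):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_compact_map(descriptors, n_clusters, seed=0):
    """Compact a map: cluster the descriptors and keep, per cluster,
    one representative (the descriptor closest to the centroid)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(descriptors)
    representatives = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        # representative = the member closest to the cluster centroid
        d = np.linalg.norm(descriptors[members] - km.cluster_centers_[c], axis=1)
        representatives.append(members[d.argmin()])
    return km.labels_, np.array(representatives)

# toy example: three well-separated groups of 2-D "descriptors"
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.1, size=(20, 2)) for m in (0.0, 5.0, 10.0)])
labels, reps = build_compact_map(X, n_clusters=3)
print(len(reps))  # 3: one representative per cluster
```

In the validation step, a new descriptor is then compared only against these representatives instead of the full image set.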

#### 3.1. Spectral Clustering Algorithm

- Calculation of the normalized Laplacian matrix:$$L=I-{D}^{-1/2}S{D}^{-1/2}$$
- Calculation of the ${n}_{c}$ main eigenvectors of L, $\{\overrightarrow{{u}_{1}},\overrightarrow{{u}_{2}},\dots ,\overrightarrow{{u}_{{n}_{c}}}\}$. Arranging these vectors by columns, the matrix $U\in {\mathbb{R}}^{N\times {n}_{c}}$ is obtained.
- Normalization of the matrix U to obtain the matrix $T\in {\mathbb{R}}^{N\times {n}_{c}}$.
- Extraction of vector ${\overrightarrow{y}}_{i}\in {\mathbb{R}}^{{n}_{c}}$ from the ${i}^{th}$ row of the matrix T, $i=1,\dots ,N$.
- Clustering of the ${\overrightarrow{y}}_{i}$ vectors by using a simple clustering algorithm (such as k-means or hierarchical clustering). Through this, the clusters ${A}_{1},{A}_{2},\dots ,{A}_{{n}_{c}}$ are obtained.
- Obtaining the clusters with the original data as ${C}_{1},{C}_{2},\dots ,{C}_{{n}_{c}}$, where ${C}_{i}=\{\overrightarrow{{d}_{j}}\mid \overrightarrow{{y}_{j}}\in {A}_{i}\}$.
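The steps above can be condensed into a sketch of normalized spectral clustering (assuming NumPy and scikit-learn; the toy block-structured similarity matrix is purely illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(S, n_c, seed=0):
    """Normalized spectral clustering sketch.
    S: NxN symmetric similarity matrix; n_c: number of clusters."""
    N = S.shape[0]
    D_inv_sqrt = np.diag(1.0 / np.sqrt(S.sum(axis=1)))
    # normalized Laplacian L = I - D^{-1/2} S D^{-1/2}
    L = np.eye(N) - D_inv_sqrt @ S @ D_inv_sqrt
    # the n_c eigenvectors of L with the smallest eigenvalues
    eigvals, eigvecs = np.linalg.eigh(L)
    U = eigvecs[:, :n_c]                               # N x n_c
    T = U / np.linalg.norm(U, axis=1, keepdims=True)   # row-normalize
    # cluster the rows y_i of T with a simple algorithm (k-means)
    return KMeans(n_clusters=n_c, n_init=10, random_state=seed).fit_predict(T)

# toy similarity matrix: two groups of mutually similar items
S = np.full((6, 6), 0.01)
S[:3, :3] = 1.0
S[3:, 3:] = 1.0
labels = spectral_clustering(S, n_c=2)
print(labels)  # the first three items share one label, the last three the other
```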

#### 3.2. Cluster with a Self-Organizing Map Neural Network

## 4. Using the Compact Topological Maps to Localize the Robot

#### 4.1. Distance Measures between Descriptors

- Euclidean distance: This is a particular case of the weighted metric distance and is defined as:$$dis{t}_{euclidean}(\overrightarrow{a},\overrightarrow{b})=\sqrt{\sum _{i=1}^{l}{({a}_{i}-{b}_{i})}^{2}}$$
- Cosine distance: Departing from a similitude metric, which is defined as the scalar product between two vectors, the distance is defined as:$$\begin{array}{c}\hfill dis{t}_{cosine}(\overrightarrow{a},\overrightarrow{b})=1-si{m}_{cosine}(\overrightarrow{a},\overrightarrow{b})\\ \hfill si{m}_{cosine}(\overrightarrow{a},\overrightarrow{b})=\frac{{\overrightarrow{a}}^{T}\cdot \overrightarrow{b}}{|\overrightarrow{a}||\overrightarrow{b}|}\end{array}$$
- Correlation distance: Again, departing from a similitude metric, which is defined as a normalized version of the scalar product between two vectors, the distance is defined as:$$\begin{array}{c}\hfill dis{t}_{correlation}(\overrightarrow{a},\overrightarrow{b})=1-si{m}_{correlation}(\overrightarrow{a},\overrightarrow{b})\\ \hfill si{m}_{correlation}(\overrightarrow{a},\overrightarrow{b})=\frac{{(\overrightarrow{a}-\overline{a})}^{T}(\overrightarrow{b}-\overline{b})}{\sqrt{{(\overrightarrow{a}-\overline{a})}^{T}(\overrightarrow{a}-\overline{a})}\sqrt{{(\overrightarrow{b}-\overline{b})}^{T}(\overrightarrow{b}-\overline{b})}}\end{array}$$$$\begin{array}{c}\hfill \overline{a}=\frac{1}{l}\sum _{i=1}^{l}{a}_{i};\phantom{\rule{28.45274pt}{0ex}}\overline{b}=\frac{1}{l}\sum _{i=1}^{l}{b}_{i}\end{array}$$
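The three distances can be written directly from the formulas above (a NumPy sketch; the test vectors are illustrative). Note the invariances that distinguish them: the cosine distance ignores scale, and the correlation distance ignores both scale and offset:

```python
import numpy as np

def dist_euclidean(a, b):
    return np.sqrt(np.sum((a - b) ** 2))

def dist_cosine(a, b):
    # 1 - cosine similarity (scalar product over the product of norms)
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def dist_correlation(a, b):
    # 1 - correlation similarity: cosine similarity of the centred vectors
    ac, bc = a - a.mean(), b - b.mean()
    return 1.0 - (ac @ bc) / (np.sqrt(ac @ ac) * np.sqrt(bc @ bc))

a = np.array([1.0, 2.0, 3.0])
print(dist_euclidean(a, a))            # 0.0
print(dist_cosine(a, 2 * a))           # ~0: cosine ignores scale
print(dist_correlation(a, 2 * a + 5))  # ~0: correlation also ignores offset
```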

#### 4.2. Resolution of the Localization Problem in a Model That Has Not Been Compacted

- The robot captures a new image at time instant t from an unknown position ($i{m}_{t}$).
- It calculates the global appearance descriptor of the captured image $\overrightarrow{{d}_{t}}$.
- The distances between this new descriptor and the set of descriptors in the map are obtained. The comparison between descriptors is carried out through one of the distance metrics presented in Section 4.1.
- A distance vector ${l}_{t}=({l}_{t1},\dots ,{l}_{tN})$ is obtained, where ${l}_{tj}=dist(\overrightarrow{{d}_{t}},\overrightarrow{{d}_{j}})$ according to the chosen distance measure.
- Considering the position of the robot as the position of the closest neighbour within the map (the problem known as image retrieval [53]), the estimated position of the robot is the position in the map that minimizes the distance, ${j}^{*}=arg\,{min}_{j}\,{l}_{tj}$. This way, the position $(x,y)$ of the robot at the instant t is estimated.
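This nearest-neighbour retrieval can be sketched as follows (the function name, the default Euclidean metric, and the toy map data are ours; any of the distances from Section 4.1 could be passed in):

```python
import numpy as np

def localize(d_t, map_descriptors, map_positions, dist=None):
    """Nearest-neighbour localization (image retrieval): return the
    map position whose stored descriptor is closest to d_t."""
    if dist is None:
        dist = lambda a, b: np.linalg.norm(a - b)  # Euclidean by default
    l_t = np.array([dist(d_t, d_j) for d_j in map_descriptors])
    j = int(l_t.argmin())  # j* = arg min_j l_tj
    return map_positions[j], l_t

# toy map: three descriptors with known (x, y) capture positions
map_desc = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
map_pos = [(0.0, 0.0), (2.5, 1.0), (5.0, 3.0)]
pos, l_t = localize(np.array([1.1, 0.9]), map_desc, map_pos)
print(pos)  # (2.5, 1.0): the second stored descriptor is the closest
```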

#### 4.3. Resolution of the Localization Problem in a Compact Model

## 5. Experiments

#### 5.1. Datasets

#### 5.2. Creating Compact Maps through Clustering

- (a) The average moment of inertia of the cluster.
- (b) The average silhouette of the points.
- (c) The average silhouette of the descriptors.

In these figures of merit, each term corresponds to an image that belongs to the cluster ${C}_{i}$, and ${n}_{i}$ is the number of images within this cluster.
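As an illustration of these figures of merit, the average moment of inertia and the average silhouette can be computed with scikit-learn on toy data (a sketch assuming the standard definitions of inertia and silhouette; this is not the paper's code):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# toy data: two compact groups of 3-D descriptors
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.2, size=(25, 3)),
               rng.normal(4.0, 0.2, size=(25, 3))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# (a) average moment of inertia: mean squared distance of each
# descriptor to its cluster centre (KMeans' inertia_ is the sum)
inertia_avg = km.inertia_ / len(X)

# (b)/(c) average silhouette: how well each point fits its own
# cluster relative to the next closest one, in [-1, 1]
sil = silhouette_score(X, km.labels_)

print(round(inertia_avg, 3), round(sil, 3))
```

Lower average inertia and higher average silhouette both indicate more compact, better-separated clusters, which is why the paper plots them against the number of clusters.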

#### 5.2.1. Clustering in the Quorum V Environment

#### 5.2.2. Clustering in COLD Environments

#### 5.3. Localization Using the Compact Maps

#### 5.3.1. Localization in the Quorum V Environment

#### 5.3.2. Localization in the Freiburg Environment

#### 5.3.3. Localization When Several Maps Are Available

#### 5.4. A Comparative Study of Localization with Straightforward and with Compact Maps

#### 5.5. Discussion of the Results

## 6. Conclusions and Future Works

## Author Contributions

## Funding

## Conflicts of Interest

## References

- Okuyama, K.; Kawasaki, T.; Kroumov, V. Localization and position correction for mobile robot using artificial visual landmarks. In Proceedings of the 2011 International Conference on Advanced Mechatronic Systems, Zhengzhou, China, 11–13 August 2011; pp. 414–418. [Google Scholar]
- Zhao, Y.; Cheng, W.; Liu, G. The navigation of mobile robot based on stereo vision. In Proceedings of the 2012 Fifth International Conference on Intelligent Computation Technology and Automation, Zhangjiajie, China, 12–14 January 2012; pp. 670–673. [Google Scholar]
- Gwinner, K.; Jaumann, R.; Hauber, E.; Hoffmann, H.; Heipke, C.; Oberst, J.; Neukum, G.; Ansan, V.; Bostelmann, J.; Dumke, A.; et al. The High Resolution Stereo Camera (HRSC) of Mars Express and its approach to science analysis and mapping for Mars and its satellites. Planet. Space Sci.
**2016**, 126, 93–138. [Google Scholar] [CrossRef] - Jia, Y.; Li, M.; An, L.; Zhang, X. Autonomous navigation of a miniature mobile robot using real-time trinocular stereo machine. In Proceedings of the IEEE International Conference on Robotics, Intelligent Systems and Signal Processing, Changsha, China, 8–13 October 2003. [Google Scholar]
- Valiente, D.; Gil, A.; Reinoso, Ó.; Juliá, M.; Holloway, M. Improved Omnidirectional Odometry for a View-Based Mapping Approach. Sensors
**2017**, 17, 325. [Google Scholar] [CrossRef] [PubMed] - Berenguer, Y.; Payá, L.; Ballesta, M.; Reinoso, O. Position Estimation and Local Mapping Using Omnidirectional Images and Global Appearance Descriptors. Sensors
**2015**, 15, 26368–26395. [Google Scholar] [CrossRef] [PubMed] - Tardif, J.P.; Pavlidis, Y.; Daniilidis, K. Monocular visual odometry in urban environments using an omnidirectional camera. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 2531–2538. [Google Scholar]
- Murillo, A.; Guerrero, J.; Sagues, C. SURF features for efficient robot localization with omnidirectional images. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 3901–3907. [Google Scholar]
- Menegatti, E.; Pretto, A.; Scarpa, A.; Pagello, E. Omnidirectional vision scan matching for robot localization in dynamic environments. IEEE Trans. Robot.
**2006**, 22, 523–535. [Google Scholar] [CrossRef][Green Version] - Payá, L.; Gil, A.; Reinoso, O. A State-of-the-Art Review on Mapping and Localization of Mobile Robots Using Omnidirectional Vision Sensors. J. Sens.
**2017**, 2017, 3497650. [Google Scholar] [CrossRef] - Pantazi, X.E.; Tamouridou, A.A.; Alexandridis, T.; Lagopodi, A.L.; Kashefi, J.; Moshou, D. Evaluation of hierarchical self-organising maps for weed mapping using uas multispectral imagery. Comput. Electron. Agric.
**2017**, 139, 224–230. [Google Scholar] [CrossRef] - Hagiwara, Y.; Inoue, M.; Kobayashi, H.; Taniguchi, T. Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots. Front. Neurorobot.
**2018**, 12, 11. [Google Scholar] [CrossRef] [PubMed] - Hwang, Y.; Choi, B. Hierarchical System Mapping for Large-Scale Fault-Tolerant Quantum Computing. arXiv, 2018; arXiv:1809.07998. [Google Scholar]
- Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; pp. 1150–1157. [Google Scholar]
- Bay, H.; Tuytelaars, T.; Gool, L. SURF: Speeded Up Robust Features. In Computer Vision at ECCV 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 404–417. [Google Scholar]
- Calonder, M.; Lepetit, V.; Strecha, C.; Fua, P. Brief: Binary robust independent elementary features. In Proceedings of the 11th European Conference on Computer Vision, Crete, Greece, 5–11 September 2010. [Google Scholar]
- Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571. [Google Scholar]
- Angeli, A.; Doncieux, S.; Meyer, J.; Filliat, D. Visual topological SLAM and global localization. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 2029–2034. [Google Scholar]
- Menegatti, E.; Maeda, T.; Ishiguro, H. Image-based memory for robot navigation using properties of omnidirectional images. Robot. Autom. Syst.
**2004**, 47, 251–267. [Google Scholar] [CrossRef][Green Version] - Liu, M.; Scaramuzza, D.; Pradalier, C.; Siegwart, R.; Chen, Q. Scene recognition with omnidirectional vision for topological map using lightweight adaptive descriptors. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 116–121. [Google Scholar]
- Payá, L.; Fernández, L.; Gil, A.; Reinoso, O. Map Building and Monte Carlo Localization Using Global Appearance of Omnidirectional Images. Sensors
**2010**, 10, 11468–11497. [Google Scholar] [CrossRef][Green Version] - Rituerto, A.; Murillo, A.C.; Guerrero, J. Semantic labeling for indoor topological mapping using a wearable catadioptric system. Robot. Autom. Syst.
**2014**, 62, 685–695. [Google Scholar] [CrossRef][Green Version] - Leonardis, A.; Bischof, H. Robust recognition using eigenimages. Comput. Vis. Image Understand.
**2000**, 78, 99–118. [Google Scholar] [CrossRef] - Oliva, A.; Torralba, A. Modeling the shape of the scene: A holistic representation of the spatial envelope. Int. J. Comput. Vis.
**2001**, 42, 145–175. [Google Scholar] [CrossRef] - Radon, J. Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten. Ber. Saechsishe Acad. Wiss. Math. Phys.
**1917**, 69, 262. [Google Scholar] - Zivkovic, Z.; Bakker, B.; Krose, B. Hierarchical map building and planning based on graph partitioning. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation, Orlando, FL, USA, 15–19 May 2006; pp. 803–809. [Google Scholar]
- Grudic, G.Z.; Mulligan, J. Topological Mapping with Multiple Visual Manifolds. In Proceedings of the Robotics Science and Systems 2005 Workshop, Cambridge, MA, USA, 8–11 June 2005. [Google Scholar]
- Valgren, C.; Lilienthal, A. SIFT, SURF & seasons: Appearance-based long-term localization in outdoor environments. Robot. Autom. Syst.
**2010**, 58, 149–156. [Google Scholar] - Stimec, A.; Jogan, M.; Leonardis, A. Unsupervised learning of a hierarchy of topological maps using omnidirectional images. Int. J. Pattern Recognit. Artif. Intell.
**2007**, 22, 639–665. [Google Scholar] [CrossRef] - Shi, X.; Shen, Y.; Wang, Y.; Bai, L. Differential-Clustering Compression Algorithm for Real-Time Aerospace Telemetry Data. IEEE Access
**2018**, 6, 57425–57433. [Google Scholar] [CrossRef] - Payá, L.; Mayol, W.; Cebollada, S.; Reinoso, O. Compression of topological models and localization using the global appearance of visual information. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017. [Google Scholar]
- Mekonnen, A.A.; Briand, C.; Lerasle, F.; Herbulot, A. Fast HOG based person detection devoted to a mobile robot with a spherical camera. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 631–637. [Google Scholar]
- Dong, L.; Yu, X.; Li, L.; Hoe, J.K.E. HOG based multi-stage object detection and pose recognition for service robot. In Proceedings of the 2010 11th International Conference on Control Automation Robotics & Vision, Singapore, 7–10 December 2010; pp. 2495–2500. [Google Scholar]
- Dalal, N.; Triggs, B. Histograms of Oriented Gradients for Human Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; pp. 886–893. [Google Scholar]
- Zhu, Q.; Avidan, S.; Yeh, M.; Cheng, K. Fast Human Detection Using a Cascade of Histograms of Oriented Gradients. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; pp. 1491–1498. [Google Scholar]
- Payá, L.; Amorós, F.; Fernández, L.; Reinoso, O. Performance of global-appearance descriptors in map building and localization using omnidirectional vision. Sensors
**2014**, 14, 3033–3064. [Google Scholar] [CrossRef] [PubMed] - Oliva, A.; Torralba, A. Building the gist of a scene: The role of global image features in recognition. Prog. Brain Res.
**2006**, 155, 23–36. [Google Scholar] - Siagian, C.; Itti, L. Biologically Inspired Mobile Robot Vision Localization. IEEE Trans. Robot.
**2009**, 25, 861–873. [Google Scholar] [CrossRef][Green Version] - Chang, C.; Siagian, C.; Itti, L. Mobile robot vision navigation and localization using Gist and Saliency. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 4147–4154. [Google Scholar]
- Murillo, A.C.; Singh, G.; Kosecka, J.; Guerrero, J.J. Localization in Urban Environments Using a Panoramic Gist Descriptor. IEEE Trans. Robot.
**2013**, 29, 146–160. [Google Scholar] [CrossRef][Green Version] - Fernández, L.; Payá, L.; Reinoso, Ó.; Gil, A.; Juliá, M. Robust Methods for Robot Localization under Changing Illumination Conditions-Comparison of Different Filtering Techniques. ICAART
**2010**, 1, 223–228. [Google Scholar] - Gonzalez, R.C.; Woods, R.E. Digital Image Processing; Prentice Hall: Upper Saddle River, NJ, USA, 2002. [Google Scholar]
- Payá, L.; Reinoso, O.; Berenguer, Y.; Úbeda, D. Using Omnidirectional Vision to Create a Model of the Environment: A Comparative Evaluation of Global-Appearance Descriptors. J. Sens.
**2016**, 2016, 1–21. [Google Scholar] [CrossRef] [PubMed] - Fernández, L.; Paya, L.; Amoros, F.; Reinoso, O. Using Global Appearance Descriptors to Solve Topological Visual SLAM. In Encyclopedia of Information Science and Technology, 4th ed.; IGI Global: Philadelphia, PA, USA, 2018; pp. 6894–6905. [Google Scholar]
- Luxburg, U. A tutorial on spectral clustering. Stat. Comput.
**2007**, 17, 395–416. [Google Scholar] [CrossRef][Green Version] - Ng, A.Y.; Jordan, M.I.; Weiss, Y. On Spectral Clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2001; pp. 849–856. [Google Scholar]
- Valgren, C.; Duckett, T.; Lilienthal, A. Incremental spectral clustering and its application to topological mapping. In Proceedings of the IEEE International conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 4283–4288. [Google Scholar]
- Sorensen, D.C. Implicitly restarted Arnoldi/Lanczos methods for large scale eigenvalue calculations. In Parallel Numerical Algorithms; Springer: Berlin/Heidelberg, Germany, 1997; pp. 119–165. [Google Scholar]
- Kohonen, T. The self-organizing map. Neurocomputing
**1998**, 21, 1–6. [Google Scholar] [CrossRef] - Van Gassen, S.; Callebaut, B.; Van Helden, M.J.; Lambrecht, B.N.; Demeester, P.; Dhaene, T.; Saeys, Y. FlowSOM: Using self-organizing maps for visualization and interpretation of cytometry data. Cytometry Part A
**2015**, 87, 636–645. [Google Scholar] [CrossRef] [PubMed][Green Version] - Thrun, S.; Fox, D.; Burgard, W.; Dellaert, F. Robust Monte Carlo localization for mobile robots. Artif. Intell.
**2001**, 128, 99–141. [Google Scholar] [CrossRef][Green Version] - Pérez, J.; Caballero, F.; Merino, L. Enhanced Monte Carlo localization with visual place recognition for robust robot localization. J. Intell. Robot. Syst.
**2015**, 80, 641–656. [Google Scholar] [CrossRef] - Rui, Y.; Huang, T.S.; Chang, S.F. Image retrieval: Current techniques, promising directions, and open issues. J. Vis. Commun. Image Represent.
**1999**, 10, 39–62. [Google Scholar] [CrossRef] - Automation, Robotics and Computer Vision Research Group. Quorum 5 Set of Images. Available online: http://arvc.umh.es/db/images/quorumv/ (accessed on 1 June 2018).
- Pronobis, A.; Caputo, B. COLD: COsy Localization Database. IJRR
**2009**, 28, 588–594. [Google Scholar] [CrossRef]

**Figure 1.** (**a**) Example of a robot Pioneer P3-AT® equipped with an omnidirectional vision system and a laser range finder. In this work, only the omnidirectional camera is used. (**b**) Example of an omnidirectional image captured from one office.

**Figure 2.** Two main methods to extract the most relevant information from the images for mapping and localization purposes. (**a**) Detection, description, and tracking of some relevant landmarks along a set of scenes. (**b**) Building a unique descriptor per image that contains information on its global appearance.

**Figure 3.** Example of an indoor map and a compression of the information. (**a**) Positions where the images were captured. (**b**) Result of the clustering process. (**c**) Each cluster is reduced to one representative.

**Figure 5.** Bird’s eye view of the COsy Localization Database (COLD). (**a**) Freiburg and (**b**) Saarbrücken environments. Extracted from https://www.nada.kth.se/cas/COLD/.

**Figure 6.** Results of the two clustering methods: average moment of inertia, average silhouette of points, and average silhouette of descriptors vs. number of clusters, when using FS in the Quorum V environment. SOM, Self-Organizing Maps.

**Figure 7.** Results of the two clustering methods: average moment of inertia, average silhouette of points, and average silhouette of descriptors vs. number of clusters, when using HOG in the Quorum V environment.

**Figure 8.** Results of the two clustering methods: average moment of inertia, average silhouette of points, and average silhouette of descriptors vs. number of clusters, when using gist in the Quorum V environment.

**Figure 9.** Results of the two clustering methods: computing time vs. number of clusters, when using FS, HOG, and gist descriptors in the Quorum V environment.

**Figure 10.** Quorum V environment. Clusters obtained with spectral clustering and gist description (${k}_{3}=32,{n}_{masks}=16$).

**Figure 11.** Results of the two clustering methods: average moment of inertia, average silhouette of points, and average silhouette of descriptors vs. number of clusters, when using HOG in the Freiburg environment.

**Figure 12.** Results of the two clustering methods: average moment of inertia, average silhouette of points, and average silhouette of descriptors vs. number of clusters, when using gist in the Freiburg environment.

**Figure 13.** Results of the two clustering methods: average moment of inertia, average silhouette of points, and average silhouette of descriptors vs. number of clusters, when using HOG in the Saarbrücken environment.

**Figure 14.** Results of the two clustering methods: average moment of inertia, average silhouette of points, and average silhouette of descriptors vs. number of clusters, when using gist in the Saarbrücken environment.

**Figure 15.** Clusters obtained in the COLD environments through the use of spectral clustering and gist description. (**a**) Freiburg and (**b**) Saarbrücken environments.

**Figure 16.** Results of the localization process with FS, HOG, and gist used to describe the representatives of the clusters and the test images: average localization error (cm) vs. number of clusters. Quorum V environment.

**Figure 17.** Results of the localization process with FS, HOG, and gist used to describe the representatives of the clusters and the test images: average computing time vs. number of clusters. Quorum V environment.

**Figure 18.** Results of the localization process with HOG and gist used to describe the representatives of the clusters and the test images: average localization error (cm) vs. number of clusters. Freiburg environment.

**Figure 19.** Percentage of success to detect the correct environment between Freiburg and Saarbrücken with FS, HOG, and gist used to describe the representatives of the clusters and the test images: percentage of success vs. number of clusters.

**Figure 20.** Results of the localization process in the Freiburg environment by using two types of models to retain visual representatives. Average localization error (cm) vs. number of clusters. Model 1 uses representatives obtained through spectral clustering, and Model 2 obtains the representatives through sampling the dataset. The localization task has been carried out with HOG and gist, and the distances are calculated through the cosine distance.

**Figure 21.** Best results of the clustering and localization processes. (**a**) Clustering with gist and spectral clustering: silhouette of points (left axis, solid lines) and computing time (right axis, dashed lines) vs. number of clusters. (**b**) Localization with HOG and cosine distance: average localization error (cm) (left axis, solid lines) and computing time (right axis, dashed lines) vs. the number of clusters. Freiburg environment.

**Table 1.** Datasets used in the experiments.

| Dataset Name | Number of Images | Number of Rooms |
|---|---|---|
| QuorumV_training | 872 | 6 |
| QuorumV_test | 77 | |
| Freiburg_training | 519 | 9 |
| Freiburg_test | 52 | |
| Saarbrucken_training | 566 | 8 |
| Saarbrucken_test | 57 | |

**Table 2.** Summary of the parameters that have been varied to carry out the clustering experiments. FS, Fourier Signature.

| Parameter | Values |
|---|---|
| Environment | Quorum V; Freiburg (COLD); Saarbrücken (COLD) |
| Descriptor | FS; HOG; gist |
| Descriptor parameters | FS: ${k}_{1}$ = 4, 8, 16, 32, 64, 128, 256; HOG: ${k}_{2}$ = 2, 4, 16, 32, 64, 128; gist: ${k}_{3}$ = 2, 4, 8, 16, 32, 64; gist: ${n}_{masks}$ = 2, 4, 8, 16, 32, 64 |
| Number of clusters | Quorum V: ${n}_{c}$ = 15, 25, 40, 60, 80, 100; Freiburg: ${n}_{c}$ = 10, 20, 30, 40, 50, 60, 70; Saarbrücken: ${n}_{c}$ = 10, 20, 30, 40, 50, 60, 70 |

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Cebollada, S.; Payá, L.; Mayol, W.; Reinoso, O. Evaluation of Clustering Methods in Compression of Topological Models and Visual Place Recognition Using Global Appearance Descriptors. *Appl. Sci.* **2019**, *9*, 377.
https://doi.org/10.3390/app9030377
