Entropy-Based Variational Scheme with Component Splitting for the Efficient Learning of Gamma Mixtures
Abstract
1. Introduction
2. The Statistical Model
2.1. Finite Gamma Mixture Model
2.2. Finite Gamma Mixture Model with Local Model Selection
3. Variational Bayesian Learning via Entropy-Based Splitting
3.1. Model Learning Using Variational Bayes
3.2. Gamma Model Learning via Entropy-Based Component Splitting
3.2.1. Theoretical Entropy of Gamma Mixtures
3.2.2. MeanNN Entropy Estimator
3.2.3. Variational Learning Algorithm via Entropy-Based Splitting
Algorithm 1: Proposed entropy-based variational learning for GaMM.
1. Initialization: initialize the hyperparameters u, v, g, h, a.
2. Splitting process:
   - Split component j into two new components j1 and j2, each with a mixing proportion equal to half that of j.
   - Set M = M + 1.
   - Initialize the parameters of j1 and j2 with the same parameters as j.
3. Perform standard variational Bayes until convergence.
4. Determine the number of components by evaluating the mixing coefficients.
5. If the stopping condition holds, set M = M − 1 and terminate; otherwise, evaluate the entropy, choose the component j to split according to Equation (29), and return to the splitting process in Step 2.
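To make the entropy-evaluation and split-selection steps concrete, the following is a minimal Python sketch, not the authors' implementation. The function names, the omission of the MeanNN estimator's additive constant, and the use of the largest gap between the estimated and closed-form entropies as a stand-in for Equation (29) are assumptions introduced for illustration; only NumPy and SciPy are used.

```python
import numpy as np
from scipy.special import digamma, gammaln


def meannn_entropy(x):
    """MeanNN-style differential entropy estimate for samples x of shape (N, D).

    Averages the log of all pairwise distances; the data-independent additive
    constant of the estimator is dropped, since the splitting step only needs
    to compare entropy values across components.
    """
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    n, d = x.shape
    dists = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)                    # all unordered pairs i < j
    return d * np.mean(np.log(dists[iu] + 1e-12))   # small offset avoids log(0)


def gamma_entropy(shape, rate):
    """Closed-form differential entropy of a Gamma(shape, rate) density,
    summed over dimensions when vector-valued parameters are given."""
    shape = np.asarray(shape, dtype=float)
    rate = np.asarray(rate, dtype=float)
    h = shape - np.log(rate) + gammaln(shape) + (1.0 - shape) * digamma(shape)
    return float(np.sum(h))


def choose_component_to_split(data, responsibilities, shapes, rates):
    """Return the index of the component with the largest gap between its
    MeanNN entropy estimate and its closed-form Gamma entropy (an
    illustrative criterion standing in for Equation (29))."""
    labels = responsibilities.argmax(axis=1)        # hard assignments
    gaps = []
    for j in range(responsibilities.shape[1]):
        members = data[labels == j]
        if len(members) < 3:                        # too few points to estimate
            gaps.append(-np.inf)
            continue
        gaps.append(abs(meannn_entropy(members) - gamma_entropy(shapes[j], rates[j])))
    return int(np.argmax(gaps))
```

For example, with `data` of shape (N, D), the responsibilities from the last variational Bayes run, and the per-component Gamma `shapes` and `rates`, `choose_component_to_split(data, responsibilities, shapes, rates)` would give the index j that Step 5 feeds back into the splitting process of Step 2.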
4. Experimental Results
4.1. Dynamic Texture Clustering
4.2. Human Gesture Recognition
4.3. Object Categorization
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Approach | Average Accuracy (%) ± Standard Error | Average Time (s)
---|---|---
GM-Split | 86.11 ± 1.21 | 4.26 |
GM-En | 86.34 ± 1.13 | 4.22 |
GaM-VB | 88.27 ± 1.09 | 3.56 |
GaM-En (our method) | 93.40 ± 1.03 | 1.64 |
Approach | Average Accuracy (%) ± Standard Error | Average Time (s)
---|---|---
GM-Split | 85.25 ± 1.33 | 2.21 |
GM-En | 85.28 ± 1.24 | 2.33 |
GaM-VB | 87.37 ± 1.09 | 3.18 |
GaM-En (our method) | 91.66 ± 0.91 | 1.59 |
Dataset/Method: Accuracy (%) ± Standard Error (Average Time in s) | GaM-En (Our Method) | GaM-VB | GM-En | GM-Split
---|---|---|---|---
Caltech256 | 97.84 ± 0.86 (2.11) | 94.32 ± 1.14 (2.99) | 92.97 ± 1.09 (2.28) | 92.91 ± 1.10 (2.18) |
GHIM10K | 97.02 ± 0.92 (2.89) | 95.37 ± 1.18 (3.11) | 93.33 ± 1.13 (2.97) | 93.17 ± 1.15 (2.93) |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Bourouis, S.; Pawar, Y.; Bouguila, N. Entropy-Based Variational Scheme with Component Splitting for the Efficient Learning of Gamma Mixtures. Sensors 2022, 22, 186. https://doi.org/10.3390/s22010186