# Nonparametric Bayesian Learning of Infinite Multivariate Generalized Normal Mixture Models and Its Applications


## Abstract


## 1. Introduction and Related Works

## 2. Proposed Method

#### 2.1. Feature Extraction and Encoding

- Extracting visual features from the training images;
- Encoding extracted features into robust descriptors based on the proposed infinite mixture model and a Fisher kernel to encode the higher order statistics of the infinite mixtures of the extracted features;
- Classifying and/or categorizing image descriptors using the SVM classifier.
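To illustrate the encoding stage of this pipeline, the sketch below computes a Fisher-vector descriptor for a simplified diagonal-Gaussian mixture standing in for the richer inMGNMM of the paper; the function and variable names are ours, not the paper's, and the mixture parameters are toy values. The resulting fixed-length descriptor is what would be fed to the SVM classifier.

```python
import numpy as np

def fisher_vector(features, means, sigmas, weights):
    """Simplified Fisher-vector encoding for a diagonal-Gaussian
    stand-in mixture (the paper uses the richer inMGNMM).
    features: (N, d) local descriptors from one image.
    Returns a fixed-length descriptor of size 2*K*d."""
    N, d = features.shape
    K = len(weights)
    # Posterior responsibilities gamma[i, k] (soft assignments).
    log_p = np.stack([
        -0.5 * np.sum(((features - means[k]) / sigmas[k]) ** 2
                      + np.log(2 * np.pi * sigmas[k] ** 2), axis=1)
        + np.log(weights[k])
        for k in range(K)], axis=1)
    log_p -= log_p.max(axis=1, keepdims=True)
    gamma = np.exp(log_p)
    gamma /= gamma.sum(axis=1, keepdims=True)
    # Gradients of the log-likelihood w.r.t. means and deviations.
    fv = []
    for k in range(K):
        diff = (features - means[k]) / sigmas[k]
        g_mu = (gamma[:, k:k + 1] * diff).sum(0) / (N * np.sqrt(weights[k]))
        g_sig = (gamma[:, k:k + 1] * (diff ** 2 - 1)).sum(0) / (N * np.sqrt(2 * weights[k]))
        fv.extend([g_mu, g_sig])
    fv = np.concatenate(fv)
    # Power and L2 normalization, standard practice for Fisher vectors.
    fv = np.sign(fv) * np.sqrt(np.abs(fv))
    return fv / (np.linalg.norm(fv) + 1e-12)

# Toy usage: 100 SIFT-like descriptors, a 3-component mixture.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 8))
K, d = 3, 8
means = rng.normal(size=(K, d))
sigmas = np.ones((K, d))
weights = np.full(K, 1.0 / K)
desc = fisher_vector(feats, means, sigmas, weights)
print(desc.shape)  # one fixed-length (2*K*d,) descriptor per image
```

Each image thus yields one vector of dimension 2Kd regardless of how many local descriptors it contains, which is what makes an SVM applicable downstream.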

#### 2.2. Model Specification

**Definition 1.**

- ${\overrightarrow{\mu}}_{k}$ is the mean vector of the MGGD distribution.
- ${\beta}_{k}$ represents its shape parameter, which controls the peakedness and the spread of the distribution.
- ${\mathsf{\Sigma}}_{k}$ is its covariance matrix, also called a scatter matrix.
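For reference, a common parameterization of the MGGD density with these parameters (where $m$ denotes the scale parameter that also appears in the conditional posteriors; exact normalization conventions vary across papers) is

$$p(\overrightarrow{Y}\mid {\overrightarrow{\mu}}_{k},{\mathsf{\Sigma}}_{k},{\beta}_{k})=\frac{\Gamma (d/2)}{{\pi}^{d/2}\,\Gamma \left(\frac{d}{2{\beta}_{k}}\right){2}^{\frac{d}{2{\beta}_{k}}}}\;\frac{{\beta}_{k}}{{m}^{d/2}\,|{\mathsf{\Sigma}}_{k}{|}^{1/2}}\,\exp \left(-\frac{1}{2{m}^{{\beta}_{k}}}{\left({(\overrightarrow{Y}-{\overrightarrow{\mu}}_{k})}^{T}{\mathsf{\Sigma}}_{k}^{-1}(\overrightarrow{Y}-{\overrightarrow{\mu}}_{k})\right)}^{{\beta}_{k}}\right),$$

where $d$ is the dimensionality of $\overrightarrow{Y}$. Setting ${\beta}_{k}=1$ recovers the multivariate Gaussian, while ${\beta}_{k}<1$ gives heavier tails.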

#### 2.3. The Infinite Mixture Model

#### 2.3.1. Priors and Conditional Posterior Distributions

- We start by specifying a common prior for ${\overrightarrow{\mu}}_{j}$ as a Normal distribution, thus we have ${\overrightarrow{\mu}}_{j}\sim \mathcal{N}({\overrightarrow{\mu}}_{0},{\mathsf{\Sigma}}_{0})$.
- For the shape parameter ${\beta}_{j}$, an appropriate prior is the Gamma distribution (${\beta}_{j}\sim \mathcal{G}({\alpha}_{\beta},{\beta}_{\beta})$).
- For the covariance matrix ${\mathsf{\Sigma}}_{j}$, a natural prior choice is the Inverted Wishart (${\mathsf{\Sigma}}_{j}\sim IW(\psi ,\nu )$).
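These priors can all be sampled directly; the numpy-only sketch below draws one set of component parameters (the hyperparameter values are illustrative, not those of the paper, and the inverse-Wishart draw uses the standard Bartlett construction):

```python
import numpy as np

def sample_inverse_wishart(psi, nu, rng):
    """Draw Sigma ~ IW(psi, nu) via the Bartlett decomposition:
    if W ~ Wishart(nu, psi^{-1}) then W^{-1} ~ IW(psi, nu)."""
    d = psi.shape[0]
    L = np.linalg.cholesky(np.linalg.inv(psi))
    A = np.zeros((d, d))
    for i in range(d):
        A[i, i] = np.sqrt(rng.chisquare(nu - i))  # chi2 with nu, nu-1, ...
        for j in range(i):
            A[i, j] = rng.normal()
    W = L @ A @ A.T @ L.T
    return np.linalg.inv(W)

rng = np.random.default_rng(1)
d = 3

# Hyperparameters (illustrative values only).
mu0, Sigma0 = np.zeros(d), np.eye(d)
alpha_beta, beta_beta = 2.0, 1.0
psi, nu = np.eye(d), d + 2

# One draw of the component parameters from their priors.
mu_j = rng.multivariate_normal(mu0, Sigma0)      # mu_j ~ N(mu0, Sigma0)
beta_j = rng.gamma(alpha_beta, 1.0 / beta_beta)  # Gamma with rate beta_beta
Sigma_j = sample_inverse_wishart(psi, nu, rng)   # Sigma_j ~ IW(psi, nu)

print(mu_j.shape, beta_j > 0, np.allclose(Sigma_j, Sigma_j.T))
```

Note that numpy's `rng.gamma` takes a scale parameter, so a rate ${\beta}_{\beta}$ is passed as `1.0 / beta_beta`.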

- The conditional posterior distribution of the mean parameter is determined as:
$$\begin{aligned} p({\overrightarrow{\mu}}_{j}\mid \mathcal{Z},\mathcal{Y}) &\propto p({\overrightarrow{\mu}}_{j})\prod _{{Z}_{ij}=1}p({\overrightarrow{Y}}_{i}\mid {\overrightarrow{\mu}}_{j},{\beta}_{j},{\mathsf{\Sigma}}_{j})\\ &\propto \mathcal{N}({\overrightarrow{\mu}}_{0},{\mathsf{\Sigma}}_{0})\prod _{{Z}_{ij}=1}p({\overrightarrow{Y}}_{i}\mid {\overrightarrow{\mu}}_{j},{\beta}_{j},{\mathsf{\Sigma}}_{j})\\ &\propto \exp \left(-\frac{1}{2}{({\overrightarrow{\mu}}_{j}-{\overrightarrow{\mu}}_{0})}^{T}{\mathsf{\Sigma}}_{0}^{-1}({\overrightarrow{\mu}}_{j}-{\overrightarrow{\mu}}_{0})\right)\\ &\quad \times \exp \left(\sum _{{Z}_{ij}=1}\frac{-1}{2{m}^{{\beta}_{j}}}{\left({({\overrightarrow{Y}}_{i}-{\overrightarrow{\mu}}_{j})}^{T}{\mathsf{\Sigma}}_{j}^{-1}({\overrightarrow{Y}}_{i}-{\overrightarrow{\mu}}_{j})\right)}^{{\beta}_{j}}\right). \end{aligned}$$
- The conditional posterior distribution of the shape parameter is determined as:
$$\begin{aligned} p({\beta}_{j}\mid \mathcal{Z},\mathcal{Y}) &\propto p({\beta}_{j})\prod _{{Z}_{ij}=1}p({\overrightarrow{Y}}_{i}\mid {\overrightarrow{\mu}}_{j},{\beta}_{j},{\mathsf{\Sigma}}_{j})\\ &\propto \mathcal{G}({\alpha}_{\beta},{\beta}_{\beta})\prod _{{Z}_{ij}=1}p({\overrightarrow{Y}}_{i}\mid {\overrightarrow{\mu}}_{j},{\beta}_{j},{\mathsf{\Sigma}}_{j})\\ &\propto {\beta}_{j}^{{\alpha}_{\beta}-1}{e}^{-{\beta}_{\beta}{\beta}_{j}}{\left(\frac{{\beta}_{j}}{\Gamma (d/2{\beta}_{j})\,{2}^{d/2{\beta}_{j}}}\right)}^{{n}_{j}}\\ &\quad \times \exp \left(\sum _{{Z}_{ij}=1}\frac{-1}{2{m}^{{\beta}_{j}}}{\left({({\overrightarrow{Y}}_{i}-{\overrightarrow{\mu}}_{j})}^{T}{\mathsf{\Sigma}}_{j}^{-1}({\overrightarrow{Y}}_{i}-{\overrightarrow{\mu}}_{j})\right)}^{{\beta}_{j}}\right). \end{aligned}$$
- The conditional posterior distribution of the covariance matrix is calculated as:
$$\begin{aligned} p({\mathsf{\Sigma}}_{j}\mid \mathcal{Z},\mathcal{Y}) &\propto p({\mathsf{\Sigma}}_{j})\prod _{{Z}_{ij}=1}p({\overrightarrow{Y}}_{i}\mid {\overrightarrow{\mu}}_{j},{\beta}_{j},{\mathsf{\Sigma}}_{j})\\ &\propto IW(\psi ,\nu )\prod _{{Z}_{ij}=1}p({\overrightarrow{Y}}_{i}\mid {\overrightarrow{\mu}}_{j},{\beta}_{j},{\mathsf{\Sigma}}_{j})\\ &\propto |{\mathsf{\Sigma}}_{j}{|}^{-(\nu +d+1)/2}{e}^{-\frac{1}{2}\mathrm{tr}\left(\psi {\mathsf{\Sigma}}_{j}^{-1}\right)}\\ &\quad \times \exp \left(\sum _{{Z}_{ij}=1}\frac{-1}{2{m}^{{\beta}_{j}}}{\left({({\overrightarrow{Y}}_{i}-{\overrightarrow{\mu}}_{j})}^{T}{\mathsf{\Sigma}}_{j}^{-1}({\overrightarrow{Y}}_{i}-{\overrightarrow{\mu}}_{j})\right)}^{{\beta}_{j}}\right). \end{aligned}$$

#### 2.3.2. Pseudo-Algorithm

- Update the represented clusters denoted by M.
- Update ${w}_{j}=\frac{{n}_{j}}{N+\delta}$ and ${n}_{j}$, $j=1,\dots ,M$.
- Update the mixing parameters ${w}_{U}=\frac{\delta}{\delta +N}$.
- Update ${\overrightarrow{\mu}}_{k}$, ${\beta}_{k}$ and ${\mathsf{\Sigma}}_{k}$ using the underlying posteriors, $k=1,\dots ,K$.
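The bookkeeping in the first three steps above can be sketched as follows (a minimal numpy sketch; `update_weights` and the toy assignments are ours, not the paper's):

```python
import numpy as np

def update_weights(z, N, delta):
    """One sweep of bookkeeping: recompute the counts n_j of the
    represented clusters, their mixing weights w_j = n_j / (N + delta),
    and the weight w_U = delta / (delta + N) reserved for an
    unrepresented (new) cluster."""
    labels, counts = np.unique(z, return_counts=True)
    M = len(labels)           # number of represented clusters
    w = counts / (N + delta)  # w_j for j = 1..M
    w_u = delta / (delta + N) # mass for a brand-new cluster
    return labels, counts, w, w_u, M

# Toy assignments for N = 10 observations over 3 clusters, delta = 1.
z = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 0])
labels, counts, w, w_u, M = update_weights(z, N=len(z), delta=1.0)
print(M, counts.tolist(), round(w.sum() + w_u, 6))
```

The represented-cluster weights and the unrepresented mass sum to one, which is what lets the sampler open a new component with probability proportional to $\delta$.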

## 3. Experiments

#### 3.1. Texture Classification

#### 3.2. Human Actions Categorization

## 4. Discussion

## 5. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

- Li, Y.; Wang, G.; Li, M.; Li, J.; Shi, L.; Li, J. Application of CT images in the diagnosis of lung cancer based on finite mixed model. Saudi J. Biol. Sci. **2020**, 27, 1073–1079.
- Lai, Y.; Ping, Y.; He, W.; Wang, B.; Wang, J.; Zhang, X. Variational Bayesian inference for finite inverted Dirichlet mixture model and its application to object detection. Chin. J. Electron. **2018**, 27, 603–610.
- Bourouis, S.; Sallay, H.; Bouguila, N. A Competitive Generalized Gamma Mixture Model for Medical Image Diagnosis. IEEE Access **2021**, 9, 13727–13736.
- McLachlan, G.J.; Peel, D. Finite Mixture Models; John Wiley & Sons: Hoboken, NJ, USA, 2004.
- Bouguila, N.; Ziou, D. MML-Based Approach for Finite Dirichlet Mixture Estimation and Selection. In Machine Learning and Data Mining in Pattern Recognition, Proceedings of the 4th International Conference, MLDM 2005, Leipzig, Germany, 9–11 July 2005; Perner, P., Imiya, A., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2005; Volume 3587, pp. 42–51.
- Khan, A.M.; El-Daly, H.; Rajpoot, N.M. A Gamma-Gaussian mixture model for detection of mitotic cells in breast cancer histopathology images. In Proceedings of the 21st International Conference on Pattern Recognition, ICPR 2012, Tsukuba, Japan, 11–15 November 2012; pp. 149–152.
- Alroobaea, R.; Rubaiee, S.; Bourouis, S.; Bouguila, N.; Alsufyani, A. Bayesian inference framework for bounded generalized Gaussian-based mixture model and its application to biomedical images classification. Int. J. Imaging Syst. Technol. **2020**, 30, 18–30.
- Bourouis, S.; Channoufi, I.; Alroobaea, R.; Rubaiee, S.; Andejany, M.; Bouguila, N. Color object segmentation and tracking using flexible statistical model and level-set. Multimed. Tools Appl. **2021**, 80, 5809–5831.
- Bouchard, G.; Celeux, G. Selection of Generative Models in Classification. IEEE Trans. Pattern Anal. Mach. Intell. **2006**, 28, 544–554.
- Najar, F.; Bourouis, S.; Bouguila, N.; Belghith, S. A new hybrid discriminative/generative model using the full-covariance multivariate generalized Gaussian mixture models. Soft Comput. **2020**, 24, 10611–10628.
- Elguebaly, T.; Bouguila, N. A hierarchical nonparametric Bayesian approach for medical images and gene expressions classification. Soft Comput. **2015**, 19, 189–204.
- Najar, F.; Bourouis, S.; Bouguila, N.; Belghith, S. Unsupervised learning of finite full covariance multivariate generalized Gaussian mixture models for human activity recognition. Multimed. Tools Appl. **2019**, 78, 18669–18691.
- Alharithi, F.S.; Almulihi, A.H.; Bourouis, S.; Alroobaea, R.; Bouguila, N. Discriminative Learning Approach Based on Flexible Mixture Model for Medical Data Categorization and Recognition. Sensors **2021**, 21, 2450.
- Bouguila, N.; Almakadmeh, K.; Boutemedjet, S. A finite mixture model for simultaneous high-dimensional clustering, localized feature selection and outlier rejection. Expert Syst. Appl. **2012**, 39, 6641–6656.
- Robert, C.; Casella, G. Monte Carlo Statistical Methods; Springer: New York, NY, USA, 2004.
- Bourouis, S.; Laalaoui, Y.; Bouguila, N. Bayesian frameworks for traffic scenes monitoring via view-based 3D cars models recognition. Multimed. Tools Appl. **2019**, 78, 18813–18833.
- Chen, J.; Tan, X. Inference for multivariate normal mixtures. J. Multivar. Anal. **2009**, 100, 1367–1383.
- Bourouis, S.; Bouguila, N. Nonparametric learning approach based on infinite flexible mixture model and its application to medical data analysis. Int. J. Imaging Syst. Technol. **2021**.
- Fan, W.; Sallay, H.; Bouguila, N.; Bourouis, S. Variational learning of hierarchical infinite generalized Dirichlet mixture models and applications. Soft Comput. **2016**, 20, 979–990.
- Rasmussen, C.E. The Infinite Gaussian Mixture Model. In Proceedings of the Advances in Neural Information Processing Systems (NIPS); MIT Press: Cambridge, MA, USA, 2000; pp. 554–560.
- Bouguila, N.; Ziou, D. A Dirichlet process mixture of generalized Dirichlet distributions for proportional data modeling. IEEE Trans. Neural Netw. **2010**, 21, 107–122.
- Gelman, A.; Stern, H.S.; Carlin, J.B.; Dunson, D.B.; Vehtari, A.; Rubin, D.B. Bayesian Data Analysis; Texts in Statistical Science; Chapman and Hall/CRC: Boca Raton, FL, USA, 2013.
- Bourouis, S.; Al-Osaimi, F.R.; Bouguila, N.; Sallay, H.; Aldosari, F.M.; Mashrgy, M.A. Bayesian inference by reversible jump MCMC for clustering based on finite generalized inverted Dirichlet mixtures. Soft Comput. **2019**, 23, 5799–5813.
- Bourouis, S.; Alroobaea, R.; Rubaiee, S.; Andejany, M.; Almansour, F.M.; Bouguila, N. Markov Chain Monte Carlo-Based Bayesian Inference for Learning Finite and Infinite Inverted Beta-Liouville Mixture Models. IEEE Access **2021**, 9, 71170–71183.
- Mehta, R.; Egiazarian, K.O. Texture Classification Using Dense Micro-Block Difference. IEEE Trans. Image Process. **2016**, 25, 1604–1616.
- Boiman, O.; Shechtman, E.; Irani, M. In defense of Nearest-Neighbor based image classification. In Proceedings of the 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2008), Anchorage, AK, USA, 24–26 June 2008.
- Jaakkola, T.S.; Haussler, D. Exploiting Generative Models in Discriminative Classifiers. In Advances in Neural Information Processing Systems 11, Proceedings of the NIPS Conference, Denver, CO, USA, 30 November–5 December 1998; Kearns, M.J., Solla, S.A., Cohn, D.A., Eds.; The MIT Press: Cambridge, MA, USA, 1998; pp. 487–493.
- Sánchez, J.; Perronnin, F.; Mensink, T.; Verbeek, J.J. Image Classification with the Fisher Vector: Theory and Practice. Int. J. Comput. Vis. **2013**, 105, 222–245.
- Marin, J.; Mengersen, K.; Robert, C. Bayesian modeling and inference on mixtures of distributions. In Handbook of Statistics 25; Dey, D., Rao, C., Eds.; Elsevier: Amsterdam, The Netherlands, 2004.
- Husmeier, D.; Penny, W.D.; Roberts, S.J. An empirical evaluation of Bayesian sampling with hybrid Monte Carlo for training neural network classifiers. Neural Netw. **1999**, 12, 677–705.
- Geiger, D.; Heckerman, D. Parameter priors for directed acyclic graphical models and the characterization of several probability distributions. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, Stockholm, Sweden, 30 July–1 August 1999; Morgan Kaufmann: San Francisco, CA, USA, 1999; pp. 216–225.
- Neal, R.M. Markov chain sampling methods for Dirichlet process mixture models. J. Comput. Graph. Stat. **2000**, 9, 249–265.
- Roberts, G.O.; Tweedie, R.L. Bounds on regeneration times and convergence rates for Markov chains. Stoch. Process. Their Appl. **1999**, 80, 211–229.
- Raftery, A.E.; Lewis, S.M. Comment: One Long Run with Diagnostics: Implementation Strategies for Markov Chain Monte Carlo. Stat. Sci. **1992**, 7, 493–497.
- Cowles, M.K.; Carlin, B.P. Markov Chain Monte Carlo Convergence Diagnostics: A Comparative Review. J. Am. Stat. Assoc. **1996**, 91, 883–904.
- Huang, Y.; Zhou, F.; Gilles, J. Empirical curvelet based fully convolutional network for supervised texture image segmentation. Neurocomputing **2019**, 349, 31–43.
- Zhang, Q.; Lin, J.; Tao, Y.; Li, W.; Shi, Y. Salient object detection via color and texture cues. Neurocomputing **2017**, 243, 35–48.
- Fan, W.; Bouguila, N. Online facial expression recognition based on finite Beta-Liouville mixture models. In Proceedings of the 2013 International Conference on Computer and Robot Vision, Regina, SK, Canada, 28–31 May 2013; pp. 37–44.
- Zhao, G.; Pietikainen, M. Dynamic texture recognition using local binary patterns with an application to facial expressions. IEEE Trans. Pattern Anal. Mach. Intell. **2007**, 29, 915–928.
- Hayman, E.; Caputo, B.; Fritz, M.; Eklundh, J. On the Significance of Real-World Conditions for Material Classification. In Computer Vision—ECCV 2004, Proceedings of the 8th European Conference on Computer Vision, Prague, Czech Republic, 11–14 May 2004; Pajdla, T., Matas, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; Volume 3024, Part IV, pp. 253–266.
- Zhang, J.; Marszalek, M.; Lazebnik, S.; Schmid, C. Local Features and Kernels for Classification of Texture and Object Categories: A Comprehensive Study. Int. J. Comput. Vis. **2007**, 73, 213–238.
- Badoual, A.; Unser, M.; Depeursinge, A. Texture-driven parametric snakes for semi-automatic image segmentation. Comput. Vis. Image Underst. **2019**, 188, 102793.
- Zheng, Y.; Chen, K. A general model for multiphase texture segmentation and its applications to retinal image analysis. Biomed. Signal Process. Control **2013**, 8, 374–381.
- Haralick, R.M.; Shanmugam, K.S.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. **1973**, 3, 610–621.
- Guo, Z.; Zhang, L.; Zhang, D. A Completed Modeling of Local Binary Pattern Operator for Texture Classification. IEEE Trans. Image Process. **2010**, 19, 1657–1663.
- Manjunath, B.S.; Ma, W. Texture Features for Browsing and Retrieval of Image Data. IEEE Trans. Pattern Anal. Mach. Intell. **1996**, 18, 837–842.
- Lazebnik, S.; Schmid, C.; Ponce, J. A Sparse Texture Representation Using Local Affine Regions. IEEE Trans. Pattern Anal. Mach. Intell. **2005**, 27, 1265–1278.
- Xu, Y.; Ji, H.; Fermüller, C. Viewpoint Invariant Texture Description Using Fractal Analysis. Int. J. Comput. Vis. **2009**, 83, 85–100.
- Yao, Y.; Yang, W.; Huang, P.; Wang, Q.; Cai, Y.; Tang, Z. Exploiting textual and visual features for image categorization. Pattern Recognit. Lett. **2019**, 117, 140–145.
- Traboulsi, Y.E.; Dornaika, F. Flexible semi-supervised embedding based on adaptive loss regression: Application to image categorization. Inf. Sci. **2018**, 444, 1–19.
- Roy, S.; Shivakumara, P.; Jain, N.; Khare, V.; Dutta, A.; Pal, U.; Lu, T. Rough-fuzzy based scene categorization for text detection and recognition in video. Pattern Recognit. **2018**, 80, 64–82.
- Najar, F.; Bourouis, S.; Zaguia, A.; Bouguila, N.; Belghith, S. Unsupervised Human Action Categorization Using a Riemannian Averaged Fixed-Point Learning of Multivariate GGMM. In Proceedings of the Image Analysis and Recognition—15th International Conference, ICIAR 2018, Póvoa de Varzim, Portugal, 27–29 June 2018; pp. 408–415.
- Vrigkas, M.; Nikou, C.; Kakadiaris, I.A. A review of human activity recognition methods. Front. Robot. AI **2015**, 2, 28.
- Zhu, R.; Dornaika, F.; Ruichek, Y. Learning a discriminant graph-based embedding with feature selection for image categorization. Neural Netw. **2019**, 111, 35–46.
- Zhou, H.; Zhou, S. Scene categorization towards urban tunnel traffic by image quality assessment. J. Vis. Commun. Image Represent. **2019**, 65, 102655.
- Sánchez, D.L.; Arrieta, A.G.; Corchado, J.M. Visual content-based web page categorization with deep transfer learning and metric learning. Neurocomputing **2019**, 338, 418–431.
- Vieira, T.; Faugeroux, R.; Martínez, D.; Lewiner, T. Online human moves recognition through discriminative key poses and speed-aware action graphs. Mach. Vis. Appl. **2017**, 28, 185–200.
- Zhu, C.; Sheng, W. Motion- and location-based online human daily activity recognition. Pervasive Mob. Comput. **2011**, 7, 256–269.
- Schuldt, C.; Laptev, I.; Caputo, B. Recognizing human actions: A local SVM approach. In Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, UK, 26 August 2004; Volume 3, pp. 32–36.
- Scovanner, P.; Ali, S.; Shah, M. A 3-dimensional SIFT descriptor and its application to action recognition. In Proceedings of the 15th ACM International Conference on Multimedia, Augsburg, Germany, 25 September 2007; pp. 357–360.
- Csurka, G.; Dance, C.; Fan, L.; Willamowski, J.; Bray, C. Visual categorization with bags of keypoints. In Proceedings of the Workshop on Statistical Learning in Computer Vision, Prague, Czech Republic, 11–14 May 2004; Volume 1, pp. 1–2.
- Bosch, A.; Zisserman, A.; Muñoz, X. Scene classification via pLSA. In Proceedings of Computer Vision—ECCV 2006, Graz, Austria, 7–13 May 2006; pp. 517–530.
- Niebles, J.C.; Wang, H.; Fei-Fei, L. Unsupervised learning of human action categories using spatial-temporal words. Int. J. Comput. Vis. **2008**, 79, 299–318.
- Wong, S.F.; Cipolla, R. Extracting spatiotemporal interest points using global information. In Proceedings of the 2007 IEEE 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007; pp. 1–8.
- Fan, W.; Bouguila, N. Variational learning for Dirichlet process mixtures of Dirichlet distributions and applications. Multimed. Tools Appl. **2014**, 70, 1685–1702.
- Dollár, P.; Rabaud, V.; Cottrell, G.; Belongie, S. Behavior Recognition via Sparse Spatio-Temporal Features; VS-PETS: Beijing, China, 2005.
- Channoufi, I.; Bourouis, S.; Bouguila, N.; Hamrouni, K. Spatially Constrained Mixture Model with Feature Selection for Image and Video Segmentation. In Image and Signal Processing, Proceedings of the 8th International Conference, ICISP 2018, Cherbourg, France, 2–4 July 2018; Mansouri, A., Elmoataz, A., Nouboud, F., Mammass, D., Eds.; Springer: Berlin/Heidelberg, Germany, 2018; Volume 10884, pp. 36–44.
- Fan, W.; Sallay, H.; Bouguila, N.; Bourouis, S. A hierarchical Dirichlet process mixture of generalized Dirichlet distributions for feature selection. Comput. Electr. Eng. **2015**, 43, 48–65.
- Versaci, M.; Morabito, F.C. Image edge detection: A new approach based on fuzzy entropy and fuzzy divergence. Int. J. Fuzzy Syst. **2021**, 23, 918–936.
- Orujov, F.; Maskeliunas, R.; Damasevicius, R.; Wei, W. Fuzzy based image edge detection algorithm for blood vessel detection in retinal images. Appl. Soft Comput. **2020**, 94, 106452.

**Figure 1.** The pipeline of the proposed method. First, robust invariant visual features are extracted from each image. Then, these features are modelled using an infinite multivariate generalized Normal mixture model. The next step consists of encoding them using Fisher vectors to obtain effective descriptors. Finally, an SVM classifier is applied for categorization and classification applications.

**Figure 2.** Graphical representation of our developed Bayesian infinite multivariate generalized Normal mixture model. Fixed hyperparameters are indicated by rounded boxes and random variables by circles. Y is the observed variable, Z is the latent variable, large boxes indicate repeated processes, and the arcs show the dependencies between variables.

**Figure 3.** Samples of textures from different datasets: (**a**) KTH-TIPS, (**b**) UIUCTex, and (**c**) UMD.

**Table 1.** Overall classification accuracy rate (%) of different approaches for the three texture datasets using SIFT features.

| Method | KTH-TIPS | UIUCTex | UMD |
|---|---|---|---|
| GMM | 80.22 | 84.64 | 83.70 |
| inGMM | 80.94 | 85.33 | 84.14 |
| GGMM | 82.11 | 86.51 | 86.42 |
| inGGMM | 83.30 | 87.21 | 87.17 |
| MGGMM [10] | 84.10 | 88.09 | 88.15 |
| inMGNMM (our method) | 85.91 | 89.03 | 89.10 |

**Table 2.** Overall classification accuracy rate (%) of different approaches for the three texture datasets using LBP features.

| Method | KTH-TIPS | UIUCTex | UMD |
|---|---|---|---|
| GMM | 80.88 | 84.98 | 83.99 |
| inGMM | 81.24 | 86.03 | 84.84 |
| GGMM | 82.90 | 87.05 | 86.96 |
| inGGMM | 83.94 | 87.84 | 87.86 |
| MGGMM [10] | 84.81 | 88.78 | 88.87 |
| inMGNMM (our method) | 86.29 | 89.81 | 89.97 |

**Table 3.** Overall classification accuracy rate (%) of different approaches for the three texture datasets using DMD features.

| Method | KTH-TIPS | UIUCTex | UMD |
|---|---|---|---|
| GMM | 83.08 | 87.77 | 86.12 |
| inGMM | 84.13 | 89.77 | 87.12 |
| GGMM | 85.76 | 90.02 | 89.75 |
| inGGMM | 86.88 | 90.67 | 90.20 |
| MGGMM [10] | 87.67 | 91.07 | 91.23 |
| inMGNMM (our method) | 88.91 | 92.11 | 92.12 |

**Table 4.** Comparative study of different approaches for human activity recognition on the KTH database.


© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Bourouis, S.; Alroobaea, R.; Rubaiee, S.; Andejany, M.; Bouguila, N.
Nonparametric Bayesian Learning of Infinite Multivariate Generalized Normal Mixture Models and Its Applications. *Appl. Sci.* **2021**, *11*, 5798.
https://doi.org/10.3390/app11135798
