Article

Bayesian Model Averaging with Diffused Priors for Model-Based Clustering Under a Cluster Forests Architecture

School of Mathematics and Statistics, Northwestern Polytechnical University, Xi’an 710129, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(11), 1879; https://doi.org/10.3390/sym17111879
Submission received: 2 September 2025 / Revised: 10 October 2025 / Accepted: 15 October 2025 / Published: 5 November 2025
(This article belongs to the Special Issue Bayesian Statistical Methods for Forecasting)

Abstract

This paper considers a class of generative graphical models for parsimonious modeling of Gaussian mixtures and robust unsupervised learning, each assuming that the data are generated independently and identically from a finite mixture model with an extended naïve Bayes structure. To account for model uncertainty, the expectation model-averaging algorithm, which approximates the Bayesian model averaging with incomplete data, is introduced using a novel class of non-informative priors for the parameters. A Cluster Forests architecture to circumvent intractable model averaging over a large selective model space is developed. Extensive synthetic data experiments and real-world data applications show that the proposed methodology can produce clustering results of high robustness and attain good model detection performance.

1. Introduction

Finite Gaussian mixture models are powerful tools for modeling the distributions of random phenomena. They are widely used for unsupervised classification tasks and lay the foundation for many deep learning-based clustering algorithms [1,2]. However, the competitive performance of Gaussian mixture models cannot be expected for high-dimensional datasets due to the curse of dimensionality [3]. They are easily over-parameterized and may suffer from singularity problems when the sample size is small [4]. Moreover, the impact of redundancy and noise can degrade the model’s interpretability and efficiency, resulting in a model with limited generalization capacity [5,6].
Parsimonious modeling of Gaussian mixture models via feature selection aims to reduce the dimensionality by retaining only the features that are discriminative for clustering. Related approaches are commonly established under the local independence assumption [7,8], i.e., adopting a diagonal component covariance matrix, which facilitates computational efficiency but limits the flexibility to model dependence relationships across features. It has been shown experimentally that the local independence assumption can be easily violated in real-world datasets [9,10]. Ignoring the correlations may undermine the reliability of classification algorithms and lead to misleading conclusions about feature saliency [11]. Moreover, in addition to prediction performance, scientists are often interested in describing the relationships between attributes, as in generative models. Modeling the covariance structures can result in better data-generating performance [1].
To allow for the presence of within-component dependence, Celeux and Govaert [12] proposed parsimonious models derived from spectral decomposition of the component covariance matrices. The model family related to the local factor analysis was suggested in [13,14]. Fop et al. [15] proposed to construct the mixture components directly via sparse covariance graphs. Moreover, by viewing the components as Gaussian graphical models, the graphical Lasso path of solutions for sparsity patterns was adopted in [16]. Nevertheless, these specifications are only designed for parsimonious modeling of the covariance structures and are generally not extendable for feature selection purposes.
In this paper, we consider the block-diagonal structure of component covariance matrices [17], a natural generalization of the diagonal one and closely related to the graphical Lasso solution to Gaussian graphical models [18]. It assumes that the features can be partitioned into several groups and local independence is established between groups of features. Relevance or irrelevance of the features grouped together to class assignment can therefore be examined as a whole. Such extension from the perspective of a Bayesian network gives rise to a class of parsimonious Gaussian mixture models by partitioning the features and choosing a naïve Bayes structure [19]. We term this class of models as the leaf-augmented naïve Bayes (LAN) family to distinguish it from the conventional naïve Bayes models, where each leaf node is formed by exactly one feature.
In most Bayesian inferences, it is common practice to fit many models within a selective model space and report the clustering results based on the best one according to some model selection criterion, such as the Bayesian information criterion (BIC) [20] and the integrated complete-data likelihood (ICL) [21]. Alternative approaches fusing model selection with clustering algorithms have been proposed to circumvent the loss of information between these two stages, including the structural expectation-maximization (EM) algorithm [15], the sequential updating and greedy search algorithm [22], and the maximum integrated complete-data likelihood (MICL) iterative algorithm [21]. Nevertheless, inference conditioning on a single selected model ignores model uncertainty, which can lead to underestimation of the uncertainty about quantities of interest and yield risky decisions [23,24,25]. A complete Bayesian solution to this problem involves integrating over all parameter configurations and possible model structures, which is known as Bayesian model averaging (BMA) [26,27,28,29,30]. This approach is optimal in the sense of maximizing the predictive ability as measured by a logarithmic scoring rule [23]. However, the exact BMA of the parsimonious Gaussian mixture models is typically intractable due to the existence of latent class variables. Moreover, without restrictions, the selective model space can be prohibitively large, which makes BMA an impractical proposition for model-based clustering.
This paper endeavors to give a comprehensive treatment to the BMA of the parsimonious Gaussian mixture models under the LAN assumption. Our work is built upon the underpinnings of the expectation model-averaging (EMA) algorithm for the unsupervised naïve Bayes classifiers with categorical data [31], which fuses the standard BMA procedures and the clustering in an omnibus fashion and can be conducted efficiently with the same time complexity as the usual EM algorithm [32]. With a slight modification, we show that the EMA algorithm constitutes an instantiation of the reverse collapsed variational Bayes (RCVB) approach [33], proposed recently as a novel variational Bayes (VB) method.
Extension of the EMA algorithm to the parsimonious Gaussian mixture models under the naïve Bayes structures is straightforward. Inspired by a prior modeling strategy in the context of Bayesian hypothesis tests [34], we introduce a new class of non-informative priors for the Gaussian parameters to achieve objective Bayesian inference. Despite the diffuse property of the priors, we show that the BMA over the selective naïve Bayes structures can be obtained exactly in the complete-data setting, which is the main contributor to the efficiency of the EMA algorithm. Extension of the algorithm to the overall LAN family is obstructed by the sheer size of the selective model space. Indeed, the total number of feature partitions grows with the dimension as the Bell number [35]. Therefore, we construct a Cluster Forests (CF) [36] architecture to circumvent the intractable two-level model averaging for model-based clustering. Combined with the EMA algorithm, the CF produces a cluster ensemble by making random but progressively refined probings of the edges in a Gaussian graphical model. We introduce aggregation metrics to evaluate the patterns of feature importance for clustering and investigate the covariance structures of the Gaussian mixture model. The CF architecture can be implemented efficiently and is expected to produce a clustering model of high robustness and generalization capacity.
The rest of this paper is organized as follows. Section 2 introduces the notations of the parsimonious Gaussian mixture models under the LAN assumption. Section 3 forms the EMA algorithm that approximates the BMA with incomplete data. Section 4 discusses the implementation details of the EMA algorithm for model-based clustering, where a class of non-informative priors is introduced for objective Bayesian inference. Section 5 establishes the overall CF architecture based on the EMA algorithm. Aggregation metrics for clustering and model structure detection are developed. In Section 6, the performance of the proposed method is evaluated on synthetic datasets as well as some real-world datasets. Section 7 concludes this paper, points out limitations, and suggests future research directions.

2. Parsimonious Gaussian Mixture Models

In a clustering problem, there is a set of i.i.d. observations $X = \{x_i\}_{i=1}^{n}$, where $x_i = (x_{i1}, x_{i2}, \dots, x_{id})^{T} \in \mathbb{R}^{d}$ is the $d$-dimensional feature vector of the $i$th individual. The aim is to find a decision rule that partitions the individuals into heterogeneous groups. As a probabilistic method, the finite mixture model assumes that each datum is generated from a class-specific distribution but with the class label missing, so that it marginally follows a finite mixture distribution. The clustering is realized by assigning each individual to the class with the highest posterior probability of membership. Throughout the paper, we denote the number of mixture components, or the number of latent classes, as $K$. The latent class label of the $i$th individual is denoted as $c_i$, taking values in $\{1, 2, \dots, K\}$.
For continuous feature data, the Gaussian mixture model under the local independence assumption is ubiquitously used [7,8]. It assumes that the features are conditionally independent given the hidden class label and that each follows a Gaussian distribution. As a feature can be relevant or irrelevant to data separation, the binary variables $r = (r_1, r_2, \dots, r_d)^{T}$ are introduced, where $r_l \in \{0, 1\}$, with $r_l = 1$ indicating that the $l$th feature is relevant to class assignment and $r_l = 0$ indicating that it is irrelevant and follows a common distribution independent of class assignment. The resulting parsimonious Gaussian mixture model, conventionally termed the naïve Bayes model, is given by
$$p(x_i \mid \theta, r) = \sum_{k=1}^{K} \tau_k \prod_{l=1}^{d} \mathcal{N}(x_{il}; \mu_{kl}, \sigma_{kl})^{r_l}\, \mathcal{N}(x_{il}; \mu_{0l}, \sigma_{0l})^{1 - r_l}, \tag{1}$$
where $\mathcal{N}(x; \mu, \sigma)$ is the density function of the Gaussian distribution with mean $\mu$ and variance $\sigma$. The set of parameters in the Gaussian mixture model is denoted as $\theta = \{\tau, \mu, \sigma\}$, where $\tau = (\tau_1, \tau_2, \dots, \tau_K)^{T}$ collects the mixing proportions ($\tau_k > 0$ and $\sum_{k=1}^{K} \tau_k = 1$), $\mu = \{\mu_l, \mu_{0l}\}_{l=1}^{d}$ with $\mu_l = \{\mu_{kl}\}_{k=1}^{K}$, and $\sigma = \{\sigma_l, \sigma_{0l}\}_{l=1}^{d}$ with $\sigma_l = \{\sigma_{kl}\}_{k=1}^{K}$. Here $\mu_{0l}$ and $\sigma_{0l}$ are the parameters of the common distribution of the $l$th feature when $r_l = 0$.
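As a minimal illustration of how the density in (1) is evaluated, the sketch below (not the authors' code; the parameter layout and all names are assumptions made for illustration) computes $p(x_i \mid \theta, r)$ for a single observation:

```python
# Minimal sketch: evaluating the naive Bayes mixture density in Eq. (1) for one
# observation x. Parameter layout is an illustrative assumption.
import numpy as np
from scipy.stats import norm

def naive_bayes_mixture_density(x, tau, mu, sigma, mu0, sigma0, r):
    """x: (d,), tau: (K,), mu/sigma: (K, d) class means/variances,
    mu0/sigma0: (d,) common means/variances, r: (d,) relevance flags in {0, 1}."""
    K, d = mu.shape
    # Irrelevant features contribute a class-independent factor N(x_l; mu0_l, sigma0_l)
    common = np.prod(norm.pdf(x, mu0, np.sqrt(sigma0)) ** (1 - r))
    dens = 0.0
    for k in range(K):
        class_part = np.prod(norm.pdf(x, mu[k], np.sqrt(sigma[k])) ** r)
        dens += tau[k] * class_part * common
    return dens
```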
The local independence assumption for the Gaussian mixture model could be too restrictive when the features are locally correlated [15]. Here, we consider the LAN model family where the features can be partitioned into several mutually independent groups given the class label, i.e., assuming a block-diagonal component covariance matrix. To ease interpretation, we call the groups of features “feature blocks”. The pattern of feature relevance/irrelevance to data separation can therefore be examined block-wise due to within-block correlations.
Specifically, let $s$ denote a partition of the features and $S$ the collection of selective feature partitions. According to $s$, $x_i$ can be arranged as $x_i = (x_{i1}^{s}, x_{i2}^{s}, \dots, x_{id_s}^{s})^{T}$ after an appropriate permutation of the coordinates, where $d_s$ denotes the total number of feature blocks and $x_{il}^{s}$ the $l$th feature block under partition $s$. Let $d_l^{s}$ ($\sum_{l=1}^{d_s} d_l^{s} = d$) be the dimension of $x_{il}^{s}$. Assuming the feature blocks are mutually independent given the class label, we introduce the indicators $r^{s} = (r_1^{s}, r_2^{s}, \dots, r_{d_s}^{s})^{T}$, with $r_l^{s} = 1$ ($r_l^{s} = 0$) indicating that the $l$th feature block is relevant (irrelevant) to class assignment. Given the feature partition $s$ and the indicators $r^{s}$, the density of $x_i$ becomes
$$p(x_i \mid \theta^{s}, r^{s}, s) = \sum_{k=1}^{K} \tau_k \prod_{l=1}^{d_s} \mathcal{N}(x_{il}^{s}; \mu_{kl}^{s}, \Sigma_{kl}^{s})^{r_l^{s}}\, \mathcal{N}(x_{il}^{s}; \mu_{0l}^{s}, \Sigma_{0l}^{s})^{1 - r_l^{s}}, \tag{2}$$
where $\mu_{kl}^{s}$ and $\Sigma_{kl}^{s}$ are the mean vector and covariance matrix of the $l$th feature block in class $k$ when $r_l^{s} = 1$, and $\mu_{0l}^{s}$ and $\Sigma_{0l}^{s}$ are those of the common distribution of the $l$th feature block when $r_l^{s} = 0$. We use $\theta^{s} = \{\tau, \mu^{s}, \Sigma^{s}\}$ to denote the set of parameters under partition $s$, where $\mu^{s} = \{\mu_l^{s}, \mu_{0l}^{s}\}_{l=1}^{d_s}$ with $\mu_l^{s} = \{\mu_{kl}^{s}\}_{k=1}^{K}$ and $\Sigma^{s} = \{\Sigma_l^{s}, \Sigma_{0l}^{s}\}_{l=1}^{d_s}$ with $\Sigma_l^{s} = \{\Sigma_{kl}^{s}\}_{k=1}^{K}$. Note that model (2) reduces to model (1) when each feature stands as its own block.
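The block-diagonal counterpart in (2) replaces the univariate factors with multivariate Gaussian densities over the feature blocks. The following sketch (again with illustrative names, assuming the blocks are passed as a list of sub-vectors) evaluates $p(x_i \mid \theta^{s}, r^{s}, s)$:

```python
# Minimal sketch: evaluating the LAN mixture density in Eq. (2) for one observation
# split into feature blocks. Data structures are illustrative assumptions.
import numpy as np
from scipy.stats import multivariate_normal

def lan_mixture_density(x_blocks, tau, mu, Sigma, mu0, Sigma0, r):
    """x_blocks: list of (d_l,) arrays; mu[k][l], Sigma[k][l]: block mean/covariance in
    class k; mu0[l], Sigma0[l]: common block mean/covariance; r[l] in {0, 1}."""
    K = len(tau)
    dens = 0.0
    for k in range(K):
        p = tau[k]
        for l, xl in enumerate(x_blocks):
            if r[l] == 1:
                p *= multivariate_normal.pdf(xl, mean=mu[k][l], cov=Sigma[k][l])
            else:
                p *= multivariate_normal.pdf(xl, mean=mu0[l], cov=Sigma0[l])
        dens += p
    return dens
```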

3. Unsupervised Classifier via BMA

3.1. General Rule of BMA

In a model-based clustering problem, data separation is often realized by constructing an unsupervised classifier. As the true model and parameter values are unknown a priori, the unsupervised classifier accounting for model and parameter uncertainty can be derived based on the rule of BMA.
Without loss of generality, we denote a model as $m$, belonging to a collection of models $\mathcal{M}$; for example, a LAN model $m = (r^{s}, s)$ from the collection $\mathcal{M} = \{(r^{s}, s) : s \in S,\ r^{s} \in \{0, 1\}^{d_s}\}$. The set of parameters under model $m$ is denoted as $\theta_m$, with $\theta_m \in \Theta_m$. The unsupervised classifier based on the rule of BMA can be obtained by learning the predictive probability
$$p(c_0, x_0 \mid X) = \sum_{m \in \mathcal{M}} \int_{\Theta_m} p(c_0, x_0 \mid \theta_m, m)\, p(\theta_m, m \mid X)\, d\theta_m, \tag{3}$$
where $p(c_0, x_0 \mid \theta_m, m)$ is the joint likelihood function of the class label $c_0$ and the feature data $x_0$, and $p(\theta_m, m \mid X)$ is the posterior of the model and parameters given the observed data $X$. We then obtain the classifier via
$$p(c_0 \mid x_0, X) = \frac{p(c_0, x_0 \mid X)}{\sum_{c_0} p(c_0, x_0 \mid X)}. \tag{4}$$
Generally, the predictive distribution in (3) cannot be obtained in closed form, as its computation involves intractable integrals over the unobserved data of the latent class labels. We denote $c = \{c_i\}_{i=1}^{n}$ as the latent class label data, which indicate the true class assignments of the $n$ objects.
If we have an estimate of $c$, denoted $\hat{c}$, the predictive distribution $p(c_0, x_0 \mid X)$ can be approximated by interpolating $\hat{c}$ for $c$ in the complete-data predictive distribution $p(c_0, x_0 \mid c, X)$, which can be obtained via
$$p(c_0, x_0 \mid c, X) = \sum_{m \in \mathcal{M}} \int_{\Theta_m} p(c_0, x_0 \mid \theta_m, m)\, p(\theta_m, m \mid c, X)\, d\theta_m, \tag{5}$$
or
$$p(c_0, x_0 \mid c, X) = \frac{p(c_0, c, x_0, X)}{p(c, X)}. \tag{6}$$
In (5) and (6),
$$p(\theta_m, m \mid c, X) \propto p(c, X \mid \theta_m, m)\, p(\theta_m, m), \tag{7}$$
is the posterior of the model and parameters given the complete data, and
$$p(c, X) = \sum_{m \in \mathcal{M}} \int_{\Theta_m} p(c, X \mid \theta_m, m)\, p(\theta_m, m)\, d\theta_m, \tag{8}$$
is the integrated complete-data likelihood function, and $p(\theta_m, m)$ is the joint prior of the model and parameters. Generally, with properly specified priors and a moderately sized model space, the posterior in (7) and the integral in (8) can be computed exactly, and we can obtain $p(c_0, x_0 \mid c, X)$ in closed form.
For the interpolation $\hat{c}$, a natural choice is the MAP value $\hat{c}_{\mathrm{MAP}}$ that maximizes the posterior $p(c \mid X)$ or, equivalently, $p(c, X)$. However, this optimization involves a combinatorial search over $\{1, 2, \dots, K\}^{n}$, which is computationally infeasible. Here, we consider an alternative interpolation of $c$. It can be obtained using an efficient EM-like algorithm and, interestingly, it falls within the scheme of a recently proposed VB method.

3.2. The EMA Algorithm

The EMA algorithm proposed by Santafé et al. [31] is composed of iterations of the expectation (E) step and the model-averaging (MA) step. The E step is analogous to that in the EM algorithm, which is formed as an interpolation of the latent class data via conditional expectations. Instead of a maximization step to find the maximum a posteriori (MAP) estimator of the model and parameters, the MA step conducts the BMA to integrate model and parameter uncertainty given the pseudo-complete data from the E step. The EMA algorithm can be implemented as follows.
E step: Denote the pseudo-complete data after the $(t-1)$th iteration as $(\hat{Z}^{(t-1)}, X)$, where $\hat{Z}^{(t-1)} = \{\hat{z}_i^{(t-1)}\}_{i=1}^{n}$. To ease interpretation, we represent the latent class label data as $Z = \{z_i\}_{i=1}^{n}$, where $z_i = (z_{i1}, z_{i2}, \dots, z_{iK})^{T}$ and $z_{ik} = \delta_{c_i, k}$ ($\delta_{c_i, k}$ is the Kronecker delta). At the $t$th iteration, we compute
$$Q^{(t)}(\theta_m, m) = \sum_{i=1}^{n} E\big[ \log p(z_i, x_i \mid \theta_m, m) \,\big|\, \hat{Z}_{-i}^{(t-1)}, X \big] + \log p(\theta_m, m). \tag{9}$$
The subscript $-i$ indicates that the data of the $i$th individual are removed. As with the usual finite mixture model, the E step amounts to an interpolation of the data $z_i$ by the response $\hat{z}_i^{(t)} = (\hat{z}_{i1}^{(t)}, \hat{z}_{i2}^{(t)}, \dots, \hat{z}_{iK}^{(t)})^{T}$, where
$$\hat{z}_{ik}^{(t)} := p(z_i = 1_k \mid \hat{Z}_{-i}^{(t-1)}, X) \propto p(z_i = 1_k, x_i \mid \hat{Z}_{-i}^{(t-1)}, X_{-i}), \tag{10}$$
and $1_k = (\delta_{k,1}, \delta_{k,2}, \dots, \delta_{k,K})^{T}$. Denote $\hat{Z}^{(t)} = \{\hat{z}_i^{(t)}\}_{i=1}^{n}$.
MA step: In the classical EM algorithm [32], the second step is to find the estimates of θ m and m that maximize the function Q ( t ) ( θ m , m ) . In the EMA algorithm, instead, the predictive distribution of ( z i , x i ) for (10) is obtained by integrating model and parameter uncertainty, leading to the MA step, as follows:
$$p(z_i, x_i \mid \hat{Z}_{-i}^{(t)}, X_{-i}) = \sum_{m \in \mathcal{M}} \int_{\Theta_m} p(z_i, x_i \mid \theta_m, m)\, p(\theta_m, m \mid \hat{Z}_{-i}^{(t)}, X_{-i})\, d\theta_m, \tag{11}$$
where
$$p(\theta_m, m \mid \hat{Z}_{-i}^{(t)}, X_{-i}) \propto \exp\Big\{ \sum_{i' \neq i} E\big[ \log p(z_{i'}, x_{i'} \mid \theta_m, m) \,\big|\, \hat{Z}_{-i'}^{(t-1)}, X \big] + \log p(\theta_m, m) \Big\}. \tag{12}$$
While the EMA algorithm can be implemented efficiently and has been shown on synthetic and real-world datasets to have good practical performance, the rationale behind it has not been well studied or illustrated. Yu et al. [33] recently proposed the RCVB method in a variational discriminant analysis, developed as an approximation to the collapsed variational Bayes (CVB) approach [37]. In Appendix A, we show that the EMA algorithm is an instantiation of the RCVB.

4. EMA Algorithm for Model-Based Clustering

In this section, we discuss the implementation details of the EMA algorithm for model-based clustering, where parsimonious Gaussian mixture models under the LAN assumption are considered.

4.1. Choice of Priors

The choice of priors is of great importance in Bayesian modeling. It can affect the computational course of the posterior inference and allow prior knowledge to influence the analysis results. To simplify the inference, we assume that the prior of model and parameters can be decomposed as follows:
$$p(\tau, \mu^{s}, \Sigma^{s}, r^{s}, s) = p(\tau) \cdot \prod_{l=1}^{d_s} p(\mu_l^{s}, \Sigma_l^{s})^{r_l^{s}}\, p(\mu_{0l}^{s}, \Sigma_{0l}^{s})^{1 - r_l^{s}} \cdot p(r^{s} \mid s) \cdot p(s), \tag{13}$$
where
$$p(\mu_l^{s}, \Sigma_l^{s}) = \prod_{k=1}^{K} p(\mu_{kl}^{s}, \Sigma_{kl}^{s}). \tag{14}$$
A natural choice for p ( τ ) is
$$p(\tau) = \mathrm{Dir}(\tau; \alpha), \tag{15}$$
where $\mathrm{Dir}(\tau; \alpha)$ is the density function of the Dirichlet distribution with parameters $\alpha = (\alpha_1, \alpha_2, \dots, \alpha_K)^{T}$; it is the conjugate prior for the multinomial parameters. Taking $\alpha_k = \tfrac{1}{2}$, $k = 1, 2, \dots, K$, (15) becomes the Jeffreys non-informative prior for $\tau$ [21,38].
Choosing priors for the Gaussian parameters is a more complex issue. Conventionally, conjugate priors are assumed, i.e., for $s \in S$, $l = 1, 2, \dots, d_s$, and $k = 1, 2, \dots, K$,
$$p(\mu_{kl}^{s}, \Sigma_{kl}^{s}) = \mathcal{N}(\mu_{kl}^{s}; \xi_{kl}^{s}, \Sigma_{kl}^{s}/\beta_{kl}^{s})\, \mathrm{IW}(\Sigma_{kl}^{s}; v_{kl}^{s}, \eta_{kl}^{s} I_{d_l^{s}}), \qquad p(\mu_{0l}^{s}, \Sigma_{0l}^{s}) = \mathcal{N}(\mu_{0l}^{s}; \xi_{0l}^{s}, \Sigma_{0l}^{s}/\beta_{0l}^{s})\, \mathrm{IW}(\Sigma_{0l}^{s}; v_{0l}^{s}, \eta_{0l}^{s} I_{d_l^{s}}), \tag{16}$$
where IW ( Σ ; v , V ) is the inverse-Wishart density function with degrees of freedom v and scale matrix V . We use I d to denote the d × d identity matrix.
Using conjugate priors can ease the inference. They also have simple and understandable interpretation as arising from the analysis of a conceptual sample generated using the same structure for the current sample [39]. However, there are many hyperparameters to be specified. In the absence of reliable prior knowledge, it is recommended to use non-informative priors, which retain the diffuse property after monotone transformation of the parameters [33,40] for objective Bayesian inference. But, such a benchmark prior for the mean and covariance of a Gaussian distribution is an improper prior, determined only up to an arbitrary multiplicative constant, and is typically not permitted in situations where the computation of the model posterior or Bayes factor is required.
Here, we consider a class of diffused-conjugate priors that retains the merits of conjugacy while realizing the desirable properties of the benchmark prior in the limit. It allows for calibration of the undefined multiplicative constant and leads to sensible Bayesian inference. Specifically, we impose the following assumptions on the conjugate priors given in (16).
Assumption 1.
Consider the conjugate priors in (16) for the Gaussian parameters of a LAN model. For $s \in S$ and $l \in \{1, 2, \dots, d_s\}$:
1. the hyperparameters $\beta_{0l}^{s}$ and $\beta_{kl}^{s}$, $k = 1, \dots, K$, satisfy
$$2\pi\beta_{0l}^{s} = f^{-\frac{2}{d_s d_l^{s}}}, \qquad 2\pi\beta_{1l}^{s} = f^{-\frac{2}{K d_s d_l^{s}}},$$
and
$$\beta_{kl}^{s} = \left(\frac{n_k}{n}\right)^{\frac{d_l^{s}+1}{2}+1} \beta_{1l}^{s}, \qquad \text{for } k = 2, 3, \dots, K,$$
where $n_k = \sum_{i=1}^{n} z_{ik}$ and $f \to +\infty$;
2. the hyperparameters $v_{0l}^{s}$ and $v_{kl}^{s}$, $k = 1, \dots, K$, satisfy
$$v_{0l}^{s} \to 0, \quad \prod_{i=1}^{d_l^{s}} \Gamma\!\left(\frac{v_{0l}^{s}+1-i}{2}\right) = 2^{\frac{d_l^{s}(d_l^{s}+1)}{4}}\, g^{\frac{1}{d_s}}, \qquad v_{1l}^{s} \to 0, \quad \prod_{i=1}^{d_l^{s}} \Gamma\!\left(\frac{v_{1l}^{s}+1-i}{2}\right) = 2^{\frac{d_l^{s}(d_l^{s}+1)}{4}}\, g^{\frac{1}{K d_s}},$$
and
$$v_{kl}^{s} = v_{1l}^{s}, \qquad \text{for } k = 2, 3, \dots, K,$$
where $\Gamma(\cdot)$ is the Gamma function and $g \to +\infty$;
3. $\eta_{0l}^{s} = |v_{0l}^{s}|$ and $\eta_{kl}^{s} = |v_{kl}^{s}|$, $k = 1, 2, \dots, K$.
Remark 1. 
Under Assumption 1, by forcing $f, g \to +\infty$, we obtain
$$\lim_{f,g\to+\infty} p(\mu_{kl}^{s}, \Sigma_{kl}^{s}) \propto |2\pi\Sigma_{kl}^{s}|^{-\frac{1}{2}}\, |\Sigma_{kl}^{s}|^{-\frac{d_l^{s}+1}{2}}, \qquad \lim_{f,g\to+\infty} p(\mu_{0l}^{s}, \Sigma_{0l}^{s}) \propto |2\pi\Sigma_{0l}^{s}|^{-\frac{1}{2}}\, |\Sigma_{0l}^{s}|^{-\frac{d_l^{s}+1}{2}}, \tag{17}$$
which are the benchmark priors for the Gaussian parameters [41,42]. The hyperparameters $\xi_{kl}^{s}$ and $\xi_{0l}^{s}$ can be arbitrary constant vectors with finite $L_2$ norm, the choice of which will not impact the inference results in the limit.
Remark 2. 
Following the definition of the Gamma function at non-positive values [43], to meet the second condition, the degrees of freedom v 0 l s and v 1 l s in the inverse-Wishart distributions should approach zero from the left when d l s = 2 , 6 , 10 , 14 , . Therefore, to form the third condition, we take the absolute value on v 0 l s and v k l s to make sure η 0 l s and η k l s for the scale matrices are positive. In the formal definition of the inverse-Wishart distribution, the degrees of freedom are also required to be positive. Here, negative infinitesimal values are allowed, which when present in the prior can be interpreted from the perspective of conjugacy as a tiny “owed” number of degrees of freedom.
Remark 3. 
The practice of making conjugate priors diffuse to achieve objective Bayesian inference has been adopted previously. However, most of these studies only involve the posterior inference of parameters, where the undefined normalizing constant in the limiting prior cancels [40,44]. In the context of model selection, Fernández et al. [42] and Alharthi [45] used a diffused version of the conjugate prior for a parameter common to the different models, so the undefined normalizing constant is again not relevant. In this paper, we tackle the case where the diffused-conjugate priors are used for model-specific parameters. Recently, Ormerod et al. [34] proposed an automatic prior (termed the "cake prior") for Gaussian parameters, which has properties similar to the proposed one. However, they developed the cake prior only for the one-dimensional case, leaving the generalization to the multi-dimensional case unsolved. Compared to the cake prior, the proposed diffused-conjugate prior retains the conjugacy property and is applicable in multi-dimensional cases. Moreover, it will be shown that the proposed prior can achieve the intended frequentist properties of the cake prior even in multi-dimensional cases.
We specify the prior for $r^{s}$ as
$$p(r^{s} \mid s) = \prod_{l=1}^{d_s} p(r_l^{s} \mid s) = \prod_{l=1}^{d_s} (\rho_l^{s})^{r_l^{s}} (1 - \rho_l^{s})^{1 - r_l^{s}}, \tag{18}$$
where $r_1^{s}, r_2^{s}, \dots, r_{d_s}^{s}$ are assumed a priori independent given the partition $s$, and $p(r_l^{s} \mid s)$ follows the Bernoulli distribution with parameter $\rho_l^{s}$. The prior independence between the $r_l^{s}$'s is often referred to as the structure modularity assumption [19,31]. The probability mass of taking partition $s$ is denoted as $\phi_s$. Flat priors are given by assigning $\phi_s = \frac{1}{|S|}$ and $\rho_l^{s} = \frac{1}{2}$, where $|S|$ is the number of selective feature partitions.

4.2. Model Averaging

Following the definition of the EMA algorithm, a crucial part is the calculation of the model averaging that yields the predictive distribution in (11). Without loss of generality, we derive the predictive distribution $p(z_0, x_0 \mid Z, X)$, where the datum $(z_0, x_0)$ to be predicted is generated independently of $(Z, X)$.
Let $X = [x_1, x_2, \dots, x_n]^{T} = [X_1^{s}, X_2^{s}, \dots, X_{d_s}^{s}] \in \mathbb{R}^{n \times d}$, where $X_l^{s} = [x_{1l}^{s}, x_{2l}^{s}, \dots, x_{nl}^{s}]^{T} \in \mathbb{R}^{n \times d_l^{s}}$ is the data of the $l$th feature block under partition $s$. Through mathematical manipulations, the predictive distribution based on the rule of BMA can be decomposed as follows.
$$
\begin{aligned}
p(z_0, x_0 \mid Z, X) &= p(z_0 \mid Z) \sum_{s \in S} \sum_{r^{s} \in \{0,1\}^{d_s}} p(x_0 \mid z_0, Z, X, r^{s}, s)\, p(r^{s} \mid Z, X, s)\, p(s \mid Z, X) \\
&= p(z_0 \mid Z) \sum_{s \in S} p(s \mid Z, X) \prod_{l=1}^{d_s} \big[ p(r_l^{s}=1 \mid Z, X_l^{s}, s)\, p(x_{0l}^{s} \mid z_0, Z, X_l^{s}, r_l^{s}=1, s) \\
&\qquad\qquad + p(r_l^{s}=0 \mid Z, X_l^{s}, s)\, p(x_{0l}^{s} \mid X_l^{s}, r_l^{s}=0, s) \big].
\end{aligned} \tag{19}
$$
Using the Dirichlet prior defined in (15), we have
$$p(z_0 \mid Z) = \int p(z_0 \mid \tau)\, p(\tau \mid Z)\, d\tau = \int \mathrm{Mult}(z_0; \tau)\, \mathrm{Dir}(\tau; \hat{\alpha})\, d\tau = \mathrm{Mult}(z_0; \hat{\tau}), \tag{20}$$
where $\mathrm{Mult}(z; \tau)$ is the multinomial probability mass function with parameters $\tau = (\tau_1, \tau_2, \dots, \tau_K)^{T}$. The parameters of the posterior $\mathrm{Dir}(\tau; \hat{\alpha})$ are obtained through conjugacy, with $\hat{\alpha} = (\hat{\alpha}_1, \hat{\alpha}_2, \dots, \hat{\alpha}_K)^{T}$ and $\hat{\alpha}_k = \tfrac{1}{2} + n_k$, and $\hat{\tau} = (\hat{\tau}_1, \hat{\tau}_2, \dots, \hat{\tau}_K)^{T}$ with $\hat{\tau}_k = \hat{\alpha}_k / \sum_{k'} \hat{\alpha}_{k'}$.
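For concreteness, the class-label predictive in (20) reduces to a ratio of smoothed soft counts. A minimal sketch (illustrative names; soft counts taken from the responsibilities $\hat{Z}$) is

```python
# Sketch of Eq. (20): posterior-predictive class probabilities under the Jeffreys
# Dirichlet prior (alpha_k = 1/2), using soft counts n_k from the responsibilities.
import numpy as np

def class_predictive_probs(Z_hat, alpha0=0.5):
    """Z_hat: (n, K) responsibility matrix; returns tau_hat of length K."""
    n_k = Z_hat.sum(axis=0)             # soft class counts
    alpha_hat = alpha0 + n_k            # Dirichlet posterior parameters
    return alpha_hat / alpha_hat.sum()  # Mult(z_0; tau_hat)
```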
The predictive module of the lth feature block can be calculated as follows.
$$
\begin{aligned}
p(x_{0l}^{s} \mid z_0, Z, X_l^{s}, r_l^{s}=1, s) &= \int p(x_{0l}^{s} \mid z_0, \mu_l^{s}, \Sigma_l^{s}, r_l^{s}=1, s)\, p(\mu_l^{s}, \Sigma_l^{s} \mid Z, X_l^{s})\, d\mu_l^{s}\, d\Sigma_l^{s} \\
&= \prod_{k=1}^{K} \left[ \int \mathcal{N}(x_{0l}^{s}; \mu_{kl}^{s}, \Sigma_{kl}^{s})\, \mathcal{N}(\mu_{kl}^{s}; \bar{x}_{kl}^{s}, \Sigma_{kl}^{s}/n_k)\, \mathrm{IW}(\Sigma_{kl}^{s}; n_k, n_k S_{kl}^{s})\, d\mu_{kl}^{s}\, d\Sigma_{kl}^{s} \right]^{z_{0k}} \\
&= \prod_{k=1}^{K} \left[ \mathrm{Mt}\!\left( x_{0l}^{s};\ \bar{x}_{kl}^{s},\ \frac{n_k+1}{n_k+1-d_l^{s}}\, S_{kl}^{s},\ n_k+1-d_l^{s} \right) \right]^{z_{0k}},
\end{aligned} \tag{21}
$$
where $\mathrm{Mt}(x; \mu, \Sigma, v)$ is the density function of the multivariate Student's t distribution with mean $\mu$, scale matrix $\Sigma$, and degrees of freedom $v$. The undefined multiplicative constants in the limiting priors (17) do not influence the posterior inference of the parameters. The sufficient statistics in the posterior of $(\mu_l^{s}, \Sigma_l^{s})$ are given by $\bar{x}_{kl}^{s} = \sum_{i=1}^{n} z_{ik} x_{il}^{s} / n_k$ and $S_{kl}^{s} = \sum_{i=1}^{n} z_{ik} (x_{il}^{s} - \bar{x}_{kl}^{s})(x_{il}^{s} - \bar{x}_{kl}^{s})^{T} / n_k$, $k = 1, 2, \dots, K$.
Similarly, we can obtain
$$
\begin{aligned}
p(x_{0l}^{s} \mid X_l^{s}, r_l^{s}=0, s) &= \int p(x_{0l}^{s} \mid \mu_{0l}^{s}, \Sigma_{0l}^{s}, r_l^{s}=0, s)\, p(\mu_{0l}^{s}, \Sigma_{0l}^{s} \mid X_l^{s})\, d\mu_{0l}^{s}\, d\Sigma_{0l}^{s} \\
&= \int \mathcal{N}(x_{0l}^{s}; \mu_{0l}^{s}, \Sigma_{0l}^{s})\, \mathcal{N}(\mu_{0l}^{s}; \bar{x}_{0l}^{s}, \Sigma_{0l}^{s}/n)\, \mathrm{IW}(\Sigma_{0l}^{s}; n, n S_{0l}^{s})\, d\mu_{0l}^{s}\, d\Sigma_{0l}^{s} \\
&= \mathrm{Mt}\!\left( x_{0l}^{s};\ \bar{x}_{0l}^{s},\ \frac{n+1}{n+1-d_l^{s}}\, S_{0l}^{s},\ n+1-d_l^{s} \right),
\end{aligned} \tag{22}
$$
where $\bar{x}_{0l}^{s} = \sum_{i=1}^{n} x_{il}^{s} / n$ and $S_{0l}^{s} = \sum_{i=1}^{n} (x_{il}^{s} - \bar{x}_{0l}^{s})(x_{il}^{s} - \bar{x}_{0l}^{s})^{T} / n$.
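The two predictive modules (21) and (22) are multivariate Student's t densities built from soft class-specific and pooled sufficient statistics. A sketch under these assumptions (illustrative names; it requires SciPy 1.6 or later for multivariate_t and soft counts with $n_k \geq d_l^{s}$ so that the degrees of freedom are positive; not the authors' implementation) is

```python
# Sketch of the block-wise predictive densities in Eqs. (21)-(22), with soft counts
# taken from the responsibilities Z_hat.
import numpy as np
from scipy.stats import multivariate_t

def block_predictives(x0l, Xl, Z_hat):
    """x0l: (d_l,) new block datum; Xl: (n, d_l) block data; Z_hat: (n, K).
    Returns (per-class Student-t densities for r_l=1, pooled density for r_l=0)."""
    n, K = Z_hat.shape
    d_l = Xl.shape[1]

    def mt_pdf(x, xbar, S, m):
        # Mt(x; xbar, (m+1)/(m+1-d_l) * S, m+1-d_l); needs m + 1 - d_l > 0
        df = m + 1 - d_l
        return multivariate_t.pdf(x, loc=xbar, shape=(m + 1) / df * S, df=df)

    dens_rel = np.zeros(K)                         # Eq. (21), one term per class
    for k in range(K):
        w = Z_hat[:, k]
        n_k = w.sum()
        xbar_k = (w[:, None] * Xl).sum(axis=0) / n_k
        R = Xl - xbar_k
        S_k = (w[:, None, None] * np.einsum('ni,nj->nij', R, R)).sum(axis=0) / n_k
        dens_rel[k] = mt_pdf(x0l, xbar_k, S_k, n_k)

    xbar_0 = Xl.mean(axis=0)                       # Eq. (22), pooled statistics
    R0 = Xl - xbar_0
    S_0 = R0.T @ R0 / n
    dens_irr = mt_pdf(x0l, xbar_0, S_0, n)
    return dens_rel, dens_irr
```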
The posterior probability $p(r_l^{s} = 1 \mid Z, X_l^{s}, s)$ in (19) can be computed by
$$p(r_l^{s}=1 \mid Z, X_l^{s}, s) = \frac{p(Z, X_l^{s} \mid r_l^{s}=1, s)\, p(r_l^{s}=1 \mid s)}{p(Z, X_l^{s} \mid r_l^{s}=1, s)\, p(r_l^{s}=1 \mid s) + p(Z, X_l^{s} \mid r_l^{s}=0, s)\, p(r_l^{s}=0 \mid s)} = \operatorname{expit}\!\left( \tfrac{1}{2}\lambda_{\mathrm{Bayes}}(X_l^{s}, Z, s) \right) =: \hat{\rho}_l^{s}, \tag{23}$$
where we have assumed the flat prior for $r_l^{s}$. Here $\lambda_{\mathrm{Bayes}}(X_l^{s}, Z, s)$ is the Bayesian test statistic for testing $\{H_{0l}^{s}: r_l^{s} = 0\}$ against $\{H_{1l}^{s}: r_l^{s} = 1\}$, defined as
$$\lambda_{\mathrm{Bayes}}(X_l^{s}, Z, s) = 2 \log \frac{p(X_l^{s} \mid Z, r_l^{s}=1, s)}{p(X_l^{s} \mid r_l^{s}=0, s)}, \tag{24}$$
where
$$
\begin{aligned}
p(X_l^{s} \mid Z, r_l^{s}=1, s) &= \lim_{f,g\to+\infty} \int p(X_l^{s} \mid Z, \mu_l^{s}, \Sigma_l^{s}, r_l^{s}=1, s)\, p(\mu_l^{s}, \Sigma_l^{s})\, d\mu_l^{s}\, d\Sigma_l^{s} \\
&= \lim_{f,g\to+\infty} \prod_{k=1}^{K} \int \prod_{i=1}^{n} \mathcal{N}(x_{il}^{s}; \mu_{kl}^{s}, \Sigma_{kl}^{s})^{z_{ik}}\, \mathcal{N}(\mu_{kl}^{s}; \xi_{kl}^{s}, \Sigma_{kl}^{s}/\beta_{kl}^{s})\, \mathrm{IW}(\Sigma_{kl}^{s}; v_{kl}^{s}, \eta_{kl}^{s} I_{d_l^{s}})\, d\mu_{kl}^{s}\, d\Sigma_{kl}^{s},
\end{aligned} \tag{25}
$$
and
$$
\begin{aligned}
p(X_l^{s} \mid r_l^{s}=0, s) &= \lim_{f,g\to+\infty} \int p(X_l^{s} \mid \mu_{0l}^{s}, \Sigma_{0l}^{s}, r_l^{s}=0, s)\, p(\mu_{0l}^{s}, \Sigma_{0l}^{s})\, d\mu_{0l}^{s}\, d\Sigma_{0l}^{s} \\
&= \lim_{f,g\to+\infty} \int \prod_{i=1}^{n} \mathcal{N}(x_{il}^{s}; \mu_{0l}^{s}, \Sigma_{0l}^{s})\, \mathcal{N}(\mu_{0l}^{s}; \xi_{0l}^{s}, \Sigma_{0l}^{s}/\beta_{0l}^{s})\, \mathrm{IW}(\Sigma_{0l}^{s}; v_{0l}^{s}, \eta_{0l}^{s} I_{d_l^{s}})\, d\mu_{0l}^{s}\, d\Sigma_{0l}^{s}.
\end{aligned} \tag{26}
$$
As we use the diffused priors, there are undefined normalizing constants present in (25) and (26), which should be carefully treated when computing the Bayesian test statistic. As illustrated in Theorem 1, Assumption 1 allows us to maintain the diffuse property of the priors while leading to a Bayesian test statistic with meaningful interpretation and strong concordance with the frequentist approaches [33].
Theorem 1. 
Consider the conjugate priors given in (16) for the Gaussian parameters of a LAN model. Under Assumption 1 and the conditions $n_k \geq d_l^{s}$ for $k = 1, 2, \dots, K$, the Bayesian test statistic defined in (24) to test $\{H_{0l}^{s}: r_l^{s} = 0\}$ against $\{H_{1l}^{s}: r_l^{s} = 1\}$ when $f, g \to +\infty$ is given by
$$\lambda_{\mathrm{Bayes}}(X_l^{s}, Z, s) = \lambda_{\mathrm{LRT}}(X_l^{s}, Z, s) - \Delta_l^{s} \log n + O_p(n^{-1}), \tag{27}$$
where
$$\Delta_l^{s} = (K-1)\left[ d_l^{s} + \frac{d_l^{s}(d_l^{s}+1)}{2} \right] \tag{28}$$
is the difference in the number of parameters between models $H_{1l}^{s}$ and $H_{0l}^{s}$, and $\lambda_{\mathrm{LRT}}(X_l^{s}, Z, s)$ is the likelihood ratio test (LRT) statistic, defined as
$$\lambda_{\mathrm{LRT}}(X_l^{s}, Z, s) = 2 \log \frac{p(X_l^{s} \mid Z, \hat{\mu}_l^{s}, \hat{\Sigma}_l^{s}, r_l^{s}=1, s)}{p(X_l^{s} \mid \hat{\mu}_{0l}^{s}, \hat{\Sigma}_{0l}^{s}, r_l^{s}=0, s)}, \tag{29}$$
where $\hat{\mu}_{0l}^{s}$, $\hat{\Sigma}_{0l}^{s}$, $\hat{\mu}_l^{s} = \{\hat{\mu}_{kl}^{s}\}_{k=1}^{K}$, and $\hat{\Sigma}_l^{s} = \{\hat{\Sigma}_{kl}^{s}\}_{k=1}^{K}$ are the maximum likelihood estimates given by $\hat{\mu}_{0l}^{s} = \bar{x}_{0l}^{s}$, $\hat{\Sigma}_{0l}^{s} = S_{0l}^{s}$, $\hat{\mu}_{kl}^{s} = \bar{x}_{kl}^{s}$, and $\hat{\Sigma}_{kl}^{s} = S_{kl}^{s}$.
Proof of Theorem 1. 
Expanding Equations (25) and (26) with the exact forms of the Gaussian and inverse-Wishart density functions and forcing $\beta_{kl}^{s}, v_{kl}^{s} \to 0$, $k = 0, 1, \dots, K$, we can obtain
$$p(X_l^{s} \mid Z, r_l^{s}=1, s) = \lim_{f,g\to+\infty} \prod_{k=1}^{K} \exp\!\Big[ -\tfrac{d_l^{s} n_k}{2}\log 2\pi + \tfrac{d_l^{s}}{2}\log\beta_{kl}^{s} - \log\!\prod_{i=1}^{d_l^{s}}\Gamma\!\big(\tfrac{v_{kl}^{s}+1-i}{2}\big) - \tfrac{d_l^{s}}{2}\log n_k + \log\!\prod_{i=1}^{d_l^{s}}\Gamma\!\big(\tfrac{n_k+1-i}{2}\big) + \tfrac{d_l^{s} n_k}{2}\log 2 - \tfrac{n_k}{2}\log|n_k S_{kl}^{s}| \Big],$$
and
$$p(X_l^{s} \mid r_l^{s}=0, s) = \lim_{f,g\to+\infty} \exp\!\Big[ -\tfrac{d_l^{s} n}{2}\log 2\pi + \tfrac{d_l^{s}}{2}\log\beta_{0l}^{s} - \log\!\prod_{i=1}^{d_l^{s}}\Gamma\!\big(\tfrac{v_{0l}^{s}+1-i}{2}\big) - \tfrac{d_l^{s}}{2}\log n + \log\!\prod_{i=1}^{d_l^{s}}\Gamma\!\big(\tfrac{n+1-i}{2}\big) + \tfrac{d_l^{s} n}{2}\log 2 - \tfrac{n}{2}\log|n S_{0l}^{s}| \Big].$$
Applying the assumptions
$$\beta_{kl}^{s} = \left(\frac{n_k}{n}\right)^{\frac{d_l^{s}+1}{2}+1} \beta_{1l}^{s}, \qquad k = 2, 3, \dots, K,$$
and $v_{1l}^{s} = v_{2l}^{s} = \cdots = v_{Kl}^{s}$ from Assumption 1 when computing the ratio of $p(X_l^{s} \mid Z, r_l^{s}=1, s)$ and $p(X_l^{s} \mid r_l^{s}=0, s)$, we can simplify the Bayesian test statistic (24) as
$$\lambda_{\mathrm{Bayes}}(X_l^{s}, Z, s) = \lambda_{\mathrm{LRT}}(X_l^{s}, Z, s) - \Delta_l^{s}\log n + \lim_{f,g\to+\infty}(C_1 + C_2),$$
where
$$\lambda_{\mathrm{LRT}}(X_l^{s}, Z, s) = -\sum_{k=1}^{K}\big( n_k\log|S_{kl}^{s}| + d_l^{s} n_k \big) + \big( n\log|S_{0l}^{s}| + d_l^{s} n \big), \qquad \Delta_l^{s} = (K-1)\, d_l^{s}\Big(\frac{d_l^{s}+1}{2}+1\Big).$$
The terms $C_1$ and $C_2$ cancel out with the error term $O_p(n^{-1})$ by applying the assumptions
$$\beta_{0l}^{s} = \frac{1}{2\pi}\, f^{-\frac{2}{d_s d_l^{s}}}, \qquad \beta_{1l}^{s} = \frac{1}{2\pi}\, f^{-\frac{2}{K d_s d_l^{s}}}, \qquad \prod_{i=1}^{d_l^{s}}\Gamma\!\big(\tfrac{v_{0l}^{s}+1-i}{2}\big) = 2^{\frac{d_l^{s}(d_l^{s}+1)}{4}}\, g^{\frac{1}{d_s}}, \qquad \prod_{i=1}^{d_l^{s}}\Gamma\!\big(\tfrac{v_{1l}^{s}+1-i}{2}\big) = 2^{\frac{d_l^{s}(d_l^{s}+1)}{4}}\, g^{\frac{1}{K d_s}},$$
and Stirling's asymptotic expansion for the Gamma function [34]. (More details of the proof can be found in Appendix B.)    □
Remark 4. 
When the diffused-conjugate priors are used following Assumption 1, the Bayesian test statistic can be approximately expressed as a penalized version of the LRT statistic and equals the difference in BIC values between the two hypotheses. This is also the main property of the cake priors [34]. As has been proved in [33], under moderate regularity conditions, model selection based on this test statistic can achieve Chernoff consistency.
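Theorem 1 makes the relevance probability in (23) straightforward to approximate in practice: compute the soft-count LRT statistic, subtract the BIC-type penalty, and pass half of the result through the expit function. A sketch under these assumptions (illustrative names; valid when the soft counts satisfy $n_k \geq d_l^{s}$) is

```python
# Sketch of the approximate block-relevance probability implied by Eq. (23) and
# Theorem 1: lambda_Bayes ~ lambda_LRT - Delta * log(n), rho_hat = expit(lambda / 2).
import numpy as np
from scipy.special import expit

def block_relevance_prob(Xl, Z_hat):
    """Xl: (n, d_l) data of one feature block; Z_hat: (n, K) responsibilities."""
    n, d_l = Xl.shape
    K = Z_hat.shape[1]
    xbar_0 = Xl.mean(axis=0)                           # pooled statistics under H0
    S_0 = (Xl - xbar_0).T @ (Xl - xbar_0) / n
    lam_lrt = n * np.linalg.slogdet(S_0)[1]
    for k in range(K):                                 # class statistics under H1
        w = Z_hat[:, k]
        n_k = w.sum()
        xbar_k = (w[:, None] * Xl).sum(axis=0) / n_k
        S_k = (w[:, None] * (Xl - xbar_k)).T @ (Xl - xbar_k) / n_k
        lam_lrt -= n_k * np.linalg.slogdet(S_k)[1]
    delta = (K - 1) * (d_l + d_l * (d_l + 1) / 2)      # parameter difference, Eq. (28)
    lam_bayes = lam_lrt - delta * np.log(n)
    return expit(lam_bayes / 2)                        # rho_hat in Eq. (23)
```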
Through mathematical manipulations, we have
$$
\begin{aligned}
p(s \mid Z, X) &\propto p(s)\, p(X \mid Z, s) = p(s) \prod_{l=1}^{d_s} \big[ p(r_l^{s}=1 \mid s)\, p(X_l^{s} \mid Z, r_l^{s}=1, s) + p(r_l^{s}=0 \mid s)\, p(X_l^{s} \mid r_l^{s}=0, s) \big] \\
&\propto \frac{1}{2^{d_s}} \prod_{l=1}^{d_s} p(X_l^{s} \mid r_l^{s}=0, s)\, \Big[ \exp\!\Big( \tfrac{1}{2}\lambda_{\mathrm{Bayes}}(X_l^{s}, Z, s) \Big) + 1 \Big],
\end{aligned} \tag{33}
$$
where the flat priors for $s$ and $r_l^{s}$ have been applied. Under Assumption 1, the undefined quantities in $p(X_l^{s} \mid r_l^{s}=0, s)$, $l = 1, 2, \dots, d_s$, multiply over the $d_s$ feature blocks to produce a factor $(fg)^{-1}$, which does not depend on the partition $s$ and cancels out when normalizing $p(s \mid Z, X)$. We denote $p(s \mid Z, X)$ by $\hat{\phi}_s$.
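Given the per-block null marginals and test statistics, the partition weights $\hat{\phi}_s$ follow from (33) by normalization over $s \in S$; constants shared across partitions cancel. A sketch (illustrative names; the per-block quantities are assumed precomputed, e.g., up to such shared constants) is

```python
# Sketch of the partition weights phi_hat_s in Eq. (33), assuming the per-block
# null log-marginals log p(X_l | r_l=0, s) and Bayesian test statistics are given.
import numpy as np
from scipy.special import logsumexp

def partition_weights(log_p_null, lam_bayes):
    """log_p_null[s]: array of log p(X_l | r_l=0, s) per block of partition s;
    lam_bayes[s]: matching array of lambda_Bayes(X_l, Z, s) per block."""
    log_phi = np.array([
        -len(lp) * np.log(2.0)                              # the 1 / 2^{d_s} factor
        + np.sum(lp + np.logaddexp(0.5 * lam, 0.0))         # log[exp(lam/2) + 1]
        for lp, lam in zip(log_p_null, lam_bayes)
    ])
    return np.exp(log_phi - logsumexp(log_phi))             # normalize over s in S
```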

5. The CF Architecture

In the EMA algorithm, model averaging over the selective naïve Bayes structures fixing on a partition s can be performed analytically under the pseudo-complete data. However, as the number of possible feature partitions follows the Bell number, the second level of model averaging is computationally infeasible even with moderate dimension size.
We propose to accomplish model averaging over feature partitions through the idea of CF. Motivated by the Random Forests methodology, the CF performs strong overall clustering by aggregating many “different and good” cluster instances. A crucial aspect in the original CF is the growth of clustering vectors. Each clustering vector is composed of a subset of features, obtained via several random probings of the feature space but progressively refined to produce relatively high-quality clustering. In [36], CF substantially boosts the performance of the base clustering algorithms.
Different from the original CF architecture, we define a clustering vector $\tilde{s}$ to be a subset of feature partitions. We define the root model $s_0$ as the partition where each feature forms its own block. Starting from $\tilde{s} = \{s_0\}$, at each growth we expand the current clustering vector with one feature partition, obtained by randomly merging two feature blocks of the partition from the last growth.
A natural quality measure to govern the growth of a clustering vector is given as follows:
$$\mathcal{L}(\tilde{s}) = \sum_{i=1}^{n} \log \sum_{z_i} \sum_{s \in \tilde{s}} p(z_i, x_i \mid \hat{Z}_{-i}, X_{-i}, s)\, p(s \mid \hat{Z}_{-i}, X_{-i}), \tag{34}$$
where $\sum_{s \in \tilde{s}} p(s \mid \hat{Z}_{-i}, X_{-i}) = 1$. $\mathcal{L}(\tilde{s})$ can be obtained conveniently as an output of the EMA algorithm with the selective partition set $S = \tilde{s}$. The measure $\mathcal{L}$ is similar to the logarithmic pseudo-marginal likelihood defined in [22,46] for Bayesian model selection. The EMA algorithm, modified for embedding in the CF architecture as the base clustering algorithm, is summarized in Algorithm 1.
Algorithm 1 The EMA algorithm
  • Require: training data $X = \{x_i\}_{i=1}^{n}$, the number of clusters $K$, and the selective partition set $S$;
  • Ensure: response matrix $\hat{Z} = \{\hat{z}_i\}_{i=1}^{n}$, estimated model weights $\{\{\hat{\rho}_l^{s}\}_{l=1}^{d_s}, \hat{\phi}_s\}$ for $s \in S$, and quality measure $\mathcal{L}(S)$;
1: Initialize $\hat{z}_i$ for $1 \leq i \leq n$;
2: while the number of iterations is less than $IterMax$ do
3:     (MA step)
4:     Compute $\hat{\rho}_l^{s}$ according to (23) for $1 \leq l \leq d_s$, $s \in S$;
5:     Compute $\hat{\phi}_s$ according to (33) for $s \in S$;
6:     Compute $p(z_i, x_i \mid \hat{Z}_{-i}, X_{-i})$ according to (19) for $1 \leq i \leq n$;
7:     (E step)
8:     Update $\hat{z}_i$ according to (10) for $1 \leq i \leq n$;
9: end while
10: Compute $\mathcal{L}(S)$ according to (34) with $\hat{Z}$.
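For intuition, the following compact, runnable sketch carries out EMA iterations for the special case $S = \{s_0\}$ (each feature its own block, i.e., the naïve Bayes configuration). It is a simplified illustration under stated assumptions, not the authors' implementation: it uses full-data rather than leave-one-out quantities, the BIC-type approximation of Theorem 1, and illustrative names throughout.

```python
# Simplified EMA iterations (Algorithm 1) for the naive Bayes case S = {s0}.
import numpy as np
from scipy.stats import t as student_t
from scipy.special import expit

def ema_naive(X, K, iter_max=100, seed=0):
    n, d = X.shape
    rng = np.random.default_rng(seed)
    Z = rng.dirichlet(np.ones(K), size=n)              # initial responsibilities
    for _ in range(iter_max):
        # ---- MA step: mixing weights, sufficient statistics, relevance weights ----
        n_k = Z.sum(axis=0)                            # soft class counts
        tau = (0.5 + n_k) / (0.5 + n_k).sum()          # Eq. (20), Jeffreys Dirichlet prior
        xbar_k = (Z.T @ X) / n_k[:, None]              # (K, d) class means
        var_k = (Z.T @ X**2) / n_k[:, None] - xbar_k**2
        xbar_0, var_0 = X.mean(axis=0), X.var(axis=0)
        # BIC-type approximation of the test statistic (Theorem 1 with d_l = 1)
        lam = n * np.log(var_0) - (n_k[:, None] * np.log(var_k)).sum(axis=0) \
              - (K - 1) * 2 * np.log(n)
        rho = expit(lam / 2)                           # Eq. (23)
        # ---- E step: update responsibilities via the averaged predictive ----
        logp = np.tile(np.log(tau), (n, 1))
        for l in range(d):
            p_rel = np.stack([student_t.pdf(X[:, l], df=n_k[k], loc=xbar_k[k, l],
                                            scale=np.sqrt((n_k[k] + 1) / n_k[k]
                                                          * var_k[k, l]))
                              for k in range(K)], axis=1)           # Eq. (21), d_l = 1
            p_irr = student_t.pdf(X[:, l], df=n, loc=xbar_0[l],
                                  scale=np.sqrt((n + 1) / n * var_0[l]))  # Eq. (22)
            logp += np.log(rho[l] * p_rel + (1 - rho[l]) * p_irr[:, None] + 1e-300)
        logp -= logp.max(axis=1, keepdims=True)
        Z = np.exp(logp) / np.exp(logp).sum(axis=1, keepdims=True)
    return Z, rho, tau
```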
Using the quality measure L , we progressively grow the clustering vector. Following the framework of the original CF, let κ 0 denote the number of consecutive unsuccessful attempts in expanding the set s ˜ , and let κ 0 * be the maximal allowed value of κ 0 . To control the complexity of each cluster instance, we also introduce κ 1 to record the number of successful growths and define κ 1 * as the maximal allowed value of κ 1 . The growth of a clustering vector is described in Algorithm 2.
Another crucial aspect of the CF architecture is aggregating the results of the cluster instances. As label switching problems [47] exist, the response probability matrix $\hat{Z}$ must be transformed into a quantity that has a consistent definition across instances. A common choice is the $n \times n$ consensus matrix $P$ with entries
$$P(i, i') = \begin{cases} \sum_{k=1}^{K} \hat{z}_{ik}\, \hat{z}_{i'k}, & i \neq i'; \\ 1, & i = i'. \end{cases} \tag{35}$$
$P(i, i')$ is the probability that individuals $i$ and $i'$ belong to the same cluster. Aggregation of the results can then be performed by averaging the matrix $P$ over the instances. The quantity $B(i, i') = 1 - P(i, i')$ is the probability that individuals $i$ and $i'$ do not belong to the same cluster and describes their dissimilarity. Therefore, constructing the matrix $B$ with entries $B(i, i')$, the hierarchical clustering algorithm with complete linkage can be used to derive an intuitive data separation [48].
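A sketch of this aggregation step (illustrative names; using SciPy's hierarchical clustering with complete linkage) is

```python
# Sketch of Eq. (35) and the subsequent aggregation: average the consensus matrices,
# form the dissimilarity B = 1 - P_bar, and cut a complete-linkage dendrogram.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def consensus_clustering(Z_list, K):
    """Z_list: list of (n, K_h) responsibility matrices from the cluster instances."""
    P_bar = np.mean([Z @ Z.T for Z in Z_list], axis=0)   # P(i, i') = sum_k z_ik z_i'k
    np.fill_diagonal(P_bar, 1.0)                         # P(i, i) = 1 by definition
    B = 1.0 - P_bar                                      # dissimilarity matrix
    labels = fcluster(linkage(squareform(B, checks=False), method='complete'),
                      t=K, criterion='maxclust')
    return labels, P_bar
```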
Algorithm 2 The growth of a clustering vector $\tilde{s}$
1: Initialize $\tilde{s} = \{s_0\}$, $\kappa_0 = 0$ and $\kappa_1 = 0$;
2: Run the EMA algorithm under the partition set $\tilde{s}$ and compute $\mathcal{L}(\tilde{s})$;
3: while $\kappa_0 < \kappa_0^{*}$ and $\kappa_1 < \kappa_1^{*}$ do
4:     Randomly sample two feature blocks in $s_{\kappa_1}$, merge the two feature blocks, and denote the obtained feature partition as $s_{\kappa_1 + 1}$;
5:     Run the EMA algorithm under the partition set $\tilde{s} \cup \{s_{\kappa_1 + 1}\}$ and compute $\mathcal{L}(\tilde{s} \cup \{s_{\kappa_1 + 1}\})$;
6:     if $\mathcal{L}(\tilde{s} \cup \{s_{\kappa_1 + 1}\}) > \mathcal{L}(\tilde{s})$ then
7:         grow $\tilde{s}$ by $\tilde{s} \leftarrow \tilde{s} \cup \{s_{\kappa_1 + 1}\}$;
8:         set $\kappa_0 \leftarrow 0$ and $\kappa_1 \leftarrow \kappa_1 + 1$;
9:     else
10:        discard $s_{\kappa_1 + 1}$;
11:        set $\kappa_0 \leftarrow \kappa_0 + 1$.
12:     end if
13: end while
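A sketch of this growth procedure (illustrative; the `ema` argument stands in for a wrapper around the EMA algorithm that returns the quality measure $\mathcal{L}$ of a partition set) is

```python
# Sketch of Algorithm 2: grow one clustering vector by randomly merging feature
# blocks and keeping a merge only when the quality measure L improves.
import random

def grow_clustering_vector(ema, d, kappa0_max=5, kappa1_max=5, seed=0):
    """ema: callable mapping a list of partitions to its quality measure L;
    partitions are represented as lists of feature-index tuples."""
    rng = random.Random(seed)
    s0 = [(j,) for j in range(d)]            # root partition: every feature is a block
    s_tilde = [s0]
    best_L = ema(s_tilde)
    kappa0 = kappa1 = 0
    while kappa0 < kappa0_max and kappa1 < kappa1_max:
        current = s_tilde[-1]
        if len(current) < 2:                 # nothing left to merge
            break
        i, j = rng.sample(range(len(current)), 2)
        merged = [b for idx, b in enumerate(current) if idx not in (i, j)]
        merged.append(tuple(sorted(current[i] + current[j])))
        candidate = s_tilde + [merged]
        cand_L = ema(candidate)
        if cand_L > best_L:                  # successful growth
            s_tilde, best_L = candidate, cand_L
            kappa0, kappa1 = 0, kappa1 + 1
        else:                                # unsuccessful attempt
            kappa0 += 1
    return s_tilde, best_L
```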
In a model-based clustering algorithm, it is often of interest to explore the underlying model that generates the data. To assess the importance of each feature for the class assignment, we compute in each cluster instance a vector $\mathrm{FI}$ of length $d$ with entries
$$\mathrm{FI}(j) = \sum_{s \in \tilde{s}} \hat{\rho}_{l(j)}^{s}\, \hat{\phi}_s, \qquad j = 1, 2, \dots, d, \tag{36}$$
where $l(j)$ indexes the feature block of partition $s$ that contains the $j$th feature. $\mathrm{FI}(j)$ takes a value in $[0, 1]$, and a larger value implies higher significance of the $j$th feature. The overall pattern of feature importance is obtained by averaging $\mathrm{FI}$ over all instances.
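A sketch of this aggregation (illustrative names; rho[s][l] and phi[s] denote the estimated weights $\hat{\rho}_l^{s}$ and $\hat{\phi}_s$ from one cluster instance) is

```python
# Sketch of Eq. (36): feature importance within one cluster instance.
import numpy as np

def feature_importance(s_tilde, rho, phi, d):
    """s_tilde: list of partitions (lists of feature-index tuples);
    rho[s]: block-relevance weights of partition s; phi[s]: partition weight."""
    FI = np.zeros(d)
    for s, partition in enumerate(s_tilde):
        for l, block in enumerate(partition):
            for j in block:
                FI[j] += rho[s][l] * phi[s]   # rho of the block containing feature j
    return FI
```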
To investigate the covariance structure, a feasible metric is to count the occurrence frequencies of the edges in the $d$-dimensional Gaussian graphical model once the growth of each clustering vector is finished. We denote the total number of cluster instances as $T$. The entries of the $d \times d$ occurrence matrix are given by
$$\mathrm{EO}(j, j') = \sum_{h=1}^{T} \mathbb{1}\{ (j, j') \in A(h) \}, \tag{37}$$
where $A(h)$ is the set of edges in the graph $s_{\kappa_1(h)}(h)$, $\tilde{s}(h) = \{s_0, s_1(h), s_2(h), \dots, s_{\kappa_1(h)}(h)\}$ is the clustering vector obtained in the instance indexed by $h$, and $\mathbb{1}\{\cdot\}$ is the indicator function. Normalization of the occurrence matrix is achieved as follows:
$$\overline{\mathrm{EO}}(j, j') = \begin{cases} \mathrm{EO}(j, j')/\mathrm{EO}^{*}, & \mathrm{EO}^{*} \neq 0,\ j \neq j'; \\ 0, & \mathrm{EO}^{*} = 0,\ j \neq j'; \\ 1, & j = j', \end{cases} \tag{38}$$
where $\mathrm{EO}^{*}$ denotes the maximum value of $\mathrm{EO}(j, j')$ over $1 \leq j < j' \leq d$. The normalized quantity $\overline{\mathrm{EO}}(j, j')$ lies in $[0, 1]$; the higher the value, the stronger the contribution of the conditional dependence between features $j$ and $j'$. The overall CF architecture is summarized in Algorithm 3, and Figure 1 provides the overall workflow of model-based clustering based on the EMA-CF algorithm, where the $T$ clustering trees are grown in parallel.
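A sketch of the edge-occurrence aggregation in (37) and (38) (illustrative names; final_partitions holds the last partition $s_{\kappa_1(h)}(h)$ of each clustering vector) is

```python
# Sketch of Eqs. (37)-(38): count how often two features share a block in the final
# partitions of the T cluster instances, then normalize by the maximum count.
import numpy as np

def edge_occurrence(final_partitions, d):
    EO = np.zeros((d, d))
    for partition in final_partitions:           # one partition per instance h = 1..T
        for block in partition:
            for j in block:
                for jp in block:
                    if j != jp:
                        EO[j, jp] += 1           # edge (j, j') present in this instance
    eo_max = EO.max()                            # EO* over off-diagonal entries
    EO_bar = EO / eo_max if eo_max > 0 else EO
    np.fill_diagonal(EO_bar, 1.0)
    return EO_bar
```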
Algorithm 3 The EMA-CF algorithm
1: for $h = 1$ to $T$ do
2:     Grow a clustering vector $\tilde{s}(h)$ according to Algorithm 2 and obtain the Gaussian graphical model $s_{\kappa_1(h)}(h)$;
3:     Apply the EMA algorithm under the partition set $\tilde{s}(h)$;
4:     Construct the $n \times n$ consensus matrix $P(h)$ according to (35);
5:     Compute the feature-importance vector $\mathrm{FI}(h)$ according to (36);
6: end for
7: Average the $P(h)$ to get $\bar{P} \leftarrow \frac{1}{T}\sum_{h=1}^{T} P(h)$ and apply the hierarchical clustering algorithm based on $B$ with $B(i, i') = 1 - \bar{P}(i, i')$ to get the final clustering;
8: Compute the pattern of feature importance as $\overline{\mathrm{FI}} \leftarrow \frac{1}{T}\sum_{h=1}^{T} \mathrm{FI}(h)$;
9: Compute the normalized occurrence matrix $\overline{\mathrm{EO}}$ according to (38) for covariance structure detection.
Based on the principle of the EMA-CF algorithm, each cluster instance in the CF architecture has a maximum real-time complexity of $O[n \cdot K^{2} \cdot d \cdot (\kappa_1^{*})^{2} \cdot IterMax]$ when $\kappa_1^{*} < d^{1/2}$, whereas the real-time complexity of applying only the EMA algorithm is $O[n \cdot K^{2} \cdot d \cdot IterMax]$. The increased complexity comes from replacing the naïve Bayes assumption with the LAN assumption so as to account for within-component correlations.

6. Experiment Study

6.1. Experiments on Synthetic Data

In this section, we assess the proposed parsimonious modeling and approximated BMA approach for model-based clustering through different simulated data scenarios. The objective is to evaluate the ability of this framework to recover the original grouping of the data, as well as its ability to select features and detect covariance structures. We refer to the proposed method for clustering and model detection as EMA-CF.
Eight state-of-the-art model-based clustering methods were chosen for comparison. They are closely related to the proposed one and summarized in Table 1. To the best of our knowledge, except the proposed method, there are few approaches to performing clustering, feature selection, and covariance structure detection simultaneously. The MBIC method suggested in [49] uses the structural EM algorithm combined with the BIC to realize simultaneous feature selection and clustering in Gaussian mixture models. The MICL method proposed in [21] further tackles parameter uncertainty by considering the ICL, where an alternating optimization algorithm has been designed to find the data separation and distinctive features. Both MBIC and MICL are established on the local independence assumption. They were implemented using the R package “VarSelLCM”. The mcgStep method [15] directly seeks the sparse pattern of the component covariance matrices using the structural EM algorithm. Stepwise searching is required in each iteration to refine the component covariance graphs. The mcgStep method was implemented using the R package “mixggm” with the BIC model selection criterion. The mclust approach [50] in the R package “mclust” is based on the Gaussian parsimonious clustering model (GPCM) family, which is composed of 14 component covariance matrix structures under different levels of constraints after eigenvalue decomposition. mclust-BIC conducts model selection based on the BIC value after the fitting of each model. mclust-RMA and mclust-PMA are approximated BMA methods proposed in [27] based on the 14 model structures. While mclust-RMA performs averaging on the response matrices, mclust-PMA takes averaging on the estimated model parameters. When conducting model averaging, the model weights are approximated by normalizing the BIC values. In addition, we included the EMA algorithm in comparison, which takes BMA over only the naïve Bayes structures to investigate the effects of accounting for the within-class associations with the CF architecture. We denote the method as EMA-naïve. (In the Supplementary Materials, by viewing the EMA algorithm as an approximated CVB approach, we provide comparison results with two additional related clustering methods VarFnMS [51] and VarFnMST [3], which are both VB methods based on the naïve Bayes assumption.)
The MBIC, MICL, and mcgStep algorithms require initialization of the cluster allocations as well as of the model structure. In the R package "VarSelLCM" for MBIC and MICL, each feature is initialized as discriminant or not discriminant via random sampling. The initial class allocations are then provided by the EM algorithm associated with the initial feature discrimination pattern. We set the number of random initializations in the two algorithms to 100. The output was the result with the maximum BIC value for MBIC and the maximum ICL for MICL across the 100 initializations. In the packages "mixggm" and "mclust", the Gaussian model-based hierarchical clustering approach in [52] is used to initialize the class allocations. Initialization of the component covariance graphs for mcgStep is provided by filtering the sample correlation matrix in each class. The EMA-CF and EMA-naïve algorithms only need initialization of the cluster allocations. As with the competing algorithms, a poor choice of starting values may make convergence very slow and lead to a local optimum. The choice of a good starting class allocation for EMA-CF or EMA-naïve is therefore important; alternatively, multiple starting points can be tried, as in MBIC and MICL. For overall efficiency of the algorithm, we chose to use the result of the k-means clustering algorithm. To implement the EMA-CF algorithm, the maximum number of iterations $IterMax$ of the EMA algorithm was fixed at 100 when growing a clustering vector and set to 500 for the final EMA run with the obtained clustering vector. We set the number of parallel cluster instances to 100. The maximum allowed number of tree growths $\kappa_1^{*}$ and the maximum allowed number of consecutive unsuccessful attempts $\kappa_0^{*}$ were both set to 5. Further explorations of the CF control parameters are given in the Supplementary Materials. The values specified for these parameters are adequate to ensure stable and good performance of the algorithm in the following simulation settings.
We considered synthetic data from a bi-component Gaussian mixture model with mixing proportions $\tau = (0.5, 0.5)$ and ten features. The component mean of class one was $\mu_1 = (\mu_{11}, \mu_{12}, 0, 0, 0, 0, 0, 0, 0, 0)^{T}$, where $\mu_{11}$ and $\mu_{12}$ were generated randomly from the uniform distribution $U(0, 1)$. The mean of class two was given by $\mu_2 = (\mu_{11} + 2.5\epsilon, \mu_{12} + 2.5\epsilon, 0, 0, 0, 0, 0, 0, 0, 0)^{T}$, where the value $\epsilon$ permits us to tune the class overlap. Six scenarios differentiated by the covariance structures were considered. In Scenarios 1–4, the covariance structures were well specified, with two, three, five, and ten feature blocks from the LAN family; Scenario 4 corresponds to the naïve Bayes configuration. We also took into account scenarios where the covariance structures were mis-specified, as a Toeplitz-type matrix and as the Erdos–Renyi model [15] in Scenarios 5 and 6, respectively. The probability of two features being marginally correlated was set to 0.2 in the Erdos–Renyi specification. We used the same covariance matrix for the different mixture components in each scenario. Without loss of generality, the diagonal elements of each covariance matrix were set to one. The non-zero off-diagonal elements were randomly chosen from the uniform distribution $U(0.2, 0.8)$, subject to the symmetry and positive-definiteness constraints. Figure 2 exhibits the simulated covariance graphs and the corresponding Gaussian graphical models in the six scenarios. For each scenario, we generated random datasets with different combinations of sample size $n \in \{25, 50, 75, 100\}$ and class overlap $\epsilon \in \{1.0, 1.1, 1.3, 1.6, 2.0\}$, and we replicated each experiment twenty times.
As the true class assignment of the synthetic dataset was known, we computed the classification error rate to evaluate the quality of the clustering obtained by the competitive algorithms. The results averaged over the twenty replicates are reported in Table 2. The first and the second smallest classification error rates in each case are marked in bold. Table S3 provides the results of the standard deviation of the classification error rates. In general, the EMA-CF method outperforms the others and shows robustness across various simulation settings. The performance gain is substantial when the within-component correlation is strong, such as in Scenarios 1 and 2. It is noticeable that the mcgStep method also obtains relatively small classification errors in these scenarios, which emphasizes the importance of modeling the component covariance structures when presumptively high association relationships are present in the data. For small class overlap ϵ = 2.0 , all the methods tend to attain an almost perfect classification of the data. The EMA-CF method improves the data separation dramatically when the class overlap is increased. While EMA-CF improves the performance of EMA-naïve in Scenarios 1 and 2 when the within-component correlation is strong, the two methods show an overall competitive performance in the remaining four scenarios. Moreover, the EMA-naïve method outperforms MBIC and MICL when the class overlap is high. While the three methods all assume the naïve Bayes network structure, EMA-naïve considers the model uncertainty by employing the BMA.
The predictive log score (PLS) [25] is defined by
$$\mathrm{PLS}_{\mathrm{target}} = -\sum_{i_0=1}^{n_0} \log p_{\mathrm{target}}(z_{i_0}, x_{i_0}), \tag{39}$$
where $\{z_{i_0}, x_{i_0}\}_{i_0=1}^{n_0}$ is the set of testing data and $p_{\mathrm{target}}(z_{i_0}, x_{i_0})$ denotes the predictive probability of the datum $(z_{i_0}, x_{i_0})$ under the target method. Following the logarithmic scoring rule of Good [23], a better modeling strategy should consistently assign higher probabilities to the events that actually occur; therefore, the smaller the PLS, the more reliable the method. The PLS results obtained using the eight methods in the six scenarios are compared in the bubble plot shown in Figure 3, where the radius of each circle indicates the variation of the PLS values over the twenty replicates. The EMA-CF method shows remarkably high predictive performance across the different covariance structure configurations, and the results are robust across the various simulation settings. In particular, EMA-CF reduces the PLS of EMA-naïve in every case, which underlines the importance of modeling the within-component correlations.
The covariance structure detection ability was compared among EMA-CF, mcgStep, and mclust-BIC, and the ability to identify the feature importance pattern was compared among EMA-CF, EMA-naïve, MBIC, and MICL. Figure 4 shows the covariance structures detected by the EMA-CF and mcgStep methods. As the results are not sensitive to the class overlap, only the cases with $\epsilon = 1.3$ are illustrated here; the remaining results are presented in the Supplementary Materials. While the mcgStep method estimates the covariance structure as a covariance graph, EMA-CF provides an estimate in the form of a Gaussian graphical model. Both methods show good performance in detecting the underlying graph configurations, with EMA-CF exhibiting more robustness, especially when the sample size is small. The covariance structures detected most frequently over the twenty replicates by the mclust-BIC method are shown in Table 3, with the detection frequency given in parentheses. The EII structure indicates that the component covariance matrices are diagonal and homogeneous, while EEE indicates a homogeneous full covariance structure for the components.
Figure 5, Figure 6 and Figure 7 show the patterns of feature importance estimated by the EMA-CF, EMA-naïve, MBIC, and MICL methods under the class overlap settings $\epsilon \in \{1.0, 1.3, 2.0\}$, respectively; the cases $\epsilon \in \{1.1, 1.6\}$ are presented in the Supplementary Materials. Overall, the results of EMA-CF show great robustness and are responsive to the corresponding association structures between the features. It is noticeable that the patterns of feature importance estimated by EMA-CF for Scenarios 1 and 2 differ from those estimated by EMA-naïve, MBIC, and MICL, and the difference becomes evident as the class overlap decreases and the sample size increases. Indeed, the associations of the last eight features with the first two induce their indirect contributions to the classification. The MBIC and MICL methods perform erratically in Scenarios 1 and 2 when the class overlap is high. In Scenarios 3 and 4, where there is no conditional association between the first two and the remaining eight features, the four methods give similar identification results. In Scenarios 5 and 6, where the structures assumed are not from the LAN family, EMA-CF detects some latent patterns of feature significance induced by the simulated association structures. Overall, EMA-CF and EMA-naïve, which are based on the BMA, show consistent behavior as the sample size increases. Such behavior conforms to the principle of BMA, where the probability mass function over model structures (the model weights) peaks gradually at the MAP structure as the size of the dataset increases [31].

6.2. Experiments on Real-World Datasets

In this section, we illustrate the proposed method by applying it to some benchmark datasets: Iris, Olive, Wine, and Digit. The Iris dataset was obtained from the R package “datasets”. Olive is the Italian olive oil dataset and Wine the Italian wine dataset. They were both obtained from the R package “pgmm”. The Digit dataset was obtained from the UCI machine learning repository (https://doi.org/10.24432/C50P49; accessed on 26 August 2025). The complete data of Digit contain more than 5000 images for the handwritten digits 0–9. They are gray-scale with a size of 8 × 8 pixels. We focused on separation of the 4 and 9 digits and randomly reserved 100 images for each digit. As the variability of some pixels for a digit was exactly zero, singularity problems could occur in the model-based clustering algorithms. Therefore, we put a noise mask on the data matrix (of size 200 × 64 ). Each element of the noise mask was generated from N ( 0 , 0.1 ) . Table 4 presents the basic information for the four datasets.
To implement the EMA-CF method, as we had little information about the covariance structure of the data, we varied the setting of $\kappa_1^{*}$, which controls the tree growth in the CF architecture, among $d^{1/4}$, $d^{1/2}$, and $d^{3/4}$ (rounded to the nearest integer) to match different levels of sparsity. We denote the corresponding EMA-CF algorithms as EMA-CF-1, EMA-CF-2, and EMA-CF-3, respectively. The number of cluster instances was fixed at $T = 100$, and $\kappa_0^{*}$ was set to 5. We kept the settings of the other algorithms the same as in the simulation study.
The clustering quality was evaluated by comparison with the original grouping of the data. Table 5 shows the classification errors obtained by the eight competing methods. In general, the EMA-CF method gives the lowest classification errors for all four datasets. For the Wine and Digit datasets, the best results are achieved when $\kappa_1^{*}$ is small, and the EMA-naïve method, which does not model the covariance structure, performs equally well. Larger $\kappa_1^{*}$ values improve the data separation for the Iris and Olive datasets. The MBIC and MICL methods, based on the conditional independence assumption, provide the worst results for Iris and Olive but show sound performance for Wine and Digit. The mcgStep method provides relatively good classifications for Iris and Olive but performs poorly on Wine and Digit.
Figure 8 shows the covariance structures detected by EMA-CF and mcgStep. The two methods both advocate strong within-class correlations in Iris and Olive. Even with a small κ 1 * value, the overall color of the normalized occurrence matrix given by EMA-CF is remarkably deep. In contrast, the associations in Wine and Digit are much sparser. The covariance structures given by the mclust-BIC method are VEV, EVV, EVI, and EEE for Iris, Olive, Wine, and Digit, respectively. While EVI indicates diagonal component covariance matrices, VEV, EVV, and EEE produce full covariance matrices. The combined results with the classification performance indicate that the EMA-CF method can accommodate various kinds of covariance structures and give more reliable clustering results.
Implementation of EMA-CF, EMA-naïve, MBIC, and MICL gives the estimated patterns of feature importance. For Iris and Olive, the four methods yield the same identification, with all features significant at the highest level (see Figure S11 in the Supplementary Materials). The results for Wine and Digit are compared in Figure 9. While the MBIC and MICL methods separate the features into discriminating and undiscriminating ones, EMA-CF and EMA-naïve give relatively conservative estimates of feature importance by accounting for model uncertainty through BMA. There are no apparent differences between the results of EMA-CF and EMA-naïve, which agrees with the sparse patterns of feature association detected in the two datasets in Figure 8.

6.3. Application on Tartary Buckwheat Data

In this section, we present an application of the developed method to a real agricultural problem. The data concern a traditional edible and medicinal crop, Tartary buckwheat. The experimental data comprise a total of 200 Tartary buckwheat landraces grown in two locations with distinct climate conditions [53], denoted as E1 and E2, respectively. Eleven phenotypic traits of the Tartary buckwheat plant were investigated:
  • plant morphological traits
    plant height (PH), stem diameter (SD), number of nodes (NN), number of branches (NB), and branch height (BH)
  • grain-related traits
    grain length (GL) and grain width (GW)
  • yield-related traits
    number of grains per plant (NGP), weight of grains per plant (WGP), 1000-grain weight (TGW), and yield per hectare (Y)
It is commonly acknowledged that a changing environment has non-negligible impacts on the growth and development of crops. Our study is concerned with the influence of environmental changes on the Tartary buckwheat landraces. We mixed the phenotypic data of 100 randomly selected landraces from environments E1 and E2, and the experiment was repeated twenty times. Again, when using the EMA-CF method, we set κ1* to d^{1/4}, d^{1/2}, and d^{3/4}. For each of the eight competing methods, a bi-component clustering model was constructed. Major impacts of the environmental changes would therefore be confirmed by separation of the data according to their original environments.
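The R sketch below outlines one possible reading of this mixing protocol. It is illustrative only: the data frames pheno_E1 and pheno_E2 are hypothetical containers for the per-environment trait records (one row per landrace, eleven numeric trait columns, matching row names).

```r
set.seed(100)
n_rep <- 20
replicates <- lapply(seq_len(n_rep), function(r) {
  chosen <- sample(rownames(pheno_E1), 100)          # 100 randomly selected landraces
  X   <- rbind(pheno_E1[chosen, ], pheno_E2[chosen, ])  # mix records from E1 and E2
  env <- rep(c("E1", "E2"), each = 100)              # reference grouping by environment
  list(X = as.matrix(X), env = env)
})
```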
Table 6 shows the classification errors and the PLS values obtained by the eight methods. The class assignment given by EMA-CF with κ1* = d^{3/4} shows the highest consistency with the original grouping of the data by environment. Moreover, EMA-CF attains the highest predictive ability among the competing methods.
The covariance structures detected by EMA-CF and mcgStep show strong evidence of within-class correlations between the phenotypic traits of Tartary buckwheat. As shown in Figure 10, there are strong associations among the yield-related traits. Moreover, the conditional correlations between PH and SD, between NN and NB, and between GL and GW are evident, as assessed by the EMA-CF method. The phenotype TGW is found to be related to GL and GW, which agrees with the findings of a previous study [54]. The covariance structures selected by mclust-BIC over the twenty replicates are EVE (10 times) and VVE (10 times), both of which indicate full component covariance matrices.
From Figure 11, we find that all the yield-related traits are discriminant between environments E1 and E2. In addition, the phenotypes PH, SD, BH, and GL are also identified as significant. These findings suggest the sensitivity of the Tartary buckwheat landraces to environmental changes. While the significance of GW is gradually identified by EMA-CF as κ1* increases, it is regarded as unimportant by the EMA-naïve, MBIC, and MICL methods, which ignore the associations between GW and the yield-related traits.
While most Tartary buckwheat landraces can be separated according to their growing environments (E1 and E2), a small group of landraces is easily misclassified. Table 7 summarizes the landraces whose data were misclassified into the same class ten times or more across the twenty replicates by the EMA-CF-3 algorithm. Among them, SC-8 and GZ-32 exhibit high yields in both environments and could provide potentially excellent varieties for further investigation.

7. Conclusions

In this paper, we presented a comprehensive framework for model-based clustering. The framework is built on the LAN family for parsimonious modeling of Gaussian mixtures, which allows for simultaneous feature selection and covariance structure detection. A class of diffused-conjugate priors was proposed to realize objective Bayesian inference. The EMA-CF algorithm was developed to integrate model uncertainty by BMA over the LAN family; it takes advantage of the closed-form expression of the BMA classifier under the naïve Bayes assumption and provides an efficient approximation of model averaging over feature partitions using the ensemble learning strategy of CF.
Extensive experiments on synthetic and real-world datasets showed that the proposed method is able to capture real model structures and exhibits better clustering performance than methods relying on the naïve Bayes assumption or modeling only the association structures. The application of the developed method to the multi-environment Tartary buckwheat data demonstrated its applicability and usefulness in agriculture. More applications will be explored in our future work.
In the proposed method, we took the number of clusters (the number of components in the mixture model) as fixed and given before inference, although in most situations it is unknown a priori. A commonly applied approach is Bayesian model selection from a set of candidate numbers using some model selection criterion [17,20,21,49]. In ongoing work, we would like to implement a soft model selection that accounts for model uncertainty regarding the number of mixture components by extending the proposed architecture with an additional level of BMA. Datasets generated with different numbers of mixture components will be used to examine the robustness of this enhanced method for simultaneous clustering and model structure detection.
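As an illustration of the commonly applied approach mentioned above (not the soft selection we envisage), the number of components can be chosen by BIC over a set of candidate values with the mclust package [50]:

```r
library(mclust)
X <- as.matrix(iris[, 1:4])         # any numeric data matrix
bic <- mclustBIC(X, G = 1:5)        # BIC for G = 1,...,5 over the GPCM family
summary(bic)                        # top-ranked models and their numbers of components
fit <- Mclust(X, x = bic)           # refit the best model found in `bic`
fit$G                               # selected number of components
```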
Additionally, in the synthetic and real-data experiments, we mainly considered balanced clustering problems, i.e., clusters of similar sizes. As with the EM algorithm, a potential problem for EMA in the unbalanced case is empty components, which can disrupt the execution of the algorithm. A direct and simple remedy is to add a noise mask to the response matrix (the matrix of posterior class membership probabilities) whenever the sum of one of its columns falls below a threshold value. Further enhancement could be achieved by utilizing a simulated annealing strategy [55]. A more detailed investigation of unbalanced clustering will be presented in our subsequent work.
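A minimal sketch of this remedy is given below, assuming the membership probabilities are stored in an n × K matrix P; the threshold and noise scale are illustrative choices, not values used in the paper.

```r
# If a column of the n x K membership probability matrix P nearly vanishes
# (an emptying component), perturb P with a small noise mask and renormalize rows.
fix_empty_components <- function(P, threshold = 1e-3, noise_sd = 1e-2) {
  if (any(colSums(P) < threshold)) {
    P <- abs(P + matrix(rnorm(length(P), sd = noise_sd), nrow(P), ncol(P)))
    P <- P / rowSums(P)              # each row must remain a probability vector
  }
  P
}
```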
In Section 5, we briefly analyzed the run-time complexity of the EMA-CF algorithm. The additional level of model averaging over the feature partitions worsens the time complexity but allows for an investigation of the within-component correlations. Indeed, during the experiments, the EMA-CF, mcgStep, and mclust algorithms took relatively longer to complete the computation than the EMA-naïve, MBIC, and MICL algorithms, which only consider feature selection. Given that the execution time of an algorithm is heavily influenced by the choice of programming language, the performance of the EMA-CF algorithm, which was initially written in R, could be notably enhanced through implementation in C++. As part of our subsequent work, we will rewrite this algorithm in C++ with the goal of significantly improving its efficiency and distribute it as an R package.
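As a toy illustration of such a port (not the EMA-CF implementation itself), a per-observation log-sum-exp loop of the kind that appears in mixture computations can be moved to C++ with Rcpp:

```r
library(Rcpp)

cppFunction('
NumericVector rowLogSumExp(NumericMatrix logdens) {
  // log-sum-exp across the K columns of an n x K matrix of log-densities
  int n = logdens.nrow(), K = logdens.ncol();
  NumericVector out(n);
  for (int i = 0; i < n; ++i) {
    double m = logdens(i, 0);
    for (int k = 1; k < K; ++k) if (logdens(i, k) > m) m = logdens(i, k);
    double s = 0.0;
    for (int k = 0; k < K; ++k) s += std::exp(logdens(i, k) - m);
    out[i] = m + std::log(s);
  }
  return out;
}')

set.seed(1)
ld <- matrix(rnorm(200 * 3), 200, 3)   # mock per-component log-densities
all.equal(rowLogSumExp(ld),
          apply(ld, 1, function(v) max(v) + log(sum(exp(v - max(v))))))
```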

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/sym17111879/s1, S1. Exploration of the settings of Cluster Forests; S2. Supplementary figures for the experiment study; S3. Supplementary tables for the experiment study; S4. Comparison with variational Bayes methods; S5. Simulation with extra noise in the LAN model.

Author Contributions

Conceptualization, W.X. and Y.N.; methodology, S.F.; software, S.F.; validation, S.F.; formal analysis, S.F.; writing—original draft preparation, S.F.; writing—review and editing, W.X.; visualization, S.F.; supervision, Y.N.; funding acquisition, Y.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China, grant number 2020YFA0713603, and the National Natural Science Foundation of China, grant number 11971386.

Data Availability Statement

The benchmark datasets used in Section 6.2 are publicly available. The Iris data can be obtained from the R package “datasets”. (The R software can be downloaded at http://www.r-project.org/; R version is 4.5.0. (accessed on 3 May 2025)). The Olive and Wine data can be obtained from the R package “pgmm”. The handwritten digits data are openly available in the UCI machine learning repository: https://doi.org/10.24432/C50P49 (accessed on 26 August 2025). The phenotypic data of Tartary buckwheat used in Section 6.3 were kindly provided by the Minor Grain Research Centre of the College of Agronomy, Northwest A & F University, Yangling, Shaanxi, China, and are available with the permission of the research center. The R codes for the developed algorithm are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Relationship with the RCVB Method

The RCVB method maximizes a lower bound of the marginal likelihood by applying VB to one set of variables, while the remaining variables are collapsed over by marginalization. Let the collapsed variables be the model and its parameters, and let VB be applied to the latent class labels. We denote the auxiliary function as $q(Z) = \prod_{i=1}^{n} q_i(z_i)$. For $i = 1, 2, \ldots, n$, using Jensen's inequality gives
\[
\log p(z_i, X, \theta_m, m) \geq \mathbb{E}_{q_{-i}}\!\left[\log \frac{p(Z, X \mid \theta_m, m)\, p(\theta_m, m)}{\prod_{i' \neq i} q_{i'}(z_{i'})}\right], \tag{A1}
\]
where the expectation is taken with respect to $\prod_{i' \neq i} q_{i'}(z_{i'})$. Marginalizing over $\theta_m$ and $m$ on both sides, we obtain the following:
\[
p(z_i, X) \geq \sum_{m \in \mathcal{M}} \int_{\Theta_m} \exp\!\left\{\mathbb{E}_{q_{-i}}\!\left[\log \frac{p(Z, X \mid \theta_m, m)\, p(\theta_m, m)}{\prod_{i' \neq i} q_{i'}(z_{i'})}\right]\right\} d\theta_m =: \underline{p}(z_i, X). \tag{A2}
\]
Then, applying Jensen's inequality a second time, the evidence lower bound objective (ELBO) on the logarithm of the marginal likelihood can be formed as follows:
\[
\log p(X) \geq \mathbb{E}_{q_i}\!\left[\log p(z_i, X) - \log q_i(z_i)\right] \geq \mathbb{E}_{q_i}\!\left[\log \underline{p}(z_i, X) - \log q_i(z_i)\right] =: \mathrm{ELBO}_{\mathrm{RCVB}}. \tag{A3}
\]
Like the CVB approach, RCVB can give a tighter lower bound than the ordinary VB [33].
Through simple variational calculus, the $q_i(z_i)$ that maximizes $\mathrm{ELBO}_{\mathrm{RCVB}}$ with $q_{i'}(z_{i'})$, $i' \neq i$, held fixed is given as follows:
\[
q_i(z_i) \propto \underline{p}(z_i, X) \propto \sum_{m \in \mathcal{M}} \int_{\Theta_m} \exp\!\left\{\mathbb{E}_{q_{-i}}\!\left[\log p(Z, X \mid \theta_m, m)\right] + \log p(\theta_m, m)\right\} d\theta_m. \tag{A4}
\]
Mathematical manipulations give
\[
q_i(z_i) \propto p(z_i, \hat{Z}_{-i}, X) \propto p(z_i, x_i \mid \hat{Z}_{-i}, X_{-i}), \tag{A5}
\]
which is the MA step in the EMA algorithm. When the batch coordinate-ascent search strategy [56] is used to optimize $\mathrm{ELBO}_{\mathrm{RCVB}}$, the unsupervised classifier obtained by EMA is equivalent to that from RCVB.

Appendix B. Proof of Theorem 1

This section is devoted to the proof of Theorem 1. To compute the Bayesian test statistic for testing $\{H_0^{l_s}: r_{l_s} = 0\}$ against $\{H_1^{l_s}: r_{l_s} = 1\}$ when $f, g \to +\infty$, we need to derive the marginals $p(X^{l_s} \mid Z, r_{l_s} = 1, s)$ and $p(X^{l_s} \mid r_{l_s} = 0, s)$. Recall
\[
\begin{aligned}
p(X^{l_s} \mid Z, r_{l_s}=1, s)
&= \lim_{f,g\to+\infty}\int p(X^{l_s} \mid Z, \mu^{l_s}, \Sigma^{l_s}, r_{l_s}=1, s)\, p(\mu^{l_s}, \Sigma^{l_s})\, d\mu^{l_s}\, d\Sigma^{l_s}\\
&= \lim_{f,g\to+\infty}\prod_{k=1}^{K}\int\prod_{i=1}^{n}\mathcal{N}\!\left(x_i^{l_s}; \mu_k^{l_s}, \Sigma_k^{l_s}\right)^{z_{ik}}\mathcal{N}\!\left(\mu_k^{l_s}; \xi_k^{l_s}, \Sigma_k^{l_s}/\beta_k^{l_s}\right)\mathrm{IW}\!\left(\Sigma_k^{l_s}; v_k^{l_s}, \eta_k^{l_s} I_{d_{l_s}}\right) d\mu_k^{l_s}\, d\Sigma_k^{l_s}.
\end{aligned}
\tag{A6}
\]
Expanding Equation (A6) with the exact forms of the Gaussian and inverse-Wishart density functions, we have the following:
\[
\begin{aligned}
p(X^{l_s} \mid Z, r_{l_s}=1, s)
= \lim_{f,g\to+\infty}\prod_{k=1}^{K}\int\exp\Big\{
&-\tfrac{d_{l_s} n_k}{2}\log 2\pi - \tfrac{n_k}{2}\log\left|\Sigma_k^{l_s}\right| - \tfrac{1}{2}\sum_{i=1}^{n} z_{ik}\left(x_i^{l_s}-\mu_k^{l_s}\right)^{T}\!\left(\Sigma_k^{l_s}\right)^{-1}\!\left(x_i^{l_s}-\mu_k^{l_s}\right)\\
&-\tfrac{d_{l_s}}{2}\log 2\pi + \tfrac{d_{l_s}}{2}\log\beta_k^{l_s} - \tfrac{1}{2}\log\left|\Sigma_k^{l_s}\right| - \tfrac{1}{2}\left(\mu_k^{l_s}-\xi_k^{l_s}\right)^{T}\!\left(\tfrac{1}{\beta_k^{l_s}}\Sigma_k^{l_s}\right)^{-1}\!\left(\mu_k^{l_s}-\xi_k^{l_s}\right)\\
&-\tfrac{d_{l_s}(d_{l_s}-1)}{4}\log\pi - \log\prod_{i=1}^{d_{l_s}}\Gamma\!\left(\tfrac{v_k^{l_s}+1-i}{2}\right) - \tfrac{d_{l_s} v_k^{l_s}}{2}\log 2 + \tfrac{d_{l_s} v_k^{l_s}}{2}\log\eta_k^{l_s}\\
&-\tfrac{v_k^{l_s}+d_{l_s}+1}{2}\log\left|\Sigma_k^{l_s}\right| - \tfrac{1}{2}\mathrm{tr}\!\left(\eta_k^{l_s}\left(\Sigma_k^{l_s}\right)^{-1}\right)\Big\}\, d\mu_k^{l_s}\, d\Sigma_k^{l_s}.
\end{aligned}
\tag{A7}
\]
Using Assumption 1, we force $\beta_k^{l_s}, v_k^{l_s} \to 0$, $k = 0, 1, \ldots, K$. Several mathematical manipulations give
\[
\begin{aligned}
p(X^{l_s} \mid Z, r_{l_s}=1, s)
= \lim_{f,g\to+\infty}\prod_{k=1}^{K}\int\exp\Big\{
&-\tfrac{d_{l_s}}{2}\log 2\pi - \tfrac{1}{2}\log\left|\tfrac{1}{n_k}\Sigma_k^{l_s}\right| - \tfrac{1}{2}\left(\mu_k^{l_s}-\bar{x}_k^{l_s}\right)^{T}\!\left(\tfrac{1}{n_k}\Sigma_k^{l_s}\right)^{-1}\!\left(\mu_k^{l_s}-\bar{x}_k^{l_s}\right)\\
&-\tfrac{d_{l_s} n_k}{2}\log 2\pi + \tfrac{d_{l_s}}{2}\log\beta_k^{l_s} - \log\prod_{i=1}^{d_{l_s}}\Gamma\!\left(\tfrac{v_k^{l_s}+1-i}{2}\right) - \tfrac{d_{l_s}}{2}\log n_k - \tfrac{d_{l_s}(d_{l_s}-1)}{4}\log\pi\\
&-\tfrac{n_k+d_{l_s}+1}{2}\log\left|\Sigma_k^{l_s}\right| - \tfrac{1}{2}\mathrm{tr}\!\left(n_k S_k^{l_s}\left(\Sigma_k^{l_s}\right)^{-1}\right)\Big\}\, d\mu_k^{l_s}\, d\Sigma_k^{l_s},
\end{aligned}
\tag{A8}
\]
from which we can extract the posterior of $(\mu_k^{l_s}, \Sigma_k^{l_s})$ as
\[
p(\mu_k^{l_s}, \Sigma_k^{l_s} \mid Z, X^{l_s}) = \mathcal{N}\!\left(\mu_k^{l_s}; \bar{x}_k^{l_s}, \Sigma_k^{l_s}/n_k\right)\mathrm{IW}\!\left(\Sigma_k^{l_s}; n_k, n_k S_k^{l_s}\right). \tag{A9}
\]
Then, by computing the integral in (A8), it follows
\[
p(X^{l_s} \mid Z, r_{l_s}=1, s) = \lim_{f,g\to+\infty}\prod_{k=1}^{K}\exp\!\left[-\frac{d_{l_s} n_k}{2}\log 2\pi + \frac{d_{l_s}}{2}\log\beta_k^{l_s} - \log\prod_{i=1}^{d_{l_s}}\Gamma\!\left(\frac{v_k^{l_s}+1-i}{2}\right) - \frac{d_{l_s}}{2}\log n_k + \log\prod_{i=1}^{d_{l_s}}\Gamma\!\left(\frac{n_k+1-i}{2}\right) + \frac{d_{l_s} n_k}{2}\log 2 - \frac{n_k}{2}\log\left|n_k S_k^{l_s}\right|\right]. \tag{A10}
\]
Analogously, we obtain
\[
\begin{aligned}
p(X^{l_s} \mid r_{l_s}=0, s)
&= \lim_{f,g\to+\infty}\int p(X^{l_s} \mid \mu_0^{l_s}, \Sigma_0^{l_s}, r_{l_s}=0, s)\, p(\mu_0^{l_s}, \Sigma_0^{l_s})\, d\mu_0^{l_s}\, d\Sigma_0^{l_s}\\
&= \lim_{f,g\to+\infty}\int\prod_{i=1}^{n}\mathcal{N}\!\left(x_i^{l_s}; \mu_0^{l_s}, \Sigma_0^{l_s}\right)\mathcal{N}\!\left(\mu_0^{l_s}; \xi_0^{l_s}, \Sigma_0^{l_s}/\beta_0^{l_s}\right)\mathrm{IW}\!\left(\Sigma_0^{l_s}; v_0^{l_s}, \eta_0^{l_s} I_{d_{l_s}}\right) d\mu_0^{l_s}\, d\Sigma_0^{l_s}\\
&= \lim_{f,g\to+\infty}\exp\!\left[-\frac{d_{l_s} n}{2}\log 2\pi + \frac{d_{l_s}}{2}\log\beta_0^{l_s} - \log\prod_{i=1}^{d_{l_s}}\Gamma\!\left(\frac{v_0^{l_s}+1-i}{2}\right) - \frac{d_{l_s}}{2}\log n + \log\prod_{i=1}^{d_{l_s}}\Gamma\!\left(\frac{n+1-i}{2}\right) + \frac{d_{l_s} n}{2}\log 2 - \frac{n}{2}\log\left|n S_0^{l_s}\right|\right],
\end{aligned}
\tag{A11}
\]
and
\[
p(\mu_0^{l_s}, \Sigma_0^{l_s} \mid X^{l_s}) = \mathcal{N}\!\left(\mu_0^{l_s}; \bar{x}_0^{l_s}, \Sigma_0^{l_s}/n\right)\mathrm{IW}\!\left(\Sigma_0^{l_s}; n, n S_0^{l_s}\right). \tag{A12}
\]
Using (A10) and (A11), we can express the Bayesian test statistic in (24) as
\[
\lambda_{\mathrm{Bayes}}(X^{l_s}, Z, s) = 2\log\frac{p(X^{l_s} \mid Z, r_{l_s}=1, s)}{p(X^{l_s} \mid r_{l_s}=0, s)} = \lambda_{\mathrm{LRT}}(X^{l_s}, Z, s) - \ell_{l_s}\log n + C_1 + C_2, \tag{A13}
\]
where
\[
\lambda_{\mathrm{LRT}}(X^{l_s}, Z, s) = \sum_{k=1}^{K}\left(-n_k\log\left|S_k^{l_s}\right| - d_{l_s} n_k\right) + n\log\left|S_0^{l_s}\right| + d_{l_s} n, \qquad \ell_{l_s} = (K-1)\, d_{l_s}\!\left(\frac{d_{l_s}+1}{2} + 1\right), \tag{A14}
\]
and
\[
\begin{aligned}
C_1 &= \lim_{f,g\to+\infty}\left[K d_{l_s}\log\beta_1^{l_s} - 2K\log\prod_{i=1}^{d_{l_s}}\Gamma\!\left(\frac{v_1^{l_s}+1-i}{2}\right) - d_{l_s}\log\beta_0^{l_s} + 2\log\prod_{i=1}^{d_{l_s}}\Gamma\!\left(\frac{v_0^{l_s}+1-i}{2}\right)\right],\\
C_2 &= \sum_{k=1}^{K}\left[d_{l_s} n_k - \left(d_{l_s} n_k - \frac{d_{l_s}(d_{l_s}+1)}{2}\right)\log n_k + 2\log\prod_{i=1}^{d_{l_s}}\Gamma\!\left(\frac{n_k+1-i}{2}\right)\right] - d_{l_s} n + \left(d_{l_s} n - \frac{d_{l_s}(d_{l_s}+1)}{2}\right)\log n - 2\log\prod_{i=1}^{d_{l_s}}\Gamma\!\left(\frac{n+1-i}{2}\right).
\end{aligned}
\tag{A15}
\]
When computing (A15), we have applied the assumptions:
\[
\beta_k^{l_s} = \left(\frac{n_k}{n}\right)^{\frac{d_{l_s}+1}{2}+1}\beta_1^{l_s}, \qquad k = 2, 3, \ldots, K, \tag{A16}
\]
and $v_1^{l_s} = v_2^{l_s} = \cdots = v_K^{l_s}$.
The quantity $C_1$ can be further simplified using the assumptions
\[
\beta_0^{l_s} = \frac{1}{2\pi}\, f^{-\frac{2}{d_s d_{l_s}}}, \qquad \beta_1^{l_s} = \frac{1}{2\pi}\, f^{-\frac{2}{K d_s d_{l_s}}}, \tag{A17}
\]
and
\[
\prod_{i=1}^{d_{l_s}}\Gamma\!\left(\frac{v_0^{l_s}+1-i}{2}\right) = 2^{\frac{d_{l_s}(d_{l_s}+1)}{4}}\, g^{\frac{1}{d_s}}, \qquad \prod_{i=1}^{d_{l_s}}\Gamma\!\left(\frac{v_1^{l_s}+1-i}{2}\right) = 2^{\frac{d_{l_s}(d_{l_s}+1)}{4}}\, g^{\frac{1}{K d_s}}, \tag{A18}
\]
which gives
\[
C_1 = (1-K)\left[d_{l_s}\log 2\pi + \frac{d_{l_s}(d_{l_s}+1)}{2}\log 2\right]. \tag{A19}
\]
Finally, we apply Stirling's asymptotic expansion (for large $n$, $\log\Gamma(n) = n\log n - n - \frac{1}{2}\log n + \frac{1}{2}\log 2\pi + O(\frac{1}{n})$) to the Gamma functions in $C_2$. Through several mathematical manipulations, we obtain the following:
\[
C_2 \approx (K-1)\left[d_{l_s}\log 2\pi + \frac{d_{l_s}(d_{l_s}+1)}{2}\log 2\right]. \tag{A20}
\]
Inserting (A19) and (A20) into (A13), it follows that
\[
\lambda_{\mathrm{Bayes}}(X^{l_s}, Z, s) \approx \lambda_{\mathrm{LRT}}(X^{l_s}, Z, s) - \ell_{l_s}\log n. \tag{A21}
\]
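As a numerical illustration of the right-hand side of (A21) (our own sketch, not part of the proof), the following R code computes $\lambda_{\mathrm{LRT}}$ and the penalty $\ell_{l_s}\log n$ of (A14) from a simulated feature block with known class labels.

```r
set.seed(1)
K <- 2; d <- 3; n <- 200
Z <- sample(1:K, n, replace = TRUE)                 # class labels
X <- matrix(rnorm(n * d), n, d) + 2 * (Z == 2)      # shift class 2 by 2 in every coordinate

mle_cov <- function(A) crossprod(scale(A, scale = FALSE)) / nrow(A)  # scatter matrix with divisor n
S0 <- mle_cov(X)                                    # pooled (one-component) scatter
nk <- tabulate(Z, K)                                # class sizes
Sk <- lapply(1:K, function(k) mle_cov(X[Z == k, , drop = FALSE]))

lambda_LRT <- sum(sapply(1:K, function(k)
                  -nk[k] * log(det(Sk[[k]])) - d * nk[k])) +
              n * log(det(S0)) + d * n              # first part of (A14)
ell <- (K - 1) * d * ((d + 1) / 2 + 1)              # second part of (A14)
lambda_LRT - ell * log(n)                           # BIC-type approximation, cf. (A21)
```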

References

  1. Jiang, Z.; Zheng, Y.; Tan, H.; Tang, B.; Zhou, H. Variational deep embedding: An unsupervised and generative approach to clustering. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, Melbourne, Australia, 19–25 August 2017; pp. 1965–1972. [Google Scholar]
  2. Yang, L.; Cheung, N.M.; Li, J.; Fang, J. Deep clustering by Gaussian mixture variational autoencoders with graph embedding. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6440–6449. [Google Scholar]
  3. Sun, J.; Zhou, A.; Keates, S.; Liao, S. Simultaneous Bayesian clustering and feature selection through student’s t mixtures model. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 1187–1199. [Google Scholar] [CrossRef] [PubMed]
  4. Bouveyron, C.; Brunet-Saumard, C. Model-based clustering of high-dimensional data: A review. Comput. Stat. Data Anal. 2014, 71, 52–78. [Google Scholar] [CrossRef]
  5. Ormoneit, D.; Tresp, V. Averaging, maximum penalized likelihood and Bayesian estimation for improving Gaussian mixture probability density estimates. IEEE Trans. Neural Netw. 1998, 9, 639–650. [Google Scholar] [CrossRef]
  6. Law, M.H.C.; Figueiredo, M.A.T.; Jain, A.K. Simultaneous feature selection and clustering using mixture models. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 1154–1166. [Google Scholar] [CrossRef]
  7. Li, Y.; Dong, M.; Hua, J. Simultaneous localized feature selection and model detection for Gaussian mixtures. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 953–960. [Google Scholar] [CrossRef]
  8. Hong, X.; Li, H.; Miller, P.; Zhou, J.; Li, L.; Crookes, D.; Lu, Y.; Li, X.; Zhou, H. Component-based feature saliency for clustering. IEEE Trans. Knowl. Data Eng. 2021, 33, 882–896. [Google Scholar] [CrossRef]
  9. Perthame, E.; Friguet, C.; Causeur, D. Stability of feature selection in classification issues for high-dimensional correlated data. Stat. Comput. 2016, 26, 783–796. [Google Scholar] [CrossRef]
  10. Fan, J.; Ke, Y.; Wang, K. Factor-adjusted regularized model selection. J. Econom. 2020, 216, 71–85. [Google Scholar] [CrossRef]
  11. Mai, Q.; Zou, H.; Yuan, M. A direct approach to sparse discriminant analysis in ultra-high dimensions. Biometrika 2012, 99, 29–42. [Google Scholar] [CrossRef]
  12. Celeux, G.; Govaert, G. Gaussian parsimonious clustering models. Pattern Recognit. 1995, 28, 781–793. [Google Scholar] [CrossRef]
  13. McLachlan, G.J.; Bean, R.W.; Jones, L.B.T. Extension of the mixture of factor analyzers model to incorporate the multivariate t-distribution. Comput. Stat. Data Anal. 2007, 51, 5327–5338. [Google Scholar] [CrossRef]
  14. Andrews, J.L.; McNicholas, P.D. Extending mixtures of multivariate t-factor analyzers. Stat. Comput. 2011, 21, 361–373. [Google Scholar] [CrossRef]
  15. Fop, M.; Murphy, T.B.; Scrucca, L. Model-based clustering with sparse covariance matrices. Stat. Comput. 2019, 29, 791–819. [Google Scholar] [CrossRef]
  16. Ruan, L.; Yuan, M.; Zou, H. Regularized parameter estimation in high-dimensional Gaussian mixture models. Neural Comput. 2011, 23, 1605–1622. [Google Scholar] [CrossRef] [PubMed]
  17. Galimberti, G.; Soffritti, G. Using conditional independence for parsimonious model-based Gaussian clustering. Stat. Comput. 2013, 23, 625–638. [Google Scholar] [CrossRef]
  18. Witten, D.M.; Friedman, J.H.; Simon, N. New insights and faster computations for the graphical lasso. J. Comput. Graphical Stat. 2011, 20, 892–900. [Google Scholar] [CrossRef]
  19. Dash, D.; Cooper, G.F. Exact model averaging with naive Bayesian classifiers. In Proceedings of the 19th International Conference on Machine Learning, San Francisco, CA, USA, 8–12 July 2002; pp. 91–98. [Google Scholar]
  20. Bhattacharya, S.; McNicholas, P.D. A LASSO-penalized BIC for mixture model selection. Adv. Data Anal. Classif. 2014, 8, 45–61. [Google Scholar] [CrossRef]
  21. Marbac, M.; Sedki, M. Variable selection for model-based clustering using the integrated complete-data likelihood. Stat. Comput. 2017, 27, 1049–1063. [Google Scholar] [CrossRef]
  22. Crook, O.M.; Gatto, L.; Kirk, P.D.W. Fast approximate inference for variable selection in Dirichlet process mixtures, with an application to pan-cancer proteomics. Stat. Appl. Genet. Mol. Biol. 2019, 18, 20180065. [Google Scholar] [CrossRef]
  23. Madigan, D.; Raftery, A.E. Model selection and accounting for model uncertainty in graphical models using Occam’s window. J. Am. Stat. Assoc. 1994, 89, 1535–1546. [Google Scholar] [CrossRef]
  24. Raftery, A.E.; Madigan, D.; Hoeting, J.A. Bayesian model averaging for linear regression models. J. Am. Stat. Assoc. 1997, 92, 179–191. [Google Scholar] [CrossRef]
  25. Hoeting, J.A.; Madigan, D.; Raftery, A.E.; Volinsky, C.T. Bayesian model averaging: A tutorial. Stat. Sci. 1999, 14, 382–401. [Google Scholar] [CrossRef]
  26. Santafé, G.; Lozano, J.A.; Larrañaga, P. Inference of population structure using genetic markers and a Bayesian model averaging approach for clustering. J. Comput. Biol. 2008, 15, 207–220. [Google Scholar] [CrossRef] [PubMed]
  27. Wei, Y.; McNicholas, P.D. Mixture model averaging for clustering. Adv. Data Anal. Classif. 2015, 9, 197–217. [Google Scholar] [CrossRef]
  28. Chen, C.C.M.; Keith, J.M.; Mengersen, K.L. Accurate phenotyping: Reconciling approaches through Bayesian model averaging. PLoS ONE 2017, 12, e0176136. [Google Scholar] [CrossRef]
  29. Fragoso, T.M.; Bertoli, W.; Louzada, F. Bayesian model averaging: A systematic review and conceptual classification. Int. Stat. Rev. 2018, 86, 1–28. [Google Scholar] [CrossRef]
  30. Hinne, M.; Gronau, Q.F.; van den Bergh, D.; Wagenmakers, E.J. A conceptual introduction to Bayesian model averaging. Adv. Methods Pract. Psychol. Sci. 2020, 3, 200–215. [Google Scholar] [CrossRef]
  31. Santafé, G.; Lozano, J.A.; Larrañaga, P. Bayesian model averaging of naive Bayes for clustering. IEEE Trans. Syst. Man Cybern. B 2006, 36, 1149–1161. [Google Scholar] [CrossRef]
  32. Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. B 1977, 39, 1–22. [Google Scholar] [CrossRef]
  33. Yu, W.; Ormerod, J.T.; Stewart, M. Variational discriminant analysis with variable selection. Stat. Comput. 2020, 30, 933–951. [Google Scholar] [CrossRef]
  34. Ormerod, J.T.; Stewart, M.; Yu, W.; Romanes, S.E. Bayesian hypothesis tests with diffuse priors: Can we have our cake and eat it too? Aust. N. Z. J. Stat. 2024, 66, 204–227. [Google Scholar] [CrossRef]
  35. Andrade, D.; Takeda, A.; Fukumizu, K. Robust Bayesian model selection for variable clustering with the Gaussian graphical model. Stat. Comput. 2020, 30, 351–376. [Google Scholar] [CrossRef]
  36. Yan, D.; Chen, A.; Jordan, M.I. Cluster forests. Comput. Stat. Data Anal. 2013, 66, 178–192. [Google Scholar] [CrossRef]
  37. Teh, Y.W.; Newman, D.; Welling, M. A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. In Proceedings of the Advances in Neural Information Processing Systems 19; MIT Press: Cambridge, MA, USA, 2007; pp. 1353–1360. [Google Scholar]
  38. Tuyl, F. A note on priors for the multinomial model. Am. Stat. 2017, 71, 298–301. [Google Scholar] [CrossRef]
  39. Liang, F.; Paulo, R.; Molina, G.; Clyde, M.A.; Berger, J.O. Mixtures of g priors for Bayesian variable selection. J. Am. Stat. Assoc. 2008, 103, 410–423. [Google Scholar] [CrossRef]
  40. Svensson, L.; Lundberg, M. On posterior distributions for signals in Gaussian noise with unknown covariance matrix. IEEE Trans. Signal Process. 2005, 53, 3554–3571. [Google Scholar] [CrossRef]
  41. Jeffreys, H. Theory of Probability, 3rd ed.; Clarendon Press: Oxford, UK, 1961. [Google Scholar]
  42. Fernández, C.; Ley, E.; Steel, M.F. Benchmark priors for Bayesian model averaging. J. Econom. 2001, 100, 381–427. [Google Scholar] [CrossRef]
  43. Herman, R. A First Course in Differential Equations for Scientists and Engineers; LibreTexts: Davis, CA, USA, 2025. [Google Scholar]
  44. Sharma, S.; Chaudhury, S.; Jayadeva, J.; Bhagat, S. Sparse signal recovery for multiple measurement vectors with temporally correlated entries: A Bayesian perspective. In Proceedings of the 11th Indian Conference on Computer Vision, Graphics and Image Processing, Hyderabad, India, 18–22 December 2018; pp. 1–8. [Google Scholar] [CrossRef]
  45. Alharthi, M.F. Computational methods for estimating the evidence and Bayes factor in SEIR stochastic infectious diseases models featuring asymmetrical dynamics of transmission. Symmetry 2023, 15, 1239. [Google Scholar] [CrossRef]
  46. Wang, L.; Dunson, D.B. Fast Bayesian inference in Dirichlet process mixture models. J. Comput. Graph. Stat. 2011, 20, 196–216. [Google Scholar] [CrossRef]
  47. Rodríguez, C.E.; Walker, S.G. Label switching in Bayesian mixture models: Deterministic relabeling strategies. J. Comput. Graphical Stat. 2014, 23, 25–45. [Google Scholar] [CrossRef]
  48. Russell, N.; Murphy, T.B.; Raftery, A.E. Bayesian model averaging in model-based clustering and density estimation. arXiv 2015, arXiv:1506.09035. [Google Scholar] [CrossRef]
  49. Marbac, M.; Sedki, M.; Patin, T. Variable selection for mixed data clustering: Application in human population genomics. J. Classif. 2020, 37, 124–142. [Google Scholar] [CrossRef]
  50. Scrucca, L.; Fop, M.; Murphy, T.B.; Raftery, A.E. mclust 5: Clustering, classification and density estimation using Gaussian finite mixture models. R J. 2016, 8, 289–317. [Google Scholar] [CrossRef] [PubMed]
  51. Constantinopoulos, C.; Titsias, M.K.; Likas, A. Bayesian feature and model selection for Gaussian mixture models. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1013–1018. [Google Scholar] [CrossRef]
  52. Scrucca, L.; Raftery, A.E. Improved initialisation of model-based clustering using Gaussian hierarchical partitions. Adv. Data Anal. Classif. 2015, 9, 447–460. [Google Scholar] [CrossRef]
  53. Li, J.; Feng, S.; Qu, Y.; Gong, X.; Luo, Y.; Yang, Q.; Zhang, Y.; Dang, K.; Gao, X.; Feng, B. Identifying the primary meteorological factors affecting the growth and development of Tartary buckwheat and a comprehensive landrace evaluation using a multi-environment phenotypic investigation. J. Sci. Food Agric. 2021, 101, 6104–6116. [Google Scholar] [CrossRef]
  54. Zuo, J.; Li, J. Molecular genetic dissection of quantitative trait loci regulating rice grain size. Annu. Rev. Genet. 2014, 48, 99–118. [Google Scholar] [CrossRef]
  55. Javidrad, F.; Nazari, M. A new hybrid particle swarm and simulated annealing stochastic optimization method. Appl. Soft Comput. 2017, 60, 634–654. [Google Scholar] [CrossRef]
  56. Zhang, A.Y.; Zhou, H.H. Theoretical and computational guarantees of mean field variational inference for community detection. Ann. Stat. 2020, 48, 2575–2598. [Google Scholar] [CrossRef]
Figure 1. Workflow for model-based clustering based on the EMA-CF algorithm.
Figure 2. Covariance structure configurations for the six scenarios shown as the covariance graph in (subplot a) and the Gaussian graphical model in (subplot b). Within each large lattice, a colored cell denotes the presence of an edge between a pair of features. The darker the color, the higher the weight of an edge.
Figure 3. Predictive log scores obtained by the clustering algorithms under comparison in the six scenarios with different sample sizes and class overlaps.
Figure 4. Covariance structure detection by EMA-CF and mcgStep in the six scenarios with the class overlap setting ϵ = 1.3. In (subplot a), EMA-CF provides an estimation of covariance structure as the Gaussian graphical model, and mcgStep provides an estimation as the covariance graph. The corresponding standard errors over the twenty replicates are presented in (subplot b).
Figure 5. Estimated patterns of feature importance in the six scenarios with the class overlap setting ϵ = 1.0 using the EMA-CF (subplot a), EMA-naïve (subplot b), MBIC (subplot c), and MICL (subplot d) methods.
Figure 6. Estimated patterns of feature importance in the six scenarios with the class overlap setting ϵ = 1.3 using the EMA-CF (subplot a), EMA-naïve (subplot b), MBIC (subplot c), and MICL (subplot d) methods.
Figure 7. Estimated patterns of feature importance in the six scenarios with the class overlap setting ϵ = 2.0 using the EMA-CF (subplot a), EMA-naïve (subplot b), MBIC (subplot c), and MICL (subplot d) methods.
Figure 8. Covariance structure detection using the EMA-CF method with different settings of κ1* and the mcgStep method for the Iris (subplot a), Olive (subplot b), Wine (subplot c), and Digit (subplot d) datasets. EMA-CF provides an estimation of covariance structure as the Gaussian graphical model, and mcgStep provides an estimation as the covariance graph.
Figure 9. Estimated patterns of feature importance using the EMA-CF, EMA-naïve, MBIC, and MICL methods for the Wine (subplot a) and Digit (subplot b) datasets.
Figure 10. Covariance structure detection by the EMA-CF method with different settings of κ1* and the mcgStep method. In (subplot a), EMA-CF provides an estimation of covariance structure as the Gaussian graphical model, and mcgStep provides an estimation as the covariance graph. The corresponding standard errors over the twenty replicates are presented in (subplot b).
Figure 11. Patterns of feature importance estimated by the EMA-CF, EMA-naïve, MBIC, and MICL methods.
Table 1. The model-based clustering methods under comparison.

Method | Assumption | Purpose: Clustering | Purpose: Feature Selection | Purpose: Covariance Structure Detection
EMA-CF | leaf-augmented naïve Bayes | ✓ | ✓ | ✓
EMA-naïve | naïve Bayes | ✓ | ✓ | –
MBIC | naïve Bayes | ✓ | ✓ | –
MICL | naïve Bayes | ✓ | ✓ | –
mcgStep | covariance graphs | ✓ | – | ✓
mclust-BIC | geometric properties | ✓ | – | –
mclust-RMA | geometric properties | ✓ | – | –
mclust-PMA | geometric properties | ✓ | – | –
Table 2. Classification error rates obtained by the clustering algorithms under comparison in the six scenarios with different sample sizes and class overlaps. Within each class overlap setting ϵ, the four values correspond to n = 25, 50, 75, and 100.

Scenario | Method | ϵ = 1.0 | ϵ = 1.1 | ϵ = 1.3 | ϵ = 1.6 | ϵ = 2.0
Scen.1 | EMA-CF | 0.16 0.10 0.10 0.11 | 0.09 0.09 0.07 0.06 | 0.03 0.03 0.01 0.01 | 0.01 0.00 0.00 0.00 | 0.00 0.00 0.00 0.00
Scen.1 | EMA-naïve | 0.20 0.11 0.11 0.15 | 0.13 0.09 0.07 0.08 | 0.05 0.04 0.03 0.02 | 0.02 0.01 0.01 0.01 | 0.00 0.00 0.00 0.00
Scen.1 | MBIC | 0.31 0.34 0.38 0.45 | 0.30 0.28 0.32 0.39 | 0.16 0.14 0.15 0.18 | 0.05 0.05 0.05 0.03 | 0.02 0.00 0.00 0.00
Scen.1 | MICL | 0.24 0.25 0.32 0.40 | 0.18 0.18 0.28 0.32 | 0.07 0.08 0.08 0.16 | 0.03 0.03 0.01 0.03 | 0.00 0.00 0.00 0.00
Scen.1 | mcgStep | 0.16 0.16 0.16 0.10 | 0.10 0.09 0.13 0.04 | 0.06 0.02 0.01 0.02 | 0.01 0.01 0.00 0.00 | 0.00 0.00 0.00 0.00
Scen.1 | mclust-BIC | 0.29 0.30 0.25 0.32 | 0.22 0.24 0.24 0.22 | 0.18 0.22 0.21 0.15 | 0.11 0.10 0.09 0.11 | 0.05 0.10 0.07 0.00
Scen.1 | mclust-RMA | 0.26 0.34 0.24 0.22 | 0.21 0.30 0.20 0.23 | 0.21 0.25 0.18 0.15 | 0.13 0.18 0.07 0.08 | 0.10 0.16 0.05 0.04
Scen.1 | mclust-PMA | 0.26 0.34 0.24 0.22 | 0.21 0.31 0.20 0.23 | 0.21 0.25 0.17 0.15 | 0.13 0.18 0.07 0.08 | 0.10 0.16 0.05 0.04
Scen.2 | EMA-CF | 0.06 0.11 0.08 0.06 | 0.05 0.05 0.05 0.04 | 0.01 0.03 0.01 0.01 | 0.01 0.01 0.01 0.00 | 0.01 0.00 0.00 0.00
Scen.2 | EMA-naïve | 0.09 0.10 0.09 0.08 | 0.03 0.07 0.05 0.05 | 0.02 0.03 0.02 0.02 | 0.01 0.01 0.01 0.01 | 0.01 0.00 0.00 0.00
Scen.2 | MBIC | 0.25 0.24 0.13 0.26 | 0.17 0.11 0.09 0.19 | 0.08 0.06 0.06 0.05 | 0.03 0.01 0.01 0.01 | 0.01 0.00 0.00 0.00
Scen.2 | MICL | 0.17 0.20 0.13 0.22 | 0.10 0.08 0.09 0.13 | 0.02 0.05 0.04 0.02 | 0.01 0.01 0.01 0.01 | 0.01 0.00 0.00 0.00
Scen.2 | mcgStep | 0.12 0.10 0.13 0.10 | 0.11 0.05 0.05 0.06 | 0.06 0.03 0.02 0.01 | 0.03 0.01 0.01 0.00 | 0.00 0.00 0.00 0.00
Scen.2 | mclust-BIC | 0.20 0.17 0.28 0.28 | 0.19 0.18 0.25 0.21 | 0.15 0.10 0.14 0.14 | 0.02 0.03 0.11 0.06 | 0.00 0.00 0.02 0.04
Scen.2 | mclust-RMA | 0.18 0.18 0.23 0.25 | 0.13 0.20 0.23 0.17 | 0.07 0.08 0.12 0.04 | 0.06 0.05 0.02 0.05 | 0.04 0.02 0.00 0.05
Scen.2 | mclust-PMA | 0.18 0.18 0.23 0.25 | 0.12 0.20 0.23 0.17 | 0.07 0.08 0.12 0.04 | 0.06 0.06 0.02 0.05 | 0.04 0.02 0.00 0.05
Scen.3 | EMA-CF | 0.07 0.08 0.07 0.06 | 0.05 0.05 0.05 0.05 | 0.04 0.03 0.02 0.02 | 0.02 0.01 0.01 0.01 | 0.01 0.00 0.00 0.00
Scen.3 | EMA-naïve | 0.07 0.08 0.07 0.06 | 0.05 0.06 0.05 0.05 | 0.04 0.03 0.02 0.02 | 0.02 0.01 0.01 0.01 | 0.01 0.00 0.00 0.00
Scen.3 | MBIC | 0.20 0.15 0.07 0.06 | 0.16 0.07 0.05 0.05 | 0.09 0.05 0.02 0.02 | 0.04 0.01 0.01 0.01 | 0.01 0.00 0.00 0.00
Scen.3 | MICL | 0.16 0.14 0.07 0.06 | 0.07 0.08 0.05 0.05 | 0.03 0.03 0.02 0.02 | 0.02 0.01 0.01 0.01 | 0.01 0.00 0.00 0.00
Scen.3 | mcgStep | 0.10 0.12 0.13 0.13 | 0.07 0.09 0.09 0.08 | 0.04 0.05 0.05 0.03 | 0.02 0.02 0.01 0.01 | 0.00 0.01 0.00 0.00
Scen.3 | mclust-BIC | 0.14 0.10 0.07 0.06 | 0.09 0.08 0.05 0.04 | 0.06 0.03 0.02 0.02 | 0.03 0.01 0.01 0.00 | 0.00 0.00 0.00 0.00
Scen.3 | mclust-RMA | 0.13 0.10 0.07 0.06 | 0.09 0.08 0.05 0.04 | 0.06 0.03 0.02 0.02 | 0.03 0.01 0.01 0.00 | 0.00 0.00 0.00 0.00
Scen.3 | mclust-PMA | 0.13 0.10 0.07 0.07 | 0.09 0.08 0.05 0.04 | 0.06 0.03 0.02 0.02 | 0.03 0.01 0.01 0.00 | 0.00 0.00 0.00 0.00
Scen.4 | EMA-CF | 0.12 0.06 0.05 0.05 | 0.06 0.04 0.04 0.04 | 0.02 0.02 0.02 0.01 | 0.01 0.00 0.01 0.00 | 0.00 0.00 0.00 0.00
Scen.4 | EMA-naïve | 0.11 0.06 0.05 0.05 | 0.06 0.04 0.04 0.04 | 0.02 0.02 0.02 0.01 | 0.01 0.00 0.01 0.00 | 0.00 0.00 0.00 0.00
Scen.4 | MBIC | 0.25 0.06 0.05 0.05 | 0.18 0.04 0.04 0.04 | 0.02 0.02 0.02 0.01 | 0.01 0.00 0.01 0.00 | 0.00 0.00 0.00 0.00
Scen.4 | MICL | 0.14 0.06 0.05 0.05 | 0.10 0.04 0.04 0.04 | 0.02 0.02 0.02 0.01 | 0.01 0.00 0.01 0.00 | 0.00 0.00 0.00 0.00
Scen.4 | mcgStep | 0.09 0.09 0.06 0.07 | 0.07 0.06 0.04 0.05 | 0.03 0.03 0.02 0.02 | 0.00 0.01 0.00 0.01 | 0.00 0.00 0.00 0.00
Scen.4 | mclust-BIC | 0.16 0.05 0.04 0.05 | 0.13 0.03 0.03 0.04 | 0.06 0.02 0.01 0.02 | 0.05 0.00 0.00 0.00 | 0.02 0.00 0.00 0.00
Scen.4 | mclust-RMA | 0.15 0.06 0.05 0.05 | 0.11 0.04 0.04 0.03 | 0.04 0.01 0.01 0.02 | 0.00 0.00 0.00 0.00 | 0.00 0.00 0.00 0.00
Scen.4 | mclust-PMA | 0.14 0.06 0.05 0.05 | 0.11 0.04 0.04 0.03 | 0.04 0.01 0.01 0.02 | 0.00 0.00 0.00 0.00 | 0.00 0.00 0.00 0.00
Scen.5 | EMA-CF | 0.12 0.07 0.07 0.05 | 0.07 0.06 0.04 0.04 | 0.04 0.03 0.02 0.02 | 0.02 0.01 0.00 0.00 | 0.01 0.00 0.00 0.00
Scen.5 | EMA-naïve | 0.12 0.08 0.06 0.07 | 0.09 0.06 0.04 0.05 | 0.03 0.02 0.02 0.03 | 0.01 0.01 0.01 0.01 | 0.00 0.00 0.00 0.00
Scen.5 | MBIC | 0.23 0.13 0.06 0.07 | 0.20 0.07 0.04 0.05 | 0.07 0.03 0.02 0.03 | 0.01 0.01 0.01 0.01 | 0.00 0.00 0.00 0.00
Scen.5 | MICL | 0.16 0.11 0.06 0.07 | 0.09 0.07 0.04 0.05 | 0.02 0.03 0.02 0.03 | 0.01 0.01 0.01 0.01 | 0.00 0.00 0.00 0.00
Scen.5 | mcgStep | 0.12 0.11 0.08 0.06 | 0.10 0.07 0.05 0.02 | 0.05 0.04 0.01 0.01 | 0.02 0.01 0.00 0.00 | 0.01 0.00 0.00 0.00
Scen.5 | mclust-BIC | 0.21 0.31 0.29 0.26 | 0.16 0.21 0.24 0.16 | 0.10 0.16 0.17 0.13 | 0.08 0.05 0.06 0.04 | 0.05 0.03 0.02 0.04
Scen.5 | mclust-RMA | 0.21 0.30 0.30 0.25 | 0.16 0.20 0.24 0.16 | 0.10 0.16 0.17 0.13 | 0.08 0.05 0.06 0.04 | 0.05 0.03 0.02 0.04
Scen.5 | mclust-PMA | 0.20 0.30 0.29 0.25 | 0.16 0.21 0.24 0.16 | 0.10 0.16 0.17 0.13 | 0.08 0.05 0.06 0.04 | 0.05 0.03 0.02 0.04
Scen.6 | EMA-CF | 0.09 0.06 0.05 0.05 | 0.05 0.03 0.02 0.03 | 0.02 0.01 0.01 0.01 | 0.00 0.00 0.01 0.00 | 0.00 0.00 0.00 0.00
Scen.6 | EMA-naïve | 0.10 0.08 0.05 0.04 | 0.06 0.04 0.03 0.03 | 0.03 0.02 0.01 0.01 | 0.01 0.00 0.00 0.00 | 0.00 0.00 0.00 0.00
Scen.6 | MBIC | 0.20 0.13 0.08 0.06 | 0.10 0.05 0.03 0.03 | 0.07 0.02 0.01 0.01 | 0.01 0.00 0.00 0.00 | 0.00 0.00 0.00 0.00
Scen.6 | MICL | 0.12 0.08 0.07 0.04 | 0.04 0.04 0.03 0.03 | 0.02 0.01 0.01 0.01 | 0.01 0.00 0.00 0.00 | 0.00 0.00 0.00 0.00
Scen.6 | mcgStep | 0.10 0.09 0.04 0.00 | 0.07 0.05 0.02 0.00 | 0.03 0.01 0.00 0.00 | 0.00 0.00 0.00 0.00 | 0.00 0.00 0.00 0.00
Scen.6 | mclust-BIC | 0.16 0.15 0.13 0.10 | 0.14 0.15 0.18 0.08 | 0.06 0.09 0.05 0.04 | 0.03 0.03 0.00 0.08 | 0.03 0.00 0.00 0.00
Scen.6 | mclust-RMA | 0.15 0.15 0.13 0.10 | 0.14 0.15 0.18 0.08 | 0.06 0.11 0.05 0.04 | 0.03 0.03 0.00 0.08 | 0.03 0.00 0.00 0.00
Scen.6 | mclust-PMA | 0.16 0.15 0.13 0.10 | 0.13 0.14 0.18 0.08 | 0.06 0.11 0.05 0.04 | 0.03 0.03 0.00 0.08 | 0.03 0.00 0.00 0.00
Table 3. Covariance structure detection by mclust-BIC in the six scenarios with the class overlap setting ϵ = 1.3.

Scenario | n = 25 | n = 50 | n = 75 | n = 100
Scen.1 | EEE 1 (6) | EEE (13) | EEE (12) | EEE (13)
Scen.2 | EII 2 (7) | EII (9) | EEE (14) | EEE (17)
Scen.3 | EII (14) | EII (17) | EII (17) | EII (19)
Scen.4 | EII (17) | EII (18) | EII (18) | EII (20)
Scen.5 | EII (14) | EEE (7) | EEE (13) | EEE (15)
Scen.6 | EII (13) | EII (11) | EEE (14) | EEE (15)
1 This is a model from the GPCM family with equal volume, shape, and orientation parameters across the components. 2 This is a model from the GPCM family with equal volume and shape parameters across the components but no orientation parameters.
Table 4. Basic information for the four benchmark datasets.

Dataset | Iris | Olive | Wine | Digit
K | 3 | 3 | 3 | 2
d | 4 | 8 | 27 | 64
n | 150 | 572 | 178 | 200
Table 5. Classification error rates obtained using the clustering methods under comparison for the four benchmark datasets.

Method | Iris | Olive | Wine | Digit
EMA-CF-1 | 0.040 | 0.245 | 0.045 | 0.060
EMA-CF-2 | 0.033 | 0.245 | 0.045 | 0.060
EMA-CF-3 | 0.033 | 0.066 | 0.045 | 0.060
EMA-naïve | 0.100 | 0.245 | 0.045 | 0.060
MBIC | 0.393 | 0.453 | 0.056 | 0.090
MICL | 0.393 | 0.453 | 0.056 | 0.100
mcgStep | 0.087 | 0.180 | 0.236 | 0.140
mclust-BIC | 0.033 | 0.390 | 0.056 | 0.495
mclust-RMA | 0.033 | 0.339 | 0.056 | 0.495
mclust-PMA | 0.033 | 0.339 | 0.056 | 0.495
Table 6. Classification error rates and predictive log scores (PLS) obtained using the clustering methods under comparison.

Method | Error | PLS
EMA-CF-1 | 0.108 (0.035) | 6064 (259)
EMA-CF-2 | 0.102 (0.044) | 5917 (315)
EMA-CF-3 | 0.080 (0.050) | 5294 (290)
EMA-naïve | 0.104 (0.034) | 6660 (752)
MBIC | 0.108 (0.034) | 7427 (1411)
MICL | 0.109 (0.034) | 7956 (1425)
mcgStep | 0.248 (0.150) | 6755 (1203)
mclust-BIC | 0.234 (0.232) | 6307 (1226)
mclust-RMA | 0.234 (0.232) | 6453 (2391)
mclust-PMA | 0.234 (0.232) | 5763 (33)
Table 7. Yield and the percentile of yield of the selected landraces in environments E1 and E2.

Landrace | E1: Y (kg/ha) | E1: Percentile (%) | E2: Y (kg/ha) | E2: Percentile (%)
XZ-1 | 470.6 | 61.5 | 177.1 | 1.0
GZ-1 | 583.2 | 82.0 | 398.0 | 4.5
XZ-12 | 273.5 | 7.0 | 370.0 | 3.5
SNX-15 | 474.4 | 62.5 | 992.0 | 67.0
SC-8 | 587.8 | 82.5 | 1038.6 | 73.5
GZ-32 | 909.3 | 98.0 | 1044.2 | 74.5
SNX-47 | 486.9 | 65.5 | 457.0 | 7.5