Article

Analytical Bounds for Mixture Models in Cauchy–Stieltjes Kernel Families

Fahad Alsharari, Raouf Fakhfakh and Fatimah Alshahrani
1 Department of Mathematics, College of Science, Jouf University, P.O. Box 2014, Sakaka 72311, Saudi Arabia
2 Department of Mathematical Sciences, College of Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(3), 381; https://doi.org/10.3390/math13030381
Submission received: 15 December 2024 / Revised: 20 January 2025 / Accepted: 22 January 2025 / Published: 24 January 2025
(This article belongs to the Section D1: Probability and Statistics)

Abstract

Mixture models are widely used in mathematical statistics and theoretical probability. However, the mixture probability distribution rarely has an explicit formula. One must then decide whether to keep the parent probability distribution or to obtain an approximation of the mixture probability distribution. In such cases, it is essential to estimate or evaluate the distance between a mixture probability distribution and its parent probability distribution. On the other hand, orthogonal polynomials offer a versatile mathematical tool for approximating, fitting, and analyzing mixture models, facilitating more accurate and efficient modeling in statistics and data science. This article considers mixture models in Cauchy–Stieltjes Kernel (CSK) families. Using a suitable basis of polynomials, we obtain an expression for the distance in the $L_2$-norm between the mixture probability distribution and its parent probability distribution belonging to a given CSK family. For the distance between the corresponding distribution functions, bounds are derived in the $L_1$-norm. The results are illustrated by examples from quadratic CSK families.

1. Introduction

In statistical modeling and data analysis, mixture models play a pivotal role in capturing complex data structures where the underlying population is assumed to be heterogeneous. However, the mixture probability distribution rarely has an explicit formula. One must then choose either to keep the parent probability distribution (i.e., the underlying distribution from which the components or sub-distributions of the mixture are drawn) or to obtain an approximation of the mixture probability distribution. In such cases, it is very important to approximate or evaluate the distance between a mixture probability distribution and its parent probability distribution. The literature has therefore focused on establishing bounds on this distance, for several notions of distance: the uniform distance in [1], the $L_1$-norm in [2], and the difference between distribution functions in [3,4]. Orthogonal polynomials, moreover, offer a versatile mathematical tool for approximating, fitting, and analyzing mixture models, facilitating more accurate and efficient modeling in statistics and data science. They help simplify computations in mixture models: by exploiting orthogonality, the polynomial terms can be computed efficiently, reducing the complexity of parameter estimation in the mixture model.
On the other hand, the study of mixture models within a family of probability measures is significant because it enables the flexible modeling of complicated data derived from several underlying distributions. Many real-world settings generate data from multiple sources or latent groups rather than from a single process or distribution. Mixture models capture this heterogeneity by merging several probability distributions, each describing a distinct portion of the data. In this paper, based on orthogonal polynomials, we provide analytical bounds for mixture models in Cauchy–Stieltjes Kernel (CSK) families. To present the purpose of this article properly, we first introduce some basic concepts about CSK families and their associated orthogonal polynomials.
The setting of CSK families in free probability was introduced recently. It concerns families of probabilities defined similarly to natural exponential families, with the Cauchy–Stieltjes kernel $(1 - \theta\zeta)^{-1}$ replacing the exponential kernel $\exp(\theta\zeta)$. Denote by $\mathcal{P}_c$ the set of (non-degenerate) compactly supported probabilities on the real line. For $\rho \in \mathcal{P}_c$, the function
$$M_\rho(\theta) = \int \frac{1}{1 - \theta\zeta}\, \rho(d\zeta)$$
is defined for all $\theta \in (\theta_-^\rho, \theta_+^\rho)$, where $1/\theta_+^\rho = \max\{\sup \operatorname{supp}(\rho), 0\}$ and $1/\theta_-^\rho = \min\{\inf \operatorname{supp}(\rho), 0\}$.
The family of probabilities
$$\mathcal{K}(\rho) = \left\{ P_{(\theta,\rho)}(d\zeta) = \frac{1}{M_\rho(\theta)\,(1 - \theta\zeta)}\, \rho(d\zeta) \;:\; \theta \in (\theta_-^\rho, \theta_+^\rho) \right\}$$
is called the CSK family induced by $\rho$.
Following [5], the mean function $\theta \mapsto k_\rho(\theta) = \int \zeta\, P_{(\theta,\rho)}(d\zeta)$ is strictly increasing on $(\theta_-^\rho, \theta_+^\rho)$. The image of $(\theta_-^\rho, \theta_+^\rho)$ under $k_\rho(\cdot)$ is the mean domain of $\mathcal{K}(\rho)$, denoted $(m_-^\rho, m_+^\rho)$. Denote by $\psi_\rho$ the inverse function of $k_\rho(\cdot)$. Writing, for $m \in (m_-^\rho, m_+^\rho)$, $Q_{(m,\rho)}(d\zeta) = P_{(\psi_\rho(m),\rho)}(d\zeta)$, we obtain the mean re-parametrization of $\mathcal{K}(\rho)$ as
$$\mathcal{K}(\rho) = \{ Q_{(m,\rho)}(d\zeta) \;:\; m \in (m_-^\rho, m_+^\rho) \}.$$
Define
$$B = B(\rho) = \max\{\sup \operatorname{supp}(\rho), 0\} = 1/\theta_+^\rho \in [0, +\infty)$$
and
$$A = A(\rho) = \min\{\inf \operatorname{supp}(\rho), 0\} = 1/\theta_-^\rho \in (-\infty, 0].$$
It is shown in [6] that
$$m_-^\rho = A - \lim_{z \to A^-} \frac{1}{G_\rho(z)} \quad \text{and} \quad m_+^\rho = B - \lim_{z \to B^+} \frac{1}{G_\rho(z)},$$
where
$$G_\rho(z) = \int \frac{1}{z - \zeta}\, \rho(d\zeta), \qquad z \in \mathbb{C} \setminus \operatorname{supp}(\rho)$$
is the Cauchy transform of $\rho$.
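The following numerical sketch is ours, not part of the paper: it evaluates $G_\rho$ for the standard semicircle law by quadrature, compares it with the known closed form $G(z) = (z - \sqrt{z^2 - 4})/2$ (valid for $z > 2$), and recovers the mean-domain endpoints from the formulas above; the function names are ours.

```python
import numpy as np
from scipy.integrate import quad

# Standard semicircle density on [-2, 2] (mean 0, variance 1).
def sc(x):
    return np.sqrt(4.0 - x**2) / (2.0 * np.pi)

def G(z):
    """Cauchy transform G_rho(z) = int 1/(z - t) rho(dt), by quadrature."""
    return quad(lambda t: sc(t) / (z - t), -2.0, 2.0, limit=200)[0]

# Closed form for z > 2: G(z) = (z - sqrt(z^2 - 4)) / 2.
print(G(3.0), (3.0 - np.sqrt(5.0)) / 2.0)          # both ~0.381966

# Here A = -2 and B = 2, with G(2+) = 1 and G(-2-) = -1, so the mean
# domain is (-1, 1); the epsilon-shift reproduces this up to quadrature error.
eps = 1e-4
print(2.0 - 1.0 / G(2.0 + eps), -2.0 - 1.0 / G(-2.0 - eps))   # ~(1, -1)
```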
The map
$$m \mapsto V_\rho(m) = \int (\zeta - m)^2\, Q_{(m,\rho)}(d\zeta)$$
is called the variance function (VF) of $\mathcal{K}(\rho)$; see [5]. An interesting fact is that the governing measure $\rho$ is characterized by $V_\rho(\cdot)$ together with the first moment of $\rho$ (denoted $m_1^\rho$): if we set
$$\varpi = \varpi(m) = m + \frac{V_\rho(m)}{m - m_1^\rho},$$
then the Cauchy transform satisfies
$$G_\rho(\varpi) = \frac{m - m_1^\rho}{V_\rho(m)}.$$
In addition, $Q_{(m,\rho)}(d\zeta) = h_\rho(\zeta, m)\, \rho(d\zeta)$ with
$$h_\rho(\zeta, m) := \frac{V_\rho(m)}{V_\rho(m) + (m - m_1^\rho)(m - \zeta)}.$$
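As a quick sanity check on this kernel representation (our own sketch, using the semicircle law, whose variance function is the constant $V(m) = 1$ and whose first moment is $m_1 = 0$), the following code verifies numerically that $h_\rho(\cdot, m)$ integrates to 1 against $\rho$ and that $Q_{(m,\rho)}$ has mean $m$:

```python
import numpy as np
from scipy.integrate import quad

def sc(x):                                  # standard semicircle density
    return np.sqrt(4.0 - x**2) / (2.0 * np.pi)

def h(z, m, V=lambda t: 1.0, m1=0.0):
    """Density h_rho(z, m) = V(m) / (V(m) + (m - m1)(m - z))."""
    return V(m) / (V(m) + (m - m1) * (m - z))

m = 0.4                                     # a point in the mean domain (-1, 1)
mass = quad(lambda z: h(z, m) * sc(z), -2.0, 2.0)[0]
mean = quad(lambda z: z * h(z, m) * sc(z), -2.0, 2.0)[0]
print(mass, mean)                           # ~1.0 and ~0.4, as expected
```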
We now come to the polynomials associated with CSK families. Bryc [5] characterized the class of quadratic CSK families, those whose VF is a polynomial of degree at most 2 in the mean $m$; this class consists of the free Meixner laws. Several results involving orthogonal polynomials have been proved for the quadratic class of CSK families. In particular, [7] studies the sequence of polynomials associated with a CSK family and provides new versions of the Feinsilver and Meixner characterizations based on the orthogonality of polynomials; these versions encompass the quadratic class of CSK families. For completeness, we recall the CSK-version of the Feinsilver characterization; see ([7], Theorem 3.2).
Theorem 1. 
Let $\mathcal{K}(\rho) = \{ Q_{(m,\rho)}(d\zeta) : m \in (m_-^\rho, m_+^\rho) \}$ be the CSK family induced by $\rho \in \mathcal{P}_c$ with mean $m_1^\rho = 0$. Assume that $V_\rho(\cdot)$ is analytic near 0 and $V_\rho(0) > 0$. Define the polynomials $P_n(\cdot)$, $n = 0, 1, 2, \ldots$, by
$$P_n(\zeta) = \frac{1}{n!} \left. \frac{\partial^n}{\partial m^n} h_\rho(\zeta, m) \right|_{m=0}. \qquad (9)$$
Then, the following assertions are equivalent.
(i) The polynomials $(P_n)_{n = 0, 1, 2, \ldots}$ are $\rho$-orthogonal.
(ii) $\mathcal{K}(\rho)$ is a quadratic CSK family.
(iii) There exist $a_0 > 0$, $a_1 \in \mathbb{R}$, and $a_2 > -1$ such that
$$\zeta\, P_n(\zeta) = (1 + a_2)\, P_{n-1}(\zeta) + a_1\, P_n(\zeta) + a_0\, P_{n+1}(\zeta).$$
In addition, $V_\rho(m) = a_0 + a_1 m + a_2 m^2$.
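To see Theorem 1 in action, here is a small sketch of ours for the semicircle case $a_0 = 1$, $a_1 = a_2 = 0$, where $V_\mu(m) = 1$, $P_0 = 1$, $P_1(\zeta) = \zeta$, and the three-term recurrence can be applied from $n = 1$ onward (for $a_2 \neq 0$ the first recurrence step needs separate care). It generates $P_2, \ldots, P_5$ from (iii) and checks their $\mu$-orthogonality numerically:

```python
import numpy as np
from numpy.polynomial import polynomial as P
from scipy.integrate import quad

def sc(x):                                   # semicircle density, the measure mu
    return np.sqrt(4.0 - x**2) / (2.0 * np.pi)

# P_{n+1} = (zeta*P_n - a1*P_n - (1 + a2)*P_{n-1}) / a0, semicircle case.
a0, a1, a2 = 1.0, 0.0, 0.0
polys = [np.array([1.0]), np.array([0.0, 1.0])]      # P_0 = 1, P_1 = zeta
for n in range(1, 5):
    nxt = P.polysub(P.polymulx(polys[n]), a1 * polys[n])
    nxt = P.polysub(nxt, (1.0 + a2) * polys[n - 1]) / a0
    polys.append(nxt)

def inner(p, q):                             # L2(mu) inner product
    return quad(lambda x: P.polyval(x, p) * P.polyval(x, q) * sc(x), -2, 2)[0]

gram = [[round(inner(p, q), 6) for q in polys] for p in polys]
print(np.array(gram))                        # ~identity: the P_n are mu-orthonormal
```

In this case, the $P_n$ are the Tchebychev polynomials of the second kind rescaled to $[-2, 2]$, which is consistent with the semicircle example of Section 3.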
We can now describe the purpose of this article in more detail. We study mixtures of laws from the perspective of compactly supported CSK families and quantify the distance of a mixture law from its parent law in a CSK family. Mixtures of the form $M_{(\mu,\sigma)}(\zeta) = \int h_\mu(\zeta, m)\, \sigma(dm)$ are considered, where $\sigma$ is a given probability measure and $h_\mu(\cdot, m)$ represents the density of a parent probability measure with mean $m$ belonging to the CSK family generated by a (non-degenerate) compactly supported probability measure $\mu$. The objective is to bound the distance between $M_{(\mu,\sigma)}(\cdot)$ and $h_\mu(\cdot, m)$ for a fixed $m$ in the mean domain of the corresponding CSK family. We investigate the polynomial expansion of the density $h_\mu(\zeta, m)$ and deduce expansions of the mixture density $M_{(\mu,\sigma)}(\cdot)$. For quadratic CSK families, the difference between $M_{(\mu,\sigma)}(\cdot)$ and a parent density $h_\mu(\cdot, m)$ is expressed by means of orthogonal polynomials in the $L_1$- and $L_2$-norms. We also give bounds for the distance between the mixture distribution function and its parent distribution function.

2. Main Results

Consider $\mathcal{K}(\mu) = \{ Q_{(m,\mu)}(d\zeta) : m \in (m_-^\mu, m_+^\mu) \}$, the CSK family induced by $\mu \in \mathcal{P}_c$ with $m_1^\mu = 0$. According to [7], there exists $r > 0$ such that, for all $m \in (-r, r)$,
$$h_\mu(\zeta, m) = \sum_{n=0}^{+\infty} m^n\, P_n(\zeta), \qquad (10)$$
where $P_n(\cdot)$, $n = 0, 1, 2, \ldots$, are the polynomials defined by (9).
Throughout the paper, for a given $\mu \in \mathcal{P}_c$ with $m_1^\mu = 0$, we consider a mixture of the form
$$M_{(\mu,\sigma)}(\zeta) = \int h_\mu(\zeta, m)\, \sigma(dm), \qquad (11)$$
where $\sigma$ is a real probability measure and $E_\sigma(\cdot)$ denotes the expectation with respect to $\sigma$. All moments of $\sigma$ are assumed finite: for every integer $p$, $E_\sigma(|m|^p) < \infty$. Let us first record a direct consequence of (10).
Lemma 1. 
Let $M_{(\mu,\sigma)}(\cdot)$ be the mixture density defined by (11). Suppose that $\operatorname{supp}(\sigma) \subset (-r, r)$. If the series $\sum_{n=0}^{+\infty} E_\sigma(|m|^n)\, |P_n(\zeta)|$ converges, then we have
$$M_{(\mu,\sigma)}(\zeta) = \sum_{n=0}^{+\infty} E_\sigma(m^n)\, P_n(\zeta). \qquad (12)$$
Proof. 
Combining (11) with (10), we obtain
$$M_{(\mu,\sigma)}(\zeta) = \int h_\mu(\zeta, m)\, \sigma(dm) = \int \sum_{n=0}^{+\infty} m^n\, P_n(\zeta)\, \sigma(dm) = \sum_{n=0}^{+\infty} E_\sigma(m^n)\, P_n(\zeta).$$
For the interchange of series and integral, see ([8], Proposition 3.1). □
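As an illustration of Lemma 1 (ours, not from the paper), take $\mu$ the semicircle law, for which $h_\mu(\zeta, m) = 1/(1 + m(m - \zeta))$ and the $P_n$ are the rescaled Tchebychev polynomials of the second kind, and $\sigma$ uniform on $[0, \frac{1}{2}]$, so that $E_\sigma(m^n) = \frac{(1/2)^n}{n+1}$; the direct integral (11) and the truncated series (12) then agree numerically:

```python
import numpy as np
from numpy.polynomial import polynomial as P
from scipy.integrate import quad

def h(z, m):                                    # semicircle kernel, V(m) = 1
    return 1.0 / (1.0 + m * (m - z))

E = lambda n: 0.5**n / (n + 1)                  # moments of sigma = Uniform[0, 1/2]

z = 0.7
direct = quad(lambda m: 2.0 * h(z, m), 0.0, 0.5)[0]   # (11); density of sigma is 2

# (12), truncated: P_n generated by the semicircle recurrence of Theorem 1.
polys = [np.array([1.0]), np.array([0.0, 1.0])]
for n in range(1, 20):
    polys.append(P.polysub(P.polymulx(polys[n]), polys[n - 1]))
series = sum(E(n) * P.polyval(z, polys[n]) for n in range(len(polys)))
print(direct, series)                           # both ~1.102
```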
Consequently, we obtain the following expansion of the difference between the parent and the mixture density.
Proposition 1. 
Let $M_{(\mu,\sigma)}(\cdot)$ be the mixture density defined by (11) and let $P_n(\cdot)$, $n = 0, 1, 2, \ldots$, be defined by (9). If $\operatorname{supp}(\sigma) \subset (-r, r)$, then for all $t \in (-r, r)$ we have
$$M_{(\mu,\sigma)}(\zeta) - h_\mu(\zeta, t) = \sum_{n=1}^{+\infty} \left\{ E_\sigma(m^n) - t^n \right\} P_n(\zeta).$$
Proof. 
Combining (10) and (12), we obtain
$$M_{(\mu,\sigma)}(\zeta) - h_\mu(\zeta, t) = \sum_{n=0}^{+\infty} E_\sigma(m^n)\, P_n(\zeta) - \sum_{n=0}^{+\infty} t^n\, P_n(\zeta) = \sum_{n=1}^{+\infty} \left\{ E_\sigma(m^n) - t^n \right\} P_n(\zeta),$$
where the $n = 0$ terms cancel since $E_\sigma(m^0) = t^0 = 1$. □
Remark 1. 
If we take $t = 0$, then $h_\mu(\zeta, 0) = 1$ and Proposition 1 gives
$$M_{(\mu,\sigma)}(\zeta) - 1 = \sum_{n=1}^{+\infty} E_\sigma(m^n)\, P_n(\zeta).$$
Furthermore, for the choice $t = 0 = E_\sigma(m)$, we obtain
$$M_{(\mu,\sigma)}(\zeta) - 1 = \operatorname{Var}_\sigma(m)\, P_2(\zeta) + \sum_{n=3}^{+\infty} E_\sigma(m^n)\, P_n(\zeta),$$
where $\operatorname{Var}_\sigma(m) = E_\sigma(m^2) - (E_\sigma(m))^2$ denotes the variance of $\sigma$.
Denote by $H_{(\mu,\sigma)}(\cdot)$ and $F_\mu(\cdot, m)$ the distribution functions associated with $M_{(\mu,\sigma)}(\cdot)$ and $h_\mu(\cdot, m)$, respectively, that is,
$$H_{(\mu,\sigma)}(x) = \int_{-\infty}^{x} M_{(\mu,\sigma)}(\zeta)\, \mu(d\zeta) \quad \text{and} \quad F_\mu(x, m) = \int_{-\infty}^{x} h_\mu(\zeta, m)\, \mu(d\zeta).$$
We first provide a general result for the distribution functions in a CSK family.
Proposition 2. 
Let $M_{(\mu,\sigma)}(\cdot)$ be the mixture density defined by (11) and let $(P_n(\cdot))_{n \in \mathbb{N}}$ be defined by (9). If $\operatorname{supp}(\sigma) \subset (-r, r)$, then for all $t \in (-r, r)$ we have
$$\left| H_{(\mu,\sigma)}(x) - F_\mu(x, t) \right| \leq \sum_{n=1}^{+\infty} \left| E_\sigma(m^n) - t^n \right| \left| \int_{-\infty}^{x} P_n(\zeta)\, \mu(d\zeta) \right|.$$
Proof. 
From Proposition 1, one sees that
$$\left| H_{(\mu,\sigma)}(x) - F_\mu(x, t) \right| = \left| \int_{-\infty}^{x} \left( M_{(\mu,\sigma)}(\zeta) - h_\mu(\zeta, t) \right) \mu(d\zeta) \right| = \left| \int_{-\infty}^{x} \sum_{n=1}^{+\infty} \left\{ E_\sigma(m^n) - t^n \right\} P_n(\zeta)\, \mu(d\zeta) \right| \leq \sum_{n=1}^{+\infty} \left| E_\sigma(m^n) - t^n \right| \left| \int_{-\infty}^{x} P_n(\zeta)\, \mu(d\zeta) \right|.$$
□
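A numerical check of Proposition 2 (our sketch, again in the semicircle setting with $\sigma$ uniform on $[0, \frac{1}{2}]$ and $t = 0$): both sides of the bound are computed below, the left side directly and the right side from a truncated series.

```python
import numpy as np
from numpy.polynomial import polynomial as P
from scipy.integrate import quad

def sc(x):
    return np.sqrt(4.0 - x**2) / (2.0 * np.pi)

def M(z):                                       # mixture density against mu
    return quad(lambda m: 2.0 / (1.0 + m * (m - z)), 0.0, 0.5)[0]

x = 0.5
H = quad(lambda z: M(z) * sc(z), -2.0, x)[0]    # mixture distribution function
F = quad(sc, -2.0, x)[0]                        # parent with t = 0, since h = 1
lhs = abs(H - F)

# Truncated right-hand side: E_sigma(m^n) = (1/2)^n/(n+1), P_n as in Theorem 1.
polys = [np.array([1.0]), np.array([0.0, 1.0])]
for n in range(1, 20):
    polys.append(P.polysub(P.polymulx(polys[n]), polys[n - 1]))
rhs = sum((0.5**n / (n + 1))
          * abs(quad(lambda z, c=polys[n]: P.polyval(z, c) * sc(z), -2.0, x)[0])
          for n in range(1, 20))
print(lhs, rhs)                                 # lhs <= rhs, as guaranteed
```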
We now provide some results related to quadratic CSK families.
Theorem 2. 
Assume that $\mathcal{K}(\mu) = \{ Q_{(m,\mu)}(d\zeta) : m \in (m_-^\mu, m_+^\mu) \}$ is a quadratic CSK family, and let $\|\cdot\|$ denote the norm of $L_2(\mu)$. Under the hypotheses of Proposition 1, if
$$\sum_{n=0}^{+\infty} \left| E_\sigma(m^n) \right| \left\| P_n(\cdot) \right\| \qquad (15)$$
converges, then we have
$$\left\| M_{(\mu,\sigma)}(\cdot) - h_\mu(\cdot, t) \right\|^2 = \sum_{n=1}^{+\infty} \left( E_\sigma(m^n) - t^n \right)^2 \left\| P_n(\cdot) \right\|^2. \qquad (16)$$
Moreover, if $t = 0$, we obtain
$$\left\| M_{(\mu,\sigma)}(\cdot) - 1 \right\|^2 = \sum_{n=1}^{+\infty} \left( E_\sigma(m^n) \right)^2 \left\| P_n(\cdot) \right\|^2. \qquad (17)$$
Proof. 
From Proposition 1, we have
$$\left\| M_{(\mu,\sigma)}(\cdot) - h_\mu(\cdot, t) \right\|^2 = \int \left( \sum_{n=1}^{+\infty} \left\{ E_\sigma(m^n) - t^n \right\} P_n(\zeta) \right)^2 \mu(d\zeta). \qquad (18)$$
The existence of this series is guaranteed by (15). Relation (18) can be written as
$$\left\| M_{(\mu,\sigma)}(\cdot) - h_\mu(\cdot, t) \right\|^2 = \int \sum_{n=1}^{+\infty} \left\{ E_\sigma(m^n) - t^n \right\} P_n(\zeta) \sum_{k=1}^{+\infty} \left\{ E_\sigma(m^k) - t^k \right\} P_k(\zeta)\, \mu(d\zeta). \qquad (19)$$
Since we deal with quadratic CSK families, the polynomials $P_n(\zeta)$, $n = 0, 1, 2, \ldots$, are $\mu$-orthogonal by Theorem 1, so the cross terms vanish and Equation (19) reduces to (16). □
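To illustrate (17) numerically (our own sketch): for the semicircle law the $P_n$ are orthonormal in $L_2(\mu)$, i.e. $\|P_n(\cdot)\| = 1$, so with $\sigma$ uniform on $[0, \frac{1}{2}]$ the right-hand side collapses to $\sum_{n \geq 1} \left( \frac{(1/2)^n}{n+1} \right)^2$, and the direct computation of the left-hand side matches.

```python
import numpy as np
from scipy.integrate import quad

def sc(x):
    return np.sqrt(4.0 - x**2) / (2.0 * np.pi)

def M(z):                                        # mixture density against mu
    return quad(lambda m: 2.0 / (1.0 + m * (m - z)), 0.0, 0.5)[0]

# Left side of (17): squared L2(mu) distance, computed directly.
lhs = quad(lambda z: (M(z) - 1.0)**2 * sc(z), -2.0, 2.0)[0]

# Right side of (17): here ||P_n|| = 1 and E_sigma(m^n) = (1/2)^n/(n+1).
rhs = sum((0.5**n / (n + 1))**2 for n in range(1, 40))
print(lhs, rhs)                                  # both ~0.0706
```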
Proposition 3. 
Assume that $\mathcal{K}(\mu) = \{ Q_{(m,\mu)}(d\zeta) : m \in (m_-^\mu, m_+^\mu) \}$ is a quadratic CSK family. Under the hypotheses of Proposition 1, we have
$$\int \left| M_{(\mu,\sigma)}(\zeta) - h_\mu(\zeta, t) \right| \mu(d\zeta) \leq \sum_{n=1}^{+\infty} \left| E_\sigma(m^n) - t^n \right| \int \left| P_n(\zeta) \right| \mu(d\zeta).$$
Moreover, if $t = 0$, we obtain
$$\int \left| M_{(\mu,\sigma)}(\zeta) - 1 \right| \mu(d\zeta) \leq \sum_{n=1}^{+\infty} \left| E_\sigma(m^n) \right| \int \left| P_n(\zeta) \right| \mu(d\zeta).$$
Proof. 
By Proposition 1,
$$\int \left| M_{(\mu,\sigma)}(\zeta) - h_\mu(\zeta, t) \right| \mu(d\zeta) = \int \left| \sum_{n=1}^{+\infty} \left( E_\sigma(m^n) - t^n \right) P_n(\zeta) \right| \mu(d\zeta) \leq \sum_{n=1}^{+\infty} \left| E_\sigma(m^n) - t^n \right| \int \left| P_n(\zeta) \right| \mu(d\zeta).$$
□
In prior studies, the distance between a mixture and its parent law was often explored qualitatively or under specific conditions, but not always with concrete bounds. The contribution here is the establishment of quantitative bounds that allow a more precise understanding of how far a mixture law can be from its parent law. Traditionally, this distance might be analyzed using moment-based methods [9,10] or distances such as the total variation [11,12] or the Kullback–Leibler divergence [13,14]. Using orthogonal polynomials introduces a new layer of precision by representing both the mixture and the parent distribution in terms of their polynomial expansions. This allows a more detailed study of how the mixture deviates from the parent distribution across different orders of moments. Orthogonal polynomials can sharpen the bounds on the distance between the mixture law and the parent law; in many cases, they offer a more refined approach than traditional methods, allowing exact or tighter bounds in the analysis of distances. This results in stronger mathematical guarantees for approximating or bounding the behavior of mixture distributions, especially when the parent distribution belongs to a CSK family.

3. Examples

In this section, the previous results are illustrated for semicircle and free Poisson mixtures. In free probability, the semicircle law is the free analog of the Gaussian law in classical probability; it arises in random matrix theory as the limiting distribution of the eigenvalues of certain random matrices. The free Poisson law is the free analog of the classical Poisson distribution; it describes the asymptotic behavior of the singular values of large rectangular random matrices and is important for understanding complex interactions in systems such as quantum mechanics and random matrices.
We recall from [15] a technical result which is useful for the following examples:
Lemma 2. 
(i) For $n = 0, 1, 2, \ldots$, the Tchebychev polynomials of the first kind satisfy
$$|T_n(x)| \leq 1, \qquad \forall x \in [-2, 2].$$
(ii) For $n = 0, 1, 2, \ldots$, the Tchebychev polynomials of the second kind satisfy
$$|S_n(x)| \leq n + 1, \qquad \forall x \in [-2, 2].$$
Example 1. 
Let $\mu$ be the semicircle law with $m_1^\mu = 0$ and variance 1. The associated orthogonal polynomials $P_n(\cdot)$, $n = 0, 1, 2, \ldots$, are derived from the Tchebychev polynomials of the second kind. Then we have
(i)
$$\int \left| M_{(\mu,\sigma)}(\zeta) - 1 \right| \mu(d\zeta) \leq \sum_{n=1}^{+\infty} (n+1) \left| E_\sigma(m^n) \right|;$$
(ii)
$$\left| H_{(\mu,\sigma)}(x) - F_\mu(x, 0) \right| \leq F_\mu(x, 0) \sum_{n=1}^{+\infty} (n+1) \left| E_\sigma(m^n) \right|,$$
where $F_\mu(x, 0)$ is the distribution function of the standard semicircle law.
If $\sigma$ is the uniform distribution on the interval $[0, \frac{1}{2}]$, then $E_\sigma(m^n) = \frac{(1/2)^n}{n+1}$. In this case, we obtain
$$\int \left| M_{(\mu,\sigma)}(\zeta) - 1 \right| \mu(d\zeta) \leq \sum_{n=1}^{+\infty} \frac{1}{2^n} = 1,$$
and
$$\left| H_{(\mu,\sigma)}(x) - F_\mu(x, 0) \right| \leq F_\mu(x, 0).$$
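The bound in Example 1 can be compared with the true $L_1(\mu)$ distance (a numerical sketch of ours): the actual distance is far smaller than the analytical bound of 1, so the bound is valid but conservative for this particular $\sigma$.

```python
import numpy as np
from scipy.integrate import quad

def sc(x):
    return np.sqrt(4.0 - x**2) / (2.0 * np.pi)

def M(z):                                       # mixture of semicircle parents
    return quad(lambda m: 2.0 / (1.0 + m * (m - z)), 0.0, 0.5)[0]

l1 = quad(lambda z: abs(M(z) - 1.0) * sc(z), -2.0, 2.0)[0]
print(l1)                                       # well below the analytical bound 1
```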
Example 2. 
Let $\mu$ be the free Poisson law with $m_1^\mu = 0$ and variance 1. The associated orthogonal polynomials $P_n(\cdot)$, $n = 0, 1, 2, \ldots$, are derived from the Tchebychev polynomials of the first kind. Then we have
(i)
$$\int \left| M_{(\mu,\sigma)}(\zeta) - 1 \right| \mu(d\zeta) \leq \sum_{n=1}^{+\infty} \left| E_\sigma(m^n) \right|;$$
(ii)
$$\left| H_{(\mu,\sigma)}(x) - F_\mu(x, 0) \right| \leq F_\mu(x, 0) \sum_{n=1}^{+\infty} \left| E_\sigma(m^n) \right|,$$
where $F_\mu(x, 0)$ is the distribution function of the free Poisson law.
If $\sigma$ is the uniform distribution on the interval $[0, \frac{1}{2}]$, then we have
$$\int \left| M_{(\mu,\sigma)}(\zeta) - 1 \right| \mu(d\zeta) \leq \sum_{n=1}^{+\infty} \frac{(1/2)^n}{n+1} \leq \sum_{n=1}^{+\infty} \frac{1}{2^{n+1}} = \frac{1}{2},$$
and
$$\left| H_{(\mu,\sigma)}(x) - F_\mu(x, 0) \right| \leq \frac{1}{2}\, F_\mu(x, 0).$$
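In fact, the series in Example 2 can be summed in closed form: $\sum_{n \geq 1} \frac{(1/2)^n}{n+1} = 2\left(\ln 2 - \frac{1}{2}\right) = 2\ln 2 - 1 \approx 0.386$, so the stated bound $\frac{1}{2}$ is slightly conservative. A one-line check (ours):

```python
import math

# sum_{n>=1} x^n/(n+1) = (-ln(1-x) - x)/x; at x = 1/2 this equals 2*ln(2) - 1.
s = sum(0.5**n / (n + 1) for n in range(1, 60))
print(s, 2.0 * math.log(2.0) - 1.0)             # both ~0.386294
```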

4. Conclusions

In this study, we have examined mixtures of probability distributions from a CSK family. Using a suitable basis of polynomials, a formula is derived for the difference between the mixture probability distribution and its parent probability distribution from a CSK family. We have also evaluated the distance of the mixture from the parent probability distribution in the $L_2$-norm, and $L_1$-norm bounds are determined for the difference between the distribution functions. The findings are illustrated with examples from quadratic CSK families. The results of this paper can be extended to families of probability measures having variance functions that are polynomials in the mean of arbitrary degree, based on the notion of generalized orthogonality of polynomials introduced in [8]. Furthermore, alternative methods such as stochastic representation, as presented in [16], can offer a powerful approach to capture the complexity of families of probability measures. Instead of relying solely on deterministic formulations, stochastic representation allows for the incorporation of random processes and latent variables, providing a more flexible framework for modeling diverse distributions. By modeling the mixture components using stochastic processes, such as random measures or Markov chains, one can account for the underlying uncertainty, dependencies, and variability within the data. This approach can be particularly useful when dealing with complex or heterogeneous families of probability measures, providing a more robust and adaptable way to represent mixture models.
The motivation for investigating analytical bounds for mixture models in CSK families of probability measures derives from the need to better comprehend and quantify uncertainty in real-world systems with complicated, multimodal distributions. In various domains, such as finance, signal processing, and machine learning, data are frequently derived from a combination of underlying processes or populations, and mixture models provide a versatile framework for capturing this heterogeneity. By focusing on CSK families, which are built on the Cauchy–Stieltjes kernel, this work aims to provide sharper, more reliable bounds that can enhance the accuracy of statistical inference and prediction. Such advances can improve model robustness, optimize decision-making, and provide better uncertainty quantification in applications like risk management, anomaly detection, and complex data analysis. In summary, the present work aims not only to advance statistical theory but also to provide practical solutions to pressing challenges in applied domains. By harnessing the power of the Cauchy–Stieltjes kernel and orthogonal polynomials, we can improve the accuracy and reliability of statistical models, leading to better decision-making and risk management in complex, multimodal environments. This research represents a step towards bridging the gap between advanced theoretical frameworks and their concrete applications, ultimately contributing to more robust, efficient, and interpretable models for a wide range of real-world problems.

Author Contributions

Conceptualization, F.A. (Fahad Alsharari); Methodology, R.F.; Validation, F.A. (Fahad Alsharari) and F.A. (Fatimah Alshahrani); Formal analysis, F.A. (Fatimah Alshahrani); Investigation, F.A. (Fahad Alsharari); Resources, F.A. (Fahad Alsharari); Data curation, R.F.; Writing—original draft, R.F.; Writing–review & editing, R.F.; Project administration, F.A. (Fatimah Alshahrani); Funding acquisition, F.A. (Fatimah Alshahrani). All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R358), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Shaked, M. Bounds on the distance of a mixture from its parent distribution. J. Appl. Probab. 1981, 18, 853–863.
2. Shimizu, R. Expansion of the scale mixture of the multivariate normal distributions with error bound evaluated in the L1-norm. J. Multivar. Anal. 1995, 53, 126–138.
3. Pommeret, D. Distance of a mixture from its parent distribution. Sankhya Indian J. Stat. 2005, 67, 699–714.
4. Hall, P. Polynomial expansion of density and distribution functions of scale mixtures. J. Multivar. Anal. 1981, 11, 173–184.
5. Bryc, W. Free exponential families as kernel families. Demonstr. Math. 2009, 42, 657–672.
6. Bryc, W.; Hassairi, A. One-sided Cauchy–Stieltjes kernel families. J. Theoret. Probab. 2011, 24, 577–594.
7. Fakhfakh, R. Characterization of quadratic Cauchy–Stieltjes kernel families based on the orthogonality of polynomials. J. Math. Anal. Appl. 2018, 459, 577–589.
8. Bryc, W.; Fakhfakh, R.; Mlotkowski, W. Cauchy–Stieltjes families with polynomial variance functions and generalized orthogonality. Probab. Math. Stat. 2019, 39, 237–258.
9. Lindsay, B.; Roeder, K. Moment-based oscillation properties of mixture models. Ann. Statist. 1997, 25, 378–386.
10. Lindsay, B.G.; Pilla, R.S.; Basak, P. Moment-based approximations of distributions using mixtures: Theory and applications. Ann. Inst. Stat. Math. 2000, 52, 215–230.
11. Nielsen, F.; Sun, K. Guaranteed deterministic bounds on the total variation distance between univariate mixtures. In Proceedings of the 2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP), Aalborg, Denmark, 17–20 September 2018; pp. 1–6.
12. Davies, S.; Mazumdar, A.; Pal, S.; Rashtchian, C. Lower bounds on the total variation distance between mixtures of two Gaussians. In Proceedings of the 33rd International Conference on Algorithmic Learning Theory, Paris, France, 29 March–1 April 2022; Volume 167, pp. 319–341.
13. Hershey, J.R.; Olsen, P.A. Approximating the Kullback–Leibler divergence between Gaussian mixture models. In Proceedings of the 2007 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), Honolulu, HI, USA, 16–20 April 2007; pp. IV-317–IV-320.
14. Van Hulle, M.M. Mixture density modeling, Kullback–Leibler divergence, and differential log-likelihood. Signal Process. 2005, 85, 951–963.
15. Abramowitz, M.; Stegun, I.A. Handbook of Mathematical Functions; Dover: New York, NY, USA, 1972.
16. Fang, K.T.; Kotz, S.; Ng, K.W. Symmetric Multivariate and Related Distributions; Chapman and Hall: London, UK; New York, NY, USA, 1990.