Article

# Poincaré and Log–Sobolev Inequalities for Mixtures

Institut für Geometrie und Praktische Mathematik, RWTH Aachen, Templergraben 55, 52056 Aachen, Germany
Entropy 2019, 21(1), 89; https://doi.org/10.3390/e21010089
Received: 30 November 2018 / Revised: 30 December 2018 / Accepted: 11 January 2019 / Published: 18 January 2019

## Abstract

This work studies mixtures of probability measures on $\mathbb{R}^n$ and gives bounds on the Poincaré and the log–Sobolev constants of two-component mixtures, provided that each component satisfies the functional inequality and the components are close in the $\chi^2$-distance. Estimating these constants for a mixture can be far more subtle than for its components: even mixing Gaussian measures may produce a measure whose Hamiltonian potential possesses multiple wells, leading to metastability and large constants in Sobolev-type inequalities. In particular, the Poincaré constant stays bounded in the mixture parameter, whereas the log–Sobolev constant may blow up as the mixture ratio tends to 0 or 1. This observation generalizes the one by Chafaï and Malrieu to the multidimensional case. For a class of examples, the blow-up is shown to be not a mere artifact of the method.

## 1. Introduction

Given a parameter $p \in [0,1]$, a mixture of two probability measures $\mu_0$ and $\mu_1$ on $\mathbb{R}^n$ is the probability measure $\mu_p$ defined by
$\mu_p := p\,\mu_0 + (1-p)\,\mu_1.$
Both measures $\mu_0$ and $\mu_1$ are assumed to be absolutely continuous with respect to the Lebesgue measure with nested supports, i.e., $\operatorname{supp}\mu_0 \subseteq \operatorname{supp}\mu_1$ or $\operatorname{supp}\mu_1 \subseteq \operatorname{supp}\mu_0$. Under these assumptions, at least one measure is absolutely continuous with respect to the other,
$\mu_0 \ll \mu_1 \quad\text{or}\quad \mu_1 \ll \mu_0,$
which implies that at least one of the measures has a density with respect to the other:
$\mathrm{d}\mu_0 = \frac{\mathrm{d}\mu_0}{\mathrm{d}\mu_1}\,\mathrm{d}\mu_1 \quad\text{or}\quad \mathrm{d}\mu_1 = \frac{\mathrm{d}\mu_1}{\mathrm{d}\mu_0}\,\mathrm{d}\mu_0.$
This work establishes criteria to check in a simple way under which a mixture of measures satisfies a Poincaré $PI ( ϱ )$ or log–Sobolev inequality $LSI ( α )$ with constants $ϱ$ and $α$, respectively, provided that each of the components satisfies one.
Definition 1
($PI(\varrho)$ and $LSI(\alpha)$). A probability measure $\mu$ on $\mathbb{R}^n$ satisfies the Poincaré inequality with constant $\varrho > 0$ if for all functions $f : \mathbb{R}^n \to \mathbb{R}$
$\operatorname{Var}_\mu[f] := \int \Bigl| f - \int f \,\mathrm{d}\mu \Bigr|^2 \mathrm{d}\mu \le \frac{1}{\varrho} \int |\nabla f|^2 \,\mathrm{d}\mu.$
A probability measure $\mu$ satisfies the log–Sobolev inequality with constant $\alpha > 0$ if for all functions $f : \mathbb{R}^n \to \mathbb{R}_+$
$\operatorname{Ent}_\mu[f] := \int f \log f \,\mathrm{d}\mu - \int f \,\mathrm{d}\mu \, \log\Bigl(\int f \,\mathrm{d}\mu\Bigr) \le \frac{1}{\alpha} \int \frac{|\nabla f|^2}{2f} \,\mathrm{d}\mu.$
By the change of variable $f \mapsto f^2$, the log–Sobolev inequality $LSI(\alpha)$ is equivalent to
$\operatorname{Ent}_\mu[f^2] \le \frac{2}{\alpha} \int |\nabla f|^2 \,\mathrm{d}\mu.$
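As a quick numerical illustration (a hedged sketch, not part of the paper's argument; the test function and grid are arbitrary choices), the Poincaré inequality $PI(1)$ for the standard Gaussian on $\mathbb{R}$ can be checked by quadrature:

```python
import numpy as np

def integrate(y, x):
    # trapezoidal rule on a fixed grid
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

x = np.linspace(-10.0, 10.0, 20001)
mu = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # standard Gaussian density

f = np.sin(x) + 0.3 * x**2                    # arbitrary smooth test function
df = np.cos(x) + 0.6 * x                      # its derivative

mean_f = integrate(f * mu, x)
var_f = integrate((f - mean_f)**2 * mu, x)    # Var_mu[f]
dirichlet = integrate(df**2 * mu, x)          # int |f'|^2 dmu

# PI(1) for the standard Gaussian (optimal constant rho = 1)
print(var_f <= dirichlet)
```

For $f(x) = x$ the two sides coincide, reflecting that $\varrho = 1$ is optimal for the standard Gaussian.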
The question of how the constants $\varrho_p$ and $\alpha_p$ in $PI(\varrho_p)$ and $LSI(\alpha_p)$ depend on the parameter $p \in [0,1]$ for a mixture $\mu_p$ was first studied by Chafaï and Malrieu  for measures on $\mathbb{R}^n$. The aim is to deduce simple criteria under which the measure $\mu_p$ (1) satisfies $PI(\varrho_p)$ and $LSI(\alpha_p)$, knowing that $\mu_0$ and $\mu_1$ satisfy $PI(\varrho_0)$, $PI(\varrho_1)$ and $LSI(\alpha_0)$, $LSI(\alpha_1)$, respectively. The approach of Chafaï and Malrieu  is based on a functional depending on the distribution functions of the measures $\mu_0$ and $\mu_1$, which then leads to bounds on the Poincaré and log–Sobolev constants of the mixture in one dimension.
This work generalizes part of the results of Chafaï and Malrieu  to the multidimensional case by a simple argument. The estimates on the Poincaré and log–Sobolev constants hold when the $\chi^2$-distance between $\mu_0$ and $\mu_1$ is bounded (see (5) for its definition). For this to be true, at least one of the measures $\mu_0$ and $\mu_1$ needs to be absolutely continuous with respect to the other, which is also a necessary condition for the mixture to have connected support. The resulting bound is optimal in the scaling behavior as the mixture parameter $p \to 0,1$: the log–Sobolev constant shows a logarithmic blow-up in p, whereas the Poincaré constant stays bounded. This different behavior of the Poincaré and log–Sobolev constants was also observed in the setting of metastability in (, Remark 2.20).
Let us first introduce the principle for the Poincaré inequality in Section 2 and then for the log–Sobolev inequality in Section 3. Then, the procedure is illustrated on specific examples of mixtures in Section 4.

## 2. Poincaré Inequality

To keep the presentation concise, the following notation for the mean of a function $f : \mathbb{R}^n \to \mathbb{R}$ with respect to a measure $\mu$ is introduced:
$\mathbb{E}_\mu[f] := \int f \,\mathrm{d}\mu.$
In this way, the variance in $PI(\varrho)$ and the relative entropy in $LSI(\alpha)$ become
$\operatorname{Var}_\mu[f] = \mathbb{E}_\mu[(f-\mathbb{E}_\mu[f])^2] = \mathbb{E}_\mu[f^2] - (\mathbb{E}_\mu[f])^2 \quad\text{and}\quad \operatorname{Ent}_\mu[f] = \mathbb{E}_\mu[f\log f] - \mathbb{E}_\mu[f]\log(\mathbb{E}_\mu[f]).$
Likewise, the covariance of two functions $f, g : \mathbb{R}^n \to \mathbb{R}$ is defined by
$\operatorname{Cov}_\mu[f,g] := \mathbb{E}_\mu[(f-\mathbb{E}_\mu[f])(g-\mathbb{E}_\mu[g])] = \mathbb{E}_\mu[fg] - \mathbb{E}_\mu[f]\,\mathbb{E}_\mu[g].$
The Cauchy–Schwarz inequality for the covariance then takes the form
$\operatorname{Cov}_\mu[f,g] \le \sqrt{\operatorname{Var}_\mu[f]}\,\sqrt{\operatorname{Var}_\mu[g]}.$
The argument is based on an easy but powerful observation for measures $μ 0$ and $μ 1$ with joint support.
Lemma 1
(Mean-difference as covariance). If $\operatorname{supp}\mu_0 = \operatorname{supp}\mu_1$, then for any $\vartheta \in [0,1]$ and any function $f : \mathbb{R}^n \to \mathbb{R}$,
$\mathbb{E}_{\mu_0}[f] - \mathbb{E}_{\mu_1}[f] = -\vartheta \operatorname{Cov}_{\mu_0}\Bigl[f, \frac{\mathrm{d}\mu_1}{\mathrm{d}\mu_0}\Bigr] + (1-\vartheta) \operatorname{Cov}_{\mu_1}\Bigl[f, \frac{\mathrm{d}\mu_0}{\mathrm{d}\mu_1}\Bigr].$
Proof.
The change of measure formula shows that each covariance above equals a difference of expectations:
$\operatorname{Cov}_{\mu_0}\Bigl[f, \frac{\mathrm{d}\mu_1}{\mathrm{d}\mu_0}\Bigr] = \mathbb{E}_{\mu_0}\Bigl[f \frac{\mathrm{d}\mu_1}{\mathrm{d}\mu_0}\Bigr] - \mathbb{E}_{\mu_0}[f]\,\mathbb{E}_{\mu_0}\Bigl[\frac{\mathrm{d}\mu_1}{\mathrm{d}\mu_0}\Bigr] = \mathbb{E}_{\mu_1}[f] - \mathbb{E}_{\mu_0}[f],$
and likewise $\operatorname{Cov}_{\mu_1}\bigl[f, \frac{\mathrm{d}\mu_0}{\mathrm{d}\mu_1}\bigr] = \mathbb{E}_{\mu_0}[f] - \mathbb{E}_{\mu_1}[f]$. □
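The identity of Lemma 1 can be verified numerically for two one-dimensional Gaussians (a sketch; the choice $\mu_0 = \mathcal{N}(0,1)$, $\mu_1 = \mathcal{N}(1,1)$ and the bounded test function are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def integrate(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

x = np.linspace(-14.0, 14.0, 40001)
mu0 = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)        # N(0, 1)
mu1 = np.exp(-(x - 1)**2 / 2) / np.sqrt(2 * np.pi)  # N(1, 1)
f = np.tanh(x)                                      # bounded test function

r10 = mu1 / mu0   # relative density dmu1/dmu0
r01 = mu0 / mu1   # relative density dmu0/dmu1

def mean(g, mu):
    return integrate(g * mu, x)

def cov(g, h, mu):
    return mean(g * h, mu) - mean(g, mu) * mean(h, mu)

lhs = mean(f, mu0) - mean(f, mu1)
for theta in (0.0, 0.3, 1.0):
    rhs = -theta * cov(f, r10, mu0) + (1 - theta) * cov(f, r01, mu1)
    print(abs(lhs - rhs))   # the identity holds for every theta
```

The freedom in $\vartheta$ is exactly what the proofs below exploit for optimization.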
The subsequent strategy is based on the identity (3): a Cauchy–Schwarz inequality leads to a product of two variances, to which $PI(\varrho_0)$ or $PI(\varrho_1)$ can be applied, and the parameter $\vartheta$ leaves freedom to optimize the resulting expression. This allows for proving the following theorem, which generalizes (, Theorem 4.4) to the multidimensional case for the Poincaré inequality, provided $\mu_0$ and $\mu_1$ are mutually absolutely continuous.
Theorem 1 (PI for absolutely continuous mixtures).
Let $\mu_0$ and $\mu_1$ satisfy $PI(\varrho_0)$ and $PI(\varrho_1)$, respectively, and let both measures be mutually absolutely continuous. Then, for all $p \in (0,1)$ and $q = 1-p$, the mixture measure $\mu_p = p\mu_0 + q\mu_1$ satisfies $PI(\varrho_p)$ with
$\frac{1}{\varrho_p} \le \begin{cases} \frac{1}{\varrho_0}, & \text{if } \frac{\varrho_1}{\varrho_0} \ge 1 + p\chi_1, \\ \frac{1}{\varrho_1}, & \text{if } \frac{\varrho_0}{\varrho_1} \ge 1 + q\chi_0, \\ \frac{p\chi_1 + pq\chi_0\chi_1 + q\chi_0}{\varrho_0\, p\chi_1 + \varrho_1\, q\chi_0}, & \text{else}, \end{cases}$
where
$\chi_0 := \operatorname{Var}_{\mu_0}\Bigl[\frac{\mathrm{d}\mu_1}{\mathrm{d}\mu_0}\Bigr] \quad\text{and}\quad \chi_1 := \operatorname{Var}_{\mu_1}\Bigl[\frac{\mathrm{d}\mu_0}{\mathrm{d}\mu_1}\Bigr].$
Proof.
The variance of f with respect to $\mu_p$ decomposes as
$\operatorname{Var}_{\mu_p}[f] = p\operatorname{Var}_{\mu_0}[f] + q\operatorname{Var}_{\mu_1}[f] + pq\,(\mathbb{E}_{\mu_0}[f] - \mathbb{E}_{\mu_1}[f])^2.$
Hereby, the first two terms are the expectations of the conditional variances, and the third term is the variance of a Bernoulli random variable. Now, the mean-difference is rewritten by Lemma 1, and the square is estimated with the Young inequality, introducing an additional parameter $\eta > 0$:
$(a+b)^2 \le (1+\eta)\,a^2 + (1+\eta^{-1})\,b^2.$
Then, the Cauchy–Schwarz inequality is applied to the covariances to obtain
$\begin{aligned} \operatorname{Var}_{\mu_p}[f] &\le p\operatorname{Var}_{\mu_0}[f] + q\operatorname{Var}_{\mu_1}[f] + pq\Bigl((1+\eta)\vartheta^2 \operatorname{Cov}^2_{\mu_0}\Bigl[f,\frac{\mathrm{d}\mu_1}{\mathrm{d}\mu_0}\Bigr] + (1+\eta^{-1})(1-\vartheta)^2 \operatorname{Cov}^2_{\mu_1}\Bigl[f,\frac{\mathrm{d}\mu_0}{\mathrm{d}\mu_1}\Bigr]\Bigr) \\ &\le \bigl(1+(1+\eta)\vartheta^2 q\chi_0\bigr)\, p\operatorname{Var}_{\mu_0}[f] + \bigl(1+(1+\eta^{-1})(1-\vartheta)^2 p\chi_1\bigr)\, q\operatorname{Var}_{\mu_1}[f] \\ &\le \frac{1+(1+\eta)\vartheta^2 q\chi_0}{\varrho_0} \int |\nabla f|^2 \, p\,\mathrm{d}\mu_0 + \frac{1+(1+\eta^{-1})(1-\vartheta)^2 p\chi_1}{\varrho_1} \int |\nabla f|^2 \, q\,\mathrm{d}\mu_1 \\ &\le \max\Bigl\{ \frac{1+(1+\eta)\vartheta^2 q\chi_0}{\varrho_0},\, \frac{1+(1+\eta^{-1})(1-\vartheta)^2 p\chi_1}{\varrho_1} \Bigr\} \int |\nabla f|^2 \,\mathrm{d}\mu_p. \end{aligned}$
The resulting maximum is now minimized in $\eta > 0$ and $\vartheta \in [0,1]$. Without loss of generality, $\varrho_0 \ge \varrho_1$ is assumed; the other case is obtained by interchanging the roles of $\mu_0$ and $\mu_1$. If $\varrho_0 > \varrho_1$, then $\vartheta = 1$ and $\eta \to 0$ is optimal as long as
$\frac{1+q\chi_0}{\varrho_0} \le \frac{1}{\varrho_1}.$
This corresponds to the second case in (4). By symmetry, the first case follows if $\varrho_1 \ge \varrho_0$.
Now, in the case $\varrho_0 \ge \varrho_1$ and $\varrho_0 \le (1+q\chi_0)\varrho_1$, by monotonicity there exists for every $\vartheta \in (0,1)$ a unique $\eta^* = \eta^*(\vartheta) > 0$ such that both terms in the max on the right-hand side of (6) are equal, and hence the max is minimal. Since $q\chi_0 > 0$ and $p\chi_1 > 0$, consider the sum of the coefficients $h(\vartheta) := (1+\eta)\vartheta^2 + (1+\tfrac{1}{\eta})(1-\vartheta)^2$ as a function of $\vartheta$ for fixed $\eta$. Minimizing h in $\vartheta \in (0,1)$ leads to $\vartheta^* = \tfrac{1}{1+\eta}$ and
$h(\vartheta^*) = \tfrac{1}{1+\eta} + \tfrac{\eta}{1+\eta} = 1.$
Hence, in this case, the parameter $s = (1+\eta^*)(\vartheta^*)^2 = \tfrac{1}{1+\eta^*} \in (0,1)$ and $(1+(\eta^*)^{-1})(1-\vartheta^*)^2 = \tfrac{\eta^*}{1+\eta^*} = 1-s$. Thus, the problem can be rephrased: find $s^* \in (0,1)$ which solves
$\frac{1+s\,q\chi_0}{\varrho_0} = \frac{1+(1-s)\,p\chi_1}{\varrho_1}.$
The solution $s^*$ is given by
$s^* = \frac{(1+p\chi_1)\,\varrho_0 - \varrho_1}{\varrho_0\, p\chi_1 + \varrho_1\, q\chi_0}.$
For this value of $s^*$, the value of the max in (6) is given by
$\frac{1+s^* q\chi_0}{\varrho_0} = \frac{p\chi_1 + \frac{\varrho_1}{\varrho_0} q\chi_0 + (1+p\chi_1)\,q\chi_0 - \frac{\varrho_1}{\varrho_0} q\chi_0}{\varrho_0\, p\chi_1 + \varrho_1\, q\chi_0} = \frac{p\chi_1 + pq\chi_0\chi_1 + q\chi_0}{\varrho_0\, p\chi_1 + \varrho_1\, q\chi_0}.$
□
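The case distinction (4) can be implemented directly (a sketch under the theorem's assumptions; the parameter values below are arbitrary). For $\chi_0 = \chi_1 = \chi$, the result agrees with the simplified formula (8) of Remark 2:

```python
def poincare_mixture_bound(p, rho0, rho1, chi0, chi1):
    """Upper bound on 1/rho_p from Theorem 1, formula (4)."""
    q = 1.0 - p
    if rho1 / rho0 >= 1.0 + p * chi1:
        return 1.0 / rho0
    if rho0 / rho1 >= 1.0 + q * chi0:
        return 1.0 / rho1
    num = p * chi1 + p * q * chi0 * chi1 + q * chi0
    den = rho0 * p * chi1 + rho1 * q * chi0
    return num / den

p, q = 0.3, 0.7
rho0, rho1, chi = 1.0, 1.5, 3.0
bound = poincare_mixture_bound(p, rho0, rho1, chi, chi)
simplified = (1.0 + p * q * chi) / (p * rho0 + q * rho1)  # formula (8)
print(bound, simplified)
```

Note that the bound stays finite as $p \to 0,1$, in line with the boundedness of the Poincaré constant discussed in the introduction.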
Remark 1.
The constants $\chi_0$ and $\chi_1$ can be rewritten, if $\mu_0$ and $\mu_1$ are mutually absolutely continuous, as
$\chi_0 = \int \Bigl(\frac{\mathrm{d}\mu_1}{\mathrm{d}\mu_0}\Bigr)^2 \mathrm{d}\mu_0 - 1 = \int \frac{\mathrm{d}\mu_1}{\mathrm{d}\mu_0}\,\mathrm{d}\mu_1 - 1 \quad\text{and}\quad \chi_1 = \int \Bigl(\frac{\mathrm{d}\mu_0}{\mathrm{d}\mu_1}\Bigr)^2 \mathrm{d}\mu_1 - 1 = \int \frac{\mathrm{d}\mu_0}{\mathrm{d}\mu_1}\,\mathrm{d}\mu_0 - 1.$
This quantity is also known as the $\chi^2$-distance on the space of probability measures (cf. ). The $\chi^2$-distance is a rather strong distance and therefore bounds many other probability distances, among them the relative entropy. Indeed, the concavity of the logarithm and the Jensen inequality yield
$\operatorname{Ent}_{\mu_0}\Bigl[\frac{\mathrm{d}\mu_1}{\mathrm{d}\mu_0}\Bigr] = \int \log\frac{\mathrm{d}\mu_1}{\mathrm{d}\mu_0}\,\mathrm{d}\mu_1 \le \log\Bigl(\int \frac{\mathrm{d}\mu_1}{\mathrm{d}\mu_0}\,\mathrm{d}\mu_1\Bigr) = \log(1+\chi_0) \le \chi_0.$
Remark 2.
The proof of Theorem 1 shows that the expression for $\frac{1}{\varrho_p}$ in the last case of (4) can be bounded above and below by
$\max\Bigl\{\frac{1}{\varrho_0}, \frac{1}{\varrho_1}\Bigr\} \le \frac{p\chi_1 + pq\chi_0\chi_1 + q\chi_0}{\varrho_0\, p\chi_1 + \varrho_1\, q\chi_0} \le \max\Bigl\{\frac{1+q\chi_0}{\varrho_0}, \frac{1+p\chi_1}{\varrho_1}\Bigr\}.$
In the case where $\chi_0 = \chi_1 = \chi$, the formula (4) for $\varrho_p$ simplifies to
$\frac{1}{\varrho_p} \le \frac{1+pq\chi}{p\varrho_0 + q\varrho_1}.$
Corollary 1.
Let $\mu_0 \ll \mu_1$ and let $\mu_0$, $\mu_1$ satisfy $PI(\varrho_0)$, $PI(\varrho_1)$, respectively. Then, for all $p \in [0,1]$ with $q = 1-p$, the mixture measure $\mu_p = p\mu_0 + q\mu_1$ satisfies $PI(\varrho_p)$ with
$\frac{1}{\varrho_p} = \max\Bigl\{\frac{1}{\varrho_0},\, \frac{1+p\chi_1}{\varrho_1}\Bigr\}.$
Likewise, if $\mu_1 \ll \mu_0$, then it holds that
$\frac{1}{\varrho_p} = \max\Bigl\{\frac{1}{\varrho_1},\, \frac{1+q\chi_0}{\varrho_0}\Bigr\}.$
Proof.
The proof is a simple consequence of Lemma 1 with $ϑ = 0$ and a similar line of estimates as in (6). □

## 3. Log–Sobolev Inequality

In this section, a criterion for $LSI(\alpha)$ is established. It will be convenient to work with the form (2). For a function $g : \mathbb{R}^n \to \mathbb{R}_+$ and two probability measures $\mu_0$ and $\mu_1$, the averaged function $\bar g : \{0,1\} \to \mathbb{R}_+$ is defined by
$\bar g(0) := \mathbb{E}_{\mu_0}[g] \quad\text{and}\quad \bar g(1) := \mathbb{E}_{\mu_1}[g].$
Moreover, the mixture of the two Dirac measures $\delta_0$ and $\delta_1$ is, by slight abuse of notation, denoted by $\delta_p := p\delta_0 + q\delta_1$ for $p \in [0,1]$ and $q = 1-p$. Then, the entropy of the mixture $\mu_p = p\mu_0 + q\mu_1$ splits as
$\operatorname{Ent}_{\mu_p}[f^2] = p\operatorname{Ent}_{\mu_0}[f^2] + q\operatorname{Ent}_{\mu_1}[f^2] + \operatorname{Ent}_{\delta_p}[\overline{f^2}].$
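The decomposition (9) is an exact identity and can be checked by quadrature (a sketch; the Gaussian components, the mixture ratio, and the positive test function are illustrative choices):

```python
import numpy as np

def integrate(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

x = np.linspace(-12.0, 13.0, 40001)
mu0 = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)        # N(0, 1)
mu1 = np.exp(-(x - 1)**2 / 2) / np.sqrt(2 * np.pi)  # N(1, 1)
p = 0.4
q = 1.0 - p
mup = p * mu0 + q * mu1                             # mixture density

f2 = np.exp(np.sin(x))                              # positive function playing f^2

def ent(g, mu):
    m = integrate(g * mu, x)
    return integrate(g * np.log(g) * mu, x) - m * np.log(m)

# coarse-grained entropy of the averaged function on {0, 1}
g0, g1 = integrate(f2 * mu0, x), integrate(f2 * mu1, x)
m = p * g0 + q * g1
ent_coarse = p * g0 * np.log(g0) + q * g1 * np.log(g1) - m * np.log(m)

lhs = ent(f2, mup)
rhs = p * ent(f2, mu0) + q * ent(f2, mu1) + ent_coarse
print(lhs, rhs)   # the two sides agree
```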
The following discrete log–Sobolev inequality for a Bernoulli random variable is used to estimate the entropy of the averaged function $\overline{f^2}$. The optimal log–Sobolev constant was found independently by Higuchi and Yoshida  and by Diaconis and Saloff-Coste (, Theorem A.2.).
Lemma 2 (Optimal log–Sobolev inequality for Bernoulli measures).
A Bernoulli measure on $\{0,1\}$, i.e., a mixture of two Dirac measures $\delta_p = p\delta_0 + q\delta_1$ with $p \in [0,1]$ and $q = 1-p$, satisfies the discrete log–Sobolev inequality
$\operatorname{Ent}_{\delta_p}[g] \le \frac{pq}{\Lambda(p,q)} \bigl(\sqrt{g(0)} - \sqrt{g(1)}\bigr)^2 \quad\text{for all } g : \{0,1\} \to \mathbb{R}_+,$
where $\Lambda : \mathbb{R}_+ \times \mathbb{R}_+ \to \mathbb{R}_+$ is the logarithmic mean defined by
$\Lambda(p,q) := \frac{p-q}{\log p - \log q} \ \text{ for } p \ne q, \quad\text{and}\quad \Lambda(p,p) := \lim_{q \to p} \Lambda(p,q) = p.$
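A minimal sketch of the logarithmic mean and the resulting Bernoulli bound (assuming nothing beyond the statement of Lemma 2; the sample values are arbitrary):

```python
import math

def log_mean(p, q):
    """Logarithmic mean Lambda(p, q); Lambda(p, p) = p."""
    if p == q:
        return p
    return (p - q) / (math.log(p) - math.log(q))

def bernoulli_entropy(p, g0, g1):
    """Ent_{delta_p}[g] for a function g on {0, 1}."""
    q = 1.0 - p
    m = p * g0 + q * g1
    return p * g0 * math.log(g0) + q * g1 * math.log(g1) - m * math.log(m)

p, g0, g1 = 0.3, 2.0, 5.0
q = 1.0 - p
lhs = bernoulli_entropy(p, g0, g1)
rhs = p * q / log_mean(p, q) * (math.sqrt(g0) - math.sqrt(g1))**2
print(lhs, rhs)   # lhs <= rhs by Lemma 2
```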
The above result allows for estimating the coarse-grained entropy in (9).
Lemma 3 (Estimate of the coarse-grained entropy).
Let $\overline{f^2} : \{0,1\} \to \mathbb{R}_+$ be given by $\overline{f^2}(i) := \mathbb{E}_{\mu_i}[f^2]$ for $i \in \{0,1\}$. Then, for all $p \in [0,1]$ and $q = 1-p$,
$\operatorname{Ent}_{\delta_p}[\overline{f^2}] \le \frac{pq}{\Lambda(p,q)} \bigl(\operatorname{Var}_{\mu_0}[f] + \operatorname{Var}_{\mu_1}[f] + (\mathbb{E}_{\mu_0}[f] - \mathbb{E}_{\mu_1}[f])^2\bigr)$
holds.
Proof.
Lemma 2 applied to $\operatorname{Ent}_{\delta_p}[\overline{f^2}]$ yields
$\operatorname{Ent}_{\delta_p}[\overline{f^2}] \le \frac{pq}{\Lambda(p,q)} \Bigl(\sqrt{\overline{f^2}(0)} - \sqrt{\overline{f^2}(1)}\Bigr)^2.$
The square-root mean-difference on the right-hand side of (11) can be estimated by using the fact that the function $(a,b) \mapsto (\sqrt{a} - \sqrt{b})^2$ is jointly convex on $\mathbb{R}_+ \times \mathbb{R}_+$. Indeed, by introducing the functions $f_0, f_1 : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}_+$ defined by $f_0(x,y) = f(x)$ and $f_1(x,y) = f(y)$, an application of the Jensen inequality yields the estimate
$\Bigl(\sqrt{\mathbb{E}_{\mu_0}[f^2]} - \sqrt{\mathbb{E}_{\mu_1}[f^2]}\Bigr)^2 = \Bigl(\sqrt{\mathbb{E}_{\mu_0\times\mu_1}[f_0^2]} - \sqrt{\mathbb{E}_{\mu_0\times\mu_1}[f_1^2]}\Bigr)^2 \le \mathbb{E}_{\mu_0\times\mu_1}[(f_0-f_1)^2] = \mathbb{E}_{\mu_0}[f^2] - 2\,\mathbb{E}_{\mu_0}[f]\,\mathbb{E}_{\mu_1}[f] + \mathbb{E}_{\mu_1}[f^2] = \operatorname{Var}_{\mu_0}[f] + \operatorname{Var}_{\mu_1}[f] + (\mathbb{E}_{\mu_0}[f] - \mathbb{E}_{\mu_1}[f])^2.$
Now, a combination of (11) and (12) gives (10). □
The decomposition (9) together with (10) yields that a mixture $\mu_p = p\mu_0 + q\mu_1$ for $p \in [0,1]$ and $q = 1-p$ satisfies
$\operatorname{Ent}_{\mu_p}[f^2] \le p\operatorname{Ent}_{\mu_0}[f^2] + q\operatorname{Ent}_{\mu_1}[f^2] + \frac{pq}{\Lambda(p,q)} \bigl(\operatorname{Var}_{\mu_0}[f] + \operatorname{Var}_{\mu_1}[f] + (\mathbb{E}_{\mu_0}[f] - \mathbb{E}_{\mu_1}[f])^2\bigr).$
The right-hand side of (13) consists of quantities, which can be estimated under the assumption that $μ 0$ and $μ 1$ satisfy $LSI ( α 0 )$ and $LSI ( α 1 )$. The following theorem provides an extension of the result ( Theorem 4.4) to the multidimensional case for the log–Sobolev inequality.
Theorem 2 (LSI for absolutely continuous mixtures).
Let $\mu_0$ and $\mu_1$ satisfy $LSI(\alpha_0)$ and $LSI(\alpha_1)$, respectively, and let both measures be mutually absolutely continuous. Then, for all $p \in (0,1)$ and $q = 1-p$, the mixture measure $\mu_p = p\mu_0 + q\mu_1$ satisfies $LSI(\alpha_p)$ with
$\frac{1}{\alpha_p} \le \begin{cases} \frac{1+q\lambda_p}{\alpha_0}, & \text{if } \frac{\alpha_1}{\alpha_0} \ge \frac{1+p\lambda_p(1+\chi_1)}{1+q\lambda_p}, \\ \frac{1+p\lambda_p}{\alpha_1}, & \text{if } \frac{\alpha_0}{\alpha_1} \ge \frac{1+q\lambda_p(1+\chi_0)}{1+p\lambda_p}, \\ \frac{p(1+q\lambda_p)\chi_1 + pq\lambda_p\chi_0\chi_1 + q(1+p\lambda_p)\chi_0}{\alpha_0\, p\chi_1 + \alpha_1\, q\chi_0}, & \text{else}. \end{cases}$
Hereby, $\chi_0$ and $\chi_1$ are given in (5), and $\lambda_p$ denotes the inverse logarithmic mean
$\lambda_p := \frac{1}{\Lambda(p,q)} = \frac{\log p - \log q}{p-q} \ \text{ for } p \ne \tfrac12, \quad\text{and}\quad \lambda_{1/2} := 2.$
Proof.
The starting point is the splitting obtained from (13). The variances and the mean-difference in (13) can be estimated in the same way as in the estimate (6) in the proof of Theorem 1. Additionally, the fact  that $LSI(\alpha)$ implies $PI(\alpha)$ is used to derive, for any $\eta > 0$ and any $\vartheta \in (0,1)$,
$Ent μ p [ f 2 ] ≤ 1 α 0 ( 1 + q λ p ( 1 + ( 1 + η ) ϑ 2 χ 0 ) ) ∫ | ∇ f | 2 p d μ 0 + 1 α 1 ( 1 + p λ p ( 1 + ( 1 + η − 1 ) ( 1 − ϑ ) 2 χ 1 ) ) ∫ | ∇ f | 2 q d μ 1 ≤ max { 1 + q λ p ( 1 + ( 1 + η ) ϑ 2 χ 0 ) α 0 , 1 + p λ p ( 1 + ( 1 + η − 1 ) ( 1 − ϑ ) 2 χ 1 ) α 1 } ∫ | ∇ f | 2 d μ p .$
By introducing the reduced log–Sobolev constants
$\tilde\alpha_0 := \frac{\alpha_0}{1+q\lambda_p} \quad\text{and}\quad \tilde\alpha_1 := \frac{\alpha_1}{1+p\lambda_p},$
as well as the constants $\tilde\chi_0$ and $\tilde\chi_1$ defined by
$\tilde\chi_0 := \frac{\chi_0\,\lambda_p}{1+q\lambda_p} \quad\text{and}\quad \tilde\chi_1 := \frac{\chi_1\,\lambda_p}{1+p\lambda_p},$
the bound (15) takes the form
$Ent μ p ( f 2 ) ≤ max { 1 + ( 1 + η ) ϑ 2 χ ˜ 0 α ˜ 0 , 1 + ( 1 + 1 η ) ( 1 − ϑ ) 2 χ ˜ 1 α ˜ 1 } ∫ | ∇ f | 2 d μ p .$
The estimate (18) has the same structure as the estimate (6), where $\tilde\alpha_0$, $\tilde\alpha_1$ play the roles of $\varrho_0$, $\varrho_1$ and $\tilde\chi_0$, $\tilde\chi_1$ the roles of $\chi_0$, $\chi_1$. Hence, the optimization procedure from the proof of Theorem 1 applies, and the last step consists of translating the constants $\tilde\alpha_0$, $\tilde\alpha_1$ and $\tilde\chi_0$, $\tilde\chi_1$ back to the original ones. □
Remark 3.
Let the bound for $\frac{1}{\alpha_p}$ in the last case of (14) be denoted by $\frac{1}{A_p}$. Then, the proof shows that it can be bounded above and below, in the same way as in (7), in terms of the reduced constants (16) and (17):
$\max\Bigl\{\frac{1+q\lambda_p}{\alpha_0}, \frac{1+p\lambda_p}{\alpha_1}\Bigr\} \le \frac{1}{A_p} \le \max\Bigl\{\frac{1+q\lambda_p(1+\chi_0)}{\alpha_0}, \frac{1+p\lambda_p(1+\chi_1)}{\alpha_1}\Bigr\}.$
In the case $\chi_0 = \chi_1 = \chi$, the simplified bound
$\frac{1}{\alpha_p} \le \frac{1+\lambda_p+pq\lambda_p\chi}{p\alpha_0 + q\alpha_1}$
holds. The inverse logarithmic mean $\lambda_p = \frac{1}{\Lambda(p,q)}$ blows up logarithmically as $p \to 0,1$. Hence, even in the case $\chi = 0$, the bound (19) diverges logarithmically. At first sight, this logarithmic divergence looks artificial, especially in comparison with (8), which shows that the Poincaré constant stays bounded. However, the examples in the next section show that this blow-up may actually occur. Hence, the bound in (14) is optimal at this level of generality.
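The logarithmic blow-up of $\lambda_p$ can be made quantitative numerically: $\lambda_p/|\log p| \to 1$ as $p \to 0$ (a sketch of the asymptotics only):

```python
import math

def inv_log_mean(p):
    """lambda_p = 1 / Lambda(p, 1 - p)."""
    q = 1.0 - p
    if p == q:
        return 2.0
    return (math.log(p) - math.log(q)) / (p - q)

for p in (1e-2, 1e-4, 1e-6):
    lam = inv_log_mean(p)
    print(p, lam, lam / abs(math.log(p)))   # ratio tends to 1
```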
An analogous statement to Corollary 1 for the Poincaré constant is obtained for the log–Sobolev constant; its proof follows along the same lines and is omitted.
Corollary 2.
Let $\mu_0 \ll \mu_1$ and let $\mu_0$, $\mu_1$ satisfy $LSI(\alpha_0)$ and $LSI(\alpha_1)$, respectively. Then, for any $p \in (0,1)$ and $q = 1-p$, the mixture measure $\mu_p = p\mu_0 + q\mu_1$ satisfies $LSI(\alpha_p)$ with
$\frac{1}{\alpha_p} \le \max\Bigl\{\frac{1+q\lambda_p}{\alpha_0},\, \frac{1+p\lambda_p(1+\chi_1)}{\alpha_1}\Bigr\}.$
Likewise, if $\mu_1 \ll \mu_0$, then
$\frac{1}{\alpha_p} \le \max\Bigl\{\frac{1+p\lambda_p}{\alpha_1},\, \frac{1+q\lambda_p(1+\chi_0)}{\alpha_0}\Bigr\}$
holds.

## 4. Examples

The results of Theorems 1 and 2 are illustrated for some specific examples and compared with the results of (, Section 4.5), which, however, are restricted to one-dimensional measures. Although the criterion of Theorems 1 and 2 can only give upper bounds in the multidimensional case when at least one of the mixture components is absolutely continuous with respect to the other, it still yields the optimal scaling in the mixture parameter $p \to 0,1$.

#### 4.1. Mixture of Two Gaussian Measures with Equal Covariance Matrix

Let us consider the mixture of two Gaussians $\mu_0 := \mathcal{N}(0,\Sigma)$ and $\mu_1 := \mathcal{N}(y,\Sigma)$ for some $y \in \mathbb{R}^n$ and a strictly positive definite covariance matrix $\Sigma \ge \sigma\operatorname{Id}$ in the sense of quadratic forms, for some $\sigma > 0$. Then, $\mu_0$ and $\mu_1$ satisfy $PI(\sigma^{-1})$ and $LSI(\sigma^{-1})$ by the Bakry–Émery criterion (Theorem A1), i.e., $\varrho_0 = \alpha_0 = \varrho_1 = \alpha_1 = \sigma^{-1}$. Furthermore, the $\chi^2$-distance between $\mu_0$ and $\mu_1$ can be calculated explicitly as a Gaussian integral (see also ):
$\chi_0 = \chi_1 = \frac{1}{(2\pi)^{n/2}\sqrt{\det\Sigma}} \int \exp\Bigl(-x\cdot\Sigma^{-1}x + \tfrac12 (x-y)\cdot\Sigma^{-1}(x-y)\Bigr)\,\mathrm{d}x - 1 = \exp(y\cdot\Sigma^{-1}y)\, \frac{1}{(2\pi)^{n/2}\sqrt{\det\Sigma}} \int \exp\Bigl(-\tfrac12 (x+y)\cdot\Sigma^{-1}(x+y)\Bigr)\,\mathrm{d}x - 1 \le e^{|y|^2/\sigma} - 1.$
Then, the bound from Theorem 1 in the form (8) yields
$\frac{1}{\varrho_p} \le \bigl(1 + pq\,(e^{|y|^2/\sigma}-1)\bigr)\,\sigma.$
Likewise, Theorem 2 in the form (19) leads to the following bound on the log–Sobolev constant:
$\frac{1}{\alpha_p} \le \bigl(1 + pq\lambda_p\,(e^{|y|^2/\sigma}+1)\bigr)\,\sigma.$
By noting that $pq \le pq\lambda_p \le \tfrac12$, both constants stay uniformly bounded in p. The large exponential factor $e^{|y|^2/\sigma}$ in the distance cannot be avoided at this level of generality, since the mixed measure $\mu_p$ has a bimodal structure leading to metastable effects (, Remark 2.20).
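The closed form of the $\chi^2$-distance for shifted Gaussians can be cross-checked by quadrature in one dimension (a sketch with the illustrative choices $n = 1$, $\Sigma = 1$, $y = 1$, so that $\chi_0 = e^{y^2} - 1$):

```python
import numpy as np

def integrate(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

x = np.linspace(-10.0, 14.0, 60001)
y_shift = 1.0
mu0 = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)              # N(0, 1)
mu1 = np.exp(-(x - y_shift)**2 / 2) / np.sqrt(2 * np.pi)  # N(y, 1)

# chi_0 = int (dmu1/dmu0)^2 dmu0 - 1 = int mu1^2 / mu0 dx - 1
chi0 = integrate(mu1**2 / mu0, x) - 1.0
closed_form = np.exp(y_shift**2) - 1.0
print(chi0, closed_form)
```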
For the mixture of two one-dimensional standard Gaussians ($\sigma = 1$ in (20)), the result (, Corollary 4.7) deduced the bound
$\frac{1}{\varrho_p} \le 1 + pq\,|y|^2 \Bigl(\Phi(|y|)\, e^{|y|^2} + \frac{|y|}{\sqrt{2\pi}}\, e^{|y|^2/2} + \frac12\Bigr),$
where $\Phi(a) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^a e^{-y^2/2}\,\mathrm{d}y$. The elementary inequalities $e^{a^2} - 1 \le a^2 e^{a^2}$ and $\Phi(a) \ge \frac{1+a}{\sqrt{2\pi}}\, e^{-a^2/2}$ for all $a \in \mathbb{R}$ show that the bound (20) is better than the bound (21) for all parameter values $p \in [0,1]$ and $|y| \ge 0$.
Hence, this example shows that, for mixtures whose components are mutually absolutely continuous and whose tail behavior is controlled in terms of the $\chi^2$-distance, Theorems 1 and 2 even improve the bound of  and generalize it to the multidimensional case.

#### 4.2. Mixture of a Gaussian and Sub-Gaussian Measure

Let us consider $\mu_1 = \mathcal{N}(0,\Sigma)$, where $\Sigma \ge \sigma\operatorname{Id}$ is strictly positive definite. In addition, let the density of $\mu_0$ with respect to $\mu_1$ be bounded uniformly by some $\kappa \ge 1$, that is, $\mathrm{d}\mu_0/\mathrm{d}\mu_1 \le \kappa$ almost everywhere on $\mathbb{R}^n$. By the Bakry–Émery criterion (Theorem A1), $\varrho_1 = \alpha_1 = \frac{1}{\sigma}$ holds. Furthermore, the bound on the relative density provides an upper bound for $\chi_1$:
$\chi_1 = \operatorname{Var}_{\mu_1}\Bigl[\frac{\mathrm{d}\mu_0}{\mathrm{d}\mu_1}\Bigr] = \int \Bigl(\frac{\mathrm{d}\mu_0}{\mathrm{d}\mu_1}\Bigr)^2 \mathrm{d}\mu_1 - 1 \le \kappa^2 - 1.$
Provided that $\mu_0$ satisfies $PI(\varrho_0)$, the Poincaré constant of the mixture $\mu_p = p\mu_0 + q\mu_1$ satisfies, by Corollary 1, the estimate
$\frac{1}{\varrho_p} \le \max\Bigl\{\frac{1}{\varrho_0},\, \bigl(1 + p\,(\kappa^2-1)\bigr)\,\sigma\Bigr\}.$
Similarly, whenever $\mu_0$ satisfies $LSI(\alpha_0)$, Corollary 2 provides the following bound for the log–Sobolev constant of the mixture measure $\mu_p$:
$\frac{1}{\alpha_p} \le \max\Bigl\{\frac{1+q\lambda_p}{\alpha_0},\, \bigl(1 + p\lambda_p\,\kappa^2\bigr)\,\sigma\Bigr\}.$
In this case, the logarithmic blow-up of the log–Sobolev constant cannot be ruled out for $p → 0 , 1$, without any further information on $μ 0$.

#### 4.3. Mixture of Two Centered Gaussians with Different Variance

For $\mu_0 = \mathcal{N}(0,\operatorname{Id})$ and $\mu_1 = \mathcal{N}(0,\sigma\operatorname{Id})$, the Bakry–Émery criterion (Theorem A1) implies $\varrho_0 = \alpha_0 = 1$ and $\varrho_1 = \alpha_1 = \sigma^{-1}$. Using the spherical symmetry, the calculation of the $\chi^2$-distance reduces to a one-dimensional integral:
$\chi_0 = \int \frac{\mathrm{d}\mu_1}{\mathrm{d}\mu_0}\,\mathrm{d}\mu_1 - 1 = \frac{\mathcal{H}^{n-1}(\partial B_1)}{(2\pi)^{n/2}\,\sigma^n} \int_{\mathbb{R}_+} r^{n-1}\, e^{-(\frac{1}{\sigma}-\frac12)r^2}\,\mathrm{d}r - 1.$
Hereby, $\mathcal{H}^{n-1}(\partial B_1)$ denotes the $(n-1)$-dimensional Hausdorff measure of the sphere $\partial B_1 = \{x \in \mathbb{R}^n : |x| = 1\}$. The integral only exists for $\sigma < 2$; in this case, it can be evaluated and simplified. The bound for the constant $\chi_1$ follows by duality under the substitution $\sigma \mapsto \sigma^{-1}$ and is given by
$\chi_0 = \begin{cases} \bigl(\sigma(2-\sigma)\bigr)^{-n/2} - 1, & \sigma < 2, \\ +\infty, & \sigma \ge 2, \end{cases} \quad\text{and}\quad \chi_1 = \begin{cases} \bigl(\sigma^{-1}(2-\sigma^{-1})\bigr)^{-n/2} - 1, & \sigma > \tfrac12, \\ +\infty, & \sigma \le \tfrac12. \end{cases}$
If $\sigma \le 1/2$, that is, for $\chi_1 = \infty$, the bound given in Corollary 1 yields
$\frac{1}{\varrho_p} \le \max\bigl\{\sigma,\, 1+q\chi_0\bigr\} = \max\bigl\{\sigma,\, (1-q) + q\,(\sigma(2-\sigma))^{-n/2}\bigr\} = p + q\,(\sigma(2-\sigma))^{-n/2}.$
Similarly, if $\sigma \ge 2$, that is, for $\chi_0 = \infty$, the bound becomes
$\frac{1}{\varrho_p} \le \max\bigl\{1,\, (1+p\chi_1)\,\sigma\bigr\} \le \sigma\bigl(q + p\,(\sigma^{-1}(2-\sigma^{-1}))^{-n/2}\bigr).$
In the case $\frac12 < \sigma < 2$, the interpolation bound (4) of Theorem 1 could be applied. However, the scaling behavior of the Poincaré constant can already be observed with the estimate (7) in Remark 2, where again, thanks to the symmetry $\sigma \mapsto \frac{1}{\sigma}$,
$\frac{1}{\varrho_p} \le \begin{cases} p + q\,(\sigma(2-\sigma))^{-n/2}, & \text{for } \sigma \le 1, \\ \sigma\bigl(q + p\,(\sigma^{-1}(2-\sigma^{-1}))^{-n/2}\bigr), & \text{for } \sigma \ge 1, \end{cases}$
holds. Hence, the Poincaré constant stays bounded for the full range of parameters $p \in [0,1]$ and $\sigma > 0$.
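The value of $\chi_0$ in (22) can again be cross-checked by quadrature in one dimension (a sketch with the illustrative choices $n = 1$ and $\sigma = 1/2$):

```python
import numpy as np

def integrate(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

n, sigma = 1, 0.5
x = np.linspace(-12.0, 12.0, 60001)
mu0 = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)                    # N(0, 1)
mu1 = np.exp(-x**2 / (2 * sigma)) / np.sqrt(2 * np.pi * sigma)  # N(0, sigma)

chi0 = integrate(mu1**2 / mu0, x) - 1.0              # int (dmu1/dmu0) dmu1 - 1
closed_form = (sigma * (2 - sigma))**(-n / 2) - 1.0  # formula (22), sigma < 2
print(chi0, closed_form)
```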
For the log–Sobolev constant, the bound from Corollary 2 gives
$\frac{1}{\alpha_p} \le \begin{cases} 1 + q\lambda_p\,(\sigma(2-\sigma))^{-n/2}, & \sigma \le 1, \\ \sigma\bigl(1 + p\lambda_p\,(\sigma^{-1}(2-\sigma^{-1}))^{-n/2}\bigr), & \sigma \ge 1. \end{cases}$
The bound (24) blows up logarithmically for $p \to 0,1$ in general. However, the special case $\sigma = 1$, although trivial, allows for the combined bound $\frac{1}{\alpha_p} \le 1 + \min\{p,q\}\,\lambda_p$, which stays bounded. This behavior can be extended to the range $\sigma \in (\frac12, 2)$ thanks to (22) and the interpolation bound of Theorem 2.
The result (23) can be compared with the one of (, Section 4.5.2), which states that, for some $C > 0$, all $\sigma > 1$ and $p \in (0, 1/2)$,
$\frac{1}{\varrho_{p,\mathrm{CM}}} \le \sigma + \frac{C\,p}{\sigma - 1}$
holds. In general, depending on the constant C, the bound (23) is better for small σ, whereas the scaling in σ is better in (25), namely linear instead of $\sigma^{3/2}$ as in (23).

#### 4.4. Mixture of Uniform and Gaussian Measure

Let $\mu_0 = \mathcal{N}(0,\operatorname{Id})$ and $\mu_1 = \frac{1}{\mathcal{H}^n(B_1)}\mathbb{1}_{B_1}$, with $B_1$ the unit ball around zero. Then, $\varrho_0 = 1$ holds by the Bakry–Émery criterion (Theorem A1), and $\varrho_1 \ge \frac{\pi^2}{\operatorname{diam}(B_1)^2} = \frac{\pi^2}{4}$ by the result of . Furthermore, since $\mu_1 \ll \mu_0$, the $\chi^2$-distance between $\mu_0$ and $\mu_1$ becomes, thanks to the spherical symmetry,
$\chi_0 + 1 = \int \Bigl(\frac{\mathrm{d}\mu_1}{\mathrm{d}\mu_0}\Bigr)^2 \mathrm{d}\mu_0 = \frac{(2\pi)^{n/2}}{\mathcal{H}^n(B_1)^2} \int_{B_1} e^{|x|^2/2}\,\mathrm{d}x = \frac{(2\pi)^{n/2}\,\mathcal{H}^{n-1}(\partial B_1)}{\mathcal{H}^n(B_1)^2} \int_0^1 r^{n-1}\, e^{r^2/2}\,\mathrm{d}r.$
The volume $\mathcal{H}^n(B_1)$ and the surface area $\mathcal{H}^{n-1}(\partial B_1)$ of the unit ball satisfy the relations
$\frac{\mathcal{H}^{n-1}(\partial B_1)}{\mathcal{H}^n(B_1)} = n \quad\text{and}\quad \frac{(2\pi)^{n/2}}{\mathcal{H}^n(B_1)} = 2^{n/2}\,\Gamma\Bigl(\frac{n}{2}+1\Bigr) =: g_n.$
The integral on the right-hand side of (26) is bounded below by $\frac{1}{n}$ and above by $\frac{e}{n}$, which altogether yields
$g_n \le \chi_0 + 1 \le e\, g_n.$
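The sandwich bound (27) can be checked numerically across dimensions (a sketch; the listed dimensions are arbitrary choices):

```python
import math

def trapez(f, a, b, m=100000):
    # simple trapezoidal rule for a function on [a, b]
    h = (b - a) / m
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, m))
    return s * h

results = []
for n in (1, 2, 3, 5, 10):
    g_n = 2**(n / 2) * math.gamma(n / 2 + 1)   # (2*pi)^{n/2} / H^n(B_1)
    integral = trapez(lambda r: r**(n - 1) * math.exp(r**2 / 2), 0.0, 1.0)
    chi0_plus_1 = n * g_n * integral           # formula (26)
    results.append((n, g_n, chi0_plus_1))
    print(n, g_n, chi0_plus_1)                 # g_n <= chi0 + 1 <= e * g_n
```

The growth of $g_n$ in n quantifies how expensive the uniform-vs-Gaussian comparison becomes in high dimension.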
Corollary 1 implies that the Poincaré constant of the mixture $\mu_p = p\mu_0 + q\mu_1$ satisfies
$\frac{1}{\varrho_p} \le \max\Bigl\{\frac{1}{\varrho_1},\, 1 + q\chi_0\Bigr\} \le p + q\, e\, g_n,$
where the last inequality follows from $\frac{4}{\pi^2} \le p + q\, e\, g_n$ for $n \ge 1$ and all $p \in [0,1]$.
The estimate of the log–Sobolev constant uses $\alpha_0 = 1$ by the Bakry–Émery criterion (Theorem A1) and $\alpha_1 \ge \frac{2}{e}$ from (A1). Then, Corollary 2 yields the bound
$\frac{1}{\alpha_p} \le \max\Bigl\{\frac{1+p\lambda_p}{\alpha_1},\, \frac{1+q\lambda_p(1+\chi_0)}{\alpha_0}\Bigr\} \le \max\Bigl\{(1+p\lambda_p)\,\frac{e}{2},\, 1 + q\lambda_p\, e\, g_n\Bigr\}.$
This bound blows up logarithmically for $p \to 0,1$.
The blow-up for $p \to 1$ is artificial, which can be shown by a combination of the Bakry–Émery criterion and the Holley–Stroock perturbation principle. To do so, the Hamiltonian of $\mu_p$ is decomposed into a convex function and an error term:
$H_p(x) := -\log \mu_p(x) = -\log\Bigl( \frac{p}{(2\pi)^{n/2}}\, e^{-\frac{|x|^2}{2}} + \frac{1-p}{\mathcal{H}^n(B_1)}\, \mathbb{1}_{B_1(0)}(x) \Bigr) = -\log\Bigl( e^{-\frac{|x|^2}{2}+\frac12} + \frac{1-p}{p}\, \frac{(2\pi)^{n/2}}{\mathcal{H}^n(B_1)}\, \sqrt{e}\; \mathbb{1}_{B_1(0)}(x) \Bigr) + C_{p,n} = \frac{|x|^2-1}{2} - \psi_p(x) + \tilde C_{p,n},$
where
$\psi_p(x) := \Bigl( \log\Bigl( e^{-\frac{|x|^2}{2}+\frac12} + \frac{1-p}{p}\, \frac{(2\pi)^{n/2}}{\mathcal{H}^n(B_1)}\, \sqrt{e} \Bigr) + \frac{|x|^2-1}{2} \Bigr)\, \mathbb{1}_{B_1(0)}(x).$
The function $\psi_p$ is radially increasing towards the boundary of $B_1$, which yields, for $|x| \to 1$, the bound
$0 \le \psi_p(x) \le \log\Bigl( 1 + \frac{1-p}{p}\, \frac{(2\pi)^{n/2}}{\mathcal{H}^n(B_1)}\, \sqrt{e} \Bigr).$
From (30), the Hamiltonian $H_p$ is compared with the convex potential $\frac{|x|^2-1}{2}$, with the bound (31) on the perturbation $\psi_p$. By the Bakry–Émery criterion (Theorem A1) and the Holley–Stroock perturbation principle (Theorem A2), $\mu_p$ therefore satisfies $PI(\tilde\varrho_p)$ and $LSI(\tilde\alpha_p)$ with
$\frac{1}{\tilde\varrho_p} \le \frac{1}{\tilde\alpha_p} \le 1 + \frac{1-p}{p}\, e\, g_n,$
where $g_n$ is the same constant as in (27). This bound only blows up for $p \to 0$; however, the blow-up is of order $\frac{1}{p}$. Furthermore, the bound on the Poincaré constant is worse than the one from (28). Therefore, both approaches need to be combined.
The combination of the bounds obtained in (29) and (32) results in the improved bound
$\frac{1}{\alpha_p} \le C_n\, (1 + q\lambda_p\, g_n), \quad\text{with } C_n \text{ some universal constant},$
which blows up only logarithmically for $p \to 0$.
This example shows that the Poincaré and the log–Sobolev constant may have different scaling behavior for $p \to 0$. Indeed, Ref.  shows for this specific mixture in the one-dimensional case that the log–Sobolev constant is bounded below by
$C\,|\log p| \le \frac{1}{\alpha_p},$
for p small enough and a constant C independent of p. In one dimension, lower bounds are accessible via the functional introduced by Bobkov and Götze . Hence, the bound (33) is optimal in the one-dimensional case, which strongly indicates optimality in terms of scaling in the mixture ratio p also in higher dimensions.
To conclude, the Bakry–Émery criterion in combination with the Holley–Stroock perturbation principle is effective for detecting blow-ups of the log–Sobolev constant for mixtures, but has, in general, the wrong scaling behavior in the mixing parameter p. On the other hand, the criterion presented in Theorem 2 provides the right scaling of the blow-up but may give artificial blow-ups, if the components of the mixture become singular in the sense of the $χ 2$-distance.

## 5. Conclusions

Mixtures appear in many different applications, and the main results of this work may be useful for the investigation of asymmetric Kalman filter estimates , the study of asymmetric mixtures in marine biology , econometrics , gradient-quadratic and fixed-point iteration algorithms , and estimates of multivariate Gaussian mixtures .
Theorems 1 and 2 provide a simple estimate of the Poincaré and log–Sobolev constants of a two-component mixture measure $μ p = p μ 0 + q μ 1$ if the $χ 2$-distance of $μ 0$ and $μ 1$ is bounded and each of the components satisfies a Poincaré or log–Sobolev inequality. Section 4 reviews several examples with the following findings:
• For mixtures with components that are mutually absolutely continuous and whose tail behavior is mutually controlled in terms of the $χ 2$-distance, Theorems 1 and 2 are very effective.
• If only one of the components is absolutely continuous with respect to the other one with bounded density, then it is still possible to obtain a bound on the Poincaré and log–Sobolev constants. However, the log–Sobolev constant blows up logarithmically as the mixture parameter p approaches 0 or 1. For specific examples, it is shown that this blow-up is, at least for one of the limits $p \to 0$ or $p \to 1$, not an artifact of the applied method.
• A necessary condition for the finiteness of the $χ 2$-distance between two measures is that at least one of the measures $μ 0$ and $μ 1$ is absolutely continuous to the other one, which in particular provides a mixture with connected support. This condition is too strong since one can easily decompose a measure into a mixture, where the joint support of the components is a null set. In this case, the present approach would not be helpful, even though the mixture may still satisfy both functional inequalities.
Future work could overcome the limits of the present approach by revisiting the crucial ingredient for both the Poincaré and log–Sobolev inequality, which was the representation of the mean-difference in Lemma 1 regarding covariances. Formula (3) from Lemma 1 applies only in the case where both measures are mutually absolutely continuous. However, the idea of an interpolation bound can be generalized to suitable weighted Sobolev spaces. For this, since $μ 0 , μ 1 ≪ μ p$ for all $p ∈ ( 0 , 1 )$, one can formally write and estimate
$\mathbb{E}_{\mu_0}[f] - \mathbb{E}_{\mu_1}[f] = \operatorname{Cov}_{\mu_p}\Bigl[f, \frac{\mathrm{d}\mu_0}{\mathrm{d}\mu_p} - \frac{\mathrm{d}\mu_1}{\mathrm{d}\mu_p}\Bigr] \le \|f\|_{\dot H^1(\mu_p)} \Bigl\| \frac{\mathrm{d}\mu_0}{\mathrm{d}\mu_p} - \frac{\mathrm{d}\mu_1}{\mathrm{d}\mu_p} \Bigr\|_{\dot H^{-1}(\mu_p)}.$
Hereby, $\dot H^1(\mu_p)$ is the homogeneous weighted Sobolev space with norm $\|f\|^2_{\dot H^1(\mu_p)} := \int |\nabla f|^2 \,\mathrm{d}\mu_p$, and $\dot H^{-1}(\mu_p)$ is its dual space with norm
$\|\omega\|^2_{\dot H^{-1}(\mu_p)} := \sup_{f \in \dot H^1(\mu_p)} \Bigl\{ 2\langle f, \omega\rangle_{\mu_p} - \|f\|^2_{\dot H^1(\mu_p)} \Bigr\}.$
The representation (34) opens the door to many more applications in which the components of the mixture need not be mutually absolutely continuous. Similar ideas for estimating mean-differences were successfully applied in the metastable setting [2,14], in which suitable bounds on the $\dot H^{-1}$-norm are obtained. In this regard, the bound (34) promises many interesting new insights for future studies.

## Funding

This research received no external funding.

## Acknowledgments

This work is based on part of the Ph.D. thesis  written under the supervision of Stephan Luckhaus at the University of Leipzig. The author thanks the Max-Planck-Institute for Mathematics in the Sciences in Leipzig for providing excellent working conditions. The author thanks Georg Menz for many discussions on mixtures and metastability.

## Conflicts of Interest

The author declares no conflict of interest.

## Appendix A. Bakry–Émery Criterion and Holley–Stroock Perturbation Principle

Two classical conditions for Poincaré and log–Sobolev inequalities are stated in this part of the appendix. The Bakry–Émery criterion relates convexity of the Hamiltonian of a measure, and positive curvature of the underlying space, to constants for the Poincaré and log–Sobolev inequalities. Although the result is classical for the case of $R n$, the result for general convex domains was established in ([16], Theorem 2.1).
Theorem A1
(Bakry–Émery criterion (Proposition 3, Corollary 2); ([16], Theorem 2.1)). Let $Ω ⊂ R n$ be convex and let $H : Ω → R$ be a Hamiltonian with Gibbs measure $μ ( d x ) = Z μ − 1 e − H ( x ) 𝟙 Ω ( x ) d x$, and assume that $∇ 2 H ( x ) ≥ κ > 0$ for all $x ∈ supp μ$. Then, μ satisfies $PI ( ϱ )$ and $LSI ( α )$ with
$ϱ ≥ κ and α ≥ κ .$
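As a standard illustration of Theorem A1 (not part of its statement), consider a centered Gaussian measure:

```latex
% Illustration: for the Gaussian measure
% \mu = \mathcal{N}(0, \sigma^2 \mathrm{Id}_n) on \Omega = \mathbb{R}^n,
% the Hamiltonian is H(x) = |x|^2/(2\sigma^2) up to an additive constant,
% hence \nabla^2 H(x) = \sigma^{-2}\,\mathrm{Id}_n \ge \sigma^{-2} > 0.
% Theorem A1 then yields
\varrho \ge \sigma^{-2}
\qquad\text{and}\qquad
\alpha \ge \sigma^{-2} ,
% so flat Gaussians (large \sigma) only admit small constants in the
% Poincaré and log--Sobolev inequalities.
```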
The second condition is the Holley–Stroock perturbation principle, which allows one to deduce Poincaré and log–Sobolev inequalities for a very large class of measures.
Theorem A2
(Holley–Stroock perturbation principle ([18], p. 1184)). Let $Ω ⊂ R n$, let $H : Ω → R$, and let $ψ : Ω → R$ be a bounded function. Let μ and $μ ˜$ be the Gibbs measures with Hamiltonians H and $H + ψ$, respectively,
$μ ( d x ) = 1 Z μ e − H ( x ) 𝟙 Ω ( x ) d x and μ ˜ ( d x ) = 1 Z μ ˜ e − H ( x ) − ψ ( x ) 𝟙 Ω ( x ) d x .$
If μ satisfies $PI ( ϱ )$ and $LSI ( α )$, then $μ ˜$ satisfies $PI ( ϱ ˜ )$ and $LSI ( α ˜ )$, respectively. Hereby, the constants satisfy
$ϱ ˜ ≥ e − osc Ω ψ ϱ and α ˜ ≥ e − osc Ω ψ α ,$
where $osc Ω ψ : = sup Ω ψ − inf Ω ψ$.
Proofs of Theorems A1 and A2 relying on semigroup theory can be found in the exposition by Ledoux ([6], Corollary 1.4, Corollary 1.6 and Lemma 1.2).
Example A1 (Uniform measure on the ball).
The measure $μ 1 = 1 H n ( B 1 ) 𝟙 B 1$, where $B 1$ is the unit ball around zero, satisfies $LSI ( α 1 )$ with
$α 1 ≥ 2 / e .$ (A1)
The proof compares the measure $μ 1$ with a family of measures
$ν σ ( d x ) = 1 Z σ exp ( − σ | x | 2 + σ / 2 ) 𝟙 B 1 ( x ) d x for σ > 0 .$
Then, it holds that $ν σ$ satisfies $LSI ( 2 σ )$ by the Bakry–Émery criterion (Theorem A1). Moreover, it holds that $osc x ∈ B 1 ( − σ | x | 2 + σ / 2 ) = σ$ and hence $μ 1$ satisfies $LSI ( 2 σ e − σ )$ by the Holley–Stroock perturbation principle (Theorem A2) for all $σ > 0$. Optimizing the expression $2 σ e − σ$ over σ, with maximum at $σ = 1$, gives the bound (A1).
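The final optimization step can be checked numerically; a minimal sketch (the grid range, step size, and variable names are arbitrary illustration choices, not from the paper):

```python
import math

def lsi_bound(sigma: float) -> float:
    # Holley-Stroock bound for the uniform measure on the unit ball,
    # obtained by comparison with the Gaussian-type measure nu_sigma:
    # LSI constant 2*sigma from Bakry-Emery, damped by exp(-osc) = exp(-sigma).
    return 2.0 * sigma * math.exp(-sigma)

# Scan sigma > 0 on a fine grid and pick the largest (best) LSI bound.
sigmas = [0.001 * k for k in range(1, 5000)]
best_sigma = max(sigmas, key=lsi_bound)
best_bound = lsi_bound(best_sigma)

print(best_sigma)  # maximizer: sigma = 1
print(best_bound)  # maximal bound: 2/e ~ 0.7358
```

The printed maximum agrees with the calculus argument: the derivative of $2 σ e − σ$ vanishes exactly at $σ = 1$, where the value is $2 / e$.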

## References

1. Chafaï, D.; Malrieu, F. On fine properties of mixtures with respect to concentration of measure and Sobolev type inequalities. Annales de l'Institut Henri Poincaré Probabilités et Statistiques 2010, 46, 72–96.
2. Menz, G.; Schlichting, A. Poincaré and logarithmic Sobolev inequalities by decomposition of the energy landscape. Ann. Probab. 2014, 42, 1809–1884.
3. Gibbs, A.L.; Su, F.E. On Choosing and Bounding Probability Metrics. Int. Stat. Rev. 2002, 70, 419–435.
4. Higuchi, Y.; Yoshida, N. Analytic Conditions and Phase Transition for Ising Models. Unpublished lecture notes in Japanese, 1995.
5. Diaconis, P.; Saloff-Coste, L. Logarithmic Sobolev inequalities for finite Markov chains. Ann. Appl. Probab. 1996, 6, 695–750.
6. Ledoux, M. Logarithmic Sobolev Inequalities for Unbounded Spin Systems Revisited. In Séminaire de Probabilités XXXV; Springer: Berlin, Germany, 1999; pp. 167–194.
7. Carreira-Perpinan, M.A. Mode-finding for mixtures of Gaussian distributions. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1318–1323.
8. Payne, L.E.; Weinberger, H.F. An optimal Poincaré inequality for convex domains. Arch. Ration. Mech. Anal. 1960, 5, 286–292.
9. Bobkov, S.G.; Götze, F. Exponential Integrability and Transportation Cost Related to Logarithmic Sobolev Inequalities. J. Funct. Anal. 1999, 163, 1–28.
10. Nurminen, H.; Ardeshiri, T.; Piche, R.; Gustafsson, F. Skew-t Filter and Smoother with Improved Covariance Matrix Approximation. IEEE Trans. Signal Process. 2018, 66, 5618–5633.
11. Contreras-Reyes, J.; López Quintero, F.; Yáñez, A. Towards Age Determination of Southern King Crab (Lithodes santolla) Off Southern Chile Using Flexible Mixture Modeling. J. Mar. Sci. Eng. 2018, 6, 157.
12. Tasche, D. Exact Fit of Simple Finite Mixture Models. J. Risk Financ. Manag. 2014, 7, 150–164.
13. McLachlan, G.; Peel, D. Finite Mixture Models; Wiley Series in Probability and Statistics; John Wiley & Sons: Hoboken, NJ, USA, 2000.
14. Schlichting, A.; Slowik, M. Poincaré and logarithmic Sobolev constants for metastable Markov chains via capacitary inequalities. arXiv 2017, arXiv:1705.05135.
15. Schlichting, A. The Eyring–Kramers Formula for Poincaré and Logarithmic Sobolev Inequalities. Ph.D. Thesis, Universität Leipzig, Leipzig, Germany, 2012.
16. Kolesnikov, A.V.; Milman, E. Riemannian metrics on convex sets with applications to Poincaré and log–Sobolev inequalities. Calc. Var. Part. Differ. Equ. 2016, 55, 1–36.
17. Bakry, D.; Émery, M. Diffusions Hypercontractives. In Séminaire de Probabilités XIX; Springer: Berlin, Germany, 1985; pp. 177–206.
18. Holley, R.; Stroock, D. Logarithmic Sobolev inequalities and stochastic Ising models. J. Stat. Phys. 1987, 46, 1159–1194.
