Relative Entropy, Gaussian Concentration and Uniqueness of Equilibrium States

For a general class of lattice spin systems, we prove that an abstract Gaussian concentration bound implies positivity of the lower relative entropy density. As a consequence, we obtain uniqueness of translation-invariant Gibbs measures from the Gaussian concentration bound in this general setting. This extends earlier results with a different and very short proof.


Uniqueness Criteria for Gibbs Measures
In mathematical statistical mechanics, it is important to have good and useful criteria for the absence of phase transition, or equivalently, uniqueness of the Gibbs measure associated with a given potential. Such criteria, also known under the name high-temperature criteria, show that when the interaction is small enough (high temperature), there is no phase transition, and the unique phase has strong mixing properties, i.e., it is close to a product measure (infinite temperature).
The most famous among such criteria is the Dobrushin uniqueness criterion; see, e.g., ( [1], Chapter 8). Under the Dobrushin uniqueness criterion, besides uniqueness, one derives strong mixing properties of the unique Gibbs measure, i.e., quantitative bounds on the decay of covariance of local observables, and quantitative bounds on the difference between finite and infinite-volume expectations, i.e., on the influence of the boundary condition on the expectation of a local function. The basic idea behind the Dobrushin uniqueness criterion is that, when it holds, the conditional expectation operator acts as a contraction on the space of probability measures equipped with the Wasserstein distance. Because the Gibbs measure is a fixed point of this contraction, and fixed points of contractions are unique, one obtains uniqueness of Gibbs measures. Later on, the Dobrushin criterion was generalized to the Dobrushin-Shlosman criterion, and a connection has been made between this criterion and an important functional inequality, the log-Sobolev inequality. More precisely, for finite-range Glauber dynamics of Ising spins, in [2] the equivalence between the Dobrushin-Shlosman criterion and the log-Sobolev inequality was proved. This implies e.g., that under the Dobrushin-Shlosman criterion, the reversible Glauber dynamics converges exponentially fast (in L ∞ ) to its unique stationary measure.
Related to the Dobrushin criterion, there is a general criterion in the context of interacting particle systems under which one obtains uniqueness of the stationary measure and uniform ergodicity, i.e., from any initial measure, the dynamics converges exponentially fast in time to the unique stationary measure. This criterion, the so-called "M < ε criterion" ([3], Chapter 1), is based on a similar contraction argument, i.e., when it holds, the semigroup of the interacting particle system acts as a contraction on a suitable space of functions.

A Gaussian concentration bound for a probability measure µ controls the deviations of a general function f from its expectation in terms of the local oscillations δ_i f of f (the maximal influence on f of a change of the spin at site i):

µ(|f − ∫ f dµ| ≥ ε) ≤ 2 exp(−Cε² / Σ_i (δ_i f)²),   (1)

where C > 0 is a constant which does not depend on f, and in particular, it does not depend on n in the following example. If f(σ) = (1/n) Σ_{i=1}^n σ_i, and σ_i ∈ {−1, 1}, then δ_i f = 2/n, hence Σ_i (δ_i f)² = 4/n, and we find the upper bound 2e^{−Cnε²/4} for all n ≥ 1. The power of concentration inequalities of the type (1) is that they hold for general f, i.e., far beyond empirical averages.
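To make an inequality of type (1) concrete, the following sketch checks it numerically in the simplest possible case: n independent symmetric spins, where a bound of exactly this shape holds by Hoeffding's inequality. The value C = 2 used below is the Hoeffding constant for this i.i.d. case, an assumption of this illustration; for a genuinely interacting Gibbs measure the constant would depend on the potential.

```python
import numpy as np

# Numerical illustration of an inequality of type (1) for n i.i.d.
# symmetric spins sigma_i in {-1,+1} (a product measure, i.e., "infinite
# temperature").  For f(sigma) = (1/n) sum_i sigma_i each oscillation is
# delta_i f = 2/n, so sum_i (delta_i f)^2 = 4/n and (1) reads
# 2*exp(-C*n*eps^2/4).  For independent spins, Hoeffding's inequality
# gives exactly this form with C = 2 (an assumption of this sketch).

rng = np.random.default_rng(0)
n, eps, C, trials = 100, 0.3, 2.0, 100_000

spins = rng.choice([-1, 1], size=(trials, n))
empirical_tail = np.mean(np.abs(spins.mean(axis=1)) >= eps)
bound = 2 * np.exp(-C * n * eps**2 / 4)

print(f"empirical tail: {empirical_tail:.5f} <= Gaussian bound: {bound:.5f}")
assert empirical_tail <= bound
```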
Concentration inequalities in the context of Gibbs measures for lattice spin systems have been studied in several works. In particular, in [6] the author proves an inequality of the type (1) under the Dobrushin uniqueness criterion. See [7] for a recent overview of concentration inequalities in the context of Gibbs measures.

Concentration and Uniqueness
The central question of this paper is the following. Assume that a Gibbs measure associated with a given potential satisfies a Gaussian concentration bound, i.e., an inequality of the type (1). Can we then conclude that it is the unique Gibbs measure, i.e., that there is no phase transition?
In this paper, we restrict ourselves to translation-invariant Gibbs measures (i.e., so-called equilibrium states), but in a very general setting. Following [8], we started in [9] the study of the relation between the Gaussian concentration bound and the uniqueness of equilibrium states in the context of spin systems on the lattice Z d , where the spin at each lattice site takes a finite number of values. Examples there include the Ising model at high temperature. Notice that for this model, at low temperature in d ≥ 2, there is a phase transition, and the large deviation probabilities of the magnetization are surface-like, rather than volume-like. This manifestation of a phase transition excludes the Gaussian concentration bound, under which all ergodic averages have volume-like large deviation probabilities.
Here, we show uniqueness of equilibrium states under an inequality of the type (1), and next, we generalize both the context of the concentration inequality, as well as the context of Gibbs measures, showing uniqueness in the context of so-called zero-information distance sets. An important result in the context of equilibrium states is the variational principle, which implies that the relative entropy density between two equilibrium states is zero. Therefore, if one can show a strictly positive lower bound for the relative entropy density, one obtains uniqueness of the equilibrium state. The set of equilibrium states associated with a given translation-invariant potential is a special case of a set in which the relative entropy density between two elements of the set is always zero. We call such a set a zero-information distance set, and generalize our uniqueness results to this context, which includes, e.g., transformations of Gibbs measures, and stationary measures of certain interacting particle systems.

Content and Organization of the Paper
As sketched above, we obtain a lower bound for the lower relative entropy density in terms of a natural distance between translation-invariant probability measures, reminiscent and in the spirit of the results of Bobkov and Götze [10], who proved (in a different setting) a lower bound for the relative entropy in terms of the square of the Wasserstein distance. Because we work in the thermodynamic limit on a product space and are interested in translation-invariant probability measures, there is no translation-invariant distance on the configuration space for which we can apply the Bobkov-Götze theorem. We can avoid this problem by introducing a suitable distance on the translation-invariant probability measures (rather than on configurations).
We start by proving the lower bound on the lower relative entropy density in the context of general lattice spin systems with state space Ω = S Z d , where the single spins take values in a metric space S of bounded diameter. The bounded diameter property allows us to associate a quasi-local function f with a natural sequence of oscillations δ i f , i ∈ Z d , where δ i f represents the maximal influence on the function f of a change in the spin at site i. In the final section of this paper, we provide a generalization of this by allowing more abstract single-spin spaces, and more general associated sequences of oscillations.
The rest of our paper is organized as follows. In Section 2, we introduce the basic setting of lattice spin systems and important function spaces. In Section 3, we introduce the Gaussian concentration bound and the relative entropy (density), and formulate and prove our main result in the context of a single-spin space with finite diameter. In Section 4, we discuss applications of our result to zero-information distance sets, including, e.g., the set of equilibrium states with regard to absolutely summable translation-invariant potentials. In Section 5, we consider a generalization by introducing an abstract sequence of oscillations and the associated Gaussian concentration bound, and we state and prove the analogue of our main result in this generalized context.

Configuration Space and the Translation Operator
We start from a standard Borel space (S, b) endowed with a metric d_S, and we let diam(S) = sup_{s,s′∈S} d_S(s, s′). (A measurable space (S, b) is said to be standard Borel if there exists a metric on S which makes it a complete separable metric space, and b then denotes the associated Borel σ-algebra.) In the sequel, for notational convenience, we omit the symbol b and call S a standard Borel space, where we always assume that the associated σ-algebra is the Borel σ-algebra b.
We assume that diam(S) < ∞. Later on, in Section 5, we will show how to weaken this assumption. This space S represents the "single-spin space", i.e., we will consider lattice spin configurations in which individual "spins" take values in S. We denote by (Ω_Λ, b_Λ) the product space (S^Λ, ∏_{i∈Λ} b_i), and (Ω, B) stands for the lattice spin configuration space (S^{Z^d}, b^{Z^d}). We equip this space with the product topology. Elements of Ω are called configurations. For η ∈ Ω, we denote by η_i ∈ S its evaluation at site i ∈ Z^d. By σ_Λ we mean an element of Ω_Λ, and by η_Λ ξ_{Λ^c}, a configuration coinciding with η on Λ and with ξ on Λ^c. We denote by S the set of finite subsets of Z^d.
We denote by τ_i : Z^d → Z^d, i ∈ Z^d, the map which shifts, or translates, by i; that is, τ_i(j) = j + i for all j ∈ Z^d. We define the translation operator acting on configurations as follows (and use the same symbol): for each i ∈ Z^d, (τ_i σ)_j = σ_{j−i}, for all j ∈ Z^d. This corresponds to translating σ forward by i. We denote by the same symbol the translation operator acting on a function f : Ω → R, defined via (τ_i f)(σ) = f(τ_{−i} σ), so that τ_i shifts the dependence set of f by i. We denote by P_τ(Ω) the set of translation-invariant probability measures on Ω. We denote by C(Ω), C_b(Ω) the space of continuous, respectively bounded continuous, real-valued functions on Ω.

Local Oscillations and Function Spaces
To a continuous function f : Ω → R we associate a "sequence" of "local oscillations" δf = (δ_i f)_{i∈Z^d}, defined via

δ_i f = sup{ |f(σ) − f(η)| : σ, η ∈ Ω, σ_j = η_j for all j ≠ i }.   (2)

Later on, in Section 5, in which we consider the case where S is allowed to have infinite diameter, we will consider a more abstract definition of δf. In the case where S has finite diameter, (2) is the most natural choice. We call f local if there exists a finite set D_f ∈ S such that δ_i f = 0 for all i ∉ D_f. The set D_f is then called the dependence set of f. We denote by L(Ω) the set of local continuous functions on Ω.
We call a continuous function quasi-local if it is the uniform limit of a sequence of local continuous functions. If S is compact, then, according to the Stone-Weierstrass theorem, local continuous functions are uniformly dense in C (Ω).
We denote by QL(Ω) the space of all continuous quasi-local functions on Ω. For 1 ≤ p ≤ ∞, we introduce the spaces

∆_p(Ω) = { f ∈ QL(Ω) : ‖δf‖_p < ∞ },  where ‖δf‖_p = (Σ_{i∈Z^d} (δ_i f)^p)^{1/p},

with the usual supremum convention for p = ∞. Changing a configuration site by site, we obtain, for σ, η ∈ Ω and Λ ∈ S,

|f(σ_Λ η_{Λ^c}) − f(η)| ≤ Σ_{i∈Λ} δ_i f,   (3)

which, upon taking the limit Λ ↑ Z^d, using the assumed quasi-locality of f, gives

|f(σ) − f(η)| ≤ Σ_{i∈Z^d} δ_i f = ‖δf‖_1.   (4)

If f is local, then we still have the inequality (3) for Λ containing the dependence set of f. Because, by assumption, Σ_i (δ_i f)^p is finite, it follows that δ_i f < ∞ for all i ∈ Λ, and therefore ‖δf‖_1 < ∞. Then, we obtain (4), which implies that f is bounded. We record this fact as Lemma 1: a local function f with ‖δf‖_p < ∞ for some 1 ≤ p ≤ ∞ is bounded.
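The mechanism behind (3) and (4) can be checked by brute force on a toy example: changing a configuration one site at a time shows that the total oscillation of f is at most ‖δf‖₁, so a finite ‖δf‖₁ forces f to be bounded. The two-site function below is a hypothetical observable chosen for illustration.

```python
from itertools import product

# Brute-force check, on {-1,+1}^2, of the mechanism behind (3)-(4):
# osc(f) = max f - min f is at most ||delta f||_1.

def f(sigma):
    return sigma[0] * sigma[1] + 0.5 * sigma[1]

configs = list(product([-1, 1], repeat=2))

def delta(i):
    # delta_i f: maximal change of f under a flip of the spin at site i
    return max(abs(f(s) - f(s[:i] + (-s[i],) + s[i + 1:])) for s in configs)

osc = max(f(s) for s in configs) - min(f(s) for s in configs)
l1 = delta(0) + delta(1)
print(f"osc(f) = {osc}, ||delta f||_1 = {l1}")
assert osc <= l1
```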
We say that µ_n → µ if, for all bounded continuous local functions f, we have ∫ f dµ_n → ∫ f dµ (then, by definition of quasi-locality, the same holds for bounded continuous quasi-local functions). This induces the so-called weak quasi-local topology on probability measures. Notice that in our setting, where by assumption the single-spin space S is a complete separable metric space, this topology coincides with the ordinary weak topology; see [11] (p. 898).
In our setting, the set of bounded quasi-local continuous functions is measure separating, i.e., for two probability measures µ ≠ µ′, there exists a bounded quasi-local continuous f, such that ∫ f dµ ≠ ∫ f dµ′. Because, by definition, bounded continuous quasi-local functions can be uniformly approximated by bounded continuous local functions, if the set of bounded quasi-local functions is measure separating, then the set of bounded continuous local functions is also measure separating. Therefore, in our setting, for two probability measures µ ≠ µ′, there exists a bounded local f (which is not constant), such that ∫ f dµ ≠ ∫ f dµ′. This can be seen as follows. If µ ≠ µ′, then there exists a closed cylindrical set A ⊂ Ω, such that µ(A) ≠ µ′(A), because the Borel σ-algebra on Ω is generated by such sets. The indicator of this set can be approximated by bounded local continuous functions in both L¹(µ) and L¹(µ′).

Abstract Gaussian Concentration Bound
We can now give the definition of the Gaussian concentration bound in our setting.

Definition 1.
Let Ω = S^{Z^d}, where S is a standard Borel space with a finite diameter. Let µ be a probability measure on Ω. We say that µ satisfies the Gaussian concentration bound with constant C > 0, abbreviated GCB(C), if for all bounded local functions f we have

∫ e^{f − ∫ f dµ} dµ ≤ e^{C Σ_{i∈Z^d} (δ_i f)²}.   (5)

Notice that both sides of (5) are insensitive to adding a constant to f; this "insensitivity" to constant offsets on the left-hand side is ensured by the fact that we center f around its expected value. We also observe that (5) is trivially true for functions which are constant.
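As a sanity check of the definition, the following sketch verifies a bound of the form (5) exactly for the simplest measure satisfying it: a product of symmetric Bernoulli measures, tested on linear observables. The value C = 1/8 (coming from Hoeffding's lemma, E[e^{λX}] = cosh λ ≤ e^{λ²/2} for a ±1 spin) and the coefficients a_i are assumptions of this illustration, not constants from the paper.

```python
import numpy as np
from itertools import product

# Check of (5) for the product of symmetric Bernoulli measures on
# {-1,+1}^3.  For f(sigma) = sum_i a_i sigma_i one has delta_i f = 2|a_i|,
# and Hoeffding's lemma gives E[e^{f - Ef}] = prod_i cosh(a_i)
# <= e^{sum_i a_i^2 / 2} = e^{(1/8) sum_i (delta_i f)^2},
# i.e., GCB(C) holds with C = 1/8 for such observables.

a = np.array([0.7, -0.3, 1.1])   # arbitrary coefficients for illustration
C = 1 / 8

lhs = 0.0
for sigma in product([-1, 1], repeat=len(a)):
    f = float(np.dot(a, sigma))
    lhs += np.exp(f) / 2 ** len(a)   # E f = 0 by symmetry, so f - Ef = f

delta_sq = np.sum((2 * np.abs(a)) ** 2)   # sum_i (delta_i f)^2
rhs = np.exp(C * delta_sq)
print(f"E[exp(f - Ef)] = {lhs:.4f} <= exp(C * sum (delta_i f)^2) = {rhs:.4f}")
assert lhs <= rhs
```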
This quadratic upper bound will be crucial in the sequel.
(c) The quadratic nature of the upper bound in (5) resembles the quadratic upper bound for the pressure in [12], Theorem 1.1, Equation (2.7), in terms of the Dobrushin norm. This suggests that in the Dobrushin uniqueness regime, the quadratic bound which is obtained from (5) might also be obtainable from this result. However, the Gaussian concentration inequality does not require the Dobrushin uniqueness condition; the latter is sufficient, but not necessary.
The following proposition asserts that (5) automatically extends to a wider class of functions.

Proposition 1. Assume µ satisfies GCB(C). Then the bound (5) holds for all f ∈ QL(Ω) ∩ ∆₂(Ω).
Proof. By assumption, for a fixed ξ ∈ Ω, f ∈ QL(Ω) ∩ ∆₂(Ω) can be uniformly approximated by the local functions f_{Λ,ξ}(σ) = f(σ_Λ ξ_{Λ^c}). By definition (2), δ_i f_{Λ,ξ} is non-decreasing when Λ grows, and is bounded by δ_i f. According to Lemma 1, it follows that f_{Λ,ξ} is bounded. Moreover, ‖δ f_{Λ,ξ}‖₂ is bounded by ‖δf‖₂, which is finite because f ∈ ∆₂(Ω). Therefore, using the assumed uniform convergence of f_{Λ,ξ} to f, and the assumed bound (5) for bounded local functions in ∆₂(Ω), we obtain

∫ e^{f − ∫ f dµ} dµ = lim_{Λ↑Z^d} ∫ e^{f_{Λ,ξ} − ∫ f_{Λ,ξ} dµ} dµ ≤ lim_{Λ↑Z^d} e^{C Σ_i (δ_i f_{Λ,ξ})²} = e^{C Σ_i (δ_i f)²}.

Here, in the first equality, we used the uniform convergence of f_{Λ,ξ} to f. In the last equality, we used δ_i f_{Λ,ξ} ≤ δ_i f, and by assumption Σ_i (δ_i f)² < ∞, so by dominated convergence applied to the counting measure on Z^d, we have Σ_i (δ_i f_{Λ,ξ})² → Σ_i (δ_i f)² as Λ ↑ Z^d.

Relative Entropy
For a probability measure µ, we denote by µ Λ its restriction to the sub-σ-algebra B Λ = σ{η i , i ∈ Λ}, generated by the projection p Λ : Ω → Ω Λ . We also denote by B Λ the set of bounded B Λ -measurable functions from Ω to R.
For two probability measures µ, µ′ on Ω and Λ ∈ S, we define the relative entropy of µ′ with respect to µ in the volume Λ by

s_Λ(µ′|µ) = ∫ log (dµ′_Λ / dµ_Λ) dµ′ if µ′_Λ ≪ µ_Λ, and s_Λ(µ′|µ) = +∞ otherwise.

We further denote by (Λ_n)_{n∈N} the sequence of "cubes" Λ_n = [−n, n]^d ∩ Z^d, n ≥ 1.
Definition 2 (Lower relative entropy density). For two probability measures µ, µ′ on Ω, we define the lower relative entropy density by

s*(µ′|µ) = lim inf_{n→∞} s_{Λ_n}(µ′|µ) / |Λ_n|.

We have the following variational characterization of the relative entropy (for a proof, see, for instance, [13] (p. 100)):

s_Λ(µ′|µ) = sup_f { ∫ f dµ′ − log ∫ e^f dµ_Λ },   (6)

where the supremum is taken over all B_Λ-measurable functions f, such that ∫ e^f dµ_Λ < ∞.
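The variational characterization (6) can be illustrated numerically on a single finite volume. The four-point distributions below are arbitrary choices standing in for µ′_Λ and µ_Λ; the code checks that f = log(dµ′/dµ) attains the supremum, while random test functions never exceed it.

```python
import numpy as np

# Illustration of the variational characterization (6) on a four-point
# state space:  s(mu'|mu) = sup_f { int f dmu' - log int e^f dmu },
# with the supremum attained at f = log(dmu'/dmu).

rng = np.random.default_rng(1)
mu_prime = np.array([0.4, 0.3, 0.2, 0.1])   # arbitrary probability vectors
mu = np.array([0.25, 0.25, 0.25, 0.25])

entropy = np.sum(mu_prime * np.log(mu_prime / mu))

# The optimizer f* = log(dmu'/dmu) attains the supremum exactly:
f_star = np.log(mu_prime / mu)
attained = np.dot(mu_prime, f_star) - np.log(np.dot(mu, np.exp(f_star)))
assert abs(attained - entropy) < 1e-12

# ...and no other test function does better:
for _ in range(1000):
    f = rng.normal(size=4)
    value = np.dot(mu_prime, f) - np.log(np.dot(mu, np.exp(f)))
    assert value <= entropy + 1e-12

print(f"s(mu'|mu) = {entropy:.4f}, attained by f = log(dmu'/dmu)")
```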

Main Result
In the main theorem below, we prove that the Gaussian concentration bound implies strict positivity of the lower relative entropy density. Introducing an appropriate metric on the set of probability measures, we show that the lower relative entropy density is bounded from below by a constant multiplied by the square of this distance. This result substantially generalizes the corresponding result from [9], where it is essential that the single-spin space is finite. Moreover, the proof is simpler and based on the variational formula for the relative entropy, combined with a quadratic estimate for the log-moment-generating function coming from the assumed Gaussian concentration bound.

Definition 3. Define the following distance between probability measures:

d(µ, µ′) = sup { ∫ f dµ − ∫ f dµ′ : f bounded, local and continuous, ‖δf‖₁ ≤ 1 }.   (7)

The metric defined above generates the quasi-local topology, and therefore convergence in this metric implies weak convergence. Indeed, convergence µ_n → µ in the metric d clearly implies ∫ f dµ_n → ∫ f dµ for all local continuous f, and hence also for all quasi-local continuous f. The latter implies µ_n → µ in the quasi-local topology, which coincides with the weak topology.
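As a worked illustration of this distance (assuming, as the surrounding text indicates, that (7) is the supremum of ∫f dµ − ∫f dµ′ over bounded local continuous f with ‖δf‖₁ ≤ 1), one can compute d explicitly for a pair of Bernoulli product measures:

```latex
% Worked example: d between two Bernoulli product measures.
Let $\mu_p,\mu_q$ be the product measures on $\{-1,+1\}^{\mathbb{Z}^d}$ under
which the spins are i.i.d.\ with $\mu_p(\sigma_i=+1)=p$ and
$\mu_q(\sigma_i=+1)=q$. Couple them coordinatewise so that
$\mathbb{P}(X_i\neq Y_i)=|p-q|$ independently over $i$. For any bounded
local $f$, changing the coupled configurations one site at a time gives
\[
\Big|\int f\,d\mu_p-\int f\,d\mu_q\Big|
=\big|\mathbb{E}\,[\,f(X)-f(Y)\,]\big|
\le \sum_{i\in\mathbb{Z}^d}\delta_i f\;\mathbb{P}(X_i\neq Y_i)
=|p-q|\,\|\delta f\|_1 .
\]
Hence $d(\mu_p,\mu_q)\le |p-q|$, and the single-site function
$f(\sigma)=\sigma_0/2$, for which $\|\delta f\|_1=1$ and
$\int f\,d\mu_p-\int f\,d\mu_q=p-q$, shows that in fact
$d(\mu_p,\mu_q)=|p-q|$.
```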
We can then formulate our main result.

Theorem 1. Let µ ∈ P_τ(Ω) satisfy GCB(C) for some C ∈ (0, ∞). Then, for all µ′ ∈ P_τ(Ω) with µ′ ≠ µ, we have s*(µ′|µ) > 0. More precisely, we have

s*(µ′|µ) ≥ d(µ′, µ)² / (4C),   (8)

where d is the distance (7).
We start with a lemma from [7]. For the reader's convenience, we repeat the short proof here.

Lemma 2.
For f, such that ‖δf‖₁ < +∞ and Λ ∈ S, we have

‖δ( Σ_{i∈Λ} τ_i f )‖₂ ≤ |Λ|^{1/2} ‖δf‖₁.

Proof. For Λ ⊂ Z^d, let 1_Λ denote the indicator function of Λ (that is, 1_Λ(i) = 1 if i ∈ Λ and 1_Λ(i) = 0 otherwise). Then, for every j ∈ Z^d we have

δ_j ( Σ_{i∈Λ} τ_i f ) ≤ Σ_{i∈Λ} δ_{j−i} f = (1_Λ ∗ δf)(j).

As a consequence, using Young's inequality for convolutions, we obtain

‖δ( Σ_{i∈Λ} τ_i f )‖₂ ≤ ‖1_Λ ∗ δf‖₂ ≤ ‖1_Λ‖₂ ‖δf‖₁ = |Λ|^{1/2} ‖δf‖₁.

Proof of Theorem 1. For the cube Λ_n and a bounded local function f whose dependence set is included in the cube Λ_r, for some r, it follows from (6) that

s_{Λ_{n+r}}(µ′|µ) ≥ ∫ Σ_{i∈Λ_n} τ_i f dµ′ − log ∫ e^{Σ_{i∈Λ_n} τ_i f} dµ,

where we used that Σ_{i∈Λ_n} τ_i f is measurable with respect to B_{Λ_{n+r}}. Now, if µ satisfies GCB(C) and both µ and µ′ are translation invariant, then we can estimate further as follows. Start by noticing that, through combination of the assumed GCB(C) and Lemma 2, we have

log ∫ e^{Σ_{i∈Λ_n} τ_i f − ∫ Σ_{i∈Λ_n} τ_i f dµ} dµ ≤ C |Λ_n| ‖δf‖₁².

As a consequence, using translation invariance of both µ and µ′, we obtain

s_{Λ_{n+r}}(µ′|µ) ≥ |Λ_n| ( ∫ f dµ′ − ∫ f dµ ) − C |Λ_n| ‖δf‖₁².   (9)

Consider a bounded local function f, such that ∫ f dµ′ − ∫ f dµ ≥ u > 0 (this function exists by the assumption that bounded local functions are measure separating). Put ‖δf‖₁² =: ℓ. (Observe that ℓ < ∞ by assumption, and ℓ ≠ 0 since f cannot be a constant.) Assume that the dependence set of f is included in the cube Λ_r. Replace f by βf in the inequality (9), and optimize over β. Then, we obtain, for all n ∈ N, the inequality

s_{Λ_{n+r}}(µ′|µ) ≥ sup_{β>0} ( βu|Λ_n| − Cβ²ℓ|Λ_n| ) = |Λ_n| u² / (4Cℓ).   (10)

Since r is fixed, we can take the limit inferior in n, and using |Λ_n| / |Λ_{n+r}| → 1 as n → ∞, we obtain lim inf_{n→+∞} s_{Λ_n}(µ′|µ)/|Λ_n| > 0.
From (10), we infer that for f, such that ‖δf‖₁ ≤ 1 and ∫ f dµ′ − ∫ f dµ ≥ u > 0, we have s*(µ′|µ) ≥ u²/(4C). Therefore, for every ε > 0 such that there exists a bounded local continuous f with ‖δf‖₁ ≤ 1 and ∫ f dµ′ − ∫ f dµ ≥ ε, we have

s*(µ′|µ) ≥ ε²/(4C).   (11)

By definition of the distance (7), this is equivalent with the statement that d(µ′, µ) ≥ ε implies (11). This implies (8).
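The two quantitative steps in the proof above, Lemma 2 via Young's inequality and the optimization over β, can be checked numerically in d = 1. The oscillation sequence, the volume, and the constants below are arbitrary test values for this illustration.

```python
import numpy as np

# (i) Young's inequality behind Lemma 2:
#     ||1_Lambda * delta f||_2^2 <= |Lambda| * ||delta f||_1^2.
# (ii) The quadratic optimization used after replacing f by beta*f:
#     sup_beta (beta*u - C*beta^2*L) = u^2 / (4*C*L).

delta_f = np.array([0.0, 0.5, 2.0, 0.25, 0.0, 0.1])  # arbitrary oscillations
Lam = np.ones(7)                        # indicator of a cube, |Lambda| = 7

conv = np.convolve(Lam, delta_f)        # (1_Lambda * delta f)(j)
lhs = np.sum(conv ** 2)                 # ||1_Lambda * delta f||_2^2
rhs = len(Lam) * np.sum(delta_f) ** 2   # |Lambda| * ||delta f||_1^2
assert lhs <= rhs + 1e-12

u, C, L = 0.8, 1.3, 2.5
betas = np.linspace(0, 5, 100_001)
sup_quad = np.max(betas * u - C * betas ** 2 * L)
assert abs(sup_quad - u ** 2 / (4 * C * L)) < 1e-6

print(f"Young: {lhs:.4f} <= {rhs:.4f}; sup_beta = {sup_quad:.6f} = u^2/(4CL)")
```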
The following corollary shows that convergence in relative entropy density implies convergence in the distance d. This can be used for stochastic dynamics, provided one can show that the relative entropy density converges. See the application section below for some examples.

Corollary 1. Let µ ∈ P_τ(Ω) satisfy GCB(C) for some C ∈ (0, ∞), and let (µ_n)_{n∈N} be a sequence in P_τ(Ω) such that

s*(µ_n|µ) → 0 as n → ∞.   (12)

Then d(µ_n, µ) → 0, and in particular µ_n → µ weakly.
Proof. By (8), (12) implies d(µ_n, µ)² ≤ 4C s*(µ_n|µ) → 0 as n → ∞. Therefore, we have convergence in the metric d, which, as we remarked before, implies weak convergence.

Remark 2.
As an example of application of Corollary 1, we mention the iteration of renormalization group transformations in the high-temperature regime [14], where convergence of the renormalized potentials can be established, and as a consequence, we obtain convergence of the relative entropy density. Then, Corollary 1 implies that the renormalized measures converge in the metric d at least as fast as the potentials. In the context of stochastic dynamics, i.e., where µ_n is a time-evolved measure (at time n), it is usually not simple to obtain the convergence s*(µ_n|µ) → 0.
In the high-temperature setting (high-temperature dynamics, high-temperature initial measure) this can be obtained with similar means as in [14].
We conclude this section with two further remarks relating our result to the Bobkov-Götze criterion.

Remark 3.
Our distance d(µ, µ′) between probability measures resembles the so-called Dobrushin distance, denoted by D(µ, µ′), which consists of taking the supremum of ∫ f dµ − ∫ f dµ′ over a wider set of functions: namely, f is only required to be measurable and such that ‖δf‖₁ ≤ 1. Hence d(µ, µ′) ≤ D(µ, µ′) for a general pair µ, µ′ of probability measures. In the special case of finite S, one has d = D. In [15], it is proved that D is equal to what the authors called the Steiff distance d̄, which is defined in terms of couplings, and which generalizes the Ornstein distance. The equality between D and d̄ is reminiscent of the Kantorovich-Rubinstein duality theorem.

Remark 4. Inequality (8) is reminiscent of a well-known abstract inequality relating the relative entropy and the Wasserstein distance due to Bobkov and Götze [10]. However, our context is different, because we consider the thermodynamic limit and the relative entropy density. Nevertheless, as shown in [7], we can exploit the Bobkov-Götze theorem in the special case of finite S, putting the Hamming distance on S^{Λ_n}, to obtain a lower bound of s*(ν|µ) in terms of the square of the corresponding (Ornstein-type) distance d̄.
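A classical finite-dimensional analogue of such entropy-versus-squared-distance inequalities is Pinsker's inequality: for product Bernoulli measures, the relative entropy density is the per-site Kullback-Leibler divergence, and the Ornstein-type normalized Hamming transportation distance equals |p − q|. The following check illustrates the shape of the bound; Pinsker's inequality is a classical fact, not a result of this paper.

```python
import numpy as np

# Pinsker's inequality KL(q||p) >= 2 (p - q)^2 for Bernoulli measures:
# a per-site lower bound of relative entropy by a squared distance,
# analogous in shape to inequality (8).

def kl_bernoulli(q, p):
    return q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))

rng = np.random.default_rng(2)
for _ in range(1000):
    p, q = rng.uniform(0.01, 0.99, size=2)
    assert kl_bernoulli(q, p) >= 2 * (p - q) ** 2 - 1e-12

print("Pinsker's inequality verified on 1000 random Bernoulli pairs")
```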

Applications: Uniqueness of Equilibrium States and Beyond
In this section, we provide some settings where we can conclude uniqueness of a set of "(generalized) translation-invariant Gibbs measures" via Theorem 1. We start with the set of translation-invariant Gibbs measures associated with an absolutely summable potential. Then, we consider generalizations and modifications of such sets.

Uniqueness of Equilibrium States
In this subsection, we briefly introduce the necessary basics of Gibbs measures. The reader familiar with the theory of Gibbs measure can skip this subsection. The reader is referred to [1] (especially Chapter 16) or [11] (Chapter 2) for more background on the Gibbs formalism.
Let λ be a probability measure on S, and for Λ ∈ S, let λ_Λ(dσ_Λ) = ⊗_{i∈Λ} λ(dσ_i) denote the corresponding product measure on S^Λ. The measure λ is called the "a priori" measure on S, with associated a priori measure ⊗_{i∈Z^d} λ(dσ_i) on Ω.
We call a uniformly absolutely summable translation-invariant potential a function U : S × Ω → R with the following properties:
(a) Locality: for all A ∈ S, U(A, ·) is B_A-measurable and continuous.
(b) Uniform absolute summability: Σ_{A∋0} sup_{σ∈Ω} |U(A, σ)| < ∞.
(c) Translation invariance: U(A + i, τ_i σ) = U(A, σ) for all A ∈ S, i ∈ Z^d, and σ ∈ Ω.
Let us call U the set of uniformly absolutely summable translation-invariant potentials. Then, we build the local Gibbs measures with boundary condition ξ ∈ Ω. For a finite subset Λ ∈ S, the Gibbs measure in volume Λ with boundary condition ξ outside Λ is defined via

γ_Λ(dσ_Λ | ξ) = (e^{−H^ξ_Λ(σ_Λ)} / Z^ξ_Λ) λ_Λ(dσ_Λ),

where Z^ξ_Λ is the normalizing constant, and H^ξ_Λ is the Hamiltonian in volume Λ with boundary condition ξ:

H^ξ_Λ(σ_Λ) = Σ_{A∩Λ≠∅} U(A, σ_Λ ξ_{Λ^c}).

The family (γ_Λ(dσ_Λ|·))_{Λ∈S} is called the Gibbsian specification associated with the potential U (with a priori measure λ). By the uniform absolute summability of U, we automatically have that for all f local and continuous, the function ξ → ∫ f(σ_Λ ξ_{Λ^c}) γ_Λ(dσ_Λ|ξ) is quasi-local and continuous. We say that the specification γ_Λ(·|·) is quasi-local.
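A minimal concrete instance of the specification is the one-dimensional nearest-neighbour Ising potential U({i, i+1}, σ) = −J σ_i σ_{i+1} with uniform a priori measure on S = {−1, +1}. The coupling J, the volume, and the boundary spins below are arbitrary choices for this illustration.

```python
import numpy as np
from itertools import product

# Finite-volume Gibbs measure gamma_Lambda( . | xi) for the 1D
# nearest-neighbour Ising potential in Lambda = {0, .., L-1}, with
# boundary condition xi fixed outside Lambda and uniform a priori measure.

J, L = 0.5, 4
xi_left, xi_right = +1, -1          # boundary condition outside Lambda

def hamiltonian(sigma):
    # H^xi_Lambda: interactions inside Lambda plus those crossing its boundary
    inside = -J * sum(sigma[i] * sigma[i + 1] for i in range(L - 1))
    boundary = -J * (xi_left * sigma[0] + sigma[-1] * xi_right)
    return inside + boundary

configs = list(product([-1, 1], repeat=L))
weights = np.array([np.exp(-hamiltonian(s)) for s in configs])
gibbs = weights / weights.sum()      # gamma_Lambda( . | xi)
assert abs(gibbs.sum() - 1.0) < 1e-12

# The boundary condition breaks the +/- symmetry: each spin tends to
# follow the nearest boundary spin.
m0 = sum(p * s[0] for p, s in zip(gibbs, configs))
mL = sum(p * s[-1] for p, s in zip(gibbs, configs))
assert m0 > 0 > mL
print(f"<sigma_0> = {m0:.4f}, <sigma_{L-1}> = {mL:.4f}")
```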
We then call a measure µ Gibbs with potential U (and a priori measure λ) if µ is consistent with the finite-volume Gibbs measures, i.e., if for all f : Ω → R bounded and measurable, and Λ ∈ S, we have

∫ ( ∫ f(σ_Λ ξ_{Λ^c}) γ_Λ(dσ_Λ | ξ) ) µ(dξ) = ∫ f dµ.

We denote by G_τ(U) the set of translation-invariant Gibbs measures associated with the potential U. These measures are called the "equilibrium states" associated with U.

Remark 5.
Because our techniques are based on relative entropy density, we cannot exclude the existence of non-translation-invariant Gibbs measures, even in the presence of a translation-invariant Gibbs measure satisfying GCB(C). In other words, even if there exists a unique equilibrium state, there might still be non-translation-invariant Gibbs measures. We believe, however, that the presence of a translation-invariant Gibbs measure satisfying GCB(C) implies a stronger form of uniqueness, which excludes the presence of non-translation-invariant Gibbs measures.

Sets of Zero-Information Distance
The example of the set of equilibrium states from the previous subsection leads naturally to the more general notion of "zero-information distance sets" defined below.

Definition 4. We call a subset K ⊂ P_τ(Ω) a zero-information distance set if for all µ, µ′ ∈ K, s*(µ|µ′) = s*(µ′|µ) = 0.
From Theorem 1, we then immediately obtain the following proposition.

Proposition 3. Let K ⊂ P_τ(Ω) be a zero-information distance set. If there exists µ ∈ K which satisfies GCB(C) for some C ∈ (0, ∞), then K is a singleton.
We provide four further examples (beyond equilibrium states) of such zero-information distance sets, illustrating Proposition 3.

(a) Asymptotically decoupled measures and Π f -compatible measures.
A first generalization of the Gibbsian context is provided in the realm of "asymptotically decoupled measures" via the notion of Π f -compatible measures, see [16]. This setting goes beyond quasi-local specifications, and therefore includes many relevant examples of non-Gibbsian measures.
In this setting, the set of Π_f-compatible measures (associated with a local function f) is a zero-information distance set (see [16], Theorem 4.1), and therefore, if this set contains an element µ satisfying GCB(C), then it coincides with the singleton {µ}.

(b) Renormalization group transformations of Gibbs measures.
Another important class of examples is the following. We say that a transformation T : P_τ(Ω) → P_τ(Ω′) preserves zero-information distance sets if a zero-information distance set is mapped by T onto a zero-information distance set. Important examples of such transformations T are local and translation-invariant renormalization group transformations studied in [11], Section 3.1, p. 960, conditions T1-T2-T3. Examples of such transformations include block-spin averaging, decimation, and stochastic transformations such as the Kadanoff transformation. Because the transformations are "local and translation-invariant probability kernels", one immediately infers the property s*(µT|νT) ≤ s*(µ|ν). In this setting, Proposition 3 implies that if U ∈ U, µ ∈ G_τ(U) is an associated translation-invariant Gibbs measure, and µT satisfies GCB(C) for some C ∈ (0, ∞), then ν = µT for all ν, such that s*(ν|µT) = 0. In particular, this implies that µ′T = µT for all µ′ ∈ G_τ(U). Indeed, in that case s*(µ′T|µT) ≤ s*(µ′|µ) = 0. Notice that µT can be non-Gibbs; therefore, the implication ν = µT for all ν such that s*(ν|µT) = 0 cannot be derived from the variational principle.

(c) Projections of Gibbs measures.
Let µ be a translation-invariant Gibbs measure on the state space S^{Z^d} (associated with a translation-invariant potential) which satisfies GCB(C) for some C ∈ (0, ∞). For d′ < d, let µ_{d′} denote its restriction to the sublattice L_{d′} := {(x_1, . . . , x_{d′}, 0, . . . , 0) : x_1, . . . , x_{d′} ∈ Z}. It is clear that µ_{d′} satisfies GCB(C) with the same constant C ∈ (0, ∞). Therefore, any translation-invariant measure on S^{L_{d′}} that differs from µ_{d′} has strictly positive lower relative entropy density with regard to µ_{d′}. As a consequence, if µ_{d′} is a Gibbs measure for a translation-invariant potential U_{d′}, then this potential U_{d′} has no other translation-invariant Gibbs measures. This gives uniqueness for a set of Gibbs measures where the potential is only implicitly defined, and can be complicated, i.e., uniqueness is not a consequence of a simple criterion. Projections of Gibbs measures arise naturally in the context of probabilistic cellular automata, where the stationary measures are projections of the space-time Gibbs measures [4]. In this setting, the result tells us that if the space-time measure satisfies GCB(C) for some C > 0, then the unique stationary measure, if Gibbs, has a potential with a unique equilibrium state. Projections of Gibbs measures can fail to be Gibbs, as is shown in [17] for the projection of the low-temperature Ising model in d = 2 on the X-axis. It is an open and interesting problem to investigate whether this projected measure satisfies the Gaussian concentration bound.

(d) Stationary measures for Ising spin Glauber dynamics.
An additional example of a zero-information distance set is the set of stationary and translation-invariant measures for (Ising spin, i.e., S finite) Glauber dynamics, under the condition that this set contains at least one translation-invariant Gibbs measure as a stationary measure; see [18], Section 4. See also [19,20] for earlier results in the setting of reversible Glauber dynamics, and [21] for recent results in this spirit for more general local dynamics. As a consequence of Proposition 3, we then conclude that if there exists a translation-invariant Gibbs measure ν as stationary measure, and there exists a translation-invariant stationary measure µ satisfying GCB(C) for some C > 0, then µ and ν coincide, and µ is the unique translation-invariant stationary measure. Moreover, if, in this setting, one can show that when starting the dynamics from a translation-invariant initial measure µ and denoting by µ_t the measure at time t > 0, we have s*(µ_t|ν) → 0 as t → ∞, then, from Corollary 1, we obtain that µ_t → ν as t → ∞ in the sense of the distance (7).

Generalization
In the setting of Section 2.1, without the additional assumption of finiteness of the diameter of S, the definition of the oscillation of f in (2) is no longer appropriate. Indeed, it becomes natural to include unbounded functions, for which (2) is infinite. Consider, e.g., S = R and Ω = S^{Z^d} equipped with a product of Gaussian measures; then the function f(η) = η_i should be a possible choice. We consider now a general standard Borel space S, which is such that for the product space Ω = S^{Z^d}, quasi-local bounded functions are measure separating.
In order to proceed, we therefore associate with a function f : Ω → R an abstract sequence of oscillations δ f = (δ i f ) i∈Z d satisfying the following conditions. Definition 5. We say that a map δ : C (Ω) → [0, ∞] Z d is an allowed sequence of oscillations if the following four conditions are met.

2.
Non-degeneracy: δ i f is zero for a function f if and only if f does not depend on the i-th coordinate, i.e., δ i f = 0 if and only if for all η, σ such that η j = σ j for all j = i, f (η) = f (σ).
3. Compatibility with local approximations: for f ∈ C(Ω), ξ ∈ Ω, and Λ ∈ S, let f_{Λ,ξ}(σ) = f(σ_Λ ξ_{Λ^c}) denote the local approximations of f. Then, we require that for all ξ, for all Λ, and for all i ∈ Z^d, δ_i f_{Λ,ξ} ≤ δ_i f.

4.
Degree one homogeneity: δ i (β f ) = |β|δ i f for all β ∈ R and for all i ∈ Z d .
The most natural example different from (2) is a Lipschitz-type oscillation, in which the difference |f(σ) − f(η)|, for σ, η differing only at site i, is measured relative to the distance d_S(σ_i, η_i) between the two spins at site i. More generally, one can define weighted versions of such oscillations. For a given sequence of oscillations δ, we call a function f δ-Lipschitz if sup_{i∈Z^d} δ_i f < ∞. We then introduce ∆_p(Ω) = { f ∈ C(Ω) : ‖δf‖_p < ∞ }.
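For the standard oscillation (2), the axioms of Definition 5 stated above (non-degeneracy and degree one homogeneity) can be verified by brute force on a two-site example; the test functions below are arbitrary choices for this illustration.

```python
from itertools import product

# Brute-force check, on {-1,+1}^2, that the standard oscillation (2)
# satisfies non-degeneracy and degree one homogeneity (Definition 5).

configs = list(product([-1, 1], repeat=2))

def delta(f, i):
    # delta_i f for the standard oscillation (2): sup over configurations
    # differing only at site i
    return max(abs(f(s) - f(s[:i] + (-s[i],) + s[i + 1:])) for s in configs)

f = lambda s: s[0] * s[1] + s[0]
g = lambda s: 3.0 * s[1]            # does not depend on site 0

# Non-degeneracy: delta_0 g = 0 exactly because g ignores coordinate 0
assert delta(g, 0) == 0 and delta(g, 1) > 0

# Degree one homogeneity: delta_i(beta f) = |beta| delta_i f
for beta in (-2.0, 0.5, 3.0):
    for i in (0, 1):
        scaled = lambda s: beta * f(s)
        assert abs(delta(scaled, i) - abs(beta) * delta(f, i)) < 1e-12

print("non-degeneracy and homogeneity hold for the oscillation (2)")
```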

Definition 6.
Let Ω = S^{Z^d}, where S is a standard Borel space. Assume an allowed sequence of oscillations δ is given. Let µ be a probability measure on Ω. We say that µ satisfies the Gaussian concentration bound with regard to δ with constant C > 0 (still abbreviated GCB(C)), if for all bounded local functions f we have

∫ e^{f − ∫ f dµ} dµ ≤ e^{C Σ_{i∈Z^d} (δ_i f)²},

where now δ_i f denotes the given abstract oscillation. We then have the following analogue of Theorem 1. Because the proof follows exactly the same steps as the proof of Theorem 1, we leave it to the reader.

Theorem 2.
Assume δ is an allowed sequence of oscillations. Assume that the set of bounded local δ-Lipschitz functions is measure separating. If µ is translation invariant and satisfies GCB(C), then for all µ′ translation invariant with µ′ ≠ µ, we have s*(µ′|µ) > 0.
As a final comment, we remark that the fact that we have chosen the group Z d is for the sake of simplicity. We can work with more general amenable groups, as in [22].

Author Contributions:
The authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.