Abstract
For a general class of lattice spin systems, we prove that an abstract Gaussian concentration bound implies positivity of the lower relative entropy density. As a consequence, we obtain uniqueness of translation-invariant Gibbs measures from the Gaussian concentration bound in this general setting. This extends earlier results with a different and very short proof.
1. Introduction
1.1. Uniqueness Criteria for Gibbs Measures
In mathematical statistical mechanics, it is important to have good and useful criteria for the absence of phase transition, or equivalently, uniqueness of the Gibbs measure associated with a given potential. Such criteria, also known under the name high-temperature criteria, show that when the interaction is small enough (high temperature), there is no phase transition, and the unique phase has strong mixing properties, i.e., it is close to a product measure (infinite temperature).
The most famous among such criteria is the Dobrushin uniqueness criterion; see, e.g., ([1], Chapter 8). Under the Dobrushin uniqueness criterion, besides uniqueness, one derives strong mixing properties of the unique Gibbs measure, i.e., quantitative bounds on the decay of covariance of local observables, and quantitative bounds on the difference between finite and infinite-volume expectations, i.e., on the influence of the boundary condition on the expectation of a local function. The basic idea behind the Dobrushin uniqueness criterion is that, when it holds, the conditional expectation operator acts as a contraction on the space of probability measures equipped with the Wasserstein distance. Because the Gibbs measure is a fixed point of this contraction, and fixed points of contractions are unique, one obtains uniqueness of Gibbs measures. Later on, the Dobrushin criterion was generalized to the Dobrushin–Shlosman criterion, and a connection has been made between this criterion and an important functional inequality, the log-Sobolev inequality. More precisely, for finite-range Glauber dynamics of Ising spins, in [2] the equivalence between the Dobrushin–Shlosman criterion and the log-Sobolev inequality was proved. This implies, e.g., that under the Dobrushin–Shlosman criterion, the reversible Glauber dynamics converges exponentially fast (in L²) to its unique stationary measure.
Related to the Dobrushin criterion, there is a general criterion in the context of interacting particle systems under which one obtains uniqueness of the stationary measure and uniform ergodicity, i.e., from any initial measure, the dynamics converges exponentially fast in time to the unique stationary measure. This criterion, the so-called “M &lt; ε criterion” ([3], Chapter 1), is based on a similar contraction argument: when it holds, the semigroup of the interacting particle system acts as a contraction on a suitable space of functions, equipped with a norm (the so-called triple (semi)-norm) which controls the oscillations of a function. As in the setting of Dobrushin uniqueness, also under the M &lt; ε criterion, one obtains strong mixing properties of the unique stationary measure.
In the context of probabilistic cellular automata, as well as in the context of Glauber dynamics, the M &lt; ε criterion is shown to be equivalent to the Dobrushin uniqueness criterion for the space–time Gibbs measure; see [4,5].
1.2. Concentration Inequalities
Concentration inequalities are inequalities in which one estimates the deviation between a function f(X_1, …, X_n) of n random variables and its expectation E[f]. The idea is that whenever the function depends only weakly on the individual variables X_i, and the distribution of (X_1, …, X_n) is a product, or close to a product, then the probability that f deviates from E[f] becomes very small. To measure the dependence of f on individual coordinates, one considers, e.g., the oscillation

δ_i(f) = sup { |f(x) − f(y)| : x_j = y_j for all j ≠ i }.
An important example of a concentration inequality is the so-called Gaussian concentration bound

P( |f(X_1, …, X_n) − E[f]| ≥ u ) ≤ 2 exp( − u² / (C ∑_{i=1}^n δ_i(f)²) ),   (1)

where C &gt; 0 is a constant which does not depend on f, and in particular, it does not depend on n. For instance, if X_i ∈ [0, 1] and f(x) = n^{−1} ∑_{i=1}^n x_i, then δ_i(f) = 1/n, so ∑_{i=1}^n δ_i(f)² = 1/n, and we find the upper bound 2 e^{−n u²/C} for all u &gt; 0. The power of concentration inequalities of the type (1) is that they hold for general f, i.e., far beyond empirical averages.
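As a concrete illustration (our own sketch, not taken from the paper), the bound for the empirical mean can be checked by simulation; below we assume the form (1) with the Hoeffding constant C = 1/2 for i.i.d. Bernoulli variables, and compare the bound with a Monte Carlo estimate of the deviation probability.

```python
import math
import random

def empirical_deviation_prob(n, u, trials=20000, p=0.5, seed=0):
    """Estimate P(|mean of n Bernoulli(p) - p| >= u) by simulation."""
    rng = random.Random(seed)
    count = 0
    for _ in range(trials):
        s = sum(rng.random() < p for _ in range(n))
        if abs(s / n - p) >= u:
            count += 1
    return count / trials

n, u, C = 100, 0.15, 0.5
# For the empirical mean, delta_i(f) = 1/n, hence sum_i delta_i(f)^2 = 1/n,
# and (1) reads P(|f - E f| >= u) <= 2*exp(-n*u^2/C).
bound = 2 * math.exp(-u**2 / (C * (1 / n)))
est = empirical_deviation_prob(n, u)
assert est <= bound
```

The estimated probability is in fact far below the bound, which is the usual picture: concentration inequalities are not sharp, but they hold uniformly over a very large class of functions f.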
Concentration inequalities in the context of Gibbs measures for lattice spin systems have been studied in several works. In particular, in [6] the author proves an inequality of the type (1) under the Dobrushin uniqueness criterion. See [7] for a recent overview of concentration inequalities in the context of Gibbs measures.
1.3. Concentration and Uniqueness
The central question of this paper is the following. Assume that a Gibbs measure associated with a given potential satisfies a Gaussian concentration bound, i.e., an inequality of the type (1). Can we then conclude that it is the unique Gibbs measure, i.e., that there is no phase transition?
In this paper, we restrict ourselves to translation-invariant Gibbs measures (i.e., so-called equilibrium states), but in a very general setting. Following [8], we started in [9] the study of the relation between the Gaussian concentration bound and the uniqueness of equilibrium states in the context of spin systems on the lattice ℤ^d, where the spin at each lattice site takes a finite number of values. Examples there include the Ising model at high temperature. Notice that for this model, at low temperature in dimension d ≥ 2, there is a phase transition, and the large deviation probabilities of the magnetization are surface-like, rather than volume-like. This manifestation of a phase transition excludes the Gaussian concentration bound, under which all ergodic averages have volume-like large-deviation probabilities.
Here, we show uniqueness of equilibrium states under an inequality of the type (1), and next, we generalize both the context of the concentration inequality, as well as the context of Gibbs measures, showing uniqueness in the context of so-called zero-information sets. An important result in the context of equilibrium states is the variational principle, which implies that the relative entropy density between two equilibrium states is zero. Therefore, if one can show a strictly positive lower bound for relative entropy density, one obtains uniqueness of the equilibrium states. The set of equilibrium states associated with a given translation-invariant potential is a special case of a set in which the relative entropy density between two elements of the set is always zero. We call such a set a zero-information set, and generalize our results of uniqueness to this context, which includes, e.g., transformations of Gibbs measures, and stationary measures of certain interacting particle systems.
1.4. Content and Organization of the Paper
As sketched above, we obtain a lower bound for the lower relative entropy density in terms of a natural distance between translation-invariant probability measures, reminiscent and in the spirit of the results of Bobkov and Götze [10], who proved (in a different setting) a lower bound for the relative entropy in terms of the square of the Wasserstein distance. Because we work in the thermodynamic limit on a product space and are interested in translation-invariant probability measures, there is no translation-invariant distance on the configuration space for which we can apply the Bobkov-Götze theorem. We can avoid this problem by introducing a suitable distance on the translation-invariant probability measures (rather than on configurations).
We start by proving the lower bound on the lower relative entropy density in the context of general lattice spin systems with state space Ω = S^{ℤ^d}, where the single spins take values in a metric space S of bounded diameter. The bounded diameter property allows us to associate with a quasi-local function f a natural sequence of oscillations (δ_i(f))_{i∈ℤ^d}, where δ_i(f) represents the maximal influence on the function f of a change in the spin at site i. In the final section of this paper, we provide a generalization of this by allowing more abstract single-spin spaces, and more general associated sequences of oscillations.
The rest of our paper is organized as follows. In Section 2, we introduce the basic setting of lattice spin systems and important function spaces. In Section 3, we introduce the Gaussian concentration bound, the relative entropy (density), and formulate and prove our main result in the context of a single-spin space with finite diameter. In Section 4, we discuss applications of our result to zero information distance sets, including, e.g., the set of equilibrium states with regard to absolutely summable translation-invariant potentials. In Section 5, we consider a generalization by introducing an abstract sequence of oscillations, the associated Gaussian concentration bound and state, and prove the analogue of our main result in this generalized context.
2. Setting
2.1. Configuration Space and the Translation Operator
We start from a standard Borel space (S, 𝔅(S)) with metric d_S. (A measurable space is said to be standard Borel if there exists a metric on S which makes it a complete separable metric space, and 𝔅(S) then denotes the associated Borel σ-algebra.) In the sequel, for notational convenience, we omit the symbol 𝔅(S) and call S a standard Borel space, where we always assume that the associated σ-algebra is the Borel σ-algebra 𝔅(S).
We assume that S has finite diameter, i.e., diam(S) = sup_{a,b∈S} d_S(a, b) &lt; ∞. Later on, in Section 5, we will show how to weaken this assumption.
This space S represents the “single-spin space”, i.e., we will consider lattice spin configurations in which individual “spins” take values in S. We denote by Ω the product space S^{ℤ^d}, which stands for the lattice spin configuration space. We equip this space with the product topology. Elements of Ω are called configurations. For x ∈ Ω, we denote by x_i its evaluation at site i ∈ ℤ^d. By x_Λ we mean an element of S^Λ, and by x_Λ y_{Λ^c}, a configuration coinciding with x on Λ and with y on Λ^c. We denote by 𝒮 the set of finite subsets of ℤ^d.
We denote by τ_i, i ∈ ℤ^d, the map which shifts, or translates, by i; that is, τ_i : ℤ^d → ℤ^d, τ_i(j) = j + i. We define the translation operator acting on configurations as follows (and use the same symbol): for each i ∈ ℤ^d and x ∈ Ω, (τ_i x)_j = x_{j−i} for all j ∈ ℤ^d. This corresponds to translating x forward by i. We denote by the same symbol the translation operator acting on a function f : Ω → ℝ: for each i ∈ ℤ^d, τ_i f is the function defined as (τ_i f)(x) = f(τ_i x). A (Borel) probability measure μ on Ω is translation invariant if, for all i ∈ ℤ^d and all bounded measurable f, we have ∫ τ_i f dμ = ∫ f dμ.
We denote by 𝒫_inv(Ω) the set of translation-invariant probability measures on Ω. We denote by C(Ω), resp. C_b(Ω), the space of continuous, respectively bounded continuous, real-valued functions on Ω.
2.2. Local Oscillations and Function Spaces
To a continuous function f : Ω → ℝ, we associate a “sequence” of “local oscillations”, (δ_i(f))_{i∈ℤ^d}, defined via

δ_i(f) = sup { |f(x) − f(y)| : x, y ∈ Ω, x_j = y_j for all j ≠ i }.   (2)
Later on, in Section 5, in which we consider the case where S is allowed to have infinite diameter, we will consider a more abstract definition of . In the case where S has finite diameter, (2) is the most natural choice.
For an integer p ≥ 1, we define the usual ℓ^p-norm of δ(f) = (δ_i(f))_{i∈ℤ^d}, namely ‖δ(f)‖_p = (∑_{i∈ℤ^d} δ_i(f)^p)^{1/p}, and ‖δ(f)‖_∞ = sup_{i∈ℤ^d} δ_i(f).
We call a continuous function f local if δ_i(f) ≠ 0 for finitely many i ∈ ℤ^d. The set D_f = { i ∈ ℤ^d : δ_i(f) ≠ 0 } is then called the dependence set of f. We denote by 𝓛 the set of local continuous functions on Ω.
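As an illustration (our own sketch, not part of the paper), for Ising spins in {−1, +1} the oscillations δ_i(f) of a local function can be computed by brute-force enumeration over the configurations in a finite window; the test function f and the site labels below are arbitrary choices.

```python
from itertools import product

SPINS = (-1, 1)

def oscillation(f, sites, i):
    """delta_i(f): max |f(x) - f(y)| over configurations on `sites`
    that agree everywhere except possibly at site i."""
    others = [s for s in sites if s != i]
    best = 0
    for base in product(SPINS, repeat=len(others)):
        fixed = dict(zip(others, base))
        vals = []
        for a in SPINS:
            cfg = dict(fixed)
            cfg[i] = a          # flip only the spin at site i
            vals.append(f(cfg))
        best = max(best, max(vals) - min(vals))
    return best

f = lambda cfg: cfg[0] * cfg[1]   # local function with dependence set {0, 1}
sites = [0, 1, 2]
deltas = {i: oscillation(f, sites, i) for i in sites}
dep_set = {i for i, d in deltas.items() if d != 0}
assert dep_set == {0, 1}
```

Here δ_0(f) = δ_1(f) = 2 and δ_2(f) = 0, so the dependence set D_f = {0, 1}, in line with the definition above.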
We call a continuous function quasi-local if it is the uniform limit of a sequence of local continuous functions. If S is compact, then, according to the Stone–Weierstrass theorem, local continuous functions are uniformly dense in C(Ω).
We denote by QL(Ω) the space of all continuous quasi-local functions on Ω. For p ≥ 1, we introduce the spaces

Δ_p(Ω) = { f ∈ QL(Ω) : ‖δ(f)‖_p &lt; ∞ }.
Lemma 1.
If f ∈ Δ_1(Ω), then f is bounded. If f is local and ‖δ(f)‖_p &lt; ∞ for some p ≥ 1, then f is bounded.
Proof.
Choose y ∈ Ω. We have for every Λ ∈ 𝒮 and every x ∈ Ω

|f(x_Λ y_{Λ^c}) − f(y)| ≤ ∑_{i∈Λ} δ_i(f),   (3)

which, upon taking the limit Λ ↑ ℤ^d, using the assumed quasilocality of f, gives

|f(x) − f(y)| ≤ ∑_{i∈ℤ^d} δ_i(f) = ‖δ(f)‖_1.   (4)

If f is local, then we still have the inequality (3) for Λ containing the dependence set of f. Because, by assumption, this dependence set is finite, it follows that δ_i(f) ≤ ‖δ(f)‖_p &lt; ∞ for all i, and therefore ‖δ(f)‖_1 &lt; ∞. Then, we obtain (4), which implies that f is bounded. □
We say that μ_n → μ if, for all bounded continuous local functions f, we have ∫ f dμ_n → ∫ f dμ (then, by definition of quasilocality, the same holds for bounded continuous quasi-local functions). This induces the so-called weak quasi-local topology on probability measures. Notice that in our setting, where by assumption the single-spin space S is a complete separable metric space, this topology coincides with the ordinary weak topology; see [11] (p. 898).
In our setting, the set of bounded quasi-local continuous functions is measure separating, i.e., for two probability measures μ ≠ ν, there exists a bounded quasi-local continuous f, such that

∫ f dμ ≠ ∫ f dν.
Because, by definition, bounded continuous quasi-local functions can be uniformly approximated by bounded continuous local functions, if the set of bounded quasi-local functions is measure separating, then the set of bounded continuous local functions is also measure separating. Therefore, in our setting, for two probability measures μ ≠ ν, there exists a bounded local f (which is not constant), such that

∫ f dμ ≠ ∫ f dν.
This can be seen as follows. If μ ≠ ν, then there exists a bounded closed cylindrical set A, such that μ(A) ≠ ν(A), because the Borel σ-algebra on Ω is generated by such sets. The indicator of this set can be approximated by bounded local continuous functions in both L¹(μ) and L¹(ν).
3. Gaussian Concentration Bound and Relative Entropy
3.1. Abstract Gaussian Concentration Bound
We can now give the definition of the Gaussian concentration bound in our setting.
Definition 1.
Let Ω = S^{ℤ^d}, where S is a standard Borel space with a finite diameter. Let μ be a probability measure on Ω. We say that μ satisfies the Gaussian concentration bound with constant C &gt; 0, abbreviated GCB(C), if for all bounded local functions f we have

E_μ[ e^{f − E_μ(f)} ] ≤ e^{C ∑_{i∈ℤ^d} δ_i(f)²}.   (5)
Remark 1.
- (a)
- Observe that the bound (5) does not change if f is replaced by f + c, where c ∈ ℝ is arbitrary, since δ_i(f + c) = δ_i(f) for any i. This “insensitivity” to constant offsets on the left-hand side is ensured by the fact that we center f around its expected value. We also observe that (5) is trivially true for functions which are constant.
- (b)
- We have δ_i(λf) = λ δ_i(f), for all λ &gt; 0 and i ∈ ℤ^d; applying (5) to λf, we thus have

log E_μ[ e^{λ(f − E_μ(f))} ] ≤ C λ² ∑_{i∈ℤ^d} δ_i(f)².

This quadratic upper bound will be crucial in the sequel.
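The way this quadratic bound enters later is through a Chernoff-type optimization: writing b = ∑_i δ_i(f)², the bound λu − Cλ²b on the exponent, optimized over λ &gt; 0, gives u²/(4Cb), attained at λ = u/(2Cb). A small numerical sketch (illustrative numbers of our own choosing) confirming the closed form:

```python
# Maximize lambda*u - C*lambda^2*b over lambda > 0 on a fine grid and
# compare with the closed form u^2 / (4*C*b), attained at lambda = u/(2*C*b).
C, b, u = 0.7, 2.0, 1.3   # b plays the role of sum_i delta_i(f)^2
step = 1e-4
best = max(lam * u - C * lam**2 * b for lam in (k * step for k in range(1, 200001)))
closed_form = u**2 / (4 * C * b)
assert abs(best - closed_form) < 1e-6
```

This elementary optimization is exactly the step used in the proof of Theorem 1 below, with u replaced by the difference of expectations of a test function under ν and μ.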
- (c)
- The quadratic nature of the upper bound in (5) resembles the quadratic upper bound for the pressure in [12], Theorem 1.1, Equation (2.7), in terms of the Dobrushin norm. This suggests that in the Dobrushin uniqueness regime, the quadratic bound which is obtained from (5) might also be obtainable from this result. However, the Gaussian concentration inequality does not require the Dobrushin uniqueness condition; the latter is sufficient, but not necessary.
The following proposition asserts that (5) automatically extends to a wider class of functions.
Proposition 1
(Self-enhancement of GCB). Suppose that (5) holds for all bounded local f. Then, it holds for all f ∈ Δ₂(Ω).
Proof.
By assumption, for a fixed y ∈ Ω, f ∈ Δ₂(Ω) can be uniformly approximated by the local functions

f_Λ(x) = f(x_Λ y_{Λ^c}),   Λ ∈ 𝒮.

By definition (2), δ_i(f_Λ) is non-decreasing when Λ grows, and is bounded by δ_i(f). According to Lemma 1, it follows that f_Λ is bounded. Therefore, ∑_{i∈ℤ^d} δ_i(f_Λ)² is bounded by ∑_{i∈ℤ^d} δ_i(f)², which is finite because f ∈ Δ₂(Ω). Therefore, using the assumed uniform convergence of f_Λ to f, and the assumed bound (5) for bounded local functions, we obtain

E_μ[ e^{f − E_μ(f)} ] = lim_{Λ↑ℤ^d} E_μ[ e^{f_Λ − E_μ(f_Λ)} ] ≤ lim_{Λ↑ℤ^d} e^{C ∑_{i∈ℤ^d} δ_i(f_Λ)²} = e^{C ∑_{i∈ℤ^d} δ_i(f)²}.

Here, in the first equality, we used the uniform convergence of f_Λ to f. In the last equality, we used lim_{Λ↑ℤ^d} δ_i(f_Λ) = δ_i(f), and by assumption ∑_{i∈ℤ^d} δ_i(f)² &lt; ∞, so by dominated convergence applied to the counting measure on ℤ^d, we have

lim_{Λ↑ℤ^d} ∑_{i∈ℤ^d} δ_i(f_Λ)² = ∑_{i∈ℤ^d} δ_i(f)².
□
3.2. Relative Entropy
For a probability measure μ and Λ ∈ 𝒮, we denote by μ_Λ its restriction to the sub-σ-algebra ℱ_Λ generated by the projection x ↦ x_Λ. We also denote by B_Λ the set of bounded ℱ_Λ-measurable functions from Ω to ℝ.
For two probability measures μ and ν on Ω, and Λ ∈ 𝒮, we define the relative entropy of ν_Λ with respect to μ_Λ by

s_Λ(ν|μ) = ∫ (dν_Λ/dμ_Λ) log (dν_Λ/dμ_Λ) dμ_Λ if ν_Λ ≪ μ_Λ, and s_Λ(ν|μ) = +∞ otherwise.
We further denote by (Λ_n)_{n∈ℕ} the sequence of “cubes” Λ_n = [−n, n]^d ∩ ℤ^d, n ∈ ℕ.
Definition 2
(Lower relative entropy density). For two probability measures μ, ν on Ω, we define the lower relative entropy density by

s_*(ν|μ) = liminf_{n→∞} s_{Λ_n}(ν|μ) / |Λ_n|.
We have the following variational characterization of relative entropy (for a proof, see, for instance, [13] (p. 100))

s_Λ(ν|μ) = sup { ∫ f dν − log ∫ e^f dμ },   (6)

where the supremum is taken over all ℱ_Λ-measurable functions f, such that ∫ e^f dμ &lt; ∞.
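For intuition, the variational formula (6) can be checked directly on a finite state space, where the supremum is attained at f = log(dν/dμ). The three-point distributions below are arbitrary examples of our own choosing.

```python
import math

mu = [0.5, 0.3, 0.2]
nu = [0.2, 0.2, 0.6]

# Relative entropy H(nu|mu)
H = sum(n * math.log(n / m) for n, m in zip(nu, mu))

# The optimizer f = log(dnu/dmu) attains the supremum in (6):
f = [math.log(n / m) for n, m in zip(nu, mu)]
attained = (sum(fi * n for fi, n in zip(f, nu))
            - math.log(sum(math.exp(fi) * m for fi, m in zip(f, mu))))
assert abs(H - attained) < 1e-12

# Any other f gives a smaller value of the variational functional
g = [1.0, -0.5, 0.3]
val = (sum(gi * n for gi, n in zip(g, nu))
       - math.log(sum(math.exp(gi) * m for gi, m in zip(g, mu))))
assert val <= H + 1e-12
```

The proof of Theorem 1 below uses only the "≥" direction of (6): any admissible test function f yields a lower bound on the relative entropy.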
3.3. Main Result
In the main theorem below, we prove that the Gaussian concentration bound implies strict positivity of the lower relative entropy density. Introducing an appropriate metric on the set of probability measures, we show that the lower relative entropy density is lower bounded by a constant multiplied by the square of this distance. This result substantially generalizes the corresponding result from [9], where it is essential that the single-spin space is finite. Moreover, the proof is simpler and based on the variational formula for the relative entropy, combined with a quadratic estimate for the log-moment-generating function coming from the assumed Gaussian concentration bound.
Definition 3.
Define the following distance between probability measures μ, ν ∈ 𝒫_inv(Ω):

d(μ, ν) = sup { ∫ f dν − ∫ f dμ : f bounded local continuous, ‖δ(f)‖₁ ≤ 1 }.   (7)
The metric defined above generates the quasi-local topology, and therefore convergence in this metric implies weak convergence. Indeed, convergence in the metric d clearly implies ∫ f dμ_n → ∫ f dμ for all bounded local continuous f, and hence also for all bounded quasi-local continuous f. The latter implies μ_n → μ in the quasi-local topology, which coincides with the weak topology.
We can then formulate our main result.
Theorem 1.
If μ is translation invariant and satisfies GCB(C), then for all ν translation invariant, and ν ≠ μ, we have s_*(ν|μ) &gt; 0.

More precisely, we have

s_*(ν|μ) ≥ d(ν, μ)² / (4C),   (8)

where d is the distance (7).
We start with a lemma from [7]. For the reader’s convenience, we repeat the short proof here.
Lemma 2.
For f local, such that ‖δ(f)‖₁ &lt; ∞, and n ∈ ℕ, we have

∑_{j∈ℤ^d} δ_j( ∑_{i∈Λ_n} τ_i f )² ≤ |Λ_n| ‖δ(f)‖₁².
Proof.
For i ∈ ℤ^d, let 𝟙_{Λ_n}(i) denote the indicator function of Λ_n (that is, 𝟙_{Λ_n}(i) = 1 if i ∈ Λ_n and 𝟙_{Λ_n}(i) = 0 otherwise). Then, for every j ∈ ℤ^d we have

δ_j( ∑_{i∈Λ_n} τ_i f ) ≤ ∑_{i∈Λ_n} δ_{j−i}(f) = ( 𝟙_{Λ_n} * δ(f) )(j).

As a consequence, using Young’s inequality for convolutions, we obtain

‖ δ( ∑_{i∈Λ_n} τ_i f ) ‖₂ ≤ ‖𝟙_{Λ_n}‖₂ ‖δ(f)‖₁ = |Λ_n|^{1/2} ‖δ(f)‖₁.
□
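The convolution step in the proof above can be sanity-checked numerically on ℤ (a sketch of our own, with an arbitrary nonnegative sequence a standing for (δ_i(f))_i): the ℓ²-norm of 𝟙_Λ * a never exceeds |Λ|^{1/2} ‖a‖₁.

```python
import math
import random

rng = random.Random(1)
a = [rng.random() for _ in range(30)]   # finitely supported oscillation sequence
Lam = range(50)                          # the "cube" Lambda on Z, |Lam| = 50

# (1_Lam * a)(j) = sum_{i in Lam} a[j - i]; the convolution is supported on 0..79
conv = [sum(a[j - i] for i in Lam if 0 <= j - i < len(a))
        for j in range(len(a) + len(Lam))]

lhs = math.sqrt(sum(c * c for c in conv))       # ||1_Lam * a||_2
rhs = math.sqrt(len(Lam)) * sum(a)              # |Lam|^(1/2) * ||a||_1
assert lhs <= rhs
```

This is the (p, q, r) = (2, 1, 2) case of Young's inequality ‖g * h‖_r ≤ ‖g‖_p ‖h‖_q with 1 + 1/r = 1/p + 1/q, applied with g = 𝟙_Λ and h = δ(f).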
Proof of Theorem 1.
For the cube Λ_n and a bounded local continuous function f whose dependence set is included in the cube Λ_r, for some r, it follows from (6) that

s_{Λ_{n+r}}(ν|μ) ≥ ∫ ∑_{i∈Λ_n} τ_i f dν − log ∫ e^{∑_{i∈Λ_n} τ_i f} dμ,

where we used that ∑_{i∈Λ_n} τ_i f is measurable with respect to ℱ_{Λ_{n+r}}. Now, if μ satisfies GCB(C) and both μ and ν are translation invariant, then we can estimate further as follows. Start by noticing that, through combination of the assumed GCB(C) and Lemma 2, we have

log ∫ e^{∑_{i∈Λ_n} τ_i f − ∫ ∑_{i∈Λ_n} τ_i f dμ} dμ ≤ C |Λ_n| ‖δ(f)‖₁².

As a consequence, using translation invariance of both μ and ν, we obtain

s_{Λ_{n+r}}(ν|μ) ≥ |Λ_n| ( ∫ f dν − ∫ f dμ ) − C |Λ_n| ‖δ(f)‖₁².   (9)

Consider a bounded local function f, such that ∫ f dν ≠ ∫ f dμ (this function exists by the assumption that bounded local functions are measure separating). Put a = ∫ f dν − ∫ f dμ; replacing f by −f if necessary, we may assume a &gt; 0. (Observe that ‖δ(f)‖₁ &gt; 0, since f cannot be a constant.) Assume that the dependence set of f is included in the cube Λ_r. Replace f by λf in the inequality (9), and optimize over λ &gt; 0. Then, we obtain, for all n, the inequality

s_{Λ_{n+r}}(ν|μ) ≥ |Λ_n| a² / (4C ‖δ(f)‖₁²).

Since r is fixed, we can take the limit inferior in n, and using |Λ_n|/|Λ_{n+r}| → 1 as n → ∞, we obtain

s_*(ν|μ) ≥ a² / (4C ‖δ(f)‖₁²).   (10)
From (10), we infer that for f bounded local, such that ‖δ(f)‖₁ ≤ 1, we have

s_*(ν|μ) ≥ ( ∫ f dν − ∫ f dμ )² / (4C).   (11)

Therefore, taking the supremum in (11) over all bounded local continuous f with ‖δ(f)‖₁ ≤ 1, and recalling the definition of the distance (7), we obtain (8). In particular, ν ≠ μ implies d(ν, μ) &gt; 0, and hence s_*(ν|μ) &gt; 0. □
The following corollary shows that convergence in relative entropy density implies convergence in the distance d. This can be used for stochastic dynamics, provided one can show that the relative entropy density converges. See the application section below for some examples.
Corollary 1.
Let μ be a translation-invariant probability measure which satisfies GCB(C). Assume (μ_n)_{n≥1} is a sequence of translation-invariant probability measures, such that

lim_{n→∞} s_*(μ_n|μ) = 0.
Then, μ_n → μ in the sense of the distance (7), and therefore also weakly.
Proof.
By (8), d(μ_n, μ) ≤ ( 4C s_*(μ_n|μ) )^{1/2}, which tends to zero as n → ∞. □
Remark 2.
As an example of application of Corollary 1, we mention the iteration of renormalization group transformations in the high-temperature regime [14], where convergence of the renormalized potentials can be established, and as a consequence, we obtain convergence of the relative entropy density. Then, Corollary 1 implies that the renormalized measures converge in the metric d at least as fast as the potentials. In the context of stochastic dynamics, i.e., where μ_n is a time-evolved measure (at time n), it is usually not simple to obtain the convergence s_*(μ_n|μ) → 0. In the high-temperature setting (high-temperature dynamics, high-temperature initial measure), this can be obtained with similar means as in [14].
We conclude this section with two further remarks relating our result to the Bobkov-Götze criterion.
Remark 3.
Our distance between probability measures resembles the so-called Dobrushin distance, denoted by D, which consists of taking the supremum of ∫ f dν − ∫ f dμ over a wider set of functions. Namely, f is only required to be measurable, and such that ‖δ(f)‖₁ ≤ 1. Hence, d ≤ D for a general pair of probability measures. In the special case of finite S, one has d = D. In [15], it is proved that D is equal to what the authors called the Steif distance, which is defined in terms of couplings, and which generalizes the Ornstein distance. The equality between D and the Steif distance is reminiscent of the Kantorovich–Rubinstein duality theorem.
Remark 4.
Inequality (8) is reminiscent of a well-known abstract inequality relating the relative entropy and the Wasserstein distance due to Bobkov and Götze [10]. However, our context is different, because we consider the thermodynamic limit and the relative entropy density. Nevertheless, as shown in [7], we can exploit the Bobkov–Götze theorem in the special case of finite S, putting the Hamming distance on S^Λ, to get

W(ν_Λ, μ_Λ) ≤ 2 ( C |Λ| s_Λ(ν|μ) )^{1/2},

where the Hamming distance is defined by

d_H(x_Λ, y_Λ) = ∑_{i∈Λ} 𝟙{x_i ≠ y_i},

and where W(ν_Λ, μ_Λ) is the Wasserstein distance between ν_Λ and μ_Λ associated with d_H.
4. Applications: Uniqueness of Equilibrium States and Beyond
In this section, we provide some settings where we can conclude uniqueness of a set of “(generalized) translation-invariant Gibbs measures” via Theorem 1. We start with the set of translation-invariant Gibbs measures associated with an absolutely summable potential. Then, we consider generalizations and modifications of such sets.
4.1. Uniqueness of Equilibrium States
In this subsection, we briefly introduce the necessary basics of Gibbs measures. The reader familiar with the theory of Gibbs measure can skip this subsection. The reader is referred to [1] (especially Chapter 16) or [11] (Chapter 2) for more background on the Gibbs formalism.
Let α be a probability measure on S, and α^{ℤ^d} the corresponding product measure on Ω. The measure α is called the “a priori” measure on S, with associated a priori measure α^{ℤ^d} on Ω.
We call a uniformly absolutely summable translation-invariant potential a function

U : 𝒮 × Ω → ℝ

with the following properties:
- (a)
- Locality: for all Λ ∈ 𝒮, U(Λ, ·) is ℱ_Λ-measurable and continuous.
- (b)
- Absolute summability: ∑_{Λ∋0} sup_{x∈Ω} |U(Λ, x)| &lt; ∞.
- (c)
- Translation invariance: for all i ∈ ℤ^d and Λ ∈ 𝒮, U(Λ + i, ·) = τ_i U(Λ, ·).
Let us call 𝒰 the set of uniformly absolutely summable translation-invariant potentials. Then, we build the local Gibbs measures with boundary condition η ∈ Ω. For a finite subset Λ ∈ 𝒮, the Gibbs measure in volume Λ with boundary condition η outside Λ is defined via

μ_Λ^η(dx_Λ) = (Z_Λ^η)^{−1} e^{−H_Λ^η(x_Λ)} α^Λ(dx_Λ),

where H_Λ^η is the Hamiltonian in volume Λ with boundary condition η:

H_Λ^η(x_Λ) = ∑_{Λ'∈𝒮 : Λ'∩Λ≠∅} U(Λ', x_Λ η_{Λ^c}),

and where Z_Λ^η is the normalization

Z_Λ^η = ∫_{S^Λ} e^{−H_Λ^η(x_Λ)} α^Λ(dx_Λ).
The family { μ_Λ^η : Λ ∈ 𝒮, η ∈ Ω } is called the Gibbsian specification associated with the potential U (with a priori measure α).
By the uniform absolute summability of U, we automatically have that for all f local and continuous, the function η ↦ ∫ f dμ_Λ^η is quasi-local and continuous. We say that the specification is quasi-local.
We then call a measure μ Gibbs with potential U (and a priori measure α) if μ is consistent with the finite-volume Gibbs measures, i.e., if for all f bounded and measurable, and Λ ∈ 𝒮, we have

E_μ( f | ℱ_{Λ^c} )(η) = ∫ f dμ_Λ^η

for μ-almost every η.
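To make the finite-volume objects concrete, here is a minimal sketch (our own, not from the paper) for the one-dimensional nearest-neighbour Ising model with uniform a priori measure on S = {−1, +1}: the Hamiltonian, the normalization Z, and the resulting finite-volume Gibbs measure with a fixed boundary condition. The coupling is fixed to 1, and β denotes the inverse temperature.

```python
import math
from itertools import product

def finite_volume_gibbs(n, beta, eta_left, eta_right):
    """Finite-volume Gibbs measure for the 1d nearest-neighbour Ising model
    in volume {0,...,n-1}, with fixed boundary spins eta_left, eta_right.
    Returns a dict mapping each configuration (tuple of +-1) to its probability."""
    weights = {}
    for x in product((-1, 1), repeat=n):
        # Hamiltonian: minus the sum of spin products over all bonds
        # touching the volume, including the two boundary bonds.
        h = -eta_left * x[0] - x[-1] * eta_right
        h -= sum(x[k] * x[k + 1] for k in range(n - 1))
        weights[x] = math.exp(-beta * h)
    Z = sum(weights.values())            # the normalization Z_Lambda^eta
    return {x: w / Z for x, w in weights.items()}

mu_plus = finite_volume_gibbs(5, beta=0.8, eta_left=+1, eta_right=+1)
assert abs(sum(mu_plus.values()) - 1.0) < 1e-12
# With + boundary conditions, the all-plus configuration is the most likely one.
assert max(mu_plus, key=mu_plus.get) == (1, 1, 1, 1, 1)
```

Varying the boundary condition η and letting the volume grow is exactly the mechanism by which non-uniqueness (phase transition) can arise in the infinite-volume limit; the DLR consistency condition above selects the measures compatible with all of these finite-volume kernels.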
We denote by 𝒢_inv(U) the set of translation-invariant Gibbs measures associated with the potential U. These measures are called the “equilibrium states” associated with U.
The variational principle ([1], Chapter 16) implies that if μ, ν ∈ 𝒢_inv(U), then s_*(ν|μ) = 0, and conversely, if μ ∈ 𝒢_inv(U) and ν ∈ 𝒫_inv(Ω) is such that s_*(ν|μ) = 0, then ν ∈ 𝒢_inv(U). As a consequence of Theorem 1, we then obtain the following result:
Proposition 2.
Let U ∈ 𝒰. If μ ∈ 𝒢_inv(U) satisfies GCB(C) for some C &gt; 0, then 𝒢_inv(U) = {μ}.
This substantially extends the implication between GCB and uniqueness of equilibrium states from [9], where we only considered finite single-spin spaces S.
Remark 5.
Because our techniques are based on relative entropy density, we cannot exclude the existence of non-translation-invariant Gibbs measures, even in the presence of a translation-invariant Gibbs measure satisfying . In other words, even if there exists a unique equilibrium state, there might still be non-translation-invariant Gibbs measures. We believe, however, that the presence of a translation-invariant Gibbs measure satisfying implies a stronger form of uniqueness, which excludes the presence of non-translation-invariant Gibbs measures.
4.2. Sets of Zero-Information Distance
The example of the set of equilibrium states from the previous subsection leads naturally to the more general notion of “zero-information distance sets” defined below.
Definition 4.
We call a subset 𝒦 ⊂ 𝒫_inv(Ω) a zero-information distance set if for all μ, ν ∈ 𝒦, s_*(ν|μ) = 0.
From Theorem 1, we then immediately obtain the following proposition.
Proposition 3.
Let 𝒦 ⊂ 𝒫_inv(Ω) be a zero-information distance set. If there exists μ ∈ 𝒦 which satisfies GCB(C) for some C &gt; 0, then 𝒦 = {μ} is a singleton.
We provide four further examples (beyond equilibrium states) of such zero-information distance sets, illustrating Proposition 3.
- (a)
- Asymptotically decoupled measures and compatible measures. A first generalization of the Gibbsian context is provided in the realm of “asymptotically decoupled measures” via the notion of compatible measures; see [16]. This setting goes beyond quasi-local specifications, and therefore includes many relevant examples of non-Gibbsian measures. In this setting, the set of compatible measures (associated with a local function f) is a zero-information set (see [16], Theorem 4.1), and therefore, if this set contains an element μ satisfying GCB(C), then it coincides with the singleton {μ}.
- (b)
- Renormalization group transformations of Gibbs measures. Another important class of examples is the following. We say that a transformation T preserves zero-information distance sets if a zero-information distance set is mapped by T onto a zero-information distance set. Important examples of such transformations T are the local and translation-invariant renormalization group transformations studied in [11], Section 3.1, p. 960, conditions T1–T2–T3. Examples of such transformations include block-spin averaging, decimation, and stochastic transformations such as the Kadanoff transformation. Because the transformations are “local and translation-invariant probability kernels”, one immediately infers that the relative entropy density does not increase, i.e., s_*(Tν|Tμ) ≤ s_*(ν|μ). In this setting, Proposition 3 implies that if U ∈ 𝒰, μ is an associated translation-invariant Gibbs measure, and Tμ satisfies GCB(C) for some C &gt; 0, then s_*(ν'|Tμ) &gt; 0 for all ν' such that ν' ≠ Tμ. In particular, this implies that Tν = Tμ for all ν ∈ 𝒢_inv(U); indeed, in that case s_*(Tν|Tμ) ≤ s_*(ν|μ) = 0. Notice that Tμ can be non-Gibbs; therefore, this uniqueness statement cannot be derived from the variational principle.
- (c)
- Projections of Gibbs measures. Let μ be a translation-invariant Gibbs measure on the state space S^{ℤ^{d+1}} (associated with a translation-invariant potential) which satisfies GCB(C) for some C &gt; 0. Let μ̄ denote its restriction to the sublattice ℤ^d × {0}. It is clear that μ̄ satisfies GCB with the same constant C. Therefore, any translation-invariant measure on S^{ℤ^d} that differs from μ̄ has strictly positive lower relative entropy density with regard to μ̄. As a consequence, if μ̄ is a Gibbs measure for a translation-invariant potential, then this potential has no other translation-invariant Gibbs measures. This gives uniqueness for a set of Gibbs measures where the potential is only implicitly defined, and can be complicated, i.e., uniqueness is not a consequence of a simple criterion. Projections of Gibbs measures arise naturally in the context of probabilistic cellular automata, where the stationary measures are projections of the space–time Gibbs measures [4]. In this setting, the result tells us that if the space–time measure satisfies GCB(C) for some C &gt; 0, then the unique stationary measure, if Gibbs, has a potential with a unique equilibrium state. Projections of Gibbs measures can fail to be Gibbs, as is shown in [17] for the projection of the low-temperature Ising model in d = 2 on the x-axis. It is an open and interesting problem to investigate whether this projected measure satisfies the Gaussian concentration bound.
- (d)
- Stationary measures for Ising spin Glauber dynamics. An additional example of a zero-information distance set is the set of stationary and translation-invariant measures for (Ising spin, i.e., S is finite) Glauber dynamics, under the condition that this set contains at least one translation-invariant Gibbs measure as a stationary measure; see [18], Section 4. See also [19,20] for earlier results in the setting of reversible Glauber dynamics, and [21] for recent results in this spirit for more general local dynamics. As a consequence of Proposition 3, we then conclude that if there exists a translation-invariant Gibbs measure ν as stationary measure, and there exists a translation-invariant stationary measure μ satisfying GCB(C) for some C &gt; 0, then μ and ν coincide, and μ is the unique translation-invariant stationary measure. Moreover, if, in this setting, one can show that when starting the dynamics from a translation-invariant initial measure μ and denoting μ_t for the measure at time t, we have s_*(μ_t|ν) → 0 as t → ∞, then, from Corollary 1, we obtain that μ_t → ν as t → ∞ in the sense of the distance (7).
5. Generalization
In the setting of Section 2.1, without the additional assumption of finiteness of the diameter of S, the definition of the oscillation of f in (2) is no longer appropriate. Indeed, it becomes natural to include unbounded functions, which makes (2) infinite. Consider, e.g., S = ℝ, and Ω = ℝ^{ℤ^d} equipped with a product of Gaussian measures; then, the function f(x) = x_0 should be a possible choice. We consider now a general standard Borel S, which is such that for the product space Ω = S^{ℤ^d}, quasi-local bounded functions are measure separating.
In order to proceed, we therefore associate with a function f an abstract sequence of oscillations (δ_i(f))_{i∈ℤ^d} satisfying the following conditions.
Definition 5.
We say that a map f ↦ (δ_i(f))_{i∈ℤ^d} ∈ [0, ∞]^{ℤ^d} is an allowed sequence of oscillations if the following four conditions are met.
- 1.
- Translation invariance: δ_i(τ_j f) = δ_{i+j}(f), for all i, j ∈ ℤ^d.
- 2.
- Non-degeneracy: δ_i(f) is zero for a function f if and only if f does not depend on the i-th coordinate, i.e., if and only if f(x) = f(y) for all x, y such that x_j = y_j for all j ≠ i.
- 3.
- Monotonicity: for ξ ∈ Ω and f a bounded quasi-local function, we consider the local approximation of f given by f_Λ^ξ(x) = f(x_Λ ξ_{Λ^c}). Then, we require that for all ξ, for all Λ and for all i ∈ ℤ^d, δ_i(f_Λ^ξ) ≤ δ_i(f).
- 4.
- Degree one homogeneity: δ_i(λf) = |λ| δ_i(f), for all λ ∈ ℝ and for all i ∈ ℤ^d.
Notice that Condition 3 implies that for given f, ξ, Λ, and i, the oscillations of the local approximation f_Λ^ξ are dominated by those of f itself; in particular, δ_i(f_Λ^ξ) is finite whenever δ_i(f) is finite.
For a given sequence of oscillations δ, we call a function f δ-Lipschitz if ∑_{i∈ℤ^d} δ_i(f)² &lt; ∞. We then introduce
Definition 6.
Let Ω = S^{ℤ^d}, where S is a standard Borel space. Assume an allowed sequence of oscillations δ is given. Let μ be a probability measure on Ω. We say that μ satisfies the Gaussian concentration bound with regard to δ with constant C &gt; 0 (still abbreviated GCB(C)), if for all bounded local functions f we have

E_μ[ e^{f − E_μ(f)} ] ≤ e^{C ∑_{i∈ℤ^d} δ_i(f)²}.
We then have the following analogue of Theorem 1. Because the proof follows exactly the same steps as the proof of Theorem 1, we leave it to the reader.
Theorem 2.
Assume δ is an allowed sequence of oscillations. Assume that the set of bounded local δ-Lipschitz functions is measure separating. If μ is translation invariant and satisfies GCB(C) with regard to δ, then for all ν translation invariant, and ν ≠ μ, we have s_*(ν|μ) &gt; 0.
As a final comment, we remark that the choice of the group ℤ^d is made for the sake of simplicity. We can work with more general amenable groups, as in [22].
Author Contributions
The authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Georgii, H.-O. Gibbs Measures and Phase Transitions, 2nd ed.; De Gruyter: Berlin, Germany, 2011. [Google Scholar]
- Stroock, D.W.; Zegarlinski, B. The logarithmic Sobolev inequality for discrete spin systems on a lattice. Commun. Math. Phys. 1992, 149, 175–193. [Google Scholar] [CrossRef]
- Liggett, T.M. Interacting Particle Systems, 2nd ed.; Springer: Berlin, Germany, 2005. [Google Scholar]
- Lebowitz, J.L.; Maes, C.; Speer, E.R. Statistical mechanics of probabilistic cellular automata. J. Stat. Phys. 1990, 59, 117–170. [Google Scholar] [CrossRef]
- Maes, C.; Shlosman, S.B. Ergodicity of probabilistic cellular automata: A constructive criterion. Comm. Math. Phys. 1991, 135, 233–251. [Google Scholar] [CrossRef]
- Külske, C. Concentration inequalities for functions of Gibbs fields with application to diffraction and random Gibbs measures. Comm. Math. Phys. 2003, 239, 29–51. [Google Scholar] [CrossRef]
- Chazottes, J.-R.; Collet, P.; Redig, F. On concentration inequalities and their applications for Gibbs measures in lattice systems. J. Stat. Phys. 2017, 169, 504–546. [Google Scholar] [CrossRef]
- Chazottes, J.-R.; Gallo, S.; Takahashi, D. Gaussian concentration bounds for stochastic chains of unbounded memory. Ann. Appl. Probab. 2022; in press. [Google Scholar]
- Chazottes, J.-R.; Moles, J.; Redig, F.; Ugalde, E. Gaussian concentration and uniqueness of equilibrium states in lattice systems. J. Stat. Phys. 2020, 181, 2131–2149. [Google Scholar] [CrossRef]
- Bobkov, S.G.; Götze, F. Exponential integrability and transportation cost related to logarithmic Sobolev inequalities. J. Funct. Anal. 1999, 163, 1–28. [Google Scholar] [CrossRef]
- van Enter, A.C.D.; Fernández, R.; Sokal, A.D. Regularity properties and pathologies of position-space renormalization-group transformations: Scope and limitations of Gibbsian theory. J. Statist. Phys. 1993, 72, 879–1167. [Google Scholar] [CrossRef]
- Gross, L. Absence of second-order phase transitions in the Dobrushin uniqueness region. J. Stat. Phys. 1981, 25, 57–72. [Google Scholar] [CrossRef]
- Boucheron, S.; Lugosi, G.; Massart, P. Concentration Inequalities: A Nonasymptotic Theory of Independence; Oxford University Press: Oxford, UK, 2013. [Google Scholar]
- Bertini, L.; Cirillo, E.N.M.; Olivieri, E. Renormalization-group transformations under strong mixing conditions: Gibbsianness and convergence of renormalized interactions. J. Stat. Phys. 1999, 97, 831–915. [Google Scholar] [CrossRef]
- Armstrong-Goodall, J.; MacKay, R.S. Dobrushin and Steif metrics are equal. arXiv 2021, arXiv:2104.08365. [Google Scholar]
- Pfister, C.-E. Thermodynamical aspects of classical lattice systems. In In and Out of Equilibrium; Progress in Probability; Birkhäuser: Boston, MA, USA, 2002; Volume 51, pp. 393–472. [Google Scholar]
- Schonmann, R.H. Projections of Gibbs measures may be non-Gibbsian. Comm. Math. Phys. 1989, 124, 1–7. [Google Scholar] [CrossRef]
- Künsch, H. Nonreversible stationary measures for infinite interacting particle systems. Z. Wahrscheinlichkeitstheorie Verwandte Geb. 1984, 66, 407–424. [Google Scholar] [CrossRef]
- Holley, R.A.; Stroock, D.W. In one and two dimensions, every stationary measure for a stochastic Ising model is a Gibbs state. Comm. Math. Phys. 1977, 55, 37–45. [Google Scholar] [CrossRef]
- Higuchi, Y.; Shiga, T. Some results on Markov processes of infinite lattice spin systems. J. Math. Kyoto Univ. 1975, 15, 211–229. [Google Scholar] [CrossRef]
- Jahnel, B.; Külske, C. Attractor properties for reversible and irreversible particle systems. Comm. Math. Phys. 2019, 366, 139–172. [Google Scholar] [CrossRef]
- Tempelman, A. Ergodic Theorems for Group Actions. Informational and Thermodynamical Aspects. Mathematics and Its Applications; Springer: Berlin, Germany, 1992; Volume 78. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).