PAC-Bayes Bounds on Variational Tempered Posteriors for Markov Models

Datasets displaying temporal dependencies abound in science and engineering applications, with Markov models representing a simplified and popular view of the temporal dependence structure. In this paper, we consider Bayesian settings that place prior distributions over the parameters of the transition kernel of a Markov model, and seek to characterize the resulting, typically intractable, posterior distributions. We present a Probably Approximately Correct (PAC)-Bayesian analysis of variational Bayes (VB) approximations to tempered Bayesian posterior distributions, bounding the model risk of the VB approximations. Tempered posteriors are known to be robust to model misspecification, and their variational approximations do not suffer the usual problems of overconfident approximations. Our results tie the risk bounds to the mixing and ergodic properties of the Markov data generating model. We illustrate the PAC-Bayes bounds through a number of example Markov models, and also consider the situation where the Markov model is misspecified.


Introduction
This paper presents probably approximately correct (PAC)-Bayesian bounds on variational Bayesian (VB) approximations of fractional or tempered posterior distributions for Markov data generation models. Exact computation of either standard or tempered posterior distributions is a hard problem that has, broadly speaking, spawned two classes of computational methods. The first, Markov chain Monte Carlo (MCMC), constructs ergodic Markov chains to approximately sample from the posterior distribution. MCMC is known to suffer from high variance and complex diagnostics, leading to the development of variational Bayesian (VB) [1] methods as an alternative in recent years. VB methods pose posterior computation as a variational optimization problem, approximating the posterior distribution of interest by the 'closest' element of an appropriately defined class of 'simple' probability measures. Typically, the measure of closeness used by VB methods is the Kullback-Leibler (KL) divergence. Excellent introductions to this so-called KL-VB method can be found in [2][3][4]. More recently, there has also been interest in alternative divergence measures, particularly the α-Rényi divergence [5][6][7], though in this paper, we focus on the KL-VB setting.
Theoretical properties of VB approximations, and in particular asymptotic frequentist consistency, have been studied extensively under the assumption of an independent and identically distributed (i.i.d.) data generation model [4,8,9]. On the other hand, the common setting where data sets display temporal dependencies presents unique challenges. In this paper, we focus on homogeneous Markov chains with parameterized transition kernels, representing a parsimonious class of data generation models with a wide range of applications. We work in the Bayesian framework, focusing on the posterior distribution over the unknown parameters of the transition kernel. Our theory develops PAC bounds that link the ergodic and mixing properties of the data generating Markov chain to the Bayes risk associated with approximate posterior distributions.
Frequentist consistency of Bayesian methods, in the sense of concentration of the posterior distribution around neighborhoods of the 'true' data generating distribution, has been established in significant generality, in both the i.i.d. [10][11][12] and the non-i.i.d. [13,14] data generation settings. More recent work [14][15][16] has studied fractional or tempered posteriors, a class of generalized Bayesian posteriors obtained by combining the likelihood function raised to a fractional power with an appropriate prior distribution using Bayes' theorem. Tempered posteriors are known to be robust against model misspecification: in the Markov setting we consider, the associated stationary distribution as well as mixing properties are sensitive to model parameterization. Further, tempered posteriors are known to be much simpler to analyze theoretically [14,16]. Therefore, following [14][15][16], we focus on tempered posterior distributions on the transition kernel parameters, and study the rate of concentration of variational approximations to the tempered posterior. Equivalently, as shown in [16] and discussed in Section 1.1, our results also apply to so-called α-variational approximations to standard posterior distributions over kernel parameters. The latter are modifications of the standard KL-VB algorithm that address the well-known problem of overconfident posterior approximations.
While there have been a number of recent papers studying the consistency of approximate variational posteriors [5,8,15] in the large sample limit, rates of convergence have received less attention. Exceptions include [9,15,17], where an i.i.d. data generation model is assumed. [15] establishes PAC-Bayes bounds on the convergence of a variational tempered posterior with fractional powers in the range [0, 1), while [9] considers the standard variational posterior case (where the fractional power equals 1). [17], on the other hand, establishes PAC-Bayes bounds for risk-sensitive Bayesian decision making problems in the standard variational posterior setting. The setting in [15] allows for model misspecification and the analysis is generally more straightforward than that in [9,17]. Our work extends [15] to the setting of a discrete-time Markov data generation model.
Our first results in Theorem 1 and Corollary 1 of Section 2 establish PAC-Bayes bounds for sequences with arbitrary temporal dependence. Our results generalize [15] (Theorem 2.4) to the non-i.i.d. data setting in a straightforward manner. Note that Theorem 1 also recovers [16] (Theorem 3.3), which is established under different 'existence of test' conditions. Our objective in this paper is to explicate how the ergodic and mixing properties of the Markov data generating process influence the PAC-Bayes bound. The sufficient conditions of our theorem, bounding the mean and variance of the log-likelihood ratio of the data, allow for developing this understanding without the technicalities of proving the existence of test conditions intruding on the insights.
In Section 3, we study the setting where the data generating model is a stationary α-mixing Markov chain. Stationarity means that the Markov chain is initialized with the invariant distribution corresponding to the parameterized transition kernel, implying all subsequent states also follow this marginal distribution. The α-mixing condition ensures that the variance of the likelihood ratio of the Markov data does not grow faster than linear in the sample size. Our main results in this setting are applicable when the state space of the Markov chain is either continuous or discrete. The primary requirement on the class of data generating Markov models is for the log-likelihood ratio of the parameterized transition kernel and invariant distribution to satisfy a Lipschitz property. This condition implies a decoupling between the model parameters and the random samples, affording a straightforward verification of the mean and variance bounds. We highlight this main result by demonstrating that it is satisfied by a finite state Markov chain, a birth-death Markov chain on the positive integers, and a one-dimensional Gaussian linear model.
In practice, the assumption that the data generating model is stationary is unlikely to be satisfied. Typically, the initial distribution is arbitrary, with the state distribution of the Markov sequence converging weakly to the stationary distribution. In this setting, we must further assume that the class of data generating Markov chains is geometrically ergodic.
We show that this implies the boundedness of the mean and variance of the log-likelihood ratio of the data generating Markov chain. Alternatively, in Theorem 4 we directly impose a drift condition on random variables that bound the log-likelihood ratio. Again, in this more general nonstationary setting, we illustrate the main results by showing that the PAC-Bayes bound is satisfied by a finite state Markov chain, a birth-death Markov chain on the positive integers, and a one-dimensional Gaussian linear model.
In preparation for our main technical results starting in Section 2, we first note relevant notation and definitions in the next section.

Notations and Definitions
We broadly adopt the notation in [15]. Let the sequence of random variables $X^n = (X_0, \dots, X_n) \subset \mathbb{R}^{m \times (n+1)}$ represent a dataset of $n+1$ observations drawn from a joint distribution $P^{(n)}_{\theta_0}$, where $\theta_0 \in \Theta \subseteq \mathbb{R}^d$ is the 'true' parameter underlying the data generation process. We assume the state space $S \subseteq \mathbb{R}^m$ of the random variables $X_i$ is either discrete or continuous, and write $\{x_0, \dots, x_n\}$ for a realization of the dataset. We also adopt the convention that $0 \log(0/0) = 0$.
For each $\theta \in \Theta$, we write $p^{(n)}_\theta$ for the probability density of $P^{(n)}_\theta$ with respect to some dominating measure $Q^{(n)}$, i.e., $p^{(n)}_\theta = dP^{(n)}_\theta / dQ^{(n)}$. Let $\pi(\theta)$ be a prior distribution with support $\Theta$. The $\alpha^{te}$-fractional posterior is defined as
$$\pi_{n,\alpha^{te}}(d\theta \mid X^n) \propto \exp\big(-\alpha^{te}\, r_n(\theta, \theta_0)\big)\, \pi(d\theta),$$
where, for $\theta_0, \theta \in \Theta$,
$$r_n(\theta, \theta_0)(X^n) := \log \frac{p^{(n)}_{\theta_0}(X^n)}{p^{(n)}_{\theta}(X^n)}$$
is the log-likelihood ratio of the corresponding density functions, and $\alpha^{te} \in (0, \infty)$ is a tempering coefficient. Setting $\alpha^{te} = 1$ recovers the standard Bayesian posterior. Note that we use superscripts to distinguish different quantities that are all referred to as $\alpha$ in the literature. The Kullback-Leibler (KL) divergence between distributions $P, Q$ with densities $p, q$ on a sample space $\mathcal{X}$ is defined as
$$K(P, Q) := \int_{\mathcal{X}} p(x) \log \frac{p(x)}{q(x)}\, dx.$$
In particular, the KL divergence between the distributions parameterized by $\theta_0$ and $\theta$ is $K(P^{(n)}_{\theta_0}, P^{(n)}_{\theta})$. The $\alpha^{re}$-Rényi divergence is
$$D_{\alpha^{re}}(P^{(n)}_{\theta}, P^{(n)}_{\theta_0}) := \frac{1}{\alpha^{re} - 1} \log \int \big(p^{(n)}_{\theta}\big)^{\alpha^{re}} \big(p^{(n)}_{\theta_0}\big)^{1 - \alpha^{re}}\, dQ^{(n)},$$
where $\alpha^{re} \in (0, 1)$. As $\alpha^{re} \to 1$, the $\alpha^{re}$-Rényi divergence recovers the KL divergence. Let $\mathcal{F}$ be a class of distributions with support in $\mathbb{R}^d$ such that any distribution $P \in \mathcal{F}$ is absolutely continuous with respect to the tempered posterior: $P \ll \pi_{n,\alpha^{te}}(\cdot \mid X^n)$.
Many choices of $\mathcal{F}$ exist; for instance (see also [15]), $\mathcal{F}$ can be the set of Gaussian measures,
$$\mathcal{F}_{\Phi} := \{ N(\mu, \Sigma) : \mu \in \mathbb{R}^d,\ \Sigma \in \text{P.D.} \},$$
where P.D. denotes the class of positive definite matrices. Alternatively, $\mathcal{F}$ can be the family of mean-field or factored distributions, in which the components $\theta_i$ of $\theta$ are independent of each other. Let $\hat\pi_{n,\alpha^{te}}$ be the variational approximation to the tempered posterior, defined as
$$\hat\pi_{n,\alpha^{te}} := \arg\min_{\rho \in \mathcal{F}} K\big(\rho,\ \pi_{n,\alpha^{te}}(\cdot \mid X^n)\big). \quad (5)$$
It is easy to see that finding $\hat\pi_{n,\alpha^{te}}$ in Equation (5) is equivalent to the following optimization problem:
$$\hat\pi_{n,\alpha^{te}} = \arg\max_{\rho \in \mathcal{F}} \Big\{ \alpha^{te} \int \log p^{(n)}_{\theta}(X^n)\, \rho(d\theta) - K(\rho, \pi) \Big\}.$$
Setting $\alpha^{te} = 1$ again recovers the usual variational solution that seeks to approximate the posterior distribution by the closest element of $\mathcal{F}$ (the right-hand side above is called the evidence lower bound (ELBO)). Other settings of $\alpha^{te}$ constitute $\alpha^{te}$-variational inference [16], which seeks to regularize the 'overconfident' approximate posteriors that standard variational methods tend to produce. Our results in this paper focus on parameterized Markov chains: we term a Markov chain 'parameterized' if its transition kernel $p_\theta(\cdot \mid \cdot)$ is indexed by some $\theta \in \Theta \subseteq \mathbb{R}^d$. Let $q^{(0)}(\cdot)$ be the initial density (defined with respect to the Lebesgue measure over $\mathbb{R}^m$) or initial probability mass function. Then, the joint density
$$p_\theta(x_0, \dots, x_n) = q^{(0)}(x_0) \prod_{i=1}^{n} p_\theta(x_i \mid x_{i-1})$$
corresponds to the walk probability of a time-homogeneous Markov chain. We assume that corresponding to each transition kernel $p_\theta$, $\theta \in \Theta$, there exists an invariant distribution $Q_\theta$, and we also use $q_\theta$ to designate the density of the invariant measure (as before, with respect to the Lebesgue or counting measure for continuous or discrete state spaces, respectively). A Markov chain is stationary if its initial distribution is the invariant probability distribution, that is, $X_0 \sim q_\theta$.
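To make the objects above concrete, the following sketch (our own illustration, not part of the paper's development) computes the tempered posterior $\pi_{n,\alpha^{te}}(\theta \mid X^n) \propto p_\theta(X^n)^{\alpha^{te}}\,\pi(\theta)$ on a parameter grid, for a hypothetical two-state chain that jumps to state 1 with probability $\theta$ from either state; the grid, the uniform prior, and all function names are our own choices.

```python
import math
import random

def simulate_chain(theta, n, seed=0):
    # hypothetical two-state chain on {0, 1}: from either state,
    # jump to state 1 with probability theta, else go to state 0
    rng = random.Random(seed)
    x = [0]
    for _ in range(n):
        x.append(1 if rng.random() < theta else 0)
    return x

def log_likelihood(theta, x):
    # log p_theta(x_1, ..., x_n | x_0) for the chain above
    return sum(math.log(theta if xi == 1 else 1.0 - theta) for xi in x[1:])

def tempered_posterior(x, alpha_te, grid):
    # pi_{n,alpha_te}(theta | x) on a grid, with a uniform prior:
    # weights proportional to likelihood^alpha_te
    logw = [alpha_te * log_likelihood(t, x) for t in grid]
    mx = max(logw)
    w = [math.exp(l - mx) for l in logw]
    z = sum(w)
    return [wi / z for wi in w]

grid = [i / 100 for i in range(1, 100)]
x = simulate_chain(0.3, 500, seed=42)
post_full = tempered_posterior(x, 1.0, grid)   # standard posterior
post_temp = tempered_posterior(x, 0.5, grid)   # tempered, alpha_te = 1/2
mode_full = grid[post_full.index(max(post_full))]
mode_temp = grid[post_temp.index(max(post_temp))]
mean_full = sum(t * w for t, w in zip(grid, post_full))
var_full = sum((t - mean_full) ** 2 * w for t, w in zip(grid, post_full))
mean_temp = sum(t * w for t, w in zip(grid, post_temp))
var_temp = sum((t - mean_temp) ** 2 * w for t, w in zip(grid, post_temp))
```

With $\alpha^{te} = 0.5$ the posterior keeps the same mode but is visibly flatter, illustrating how tempering counteracts overconfident approximations.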
Our results in the ensuing sections are established under strong mixing conditions [18] on the Markov chain. Specifically, recall the definition of the α-mixing coefficients of a Markov chain $\{X_n\}$:

Definition 1 (α-mixing coefficient). Let $\mathcal{M}_i^j$ denote the σ-field generated by the Markov chain $\{X_k : i \le k \le j\}$ parameterized by $\theta \in \Theta$. Then, the α-mixing coefficient at lag $k$ is defined as
$$\alpha_k := \sup_{j \ge 0}\ \sup \big\{ |P(A \cap B) - P(A)P(B)| : A \in \mathcal{M}_0^j,\ B \in \mathcal{M}_{j+k}^{\infty} \big\}.$$
Informally speaking, the α-mixing coefficients $\{\alpha_k\}$ measure the dependence between any two events $A$ (in the 'history' σ-algebra) and $B$ (in the 'future' σ-algebra) separated by a time lag $k$. We note that we do not use superscripts to identify these α parameters, since they are the only ones carrying subscripts and can be identified as such.
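For intuition, the supremum in Definition 1 can be lower-bounded by restricting to single-time events $A \in \sigma(X_0)$, $B \in \sigma(X_k)$. The sketch below (our illustration, for a hypothetical two-state chain) computes this proxy exactly and shows it decays geometrically at rate $|1 - p - q|$, the modulus of the second eigenvalue of the transition matrix.

```python
import itertools

# hypothetical two-state stationary chain: P = [[1-p, p], [q, 1-q]],
# stationary distribution pi, second eigenvalue 1 - p - q
p, q = 0.2, 0.3
P = [[1 - p, p], [q, 1 - q]]
pi = [q / (p + q), p / (p + q)]

def k_step(P, k):
    # k-step transition matrix via repeated multiplication
    M = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(k):
        M = [[sum(M[i][l] * P[l][j] for l in range(2)) for j in range(2)]
             for i in range(2)]
    return M

def alpha_proxy(k):
    # max over single-time events A in sigma(X_0), B in sigma(X_k) of
    # |P(A and B) - P(A)P(B)|: a lower bound on the mixing coefficient
    Pk = k_step(P, k)
    events = [(), (0,), (1,), (0, 1)]
    best = 0.0
    for A, B in itertools.product(events, repeat=2):
        pa = sum(pi[i] for i in A)
        pb = sum(pi[j] for j in B)
        pab = sum(pi[i] * Pk[i][j] for i in A for j in B)
        best = max(best, abs(pab - pa * pb))
    return best

lam = abs(1 - p - q)  # geometric decay rate, here 0.5
ratios = [alpha_proxy(k + 1) / alpha_proxy(k) for k in range(1, 6)]
```

For a two-state chain the proxy decays exactly like $\lambda^k$, so every successive ratio equals $\lambda = 0.5$ up to floating-point error.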

A Concentration Bound for the α re -Rényi Divergence
The object of analysis in what follows is the probability measure $\hat\pi_{n,\alpha^{te}}(\theta)$, the variational approximation to the tempered posterior. Our main result establishes a bound on the Bayes risk of this distribution; in particular, given a sequence of loss functions $\ell_n(\theta, \theta_0)$, we bound $\int \ell_n(\theta, \theta_0)\, \hat\pi_{n,\alpha^{te}}(d\theta)$. Following recent work in both the i.i.d. and dependent sequence settings [14][15][16], we use $\ell_n(\theta, \theta_0) = D_{\alpha^{re}}(P^{(n)}_{\theta}, P^{(n)}_{\theta_0})$, the $\alpha^{re}$-Rényi divergence between $P^{(n)}_{\theta}$ and $P^{(n)}_{\theta_0}$, as our loss function. Unlike loss functions such as the Euclidean distance on parameters, the Rényi divergence compares $\theta$ and $\theta_0$ through their effect on observed sequences, so that issues such as parameter identifiability do not arise. Our first result generalizes [15] (Theorem 2.1) to a general non-i.i.d. data setting.
Proposition 1. Let $\mathcal{F}$ be a subset of all probability distributions on $\Theta$. For any $\alpha^{re} \in (0,1)$, $\epsilon \in (0,1)$ and $n \ge 1$, with $P^{(n)}_{\theta_0}$-probability at least $1 - \epsilon$, the following uniform upper bound on the expected $\alpha^{re}$-Rényi divergence holds for all $\rho \in \mathcal{F}$:
$$\int D_{\alpha^{re}}(P^{(n)}_{\theta}, P^{(n)}_{\theta_0})\, \rho(d\theta) \le \frac{\alpha^{re}}{1 - \alpha^{re}} \int r_n(\theta, \theta_0)\, \rho(d\theta) + \frac{K(\rho, \pi) + \log(1/\epsilon)}{1 - \alpha^{re}}.$$
The proof of Proposition 1 follows easily from [15], and we include it in Appendix B.1.1 for completeness. Mirroring the comments in [15], when $\rho = \hat\pi_{n,\alpha^{te}}$ this result is precisely [14] (Theorem 3.4). We also note from [14] that for any $\alpha^{re}, \beta \in (0,1]$, the Rényi divergences of different orders are equivalent, in the sense that each bounds the other up to a constant factor depending only on $\alpha^{re}$ and $\beta$. Hence, for the subsequent results, we simplify by assuming that $\alpha^{te} = \alpha^{re}$. This probabilistic bound implies the following PAC-Bayesian concentration bound on the model risk computed with respect to the fractional variational posterior:

Theorem 1. Let $\mathcal{F}$ be a subset of all probability distributions parameterized by $\Theta$, and assume there exist $\epsilon_n > 0$ and $\rho_n \in \mathcal{F}$ such that
i. $\int K(P^{(n)}_{\theta_0}, P^{(n)}_{\theta})\, \rho_n(d\theta) \le n\epsilon_n$,
ii. $\int \mathrm{Var}(r_n(\theta, \theta_0))\, \rho_n(d\theta) \le n\epsilon_n$, and
iii. $K(\rho_n, \pi) \le n\epsilon_n$.
Then, for any $\alpha^{re} \in (0,1)$ and $(\epsilon, \eta) \in (0,1) \times (0,1)$, the concentration bound in Equation (9) holds with $P^{(n)}_{\theta_0}$-probability at least $1 - \epsilon - \eta$. The proof of Theorem 1 is a generalization of [15] (Theorem 2.4) to the non-i.i.d. setting, and a special case of [16] (Theorem 3.1), where the problem setting includes latent variables. We include a proof for completeness. As noted in [15], the sufficient conditions follow closely from [13], and we will show that they hold for a variety of Markov chain models.
A direct corollary of Theorem 1 follows by setting $\eta = 1/(n\epsilon_n)$ and $\epsilon = e^{-n\epsilon_n}$, and using the fact that $e^{-n\epsilon_n} \le 1/(n\epsilon_n)$, so that the total failure probability is at most $2/(n\epsilon_n)$. Note that Equation (9) is vacuous if $\eta + \epsilon > 1$. Therefore, without loss of generality, we restrict ourselves to the condition $2/(n\epsilon_n) < 1$.
Corollary 1. For any $\alpha^{re} \in (0,1)$, the bound of Theorem 1 holds with probability at least $1 - 2/(n\epsilon_n)$.

We observe that Theorem 1 and Corollary 1 place no assumptions on the nature of the statistical dependence between data points. However, verification of the sufficient conditions is, in general, quite hard. One of our key contributions is to verify that, under reasonable assumptions on the smoothness of the transition kernel, the sufficient conditions of Theorem 1 and Corollary 1 are satisfied by ergodic Markov chains.
Observe that the first two conditions in Corollary 1 ensure that the distribution $\rho_n$ concentrates on parameters $\theta \in \Theta$ around the true parameter $\theta_0$, while the third condition requires that $\rho_n$ not diverge from the prior $\pi$ rapidly as a function of the sample size $n$. In general, verifying the first and third conditions is relatively straightforward. The second condition, on the other hand, is significantly more complicated in the current setting of dependent data, as the variance of $r_n(\theta, \theta_0)$ includes correlations between the observations $\{X_0, \dots, X_n\}$. In the next section, we make assumptions on the transition kernels (and corresponding invariant densities) that 'decouple' the temporal correlations and the model parameters in the setting of strongly mixing and ergodic Markov chain models, and that allow for the verification of the conditions in Corollary 1. Towards this, Propositions 2 and 3 below characterize the expectation and variance of the log-likelihood ratio $r_n(\cdot, \cdot)$ in terms of the probability densities of the sequence $(x_0, \dots, x_n)$ and the marginal densities $q^{(i)}_{\theta_j}$, for $i \in \{1, \dots, n\}$ and $j \in \{1, 2\}$. Fix $\delta > 0$ and, for each $i \in \{1, \dots, n\}$, define
$$Z_i := \log \frac{p_{\theta_1}(X_i \mid X_{i-1})}{p_{\theta_2}(X_i \mid X_{i-1})}, \qquad C^{(i)}_{\theta_1,\theta_2} := \mathbb{E}_{\theta_1} |Z_i|^{2+\delta}.$$
Similarly, define $Z_0 := \log \big( q^{(0)}_{\theta_1}(X_0) / q^{(0)}_{\theta_2}(X_0) \big)$ and $D_{1,2} := \mathbb{E}_{\theta_1} |Z_0|^{2+\delta}$. Suppose the Markov chain corresponding to $\theta_1$ is α-mixing with coefficients $\{\alpha_k\}$. Then, Equation (14) bounds the variance of $r_n$ by a term that is linear in $n$, with a constant proportional to $\sum_{k \ge 0} \alpha_k^{\delta/(2+\delta)}$. Note that this result holds for any parameterized Markov chain. In particular, when the Markov chain is stationary, $C^{(i)}_{\theta_1,\theta_2} = C_{\theta_1,\theta_2}$ for all $i$ and all $\theta \in \Theta$, and Equation (14) simplifies accordingly. If the sum $\sum_{k \ge 0} \alpha_k^{\delta/(2+\delta)}$ is infinite, the bound is trivially true. For it to be finite, of course, the coefficients $\alpha_k$ must decay to zero sufficiently quickly. For instance, Theorem A.1.2 shows that if the Markov chain is geometrically ergodic, then the α-mixing coefficients are geometrically decreasing. We will use this fact when the Markov chain is non-stationary, as in Section 4.
In the next section, however, we first consider the simpler stationary Markov chain setting, where geometric ergodicity conditions are not explicitly imposed. We also note that, unless only a finite number of the $\alpha_k$ are nonzero, the sum $\sum_{k \ge 0} \alpha_k^{\delta/(2+\delta)}$ is infinite when $\delta = 0$; our results will therefore typically require $\delta > 0$.
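The claim that the variance of sums along the chain grows at most linearly in $n$ under fast mixing can be checked by simulation. The sketch below (our illustration) uses a stationary AR(1) chain and the bounded test function $\tanh$ as a stand-in for the summands $Z_i$, and compares $\mathrm{Var}(S_n)$ at $n = 50$ and $n = 100$.

```python
import math
import random

def ar1_path(theta, n, rng):
    # stationary AR(1): X_0 ~ N(0, 1/(1 - theta^2)), X_k = theta X_{k-1} + W_k
    x = rng.gauss(0.0, 1.0 / math.sqrt(1.0 - theta * theta))
    path = [x]
    for _ in range(n):
        x = theta * x + rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def var_of_sum(theta, n, reps, rng):
    # Monte Carlo variance of S_n = sum_i tanh(X_i) over `reps` paths
    sums = [sum(math.tanh(v) for v in ar1_path(theta, n, rng))
            for _ in range(reps)]
    m = sum(sums) / reps
    return sum((s - m) ** 2 for s in sums) / (reps - 1)

rng = random.Random(7)
v50 = var_of_sum(0.5, 50, 4000, rng)
v100 = var_of_sum(0.5, 100, 4000, rng)
ratio = v100 / v50  # close to 2 when the variance grows linearly in n
```

For this fast-mixing chain the ratio is close to 2: doubling the sample size roughly doubles the variance, as the α-mixing bound predicts.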

Stationary Markov Data-Generating Models
Observe that the PAC-Bayesian concentration bound in Corollary 1 specifically requires bounding the mean and variance of the log-likelihood ratio $r_n(\theta, \theta_0)$. We ensure this by imposing regularity conditions on the log-ratio of the one-step transition kernels and of the corresponding invariant densities. Specifically, we assume the following conditions, which decouple the model parameters from the random samples and allow us to verify the bounds in Corollary 1.

Assumption 1. There exist functions $f^{(t)}_k(\cdot, \cdot)$ and $M^{(t)}_k(\cdot)$, $t \in \{1, 2\}$, $k \in \{1, 2, \dots, m\}$, such that for any parameters $\theta_1, \theta_2 \in \Theta$, the log of the ratio of one-step transition kernels and the log of the ratio of the invariant densities satisfy, respectively,
$$\Big| \log \frac{p_{\theta_1}(y \mid x)}{p_{\theta_2}(y \mid x)} \Big| \le \sum_{k=1}^{m} f^{(1)}_k(\theta_1, \theta_2)\, M^{(1)}_k(x, y), \qquad \Big| \log \frac{q_{\theta_1}(x)}{q_{\theta_2}(x)} \Big| \le \sum_{k=1}^{m} f^{(2)}_k(\theta_1, \theta_2)\, M^{(2)}_k(x).$$
We further assume that, for some $\delta > 0$, the functions $f^{(t)}_k$ and $M^{(1)}_k$ satisfy the following: for $t \in \{1, 2\}$, $n \ge 1$ and $k \in \{1, 2, \dots, m\}$,
i. $\int f^{(t)}_k(\theta, \theta_0)^2\, \rho_n(d\theta) \le C/n$ for some constant $C > 0$, and
ii. there exists a constant $B$ such that $\mathbb{E}_{\theta}\big[ M^{(1)}_k(X_{i-1}, X_i)^{2+\delta} \big] \le B$ for all $i$ and $\theta \in \Theta$.

Example 2. Suppose $\{X_0, \dots, X_n\}$ is generated by the 'simple linear' Gauss-Markov model
$$X_n = \theta_0 X_{n-1} + W_n,$$
where $\{W_n\}$ is a sequence of i.i.d. standard Gaussian random variables. The log-ratio of the one-step transition kernels is
$$\log \frac{p_{\theta_1}(X_n \mid X_{n-1})}{p_{\theta_2}(X_n \mid X_{n-1})} = (\theta_1 - \theta_2)\, X_n X_{n-1} - \frac{\theta_1^2 - \theta_2^2}{2}\, X_{n-1}^2,$$
so that $m = 2$, with $M^{(1)}_1(X_n, X_{n-1}) = |X_n X_{n-1}|$, $M^{(1)}_2(X_n, X_{n-1}) = X_{n-1}^2$, and correspondingly $f^{(1)}_1(\theta_1, \theta_2) = |\theta_1 - \theta_2|$ and $f^{(1)}_2(\theta_1, \theta_2) = |\theta_1^2 - \theta_2^2|/2$; for the invariant densities, $M^{(2)}_1(x) = x^2/2$ and $M^{(2)}_2(x) = 0$. The derivation of these quantities, and the fact that they satisfy the conditions of Assumption 1 under an appropriate choice of $\rho_n$, is shown in the proof of Proposition 10.
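The decoupling in Example 2 can be checked directly. The snippet below (our own verification) confirms that the transition log-ratio equals $(\theta_1-\theta_2)X_nX_{n-1} - \tfrac{1}{2}(\theta_1^2-\theta_2^2)X_{n-1}^2$, and is therefore dominated by $f^{(1)}_1 M^{(1)}_1 + f^{(1)}_2 M^{(1)}_2$.

```python
import random

def log_trans_ratio(theta1, theta2, x, y):
    # log[p_theta1(y|x) / p_theta2(y|x)] for X_n = theta X_{n-1} + W_n,
    # W_n ~ N(0, 1); the Gaussian normalizing constants cancel
    return -0.5 * (y - theta1 * x) ** 2 + 0.5 * (y - theta2 * x) ** 2

rng = random.Random(1)
max_gap = 0.0       # |direct evaluation - expanded closed form|
violations = 0      # count of failures of the decoupled bound
for _ in range(1000):
    t1, t2 = rng.uniform(-1, 1), rng.uniform(-1, 1)
    x, y = rng.gauss(0, 2), rng.gauss(0, 2)
    direct = log_trans_ratio(t1, t2, x, y)
    # expanding the squares: (t1 - t2) x y - (t1^2 - t2^2)/2 * x^2
    expanded = (t1 - t2) * x * y - 0.5 * (t1 ** 2 - t2 ** 2) * x ** 2
    max_gap = max(max_gap, abs(direct - expanded))
    # bound: f1 * M1 + f2 * M2 with f1 = |t1 - t2|, M1 = |x y|,
    # f2 = |t1^2 - t2^2| / 2, M2 = x^2
    bound = abs(t1 - t2) * abs(x * y) + 0.5 * abs(t1 ** 2 - t2 ** 2) * x ** 2
    if abs(direct) > bound + 1e-9:
        violations += 1
```

Every random draw satisfies the identity to floating-point precision, and no draw violates the decoupled bound.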
Note that assuming the same number $m$ of $M^{(1)}_k$ and $M^{(2)}_k$ functions involves no loss of generality, since any of these functions can be set to 0. Equations (17) and (18), combined with Assumption 1(i), imply that, for some constant $C > 0$ and each $k \in \{1, 2, \dots, m\}$, $t \in \{1, 2\}$, the contribution of each coefficient $f^{(t)}_k(\theta, \theta_0)$ to the integrated mean and variance bounds is of order $1/n$. Assumption 1(i) is satisfied in a variety of scenarios, for example, under mild assumptions on the partial derivatives of the functions $f^{(t)}_k$. To illustrate this, we present the following proposition.
Proposition 4. Let $f(\theta, \theta_0)$ be a function on a bounded domain with bounded partial derivatives and with $f(\theta_0, \theta_0) = 0$. Let $\{\rho_n(\cdot)\}$ be a sequence of probability densities on $\theta$ such that $\mathbb{E}_{\rho_n}[\theta] = \theta_0$ and $\mathrm{Var}_{\rho_n}[\theta] = \sigma^2/n$ for some $\sigma > 0$. Then, for some $C > 0$,
$$\int f(\theta, \theta_0)^2\, \rho_n(d\theta) \le \frac{C}{n}.$$
To see this, let $L$ bound the partial derivatives of $f$. By the mean value theorem, $|f(\theta, \theta_0)| = |f(\theta, \theta_0) - f(\theta_0, \theta_0)| \le L \|\theta - \theta_0\|$, and hence $\int f(\theta, \theta_0)^2\, \rho_n(d\theta) \le L^2\, \mathrm{Var}_{\rho_n}[\theta] = L^2 \sigma^2 / n$.

We can now state the main theorem of this section.

Theorem 2. Let $\{X_0, \dots, X_n\}$ be generated by a stationary, α-mixing Markov chain parameterized by $\theta_0 \in \Theta$. Suppose that Assumption 1 holds and that the α-mixing coefficients satisfy $\sum_{k \ge 1} \alpha_k^{\delta/(2+\delta)} < +\infty$. Then, the conditions of Corollary 1 are satisfied, with $\epsilon_n = O(1/\sqrt{n})$ whenever $\delta \le 1$.

Theorem 2 covers a large class of Markov chains, including chains with countable and continuous state spaces. In particular, if the Markov chain is geometrically ergodic, then it follows from Equation (A4) (in the appendix) that $\sum_{k \ge 1} \alpha_k^{\delta/(2+\delta)} < +\infty$. Observe that in order to achieve $O(1/\sqrt{n})$ convergence, we need $\delta \le 1$. Key to the proof of Theorem 2 is the fact that the variance of the log-likelihood ratio can be controlled via the application of Assumption 1 and Proposition 3. Note also that as $\delta$ decreases, satisfying the summability condition requires the Markov chain to mix faster. We now illustrate Theorem 2 for a number of Markov chain models. First, consider a birth-death Markov chain on a finite state space.
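Proposition 4 admits a quick Monte Carlo sanity check (our illustration): with the 1-Lipschitz choice $f(\theta, \theta_0) = \sin(\theta - \theta_0)$, which vanishes on the diagonal, and $\rho_n$ taken Gaussian with variance $\sigma^2/n$ (ignoring the bounded-domain restriction for simplicity), the integral $\int f^2\, d\rho_n$ stays below $\sigma^2/n$.

```python
import math
import random

def expected_sq(f, theta0, sigma, n, reps, rng):
    # Monte Carlo estimate of E_{rho_n}[ f(theta, theta0)^2 ]
    # with rho_n = N(theta0, sigma^2 / n)
    tot = 0.0
    for _ in range(reps):
        theta = rng.gauss(theta0, sigma / math.sqrt(n))
        tot += f(theta, theta0) ** 2
    return tot / reps

# sin(theta - theta0) is 1-Lipschitz and vanishes at theta = theta0, so
# the mean value theorem gives E_{rho_n}[f^2] <= sigma^2 / n
f = lambda th, th0: math.sin(th - th0)
rng = random.Random(3)
vals = [expected_sq(f, 0.4, 1.0, n, 20000, rng) for n in (10, 100, 1000)]
```

Each estimate sits just below $1/n$, and the sequence decreases at the predicted rate.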

Proposition 5.
Suppose the data-generating process is a birth-death Markov chain, with one-step transition kernel parameterized by the birth probability $\theta_0 \in \Theta$. Let $\mathcal{F}$ be the set of all Beta distributions, and choose the prior to be a Beta distribution. Then, the conditions of Theorem 2 are satisfied with $\epsilon_n = O(1/\sqrt{n})$.
Proof. The proof of Proposition 5 follows from the more general Proposition 8 by fixing the initial distribution to the invariant distribution under $\theta_0$, and is therefore omitted; we simply refer to the proof of Proposition 8, under a more general setup, in Appendix B.3.
The birth-death chain on a finite state space is, of course, geometrically ergodic, and its α-mixing coefficients $\alpha_k$ decay geometrically. Note that the invariant distribution of this Markov chain is uniform over the state space, and consequently this is a particularly simple example. A more complicated and more realistic example is a birth-death Markov chain on the nonnegative integers. We note that if the probability of birth $\theta$ in a birth-death Markov chain on the nonnegative integers is greater than 0.5, then the Markov chain is transient and, consequently, not ergodic. Hence, the prior should be chosen to have support within (0, 0.5). For that purpose, we define the class of scaled beta distributions.
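The transience threshold at $\theta = 1/2$ is easy to see in simulation. The sketch below (our illustration; the behavior at 0, where a failed birth keeps the chain in place, is a hypothetical boundary convention) runs the chain for 10,000 steps at $\theta = 0.3$ and $\theta = 0.7$.

```python
import random

def simulate_bd(theta, steps, rng):
    # birth-death chain on {0, 1, 2, ...}: from x move up with probability
    # theta, down with probability 1 - theta; at 0 a failed birth keeps
    # the chain at 0 (hypothetical boundary convention for illustration)
    x = 0
    for _ in range(steps):
        if rng.random() < theta:
            x += 1
        elif x > 0:
            x -= 1
    return x

rng = random.Random(11)
x_recurrent = simulate_bd(0.3, 10000, rng)  # theta < 1/2: returns near 0
x_transient = simulate_bd(0.7, 10000, rng)  # theta > 1/2: drifts away
```

With $\theta < 1/2$ the chain keeps returning near 0, while with $\theta > 1/2$ it drifts away linearly; this is why the prior support must be restricted to $(0, 0.5)$.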

Definition 2 (Scaled Beta). If $X$ has a beta distribution on $(0, 1)$ with parameters $a$ and $b$, then $Y = mX + c$ is said to have a scaled beta distribution with the same parameters on the interval $(c, c + m)$, and in that case the pdf of $Y$ is
$$f_Y(y) = \frac{1}{m}\, f_X\Big(\frac{y - c}{m}\Big), \qquad y \in (c, c + m).$$
For the birth-death chain, we set $m = 0.5$ and $c = 0$, giving it support on $(0, 1/2)$. Setting $m = 2$ and $c = -1$ gives a beta distribution rescaled to have support on $(-1, 1)$.
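Definition 2 is straightforward to implement with the standard library alone (our sketch; `beta_pdf` computes the Beta normalizer via `math.lgamma`):

```python
import math

def beta_pdf(x, a, b):
    # standard Beta(a, b) density on (0, 1), normalizer via log-gamma
    if not 0.0 < x < 1.0:
        return 0.0
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1) * math.log(x)
                    + (b - 1) * math.log(1.0 - x))

def scaled_beta_pdf(y, a, b, m, c):
    # density of Y = m X + c with X ~ Beta(a, b): change of variables
    # gives f_Y(y) = f_X((y - c) / m) / m on the support (c, c + m)
    return beta_pdf((y - c) / m, a, b) / m

# midpoint-rule mass on (0, 1/2) with m = 0.5, c = 0 ...
steps = 100000
dy = 0.5 / steps
mass_half = sum(scaled_beta_pdf((i + 0.5) * dy, 2.0, 3.0, 0.5, 0.0) * dy
                for i in range(steps))
# ... and on (-1, 1) with m = 2, c = -1
dy2 = 2.0 / steps
mass_full = sum(scaled_beta_pdf(-1.0 + (i + 0.5) * dy2, 2.0, 3.0, 2.0, -1.0) * dy2
                for i in range(steps))
```

Both Riemann sums integrate to approximately 1, confirming the change-of-variables factor $1/m$, and the density vanishes outside $(c, c + m)$.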

Proposition 6.
Suppose the data-generating process is a positive recurrent birth-death Markov chain on the nonnegative integers, parameterized by the birth probability $\theta_0 \in (0, 1/2)$. Further, let $\mathcal{F}$ be the set of all Beta distributions rescaled to have support $(0, 1/2)$. We choose the prior to be a scaled Beta distribution on $(0, 1/2)$ with parameters $a$ and $b$. Then, the conditions of Theorem 2 are satisfied with $\epsilon_n = O(1/\sqrt{n})$.
Proof. The proof of Proposition 6 (for the stationary case) follows from the more general Proposition 9 (the nonstationary case) by fixing the initial distribution to the invariant distribution under θ 0 . We omit the proof and simply refer to the proof of Proposition 9 under a more general setup in Appendix B.3.
Unlike the finite state-space case, the invariant distribution now depends on the parameter $\theta \in \Theta$, and verification of the conditions of the proposition is more involved. In Appendix A.2, we prove that the class of scaled beta distributions satisfies the condition $K(\rho_n, \pi) \le n\epsilon_n$ when the prior $\pi$ is a beta or a uniform distribution. This fact allows us to prove the above propositions.
Both Proposition 5 and Proposition 6 assume a discrete state space. The next example considers a strictly stationary simple linear model (as defined in Example 2), which has a continuous, unbounded state space.

Proposition 7.
Suppose the data-generating model is a stationary simple linear model,
$$X_n = \theta_0 X_{n-1} + W_n,$$
where $\{W_n\}$ are i.i.d. standard Gaussian random variables and $|\theta_0| < 1$. Suppose that $\mathcal{F}$ is the class of all beta distributions rescaled to have support $(-1, 1)$. Then, the conditions of Theorem 2 are satisfied with $\epsilon_n = O(1/\sqrt{n})$.
Proof. This is a special case of the more general non-stationary simple linear model detailed in Proposition 10; the proof that the simple linear model satisfies Assumption 1 when starting from stationarity is therefore deferred to the proof of Proposition 10. The simple linear model with $|\theta_0| < 1$ has geometrically decreasing (and therefore summable) α-mixing coefficients, as a consequence of [20] (Equation (15.49)) and Theorem A.1.2. Combining these two facts, it follows that the conditions of Theorem 2 are satisfied.
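Stationarity of the simple linear model is also easy to verify numerically (our sketch): drawing $X_0$ from the invariant law $N(0, 1/(1-\theta_0^2))$ keeps the marginal second moment at $1/(1-\theta_0^2)$ along the whole path.

```python
import math
import random

def stationary_ar1(theta, n, rng):
    # X_0 drawn from the invariant law N(0, 1/(1 - theta^2)), so every
    # X_k has the same marginal distribution (strict stationarity)
    x = rng.gauss(0.0, 1.0 / math.sqrt(1.0 - theta * theta))
    out = [x]
    for _ in range(n):
        x = theta * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

theta0 = 0.8
rng = random.Random(5)
path = stationary_ar1(theta0, 200000, rng)
emp_second_moment = sum(v * v for v in path) / len(path)
target = 1.0 / (1.0 - theta0 ** 2)  # invariant variance, about 2.78
```

The empirical second moment along the path matches the invariant variance, consistent with every marginal sharing the invariant distribution.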
Observe that Theorem 1 (and Corollary 1) are general, and hold for any dependent data-generating process. Therefore, there can be Markov chains that satisfy these but do not satisfy Assumption 1; the latter thus entails some loss of generality. However, as our examples demonstrate, common Markov chain models do satisfy it.

Non-Stationary, Ergodic Markov Data-Generating Models
We call a time-homogeneous Markov chain non-stationary if the initial distribution $q^{(0)}$ is not the invariant distribution. There are two sets of results in this setting: in Theorem 3 and Theorem 4 we explicitly impose the α-mixing condition, while in Theorem 5 we impose an $f$-geometric ergodicity condition (Definition A.1.2 in the appendix). As seen in Equation (A4) (in the appendix), if the Markov chain is also geometrically ergodic, the α-mixing coefficients decay geometrically and are therefore summable. This condition can be relaxed, albeit at the risk of more complicated calculations that, nonetheless, mirror those in the geometrically ergodic setting. A common thread through these results is that we must impose some integrability or regularity conditions on the functions $M^{(1)}_k$. First, in Theorem 3, we assume that the $M^{(1)}_k$ functions in Assumption 1 are uniformly bounded and that the α-mixing condition is satisfied. This result holds for both discrete and continuous state space settings.

Theorem 3. Let $\{X_0, \dots, X_n\}$ be generated by an α-mixing Markov chain parameterized by $\theta_0 \in \Theta$, with transition probabilities satisfying Assumption 1 and with known initial distribution $q^{(0)}$. Let $\{\alpha_k\}$ be the α-mixing coefficients under $\theta_0$, assume that $\sum_{k \ge 1} \alpha_k^{\delta/(2+\delta)} < +\infty$, and assume that the functions $M^{(1)}_k$ are uniformly bounded. Then, the conditions of Corollary 1 are satisfied.

The following result, Proposition 8, illustrates Theorem 3 in the setting of a finite state birth-death Markov chain.

Proposition 8.
Suppose the data-generating process is a finite state birth-death Markov chain, with one-step transition kernel parameterized by the birth probability $\theta_0$. Let $\mathcal{F}$ be the set of all Beta distributions, and choose the prior on $\theta_0$ to be a Beta distribution. Then, the conditions of Theorem 3 are satisfied with $\epsilon_n = O(1/\sqrt{n})$ for any initial distribution $q^{(0)}$.

Theorem 3 also applies to data generated by Markov chains with countably infinite state spaces, so long as the class of data-generating Markov chains is strongly ergodic and the initial distribution has finite second moments. The following example demonstrates this in the setting of a birth-death Markov chain on the nonnegative integers, where the initial distribution is assumed to have finite second moments.

Proposition 9.
Suppose the data-generating process is a birth-death Markov chain on the nonnegative integers, parameterized by the probability of birth $\theta_0 \in (0, 1/2)$. Further, let $\mathcal{F}$ be the set of all Beta distributions rescaled to have support $(0, 1/2)$. Let $q^{(0)}$ be a probability mass function on the nonnegative integers such that $\sum_{i=1}^{\infty} i^2 q^{(0)}(i) < +\infty$. We choose the prior to be a scaled Beta distribution on $(0, 1/2)$ with parameters $a$ and $b$. Then, the conditions of Theorem 3 are satisfied with $\epsilon_n = O(1/\sqrt{n})$.
Since continuous functions on a compact domain are bounded, we have the following (easy) corollary (stated without proof).

Corollary 2.
Let $\{X_0, \dots, X_n\}$ be generated by an α-mixing Markov chain parameterized by $\theta_0 \in \Theta$ on a compact state space, and with initial distribution $q^{(0)}$. Suppose the α-mixing coefficients satisfy $\sum_{k \ge 1} \alpha_k^{\delta/(2+\delta)} < +\infty$, and that Assumption 1 holds with continuous functions $M^{(1)}_k(\cdot, \cdot)$, $k \in \{1, 2, \dots, m\}$. Furthermore, assume that there exists $\rho_n$ such that $K(\rho_n, \pi) \le \sqrt{n}\, C$ for some constant $C > 0$. Then, the conditions of Corollary 1 are satisfied.

In general, the $M^{(1)}_k$ functions will not be uniformly bounded (consider the case of the Gauss-Markov simple linear model in Example 2), and stronger conditions must be imposed on the data-generating Markov chain itself. The following assumption imposes a 'drift' condition from [21]. Specifically, [21] (Theorem 2.3) shows that under the conditions of Assumption 2, the moment generating function of an aperiodic Markov chain $\{X_n\}$ can be upper bounded by a function of the moment generating function of $X_0$. Together with the α-mixing condition, Assumption 2 implies that this Markov data generating process satisfies Corollary 1.
Assumption 2. Let $M^k_n := M^{(1)}_k(X_{n-1}, X_n)$, where the functions $M^{(1)}_k$, for each $k = 1, \dots, m$, are defined in Assumption 1. For each $k = 1, \dots, m$, assume the process $\{M^k_n\}$ satisfies the drift conditions of [21] (Theorem 2.3). Under this drift condition, the next theorem shows that Corollary 1 is satisfied.
Theorem 4. Let $\{X_0, \dots, X_n\}$ be generated by an aperiodic α-mixing Markov chain parameterized by $\theta_0 \in \Theta$ with initial distribution $q^{(0)}$. Suppose that Assumption 1 and Assumption 2 hold, and that the α-mixing coefficients satisfy $\sum_{k \ge 1} \alpha_k^{\delta/(2+\delta)} < +\infty$. Then, the conditions of Corollary 1 are satisfied.

Verifying the conditions in Theorem 4 can be quite challenging. Instead, we suggest a different approach that requires $f$-geometric ergodicity. Unlike the drift condition in Assumption 2, $f$-geometric ergodicity additionally requires the existence of a petite set. As noted before, geometric ergodicity implies α-mixing with geometrically decaying mixing coefficients. As with Theorem 4, we assume for simplicity that the Markov chain is aperiodic.

Misspecified Models
We show next how our results can be extended to the misspecified model setting. Assume that the true data-generating distribution is $P^{(n)}_{\theta_0}$, where $\theta_0$ need not lie in $\Theta$. Let
$$\theta^*_n := \arg\min_{\theta \in \Theta} K(P^{(n)}_{\theta_0}, P^{(n)}_{\theta})$$
index the parameterized distribution closest (in KL divergence) to the data-generating distribution. Further, assume the analogue of our usual variance condition: $\int \mathrm{Var}(r_n(\theta, \theta^*_n))\, \rho_n(d\theta) \le n\epsilon_n$. Now, since
$$r_n(\theta, \theta_0) = r_n(\theta, \theta^*_n) + r_n(\theta^*_n, \theta_0),$$
the expectation of $r_n(\theta, \theta_0)$ decomposes into the expectation of $r_n(\theta, \theta^*_n)$ plus a term accounting for the misspecification gap. Similarly, decomposing the variance, it follows that
$$\mathrm{Var}(r_n(\theta, \theta_0)) = \mathrm{Var}(r_n(\theta, \theta^*_n)) + \mathrm{Var}(r_n(\theta^*_n, \theta_0)) + 2\,\mathrm{Cov}\big[r_n(\theta, \theta^*_n),\, r_n(\theta^*_n, \theta_0)\big].$$
Using the fact that $2ab \le a^2 + b^2$ on the covariance term $2\,\mathrm{Cov}[r_n(\theta, \theta^*_n), r_n(\theta^*_n, \theta_0)]$, we obtain
$$\mathrm{Var}(r_n(\theta, \theta_0)) \le 2\,\mathrm{Var}(r_n(\theta, \theta^*_n)) + 2\,\mathrm{Var}(r_n(\theta^*_n, \theta_0)).$$
Integrating both sides with respect to $\rho_n(d\theta)$, we arrive at the following result:

Theorem 6. Let $\mathcal{F}$ be a subset of all probability distributions parameterized by $\Theta$. Let $\theta^*_n = \arg\min_{\theta \in \Theta} K(P^{(n)}_{\theta_0}, P^{(n)}_{\theta})$, and assume there exist $\epsilon_n > 0$ and $\rho_n \in \mathcal{F}$ such that
i. $\int K(P^{(n)}_{\theta^*_n}, P^{(n)}_{\theta})\, \rho_n(d\theta) \le n\epsilon_n$,
ii. $\int \mathrm{Var}(r_n(\theta, \theta^*_n))\, \rho_n(d\theta) \le n\epsilon_n$, and
iii. $K(\rho_n, \pi) \le n\epsilon_n$.
Then, for any α re ∈ (0, 1) and ( , η) ∈ (0, 1) × (0, 1), The proof of this theorem is straightforward and follows from the proof of Theorem 1 by plugging in the upper bounds for KL-divergence from Equation (23), and variance from Equation (26) to Equation (A13). A sketch of the proof is presented in the appendix.

Conclusions
Concentration of the KL-VB model risk, in terms of the expected $\alpha^{re}$-Rényi divergence, is well established under the i.i.d. data generating model assumption. Here, we extended this to the setting of Markov data generating models, linking the concentration rate to the mixing and ergodic properties of the Markov model. Our results apply to both stationary and non-stationary Markov chains, as well as to the situation with misspecified models. There remain a number of open questions. An immediate one is to extend the current analysis to continuous-time Markov chains and Markov jump processes, possibly using uniformization of the continuous-time model. Another direction is to extend this to the setting of non-homogeneous Markov chains, where analogues of notions such as stationarity are less straightforward. Further, as noted in the introduction, [14] establishes PAC-Bayes bounds under slightly weaker 'existence of test functions' conditions, while our results are established under the stronger conditions used by [15] for the i.i.d. setting. Weakening the conditions in our analysis is important, but complicated. A possible path is to build on results from [22], which provides conditions for the existence of exponentially powerful test functions for distinguishing between two Markov chains. It is also known that there exists a likelihood ratio test separating any two ergodic measures [23]. However, leveraging these to establish the PAC-Bayes bounds for the KL-VB posterior is a challenging effort that we leave to future work. Finally, it is of interest to generalize our PAC bounds to posterior approximations beyond KL-variational inference, such as $\alpha^{re}$-Rényi posterior approximations [6] and loss-calibrated posterior approximations [24,25].

Appendix A. Definitions

As noted before, ergodicity plays a central role in establishing our results. We consolidate the various definitions used throughout the paper in this appendix.
Recall that we assume the parameterized Markov chain possesses an invariant probability density or mass function q_θ under parameter θ ∈ Θ. Our results in Section 4 also rely on the ergodic properties of the Markov chain, and we assume that the Markov chain is f-geometrically ergodic [20], defined through the f-norm of a signed measure ν, ‖ν‖_f := sup_{g : |g| ≤ f} |∫ g dν|, where f and g are any two functions.
An immediate consequence of this definition is that if f_1, f_2 are two functions such that f_1 < f_2 (i.e., at all points in the support of the functions), then ‖ν‖_{f_1} ≤ ‖ν‖_{f_2} for any signed measure ν. Now that we have defined the ‖·‖_f norm, we can define f-geometric ergodicity. In the following, we assume the Markov chain is positive Harris; see [20] for a definition. This is a mild and fairly standard assumption in Markov chain theory.
Definition A.1.2 (f-geometric ergodicity). For any function f, a Markov chain {X_n} parameterized by θ ∈ Θ is said to be f-geometrically ergodic if it is positive Harris and there exists a constant r_f > 1, that depends on f, such that for any x ∈ X, ∑_n r_f^n ‖P^n_θ(x, ·) − q_θ‖_f < ∞. It is straightforward to see that this is equivalent to ‖P^n_θ(x, ·) − q_θ‖_f ≤ C r_f^{−n} for an appropriate constant C (which may depend on the state x); that is, the Markov chain approaches steady state at a geometrically fast rate. If a Markov chain is f-geometrically ergodic for f ≡ 1, then it is simply termed geometrically ergodic. It is straightforward to see (via Theorem A.1.2 in the Appendix) that a geometrically ergodic Markov chain is also α-mixing, with mixing coefficients satisfying α_k = O(e^{−ck}) for some c > 0, showing that, under geometric ergodicity, the α-mixing coefficients raised to any positive power υ are finitely summable. We note here that the most standard procedure for establishing f-geometric ergodicity of a Markov chain is through the verification of a drift condition. The drift condition is sufficient for a Markov chain to be f-geometrically ergodic, as long as there exists a set (called a petite set) towards which the Markov chain drifts (see Assumption A.1.1 in the appendix). If a Markov chain is f-geometrically ergodic with f ≡ V for some particular function V, then we call it V-geometrically ergodic. We defined V-geometric ergodicity in the previous sections; in this section, we provide a sufficient condition for a Markov chain to be V-geometrically ergodic. First, we recall the definition of the resolvent from [20] (Chapter 5).
Definition A.1.3 (Resolvent). Let {q_n : n ∈ {0, 1, 2, . . . }} be such that q_n ≥ 0 for all n and ∑_{n=0}^∞ q_n = 1. Note that {q_n} can be thought of as the probability mass function of a random variable "q" taking values on the non-negative integers. Then, the resolvent of a Markov chain with respect to q is given by K_q(x, A) := ∑_{n=0}^∞ q_n P^n(x, A). The definition of petite sets follows (see, for reference, [20] (Chapter 5)).
Definition A.1.4 (Petite Sets). Let X_0, . . . , X_n be samples from a Markov chain taking values on the state space X, and let C be a set. We call C ν_q-petite if there exist a probability mass function {q_n} and a non-trivial measure ν_q on B(X) such that K_q(x, B) ≥ ν_q(B) for all x ∈ C and B ∈ B(X). If a Markov chain drifts towards a petite set then it is V-geometrically ergodic. Suppose, for simplicity, that V(x) = |x|. Then, the drift condition becomes E[|X_n| | X_{n−1} = x] − |x| ≤ −β|x| + b I{x ∈ C}. The left-hand side of this inequality represents the expected change in the state of the Markov chain in one time epoch. Thus, the condition in Assumption A.1.1 essentially states that the Markov chain drifts towards a petite set C and, once it reaches that set, moves to any point in the state space with at least some probability independent of the starting point within C.
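The drift inequality above can be checked directly in simple cases. The following sketch (a hypothetical reflected random walk; the function V, the set C, and all constants are illustrative assumptions, not objects from the paper) verifies E[V(X_1) | X_0 = x] ≤ (1 − β)V(x) + b·1{x ∈ C} over a range of states.

```python
# Sketch (hypothetical example): verify the geometric drift condition
#   E[V(X_1) | X_0 = x] <= (1 - beta) V(x) + b * 1{x in C}
# for a walk on the integers stepping toward 0 w.p. 0.7 and away w.p. 0.3.
z = 1.2
V = lambda x: z ** abs(x)        # Lyapunov function V(x) = z^|x|
C = {0}                          # candidate petite set

def expected_V_next(x):
    """One-step conditional expectation of V given X_0 = x."""
    if x == 0:
        return 0.5 * V(1) + 0.5 * V(-1)      # from 0, jump to +/-1 equally
    toward = x - 1 if x > 0 else x + 1
    away = x + 1 if x > 0 else x - 1
    return 0.7 * V(toward) + 0.3 * V(away)

beta = 1 - (0.7 / z + 0.3 * z)               # off C, E[V(X_1)|x] = (1 - beta) V(x)
b = expected_V_next(0) - (1 - beta) * V(0)   # slack needed on the set C

drift_holds = all(
    expected_V_next(x) <= (1 - beta) * V(x) + b * (x in C) + 1e-9 * V(x)
    for x in range(-50, 51)
)
```

The exponential Lyapunov function V(x) = z^{|x|} is what yields a strictly positive β here; a linear V would only give a constant (not geometric) drift.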
Suppose that {X_n} satisfies Assumption A.1.1. Then, the set S_V = {x : V(x) < ∞} is absorbing, i.e., P_θ(X_1 ∈ S_V | X_0 = x) = 1 for all x ∈ S_V, and full, i.e., ψ(S_V^c) = 0. Furthermore, there exist constants r > 1 and R < ∞ such that, for any A ∈ B(S_V), |P^n_θ(x, A) − q_θ(A)| ≤ R r^{−n} V(x). Any aperiodic and ψ-irreducible Markov chain satisfying the drift condition is geometrically ergodic. A consequence of Equation (A2) is that if {X_n} is V-geometrically ergodic, then for any other function U such that |U| < V, it is also U-geometrically ergodic. In essence, a geometrically ergodic Markov chain is asymptotically uncorrelated in a precise sense. Recall the ρ-mixing coefficients, defined as follows. Let A be a sigma field and L²(A) be the set of square-integrable, real-valued, A-measurable functions.
Definition A.1.5 (ρ-mixing coefficient). Let M_i^j denote the sigma field generated by the random variables X_k, where i ≤ k ≤ j. Then, ρ_k := sup_j sup { |Corr(f, g)| : f ∈ L²(M_1^j), g ∈ L²(M_{j+k}^∞) }, where Corr is the correlation function.
Theorem A.1.2. If {X_n} is geometrically ergodic, then it is α-mixing. That is, there exists a constant c > 0 such that α_k = O(e^{−ck}).
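The geometric rate in Definition A.1.2 and Theorem A.1.2 can be illustrated numerically. The sketch below (a hypothetical two-state chain; the transition probabilities are assumptions for illustration) computes the exact n-step distribution and confirms that the total-variation distance to the invariant law decays geometrically, here at rate |1 − p − q|.

```python
# Hypothetical two-state chain with P(1|0) = p and P(0|1) = q; the TV distance
# to the invariant law decays geometrically at rate |1 - p - q|.
p, q = 0.3, 0.2
pi0, pi1 = q / (p + q), p / (p + q)          # invariant distribution

def tv_after_n_steps(n, start=0):
    """Exact n-step distribution from `start`, then TV distance to (pi0, pi1)."""
    row = [1.0, 0.0] if start == 0 else [0.0, 1.0]
    for _ in range(n):
        row = [row[0] * (1 - p) + row[1] * q, row[0] * p + row[1] * (1 - q)]
    return 0.5 * (abs(row[0] - pi0) + abs(row[1] - pi1))

# successive ratios recover the geometric rate |1 - p - q| = 0.5
ratios = [tv_after_n_steps(n + 1) / tv_after_n_steps(n) for n in range(1, 8)]
```

For a two-state chain the decay rate is exactly the second eigenvalue |1 − p − q| of the transition matrix, so the successive ratios are constant; in larger chains they would only converge to the rate.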

Appendix A.2. Bounding the KL-Divergence between Beta Distributions
The following results will be utilized in the proofs of Propositions 8-10.
Proof. Without loss of generality, we can assume a_n > 1 and b_n > 1; the same form of the result can be obtained in all the other cases by appropriate use of the bounds presented in the proof. We write the KL-divergence K(ρ_n, π) as ∫ log(ρ_n/π) ρ_n(dθ). Since π is uniform, π(θ) = 1 whenever θ ∈ (0, 1). Hence, the KL-divergence equals the negative entropy of ρ_n, namely ∫_0^1 log(ρ_n(θ)) ρ_n(dθ), which can be written as K(ρ_n, π) = (a_n − 1)ψ(a_n) + (b_n − 1)ψ(b_n) − (a_n + b_n − 2)ψ(a_n + b_n) − log Beta(a_n, b_n), where ψ is the digamma function. Using Stirling's approximation on Beta(a_n, b_n) yields Beta(a_n, b_n) = √(2π) a_n^{a_n − 1/2} b_n^{b_n − 1/2} / (a_n + b_n)^{a_n + b_n − 1/2} (1 + o(1)).
Therefore, after much cancellation, the KL-divergence (a_n − 1)ψ(a_n) + (b_n − 1)ψ(b_n) − (a_n + b_n − 2)ψ(a_n + b_n) − log Beta(a_n, b_n) can be upper bounded by −(1/2) log(a_n) − (1/2) log(b_n) + (3/2) log(a_n + b_n) + (a_n + b_n − 2)/(a_n + b_n) − (a_n − 1)/(2a_n) − (b_n − 1)/(2b_n). Plugging in the values of a_n and b_n, we obtain the stated upper bound on the KL-divergence for some large enough positive constant C. This completes our proof.
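As a sanity check on the closed form above, the following sketch (illustrative; the parameter values are assumptions) compares the digamma-based expression for K(ρ_n, π) against direct numerical integration of ∫ ρ_n log ρ_n, approximating ψ by a central difference of the log-gamma function.

```python
import math

# Check of the identity used above: for pi = Uniform(0,1), rho_n = Beta(a_n, b_n),
# K(rho_n, pi) = (a_n-1)psi(a_n) + (b_n-1)psi(b_n)
#                - (a_n+b_n-2)psi(a_n+b_n) - log Beta(a_n, b_n).

def digamma(x, h=1e-5):
    # psi(x) approximated as a central difference of log-gamma
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def kl_beta_uniform_closed(a, b):
    return ((a - 1) * digamma(a) + (b - 1) * digamma(b)
            - (a + b - 2) * digamma(a + b) - log_beta(a, b))

def kl_beta_uniform_numeric(a, b, m=200_000):
    # midpoint rule for the integral of rho * log(rho) over (0, 1)
    total = 0.0
    for i in range(m):
        t = (i + 0.5) / m
        log_rho = (a - 1) * math.log(t) + (b - 1) * math.log(1 - t) - log_beta(a, b)
        total += math.exp(log_rho) * log_rho
    return total / m

a_n, b_n = 8.0, 24.0   # hypothetical values of a_n, b_n
closed = kl_beta_uniform_closed(a_n, b_n)
numeric = kl_beta_uniform_numeric(a_n, b_n)
```

The two values should agree to several decimal places; since the uniform prior has zero entropy, the KL-divergence is exactly the negative entropy of the Beta distribution.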
Proof. Without loss of generality, we assume a > 1 and b > 1; as mentioned in the proof of Lemma A.2.1, the other cases follow similarly. We write the KL-divergence between ρ_n and π as a sum of two terms, where U is a uniform distribution on (0, 1). We analyze the second term in this expression, which can be written with the additive constant C_1 = log(Beta(a, b)). Since ρ_n follows a Beta distribution with parameters a_n = nθ_0 and b_n = n(1 − θ_0), using the lower bound on ψ(nθ_0) and the upper bound on ψ(n), we get −[ψ(a_n) − ψ(a_n + b_n)] < −log(nθ_0), and, similarly, a matching bound for the b_n term. Therefore, it follows that the second term is bounded for a large positive constant C. Using the above bounds, we finally obtain the claimed estimate.

Now, fix α_re ∈ (0, 1) and θ ∈ Θ. First, observe that by the definition of the α_re-Rényi divergence we have ∫ h(θ) ρ(dθ) − K(ρ, π) = 1.
It follows by the definition of the KL divergence that π_{n,α_re}|X_n = arg min_{ρ ∈ F} { −α_re ∫ r_n(θ, θ_0) ρ(dθ) + K(ρ, π) }, (A12) where π is the prior distribution. Following Proposition 1, for any ε > 0 the corresponding bound holds with probability 1 − ε. We fix an η ∈ (0, 1). Using Chebyshev's inequality, we obtain a tail bound; note that (α_re/(1 − α_re)) ∫ E[r_n(θ, θ_0)] ρ_n(dθ) and K(ρ_n, π)/(1 − α_re) are constants with respect to the data, implying the stated concentration. From Proposition 1, with probability 1 − ε the first bound holds; therefore, with probability 1 − η − ε, both statements hold simultaneously. Next, we observe that a straightforward application of Jensen's inequality to the inner integral on the left-hand side gives the penultimate inequality. Finally, following hypotheses (i), (ii) and (iii), we conclude the proof.
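The objective in (A12) has a closed-form minimizer for conjugate families. As an illustrative sketch (the two-state chain, the Beta(a, b) prior, and all numerical values are assumptions, not the paper's setup), consider the transition probability θ = P(1 | 0) of a two-state chain: the tempered likelihood θ^{α n01}(1 − θ)^{α n00} stays conjugate to a Beta prior, so the unconstrained minimizer of (A12) is Beta(a + α n01, b + α n00).

```python
import random

# Illustrative sketch (assumed two-state setup): tempering scales the transition
# counts by alpha, so the tempered posterior over theta = P(1|0) is
# Beta(a + alpha * n01, b + alpha * n00).
random.seed(0)
theta_true, n = 0.7, 5000
x, n01, n00 = 0, 0, 0
for _ in range(n):                # simulate, counting transitions out of state 0
    if x == 0:
        nxt = 1 if random.random() < theta_true else 0
        n01 += nxt
        n00 += 1 - nxt
        x = nxt
    else:
        x = 1 if random.random() < 0.6 else 0     # hypothetical P(1|1) = 0.6

a, b = 1.0, 1.0                   # uniform Beta prior on theta

def tempered_posterior_mean(alpha):
    # mean of Beta(a + alpha*n01, b + alpha*n00)
    return (a + alpha * n01) / (a + b + alpha * (n01 + n00))

mle = n01 / (n01 + n00)
means = [tempered_posterior_mean(al) for al in (0.0, 0.5, 1.0)]
# alpha -> 0 recovers the prior mean; alpha = 1 recovers the usual posterior mean
```

Lower temperatures α shrink the posterior toward the prior, which is one way to see the robustness of tempered posteriors to misspecification of the likelihood.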
Appendix B.1.3. Proof of Proposition 2

We define Y_i := log( p_{θ_1}(X_i | X_{i−1}) / p_{θ_2}(X_i | X_{i−1}) ) for i = 1, . . . , n, and Z_0 := log( q_1^{(0)}(X_0) / q_2^{(0)}(X_0) ). Then, using the Markov property, we can decompose the Kullback-Leibler divergence between the joint distributions P^{(n)}_{θ_1} and P^{(n)}_{θ_2}.

Appendix B.1.4. Proof of Proposition 3

First, recall the following result from [19].
Lemma B.1.5. If {X_n} is α-mixing with mixing coefficients {α_t}, then the paired process Z_i := (X_i, X_{i−1}) is also α-mixing with mixing coefficients {α̃_t}, where α̃_t = α_{t−1}.

Proof. Let Z_i denote the paired random variable (X_i, X_{i−1}). Let M_i^j denote the sigma field generated by the random variables X_k, i ≤ k ≤ j, and let G_i^j denote the sigma field generated by Z_k, i ≤ k ≤ j. Let C ∈ M_{i−1}^j. Then, C can be expressed as C_{i−1} × C_i × · · · × C_j, for C_{i−1} ∈ M_{i−1}^{i−1}, C_i ∈ M_i^i, and so on. Now, consider the natural bijection T_i^j mapping such sets of M_{i−1}^j onto G_i^j. Then, the expression for the α-mixing coefficient of {Z_i} can be reduced to that of {X_n}: by the bijection property of T_i^j, we can find A ∈ M_{−∞}^i and B ∈ M_{i+k−1}^∞ realizing the supremum, so any event of the paired chain separated by lag k corresponds to an event of {X_n} separated by lag k − 1. Therefore, {Z_i} has α-mixing coefficient α_{k−1}.
We now proceed to the proof of Proposition 3. Let {X_k} be a stationary α-mixing Markov chain under θ_1 with mixing coefficients {α_k}. Observe that the log-likelihood ratio can be expressed as r_n(θ_2, θ_1) = Z_0 + ∑_{i=1}^n Y_i, so the variance of the log-likelihood ratio is the variance of this sum. It follows from Lemma B.1.5 that {Y_k} is a stochastic process with α-mixing coefficients α_{k−1}. Therefore, using Lemma B.1.4, we can bound the covariance terms; similarly, we can bound the cross terms involving Z_0. Combining the two upper bounds above, we get the first result: Var_{θ_1}[r_n(θ_2, θ_1)] is bounded by the stated double sum over i, j = 1, . . . , n. Finally, using Equations (A31) and (A32), we obtain the second result.
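The linear-in-n variance growth implied by the bound above can be probed by simulation. The following Monte Carlo sketch (a hypothetical two-state chain; p1, p2 and the second transition row are assumptions for illustration) estimates Var_{θ_1}[r_n(θ_2, θ_1)] at two sample sizes and checks that it roughly doubles when n doubles.

```python
import math
import random

# Monte Carlo sketch: two kernels differing only in theta = P(1|0); r_n is the
# log-likelihood ratio along the path, and its variance under the true kernel
# should grow (at most) linearly in n for this geometrically ergodic chain.
random.seed(1)

def loglik_ratio(n, p1=0.3, p2=0.4):
    x, r = 0, 0.0
    for _ in range(n):
        if x == 0:
            nxt = 1 if random.random() < p1 else 0
            r += math.log(p1 / p2) if nxt else math.log((1 - p1) / (1 - p2))
            x = nxt
        else:
            x = 0 if random.random() < 0.3 else 1   # P(0|1) = 0.3 under both kernels
    return r

def mean_and_var(n, reps=1000):
    vals = [loglik_ratio(n) for _ in range(reps)]
    m = sum(vals) / reps
    return m, sum((v - m) ** 2 for v in vals) / (reps - 1)

m400, v400 = mean_and_var(400)
m800, v800 = mean_and_var(800)
# v800 / v400 should be near 2 (linear growth); m400 > 0 reflects a positive KL rate
```

Both kernels share the same initial state, so Z_0 = 0 here and only the Y_i terms contribute, matching the decomposition used in the proof.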

Appendix B.2. Proofs for Stationary Markov Data-Generating Models
Proof of Theorem 2.

Part 1: Verifying condition (i) of Corollary 1.
By Assumption 1(i), it follows that ∫ K(P^{(n)}_{θ_0}, P^{(n)}_θ) ρ_n(dθ) ≤ n ε^{(1)}_n for some ε^{(1)}_n.

Part 2: Verifying condition (ii) of Corollary 1. Again, using Proposition 3 along with the fact that the Markov chain is stationary, we obtain the variance decomposition, and it remains to bound each of its terms. First, consider the term ∫ C^{(1)}_{θ_0,θ} ρ_n(dθ). By Assumption 1, we have a bound on the (2+δ)-th moment of the log-likelihood increments. Since the function x ↦ x^{2+δ} is convex, we can apply Jensen's inequality to obtain the required moment bound. Therefore, it follows that ∫ C^{(1)}_{θ_0,θ} ρ_n(dθ) = O(1/n). Similarly, we can show that ∫ D_{θ_0,θ} ρ_n(dθ) = O(1/n) and ∫ Var[Z_0] ρ_n(dθ) = O(1/n). For the final term ∫ C^{(1)}_{θ_0,θ} D_{θ_0,θ} ρ_n(dθ), use the Cauchy-Schwarz inequality to obtain an upper bound which is also of order O(1/n). Combining all of these together, we have ∫ Var[r_n(θ, θ_0)] ρ_n(dθ) < n ε^{(2)}_n for some ε^{(2)}_n, and our theorem is proved.

Proof of Theorem 3.

Part 1: Verifying condition (i) of Corollary 1. We set the initial distributions q^{(0)}_1 and q^{(0)}_2 to the distribution q^{(0)}. Applying Proposition 2 to the corresponding transition kernels and initial distribution, and then applying Assumption 1, we can bound the resulting expression. Since the M^{(1)}_k's are bounded, there exists a constant Q so that the bound holds uniformly. By Equation (19) in Assumption 1, it follows that ∫ K(P^{(n)}_{θ_0}, P^{(n)}_θ) ρ_n(dθ) ≤ n ε^{(1)}_n for some ε^{(1)}_n.

Part 2: Verifying condition (ii) of Corollary 1. As in the previous part, Z_0 = 0, implying that D_{θ,θ_0} = 0. Applying Proposition 3 and integrating with respect to ρ_n, we obtain the variance bound. First, consider the term ∫ C^{(i)}_{θ_0,θ} ρ_n(dθ). Using Assumption 1, we can upper bound C^{(i)}_{θ_0,θ}; since the M^{(1)}_k's are upper bounded by Q, it follows from the previous expression and Assumption 1 that, for an L large enough, ∫ C^{(i)}_{θ_0,θ} ρ_n(dθ) ≤ L/n. Next, by the Cauchy-Schwarz inequality, the cross term ∫ C^{(i)}_{θ_0,θ} D_{θ_0,θ} ρ_n(dθ) is also at most L/n. Thus, we have the following upper bound.
Since K(ρ_n, π) ≤ √n C, following the concluding argument in the proof of Theorem 2 completes the proof.
From our choice of a_n and b_n, 2 Beta(a_n − 3, b_n)/Beta(a_n, b_n) = O(1), and plugging the values of a_n and b_n into (a_n − 3) b_n / ((a_n + b_n − 3)² (a_n + b_n − 2)), we see that this ratio is upper bounded by C_1/n for some constant C_1 > 0. Hence, the first bound follows, and similarly we can show the second. Finally, we consider the KL-divergence K(ρ_n, π). Here, ρ_n follows a scaled Beta distribution on (0, 1/2) with parameters a_n = n(θ_0/2) and b_n = n(1 − θ_0/2), while π follows a scaled Beta distribution on (0, 1/2) with parameters a and b. Substituting t = 2θ, K(ρ_n, π) reduces to the KL-divergence between a Beta distribution with parameters a_n and b_n and a Beta distribution with parameters a and b. An application of Proposition A.2.1 gives us, for a constant C_1 > 0, K(ρ_n, π) < 2(C_1 + (1/2) log n). Thus, we now get the claimed bound for some constant.

Part 1: Verifying condition (i) of Corollary 1. As in the proof of Theorem 2, substitute the true parameter θ_0 for θ_1 and θ for θ_2. We also set our initial distributions q^{(0)}_1 and q^{(0)}_2 to the known initial distribution q^{(0)}. A method similar to Equation (A35) yields the corresponding bound. Since the M^{(1)}_k's satisfy Assumption 2, it follows by the application of Theorem 2.3 of [21] that there exists λ > 0 such that for any 0 < κ ≤ λ, and for some ζ ∈ (0, 1) possibly depending upon λ, the stated exponential moment bound holds. We rewrite E[M^{(1)}_k(X_i, X_{i−1}) | X_1, X_0] accordingly; since ζ ∈ (0, 1), we have ζ^i < 1. Hence, we can write the resulting bound for a large constant L. Therefore, ∫ K(P^{(n)}_{θ_0}, P^{(n)}_θ) ρ_n(dθ) can be upper bounded as required, and the condition holds for some ε^{(1)}_n.

Part 2: Verifying condition (ii) of Corollary 1. As in the proof of Theorem 3, we upper bound ∫ Var[r_n(θ, θ_0)] ρ_n(dθ), where C_{θ_0,θ} is bounded as follows. There exists a constant C_δ depending upon δ such that, by expressing E[M^{(1)}_k(X_i, X_{i−1})^{2+δ} | X_1, X_0] and following a method similar to the previous part, we obtain the analogous bound. The fact that 0 < ζ < 1 implies that 0 < ζ^i < ζ.
This gives us the following bound. Since κ < λ, by an application of Jensen's inequality, we get the corresponding moment estimate. We know that ∫ |f^{(1)}_k(θ, θ_0)|^{2+δ} ρ_n(dθ) < C/n. Thus, following Assumption 1, we can say that, for a large constant L, ∫ C^{(i)}_{θ_0,θ} ρ_n(dθ) ≤ L/n. The rest of the proof follows similarly to the proof of Theorem 3, and we obtain an ε^{(2)}_n = O(n^{δ/2}/n) such that ∫ Var[r_n(θ, θ_0)] ρ_n(dθ) < n ε^{(2)}_n.
Plugging these into Equation (A44), we are done.