Bayesian Update with Importance Sampling: Required Sample Size

Importance sampling is used to approximate Bayes’ rule in many computational approaches to Bayesian inverse problems, data assimilation and machine learning. This paper reviews and further investigates the required sample size for importance sampling in terms of the χ2-divergence between target and proposal. We illustrate through examples the roles that dimension, noise-level and other model parameters play in approximating the Bayesian update with importance sampling. Our examples also facilitate a new direct comparison of standard and optimal proposals for particle filtering.


Introduction
Importance sampling is a mechanism to approximate expectations with respect to a target distribution using independent weighted samples from a proposal distribution. The variance of the weights, quantified by the χ²-divergence between target and proposal, gives both necessary and sufficient conditions on the sample size to achieve a desired worst-case error over large classes of test functions. This paper contributes to the understanding of importance sampling to approximate the Bayesian update, where the target is a posterior distribution obtained by conditioning the proposal to observed data. We consider illustrative examples where the χ²-divergence between target and proposal admits a closed formula and it is hence possible to characterize explicitly the required sample size. These examples showcase the fundamental challenges that importance sampling encounters in high dimension and small noise regimes where target and proposal are far apart. They also facilitate a direct comparison of standard and optimal proposals for particle filtering.
We denote the target distribution by $\nu$ and the proposal by $\mu$, and assume that both are probability distributions on Euclidean space $\mathbb{R}^d$. We further suppose that the target is absolutely continuous with respect to the proposal, and denote by $g$ the unnormalized density between target and proposal, so that, for any suitable test function $\varphi$,
$$\nu(\varphi) = \frac{\mu(\varphi g)}{\mu(g)}. \tag{1.1}$$
We write this succinctly as $\nu(\varphi) = \mu(\varphi g)/\mu(g)$. Importance sampling approximates $\nu(\varphi)$ using independent samples $\{u^{(n)}\}_{n=1}^N$ from the proposal $\mu$, computing the numerator and denominator in (1.1) by Monte Carlo integration,
$$\nu(\varphi) \approx \frac{\frac{1}{N}\sum_{n=1}^N \varphi(u^{(n)})\, g(u^{(n)})}{\frac{1}{N}\sum_{n=1}^N g(u^{(n)})} = \sum_{n=1}^N w^{(n)} \varphi(u^{(n)}), \qquad w^{(n)} := \frac{g(u^{(n)})}{\sum_{\ell=1}^N g(u^{(\ell)})}. \tag{1.2}$$
The weights $w^{(n)}$, called autonormalized or self-normalized since they add up to one, can be computed as long as the unnormalized density $g$ can be evaluated point-wise; knowledge of the normalizing constant $\mu(g)$ is not needed. We write (1.2) briefly as $\nu(\varphi) \approx \nu^N(\varphi)$, where $\nu^N$ is the random autonormalized particle approximation measure
$$\nu^N := \sum_{n=1}^N w^{(n)} \delta_{u^{(n)}}, \qquad u^{(n)} \overset{\text{i.i.d.}}{\sim} \mu; \tag{1.3}$$
a minimal numerical illustration of this estimator is sketched at the end of this introduction. This paper is concerned with the study of importance sampling in Bayesian formulations of inverse problems, data assimilation and machine learning tasks [1,26,3,13,14], where the relationship $\nu(du) \propto g(u)\,\mu(du)$ arises from application of Bayes' rule $\mathbb{P}(u|y) \propto \mathbb{P}(y|u)\,\mathbb{P}(u)$; we interpret $u \in \mathbb{R}^d$ as a parameter of interest, $\mu \equiv \mathbb{P}(u)$ as a prior distribution on $u$, $g(u) \equiv g(u;y) \equiv \mathbb{P}(y|u)$ as a likelihood function which tacitly depends on observed data $y \in \mathbb{R}^k$, and $\nu \equiv \mathbb{P}(u|y)$ as the posterior distribution of $u$ given $y$. With this interpretation and terminology, the goal of importance sampling is to approximate posterior expectations using prior samples. Since the prior has fatter tails than the posterior, the Bayesian setting brings further structure into the analysis of importance sampling. In addition, there are several specific features of the application of importance sampling in Bayesian inverse problems, data assimilation and machine learning that shape our presentation and results.
First, Bayesian formulations have the potential to provide uncertainty quantification by computing several posterior quantiles. This motivates considering a worst-case error analysis [11] of importance sampling over large classes of test functions $\varphi$ or, equivalently, bounding a certain distance between the random particle approximation measure $\nu^N$ and the target $\nu$, see [1]. As we will review in Section 2, a key quantity in controlling the error of importance sampling with bounded test functions is the χ²-divergence between target and proposal, given by
$$\chi^2(\nu\|\mu) := \mu\Bigl(\bigl(\tfrac{d\nu}{d\mu} - 1\bigr)^2\Bigr).$$

Second, importance sampling in inverse problems, data assimilation and machine learning applications is often used as a building block of more sophisticated computational methods, and in such a case there may be little or no freedom in the choice of proposal. For this reason, throughout this paper we view both target and proposal as given, and we focus on investigating the required sample size for accurate importance sampling with bounded test functions, following a similar perspective as [8,1,25]. The complementary question of how to choose the proposal to achieve a small variance for a given test function is not considered here. This latter question is of central interest in the simulation of rare events [23] and has been widely studied since the introduction of importance sampling in [16,15], leading to a plethora of adaptive importance sampling schemes [7].
Third, high dimensional and small noise settings are standard in inverse problems, data assimilation and machine learning, and it is essential to understand the scalability of sampling algorithms in these challenging regimes. The curse of dimension of importance sampling has been extensively investigated [4,5,27,22,9,1]. The early works [4,5] demonstrated a weight collapse phenomenon, by which, unless the number of samples is scaled exponentially with the dimension of the parameter, the maximum weight converges to one. The paper [1] also considered small noise limits and further emphasized the need to define precisely the dimension of learning problems. Indeed, while many inverse problems, data assimilation models and machine learning tasks are defined in terms of millions of parameters, their intrinsic dimension is often substantially lower since (i) all parameters are typically not equally important; (ii) substantial a priori information about some parameters may be available; and (iii) the data may be lower dimensional than the parameter space. Here we will provide a unified and accessible understanding of the roles that dimension, noise-level and other model parameters play in approximating the Bayesian update. We will do so through examples where it is possible to compute explicitly the χ²-divergence between target and proposal, and hence the required sample size.
Finally, in the Bayesian context the normalizing constant $\mu(g)$ represents the marginal likelihood and is often computationally intractable. This motivates our focus on the autonormalized importance sampling estimator in (1.2), which estimates both $\mu(\varphi g)$ and $\mu(g)$ using Monte Carlo integration, as opposed to unnormalized variants of importance sampling [25].
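Returning to the estimator (1.2)-(1.3), the following minimal Python sketch implements autonormalized importance sampling for a toy one dimensional problem; the particular prior, likelihood and observation below are illustrative choices, not part of the setting above.

```python
# A minimal sketch of the autonormalized estimator (1.2)-(1.3).
# The one dimensional prior/likelihood below are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def autonormalized_is(sample_proposal, g, phi, N):
    """Estimate nu(phi) = mu(phi g)/mu(g) from N i.i.d. proposal draws."""
    u = sample_proposal(N)            # u^(1), ..., u^(N) i.i.d. from mu
    gu = g(u)                         # unnormalized densities g(u^(n))
    w = gu / gu.sum()                 # autonormalized weights w^(n)
    return np.sum(w * phi(u)), w

# Proposal mu = N(0, 1) (the prior); g(u) = exp(-(y - u)^2 / 2) is the
# likelihood of one observation y = u + eta, eta ~ N(0, 1), so the target
# nu is the posterior and the exact posterior mean is y/2.
y = 1.0
estimate, w = autonormalized_is(
    sample_proposal=lambda N: rng.standard_normal(N),
    g=lambda u: np.exp(-0.5 * (y - u) ** 2),
    phi=lambda u: u,
    N=10_000,
)
print(estimate)  # close to the exact posterior mean 0.5
```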

Main Goals, Specific Contributions and Outline
The main goal of this paper is to provide a rich and unified understanding of the use of importance sampling to approximate the Bayesian update, while keeping the presentation accessible to a large audience. In Section 2 we investigate the required sample size for importance sampling in terms of the χ²-divergence between target and proposal. Section 3 builds on the results in Section 2 to illustrate through numerous examples the fundamental challenges that importance sampling encounters when approximating the Bayesian update in small noise and high dimensional settings. In Section 4 we show how our concrete examples facilitate a new direct comparison of standard and optimal proposals for particle filtering. These examples also allow us to identify model problems where the advantage of the optimal proposal over the standard one can be dramatic.
Next, we provide further details on the specific contributions of each section and link them to the literature. We refer to [1] for a more exhaustive literature review.
• Section 2 provides a unified perspective on the sufficiency and necessity of having a sample size of the order of the χ²-divergence between target and proposal to guarantee accurate importance sampling with bounded test functions. Our analysis and presentation are informed by the specific features that shape the use of importance sampling to approximate Bayes' rule.
The key role of the second moment of the weights, quantified by the χ²-divergence, has long been acknowledged [19,21], and it is intimately related to an effective sample size used by practitioners to monitor the performance of importance sampling [17,18]. A topic of recent interest is the development of adaptive importance sampling schemes where the proposal is chosen by minimizing, over some admissible family of distributions, the χ²-divergence with respect to the target [24,2].
The main original contributions of Section 2 are Proposition 2.2 and Theorem 2.3, which demonstrate the necessity of suitably increasing the sample size with the χ²-divergence along singular limit regimes. The idea of Proposition 2.2 is inspired by [8], but adapted here from relative entropy to χ²-divergence. Our results complement sufficient conditions on the sample size derived in [1] and necessary conditions for unnormalized (as opposed to autonormalized) importance sampling in [25].
• In Section 3, Proposition 3.1 gives a closed formula for the χ²-divergence between posterior and prior in a linear-Gaussian Bayesian inverse problem setting. This formula allows us to investigate the scaling of the χ²-divergence (and thereby the rate at which the sample size needs to grow) in several singular limit regimes, including small observation noise, large prior covariance and large dimension. Numerical examples motivate and complement the theoretical results. In an infinite dimensional setting, Corollary 3.8 establishes an equivalence between absolute continuity, finite χ²-divergence and finite intrinsic dimension. A similar result was proved in more generality in [1] using the advanced theory of Gaussian measures in Hilbert space [6]; our presentation and proof here are elementary, while still giving the same degree of understanding.
• In Section 4 we follow [4,5,27,28,1] and investigate the use of importance sampling to approximate Bayes' rule within one filtering step in a linear-Gaussian setting. We build on the examples and results in Section 3 to identify model regimes where the performance of standard and optimal proposals can be dramatically different. We refer to [12,26] for an introduction to standard and optimal proposals for particle filtering, and to [10] for a more advanced presentation. The main original contribution of this section is Theorem 4.1, which gives a direct comparison of the χ²-divergence between target and standard/optimal proposals. This result improves on [1], where only a comparison between the intrinsic dimensions was established.

Importance Sampling and χ²-divergence
The aim of this section is to demonstrate the central role of the χ²-divergence between target and proposal in determining the accuracy of importance sampling. In Subsection 2.1 we show how the χ²-divergence arises in both sufficient and necessary conditions on the sample size for accurate importance sampling with bounded test functions. Subsection 2.2 describes a well-known connection between the effective sample size and the χ²-divergence. Our investigation of importance sampling to approximate the Bayesian update, developed in Sections 3 and 4, will make use of a closed formula for the χ²-divergence between Gaussians, which we include in Subsection 2.3 for later reference.

Sufficient and Necessary Sample Size
Here we provide general sufficient and necessary conditions on the sample size in terms of $\rho := \chi^2(\nu\|\mu) + 1$. We first review upper-bounds on the worst-case bias and mean-squared error of importance sampling with bounded test functions, which imply that accurate importance sampling is guaranteed if $N \gg \rho$.
The proofs can be found in [1,26] and are therefore omitted.
The next result shows the existence of bounded test functions for which the error may be large with high probability if $N \ll \rho$. The idea is taken from [8], but we adapt it here to obtain a result in terms of the χ²-divergence rather than relative entropy. We denote by $\tilde g := g/\mu(g)$ the normalized density between $\nu$ and $\mu$, and note that $\rho = \mu(\tilde g^2) = \nu(\tilde g)$. The proof rests on taking as test function an indicator of the form $\varphi = 1\{\tilde g \le \lambda\}$: on the one hand, $\mu(\tilde g > \lambda) \le 1/\lambda$ by Markov's inequality; on the other hand, $\nu^N(\varphi) = 1$ if and only if $\tilde g(u^{(n)}) \le \lambda$ for all $1 \le n \le N$. The power of Proposition 2.2 is due to the fact that in some singular limit regimes the distribution of $\tilde g(u)$, $u \sim \nu$, concentrates around its expected value $\rho$. In such a case, for any fixed $c \in (0,1)$ the probability of the event $\tilde g(u) > c\rho$ will not vanish as the singular limit is approached. This idea will become clear in the proof of Theorem 2.3 below.
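To make the mechanism concrete, we record a short sketch of the argument; this display is our reconstruction from the fragments above, with $\lambda > 0$ an arbitrary threshold, and is not a verbatim restatement of Proposition 2.2. Since $\mu(\tilde g) = 1$, Markov's inequality gives $\mu(\tilde g > \lambda) \le 1/\lambda$, and with $\varphi := 1\{\tilde g \le \lambda\}$,
$$\mathbb{P}\bigl(\nu^N(\varphi) = 1\bigr) = \bigl(1 - \mu(\tilde g > \lambda)\bigr)^N \ge 1 - \frac{N}{\lambda}, \qquad \bigl|\nu^N(\varphi) - \nu(\varphi)\bigr| = \nu(\tilde g > \lambda) \ \text{ on the event } \nu^N(\varphi) = 1.$$
Choosing $\lambda$ of order $\rho$ with $N \ll \rho$ makes the first probability close to one, while concentration of $\tilde g(u)$, $u \sim \nu$, around $\nu(\tilde g) = \rho$ keeps the error $\nu(\tilde g > \lambda)$ from vanishing.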
In Sections 3 and 4 we will investigate the required sample size for importance sampling approximation of the Bayesian update in various singular limits, where target and proposal become further apart as a result of reducing the observation noise, increasing the prior uncertainty, or increasing the dimension of the problem. To formalize the discussion in a general abstract setting, let $\{(\nu_\varepsilon, \mu_\varepsilon)\}_{\varepsilon>0}$ be a family of targets and proposals such that $\rho_\varepsilon := \chi^2(\nu_\varepsilon\|\mu_\varepsilon) + 1 \to \infty$ along the singular limit. The parameter $\varepsilon$ may represent for instance the size of the precision of the observation noise, the size of the prior covariance, or a suitable notion of dimension. Our next result shows a clear dichotomy in the performance of importance sampling along the singular limit depending on whether the sample size grows sublinearly or superlinearly with $\rho_\varepsilon$.
The assumption of Theorem 2.3 can be verified for some singular limits of interest, in particular for the small noise and large prior covariance limits studied in Sections 3 and 4; details will be given in Example 3.5. While this assumption may fail to hold in high dimensional singular limit regimes, the works [4,5] and our numerical example in Subsection 4.4 provide compelling evidence of the need to suitably scale $N$ with $\rho_\varepsilon$ along those singular limits in order to avoid a weight-collapse phenomenon. Further theoretical evidence was given for unnormalized importance sampling in [25].

χ²-divergence and Effective Sample Size
The previous subsection provides theoretical non-asymptotic and asymptotic evidence that a sample size larger than $\rho$ is necessary and sufficient for accurate importance sampling. Here we recall a well-known connection between the χ²-divergence and the effective sample size
$$\mathrm{ESS} := \frac{1}{\sum_{n=1}^N \bigl(w^{(n)}\bigr)^2},$$
widely used by practitioners to monitor the performance of importance sampling. Note that always $1 \le \mathrm{ESS} \le N$; it is intuitive that $\mathrm{ESS} = 1$ if the maximum weight is one and $\mathrm{ESS} = N$ if all weights equal $1/N$. To see the connection between ESS and $\rho$, note that
$$\mathrm{ESS} = \frac{\bigl(\sum_{n=1}^N g(u^{(n)})\bigr)^2}{\sum_{n=1}^N g(u^{(n)})^2} = N\, \frac{\bigl(\frac{1}{N}\sum_{n=1}^N g(u^{(n)})\bigr)^2}{\frac{1}{N}\sum_{n=1}^N g(u^{(n)})^2} \approx N\,\frac{\mu(g)^2}{\mu(g^2)} = \frac{N}{\rho}.$$
Therefore, $\mathrm{ESS} \approx N/\rho$: if the sample-based estimate of $\rho$ is significantly larger than $N$, ESS will be small, which gives a warning sign that a larger sample size $N$ may be needed.
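The following short numerical check of this heuristic uses an illustrative Gaussian pair for which $\rho$ is available in closed form; the target, proposal and sample size are our choices for the sketch, not quantities fixed by the text.

```python
# Numerical check (illustrative) of ESS ≈ N / rho for target nu = N(0, 1)
# and proposal mu = N(0, s2); here rho = s2 / sqrt(2 s2 - 1) in closed form.
import numpy as np

rng = np.random.default_rng(1)
N, s2 = 100_000, 4.0

u = rng.normal(0.0, np.sqrt(s2), size=N)        # draws from the proposal mu
g = np.exp(-0.5 * u**2 + 0.5 * u**2 / s2)       # dnu/dmu up to a constant
w = g / g.sum()                                 # autonormalized weights

ess = 1.0 / np.sum(w**2)                        # effective sample size
rho = s2 / np.sqrt(2.0 * s2 - 1.0)
print(ess, N / rho)                             # comparable magnitudes
```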

χ²-divergence Between Gaussians
We conclude this section by recalling an analytical expression for the χ²-divergence between Gaussians. In order to make our presentation self-contained, we include a proof in Appendix A.
It is important to note that non-degenerate Gaussians $\nu = N(m, C)$ and $\mu = N(0, \Sigma)$ on $\mathbb{R}^d$ are always equivalent. However, $\rho = \infty$ unless $2\Sigma \succ C$. In Sections 3 and 4 we will interpret $\nu$ as a posterior and $\mu$ as a prior, in which case automatically $\Sigma \succeq C$ and $\rho < \infty$.
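For later reference, the closed formula takes the following form; this display is our reconstruction via a standard completion-of-squares computation, valid whenever $2\Sigma - C$ is positive definite:
$$\rho = \chi^2(\nu\|\mu) + 1 = \frac{\det \Sigma}{\sqrt{\det C\, \det(2\Sigma - C)}}\, \exp\bigl( m^\top (2\Sigma - C)^{-1} m \bigr), \qquad \nu = N(m, C), \quad \mu = N(0, \Sigma).$$
In one dimension, with $\nu = N(m, c)$ and $\mu = N(0, s)$, this reduces to $\rho = s\,\bigl(c(2s - c)\bigr)^{-1/2} \exp\bigl(m^2/(2s - c)\bigr)$.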

Importance Sampling for Inverse Problems
In this section we study the use of importance sampling in a linear Bayesian inverse problem setting where the target and the proposal represent, respectively, the posterior and the prior distribution. In Subsection 3.1 we describe our setting and we also derive an explicit formula for the χ²-divergence between the posterior and the prior. This explicit formula allows us to determine the scaling of the χ²-divergence in small noise regimes (Subsection 3.2), in the limit of large prior covariance (Subsection 3.3), and in a high dimensional limit (Subsection 3.4). Our overarching goal is to show how the sample size for importance sampling needs to grow along these limiting regimes in order to maintain the same level of accuracy.

Inverse Problem Setting and χ²-divergence Between Posterior and Prior
Let $A \in \mathbb{R}^{k\times d}$ be a given design matrix and consider the linear inverse problem of recovering $u \in \mathbb{R}^d$ from data $y \in \mathbb{R}^k$ related by
$$y = Au + \eta, \qquad \eta \sim N(0, \Gamma), \tag{3.1}$$
where $\eta$ represents measurement noise. We assume henceforth that we are in the underdetermined case $k \le d$, and that $A$ is full rank. We follow a Bayesian perspective and set a Gaussian prior on $u$, $u \sim \mu = N(0, \Sigma)$. We assume throughout that $\Sigma$ and $\Gamma$ are given symmetric positive definite matrices.
The solution to the Bayesian formulation of the inverse problem is the posterior distribution $\nu$ of $u$ given $y$. We are interested in studying the performance of importance sampling with proposal $\mu$ (the prior) and target $\nu$ (the posterior). We recall that under this linear-Gaussian model the posterior distribution is Gaussian [26], and we denote it by $\nu = N(m, C)$. In order to characterize the posterior mean $m$ and covariance $C$, we introduce standard data assimilation notation
$$S := A\Sigma A^\top + \Gamma, \qquad K := \Sigma A^\top S^{-1},$$
where $K$ is the Kalman gain. Then we have
$$m = Ky, \qquad C = (I - KA)\Sigma. \tag{3.2}$$
The proof of the following result is then immediate and therefore omitted.

Proposition 3.1. Consider the inverse problem (3.1) with prior $u \sim \mu = N(0, \Sigma)$ and posterior $\nu = N(m, C)$, with $m$ and $C$ defined in (3.2). Then $\rho = \chi^2(\nu\|\mu) + 1$ admits the explicit characterization
$$\rho = \frac{\det \Sigma}{\sqrt{\det C \, \det(2\Sigma - C)}}\, \exp\bigl( m^\top (2\Sigma - C)^{-1} m \bigr).$$
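As a sanity check, the following sketch evaluates (3.2) and the characterization of Proposition 3.1 numerically; the specific matrices and data are illustrative, and the function name is ours.

```python
# Sketch: posterior mean/covariance (3.2) and rho from Proposition 3.1 for
# the linear-Gaussian inverse problem y = A u + eta. Values are illustrative.
import numpy as np

def posterior_and_rho(A, Gamma, Sigma, y):
    S = A @ Sigma @ A.T + Gamma                    # innovation covariance
    K = Sigma @ A.T @ np.linalg.inv(S)             # Kalman gain
    m = K @ y                                      # posterior mean
    C = (np.eye(Sigma.shape[0]) - K @ A) @ Sigma   # posterior covariance
    D = 2.0 * Sigma - C                            # SPD since C <= Sigma
    rho = (np.linalg.det(Sigma)
           / np.sqrt(np.linalg.det(C) * np.linalg.det(D))
           * np.exp(m @ np.linalg.solve(D, m)))
    return m, C, rho

# Tiny example with d = 2, k = 1.
A = np.array([[1.0, 0.5]])
Gamma = np.array([[0.1]])
Sigma = np.eye(2)
y = np.array([1.0])
m, C, rho = posterior_and_rho(A, Gamma, Sigma, y)
print(rho)  # the required sample size for importance sampling scales with rho
```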
In the following two subsections we employ this result to derive by direct calculation the rate at which the posterior and prior become further apart, in χ²-divergence, in small noise and large prior regimes. To carry out the analysis we use parameters $\gamma^2, \sigma^2 > 0$ to scale the noise covariance, $\gamma^2\Gamma$, and the prior covariance, $\sigma^2\Sigma$.

Importance Sampling in Small Noise Regime
To illustrate the behavior of importance sampling in small noise regimes, we first introduce a motivating numerical study. A similar numerical setup was used in [4] to demonstrate the curse of dimension of importance sampling. We consider the inverse problem setting in Equation (3.1) with $k = d = 5$ and noise covariance $\gamma^2\Gamma$. We conduct 18 numerical experiments with a fixed data $y$. For each experiment, we perform importance sampling 400 times, and report in Figure 1 a histogram of the largest autonormalized weight in each of the 400 realizations. The 18 experiments differ in the sample size $N$ and the size of the observation noise $\gamma^2$: in both Figures 1.a and 1.b we consider three choices of $N$ (rows) and three choices of $\gamma^2$ (columns), made so that in Figure 1.a it holds that $N = \gamma^{-4}$ along the bottom-left to top-right diagonal, while in Figure 1.b $N = \gamma^{-6}$ along the same diagonal. We can see from Figure 1.a that $N = \gamma^{-4}$ is not a fast enough growth of $N$ to avoid weight collapse: the histograms skew to the right along the bottom-left to top-right diagonal, suggesting that weight collapse (i.e. one weight dominating the rest, and therefore the variance of the weights being large) is bound to occur in the joint limit $N \to \infty$, $\gamma \to 0$ with $N = \gamma^{-4}$. In contrast, the histograms in Figure 1.b skew to the left along the same diagonal, suggesting that the probability of weight collapse is significantly reduced if $N = \gamma^{-6}$. We observe a similar behavior with other choices of dimension $d$ by conducting experiments with sample sizes $N = \gamma^{-d+1}$ and $N = \gamma^{-d-1}$, and we include the histograms with $k = d = 4$ in Appendix C. Our next result shows that these empirical findings are in agreement with the scaling of the χ²-divergence between target and proposal in the small noise limit.
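For readers who wish to experiment, the following self-contained sketch mimics the flavor of this study; the identity design matrix, the true signal, and the particular $\gamma$ grid are our illustrative choices and do not reproduce the exact setup of Figure 1.

```python
# Illustrative small noise experiment: track the largest autonormalized
# weight as gamma decreases, with N = gamma^(-4) versus N = gamma^(-6).
import numpy as np

rng = np.random.default_rng(2)
d = 5                                   # k = d = 5 as in the text
A, u_true = np.eye(d), rng.standard_normal(d)

for gamma in [0.5, 0.3, 0.2]:
    y = A @ u_true + gamma * rng.standard_normal(d)
    for N in [int(gamma ** -4), int(gamma ** -6)]:
        u = rng.standard_normal((N, d))                     # prior samples
        logg = -0.5 * np.sum((y - u @ A.T) ** 2, 1) / gamma**2
        w = np.exp(logg - logg.max())                       # stable weights
        w /= w.sum()
        print(f"gamma={gamma:.1f}  N={N:>8d}  max weight={w.max():.3f}")
```

A maximum weight close to one signals weight collapse; the $N = \gamma^{-6}$ rows should exhibit it noticeably less often.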

Importance Sampling and Prior Scaling
Here we illustrate the behavior of importance sampling in the limit of large prior covariance. We start again with a motivating numerical example, similar to the one reported in Figure 1. The behavior is analogous to the small noise regime, which is expected since the ratio of prior and noise covariances determines the closeness between target and proposal. Figure 2 shows that when $k = d = 5$ weight collapse is observed frequently when the sample size $N$ grows as $\sigma^4$, but not so often with sample size $N = \sigma^6$. Similar histograms with $k = d = 4$ are included in Appendix C. These empirical results are in agreement with the theoretical growth rate of the χ²-divergence between target and proposal in the limit of large prior covariance, as we prove next. Let $\nu_\sigma$ denote the posterior and $\rho_\sigma = \chi^2(\nu_\sigma\|\mu_\sigma) + 1$. Then, for almost every $y$, $\rho_\sigma$ grows at rate $\sigma^k$ in the large prior limit $\sigma \to \infty$.

Proof. Using the explicit characterization of $\rho_\sigma$ given by Proposition 3.1, we apply Proposition 3.2 and deduce that, when $\sigma \to \infty$, the determinant factor has a well-defined and invertible limit. On the other hand, we notice that the quadratic term in the exponent vanishes in the limit. The conclusion follows by Proposition 3.1.

Importance Sampling in High Dimension
In this subsection we study importance sampling in high dimensional limits. To that end, we let $\{a_j\}_{j\ge1}$, $\{\gamma_j\}_{j\ge1}$, $\{\sigma_j\}_{j\ge1}$ be infinite sequences and we define, for any $d \ge 1$,
$$A_d := \mathrm{diag}(a_1, \ldots, a_d), \qquad \Gamma_d := \mathrm{diag}(\gamma_1^2, \ldots, \gamma_d^2), \qquad \Sigma_d := \mathrm{diag}(\sigma_1^2, \ldots, \sigma_d^2).$$
We then consider the inverse problem of reconstructing $u \in \mathbb{R}^d$ from data $y \in \mathbb{R}^d$ under the setting
$$y = A_d u + \eta, \qquad \eta \sim N(0, \Gamma_d), \qquad u \sim \mu_{1:d} = N(0, \Sigma_d). \tag{3.3}$$
We denote the corresponding posterior distribution by $\nu_{1:d}$, which is Gaussian with a diagonal covariance. Given observation $y_j$, we may find the posterior distribution $\nu_j$ of $u_j$ by solving the one dimensional linear-Gaussian inverse problem with prior $\mu_j = N(0, \sigma_j^2)$. In this way we have defined, for each $d \in \mathbb{N} \cup \{\infty\}$, an inverse problem with prior and posterior
$$\mu_{1:d} = \bigotimes_{j=1}^d \mu_j, \qquad \nu_{1:d} = \bigotimes_{j=1}^d \nu_j.$$
In Subsection 3.4.1 we include an explicit calculation in the one dimensional inverse setting (3.4), which will be used in Subsection 3.4.2 to establish the rate of growth of $\rho_d = \chi^2(\nu_{1:d}\|\mu_{1:d}) + 1$, and thereby how the sample size needs to be scaled along the high dimensional limit $d \to \infty$ to maintain the same accuracy. Finally, in Subsection 3.4.3 we establish from first principles and our simple one dimensional calculation the equivalence between (i) a certain notion of dimension being finite; (ii) $\rho_\infty < \infty$; and (iii) absolute continuity of $\nu_{1:\infty}$ with respect to $\mu_{1:\infty}$.

One Dimensional Setting
Let $a \in \mathbb{R}$ be given and consider the one dimensional inverse problem of reconstructing $u \in \mathbb{R}$ from data $y \in \mathbb{R}$, under the setting
$$y = au + \eta, \qquad \eta \sim N(0, \gamma^2), \qquad u \sim \mu = N(0, \sigma^2), \qquad \lambda := \frac{a^2\sigma^2}{\gamma^2}. \tag{3.6}$$

Lemma 3.4. Consider the inverse problem (3.6) and let $\tilde g$ denote the normalized density between posterior $\nu$ and prior $\mu$. Then, for any $\ell \ge 0$,
$$\mu(\tilde g^{\,\ell}) = \frac{(1+\lambda)^{\ell/2}}{\sqrt{1+\ell\lambda}} \exp\Bigl( \frac{\ell(\ell-1)\,\lambda\, y^2}{2\gamma^2 (1+\lambda)(1+\ell\lambda)} \Bigr). \tag{3.7}$$
In particular,
$$\rho = \mu(\tilde g^2) = \frac{1+\lambda}{\sqrt{1+2\lambda}} \exp\Bigl( \frac{\lambda y^2}{\gamma^2 (1+\lambda)(1+2\lambda)} \Bigr), \tag{3.8}$$
$$\mathcal{H}(\nu, \mu) = \mu(\tilde g^{1/2}) = \frac{(1+\lambda)^{1/4}}{\sqrt{1+\lambda/2}} \exp\Bigl( -\frac{\lambda y^2}{4\gamma^2 (1+\lambda)(2+\lambda)} \Bigr). \tag{3.9}$$

Proof. A direct calculation gives a closed formula for $\mu(g)$. The same calculation, but replacing $\gamma^2$ by $\gamma^2/\ell$, gives similar expressions for $\mu(g^\ell)$, which leads to (3.7). The other two equations follow by setting $\ell$ to be $2$ and $\tfrac12$.
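A quick numerical check of (3.8) can be carried out by Monte Carlo, using $\mu(\tilde g^2) = \mu(g^2)/\mu(g)^2$; the parameter values below are illustrative.

```python
# Numerical sanity check (illustrative) of formula (3.8): estimate
# rho = mu(g^2) / mu(g)^2 by Monte Carlo in the 1-d problem (3.6).
import numpy as np

rng = np.random.default_rng(3)
a, gamma, sigma, y = 2.0, 0.5, 1.0, 0.7
lam = a**2 * sigma**2 / gamma**2

u = rng.normal(0.0, sigma, size=2_000_000)            # prior samples
g = np.exp(-0.5 * (y - a * u) ** 2 / gamma**2)        # unnormalized density
rho_mc = np.mean(g**2) / np.mean(g) ** 2

rho_formula = ((1 + lam) / np.sqrt(1 + 2 * lam)
               * np.exp(lam * y**2 / (gamma**2 * (1 + lam) * (1 + 2 * lam))))
print(rho_mc, rho_formula)   # should agree up to Monte Carlo error
```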
Lemma 3.4 will be used in the two following subsections to study high dimensional limits. Here we show how this lemma also allows us to verify directly that the assumption of Theorem 2.3 holds in small noise and large prior limits.
Example 3.5. Consider a sequence of inverse problems of the form (3.6) with $\lambda = a^2\sigma^2/\gamma^2$ approaching infinity. Let $\{(\nu_\lambda, \mu_\lambda)\}_{\lambda>0}$ be the corresponding family of posteriors and priors and let $\tilde g_\lambda$ be the normalized density. Lemma 3.4 implies that $\rho_\lambda \to \infty$ as $\lambda \to \infty$, and a similar calculation shows that the distribution of $\tilde g_\lambda(u)$, $u \sim \nu_\lambda$, concentrates around its expected value $\rho_\lambda$. This implies that, for $\lambda$ sufficiently large, the assumption of Theorem 2.3 is satisfied.

Large Dimensional Limit
Now we investigate the behavior of importance sampling in the limit of large dimension, in the inverse problem setting (3.3). We start with an example similar to the ones in Figure 1 and Figure 2. Figure 3 shows that for $\lambda = 1.3$ fixed, weight collapse happens frequently when the sample size $N$ grows polynomially as $d^2$, but not so often if $N$ grows exponentially with $d$. Similar histograms for $\lambda = 2.4$ are included in Appendix C. These empirical results are in agreement with the growth rate of $\rho_d$ in the large $d$ limit.
Proposition 3.6. Consider the inverse problem (3.3). Then
$$\rho_d = \prod_{j=1}^d \frac{1+\lambda_j}{\sqrt{1+2\lambda_j}} \exp\Bigl( \frac{\lambda_j y_j^2}{\gamma_j^2 (1+\lambda_j)(1+2\lambda_j)} \Bigr), \qquad \lambda_j := \frac{a_j^2\sigma_j^2}{\gamma_j^2}.$$

Proof. The formula for $\rho_d$ is a direct consequence of Equation (3.8) and the product structure.
Similarly, we have
$$\mathbb{E}^y\bigl[\log \mu_{1:d}(\tilde g_{1:d}^2)\bigr] = \sum_{j=1}^d \Bigl( \log(1+\lambda_j) - \tfrac{1}{2}\log(1+2\lambda_j) + \frac{\lambda_j}{1+2\lambda_j} \Bigr).$$
Note that the outer expected value in the latter equation averages over the data, while the inner one averages over sampling from the prior $\mu_{1:d}$. This suggests that $\log \rho_d$ grows linearly with the quantity $\tau := \sum_{j=1}^d \tau_j$, $\tau_j := \lambda_j/(1+\lambda_j)$, which has been used as an intrinsic dimension of the inverse problem (3.3). This simple heuristic together with Theorem 2.3 suggests that increasing $N$ exponentially with $\tau$ is both necessary and sufficient to maintain accurate importance sampling along the high dimensional limit $d \to \infty$. In particular, if all coordinates of the problem play the same role, this implies that $N$ needs to grow exponentially with $d$, a manifestation of the curse of dimension of importance sampling [1,4,5].
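The product structure makes the exponential growth easy to observe numerically; in the sketch below all coordinates share the same illustrative value $\lambda_j = 1.3$, matching the figure discussed above, and the data are drawn from their marginal distribution.

```python
# Illustrative computation of rho_d via the product structure: with all
# coordinates identical (lambda_j = lambda), log(rho_d) grows roughly
# linearly in d, so the required sample size N ~ rho_d grows exponentially.
import numpy as np

rng = np.random.default_rng(4)
lam, gamma = 1.3, 1.0

def rho_1d(lam, gamma, y):
    """Formula (3.8) for one coordinate."""
    return ((1 + lam) / np.sqrt(1 + 2 * lam)
            * np.exp(lam * y**2 / (gamma**2 * (1 + lam) * (1 + 2 * lam))))

for d in [5, 10, 20, 40]:
    y = rng.normal(0.0, gamma * np.sqrt(1 + lam), size=d)  # marginal of y_j
    log_rho_d = np.sum(np.log(rho_1d(lam, gamma, y)))      # product structure
    print(f"d={d:>3d}  log rho_d = {log_rho_d:.1f}")
```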

Infinite Dimensional Singularity
Finally, we investigate the case $d = \infty$. Our goal in this subsection is to establish a connection between the effective dimension, the quantity $\rho_\infty$, and absolute continuity. The main result, Corollary 3.8, was proved in more generality in [1]. However, our proof and presentation here require minimal technical background and are based on the explicit calculations obtained in the previous subsections and in the following lemma.
where $g_j$ is an unnormalized density between $\nu_j$ and $\mu_j$. Moreover, we have the following explicit characterizations of the Hellinger integral $\mathcal{H}(\nu_{1:\infty}, \mu_{1:\infty})$ and its average with respect to data realizations:
$$\mathcal{H}(\nu_{1:\infty}, \mu_{1:\infty}) = \prod_{j=1}^\infty \frac{(1+\lambda_j)^{1/4}}{\sqrt{1+\lambda_j/2}} \exp\Bigl( -\frac{\lambda_j y_j^2}{4\gamma_j^2(1+\lambda_j)(2+\lambda_j)} \Bigr), \qquad \mathbb{E}^y\bigl[\mathcal{H}(\nu_{1:\infty}, \mu_{1:\infty})\bigr] = \prod_{j=1}^\infty \frac{2(1+\lambda_j)^{1/4}}{\sqrt{4+3\lambda_j}}.$$

Proof. The formula for the Hellinger integral is a direct consequence of Equation (3.9) and the product structure. On the other hand, the data average follows from the Gaussian identity recorded after this proof, applied coordinate-wise with the marginal variance $\gamma_j^2(1+\lambda_j)$ of $y_j$. The proof of the equivalence between finite Hellinger integral and absolute continuity is given in Appendix B.
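The averaging step uses the standard identity $\mathbb{E}[e^{-\beta y^2}] = (1+2\beta v)^{-1/2}$ for $y \sim N(0, v)$ and $\beta > 0$; the display below, which we include for completeness, applies it with $\beta = \lambda_j/\bigl(4\gamma_j^2(1+\lambda_j)(2+\lambda_j)\bigr)$ and $v = \gamma_j^2(1+\lambda_j)$:
$$\mathbb{E}\Bigl[\exp\Bigl(-\frac{\lambda_j y_j^2}{4\gamma_j^2(1+\lambda_j)(2+\lambda_j)}\Bigr)\Bigr] = \Bigl(1+\frac{\lambda_j}{2(2+\lambda_j)}\Bigr)^{-1/2} = \Bigl(\frac{4+2\lambda_j}{4+3\lambda_j}\Bigr)^{1/2}.$$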
Proof. Observe that $\lambda_j \to 0$ is a direct consequence of each of the three statements, so we will assume $\lambda_j \to 0$ from now on. (i) ⇔ (ii): By Proposition 3.6, for almost every $y$, $\log \rho_\infty$ is finite if and only if $\sum_{j=1}^\infty \lambda_j < \infty$, since $\log(1+\lambda_j) \approx \lambda_j$ for large $j$.

Importance Sampling for Data Assimilation
In this section, we study the use of importance sampling in a particle filtering setting. Following [4,5,27] we focus on one filtering step. Our goal is to provide a new and concrete comparison of two proposals, referred to as standard and optimal in the literature [1]. In Subsection 4.1 we introduce the setting and both proposals, and in Subsection 4.2 we show that the χ²-divergence between target and standard proposal is larger than the χ²-divergence between target and optimal proposal. Subsections 4.3 and 4.4 identify small noise and large dimensional limiting regimes where the sample size for the standard proposal needs to grow unboundedly to maintain the same level of accuracy, while the required sample size for the optimal proposal remains bounded.

One-step Filtering Setting
Let $M$ and $H$ be given matrices. We consider the one-step filtering problem of recovering $u_0, u_1$ from $y$, under the following setting:
$$u_1 = M u_0 + \xi, \quad \xi \sim N(0, Q), \qquad y = H u_1 + \eta, \quad \eta \sim N(0, R), \qquad u_0 \sim N(0, P). \tag{4.1}$$
Similar to the setting in Subsection 3.1, we assume that $P$, $Q$, $R$ are symmetric positive definite and that $M$ and $H$ are full rank. From a Bayesian point of view, we would like to sample from the target distribution $\mathbb{P}_{u_0,u_1|y}$. To achieve this, we can either use $\mu_{\mathrm{std}} = \mathbb{P}_{u_1|u_0}\, \mathbb{P}_{u_0}$ or $\mu_{\mathrm{opt}} = \mathbb{P}_{u_1|u_0,y}\, \mathbb{P}_{u_0}$ as the proposal distribution. The standard proposal $\mu_{\mathrm{std}}$ is the prior distribution of $(u_0, u_1)$ determined by the prior $u_0 \sim N(0, P)$ and the signal dynamics encoded in Equation (4.1). Then assimilating the observation $y$ leads to an inverse problem [1,26] with design matrix, noise covariance, and prior covariance given by
$$A_{\mathrm{std}} := \begin{pmatrix} 0 & H \end{pmatrix}, \qquad \Gamma_{\mathrm{std}} := R, \qquad \Sigma_{\mathrm{std}} := \begin{pmatrix} P & P M^\top \\ M P & M P M^\top + Q \end{pmatrix}. \tag{4.3}$$
We denote by $\mu_{\mathrm{std}} = N(0, \Sigma_{\mathrm{std}})$ the prior distribution and by $\nu_{\mathrm{std}}$ the corresponding posterior distribution.
The optimal proposal $\mu_{\mathrm{opt}}$ samples from $\mathbb{P}_{u_0}$ and the conditional kernel $\mathbb{P}_{u_1|u_0,y}$. Then assimilating $y$ leads to the inverse problem [1,26] $y = HM u_0 + H\xi + \eta$, where the design matrix, noise covariance and prior covariance are given by
$$A_{\mathrm{opt}} := HM, \qquad \Gamma_{\mathrm{opt}} := HQH^\top + R, \qquad \Sigma_{\mathrm{opt}} := P. \tag{4.4}$$
We denote by $\mu_{\mathrm{opt}} = N(0, \Sigma_{\mathrm{opt}})$ the prior distribution and by $\nu_{\mathrm{opt}}$ the corresponding posterior distribution.
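The two proposals can be compared numerically through Proposition 3.1; the sketch below does so in a small illustrative problem, reusing the notation of (4.1)-(4.4) (the matrices and data are our choices, and the helper function mirrors the sketch of Subsection 3.1).

```python
# Sketch comparing rho for the standard and optimal proposals in a one-step
# filtering problem, via the explicit characterization of Proposition 3.1.
import numpy as np
from numpy.linalg import det, inv, solve

def rho_linear_gaussian(A, Gamma, Sigma, y):
    S = A @ Sigma @ A.T + Gamma
    K = Sigma @ A.T @ inv(S)
    m, C = K @ y, (np.eye(len(Sigma)) - K @ A) @ Sigma
    D = 2.0 * Sigma - C
    return det(Sigma) / np.sqrt(det(C) * det(D)) * np.exp(m @ solve(D, m))

d = 2
M, H = np.eye(d), np.eye(d)
P, Q = np.eye(d), np.eye(d)
y = np.array([1.0, -0.5])

for gamma2 in [1.0, 0.1, 0.01]:          # shrink the observation noise
    R = gamma2 * np.eye(d)
    # Standard proposal: parameter (u_0, u_1), quantities (4.3).
    A_std = np.hstack([np.zeros((d, d)), H])
    Sigma_std = np.block([[P, P @ M.T], [M @ P, M @ P @ M.T + Q]])
    rho_std = rho_linear_gaussian(A_std, R, Sigma_std, y)
    # Optimal proposal: parameter u_0, quantities (4.4).
    rho_opt = rho_linear_gaussian(H @ M, H @ Q @ H.T + R, P, y)
    print(f"gamma2={gamma2:<5}  rho_std={rho_std:10.2f}  rho_opt={rho_opt:7.2f}")
```

As the noise shrinks, rho_std blows up while rho_opt stabilizes, anticipating the dichotomy established in Subsection 4.3.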

χ²-divergence Comparison between Standard and Optimal Proposal
Here we show that $\rho_{\mathrm{opt}} \le \rho_{\mathrm{std}}$, where $\rho_{\mathrm{std}} := \chi^2(\nu_{\mathrm{std}}\|\mu_{\mathrm{std}}) + 1$ and $\rho_{\mathrm{opt}} := \chi^2(\nu_{\mathrm{opt}}\|\mu_{\mathrm{opt}}) + 1$. The proof is a direct calculation using the explicit formula in Proposition 3.1. We introduce, as in Section 3, standard Kalman notation
$$S_{\mathrm{std}} := A_{\mathrm{std}} \Sigma_{\mathrm{std}} A_{\mathrm{std}}^\top + \Gamma_{\mathrm{std}}, \qquad S_{\mathrm{opt}} := A_{\mathrm{opt}} \Sigma_{\mathrm{opt}} A_{\mathrm{opt}}^\top + \Gamma_{\mathrm{opt}}.$$
It follows from the definitions in (4.3) and (4.4) that
$$S_{\mathrm{std}} = S_{\mathrm{opt}} = H(MPM^\top + Q)H^\top + R.$$
Since $S_{\mathrm{std}} = S_{\mathrm{opt}}$ we drop the subscripts in what follows, and denote both simply by $S$.
Therefore, writing $(m_{\mathrm{std}}, C_{\mathrm{std}})$ and $(m_{\mathrm{opt}}, C_{\mathrm{opt}})$ for the posterior means and covariances of the corresponding inverse problems, by Proposition 3.1 it suffices to prove the following two inequalities:
$$m_{\mathrm{std}}^\top (2\Sigma_{\mathrm{std}} - C_{\mathrm{std}})^{-1} m_{\mathrm{std}} \;\ge\; m_{\mathrm{opt}}^\top (2\Sigma_{\mathrm{opt}} - C_{\mathrm{opt}})^{-1} m_{\mathrm{opt}}, \tag{4.5}$$
$$\frac{\det \Sigma_{\mathrm{std}}}{\sqrt{\det C_{\mathrm{std}}\, \det(2\Sigma_{\mathrm{std}} - C_{\mathrm{std}})}} \;\ge\; \frac{\det \Sigma_{\mathrm{opt}}}{\sqrt{\det C_{\mathrm{opt}}\, \det(2\Sigma_{\mathrm{opt}} - C_{\mathrm{opt}})}}. \tag{4.6}$$
We start with inequality (4.6), which follows by a direct computation using the definitions in (4.3) and (4.4). For inequality (4.5), a similar computation based on (4.3), (4.4) and the common innovation covariance $S$ yields the claim, as desired.

Standard and Optimal Proposal in Small Noise Regime
It is possible that, along a certain limiting regime, $\rho$ diverges for the standard proposal but not for the optimal proposal. The proposition below gives an explicit example of this scenario. Precisely, consider a one-step filtering setting of the form (4.1) in which the observation noise covariance is scaled as $\gamma^2 R$ with $\gamma \to 0$. Then $\rho_{\mathrm{opt}}$ remains bounded in the limit, while $\rho_{\mathrm{std}}$ diverges at a polynomial rate in $\gamma^{-1}$.

Standard and Optimal Proposal in High Dimension
The previous subsection shows that the standard and optimal proposals can have dramatically different behavior in the small noise regime $\gamma \to 0$. Here we show that both proposals can also lead to dramatically different behavior in high dimensional limits. Precisely, as a consequence of Corollary 3.8 we can easily identify the exact regimes where both proposals converge or diverge in the limit. The notation is analogous to that in Subsection 3.4, and so we omit the details.
