Inferring Evidence from Nested Sampling Data via Information Field Theory

Abstract: Nested sampling provides an estimate of the evidence of a Bayesian inference problem by probing the likelihood as a function of the enclosed prior volume. However, the lack of precise values of the enclosed prior mass of the samples introduces probing noise, which can hamper high-accuracy determinations of the evidence values as estimated from the likelihood-prior-volume function. We introduce an approach based on information field theory, a framework for non-parametric function reconstruction from data, that infers the likelihood-prior-volume function by exploiting its smoothness and thereby aims to improve the evidence calculation. Our method provides posterior samples of the likelihood-prior-volume function that translate into a quantification of the remaining sampling noise for the evidence estimate, or for any other quantity derived from the likelihood-prior-volume function.


Introduction
Nested sampling is a computational technique for Bayesian inference developed by [1]. Whereas previous statistical sampling algorithms were primarily designed to sample the posterior, the nested sampling algorithm focuses on computing the evidence by estimating how the likelihood function relates to the prior. As discussed in [2], Bayesian inference consists of parameter estimation and model comparison. In Bayesian parameter estimation, the model parameters θ_M for a given model M and data d are inferred via Bayes' theorem,

P(θ_M | d, M) = P(d | θ_M, M) P(θ_M | M) / P(d | M). (1)
Here, P(θ_M | d, M) is the posterior probability for the model parameters θ_M given the data d. The likelihood P(d | θ_M, M) describes the measurement process that generated the data d, and the prior P(θ_M | M) encodes our prior knowledge of the parameters within the given model. The normalization of the posterior,

Z = P(d | M) = ∫ P(d | θ_M, M) P(θ_M | M) dθ_M, (2)

is called the evidence and is the focus of this study. In Bayesian parameter estimation it is common to work with unnormalized posteriors, so in that scenario the computation of the evidence is less critical. In contrast, when comparing different Bayesian models, estimating the evidence for each model is very important. In this case, the aim is to find the most probable model M_i given the data,

P(M_i | d) = P(d | M_i) P(M_i) / P(d). (3)

arXiv:2312.11907v1 [physics.comp-ph] 19 Dec 2023
Assuming a uniform prior over the models under consideration, P(M_i) = const, this turns out to be equivalent to choosing the model with the highest evidence.
In nested sampling, the possibly high-dimensional integral of the posterior normalization in Equation (2) is transformed into a one-dimensional integral by directly using the prior mass X. In particular, by transforming the problem into a series of nested spaces, nested sampling provides an elegant way to compute the evidence. The algorithm starts by drawing N samples from the prior, called the live points. For each of these points the likelihood is calculated, and the live point with the lowest likelihood is removed from the set of live points and added to another set, called the dead points. A new live point is then sampled that has a higher likelihood value than the last added dead point. This type of sampling is commonly referred to as likelihood-restricted sampling; the specific methods associated with it are not discussed further in this paper. As a consequence of this procedure, the prior volume shrinks from one to zero, contracting around the peak of the posterior. The prior mass X contained in the parameter space volume with likelihood values larger than L can be computed by

X(L) = ∫_{L(θ_M) > L} P(θ_M | M) dθ_M. (4)

Thus, Equation (2) simplifies to a one-dimensional integral,

Z = ∫_0^1 L(X) dX, (5)

where L(X) is the inverse of Equation (4). Accordingly, this integral can be approximated by a weighted sum over all m dead points,

Z ≈ Σ_{i=1}^m ω_i L_i. (6)

As proposed in [1], we calculate the weights via ω_i = (X_{i−1} − X_{i+1})/2, assuming X_0 = 1 and X_{m+1} = 0. Adding dead points to their set and adjusting the evidence accordingly continues until the remaining live points occupy a tiny prior volume that would contribute little to the weighted sum in Equation (6).
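The weighted sum of Equation (6) can be sketched numerically. The following is a minimal illustration under stated assumptions: it uses the spherical Gaussian toy likelihood discussed later in Section 2 (whose evidence is known in closed form), the deterministic volume estimate ln X_i = −i/N, and hypothetical values for N and m; it is not the paper's implementation.

```python
import numpy as np
from math import lgamma, log

# Toy likelihood from the paper's Gaussian example: L(X) = exp(-X^(2/D) / (2 sigma^2)).
# Its evidence is analytic: Z = Gamma(D/2 + 1) * (2 sigma^2)^(D/2).
D, sigma = 10, 0.01
lnZ_true = lgamma(D / 2 + 1) + (D / 2) * log(2 * sigma**2)  # about -37.798

N = 100        # live points (hypothetical choice)
m = 60 * N     # dead points, enough to pass the posterior bulk

# Deterministic prior-volume estimate ln X_i = -i/N (Equation (8)).
i = np.arange(1, m + 1)
X = np.exp(-i / N)
L = np.exp(-X**(2 / D) / (2 * sigma**2))

# Trapezoidal weights w_i = (X_{i-1} - X_{i+1})/2 with X_0 = 1, X_{m+1} = 0.
X_pad = np.concatenate([[1.0], X, [0.0]])
w = 0.5 * (X_pad[:-2] - X_pad[2:])

lnZ = np.log(np.sum(w * L))
print(lnZ, lnZ_true)
```

With noise-free volumes the weighted sum is simply a quadrature rule, so it recovers the analytic log-evidence closely; the difficulty addressed in this paper is that the true X_i are not known.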
For the calculation in Equation (6), not only the known live and dead likelihood contours are needed but also the corresponding prior volumes encoded in the ω_i, which are not precisely known. According to [1], there are two approaches to approximate the prior volumes X_i: a stochastic scheme and a deterministic scheme. In the stochastic scheme, the change of volume due to each removed shell i is a stochastic process characterised by a Beta distributed random variable t_i,

X_i = t_i X_{i−1}, P(t_i) = Beta(t_i | N, 1) = N t_i^{N−1}, (7)

where we assume a constant number of live points N. Approaches with a varying number of live points were introduced, inter alia, in dynamic nested sampling by [3,4] and lie beyond the scope of this work. This probabilistic description of the prior volume evolution allows one to draw several samples of prior volumes X, matching the likelihood values L, and thereby to obtain uncertainty estimates on the evidence calculation (Equation (6)). In the deterministic scheme, the logarithmic prior volume at the ith iteration is estimated via

ln X_i ≈ −i/N. (8)

This estimate is derived from the fact that the expectation value of the logarithmic volume change is ⟨ln t_i⟩ = −1/N. However, it does not take the uncertainties in the evidence calculation [5] into account and differs from unbiased approaches introduced and analysed in [6-8]. In any case, the imprecise knowledge of the prior volume introduces probing noise that can potentially hinder the accurate calculation of the evidence. In order to improve the accuracy of the posterior integration, we aim to reconstruct the likelihood-prior-volume function given certain a priori assumptions on the function itself using Bayesian inference. Here, we introduce a prior and likelihood model for the reconstruction of the likelihood-prior-volume function, which we call the reconstruction prior and the reconstruction likelihood to avoid confusion with the likelihood contour and prior volume information obtained from nested sampling. The
left side of Figure 1 illustrates the nested sampling likelihood dead contours, generated by the software package anesthetic [9] for the simple Gaussian example discussed in Section 2 with two live points (N = 2), as a function of prior volume. In the following, we call the likelihood values of the dead points the likelihood data ⃗d_L and the prior volumes, approximated by Equation (8), the prior volume data ⃗d_X. Additionally, the analytical solution of the likelihood-prior-volume function, which we call the ground truth, is plotted. In accordance with the example considered here, we assume the likelihood-prior-volume function to be smooth for most real-world applications of nested sampling. In this study, we propose an approach that incorporates this assumption of a priori smoothness and enforces monotonicity. In particular, we use information field theory (IFT) [10] as a versatile mathematical tool to reconstruct a continuous likelihood-prior-volume function from a discrete dataset of likelihood contours and to impose the prior knowledge on the function.
As noted in [11], the time complexity of the nested sampling algorithm depends on several factors. First, it depends on the information gain of the posterior over the prior, which equals the shrinkage of the prior required to reach the bulk of the posterior. This is described mathematically by the Kullback-Leibler divergence (KL) [12],

D_KL = ∫ P(θ_M | d, M) ln[ P(θ_M | d, M) / P(θ_M | M) ] dθ_M. (9)

Second, the time complexity increases with the number of live points N, which defines the shrinkage per iteration. Furthermore, the time for evaluating the likelihood L(θ), T_L, and the time for sampling a new live point in the likelihood-restricted volume, T_samp, contribute to the time complexity. Accordingly, in [13] the time complexity T of the nested sampling algorithm and the error σ_Z of the logarithmic evidence have been characterised via

T ∝ N D_KL (T_L + T_samp), (10)
σ_Z ≈ √(D_KL / N). (11)

Upon examining the error σ_Z, it becomes evident that reducing the error by increasing the number of live points leads to significantly longer execution times. Accordingly, by inferring the likelihood-prior-volume function, we aim to reduce the error in the log-evidence for a given D_KL and a fixed number of live points N, avoiding a significant increase in time complexity. The rest of the paper is structured as follows. In Section 2, the reconstruction prior of the likelihood-prior-volume curve is described. The model for the reconstruction likelihood and the inference of the likelihood-prior-volume function and the prior volumes using IFT are described in Section 3. The corresponding results for a Gaussian example and the impact on the evidence calculation are shown in Section 4. Finally, conclusions and an outlook on future work are given in Section 5.
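Plugging hypothetical numbers into this scaling (the D_KL value below is made up for illustration) shows why brute force is costly: halving σ_Z requires four times as many live points, and hence roughly four times as many likelihood evaluations at fixed D_KL:

```python
from math import sqrt

D_KL = 30.0                       # hypothetical information gain (nats)
for N in (100, 400):
    sigma_lnZ = sqrt(D_KL / N)    # error scaling of the log-evidence
    n_evals = N * D_KL            # iterations scale as N * D_KL
    print(N, round(sigma_lnZ, 3), int(n_evals))
```

This quadratic cost of error reduction is the motivation for instead inferring the likelihood-prior-volume function at fixed N.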

The Reconstruction Prior Model for the Likelihood-Prior-Volume Function
A priori, we assume that the likelihood-prior-volume function is smooth and monotonically decreasing. This is achieved by representing the rate of change of a monotonic function of the likelihood, a_L, with respect to the logarithmic prior volume, ln X, as a log-normal process,

da_L / d ln X = e^{τ(ln X)}, P(τ) = G(τ, T). (12)

In the words of IFT, we model the one-dimensional field τ, which assigns a value to each logarithmic prior volume, as a Gaussian process with P(τ) = G(τ, T). Thereby, we do not assume a fixed power spectrum for the Gaussian process, but reconstruct it simultaneously with τ itself. An overview of this Gaussian process model is given in Appendix A; the details can be found in [14].
In the most relevant volume for the evidence, the peak region of the posterior is expected to be similar to a Gaussian in a first-order approximation. Therefore, the function a_L is chosen such that τ is constant for the Gaussian case; deviations from the Gaussian are reflected in deviations of τ from this constant. Accordingly, we define

a_L = ln ln(L_max / L), (13)

with L_max being the maximal likelihood. We consider the simple Gaussian example proposed by [1],

L(X) = exp( −X^{2/D} / (2 σ_X²) ), (14)

where D is the dimension and σ_X is the standard deviation. We find that the function a_L(ln X), defined in Equation (13), becomes linear in this case,

a_L(ln X) = (2/D) ln X − ln(2 σ_X²). (15)

Figure 1 illustrates the data and the ground truth on log-log scale on the left and the linear relation a_L(ln X) on the right. According to the log-normal process defined in Equation (12), we define the function a_L(ln X) for arbitrary likelihoods, which is able to account for deviations from the Gaussian case,

a_L(ln X) = a_0 + ∫_0^{ln X} e^{τ(s)} ds. (16)

By inverting Equation (13) we then obtain the desired likelihood-prior-volume function. The logarithmic prior volume values given the likelihood contours are obtained by inversion of Equation (16). In Figure 2, several prior samples from the reconstruction prior model of Equation (16) are shown. However, the maximum log-likelihood, ln L_max, is often not known. In [15], the calculation of the maximum Shannon entropy I = ln P(θ|d) is given. Using this approach, we can calculate the logarithmic maximum likelihood ln L_max and thus calculate a_L for unknown likelihoods.
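The linearity in Equation (15) is easy to verify numerically. A minimal sketch, assuming the reparametrisation a_L = ln ln(L_max/L) with L_max = 1 for the toy Gaussian of Equation (14):

```python
import numpy as np

D, sigma = 10, 0.01
lnX = np.linspace(-60, -1, 200)

# Gaussian example: ln(L_max / L) = X^(2/D) / (2 sigma^2), with L_max = 1.
ln_ratio = np.exp(lnX)**(2 / D) / (2 * sigma**2)
a_L = np.log(ln_ratio)          # a_L = ln ln(L_max / L), Equation (13)

# Equation (15): a_L(ln X) should be linear with slope 2/D.
slope, intercept = np.polyfit(lnX, a_L, 1)
print(slope, intercept)
```

The fitted slope equals 2/D and the intercept equals −ln(2 σ_X²) up to floating-point precision, so for a Gaussian posterior the field τ in Equation (12) is indeed constant.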
Hence, based on the likelihood contours obtained from the nested sampling run, we calculate the data-based evidence, Z_d, using the approximated prior volumes according to Equation (8). This allows us to obtain an estimate of the maximum log-likelihood, ln L_max, of the model for the reparametrisation.

The Reconstruction Likelihood Model and Joint Inference
In this section, we derive a model for the reconstruction likelihood for the joint inference of the likelihood-prior-volume function and the changes of logarithmic prior volume according to the likelihood data ⃗d_L. Here, IFT and the software package NIFTy [16], which facilitates the implementation of IFT algorithms, allow us to infer the reparametrised likelihood-prior-volume function in Equation (16) from the data, given the reconstruction prior and reconstruction likelihood model. For the inference of the likelihood-prior-volume function, we first express a_L as a function of the Beta distributed t_i (Equation (7)),

a_L(ln X_i) = a_0 + ∫_0^{Σ_{j≤i} ln t_j} e^{τ(s)} ds,

where a_0 is the reparametrised likelihood for X_0 = 1. We perform a joint reconstruction of the function a_L and the vector ⃗t_d representing the changes in prior volume according to the reparametrised likelihood data ⃗d_a.
The joint reconstruction posterior follows from Bayes' theorem,

P(a, ⃗t | ⃗d_a) ∝ P(⃗d_a | a, ⃗t) P(a) P(⃗t),

where the reconstruction likelihood, ideally a δ-function δ(⃗d_a − a(⃗t)), is approximated by a narrow Gaussian, G(⃗d_a − a(⃗t), σ_δ² 1). Here, the Gaussian uncertainty σ_δ is chosen small in order to approximately represent the δ-function. So far, we have obtained a probabilistic model for the unnormalized reconstruction posterior. To approximate the true reconstruction posterior distribution, we use variational inference, in particular the geoVI algorithm proposed by [17]. In the end, this statistical approach allows us to obtain an estimate of the likelihood-prior-volume function and the prior volumes via the posterior mean, its uncertainty, and any other quantity of interest that can be derived from the posterior samples. The results of the method developed here are shown in Section 4.
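The role of the narrow Gaussian standing in for the δ-function can be illustrated with a self-contained sketch. The notation here is assumed (a constant τ reproducing the Gaussian case of Equation (15)); this is not the paper's NIFTy/geoVI implementation, only the misfit term the reconstruction likelihood contributes:

```python
import numpy as np

# Sketch (assumed notation): for constant tau the model of Equation (16) is
# a_L(ln X) = a_0 + e^tau * ln X.  The reconstruction likelihood compares this
# prediction at the volumes implied by the shrinkage factors t with the
# reparametrised likelihood data d_a, using a narrow Gaussian of width
# sigma_delta in place of the delta-function.
def neg_log_likelihood(tau, a0, t, d_a, sigma_delta=1e-2):
    lnX = np.cumsum(np.log(t))          # ln X_i = sum_j ln t_j
    model = a0 + np.exp(tau) * lnX      # a_L evaluated at the sampled volumes
    r = (d_a - model) / sigma_delta
    return 0.5 * np.sum(r**2)

# Synthetic check: data generated by the model itself gives (near-)zero misfit.
rng = np.random.default_rng(0)
t = rng.beta(100, 1, size=500)
a0 = -np.log(2 * 0.01**2)               # Gaussian case, D = 10, sigma_X = 0.01
d_a = a0 + 0.2 * np.cumsum(np.log(t))
val = neg_log_likelihood(tau=np.log(0.2), a0=a0, t=t, d_a=d_a)
print(val)
```

In the actual inference, this misfit is minimised jointly over τ, a_0, and ⃗t by geoVI rather than evaluated at known parameters.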

Results
To test the presented method, we perform a reconstruction for the simple Gaussian example discussed in Section 2 and introduced in Figure 1. The corresponding results for the likelihood-prior-volume function are shown in Figure 3. Since the main goal of nested sampling is to compute the evidence, we want to quantify the impact of the proposed method on the evidence calculation. To do this, we use n_samp = 200 posterior samples for the prior volumes X* and calculate the evidence given the likelihood contours ⃗d_L for each of these samples according to Equation (6),

Z* = Σ_{i=1}^m ω_i* ⃗d_{L,i}, with ω_i* = (X*_{i−1} − X*_{i+1})/2. (24)

Similarly, we generate, by means of anesthetic [9], n_samp = 200 samples of the prior volume via the probabilistic nested sampling approach described in Equation (7). For these samples we also calculate the evidence according to Equation (24). A comparison of the histograms of evidences for both sample sets (classical nested sampling and reconstructed prior volumes) is shown in Figure 5. From this comparison one can already see that the standard deviation of the posterior sample evidences for the reconstructed prior volumes is smaller. This is also mirrored in the numbers: the ground truth logarithmic evidence for this Gaussian case is ln Z_gauss = −37.798. The result for the classical nested sampling approach given n_samp = 200 is ln Z_d = −38.92 ± 4.50. Finally, the result inferred with the approach presented here from the likelihood contours, assuming smoothness and enforcing monotonicity, is ln Z = −37.97 ± 2.89.

Conclusions & Outlook
In our search for a more accurate estimate of the evidence, we set out to reconstruct the likelihood-prior-volume function. In particular, a Bayesian method was developed to jointly infer the likelihood-prior-volume function and the vector of prior volumes from the dead-point likelihood contours given by the nested sampling algorithm. For the reconstruction, we enforce monotonicity and assume smoothness. The test of the reconstruction algorithm on a Gaussian example shows a significant improvement in the accuracy of the computed logarithmic evidence.
In general, the approach presented here will only show notable improvements if the assumption of smoothness for the likelihood-prior-volume curve holds. Fortunately, we can reasonably expect this assumption to hold in the majority of cases. Future work will apply the reconstruction algorithm to further likelihoods where the ground truth likelihood-prior-volume function is known for testing, with the ultimate goal of applying it to actual nested sampling outputs. In particular, the method shall be tested on non-Gaussian likelihoods.
Appendix A

The field τ is modelled as a Gaussian process with covariance T, generated from standard Gaussian excitations ξ via

τ = F^{-1}[ p_T ξ ], P(ξ) = G(ξ | 1),

where F is the corresponding Fourier transformation. Here, we model the logarithmic amplitude spectrum, p_T(|k|), as a power law with non-parametric deviations, represented by an integrated Wiener process in logarithmic coordinates l = ln |k| according to [14],

p_T(l) ∝ e^{γ(l)}, d²γ/dl² = η ξ_W(l), P(ξ_W) = G(ξ_W | 1). (A3)

The resulting shape of the power spectrum is encoded in ξ_W, η, and additional integration and normalization parameters. These additional parameters are represented by Gaussian and log-normal processes themselves, and as such their generative prior models are defined by hyperparameters characterising their mean and variance.

Figure 1. (Left): Visualisation of the nested sampling dead-point logarithmic likelihoods, ⃗d_L, as a function of the logarithmic prior mass data, ⃗d_X, for the normalized simple Gaussian in Equation (14) (σ_X = 0.01, D = 10). The corresponding data was generated by the software package anesthetic [9]. (Right): Visualisation of the reparametrised nested sampling dead-point logarithmic likelihoods according to Equation (13) as a function of logarithmic prior mass for the same case as shown on the left.

Figure 3. Reconstruction results for the likelihood-prior-volume function for the simple Gaussian example in Equation (14). The plots show the data, the ground truth, and the reconstruction as well as its uncertainty. (Left): Reconstruction results on log-log scale. (Right): Reconstruction results in reparametrised coordinates according to Equation (13). Moreover, the posterior estimates of the logarithmic prior volumes for the corresponding likelihood data ⃗d_L are shown in Figure 4.

Figure 4. Reconstruction results for the prior volumes given the likelihood data ⃗d_L for the simple Gaussian example in Equation (14). The plots show the data, the ground truth, and the reconstruction as well as its uncertainty. (Left): Reconstruction results on log-log scale. (Right): Reconstruction results in reparametrised coordinates according to Equation (13).

Figure 5. Comparison of histograms of logarithmic evidences for n_samp = 200 samples for the classical nested sampling (NSL) approach and the reconstructed prior volumes.