An Objective Prior from a Scoring Rule

In this paper, we introduce a novel objective prior distribution by leveraging the connections between information, divergence and scoring rules. In particular, we do so from the starting point of convex functions representing information in density functions. This provides a natural route to proper local scoring rules via the Bregman divergence. Specifically, we determine the prior by setting the corresponding score function to a constant and solving the resulting differential equation. Although in itself this provides motivation for an objective prior, the prior also minimizes a corresponding information criterion.


Introduction
A major drawback of objective priors, such as Jeffreys prior [1] and the reference prior [2], is that in many cases they are improper. The case for objective priors has been made, see for example [3]; yet, while for a parameter defined over a bounded interval, such as (0, 1), it is generally possible to derive objective prior distributions that are proper, this is not the case for parameters on (0, ∞) or (−∞, ∞). The literature provides many examples where improper prior distributions cannot be suitably employed, such as Bayes factors, mixture models and hierarchical models, to name but a few. Methods have been proposed to overcome these obstacles, for example, intrinsic Bayes factors [4] and fractional Bayes factors [5], or reparametrizing mixture models [6]. However, these types of results are generally valid under a limited number of specific conditions. Additionally, improper prior distributions are not well suited to problems involving large numbers of parameters, as it becomes difficult to establish properness of the full posterior distribution.
The idea of this paper is to present a novel objective prior distribution for continuous parameter spaces by considering the connection between information, divergence and scoring rules. In particular, the proposed prior can be defined over (0, ∞) and (−∞, ∞), the latter by extending the former, and it has the appealing property of being proper.
Recently, Ref. [7] introduced a new class of objective prior which solves a differential equation of the form S(q, q′, q″) = 0, where S is a score function, the solution q acts as the prior distribution, and q′ and q″ are, respectively, the first and second derivatives of q. The solution is also shown to minimize an information criterion.
There are two well-known relations that connect information, proper local scoring rules and divergences. The most famous links Shannon information, the Kullback-Leibler divergence and the log-score: ∫ p log(p/q) dx = ∫ p log p dx − ∫ p log q dx, (1) where p and q are two densities, and integrals will generally be defined with respect to the Lebesgue measure. The term on the left-hand-side of (1) is the Kullback-Leibler divergence [8] between p and q, the first term on the right-hand-side is the Shannon information associated with density p, and the second term is the expectation under p of the log-score function. Another way to connect information, divergence and proper local scoring rules involves the Fisher divergence, Fisher information, and the Hyvärinen score function ([9]): ∫ p (p′/p − q′/q)² dx = ∫ (p′)²/p dx + ∫ p {2q″/q − (q′/q)²} dx, (2) where the final term has been obtained using integration by parts, requiring p q′/q to vanish at the boundary values. In general, these relationships can be expressed as D(p, q) = I(p) + ∫ p S(q) dx, where D denotes the divergence, I the measure of information and S the score; and clearly, since D(p, p) = 0, I(p) = − ∫ p S(p) dx. Recently, in [10], a new class of score function was introduced, where the starting point is the defining property of a score function, namely that ∫ p S(q) dx ≥ ∫ p S(p) dx for all densities p. In other words, a score is said to be proper if the above is minimized by the choice q = p. Let us consider the well-known log-score, S(q) = − log q(x). It satisfies the above property, since for any density p we have ∫ p log(p/q) dx ≥ 0, with equality only when q ≡ p. As such, the log-score is a proper score. Furthermore, a score is said to be local if it depends on q only through the density value q(x). See [10,11]. It must be noted that the log-score is the only proper score which is local.
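These identities are easy to verify numerically; the following sketch (our illustration, with arbitrarily chosen normal densities p = N(0, 1) and q = N(1, 2²)) checks relation (1) on a grid:

```python
import numpy as np

# Grid-based numerical integration on a wide interval.
x = np.linspace(-20.0, 20.0, 200001)
dx = x[1] - x[0]

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

p = normal_pdf(x, 0.0, 1.0)   # density p
q = normal_pdf(x, 1.0, 2.0)   # density q

# Left-hand side of (1): Kullback-Leibler divergence between p and q.
kl = np.sum(p * np.log(p / q)) * dx
# Right-hand side: Shannon information of p plus the expected log-score of q.
shannon = np.sum(p * np.log(p)) * dx          # I(p)
log_score = np.sum(p * (-np.log(q))) * dx     # expectation of S(q) = -log q
assert abs(kl - (shannon + log_score)) < 1e-8
assert kl > 0.0  # D(p, q) >= 0, with equality only at q = p
```

The same check with q = p returns a divergence of zero, reflecting properness of the log-score.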
If we consider the Hyvärinen score function [9], which is given by S(q, q′, q″) = 2 q″/q − (q′/q)², we note that it depends on (q, q′, q″), i.e., on q and its first two derivatives, and as such it is not local in the above sense. However, the locality condition can be weakened [10] by allowing the score to depend on a finite number m of derivatives. Therefore, the Hyvärinen score is an order-2 proper local scoring rule. More generally, if a proper score depends on m derivatives, then it is defined as an order-m local scoring rule. The theory in support of this is based on the fact that the minimizer of ∫ p S(q) dx is p, and this can be investigated using variational analysis. The relevant Euler-Lagrange equation of order two is d/dx (∂φ/∂q′) = ∂φ/∂q. (3) The corresponding general case of (3) is given as Equation (18) in [10]. Throughout this paper we will focus on the case m = 2, since this is where we draw our prior from. Appendix A provides the expression for a general m.
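Properness of the Hyvärinen score can be seen concretely by checking that the expected score under p is minimized at q = p; the sketch below (an illustration, not from the paper) takes p = N(0, 1) and the family q = N(0, σ²):

```python
import numpy as np

x = np.linspace(-30.0, 30.0, 300001)
dx = x[1] - x[0]
p = np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)   # p = N(0, 1)

def expected_hyvarinen_score(sigma):
    # For q = N(0, sigma^2): q'/q = -x/sigma^2 and q''/q = x^2/sigma^4 - 1/sigma^2,
    # so S(q, q', q'') = 2 q''/q - (q'/q)^2 = x^2/sigma^4 - 2/sigma^2.
    s = x ** 2 / sigma ** 4 - 2.0 / sigma ** 2
    return np.sum(p * s) * dx

sigmas = np.linspace(0.5, 2.0, 151)                # grid containing sigma = 1
scores = [expected_hyvarinen_score(sig) for sig in sigmas]
best = sigmas[int(np.argmin(scores))]
assert abs(best - 1.0) < 1e-6                      # minimized at q = p
```

In closed form the expected score is 1/σ⁴ − 2/σ², which is indeed minimized at σ = 1, i.e., at q = p.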
In [10], the solution to Equation (3) is proposed using properties of differential operators and 1-homogeneous functions. Recall that a 1-homogeneous function f is such that f(x, λq, λq′) = λ f(x, q, q′) for any λ > 0. In particular, the Hyvärinen score arises with f(x, q, q′) = (q′)²/q. Furthermore, Refs. [10,11] characterize all local and proper scoring rules of order m = 2. In this respect, as an additional interesting result, in Appendix A we present the characterization using measures of information and the Bregman divergence [12]. The benefits of the proposed approach are that complicated mathematical analysis is avoided and the derivation of the local rule is made explicit. Following [10,11] and the novel derivation of their results using the Bregman divergence, which is the focus of Appendix A, information, divergence and scores can be obtained as follows. For some convex function α : R → R, set φ(q, q′) = q α(q′/q). Then:
1. Divergence: Given the result (A10), we obtain D(p, q) = ∫ {φ(p, p′) − p ∂φ/∂q − p′ ∂φ/∂q′} dx. Using integration by parts on the right-most integral, and assuming that [p · ∂φ/∂q′] vanishes at the extremes of the integral, D(p, q) = ∫ φ(p, p′) dx + ∫ p {−∂φ/∂q + d/dx (∂φ/∂q′)} dx.
2. Information: This follows from the divergence, and from (A4), and is given by I(p) = ∫ φ(p, p′) dx = ∫ p α(p′/p) dx.
3. Score: Again, from the form of the divergence and (A8), this is given by S(q, q′, q″) = α″(q′/q) {q″/q − (q′/q)²} + (q′/q) α′(q′/q) − α(q′/q). (5)
The score S(q, q′, q″) in (5) generalizes the Hyvärinen score, which arises when α(u) = u². The paper is organized as follows. Section 2 introduces the proposed objective prior. Section 3 includes a thorough simulation study, and an application to mixture models that involves both simulated and real data. In Section 4 we discuss another critical scenario where improper priors may result in improper posteriors, i.e., assigning an objective prior to the variance parameter in a hierarchical model. The supporting theory is presented in Appendix A.
In Appendix A.1 we use Bregman divergence to obtain general forms for score functions and associated divergences and following on from this in Appendix A.2 we detail how we use Bregman divergences to obtain a divergence between probability density functions using their first derivatives, and show how to obtain score functions from these divergences. Appendix A.3 provides the general case using m derivatives. Finally, in Appendix A.4 we make the connection with our derivations of scores and that of [10].

New Objective Prior
Ref. [7] proposed constructing objective prior distributions on parameter spaces by solving equations of the kind S(q) = 0. Specifically, they used a weighted mixture of the log-score and the Hyvärinen score functions. Please note that the sole use of the log-score function would result in the uniform prior, which is not appropriate in many cases and may yield improper posterior distributions. On the other hand, a weighted combination of the two score functions yields a differential equation of the form 2 q″/q − (q′/q)² − w log q = 0, (6) where q denotes the prior density and w > 0 is the weight balancing the two score functions. Solutions to the differential Equation (6) can be found for different spaces, and constraints on the shape of q can be imposed so as to obtain a prior density with desirable behavior, such as monotonicity, convexity or log-concavity.
We have already seen that the Hyvärinen score arises with α(u) = u²; see (5). An important property an objective prior distribution may be required to have is a heavy tail, and it is this type of prior we are seeking to obtain in a formal way through an appropriate choice of α. We will consider such priors on (0, ∞). Mirroring the Hyvärinen score, we adopt α(u) = u⁻² with u = q′/q, and q a decreasing density on (0, ∞). In this case, the score in (5) becomes 6u′/u⁴ − 3/u², which, set to 0, becomes u′ = u²/2. The solution is easily seen to be u(x) = −2/(a + x), for some constant a. In this case, the prior on the parameter x ∈ (0, ∞) becomes q(x) = a/(a + x)². (7) To obtain a density function on (0, ∞), i.e., q non-negative and integrable, we choose a > 0, as we are permitted to do through the constant of integration of the differential equation. Interestingly, the prior in (7) is a Lomax distribution [13] with scale parameter a and shape parameter equal to 1. Recalling that the Lomax distribution can be directly connected to the Pareto Type I and Pareto Type II distributions, its heavy-tailed nature is immediately obvious. Figure 1 shows the prior with a = 1. Making the connection more directly with the theory set out in the paper, with α(u) = u⁻² we have φ(u, v) = u³/v², which is easily shown to satisfy φ = u ∂φ/∂u + v ∂φ/∂v. Then, using (A8), we obtain S(q, q′, q″) = 6 q³ q″/(q′)⁴ − 9 q²/(q′)². Setting this to zero, i.e., 2qq″ = 3(q′)², this can be solved and the solution is precisely of the form a/(a + x)². We now write this all out in a theorem.
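The derivation above can be verified directly; the following sketch (with an arbitrary value a = 1.7, our choice) checks that q(x) = a/(a + x)² satisfies 2qq″ = 3(q′)², that u = q′/q solves u′ = u²/2, and that the prior is proper:

```python
import numpy as np

a = 1.7  # any a > 0 will do; the value here is an arbitrary choice for the check

def q(x):  return a / (a + x) ** 2           # prior (7)
def q1(x): return -2.0 * a / (a + x) ** 3    # q'
def q2(x): return 6.0 * a / (a + x) ** 4     # q''

xs = np.linspace(0.01, 50.0, 1000)
# Setting the score to zero gives the ODE 2 q q'' = 3 (q')^2.
assert np.allclose(2.0 * q(xs) * q2(xs), 3.0 * q1(xs) ** 2)
# Equivalently, u = q'/q = -2/(a + x) solves u' = u^2 / 2.
u = q1(xs) / q(xs)
du = 2.0 / (a + xs) ** 2                     # derivative of -2/(a + x)
assert np.allclose(du, 0.5 * u ** 2)
# The prior is proper: its integral over (0, T) matches the Lomax CDF 1 - a/(a + T).
grid = np.linspace(0.0, 1000.0, 2_000_001)
vals = q(grid)
mass = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid))
assert abs(mass - (1.0 - a / (a + 1000.0))) < 1e-6
```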
Theorem 1. Let α(u) = u^ξ, so that φ(u, v) = u α(v/u) = u^(1−ξ) v^ξ. Then φ is convex for either ξ < 0 or ξ > 1. The Euler equation associated with this φ, i.e., d/dx (∂φ/∂p′) = ∂φ/∂p, yields the solution p(x) ∝ (a + x)^ξ, which can be written as S(p, p′, p″) = 0, where S is the corresponding score function (8).
To obtain the corresponding prior on (−∞, ∞), through symmetry about 0, we obtain q(x) = a/{2(a + |x|)²}, −∞ < x < ∞. (9) Here we motivate the natural objective choice a = 1 for the constant. We note that a is a scale parameter in the above, and as such the most fundamental transformation to be considered here is φ = 1/x. This, for example, would take a variance to a precision (and vice versa). For the prior in (7) to be invariant, i.e., p_a(φ) = a/(a + φ)², we need a = 1, since the density of φ = 1/x is p_a(1/φ) φ⁻² = a/(1 + aφ)², which equals a/(a + φ)² if and only if a = 1. All the illustrations that follow have been made taking this choice for a. Although other transformations are clearly available, the dominance of the reciprocal transform is sufficient motivation to select a = 1.
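The invariance argument singling out a = 1 can be checked numerically; a small sketch (our illustration):

```python
import numpy as np

def prior(x, a):
    return a / (a + x) ** 2  # prior (7) on (0, inf)

def prior_of_reciprocal(phi, a):
    # Density of phi = 1/X when X ~ prior(., a), by change of variables:
    # prior(1/phi, a) * |d(1/phi)/dphi| = a / (1 + a*phi)^2.
    return prior(1.0 / phi, a) / phi ** 2

phi = np.linspace(0.01, 100.0, 10001)
# a = 1: the prior is invariant under the reciprocal transformation.
assert np.allclose(prior_of_reciprocal(phi, 1.0), prior(phi, 1.0))
# a = 2: invariance fails, which is why a = 1 is singled out.
assert not np.allclose(prior_of_reciprocal(phi, 2.0), prior(phi, 2.0))
```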

First Examples
The first simulation study concerns inference for a scale parameter; specifically, the standard deviation of a normal density with mean µ = 0 and standard deviation σ ∈ (0, ∞). We compare prior (7) with Jeffreys prior, i.e., π(σ) ∝ 1/σ. We took 250 samples of size n = 30 and n = 100, obtained the posterior distributions using standard MCMC methods (Metropolis-Hastings with a normal random walk proposal, 6000 iterations, with a burn-in of 1000 and a thinning of 10) and computed the following two indices: the square root of the mean squared error (MSE) of the posterior mean, divided by the true parameter value, and the coverage of the 95% posterior credible interval for σ. Table 1 shows the results for the MSE for σ = {0.25, 0.50, 1, 2, 5, 10, 20}, where we see little difference between the performance of the two priors. As one would expect, the MSE is lower for the larger sample size. However, the important point is that the score prior is proper, an important property. The coverage of the 95% posterior credible interval is shown in Table 2, where we can also see very similar behavior between the two priors.
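A single replication of this experiment can be sketched as follows; this is a minimal illustration assuming the prior (7) with a = 1 and true σ = 2, with the chain settings mirroring those described above (it is not the code used for the tables):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_true = 2.0
data = rng.normal(0.0, sigma_true, size=100)

def log_post(sigma):
    # Normal likelihood (mu = 0) plus the log of the score prior (7) with a = 1.
    if sigma <= 0.0:
        return -np.inf
    loglik = -len(data) * np.log(sigma) - np.sum(data ** 2) / (2.0 * sigma ** 2)
    logprior = -2.0 * np.log1p(sigma)   # log of 1/(1 + sigma)^2
    return loglik + logprior

# Random-walk Metropolis-Hastings.
sigma, chain = 1.0, []
for it in range(6000):
    prop = sigma + rng.normal(0.0, 0.2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(sigma):
        sigma = prop
    chain.append(sigma)
draws = np.array(chain[1000::10])       # burn-in 1000, thinning 10
post_mean = draws.mean()
assert abs(post_mean - sigma_true) < 0.6  # posterior concentrates near the truth
```

Repeating this over 250 simulated datasets, and likewise under Jeffreys prior, reproduces the type of comparison summarized in Tables 1 and 2.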
To illustrate the frequentist properties of the prior in (9), we have compared it to a flat prior, π(µ) ∝ 1, in making inference for the location parameter of a log-normal density with unknown µ and known scale parameter σ = 1. As in the previous case, we have drawn 250 samples of size n = 30 and n = 100 and computed the MSE and the coverage of the 95% posterior credible interval. The values of µ considered were from the set {0, 1, 5, 50, 100}. Table 3 shows the MSE for the two priors, where we see that, apart from a small difference for µ = 0, the two priors perform in a very similar fashion.
The coverage of the 95% posterior credible interval for µ is shown in Table 4, where we can see very similar behavior for the two priors, with an exception for µ = 0, although both coverage levels are perfectly acceptable. The general conclusion from the two experiments above is that the prior obtained via α(u) = 1/u² exhibits tails which are sufficiently heavy to deliver good frequentist performance even for large parameter values. These properties are comparable to those obtained with Jeffreys prior, which is well known for being the objective prior yielding good frequentist properties of the posterior. The advantage of our prior is that it is always proper.

Mixture Models
A major area of challenge for objective priors is finite mixture models, where observations are assumed to be generated by the model f(x | ω, θ) = Σ_{l=1}^{k} ω_l f_l(x | θ_l), (10) for densities f_l(·|θ_l), where θ_l is the vector of parameters characterizing the l-th density and the weights ω_l sum to one. Given the ill-defined nature of the model in (10), as discussed in [6], the use of improper priors for the parameters is not acceptable. In particular, if we consider the densities f_l to be location-scale distributions, Ref. [6] shows that under certain circumstances Jeffreys priors cannot be used, due to their improperness. For example, if all parameters are unknown (i.e., weights, location and scale parameters), then Jeffreys prior yields improper posteriors. Even in more restrictive circumstances the use of improper priors is troublesome; if we consider only the location parameters unknown, then Jeffreys prior yields improper posteriors for k > 2. The above represents a severe limitation for Bayesian analysis. Therefore, the objective priors proposed in this paper represent a clear solution to the above problem, avoiding reparameterization or the addition of extra parameters, as proposed, for example, in [6], the latter resulting in increased uncertainty.

Single Sample
In this first simulation study, we illustrate the performance of the proposed prior on the following three-component mixture model, from which we have drawn a sample of size n = 200: 0.25 N(0, 1.2²) + 0.65 N(−10, 1²) + 0.10 N(7, 0.8²).
In terms of prior distributions, we have assumed a symmetric Dirichlet prior with concentration parameters equal to one for the weights, and for the means and standard deviations of the Gaussian components the proposed prior on (−∞, ∞) and on (0, ∞), respectively. We have also assumed independent information, so the prior for the parameters of the components has the form π(µ, σ) = Π_{j=1}^{3} π(µ_j) π(σ_j), where µ = (µ₁, µ₂, µ₃) and σ = (σ₁, σ₂, σ₃). The histogram of the sample, together with the true density, is shown in Figure 2. The analysis uses a Metropolis-within-Gibbs algorithm with a normal random walk proposal, with a total of 60,000 iterations, a burn-in of 10,000 and a thinning of 100. In Table 5 we report the posterior means and the 95% posterior credible intervals for the parameters of the mixture model, where we note that the true values are within the posterior credible intervals.
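The joint prior just described is straightforward to evaluate; the sketch below assumes a = 1 in both (7) and (9), with the symmetrized prior (9) taken as 1/{2(1 + |µ|)²}:

```python
import numpy as np

def log_prior_mu(mu):
    # Proposed prior (9) on (-inf, inf) with a = 1: 1/(2 (1 + |mu|)^2).
    return -np.log(2.0) - 2.0 * np.log1p(np.abs(mu))

def log_prior_sigma(sigma):
    # Proposed prior (7) on (0, inf) with a = 1: 1/(1 + sigma)^2.
    return -2.0 * np.log1p(sigma)

def log_prior(mu, sigma):
    # Independence across components: the joint prior is the product of marginals.
    return np.sum(log_prior_mu(np.asarray(mu))) + np.sum(log_prior_sigma(np.asarray(sigma)))

# The joint log-prior at the true values of the three-component mixture above.
lp = log_prior([0.0, -10.0, 7.0], [1.2, 1.0, 0.8])
assert np.isfinite(lp)

# Both marginals are proper; e.g., prior (9) integrates (numerically) to one.
x = np.linspace(-1e4, 1e4, 4_000_001)
mass = np.sum(np.exp(log_prior_mu(x))) * (x[1] - x[0])
assert abs(mass - 1.0) < 1e-3
```

Adding the Dirichlet log-density for the weights and the mixture log-likelihood gives the full (log) posterior kernel targeted by the Metropolis-within-Gibbs sampler.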

Repeated Sampling
To gain a more thorough understanding of the proposed prior, we have performed experiments on repeated sampling, taking into consideration different scenarios, which include different sample sizes and model structures. We have limited the analysis to mixtures of normal densities, but due to the properness of the prior, its implementation can be extended to any family of densities for the mixture components. We computed the posterior distribution for M = 20 replications of samples of size n = 50, 100, 200 from mixture models with number of components k = 3, 4, 5. For these illustrations, we report the results for the means and the standard deviations of each component, as estimated using the proposed prior. Please note that we have not chosen variable weights, as these are not assigned the proposed prior. Figure 3 shows the boxplots of the posterior means for the means µ of the mixture components, while Figure 4 shows the boxplots of the posterior means for the standard deviations σ. As one would expect, the larger the sample size, the less variability in the repeated estimates, for the same number of components. Keeping the sample size fixed, we notice an increase in the variability of the estimates as the number of components grows, which is also an expected result.

Real Data Analysis
In this section, we analyze the well-known galaxy data set, which contains the velocities of 82 galaxies in the Corona Borealis region. To support a particular theory about the formation of galaxies, the analysis aims to estimate the number of stellar populations. This is a benchmark data set, well investigated in the literature, for example in [14-16], among others. We consider the galaxy velocities as random variables distributed according to a mixture of k normal densities. The estimation of the number of components has proved to be delicate, with estimates ranging from 3 to 7, depending on factors such as the priors for the parameters and the Bayesian method used. The histogram of the data set with a superimposed density is presented in Figure 5. To select the number of components in the mixture of normal densities, we have fitted models with k = 2, 3, 4, 5, 6, 7, 8 components, computing the Deviance Information Criterion (DIC) under each model. The results are reported in Table 6. According to the computed index, we identify as best model the mixture with k = 4 components, which is in line with [16], and slightly more conservative than [15,16], where the number of components with non-zero weight is 5. Table 7 shows the posterior means for the parameters of the 4 estimated components.
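The DIC can be computed generically from posterior draws as DIC = D̄ + p_D, with p_D = D̄ − D(θ̄), where D̄ is the posterior mean deviance and D(θ̄) the deviance at the posterior mean; the sketch below illustrates this on a hypothetical toy normal model, not the galaxy analysis itself:

```python
import numpy as np

def dic(log_lik, draws):
    """DIC = mean deviance + p_D, where p_D = mean deviance minus the
    deviance evaluated at the posterior mean of the parameters."""
    devs = np.array([-2.0 * log_lik(theta) for theta in draws])
    d_bar = devs.mean()
    d_hat = -2.0 * log_lik(np.mean(draws, axis=0))
    p_d = d_bar - d_hat
    return d_bar + p_d

# Toy check: normal model with known sigma = 1, approximate posterior draws for mu.
rng = np.random.default_rng(1)
data = rng.normal(0.3, 1.0, size=50)
draws = rng.normal(data.mean(), 1.0 / np.sqrt(len(data)), size=(2000, 1))
ll = lambda mu: float(-0.5 * np.sum((data - mu[0]) ** 2)
                      - 0.5 * len(data) * np.log(2.0 * np.pi))
val = dic(ll, draws)
assert np.isfinite(val)
# For a deviance convex in the parameter, p_D >= 0 by Jensen's inequality.
assert val >= -2.0 * ll(np.mean(draws, axis=0))
```

For the mixture models above, `log_lik` would be the mixture log-likelihood and `draws` the retained MCMC samples of all component parameters; the model with the smallest DIC is preferred.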

Variance Parameters in Hierarchical Models
In this section, we discuss a well-known implementation of a hierarchical model that is proposed, for example, in [17]. The basic two-level hierarchical model is as follows: y_ij ∼ N(α_j, σ_y²), for i = 1, …, n_j and j = 1, …, J, with α_j ∼ N(µ, σ_α²). This model has three parameters, namely µ, σ_y and σ_α. However, our interest in this paper is in σ_α only, noting that "regular" objective priors can be used on the remaining parameters, such as π(µ, σ_y) ∝ 1, for example. Although improper, this prior yields a proper posterior for these parameters. The actual concern is the variance (scale) parameter σ_α: if we were to put an improper prior on it, then the corresponding posterior would, most likely, be improper as well.
To compare with the proposed prior, we assign an inverse-gamma prior to the variance, with parameters set so as to define a very diffuse probability distribution. This is recommended, for example, in [18], where the prior is σ_α² ∼ IG(ε, ε), with ε > 0 sufficiently small. We do not discuss in detail the appropriateness of the above choice, or of other alternatives; the reader can refer to [17], for example, for a thorough discussion.
The method used to obtain the posterior samples is a Metropolis-within-Gibbs in both cases, with 40,000 iterations, a burn-in of 20,000 and a thinning of 10.
The data consist of J = 8 educational testing experiments, where the parameters α₁, …, α₈ represent the relative effects of Scholastic Aptitude Test coaching programs in different schools. In this example, the parameter σ_α represents the between-schools variability (standard deviation) of the effects. Table 8 shows the data. We have compared the marginal posteriors π(σ_α²|y) obtained using the inverse-gamma prior with ε = 1 and the proposed prior in (7) with a = 1. The histograms of the marginal posteriors are in Figure 6, where we note similar results. The summary statistics of the posteriors are reported in Table 9, where we note a less informative distribution when the proposed prior is employed. This is expected, as the inverse-gamma distribution is considered a relatively informative one [17].

Discussion
In this paper, we have derived a class of objective prior distributions that have the appealing properties of being proper and heavy-tailed. These have been obtained by exploiting a straightforward approach, proposed here, to the construction of score functions. In detail, using a convex function α(·), we can obtain from (5) a score function depending on the first two derivatives of the density. The Hyvärinen score arises with α(u) = u²; we have instead taken α(u) = u⁻² and used the resulting score to construct objective prior distributions, following the methodology introduced in [7].
The class of priors is heavy-tailed, behaving as 1/x² for large |x|; this result is immediately obvious, as the prior on (0, ∞) is a Lomax distribution with shape parameter 1. In this respect, it behaves similarly to standard objective priors, but comes without the problem of being improper. The benefit of using a proper prior is that the posterior is automatically proper and so does not need to be checked.
We have shown that, when compared to Jeffreys prior on simulated data, the frequentist performance of the prior distributions derived from score functions is nearly equivalent. In addition, we have shown, on both simulated and real data, that the proposed prior is suitable for a key scenario where improper priors (e.g., Jeffreys and reference) are not suitable (or are yet to be found). We have also illustrated the prior on a common problem for hierarchical models, i.e., assigning an objective prior to the variance parameter.
In other words, parameters are assumed to be independent a priori, so the joint prior distribution is represented by the product of the marginal priors on each parameter. We can then set π_j(θ_j) to be either (7) or (9), for j = 1, …, k, depending on Θ_j.

Appendix A.1. The Bregman Divergence

For a convex function φ, the Bregman divergence between densities p and q is given by B_φ(p, q) = φ(p) − φ(q) − (p − q) φ′(q). (A1) The divergence between p and q is then defined as D(p, q) = ∫ B_φ(p, q) dx. (A2) With φ(p) = p log p, we obtain ∫ B_φ(p, q) dx = ∫ p log(p/q) dx, which is the well-known Kullback-Leibler divergence. The L₂ divergence arises when φ(p) = p², with B_φ(p, q) = (p − q)².
Please note that we can write ∫ B_φ(p, q) dx = ∫ φ(p) dx + ∫ p {−φ′(q)} dx + ∫ {q φ′(q) − φ(q)} dx, and so we see that S(q) = −φ′(q) + ∫ {q φ′(q) − φ(q)} dx is a score function, the Bregman score, and it is local if φ satisfies q φ′(q) − φ(q) = c q for some constant c. In this case, the only solution (up to affine terms) is φ(q) = q log q, yielding the log-score. As mentioned above, the log-score is the sole local and proper score function that depends on q only. Therefore, the Bregman divergence and the Bregman score confirm this, together with the results in [2]. We will see this is also the case with order m = 2. The extension of the Bregman divergence and the Bregman score that we discuss in this paper follows from a two-dimensional Bregman divergence with arguments (p, p′), where p′(x) = dp/dx, and for some two-dimensional convex function φ(u, v). It is defined by B_φ((p, p′), (q, q′)) = φ(p, p′) − φ(q, q′) − (p − q) ∂φ/∂q − (p′ − q′) ∂φ/∂q′. (A3) For example, if φ(u, v) = v²/u, which is easily shown to be convex, we obtain D(p, q) = ∫ p (p′/p − q′/q)² dx, known as the Fisher information divergence; see, for example, [21].
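Both one-dimensional examples can be checked numerically; the following sketch (our illustration) verifies the L₂ identity pointwise and the Kullback-Leibler identity by numerical integration, using arbitrarily chosen normal densities:

```python
import numpy as np

def bregman(phi, dphi, p, q):
    # Pointwise Bregman divergence B_phi(p, q) = phi(p) - phi(q) - (p - q) phi'(q).
    return phi(p) - phi(q) - (p - q) * dphi(q)

# phi(p) = p^2 gives B_phi(p, q) = (p - q)^2, the L2 divergence.
p0, q0 = 0.7, 0.2
assert np.isclose(bregman(lambda t: t**2, lambda t: 2*t, p0, q0), (p0 - q0) ** 2)

# phi(p) = p log p: integrating B_phi over densities gives the KL divergence,
# since the extra terms (q - p) integrate to zero.
x = np.linspace(-20.0, 20.0, 200001)
dx = x[1] - x[0]
npdf = lambda x, m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
pd, qd = npdf(x, 0.0, 1.0), npdf(x, 1.0, 2.0)
b = bregman(lambda t: t * np.log(t), lambda t: np.log(t) + 1.0, pd, qd)
kl = np.sum(pd * np.log(pd / qd)) * dx
assert abs(np.sum(b) * dx - kl) < 1e-6
```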

Appendix A.2. Divergences and Scores
The two best-known measures of information are the differential Shannon information and the Fisher information. See, for example, [22]. The associated divergences are the Kullback-Leibler and Fisher divergences, respectively, with corresponding score functions the logarithmic and Hyvärinen scores. The connection between divergence, information and score functions can be understood from the following: D(p, q) = I(p) + ∫ p S(q) dx. (A4) Here D denotes divergence, I information and S score. From this it is clear that ∫ p S(q) dx is minimized at q = p. For (A4) based on the Kullback-Leibler divergence, we have D(p, q) = ∫ p log(p/q) dx, I(p) = ∫ p log p dx and S(q) = − log q.
For (A4) based on the Fisher information, we have D(p, q) = ∫ p (p′/p − q′/q)² dx, I(p) = ∫ (p′)²/p dx and S(q) = 2q″/q − (q′/q)². (A5) From (A1) we obtain D(p, q) = ∫ φ(p) dx + ∫ p {−φ′(q)} dx + ∫ {q φ′(q) − φ(q)} dx. More generally, using the two-dimensional Bregman divergence in (A3), and putting it in (A2), we obtain D(p, q) = ∫ φ(p, p′) dx + ∫ p S(q, q′, q″) dx, where the score is S(q, q′, q″) = −∂φ/∂q + d/dx (∂φ/∂q′) + ∫ {q ∂φ/∂q + q′ ∂φ/∂q′ − φ(q, q′)} dx. The score has been obtained by implementing integration by parts, and assuming [p · ∂φ/∂q′] vanishes at the boundary points. To ensure the score is local, we use the following condition on φ: φ(q, q′) = q ∂φ/∂q + q′ ∂φ/∂q′, (A7) under which the integral term above vanishes. Hence, the proposed class of score functions, which effectively is the class introduced in [10], is given by S(q, q′, q″) = −∂φ/∂q + d/dx (∂φ/∂q′). (A8) The derivation just presented is arguably much simpler than the one used in [10], as we have avoided any variational analysis and the use of differential operators.
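The decomposition (A4)-(A5) for the Fisher case can be verified numerically; a sketch (our illustration, with p = N(0, 1) and q = N(0, 1.5²), both arbitrary choices):

```python
import numpy as np

x = np.linspace(-30.0, 30.0, 300001)
dx = x[1] - x[0]

s = 1.5                                          # q = N(0, s^2); p = N(0, 1)
p = np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)
dp = -x * p                                      # p'
r1 = -x / s ** 2                                 # q'/q for q = N(0, s^2)
r2 = x ** 2 / s ** 4 - 1.0 / s ** 2              # q''/q

D = np.sum(p * (dp / p - r1) ** 2) * dx          # Fisher divergence D(p, q)
I = np.sum(dp ** 2 / p) * dx                     # Fisher information I(p)
S = 2.0 * r2 - r1 ** 2                           # Hyvarinen score S(q, q', q'')
assert abs(D - (I + np.sum(p * S) * dx)) < 1e-6  # (A4) holds
assert abs(I - 1.0) < 1e-5                       # Fisher information of N(0, 1)
```

In closed form, D = (1 − 1/s²)², I = 1 and ∫ p S(q) dx = 1/s⁴ − 2/s², and the identity (A4) is immediate.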
For a specific example, consider the class of convex functions, satisfying (A7), given by φ(u, v) = u α(v/u), (A9) for some convex function α : R → R. The convexity of α implies the convexity of the function φ in (A9), as the following Lemma A1 shows.
Lemma A1. Let α be twice differentiable with α″ ≥ 0. Then φ(u, v) = u α(v/u) is convex for u > 0.
Proof. The Hessian matrix corresponding to φ is seen to be H(u, v) = α″(v/u)/u × [(v/u)², −v/u; −v/u, 1], which is positive semi-definite when α″ ≥ 0 and u > 0, implying that φ is convex. Given that α is assumed to be convex, the condition α″ ≥ 0 holds.
Given the above form for φ, we are now able to find the divergence, the information and the score. However, before proceeding, we first note that −φ(q, q′) + q ∂φ(q, q′)/∂q + q′ ∂φ(q, q′)/∂q′ = 0.
That is, φ satisfies the condition in (A7). In fact, since φ is 1-homogeneous, by Euler's theorem we have −φ(u, v) + u ∂φ(u, v)/∂u + v ∂φ(u, v)/∂v = 0 for all (u, v).