
Asymptotic Properties for Methods Combining the Minimum Hellinger Distance Estimate and the Bayesian Nonparametric Density Estimate

1 Department of Mathematics and Computer Science, University of Missouri-St. Louis, St. Louis, MO 63121, USA
2 Department of Biological Statistics and Computational Biology, Cornell University, Ithaca, NY 14853, USA
* Author to whom correspondence should be addressed.
† Current address: ESH 320, One University Blvd., St. Louis, MO 63121, USA.
Entropy 2018, 20(12), 955; https://doi.org/10.3390/e20120955
Received: 18 October 2018 / Revised: 29 November 2018 / Accepted: 7 December 2018 / Published: 11 December 2018

Abstract

In frequentist inference, minimizing the Hellinger distance between a kernel density estimate and a parametric family produces estimators that are both robust to outliers and statistically efficient when the parametric family contains the data-generating distribution. This paper seeks to extend these results to the use of nonparametric Bayesian density estimators within disparity methods. We propose two estimators: one replaces the kernel density estimator with the expected posterior density using a random histogram prior; the other transforms the posterior over densities into a posterior over parameters through minimizing the Hellinger distance for each density. We show that it is possible to adapt the mathematical machinery of efficient influence functions from semiparametric models to demonstrate that both our estimators are efficient in the sense of achieving the Cramér-Rao lower bound. We further demonstrate a Bernstein-von-Mises result for our second estimator, indicating that its posterior is asymptotically Gaussian. In addition, the robustness properties of classical minimum Hellinger distance estimators continue to hold.
Keywords: robustness; efficiency; Bayesian nonparametric; Bayesian semi-parametric; asymptotic property; minimum disparity methods; Hellinger distance; Bernstein-von Mises theorem

1. Introduction

This paper develops Bayesian analogs of minimum Hellinger distance methods. In particular, we aim to produce methods that enable a Bayesian analysis to be robust to unusual values in the data while retaining asymptotic precision when a proposed parametric model is correct.
All statistical models include assumptions which may or may not be true of the mechanisms producing a given data set. Robustness is a desired property in which a statistical procedure is relatively insensitive to deviations from these assumptions. For frequentist inference, concerns are largely associated with distributional robustness: the shape of the true underlying distribution deviates slightly from the assumed model. Usually, this deviation represents the situation where there are some outliers in the observed data set; see [1] for example. For Bayesian procedures, the deviations may come from the model, prior distribution, or utility function, or some combination thereof. Much of the literature on Bayesian robustness has been concerned with the prior distribution or utility function. By contrast, the focus of this paper is robustness with respect to outliers in a Bayesian context, a relatively understudied form of robustness for Bayesian models. For example, we know that Bayesian models with heavy-tailed data distributions are robust with respect to outliers for the case of one single location parameter estimated by many observations. However, as a consequence of the Cramér–Rao lower bound and the efficiency of the MLE, modifying likelihoods to account for outliers will usually result in a loss of precision in parameter estimates when no outliers are present. The methods we propose, and the study of their robustness properties, provide an alternative means of making any i.i.d. data model robust to outliers without losing efficiency when no outliers are present. We speculate that they can be extended beyond i.i.d. data as in [2], but we do not pursue this here.
Suppose we are given the task of estimating θ_0 ∈ Θ from independent and identically distributed univariate random variables X_1, …, X_n, where we assume each X_i has density f_{θ_0} ∈ F = { f_θ : θ ∈ Θ }. Within the frequentist literature, minimum Hellinger distance estimates proceed by first estimating a kernel density ĝ_n(x) and then choosing θ to minimize the Hellinger distance h(f_θ, ĝ_n) = [ ∫ { f_θ^{1/2}(x) − ĝ_n^{1/2}(x) }^2 dx ]^{1/2}. The minimum Hellinger distance estimator was shown in [3] to have the remarkable properties of being both robust to outliers and statistically efficient, in the sense of asymptotically attaining the information bound, when the data are generated from f_{θ_0}. These methods have been generalized to a class of minimum disparity estimators, based on alternative measures of the difference between a kernel density estimate and a parametric model, which have been studied since then, e.g., [4,5,6,7,8]. While some adaptive M-estimators can be shown to retain both robustness and efficiency, e.g., [9], minimum disparity methods are the only generic methods we are aware of that retain both properties and can also be readily employed within a Bayesian context. In this paper, we only consider Hellinger distance in order to simplify the mathematical exposition; the extension to more general disparity methods can be made following similar developments to those in [5,7].
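To make the two-step recipe concrete, here is a small sketch (ours, not from the paper; the normal location family, contamination level, grid, and sample size are all illustrative assumptions) that computes the minimum Hellinger distance estimate from a kernel density estimate and contrasts it with the MLE under contamination:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)

# Data from N(theta0, 1), contaminated with 5% gross outliers near 20.
theta0 = 2.0
x = np.concatenate([rng.normal(theta0, 1.0, 95), rng.normal(20.0, 1.0, 5)])

# Step 1: kernel density estimate g_hat_n.
kde = stats.gaussian_kde(x)

# Step 2: choose theta minimizing h(f_theta, g_hat_n)^2, approximating the
# integral by a Riemann sum on a fixed grid.
grid = np.linspace(-10.0, 30.0, 4000)
dx = grid[1] - grid[0]
g_half = np.sqrt(kde(grid))

def hellinger_sq(theta):
    f_half = np.sqrt(stats.norm.pdf(grid, loc=theta, scale=1.0))
    return np.sum((f_half - g_half) ** 2) * dx

theta_mhd = optimize.minimize_scalar(hellinger_sq, bounds=(-10, 30),
                                     method="bounded").x
theta_mle = x.mean()  # the MLE, dragged toward the outliers
```

Under this contamination, the MLE drifts toward the outliers while the minimum Hellinger distance estimate stays near θ_0, illustrating the robustness discussed above.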
Recent methodology proposed in [2] suggested the use of disparity-based methods within Bayesian inference via the construction of a “disparity likelihood” by replacing the likelihood function when calculating the Bayesian posterior distribution; they demonstrated that the resulting expected a posteriori estimators retain the frequentist properties studied above. These methods first obtain kernel density estimates from data and then calculate the disparity between the estimated density function and the corresponding density functions in the parametric family.
In this paper, we propose using Bayesian nonparametric methods in place of classical kernel methods when applying the minimum Hellinger distance method. The first method simply replaces the kernel density estimate used in the classical minimum Hellinger distance estimate with the Bayesian nonparametric expected a posteriori density; we denote it by MHB (minimum Hellinger distance method using a Bayesian nonparametric density estimate). The second method combines the minimum Hellinger distance estimate with the Bayesian nonparametric posterior to give a posterior distribution of the parameter of interest. This latter method is our main focus. We show that it is more robust than usual Bayesian methods and demonstrate that it retains asymptotic efficiency, hence the precision of the estimate is maintained. So far as we are aware, this is the first Bayesian method that can be applied generically and retain both robustness and (asymptotic) efficiency. We denote it by BMH (Bayesian inference using a minimum Hellinger distance).
To study the properties of the proposed new methods, we treat both MHB and BMH as special cases of semi-parametric models. The general form of a semi-parametric model has a natural parametrization (θ, η) ↦ P_{θ,η}, where θ ∈ Θ is a Euclidean parameter and η ∈ H belongs to an infinite-dimensional set. For such models, θ is the parameter of primary interest, while η is a nuisance parameter. Asymptotic properties of some Bayesian semi-parametric models have been discussed in [10]. Our disparity-based methods involve parameters in Euclidean space and Hilbert space, with the former being of most interest. However, unlike many semi-parametric models in which P_{θ,η} ∈ P is specified jointly by θ and η, in our case the finite-dimensional parameter and the nonparametric density functions are parallel specifications of the data distribution. Therefore, standard methods for studying the asymptotic properties of semi-parametric models do not apply to disparity-based methods. Nevertheless, considering the problem of estimating ψ(P) for some function ψ : P → R^d, where P is the space of the probability models P, semi-parametric models and disparity-based methods can be unified into one framework.
The MHB and BMH methods are introduced in detail in Section 2, where we also discuss some related concepts and results, such as tangent sets, information, consistency, and the specific nonparametric prior that we employ. In Section 3, both MHB and BMH are shown to be efficient, in the sense that asymptotically the variance of the estimate achieves the lower bound of the Cramér–Rao theorem. For MHB, we show that asymptotic normality of the estimate holds, where the asymptotic variance is the inverse of the Fisher information. For BMH, we show that the Bernstein von Mises (BvM) theorem holds. The robustness property and further discussion of these two methods are given in Section 4 and Section 5, respectively. A broader discussion is given in Section 6.

2. Minimum Hellinger Distance Estimates

Assume that random variables X_1, …, X_n are independent and identically distributed (iid) with density belonging to a specified parametric family F = { f_θ : θ ∈ Θ }, where all the f_θ in the family have the same support, denoted by supp(f). For simplicity, we use X^n to denote the random variables X_1, …, X_n. More flexibly, we model X^n ∼ g^n, where g is a probability density function with respect to the Lebesgue measure on supp(f). Let G denote the collection of all such probability density functions. If the parametric family contains the data-generating distribution, then g = f_θ for some θ. Formally, we can denote the probability model of the observations in the form of a semi-parametric model (θ, g) ↦ P_{θ,g}. We aim at estimating θ and consider g as a nuisance parameter, which is typical of semi-parametric models.
Let π denote a prior on G; for any measurable subset B ⊂ G, the posterior probability of g ∈ B given X^n is
π( B ∣ X^n ) = ∫_B ∏_{i=1}^n g(X_i) π(dg) / ∫_G ∏_{i=1}^n g(X_i) π(dg).
Let g_n* = ∫ g π( dg ∣ X^n ) denote the Bayesian nonparametric expected a posteriori estimate. Our first proposed method can be described formally as follows:
MHB: Minimum Hellinger distance estimator with Bayesian nonparametric density estimation:
θ̂_1 = argmin_{θ ∈ Θ} h( f_θ, g_n* ).
This estimator replaces the kernel density estimate in the classical minimum Hellinger distance method introduced in [3] by the posterior expectation of the density function.
For this method, we will view θ̂_1 as the value at g_n* of a functional T : G → Θ, which is defined via
‖ f_{T(g)}^{1/2} − g^{1/2} ‖ = min_{t ∈ Θ} ‖ f_t^{1/2} − g^{1/2} ‖
where ‖·‖ denotes the L_2 metric. We can also write θ̂_1 as T(g_n*).
In a more general form, what we estimate is the value ψ(P) of some functional ψ : P → R^d, where P stands for the common distribution from which the data are generated, and P is the set of all possible values of P, which also denotes the corresponding probability model. In the setting of minimum Hellinger distance estimation, the model P is set as F × G, P can be specified as P_{θ,g}, and ψ(P) = ψ(P_{θ,g}) = θ. For the methods proposed in this paper, we will focus on the functional T : G → Θ, for a given F, as defined above. Note that the constraint associated with the family F is implicitly applied by T.
Using functional T, we can also propose a Bayesian method, which assigns nonparametric prior on the density space and gives inference on the unknown parameter θ of a parametric family as follows:
BMH: Bayesian inference with minimum Hellinger distance estimation:
π( θ ∣ X^n ) = π( T(g) ∣ X^n ).
A nonparametric prior π on the space G and the observation X^n lead to the posterior distribution π( g ∣ X^n ), which can then be converted to the posterior distribution of the parameter θ ∈ Θ through the functional T : G → Θ.
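As a concrete illustration of how the two proposals differ mechanically, the following sketch (our own, not from the paper; the Beta(θ, 5) family, the fixed bin number k = 20, the unit Dirichlet weights, and the sample sizes are all illustrative assumptions) pushes Dirichlet posterior draws of a histogram density through the functional T to get a BMH posterior over θ, and computes the MHB plug-in value from the posterior-mean density:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)

# Simulated data on [0, 1] from f_theta0 = Beta(theta0, 5).
theta0 = 2.0
x = rng.beta(theta0, 5.0, size=300)

# Random-histogram model with fixed k bins and Dirichlet(1, ..., 1) weights.
k = 20
edges = np.linspace(0.0, 1.0, k + 1)
counts, _ = np.histogram(x, bins=edges)
alpha = np.ones(k)

mid = 0.5 * (edges[:-1] + edges[1:])   # bin midpoints for the integral
dx = 1.0 / k

def T(heights):
    """Map a histogram density (bin heights) to theta via minimum Hellinger."""
    g_half = np.sqrt(heights)
    def h2(theta):
        f_half = np.sqrt(stats.beta.pdf(mid, theta, 5.0))
        return np.sum((f_half - g_half) ** 2) * dx
    return optimize.minimize_scalar(h2, bounds=(0.1, 20.0),
                                    method="bounded").x

# BMH: posterior over densities is Dirichlet(alpha + counts) on the bin
# weights; push each posterior draw through T for a posterior over theta.
w = rng.dirichlet(alpha + counts, size=200)
theta_post = np.array([T(wi * k) for wi in w])   # density height = k * weight

# MHB: plug the posterior-mean density into T once.
w_mean = (alpha + counts) / (alpha + counts).sum()
theta_mhb = T(w_mean * k)
```

The conjugate Dirichlet update used here is what makes the random histogram convenient; with a Dirichlet process mixture one would instead draw posterior densities by MCMC and apply T to each draw.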
In the following subsections, we discuss properties associated with the functional T as well as the consistency of MHB and BMH, and we provide a detailed example of the random histogram prior that we will employ, together with its properties that will be used in the discussion of efficiency in Section 3.

2.1. Tangent Space and Information

In this subsection, we obtain the efficient influence function of the functional T on the linear span of the tangent set at g_0 and show that the local asymptotic normality (LAN) expansion related to the norm of the efficient influence function attains the Cramér–Rao bound. These results play important roles in showing that BvM holds for the BMH method in the next section.
Estimating the parameter by T(g) under the assumption g ∈ G uses less information than estimating this parameter for g ∈ G* ⊂ G. Hence, the lower bound on the variance of T(g) for g ∈ G should be at least the supremum of the lower bounds over all parametric sub-models G* = { G_λ : λ ∈ Λ } ⊂ G.
To use mathematical tools such as functional analysis to study the properties of the proposed methods, we introduce some notation and concepts below. Without loss of generality, we consider one-dimensional sub-models G* that pass through the “true” distribution, denoted by G_0, with density function g_0. We say a sub-model indexed by t, { g_t : 0 < t < ϵ } ⊂ G, is differentiable in quadratic mean at t = 0 if, for some measurable function q : supp(g_0) → R,
∫ [ ( dG_t^{1/2} − dG_0^{1/2} ) / t − (1/2) q dG_0^{1/2} ]^2 → 0
where G_t is the cumulative distribution function associated with g_t. The functions q(x) are known as the score functions associated with each sub-model. The collection of these score functions, which is called the tangent set of the model G at g_0 and denoted by Ġ_{g_0}, is induced by the collection of all sub-models that are differentiable at g_0.
We say that T is differentiable at g_0 relative to a given tangent set Ġ_{g_0} if there exists a continuous linear map Ṫ_{g_0} : L_2(G_0) → R such that, for every q ∈ Ġ_{g_0} and a sub-model t ↦ g_t with score function q,
( T(g_t) − T(g_0) ) / t → Ṫ_{g_0} q
where L_2(G_0) = { q : supp(g_0) → R, ∫ q^2(x) g_0(x) dx < ∞ }. By the Riesz representation theorem for Hilbert spaces, the map Ṫ_{g_0} can always be written in the form of an inner product with a fixed vector-valued, measurable function T̃_{g_0} : supp(g_0) → R,
Ṫ_{g_0} q = ⟨ T̃_{g_0}, q ⟩_{G_0} = ∫ T̃_{g_0} q dG_0.
Let T ˜ g 0 denote the unique function in l i n ¯ G ˙ g 0 , the closure of the linear span of the tangent set. The function T ˜ g 0 is the efficient influence function and can be found as the projection of any other “influence function” onto the closed linear span of the tangent set.
For a sub-model t ↦ g_t whose score function is q, the Fisher information about t at 0 is G_0 q^2 = ∫ q^2 dG_0. In this paper, we use the notation F g to denote ∫ g dF for a general function g and distribution F. Therefore, the “optimal asymptotic variance” for estimating the functional t ↦ T(g_t), evaluated at t = 0, is greater than or equal to the Cramér–Rao bound
( dT(g_t)/dt |_{t=0} )^2 / G_0 q^2 = ⟨ T̃_{g_0}, q ⟩_{G_0}^2 / ⟨ q, q ⟩_{G_0}.
The supremum of the right-hand side (RHS) of the above expression over all elements of the tangent set is a lower bound for estimating T(g) given the model G, if the true model is g_0. The supremum can be expressed in terms of the norm of the efficient influence function T̃_{g_0} by Lemma 25.19 in [11]. The lemma and its proof are quite neat, and we reproduce them here for the completeness of the argument.
Lemma 1.
Suppose that the functional T : G R is differentiable at g 0 relative to the tangent set G ˙ g 0 . Then
sup_{q ∈ lin Ġ_{g_0}} ⟨ T̃_{g_0}, q ⟩_{G_0}^2 / ⟨ q, q ⟩_{G_0} = G_0 T̃_{g_0}^2.
Proof. 
This is a consequence of the Cauchy–Schwarz inequality ( G_0 T̃_{g_0} q )^2 ≤ G_0 T̃_{g_0}^2 · G_0 q^2 and the fact that, by definition, the efficient influence function T̃_{g_0} is contained in the closure of lin Ġ_{g_0}. □
Now we show that functional T is differentiable under some mild conditions and construct its efficient influence function in the following theorem.
Theorem 1.
For the functional T defined in Equation (2), and for t ∈ Θ ⊂ R, let s_t(x) denote f_θ^{1/2}(x) for θ = t. We assume that there exist ṡ_t(x) and s̈_t(x), both in L_2, such that for α in a neighborhood of zero,
s t + α ( x ) = s t ( x ) + α s ˙ t ( x ) + α u α ( x )
s ˙ t + α ( x ) = s ˙ t ( x ) + α s ¨ t ( x ) + α v α ( x ) ,
where u_α and v_α converge to zero as α → 0. Assuming T(g_0) ∈ int(Θ), the efficient influence function of T is
T̃_{g_0}(x) = − ( [ ∫ s̈_{T(g_0)}(x) g_0^{1/2}(x) dx ]^{−1} + a_t ) ṡ_{T(g_0)}(x) / ( 2 g_0^{1/2}(x) )
where a_t converges to 0 as t → 0. In particular, for g_0 = f_θ,
T̃_{f_θ}(x) = − ( [ ∫ s̈_θ(x) s_θ(x) dx ]^{−1} + a_t ) ṡ_θ(x) / ( 2 s_θ(x) ).
Proof. 
Let the t-indexed sub-model be
g t : = ( 1 + t q ( x ) ) g 0 ( x )
where q(x) satisfies ∫ q(x) g_0(x) dx = 0 and q ∈ L_2(g_0). By direct calculation, we see that q is the score function associated with such a sub-model at t = 0 in the sense of Equation (4), and thus the collection of such q is the maximal tangent set.
By the definition of T, T(g_0) maximizes ∫ s_t(x) g_0^{1/2}(x) dx over t. From Equation (6), we have that
lim_{α→0} α^{−1} ∫ [ s_{t+α}(x) − s_t(x) ] g_0^{1/2}(x) dx = ∫ ṡ_t(x) g_0^{1/2}(x) dx.
Since T(g_0) ∈ int(Θ), we have that
∫ ṡ_{T(g_0)}(x) g_0^{1/2}(x) dx = 0.
Similarly, ∫ ṡ_{T(g_t)}(x) g_t^{1/2}(x) dx = 0. Using Equation (7) to substitute for ṡ_{T(g_t)}, we have that
0 = ∫ [ ṡ_{T(g_0)}(x) + s̈_{T(g_0)}(x) ( T(g_t) − T(g_0) ) + v_t(x) ( T(g_t) − T(g_0) ) ] g_t^{1/2}(x) dx
where v_t(x) converges in L_2 to zero as t → 0, since T(g_t) → T(g_0). Thus,
lim_{t→0} (1/t) [ T(g_t) − T(g_0) ]
= − lim_{t→0} (1/t) [ ∫ ( s̈_{T(g_0)}(x) + v_t(x) ) g_t^{1/2}(x) dx ]^{−1} ∫ ṡ_{T(g_0)}(x) g_t^{1/2}(x) dx
= − lim_{t→0} (1/t) ( [ ∫ s̈_{T(g_0)}(x) g_0^{1/2}(x) dx ]^{−1} + a_t ) ∫ ṡ_{T(g_0)}(x) ( g_t^{1/2}(x) − g_0^{1/2}(x) ) dx
= − ( [ ∫ s̈_{T(g_0)}(x) g_0^{1/2}(x) dx ]^{−1} + a_t ) ∫ [ ṡ_{T(g_0)}(x) / ( 2 g_0^{1/2}(x) ) ] q(x) g_0(x) dx.
Since the definition of T̃ requires ∫ T̃_{g_0}(x) g_0(x) dx = 0, we have that
T̃_{g_0}(x) = − ( [ ∫ s̈_{T(g_0)}(x) g_0^{1/2}(x) dx ]^{−1} + a_t ) [ ṡ_{T(g_0)}(x) / ( 2 g_0^{1/2}(x) ) − ∫ ( ṡ_{T(g_0)}(x) / 2 ) g_0^{1/2}(x) dx ] = − ( [ ∫ s̈_{T(g_0)}(x) g_0^{1/2}(x) dx ]^{−1} + a_t ) ṡ_{T(g_0)}(x) / ( 2 g_0^{1/2}(x) ),
where the centering term vanishes because ∫ ṡ_{T(g_0)}(x) g_0^{1/2}(x) dx = 0.
By the same argument we can show that, when g 0 = f θ , Equation (9) holds. □
Some relatively accessible conditions under which Equations (6) and (7) hold are given by Lemmas 1 and 2 in [3]. We do not repeat them here.
Now we can expand T at g_0 as
T(g) − T(g_0) = ⟨ ( g − g_0 ) / g_0, T̃_{g_0} ⟩_{G_0} + r̃( g, g_0 )
where T̃ is given in Theorem 1 and the remainder r̃( g, g_0 ) → 0 as g → g_0.

2.2. Consistency of MHB and BMH

Since T ( g ) may have more than one value, the notation T ( g ) is used to denote any arbitrary one of the possible values. In [3], the existence, continuity in Hellinger distance, and uniqueness of functional T are ensured under the following condition:
A1 
(i) Θ is compact, (ii) θ_1 ≠ θ_2 implies f_{θ_1} ≠ f_{θ_2} on a set of positive Lebesgue measure, and (iii) for almost every x, f_θ(x) is continuous in θ.
When a Bayesian nonparametric density estimator is used, we assume the posterior consistency:
A2 
For any given ϵ > 0, π{ g : h( g, f_{θ_0} ) > ϵ ∣ X^n } → 0 in probability.
Under Conditions A1 and A2, consistency holds for MHB and BMH.
Theorem 2.
Suppose that Conditions A1 and A2 hold, then
1.
‖ g_n*^{1/2} − f_{θ_0}^{1/2} ‖^2 → 0 in probability, T(g_n*) → T(f_{θ_0}) in probability, and hence θ̂_1 → θ_0 in probability;
2.
For any given ϵ > 0, π( |θ − θ_0| > ϵ ∣ X^n ) → 0 in probability.
Proof. 
Part 1: To show that ‖ g_n*^{1/2} − f_{θ_0}^{1/2} ‖^2 → 0 in probability, which is equivalent to showing that ∫ ( [ ∫ g π( dg ∣ X^n ) ]^{1/2} − f_{θ_0}^{1/2} )^2 dx → 0 in probability, it is sufficient to show that ∫ | ∫ g π( dg ∣ X^n ) − f_{θ_0} | dx → 0 in probability, since h^2( f, g ) ≤ ‖ f − g ‖_1. We have that
∫ | ∫ g π( dg ∣ X^n ) − f_{θ_0} | dx = ∫ | ∫ ( g − f_{θ_0} ) π( dg ∣ X^n ) | dx ≤ ∫ ∫ | g − f_{θ_0} | π( dg ∣ X^n ) dx = ∫ [ ∫ | g − f_{θ_0} | dx ] π( dg ∣ X^n ) ≤ ∫ 2 h( g, f_{θ_0} ) π( dg ∣ X^n ).
Note that the change of the order of integration is due to Fubini’s theorem, and the last inequality is due to ‖ f − g ‖_1 ≤ 2 h( f, g ). Split the integral on the right-hand side of the above expression into two parts:
∫_A 2 h( g, f_{θ_0} ) π( dg ∣ X^n ) + ∫_{A^c} 2 h( g, f_{θ_0} ) π( dg ∣ X^n )
where A = { g : h( g, f_{θ_0} ) ≤ ϵ } for any given ϵ > 0. The first term is bounded by 2ϵ by construction. By Condition A2, the posterior measure of A^c converges to 0 in probability as n → ∞. Since the Hellinger distance is bounded, the second term also converges to 0 in probability. This completes the proof that ‖ g_n*^{1/2} − f_{θ_0}^{1/2} ‖^2 → 0 in probability.
To show T ( g n * ) T ( f θ 0 ) and θ ^ 1 θ 0 in probability, we need that the functional T is continuous and unique at f θ 0 , which is proved by Theorem 1 in [3] under Condition A1.
Part 2: By Condition A1 and Theorem 1 in [3], the functional T is continuous and unique at f_{θ_0}. Hence, for any given ϵ > 0, there exists δ > 0 such that |T(g) − T(f_{θ_0})| < ϵ when h( g, f_{θ_0} ) < δ. By Condition A2, we have that π( h( g, f_{θ_0} ) < δ ∣ X^n ) → 1 in probability, which implies that π( |θ − θ_0| < ϵ ∣ X^n ) → 1 in probability. □
It should be noted that, if we change the ϵ in Condition A2 to ϵ_n, a sequence converging to 0, then we can apply results on the concentration rate of Bayesian nonparametric density estimation here. However, such an approach cannot lead to a general “efficiency” claim, whether in the form of a concentration rate or of asymptotic normality. There are two reasons for this. First, the concentration rate for a Bayesian nonparametric posterior is about n^{−2/5} in rather general situations and (log n)^a × n^{−1/2}, where a > 0, in some special cases (see [12,13,14]). This concentration rate is not sufficient in many situations to directly imply that the concentration of the corresponding parametric estimates achieves the lower bound on the variance given in the Cramér–Rao theorem. Second, the Hellinger distances between pairs of densities, as functions of the parameters, vary among parametric families. Therefore, a procedure for obtaining the rate of concentration in the parameters from the rate of convergence in the densities cannot be applied uniformly across distribution families.
It should also be noted that, although Θ is required to be compact in Condition A1, Theorem 2 is useful for a Θ that is not compact, as long as the parametric family { f_θ : θ ∈ Θ } can be re-parameterized such that the space of new parameters can be embedded within a compact set. An example of re-parameterizing a general location-scale family with parameters μ ∈ R and σ ∈ R^+ to a family with parameters t_1 = tan^{−1}(μ) and t_2 = tan^{−1}(σ), where the new parameter space (−π/2, π/2) × (0, π/2) has compact closure [−π/2, π/2] × [0, π/2], is discussed in [3], and the conclusions of Theorem 1 in [3] are still valid for a location-scale family. Therefore, Theorem 2 remains valid for the same type of families, whose parameter space may not be compact, and for the same reasons; the compactness requirement stated in the theorem is mainly for mathematical simplicity.
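The re-parameterization above is easy to make concrete; a small sketch (ours) shows the arctan map carrying (μ, σ) into a rectangle with compact closure and back:

```python
import math

# Compactifying re-parameterization of a location-scale family:
# (mu, sigma) in R x R+ maps to (t1, t2) = (arctan(mu), arctan(sigma)),
# which lies in (-pi/2, pi/2) x (0, pi/2), a set with compact closure.
def to_compact(mu, sigma):
    return math.atan(mu), math.atan(sigma)

def from_compact(t1, t2):
    return math.tan(t1), math.tan(t2)

mu, sigma = -3.7, 2.5
t1, t2 = to_compact(mu, sigma)

# The image lies in the bounded rectangle, and the map is invertible.
assert -math.pi / 2 < t1 < math.pi / 2 and 0.0 < t2 < math.pi / 2
mu_back, sigma_back = from_compact(t1, t2)
```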

2.3. Prior on Density Functions

We introduce a random histogram as an example for priors used in Bayesian nonparametric density estimation. It can be seen as a simplified version of a Dirichlet process mixture (DPM) prior, which is commonly used in practice. Both DPM and random histogram are mixture densities. While DPM uses a Dirichlet process to model the weights within an infinite mixture of kernels, the random histogram prior only has a finite number of components. Another difference is that, although we specify the form of the kernel function for DPM, the kernel function could be any density function in general, while the random histogram uses only the uniform density as its mixing kernel. Nevertheless, the limit on the finite number of the mixing components is not that important in practice, since the Dirichlet process will always be truncated in computation. In the next section, we will verify that the random histogram satisfies the conditions that are needed for our proposed methods to be efficient. On the other hand, although we believe that DPM should also lead to efficiency, the authors are unaware of the theoretical results or tools required to prove it. This is mostly due to the flexibility of DPM, which in turn significantly increases the mathematical complexity of the analysis.
For any k ∈ N, denote the set of all regular k-bin histograms on [0, 1] by H_k = { f ∈ L_2([0, 1]) : f(x) = Σ_{j=1}^k f_j 1_{I_j}(x), f_j ∈ R, j = 1, …, k }, where I_j = [ (j−1)/k, j/k ). Denote the unit simplex in R^k by S_k = { ω ∈ [0, 1]^k : Σ_{j=1}^k ω_j = 1 }. The subset of H_k, H_k^1 = { f ∈ L_2(R) : f(x) = f_{ω,k}(x) = k · Σ_{j=1}^k ω_j 1_{I_j}(x), ( ω_1, …, ω_k ) ∈ S_k }, denotes the collection of densities on [0, 1] in the form of a histogram.
The set H_k is a closed subset of L_2[0, 1]. For any function f ∈ L_2[0, 1], denote its projection in the L_2 sense onto H_k by f_{[k]}, where f_{[k]} = k Σ_{j=1}^k 1_{I_j} ∫_{I_j} f.
We assign priors on H_k^1 via k and ( ω_1, …, ω_k ) for each k. A degenerate case is to let k = K_n = o(n). Otherwise, let p_k be a distribution on the positive integers, where
k ∼ p_k, e^{−b_2 k log(k)} ≤ p_k(k) ≤ e^{−b_1 k log(k)}
for all k large enough and some 0 < b_1 < b_2 < ∞. For example, Condition (13) is satisfied by the Poisson distribution, which is commonly used in Bayesian nonparametric models.
Conditionally on k, we consider a Dirichlet prior on ω = ( ω_1, …, ω_k ):
ω ∼ D( α_{1,k}, …, α_{k,k} ), c_1 k^{−a} ≤ α_{j,k} ≤ c_2
for some fixed constants a, c_1, c_2 > 0 and any 1 ≤ j ≤ k. For posterior consistency, we need the following condition:
sup_{k ∈ K_n} Σ_{j=1}^k α_{j,k} = o(n)
where K_n = { 1, 2, …, ⌊ n/(log n)^2 ⌋ }.
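A minimal sketch of drawing from this prior, and from the conditional posterior of ω given k (our illustration; the Poisson mean and the choice α_{j,k} = 1 are assumptions consistent with the stated bounds):

```python
import numpy as np

rng = np.random.default_rng(2)

# One draw from the random-histogram prior on densities over [0, 1]:
# k from a Poisson (which satisfies Condition (13)), then the bin weights
# omega from a Dirichlet; alpha_{j,k} = 1 is an illustrative choice meeting
# c1 * k^{-a} <= alpha_{j,k} <= c2.
def sample_histogram_density(lam=5.0):
    k = max(1, int(rng.poisson(lam)))      # number of regular bins
    omega = rng.dirichlet(np.ones(k))      # weights on the simplex S_k
    heights = k * omega                    # density value k * omega_j on I_j
    return k, heights

k, heights = sample_histogram_density()

# Given data X^n in [0, 1], the conditional posterior of omega given k is
# Dirichlet(alpha_{1,k} + N_1, ..., alpha_{k,k} + N_k) with bin counts N_j,
# by Dirichlet-multinomial conjugacy.
x = rng.beta(2.0, 5.0, size=100)
counts, _ = np.histogram(x, bins=np.linspace(0.0, 1.0, k + 1))
omega_post = rng.dirichlet(np.ones(k) + counts)
post_heights = k * omega_post
```

Each draw integrates to one, since Σ_j heights_j · (1/k) = Σ_j ω_j = 1.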
The consistency result for this prior is given by Proposition 1 in the supplement to [15]. For n ≥ 2, k ≥ 1, M > 0, let
A_{n,k}(M) = { g ∈ H_k^1 : h( g, g_{0,[k]} ) < M ϵ_{n,k} }
where ϵ_{n,k}^2 = k log n / n, denote a neighborhood of g_{0,[k]}, and we have that
  • (a) there exist c, M > 0 such that
    P_0( ∃ k ≤ n/log n : π[ g ∉ A_{n,k}(M) ∣ X^n, k ] > e^{−c k log n} ) = o(1);
  • (b) suppose g_0 ∈ C^β with 0 < β ≤ 1; if k_n(β) = ( n/log n )^{1/(2β+1)} and ϵ_n(β) = k_n(β)^{−β}, then, for k_1 > 0 and a sufficiently large M,
    π[ h( g_0, g ) ≤ M ϵ_n(β); k ≤ k_1 k_n(β) ∣ X^n ] = 1 + o_p(1),
    where C^β denotes the class of β-Hölder functions on [0, 1].
This means that the posterior of the density function concentrates around the projection g_{0,[k]} of g_0 and around g_0 itself in terms of the Hellinger distance. We can easily conclude that π( k ∈ K_n ∣ X^n ) = 1 + o(1) from Equation (18) for g_0 ∈ C^β.
It should be noted that, although the priors we defined above are on densities on [0, 1], this is for mathematical simplicity; they could easily be extended to the space of probability densities on any given compact set. Further, transformations of X^n, similar to those discussed at the end of Section 2.2, can extend the analysis to the real line (refer to [3,16] for more examples and details).

3. Efficiency

We say that the MHB and BMH methods are efficient if the lower bound on the variance of the estimate, in the sense of the Cramér–Rao theorem, is achieved.

3.1. Asymptotic Normality of MHB

Consider the maximal tangent set at g_0, which is defined as H_T = { q ∈ L_2(g_0) : ∫ q g_0 = 0 }. Denote the inner product on H_T by ⟨ q_1, q_2 ⟩_L = ∫ q_1 q_2 g_0, which induces the L-norm
‖ g ‖_L^2 = ∫_0^1 ( g − G_0 g )^2 g_0.
Note that the inner product · , · L is equivalent to the inner product introduced in Section 2.1, and the induced L-norm corresponds to the local asymptotic normality (LAN) expansion. Refer to [17] and Theorem 25.14 in [11] for more details.
With functional T and priors on g defined in the previous section, Theorem 3 shows that the MHB method is efficient when the parametric family contains the true model.
Theorem 3.
Let two priors π_1 and π_2 be defined by Equations (13)–(14), with the prior on k being either a Dirac mass at k = K_n = n^{1/2}(log n)^{−2} for π_1 or k ∼ p_k given by Equation (13) for π_2. Then the limiting distribution of n^{1/2}[ T(g_n*) − T(g_0) ] under g_0 as n → ∞ is Norm( 0, ‖ T̃_{g_0} ‖_L^2 ), where ‖ T̃_{g_0} ‖_L^2 = I(θ_0)^{−1} when g_0 = f_{θ_0}.
Proof. 
To prove this result, we verify Lemma 25.23 in [11], which is equivalent to showing that
√n ( T(g_n*) − T(g_0) ) = (1/√n) Σ_{i=1}^n T̃_{g_0}(X_i) + o_p(1).
By the consistency result provided for priors π_1 and π_2 in the previous section, we consider only g_n* ∈ A_{n,k} for n sufficiently large. Then, by Equation (12), we have that
√n ( T(g_n*) − T(g_0) ) = √n ⟨ ( g_n* − g_0 ) / g_0, T̃_{g_0} ⟩_L + o_p(1).
Therefore, showing
√n ∫_0^1 ( g_n*(x) − g_0(x) ) T̃_{g_0}(x) dx = (1/√n) Σ_{i=1}^n T̃_{g_0}(X_i) + o_p(1)
will complete the proof. Since ∫_0^1 g_0(x) T̃_{g_0}(x) dx = 0, we now need to show that ∫_0^1 g_n*(x) T̃_{g_0}(x) dx = (1/n) Σ_{i=1}^n T̃_{g_0}(X_i) + o_p(1). By the law of large numbers, we have that (1/n) Σ_{i=1}^n T̃_{g_0}(X_i) − G_0 T̃_{g_0} = o_p(1), and ∫_0^1 g_n*(x) T̃_{g_0}(x) dx − G_0 T̃_{g_0} = o_p(1) due to the posterior consistency demonstrated above. Therefore, we have that
(1/n) Σ_{i=1}^n T̃_{g_0}(X_i) − ∫_0^1 g_n*(x) T̃_{g_0}(x) dx = [ (1/n) Σ_{i=1}^n T̃_{g_0}(X_i) − G_0 T̃_{g_0} ] + [ G_0 T̃_{g_0} − ∫_0^1 g_n*(x) T̃_{g_0}(x) dx ] = o_p(1).
 □

3.2. The Bernstein von Mises Theorem for BMH

Theorem 2.1 in [15] yielded a general result and approach to show that the BvM Theorem holds for smooth functionals in some semi-parametric models. The theorem shows that, under the continuity and consistency condition, the moment generating function (MGF) of the parameter endowed with a posterior distribution can be calculated approximately through the local asymptotic normal (LAN) expansion, and its convergence to an MGF of some normal random variable can then be shown under some assumptions on the statistical model.
We will show that the BvM theorem holds for the BMH method via Theorem 4. The result also shows that the approach given in [15] can be applied not only to simple examples but also to relatively complicated frameworks. To prove it, we introduce Lemma 2, which is modified from Proposition 1 in [15]; the proof of that proposition was not given explicitly in the original paper.
For mathematical simplicity, we assume that the true density f_{θ_0} belongs to the set F, which is restricted to the space of all densities on [0, 1] that are bounded away from 0 and ∞. As noted above, the compactness of the domain can be relaxed by considering transformations of the parameters and random variables.
To state the lemma, we need some more notation. Assume that the functional T satisfies Equation (12) with bounded efficient influence function T̃_{g_0} ≠ 0. We denote T̃_{g_0} by T̃, and T̃_{[k]} denotes the projection of T̃ onto H_k. For k ≥ 1, let
T̂_k = T( g_{0,[k]} ) + G_n T̃_{[k]} / √n, V_k = ‖ T̃_{[k]} ‖_L^2, T̂ = T(g_0) + G_n T̃ / √n, V = ‖ T̃ ‖_L^2
and denote
G_n(g) = W_n(g) = (1/√n) Σ_{i=1}^n [ g(X_i) − G_0(g) ].
Lemma 2.
Let $g_0$ belong to $\mathcal G$, let the prior $\pi$ be defined as in Section 2.3, and let Conditions (13)–(15) be satisfied. Consider estimating a functional $T(g)$, differentiable with respect to the tangent set $\mathcal H_T := \{q \in L^2(g_0):\ \int_{[0,1]} q\,g_0 = 0\} \subset \mathcal H = L^2(g_0)$, with efficient influence function $\tilde T_{g_0}$ bounded on $[0,1]$, with $\tilde r$ defined in Equation (12), and with $K_n$ as introduced in Equation (15). If
$$\max_{k\in K_n}\Big|\,\big\|\tilde T_{[k]}\big\|_{L^2}-\big\|\tilde T\big\|_{L^2}\Big| = o_p(1),\tag{22}$$
$$\max_{k\in K_n}\big|G_n(\tilde T_{[k]}-\tilde T)\big| = o_p(1),\tag{23}$$
$$\sup_{k\in K_n}\ \sup_{g\in A_{n,k}(M)}\sqrt n\,\big|\tilde r(g,g_0)\big| = o_p(1)\tag{24}$$
for any $M > 0$ and $A_{n,k}(M)$ defined as in (16), as $n \to \infty$, and
$$\max_{k\in K_n}\ \sqrt n\,\Big|\int_0^1(\tilde T-\tilde T_{[k]})(g_{0[k]}-g_0)\,dx\Big| = o(1),\tag{25}$$
then the BvM theorem for the functional T holds.
Proof. 
To show that BvM holds is to show that the posterior distribution of the functional converges weakly to a normal distribution. If we have that
$$\begin{aligned}\pi\big[\sqrt n(T-\hat T)\le z \mid X^n\big] &= \sum_{k\in K_n}\pi[k\mid X^n]\,\pi\big[\sqrt n(T-\hat T_k)\le z+\sqrt n(\hat T-\hat T_k)\mid X^n,k\big] + o_p(1)\\ &= \sum_{k\in K_n}\pi[k\mid X^n]\,\Phi\!\left(\frac{z+\sqrt n(\hat T-\hat T_k)}{\sqrt{V_k}}\right) + o_p(1),\end{aligned}\tag{26}$$
then the proof is completed by showing that the RHS of Equation (26) reduces from a mixture of normals to the target law $N(0, V)$.
By Condition (22), $V_k$ converges to $V$ uniformly over $k\in K_n$. By the definition of $\tilde T$ and result (iii) of Lemma 4 in the supplement of [15], we have that
$$\begin{aligned}\sqrt n(\hat T-\hat T_k) &= \sqrt n\big[T(g_0)-T(g_{0[k]})\big] + G_n(\tilde T-\tilde T_{[k]})\\ &= -\sqrt n\int_0^1\tilde T\,(g_{0[k]}-g_0)\,dx + G_n(\tilde T-\tilde T_{[k]}) + o_p(1)\\ &= -\sqrt n\int_0^1(\tilde T-\tilde T_{[k]})(g_{0[k]}-g_0)\,dx + G_n(\tilde T-\tilde T_{[k]}) + o_p(1).\end{aligned}$$
By Conditions (25) and (23), the last line converges to 0 uniformly over $k \in K_n$.
Therefore, showing that Equation (26) holds for any given $k$ will complete the proof. We prove this by showing that the MGF (Laplace transform) of the posterior distribution of the parameter of interest converges to the MGF of a normal distribution, which implies that the posterior converges weakly to that normal distribution by Lemmas 1 and 2 in the supplement to [15] or Theorem 2.2 in [18].
First, consider the deterministic case $k = K_n$. We calculate the MGF as
$$E\big[e^{t\sqrt n(T(g)-\hat T_k)}\mid X^n, A_n\big] = \frac{\int_{A_n} e^{t\sqrt n(T(g)-\hat T_k)+\ell_n(g)-\ell_n(g_{0[k]})}\,d\pi(g)}{\int_{A_n} e^{\ell_n(g)-\ell_n(g_{0[k]})}\,d\pi(g)},\tag{27}$$
where $\ell_n(g)$ is the log-likelihood of $X^n$ under $g$. Based on the LAN expansion of the log-likelihood and the smoothness of the functional, the exponent in the numerator on the RHS can be expanded in terms of $\bar T^{(k)} = \tilde T_{[k]} - \int_0^1 \tilde T_{[k]}\,g_{0[k]}\,dx$:
$$\begin{aligned} t\sqrt n\,(T(g)-\hat T_k) + \ell_n(g)-\ell_n(g_{0[k]}) &= t\sqrt n\big[T(g)-T(g_{0[k]})\big] - t\,G_n\tilde T_{[k]} + \ell_n(g)-\ell_n(g_{0[k]})\\ &= t\sqrt n\Big[\Big\langle \log\frac{g}{g_{0[k]}} - \int_0^1\Big(\log\frac{g}{g_{0[k]}}\Big)g_{0[k]},\ \bar T^{(k)}\Big\rangle_L + B(g,g_{0[k]}) + \tilde r(g,g_{0[k]})\Big]\\ &\quad - t\,G_n\bar T^{(k)} - \frac n2\Big\|\log\frac{g}{g_{0[k]}}\Big\|_L^2 + \sqrt n\,W_n\Big(\log\frac{g}{g_{0[k]}}\Big) + R_{n,k}(g,g_{0[k]}),\end{aligned}$$
where $B(g,g_0) = \int_0^1\big[\log(g/g_0) - (g-g_0)/g_0\big](x)\,\tilde T_{g_0}(x)\,g_0(x)\,dx$. Noting that $G_n = W_n$, adding and subtracting the term $(t^2/2)\|\bar T^{(k)}\|_L^2$, and re-arranging the RHS expression above, we have
$$\begin{aligned} t\sqrt n(T(g)-\hat T_k) + \ell_n(g)-\ell_n(g_{0[k]}) &= -\frac n2\Big\|\log\frac{g}{g_{0[k]}} - \frac{t}{\sqrt n}\bar T^{(k)}\Big\|_{L,k}^2 + \sqrt n\,W_n\Big(\log\frac{g}{g_{0[k]}} - \frac{t}{\sqrt n}\bar T^{(k)}\Big)\\ &\quad + \frac{t^2}{2}\big\|\bar T^{(k)}\big\|_{L,k}^2 + t\sqrt n\,B_{n,k} + R_{n,k}(g,g_{0[k]}) + \tilde r(g,g_{0[k]})\\ &= -\frac n2\Big\|\log\frac{g\,e^{-t\bar T^{(k)}/\sqrt n}}{g_{0[k]}}\Big\|_{L,k}^2 + \sqrt n\,W_n\Big(\log\frac{g\,e^{-t\bar T^{(k)}/\sqrt n}}{g_{0[k]}}\Big)\\ &\quad + \frac{t^2}{2}\big\|\bar T^{(k)}\big\|_{L,k}^2 + t\sqrt n\,B_{n,k} + R_{n,k}(g,g_{0[k]}) + \tilde r(g,g_{0[k]}).\end{aligned}$$
This holds because the cross term produced by expanding the square in the first term equals the inner product term in the preceding display.
Let $g_{t,k} = g\,e^{-t\bar T^{(k)}/\sqrt n}\big/\int_0^1 g\,e^{-t\bar T^{(k)}/\sqrt n}\,dx$; the RHS of the above equation can then be written as
$$\frac{t^2}{2}\big\|\bar T^{(k)}\big\|_{L,k}^2 + \ell_n(g_{t,k}) - \ell_n(g_{0[k]}) + o(1).\tag{28}$$
Substituting Expression (28) for the corresponding terms on the RHS of Equation (27), we have that
$$E\big[e^{t\sqrt n(T(g)-\hat T_k)}\mid X^n, A_n\big] = e^{(t^2/2)\|\bar T^{(k)}\|_{L,k}^2 + o(1)}\times\frac{\int_{A_{n,k}} e^{\ell_n(g_{t,k})-\ell_n(g_{0[k]})}\,d\pi_k(g)}{\int_{A_{n,k}} e^{\ell_n(g)-\ell_n(g_{0[k]})}\,d\pi_k(g)}.\tag{29}$$
Notice that the integral in the denominator of the second factor is an expectation under a Dirichlet distribution on $\omega$, as described in Equation (14), and that $g_{t,k} = k\sum_{j=1}^k\zeta_j 1\!l_{I_j}$, where
$$\zeta_j = \frac{\omega_j\gamma_j^{-1}}{\sum_{j=1}^k\omega_j\gamma_j^{-1}}\tag{30}$$
with $\gamma_j = e^{t\bar T_j/\sqrt n}$ and $\bar T_j := k\int_{I_j}\bar T^{(k)}$. Let $S_\gamma(\omega) = \sum_{j=1}^k\omega_j\gamma_j$; by (30), we then have $S_\gamma(\zeta) = S_{\gamma^{-1}}(\omega)^{-1}$. Now using these notations,
$$\frac{\int_{A_{n,k}} e^{\ell_n(g_{t,k})-\ell_n(g_{0[k]})}\,d\pi_k(g)}{\int_{A_{n,k}} e^{\ell_n(g)-\ell_n(g_{0[k]})}\,d\pi_k(g)} = \frac{\int_{A_{n,k}} e^{\ell_n(g_{t,k})-\ell_n(g_{0[k]})}\prod_{j=1}^k\omega_j^{\alpha_{j,k}-1}/B(\alpha)\,d\omega}{\int_{A_{n,k}} e^{\ell_n(g)-\ell_n(g_{0[k]})}\prod_{j=1}^k\omega_j^{\alpha_{j,k}-1}/B(\alpha)\,d\omega} = \frac{\int_{A_{n,k}} e^{\ell_n(k\sum_{j=1}^k\zeta_j 1\!l_{I_j})-\ell_n(g_{0[k]})}\,\Delta_\zeta\prod_{j=1}^k\big[\gamma_j\zeta_j S_\gamma(\zeta)^{-1}\big]^{\alpha_{j,k}-1}\,d\zeta}{\int_{A_{n,k}} e^{\ell_n(k\sum_{j=1}^k\omega_j 1\!l_{I_j})-\ell_n(g_{0[k]})}\prod_{j=1}^k\omega_j^{\alpha_{j,k}-1}\,d\omega},\tag{31}$$
where $\Delta_\zeta = S_\gamma(\zeta)^{-k}\prod_{j=1}^k\gamma_j$ is the Jacobian of the change of variables $(\omega_1,\dots,\omega_{k-1})\mapsto(\zeta_1,\dots,\zeta_{k-1})$, which is given in Lemma 5 in the supplement of [15], and $B(\alpha) = \prod_{i=1}^k\Gamma(\alpha_i)\big/\Gamma\big(\sum_{i=1}^k\alpha_i\big)$ is the normalizing constant of the Dirichlet distribution.
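The exponential tilting behind this change of variables can be checked numerically. The sketch below (our illustration; the weights, influence-function values, and dimensions are arbitrary) verifies that the tilted weights $\zeta_j = \omega_j\gamma_j^{-1}/S_{\gamma^{-1}}(\omega)$ form a probability vector and satisfy the identity $S_\gamma(\zeta)\,S_{\gamma^{-1}}(\omega) = 1$:

```python
import numpy as np

rng = np.random.default_rng(1)
k, t, n = 20, 0.7, 500
omega = rng.dirichlet(np.ones(k))        # histogram bin weights
Tbar = rng.normal(size=k)                # bin values of the influence function
gamma = np.exp(t * Tbar / np.sqrt(n))    # tilting factors gamma_j = exp(t*Tbar_j/sqrt(n))

S = lambda w, g: (w * g).sum()           # S_gamma(w) = sum_j w_j * gamma_j
zeta = omega / gamma / S(omega, 1 / gamma)  # tilted weights

print(zeta.sum())                         # 1.0 (a probability vector)
print(S(zeta, gamma) * S(omega, 1 / gamma))  # 1.0 (the tilting identity)
```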
Notice that, over the set $A_{n,k}$,
$$\prod_{j=1}^k\big[\gamma_jS_\gamma(\zeta)^{-1}\big]^{\alpha_{j,k}-1}\,\Delta_\zeta = S_\gamma(\zeta)^{-\sum_{j=1}^k\alpha_{j,k}}\prod_{j=1}^k\gamma_j^{\alpha_{j,k}} = S_\gamma(\zeta)^{-\sum_{j=1}^k\alpha_{j,k}}\,e^{t\sum_{j=1}^k\alpha_{j,k}\bar T_j/\sqrt n} = e^{t\sum_{j=1}^k\alpha_{j,k}\bar T_j/\sqrt n}\Big[1-\frac{t}{\sqrt n}\int_0^1\bar T^{(k)}\big(g_{[k]}-g_{0[k]}\big)+O(n^{-1})\Big]^{\sum_{j=1}^k\alpha_{j,k}},\tag{32}$$
since
$$S_{\gamma^{-1}}(\omega) = \int_0^1 e^{-t\bar T^{(k)}(x)/\sqrt n}\,g_{[k]}(x)\,dx = 1 - \frac{t}{\sqrt n}\int_0^1\bar T^{(k)}\big(g_{[k]}-g_{0[k]}\big) + O(n^{-1})$$
by Taylor’s expansion. Expression (32) converges to 1 under Condition (15), so Expression (31) converges to
$$\frac{\int_{A_{n,k}} e^{\ell_n(k\sum_{j=1}^k\zeta_j 1\!l_{I_j})-\ell_n(g_{0[k]})}\prod_{j=1}^k\zeta_j^{\alpha_{j,k}-1}/B(\alpha_k)\,d\zeta}{\int_{A_{n,k}} e^{\ell_n(k\sum_{j=1}^k\omega_j 1\!l_{I_j})-\ell_n(g_{0[k]})}\prod_{j=1}^k\omega_j^{\alpha_{j,k}-1}/B(\alpha_k)\,d\omega},\tag{33}$$
since, when $\|\omega-\omega_0\|_1 \le M\sqrt{k\log n/n}$,
$$\|\zeta-\omega_0\|_1 \le \|\omega-\omega_0\|_1 + \|\omega-\zeta\|_1 \le \frac{M\sqrt{k\log n} + 2|t|\,\|\tilde T\|_\infty}{\sqrt n} \le (M+1)\sqrt{\frac{k\log n}{n}}$$
and, vice versa, when $\|\zeta-\omega_0\|_1 \le M\sqrt{k\log n/n}$,
$$\|\omega-\omega_0\|_1 \le \|\omega-\zeta\|_1 + \|\omega_0-\zeta\|_1 \le \frac{M\sqrt{k\log n} + 2|t|\,\|\tilde T\|_\infty}{\sqrt n} \le (M+1)\sqrt{\frac{k\log n}{n}}.$$
Choosing $M$ such that
$$\pi\Big[\|\omega-\omega_0\|_1 \le (M+1)\sqrt{k\log n/n}\ \Big|\ X^n, k\Big] = 1 + o_p(1),\tag{34}$$
Expression (33) equals $1 + o_p(1)$. Notice that $\|\bar T^{(k)}\|_{L,k} = \|\tilde T_{[k]}\|_{L^2}$. We then have that
$$E^\pi\big[e^{t\sqrt n(T(g)-\hat T_k)}\mid X^n, A_{n,k}\big] = e^{(t^2/2)\|\tilde T_{[k]}\|_{L^2}^2}\big(1+o_p(1)\big),\tag{35}$$
which completes the proof in the fixed-$k$ case.
For the random-$k$ case, the proof follows the same steps as the corresponding part of the proof of Theorem 4.2 in [15]. For completeness, we briefly sketch it here. Since $k$ is not fixed, we calculate $E^\pi[e^{t\sqrt n(T(f)-\hat T_k)}\mid X^n]$ on $B_n = \bigcup_{k\in K_n}\big(A_{n,k}\cap\{f = f_{\omega,k}\}\big)$. Taking $K_n$ to be a subset of $\{1,2,\dots,\lfloor n/\log^2 n\rfloor\}$ such that $\pi(K_n\mid X^n) = 1 + o_p(1)$ by the concentration property (a) of the random histogram, we have that $\pi[B_n\mid X^n] = 1 + o_p(1)$. We rewrite the left-hand side (LHS) of Equation (35) as $E^\pi[e^{t\sqrt n(T(f)-\hat T_k)}\mid X^n, B_n, k]$, which is also equal to $e^{(t^2/2)\|\tilde T_{[k]}\|_{L^2}^2}(1+o_p(1))$. Notice that the $o_p(1)$ in this expression is uniform in $k$, because the fixed-$k$ argument holds for any given $k < n$. Therefore,
$$E^\pi\big[e^{t\sqrt n(T(f)-\hat T)}\mid X^n, B_n\big] = \sum_{k\in K_n} E^\pi\big[e^{t\sqrt n((T(f)-\hat T_k)+(\hat T_k-\hat T))}\mid X^n, A_{n,k}, k\big]\,\pi[k\mid X^n] = (1+o(1))\sum_{k\in K_n} e^{t^2V_k/2 + t\sqrt n(\hat T_k-\hat T)}\,\pi[k\mid X^n].$$
Using Equations (23) and (25) together with the continuous mapping theorem for the exponential function yields that the last display converges in probability to e t 2 V / 2 as n , which completes the proof. □
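The BvM phenomenon that Lemma 2 formalizes can be observed directly in a toy example. The sketch below (our illustration, not from the paper) places a random histogram prior with fixed $k$ and Dirichlet$(1,\dots,1)$ weights on densities over $[0,1]$, so the posterior bin weights are Dirichlet$(1 + \text{counts})$, and examines the posterior of the linear functional $T(g) = \int x\,g(x)\,dx$. Its spread should match $\|\tilde T\|_{L^2}/\sqrt n$ with $\tilde T(x) = x - E x$, which equals $1/\sqrt{12}$ for uniform data:

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 4000, 50
x = rng.uniform(size=n)                      # data from g0 = Uniform(0,1)

edges = np.linspace(0, 1, k + 1)
counts, _ = np.histogram(x, bins=edges)
centers = (edges[:-1] + edges[1:]) / 2

# Random-histogram posterior: Dirichlet(alpha_j + counts_j) bin weights (alpha_j = 1);
# the functional T(g) = int x g(x) dx becomes a weighted sum of bin centers.
w = rng.dirichlet(1.0 + counts, size=4000)
T_draws = w @ centers

# BvM: sqrt(n) * posterior-sd(T) should approach ||T_tilde|| = 1/sqrt(12) = 0.289
print(T_draws.mean(), np.sqrt(n) * T_draws.std())  # approximately 0.5 and 0.289
```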
The following theorem shows that Method 2 is efficient; the proof consists of verifying that the conditions in the above lemma are satisfied.
Theorem 4.
Suppose $g_0 \in C^\beta$ with $\beta > 0$. Let the prior on $k$ be either a Dirac mass at $k = K_n = n^{1/2}(\log n)^2$ or $k \sim \pi_k$ given by (13), and let the two priors $\pi_1$ and $\pi_2$ be defined by (14) and satisfy (13). Then, for all $\beta > 1/2$, the BvM theorem holds for $T(f)$ under both $\pi_1$ and $\pi_2$.
Proof. 
For $T(f)$ such that Equation (12) is satisfied, Condition (24) is satisfied immediately.
For Condition (23), the empirical process $G_n(\tilde T_{[k]}-\tilde T)$ is controlled and converges to 0 by applying Lemma 19.33 in [11].
Condition (25) is satisfied by Lemma 3 below.
We now show that Equation (22) holds:
$$\big|\,\|\tilde T_f\|_{L^2}-\|\tilde T_{[k]}\|_{L^2}\big| \lesssim \Big|\int\dot s_{T(f)}(x)\,f(x)\,dx - \int\dot s_{T(f_{[k]})}(x)\,f_{[k]}(x)\,dx\Big| \lesssim \int\big|\dot s_{T(f)}(x)\,f_{[k]}(x)-\dot s_{T(f_{[k]})}(x)\,f(x)\big|\,dx \lesssim \int\big|f_{[k]}(x)-f(x)\big|\,dx.$$
The last inequality is based on Conclusion (3) in Lemma 4 in [19] together with the assumption that $\tilde T$ is bounded. The last term is in turn controlled by $h(f, f_{[k]})$, which completes the proof. □
Lemma 3.
Under the same conditions as in Theorem 4, Equation (25) holds.
Proof. 
Since $\tilde T = -\big[\int \ddot s_{T(g_0)}(x)\,g_0^{1/2}(x)\,dx\big]^{-1}\,\dot s_{T(g_0)}(x)\big/\big(2\,g_0^{1/2}(x)\big)$, under the deterministic $k$-prior with $k = K_n = n^{1/2}(\log n)^2$ and $\beta > 1/2$,
$$\Big|\int_0^1(\tilde T-\tilde T_{[k]})(g_0-g_{0[k]})\Big| \lesssim h^2(g_0, g_{0[k]}) = o(1/\sqrt n).$$
For the random $k$-prior, since we restrict $g$ to be bounded from above and below, the Hellinger and $L^2$ distances considered are comparable. For a given $k \in K_n$, by definition there exists $g_k^* \in \mathcal H_k$ with $h(g_0, g_k^*) \le M\epsilon_n(\beta)$, so
$$h^2(g_0, g_{0[k]}) \lesssim \int(g_0-g_{0[k]})^2 \le \int(g_0-g_k^*)^2 \lesssim h^2(g_0, g_k^*) \lesssim \epsilon_n^2(\beta),$$
which completes the proof.
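The histogram approximation rate used above can be checked numerically. The following sketch (illustrative; the test density is our own choice, a smooth density on $[0,1]$ bounded away from $0$) computes the squared Hellinger distance between $g_0$ and its histogram projection $g_{0[k]}$ and confirms it shrinks roughly like $k^{-2}$:

```python
import numpy as np

# a C-infinity density on [0,1], bounded away from 0 and infinity
g0 = lambda x: 1 + 0.3 * np.sin(2 * np.pi * x)

def hellinger_sq_to_histogram(k, m=2000):
    # midpoint grid with m points per bin
    x = (np.arange(k * m) + 0.5) / (k * m)
    g = g0(x)
    bin_means = g.reshape(k, m).mean(axis=1)   # g_{0[k]} equals the bin mean on each bin
    g_k = np.repeat(bin_means, m)
    # h^2(g0, g0[k]) = (1/2) * int (sqrt(g0) - sqrt(g0[k]))^2
    return 0.5 * np.mean((np.sqrt(g) - np.sqrt(g_k)) ** 2)

h = {k: hellinger_sq_to_histogram(k) for k in (8, 32, 128)}
print(h)  # decreasing roughly like k^{-2}
```

Quadrupling $k$ should reduce $h^2$ by a factor near $16$, matching the $k^{-2}$ rate for smooth densities.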

4. Robustness Properties

In frequentist analysis, robustness is usually measured by the influence function and breakdown point of estimators. These have been used to study robustness in minimum Hellinger distance estimators in [3] and in more general minimum disparity estimators in [2,7].
In Bayesian inference, robustness is labeled “outlier rejection” and is studied under the framework of the “theory of conflict resolution”. There is a large literature on this topic, e.g., [20,21,22]. While the results of [22] are only about symmetric distributions, [23] provides corresponding results covering a wider class of distributions with tails in the general exponential power family. These results provide a complete theory for the case of many observations and a single location parameter.
We examine the behavior of the MHB and BMH methods under a mixture model for gross errors. Let $\delta_z$ denote the uniform density on the interval $(z-\epsilon, z+\epsilon)$, where $\epsilon > 0$ is small, and let $f_{\theta,\alpha,z} = (1-\alpha)f_\theta + \alpha\delta_z$, where $\theta\in\Theta$, $\alpha\in[0,1)$, and $z$ is a real number. The density $f_{\theta,\alpha,z}$ models a situation where $100(1-\alpha)\%$ of the observations are distributed according to $f_\theta$ and $100\alpha\%$ of the observations are gross errors located near $z$.
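A minimal sketch of sampling from such a gross-error density (with $f_\theta$ taken to be $N(\theta,1)$ purely for illustration; the parameter values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_contaminated(n, theta=0.0, alpha=0.1, z=10.0, eps=0.01):
    # with prob 1-alpha draw from f_theta = N(theta, 1);
    # with prob alpha draw from delta_z = Uniform(z-eps, z+eps)
    from_outlier = rng.uniform(size=n) < alpha
    clean = rng.normal(theta, 1.0, size=n)
    gross = rng.uniform(z - eps, z + eps, size=n)
    return np.where(from_outlier, gross, clean)

x = sample_contaminated(10_000, alpha=0.1, z=10.0)
print((x > 5).mean())  # approximately 0.1, the contamination fraction
```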
Theorem 5.
For every $\alpha\in(0,1)$ and every $\theta\in\Theta$, denote the mixture model for gross errors by $f_{\theta,\alpha,z}$. Under the assumptions of Theorem 3, we have that $\lim_{z\to\infty}\lim_{n\to\infty} T(g_n^*) = \theta$. Moreover, for the BMH method, when the conditions of Theorem 4 are satisfied, $\pi(T(g)\mid X^n) \to \phi\big(\theta, \|\tilde T_{f_{\theta,\alpha,z}}\|_{L^2}^2\big)$ in distribution as $n\to\infty$ and $z\to\infty$, where $\phi(\mu, V)$ denotes the normal distribution with mean $\mu$ and variance $V$.
Proof. 
By Theorem 7 in [3], for the functional $T$ as we defined it and under the conditions of this theorem, we have that
$$\lim_{z\to\infty} T(f_{\theta,\alpha,z}) = \theta.$$
We also have that, for MHB, under the conditions of Theorem 3, $T(g_n^*) \to T(f_{\theta,\alpha,z})$ in probability as $n\to\infty$. Combining the two results, $\lim_{z\to\infty}\lim_{n\to\infty} T(g_n^*) = \theta$ when the data are generated from the contaminated distribution $f_{\theta,\alpha,z}$. Similarly, by Theorem 4, $\pi(T(g)\mid X^n) \to \phi\big(T(f_{\theta,\alpha,z}), \|\tilde T_{f_{\theta,\alpha,z}}\|_{L^2}^2\big)$ in distribution as $n\to\infty$, which in turn converges to $\phi\big(\theta, \|\tilde T_{f_{\theta,\alpha,z}}\|_{L^2}^2\big)$ as $z\to\infty$. □

5. Demonstration

We provide a demonstration of both the BMH and MHB methods on two data sets: the classical Newcomb light speed data (see [24,25]), in which 2 out of 66 values are clearly negative outliers, and a bivariate simulation containing 10% contamination in two asymmetric locations.
We have implemented the BMH and MHB methods using two Bayesian nonparametric priors:
  • the random histogram prior studied in this paper, based on a fixed k = 100, with the histogram range naturally extended to the range of the observed data (this prior is applied only to our first, univariate example).
  • the popular Dirichlet Process (DP) kernel mixture of the form
    $$y_i \mid \mu_i, \Sigma_i \sim N(\mu_i, \Sigma_i), \qquad (\mu_i, \Sigma_i) \mid G \sim G, \qquad G \mid \alpha, G_0 \sim DP(\alpha G_0),$$
    where the baseline distribution is the conjugate normal-inverted Wishart,
    $$G_0 = N\big(\mu \mid m_1, (1/k_0)\Sigma\big)\,IW\big(\Sigma \mid \nu_1, \psi_1\big).$$
    Note that, when y i values are univariate observations, the inverse Wishart (IW) distribution reverts to an inverse Gamma distribution. To complete the model specification, independent hyperpriors are assumed
    $$\alpha \mid a_0, b_0 \sim \mathrm{Gamma}(a_0, b_0), \qquad m_1 \mid m_2, s_2 \sim N(m_2, s_2), \qquad k_0 \mid \tau_1, \tau_2 \sim \mathrm{Gamma}(\tau_1/2, \tau_2/2), \qquad \psi_1 \mid \nu_2, \psi_2 \sim IW(\nu_2, \psi_2).$$
We obtain posteriors for both using BUGS. We elected to use BUGS rather than the DPpackage package within R, despite the latter's rather efficient MCMC algorithms, because our BMH method requires direct access to samples from the posterior distribution rather than only the expected a posteriori estimate. The R package distrEx is then used to construct the sampled density functions and to calculate the Hellinger distance between the sampled densities from the nonparametric model and the assumed normal distribution. The R package optimx is used to find the minima of the Hellinger distances. The time cost of our methods is dominated by the optimization step rather than by obtaining samples from the posterior density.
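The core minimum-Hellinger step can be sketched in Python as follows. This is a hedged stand-in rather than our actual implementation: a kernel density estimate replaces the Bayesian posterior density, SciPy replaces distrEx/optimx, and the synthetic "Newcomb-like" data (64 points near the bulk plus two low outliers) are invented for illustration:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
# synthetic data resembling the light speed example: bulk + two gross errors
x = np.concatenate([rng.normal(27.75, 5.08, 64), [-44.0, -2.0]])

grid = np.linspace(x.min() - 10, x.max() + 10, 2000)
g_hat = stats.gaussian_kde(x)(grid)   # stand-in for the posterior density estimate

def hellinger_sq(params):
    # squared Hellinger distance h^2 = 1 - int sqrt(f_theta * g_hat)
    mu, log_sigma = params
    f = stats.norm.pdf(grid, mu, np.exp(log_sigma))
    affinity = np.sqrt(f * g_hat).sum() * (grid[1] - grid[0])
    return 1.0 - affinity

res = optimize.minimize(hellinger_sq, x0=[np.median(x), np.log(x.std())],
                        method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)  # location estimate near the bulk, largely ignoring outliers
```

The fitted mean lands near the uncontaminated center; the fitted scale is somewhat inflated by the kernel bandwidth, which is one reason the bandwidth-free Bayesian density estimates are attractive.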
We first apply BMH and MHB to Simon Newcomb's measurements of the speed of light. The data contain 66 observations. For this example, we specify the parameters and hyper-parameters of the DPM as α = 1, m₂ = 0, s₂ = 1000, τ₁ = 1, τ₂ = 100, ν₂ = 2, and ψ₂ = 1. We plot the data and a bivariate contour of the BMH posterior for the mean and variance of the assumed normal in Figure 1, where, despite the outliers, the BvM result is readily apparent.
Table 1 summarizes these estimates. We report the estimated mean and variance with and without the obvious outliers, as well as the same quantities estimated using the MHB and BMH methods, with the last of these being expected a posteriori estimates. Quantities in parentheses give the "natural" standard error for each quantity: likelihood estimates follow standard normal theory (dividing the estimated standard deviation by $\sqrt n$), and BMH standard errors are obtained from the posterior distribution. For MHB, we used a bootstrap; we note that, while the computational cost of an MHB point estimate is significantly lower than that of BMH, the standard errors require an MCMC chain for each bootstrap sample, significantly raising the cost of obtaining them. We observe that both prior specifications result in parameter estimates that are identical to two decimal places and very close to those obtained after removing outliers.
To examine the practical implementation of methods that go beyond our theoretical results, we applied these methods to a simulated two-dimensional data set of 100 data points generated from a standard normal with two contamination distributions. Specifically, our data distribution comes from
$$\frac{9}{10}\,N\!\left(\begin{pmatrix}10\\5\end{pmatrix},\ \begin{pmatrix}3 & 1\\ 1 & 5\end{pmatrix}\right) + \frac{1}{20}\,N\!\left(\begin{pmatrix}2\\5\end{pmatrix},\ \begin{pmatrix}0.5 & 0.1\\ 0.1 & 0.5\end{pmatrix}\right) + \frac{1}{20}\,N\!\left(\begin{pmatrix}10\\14\end{pmatrix},\ \begin{pmatrix}0.4 & 0.1\\ 0.1 & 0.4\end{pmatrix}\right),$$
where exactly five points were generated from each of the latter two Gaussians. Our DP prior used the same hyper-parameters as above, except that $\Psi_1$ was obtained from the empirical variance of the (contaminated) data and $(m_2, S_2)$ were extended to their two-dimensional forms $(0,0)^T$ and $\mathrm{diag}(1000, 1000)$. Figure 2 plots these data along with the posterior for the two means. Figure 3 provides posterior distributions for the components of the variance matrix. Table 2 presents estimation results for the full data, for the data with the contaminating distributions removed, and from the BMH method. Here we again observe that BMH yields results that are very close to those obtained using the uncontaminated data. There is somewhat more irregularity in our estimates, particularly in Figure 3, which we speculate is due to poor optimization. There is considerable scope to improve the numerics of minimum Hellinger distance methods more generally, but this is beyond the scope of this paper.
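A sketch of how such a contaminated sample can be generated (component means and covariances transcribed from the mixture above, with exact 90/5/5 component counts; this is our illustration, not the original simulation code):

```python
import numpy as np

rng = np.random.default_rng(4)
# main component: 90 points; contamination: exactly 5 points from each component
main = rng.multivariate_normal([10, 5], [[3, 1], [1, 5]], size=90)
out1 = rng.multivariate_normal([2, 5], [[0.5, 0.1], [0.1, 0.5]], size=5)
out2 = rng.multivariate_normal([10, 14], [[0.4, 0.1], [0.1, 0.4]], size=5)
data = np.vstack([main, out1, out2])
print(data.shape)  # (100, 2)
```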

6. Discussion

This paper investigates the use of minimum Hellinger distance methods that replace kernel density estimates with Bayesian nonparametric models. We show that simply substituting the expected a posteriori estimator will reproduce the efficiency and robustness properties of the classical disparity methods first derived in [3]. Further, inducing a posterior distribution on θ through the posterior for g results in a Bernstein von Mises theorem and a distributional robustness result.
There are multiple potential extensions of this work. While we have focused on the specific pairing of Hellinger distance and random histogram priors, both of these can be generalized. A more general class of disparities was examined in [7], and we believe the extension of our methods to this class is straightforward. More general Bayesian nonparametric priors are discussed in [14], among which the Dirichlet process prior has been particularly popular. Extensions to each of these priors will require separate analysis (e.g., [26]). Extensions of disparities to regression models were examined in [27] using a conditional density estimate, where equivalent Bayesian nonparametrics are not as well developed. Other modeling domains such as time series may require multivariate density estimates, presenting further challenges.
Our results are a counterpoint to the Bayesian extensions of Hellinger distance methods in [2] where the kernel density was retained for g n but a prior was given for θ and the disparity treated as a log likelihood. Combining both these approaches represents a fully Bayesian implementation of disparity methods and is an important direction of future research.

Author Contributions

Conceptualization: Y.W. and G.H; methodology: Y.W. and G.H.; formal analysis: Y.W.; investigation: Y.W. and G.H.; writing—original draft preparation: Y.W.; writing—review and editing: Y.W. and G.H.

Funding

This research is based on work supported by NASA under award No. NNX15AK38A and by National Science Foundation grants NSF DEB-0813743 and DMS-1712554.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BMH    Bayesian Minimum Hellinger Method
MHB    Minimum Hellinger Method with Bayesian Density Estimation
BvM    Bernstein–von Mises
MGF    Moment Generating Function
DP     Dirichlet Process
DPM    Dirichlet Process Mixture
LAN    Local Asymptotic Normality
BUGS   Bayesian inference Using Gibbs Sampling
MCMC   Markov Chain Monte Carlo
IW     Inverse Wishart
RHS    Right-hand side
LHS    Left-hand side

References

  1. Huber, P.J. Robust Statistics; Wiley: Hoboken, NJ, USA, 2004.
  2. Hooker, G.; Vidyashankar, A.N. Bayesian model robustness via disparities. Test 2014, 23, 556–584.
  3. Beran, R. Minimum Hellinger Distance Estimates for Parametric Models. Ann. Stat. 1977, 5, 445–463.
  4. Basu, A.; Lindsay, B.G. Minimum disparity estimation for continuous models: Efficiency, distributions and robustness. Ann. Inst. Stat. Math. 1994, 46, 683–705.
  5. Basu, A.; Sarkar, S.; Vidyashankar, A.N. Minimum Negative Exponential Disparity Estimation in Parametric Models. J. Stat. Plan. Inference 1997, 58, 349–370.
  6. Pak, R.J.; Basu, A. Minimum Disparity Estimation in Linear Regression Models: Distribution and Efficiency. Ann. Inst. Stat. Math. 1998, 50, 503–521.
  7. Park, C.; Basu, A. Minimum Disparity Estimation: Asymptotic Normality and Breakdown Point Results. Bull. Inform. Cybern. 2004, 38, 19–33.
  8. Lindsay, B.G. Efficiency versus Robustness: The case for minimum Hellinger distance and related methods. Ann. Stat. 1994, 22, 1081–1114.
  9. Gervini, D.; Yohai, V.J. A class of robust and fully efficient regression estimators. Ann. Stat. 2002, 30, 583–616.
  10. Wu, Y.; Ghosal, S. Posterior consistency for some semi-parametric problems. Sankhyā Ser. A 2008, 70, 267–313.
  11. van der Vaart, A. Asymptotic Statistics; Cambridge University Press: Cambridge, UK, 2000.
  12. Ghosal, S.; Ghosh, J.K.; van der Vaart, A. Convergence rates of posterior distributions. Ann. Stat. 2000, 28, 500–531.
  13. Ghosal, S.; van der Vaart, A. Convergence rates of posterior distributions for noniid observations. Ann. Stat. 2007, 35, 192–223.
  14. Ghosh, J.K.; Ramamoorthi, R.V. Bayesian Nonparametrics; Springer: New York, NY, USA, 2003.
  15. Castillo, I.; Rousseau, J. A Bernstein–von Mises theorem for smooth functionals in semiparametric models. Ann. Stat. 2015, 43, 2353–2383.
  16. Amewou-Atisso, M.; Ghosal, S.; Ghosh, J.; Ramamoorthi, R. Posterior consistency for semi-parametric regression problems. Bernoulli 2003, 9, 291–312.
  17. Rivoirard, V.; Rousseau, J. Bernstein–von Mises theorem for linear functionals of the density. Ann. Stat. 2012, 40, 1489–1523.
  18. Bagui, S.C.; Mehra, K.L. Convergence of Binomial, Poisson, Negative-Binomial, and Gamma to normal distribution: Moment generating functions technique. Am. J. Math. Stat. 2016, 6, 115–121.
  19. Castillo, I.; Nickl, R. Nonparametric Bernstein–von Mises Theorems in Gaussian White Noise. Ann. Stat. 2013, 41, 1999–2028.
  20. de Finetti, B. The Bayesian approach to the rejection of outliers. In Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 20 June–30 July 1960; University of California Press: Berkeley, CA, USA, 1961; Volume 1, pp. 199–210.
  21. O'Hagan, A. On outlier rejection phenomena in Bayes inference. J. R. Stat. Soc. B 1979, 41, 358–367.
  22. O'Hagan, A. Outliers and credence for location parameter inference. J. Am. Stat. Assoc. 1990, 85, 172–176.
  23. Desgagné, A.; Angers, J.-F. Conflicting information and location parameter inference. Metron 2007, 67, 67–97.
  24. Stigler, S.M. Do Robust Estimators Work with Real Data? Ann. Stat. 1977, 5, 1055–1098.
  25. Basu, A.; Shioya, H.; Park, C. Statistical Inference: The Minimum Distance Approach; Chapman and Hall: Boca Raton, FL, USA, 2011.
  26. Wu, Y.; Ghosal, S. Kullback Leibler property of kernel mixture priors in Bayesian density estimation. Electron. J. Stat. 2008, 3, 298–331.
  27. Hooker, G. Consistency, Efficiency and Robustness of Conditional Disparity Methods. Bernoulli 2016, 22, 857–900.
Figure 1. Left: Histogram of the light speed data; Right: bivariate contour plots of the posterior for the mean and variance of these data from the BMH method.
Figure 2. Left: simulated two-dimensional normal example with two contamination components; Right: BMH posterior for the mean vector ( μ 1 , μ 2 ) .
Figure 3. Posterior distributions for the elements of Σ in the simulated bivariate normal example.
Table 1. Estimation results for Newcomb’s light speed data. Direct Estimate refers to the standard mean and variance estimates, and Without Outliers indicates the same estimates with outliers removed. The first row for each parameter gives the estimate under a Dirichlet process prior and the second using a random histogram. Standard errors for each estimate are given in parentheses: these are from the normal theory for the first two columns via a bootstrap for MHB and from posterior samples for BMH.
Parameter (prior)    Direct Estimate    Without Outliers    MHB             BMH
μ̂ (DP)               26.21 (1.32)       27.75 (0.64)        27.72 (0.64)    27.73 (0.63)
μ̂ (histogram)                                               27.72 (0.64)    27.73 (0.63)
σ̂ (DP)               10.75 (3.40)       5.08 (0.46)         5.07 (0.46)     5.00 (0.47)
σ̂ (histogram)                                               5.07 (0.46)     5.00 (0.47)
Table 2. Estimation results for a contaminated bivariate normal. We provide generating estimates, the natural maximum likelihood estimates with and without outliers and the BMH estimates. Reported BMH estimates are expected a posteriori estimates with posterior standard errors given in parentheses.
                              μ01            μ02            Σ11            Σ12            Σ22
True                          10             5              3              1              2
Contaminated data             9.07           5.36           9.76           1.67           5.80
Data with outliers removed    9.62 (0.13)    4.91 (0.11)    3.45 (0.13)    1.49 (0.13)    2.29 (0.11)
Estimated by BMH              9.59 (0.27)    4.93 (0.19)    2.79 (0.18)    0.98 (0.18)    1.97 (0.076)