
Statistical Analysis of Distance Estimators with Density Differences and Density Ratios

1. Nagoya University, Furocho, Chikusaku, Nagoya 464-8603, Japan
2. Tokyo Institute of Technology, 2-12-1 O-okayama, Meguro-ku, Tokyo 152-8552, Japan
* Author to whom correspondence should be addressed.
Entropy 2014, 16(2), 921-942; https://doi.org/10.3390/e16020921
Received: 21 October 2013 / Revised: 27 January 2014 / Accepted: 7 February 2014 / Published: 17 February 2014

Abstract

Estimating a discrepancy between two probability distributions from samples is an important task in statistics and machine learning. There are mainly two classes of discrepancy measures: distance measures based on the density difference, such as the Lp-distances, and divergence measures based on the density ratio, such as the Φ-divergences. The intersection of these two classes is the L1-distance measure, and thus, it can be estimated either based on the density difference or the density ratio. In this paper, we first show that the Bregman scores, which are widely employed for the estimation of probability densities in statistical data analysis, allow us to estimate the density difference and the density ratio directly without separately estimating each probability distribution. We then theoretically elucidate the robustness of these estimators and present numerical experiments.
Keywords: density difference; density ratio; L1-distance; Bregman score; robustness

1. Introduction

In statistics and machine learning, estimating a discrepancy between two probability distributions from samples has been extensively studied [1], because discrepancy estimation is useful in solving various real-world data analysis tasks, including covariate shift adaptation [2,3], conditional probability estimation [4], outlier detection [5] and divergence-based two-sample testing [6].

There are mainly two classes of discrepancy measures for probability densities. One is genuine distances on function spaces, such as the Ls-distance for s ≥ 1, and the other is divergence measures, such as the Kullback–Leibler divergence and the Pearson divergence. Typically, distance measures in the former class can be represented using the difference of two probability densities, while those in the latter class are represented using the ratio of two probability densities. Therefore, it is important to establish statistical methods to estimate the density difference and the density ratio.

A naive way to estimate the density difference and the density ratio consists of two steps: two probability densities are separately estimated in the first step, and then, their difference or ratio is computed in the second step. However, such a two-step approach is not favorable in practice, because the density estimation in the first step is carried out without regard to the second step of taking the difference or ratio. To overcome this problem, the authors in [7–11] studied the estimation of the density difference and the density ratio in a semi-parametric manner without separately modeling each probability distribution.

The intersection of the density difference-based distances and the density ratio-based divergences is the L1-distance, and thus, it can be estimated either based on the density difference or the density ratio. In this paper, we first propose a novel direct method to estimate the density difference and the density ratio based on the Bregman scores [12]. We then show that the density-difference approach to L1-distance estimation is more robust than the density-ratio approach. This fact has already been pointed out in [10] based on a somewhat intuitive argument: the density difference is always bounded, while the density ratio can be unbounded. In this paper, we theoretically support this claim by providing detailed theoretical analysis of the robustness properties.

There are some related works to our study. Density ratio estimation was intensively investigated in the machine learning community [4,7,8]. As shown in [6], the density ratio can be used to estimate the ϕ-divergence [13,14]. However, estimation of the L1-distance, which is a member of the ϕ-divergence class, was not studied, since it does not satisfy the regularity condition required to investigate the statistical asymptotic properties. On the other hand, the least mean squares estimator of the density difference was proposed in [10], and its robustness was numerically investigated. In the present paper, not only the least squares estimator, but also general score estimators for density differences are considered, and their robustness properties are theoretically investigated.

The rest of the paper is structured as follows. In Section 2, we describe two approaches to L1-distance estimation based on the density difference and the density ratio. In Section 3, we introduce the Bregman scores, which are widely employed for the estimation of probability densities in statistical data analysis. In Section 4, we apply the Bregman score to the estimation of the density difference and the density ratio. In Section 5, we introduce a robustness measure in terms of which the proposed estimators are analyzed in the following sections. In Section 6, we consider statistical models without the scale parameter (called the non-scale models) and investigate the robustness of the density difference and density ratio estimators. In Section 7, we consider statistical models with the scale parameter (called the scale models) and show that the estimation using the scale models is reduced to the estimation using the non-scale models. Then, we apply the theoretical results on the non-scale models to the scale models and elucidate the robustness of the scale models. In Section 8, numerical examples on L1-distance estimation are presented. Finally, we conclude in Section 9.

2. Estimation of L1-Distance

Let p(x) and q(x) be two probability densities. In this section, we introduce two approaches to estimating discrepancy measures: an approach based on the density difference, p − q, and an approach based on the density ratio, p/q.

2.1. L1-Distance As the Density Difference and Density Ratio

The density difference, p − q, is directly used to compute the Ls-distance between two probability densities:

\[ d_s(p, q) = \Bigl( \int |p(x) - q(x)|^s \, dx \Bigr)^{1/s}, \]
where s ≥ 1. On the other hand, the density ratio, p/q, appears in the ϕ-divergence [13,14] defined as:
\[ \int \phi\!\Bigl( \frac{p(x)}{q(x)} \Bigr) q(x) \, dx, \]
where ϕ is a strictly convex function, such that ϕ(1) = 0. The ϕ-divergence is non-negative and vanishes only when p = q holds. Hence, it can be regarded as an extension of the distance between p and q. The class of ϕ-divergences includes many important discrepancy measures, such as the Kullback–Leibler divergence (ϕ(z) = z log z), the Pearson distance (ϕ(z) = (1−z)2) and the L1-distance (ϕ(z) = |1−z|). The intersection of the ϕ-divergence and the Ls-distance is the L1-distance:
\[ d_1(p, q) = \int |p(x) - q(x)| \, dx. \]
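As a quick numerical illustration (an added sketch, not from the paper), the two representations of the L1-distance can be checked on a grid for a pair of Gaussian densities: the density-difference form ∫|p − q|dx and the ϕ-divergence form with ϕ(z) = |1 − z| agree up to discretization error.

```python
# Sketch: the L1-distance between N(0,1) and N(1,1) computed two ways,
# as the density-difference integral and as the phi-divergence with
# phi(z) = |1 - z|.  Grid and densities are illustrative choices.
import numpy as np

def gauss(x, mu, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
p, q = gauss(x, 0.0), gauss(x, 1.0)

d1_diff = np.sum(np.abs(p - q)) * dx              # density-difference form
d1_ratio = np.sum(np.abs(1.0 - p / q) * q) * dx   # phi-divergence form
```

Since |1 − p/q|q = |q − p| pointwise, the two sums agree to floating-point accuracy; the exact value here is 2(Φ(1/2) − Φ(−1/2)) ≈ 0.766.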

The purpose of our work is to compare the statistical properties of the density-difference approach and the density-ratio approach to the estimation of the L1-distance between probability densities p and q defined on ℝd. For the estimation of the L1-distance, we use two sets of independently and identically distributed (i.i.d.) samples:

\[ x_1, \ldots, x_n \sim p, \qquad y_1, \ldots, y_m \sim q. \]
In both density-difference and density-ratio approaches, semi-parametric statistical models are used, which will be explained below.

2.2. Density-Difference Approach

The difference of two probability densities, f(x) = p(x) – q(x), is widely applied to statistical inference [10]. A parametric statistical model for the density difference f(x) is denoted as:

\[ \mathcal{M}_{\mathrm{diff}} = \{ f(x; \theta) = f_\theta(x) \mid \theta \in \Theta_k \}, \]
where Θk is the k-dimensional parameter space. The density-difference model, f(x; θ), can take both positive and negative values, and its integral should vanish. Note that there are infinitely many degrees of freedom to specify the probability densities, p and q, even when the density difference f = p − q is specified. Hence, the density-difference model is regarded as a semi-parametric model for probability densities.

Recently, a density-difference estimator from Samples (2) that does not involve separate estimation of the two probability densities has been proposed [10]. Once a density-difference estimator, f̂, is obtained, the L1-distance can be immediately estimated as:

\[ d_1(p, q) = \int |f(x)| \, dx \approx \int |\hat{f}(x)| \, dx. \]

The L1-distance has the invariance property under the variable change. More specifically, let x = ψ (z) be a one-to-one mapping on ℝd and fψ(z) be f(ψ(z))|Jψ(z)|, where Jψ is the Jacobian determinant of ψ. For f(x) = p(x) – q(x), the function, fψ(z), is the density difference between p and q in the z-coordinate. Then, we have:

\[ \int |f(x)| \, dx = \int |f_\psi(z)| \, dz, \]
due to the formula of variable change for probability densities. When the transformed data, z, with the model, fψ(z), is used instead of the model, f(x), the L1-distance in the z-coordinate can be found in the same way as the L1-distance in the x-coordinate.

Note that this invariance property does not hold for general distance measures. Indeed, we have:

\[ (d_s(p, q))^s = \int |f(x)|^s \, dx = \int |f_\psi(z)|^s \, |J_\psi(z)|^{1 - s} \, dz \]
for general distance measures.

2.3. Density-Ratio Approach

The density ratio of two probability densities, p(x) and q(x), is defined as r(x) = p(x)/q(x), which is widely applied in statistical inference, as is the density difference [4]. Let:

\[ \mathcal{M}_{\mathrm{ratio}} = \{ r(x; \theta) = r_\theta(x) \mid \theta \in \Theta_k \} \]
be the k-dimensional parametric statistical model of the density ratio, r(x). From the definition of the density ratio, the function, r(x; θ), should be non-negative. Various estimators of the density ratio based on the Samples (2) that do not involve separate estimation of the two probability densities have been developed so far [7,8,11]. Once a density-ratio estimator, r̂, is obtained, the L1-distance between p and q can be immediately estimated as:
\[ d_1(p, q) = \int |1 - r(y)| \, q(y) \, dy \approx \int |1 - \hat{r}(y)| \, q(y) \, dy \approx \frac{1}{m} \sum_{j=1}^{m} |1 - \hat{r}(y_j)|. \]
Unlike the L1-distance estimator using the density difference, the integral must be replaced with a sample mean, since the density, q(y), is unknown. In the density-ratio approach, a variable transformation also preserves the L1-distance, as in the density-difference case. For the one-to-one mapping y = ψ(z), let rψ(z) be r(ψ(z)) and the probability density, qψ(z), be q(ψ(z))|Jψ(z)|. Then, we have:
\[ d_1(p, q) = \int |1 - r(y)| \, q(y) \, dy = \int |1 - r_\psi(z)| \, q_\psi(z) \, dz \approx \frac{1}{m} \sum_{j=1}^{m} |1 - r_\psi(z_j)|, \]
where zj is the transformed sample, such that yj = ψ (zj). In the density-ratio approach, the L1-distance for the transformed data does not require the computation of the Jacobian determinant, Jψ.
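The sample-mean approximation above can be sketched numerically; here the true ratio of p = N(0,1) to q = N(1,1) stands in for an estimated r̂, which is an illustrative assumption.

```python
# Sketch of the sample-mean form of the ratio-based L1 estimator:
# (1/m) sum_j |1 - r_hat(y_j)| with y_j ~ q.  The true ratio is used in
# place of r_hat for illustration.
import numpy as np

rng = np.random.default_rng(0)
m = 200000
y = rng.normal(1.0, 1.0, size=m)   # y_1, ..., y_m ~ q = N(1, 1)

def r_true(t):
    # p/q for p = N(0,1), q = N(1,1) equals exp(-t + 1/2)
    return np.exp(-t + 0.5)

# sample-mean approximation of  \int |1 - r(y)| q(y) dy
d1_hat = np.mean(np.abs(1.0 - r_true(y)))
```

The Monte Carlo average converges to the exact L1-distance, about 0.766 for these two Gaussians.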

3. Bregman Scores

The Bregman score is an extension of the log-likelihood function, and it is widely applied in statistical inference [12,1518]. In this section, we briefly review the Bregman score. See [12] for details.

For functions f and g on ℝd, a Bregman score, S(f, g), is a real-valued functional satisfying the inequality:

\[ S(f, g) \ge S(f, f). \]
Clearly, the inequality becomes the equality for f = g. If the equality S(f, g) = S(f, f) leads to f = g, S(f, g) is called the strict Bregman score. The minimization problem of the strict Bregman score, S(f, g), i.e., ming S(f, g), has the uniquely optimal solution g = f.

Let us introduce the definition of Bregman scores. For a function, f, defined on the Euclidean space, ℝd, let G(f) be a real-valued convex functional. The functional, G(f), is called the potential below. The functional derivative of G(f) is denoted as G′(x; f), which is defined as the function satisfying the equality:

\[ \lim_{\varepsilon \to 0} \frac{G(f + \varepsilon h) - G(f)}{\varepsilon} = \int G'(x; f) \, h(x) \, \lambda(dx) \]
for any function, h(x), with a regularity condition, where λ(·) is the base measure. Then, the Bregman score, S(f, g), for functions f, g is defined as:
\[ S(f, g) = -G(g) - \int G'(x; g) \, (f(x) - g(x)) \, \lambda(dx). \]
Due to the convexity of G(f), we have:
\[ G(f) - G(g) - \int G'(x; g) \, (f(x) - g(x)) \, \lambda(dx) \ge 0, \]
which is equivalent to Inequality (6). Let ℱ be a set of functions defined on ℝd. If ℱ is a convex set and the potential G(f) is strictly convex on ℱ, the associated Bregman score is strict.

When G(f) is expressed as:

\[ G(f) = \int U(f(x)) \, \lambda(dx), \]
with a convex differentiable function U : ℝ → ℝ, the corresponding Bregman score is referred to as the separable Bregman score, which is given as:
\[ S(f, g) = \int \bigl\{ -U(g(x)) - U'(g(x)) \, (f(x) - g(x)) \bigr\} \, \lambda(dx). \]
Due to their computational tractability, separable Bregman scores are often used in real-world data analysis.
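A minimal numerical sketch of a separable Bregman score, assuming the grid, the choice U(z) = |z|^{1+α} (the density-power potential) and the test functions below; the defining inequality S(f, g) ≥ S(f, f) can then be checked directly.

```python
# Separable Bregman score on a grid with U(z) = |z|^(1+alpha), which
# reproduces the density-power score.  f and g are illustrative
# density differences (each integrates to zero).
import numpy as np

alpha = 3  # a positive odd integer, as used later for density differences

def U(z):
    return np.abs(z) ** (1 + alpha)

def dU(z):
    return (1 + alpha) * np.abs(z) ** (alpha - 1) * z

def separable_score(f, g, dx):
    # S(f, g) = \int { -U(g) - U'(g) (f - g) } dlambda
    return np.sum(-U(g) - dU(g) * (f - g)) * dx

x = np.linspace(-5.0, 5.0, 4001)
dx = x[1] - x[0]
f = np.exp(-0.5 * x ** 2) - np.exp(-0.5 * (x - 1.0) ** 2)
g = np.exp(-0.5 * (x - 0.3) ** 2) - np.exp(-0.5 * x ** 2)
```

By the pointwise convexity of U, the gap S(f, g) − S(f, f) is a sum of non-negative terms, so the inequality holds exactly up to round-off.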

If f is a probability density, the Bregman score is expressed as:

\[ S(f, g) = \int f(x) \, \ell(x, g) \, \lambda(dx), \]
where ℓ(x, g) is given by
\[ \ell(x, g) = -G'(x; g) - G(g) + \int G'(y; g) \, g(y) \, \lambda(dy). \]
The function, ℓ(x, g), is regarded as the loss of the forecast using g for an outcome, x ∈ ℝd. The function of the form of Equation (9) is called a proper scoring rule, and its relation to the Bregman score has been extensively investigated [12,17,19]. When the i.i.d. samples, x1, . . . , xn, are observed from the probability density, f, the minimization problem of the empirical mean over the probability model, g, i.e.,
\[ \min_{g} \ \frac{1}{n} \sum_{i=1}^{n} \ell(x_i, g), \]
is expected to provide a good estimate of the probability density, f.

Below, let us introduce exemplary Bregman scores:

Example 1 (Kullback–Leibler (KL) score). The Kullback–Leibler (KL) score for the probability densities, p(x) and q(x), is defined as:

\[ S(p, q) = -\int p(x) \log q(x) \, dx, \]
which is the separable Bregman score with the potential function:
\[ G(p) = \int p(x) \log p(x) \, dx, \]
i.e., the negative entropy. The difference S(p, q) − S(p, p) is called the KL divergence [20]. The KL score is usually defined for probability densities, but an extension to non-negative functions is also available. Hence, the KL score is applicable to the estimation of the density ratio [8,11]. However, it is not possible to directly use the KL score to estimate the density difference, because the density difference can take negative values.
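For a concrete instance of the empirical score minimization, note that with the KL score the problem reduces to maximum-likelihood estimation; the Gaussian model below is an illustrative assumption, and its closed-form MLE can be checked against nearby parameter values.

```python
# The empirical KL score (1/n) sum_i -log g(x_i) over a Gaussian model:
# its minimizer is the maximum-likelihood estimate, i.e. the sample mean
# and (population-normalized) standard deviation.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.5, size=5000)

def kl_score(mu, sigma):
    # (1/n) sum_i -log N(x_i; mu, sigma^2)
    return np.mean(0.5 * ((x - mu) / sigma) ** 2 + np.log(sigma)
                   + 0.5 * np.log(2.0 * np.pi))

mu_mle, sigma_mle = x.mean(), x.std()
best = kl_score(mu_mle, sigma_mle)

# the MLE scores no worse than nearby parameter values
for dmu in (-0.1, 0.1):
    for dsig in (-0.1, 0.1):
        assert best <= kl_score(mu_mle + dmu, sigma_mle + dsig)
```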

Example 2 (Density-power score). Let α be a positive number and f and g be functions that can take both positive and negative values. Then, the density-power score with the base measure λ(·) is defined as:

\[ S(f, g) = \alpha \int |g(x)|^{1+\alpha} \, \lambda(dx) - (1 + \alpha) \int f(x) \, |g(x)|^{\alpha - 1} g(x) \, \lambda(dx). \]
See [21,22] for the details of the density-power score for probability densities. The potential of the density power score is given as:
\[ G(f) = \int |f(x)|^{1+\alpha} \, \lambda(dx). \]
Hence, the density-power score is a separable Bregman score. In the limit α → 0, the difference of the density-power scores, (S(p, q) − S(p, p))/α, for probability densities p and q tends to the KL divergence.

Example 3 (Pseudo-spherical score; γ-score). For α > 0 and g ≠ 0, the pseudo-spherical score [16] is defined as:

\[ S(f, g) = -\frac{ \int f(x) \, |g(x)|^{\alpha - 1} g(x) \, \lambda(dx) }{ \bigl( \int |g(x)|^{1+\alpha} \, \lambda(dx) \bigr)^{\alpha/(1+\alpha)} }. \]
This is the Bregman score derived from the potential function:
\[ G(f) = \Bigl( \int |f(x)|^{1+\alpha} \, \lambda(dx) \Bigr)^{1/(1+\alpha)}, \]
implying that the pseudo-spherical score is a non-separable Bregman score. For probability densities, p and q, the monotone transformation of the pseudo-spherical score, − log(−S(p, q)), is called the γ-score [23], which is used for robust parameter estimation of probability densities. In the limiting case of α → 0, the difference of the γ-scores, − log(−S(p, q)) + log(−S(p, p)), recovers the KL divergence. Note that the corresponding potential is not strictly convex on a set of functions, but is strictly convex on the set of probability densities. As a result, the equality S(p, q) = S(p, p) for the probability densities, p, q, leads to p = q, while the equality S(f, g) = S(f, f) for functions f and g yields only that f and g are linearly dependent. The last assertion comes from the equality condition of Hölder's inequality.

When the model, f(x; θ), includes the scale parameter, c, i.e., f(x; θ) = cg(x; θ̄) with the parameter θ = (c, θ̄) ∈ Θk for c ∈ ℝ and θ̄ ∈ Θk−1, the pseudo-spherical score does not work. This is because the potential is not strictly convex on the statistical model with the scale parameter, and hence, the scale parameter, c, is not estimable when the pseudo-spherical score is used.

The density-power score and the pseudo-spherical score in the above examples include the non-negative parameter, α. When α is an odd integer, the absolute-value operator in the scores can be removed, which is computationally advantageous. For this reason, we set the parameter, α, to a positive odd integer when the Bregman score is used for the estimation of the density difference.

4. Direct Estimation of Density Differences and Density Ratios Using Bregman Scores

The Bregman scores are applicable not only to the estimation of probability densities, but also to the estimation of density differences and density ratios. In this section, we propose estimators for density differences and density ratios, and show their theoretical properties.

4.1. Estimators for Density Differences and Density Ratios

First of all, let us introduce a way to directly estimate the density difference based on the Bregman scores. Let ℳdiff be the statistical Model (3) to estimate the true density-difference f(x) = p(x) − q(x) defined on the Euclidean space, ℝd. Let the base measure, λ(·), be the Lebesgue measure. Then, for the density-difference model, fθ ∈ ℳdiff, the Bregman score Equation (8) is given as:

\[ S_{\mathrm{diff}}(f, f_\theta) = \int p(x) \, \ell(x, f_\theta) \, dx - \int q(x) \, \ell(x, f_\theta) \, dx, \]
where ℓ(x, fθ) is defined in Equation (10). This can be approximated by the empirical mean based on the Samples (2) as follows. Let δ be the Dirac delta function and f̃ be the difference between the two empirical densities,
\[ \tilde{f}(z) = \tilde{p}(z) - \tilde{q}(z) = \frac{1}{n} \sum_{i=1}^{n} \delta(z - x_i) - \frac{1}{m} \sum_{j=1}^{m} \delta(z - y_j). \]
Then, we have:
\[ S_{\mathrm{diff}}(f, f_\theta) \approx S_{\mathrm{diff}}(\tilde{f}, f_\theta) = \frac{1}{n} \sum_{i=1}^{n} \ell(x_i, f_\theta) - \frac{1}{m} \sum_{j=1}^{m} \ell(y_j, f_\theta). \]
If the target density difference, f, is included in the model ℳdiff, the minimizer of the strict Bregman score, Sdiff(f̃, fθ), with respect to fθ ∈ ℳdiff is expected to produce a good estimator of f.
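A sketch of this direct estimation for the density-power score at α = 1 (the least-squares case); the one-parameter linear model f_θ = θψ and the sample sizes are illustrative assumptions, chosen so that the true parameter is θ = 1 for p = N(0,1) and q = N(1,1).

```python
# Direct density-difference estimation, alpha = 1 density-power score.
# Empirical objective: theta^2 * \int psi^2 dx - 2 theta * [mean psi(x_i)
# - mean psi(y_j)], minimized in closed form.
import numpy as np

rng = np.random.default_rng(2)
n = m = 50000
x = rng.normal(0.0, 1.0, size=n)   # x_i ~ p = N(0, 1)
y = rng.normal(1.0, 1.0, size=m)   # y_j ~ q = N(1, 1)

def psi(t):
    # model basis: difference of the two Gaussian pdfs, so the true
    # density difference is f = 1.0 * psi
    c = 1.0 / np.sqrt(2.0 * np.pi)
    return c * (np.exp(-0.5 * t ** 2) - np.exp(-0.5 * (t - 1.0) ** 2))

grid = np.linspace(-10.0, 10.0, 100001)
dg = grid[1] - grid[0]
A = np.sum(psi(grid) ** 2) * dg    # \int f_theta^2 dx at theta = 1

b = psi(x).mean() - psi(y).mean()  # empirical linear term
theta_hat = b / A                  # closed-form minimizer

# plug-in L1-distance estimate  \int |theta_hat * psi(x)| dx
d1_hat = abs(theta_hat) * np.sum(np.abs(psi(grid))) * dg
```

With α = 1 this reproduces a least-squares density-difference fit, and the plug-in L1 estimate tracks the exact value of about 0.766.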

Next, we use the Bregman scores to estimate the density ratio r(x) = p(x)/q(x). Let us define q(x) as the base measure of the Bregman score. Given the density-ratio model, ℳratio, of Equation (5), the Bregman score of the model, rθ ∈ ℳratio, is given as:

\[ S_{\mathrm{ratio}}(r, r_\theta) = -G(r_\theta) - \int G'(x; r_\theta) \, (r(x) - r_\theta(x)) \, q(x) \, dx = -G(r_\theta) - \int G'(x; r_\theta) \, p(x) \, dx + \int G'(x; r_\theta) \, r_\theta(x) \, q(x) \, dx. \]
Using the Samples (2), we can approximate the score, Sratio(r, rθ), with the base measure, q, as:
\[ S_{\mathrm{ratio}}(r, r_\theta) \approx -G(r_\theta) - \frac{1}{n} \sum_{i=1}^{n} G'(x_i; r_\theta) + \frac{1}{m} \sum_{j=1}^{m} G'(y_j; r_\theta) \, r_\theta(y_j). \]
For example, the density-power score for the density ratio is given as:
\[ S_{\mathrm{ratio}}(r, r_\theta) = -(1 + \alpha) \int r_\theta(x)^{\alpha} \, r(x) \, q(x) \, dx + \alpha \int r_\theta(x)^{1+\alpha} \, q(x) \, dx = -(1 + \alpha) \int r_\theta(x)^{\alpha} \, p(x) \, dx + \alpha \int r_\theta(x)^{1+\alpha} \, q(x) \, dx, \]
in which the equality r(x)q(x) = p(x) is used. A similar approximation is obtained for the pseudo-spherical score.
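The analogous direct estimation of the density ratio with the density-power score at α = 1 can be sketched as follows; the one-parameter model r_θ(x) = θ exp(1/2 − x), which contains the true ratio of p = N(0,1) to q = N(1,1) at θ = 1, is an illustrative assumption.

```python
# Direct density-ratio estimation, alpha = 1 density-power score.
# Empirical objective: -2 theta mean_p[base] + theta^2 mean_q[base^2],
# minimized in closed form.
import numpy as np

rng = np.random.default_rng(3)
n = m = 50000
x = rng.normal(0.0, 1.0, size=n)   # x_i ~ p = N(0, 1)
y = rng.normal(1.0, 1.0, size=m)   # y_j ~ q = N(1, 1)

def base(t):
    # fixed shape of the ratio model; the true ratio p/q equals exp(1/2 - t)
    return np.exp(0.5 - t)

theta_hat = base(x).mean() / (base(y) ** 2).mean()   # true value: theta = 1

# ratio-based plug-in L1 estimate:  (1/m) sum_j |1 - r_hat(y_j)|
d1_hat = np.mean(np.abs(1.0 - theta_hat * base(y)))
```

The heavier sampling noise here, driven by the unbounded ratio, already hints at the robustness issues analyzed in the following sections.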

4.2. Invariance of Estimators

We show that the estimators obtained by the density-power score and the pseudo-spherical score have the affine invariance property. Suppose that the Samples (2) are distributed on the d-dimensional Euclidean space, and let us consider the affine transformation of samples, such that xi = Ax′i + b and yj = Ay′j + b, where A is an invertible matrix and b is a vector. Let fA,b(x′) be the transformed density-difference |det A| f(Ax′ + b) and f̃A,b be the difference of the empirical distributions defined from the samples, x′i, y′j. Let Sdiff(f, g) be the density-power score or the pseudo-spherical score with a positive odd integer, α, for the estimation of the density difference. Then, we have:

\[ S_{\mathrm{diff}}(\tilde{f}_{A,b}, f_{A,b}) = |\det A|^{\alpha} \, S_{\mathrm{diff}}(\tilde{f}, f). \]
Let \( \hat{f} \) and \( \widehat{f_{A,b}} \) be the estimators based on the samples \( \{x_i\}, \{y_j\} \) and \( \{x'_i\}, \{y'_j\} \), respectively. Then, the above equality leads to \( (\hat{f})_{A,b} = \widehat{f_{A,b}} \), implying that the estimator is invariant under the affine transformation of the data. In addition, Equality (4) leads to:
\[ \int |\hat{f}(x)| \, dx = \int |(\hat{f})_{A,b}(x')| \, dx' = \int |\widehat{f_{A,b}}(x')| \, dx'. \]
This implies that the affine transformation of the data does not affect the estimated L1-distance. The same invariance property holds for the density-ratio estimators based on the density-power score and the pseudo-spherical score.

5. Robustness Measure

The robustness of the estimator is an important feature in practice, since typically real-world data includes outliers that may undermine the reliability of the estimator. In this section, we introduce robustness measures of estimators against outliers.

In order to define robustness measures, let us briefly introduce the influence function in the setup of density-difference estimation. Let p(x) and q(x) be the true probability densities of the two datasets in Samples (2). Suppose that these densities are shifted to:

\[ p_\varepsilon(x) = (1 - \varepsilon) p(x) + \varepsilon \delta(x - z_p), \qquad q_\varepsilon(x) = (1 - \varepsilon) q(x) + \varepsilon \delta(x - z_q), \]
by the outliers, zp and zq, respectively. A small positive number, ε, denotes the ratio of outliers. Let θ* be the true model parameter of the density difference f(x) = p(x) − q(x), i.e., f(x) = f(x; θ*) ∈ ℳdiff. Let us define the parameter, θε, as the minimum solution of the problem,
\[ \min_{\theta \in \Theta} \ S_{\mathrm{diff}}(p_\varepsilon - q_\varepsilon, f_\theta). \]
Clearly, θ0 = θ* holds. For the density-difference estimator using the Bregman score, Sdiff (f, fθ), with the model, diff, the influence function is defined as:
\[ \mathrm{IF}_{\mathrm{diff}}(\theta^*; z_p, z_q) = \lim_{\varepsilon \to 0} \frac{\theta_\varepsilon - \theta^*}{\varepsilon}. \]
Intuitively, the estimated parameter is distributed around θ* + ε · IFdiff (θ*; zp, zq) under the existence of outliers, zp and zq, with the small contamination ratio, ε. The influence function for the density ratio is defined in the same manner, and it is denoted as IFratio(θ*; zp, zq).

The influence function provides several robustness measures of estimators. An example is the gross error sensitivity defined as supzp,zq ‖IFdiff (θ*; zp, zq)‖, where ‖ · ‖ is the Euclidean norm. The estimator that uniformly minimizes the gross error sensitivity over the parameter, θ, is called the most B(bias)-robust estimator. The most B-robust estimator minimizes the worst-case influence of outliers. For the one-dimensional normal distribution, the median estimator is the most B-robust for the estimation of the mean parameter [24].

In this paper, we consider another robustness measure, called the redescending property. The estimator satisfying the following redescending property,

\[ \lim_{\|z_p\|, \|z_q\| \to \infty} \mathrm{IF}_{\mathrm{diff}}(\theta; z_p, z_q) = 0 \quad \text{for all } \theta \in \Theta, \]
is called a redescending estimator [23–26]. The redescending property is preferable for stable inference, since the influence of extreme outliers can be ignored. Furthermore, in the machine learning literature, learning algorithms with the redescending property for classification problems have been proposed under the name of robust support vector machines [27–29]. Note that the most B-robust estimator is not necessarily a redescending estimator, and vice versa. It is known that, for the estimation of probability densities, the pseudo-spherical score has the redescending property, while the density-power score does not necessarily provide a redescending estimator [23,30].
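The influence function can be approximated by a finite difference in ε. The sketch below does this for the α = 1 density-power (least-squares) density-difference estimator under an illustrative one-parameter model, where θ_ε has a closed form; the influence of remote outliers tends to a non-zero constant, anticipating the non-redescending behavior analyzed in the next section.

```python
# Finite-difference influence function for the alpha = 1 density-power
# density-difference estimator with the model f_theta = theta * psi,
# psi = N(0,1)-pdf - N(1,1)-pdf (true theta* = 1, an illustrative setup).
import numpy as np

grid = np.linspace(-30.0, 30.0, 600001)
dg = grid[1] - grid[0]

def psi(t):
    c = 1.0 / np.sqrt(2.0 * np.pi)
    return c * (np.exp(-0.5 * t ** 2) - np.exp(-0.5 * (t - 1.0) ** 2))

A = np.sum(psi(grid) ** 2) * dg    # \int psi^2 dx

def theta_eps(eps, zp, zq):
    # population minimizer of theta^2 A - 2 theta \int psi d(p_eps - q_eps)
    return ((1.0 - eps) * A + eps * (psi(zp) - psi(zq))) / A

eps = 1e-6
IF_near = (theta_eps(eps, 2.0, -1.0) - 1.0) / eps       # finite outliers
IF_remote = (theta_eps(eps, 100.0, -100.0) - 1.0) / eps
# IF_remote ~= -1: the influence does not vanish for remote outliers,
# so this density-power estimator is not redescending.
```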

In the next sections, we apply the density-power score and the pseudo-spherical score to estimate the density difference or the density ratio and investigate their robustness.

6. Robustness under Non-Scale Models

In this section, we consider statistical models without the scaling parameter, and investigate the robustness of the density-difference and density-ratio estimators based on the density-power score and the pseudo-spherical score.

6.1. Non-Scale Models

The model satisfying the following assumption is called the non-scale model:

Assumption 1. Let ℳ be the model of density differences or density ratios. For c ∈ ℝ and f ∈ ℳ, such that c ≠ 0 and f ≠ 0, cf ∈ ℳ holds only when c = 1.

The density-power score and the pseudo-spherical score are strict Bregman scores on the non-scale models. Indeed, the density-power score is a strict Bregman score, as pointed out in Example 2. For the pseudo-spherical score, suppose that the equality S(f, g) = S(f, f) holds for the non-zero functions, f and g. Then, g is proportional to f. When f and g are both included in a non-scale model, we have f = g. Thus, the pseudo-spherical score on a non-scale model is also a strict Bregman score.

6.2. Density-Difference Approach

Here, we consider the robustness of density-difference estimation using the non-scale models. Assumption 1 implies that the model, fθ(x), does not include the scale parameter. An example is the model consisting of two probability models,

\[ f_\theta(x) = p_{\theta_1}(x) - p_{\theta_2}(x), \qquad \theta = (\theta_1, \theta_2), \]
such that θ1 ≠ θ2, where pθ1 and pθ2 are parametric models of the normal distributions. The above model, fθ, is still a semi-parametric model, because even when fθ(x) is specified, the pair of probability densities, p and q, such that fθ = pq, have infinitely many degrees of freedom.

The following theorem shows the robustness of the density-difference estimator. The proof is found in Appendix A.

Theorem 1. Suppose that Assumption 1 holds for the density-difference model, ℳdiff. We assume that the true density-difference, f, is included in ℳdiff and that f = fθ* ∈ ℳdiff holds. For the Bregman score, Sdiff(f, g), of the density difference, let J be the matrix, each element of which is given as:

\[ J_{ij} = \left. \frac{\partial^2}{\partial \theta_i \, \partial \theta_j} S_{\mathrm{diff}}(f, f_\theta) \right|_{\theta = \theta^*}. \]
Suppose that J is invertible. Then, under the model, ℳdiff, the influence function of the density-power score with a positive odd parameter, α, is given as:
\[ \mathrm{IF}_{\mathrm{diff}}(\theta^*; z_p, z_q) = \alpha (1 + \alpha) J^{-1} \Bigl( f(z_p)^{\alpha - 1} \nabla_\theta f(z_p; \theta^*) - f(z_q)^{\alpha - 1} \nabla_\theta f(z_q; \theta^*) - \int f(x)^{\alpha} \nabla_\theta f(x; \theta^*) \, dx \Bigr), \]
where ∇θ f is the k-dimensional gradient vector of the function, f, with respect to the parameter, θ. In addition, we suppose that f is not the zero function. Then, under the model, ℳdiff, the influence function of the pseudo-spherical score with a positive odd parameter, α, is given as:
\[ \mathrm{IF}_{\mathrm{diff}}(\theta^*; z_p, z_q) = \alpha (1 + \alpha) J^{-1} \Bigl( \bigl( f(z_p)^{\alpha} - f(z_q)^{\alpha} \bigr) \frac{\int f(x)^{\alpha} \nabla_\theta f(x; \theta^*) \, dx}{\int f(x)^{1+\alpha} \, dx} - f(z_p)^{\alpha - 1} \nabla_\theta f(z_p; \theta^*) + f(z_q)^{\alpha - 1} \nabla_\theta f(z_q; \theta^*) \Bigr). \]

Theorem 1 implies that the density-difference estimation with the pseudo-spherical score under non-scale models has the redescending property. For the density difference f = pq, the limiting condition,

\[ \lim_{\|x\| \to \infty} f(x) = 0, \]
will hold in many practical situations. Hence, for α > 1, the assumption:
\[ \lim_{\|x\| \to \infty} f_\theta(x)^{\alpha - 1} \nabla_\theta f_\theta(x; \theta) = 0 \]
for all θ ∈ Θk will not be a strong condition for the density-difference model. Under the above limiting conditions, the influence Function (13) tends to zero as zp and zq go to infinity. As a result, the pseudo-spherical score produces a redescending estimator. On the other hand, the density-power score does not have the redescending property, since the last term in Equation (12) does not vanish when zp and zq tend to infinity.

Let us consider the L1-distance estimation using the density-difference estimator. The L1-distance estimator under the Contamination (11) is distributed around:

\[ \int |f(x; \theta_\varepsilon)| \, dx \approx \int \bigl| f(x; \theta^*) + \varepsilon \, \mathrm{IF}_{\mathrm{diff}}(\theta^*; z_p, z_q)^\top \nabla_\theta f(x; \theta^*) \bigr| \, dx, \]
which implies that the bias term is expressed as the inner product of the influence function and the gradient of the density-difference model. Let bdiff,ε be:
\[ b_{\mathrm{diff}, \varepsilon} = \varepsilon \int \bigl| \mathrm{IF}_{\mathrm{diff}}(\theta^*; z_p, z_q)^\top \nabla_\theta f(x; \theta^*) \bigr| \, dx. \]
Then, the bias of the L1-distance estimator induced by outliers is approximately bounded above by bdiff,ε. Since the pseudo-spherical score with the non-scale model provides the redescending estimator for the density difference, the L1-distance estimator based on the pseudo-spherical score also has the redescending property against the outliers.

6.3. Density-Ratio Approach

The following theorem provides the influence function of the density-ratio estimators. Since the proof is almost the same as that of Theorem 1, we omit the detailed calculation.

Theorem 2. Suppose that Assumption 1 holds for the density-ratio model, ℳratio. We assume that the true density-ratio r(x) = p(x)/q(x) is included in:

\[ \mathcal{M}_{\mathrm{ratio}} = \{ r(x; \theta) = r_\theta(x) \mid \theta \in \Theta_k \}, \]
and that r = rθ* ∈ ℳratio holds. For the Bregman score, Sratio(r, rθ), with the base measure, q(x), let J be the matrix, each element of which is given as:
\[ J_{ij} = \left. \frac{\partial^2}{\partial \theta_i \, \partial \theta_j} S_{\mathrm{ratio}}(r, r_\theta) \right|_{\theta = \theta^*}. \]
Suppose that J is invertible. Then, the influence function of the density-power score with a positive real parameter, α, is given as:
\[ \mathrm{IF}_{\mathrm{ratio}}(\theta^*; z_p, z_q) = \alpha (1 + \alpha) J^{-1} \bigl( r(z_p)^{\alpha - 1} \nabla_\theta r(z_p; \theta^*) - r(z_q)^{\alpha} \nabla_\theta r(z_q; \theta^*) \bigr). \]
The influence function of the pseudo-spherical score with a positive real parameter, α, is given as:
\[ \mathrm{IF}_{\mathrm{ratio}}(\theta^*; z_p, z_q) = \alpha (1 + \alpha) J^{-1} \Bigl( \bigl( r(z_p)^{\alpha} - r(z_q)^{\alpha + 1} \bigr) \frac{\int r(x)^{\alpha} \nabla_\theta r(x; \theta^*) \, q(x) \, dx}{\int r(x)^{\alpha + 1} \, q(x) \, dx} + r(z_p)^{\alpha - 1} \nabla_\theta r(z_p; \theta^*) - r(z_q)^{\alpha} \nabla_\theta r(z_q; \theta^*) \Bigr). \]

The density ratio is a non-negative function. Hence, the absolute value in the density-power score and the pseudo-spherical score can be dropped. As a result, the parameter, α, in these scores is allowed to take any positive real number in the above theorem.

For the density ratio r(x) = p(x)/q(x), a typical limiting condition is:

\[ \lim_{\|x\| \to \infty} r(x) = \infty. \]
For example, the density ratio of two Gaussian distributions with the same variance and different means leads to an unbounded density ratio. Hence, the influence function can tend to infinity. As a result, the density-ratio estimator is sensitive to the shift in probability distributions.
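A small numerical contrast of the two limiting conditions, assuming p = N(0,1) and q = N(1,1): the density difference vanishes in the tails, while the density ratio p/q = exp(−x + 1/2) diverges as x → −∞.

```python
# Bounded density difference versus unbounded density ratio for
# p = N(0,1) and q = N(1,1).
import numpy as np

def gauss(x, mu):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)

xs = np.array([-5.0, -10.0, -20.0])
diff = np.abs(gauss(xs, 0.0) - gauss(xs, 1.0))   # tends to 0
ratio = gauss(xs, 0.0) / gauss(xs, 1.0)          # equals exp(-x + 0.5), blows up
```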

Let us consider the L1-distance estimation using the density ratio. The L1-distance estimator under the Contamination (11) is distributed around:

\[ \int |1 - r(y; \theta_\varepsilon)| \, q(y) \, dy \approx \int \bigl| 1 - r(y; \theta^*) - \varepsilon \, \mathrm{IF}_{\mathrm{ratio}}(\theta^*; z_p, z_q)^\top \nabla_\theta r(y; \theta^*) \bigr| \, q(y) \, dy. \]
Thus, the bias of the L1-distance estimator induced by the outliers is approximately bounded above by:
\[ b_{\mathrm{ratio}, \varepsilon} = \varepsilon \int \bigl| \mathrm{IF}_{\mathrm{ratio}}(\theta^*; z_p, z_q)^\top \nabla_\theta r(y; \theta^*) \bigr| \, q(y) \, dy. \]
The influence function for the density-ratio estimator can take an arbitrarily large value. In addition, the empirical approximation of the integral is also affected by outliers. Hence, the density-ratio estimator does not necessarily provide a robust estimator for the L1-distance measure.

7. Robustness under Scale Models

In this section, we consider the estimation of density differences using models with a scale parameter. For such models, the pseudo-spherical score does not work, as shown in Example 3. Furthermore, in the previous section, we showed that density-ratio estimation is unstable against gross outliers. Hence, in this section, we focus on density-difference estimation using the density-power score with scale models.

7.1. Decomposition of Density-Difference Estimation Procedure

We show that the estimation procedure of the density difference using the density-power score is decomposed into two steps: estimation using the pseudo-spherical score with the non-scale model and estimation of the scale parameter. Note that the estimation in the first step has already been investigated in the last section.

Let us consider the statistical model satisfying the following assumption:

Assumption 2. Let ℱ_diff be the model for the density difference. For all f ∈ ℱ_diff and all c ∈ ℝ, cf ∈ ℱ_diff holds.

The model satisfying the above assumption is referred to as the scale model. A typical example of the scale model is the linear model:

ℱ_diff = { Σ_{ℓ=1}^{k} θ_ℓ ψ_ℓ(x) | θ_ℓ ∈ ℝ, ℓ = 1, …, k },
where the ψ_ℓ are basis functions such that ∫ ψ_ℓ(x) dx = 0 holds for all ℓ = 1, …, k.

Suppose that the k-dimensional scale model, ℱ_diff, is parametrized as:

ℱ_diff = { f(x; θ) = c g_θ̄(x) | c ∈ ℝ, θ̄ ∈ Θ ⊂ ℝ^{k−1}, θ = (c, θ̄) }.    (14)

The parameter, c, is the scale parameter, and θ̄ in Equation (14) is called the shape parameter. We assume that no g_θ̄ is equal to the zero function. The parametrization of the model, ℱ_diff, may not provide a one-to-one correspondence between the parameter θ = (c, θ̄) and the function, cg_θ̄, e.g., at c = 0. We assume that, in the vicinity of the true density difference, the parameter, θ, and the function, cg_θ̄, are in one-to-one correspondence. Define the model, ℱ_diff,c, to be the (k − 1)-dimensional non-scale model:

ℱ_diff,c = { c g_θ̄(x) | θ̄ ∈ Θ ⊂ ℝ^{k−1} }.

For the pseudo-spherical score, the equality S(f, g) = S(f, cg) holds for c > 0. Thus, the scale parameter is not estimable. Let us study the statistical property of the estimator based on the density-power score with the scale models:

Theorem 3. Let us consider the density-difference estimation. Let S^pow_{diff,α}(f, g) and S^ps_{diff,α}(f, g) denote the density-power score and the pseudo-spherical score, respectively, with a positive odd number, α, and with the Lebesgue measure as the base measure. Let f_0 be a function, and let c̄g_θ̄ ∈ ℱ_diff be the optimal solution of the problem,

min_f S^pow_{diff,α}(f_0, f),  s.t.  f ∈ ℱ_diff.

We assume that c̄g_θ̄ ≠ 0. Then, g_θ̄ is given as the optimal solution of:

min_g S^ps_{diff,α}(f_0, g),  s.t.  g ∈ ℱ_diff,c ∪ ℱ_diff,−c,    (15)

where c is any fixed non-zero constant. In addition, the optimal scale parameter is expressed as:

c̄ = ∫ f_0(x) g_θ̄(x)^α dx / ∫ g_θ̄(x)^{1+α} dx.

The empirical density-difference is allowed as the function, f_0, in the above theorem. The proof is found in Appendix B. The same theorem for non-negative functions is shown in [25].

Theorem 3 indicates that the minimization of the density-power score on the scale model is decomposed into two stages. Suppose that the true density-difference is f_0 = p − q = c*g_θ̄* ∈ ℱ_diff. At the first stage of the estimation, the minimization problem (15) is solved on the non-scale model ℱ_diff,c* ∪ ℱ_diff,−c*. Then, at the second stage, the scale parameter is estimated. Though c* is unknown, the estimation procedure can be virtually interpreted as this two-stage procedure using the non-scale models ℱ_diff,±c*.
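The second stage admits a simple numerical illustration. The sketch below (our own toy check using grid-based integration; the Gaussian-difference shape g and the scale value 0.7 are arbitrary choices, not from the paper) verifies that the closed-form scale c̄ = ∫ f_0 g^α dx / ∫ g^{1+α} dx recovers the true scale when f_0 = c* g:

```python
import numpy as np

# Toy check of the scale formula in Theorem 3 with alpha = 3 (a positive odd number).
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

def gauss(x, mu, s):
    return np.exp(-(x - mu) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))

g = gauss(x, 0.0, 1.0) - gauss(x, 1.0, 1.0)  # shape function; integrates to zero
c_star = 0.7                                  # assumed true scale
f0 = c_star * g                               # true density difference c* g

alpha = 3
c_bar = (np.sum(f0 * g ** alpha) * dx) / (np.sum(g ** (1 + alpha)) * dx)
# c_bar recovers c_star up to floating-point error.
```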

7.2. Statistical Properties of Density-Difference Estimation

Based on the two-stage procedure for the minimization of the density-power score, we investigate the statistical properties of the density-difference estimator.

As shown in Section 6.2, the estimator using the pseudo-spherical score over the non-scale model, ℱ_diff,c*, has the redescending property. Hence, extreme outliers have little impact on the estimation of the shape parameter, θ̄. Under the Contamination (11), let us define θ̄_ε as the optimal solution of the problem,

min_g S^ps_α(p_ε − q_ε, g),  g ∈ ℱ_diff,c*.
As the outliers, z_p and z_q, tend to infinity, we have:
θ̄_ε = θ̄* + o(ε),
because the estimation of the shape parameter using the pseudo-spherical score with the non-scale model has the redescending property, as shown in the last section.

The scale parameter is given as:

c_ε = ∫ (p_ε(x) − q_ε(x)) g_{θ̄_ε}(x)^α dx / ∫ g_{θ̄_ε}(x)^{1+α} dx = (1 − ε) ∫ f(x) g_{θ̄_ε}(x)^α dx / ∫ g_{θ̄_ε}(x)^{1+α} dx + ε ( g_{θ̄_ε}(z_p)^α − g_{θ̄_ε}(z_q)^α ) / ∫ g_{θ̄_ε}(x)^{1+α} dx.
As the outliers, z_p and z_q, tend to infinity, the second term in the above expression converges to zero. Hence, the scale parameter is given as:
c_ε = (1 − ε) c* + o(ε),
from which we have:
c_ε g_{θ̄_ε} = (1 − ε)(p − q) + o(ε).

The above analysis shows that the extreme outliers with the small contamination ratio, ε, will make the intensity of the estimated density-difference smaller by the factor, 1 − ε, and the estimated density-difference is distributed around (1 − ε)(pq). Hence, the contamination with extreme outliers has little impact on the shape parameter in the density-difference estimator, when the density-power score is used.

Let us consider the L1-distance estimation using the density-difference estimator. Suppose that the true density-difference, pq, is estimated by the density-power score with the scale model. Then, the L1-distance estimator under the Contamination (11) is distributed around:

∫ |c_ε g_{θ̄_ε}(x)| dx = (1 − ε) ∫ |p(x) − q(x)| dx + o(ε).    (16)
When the contamination ratio, ε, is small, even extreme outliers do not significantly affect the L1-distance estimator; the bias induced by the extreme outliers depends only on the contamination ratio. If prior knowledge of the contamination ratio, ε, is available, one can approximately correct the bias by multiplying the L1-distance estimator by the constant (1 − ε)^{−1}.
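The correction amounts to a one-line rescaling. A minimal sketch (our own illustration; the numbers are hypothetical, with the true L1-distance between N(0, 1) and N(1, 1) approximated as 0.766):

```python
# The estimator concentrates around (1 - eps) * d1, so dividing by (1 - eps)
# approximately removes the outlier-induced bias when eps is known.
eps = 0.01              # assumed-known contamination ratio
d1_true = 0.766         # true L1 distance (approximate, for illustration)
d1_est = (1 - eps) * d1_true       # value the contaminated estimator concentrates around
d1_corrected = d1_est / (1 - eps)  # bias-corrected estimate, recovering d1_true
```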

8. Numerical Experiments

We conducted numerical experiments to evaluate the statistical properties of L1-distance estimators. We used synthetic datasets. Let N(μ, σ2) be the one-dimensional normal distribution with mean μ and variance σ2. In the standard setup, let us assume that the samples are drawn from the normal distributions,

x_1, …, x_n ∼ N(0, 1),  y_1, …, y_m ∼ N(1, 1).

In addition, some outliers are observed from:
x̃_1, …, x̃_{n′} ∼ N(0, τ²),  ỹ_1, …, ỹ_{m′} ∼ N(0, τ²),
where the scale, τ, is much larger than one. Based on the two datasets, {x_1, …, x_n, x̃_1, …, x̃_{n′}} and {y_1, …, y_m, ỹ_1, …, ỹ_{m′}}, the L1-distance between N(0, 1) and N(1, 1) is estimated.
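The sampling scheme above can be sketched as follows (our own setup; the sizes n, n′ and the scale τ match values used in the experiments):

```python
import numpy as np

# Generate the synthetic datasets of Section 8: inliers from N(0,1) and N(1,1),
# with outliers from the wide normal N(0, tau^2) appended to both samples.
rng = np.random.default_rng(0)
n = m = 1000          # inlier sample sizes
n_out = m_out = 10    # numbers of outliers (n', m')
tau = 100.0           # outlier scale, much larger than one

x = np.concatenate([rng.normal(0.0, 1.0, n), rng.normal(0.0, tau, n_out)])
y = np.concatenate([rng.normal(1.0, 1.0, m), rng.normal(0.0, tau, m_out)])
```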

Below, we show the models and estimators used in the L1-distance estimation. The non-scale model for the density-difference is defined as:

f_θ(x) = ϕ(x; μ_1, σ_1) − ϕ(x; μ_2, σ_2),  θ = (μ_1, μ_2, σ_1, σ_2),
where ϕ(x; μ, σ) is the probability density of N(μ, σ²). To estimate the parameters, the density-power score and the pseudo-spherical score are used. As the scale model, we employ cf_θ(x) with the parameter, θ, and c > 0. As shown in Equation (16), the estimator is biased when the samples are contaminated by outliers. Ideally, multiplying the L1-distance estimator with the scale model by (1 − ε)^{−1} will improve the estimation accuracy, where ε is the contamination ratio n′/n (= m′/m). In the numerical experiments, this bias-corrected estimator was also examined, though the bias correction requires prior knowledge of the contamination ratio. For the statistical model for density-ratio estimation, we used the scale model,
r(x; θ) = exp{θ_0 + θ_1 x + θ_2 x²},  θ ∈ ℝ³,
and the density-power score as the loss function. Furthermore, we evaluated the two-step approach in which the L1-distance is estimated from the separately estimated probability densities. We employed the density-power score with the statistical model, ϕ(x; μ, σ), to estimate the probability density of each dataset.
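The separate density estimation in the two-step approach can be sketched as follows. This is our own illustration, not the paper's code: it fits the Gaussian model ϕ(x; μ, σ) by minimizing the empirical density-power score, using the closed form ∫ ϕ^{1+α} dx = (2πσ²)^{−α/2}/√(1+α) and a dependency-free grid search in place of a proper optimizer:

```python
import numpy as np

# Robust Gaussian density estimation with the density-power score (alpha = 1).
# Gross outliers have near-zero model density, so they contribute almost
# nothing to the empirical term of the score.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 2000),   # inliers from N(0,1)
                       rng.normal(0.0, 100.0, 20)])  # gross outliers

alpha = 1.0

def dp_score(mu, sigma):
    phi = np.exp(-(data - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    integral = (2 * np.pi * sigma ** 2) ** (-alpha / 2) / np.sqrt(1 + alpha)
    return alpha * integral - (1 + alpha) * np.mean(phi ** alpha)

# Coarse grid search keeps the sketch self-contained; any optimizer would do.
mus = np.linspace(-1.0, 1.0, 81)
sigmas = np.linspace(0.5, 2.0, 61)
score, mu_hat, sigma_hat = min((dp_score(m, s), m, s) for m in mus for s in sigmas)
# mu_hat and sigma_hat land near (0, 1) despite the contamination.
```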

In the numerical experiments, the error of the L1-distance estimator, d̂_1(p, q), was measured by the relative error, |1 − d̂_1(p, q)/d_1(p, q)|. The number of training samples varied from 1,000 to 10,000, and the number of outliers varied from zero (no outlier) to 100. The parameter, α, in the score function was set to α = 1 or 3 for the density-difference (DF)-based estimators and α = 0.1 for the density-ratio (DR)-based estimators. For density-ratio estimation, the score with large α easily yields numerical errors, since the power of the exponential model tends to become extremely large. For each setup, the averaged relative error of each estimator was computed over 100 iterations.
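For reference, the relative-error criterion can be computed as follows. This sketch assumes that d_1(p, q) for N(0, 1) versus N(1, 1) equals 2(Φ(1/2) − Φ(−1/2)), where Φ is the standard normal CDF (the two densities cross at x = 1/2), and the estimate d1_hat below is hypothetical:

```python
from math import erf, sqrt

def std_normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# True L1 distance between N(0,1) and N(1,1); the densities cross at x = 1/2.
d1_true = 2.0 * (std_normal_cdf(0.5) - std_normal_cdf(-0.5))  # ~0.766
d1_hat = 0.75                          # hypothetical estimator output
rel_err = abs(1.0 - d1_hat / d1_true)  # relative error used in the experiments
```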

The numerical results are depicted in Figure 1, and details are shown in Table 1. In the figure, estimators with extremely large relative errors are omitted. As shown in Table 1, the estimation accuracy of the DR-based estimator was severely degraded by the contaminated samples. On the other hand, the DF-based estimators were robust against outliers. The DF-based estimator with the pseudo-spherical score is less accurate than that with the density-power score. In statistical inference, there is a tradeoff between efficiency and robustness. Though the pseudo-spherical score provides a redescending estimator, the efficiency of the estimator is not high in practice. In the estimation of a probability density, the pseudo-spherical score with the parameter, α, ranging from 0.1 to one provides a robust and efficient estimator, whereas the estimator with large α becomes inefficient [23], because it tends to ignore most of the samples. In the density-difference estimation, the parameter, α, must be a positive odd number; hence, in our setup, the estimator using the pseudo-spherical score became inefficient. In terms of the density-power score, the corresponding DF-based estimator has a bounded influence function. As a result, the estimator is efficient and rather robust against the outliers. Furthermore, we found that the bias correction by multiplying by the constant factor, (1 − ε)^{−1}, improves the estimation accuracy.

When there is no outlier, the two-step approach using the separately estimated probability densities has larger relative errors than the DF-based estimators using the density-power score. For the contaminated samples, the two-step approach is superior to the other methods, especially when the sample size is less than 2,000. In this case, the separate density estimation with the density-power score efficiently reduces the influence of the outliers. For the larger sample size, however, the DF-based estimators using the density-power score are comparable with the two-step approach. When the rate of the outliers is moderate, the DF-based approach works well, even though the statistical model is based on the semiparametric modeling, which has less information than the parametric modeling used in the two-step approach.

9. Conclusions

In this paper, we first proposed to use the Bregman scores to estimate density differences and density ratios, and then, we studied the robustness of the L1-distance estimator. We showed that the pseudo-spherical score provides a redescending estimator of the density difference under non-scale models, whereas the estimator based on the density-power score does not have the redescending property against extreme outliers. In the scale models, the pseudo-spherical score does not work, since the corresponding potential is not strictly convex on the function space. We proved that the density-power score provides a redescending estimator for the shape parameter in the scale models, and we calculated the shift in the L1-distance estimator using the scale model under extreme outliers. Moreover, we proved that the L1-distance estimator is not significantly affected by extreme outliers. In addition, we showed that prior knowledge on the contamination ratio, ε, can be used to correct the bias of the L1-distance estimator. In the numerical experiments, the density-power score provided a more efficient and robust estimator than the pseudo-spherical score. This is because the pseudo-spherical score with large α tends to ignore most of the samples and, thus, becomes inefficient. In a practical setup, the density-power score will provide a satisfactory result. Furthermore, we illustrated that the bias correction using prior knowledge on the contamination ratio improves L1-distance estimators using scale models.

Besides the Bregman scores, there are other useful classes of estimators, such as local scoring rules [12,18,30]. It is therefore an interesting direction to pursue the possibility of applying another class of scoring rules to the estimation of density differences and density ratios.

A. Proof of Theorem 1

For the density-difference f(x) = fθ* (x) = p(x) – q(x), we define fε(x) as the contaminated density-difference,

f_ε(x) = (1 − ε) p(x) + ε δ(x − z_p) − (1 − ε) q(x) − ε δ(x − z_q) = f(x) + ε { δ(x − z_p) − δ(x − z_q) − f(x) }.
Let g be the function g(x) = δ(x − z_p) − δ(x − z_q) − f_θ*(x). By applying the implicit function theorem to the ℝ^k-valued function:
(θ, ε) ↦ ∇_θ S(f_ε, f_θ) = ∫ ∇_θ s(x, f_θ) f_ε(x) dx
around (θ, ε) = (θ*, 0), we have:
IF_diff(θ*; z_p, z_q) = J^{−1} ∇_θ S(g, f_θ) |_{θ=θ*}.

The computation of the above derivative for each score yields the results.

B. Proof of Theorem 3

Let us consider the minimization of S^pow_{diff,α}(f_0, f) subject to f ∈ ℱ_diff. For cg ∈ ℱ_diff, we have:

S^pow_{diff,α}(f_0, cg) = α c^{1+α} ∫ g(x)^{1+α} dx − (1 + α) c^α ∫ f_0(x) g(x)^α dx.

For a fixed g ∈ ℱ_diff,1, the minimizer of S^pow_{diff,α}(f_0, cg) with respect to c ∈ ℝ is given as:
c_g = ∫ f_0(x) g(x)^α dx / ∫ g(x)^{1+α} dx,
since g is not the zero function. Substituting the optimal c_g into S^pow_{diff,α}(f_0, cg), we have:
S^pow_{diff,α}(f_0, c_g g) = −( ∫ f_0(x) g(x)^α dx )^{1+α} / ( ∫ g(x)^{1+α} dx )^α = −( S^ps_{diff,α}(f_0, g) )^{1+α}
for the positive odd number, α. Hence, the optimal solution of the minimization of S^pow_{diff,α}(f_0, c_g g) subject to c_g g ∈ ℱ_diff is obtained by solving:
min_g S^ps_{diff,α}(f_0, g),  g ∈ ℱ_diff,c ∪ ℱ_diff,−c,
where c is any fixed non-zero number. Since α + 1 is an even number, we need to take into account the two sub-models, ℱ_diff,c and ℱ_diff,−c, in order to reduce the optimization of S^pow_{diff,α} to that of S^ps_{diff,α}.
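The key identity above can be checked numerically. The following sketch (our own check via grid integration; the Gaussian-difference choices for f_0 and g are arbitrary) assumes the forms S^pow_{diff,α}(f_0, f) = α∫f^{1+α}dx − (1+α)∫f_0 f^α dx and S^ps_{diff,α}(f_0, g) = −∫f_0 g^α dx / (∫g^{1+α}dx)^{α/(1+α)}:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 40001)
dx = x[1] - x[0]

def gauss(x, mu, s):
    return np.exp(-(x - mu) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))

f0 = gauss(x, 0.0, 1.0) - gauss(x, 1.0, 1.0)   # a density difference
g = gauss(x, 0.2, 1.1) - gauss(x, 0.9, 1.0)    # a model shape function
alpha = 3                                       # positive odd number

A = np.sum(g ** (1 + alpha)) * dx               # integral of g^(1+a) (> 0: 1+a is even)
B = np.sum(f0 * g ** alpha) * dx                # integral of f0 * g^a
c_g = B / A                                     # optimal scale for this shape

# Density-power score at the optimally scaled model c_g * g ...
s_pow = (alpha * np.sum((c_g * g) ** (1 + alpha)) * dx
         - (1 + alpha) * np.sum(f0 * (c_g * g) ** alpha) * dx)
# ... equals minus the (1+a)-th power of the pseudo-spherical score.
s_ps = -B / A ** (alpha / (1 + alpha))
```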

Acknowledgments

TK was partially supported by Japan Society for the Promotion of Science (JSPS) KAKENHI Grant number 24500340, and MS was partially supported by JSPS KAKENHI Grant number 25700022 and Asian Office of Aerospace Research and Development (AOARD).

Conflict of Interest

The authors declare no conflict of interest.

References

  1. Sugiyama, M.; Liu, S.; du Plessis, M.C.; Yamanaka, M.; Yamada, M.; Suzuki, T.; Kanamori, T. Direct divergence approximation between probability distributions and its applications in machine learning. J. Comput. Sci. Eng 2013, 7, 99–111. [Google Scholar]
  2. Shimodaira, H. Improving predictive inference under covariate shift by weighting the log-likelihood function. J. Stat. Plan. Infer 2000, 90, 227–244. [Google Scholar]
  3. Sugiyama, M.; Kawanabe, M. Machine Learning in Non-Stationary Environments : Introduction to Covariate Shift Adaptation (Adaptive Computation and Machine Learning); MIT Press: Cambridge, MA, USA, 2012. [Google Scholar]
  4. Sugiyama, M.; Suzuki, T.; Kanamori, T. Density Ratio Estimation in Machine Learning; Cambridge University Press: Cambridge, UK, 2012. [Google Scholar]
  5. Hido, S.; Tsuboi, Y.; Kashima, H.; Sugiyama, M.; Kanamori, T. Inlier-based Outlier Detection via Direct Density Ratio Estimation. Proceedings of IEEE International Conference on Data Mining (ICDM2008), Pisa, Italy, 15–19 December 2008.
  6. Kanamori, T.; Suzuki, T.; Sugiyama, M. f-Divergence estimation and two-sample homogeneity test under semiparametric density-ratio models. IEEE Trans. Inform. Theor 2012, 58, 708–720. [Google Scholar]
  7. Kanamori, T.; Hido, S.; Sugiyama, M. Efficient direct density ratio estimation for non-stationarity adaptation and outlier detection. In Advances in Neural Information Processing Systems 21; MIT Press: Cambridge, MA, USA; 2009. [Google Scholar]
  8. Nguyen, X.; Wainwright, M.J.; Jordan, M.I. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Trans. Inform. Theor 2010, 56, 5847–5861. [Google Scholar]
  9. Qin, J. Inferences for case-control and semiparametric two-sample density ratio models. Biometrika 1998, 85, 619–639. [Google Scholar]
  10. Sugiyama, M.; Kanamori, T.; Suzuki, T.; du Plessis, M.C.; Liu, S.; Takeuchi, I. Density-difference estimation. Neural. Comput 2013, 25, 2734–2775. [Google Scholar]
  11. Sugiyama, M.; Suzuki, T.; Nakajima, S.; Kashima, H.; von Bünau, P.; Kawanabe, M. Direct importance estimation for covariate shift adaptation. Ann. Inst. Stat. Math 2008, 60, 699–746. [Google Scholar]
  12. Gneiting, T.; Raftery, A.E. Strictly proper scoring rules, prediction, and estimation. J. Am. Stat. Assoc 2007, 102, 359–378. [Google Scholar]
  13. Ali, S.M.; Silvey, S.D. A general class of coefficients of divergence of one distribution from Another. J. Roy. Stat. Soc. Series B 1966, 28, 131–142. [Google Scholar]
  14. Csiszár, I. Information-type measures of difference of probability distributions and indirect observation. Stud. Sci. Math. Hung 1967, 2, 229–318. [Google Scholar]
  15. Brier, G.W. Verification of forecasts expressed in terms of probability. Mon. Weather Rev 1950, 78, 1–3. [Google Scholar]
  16. Good, I.J. Comment on “Measuring Information Uncertainty” by R. J. Buehler. In Foundations of Statistical Inference; Godambe, V.P., Sprott, D.A., Eds.; Dove: Mineola, NY, USA, 1971; pp. 337–339. [Google Scholar]
  17. Murata, N.; Takenouchi, T.; Kanamori, T.; Eguchi, S. Information geometry of U-Boost and Bregman divergence. Neural Comput 2004, 16, 1437–1481. [Google Scholar]
  18. Parry, M.; Dawid, A.P.; Lauritzen, S. Proper local scoring rules. Ann. Stat 2012, 40, 561–592. [Google Scholar]
  19. Hendrickson, A.D.; Buehler, R.J. Proper scores for probability forecasters. Ann. Math. Stat. 1971, 42, 1916–1921. [Google Scholar]
  20. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley-Interscience: Hoboken, NJ, USA, 2006. [Google Scholar]
  21. Basu, A.; Harris, I.R.; Hjort, N.L.; Jones, M.C. Robust and efficient estimation by minimising a density power divergence. Biometrika 1998, 85, 549–559. [Google Scholar]
  22. Basu, A.; Shioya, H.; Park, C. Statistical Inference: The Minimum Distance Approach; Monographs on Statistics and Applied Probability; Taylor & Francis: Boca Raton, FL, USA, 2010. [Google Scholar]
  23. Fujisawa, H.; Eguchi, S. Robust parameter estimation with a small bias against heavy contamination. J. Multivar. Anal 2008, 99, 2053–2081. [Google Scholar]
  24. Hampel, F.R.; Rousseeuw, P.J.; Ronchetti, E.M.; Stahel, W.A. Robust Statistics: The Approach Based on Influence Functions; John Wiley and Sons, Inc.: New York, NY, USA, 1986. [Google Scholar]
  25. Eguchi, S.; Kato, S. Entropy and divergence associated with power function and the statistical application. Entropy 2010, 12, 262–274. [Google Scholar]
  26. Maronna, R.; Martin, R.; Yohai, V. Robust Statistics: Theory and Methods; Wiley: Chichester, UK, 2006. [Google Scholar]
  27. Wu, Y.; Liu, Y. Robust truncated hinge loss support vector machines. J. Am. Stat. Assoc 2007, 102, 974–983. [Google Scholar]
  28. Xu, H.; Caramanis, C.; Mannor, S.; Yun, S. Risk Sensitive Robust Support Vector Machines. Proceedings of the 48th IEEE Conference on Decision Control, Shanghai, China, 15–18 December 2009; pp. 4655–4661.
  29. Xu, L.; Crammer, K.; Schuurmans, D. Robust Support Vector Machine Training via Convex Outlier Ablation; AAAI: Boston, MA, USA, 2006; pp. 536–542.
  30. Kanamori, T.; Fujisawa, H. Affine invariant divergences associated with composite scores and its applications. Bernoulli, 2014. submitted. [Google Scholar]
Figure 1. Averaged relative errors of L1-distance estimators over 100 iterations are plotted. The estimators with extremely large relative errors are omitted. DF denotes the estimator based on the density-difference, and the rightmost number in the legend denotes α in the density-power score. The number of samples in the standard setup is n = m = 1,000, 2,000, 5,000 or 10,000, and the number of outliers is set to n′ = m′ = 0 (i.e., no outliers), 10 or 100.
Table 1. Averaged relative error and standard deviation of L1-distance estimators over 100 iterations: DF (DR) denotes the estimator based on the density-difference (density-ratio), and “separate” denotes the L1-distance estimator using the separately estimated probability densities. The density-power score or pseudo-spherical score with parameter α is employed with the scale or non-scale model. The number of samples in the standard setup is n = m = 1,000, 2,000, 5,000 or 10,000, and the number of outliers is set to n′ = m′ = 0 (i.e., no outliers), 10 or 100. When the samples are contaminated by outliers, the density-ratio-based estimator becomes extremely unstable and numerical error occurs.
outliers: n′ = m′ = 0 (no outlier)

DF/DR estimator : model | α | n = m = 1,000 | n = m = 2,000 | n = m = 5,000 | n = m = 10,000
DF density-power : nonscale | 1 | 0.033 (0.028) | 0.026 (0.024) | 0.016 (0.014) | 0.013 (0.010)
DF density-power : nonscale | 3 | 0.048 (0.032) | 0.035 (0.029) | 0.017 (0.014) | 0.016 (0.010)
DF density-power : scale | 1 | 0.037 (0.030) | 0.027 (0.025) | 0.016 (0.013) | 0.013 (0.010)
DF density-power : scale | 3 | 0.075 (0.069) | 0.053 (0.058) | 0.028 (0.027) | 0.019 (0.017)
DF pseudo-sphere : nonscale | 1 | 0.610 (0.450) | 0.604 (0.396) | 0.451 (0.320) | 0.452 (0.294)
DF pseudo-sphere : nonscale | 3 | 0.782 (0.532) | 0.739 (0.491) | 0.604 (0.440) | 0.500 (0.379)
DR density-power : scale | 0.1 | 0.035 (0.026) | 0.025 (0.022) | 0.015 (0.013) | 0.013 (0.009)
Separate : density-power | 1 | 0.047 (0.038) | 0.032 (0.024) | 0.022 (0.017) | 0.014 (0.010)

outliers: n′ = m′ = 10, τ = 100

DF/DR estimator : model | α | n = m = 1,000 | n = m = 2,000 | n = m = 5,000 | n = m = 10,000
DF density-power : nonscale | 1 | 0.033 (0.026) | 0.028 (0.022) | 0.017 (0.013) | 0.013 (0.010)
DF density-power : nonscale | 3 | 0.042 (0.033) | 0.036 (0.029) | 0.021 (0.016) | 0.015 (0.012)
DF density-power : scale | 1 | 0.040 (0.030) | 0.031 (0.025) | 0.019 (0.014) | 0.014 (0.011)
DF density-power : scale (bias-correct) | 1 | 0.036 (0.030) | 0.030 (0.025) | 0.018 (0.014) | 0.014 (0.011)
DF density-power : scale | 3 | 0.089 (0.077) | 0.052 (0.047) | 0.031 (0.024) | 0.019 (0.016)
DF density-power : scale (bias-correct) | 3 | 0.083 (0.075) | 0.049 (0.046) | 0.030 (0.023) | 0.019 (0.016)
DF pseudo-sphere : nonscale | 1 | 0.658 (0.474) | 0.632 (0.424) | 0.515 (0.370) | 0.417 (0.297)
DF pseudo-sphere : nonscale | 3 | 0.969 (0.494) | 0.743 (0.487) | 0.677 (0.483) | 0.506 (0.421)
DR density-power : scale | 0.1 | — | — | — | —
Separate : density-power | 1 | 0.032 (0.023) | 0.026 (0.019) | 0.015 (0.011) | 0.011 (0.008)

outliers: n′ = m′ = 100, τ = 100

DF/DR estimator : model | α | n = m = 1,000 | n = m = 2,000 | n = m = 5,000 | n = m = 10,000
DF density-power : nonscale | 1 | 0.090 (0.042) | 0.047 (0.028) | 0.023 (0.014) | 0.013 (0.010)
DF density-power : nonscale | 3 | 0.093 (0.053) | 0.049 (0.032) | 0.025 (0.020) | 0.015 (0.012)
DF density-power : scale | 1 | 0.099 (0.043) | 0.053 (0.029) | 0.028 (0.017) | 0.016 (0.011)
DF density-power : scale (bias-correct) | 1 | 0.040 (0.031) | 0.028 (0.022) | 0.017 (0.013) | 0.011 (0.009)
DF density-power : scale | 3 | 0.144 (0.100) | 0.083 (0.047) | 0.041 (0.030) | 0.025 (0.016)
DF density-power : scale (bias-correct) | 3 | 0.076 (0.094) | 0.046 (0.041) | 0.031 (0.025) | 0.018 (0.014)
DF pseudo-sphere : nonscale | 1 | 0.557 (0.461) | 0.511 (0.399) | 0.501 (0.372) | 0.465 (0.305)
DF pseudo-sphere : nonscale | 3 | 0.807 (0.507) | 0.739 (0.508) | 0.581 (0.458) | 0.534 (0.396)
DR density-power : scale | 0.1 | — | — | — | —
Separate : density-power | 1 | 0.052 (0.036) | 0.036 (0.031) | 0.024 (0.017) | 0.014 (0.009)