## 1. Introduction

In statistics and machine learning, estimating a discrepancy between two probability distributions from samples has been extensively studied [1], because discrepancy estimation is useful in solving various real-world data analysis tasks, including covariate shift adaptation [2,3], conditional probability estimation [4], outlier detection [5] and divergence-based two-sample testing [6].

There are mainly two classes of discrepancy measures for probability densities. One is genuine distances on function spaces, such as the L_{s}-distance for s ≥ 1, and the other is divergence measures, such as the Kullback–Leibler divergence and the Pearson divergence. Typically, distance measures in the former class can be represented using the difference of two probability densities, while those in the latter class are represented using the ratio of two probability densities. Therefore, it is important to establish statistical methods to estimate the density difference and the density ratio.

A naive way to estimate the density difference and the density ratio consists of two steps: two probability densities are separately estimated in the first step, and then, their difference or ratio is computed in the second step. However, such a two-step approach is not favorable in practice, because the density estimation in the first step is carried out without regard to the second step of taking the difference or ratio. To overcome this problem, the authors in [7–11] studied the estimation of the density difference and the density ratio in a semi-parametric manner without separately modeling each probability distribution.

The intersection of the density difference-based distances and the density ratio-based divergences is the L_{1}-distance, and thus, it can be estimated either based on the density difference or the density ratio. In this paper, we first propose a novel direct method to estimate the density difference and the density ratio based on the Bregman scores [12]. We then show that the density-difference approach to L_{1}-distance estimation is more robust than the density-ratio approach. This fact has already been pointed out in [10] based on a somewhat intuitive argument: the density difference is always bounded, while the density ratio can be unbounded. In this paper, we theoretically support this claim by providing detailed theoretical analysis of the robustness properties.

There are several works related to our study. Density-ratio estimation has been intensively investigated in the machine learning community [4,7,8]. As shown in [6], the density ratio can be used to estimate the ϕ-divergence [13,14]. However, estimation of the L_{1}-distance, which is a member of the ϕ-divergence class, had not been studied, since it does not satisfy the regularity condition required to investigate the statistical asymptotic properties. On the other hand, the least mean squares estimator of the density difference was proposed in [10], and its robustness was numerically investigated. In the present paper, not only the least squares estimator, but also general score-based estimators of density differences are considered, and their robustness properties are theoretically investigated.

The rest of the paper is structured as follows. In Section 2, we describe two approaches to L_{1}-distance estimation based on the density difference and the density ratio. In Section 3, we introduce the Bregman scores, which are widely employed for the estimation of probability densities in statistical data analysis. In Section 4, we apply the Bregman score to the estimation of the density difference and the density ratio. In Section 5, we introduce a robustness measure in terms of which the proposed estimators are analyzed in the following sections. In Section 6, we consider statistical models without the scale parameter (called the non-scale models) and investigate the robustness of the density difference and density ratio estimators. In Section 7, we consider statistical models with the scale parameter (called the scale models) and show that the estimation using the scale models is reduced to the estimation using the non-scale models. Then, we apply the theoretical results on the non-scale models to the scale models and elucidate the robustness of the scale models. In Section 8, numerical examples on L_{1}-distance estimation are presented. Finally, we conclude in Section 9.

## 2. Estimation of L_{1}-Distance

Let p(x) and q(x) be two probability densities. In this section, we introduce two approaches to estimating discrepancy measures: an approach based on the density difference, p – q, and an approach based on the density ratio, p/q.

#### 2.1. L_{1}-Distance As the Density Difference and Density Ratio

The density difference, p – q, is directly used to compute the L_{s}-distance between two probability densities:

L_{s}(p, q) = ( ∫ |p(x) − q(x)|^{s} dx )^{1/s},

where s ≥ 1. On the other hand, the density ratio, p/q, appears in the ϕ-divergence [13,14] defined as:

D_{ϕ}(p, q) = ∫ ϕ( p(x)/q(x) ) q(x) dx,

where ϕ is a strictly convex function, such that ϕ(1) = 0. The ϕ-divergence is non-negative and vanishes only when p = q holds. Hence, it can be regarded as an extension of a distance between p and q. The class of ϕ-divergences includes many important discrepancy measures, such as the Kullback–Leibler divergence (ϕ(z) = z log z), the Pearson divergence (ϕ(z) = (1−z)^{2}) and the L_{1}-distance (ϕ(z) = |1−z|). The intersection of the ϕ-divergences and the L_{s}-distances is the L_{1}-distance:

d_{1}(p, q) = ∫ |p(x) − q(x)| dx = ∫ |1 − p(x)/q(x)| q(x) dx.
The purpose of our work is to compare the statistical properties of the density-difference approach and the density-ratio approach to the estimation of the L_{1}-distance between probability densities p and q defined on ℝ^{d}. For the estimation of the L_{1}-distance, we use two sets of independently and identically distributed (i.i.d.) samples:

x_{1}, . . . , x_{n} ~ p,   y_{1}, . . . , y_{m} ~ q.   (2)
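As a numerical sanity check (a sketch, not part of the original development; the grid, sample size, and seed are arbitrary choices), the L_{1}-distance between N(0, 1) and N(1, 1) can be computed from the density difference, from the ϕ-divergence form with ϕ(z) = |1 − z|, and from a sample average over draws from q:

```python
import numpy as np

# Densities p = N(0, 1) and q = N(1, 1).
def p_pdf(t):
    return np.exp(-t ** 2 / 2) / np.sqrt(2 * np.pi)

def q_pdf(t):
    return np.exp(-(t - 1) ** 2 / 2) / np.sqrt(2 * np.pi)

x = np.linspace(-10.0, 11.0, 200001)
dx = x[1] - x[0]

# Density-difference form: integral of |p - q|.
d1_diff = np.sum(np.abs(p_pdf(x) - q_pdf(x))) * dx

# phi-divergence form with phi(z) = |1 - z|: integral of |1 - p/q| q.
d1_ratio = np.sum(np.abs(1 - p_pdf(x) / q_pdf(x)) * q_pdf(x)) * dx

# Sample-based version: average of |1 - p(y_j)/q(y_j)| over y_j ~ q.
rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, 200000)
d1_mc = np.mean(np.abs(1 - p_pdf(y) / q_pdf(y)))

print(d1_diff, d1_ratio, d1_mc)
```

All three quantities agree with the closed-form value 2(2Φ(1/2) − 1) ≈ 0.766 for this pair of Gaussians, illustrating that the L_{1}-distance can be estimated from either the density difference or the density ratio.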
In both the density-difference and density-ratio approaches, semi-parametric statistical models are used, as explained below.

#### 2.2. Density-Difference Approach

The difference of two probability densities, f(x) = p(x) – q(x), is widely applied to statistical inference [10]. A parametric statistical model for the density difference, f(x), is denoted as:

$\mathcal{M}$_{diff} = { f(x; θ) : θ ∈ Θ_{k} },   (3)

where Θ_{k} is the k-dimensional parameter space. The density-difference model, f(x; θ), can take both positive and negative values, and its integral should vanish. Note that there are infinitely many degrees of freedom in specifying the probability densities, p and q, even when the density difference f = p – q is specified. Hence, the density-difference model is regarded as a semi-parametric model for probability densities.

Recently, a density-difference estimator from Samples (2) that does not involve separate estimation of the two probability densities has been proposed [10]. Once a density-difference estimator, f̂ ∈ $\mathcal{M}$_{diff}, is obtained, the L_{1}-distance can be immediately estimated as:

d̂_{1}(p, q) = ∫ |f̂(x)| dx.

The L_{1}-distance has an invariance property under changes of variables. More specifically, let x = ψ(z) be a one-to-one mapping on ℝ^{d} and f_{ψ}(z) be f(ψ(z))|J_{ψ}(z)|, where J_{ψ} is the Jacobian determinant of ψ. For f(x) = p(x) – q(x), the function, f_{ψ}(z), is the density difference between p and q in the z-coordinate. Then, we have:

∫ |f_{ψ}(z)| dz = ∫ |f(x)| dx,   (4)

due to the change-of-variables formula for probability densities. When the transformed data, z, with the model, f_{ψ}(z), are used instead of the model, f(x), the L_{1}-distance in the z-coordinate can be found in the same way as the L_{1}-distance in the x-coordinate.

Note that this invariance property does not hold for general distance measures. Indeed, we have:

∫ |f_{ψ}(z)|^{s} dz ≠ ∫ |f(x)|^{s} dx   (s ≠ 1)
for general distance measures.

#### 2.3. Density-Ratio Approach

The density ratio of two probability densities, p(x) and q(x), is defined as r(x) = p(x)/q(x); like the density difference, it is widely applied in statistical inference [4]. Let:

$\mathcal{M}$_{ratio} = { r(x; θ) : θ ∈ Θ_{k} }   (5)

be the k-dimensional parametric statistical model of the density ratio, r(x). From the definition of the density ratio, the function, r(x; θ), should be non-negative. Various estimators of the density ratio based on the Samples (2) that do not involve separate estimation of the two probability densities have been developed [7,8,11]. Once a density-ratio estimator, r̂ ∈ $\mathcal{M}$_{ratio}, is obtained, the L_{1}-distance between p and q can be immediately estimated as:

d̂_{1}(p, q) = (1/m) Σ_{j=1}^{m} |1 − r̂(y_{j})|.

Unlike the L_{1}-distance estimator based on the density difference, the integral is here replaced with a sample mean, since the density, q(y), is unknown. In the density-ratio approach, the variable transformation preserves the L_{1}-distance, as well as the estimation of the density ratio itself. For the one-to-one mapping y = ψ(z), let r_{ψ}(z) be r(ψ(z)) and the probability density, q_{ψ}(z), be q(ψ(z))|J_{ψ}(z)|. Then, we have:

(1/m) Σ_{j=1}^{m} |1 − r_{ψ}(z_{j})| = (1/m) Σ_{j=1}^{m} |1 − r(y_{j})|,
where z_{j} is the transformed sample, such that y_{j} = ψ(z_{j}). In the density-ratio approach, the L_{1}-distance for the transformed data does not require the computation of the Jacobian determinant, J_{ψ}.

## 3. Bregman Scores

The Bregman score is an extension of the log-likelihood function, and it is widely applied in statistical inference [12,15–18]. In this section, we briefly review the Bregman score. See [12] for details.

For functions f and g on ℝ^{d}, the Bregman score, S(f, g), is a real-valued functional that satisfies the inequality:

S(f, g) ≥ S(f, f).   (6)
Clearly, the inequality becomes an equality for f = g. If the equality S(f, g) = S(f, f) implies f = g, then S(f, g) is called a strict Bregman score. The minimization problem for a strict Bregman score, S(f, g), i.e., min_{g} S(f, g), has the unique optimal solution g = f.

Let us introduce the definition of Bregman scores. For a function, f, defined on the Euclidean space, ℝ^{d}, let G(f) be a real-valued convex functional; G(f) is called the potential below. The functional derivative of G(f) is denoted as G′(x; f), which is defined as the function satisfying the equality:

d/dε G(f + εh) |_{ε=0} = ∫ G′(x; f) h(x) dλ(x)   (7)

for any function, h(x), satisfying a regularity condition, where λ(·) is the base measure. Then, the Bregman score, S(f, g), for functions f and g is defined as:

S(f, g) = −∫ G′(x; g) ( f(x) − g(x) ) dλ(x) − G(g).   (8)
Due to the convexity of G(f), we have:

S(f, g) − S(f, f) = G(f) − G(g) − ∫ G′(x; g) ( f(x) − g(x) ) dλ(x) ≥ 0,

which is equivalent to Inequality (6). Let $\mathcal{F}$ be a set of functions defined on ℝ^{d}. If $\mathcal{F}$ is a convex set and the potential G(f) is strictly convex on $\mathcal{F}$, the associated Bregman score is strict.

When G(f) is expressed as:

G(f) = ∫ U( f(x) ) dλ(x)

with a convex differentiable function U : ℝ → ℝ, the corresponding Bregman score is referred to as a separable Bregman score, which is given as:

S(f, g) = −∫ [ U′( g(x) ) ( f(x) − g(x) ) + U( g(x) ) ] dλ(x).

Due to their computational tractability, separable Bregman scores are often used in real-world data analysis.

If f is a probability density, the Bregman score can be expressed as:

S(f, g) = ∫ ℓ(x, g) f(x) dλ(x),   (9)

where ℓ(x, g) is given by:

ℓ(x, g) = −G′(x; g) + ∫ G′(y; g) g(y) dλ(y) − G(g).   (10)
The function, ℓ(x, g), is regarded as the loss of the forecast using g ∈ $\mathcal{F}$ for an outcome, x ∈ ℝ^{d}. A score of the form of Equation (9) is called a proper scoring rule, and its relation to the Bregman score has been extensively investigated [12,17,19]. When i.i.d. samples, x_{1}, . . . , x_{n}, are observed from the probability density, f, the minimization of the empirical mean over the probability model, g, i.e.,

min_{g ∈ $\mathcal{F}$} (1/n) Σ_{i=1}^{n} ℓ(x_{i}, g),
is expected to provide a good estimate of the probability density, f.

Below, let us introduce exemplary Bregman scores:

**Example 1** (Kullback–Leibler (KL) score). The Kullback–Leibler (KL) score for the probability densities, p(x) and q(x), is defined as:

S(p, q) = −∫ p(x) log q(x) dλ(x),

which is the separable Bregman score with the potential function:

G(p) = ∫ p(x) log p(x) dλ(x),

i.e., the negative entropy. The difference S(p, q) − S(p, p) is called the KL divergence [20]. The KL score is usually defined for probability densities, but an extension to non-negative functions is also available. Hence, the KL score is applicable to the estimation of the density ratio [8,11]. However, the KL score cannot be used directly to estimate the density difference, because the density difference can take negative values.

**Example 2** (Density-power score). Let α be a positive number and f and g be functions that can take both positive and negative values. Then, the density-power score with the base measure λ(·) is defined as:

S(f, g) = (α/(1 + α)) ∫ |g(x)|^{1+α} dλ(x) − ∫ f(x) |g(x)|^{α−1} g(x) dλ(x).

See [21,22] for the details of the density-power score for probability densities. The potential of the density-power score is given as:

G(f) = (1/(1 + α)) ∫ |f(x)|^{1+α} dλ(x).

Hence, the density-power score is a separable Bregman score. Letting α tend to zero, the difference of the density-power scores, (S(p, q) − S(p, p))/α, for probability densities p and q tends to the KL divergence.

**Example 3** (Pseudo-spherical score; γ-score). For α > 0 and g ≠ 0, the pseudo-spherical score [16] is defined as:

S(f, g) = − ∫ f(x) |g(x)|^{α−1} g(x) dλ(x) / ( ∫ |g(x)|^{1+α} dλ(x) )^{α/(1+α)}.

This is the Bregman score derived from the potential function:

G(f) = ( ∫ |f(x)|^{1+α} dλ(x) )^{1/(1+α)},

implying that the pseudo-spherical score is a non-separable Bregman score. For probability densities, p and q, the monotone transformation of the pseudo-spherical score, − log(−S(p, q)), is called the γ-score [23], which is used for robust parameter estimation of probability densities. In the limiting case of α → 0, the difference of the γ-scores, − log(−S(p, q)) + log(−S(p, p)), recovers the KL divergence. Note that the corresponding potential is not strictly convex on a set of functions, but is strictly convex on the set of probability densities. As a result, the equality S(p, q) = S(p, p) for probability densities, p and q, leads to p = q, while the equality S(f, g) = S(f, f) for general functions f and g only implies that f and g are linearly dependent. The last assertion comes from the equality condition of Hölder's inequality.

When the model, f(x; θ), includes a scale parameter, c, i.e., f(x; θ) = cg(x; θ̄) with the parameter θ = (c, θ̄) ∈ Θ_{k} for c ∈ ℝ and θ̄ ∈ Θ_{k−1}, the pseudo-spherical score does not work. This is because the potential is not strictly convex on a statistical model with a scale parameter; hence, the scale parameter, c, is not estimable when the pseudo-spherical score is used.

The density-power score and the pseudo-spherical score in the above examples include the positive parameter, α. When α is an odd integer, the absolute-value operators in the scores can be removed, which is computationally advantageous. For this reason, we set the parameter, α, to a positive odd integer when the Bregman score is used for the estimation of the density difference.
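To make the use of these scores concrete, the sketch below fits the mean of a Gaussian model to contaminated data by minimizing an empirical density-power objective, written in a common normalization that shares the minimizer of the score in Example 2. The fixed σ, the grid search, the value α = 0.5, and the contamination scheme are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
# 90% inliers from N(0, 1) and 10% gross outliers at x = 10.
x = np.concatenate([rng.normal(0.0, 1.0, 180), np.full(20, 10.0)])

alpha = 0.5   # positive score parameter (illustrative choice)
sigma = 1.0   # scale fixed for simplicity; only the mean is estimated

def phi(t, mu):
    return np.exp(-(t - mu) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

def dp_objective(mu):
    # Empirical density-power objective:
    #   int phi^{1+alpha} dx - (1 + 1/alpha) * mean_i phi(x_i)^alpha,
    # using int N(.; mu, s^2)^{1+a} dx = (2 pi s^2)^{-a/2} / sqrt(1 + a).
    integral = (2 * np.pi * sigma ** 2) ** (-alpha / 2) / np.sqrt(1 + alpha)
    return integral - (1 + 1 / alpha) * np.mean(phi(x, mu) ** alpha)

grid = np.linspace(-2.0, 3.0, 501)
mu_dp = grid[np.argmin([dp_objective(m) for m in grid])]
mu_kl = np.mean(x)   # the KL (log) score yields the sample mean

print(mu_dp, mu_kl)
```

The density-power fit stays near the inlier center, while the KL-score fit (the sample mean) is dragged toward the outliers; Section 5 quantifies such behavior via the influence function.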

## 4. Direct Estimation of Density Differences and Density Ratios Using Bregman Scores

The Bregman scores are applicable not only to the estimation of probability densities, but also to the estimation of density differences and density ratios. In this section, we propose estimators for density differences and density ratios, and show their theoretical properties.

#### 4.1. Estimators for Density Differences and Density Ratios

First of all, let us introduce a way to directly estimate the density difference based on the Bregman scores. Let $\mathcal{M}$_{diff} be the statistical Model (3) to estimate the true density-difference f(x) = p(x) – q(x) defined on the Euclidean space, ℝ^{d}. Let the base measure, λ(·), be the Lebesgue measure. Then, for the density-difference model, f_{θ} ∈ $\mathcal{M}$_{diff}, the Bregman score Equation (8) is given as:

where ℓ(x, f_{θ}) is defined in Equation (10). This can be approximated by the empirical mean based on the Samples (2) as follows. Let δ be the Dirac delta function and f̃ be the difference between two empirical densities,
Then, we have:
If the target density difference, f, is included in the model $\mathcal{M}$_{diff}, the minimizer of the strict Bregman score, S_{diff}(f̃, f_{θ}), with respect to f_{θ} ∈ $\mathcal{M}$_{diff} is expected to produce a good estimator of f.

Next, we use the Bregman scores to estimate the density ratio r(x) = p(x)/q(x). Let us take q(x) as the base measure of the Bregman score. Given the density-ratio model, $\mathcal{M}$_{ratio}, of Equation (5), the Bregman score of the model, r_{θ} ∈ $\mathcal{M}$_{ratio}, is given as:

Using the Samples (2), we can approximate the score, S_{ratio}(r, r_{θ}), with the base measure, q, as:
For example, the empirical approximation of the density-power score for the density ratio is given as:

S_{ratio}(r, r_{θ}) ≈ (α/(1 + α)) (1/m) Σ_{j=1}^{m} r_{θ}(y_{j})^{1+α} − (1/n) Σ_{i=1}^{n} r_{θ}(x_{i})^{α},
in which the equality r(x)q(x) = p(x) is used. A similar approximation is available for the pseudo-spherical score.

#### 4.2. Invariance of Estimators

We show that the estimators obtained by the density-power score and the pseudo-spherical score have the affine invariance property. Suppose that the Samples (2) are distributed on the d-dimensional Euclidean space, and let us consider the affine transformation of samples, such that x_{i} = Ax′_{i} + b and y_{j} = Ay′_{j} + b, where A is an invertible matrix and b is a vector. Let f_{A,b}(x′) be the transformed density-difference, |det A| f(Ax′ + b), and f̃_{A,b} be the difference of the empirical distributions defined from the samples, x′_{i} and y′_{j}. Let S_{diff}(f, g) be the density-power score or the pseudo-spherical score with a positive odd integer, α, for the estimation of the density difference. Then, we have:

Let f̂ (respectively, f̂_{A,b}) be the estimator based on the samples {x_{i}} and {y_{j}} (respectively, {x′_{i}} and {y′_{j}}). Then, the above equality leads to (f̂)_{A,b} = f̂_{A,b}, implying that the estimator is invariant under the affine transformation of the data. In addition, Equality (4) leads to:

∫ |f̂_{A,b}(x′)| dx′ = ∫ |f̂(x)| dx.
This implies that the affine transformation of the data does not affect the estimated L_{1}-distance. The same invariance property holds for the density-ratio estimators based on the density-power score and the pseudo-spherical score.

## 5. Robustness Measure

The robustness of the estimator is an important feature in practice, since typically real-world data includes outliers that may undermine the reliability of the estimator. In this section, we introduce robustness measures of estimators against outliers.

In order to define robustness measures, let us briefly introduce the influence function in the setup of density-difference estimation. Let p(x) and q(x) be the true probability densities of the respective datasets in Samples (2). Suppose that these probabilities are shifted to:

p_{ε}(x) = (1 − ε) p(x) + ε δ_{z_p}(x),   q_{ε}(x) = (1 − ε) q(x) + ε δ_{z_q}(x),   (11)
by the outliers, z_{p} and z_{q}, respectively. A small positive number, ε, denotes the ratio of outliers. Let θ^{*} be the true model parameter of the density difference f(x) = p(x) – q(x), i.e., f(x) = f(x; θ^{*}) ∈ $\mathcal{M}$_{diff}. Let us define the parameter, θ_{ε}, as the minimum solution of the problem,

min_{θ ∈ Θ_{k}} S_{diff}( p_{ε} − q_{ε}, f_{θ} ).
Clearly, θ_{0} = θ^{*} holds. For the density-difference estimator using the Bregman score, S_{diff}(f, f_{θ}), with the model, $\mathcal{M}$_{diff}, the influence function is defined as:

IF_{diff}(θ^{*}; z_{p}, z_{q}) = lim_{ε→0} (θ_{ε} − θ^{*})/ε.

Intuitively, the estimated parameter is distributed around θ^{*} + ε · IF_{diff}(θ^{*}; z_{p}, z_{q}) in the presence of outliers, z_{p} and z_{q}, with a small contamination ratio, ε. The influence function for the density ratio is defined in the same manner and is denoted as IF_{ratio}(θ^{*}; z_{p}, z_{q}).

The influence function provides several robustness measures for estimators. An example is the gross error sensitivity, defined as sup_{z_p, z_q} ‖IF_{diff}(θ^{*}; z_{p}, z_{q})‖, where ‖·‖ is the Euclidean norm. The estimator that uniformly minimizes the gross error sensitivity over the parameter, θ, is called the most B(bias)-robust estimator; it minimizes the worst-case influence of outliers. For the one-dimensional normal distribution, the median is the most B-robust estimator of the mean parameter [24].
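The influence function can be approximated by finite differences, replacing the ε-contaminated distribution by a sample with ⌈εn⌉ copies of the outlier appended. The sketch below does this for the sample mean and the sample median of a univariate normal sample (a textbook illustration of gross error sensitivity, not the density-difference setting; the sample size and ε are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 100000)   # clean sample from N(0, 1)
eps = 1e-3                         # small contamination ratio

def empirical_if(estimator, z):
    # IF(z) ~ (T(contaminated sample) - T(clean sample)) / eps,
    # where the contamination appends round(eps * n) copies of z.
    k = int(eps * len(x))
    contaminated = np.concatenate([x, np.full(k, z)])
    return (estimator(contaminated) - estimator(x)) / eps

for z in [5.0, 50.0, 500.0]:
    print(z, empirical_if(np.mean, z), empirical_if(np.median, z))
```

The empirical influence of the mean grows linearly in z (unbounded gross error sensitivity), while that of the median stays bounded near 1/(2φ(0)) ≈ 1.25, matching its B-robustness. Note that the median's influence is bounded but does not tend to zero, so B-robustness and the redescending property discussed next are distinct.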

In this paper, we consider another robustness measure, called the redescending property. An estimator satisfying the following redescending property,

lim_{‖z_{p}‖→∞, ‖z_{q}‖→∞} IF(θ^{*}; z_{p}, z_{q}) = 0,

is called a redescending estimator [23–26]. The redescending property is preferable for stable inference, since the influence of extreme outliers can be ignored. Furthermore, in the machine learning literature, learning algorithms with the redescending property for classification problems have been proposed under the name of robust support vector machines [27–29]. Note that the most B-robust estimator is not necessarily a redescending estimator, and vice versa. It is known that, for the estimation of probability densities, the pseudo-spherical score has the redescending property, while the density-power score does not necessarily provide a redescending estimator [23,30].

In the following sections, we apply the density-power score and the pseudo-spherical score to estimate the density difference or the density ratio and investigate their robustness.

## 6. Robustness under Non-Scale Models

In this section, we consider statistical models without the scaling parameter, and investigate the robustness of the density-difference and density-ratio estimators based on the density-power score and the pseudo-spherical score.

#### 6.1. Non-Scale Models

The model satisfying the following assumption is called the non-scale model:

**Assumption 1**. Let $\mathcal{M}$ be the model of density differences or density ratios. For c ∈ ℝ and f ∈ $\mathcal{M}$, such that c ≠ 0 and f ≠ 0, cf ∈ $\mathcal{M}$ holds only when c = 1.

The density-power score and the pseudo-spherical score are strict Bregman scores on non-scale models. Indeed, the density-power score is a strict Bregman score, as pointed out in Example 2. For the pseudo-spherical score, suppose that the equality S(f, g) = S(f, f) holds for non-zero functions, f and g. Then, g is proportional to f. When f and g are both included in a non-scale model, we have f = g. Thus, the pseudo-spherical score on a non-scale model is also a strict Bregman score.
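The scale degeneracy behind this argument can be checked numerically. The sketch below uses a standard form of the pseudo-spherical score for densities, S(f, g) = −∫ f gᵅ dλ / (∫ g^{1+α} dλ)^{α/(1+α)} (an assumption consistent with the homogeneity and Hölder arguments of Example 3; the paper's normalization may differ), and verifies that S(f, g) = S(f, cg) for c > 0 while S(f, f) < S(f, g) for g not proportional to f:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
alpha = 2.0  # for non-negative densities, any alpha > 0 works

def norm_pdf(t, mu, s):
    return np.exp(-(t - mu) ** 2 / (2 * s * s)) / np.sqrt(2 * np.pi * s * s)

def ps_score(f, g):
    # Pseudo-spherical score: - int f g^alpha / (int g^(1+alpha))^(alpha/(1+alpha)).
    num = np.sum(f * g ** alpha) * dx
    den = (np.sum(g ** (1 + alpha)) * dx) ** (alpha / (1 + alpha))
    return -num / den

f = norm_pdf(x, 0.0, 1.0)
g = norm_pdf(x, 0.5, 1.2)

print(ps_score(f, g), ps_score(f, 3.7 * g))  # identical: the score cannot see the scale of g
print(ps_score(f, f))                        # strictly smaller: the minimum is at g proportional to f
```

On a non-scale model the proportionality g ∝ f forces g = f, which is why the score becomes strict there; on a scale model, this flat direction makes the scale parameter unidentifiable.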

#### 6.2. Density-Difference Approach

Here, we consider the robustness of density-difference estimation using non-scale models. Assumption 1 implies that the model, f_{θ}(x), does not include a scale parameter. An example is the model consisting of the difference of two probability models,

f_{θ}(x) = p_{θ1}(x) − p_{θ2}(x),   θ = (θ_{1}, θ_{2}),

such that θ_{1} ≠ θ_{2}, where p_{θ1} and p_{θ2} are parametric models of normal distributions. The above model, f_{θ}, is still a semi-parametric model, because even when f_{θ}(x) is specified, the pair of probability densities, p and q, such that f_{θ} = p – q, has infinitely many degrees of freedom.

The following theorem shows the robustness of the density-difference estimator. The proof is found in Appendix A.

**Theorem 1.** Suppose that Assumption 1 holds for the density-difference model, $\mathcal{M}$_{diff}. We assume that the true density-difference, f, is included in $\mathcal{M}$_{diff}, i.e., f = f_{θ*} ∈ $\mathcal{M}$_{diff} holds. For the Bregman score, S_{diff}(f, g), of the density difference, let J be the matrix, each element of which is given as:

Suppose that J is invertible. Then, under the model, $\mathcal{M}$_{diff}, the influence function of the density-power score with a positive odd parameter, α, is given as:

where $\frac{\partial f}{\partial \theta}$ is the k-dimensional gradient vector of the function, f, with respect to the parameter, θ. In addition, we suppose that f is not the zero function. Then, under the model, $\mathcal{M}$_{diff}, the influence function of the pseudo-spherical score with a positive odd parameter, α, is given as:

Theorem 1 implies that density-difference estimation with the pseudo-spherical score under non-scale models has the redescending property. For the density difference f = p – q, the limiting condition,

lim_{‖x‖→∞} f(x) = 0,

will hold in many practical situations. Hence, for α > 1, the assumption:
for all θ ∈ Θ_{k} will not be a strong condition for the density-difference model. Under the above limiting conditions, the influence Function (13) tends to zero as z_{p} and z_{q} go to the infinite point. As a result, the pseudo-spherical score produces a redescending estimator. On the other hand, the density-power score does not have the redescending property, since the last term in Equation (12) does not vanish when z_{p} and z_{q} tend to the infinite point.

Let us consider the L_{1}-distance estimation using the density-difference estimator. The L_{1}-distance estimator under the Contamination (11) is distributed around:

which implies that the bias term is expressed as the inner product of the influence function and the gradient of the density-difference model. Let b_{diff,ε} be:
Then, the bias of the L_{1}-distance estimator induced by outliers is approximately bounded above by b_{diff,ε}. Since the pseudo-spherical score with a non-scale model provides a redescending estimator of the density difference, the L_{1}-distance estimator based on the pseudo-spherical score also has the redescending property against outliers.

#### 6.3. Density-Ratio Approach

The following theorem provides the influence function of the density-ratio estimators. Since the proof is almost the same as that of Theorem 1, we omit the detailed calculation.

**Theorem 2.** Suppose that Assumption 1 holds for the density-ratio model, $\mathcal{M}$_{ratio}. We assume that the true density-ratio r(x) = p(x)/q(x) is included in:

and that r = r_{θ*} ∈ $\mathcal{M}$_{ratio} holds. For the Bregman score, S_{ratio}(r, r_{θ}), with the base measure, q(x), let J be the matrix, each element of which is given as:

Suppose that J is invertible. Then, the influence function of the density-power score with a positive real parameter, α, is given as:

The influence function of the pseudo-spherical score with a positive real parameter, α, is given as:

The density ratio is a non-negative function. Hence, we do not need to care about the absolute values in the density-power score and the pseudo-spherical score. As a result, the parameter, α, in these scores is allowed to take any positive real value in the above theorem.

For the density ratio r(x) = p(x)/q(x), a typical limiting condition is:

lim_{‖x‖→∞} r(x) = ∞.

For example, the density ratio of two Gaussian distributions with the same variance and different means is unbounded. Hence, the influence function can tend to infinity, and as a result, the density-ratio estimator is sensitive to shifts in the probability distributions.

Let us consider the L_{1}-distance estimation using the density ratio. The L_{1}-distance estimator under the Contamination (11) is distributed around:

Thus, the bias of the L_{1}-distance estimator induced by the outliers is approximately bounded above by:
The influence function of the density-ratio estimator can take arbitrarily large values. In addition, the empirical approximation of the integral is also affected by outliers. Hence, the density-ratio approach does not necessarily provide a robust estimator of the L_{1}-distance measure.

## 7. Robustness under Scale Models

In this section, we consider the estimation of density differences using models with a scale parameter. As shown in Example 3, the pseudo-spherical score does not work for such models. Furthermore, the previous section showed that density-ratio estimation is unstable against gross outliers. Hence, in this section, we focus on density-difference estimation using the density-power score with scale models.

#### 7.1. Decomposition of Density-Difference Estimation Procedure

We show that the estimation procedure of the density difference using the density-power score is decomposed into two steps: estimation using the pseudo-spherical score with the non-scale model and estimation of the scale parameter. Note that the estimation in the first step has already been investigated in the last section.

Let us consider the statistical model satisfying the following assumption:

**Assumption 2.** Let $\mathcal{M}$_{diff} be the model for the density difference. For all f ∈ $\mathcal{M}$_{diff} and all c ∈ ℝ, cf ∈ $\mathcal{M}$_{diff} holds.

The model satisfying the above assumption is referred to as the scale model. A typical example of the scale model is the linear model:

f(x; θ) = Σ_{ℓ=1}^{k} θ_{ℓ} ψ_{ℓ}(x),

where the ψ_{ℓ} are basis functions, such that ∫ψ_{ℓ}(x)dx = 0 holds for all ℓ = 1, . . . , k.

Suppose that the k-dimensional scale model, $\mathcal{M}$_{diff}, is parametrized as:

$\mathcal{M}$_{diff} = { c g_{θ̄}(x) : c ∈ ℝ, θ̄ ∈ Θ_{k−1} }.   (14)

The parameter, c, is the scale parameter, and θ̄ in Equation (14) is called the shape parameter. We assume that no g_{θ̄} is equal to the zero function. The parametrization of the model, $\mathcal{M}$_{diff}, may not provide a one-to-one correspondence between the parameter θ = (c, θ̄) and the function, cg_{θ̄} (e.g., at c = 0). We assume that, in the vicinity of the true density difference, the parameter, θ, and the function, cg_{θ̄}, are in one-to-one correspondence. Define the model, $\mathcal{M}$_{diff,c}, to be the (k − 1)-dimensional non-scale model:

$\mathcal{M}$_{diff,c} = { c g_{θ̄}(x) : θ̄ ∈ Θ_{k−1} }.
For the pseudo-spherical score, the equality S(f, g) = S(f, cg) holds for any c > 0; thus, the scale parameter is not estimable with this score. Let us instead study the statistical properties of the estimator based on the density-power score with the scale models:

**Theorem 3.** Let us consider the density-difference estimation. Define ${S}_{\text{diff},\alpha}^{\text{pow}}(f,g)$ and ${S}_{\text{diff},\alpha}^{\text{ps}}(f,g)$ as the density-power score and the pseudo-spherical score, respectively, with a positive odd number, α, where the base measure of both scores is the Lebesgue measure. Let f_{0} be a function and c̄g_{θ̄} ∈ $\mathcal{M}$_{diff} be the optimal solution of the problem,

min_{cg_{θ̄} ∈ $\mathcal{M}$_{diff}} ${S}_{\text{diff},\alpha}^{\text{pow}}$(f_{0}, cg_{θ̄}).   (15)

We assume that c̄g_{θ̄} ≠ 0. Then, g_{θ̄} is given as the optimal solution of:

where c is any fixed non-zero constant. In addition, the optimal scale parameter is expressed as:

The empirical density-difference, f̃, is allowed as the function, f_{0}, in the above theorem. The proof is found in Appendix B. The same theorem for non-negative functions is shown in [25].
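For the density-power score in the normalization used in Example 2 (an assumption; the paper's constants may differ, but the minimizer is unaffected by positive rescaling of the score), the scale stage admits a closed form: minimizing c^{1+α}∫g^{1+α}dx − (1 + 1/α)c^{α}∫f₀gᵅdx over c gives c̄ = ∫f₀ gᵅ dx / ∫ g^{1+α} dx. The sketch below checks this against a grid search, with arbitrary test functions f₀ and g:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
alpha = 3  # positive odd integer, as in the paper

def norm_pdf(t, mu, s):
    return np.exp(-(t - mu) ** 2 / (2 * s * s)) / np.sqrt(2 * np.pi * s * s)

f0 = norm_pdf(x, 0.0, 1.0) - norm_pdf(x, 1.0, 1.0)  # target density difference
g = norm_pdf(x, 0.0, 1.1) - norm_pdf(x, 0.9, 1.1)   # fixed shape function

def dp_objective(c):
    # Density-power objective restricted to the scale parameter c:
    #   int (c g)^(1+alpha) dx - (1 + 1/alpha) * int f0 (c g)^alpha dx.
    return (np.sum((c * g) ** (1 + alpha)) * dx
            - (1 + 1 / alpha) * np.sum(f0 * (c * g) ** alpha) * dx)

c_grid = np.linspace(0.01, 3.0, 3000)
c_best = c_grid[np.argmin([dp_objective(c) for c in c_grid])]
c_formula = np.sum(f0 * g ** alpha) * dx / (np.sum(g ** (1 + alpha)) * dx)
print(c_best, c_formula)   # the two values agree up to the grid resolution
```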

Theorem 3 indicates that the minimization of the density-power score on the scale model is decomposed into two stages. Suppose that the true density-difference is f_{0} = p – q = c*g_{θ̄}_{*} ∈ $\mathcal{M}$_{diff}. At the first stage of the estimation, the minimization problem (15) is solved on the non-scale model $\mathcal{M}$_{diff,}_{±c*}. Then, at the second stage, the scale parameter is estimated. Though c* is unknown, the estimation procedure can be virtually interpreted as the two-stage procedure using the non-scale model, $\mathcal{M}$_{diff}_{,±c*}.
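As a concrete instance of density-power estimation with a scale model, the sketch below implements the α = 1 (least-squares) case with a linear-in-parameter Gaussian-kernel model, in the spirit of the least-squares density-difference estimator of [10]; the kernel centers, width, and ridge regularization λ are illustrative assumptions. For α = 1 the score is, up to a constant, ∫f_θ² dx − 2[(1/n)Σᵢ f_θ(xᵢ) − (1/m)Σⱼ f_θ(yⱼ)], which is quadratic in θ and minimized in closed form:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 2000)   # samples from p = N(0, 1)
y = rng.normal(1.0, 1.0, 2000)   # samples from q = N(1, 1)

centers = np.linspace(-3.0, 4.0, 50)   # kernel centers (illustrative)
width = 0.5                            # kernel width (illustrative)
lam = 1e-2                             # ridge regularization (illustrative)

def design(t):
    # Gaussian kernel basis: psi_l(t) = exp(-(t - c_l)^2 / (2 width^2)).
    return np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

# H[l, l'] = int psi_l psi_l' dx, evaluated on a fine grid.
grid = np.linspace(-10.0, 11.0, 20001)
dx = grid[1] - grid[0]
Psi = design(grid)
H = Psi.T @ Psi * dx

# h[l] = mean_i psi_l(x_i) - mean_j psi_l(y_j): the linear term of the score.
h = design(x).mean(axis=0) - design(y).mean(axis=0)

# Minimize theta' H theta - 2 theta' h + lam ||theta||^2  (closed form).
theta = np.linalg.solve(H + lam * np.eye(len(centers)), h)

f_hat = Psi @ theta                  # estimated density difference on the grid
d1_hat = np.sum(np.abs(f_hat)) * dx  # plug-in L1-distance estimate
print(d1_hat)                        # target value: d1(N(0,1), N(1,1)) ~ 0.766
```

The linear model is a scale model in the sense of Assumption 2, so this estimator is exactly the kind covered by the two-stage decomposition above.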

#### 7.2. Statistical Properties of Density-Difference Estimation

Based on the two-stage procedure for the minimization of the density-power score, we investigate the statistical properties of the density-difference estimator.

As shown in Section 6.2, the estimator using the pseudo-spherical score over the non-scale model, $\mathcal{M}$_{diff,c*}, has the redescending property. Hence, the extreme outliers have little impact on the estimation of the shape parameter, θ̄. Under the Contamination (11), let us define θ̄_{ε} as the optimal solution of the problem,

As the outliers, z_{p} and z_{q}, tend to the infinite point, we have:
because the estimation of the shape parameter using the pseudo-spherical score with the non-scale model has the redescending property, as shown in the last section.

The scale parameter is given as:

As the outliers, z_{p} and z_{q}, tend to the infinite point, the second term in the above expression converges to zero. Hence, the scale parameter is given as:
from which we have:
The above analysis shows that extreme outliers with a small contamination ratio, ε, shrink the intensity of the estimated density-difference by the factor 1 − ε, so the estimated density-difference is distributed around (1 − ε)(p – q). Hence, contamination by extreme outliers has little impact on the shape parameter of the density-difference estimator when the density-power score is used.
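The (1 − ε)-shrinkage can be observed directly with the least-squares (α = 1) density-power estimator on a linear Gaussian-kernel scale model (an illustrative construction; the outlier locations are chosen far outside the support of the basis functions):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1800
x_in = rng.normal(0.0, 1.0, n)   # inliers from p = N(0, 1)
y_in = rng.normal(1.0, 1.0, n)   # inliers from q = N(1, 1)
eps = 0.1
k = int(eps / (1 - eps) * n)     # 200 outliers -> contamination ratio 0.1
x_c = np.concatenate([x_in, np.full(k, 40.0)])    # extreme outliers
y_c = np.concatenate([y_in, np.full(k, -40.0)])

centers = np.linspace(-3.0, 4.0, 50)
width, lam = 0.5, 1e-2
grid = np.linspace(-10.0, 11.0, 20001)
dx = grid[1] - grid[0]

def design(t):
    return np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

Psi = design(grid)
H = Psi.T @ Psi * dx

def lsdd_l1(x, y):
    # alpha = 1 density-power (least-squares) density-difference fit,
    # followed by the plug-in L1-distance.
    h = design(x).mean(axis=0) - design(y).mean(axis=0)
    theta = np.linalg.solve(H + lam * np.eye(len(centers)), h)
    return np.sum(np.abs(Psi @ theta)) * dx

d1_clean = lsdd_l1(x_in, y_in)
d1_contam = lsdd_l1(x_c, y_c)
print(d1_contam / d1_clean)   # ~ 1 - eps = 0.9
```

Because the extreme outliers contribute essentially zero to every basis function, the linear term of the score, and hence the whole estimate, is scaled by exactly the inlier fraction 1 − ε = 0.9, while the shape of f̂ is unchanged.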

Let us consider the L_{1}-distance estimation using the density-difference estimator. Suppose that the true density-difference, p – q, is estimated by the density-power score with the scale model. Then, the L_{1}-distance estimator under the Contamination (11) is distributed around:

(1 − ε) ∫ |p(x) − q(x)| dx = (1 − ε) d_{1}(p, q).   (16)

When the contamination ratio, ε, is small, even extreme outliers do not significantly affect the L_{1}-distance estimator. The bias induced by extreme outliers depends only on the contamination ratio. If prior knowledge of the contamination ratio, ε, is available, one can approximately correct the bias of the L_{1}-distance estimate by multiplying the estimator by a constant obtained from that prior knowledge.

## 8. Numerical Experiments

We conducted numerical experiments to evaluate the statistical properties of L_{1}-distance estimators. We used synthetic datasets. Let N(μ, σ^{2}) be the one-dimensional normal distribution with mean μ and variance σ^{2}. In the standard setup, let us assume that the samples are drawn from the normal distributions,

In addition, some outliers are observed from:
where the variance, τ, is much larger than one. Based on the two datasets, {x_{1}, . . . , x_{n}, x̃_{1}, . . . , x̃_{n′}} and {y_{1}, . . . , y_{m}, ỹ_{1}, . . . , ỹ_{m′}}, the L_{1}-distance between N(0, 1) and N(1, 1) is estimated.

Below, we describe the models and the estimators used in the L_{1}-distance estimation. The non-scale model for the density-difference is defined as:

where ϕ(x; μ, σ) is the probability density of N(μ, σ^{2}). To estimate the parameters, the density-power score and the pseudo-spherical score are used. As the scale model, we employ cf_{θ}(x) with the parameter, θ, and c > 0. As shown in Equation (16), the estimator is biased when the samples are contaminated by outliers. Ideally, multiplying the L_{1}-distance estimator with the scale model by (1 − ε)^{−1} will improve the estimation accuracy, where ε is the contamination ratio, n′/n (= m′/m). The bias-corrected estimator was also examined in the numerical experiments, though the bias correction requires prior knowledge of the contamination ratio. For the statistical model for density-ratio estimation, we used the scale model,
and the density-power score as the loss function. Furthermore, we evaluated the two-step approach, in which the L_{1}-distance is estimated from separately estimated probability densities. We employed the density-power score with the statistical model, ϕ(x; μ, σ), to estimate the probability density of each dataset.

In the numerical experiments, the error of the L_{1}-distance estimator, d̂_{1}(p, q), was measured by the relative error, |1 − d̂_{1}(p, q)/d_{1}(p, q)|. The number of training samples varied from 1,000 to 10,000, and the number of outliers varied from zero (no outliers) to 100. The parameter, α, in the score function was set to α = 1 or 3 for the density-difference (DF)-based estimators and to α = 0.1 for the density-ratio (DR)-based estimators. For the density-ratio estimation, the score with large α easily yields numerical errors, since the power in the exponential model tends to become extremely large. For each setup, the averaged relative error of each estimator was computed over 100 iterations.
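As a concrete illustration, the experimental setup described above can be sketched in Python/NumPy. This is a minimal sketch, not the authors' code: the outlier mean, the value of τ, and the sample sizes are illustrative assumptions, and the true L_{1}-distance between N(0, 1) and N(1, 1) is obtained by numerical integration rather than from the fitted models.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# --- Synthetic data: inliers from N(0,1) and N(1,1), plus wide-normal outliers ---
n, m = 1000, 1000          # inlier sample sizes (illustrative values)
n_out, m_out = 10, 10      # numbers of outliers n', m' (illustrative values)
tau = 10.0                 # outlier scale, much larger than one (assumed value)

x_all = np.concatenate([rng.normal(0.0, 1.0, n), rng.normal(0.0, tau, n_out)])
y_all = np.concatenate([rng.normal(1.0, 1.0, m), rng.normal(0.0, tau, m_out)])
eps = n_out / n            # contamination ratio n'/n (= m'/m)

# --- True L1-distance d_1(p, q) between N(0,1) and N(1,1) ---
def phi(x, mu, sigma):
    """Density of N(mu, sigma^2)."""
    return np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))

grid = np.linspace(-8.0, 9.0, 200001)
dx = grid[1] - grid[0]
d1_true = np.sum(np.abs(phi(grid, 0.0, 1.0) - phi(grid, 1.0, 1.0))) * dx

# Cross-check: the two densities cross at x = 1/2, so d_1 = 2 * (Phi(0.5) - Phi(-0.5)).
Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
d1_closed = 2.0 * (Phi(0.5) - Phi(-0.5))

# --- Evaluation metric and bias correction ---
def relative_error(d1_hat, d1=d1_true):
    """Relative error |1 - d1_hat / d1| used to score an estimator."""
    return abs(1.0 - d1_hat / d1)

def bias_corrected(d1_hat, eps):
    """Undo the (1 - eps) shrinkage of a scale-model estimate."""
    return d1_hat / (1.0 - eps)

# A hypothetical scale-model estimate shrunk by the factor (1 - eps):
d1_hat = (1.0 - eps) * d1_true
# relative_error(d1_hat) equals eps; after bias correction, it vanishes
```

Here, `d1_hat` merely stands in for the output of a density-power-score fit of the scale model; in the actual experiments, it would be obtained by minimizing the empirical Bregman score over `x_all` and `y_all`.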

The numerical results are depicted in Figure 1, and the details are shown in Table 1. In the figure, estimators with extremely large relative errors are omitted. As shown in Table 1, the estimation accuracy of the DR-based estimator was severely degraded by the contaminated samples. On the other hand, the DF-based estimators were robust against outliers. The DF-based estimator with the pseudo-spherical score was less accurate than that with the density-power score. In statistical inference, there is a tradeoff between efficiency and robustness. Though the pseudo-spherical score provides a redescending estimator, the efficiency of the estimator is not high in practice. In probability density estimation, the pseudo-spherical score with the parameter, α, ranging from 0.1 to one provides a robust and efficient estimator, while the estimator with large α becomes inefficient [23]. This is because the estimator with large α tends to ignore most of the samples. In the density-difference estimation, the parameter, α, should be a positive odd number. Hence, in our setup, the estimator using the pseudo-spherical score became inefficient. As for the density-power score, the corresponding DF-based estimator has a bounded influence function. As a result, the estimator is efficient and rather robust against outliers. Furthermore, we found that the bias correction by multiplying by the constant factor, (1 − ε)^{−1}, improves the estimation accuracy.

When there are no outliers, the two-step approach using separately estimated probability densities has larger relative errors than the DF-based estimators using the density-power score. For the contaminated samples, the two-step approach is superior to the other methods, especially when the sample size is less than 2,000. In this case, the separate density estimation with the density-power score efficiently reduces the influence of the outliers. For larger sample sizes, however, the DF-based estimators using the density-power score are comparable to the two-step approach. When the rate of outliers is moderate, the DF-based approach works well, even though it relies on semiparametric modeling, which uses less information than the parametric modeling of the two-step approach.

## 9. Conclusions

In this paper, we first proposed to use the Bregman score to estimate density differences and density ratios, and we then studied the robustness of the L_{1}-distance estimator. We showed that the pseudo-spherical score provides a redescending estimator of the density difference under non-scale models, whereas the estimator based on the density-power score does not have the redescending property against extreme outliers. In the scale models, the pseudo-spherical score does not work, since the corresponding potential is not strictly convex on the function space. We proved that the density-power score provides a redescending estimator for the shape parameter in the scale models, and we calculated the shift in the L_{1}-distance estimator using the scale model under extreme outliers. Moreover, we proved that the L_{1}-distance estimator is not significantly affected by extreme outliers. In addition, we showed that prior knowledge of the contamination ratio, ε, can be used to correct the bias of the L_{1}-distance estimator. In the numerical experiments, the density-power score provided an efficient and robust estimator in comparison to the pseudo-spherical score. This is because the pseudo-spherical score with large α tends to ignore most of the samples and, thus, becomes inefficient. In a practical setup, the density-power score will provide a satisfactory result. Furthermore, we illustrated that the bias correction using prior knowledge of the contamination ratio improves L_{1}-distance estimators using scale models.

Besides the Bregman scores, there are other useful classes of scoring rules, such as local scoring rules [12,18,30]. It is therefore an interesting direction to pursue the possibility of applying these classes of scoring rules to the estimation of density differences and density ratios.