Composite Tests under Corrupted Data

This paper focuses on test procedures under corrupted data. We assume that the observations Z_i are mismeasured, due to the presence of measurement errors. Thus, instead of Z_i for i = 1, ..., n, we observe X_i = Z_i + δV_i, with an unknown parameter δ and an unobservable random variable V_i. It is assumed that the random variables Z_i are i.i.d., as are the X_i and the V_i. The test procedure aims at deciding between two simple hypotheses pertaining to the density of the variable Z_i, namely f_0 and g_0. In this setting, the density of the V_i is supposed to be known. The procedure which we propose aggregates likelihood ratios for a collection of values of δ. A new definition of least-favorable hypotheses for the aggregate family of tests is presented, together with a relation to the Kullback-Leibler divergence between the families (f_δ)_δ and (g_δ)_δ. Finite-sample lower bounds for the power of these tests are presented, both through analytical inequalities and through simulation under the least-favorable hypotheses. Since no optimality holds for the aggregation of likelihood ratio tests, a similar procedure is proposed, replacing the individual likelihood ratios by divergence-based test statistics. It is shown and discussed that the resulting aggregated test may perform better than the aggregated likelihood ratio procedure.


Introduction
A situation which is commonly met in quality control is the following: Some characteristic Z of an item is supposed to be random, and a decision about its distribution has to be made based on a sample of such items, each with the same distribution F_0 (with density f_0) or G_0 (with density g_0). The measurement device adds a random noise V_δ to each measurement, mutually independent and independent of the item, with a common distribution function H_δ and density h_δ, where δ is an unknown scaling parameter. Therefore, the density of the measurement X := Z + V_δ is either f_δ := f_0 * h_δ or g_δ := g_0 * h_δ, where * denotes the convolution operation. We denote by F_δ (respectively, G_δ) the distribution function with density f_δ (respectively, g_δ).
The problem of interest, studied in [1], is how the measurement errors can affect the conclusion of the likelihood ratio test with statistic L_n := (1/n) ∑_{i=1}^n log (g_0/f_0)(X_i).
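As a concrete illustration of this statistic, the sketch below computes L_n on simulated corrupted data. The particular densities and noise mechanism (standard normal f_0, mean-shifted g_0, Gaussian V, and the function name lrt_statistic) are illustrative assumptions, not the paper's only setting.

```python
import numpy as np
from scipy.stats import norm

def lrt_statistic(x, f0, g0):
    """Naive LRT statistic L_n = (1/n) * sum_i log(g0/f0)(X_i),
    computed on the (possibly corrupted) observations x."""
    return float(np.mean(g0.logpdf(x) - f0.logpdf(x)))

# Illustrative choices: f0 = N(0,1), g0 = N(0.3,1); noise V ~ N(0,1), level delta.
f0, g0 = norm(0.0, 1.0), norm(0.3, 1.0)
rng = np.random.default_rng(0)
n, delta = 1000, 0.4
z = rng.normal(0.0, 1.0, size=n)              # clean data: H0 (density f0) holds
x = z + np.sqrt(delta) * rng.normal(size=n)   # observed: X = Z + sqrt(delta) * V
print(lrt_statistic(x, f0, g0))               # negative in expectation under H0
```

In this Gaussian pair, log(g_0/f_0)(x) = 0.3x - 0.045, so the statistic has negative expectation under f_0 and positive expectation under g_0; the corruption shifts this expectation toward 0, which is the loss of power studied in [1].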
For small δ, the result of [2] enables us to estimate the true log-likelihood ratio (true Kullback-Leibler divergence) even when we only have at our disposal data locally perturbed by additive measurement errors. The distribution function H_0 of the measurement errors is considered unknown, up to zero expectation and unit variance. When we use the likelihood ratio test while ignoring the possible measurement errors, we can incur a loss in the errors of both the first and second kind. However, it is shown in [1] that, for small δ, the original likelihood ratio test (LRT) is still the most powerful, albeit at a slightly changed significance level. The test problem leads to composite null and alternative classes H_0 and H_1 of distributions of random variables Z + V_δ with V_δ := √δ V, where V has distribution H. If those families are bounded by alternating Choquet capacities of order 2, then the minimax test is based on the likelihood ratio of the pair of least-favorable distributions of H_0 and H_1, respectively (see Huber and Strassen [3]). Moreover, Eguchi and Copas [4] showed that the overall loss of power caused by a misspecified alternative equals the Kullback-Leibler divergence between the original and the corrupted alternatives. Surprisingly, the value of the overall loss is independent of the choice of null hypothesis. The arguments of [2] and of [5] enable us to approximate the loss of power locally, for a broad set of alternatives. The asymptotic behavior of the loss of power of the test based on sampled data is considered in [1], and is supplemented with numerical illustration.

Statement of the Test Problem
Our aim is to propose a class of statistics for testing the composite hypotheses H 0 and H 1 , extending the optimal Neyman-Pearson LRT between f 0 and g 0 . Unlike in [1], the scaling parameter δ is not supposed to be small, but merely to belong to some interval bounded away from 0.
We assume that the distribution H of the random variable (r.v.) V is known; indeed, in the tuning of the offset of a measurement device, it is customary to perform a large number of observations on the noise under a controlled environment.
Therefore, this first step produces a good basis for the modelling of the noise density h. Although the distribution of V is known, under operational conditions the distribution of the noise is modified: for a given δ in [δ_min, δ_max] with δ_min > 0, denote by V_δ a r.v. whose distribution is obtained through some transformation from the distribution of V, and which quantifies the level of the random noise. A classical example is V_δ = √δ V, but at times we make a weaker assumption, which amounts to some decomposability property with respect to δ: for instance, in the Gaussian case, we assume that, for all δ, η, there exists some r.v. W_{δ,η} such that V_{δ+η} =_d V_δ + W_{δ,η}, where V_δ and W_{δ,η} are independent.
The test problem can be stated as follows: A batch of i.i.d. measurements X i := Z i + V δ,i is performed, where δ > 0 is unknown, and we consider the family of tests of H 0 (δ):=[X has density f δ ] vs. H 1 (δ):=[X has density g δ ], with δ ∈ ∆ = [δ min , δ max ] . Only the X i are observed. A class of combined tests of H 0 vs. H 1 is proposed, in the spirit of [6][7][8][9].
For every fixed n, we assume that δ is allowed to run over the finite grid of points of the vector ∆_n := [δ_min = δ_{0,n}, ..., δ_{p_n,n} = δ_max]. The present construction is essentially non-asymptotic, neither in n nor in δ, in contrast with [1], where δ was supposed to lie in a small neighborhood of 0. However, with increasing n, it is useful to consider that the array (δ_{j,n})_{j=1}^{p_n} becomes dense in ∆ = [δ_min, δ_max] and that lim_{n→∞} (log p_n)/n = 0.
For the sake of notational brevity, we denote by ∆ the above grid ∆_n, and all suprema or infima over ∆ are understood to be over ∆_n. For any event B and any δ in ∆, F_δ(B) (respectively, G_δ(B)) designates the probability of B under the distribution F_δ (respectively, G_δ). Given a sequence of levels α_n, we consider a sequence of test criteria T_n := T_n(X_1, ..., X_n) for H_0(δ), and the pertaining critical regions {T_n(X_1, ..., X_n) > A_n}, such that F_δ(T_n(X_1, ..., X_n) > A_n) ≤ α_n for all δ ∈ ∆, leading to rejection of H_0(δ) for at least some δ ∈ ∆.
In an asymptotic context, it is natural to assume that α n converges to 0 as n increases, since an increase in the sample size allows for a smaller first kind risk. For example, in [8], α n takes the form α n := exp{−na n } for some sequence a n → ∞.
In the sequel, the Kullback-Leibler discrepancy between probability measures Q and P, with respective densities q and p (with respect to the Lebesgue measure on R), is denoted K(Q, P) := ∫ q log(q/p) dx whenever defined, and takes value +∞ otherwise. The present paper handles some issues with respect to this context. In Section 2, we consider some test procedures based on the supremum of likelihood ratios (LR) over various values of δ, and define T_n. The threshold for such a test is obtained for any level α_n, and a lower bound for its power is provided. In Section 3, we develop an asymptotic approach to the least-favorable hypotheses (LFH) for these tests. We prove that asymptotically least-favorable hypotheses are obtained through minimization of the Kullback-Leibler divergence between the two composite classes H0 and H1, independently of the level of the test.
We next consider, in Section 3.3, the performance of the test numerically; indeed, under the least-favorable pair of hypotheses, we compare the power of the test (as obtained through simulation) with the theoretical lower bound obtained in Section 2. We show that the minimal power, as measured under the LFH, is indeed larger than the theoretical lower bound; that is, the simulated performances exceed the theoretical bounds. These results are developed in a number of examples.
Since no argument plays in favor of any type of optimality for the test based on the supremum of likelihood ratios for composite testing, we consider substituting those ratios with other kinds of scores in the family of divergence-based concepts, extending the likelihood ratio in a natural way. Such an approach has a long history, stemming from the seminal book by Liese and Vajda [10]. Extensions of Kullback-Leibler-based criteria (such as the likelihood ratio) to power-type criteria have been proposed for many applications in physics and in statistics (see, e.g., [11]). We explore the properties of those new tests under the pair of hypotheses minimizing the Kullback-Leibler divergence between the two composite classes H0 and H1. We show that, in some cases, we can build a test procedure whose properties improve on the above supremum of LRTs, and we provide an explanation for this fact. This is the scope of Section 4.

An Extension of the Likelihood Ratio Test
For any δ in ∆, let T_{n,δ} := (1/n) ∑_{i=1}^n log (g_δ/f_δ)(X_i), and define T_n := sup_{δ∈∆} T_{n,δ}.
Consider, for fixed δ, the likelihood ratio test with statistic T_{n,δ}, which is uniformly most powerful (UMP) among all tests of H0(δ) := [p_T = f_δ] vs. H1(δ) := [p_T = g_δ], where p_T designates the density of the generic r.v. X. The test procedure to be discussed aims at solving the question: does there exist some δ for which H0(δ) would be rejected vs. H1(δ), for some prescribed value of the first kind risk? Whenever H0(δ) is rejected in favor of H1(δ) for some δ, we reject H0 in favor of H1. A critical region for this test with level α_n is of the form {T_n > A_n}. Since, for any sequence of events B_1, ..., B_{p_n}, the union bound P(∪_j B_j) ≤ ∑_j P(B_j) holds, an upper bound for P_H0(H1) can be obtained, making use of the Chernoff inequality for the right-hand side of (4), providing an upper bound for the risk of first kind for a given A_n. The correspondence between A_n and this risk allows us to define the threshold A_n accordingly.
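The aggregation T_n = sup_{δ∈∆} T_{n,δ} can be sketched in the Gaussian shift setting (case A below), where the convolved densities have closed forms f_δ = N(0, 1+δ) and g_δ = N(0.3, 1+δ); the grid, sample size, and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

MU = 0.3  # shift separating f0 from g0 (case A of the paper)

def t_n_delta(x, delta):
    """T_{n,delta} = (1/n) sum_i log(g_delta/f_delta)(X_i); here f_delta and
    g_delta are the Gaussian convolutions N(0, 1+delta) and N(MU, 1+delta)."""
    s = np.sqrt(1.0 + delta)
    return float(np.mean(norm.logpdf(x, MU, s) - norm.logpdf(x, 0.0, s)))

def t_n(x, grid):
    """Aggregated statistic T_n = sup over the grid Delta_n of T_{n,delta}."""
    return max(t_n_delta(x, d) for d in grid)

grid = np.linspace(0.1, 0.5, 9)              # Delta_n = [delta_min, ..., delta_max]
rng = np.random.default_rng(1)
x = rng.normal(MU, np.sqrt(1.3), size=500)   # sample from g_delta with delta = 0.3
print(t_n(x, grid))                          # positive in expectation under H1
```

The test then rejects H0 when T_n exceeds the threshold A_n; by construction, T_n dominates every individual T_{n,δ}, which is what the union bound above controls.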
Turning to the power of this test, we bound the risk of second kind by a crude bound which, in turn, can be bounded from above through the Chernoff inequality; this yields a lower bound for the power of the test under any hypothesis g_η in H1.
Let α n denote a sequence of levels, such that lim sup n→∞ α n < 1.
We make use of the following hypothesis: making use of the Chernoff-Stein Lemma (see Theorem A1 in Appendix A), hypothesis (6) means that any LRT of H0: p_T = f_δ vs. H1: p_T = g_δ′ is asymptotically more powerful than any LRT of H0: p_T = f_δ vs. H1: p_T = f_δ′.
Both hypotheses (7) and (8), defined below, are used to provide the critical region and the power of the test.
For all δ, δ′, define Z_δ := log (g_δ/f_δ)(X), and let ϕ_{δ,δ′}(t) := log E_{F_δ′} exp(t Z_δ). With N_{δ,δ′} the set of all t such that ϕ_{δ,δ′}(t) is finite, we assume that N_{δ,δ′} is a non-void open neighborhood of 0.
Define, further, W_η := log (g_η/f_η)(X) for any η, together with its moment-generating function under G_η and the associated Legendre transform I_η. We also assume an accessory condition on the support of Z_δ and W_η, respectively under F_δ′ and under G_η (see (A2) and (A5) in the proof of Theorem A1). Suppose the regularity assumptions (7) and (8) are fulfilled for all δ, δ′, and η. Assume, further, that p_n fulfills (1).
The following result holds: Proposition 2. Whenever (6) holds, for any sequence of levels α n bounded away from 1, defining it holds, for large n, that

An Asymptotic Definition for the Least-Favorable Hypotheses
We prove that the above procedure is asymptotically minimax for testing the composite hypothesis H_0 against the composite alternative H_1. Indeed, we identify the least-favorable hypotheses, say F_δ* ∈ H_0 and G_δ* ∈ H_1, which lead to minimal power and maximal first kind risk for these tests. This requires a discussion of the definition and existence of such a least-favorable pair of hypotheses in an asymptotic context; indeed, for a fixed sample size, the usual definition only leads to an explicit construction in very specific cases. Unlike in [1], the minimax tests will not be in the sense of Huber and Strassen. Indeed, on the one hand, the hypotheses H_0 and H_1 are not defined in topological neighbourhoods of F_0 and G_0, but rather through a convolution in a parametric setting. On the other hand, the specific test of {H_0(δ), δ ∈ ∆} against {H_1(δ), δ ∈ ∆} does not require capacities dominating the corresponding probability measures.
Throughout the subsequent text, we shall assume that there exists δ* achieving (10), namely K(F_δ*, G_δ*) = inf_{δ∈∆} K(F_δ, G_δ). We shall call the pair of distributions (F_δ, G_δ) least-favorable for the sequence of tests 1{T_n > A_n} if it satisfies (11). The condition of unbiasedness of the test is captured by the central inequality in (11). Because, for finite n, such a pair can be constructed only in a few cases, we have recourse to an asymptotic version of (11) as n → ∞. We shall show that any pair of distributions (F_δ*, G_δ*) achieving (10) is least-favorable. Indeed, it satisfies inequality (11) asymptotically on the logarithmic scale.
Specifically, we say that (F_δ, G_δ) is a least-favorable pair of distributions when, for any δ ∈ ∆, (12) holds. Define the total variation distance d_TV(F, G) := sup_B |F(B) − G(B)|, where the supremum is over all Borel sets B of R. We will also assume that condition (13) on α_n holds for all n. We state our main result, whose proof is deferred to Appendix B.

Theorem 3.
For any level α n satisfying (13), the pair (F δ * , G δ * ) is a least-favorable pair of hypotheses for the family of tests 1 {T n ≥ A n }, in the sense of (12).

Identifying the Least-Favorable Hypotheses
We now concentrate on (10).
For given δ ∈ [δ_min, δ_max] with δ_min > 0, the distribution of the r.v. V_δ is obtained through some transformation of the known distribution of V. The classical example is V_δ = √δ V, which is a scaling, and where √δ determines the signal-to-noise ratio. The following results state that the Kullback-Leibler discrepancy K(F_δ, G_δ) reaches its minimal value when the noise V_δ is "maximal", under some additivity property with respect to δ. This result is not surprising: adding noise deteriorates the ability to discriminate between the two distributions F_0 and G_0; this effect is captured in K(F_δ, G_δ), which takes its minimal value for the maximal δ.
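This monotonicity can be checked numerically. The sketch below assumes the Gaussian shift case used later in Section 3.3 (an assumption made here for tractability), evaluates K(F_δ, G_δ) by quadrature, and compares it with the closed form μ²/(2(1+δ)) valid in that case.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

MU = 0.3  # shift of g0 with respect to f0

def kl_f_g(delta):
    """K(F_delta, G_delta) by numerical quadrature, for f0 = N(0,1),
    g0 = N(MU,1), and Gaussian noise of variance delta, so that
    f_delta = N(0, 1+delta) and g_delta = N(MU, 1+delta)."""
    s = np.sqrt(1.0 + delta)
    integrand = lambda x: norm.pdf(x, 0.0, s) * (
        norm.logpdf(x, 0.0, s) - norm.logpdf(x, MU, s))
    val, _ = quad(integrand, -12 * s, 12 * s)
    return val

for d in (0.1, 0.3, 0.5):
    print(d, kl_f_g(d))   # decreasing in d: more noise, less discrimination
```

The quadrature route generalizes to non-Gaussian noise, where f_δ and g_δ must themselves be computed by numerical convolution.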
This result holds as a consequence of Lemma A5 in Appendix C. In the Gaussian case, when h is the standard normal density, Proposition 4 holds, since the convolution of two centred Gaussian densities with variances δ and η is the centred Gaussian density with variance δ + η. In order to model symmetric noise, we may consider a symmetrized Gamma density h_δ, built from γ+(1, δ), the Gamma density with scale parameter 1 and shape parameter δ, and γ−(1, δ), the Gamma density on R− with the same parameters. Hence, a r.v. with density h_δ is symmetrically distributed and has variance 2δ. Clearly, h_{δ+η}(x) = h_δ * h_η(x), which shows that Proposition 4 also holds in this case. Note that, except for values of δ less than or equal to 1, the density h_δ is bimodal, which does not play in favour of such densities for modelling the uncertainty due to the noise. In contrast with the Gaussian case, h_δ cannot be obtained from h_1 by any scaling. The centred Cauchy distribution may serve as a description of heavy-tailed symmetric noise, and keeps unimodality through convolution; it satisfies the requirements of Proposition 4. In this case, δ acts as a scaling, since f_δ is the density of δX where X has density f_1.
In practice, the interesting case is when δ is the variance of the noise and corresponds to a scaling of a generic density, as occurs for the Gaussian case or for the Cauchy case. In the examples, which will be used below, we also consider symmetric, exponentially distributed densities (Laplace densities) or symmetric Weibull densities with a given shape parameter. The Weibull distribution also fulfills the condition in Proposition 4, being infinitely divisible (see [12]).

Numerical Performances of the Minimax Test
As frequently observed, numerical results deduced from theoretical bounds are of poor interest, due to the sub-optimality of the involved inequalities. They may be sharpened in specific cases. This motivates the need for simulation. We study two cases, which can be considered as benchmarks.
A. In the first case, f_0 is a normal density with expectation 0 and variance 1, whereas g_0 is a normal density with expectation 0.3 and variance 1. B. The second case handles a situation where f_0 and g_0 belong to different models: f_0 is a log-normal density with location parameter 1 and scale parameter 0.2, whereas g_0 is a Weibull density on R+ with shape parameter 5 and scale parameter 3. Those two densities differ strongly in terms of asymptotic decay. They are, however, very close to one another in terms of their symmetrized Kullback-Leibler divergence (the so-called Jeffrey distance). Indeed, centering on the log-normal density f_0, the closest among all Weibull densities is at distance 0.10, while the density g_0 is at distance 0.12 from f_0.
Both cases are treated considering four types of distribution for the noise: a. the noise h_δ is a centered normal density with variance δ²; b. the noise h_δ is a centered Laplace density with parameter λ(δ); c. the noise h_δ is a symmetrized Weibull density with shape parameter 1.5 and variable scale parameter β(δ); and d. the noise h_δ is Cauchy, with density h_δ(x) = (γ(δ)/π) / (γ(δ)² + x²).
In order to compare the performances of the test under those four distributions, we have adopted the following rule: the parameter of the distribution of the noise is tuned such that, for each value δ, it holds that P(|V_δ| ≤ δ) = Φ(1) − Φ(−1) ≈ 0.68, where Φ stands for the standard Gaussian cumulative distribution function. Thus, distributions b to d are scaled with respect to the Gaussian noise with variance δ².
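This calibration rule can be made explicit in the Laplace case b. The sketch below (where the helper name laplace_scale is ours) solves P(|V_δ| ≤ δ) = Φ(1) − Φ(−1) for the Laplace scale, using the closed-form Laplace c.d.f.

```python
import numpy as np
from scipy.stats import norm, laplace

P_REF = norm.cdf(1.0) - norm.cdf(-1.0)   # Gaussian reference mass, about 0.6827

def laplace_scale(delta):
    """Scale b(delta) of a centred Laplace noise such that
    P(|V_delta| <= delta) = P_REF; for Laplace(0, b),
    P(|V| <= delta) = 1 - exp(-delta / b)."""
    return -delta / np.log1p(-P_REF)

d = 0.4
b = laplace_scale(d)
print(b, laplace.cdf(d, scale=b) - laplace.cdf(-d, scale=b))  # second value = P_REF
```

The Weibull and Cauchy cases follow the same pattern, inverting their respective c.d.f.s in the scale parameter.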
In both cases A and B, the range of δ is ∆ = [δ_min = 0.1, δ_max], and we have selected a number of possibilities for δ_max, ranging from 0.2 to 0.7.
In case A, we selected δ²_max = 0.5, which corresponds to a signal-to-noise ratio equal to 0.7, a commonly chosen bound in quality control tests.
In case B, the variance of f_0 is roughly 0.6 and the variance of g_0 is roughly 0.4. The maximal value of δ²_max is roughly 0.5, which is thus a maximal upper bound for practical modeling. We present some power functions, making use of the theoretical bounds, together with the corresponding ones based on simulation runs. As will be seen, the performance of the theoretical approach is weak. We have, therefore, focused on simulation, after some comparison with the theoretical bounds.

Case A: The Shift Problem
In this subsection, we evaluate the quality of the theoretical power bound, defined in the previous sections. Thus, we compare the theoretical formula to the empirical lower performances obtained through simulations under the least-favorable hypotheses.

Theoretical Power Bound
While supposedly valid for finite n, the theoretical power bound given by (A8) still assumes some sort of asymptotics, since a good approximation of the bound requires a fine discretization of ∆ to compute I(A_n) = inf_{η∈∆_n} I_η(A_n). Thus, by condition (1), n has to be large. Therefore, in the following, we compute this lower bound for n sufficiently large (that is, at least 100 observations), which is also consistent with industrial applications.

Numerical Power Bound
In order to obtain a minimal bound for the power of the composite test, we compute the power of the test of H_0(δ*) against H_1(δ*), where δ* defines the LFH pair (F_δ*, G_δ*).
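A minimal Monte Carlo sketch of this computation, under the Gaussian case A (the function name mc_power, the replication count, and the use of an empirical quantile as critical value are our illustrative assumptions):

```python
import numpy as np
from scipy.stats import norm

def mc_power(n=100, delta_star=0.5, alpha=0.05, reps=2000, mu=0.3, seed=0):
    """Monte Carlo power of the LRT of H0(delta*) vs. H1(delta*) in the Gaussian
    case A: the critical value is the empirical (1-alpha)-quantile of
    T_{n,delta*} under F_{delta*}; the power is then estimated under G_{delta*}."""
    rng = np.random.default_rng(seed)
    s = np.sqrt(1.0 + delta_star)
    def stat(x):
        return np.mean(norm.logpdf(x, mu, s) - norm.logpdf(x, 0.0, s))
    t0 = np.array([stat(rng.normal(0.0, s, n)) for _ in range(reps)])  # under H0
    a_n = np.quantile(t0, 1.0 - alpha)        # simulated critical value A_n
    t1 = np.array([stat(rng.normal(mu, s, n)) for _ in range(reps)])   # under H1
    return float(np.mean(t1 > a_n))           # empirical power

print(mc_power())
```

Since (F_δ*, G_δ*) is least favorable, this simulated power is an empirical lower bound for the power of the composite test over ∆.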
Following Proposition 4, the LFH for the test defined by T n when the noise follows a Gaussian, a Cauchy, or a symmetrized Weibull distribution is achieved for (F δ max , G δ max ).
When the noise follows a Laplace distribution, the pair of LFH is the one minimizing the Kullback-Leibler divergence K(F_δ, G_δ) over ∆. In both cases A and B, this condition is also satisfied for δ* = δ_max.

Comparison of the Two Power Curves
As expected, Figures 1-3 show that the theoretical lower bound is always below the empirical lower bound when n is high enough to provide a good approximation of I(A_n). This is also true when the noise follows a Cauchy distribution, but for a bigger sample size than in the figures above (n > 250). In most cases, the theoretical bound tends to largely underestimate the power of the test when compared to its minimal performance given by simulations under the least-favorable hypotheses. The gap between the two also tends to increase as n grows. This result may be explained by the crude bound provided by (5), while the numerical performances are obtained with respect to the least-favorable hypotheses.
From a computational perspective, the cost of evaluating the theoretical bound is far higher than that of its numerical counterpart.

Case B: The Tail Thickness Problem
The calculation of the moment-generating function appearing in the formula of I_η(x) in (9) is numerically unstable, which renders the computation of the theoretical bound impossible. Thus, in the following sections, the performances of the test will be evaluated numerically, through Monte Carlo replications.

A Family of Composite Tests Based on Divergence Distances
This section provides a treatment similar to the above, dealing now with some extensions of the LRT to the same composite setting. The class of tests is related to the divergence-based approach to testing, and it includes the cases considered so far. For reasons developed in Section 3.3, we argue through simulation and do not develop the corresponding large deviation approach.
The statistic T_n can be generalized in a natural way, by defining a family of tests depending on some parameter γ. For γ ≠ 0, 1, let ϕ_γ(x) := (x^γ − γx + γ − 1) / (γ(γ − 1)), with the limit cases ϕ_1(x) := x log x − x + 1 and ϕ_0(x) := − log x + x − 1. For γ ≤ 2, this class of functions is instrumental in defining the so-called power divergences between probability measures, a class of pseudo-distances widely used in statistical inference (see, for example, [13]).
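The class just described can be sketched as follows; the Cressie-Read parametrization below is an assumed form, chosen because it reproduces the cases used later (γ = 1 for Kullback-Leibler, γ = 1/2 for Hellinger, γ = 2).

```python
import numpy as np

def phi(gamma, x):
    """Power-divergence generator (assumed Cressie-Read form): for gamma not in
    {0, 1}, phi_gamma(x) = (x**gamma - gamma*x + gamma - 1) / (gamma*(gamma - 1));
    the limit cases are phi_1(x) = x*log(x) - x + 1 (Kullback-Leibler) and
    phi_0(x) = -log(x) + x - 1."""
    x = np.asarray(x, dtype=float)
    if gamma == 1:
        return x * np.log(x) - x + 1.0
    if gamma == 0:
        return -np.log(x) + x - 1.0
    return (x**gamma - gamma * x + gamma - 1.0) / (gamma * (gamma - 1.0))

xs = np.linspace(0.2, 3.0, 5)
print(phi(0.5, xs))   # the Hellinger-type case gamma = 1/2
```

Each ϕ_γ is convex, nonnegative, and vanishes only at x = 1, which is what makes it usable as a discrepancy score between the candidate densities.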
Associated to this class, consider the corresponding statistics T^γ_{n,δ} and their aggregation T^γ_n := sup_{δ∈∆} T^γ_{n,δ}. Figure 4 illustrates the functions ϕ_γ, according to γ. Fix a risk of first kind α, with the corresponding power of the LRT pertaining to H0(δ*) vs. H1(δ*); define, accordingly, the power of the test based on T^γ_n. First, δ* defines the pair of hypotheses (F_δ*, G_δ*) such that the LRT with statistic T^1_{n,δ*} has maximal power among all tests of H0(δ*) vs. H1(δ*). Furthermore, by Theorem A1, it has minimal power on the logarithmic scale among all tests of H0(δ) vs. H1(δ).
On the other hand, (F_δ*, G_δ*) is the LF pair for the test with statistic T^1_n among all pairs (F_δ, G_δ). These two facts allow for the definition of the loss of power incurred by using T^1_n instead of T^1_{n,δ*} for testing H0(δ*) vs. H1(δ*). This amounts to considering the price of aggregating the local tests T^1_{n,δ}, a necessity since the true value of δ is unknown. A natural indicator for this loss is the difference between these two powers, denoted ∆^1_n. Consider, now, the aggregated test statistics T^γ_n. We do not have at hand a result similar to Proposition 2. We thus consider the behavior of the test of H0(δ*) vs. H1(δ*), although (F_δ*, G_δ*) may not be a LFH for the test statistics T^γ_n. The heuristic which we propose makes use of the corresponding loss of power with respect to the LRT, denoted ∆^γ_n. We will see that it may happen that ∆^γ_n improves over ∆^1_n. We define the optimal value of γ, say γ*, such that ∆^{γ*}_n ≤ ∆^γ_n for all γ.
In the various figures hereafter, NP corresponds to the LRT defined between the LFHs (F_δ*, G_δ*); KL to the test with statistic T^1_n (hence, as presented in Section 2); HELL corresponds to T^{1/2}_n, which is associated with the Hellinger power divergence; and G = 2 corresponds to γ = 2.

A Practical Choice for Composite Tests Based on Simulation
We consider the same cases A and B, as described in Section 3.3.
As stated in the previous section, the performances of the different test statistics are compared considering the test of H_0(δ*) against H_1(δ*), where δ* is defined, as explained in Section 3.3, as the LFH for the test T^1_n. In both cases A and B, this corresponds to δ* = δ_max.

Case A: The Shift Problem
Overall, the aggregated tests perform well when the problem consists in identifying a shift in a distribution. Indeed, for the three values of γ (0.5, 1, and 2), the power remains above 0.7 for any kind of noise and any value of δ*. Moreover, the power curves associated with T^γ_n mainly overlap with that of the optimal test T^1_{n,δ*}.
a. Under Gaussian noise, the power remains mostly stable over the values of δ*, as shown by Figure 5. The tests with statistics T^1_n and T^2_n are equally powerful for large values of δ*, while the first one achieves higher power when δ* is small. b. When the noise follows a Laplace distribution, the three power curves overlap the NP power curve, and the different test statistics can be used indifferently. Under such a noise, the alternative hypotheses are extremely well distinguished by the class of tests considered, and this remains true as δ* increases (cf. Figure 6). c. Under the Weibull hypothesis, T^1_n and T^2_n perform similarly well, and almost always as well as T^1_{n,δ*}, while the power curve associated with T^{1/2}_n remains below. Figure 7 illustrates that, as δ_max increases, the power does not decrease much. d. Under a Cauchy assumption, the alternative hypotheses are less distinguishable than under any other parametric hypothesis on the noise, since the maximal power is about 0.84, while it exceeds 0.9 in cases a, b, and c (cf. Figures 5-8). The capacity of the tests to discriminate between H0(δ_max) and H1(δ_max) is almost independent of the value of δ_max, and the power curves are mainly flat.

Case B: The Tail Thickness Problem
a. With the noise defined in case A (Gaussian noise), for KL (γ = 1), δ* = δ_max due to Proposition 4, and the statistic T^1_n provides the best power uniformly in δ_max. Figure 9 shows a net decrease of the power as δ_max increases (recall that the power is evaluated under the least-favorable alternative G_{δmax}). b. When the noise follows a Laplace distribution, the situation is quite peculiar. For any value of δ in ∆, the modes M_{G_δmax} and M_{F_δmax} of the distributions of (f_δ/g_δ)(X) under G_{δmax} and under F_{δmax} are quite separated, both being larger than 1. Also, for all δ, the values of ϕ_γ(M_{G_δmax}) − ϕ_γ(M_{F_δmax}) are quite large for large values of γ. We may infer that the distributions of ϕ_γ((f_δ/g_δ)(X)) under G_{δmax} and under F_{δmax} are quite distinct for all δ, which in turn implies that the same fact holds for the distributions of T^γ_n for large γ. Indeed, the simulations presented in Figure 10 show that the maximal power of the test tends to be achieved when γ = 2. c. When the noise follows a symmetric Weibull distribution, the power function for γ = 1 is very close to the power of the LRT between F_{δmax} and G_{δmax} (cf. Figure 11). Indeed, uniformly in δ and in x, the ratio (f_δ/g_δ)(x) is close to 1. Therefore, the distribution of T^1_n is close to that of T^1_{n,δmax}, which plays in favor of the KL composite test. d. Under a Cauchy distribution, similarly to case A, Figure 12 shows that T^γ_n achieves the maximal power for γ = 1 and 2, closely followed by γ = 0.5.

Conclusions
We have considered a composite testing problem, where simple hypotheses in H0 and H1 were paired, due to corruption in the data. The test statistic was defined through aggregation of simple likelihood ratio tests. The critical region for this test and a lower bound for its power were produced. We have shown that this test is minimax, exhibiting the least-favorable hypotheses. We have considered the minimal power of the test under such least-favorable hypotheses, both theoretically and by simulation, for a number of cases (including corruption by Gaussian, Laplacian, symmetrized Weibull, and Cauchy noise). Whatever this noise, the actual minimal power, as measured through simulation, was quite a bit higher than that obtained through analytic developments. Least-favorable hypotheses were defined in an asymptotic sense, and were proved to be the pair of simple hypotheses in H0 and H1 which are closest in terms of the Kullback-Leibler divergence; this holds as a consequence of the Chernoff-Stein Lemma. We next considered aggregation of tests in which the likelihood ratio was substituted by a divergence-based statistic. This choice extends the former one, and may produce aggregated tests with higher power than obtained through aggregation of the LRTs, as exemplified and analysed. Open questions are related to possible extensions of the Chernoff-Stein Lemma for divergence-based statistics.

Appendix A. Proof of Proposition 2

Note that, for all δ, δ′, E_{F_δ′}(Z_δ) = K(F_δ′, F_δ) − K(F_δ′, G_δ), which is negative for δ′ close to δ, assuming that δ′ → E_{F_δ′}(Z_δ) is a continuous mapping. Assume, therefore, that (6) holds, which means that the classes of distributions (G_δ) and (F_δ) are somehow well separated. This implies that E_{F_δ′}(Z_δ) < 0, for all δ and δ′.
In order to obtain an upper bound for F_δ′(T_{n,δ}(X^n) > A_n), for all δ, δ′ in ∆, through the Chernoff inequality, consider the Legendre transform J_{δ,δ′} of ϕ_{δ,δ′}. The function (δ, δ′, x) → J_{δ,δ′}(x) is continuous on its domain and, since t → ϕ_{δ,δ′}(t) is a strictly convex function which tends to infinity as t tends to the upper endpoint of N_{δ,δ′}, it holds that lim_{x→∞} J_{δ,δ′}(x) = +∞ for all δ, δ′ in ∆_n.
We now consider an upper bound for the risk of first kind on a logarithmic scale, for all δ, δ′. By the Chernoff inequality, F_δ′(T_{n,δ} > A_n) ≤ exp(−nJ_{δ,δ′}(A_n)). Since A_n should satisfy exp(−nJ_{δ,δ′}(A_n)) ≤ α_n, with α_n bounded away from 1, A_n surely satisfies (A1) for large n.
The mapping m_{δ,δ′}(t) := (d/dt) ϕ_{δ,δ′}(t) is a homeomorphism from N_{δ,δ′} onto the closure of the convex hull of the support of the distribution of Z_δ under F_δ′ (see, e.g., [14]). We assume that the essential supremum of Z_δ under F_δ′ is infinite, which is convenient for our task and quite common in practical industrial modelling. This assumption may be weakened, mostly at notational cost. It follows, as seen previously, that lim_{x→∞} J_{δ,δ′}(x) = +∞.
On the other hand, by (A2), the interval I is not void. We now define A_n such that (4) holds for any α_n in (0, 1). Note that (A3) holds for all n large enough, since α_n is bounded away from 1. The function J(x) := min_{(δ,δ′)∈∆×∆} J_{δ,δ′}(x) is continuous and increasing, being the minimum of a finite collection of continuous increasing functions defined on I. Since P_H0(H1) ≤ p_n exp(−nJ(A_n)), given α_n, we define A_n := J^{-1}(−(1/n) log(α_n/p_n)). This is well defined for α_n ∈ (0, 1), as sup_{(δ,δ′)∈∆×∆} E_{F_δ′}(Z_δ) < 0 and −(1/n) log(α_n/p_n) > 0.
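The inversion A_n = J^{-1}(−(1/n) log(α_n/p_n)) can be sketched numerically. Below, J is specialized to a single Gaussian pair, where Z_δ ~ N(−K, 2K) under F_δ and the rate function is J(x) = (x + K)²/(4K) for x ≥ −K; the single-pair form (instead of the paper's minimum over the grid) and the value of K are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def threshold(n, alpha, p_n, K):
    """Solve J(A_n) = -(1/n) * log(alpha / p_n) for A_n by root finding,
    with the Gaussian rate function J(x) = (x + K)**2 / (4*K) on [-K, inf)."""
    c = -np.log(alpha / p_n) / n
    J = lambda x: (x + K) ** 2 / (4.0 * K)
    # J(-K) = 0 <= c and J increases to infinity, so a root is bracketed.
    return brentq(lambda x: J(x) - c, -K, -K + 10.0 * (np.sqrt(K * c) + 1.0))

K = 0.03   # stands in for K(F_delta, G_delta) in case A, for illustration
a_n = threshold(n=100, alpha=0.05, p_n=9, K=K)
print(a_n)   # the aggregated test rejects H0 when T_n exceeds this value
```

Note how the grid size p_n enters only through the log(α_n/p_n) term, which is why the condition (log p_n)/n → 0 keeps the aggregation cost asymptotically negligible.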

Appendix A.2. The Power Function
We now evaluate a lower bound for the power of this test, making use of the Chernoff inequality to get an upper bound for the second risk.
Starting from (5), it holds that m_η(t), the derivative of the cumulant generating function of W_η under G_η, is an increasing homeomorphism from M_η onto the closure of the convex hull of the support of W_η under G_η. For any η, the mapping x → I_η(x) is strictly increasing from K_η := (E_{G_η} W_η, ∞) onto (0, +∞), where the same convention as above holds for ess sup_η W_η (here under G_η), and where we assumed (A5). Making use of the Chernoff inequality, we get an upper bound for the risk of second kind. Each function x → I_η(x) is increasing on (E_{G_η} W_η, ∞). Therefore, the function I(x) := inf_{η∈∆} I_η(x) is continuous and increasing, being the infimum of a finite number of continuous increasing functions on the same interval K, which is not void due to (A5). We have proven that, whenever (A7) holds, a lower bound for the power of the test of H0 vs. H1 is given by (A8). We now collect the above discussion, in order to complete the proof.
For the lower bound on the power of the test, we have assumed (A7). In order to collect our results in a unified setting, it is useful to state some connection between inf_{(δ,δ′)∈∆×∆} [K(F_δ, G_δ′) − K(F_δ, F_δ′)] and inf_{η∈∆} K(G_η, F_η); see (A3) and (A7).
Since K(G_δ, F_δ) is positive, it follows from (6) that (A9) holds, which implies the following fact: let α_n be bounded away from 1; then (A3) is fulfilled for large n, and therefore there exists A_n such that sup_{δ∈∆} F_δ(T_n > A_n) ≤ α_n.
Furthermore, by (A9), condition (A7) holds, which yields the lower bound for the power of this test, as stated in (A8).

Appendix B. Proof of Theorem 3
We will repeatedly make use of the following result (Theorem 3 in [15]), which is an extension of the Chernoff-Stein Lemma (see [16]).
Remark A2. The above result indicates that the power of the Neyman-Pearson test depends on its level only at second order on the logarithmic scale.
Define A_{n,δ*} through F_δ*(T_{n,δ*} ≤ A_{n,δ*}) = F_δ*(T_n ≤ A_n); this exists and is uniquely defined, due to the regularity of the distribution of T_{n,δ*} under F_δ*. Since 1[T_{n,δ*} > A_{n,δ*}] is the likelihood ratio test of H_0(δ*) against H_1(δ*) of size α_n, it follows, by unbiasedness of the LRT, that F_δ*(T_n ≤ A_n) = F_δ*(T_{n,δ*} ≤ A_{n,δ*}) ≥ G_δ*(T_{n,δ*} ≤ A_{n,δ*}).
Define B_{n,δ*} through G_δ*(T_{n,δ*} ≤ B_{n,δ*}) = G_δ*(T_n ≤ A_n). By regularity of the distribution of T_{n,δ*} under G_δ*, such a B_{n,δ*} is defined in a unique way. We will prove that the condition in Theorem A1 holds, namely lim sup_{n→∞} F_δ*(T_{n,δ*} ≤ B_{n,δ*}) < 1.
Incidentally, we have obtained that lim_{n→∞} (1/n) log G_δ*(T_n ≤ A_n) exists. Therefore, we have proven, on the logarithmic scale, a form of unbiasedness. For δ ≠ δ*, let B_{n,δ} be defined by G_δ(T_{n,δ} ≤ B_{n,δ}) = G_δ(T_n ≤ A_n).
It remains to verify conditions (A10)-(A12). We will only verify (A12), as the two other conditions differ only by notation. We have the required control on the levels by hypothesis (13). By the law of large numbers, under G_δ, lim_{n→∞} T_{n,δ} = K(G_δ, F_δ), G_δ-a.s.