Open Access Article
Entropy 2019, 21(1), 63;

Composite Tests under Corrupted Data

Laboratoire de Probabilités, Statistique et Modélisation, Sorbonne Université, 75005 Paris, France
Institute of Information Theory and Automation, The Czech Academy of Sciences, 18208 Prague, Czech Republic
Faculty of Mathematics and Physics, Charles University, 18207 Prague, Czech Republic
Department of ECE, Indian Institute of Technology, Palakkad 560012, India
Safran Aircraft Engines, 77550 Moissy-Cramayel, France
Author to whom correspondence should be addressed.
Received: 11 November 2018 / Revised: 8 January 2019 / Accepted: 10 January 2019 / Published: 14 January 2019


This paper focuses on test procedures under corrupted data. We assume that the observations Z_i are mismeasured due to the presence of measurement errors. Thus, instead of Z_i for i = 1, …, n, we observe X_i = Z_i + δV_i, with an unknown parameter δ and an unobservable random variable V_i. It is assumed that the random variables Z_i are i.i.d., as are the X_i and the V_i. The test procedure aims at deciding between two simple hypotheses pertaining to the density of the variable Z_i, namely f_0 and g_0. In this setting, the density of the V_i is supposed to be known. The procedure which we propose aggregates likelihood ratios for a collection of values of δ. A new definition of least-favorable hypotheses for the aggregate family of tests is presented, together with a relation to the Kullback-Leibler divergence between the sets {f_δ}_δ and {g_δ}_δ. Finite-sample lower bounds for the power of these tests are derived, both through analytical inequalities and through simulation under the least-favorable hypotheses. Since no optimality holds for the aggregation of likelihood ratio tests, a similar procedure is proposed, replacing the individual likelihood ratios by divergence-based test statistics. It is shown and discussed that the resulting aggregated test may perform better than the aggregated likelihood ratio procedure.
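The aggregation scheme described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes a hypothetical instance with f_0 = N(0, 1), g_0 = N(1, 1), and Gaussian noise V_i ~ N(0, 1), so that the convolved densities f_δ and g_δ remain Gaussian with variance 1 + δ², and it aggregates the log-likelihood ratios over a grid of δ values by taking the maximum (one possible aggregation rule among several).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical instance of the corruption model: Z_i ~ g0 = N(1, 1),
# noise V_i ~ N(0, 1) with known density, and unknown corruption level delta.
n, true_delta = 200, 0.5
Z = rng.normal(1.0, 1.0, n)                      # data generated under g0
X = Z + true_delta * rng.normal(0.0, 1.0, n)     # observed, corrupted sample

def log_lr(x, delta):
    """Log-likelihood ratio log g_delta(x) - log f_delta(x) on the sample.

    Under additive Gaussian noise, the convolved densities f_delta and
    g_delta are again Gaussian, with standard deviation sqrt(1 + delta^2).
    """
    s = np.sqrt(1.0 + delta**2)
    return norm.logpdf(x, 1.0, s).sum() - norm.logpdf(x, 0.0, s).sum()

# Aggregate the likelihood ratios over a grid of candidate delta values;
# here the aggregation is a simple maximum over the grid.
deltas = np.linspace(0.0, 2.0, 21)
stat = max(log_lr(X, d) for d in deltas)
decision = "g0" if stat > 0.0 else "f0"          # threshold 0 for illustration
print(decision)
```

In practice the threshold would be calibrated to a prescribed level, for instance by simulation under the least-favorable hypotheses discussed in the paper, rather than fixed at zero.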
Keywords: composite hypotheses; corrupted data; least-favorable hypotheses; Neyman-Pearson test; divergence-based testing; Chernoff-Stein lemma

Figure 1

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

MDPI and ACS Style

Broniatowski, M.; Jurečková, J.; Moses, A.K.; Miranda, E. Composite Tests under Corrupted Data. Entropy 2019, 21, 63.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Entropy EISSN 1099-4300, Published by MDPI AG, Basel, Switzerland