Special Issue "New Developments in Statistical Information Theory Based on Entropy and Divergence Measures"

A special issue of Entropy (ISSN 1099-4300).

Deadline for manuscript submissions: closed (30 September 2018)

Special Issue Editor

Guest Editor
Professor Leandro Pardo

Statistics and Operations Research, Faculty of Mathematics, Universidad Complutense de Madrid, 28040 Madrid, Spain
Phone: +34 913 944 425
Interests: minimum divergence estimators: robustness and efficiency; robust test procedures based on minimum divergence estimators; robust test procedures in composite likelihood, empirical likelihood, change point, and time series

Special Issue Information

Dear Colleagues,

The idea of using functionals of Information Theory, such as entropies or divergences, in statistical inference is not new. In fact, the so-called Statistical Information Theory has been the subject of much statistical research over the last fifty years. Minimum divergence estimators, or minimum distance estimators, have been used successfully in models for continuous and discrete data owing to their robustness properties. Divergence statistics, i.e., those obtained by replacing one or both arguments of a divergence measure by suitable estimators, have become a very good alternative to the classical likelihood ratio test in both continuous and discrete models, as well as to the classical Pearson-type statistics in discrete models.
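
To make the construction concrete, here is a minimal Python sketch of the Cressie–Read power-divergence family for multinomial counts; Pearson's chi-square statistic and the likelihood-ratio statistic are recovered at λ = 1 and in the limit λ → 0, respectively. The counts and probabilities are made up for illustration.

```python
import numpy as np

def power_divergence(obs, expected, lam):
    """Cressie-Read power-divergence statistic for multinomial counts.

    lam = 1   -> Pearson chi-square statistic
    lam -> 0  -> likelihood-ratio statistic G^2 (taken as a limit)
    Assumes strictly positive observed counts in the lam -> 0 branch.
    """
    obs = np.asarray(obs, dtype=float)
    expected = np.asarray(expected, dtype=float)
    if np.isclose(lam, 0.0):  # limiting case: G^2
        return 2.0 * np.sum(obs * np.log(obs / expected))
    return (2.0 / (lam * (lam + 1.0))) * np.sum(
        obs * ((obs / expected) ** lam - 1.0))

counts = np.array([18, 31, 22, 29])          # observed cell counts
probs = np.full(4, 0.25)                     # hypothesized cell probabilities
expected = counts.sum() * probs

print(power_divergence(counts, expected, lam=1.0))    # Pearson X^2
print(power_divergence(counts, expected, lam=0.0))    # likelihood ratio G^2
print(power_divergence(counts, expected, lam=2.0/3))  # Cressie-Read choice
```

Under the null hypothesis every member of this family is asymptotically chi-square distributed with the same degrees of freedom, which is what makes these statistics interchangeable with the classical ones.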

Divergence statistics based on maximum likelihood estimators, as well as Wald's statistics, likelihood ratio statistics and Rao's score statistics, enjoy several optimal asymptotic properties, but they are highly non-robust under model misspecification or in the presence of outlying observations. It is well known that a small deviation from the underlying model assumptions can have a drastic effect on the performance of these classical tests. Thus, the practical importance of robust test procedures is beyond doubt, and such procedures are helpful for solving many real-life problems in which the observed sample contains outliers. For this reason, in recent years robust versions of the classical Wald test statistic, for testing simple and composite null hypotheses in general parametric models, have been introduced and studied for different problems in the statistical literature. These test statistics are based on minimum divergence estimators instead of the maximum likelihood estimator and have been considered in many different statistical problems: censoring, equality of means in normal and lognormal models, logistic regression models in particular and GLMs in general, etc.
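
As a sketch of what "minimum divergence estimation instead of maximum likelihood" means in practice, the following Python code minimizes the empirical density power divergence of Basu et al. for a normal model on a synthetic contaminated sample (the data and tuning constants are illustrative; this is not the implementation of any paper below). The constant α trades efficiency for robustness: α → 0 recovers the maximum likelihood estimator.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def dpd_objective(params, x, alpha):
    """Empirical density power divergence for the N(mu, sigma^2) model.

    Minimizing over (mu, log sigma) yields the MDPDE; larger alpha
    down-weights observations that sit in the tails of the fitted model.
    """
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    f = norm.pdf(x, mu, sigma)
    # closed form of int f^(1+alpha) dx for the normal density
    int_term = (2.0 * np.pi * sigma**2) ** (-alpha / 2.0) / np.sqrt(1.0 + alpha)
    return int_term - (1.0 + 1.0 / alpha) * np.mean(f ** alpha)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 95), rng.normal(10, 1, 5)])  # 5% outliers

for alpha in (0.1, 0.5):
    res = minimize(dpd_objective, x0=[np.median(x), 0.0], args=(x, alpha))
    print(alpha, res.x[0], np.exp(res.x[1]))  # location/scale stay near (0, 1)
```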

The scope of this Special Issue is to present new and original research based on minimum divergence estimators and divergence statistics, from both theoretical and applied points of view, in different statistical problems, with special emphasis on efficiency and robustness. Manuscripts summarizing the most recent state of the art on these topics are also welcome.

Prof. Dr. Leandro Pardo
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts are available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1500 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Divergence measures
  • Entropy measures
  • Minimum divergence estimators (MDEs)
  • Testing based on divergence measures
  • Wald-type tests based on MDEs
  • Robustness
  • Efficiency
  • Parametric models
  • Complex random sampling
  • Composite likelihood
  • Empirical likelihood
  • Change point
  • Censoring
  • Generalized linear models (GLMs)

Published Papers (11 papers)

Research

Open Access Article: Convex Optimization via Symmetrical Hölder Divergence for a WLAN Indoor Positioning System
Entropy 2018, 20(9), 639; https://doi.org/10.3390/e20090639
Received: 3 July 2018 / Revised: 13 August 2018 / Accepted: 14 August 2018 / Published: 25 August 2018
Abstract
Modern indoor positioning systems are important technologies that play vital roles in modern life, supporting services from dispatching emergency healthcare providers to security applications. Several large companies, such as Microsoft, Apple, Nokia, and Google, have researched location-based services. Wireless indoor localization is key for pervasive computing applications and network optimization. Different approaches have been developed for this technique using WiFi signals. WiFi fingerprinting-based indoor localization has been widely used due to its simplicity, and algorithms that fingerprint WiFi signals at separate locations can achieve accuracy within a few meters. However, a major drawback of WiFi fingerprinting is the variance in received signal strength (RSS), which fluctuates with time and with changes in the environment; as the signal changes, so does the fingerprint database, which can change the distribution of the RSS (a multimodal distribution). Thus, in this paper, we propose using the symmetrical Hölder divergence, a statistical entropy-based measure that encapsulates both the skew Bhattacharyya divergence and the Cauchy–Schwarz divergence, closed-form formulas that can be used to measure the statistical dissimilarity between members of the same exponential family for signals with multivariate distributions. The Hölder divergence is asymmetric, so we used both left-sided and right-sided data so that the centroid can be symmetrized to obtain the minimizer of the proposed algorithm. The experimental results showed that the symmetrized Hölder divergence consistently outperformed the traditional k-nearest neighbor and probabilistic neural network methods. In addition, with the proposed algorithm, the position error accuracy was about 1 m in buildings.
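
The Cauchy–Schwarz divergence that the Hölder family encapsulates has a closed form between Gaussians, via the Gaussian product integral ∫N(x; m1, S1) N(x; m2, S2) dx = N(m1; m2, S1 + S2). The sketch below shows only this closed-form building block with made-up RSS numbers, not the paper's symmetrization or positioning algorithm.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def gauss_cross_term(m1, S1, m2, S2):
    """int N(x; m1, S1) * N(x; m2, S2) dx = N(m1; m2, S1 + S2)."""
    return mvn.pdf(m1, mean=m2, cov=S1 + S2)

def cauchy_schwarz_divergence(m1, S1, m2, S2):
    """D_CS = -log int pq + 0.5 log int p^2 + 0.5 log int q^2; zero iff p = q."""
    pq = gauss_cross_term(m1, S1, m2, S2)
    pp = gauss_cross_term(m1, S1, m1, S1)
    qq = gauss_cross_term(m2, S2, m2, S2)
    return -np.log(pq) + 0.5 * np.log(pp) + 0.5 * np.log(qq)

# two fingerprint RSS models as 2-D Gaussians (illustrative numbers only)
m1, S1 = np.array([-50.0, -60.0]), np.diag([4.0, 9.0])
m2, S2 = np.array([-55.0, -58.0]), np.diag([5.0, 8.0])
print(cauchy_schwarz_divergence(m1, S1, m2, S2))
```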

Open Access Article: Robust Relative Error Estimation
Entropy 2018, 20(9), 632; https://doi.org/10.3390/e20090632
Received: 11 July 2018 / Revised: 15 August 2018 / Accepted: 20 August 2018 / Published: 24 August 2018
Abstract
Relative error estimation has recently been used in regression analysis. A crucial issue with existing relative error estimation procedures is that they are sensitive to outliers. To address this issue, we employ the γ-likelihood function, which is constructed through the γ-cross entropy while keeping the original statistical model in use. The estimating equation has a redescending property, a desirable property in robust statistics, for a broad class of noise distributions. To find a minimizer of the negative γ-likelihood function, a majorize-minimization (MM) algorithm is constructed. The proposed algorithm is guaranteed to decrease the negative γ-likelihood function at each iteration. We also derive the asymptotic normality of the corresponding estimator together with a simple consistent estimator of the asymptotic covariance matrix, so that approximate confidence sets can readily be constructed. A Monte Carlo simulation is conducted to investigate the effectiveness of the proposed procedure. Real data analysis illustrates the usefulness of our proposed procedure.

Open Access Article: Non-Quadratic Distances in Model Assessment
Entropy 2018, 20(6), 464; https://doi.org/10.3390/e20060464
Received: 31 March 2018 / Revised: 11 June 2018 / Accepted: 13 June 2018 / Published: 14 June 2018
Abstract
One natural way to measure model adequacy is by using statistical distances as loss functions. A related fundamental question is how to construct loss functions that are scientifically and statistically meaningful. In this paper, we investigate non-quadratic distances and their role in assessing the adequacy of a model and/or the ability to perform model selection. We first present the definition of a statistical distance and its associated properties. Three popular distances, total variation, the mixture index of fit and the Kullback–Leibler distance, are studied in detail, with the aim of understanding their properties and potential interpretations that can offer insight into their performance as measures of model misspecification. A small simulation study exemplifies the performance of these measures, and their application to different scientific fields is briefly discussed.
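
For discrete distributions, two of the three distances studied in the paper have one-line empirical forms. A minimal sketch with made-up probability vectors:

```python
import numpy as np

def total_variation(p, q):
    """Total variation distance: half the L1 distance for discrete laws."""
    return 0.5 * np.sum(np.abs(p - q))

def kullback_leibler(p, q):
    """Kullback-Leibler distance KL(p || q); assumes q > 0 wherever p > 0."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

p = np.array([0.5, 0.3, 0.2])   # "true" distribution (illustrative)
q = np.array([0.4, 0.4, 0.2])   # candidate model
print(total_variation(p, q), kullback_leibler(p, q))
```
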
Open Access Article: Robust Estimation for the Single Index Model Using Pseudodistances
Entropy 2018, 20(5), 374; https://doi.org/10.3390/e20050374
Received: 31 March 2018 / Revised: 11 May 2018 / Accepted: 14 May 2018 / Published: 17 May 2018
Abstract
For portfolios with a large number of assets, the single index model allows the large number of covariances between individual asset returns to be expressed through a significantly smaller number of parameters. This avoids the constraint of needing very large samples to estimate the mean and the covariance matrix of the asset returns, which would be unrealistic in practice given the dynamics of market conditions. The traditional way to estimate the regression parameters in the single index model is the maximum likelihood method. Although maximum likelihood estimators have desirable theoretical properties when the model is exactly satisfied, they may give completely erroneous results when outliers are present in the data set. In this paper, we define minimum pseudodistance estimators for the parameters of the single index model and use them to construct new robust optimal portfolios. We prove theoretical properties of the estimators, such as consistency, asymptotic normality, equivariance and robustness, and illustrate the benefits of the new portfolio optimization method for real financial data.

Open Access Article: A Generalized Relative (α, β)-Entropy: Geometric Properties and Applications to Robust Statistical Inference
Entropy 2018, 20(5), 347; https://doi.org/10.3390/e20050347
Received: 30 March 2018 / Revised: 22 April 2018 / Accepted: 1 May 2018 / Published: 6 May 2018
Abstract
Entropy and relative entropy measures play a crucial role in mathematical information theory. Relative entropies are also widely used in statistics under the name of divergence measures, which link these two fields of science through the minimum divergence principle. Divergence measures are popular among statisticians because many of the corresponding minimum divergence methods lead to robust inference in the presence of outliers in the observed data; examples include the ϕ-divergence, the density power divergence, the logarithmic density power divergence and the recently developed family of logarithmic super divergences (LSD). In this paper, we present an alternative information-theoretic formulation of the LSD measures as a two-parameter generalization of the relative α-entropy, which we refer to as the general (α, β)-entropy. We explore its relation to various other entropies and divergences, which also generates a two-parameter extension of the Rényi entropy measure as a by-product. This paper is primarily focused on the geometric properties of the relative (α, β)-entropy, or the LSD measures; we prove their continuity and convexity in both arguments, along with an extended Pythagorean relation under a power transformation of the domain space. We also derive a set of sufficient conditions under which the forward and reverse projections of the relative (α, β)-entropy exist and are unique. Finally, we briefly discuss the potential applications of the relative (α, β)-entropy, or the LSD measures, in statistical inference, in particular for robust parameter estimation and hypothesis testing. Our results on the reverse projection of the relative (α, β)-entropy establish, for the first time, the existence and uniqueness of the minimum LSD estimators. Numerical illustrations are also provided for the problem of estimating the binomial parameter.
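
For context (a standard definition reproduced here, not taken from the paper): the classical one-parameter Rényi entropy that the (α, β) family extends is

```latex
H_\alpha(P) = \frac{1}{1-\alpha}\,\log \sum_i p_i^{\alpha},
\qquad \alpha > 0,\ \alpha \neq 1,
```

which recovers the Shannon entropy in the limit α → 1.
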
Open Access Article: Minimum Penalized ϕ-Divergence Estimation under Model Misspecification
Entropy 2018, 20(5), 329; https://doi.org/10.3390/e20050329
Received: 8 March 2018 / Revised: 22 April 2018 / Accepted: 27 April 2018 / Published: 30 April 2018
Abstract
This paper focuses on the consequences of assuming a wrong model for multinomial data when using minimum penalized ϕ-divergence estimators, also known as minimum penalized disparity estimators, to estimate the model parameters. These estimators are shown to converge to a well-defined limit. An application of the results obtained shows that a parametric bootstrap consistently estimates the null distribution of a certain class of test statistics for model misspecification detection. An illustrative application to the accuracy assessment of the thematic quality in a global land cover map is included.
Open Access Article: Robustness Property of Robust-BD Wald-Type Test for Varying-Dimensional General Linear Models
Entropy 2018, 20(3), 168; https://doi.org/10.3390/e20030168
Received: 12 January 2018 / Revised: 1 March 2018 / Accepted: 1 March 2018 / Published: 5 March 2018
Abstract
An important issue for robust inference is to examine the stability of the asymptotic level and power of the test statistic in the presence of contaminated data. Most existing results are derived in finite-dimensional settings with particular choices of loss functions. This paper re-examines this issue by allowing for a diverging number of parameters combined with a broader array of robust error measures, called "robust-BD", for the class of "general linear models". Under regularity conditions, we derive the influence function of the robust-BD parameter estimator and demonstrate that the robust-BD Wald-type test enjoys robustness of validity and efficiency asymptotically. Specifically, the asymptotic level of the test is stable under a small amount of contamination of the null hypothesis, whereas the asymptotic power is large enough under contaminated distributions in a neighborhood of the contiguous alternatives, thus lending support to the utility of the proposed robust-BD Wald-type test.
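
The influence-function analysis referred to here rests on the standard definition from robust statistics (reproduced for context, not from the paper): for a functional T at a distribution F,

```latex
\operatorname{IF}(x; T, F) = \lim_{\varepsilon \downarrow 0}
\frac{T\bigl((1-\varepsilon)F + \varepsilon \Delta_x\bigr) - T(F)}{\varepsilon},
```

where Δ_x is the point mass at x. A bounded influence function means that an infinitesimal contamination at any point x can only perturb the estimator by a bounded amount, which is the mechanism behind the stability of level and power claimed above.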

Open Access Article: Composite Likelihood Methods Based on Minimum Density Power Divergence Estimator
Entropy 2018, 20(1), 18; https://doi.org/10.3390/e20010018
Received: 6 November 2017 / Revised: 26 December 2017 / Accepted: 28 December 2017 / Published: 31 December 2017
Abstract
In this paper, a robust version of the Wald test statistic for composite likelihood is considered, using the composite minimum density power divergence estimator instead of the composite maximum likelihood estimator. This new family of test statistics will be called Wald-type test statistics. The problems of testing a simple and a composite null hypothesis are considered, and the robustness is studied on the basis of a simulation study. The composite minimum density power divergence estimator is also introduced, and its asymptotic properties are studied.
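
All Wald-type tests of this kind share one quadratic form: the statistic plugs a robust estimator and a consistent estimate of its asymptotic covariance into W_n = n(θ̂ − θ0)ᵀ V̂⁻¹(θ̂ − θ0). A generic sketch (illustrative numbers; not the paper's composite-likelihood version):

```python
import numpy as np
from scipy.stats import chi2

def wald_type_statistic(theta_hat, theta0, V_hat, n):
    """W_n = n (theta_hat - theta0)' V_hat^{-1} (theta_hat - theta0).

    theta_hat is any asymptotically normal estimator (e.g., a minimum
    density power divergence estimator) with asymptotic covariance V_hat/n.
    Under H0, W_n is asymptotically chi-square with dim(theta) d.o.f.
    """
    d = np.asarray(theta_hat, dtype=float) - np.asarray(theta0, dtype=float)
    W = n * d @ np.linalg.solve(V_hat, d)
    return W, chi2.sf(W, df=d.size)

W, p = wald_type_statistic([0.9, 2.1], [1.0, 2.0],
                           V_hat=np.array([[1.0, 0.2], [0.2, 2.0]]), n=200)
print(W, p)  # fail to reject H0 at the 5% level for these numbers
```
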
Open Access Article: Robust-BD Estimation and Inference for General Partially Linear Models
Entropy 2017, 19(11), 625; https://doi.org/10.3390/e19110625
Received: 10 October 2017 / Revised: 15 November 2017 / Accepted: 16 November 2017 / Published: 20 November 2017
Abstract
The classical quadratic loss for the partially linear model (PLM) and the likelihood function for the generalized PLM are not resistant to outliers. This inspires us to propose a class of "robust-Bregman divergence (BD)" estimators of both the parametric and nonparametric components in the general partially linear model (GPLM), which allows the distribution of the response variable to be partially specified, without being fully known. Using the local-polynomial function estimation method, we propose a computationally efficient procedure for obtaining "robust-BD" estimators and establish the consistency and asymptotic normality of the "robust-BD" estimator of the parametric component β_o. For inference procedures on β_o in the GPLM, we show that the Wald-type test statistic W_n constructed from the "robust-BD" estimators is asymptotically distribution free under the null, whereas the likelihood ratio-type test statistic Λ_n is not. This provides an insight into the distinction from the asymptotic equivalence (Fan and Huang 2005) between W_n and Λ_n in the PLM constructed from profile least-squares estimators using the non-robust quadratic loss. Numerical examples illustrate the computational effectiveness of the proposed "robust-BD" estimators and robust Wald-type test in the presence of outlying observations.

Open Access Article: Robust and Sparse Regression via γ-Divergence
Entropy 2017, 19(11), 608; https://doi.org/10.3390/e19110608
Received: 30 September 2017 / Revised: 7 November 2017 / Accepted: 9 November 2017 / Published: 13 November 2017
Abstract
For high-dimensional data, many sparse regression methods have been proposed. However, they may not be robust against outliers. Recently, the use of the density power weight has been studied for robust parameter estimation, and the corresponding divergences have been discussed. One such divergence is the γ-divergence, and the robust estimator using the γ-divergence is known to be strongly robust. In this paper, we extend the γ-divergence to the regression problem, consider robust and sparse regression based on the γ-divergence and show that it has strong robustness under heavy contamination even when outliers are heterogeneous. The loss function is constructed from an empirical estimate of the γ-divergence with sparse regularization, and the parameter estimate is defined as the minimizer of the loss function. To obtain the robust and sparse estimate, we propose an efficient update algorithm, which has a monotone decreasing property of the loss function. In particular, we discuss a linear regression problem with L1 regularization in detail. In numerical experiments and real data analyses, we see that the proposed method outperforms past robust and sparse methods.
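
To see the mechanism behind the γ-divergence loss, the sketch below minimizes a negative γ-likelihood for Gaussian linear regression (one common form of the γ-cross entropy objective; the paper's sparse L1 penalty and monotone update algorithm are omitted here). Observations with a tiny model density enter only through f^γ, so gross outliers are down-weighted; γ → 0 recovers ordinary maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_gamma_likelihood(params, X, y, gamma):
    """Negative gamma-likelihood for the Gaussian linear model y = X beta + e."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    f = norm.pdf(y, X @ beta, sigma)
    term1 = -np.log(np.mean(f ** gamma)) / gamma
    # int f(y|x)^(1+gamma) dy has a closed form for the normal density
    term2 = np.log((2.0 * np.pi * sigma**2) ** (-gamma / 2.0)
                   / np.sqrt(1.0 + gamma)) / (1.0 + gamma)
    return term1 + term2

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=n)
y[:10] += 15.0                                   # gross vertical outliers

res = minimize(neg_gamma_likelihood, x0=[0.0, 0.0, 0.0], args=(X, y, 0.5))
print(res.x[:2], np.exp(res.x[-1]))              # beta-hat stays near (1, 2)
```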

Review

Open Access Feature Paper Review: ϕ-Divergence in Contingency Table Analysis
Entropy 2018, 20(5), 324; https://doi.org/10.3390/e20050324
Received: 6 April 2018 / Revised: 19 April 2018 / Accepted: 20 April 2018 / Published: 27 April 2018
Abstract
The ϕ-divergence association models for two-way contingency tables form a family of models that includes the association and correlation models as special cases. We present this family of models, discussing its features and demonstrating the role of the ϕ-divergence in building the family. The most parsimonious member of this family, the model of ϕ-scaled uniform local association, is considered in detail. It is implemented, and representative examples are commented on.