A Maximum Entropy Procedure to Solve Likelihood Equations
Abstract
In this article, we provide initial findings on the problem of solving likelihood equations by means of a maximum entropy (ME) approach. Unlike standard procedures, which require setting the score function of the maximum likelihood problem to zero, we propose an alternative strategy in which the score is instead used as an external informative constraint on the maximization of the concave Shannon entropy function. The problem involves reparameterizing the score parameters as expected values of discrete probability distributions whose probabilities need to be estimated. This leads to a simpler situation in which parameters are searched within a smaller (hyper)simplex space. We assessed our proposal by means of empirical case studies and a simulation study, the latter involving the most critical case of logistic regression under data separation. The results suggest that the maximum entropy reformulation of the score problem solves the likelihood equations. Similarly, when maximum likelihood estimation is difficult, as in the case of logistic regression under separation, the maximum entropy proposal achieved results numerically comparable to those obtained by Firth's bias-corrected approach. Overall, these first findings indicate that a maximum entropy solution can be considered an alternative technique for solving the likelihood equations.
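The idea sketched in the abstract can be illustrated on a toy problem. The following is a hypothetical minimal sketch (not the authors' implementation): estimating the mean mu of a normal likelihood with known variance by reparameterizing mu as the expected value of a discrete probability distribution p over a fixed support grid z, then maximizing Shannon's entropy H(p) subject to the score constraint U(mu) = 0 and the simplex constraint on p. The grid z, the SLSQP solver choice, and all variable names are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical illustration of the ME reformulation, not the paper's code:
# estimate mu for a normal sample with known variance by maximizing the
# entropy of p, where mu = sum_k p_k * z_k, subject to the score
# constraint U(mu) = sum_i (x_i - mu) = 0.

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=200)   # observed sample
z = np.linspace(-5.0, 5.0, 21)                 # assumed support grid for p

def neg_entropy(p):
    # Minimize the negative Shannon entropy sum_k p_k log p_k.
    p = np.clip(p, 1e-12, None)                # guard against log(0)
    return np.sum(p * np.log(p))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},    # p on the simplex
    {"type": "eq", "fun": lambda p: np.sum(x - p @ z)},  # score U(mu) = 0
]
p0 = np.full(z.size, 1.0 / z.size)             # uniform starting point
res = minimize(neg_entropy, p0, method="SLSQP",
               bounds=[(0.0, 1.0)] * z.size, constraints=constraints)

mu_me = res.x @ z                              # ME estimate of mu
```

For this particular score, the constraint forces mu_me to coincide with the sample mean, which is also the maximum likelihood estimate, so the ME search over the simplex recovers the ML solution without ever setting the score to zero directly.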
Share & Cite This Article
MDPI and ACS Style
Calcagnì, A.; Finos, L.; Altoé, G.; Pastore, M. A Maximum Entropy Procedure to Solve Likelihood Equations. Entropy 2019, 21, 596.

AMA Style
Calcagnì A, Finos L, Altoé G, Pastore M. A Maximum Entropy Procedure to Solve Likelihood Equations. Entropy. 2019; 21(6):596.

Chicago/Turabian Style
Calcagnì, Antonio, Livio Finos, Gianmarco Altoé, and Massimiliano Pastore. 2019. "A Maximum Entropy Procedure to Solve Likelihood Equations." Entropy 21, no. 6: 596.