# The Information Geometry of Sparse Goodness-of-Fit Testing



Department of Statistics and Actuarial Science, University of Waterloo, 200 University Avenue West, Waterloo, ON N2L 3G1, Canada

School of Mathematics and Statistics, The Open University, Walton Hall, Milton Keynes, Buckinghamshire MK7 6AA, UK

Department of Mathematics & ECARES, Université libre de Bruxelles, Avenue F.D. Roosevelt 42, 1050 Brussels, Belgium

Author to whom correspondence should be addressed.

Academic Editors: Frédéric Barbaresco and Frank Nielsen

Received: 31 August 2016 / Revised: 16 November 2016 / Accepted: 19 November 2016 / Published: 24 November 2016

(This article belongs to the Special Issue Differential Geometrical Theory of Statistics)

This paper takes an information-geometric approach to the challenging issue of goodness-of-fit testing in the high dimensional, low sample size context where—potentially—boundary effects dominate. The main contributions of this paper are threefold: first, we present and prove two new theorems on the behaviour of commonly used test statistics in this context; second, we investigate—in the novel environment of the extended multinomial model—the links between information geometry-based divergences and standard goodness-of-fit statistics, allowing us to formalise relationships which have been missing in the literature; finally, we use simulation studies to validate and illustrate our theoretical results and to explore currently open research questions about the way that discretisation effects can dominate sampling distributions near the boundary. Accommodating these discretisation effects is novel, and contrasts sharply with the essentially continuous approach of skewness and other corrections flowing from standard higher-order asymptotic analysis.

We start by emphasising the threefold achievements of this paper, spelled out in detail in terms of the paper’s section structure below. First, we present and prove two new theorems on the behaviour of some standard goodness-of-fit statistics in the high dimensional, low sample size context, focusing on behaviour “near the boundary” of the extended multinomial family. We also comment on the methods of proof which allow explicit calculations of higher order moments in this context. Second, working again explicitly in the extended multinomial context, we fill a hole in the literature by linking information-geometric-based divergences and standard goodness-of-fit statistics. Finally, we use simulation studies to explore discretisation effects that can dominate sampling distributions “near the boundary”. Indeed, we illustrate and explore how—in the high dimensional, low sample size context—all distributions are affected by boundary effects. We also use these simulation results to explore currently open research questions. As can be seen, the overarching theme is the importance of working in the geometry of the extended exponential family [1], rather than the traditional manifold-based structure of information geometry.

In more detail, the paper extends and builds on the results of [2], and we use notation and definitions consistently across these two papers. Both papers investigate the issue of goodness-of-fit testing in the high dimensional sparse extended multinomial context, using the tools of Computational Information Geometry (CIG) [1].

Section 2 gives formal proofs of two results, Theorems 1 and 2, which were announced in [2]. These results explore the sampling performance of standard goodness-of-fit statistics—Wald, Pearson’s ${\chi}^{2}$, score and deviance—in the sparse setting. In particular, they look at the case where the data generation process is “close to the boundary” of the parameter space where one or more cell probabilities vanish. This complements results in much of the literature, where the centre of the parameter space—i.e., the uniform distribution—is often the focus of attention.

Section 3 starts with a review of the links between Information Geometry (IG) [3] and goodness-of-fit testing. In particular, it looks at the power family of Cressie and Read [4,5] in terms of the geometric theory of divergences. In the case of regular exponential families, these links have been well-explored in the literature [6], as has the corresponding sampling behaviour [7]. What is novel here is the exploration of the geometry with respect to the closure of the exponential family; i.e., the extended multinomial model—a key tool in CIG. We illustrate how the boundary can dominate the statistical properties in ways that are surprising compared to standard—and even high-order—analyses, which are asymptotic in sample size.

Through simulation experiments, Section 4 explores the consequences of working in the sparse multinomial setting, with the design of the numerical experiments being inspired by the information geometry.

One of the first major impacts that information geometry had on statistical practice was through the geometric analysis of higher order asymptotic theory (e.g., [8,9]). Geometric interpretations and invariant expressions of terms in the higher order corrections to approximations of sampling distributions are a good example; see [8] (Chapter 4). Geometric terms are used to correct for skewness and other higher order moment (cumulant) issues in the sampling distributions. However, these correction terms grow very large near the boundary [1,10]. Since this region plays a key role in modelling in the sparse setting—the maximum likelihood estimator (MLE) often being on the boundary—extensions to the classical theory are needed. This paper, together with [2], starts such a development. This work is related to similar ideas in categorical, (hierarchical) log–linear, and graphical models [1,11,12,13]. As stated in [13], “their statistical properties under sparse settings are still very poorly understood. As a result, analysis of such data remains exceptionally difficult”.

In this section we show why the Wald—equivalently, the Pearson ${\chi}^{2}$ and score statistics—are unworkable when near the boundary of the extended multinomial model, but that the deviance has a simple, accurate, and tractable sampling distribution—even for moderate sample sizes. We also show how the higher moments of the deviance are easily computable, in principle allowing for higher order adjustments. However, we also make some observations about the appropriateness of these classical adjustments in Section 4.

First, we define some notation, consistent with that of [2]. With i ranging over $\{0,1,\dots ,k\}$, let $n=({n}_{i})\sim $ Multinomial $(N,({\pi}_{i}))$, where each ${\pi}_{i}>0$. In this context, the Wald, Pearson’s ${\chi}^{2}$, and score statistics all coincide, their common value, W, being
Defining ${\pi}^{(\alpha )}:={\sum}_{i}{\pi}_{i}^{\alpha}$, we note the inequality, for each $m\ge 1$,
in which equality holds if and only if ${\pi}_{i}\equiv 1/(k+1)$—i.e., iff $({\pi}_{i})$ is uniform. We then have the following theorem, which establishes that the statistic W is unworkable as ${\pi}_{min}:=min({\pi}_{i})\to 0$ for fixed k and N.

$$W:={\displaystyle \sum _{i=0}^{k}}\frac{{({\pi}_{i}-{n}_{i}/N)}^{2}}{{\pi}_{i}}\equiv \frac{1}{{N}^{2}}{\displaystyle \sum _{i=0}^{k}}\frac{{n}_{i}^{2}}{{\pi}_{i}}-1.$$

$${\pi}^{(-m)}-{(k+1)}^{m+1}\ge 0,$$
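The two forms of W in the display above are algebraically identical, and Theorem 1 below gives $E(W)=k/N$ exactly. Both facts can be checked numerically; the following is a minimal sketch (the cell count, sample size, and seed are arbitrary illustrative choices, not values used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 30, 5
pi = np.full(k + 1, 1 / (k + 1))   # uniform cell probabilities

def wald(n, pi):
    """Common value of the Wald, Pearson chi^2, and score statistics."""
    N = n.sum()
    return ((pi - n / N) ** 2 / pi).sum()

# the two forms of W in the display agree
n = rng.multinomial(N, pi)
assert abs(wald(n, pi) - ((n ** 2 / pi).sum() / N ** 2 - 1)) < 1e-12

# Monte Carlo check of E(W) = k/N from Theorem 1
ws = np.array([wald(rng.multinomial(N, pi), pi) for _ in range(50_000)])
print(ws.mean())   # close to k/N = 5/30
```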

For $k>1$ and $N\ge 6$, the first three moments of W are:
and $E[{\left\{W-E(W)\right\}}^{3}]$ given by
where $g(k,N)=4(N-1)k(k+2N-5)>0$.

$$E(W)=\frac{k}{N},\phantom{\rule{1.em}{0ex}}Var(W)=\frac{\left\{{\pi}^{(-1)}-{(k+1)}^{2}\right\}+2k(N-1)}{{N}^{3}}$$

$$\frac{\left\{{\pi}^{(-2)}-{(k+1)}^{3}\right\}-(3k+25-22N)\left\{{\pi}^{(-1)}-{(k+1)}^{2}\right\}+g(k,N)}{{N}^{5}},$$

In particular, for fixed k and N, as ${\pi}_{min}\to 0$
where $\gamma (W):=E[{\left\{W-E(W)\right\}}^{3}]/{\{Var(W)\}}^{3/2}$.

$$Var(W)\to \infty \phantom{\rule{0.277778em}{0ex}}\text{and}\phantom{\rule{0.277778em}{0ex}}\gamma (W)\to +\infty ,$$

A detailed proof is found in Appendix A, and we give here an outline of its important features. The machinery developed is capable of delivering much more than a proof of Theorem 1. As indicated there, it provides a generic way to explicitly compute arbitrary moments or mixed moments of multinomial counts, and could in principle be implemented by computer algebra. Overall, there are four stages. First, a key recurrence relation is established; secondly, it is exploited to deliver moments of a single cell count. Third, mixed moments of any order are derived from those of lower order, exploiting a certain functional dependence. Finally, results are combined to find the first three moments of W, higher moments being similarly obtainable.

The practical implication of Theorem 1 is that standard first (and higher-order) asymptotic approximations to the sampling distribution of the Wald, ${\chi}^{2}$, and score statistics break down when the data generation process is “close to” the boundary, where at least one cell probability is zero. This result is qualitatively similar to results in [10], which shows how asymptotic approximations to the distribution of the maximum likelihood estimate fail; for example, in the case of logistic regression, when the boundary is close in terms of distances as defined by the Fisher information.
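Theorem 1's exact variance formula lets this breakdown be seen numerically without any sampling. A small sketch (the values of N and k and the probability vectors are arbitrary illustrations):

```python
import numpy as np

def var_W(pi, N):
    """Exact Var(W) from Theorem 1."""
    k = len(pi) - 1
    excess = (1 / pi).sum() - (k + 1) ** 2   # pi^(-1) - (k+1)^2, zero iff uniform
    return (excess + 2 * k * (N - 1)) / N ** 3

N, k = 50, 10
for pi_min in [1e-2, 1e-4, 1e-6, 1e-8]:
    pi = np.full(k + 1, (1 - pi_min) / k)   # one vanishing cell, rest equal
    pi[0] = pi_min
    print(pi_min, var_W(pi, N))   # dominated by 1/(pi_min * N^3) as pi_min -> 0
```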

Unlike the statistics considered in Theorem 1, the deviance has a workable distribution in the same limit: that is, for fixed N and k, as we approach the boundary of the probability simplex. In sharp contrast to that theorem, we see the very stable and workable behaviour of the k-asymptotic approximation to the distribution of the deviance, in which the number of cells increases without limit.

Define the deviance D via
where ${\mu}_{i}:=E({n}_{i})=N{\pi}_{i}$. We will exploit the characterisation that the multinomial random vector $({n}_{i})$ has the same distribution as a vector of independent Poisson random variables conditioned on their sum. Specifically, let the elements of $({n}_{i}^{*})$ be independently distributed as Poisson $Po({\mu}_{i})$. Then, ${N}^{*}:={\sum}_{i=0}^{k}{n}_{i}^{*}\sim Po(N)$, while $({n}_{i}):=({n}_{i}^{*}|{N}^{*}=N)\sim \phantom{\rule{0.277778em}{0ex}}\text{Multinomial}(N,({\pi}_{i}))$. Define the vector
where ${D}^{*}$ is defined implicitly and $0log0:=0$. The terms ν, τ, and ρ are defined by the first two moments of ${S}^{*}$ via the vectors
where ${C}_{i}:=Cov({n}_{i}^{*},{n}_{i}^{*}log({n}_{i}^{*}/{\mu}_{i}))$ and ${V}_{i}:=Var({n}_{i}^{*}log({n}_{i}^{*}/{\mu}_{i}))$.

$$\begin{array}{ccc}\hfill D/2& =& {\sum}_{\{0\le i\le k:{n}_{i}>0\}}{n}_{i}log({n}_{i}/N)-\sum _{i=0}^{k}{n}_{i}log({\pi}_{i})\hfill \\ & =& {\sum}_{\{0\le i\le k:{n}_{i}>0\}}{n}_{i}log({n}_{i}/{\mu}_{i}),\hfill \end{array}$$

$${S}^{*}:=\left(\begin{array}{c}{N}^{*}\\ {D}^{*}/2\end{array}\right)={\displaystyle \sum _{i=0}^{k}}\left(\begin{array}{c}{n}_{i}^{*}\\ {n}_{i}^{*}log({n}_{i}^{*}/{\mu}_{i})\end{array}\right),$$

$$\left(\begin{array}{c}N\\ \nu \end{array}\right):=E({S}^{*})=\left(\begin{array}{c}N\\ {\sum}_{i=0}^{k}E({n}_{i}^{*}log\left({n}_{i}^{*}/{\mu}_{i}\right))\end{array}\right),$$

$$\left(\begin{array}{cc}N& \rho \tau \sqrt{N}\\ \xb7& {\tau}^{2}\end{array}\right):=Cov({S}^{*})=\left(\begin{array}{cc}N& {\sum}_{i=0}^{k}{C}_{i}\\ \xb7& {\sum}_{i=0}^{k}{V}_{i}\end{array}\right),$$
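The two expressions for D/2 in the display above are algebraically identical whenever every ${\pi}_{i}>0$, since ${\sum}_{i}{n}_{i}log{\pi}_{i}$ picks up no contribution from empty cells. A quick numerical check (sample size, dimension, and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 40, 12
pi = rng.dirichlet(np.ones(k + 1))       # strictly positive cell probabilities
n = rng.multinomial(N, pi)
mu = N * pi

pos = n > 0                              # 0 log 0 := 0, so empty cells drop out
form1 = (n[pos] * np.log(n[pos] / N)).sum() - (n * np.log(pi)).sum()
form2 = (n[pos] * np.log(n[pos] / mu[pos])).sum()
assert abs(form1 - form2) < 1e-8         # the two expressions for D/2 agree

D = 2 * form2
assert D >= 0                            # the deviance is nonnegative
```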

Each of the terms ν, τ, and ρ remains bounded as ${\pi}_{min}\to 0$.

We start with some preliminary remarks. We use the following notation: $\mathcal{N}:=\{1,2,\dots \}$ denotes the natural numbers, while ${\mathcal{N}}_{0}:=\{0\}\cup \mathcal{N}$. Throughout, $X\sim Po(\mu )$ denotes a Poisson random variable having positive mean μ—that is, X is discrete with support ${\mathcal{N}}_{0}$ and probability mass function $p:{\mathcal{N}}_{0}\to (0,1)$ given by:

$$p(x):={e}^{-\mu}{\mu}^{x}/x!\phantom{\rule{0.277778em}{0ex}}(\mu >0).$$

Putting:
for given μ, $\{1-{F}^{[m]}(\mu )\}$ is strictly decreasing with m, vanishing as $m\to \infty $. For all $(x,m)\in {\mathcal{N}}_{0}^{2}$, we define ${x}_{(m)}$ by:
so that, if $x\ge m$, ${x}_{(m)}=x!/(x-m)!$.

$$\forall m\in {\mathcal{N}}_{0},\phantom{\rule{0.277778em}{0ex}}{F}^{[m]}(\mu ):=Pr(X\le m)={\textstyle {\sum}_{x=0}^{m}}p(x)\in (0,1),$$

$${x}_{(0)}:=1;\phantom{\rule{0.277778em}{0ex}}{x}_{(m)}:=x(x-1)\dots (x-(m-1))\phantom{\rule{0.277778em}{0ex}}(m\in \mathcal{N})$$

The set ${\mathcal{A}}_{0}$ comprises all functions ${a}_{0}:(0,\infty )\to R$ such that, as $\xi \to {0}_{+}$:
Of particular interest here, by l’Hôpital’s rule,
where ${(log)}^{m}:\xi \to {(log\xi )}^{m}\phantom{\rule{0.277778em}{0ex}}(\xi >0)$. For each ${a}_{0}\in {\mathcal{A}}_{0}$, $\overline{{a}_{0}}$ denotes its continuous extension from $(0,\infty )$ to $[0,\infty )$—that is: $\overline{{a}_{0}}(0):={a}_{0}({0}_{+});\phantom{\rule{0.277778em}{0ex}}\overline{{a}_{0}}(\xi ):={a}_{0}(\xi )\phantom{\rule{0.277778em}{0ex}}(\xi >0)$—while, appealing to continuity, we also define $0\overline{{a}_{0}}(0):=0$. Overall, denoting the extended reals by $\overline{R}:=R\cup \{-\infty \}\cup \{+\infty \}$, and putting
we have that $\mathcal{A}$ contains the disjoint union:
We refer to $\overline{{a}_{0}}{|}_{{\mathcal{N}}_{0}}$ as the member of $\mathcal{A}$ based on ${a}_{0}\in {\mathcal{A}}_{0}$.

$$(\mathrm{i})\phantom{\rule{4.pt}{0ex}}{a}_{0}(\xi )\phantom{\rule{4.pt}{0ex}}\text{tends to an infinite limit}\phantom{\rule{4.pt}{0ex}}{a}_{0}({0}_{+})\in \{-\infty ,+\infty \},\phantom{\rule{4.pt}{0ex}}\text{while:}\phantom{\rule{4.pt}{0ex}}(\mathrm{ii})\phantom{\rule{4.pt}{0ex}}\xi {a}_{0}(\xi )\to 0.$$

$$\forall m\in \mathcal{N},\phantom{\rule{0.277778em}{0ex}}{(log)}^{m}\in {\mathcal{A}}_{0},$$

$$\mathcal{A}:=\{a:{\mathcal{N}}_{0}\to \overline{R}\phantom{\rule{4.pt}{0ex}}\text{such}\phantom{\rule{4.pt}{0ex}}\text{that}\phantom{\rule{4.pt}{0ex}}0a(0)=0\}$$

$$\{\text{all}\phantom{\rule{4.pt}{0ex}}\text{functions}\phantom{\rule{4.pt}{0ex}}a:{\mathcal{N}}_{0}\to R\}\cup \{\overline{{a}_{0}}{|}_{{\mathcal{N}}_{0}}:{a}_{0}\in {\mathcal{A}}_{0}\}.$$

We make repeated use of two simple facts. First:
equality holding in both places if, and only if, $x=0$. Second, (3) and (5) give:
so that, by definition of $\mathcal{A}$:
equality holding trivially when $m=0$. In particular, taking $a=1\in \mathcal{A}$—that is, $a(x)=1$ $(x\in {\mathcal{N}}_{0})$—(9) recovers, at once, the Poisson factorial moments:
whence, in further particular, we also recover:

$$\forall x\in {\mathcal{N}}_{0},\phantom{\rule{0.277778em}{0ex}}0\le log(x+1)\le x,$$

$$\forall (x,m)\in {\mathcal{N}}_{0}^{2}\phantom{\rule{4.pt}{0ex}}\mathrm{with}\phantom{\rule{4.pt}{0ex}}x\ge m,\phantom{\rule{0.277778em}{0ex}}{x}_{(m)}p(x)={\mu}^{m}p(x-m)$$

$$\forall m\in {\mathcal{N}}_{0},\forall a\in \mathcal{A},\phantom{\rule{0.277778em}{0ex}}E({X}_{(m)}a(X))={\mu}^{m}E(a(X+m)),$$

$$\forall m\in {\mathcal{N}}_{0},\phantom{\rule{0.277778em}{0ex}}E({X}_{(m)})={\mu}^{m}$$

$$E(X)=\mu ,\phantom{\rule{0.277778em}{0ex}}E({X}^{2})={\mu}^{2}+\mu \phantom{\rule{0.277778em}{0ex}}\text{and}\phantom{\rule{0.277778em}{0ex}}E({X}^{3})={\mu}^{3}+3{\mu}^{2}+\mu .$$
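The factorial-moment identity $E({X}_{(m)})={\mu}^{m}$ and the raw moments displayed above can be verified by truncating the Poisson series, which converges rapidly. A sketch (the mean and truncation point are arbitrary choices):

```python
import math

def falling(x, m):
    """Falling factorial x_(m) = x(x-1)...(x-m+1), with x_(0) := 1."""
    out = 1
    for j in range(m):
        out *= x - j
    return out

mu = 2.3
results = []                      # results[m] approximates E(X_(m))
for m in range(4):
    p = math.exp(-mu)             # p(0); then p(x+1) = p(x) * mu / (x+1)
    total = 0.0
    for x in range(200):          # the neglected tail is far below 1e-9
        total += falling(x, m) * p
        p *= mu / (x + 1)
    results.append(total)

assert all(abs(results[m] - mu ** m) < 1e-9 for m in range(4))
# x^3 = x_(3) + 3 x_(2) + x_(1), recovering E(X^3) = mu^3 + 3 mu^2 + mu
ex3 = results[3] + 3 * results[2] + results[1]
assert abs(ex3 - (mu ** 3 + 3 * mu ** 2 + mu)) < 1e-8
```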

We are ready now to prove Theorem 2.

Let $X\sim Po(\mu )$ $(\mu >0)$, and put ${X}_{\mu}:=Xlog(X/\mu )$, with $0log0:=0$. Then, there exist ${b}^{(1)},{b}^{(2)}:(0,\infty )\to (0,\infty )$ such that:

- (a)
- $0\le E({X}_{\mu})\le {b}^{(1)}(\mu )$ and $0\le E({X}_{\mu}^{2})\le {b}^{(2)}(\mu )$, while:
- (b)
- for $i=1,2:$ ${b}^{(i)}(\mu )\to 0$ as $\mu \to {0}_{+}$.

By (6), ${a}_{0}^{(1)}(\xi ):=log(\xi /\mu )\in {\mathcal{A}}_{0}$. Taking $m=1$ and $a\in \mathcal{A}$ based on ${a}_{0}^{(1)}$ in (9), and using (7), gives at once the stated bounds on $E({X}_{\mu})$ with ${b}^{(1)}(\mu )=\mu (\mu -log\mu )$, which does indeed tend to 0 as $\mu \to {0}_{+}$.

Further, let ${a}_{0}^{(2)}(\xi ):=\xi {(log(\xi /\mu ))}^{2}$. Taking $m=1$ and a as the restriction of ${a}_{0}^{(2)}$ to ${\mathcal{N}}_{0}$ in (9) gives $E({X}_{\mu}^{2})=\mu E({a}^{(2)}(X+1))$. Noting that
in which $\overline{\mu}$ denotes the smallest integer greater than or equal to μ, and putting
(7), (10), and l’Hôpital’s rule give the stated bounds on $E({X}_{\mu}^{2})$, with
which, indeed, tends to 0 as $\mu \to {0}_{+}$. ☐

$$\{x\in {\mathcal{N}}_{0}:log((x+1)/\mu )<0\}=\left\{\begin{array}{cc}\varnothing & (\mu \le 1)\\ \{0,\dots ,\overline{\mu}-2\}& (\mu >1)\end{array}\right.,$$

$$B(\mu ):=\left\{\begin{array}{cc}0& (\mu \le 1)\\ \mu {\textstyle {\sum}_{x=0}^{\overline{\mu}-2}}{a}^{(2)}(x+1)p(x)& (\mu >1)\end{array}\right.,$$

$$\begin{array}{cc}\hfill {b}^{(2)}(\mu )& =B(\mu )+\mu {\textstyle {\sum}_{x=0}^{\infty}}(x+1){(x-log\mu )}^{2}p(x)\hfill \\ & =B(\mu )+\mu E\{{X}^{3}+{X}^{2}(1-2log\mu )+X({(log\mu )}^{2}-2log\mu )+{(log\mu )}^{2}\}\hfill \\ & =B(\mu )+{\mu}^{4}+4{\mu}^{3}+2{\mu}^{2}+\mu {(log\mu )}^{2}+{(\mu log\mu )}^{2}-2\mu (\mu +2)(\mu log\mu )\hfill \end{array}$$
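The squeeze in the lemma can be probed numerically by truncating the series for $E({X}_{\mu})$; the sketch below checks the first bound ${b}^{(1)}(\mu )=\mu (\mu -log\mu )$ for a few small means (the truncation point is an arbitrary choice):

```python
import math

def e_xmu(mu, power=1, terms=400):
    """Truncated series for E[(X log(X/mu))^power], X ~ Po(mu); 0 log 0 := 0."""
    p = math.exp(-mu)                 # p(0); then p(x+1) = p(x) * mu / (x+1)
    total = 0.0
    for x in range(terms):
        if x > 0:
            total += (x * math.log(x / mu)) ** power * p
        p *= mu / (x + 1)
    return total

for mu in [0.5, 0.1, 0.01, 0.001]:
    b1 = mu * (mu - math.log(mu))     # the bound b^(1)(mu) from the lemma
    assert 0.0 <= e_xmu(mu) <= b1     # squeezed, and b1 -> 0 as mu -> 0+
```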

As a result of Theorem 2, the distribution of the deviance is stable in this limit. Further, as noted in [2], each of ν, τ, and ρ can be easily and accurately approximated by standard truncate and bound methods in the limit as ${\pi}_{\text{min}}\to 0$. These are detailed in Appendix B.

The emphasis of this section is the importance of the boundary of the extended multinomial when understanding the links between information geometric divergences and families of goodness-of-fit statistics. For completeness, a set of well-known results linking the Power-Divergence family and information geometry in the manifold sense are surveyed in Section 3.1, Section 3.2, and Section 3.3. The extension to the extended multinomial family is discussed in Section 3.4, where we make clear how the global behaviour of divergences is dominated by boundary effects. This complements the usual local analysis, which links divergences with the Fisher information, [8]. Perhaps the key point is that, since counts in the data can be zero, information geometric structures should also allow probabilities to be zero. Hence, closures of exponential families seem to be the correct geometric object to work on.

The results of Section 2 concern the boundary behaviour of two important members of a rich class of goodness-of-fit statistics. An important unifying framework which encompasses these and other important statistics can be found in [5] (page 16) with the so-called Power-Divergence statistics. These are defined, for $-\infty <\lambda <\infty $, by
with the cases $\lambda =-1,0$ being defined by taking the appropriate limit to give
Important special cases are shown in Table 1 (whose first column is described below in Section 3.3), and we also note the case $\lambda =2/3$, which Read and Cressie recommend [5] (page 79) as a reasonably robust statistic with an easily calculable critical value for small N. In a sense, it lies “between” the Pearson ${\chi}^{2}$ and deviance statistics, which we compared in Section 2.

$$2N{I}^{\lambda}\left(\frac{n}{N}:\pi \right):=\frac{2}{\lambda (\lambda +1)}{\displaystyle \sum _{i=0}^{k}}{n}_{i}\left[{\left(\frac{{n}_{i}}{N{\pi}_{i}}\right)}^{\lambda}-1\right],$$

$$\underset{\lambda \to -1}{lim}2N{I}^{\lambda}\left(\frac{n}{N}:\pi \right)=2{\displaystyle \sum _{i=0}^{k}}N{\pi}_{i}log\left(N{\pi}_{i}/{n}_{i}\right),\phantom{\rule{0.277778em}{0ex}}\underset{\lambda \to 0}{lim}2N{I}^{\lambda}\left(\frac{n}{N}:\pi \right)=2{\displaystyle \sum _{i=0}^{k}}{n}_{i}log\left({n}_{i}/N{\pi}_{i}\right).$$
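The λ-family interpolates continuously between familiar statistics: $\lambda =1$ recovers Pearson’s ${\chi}^{2}$, and the $\lambda \to 0$ limit is the deviance. A sketch with arbitrary simulated data (the sample size, dimension, and seed are illustrative choices):

```python
import numpy as np

def power_divergence(n, pi, lam):
    """2N I^lambda(n/N : pi); lam = 0 is handled by its limit, the deviance."""
    N = n.sum()
    mu = N * pi
    pos = n > 0                          # empty cells contribute 0 by convention
    if lam == 0:
        return 2 * (n[pos] * np.log(n[pos] / mu[pos])).sum()
    return 2 / (lam * (lam + 1)) * (n[pos] * ((n[pos] / mu[pos]) ** lam - 1)).sum()

rng = np.random.default_rng(3)
N, k = 50, 8
pi = rng.dirichlet(np.ones(k + 1))
n = rng.multinomial(N, pi)
mu = N * pi

# lam = 1 is Pearson's chi^2; lam -> 0 approaches the deviance continuously
assert np.isclose(power_divergence(n, pi, 1.0), ((n - mu) ** 2 / mu).sum())
assert np.isclose(power_divergence(n, pi, 1e-7), power_divergence(n, pi, 0.0), atol=1e-3)
```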

This paper is primarily concerned with the sparse case where many of the ${n}_{i}$ counts are zero, and we are also interested in letting the probabilities ${\pi}_{i}$ become arbitrarily small, or even zero.

Before we look at this, we briefly review the literature on the geometry of goodness-of-fit statistics. A good source for the historical developments (in the discrete context) can be found in [5] (pages 131–153) and [7]. Important examples include the analysis of contingency tables, log-linear, and discrete graphical models. Testing is often used to check the consistency of a parametric model with given data, and to check dependency assumptions such as independence between categorical variables. However, we note an important caveat: as pointed out by [14,15], the fact that a parametric model “passes” a goodness-of-fit test only weakly constrains the resulting inference. The essential point here is that goodness-of-fit is a necessary, but not sufficient, condition for model choice, since—in general—many models will be empirically supported. This issue has recently been explored geometrically in [16] using CIG.

There have been many possible test statistics proposed for goodness-of-fit testing, and one of the attractions of the Power-Divergence family, defined in (11), is that the most important ones are included in the family and indexed by a single scalar λ. Of course, when there is a choice of test statistic, different inferences can result from different choices. One of the main themes of [5] is to give the analyst insight about selecting a particular λ. Key considerations for making the selection of λ include the tractability of the sampling distribution, its power against important alternatives, and interpretation when hypotheses are rejected.

The first order (asymptotic in N) ${\chi}^{2}$-sampling distribution for all members of the Power-Divergence family, which is appropriate when all observed counts are “large enough”, is the most commonly used tool, and a very attractive feature of the family. However, it can fail badly in the “sparse” case and when the model is close to the boundary. Elementary moment-based corrections to improve small-sample performance are discussed in [5] (Chapter 5). More formal asymptotic approaches to these issues include the doubly asymptotic (in N and k) approach of [17], discussed in Section 2, and similar normal approximation ideas in [18]. See also [19]. Extensive simulation experiments have been undertaken to learn in practice what “large enough” means; see [5,20,21].

When there are nuisance parameters to be estimated (as is common), [22] points out that it is the sampling distribution conditional upon these estimates which needs to be approximated, and proposes higher order methods based on the Edgeworth expansion. Simulation approaches are often used in the conditional context due to the common intractability of the conditional distribution [23,24], and importance sampling methods play an important role—see [25,26,27]. Other approaches used to investigate the sampling distribution include jackknifing [28], the Chen–Stein method [29], and detailed asymptotic analysis in [30,31,32].

In very high dimensional model spaces, considerations of the power of tests rarely generate uniformly best procedures but, we feel, geometry can be an important tool in understanding the choices that need to be made. Further, [5] states that the situation is “complicated”, showing this through simulation experiments. One of the reasons for Read and Cressie’s preferred choice of $\lambda =2/3$ is its good power against some important types of alternative (the so-called bump or dip cases), as well as the relative tractability of its sampling distribution under the null. Other considerations about power can be found in [33], which looks specifically at mixture model based alternatives.

At the time that the Power-Divergence family was being examined, there was a parallel development in Information Geometry; oddly, however, it seems to have taken some time before the links between the two areas were fully recognised. A good treatment of these links can be found in [6] (Chapter 9). Since it is important to understand the extreme values of divergence functions, considerations of convexity can clearly play an important role. The general class of Bregman divergences, [6,34] (page 240), and [35] (page 13) is very useful here. For each Bregman divergence, there will exist affine parameters of the exponential family in which the divergence function is convex. In the class of product Poisson models—which are the key building blocks of log–linear models—all members of the Power-Divergence family have the Bregman property. These are then α-divergences, capable of generating the complete Information Geometry of the model [35], with the link between α and λ given in Table 1. The α-representation highlights the duality properties, which are a cornerstone of Information Geometry, but which are rather hidden in the λ representation. The Bregman divergence representation for the Poisson is given in Table 2. The divergence parameter—in which we have convexity—is shown for each λ, as is the so-called potential function, which generates the complete information geometry for these models.

In this paper, we are focusing on the class of log–linear models where the multinomial is the underlying class of distributions; that is, we condition on the sample size, N, being fixed in the product Poisson space. In particular, we focus on extended multinomials, which includes the closure of the multinomials, so we have a boundary. Due to the conditioning (which induces curvature), only the cases where $\lambda =0,-1$ remain Bregman divergences, but all are still divergences in the sense of being Csiszár f-divergences [36,37].

The closure of an exponential family (e.g., [11,38,39,40]), and its application in the theory of log–linear models has been explored in [12,13,41,42]. The key here is understanding the limiting behaviour in the natural—$\alpha =1$ in the sense of [8]—parameter space. This can be done by considering the polar dual [43], or, alternatively, the directions of recession—[12] or [42]. The boundary polytope determines key statistical properties of the model, including the behaviour of the sampling distribution of (functions of) the MLE and the shape of level sets of divergence functions.

Figure 1 and Figure 2 show level sets of the $\alpha =\pm 1$ Power-Divergences in the $(+1)$-affine and $(-1)$-affine parameters (Panels (a) and (b), respectively) for the $k=2$ extended multinomial model. The boundary polytope in this case is a simple triangle “at infinity”, and the shape of this is strongly reflected in the behaviour of the level sets. In Figure 1, we show—in the simplex $\left\{({\pi}_{0},{\pi}_{1},{\pi}_{2})|{\sum}_{i=0}^{2}{\pi}_{i}=1,{\pi}_{i}\ge 0\right\}$—the level sets of the $\alpha =-1$ divergence, which, in the Csiszár f-divergence form, is
The figures show how in Panel (a), the directions of recession dominate the shape of level sets, and in Panel (b) the duals of these directions (i.e., the vertices of the simplex) each have different maximal behaviour. The lack of convexity of the level sets in Panel (a) corresponds to the fact that the natural parameters are not the affine divergence parameters for this divergence, so we do not expect convex behaviour. In Panel (b), we do get convex level sets, as expected.

$$K({\pi}^{0},\pi ):={\displaystyle \sum _{i=0}^{2}}log\left(\frac{{\pi}_{i}^{0}}{{\pi}_{i}}\right){\pi}_{i}^{0}.$$
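K is finite for π in the interior of the simplex but diverges as any ${\pi}_{i}\to 0$ with ${\pi}_{i}^{0}>0$, which is why the boundary dominates the level sets. A tiny numerical illustration (the probability vectors are arbitrary):

```python
import numpy as np

def K(p0, p):
    """The alpha = -1 divergence K(pi^0, pi) = sum_i pi^0_i log(pi^0_i / pi_i)."""
    return (p0 * np.log(p0 / p)).sum()

p0 = np.full(3, 1 / 3)
for eps in [1e-1, 1e-3, 1e-6]:
    p = np.array([eps, (1 - eps) / 2, (1 - eps) / 2])
    print(eps, K(p0, p))   # increases without bound as eps -> 0
```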

Figure 2 shows the same story, but this time for the dual divergence,
Now the affine divergence parameters, shown in Panel (a), are the natural parameters. We see that in the limit the shape of the divergence is converging to that of the polar of the boundary polytope. In general, local behaviour is quadratic, but boundary behaviour is polygonal.

$${K}^{*}(\pi ,{\pi}^{0}):=K({\pi}^{0},\pi ).$$

In this section, we undertake simulation studies to numerically explore what has been discussed above. Separate sub-sections address three general topics—focusing on one particular instance of each, as follows:

- The transition as $(N,k)$ varies between discrete and continuous features of the sampling distributions of goodness-of-fit statistics—focusing on the behaviour of the deviance at the uniform discrete distribution;
- The comparative behaviour of a range of Power-Divergence statistics—focusing on the relative stability of their sampling distributions near the boundary;
- The lack of uniformity—across the parameter space—of the finite sample adequacy of standard asymptotic sampling distributions, focusing on testing independence in $2\times 2$ contingency tables.

Earlier work [2] used the decomposition:
to show that a particularly bad case for the adequacy of any continuous approximation to the sampling distribution of the deviance $D:={D}^{*}|({N}^{*}=N)$ is the uniform discrete distribution: ${\pi}_{i}=1/(k+1)$. In this case, the ${\Gamma}^{*}$ term contributes a constant to the deviance, while the ${\Delta}^{*}$ term has no contributions from cells with 0 or 1 observations—these being in the vast majority in the $N<<k$ situation considered here. In other words, all of the variability in D comes from that between the ${n}_{i}log{n}_{i}$ values for the (relatively rare) cell counts above 1. This gives rise to a discreteness phenomenon termed “granularity” in [2], whose meaning was conveyed graphically there in the case $N=30$ and $k=200$. Work by Holst [19] predicts that continuous (indeed, normal) approximations will improve with larger values of $N/k$, as is intuitive. Remarkably, simply doubling the sample size to $N=60$ was shown in [2] to be sufficient to give a good enough approximation for most goodness-of-fit testing purposes. In other words, N being 30% of $k=200$ was found to be good enough for practical purposes.

$${D}^{*}/2={\displaystyle \sum _{\{0\le i\le k:{n}_{i}^{*}>0\}}}{n}_{i}^{*}log({n}_{i}^{*}/{\mu}_{i})={\Gamma}^{*}+{\Delta}^{*},$$

$${\Gamma}^{*}:={\displaystyle \sum _{i=0}^{k}}{\alpha}_{i}{n}_{i}^{*}\phantom{\rule{1.em}{0ex}}\text{and}\phantom{\rule{1.em}{0ex}}{\Delta}^{*}:={\displaystyle \sum _{\{0\le i\le k:{n}_{i}^{*}>1\}}}{n}_{i}^{*}log{n}_{i}^{*}\ge 0,\phantom{\rule{1.em}{0ex}}\text{where}\phantom{\rule{1.em}{0ex}}{\alpha}_{i}:=-log{\mu}_{i},$$
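The decomposition ${D}^{*}/2={\Gamma}^{*}+{\Delta}^{*}$ is an algebraic identity, and can be verified directly on simulated Poisson counts (the rates, dimension, and seed below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
N, k = 30, 200
pi = rng.dirichlet(np.ones(k + 1))
mu = N * pi
n_star = rng.poisson(mu)            # independent Poisson counts

alpha = -np.log(mu)
gamma_star = (alpha * n_star).sum()
big = n_star > 1                    # n log n = 0 for n in {0, 1}
delta_star = (n_star[big] * np.log(n_star[big])).sum()

pos = n_star > 0
half_D_star = (n_star[pos] * np.log(n_star[pos] / mu[pos])).sum()
assert np.isclose(half_D_star, gamma_star + delta_star)
assert delta_star >= 0
```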

Here, we illustrate the role of k-asymptotics (Section 2) in this transition between discrete and continuous features by repeating the above analyses for different values of k. Figure 3 and Figure 4 (where $k=100$ while $N=20$ and 40, respectively) are qualitatively the same as those presented in [2]. The difference here is that the smaller value of k means that a higher value of $N/k$ (40%) is needed in Figure 4 to adequately remove the granularity evident in Figure 3. For $k=400$, the figures with $N=50$ and $N=100$ (omitted here for brevity) are, again, qualitatively the same as in [2]—the larger value of k needing only a smaller value of $N/k$ (25%) for practical purposes. Note the QQ-plots used in these two figures are relative to normal quantiles.

The results of this section show the universality of boundary effects. The simulations of Figure 3 and Figure 4 are undertaken under the uniform model, which might be felt to be far from the boundary. In fact, the results show that in the high dimensional, low sample size case, all distributions are “close to” the boundary, and that discretisation effects can dominate.

Here we study the relative stability—near the boundary of the simplex—of the sampling distributions of a range of Power-Divergence statistics indexed by Amari’s parameter α. Figure 5 shows histograms for six different values of α, with $N=50$, $k=200$, and exponentially decreasing values of $\{{\pi}_{i}\}$, as plotted in Figure 6. In Figure 5, red lines depict kernel density estimates using the bandwidth suggested in [44].

These sampling distributions differ markedly. The instability for $\alpha =3$ expected from Theorem 1 is clearly visible: very large values contribute to high variance and skewness. Analogous instability features (albeit at a lower level) remain with the Cressie–Read recommended value $\alpha =7/3$. In contrast (as expected from the discussion around Theorem 2), the distribution of the deviance ($\alpha =1$) is stable and roughly normal. Lower values of α retain these same features.

Pearson’s ${\chi}^{2}$ statistic ($\alpha =3$) is widely used to test independence in contingency tables, a standard rule-of-thumb for its validity being that each expected cell frequency should be at least 5. For illustrative purposes, we consider $2\times 2$ contingency tables, the relevant N-asymptotic null distribution being ${\chi}_{1}^{2}$. We assess the adequacy of this asymptotic approximation by comparing nominal and actual significance levels of this test, based on 10,000 replications. Particular interest lies in how these actual levels vary across different data generation processes within the same null hypothesis of independence.
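A sketch of the kind of level computation just described, for one arbitrary choice of marginal probabilities; the replication count is kept modest for illustration, and degenerate tables with a zero margin are counted as non-rejections, which is one of several possible conventions:

```python
import numpy as np

rng = np.random.default_rng(5)

def actual_level(pi_r, pi_c, N, reps=4000, crit=3.841):
    """Monte Carlo rejection rate of Pearson's chi^2 test of independence
    in a 2x2 table, at the chi^2_1 critical value `crit` (nominal 5%)."""
    p = np.array([pi_r * pi_c, pi_r * (1 - pi_c),
                  (1 - pi_r) * pi_c, (1 - pi_r) * (1 - pi_c)])
    rejections = 0
    for _ in range(reps):
        n = rng.multinomial(N, p).reshape(2, 2)
        e = np.outer(n.sum(1), n.sum(0)) / N   # expected counts under independence
        if (e == 0).any():
            continue                           # degenerate margin: no rejection
        rejections += ((n - e) ** 2 / e).sum() > crit
    return rejections / reps

level = actual_level(0.5, 0.5, N=50)
print(level)   # in the vicinity of the nominal 0.05
```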

Figure 7 and Figure 8 show the actual level of the Pearson ${\chi}^{2}$ test for nominal levels 0.1 and 0.05 for sample sizes $N=20$ and $N=50$, with ${\pi}_{r}$ and ${\pi}_{c}$ denoting row and column probabilities, respectively. The above rule of thumb holds only at the central black dot in Figure 7, and inside the closed black curve in Figure 8. The actual level was computed for all pairs of values of ${\pi}_{r}$ and ${\pi}_{c}$, averaged using the symmetry of the parameter space, and smoothed using the kernel smoother for irregular 2D data implemented in the package fields in `R`. In each case, white corresponds to actual levels close to the nominal level, while red tones indicate liberal and blue tones conservative actual levels.

The finite sample adequacy of this standard asymptotic test clearly varies across the parameter space. In particular, its nominal and actual levels agree well at some parameter values outside the standard rule-of-thumb region; and, conversely, disagree somewhat at other parameter values inside it. Intriguingly, the agreement between nominal and actual levels does not improve everywhere with sample size. Overall, the clear patterns evident in this lack of uniformity invite further theoretical investigation.
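For small N, the actual level of the ${\chi}_{1}^{2}$-based test can in fact be computed exactly by enumerating all $2\times 2$ tables, rather than by simulation. The sketch below is our own illustration, not the paper’s code; in particular, treating tables with a zero margin as non-rejections is an assumed convention.

```python
import math
from itertools import product

def exact_level(N, pr, pc, crit=3.841):
    """Exact actual level of the Pearson chi^2 test of independence in a 2x2
    table with cell probabilities built from row/column margins (pr, pc):
    sum the multinomial null probabilities of all tables whose statistic
    exceeds the chi^2_1 critical value (3.841 for nominal level 0.05).
    Tables with a zero margin are counted as non-rejections (a convention)."""
    p = [pr * pc, pr * (1 - pc), (1 - pr) * pc, (1 - pr) * (1 - pc)]
    level = 0.0
    for n11, n12, n21 in product(range(N + 1), repeat=3):
        n22 = N - n11 - n12 - n21
        if n22 < 0:
            continue
        n = (n11, n12, n21, n22)
        prob = math.factorial(N) * math.prod(
            pi ** ni / math.factorial(ni) for ni, pi in zip(n, p))
        r1, c1 = n11 + n12, n11 + n21            # row and column totals
        if 0 < r1 < N and 0 < c1 < N:
            e = [r1 * c1 / N, r1 * (N - c1) / N,
                 (N - r1) * c1 / N, (N - r1) * (N - c1) / N]
            x2 = sum((ni - ei) ** 2 / ei for ni, ei in zip(n, e))
            if x2 > crit:
                level += prob
    return level
```

Sweeping this function over a grid of $({\pi}_{r},{\pi}_{c})$ values reproduces the kind of level surface smoothed in Figure 7 and Figure 8.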

This paper has illustrated the key importance of working with the boundary of the closure of exponential families when studying goodness-of-fit testing in the high dimensional, low sample size context. Some of this work is new (Section 2), while some uses the structure of extended exponential families to add insight to standard results in the literature (Section 3). The last section, Section 4, uses simulation studies to start to explore open questions in this area.

One open question, related to the results of Theorems 1 and 2, is whether a unified theory can be developed for all values of α and over large classes of extended exponential families.

The authors would like to thank the EPSRC for the support of grant number EP/L010429/1. Germain Van Bever would also like to thank FRS-FNRS for its support through the grant FC84444. We would also like to thank the referees for very helpful comments.

All four authors made critical contributions to the paper. R.S. made a key contribution to Section 4 in particular. P.M. and F.C. provided the overall structure and key content details of the paper. G.V.B. provided invaluable suggestions throughout.

The authors declare no conflicts of interest.

We start by noting an important recurrence relation which will be exploited in the computations below. By definition, for any $t:=({t}_{i})\in {\mathbb{R}}^{k+1}$, $n=({n}_{i})$ has moment generating function

$$M(t;N):=E\{\exp({t}^{T}n)\}={[m(t)]}^{N},$$

with $m(t)={\sum}_{i=0}^{k}{a}_{i}$ and ${a}_{i}={a}_{i}({t}_{i})={\pi}_{i}{e}^{{t}_{i}}$. Putting

$${f}_{N,i}(t;r):={N}_{(r)}{[m(t)]}^{N-r}{a}_{i}^{r}\quad (0\le r\le N),$$

where

$${N}_{(r)}:={}^{N}{P}_{r}=\begin{cases}1&\text{if } r=0,\\ N(N-1)\cdots (N-(r-1))&\text{if } r\in \{1,\dots ,N\},\end{cases}$$

we have

$$M(t;N)={f}_{N,i}(t;0)\quad (0\le i\le k)\tag{A1}$$

and the recurrence relation

$$\frac{\partial {f}_{N,i}(t;r)}{\partial {t}_{i}}={f}_{N,i}(t;r+1)+r\,{f}_{N,i}(t;r)\quad (0\le i\le k;\ 0\le r<N).\tag{A2}$$

When there is no risk of confusion, we abbreviate $M(t;N)$ to $M$ and ${f}_{N,i}(t;r)$ to ${f}_{N}(r)$, or even to $f(r)$, so that (A1) becomes $M=f(0)$. Similarly, we write ${\partial}^{r}M(t;N)/\partial {t}_{i}^{r}$ as ${M}_{r}$, ${\partial}^{r+s}M(t;N)/\partial {t}_{i}^{r}\partial {t}_{j}^{s}$ as ${M}_{r,s}$, and ${\partial}^{r+s+u}M(t;N)/\partial {t}_{i}^{r}\partial {t}_{j}^{s}\partial {t}_{l}^{u}$ as ${M}_{r,s,u}$, with similar conventions for higher-order mixed derivatives.

We can now use this to calculate low-order moments of the count vector explicitly. Using $E({n}_{i}^{r})={{\partial}^{r}M(t;N)/\partial {t}_{i}^{r}|}_{t=0}$, the first N moments of ${n}_{i}$ follow from (A1) and repeated use of (A2), noting that $m(0)=1$ and ${a}_{i}(0)={\pi}_{i}$.

In particular, the first six moments of each ${n}_{i}$ can be obtained as follows, where $N\ge 6$ is assumed. Using (A1) and (A2), we have

$$\begin{aligned}{M}_{1}&=f(1)\\ {M}_{2}&=f(2)+f(1)\\ {M}_{3}&=f(3)+2f(2)+f(2)+f(1)=f(3)+3f(2)+f(1)\\ {M}_{4}&=f(4)+6f(3)+7f(2)+f(1)\\ {M}_{5}&=f(5)+10f(4)+25f(3)+15f(2)+f(1)\\ {M}_{6}&=f(6)+15f(5)+65f(4)+90f(3)+31f(2)+f(1).\end{aligned}$$

Substituting in $t=0$, we have

$$\begin{aligned}E({n}_{i})&=N{\pi}_{i}\\ E({n}_{i}^{2})&={N}_{(2)}{\pi}_{i}^{2}+N{\pi}_{i}\\ E({n}_{i}^{3})&={N}_{(3)}{\pi}_{i}^{3}+3{N}_{(2)}{\pi}_{i}^{2}+N{\pi}_{i}\\ E({n}_{i}^{4})&={N}_{(4)}{\pi}_{i}^{4}+6{N}_{(3)}{\pi}_{i}^{3}+7{N}_{(2)}{\pi}_{i}^{2}+N{\pi}_{i}\\ E({n}_{i}^{5})&={N}_{(5)}{\pi}_{i}^{5}+10{N}_{(4)}{\pi}_{i}^{4}+25{N}_{(3)}{\pi}_{i}^{3}+15{N}_{(2)}{\pi}_{i}^{2}+N{\pi}_{i}\\ E({n}_{i}^{6})&={N}_{(6)}{\pi}_{i}^{6}+15{N}_{(5)}{\pi}_{i}^{5}+65{N}_{(4)}{\pi}_{i}^{4}+90{N}_{(3)}{\pi}_{i}^{3}+31{N}_{(2)}{\pi}_{i}^{2}+N{\pi}_{i}.\end{aligned}$$
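Since each ${n}_{i}$ is marginally Binomial$(N,{\pi}_{i})$ under the multinomial model, these expansions can be checked numerically. The following sketch is our own verification code, not part of the paper; it compares the displayed formulas with exact moments computed from the binomial pmf:

```python
import math

def falling(N, r):
    """Falling factorial N_(r) = N(N-1)...(N-r+1), with N_(0) = 1."""
    out = 1
    for j in range(r):
        out *= N - j
    return out

def binom_moment(N, p, r):
    """Exact E(n^r) for n ~ Binomial(N, p), by direct summation of the pmf."""
    return sum(x ** r * math.comb(N, x) * p ** x * (1 - p) ** (N - x)
               for x in range(N + 1))

def formula_moment(N, p, r):
    """The displayed expansions, e.g. E(n^4) = N_(4)p^4 + 6N_(3)p^3 + 7N_(2)p^2 + Np."""
    coeffs = {1: [1], 2: [1, 1], 3: [1, 3, 1], 4: [1, 6, 7, 1],
              5: [1, 10, 25, 15, 1], 6: [1, 15, 65, 90, 31, 1]}[r]
    return sum(c * falling(N, r - s) * p ** (r - s)
               for s, c in enumerate(coeffs))
```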

This can be formalised in the following lemma.

**Lemma.** The integer coefficients in any expansion

$${M}_{r}=\sum _{s=1}^{r}{c}_{r}(s)f(s)\quad (1\le r\le N)$$

can be computed using ${c}_{r}(1)={c}_{r}(r)=1$ together, for $r\ge 3$, with the update

$${c}_{r}(s)={c}_{r-1}(s-1)+s\,{c}_{r-1}(s)\quad (1<s<r).$$

We note that if ${M}_{r}$ is required for $r>N$, we may repeatedly differentiate

$${M}_{N}=\sum _{s=1}^{N}{c}_{N}(s)f(s)$$

with respect to ${t}_{i}$, noting that $f(N)=N!\,{a}_{i}^{N}$ no longer depends on $m(t)$, so that, for all $h>0$, ${\partial}^{h}f(N)/\partial {t}_{i}^{h}={N}^{h}f(N)$.
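The update in the lemma is precisely the recurrence satisfied by the Stirling numbers of the second kind, $S(r,s)=S(r-1,s-1)+s\,S(r-1,s)$ with the same boundary values, so ${c}_{r}(s)=S(r,s)$. A short sketch (our own, with an illustrative function name) generates the coefficient rows:

```python
def moment_coeffs(r):
    """Row (c_r(1), ..., c_r(r)) of coefficients in M_r = sum_s c_r(s) f(s),
    built from c_r(1) = c_r(r) = 1 and the lemma's update
    c_r(s) = c_{r-1}(s-1) + s*c_{r-1}(s); these are the Stirling numbers
    of the second kind S(r, s)."""
    row = [1]                                   # r = 1: M_1 = f(1)
    for m in range(2, r + 1):
        row = [1] + [row[s - 2] + s * row[s - 1] for s in range(2, m)] + [1]
    return row
```

For example, `moment_coeffs(6)` returns the coefficients of $f(1),\dots,f(6)$ in the expansion ${M}_{6}=f(6)+15f(5)+65f(4)+90f(3)+31f(2)+f(1)$ displayed above.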

Mixed moments of any order can be derived from those of lower order, exploiting the fact that ${a}_{i}$ depends on t only via ${t}_{i}$. We illustrate this by deriving those required for the second and third moments of W.

First consider the mixed moments required for the second moment of W. Of course, $Var(W)=0$ if $k=0$. Otherwise, $k>0$, and computing $Var(W)$ requires $E({n}_{i}^{2}{n}_{j}^{2})$ for $i\ne j$. We find this as follows, assuming $N\ge 4$.

The relation ${M}_{2}=f(2)+f(1)$ established above gives

$${\partial}^{2}M/\partial {t}_{j}^{2}={N}_{(2)}{a}_{j}^{2}{f}_{N-2}(0)+N{a}_{j}{f}_{N-1}(0).$$

Repeated use of (A3) now gives

$${M}_{2,2}={N}_{(4)}{a}_{i}^{2}{a}_{j}^{2}{f}_{N-4}(0)+{N}_{(3)}{a}_{i}{a}_{j}({a}_{i}+{a}_{j}){f}_{N-3}(0)+{N}_{(2)}{a}_{i}{a}_{j}{f}_{N-2}(0),$$

so that

$$E({n}_{i}^{2}{n}_{j}^{2})={N}_{(4)}{\pi}_{i}^{2}{\pi}_{j}^{2}+{N}_{(3)}{\pi}_{i}{\pi}_{j}({\pi}_{i}+{\pi}_{j})+{N}_{(2)}{\pi}_{i}{\pi}_{j}.$$

We further look at the mixed moments needed for the third moment of W. For the skewness of W, we need $E({n}_{i}^{2}{n}_{j}^{4})$ for $i\ne j$ and, when $k>1$, $E({n}_{i}^{2}{n}_{j}^{2}{n}_{l}^{2})$ for $i,j,l$ distinct. We find these similarly, as follows, assuming $k>1$ and $N\ge 6$.

Equation (A4) above gives

$${\partial}^{4}M/\partial {t}_{j}^{2}\partial {t}_{l}^{2}={N}_{(4)}{a}_{j}^{2}{a}_{l}^{2}{f}_{N-4}(0)+{N}_{(3)}{a}_{j}{a}_{l}({a}_{j}+{a}_{l}){f}_{N-3}(0)+{N}_{(2)}{a}_{j}{a}_{l}{f}_{N-2}(0),$$

from which, using (A3) repeatedly, we have

$$\begin{aligned}{M}_{2,2,2}&={a}_{j}^{2}{a}_{l}^{2}\{{N}_{(6)}{a}_{i}^{2}{f}_{N-6}(0)+{N}_{(5)}{a}_{i}{f}_{N-5}(0)\}+{a}_{j}{a}_{l}({a}_{j}+{a}_{l})\{{N}_{(5)}{a}_{i}^{2}{f}_{N-5}(0)+{N}_{(4)}{a}_{i}{f}_{N-4}(0)\}\\&\quad +{a}_{j}{a}_{l}\{{N}_{(4)}{a}_{i}^{2}{f}_{N-4}(0)+{N}_{(3)}{a}_{i}{f}_{N-3}(0)\}\\&={N}_{(6)}{a}_{i}^{2}{a}_{j}^{2}{a}_{l}^{2}{f}_{N-6}(0)+{N}_{(5)}{a}_{i}{a}_{j}{a}_{l}\{{a}_{i}{a}_{j}+{a}_{j}{a}_{l}+{a}_{l}{a}_{i}\}{f}_{N-5}(0)\\&\quad +{N}_{(4)}{a}_{i}{a}_{j}{a}_{l}\{{a}_{i}+{a}_{j}+{a}_{l}\}{f}_{N-4}(0)+{N}_{(3)}{a}_{i}{a}_{j}{a}_{l}{f}_{N-3}(0),\end{aligned}$$

so that $E({n}_{i}^{2}{n}_{j}^{2}{n}_{l}^{2})$ equals

$${N}_{(6)}{\pi}_{i}^{2}{\pi}_{j}^{2}{\pi}_{l}^{2}+{N}_{(5)}{\pi}_{i}{\pi}_{j}{\pi}_{l}\{{\pi}_{i}{\pi}_{j}+{\pi}_{j}{\pi}_{l}+{\pi}_{l}{\pi}_{i}\}+{N}_{(4)}{\pi}_{i}{\pi}_{j}{\pi}_{l}\{{\pi}_{i}+{\pi}_{j}+{\pi}_{l}\}+{N}_{(3)}{\pi}_{i}{\pi}_{j}{\pi}_{l}.$$

Finally, the relation ${M}_{4}=f(4)+6f(3)+7f(2)+f(1)$ established above gives

$${\partial}^{4}M/\partial {t}_{j}^{4}={N}_{(4)}{a}_{j}^{4}{f}_{N-4}(0)+6{N}_{(3)}{a}_{j}^{3}{f}_{N-3}(0)+7{N}_{(2)}{a}_{j}^{2}{f}_{N-2}(0)+N{a}_{j}{f}_{N-1}(0),$$

so that repeated use of (A3) yields

$$E({n}_{i}^{2}{n}_{j}^{4})={N}_{(6)}{\pi}_{i}^{2}{\pi}_{j}^{4}+{N}_{(5)}{\pi}_{i}{\pi}_{j}^{3}(6{\pi}_{i}+{\pi}_{j})+{N}_{(4)}{\pi}_{i}{\pi}_{j}^{2}(7{\pi}_{i}+6{\pi}_{j})+{N}_{(3)}{\pi}_{i}{\pi}_{j}({\pi}_{i}+7{\pi}_{j})+{N}_{(2)}{\pi}_{i}{\pi}_{j}.$$
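These mixed-moment formulas can likewise be verified by exact enumeration over a small multinomial sample space. The sketch below is our own check, not part of the paper; it compares brute-force expectations with the three displayed formulas for $N=7$ and three cells:

```python
import math
from itertools import product

def falling(N, r):
    """Falling factorial N_(r)."""
    out = 1
    for j in range(r):
        out *= N - j
    return out

def mixed_moment(N, pis, powers):
    """Exact E(prod_i n_i^powers_i) under Multinomial(N, pis), by enumerating
    every outcome (feasible for small N and few cells)."""
    total = 0.0
    for head in product(range(N + 1), repeat=len(pis) - 1):
        last = N - sum(head)
        if last < 0:
            continue
        n = head + (last,)
        prob = math.factorial(N) * math.prod(
            p ** ni / math.factorial(ni) for ni, p in zip(n, pis))
        total += prob * math.prod(ni ** e for ni, e in zip(n, powers))
    return total
```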

Combining the above results, we now obtain the first three moments of W. Higher moments may be found similarly.

We first look at $E(W)$. We have $W=\frac{1}{{N}^{2}}\sum _{i=0}^{k}\frac{{n}_{i}^{2}}{{\pi}_{i}}-1$ and $E({n}_{i}^{2})={N}_{(2)}{\pi}_{i}^{2}+N{\pi}_{i}$, so that

$$E(W)=\frac{{N}_{(2)}}{{N}^{2}}+\frac{(k+1)}{N}-1=\frac{k}{N}.$$

The variance is computed by recalling that ${N}^{2}(W+1)={\sum}_{i}\frac{{n}_{i}^{2}}{{\pi}_{i}}$, while $E(W)=\frac{k}{N}$:

$$Var(W)=Var(W+1)=\frac{{A}^{(2)}}{{N}^{4}}-{\left(\frac{k}{N}+1\right)}^{2},$$

where

$${A}^{(2)}:={N}^{4}E\{{(W+1)}^{2}\}={\sum}_{i}\frac{E({n}_{i}^{4})}{{\pi}_{i}^{2}}+{\sum \sum}_{i\ne j}\frac{E({n}_{i}^{2}{n}_{j}^{2})}{{\pi}_{i}{\pi}_{j}}.$$

Using the expressions for $E({n}_{i}^{4})$ and $E({n}_{i}^{2}{n}_{j}^{2})$ established above, and putting

$${\pi}^{(\alpha )}:={\sum}_{i}{\pi}_{i}^{\alpha},$$

we have

$$\begin{aligned}{\sum}_{i}\frac{E({n}_{i}^{4})}{{\pi}_{i}^{2}}&={\sum}_{i}\{{N}_{(4)}{\pi}_{i}^{2}+6{N}_{(3)}{\pi}_{i}+7{N}_{(2)}+N{\pi}_{i}^{-1}\}\\&={N}_{(4)}{\pi}^{(2)}+6{N}_{(3)}+7{N}_{(2)}(k+1)+N{\pi}^{(-1)}\end{aligned}$$

and

$$\begin{aligned}{\sum \sum}_{i\ne j}\frac{E({n}_{i}^{2}{n}_{j}^{2})}{{\pi}_{i}{\pi}_{j}}&={\sum}_{i\ne j}\{{N}_{(4)}{\pi}_{i}{\pi}_{j}+{N}_{(3)}({\pi}_{i}+{\pi}_{j})+{N}_{(2)}\}\\&={N}_{(4)}(1-{\pi}^{(2)})+2{N}_{(3)}k+{N}_{(2)}k(k+1),\end{aligned}$$

so that

$${A}^{(2)}={N}_{(4)}+2{N}_{(3)}(k+3)+{N}_{(2)}(k+1)(k+7)+N{\pi}^{(-1)},$$

whence

$$\begin{aligned}Var(W)&=\frac{{N}_{(4)}+2{N}_{(3)}(k+3)+{N}_{(2)}(k+1)(k+7)+N{\pi}^{(-1)}}{{N}^{4}}-{\left(1+\frac{k}{N}\right)}^{2}\\&=\frac{\left\{{\pi}^{(-1)}-{(k+1)}^{2}\right\}+2k(N-1)}{{N}^{3}},\quad \text{after some simplification}.\end{aligned}$$

Note that $Var(W)$ depends on $({\pi}_{i})$ only via ${\pi}^{(-1)}$ while, by strict convexity of $x\mapsto 1/x\ (x>0)$,

$${\pi}^{(-1)}\ge {(k+1)}^{2},\quad \text{with equality if and only if } {\pi}_{i}\equiv 1/(k+1).$$

Thus, for given k and N, $Var(W)$ is strictly increasing as $({\pi}_{i})$ departs from uniformity, tending to ∞ as one or more ${\pi}_{i}\to {0}_{+}$.
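For small k and N, both moments of W can be computed exactly by enumerating the multinomial sample space, giving a direct check of the variance formula. A minimal sketch (our own code; function names are illustrative):

```python
import math
from itertools import product

def w_moments(N, pis):
    """Exact mean and variance of W = (1/N^2) * sum_i n_i^2/pi_i - 1 under
    Multinomial(N, pis), by enumerating all outcomes."""
    m1 = m2 = 0.0
    for head in product(range(N + 1), repeat=len(pis) - 1):
        last = N - sum(head)
        if last < 0:
            continue
        ns = head + (last,)
        prob = math.factorial(N) * math.prod(
            p ** ni / math.factorial(ni) for ni, p in zip(ns, pis))
        w = sum(ni ** 2 / p for ni, p in zip(ns, pis)) / N ** 2 - 1
        m1 += prob * w
        m2 += prob * w * w
    return m1, m2 - m1 ** 2

def var_w_formula(N, pis):
    """Var(W) = ({pi^(-1) - (k+1)^2} + 2k(N-1)) / N^3, as derived above."""
    k = len(pis) - 1
    pinv = sum(1 / p for p in pis)               # pi^(-1)
    return ((pinv - (k + 1) ** 2) + 2 * k * (N - 1)) / N ** 3
```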

Finally, for these calculations, we look at $E[{\left\{W-E(W)\right\}}^{3}]$. Recalling again that ${N}^{2}(W+1)={\sum}_{i}\frac{{n}_{i}^{2}}{{\pi}_{i}}$,

$$\begin{aligned}E[{\left\{W-E(W)\right\}}^{3}]&=E[{\left\{(W+1)-E(W+1)\right\}}^{3}]\\&={N}^{-6}{A}^{(3)}-3\,Var(W)(E(W)+1)-{(E(W)+1)}^{3},\end{aligned}$$

where ${A}^{(3)}:={N}^{6}E\{{(W+1)}^{3}\}$ is given by

$${A}^{(3)}={\sum}_{i}\frac{E({n}_{i}^{6})}{{\pi}_{i}^{3}}+3{\sum \sum}_{i\ne j}\frac{E({n}_{i}^{2}{n}_{j}^{4})}{{\pi}_{i}{\pi}_{j}^{2}}+{\sum \sum \sum}_{i,j,l\ \mathrm{distinct}}\frac{E({n}_{i}^{2}{n}_{j}^{2}{n}_{l}^{2})}{{\pi}_{i}{\pi}_{j}{\pi}_{l}}.$$

Given that

$$E(W)=k/N\quad \text{and}\quad Var(W)=\frac{\left\{{\pi}^{(-1)}-{(k+1)}^{2}\right\}+2k(N-1)}{{N}^{3}},$$

it suffices to find ${A}^{(3)}$.

Using the expressions for $E({n}_{i}^{6})$, $E({n}_{i}^{2}{n}_{j}^{2}{n}_{l}^{2})$, and $E({n}_{i}^{2}{n}_{j}^{4})$ established above, we have

$${\sum}_{i}\frac{E({n}_{i}^{6})}{{\pi}_{i}^{3}}={N}_{(6)}{\pi}^{(3)}+15{N}_{(5)}{\pi}^{(2)}+65{N}_{(4)}+90{N}_{(3)}(k+1)+31{N}_{(2)}{\pi}^{(-1)}+N{\pi}^{(-2)},$$

$$\begin{aligned}{\sum \sum}_{i\ne j}\frac{E({n}_{i}^{2}{n}_{j}^{4})}{{\pi}_{i}{\pi}_{j}^{2}}&={\sum}_{i\ne j}\left\{{N}_{(6)}{\pi}_{i}{\pi}_{j}^{2}+{N}_{(5)}{\pi}_{j}(6{\pi}_{i}+{\pi}_{j})+{N}_{(4)}(7{\pi}_{i}+6{\pi}_{j})+{N}_{(3)}({\pi}_{i}/{\pi}_{j}+7)+{N}_{(2)}{\pi}_{j}^{-1}\right\}\\&={N}_{(6)}\{{\pi}^{(2)}-{\pi}^{(3)}\}+{N}_{(5)}\{6+(k-6){\pi}^{(2)}\}+13{N}_{(4)}k\\&\quad +{N}_{(3)}\{{\pi}^{(-1)}+(7k-1)(k+1)\}+{N}_{(2)}k{\pi}^{(-1)},\end{aligned}$$

and

$$\begin{aligned}{\sum \sum \sum}_{i,j,l\ \mathrm{distinct}}\frac{E({n}_{i}^{2}{n}_{j}^{2}{n}_{l}^{2})}{{\pi}_{i}{\pi}_{j}{\pi}_{l}}&={N}_{(6)}\{1+2{\pi}^{(3)}-3{\pi}^{(2)}\}+3{N}_{(5)}(k-1)\{1-{\pi}^{(2)}\}\\&\quad +3{N}_{(4)}k(k-1)+{N}_{(3)}k({k}^{2}-1),\end{aligned}$$

so that, after some simplification,

$$\begin{aligned}{A}^{(3)}&={N}_{(6)}+3{N}_{(5)}(k+5)+{N}_{(4)}\{3k(k+12)+65\}+{N}_{(3)}\{{k}^{3}+21{k}^{2}+107k+87\}\\&\quad +3{N}_{(3)}{\pi}^{(-1)}+{N}_{(2)}(31+3k){\pi}^{(-1)}+N{\pi}^{(-2)}.\end{aligned}$$

Substituting in and simplifying, we find $E[{\left\{W-E(W)\right\}}^{3}]$ to be

$$\frac{\left\{{\pi}^{(-2)}-{(k+1)}^{3}\right\}-(3k+25-22N)\left\{{\pi}^{(-1)}-{(k+1)}^{2}\right\}+g(k,N)}{{N}^{5}},$$

where

$$g(k,N)=4(N-1)k(k+2N-5)>0.$$

Note that $E[{\left\{W-E(W)\right\}}^{3}]$ depends on $({\pi}_{i})$ only via ${\pi}^{(-1)}$ and the larger quantity ${\pi}^{(-2)}$. In particular, for given k and N, the skewness of W tends to $+\infty $ as one or more ${\pi}_{i}\to {0}_{+}$.
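The third-moment formula can be checked the same way as the variance. The following sketch (again our own verification code, not the paper's) computes $E[\{W-E(W)\}^{3}]$ exactly by enumeration and compares it with the closed form, for $N=6$, the smallest sample size covered by the derivation:

```python
import math
from itertools import product

def w_third_central(N, pis):
    """Exact E[{W - E(W)}^3] for W = (1/N^2) sum_i n_i^2/pi_i - 1 under
    Multinomial(N, pis), by enumerating all outcomes; uses E(W) = k/N."""
    mean = (len(pis) - 1) / N
    m3 = 0.0
    for head in product(range(N + 1), repeat=len(pis) - 1):
        last = N - sum(head)
        if last < 0:
            continue
        ns = head + (last,)
        prob = math.factorial(N) * math.prod(
            p ** ni / math.factorial(ni) for ni, p in zip(ns, pis))
        w = sum(ni ** 2 / p for ni, p in zip(ns, pis)) / N ** 2 - 1
        m3 += prob * (w - mean) ** 3
    return m3

def third_moment_formula(N, pis):
    """The closed form displayed above, with g(k,N) = 4(N-1)k(k+2N-5)."""
    k = len(pis) - 1
    p1 = sum(1 / p for p in pis)              # pi^(-1)
    p2 = sum(1 / p ** 2 for p in pis)         # pi^(-2)
    g = 4 * (N - 1) * k * (k + 2 * N - 5)
    return ((p2 - (k + 1) ** 3)
            - (3 * k + 25 - 22 * N) * (p1 - (k + 1) ** 2) + g) / N ** 5
```

In the uniform case the ${\pi}^{(-1)}$ and ${\pi}^{(-2)}$ terms vanish and the moment reduces to $g(k,N)/{N}^{5}$, consistent with the ${\chi}_{k}^{2}$ third central moment $8k$ as $N\to \infty$.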

In the notation of Lemma 1, it suffices to find truncate-and-bound approximations for each of $E({X}_{\mu})$, $E(X\cdot {X}_{\mu})$, and $E({X}_{\mu}^{2})$.

For all $r,s$ in $\mathcal{N}$, define ${h}_{r,s}(\mu ):=E\{{(\log (X+r))}^{s}\}$. Appropriate choices of $m\in {\mathcal{N}}_{0}$ and $a\in \mathcal{A}$ in (9), together with (10), give:

$$\begin{aligned}E({X}_{\mu})&=\mu {h}_{1,1}(\mu )-\mu \log \mu ,\\ E(X\cdot {X}_{\mu})&=\{{\mu}^{2}{h}_{2,1}(\mu )+\mu {h}_{1,1}(\mu )\}-({\mu}^{2}+\mu )\log \mu ,\ \text{and}\\ E({X}_{\mu}^{2})&={\mu}^{2}{h}_{2,2}(\mu )+\mu {h}_{1,2}(\mu )+({\mu}^{2}+\mu ){(\log \mu )}^{2}-2\log \mu \{{\mu}^{2}{h}_{2,1}(\mu )+\mu {h}_{1,1}(\mu )\},\end{aligned}$$

so that it suffices to truncate and bound ${h}_{r,s}(\mu )$ for $r,s\in \{1,2\}$.

For all $r,s$ in $\mathcal{N}$, and for all $m\in {\mathcal{N}}_{0}$, we write:

$${h}_{r,s}(\mu )={h}_{r,s}^{[m]}(\mu )+{\epsilon}_{r,s}^{[m]}(\mu ),$$

in which:

$${h}_{r,s}^{[m]}(\mu ):={\sum}_{x=0}^{m}{(\log (x+r))}^{s}p(x)\quad \text{and}\quad {\epsilon}_{r,s}^{[m]}(\mu ):={\sum}_{x=m+1}^{\infty}{(\log (x+r))}^{s}p(x).$$

Using again (7), the “error term” ${\epsilon}_{r,s}^{[m]}(\mu )$ has lower and upper bounds:

$$0<{\epsilon}_{r,s}^{[m]}(\mu )<{\overline{\epsilon}}_{r,s}^{[m]}(\mu ):={\sum}_{x=m+1}^{\infty}{(x+(r-1))}^{s}p(x).$$

Restricting attention now to $r,s\in \{1,2\}$, as we may, and requiring $m\ge s$ so that ${F}^{[m-s]}(\mu )$ given by (4) is defined, (8) gives:

$${\overline{\epsilon}}_{1,1}^{[m]}(\mu )={\sum}_{x=m+1}^{\infty}x\,p(x)=\mu {\sum}_{x=m}^{\infty}p(x)=\mu \{1-{F}^{[m-1]}(\mu )\},$$

$${\overline{\epsilon}}_{2,1}^{[m]}(\mu )={\sum}_{x=m+1}^{\infty}(x+1)p(x)={\overline{\epsilon}}_{1,1}^{[m]}(\mu )+\{1-{F}^{[m]}(\mu )\},$$

$$\begin{aligned}{\overline{\epsilon}}_{1,2}^{[m]}(\mu )&={\sum}_{x=m+1}^{\infty}{x}^{2}p(x)={\sum}_{x=m+1}^{\infty}\{x(x-1)+x\}p(x)\\&={\mu}^{2}\{1-{F}^{[m-2]}(\mu )\}+{\overline{\epsilon}}_{1,1}^{[m]}(\mu ),\end{aligned}$$

and:

$$\begin{aligned}{\overline{\epsilon}}_{2,2}^{[m]}(\mu )&={\sum}_{x=m+1}^{\infty}{(x+1)}^{2}p(x)={\sum}_{x=m+1}^{\infty}\{{x}^{2}+(x+1)+x\}p(x)\\&={\overline{\epsilon}}_{1,2}^{[m]}(\mu )+{\overline{\epsilon}}_{2,1}^{[m]}(\mu )+{\overline{\epsilon}}_{1,1}^{[m]}(\mu ).\end{aligned}$$

Accordingly, for given μ, each ${\overline{\epsilon}}_{r,s}^{[m]}(\mu )$ decreases strictly to zero with m, providing, to any desired accuracy, truncate-and-bound approximations for each of ν, τ, and ρ. In this connection, we note that the upper tail probabilities involved here can be bounded by standard Chernoff arguments.
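As an illustration, the sketch below (our own code, assuming, as the notation suggests, that $X\sim \mathrm{Poisson}(\mu )$ with pmf $p(x)$ and ${F}^{[m]}(\mu )=P(X\le m)$) computes the truncated sum ${h}_{1,1}^{[m]}(\mu )$ together with its tail bound ${\overline{\epsilon}}_{1,1}^{[m]}(\mu )$, which sandwich ${h}_{1,1}(\mu )$:

```python
import math

def poisson_pmf(x, mu):
    """p(x) for X ~ Poisson(mu)."""
    return math.exp(-mu) * mu ** x / math.factorial(x)

def h11_truncated(mu, m):
    """Truncate-and-bound approximation of h_{1,1}(mu) = E{log(X+1)}:
    returns the partial sum over x = 0..m and the tail bound
    eps_bar_{1,1}^[m](mu) = mu * (1 - F^[m-1](mu)), so that the true value
    lies strictly between `partial` and `partial + bound`."""
    partial = sum(math.log(x + 1) * poisson_pmf(x, mu) for x in range(m + 1))
    cdf = sum(poisson_pmf(x, mu) for x in range(m))   # F^[m-1](mu) = P(X <= m-1)
    bound = mu * (1 - cdf)
    return partial, bound
```

Increasing m tightens the bracket monotonically, exactly as the strict decrease of ${\overline{\epsilon}}_{r,s}^{[m]}(\mu )$ above guarantees.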

1. Critchley, F.; Marriott, P. Computational Information Geometry in Statistics: Theory and practice. Entropy **2014**, 16, 2454–2471.
2. Marriott, P.; Sabolova, R.; Van Bever, G.; Critchley, F. Geometry of goodness-of-fit testing in high dimensional low sample size modelling. In Geometric Science of Information: Second International Conference, GSI 2015, Palaiseau, France, October 28–30, 2015, Proceedings; Nielsen, F., Barbaresco, F., Eds.; Springer: Berlin, Germany, 2015; pp. 569–576.
3. Amari, S.-I.; Nagaoka, H. Methods of Information Geometry; Translations of Mathematical Monographs; American Mathematical Society: Providence, RI, USA, 2000.
4. Cressie, N.; Read, T.R.C. Multinomial goodness-of-fit tests. J. R. Stat. Soc. B **1984**, 46, 440–464.
5. Read, T.R.C.; Cressie, N.A.C. Goodness-of-Fit Statistics for Discrete Multivariate Data; Springer: New York, NY, USA, 1988.
6. Kass, R.E.; Vos, P.W. Geometrical Foundations of Asymptotic Inference; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 1997.
7. Agresti, A. Categorical Data Analysis, 3rd ed.; Wiley: Hoboken, NJ, USA, 2013.
8. Amari, S.-I. Differential-Geometrical Methods in Statistics; Lecture Notes in Statistics; Springer: New York, NY, USA, 1985; Volume 28.
9. Barndorff-Nielsen, O.E.; Cox, D.R. Asymptotic Techniques for Use in Statistics; Chapman & Hall: London, UK, 1989.
10. Anaya-Izquierdo, K.; Critchley, F.; Marriott, P. When are first-order asymptotics adequate? A diagnostic. STAT **2014**, 3, 17–22.
11. Lauritzen, S.L. Graphical Models; Clarendon Press: Oxford, UK, 1996.
12. Geyer, C.J. Likelihood inference in exponential families and directions of recession. Electron. J. Stat. **2009**, 3, 259–289.
13. Fienberg, S.E.; Rinaldo, A. Maximum likelihood estimation in log-linear models. Ann. Stat. **2012**, 40, 996–1023.
14. Eguchi, S.; Copas, J. Local model uncertainty and incomplete-data bias. J. R. Stat. Soc. B **2005**, 67, 1–37.
15. Copas, J.; Eguchi, S. Likelihood for statistically equivalent models. J. R. Stat. Soc. B **2010**, 72, 193–217.
16. Anaya-Izquierdo, K.; Critchley, F.; Marriott, P.; Vos, P. On the geometric interplay between goodness-of-fit and estimation: Illustrative examples. In Computational Information Geometry: For Image and Signal Processing; Lecture Notes in Computer Science; Nielsen, F., Dodson, K., Critchley, F., Eds.; Springer: Berlin, Germany, 2016.
17. Morris, C. Central limit theorems for multinomial sums. Ann. Stat. **1975**, 3, 165–188.
18. Osius, G.; Rojek, D. Normal goodness-of-fit tests for multinomial models with large degrees of freedom. J. Am. Stat. Assoc. **1992**, 87, 1145–1152.
19. Holst, L. Asymptotic normality and efficiency for certain goodness-of-fit tests. Biometrika **1972**, 59, 137–145.
20. Koehler, K.J.; Larntz, K. An empirical investigation of goodness-of-fit statistics for sparse multinomials. J. Am. Stat. Assoc. **1980**, 75, 336–344.
21. Koehler, K.J. Goodness-of-fit tests for log-linear models in sparse contingency tables. J. Am. Stat. Assoc. **1986**, 81, 483–493.
22. McCullagh, P. The conditional distribution of goodness-of-fit statistics for discrete data. J. Am. Stat. Assoc. **1986**, 81, 104–107.
23. Forster, J.J.; McDonald, J.W.; Smith, P.W.F. Monte Carlo exact conditional tests for log-linear and logistic models. J. R. Stat. Soc. B **1996**, 58, 445–453.
24. Kim, D.; Agresti, A. Nearly exact tests of conditional independence and marginal homogeneity for sparse contingency tables. Comput. Stat. Data Anal. **1997**, 24, 89–104.
25. Booth, J.G.; Butler, R.W. An importance sampling algorithm for exact conditional tests in log-linear models. Biometrika **1999**, 86, 321–332.
26. Caffo, B.S.; Booth, J.G. Monte Carlo conditional inference for log-linear and logistic models: A survey of current methodology. Stat. Methods Med. Res. **2003**, 12, 109–123.
27. Lloyd, C.J. Computing highly accurate or exact P-values using importance sampling. Comput. Stat. Data Anal. **2012**, 56, 1784–1794.
28. Simonoff, J.S. Jackknifing and bootstrapping goodness-of-fit statistics in sparse multinomials. J. Am. Stat. Assoc. **1986**, 81, 1005–1011.
29. Gaunt, R.E.; Pickett, A.; Reinert, G. Chi-square approximation by Stein’s method with application to Pearson’s statistic. arXiv **2015**, arXiv:1507.01707.
30. Fan, J.; Hung, H.-N.; Wong, W.-H. Geometric understanding of likelihood ratio statistics. J. Am. Stat. Assoc. **2000**, 95, 836–841.
31. Ulyanov, V.V.; Zubov, V.N. Refinement on the convergence of one family of goodness-of-fit statistics to chi-squared distribution. Hiroshima Math. J. **2009**, 39, 133–161.
32. Asylbekov, Z.A.; Zubov, V.N.; Ulyanov, V.V. On approximating some statistics of goodness-of-fit tests in the case of three-dimensional discrete data. Sib. Math. J. **2011**, 52, 571–584.
33. Zelterman, D. Goodness-of-fit tests for large sparse multinomial distributions. J. Am. Stat. Assoc. **1987**, 82, 624–629.
34. Bregman, L.M. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. **1967**, 7, 200–217.
35. Amari, S.-I. Information Geometry and Its Applications; Springer: Tokyo, Japan, 2015.
36. Csiszár, I. On topological properties of f-divergences. Stud. Sci. Math. Hung. **1967**, 2, 329–339.
37. Csiszár, I. Information measures: A critical survey. In Transactions of the Seventh Prague Conference on Information Theory, Statistical Decision Functions, Random Processes and of the 1974 European Meeting of Statisticians; Kozesnik, J., Ed.; Springer: Houten, The Netherlands, 1977; Volume B, pp. 73–86.
38. Barndorff-Nielsen, O. Information and Exponential Families in Statistical Theory; John Wiley & Sons, Ltd.: Chichester, UK, 1978.
39. Brown, L.D. Fundamentals of Statistical Exponential Families with Applications in Statistical Decision Theory; Lecture Notes–Monograph Series; Institute of Mathematical Statistics: Hayward, CA, USA, 1986; Volume 9.
40. Csiszár, I.; Matúš, F. Closures of exponential families. Ann. Probab. **2005**, 33, 582–600.
41. Eriksson, N.; Fienberg, S.E.; Rinaldo, A.; Sullivant, S. Polyhedral conditions for the nonexistence of the MLE for hierarchical log-linear models. J. Symb. Comput. **2006**, 41, 222–233.
42. Rinaldo, A.; Fienberg, S.; Zhou, Y. On the geometry of discrete exponential families with applications to exponential random graph models. Electron. J. Stat. **2009**, 3, 446–484.
43. Critchley, F.; Marriott, P. Computing with Fisher geodesics and extended exponential families. Stat. Comput. **2016**, 26, 325–332.
44. Sheather, S.J.; Jones, M.C. A reliable data-based bandwidth selection method for kernel density estimation. J. R. Stat. Soc. B **1991**, 53, 683–690.

| α = 1 + 2λ | λ | Formula | Name |
|---|---|---|---|
| 3 | 1 | ${\sum}_{i=0}^{k}\frac{{({n}_{i}-N{\pi}_{i})}^{2}}{N{\pi}_{i}}$ | Pearson ${\chi}^{2}$ |
| 7/3 | 2/3 | $\frac{9}{5}{\sum}_{i=0}^{k}{n}_{i}\left[{\left(\frac{{n}_{i}}{N{\pi}_{i}}\right)}^{2/3}-1\right]$ | Read–Cressie |
| 1 | 0 | $2{\sum}_{i=0}^{k}{n}_{i}\log \left({n}_{i}/N{\pi}_{i}\right)$ | Twice log-likelihood (deviance) |
| 0 | $-\frac{1}{2}$ | $4{\sum}_{i=0}^{k}{\left(\sqrt{{n}_{i}}-\sqrt{N{\pi}_{i}}\right)}^{2}$ | Freeman–Tukey or Hellinger |
| −1 | −1 | $2{\sum}_{i=0}^{k}N{\pi}_{i}\log \left(N{\pi}_{i}/{n}_{i}\right)$ | Twice modified log-likelihood |
| −3 | −2 | ${\sum}_{i=0}^{k}\frac{{({n}_{i}-N{\pi}_{i})}^{2}}{{n}_{i}}$ | Neyman ${\chi}^{2}$ |

| λ | α | Divergence ${D}_{\lambda}({\mu}_{1},{\mu}_{2})$ | Divergence parameter ξ | Potential |
|---|---|---|---|---|
| −1 | −1 | ${\mu}_{1}-{\mu}_{2}-{\mu}_{2}\left(\log {\mu}_{1}-\log {\mu}_{2}\right)$ | $\xi =\log (\mu )$ | $\exp (\xi )$ |
| 0 | 1 | ${\mu}_{2}-{\mu}_{1}-{\mu}_{1}\left(\log {\mu}_{2}-\log {\mu}_{1}\right)$ | $\xi =\mu $ | $\xi \log (\xi )-\xi $ |
| $\lambda \ne 0,-1$ | $\alpha \ne \pm 1$ | $\frac{{\lambda}^{*}{\mu}_{1}-{\lambda}^{*}{\mu}_{2}-{\mu}_{2}\left({\left({\mu}_{1}/{\mu}_{2}\right)}^{{\lambda}^{*}}-1\right)}{{\lambda}^{*}(1-{\lambda}^{*})}$ | $\xi =\frac{1}{{\lambda}^{*}}{\mu}^{{\lambda}^{*}}$ | $\frac{{({\lambda}^{*}\xi )}^{1/{\lambda}^{*}}}{1-{\lambda}^{*}}$ |

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).