# Divergence from, and Convergence to, Uniformity of Probability Density Quantiles



Department of Mathematics and Statistics, La Trobe University, Bundoora, VIC 3086, Australia

School of Mathematics and Statistics, University of Melbourne, Parkville, VIC 3010, Australia

Author to whom correspondence should be addressed.

These authors contributed equally to this work.

Received: 7 March 2018 / Revised: 10 April 2018 / Accepted: 19 April 2018 / Published: 25 April 2018

(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

We demonstrate that questions of convergence and divergence regarding shapes of distributions can be addressed in a location- and scale-free environment. This environment is the class of probability density quantiles (pdQs), obtained by normalizing the composition of the density with the associated quantile function. It has earlier been shown that the pdQ is representative of a location-scale family and carries essential information regarding shape and tail behaviour of the family. The class of pdQs consists of densities of continuous distributions with a common domain, the unit interval, facilitating metric and semi-metric comparisons. The Kullback–Leibler divergences from uniformity of these pdQs are mapped to illustrate their relative positions with respect to uniformity. To gain more insight into the information that is conserved under the pdQ mapping, we repeatedly apply the pdQ mapping and find that further applications of it are quite generally entropy increasing, so convergence to the uniform distribution is investigated. New fixed point theorems are established with elementary probabilistic arguments and illustrated by examples.

For each continuous location-scale family of distributions with square-integrable density, there is a probability density quantile (pdQ), which is an absolutely continuous distribution on the unit interval. Members of the class of such pdQs differ only in shape, and the asymmetry of their shapes can be partially ordered by their Hellinger distances or Kullback–Leibler divergences from the class of symmetric distributions on this interval. In addition, the tail behaviour of the original family can be described in terms of the boundary derivatives of its pdQ. Empirical estimators of the pdQs enable one to carry out inference, such as robust fitting of shape parameter families to data; details are in [1].

The Kullback–Leibler directed divergences and symmetrized divergence (KLD) of a pdQ with respect to the uniform distribution on [0,1] are investigated in Section 2, with remarkably simple numerical results, and a map of these divergences for some standard location-scale families is constructed. The ‘shapeless’ uniform distribution is the center of the pdQ universe, as explained in Section 3, where it is found to be a fixed point. A natural question of interest is to find the invariant information of the pdQ mapping, that is, the information conserved after the pdQ mapping is applied. To this end, it is necessary to apply the pdQ mapping repeatedly. Numerical studies indicate that further applications of the pdQ transformation are generally entropy increasing, so we investigate the convergence to uniformity of repeated applications of the pdQ transformation by means of fixed point theorems for a semi-metric. As the pdQ mapping is not a contraction, the proofs of the fixed point theorems rely on elementary probabilistic arguments rather than the classical contraction mapping principle. Our approach may shed light on future research in fixed point theory. Further ideas are discussed in Section 4.

Let $\mathcal{F}$ denote the class of cumulative distribution functions (cdfs) on the real line $\mathbb{R}$, and for each $F\in\mathcal{F}$ define the associated quantile function of F by $Q(u)=\inf\{x:F(x)\ge u\}$, for $0<u<1$. When the random variable X has cdf F, we write $X\sim F$. When the density function $f=F'$ exists, we also write $X\sim f$. We restrict attention to F absolutely continuous with respect to Lebesgue measure, but the results can be extended to the discrete and mixture cases using suitable dominating measures.

Let $\mathcal{F}'=\{F\in\mathcal{F}:\ f=F'\ \text{exists and is positive}\}$. For each $F\in\mathcal{F}'$, we follow [2] and define the quantile density function $q(u)=Q'(u)=1/f(Q(u))$. Parzen called its reciprocal $fQ(u)=f(Q(u))$ the density quantile function. For $F\in\mathcal{F}'$ and U uniformly distributed on [0,1], assume $\kappa=\mathbb{E}[fQ(U)]=\int f^{2}(x)\,dx$ is finite; that is, f is square integrable. Then, we can define the continuous pdQ of F by $f^{*}(u)=fQ(u)/\kappa$, $0<u<1$. Let $\mathcal{F}'^{\,*}\subset\mathcal{F}'$ denote the class of all such F.

Not all f are square-integrable, and this requirement for the mapping $f\to f^{*}$ means that $\mathcal{F}'^{\,*}$ is a proper subset of $\mathcal{F}'$. The advantages of working with $f^{*}$s over fs are that they are free of location and scale parameters, they ignore flat spots in F, and they have a common bounded support. Moreover, $f^{*}$ often has a simpler formula than f; see Table 1 for examples.
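As a concrete illustration of this definition, here is a minimal Python sketch (our own, not the paper's supplementary R script): for the exponential density, $Q(u)=-\ln(1-u)$ and $\kappa=\int e^{-2x}\,dx=1/2$, so the pdQ works out to $f^{*}(u)=2(1-u)$.

```python
import math

def pdq(f, Q, kappa, u):
    """pdQ value f*(u) = f(Q(u)) / kappa, where kappa = integral of f(x)^2 dx."""
    return f(Q(u)) / kappa

# Exponential: f(x) = e^{-x} (x > 0), Q(u) = -ln(1 - u), kappa = 1/2.
f = lambda x: math.exp(-x)
Q = lambda u: -math.log(1.0 - u)
kappa = 0.5

# The composition collapses to the closed form f*(u) = 2(1 - u).
for u in (0.1, 0.5, 0.9):
    assert abs(pdq(f, Q, kappa, u) - 2.0 * (1.0 - u)) < 1e-12
```

Note how location and scale would cancel: replacing f by a rescaled copy rescales both fQ and κ by the same factor.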

Given that a pdQ $f^{*}$ exists for a distribution with density f, so do the cdf $F^{*}$ and quantile function $Q^{*}=(F^{*})^{-1}$ associated with $f^{*}$. Thus, a monotone transformation from $X\sim F$ to $X^{*}\sim F^{*}$ exists; it is simply $X^{*}=m(X)=Q^{*}(F(X))$. For the Power$(b)$ distribution of Table 1, $f_{b}^{*}=f_{b^{*}}$, where $b^{*}=2-1/b$, so $m_{b}(x)=Q_{b}^{*}(F_{b}(x))=x^{b/b^{*}}=x^{b^{2}/(2b-1)}$. For the normal distribution with parameters $\mu,\sigma$, it is $m_{\mu,\sigma}(x)=\Phi((x-\mu)/\sqrt{2}\,\sigma)$. In general, an explicit expression for $Q^{*}$ that depends only on f or F (plus location-scale parameters) need not exist.
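The Power$(b)$ closed form is easy to check numerically. The following Python sketch (ours, not from the paper) verifies $f_{b}^{*}=f_{b^{*}}$ with $b^{*}=2-1/b$ at a few points, using $Q(u)=u^{1/b}$ and $\kappa=b^{2}/(2b-1)$.

```python
def power_pdq(b, u):
    # Power(b): f(x) = b x^{b-1} on (0,1), Q(u) = u^{1/b},
    # so fQ(u) = b u^{(b-1)/b} and kappa = b^2 / (2b - 1).
    fQ = b * u ** ((b - 1.0) / b)
    kappa = b * b / (2.0 * b - 1.0)
    return fQ / kappa

def power_density(beta, u):
    # density of Power(beta) evaluated at u
    return beta * u ** (beta - 1.0)

b = 2.0
bstar = 2.0 - 1.0 / b          # = 1.5
for u in (0.2, 0.5, 0.8):
    assert abs(power_pdq(b, u) - power_density(bstar, u)) < 1e-12

# The monotone map m_b(x) = x^{b^2/(2b-1)} carries Power(b) to Power(b*).
assert abs(0.5 ** (b * b / (2 * b - 1)) - 0.5 ** (4.0 / 3.0)) < 1e-12
```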

Next, we evaluate and plot the Kullback–Leibler [3] divergences from uniformity. The [3] divergence of density $f_{1}$ from density $f_{2}$, when both have domain [0,1], is defined as

$$I(f_{1}:f_{2}):=\int_{0}^{1}\ln\big(f_{1}(u)/f_{2}(u)\big)\,f_{1}(u)\,du=\mathbb{E}\big[\ln\big(f_{1}(U)/f_{2}(U)\big)\,f_{1}(U)\big],$$

where U denotes a random variable with the uniform distribution $\mathcal{U}$ on [0,1]. The divergences from uniformity are easily computed through

$$I(\mathcal{U}:f^{*})=-\int_{0}^{1}\ln\big(f^{*}(u)\big)\,du=-\mathbb{E}\big[\ln\big(f^{*}(U)\big)\big]$$

and

$$I(f^{*}:\mathcal{U})=\int_{0}^{1}\ln\big(f^{*}(u)\big)\,f^{*}(u)\,du=\mathbb{E}\big[\ln\big(f^{*}(U)\big)\,f^{*}(U)\big].$$

Kullback ([4], p. 6) interprets $I({f}^{*}:\mathcal{U})$ as the mean evidence in one observation $V\sim {f}^{*}$ for ${f}^{*}$ over $\mathcal{U}$; it is also known as the relative entropy of ${f}^{*}$ with respect to $\mathcal{U}$. The terminology directed divergence for $I({f}_{1}:{f}_{2})$ is also sometimes used ([4], p. 7) with ‘directed’ explained in ([4], pp. 82, 85); see also [5] in this regard.

Table 1 shows the quantile functions of some standard distributions, along with their pdQs, associated divergences $I(\mathcal{U}:{f}^{*}),I({f}^{*}:\mathcal{U})$ and symmetrized divergence (KLD) defined by $J(\mathcal{U},{f}^{*}):=I(\mathcal{U}:{f}^{*})+I({f}^{*}:\mathcal{U})$. The last measure was earlier introduced in a different form by [6].
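These integrals are straightforward to evaluate numerically. As an illustration (a Python sketch using our own quadrature, not part of the paper), the exponential pdQ $f^{*}(u)=2(1-u)$ gives $I(\mathcal{U}:f^{*})=1-\ln 2$, $I(f^{*}:\mathcal{U})=\ln 2-\tfrac12$, hence $J(\mathcal{U},f^{*})=\tfrac12$ and distance $1/\sqrt{2}$ from uniformity, matching the exponential point in Figure 1.

```python
import math

def simpson(g, a, b, n=20000):
    # composite Simpson rule on [a, b] with an even number of panels
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# Exponential pdQ: f*(u) = 2(1 - u).
fstar = lambda u: 2.0 * (1.0 - u)

# avoid the logarithmic singularity at u = 1 by stopping at 1 - eps
eps = 1e-9
I_U_f = simpson(lambda u: -math.log(fstar(u)), 0.0, 1.0 - eps)
I_f_U = simpson(lambda u: math.log(fstar(u)) * fstar(u), 0.0, 1.0 - eps)

assert abs(I_U_f - (1.0 - math.log(2.0))) < 1e-3          # 1 - ln 2
assert abs(I_f_U - (math.log(2.0) - 0.5)) < 1e-4          # ln 2 - 1/2
assert abs(math.sqrt(I_U_f + I_f_U) - 1.0 / math.sqrt(2.0)) < 1e-3
```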

Given pdQs ${f}_{1}^{*}$, ${f}_{2}^{*}$, let $d({f}_{1}^{*},{f}_{2}^{*}):=\sqrt{I({f}_{1}^{*}:{f}_{2}^{*})+I({f}_{2}^{*}:{f}_{1}^{*})}$. Then, d is a semi-metric on the space of pdQs; i.e., d satisfies all requirements of a metric except the triangle inequality. Introducing the coordinates $({s}_{1},{s}_{2})=(\sqrt{I(\mathcal{U}:{f}^{*})}\phantom{\rule{0.166667em}{0ex}},\sqrt{I({f}^{*}:\mathcal{U})})$, we can define the distance from uniformity of any ${f}^{*}$ by the Euclidean distance of $({s}_{1},{s}_{2})$ from the origin $(0,0)$, namely $d(\mathcal{U},{f}^{*})$.

This d does not satisfy the triangle inequality: for example, if $\mathcal{U},$ $\mathcal{N}$ and $\mathcal{C}$ denote the uniform, normal and Cauchy pdQs, then $d(\mathcal{U},\mathcal{N})=0.5,$ $d(\mathcal{N},\mathcal{C})=0.4681$ but $d(\mathcal{U},\mathcal{C})=1;$ see Table 1 and Figure 1. However, d can provide an informative measure of distance from uniformity.

Figure 1 shows the loci of points $({s}_{1},{s}_{2})$ for some continuous shape families. The light dotted arcs with radii 1/2, 1 and 2 are a guide to these distances from uniformity. The large discs in purple, red and black correspond to $\mathcal{U},$ $\mathcal{N}$ and $\mathcal{C}$. The blue cross at distance $1/\sqrt{2}$ from the origin corresponds to the exponential distribution. Nearby is the standard lognormal point marked by a red cross. The lower red curve is nearly straight and is the locus of points corresponding to the lognormal shape family.

The chi-squared($\nu $), $\nu >1$, family also appears as a red curve; it passes through the blue cross when $\nu =2$, as expected, and heads toward the normal disc as $\nu \to \infty .$ The Gamma family has the same locus of points as the chi-squared family. The curve for the Weibull($\beta $) family, for $0.5<\beta <3$, is shown in blue; it crosses the exponential blue cross when $\beta =1$. The Pareto(a) curve is shown in black. As a increases from 0, this line crosses the arcs distant 2 and 1 from the origin for $a=(2\sqrt{2}-1)/7\approx 0.261$ and $a=(\sqrt{5}+1)/2\approx 1.618$, respectively, and approaches the exponential blue cross as $a\to \infty $.

The Power(b) or Beta($b,1$) for $b>1/2$ family is represented by the magenta curve of points moving toward the origin as b increases from 1/2 to 1, and then moving out towards the exponential blue cross as $b\to \infty $. For each choice of $\alpha >0.5,$ $\beta >0.5$ the locus of the Beta($\alpha ,\beta $) pdQ divergences lies above the chi-squared red curve and mostly below the power(b) magenta curve; however, the U-shaped Beta distributions have loci above it.
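The Power$(b)$ loci admit a short closed form, which also pins down crossings such as the Pareto ones above; the following derivation is our own sketch, with symbols as in Section 2, and it uses the fact (from Table 1) that the exponential and Pareto$(a)$ pdQs are reflected Power densities with exponents $2$ and $2+1/a$, respectively.

```latex
% Divergences from uniformity of a Power(\beta) pdQ, p(u) = \beta u^{\beta-1}:
\begin{aligned}
I(\mathcal{U}:p) &= -\int_0^1 \ln\!\bigl(\beta u^{\beta-1}\bigr)\,du
                  = (\beta-1) - \ln\beta,\\
I(p:\mathcal{U}) &= \int_0^1 \ln\!\bigl(\beta u^{\beta-1}\bigr)\,\beta u^{\beta-1}\,du
                  = \ln\beta - \frac{\beta-1}{\beta},\\
d(\mathcal{U},p)^2 &= I(\mathcal{U}:p)+I(p:\mathcal{U})
                  = \frac{(\beta-1)^2}{\beta}.
\end{aligned}
% Taking \beta = 2 recovers the exponential distance 1/\sqrt{2};
% taking \beta = 2 + 1/a gives d(\mathcal{U}, f_a^*) = (1+1/a)/\sqrt{2+1/a}
% for Pareto(a), which equals 1 at a = (\sqrt{5}+1)/2 and 2 at a = (2\sqrt{2}-1)/7.
```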

The lower green line near the Pareto black curve gives the loci of root-divergences from uniformity of the Tukey($\lambda $) with $\lambda <1$, while the upper green curve corresponds to $\lambda \ge 1$. It is known that the Tukey($\lambda $) distributions, with $\lambda <1/7$, are good approximations to Student’s t-distributions for $\nu >0,$ provided $\lambda $ is chosen properly. The same is true for their corresponding pdQs ([1], Section 3.2). For example, the pdQ of ${t}_{\nu}$ with $\nu =0.24$ degrees of freedom is well approximated by the choice $\lambda =-4.063.$ Its location is marked by the small black disc in Figure 1; it is at distance 2 from uniformity. The generalized Tukey distributions of [7] with two shape parameters also fill a large funnel-shaped region (not marked on the map) emanating from the origin and just including the region bounded by the green curves of the symmetric Tukey distributions.

There are numerous tests for uniformity, but as [8] points out, many are undermined by the common practice of estimating location-scale parameters of the null and/or alternative distributions when in fact it is assumed that these distributions are known exactly. In practice, this means that if a test for uniformity is preceded by a probability integral transformation including parameter estimates, then the actual levels of such tests will not be those nominated unless (often complicated and model-specific) adjustments are made. Examples of such adjustments are in [9,10].

Given a random sample of m independent, identically distributed (i.i.d.) variables, each from a distribution with density f, it is feasible to carry out a nonparametric test of uniformity by estimating the pdQ with a kernel density estimator $\widehat{{f}_{m}^{*}}$ and comparing it with the uniform density on [0,1] using any one of a number of metrics or semi-metrics. Consistent estimators $\widehat{{f}_{m}^{*}}$ for ${f}^{*}$ based on normalized reciprocals of the quantile density estimators derived in [11] are available and described in (Staudte [1], Section 2). Note that such a test compares an arbitrary uniform distribution with an arbitrary member of the location-scale family generated by f; it is a test of shape only. Preliminary work suggests that such a test is feasible. However, an investigation into such omnibus nonparametric testing procedures, including comparison with bootstrap and other kernel density based techniques found in the literature, such as [12,13,14,15,16,17], is beyond the scope of this work.

The transformation $f\to {f}^{*}$ of Definition 1 is quite powerful, removing location and scale and moving the distribution from the support of f to the unit interval. A natural question of interest is to find the information in a density that is invariant after the pdQ mapping is applied. To this end, it is necessary to repeatedly apply the pdQ mapping to extract the information. Examples suggest that a second application of the transformation, ${f}^{2*}:={\left({f}^{*}\right)}^{*}$, leaves less information about f in ${f}^{2*}$, and hence it is closer to the uniform density. Furthermore, with n iterations ${f}^{(n+1)*}:={\left({f}^{n*}\right)}^{*}$ for $n\ge 2$, it seems that no information can be conserved after repeated *-transformation, so we would expect ${f}^{n*}$ to converge to the uniform density as $n\to \infty $. An R script [18] for finding repeated *-iterates of a given pdQ is available as Supplementary Material.
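For the Power$(b)$ family this iteration can be followed in closed form, since (as noted after Definition 1) each application just updates the exponent via $b\mapsto 2-1/b$. A small Python sketch of ours, standing in for the supplementary R script:

```python
def iterate_star(b, n):
    """Exponent of the n-th *-iterate of Power(b): b_{k+1} = 2 - 1/b_k."""
    for _ in range(n):
        b = 2.0 - 1.0 / b
    return b

# Starting from b0 = 2, induction gives b_n = (n + 2)/(n + 1) -> 1,
# i.e. the iterates approach the uniform density Power(1).
for n in range(1, 50):
    assert abs(iterate_star(2.0, n) - (n + 2) / (n + 1)) < 1e-9
```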

Given $f\in \mathcal{F}{\phantom{\rule{0.166667em}{0ex}}}^{\prime}$, we say that f is of *-order n if ${f}^{*},{f}^{2*},\dots ,{f}^{n*}$ exist but ${f}^{(n+1)*}$ does not. When the infinite sequence ${\{{f}^{n*}\}}_{n\ge 1}$ exists, it is said to be of infinite *-order.

For example, the Power($3/4$) family is of *-order 2, while the Power(2) family is of infinite *-order. The ${\chi}_{\nu}^{2}$ distribution is of finite *-order for $1<\nu <2$ and infinite *-order for $\nu \ge 2.$ The normal distribution is of infinite *-order.
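For Power$(b)$, tracking the *-order reduces to tracking the exponent: the pdQ of the current iterate exists while the exponent exceeds 1/2 (square integrability), and each application sends $b\mapsto 2-1/b$. A hypothetical Python helper of ours (exact rational arithmetic avoids rounding exactly at the boundary $b=1/2$):

```python
from fractions import Fraction

def power_star_order(b, max_iter=1000):
    """*-order of Power(b): count how many *-iterates exist."""
    b = Fraction(b)
    order = 0
    while order < max_iter:
        if b <= Fraction(1, 2):   # current density is not square integrable
            return order
        b = 2 - 1 / b             # exponent of the next pdQ iterate
        order += 1
    return float('inf')           # for b >= 1 the iterates never drop below 1

assert power_star_order(Fraction(3, 4)) == 2   # Power(3/4): f*, f^{2*} exist only
assert power_star_order(2) == float('inf')     # Power(2): infinite *-order
```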

We write ${\mu}_{n}:={\int}_{-\infty}^{\infty}{\{f\left(y\right)\}}^{n}\phantom{\rule{0.166667em}{0ex}}dy$, ${\kappa}_{n}={\int}_{0}^{1}{\{{f}^{n*}\left(x\right)\}}^{2}\phantom{\rule{0.166667em}{0ex}}dx$, $n\ge 1$, and ${\kappa}_{0}={\int}_{-\infty}^{\infty}{\{f\left(x\right)\}}^{2}\phantom{\rule{0.166667em}{0ex}}dx$. The next proposition characterises the property of infinite *-order.

For $f\in \mathcal{F}{\phantom{\rule{0.166667em}{0ex}}}^{\prime}$ and $m\ge 1$, the following statements are equivalent:

- (i)
- ${\mu}_{m+2}<\infty $,
- (ii)
- ${\mu}_{j}<\infty $ for all $1\le j\le m+2$,
- (iii)
- ${\kappa}_{j}<\infty $ and ${\kappa}_{j}=\frac{{\mu}_{j}{\mu}_{j+2}}{{\mu}_{j+1}^{2}}$ for all $1\le j\le m$.

In particular, f is of infinite *-order if and only if ${\mu}_{n}<\infty $, $n\ge 1$.
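The identity $\kappa_{j}=\mu_{j}\mu_{j+2}/\mu_{j+1}^{2}$ can be sanity-checked for the exponential density, where $\mu_{n}=\int e^{-nx}\,dx=1/n$; a sketch of ours in exact arithmetic:

```python
from fractions import Fraction

# Exponential f(x) = e^{-x}: mu_n = 1/n.
mu = lambda n: Fraction(1, n)

# Proposition 1: kappa_j = mu_j * mu_{j+2} / mu_{j+1}^2 = (j+1)^2 / (j(j+2)).
kappa = lambda j: mu(j) * mu(j + 2) / mu(j + 1) ** 2

# Direct values: f* = 2(1-u) gives kappa_1 = int 4(1-u)^2 du = 4/3, and
# f^{2*} = (3/2)(1-u)^{1/2} gives kappa_2 = int (9/4)(1-u) du = 9/8.
assert kappa(1) == Fraction(4, 3)
assert kappa(2) == Fraction(9, 8)
# kappa_j decreases to 1, consistent with convergence to uniformity
assert kappa(100) == Fraction(101 * 101, 100 * 102)
```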

For each $i,n\ge 1$, provided all terms below are finite, we have the recursive formula

$${\nu}_{n,i}:=\int {\{{f}^{n*}\left(x\right)\}}^{i}\,dx=\frac{1}{{\kappa}_{n-1}^{i}}\,{\nu}_{n-1,i+1},$$

giving

$${\kappa}_{n}=\frac{{\mu}_{n+2}}{{\prod}_{j=0}^{n-1}{\kappa}_{j}^{\,n+1-j}}.$$

(i) ⇒ (ii) For $1\le j\le m+2$,

$$\begin{array}{cc}\hfill {\mu}_{j}& ={\int}_{-\infty}^{\infty}{\{f\left(x\right)\}}^{j}{\mathbf{1}}_{\{f(x)>1\}}dx+{\int}_{-\infty}^{\infty}{\{f\left(x\right)\}}^{j}{\mathbf{1}}_{\{f(x)\le 1\}}dx\hfill \\ & \le {\int}_{-\infty}^{\infty}{\{f\left(x\right)\}}^{m+2}dx+{\int}_{-\infty}^{\infty}f\left(x\right)dx={\mu}_{m+2}+1<\infty .\hfill \end{array}$$

(ii) ⇒ (iii) Use (2) and proceed by induction for $1\le n\le m$.

(iii) ⇒ (i) By Definition 1, ${\kappa}_{1}<\infty $ means that ${\kappa}_{0}<\infty $. Hence, (i) follows from (2) with $n=m$. ☐

Next, we investigate the fixed-point behaviour of the *-transformation.

Let ${f}^{*}$ be a pdQ and assume ${f}^{2*}$ exists. Then, ${f}^{*}\sim \mathcal{U}$ if and only if ${f}^{2*}\sim \mathcal{U}$.

For $r>0$, we have

$${\int}_{0}^{1}{|{f}^{2*}\left(u\right)-1|}^{r}\,du=\frac{1}{{\kappa}_{1}^{r}}{\int}_{0}^{1}{|{f}^{*}\left(x\right)-{\kappa}_{1}|}^{r}\,{f}^{*}\left(x\right)\,dx.$$

If ${f}^{*}\left(u\right)\sim \mathcal{U}$, then ${\kappa}_{1}=1$ and (3) ensures ${\int}_{0}^{1}{|{f}^{2*}\left(u\right)-1|}^{r}du=0$, so ${f}^{2*}\left(u\right)\sim \mathcal{U}$.

Conversely, if ${f}^{2*}\left(u\right)\sim \mathcal{U}$, then using (3) again gives ${\int}_{0}^{1}{|{f}^{*}\left(x\right)-{\kappa}_{1}|}^{r}{f}^{*}\left(x\right)\phantom{\rule{0.166667em}{0ex}}dx=0$. Since ${f}^{*}\left(x\right)>0$ a.s., we have ${f}^{*}\left(x\right)={\kappa}_{1}$ a.s. and this can only happen when ${\kappa}_{1}=1$. Thus, ${f}^{*}\sim \mathcal{U}$, as required. ☐

Proposition 2 shows that the uniform distribution is a fixed point in the Banach space of integrable functions on [0,1] with the ${L}_{r}$ norm for any $r>0$. It remains to show that ${f}^{n*}$ has a limit and that the limit is the uniform distribution. It was hoped that the classical machinery for convergence in Banach spaces ([19], Chapter 10) would prove useful in this regard, but the *-mapping is not a contraction. For this reason, although there are many studies of fixed point theory in metric and semi-metric spaces (see, e.g., [20] and references therein), the fixed point Theorems 1, 2 and 3 shown below do not seem to be covered in these general studies. Moreover, our proofs are purely probabilistic and non-standard in this area. For simplicity, we use $\stackrel{{L}_{r}}{\longrightarrow}$ to stand for convergence in ${L}_{r}$ norm and $\stackrel{\mathbb{P}}{\longrightarrow}$ for convergence in probability as $n\to \infty $.

For $f\in \mathcal{F}{\phantom{\rule{0.166667em}{0ex}}}^{\prime}$ with infinite *-order, the following statements are equivalent:

- (i)
- ${f}^{n*}\stackrel{{L}_{2}}{\longrightarrow}1$;
- (ii)
- For all $r>0$, ${f}^{n*}\stackrel{{L}_{r}}{\longrightarrow}1$;
- (iii)
- $\frac{{\mu}_{n}{\mu}_{n+2}}{{\mu}_{n+1}^{2}}\to 1$ as $n\to \infty $.

Notice that ${\mu}_{n}=\mathbb{E}\left\{{f}^{*}{\left(U\right)}^{n-1}\right\}$, $n\ge 1$, are the moments of the random variable ${f}^{*}\left(U\right)$ with $U\sim \mathcal{U}$. Theorem 1 says that the convergence of $\{{f}^{n*}:\phantom{\rule{4pt}{0ex}}n\ge 1\}$ is purely determined by the moments of ${f}^{*}\left(U\right)$. This is rather puzzling because it is well known that the moments do not uniquely determine the distribution ([21], p. 227), meaning that different distributions of ${f}^{*}\left(U\right)$ sharing the same moments must share the same convergence behaviour. However, if f is bounded, then ${f}^{*}\left(U\right)$ is a bounded random variable, so its moments uniquely specify its distribution ([21], pp. 225–226), leading to stronger results in Theorem 2.

It is clear that (ii) implies (i).

(i) ⇒ (iii): By Proposition 1, ${\kappa}_{n}=\frac{{\mu}_{n}{\mu}_{n+2}}{{\mu}_{n+1}^{2}}$. Now,

$${\int}_{0}^{1}{\{{f}^{n*}\left(x\right)-1\}}^{2}\,dx={\kappa}_{n}-1,$$

so (iii) follows immediately.

(iii) ⇒ (ii): It suffices to show that ${f}^{n*}\stackrel{{L}_{r}}{\longrightarrow}1$ for any integer $r\ge 4$. To this end, since ${|a-b|}^{r-2}\le {a}^{r-2}+{b}^{r-2}$ for $a,b\ge 0$, we have from (4) that

$${\int}_{0}^{1}{|{f}^{n*}\left(x\right)-1|}^{r}dx\le {\int}_{0}^{1}{({f}^{n*}\left(x\right)-1)}^{2}({f}^{n*}{\left(x\right)}^{r-2}+1)dx={\nu}_{n,r}-2{\nu}_{n,r-1}+{\nu}_{n,r-2}+{\kappa}_{n}-1,$$

where, as before, ${\nu}_{n,r}={\int}_{0}^{1}{\{{f}^{n*}\left(x\right)\}}^{r}dx$. However, applying (1) gives

$${\nu}_{n,r}=\frac{{\mu}_{n+r}}{{\kappa}_{n-1}^{r}{\kappa}_{n-2}^{r+1}\cdots {\kappa}_{0}^{n+r-1}}$$

and (2) ensures

$${\mu}_{n+r}={\kappa}_{n+r-2}{\kappa}_{n+r-3}^{2}\cdots {\kappa}_{0}^{n+r-1},$$

which imply

$${\nu}_{n,r}={\kappa}_{n+r-2}{\kappa}_{n+r-3}^{2}\cdots {\kappa}_{n}^{r-1}\to 1$$

as $n\to \infty $. Hence, it follows from (5) that ${\int}_{0}^{1}{|{f}^{n*}\left(x\right)-1|}^{r}dx\to 0$ as $n\to \infty $, completing the proof. ☐

We write $\parallel g\parallel ={sup}_{x}\left|g\left(x\right)\right|$ for each bounded function g.

If f is bounded, then

- (i)
- for all $n\ge 0$, $\parallel {f}^{(n+1)*}\parallel \le \parallel {f}^{n*}\parallel $ and the inequality becomes equality if and only if ${f}^{n*}\sim \mathcal{U}$;
- (ii)
- ${f}^{n*}\stackrel{{L}_{r}}{\longrightarrow}1$ for all $r>0$.

It follows from (4) that ${\kappa}_{n}\ge 1$ and the inequality becomes equality if and only if ${f}^{n*}\sim \mathcal{U}$.

(i) Let ${Q}^{n*}$ be the inverse of the cumulative distribution function of ${f}^{n*}$, then ${f}^{(n+1)*}\left(u\right)=\frac{{f}^{n*}\left({Q}^{n*}\left(u\right)\right)}{{\kappa}_{n}}\le \frac{\parallel {f}^{n*}\parallel}{{\kappa}_{n}}$, giving $\parallel {f}^{(n+1)*}\parallel \le \frac{\parallel {f}^{n*}\parallel}{{\kappa}_{n}}\le \parallel {f}^{n*}\parallel $. If ${f}^{n*}\sim \mathcal{U}$, then Proposition 2 ensures that ${f}^{(n+1)*}\sim \mathcal{U}$, so $\parallel {f}^{(n+1)*}\parallel =\parallel {f}^{n*}\parallel $. Conversely, if $\parallel {f}^{(n+1)*}\parallel =\parallel {f}^{n*}\parallel $, then ${\kappa}_{n}=1$, so ${f}^{n*}\sim \mathcal{U}$.

(ii) It remains to show that ${\kappa}_{n}\to 1$ as $n\to \infty $. In fact, if ${\kappa}_{n}\nrightarrow 1$, since ${\kappa}_{n}\ge 1$, there exist a $\delta >0$ and a subsequence $\{{n}_{k}\}$ such that ${\kappa}_{{n}_{k}}\ge 1+\delta $, which implies

$$\frac{{\mu}_{{n}_{k}+2}}{{\mu}_{{n}_{k}+1}}=\prod _{i=0}^{{n}_{k}}{\kappa}_{i}\ge {(1+\delta )}^{k}\to \infty \quad \text{as}\ k\to \infty .$$

However, $\frac{{\mu}_{{n}_{k}+2}}{{\mu}_{{n}_{k}+1}}\le \parallel f\parallel <\infty $, which contradicts (6). ☐

For $f\in \mathcal{F}{\phantom{\rule{0.166667em}{0ex}}}^{\prime}$ of infinite *-order such that $\{{\mu}_{n}{\mu}_{n+2}{\mu}_{n+1}^{-2}:\phantom{\rule{4pt}{0ex}}n\ge 1\}$ is a bounded sequence, the following statements are equivalent:

- (i*)
- ${f}^{n*}\stackrel{\mathbb{P}}{\longrightarrow}1$;
- (ii)
- For all $r>0$, ${f}^{n*}\stackrel{{L}_{r}}{\longrightarrow}1$;
- (iii)
- ${\mu}_{n}{\mu}_{n+2}{\mu}_{n+1}^{-2}\to 1$ as $n\to \infty $.

It suffices to show that (i*) implies (iii). Recall that ${\kappa}_{n}={\mu}_{n}{\mu}_{n+2}{\mu}_{n+1}^{-2}$. For each subsequence $\{{\kappa}_{{n}_{k}}\}$, there exists a converging sub-subsequence $\{{\kappa}_{{n}_{{k}_{i}}}\}$ such that ${\kappa}_{{n}_{{k}_{i}}}\to b$ as $i\to \infty $. It remains to show that $b=1$. To this end, for $\delta >1$, we have

$$\begin{array}{c}{\int}_{0}^{1}\left|{f}^{({n}_{{k}_{i}}+1)*}\left(x\right)-1\right|{\mathbf{1}}_{\left\{\left|{f}^{({n}_{{k}_{i}}+1)*}\left(x\right)-1\right|\le \delta \right\}}dx\hfill \\ =\frac{1}{{\kappa}_{{n}_{{k}_{i}}}}{\int}_{0}^{1}\left|{f}^{\left({n}_{{k}_{i}}\right)*}\left(x\right)-{\kappa}_{{n}_{{k}_{i}}}\right|{f}^{\left({n}_{{k}_{i}}\right)*}\left(x\right){\mathbf{1}}_{\left\{\left|{f}^{\left({n}_{{k}_{i}}\right)*}\left(x\right)-{\kappa}_{{n}_{{k}_{i}}}\right|\le \delta {\kappa}_{{n}_{{k}_{i}}}\right\}}dx.\hfill \end{array}$$

(i*) ensures that

$$\left|{f}^{({n}_{{k}_{i}}+1)*}-1\right|\stackrel{\mathbb{P}}{\longrightarrow}0,\qquad {f}^{\left({n}_{{k}_{i}}\right)*}\left|{f}^{\left({n}_{{k}_{i}}\right)*}-{\kappa}_{{n}_{{k}_{i}}}\right|\stackrel{\mathbb{P}}{\longrightarrow}|1-b|,\qquad {\mathbf{1}}_{\left\{\left|{f}^{\left({n}_{{k}_{i}}\right)*}\left(x\right)-{\kappa}_{{n}_{{k}_{i}}}\right|\le \delta {\kappa}_{{n}_{{k}_{i}}}\right\}}\stackrel{\mathbb{P}}{\longrightarrow}1$$

as $i\to \infty $, so applying the bounded convergence theorem to both sides of (7) gives $0=|1/b-1|$, i.e., $b=1$. ☐

We note that not all distributions are of infinite *-order so the fixed point theorems are only applicable to a proper subclass of all distributions.

The main results in Section 3.1 cover all the standard distributions of infinite *-order in [22,23]. In fact, as observed in the remark after Theorem 1, the convergence to uniformity is purely determined by the moments of ${f}^{*}\left(U\right)$ with $U\sim \mathcal{U}$, and we have failed to construct a density such that $\{{f}^{n*}:\phantom{\rule{4pt}{0ex}}n\ge 1\}$ does not converge to the uniform distribution. Here, we give a few examples to show that the main results in Section 3.1 are indeed very convenient to use.

Power function family.

From Table 1, the Power$\left(b\right)$ family has density ${f}_{b}\left(x\right)=b{x}^{b-1},\phantom{\rule{3.33333pt}{0ex}}0<x<1$, so it is of infinite *-order if and only if $b\ge 1$. As ${f}_{b}$ is bounded for $b\ge 1$, Theorem 2 ensures that ${f}_{b}^{n*}$ converges to the uniform in ${L}_{r}$ for any $r>0$.

Exponential distribution.

Suppose $f\left(x\right)={e}^{x}$, $x<0$. Since f is bounded, Theorem 2 shows that ${f}^{n*}$ converges to the uniform distribution as $n\to \infty .$ By symmetry, the same result holds for $f\left(x\right)={e}^{-x}$, $x>0$.

Pareto distribution.

The Pareto(a) family, with $a>0$, has ${f}_{a}\left(x\right)=a{x}^{-a-1}$ for $x>1$, which is bounded, so an application of Theorem 2 yields that the sequence ${\{{f}_{a}^{n*}\}}_{n\ge 1}$ converges to the uniform distribution as $n\to \infty .$

Cauchy distribution.

The pdQ of the Cauchy density is given by ${f}^{*}\left(u\right)=2{sin}^{2}\left(\pi u\right)$, $0<u<1$, see Table 1; it retains the bell shape of f. It follows that ${F}^{*}\left(t\right)=t-sin\left(2\pi t\right)/\left(2\pi \right),$ for $0<t<1$. It seems impossible to obtain an analytical form of ${f}^{n*}$ for $n\ge 2$. However, as f is bounded, using Theorem 2, we can conclude that ${f}^{n*}$ converges to the uniform distribution as $n\to \infty $.

Skew-normal.

A skew-normal distribution [17,24] has a density of the form

$$f\left(x\right)=2\varphi \left(x\right)\mathsf{\Phi}\left(\alpha x\right),\phantom{\rule{4pt}{0ex}}x\in \mathbb{R},$$

where $\alpha \in \mathbb{R}$ is a parameter and $\varphi $ and $\mathsf{\Phi}$, as before, are the density and cdf of the standard normal distribution. When $\alpha =0$, f reduces to the standard normal, so it is possible to obtain its $\{{f}^{n*}\}$ by induction and then derive directly that ${f}^{n*}$ converges to the uniform distribution as $n\to \infty $. The general form of skew-normal densities is much harder to handle; however, the density is clearly bounded, so Theorem 2 again shows that ${f}^{n*}$ converges to the uniform distribution as $n\to \infty $.

Let $f\left(x\right)=-\ln x$, $x\in (0,1)$. Then, ${\mu}_{n}=n!$ and ${\kappa}_{n}=\frac{n+2}{n+1}\to 1$ as $n\to \infty $, so we have from Theorem 1 that, for any $r>0$, ${f}^{n*}$ converges in ${L}_{r}$ norm to the constant 1 as $n\to \infty $.
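The moments in this last example can be confirmed numerically: substituting $x=e^{-t}$ gives $\mu_{n}=\int_{0}^{\infty}t^{n}e^{-t}\,dt=n!$, and then $\kappa_{n}=\mu_{n}\mu_{n+2}/\mu_{n+1}^{2}=(n+2)/(n+1)$. A quick Python check of ours:

```python
import math

def mu(n, upper=60.0, steps=200000):
    """Midpoint-rule estimate of mu_n = integral of t^n e^{-t} over (0, inf)."""
    h = upper / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += t ** n * math.exp(-t)
    return total * h

for n in range(1, 6):
    assert abs(mu(n) - math.factorial(n)) / math.factorial(n) < 1e-4

# kappa_3 = mu_3 mu_5 / mu_4^2 = 3! * 5! / (4!)^2 = 5/4 = (3+2)/(3+1)
assert math.factorial(3) * math.factorial(5) / math.factorial(4) ** 2 == 5 / 4
```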

The pdQ transformation from a density function f to ${f}^{*}$ extracts the important information in f, such as its asymmetry and tail behaviour, and ignores the less critical information, such as gaps, location and scale; it thus provides a powerful tool for studying the shapes of density functions. We found the directed divergences from uniformity of the pdQs of many standard location-scale families and used them to make a map locating each shape family relative to others and giving its distance from uniformity. It would be of interest to find the pdQs of other shape families, such as the skew-normal of Example 5; however, a simple expression for this pdQ appears unlikely given the complicated nature of its quantile function. Nevertheless, the [25] skew-normal family should be amenable in this regard because there are explicit formulae for both its density and quantile functions. To obtain the information conserved by the pdQ transformation, we repeatedly applied the transformation and found the limiting behaviour of repeated applications of the pdQ mapping. When the density function f is bounded, we showed that each application lowers its modal height, and hence the resulting density function ${f}^{*}$ is closer to the uniform density than f. Furthermore, we established a necessary and sufficient condition for ${f}^{n*}$ to converge in ${L}_{2}$ norm to the uniform density, giving a positive answer to a conjecture raised in [1]. In particular, if f is bounded, we proved that ${f}^{n*}$ converges in ${L}_{r}$ norm to the uniform density for any $r>0$. The fixed point theorems can be interpreted as follows: as we repeatedly apply the pdQ transformation, we keep losing information about the shape of the original f and eventually exhaust it, leaving nothing in the limit, as represented by the uniform density, under which no point carries more information than any other.
Thus, the pdQ transformation plays a similar role to the difference operator in time series analysis where repeated applications of the difference operator to a time series with a polynomial component lead to a white noise with a constant power spectral density ([26], p. 19). We conjecture that every almost surely positive density g on $[0,1]$ is a pdQ of a density function, hence uniquely representing a location-scale family. This is equivalent to saying that there exists a density function f such that $g={f}^{*}$. When g satisfies ${\int}_{0}^{1}\frac{1}{g\left(t\right)}dt<\infty $, one can show that the cdf F of f can be uniquely (up to location-scale parameters) represented as $F\left(x\right)={H}^{-1}\left(H\left(1\right)x\right)$, where $H\left(x\right)={\int}_{0}^{x}\frac{1}{g\left(t\right)}dt$ (Professor A.D. Barbour, personal communication). The condition ${\int}_{0}^{1}\frac{1}{g\left(t\right)}dt<\infty $ is equivalent to saying that f has bounded support and it is certainly not necessary, e.g., $g\left(x\right)=2x$ for $x\in [0,1]$ and $f\left(x\right)={e}^{x}$ for $x<0$ (see Example 2 in Section 3.2).
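The representation $F\left(x\right)={H}^{-1}\left(H\left(1\right)x\right)$ can be illustrated end-to-end with a toy g for which ${\int}_{0}^{1}dt/g\left(t\right)$ is finite. Taking the (hypothetical) candidate $g\left(u\right)=\frac{3}{2}\sqrt{u}$, the construction returns $F\left(x\right)={x}^{2}$, i.e., Power(2), whose pdQ is indeed g; a Python sketch of ours:

```python
import math

# Candidate pdQ g(u) = (3/2) sqrt(u) on [0,1]; here H(x) = int_0^x dt/g(t)
# is finite, so the representation F(x) = H^{-1}(H(1) x) applies.
g = lambda u: 1.5 * math.sqrt(u)
H = lambda x: (4.0 / 3.0) * math.sqrt(x)      # closed form of int_0^x dt/g(t)
H_inv = lambda y: (3.0 * y / 4.0) ** 2

F = lambda x: H_inv(H(1.0) * x)               # works out to x^2, i.e. Power(2)

# Closing the loop: Power(2) has f(x) = 2x, Q(u) = sqrt(u), kappa = 4/3,
# so its pdQ is 2 sqrt(u)/(4/3) = (3/2) sqrt(u) = g(u), as conjectured.
for u in (0.1, 0.4, 0.9):
    assert abs(F(u) - u ** 2) < 1e-12
    assert abs(2.0 * math.sqrt(u) / (4.0 / 3.0) - g(u)) < 1e-12
```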

In summary, the study of shapes of probability densities is facilitated by composing them with their own quantile functions, which puts them on the same finite support, where they are absolutely continuous with respect to Lebesgue measure and thus amenable to metric and semi-metric comparisons. In addition, we showed that the intuition that further applications of this transformation reduce information and increase the relative entropy is generally valid, although the proof requires a non-standard approach. Similar results are likely to be obtainable in the multivariate case. Further research could investigate the relationship between relative entropy and tail-weight or distance from the class of symmetric pdQs.

An R script entitled StaudteXiaSupp.R, which is available online at https://www.mdpi.com/1099-4300/20/5/317/s1, enables the reader to plot successive iterates of the pdQ transformation on any standard probability distribution available in R.

The authors thank the three reviewers for their critiques and many positive suggestions. The authors also thank Peter J. Brockwell for helpful commentary on an earlier version of this manuscript. The research of Aihua Xia was supported by Australian Research Council Discovery Grant DP150101459. The authors received no funds to cover the costs of publishing in open access.

The authors declare no conflict of interest.

1. Staudte, R. The shapes of things to come: Probability density quantiles. Statistics **2017**, 51, 782–800.
2. Parzen, E. Nonparametric statistical data modeling. J. Am. Stat. Assoc. **1979**, 74, 105–131.
3. Kullback, S.; Leibler, R. On information and sufficiency. Ann. Math. Stat. **1951**, 22, 79–86.
4. Kullback, S. Information Theory and Statistics; Dover: Mineola, NY, USA, 1968.
5. Abbas, A.; Cadenbach, A.; Salimi, E. A Kullback–Leibler View of Maximum Entropy and Maximum Log-Probability Methods. Entropy **2017**, 19, 232.
6. Jeffreys, H. An invariant form for the prior probability in estimation problems. Proc. R. Soc. Lond. A **1946**, 186, 453–461.
7. Freimer, M.; Kollia, G.; Mudholkar, G.; Lin, C. A study of the generalized Tukey lambda family. Commun. Stat. Theory Methods **1988**, 17, 3547–3567.
8. Stephens, M. Uniformity, Tests of. In Encyclopedia of Statistical Sciences; John Wiley & Sons: Hoboken, NJ, USA, 2006; Volume 53, pp. 1–8.
9. Lockhart, R.; O’Reilly, F.; Stephens, M. Tests of Fit Based on Normalized Spacings. J. R. Stat. Soc. B **1986**, 48, 344–352.
10. Schader, M.; Schmid, F. Power of tests for uniformity when limits are unknown. J. Appl. Stat. **1997**, 24, 193–205.
11. Prendergast, L.; Staudte, R. Exploiting the quantile optimality ratio in finding confidence intervals for a quantile. Stat **2016**, 5, 70–81.
12. Dudewicz, E.; Van Der Meulen, E. Entropy-Based Tests of Uniformity. J. Am. Stat. Assoc. **1981**, 76, 967–974.
13. Bowman, A. Density based tests for goodness-of-fit. J. Stat. Comput. Simul. **1992**, 40, 1–13.
14. Fan, Y. Testing the Goodness of Fit of a Parametric Density Function by Kernel Method. Econ. Theory **1994**, 10, 316–356.
15. Pavia, J. Testing Goodness-of-Fit with the Kernel Density Estimator: GoFKernel. J. Stat. Softw. **2015**, 66, 1–27.
16. Noughabi, H. Entropy-based tests of uniformity: A Monte Carlo power comparison. Commun. Stat. Simul. Comput. **2017**, 46, 1266–1279.
17. Arellano-Valle, R.; Contreras-Reyes, J.; Stehlik, M. Generalized Skew-Normal Negentropy and Its Application to Fish Condition Factor Time Series. Entropy **2017**, 19, 528.
18. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2008; ISBN 3-900051-07-0.
19. Luenberger, D. Optimization by Vector Space Methods; Wiley: New York, NY, USA, 1969.
20. Bessenyei, M.; Páles, Z. A contraction principle in semimetric spaces. J. Nonlinear Convex Anal. **2017**, 18, 515–524.
21. Feller, W. An Introduction to Probability Theory and Its Applications; John Wiley & Sons: New York, NY, USA, 1971; Volume 2.
22. Johnson, N.; Kotz, S.; Balakrishnan, N. Continuous Univariate Distributions; John Wiley & Sons: New York, NY, USA, 1994; Volume 1.
23. Johnson, N.; Kotz, S.; Balakrishnan, N. Continuous Univariate Distributions; John Wiley & Sons: New York, NY, USA, 1995; Volume 2; ISBN 0-471-58494-0.
24. Azzalini, A. A Class of Distributions which Includes the Normal Ones. Scand. J. Stat. **1985**, 12, 171–178.
25. Jones, M.; Pewsey, A. Sinh-arcsinh distributions. Biometrika **2009**, 96, 761–780.
26. Brockwell, P.; Davis, R. Time Series: Theory and Methods; Springer: New York, NY, USA, 2009.

Quantile functions $Q(u)$, pdQs ${f}^{*}(u)$ and divergences from uniformity for standard location-scale families; $J(\mathcal{U},{f}^{*})=I(\mathcal{U}:{f}^{*})+I({f}^{*}:\mathcal{U})$ is the symmetrized divergence. Here ${z}_{u}$ denotes the standard normal quantile and $\varphi$ the standard normal density.

| Family | $Q(u)$ | ${f}^{*}(u)$ | $I(\mathcal{U}:{f}^{*})$ | $I({f}^{*}:\mathcal{U})$ | $J(\mathcal{U},{f}^{*})$ |
|---|---|---|---|---|---|
| Normal | ${z}_{u}$ | $2\sqrt{\pi}\,\varphi({z}_{u})$ | 0.153 | 0.097 | 0.250 |
| Logistic | $\ln(u/(1-u))$ | $6u(1-u)$ | 0.208 | 0.125 | 0.333 |
| Laplace | $\ln(2u),\ u\le 0.5$ | $2\min\{u,1-u\}$ | 0.307 | 0.193 | 0.500 |
| ${t}_{2}$ | $\frac{2u-1}{{\{2u(1-u)\}}^{1/2}}$ | $\frac{{2}^{7}{\{u(1-u)\}}^{3/2}}{3\pi}$ | 0.391 | 0.200 | 0.591 |
| Cauchy | $\tan\{\pi(u-0.5)\}$ | $2{\sin}^{2}(\pi u)$ | 0.693 | 0.307 | 1.000 |
| Exponential | $-\ln(1-u)$ | $2(1-u)$ | 0.307 | 0.193 | 0.500 |
| Gumbel | $-\ln(-\ln(u))$ | $-4u\ln(u)$ | 0.191 | 0.116 | 0.307 |
| Lognormal ($\sigma$) | ${e}^{\sigma {z}_{u}}$ | $\frac{2\sqrt{\pi}}{{e}^{{\sigma}^{2}/4}}\,\varphi({z}_{u})\,{e}^{-\sigma {z}_{u}}$ | $\frac{{\sigma}^{2}}{4}+\frac{1}{2}-\ln\sqrt{2}$ | – | $\frac{1}{4}+\frac{3{\sigma}^{2}}{8}$ |
| Pareto ($a$) | ${(1-u)}^{-1/a}$ | $\frac{2a+1}{a}\,{(1-u)}^{1+1/a}$ | $\frac{1+a}{a}-\ln(2+\frac{1}{a})$ | – | $\frac{{(1+a)}^{2}}{a(1+2a)}$ |
| Power ($b$) | ${u}^{1/b}$ | $\frac{2b-1}{b}\,{u}^{1-1/b}$ | $\frac{b-1}{b}-\ln(2-\frac{1}{b})$ | – | $\frac{{(b-1)}^{2}}{b(2b-1)}$ |
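The numeric divergence entries above can be checked directly (an illustrative Python sketch added here, assuming SciPy is available), using $I(\mathcal{U}:{f}^{*})=-{\int}_{0}^{1}\ln {f}^{*}(u)\,du$ and $I({f}^{*}:\mathcal{U})={\int}_{0}^{1}{f}^{*}(u)\ln {f}^{*}(u)\,du$:

```python
import numpy as np
from scipy.integrate import quad

# pdQs from the table with simple closed forms
pdqs = {
    "Logistic":    lambda u: 6.0 * u * (1.0 - u),
    "Exponential": lambda u: 2.0 * (1.0 - u),
    "Cauchy":      lambda u: 2.0 * np.sin(np.pi * u) ** 2,
}
results = {}
for name, fs in pdqs.items():
    I_uf = quad(lambda u: -np.log(fs(u)), 0.0, 1.0)[0]         # I(U : f*)
    I_fu = quad(lambda u: fs(u) * np.log(fs(u)), 0.0, 1.0)[0]  # I(f* : U)
    results[name] = (round(I_uf, 3), round(I_fu, 3))
print(results)
# {'Logistic': (0.208, 0.125), 'Exponential': (0.307, 0.193),
#  'Cauchy': (0.693, 0.307)}
```

The logarithmic singularities of the integrands at the endpoints are integrable, and `quad` does not evaluate at the endpoints themselves, so no special handling is needed; the Cauchy row, for instance, reproduces $I(\mathcal{U}:{f}^{*})=\ln 2\approx 0.693$ and $I({f}^{*}:\mathcal{U})=1-\ln 2\approx 0.307$.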

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).