Open Access
*Entropy* **2016**, *18*(7), 248; https://doi.org/10.3390/e18070248

Article

Cumulative Paired φ-Entropy

Department of Statistics and Econometrics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Lange Gasse, Nürnberg 90403, Germany

^{*} Author to whom correspondence should be addressed.

^{†} These authors contributed equally to this work.

Academic Editor: Adom Giffin

Received: 13 April 2016 / Accepted: 24 June 2016 / Published: 1 July 2016

## Abstract


A new kind of entropy will be introduced which generalizes both the differential entropy and the cumulative (residual) entropy. The generalization is twofold. First, we simultaneously define the entropy for cumulative distribution functions (cdfs) and survivor functions (sfs), instead of defining it separately for densities, cdfs, or sfs. Secondly, we consider a general “entropy generating function” φ, the same way Burbea et al. (IEEE Trans. Inf. Theory 1982, 28, 489–495) and Liese et al. (Convex Statistical Distances; Teubner-Verlag, 1987) did in the context of φ-divergences. Combining the ideas of φ-entropy and cumulative entropy leads to the new “cumulative paired φ-entropy” ($CP{E}_{\phi}$). This new entropy has already been discussed in at least four scientific disciplines, be it with certain modifications or simplifications. In the fuzzy set theory, for example, cumulative paired φ-entropies were defined for membership functions, whereas in uncertainty and reliability theories some variations of $CP{E}_{\phi}$ were recently considered as measures of information. With a single exception, the discussions in these disciplines appear to have been held independently of each other. We consider $CP{E}_{\phi}$ for continuous cdfs and show that $CP{E}_{\phi}$ is a measure of dispersion rather than a measure of information. This will be demonstrated first by deriving an upper bound which is determined by the standard deviation and by solving the maximum entropy problem under the restriction of a fixed variance. Next, this paper specifically shows that $CP{E}_{\phi}$ satisfies the axioms of a dispersion measure. The corresponding dispersion functional can easily be estimated by an L-estimator, with all of its known asymptotic properties. $CP{E}_{\phi}$ is the basis for several related concepts like mutual φ-information, φ-correlation, and φ-regression, which generalize Gini correlation and Gini regression. 
In addition, linear rank tests for scale that are based on the new entropy have been developed. We show that almost all known linear rank tests are special cases, and we introduce certain new tests. Moreover, for distributions whose cdf is available in closed form, explicit formulas for $CP{E}_{\phi}$ are presented.

Keywords: φ-entropy; absolute mean deviation; cumulative residual entropy; measure of dispersion; generalized maximum entropy principle; Tukey’s λ distribution; φ-regression; L-estimator; linear rank test

## 1. Introduction

The φ-entropy

$${E}_{\phi}\left(F\right)={\int}_{\mathbb{R}}\phi \left(f\left(x\right)\right)dx,\tag{1}$$

where f is a probability density function and φ is a strictly concave function, was introduced by [1]. If we set $\phi \left(u\right)=-u\mathrm{ln}u$, $u\in [0,1]$, we get Shannon’s differential entropy as the most prominent special case.

Shannon et al. [2] derived the “entropy power fraction” and showed that there is a close relationship between Shannon entropy and variance. In [3], it was demonstrated that Shannon’s differential entropy satisfies an ordering of scale and thus is a proper measure of scale (MOS). Recently, the discussion in [4] has shown that entropies can be interpreted as measures of dispersion. In the discrete case, minimal Shannon entropy means maximal certainty about the random outcome of an experiment. A degenerate distribution minimizes the Shannon entropy as well as the variance of a discrete quantitative random variable. For such a degenerate distribution, Shannon entropy and variance both take the value 0. However, there is an important difference between the differential entropy and the variance when discussing a continuous random variable with support $[a,b]$. The differential entropy is maximized by a uniform distribution over $[a,b]$, while the variance is maximal if both interval bounds a and b carry a probability mass of $0.5$ (cf. [5]). A similar result holds for a discrete random variable with a finite number of realizations. Therefore, it is doubtful that Equation (1) is a true measure of dispersion.

We propose to define the φ-entropy for cumulative distribution functions (cdfs) F and survivor functions (sfs) $1-F$ instead of for density functions f. Throughout the paper, we define $\overline{F}:=1-F$. By applying this modification we get

$$CP{E}_{\phi}\left(F\right)={\int}_{\mathbb{R}}\phi \left(F\left(x\right)\right)+\phi \left(\overline{F}\left(x\right)\right)dx,\tag{2}$$

where the cdf F is absolutely continuous, $CPE$ stands for “cumulative paired entropy”, and φ is the “entropy generating function” defined on $[0,1]$ with $\phi \left(0\right)=\phi \left(1\right)=0$. We will assume that φ is concave on $[0,1]$ throughout most of this paper. In particular, we will show that Equation (2) satisfies a popular ordering of scale and, if the domain is an interval $[a,b]$, attains its maximum when a and b each occur with probability $1/2$. This means that Equation (2) behaves like a proper measure of dispersion.
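As a quick numerical illustration of our own (not part of the paper’s derivation), the integral defining $CP{E}_{\phi}$ can be approximated by simple quadrature whenever the cdf is available in closed form; all function names below are ours. For the Shannon case, the uniform distribution on $[0,1]$ gives exactly $1/2$, and for the standard logistic cdf the substitution $u=F\left(x\right)$ (with $dx=du/\left(u(1-u)\right)$) gives $CP{E}_{S}={\int}_{0}^{1}\frac{-u\mathrm{ln}u-(1-u)\mathrm{ln}(1-u)}{u(1-u)}du={\pi}^{2}/3$.

```python
import math

def cpe(phi, cdf, lo, hi, n=200000):
    """Approximate CPE_phi(F) = ∫ phi(F(x)) + phi(1 - F(x)) dx by the midpoint rule."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        u = cdf(x)
        total += phi(u) + phi(1.0 - u)
    return total * h

def phi_shannon(u):
    # Shannon generating function -u ln u, extended continuously by phi(0) = 0
    return -u * math.log(u) if u > 0.0 else 0.0

# Uniform on [0,1]: F(x) = x, exact value 1/2
val_unif = cpe(phi_shannon, lambda x: x, 0.0, 1.0)

# Standard logistic: F(x) = 1/(1+e^{-x}); the tails beyond ±40 are negligible
val_logis = cpe(phi_shannon, lambda x: 1.0 / (1.0 + math.exp(-x)), -40.0, 40.0)
```

The truncation at ±40 is a numerical convenience; the integrand decays like $x{e}^{-x}$ in the tails.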

In addition, we generalize results from the literature, focusing on the Shannon case with $\phi \left(u\right)=-u\mathrm{ln}u$, $u\in [0,1]$ (cf. [6]), the cumulative residual entropy

$$CRE\left(F\right)=-{\int}_{{\mathbb{R}}^{+}}\overline{F}\left(x\right)\mathrm{ln}\overline{F}\left(x\right)dx\tag{3}$$

(cf. [7]), and the cumulative entropy

$$CE\left(F\right)=-{\int}_{\mathbb{R}}F\left(x\right)\mathrm{ln}F\left(x\right)dx\tag{4}$$

(cf. [8,9]). In the literature, these entropies are interpreted as measures of information rather than of dispersion, without any clarification on what kind of information is considered.
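As a sanity check of our own (not from the paper): for an exponential distribution with rate λ, $\overline{F}\left(x\right)={e}^{-\lambda x}$ gives $CRE\left(F\right)=\lambda {\int}_{0}^{\infty}x{e}^{-\lambda x}dx=1/\lambda $, i.e., the mean. A minimal numerical confirmation (function names are ours):

```python
import math

def cre(sf, hi, n=200000):
    """Approximate CRE(F) = -∫_0^∞ sf(x) ln(sf(x)) dx by the midpoint rule on [0, hi]."""
    h = hi / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        s = sf(x)
        if 0.0 < s < 1.0:          # the integrand vanishes at s = 0 and s = 1
            total += -s * math.log(s)
    return total * h

lam = 2.0
val = cre(lambda x: math.exp(-lam * x), hi=30.0)   # tail beyond 30 is negligible for λ = 2
```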

A first general aim of this paper is to show that entropies are better interpreted as measures of dispersion than as measures of information. A second general aim is to demonstrate that the entropy generating function φ, the weight function J in L-estimation, the dispersion function d which serves as a criterion for minimization in robust rank regression, and the scores-generating function ${\varphi}_{1}$ are closely related.

Specific aims of this paper are:

- To show that the cdf-based entropy Equation (2) originates in several distinct scientific areas.
- To demonstrate the close relationship between Equation (2) and the standard deviation.
- To derive maximum entropy (ME) distributions under simple and more complex restrictions and to show that commonly known as well as new distributions solve the ME principle.
- To derive the entropy maximized by a given distribution under certain restrictions.
- To formally prove that Equation (2) is a measure of dispersion.
- To propose an L-estimator for Equation (2) and derive its asymptotic properties.
- To use Equation (2) in order to obtain new related concepts measuring the dependence of random variables (such as mutual φ-information, φ-correlation, and φ-regression).
- To apply Equation (2) to get new linear rank tests for the comparison of scale.

The paper is structured in the same order as these aims. After this introduction, in the second section we give a short review of the literature concerned with Equation (2) or related measures. The third section begins by summarizing reasons for defining entropies for cdfs and sfs instead of defining them for densities. Next, some equivalent characterizations of Equation (2) are given, provided the derivative of φ exists. In the fourth section, we use the Cauchy–Schwarz inequality to derive an upper bound for Equation (2), which provides sufficient conditions for the existence of $CPE$. In addition, more stringent conditions for the existence are proven directly. In the fifth section, the Cauchy–Schwarz inequality allows us to derive ME distributions if the variance is fixed. For more complicated restrictions we obtain ME distributions by solving the Euler–Lagrange conditions. Following the generalized ME principle (cf. [10]), we change the perspective and ask which entropy is maximized if the variance and the population’s distribution are fixed. The sixth section is of key importance because the properties of Equation (2) as a measure of dispersion are analyzed in detail. We show that Equation (2) satisfies an often applied ordering of scale by [3], is invariant with respect to translations, and is equivariant with respect to scale transformations. Additionally, we provide certain results concerning the sum of independent random variables. In the seventh section, we propose an L-estimator for $CP{E}_{\phi}$. Some basic properties of this estimator, like its influence function, consistency, and asymptotic normality, are shown. In the eighth section, we introduce several new statistical concepts based on $CP{E}_{\phi}$, which generalize divergence, mutual information, Gini correlation, and Gini regression. Additionally, we show that new linear rank tests for dispersion can be based on $CP{E}_{\phi}$. 
The known linear rank tests, like the Mood or the Ansari–Bradley tests, are special cases of this general approach. However, in this paper we exclude most of the technical details, for they will be presented in several accompanying papers. In the last section, we compute Equation (2) for certain generating functions φ and some selected families of distributions.

## 2. State of the Art—An Overview

Entropies are usually defined on the simplex of probability vectors, which sum up to one (cf. [2,11]). Until now, it has been rather unusual to calculate the Shannon entropy not for vectors of probabilities or probability density functions f, but for distribution functions F. The corresponding Shannon entropy is given by

$$CP{E}_{S}\left(F\right)=-{\int}_{\mathbb{R}}F\left(x\right)\mathrm{ln}F\left(x\right)+\overline{F}\left(x\right)\mathrm{ln}\overline{F}\left(x\right)dx.\tag{5}$$

Nevertheless, we have identified five scientific disciplines directly or implicitly working with an entropy based on distribution functions or survivor functions:

- Fuzzy set theory,
- Generalized ME principle,
- Theory of dispersion of ordered categorical variables,
- Uncertainty theory,
- Reliability theory.

#### 2.1. Fuzzy Set Theory

To the best of our knowledge, Equation (5) was initially introduced by [12]. However, they did not consider the entropy for a cdf F. Instead, they were concerned with a so-called membership function ${\mu}_{A}$ that quantifies the degree to which a certain element x of a set Ω belongs to a subset $A\subseteq \Omega $. Membership functions were introduced by [13] within the framework of the “fuzzy set theory”.

It is important to note that if all elements of Ω are mapped to the value $1/2$, maximum uncertainty about x belonging to a set A will be attained.

This main property is one of the axioms of membership functions. In the aftermath of [12] numerous modifications to the term “entropy” have been made and axiomatizations of the membership functions have been stated (see, e.g., the overview in [14]).

Those modifications proceeded in parallel with a long history of extensions and parametrizations of the term entropy for probability vectors and densities, beginning with [15] and leading up to [16,17], who provided a superstructure of those generalizations consisting of a very general form of entropy, which includes the φ-entropy in Equation (1) as a special case. Burbea et al. [1] introduced the term φ-entropy. If both $\phi \left(x\right)$ and $\phi (1-x)$ appear in the entropy, as in the Fermi–Dirac entropy (cf. [18], p. 191), they used the term “paired” φ-entropy.

#### 2.2. Generalized Maximum Entropy Principle

Independently of the debate in the fuzzy set theory and the theory of measurement of dispersion, Kapur [10] showed that a growth model with a logistic growth rate arises as the solution of maximizing Equation (5) under two simple constraints. This provides an example of the “generalized maximum entropy principle” postulated by Kesavan et al. [19]. The simple ME principle introduced by [20,21] derives a distribution which maximizes an entropy given certain constraints. The generalization of [19] consists of determining the φ-entropy which is maximized, given a distribution and some constraints. They used a slightly modified version of Equation (5), in which the cdf is replaced by a monotonically increasing function with logistic shape.

#### 2.3. Theory of Dispersion

Irrespective of the discussion on membership functions in the fuzzy set theory and the proposals for generalizing the Shannon entropy, Leik [22] discussed a measure of dispersion for ordered categorical variables with a finite number k of categories ${x}_{1}<{x}_{2}<\dots <{x}_{k}$. His measure is based on the distance between the $(k-1)$-dimensional vectors of cumulated frequencies $({F}_{1},{F}_{2},\dots ,{F}_{k-1})$ and $(1/2,1/2,\dots ,1/2)$. Both vectors coincide only if the extreme categories ${x}_{1}$ and ${x}_{k}$ appear with the same frequency. This represents the case of maximal dispersion. Consider

$$CP{E}_{\phi}\left(F\right)=\sum _{i=1}^{k-1}\left(\phi \left({F}_{i}\right)+\phi (1-{F}_{i})\right)\tag{6}$$

as the discrete version of Equation (2). Setting $\phi \left(u\right)=min\{u,1-u\}$, we obtain Leik’s measure as a special case of Equation (6) up to a change of sign. Vogel et al. [23] considered $\phi \left(u\right)=-u\mathrm{ln}\left(u\right)$ and the Shannon variant of Equation (6) as a measure of dispersion for ordered categorical variables. Numerous modifications of Leik’s measure of dispersion have been published. In [24,25,26,27,28,29], the authors implicitly used $\phi \left(u\right)=1/4-{(u-1/2)}^{2}$ or, equivalently, $\phi \left(u\right)=u(1-u)$. Most of this discussion was conducted in the journal “Perceptual and Motor Skills”. For a recent overview of measuring dispersion, including ordered categorical variables, see, e.g., [30]. Instead of dispersion, some articles are concerned with related concepts for ordered categorical variables, like bipolarization and inequality (cf. [31,32,33,34,35]). A class of measures of dispersion for ordered categorical variables with a finite number of categories that is similar to Equation (6) was introduced by Klein [36] and Yager [37] independently of each other; both were apparently unaware of the discussion in “Perceptual and Motor Skills”. Both authors gave axiomatizations describing which functions φ are appropriate for measuring dispersion. However, at least Yager [37] recognized the close relationship between those measures and the general term “entropy” in the fuzzy set theory. He introduced the term “dissonance” to characterize measures of dispersion for ordered categorical variables more precisely. In the language of information theory, maximum dissonance describes an extreme case in which there is still some information, but this information is extremely contradictory. 
As an example, in the field of product evaluation we could ask to what degree information stating that 50 percent of the recommendations are extremely good while, at the same time, 50 percent are extremely bad is useful for making a purchase decision. This is an important difference to the Shannon entropy, which is maximal if there is no information at all, i.e., all categories occur with the same probability.
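To make the discrete version concrete, here is a small sketch of our own: with $\phi \left(u\right)=min\{u,1-u\}$ the sum is maximal when all probability mass sits on the two extreme categories and zero for a degenerate distribution (function names are ours).

```python
def discrete_cpe(probs, phi):
    """Discrete CPE_phi for an ordered categorical distribution: the sum over the
    k-1 interior cumulative frequencies F_i of phi(F_i) + phi(1 - F_i)."""
    total, F = 0.0, 0.0
    for p in probs[:-1]:          # accumulate F_1, ..., F_{k-1}
        F += p
        total += phi(F) + phi(1.0 - F)
    return total

phi_leik = lambda u: min(u, 1.0 - u)

extreme = discrete_cpe([0.5, 0.0, 0.0, 0.5], phi_leik)   # all mass on the end categories
point   = discrete_cpe([0.0, 1.0, 0.0, 0.0], phi_leik)   # degenerate distribution
```

For k = 4 categories the extreme case yields $3.0$ (every interior ${F}_{i}=1/2$) and the degenerate case yields $0.0$.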

Bowden [38] defines the location entropy function $h\left(x\right)=-\left(F\left(x\right)\mathrm{ln}F\left(x\right)+\overline{F}\left(x\right)\mathrm{ln}\overline{F}\left(x\right)\right)$ for a given value of x. He emphasizes the possibility of constructing measures of spread and symmetry based on this function. To the best of our knowledge, Bowden [38] is so far the only one to mention the application of the cumulative paired Shannon entropy to continuous distributions.

#### 2.4. Uncertainty Theory

Liu ([6], first edition 2004) can be considered the founder of the uncertainty theory. This theory is concerned with formalizing data consisting of expert opinions rather than data gathered by repeating a random experiment. Liu slightly modified the Kolmogoroff axioms of probability theory to obtain an uncertainty measure, following which he defined uncertain variables, uncertainty distribution functions, and moments of uncertain variables. Liu argued that “an event is the most uncertain if its uncertainty measure is $0.5$, because the event and its complement may be regarded as ‘equally likely’ ” ([6], p. 14). Liu’s maximum uncertainty principle states: “For any event, if there are multiple reasonable values that an uncertain measure may take, the value as close to $0.5$ as possible is assigned to the event” ([6], p. 14). Similar to the fuzzy set theory, the distance between the uncertainty distribution and the value $0.5$ can be measured by the Shannon-type entropy Equation (5). Apparently for the first time in the third edition of 2010, Liu explicitly calculated Equation (5) for several distributions (e.g., the logistic distribution) and derived upper bounds. He applied the ME principle to uncertainty distributions. The preferred constraint is to predetermine the values of mean and variance ([6], p. 83ff.). In this case, the logistic distribution maximizes Equation (5). In this context, the logistic distribution plays the same role in uncertainty theory as the Gaussian distribution in probability theory: the Gaussian distribution maximizes the differential entropy, given values for mean and variance. Therefore, in uncertainty theory the logistic distribution is called the “normal distribution”. The authors of [39] provided Equation (5) as a function of the quantile function.
In addition to that, the authors of [40] chose $\phi \left(u\right)=u(1-u)$, $u\in [0,1]$, as entropy generating function and derived the ME distribution as a discrete uniform distribution, which is concentrated on the endpoints of the compact domain $[a,b]$ if no further restrictions are assumed. Popoviciu [5] attained the same distribution by maximizing the variance. Chen et al. [41] introduced cross entropies and divergence measures based on general functions φ. Further literature on this topic is provided by [42,43,44].

#### 2.5. Reliability Theory

Entropies also play a prominent role in reliability theory. They were initially introduced in the context of hazard rates and residual lifetime distributions (cf. [45]). The authors of [46,47] introduced the cumulative residual entropy Equation (3), discussed its properties, and derived the exponential and the Weibull distribution via an ME principle, given the coefficient of variation. This work went into detail on the advantages of defining an entropy via survivor functions instead of probability density functions. Rao et al. [46] refer to the extensive criticism of the differential entropy by [48]. Moreover, Zografos et al. [49] generalized the Shannon-type cumulative residual entropy to an entropy of the Rényi type. Furthermore, Drissi et al. [50] considered random variables with general support. They also presented solutions for the maximization of Equation (3), provided that more general restrictions are considered. Similar to [51], they identified the logistic distribution as the ME distribution, given mean, variance, and a symmetric form of the distribution function.

Di Crescenzo et al. [9] analyzed Equation (4) for cdfs and discussed its stochastic properties. Sunoj et al. [52] plugged the quantile function into the Shannon-type entropy Equation (4) and presented expressions for cases in which the quantile function, but not the cdf, possesses a closed form. In recent papers, an empirical version of Equation (3) is used as a goodness-of-fit test (cf. [53]).

Additionally, $CRE$ and $CE$ are applied to the distribution function of the residual lifetime $(X-t|X>t)$ and the inactivity time $(t-X|X<t)$ (cf. [54]). This can directly be generalized to the $CPE$ framework.

Moreover, Psarrakos et al. [55] provide an interesting alternative generalization of the Shannon case. In this paper, we focus on the class of concave functions φ; special extensions to non-concave functions will be the subject of future research.

This brief overview shows that several disciplines have arrived at an entropy based on distribution functions. The contributions from the fuzzy set theory, the uncertainty theory, and the reliability theory have the exclusive consideration of continuous random variables in common. The discussions about entropy in reliability theory, on the one hand, and in fuzzy set theory and uncertainty theory, on the other hand, were conducted independently of each other, without even noticing the results of the other disciplines. However, Liu’s uncertainty theory benefits from the discussion in the fuzzy set theory. In the theory of dispersion of ordered categorical variables, the authors do not appear to be aware of their implicit use of a concept of entropy. Nevertheless, the situation there is somewhat different from that of the other areas, since only discrete variables were discussed. Kiesl’s dissertation [56] provides a theory of measures of the form of Equation (6) with numerous applications. However, an intensive discussion of Equation (2) is missing and will be provided here.

## 3. Cumulative Paired φ-Entropy for Continuous Variables

#### 3.1. Definition

We focus on absolutely continuous cdfs F with density functions f. The set of all such distribution functions is denoted $\mathcal{F}$. We call a function φ an “entropy generating function” if it is non-negative and concave on the domain $[0,1]$ with $\phi \left(0\right)=\phi \left(1\right)=0$. In this case, $\phi \left(u\right)+\phi (1-u)$ is symmetric with respect to $u=1/2$.

**Definition 1.**

The functional $CP{E}_{\phi}:\mathcal{F}\to {\mathbb{R}}_{0}^{+}$ with

$$CP{E}_{\phi}\left(F\right)={\int}_{\mathbb{R}}\phi \left(F\left(x\right)\right)+\phi \left(\overline{F}\left(x\right)\right)dx\tag{7}$$

is called the cumulative paired φ-entropy for $F\in \mathcal{F}$ with entropy generating function φ.

So far, we have assumed the existence of $CP{E}_{\phi}$. In the following section, we will discuss some sufficient criteria ensuring the existence of $CP{E}_{\phi}$. If X is a random variable with cdf F, we occasionally use the notation $CP{E}_{\phi}\left(X\right)$ instead.

Next, some examples of well established concave entropy generating functions φ and corresponding cumulative paired φ-entropies will be given.

- Cumulative paired α-entropy $CP{E}_{\alpha}$: Following [57], let φ be given by$$\phi \left(u\right)=u\frac{{u}^{\alpha -1}-1}{1-\alpha},\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}u\in [0,1],$$with $\alpha \ne 1$, which leads to$$CP{E}_{\alpha}\left(F\right)={\int}_{\mathbb{R}}F\left(x\right)\frac{F{\left(x\right)}^{\alpha -1}-1}{1-\alpha}+\overline{F}\left(x\right)\frac{\overline{F}{\left(x\right)}^{\alpha -1}-1}{1-\alpha}dx.$$
- Cumulative paired Gini entropy $CP{E}_{G}$: For $\alpha =2$ we get$$CP{E}_{G}\left(F\right)=2{\int}_{\mathbb{R}}F\left(x\right)\overline{F}\left(x\right)dx.$$
- Cumulative paired Shannon entropy $CP{E}_{S}$: Setting $\phi \left(u\right)=-u\mathrm{ln}u$, $\phantom{\rule{3.33333pt}{0ex}}u\in [0,1]$, we obtain$$CP{E}_{S}\left(F\right)=-{\int}_{\mathbb{R}}F\left(x\right)\mathrm{ln}F\left(x\right)+\overline{F}\left(x\right)\mathrm{ln}\overline{F}\left(x\right)dx.$$
- Cumulative paired Leik entropy $CP{E}_{L}$: The function$$\phi \left(u\right)=min\{u,1-u\}=\frac{1}{2}-\left|u-\frac{1}{2}\right|,\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}u\in [0,1],$$leads to$$CP{E}_{L}\left(F\right)=2{\int}_{\mathbb{R}}min\{F\left(x\right),\overline{F}\left(x\right)\}dx.$$

Figure 1 gives an impression of the previously mentioned generating functions φ.
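For the uniform distribution on $[0,1]$, where $F\left(x\right)=x$, the entropies above can be evaluated exactly: $CP{E}_{G}=2{\int}_{0}^{1}x(1-x)dx=1/3$, $CP{E}_{S}=2{\int}_{0}^{1}-x\mathrm{ln}x\phantom{\rule{4pt}{0ex}}dx=1/2$, and $CP{E}_{L}=2{\int}_{0}^{1}min\{x,1-x\}dx=1/2$. A brief numerical confirmation (our own sketch; function names are ours):

```python
import math

def cpe_uniform01(phi, n=100000):
    """Midpoint-rule approximation of ∫_0^1 phi(u) + phi(1-u) du (F(x) = x on [0,1])."""
    h = 1.0 / n
    return sum(phi((i + 0.5) * h) + phi(1.0 - (i + 0.5) * h) for i in range(n)) * h

gini    = cpe_uniform01(lambda u: u * (1.0 - u))                        # exact: 1/3
shannon = cpe_uniform01(lambda u: -u * math.log(u) if u > 0 else 0.0)   # exact: 1/2
leik    = cpe_uniform01(lambda u: min(u, 1.0 - u))                      # exact: 1/2
```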

#### 3.2. Advantages of Entropies Based on Cdfs

The authors of [46,47] list several reasons why it is better to define an entropy for distribution functions rather than for density functions. The starting point is the well-known critique of Shannon’s differential entropy $-\int f\left(x\right)\mathrm{ln}f\left(x\right)dx$ expressed by several authors, such as [48,58] and [59] (p. 58f).

Transferred to cumulative paired entropies, the advantages of entropies based on distribution functions (cf. [46]) are as follows:

- $CP{E}_{\phi}$ is based on probabilities and has a consistent definition for both discrete and continuous random variables.
- $CP{E}_{\phi}$ is always non-negative.
- $CP{E}_{\phi}$ can easily be estimated by the empirical distribution function. This estimation is strongly consistent, due to the strong consistency of the empirical distribution function.

Problems of the differential entropy are occasionally discussed in the case of grouped data, where the usual Shannon entropy is calculated from the group probabilities. With an increasing number of groups, the Shannon entropy not only fails to converge to the respective differential entropy, it even diverges (cf., e.g., [59] (p. 54), [60] (p. 239)). In the next section we will show that the discrete version of $CP{E}_{\phi}$ converges to $CP{E}_{\phi}$ as the number of groups approaches infinity.
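The third advantage above can be made concrete: plugging the empirical cdf ${\hat{F}}_{n}$ (a step function with value $i/n$ on $[{x}_{(i)},{x}_{(i+1)})$) into the definition of $CP{E}_{\phi}$ turns the integral into a finite sum over the gaps between order statistics. A minimal plug-in sketch of our own (the paper’s L-estimator and its asymptotic theory come later; names are ours); the estimate inherits the translation invariance and scale equivariance discussed below:

```python
import math, random

def phi_shannon(u):
    return -u * math.log(u) if u > 0.0 else 0.0

def cpe_empirical(sample, phi):
    """Plug-in estimate of CPE_phi: the empirical cdf equals i/n between the i-th
    and (i+1)-th order statistics, so the integral becomes a sum over those gaps."""
    xs = sorted(sample)
    n = len(xs)
    total = 0.0
    for i in range(1, n):
        u = i / n
        total += (phi(u) + phi(1.0 - u)) * (xs[i] - xs[i - 1])
    return total

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(2000)]
est = cpe_empirical(data, phi_shannon)
# Shifting the sample leaves the estimate unchanged (translation invariance)
shifted = cpe_empirical([x + 10.0 for x in data], phi_shannon)
```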

#### 3.3. $CP{E}_{\phi}$ for Grouped Data

First, we introduce the notation for grouped data. The interval $[{\tilde{x}}_{0},{\tilde{x}}_{k}]$ is divided into k subintervals with limits ${\tilde{x}}_{0}<{\tilde{x}}_{1}<...<{\tilde{x}}_{k-1}<{\tilde{x}}_{k}$. The width of group i is $\mathsf{\Delta}{x}_{i}={\tilde{x}}_{i}-{\tilde{x}}_{i-1}$ for $i=1,2,...,k$. Let X be a random variable with absolutely continuous distribution function F, which is only known at the limits of each group. The probabilities of the groups are denoted by ${p}_{i}=F\left({\tilde{x}}_{i}\right)-F\left({\tilde{x}}_{i-1}\right)$, $i=1,2,...,k$. ${X}^{*}$ is the random variable whose distribution function ${F}^{*}$ is obtained by linear interpolation of the values of F at the limits of successive groups; equivalently, ${X}^{*}$ results from adding an independent, uniformly distributed random variable to X. It holds that

$${F}^{*}\left(x\right)=F\left({\tilde{x}}_{i-1}\right)+\frac{{p}_{i}}{\mathsf{\Delta}{x}_{i}}(x-{\tilde{x}}_{i-1})\phantom{\rule{1em}{0ex}}\mathrm{if}\phantom{\rule{4pt}{0ex}}{\tilde{x}}_{i-1}<x\le {\tilde{x}}_{i},\tag{12}$$

with ${F}^{*}\left(x\right)=0$ for $x\le {\tilde{x}}_{0}$ and ${F}^{*}\left(x\right)=1$ for $x>{\tilde{x}}_{k}$. The probability density function ${f}^{*}$ of ${X}^{*}$ is given by ${f}^{*}\left(x\right)={p}_{i}/\mathsf{\Delta}{x}_{i}$ for ${\tilde{x}}_{i-1}<x\le {\tilde{x}}_{i}$, $i=1,2,...,k$.

**Lemma 1.**

Let φ be an entropy generating function with antiderivative ${S}_{\phi}$. The cumulative paired φ-entropy of the distribution function in Equation (12) is given as follows:

$$CP{E}_{\phi}\left({X}^{*}\right)={\displaystyle \sum _{i=1}^{k}}\frac{\mathsf{\Delta}{x}_{i}}{{p}_{i}}\left({S}_{\phi}\left(F\left({\tilde{x}}_{i}\right)\right)-{S}_{\phi}\left(F\left({\tilde{x}}_{i-1}\right)\right)+{S}_{\phi}\left(\overline{F}\left({\tilde{x}}_{i-1}\right)\right)-{S}_{\phi}\left(\overline{F}\left({\tilde{x}}_{i}\right)\right)\right).\tag{13}$$

**Proof.**

For $x\in ({\tilde{x}}_{i-1},{\tilde{x}}_{i}]$, we have

$${F}^{*}\left(x\right)={a}_{i}+{b}_{i}x\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\mathrm{with}\phantom{\rule{0.277778em}{0ex}}{b}_{i}=\frac{{p}_{i}}{\mathsf{\Delta}{x}_{i}}\phantom{\rule{0.277778em}{0ex}}\mathrm{and}\phantom{\rule{0.277778em}{0ex}}{a}_{i}=F\left({\tilde{x}}_{i-1}\right)-{b}_{i}{\tilde{x}}_{i-1},$$

so that ${a}_{i}+{b}_{i}{\tilde{x}}_{i-1}=F\left({\tilde{x}}_{i-1}\right)$, ${a}_{i}+{b}_{i}{\tilde{x}}_{i}=F\left({\tilde{x}}_{i}\right)$, $1-{a}_{i}-{b}_{i}{\tilde{x}}_{i-1}=\overline{F}\left({\tilde{x}}_{i-1}\right)$, and $1-{a}_{i}-{b}_{i}{\tilde{x}}_{i}=\overline{F}\left({\tilde{x}}_{i}\right)$, $i=1,2,\dots ,k$. Substituting $y={a}_{i}+{b}_{i}x$ with $dx=\left(1/{b}_{i}\right)dy$ yields

$$\begin{array}{ccc}\hfill CP{E}_{\phi}\left({X}^{*}\right)& =& \sum _{i=1}^{k}{\int}_{{\tilde{x}}_{i-1}}^{{\tilde{x}}_{i}}\phi ({a}_{i}+{b}_{i}x)+\phi (1-{a}_{i}-{b}_{i}x)dx\hfill \\ & =& \sum _{i=1}^{k}\frac{1}{{b}_{i}}{\int}_{F\left({\tilde{x}}_{i-1}\right)}^{F\left({\tilde{x}}_{i}\right)}\phi \left(y\right)+\phi (1-y)dy=\sum _{i=1}^{k}\frac{\mathsf{\Delta}{x}_{i}}{{p}_{i}}\left({\int}_{F\left({\tilde{x}}_{i-1}\right)}^{F\left({\tilde{x}}_{i}\right)}\phi \left(y\right)dy-{\int}_{\overline{F}\left({\tilde{x}}_{i-1}\right)}^{\overline{F}\left({\tilde{x}}_{i}\right)}\phi \left(y\right)dy\right)\hfill \\ & =& \sum _{i=1}^{k}\frac{\mathsf{\Delta}{x}_{i}}{{p}_{i}}\left({\int}_{F\left({\tilde{x}}_{i-1}\right)}^{F\left({\tilde{x}}_{i}\right)}\phi \left(y\right)dy+{\int}_{\overline{F}\left({\tilde{x}}_{i}\right)}^{\overline{F}\left({\tilde{x}}_{i-1}\right)}\phi \left(y\right)dy\right)\hfill \\ & =& \sum _{i=1}^{k}\frac{\mathsf{\Delta}{x}_{i}}{{p}_{i}}\left({S}_{\phi}\left(F\left({\tilde{x}}_{i}\right)\right)-{S}_{\phi}\left(F\left({\tilde{x}}_{i-1}\right)\right)\right.\left.+{S}_{\phi}\left(\overline{F}\left({\tilde{x}}_{i-1}\right)\right)-{S}_{\phi}\left(\overline{F}\left({\tilde{x}}_{i}\right)\right)\right).\hfill \end{array}$$

☐
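Lemma 1 can be verified numerically. The sketch below (our own; the group limits and cdf values are hypothetical) uses $\phi \left(u\right)=u(1-u)$ with antiderivative ${S}_{\phi}\left(u\right)={u}^{2}/2-{u}^{3}/3$ and compares the closed form with a direct quadrature of $\phi \left({F}^{*}\right)+\phi \left(1-{F}^{*}\right)$:

```python
def lemma1_cpe(breaks, Fvals, S):
    """Closed-form CPE_phi(X*) from Lemma 1: sum over groups of
    (Δx_i / p_i) * (S(F_i) - S(F_{i-1}) + S(1 - F_{i-1}) - S(1 - F_i))."""
    total = 0.0
    for i in range(1, len(breaks)):
        dx = breaks[i] - breaks[i - 1]
        p = Fvals[i] - Fvals[i - 1]
        total += (dx / p) * (S(Fvals[i]) - S(Fvals[i - 1])
                             + S(1.0 - Fvals[i - 1]) - S(1.0 - Fvals[i]))
    return total

def direct_cpe(breaks, Fvals, phi, n=100000):
    """Midpoint-rule integral of phi(F*(x)) + phi(1 - F*(x)) for the piecewise
    linear interpolated cdf F*."""
    lo, hi = breaks[0], breaks[-1]
    h = (hi - lo) / n
    total = 0.0
    for j in range(n):
        x = lo + (j + 0.5) * h
        i = 1
        while breaks[i] < x:          # locate the group containing x
            i += 1
        u = Fvals[i - 1] + (Fvals[i] - Fvals[i - 1]) * (x - breaks[i - 1]) / (breaks[i] - breaks[i - 1])
        total += phi(u) + phi(1.0 - u)
    return total * h

phi_gini = lambda u: u * (1.0 - u)                 # entropy generating function
S_gini   = lambda u: u * u / 2.0 - u ** 3 / 3.0    # its antiderivative

breaks = [0.0, 1.0, 3.0, 4.0]                      # hypothetical group limits
Fvals  = [0.0, 0.2, 0.7, 1.0]                      # cdf values at those limits
closed = lemma1_cpe(breaks, Fvals, S_gini)
approx = direct_cpe(breaks, Fvals, phi_gini)
```

For these values the closed form evaluates to exactly $1.32$, which the quadrature reproduces.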

Considering this result, we can easily prove the convergence property for $CP{E}_{\phi}\left({X}^{*}\right)$:

**Theorem 1.**

Let φ be a generating function with antiderivative ${S}_{\phi}$ and let F be a continuous distribution function of the random variable X with support $[a,b]$. ${X}^{*}$ is the corresponding random variable for grouped data with $\mathsf{\Delta}x=(b-a)/k$, $k>0$. Then the following holds:

$$CP{E}_{\phi}\left({X}^{*}\right)\to {\int}_{a}^{b}\phi \left(F\left(x\right)\right)+\phi \left(\overline{F}\left(x\right)\right)dx\phantom{\rule{1em}{0ex}}\mathrm{for}\phantom{\rule{4pt}{0ex}}k\to \infty .$$

**Proof.**

Consider equidistant classes with $\mathsf{\Delta}{x}_{i}=\mathsf{\Delta}x=(b-a)/k$, $i=1,2,...,k$. Subsequently, Equation (13) results in

$$\begin{array}{ccc}\hfill CP{E}_{\phi}\left({X}^{*}\right)& =& \sum _{i=1}^{k}\left(\frac{{S}_{\phi}\left(F\left({\tilde{x}}_{i}\right)\right)-{S}_{\phi}\left(F\left({\tilde{x}}_{i-1}\right)\right)}{F\left({\tilde{x}}_{i}\right)-F\left({\tilde{x}}_{i-1}\right)}\right.\hfill \\ & & \left.+\frac{{S}_{\phi}\left(\overline{F}\left({\tilde{x}}_{i-1}\right)\right)-{S}_{\phi}\left(\overline{F}\left({\tilde{x}}_{i}\right)\right)}{F\left({\tilde{x}}_{i}\right)-F\left({\tilde{x}}_{i-1}\right)}\right)\mathsf{\Delta}x.\hfill \end{array}\tag{14}$$

With $k\to \infty $ we have $\mathsf{\Delta}x\to 0$ such that for F continuous we get $F\left({\tilde{x}}_{i}\right)-F\left({\tilde{x}}_{i-1}\right)\to 0$. The antiderivative ${S}_{\phi}$ has the derivative φ almost everywhere such that with $k\to \infty $

$$\sum _{i=1}^{k}\frac{{S}_{\phi}\left(F\left({\tilde{x}}_{i}\right)\right)-{S}_{\phi}\left(F\left({\tilde{x}}_{i-1}\right)\right)}{F\left({\tilde{x}}_{i}\right)-F\left({\tilde{x}}_{i-1}\right)}\mathsf{\Delta}x\to {\int}_{a}^{b}\phi \left(F\left(x\right)\right)dx.$$

An analogous argument holds for the second term of Equation (14). ☐
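Theorem 1 can be illustrated numerically (our own sketch): for the cdf $F\left(x\right)={x}^{2}$ on $[0,1]$ and the Shannon generating function, the grouped value from Lemma 1 approaches the limiting integral as the number k of equidistant groups grows. The antiderivative used is the one stated in the proof of Corollary 1.

```python
import math

def S_shannon(u):
    # Antiderivative of phi(u) = -u ln u (cf. Corollary 1): -u^2/2 * ln(u) + u^2/4
    return 0.0 if u <= 0.0 else -0.5 * u * u * math.log(u) + 0.25 * u * u

def grouped_cpe_shannon(F, a, b, k):
    """Lemma 1 with k equidistant groups on [a, b] and the Shannon generating function."""
    total = 0.0
    for i in range(1, k + 1):
        x0 = a + (b - a) * (i - 1) / k
        x1 = a + (b - a) * i / k
        p = F(x1) - F(x0)
        total += ((x1 - x0) / p) * (S_shannon(F(x1)) - S_shannon(F(x0))
                                    + S_shannon(1.0 - F(x0)) - S_shannon(1.0 - F(x1)))
    return total

F = lambda x: x * x          # a convenient continuous cdf on [0, 1]

def exact_cpe(n=200000):
    """Fine midpoint-rule value of the limit integral in Theorem 1."""
    h = 1.0 / n
    total = 0.0
    for j in range(n):
        u = F((j + 0.5) * h)
        total += (-u * math.log(u) if u > 0.0 else 0.0) \
               + (-(1.0 - u) * math.log(1.0 - u) if u < 1.0 else 0.0)
    return total * h

ex = exact_cpe()
err_coarse = abs(grouped_cpe_shannon(F, 0.0, 1.0, 5) - ex)
err_fine = abs(grouped_cpe_shannon(F, 0.0, 1.0, 100) - ex)
```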

In addition to this theoretical result, we obtain useful closed-form expressions for $CP{E}_{\phi}$ for grouped data and specific choices of φ, as Corollary 1 shows:

**Corollary 1.**

Let φ be such that

$$\phi \left(u\right)=\left\{\begin{array}{cc}-u\mathrm{ln}u\hfill & \text{for}\phantom{\rule{4pt}{0ex}}\alpha =1,\hfill \\ u\frac{{u}^{\alpha -1}-1}{1-\alpha}\hfill & \text{for}\phantom{\rule{4pt}{0ex}}\alpha \ne 1,\hfill \end{array}\right.$$

where $u\in [0,1]$. Then, for $\alpha =1$,

$$\begin{array}{ccc}\hfill CP{E}_{S}\left({X}^{*}\right)& =& -\frac{1}{2}\sum _{i=1}^{k}\frac{\mathsf{\Delta}{x}_{i}}{{p}_{i}}\left(F{\left({\tilde{x}}_{i}\right)}^{2}\mathrm{ln}F\left({\tilde{x}}_{i}\right)-F{\left({\tilde{x}}_{i-1}\right)}^{2}\mathrm{ln}F\left({\tilde{x}}_{i-1}\right)\right)\hfill \\ & & -\frac{1}{2}\sum _{i=1}^{k}\frac{\mathsf{\Delta}{x}_{i}}{{p}_{i}}\left(\overline{F}{\left({\tilde{x}}_{i-1}\right)}^{2}\mathrm{ln}\overline{F}\left({\tilde{x}}_{i-1}\right)-\overline{F}{\left({\tilde{x}}_{i}\right)}^{2}\mathrm{ln}\overline{F}\left({\tilde{x}}_{i}\right)\right)+\frac{1}{2}({\tilde{x}}_{k}-{\tilde{x}}_{0}),\hfill \end{array}$$

and for $\alpha \ne 1$,

$$CP{E}_{\alpha}\left({X}^{*}\right)=\frac{1}{1-\alpha}\left[\sum _{i=1}^{k}\frac{\mathsf{\Delta}{x}_{i}}{{p}_{i}}\frac{1}{\alpha +1}\left(F{\left({\tilde{x}}_{i}\right)}^{\alpha +1}-F{\left({\tilde{x}}_{i-1}\right)}^{\alpha +1}+\overline{F}{\left({\tilde{x}}_{i-1}\right)}^{\alpha +1}-\overline{F}{\left({\tilde{x}}_{i}\right)}^{\alpha +1}\right)-({\tilde{x}}_{k}-{\tilde{x}}_{0})\right].$$

**Proof.**

Using the antiderivatives

$${S}_{\alpha}\left(u\right)=\left\{\begin{array}{cc}-\frac{1}{2}{u}^{2}lnu+\frac{1}{4}{u}^{2}& \mathrm{for}\ \alpha =1\hfill \\ \frac{1}{1-\alpha}\left(\frac{1}{\alpha +1}{u}^{\alpha +1}-\frac{1}{2}{u}^{2}\right)& \mathrm{for}\ \alpha \ne 1,\hfill \end{array}\right.$$

since ${p}_{i}=F\left({\tilde{x}}_{i}\right)-F\left({\tilde{x}}_{i-1}\right)$, it holds that

$$\begin{array}{ccc}& & \frac{1}{{p}_{i}}\left(F{\left({\tilde{x}}_{i}\right)}^{2}-F{\left({\tilde{x}}_{i-1}\right)}^{2}+\overline{F}{\left({\tilde{x}}_{i-1}\right)}^{2}-\overline{F}{\left({\tilde{x}}_{i}\right)}^{2}\right)\hfill \\ & & =\frac{\left(F\left({\tilde{x}}_{i}\right)-F\left({\tilde{x}}_{i-1}\right)\right)\left(F\left({\tilde{x}}_{i}\right)+F\left({\tilde{x}}_{i-1}\right)\right)}{F\left({\tilde{x}}_{i}\right)-F\left({\tilde{x}}_{i-1}\right)}\hfill \\ & & +\frac{\left(\overline{F}\left({\tilde{x}}_{i-1}\right)-\overline{F}\left({\tilde{x}}_{i}\right)\right)\left(\overline{F}\left({\tilde{x}}_{i-1}\right)+\overline{F}\left({\tilde{x}}_{i}\right)\right)}{F\left({\tilde{x}}_{i}\right)-F\left({\tilde{x}}_{i-1}\right)}=2\hfill \end{array}$$

for $i=1,2,\dots,k$. The results follow immediately. ☐

#### 3.4. Alternative Representations of $CP{E}_{\phi}$

In case $\phi \left(0\right)=\phi \left(1\right)=0$ holds and φ is differentiable, one can provide several alternative representations of $CP{E}_{\phi}$ in addition to Equation (7). These alternative representations will be useful below for finding conditions that ensure the existence of $CP{E}_{\phi}$ and for deriving some simple estimators.

**Proposition 1.**

Let φ be a non-negative and differentiable function on the domain $[0,1]$ with derivative ${\phi}^{\prime}$ and $\phi \left(0\right)=\phi \left(1\right)=0$. In this case, for $F\in \mathcal{F}$ with quantile function ${F}^{-1}\left(u\right)$, density function f, and quantile density function $q\left(u\right)=1/f\left({F}^{-1}\left(u\right)\right)$, for $u\in [0,1]$, the following holds:

$$\begin{array}{ccc}\hfill CP{E}_{\phi}\left(F\right)& =& {\int}_{0}^{1}\left(\phi \left(u\right)+\phi (1-u)\right)q\left(u\right)du,\hfill \end{array}$$

$$\begin{array}{ccc}\hfill CP{E}_{\phi}\left(F\right)& =& {\int}_{0}^{1}({\phi}^{\prime}(1-u)-{\phi}^{\prime}\left(u\right)){F}^{-1}\left(u\right)du,\hfill \end{array}$$

$$\begin{array}{ccc}\hfill CP{E}_{\phi}\left(F\right)& =& {\int}_{\mathbb{R}}x({\phi}^{\prime}\left(\overline{F}\left(x\right)\right)-{\phi}^{\prime}\left(F\left(x\right)\right))f\left(x\right)dx.\hfill \end{array}$$

**Proof.**

Apply the probability integral transform $U=F\left(X\right)$ and integration by parts. ☐
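As a numerical sanity check (our addition, not part of the original derivation), the representations in Equations (15) and (16) can be compared for a concrete example. The sketch below uses the Shannon generating function $\phi(u)=-u\,ln\,u$ and the standard uniform distribution, for which ${F}^{-1}(u)=u$ and $q(u)=1$; a plain midpoint rule stands in for a proper quadrature routine.

```python
import math

def integrate01(f, n=100000):
    """Midpoint rule on (0, 1); never evaluates f at the endpoints."""
    h = 1.0 / n
    return h * sum(f((i + 0.5) * h) for i in range(n))

phi = lambda u: -u * math.log(u)      # Shannon generating function
dphi = lambda u: -math.log(u) - 1.0   # its derivative

# Standard uniform distribution: F^{-1}(u) = u, quantile density q(u) = 1
finv = lambda u: u
q = lambda u: 1.0

# Equation (15): integral of (phi(u) + phi(1-u)) q(u)
cpe_15 = integrate01(lambda u: (phi(u) + phi(1.0 - u)) * q(u))
# Equation (16): integral of (phi'(1-u) - phi'(u)) F^{-1}(u)
cpe_16 = integrate01(lambda u: (dphi(1.0 - u) - dphi(u)) * finv(u))

print(cpe_15, cpe_16)  # both approximately 0.5
```

Both representations give $CP{E}_{S}=1/2$ here, matching $2{\int}_{0}^{1}(-u\,ln\,u)\,du=1/2$.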

Due to $\phi \left(0\right)=\phi \left(1\right)=0$ it holds that

$${\int}_{0}^{1}({\phi}^{\prime}(1-u)-{\phi}^{\prime}\left(u\right))du=0.$$

This property supports interpreting $CP{E}_{\phi}$ as a covariance, for which the Cauchy–Schwarz inequality yields an upper bound:

**Corollary 2.**

Let φ be a non-negative and differentiable function on the domain $[0,1]$ with derivative ${\phi}^{\prime}$ and $\phi \left(0\right)=\phi \left(1\right)=0$. Then if U is uniformly distributed on $[0,1]$ and $X\sim F$:

$$\begin{array}{ccc}\hfill CP{E}_{\phi}\left(F\right)& =& Cov({\phi}^{\prime}(1-U)-{\phi}^{\prime}\left(U\right),{F}^{-1}\left(U\right)),\hfill \end{array}$$

$$\begin{array}{ccc}\hfill CP{E}_{\phi}\left(F\right)& =& Cov({\phi}^{\prime}\left(\overline{F}\left(X\right)\right)-{\phi}^{\prime}\left(F\left(X\right)\right),X).\hfill \end{array}$$

**Proof.**

Let $\mu =E\left[X\right]$. Then, since $E[{\phi}^{\prime}(1-U)-{\phi}^{\prime}\left(U\right)]=0$,

$$\begin{array}{ccc}\hfill CP{E}_{\phi}\left(F\right)& =& {\int}_{0}^{1}({\phi}^{\prime}(1-u)-{\phi}^{\prime}\left(u\right)){F}^{-1}\left(u\right)du\hfill \\ & =& {\int}_{0}^{1}({\phi}^{\prime}(1-u)-{\phi}^{\prime}\left(u\right))({F}^{-1}\left(u\right)-\mu )du.\hfill \end{array}$$

☐

Depending on the context, we switch between these alternative representations of $CP{E}_{\phi}$.

## 4. Sufficient Conditions for the Existence of CPE${}_{\phi}$

#### 4.1. Deriving an Upper Bound for $CP{E}_{\phi}$

The Cauchy–Schwarz inequality applied to Equations (18) and (19), respectively, provides an upper bound for $CP{E}_{\phi}$ if the variance ${\sigma}^{2}=E\left[{({F}^{-1}\left(U\right)-\mu )}^{2}\right]$ exists and

$${\int}_{0}^{1}{({\phi}^{\prime}(1-u)-{\phi}^{\prime}\left(u\right))}^{2}du<\infty $$

holds. The existence of the upper bound simultaneously ensures the existence of $CP{E}_{\phi}$.

**Proposition 2.**

Let φ be a non-negative and differentiable function on the domain $[0,1]$ with derivative ${\phi}^{\prime}$ and $\phi \left(0\right)=\phi \left(1\right)=0$. If Equation (20) holds, then for $X\sim F$ with $Var\left(X\right)<\infty $ and quantile function ${F}^{-1}$, we have

$$\begin{array}{ccc}\hfill CP{E}_{\phi}\left(F\right)& \le & \sqrt{E\left({({\phi}^{\prime}(1-U)-{\phi}^{\prime}\left(U\right))}^{2}\right)E\left({({F}^{-1}\left(U\right)-\mu )}^{2}\right)}\hfill \end{array}$$

$$\begin{array}{ccc}\hfill CP{E}_{\phi}\left(F\right)& \le & \sqrt{E\left({({\phi}^{\prime}\left(\overline{F}\left(X\right)\right)-{\phi}^{\prime}\left(F\left(X\right)\right))}^{2}\right){\sigma}^{2}}.\hfill \end{array}$$

**Proof.**

The statement follows from

$$\begin{array}{ccc}\hfill {\left(E\left(({\phi}^{\prime}(1-U)-{\phi}^{\prime}\left(U\right))({F}^{-1}\left(U\right)-\mu )\right)\right)}^{2}& \le & {\int}_{0}^{1}{({\phi}^{\prime}(1-u)-{\phi}^{\prime}\left(u\right))}^{2}du\hfill \\ & & \times \phantom{\rule{3.33333pt}{0ex}}E\left({({F}^{-1}\left(U\right)-\mu )}^{2}\right).\hfill \end{array}$$

☐
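For illustration (an addition to the text), the bound of Proposition 2 can be checked numerically for a standard Gaussian with the Shannon generating function. $CP{E}_{S}$ is evaluated directly from the defining integral over $\phi(F(x))+\phi(\overline{F}(x))$, truncated to $[-10,10]$, which is harmless since the integrand vanishes rapidly in the tails; the bound is $\pi\sigma /\sqrt{3}$ with $\sigma =1$.

```python
import math
from statistics import NormalDist

nd = NormalDist()  # standard Gaussian, mu = 0, sigma = 1

def integrate(f, a, b, n=100000):
    """Composite midpoint rule on [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def phi(u):
    """Shannon generating function, with phi(0) = phi(1) = 0."""
    return -u * math.log(u) if 0.0 < u < 1.0 else 0.0

# CPE_S(F) = integral of phi(F(x)) + phi(F-bar(x)) over the real line,
# truncated to [-10, 10] where the integrand is numerically zero outside.
cpe_s = integrate(lambda x: phi(nd.cdf(x)) + phi(1.0 - nd.cdf(x)), -10.0, 10.0)
bound = math.pi / math.sqrt(3.0)  # upper bound pi * sigma / sqrt(3)

print(cpe_s < bound)  # True: the Gaussian does not attain the bound
```

The bound is attained only by the logistic distribution (see Section 5), so the Gaussian value stays strictly below it.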

Next, we consider the upper bound for the cumulative paired α-entropy:

**Corollary 3.**

Let X be a random variable having a finite variance. Then

$$CP{E}_{\alpha}\left(X\right)\le \sigma \left|\frac{\alpha}{1-\alpha}\right|\sqrt{2\left(\frac{1}{2\alpha -1}-B(\alpha ,\alpha )\right)}$$

for $\alpha >1/2$, $\alpha \ne 1$, and

$$CP{E}_{S}\left(X\right)\le \frac{\pi \sigma}{\sqrt{3}}$$

for $\alpha =1$.

**Proof.**

For $\phi \left(u\right)=u({u}^{\alpha -1}-1)/(1-\alpha )$ and ${\phi}^{\prime}\left(u\right)=(\alpha {u}^{\alpha -1}-1)/(1-\alpha )$, $u\in [0,1]$, we have

$$\begin{array}{ccc}\hfill {\int}_{0}^{1}{({\phi}^{\prime}(1-u)-{\phi}^{\prime}\left(u\right))}^{2}du& =& {\left(\frac{\alpha}{1-\alpha}\right)}^{2}\left({\int}_{0}^{1}{\left({u}^{\alpha -1}-{(1-u)}^{\alpha -1}\right)}^{2}du\right)\hfill \\ & =& 2{\left(\frac{\alpha}{1-\alpha}\right)}^{2}\left({\int}_{0}^{1}{u}^{2(\alpha -1)}du\right.\hfill \\ & & \left.-2{\int}_{0}^{1}{u}^{\alpha -1}{(1-u)}^{\alpha -1}du\right)\hfill \\ & =& 2{\left(\frac{\alpha}{1-\alpha}\right)}^{2}\left(\frac{1}{2\alpha -1}-B(\alpha ,\alpha )\right).\hfill \end{array}$$

$\alpha >1/2$ is required for the existence of $CP{E}_{\alpha}\left(X\right)$. For $\alpha =1$ we have $\phi \left(u\right)=-ulnu$ and ${\phi}^{\prime}\left(u\right)=-lnu-1$, $u\in [0,1]$, such that

$${\int}_{0}^{1}{({\phi}^{\prime}(1-u)-{\phi}^{\prime}\left(u\right))}^{2}du={\int}_{0}^{1}{\left(ln\left(\frac{1-u}{u}\right)\right)}^{2}du=\frac{{\pi}^{2}}{3}.$$

☐
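The value ${\pi}^{2}/3$ in the last display can be confirmed numerically (our addition); the integrand has only integrable logarithmic singularities at 0 and 1, so the midpoint rule below is adequate.

```python
import math

def integrate01(f, n=400000):
    """Midpoint rule on (0, 1); the endpoints are never evaluated."""
    h = 1.0 / n
    return h * sum(f((i + 0.5) * h) for i in range(n))

# integral of (ln((1-u)/u))^2 over (0, 1)
val = integrate01(lambda u: math.log((1.0 - u) / u) ** 2)
print(abs(val - math.pi ** 2 / 3.0) < 1e-2)  # True
```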

In the framework of uncertainty theory, the upper bound for the paired cumulative Shannon entropy was derived by [51] (see also [6], p. 83). For $\alpha =2$ we get the upper bound for the paired cumulative Gini entropy

$$CP{E}_{G}\left(X\right)\le \sigma \frac{2}{\sqrt{3}}.$$

This result has already been proven for non-negative uncertainty variables by [40]. Finally, one obtains the following upper bound for the paired cumulative Leik entropy:

**Corollary 4.**

Let X be a random variable with finite variance. Then

$$CP{E}_{L}\left[X\right]\le 2\sigma .$$

**Proof.**

Use

$$\begin{array}{c}\hfill {\int}_{0}^{1}{(\mathrm{sign}(u-1/2)-\mathrm{sign}(1/2-u))}^{2}\mathrm{d}u=4\end{array}$$

to get the result. ☐

#### 4.2. Stricter Conditions for the Existence of $CP{E}_{\alpha}$

So far, we only considered sufficient conditions for an existing variance. Following the arguments in [46,50], which were used for the special case of cumulative residual and residual Shannon entropy, one can derive stricter sufficient conditions for the existence of $CP{E}_{\alpha}$.

**Theorem 2.**

If $E\left(|X{|}^{p}\right)<\infty $ for $p>1$, then $CP{E}_{\alpha}<\infty $ for $\alpha >1/p$.

**Proof.**

To prepare the proof, we first note that

$$u\frac{{u}^{\alpha -1}-1}{1-\alpha}\le -ulnu\le u\frac{{u}^{\beta -1}-1}{1-\beta}\le 1-u$$

holds for $0<\beta <1<\alpha $ and $0\le u\le 1$.

The second fact required for the proof is that

$${\int}_{0}^{\infty}\overline{F}\left(x\right)dx<\infty \phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\mathrm{and}\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}{\int}_{-\infty}^{0}F\left(x\right)dx<\infty $$

if $E\left(|X|\right)<\infty $, because

$$E\left(X\right)={\int}_{0}^{\infty}\overline{F}\left(x\right)dx-{\int}_{-\infty}^{0}F\left(x\right)dx.$$

Third, it holds that

$$P(-X\ge y)\le P\left(|X|\ge y\right)\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\mathrm{for}\ y>0,$$

because

$$\begin{array}{ccc}\hfill P\left(|X|\ge y\right)& =& 1-P\left(|X|<y\right)=1-\left(P(X<y)-P(X\le -y)\right)\hfill \\ & =& 1-P(X<y)+P(X\le -y)\hfill \\ & =& 1-P(X<y)+P(-X\ge y).\hfill \end{array}$$

$CP{E}_{\alpha}$ can be decomposed into four improper integrals:

$$\begin{array}{ccc}\hfill CP{E}_{\alpha}& =& {\int}_{0}^{\infty}F\left(x\right)\frac{F{\left(x\right)}^{\alpha -1}-1}{1-\alpha}dx+{\int}_{-\infty}^{0}\overline{F}\left(x\right)\frac{\overline{F}{\left(x\right)}^{\alpha -1}-1}{1-\alpha}dx\hfill \\ & & +{\int}_{-\infty}^{0}F\left(x\right)\frac{F{\left(x\right)}^{\alpha -1}-1}{1-\alpha}dx+{\int}_{0}^{\infty}\overline{F}\left(x\right)\frac{\overline{F}{\left(x\right)}^{\alpha -1}-1}{1-\alpha}dx.\hfill \end{array}$$

It must be shown separately that these integrals converge.

The convergence of the first two terms follows directly from the existence of $E\left(X\right)$. With Equations (27) and (28) we have for $\alpha >0$

$${\int}_{0}^{\infty}F\left(x\right)\frac{F{\left(x\right)}^{\alpha -1}-1}{1-\alpha}dx\le {\int}_{0}^{\infty}\overline{F}\left(x\right)dx<\infty $$

and

$${\int}_{-\infty}^{0}\overline{F}\left(x\right)\frac{\overline{F}{\left(x\right)}^{\alpha -1}-1}{1-\alpha}dx\le {\int}_{-\infty}^{0}F\left(x\right)dx<\infty .$$

For the third term we have to demonstrate that

$${\int}_{-\infty}^{0}F\left(x\right)\frac{F{\left(x\right)}^{\alpha -1}-1}{1-\alpha}dx<\infty $$

for $\alpha >1/p$. If $p>1$, there is a β with $1/p<\beta <1$ and $\beta <\alpha $. With Equation (27) it follows for $-\infty <x\le 0$ that

$$F\left(x\right)\frac{F{\left(x\right)}^{\alpha -1}-1}{1-\alpha}\le F\left(x\right)\frac{F{\left(x\right)}^{\beta -1}-1}{1-\beta}\le \frac{1}{1-\beta}F{\left(x\right)}^{\beta}$$

because $1-\beta >0$.

With $F\left(x\right)=P(X\le x)=P(-X\ge -x)$ we obtain

$$\frac{1}{1-\beta}F{\left(x\right)}^{\beta}\left\{\begin{array}{cc}\le \frac{1}{1-\beta}& \mathrm{for}\ 0\le -x\le 1\hfill \\ =\frac{1}{1-\beta}P{(-X\ge -x)}^{\beta}\le \frac{1}{1-\beta}P{\left(|X|\ge -x\right)}^{\beta}& \mathrm{for}\ 1<-x<\infty .\hfill \end{array}\right.$$

For $p>0$ the transformation $g\left(y\right)={y}^{p}$ is monotonically increasing for $y>1$. Using the Markov inequality we get

$$P\left(|X|\ge y\right)\le \frac{E\left[|X{|}^{p}\right]}{{y}^{p}}.$$

Putting these results together, we attain

$${\int}_{-\infty}^{0}F\left(x\right)\frac{F{\left(x\right)}^{\alpha -1}-1}{1-\alpha}dx\le \frac{1}{1-\beta}+\frac{1}{1-\beta}{\int}_{1}^{\infty}\frac{E{\left[|X{|}^{p}\right]}^{\beta}}{{y}^{p\beta}}dy<\infty $$

for $\beta >1/p$ (and thus $p\beta >1$), since ${\int}_{1}^{\infty}1/{y}^{q}dy<\infty $ for $q>1$.

It remains to show the convergence of the fourth term:

$${\int}_{0}^{\infty}\overline{F}\left(x\right)\frac{\overline{F}{\left(x\right)}^{\alpha -1}-1}{1-\alpha}dx<\infty $$

for $\alpha >1/p$. For $p>1$, there is a β with $1/p<\beta <1$ and $\beta <\alpha $. Due to Equation (27) and $1-\beta >0$, for $0\le x<\infty $ it is true that

$$\overline{F}\left(x\right)\frac{\overline{F}{\left(x\right)}^{\alpha -1}-1}{1-\alpha}\le \overline{F}\left(x\right)\frac{\overline{F}{\left(x\right)}^{\beta -1}-1}{1-\beta}\le \frac{1}{1-\beta}\overline{F}{\left(x\right)}^{\beta}.$$

With $\overline{F}\left(x\right)=P(X>x)$ we have

$$\frac{1}{1-\beta}\overline{F}{\left(x\right)}^{\beta}\left\{\begin{array}{cc}\le \frac{1}{1-\beta}& \mathrm{for}\ 0\le x\le 1\hfill \\ =\frac{1}{1-\beta}P{(X\ge x)}^{\beta}\le \frac{1}{1-\beta}P{\left(|X|\ge x\right)}^{\beta}& \mathrm{for}\ 1<x<\infty .\hfill \end{array}\right.$$

Now, the Markov inequality gives

$$P\left(|X|\ge y\right)\le \frac{E\left(|X{|}^{p}\right)}{{y}^{p}}.$$

In summary, we have

$${\int}_{0}^{\infty}\overline{F}\left(x\right)\frac{\overline{F}{\left(x\right)}^{\alpha -1}-1}{1-\alpha}dx\le \frac{1}{1-\beta}+\frac{1}{1-\beta}{\int}_{1}^{\infty}\frac{E{\left[|X{|}^{p}\right]}^{\beta}}{{y}^{p\beta}}dy<\infty $$

for $\beta >1/p$, since ${\int}_{1}^{\infty}1/{y}^{q}dy<\infty $ for $q>1$. This completes the proof. ☐

Following Theorem 2, depending on the number of existing moments, specific conditions for α arise in order to ensure the existence of $CP{E}_{\alpha}$:

- If the variance of X exists (i.e., $p=2$), $CP{E}_{\alpha}\left(X\right)$ exists for $\alpha >1/2$.
- For $p>1$, $E\left[|X{|}^{p}\right]<\infty $ is sufficient for the existence of $CP{E}_{S}$ (i.e., $\alpha =1$).
- For $p=1$, $E\left[|X{|}^{p}\right]<\infty $ is sufficient for the existence of $CP{E}_{G}$ (i.e., $\alpha =2$).

## 5. Maximum CPE${}_{\phi}$ Distributions

#### 5.1. Maximum $CP{E}_{\phi}$ Distributions for Given Mean and Variance

Equality in the Cauchy–Schwarz inequality gives a condition under which the upper bound is attained. This is the case if, with probability 1, there is an affine linear relation between ${F}^{-1}\left(U\right)$ (respectively X) and ${\phi}^{\prime}(1-U)-{\phi}^{\prime}\left(U\right)$ (respectively ${\phi}^{\prime}\left(\overline{F}\left(X\right)\right)-{\phi}^{\prime}\left(F\left(X\right)\right)$). Since the quantile function is monotonically increasing, such an affine linear function can only exist if ${\phi}^{\prime}(1-u)-{\phi}^{\prime}\left(u\right)$ is monotonic as well. This implies that φ needs to be a concave function on $[0,1]$. In order to derive a maximum $CP{E}_{\phi}$ distribution under the restriction that mean and variance are given, one may therefore only consider concave generating functions φ.

We summarize this obvious but important result in the following Theorem:

**Theorem 3.**

Let φ be a non-negative and differentiable function on the domain $[0,1]$ with derivative ${\phi}^{\prime}$ and $\phi \left(0\right)=\phi \left(1\right)=0$. Then F is the maximum $CP{E}_{\phi}$ distribution with prespecified mean μ and variance ${\sigma}^{2}$ of $X\sim F$ iff a constant $b\in \mathbb{R}$ exists such that

$$P\left({F}^{-1}\left(U\right)-\mu =\frac{\sigma}{\sqrt{E\left({({\phi}^{\prime}(1-U)-{\phi}^{\prime}\left(U\right))}^{2}\right)}}({\phi}^{\prime}(1-U)-{\phi}^{\prime}\left(U\right))\right)=1.$$

**Proof.**

The upper bound of the Cauchy–Schwarz inequality is attained iff there are constants $a,b\in \mathbb{R}$ such that

$$P\left({F}^{-1}\left(U\right)=a+b({\phi}^{\prime}(1-U)-{\phi}^{\prime}\left(U\right))\right)=1.$$

The property $\phi \left(0\right)=\phi \left(1\right)=0$ leads to $E\left({\phi}^{\prime}(1-U)-{\phi}^{\prime}\left(U\right)\right)=0$ such that

$$\mu ={\int}_{0}^{1}{F}^{-1}\left(u\right)du=a+b{\int}_{0}^{1}({\phi}^{\prime}(1-u)-{\phi}^{\prime}\left(u\right))du=a.$$

This means that there is a constant $b\in \mathbb{R}$ with

$$P\left({F}^{-1}\left(U\right)-\mu =b({\phi}^{\prime}(1-U)-{\phi}^{\prime}\left(U\right))\right)=1.$$

The second restriction postulates that

$${\sigma}^{2}={\int}_{0}^{1}{({F}^{-1}\left(u\right)-\mu )}^{2}du={b}^{2}E\left({({\phi}^{\prime}(1-U)-{\phi}^{\prime}\left(U\right))}^{2}\right).$$

φ is concave on $[0,1]$, so

$$-{\phi}^{\prime \prime}(1-u)-{\phi}^{\prime \prime}\left(u\right)\ge 0,\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}u\in [0,1].$$

Therefore, ${\phi}^{\prime}(1-u)-{\phi}^{\prime}\left(u\right)$ is monotonically increasing. The quantile function is also monotonically increasing such that b has to be positive. This gives
☐

$$b=\frac{\sigma}{\sqrt{E\left({({\phi}^{\prime}(1-U)-{\phi}^{\prime}\left(U\right))}^{2}\right)}}.$$

The quantile function of the Tukey’s λ distribution is given by

$$Q(u,\lambda )=\frac{1}{\lambda}({u}^{\lambda}-{(1-u)}^{\lambda}),\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}u\in [0,1],\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\lambda \ne 0.$$

Its mean and variance are

$$\mu =0\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\mathrm{and}\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}{\sigma}^{2}=\frac{2}{{\lambda}^{2}}\left(\frac{1}{2\lambda +1}-B(\lambda +1,\lambda +1)\right).$$

The support is given by $[-1/\lambda ,1/\lambda ]$ for $\lambda >0$.
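The variance formula for Tukey's λ distribution can be verified numerically (our addition): since $\mu =0$ by symmetry, the variance is ${\int}_{0}^{1}Q{(u,\lambda )}^{2}du$, and the Beta function is evaluated via $B(a,b)=\Gamma (a)\Gamma (b)/\Gamma (a+b)$. The choice $\lambda =0.5$ below is an arbitrary test value.

```python
import math

def integrate01(f, n=100000):
    """Midpoint rule on (0, 1)."""
    h = 1.0 / n
    return h * sum(f((i + 0.5) * h) for i in range(n))

def beta(a, b):
    """Beta function via the Gamma function."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

lam = 0.5  # arbitrary lambda > 0
Q = lambda u: (u ** lam - (1.0 - u) ** lam) / lam  # Tukey lambda quantile

var_numeric = integrate01(lambda u: Q(u) ** 2)  # mean is 0 by symmetry
var_formula = 2.0 / lam ** 2 * (1.0 / (2.0 * lam + 1.0)
                                - beta(lam + 1.0, lam + 1.0))

print(abs(var_numeric - var_formula) < 1e-4)  # True
```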

By discussing the paired cumulative α-entropy, one can prove the new result that the Tukey’s λ distribution is the maximum $CP{E}_{\alpha}$ distribution for prespecified mean and variance. Tukey’s λ distribution takes on the role of the Student-t distribution if one changes from the differential entropy to $CP{E}_{\alpha}$ (cf. [61]).

**Corollary 5.**

The cdf F maximizes $CP{E}_{\alpha}$ for $\alpha >1/2$ under the restrictions of a given mean μ and given variance ${\sigma}^{2}$ iff F is the cdf of the Tukey λ distribution with $\lambda =\alpha -1$.

**Proof.**

For $\phi \left(u\right)=u({u}^{\alpha -1}-1)/(1-\alpha )$, $u\in [0,1]$, we have

$$\begin{array}{ccc}\hfill {\int}_{0}^{1}{({\phi}^{\prime}(1-u)-{\phi}^{\prime}\left(u\right))}^{2}du& =& {\left(\frac{\alpha}{1-\alpha}\right)}^{2}{\int}_{0}^{1}{({(1-u)}^{\alpha -1}-{u}^{\alpha -1})}^{2}du\hfill \\ & =& 2{\left(\frac{\alpha}{1-\alpha}\right)}^{2}\left(\frac{1}{2\alpha -1}-B(\alpha ,\alpha )\right)\hfill \end{array}$$

for $\alpha >1/2$. As a consequence, the constant b is given by

$$b=\frac{1}{\sqrt{2}}\sigma \left|\frac{1-\alpha}{\alpha}\right|{\left(\frac{1}{2\alpha -1}-B(\alpha ,\alpha )\right)}^{-1/2},$$

and the maximum $CP{E}_{\alpha}$ distribution results in

$$\begin{array}{ccc}\hfill {F}^{-1}\left(u\right)-\mu & =& \frac{\sigma}{\sqrt{2}}\left|\frac{1-\alpha}{\alpha}\right|{\left(\frac{1}{2\alpha -1}-B(\alpha ,\alpha )\right)}^{-1/2}\frac{\alpha}{1-\alpha}\left({(1-u)}^{\alpha -1}-{u}^{\alpha -1}\right)\hfill \\ & =& \sigma \frac{|\alpha -1|}{\sqrt{2}}{\left(\frac{1}{2\alpha -1}-B(\alpha ,\alpha )\right)}^{-1/2}\frac{\left({u}^{\alpha -1}-{(1-u)}^{\alpha -1}\right)}{\alpha -1}.\hfill \end{array}$$

${F}^{-1}$ can easily be identified as the quantile function of a Tukey's λ distribution with $\lambda =\alpha -1$ and $\alpha >1/2$. ☐

For the Gini case ($\alpha =2$), one obtains the quantile function of a uniform distribution

$${F}^{-1}\left(u\right)=\mu +\sigma \sqrt{\frac{1}{2}}\sqrt{6}\left(2u-1\right)=\mu +\sigma \sqrt{3}(2u-1),\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}u\in [0,1],$$

with support $[\mu -\sqrt{3}\sigma ,\mu +\sqrt{3}\sigma ]$. This maximum $CP{E}_{G}$ distribution corresponds essentially to the distribution derived by Dai et al. [40].
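As a quick check (our addition), the uniform quantile function above indeed reproduces the prescribed mean and variance; μ and σ below are arbitrary test values.

```python
import math

def integrate01(f, n=100000):
    """Midpoint rule on (0, 1)."""
    h = 1.0 / n
    return h * sum(f((i + 0.5) * h) for i in range(n))

mu, sigma = 2.0, 0.7  # arbitrary prescribed mean and standard deviation
finv = lambda u: mu + sigma * math.sqrt(3.0) * (2.0 * u - 1.0)

mean = integrate01(finv)                          # E[X] = integral of F^{-1}(u)
var = integrate01(lambda u: (finv(u) - mu) ** 2)  # Var(X)
print(abs(mean - mu) < 1e-6, abs(var - sigma ** 2) < 1e-6)  # True True
```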

The fact that the logistic distribution is the maximum $CP{E}_{S}$ distribution, provided mean and variance are given, was derived by Chen et al. [51] in the framework of uncertainty theory and by ([50], p. 4) in the framework of reliability theory. Both proved this result using Euler–Lagrange equations. In the interest of completeness, we provide an alternative proof via the upper bound of the Cauchy–Schwarz inequality.

**Corollary 6.**

The cdf F maximizes $CP{E}_{S}$ under the restrictions of a known mean μ and a known variance ${\sigma}^{2}$ iff F is the cdf of a logistic distribution.

**Proof.**

Since

$${\int}_{0}^{1}{\left(ln\left(\frac{1-u}{u}\right)\right)}^{2}du=\frac{{\pi}^{2}}{3},$$

one receives

$${F}^{-1}\left(u\right)-\mu =\frac{\sigma}{\pi /\sqrt{3}}ln\left(\frac{u}{1-u}\right),\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}u\in [0,1].$$

Inverting gives the distribution function of the logistic distribution with mean μ and variance ${\sigma}^{2}$:

$$F\left(x\right)=\frac{1}{1+exp\left(-\frac{\pi}{\sqrt{3}}\frac{x-\mu}{\sigma}\right)},\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}x\in \mathbb{R}.$$

☐
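Numerically (our addition), one can confirm both that this logistic cdf has variance ${\sigma}^{2}$ and that it attains the upper bound $\pi \sigma /\sqrt{3}$ of Corollary 3, using the quantile representation of $CP{E}_{S}$ from Equation (16); σ below is an arbitrary test value.

```python
import math

def integrate01(f, n=200000):
    """Midpoint rule on (0, 1)."""
    h = 1.0 / n
    return h * sum(f((i + 0.5) * h) for i in range(n))

mu, sigma = 0.0, 1.3  # arbitrary test values
# Quantile function of the logistic maximizer above
finv = lambda u: mu + sigma * math.sqrt(3.0) / math.pi * math.log(u / (1.0 - u))

var = integrate01(lambda u: (finv(u) - mu) ** 2)
# CPE_S via Equation (16): phi'(1-u) - phi'(u) = ln(u/(1-u)) for Shannon phi
cpe_s = integrate01(lambda u: math.log(u / (1.0 - u)) * finv(u))
bound = math.pi * sigma / math.sqrt(3.0)

print(abs(var - sigma ** 2) < 1e-2, abs(cpe_s - bound) < 1e-2)  # True True
```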

As a last example we consider the cumulative paired Leik entropy $CP{E}_{L}$.

**Corollary 7.**

The cdf F maximizes $CP{E}_{L}$ under the restrictions of a known mean μ and a known variance ${\sigma}^{2}$ iff

$$F\left(x\right)=\left\{\begin{array}{cc}0& \mathrm{for}\ x<\mu -\sigma \hfill \\ 1/2& \mathrm{for}\ \mu -\sigma \le x<\mu +\sigma \hfill \\ 1& \mathrm{for}\ x\ge \mu +\sigma .\hfill \end{array}\right.$$

**Proof.**

From $\phi \left(u\right)=min\{u,1-u\}$ and ${\phi}^{\prime}\left(u\right)=\mathrm{sign}(1/2-u)$, $u\in [0,1]$, it follows that

$${F}^{-1}\left(u\right)-\mu =\sigma \mathrm{sign}(u-1/2),\phantom{\rule{0.222222em}{0ex}}\phantom{\rule{0.277778em}{0ex}}u\in [0,1].$$

☐

Therefore, the maximization of $CP{E}_{L}$ with given mean and variance leads to the two-point distribution on $\{\mu -\sigma ,\mu +\sigma \}$, which has maximal variance among all distributions supported on the interval $[\mu -\sigma ,\mu +\sigma ]$.

#### 5.2. Maximum $CP{E}_{\phi}$ Distributions for General Moment Restrictions

Drissi et al. [50] discuss general moment restrictions of the form

$${\int}_{-\infty}^{\infty}{c}_{i}\left(x\right)f\left(x\right)dx={\int}_{0}^{1}{c}_{i}\left({F}^{-1}\left(u\right)\right)du={k}_{i},\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}i=1,2,\dots ,k,$$

for which the existence of the moments is assumed. By using Euler–Lagrange equations they show that

$$\overline{F}\left(x\right)=\frac{1}{1+exp\left({\sum}_{i=1}^{k}{\lambda}_{i}{c}_{i}^{\prime}\left(x\right)\right)},\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}x\in \mathbb{R},$$

maximizes the residual cumulative entropy $-{\int}_{\mathbb{R}}\overline{F}\left(x\right)ln\overline{F}\left(x\right)\mathrm{d}x$ under the constraints of Equation (31). Moreover, they demonstrated that the solution needs to be symmetric with respect to μ. Here, ${\lambda}_{i}$, $i=1,2,\dots ,k$, are the Lagrange parameters which are determined by the moment restrictions, provided a solution exists. Rao et al. [47] show that for distributions with support ${\mathbb{R}}^{+}$ the ME distribution is given by

$$\overline{F}\left(x\right)=exp\left(-\sum _{i=1}^{k}{\lambda}_{i}{c}_{i}^{\prime}\left(x\right)\right),\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}x>0,$$

if the restrictions of Equation (31) are again required.

One can easily examine the shape of a distribution which maximizes the cumulative paired φ-entropy under the constraints of Equation (31). For $i>2$, this maximum $CP{E}_{\phi}$ distribution can no longer be derived via the upper bound of the Cauchy–Schwarz inequality. Instead, one has to solve the Euler–Lagrange equations for the objective function

$${\int}_{0}^{1}({\phi}^{\prime}\left(u\right)-{\phi}^{\prime}(1-u)){F}^{-1}\left(u\right)du-\sum _{i=1}^{k}{\lambda}_{i}({c}_{i}\left({F}^{-1}\left(u\right)\right)-{k}_{i})$$

with Lagrange parameters ${\lambda}_{i}$, $i=1,2,\dots ,k$. The Euler–Lagrange equations lead to the optimization problem

$$\sum _{i=1}^{k}{\lambda}_{i}{c}_{i}^{\prime}\left({F}^{-1}\left(u\right)\right)={\phi}^{\prime}(1-u)-{\phi}^{\prime}\left(u\right),\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}u\in [0,1],$$

in which the ${\lambda}_{i}$, $i=1,2,\dots ,k$, are determined by the constraints. Once again there is a close relation between the derivative of the generating function and the quantile function, provided a solution of the optimization problem Equation (32) exists.

The following example shows that the optimization problem Equation (32) leads to a well-known distribution if constraints are chosen carefully in case of a Shannon-type entropy.

**Example 1.**

The power logistic distribution is defined by the distribution function

$$F\left(x\right)=\frac{1}{1+exp\left(-\lambda \phantom{\rule{0.166667em}{0ex}}\mathrm{sign}\left(x\right){\left|x\right|}^{\gamma}\right)},\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}x\in \mathbb{R},$$

for $\gamma >0$. The corresponding quantile function is

$${F}^{-1}\left(u\right)={\left(\frac{1}{\lambda}\right)}^{1/\gamma}\mathrm{sign}(u-1/2){\left|ln\left(\frac{1-u}{u}\right)\right|}^{1/\gamma},\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}u\in [0,1].$$

This quantile function is also the solution of Equation (33) given $\phi \left(u\right)=-ulnu$, $u\in [0,1]$, under the constraint $E\left[{\left|X\right|}^{\gamma +1}\right]=c$. The maximum of the cumulative paired Shannon entropy under the constraint $E\left[{\left|X\right|}^{\gamma +1}\right]=c$ is given by

$$\begin{array}{ccc}\hfill CP{E}_{S}\left(X\right)& =& {\int}_{0}^{1}ln\left(\frac{u}{1-u}\right){\left(\frac{1}{\lambda}\right)}^{1/\gamma}\mathrm{sign}(u-1/2)\cdot {\left|ln\left(\frac{1-u}{u}\right)\right|}^{1/\gamma}du\hfill \\ & =& {\left(\frac{1}{\lambda}\right)}^{1/\gamma}{\int}_{0}^{1}{\left|ln\left(\frac{1-u}{u}\right)\right|}^{(\gamma +1)/\gamma}du=\lambda E\left({\left|X\right|}^{\gamma +1}\right).\hfill \end{array}$$

Setting $\gamma =1$ leads to the familiar result for the upper bound of $CP{E}_{S}$ given the variance.
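A minimal sketch (our addition) checking that the stated quantile function inverts the power logistic cdf; $\mathrm{sign}(x){x}^{\gamma}$ is implemented as $\mathrm{sign}(x){\left|x\right|}^{\gamma}$ so that non-integer γ is well defined for negative x, and λ, γ are arbitrary test values.

```python
import math

lam, gam = 1.5, 2.0  # arbitrary test parameters lambda, gamma > 0

def F(x):
    """Power logistic cdf, with sign(x) x^gamma read as sign(x) |x|^gamma."""
    return 1.0 / (1.0 + math.exp(-lam * math.copysign(abs(x) ** gam, x)))

def finv(u):
    """Quantile function of the power logistic distribution."""
    s = math.copysign(1.0, u - 0.5)
    return (1.0 / lam) ** (1.0 / gam) * s * abs(math.log((1.0 - u) / u)) ** (1.0 / gam)

ok = all(abs(F(finv(u)) - u) < 1e-12 for u in (0.05, 0.25, 0.5, 0.75, 0.95))
print(ok)  # True: F(F^{-1}(u)) = u on a grid of probabilities
```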

#### 5.3. Generalized Principle of Maximum Entropy

Kesavan et al. [19] introduced the generalized principle of an ME problem, which describes the interplay of entropy, constraints, and distributions. A variant of this principle aims at finding an entropy that is maximized by a given distribution under given moment restrictions.

This problem can easily be solved for $CP{E}_{\phi}$ if mean and variance are given, due to the linear relationship between ${\phi}^{\prime}(1-u)-{\phi}^{\prime}\left(u\right)$ and the quantile function ${F}^{-1}\left(u\right)$ of the maximum $CP{E}_{\phi}$ distribution provided by the Cauchy–Schwarz inequality. However, ${\phi}^{\prime}(1-u)-{\phi}^{\prime}\left(u\right)$ must be strictly monotonically increasing on $[0,1]$ for the resulting ${F}^{-1}\left(u\right)$ to be a quantile function. Therefore, the concavity of $\phi \left(u\right)$ and the condition $\phi \left(0\right)=\phi \left(1\right)=0$ are of key importance.

We demonstrate the solution to the generalized principle of the maximum entropy problem for the Gaussian and the Student-t distribution.

**Proposition 3.**

Let φ, Φ and ${\Phi}^{-1}$ be the density, the cdf and the quantile function of a standard Gaussian random variable. The Gaussian distribution is the maximum $CP{E}_{\phi}$ distribution for a given mean μ and variance ${\sigma}^{2}$ for $CP{E}_{\phi}$ with entropy generating function

$$\phi \left(u\right)=\phi \left({\Phi}^{-1}\left(u\right)\right),\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}u\in [0,1].$$

**Proof.**

With

$${\phi}^{\prime}\left(u\right)=\frac{{\phi}^{\prime}\left({\Phi}^{-1}\left(u\right)\right)}{\phi \left({\Phi}^{-1}\left(u\right)\right)}=-{\Phi}^{-1}\left(u\right),\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}u\in [0,1],$$

the condition for the maximum $CP{E}_{\phi}$ distribution with mean μ and variance ${\sigma}^{2}$ becomes

$${F}^{-1}\left(u\right)-\mu =\frac{\sigma}{\sqrt{{\int}_{0}^{1}{\left(2{\Phi}^{-1}\left(u\right)\right)}^{2}du}}2{\Phi}^{-1}\left(u\right),\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}u\in [0,1].$$

By substituting ${\int}_{0}^{1}{\left(2{\Phi}^{-1}\left(u\right)\right)}^{2}du=4$, it follows that

$${F}^{-1}\left(u\right)-\mu =\sigma {\Phi}^{-1}\left(u\right),\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}u\in [0,1],$$

such that ${F}^{-1}$ is the quantile function of a Gaussian distribution with mean μ and variance ${\sigma}^{2}$. ☐
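The normalizing constant ${\int}_{0}^{1}{\left(2{\Phi}^{-1}\left(u\right)\right)}^{2}du=4$ used in the proof follows from ${\int}_{0}^{1}{\left({\Phi}^{-1}\left(u\right)\right)}^{2}du=E[{Z}^{2}]=1$ for a standard Gaussian Z. A numerical confirmation (our addition) using the standard library quantile `NormalDist().inv_cdf`:

```python
from statistics import NormalDist

ppf = NormalDist().inv_cdf  # standard Gaussian quantile function

def integrate01(f, n=200000):
    """Midpoint rule on (0, 1); keeps clear of the singular endpoints."""
    h = 1.0 / n
    return h * sum(f((i + 0.5) * h) for i in range(n))

val = integrate01(lambda u: (2.0 * ppf(u)) ** 2)
print(abs(val - 4.0) < 1e-2)  # True
```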

An analogous result holds for the Student-t distribution with k degrees of freedom. In this case, the main difference from the Gaussian distribution is that the entropy generating function possesses no closed form but is obtained by numerical integration of the quantile function.

**Corollary 8.**

Let ${t}_{k}$ and ${t}_{k}^{-1}$ be the cdf and the quantile function, respectively, of a Student-t distribution with $k>2$ degrees of freedom. Then $\mu +\sigma \sqrt{\frac{k-2}{k}}{t}_{k}^{-1}$ is the maximum $CP{E}_{\phi}$ quantile function for a given mean μ and variance ${\sigma}^{2}$ iff

$$\phi \left(u\right)=-\sqrt{\frac{k-2}{k}}{\int}_{0}^{u}{t}_{k}^{-1}\left(p\right)dp,\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}u\in [0,1].$$

**Proof.**

Starting with

$${\phi}^{\prime}\left(u\right)=-\sqrt{\frac{k-2}{k}}{t}_{k}^{-1}\left(u\right),\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}u\in [0,1],$$

and the symmetry of the ${t}_{k}$ distribution around μ, we get the condition

$${F}^{-1}\left(u\right)-\mu =\frac{\sigma}{\sqrt{{\int}_{0}^{1}{\left(2\sqrt{\frac{k-2}{k}}{t}_{k}^{-1}\left(u\right)\right)}^{2}du}}\phantom{\rule{0.166667em}{0ex}}2\sqrt{\frac{k-2}{k}}\phantom{\rule{0.166667em}{0ex}}{t}_{k}^{-1}\left(u\right),\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}u\in [0,1].$$

With ${\int}_{0}^{1}{\left({t}_{k}^{-1}\left(u\right)\right)}^{2}du=k/(k-2)$ we get the quantile function of a rescaled Student-t distribution with k degrees of freedom, mean μ, and variance ${\sigma}^{2}$:

$${F}^{-1}\left(u\right)-\mu =\sigma \sqrt{\frac{k-2}{k}}\phantom{\rule{0.166667em}{0ex}}{t}_{k}^{-1}\left(u\right),\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}u\in [0,1].$$

☐

Figure 2 shows the shape of the entropy generating function φ for several distributions generated by the generalized ME principle.

## 6. CPE${}_{\phi}$ as a Measure of Scale

#### 6.1. Basic Properties of $CP{E}_{\phi}$

The cumulative residual entropy ($CRE$) introduced by [46], the generalized cumulative residual entropy ($GCRE$) of [50], and the cumulative entropy ($CE$) discussed by [8,9] have always been interpreted as measures of information. However, none of these approaches explains which kind of information is considered. In contrast to this interpretation, Oja [3] proved that the differential entropy satisfies a special ordering of scale and has certain meaningful properties of measures of scale. In [4], the authors discussed the close relationship between differential entropy and variance. In the discrete case, the Shannon entropy can be interpreted as a measure of diversity, which is a concept of dispersion if there is no ordering and no distance between the realizations of a random variable. In this section, we will clarify the important role that the variance plays for the existence of $CP{E}_{\phi}$.

Therefore, we intend to provide deeper insight into $CP{E}_{\phi}$ as a proper measure of scale (MOS). We start by showing that $CP{E}_{\phi}$ has typical properties of an MOS. In detail, a proper MOS should always be non-negative and attain its minimal value 0 for a degenerate distribution. If a finite interval $[a,b]$ is considered as support, an MOS should attain its maximum if a and b each occur with probability $1/2$. $CP{E}_{\phi}$ possesses all these properties, as the next proposition shows.

**Proposition 4.**

Let $\phi :[0,1]\to \mathbb{R}$ with $\phi \left(u\right)>0$ for $u\in (0,1)$ and $\phi \left(0\right)=\phi \left(1\right)=0$. Let X be a random variable with support D for which $CP{E}_{\phi}$ exists. Then the following properties hold:

- 1.
- $CP{E}_{\phi}\left(X\right)\ge 0$.
- 2.
- $CP{E}_{\phi}\left(X\right)=0$ iff there exists an ${x}^{*}$ with $\mathbb{P}(X={x}^{*})=1$.
- 3.
- $CP{E}_{\phi}\left(X\right)$ attains its maximum iff there exist $a,b$ with $-\infty <a<b<\infty $ such that $\mathbb{P}(X=a)=\mathbb{P}(X=b)=1/2$.

**Proof.**

- Follows from the non-negativity of φ.
- If there is an ${x}^{*}\in \mathbb{R}$ with $\mathbb{P}(X={x}^{*})=1$, then ${F}_{X}\left(x\right)\in \{0,1\}$ and ${\overline{F}}_{X}\left(x\right)\in \{0,1\}$ for all $x\in \mathbb{R}$. Due to $\phi \left(0\right)=\phi \left(1\right)=0$, it follows that $\phi \left({F}_{X}\left(x\right)\right)=\phi \left({\overline{F}}_{X}\left(x\right)\right)=0$ for all $x\in \mathbb{R}$. Conversely, let $CP{E}_{\phi}\left(X\right)=0$. Due to the non-negativity of the integrand of $CP{E}_{\phi}$, $\phi \left({F}_{X}\left(x\right)\right)+\phi \left({\overline{F}}_{X}\left(x\right)\right)=0$ must hold for all $x\in \mathbb{R}$. Since $\phi \left(u\right)>0$ for $0<u<1$, it follows that ${F}_{X}\left(x\right),{\overline{F}}_{X}\left(x\right)\in \{0,1\}$ for all $x\in \mathbb{R}$.
- Let $CP{E}_{\phi}\left(X\right)$ have a finite maximum. Since $\phi \left(u\right)+\phi (1-u)$ has a unique maximum at $u=1/2$, the maximum of $CP{E}_{\phi}\left(X\right)$ is$$2{\int}_{D}\phi (1/2)du=2\phi (1/2){\int}_{D}du.$$This maximum is attained by the cdf$$F\left(x\right)=\left\{\begin{array}{cc}0& \mathrm{for}\phantom{\rule{0.277778em}{0ex}}x<a\hfill \\ 1/2& \mathrm{for}\phantom{\rule{0.277778em}{0ex}}a\le x<b\hfill \\ 1& \mathrm{for}\phantom{\rule{0.277778em}{0ex}}x\ge b.\hfill \end{array}\right.$$To prove the other direction of statement 3, we consider an arbitrary distribution G with survival function $\overline{G}$ and support $[a,b]$. Due to $\phi \left(0\right)=\phi \left(1\right)=0$ and $\phi \left(u\right)+\phi (1-u)\le 2\phi (1/2)$, it holds that$$CP{E}_{\phi}\left(G\right)={\int}_{a}^{b}\phi \left(G\left(x\right)\right)+\phi \left(\overline{G}\left(x\right)\right)dx\le 2\phi (1/2)(b-a)=CP{E}_{\phi}\left(F\right).$$ ☐
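The maximum property in statement 3 can be illustrated numerically; the following sketch uses the Gini generating function $\phi \left(u\right)=u(1-u)$ (an assumed choice) and a Riemann-sum approximation of the integral:

```python
import numpy as np

# Numerical sketch of Proposition 4(3) with the (assumed) Gini generator
# phi(u) = u*(1 - u): on [a, b], the two-point cdf with F = 1/2 on (a, b)
# attains the maximal value 2*phi(1/2)*(b - a); a competing cdf stays below.

def cpe(F, a, b, n=200_000):
    # CPE_phi(F) = int_a^b phi(F(x)) + phi(1 - F(x)) dx, via a Riemann sum
    phi = lambda u: u * (1.0 - u)
    x = np.linspace(a, b, n)
    u = F(x)
    return (phi(u) + phi(1.0 - u)).mean() * (b - a)

a, b = 0.0, 3.0
cpe_two_point = cpe(lambda x: np.full_like(x, 0.5), a, b)  # P(X=a) = P(X=b) = 1/2
cpe_uniform = cpe(lambda x: (x - a) / (b - a), a, b)       # U(a, b) for comparison
bound = 2 * 0.25 * (b - a)                                 # 2*phi(1/2)*(b - a)
```

For these inputs the two-point cdf attains the bound, while the uniform cdf on the same interval remains strictly below it.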

#### 6.2. $CP{E}_{\phi}$ and Oja’s Axioms for Measures of Scale

Oja ([3] p. 159) defined an MOS as follows:

**Definition 2.**

Let $\mathcal{F}$ be a set of continuous distribution functions and ⪯ an appropriate ordering of scale on $\mathcal{F}$. $T:\mathcal{F}\to \mathbb{R}$ is called MOS, if

- 1.
- $T(aX+b)=\left|a\right|T\left(X\right)$ for all $a,b\in \mathbb{R}$, $F\in \mathcal{F}$.
- 2.
- $T\left({X}_{1}\right)\le T\left({X}_{2}\right)$, for ${X}_{1}\sim {F}_{1}$, ${X}_{2}\sim {F}_{2}$, ${F}_{1},{F}_{2}\in \mathcal{F}$ with ${F}_{1}\preceq {F}_{2}$.

Oja [3] discussed several orderings of scale. He showed in particular that Shannon entropy and variance satisfy a partial quantile-based ordering of scale, which has been discussed by [62]. Referring to [63], the author of [64] criticized that this ordering and the location-scale families of distributions focused on by Oja [3] were too restrictive. He discussed a more general nonparametric model of dispersion based on a more general ordering of scale (cf. [65,66]). In line with [4], we focus on the scale ordering proposed by [62].

**Definition 3.**

Let ${F}_{1}$, ${F}_{2}$ be continuous cdfs with respective quantile functions ${F}_{1}^{-1}$ and ${F}_{2}^{-1}$. ${F}_{2}$ is said to be more spread out than ${F}_{1}$ (${F}_{1}{\preceq}_{1}{F}_{2}$) if

$$\begin{array}{c}\hfill {F}_{2}^{-1}\left(u\right)-{F}_{2}^{-1}\left(v\right)\ge {F}_{1}^{-1}\left(u\right)-{F}_{1}^{-1}\left(v\right)\phantom{\rule{0.277778em}{0ex}}\text{for all}\phantom{\rule{0.277778em}{0ex}}0<v<u<1.\end{array}$$

If ${F}_{1}$, ${F}_{2}$ are absolutely continuous with density functions ${f}_{1}$, ${f}_{2}$, ${\preceq}_{1}$ can be characterized equivalently by the property that ${F}_{2}^{-1}\left({F}_{1}\left(x\right)\right)-x$ is monotonically non-decreasing or by

$${f}_{1}\left({F}_{1}^{-1}\left(u\right)\right)\ge {f}_{2}\left({F}_{2}^{-1}\left(u\right)\right),\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}u\in [0,1]$$

(cf. [3], p. 160).

Next, we show that $CP{E}_{\phi}$ is an MOS in the sense of [3]. The following lemma examines the behavior of $CP{E}_{\phi}$ under affine linear transformations, referring to the first axiom of Definition 2:

**Lemma 2.**

Let F be the cdf of the random variable X. Then

$$CP{E}_{\phi}(aX+b)=\left|a\right|CP{E}_{\phi}\left(X\right).$$

**Proof.**

For $Y=aX+b$, it follows that

$${\int}_{-\infty}^{\infty}\phi \left(P(Y\le y)\right)dy=\left\{\begin{array}{cc}{\int}_{-\infty}^{\infty}\phi \left(P\left(X\le \frac{y-b}{a}\right)\right)dy& \mathrm{for}\phantom{\rule{0.277778em}{0ex}}a>0\hfill \\ {\int}_{-\infty}^{\infty}\phi \left(P\left(X\ge \frac{y-b}{a}\right)\right)dy& \mathrm{for}\phantom{\rule{0.277778em}{0ex}}a<0\hfill \end{array}\right..$$

Substitution of $x=(y-b)/a$ with $\mathrm{d}y=a\mathrm{d}x$ gives

$${\int}_{-\infty}^{\infty}\phi \left(P(Y\le y)\right)dy=\left\{\begin{array}{cc}a{\int}_{-\infty}^{\infty}\phi \left(P\left(X\le x\right)\right)dx& \mathrm{for}\phantom{\rule{0.277778em}{0ex}}a>0\hfill \\ -a{\int}_{-\infty}^{\infty}\phi \left(P\left(X\ge x\right)\right)dx& \mathrm{for}\phantom{\rule{0.277778em}{0ex}}a<0\hfill \end{array}\right..$$

Likewise, we have that

$${\int}_{-\infty}^{\infty}\phi \left(P(Y>y)\right)dy=\left\{\begin{array}{cc}\hfill a{\int}_{-\infty}^{\infty}\phi \left(P\left(X>x\right)\right)dx& \mathrm{for}\phantom{\rule{0.277778em}{0ex}}a>0\hfill \\ \hfill -a{\int}_{-\infty}^{\infty}\phi \left(P\left(X\le x\right)\right)dx& \mathrm{for}\phantom{\rule{0.277778em}{0ex}}a<0\hfill \end{array}\right.,$$

such that

$$CP{E}_{\phi}(aX+b)=\left|a\right|CP{E}_{\phi}\left(X\right).$$

☐
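Lemma 2 can be checked numerically; the sketch below (with the assumed Gini generator $\phi \left(u\right)=u(1-u)$ and $X\sim U(0,1)$) also covers the case $a<0$:

```python
import numpy as np

# Numerical sketch of Lemma 2, CPE_phi(aX + b) = |a| * CPE_phi(X), using the
# (assumed) Gini generator phi(u) = u*(1 - u) and X ~ U(0, 1).

phi = lambda u: u * (1.0 - u)

def cpe_on_interval(F, lo, hi, n=400_000):
    # CPE_phi = int phi(F(x)) + phi(1 - F(x)) dx over the support [lo, hi]
    x = np.linspace(lo, hi, n)
    u = F(x)
    return (phi(u) + phi(1.0 - u)).mean() * (hi - lo)

cpe_x = cpe_on_interval(lambda x: x, 0.0, 1.0)                # X ~ U(0, 1)

a, b = -2.0, 1.0                                              # Y = aX + b ~ U(-1, 1)
cpe_y = cpe_on_interval(lambda y: (y + 1.0) / 2.0, -1.0, 1.0)
```

Here `cpe_y` should equal $|a|=2$ times `cpe_x`, in line with the lemma.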

In order to satisfy the second axiom of Oja’s definition of a measure of scale, $CP{E}_{\phi}$ has to satisfy the ordering of scale ⪯. This is shown by the following lemma:

**Lemma 3.**

Let ${F}_{1}$ and ${F}_{2}$ be continuous cdfs of the random variables ${X}_{1}$ and ${X}_{2}$ with ${F}_{1}{\u2aaf}_{1}{F}_{2}$. Then the following holds:

$$CP{E}_{\phi}\left({X}_{1}\right)\le CP{E}_{\phi}\left({X}_{2}\right).$$

**Proof.**

One can show with $u={F}_{i}\left(x\right)$ that

$$CP{E}_{\phi}\left({F}_{i}\right)={\int}_{0}^{1}\phi \left(u\right)\frac{1}{{f}_{i}\left({F}_{i}^{-1}\left(u\right)\right)}du+{\int}_{0}^{1}\phi (1-u)\frac{1}{{f}_{i}\left({F}_{i}^{-1}\left(u\right)\right)}du$$

for $i=1,2$. Therefore,

$$\begin{array}{ccc}\hfill CP{E}_{\phi}\left({F}_{1}\right)-CP{E}_{\phi}\left({F}_{2}\right)& =& {\int}_{0}^{1}\phi \left(u\right)\left(\frac{1}{{f}_{1}\left({F}_{1}^{-1}\left(u\right)\right)}-\frac{1}{{f}_{2}\left({F}_{2}^{-1}\left(u\right)\right)}\right)du\hfill \\ & & +{\int}_{0}^{1}\phi (1-u)\left(\frac{1}{{f}_{1}\left({F}_{1}^{-1}\left(u\right)\right)}-\frac{1}{{f}_{2}\left({F}_{2}^{-1}\left(u\right)\right)}\right)du.\hfill \end{array}$$

If ${F}_{1}{\preceq}_{1}{F}_{2}$ and hence ${f}_{1}\left({F}_{1}^{-1}\left(u\right)\right)\ge {f}_{2}\left({F}_{2}^{-1}\left(u\right)\right)$ for $u\in [0,1]$, it follows that $CP{E}_{\phi}\left({F}_{1}\right)-CP{E}_{\phi}\left({F}_{2}\right)\le 0$. ☐

As a consequence of Lemma 2 and Lemma 3, $CP{E}_{\phi}$ is an MOS in the sense of [3]. Thus, not only variance, differential entropy, and other statistical measures have the properties of measures of scale, but also $CP{E}_{\phi}$.

#### 6.3. $CP{E}_{\phi}$ and Transformations

Ebrahimi et al. ([4] p. 323) considered cdfs ${F}_{1}$, ${F}_{2}$ with supports ${D}_{1}$, ${D}_{2}$ and density functions ${f}_{1}$, ${f}_{2}$, which are connected via a differentiable transformation $g:{D}_{1}\to {D}_{2}$, that is, ${F}_{2}\left(y\right)={F}_{1}\left({g}^{-1}\left(y\right)\right)$ and ${f}_{2}\left(y\right)={f}_{1}\left({g}^{-1}\left(y\right)\right)\left|\mathrm{d}{g}^{-1}\left(y\right)/\mathrm{d}y\right|$ for $y\in {D}_{2}$. For Shannon’s differential entropy H, they demonstrated that the transformation only enters via an additive term:

$$H\left({f}_{2}\right)=H\left({f}_{1}\right)-{\int}_{{D}_{2}}ln\left|\frac{d{g}^{-1}\left(y\right)}{dy}\right|{f}_{2}\left(y\right)dy.$$

For $CP{E}_{\phi}$, one gets a less explicit relationship between $CP{E}_{\phi}\left({F}_{2}\right)$ and $CP{E}_{\phi}\left({F}_{1}\right)$:

$$CP{E}_{\phi}\left({F}_{2}\right)={\int}_{{D}_{1}}\left(\phi \left({F}_{1}\left(y\right)\right)+\phi \left({\overline{F}}_{1}\left(y\right)\right)\right)\left|{g}^{\prime}\left(y\right)\right|dy.$$

Transformations with $|{g}^{\prime}\left(y\right)|\ge 1$, $y\in {D}_{1}$, are of special interest since these transformations do not diminish measures of scale. In Theorem 1, Ebrahimi et al. [4] showed that ${F}_{1}{\preceq}_{1}{F}_{2}$ holds if $|{g}^{\prime}\left(y\right)|\ge 1$ for $y\in {D}_{1}$. Hence, no MOS can be diminished by such a transformation, in particular neither Shannon entropy nor $CP{E}_{\phi}$.

Ebrahimi et al. [4] considered the special transformation $g\left(x\right)=ax+b$, $x\in {D}_{1}$. They showed that Shannon’s differential entropy is shifted additively by this transformation, which is not the behavior expected of an MOS. In contrast, the standard deviation is changed by the factor $\left|a\right|$, which is also true for $CP{E}_{\phi}$, as shown in Lemma 2.

#### 6.4. $CP{E}_{\phi}$ for Sums of Independent Random Variables

As is generally known, variance and differential entropy behave additively for the sum of independent random variables X and Y. More general entropies such as the Rényi or the Havrda & Charvát entropy are only subadditive (cf. [18], p. 194).

Neither additivity nor subadditivity can be shown for cumulative paired φ-entropies. Instead, they possess a maximum property if φ is a concave function on $[0,1]$. This means that, for two independent variables X and Y, $CP{E}_{\phi}(X+Y)$ is bounded from below by the maximum of the two individual entropies $CP{E}_{\phi}\left(X\right)$ and $CP{E}_{\phi}\left(Y\right)$. This result was shown by [46] for the cumulative residual Shannon entropy. The following theorem generalizes this result; the proof partially follows the proof of Theorem 2 of [46].

**Theorem 4.**

Let X and Y be independent random variables and φ a concave function on the interval $[0,1]$ with $\phi \left(0\right)=\phi \left(1\right)=0$. Then we have

$$CP{E}_{\phi}(X+Y)\ge max\left\{CP{E}_{\phi}\left(X\right),CP{E}_{\phi}\left(Y\right)\right\}.$$

**Proof.**

Let X and Y be independent random variables with distribution functions ${F}_{X}$, ${F}_{Y}$ and densities ${f}_{X}$, ${f}_{Y}$. Using the convolution formula, we immediately get

$$P(X+Y\le t)={\int}_{-\infty}^{\infty}{F}_{X}(t-y){f}_{Y}\left(y\right)dy={E}_{Y}\left[{F}_{X}(t-Y)\right],\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}t\in \mathbb{R}.$$

Applying Jensen’s inequality for a concave function φ to Equation (37) results in

$${E}_{Y}\left[\phi \left({F}_{X}(t-Y)\right)\right]\ge \phi \left({E}_{Y}\left[{F}_{X}(t-Y)\right]\right)$$

and

$${E}_{Y}\left[\phi \left({\overline{F}}_{X}(t-Y)\right)\right]\ge \phi \left({E}_{Y}\left[{\overline{F}}_{X}(t-Y)\right]\right).$$

The existence of the expectation is assumed. To prove the Theorem, we begin with

$$CP{E}_{\phi}[X+Y]={\int}_{-\infty}^{\infty}\phi \left({E}_{Y}\left[{F}_{X}(t-Y)\right]\right)+\phi \left({E}_{Y}\left[{\overline{F}}_{X}(t-Y)\right]\right)dt.$$

By using Equations (38) and (39), setting $z=t-y$, and exchanging the order of integration, one obtains

$$\begin{array}{ccc}\hfill CP{E}_{\phi}[X+Y]& \ge & {\int}_{-\infty}^{\infty}{\int}_{-\infty}^{\infty}\phi \left({F}_{X}(t-y)\right)+\phi \left({\overline{F}}_{X}(t-y)\right)dt{f}_{Y}\left(y\right)dy\hfill \\ & =& {\int}_{-\infty}^{\infty}{\int}_{-\infty}^{\infty}\phi \left({F}_{X}\left(z\right)\right)+\phi \left({\overline{F}}_{X}\left(z\right)\right)dz{f}_{Y}\left(y\right)dy\hfill \\ & =& {\int}_{-\infty}^{\infty}\phi \left({F}_{X}\left(z\right)\right)+\phi \left({\overline{F}}_{X}\left(z\right)\right)dz=CP{E}_{\phi}\left[X\right].\hfill \end{array}$$

☐

In the context of uncertainty theory, Liu [6] considered a different definition of independence for uncertain variables, leading to the simpler additivity property

$$CP{E}_{\phi}(X+Y)=CP{E}_{\phi}\left(X\right)+CP{E}_{\phi}\left(Y\right)$$

for independent uncertain variables X and Y.

## 7. Estimation of CPE${}_{\phi}$

Beirlant et al. [67] presented an overview of differential entropy estimators. Essentially, all proposals are based on the estimation of a density function f, inheriting all typical problems of the nonparametric estimation of a density function. Among others, these problems are bias, the choice of a kernel, and the optimal choice of the smoothing parameter (cf. [68], p. 215ff.). However, $CP{E}_{\phi}$ is based on the cdf F, for which several natural estimators with desirable stochastic properties, derived from the theorem of Glivenko and Cantelli (cf. [69], p. 61), exist. For a simple random sample $({X}_{1},...,{X}_{n})$ of independent random variables with common distribution function F, the authors of [8,9] estimated F using the empirical distribution function ${F}_{n}\left(x\right)=\frac{1}{n}{\sum}_{i=1}^{n}\mathrm{I}({X}_{i}\le x)$ for $x\in \mathbb{R}$. Moreover, they showed for the cumulative entropy $CE\left(F\right)=-{\int}_{\mathbb{R}}F\left(x\right)lnF\left(x\right)\mathrm{d}x$ that the estimator $CE\left({F}_{n}\right)$ is consistent for $CE\left(F\right)$ (cf. [8]). In particular, for F being the distribution function of a uniform distribution, they provided the expected value of the estimator and demonstrated that the estimator is asymptotically normal. For F being the cdf of an exponential distribution, they additionally derived the variance of the estimator.

In the following, we generalize the estimation approach of [8] by embedding it into the well-established theory of L-estimators (cf. [70], p. 55ff.). If φ is differentiable, then $CP{E}_{\phi}$ can be represented as the covariance between the random variable X and ${\phi}^{\prime}\left(\overline{F}\left(X\right)\right)-{\phi}^{\prime}\left(F\left(X\right)\right)$:

$$CP{E}_{\phi}\left(F\right)=E\left(X\left({\phi}^{\prime}\left(\overline{F}\left(X\right)\right)-{\phi}^{\prime}\left(F\left(X\right)\right)\right)\right).$$

An unbiased estimator for this covariance is

$$\begin{array}{ccc}\hfill CP{E}_{\phi}\left({F}_{n}\right)& =& \frac{1}{n}\sum _{i=1}^{n}{X}_{i}({\phi}^{\prime}(1-{F}_{n}\left({X}_{i}\right))-{\phi}^{\prime}\left({F}_{n}\left({X}_{i}\right)\right))\hfill \\ & =& \frac{1}{n}\sum _{i=1}^{n}{X}_{n:i}({\phi}^{\prime}(1-{F}_{n}\left({X}_{n:i}\right))-{\phi}^{\prime}\left({F}_{n}\left({X}_{n:i}\right)\right))\hfill \\ & =& \frac{1}{n}\sum _{i=1}^{n}\left({\phi}^{\prime}\left(1-\frac{i}{n+1}\right)-{\phi}^{\prime}\left(\frac{i}{n+1}\right)\right){X}_{n:i}\hfill \\ & =& \sum _{i=1}^{n}{c}_{ni}{X}_{n:i}\hfill \end{array}$$

where

$${c}_{ni}=\frac{1}{n}\left({\phi}^{\prime}\left(1-\frac{i}{n+1}\right)-{\phi}^{\prime}\left(\frac{i}{n+1}\right)\right),\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}i=1,2,\dots ,n.$$
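The estimator above is straightforward to implement. A minimal sketch for the Gini case, where $\phi \left(u\right)=u(1-u)$, ${\phi}^{\prime}\left(u\right)=1-2u$, and hence $J\left(u\right)=2(2u-1)$ (an assumed choice of generator):

```python
import numpy as np

# Minimal sketch of the L-estimator CPE_phi(F_n) = sum_i c_ni * X_(n:i) for
# the (assumed) Gini generator phi(u) = u*(1 - u), so J(u) = 2*(2u - 1).

def cpe_gini_L(x):
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    u = np.arange(1, n + 1) / (n + 1)      # plotting positions i/(n+1)
    c = 2.0 * (2.0 * u - 1.0) / n          # c_ni = J(i/(n+1)) / n
    return float(np.dot(c, xs))

# In the Gini case CPE_G coincides with Gini's mean difference E|X - X'|
# (a known identity), which equals 1/3 for X ~ U(0, 1).
rng = np.random.default_rng(0)
est = cpe_gini_L(rng.uniform(size=100_000))
```

For a large uniform sample the estimate should therefore be close to $1/3$.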

This results in an L-estimator ${\sum}_{i=1}^{n}J(i/(n+1)){X}_{n:i}$ with $J\left(u\right)={\phi}^{\prime}(1-u)-{\phi}^{\prime}\left(u\right)$, $u\in (0,1)$. By applying known results for the influence functions of L-estimators (cf. [70]), we get for the influence function of $CP{E}_{\phi}$:

$$\begin{array}{ccc}\hfill IF(x;CP{E}_{\phi},F)& =& {\int}_{0}^{1}\frac{u}{f\left({F}^{-1}\left(u\right)\right)}({\phi}^{\prime}(1-u)-{\phi}^{\prime}\left(u\right))du\hfill \\ & & -{\int}_{F\left(x\right)}^{1}\frac{1}{f\left({F}^{-1}\left(u\right)\right)}({\phi}^{\prime}(1-u)-{\phi}^{\prime}\left(u\right))du.\hfill \end{array}$$

In particular, the derivative is

$$\frac{dIF(x;CP{E}_{\phi},F)}{dx}={\phi}^{\prime}\left(\overline{F}\left(x\right)\right)-{\phi}^{\prime}\left(F\left(x\right)\right),\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}x\in \mathbb{R}.$$

This means that the influence function is completely determined by the antiderivative of ${\phi}^{\prime}\left(\overline{F}\left(x\right)\right)-{\phi}^{\prime}\left(F\left(x\right)\right)$. The following examples demonstrate that the influence function of $CP{E}_{\phi}$ can easily be calculated if the underlying distribution F is logistic. We consider the Shannon, the Gini, and the α-entropy cases.

**Example 2.**

Beginning with the derivative

$$\frac{dIF(x;CP{E}_{S},F)}{dx}={\phi}^{\prime}\left(\overline{F}\left(x\right)\right)-{\phi}^{\prime}\left(F\left(x\right)\right)=ln\left(\frac{F\left(x\right)}{\overline{F}\left(x\right)}\right)=x,\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}x\in \mathbb{R},$$

we arrive at

$$IF(x,CP{E}_{S},F)=\frac{1}{2}{x}^{2}+C,\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}x\in \mathbb{R}.$$

The influence function is not bounded and is proportional to the influence function of the variance, which implies that the variance and $CP{E}_{S}$ have a similar asymptotic and robustness behavior. The integration constant C has to be determined such that $E\left[IF(X;CP{E}_{S},F)\right]=0$:

$$C=-\frac{1}{2}E\left({X}^{2}\right)=-\frac{1}{2}\frac{{\pi}^{2}}{3}=-\frac{{\pi}^{2}}{6}.$$
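The constant can be verified numerically; a sketch (grid and cutoff are assumptions) that integrates ${x}^{2}$ against the standard logistic density:

```python
import numpy as np

# Numerical sanity check for Example 2: under the standard logistic
# distribution, E(X^2) = pi**2/3, so the centering constant is
# C = -E(X^2)/2 = -pi**2/6.

x = np.linspace(-60.0, 60.0, 1_200_001)
dx = x[1] - x[0]
f = np.exp(-x) / (1.0 + np.exp(-x))**2   # standard logistic density
ex2 = np.sum(x**2 * f) * dx              # Riemann-sum approximation of E(X^2)
C = -0.5 * ex2
```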

**Example 3.**

Using the Gini entropy $CP{E}_{G}$ and the logistic distribution function F we have

$$\begin{array}{ccc}\hfill \frac{dIF(x;CP{E}_{G},F)}{dx}& =& {\phi}^{\prime}\left(\overline{F}\left(x\right)\right)-{\phi}^{\prime}\left(F\left(x\right)\right)=2(2F\left(x\right)-1)\hfill \\ & =& 2\frac{{e}^{x}-1}{{e}^{x}+1}=2tanh\left(\frac{x}{2}\right),\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}x\in \mathbb{R}.\hfill \end{array}$$

Integration gives the influence function

$$IF(x,CP{E}_{G},F)=4ln\left(cosh\left(\frac{x}{2}\right)\right)+C,\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}x\in \mathbb{R}.$$

By applying numerical integration we get $C=-1.2274$.
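The numerical integration can be reproduced as follows (a sketch): the centering constant is $C=-E\left[4ln\left(cosh\left(X/2\right)\right)\right]$, and under the standard logistic distribution this expectation works out to $4-4ln2\approx 1.2274$:

```python
import numpy as np

# Numerical sanity check for Example 3: integrate 4*ln(cosh(x/2)) against
# the standard logistic density; the result should be 4 - 4*ln(2) ~ 1.2274,
# so C ~ -1.2274.

x = np.linspace(-60.0, 60.0, 1_200_001)
dx = x[1] - x[0]
f = np.exp(-x) / (1.0 + np.exp(-x))**2          # standard logistic density
mean_val = np.sum(4.0 * np.log(np.cosh(x / 2.0)) * f) * dx
C = -mean_val
```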

**Example 4.**

For $\phi \left(u\right)=u({u}^{\alpha -1}-1)/(1-\alpha )$, the derivative of the influence function is given by

$$\begin{array}{ccc}\hfill \frac{dIF(x;CP{E}_{\alpha},F)}{dx}& =& {\phi}^{\prime}\left(\overline{F}\left(x\right)\right)-{\phi}^{\prime}\left(F\left(x\right)\right)=\frac{\alpha}{1-\alpha}\frac{1-{e}^{(\alpha -1)x}}{{(1+{e}^{x})}^{\alpha -1}}\hfill \\ & =& \frac{\alpha}{1-\alpha}\left(\frac{1}{{(1+{e}^{x})}^{\alpha -1}}-\frac{1}{{(1+{e}^{-x})}^{\alpha -1}}\right),\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}x\in \mathbb{R}.\hfill \end{array}$$

Integration leads to the influence function

$$\begin{array}{ccc}\hfill IF(x;CP{E}_{\alpha},F)& =& {}_{2}{F}_{1}(\alpha ,\alpha ;\alpha +1;-{e}^{-x})\frac{{e}^{\alpha x}}{\alpha}\left(1+{\left(\frac{{e}^{-x}+1}{{e}^{x}+1}\right)}^{\alpha}\right)\hfill \\ & & +\frac{1}{\alpha -1}\left(\frac{1+{e}^{x}+{e}^{(\alpha -1)x}}{{({e}^{x}+1)}^{\alpha}}-1\right)+C,\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}x\in \mathbb{R},\hfill \end{array}$$

where

$${}_{2}{F}_{1}(\alpha ,\alpha ;\alpha +1;-{e}^{-x})=\alpha {\int}_{0}^{1}{t}^{\alpha -1}{\left(1+t{e}^{-x}\right)}^{-\alpha}dt.$$

Under certain conditions (cf. [71], p. 143) concerning J, or φ and F, L-estimators are consistent and asymptotically normal. Hence, the estimator of the cumulative paired φ-entropy satisfies

$$CP{E}_{\phi}\left({F}_{n}\right){\sim}_{asy}N\left(CP{E}_{\phi}\left(F\right),\frac{1}{n}A(F,CP{E}_{\phi})\right)$$

with asymptotic variance

$$A(F,CP{E}_{\phi})=Var\left(IF(X;CP{E}_{\phi}\left(F\right),F)\right)={\int}_{-\infty}^{\infty}{\left({\int}_{F\left(x\right)}^{1}\frac{{\phi}^{\prime}(1-u)-{\phi}^{\prime}\left(u\right)}{f\left({F}^{-1}\left(u\right)\right)}du\right)}^{2}f\left(x\right)dx.$$

The following examples consider the Shannon and the Gini case for which the condition that is sufficient to guarantee asymptotic normality can easily be checked. We consider again the cdf F of the logistic distribution.

**Example 5.**

For the cumulative paired Shannon entropy, it holds that

$$CP{E}_{S}\left({F}_{n}\right){\sim}_{asy}N\left(CP{E}_{S}\left(F\right),\frac{1}{n}\frac{4}{45}{\pi}^{4}\right)$$

since

$$A(F,L)=Var\left(IF(X;CP{E}_{\phi}\left(F\right),F)\right)=\frac{1}{4}Var\left({X}^{2}\right)=\frac{1}{4}\left(E\left({X}^{4}\right)-E{\left({X}^{2}\right)}^{2}\right)=\frac{4}{45}{\pi}^{4}.$$
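This value can be confirmed numerically from the logistic moments $E\left({X}^{2}\right)={\pi}^{2}/3$ and $E\left({X}^{4}\right)=7{\pi}^{4}/15$ (assumed, standard moment values); a sketch:

```python
import numpy as np

# Moment check for Example 5: for the standard logistic distribution,
# Var(X^2)/4 = (E(X^4) - E(X^2)**2)/4 should equal 4*pi**4/45.

x = np.linspace(-80.0, 80.0, 1_600_001)
dx = x[1] - x[0]
f = np.exp(-x) / (1.0 + np.exp(-x))**2   # standard logistic density
ex2 = np.sum(x**2 * f) * dx
ex4 = np.sum(x**4 * f) * dx
A = (ex4 - ex2**2) / 4.0
```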

**Example 6.**

In the Gini case, we get

$$CP{E}_{G}\left({F}_{n}\right){\sim}_{asy}N\left(CP{E}_{G}\left(F\right),\frac{2.8405}{n}\right)$$

since, by numerical integration,

$$A(F,L)={\int}_{-\infty}^{\infty}{\left(4ln\left(cosh\left(\frac{x}{2}\right)\right)-1.2274\right)}^{2}\frac{{e}^{-x}}{{(1+{e}^{-x})}^{2}}dx=2.8405.$$
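The numerical integration can be reproduced with a simple Riemann sum; a sketch:

```python
import numpy as np

# Numerical check for Example 6: integrate the squared centered influence
# function against the standard logistic density; the result should be
# close to 2.8405.

x = np.linspace(-60.0, 60.0, 1_200_001)
dx = x[1] - x[0]
f = np.exp(-x) / (1.0 + np.exp(-x))**2          # standard logistic density
h = 4.0 * np.log(np.cosh(x / 2.0)) - 1.2274     # centered influence function
A = np.sum(h**2 * f) * dx
```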

It is known that L-estimators have a notable small-sample bias. Following [72], the bias can be reduced by applying the jackknife method. It is well known that asymptotic distributions can be used to construct approximate confidence intervals and can be applied to hypothesis tests in the one- or two-sample case. The authors of [70] (p. 116ff.) discussed asymptotically efficient L-estimators for a parameter of scale θ. Klein et al. [73] examine how the entropy generating function φ is determined by the requirement that $CP{E}_{\phi}\left({F}_{n}\right)$ be asymptotically efficient.

## 8. Related Concepts

Several statistical concepts are closely related to cumulative paired φ-entropies. These concepts generalize some results that are known from the literature. We begin with the cumulative paired φ-divergence, which was discussed for the first time by [41], who called it “generalized cross entropy”. Their focus was on uncertain variables, whereas ours is on random variables. The second concept generalizes mutual information, which is defined for Shannon’s differential entropy, to mutual φ-information. We consider two random variables X and Y. The task is to decompose $CP{E}_{\phi}\left(Y\right)$ into two kinds of variation such that the so-called external variation measures how much of $CP{E}_{\phi}\left(Y\right)$ can be explained by X. This procedure mimics the well-known decomposition of variance and allows us to define directed measures of dependence for X and Y. The third concept deals with dependence. More precisely, we introduce a new family of correlation coefficients that measure the strength of a monotonic relationship between X and Y. Well-known coefficients like the Gini correlation can be embedded in this approach. The fourth concept treats the problem of linear regression. $CP{E}_{\phi}$ can serve as a general measure of dispersion that has to be minimized to estimate the regression coefficients. This approach will be identified as a special case of rank-based regression or R regression. Here, the robustness properties of the rank-based estimator can be derived directly from the entropy generating function φ. Moreover, asymptotics can be derived from the theory of rank-based regression. The last concept we discuss applies $CP{E}_{\phi}$ to linear rank tests for differences in scale. Known results, especially concerning the asymptotics, can be transferred from the theory of linear rank tests to this new class of tests. In this paper, we only sketch the main results and focus on examples. For a detailed discussion including proofs, we refer to a series of papers by Klein and Mangold ([73,74,75]), which are currently work in progress.

#### 8.1. Cumulative Paired φ-Divergence

Let φ be a concave function defined on $[0,\infty )$ with $\phi \left(0\right)=\phi \left(1\right)=0$. Additionally, we use the convention $0\cdot \phi (0/0)=0$. In the literature, φ-divergences are defined for convex functions φ (cf., e.g., [76], p. 5). Consequently, we consider $-\phi $ with φ concave.

The cumulative paired φ-divergence for two random variables is defined as follows.

**Definition 4.**

Let X and Y be two random variables with cdfs ${F}_{X}$ and ${F}_{Y}$. Then the cumulative paired φ-divergence of X and Y is given by

$$CP{D}_{\phi}(X,Y)=-{\int}_{-\infty}^{\infty}{F}_{Y}\left(x\right)\phi \left(\frac{{F}_{X}\left(x\right)}{{F}_{Y}\left(x\right)}\right)+{\overline{F}}_{Y}\left(x\right)\phi \left(\frac{{\overline{F}}_{X}\left(x\right)}{{\overline{F}}_{Y}\left(x\right)}\right)dx.$$

The following examples introduce cumulative paired φ-divergences for the Shannon, the α-entropy, the Gini, and the Leik cases:

**Example 7.**

- 1.
- Considering $\phi \left(u\right)=-ulnu$, $u\in [0,\infty )$, we obtain the cumulative paired Shannon divergence$$CP{D}_{S}(X,Y)={\int}_{-\infty}^{\infty}{F}_{X}\left(x\right)ln\left(\frac{{F}_{X}\left(x\right)}{{F}_{Y}\left(x\right)}\right)+{\overline{F}}_{X}\left(x\right)ln\left(\frac{{\overline{F}}_{X}\left(x\right)}{{\overline{F}}_{Y}\left(x\right)}\right)dx.$$
- 2.
- Setting $\phi \left(u\right)=u({u}^{\alpha -1}-1)/(1-\alpha )$, $u\in [0,\infty )$, leads to the cumulative paired α-divergence$$CP{D}_{\alpha}(X,Y)=\frac{1}{\alpha -1}{\int}_{-\infty}^{\infty}\left({F}_{X}{\left(x\right)}^{\alpha}{F}_{Y}{\left(x\right)}^{1-\alpha}+{\overline{F}}_{X}{\left(x\right)}^{\alpha}{\overline{F}}_{Y}{\left(x\right)}^{1-\alpha}-1\right)dx.$$
- 3.
- For $\alpha =2$ we receive as a special case the cumulative paired Gini divergence$$\begin{array}{ccc}\hfill CP{D}_{G}(X,Y)& =& {\int}_{-\infty}^{\infty}\left(\frac{{F}_{X}{\left(x\right)}^{2}}{{F}_{Y}\left(x\right)}+\frac{{\overline{F}}_{X}{\left(x\right)}^{2}}{{\overline{F}}_{Y}\left(x\right)}-1\right)dx\hfill \\ & =& {\int}_{-\infty}^{\infty}\frac{{({F}_{X}\left(x\right)-{F}_{Y}\left(x\right))}^{2}}{{F}_{Y}\left(x\right){\overline{F}}_{Y}\left(x\right)}dx.\hfill \end{array}$$
- 4.
- The choice $\phi \left(u\right)=1/2-|u-1/2|$, $u\in [0,1]$, leads to the cumulative paired Leik divergence$$\begin{array}{ccc}\hfill CP{D}_{L}(X,Y)& =& {\int}_{-\infty}^{\infty}-{F}_{Y}\left(x\right)\left(\frac{1}{2}-\left|\frac{{F}_{X}\left(x\right)}{{F}_{Y}\left(x\right)}-\frac{1}{2}\right|\right)-{\overline{F}}_{Y}\left(x\right)\left(\frac{1}{2}-\left|\frac{{\overline{F}}_{X}\left(x\right)}{{\overline{F}}_{Y}\left(x\right)}-\frac{1}{2}\right|\right)dx\hfill \\ & =& {\int}_{-\infty}^{\infty}-\frac{1}{2}+\left|{F}_{X}\left(x\right)-\frac{1}{2}{F}_{Y}\left(x\right)\right|+\left|\frac{1}{2}+\frac{1}{2}{F}_{Y}\left(x\right)-{F}_{X}\left(x\right)\right|dx\hfill \\ & =& {\int}_{-\infty}^{\infty}-\frac{1}{2}+\left|{F}_{X}\left(x\right)-\frac{1}{2}{F}_{Y}\left(x\right)\right|+\left|{F}_{X}\left(x\right)-\frac{1}{2}(1+{F}_{Y}\left(x\right))\right|dx\hfill \end{array}$$
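The equality of the two integral representations of the cumulative paired Gini divergence in item 3 can be checked numerically; the sketch below uses two logistic cdfs with different scales (an assumed choice), with survival functions computed directly to avoid cancellation in the tails:

```python
import numpy as np

# Numerical check that the two expressions for CPD_G coincide, using
# F_X standard logistic and F_Y logistic with scale 2 (assumed examples).

x = np.linspace(-80.0, 80.0, 800_001)
dx = x[1] - x[0]
FX = 1.0 / (1.0 + np.exp(-x))        # standard logistic cdf
FbarX = 1.0 / (1.0 + np.exp(x))      # its survival function
FY = 1.0 / (1.0 + np.exp(-x / 2.0))  # logistic cdf with scale 2
FbarY = 1.0 / (1.0 + np.exp(x / 2.0))

form1 = np.sum(FX**2 / FY + FbarX**2 / FbarY - 1.0) * dx
form2 = np.sum((FX - FY)**2 / (FY * FbarY)) * dx
```

Both Riemann sums approximate the same divergence, so `form1` and `form2` agree up to numerical noise.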

$CP{D}_{S}$ is equivalent to the Anderson-Darling functional (cf. [77]) and has been used by [78] for a goodness-of-fit test, where ${F}_{X}$ represents the empirical distribution. Likewise, $CP{D}_{S}$ serves as a goodness-of-fit test (cf. [79]).

Further work in this area with similar concepts was done by [80,81], using the notation cumulative residual Kullback-Leiber (CRKL) information and cumulative Kullback-Leiber (CKL) information.

Based on work from [82,83,84,85] a general function ${\phi}_{\alpha}$ was discussed by [86]:

$${\phi}_{\alpha}\left(u\right)=\left\{\begin{array}{cc}(\alpha -1-\alpha u+{u}^{\alpha})/\left(\alpha (1-\alpha )\right)& \mathrm{for}\phantom{\rule{0.277778em}{0ex}}\alpha \ne 0,1\hfill \\ -u(lnu-1)-1& \mathrm{for}\phantom{\rule{0.277778em}{0ex}}\alpha =1\hfill \\ lnu-u+1& \mathrm{for}\phantom{\rule{0.277778em}{0ex}}\alpha =0.\hfill \end{array}\right.$$

Up to a multiplicative constant, ${\phi}_{\alpha}$ includes all of the aforementioned examples. In addition, the Hellinger distance is a special case for $\alpha =1/2$ that leads to the cumulative paired Hellinger divergence:

$$CP{D}_{H}(X,Y)=2{\int}_{-\infty}^{\infty}{\left(\sqrt{{F}_{X}\left(x\right)}-\sqrt{{F}_{Y}\left(x\right)}\right)}^{2}+{\left(\sqrt{{\overline{F}}_{X}\left(x\right)}-\sqrt{{\overline{F}}_{Y}\left(x\right)}\right)}^{2}dx.$$

For a strictly concave function φ, Chen et al. [41] proved that $CP{D}_{\phi}(X,Y)\ge 0$ and $CP{D}_{\phi}(X,Y)=0$ iff X and Y have identical distributions. Thus, the cumulative paired φ-divergence can be interpreted as a kind of distance between distribution functions. As an application, Chen et al. [41] mentioned the “minimum cross-entropy principle”. They proved that X follows a logistic distribution if $CP{D}_{S}$ is minimized, given that Y is exponentially distributed and the variance of X is fixed. If ${F}_{Y}$ is an empirical distribution and ${F}_{X}$ has an unknown vector of parameters θ, $CP{D}_{\phi}$ can be minimized to obtain a point estimator for θ (cf. [87]). The large class of goodness-of-fit tests based on $CP{D}_{\phi}$, discussed by Jager et al. [86], has already been mentioned.

#### 8.2. Mutual Cumulative φ-Information

Let X and Y again be random variables with cdfs ${F}_{X}$, ${F}_{Y}$, density functions ${f}_{X}$, ${f}_{Y}$, and the conditional distribution function ${F}_{Y|X}$. ${D}_{X}$ and ${D}_{Y}$ denote the supports of X and Y. Then we have

$$CP{E}_{\phi}\left(Y\right|x)={\int}_{-\infty}^{\infty}\phi \left({F}_{Y|X}\left(y\right|x)\right)dy+{\int}_{-\infty}^{\infty}\phi \left(1-{F}_{Y|X}\left(y\right|x)\right)dy,$$

which is the variation of Y given $X=x$. Averaging with respect to x leads to the internal variation

$${E}_{X}\left(CP{E}_{\phi}\left(Y\right|X)\right)={\int}_{-\infty}^{\infty}CP{E}_{\phi}\left(Y\right|x){f}_{X}\left(x\right)dx.$$

For a concave entropy generating function φ, this internal variation cannot be greater than the total variation $CP{E}_{\phi}\left(Y\right)$. More precisely, it holds:

- ${E}_{X}\left(CP{E}_{\phi}\left(Y\right|X)\right)\le CP{E}_{\phi}\left(Y\right)$.
- ${E}_{X}\left(CP{E}_{\phi}\left(Y\right|X)\right)=CP{E}_{\phi}\left(Y\right)$ if X and Y are independent.
- If φ is strictly concave and ${E}_{X}\left(CP{E}_{\phi}\left(Y\right|X)\right)=CP{E}_{\phi}\left(Y\right)$, X and Y are independent random variables.

We consider the non-negative difference

$$MCP{I}_{\phi}(X,Y):=CP{E}_{\phi}\left(Y\right)-{E}_{X}\left(CP{E}_{\phi}\left(Y\right|X)\right).$$

This expression measures the part of the variation of Y that can be explained by the variable X (the external variation) and shall be named “mutual cumulative paired φ-information” $MCP{I}_{\phi}$ (cf. Rao et al. [46]; the term “cross entropy” is used in [50], p. 3). $MCP{I}_{\phi}$ is analogous to the transinformation that is defined for Shannon’s differential entropy (cf. [60], p. 20f.). In contrast to transinformation, $MCP{I}_{\phi}$ is not symmetric, so $MCP{I}_{\phi}(X,Y)=MCP{I}_{\phi}(Y,X)$ does not hold in general.

Mutual cumulative paired φ-information is the starting point for two directed measures of the strength of φ-dependence between X and Y, named “directed measures of cumulative paired φ-dependence” ($DCPD$). The first one is

$$DCP{D}_{\phi}^{1}(X\to Y)=\frac{MCP{I}_{\phi}(X,Y)}{CP{E}_{\phi}\left(Y\right)}$$

and the second one is

$$DCP{D}_{\phi}^{2}(X\to Y)=\frac{CP{E}_{\phi}{\left(Y\right)}^{2}-{E}_{X}\left(CP{E}_{\phi}{\left(Y\right|X)}^{2}\right)}{CP{E}_{\phi}{\left(Y\right)}^{2}}.$$

Both expressions measure the relative decrease in the variation of Y if X is known. The domain is $[0,1]$. The lower bound 0 is attained if Y and X are independent, while the upper bound 1 corresponds to ${E}_{X}\left(CP{E}_{\phi}\left(Y\right|X)\right)=0$. In this case, from $\phi \left(u\right)>0$ for $0<u<1$ and $\phi \left(0\right)=\phi \left(1\right)=0$, we can conclude that the conditional distribution ${F}_{Y|X}\left(y\right|x)$ has to be degenerate. Thus, for every $x\in {D}_{X}$ there is exactly one ${y}^{*}\in {D}_{Y}$ with $P(Y={y}^{*}|X=x)=1$. Therefore, there is a perfect association between X and Y. The next example illustrates these concepts and demonstrates the advantage of considering both types of measures of dependence.

**Example 8.**

Let $(X,Y)$ follow a bivariate standard Gaussian distribution with $E\left(X\right)=E\left(Y\right)=0$, $Var\left(X\right)=Var\left(Y\right)=1$, and $Cov(X,Y)=\rho $, $-1<\rho <1$. Note that X and Y follow univariate standard Gaussian distributions, whereas $X+Y$ follows a univariate Gaussian distribution with mean 0 and variance $2(1+\rho )$. Considering this, one can conclude that

$${F}_{X}^{-1}\left(u\right)={F}_{Y}^{-1}\left(u\right)={\Phi}^{-1}\left(u\right),\phantom{\rule{0.277778em}{0ex}}{F}_{X+Y}^{-1}\left(u\right)=\sqrt{2(1+\rho )}{\Phi}^{-1}\left(u\right),\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}u\in [0,1].$$

By plugging this quantile function into the defining equation of the cumulative paired φ-entropy, one obtains

$$CP{E}_{\phi}(X+Y)=\sqrt{2(1+\rho )}CP{E}_{\phi}\left(X\right)\le CP{E}_{\phi}\left(X\right)+CP{E}_{\phi}\left(Y\right).$$
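The identity $CP{E}_{\phi}(X+Y)=\sqrt{2(1+\rho )}CP{E}_{\phi}\left(X\right)$ can be illustrated by Monte Carlo in the Gini case, where $CP{E}_{G}$ equals Gini’s mean difference ($2\sigma /\sqrt{\pi}$ for a centered Gaussian with standard deviation σ, a standard fact); a sketch with $\rho =0.5$:

```python
import numpy as np

# Monte Carlo sketch: for bivariate standard Gaussian (X, Y) with
# correlation rho, CPE_G(X + Y) should equal sqrt(2*(1 + rho)) * 2/sqrt(pi).

rng = np.random.default_rng(1)
rho, n = 0.5, 200_000
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)

def gini_md(z):
    # L-statistic form of the sample Gini mean difference E|Z - Z'|
    zs = np.sort(z)
    i = np.arange(1, len(zs) + 1)
    return 2.0 / (len(zs) * (len(zs) - 1)) * float(np.dot(2 * i - len(zs) - 1.0, zs))

lhs = gini_md(x + y)                                  # sample CPE_G(X + Y)
rhs = np.sqrt(2.0 * (1.0 + rho)) * 2.0 / np.sqrt(np.pi)
```

Up to Monte Carlo error, `lhs` and `rhs` agree.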

For $\rho \to -1$, the cumulative paired φ-entropy behaves like the variance or the standard deviation: all three measures approach 0. Hence, $CP{E}_{\phi}$ can be used as a measure of risk, since the risk can be completely eliminated in a portfolio whose asset returns are perfectly negatively correlated. More precisely, $CP{E}_{\phi}$ behaves like the standard deviation rather than the variance.

For $\rho =0$, the variance of the sum equals the sum of the variances, but the standard deviation of the sum is smaller than the sum of the individual standard deviations. The same holds for $CP{E}_{\phi}$.

In case of the bivariate standard Gaussian distribution, $Y|x$ is Gaussian as well with mean $\rho x$ and variance $1-{\rho}^{2}$ for $x\in \mathbb{R}$ and $-1<\rho <1$. Therefore, the quantile function of $Y|x$ is

$${F}_{Y|x}^{-1}\left(u\right)=\rho x+\sqrt{1-{\rho}^{2}}{\Phi}^{-1}\left(u\right),\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}u\in [0,1].$$

Using this quantile function, the cumulative paired φ-entropy for the conditional random variable $Y|x$ is

$$CP{E}_{\phi}\left(Y\right|x)=\sqrt{1-{\rho}^{2}}{\int}_{0}^{1}({\phi}^{\prime}(1-u)-{\phi}^{\prime}\left(u\right)){\Phi}^{-1}\left(u\right)du=\sqrt{1-{\rho}^{2}}CP{E}_{\phi}\left(Y\right).$$

Just like the variance of $Y|x$, $CP{E}_{\phi}$ does not depend on x in case of a bivariate Gaussian distribution. This implies that the internal variation is $\sqrt{1-{\rho}^{2}}CP{E}_{\phi}\left(Y\right)$, as well.

For $\rho \to 1$, the bivariate distribution becomes degenerate and the internal variation consequently approaches 0. The mutual cumulative paired φ-information is given by

$$MCP{I}_{\phi}(X,Y)=CP{E}_{\phi}\left(Y\right)-{E}_{X}\left(CP{E}_{\phi}\left(Y\right|X)\right)=(1-\sqrt{1-{\rho}^{2}})CP{E}_{\phi}\left(Y\right).$$

$MCP{I}_{\phi}$ takes the value 0 if and only if $\rho =0$, in which case X and Y are independent.

The two measures of directed cumulative φ-dependence for this example are

$$DCP{D}_{\phi}^{1}(X\to Y)=\frac{MCP{I}_{\phi}(X,Y)}{CP{E}_{\phi}\left(Y\right)}=1-\sqrt{1-{\rho}^{2}}$$

and

$$DCP{D}_{\phi}^{2}(X\to Y)=\frac{CP{E}_{\phi}{\left(Y\right)}^{2}-{E}_{X}\left(CP{E}_{\phi}{\left(Y\right|X)}^{2}\right)}{CP{E}_{\phi}{\left(Y\right)}^{2}}={\rho}^{2}.$$

ρ completely determines the values of both measures of directed dependence. If the upper bound 1 is attained, there is a perfect linear relation between Y and X.
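These closed-form expressions can be checked numerically. The sketch below is ours; it uses Python's `statistics.NormalDist` for ${\Phi}^{-1}$, evaluates $CP{E}_{G}\left(Y\right)$ for a standard Gaussian via the quantile representation, and recovers $DCP{D}_{\phi}^{1}=1-\sqrt{1-{\rho}^{2}}$:

```python
import math
from statistics import NormalDist

Phi_inv = NormalDist().inv_cdf   # standard Gaussian quantile function

def cpe_gini_gauss(n=20000):
    """CPE_G of a standard Gaussian via the quantile representation
    int_0^1 (phi'(1-u) - phi'(u)) Phi^{-1}(u) du with phi(u) = u(1-u),
    i.e. phi'(1-u) - phi'(u) = 4u - 2 (midpoint rule)."""
    h = 1.0 / n
    return sum((4 * ((i + 0.5) * h) - 2) * Phi_inv((i + 0.5) * h) for i in range(n)) * h

cpe_y = cpe_gini_gauss()                     # equals the Gini mean difference 2/sqrt(pi)
rho = 0.6
cpe_cond = math.sqrt(1 - rho ** 2) * cpe_y   # internal variation E_X(CPE_G(Y|X))
dcpd1 = (cpe_y - cpe_cond) / cpe_y           # = 1 - sqrt(1 - rho^2)
print(round(cpe_y, 3), round(dcpd1, 3))
```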

As a second example, we consider the dependence structure of the Farlie-Gumbel-Morgenstern (FGM) copula. For the sake of brevity, we define a copula C as a bivariate distribution function with uniform marginals for two random variables U and V with support $[0,1]$. For details concerning copulas see, e.g., [88].

**Example 9.**

Let

$${C}_{U,V}(u,v)=uv+\theta u(1-u)v(1-v),\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}u,v\in [0,1],\theta \in [-1,1],$$

be the FGM copula (cf. [88], p. 68). With

$${C}_{U|V}\left(u\right|v)=\frac{\partial C(u,v)}{\partial v}=u+\theta u(1-u)(1-2v),$$

it holds for the conditional cumulative paired φ-entropy of U given $V=v$ that

$$CP{E}_{\phi}\left({C}_{U|V}\right)={\int}_{0}^{1}\phi (1-u-\theta u(1-u)(1-2v))+\phi (u+\theta u(1-u)(1-2v))du.$$

To get expressions in closed form we consider the Gini case with $\phi \left(u\right)=u(1-u)$, $u\in [0,1]$. After some simple calculations we have

$$CP{E}_{G}\left({C}_{U|V}\right)=\frac{1}{3}-\frac{{\theta}^{2}}{15}{(1-2v)}^{2},\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}v\in [0,1].$$

Averaging over the uniform distribution of V leads to the internal variation

$$E\left(CP{E}_{G}\left({C}_{U|V}\right)\right)=\frac{1}{3}-\frac{{\theta}^{2}}{45}.$$

With $CP{E}_{G}\left(U\right)=1/3$, the mutual cumulative Gini information and the directed cumulative measure of Gini dependence are

$$MCP{I}_{G}(V\to U)=\frac{{\theta}^{2}}{45}\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}and\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}DCP{D}_{G}^{1}(V\to U)=\frac{{\theta}^{2}}{15}.$$
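The closed form for the conditional entropy can be verified by numerical integration. The helper below (ours) compares a midpoint-rule evaluation of $CP{E}_{G}\left({C}_{U|V}\right)$ with $1/3-{\theta}^{2}{(1-2v)}^{2}/15$:

```python
def cpe_gini_fgm(theta, v, n=100000):
    """Midpoint-rule evaluation of CPE_G(C_{U|V}) for the FGM copula:
    integrate phi(C) + phi(1 - C) with phi(u) = u(1 - u) and
    C_{U|V}(u|v) = u + theta*u*(1-u)*(1-2v)."""
    h = 1.0 / n
    s = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        c = u + theta * u * (1 - u) * (1 - 2 * v)
        s += c * (1 - c) + (1 - c) * c   # phi(c) + phi(1-c) = 2c(1-c)
    return s * h

theta, v = 0.8, 0.2
numeric = cpe_gini_fgm(theta, v)
closed = 1 / 3 - theta ** 2 * (1 - 2 * v) ** 2 / 15
print(round(numeric, 6), round(closed, 6))   # the two values agree
```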

It is well-known that only a small range of dependence can be covered by the FGM copula (cf. [88], p. 129).

Hall et al. [89] discussed several methods for estimating a conditional distribution. The results can be used for estimating the mutual φ-information and the two directed measures of dependence. This will be the task of future research.

#### 8.3. φ-Correlation

Schechtman et al. [90] introduced Gini correlations of two random variables X and Y with distribution functions ${F}_{X}$ and ${F}_{Y}$ as

$${\Gamma}_{G}(X,Y)=\frac{Cov(X,{F}_{Y}\left(Y\right))}{Cov(X,{F}_{X}\left(X\right))}\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\text{and}\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}{\Gamma}_{G}(Y,X)=\frac{Cov(Y,{F}_{X}\left(X\right))}{Cov(Y,{F}_{Y}\left(Y\right))}.$$

The denominator $Cov(X,{F}_{X}\left(X\right))$ equals $1/4$ of the Gini mean difference

$${\mathsf{\Delta}}_{X}={E}_{{X}_{1}}{E}_{{X}_{2}}\left[\left|{X}_{1}-{X}_{2}\right|\right],$$

where the expectation is calculated for two independent random variables ${X}_{1}$ and ${X}_{2}$, each distributed according to ${F}_{X}$.

Gini’s mean difference coincides with the cumulative paired Gini entropy $CP{E}_{G}\left(X\right)$ in the following way:

$${\mathsf{\Delta}}_{X}=4Cov(X,{F}_{X}\left(X\right))=CP{E}_{G}\left(X\right)={\int}_{-\infty}^{\infty}x\left({\phi}^{\prime}\left({\overline{F}}_{X}\left(x\right)\right)-{\phi}^{\prime}\left({F}_{X}\left(x\right)\right)\right)d{F}_{X}\left(x\right).$$

Therefore, in the same way that Gini’s mean difference can be generalized to the Gini correlation, $CP{E}_{\phi}$ can be generalized to the φ-correlation.

Let $X,Y$ be two random variables and let $CP{E}_{\phi}\left(X\right)$, $CP{E}_{\phi}\left(Y\right)$ be the corresponding cumulative paired φ-entropies. Then

$${\Gamma}_{\phi}(X,Y)=\frac{E\left(X({\phi}^{\prime}\left({\overline{F}}_{Y}\left(Y\right)\right)-{\phi}^{\prime}\left({F}_{Y}\left(Y\right)\right))\right)}{CP{E}_{\phi}\left(X\right)}$$

and

$${\Gamma}_{\phi}(Y,X)=\frac{E\left(Y({\phi}^{\prime}\left({\overline{F}}_{X}\left(X\right)\right)-{\phi}^{\prime}\left({F}_{X}\left(X\right)\right))\right)}{CP{E}_{\phi}\left(Y\right)}$$

are called the φ-correlations of X and Y. Since $E({\phi}^{\prime}\left({\overline{F}}_{Y}\left(Y\right)\right)-{\phi}^{\prime}\left({F}_{Y}\left(Y\right)\right))=0$, the numerator is the covariance between X and ${\phi}^{\prime}\left({\overline{F}}_{Y}\left(Y\right)\right)-{\phi}^{\prime}\left({F}_{Y}\left(Y\right)\right)$.

The first example verifies that the Gini correlation is a proper special case of the φ-correlation.

**Example 10.**

The setting $\phi \left(u\right)=u(1-u)$, $u\in [0,1]$, leads to the Gini correlation, because

$$\begin{array}{ccc}\hfill E\left(X({\phi}^{\prime}\left({\overline{F}}_{Y}\left(Y\right)\right)-{\phi}^{\prime}\left({F}_{Y}\left(Y\right)\right))\right)& =& 2E\left(X(2{F}_{Y}\left(Y\right)-1)\right)=4E\left(X({F}_{Y}\left(Y\right)-1/2)\right)\hfill \\ & =& 4E\left((X-E\left(X\right))({F}_{Y}\left(Y\right)-1/2)\right)=4Cov(X,{F}_{Y}\left(Y\right))\hfill \end{array}$$

and

$$E\left(X({\phi}^{\prime}\left({\overline{F}}_{X}\left(X\right)\right)-{\phi}^{\prime}\left({F}_{X}\left(X\right)\right))\right)=4Cov(X,{F}_{X}\left(X\right)).$$
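The equality of the two representations can also be illustrated on simulated data. In the sketch below (ours), the empirical cdf is taken as ranks over $n+1$ so that its mean is exactly $1/2$; then $E\left(X({\phi}^{\prime}\left({\overline{F}}_{Y}\left(Y\right)\right)-{\phi}^{\prime}\left({F}_{Y}\left(Y\right)\right))\right)$ and $4Cov(X,{F}_{Y}\left(Y\right))$ agree in the Gini case:

```python
import random

random.seed(7)
n = 5000
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.6 * xi + 0.8 * random.gauss(0, 1) for xi in x]   # some dependent pair

def ecdf_vals(v):
    """Empirical cdf F(v_i) = rank_i / (n + 1), so that mean(F) = 1/2 exactly."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    f = [0.0] * len(v)
    for r, i in enumerate(order, start=1):
        f[i] = r / (len(v) + 1)
    return f

fy = ecdf_vals(y)
mean = lambda v: sum(v) / len(v)

# phi(u) = u(1-u): phi'(1-u) - phi'(u) = 4u - 2
num1 = mean([xi * (4 * f - 2) for xi, f in zip(x, fy)])                   # E(X(phi'(Fbar_Y) - phi'(F_Y)))
num2 = 4 * (mean([xi * f for xi, f in zip(x, fy)]) - mean(x) * mean(fy))  # 4 Cov(X, F_Y(Y))
print(round(num1, 6), round(num2, 6))   # identical up to rounding
```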

The second example considers the new Shannon correlation.

**Example 11.**

Set $\phi \left(u\right)=-ulnu$, $u\in [0,1]$, then we get the Shannon correlation

$${\Gamma}_{S}(X,Y)=\frac{E(Xln({F}_{Y}\left(Y\right)/(1-{F}_{Y}\left(Y\right))))}{CP{E}_{S}\left(X\right)}.$$

If Y follows a logistic distribution with ${F}_{Y}\left(y\right)=1/(1+{e}^{-y})$, $y\in \mathbb{R}$, then $ln({F}_{Y}\left(y\right)/{\overline{F}}_{Y}\left(y\right))=y$. Considering this, we get

$${\Gamma}_{S}(X,Y)=\frac{E\left(XY\right)}{CP{E}_{S}\left(X\right)}.$$

From Equation (30) we know that $CP{E}_{S}\left(X\right)=\pi /\sqrt{3}$ if X is logistically distributed. In this specific case we get

$${\Gamma}_{S}(X,Y)=\sqrt{3}\frac{E\left(XY\right)}{\pi}.$$

In the following example we introduce the α-correlation.

**Example 12.**

For $\phi \left(u\right)=u({u}^{\alpha -1}-1)/(1-\alpha )$, $u\in [0,1]$, we get the α-correlation

$${\Gamma}_{\alpha}(X,Y)=\frac{E\left(X\frac{\alpha}{1-\alpha}({\overline{F}}_{Y}{\left(Y\right)}^{\alpha -1}-{F}_{Y}{\left(Y\right)}^{\alpha -1})\right)}{CP{E}_{\alpha}\left(X\right)}.$$

For ${F}_{Y}\left(y\right)=1/(1+{e}^{-y})$, $y\in \mathbb{R}$, we get

$${\Gamma}_{\alpha}(X,Y)=\frac{\alpha}{(1-\alpha )CP{E}_{\alpha}\left(X\right)}E\left(X\left({\left(\frac{1}{1+{e}^{Y}}\right)}^{\alpha -1}-{\left(\frac{1}{1+{e}^{-Y}}\right)}^{\alpha -1}\right)\right).$$

The authors of [90,91,92] proved that Gini correlations possess many desirable properties. In the following we give an overview of all properties which can be transferred to φ-correlations. For proofs and further details we refer to [75].

We start with the fact that φ-correlations also have a copula representation, since the covariance in the numerator satisfies

$$Cov(X,{\phi}^{\prime}\left({\overline{F}}_{Y}\left(Y\right)\right)-{\phi}^{\prime}\left({F}_{Y}\left(Y\right)\right))=-{\int}_{0}^{1}{\int}_{0}^{1}(C(u,v)-uv)\frac{1}{f\left({F}_{X}^{-1}\left(u\right)\right)}({\phi}^{\prime \prime}(1-v)+{\phi}^{\prime \prime}\left(v\right))dudv.$$

The following examples demonstrate the copula representation for the Gini and the Shannon correlation.

**Example 13.**

In the Gini case it is ${\phi}^{\prime \prime}\left(u\right)+{\phi}^{\prime \prime}(1-u)=-4$. This leads to

$$Cov(X,{\phi}^{\prime}\left({\overline{F}}_{Y}\left(Y\right)\right)-{\phi}^{\prime}\left({F}_{Y}\left(Y\right)\right))=4Cov(X,{F}_{Y}\left(Y\right))=4{\int}_{0}^{1}{\int}_{0}^{1}({C}_{X,Y}(u,v)-uv)\frac{1}{{f}_{X}\left({F}_{X}^{-1}\left(u\right)\right)}dudv.$$

**Example 14.**

In the Shannon case, ${\phi}^{\prime \prime}\left(u\right)+{\phi}^{\prime \prime}(1-u)=-1/\left(u(1-u)\right)$, such that

$$Cov\left(X,ln\frac{{F}_{Y}\left(Y\right)}{{\overline{F}}_{Y}\left(Y\right)}\right)={\int}_{0}^{1}{\int}_{0}^{1}\frac{{C}_{X,Y}(u,v)-uv}{v(1-v)}\frac{1}{{f}_{X}\left({F}_{X}^{-1}\left(u\right)\right)}dudv.$$

The following basic properties of φ-correlations can easily be checked with the arguments applied by [90]:

- ${\Gamma}_{\phi}(X,Y)\in [-1,1]$.
- ${\Gamma}_{\phi}(X,Y)=1$ (resp. $-1$) if there is a strictly increasing (resp. strictly decreasing) transformation g such that $X=g\left(Y\right)$.
- If g is monotonically increasing, then ${\Gamma}_{\phi}(X,Y)={\Gamma}_{\phi}(X,g\left(Y\right))$.
- If g is affine linear with positive slope, then ${\Gamma}_{\phi}(X,Y)={\Gamma}_{\phi}(g\left(X\right),Y)$.
- If X and Y are independent, then ${\Gamma}_{\phi}(X,Y)={\Gamma}_{\phi}(Y,X)=0$.
- If $a+bX$ and $c+dY$ are exchangeable for some constants $a,b,c,d\in \mathbb{R}$ with $b,d>0$, then ${\Gamma}_{\phi}(X,Y)={\Gamma}_{\phi}(Y,X)$.

In the last subsection we have seen that the two directed measures of φ-dependence do not depend on φ if a bivariate Gaussian distribution is considered. The same holds for φ-correlations, as the following example demonstrates.

**Example 15.**

Let $(X,Y)$ be a bivariate standard Gaussian random vector with Pearson correlation coefficient ρ. All φ-correlations then coincide with ρ, as the following consideration shows: with $E\left(X\right|y)=\rho y$ it holds that

$$\begin{array}{ccc}\hfill Cov(X,{\phi}^{\prime}\left({\overline{F}}_{Y}\left(Y\right)\right)-{\phi}^{\prime}\left({F}_{Y}\left(Y\right)\right))& =& {E}_{Y}\left({E}_{X|Y}\left(X\right|Y)({\phi}^{\prime}\left({\overline{F}}_{Y}\left(Y\right)\right)-{\phi}^{\prime}\left({F}_{Y}\left(Y\right)\right))\right)\hfill \\ & =& \rho {E}_{Y}\left(Y({\phi}^{\prime}\left({\overline{F}}_{Y}\left(Y\right)\right)-{\phi}^{\prime}\left({F}_{Y}\left(Y\right)\right))\right)\hfill \\ & =& \rho CP{E}_{\phi}\left(Y\right)=\rho CP{E}_{\phi}\left(X\right).\hfill \end{array}$$

Dividing this by $CP{E}_{\phi}\left(X\right)$ yields the result.
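A quick Monte Carlo check of this example in the Gini case (our sketch): the empirical φ-correlation of a simulated bivariate Gaussian pair is close to ρ:

```python
import math
import random

random.seed(3)
n, rho = 40000, 0.5
x = [random.gauss(0, 1) for _ in range(n)]
y = [rho * xi + math.sqrt(1 - rho ** 2) * random.gauss(0, 1) for xi in x]

def ecdf_vals(v):
    """Empirical cdf via ranks over n + 1."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    f = [0.0] * len(v)
    for r, i in enumerate(order, start=1):
        f[i] = r / (len(v) + 1)
    return f

fy, fx = ecdf_vals(y), ecdf_vals(x)
num = sum(xi * (4 * f - 2) for xi, f in zip(x, fy)) / n   # E(X(4 F_Y(Y) - 2))
den = sum(xi * (4 * f - 2) for xi, f in zip(x, fx)) / n   # CPE_G(X) = E(X(4 F_X(X) - 2))
print(round(num / den, 2))                                # close to rho
```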

Weighted sums of random variables appear, for example, in portfolio optimization. The diversification effect rests on negative correlations between asset returns: the risk of a portfolio can be significantly smaller than the sum of the individual risks. We now analyze whether cumulative paired φ-entropies can serve as risk measures as well; to this end, we examine the diversification effect for $CP{E}_{\phi}$.

First, we express the total risk $CP{E}_{\phi}\left(Y\right)$ as a weighted sum of the individual risks, where the weights are the φ-correlations of the individual returns with the portfolio return: Let $Y={\sum}_{i=1}^{k}{a}_{i}{X}_{i}$, then it holds that

$$CP{E}_{\phi}\left(Y\right)=\sum _{i=1}^{k}{a}_{i}{\Gamma}_{\phi}({X}_{i},Y)CP{E}_{\phi}\left({X}_{i}\right).$$

For the diversification effect, the total risk $CP{E}_{\phi}\left(Y\right)$ has to be expressed as a function of the φ-correlations between ${X}_{i}$ and ${X}_{j}$, $i,j=1,2,\dots ,k$. A similar result was provided by [92] for the Gini correlation without proof. Let $Y={\sum}_{i=1}^{k}{a}_{i}{X}_{i}$ and set ${D}_{iy}={\Gamma}_{\phi}({X}_{i},Y)-{\Gamma}_{\phi}(Y,{X}_{i})$, $i=1,2,\dots ,k$; then the following decomposition of the square of $CP{E}_{\phi}\left(Y\right)$ holds:

$$\begin{array}{ccc}& & CP{E}_{\phi}{\left(Y\right)}^{2}-CP{E}_{\phi}\left(Y\right)\sum _{i=1}^{k}{a}_{i}{D}_{iy}CP{E}_{\phi}\left({X}_{i}\right)\hfill \\ & & =\sum _{i=1}^{k}{a}_{i}^{2}CP{E}_{\phi}{\left({X}_{i}\right)}^{2}+\sum _{i=1}^{k}\sum _{j\ne i}^{k}{a}_{i}{a}_{j}CP{E}_{\phi}\left({X}_{i}\right)CP{E}_{\phi}\left({X}_{j}\right){\Gamma}_{\phi}({X}_{i},{X}_{j}).\hfill \end{array}$$

This is similar to the representation for the variance of Y, where ${\Gamma}_{\phi}({X}_{i},{X}_{j})$ takes the role of the Pearson correlation and $CP{E}_{\phi}\left({X}_{i}\right)$ the role of the standard deviation for $i,j=1,2,\dots ,k$.
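The weighted-sum representation of the total risk can be verified exactly on a sample, since both sides reduce to $E\left(Y({\phi}^{\prime}\left({\overline{F}}_{Y}\left(Y\right)\right)-{\phi}^{\prime}\left({F}_{Y}\left(Y\right)\right))\right)$ under the empirical cdf. A Gini-case sketch (ours):

```python
import random

random.seed(5)
n = 2000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 2) for _ in range(n)]
a1, a2 = 0.7, 0.3
y = [a1 * u + a2 * v for u, v in zip(x1, x2)]   # portfolio return

def scores(v):
    """Gini scores phi'(Fbar(v_i)) - phi'(F(v_i)) = 4 F(v_i) - 2 via ranks."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    s = [0.0] * len(v)
    for r, i in enumerate(order, start=1):
        s[i] = 4 * r / (len(v) + 1) - 2
    return s

sy = scores(y)
cpe = lambda v: sum(vi * si for vi, si in zip(v, scores(v))) / len(v)        # CPE_G(v)
gamma = lambda v: (sum(vi * si for vi, si in zip(v, sy)) / len(v)) / cpe(v)  # Gamma_G(v, Y)

lhs = cpe(y)
rhs = a1 * gamma(x1) * cpe(x1) + a2 * gamma(x2) * cpe(x2)
print(round(lhs, 6), round(rhs, 6))   # both sides coincide
```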

Schechtman et al. [90] also introduced an estimator for the Gini correlation and derived its asymptotic distribution. For the proof it is useful to note that the numerator of the Gini correlation can be represented as a U-statistic. For the general case of the φ-correlation it is necessary to derive the influence function and to calculate its variance. This will be done in [75].

#### 8.4. φ-Regression

Based on the Gini correlation, Olkin et al. [93] considered the traditional ordinary least squares (OLS) approach to the regression model

$${Y}_{i}=\alpha +{x}_{i}^{\prime}\beta +{\epsilon}_{i},\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}i=1,2,\dots ,n,$$

where Y is the dependent variable and x is the vector of independent variables. They modified it by minimizing, with respect to α and β, the covariance between the error term ε of the linear regression model and the ranks of ε. Ranks are the sample analogue of the theoretical distribution function ${F}_{\epsilon}$, such that the Gini mean difference $Cov(\epsilon ,{F}_{\epsilon}\left(\epsilon \right))$ is the center of this new approach to regression analysis. Olkin et al. [93] noticed that this approach is already known as “rank based regression”, or “R regression” for short, in robust statistics. In robust regression analysis, the more general optimization criterion $Cov(\epsilon ,\phi \left({F}_{\epsilon}\left(\epsilon \right)\right))$ has been considered, where φ denotes a strictly increasing score function (cf. [94], p. 233). The choice $\phi \left(u\right)=1-2u$ leads to the Gini mean difference; this is the scores generating function of the Wilcoxon scores. The rank based regression approach with general scores generating function $\phi \left(u\right)={\phi}^{\prime}(1-u)-{\phi}^{\prime}\left(u\right)$, $u\in [0,1]$, is equivalent to the generalization of Gini regression to a so-called φ-regression based on the criterion function

$$CP{E}_{\phi}\left(\epsilon \right)=Cov(\epsilon ,{\phi}^{\prime}(1-{F}_{\epsilon}\left(\epsilon \right))-{\phi}^{\prime}\left({F}_{\epsilon}\left(\epsilon \right)\right)),$$

which has to be minimized to obtain α and β. Therefore, cumulative paired φ-entropies are special cases of the dispersion function that [95,96] proposed as optimization criterion for R regression. More precisely, R estimation proceeds in two steps. In the first step,

$${d}_{\phi}\left(\beta \right)=CP{E}_{\phi}(y-X\beta )$$

has to be minimized with respect to β. Let ${\widehat{\beta}}_{\phi}$ denote this estimator. In the second step, α is estimated separately by

$${\widehat{\alpha}}_{\phi}={\mathrm{med}}_{i}({y}_{i}-{x}_{i}^{\prime}{\widehat{\beta}}_{\phi}).$$
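The two-step R-estimation procedure can be sketched as follows (our illustration, not the implementation of "Rfit"; a crude grid search replaces the proper optimization, and the Gini scores $4{R}_{i}/(n+1)-2$ are used):

```python
import random

random.seed(11)
n = 300
x = [random.uniform(0, 10) for _ in range(n)]
y = [2.0 + 1.5 * xi + random.gauss(0, 1) for xi in x]
y[0] += 50.0   # one gross outlier in the response

def d_phi(beta):
    """Rank dispersion d_phi(beta) = CPE_G(y - beta*x): Gini scores
    4*R_i/(n+1) - 2 applied to the ordered residuals."""
    e = [yi - beta * xi for xi, yi in zip(x, y)]
    order = sorted(range(n), key=lambda i: e[i])
    return sum(e[i] * (4 * r / (n + 1) - 2) for r, i in enumerate(order, start=1)) / n

# step 1: crude one-dimensional grid search for the slope (illustration only)
beta_hat = min((b / 1000 for b in range(1000, 2001)), key=d_phi)
# step 2: intercept as the median of the residuals
res = sorted(yi - beta_hat * xi for xi, yi in zip(x, y))
alpha_hat = (res[n // 2 - 1] + res[n // 2]) / 2
print(round(beta_hat, 2), round(alpha_hat, 1))   # near the true values 1.5 and 2.0
```

Despite the outlier, the rank-based fit stays near the true coefficients, which is the robustness property discussed below.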

The authors of [97,98] gave an overview of recent developments in rank based regression. We will apply their main results to φ-regression. In [99], the authors showed that the influence function of ${\widehat{\beta}}_{\phi}$ satisfies

$$IF({x}_{0},{y}_{0};{\widehat{\beta}}_{\phi},{F}_{Y,X})={\tau}_{\phi}{(\left({X}^{\prime}X\right)/n)}^{-1}\left({\phi}^{\prime}\left({\overline{F}}_{\epsilon}\left({y}_{0}\right)\right)-{\phi}^{\prime}\left({F}_{\epsilon}\left({y}_{0}\right)\right)\right){x}_{0},$$

where $({x}_{0}^{\prime},{y}_{0})$ represents an outlier. ${\phi}^{\prime}$ determines the influence of an outlier in the dependent variable on the estimator ${\widehat{\beta}}_{\phi}$.

The scale parameter ${\tau}_{\phi}$ is given by

$${\tau}_{\phi}=-{\left({\int}_{0}^{1}({\phi}^{\prime}(1-u)-{\phi}^{\prime}\left(u\right))\frac{{f}_{\epsilon}^{\prime}\left({F}_{\epsilon}^{-1}\left(u\right)\right)}{{f}_{\epsilon}\left({F}_{\epsilon}^{-1}\left(u\right)\right)}du\right)}^{-1}.$$

The influence function shows that ${\widehat{\beta}}_{\phi}$ is asymptotically normal:

$${\widehat{\beta}}_{\phi}{\sim}_{asy}N\left(\beta ,{\tau}_{\phi}^{2}{\left({X}^{\prime}X\right)}^{-1}\right).$$

For ${\phi}^{\prime}(1-u)-{\phi}^{\prime}\left(u\right)$ bounded, Koul et al. [100] proposed a consistent estimator ${\widehat{\tau}}_{\phi}$ of the scale parameter ${\tau}_{\phi}$. This asymptotic property can again be used to construct approximate confidence limits for the regression coefficients, to derive a Wald test for the general linear hypothesis, to derive a goodness-of-fit test, and to define a measure of determination (cf. [97]).

Gini regression corresponds to the criterion function $CP{E}_{G}\left(\epsilon \right)$. In the same way, the new Shannon regression derives from $CP{E}_{S}\left(\epsilon \right)$, the α-regression from $CP{E}_{\alpha}\left(\epsilon \right)$, and the Leik regression from $CP{E}_{L}\left(\epsilon \right)$.

The R package “Rfit” has the option to include individual φ-functions into rank based regression (cf. [97]). Using this option and the dataset “telephone”, which is shipped with “Rfit” and contains several outliers, we compare the fit of the Shannon regression ($\alpha \to 1$), the Leik regression, and the α-regression (for several values of α) with the OLS regression. Figure 3 shows on the left the original data together with the OLS and the Shannon regression; on the right, outliers were excluded to give a more detailed impression of the differences between the φ-regressions.

In comparison with the very sensitive OLS regression, all rank based regression techniques behave similarly. For a known error distribution, McKean et al. [98] derived an asymptotically efficient estimator for ${\tau}_{\phi}$. This procedure also determines the entropy generating function φ. For an unknown error distribution with some available information on skewness and leptokurtosis, they proposed a data-driven (adaptive) procedure.

#### 8.5. Two-Sample Rank Test on Dispersion

Based on $CP{E}_{\phi}$, the linear rank statistic

$$CP{E}_{\phi}\left(R\right)=\sum _{i=1}^{n}\phi \left(\frac{{R}_{i}}{n+m+1}\right)+\phi \left(1-\frac{{R}_{i}}{n+m+1}\right)$$

can be used as a test statistic for alternatives of scale, where ${R}_{1},{R}_{2},\dots ,{R}_{n}$ are the ranks of ${X}_{1},{X}_{2},\dots ,{X}_{n}$ in the pooled sample ${X}_{1},{X}_{2},\dots ,{X}_{n},{Y}_{1},{Y}_{2},\dots ,{Y}_{m}$. All random variables are assumed to be independent.

Some linear rank statistics that are well-known from the literature are special cases of Equation (56), as will be shown in the following examples:

**Example 16.**

Let $\phi \left(u\right)=1/2-|u-1/2|$, $u\in [0,1]$, then we have

$$CP{E}_{L}\left(R\right)=2\sum _{i=1}^{n}\left(\frac{1}{2}-\left|\frac{{R}_{i}}{n+m+1}-\frac{1}{2}\right|\right).$$

Ansari et al. [101] suggested the statistic

$${S}_{AB}=\sum _{i=1}^{n}\left(\frac{1}{2}(n+m+1)-\left|{R}_{i}-\frac{1}{2}(n+m+1)\right|\right)$$

as a two-sample test for alternatives of scale (cf. [102], p. 104). Apparently, we have ${S}_{AB}=\frac{1}{2}(n+m+1)CP{E}_{L}\left(R\right)$.
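The identity between ${S}_{AB}$ and $CP{E}_{L}\left(R\right)$ holds exactly for any sample, as the short check below (ours) confirms:

```python
import random

random.seed(2)
n, m = 15, 20
pooled = [random.gauss(0, 1) for _ in range(n + m)]

# ranks of the first n observations (the X-sample) within the pooled sample
order = sorted(range(n + m), key=lambda i: pooled[i])
rank = [0] * (n + m)
for r, i in enumerate(order, start=1):
    rank[i] = r
R = rank[:n]

N1 = n + m + 1
cpe_L = 2 * sum(0.5 - abs(r / N1 - 0.5) for r in R)   # CPE_L(R) with phi(u) = 1/2 - |u - 1/2|
s_ab = sum(N1 / 2 - abs(r - N1 / 2) for r in R)       # Ansari-Bradley statistic S_AB
print(abs(s_ab - N1 / 2 * cpe_L))                     # the difference vanishes
```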

**Example 17.**

Let $\phi \left(u\right)=1/4-{(u-1/2)}^{2}$, $u\in [0,1]$. Consequently, we have

$$CP{E}_{G}\left(R\right)=\frac{n}{2}-2\sum _{i=1}^{n}{\left(\frac{{R}_{i}}{n+m+1}-\frac{1}{2}\right)}^{2},$$

which is identical to the test statistic suggested by [103] up to an affine linear relation (cf. [68], p. 149f.). This test statistic is given by ${S}_{M}={\sum}_{i=1}^{n}{({R}_{i}-(n+m+1)/2)}^{2}$; thus, the resulting relation is

$$CP{E}_{G}\left(R\right)=\frac{n}{2}-\frac{2}{{(n+m+1)}^{2}}{S}_{M}.$$

In the following, the scores of the Mood test are understood to be generated by the entropy generating function of $CP{E}_{G}$.

Dropping the requirement of concavity of φ, one finds analogies to other well-known test statistics.

**Example 18.**

Let $\phi \left(u\right)=1/2-1/2\left(sign\right(|u-1/2|-1/4)+1)$, $u\in [0,1]$, which is not concave on the interval [0,1], we have
which is identical to the quantile test statistic for alternatives of scale up to an affine linear relation ([102], p. 105).

$$CP{E}_{\phi}\left(R\right)=n-\sum _{i=1}^{n}\left(sign\left(\left|\frac{{R}_{i}}{n+m+1}-\frac{1}{2}\right|-\frac{1}{4}\right)+1\right),$$

The asymptotic distribution of linear rank tests based on $CP{E}_{\phi}$ can be derived from the theory of linear rank tests, as discussed in [102]. The asymptotic distribution under the null hypothesis is needed to make an approximate test decision for a given significance level α. The asymptotic distribution under the alternative hypothesis is needed for an approximate evaluation of the test power and for choosing the sample size required to ensure a given effect size.

We consider the centered linear rank statistic

$${\overline{CPE}}_{\phi}\left(R\right)=CP{E}_{\phi}\left(R\right)-\frac{2n}{n+m}\sum _{i=1}^{n+m}\phi \left(\frac{i}{n+m+1}\right).$$

Under the null hypothesis of identical scale parameters and the assumption that

$${\int}_{0}^{1}{(\phi \left(u\right)-\overline{\phi})}^{2}+(\phi \left(u\right)-\overline{\phi})(\phi (1-u)-\overline{\phi})\phantom{\rule{0.166667em}{0ex}}du>0,$$

where $\overline{\phi}={\int}_{0}^{1}\phi \left(u\right)du$, the asymptotic distribution of ${\overline{CPE}}_{\phi}\left(R\right)$ is given by

$${\overline{CPE}}_{\phi}\left(R\right){\sim}_{asy}N\left(0,\frac{2nm}{n+m}{\int}_{0}^{1}{(\phi \left(u\right)-\overline{\phi})}^{2}+(\phi \left(u\right)-\overline{\phi})(\phi (1-u)-\overline{\phi})\phantom{\rule{0.166667em}{0ex}}du\right)$$

(cf. [102], p. 194, Theorem 1 and p. 195, Lemma 1).

The asymptotic normality of the test statistics of the Ansari-Bradley test and the Mood test is well-known. Therefore, we introduce a new linear rank test based on the cumulative paired Shannon entropy $CP{E}_{S}$ (the “Shannon test”) in the following example:

**Example 19.**

With $\phi \left(u\right)=-ulnu$, $u\in [0,1]$, and $\overline{\phi}=1/4$, we have

$${\int}_{0}^{1}{\left(\phi \left(u\right)-\overline{\phi}\right)}^{2}du={\int}_{0}^{1}\phi {\left(u\right)}^{2}du-\frac{1}{16}={\int}_{0}^{1}{u}^{2}{(lnu)}^{2}du-\frac{1}{16}=\frac{2}{27}-\frac{1}{16}=\frac{5}{432}$$

and

$$\begin{array}{ccc}\hfill {\int}_{0}^{1}(\phi \left(u\right)-\overline{\phi})(\phi (1-u)-\overline{\phi})du& =& {\int}_{0}^{1}\phi \left(u\right)\phi (1-u)du-\frac{1}{16}\hfill \\ & =& {\int}_{0}^{1}u(1-u)lnuln(1-u)du-\frac{1}{16}\hfill \\ & =& \frac{37-3{\pi}^{2}}{108}-\frac{1}{16}=\frac{121-12{\pi}^{2}}{432}.\hfill \end{array}$$

Under the null hypothesis of identical scale, the centered linear rank statistic ${\overline{CPE}}_{S}\left(R\right)$ is asymptotically normal with variance

$$\frac{nm}{n+m}\frac{63-6{\pi}^{2}}{108}.$$
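The two integrals behind this variance constant can be confirmed by numerical integration (our sketch; `mid_int` is a simple midpoint rule):

```python
import math

def mid_int(g, n=100000):
    """Midpoint-rule approximation of the integral of g over (0, 1)."""
    h = 1.0 / n
    return sum(g((i + 0.5) * h) for i in range(n)) * h

phi = lambda u: -u * math.log(u)   # Shannon entropy generating function
phibar = 0.25                      # integral of -u*ln(u) over (0, 1)

a = mid_int(lambda u: (phi(u) - phibar) ** 2)                      # 5/432
b = mid_int(lambda u: (phi(u) - phibar) * (phi(1 - u) - phibar))   # (121 - 12*pi^2)/432
print(round(a, 6), round(b, 6), round(2 * (a + b), 6))             # 2*(a+b) = (63 - 6*pi^2)/108
```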

If the alternative hypothesis ${H}_{1}$ for a density function ${f}_{0}$ is given by

$$f({x}_{1},\dots ,{x}_{n+m};\sigma )=\prod _{i=1}^{n}\frac{1}{\sigma}{f}_{0}\left(\frac{{x}_{i}}{\sigma}\right)\prod _{i=n+1}^{n+m}{f}_{0}\left({x}_{i}\right)$$

for $\sigma >0$ and $\sigma \ne 1$, then set

$${\phi}_{1}(u;{f}_{0})=-1-{F}_{0}^{-1}\left(u\right)\frac{{f}_{0}^{\prime}\left({F}_{0}^{-1}\left(u\right)\right)}{{f}_{0}\left({F}_{0}^{-1}\left(u\right)\right)}$$

and assume $I\left({f}_{0}\right)={\int}_{0}^{1}{\phi}_{1}{(u;{f}_{0})}^{2}du>0$. If $min(n,m)\to \infty $ and $ln\sigma I\left({f}_{0}\right)mn/(n+m)\to {b}^{2}$ with $0<{b}^{2}<\infty $, then ${\overline{CPE}}_{\phi}\left(R\right)$ is asymptotically normally distributed with mean

$$-\frac{n}{n+m}ln\sigma \frac{mn}{n+m}{\int}_{0}^{1}\left(\phi \left(u\right){\phi}_{1}(u;{f}_{0})+\phi (1-u){\phi}_{1}(u;{f}_{0})\right)du$$

and variance

$$\frac{2nm}{n+m}{\int}_{0}^{1}{\left(\phi \left(u\right)-\overline{\phi}\right)}^{2}+(\phi \left(u\right)-\overline{\phi})(\phi (1-u)-\overline{\phi})\phantom{\rule{0.166667em}{0ex}}du.$$

This result follows immediately from [102], p. 267, Theorem 1, together with the Remark on p. 268.

If ${f}_{0}$ is the density of a symmetric distribution, then ${\phi}_{1}(u;{f}_{0})={\phi}_{1}(1-u;{f}_{0})$ holds for $u\in [0,1]$, such that

$${\int}_{0}^{1}(2\overline{\phi}-\phi \left(u\right)-\phi (1-u)){\phi}_{1}(u;{f}_{0})du=-2{\int}_{0}^{1}\phi \left(u\right){\phi}_{1}(u;{f}_{0})du.$$

This simplifies the mean of the asymptotic normal distribution.

Since the asymptotic normality of the test statistics of the Ansari-Bradley test and the Mood test under the alternative hypothesis has been examined intensively (cf., e.g., [103,104]), we focus in the following example on the new Shannon test:

**Example 20.**

Set $\phi \left(u\right)=-ulnu$, $u\in [0,1]$, and let ${f}_{0}$ be the density function of the standard Gaussian distribution, such that ${\phi}_{1}(u;{f}_{0})={\Phi}^{-1}{\left(u\right)}^{2}-1$ and $I\left({f}_{0}\right)=2$. As a consequence, we have

$$-2{\int}_{0}^{1}(-ulnu)({\Phi}^{-1}{\left(u\right)}^{2}-1)du=0.240,$$

where the integral has been evaluated by numerical integration, and

$${\int}_{0}^{1}{\left(1/2+ulnu+(1-u)ln(1-u)\right)}^{2}du=\frac{63-6{\pi}^{2}}{108}.$$

Then, under the alternative Equation (58):

$${\overline{CPE}}_{S}\left(R\right){\sim}_{asy}N\left(0.240\frac{n}{n+m}ln\sigma \frac{mn}{n+m},\frac{63-6{\pi}^{2}}{108}\frac{2nm}{n+m}\right).$$

Hereafter, one can discuss the asymptotic efficiency of linear rank tests based on the cumulative paired φ-entropy. If ${f}_{0}$ is the true density and

$${\rho}_{1}=\frac{{\int}_{0}^{1}\left(\phi \left(u\right){\phi}_{1}(u;{f}_{0})+\phi (1-u){\phi}_{1}(u;{f}_{0})\right)du}{\sqrt{{\int}_{0}^{1}{\phi}_{1}{(u;{f}_{0})}^{2}du\phantom{\rule{0.166667em}{0ex}}{\int}_{0}^{1}{\left(\phi \left(u\right)-\overline{\phi}\right)}^{2}+(\phi \left(u\right)-\overline{\phi})(\phi (1-u)-\overline{\phi})\phantom{\rule{0.166667em}{0ex}}du}},$$

then ${\rho}_{1}^{2}$ gives the desired asymptotic efficiency (cf. [102], p. 317).

The asymptotic efficiency of the Ansari-Bradley test (and of the asymptotically equivalent Siegel-Tukey test) and of the Mood test has been analyzed by [104,105,106]. The asymptotic relative efficiency (ARE) with respect to the traditional F-test for differences in scale between two Gaussian distributions has been discussed by [103]. The asymptotic relative efficiency between the Mood test and the F-test for differences in scale has been derived by [107]. Once more, we focus on the new Shannon test.

**Example 21.**

The Klotz test is asymptotically efficient for the Gaussian distribution. With ${\int}_{0}^{1}{({\Phi}^{-1}{\left(u\right)}^{2}-1)}^{2}du=2$,

$${\rho}_{1}^{2}=\frac{0.{24}^{2}}{(63-6{\pi}^{2})/108\times 2}=0.823$$

gives the asymptotic efficiency of the new Shannon test.
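This efficiency calculation can be reproduced by numerical integration (our sketch, again using `statistics.NormalDist` for ${\Phi}^{-1}$):

```python
import math
from statistics import NormalDist

inv = NormalDist().inv_cdf   # standard Gaussian quantile function

def mid_int(g, n=100000):
    """Midpoint-rule approximation of the integral of g over (0, 1)."""
    h = 1.0 / n
    return sum(g((i + 0.5) * h) for i in range(n)) * h

phi1 = lambda u: inv(u) ** 2 - 1   # optimal scores for Gaussian scale alternatives
num = -2 * mid_int(lambda u: (-u * math.log(u)) * phi1(u))   # the constant 0.240
i_f0 = mid_int(lambda u: phi1(u) ** 2)                       # equals 2 for the Gaussian
var_const = (63 - 6 * math.pi ** 2) / 108
print(round(num, 3), round(i_f0, 2), round(num ** 2 / (var_const * i_f0), 3))
```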

Using a distribution for which the Ansari-Bradley test is asymptotically efficient, we now compare the asymptotic efficiency of the Shannon test to that of the Ansari-Bradley test.

**Example 22.**

The Ansari-Bradley test statistic ${S}_{AB}$ is asymptotically efficient for the double log-logistic distribution with density function ${f}_{0}$ (cf. [102], p. 104). The Fisher information is given by

$${\int}_{0}^{1}{\phi}_{1}{(u;{f}_{0})}^{2}du={\int}_{0}^{1}{\left(2|2u-1|-1\right)}^{2}du=4{\int}_{0}^{1}{(2u-1)}^{2}du-1=\frac{1}{3}.$$

Furthermore, we have

$$\begin{array}{ccc}\hfill {\int}_{0}^{1}{\phi}_{1}(u;{f}_{0})(2\overline{\phi}-\phi \left(u\right)-\phi (1-u))du& =& {\int}_{0}^{1}{\phi}_{1}(u;{f}_{0})\left(\frac{1}{2}+ulnu+(1-u)ln(1-u)\right)du\hfill \\ & =& \frac{1}{2}+2{\int}_{0}^{1}|2u-1|(ulnu+(1-u)ln(1-u))du=0.102,\hfill \end{array}$$

such that the asymptotic efficiency of the Shannon test for ${f}_{0}$ is

$${\rho}_{1}^{2}=\frac{{0.102}^{2}}{1/3\times (63-6{\pi}^{2})/108}=0.892.$$

These two examples show that the Shannon test has rather good asymptotic efficiency, whether the underlying distribution has moderate tails similar to the Gaussian distribution or heavy tails like the double log-logistic distribution. Asymptotically efficient linear rank tests correspond to a distribution and a scores generating function ${\varphi}_{1}$, from which one can derive an entropy generating function φ and a cumulative paired φ-entropy. This relationship will be examined further in [74].

## 9. Some Cumulative Paired Entropies for Selected Distribution Functions

In the following, we derive closed form expressions for some cumulative paired φ-entropies. We mimic the procedure of ([4], p. 326) to some degree; Table 1 of their paper contains formulas of the differential entropy for the most popular statistical distributions, several of which will also be considered in the following. Since cumulative entropies depend on the distribution function, or equivalently on the quantile function, we focus on families of distributions for which these functions have a closed form expression. Furthermore, we only discuss standardized random variables, since the scale parameter has only a multiplicative effect on $CP{E}_{\phi}$ and the location parameter has no effect. For the standard Gaussian distribution we provide the value of $CP{E}_{S}$ by numerical integration, rounded to two decimal places, since the distribution function has no explicit form. For the Gumbel distribution there is a closed form expression for the distribution function; nevertheless, we were unable to establish closed forms of $CP{E}_{S}$ and $CP{E}_{G}$ and therefore applied numerical integration in this case as well. In the following, next to the Gamma function $\Gamma \left(a\right)$ and the Beta function $B(a,b)$, we use

- the incomplete Gamma function$$\Gamma (x;a)={\int}_{0}^{x}{y}^{a-1}{e}^{-y}dy\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\mathrm{for}\phantom{\rule{0.277778em}{0ex}}x>0,a>0,$$
- the incomplete Beta function$$B(x;a,b)={\int}_{0}^{x}{u}^{a-1}{(1-u)}^{b-1}du\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\mathrm{for}\phantom{\rule{0.277778em}{0ex}}0<x<1,\phantom{\rule{0.277778em}{0ex}}a,b>0,$$
- and the Digamma function$$\psi \left(a\right)=\frac{\mathrm{d}}{\mathrm{d}a}ln\Gamma \left(a\right),\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}a>0.$$

#### 9.1. Uniform Distribution

Let X have the standard uniform distribution. Then we have

$$CP{E}_{S}\left(X\right)=\frac{1}{2},\phantom{\rule{0.277778em}{0ex}}CP{E}_{G}\left(X\right)=\frac{1}{3},\phantom{\rule{0.277778em}{0ex}}CP{E}_{L}\left(X\right)=\frac{1}{2},\phantom{\rule{0.277778em}{0ex}}CP{E}_{\alpha}\left(X\right)=\frac{1}{\alpha +1}.$$
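Since $F\left(x\right)=x$ on $[0,1]$, each of these values reduces to ${\int}_{0}^{1}\phi \left(u\right)+\phi (1-u)du$, which can be checked numerically (our sketch, with $\alpha =3$ for the α-entropy):

```python
import math

def mid_int(g, n=100000):
    """Midpoint-rule approximation of the integral of g over (0, 1)."""
    h = 1.0 / n
    return sum(g((i + 0.5) * h) for i in range(n)) * h

# For the standard uniform, F(x) = x on [0, 1], so CPE_phi = int_0^1 phi(u) + phi(1-u) du.
cpe = lambda phi: mid_int(lambda u: phi(u) + phi(1 - u))

alpha = 3.0
vals = {
    "Shannon": cpe(lambda u: -u * math.log(u)),
    "Gini": cpe(lambda u: u * (1 - u)),
    "Leik": cpe(lambda u: 0.5 - abs(u - 0.5)),
    "alpha=3": cpe(lambda u: u * (u ** (alpha - 1) - 1) / (1 - alpha)),
}
print({k: round(v, 4) for k, v in vals.items()})
```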

#### 9.2. Power Distribution

Let X have the Beta distribution on $[0,1]$ with parameters $a>0$ and $b=1$, i.e., with density function ${f}_{X}\left(x\right)=a{x}^{a-1}$ for $x\in [0,1]$. Then we have

$$\begin{array}{ccc}\hfill CP{E}_{S}\left(X\right)& =& \frac{a}{{(a+1)}^{2}}+\frac{a}{a+1}\left(\psi \left(\frac{2a+1}{a}\right)-\psi \left(2\right)\right),\hfill \\ \hfill CP{E}_{G}\left(X\right)& =& \frac{2a}{(1+a)(1+2a)},\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}CP{E}_{L}\left(X\right)=\frac{2a}{a+1}\left(1-{\left(\frac{1}{2}\right)}^{1/a}\right),\hfill \\ \hfill CP{E}_{\alpha}\left(X\right)& =& \frac{1}{a(1-\alpha )}B\left(\frac{1}{a},\alpha +1\right)-\frac{\alpha a}{(1-\alpha )(1+\alpha a)}.\hfill \end{array}$$

#### 9.3. Triangular Distribution with Parameter c

Let X have a triangular distribution with density function

$$f\left(x\right)=\left\{\begin{array}{cc}2x/c& \text{for}\ 0\le x\le c\hfill \\ 2(1-x)/(1-c)& \text{for}\ c\le x\le 1.\hfill \end{array}\right.$$

Then the following holds:

$$\begin{array}{ccc}\hfill CP{E}_{S}\left(X\right)& =& \frac{{\pi}^{2}}{6}+ln2(1-ln2),\hfill \\ \hfill CP{E}_{G}\left(X\right)& =& \frac{2}{3}\left({c}^{2}+{(1-c)}^{2}\right)-\frac{2}{5}\left({c}^{3}+{(1-c)}^{3}\right),\hfill \\ \hfill CP{E}_{L}\left(X\right)& =& \frac{1}{3}(2-c)-\frac{3-\sqrt{2}}{3\sqrt{2}}\sqrt{1-c},\hfill \\ \hfill CP{E}_{\alpha}\left(X\right)& =& \frac{1}{1-\alpha}\left(\frac{2}{2\alpha +1}\left({c}^{\alpha +1}+{(1-c)}^{\alpha +1}\right)\right.\hfill \\ & & \left.+\sqrt{c}B\left(c;\frac{1}{2},\alpha +1\right)+\sqrt{1-c}B\left(1-c;\frac{1}{2},\alpha +1\right)-2\right).\hfill \end{array}$$
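The $CP{E}_{G}$ expression can be confirmed numerically for any mode parameter; a sketch for the hypothetical choice $c=0.3$, assuming $\phi (u)=u(1-u)$:

```python
from scipy.integrate import quad

c = 0.3  # illustrative mode parameter

def F(x):
    """Triangular cdf on [0, 1] with mode c."""
    return x**2 / c if x <= c else 1.0 - (1.0 - x)**2 / (1.0 - c)

# integrate phi(F) + phi(1 - F) = 2 F (1 - F); split at the kink x = c
cpe_G = quad(lambda x: 2.0 * F(x) * (1.0 - F(x)), 0.0, 1.0, points=[c])[0]
closed = (2.0 / 3.0) * (c**2 + (1.0 - c)**2) - (2.0 / 5.0) * (c**3 + (1.0 - c)**3)
print(cpe_G, closed)   # both ≈ 0.2387 for c = 0.3
```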

#### 9.4. Laplace Distribution

Let X follow the Laplace distribution with density function ${f}_{X}\left(x\right)=\frac{1}{2}\mathrm{exp}(-|x|)$ for $x\in \mathbb{R}$, then we have

$$\begin{array}{ccc}\hfill CP{E}_{S}\left(X\right)& =& \frac{{\pi}^{2}}{6}+ln2(1-ln2),\phantom{\rule{0.277778em}{0ex}}CP{E}_{G}\left(X\right)=\frac{3}{2},\phantom{\rule{0.277778em}{0ex}}CP{E}_{L}\left(X\right)=2,\hfill \\ \hfill CP{E}_{\alpha}\left(X\right)& =& \frac{4}{\alpha -1}{\left(\frac{1}{2}\right)}^{\alpha -1}\left(\frac{1}{\alpha -1}-\frac{1}{2\alpha}\right).\hfill \end{array}$$
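The $CP{E}_{G}$ and $CP{E}_{L}$ values can be verified by integrating the piecewise cdf directly; a minimal sketch, assuming $\phi (u)=u(1-u)$ and $\phi (u)=\min (u,1-u)$:

```python
import math
from scipy.integrate import quad

# Laplace cdf (location 0, scale 1)
F = lambda x: 0.5 * math.exp(x) if x < 0 else 1.0 - 0.5 * math.exp(-x)

def cpe(phi, lo=-40.0, hi=40.0):
    # the tails beyond +/-40 are of order exp(-40) and negligible;
    # points=[0.0] marks the kink of the cdf for the quadrature routine
    return quad(lambda x: phi(F(x)) + phi(1.0 - F(x)), lo, hi, points=[0.0])[0]

print(cpe(lambda u: u * (1.0 - u)))      # CPE_G = 3/2
print(cpe(lambda u: min(u, 1.0 - u)))    # CPE_L = 2
```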

#### 9.5. Logistic Distribution

Let X follow the logistic distribution with distribution function ${F}_{X}\left(x\right)=1/(1+exp(-x))$ for $x\in \mathbb{R}$, then we have

$$\begin{array}{ccc}\hfill CP{E}_{S}\left(X\right)& =& \frac{{\pi}^{2}}{3},\phantom{\rule{0.277778em}{0ex}}CP{E}_{G}\left(X\right)=2,\phantom{\rule{0.277778em}{0ex}}CP{E}_{L}\left(X\right)=4ln2,\hfill \\ \hfill CP{E}_{\alpha}& =& \frac{2}{\alpha -1}(\psi \left(\alpha \right)-\psi \left(1\right)).\hfill \end{array}$$
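Since the logistic quantile function is $Q(u)=\mathrm{ln}(u/(1-u))$ with quantile density ${Q}^{\prime}(u)=1/(u(1-u))$, the entropies can equivalently be computed in the quantile domain as ${\int}_{0}^{1}(\phi (u)+\phi (1-u)){Q}^{\prime}(u)\,du$. A sketch, assuming $\phi (u)=-u\mathrm{ln}u$ for $CP{E}_{S}$ and the Havrda–Charvát-type $\phi (u)=(u-{u}^{\alpha})/(\alpha -1)$ for $CP{E}_{\alpha}$:

```python
import math
from scipy.integrate import quad

qd = lambda u: 1.0 / (u * (1.0 - u))   # logistic quantile density Q'(u)

def cpe(phi):
    # quantile-domain form: integral over (0, 1) of [phi(u) + phi(1-u)] Q'(u)
    return quad(lambda u: (phi(u) + phi(1.0 - u)) * qd(u), 0.0, 1.0)[0]

phi_S = lambda u: -u * math.log(u) if u > 0.0 else 0.0
print(cpe(phi_S))                        # pi^2/3 ≈ 3.2899

alpha = 2.0
phi_a = lambda u: (u - u**alpha) / (alpha - 1.0)
print(cpe(phi_a))                        # 2(psi(2) - psi(1)) = 2
```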

#### 9.6. Tukey λ Distribution

Let X follow the Tukey λ distribution with quantile function ${F}^{-1}\left(u\right)=\frac{1}{\lambda}\left({u}^{\lambda}-{(1-u)}^{\lambda}\right)$ for $0\le u\le 1$ and $\lambda >-1$. Then the following holds:

$$\begin{array}{ccc}\hfill CP{E}_{S}\left(X\right)& =& \frac{2}{{(\lambda +1)}^{2}}\left(1+\left(1+\frac{1}{\lambda}\right)\left((\lambda +1)\psi (\lambda +1)-\psi (\lambda +2)-\psi \left(1\right)\right)\right),\hfill \\ \hfill CP{E}_{G}\left(X\right)& =& \frac{4}{\lambda +1}\left(1+\frac{1}{\lambda}\right),\hfill \\ \hfill CP{E}_{L}\left(X\right)& =& 2\left(\frac{1}{\lambda +1}{\left(\frac{1}{2}\right)}^{\lambda +1}+B\left(\frac{1}{2};2,\lambda \right)\right),\hfill \\ \hfill CP{E}_{\alpha}\left(X\right)& =& 2\frac{1}{1-\alpha}\left(\frac{{\lambda}^{3}-\lambda \alpha -2(\lambda +\alpha )}{{\lambda}^{2}(\lambda +1)(\lambda +\alpha )}+B(\alpha +1,\lambda )\right).\hfill \end{array}$$

#### 9.7. Weibull Distribution

Let X follow the Weibull distribution with distribution function ${F}_{X}\left(x\right)=1-{e}^{-{x}^{c}}$ for $x>0$, $c>0$, then we have

$$\begin{array}{ccc}\hfill CP{E}_{S}\left(X\right)& =& \frac{1}{c}\Gamma \left(\frac{1}{c}\right)\left(1+\sum _{i=1}^{\infty}\frac{1}{i!}\left({\left(\frac{1}{i}\right)}^{1/c}-{\left(\frac{1}{i+1}\right)}^{1/c}\right)\right),\hfill \\ \hfill CP{E}_{G}\left(X\right)& =& \frac{2}{c}\left(\Gamma \left(\frac{1}{c}\right)-\frac{1}{2}\Gamma \left(\frac{1}{2c}\right)\right),\hfill \\ \hfill CP{E}_{L}\left(X\right)& =& 2\left({(ln2)}^{1/c}+\frac{1}{c}\left(\Gamma \left(\frac{1}{c}\right)-2\Gamma \left(ln2;\frac{1}{c}\right)\right)\right),\hfill \\ \hfill CP{E}_{\alpha}\left(X\right)& =& \frac{1}{c}\Gamma \left(\frac{1}{c}\right)\left(\frac{1}{{\alpha}^{1/c}}+\sum _{i=1}^{\infty}\left(\genfrac{}{}{0pt}{}{\alpha}{i}\right){(-1)}^{i}{i}^{-1/c}\right).\hfill \end{array}$$

#### 9.8. Pareto Distribution

Let X follow the Pareto distribution with distribution function ${F}_{X}\left(x\right)=1-{x}^{-c}$ for $x>1$, $c>1$, then we have

$$\begin{array}{ccc}\hfill CP{E}_{S}\left(X\right)& =& \frac{1}{c-1}\psi \left(2-\frac{1}{c}\right)+\psi \left(1-\frac{1}{c}\right)-\frac{c}{c-1}\psi \left(1\right)+\frac{4}{c},\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\hfill \\ \hfill CP{E}_{G}\left(X\right)& =& \frac{2c}{(c-1)(2c-1)},\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\hfill \\ \hfill CP{E}_{L}\left(X\right)& =& 2\frac{1}{c-1},\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\hfill \\ \hfill CP{E}_{\alpha}\left(X\right)& =& \frac{1}{1-\alpha}\left(\frac{c(1-\alpha )}{(c\alpha -1)(c-1)}-\frac{1}{c}B\left(\alpha ,1-\frac{1}{c}\right)\right).\hfill \end{array}$$

#### 9.9. Gaussian Distribution

By means of numerical integration, we calculated the following values for the standard Gaussian distribution:

$$CP{E}_{S}\left(X\right)=1.806,\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}CP{E}_{G}\left(X\right)=1.128,\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}CP{E}_{L}\left(X\right)=1.596.$$
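These values are easy to reproduce; a sketch using SciPy, with the same assumed generating functions as before ($-u\mathrm{ln}u$, $u(1-u)$, $\min (u,1-u)$) and the integration range truncated to $[-7,7]$, where the omitted tails are negligible:

```python
import math
from scipy.integrate import quad
from scipy.stats import norm

def cpe(phi, lim=7.0):
    # integrate phi(F(x)) + phi(1 - F(x)) for the standard normal cdf F
    F = norm.cdf
    return quad(lambda x: phi(F(x)) + phi(1.0 - F(x)), -lim, lim)[0]

phi_S = lambda u: -u * math.log(u) if u > 0.0 else 0.0
print(round(cpe(phi_S), 3))                        # 1.806
print(round(cpe(lambda u: u * (1.0 - u)), 3))      # 1.128 (exactly 2/sqrt(pi))
print(round(cpe(lambda u: min(u, 1.0 - u)), 3))    # 1.596 (exactly 4/sqrt(2 pi))
```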

$CP{E}_{\alpha}$ for $\alpha \in [0.5,3]$ and the standard Gaussian distribution can be seen in Figure 4.

#### 9.10. Student-t Distribution

By means of numerical integration, we calculated the following values for the Student-t distribution with $\nu =3$ degrees of freedom:

$$CP{E}_{S}\left(X\right)=2.947,\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}CP{E}_{G}\left(X\right)=3.308,\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}CP{E}_{L}\left(X\right)=2.205.$$

As can be seen in Figure 4, the heavy tails of the Student-t distribution result in higher values of $CP{E}_{\alpha}$ than for the Gaussian distribution.

## 10. Conclusions

A new kind of entropy has been introduced that generalizes Shannon’s differential entropy. The main difference from previous discussions of entropy is that the new entropy is defined for distribution functions instead of density functions. This paper shows that this definition has a long tradition in several scientific disciplines such as fuzzy set theory, reliability theory, and, more recently, uncertainty theory. With only one exception, the concepts had been discussed independently across these disciplines. Along with that, the theory of dispersion measures for ordered categorical variables refers to measures based on distribution functions, without realizing that, implicitly, some sort of entropy is applied. Using the Cauchy–Schwarz inequality, we were able to show the close relationship between the new kind of entropy, named cumulative paired φ-entropy, and the standard deviation. More precisely, the standard deviation yields an upper bound for the new entropy. Additionally, the Cauchy–Schwarz inequality can be used to derive maximum entropy distributions under constraints that fix the values of mean and variance. Here, the logistic distribution plays the same key role for the cumulative paired Shannon entropy that the Gaussian distribution plays in maximizing the differential entropy. As a new result, we have demonstrated that Tukey’s λ distribution is a maximum entropy distribution when using the entropy generating function φ known from the Havrda and Charvát entropy. Moreover, some new distributions can be derived by considering more general constraints. A change in perspective allows one to determine the entropy that is maximized by a certain distribution if, e.g., mean and variance are known. In this context, the Gaussian distribution gives a simple solution. Since cumulative paired φ-entropy and variance are closely related, we have investigated whether the cumulative paired φ-entropy is a proper measure of scale.
We have shown that it satisfies the axioms introduced by Oja for measures of scale. Several further properties, concerning the behavior under transformations and under sums of independent random variables, have been proven. We have also given first insights into how to estimate the new entropy. In addition, based on the cumulative paired φ-entropy, we have introduced new concepts such as φ-divergence, mutual φ-information, and φ-correlation. φ-regression and linear rank tests for scale alternatives were considered as well. Furthermore, for certain cumulative paired φ-entropies, formulas have been derived for some popular distributions whose cdf or quantile function has a closed form.

## Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive criticism, which helped to improve the presentation of this paper significantly. Furthermore, we would like to thank Michael Grottke for helpful advice.

## Author Contributions

Ingo Klein conceived the new entropy concept, investigated its properties and wrote an initial version of the manuscript. Benedikt Mangold cooperated especially by checking, correcting and improving the mathematical details including the proofs. He examined the entropy’s properties by simulation. Monika Doll contributed by mathematical and linguistic revision. All authors have read and approved the final manuscript.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

- Burbea, J.; Rao, C.R. On the convexity of some divergence measures based on entropy functions. IEEE Trans. Inf. Theory
**1982**, 28, 489–495. [Google Scholar] [CrossRef] - Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J.
**1948**, 27, 379–423. [Google Scholar] [CrossRef] - Oja, H. On location, scale, skewness and kurtosis of univariate distributions. Scand. J. Stat.
**1981**, 8, 154–168. [Google Scholar] - Ebrahimi, N.; Massoumi, E.; Soofi, E.S. Ordering univariate distributions by entropy and variance. J. Econometr.
**1999**, 90, 317–336. [Google Scholar] [CrossRef] - Popoviciu, T. Sur les équations algébraique ayant toutes leurs racines réelles. Mathematica
**1935**, 9, 129–145. (In French) [Google Scholar] - Liu, B. Uncertainty Theory. Available online: http://orsc.edu.cn/liu/ut.pdf (accessed on 27 June 2016).
- Wang, F.; Vemuri, B.C.; Rao, M.; Chen, Y. A New & Robust Information Theoretic Measure and Its Application to Image Alignment: Information Processing in Medical Imaging; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2003; Volume 2732, pp. 388–400. [Google Scholar]
- Di Crescenzo, A.; Longobardi, M. On cumulative entropies and lifetime estimation. In Methods and Models in Artificial and Natural Computation; Mira, J.M., Ed.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 132–141. [Google Scholar]
- Di Crescenzo, A.; Longobardi, M. On cumulative entropies. J. Stat. Plan. Inference
**2009**, 139, 4072–4087. [Google Scholar] [CrossRef] - Kapur, J.N. Derivation of logistic law of population growth from maximum entropy principle. Natl. Acad. Sci. Lett.
**1983**, 6, 429–433. [Google Scholar] - Hartley, R. Transmission of information. Bell Syst. Tech. J.
**1928**, 7, 535–563. [Google Scholar] [CrossRef] - De Luca, A.; Termini, S. A definition of a nonprobabilistic entropy in the setting of fuzzy set theory. Inf. Control
**1972**, 29, 301–312. [Google Scholar] [CrossRef] - Zadeh, L. Probability measures of fuzzy events. J. Math. Anal. Appl.
**1968**, 23, 421–427. [Google Scholar] [CrossRef] - Pal, N.R.; Bezdek, J.C. Measuring fuzzy uncertainty. IEEE Trans. Fuzzy Syst.
**1994**, 2, 107–118. [Google Scholar] [CrossRef] - Rényi, A. On measures of entropy and information. In Fourth Berkeley Symposium on Mathematical Statistics and Probability; University of California Press: Oakland, CA, USA, 1961; pp. 547–561. [Google Scholar]
- Esteban, M.D.; Morales, D. A summary on entropy statistics. Kybernetika
**1995**, 31, 337–346. [Google Scholar] - Cichocki, A.; Amari, S. Families of alpha- beta- and gamma-divergences: Flexible and robust measures of similarities. Entropy
**2010**, 12, 1532–1568. [Google Scholar] [CrossRef] - Arndt, C. Information Measures; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
- Kesavan, H.K.; Kapur, J.N. The generalized maximum entropy principle. IEEE Trans. Syst. Man Cyber.
**1989**, 19, 1042–1052. [Google Scholar] [CrossRef] - Jaynes, E.T. Information theory and statistical mechanics. Phys. Rev.
**1957**, 106, 620–630. [Google Scholar] [CrossRef] - Jaynes, E.T. Information theory and statistical mechanics. II. Phys. Rev.
**1957**, 108, 171–190. [Google Scholar] [CrossRef] - Leik, R.K. A measure of ordinal consensus. Pac. Sociol. Rev.
**1966**, 9, 85–90. [Google Scholar] [CrossRef] - Vogel, H.; Dobbener, R. Ein Streuungsmaß für komparative Merkmale. Jahrbücher für Nationalökonomie und Statistik
**1982**, 197, 145–157. (In German) [Google Scholar] - Kvålseth, T.O. Nominal versus ordinal variation. Percept. Mot. Skills
**1989**, 69. [Google Scholar] [CrossRef] - Berry, K.J.; Mielke, P.W. Assessment of variation in ordinal data. Percept. Motor Skills
**1992**, 74, 63–66. [Google Scholar] [CrossRef] - Berry, K.J.; Mielke, P.W. Indices of ordinal variation. Percept. Motor Skills
**1992**, 74, 576–578. [Google Scholar] [CrossRef] - Berry, K.J.; Mielke, P.W. A test of significance for the index of ordinal variation. Percept. Motor Skills
**1994**, 79, 291–1295. [Google Scholar] - Blair, J.; Lacy, M.G. Measures of variation for ordinal data. Percept. Motor Skills
**1996**, 82, 411–418. [Google Scholar] [CrossRef] - Blair, J.; Lacy, M.G. Statistics of ordinal variation. Sociol. Methods Res.
**2000**, 28, 251–280. [Google Scholar] [CrossRef] - Gadrich, T.; Bashkansky, E.; Zitikas, R. Assessing variation: A unifying approach for all scales of measurement. Qual. Quant.
**2015**, 49, 1145–1167. [Google Scholar] [CrossRef] - Allison, R.A.; Foster, J.E. Measuring health inequality using qualitative data. J. Health Econ.
**2004**, 23, 505–524. [Google Scholar] [CrossRef] [PubMed] - Zheng, B. Measuring inequality with ordinal data: A note. Res. Econ. Inequal.
**2008**, 16, 177–188. [Google Scholar] - Abul Naga, R.H.; Yalcin, T. Inequality measurement for ordered response health data. J. Health Econ.
**2008**, 27, 1614–1625. [Google Scholar] [CrossRef] [PubMed] - Zheng, B. A new approach to measure socioeconomic inequality in health. J. Econ. Inequal.
**2011**, 9, 555–577. [Google Scholar] [CrossRef] - Apouey, B.; Silber, J. Inequality and bi-polarization in socioeconomic status and health: Ordinal approaches. Res. Econ. Inequal.
**2013**, 21, 77–109. [Google Scholar] - Klein, I. Rangordnungsstatistiken als Verteilungsmaßzahlen für ordinalskalierte Merkmale: I. Streuungsmessung. In Diskussionspapiere des Lehrstuhls für Statistik und; Ökonometrie der Universität: Erlangen-Nürnberg, Germany, 1999; Volume 27. (In German) [Google Scholar]
- Yager, R.R. Dissonance—A measure of variability for ordinal random variables. Int. J. Uncertain. Fuzzin. Knowl. Based Syst.
**2001**, 9, 39–53. [Google Scholar] - Bowden, R.J. Information, measure shifts and distribution metrics. Statistics
**2012**, 46, 249–262. [Google Scholar] [CrossRef] - Dai, W. Maximum entropy principle for quadratic entropy of uncertain variables. Available online: http://orsc.edu.cn/online/100314.pdf (accessed on 27 June 2016).
- Dai, W.; Chen, X. Entropy of function of uncertain variables. Math. Comput. Model.
**2012**, 55, 754–760. [Google Scholar] [CrossRef] - Chen, X.; Kar, S.; Ralescu, D.A. Cross-entropy measure of uncertain variables. Inf. Sci.
**2012**, 201, 53–60. [Google Scholar] [CrossRef] - Yao, K.; Gao, J.; Dai, W. Sine entropy for uncertain variables. Int. J. Uncertain. Fuzzin. Knowl. Based Syst.
**2013**, 21, 743–753. [Google Scholar] [CrossRef] - Yao, K.; Ke, H. Entropy operator for membership function of uncertain set. Appl. Math. Comput.
**2014**, 242, 898–906. [Google Scholar] [CrossRef] - Ning, Y.; Ke, H.; Fu, Z. Triangular entropy of uncertain variables with application to portfolio selection. Soft Comput.
**2015**, 19, 2203–2209. [Google Scholar] [CrossRef] - Ebrahimi, N. How to measure uncertainty in the residual lifetime distribution. Sankhya Ser. A
**1996**, 58, 48–56. [Google Scholar] - Rao, M.; Chen, Y.; Vemuri, B.C.; Wang, F. Cumulative residual entropy: A new measure of information. IEEE Trans. Inf. Theory
**2004**, 50, 1220–1228. [Google Scholar] [CrossRef] - Rao, M. More on a new concept of entropy and information. J. Theor. Probabil.
**2005**, 18, 967–981. [Google Scholar] [CrossRef] - Schroeder, M.J. An alternative to entropy in the measurement of information. Entropy
**2004**, 6, 388–412. [Google Scholar] [CrossRef] - Zografos, K.; Nadarajah, S. Survival exponential entropies. IEEE Trans. Inf. Theory
**2005**, 51, 1239–1246. [Google Scholar] [CrossRef] - Drissi, N.; Chonavel, T.; Boucher, J.M. Generalized cumulative residual entropy distributions with unrestricted supports. Res. Lett. Signal Process.
**2008**, 2008. [Google Scholar] [CrossRef] - Chen, X.; Dai, W. Maximum entropy principle for uncertain variables. Int. J. Fuzzy Syst.
**2011**, 13, 232–236. [Google Scholar] - Sunoj, S.M.; Sankaran, P.G. Quantile based entropy function. Stat. Probabil. Lett.
**2012**, 82, 1049–1053. [Google Scholar] [CrossRef] - Zardasht, V.; Parsi, S.; Mousazadeh, M. On empirical cumulative residual entropy and a goodness-of-fit test for exponentiality. Stat. Pap.
**2015**, 56, 677–688. [Google Scholar] [CrossRef] - Navarro, J.; del Aguila, Y.; Asadi, M. Some new results on the cumulative residual entropy. J. Stat. Plan. Inference
**2010**, 140, 310–322. [Google Scholar] [CrossRef] - Psarrakos, G.; Navarro, J. Generalized cumulative residual entropy and record values. Metrika
**2013**, 76, 623–640. [Google Scholar] [CrossRef] - Kiesl, H. Ordinale Streuungsmaße; JOSEF-EUL-Verlag: Köln, Germany, 2003. (In German) [Google Scholar]
- Havrda, J.; Charvát, F. Quantification method of classification processes. Concept of structural a-entropy. Kybernetika
**1967**, 3, 30–35. [Google Scholar] - Jumarie, G. Relative Information: Theories and Applications; Springer: Berlin/Heidelberg, Germany, 1990. [Google Scholar]
- Kapur, J.N. Measures of Information and their Applications; New Age International Publishers: New Delhi, India, 1994. [Google Scholar]
- Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: New York, NY, USA, 1991. [Google Scholar]
- Kapur, J.N. Generalized Cauchy and Students distributions as maximum entropy distributions. Proc. Natl. Acad. Sci. India
**1988**, 58, 235–246. [Google Scholar] - Bickel, P.J.; Lehmann, E.L. Descriptive statistics for nonparametric models: III. Dispersion. Ann. Stat.
**1976**, 5, 1139–1158. [Google Scholar] [CrossRef] - Behnen, K.; Neuhaus, G. Rank Tests with Estimated Scores and their Applications; Teubner-Verlag: Stuttgart, Germany, 1989. [Google Scholar]
- Burger, H.U. Dispersion orderings with applications to nonparametric tests. Stat. Probabil. Lett.
**1993**, 16. [Google Scholar] [CrossRef] - Bickel, P.J.; Lehmann, E.L. Descriptive statistics for nonparametric models: IV. Spread. In Contributions to Statistics; Jurečková, J., Ed.; Academic Press: New York, NY, USA, 1979; pp. 33–40. [Google Scholar]
- Pfanzagl, J. Asymptotic Expansions for General Statistical Models; Springer: New York, NY, USA, 1985. [Google Scholar]
- Beirlant, J.; Dudewicz, E.J.; Györfi, L.; van der Meulen, E.C. Nonparametric entropy estimation: An overview. Int. J. Math. Stat. Sci.
**1997**, 6, 17–39. [Google Scholar] - Büning, H.; Trenkler, G. Nichtparametrische Statistische Methoden; de Gruyter: Berlin, Germany, 1994. [Google Scholar]
- Serfling, R.J. Approximation Theorems in Mathematical Statistics; John Wiley & Sons: New York, NY, USA, 1980. [Google Scholar]
- Huber, P.J. Robust Statistics; John Wiley & Sons: New York, NY, USA, 1981. [Google Scholar]
- Jurečková, J.; Sen, P.K. Robust Statistical Procedures: Asymptotics and Interrelations; John Wiley & Sons: New York, NY, USA, 1996. [Google Scholar]
- Parr, W.C.; Schucany, W.R. Jackknifing L-statistics with smooth weight functions. J. Am. Stat. Assoc.
**1982**, 77, 629–638. [Google Scholar] [CrossRef] - Klein, I.; Mangold, B. Cumulative paired φ-entropies—Estimation and Robustness. Unpublished work. 2016. [Google Scholar]
- Klein, I.; Mangold, B. Cumulative paired φ -entropies and two sample linear rank tests for scale alternatives. Unpublished work. 2016. [Google Scholar]
- Klein, I.; Mangold, B. φ-correlation and φ-regression. Unpublished work. 2016. [Google Scholar]
- Pardo, L. Statistical Inferences based on Divergence Measures; Chapman & Hall: Boca Raton, FL, USA, 2006. [Google Scholar]
- Anderson, T.W.; Darling, D.A. Asymptotic theory of certain goodness of fit criteria based on stochastic processes. Ann. Math. Stat.
**1952**, 23, 193–212. [Google Scholar] [CrossRef] - Berk, R.H.; Jones, D.H. Goodness-of-fit statistics that dominate the Kolmogorov statistics. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete
**1979**, 47, 47–59. [Google Scholar] [CrossRef] - Donoho, D.; Jin, J. Higher criticism for detecting sparse heterogeneous mixtures. Ann. Stat.
**2004**, 32, 962–994. [Google Scholar] - Park, S.; Rao, M.; Shin, D.W. On cumulative residual Kullback–Leibler information. Stat. Probabil. Lett.
**2012**, 82, 2025–2032. [Google Scholar] [CrossRef] - Di Crescenzo, A.; Longobardi, M. Some properties and applications of cumulative Kullback–Leibler information. Appl. Stoch. Models Bus. Ind.
**2015**, 31, 875–891. [Google Scholar] [CrossRef][Green Version] - Liese, F.; Vajda, I. Convex Statistical Distances; Teubner-Verlag: Leipzig, Germany, 1987. [Google Scholar]
- Csiszár, I. Eine informationstheoretische Ungleichung und ihre Anwendung auf den Beweis der Ergodizität von Markoffschen Ketten. Magyar Tud. Akad. Mat. Kutató Int. Közl.
**1963**, 8, 85–108. (In German) [Google Scholar] - Ali, S.M.; Silvey, S.D. A general class of coefficients of divergence of one distribution from another. J. R. Stat. Soc. Ser. B
**1966**, 28, 131–142. [Google Scholar] - Cressie, N.; Read, T. Multinomial goodness-of-fit tests. J. R. Stat. Soc. Ser. B
**1984**, 46, 440–464. [Google Scholar] - Jager, L.; Wellner, J.A. Goodness-of-fit tests via phi-divergences. Ann. Stat.
**2007**, 35, 2018–2053. [Google Scholar] [CrossRef] - Parr, W.C.; Schucany, W.R. Minimum distance and robust estimation. J. Am. Stat. Assoc.
**1980**, 75, 616–624. [Google Scholar] [CrossRef] - Nelsen, R.B. An Introduction to Copulas; Springer: New York, NY, USA, 1999. [Google Scholar]
- Hall, P.; Wolff, R.C.; Yao, Q. Methods for estimating a conditional distribution function. J. Am. Stat. Assoc.
**1999**, 94, 154–163. [Google Scholar] [CrossRef] - Schechtman, E.; Yitzhaki, S. A measure of association based on Gini’s mean difference. Commun. Stat. Theory Methods
**1987**, 16, 207–231. [Google Scholar] [CrossRef] - Schechtman, E.; Yitzhaki, S. On the proper bounds of the Gini correlation. Econ. Lett.
**1999**, 63, 133–138. [Google Scholar] [CrossRef] - Yitzhaki, S. Gini’s mean difference: A superior measure of variability for non-normal distributions. Metron
**2003**, 61, 285–316. [Google Scholar] - Olkin, I.; Yitzhaki, S. Gini regression analysis. Int. Stat. Rev.
**1992**, 60, 185–196. [Google Scholar] [CrossRef] - Hettmansperger, T.P. Statistical Inference Based on Ranks; John Wiley & Sons: New York, NY, USA, 1984. [Google Scholar]
- Jaeckel, L.A. Estimating regression coefficients by minimizing the dispersion of residuals. Ann. Math. Stat.
**1972**, 43, 1449–1458. [Google Scholar] [CrossRef] - Jurečková, J. Nonparametric estimate of regression coefficients. Ann. Math. Stat.
**1971**, 42, 1328–1338. [Google Scholar] [CrossRef] - Kloke, J.D.; McKean, J.W. Rfit: Rank-based estimation for linear models. R J.
**2012**, 4, 57–64. [Google Scholar] - McKean, J.W.; Kloke, J.D. Efficient and adaptive rank-based fits for linear models with skew-normal errors. J. Stat. Distrib. Appl.
**2014**, 1. [Google Scholar] [CrossRef] - Hettmansperger, T.P.; McKean, J.W. Robust Nonparametric Statistical Methods; Chapman & Hall: New York, NY, USA, 2011. [Google Scholar]
- Koul, H.L.; Sievers, G.L.; McKean, J. An estimator of the scale parameter for the rank analysis of linear models under general score functions. Scand. J. Stat.
**1987**, 14, 131–141. [Google Scholar] - Ansari, A.R.; Bradley, R.A. Rank-sum tests for dispersion. Ann. Math. Stat.
**1960**, 31, 142–149. [Google Scholar] [CrossRef] - Hájek, J.; Šidák, Z.; Sen, P.K. Theory of Rank Tests; Academic Press: San Diego, CA, USA, 1999. [Google Scholar]
- Mood, A.M. On the asymptotic efficiency of certain nonparametric two-sample tests. Ann. Math. Stat.
**1954**, 25, 514–522. [Google Scholar] [CrossRef] - Klotz, J. Nonparametric tests for scale. Ann. Math. Stat.
**1961**, 33, 498–512. [Google Scholar] [CrossRef] - Basu, A.P.; Woodworth, G. A note on nonparametric tests for scale. Ann. Math. Stat.
**1967**, 38, 274–277. [Google Scholar] [CrossRef] - Shiraishi, T.A. The asymptotic power of rank tests under scale-alternatives including contaminated distributions. Ann. Math. Stat.
**1986**, 38, 513–522. [Google Scholar] [CrossRef] - Sukhatme, B.V. On certain two-sample nonparametric tests for variances. Ann. Math. Stat.
**1957**, 28, 188–194. [Google Scholar] [CrossRef]

**Figure 2.** Several entropy generating functions φ derived from the generalized maximum entropy (ME) principle.

**Figure 4.** $CP{E}_{\alpha}$, $\alpha \in [0.5,3]$, for the standard Gaussian and the Student-t distributions.

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).