Rényi Cross-Entropy Measures for Common Distributions and Processes with Memory

Two Rényi-type generalizations of the Shannon cross-entropy, the Rényi cross-entropy and the Natural Rényi cross-entropy, were recently used as loss functions for the improved design of deep learning generative adversarial networks. In this work, we derive the Rényi and Natural Rényi differential cross-entropy measures in closed form for a wide class of common continuous distributions belonging to the exponential family, and we tabulate the results for ease of reference. We also summarise the Rényi-type cross-entropy rates between stationary Gaussian processes and between finite-alphabet time-invariant Markov sources.


Introduction
The Rényi entropy [1] of order α of a probability mass function p with finite support S is defined as

H_α(p) = (1/(1−α)) log Σ_{x∈S} p(x)^α    (1)

for α > 0, α ≠ 1. The Rényi entropy generalizes the Shannon entropy

H(p) = −Σ_{x∈S} p(x) log p(x),    (2)

in the sense that H_α(p) → H(p) as α → 1. Several other Rényi-type information measures have been put forward, each obeying the condition that their limit as α goes to one reduces to a Shannon-type information measure. This includes the Rényi divergence (of order α) between two discrete distributions p and q with common finite support S, given by

D_α(p‖q) = (1/(α−1)) log Σ_{x∈S} p(x)^α q(x)^{1−α},    (3)

which reduces, as α → 1, to the familiar Kullback-Leibler divergence

D(p‖q) = Σ_{x∈S} p(x) log (p(x)/q(x)).    (4)

Note that in some cases [2], there may exist multiple Rényi-type generalisations of the same information measure (particularly for mutual information). Many of these definitions admit natural counterparts when the involved distributions have a probability density function (pdf). This gives rise to information measures such as the Rényi differential entropy for a pdf p with support S,

h_α(p) = (1/(1−α)) log ∫_S p(x)^α dx,    (5)

and the Rényi differential divergence between pdfs p and q with common support S,

D_α(p‖q) = (1/(α−1)) log ∫_S p(x)^α q(x)^{1−α} dx.    (6)

The Rényi cross-entropy between distributions p and q is an analogous generalization of the Shannon cross-entropy

H(p; q) = −Σ_{x∈S} p(x) log q(x).    (7)

Two definitions of this measure have recently been suggested. In light of the fact that Shannon's cross-entropy satisfies H(p; q) = D(p‖q) + H(p), a natural definition of the Rényi cross-entropy is

H̃_α(p; q) = D_α(p‖q) + H_α(p).    (8)

This definition was indeed proposed in [3] in the continuous case, with the differential cross-entropy measure given by

h̃_α(p; q) = D_α(p‖q) + h_α(p).    (9)

In contrast, prior to [3], the authors of [4] introduced the Rényi cross-entropy in their study of shifted Rényi measures expressed as the logarithm of weighted generalized power means. Specifically, upon simplifying Definition 6 in [4], their expression for the Rényi cross-entropy between distributions p and q is given by

H_α(p; q) = (1/(1−α)) log Σ_{x∈S} p(x) q(x)^{α−1}.    (10)

For the continuous case, (10) can be readily converted to yield the Rényi differential cross-entropy between pdfs p and q:

h_α(p; q) = (1/(1−α)) log ∫_S p(x) q(x)^{α−1} dx.    (11)

Note that both (8) and (10) reduce to the Shannon cross-entropy H(p; q) as α → 1 [5]. A similar result holds for (9) and (11), where the Shannon differential cross-entropy

h(p; q) = −∫_S p(x) log q(x) dx    (12)

is obtained. Further, the Rényi (differential) entropy is recovered in all equations when p = q (almost everywhere). These properties alone make these definitions viable extensions of the Shannon (differential) cross-entropy. Finding closed-form expressions for the cross-entropy measure in (9) for continuous distributions is direct, since the Rényi divergence and the Rényi differential entropy have already been calculated for numerous distributions in [6] and [7], respectively. However, deriving the measure in (11) is more involved. We hereafter refer to the measures H̃_α(p; q) in (8) and h̃_α(p; q) in (9) as the Natural Rényi cross-entropy and the Natural Rényi differential cross-entropy, respectively, while we plainly call the measures H_α(p; q) in (10) and h_α(p; q) in (11) the Rényi cross-entropy and the Rényi differential cross-entropy, respectively.
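As a quick numerical illustration, the discrete measure in (10) indeed approaches the Shannon cross-entropy H(p; q) = −Σ_x p(x) log q(x) as α → 1. The following is a minimal sketch; the distributions p and q below are arbitrary examples, not taken from the tables of this paper:

```python
import math

def shannon_cross_entropy(p, q):
    # H(p; q) = -sum_x p(x) log q(x)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

def renyi_cross_entropy(p, q, alpha):
    # Eq. (10): H_alpha(p; q) = (1/(1-alpha)) log sum_x p(x) q(x)^(alpha-1)
    return math.log(sum(pi * qi ** (alpha - 1) for pi, qi in zip(p, q))) / (1 - alpha)

p = [0.5, 0.3, 0.2]
q = [0.2, 0.5, 0.3]

# As alpha -> 1, the Renyi cross-entropy approaches the Shannon cross-entropy.
print(shannon_cross_entropy(p, q))        # ≈ 1.2535
print(renyi_cross_entropy(p, q, 1.0001))  # close to the Shannon value
```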
In a recent conference paper [8], we showed how to calculate the Rényi differential cross-entropy h_α(p; q) between distributions of the same type from the exponential family. Building upon the results shown there, the purpose of this paper is to derive in closed form the expression of h_α(p; q) for thirteen commonly used univariate distributions from the exponential family, as well as for multivariate Gaussians, and to tabulate the results for ease of reference. We also analytically derive the Natural Rényi differential cross-entropy h̃_α(p; q) for the same set of distributions. Finally, we present tables summarising the Rényi and Natural Rényi (differential) cross-entropy rate measures, along with their Shannon counterparts, for two important classes of sources with memory, namely stationary Gaussian sources and finite-state time-invariant Markov sources.
Motivation for determining formulae for the Rényi cross-entropy originates from the use of the Shannon differential cross-entropy as a loss function for the design of deep learning generative adversarial networks (GANs) in [9]. The parameter α, ubiquitous to all Rényi information measures, allows one to fine-tune the loss function to improve the quality of the GAN-generated output. This can be seen in [3,5,10], which used the Rényi differential cross-entropy and the Natural Rényi differential cross-entropy measures, respectively, to generalize the original GAN loss function (which is recovered as α → 1), resulting in both improved GAN system stability and performance for multiple image datasets. It is also shown in [5,10] that the introduced Rényi-centric generalized loss function preserves the equilibrium point satisfied by the original GAN via the so-called Jensen-Rényi divergence [11], a natural extension of the Jensen-Shannon divergence [12] upon which the equilibrium result of [9] is established. Other GAN systems that utilize different generalized loss functions were recently developed and analysed in [13][14][15] (see also the references therein for prior work).
The rest of this paper is organised as follows. In Section 2, the formulae for the Rényi differential cross-entropy and Natural Rényi differential cross-entropy for distributions from the exponential family are given. In Section 3, these calculations are systematically carried out for fourteen pairs of distributions of the same type within the exponential family, and the results are presented in two tables. The Rényi and Natural Rényi differential cross-entropy rates are presented in Section 4 for stationary Gaussian sources; furthermore, the Rényi and Natural Rényi cross-entropy rates are provided in Section 5 for finite-state time-invariant Markov sources. Finally, the paper is concluded in Section 6.

Rényi and Natural Rényi Differential Cross-Entropies for Distributions from the Exponential Family
An exponential family is a class of probability distributions over a support S ⊆ ℝ^n defined by a parameter space Θ ⊆ ℝ^m and functions b : S → ℝ, c : Θ → ℝ, T : S → ℝ^m, and η : Θ → ℝ^m such that each pdf in the family has the form

f_θ(x) = b(x) exp(⟨η(θ), T(x)⟩ − c(θ)),    θ ∈ Θ,    (13)

where ⟨·,·⟩ denotes the standard inner product in ℝ^m. Alternatively, using the (natural) parameter η = η(θ), the pdf can also be written as

f_η(x) = b(x) exp(⟨η, T(x)⟩ − A(η)),    (14)

where

A(η) = log ∫_S b(x) exp(⟨η, T(x)⟩) dx

is the log-normalization function. Examples of important pdfs we consider from the exponential family are included in Appendix A.
In [8], the Rényi differential cross-entropy between pdfs f_1 and f_2 of the same type from the exponential family, with natural parameters η_1 and η_2, respectively, was proven to be

h_α(f_1; f_2) = (1/(1−α)) [ A(η_h) − A(η_1) − (α−1) A(η_2) + log E_{f_h}[b(X)^{α−1}] ],    (15)

where

A(η) = log ∫_S b(x) exp(⟨η, T(x)⟩) dx.    (16)

Here, f_h refers to a distribution of the same type as f_1 and f_2 within the exponential family with natural parameter

η_h = η_1 + (α−1) η_2.    (17)

It can also be shown that the Natural Rényi differential cross-entropy between f_1 and f_2 is given by

h̃_α(f_1; f_2) = (1/(α−1)) [ A(η_α) − α A(η_1) − (1−α) A(η_2) ] + (1/(1−α)) [ A(αη_1) − α A(η_1) + log E_{f_{α1}}[b(X)^{α−1}] ],    (18)

where

η_α = α η_1 + (1−α) η_2,    (19)

and where f_{α1} refers to a distribution of the same type as f_1 and f_2 within the exponential family with natural parameter αη_1.
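As a sanity check of the exponential-family evaluation, the following sketch compares h_α(f_1; f_2) = (1/(1−α))[A(η_h) − A(η_1) − (α−1)A(η_2)] (the formula when b ≡ 1, so the expectation term vanishes) with direct numerical integration of (11) for two univariate Gaussians. It assumes the standard Gaussian natural parametrization η = (μ/σ², −1/(2σ²)) with T(x) = (x, x²) and b ≡ 1; the means and variances below are arbitrary examples:

```python
import math

def A(eta1, eta2):
    # Gaussian log-normalizer in natural parameters eta = (mu/var, -1/(2 var)).
    return -eta1 ** 2 / (4 * eta2) + 0.5 * math.log(math.pi / (-eta2))

def nat_params(mu, var):
    return (mu / var, -1 / (2 * var))

def renyi_diff_ce_expfam(mu1, var1, mu2, var2, alpha):
    # Exponential-family formula specialized to Gaussians (b = 1).
    # Requires var2 + (alpha - 1) * var1 > 0 so that eta_h is a valid natural parameter.
    e1, e2 = nat_params(mu1, var1), nat_params(mu2, var2)
    eh = (e1[0] + (alpha - 1) * e2[0], e1[1] + (alpha - 1) * e2[1])
    return (A(*eh) - A(*e1) - (alpha - 1) * A(*e2)) / (1 - alpha)

def renyi_diff_ce_numeric(mu1, var1, mu2, var2, alpha, lo=-30.0, hi=30.0, n=200000):
    # Direct midpoint-rule evaluation of (1/(1-alpha)) log int f1 f2^(alpha-1) dx, cf. (11).
    def pdf(x, mu, var):
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
    dx = (hi - lo) / n
    s = sum(pdf(lo + (i + 0.5) * dx, mu1, var1)
            * pdf(lo + (i + 0.5) * dx, mu2, var2) ** (alpha - 1)
            for i in range(n)) * dx
    return math.log(s) / (1 - alpha)

print(renyi_diff_ce_expfam(0.0, 1.0, 0.5, 2.0, 1.5))   # ≈ 1.5386
print(renyi_diff_ce_numeric(0.0, 1.0, 0.5, 2.0, 1.5))  # agrees with the formula
```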

Tables of Rényi and Natural Rényi Differential Cross-Entropies
Tables 1 and 2 list the Rényi and Natural Rényi differential cross-entropy expressions, respectively, between common distributions of the same type from the exponential family (which we describe in Appendix A for convenience). The closed-form expressions were derived using (15) and (18), respectively. In the tables, the subscript i is used to denote that a parameter belongs to pdf f_i, i = 1, 2.

Table 1. Rényi differential cross-entropies (columns: Name; h_α(f_1; f_2)).

Table 2. Natural Rényi differential cross-entropies (columns: Name; h̃_α(f_1; f_2)).
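As an illustration of how such table entries arise, consider exponential pdfs f_i(x) = λ_i e^{−λ_i x}, x ≥ 0. Evaluating (11) directly gives h_α(f_1; f_2) = (1/(1−α)) log[ λ_1 λ_2^{α−1} / (λ_1 + (α−1)λ_2) ], valid for λ_1 + (α−1)λ_2 > 0. The following sketch (derived here from (11), not copied from Table 1) checks this closed form against numerical integration for arbitrary example rates:

```python
import math

def renyi_diff_ce_exponential(l1, l2, alpha):
    # Closed form of (1/(1-alpha)) log int f1 f2^(alpha-1) dx for f_i = Exp(l_i),
    # valid when l1 + (alpha - 1) * l2 > 0.
    return math.log(l1 * l2 ** (alpha - 1) / (l1 + (alpha - 1) * l2)) / (1 - alpha)

def numeric_check(l1, l2, alpha, hi=60.0, n=200000):
    # Midpoint-rule evaluation of the same integral over [0, hi].
    dx = hi / n
    s = sum(l1 * math.exp(-l1 * x) * (l2 * math.exp(-l2 * x)) ** (alpha - 1)
            for x in (dx * (i + 0.5) for i in range(n))) * dx
    return math.log(s) / (1 - alpha)

print(renyi_diff_ce_exponential(1.0, 2.0, 1.5))  # ≈ 0.6931 (= log 2)
print(numeric_check(1.0, 2.0, 1.5))              # agrees with the closed form
```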

Rényi and Natural Rényi Differential Cross-Entropy Rates for Stationary Gaussian Sources
Table 3 summarises the differential cross-entropy rates between two stationary zero-mean Gaussian sources, where f(λ) is the spectral density of the first zero-mean Gaussian process and g(λ) is the spectral density of the second Gaussian process.

Table 3. Differential cross-entropy rates for stationary zero-mean Gaussian sources.

Information Measure   Rate   Constraint
Shannon Differential Cross-Entropy   (1/2) log(2π) + (1/(4π)) ∫_{−π}^{π} ( log g(λ) + f(λ)/g(λ) ) dλ
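In the memoryless special case of flat spectra f(λ) = σ₁² and g(λ) = σ₂² (two i.i.d. zero-mean Gaussian sources), the Shannon differential cross-entropy rate reduces to the single-letter cross-entropy between N(0, σ₁²) and N(0, σ₂²), namely (1/2) log(2πσ₂²) + σ₁²/(2σ₂²). The following is a quick numerical sketch of this single-letter identity; the variances are arbitrary examples:

```python
import math

def gauss_pdf(x, var):
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def shannon_diff_cross_entropy_numeric(var1, var2, lo=-30.0, hi=30.0, n=120000):
    # h(p; q) = -int p(x) log q(x) dx, evaluated by a midpoint Riemann sum.
    dx = (hi - lo) / n
    return -sum(gauss_pdf(lo + (i + 0.5) * dx, var1)
                * math.log(gauss_pdf(lo + (i + 0.5) * dx, var2))
                for i in range(n)) * dx

var1, var2 = 1.0, 2.0
closed_form = 0.5 * math.log(2 * math.pi * var2) + var1 / (2 * var2)
print(closed_form)                                     # ≈ 1.5155
print(shannon_diff_cross_entropy_numeric(var1, var2))  # matches the closed form
```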

Rényi and Natural Rényi Cross-Entropy Rates for Markov Sources
In [8], the Rényi cross-entropy rate between finite-state time-invariant Markov sources was established, using, as in [16], tools from the theory of non-negative matrices and Perron-Frobenius theory (e.g., cf. [17,18]). This measure, together with the Shannon and Natural Rényi cross-entropy rates, is derived and summarised in Table 4. Here, P and Q are the m × m (stochastic) transition matrices associated with the first and second Markov sources, respectively, where both sources have a common alphabet of size m. To allow any value of the Rényi parameter α in (0, 1) ∪ (1, ∞), we assume that the transition matrix Q of the second Markov chain has positive entries (Q > 0); the transition matrix P of the first Markov chain, however, is taken to be an arbitrary stochastic matrix. For simplicity, we assume that the initial distribution vectors, p and q, of the two Markov chains also have positive entries (p > 0 and q > 0); this condition can be relaxed via the approach used to prove Theorem 1 in [16]. Moreover, π_p^T denotes the stationary probability row vector associated with the first Markov chain, and 1 is an m-dimensional column vector in which each element equals one. Furthermore, ⊙ denotes element-wise multiplication (i.e., the Hadamard product operation), and ln denotes the element-wise natural logarithm.
Finally, the definition of the function λ : ℝ^{m×m} → ℝ is more involved. If the matrix R is irreducible, λ(R) is its largest positive eigenvalue. Otherwise, rewriting R in its canonical form as detailed in Proposition 1 of [16], we have that λ(R) = max(λ*, λ_*), where λ* is the maximum of the largest positive eigenvalues of the (irreducible) sub-matrices of R corresponding to self-communicating classes, and λ_* is the maximum of the largest positive eigenvalues of the sub-matrices of R corresponding to classes reachable from an inessential class. Table 4. Cross-entropy rates for time-invariant Markov sources.

Information Measure   Rate
Shannon Cross-Entropy   −π_p^T (P ⊙ ln Q) 1
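When both P and Q have strictly positive entries, the Rényi cross-entropy rate admits the spectral form (1/(1−α)) log λ(R) with R = P ⊙ Q^{∘(α−1)} (element-wise power), in which case λ(R) is simply the Perron root of the positive matrix R. The following sketch, with hypothetical 2-state chains and α = 2, checks this form numerically against the exact n-symbol quantity (1/n) H_α(p^n; q^n):

```python
import math

# Hypothetical 2-state chains (example values only); Q > 0 as required,
# and here P > 0 as well so that R below is a positive matrix.
P = [[0.9, 0.1], [0.4, 0.6]]
Q = [[0.7, 0.3], [0.2, 0.8]]
p0 = [0.5, 0.5]   # initial distributions with positive entries
q0 = [0.5, 0.5]
alpha = 2.0
m = len(P)

# R = P ⊙ Q^(alpha-1), element-wise
R = [[P[i][j] * Q[i][j] ** (alpha - 1) for j in range(m)] for i in range(m)]

def perron_root(R, iters=2000):
    # Power iteration; for a positive matrix this converges to lambda(R).
    v = [1.0] * len(R)
    lam = 1.0
    for _ in range(iters):
        w = [sum(R[i][j] * v[j] for j in range(len(R))) for i in range(len(R))]
        lam = max(w)
        v = [x / lam for x in w]
    return lam

rate = math.log(perron_root(R)) / (1 - alpha)

# Finite-n check: H_alpha(p^n; q^n) = (1/(1-alpha)) log sum_{x^n} p(x^n) q(x^n)^(alpha-1),
# evaluated by propagating a row vector through R with renormalization.
n = 2000
u = [p0[i] * q0[i] ** (alpha - 1) for i in range(m)]
total_log = 0.0
for _ in range(n - 1):
    u = [sum(u[i] * R[i][j] for i in range(m)) for j in range(m)]
    s = sum(u)
    total_log += math.log(s)
    u = [x / s for x in u]
finite_rate = total_log / ((1 - alpha) * n)

print(rate, finite_rate)  # the two values agree to within O(1/n)
```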

Conclusions
We have derived closed-form formulae for the Rényi and Natural Rényi differential cross-entropies of commonly used distributions from the exponential family. These results are of potential use in further studies in information theory and machine learning, particularly in problems where deep neural networks trained with a Shannon cross-entropy loss function can be improved via generalized Rényi-type loss functions by virtue of the extra degree of freedom provided by the Rényi parameter α. In addition, we have provided formulae for the Rényi and Natural Rényi differential cross-entropy rates for stationary zero-mean Gaussian processes and expressions for the cross-entropy rates for Markov sources. Further work includes expanding the present collection by considering distributions such as the Lévy or Weibull distributions and investigating cross-entropy measures based on the f-divergence [19][20][21], starting with Arimoto's divergence [22].

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A
In Table A1, we describe the distributions of Tables 1 and 2 (for the multivariate Gaussian, µ ∈ ℝ^n is the mean vector, and Σ ∈ ℝ^{n×n} is a positive definite covariance matrix). Table A1. Distributions listed in Tables 1 and 2.