A Generic Formula and Some Special Cases for the Kullback–Leibler Divergence between Central Multivariate Cauchy Distributions

This paper introduces a closed-form expression for the Kullback–Leibler divergence (KLD) between two central multivariate Cauchy distributions (MCDs), which have recently been used in different signal and image processing applications where non-Gaussian models are needed. In this overview, the MCDs are surveyed and some new results and properties are derived and discussed for the KLD. In addition, the KLD for MCDs is shown to be expressible as a function of the Lauricella D-hypergeometric series F_D^{(p)}. Finally, a Monte Carlo sampling approximation of the KLD is compared with the numerical value of its closed-form expression. The Monte Carlo approximation of the KLD is shown to converge to its theoretical value as the number of samples goes to infinity.


Introduction
The multivariate Cauchy distribution (MCD) belongs to the family of elliptically symmetric distributions [1] and is a special case of both the multivariate t-distribution [2] and the multivariate stable distribution [3]. The MCD has recently been used in several signal and image processing applications for which non-Gaussian models are needed, including speckle denoising, color image denoising, watermarking and speech enhancement. Sahu et al. [4] presented a denoising method for speckle noise removal applied to retinal optical coherence tomography (OCT) images. The method was based on the wavelet transform, where the sub-band coefficients were modeled using a Cauchy distribution. In [5], a dual-tree complex wavelet transform (DTCWT)-based despeckling algorithm was proposed for synthetic aperture radar (SAR) images, where the DTCWT coefficients in each subband were modeled with a multivariate Cauchy distribution. In [6], a new color image denoising method in the contourlet domain was suggested for reducing noise in images corrupted by Gaussian noise, where the contourlet subband coefficients were described by the heavy-tailed MCD. Sadreazami et al. [7] put forward a multiplicative watermarking scheme in the contourlet domain where the watermark detector was based on the bivariate Cauchy distribution and designed to capture the across-scale dependencies of the contourlet coefficients. Fontaine et al. [8] proposed a semi-supervised multichannel speech enhancement system in which both speech and noise follow the heavy-tailed multivariate complex Cauchy distribution.
The Kullback–Leibler divergence (KLD), also called relative entropy, is one of the most fundamental and important measures in information theory and statistics [9,10]. The KLD was first introduced and studied by Kullback and Leibler [11] and Kullback [12] to measure the divergence between two probability mass functions in the case of discrete random variables, and between two univariate or multivariate probability density functions in the case of continuous random variables. In the literature, numerous entropy and divergence measures have been suggested for measuring the similarity between probability distributions, such as the Rényi divergence [13], the Sharma–Mittal divergence [14], the Bhattacharyya divergence [15,16] and the Hellinger divergence [17]. Other general divergence families have also been introduced and studied, such as the φ-divergence family defined simultaneously by Csiszár [18] and Ali and Silvey [19], of which the KLD is a special case, the Bregman divergence family [20], the R-divergences introduced by Burbea and Rao [21–23], the statistical f-divergences [24,25] and, recently, the family of generalized divergences called (h, φ)-divergence measures introduced and studied by Menéndez et al. [26]. Readers are referred to [10] for details about these divergence families.
The KLD has a specific interpretation in coding theory [27] and is therefore the most popular and widely used divergence. Since information-theoretic divergences, and the KLD in particular, are ubiquitous in the information sciences [28,29], it is important to establish closed-form expressions for them [30]. An analytical expression of the KLD between two univariate Cauchy distributions was presented in [31,32]. To date, the KLD of MCDs has no known explicit form; in practice, it is either estimated using expensive Monte Carlo stochastic integration or approximated. Monte Carlo sampling can efficiently estimate the KLD provided that a large number of independent and identically distributed samples is available. Nevertheless, Monte Carlo integration is too slow a process to be useful in many applications. The main contribution of this paper is to derive a closed-form expression for the KLD between two central MCDs in the general case, providing a benchmark for future approaches while avoiding approximation by expensive Monte Carlo (MC) estimation techniques. The paper is organized as follows. Section 2 introduces the MCD and the KLD. Section 3 gives some definitions and propositions related to the multiple power series used to compute the closed-form expression of the KLD between two central MCDs. In Sections 4 and 5, expressions of some expectations related to the KLD are developed by exploiting the propositions presented in the previous section. Section 6 presents the final results on the KLD computed for the central MCD. Section 7 presents some particular results, such as the KLD for the univariate and the bivariate Cauchy distributions. Section 8 presents the implementation procedure for the KLD and a comparison with the Monte Carlo sampling method. A summary and some conclusions are provided in the final section.

Multivariate Cauchy Distribution and Kullback–Leibler Divergence
Let X be a random vector of R^p which follows the MCD, characterized by the following probability density function (pdf) [2]:

$$ f_X(\mathbf{x}|\boldsymbol{\mu},\Sigma,p)=\frac{\Gamma\big(\tfrac{1+p}{2}\big)}{\Gamma\big(\tfrac{1}{2}\big)\,\pi^{p/2}\,|\Sigma|^{1/2}}\,\big[1+(\mathbf{x}-\boldsymbol{\mu})^{T}\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\big]^{-\frac{1+p}{2}}, \qquad (1) $$

for any x ∈ R^p, where p is the dimensionality of the sample space, μ is the location vector, Σ is a symmetric, positive definite (p × p) scale matrix and Γ(·) is the Gamma function. Let X_1 and X_2 be two random vectors that follow central MCDs with pdfs f_{X_1}(x|Σ_1, p) = f_{X_1}(x|0, Σ_1, p) and f_{X_2}(x|Σ_2, p) = f_{X_2}(x|0, Σ_2, p) given by (1). The KLD provides an asymmetric measure of the similarity of the two pdfs. Indeed, the KLD between the two central MCDs is given by

$$ KL(X_1\|X_2)=\int_{\mathbb{R}^p} f_{X_1}(\mathbf{x}|\Sigma_1,p)\,\ln\frac{f_{X_1}(\mathbf{x}|\Sigma_1,p)}{f_{X_2}(\mathbf{x}|\Sigma_2,p)}\,d\mathbf{x}. \qquad (2) $$

Since the KLD is the relative entropy, defined as the difference between the cross-entropy and the entropy, we have the following relation:

$$ KL(X_1\|X_2)=H(X_1,X_2)-H(X_1), \qquad (3) $$

where the entropy and the cross-entropy are

$$ H(X_1)=-E_{X_1}\{\ln f_{X_1}(X|\Sigma_1,p)\}, \qquad H(X_1,X_2)=-E_{X_1}\{\ln f_{X_2}(X|\Sigma_2,p)\}. \qquad (4) $$

Therefore, the determination of the KLD requires the expressions of the entropy and the cross-entropy. It should be noted that the smaller KL(X_1||X_2) is, the more similar f_{X_1}(x|Σ_1, p) and f_{X_2}(x|Σ_2, p) are. The symmetric KL similarity measure between X_1 and X_2 is d_KL(X_1, X_2) = KL(X_1||X_2) + KL(X_2||X_1). In order to compute the KLD, we have to derive the analytical expressions of E_{X_1}{ln[1 + X^T Σ_1^{-1} X]} and E_{X_1}{ln[1 + X^T Σ_2^{-1} X]}. Consequently, the closed-form expression of the KLD between two zero-mean MCDs is given by

$$ KL(X_1\|X_2)=\frac{1}{2}\ln\frac{|\Sigma_2|}{|\Sigma_1|}+\frac{1+p}{2}\Big(E_{X_1}\{\ln[1+X^{T}\Sigma_2^{-1}X]\}-E_{X_1}\{\ln[1+X^{T}\Sigma_1^{-1}X]\}\Big). \qquad (5) $$

To provide the expressions of these two expectations, some tools based on multiple power series are required. The next section presents the definitions and propositions used for this goal.
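Although the remainder of the paper derives a closed form, Equation (2) can always be approximated by Monte Carlo sampling, which is the baseline used for comparison in Section 8. The following Python sketch (an illustration of ours, not the authors' code; all function names are ours) evaluates the log-pdf (1) with μ = 0 and estimates KL(X_1||X_2) by drawing samples from X_1, using the standard representation of a central MCD as a multivariate t-distribution with one degree of freedom.

```python
import numpy as np
from scipy.special import gammaln

def mcd_logpdf(x, Sigma):
    """Log-pdf of a central multivariate Cauchy distribution, Equation (1) with mu = 0."""
    p = Sigma.shape[0]
    _, logdet = np.linalg.slogdet(Sigma)
    q = np.einsum('ij,jk,ik->i', x, np.linalg.inv(Sigma), x)  # x^T Sigma^{-1} x per row
    return (gammaln((1 + p) / 2) - gammaln(0.5) - (p / 2) * np.log(np.pi)
            - 0.5 * logdet - ((1 + p) / 2) * np.log1p(q))

def mcd_sample(Sigma, n, rng):
    """Draw n samples from a central MCD: a multivariate t with one degree of freedom."""
    p = Sigma.shape[0]
    z = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    w = np.abs(rng.standard_normal(n))  # square root of a chi-square with 1 dof
    return z / w[:, None]

def kld_monte_carlo(Sigma1, Sigma2, n=10**6, seed=0):
    """Monte Carlo estimate of KL(X1 || X2), Equation (2)."""
    rng = np.random.default_rng(seed)
    x = mcd_sample(Sigma1, n, rng)
    return np.mean(mcd_logpdf(x, Sigma1) - mcd_logpdf(x, Sigma2))
```

The sampling step relies only on the classical fact that a multivariate t-distribution with one degree of freedom is a multivariate Cauchy; the estimator is unbiased but slow to converge, which motivates the closed form derived below.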

Definitions and Propositions
This section presents some definitions and propositions related to the multiple power series used to derive the closed-form expression of the expectations and, as a consequence, of the KLD between two central MCDs.

Definition 1. The Humbert series of n variables, denoted Φ_2^{(n)}, is defined for all x_i ∈ C, i = 1, …, n, by the following multiple power series (Section 1.4 in [33]):

$$ \Phi_2^{(n)}(b_1,\ldots,b_n;c;x_1,\ldots,x_n)=\sum_{m_1,\ldots,m_n=0}^{\infty}\frac{(b_1)_{m_1}\cdots(b_n)_{m_n}}{(c)_{m_1+\cdots+m_n}}\,\frac{x_1^{m_1}\cdots x_n^{m_n}}{m_1!\cdots m_n!}. \qquad (6) $$

The Pochhammer symbol (q)_i denotes the i-th rising factorial of q, i.e., (q)_i = q(q + 1)⋯(q + i − 1) for an integer i ≥ 1, with (q)_0 = 1.

3.1. Integral Representation for Φ_2^{(n)}

Proposition 1. The following integral representation is true for Real{c} > Real{∑_{i=1}^{n} b_i} > 0 and Real{b_i} > 0, where Real{·} denotes the real part of a complex number:

$$ \Phi_2^{(n)}(b_1,\ldots,b_n;c;x_1,\ldots,x_n)=\frac{1}{B\big(b_1,\ldots,b_n,\,c-\sum_{i=1}^{n}b_i\big)}\int_{\Delta}\prod_{i=1}^{n}u_i^{b_i-1}\Big(1-\sum_{i=1}^{n}u_i\Big)^{c-\sum_{i=1}^{n}b_i-1}e^{\sum_{i=1}^{n}x_iu_i}\,du_1\cdots du_n, $$

where Δ = {(u_1, …, u_n) | 0 ≤ u_i ≤ 1, i = 1, …, n; 0 ≤ u_1 + … + u_n ≤ 1} and the multivariate beta function B is the extension of the beta function to more than two arguments (also called the Dirichlet function), defined as (Section 1.6.1 in [34])

$$ B(b_1,\ldots,b_{n+1})=\frac{\Gamma(b_1)\cdots\Gamma(b_{n+1})}{\Gamma(b_1+\cdots+b_{n+1})}. $$

Proof. The power series of the exponential function gives e^{x_i u_i} = ∑_{m_i=0}^{∞} (x_i u_i)^{m_i}/m_i!. Substituting this expansion of the exponential into the multiple integral and integrating term by term, we obtain a sum of multiple integrals,
where the multivariate integral I_D, which is a generalization of the beta integral, is the type-1 Dirichlet integral (Section 1.6.1 in [34]) given by

$$ I_D=\int_{\Delta}\prod_{i=1}^{n}u_i^{b_i+m_i-1}\Big(1-\sum_{i=1}^{n}u_i\Big)^{c-\sum_{i=1}^{n}b_i-1}du_1\cdots du_n=B\Big(b_1+m_1,\ldots,b_n+m_n,\,c-\sum_{i=1}^{n}b_i\Big). \qquad (13) $$

Finally, plugging (13) back into (12) leads to the final result.

Given Proposition 1, we consider the particular cases n ∈ {1, 2} one by one as follows.

Case n = 1:

$$ \Phi_2^{(1)}(b_1;c;x_1)={}_1F_1(b_1;c;x_1), $$

where ₁F₁(·) is the confluent hypergeometric function of the first kind (Section 9.21 in [35]).

Case n = 2:

$$ \Phi_2^{(2)}(b_1,b_2;c;x_1,x_2)=\Phi_2(b_1,b_2;c;x_1,x_2), $$

where the double series Φ_2 is one of the Humbert series of two variables [36] that generalize Kummer's confluent hypergeometric series ₁F₁ of one variable. The double series Φ_2 converges absolutely for any x_1, x_2 ∈ C.
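As a quick numerical illustration (a sketch of ours, not part of the paper's code; the truncation order is an arbitrary choice), the multiple power series (6) can be truncated and checked against the case n = 1, where it must reduce to Kummer's ₁F₁:

```python
import itertools
import numpy as np
from scipy.special import poch, hyp1f1, factorial

def humbert_phi2(b, c, x, order=40):
    """Truncated multiple power series for the Humbert function Phi_2^{(n)}, Equation (6).
    b and x are length-n sequences; the sum runs over all m with m_1 + ... + m_n <= order."""
    n = len(b)
    total = 0.0
    for m in itertools.product(range(order + 1), repeat=n):
        M = sum(m)
        if M > order:
            continue
        term = 1.0 / poch(c, M)
        for bi, xi, mi in zip(b, x, m):
            term *= poch(bi, mi) * xi**mi / factorial(mi)
        total += term
    return total

# Case n = 1: Phi_2^{(1)}(b; c; x) must agree with Kummer's 1F1(b; c; x).
print(humbert_phi2([0.5], 1.5, [0.7]), hyp1f1(0.5, 1.5, 0.7))
```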
The multiple power series (17) is absolutely convergent on the region |x_i x_n^{-1}| < 1, i = 1, …, n − 1. The multiple power series F_N^{(n)} can also be written in the two equivalent forms (18) and (19). By Horn's rule for the determination of the convergence region (see Section 5.7.2 in [37]), the multiple power series (18) and (19) are absolutely convergent on the region |x_i| < 1, ∀ i ∈ {1, …, n − 1}, |1 − x_n| < 1 in C^n. Equation (18) can then be deduced from (17) by using the development (20), where a compact notation for the inner series and α = a + ∑_{i=1}^{n−1} m_i are used to lighten the equations. Using the definition of Gauss's hypergeometric series ₂F₁(·) [34] and the Pfaff transformation [38], we can write (23). By substituting (23) into (20) and using two further relations, we obtain (18).
The second transformation is given by (27). By substituting (27) into (20), we get (19). The multiple power series F_N^{(n)} reduces to the Lauricella D-hypergeometric series [39] when a − c_n + 1 = c.

Proof. The result follows by using Equation (18).

Integral Representation for F_N^{(n)}
Proposition 2 below expresses F_N^{(n)} as an integral involving the Humbert series Φ_2^{(n)}(·) defined by Equation (6) and the confluent hypergeometric function of the second kind U(·) (Section 9.21 in [35]), the latter being defined for Real{b} > 0 and Real{z} > 0 by the integral representation

$$ U(b,c,z)=\frac{1}{\Gamma(b)}\int_{0}^{\infty}e^{-zt}\,t^{\,b-1}\,(1+t)^{\,c-b-1}\,dt. $$
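The integral representation of U(·) can be verified numerically. The following sketch (our illustration, assuming SciPy's hyperu and arbitrary parameter values) compares a quadrature of the integral with a direct evaluation:

```python
import numpy as np
from scipy.special import hyperu, gamma
from scipy.integrate import quad

def hyperu_integral(b, c, z):
    """Confluent hypergeometric function of the second kind via its integral
    representation, valid for Real{b} > 0 and Real{z} > 0."""
    integrand = lambda t: np.exp(-z * t) * t**(b - 1) * (1 + t)**(c - b - 1)
    val, _ = quad(integrand, 0, np.inf)
    return val / gamma(b)

# The quadrature should match SciPy's direct evaluation of U(b, c, z).
print(hyperu_integral(1.2, 0.8, 2.5), hyperu(1.2, 0.8, 2.5))
```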
Proof. The multiple power series Φ_2^{(n)} and the confluent hypergeometric function U(·) are absolutely convergent on [0, +∞). Using these functions in the above integral and changing the order of integration and summation, which is justified by absolute convergence, we obtain an expression involving an integral I. Substituting the integral representation of U(·) into this expression and writing α = a + ∑_{i=1}^{n} m_i to lighten the notation, then using a known expansion from [35] and applying the same development to Γ(α + b_{n+1} − c_{n+1} + 1), we arrive at the complete expression of the integral, which completes the proof.

Expression of E_{X_1}{ln[1 + X^T Σ_1^{-1} X]}

Proposition 3. Let X_1 be a random vector that follows a central MCD with pdf given by f_{X_1}(x|Σ_1, p). Then

$$ E_{X_1}\{\ln[1+X^{T}\Sigma_1^{-1}X]\}=\psi\Big(\frac{1+p}{2}\Big)-\psi\Big(\frac{1}{2}\Big), \qquad (39) $$
where ψ(·) is the digamma function, defined as the logarithmic derivative of the Gamma function (Section 8.36 in [35]).
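A direct Monte Carlo check of Proposition 3 (a sketch of ours, not part of the paper's code): the sample mean of ln(1 + XᵀΣ₁⁻¹X) should approach ψ((1 + p)/2) − ψ(1/2) regardless of the choice of Σ_1, since the expectation does not depend on the scale matrix.

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(1)
p = 3
A = rng.standard_normal((p, p))
Sigma1 = A @ A.T + p * np.eye(p)  # arbitrary symmetric positive definite scale matrix

# Sample X1 ~ central MCD(Sigma1): multivariate t with one degree of freedom.
n = 10**6
z = rng.multivariate_normal(np.zeros(p), Sigma1, size=n)
x = z / np.abs(rng.standard_normal(n))[:, None]

q = np.einsum('ij,jk,ik->i', x, np.linalg.inv(Sigma1), x)
print(np.mean(np.log1p(q)))                 # Monte Carlo estimate
print(digamma((1 + p) / 2) - digamma(0.5))  # closed form of Proposition 3
```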
Proof. Given the pdf f_{X_1}(x|Σ_1, p) defined by (1) with μ = 0, the expectation is given as follows:

$$ E_{X_1}\{\ln[1+X^{T}\Sigma_1^{-1}X]\}=\int_{\mathbb{R}^p}f_{X_1}(\mathbf{x}|\Sigma_1,p)\,\ln[1+\mathbf{x}^{T}\Sigma_1^{-1}\mathbf{x}]\,d\mathbf{x}. $$

Consider the transformation y = Σ_1^{-1/2} x, where y = [y_1, y_2, …, y_p]^T. The Jacobian determinant is given by dy = |Σ_1|^{-1/2} dx (Theorem 1.12 in [40]), which removes the dependence on Σ_1 from the integrand. Next, let u = y^T y be a transformation whose Jacobian determinant is given by Lemma 13.3.1 in [41]; the expectation then reduces to a one-dimensional integral over u. Using the definition of the beta function, we can write this integral in the form (45). The derivative of the last integral with respect to a then gives the result (39), which completes the proof.

Proposition 4. Let X_1 and X_2 be two random vectors that follow central MCDs with pdfs given, respectively, by f_{X_1}(x|Σ_1, p) and f_{X_2}(x|Σ_2, p). The expectation E_{X_1}{ln[1 + X^T Σ_2^{-1} X]} admits a closed-form expression in terms of the Lauricella D-hypergeometric series F_D^{(p)}, where λ_1, …, λ_p are the eigenvalues of the real matrix Σ_1 Σ_2^{-1}.

Proof. To prove Proposition 4, several steps are necessary. They are described in the following.

First Step: Eigenvalue Expression
Consider the transformation y = Σ_1^{-1/2} x, where y = [y_1, y_2, …, y_p]^T. The Jacobian determinant is given by dy = |Σ_1|^{-1/2} dx (Theorem 1.12 in [40]), and the matrix Σ = Σ_1^{1/2} Σ_2^{-1} Σ_1^{1/2} is a real symmetric matrix since Σ_1 and Σ_2 are real symmetric matrices. The expectation is then evaluated in terms of y^T Σ y. The matrix Σ can be diagonalized by an orthogonal matrix P with P^{-1} = P^T and Σ = P D P^{-1}, where D is a diagonal matrix composed of the eigenvalues of Σ. Considering that y^T Σ y = tr(Σ y y^T) = tr(P D P^T y y^T) = tr(D P^T y y^T P), the expectation can be written as (51). Let z = P^T y with z = [z_1, z_2, …, z_p]^T be a transformation whose Jacobian determinant is given by dz = |P^T| dy = dy. Using the fact that tr(D P^T y y^T P) = tr(D z z^T) = z^T D z and y^T y = z^T P^T P z = z^T z, the expectation (51) becomes an integral that depends on Σ_1 and Σ_2 only through λ_1, …, λ_p, the eigenvalues of Σ_1 Σ_2^{-1} (which coincide with those of Σ).
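This first step can be illustrated numerically: the expectation depends on the pair (Σ_1, Σ_2) only through the eigenvalues of Σ_1 Σ_2^{-1}, so replacing the pair by (D, I), with D the diagonal matrix of eigenvalues, leaves the Monte Carlo estimate unchanged up to sampling error. A sketch under these assumptions (names ours):

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 3, 10**6

def expectation(Sigma1, Sigma2):
    """Monte Carlo estimate of E_{X1}{ln[1 + X^T Sigma2^{-1} X]}."""
    z = rng.multivariate_normal(np.zeros(p), Sigma1, size=n)
    x = z / np.abs(rng.standard_normal(n))[:, None]  # central MCD samples
    q = np.einsum('ij,jk,ik->i', x, np.linalg.inv(Sigma2), x)
    return np.mean(np.log1p(q))

A = rng.standard_normal((p, p)); Sigma1 = A @ A.T + p * np.eye(p)
B = rng.standard_normal((p, p)); Sigma2 = B @ B.T + p * np.eye(p)

# Replace (Sigma1, Sigma2) by (D, I), where D holds the eigenvalues of Sigma1 Sigma2^{-1}:
lam = np.linalg.eigvals(Sigma1 @ np.linalg.inv(Sigma2)).real
print(expectation(Sigma1, Sigma2), expectation(np.diag(lam), np.eye(p)))
```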
Second Step: Change of Variables

Let x_i = cos²θ_i be the transformation to use, where |dx_i| = 2 x_i^{1/2} (1 − x_i)^{1/2} dθ_i. The expectation is then written as a multiple integral over all θ_j, j = 1, …, p − 1. In the following, we use the notation B_p instead of B_p(x_1, …, x_{p−1}) to lighten the equations. Let t = r² be the next transformation to use, which rewrites the radial part of the integral. In order to solve the integral in (62), we consider the property

$$ \int \ln(x)\,f(x)\,dx=-\frac{\partial}{\partial a}\Big[\int x^{-a}f(x)\,dx\Big]_{a=0}, $$

which converts the logarithm into a derivative with respect to an auxiliary exponent a evaluated at a = 0, together with a further auxiliary identity. Making use of these, we obtain the new expression (66) of the integral (62).

Third Step: Expression of H(t, y) in Terms of Humbert and Beta Functions
Adding equations (67) to (70), the function B_p can be rewritten in the new form (71). Then, the multiple integral H(t, y) given by (66) can be written differently as (72). Let the real variables x_1, x_2, …, x_{p−1} be transformed into the real variables u_1, u_2, …, u_{p−1}; the corresponding Jacobian determinant follows, and the new expression of B_p is obtained accordingly. As a consequence, the new domain of the multiple integral (72) is the simplex Δ = {(u_1, u_2, …, u_{p−1}) ∈ R^{p−1} | 0 ≤ u_1 ≤ 1, 0 ≤ u_2 ≤ 1 − u_1, …, 0 ≤ u_{p−1} ≤ 1 − u_1 − u_2 − … − u_{p−2}}, and H(t, y) is expressed through the Humbert series Φ_2^{(p−1)} evaluated at the arguments (λ_p − λ_1)ty, (λ_p − λ_2)ty, …, (λ_p − λ_{p−1})ty.

Final Step
The last integral is related to the confluent hypergeometric function of the second kind U(·). As a consequence, a new expression of the expectation is obtained. Using the change of variable r → λ_p r together with Proposition 2, and taking into account the expression of A, the expression becomes one involving the series of Section 3. Applying the transformation given by (18), the final development of the previous expression yields (93). In this section, we presented the exact expression of E_{X_1}{ln[1 + X^T Σ_2^{-1} X]}. In addition, the multiple power series F_D^{(p)}, which appears as a special case of F_N^{(p)}, enjoys many properties and numerous transformations (see Appendix A) that ease the convergence of the multiple power series. In the next section, we establish the closed-form expression of the KLD based on the expression of this expectation.
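The series F_D^{(p)} also admits a classical single-integral (Picard-type) representation, which offers a convenient numerical cross-check of any truncated-series implementation. The sketch below is ours (validity requires Real{c} > Real{a} > 0; parameter values are arbitrary); it evaluates F_D this way and verifies the one-variable reduction to ₂F₁:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyp2f1

def lauricella_fd(a, b, c, x):
    """Lauricella D-hypergeometric series F_D^{(n)}(a; b_1..b_n; c; x_1..x_n)
    through its single-integral (Picard) representation, Real{c} > Real{a} > 0."""
    def integrand(t):
        prod = np.prod([(1 - xi * t)**(-bi) for bi, xi in zip(b, x)])
        return t**(a - 1) * (1 - t)**(c - a - 1) * prod
    val, _ = quad(integrand, 0, 1)
    return gamma(c) / (gamma(a) * gamma(c - a)) * val

# Sanity check against the Gauss series: F_D^{(1)}(a; b; c; x) = 2F1(a, b; c; x).
print(lauricella_fd(0.8, [0.5], 1.5, [0.4]), hyp2f1(0.8, 0.5, 1.5, 0.4))
```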

KLD between Two Central MCDs
Plugging (39) and (93) into (5) yields the closed-form expression of the KLD between two central MCDs with pdfs f_{X_1}(x|Σ_1, p) and f_{X_2}(x|Σ_2, p). This result is presented in the following theorem.

Theorem 1. Let X_1 and X_2 be two random vectors that follow central MCDs with pdfs given, respectively, by f_{X_1}(x|Σ_1, p) and f_{X_2}(x|Σ_2, p). The Kullback–Leibler divergence between the two central MCDs is given in closed form in terms of λ_1, …, λ_p, the eigenvalues of the real matrix Σ_1 Σ_2^{-1}, and of the derivative with respect to a, evaluated at a = 0, of the Lauricella D-hypergeometric series F_D^{(p)}.

Lauricella [39] gave several transformation formulas (see Appendix A), of which relations (A5)–(A7) and (A9) are applied to F_D^{(p)}. Considering these equations, it is easy to provide the different expressions of KL(X_1||X_2) shown in Table 1. The derivative with respect to a of the Lauricella D-hypergeometric series and of its transformations goes through the expressions demonstrated in Appendix B. To derive the closed-form expression of d_KL(X_1, X_2), we have to evaluate the expression of KL(X_2||X_1). The latter can easily be deduced from KL(X_1||X_2) by exchanging the roles of Σ_1 and Σ_2, i.e., by replacing each eigenvalue λ_i of Σ_1 Σ_2^{-1} with its inverse 1/λ_i. Proceeding in the same way with the Lauricella transformations, different expressions of KL(X_2||X_1) are provided in Table 1. Finally, given the above results, it is straightforward to compute the symmetric KL similarity measure d_KL(X_1, X_2) between X_1 and X_2. Technically, any combination of the expressions of KL(X_1||X_2) and KL(X_2||X_1) can be used to compute d_KL(X_1, X_2). However, we choose the same convergence region for the two divergences when calculating the distance. Some expressions of d_KL(X_1, X_2) are given in Table 1.

Table 1. KLD and KL distance computed when X_1 and X_2 are two random vectors following central MCDs with pdfs f_{X_1}(x|Σ_1, p) and f_{X_2}(x|Σ_2, p).
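Numerically, the derivative at a = 0 can be evaluated directly from the truncated series, since the a-dependence enters only through the Pochhammer factor (a)_M, whose derivative at a = 0 equals (M − 1)! for M ≥ 1. The following sketch is our illustration (truncation order and names are ours); it checks the one-variable case F_D^{(1)}(a, b; b; x) = (1 − x)^{−a}, whose derivative at a = 0 is −ln(1 − x):

```python
import itertools
import numpy as np
from scipy.special import poch, factorial

def dFd_da_at_zero(b, c, x, order=60):
    """d/da of F_D^{(n)}(a, b_1..b_n; c; x_1..x_n) at a = 0, from the truncated series.
    Only the factor (a)_M depends on a, and its derivative at a = 0 is (M-1)! for M >= 1."""
    n = len(b)
    total = 0.0
    for m in itertools.product(range(order + 1), repeat=n):
        M = sum(m)
        if M == 0 or M > order:
            continue
        term = factorial(M - 1) / poch(c, M)
        for bi, xi, mi in zip(b, x, m):
            term *= poch(bi, mi) * xi**mi / factorial(mi)
        total += term
    return total

# Check on n = 1 with b = c: F_D^{(1)}(a, b; b; x) = (1 - x)^(-a),
# so the derivative at a = 0 is -ln(1 - x).
print(dFd_da_at_zero([1.5], 1.5, [0.3]), -np.log(1 - 0.3))
```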
Particular Cases

Case of p = 1

This case corresponds to the univariate Cauchy distribution, for which the Lauricella series reduces to ₂F₁, Gauss's hypergeometric function. The expression of the derivative of ₂F₁ is given in Appendix C.1 (see there for the details of the computation). Accordingly, the KLD is then expressed as

$$ KL(X_1\|X_2)=\ln\frac{\big(1+\lambda^{1/2}\big)^{2}}{4\,\lambda^{1/2}}, $$

where λ = σ_1²/σ_2² is the single eigenvalue of Σ_1 Σ_2^{-1}. This expression is invariant under λ → 1/λ; we conclude that the KLD between univariate Cauchy densities is always symmetric. Interestingly, this is consistent with the result presented in [31].
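A quick numerical confirmation of the univariate case (our sketch; the closed form above is equivalent to the expression ln((σ_1 + σ_2)²/(4σ_1σ_2)) from [31] for central distributions):

```python
import numpy as np

rng = np.random.default_rng(3)
s1, s2 = 2.0, 0.5               # scale parameters sigma_1, sigma_2
lam = s1**2 / s2**2             # eigenvalue of Sigma_1 Sigma_2^{-1} for p = 1

closed = np.log((1 + np.sqrt(lam))**2 / (4 * np.sqrt(lam)))
# Equivalent form from [31] for central univariate Cauchy distributions:
print(closed, np.log((s1 + s2)**2 / (4 * s1 * s2)))

# Monte Carlo: x ~ Cauchy(0, s1); average the log density ratio of the two pdfs.
x = s1 * rng.standard_cauchy(10**7)
logratio = np.log(s2 / s1) + np.log1p((x / s2)**2) - np.log1p((x / s1)**2)
print(np.mean(logratio))
```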

Case of p = 2
This case corresponds to the bivariate Cauchy distribution. The KLD is then given in terms of F_1, Appell's hypergeometric function of two variables (see Appendix A), and the expression of the derivative of F_1 can be further developed. In addition, when the eigenvalues λ_i, i = 1, 2, take particular values, the expression of the KLD becomes very simple. We show two such cases in the following. In the first particular case, the derivative reduces as demonstrated in Appendix C.2, and the KLD takes a correspondingly simple form. In the second particular case, detailed in Appendix C.3, the KLD again becomes very simple, and the corresponding expression of KL(X_2||X_1) is easily deduced; this result can be demonstrated using the same process as for KL(X_1||X_2). It is worth noticing that KL(X_1||X_2) ≠ KL(X_2||X_1) in general, which leads us to conclude that the property of symmetry observed in the univariate case is no longer valid in the multivariate case. Nielsen et al. [32] reached the same conclusion.
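Since the bivariate case rests on Appell's F_1, a small numerical sketch may help (ours; parameter values arbitrary, with Real{c} > Real{a} > 0 for the integral form). It evaluates F_1 both as a truncated double series and through its Picard-type integral representation, which should agree inside the convergence region |x| < 1, |y| < 1:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import poch, factorial, gamma

def appell_f1_series(a, b1, b2, c, x, y, order=60):
    """Appell F1 as a truncated double power series."""
    total = 0.0
    for m in range(order + 1):
        for k in range(order + 1 - m):
            total += (poch(a, m + k) * poch(b1, m) * poch(b2, k)
                      / (poch(c, m + k) * factorial(m) * factorial(k))) * x**m * y**k
    return total

def appell_f1_integral(a, b1, b2, c, x, y):
    """Appell F1 via its Picard integral representation, Real{c} > Real{a} > 0."""
    f = lambda t: t**(a - 1) * (1 - t)**(c - a - 1) * (1 - x * t)**(-b1) * (1 - y * t)**(-b2)
    val, _ = quad(f, 0, 1)
    return gamma(c) / (gamma(a) * gamma(c - a)) * val

print(appell_f1_series(0.7, 0.5, 0.5, 1.5, 0.3, -0.4),
      appell_f1_integral(0.7, 0.5, 0.5, 1.5, 0.3, -0.4))
```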

Implementation and Comparison with Monte Carlo Technique
In this section, we show how to compute the numerical values of the KLD in practice, especially when several equivalent expressions with different regions of convergence are available. To this end, the eigenvalues of Σ_1 Σ_2^{-1} are rearranged in descending order λ_p > λ_{p−1} > … > λ_1 > 0. This operation is justified by Equation (53), where it can be seen that a permutation of the eigenvalues does not affect the expectation result. Three cases can be identified from the expressions of the KLD.
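A minimal sketch of this preprocessing step (ours; NumPy assumed), computing and ordering the eigenvalues before selecting the appropriate expression from Table 1:

```python
import numpy as np

def ordered_eigenvalues(Sigma1, Sigma2):
    """Eigenvalues of Sigma1 Sigma2^{-1}, sorted so that lambda_1 <= ... <= lambda_p.
    They are real and positive because the product is similar to the symmetric
    positive definite matrix Sigma2^{-1/2} Sigma1 Sigma2^{-1/2}."""
    lam = np.linalg.eigvals(Sigma1 @ np.linalg.inv(Sigma2)).real
    return np.sort(lam)

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3)); Sigma1 = A @ A.T + 3 * np.eye(3)
B = rng.standard_normal((3, 3)); Sigma2 = B @ B.T + 3 * np.eye(3)
lam = ordered_eigenvalues(Sigma1, Sigma2)
print(lam, lam[-1])  # lam[-1] is lambda_p, used to pick the convergence region
```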
8.3. Case λ_p > 1 and λ_1 < 1

This case guarantees that 0 ≤ 1 − λ_j/λ_p < 1, j = 1, …, p − 1, and 0 ≤ 1 − 1/λ_p < 1. The expression of KL(X_1||X_2) is given by Equation (106) and that of KL(X_2||X_1) by Equation (112). This relation allows us to compare the computational accuracy of the approximation of the Lauricella series with respect to the Gauss hypergeometric function. In addition, to compute the numerical value, the indices of the series run from 0 to N instead of infinity, where N is chosen large enough to ensure a good approximation of the Lauricella series. Table 2 shows the computation of the derivative of F_D given by (129). We can draw the following conclusions. First, the error is reasonably low and decreases as the value of N increases. Second, the error increases for values of 1 − λ close to 1, as expected, since these correspond to the limit of the convergence region.

In the following, we compare the Monte Carlo sampling approximation of the KLD with the numerical value of its closed-form expression. The Monte Carlo method draws a large number of samples and uses them to compute a sum rather than the integral. Here, for each sample size, the experiment is repeated 2000 times. The elements of Σ_1 and Σ_2 are given in Table 3. Figure 1 depicts the absolute value of the bias, the mean square error (MSE) and the box plot of the difference between the approximated symmetric KL value and its theoretical value, versus the sample size. As the sample size increases, the bias and the MSE decrease. Accordingly, the approximated value becomes very close to the theoretical KLD when the number of samples is very large. Outliers are larger than Q_3 + 1.5 × IQR or smaller than Q_1 − 1.5 × IQR, where Q_1, Q_3 and IQR are the 25th percentile, the 75th percentile and the interquartile range, respectively.

Conclusions

In this paper, we derived a closed-form expression for the Kullback–Leibler divergence between two central multivariate Cauchy distributions, expressed as a function of the Lauricella D-hypergeometric series F_D^{(p)}. We have also proposed a simple scheme to compute the Lauricella series easily and to bypass the convergence constraints of this series. Codes and examples for numerical calculations are presented and explained in detail. Finally, a comparison shows that the Monte Carlo sampling method gives approximations close to the theoretical KLD value. As a final note, it is also possible to extend these results on the KLD to the case of the multivariate t-distribution, since the MCD is a particular case of this multivariate distribution.

Acknowledgments: The authors gratefully acknowledge the PHENOTIC platform node of the French national infrastructure on plant phenotyping ANR PHENOME 11-INBS-0012. The authors would also like to thank the anonymous reviewers for their valuable comments and suggestions.

Conflicts of Interest:
The authors declare no conflict of interest.
Lauricella gave several transformation formulas, from which we use the two following relationships; more details can be found in Exton's book [43] on hypergeometric functions. Multiplying the derivative of f with respect to λ by (1 − λ) and integrating, we obtain, as a consequence,

$$ f(\lambda)=-2\ln\Big(\frac{1+\lambda^{1/2}}{2}\Big). \qquad (A24) $$