Open Access

*Fractal Fract* **2017**, *1*(1), 8; https://doi.org/10.3390/fractalfract1010008

Article

Fractional Divergence of Probability Densities

P.O. Box 123AA, Adelaide, SA 5000, Australia

Received: 12 September 2017 / Accepted: 19 October 2017 / Published: 25 October 2017

## Abstract

The divergence or relative entropy between probability densities is examined. Solutions that minimise the divergence between two distributions are usually “trivial” or unique. By using a fractional-order formulation for the divergence with respect to the parameters, the distance between probability densities can be minimised so that multiple non-trivial solutions can be obtained. As a result, the fractional divergence approach reduces the divergence to zero even when this is not possible via the conventional method. This allows replacement of a more complicated probability density with one that has a simpler mathematical form for more general cases.

Keywords: divergence; fractional divergence; probability densities

## 1. Introduction

The divergence or relative entropy between two probability densities is a measure of the dissimilarity between them. The best known divergence is due to Kullback and Leibler and will be discussed in more detail below. Other divergence formulations include the version by Jeffrey, which is symmetric for large separations between densities [1]. While Jeffrey’s approach is a symmetric f-divergence and is non-negative, it does not obey the triangle inequality. The Jensen–Shannon divergence [2] is essentially half the sum of the divergences of the two densities from their mean. The striking feature of the Jensen–Shannon divergence is that its square root is a true distance metric. That is, it not only displays symmetry but also conforms to the triangle inequality. Closely related to the divergence or relative entropy is entropy itself. Notable definitions include that of Rényi [3], a one-parameter generalisation of the Shannon entropy, and that of Tsallis [4]. Tsallis entropy is in fact the underlying formulation for many other entropy definitions in the literature.

There have been attempts to generalise the concepts of divergence and entropy using fractional calculus. Fractional order mathematics has been applied to many classical areas associated with probability, entropy and divergence. The entropy has been derived in fractional form in [5] and subsequently in [6]. Divergence measures based on the Shannon entropy have been dealt with in [7]. An interpretation of fractional order differentiation in the context of probability has been given in [8]. The role of fractional calculus in probability has been discussed in [9]. In [10], the connection between fractional derivatives and negative probability densities is discussed. One of the first attempts to combine fractional calculus with probability theory is due to Jumarie [11], where the fractional probability measure is discussed, in particular the uniform probability density of fractional order. The comparison of the properties of fractional probabilities with those of classical probability theory has been studied in [12,13,14]. These latter works extend the ideas of Jumarie and give definitions for a fractional probability space and a fractional probability measure so that a fractional analogue of classical probability theory is obtained.

The underlying mathematical construct in all of these approaches is the dependence on probability densities or distributions. In many areas of research, there is a requirement to model the statistical behaviour of a physical process by using probability distributions in terms of the cumulative distribution function (CDF) or the probability density function (PDF). Depending on the problem to be analysed, there is usually a particular distribution that is better suited for the description of the physical process compared to other distributions. The problem is that most distributions contain multiple parameters that must be estimated using such methods as the maximum likelihood approach or method of moments. The estimation of these parameters introduces uncertainty, which translates to performance loss for a particular distribution when used to model physical phenomena. For example, in the detection of signals using the Constant False Alarm Rate (CFAR) approach [15,16,17,18,19], correct estimation of parameters is critical. The estimation of these parameters is almost always not exact and, as a consequence, the detection performance drops because of the loss in accuracy.

The basic requirement is to find a probability density that describes a physical process accurately while possessing a smaller number of parameters. In other words, is there a simpler probability density that can replace a more complicated version with two or more parameters? This means that the simpler expression must match the performance of the latter very well over a large solution set. A separation metric is required to indicate how dissimilar the two densities are. If the separation between them is zero or very close to zero, then the more complicated density can be replaced by the “simpler” density (or approximation). Much work has been done on this problem and two methods have proven to be very useful. The first involves information geometry [20], where the separation is given by the geodesic distance between two probability-density manifolds. The geodesic is obtained via the Fisher–Rao information metric. The geodesic is a true metric because it is symmetric between the densities and obeys the triangle inequality.

The other approach is to consider a class of divergence formulations called f-divergences, to which the Kullback–Leibler version belongs. The Kullback–Leibler divergence is not symmetric for large separations between densities and does not obey the triangle inequality. However, there are a number of ways to make it symmetric for large separations. It is worth noting that there is a mathematical duality between the Kullback–Leibler divergence and the geodesic approach of information geometry. The latter is more complicated to work with in the mathematical sense because, in many cases, the geodesic must be obtained via the solution of partial differential equations. On the other hand, an f-divergence formulation such as the Kullback–Leibler divergence is relatively easier to implement, requiring solutions to be obtained via integrals instead.

The Kullback–Leibler divergence has been used previously to find solutions that allow one density or model to be replaced by another [21,22,23,24,25,26,27,28,29]. The problem is that the solution sets that give a divergence of zero or close to zero are either unique or trivial in nature. That is, the divergence is not valid for a large set of parameter values. Replacing one model (density) by another only for certain unique or restricted values in their parameters is not very useful for modelling physical processes or systems. Unfortunately, this is the inherent problem associated with the current form of any divergence method. What is required is an approach that extends the solutions, where the divergence is close to zero or zero, beyond the unique and trivial cases. It would then be possible to replace one model with another since there would be a similarity between them for large parameter sets. This idea will be pursued in this paper by making use of fractional calculus to obtain a fractional form for the Kullback–Leibler divergence.

## 2. Divergence between Two Probability Densities

The divergence between two probability densities considered here is based on the Kullback–Leibler (K-L) formulation. This is a pseudo-metric for the distance between the densities because it fails the triangle inequality. The main issue with the K-L formulation is that it is not symmetric unless the metric separation between the densities is small, i.e., the probability density $q(x;{\overrightarrow{\xi}}_{2})$ is very close in parameter space to the density $p(x;{\overrightarrow{\xi}}_{1})$: $q(x;{\overrightarrow{\xi}}_{2})\approx p(x;{\overrightarrow{\xi}}_{1}+\delta {\overrightarrow{\xi}}_{1})$, where ${\overrightarrow{\xi}}_{i}=({\xi}_{1},{\xi}_{2},...,{\xi}_{N})$ is the parameter vector of each density and N is the total number of parameters. The K-L divergence is defined as

$$\begin{array}{c}\hfill D(p(x;\overrightarrow{{\xi}_{1}})\left|\right|q(x;\overrightarrow{{\xi}_{2}}))={\int}_{\Omega}p(x;\overrightarrow{{\xi}_{1}})log\left(\frac{p(x;\overrightarrow{{\xi}_{1}})}{q(x;\overrightarrow{{\xi}_{2}})}\right)dx\end{array}$$

for some region of integration $\Omega $. It is possible to obtain a symmetric version of (1) that is valid for larger separations. One way to do this is to use Jeffrey’s formulation, as discussed previously:

$$\begin{array}{c}\hfill D(p(x;\overrightarrow{{\xi}_{1}})\left|\right|q(x;\overrightarrow{{\xi}_{2}}))={\int}_{\Omega}\left[p(x;\overrightarrow{{\xi}_{1}})-q(x;\overrightarrow{{\xi}_{2}})\right]log\left(\frac{p(x;\overrightarrow{{\xi}_{1}})}{q(x;\overrightarrow{{\xi}_{2}})}\right)dx.\end{array}$$
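As a concrete illustration (a sketch of my own, not from the paper), the following snippet evaluates (1) and Jeffrey's form (2) numerically for two Exponential densities with hypothetical rates, showing the asymmetry of the K-L divergence:

```python
# Numerical evaluation of the K-L divergence (1) and Jeffrey's divergence (2).
# The two Exponential densities below are hypothetical examples.
import math

def kl(p, q, a, b, n=200001):
    """Kullback-Leibler divergence of p from q by the trapezoid rule on [a, b]."""
    h = (b - a) / (n - 1)
    total = 0.0
    for i in range(n):
        x = a + i * h
        px, qx = p(x), q(x)
        if px > 0.0 and qx > 0.0:
            w = 0.5 if i in (0, n - 1) else 1.0
            total += w * px * math.log(px / qx)
    return total * h

def jeffrey(p, q, a, b):
    """Jeffrey's symmetric divergence (2) = D(p||q) + D(q||p)."""
    return kl(p, q, a, b) + kl(q, p, a, b)

p = lambda x: 1.0 * math.exp(-1.0 * x)   # Exp(lambda = 1)
q = lambda x: 2.0 * math.exp(-2.0 * x)   # Exp(lambda = 2)

d_pq = kl(p, q, 1e-9, 40.0)
d_qp = kl(q, p, 1e-9, 40.0)
print(d_pq, d_qp)   # asymmetric: the two values differ
```

For this pair the closed forms are $D(p||q)=1-\log 2$ and $D(q||p)=\log 2 - 1/2$; their sum is Jeffrey's value, since (2) expands into the two one-sided divergences.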

It will suffice to consider the divergence as given by (1) in what follows since the approach discussed in this paper is easily applicable to the symmetric Jeffrey’s case or other similar formulations. In any case, this matters little, since small separations dominate for almost all cases of interest. The K-L divergence (1), hereafter referred to as the divergence for brevity, is also known as the relative entropy for the following reason. If (1) is re-written as

$$\begin{array}{c}\hfill D(p(x;\overrightarrow{{\xi}_{1}})\left|\right|q(x;\overrightarrow{{\xi}_{2}}))={\int}_{\Omega}p(x;\overrightarrow{{\xi}_{1}})log\left(p(x;\overrightarrow{{\xi}_{1}})\right)dx-{\int}_{\Omega}p(x;\overrightarrow{{\xi}_{1}})log\left(q(x;\overrightarrow{{\xi}_{2}})\right)dx,\end{array}$$

then the negative of the first integral in (3) is the differential entropy H of the probability density $p(x;\overrightarrow{{\xi}_{1}})$. It was first used in statistical physics by Boltzmann and in information theory by Shannon. Both considered the discrete form for a probability mass $p({x}_{i};\overrightarrow{{\xi}_{1}})$:

$$\begin{array}{c}\hfill H\left(p({x}_{i};\overrightarrow{{\xi}_{1}})\right)=-\sum _{i=1}^{N}p({x}_{i};\overrightarrow{{\xi}_{1}})log\left(p({x}_{i};\overrightarrow{{\xi}_{1}})\right).\end{array}$$
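A small numerical sketch (my own example, not from the paper) of the decomposition in (3): the divergence equals the cross-entropy minus the entropy, here in the discrete form of (4).

```python
# Discrete check that divergence = cross-entropy - entropy, as in (3) and (4).
import math

p = [0.5, 0.3, 0.2]   # hypothetical probability masses
q = [0.4, 0.4, 0.2]

entropy = -sum(pi * math.log(pi) for pi in p)                     # H(p), Eq. (4)
cross_entropy = -sum(pi * math.log(qi) for pi, qi in zip(p, q))
divergence = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))  # discrete (1)
print(divergence, cross_entropy - entropy)
```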

The integral with a positive sign on the right of (3) is the cross-entropy between the densities $p(x;\overrightarrow{{\xi}_{1}})$ and $q(x;\overrightarrow{{\xi}_{2}})$. Hence, (1) and (3) are also referred to as the relative entropy between two densities. The divergence or relative entropy between probability densities $p(x;\overrightarrow{{\xi}_{1}})$ and $q(x;\overrightarrow{{\xi}_{2}})$ is interpreted in the following sense. Assume that a physical process or system is known to be accurately represented and modelled by a probability density $p(x;\overrightarrow{{\xi}_{1}})$. This density might also represent an ideal or theoretical model. Is there another (perhaps simpler) model with density $q(x;\overrightarrow{{\xi}_{2}})$ that is asymptotically close to or exactly matches the former density (model)? If the two densities have a divergence that tends to zero, then the more complicated model can be replaced by the simpler model (approximation) for the given parameters that achieve zero or almost zero divergence. Put another way, the question is what information is lost if one used the model density $q(x;{\overrightarrow{\xi}}_{2})$ instead of the more accurate model density $p(x;{\overrightarrow{\xi}}_{1})$. As an example, the divergence is very important in signal processing, in particular for the detection of targets amongst background noise and clutter. This requires determining if signals (targets) of a given probability density differ from another density that represents the background noise and clutter. The degree of separation above a given threshold determines whether targets are present or not (see Section 6). In fact, the concept of divergence is used in many areas of physics, statistics/mathematics and engineering with a common goal. Ideally, the requirement is to find solutions of (1) in terms of the parameter vectors ${\overrightarrow{\xi}}_{1}$ and ${\overrightarrow{\xi}}_{2}$ that make the divergence equal to zero, i.e.,

$$\begin{array}{c}\hfill {\int}_{\Omega}p(x;\overrightarrow{{\xi}_{1}})log\left(\frac{p(x;\overrightarrow{{\xi}_{1}})}{q(x;\overrightarrow{{\xi}_{2}})}\right)dx=0,\end{array}$$

or, from (3), when the entropy term is equal to the cross-entropy term. The problem is that, since the two densities are different and have different parameters, it is not possible to achieve zero divergence between them except perhaps for particular or unique solutions, such as solutions pertaining to their intersections. In some cases, the solutions are trivial, such as when the two densities have the exact same mathematical form, which means a divergence of zero is possible since the parameters of one can be made to take on the same values as those of the other. For example, for two Exponential densities with parameters ${\lambda}_{1}$ and ${\lambda}_{2}$, it is trivial to show by inspection or by using the divergence (1) that

$$\begin{array}{c}\hfill p(x;{\lambda}_{1})={\lambda}_{1}{e}^{-{\lambda}_{1}x}\phantom{\rule{2.em}{0ex}}\mathrm{and}\phantom{\rule{2.em}{0ex}}q(x;{\lambda}_{2})={\lambda}_{2}{e}^{-{\lambda}_{2}x}\end{array}$$

have a divergence of zero everywhere only when ${\lambda}_{1}\equiv {\lambda}_{2}$. In fact, forcing the divergence to be zero as in (5) may not necessarily give solutions that achieve zero divergence. In such cases, it is also possible that the solutions become complex, which does not make sense when applied to a real physical problem. In what follows, it will be shown that it is possible to extend the domain of validity of solutions that give zero divergence beyond the trivial or unique cases. This can be done via the transformation of one or more of the parameters appearing in the divergence equations using fractional calculus. The method will be applied to two important densities used in many fields of research: the Exponential density and a well known power-law form, namely, the Pareto density. The first step is to obtain the conventional and fractional divergences for the Exponential–Pareto case and then to do the same for the Exponential–Exponential case.
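For the two Exponential densities in (6), the divergence (1) has the well-known closed form $\log({\lambda}_{1}/{\lambda}_{2})+{\lambda}_{2}/{\lambda}_{1}-1$ (a standard result, stated here for illustration), which vanishes only when the rates coincide:

```python
# Closed-form divergence between the two Exponential densities in (6):
# D(Exp(l1) || Exp(l2)) = log(l1/l2) + l2/l1 - 1 (standard result).
import math

def d_exp(lam1, lam2):
    return math.log(lam1 / lam2) + lam2 / lam1 - 1.0

print(d_exp(2.0, 2.0))   # identical rates give zero divergence
print(d_exp(2.0, 3.0))   # distinct rates give strictly positive divergence
```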

## 3. Conventional Divergence of Exponential and Pareto Densities

The Exponential and Pareto distributions have been used to model a large number of problems. For example, the Pareto distribution is critical in the analysis of radar clutter. For this reason, a fractional-order Pareto distribution has been presented in [30] in order to more accurately model sea clutter in microwave radar. Consider $i.i.d.$ random variables belonging to the Exponential density ${X}_{i}\sim Exp\left(\lambda \right)$ as well as the Pareto density ${X}_{i}\sim Pa({x}_{0},\beta )$. That is,

$$\begin{array}{c}\hfill p(x;\lambda )=\lambda {e}^{-\lambda x},\end{array}$$

where the parameter space contains only one parameter, $\lambda $, which is usually related to the expectation $\mu $ of the random variables by $\lambda =1/\mu $. The Pareto density has parameter space ${\overrightarrow{\xi}}_{2}=({x}_{0},\beta )$ where ${x}_{0}$ is the scale parameter and $\beta $ is the shape parameter:

$$\begin{array}{c}\hfill q(x;{x}_{0},\beta )=\beta {x}_{0}^{\beta}{x}^{-(\beta +1)}.\end{array}$$

The idea here is to replace the two-parameter Pareto density with the simpler one-parameter Exponential density. This can only be valid for solutions where the divergence between them is zero or close to zero. For brevity, the densities will be written as $p\left(x\right)$ and $q\left(x\right)$. The divergence expression between the Exponential and Pareto densities is obtained from

$$\begin{array}{c}\hfill D\left(p\left(x\right)\right|\left|q\left(x\right)\right)={\int}_{\Omega}p\left(x\right)log\left(\frac{p\left(x\right)}{q\left(x\right)}\right)dx,\end{array}$$

where the log-function in the integrand simplifies to

$$\begin{array}{c}\hfill log\left(\frac{p\left(x\right)}{q\left(x\right)}\right)=log\left(\frac{\lambda}{\beta {x}_{0}^{\beta}}\right)-\lambda x+(\beta +1)log\left(x\right).\end{array}$$

Substituting (10) into (9) and taking the integration domain to be the interval $\Omega =[0,\infty )$, the divergence now becomes

$$\begin{array}{c}\hfill D\left(p\left(x\right)\right|\left|q\left(x\right)\right)={\int}_{0}^{\infty}\lambda {e}^{-\lambda x}\left[log\left(\frac{\lambda}{\beta {x}_{0}^{\beta}}\right)-\lambda x+(\beta +1)log\left(x\right)\right]dx.\end{array}$$
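The integral in (11) can be cross-checked numerically against the closed form derived next as (12); the sketch below uses hypothetical parameter values of my own choosing.

```python
# Cross-check of the closed form (12) against direct numerical integration
# of (11); the parameter values lam, x0, beta are hypothetical.
import math

GAMMA = 0.577215664901532860  # Euler-Mascheroni constant

lam, x0, beta = 2.0, 0.01, 0.8

def integrand(x):
    return lam * math.exp(-lam * x) * (
        math.log(lam / (beta * x0**beta)) - lam * x + (beta + 1.0) * math.log(x)
    )

# Trapezoid rule on (0, 40]; the log singularity at the origin is integrable.
n, a, b = 200001, 1e-6, 40.0
h = (b - a) / (n - 1)
numeric = sum(
    (0.5 if i in (0, n - 1) else 1.0) * integrand(a + i * h) for i in range(n)
) * h

closed = abs(
    math.log(lam / (beta * x0**beta))
    - (beta + 1.0) * math.log(lam)
    - GAMMA * (beta + 1.0) - 1.0
)
print(numeric, closed)   # the two values agree closely
```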

The first term in the integrand is trivial since the second axiom of probability states that the integral of the density over the interval is unity: ${\int}_{0}^{\infty}p\left(x\right)dx=1$. The other terms can be completed by using integration by parts to finally arrive at the following expression for the divergence between the two densities

$$\begin{array}{c}\hfill D\left(p\left(x\right)\right|\left|q\left(x\right)\right)=\left|log\left(\frac{\lambda}{\beta {x}_{0}^{\beta}}\right)-(\beta +1)log\left(\lambda \right)-\gamma (\beta +1)-1\right|,\end{array}$$

where the Euler–Mascheroni constant $\gamma \approx 0.577216$ has been introduced. The modulus is included in (12) to enforce the fact that the divergence is greater than or equal to zero. The idea now is to work out for what values of $\beta $ in (12) the divergence approaches zero. That is, for what values of $\beta $ does the Pareto density $q\left(x\right)$ approximate or become equal to the Exponential $p\left(x\right)$? Let the parameter space of all parameters be written as a vector $\overrightarrow{\xi}\equiv ({\xi}_{1},{\xi}_{2},{\xi}_{3})=(\lambda ,{x}_{0},\beta )$. Consider the derivative as an operator ${\widehat{L}}_{i}=\partial /\partial {\xi}_{i}$. Taking the index $i=3$ gives the operator in terms of the parameter $\beta $, i.e., ${\widehat{L}}_{3}$. Applying the operator to both sides of (12) gives (ignoring the modulus):

$$\begin{array}{c}\hfill {\widehat{L}}_{3}D\left(p\left(x\right)\right|\left|q\left(x\right)\right)={\widehat{L}}_{3}log\left(\frac{\lambda}{\beta {x}_{0}^{\beta}}\right)-{\widehat{L}}_{3}(\beta +1)log\left(\lambda \right)-{\widehat{L}}_{3}\gamma (\beta +1)-{\widehat{L}}_{3},\end{array}$$

where ${\widehat{L}}_{3}=\partial /\partial {\xi}_{3}=\partial /\partial \beta $. The left-hand side is required to be equal to zero, i.e., ${\widehat{L}}_{3}D\left(p\left(x\right)\right|\left|q\left(x\right)\right)=0$, so that

$$\begin{array}{c}\hfill -\frac{1}{\beta}-(\gamma +log\left(\lambda {x}_{0}\right))=0,\end{array}$$

Solving for $\beta $:

$$\begin{array}{c}\hfill \beta =-\frac{1}{\gamma +log\left(\lambda {x}_{0}\right)}.\end{array}$$

This means that the density $q\left(x\right)$ has a divergence that is zero or close to zero with respect to $p\left(x\right)$, the Exponential, whenever $\beta $ is given by (15). Then, the Pareto density is modified to

$$\begin{array}{c}\hfill q\left(x\right)=-\frac{{x}_{0}^{-\frac{1}{\gamma +log\left(\lambda {x}_{0}\right)}}}{\gamma +log\left(\lambda {x}_{0}\right)}{x}^{\frac{1}{\gamma +log\left(\lambda {x}_{0}\right)}-1}.\end{array}$$
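A quick validity sketch (my own check, not from the paper): the shape parameter in (15) is positive only when $\lambda {x}_{0}<{e}^{-\gamma}$, and the modified density (16) still integrates to one over $[{x}_{0},\infty )$. The values below use the setting of the paper's Figure 1.

```python
# Validity sketch for the modified Pareto density (16): beta from (15) is
# positive only when lam * x0 < exp(-GAMMA), and the density still integrates
# to one over [x0, infinity).
import math

GAMMA = 0.577215664901532860

def beta_of(lam, x0):
    return -1.0 / (GAMMA + math.log(lam * x0))   # Equation (15)

lam, x0 = 9.458, 0.01            # the setting used for the paper's Figure 1
beta = beta_of(lam, x0)

def q(x):                        # Pareto density with beta from (15)
    return beta * x0**beta * x ** (-beta - 1.0)

# Trapezoid rule on a geometric grid: the Pareto tail decays slowly, so the
# points are spread log-uniformly up to a large cut-off.
n, a, b = 200001, x0, 1e9
r = (b / a) ** (1.0 / (n - 1))
mass, x = 0.0, a
for _ in range(n - 1):
    x2 = x * r
    mass += 0.5 * (q(x) + q(x2)) * (x2 - x)
    x = x2
print(beta, mass)
```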

The Pareto density (16) is now expressed in terms of the Exponential-density parameter $\lambda $. This indicates where the divergence of $p\left(x\right)$ from $q\left(x\right)$ approaches zero as a function of $\lambda $. When the divergence is acceptably small or even zero, the Pareto model can be adequately described by the simpler one-parameter Exponential model. Thus, substituting (15) into (12) means that the divergence can be written as

$$\begin{array}{c}\hfill D\left(p\left(x\right)\right|\left|q\left(x\right)\right)=\left|log\left(-\frac{\lambda {x}_{0}^{\omega}}{\omega}\right)+(\omega -1)log\left(\lambda \right)+\gamma (\omega -1)-1\right|,\end{array}$$

where $\omega ={(\gamma +log\left(\lambda {x}_{0}\right))}^{-1}$. Equation (17) determines the value of the minimum divergence between the two densities. Figure 1 shows a plot of the divergence (17) as a function of the parameter $\lambda $ at ${x}_{0}=0.01$. The conventional divergence (17) is zero for the unique value of $\lambda \approx 9.458$ in the range considered. Multiple solutions that approach zero are not generally possible. In this case, the divergence between the densities $p\left(x\right)$ and $q\left(x\right)$ is zero only for the particular value $\lambda \approx 9.458$ and close to zero for small values of $\lambda $ on either side.
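The unique zero reported for Figure 1 can be recovered by simple bisection on the signed expression inside the modulus of (17); the bracket below is a hypothetical choice on which the sign changes.

```python
# Bisection for the unique zero of the conventional divergence (17) at
# x0 = 0.01; the bracket [5, 15] is a hypothetical choice.
import math

GAMMA = 0.577215664901532860
x0 = 0.01

def g(lam):
    """The expression inside the modulus of (17)."""
    omega = 1.0 / (GAMMA + math.log(lam * x0))
    return (math.log(-lam * x0**omega / omega)
            + (omega - 1.0) * math.log(lam)
            + GAMMA * (omega - 1.0) - 1.0)

lo, hi = 5.0, 15.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
lam_zero = 0.5 * (lo + hi)
print(lam_zero)   # close to the lambda ~ 9.458 reported for Figure 1
```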

In fact, for the general case, the conventional divergence given by the expression (12) can be plotted as a function of the parameters $(\lambda ;{x}_{0},\beta )$. Figure 2a shows the divergence between the two densities as a function of the two parameters $\lambda $ and $\beta $ at a fixed Pareto scale parameter value of ${x}_{0}=0.01$. For the range of $\lambda $ and $\beta $ values shown, the divergence is never zero or very close to zero. In terms of Figure 1, the divergence is zero for $\lambda \approx 9.458$ and this occurs when $\beta =0.561$, which is outside the range of values for $\beta $ shown in Figure 2a. An exact divergence of zero at only one unique point is not very useful or practical in the general sense anyway. What is required is an extension of the solutions so that zero divergence (or very close to zero) is achieved over a wider parameter range (see Figure 2b). This will require the use of fractional-order calculus and will be discussed in the next section.

## 4. Fractional Divergence of Exponential and Pareto Densities

Fractional calculus has been around since the time of integer order calculus, which was developed by Newton and Leibniz. The name “fractional” is a misnomer that has endured since around 1695, when l’Hôpital queried Leibniz on the meaning of a fractional order of one-half for the derivative operator. It is to be understood that fractional really means “generalised”. Fractional-order derivatives and integrals of functions have been studied for a very long time, with various definitions appearing in the literature. Among the best known are those due to Caputo, Grünwald–Letnikov and Riemann–Liouville. For a comprehensive review of the many versions that have been derived, see [31] and the references therein. Research into fractional order mathematics has been prevalent in recent times in many fields of science, mathematics and engineering [32,33,34,35,36,37,38,39,40,41]. In this paper, the interest is in the fractional derivative of functions only, and the Riemann–Liouville formulation for the fractional derivative will be considered:

$$\begin{array}{c}\hfill {}_{a}{D}_{t}^{\alpha}f\left(t\right)=\frac{1}{\Gamma (\nu -\alpha )}\frac{{d}^{\nu}}{d{t}^{\nu}}{\int}_{a}^{t}{(t-x)}^{\nu -\alpha -1}f\left(x\right)dx.\end{array}$$

The terminal a takes two values. The case $a=-\infty $ is due to Liouville while the case $a=0$ is due to Riemann. The parameter $\nu $ represents values that are integer order, i.e., $\nu \in {Z}^{+}$. The parameter $\alpha $ is the fractional order, which can be real or complex and is bounded by $\lfloor \alpha \rfloor <\alpha \le \lceil \alpha \rceil $. Here, $\lfloor \cdot \rfloor $ is the floor function and $\lceil \cdot \rceil $ is the ceiling function. Consider the Riemann–Liouville fractional derivative for $\nu =1$ and terminal $a=0$. The following fractional operator can then be defined:

$$\begin{array}{c}\hfill {\widehat{\Lambda}}_{i}(x\mapsto {\xi}_{i})=\frac{1}{\Gamma (1-\alpha )}\frac{d}{dx}{\int}_{0}^{x}d{\xi}_{i}{(x-{\xi}_{i})}^{-\alpha}.\end{array}$$
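As a numerical sketch (my own construction, not the paper's code), the operator (19) can be applied to the test function $f({\xi}_{i})={\xi}_{i}$ and checked against the closed form ${\beta}^{1-\alpha}/\Gamma (2-\alpha )$ that appears later in (32). The singular kernel is removed with the substitution $u={y}^{1-\alpha}$.

```python
# Numerical sketch of the operator (19) acting on f(xi) = xi, checked against
# the closed form beta**(1 - alpha) / Gamma(2 - alpha) of (32). The singular
# kernel (x - xi)**(-alpha) is regularised by substituting u = y**(1 - alpha).
import math

def rl_operator(f, x, alpha, n=20001, h=1e-5):
    """1/Gamma(1-alpha) * d/dx of the integral in (19), by central difference."""
    def inner(t):
        # I(t) = 1/(1-alpha) * Integral_0^{t^(1-alpha)} f(t - u^(1/(1-alpha))) du
        top = t ** (1.0 - alpha)
        du = top / (n - 1)
        s = 0.0
        for i in range(n):
            u = i * du
            w = 0.5 if i in (0, n - 1) else 1.0
            s += w * f(t - u ** (1.0 / (1.0 - alpha)))
        return s * du / (1.0 - alpha)
    deriv = (inner(x + h) - inner(x - h)) / (2.0 * h)
    return deriv / math.gamma(1.0 - alpha)

alpha, beta = 0.5, 2.0
numeric = rl_operator(lambda s: s, beta, alpha)       # x mapped onto beta
closed = beta ** (1.0 - alpha) / math.gamma(2.0 - alpha)
print(numeric, closed)
```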

Applying the operator to the conventional divergence, i.e., ${\widehat{\Lambda}}_{i}(x\mapsto {\xi}_{i})D(p(x;{\overrightarrow{\xi}}_{1})\left|\right|q(x;{\overrightarrow{\xi}}_{2}))$, gives the fractional divergence $\mathcal{D}(x\mapsto {\xi}_{i},p(x;{\overrightarrow{\xi}}_{1})|\left|q(x;{\overrightarrow{\xi}}_{2})\right)$ such that the following holds:

**Definition 1.**

The fractional divergence, which is a generalisation of the conventional divergence, is defined as

$$\begin{array}{ccc}\hfill \mathcal{D}(x\mapsto {\xi}_{i},p(x;{\overrightarrow{\xi}}_{1})|\left|q(x;{\overrightarrow{\xi}}_{2})\right)& =& \left|\frac{1}{\Gamma (1-\alpha )}\frac{d}{dx}{\int}_{0}^{x}{(x-{\xi}_{i})}^{-\alpha}{\int}_{\Omega}p(x;{\overrightarrow{\xi}}_{1})log\left(\frac{p(x;{\overrightarrow{\xi}}_{1})}{q(x;{\overrightarrow{\xi}}_{2})}\right)dxd{\xi}_{i}\right|\hfill \\ & =& \left|\frac{1}{\Gamma (1-\alpha )}\frac{d}{dx}{\int}_{0}^{x}{(x-{\xi}_{i})}^{-\alpha}{\left\langle log\left(\frac{p(x;{\overrightarrow{\xi}}_{1})}{q(x;{\overrightarrow{\xi}}_{2})}\right)\right\rangle}_{p(x;{\overrightarrow{\xi}}_{1})}d{\xi}_{i}\right|,\hfill \end{array}$$

where $\langle \cdot \rangle $ is the expectation with respect to the density $p(x;{\overrightarrow{\xi}}_{1})$ and the three axioms of probability theory hold for both densities. The modulus $|\cdot |$ is required because $\alpha $ may be real or complex. In addition, the convention $p\left(x\right)log\left(p\left(x\right)/q\left(x\right)\right)=0$ whenever $p\left(x\right)=0$ applies.

**Theorem 1.**

If the fractional divergence is a generalised form of the divergence between two densities, it must reproduce the solutions of the conventional divergence as a special limit. This occurs when the fractional order approaches $\alpha =1$ in (20). Thus,

$$\begin{array}{c}\hfill \underset{\alpha \to 1}{lim}\mathcal{D}(x\mapsto {\xi}_{i},p(x;{\overrightarrow{\xi}}_{1})\left|\right|q(x;{\overrightarrow{\xi}}_{2}))={\widehat{L}}_{i}\left({\xi}_{i}\right)D(p(x;{\overrightarrow{\xi}}_{1})\left|\right|q(x;{\overrightarrow{\xi}}_{2}))\end{array}$$

or in operator form:

$$\begin{array}{c}\hfill \underset{\alpha \to 1}{lim}\widehat{\mathcal{D}}(x\mapsto {\xi}_{i})={\widehat{L}}_{i}\left({\xi}_{i}\right).\end{array}$$
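The limit in Theorem 1 can be illustrated at the level of monomials (a sketch using Euler's formula, which reappears as (29) below): the fractional derivative of ${\beta}^{n}$ reduces to the ordinary first derivative as $\alpha \to 1$.

```python
# Limit check for Theorem 1 on monomials: Euler's fractional derivative
# formula reduces to the ordinary first derivative of beta**n as alpha -> 1.
import math

def frac_deriv_monomial(n, alpha, beta):
    return math.gamma(n + 1) / math.gamma(n + 1 - alpha) * beta ** (n - alpha)

n, beta = 3, 1.5
ordinary = n * beta ** (n - 1)           # d/d(beta) of beta**3 = 3 beta**2
approx = frac_deriv_monomial(n, 0.999999, beta)
print(ordinary, approx)   # the fractional value approaches the ordinary one
```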

**Proof.**

The proof involves showing that the fractional operator of order $\alpha $ reduces to the $\nu $-th integer order derivative in the limit $\alpha \to \nu $. The final stage requires setting $\nu =1$ to complete the proof. Let $\nu \in \mathcal{N}$ be an arbitrary integer order of the conventional derivative. Let the fractional order operator be written in terms of the integer order derivative $\nu $,

$$\begin{array}{c}\hfill \underset{\alpha \to \nu}{lim}\widehat{\mathcal{D}}(x\mapsto {\xi}_{i})=\underset{\alpha \to \nu}{lim}\frac{1}{\Gamma (\nu -\alpha )}\frac{{d}^{\nu}}{d{x}^{\nu}}{\int}_{0}^{x}{(x-{\xi}_{i})}^{\nu -\alpha -1}d{\xi}_{i}\to \underset{\alpha \to \nu}{lim}\frac{1}{\Gamma (\nu -\alpha )}\frac{{d}^{\nu}}{d{x}^{\nu}}{\int}_{0}^{x}{y}_{i}^{\nu -\alpha -1}d{y}_{i},\end{array}$$

where the expression on the right is obtained by using the transformation ${y}_{i}=x-{\xi}_{i}$. Then,

$$\begin{array}{ccc}\hfill \underset{\alpha \to \nu}{lim}\widehat{\mathcal{D}}(x\mapsto {\xi}_{i})& =& \underset{\alpha \to \nu}{lim}\frac{1}{\Gamma (\nu -\alpha )}\frac{{d}^{\nu}}{d{x}^{\nu}}{\int}_{0}^{x}{y}_{i}^{\nu -\alpha -1}d{y}_{i}\hfill \\ & =& \underset{\alpha \to \nu}{lim}\frac{1}{\Gamma (\nu -\alpha )}\frac{{d}^{\nu}}{d{x}^{\nu}}{\left[\frac{{y}_{i}^{\nu -\alpha}}{\nu -\alpha}\right]}_{0}^{x}\hfill \\ & =& \underset{\alpha \to \nu}{lim}\frac{{d}^{\nu}}{d{x}^{\nu}}\left[\frac{{x}^{\nu -\alpha}}{\Gamma (\nu +1-\alpha )}\right]\hfill \\ & =& \frac{{d}^{\nu}}{d{x}^{\nu}}.\hfill \end{array}$$

The conventional divergence corresponds to the integer order $\nu =1$, hence

$$\begin{array}{ccc}\hfill \underset{\alpha \to 1}{lim}\widehat{\mathcal{D}}\left({\xi}_{i}\right)& =& \frac{d}{d{\xi}_{i}}\hfill \\ & =& {\widehat{L}}_{i}\left({\xi}_{i}\right)\hfill \end{array}$$

as required. Note that the mapping $(x\mapsto {\xi}_{i})$ has been applied in (25). ☐

The divergence integral appearing in the integrand of (20), i.e., the expectation, has already been calculated (see (12)). The parameter vector space for both densities is $\overrightarrow{\xi}=({\overrightarrow{\xi}}_{1},{\overrightarrow{\xi}}_{2})=(\lambda ,{x}_{0},\beta )$. Re-arranging the divergence expression (12), the following form is obtained (neglecting the modulus until the end):

$$\begin{array}{c}\hfill D\left(p\left(x\right)\right|\left|q\left(x\right)\right)=-log\left(\beta \right)-{\omega}^{-1}\beta -(\gamma +1),\end{array}$$

where ${\omega}^{-1}=\gamma +log\left(\lambda {x}_{0}\right)$ and $({\overrightarrow{\xi}}_{1};{\overrightarrow{\xi}}_{2})$ have been omitted for brevity. The requirement now is to use the operator and calculate the fractional divergence as follows:

$$\begin{array}{c}\hfill \mathcal{D}(x\mapsto {\xi}_{i},p(x;{\overrightarrow{\xi}}_{1})\left|\right|q(x;{\overrightarrow{\xi}}_{2}))=-{\widehat{\Lambda}}_{i}(x\mapsto {\xi}_{i})log\left(\beta \right)-{\omega}^{-1}{\widehat{\Lambda}}_{i}(x\mapsto {\xi}_{i})\beta -(\gamma +1){\widehat{\Lambda}}_{i}(x\mapsto {\xi}_{i}).\end{array}$$

The argument $(x\mapsto {\xi}_{i})$ implies that the variable x maps on to the variable ${\xi}_{i}$. This will be elucidated further in what follows below. Recall that the parameter vector is given by $\overrightarrow{\xi}=(\lambda ,{x}_{0},\beta )$ and as before, in Section 3, the interest is in the parameter $\beta $, i.e., $i=3$ so that ${\xi}_{3}=\beta $. In addition, the condition $\mathcal{D}(x\mapsto {\xi}_{i},p(x;{\overrightarrow{\xi}}_{1})|\left|q(x;{\overrightarrow{\xi}}_{2})\right)=0$ is enforced so that (27) becomes

$$\begin{array}{c}\hfill {\widehat{\Lambda}}_{3}(x\mapsto \beta )log\left(\beta \right)+{\omega}^{-1}{\widehat{\Lambda}}_{3}(x\mapsto \beta )\beta +(\gamma +1){\widehat{\Lambda}}_{3}(x\mapsto \beta )=0.\end{array}$$

Each term appearing in (28) will now be calculated. Before proceeding, it is important to re-visit the meaning of the mapping $(x\mapsto {\xi}_{i})$. Once the operator ${\widehat{\Lambda}}_{i}$ is used, the final result is a function of the variable x, which must then be replaced by the variable ${\xi}_{i}$, i.e., ${\widehat{\Lambda}}_{i}(x\mapsto {\xi}_{i})\to {\widehat{\Lambda}}_{i}\left({\xi}_{i}\right)$. The first term in (28) will be calculated last as it is more involved than the other two. In addition, the function $log\left(z\right)$, for some argument z, always appears in these kinds of problems involving divergence or parameter estimation, and, for this reason, it will be treated in full. The other two terms contain the monomials ${\beta}^{1}$ and ${\beta}^{0}=1$. It can be shown, by using the Riemann–Liouville fractional formulation, that the fractional derivative of monomials with power n results in a form that is the exact version of Euler’s generalisation of the integer derivatives of monomials:

$$\begin{array}{c}\hfill \frac{{d}^{\alpha}}{d{\beta}^{\alpha}}{\beta}^{n}=\frac{\Gamma (n+1)}{\Gamma (n+1-\alpha )}{\beta}^{n-\alpha}\end{array}$$

for monomial powers n. To verify this, the second term is (leaving out the coefficient):

$$\begin{array}{c}\hfill {\widehat{\Lambda}}_{3}(x\mapsto \beta )\beta =\frac{1}{\Gamma (1-\alpha )}\frac{d}{dx}{\int}_{0}^{x}{(x-\beta )}^{-\alpha}\beta d\beta .\end{array}$$

Let the above integral be transformed to the form

$$\begin{array}{ccc}{\widehat{\Lambda}}_{3}(x\mapsto \beta )\beta & =& \frac{1}{\Gamma (1-\alpha )}\frac{d}{dx}{\int}_{0}^{x}{y}^{-\alpha}(x-y)dy\hfill \\ & =& \frac{1}{\Gamma (1-\alpha )}\frac{d}{dx}\left[\frac{{x}^{2-\alpha}}{(1-\alpha )}-\frac{{x}^{2-\alpha}}{(2-\alpha )}\right]\hfill \\ & =& \frac{{x}^{1-\alpha}}{(1-\alpha )\Gamma (1-\alpha )}\hfill \end{array}$$

using the transformation $y=x-\beta $ and $dy=-d\beta $. The requirement now is to map the variable x such that ${\widehat{\Lambda}}_{3}(x\mapsto \beta )\beta \to {\widehat{\Lambda}}_{3}\left(\beta \right)\beta $ in (31) to obtain the final result

$$\begin{array}{c}\hfill {\widehat{\Lambda}}_{3}\left(\beta \right)\beta =\frac{{\beta}^{1-\alpha}}{\Gamma (2-\alpha )}\end{array}$$

since $(1-\alpha )\Gamma (1-\alpha )\equiv \Gamma (2-\alpha )$. As stated above, this result is equivalent to that obtained by using Euler’s form (29) for $n=1$. In a similar way, the final term in (28) can be obtained as follows (leaving out the coefficient again),

$$\begin{array}{ccc}\hfill {\widehat{\Lambda}}_{3}(x\mapsto \beta )& =& \frac{1}{\Gamma (1-\alpha )}\frac{d}{dx}{\int}_{0}^{x}{(x-\beta )}^{-\alpha}d\beta \hfill \\ & =& \frac{1}{\Gamma (1-\alpha )}\frac{d}{dx}{\int}_{0}^{x}{y}^{-\alpha}dy\hfill \\ & =& \frac{{x}^{-\alpha}}{\Gamma (1-\alpha )},\hfill \end{array}$$

where the transformation $y=x-\beta $ and $dy=-d\beta $ have been applied. The final result then becomes:

$$\begin{array}{c}\hfill {\widehat{\Lambda}}_{3}\left(\beta \right)=\frac{{\beta}^{-\alpha}}{\Gamma (1-\alpha )}.\end{array}$$

Once again, this result can be obtained directly from the Euler Equation (29) for $n=0$. The first term of (28) is now evaluated as follows:
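Euler's form (29) can be checked numerically against the Riemann–Liouville construction. The following Python sketch (not part of the original derivation; all function names are illustrative) evaluates the inner integral of (30) with a midpoint rule after the rescaling $t=xs$, applies the outer x-derivative to the resulting power of x, and compares with (29):

```python
import math

def rl_fractional_derivative_monomial(n, alpha, x, steps=200_000):
    """Riemann-Liouville derivative of beta^n at beta = x, for 0 < alpha < 1.

    The inner integral I(x) = int_0^x (x - t)^(-alpha) t^n dt rescales via
    t = x*s to x^(n+1-alpha) * J, where J = int_0^1 (1 - s)^(-alpha) s^n ds,
    so (1/Gamma(1-alpha)) d/dx I(x) = (n+1-alpha) x^(n-alpha) J / Gamma(1-alpha).
    J is computed by a midpoint rule (the endpoint singularity is integrable).
    """
    h = 1.0 / steps
    J = sum(((1.0 - (k + 0.5) * h) ** -alpha) * ((k + 0.5) * h) ** n
            for k in range(steps)) * h
    return (n + 1 - alpha) * x ** (n - alpha) * J / math.gamma(1 - alpha)

def euler_fractional_derivative_monomial(n, alpha, x):
    """Euler's generalisation (29): Gamma(n+1)/Gamma(n+1-alpha) * x^(n-alpha)."""
    return math.gamma(n + 1) / math.gamma(n + 1 - alpha) * x ** (n - alpha)

# The two agree for the monomials beta^0 and beta^1 appearing in (28).
for n in (0, 1):
    print(n, rl_fractional_derivative_monomial(n, 0.5, 2.0),
          euler_fractional_derivative_monomial(n, 0.5, 2.0))
```

The midpoint rule is crude near the integrable singularity at $s=1$, so agreement is to quadrature accuracy only.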

$$\begin{array}{c}\hfill {\widehat{\Lambda}}_{3}(x\mapsto \beta )log\left(\beta \right)=\frac{1}{\Gamma (1-\alpha )}\frac{d}{dx}{\int}_{0}^{x}{(x-\beta )}^{-\alpha}log\left(\beta \right)d\beta .\end{array}$$

To perform the integration in (35), let $y=x-\beta $ so that $dy=-d\beta $, which gives

$$\begin{array}{c}\hfill {\int}_{0}^{x}{y}^{-\alpha}log(x-y)dy={\int}_{0}^{x}{y}^{-\alpha}log\left(x\right)dy+{\int}_{0}^{x}{y}^{-\alpha}log(1-y/x)dy,\end{array}$$

where $log(x-y)\equiv log\left(x\left(1-y/x\right)\right)=log\left(x\right)+log(1-y/x)$ has been used in (36) to expand the integrand. The first integral on the right of (36) depends only on the variable y through the monomial, so it is trivial to show that

$$\begin{array}{c}\hfill {\int}_{0}^{x}{y}^{-\alpha}log\left(x\right)dy=\frac{log\left(x\right)}{1-\alpha}{x}^{1-\alpha}.\end{array}$$

The second integral is evaluated via the substitution $z=y/x$:

$$\begin{array}{ccc}\hfill {\int}_{0}^{x}{y}^{-\alpha}log(1-y/x)dy& =& {x}^{1-\alpha}{\int}_{0}^{1}{z}^{-\alpha}log(1-z)dz\hfill \\ & =& \frac{{x}^{1-\alpha}}{\alpha -1}{H}_{1-\alpha}.\hfill \end{array}$$

Here, ${H}_{1-\alpha}$ is the harmonic number, which is related to the polygamma function of zeroth order, the digamma function ${\psi}_{0}(\cdot )$, via ${H}_{1-\alpha}=\gamma +{\psi}_{0}(2-\alpha )$, where $\gamma \approx 0.577216$ is the Euler–Mascheroni constant. The digamma function ${\psi}_{0}(2-\alpha )$ can be simplified further by using the identity:

$$\begin{array}{c}\hfill {\psi}_{n}(z+1)={\psi}_{n}\left(z\right)+{(-1)}^{n}\frac{n!}{{z}^{n+1}}.\end{array}$$

Setting $z=1-\alpha $ and $n=0$ in the identity, one obtains ${\psi}_{0}(2-\alpha )={\psi}_{0}(1-\alpha )+\frac{1}{1-\alpha}$. Hence, (38) can be re-written as:

$$\begin{array}{c}\hfill {\int}_{0}^{x}{y}^{-\alpha}log(1-y/x)dy=\frac{{x}^{1-\alpha}}{\alpha -1}\left[\gamma +{\psi}_{0}(1-\alpha )+\frac{1}{1-\alpha}\right].\end{array}$$
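The closed form (40) can be verified numerically. The sketch below (illustrative only, not from the paper) evaluates the integral ${\int}_{0}^{1}{z}^{-\alpha}log(1-z)dz$ by a midpoint rule and compares it with $(\gamma +{\psi}_{0}(2-\alpha ))/(\alpha -1)$; since the Python standard library has no digamma function, ${\psi}_{0}$ is approximated here by a central difference of `math.lgamma`, which is an assumption of this sketch:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def digamma(z, h=1e-5):
    """psi_0(z) approximated by a central difference of log-gamma."""
    return (math.lgamma(z + h) - math.lgamma(z - h)) / (2 * h)

def lhs_integral(alpha, steps=200_000):
    """Midpoint evaluation of int_0^1 z^(-alpha) log(1 - z) dz."""
    h = 1.0 / steps
    return sum(((k + 0.5) * h) ** -alpha * math.log(1.0 - (k + 0.5) * h)
               for k in range(steps)) * h

def rhs_closed_form(alpha):
    """H_{1-alpha}/(alpha - 1) = (gamma + psi_0(2 - alpha))/(alpha - 1)."""
    return (EULER_GAMMA + digamma(2.0 - alpha)) / (alpha - 1.0)

alpha = 0.5
print(lhs_integral(alpha), rhs_closed_form(alpha))
```

Both the $z^{-\alpha}$ and $log(1-z)$ singularities are integrable, so the midpoint rule converges to the closed form.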

Substituting (40) and (37) into (36), (35) becomes:

$$\begin{array}{c}\hfill {\widehat{\Lambda}}_{3}(x\mapsto \beta )log\left(\beta \right)=\frac{1}{\Gamma (1-\alpha )}\frac{d}{dx}\left[\frac{{x}^{1-\alpha}log\left(x\right)}{1-\alpha}+\frac{{x}^{1-\alpha}}{\alpha -1}\left(\gamma +{\psi}_{0}(1-\alpha )+\frac{1}{1-\alpha}\right)\right].\end{array}$$

After performing the simple differentiation in (41) and noting that ${\widehat{\Lambda}}_{3}(x\mapsto \beta )log\left(\beta \right)\to {\widehat{\Lambda}}_{3}\left(\beta \right)log\left(\beta \right),$ we have:

$$\begin{array}{c}\hfill {\widehat{\Lambda}}_{3}\left(\beta \right)log\left(\beta \right)=\frac{{\beta}^{-\alpha}}{\Gamma (1-\alpha )}\left[log\left(\beta \right)-{\psi}_{0}(1-\alpha )-\gamma \right].\end{array}$$

It is now a matter of substituting (42), (34) and (32) into (28) to obtain the final result:

$$\begin{array}{c}\hfill {\beta}^{-\alpha}\left[log\left(\beta \right)-{\psi}_{0}(1-\alpha )-\gamma \right]+\frac{{\omega}^{-1}{\beta}^{1-\alpha}}{(1-\alpha )}+(\gamma +1){\beta}^{-\alpha}=0.\end{array}$$

The problem now requires the solution of (43) for the parameter $\beta $, which will be the fractional analogue of the conventional version discussed in Section 3. Because (43) is a transcendental equation in $\beta $, solutions in this form can only be obtained numerically. However, it is possible to rewrite (43) in such a way as to obtain closed-form analytic solutions. Equation (43) can be re-arranged to:

$$\begin{array}{c}\hfill \beta =\omega (\alpha -1)log\left(\beta \right)+\omega (1-\alpha )\left[{\psi}_{0}(1-\alpha )-1\right].\end{array}$$

Define A and B as follows:

$$\begin{array}{c}\hfill A=\omega (\alpha -1)\phantom{\rule{1.em}{0ex}}\mathrm{and}\phantom{\rule{1.em}{0ex}}B=\omega (1-\alpha )\left[{\psi}_{0}(1-\alpha )-1\right]\end{array}$$

so that (44) becomes:

$$\begin{array}{c}\hfill \beta =Alog\left(\beta \right)+B,\end{array}$$

which allows the solution in terms of $\beta $ to be obtained in closed form if it can be transformed to resemble the Lambert W-function or product-log function. The W-function has the form

$$\begin{array}{c}\hfill y{e}^{y}=f\left(x\right).\end{array}$$

That is, if an equation can be written so that its left-hand side resembles the left-hand side of (47), then, for any function on the right side, $f\left(x\right)$, the solution for y is given by $y={W}_{n}\left(f\left(x\right)\right)$, where $n=0,-1$ label the two real branches of the Lambert W-function. Equation (46) can now be solved via the W-function if it is transposed as follows:

$$\begin{array}{c}\hfill \beta =exp\left(\frac{\beta}{A}-\frac{B}{A}\right)\iff -\frac{\beta}{A}{e}^{-\frac{\beta}{A}}=-\frac{1}{A}{e}^{-\frac{B}{A}}.\end{array}$$

Then, by (47), the solution for fractional $\beta $ is obtained from the W-function as:

$$\begin{array}{c}\hfill \beta =-A{W}_{n}\left(-\frac{1}{A}exp\left(-\frac{B}{A}\right)\right).\end{array}$$
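The Lambert-W machinery in (46)–(49) can be exercised numerically with nothing more than Newton's method. In the sketch below (illustrative only), the values $A=-2$ and $B=5$ are arbitrary, chosen so that $A<0$ mirrors $A=\omega (\alpha -1)$ with $\alpha <1$; the check confirms that (49) satisfies (46):

```python
import math

def lambert_w0(z, tol=1e-12):
    """Principal branch W_0: solve w * e^w = z by Newton's method (z >= 0)."""
    w = math.log1p(z)  # serviceable starting guess for non-negative z
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

# Illustrative coefficients; A < 0 corresponds to A = omega*(alpha - 1), alpha < 1.
A, B = -2.0, 5.0
beta = -A * lambert_w0(-math.exp(-B / A) / A)  # solution (49)
print(beta, A * math.log(beta) + B)            # both sides of (46) agree
```

The same routine can be reused wherever $W_0$ appears in the remainder of the section.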

Substituting both A and B while noting that $\omega =1/(\gamma +log\left(\lambda {x}_{0}\right))$ gives the fractional $\beta $ as:

$$\begin{array}{c}\hfill \beta =\frac{(1-\alpha )}{\gamma +log\left(\lambda {x}_{0}\right)}{W}_{0}\left(\chi \right),\end{array}$$

where the argument of the W-function, $\chi $, is

$$\begin{array}{c}\hfill \chi =\frac{\gamma +log\left(\lambda {x}_{0}\right)}{(1-\alpha )}exp\left({\psi}_{0}(1-\alpha )-1\right),\end{array}$$

and the $n=0$ branch of the W-function is used. The fractional form of the Pareto shape parameter, (50), can now be substituted into the conventional Pareto density to obtain the fractional Pareto density (PDF) that minimises the divergence with respect to the Exponential-density:

$$\begin{array}{c}\hfill q\left(x\right)=\frac{(1-\alpha )}{\gamma +log\left(\lambda {x}_{0}\right)}{W}_{0}\left(\chi \right){x}_{0}^{\frac{(1-\alpha )}{\gamma +log\left(\lambda {x}_{0}\right)}{W}_{0}\left(\chi \right)}{x}^{-\left(1+\frac{(1-\alpha )}{\gamma +log\left(\lambda {x}_{0}\right)}{W}_{0}\left(\chi \right)\right)}.\end{array}$$

This is the fractional analogue of (16). Equation (50) can be substituted into the divergence Equation (12) as was done for the conventional solution for $\beta =-\omega $ (see (15)). Thus, the fractional divergence becomes:

$$\begin{array}{c}\hfill \mathcal{D}\left(p\left(x\right)\right|\left|q\left(x\right)\right)=\left|log\left(\frac{\gamma +log\left(\lambda {x}_{0}\right)}{(1-\alpha ){W}_{0}\left(\chi \right)}\right)+(\alpha -1){W}_{0}\left(\chi \right)-(\gamma +1)\right|.\end{array}$$

The modulus $|\cdot |$ in (53) has been reinstated not only to ensure a divergence greater than or equal to zero but also because the fractional order can take not just real but also complex values. The interesting aspect of the fractional order $\alpha $ appearing in (51) and (53) is that the fractional $\beta $ now depends on $\alpha $ (see (50)). There is no reason why the fractional order $\alpha $ cannot be replaced by the variable $\beta $. This means, of course, that $\beta $ takes on the same domain or range of values that $\alpha $ does, so defining the correct range is critical. In this instance, using (51) and (53) is essentially the same as using the following forms. Set $\alpha =\beta $ to obtain:

$$\begin{array}{c}\hfill \chi =\frac{\gamma +log\left(\lambda {x}_{0}\right)}{(1-\beta )}exp\left({\psi}_{0}(1-\beta )-1\right)\end{array}$$

and

$$\begin{array}{c}\hfill \mathcal{D}\left(p\left(x\right)\right|\left|q\left(x\right)\right)=\left|log\left(\frac{\gamma +log\left(\lambda {x}_{0}\right)}{(1-\beta ){W}_{0}\left(\chi \right)}\right)+(\beta -1){W}_{0}\left(\chi \right)-(\gamma +1)\right|.\end{array}$$

Thus, in keeping with the conventional divergence plot shown in Figure 2a, Figure 2b shows a plot of the fractional divergence (55) (or (53)) for the parameters $\lambda $ and $\beta $. As the colour bars show, the conventional divergence is large throughout. The fractional version, however, exhibits not only much smaller divergence separations for various values of $\lambda $ and $\beta $, but also a large region where the divergence is everywhere equal to zero. It is worth noting that the minimum divergence achieved by the conventional approach is $D\approx 0.75$, which is still much greater than the maximum fractional divergence of $\mathcal{D}\approx 0.16$.

## 5. Manipulation of the Divergence between Two Exponential Densities via the Fractional Orders

The fractional divergence between two Exponential-densities will be investigated in this section with the aim of showing that it gives non-trivial solutions and that it is possible to manipulate the divergence via the fractional order(s). There is a good reason for analysing two Exponential-densities as opposed to any other densities. Unlike the divergence solutions obtained for arbitrary densities, which are not entirely known, there is absolute certainty as to the expected divergence profile for the Exponential-densities. This is because, according to the conventional divergence, there is zero divergence whenever their parameters are equal. There are no other solutions that minimise the divergence for two Exponential-densities. Let

$$\begin{array}{c}\hfill p(y;u)=u{e}^{-uy}\phantom{\rule{1.em}{0ex}}\mathrm{and}\phantom{\rule{1.em}{0ex}}q(y;v)=v{e}^{-vy}\end{array}$$

be two Exponential-densities. The two Exponential-densities (56) have one parameter each, so that ${\overrightarrow{\xi}}_{1}={\xi}_{1}=u$ and ${\overrightarrow{\xi}}_{2}={\xi}_{2}=v$, corresponding to $i=1,2$, respectively. Omitting the modulus for now, the expression for the fractional divergence becomes:

$$\begin{array}{c}\hfill \mathcal{D}(x\mapsto {\xi}_{i},p(y;{\overrightarrow{\xi}}_{1})\left|\right|q(y;{\overrightarrow{\xi}}_{2}))=\frac{1}{\Gamma (1-\alpha )}\frac{d}{dx}{\int}_{0}^{x}{\int}_{\Omega}{(x-{\xi}_{i})}^{-\alpha}p(y;{\overrightarrow{\xi}}_{1})log\left(\frac{p(y;{\overrightarrow{\xi}}_{1})}{q(y;{\overrightarrow{\xi}}_{2})}\right)dyd{\xi}_{i}.\end{array}$$

The following two equations are obtained from (57):

$$\begin{array}{c}\hfill \mathcal{D}(x\mapsto u,p(y;u)|\left|q(y;v)\right)=\frac{1}{\Gamma (1-\alpha )}\frac{d}{dx}{\int}_{0}^{x}{\int}_{0}^{\infty}{(x-u)}^{-\alpha}p(y;u)log\left(\frac{p(y;u)}{q(y;v)}\right)dydu\end{array}$$

when $i=1$, and

$$\begin{array}{c}\hfill \mathcal{D}(x\mapsto v,p(y;u)|\left|q(y;v)\right)=\frac{1}{\Gamma (1-\alpha )}\frac{d}{dx}{\int}_{0}^{x}{\int}_{0}^{\infty}{(x-v)}^{-\alpha}p(y;u)log\left(\frac{p(y;u)}{q(y;v)}\right)dydv\end{array}$$

when $i=2$. The domain of integration for the two densities is $\Omega =[0,\infty )$. The conventional divergence $D\left(p\right(y;u\left)\right|\left|q\right(y;v\left)\right)$, which is embedded in (58) and (59), is evaluated as follows:

$$\begin{array}{ccc}\hfill D\left(p\right(y;u\left)\right|\left|q\right(y;v\left)\right)& =& {\int}_{0}^{\infty}p(y;u)log\left(\frac{p(y;u)}{q(y;v)}\right)dy\hfill \\ & =& {\int}_{0}^{\infty}u{e}^{-uy}\left[log\left(\frac{u}{v}\right)+log\left({e}^{(v-u)y}\right)\right]dy\hfill \end{array}$$

The first term in (60) is straightforward, since the second axiom of probability applies, while the second term requires integration by parts. The conventional divergence between two Exponential-densities takes the form:

$$\begin{array}{c}\hfill D\left(p(y;u)\right|\left|q(y;v)\right)=log\left(\frac{u}{v}\right)+\frac{v}{u}-1\end{array}$$
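The closed form (61) is easy to verify by direct quadrature. The following sketch (illustrative; the rate values are arbitrary) compares (61) with a midpoint evaluation of the defining integral in (60):

```python
import math

def kl_exponential_closed(u, v):
    """Closed form (61): D(p||q) = log(u/v) + v/u - 1 for exponential rates u, v."""
    return math.log(u / v) + v / u - 1.0

def kl_exponential_numeric(u, v, upper=60.0, steps=200_000):
    """Midpoint evaluation of int_0^T u e^(-u y) [log(u/v) + (v - u) y] dy."""
    T = upper / u  # the integrand is negligible beyond ~60 mean lifetimes
    h = T / steps
    total = 0.0
    for k in range(steps):
        y = (k + 0.5) * h
        total += u * math.exp(-u * y) * (math.log(u / v) + (v - u) * y)
    return total * h

print(kl_exponential_closed(2.0, 0.5), kl_exponential_numeric(2.0, 0.5))
```

As expected from the discussion above, the divergence vanishes exactly when $u=v$.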

Substituting (61) into (58) gives the following result:

$$\begin{array}{c}\hfill \mathcal{D}(x\mapsto u,p(y;u)|\left|q(y;v)\right)=\frac{1}{\Gamma (1-\alpha )}\frac{d}{dx}{\int}_{0}^{x}{(x-u)}^{-\alpha}\left[log\left(\frac{u}{v}\right)+\frac{v}{u}-1\right]du\end{array}$$

Using the operator form and enforcing the condition $\mathcal{D}(x\mapsto u,p(y;u\left)\right|\left|q\right(y;v\left)\right)=0$ means that the fractional divergence with respect to parameter u becomes

$$\begin{array}{c}\hfill {\widehat{\Lambda}}_{u}(x\mapsto u)log\left(u\right)-{\widehat{\Lambda}}_{u}(x\mapsto u)log\left(v\right)+{\widehat{\Lambda}}_{u}(x\mapsto u)\left(\frac{v}{u}\right)-{\widehat{\Lambda}}_{u}(x\mapsto u)=0.\end{array}$$

Applying the fractional operator on the function $log\left(u\right)$ has been addressed in the previous section. The result here follows a similar process that gives:

$$\begin{array}{ccc}\hfill {\widehat{\Lambda}}_{u}(x\mapsto u)log\left(u\right)& =& \frac{{x}^{-\alpha}}{\Gamma (1-\alpha )}\left[log\left(x\right)-{\psi}_{0}(1-\alpha )-\gamma \right]\to \hfill \\ \hfill {\widehat{\Lambda}}_{u}\left(u\right)log\left(u\right)& =& \frac{{u}^{-\alpha}}{\Gamma (1-\alpha )}\left[log\left(u\right)-{\psi}_{0}(1-\alpha )-\gamma \right].\hfill \end{array}$$

Once again, ${\psi}_{0}(1-\alpha )$ is the digamma function and $\gamma $ is the Euler constant. The next term is evaluated to give the result:

$$\begin{array}{ccc}\hfill {\widehat{\Lambda}}_{u}(x\mapsto u)\left[log\left(v\right)+1\right]& =& \frac{{x}^{-\alpha}}{\Gamma (1-\alpha )}\left[log\left(v\right)+1\right]\to \hfill \\ \hfill {\widehat{\Lambda}}_{u}\left(u\right)\left[log\left(v\right)+1\right]& =& \frac{{u}^{-\alpha}}{\Gamma (1-\alpha )}\left[log\left(v\right)+1\right].\hfill \end{array}$$

The final requirement is to evaluate the ratio $v/u$. Application of the fractional operator on this ratio gives the result:

$$\begin{array}{ccc}\hfill {\widehat{\Lambda}}_{u}(x\mapsto u)\left(\frac{v}{u}\right)& =& {(-1)}^{\alpha}v\Gamma (\alpha +1){x}^{-(\alpha +1)}\to \hfill \\ \hfill {\widehat{\Lambda}}_{u}\left(u\right)\left(\frac{v}{u}\right)& =& v{e}^{i\alpha \pi}\Gamma (\alpha +1){u}^{-(\alpha +1)}.\hfill \end{array}$$

Substitution of the expressions (64)–(66) into (63) and rearranging results in the following:

$$\begin{array}{c}\hfill u=-\frac{{e}^{-i\alpha \pi}}{v\Gamma (\alpha +1)\Gamma (1-\alpha )}log\left(u\right)+\frac{{\psi}_{0}(1-\alpha )+\gamma +log\left(v\right)+1}{v\Gamma (\alpha +1)\Gamma (1-\alpha ){e}^{i\alpha \pi}}.\end{array}$$

Equation (67) can only be solved numerically for u in its present form. However, as shown in the previous section, it can be transformed so that its solutions can be obtained analytically by using the Lambert W-function. Setting

$$\begin{array}{ccc}\hfill A& =& \frac{{e}^{-i\alpha \pi}}{v\Gamma (\alpha +1)\Gamma (1-\alpha )}\hfill \\ \hfill B& =& \frac{{\psi}_{0}(1-\alpha )+\gamma +log\left(v\right)+1}{v\Gamma (\alpha +1)\Gamma (1-\alpha ){e}^{i\alpha \pi}}\hfill \end{array}$$

requires the solution of u using the form

$$\begin{array}{c}\hfill u=-Alog\left(u\right)+B.\end{array}$$

Transforming this expression to a form that allows solution using the W-function finally gives (see previous section):

$$\begin{array}{c}\hfill u=A{W}_{0}\left(\frac{exp\left(\frac{B}{A}\right)}{A}\right).\end{array}$$

The solution (70) is a function of the fractional order $\alpha $ as well as of the other parameters. The fractional order belonging to u will be distinguished from now on and will be defined as $\alpha ={\alpha}_{1}$. The same will be done later for the solution v, which will be a function of its own fractional order $\alpha ={\alpha}_{2}$. Hence, substituting (68) into (70), the final result becomes:

$$\begin{array}{c}\hfill u=\frac{{e}^{-i{\alpha}_{1}\pi}}{v\Gamma ({\alpha}_{1}+1)\Gamma (1-{\alpha}_{1})}{W}_{0}\left({\chi}_{1}\right),\end{array}$$

where the argument ${\chi}_{1}$ of the W-function is given by:

$$\begin{array}{c}\hfill {\chi}_{1}=v\Gamma ({\alpha}_{1}+1)\Gamma (1-{\alpha}_{1})exp\left(i{\alpha}_{1}\pi +{\psi}_{0}(1-{\alpha}_{1})+\gamma +log\left(v\right)+1\right).\end{array}$$
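Because the coefficients (68) are complex, the solution (70)–(71) can be sanity-checked with complex arithmetic. The sketch below (illustrative values ${\alpha}_{1}=0.3$, $v=1$; the digamma approximation via `math.lgamma` differencing is an assumption of this sketch) builds A, B and ${\chi}_{1}$, solves for u with a complex-Newton Lambert W, and reports the residual of (69):

```python
import cmath
import math

EULER_GAMMA = 0.5772156649015329

def digamma(z, h=1e-5):
    """psi_0(z) approximated by a central difference of log-gamma (real z)."""
    return (math.lgamma(z + h) - math.lgamma(z - h)) / (2 * h)

def lambert_w0_complex(z, tol=1e-12):
    """Principal-branch Lambert W for complex z via Newton's method."""
    w = cmath.log(1 + z)  # crude but serviceable starting guess
    for _ in range(200):
        ew = cmath.exp(w)
        step = (w * ew - z) / (ew * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

alpha1, v = 0.3, 1.0  # illustrative values only

# Coefficients (68); chi_1 of (72) equals exp(B/A)/A for these definitions.
denom = v * math.gamma(alpha1 + 1) * math.gamma(1 - alpha1)
A = cmath.exp(-1j * alpha1 * math.pi) / denom
B = (digamma(1 - alpha1) + EULER_GAMMA + math.log(v) + 1) \
    * cmath.exp(-1j * alpha1 * math.pi) / denom
u = A * lambert_w0_complex(cmath.exp(B / A) / A)  # solution (70)
print(abs(u + A * cmath.log(u) - B))              # residual of (69), near zero
```

The residual confirms that the complex u returned by the W-function satisfies the transcendental equation (69).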

The next step is to complete a similar process for the parameter v. Substitution of the conventional divergence (61) into (59) requires the solution of

$$\begin{array}{c}\hfill \mathcal{D}(x\mapsto v,p(y;u)|\left|q(y;v)\right)=\frac{1}{\Gamma (1-\alpha )}\frac{d}{dx}{\int}_{0}^{x}{(x-v)}^{-\alpha}\left[log\left(\frac{u}{v}\right)+\frac{v}{u}-1\right]dv.\end{array}$$

Using the operator formulation, and noting that $\mathcal{D}(x\mapsto v,p(y;u\left)\right|\left|q\right(y;v\left)\right)=0$, gives the expression:

$$\begin{array}{c}\hfill {\widehat{\Lambda}}_{v}(x\mapsto v)\left[log\left(u\right)-1\right]-{\widehat{\Lambda}}_{v}(x\mapsto v)log\left(v\right)+{\widehat{\Lambda}}_{v}(x\mapsto v)\left(\frac{v}{u}\right)=0.\end{array}$$

Each term is now evaluated beginning with the first term:

$$\begin{array}{ccc}\hfill {\widehat{\Lambda}}_{v}(x\mapsto v)\left[log\left(u\right)-1\right]& =& \frac{{x}^{-\alpha}}{\Gamma (1-\alpha )}\left[log\left(u\right)-1\right]\hfill \\ \hfill {\widehat{\Lambda}}_{v}\left(v\right)\left[log\left(u\right)-1\right]& =& \frac{{v}^{-\alpha}}{\Gamma (1-\alpha )}\left[log\left(u\right)-1\right].\hfill \end{array}$$

The next term involves the log-function, which has been treated before in detail. Following the same process gives:

$$\begin{array}{ccc}\hfill {\widehat{\Lambda}}_{v}(x\mapsto v)log\left(v\right)& =& \frac{{x}^{-\alpha}}{\Gamma (1-\alpha )}\left[log\left(v\right)-{\psi}_{0}(1-\alpha )-\gamma \right]\hfill \\ \hfill {\widehat{\Lambda}}_{v}\left(v\right)log\left(v\right)& =& \frac{{v}^{-\alpha}}{\Gamma (1-\alpha )}\left[log\left(v\right)-{\psi}_{0}(1-\alpha )-\gamma \right],\hfill \end{array}$$

where once again ${\psi}_{0}(1-\alpha )$ is the digamma function and $\gamma $ is the Euler constant. The final term is evaluated to be:

$$\begin{array}{ccc}\hfill {\widehat{\Lambda}}_{v}(x\mapsto v)\left(\frac{v}{u}\right)& =& \frac{{x}^{1-\alpha}}{u\Gamma (2-\alpha )}\hfill \\ \hfill {\widehat{\Lambda}}_{v}\left(v\right)\left(\frac{v}{u}\right)& =& \frac{{v}^{1-\alpha}}{u\Gamma (2-\alpha )}.\hfill \end{array}$$

It is now a matter of substituting (75)–(77) into (74). Rearranging the expression gives the following form:

$$\begin{array}{c}\hfill v=u(1-\alpha )log\left(v\right)-u(1-\alpha )\left[{\psi}_{0}(1-\alpha )+\gamma +log\left(u\right)-1\right].\end{array}$$

In order to solve this equation using the W-function, set

$$\begin{array}{ccc}\hfill A& =& u(1-\alpha )\hfill \\ \hfill B& =& u(1-\alpha )\left[{\psi}_{0}(1-\alpha )+\gamma +log\left(u\right)-1\right].\hfill \end{array}$$

The required equation takes the form

$$\begin{array}{c}\hfill v=Alog\left(v\right)-B.\end{array}$$

Rearranging this equation into the form that allows a solution by the W-function finally gives

$$\begin{array}{c}\hfill v=-A{W}_{0}\left(-\frac{exp(B/A)}{A}\right).\end{array}$$

As was done for the u-solution, the fractional order of v will be set to $\alpha ={\alpha}_{2}$ to distinguish it from ${\alpha}_{1}$ belonging to the parameter u. With this in mind, substituting the definitions for A and B, namely (79), gives

$$\begin{array}{c}\hfill v=u({\alpha}_{2}-1){W}_{0}\left({\chi}_{2}\right),\end{array}$$

where

$$\begin{array}{c}\hfill {\chi}_{2}=\frac{1}{u({\alpha}_{2}-1)}exp\left({\psi}_{0}(1-{\alpha}_{2})+\gamma +log\left(u\right)-1\right).\end{array}$$

The conventional divergence can now be transformed into the fractional divergence between two Exponential-densities by substituting the fractional solutions (71)–(72) for u and (82)–(83) for v into (61) to obtain the final form:

$$\begin{array}{c}\mathcal{D}\left(p\right(y;u\left)\right|\left|q\right(y;v\left)\right)=\hfill \\ \hspace{1em}\hspace{1em}\left|log\left(\frac{{W}_{0}\left({\chi}_{1}\right){e}^{-i{\alpha}_{1}\pi}}{uv({\alpha}_{2}-1)\Gamma ({\alpha}_{1}+1)\Gamma (1-{\alpha}_{1}){W}_{0}\left({\chi}_{2}\right)}\right)+uv({\alpha}_{2}-1)\Gamma ({\alpha}_{1}+1)\Gamma (1-{\alpha}_{1}){e}^{i{\alpha}_{1}\pi}\frac{{W}_{0}\left({\chi}_{2}\right)}{{W}_{0}\left({\chi}_{1}\right)}-1\right|,\hfill \end{array}$$

where the arguments ${\chi}_{1}$ and ${\chi}_{2}$ are given by:

$$\begin{array}{ccc}\hfill {\chi}_{1}& =& v\Gamma ({\alpha}_{1}+1)\Gamma (1-{\alpha}_{1})exp\left(i{\alpha}_{1}\pi +{\psi}_{0}(1-{\alpha}_{1})+\gamma +log\left(v\right)+1\right),\hfill \\ \hfill {\chi}_{2}& =& \frac{1}{u({\alpha}_{2}-1)}exp\left({\psi}_{0}(1-{\alpha}_{2})+\gamma +log\left(u\right)-1\right).\hfill \end{array}$$

The modulus is used because ${\alpha}_{1,2}\in \mathcal{R}$ as well as ${\alpha}_{1,2}\in \mathcal{C}$, as can be seen from (84) and (85). In Figure 3, the conventional divergence, which coincides with the fractional divergence when $\alpha =1$ in the latter, is shown as a divergence manifold (top-left) with the line $u=v$ running down the middle, where the divergence is zero.

The conventional divergence is also shown on the right as an image map where the red region indicates small divergence on either side of the $u=v$ line (not shown). According to the conventional divergence between two Exponential-densities, the only solutions which give zero are those where $u=v$. However, as the middle two and last two plots indicate, the fractional divergence can make the divergence between them zero or close to zero for regions (solutions) where the conventional version fails. The middle two figures show manipulation of the divergence manifold for ${\alpha}_{1}=10.0001$ and ${\alpha}_{2}=190.9$ in which the divergence manifold has been minimised perpendicular to the conventional version ($\alpha =1$). The image map on the right also contains iteration lines with each point being an iteration step in the process of finding the global minimum of the divergence using a differential-evolution numerical algorithm. This minimum occurs when $u=2.32623$ and $v=5.45556$ and at those parametric coordinates the fractional divergence is $\mathcal{D}={10}^{-8}$. The bottom two plots show further manipulation of the divergence manifold for ${\alpha}_{2}=40.9,$ giving a fractional divergence of ${10}^{-8}$ for a global minimum in this case given by $u=8.88506$ and $v=6.73169$. The last four plots confirm that the fractional divergence approach can give essentially zero divergence for parameter values $(u,v),$ which are not equal, unlike the expected results from the conventional divergence approach.

Further evidence of this can be seen in Figure 4. The manipulation of the divergence manifold is possible not only via ${\alpha}_{1,2}\in \mathcal{R}$ but also when ${\alpha}_{1}$ and ${\alpha}_{2}$ are complex (bottom-right plot). The fractional divergence has a global minimum for the complex solution of $\mathcal{D}={10}^{-14}$ at $(u,v)=(5.77781,1.01829)$. There are numerous other non-trivial solutions with divergence of the order of ${10}^{-22}$ or less, which have been omitted for brevity. The results shown in Figure 3 and Figure 4 indicate that the fractional divergence formulation makes it possible to find parameter values $(u,v)$ that achieve zero divergence even when the conventional approach does not. When the fractional order is $\alpha =1$, the fractional divergence recovers the same ‘trivial’ solutions as the conventional method; hence, the former is a generalisation of the latter. Note that one can set ${\alpha}_{1}\ne {\alpha}_{2}$ or ${\alpha}_{1}={\alpha}_{2}=\alpha $ or any combination, where ${\alpha}_{i}\in \mathcal{R}$ and ${\alpha}_{i}\in \mathcal{C}$.

Finally, it is worth discussing the $\alpha =1$ or conventional divergence image map on the right of Figure 3. At first glance, it appears that the divergence is also very small on either side of the $u=v$ solutions, which would indicate that there must be other solutions apart from those given by $u=v$. However, this is misleading. As the $(u,v)$ parameters of the Exponential-densities increase in value ($u\to \infty $ and $v\to \infty $), the Exponential-densities decay very quickly to zero. As this happens to both of them simultaneously, the densities tend to have the same asymptotic behaviour whenever $(u,v)$ are large, giving the impression that the divergence between them is zero. In other words,

$$\begin{array}{c}\hfill D\left(p(y;u)\right|\left|q(y;v)\right)=\underset{u,v\to \infty}{lim}{\int}_{\Omega}p(y;u)log\left(\frac{p(y;u)}{q(y;v)}\right)dy={\int}_{\Omega}(\sim 0)log\left(\frac{\sim 0}{\sim 0}\right)dy=0,\end{array}$$

where the last term on the right is valid by the definition found in the previous section and $\sim 0$ means that the densities asymptotically approach zero (rapidly) for large $(u,v)$. Caution must be used when interpreting the divergence solutions for the conventional case on either side of the $u=v$ line. These solutions are trivial and are due to the decay process of the densities, not because there are alternative solutions in addition to the ones given by $u=v$. This explains the “V”-shape that is diagonal to the $u$–$v$ axes.

## 6. An Application of the Fractional Divergence to Detection Theory

In this section, it will be shown how the fractional divergence can be used to solve an important problem in the field of signal processing. The problem consists of detecting signals embedded in background noise or clutter. Suppose that a hypothesis test is constructed. Set ${H}_{0}$ to be the null hypothesis which describes only the noise/clutter. Let ${H}_{1}$ be the alternative hypothesis that there is a signal of interest that has to be detected in the noise/clutter. That is,

$$\begin{array}{ccc}\hfill {H}_{0}& :& noise/clutter.\hfill \\ \hfill {H}_{1}& :& signal+noise/clutter.\hfill \end{array}$$

It is usually the case that the density describing the noise/clutter is known, e.g., Gaussian or Normal. Let ${q}_{0}\left(x\right)$ be a density that represents this situation. Let the alternative hypothesis be represented by the density ${q}_{1}\left(x\right)$, i.e., that there is a signal of interest embedded inside the noise/clutter. It is possible to construct a detector that can discriminate in some optimal fashion whether there is a signal present or not when sampling observed data. Let $p\left(x\right)$ be a density constructed by observing/measuring i.i.d. random variables. What is required is a metric that determines how close the observed density $p\left(x\right)$ is to either ${q}_{0}\left(x\right)$ or ${q}_{1}\left(x\right)$. If $p\left(x\right)$ is closer to ${q}_{0}\left(x\right)$, then it is more likely that what is being detected is merely noise/clutter rather than a signal of interest. If instead $p\left(x\right)$ is closer to ${q}_{1}\left(x\right)$, then it is highly probable that a signal is present, so a detection is declared. It should be clear that a minimum divergence detector can be constructed, which can differentiate whether there is a signal present or not by calculating the divergence between the observed density and the null and alternative densities.

According to the Neyman–Pearson theorem, which optimises the detection probability for a given false alarm rate, the likelihood ratio for the hypothesis test is:

$$\begin{array}{c}\hfill {\theta}^{\prime}=\prod _{i=1}^{N}\frac{{q}_{1}\left({x}_{i}\right)}{{q}_{0}\left({x}_{i}\right)},\end{array}$$

where the total number of samples observed is N. Taking the logarithm of (88) and normalising by N gives:

$$\begin{array}{c}\hfill \theta \equiv \frac{1}{N}log\left({\theta}^{\prime}\right)=\frac{1}{N}\sum _{i=1}^{N}log\left(\frac{{q}_{1}\left({x}_{i}\right)}{{q}_{0}\left({x}_{i}\right)}\right).\end{array}$$

The log-likelihood $\theta $ is essentially a random variable. It is an average of N i.i.d. random variables ${\theta}_{i}=log({q}_{1}\left({x}_{i}\right)/{q}_{0}\left({x}_{i}\right))$. Accordingly, from the law of large numbers, for large N,

$$\begin{array}{c}\hfill \theta \to \langle {\theta}_{i}\rangle ,\end{array}$$

where $\langle \cdot \rangle $ is the expectation and $i=1,2,...,N$. Writing the expectation (90) for the continuous case, one has

$$\begin{array}{ccc}\hfill \langle \theta \rangle & =& \int p\left(x\right)log\left(\frac{{q}_{1}\left(x\right)}{{q}_{0}\left(x\right)}\right)dx\hfill \\ & =& \int p\left(x\right)log\left(\frac{{q}_{1}\left(x\right)p\left(x\right)}{{q}_{0}\left(x\right)p\left(x\right)}\right)dx\hfill \\ & =& \int p\left(x\right)log\left(\frac{p\left(x\right)}{{q}_{0}\left(x\right)}\right)dx-\int p\left(x\right)log\left(\frac{p\left(x\right)}{{q}_{1}\left(x\right)}\right)dx\hfill \\ & =& D\left(p\left(x\right)\right||{q}_{0}\left(x\right))-D(p\left(x\right)\left|\right|{q}_{1}\left(x\right)).\hfill \end{array}$$
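The identity (91) can be demonstrated with a short Monte-Carlo sketch using exponential densities and the closed form (61). All rates below are illustrative only:

```python
import math
import random

def kl_exp(a, b):
    """D(p||q) for exponential rates a (data) and b (model); closed form (61)."""
    return math.log(a / b) + b / a - 1.0

random.seed(7)
p_rate, q0_rate, q1_rate = 1.5, 0.3, 1.5   # illustrative rates only
N = 200_000

# theta = (1/N) sum log(q1(x_i)/q0(x_i)) with x_i drawn from p, as in (89).
theta = 0.0
for _ in range(N):
    x = random.expovariate(p_rate)
    theta += (math.log(q1_rate) - q1_rate * x) - (math.log(q0_rate) - q0_rate * x)
theta /= N

expected = kl_exp(p_rate, q0_rate) - kl_exp(p_rate, q1_rate)  # identity (91)
print(theta, expected)
```

As N grows, the sample log-likelihood average converges to the difference of the two divergences, up to Monte-Carlo error.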

Hence, the divergence is related to the expectation of the log-likelihood ratio. For large N and by the Neyman–Pearson theorem:

$$\langle \theta \rangle \underset{{H}_{0}}{\overset{{H}_{1}}{\gtrless}}{\tau}^{\prime},$$

where ${\tau}^{\prime}$ is the un-normalised threshold. The minimum distance detector based on the divergence is given by:

$$\begin{array}{c}\hfill D\left(p\left(x\right)\right||{q}_{0}\left(x\right))-D(p\left(x\right)\left|\right|{q}_{1}\left(x\right))\underset{{H}_{0}}{\overset{{H}_{1}}{\gtrless}}\frac{1}{N}{\tau}^{\prime}\equiv \tau ,\end{array}$$

with $\tau $ being the threshold normalised by N. For a threshold $\tau =0$, the detection scheme becomes

$$\begin{array}{c}\hfill D\left(p\left(x\right)\right||{q}_{0}\left(x\right))\underset{{H}_{0}}{\overset{{H}_{1}}{\gtrless}}D\left(p\left(x\right)\right||{q}_{1}\left(x\right)).\end{array}$$

If the divergence indicates that the distance of $p\left(x\right)$ to the null hypothesis ${q}_{0}\left(x\right)$ is greater than the distance to the alternative hypothesis ${q}_{1}\left(x\right)$, then ${H}_{1}$ is true, which means that a signal of interest is detected, and vice versa. The main problem is that the detection scheme (93) or (94) requires the estimation of parameters for each density, i.e., for $p(x;{\overrightarrow{\xi}}_{1})$, ${q}_{0}(x;{\overrightarrow{\xi}}_{2})$ and ${q}_{1}(x;{\overrightarrow{\xi}}_{3})$. The critical issue that arises is that the parameters $({\overrightarrow{\xi}}_{1},{\overrightarrow{\xi}}_{2},{\overrightarrow{\xi}}_{3})$ must be estimated from the observed data. Unfortunately, in order to obtain accurate estimates for these parameters, the number of samples N must be very large. In reality, however, this is never the case. There are only a small number of samples n available for estimation purposes, i.e., $n\ll N$. This introduces error in the estimation of $({\overrightarrow{\xi}}_{1},{\overrightarrow{\xi}}_{2},{\overrightarrow{\xi}}_{3})$ and, as a consequence, the divergence detector does not perform optimally.
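The conventional decision rule (94) can be sketched for exponential densities, where the divergences have the closed form (61). The sketch below is illustrative only (the rates, sample sizes and function names are assumptions, and the fractional fine-tuning described next is not implemented here); the observed density is summarised by the maximum-likelihood rate estimate:

```python
import math
import random

def kl_exp(a, b):
    """Closed-form exponential divergence (61)."""
    return math.log(a / b) + b / a - 1.0

def min_divergence_detect(samples, q0_rate, q1_rate):
    """Decide H1 (return 1) when D(p_hat||q0) > D(p_hat||q1), as in (94).

    The observed density p is summarised by the maximum-likelihood rate
    estimate u_hat = n / sum(x_i).
    """
    u_hat = len(samples) / sum(samples)
    return 1 if kl_exp(u_hat, q0_rate) > kl_exp(u_hat, q1_rate) else 0

random.seed(3)
q0_rate, q1_rate = 0.5, 2.0        # hypothetical noise-only and signal rates
h1_data = [random.expovariate(2.0) for _ in range(500)]  # generated under H1
h0_data = [random.expovariate(0.5) for _ in range(500)]  # generated under H0
print(min_divergence_detect(h1_data, q0_rate, q1_rate))
print(min_divergence_detect(h0_data, q0_rate, q1_rate))
```

With small sample sizes, the estimate `u_hat` is noisy and the decision degrades, which is exactly the loss the fractional order is intended to compensate for.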

Using the fractional divergence approach means that the parameters depend on the fractional order, $({\overrightarrow{\xi}}_{1}\left(\alpha \right),{\overrightarrow{\xi}}_{2}\left(\alpha \right),{\overrightarrow{\xi}}_{3}\left(\alpha \right))$. Thus, even if the parameters are estimated using only a small sample n in each case, the fractional order can be changed in order to compensate for this by varying the divergences to obtain the optimal solution as if the sampling was very large to begin with. The fractional-order(s) ‘fine-tunes’ the performance of the detector by acting as a correction factor to the loss experienced in the estimation process for the parameters because of poor or small sampling.

## 7. Conclusions

It has been shown that the divergence between different probability densities can be studied using the Kullback–Leibler approach. Solutions can be found that indicate where two competing density models approach each other asymptotically, but these solutions are generally unique or trivial. The fractional divergence employs fractional calculus to improve on the conventional divergence results beyond the trivial or unique cases. Apart from the improved overall performance, fractional solutions open up the possibility of further insight into problems requiring this type of analysis.

## Acknowledgments

The author would like to thank the reviewers for their suggestions on how to improve the paper.

## Conflicts of Interest

The author declares no conflict of interest.

## References

- Jeffrey, H. Theory of Probability, 2nd ed.; Clarendon Press: Oxford, UK, 1948.
- Topsøe, F. Some inequalities for information divergence and related measures of discrimination. IEEE Trans. Inf. Theory **2000**, 46, 1602–1609.
- Rényi, A. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematics, Statistics and Probability, Berkeley, CA, USA, 20 June–30 July 1960; Volume 1, pp. 547–561.
- Borland, L.; Plastino, A.R.; Tsallis, C. Information gain within nonextensive thermostatistics. J. Math. Phys. **1998**, 39, 6490–6501.
- Ubriaco, M.R. Entropies based on fractional calculus. Phys. Lett. A **2009**, 373, 2516–2519.
- Machado, J.T. Fractional order generalized information. Entropy **2014**, 16, 2350–2361.
- Lin, J. Divergence measures based on the Shannon entropy. IEEE Trans. Inf. Theory **1991**, 37, 145–151.
- Machado, J.T. A probabilistic interpretation of the fractional-order differentiation. Fract. Calc. Appl. Anal. **2003**, 6, 73–80.
- Nguyen, V.T. Fractional calculus in probability. Probab. Math. Stat. **1984**, 3, 173–189.
- Machado, J.T. Fractional coins and fractional derivatives. Abstr. Appl. Anal. **2013**, 5.
- Jumarie, G. Probability calculus of fractional order and fractional Taylor’s series application to Fokker–Planck equation and information of non-random functions. Chaos Solitons Fractals **2009**, 40, 1428–1448.
- Resnick, S.I. A Probability Path; Birkhäuser: Boston, MA, USA, 1998.
- Mostafaei, M.; Ahmadi Ghotbi, P. Fractional probability measure and its properties. J. Sci. **2010**, 21, 259–264.
- El-Shehawy, S.A. On properties of fractional probability measure. Int. Math. Forum **2016**, 11, 1175–1184.
- Swerling, P. Probability of detection for fluctuating targets. IRE Trans. Inf. Theory **1960**, IT-6, 269–308.
- Gandhi, P.; Kassam, S. Analysis of CFAR processors in nonhomogeneous background. IEEE Trans. Aerosp. Electron. Syst. **1988**, 24, 427–445.
- Rohling, H. Radar CFAR thresholding in clutter and multiple target situations. IEEE Trans. Aerosp. Electron. Syst. **1983**, 19, 608–621.
- Tuzlukov, V.P. Signal Detection Theory; Springer: Boston, MA, USA, 2001.
- Levanon, N. Radar Principles; Wiley: New York, NY, USA, 1988.
- Amari, S.; Nagaoka, H. Methods of information geometry. In Translations of Mathematical Monographs; American Mathematical Society: Providence, RI, USA, 2000; Volume 191; ISBN 978-0821805312.
- Alexopoulos, A. One-parameter Weibull-type distribution and its relative entropy. Digit. Signal Process. **2017**, under review.
- Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. **1951**, 22, 79–86.
- Goutis, C.; Robert, C.P. Model choice in generalised linear models: A Bayesian approach via Kullback–Leibler projections. Biometrika **1998**, 85, 29–37.
- Van Erven, T.; Harremoës, P. Rényi divergence and Kullback–Leibler divergence. IEEE Trans. Inf. Theory **2014**, 60, 3797–3820.
- Do, M.N.; Vetterli, M. Wavelet-based texture retrieval using generalized Gaussian density and Kullback–Leibler distance. IEEE Trans. Image Process. **2002**, 11, 146–158.
- Perez-Cruz, F. Kullback–Leibler divergence estimation of continuous distributions. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Toronto, ON, Canada, 6–11 July 2008.
- Lee, Y.K.; Park, B.U. Estimation of Kullback–Leibler divergence by local likelihood. Ann. Inst. Stat. Math. **2006**, 58, 327–340.
- Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley-Interscience: New York, NY, USA, 1991.
- Wang, C.P.; Ghosh, M. A Kullback–Leibler divergence for Bayesian model diagnostics. Open J. Stat. **2011**, 1, 172–184.
- Alexopoulos, A.; Weinberg, G.V. Fractional-order Pareto distributions with application to X-band maritime radar clutter. IET Radar Sonar Navig. **2015**, 9, 817–826.
- De Oliveira, E.C.; Machado, J.A.T. A review of definitions for fractional derivatives and integral. Math. Probl. Eng. **2014**, 6.
- Alexopoulos, A.; Weinberg, G.V. Fractional-order formulation of power-law and exponential distributions. Phys. Lett. A **2014**, 378, 2478–2481.
- Kulish, V.V.; Lage, J.L. Application of fractional calculus to fluid mechanics. Fluids Eng. **2002**, 124, 803–806.
- Douglas, J.F. Some applications of fractional calculus to polymer science. Adv. Chem. Phys. **1997**, 102, 121–192.
- Fellah, Z.E.A.; Depollier, C. Application of fractional calculus to the sound waves propagation in rigid porous materials: Validation via ultrasonic measurement. Acta Acust. **2002**, 88, 34–39.
- Assaleh, K.; Ahmad, W.M. Modeling of speech signals using fractional calculus. In Proceedings of the 9th International Symposium on Signal Processing and Its Applications (ISSPA), Sharjah, UAE, 12–15 February 2007; pp. 1–4.
- Mathieu, B.; Melchior, P.; Oustaloup, A.; Ceyral, C. Fractional differentiation for edge detection. Fract. Signal Process. Appl. **2003**, 83, 2285–2480.
- Soczkiewicz, E. Application of fractional calculus in the theory of viscoelasticity. Mol. Quantum Acoust. **2002**, 23, 397–404.
- Machado, J.A.T.; Jesus, I.S.; Cunha, J.B.; Tar, J.K. Fractional dynamics and control of distributed parameter systems. Intell. Syst. Serv. Mank. **2006**, 2, 295–305.
- Hilfer, R. Applications of Fractional Calculus in Physics; World Scientific Publishing: Singapore, 2000.
- Podlubny, I. Fractional Differential Equations; Academic Press: Cambridge, MA, USA, 1999; Volume 198.

**Figure 1.**The divergence between the Exponential-density and the Pareto-density for a fixed Pareto scale parameter, ${x}_{0}=0.01$.

**Figure 2.** For ${x}_{0}=0.01$, the (**a**) conventional and (**b**) fractional divergence is shown, respectively.

**Figure 3.**Variation of the (fractional) divergence manifold between two Exponential-densities in terms of the fractional orders ${\alpha}_{1}$ and ${\alpha}_{2}$. The case $\alpha =1$ corresponds to the conventional divergence.

**Figure 4.**Further manipulation of the (fractional) divergence manifold between two Exponential-densities via the fractional orders ${\alpha}_{1}$ and ${\alpha}_{2}$.

© 2017 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).