On Fourier-Based Inequality Indices

Inequality indices are quantitative scores that take values in the unit interval, with a zero score denoting complete equality. They were originally created to measure the heterogeneity of wealth metrics. In this study, we focus on a new inequality index based on the Fourier transform that demonstrates a number of intriguing characteristics and shows great potential for applications. By extension, it is demonstrated that other inequality measures, such as the Gini and Pietra indices, can be usefully stated in terms of the Fourier transform, allowing us to illuminate characteristics in a novel and straightforward manner.


Introduction
As recently discussed in [1][2][3], the challenge of measuring the statistical heterogeneity of measures arises in most fields of science and engineering, and it is one of the fundamental features of data analysis.
In economics and the social sciences, the size measures of interest are wealth measures, and in this context many inequality indices have been introduced [4][5][6][7]. Specifically, inequality indices quantify the socio-economic divergence of a given wealth measure from the state of perfect equality. In this area, the most used measure of inequality is the Gini index, first proposed by the Italian statistician Corrado Gini more than a century ago [8,9]. However, although it had an economic origin, the use of the Gini index has not been limited to wealth alone [10].
A second important index of inequality, likewise introduced in economics, is the Pietra index [11]. As discussed in [12], the Pietra index is an elemental measure of statistical heterogeneity which has a number of properties that render it not only an alternative to the popular Gini index, but rather, a far more natural and meaningful quantitative tool for the measurement of egalitarianism, and, consequently, for the measurement of statistical heterogeneity at large.
In addition, other indices have been introduced so far. An alternative to the Gini index was introduced by Bonferroni in 1930 in a textbook for students at Bocconi University in Milan [13]. The main properties and representations of the Bonferroni index and its connections with the index of Gini and other measures were studied in [3]. Furthermore, it is important to mention the Kolkata index, first introduced in [14] as a measure of inequality, whose connections with the Gini and Pietra indices have been studied in [1,15].
An indispensable tool for measuring the statistical heterogeneity of measures is the Lorenz function and its graphical representation, the Lorenz curve [16]. For wealth measures, the Lorenz curve plots the percentage of total income earned by the various sectors of the population, ordered by the increasing size of their incomes. The Lorenz curve is typically represented as a curve in the unit square with opposite vertices at the origin of the axes and at the point (1,1), starting from the origin and ending at the point (1,1).
The diagonal of the square exiting the origin is the line of perfect equality, representing a situation in which all individuals have the same income. Since the diagonal is the line of perfect equality, one can say that the closer the Lorenz curve is to the diagonal, the more equal is the distribution of income.
This idea of closeness between the line of perfect equality and the Lorenz curve can be expressed in many ways, each of which gives rise to a possible measure of inequality. Thus, starting from the Lorenz curve, several indices of inequality can be defined, including the Gini index. Various indices were obtained by looking at the maximal distance between the line of perfect equality and the Lorenz curve, either horizontally or vertically, or alternatively parallel to the other diagonal of the unit square [2].
Despite the enormous amount of research illustrating the fields of application of inequality indices, the use of arguments based on Fourier transforms appears rather limited. In particular, although the Gini index can be easily expressed in terms of the Fourier transform, at least to our knowledge its Fourier expression has never been considered in applications. The same conclusion can be drawn for the Pietra index, whose expression in terms of the Fourier transform is very useful both to understand its nature and to introduce from it other Fourier-based measures of inequality, including the one considered in this paper.
We would like to point out that having inequality indices expressed in terms of the Fourier transform could be very interesting for a variety of applications. Indeed, the Fourier transform makes it possible to model many (often very surprising) phenomena ranging from environmental problems to image processing and the social sciences. To name a few: environmental pollution [17], image processing [18,19], and the description of markets as quantum processes, where supply and demand strategies are described as reciprocal Fourier transforms [20].
The objective of this paper is to introduce a new inequality index based on the Fourier transform, which satisfies some properties that make it very interesting for possible applications.
Denote by P_s(R), s ≥ 1, the class of all probability measures F on the Borel subsets of R such that

m_s(F) = ∫_R |x|^s dF(x) < +∞.
Further, denote by P̃_s(R) the class of probability measures F ∈ P_s(R) which possess a positive mean value, and by P_s^+(R) the subset of probability measures F ∈ P̃_s(R) such that F(x) = 0 for x ≤ 0. Let F_s be the set of Fourier transforms of probability measures F in P̃_s(R). On F_s we introduce an inequality index, denoted by T(F), given by the formula

T(F) = (1/2) sup_{ξ∈R} | f(ξ) − f′(ξ)/f′(0) |.    (1)

In definition (1), f′(ξ) = df(ξ)/dξ denotes the derivative of the Fourier transform f(ξ) of F with respect to its argument ξ, and f′(0) = −i m(F), where m(F) > 0 is the mean value of F. Indeed, F ∈ P_s(R), with s ≥ 1, implies that f(ξ) is continuously differentiable on the entire real line. In the following, we will show that the functional T(F) is a measure of inequality which satisfies most of the properties required to be a good measure of sparsity and/or heterogeneity [10]. The interest in having a measure of inequality based on the Fourier transform, such as T(F), is twofold. On the one hand, it is very simple to calculate the value taken by this measure at probability distributions for which the characteristic function is explicitly available. This is the case, among others, of the Poisson distribution and, for probability measures defined on the whole real line R, of the stable laws. On the other hand, in the case of dealing with a discrete probability measure, the use of the Fourier transform makes it possible to develop very fast computational procedures [21,22].
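As a quick numerical illustration, the supremum in (1) can be approximated on a frequency grid whenever the Fourier transform is computable. The following sketch is our own illustrative code (the helper name T_index, the grid bounds and the Bernoulli test case are assumptions, and the closed form T(F) = (1/2) sup_ξ |f(ξ) − f′(ξ)/f′(0)| is the reading of definition (1) adopted here):

```python
import numpy as np

def T_index(values, probs, xi_max=2 * np.pi, n=200_001):
    """Approximate T(F) = (1/2) sup_xi |f(xi) - f'(xi)/f'(0)| on a grid,
    for a discrete probability measure sum_k p_k delta_{x_k}."""
    values = np.asarray(values, dtype=float)
    probs = np.asarray(probs, dtype=float)
    xi = np.linspace(-xi_max, xi_max, n)
    phase = np.exp(-1j * np.outer(xi, values))   # e^{-i xi x_k}
    f = phase @ probs                            # Fourier transform f(xi)
    fp = phase @ (-1j * values * probs)          # derivative f'(xi)
    fp0 = -1j * (values @ probs)                 # f'(0) = -i m
    return 0.5 * np.max(np.abs(f - fp / fp0))

# Bernoulli measure P(X=0) = 1-p, P(X=1) = p; the index should equal 1-p
p = 0.3
T_bern = T_index([0.0, 1.0], [1 - p, p])
```

Note that the maximum is searched only on the interval (−2π, 2π), in agreement with the observation that the supremum is usually attained there.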
From a certain point of view, the measure of inequality defined by (1) has many points of contact with the inequality measures obtained from the Lorenz curve through the concept of maximum distance.
In fact, the index (1) expresses the maximum value of the modulus of the difference between the Fourier transform of a probability measure F of positive mean m and its derivative normalized with respect to the mean. In the economic context, the closer the Fourier transform of the probability measure is to its normalized derivative, the more equal is the distribution of income. In other words, the line of perfect equality in the Lorenz square is here substituted by the Fourier transform of a Dirac delta function located at a point different from zero.
It is interesting to note that, as will become clear from the examples, the maximum value is usually attained in the finite interval (−2π, 2π).
Before studying the new index T(·) defined in (1) and listing its properties, we will begin with a brief introduction to the use of the Fourier transform to express the classical Gini and Pietra indices. This will be done in Section 2. As we shall see, the use of the Fourier transform makes it possible to clarify the functional setting in which these indices live. It is worth mentioning that, unlike the classical Gini and Pietra indices, neither the Bonferroni index nor the Kolkata index seems to be expressible in closed form in terms of the Fourier transform.
Next, Section 3 will be devoted to the study of the main properties of the new inequality measure. Various examples will be collected in Section 4. Lastly, Section 5 illustrates how some properties of the index can be fruitfully used in connection with linear kinetic models.

A Fourier-Based Expression of Gini Index
In the rest of the paper, for any fixed constant a ≥ 0, we will denote by F_a(x) the Heaviside step function

F_a(x) = 0 for x < a;  F_a(x) = 1 for x ≥ a.

Clearly, F_a(x) is the cumulative distribution function of a random variable which is almost surely equal to a. It belongs to P_s(R) for any s ≥ 1, and m(F_a) = a.
To obtain an explicit expression in Fourier transform for the Gini index, which admits many equivalent formulations [23], we will resort to its well-known form in terms of a continuous probability measure. For a probability measure F ∈ P_s^+(R) with mean m, the Gini index is defined by the formula

G(F) = 1 − (1/m) ∫_0^{+∞} (1 − F(x))² dx.

Since F ∈ P_s^+(R), F(x) = 0 for x ≤ 0. Hence, resorting to the definition of the Heaviside step function F_0(x), we have the identity

∫_0^{+∞} (1 − F(x))² dx = ∫_R (F_0(x) − F(x))² dx.

For any given pair of probability measures F, G ∈ P_s(R) whose difference is square integrable, the Parseval formula implies

∫_R (F(x) − G(x))² dx = (1/(2π)) ∫_R |f(ξ) − g(ξ)|² / ξ² dξ,    (5)

where f and g are the Fourier transforms of the probability measures F and G. Indeed, considering that F(−∞) − G(−∞) = F(+∞) − G(+∞) = 0, integration by parts gives

∫_R e^{−iξx} d(F(x) − G(x)) = iξ ∫_R e^{−iξx} (F(x) − G(x)) dx,

that is, the Fourier transform of F − G coincides with (f(ξ) − g(ξ))/(iξ). Consequently, we have the identity

∫_0^{+∞} (1 − F(x))² dx = (1/(2π)) ∫_R |1 − f(ξ)|² / ξ² dξ.

Therefore, for any probability measure F ∈ P_s^+(R), the Gini index has a simple expression in Fourier transform, given by

G(F) = 1 − (1/(2πm)) ∫_R |1 − f(ξ)|² / ξ² dξ.    (7)

Remark 1. For a given constant q > 0, let Ḣ^{−q} denote the homogeneous Sobolev space of fractional order with negative index −q, endowed with the norm

‖F‖²_{Ḣ^{−q}} = (1/(2π)) ∫_R |f(ξ)|² / |ξ|^{2q} dξ.

Then, the variable part of the Gini index coincides with the scaling-invariant distance between the probability measure F and the Heaviside step function F_0 in the homogeneous Sobolev space Ḣ^{−1}.
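A hedged numerical check of the Fourier expression of the Gini index (a sketch under the assumption that it reads G(F) = 1 − (1/(2πm)) ∫_R |1 − f(ξ)|²/ξ² dξ; the grid parameters are illustrative): for the exponential law of mean m = 1 one has f(ξ) = 1/(1 + iξ), and the classical Gini index equals 1/2.

```python
import numpy as np

m = 1.0
dxi = 5e-4
xi = (np.arange(2_000_000) + 0.5) * dxi     # midpoint grid on (0, 1000); integrand is even
f = 1.0 / (1.0 + 1j * xi)                   # Fourier transform of the exponential law
integrand = np.abs(1.0 - f) ** 2 / xi**2    # analytically equal to 1/(1 + xi^2)
G_fourier = 1.0 - 2.0 * np.sum(integrand) * dxi / (2.0 * np.pi * m)
```

The truncation of the integral at ξ = 1000 introduces an error of order 10⁻⁴, well below the agreement tested for.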

Remark 2.
Considering that the value zero in (7) is attained when f(ξ) = e^{−imξ}, namely when F = F_m, and that

(1/(2πm)) ∫_R |1 − e^{−imξ}|² / ξ² dξ = 1,

we can rewrite the Gini index as

G(F) = (1/(2πm)) [ ∫_R |1 − e^{−imξ}|²/ξ² dξ − ∫_R |1 − f(ξ)|²/ξ² dξ ].    (9)

Another Fourier-Based Inequality Measure
Expression (9) suggests considering a related functional in which the dispersion of the probability measure F of mean value m > 0 is measured by its scale-invariant Ḣ^{−1}-distance from the Heaviside step function F_m with the same mean value m. We define

H(F) = (1/(2πm)) ∫_R |f(ξ) − e^{−imξ}|² / ξ² dξ.    (10)

Unlike the Gini index, which requires F ∈ P_s^+(R), the inequality measure H(F) is well-defined for any measure F ∈ P̃_s(R).
It is interesting to remark that, similarly to the Gini index, the inequality measure H(F), for F ∈ P_s^+(R), is bounded above by 1. This property is shown in Appendix A. The interest in having an inequality index that quantifies the statistical heterogeneity of probability measures defined on the whole real line R in terms of the Fourier transform is evident. As an example, let us compute the value of the functional H for a Gaussian probability measure F of mean m > 0 and variance σ². Since the Fourier transform of the Gaussian density is given by

f(ξ) = e^{−imξ − σ²ξ²/2},    (11)

we easily obtain

H(F) = (1/(2πm)) ∫_R (1 − e^{−σ²ξ²/2})² / ξ² dξ.

Integration by parts yields

∫_R (1 − e^{−σ²ξ²/2})² / ξ² dξ = 2σ² ∫_R (e^{−σ²ξ²/2} − e^{−σ²ξ²}) dξ = 2σ √π (√2 − 1).

Thus, for a Gaussian probability measure F of mean m > 0 and variance σ² we have the value

H(F) = ((√2 − 1)/√π) (σ/m),    (12)

namely a value proportional to the coefficient of variation σ/m, with an explicit constant strictly less than one.
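The Gaussian value can be cross-checked numerically; the sketch below is our own code, assuming H takes the Fourier form with weight 1/ξ² and that the Gaussian value equals (√2 − 1)/√π · σ/m:

```python
import numpy as np

m, sigma = 2.0, 0.5
dxi = 4e-4
xi = (np.arange(2_500_000) + 0.5) * dxi     # midpoint grid on (0, 1000); integrand is even
# for the Gaussian law |f(xi) - e^{-i m xi}|^2 = (1 - e^{-sigma^2 xi^2 / 2})^2
integrand = (1.0 - np.exp(-0.5 * sigma**2 * xi**2)) ** 2 / xi**2
H_gauss = 2.0 * np.sum(integrand) * dxi / (2.0 * np.pi * m)
H_exact = (np.sqrt(2.0) - 1.0) / np.sqrt(np.pi) * sigma / m
```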

A Fourier-Based Expression of Pietra Index
For a probability measure F ∈ P_s^+(R) with mean m, the Pietra index P(F) [11,12] is defined by the formula

P(F) = (1/m) ∫_m^{+∞} (x − m) dF(x).    (13)

As remarked in [12], the definition (13) seems to disregard the part of the measure below the mean. This, however, is not true, and (13) makes use of the full information encapsulated in the probability law of the random variable X with measure F. There is a simple way to verify the previous assertion. Indeed, integration by parts gives

∫_m^{+∞} (x − m) dF(x) = ∫_m^{+∞} (1 − F(x)) dx = ∫_0^m F(x) dx,

where the last equality follows from ∫_0^{+∞} (1 − F(x)) dx = m. On the other hand, by the Parseval-type argument of Section 2, H(F) = (1/m) ∫_R (F(x) − F_m(x))² dx, while (F(x) − F_m(x))² − (1 − F(x))² equals 2F(x) − 1 on (0, m) and vanishes on (m, +∞), so that

(1/2)(G(F) + H(F)) = (1/2)[1 + (1/m) ∫_0^m (2F(x) − 1) dx] = (1/m) ∫_0^m F(x) dx.

Hence, we have the identity

P(F) = (1/2)(G(F) + H(F)).    (14)

In other words, the Pietra index of a probability measure F ∈ P_s^+(R) is represented by the mean value of the two indices G(F) and H(F), where H is defined in (10), identically weighted.
Resorting to the Fourier expressions of the Gini and H indices, we then obtain for the Pietra index the expression

P(F) = (1/2) [ 1 − (1/(2πm)) ∫_R |1 − f(ξ)|²/ξ² dξ + (1/(2πm)) ∫_R |f(ξ) − e^{−imξ}|²/ξ² dξ ].    (15)

Remark 3. The Fourier expression (15) clarifies that the Pietra index takes into account at the same time the distances in Ḣ^{−1} of a probability measure in P_s^+(R) from the Dirac delta functions located at zero and, respectively, at the mean value m. From this point of view, the Pietra index appears as a well-balanced inequality index. This feature is hidden in the classical definition.
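The mean-value representation of the Pietra index can be probed numerically. In the sketch below (our own illustrative check; the Fourier forms of G and H and the identity P = (G + H)/2 are the assumptions under test) we use the exponential law of mean 1, for which the classical Pietra index equals 1/e:

```python
import numpy as np

m = 1.0
dxi = 4e-4
xi = (np.arange(2_500_000) + 0.5) * dxi        # midpoint grid on (0, 1000); even integrands
f = 1.0 / (1.0 + 1j * xi)                      # exponential law of mean 1
G = 1.0 - 2.0 * np.sum(np.abs(1.0 - f) ** 2 / xi**2) * dxi / (2.0 * np.pi * m)
H = 2.0 * np.sum(np.abs(f - np.exp(-1j * m * xi)) ** 2 / xi**2) * dxi / (2.0 * np.pi * m)
P_classical = 1.0 / np.e                       # (1/m) Int_m^inf (x - m) e^{-x} dx
```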
Remark 4. Since, by the previous identity, the Pietra index is the arithmetic mean of the Gini index and of H, the values of the inequality index H(F) for a large number of probability measures can easily be computed by resorting to the tables of values assumed by the Gini and Pietra indices.

Remark 5.
If one considers only one-dimensional discrete measures, the inequality index H(F) defined by (10) coincides with a particular case of the discrepancy function recently introduced in [22], where the discrepancy measures the distance in L² between the characteristic functions of two given discrete measures, weighted by the function 1/k², with k = 1, 2, …, N. In this case, one of the two discrete measures is a Dirac delta function located at the mean value.

Towards New Inequality Indices
Apart from the scaling constant, the functional H(F) coincides with the square of the L²(R)-norm of the function

h(ξ) = (f(ξ) − e^{−imξ}) / (m ξ).

It is simple to verify that a further scale-invariant functional can be obtained by considering the L^∞(R)-norm of h(ξ). This functional is given by

H_∞(F) = sup_{ξ≠0} |f(ξ) − e^{−imξ}| / (m |ξ|).    (16)

Resorting to the triangle inequality, we can easily conclude that H_∞ is bounded. Indeed, for F ∈ P_s^+(R) with mean value m,

|f(ξ) − e^{−imξ}| ≤ ∫_R |e^{−iξx} − e^{−imξ}| dF(x) ≤ |ξ| ∫_R |x − m| dF(x) ≤ 2m|ξ|;

the sharper bound H_∞(F) ≤ 1 will be obtained at the end of this Section. The functional H_∞ is a particular case of a metric for probability measures which has been used to study convergence to equilibrium for the Boltzmann equation. This is an argument that in the kinetic theory of rarefied gases goes back to [24], where convergence to equilibrium for the Boltzmann equation for Maxwell pseudo-molecules was studied in terms of a metric for Fourier transforms (cf. also [25][26][27] for further applications).
The metric introduced in [24] in connection with the Boltzmann equation for Maxwell molecules was subsequently applied in various contexts, which include kinetic models for wealth measures [28], thus establishing a number of common points between kinetic modeling and inequality measures.
For a given pair of random variables X and Y distributed according to F and G, these metrics read

d_r(F, G) = sup_{ξ≠0} |f(ξ) − g(ξ)| / |ξ|^r,  r > 0.

As shown in [24], the metric d_r(F, G) is finite any time the probability measures F and G have equal moments up to [r], namely the integer part of r ∈ R_+, or equal moments up to r − 1 if r ∈ N, and it is equivalent to the weak* convergence of measures for all r > 0. Among other properties, it is easy to see [24,28] that, for two pairs of random variables X, Y and Z, Z̃, where X is independent of Z and Y is independent of Z̃, and for any constant c > 0,

d_s(X + Z, Y + Z̃) ≤ d_s(X, Y) + d_s(Z, Z̃);  d_s(cX, cY) = c^s d_s(X, Y).    (19)

These properties classify d_s as an ideal probability metric in the sense of Zolotarev [29].

Properties of H_∞(F) can be easily extracted from (19) by considering that, if X is a random variable with probability measure F of mean value m,

H_∞(F) = (1/m) d_1(F, F_m).

In particular, the second property in (19) implies the scaling invariance of H_∞. Moreover, the first inequality in (19) implies that, for any pair of independent random variables X and Y, with mean values m_X (respectively m_Y), by choosing Z and Z̃ with probability measures F_{m_X} (respectively F_{m_Y}),

H_∞(X + Y) ≤ (m_X H_∞(X) + m_Y H_∞(Y)) / (m_X + m_Y),    (20)

namely a property of sub-additivity for convolutions. Moreover, if Y is distributed with probability measure F_{m_Y}, inequality (20) gives

H_∞(X + m_Y) ≤ (m_X / (m_X + m_Y)) H_∞(X) < H_∞(X).    (21)

Inequality (21) is a typical feature of sparsity measures, which translates to the case of a continuous variable the property that adding a constant to each coefficient decreases sparsity [10].
In view of its properties, the functional H_∞(·) appears to be a good measure of inequality. Unfortunately, the computation of the values of H_∞ for most probability measures is cumbersome. In particular, it seems not possible to explicitly compute the value of H_∞(X) even in the simplest case in which the variable X takes only two positive values. Consequently, we cannot evaluate whether, for a given ε ≪ 1, there exists a probability measure with index 1 − ε. Indeed, as we saw in Section 2.2, this basic property follows from the analysis of the values taken by the inequality index at two-valued random variables. The upper bound 1 can also be found by resorting to the Lagrange mean value theorem. Indeed, since F ∈ P̃_s(R), the function h(ξ) = |f(ξ) − e^{−imξ}| is continuously differentiable on the entire real line and satisfies h(0) = 0.
Therefore, by the Lagrange theorem, for any given ξ ∈ R there exists ξ₀, with |ξ₀| ≤ |ξ|, such that h(ξ) = h(ξ) − h(0) = h′(ξ₀) ξ. Since |e^{−imξ}| = 1, we therefore obtain

H_∞(F) ≤ (1/m) sup_ξ |f′(ξ) + i m e^{−imξ}|.    (24)

Using the argument leading to the upper bound in (24), one easily concludes that H_∞(F) is bounded above by T(F), where T(F) is the functional defined by (1), and consequently that H_∞(F) ≤ 1.

A New Fourier-Based Index of Inequality
This Section will be devoted to studying in more detail the main properties of the inequality index T(F), as given by (1). Depending on convenience, given a random variable X with probability measure F ∈ P̃_s(R), we will write interchangeably T(X) or T(F).
The inequality index T satisfies various properties, which we list and prove in the following.

Scaling
For any constant c > 0, the index T(F) is invariant with respect to the scaling F(x) → F(cx). The scaling invariance of T(F) can easily be seen by noticing that, if f(ξ) is the Fourier transform of F(x), then f(ξ/c) is the Fourier transform of F(cx), and

(1/2) sup_ξ | f(ξ/c) − f′(ξ/c)/f′(0) | = (1/2) sup_ξ | f(ξ) − f′(ξ)/f′(0) | = T(F).

Lower and Upper Bounds
If F ∈ P̃_s(R), the values of the functional T(F) lie between zero and one, where the value zero (minimal inequality) is assumed in correspondence to a Heaviside probability measure F_m, with m > 0. Indeed, let F ∈ P_s^+(R). Since |f(ξ)| ≤ f(0) = 1 and

|f′(ξ)| ≤ ∫_R x dF(x) = |f′(0)| = m,

it is easy to conclude, by the triangle inequality, that T(F) satisfies the bounds

0 ≤ T(F) ≤ 1,

and T(F) = 0 if and only if f(ξ) satisfies the differential equation

f′(ξ) = −i m f(ξ),

with f(0) = 1, so that the unique solution is given by f(ξ) = e^{−imξ}, namely by the Fourier transform of a Dirac delta function located at the mean value x = m(F) > 0. Note however that, even if the functional T is defined on the whole class P̃_s(R), the upper bound is lost if the probability measure F ∉ P_s^+(R), since in this case the inequality |f′(ξ)/f′(0)| ≤ 1 does not hold.
The value T(F) = 1, corresponding to maximal inequality, is approached if we compute the value of T(X) when the random variable X of mean value m is the two-valued random variable with

P(X = 0) = 1 − ε;  P(X = m/ε) = ε,  0 < ε ≪ 1.

In this case

f(ξ) = 1 − ε + ε e^{−imξ/ε},  f′(ξ)/f′(0) = e^{−imξ/ε},

so that T(X) = (1/2) sup_ξ (1 − ε)|1 − e^{−imξ/ε}| = 1 − ε.

Convexity
Let F, G ∈ P̃_s(R) be two probability measures with the same mean value, say m, and let f, g denote their Fourier transforms. Then, for any given τ ∈ (0, 1), the Fourier transform h_τ(ξ) = τ f(ξ) + (1 − τ) g(ξ) of the mixture τF + (1 − τ)G satisfies h_τ′(0) = τ f′(0) + (1 − τ) g′(0) = f′(0) = g′(0) = −im, so that the triangle inequality gives

T(τF + (1 − τ)G) ≤ τ T(F) + (1 − τ) T(G).

This shows the convexity of the functional T on the set of probability measures with the same mean.
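Assuming definition (1) takes the form T(F) = ½ sup_ξ |f(ξ) − f′(ξ)/f′(0)|, with f′(0) = −im (a reading consistent with the examples of the next Section), the convexity computation can be written out explicitly:

```latex
\begin{align*}
 h_\tau(\xi) &:= \tau f(\xi) + (1-\tau)\, g(\xi),
 \qquad h_\tau'(0) = \tau f'(0) + (1-\tau)\, g'(0) = -\,i m ,\\[2pt]
 T(\tau F + (1-\tau) G)
 &= \tfrac12 \sup_{\xi\in\mathbb{R}}
    \Big| \tau\Big( f(\xi) - \tfrac{f'(\xi)}{-im} \Big)
        + (1-\tau)\Big( g(\xi) - \tfrac{g'(\xi)}{-im} \Big) \Big|
 \le \tau\, T(F) + (1-\tau)\, T(G),
\end{align*}
```

where the last step is the triangle inequality for the supremum norm.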

Sub-Additivity for Convolutions
The most important property characterizing the inequality index T is linked to its behavior in the presence of convolutions. For any given pair of Fourier transforms f, g of probability measures in P̃_s(R), let us set h(ξ) = f(ξ) g(ξ).
Then, since |f(ξ)| ≤ f(0) = 1 and |g(ξ)| ≤ g(0) = 1, from h′(ξ) = f′(ξ)g(ξ) + f(ξ)g′(ξ) and h′(0) = f′(0) + g′(0) we obtain

| h(ξ) − h′(ξ)/h′(0) | ≤ ( m_X | f(ξ) − f′(ξ)/f′(0) | + m_Y | g(ξ) − g′(ξ)/g′(0) | ) / (m_X + m_Y).

Therefore, if X and Y are independent random variables with probability measures in P̃_s(R) and mean values m_X (respectively m_Y), the inequality index T satisfies the inequality

T(X + Y) ≤ (m_X T(X) + m_Y T(Y)) / (m_X + m_Y).    (25)

In particular, if Y is a random variable that takes the value m > 0 with probability 1 (so that g(ξ) = e^{−imξ} and T(Y) = 0),

T(X + m) ≤ (m_X / (m_X + m)) T(X) < T(X).    (26)

Since X + Y corresponds to adding the constant m to X, this property asserts that adding a constant wealth to each agent decreases inequality. Furthermore, if the random variables X₁ and X₂ are distributed with the same law of X, thanks to the scaling property,

T((X₁ + X₂)/2) = T(X₁ + X₂) ≤ T(X),    (27)

while the mean of (X₁ + X₂)/2 is equal to the mean of X.

Remark 6. Inequality (27) is fully operational in the case where the two variables X₁ and X₂ are characterized either by a continuous probability measure or take on an infinite number of values. Only in this case, in fact, do the probability measures remain of the same type under the operation of convolution. Suppose in fact that the variables X_i, i = 1, 2, are Bernoulli variables, such that

P(X_i = 0) = 1 − p;  P(X_i = 1) = p,  0 < p < 1.

The probability measure of X_i, i = 1, 2, has Fourier transform f(ξ) = 1 − p + p e^{−iξ}, and the probability measure of the convolution corresponds to the Fourier transform f(ξ)². Hence, the random variable Y = X₁ + X₂ takes the three values 0, 1, 2 with probabilities (1 − p)², 2p(1 − p) and p². Clearly, it makes little sense to relate the heterogeneity of a two-valued random variable to that of a three-valued random variable.

Adding a Noise
Another important consequence of inequality (25) is related to the situation in which the random variable Y represents a noise (of mean value m > 0) that is present when measuring the inequality index of X. The classical choice is that the additive noise is represented by a Gaussian variable of mean m and variance σ 2 .
If this is the case, the Fourier transform of the Gaussian density is given by (11). A direct computation, analogous to the one of Section 2.2 for the index H defined by (10), shows that for a Gaussian variable the inequality index T(Y) is proportional to the coefficient of variation of Y. We have in this case

T(Y) = (σ²/(2m)) sup_ξ |ξ| e^{−σ²ξ²/2} = (1/(2√e)) (σ/m).

Finally, if Y denotes the Gaussian random variable of mean m > 0 and variance σ², inequality (25) yields

T(X + Y) ≤ (m_X T(X) + σ/(2√e)) / (m_X + m),    (28)

namely an explicit upper bound for the inequality index in terms of the mean value and the variance of the Gaussian noise.
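The proportionality to the coefficient of variation can be confirmed on a grid; the following sketch is our own code, under the assumption that T(F) = ½ sup_ξ |f(ξ) − f′(ξ)/f′(0)| and that the Gaussian value equals σ/(2√e m):

```python
import numpy as np

m, sigma = 1.0, 0.4
xi = np.linspace(-20.0, 20.0, 2_000_001)
f = np.exp(-1j * m * xi - 0.5 * sigma**2 * xi**2)    # Gaussian Fourier transform
fp = (-1j * m - sigma**2 * xi) * f                   # derivative f'(xi); f'(0) = -i m
T_gauss = 0.5 * np.max(np.abs(f - fp / (-1j * m)))
T_pred = sigma / (2.0 * m * np.sqrt(np.e))           # sigma / (2 sqrt(e) m)
```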

Remark 7.
It is important to note that inequality (28) remains valid even if the mean value of the Gaussian noise is assumed equal to zero. In this case, by letting m → 0 we obtain the upper bound

T(X + Y) ≤ T(X) + (1/(2√e)) (σ/m_X).

Examples
In this section we will recover the values of the inequality index T for some well-known probability measures. With few exceptions, any time the explicit expression of the Fourier transform of the probability measure is available, the computation of the value of the inequality index T(·) is straightforward. The list of probability measures that can be treated via the Fourier transform is substantial, and includes both discrete and continuous distributions. For an in-depth look at this topic, the interested reader can consult the book [30].
We do not consider in this paper the possibility to make use of the fast Fourier transform to compute the values of the functional T in the case of a random variable taking only a finite number of values, a situation that we intend to treat in a companion paper.

Two-Valued Random Variables
Let X be a Bernoulli random variable, characterized by the probability measure

P(X = 0) = 1 − p;  P(X = 1) = p,  0 < p < 1,

with Fourier transform f(ξ) = 1 − p + p e^{−iξ}.
Then, since f′(ξ) = −ip e^{−iξ} and f′(0) = −ip, it immediately follows that

T(X) = (1/2) sup_ξ (1 − p) |1 − e^{−iξ}| = 1 − p.

For given positive constants a, b, let Y = aX + b. Then Y is characterized by the Fourier transform h(ξ) = f(aξ) e^{−ibξ}.
We have

T(Y) = a p (1 − p) / (a p + b).

Choosing α = b and β = a + b, where β > α, we then conclude that a two-valued random variable Y such that

P(Y = α) = 1 − p;  P(Y = β) = p

has an inequality index

T(Y) = p (1 − p) (β − α) / m(Y),  m(Y) = (1 − p)α + pβ.

The same value is assumed by the Gini and Pietra indices of Y.
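For two-valued laws, the coincidence of T with the Gini and Pietra value p(1 − p)(β − α)/m can be verified numerically. The sketch below is our own illustrative check, assuming T(F) = ½ sup_ξ |f(ξ) − f′(ξ)/f′(0)|:

```python
import numpy as np

alpha, beta, p = 1.0, 4.0, 0.25
mY = (1 - p) * alpha + p * beta                       # mean of the two-valued law
xi = np.linspace(-2 * np.pi, 2 * np.pi, 400_001)
f = (1 - p) * np.exp(-1j * alpha * xi) + p * np.exp(-1j * beta * xi)
fp = -1j * alpha * (1 - p) * np.exp(-1j * alpha * xi) - 1j * beta * p * np.exp(-1j * beta * xi)
T_num = 0.5 * np.max(np.abs(f - fp / (-1j * mY)))     # f'(0) = -i mY
T_pred = p * (1 - p) * (beta - alpha) / mY            # Gini/Pietra value
```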

Poisson Distribution
The Poisson distribution of parameter λ > 0 is characterized by the Fourier transform

f(ξ) = e^{λ(e^{−iξ} − 1)}.

In this case f′(ξ) = −iλ e^{−iξ} f(ξ), so that f′(ξ)/f′(0) = e^{−iξ} f(ξ) and

T(F) = (1/2) sup_ξ |f(ξ)| |1 − e^{−iξ}| = (1/2) sup_ξ e^{−λ(1 − cos ξ)} |1 − e^{−iξ}|.

Let us set 0 ≤ 1 − cos ξ = x² ≤ 2, so that |1 − e^{−iξ}| = √2 x. Then

T(F) = (√2/2) max_{0 ≤ x ≤ √2} x e^{−λx²}.

If λ ≤ 1/4, the maximum is attained at x̄ = √2, and T(F) = e^{−2λ}. If λ > 1/4, the maximum is attained at the point x̄ = 1/√(2λ), and in this case T(F) = 1/(2√(eλ)). Hence, if F is a Poisson probability measure of mean λ, we have

T(F) = e^{−2λ} if λ ≤ 1/4;  T(F) = 1/(2√(eλ)) if λ > 1/4.

Note that, as a function of λ, the functional T(F) is differentiable also at the point λ = 1/4, and it decreases as λ increases. Hence, small values of λ correspond to large heterogeneity.
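The closed form for the Poisson case can be probed on a grid; in the sketch below (our own code, assuming T(F) = ½ sup_ξ |f(ξ) − f′(ξ)/f′(0)| and, for λ > 1/4, the value 1/(2√(eλ))):

```python
import numpy as np

lam = 1.0                                             # Poisson parameter (and mean)
xi = np.linspace(-2 * np.pi, 2 * np.pi, 400_001)
f = np.exp(lam * (np.exp(-1j * xi) - 1.0))            # Poisson Fourier transform
fp = -1j * lam * np.exp(-1j * xi) * f                 # f'(xi); f'(0) = -i lam
T_num = 0.5 * np.max(np.abs(f - fp / (-1j * lam)))
T_pred = np.exp(-2.0 * lam) if lam <= 0.25 else 1.0 / (2.0 * np.sqrt(np.e * lam))
```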

Remark 8.
It is interesting to remark that the value of the Gini index of a Poisson distribution, say F, cannot be computed explicitly by resorting to its expression in Fourier transform, as given by formula (7). The same conclusion holds if we try to compute the values of H(F), as given by (10), and of H_∞(F), defined in (16).

Remark 9.
The previous computations can be extended, at the cost of more complicated calculations, to evaluate the explicit values of the index T for distributions which are obtained by summing up independent Poisson variables. Maybe the most interesting case corresponds to the Skellam distribution [31,32], that is, the discrete probability distribution of the difference of two independent random variables X₁ and X₂, each Poisson-distributed with expected values λ₁ and, respectively, λ₂, with λ₁ ≠ λ₂.

Stable Laws
As a further example of probability measures defined on the whole real line R, we will compute the value of T in correspondence to a stable law [33]. We will restrict ourselves here to the case of symmetric alpha-stable distributions of scale parameter σ > 0 and shift parameter m > 0, characterized by the Fourier transform

f(ξ) = e^{−imξ − (σ|ξ|)^α},  0 < α ≤ 2.

Note that the Gaussian distribution of mean m and variance 2σ² corresponds to the choice α = 2.

For these distributions

f′(ξ) = (−im − α σ^α |ξ|^{α−1} sgn(ξ)) f(ξ),

so that

T(F) = (α σ^α / (2m)) sup_ξ |ξ|^{α−1} e^{−(σ|ξ|)^α}.

Evaluating the value of the supremum, we obtain, for 1 < α ≤ 2,

T(F) = (α σ / (2m)) ((α − 1)/(e α))^{(α−1)/α}.

For α = 1 the distribution reduces to a Cauchy distribution with scale parameter σ and shift parameter m. In this case

T(F) = σ/(2m).

An Interesting Case: The Uniform Distribution
The uniform distribution on the interval (−a, a), with a > 0, is characterized by the Fourier transform

f(ξ) = sin(aξ)/(aξ).    (33)

Hence, if X is a random variable uniformly distributed on (−a, a), for any constant b > 0 the variable X + b is uniformly distributed on the interval (−a + b, a + b), and the Fourier transform of its probability measure, of mean value b, is given by h(ξ) = e^{−ibξ} sin(aξ)/(aξ). Then

T(X + b) = (1/2) sup_ξ | h(ξ) − h′(ξ)/h′(0) | = (1/(2b)) sup_ξ |f′(ξ)|.

Next, since f is expressed by (33),

f′(ξ) = (aξ cos(aξ) − sin(aξ)) / (aξ²),

which implies

sup_ξ |f′(ξ)| = a δ_u,

where δ_u is a positive constant. Hence, if X is uniformly distributed on the interval (−a, a), and b > 0,

T(X + b) = (δ_u/2) (a/b).

In particular, if b > a, by setting α = b − a and β = b + a, we conclude that, if Y is a random variable uniformly distributed on an interval (α, β) ⊂ R_+, it holds

T(Y) = (δ_u/2) (β − α)/(β + α).

In this case, at difference with the Gini index, which takes the explicit value

G(Y) = (1/3) (β − α)/(β + α),

the value of the coefficient δ_u can be achieved only numerically. It is however interesting to remark that, in the case of a uniform distribution, the values of the two indices have deep similarities.
A rough estimation of the constant δ_u follows by studying the function

u(x) = (x cos x − sin x)/x².

It is easy to show that any extremal point x̄ of the function u(x) solves the equation

x̄² sin x̄ = −2 (x̄ cos x̄ − sin x̄).

Consequently, if x̄ is an extremal point of u(x),

|u(x̄)| = |sin x̄|/2 ≤ 1/2.

Hence δ_u ≤ 1/2. To end this Section, we list in Table 1 the values of the inequality index T for some probability measures in R_+ and R allowing explicit computations. It is remarkable that the Fourier-based index T is well-adapted to compute the heterogeneity index of discrete probability measures, such as the negative binomial distribution or the geometric distribution, which are explicitly expressible in terms of the Fourier transform. We leave the details of the evaluation to the reader. Table 1. Values of the index T for some probability measures.

Measure          Density                   Fourier Transform      Index T(·)
Exponential      (1/m) e^{−x/m}, x ≥ 0     1/(1 + imξ)            1/4
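The constant δ_u entering the uniform case is characterized only numerically. A grid-search sketch (our own code, assuming, as suggested by the expression of f′ above, that δ_u = sup_x |(x cos x − sin x)/x²|) confirms the bound δ_u ≤ 1/2:

```python
import numpy as np

x = np.linspace(1e-6, 200.0, 2_000_001)
u = (x * np.cos(x) - np.sin(x)) / x**2      # the function whose sup defines delta_u
delta_u = np.max(np.abs(u))
x_star = x[np.argmax(np.abs(u))]            # location of the maximum
```

On this grid the maximum sits near x ≈ 2.1, with δ_u comfortably below 1/2, in agreement with the estimate obtained from the extremal-point equation.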

An Application to Kinetic Theory of Wealth Distribution
Kinetic modelling of agent-based markets is based on a few universal assumptions [28]. First, agents are indistinguishable, so that an agent's state at any instant of time t ≥ 0 is completely characterized by his current wealth w ≥ 0. Second, the time variation of the wealth distribution is entirely due to binary trades between agents. A trade represents a binary interaction in which part of the money of each agent is modified according to well-defined rules. When two agents engage in a trade, their pre-trade wealths v, w change into the post-trade wealths v*, w* according to a linear exchange rule

v* = p₁ v + q₁ w;  w* = q₂ v + p₂ w.

The interaction coefficients p_i and q_i, i = 1, 2, are, in general, non-negative random parameters. The first explicit description of a binary wealth-exchange model dates back to the seminal work of Angle [34] (cf. also [35]), even if the intimate relation to statistical mechanics was only described about a decade later [36,37]. In each binary interaction, winner and loser are randomly chosen, and the loser pays a random fraction of his wealth to the winner. From here, Chakraborti and Chakrabarti [38] developed the class of strictly conservative exchange models, which preserve the total wealth in each individual trade,

v* + w* = v + w.    (37)

In its most basic version, the microscopic interaction is determined by one single parameter λ ∈ (0, 1), which is the global saving propensity. In the interactions, each agent retains the fraction λ of its pre-trade wealth, while the rest, (1 − λ)(v + w), is shared equally between the two trading partners,

v* = λ v + (1 − λ)(v + w)/2;  w* = λ w + (1 − λ)(v + w)/2.

The wealth distribution f(v, t) of the system of agents coincides with the agents' density and satisfies the associated spatially homogeneous Boltzmann equation

∂f(v, t)/∂t = Q₊(f, f)(v, t) − f(v, t),    (38)

posed on the real half-line v ≥ 0. The collisional gain operator Q₊ acts on test functions ϕ(v) as

∫_{R₊} ϕ(v) Q₊(f, f)(v) dv = ∫_{R₊} ∫_{R₊} ϕ(v*) f(v) f(w) dv dw.

Because of (37), the average wealth of the society is conserved in time, so that

∫_{R₊} v f(v, t) dv = m,

where m > 0 is finite.
A useful way of writing Equation (38) is to resort to the Fourier transform, in which it takes the form

∂f̂(ξ, t)/∂t + f̂(ξ, t) = f̂(κξ, t) f̂((1 − κ)ξ, t),  κ = (1 + λ)/2.

On the other hand, by scaling invariance T(X(t)) = T(Y(t)) = T(Z(t)), where the probability measure of Z(t) has Fourier transform f̂(ξ, t). Hence, resorting to the sub-additivity property (25), for any given t₀ < t the Gronwall inequality implies [28] T(Z(t)) ≤ T(Z(t₀)), and, consequently, the monotonicity in time of the inequality index T(F(t)) of the probability measure solution to the kinetic Equation (38). It is remarkable that this result, which does not require the condition s > 1, is a direct consequence of the convolution property of the inequality index T. Hence, an analogous monotonicity result is not available if we resort to the Gini and Pietra indices.
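The monotonicity of T along the conservative trade dynamics can be observed in a toy Monte Carlo simulation. The sketch below is entirely our own illustrative setup (agent number, saving propensity, trade count and the empirical characteristic-function estimator of T are all assumptions); it checks that total wealth is conserved by the trades and that the empirical index decreases:

```python
import numpy as np

def T_empirical(w, n_xi=2_001, xi_max=2 * np.pi):
    """T index computed from the empirical Fourier transform of a wealth sample w."""
    xi = np.linspace(-xi_max, xi_max, n_xi)
    phase = np.exp(-1j * np.outer(xi, w))       # e^{-i xi w_k}
    f = phase.mean(axis=1)                      # empirical Fourier transform f(xi)
    fp = (phase * (-1j * w)).mean(axis=1)       # empirical derivative f'(xi)
    return 0.5 * np.max(np.abs(f - fp / (-1j * w.mean())))

rng = np.random.default_rng(0)
lam, N = 0.7, 1_000                             # saving propensity, number of agents
w = rng.exponential(1.0, N)                     # initial wealths, mean 1
total0, T0 = w.sum(), T_empirical(w)
for _ in range(10_000):                         # random binary conservative trades
    i, j = rng.choice(N, size=2, replace=False)
    shared = 0.5 * (1 - lam) * (w[i] + w[j])    # (1 - lam)(v + w)/2, split equally
    w[i], w[j] = lam * w[i] + shared, lam * w[j] + shared
total1, T1 = w.sum(), T_empirical(w)
```

Since each trade conserves v + w exactly, the empirical mean is constant, while repeated trades concentrate the wealth distribution around the mean, so the empirical index decreases.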

Conclusions
Inequality indices are quantitative scores that take values in the unit interval, with the zero score characterizing perfect equality. Measuring the statistical heterogeneity of measures arises in most fields of science and engineering, which makes it important to know the strengths and possible weaknesses of heterogeneity measures in applications [1,2,[4][5][6][7]10]. In this paper, we draw attention to a new inequality index, based on the Fourier transform, which exhibits a number of interesting properties that make it very promising in applications. In comparison with the well-known and widely used Gini index, which can also be expressed by resorting to the Fourier transform, the new index T makes it possible to compute explicitly the heterogeneity of various probability measures, such as the Poisson distribution, whose heterogeneity cannot be measured explicitly by resorting to the Gini index. Moreover, this new Fourier-based index has an interesting property of sub-additivity for convolutions, which in principle makes it interesting for applications to models of kinetic theory which contain mass and mean preserving bilinear operators [28].

Appendix A
This Appendix is devoted to showing that, for F ∈ P_s^+(R),

H(F) ≤ 1.    (A1)

Let us first consider a two-valued random variable X of mean value m, taking the values m − a and m + b, with 0 < a ≤ m, b > 0, and

P(X = m − a) = p;  P(X = m + b) = 1 − p.

Since X has mean value m, a and b are related to p by the relation

pa = (1 − p)b.    (A3)

In this case, it is a simple exercise to verify that

∫_R (F(x) − F_m(x))² dx = p² a + (1 − p)² b = pa,

so that, thanks to (A3), H(F) = pa/m.
Therefore, since a ≤ m and p < 1, we conclude that H(F) < 1. An interesting application of the previous expression is obtained by assuming a = m and p = 1 − ε, with 0 < ε ≪ 1. In this case the random variable X of mean value m is such that

P(X = 0) = 1 − ε;  P(X = m/ε) = ε.

In economics, this situation describes a population in which most of the agents have zero wealth, while a small part possesses an extremely high wealth, the mean wealth remaining fixed. In this case H(F) = 1 − ε. Let us now consider a random variable X of mean value m that takes three non-negative values x₁ < x₂ < m < x₃. Replacing the two values x₁ and x₂ with a suitably chosen value x ∈ (x₁, x₂), in such a way that the mean remains equal to m, produces a two-valued random variable with measure G such that

H(F) ≤ H(G) < 1.

The same conclusion holds if we consider a random variable X of mean value m that takes the three non-negative values x₁ < m < x₂ < x₃, and we choose the value x ∈ (x₂, x₃) as in (A4). The previous computations show that, by suitably choosing the point x, we can build, starting from a random variable taking three values, a random variable taking two values, with the same mean and with a larger value of the functional H, which by the previous computations is less than 1. At this point, we can iterate the procedure and conclude that the upper bound in (A1) holds for the measure function F ∈ P_s^+(R) of any discrete random variable X, and finally for any F ∈ P_s^+(R).