Stochastic Order and Generalized Weighted Mean Invariance

In this paper, we present theoretical order-invariance results for weighted quasi-arithmetic means of a monotonic series of numbers. The quasi-arithmetic mean, or Kolmogorov–Nagumo mean, generalizes the classical mean and appears in many disciplines, from information theory to physics, from economics to traffic flow. Stochastic orders are defined on weights (or equivalently, discrete probability distributions). They were introduced to study risk in economics and decision theory, and recently have found utility in Monte Carlo techniques and in image processing. We show in this paper that, if two distributions of weights are ordered under first stochastic order, then for any monotonic series of numbers their weighted quasi-arithmetic means share the same order. This means, for instance, that the arithmetic and harmonic means for two different distributions of weights always have to be aligned if the weights are stochastically ordered, that is, either both means increase or both decrease. We explore the invariance properties when convex (concave) functions define both the quasi-arithmetic mean and the series of numbers, we show their relationship with increasing concave order and increasing convex order, and we observe the important role played by a newly defined mirror property of stochastic orders. We also give some applications to entropy and cross-entropy and present an example of the multiple importance sampling Monte Carlo technique that illustrates the usefulness and transversality of our approach. Invariance theorems are useful when a system is represented by a set of quasi-arithmetic means and we want to change the distribution of weights so that all means evolve in the same direction.


Introduction and Motivation
Stochastic orders [1,2] are orders defined in probability theory and statistics to quantify the concept of one random variable being bigger or smaller than another. Discrete probability distributions, also called probability mass functions (pmf), are n-tuples of non-negative values that add up to 1, and can thus be interpreted in several ways, for instance, as weights in the computation of moments of the discrete random variable described by the pmf, or as equivalence classes of compositional data [3]. Stochastic orders have found application in decision and risk theory [4], and in economics in general, among many other fields [2]. Some stochastic orders have been defined based on order invariance: two pmf's are ordered when the arithmetic means of any increasing sequence of real numbers weighted with the corresponding pmf's are ordered in the same direction. This raises the question of whether this invariance might also hold for other kinds of means beyond the arithmetic mean.
The quasi-arithmetic means, also called Kolmogorov or Kolmogorov–Nagumo means, are ubiquitous in many branches of science [5]. They have the expression g^{-1}(∑_{k=1}^{M} α_k g(b_k)), where g(x) is a real-valued strictly monotonic function, {b_k}_{k=1}^{M} is a sequence of reals, and {α_k}_{k=1}^{M} is a set of weights with ∑_{k=1}^{M} α_k = 1. This family of means comprises the usual means: arithmetic, g(x) = x, ∑_{k=1}^{M} α_k b_k; harmonic, g(x) = 1/x, (∑_{k=1}^{M} α_k b_k^{-1})^{-1}; and, in general, the power means, g(x) = x^p, (∑_{k=1}^{M} α_k b_k^p)^{1/p}. For a long time, economists have discussed the best mean for a problem [6]. The harmonic mean is used for the price–earnings ratio, and power means are used to represent the aggregate labor demand and its corresponding wage [7], and the constant elasticity of substitution (CES) [8]. Yoshida [9,10] has studied the invariance under quasi-arithmetic means with function g(x) increasing and for utility functions. In information theory, Alfred Rényi [11] defined axiomatically the entropy of a probability distribution as a Kolmogorov mean of the information log(1/p_k) conveyed by result k with probability p_k, and recently, Americo et al. [12] defined conditional entropy based on quasi-arithmetic means. In physics, the equivalent spring constant of springs combined in series is obtained as the harmonic mean of the individual spring constants, and in parallel, as their arithmetic mean [13], while the equivalent resistance of resistors combined in parallel is obtained as the harmonic mean of the individual resistances, and in series, as their arithmetic mean [14]. In traffic flow [15], the arithmetic and harmonic means of the speed distribution are used. In [16], both the geometric and harmonic means are used in addition to the arithmetic mean to improve noise source maps.
In our recent work on inequalities for generalized, quasi-arithmetic weighted means [17], we found some invariance properties that depend on the particular relationship considered between the sequences of weights. These relationships between weights define first stochastic order and likelihood ratio order. Their application to multiple importance sampling (MIS), a Monte Carlo technique, has been presented in [18], their application to cross-entropy in [19], and in [20] applications to image processing, traffic flow, and income distribution have been shown. In [21], the invariance results on products of distributions of independent scalar r.v.'s [22] were generalized to any joint distribution of a 2-dimensional r.v.
In this paper, we show that the order invariance is a necessary and sufficient condition for first stochastic order, and that it holds under any quasi-arithmetic mean. We also study invariance under the second stochastic order, likelihood ratio, hazard-rate, and increasing convex stochastic orders. The fact that the invariance results hold for both increasing and decreasing monotonic functions allows us to work with both utilities and liabilities (represented by efficiencies and expected errors, respectively) when looking for an optimal solution: for liabilities we seek the minimum expected error, while for utilities we seek the maximum efficiency.
The rest of the paper is organized as follows. In Section 2, we introduce the stochastic order; in Section 3, the arithmetic mean and its relationship with stochastic order. In Section 4, we present the invariance theorems; in Section 5, we discuss the invariance for concave (convex) functions; in Section 6, its application to stochastic orders; in Section 7, we present an example based on the linear combination of Monte Carlo estimators. Finally, conclusions and future work are given in Section 8.

Stochastic Orders
Stochastic orders are pre-orders (i.e., binary relations holding the reflexive and transitive properties) defined on probability distributions with finite support. Note that, equivalently, one can think of sequences (i.e., ordered sets) of non-negative weights/values that sum up to one. Observe that any sequence {α_k} of M positive numbers such that ∑_{k=1}^{M} α_k = 1 can be considered a probability distribution. It can be seen too as an element of the (M−1)-simplex, i.e., {α_k} ∈ ∆^{M−1}. While several interpretations hold, and hence increase the range of applicability, in the remainder of this paper we will talk of sequences without any loss of generality.
Notation. We use the symbols ≺, ≻ to represent orders between two sequences {α_k} and {α'_k} of size M, and we write {α'_k} ≻ {α_k} or, equivalently, {α_k} ≺ {α'_k}. We will denote the elements of the sequences without the curly brackets, e.g., the first element of the sequence {α_k} is denoted as α_1, while the last one is α_M. Moreover, the first and last elements of the sequences receive special attention: when α'_1 ≤ α_1, we can write this order as {α_k} ≺_F {α'_k} (where F stands for first), and whenever α'_M ≥ α_M, we can write {α_k} ≺_L {α'_k} (where L stands for last). The case with both α'_1 ≤ α_1 and α'_M ≥ α_M we denote as {α_k} ≺_FL {α'_k}. The orders ≺_F, ≺_L, ≺_FL are superorders of the first stochastic dominance order, {α_k} ≺_FSD {α'_k}, which will be studied in Section 6. We denote as {α_{M−k+1}} the sequence with the same elements of {α_k} but in reversed order.
Property 1 (Mirror property). One desirable property of stochastic orders is that, when reversing the ordering of the sequence elements, the stochastic order should be reversed as well, i.e., if {α_k} ≺ {α'_k}, then {α'_{M−k+1}} ≺ {α_{M−k+1}}. This is similar to the invariance of physical laws to the right and left hand. We call this property the mirror property.

Definition 1 (Mirror property).
We say that a stochastic order ≺ has the mirror property if {α_k} ≺ {α'_k} implies {α'_{M−k+1}} ≺ {α_{M−k+1}}. Observe that the simple orders defined before, ≺_F, ≺_L, do not hold this property, but ≺_FL does hold it. We will see in Section 6 that the usual stochastic orders do hold the mirror property. However, an order that is insensitive to the permutation of the elements of a sequence, like majorization or Lorenz order, does not hold the mirror property.
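These conditions are easy to check computationally. The following sketch (our own illustration, not code from the paper; the helper names are ours) tests the tail-sum condition that characterizes first stochastic dominance in Section 6, together with the mirror property:

```python
# Illustrative sketch (ours): the order defined by dominating tail sums,
# and a check of the mirror property on a small example.

def tail_sums(a):
    """Tail sums sum(a[l:]) for l = 0, ..., len(a)-1."""
    return [sum(a[l:]) for l in range(len(a))]

def stoch_leq(a, b):
    """True when {a} precedes {b}: every tail sum of b dominates that of a."""
    return all(ta <= tb + 1e-12 for ta, tb in zip(tail_sums(a), tail_sums(b)))

a = [0.5, 0.3, 0.2]   # more mass on small indices
b = [0.2, 0.3, 0.5]   # more mass on large indices

assert stoch_leq(a, b) and not stoch_leq(b, a)
# Mirror property: reversing both sequences reverses the order.
assert stoch_leq(b[::-1], a[::-1])
```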

Quasi-Arithmetic Mean
Stochastic orders are usually defined by invariance to arithmetic mean [1,2] (see Section 6), and we want to investigate in this paper invariance to more general means. We define here the kind of means we are interested in.
Definition 2 (Quasi-arithmetic or Kolmogorov or Kolmogorov–Nagumo mean). A quasi-arithmetic weighted mean (or Kolmogorov mean) M({b_k}, {α_k}) of a sequence of real numbers {b_k}_{k=1}^{M} is of the form g^{-1}(∑_{k=1}^{M} α_k g(b_k)), where g(x) is a real-valued, invertible, strictly monotonic function, with inverse function g^{-1}(x), and {α_k}_{k=1}^{M} are positive weights such that ∑_{k=1}^{M} α_k = 1.
Examples of such means are the arithmetic weighted mean (g(x) = x), the harmonic weighted mean (g(x) = 1/x), the geometric weighted mean (g(x) = log x) and, in general, the weighted power means (g(x) = x^r).
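As a quick illustration (a minimal sketch of ours, not code from the paper), the quasi-arithmetic mean can be computed directly from its definition for several choices of g(x):

```python
# Sketch (ours) of the quasi-arithmetic mean g^{-1}(sum_k a_k * g(b_k)).
import math

def quasi_arithmetic_mean(b, a, g, g_inv):
    """g^{-1}(sum a_k * g(b_k)); the weights a are assumed to sum to 1."""
    return g_inv(sum(ak * g(bk) for ak, bk in zip(a, b)))

b = [1.0, 2.0, 4.0]
a = [0.2, 0.3, 0.5]

arithmetic = quasi_arithmetic_mean(b, a, lambda x: x, lambda x: x)
harmonic   = quasi_arithmetic_mean(b, a, lambda x: 1 / x, lambda x: 1 / x)
geometric  = quasi_arithmetic_mean(b, a, math.log, math.exp)

# For these weights: 0.2*1 + 0.3*2 + 0.5*4 = 2.8
assert abs(arithmetic - 2.8) < 1e-12
# Classical mean inequality: harmonic <= geometric <= arithmetic.
assert harmonic <= geometric <= arithmetic
```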
Given a distribution {p_k}, Shannon entropy, H({p_k}) = −∑_k p_k log p_k, and Rényi entropy, R_β({p_k}) = (1/(1−β)) log ∑_k p_k^β, can be considered as quasi-arithmetic means of the sequence {log(1/p_k)} with weights {p_k}: Shannon entropy with g(x) = x (arithmetic mean or expected value), and Rényi entropy with g(x) = 2^{(1−β)x} [11]. Tsallis entropy, T_q({p_k}) = (1/(q−1))(1 − ∑_k p_k^q), can be considered the weighted arithmetic mean of the sequence {ln_q(1/p_k)} with weights {p_k}, where ln_q x = (x^{1−q} − 1)/(1−q) is the q-logarithm function [23]. Without loss of generality, we consider from now on that {b_k = f(k)}, where f(x) is a real-valued function.

Lemma 1. Given two sequences of weights {α_k} and {α'_k}, the following conditions are equivalent:
(a) for g(x) increasing, for any increasing function f(x) the following inequality holds: ∑_{k=1}^{M} α_k g(f(k)) ≤ ∑_{k=1}^{M} α'_k g(f(k));
(a') for g(x) decreasing, for any increasing function f(x) the following inequality holds: ∑_{k=1}^{M} α_k g(f(k)) ≥ ∑_{k=1}^{M} α'_k g(f(k));
(c) the following inequalities hold: for all 1 ≤ l ≤ M, ∑_{k=l}^{M} α_k ≤ ∑_{k=l}^{M} α'_k;
(d) the following inequalities hold: for all 1 ≤ l ≤ M, ∑_{k=1}^{l} α_k ≥ ∑_{k=1}^{l} α'_k.
Proof. Indirect or partial proofs can be found in [17,20]. We provide a complete proof in the Appendix A.
Note. Observe that in Lemma 1, it is sufficient to consider the monotonicity of the sequences {f(k)}_{k=1}^{M}. Furthermore, Lemma 1 can be extended to any real sequences {y_k} and {x_k} such that ∑_{k=1}^{M} y_k = ∑_{k=1}^{M} x_k. It is enough to observe that the order of all the inequalities is unchanged by adding a positive constant, so that {y_k} and {x_k} can be made positive, and is also unchanged by multiplication by a positive constant, so that the resulting {y_k} and {x_k} sequences can be normalized.
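The shift-and-scale reduction in the Note can be sketched as follows (illustrative code of ours; `normalize_pair` is a hypothetical helper name):

```python
# Sketch (ours) of the Note's reduction: two real sequences with a common sum
# are shifted by the same constant to become positive, then scaled by the same
# factor to sum to 1; both operations preserve the direction of inequalities.

def normalize_pair(y, x):
    shift = max(0.0, -min(min(y), min(x))) + 1.0
    total = sum(v + shift for v in y)   # equals the total for x, as the sums agree
    wy = [(v + shift) / total for v in y]
    wx = [(v + shift) / total for v in x]
    return wy, wx

y = [-1.0, 0.0, 3.0]   # sum = 2
x = [0.0, 1.0, 1.0]    # sum = 2
wy, wx = normalize_pair(y, x)
assert all(v > 0 for v in wy + wx)
assert abs(sum(wy) - 1.0) < 1e-12 and abs(sum(wx) - 1.0) < 1e-12
```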

Theorem 1. Given two distributions {α_k}, {α'_k} and a quasi-arithmetic mean M, the following propositions are equivalent:
(a) for any increasing function f(x), M({f(k)}, {α_k}) ≤ M({f(k)}, {α'_k});
(b) for any decreasing function f(x), M({f(k)}, {α_k}) ≥ M({f(k)}, {α'_k});
(c) Condition (c) of Lemma 1 holds.
Proof. It is a direct consequence of Lemma 1 and the definition of quasi-arithmetic mean, observing that the inverse g −1 (x) of a strictly monotonic increasing (respectively decreasing) function g(x) is also increasing (respectively decreasing).

Invariance
Theorem 2 (Invariance). Given two distributions {α_k}, {α'_k}, and two quasi-arithmetic means M, M', the following propositions are equivalent:
(a) for all increasing functions f(x), M({f(k)}, {α_k}) ≤ M({f(k)}, {α'_k});
(b) for all increasing functions f(x), M'({f(k)}, {α_k}) ≤ M'({f(k)}, {α'_k}).
Proof. It is a direct consequence of the observation that conditions (c) and (d) in Lemma 1 do not depend on any particular g(x) function considered, and thus the order of the inequalities does not change with the mean considered as long as {α_k} and {α'_k} are kept fixed.
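Theorem 2 can be checked numerically: with two weight sequences satisfying condition (c) of Lemma 1, the two means stay ordered in the same direction for every Kolmogorov function g(x) we try. A small sketch (ours, not the paper's code):

```python
# Sketch (ours): with tail-sum-ordered weights, the order of the two weighted
# quasi-arithmetic means is the same for every Kolmogorov function g tried.
import math

def qmean(b, a, g, g_inv):
    return g_inv(sum(ak * g(bk) for ak, bk in zip(a, b)))

alpha  = [0.5, 0.3, 0.2]   # more mass on low indices
alpha2 = [0.2, 0.3, 0.5]   # more mass on high indices (dominating tail sums)
f = [1.0, 3.0, 9.0]        # an increasing sequence f(k)

gs = [(lambda x: x, lambda x: x),        # arithmetic
      (lambda x: 1 / x, lambda x: 1 / x),  # harmonic
      (math.log, math.exp),              # geometric
      (lambda x: x * x, math.sqrt)]      # quadratic

for g, g_inv in gs:
    assert qmean(f, alpha, g, g_inv) <= qmean(f, alpha2, g, g_inv)
```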
The following properties relate stochastic order with the quasi-arithmetic mean. Let I be the set of monotonic functions, I_< the set of increasing functions, and I_> the set of decreasing functions.
Definition 3 (Preserve mean order property). We say that a stochastic order ≺ preserves mean order for a given mean M and a set I'_< ⊂ I_< of increasing functions when, for all functions f(x) ∈ I'_< and any distributions {α_k}, {α'_k}, {α_k} ≺ {α'_k} implies M({f(k)}, {α_k}) ≤ M({f(k)}, {α'_k}).

Definition 4 (Preserve inverse mean order property). We say that a stochastic order ≺ preserves inverse mean order for a given mean M and a set I'_> ⊂ I_> of decreasing functions when, for all functions f(x) ∈ I'_> and any distributions {α_k}, {α'_k}, {α_k} ≺ {α'_k} implies M({f(k)}, {α_k}) ≥ M({f(k)}, {α'_k}).

Theorem 2, together with the preserve mean order properties, allows us to state the following invariance property:

Theorem 3 (Preserve mean order invariance). Given a stochastic order that preserves mean order (respectively, preserves inverse mean order) for a given mean and for I'_< (respectively, for I'_>), then for any mean it preserves both mean order for I'_< and inverse mean order for I'_>. In other words, the preserve mean order properties are invariant with respect to the mean considered.
Observe that, from Lemma 1 and Theorems 1 and 3, a necessary and sufficient condition for an order to preserve mean order for I_< (or to preserve inverse mean order for I_>) is that Equation (6) holds, independently of the mean considered. We will see in Section 6 that this corresponds to first stochastic dominance order.

Concavity and Convexity
Let us consider now I_<^v ⊂ I_<, the set of all increasing concave functions; I_<^x ⊂ I_<, the set of all increasing convex functions; I_>^v ⊂ I_>, the set of all decreasing concave functions; and I_>^x ⊂ I_>, the set of all decreasing convex functions. The following theorem relates the preserve mean order properties with the mirror property.

Theorem 4.
If an order holds the mirror property, then holding preserve mean order for I_<^v (respectively, I_<^x) implies holding preserve inverse mean order for I_>^v (respectively, I_>^x), and vice versa.
Proof. It follows from the identity M({f(k)}, {α_{M−k+1}}) = M({f(m(k))}, {α_k}), where m(x) = M − x + 1, and because, if f(x) is decreasing and concave (respectively, convex), then f(m(x)) is increasing and concave (respectively, convex).
The following result is necessary to prove Lemma 2.
Theorem 5. If M({f(k)}, {α_k}) ≤ M({f(k)}, {α'_k}) holds for any g(x) strictly increasing and convex (respectively, concave), then it holds for any g(x) strictly decreasing and concave (respectively, convex). If it holds for any g(x) strictly decreasing and convex (respectively, concave), then it holds for any g(x) strictly increasing and concave (respectively, convex).
Proof. Consider the quasi-arithmetic mean with Kolmogorov function g'(x) = −g(x), which defines the same mean, since a quasi-arithmetic mean is invariant under non-degenerate affine transformations of its Kolmogorov function. When g(x) is increasing, g'(x) is decreasing, and vice versa; when g(x) is convex, g'(x) is concave, and vice versa.

The following lemma is needed to prove how the invariance properties for one mean extend to other means.
Lemma 2. Consider the following equations:
∑_{k=1}^{M} α_k g(f(k)) ≤ ∑_{k=1}^{M} α'_k g(f(k))  (20)
and
∑_{k=1}^{M} α'_{M−k+1} g(f(k)) ≤ ∑_{k=1}^{M} α_{M−k+1} g(f(k)).  (21)
Then, for each line in Table 1, and for g(x), f(x) fulfilling the conditions in the first and second columns of Table 1 and holding Equations (20) and (21), Equations (20) and (21) hold too for g'(x), f'(x) fulfilling the conditions in columns three and four.
Additionally, for each line in Table 1, changing f'(x) in column four from increasing to decreasing, Equations (20) and (21) hold with the inequalities reversed.

Table 1. For each line, for g(x), f(x) fulfilling the conditions in the first and second columns, Equations (20) and (21) hold for g', f' fulfilling the conditions in columns three and four. By changing f' from increasing to decreasing, the reverses of Equations (20) and (21) hold. ICX: convex and increasing; ICV: concave and increasing; DCX: convex and decreasing; DCV: concave and decreasing.

Line  g(x)  f(x)  g'(x)  f'(x)
1     ICX   ICV   ICV    ICV
1'    "     "     DCX    ICV
2     DCX   ICX   DCV    ICX
2'    "     "     ICX    ICX
3     ICV   ICX   ICX    ICX
3'    "     "     DCV    ICX
4     DCV   ICV   DCX    ICV
4'    "     "     ICV    ICV
5     ICX   ICV   DCV    ICX
5'    "     "     ICX    ICX
6     DCX   ICX   ICV    ICV
6'    "     "     DCX    ICV
7     ICV   ICX   DCX    ICV
7'    "     "     ICV    ICV
8     DCV   ICV   ICX    ICX
8'    "     "     DCV    ICX

Proof. The proof of lines 1–8 in Table 1 is in the Appendix B. Lines 1'–8' in Table 1 are a direct consequence of Theorem 5 applied to lines 1–8.

Theorem 6. Given an order that holds the mirror property, for each line in Table 1, if for g(x) fulfilling the condition in the first column of Table 1 the mean preserves order for f(x) in the second column, then the mean for g'(x) fulfilling the condition in column three preserves order for f'(x) fulfilling the condition in column four.
Proof. It is enough to apply Lemma 2, taking into account that if Equation (20) holds then Equation (21) holds too by the mirror property.

Corollary 1.
Consider the weighted arithmetic mean A({f(k)}, {α_k}). Given an order that holds the mirror property and preserves the order for mean A, then:
(a) If order is preserved for mean A and for I_<^v, then order is preserved for any mean with concave-increasing/convex-decreasing (respectively, concave-decreasing/convex-increasing) function g(x) and for I_<^v (respectively, for I_<^x).
(b) If order is preserved for mean A and for I_<^x, then order is preserved for any mean with convex-increasing/concave-decreasing (respectively, convex-decreasing/concave-increasing) function g(x) and for I_<^x (respectively, for I_<^v).
Proof. The arithmetic mean is a quasi-arithmetic mean with increasing function g(x) = x, which is both concave-increasing and convex-increasing, and thus Table 1 collapses to Table 2.

Table 2. For g(x) = x and f(x) fulfilling the condition in the first column, with Equations (20) and (21) holding, Equations (20) and (21) hold for g', f' fulfilling the conditions in the second and third columns.

f(x)  g'(x)  f'(x)
ICV   ICV    ICV
ICV   DCX    ICV
ICX   ICX    ICX
ICX   DCV    ICX
ICV   DCV    ICX
ICV   ICX    ICX
ICX   DCX    ICV
ICX   ICV    ICV

Functions x^p for p ≥ 1 are convex-increasing, and for p ≤ 0 are convex-decreasing, over R_{++}; e^x is convex-increasing over R, while log x and x^p for 0 < p ≤ 1 are concave-increasing over R_{++}. Affine functions are both concave and convex over R. If g(x) is convex, −g(x) is concave, and vice versa; the composition of a concave-increasing function with a concave function is concave, and that of a convex-increasing function with a convex function is convex. We will see in the next section that preserving the order for mean A for I_<^v defines second-order stochastic dominance, or increasing concave order, and preserving the order for mean A for I_<^x defines increasing convex order stochastic dominance. Both orders hold the mirror property, and thus Corollary 1 applies to both.
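Corollary 1(a) can be illustrated numerically. In the sketch below (our own code), the example distributions form a mean-preserving spread, hence they are ordered in increasing concave (second) order but not by first stochastic dominance, and the order is still preserved by the geometric mean for an increasing concave f:

```python
# Sketch (ours): increasing concave order is preserved by the geometric mean
# (g(x) = log x, concave-increasing), as in Corollary 1(a).
import math

def qmean(b, a, g, g_inv):
    return g_inv(sum(ak * g(bk) for ak, bk in zip(a, b)))

# alpha2 is a mean-preserving spread of alpha over the support {1, 2, 3}:
# the two are ordered in increasing concave order, but NOT by FSD.
alpha  = [0.0, 1.0, 0.0]
alpha2 = [0.5, 0.0, 0.5]
f = [math.sqrt(k) for k in (1, 2, 3)]   # increasing concave sequence f(k)

def arith(a):
    return qmean(f, a, lambda x: x, lambda x: x)

def geom(a):
    return qmean(f, a, math.log, math.exp)

# Order preserved both for the arithmetic and the geometric mean.
assert arith(alpha2) <= arith(alpha)
assert geom(alpha2) <= geom(alpha)
```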
A necessary and sufficient condition for first-order stochastic dominance is given by condition (c) of Lemma 1, which is equivalent to condition (d) of Lemma 1; thus {α_k} ≺_FSD {α'_k} ⇒ {α'_{M−k+1}} ≺_FSD {α_{M−k+1}}, i.e., the mirror property holds. From the definition of FSD and from Theorem 3, we can redefine FSD, {α_k} ≺_FSD {α'_k}, as: there exists a mean M such that, for any increasing function f(x), Equation (16) holds. That is, the definition of FSD is independent of the mean considered, while the original definition relies on the expected value (arithmetic mean). The mean considered can be arithmetic, harmonic, geometric, or any other quasi-arithmetic mean.
Let us consider now a strictly monotonic function g(x), and define a generalized cross-entropy GCE_g({p_i}, {q_i}) = g^{-1}(∑_i p_i g(−log q_i)). Observe that it is a quasi-arithmetic mean, and for g(x) = x we get the cross-entropy CE({p_i}, {q_i}) = −∑_i p_i log q_i. Other functions that generalize cross-entropy have been defined in the context of training deep neural networks [25]. We can state the following theorem:

Theorem 7. If {q_k} is in increasing order and {p_k} ≺_FSD {p'_k}, then GCE_g({p'_k}, {q_k}) ≤ GCE_g({p_k}, {q_k}).

Proof. Observe first that f(k) = −log q_k is a decreasing sequence. From the hypothesis, condition (c) of Lemma 1 holds; we can then apply Theorem 1 to the mean with function g(x) and for {f(k)} decreasing.
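A minimal sketch (ours) of the generalized cross-entropy, checking that g(x) = x recovers the usual cross-entropy:

```python
# Sketch (ours): GCE_g({p}, {q}) = g^{-1}(sum_i p_i * g(-log q_i)).
import math

def gce(p, q, g, g_inv):
    return g_inv(sum(pi * g(-math.log(qi)) for pi, qi in zip(p, q)))

p = [0.1, 0.3, 0.6]
q = [0.2, 0.3, 0.5]

# With g(x) = x it reduces to the usual cross-entropy -sum_i p_i log q_i.
ce = -sum(pi * math.log(qi) for pi, qi in zip(p, q))
assert abs(gce(p, q, lambda x: x, lambda x: x) - ce) < 1e-12

# Any strictly monotonic g gives another variant; with the convex g(x) = e^x,
# Jensen's inequality makes the generalized value at least the usual one.
gce_exp = gce(p, q, math.exp, math.log)
assert gce_exp >= ce
```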
The following result relates the entropy and the cross-entropy of two distributions.

Theorem 8. For any two distributions {p_i}, {q_i}, H({p_i}) ≤ CE({p_i}, {q_i}).

Proof. The Kullback–Leibler distance is always positive [26], ∑_i p_i log(p_i/q_i) ≥ 0, and thus we have that H({p_i}) = −∑_i p_i log p_i ≤ −∑_i p_i log q_i = CE({p_i}, {q_i}).
Definition 6 (Second-order stochastic dominance or increasing concave order). {α_k} ≺_SSD {α'_k} when, for any increasing concave function f(x), A({f(k)}, {α_k}) ≤ A({f(k)}, {α'_k}).

From Corollary 1, second-order stochastic dominance preserves mean order for all means M defined by a concave-increasing/convex-decreasing (respectively, concave-decreasing/convex-increasing) function g(x), and for the set of all concave-increasing functions I_<^v (respectively, convex-increasing functions I_<^x). For instance, order is preserved for the geometric mean, or any mean with g(x) = x^p, 0 < p ≤ 1, and for I_<^v; or for any mean with g(x) = x^p, p < 0, such as the harmonic mean, and for I_<^x. In particular, we can state the following theorem about the cross-entropy of two distributions.

Theorem 9. If the sequence {q_i} is increasing and concave and {p_i} ≺_SSD {p'_i}, then CE({p'_i}, {q_i}) ≤ CE({p_i}, {q_i}).

Proof. Observe that CE({p_i}, {q_i}) = −log G({q_i}, {p_i}), where G stands for the weighted geometric mean. The geometric mean is a quasi-arithmetic mean with function g(x) = log x, concave-increasing. Using Definition 6 and applying Corollary 1(a), we obtain the inequality G({q_i}, {p_i}) ≤ G({q_i}, {p'_i}), and we apply the function −log x to both members of this inequality.
When we consider f(x) convex instead of concave, we talk of the increasing convex order, ICX. This is:

Definition 7 (Increasing convex order). {α'_k} is greater in increasing convex order than {α_k}, {α_k} ≺_ICX {α'_k}, if and only if, for any increasing convex function f(x), A({f(k)}, {α_k}) ≤ A({f(k)}, {α'_k}), or equivalently, if and only if the following inequalities hold:
for all 1 ≤ l ≤ M,  ∑_{k=l}^{M} ∑_{j=k}^{M} α_j ≤ ∑_{k=l}^{M} ∑_{j=k}^{M} α'_j.  (24)

The mirror property is also immediate. From Corollary 1, the increasing convex order preserves mean order for all means M defined by a convex-increasing/concave-decreasing (respectively, convex-decreasing/concave-increasing) function g(x) and for the set of all convex-increasing functions I_<^x (respectively, concave-increasing functions I_<^v). For instance, any mean with g(x) = x^p, p ≥ 1, such as the weighted quadratic mean, with I_<^x; or g(x) = x^p, p < 0, such as the harmonic mean, with I_<^v; or the mean with g(x) = −log x, with I_<^v. In particular, we can state the following result.

Theorem 10. If the sequence {q_i} is increasing and concave and {p_i} ≺_ICX {p'_i}, then CE({p'_i}, {q_i}) ≤ CE({p_i}, {q_i}).

Proof. Observe that CE({p_i}, {q_i}) = −log G({q_i}, {p_i}), where G stands for the weighted geometric mean, which is also defined by the convex-decreasing function g(x) = −log x. Using Definition 7 and applying Corollary 1(b), we have that G({q_i}, {p_i}) ≤ G({q_i}, {p'_i}), and we apply to each member of the inequality the function −log x.

Likelihood Ratio Dominance
Definition 8 (Likelihood ratio order). {α_k} ≺_LR {α'_k} when the ratios α'_k/α_k are increasing in k, i.e., when the following condition holds:
for all 1 ≤ j ≤ k ≤ M,  α'_j α_k ≤ α_j α'_k.  (26)

As the following theorem states, likelihood ratio dominance is a sufficient condition for first stochastic dominance.

Theorem 11. If {α_k} ≺_LR {α'_k}, then {α_k} ≺_FSD {α'_k}.

Proof. There are indirect proofs in the literature [1,17]. A direct proof is in the Appendix C.
As the condition for LR order, Equation (26), is easy to check, this order comes in very handy for proving the sufficient condition for FSD order. Additionally, for the uniform distribution {1/M} and any increasing distribution {α_k}, we have that {1/M} ≺_LR {α_k}, while for any decreasing distribution {α_k} we have that {α_k} ≺_LR {1/M}. Consider Shannon entropy, H({p_k}), and Rényi entropy, R_β({p_k}), of a distribution {p_k}, which, as seen in Section 3, are quasi-arithmetic means of the sequence {log(1/p_k)} with weights {p_k}, with g(x) = x and g(x) = 2^{(1−β)x}, respectively. Without loss of generality, we can consider {p_k} in increasing order; then the sequence {log(1/p_k)} is decreasing. We have that {1/M} ≺ {p_k}, and then we can apply Theorem 3 and obtain for Shannon entropy
H({p_k}) ≤ A({log(1/p_k)}),
where A({log(1/p_k)}) = (1/M) ∑_{k=1}^{M} log(1/p_k) is the unweighted arithmetic mean of {log(1/p_k)}, and for Rényi entropy, observing that g^{-1}(x) = (1/(1−β)) log x,
R_β({p_k}) ≤ (1/(1−β)) log((1/M) ∑_{k=1}^{M} p_k^{β−1}).
For Tsallis entropy, which can be considered the weighted arithmetic mean of the sequence {ln_q(1/p_k)} with weights {p_k}, as for {p_k} in increasing order the sequence {ln_q(1/p_k)} is decreasing (the derivative of ln_q x is positive), we have, similarly to Shannon entropy,
T_q({p_k}) ≤ A({ln_q(1/p_k)}).
We can also state the following result for the generalized cross-entropy GCE_g. We say that {p_i} is comonotonic [27,28] with {q_i} when, for all 1 ≤ i, j ≤ M, q_i < q_j ⇒ p_i ≤ p_j.
Observe that comonotonicity can be written equivalently as follows: when the values q_i are sorted in increasing order, the corresponding p_i form an increasing sequence, and if we consider the sequences written in inverse order, both are decreasing.
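The likelihood ratio condition, Equation (26), and the Shannon entropy bound discussed above are both easy to verify numerically (our own sketch; natural logarithms are assumed):

```python
# Sketch (ours): checking the likelihood ratio condition and the resulting
# entropy bound H({p_k}) <= (1/M) * sum_k log(1/p_k) on a small example.
import math

def lr_leq(a, b):
    """{a} precedes {b} in likelihood ratio order: b[k]/a[k] increasing in k."""
    r = [bk / ak for ak, bk in zip(a, b)]
    return all(r[i] <= r[i + 1] + 1e-12 for i in range(len(r) - 1))

p = [0.1, 0.2, 0.3, 0.4]        # increasing distribution
M = len(p)
uniform = [1.0 / M] * M

assert lr_leq(uniform, p)       # {1/M} precedes an increasing {p_k}

shannon = -sum(pk * math.log(pk) for pk in p)
bound = sum(math.log(1.0 / pk) for pk in p) / M   # unweighted mean of log(1/p_k)
assert shannon <= bound
```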

Example: Linear Combination of Monte Carlo Techniques
When we want to estimate an integral µ = ∫ h(x) dx using MIS (multiple importance sampling) Monte Carlo methods, we have several choices of techniques, each of them with a given pdf p_k(x), 1 ≤ k ≤ M, which provide the primary estimators I_{k,1} = h(x)/p_k(x), with x sampled from p_k(x). If p_k(x) > 0 whenever h(x) > 0, then the technique is unbiased. We are interested in optimal ways of combining the techniques. One option is to linearly combine the different estimators with weights {w_k}, ∑_{k=1}^{M} w_k = 1, taking from each technique a fraction α_k of the total number of samples N. If all techniques are unbiased, the resulting combination is also unbiased. The variance is given by
V = (1/N) ∑_{k=1}^{M} (w_k²/α_k) v_k,
where v_k are the variances of the primary estimators of each technique and V is the variance of the combined MIS estimator. The optimal combination of weights, i.e., the one that leads to minimum variance, has been studied in [18,29,30].
The variance value will depend on two sets of weights, {w_k} and {α_k}, but there are cases where we can reduce it to a single set of weights. We present the following examples, where variances are taken for N = 1:
• when the sampling proportions {α_k} are fixed, the optimal variance is given by H({v_k}, {α_k}) = (∑_{k=1}^{M} α_k/v_k)^{-1}, the weighted harmonic mean (g(x) = 1/x) of {v_k};
• when we take w_k = α_k, the variance is given by A({v_k}, {α_k}) = ∑_{k=1}^{M} α_k v_k, the weighted arithmetic mean (g(x) = x) of {v_k};
• when the weights {w_k} are fixed, the optimal variance is given by (∑_{k=1}^{M} w_k √v_k)², which is the weighted power mean of {v_k} with exponent r = 1/2 (g(x) = x^{1/2}).
Observe that the variance in these three cases is a quasi-arithmetic mean.
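Taking the combined per-sample variance to be V(w, α) = ∑_k w_k² v_k/α_k (N = 1), consistent with the cases above, the two closed-form optima can be verified numerically (our own sketch; the Lagrange-multiplier solutions w_k ∝ α_k/v_k and α_k ∝ w_k √v_k are standard):

```python
# Sketch (ours): per-sample variance of the linear MIS combination and the
# closed-form optima for the two cases above.
import math

def combined_variance(w, alpha, v):
    return sum(wk * wk * vk / ak for wk, ak, vk in zip(w, alpha, v))

v = [1.0, 4.0, 9.0]                      # per-technique primary variances

# Case 1: sampling proportions alpha fixed; optimal w_k ~ alpha_k / v_k.
alpha = [0.3, 0.3, 0.4]
w = [ak / vk for ak, vk in zip(alpha, v)]
s = sum(w)
w = [wk / s for wk in w]
harmonic = 1.0 / sum(ak / vk for ak, vk in zip(alpha, v))
assert abs(combined_variance(w, alpha, v) - harmonic) < 1e-9

# Case 2: weights w fixed; optimal alpha_k ~ w_k * sqrt(v_k).
w2 = [0.2, 0.3, 0.5]
alpha2 = [wk * math.sqrt(vk) for wk, vk in zip(w2, v)]
s2 = sum(alpha2)
alpha2 = [ak / s2 for ak in alpha2]
power_half = sum(wk * math.sqrt(vk) for wk, vk in zip(w2, v)) ** 2
assert abs(combined_variance(w2, alpha2, v) - power_half) < 1e-9
```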
Let us now order {v_k} in increasing order, and let us take α_k ∝ h(v_k), with h(x) decreasing, and α'_k = 1/M. We have that {α_k} ≺_LR {α'_k = 1/M} and thus, by Theorem 11, {α_k} ≺_FSD {α'_k = 1/M}, and we can apply Theorem 2 with f(k) = v_k to the quasi-arithmetic means above. Thus, the variance is less when taking sampling proportions or coefficients decreasing in v_k than for equal sampling or equal weighting.
The same would be the case when considering a different measure of error, whenever the error of the combined techniques can also be expressed as a weighted quasi-arithmetic mean of the values of this measure for all techniques. For instance, suppose we use as measure of error the standard deviation, σ_k = √v_k:
• when we take w_k = α_k, the standard deviation is given by (∑_{k=1}^{M} α_k σ_k²)^{1/2}, the weighted root mean square or quadratic mean of {σ_k}, or weighted power mean with r = 2 (g(x) = x²);
• when the weights {w_k} are fixed, the optimal standard deviation is given by σ = ∑_{k=1}^{M} w_k σ_k, the weighted arithmetic mean of {σ_k};
• when the sampling proportions are fixed, the optimal standard deviation is given by (∑_{k=1}^{M} α_k σ_k^{-2})^{-1/2}, the weighted power mean of {σ_k} with r = −2 (g(x) = x^{-2}).
For more details, see [18].

Liabilities vs. Utilities
From the above example, we can study the relationship between utilities, which we try to maximize, and liabilities, which we try to minimize. As a liability can always be considered as the inverse of a utility, we can see from Theorem 1 that establishing invariance properties on the order between means of utilities is equivalent to doing so with liabilities, except that the order is inverted. We can define, for instance, the efficiency E = 1/V as the inverse of the variance, and obtain it as a weighted mean of the individual techniques' efficiencies, e_k = 1/v_k. Consider, for instance, the first case in the example above, where
E = H({v_k}, {α_k})^{-1} = ∑_{k=1}^{M} α_k e_k = A({e_k}, {α_k}),
where H denotes the weighted harmonic mean and A the weighted arithmetic mean. Considering now the second case, or weighted arithmetic mean,
E = A({v_k}, {α_k})^{-1} = H({e_k}, {α_k}).
For the third case,
E = (∑_{k=1}^{M} w_k e_k^{-1/2})^{-2},
which is the weighted power mean of {e_k} with r = −1/2.
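The duality between variance and efficiency in the first case can be checked directly (our sketch): the inverse of the weighted harmonic mean of the variances is exactly the weighted arithmetic mean of the efficiencies.

```python
# Sketch (ours): E = 1/V duality for the first case of the MIS example.
alpha = [0.3, 0.3, 0.4]
v = [1.0, 4.0, 9.0]
e = [1.0 / vk for vk in v]          # per-technique efficiencies

harmonic_v = 1.0 / sum(ak / vk for ak, vk in zip(alpha, v))
arithmetic_e = sum(ak * ek for ak, ek in zip(alpha, e))

# Maximizing efficiency is the same as minimizing variance.
assert abs(1.0 / harmonic_v - arithmetic_e) < 1e-12
```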

Conclusions and Future Work
We have presented in this paper the relationship between stochastic orders and quasi-arithmetic means. We have proved several ordering invariance theorems, which show that, given two distributions under a certain stochastic order, the ordering of the means is preserved for any quasi-arithmetic mean we might consider, that is, not only for the arithmetic mean (or expected value). We have shown how the results apply to first order, second order, likelihood ratio, hazard-rate, and increasing convex stochastic orders, and their application to cross-entropy. We have also presented an application example based on the linear combination of Monte Carlo estimators, and shown that the invariance allows costs or liabilities to be considered as the symmetric case of utilities.
In the future, we want to generalize our results to spatial weight matrices [31]. The rows in a spatial weight matrix are weights that give the influence of n entities over each other. Different weighted means such as arithmetic, harmonic, or geometric [32] can be used to compute this influence. We can thus apply our invariance results to each row. We will also investigate which of our results for Shannon entropy extend to Tsallis entropy too. Both Shannon and Tsallis entropy are weighted arithmetic means [23], and given {p_k} monotonic, both {log(1/p_k)} and {ln_q(1/p_k)} are monotonic too. Finally, we will investigate the invariance of the different stochastic orders under the operations of compositional data [3].

Appendix A
Proof of Lemma 1. Subtracting 1 from both sides of each inequality proves (c) ⇔ (d). To prove (a) ⇒ (c), we proceed in the following way. Consider the increasing sequence {g(f(1)), . . . , g(f(1)), g(f(M)), . . . , g(f(M))}, f(1) < f(M) (and thus g(f(1)) < g(f(M)) by the strict monotonicity of g(x)), where g(f(1)) is written l times, and denote L = α_1 + . . . + α_l, L' = α'_1 + . . . + α'_l. Since α_{l+1} + . . . + α_M = 1 − L and α'_{l+1} + . . . + α'_M = 1 − L', condition (a) gives
L g(f(1)) + (1 − L) g(f(M)) ≤ L' g(f(1)) + (1 − L') g(f(M)),
i.e., (L − L')(g(f(M)) − g(f(1))) ≥ 0, and thus L' ≤ L, which is condition (d) and hence condition (c).

Appendix B
Table A1. Different possible combinations where the concavity/convexity of g'^{-1}(h(x)) can be predicted for g(x) and f(x), f'(x) increasing. ICX: convex and increasing; ICV: concave and increasing; DCX: convex and decreasing; DCV: concave and decreasing.

1  ICV  ICV  ICV  ICV  ICV
2  DCX  DCX  DCV  ICX  DCV  ICX
3  ICV  ICX  ICX  ICX  ICX  ICX
4  DCV  DCV  DCX  ICV  DCX  ICV

Let us prove now lines 5–8 from Table 1. Consider the corresponding inequalities, where m(x) = M − x + 1, for g(x) increasing and g'(x) decreasing. The resulting combinations are collected in Table A2.

Table A2. Different possible combinations where the concavity/convexity of g'^{-1}(h(x)) can be predicted when g(x) is increasing and g'(x) decreasing (or vice versa) and f(x) is increasing. ICX: convex and increasing; ICV: concave and increasing; DCX: convex and decreasing; DCV: concave and decreasing.

5  ICV  DCV  ICX  DCX  ICV  ICV
6  DCX  DCX  ICV  ICV  DCV  DCV  ICX
7  ICV  ICX  DCX  ICV  DCV  ICX  ICX
8  DCV  DCV  ICX  ICX  DCX  DCX  ICV

Then, we can summarize the results from Tables A1 and A2 in Table 1. This proves Equation (20). The results for Equation (21) are obtained analogously. Suppose now that f(x) is decreasing and that Equations (20) and (21) hold for g'(x) and for f'(x) increasing. Then, the following inequalities hold, for g'(x) increasing:
∑_k α_k g'(f(k)) = ∑_k α_{M−k+1} g'(f(m(k))) ≤ ∑_k α'_{M−k+1} g'(f(m(k))) = ∑_k α'_k g'(f(k)).  (A8)