Stochastic Order for a Family of Multivariate Uniform Distributions

In this article we give sufficient conditions for stochastic order of multivariate uniform distributions on closed convex sets.


Introduction
Stochastic dominance has become a topic of great interest, widely studied due to its various applications. The univariate case has been carefully studied since the appearance of the first papers (see Dudley [1] and Hadar [2]), which introduced the basic criteria and definitions. For an introduction to the field, we recommend more recent books (see, for instance, Levy [3], Shaked [4] and Zbaganu [5]) that address different types of stochastic dominance and the links between them, while at the same time offering a broad perspective on the many implications of stochastic dominance in different domains: economics (see Kim et al. [6]), finance, banking, statistics, risk theory, medicine and others.
The generalization of these results to the multivariate case is justified by practical considerations, for instance in finance: an investor wants a portfolio whose return rate dominates that of a given benchmark portfolio. Post et al. [7] developed an optimization method for constructing investment portfolios that dominate a given benchmark portfolio in terms of third-degree stochastic dominance, and Petrova [8] introduced multivariate stochastic dominance constraints in a multistage portfolio optimization problem.
Stochastic comparison is also strongly related to insurance and risk theory (see Tarp [9] and Xu [10]). Denuit et al. [11] and Raducan et al. [12] present a mirror analysis of risk-seeking versus risk-averse behavior, while Jamali et al. [13] provide a comparison between different types of stochastic orderings.
Several comparison criteria have been defined, some of which require very strong assumptions on the utility functions. Our analysis relies only on the comparison of cumulative distribution functions. There is, however, a significant difference from the univariate case: if X is a multivariate random vector, then it is no longer true that P(X > x) = 1 − P(X ≤ x), as one can see in Catana [14].
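A quick numerical illustration of this difference (a toy discrete example of our own, not taken from the paper): for X uniform on the four corners of {0, 1}² and x = (0, 0), the orthant probabilities P(X ≤ x) and P(X > x) do not sum to one.

```python
from itertools import product

# X uniform on the four corners of {0, 1}^2; take x = (0, 0).
pts = list(product([0, 1], repeat=2))
x = (0, 0)

p_le = sum(all(p[i] <= x[i] for i in range(2)) for p in pts) / len(pts)  # P(X <= x)
p_gt = sum(all(p[i] >  x[i] for i in range(2)) for p in pts) / len(pts)  # P(X > x)

print(p_le, p_gt, 1 - p_le)   # 0.25 0.25 0.75: P(X > x) != 1 - P(X <= x)
```

Only the point (0, 0) lies in the lower orthant and only (1, 1) lies strictly in the upper one; the two remaining corners belong to neither, which is exactly the multivariate phenomenon mentioned above.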
Di Crescenzo et al. [15] investigated the past lifetime of a system, given that at a later time t the system is found to be failed; they performed stochastic comparisons between the random lifetimes of the single items and the doubly truncated random variable that describes the system lifetime. Bello et al. [16] introduced a family of stochastic orders, studied its main properties and compared it with other families of stochastic orders that have been proposed in the literature to compare tail risks. Toomaj et al. [17] proposed a new measure and showed that it is equivalent to the generalized cumulative residual entropy of the cumulative weighted random variable. Wang et al. [18] showed that system performance is better (worse) with stronger component heterogeneity in the parallel (series) system under the usual stochastic order and the (reversed) hazard rate order, under both dependence and independence assumptions. Also, Di Crescenzo et al. [19] gave results for stochastic comparisons of random lifetimes in a replacement model.
The structure of this article is as follows. In Section 2 we introduce notation and definitions (see, for instance, the dual stochastic domination) and recall known facts. In Section 3 we give sufficient conditions for stochastic order using affine transforms. In Section 4 we give sufficient conditions for stochastic order using decompositions. In the last section we present the conclusions.

Preliminaries
Let (Ω, F, P) be a probability space and let X : Ω → R^d be a random vector. Its distribution is the probability µ(B) = P(X ∈ B) on (R^d, B(R^d)). Throughout this article, for the random vectors X and Y we denote by µ and ν their distributions and by F and G their distribution functions; λ_d is the Lebesgue measure on (R^d, B(R^d)). The support of X (or, in terms of distributions, the support of µ) is defined as the smallest closed set K having the property that P(X ∈ K) = 1; it will be denoted by Supp X or Supp µ.

If A ⊂ R^d is bounded we denote a_* = inf A and a^* = sup A. We also denote by b(A) = proj_1(A) × ... × proj_d(A) the smallest box containing A, where proj_i(a_1, a_2, ..., a_d) = a_i are the canonical projections. It is obvious that A ⊂ b(A). We call a set A ⊂ R^d increasing iff ∀x ∈ A, y ≥ x ⇒ y ∈ A; for each c ∈ R^d, the sets [c, ∞) = {x ∈ R^d : x ≥ c} are increasing.

Definition 1. For A, B ⊂ R^d we write A ≤ B iff ∀x ∈ A ∃y ∈ B such that x ≤ y and ∀y ∈ B ∃x ∈ A such that x ≤ y.

The following well-known fact is more or less obvious.

Proposition 1. (1) Let A, B ⊂ R be compact. Then A ≤ B if and only if a_* ≤ b_* and a^* ≤ b^*. (2) Let A, B ⊂ R^d be compact. If A ≤ B then proj_j(A) ≤ proj_j(B) for all j ∈ {1, ..., d}. (3) If A_j, B_j ⊂ R are compact, j ∈ {1, ..., n}, and A_j ≤ B_j for all j, then A_1 × ... × A_n ≤ B_1 × ... × B_n.

Proof. (1) "⟹": We know that for all a ∈ A there exists b ∈ B such that a ≤ b. Let (a_n)_n be a sequence in A such that a_n → sup A and b_n ∈ B such that a_n ≤ b_n. Then sup A = lim a_n ≤ lim sup b_n ≤ sup B. In the same way, let (b_n)_n be a sequence in B such that b_n → inf B and a_n ∈ A such that a_n ≤ b_n. Then inf A ≤ lim inf a_n ≤ lim b_n = inf B.

"⟸": Now we know that a_* ≤ b_* and a^* ≤ b^*; as A and B are compact, a_*, a^* ∈ A and b_*, b^* ∈ B, so every a ∈ A satisfies a ≤ a^* ≤ b^* ∈ B and every b ∈ B satisfies b ≥ b_* ≥ a_* ∈ A, that is, A ≤ B. Claims (2) and (3) are checked similarly.
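For finite sets the relation A ≤ B can be checked mechanically; the following sketch (the helper names and the test sets are our own illustration) verifies both halves of the definition for point sets in R^d.

```python
from itertools import product

def leq(x, y):
    """Componentwise order on R^d: x <= y iff x_j <= y_j for every j."""
    return all(a <= b for a, b in zip(x, y))

def set_order(A, B):
    """A <= B: every point of A lies below some point of B, and
    every point of B lies above some point of A."""
    return (all(any(leq(a, b) for b in B) for a in A)
            and all(any(leq(a, b) for a in A) for b in B))

corners_A = list(product([0, 1], repeat=2))   # corners of [0, 1]^2
corners_B = list(product([1, 2], repeat=2))   # corners of [1, 2]^2
print(set_order(corners_A, corners_B))                 # True
print(set_order([(0, 1), (1, 0)], [(0.5, 0.5)]))       # False: (0,1) is below no point of B
```

The second call shows that A ≤ B can fail even when inf A ≤ inf B and sup A ≤ sup B, which is why part (1) of the proposition is stated for subsets of R only.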

Definition 2. Let X, Y : Ω → R^d be two random vectors. We say that X is stochastically dominated by Y, and write X ≺_st Y, if E u(X) ≤ E u(Y) for every bounded increasing u : R^d → R; we say that X is weakly dominated by Y, and write X ≺_stw Y, if F(x) ≥ G(x) for every x ∈ R^d; and we say that X is dually weakly dominated by Y if P(X > x) ≤ P(Y > x) for every x ∈ R^d. Obviously, in the unidimensional case (when d = 1) all three relations coincide. We shall use the same definitions for the distributions corresponding to our variables; more precisely, µ ≺_st ν means X ≺_st Y whenever X ∼ µ and Y ∼ ν. It is well known (see Shaked et al. [4]) that X ≺_st Y implies both weak dominations.

The first interesting case is the stochastic order between uniform distributions defined on compact sets having positive Lebesgue measure or on finite sets. We shall be interested in the connection between the assertions "Unif(A) ≺_st Unif(B)" and "A ≤ B". In the unidimensional case it is true that if X ≺_st Y and X, Y ∈ L^∞(Ω, F, P), then Supp(X) ≤ Supp(Y).
Of course, the converse cannot be true even for uniform distributions: take, for instance, A = [0, 1] and a suitable B. But if A, B are convex sets (meaning intervals) the situation changes.

Proposition 2. If A, B are compact intervals or finite intervals of integers, then A ≤ B ⟹ Unif(A) ≺_st Unif(B).

Proof. The continuous case is easy and well known. For the arithmetical case the "intervals" are now of the form {a, a + 1, ..., a + n}, and we claim that, as in the continuous case, the inequalities a_* ≤ b_* and a^* ≤ b^* imply the stochastic order; one checks this by comparing the distribution functions on each interval [a + k, a + k + 1). For sets that are not intervals of integers the claim fails, as one can easily see from situations where F(α) = 1/3 < G(α) = 1/2 for any α ∈ [a + 1, a + 2).

What remains true from these facts in the multidimensional case? At any rate, the implication X ≺_st Y ⟹ Supp(X) ≤ Supp(Y) remains true due to Strassen's theorem (see, for instance, Zbaganu [5]): if X ≺_st Y then there exist versions X′ of X and Y′ of Y (on another probability space (Ω′, F′, P′)) such that X′ ≤ Y′ and cl(X′(Ω′)) = Supp(X), cl(Y′(Ω′)) = Supp(Y).
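In the unidimensional continuous case the criterion is simply F ≥ G, so Proposition 2 can be verified numerically on a grid; the following sketch (interval endpoints chosen by us for illustration) does exactly that.

```python
def unif_cdf(t, lo, hi):
    """Distribution function of Unif([lo, hi])."""
    return min(1.0, max(0.0, (t - lo) / (hi - lo)))

def st_dominated(A, B, grid=2000):
    """Unif(A) <=_st Unif(B) for compact intervals A, B in R:
    equivalent (in 1D) to F(t) >= G(t) for all t; checked on a grid."""
    lo = min(A[0], B[0]); hi = max(A[1], B[1])
    ts = [lo + (hi - lo) * k / grid for k in range(grid + 1)]
    return all(unif_cdf(t, *A) >= unif_cdf(t, *B) for t in ts)

print(st_dominated((0, 1), (0.5, 2)))   # True:  0 <= 0.5 and 1 <= 2
print(st_dominated((0, 2), (1, 1.5)))   # False: sup A = 2 > 1.5 = sup B
```

The two calls match the proposition: the order holds exactly when both endpoint inequalities a_* ≤ b_* and a^* ≤ b^* hold.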
As a method which sometimes works we have:

Lemma 1. If there exist versions X′ of X and Y′ of Y such that X′ ≤ Y′, then X ≺_st Y.

Proposition 3. If X ≺_st Y and A = Supp(X), B = Supp(Y) are compact, then A ≤ B.

Proof. Take the versions given by Strassen's theorem; we can further modify the random variables X and Y on null sets such that X ≤ Y everywhere. Then P(X ≤ b) > 0, thus there exists a = X(ω′) ∈ Int(A), ω′ ∈ {ω : X(ω) ≤ b}, such that a ≤ b. But A, B are compact. Then, using a standard argument, for all a ∈ A there exists b ∈ B such that a ≤ b and for all b ∈ B there exists a ∈ A such that a ≤ b, in other words A ≤ B.
As for the converse implication, A ≤ B ⟹ Unif(A) ≺_st Unif(B), we know that in general it fails to be true even in the unidimensional case, but it is verified if A, B are intervals. The analogs of intervals in higher dimensions are either the boxes I_1 × I_2 × ... × I_d or the convex sets.
In the case of boxes the implication holds (see Zbaganu [5]).

Proposition 4. If A, B ⊂ R^d are boxes and A ≤ B, then Unif(A) ≺_st Unif(B).

Proof. Suppose that A ≤ B. According to Proposition 1, proj_j(A) ≤ proj_j(B) for every j, and the claim follows from the unidimensional case applied coordinatewise.

In general, however, A ≤ B does not imply even the weak stochastic order.

Counterexample 1. Let A, B ⊂ R^2 be … Then A, B are closed convex sets and A ≤ B, but it is not true that Unif(A) ≺_stw Unif(B).

Proof. It is obvious that the sets verify the required properties. Let f be … Then one can notice that f(a, a) > 0; for instance, f(1/2, 1/2) = 0.02. Thus Unif(A) ⊀_stw Unif(B). Even worse, it is not even true that X_j ≺_stw Y_j, j = 1, 2.

Stochastic Orders between Multivariate Uniform Distributions via Affine Transforms
We give sufficient conditions such that Unif(A) ≺_st Unif(φ(A)). An obvious candidate for φ is a smooth mapping with φ(x) ≥ x for all x ∈ A.

Lemma 2. If X ∼ Unif(A) and ϕ : R^d → R^d is smooth, then ϕ(X) is distributed Unif(ϕ(A)) iff the Jacobian of ϕ is constant.

Proof. Let f_X, f_{ϕ(X)} be the density functions of X and ϕ(X); then f_{ϕ(X)}(y) = f_X(ϕ^{−1}(y)) · |J_{ϕ^{−1}}(y)|, which is constant on ϕ(A) precisely when the Jacobian is constant.
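A quick numerical sanity check of the "constant Jacobian" direction of the lemma (the particular affine map is our own choice): an affine ϕ has constant Jacobian, so ϕ(X) should again be uniform on ϕ(A), and equal-area subsets of the image should receive equal empirical mass.

```python
import random

random.seed(1)

# phi(x1, x2) = (2*x1, 0.5*x2): affine, Jacobian determinant constant (= 1),
# so by the lemma phi(X) should be uniform on phi([0,1]^2) = [0,2] x [0,0.5].
def phi(x1, x2):
    return 2.0 * x1, 0.5 * x2

n = 100_000
pts = [phi(random.random(), random.random()) for _ in range(n)]

# Under uniformity, equal-area halves of the image receive equal mass.
left = sum(y1 <= 1.0  for y1, _ in pts) / n   # {y1 <= 1} has half the area
low  = sum(y2 <= 0.25 for _, y2 in pts) / n   # {y2 <= 0.25} has half the area
print(round(left, 2), round(low, 2))   # both close to 0.5
```

A nonlinear map with non-constant Jacobian (say ϕ(x) = x²) would visibly fail this test, which is the other direction of the lemma.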

Remark 2. Let us consider pairs of sets A ≤ B having the property that B = φ(A) with φ affine and φ(x) ≥ x for all x ∈ A. This is the case of balls: obviously B(a, r) = a + rB, where B = B(0_{R^d}, 1) is the unit ball, and all the balls are convex and compact. Moreover, B is symmetric. Denote b = sup B.

Proposition 5. With the above notation:
1. Unif(B) ≺_st Unif(B(a, r)) if a ≥ 0_{R^d} and |1 − r| ≤ min_j a_j/b_j.
2. The affine map φ(x) = a + rx satisfies φ(x) ≥ x for all x ∈ B if and only if a ≥ 0_{R^d} and −a ≤ (1 − r)b ≤ a, which is the condition in 1.
3. If these conditions hold, then Unif(B) ≺_st Unif(B(a, r)).
4. Unif(B(a, r)) ≺_st Unif(B(α, ρ)) iff B(a, r) ≤ B(α, ρ), and that happens precisely if and only if a ≤ α and |r − ρ| ≤ min_j (α_j − a_j)/b_j.
5. If the norm is the usual L^p norm on R^d, defined by ‖x‖_p = (∑_{j=1}^d |x_j|^p)^{1/p} for p ∈ [1, ∞) and ‖x‖_∞ = max_j |x_j|, then we know who b is: it is (1, 1, ..., 1). Therefore Unif(B(a, r)) ≺_st Unif(B(α, ρ)) whenever a ≤ α and |r − ρ| ≤ min_j (α_j − a_j).

Proof. 2. We know that a + rx ≥ x for all x ∈ B. If x = 0_{R^d} we get a ≥ 0_{R^d}. For x = t e_j it follows that a_j + rt ≥ t; the inequality must hold for t ∈ [−b_j, b_j], therefore a_j + r b_j ≥ b_j and a_j − r b_j ≥ −b_j, and we find the same conditions as in 1. Conversely, if a ≥ 0 and −a ≤ (1 − r)b ≤ a, we want to prove that φ(x) ≥ x for all x ∈ B. But this is true for every x_j ∈ [−b_j, b_j] because the affine function φ_j(t) = a_j + rt − t has the property that φ_j(±b_j) ≥ 0.
3. Let X ∼ Unif(B) and Y ∼ Unif(B(a, r)). The random vector Y′ = a + rX has the same distribution as Y and X ≤ Y′. Apply Lemma 1.
4. Suppose that B(a, r) ≤ B(α, ρ), i.e. a + rB ≤ α + ρB. Using the same tricks as before we get −rb ≤ α − a + ρb, a − rb ≤ α − ρb and a ≤ α. It follows that |r − ρ| ≤ (α_j − a_j)/b_j for all 1 ≤ j ≤ d, and this is the condition that a + rx ≤ α + ρx for all x ∈ B.
5. It is similar.
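The ball-ordering condition for L^p balls discussed above can be packaged as a small predicate; this sketch assumes, as in item 5, that b = (1, ..., 1), so the test reduces to a ≤ α coordinatewise and |r − ρ| ≤ min_j(α_j − a_j). The function name and the numeric examples are our own.

```python
def lp_balls_ordered(a, r, alpha, rho):
    """Condition for B(a, r) <= B(alpha, rho) when B is the unit L^p ball,
    so b = sup B = (1, ..., 1): require a <= alpha coordinatewise and
    |r - rho| <= min_j (alpha_j - a_j)."""
    if any(ai > bi for ai, bi in zip(a, alpha)):
        return False
    return abs(r - rho) <= min(bi - ai for ai, bi in zip(a, alpha))

print(lp_balls_ordered((0, 0), 1.0, (0.5, 0.5), 1.2))  # True:  |1 - 1.2| = 0.2 <= 0.5
print(lp_balls_ordered((0, 0), 1.0, (0.1, 0.9), 1.5))  # False: |1 - 1.5| = 0.5 > 0.1
```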
A slight generalization for L^p norms is the following result; here B is the unit ball of L^p.

Proposition 6. Let f : R^d → R^d be defined by f(x) = Ax + b, where A = (a_{i,j})_{1≤i,j≤d} has the form … For the converse we need to verify that (λ_i − 1)…, but this is obvious.

Another idea runs as follows. Suppose that F = Unif(C) is the distribution of the stochastic vector X and let Y = f_j(X) on Ω_j, where (Ω_j)_{j=1,...,n} is a partition of Ω which is independent of X. The distribution of Y is then a mixture. If f_j(x) ≥ x, then X ≤ Y, hence we have stochastic domination. If X is uniformly distributed on C, the f_j are affine and, moreover, the sets f_j(C) are disjoint, then one may hope to be able to choose the weights p_j = P(Ω_j) in such a way that Y is again uniformly distributed. Even if the f_j are not affine we could try. Precisely:

Proposition 7. Let C ⊂ R^d be a Borel set having positive finite Lebesgue measure. Let X be a random vector uniformly distributed on C, let I be an at most countable set and let f_j : R^d → R^d, j ∈ I, have the property that f_j(x) ≥ x for almost all x ∈ C and, moreover, that f_j(X) ∼ Unif(f_j(C)). Suppose that the sets f_j(C) are almost disjoint, meaning that i ≠ j ⟹ λ_d(f_i(C) ∩ f_j(C)) = 0. Then Unif(C) ≺_st Unif(∪_{j∈I} f_j(C)).

Proof. We know that f_j(X) ∼ Unif(C_j), where C_j = f_j(C). Let π_j = λ_d(f_j(C)); then the density of f_j(X) is (1/π_j) 1_{C_j}. Let (Ω_j)_{j∈I} be a partition of Ω which is independent of X and let Y = f_j(X) on Ω_j. The distribution of Y is the mixture ∑_{j∈I} P(Ω_j) Unif(C_j) and its density is ∑_{j∈I} P(Ω_j) (1/π_j) 1_{C_j}; choosing the weights P(Ω_j) proportional to π_j makes this density constant on ∪_{j∈I} C_j, so Y ∼ Unif(∪_{j∈I} C_j), while f_j(x) ≥ x gives X ≤ Y; apply Lemma 1. In terms of transition operators, …

The real problem is how to construct the functions f_j. An idea is to split the set C into almost disjoint subsets (∆_{j,k})_{k∈K}, K at most countable, and to define f_j(x) = a_{j,k} + A_{j,k} x on the sets ∆_{j,k}, taking care that det A_{j,k} does not depend on k. To understand this, let us look at an example.

Example 1. Let X ∼ Unif(C) and Y ∼ Unif(∆), where C = [0, 1]^2 and ∆ is the triangle with the vertices O(0, 0), A(2, 0), B(0, 2). Clearly C ≤ ∆, but we already know that this is not enough to imply that Unif(C) ≺_st Unif(∆).
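A one-dimensional instance of the construction in Proposition 7 (a toy example of our own, not from the paper): take C = [0, 1], f_1(x) = x + 1 and f_2(x) = x + 2. Both satisfy f_j(x) ≥ x, the images [1, 2] and [2, 3] are almost disjoint and have equal measure, so equal weights p_1 = p_2 = 1/2 should make the mixture uniform on [1, 3].

```python
import random

random.seed(0)
n = 200_000
xs, ys = [], []
for _ in range(n):
    x = random.random()             # X ~ Unif(C), C = [0, 1]
    shift = random.choice([1, 2])   # pick f_1 or f_2 with weights 1/2, 1/2
    xs.append(x)
    ys.append(x + shift)            # Y = f_j(X) on Omega_j

# Pointwise domination X <= Y, so Unif(C) <=_st law(Y) by the coupling lemma.
dominated = all(x <= y for x, y in zip(xs, ys))

# Empirical CDF of Y at a few points; Unif([1, 3]) has CDF (t - 1) / 2.
emp = {t: sum(y <= t for y in ys) / n for t in (1.5, 2.0, 2.5)}
print(dominated, emp)   # True, values near 0.25, 0.50, 0.75
```

The empirical distribution function of Y matches that of Unif([1, 3]), illustrating both conclusions of the proposition: Y is uniform on the union of the images and dominates X pointwise.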

Stochastic Orders between Multivariate Uniform Distributions via Decomposition
Another criterion to decide the stochastic order could be the total probability formula. Here is its simplest variant.

Theorem 1. Let F be a probability distribution on R^2. Then there exist a probability µ on R and a transition probability Q from R to R such that F = µ ⊗ Q.

Or, even more precisely, in terms of random vectors: let Z = (X, Y) be a stochastic vector in the plane, let F be its distribution, µ the distribution of X and Q(x) the conditional distribution of Y given that X = x. Then F = µ ⊗ Q; this is a notation meaning that ∫ u dF = ∫ (∫ u(x, y) Q(x, dy)) dµ(x). Now suppose that we have two random vectors in the plane, Z_j = (X_j, Y_j), with their distributions written as F_j = µ_j ⊗ Q_j.

It seems plausible that if µ_1 ≺_st µ_2 and Q_1(x) ≺_st Q_2(x) for all x, then F_1 ≺_st F_2. We were able to prove a weaker result. Call a transition probability Q monotone if x ≤ x′ ⟹ Q(x) ≺_st Q(x′).

Proposition 8. Let µ_1, µ_2 be two probability distributions with µ_1 ≺_st µ_2 and let Q_1, Q_2 be two transition probabilities with Q_1(x) ≺_st Q_2(x) for all x. Suppose that at least one of them is monotone. Then µ_1 ⊗ Q_1 ≺_st µ_2 ⊗ Q_2.

Proof. Let u : R^2 → R be measurable, bounded and increasing.
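The conclusion of Proposition 8 can be illustrated by simulation (distributions and kernel chosen by us): take µ_1 = Unif([0, 1]), µ_2 = Unif([0.5, 1.5]), so µ_1 ≺_st µ_2, and use for both vectors the same monotone kernel Q(x) = Unif([x, x + 1]); then E u(Z_1) should not exceed E u(Z_2) for increasing test functions u.

```python
import random

random.seed(2)
n = 100_000

def sample_Z(lo):
    """Z = (X, Y): X ~ Unif([lo, lo + 1]) and, given X = x,
    Y ~ Q(x) = Unif([x, x + 1]) -- a monotone transition probability."""
    x = lo + random.random()
    y = x + random.random()
    return x, y

Z1 = [sample_Z(0.0) for _ in range(n)]   # mu_1 = Unif([0, 1])
Z2 = [sample_Z(0.5) for _ in range(n)]   # mu_2 = Unif([0.5, 1.5])

# Increasing test functions u: compare Monte Carlo estimates of E u(Z_j).
u_sum = lambda z: z[0] + z[1]
u_min = lambda z: min(z)
m = {name: (sum(map(u, Z1)) / n, sum(map(u, Z2)) / n)
     for name, u in [("sum", u_sum), ("min", u_min)]}
print(m)   # each pair increases from Z1 to Z2
```

Of course a simulation with two particular test functions is only a necessary check, not a proof; the proposition asserts the inequality for every bounded increasing u.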

Conclusions
This study started from a well-known result in the univariate case. We followed two different approaches, transforms and decompositions, in order to prove similar results in a more general framework. Examples, remarks and counterexamples highlight interesting cases which might, themselves, generate new questions; for instance, different types of order might be considered, together with the links between them. We therefore consider that this study merits further investigation.