Article

On the Probability of Finding Extremes in a Random Set

by Anișoara Maria Răducan 1,2, Constanța Zoie Rădulescu 3,*, Marius Rădulescu 1 and Gheorghiță Zbăganu 1
1 “Gheorghe Mihoc-Caius Iacob” Institute of Mathematical Statistics and Applied Mathematics of the Romanian Academy, 050711 Bucharest, Romania
2 Department of Applied Mathematics, Bucharest University of Economic Studies, 010522 Bucharest, Romania
3 National Institute for Research and Development in Informatics, 011455 Bucharest, Romania
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(10), 1623; https://doi.org/10.3390/math10101623
Submission received: 2 March 2022 / Revised: 3 May 2022 / Accepted: 5 May 2022 / Published: 10 May 2022
(This article belongs to the Special Issue Recent Trends in Convex Analysis and Mathematical Inequalities)

Abstract:
We consider a sequence $(Z_j)_{j\ge 1}$ of i.i.d. $d$-dimensional random vectors and for every $n \ge 1$ consider the sample $S_n = \{Z_1, Z_2, \dots, Z_n\}$. We say that $Z_j$ is a “leader” in the sample $S_n$ if $Z_j \ge Z_k$ for all $k \in \{1, 2, \dots, n\}$ and $Z_j$ is an “anti-leader” if $Z_j \le Z_k$ for all $k \in \{1, 2, \dots, n\}$. After all, the leader and the anti-leader are the naive extremes. Let $a_n$ be the probability that $S_n$ has a leader, $b_n$ be the probability that $S_n$ has an anti-leader and $c_n$ be the probability that $S_n$ has both a leader and an anti-leader. One of the aims of the paper is to compute, or at least to estimate, these quantities or, if even that is not possible, to estimate their limits. Another goal is to find conditions on the distribution $F$ of $(Z_j)_{j\ge 1}$ so that the inferior limits of $a_n, b_n, c_n$ are positive. We give examples of distributions for which we can compute these probabilities and also examples where we cannot. Then we establish conditions, unfortunately only sufficient ones, under which the limits are positive. In doing so we discovered many open questions and we state two annoying conjectures: annoying because they seemed obvious, yet on second thought we were not able to prove them. It seems that these problems have never been approached in the literature.
MSC:
60E15; 60B12; 62H05

1. Introduction

Let Ω be a population composed of individuals. Each individual is characterized by a d-dimensional vector of the Euclidean space. Each entry of the vector corresponds to a characteristic of the individual. Examples of characteristics can be wealth, height, income, education level, health, age, notoriety, and so on. Thus, any individual “j” may be characterized by a vector Z j with d components, where d is the number of measured characteristics.
If $x = (x_1, x_2, \dots, x_d)$ and $y = (y_1, y_2, \dots, y_d)$ are two vectors from $\mathbb{R}^d$ we say $x \le y$ iff $x_j \le y_j$ for all $j \in \{1, 2, \dots, d\}$. This is the natural partial order. Let $A$ be a subset of $\mathbb{R}^d$. We shall say that $x = (x_1, x_2, \dots, x_d)$ from $A$ is maximal if there is no $y = (y_1, y_2, \dots, y_d)$ from $A$ such that $x < y$. If the set $A$ has a unique maximal element, then it will be called the last element of $A$.
If $Z_j: \Omega \to \mathbb{R}^d$, $j = 1, 2$, are two functions, then $Z_1 \le Z_2$ iff $Z_1(\omega) \le Z_2(\omega)$ for all $\omega \in \Omega$.
The individuals can be compared with each other using each component. At the one-dimensional level the temptation of making rankings is legitimate.
However, in the multidimensional case the order relation is not total, so order statistics no longer make sense. Nevertheless, people want to rank a set of vectors in order to make a decision. Sometimes it is unavoidable.
In a partially ordered set we shall call the set formed by the first and the last element the extremes set. Of course, the extreme set has 0, 1 or 2 elements.
Definition 1.
A partially ordered set $S$ has the property F if it has a first element, that is, if there exists an element $a \in S$ such that $a \le x$ for every $x \in S$. Next, $S$ has the property L (or it has the leadership property) if it has a last element, i.e., if there exists an element $a \in S$ such that $a \ge x$ for every $x \in S$. Sometimes we call the last element “the leader”. Finally, $S$ has the property FL if it has both a first element and a last element.
Let $(Z_j)_{j\ge 1}$ be a sequence of $d$-dimensional random vectors.
Let $n \ge 2$ be a natural number. Consider the sample $(Z_j)_{1\le j\le n}$.
Some questions are natural. For instance: what is the probability $a_n$ that a “leader” exists in that sample? ($Z$ is a leader if $Z_j \le Z$ for any $j \in \{1,\dots,n\}$.) What is the probability $b_n$ that an anti-leader exists in the sample? ($Z$ is an anti-leader if $Z \le Z_j$ for any $j \in \{1,\dots,n\}$.) What is the probability $c_n$ that in the sample we find both a leader and an anti-leader? Or the probability that the sample contains at least one comparable pair $(Z, Z')$, meaning that $Z \le Z'$ or $Z' \le Z$?
For instance, if $(Z_j)_{1\le j\le n}$ are i.i.d. and uniformly distributed in the hypercube $[0,1]^d$, then it is easy to see that $a_n = b_n = \frac{1}{n^{d-1}}$, $c_n = \frac{1}{(n(n-1))^{d-1}}$ and the probability of finding at least two comparable vectors is $1 - \frac{1}{n!}$ in the case $d = 2$. (For $d \ge 3$ we do not know the answer!) In the case $d \ge 2$ the probabilities $a_n$, $b_n$ and $c_n$ tend to 0 as $n \to \infty$. Is that a general rule? Do these probabilities always tend to 0 when $d \ge 2$, regardless of the distribution of the i.i.d. random vectors $(Z_j)_j$?
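These formulae are easy to check by simulation; the following sketch in R (the language used for the simulations in Section 5; the helper names has_leader and estimate_an are ours, not from the paper) estimates $a_n$ for uniform samples in the hypercube and compares it with $1/n^{d-1}$.
# Monte Carlo check of a_n = 1/n^(d-1) for Z_j ~ Uniform([0,1]^d) -- a sketch
has_leader <- function(Z) {
  # Z is an n x d matrix; a leader is a row that dominates all rows coordinatewise,
  # i.e. a row equal to the componentwise maximum
  cmax <- apply(Z, 2, max)
  any(apply(Z, 1, function(z) all(z == cmax)))
}
estimate_an <- function(n, d, N = 10000) {
  mean(replicate(N, has_leader(matrix(runif(n * d), nrow = n, ncol = d))))
}
estimate_an(5, 2)   # should be close to 1/5
1 / 5^(2 - 1)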
These questions have prompted our paper. The subject of our paper belongs to the domain of Probability Theory called “Random Geometry”. We recall that Random Geometry covers a variety of techniques and methods applicable to the description of the stochastic behavior of geometric objects ranging from graphs and networks to abstract or embedded continuous manifolds. An interesting problem in Random Geometry is the study of the number of maximal elements of a random set of points in $\mathbb{R}^d$. Let $S_n = \{Z_1, Z_2, \dots, Z_n\}$ be a set of $n$ random vectors in $\mathbb{R}^d$. Define the random variable $K_{n,d}$ as the number of maximal elements in $S_n$.
In [1], Barndorff-Nielsen and Sobel initiated the study of the number of maximal elements of the set $S_n$ as an attempt to describe the boundary of a set of random points in $\mathbb{R}^d$. This problem is closely related to a problem from computational geometry, namely the construction of the convex hull of a set [2]. Potential applications of the study may be found in various disciplines such as mathematical programming, multiple criteria decision making, game theory, algorithm analysis, pattern classification, graphics, economics, data analysis, social psychology, and others.
In [1], asymptotic results for the expected value and variance of the random variable $K_{n,d}$ were obtained. Subsequent papers that deal with this problem, some of them giving simplified proofs, include [2,3,4,5,6,7]. The interested reader can find more information in the books [8,9] and in the papers [5,10,11,12,13,14,15,16,17,18,19]. Now we shall return to our problem. We searched for answers regarding our specific problem in the literature, but we were not able to find any. This is why we decided to study the probability of the existence of naive extrema in a sample. The structure of our paper is described below.
In Section 1 and Section 2 we introduce definitions, notation, and elementary examples. Some results for discrete distributions and the formulation of three conjectures are presented in Section 3. They are followed by the analysis of the continuous case in Section 4. The last part of the article is dedicated to absolutely continuous distributions. It contains a series of examples that shed light on this subject.

2. Statement of the Problem

Let $(Z_k)_{k\ge 1}$ be a sequence of i.i.d. $d$-dimensional random vectors defined on a probability space $(\Omega, \mathcal{K}, P)$ and let $F$ be their common probability distribution. If we write only $Z$, it is understood that $Z$ is a copy of $Z_k$.
We will denote by the same letter $F$ both the probability distribution and the distribution function: if $B \subset \mathbb{R}^d$ is a Borel set then $F(B) = P(Z \in B)$, but if $x = (x_j)_{1\le j\le d}$ is a vector from $\mathbb{R}^d$, then
$$F(x) := P(Z \le x)$$
or, explicitly, $F(x) := P(Z_1 \le x_1, Z_2 \le x_2, \dots, Z_d \le x_d)$ where $Z = (Z_1, Z_2, \dots, Z_d)$. Let also
$$\tilde F(x) := P(Z \ge x).$$
Let us notice that in the 1-dimensional case, if $F$ is continuous, then $\tilde F(x) = \bar F(x) = 1 - F(x)$.
In the 2-dimensional case we will prefer the notation $Z_k = (X_k, Y_k)$. Now $F(x,y)$ means $P(X_k \le x, Y_k \le y)$. In this particular case the marginals will be denoted by $F_X$ and $F_Y$: $F_X(x) = P(X_j \le x)$, $F_Y(y) = P(Y_j \le y)$.
Let
$$a_n = P\left(\text{there exists } j \in \{1,\dots,n\} \text{ such that } Z_i \le Z_j \text{ for all } i \in \{1,\dots,n\}\right)$$
$$b_n = P\left(\text{there exists } j \in \{1,\dots,n\} \text{ such that } Z_i \ge Z_j \text{ for all } i \in \{1,\dots,n\}\right)$$
$$c_n = P\left(\text{there exist } i, j \in \{1,\dots,n\} \text{ such that } Z_i \le Z_k \le Z_j \text{ for all } k \in \{1,\dots,n\}\right)$$
Thus, $a_n$ is the probability that one of these $n$ random vectors is the greatest of them, $b_n$ is the probability that one of them is the smallest and $c_n$ the probability that the set $S_n = \{Z_1, \dots, Z_n\}$ has both a minimum and a maximum. In other words, $a_n$ is the probability that the set $S_n$ has a leader, $b_n$ is the probability that the set $S_n$ has an anti-leader and $c_n$ is the probability that the set $S_n$ has both a leader and an anti-leader. It is evident that
$$c_n \le \min(a_n, b_n)$$
Notice that all the probabilities a n , b n , c n depend only on the distribution F of Z j .
When the danger of confusion arises, we shall write a n F , b n F , c n F instead of a n , b n , c n to be on the safe side.
In other words, a n is the probability that the random set Z 1 , , Z n has the property L, b n is the probability that the same set has the property F and c n is the probability to have the property FL.
Definition 2.
We say that
- the probability distribution $F$ has the leader property if $\liminf_{n\to\infty} a_n > 0$,
- the probability distribution $F$ has the strong leader property if $\liminf_{n\to\infty} a_n = 1$,
- the probability distribution $F$ has the anti-leader property if $\liminf_{n\to\infty} b_n > 0$,
- the probability distribution $F$ has the strong anti-leader property if $\liminf_{n\to\infty} b_n = 1$,
- the probability distribution $F$ has the extremes property if $\liminf_{n\to\infty} c_n > 0$,
- the probability distribution $F$ has the strong extremes property if $\liminf_{n\to\infty} c_n = 1$.
We say that the random vector Z has a leader if its distribution has the leader property. In the same way Z has the anti-leader property if its distribution has the anti-leader property and so on.
Definition 3.
Let ϕ : R d R d be a function.
We say that
- $\phi$ is strongly increasing iff $z \le z' \Rightarrow \phi(z) \le \phi(z')$, and
- $\phi$ is strongly decreasing iff $z \le z' \Rightarrow \phi(z) \ge \phi(z')$.
Typical examples of strongly increasing functions are $f(z) = (f_j(z_j))_{1\le j\le d}$ with all $f_j: I_j \to \mathbb{R}$ increasing. Here $I_j \subset \mathbb{R}$ are intervals. For instance, for $d = 2$ the functions $f(x,y) = \left(\ln\frac{1}{1-x}, \frac{1}{1-y}\right)$ or $f(x,y) = \left(\frac{1}{1-x}, \frac{1}{1-y}\right)$ are strongly increasing for $(x,y) \in (0,1)^2$ and $f(x,y) = (\ln(1-x), \ln(1-y))$ is strongly decreasing. Here $I_1 = I_2 = (0,1)$.
Let Z = Z j 1 j d be a d dimensional random vector and let F be its probability distribution.
Some statements are obvious:
(S1) If $Z$ has a leader and $\phi: \mathbb{R}^d \to \mathbb{R}^d$ is strongly increasing, then $\phi(Z)$ has a leader, too. Moreover, if $Z$ has a leader and $\phi$ is strongly decreasing, then $\phi(Z)$ has the anti-leader property.
(S2) The probabilities $a_n, b_n, c_n$, $n \ge 1$, remain the same if we replace the sequence $(Z_n)_n$ with $(\phi(Z_n))_n$, where $\phi$ is a strongly increasing function.
(S3) Let $J = \{j_1, \dots, j_k\} \subset \{1,\dots,d\}$ and $Z_J = (Z_{j_i})_{1\le i\le k}$. If $Z$ has a leader, then $Z_J$ has a leader, too. Or, in terms of distributions: if $F$ has the leader property, then all its marginals $F_J$ have the leader property.
(S4) Let $\sigma$ be a permutation of $\{1, 2, \dots, d\}$ and $Z_\sigma = (Z_{\sigma(j)})_{1\le j\le d}$. If $Z$ has the leader property, then $Z_\sigma$ has the leader property. The probabilities $a_n, b_n, c_n$ are the same for $Z$ and $Z_\sigma$.
The support of $F$, denoted by $S = \operatorname{supp} F$, is defined to be the intersection of all closed sets $B$ such that $F(B) = 1$. Or, in terms of random vectors: $z \in S$ if and only if $P(\|Z - z\| < \varepsilon) > 0$ for any $\varepsilon > 0$. Of course $S$ is closed. In the unidimensional case, i.e., if $d = 1$, the support is $S = \mathbb{R}\setminus A$, where $A$ is the union of the open intervals on which $F$ is constant. Moreover, the restriction of the distribution function to $S$ is increasing, since if $x < y$ and $F(x) = F(y)$ then $S \cap (x, y] = \emptyset$.
Definition 4.
Let $S \subset \mathbb{R}^d$. The point $z \in S$ is called a weak leader if there exists no $z' \in S$ such that $z < z'$. In the same way, $z \in S$ is a weak anti-leader if there exists no $z' \in S$ such that $z' < z$. After all, a weak leader is a Pareto maximum and a weak anti-leader is a Pareto minimum. Obviously the set $S \subset \mathbb{R}^d$ has a leader if it has the property L; that is, if there exists a point $z_0 \in S$ such that $\sup S \le z_0$. This leader $z_0$ is unique.
Remark that the compact set $S$ has no leader if and only if it has at least two weak leaders.
Examples. Here the uniform distribution on a set S will be denoted by U ( S ) and λ d will be the Lebesgue measure on R d .
1. $F = U(S)$ with $S = \{(2,0), (1,1), (0,2)\}$. The set $S$ has no leader. All the points of $S$ are weak leaders.
2. $F = U(0,1) \otimes Q$ where $Q(x) = U(f(x), g(x))$, with $f, g: (0,1) \to \mathbb{R}$ measurable functions such that $f \le g$.
In this case, $S = \operatorname{Supp} F = \operatorname{Cl}\{(x,y) \in \mathbb{R}^2 : 0 \le x \le 1,\ f(x) \le y \le g(x)\}$. Here $\operatorname{Cl}(A)$ means the closure of $A$. If $f, g$ are nondecreasing, then $S$ has the leader $(1, g(1))$ and the anti-leader $(0, f(0))$.
3. $F = U(C)$ where $C \subset \mathbb{R}^d$ is a compact set such that $0 < \lambda_d(C) < \infty$. Let $C_j$ be the $j$-th projection of $C$. The leader of $C$ is the point $(\max C_1, \max C_2, \dots, \max C_d)$, provided that it belongs to $C$.

3. The Discrete Case

Suppose that the distribution $F$ is discrete: $F = \sum_{z\in S} p_z \delta_z$ with $S = \operatorname{Supp} F \subset \mathbb{R}^d$ at most countable, $\delta_z$ the Dirac distribution, $p_z > 0$ for all $z \in S$ and $\sum_{z\in S} p_z = 1$.
In this case, we obtain an exact formula for a n and also an upper bound.
Proposition 1.
Let F be the discrete probability distribution described above.
Then
$$a_n = \sum_{m=1}^{n} \binom{n}{m} (-1)^{m-1} \sum_{z \in S} F^{n-m}(z)\, p_z^m \le n \sum_{z\in S} F^{n-1}(z)\, p_z$$
Consequently, if $\liminf_{n\to\infty} n \sum_{z\in S} F^{n-1}(z)\, p_z = 0$, then $F$ does not have the leader property.
Proof. 
According to the definition of $a_n$ we have $a_n = P\left(\bigcup_{k=1}^n A_k\right)$ where
$$A_k = \{Z_j \le Z_k \text{ for all } j \ne k\}.$$
However, $P(A_k) = \sum_{z\in S} P(Z_j \le Z_k\ \forall j \ne k,\ Z_k = z) = \sum_{z\in S} P(Z_j \le z\ \forall j\ne k)\, P(Z_k = z) = \sum_{z\in S} P(Z_j \le z)^{n-1} p_z = \sum_{z\in S} F^{n-1}(z)\, p_z$.
In the same way,
$P(A_{k_1} \cap A_{k_2}) = \sum_{z\in S} P(Z_j \le Z_{k_1}\ \forall j \ne k_1, k_2,\ Z_{k_1} = Z_{k_2} = z) = \sum_{z\in S} F^{n-2}(z)\, p_z^2$ and, in general, $P(A_{k_1} \cap A_{k_2} \cap \dots \cap A_{k_m}) = \sum_{z\in S} F^{n-m}(z)\, p_z^m$ for $m \le n$.
The claim is a consequence of the well-known formula for the probability of a union of sets. □
Unfortunately, Formula (5) seems too involved to be useful.
Proposition 2.
Let $Z$ be a discrete $d$-dimensional random vector with the distribution function $F$ and let $S = \operatorname{Supp} F$. Then the following assertions hold:
a. If the set $S$ has a leader $z_0$ and $F(\{z_0\}) > 0$, then the distribution $F$ has the strong leader property. In this case $\lim_{n\to\infty} a_n = 1$. Otherwise written, the occurrence of a leader is almost sure.
b. If $S$ is compact and $F$ has the leader property, then $S$ has the property L. In the same way, if $F$ has the anti-leader property, then $S$ has the property F.
c. If $S$ is finite, then $F$ has the leader property if and only if $S$ has the property L.
d. If $F = F_1 \otimes \dots \otimes F_d$ and all $\operatorname{Supp} F_j$ are finite, then $F$ has the strong leader property, hence $\lim a_n = 1$.
Proof. 
a. Let $(Z_j)_{j\ge 1}$ be a sequence of i.i.d. $F$-distributed random vectors. Let $S = \operatorname{Supp} F$ and let $z_0$ be the leader of $S$. As $F$ is discrete, it follows that $S$ is at most countable. Let $p_0 = F(\{z_0\}) = P(Z_j = z_0) > 0$. Let $N$ be the first $n$ such that $Z_n = z_0$. $N$ is finite and geometrically distributed: $P(N = n) = p_0 (1-p_0)^{n-1}$. Of course $P(N \le n) \le a_n$, hence $a_n \to 1$.
b. Suppose that $S$ is compact. We prove that if $F$ has the leader property, then $S$ has a leader, too. Suppose the contrary. As $S$ is compact, it has no leader iff it has at least two different weak leaders. Denote them by $z'$ and $z''$. Let $z_0 = \max(z', z'')$. Surely $z_0 \notin S$, since no point in $S$ can be greater than both $z'$ and $z''$. The complement of $S$ is an open set, hence there exists a neighborhood $V$ of $z_0$ such that $V \cap S = \emptyset$. Let $U'$ and $U''$ be two neighborhoods of $z'$ and $z''$ such that if $u \in U'$ and $v \in U''$ then $\max(u, v) \in V$. By the definition of the support, $F(U') > 0$ and $F(U'') > 0$. Let $N'$ and $N''$ be the hitting times of $U'$ and $U''$ and let $N = \max(N', N'')$.
If $N < n$, then the set $\{Z_1, \dots, Z_n\}$ has no leader, since no point of the sample can be greater than both $u$ and $v$ if $u \in U'$ and $v \in U''$. Thus $P(N < n) \le 1 - a_n$. As $\lim_{n\to\infty} P(N < n) = 1$, it follows that $a_n \to 0$.
c. Apply a. and b.
d. A consequence of a.: $\operatorname{Supp} F = \operatorname{Supp} F_1 \times \dots \times \operatorname{Supp} F_d$ is finite and has the leader $z_0 = (\max \operatorname{Supp} F_j)_{1\le j\le d}$. Moreover $F(\{z_0\}) = \prod_{j=1}^d F_j(\{\max \operatorname{Supp} F_j\}) > 0$. □
Remark 1.
The fact that $F = \bigotimes_{j=1}^d F_j$ means that all the components of $Z$ are independent. Thus, point d. of Proposition 2 should be read as “If all the components of $Z$ have finite range and are independent, then $Z$ has the strong leader property”.
Sometimes one can relax the finiteness condition, allowing one component to have infinitely many values. Precisely, it is true that “If all but one of the components of $Z$ have finite range and are independent, then $Z$ has the leader property”.
Lemma 1.
Let $(U_i)_i$, $(V_j)_j$ be two independent sequences of i.i.d. $F$-distributed random variables. Suppose that their distribution $F = \sum_{k\ge1} p_k \delta_k$ is discrete and, moreover, that there exists some $\beta > 1$ such that $p_k \le \beta\, p_{k+1}$ for all $k \ge 1$.
Let $X = \max_{i\le m} U_i$, $Y = \max_{j\le n} V_j$ and
$$p_{m,n} = p_{m,n}(F) = P(X \le Y)$$
Then
$$\frac{1}{\beta}\cdot\frac{n}{m+n} \le p_{m,n}(F) \le \beta\cdot\frac{n}{m+n}$$
Particular case: $p_{n-1,1} \le \frac{\beta}{n}$, $p_{n-k,k} \le \frac{\beta k}{n}$.
Proof. 
As $p_{m,n} = \sum_{k\ge1} P(X \le k)\, P(Y = k)$, we can write $p_{m,n} = \sum_{k\ge1} F^m(k)\left(F^n(k) - F^n(k-1)\right)$.
From the well-known inequality $n x^{n-1}(y-x) \le y^n - x^n \le n y^{n-1}(y-x)$, valid for $0 \le x \le y$, and the fact that
$p_k = F(k) - F(k-1)$, we infer that
$$n F^{n-1}(k-1)\, p_k \le F^n(k) - F^n(k-1) \le n F^{n-1}(k)\, p_k. \qquad (*)$$
It follows that
$$n F^m(k) F^{n-1}(k-1)\, p_k \le F^m(k)\left(F^n(k) - F^n(k-1)\right) \le n F^{m+n-1}(k)\, p_k.$$
However, $F^{m+n-1}(k)\, p_k = \frac{p_k}{p_{k+1}}\, F^{m+n-1}(k)\, p_{k+1} \le \beta\, \frac{F^{m+n}(k+1) - F^{m+n}(k)}{m+n}$, therefore, by (*), $\sum_{k\ge1} F^m(k)\left(F^n(k) - F^n(k-1)\right) \le \frac{\beta n}{n+m}\sum_{k\ge1}\left(F^{m+n}(k+1) - F^{m+n}(k)\right) = \frac{\beta n}{n+m}\left(1 - F^{m+n}(1)\right) \le \frac{\beta n}{n+m}$.
On the other side, $n F^m(k) F^{n-1}(k-1)\, p_k \ge n F^{m+n-1}(k-1)\, p_k = n\,\frac{p_k}{p_{k-1}}\, F^{m+n-1}(k-1)\, p_{k-1} \ge n\,\frac{p_k}{p_{k-1}}\,\frac{F^{m+n}(k-1) - F^{m+n}(k-2)}{m+n} \ge \frac{n}{\beta}\,\frac{F^{m+n}(k-1) - F^{m+n}(k-2)}{m+n}$.
Adding these quantities we obtain $\sum_{k\ge1} F^m(k)\left(F^n(k) - F^n(k-1)\right) \ge \frac{1}{\beta}\cdot\frac{n}{m+n}$. □
Proposition 3.
Let $(Z_n)_{n\ge 1}$ be a sequence of i.i.d. random vectors in the plane. Suppose that $Z_n = (X_n, Y_n)$, $n \ge 1$, and $F = F_1 \otimes F_2$ with $F_1$ concentrated on $\{1, 2, \dots, M\}$ with probabilities $p_1, p_2, \dots, p_M$ and $F_2 = \sum_{k=1}^\infty q_k \delta_k$. Suppose moreover that $p_j, q_k$ are positive for $j \in \{1,\dots,M\}$ and $k \ge 1$ and that $\sup_k \frac{q_{k-1}}{q_k} = \beta < \infty$.
Then $a_n \ge \frac{p_M}{\beta}$. Thus, $F$ has the leader property.
Proof. 
Let $n \ge 2$ be fixed. Notice that $a_n = P(\Omega_0)$ where $\Omega_0 = \{\omega \in \Omega : \exists\, k = k(\omega) \in \{1,\dots,n\} \text{ such that } X_j(\omega) \le X_k(\omega),\ Y_j(\omega) \le Y_k(\omega)\ \forall j \le n,\ j \ne k\}$.
Consider the random vector $U = (U_1, \dots, U_M)$ with $U_k = \#\{i \in \{1,\dots,n\} : X_i = k\}$. It is well known that the distribution of $U$ is the multinomial one:
$P(U_1 = k_1, \dots, U_M = k_M) = \frac{n!}{k_1!\cdots k_M!}\, p_1^{k_1}\cdots p_M^{k_M}$ and its expected value is
$E(U) = (n p_1, \dots, n p_M)$.
Let $A_k = \{U_M = k\}$. Then $P(A_k) = \binom{n}{k} p_M^k (1-p_M)^{n-k}$. Consider $k \ge 1$.
Notice that (relabeling the indices so that the observations with $X_i = M$ come last) $\omega \in A_k \cap \Omega_0 \Leftrightarrow \max_{j\le n-k} Y_j(\omega) \le \max_{n-k < j \le n} Y_j(\omega)$.
It follows that $P(\Omega_0 | A_k) = p_{n-k,k}(F_Y)$. Thus
$$a_n = P(\Omega_0) \ge \sum_{k=1}^n P(\Omega_0 | A_k)\, P(A_k) = \sum_{k=1}^n p_{n-k,k}(F_Y)\, \binom{n}{k} p_M^k (1-p_M)^{n-k}$$
$$\ge \sum_{k=1}^n \frac{k}{n\beta}\, \binom{n}{k} p_M^k (1-p_M)^{n-k}.$$
However, $\sum_{k=1}^n k \binom{n}{k} p_M^k (1-p_M)^{n-k} = E(U_M)$.
It results that $a_n \ge \frac{E(U_M)}{n\beta} = \frac{n p_M}{n\beta} = \frac{p_M}{\beta}$. □
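As a quick numerical illustration of Proposition 3, the following sketch in R (the helper prop3_an is ours, and so is the concrete choice of $F_1$ uniform on $\{1,\dots,M\}$ and $F_2$ geometric with ratio $r$, so that $\beta = 1/r$) estimates $a_n$ and compares it with the bound $p_M/\beta$.
# Simulation sketch for Proposition 3: X uniform on {1,...,M} (p_M = 1/M), Y geometric
prop3_an <- function(n, M = 3, r = 0.5, N = 5000) {
  mean(replicate(N, {
    x <- sample.int(M, n, replace = TRUE)
    y <- rgeom(n, prob = 1 - r) + 1      # P(Y = k) = (1 - r) r^(k - 1), k = 1, 2, ...
    any(x == max(x) & y == max(y))       # a leader must be maximal in both coordinates
  }))
}
prop3_an(200)   # should stay above p_M / beta = (1/3) * 0.5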
With the same proof one can obtain a small generalization
Corollary 1.
Let $Z_n = (X_n, Y_n)$, $n \ge 1$, be a sequence of i.i.d. $(d+1)$-dimensional random vectors, $d \ge 2$. Suppose that the distribution $F$ of $Z_n$ has the form $F = F_1 \otimes F_2$, where $\operatorname{Supp} F_1 \subset \mathbb{R}^d$ is a finite set with the property L and $F_2 = \sum_{k=1}^\infty q_k \delta_k$ is such that $\sup_k \frac{q_{k-1}}{q_k} = \beta < \infty$. Let $z_0 \in \mathbb{R}^d$ be the leader of $\operatorname{Supp} F_1$ and $p_M = F_1(\{z_0\}) > 0$. Then $a_n \ge \frac{p_M}{\beta}$. Thus, $F$ has the leader property.
Conjecture 1.
We think that if the support of $F_X$ is finite and has the property L, then $F_X \otimes F_Y$ has the leader property.
The situation changes if $F = \bigotimes_{j=1}^d F_j$ is discrete but $\operatorname{Supp}(F_j)$ is infinite for at least two indices $j_1$ and $j_2$.
Conjecture 2.
Let $Z = (X, Y)$ be a 2-dimensional vector. If $\operatorname{Supp} F_X$ and $\operatorname{Supp} F_Y$ are infinite, then $F_X \otimes F_Y$ does NOT have the leader property.
We were able to prove this guess in two cases. Both conjectures are true if supplementary conditions on F X and F Y are assumed.
Proposition 4.
Let $F = F_X \otimes F_Y$ with
$$F_X = \sum_{k=1}^\infty p_k \delta_k,\quad F_Y = \sum_{k=1}^\infty q_k \delta_k,\quad \beta_X = \sup_{k\ge2}\frac{p_{k-1}}{p_k},\quad \beta_Y = \sup_{k\ge2}\frac{q_{k-1}}{q_k}.$$
If $\beta_X < \infty$ or $\beta_Y < \infty$, then $a_n \to 0$, hence $F$ does not have the leader property.
Proof. 
Let $Z_k = (X_k, Y_k)$, $k \ge 1$, be a sequence of i.i.d. $F$-distributed random vectors. Suppose that $\beta_X < \infty$. Let $A_k = \bigcap_{j\ne k,\, 1\le j\le n}\{X_j \le X_k,\ Y_j \le Y_k\}$.
Then
$$P(A_k) = E\left(P(A_k | Z_k)\right) = E\left(F_X^{n-1}(X_k)\, F_Y^{n-1}(Y_k)\right) = \sum_{i,j=1}^\infty F_X^{n-1}(i)\, F_Y^{n-1}(j)\, P(X_k = i)\, P(Y_k = j).$$
With our notation, $P(A_k) = \left(\sum_{i=1}^\infty F_X^{n-1}(i)\, p_i\right)\left(\sum_{j=1}^\infty F_Y^{n-1}(j)\, q_j\right)$, regardless of $k$.
It follows that
$$a_n = P\left(\bigcup_{k=1}^n A_k\right) \le n\, P(A_1) = n\left(\sum_{i=1}^\infty F_X^{n-1}(i)\, p_i\right)\left(\sum_{j=1}^\infty F_Y^{n-1}(j)\, q_j\right).$$
However, $\sum_{i=1}^\infty F_X^{n-1}(i)\, p_i = p_{n-1,1}(F_X)$. According to Lemma 1, $p_{n-1,1}(F_X) \le \frac{\beta_X}{n}$.
Therefore $a_n \le n\,\frac{\beta_X}{n}\sum_{j=1}^\infty F_Y^{n-1}(j)\, q_j = \beta_X \sum_{j=1}^\infty F_Y^{n-1}(j)\, q_j$, and the last quantity tends to 0 since $F_Y(j) < 1$, according to the Beppo Levi theorem. □
Remark 2.
Let $F$ be a probability distribution on $\mathbb{N}$. Consider the sequence
$$\pi_n(F) = n\, p_{n-1,1}(F) = n\sum_{k=1}^\infty F^{n-1}(k)\, p_k,\quad n \ge 1$$
Proposition 4 says that the conjecture is true if one of the sequences $(\pi_n(F_X))_n$ or $(\pi_n(F_Y))_n$ is bounded. However, the condition fails to be true in important cases such as $F_X = \mathrm{Poisson}(\alpha)$, $F_Y = \mathrm{Poisson}(\beta)$.
In this situation we use another approach.
For any $F = \sum_{k=1}^\infty p_k \delta_k$ with $p_k > 0$ for all $k \ge 1$, let $r_k = r_k(F) = 1 - F(k) = p_{k+1} + p_{k+2} + \dots$ and $\lambda_k = \lambda_k(F) = \frac{p_k}{r_k}$ be the discrete analog of the hazard rate (see [20], Section 2.2, relation (2.1)). For the continuous case see Definition 5 below, in Section 5.
The connection between $\lambda_k$ and $p_k$ is given by
$$r_k = \frac{1}{(1+\lambda_1)\cdots(1+\lambda_k)},\quad p_k = \frac{\lambda_k}{(1+\lambda_1)\cdots(1+\lambda_k)}$$
for all $k \ge 1$.
As $\lim_{k\to\infty} r_k = 0$, the only condition on the positive numbers $\lambda_k$ is that $\prod_{k=1}^\infty (1+\lambda_k) = \infty$ or, which is the same thing, that $\sum_{k=1}^\infty \lambda_k = \infty$.
Proposition 5.
Let $F$ be a distribution on $\mathbb{N}$ with the property that $\operatorname{Supp} F = \mathbb{N}$, $r_k = 1 - F(k)$, $p_k = F(\{k\})$, $\lambda_k = \frac{p_k}{r_k}$. Let $N = N(n) = \inf\{k : r_k < \frac{1}{n}\}$. Then
$$p_{n,1}(F) < \left(1 + \frac{1}{e}\sum_{k=1}^N \lambda_k\right)\prod_{k=1}^{N-1}\frac{1}{1+\lambda_k}$$
Proof. 
From the definition of $N$ we see that $r_N < \frac{1}{n} \le r_{N-1}$, or $\frac{1}{r_{N-1}} \le n < \frac{1}{r_N}$, or, again,
$$\prod_{k=1}^{N-1}(1+\lambda_k) \le n < \prod_{k=1}^{N}(1+\lambda_k),$$
$$n\, p_{n,1}(F) = n\sum_{k=1}^\infty F^n(k)\, p_k = n\sum_{k=1}^\infty (1-r_k)^n\, r_k\lambda_k.$$
From the elementary inequality $(1-x)^n \le e^{-nx}$, which holds for $x \in [0,1]$, we infer that
$$n\, p_{n,1}(F) \le n\sum_{k=1}^\infty e^{-n r_k}\, r_k\lambda_k.$$
For $x, a > 0$, the function $x \mapsto x e^{-ax}$ attains its maximum at $x = \frac{1}{a}$ and the maximum is $\frac{1}{ea}$.
That is why $\sum_{k=1}^\infty n e^{-n r_k} r_k\lambda_k = \sum_{k=1}^{N} n e^{-n r_k} r_k\lambda_k + \sum_{k=N+1}^\infty n e^{-n r_k} r_k\lambda_k \le \sum_{k=1}^{N}\frac{1}{e}\lambda_k + \sum_{k=N+1}^\infty n\, r_k\lambda_k = \frac{1}{e}\sum_{k=1}^N \lambda_k + n\sum_{k=N+1}^\infty p_k = \frac{1}{e}\sum_{k=1}^N \lambda_k + n\, r_N$.
As $n r_N < 1$, we obtain $n\, p_{n,1}(F) < 1 + \frac{1}{e}\sum_{k=1}^N \lambda_k$, hence $p_{n,1}(F) < \frac{1 + \frac{1}{e}\sum_{k=1}^N \lambda_k}{n}$.
However, $n \ge \frac{1}{r_{N-1}} = (1+\lambda_1)\cdots(1+\lambda_{N-1})$. □
Let
$$L_n(F) = \left(\lambda_1 + \dots + \lambda_n\right)^2 (1+\lambda_n)\prod_{k=1}^n\frac{1}{1+\lambda_k},\quad n \ge 2.$$
Corollary 2.
Suppose that $F = F_X \otimes F_Y$ is a probability distribution on $\mathbb{N}^2$ with the property that $\lim_{n\to\infty} L_n(F_X) = \lim_{n\to\infty} L_n(F_Y) = 0$. Then $F$ does not have the leader property.
Proof. 
We write relation (8) as
a n n p n 1 , 1 F X p n 1 , 1 F Y = n p n 1 , 1 F X · n p n 1 , 1 F Y
However, lim n p n 1 , 1 F X 2 = lim n n n + 1 n + 1 p n 1 , 1 F X = lim n n + 1 1 + 1 e k = 1 N n λ k 2 k = 1 N n 1 1 + λ k 2 .
Notice that n + 1 k = 1 N n 1 + λ k . It follows that
lim n n + 1 1 + 1 e k = 1 N n λ k 2 k = 1 N n 1 1 + λ k 2 lim n 1 + λ N n 1 + 1 e k = 1 N n λ k 2 k = 1 N n 1 1 + λ k =
= 1 e 2 lim N L N F X + 2 e lim N 1 + λ N k = 1 N λ k k = 1 N 1 1 + λ k + lim N 1 + λ N k = 1 N 1 1 + λ k .
However, for N to be good enough k = 1 N λ k 2 k = 1 N λ k . It follows that if lim N L N F X = 0 then lim N 1 + λ N k = 1 N λ k k = 1 N 1 1 + λ k = 0 too.
The last term obviously vanishes to 0 hence lim n π N F X 2 = 0 .
The same holds for lim n p n 1 , 1 F Y 2 .
Corollary 3.
The distribution $F = \mathrm{Poisson}(\alpha) \otimes \mathrm{Poisson}(\beta)$ never has the leader property.
Proof. 
In this case, F X = P o i s s o n ( α ) and F Y = P o i s s o n ( β ) . We shall focus on F X . Now λ k from Proposition 5 becomes
λ k = α k k ! α k + 1 k + 1 ! + α k + 2 k + 2 ! + = k + 1 α 1 1 + α k + 2 + α 2 k + 2 k + 3 + = C k , α k + 1 α
with 1 C k , α = 1 + α k + 2 + α 2 k + 2 k + 3 + 1 + α 3 + α 2 3 · 4 + = 2 α 2 e α 1 α .
It follows that k + 1 B λ k k + 1 α with B = α 2 e α 1 α .
Therefore
L n F = λ 1 + + λ n 2 1 + λ n k = 1 n 1 1 + λ k 2 α + 3 α + + n + 1 α 2 1 + n + 1 α k = 1 n 1 1 + k + 1 B n + 1 2 n + 2 2 n + 1 + α α 3 B n 1 n ! thus
lim n L n F X = 0 . In the same way lim n L n F Y = 0 . Therefore, according to Corollary 2, F has not the leader property. □
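Corollary 3 can also be illustrated numerically; the sketch below (in R, with the hypothetical helper poisson_an and arbitrarily chosen parameters) estimates $a_n$ for $F = \mathrm{Poisson}(\alpha)\otimes\mathrm{Poisson}(\beta)$ and shows the estimates decreasing towards 0 as $n$ grows.
# Monte Carlo estimate of a_n for the Poisson x Poisson case -- a sketch
poisson_an <- function(n, alpha, beta, N = 5000) {
  mean(replicate(N, {
    x <- rpois(n, alpha); y <- rpois(n, beta)
    any(x == max(x) & y == max(y))   # a leader must attain the maximum in both coordinates
  }))
}
sapply(c(10, 100, 1000), poisson_an, alpha = 3, beta = 5)   # values should decrease toward 0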
The problem is that it is not always true that $\lim_n L_n(F) = 0$. For instance, if $\lambda_n = 2^{2^n}$ the limit is infinity.
Remark 3.
Unfortunately, the method fails to answer the problem:
Is it true that $F = F_1 \otimes F_2$ never has the leader property when $\operatorname{Supp} F_1 = \operatorname{Supp} F_2 = \mathbb{N}$?
We think the answer is positive.

4. The Continuous Case

Recall that a probability distribution $F$ is continuous if $F(\{z\}) = 0$ for any $z \in \mathbb{R}^d$. It is well known that if $F$ is absolutely continuous with respect to the Lebesgue measure $\lambda_d$, then it is continuous.
It is more or less obvious that if $F$ is continuous the following formulae hold:
$$a_n = n\, P\left(Z_j \le Z_n,\ \forall j \in \{1, 2, \dots, n\}\right)$$
$$b_n = n\, P\left(Z_j \ge Z_n,\ \forall j \in \{1, 2, \dots, n\}\right)$$
$$c_n = n(n-1)\, P\left(Z_1 \le Z_k \le Z_2,\ \forall k \in \{3, \dots, n\}\right)\quad\text{if } n \ge 3$$
All of them are immediate consequences of the continuity of $F$. Precisely, if $Z, Z'$ are i.i.d. $F$-distributed random vectors, then $P(Z = Z') = 0$.
Computation rules.
We will need the following result
Lemma 2.
Let $(\Omega, \mathcal{K}, P)$ be a probability space and $(E, \mathcal{E})$, $(H, \mathcal{H})$ be measurable spaces. Let $Z = (X, Y): \Omega \to E\times H$ be a random vector. Suppose that the distribution of $Z$ is decomposable, meaning that it can be written as $F_Z = F_X \otimes F_{Y|X}$, where $F_X$ is a probability on $(E,\mathcal{E})$ and the conditional distribution $F_{Y|X}$ is a transition probability from $(E,\mathcal{E})$ to $(H,\mathcal{H})$.
Let $\phi: E\times H \to \mathbb{R}$ be measurable and bounded. Then
(i)
$E(\phi(Z)\,|\,X) = \int \phi(X, y)\, dF_{Y|X}(y)$.
In the particular case when $X$ and $Y$ are independent, $F_Z = F_X \otimes F_Y$, hence
(ii)
$E(\phi(Z)\,|\,X) = \int \phi(X, y)\, dF_Y(y)$.
Proof. 
According to the definition of the product between a probability and a transition probability, we know that $P(Z \in A\times B\,|\,X) = 1_A(X)\, F_{Y|X}(B) = \int 1_{A\times B}(X, y)\, dF_{Y|X}(y)$ for every $A \in \mathcal{E}$ and $B \in \mathcal{H}$. It follows that (i) holds for simple functions $\phi$, hence it holds for any bounded measurable one. □
Proposition 6.
Let $(Z_k)_{k\ge1}$ be a sequence of i.i.d. $d$-dimensional random vectors. Suppose that $Z_k$ is $F$-distributed. Let $\Phi: \mathbb{R}^{2d} \to [0,1]$ be defined as $\Phi(x,y) = P(x \le Z \le y)$, $x, y \in \mathbb{R}^d$.
Then the following assertions hold:
(A) $P\left(Z_j \le Z_n,\ \forall j \in \{1, 2, \dots, n\}\right) = E[F^{n-1}(Z_1)]$
(B) Suppose moreover that $F$ is continuous. Then
$$a_n = n\int F^{n-1}\, dF$$
$$b_n = n\int \tilde F^{\,n-1}\, dF$$
$$c_n = n(n-1)\iint \Phi^{n-2}(x,y)\, 1_A(x,y)\, dF(y)\, dF(x),$$
where $A = \{(x,y) \in \mathbb{R}^{2d} : x \le y\}$.
In the 2-dimensional case this means that if $F$, the distribution of $Z_1$, is absolutely continuous and has the density $\rho$, then
$$a_n = n\iint F^{n-1}(x,y)\, \rho(x,y)\, dy\, dx$$
$$b_n = n\iint \tilde F^{\,n-1}(x,y)\, \rho(x,y)\, dy\, dx$$
$$c_n = n(n-1)\iiiint_{x_1\le x_2,\, y_1\le y_2} \Phi^{n-2}\left((x_1,y_1),(x_2,y_2)\right)\rho(x_1,y_1)\,\rho(x_2,y_2)\, dx_1\, dx_2\, dy_1\, dy_2.$$
Proof. 
These assertions are consequences of Lemma 2. Namely, let $f: \mathbb{R}^{2d} \to \mathbb{R}$ be a bounded and measurable function and $\phi(z_1,\dots,z_n) = f(z_1, z_n)\cdots f(z_{n-1}, z_n)$.
Let $X = Z_n$, $Y = (Z_1, \dots, Z_{n-1})$, $Z = (X, Y)$. As $X$ and $Y$ are independent, clearly $E(\phi(Z)\,|\,X) = \int \phi(X, y)\, dF_Y(y)$.
However, $Z_1, \dots, Z_{n-1}$ are independent too, hence $F_Y = F^{\otimes(n-1)}$ where $F = F_{Z_j}$. Next, as $\phi$ is a product, it follows that
$$\int \phi(X, y)\, dF_Y(y) = \int f(y_1, X)\cdots f(y_{n-1}, X)\, dF^{\otimes(n-1)}(y) = \left(\int f(u, Z_n)\, dF(u)\right)^{n-1}$$
Thus, we have
$$E\left(f(Z_1, Z_n)\cdots f(Z_{n-1}, Z_n)\,|\,Z_n\right) = \left(E\left(f(Z_1, Z_n)\,|\,Z_n\right)\right)^{n-1}$$
The claims (17) and (18) are simple corollaries of (20):
$P\left(Z_j \le Z_n,\ \forall j \in \{1, 2, \dots, n-1\}\right) = E\left(P(Z_1 \le Z_n, \dots, Z_{n-1} \le Z_n\,|\,Z_n)\right)$
$= \int P(Z_1 \le z, \dots, Z_{n-1} \le z)\, dF(z) = \int P(Z_1 \le z)\cdots P(Z_{n-1} \le z)\, dF(z)$
$= \int F^{n-1}(z)\, dF(z) = E[F^{n-1}(Z_1)]$.
As for claim (19): as the $n(n-1)$ sets $A_{i,j} = \{Z_i \le Z_k \le Z_j,\ \forall k \ne i, j\}$ are a.s. disjoint, we have $c_n = \sum_{i=1}^n\sum_{j\ne i} P(A_{i,j})$.
However, $P(A_{i,j}) = P(Z_i \le Z_k \le Z_j,\ \forall k\ne i,j) = E\left(P(Z_i \le Z_k \le Z_j,\ \forall k\ne i,j\,|\,Z_i, Z_j)\right) = E\left(\prod_{k\ne i,j} P(Z_i \le Z_k \le Z_j\,|\,Z_i, Z_j)\right) = E\left(\left(P(Z_1 \le Z_2 \le Z_3\,|\,Z_1, Z_3)\right)^{n-2}\right)$ (since $(Z_n)_n$ are i.i.d.) $= E\left(\left(\Phi(Z_1, Z_3)\, 1_{Z_1\le Z_3}\right)^{n-2}\right)$ where $1_{Z_1\le Z_3}(\omega) = 1$ if $Z_1(\omega) \le Z_3(\omega)$ and $0$ if $Z_1(\omega) \not\le Z_3(\omega)$. □
Proposition 7.
If the two-dimensional random vector $Z = (X, Y)$ has the leader property and $f: I_1 \to \mathbb{R}$, $g: I_2 \to \mathbb{R}$ are increasing functions, $I_1, I_2 \subset \mathbb{R}$ being intervals, then the random vector $Z' = (f(X), g(Y))$ has the leader property, too.
Proof. 
The first assertion is obvious: according to the definition,
$$a_n = P\left(\exists\, k \in \{1, 2, \dots, n\}:\ Z_j \le Z_k\ \forall j \in \{1, 2, \dots, n\}\right).$$
However, $Z_j \le Z_k \Leftrightarrow X_j \le X_k$ and $Y_j \le Y_k \Rightarrow f(X_j) \le f(X_k)$ and $g(Y_j) \le g(Y_k)$.
For the second one, let $S_X = \operatorname{Supp}(F_X)$ and $S_Y = \operatorname{Supp}(F_Y)$ be the supports of $X$ and $Y$. As we assumed that $X$ and $Y$ are continuous, these two sets are closed and uncountable. Let $\alpha = \sup S_X$ and $\beta = \sup S_Y$, and let $f: (-\infty, \alpha) \to (0,1)$, $g: (-\infty, \beta) \to (0,1)$ be continuous, increasing and such that $f(\alpha - 0) = g(\beta - 0) = 1$.
Then $X_j \le X_k$ iff $f(X_j) \le f(X_k)$ and $Y_j \le Y_k$ iff $g(Y_j) \le g(Y_k)$.
Moreover, $\operatorname{esssup} f(X_j) = \operatorname{esssup} g(Y_j) = 1$. □
Proposition 8.
Let $(Z_n)_{n\ge1}$ be a sequence of i.i.d. $d$-dimensional continuous random vectors that are $F$-distributed. Suppose that the support of $F$, denoted by $S$, is compact.
If $S$ does not have the property L, then $F$ has no leader.
Proof. 
Notice that $F(S) = 1$ by the very definition of $S$. We prove the 2-dimensional case, since the $d$-dimensional case has a similar proof.
Let $z = (x,y)$. In terms of probability distributions, $F(z) = F\left(\left((-\infty,x]\times(-\infty,y]\right)\cap S\right)$.
We claim that $z \in S$ is a leader if and only if $F(z) = 1$. Indeed, if $z$ is a leader, then $S \subset (-\infty,x]\times(-\infty,y]$. Conversely, if $F\left(\left((-\infty,x]\times(-\infty,y]\right)\cap S\right) = 1$, then $F\left(S \setminus \left((-\infty,x]\times(-\infty,y]\right)\right) = 0$, hence $S \subset (-\infty,x]\times(-\infty,y]$ almost surely. It means that $z$ is a leader.
We prove that if $S$ does not have the property L, then $F$ has no leader. Suppose the contrary: $F$ has a leader, meaning that $\liminf a_n > 0$.
Notice that $q := \sup\{F(z) : z \in S\} < 1$.
Indeed, if $q = 1$, we could find a sequence $(z_n)_n$ in $S$ such that $F(z_n) > 1 - \frac{1}{n}$. As $S$ is compact, this sequence has a subsequence converging to some $z \in S$, and in that case $F(z) = 1$, so $z$ would be a leader of $S$, which is impossible. Recall that $F$ is continuous, hence $a_n = n\, E(F^{n-1}(Z))$. Then $a_n \le n\left(\sup\{F(z) : z \in S\}\right)^{n-1} = n q^{n-1} \to 0$ as $n \to \infty$, contradicting the hypothesis $\liminf a_n > 0$. □
The fact that we have a formula for $a_n$ simplifies things a lot.
The results from Lemma 1, Corollary 1 and Proposition 4 become:
Proposition 9.
The following assertions hold:
$1^0$ Let $F$ be a continuous probability distribution on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$. Then $p_{m,n}(F) = \frac{n}{m+n}$.
$2^0$ Let $F = F_1 \otimes F_2$ with $\operatorname{Supp} F_1 \subset \mathbb{R}^d$ finite and $F_2$ a one-dimensional continuous distribution. Suppose that $\operatorname{Supp} F_1$ has the leader $z_0$. Then $a_n \ge F_1(\{z_0\}) > 0$.
$3^0$ Let $F = F_1 \otimes F_2$ with $\operatorname{Supp}(F_2) = \mathbb{N}$ and $F_1$ continuous. Then $F$ does not have the leader property.
$4^0$ If $F = F_1 \otimes F_2 \otimes \dots \otimes F_d$ and all the distributions $F_j$ are continuous, then $a_n = \frac{1}{n^{d-1}}$.
Proof. 
1. Let $(U_i)_i$, $(V_j)_j$ be i.i.d. $F$-distributed random variables and recall that $p_{m,n}(F) = P\left(\max_{1\le i\le m} U_i \le \max_{1\le j\le n} V_j\right)$. Let $S$ be the support of $F$. Notice that $X = \max_{i\le m} U_i$ and $Y = \max_{j\le n} V_j$ have the same support as $U_i$ and $V_j$, namely $S$. Let $U_i' = F(U_i)$, $V_j' = F(V_j)$, $X' = \max_{i\le m} U_i'$, $Y' = \max_{j\le n} V_j'$.
Then $U_i', V_j' \sim U(0,1)$ are standard uniformly distributed and, what is crucial, $\{X \le Y\} = \{X' \le Y'\}$ almost surely, due to the fact that $F|_S$ is increasing. However, $P(X' \le Y') = \int_0^1 x^m\, d(x^n) = \int_0^1 n t^{m+n-1}\, dt = \frac{n}{m+n}$.
2. Let $Z_k = (X_k, Y_k)$ be such that $F_{X_k} = F_1$, $F_{Y_k} = F_2$. Denote $P(X_k = z_0)$ by $p_0$. Thus, $p_0 = F_1(\{z_0\})$. Let $n \ge 2$ be fixed.
Consider the sets $A_j = \{\#\{k : X_k = z_0\} = j\}$. Clearly $P(A_j) = \binom{n}{j} p_0^j (1-p_0)^{n-j}$. Given $A_j$, the probability to have a leader is $p_{n-j,j} = \frac{j}{n}$, from $1^0$. It follows that
$$a_n \ge \sum_{j=1}^n \binom{n}{j} p_0^j (1-p_0)^{n-j}\, p_{n-j,j} = p_0.$$
3. From Proposition 6 we know that $a_n = n\int F^{n-1}\, dF = n\int F_1^{n-1}\, dF_1\int F_2^{n-1}\, dF_2 = n\, p_{n-1,1}(F_1)\int F_2^{n-1}\, dF_2 = \int F_2^{n-1}\, dF_2$, and the last integral vanishes as $n \to \infty$.
4. $a_n = n\, E\left(F^{n-1}(Z_n)\right) = n\int F^{n-1}\, dF = n\prod_{j=1}^d \int F_j^{n-1}\, dF_j = \frac{n}{n^d}$. □

5. The Absolutely Continuous Case

In this section, we study the case when the distribution $F$ of the i.i.d. sequence of random vectors is absolutely continuous. In Section 5.1 we establish general conditions under which $F$ has the leader property. In the following subsections we consider special cases of the distribution $F$ within the general result of Section 5.1. In Section 5.2 we investigate the case when $F$ is the product of an exponential and a convolution of an exponential with a uniform distribution. In Section 5.3 we consider the case when the random vectors have a uniform distribution in a triangle, and in Section 5.4 we study the case when $F$ is a mixture of small uniforms. In Section 5.5 we consider the case built from the exponential distributions $\mathrm{Exp}(1)$ and $\mathrm{Exp}(\lambda)$. In the last subsection we give an example of a leaderless distribution.

5.1. A General Result

Here we look for sufficient conditions under which $F$ has a leader, provided that $F$ is absolutely continuous. The main result of this subsection is Proposition 10.
We will focus on the case $d = 2$. The case $d \ge 3$ is much more difficult and remains an open problem.
Thus, the 2-dimensional random vectors $Z_n$, $n \ge 1$, become $Z_n = (X_n, Y_n)$ and $F = F_X \otimes Q$, where $Q$ is a transition probability from $\mathbb{R}$ to $\mathbb{R}$ with the meaning that $Q(x, B) = P(Y \in B\,|\,X = x)$ for any Borel set $B \subset \mathbb{R}$.
Or, explicitly, one can write
$$E\left(u(X,Y)\,|\,X\right) = \int u(X, y)\, Q(X, dy)\quad\text{for any bounded and measurable function } u.$$
Suppose that $F_X$ is absolutely continuous with respect to the Lebesgue measure and let $\rho$ be its density: $dF_X(x) = \rho(x)\, dx$.
Important remark. Whenever possible we have validated our computations by computer simulations. A brief description of the simulation procedure follows. To generate a random vector $Z = (X, Y)$ in the plane which is $F$-distributed we used the usual algorithm: let $F_X$ be the distribution of $X$ and $Q(x,\cdot)$ be the distribution of $Y$ given that $X = x$. Then generate $X$ with the distribution $F_X$ and after that generate $Y$ with the distribution $Q(X,\cdot)$. As long as these distributions are classical, the procedure is very fast. We used the language “R”. In our case $X \sim \mathrm{Exp}(1)$, $Y = X + U$, $U \sim \mathrm{Uniform}(0,1)$. The following script in “R”:
x=rexp(n);
u=runif(n);
y=x+u
generates a set S n of n random vectors which are F-distributed. To estimate a n we generate N such sets and count the number of cases when S n has a leader. The proportion of these cases should approximate a n if N is great. A script which does that could be as follows (the reader can check the script himself)
marex=rep(0,N);
marey=rep(0,N)
for (k in 1:N) {
  x=rexp(n);
  u=runif(n);
  y=x+u;
  marex[k]=which(x==max(x));
  marey[k]=which(y==max(y))
}
an=length(which(marex==marey))/N
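For convenience, the script above can be wrapped into a reusable function; the following is only a sketch, and the name estimate_leader_prob as well as the default generator are ours, not part of the original code.
# Reusable wrapper around the script above -- a sketch
estimate_leader_prob <- function(n, N = 5000, rgen = function(n) {
  x <- rexp(n); data.frame(x = x, y = x + runif(n))   # default: X ~ Exp(1), Y = X + U
}) {
  hits <- replicate(N, {
    s <- rgen(n)
    which.max(s$x) == which.max(s$y)   # a leader exists iff the argmaxes of x and y coincide
  })
  mean(hits)
}
# estimate_leader_prob(100)   # estimates a_100 for the default distribution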
Returning to the problem: let $n$ be fixed. Order the random variables $(X_j)_{1\le j\le n}$ increasingly as $X_{(1)} \le X_{(2)} \le \dots \le X_{(n)}$. We consider the random vector $(X_{(1)}, X_{(2)}, \dots, X_{(n)})$. This is called the order statistics of $(X_j)_{1\le j\le n}$.
Let $X^* = X_{(n)}$, $X^{**} = X_{(n-1)}$, $X_* = X_{(1)}$, $X_{**} = X_{(2)}$.
The following facts are well known and can be found in any handbook of order statistics, for instance [21], Propositions 4.1.2–4.1.4, pp. 183–185, or [22]:
$\alpha$. The density of $(X_{(1)}, X_{(2)}, \dots, X_{(n)})$ is $p(x_1, \dots, x_n) = n!\, \rho(x_1)\rho(x_2)\cdots\rho(x_n)\, 1_{x_1\le x_2\le\dots\le x_n}$.
$\beta$. The density of $(X^{**}, X^*)$ is $p(x,y) = n(n-1)\, F_X^{n-2}(x)\, \rho(x)\rho(y)\, 1_{x\le y}$.
$\gamma$. The density of $(X_*, X_{**})$ is $p(x,y) = n(n-1)\, \bar F_X^{\,n-2}(y)\, \rho(x)\rho(y)\, 1_{x\le y}$, where $\bar F_X(t) = P(X > t)$.
$\delta$. The density of $X_{**}$ is $p(y) = n(n-1)\, \bar F_X^{\,n-2}(y)\, F_X(y)\, \rho(y)$.
$\varepsilon$. The density of $X^{**}$ is $p(x) = n(n-1)\, F_X^{n-2}(x)\, \bar F_X(x)\, \rho(x)$.
Lemma 3.
With the above notation the following equalities hold:
$$1^0.\quad P(X^* - X^{**} > t) = E\left[\frac{\bar F_X(X^{**}+t)}{\bar F_X(X^{**})}\right] = n(n-1)\int \bar F_X(x+t)\, F_X^{n-2}(x)\, \rho(x)\, dx$$
$$2^0.\quad P(X_{**} - X_* > t) = E\left[\frac{F_X(X_{**}-t)}{F_X(X_{**})}\right] = n(n-1)\int F_X(y-t)\, \bar F_X^{\,n-2}(y)\, \rho(y)\, dy$$
As a consequence, if $X$ is not bounded above (meaning that $\bar F_X(t) > 0$ for any real $t$), then $P(X^* - X^{**} > t) > 0$ for any $t$, and if it is not bounded below (meaning that $F_X(t) > 0$ for any $t$), then $P(X_{**} - X_* > t) > 0$ for any $t$. Thus, if $X$ is unbounded both below and above, then $P(X_{(2)} - X_{(1)} > t)$ and $P(X_{(n)} - X_{(n-1)} > t)$ are positive for all $t$.
Proof. 
To prove $1^0$, notice that
$$p_{X^*|X^{**}=x}(y) = \frac{p(x,y)}{p_{X^{**}}(x)} = \frac{n(n-1)\, F_X^{n-2}(x)\rho(x)\rho(y)\, 1_{x\le y}}{n(n-1)\, F_X^{n-2}(x)\rho(x)\left(1 - F_X(x)\right)} = \frac{\rho(y)\, 1_{x\le y}}{1 - F_X(x)}.$$
However, this is the density of the random variable traditionally written $\zeta = X\,|\,X > x$, defined by $P(\zeta > y) = P(X > y\,|\,X > x) = \frac{\bar F_X(\max(x,y))}{\bar F_X(x)}$. The random variable $\zeta - x$ has the tail $P(\zeta - x > y) = \frac{\bar F_X(x+y)}{\bar F_X(x)}$. Notice that if $X$ is positive, interpreted as a lifetime, the random variable $X - x\,|\,X > x$ is traditionally denoted by $X_x$ and called the residual lifetime at age $x$. Its expected value is denoted in demography by $e_x$ and $P(\zeta - x > t)$ is denoted by ${}_tp_x$.
It follows that $E(X^* - X^{**}\,|\,X^{**} = x) = E(X_x) = e_x$, regardless of $n$, and
$$P(X^* - X^{**} > t\,|\,X^{**} = x) = P(X_x > t) = \frac{\bar F_X(x+t)}{\bar F_X(x)}.$$
Or, in a rigorous writing,
$P(X^* - X^{**} > t\,|\,X^{**}) = \frac{\bar F_X(X^{**}+t)}{\bar F_X(X^{**})}$, hence
$P(X^* - X^{**} > t) = E\left(P(X^* - X^{**} > t\,|\,X^{**})\right) = E\left[\frac{\bar F_X(X^{**}+t)}{\bar F_X(X^{**})}\right] = E\left({}_tp_{X^{**}}\right)$,
$E(X^* - X^{**}) = \int_0^\infty E\left[\frac{\bar F_X(X^{**}+t)}{\bar F_X(X^{**})}\right] dt$.
$2^0$ In the same way,
$$p_{X_*|X_{**}=y}(x) = \frac{p(x,y)}{p_{X_{**}}(y)} = \frac{n(n-1)\, \bar F_X^{\,n-2}(y)\rho(x)\rho(y)\, 1_{x\le y}}{n(n-1)\, \bar F_X^{\,n-2}(y)\, F_X(y)\rho(y)} = \frac{\rho(x)\, 1_{(-\infty,y]}(x)}{F_X(y)}.$$
This is the density of the random variable $X\,|\,X \le y$. It follows that
$P(X_{**} - X_* > t\,|\,X_{**} = y) = \frac{F_X(y-t)}{F_X(y)}$ or, explicitly, $P(X_{**} - X_* > t\,|\,X_{**}) = \frac{F_X(X_{**}-t)}{F_X(X_{**})}$. □
Now we can state a result for non-negative random variables. If they are thought of as waiting times, a useful tool is the concept of hazard rate, also known as failure rate (see [23]).
Definition 5.
Let $X > 0$ be an absolutely continuous random variable, $\bar F_X(x) = 1 - F_X(x)$ be its tail and $\rho_X$ its density. The quantity $\lambda_X = \frac{\rho_X}{\bar F_X}$ is called the hazard rate of $X$ (or of $F_X$). Then
$$\bar F_X(x) = \exp\left(-\int_0^x \lambda_X(u)\, du\right).$$
If $\lambda_X$ is non-decreasing, one says that $X$ is of type IFR (Increasing Failure Rate) and if it is non-increasing, $X$ is of type DFR (Decreasing Failure Rate). Usually one writes $X \in$ IFR in the first case and $X \in$ DFR in the second one. (A better notation would be, of course, $F_X \in$ IFR or $F_X \in$ DFR, but this is the tradition.) Of course $X \in$ IFR$\,\cap\,$DFR means that $X$ is exponentially distributed. All the random variables of type DFR are unbounded above (for instance, see [23]).
Proposition 10.
Let $Z_n = (X_n, Y_n)$, $n \ge 1$, be a sequence of non-negative i.i.d. random vectors in the plane with the distribution $F = F_X \otimes Q$. Let $\lambda_X = \frac{\rho_X}{\bar F_X}$ be the hazard rate of $X_1$. Suppose that
(i)
$Q\left(x, [x-m, x+M]\right) = 1$ for some nonnegative $m, M$, for all $x \ge 0$, and
(ii)
$\sup_{t\in\operatorname{Supp} F_X} \lambda_X(t) = \lambda_0 < \infty$.
Then $a = \liminf a_n \ge e^{-(M+m)\lambda_0} > 0$, hence $F$ has the leader property.
As a particular case, if $X \in$ DFR, then $\lambda_0 = \lambda_X(0) > 0$, thus $a \ge e^{-(M+m)\lambda_X(0)}$.
Proof. 
Let n 2 be fixed. According to Lemma 3. (1 0 ) P X X > t = E F ¯ X X + t F ¯ X X .
However, F ¯ X X + t F ¯ X X = exp 0 X + t λ u d u exp 0 X λ u d u = exp X X + t λ u d u
= exp 0 t λ X + u d u exp 0 t λ u d u (since λ u + X λ u ).
In the second case P X X > t = exp X X + t λ u d u e X X + t λ 0 d u = e λ 0 t .
As a particular case P X X > M + m F X ¯ M + m .
Now suppose that X X > M + m . Let Y and Y the pairs of X and X : precisely, Y , Y = Y i , Y j iff X i = X and X j = X . Then Y Y . Indeed, X m Y X + M and X m Y X + M thus 0 X X M + m Y Y X X + M + m .
The result can be extended to random variables which are unbounded both above and below. We can always write X = X + X where X + = max X , 0 and X = min X , 0 . Suppose that both X + and X are DFR. Then X and X as n in probability.
Then F X ¯ x = F X + ¯ x whenever x 0 and F ¯ X X + t F ¯ X X = F ¯ X X + t F ¯ X X if X 0 . As P X 0 tends to infinity, E F ¯ X X + t F ¯ X X − E F ¯ X + X + t F ¯ X + X tends to 0.
Corollary 4.
If both X + and X are DFR then F has both the leader property and the anti-leader property.
Open question. We were not able to answer the following question: if $F$ has both the leader property and the anti-leader property, is it true or not that $F$ has the extremes property?
If F has both the leader property and the anti-leader property then F has the extremes property.

5.2. Exponential versus Uniform

As an application of Proposition 10 let us consider the following case:
$Z = (X, Y)$, $X \sim \mathrm{Exp}(1)$, $Y = X + V$, $V \sim \mathrm{Uniform}(0,1)$, $X$ independent of $V$. $(Z_n)_{n\ge1}$ are independent copies of $Z$.
Of course $X$ is both IFR and DFR, $X \le Y \le X + 1$ and, according to Proposition 10 with $m = 0$, $M = 1$, we see that $a \ge e^{-1} = 0.36788\ldots$
It follows that Z has the leader property.
The density of $Z$ is $p(x,y) = e^{-x}\, 1_{(0,\infty)}(x)\, 1_{(x,\,x+1)}(y)$, and the exact limit is
$$a = \liminf_{n\to\infty}\int_0^\infty\int_x^{x+1} n\, F^{n-1}(x,y)\, p(x,y)\, dy\, dx.$$
After some computations one sees that, for $y \in (x, x+1)$,
$$F(x,y) = \int_0^x e^{-s}\left(\min(y, s+1) - s\right) ds = \begin{cases} 1 - e^{-(y-1)} + e^{-x}(x+1-y), & \text{if } y \ge 1\\ y - 1 + e^{-x}(x+1-y), & \text{if } y < 1.\end{cases}$$
Computer simulations suggest that a 0.8 .
In this very case we can transform the random vector Z into a bounded one with the same ordering property.
The vector $\zeta = (F_X(X), F_X(Y)) := (\xi, \eta)$ has its support in $[0,1]^2$ and its density is $\pi(s,t) = \frac{1}{1-t}\, 1_A(s,t)$, where $A = \left\{(s,t) : 0 < s < t < 1 - \frac{1-s}{e}\right\}$.
Notice that $\xi$ is standard uniformly distributed and that $\pi$ is unbounded.
One can prove that the density of $\zeta = (F_X(X), F_X(Y))$ is always unbounded if $X \in$ DFR and $V$ is bounded.
A first guess would be that if we want $F$ to have the leader property and $\operatorname{Supp} F$ to be compact, then its density $p$ should be unbounded.
The answer is NO.

5.3. Uniform in a Triangle

Proposition 11.
Let $\alpha, \beta \ge 0$, $\alpha + \beta = 1$, $A_\alpha = \{(x,y) : 0 \le x \le 1,\ x \le y \le \alpha + \beta x\}$ and let $Z = (X,Y)$ be a random vector uniformly distributed on $A_\alpha$. Let $F = \mathrm{Uniform}(A_\alpha)$.
Then $F$ has the leader property if and only if $\alpha < 1$.
Proof. 
The density of Z is p Z x , y = 2 α 1 0 , 1 x 1 x , α + β x y if α < 1 2 · 1 0 , 1 x 1 x , 1 y if α = 1 and the density of X is p X x = 2 1 x 1 0 , 1 x no matter of α .
Let ζ = ln 1 X , ln 1 Y : = ξ , η . Let B α = s , t : 0 s t s + ln 1 1 α .
Let ϕ x , y = ln 1 x , ln 1 y . Notice that ϕ is strongly increasing. Let Z n n be a sequence of i.i.d. F- distributed random vectors and ζ n = ϕ Z n , n 1 . By (S2), Z n n and ζ n n have the same probabilities a n , b n , c n .
The densities are
p ζ s , t = p 1 e s , 1 e t e s e t 1 R + 2 s , t = 2 α 1 B α s , t e s e t if α < 1 2 e s e t 1 [ s , ) t if α = 1
p ξ s = p X 1 e s e s 1 0 , s = 2 e 2 s 1 [ 0 , ) s
p η | ξ = s t = 2 α 1 B α s , t e s e t 2 e 2 s = e s α e t 1 s , s + ln 1 β t if α < 1 e s e t 1 [ s , ) t if α = 1
Thus, ξ Exp 2 hence the hazard rate is λ ξ = 2 and ζ Exp 2 Q with Q s , d t = e s α e t 1 s , s + ln 1 β t d t for α < 1 .
There are two cases.
Case 1. α < 1 . According to Proposition 10 with m = 0 , M = ln 1 β we see that for the vectors ζ n n the limit of a n F ζ n is a F ζ e M + m λ 0 = e 2 ln β = β 2 . As Z n n and ζ n n have the same probabilities a n n it follows that a β 2 > 0 hence F has the leader property.
As a particular case, for α = β = 1 2 it follows that a > 1 4 .
Case 2. α = 1 . We cannot apply Proposition 10. We have to use brute force: namely formulas (17)–(19). We apply them to the original Z .
a n = 0 1 x 1 n F n 1 x , y p x , y d y d x = 2 0 1 x 1 n 2 x y x 2 n 1 d y d x
Change the variable y with t = 2 x y x 2 . As d t = 2 x d y , y = x t = x 2 , y = 1
t = 2 x x 2 we obtain a n = 0 1 x 2 2 x x 2 n x t n 1 d t d x = 0 1 1 x 2 x x 2 n x 2 n d x =
= 0 1 x n 1 2 x n d x 1 2 n .
Changing the variable x with 1 x :
0 1 x n 1 2 x n 1 d x = 0 1 1 x n 1 1 + x n d x = 0 1 1 x 2 n 1 1 + x d x =
= 0 1 1 x 2 n 1 d x + 1 2 n .
To conclude: if α = 1 then a n = 1 2 π Γ n + 1 2 Γ n + 1 2 n 0 .
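The dichotomy in Proposition 11 can also be checked by simulation; the sketch below (in R, with the hypothetical helper triangle_an) uses the marginal density $p_X(x) = 2(1-x)$ to sample from $\mathrm{Uniform}(A_\alpha)$ and estimates $a_n$ both for $\alpha < 1$ and for $\alpha = 1$.
# Monte Carlo estimate of a_n for F = Uniform(A_alpha) -- a sketch
triangle_an <- function(n, alpha, N = 5000) {
  beta <- 1 - alpha
  hits <- replicate(N, {
    x <- 1 - sqrt(runif(n))                        # density 2(1 - x) on (0, 1)
    y <- runif(n, min = x, max = alpha + beta * x) # Y | X = x ~ Uniform(x, alpha + beta x)
    which.max(x) == which.max(y)
  })
  mean(hits)
}
triangle_an(100, alpha = 0.5)   # should stay above beta^2 = 0.25 (Case 1)
triangle_an(100, alpha = 1)     # should be small and decrease with n (Case 2)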
Remark 4.
If we substitute the set A α in the statement of Proposition 11 with the set A α = x , y : 0 x 1 , x α + β y α + β x , or with the set
A α = x , y : 0 x 1 , x y α + β x 2 , then F has the leader property iff α < 1 .

5.4. Mixture of Small Uniforms

It is instructive to compare the two methods — the probabilistic one and the analytic one — in order to decide if a distribution F has the leader property or not.
Let $0 = \alpha_0 < \alpha_1 < \dots < \alpha_n < \dots$ and $0 = \beta_0 < \beta_1 < \dots < \beta_n < \dots$ be such that $\lim \alpha_n = \lim \beta_n = 1$. Let $(p_j)_{j\ge1}$ be a probability distribution on the set of positive integers such that $p_j > 0$ for all $j$.
Let $I_k = (\alpha_{k-1}, \alpha_k]$, $J_k = (\beta_{k-1}, \beta_k]$, $k \ge 1$ and, finally, let $Z = (X, Y)$ be a random vector with the distribution $F = \sum_{k=1}^\infty p_k\, \mathrm{Uniform}(I_k\times J_k) = \sum_{k=1}^\infty p_k\, \mathrm{Uniform}(I_k)\otimes\mathrm{Uniform}(J_k)$.
Its density is $p = \sum_{k=1}^\infty \frac{p_k}{(\alpha_k-\alpha_{k-1})(\beta_k-\beta_{k-1})}\, 1_{I_k\times J_k}$ and the marginal densities are
$$p_X = \sum_{k=1}^\infty \frac{p_k}{\alpha_k-\alpha_{k-1}}\, 1_{I_k},\qquad p_Y = \sum_{k=1}^\infty \frac{p_k}{\beta_k-\beta_{k-1}}\, 1_{J_k}.$$
The probabilistic bound is given by Proposition 10.
Let f , g : [ 0 , ) [ 0 , ) be defined as:
f x = k if x = α k , k N k + x α k α k + 1 α k if α k < x < α k + 1 , k N
g x = k if x = β k , k N k + x β k β k + 1 β k if β k < x < β k + 1 , k N
Let ϕ = f 1 , ψ = g 1 be their inverses. Thus, for all k N we have
ϕ k = α k and k < s < k + 1 ϕ s = α k + s α k + 1 α k .
ψ k = β k and k < s < k + 1 ψ s = β k + s β k + 1 β k where s is the fractional part of s.
Let now ζ = f X , g Y . Notice that
-
f I k = g I k = k 1 , k and, as the mapping h = f , g is strongly increasing
-
Z has the leader property if and only if ζ has the leader property.
Its distribution F ζ has the density
p ζ s , t = k = 1 p k α k + 1 α k β k + 1 β k 1 f I k s ϕ s 1 g J k s ψ t
and, taking into account the construction of f and g
p ζ s , t = k = 1 p k 1 k 1 , k s 1 k 1 , k t . We see that f Y f X 1 .
The criterion from Proposition 10. ( namely lim a n = a e M + m λ 0 = e λ 0 in our case) says that ζ has the leader property if f X has a bounded hazard rate. Now we estimate the hazard rate: the density of f X is p f X s = k = 1 p k 1 k 1 , k s and its tail is F f X ¯ s = s p f X s d s . Suppose that k < s < k + 1 . Then s p f X s d s = s k + 1 p f X s d s + k + 1 p f X s d s = p k + 1 k + 1 s + p k + 2 + =
= p k + 1 + p k + 2 + s p k + 1 . On the same interval [ k , k + 1 ) , p f X s = p k + 1 .
It follows that k < s < k + 1 λ f X s = p k + 1 p k + 1 + p k + 2 + s p k + 1 . For k < s < k + 1 some upper and lower bounds for λ f X are p k + 1 p k + 1 + p k + 2 + λ f X s p k + 1 p k + 2 + p k + 3 +
Lemma 4.
Let $0 = \alpha_0 < \alpha_1 < \alpha_2 < \dots$ be an increasing sequence such that $\lim \alpha_k = 1$. Let $p_k = \alpha_k - \alpha_{k-1}$, $k \ge 1$. Then $\sum_{k=1}^\infty n\,\alpha_{k-1}^{n-1}\left(\alpha_k - \alpha_{k-1}\right) \ge \inf_{k\ge1}\frac{p_{k+1}}{p_k}$.
Proof. 
By Lagrange theorem, we know that
a < b n a n 1 < b n a n b a < n b n 1 .
Apply that to a = a k 2 , b = a k 1 : we obtain the inequality n α k 2 n 1 < α k 1 n α k 2 n α k 1 α k 2 < n α k 1 n 1 .
It follows that n α k 1 n 1 α k α k 1 > α k 1 n α k 2 n α k 1 α k 2 α k α k 1 for any k 2 hence
a n > p 2 n α 1 n 1 + p 2 p 1 α 1 n + p 3 p 2 α 2 n α 1 n + p 4 p 3 α 3 n α 2 n +
Thus, k = 1 α k α k 1 n α k 1 n 1 > k = 2 α k α k 1 n α k 1 n 1 > k = 2 p k p k 1 α k 1 n α k 2 n
> min k 2 p k p k 1 for any n 1 .
We arrived at the following
Remark 5.
If $\sup_{k\ge1}\frac{p_k}{p_{k+1}+p_{k+2}+\dots} < \infty$ (or, which is the same, if $\inf_{k\ge1}\frac{p_{k+1}+p_{k+2}+\dots}{p_k} > 0$), then $Z$ has the leader property. Precisely, $a \ge \exp\left(-\sup_{k\ge1}\frac{p_k}{p_{k+1}+p_{k+2}+\dots}\right)$.
The analytic approach is to estimate a = lim E n F n 1 ζ using brute force:
Notice that if s , t [ n , n + 1 ) then F ζ s , t = 0 s 0 t k = 1 p k 1 k 1 , k x 1 k 1 , k y d y d x
= p 1 + + p n + n s n t p n + 1 d y d x = p 1 + + p n + p n + 1 s t .
E n F n 1 ζ = 0 0 n F n 1 s , t p ζ s , t d t d s = k = 1 k 1 k k 1 k n F n 1 s , t p k d t d s =
= k = 1 0 1 0 1 n p 1 + + p k + p k + 1 s t n 1 p k d t d s k = 1 n p 1 + + p k n 1 p k
inf k 1 p k + 1 p k .
For the last inequality we used Lemma 4.
If we combine the two approaches we arrive at
Proposition 12.
Let $0 = \alpha_0 < \alpha_1 < \dots < \alpha_n < \dots$ and $0 = \beta_0 < \beta_1 < \dots < \beta_n < \dots$ be such that $\lim \alpha_n = \lim \beta_n = 1$. Denote $I_k = (\alpha_{k-1}, \alpha_k]$, $J_k = (\beta_{k-1}, \beta_k]$, $k \ge 1$. Also let $(p_j)_{j\ge1}$ be a probability distribution on the set of positive integers such that $p_j > 0$ for all $j \ge 1$. Finally, let $Z = (X, Y)$ be a random vector with the distribution $F = \sum_{k=1}^\infty p_k\, \mathrm{Uniform}(I_k\times J_k) = \sum_{k=1}^\infty p_k\, \mathrm{Uniform}(I_k)\otimes\mathrm{Uniform}(J_k)$.
Then
$$\liminf a_n = a \ge \max\left(\exp\left(-\sup_{k\ge1}\frac{p_k}{p_{k+1}+p_{k+2}+\dots}\right),\ \inf_{k\ge1}\frac{p_{k+1}}{p_k}\right)$$
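For a concrete illustration of Proposition 12 one may take, for instance, $p_k = 2^{-k}$ and $\alpha_k = \beta_k = 1 - 2^{-k}$ (our choice, not from the paper), in which case the bound equals $\max(e^{-1}, 1/2) = 1/2$; the following R sketch, with the hypothetical helper mixture_an, estimates $a_n$ for this choice.
# Simulation sketch for Proposition 12 with p_k = 2^(-k), alpha_k = beta_k = 1 - 2^(-k)
mixture_an <- function(n, N = 5000) {
  mean(replicate(N, {
    k <- rgeom(n, 0.5) + 1                          # P(K = k) = 2^(-k)
    x <- runif(n, 1 - 2^(-(k - 1)), 1 - 2^(-k))     # X | K = k ~ Uniform(I_k)
    y <- runif(n, 1 - 2^(-(k - 1)), 1 - 2^(-k))     # Y | K = k ~ Uniform(J_k), independent of X
    which.max(x) == which.max(y)
  }))
}
mixture_an(100)   # expected to stay above 1/2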
Now we have a clue for deciding whether $Z$ has the leader property: it works when $Y - X$ is bounded and $X$ is unbounded. However, what can we say if $Y - X$ is unbounded, too?
One is then obliged to use Formula (17).
In that case the following result may help.
Lemma 5.
The following assertions hold:
$1^0$ Let $f: [0,1] \to [0,\infty)$ be a function continuous at $x = 1$.
Then $\lim_{n\to\infty}\int_0^1 n x^{n-1} f(x)\, dx = f(1)$.
$2^0$ Let $G: [0,1] \to [0,1]$ be increasing and differentiable, such that $G(1) = 1$, $G'(1) > 0$, and let $f$ be as above.
Then $\lim_{n\to\infty}\int_0^1 n\, G^{n-1}(x) f(x)\, dx = \frac{f(1)}{G'(1)}$.
$3^0$ Let $\phi: [0,1]^2 \to [0,\infty)$ be continuous.
Then $\lim_{n\to\infty} n(n-1)\int_0^1\int_0^1 (y-x)_+^{n-2}\, \phi(x,y)\, dy\, dx = \phi(0,1)$.
Proof. 
1. Let ε > 0 be arbitrary and M ε = sup 1 ε x 1 f 1 f x . Then M 0 + 0 = 0 . Obviously lim n 0 1 n x n 1 f x d x = lim n 0 1 ε n x n 1 f x d x + lim n 1 ε 1 n x n 1 f x d x = = lim n 1 ε 1 n x n 1 f 1 f 1 f x d x = f 1 L ε
where L ε = lim n n 1 ε 1 x n f 1 f x d x M ε lim n n 1 1 ε n + 1 n + 1 = M ε 0 as ε 0 .
2. Let y = G x . As G is differentiable and increasing we have
0 1 G n 1 x f x d x = 0 1 y n 1 f G 1 y d y G G 1 y hence applying 1) we obtain
lim n 0 1 n G n 1 x f x d x = f G 1 1 G G 1 1 = f 1 G 1 .
3. Let u n = n n 1 0 1 x 1 y x n 2 ϕ x , y d y d x , n 1 and let ε > 0 , δ > 0 be arbitrary small such that x 0 , δ , y 1 δ , 1 ϕ x , y ϕ 0 , 1 ε .
Let A δ = 0 , δ × 1 δ , 1 and B δ = 0 , 1 2 A δ and write u n = v n + w n with
v n = n n 1 0 δ 1 δ 1 y x n 2 ϕ x , y d y d x and
w n = n n 1 B δ y x n 2 ϕ x , y d y d x .
Remark that x , y B δ y x 1 δ < 1 .
It follows that w n n n 1 1 δ n 2 sup ϕ 0 as n .
On the other hand v n n n 1 0 δ 1 δ 1 y x n 2 ϕ 0 , 1 d y d x   n n 1 B δ y x n 2 ϕ x , y ϕ 0 , 1 d y d x ε n n 1 0 1 0 1 y x + n 2 d y d x = ε .
An interesting result is
Lemma 6.
Let $Z = (X, Y)$ be an $F$-distributed random vector where $F = F_X \otimes U(x, x+\varepsilon)$ for some $\varepsilon > 0$. Or, in terms of random variables, $Z = (X, X+V)$, with $V$ independent of $X$ and uniformly distributed on $(0,\varepsilon)$.
Suppose that $\operatorname{Supp} F_X = [0,\infty)$ and that $F_X$ is absolutely continuous with respect to the Lebesgue measure, $f = F_X'$. Suppose that $\lim_{x\to\infty}\frac{f(x)}{f(x-\varepsilon)}$ exists. Then
$$\liminf_{n\to\infty} a_n \ge \lim_{x\to\infty}\frac{f(x)}{f(x-\varepsilon)}$$
Proof. 
Let p x , y = 1 ε f x 1 x , x + ε y the density of the random vector Z . Let us mention first the obvious relations:
F x , = F X x
F x , y F x , x F X x ε , x 0 .
Then the probability a n verifies the inequality
n F n 1 x , y p x , y d y d x n F n 1 x ε , p x , y d y d x = = 0 n F X n 1 x ε f x d x = 0 1 n t n 1 f x t f x t ε d t if t = F X x ε and d t = f x ε d x , x = ε + F X 1 t .
However, according to Lemma 5. lim n 0 1 n t n 1 f x t f x t ε d t = lim t 1 f x t f x t ε = lim x f x f x ε .
Remark 6.
It results that if $\lim_{x\to\infty}\frac{f(x)}{f(x-\varepsilon)} > 0$, then $F$ has the leader property. For instance, this happens if $F_X = \mathrm{Gamma}(\nu, \lambda)$ or if $F_X$ is a Pareto distribution. In the latter case $F$ even has the strong leader property, since $\lim_{x\to\infty}\frac{x^{-m}}{(x-\varepsilon)^{-m}} = 1$.
Here are two more cases in which Lemma 5 applies.

5.5. Exponential versus Exponential

Suppose that $X \sim \mathrm{Exp}(1)$, $V \sim \mathrm{Exp}(\lambda)$, $Y = X + V$, $Z = (X, Y) = (X, X+V)$.
Then the density of $Z$ is
$$p_\lambda(x,y) = \lambda\, e^{(\lambda-1)x}\, e^{-\lambda y}\, 1_{[0,\infty)}(x)\, 1_{[x,\infty)}(y)$$
and its distribution function is $F_\lambda(x,y) = \iint 1_{(0,x)}(s)\, e^{(\lambda-1)s}\, 1_{(s,y)}(t)\, \lambda e^{-\lambda t}\, dt\, ds$. After some computation we find
$$F_\lambda(x,y) = \begin{cases} 1 - \dfrac{\lambda}{\lambda-1}\, e^{-y} + \dfrac{1}{\lambda-1}\, e^{-\lambda y}, & \text{if } x > y\\[2mm] 1 - e^{-x} - \dfrac{e^{-\lambda y}\left(e^{(\lambda-1)x} - 1\right)}{\lambda-1}, & \text{if } x \le y\end{cases}$$
and
$$a_n = a_n(\lambda) = E\left(n F_\lambda^{n-1}(Z)\right) = \int_0^\infty\int_x^\infty n\left(1 - e^{-x} - \frac{e^{-\lambda y}\left(e^{(\lambda-1)x} - 1\right)}{\lambda-1}\right)^{n-1}\lambda\, e^{(\lambda-1)x}\, e^{-\lambda y}\, dy\, dx.$$
The result is:
Proposition 13.
Suppose that $\lambda = m \ge 1$ is a positive integer. Then the following assertions hold:
$1^0$ $\liminf_n a_n(m) \ge \frac{m-1}{m}$, hence $F_\lambda$ has the leader property if $\lambda > 1$.
$2^0$ If $\lambda = 2$, $\lim a_n = \ln 2$.
$3^0$ For each $n$, the sequence $(a_n(m))_{m\ge2,\, m\in\mathbb{N}}$ is increasing. Thus, $\lambda \ge 2 \Rightarrow a(\lambda) \ge \ln 2$.
$4^0$ If $m = 1$, then $\lim_n a_n(1) = 0$.
Proof. 
1 0 Remark that F x , y F x , x if x y . Thus
a n λ 0 x n 1 e x e λ x e λ 1 x 1 λ 1 n 1 λ e λ 1 x e λ y d y d x
= 0 n 1 e x e x e λ x λ 1 n 1 e λ 1 x e λ x d x
= 0 n 1 e x e x e λ x λ 1 n 1 e x d x = 0 1 n λ t 1 + 1 t λ λ 1 n 1 d t
(with the substitution t = 1 e x ).
The function $G: [0,1] \to [0,1]$ defined by $G(t) = \frac{\lambda t - 1 + (1-t)^\lambda}{\lambda-1}$ is increasing,
since $G'(t) = \frac{\lambda}{\lambda-1}\left(1 - (1-t)^{\lambda-1}\right)$ and $\lambda > 1$; moreover $G(0) = 0$, $G(1) = 1$.
Therefore, by assertion (2) of Lemma 5. we have
lim inf a n λ lim 0 1 n G t n 1 d t = 1 G 1 = λ 1 λ .
2 0 If λ = 2 then a n 2 = 0 x n 1 e x e 2 y e x 1 n 1 2 e x e 2 y d y d x
= 0 e x e x 1 n 1 I x d x with
I x = x e x e 2 y n 1 2 n e 2 y d y
Let t = e x e 2 y : thus y = x t = e x e 2 x , y = t = e x and d t = 2 e 2 y d y . Then I x = e x e 2 x e x n t n 1 d t = e n x e x e 2 x n = e n x 1 1 e x n .
Consequently
a n 2 = 0 e x e x 1 n 1 e n x 1 1 e x n d x =
= 0 1 e x n 1 1 1 e x n d x
= 0 1 e x n 1 1 e x 2 n 1 d x = 0 1 t n 1 t 2 n 1 1 t d t and, finally
$$a_n(2) = \int_0^1 t^{n-1}\left(1 + t + \dots + t^{n-1}\right) dt = \frac{1}{n} + \frac{1}{n+1} + \dots + \frac{1}{2n-1}.$$
It is well known that $\lim a_n(2) = \ln 2 = 0.69315\ldots$
3 0 Let us put u = e x , v = e λ y . The integral becomes
a n λ = 0 1 1 u λ 0 u λ n 1 u u λ 1 1 λ 1 v n 1 d v d u .
With the new substitution t = 1 u u λ 1 1 λ 1 v it becomes
a n λ = 0 1 λ 1 u u λ 1 u n 1 u u u λ λ 1 n d u .
Or
a n λ = 0 1 λ 1 1 u n u u λ 1 1 u 1 + u + + u λ 2 λ 1 n d u
Now denote S λ = 1 + u + + u λ 2 and simplify 1 u : we arrive at a n λ = 0 1 λ 1 1 u n 1 u S λ 1 1 u S λ λ 1 n d u which expands as
a n λ = k = 0 n 1 0 1 1 u n 1 1 u S λ λ 1 k d u
Let J λ n , k = 0 1 1 u n 1 1 u S λ λ 1 k d u . We claim that the sequence J λ n , k λ 2 , λ N is increasing.
Notice that
1 u S λ λ 1 = 1 u 1 + 1 + u + + 1 + u + + u λ 2 λ 1 = 1 u λ 1 + ( λ 2 ) u + + u λ 2 λ 1
It is enough to check that λ 1 + ( λ 2 ) u + + u λ 2 λ 1 < λ + ( λ 1 ) u + + 2 u λ 2 + u λ 1 λ or
λ λ 1 + ( λ 2 ) u + + u λ 2 < λ 1 λ + ( λ 1 ) u + + 2 u λ 2 + u λ 1 which is obvious since the difference between the right hand term and the left one is j = 1 λ 1 j u j .
4 0 If λ = 1 then
p x , y = e y 1 [ 0 , ) x 1 x , y , F x , y = 1 e x x e y if x y
a n = 0 x n 1 e x x e y n 1 e y d y d x = 0 0 e x n 1 e x t x n 1 d t d x
(with t = e y ). Let u = 1 e x t x . The integral becomes
a n = 0 1 x 1 e x x e x 1 e x n u n 1 d u d x = 0 1 x 1 e x n 1 e x x e x n d x
Let t = 1 e x hence x = ln 1 t , d x = d t 1 t . We arrive at
a n = 0 1 t n t + 1 t ln 1 t n 1 t ln 1 t d t = 0 1 t n f n t t f t d t = 0 1 1 t f t f t t n x n 1 d x d t
with f t = t + 1 t ln 1 t . The function f : 0 , 1 0 , 1 is injective and
f 1 = 1 = f 1 1 . Changing the integration order we obtain
a n = 0 1 x f 1 x n x n 1 t f t d t d x = 0 1 n x n 1 x f 1 x d t t f t d x = 0 1 n x n 1 Φ x d x with
Φ x = x f 1 x d t t f t = x f 1 x d t 1 t ln 1 t = 1 f 1 x 1 x d t t ln t .
As a b d t t ln t = ln ln b ln a we arrive at Φ x = ln ln 1 f 1 x ln 1 x .
We do not know Φ x but it is easy to see that h y = Φ f y = ln ln 1 y ln 1 f y has the property that h 1 = Φ 1 = 0 .
Indeed, applying the Hospital rule we have h 1 = ln lim y 1 ln 1 y ln 1 f y .
However, lim y 1 ln 1 y ln 1 f y = lim t 0 ln t ln t t ln t = lim t 0 1 / t ln t / t t ln t = = lim t 0 1 ln t ln t = 1 . It follows that Φ 1 = h 1 = ln 1 = 0 . According to Lemma 5. the integral converges to f 1 1 = 1 hence a = lim a n = 0 .
This ends the proof. □
Corollary 5.
Let $A = \{(x,y) \in \mathbb{R}^2 : 0 < x < y < 1\}$ and $B = \{(x,y) \in \mathbb{R}^2 : 0 \le x \le y \le 1\}$. Consider the distributions $F_\lambda$ on $[0,1]^2$ with the densities
$$p_\lambda(x,y) = \begin{cases}\dfrac{\lambda\,(1-y)^{\lambda-1}}{(1-x)^{\lambda}}\, 1_A(x,y), & \text{if } \lambda > 1\\[2mm] \dfrac{1}{1-x}\, 1_B(x,y), & \text{if } \lambda = 1\end{cases}$$
where $\lambda$ is a positive integer.
Then the following assertions hold:
$1^0$ $F_\lambda$ has the leader property for $\lambda \ge 2$.
$2^0$ $F_\lambda$ does not have the leader property if $\lambda = 1$.
In the following we shall try to justify, using computer simulation, that our results are correct. We chose the case “exponential vs. exponential” (see Proposition 13, $2^0$) since this is one of the rare cases when an analytical expression for $a_n$ can be established.
Therefore, let $Z = (X, X+U)$ with $X \sim \mathrm{Exp}(1)$ and $U \sim \mathrm{Exp}(2)$, $X$ and $U$ independent. According to Proposition 13, $2^0$, we have $a_n = \frac{1}{n} + \frac{1}{n+1} + \dots + \frac{1}{2n-1}$ with $\lim_{n\to\infty} a_n = \ln 2 = 0.69315\ldots$ To check that statistically, construct the estimator $\theta_n = \frac{K}{N}$ = (number of sets which have a leader)/(number of simulations). Then $K$ is distributed as Binomial$(N, a_n)$. Choose an accepted risk $\varepsilon = 0.002$. The confidence interval $J_n(\varepsilon)$ for $K$ lies between the quantile of 0.001 and the quantile of 0.999. For every $n$ we made $N = 5000$ simulations, five times, obtaining the results $K_1, K_2, \dots, K_5$. Here are the results for $\varepsilon = 0.002$:
$n = 5$: $a_5 = 0.7456349$, $J_5(\varepsilon) = [3632, 3823]$, $K_1 = 3655$, $K_2 = 3730$, $K_3 = 3708$, $K_4 = 3740$, $K_5 = 3703$;
$n = 50$: $a_{50} = 0.6981722$, $J_{50}(\varepsilon) = [3390, 3591]$, $K_1 = 3550$, $K_2 = 3458$, $K_3 = 3559$, $K_4 = 3518$, $K_5 = 3540$;
$n = 100$: $a_{100} = 0.6956534$, $J_{100}(\varepsilon) = [3377, 3578]$, $K_1 = 3484$, $K_2 = 3481$, $K_3 = 3497$, $K_4 = 3452$, $K_5 = 3476$;
$n = 1000$: $a_{1000} = 0.6933972$, $J_{1000}(\varepsilon) = [3366, 3567]$, $K_1 = 3423$, $K_2 = 3459$, $K_3 = 3396$, $K_4 = 3506$, $K_5 = 3471$.
One can easily see that $K_1, K_2,\dots,K_5$ never fall outside the confidence interval. The conclusion is that we cannot reject the hypothesis that $a_n = \frac1n+\frac1{n+1}+\dots+\frac1{2n-1}$.
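A minimal Monte Carlo sketch of the check just described is given below (our own illustration; the seed, the helper names and the use of numpy are assumptions, and $\mathrm{Exp}(2)$ is taken to mean rate $2$). A sample has a leader if and only if the observation that maximizes $X$ also maximizes $Y = X+U$.

```python
# Monte Carlo check (illustration only) of a_n for Z = (X, X+U), X ~ Exp(1), U ~ Exp(2).
import numpy as np

rng = np.random.default_rng(0)
N = 5000

def has_leader(n):
    x = rng.exponential(1.0, n)      # Exp(1)
    y = x + rng.exponential(0.5, n)  # Exp(2), i.e. rate 2 (scale 1/2)
    j = np.argmax(x)
    return y[j] == y.max()

for n in (5, 50, 100, 1000):
    K = sum(has_leader(n) for _ in range(N))
    a_n = sum(1.0 / k for k in range(n, 2 * n))  # 1/n + ... + 1/(2n-1)
    print(n, K, round(K / N, 4), round(a_n, 4))
```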

5.6. A Leaderless Distribution

Let $X\sim U(0,1)$ and $V\sim U(0,\alpha)$ be independent, $Y = X+V$, $Z = (X,Y) = (X, X+V)$, $\alpha>0$.
The random vector $Z$ has the density $p(x,y) = \frac1\alpha\,1_{(0,1)}(x)\,1_{(x,x+\alpha)}(y)$.
One can easily see that for every bounded measurable $f$ we have
$$Ef(Z) = Ef(X, X+V) = \frac1\alpha\int_0^1\int_0^{\alpha} f(x, x+u)\,du\,dx = \frac1\alpha\int_0^1\int_x^{x+\alpha} f(x,y)\,dy\,dx.$$
Note that the distribution of the random vector Z is the following:
$$F(x,y) = \begin{cases} x-\dfrac{\left((x+\alpha-y)_+\right)^2-\left((\alpha-y)_+\right)^2}{2\alpha} & \text{if }\ x\le y,\\[2mm] \dfrac{y^2-\left((y-\alpha)_+\right)^2}{2\alpha} & \text{if }\ x>y.\end{cases}$$
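The piecewise expression above is easy to check numerically. The following sketch (our own illustration; the names and the choice $\alpha = 0.7$ are arbitrary) compares it with an empirical estimate of $P(X\le x,\,Y\le y)$ at a few points.

```python
# Numerical check (illustration only) of the displayed distribution function
# of Z = (X, X + V), X ~ U(0,1), V ~ U(0,alpha) independent.
import numpy as np

rng = np.random.default_rng(0)

def pos(t):
    return t if t > 0 else 0.0

def F_closed(x, y, alpha):
    if x <= y:
        return x - (pos(x + alpha - y) ** 2 - pos(alpha - y) ** 2) / (2 * alpha)
    return (y ** 2 - pos(y - alpha) ** 2) / (2 * alpha)

alpha, m = 0.7, 10 ** 6
X = rng.uniform(0.0, 1.0, m)
Y = X + rng.uniform(0.0, alpha, m)
for x, y in [(0.3, 0.5), (0.5, 0.9), (0.8, 1.2), (0.9, 0.4)]:
    empirical = np.mean((X <= x) & (Y <= y))
    print((x, y), round(empirical, 4), round(F_closed(x, y, alpha), 4))
```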
Then
$$a_n = nE\left(F^{n-1}(Z)\right) = \frac1\alpha\int_0^1\int_x^{x+\alpha} n\left[x-\frac{\left((x+\alpha-y)_+\right)^2-\left((\alpha-y)_+\right)^2}{2\alpha}\right]^{n-1}dy\,dx = I_1^n+I_2^n,$$
with
$$I_1^n = \frac1\alpha\int_0^{\alpha}\int_x^{x+\alpha} n\left[x-\frac{\left((x+\alpha-y)_+\right)^2-\left((\alpha-y)_+\right)^2}{2\alpha}\right]^{n-1}dy\,dx$$
and
$$I_2^n = \frac1\alpha\int_{\alpha}^1\int_x^{x+\alpha} n\left[x-\frac{\left((x+\alpha-y)_+\right)^2-\left((\alpha-y)_+\right)^2}{2\alpha}\right]^{n-1}dy\,dx.$$
Case 1. Suppose that α < 1 .
If $x<\alpha$ then $F(x,y)\le F(\alpha, 2\alpha) = \alpha<1$, hence $I_1^n\le\frac1\alpha\int_0^{\alpha}\int_x^{x+\alpha} n\,\alpha^{n-1}dy\,dx = n\alpha^n\to0$.
Next, since $y\ge x\ge\alpha$ in $I_2^n$, we have
$$I_2^n = \frac1\alpha\int_{\alpha}^1\int_x^{x+\alpha} n\left[x-\frac{(x+\alpha-y)^2}{2\alpha}\right]^{n-1}dy\,dx.$$
In the inner integral let $t = x+\alpha-y$. The new limits: $y = x\Rightarrow t = \alpha$, $y = x+\alpha\Rightarrow t = 0$.
Thus, $I_2^n = \frac1\alpha\int_{\alpha}^1\int_0^{\alpha} n\left(x-\frac{t^2}{2\alpha}\right)^{n-1}dt\,dx = \frac1\alpha\int_0^{\alpha}\int_{\alpha}^1 n\left(x-\frac{t^2}{2\alpha}\right)^{n-1}dx\,dt$.
If we put $u = x-\frac{t^2}{2\alpha}$, $du = dx$, then $I_2^n$ becomes $\frac1\alpha\int_0^{\alpha}\int_{\alpha-\frac{t^2}{2\alpha}}^{1-\frac{t^2}{2\alpha}} n\,u^{n-1}du\,dt$, or
$$I_2^n = \frac1\alpha\int_0^{\alpha}\left[\left(1-\frac{t^2}{2\alpha}\right)^n-\left(\alpha-\frac{t^2}{2\alpha}\right)^n\right]dt,$$
and the last integral clearly converges to $0$ by the Beppo Levi theorem.
Case 2. If $\alpha = 1$ we have $a_n = \int_0^1\int_x^{x+1} n\left[x-\frac{\left((x+1-y)_+\right)^2-\left((1-y)_+\right)^2}{2}\right]^{n-1}dy\,dx$. In this case, we put
$$I_1^n = \int_0^1\int_x^1 n\left[F(x,y)\right]^{n-1}dy\,dx < \int_0^1\int_x^1\frac{n}{2^{n-1}}\,dy\,dx = \frac{n}{2^n}\to0$$
(on this region $y\le1$, hence $F(x,y) = \frac{x(2y-x)}{2}\le\frac12$). Note that
$$I_2^n = \int_0^1\int_1^{1+x} n\left[x-\frac{(1+x-y)^2}{2}\right]^{n-1}dy\,dx = \int_0^1\int_0^x n\left(x-\frac{t^2}{2}\right)^{n-1}dt\,dx.$$
If we make the substitution $t = xz$ we obtain
$$I_2^n = \int_0^1 x^n\int_0^1 n\left(1-\frac{x z^2}{2}\right)^{n-1}dz\,dx\le\frac{\sqrt{2n}}{(2n+1)/2}\to0\ \text{ as } n\to\infty.$$
So $a_n\to0$ in this case, too. We use the following nice fact:
Lemma 7.
If $n$ is large enough, then $I_{n,\alpha} = \int_0^1 n\left(\left(1-\alpha x^2\right)_+\right)^{n-1}dx\le\sqrt{\dfrac{n}{\alpha}}$.
Proof. 
Let $J_n = \int_0^1 n\left(1-x^2\right)^{n-1}dx$. Thus, $J_1 = 1$ and, integrating by parts,
$$J_n = \int_0^1 n\left(1-x^2\right)^{n-1}(x)'\,dx = n x\left(1-x^2\right)^{n-1}\Big|_0^1 + \int_0^1 2n(n-1)\,x^2\left(1-x^2\right)^{n-2}dx$$
$$= \int_0^1 2n(n-1)\left(1-\left(1-x^2\right)\right)\left(1-x^2\right)^{n-2}dx = 2n\,J_{n-1}-(2n-2)\,J_n,$$
hence
$$J_n = \frac{2n}{2n-1}\,J_{n-1} = \dots = \frac12\cdot\frac{2^n n!}{1\cdot3\cdots(2n-1)} = \frac{\left(2^n n!\right)^2}{2\,(2n)!}.$$
By Stirling's formula
$$J_n\approx\frac{2\pi n}{4\sqrt{\pi n}} = \frac12\sqrt{\pi n}\approx0.88623\,\sqrt{n},$$
so $J_n<\sqrt{n}$, at least for $n$ large enough.
Now $I_{n,\alpha} = \int_0^1 n\left(\left(1-\alpha x^2\right)_+\right)^{n-1}dx = \frac{1}{\sqrt{\alpha}}\int_0^{\sqrt{\alpha}} n\left(\left(1-t^2\right)_+\right)^{n-1}dt\le\sqrt{\frac{n}{\alpha}}$ for $n$ large enough. □
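Both the closed form $J_n = \frac{(2^n n!)^2}{2(2n)!} = \frac{4^n}{2\binom{2n}{n}}$ and the bound $I_{n,\alpha}\le\sqrt{n/\alpha}$ are easy to confirm numerically. The sketch below (illustration only; it assumes scipy for the quadrature) prints $J_n$, its closed form and the Stirling approximation, then a few values of $I_{n,\alpha}$ next to the bound.

```python
# Numerical sanity check (illustration only) of Lemma 7 and of the formula for J_n.
import math
from scipy.integrate import quad

def J(n):
    return quad(lambda x: n * (1 - x * x) ** (n - 1), 0.0, 1.0)[0]

def I(n, alpha):
    upper = min(1.0, 1.0 / math.sqrt(alpha))   # the integrand vanishes beyond 1/sqrt(alpha)
    return quad(lambda x: n * (1 - alpha * x * x) ** (n - 1), 0.0, upper)[0]

for n in (5, 20, 100):
    closed = 4 ** n / (2 * math.comb(2 * n, n))
    print(n, round(J(n), 6), round(closed, 6), round(0.5 * math.sqrt(math.pi * n), 6))

for n, alpha in ((50, 0.5), (50, 2.0), (200, 3.0)):
    print(n, alpha, round(I(n, alpha), 4), "<=", round(math.sqrt(n / alpha), 4))
```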
Case 3. $\alpha>1$. Now we put
$$I_1^n = \frac1\alpha\int_0^1\int_x^{\alpha} n\left[x-\frac{\left((x+\alpha-y)_+\right)^2-\left((\alpha-y)_+\right)^2}{2\alpha}\right]^{n-1}dy\,dx\le\frac1\alpha\int_0^1\int_x^{\alpha} n\left(1-\frac1{2\alpha}\right)^{n-1}dy\,dx\to0,$$
$$I_2^n = \frac1\alpha\int_0^1\int_{\alpha}^{x+\alpha} n\left[x-\frac{(x+\alpha-y)^2}{2\alpha}\right]^{n-1}dy\,dx = \frac1\alpha\int_0^1\int_0^x n\left(x-\frac{t^2}{2\alpha}\right)^{n-1}dt\,dx$$
$$= \frac1\alpha\int_0^1 x^n\int_0^1 n\left(1-\frac{x}{2\alpha}z^2\right)^{n-1}dz\,dx\le\frac1\alpha\int_0^1 x^n\sqrt{\frac{2\alpha n}{x}}\,dx\to0,$$
according to the same Lemma.
To conclude:
Proposition 14.
The distribution $F = U(0,1)\otimes Q$ with $Q_x = U(x, x+\alpha)$ never has the leader property.
The result is counterintuitive: one would expect $F$ to have the leader property if $\alpha$ is small enough. After all, if $\alpha = 0$, then $F$ is the comonotone copula, $F(x,y) = \min(x,y)$, and the latter has even the strong leader and anti-leader properties.
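A quick Monte Carlo sketch of Proposition 14 (our own illustration; the seed, sample sizes and helper names are arbitrary choices) shows the same picture: whatever the value of $\alpha$, small or large, the observed frequency of samples possessing a leader decays as $n$ grows.

```python
# Monte Carlo illustration of Proposition 14 for Z = (X, X + V),
# X ~ U(0,1), V ~ U(0,alpha) independent: leader frequencies decay with n.
import numpy as np

rng = np.random.default_rng(0)

def has_leader(n, alpha):
    x = rng.uniform(0.0, 1.0, n)
    y = x + rng.uniform(0.0, alpha, n)
    j = np.argmax(x)                 # only the maximizer of X can dominate all others
    return y[j] == y.max()

for alpha in (0.1, 1.0, 5.0):
    freqs = [np.mean([has_leader(n, alpha) for _ in range(2000)]) for n in (10, 100, 1000)]
    print(alpha, [round(f, 3) for f in freqs])
```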

6. Conclusions

In our paper, we initiated the study of a problem regarding the probability of the existence of a leader or an anti-leader in a sequence of $d$-dimensional i.i.d. random vectors $\left(Z_k\right)_{k\ge1}$ with probability distribution $F$. We searched for conditions on the distribution $F$ implying that $F$ has, or does not have, the leader property, the anti-leader property or the extremes property. Many open problems remain. The main one is to find a computable criterion to decide whether $F$ has the leader property or not. Other questions seemed easy at first glance, but we could not answer them. For instance:
-
Is it true that if F has both the leader and the anti-leader property then it has the extremes property?
-
Is it true that if for all $j\ge1$ the components of $Z_j$ are independent ($F = F_1\otimes F_2\otimes\dots\otimes F_d$) and each has an infinite number of values, then $F$ cannot have the leader property?
-
Is it true that if for all $j\ge1$ the components of $Z_j$ are independent and all but one have a finite number of values, then $F$ has the leader property?

Author Contributions

Conceptualization, G.Z., M.R., C.Z.R. and A.M.R.; methodology, G.Z., M.R., C.Z.R.; validation, G.Z., M.R., C.Z.R. and A.M.R.; resources, M.R.; writing—original draft preparation, G.Z., M.R., C.Z.R. and A.M.R.; writing—review and editing, C.Z.R. and A.M.R.; supervision, G.Z., M.R., C.Z.R. and A.M.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.
