Article

Tight Bounds for Joint Distribution Functions of Order Statistics Under k-Independence

by Andrzej Okolewski 1,* and Barbara Blazejczyk-Okolewska 2
1 Institute of Mathematics, Lodz University of Technology, 93-590 Lodz, Poland
2 Division of Dynamics, Lodz University of Technology, 90-537 Lodz, Poland
* Author to whom correspondence should be addressed.
Entropy 2025, 27(12), 1250; https://doi.org/10.3390/e27121250
Submission received: 31 October 2025 / Revised: 2 December 2025 / Accepted: 9 December 2025 / Published: 11 December 2025
(This article belongs to the Special Issue Statistical Inference: Theory and Methods)

Abstract

The present study investigates the problem of determining sharp bounds for key reliability and distributional characteristics associated with order statistics. We establish pointwise sharp two-sided bounds for linear combinations of joint distribution functions and joint reliability functions of selected order statistics based on k-independent and identically distributed random variables. The proposed framework is general and also applies to arbitrarily dependent observations. The obtained results provide exact bounds for the expected values of functions of order statistics corresponding to finite-valued random variables. Furthermore, the study yields the best possible upper and lower bounds for the joint reliability function of semicoherent systems with shared exchangeable k-independent components.

1. Introduction

Order statistics have found important applications in many diverse areas, including life testing and reliability, robust inference, statistical quality control, filtering theory, and signal and image processing (see, e.g., [1]). The properties of order statistics based on independent and identically distributed random variables and, to a large extent, on independent but non-identically distributed samples are well-established and have been extensively studied (see, e.g., [2,3]). Several classical results obtained for independent observations have been extended to dependent samples with prescribed joint distributions or partially known moments (see, e.g., [4,5,6,7,8,9,10,11]).
Research on the extremal properties of order statistic distributions under dependence uncertainty has primarily focused on cases where no restrictions are imposed on the interdependence of observations. In these frameworks, either the one-dimensional marginals are assumed to be known [12,13,14,15,16,17], or the distributions of the maxima (or minima) of all k-tuples are assumed to be identical and predefined (see [18,19]). Few studies have investigated the optimal estimation of distribution functions of order statistics from samples of dependent random variables with partially known dependence structures. Kemperman’s [20] analysis of k-independent, identically distributed observations yielded the first significant result in this field, providing a general approach for deriving pointwise sharp bounds for the distribution functions of order statistics and establishing the best-possible upper bounds for single-order statistics from pairwise independent observations. Extensions of Kemperman’s results to piecewise uniform marginal copulas and linear combinations of distribution functions of single-order statistics were presented in [21]. Mallows [22] made another significant contribution by considering three-element, two-independent samples with uniform marginals and constructing explicit extremal distributions that maximize the distribution function of the minimum. Furthermore, Okolewski [23] analyzed the extremal properties of order statistic distributions for dependent samples with partially specified multivariate marginals when the marginal copula diagonals up to a certain dimension are known.
The present study is concerned with determining pointwise sharp bounds for linear combinations of joint distribution functions and joint reliability functions of selected order statistics derived from identically distributed k-independent random variables. The problem is reformulated as a moment problem and solved using a geometric approach. The bounds established are significant because, in particular, they hold under minimal assumptions: they remain valid for arbitrarily dependent random variables, without requiring knowledge of their joint dependence structure. In reliability theory, these results enable the derivation of best-possible lower and upper bounds for the reliability functions of semicoherent systems with shared, exchangeable components when the dependence structure among component lifetimes is completely unspecified or exhibits k-independence (see Remark 3). Additionally, the bounds provide precise expectation ranges for functions of order statistics when the underlying random variables take values in a finite set. In the case of possibly dependent random variables, these results may be applicable in the context of largest-claims (LC) reinsurance, where conservative bounds are essential for risk assessment and pricing under unknown dependence structures (see Remark 2).
This paper is organized as follows: Section 2 derives sharp distribution bounds for order statistics and illustrates their applications. Section 3 presents explicit examples that demonstrate the theoretical results.

2. Distribution Bounds for Order Statistics

Suppose we have $n$ random variables $X_1, \ldots, X_n$, and order statistics $X_{1:n}, \ldots, X_{n:n}$ based on them. Let $x_1 < \cdots < x_q$ be distinct real numbers. The number $q$ may be greater than, equal to, or less than $n$. We consider multivariate marginal distribution functions
$$F_{j_1,\ldots,j_m:n}(x_{\ell_1}, \ldots, x_{\ell_m}) = P(X_{j_1:n} \le x_{\ell_1}, \ldots, X_{j_m:n} \le x_{\ell_m}) \tag{1}$$
of several order statistics $X_{j_1:n}, \ldots, X_{j_m:n}$ for some $1 \le j_1 < \cdots < j_m \le n$, evaluated at points belonging to the fixed finite set $\{x_1, \ldots, x_q\}$. It suffices to consider only strictly increasing arguments $x_{\ell_1} < \cdots < x_{\ell_m}$, since otherwise redundant terms can be removed.
More generally, we study arbitrary linear combinations
$$L(X) := \sum_{m=1}^{n \wedge q} \sum_{1 \le j_1 < \cdots < j_m \le n} \; \sum_{1 \le \ell_1 < \cdots < \ell_m \le q} c_{j_1,\ldots,j_m}^{\ell_1,\ldots,\ell_m} \, F_{j_1,\ldots,j_m:n}(x_{\ell_1}, \ldots, x_{\ell_m})$$
involving multivariate marginal distribution functions of order statistics evaluated at the elements of the finite set $\{x_1, \ldots, x_q\}$. Here, $n \wedge q = \min(n, q)$. By setting some coefficients $c_{j_1,\ldots,j_m}^{\ell_1,\ldots,\ell_m} = 0$, various reductions are possible. The value $L(X)$ is uniquely determined when the distribution of the vector $X = (X_1, \ldots, X_n)$ is fully known. When only partial information on $X$ is available, $L(X)$ takes values within a certain range.
In statistical theory, it is well known that for a set of three or more random variables to be mutually independent, pairwise independence is necessary but not sufficient. A classic example of three discrete variables that are pairwise independent but not mutually independent, frequently cited in the statistical literature, is due to S. Bernstein (see Cramér [24]). For an overview of generalizations of this example to cases with more than three variables and continuous distributions, see [25]. In this work, the author constructs an infinite sequence of random variables such that the components of any proper subset are independent if and only if the size of the subset is less than or equal to a fixed positive integer k. Such random variables are called k-independent. k-independence is a powerful tool because it provides a balance between full independence and arbitrary dependence. It is often used to reduce the amount of randomness required in probabilistic algorithms, for example, when weaker sources of randomness are sufficient for analyses that use Chernoff–Hoeffding bounds under limited independence, such as the analysis of randomized algorithms for random sampling (see [26]).
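Bernstein's construction can be checked by direct enumeration. The following minimal Python sketch (our illustration; the variable names are ours) lists the four equally likely outcomes of two fair coin flips together with their XOR, verifies that every pair of coordinates is independent, and shows that the triple is not:

```python
from itertools import product

# Bernstein's example (see Cramer [24]): X1, X2 are fair coin flips and
# X3 = X1 XOR X2.  Each pair is independent, but the triple is not:
# 2-independence without 3-independence.
outcomes = [(z1, z2, z1 ^ z2) for z1, z2 in product([0, 1], repeat=2)]

def prob(event):
    """Probability of an event under the uniform distribution on the outcomes."""
    return sum(1 for w in outcomes if event(w)) / len(outcomes)

# Every pair (i, j) factorizes ...
for i, j in [(0, 1), (0, 2), (1, 2)]:
    for a, b in product([0, 1], repeat=2):
        joint = prob(lambda w: w[i] == a and w[j] == b)
        assert joint == prob(lambda w: w[i] == a) * prob(lambda w: w[j] == b)

# ... but the triple does not: P(X1 = X2 = X3 = 1) = 0, not 1/8.
print(prob(lambda w: w == (1, 1, 1)))  # 0.0
```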
We are interested in determining sharp lower and upper bounds for $L(X)$ over the set of all possible distributions of $k$-independent vectors $X$ having the same one-dimensional marginal distribution $F$. Here, $X$ is $k$-independent if every $k$-tuple $(X_{j_1}, \ldots, X_{j_k})$ is independent (for $k > 1$), while in the case $k = 1$, the components may be arbitrarily dependent. Our approach transforms this problem into a moment problem based on Kemperman's characterization of $k$-independence [20] [Theorem 1]: A random vector $Y = (Y_0, Y_1, \ldots, Y_q)$ has the same distribution as the vector $B = (B_0, B_1, \ldots, B_q)$, where
$$B_p = \sum_{j=1}^{n} \mathbf{1}(x_p < X_j \le x_{p+1}), \quad p = 0, 1, \ldots, q, \tag{2}$$
associated to some $k$-independent $X$, if and only if $Y$ takes values in the set
$$\left\{ (y_0, y_1, \ldots, y_q) \in \mathbb{Z}_+^{q+1} : \sum_{p=0}^{q} y_p = n \right\}$$
and satisfies the moment conditions
$$E \prod_{p=0}^{q} (Y_p)_{i_p} = (n)_{r_i} \prod_{p=0}^{q} \pi_p^{i_p} \quad \text{for all } (i_0, \ldots, i_q) \in I_k^{q+1}, \tag{3}$$
where $\mathbf{1}(s) = 1$ if $s$ is true and $\mathbf{1}(s) = 0$ otherwise, $\mathbb{Z}_+$ is the set of non-negative integers, $(s)_j = s(s-1)\cdots(s-j+1)$ denotes the falling factorial, $r_i = \sum_{p=0}^{q} i_p$, and
$$I_k^{q+1} = \left\{ (i_0, \ldots, i_q) \in \mathbb{Z}_+^{q+1} : 0 < \sum_{p=0}^{q} i_p \le k \right\}, \tag{4}$$
with probabilities $\pi_p = F(x_{p+1}) - F(x_p)$, assuming $x_0 = -\infty$ and $x_{q+1} = +\infty$.
Throughout, we assume that
$$F(x_{p+1}) - F(x_p) > 0, \quad p = 0, 1, \ldots, q, \tag{5}$$
to avoid trivial cases.
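The moment conditions (3) lend themselves to a quick numerical sanity check. The sketch below assumes an illustrative setting, $F$ uniform on $(0,1)$ with $n = 4$, $q = 2$, and $k = 2$; it draws fully independent observations (which are in particular $k$-independent), builds the counts $B_p$ of (2), and compares the empirical factorial moments with the right-hand side of (3); the printed pairs should agree up to Monte Carlo error.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setting: F = Uniform(0,1), cut points x_1 = 0.3 < x_2 = 0.7,
# so q = 2 and pi_p = F(x_{p+1}) - F(x_p) with x_0 = -inf, x_3 = +inf.
n, q, k = 4, 2, 2
grid = [0.0, 0.3, 0.7, 1.0]
pi = np.diff(grid)                        # (pi_0, pi_1, pi_2)

def falling(s, j):
    """Falling factorial (s)_j = s(s-1)...(s-j+1); (s)_0 = 1."""
    out = 1.0
    for r in range(j):
        out = out * (s - r)
    return out

# Fully independent sample (hence k-independent for every k <= n).
X = rng.random((200_000, n))
B = np.stack([((lo < X) & (X <= hi)).sum(axis=1)
              for lo, hi in zip(grid[:-1], grid[1:])], axis=1).astype(float)

# Compare E prod_p (B_p)_{i_p} with (n)_{r_i} prod_p pi_p^{i_p}, 0 < r_i <= k.
for i in itertools.product(range(k + 1), repeat=q + 1):
    r_i = sum(i)
    if 0 < r_i <= k:
        prod = np.ones(len(B))
        for p in range(q + 1):
            prod *= falling(B[:, p], i[p])
        rhs = falling(float(n), r_i) * np.prod(pi ** np.array(i))
        print(i, round(prod.mean(), 3), round(rhs, 3))
```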
Our main result states that the bounds on $L(X)$ can be expressed as values of certain known functions, which depend solely on the coefficients $c_{j_1,\ldots,j_m}^{\ell_1,\ldots,\ell_m}$, evaluated at points determined exclusively by the values $x_1, \ldots, x_q$. In essence, these bounds take the same form as those for linear combinations of distribution functions of individual order statistics from possibly dependent observations, as established in [16].
Before formulating the result, we introduce some notation and terminology. Since $\sum_{p=0}^{q} Y_p = \sum_{p=0}^{q} B_p = n$, it suffices to require condition (3) only for $i_q = 0$ (cf. Kemperman [20] [Remark]). Denote by $\delta^1, \ldots, \delta^\vartheta$ the $\vartheta = \sum_{j=1}^{k} \binom{q+j-1}{j}$ elements of the set $I_k^q$, which consists of all orders not exceeding $k$ of factorial moments of the random vector
$$Y = (Y_0, \ldots, Y_{q-1})$$
taking values in the set
$$U_n^q = \left\{ (u_0, u_1, \ldots, u_{q-1}) \in \mathbb{Z}_+^{q} : 0 \le \sum_{p=0}^{q-1} u_p \le n \right\}.$$
Define the vector function
$$\theta_k(u) = \left( \theta_k^1(u), \ldots, \theta_k^\vartheta(u) \right),$$
where
$$\theta_k^j(u) = \prod_{p=0}^{q-1} (u_p)_{\delta_p^j}$$
for $u = (u_0, u_1, \ldots, u_{q-1}) \in U_n^q$. Each component $\theta_k^j(u)$ represents a possible value of the product $\prod_{p=0}^{q-1} (Y_p)_{\delta_p^j}$ for a vector $\delta^j$ from $I_k^q$. Accordingly, the coordinates of $\theta_k(u)$ enumerate the possible values of such products for all vectors in $I_k^q$.
Let $M_k$ be the compact and convex set of all possible moments $E\,\theta_k(Y)$, where the distribution of $Y$ varies over all probability distributions on $U_n^q$. Clearly, we have
$$M_k = \mathrm{conv}\,\Gamma_k,$$
where
$$\Gamma_k = \{ \theta_k(u) : u \in U_n^q \},$$
and $\mathrm{conv}\,A$ denotes the convex hull of the set $A$.
We now define two functions necessary to state our results. For any function $\varphi : U_n^q \to \mathbb{R}$, define
$$C_\varphi : M_k \to \mathbb{R} \quad \text{and} \quad C^\varphi : M_k \to \mathbb{R}$$
to be, respectively, the greatest convex function such that
$$C_\varphi(\theta_k(u)) \le \varphi(u) \quad \text{for all } u \in U_n^q,$$
and the smallest concave function such that
$$C^\varphi(\theta_k(u)) \ge \varphi(u) \quad \text{for all } u \in U_n^q.$$
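Since $U_n^q$ is finite, the envelope values $C_\varphi(m(t))$ and $C^\varphi(m(t))$ are the optimal values of two linear programs over probability vectors on $U_n^q$ subject to the moment constraints $E\,\theta_k(Y) = m(t)$; this mirrors the geometric argument used in the proofs below. The following Python sketch implements this reformulation; the helper name envelope_values and the use of SciPy's linprog are our illustrative choices, not part of the paper.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def envelope_values(n, q, k, t, phi):
    """C_phi(m(t)) and C^phi(m(t)) as optimal values of linear programs over
    probability vectors on U_n^q with moment constraints E theta_k(Y) = m(t);
    t = (F(x_1), ..., F(x_q)).  A sketch; names are illustrative."""
    U = [u for u in itertools.product(range(n + 1), repeat=q) if sum(u) <= n]
    D = [d for d in itertools.product(range(k + 1), repeat=q) if 0 < sum(d) <= k]

    def falling(s, j):
        out = 1.0
        for r in range(j):
            out *= s - r
        return out

    # Columns: theta_k(u) for u in U; right-hand side: the moment point m(t).
    theta = np.array([[np.prod([falling(u[p], d[p]) for p in range(q)]) for d in D]
                      for u in U])
    tt = (0.0,) + tuple(t)
    m = np.array([falling(n, sum(d)) *
                  np.prod([(tt[p + 1] - tt[p]) ** d[p] for p in range(q)]) for d in D])

    c = np.array([phi(u) for u in U])          # objective: E phi(Y)
    A_eq = np.vstack([theta.T, np.ones(len(U))])
    b_eq = np.concatenate([m, [1.0]])
    lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None)).fun
    hi = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None)).fun
    return lo, hi

# Example: the setting of Proposition 1 (k = 1, q = 2, n = 10, j1 = 3, j2 = 5):
print(envelope_values(10, 2, 1, (0.2, 0.5),
                      lambda u: float(u[0] >= 3 and u[0] + u[1] >= 5)))
```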
Theorem 1. 
Let $1 \le k \le n$, $n \ge 2$, and $q \ge 1$ be fixed integers. Let $x_1 < \cdots < x_q$ and $c_{j_1,\ldots,j_m}^{\ell_1,\ldots,\ell_m}$ be fixed real numbers, where $m = 1, \ldots, n \wedge q$, $1 \le j_1 < \cdots < j_m \le n$, and $1 \le \ell_1 < \cdots < \ell_m \le q$.
(i) If $X = (X_1, \ldots, X_n)$ is a vector of $k$-independent random variables with a common distribution function $F$ satisfying condition (5), then the following bounds hold:
$$C_\varphi\big(m(F(x_1), \ldots, F(x_q))\big) \le L(X) \le C^\varphi\big(m(F(x_1), \ldots, F(x_q))\big), \tag{7}$$
where
$$\varphi(u_0, \ldots, u_{q-1}) = \sum_{m=1}^{n \wedge q} \sum_{1 \le j_1 < \cdots < j_m \le n} \; \sum_{1 \le \ell_1 < \cdots < \ell_m \le q} c_{j_1,\ldots,j_m}^{\ell_1,\ldots,\ell_m} \, \mathbf{1}\!\left( \sum_{p=0}^{\ell_1 - 1} u_p \ge j_1, \ldots, \sum_{p=0}^{\ell_m - 1} u_p \ge j_m \right) \tag{8}$$
for $(u_0, \ldots, u_{q-1}) \in U_n^q$, and
$$m(t) = (m^1(t), \ldots, m^\vartheta(t)) \tag{9}$$
with
$$m^j(t) = m^j(t_1, \ldots, t_q) = (n)_{r_j} \prod_{p=0}^{q-1} (t_{p+1} - t_p)^{\delta_p^j},$$
in which $0 = t_0 < t_1 < \cdots < t_q < 1$ and $r_j = \sum_{p=0}^{q-1} \delta_p^j$.
(ii) Moreover, for any distribution function $F$ that satisfies condition (5), there exist $k$-independent random variables $X_1, \ldots, X_n$ with common distribution function $F$ such that equality is attained in the first (resp. second) inequality in (7).
Proof. 
From Kemperman's [20] [Theorem 1, Remark] characterization of $k$-independence, it follows that a random vector $Y = (Y_0, Y_1, \ldots, Y_{q-1})$ has the same distribution as the random vector $B = (B_0, B_1, \ldots, B_{q-1})$, associated as in (2) to some $k$-independent random vector $X = (X_1, \ldots, X_n)$ with marginals $F$, if and only if $Y$ takes values in $U_n^q$ and satisfies the condition
$$E \prod_{p=0}^{q-1} (Y_p)_{i_p} = (n)_{r_i} \prod_{p=0}^{q-1} \pi_p^{i_p} \quad \text{for all } (i_0, i_1, \ldots, i_{q-1}) \in I_k^q \tag{10}$$
with $r_i = \sum_{p=0}^{q-1} i_p$, $I_k^q$ as in (4), and $\pi_p = F(x_{p+1}) - F(x_p)$. Observe that condition (10) can be rewritten in the form
$$E\,\theta_k(Y) = m(t) \tag{11}$$
with $m(t)$ as in (9) and $t = (F(x_1), \ldots, F(x_q))$.
For any given function $\psi$, minimizing (resp. maximizing) the moment $E\,\psi(B)$ is equivalent to minimizing (resp. maximizing) $E\,\psi(Y)$, subject to the constraints that $Y$ takes values in $U_n^q$ and satisfies the moment condition (11). Since $X_{s:n} \le x$ if and only if $\sum_{j=1}^{n} \mathbf{1}(X_j \le x) \ge s$, and $\sum_{j=1}^{n} \mathbf{1}(X_j \le x_\ell) = \sum_{p=0}^{\ell-1} B_p$, by (1) and (2), one has
$$L(X) = \sum_{m=1}^{n \wedge q} \sum_{1 \le j_1 < \cdots < j_m \le n} \; \sum_{1 \le \ell_1 < \cdots < \ell_m \le q} c_{j_1,\ldots,j_m}^{\ell_1,\ldots,\ell_m} \, P\!\left( \bigcap_{\nu=1}^{m} \left\{ \sum_{p=0}^{\ell_\nu - 1} B_p \ge j_\nu \right\} \right). \tag{12}$$
Hence, determining bounds on $L(X)$ reduces to minimizing (or maximizing)
$$T(Y) := \sum_{m=1}^{n \wedge q} \sum_{1 \le j_1 < \cdots < j_m \le n} \; \sum_{1 \le \ell_1 < \cdots < \ell_m \le q} c_{j_1,\ldots,j_m}^{\ell_1,\ldots,\ell_m} \, P\!\left( \bigcap_{\nu=1}^{m} \left\{ \sum_{p=0}^{\ell_\nu - 1} Y_p \ge j_\nu \right\} \right),$$
subject to $Y$ taking values in $U_n^q$ and the moment constraint (11).
We solve the latter problem using a geometric approach. Note that $L(X) = E\,\varphi(B)$, where $\varphi$ is given by (8). Recall that $M_k$ is the compact set of all possible moment points $E\,\theta_k(Y) \in \mathbb{R}^\vartheta$, where the distribution of $Y$ varies over all distributions on $U_n^q$.
Consider the auxiliary function
$$\tilde\theta_k(u) = \left( \theta_k^1(u), \ldots, \theta_k^\vartheta(u), \varphi(u) \right).$$
Since
$$\tilde M_k = \mathrm{conv}\{ \tilde\theta_k(u) : u \in U_n^q \}$$
is the compact set of all possible moment points $E\,\tilde\theta_k(Y) \in \mathbb{R}^{\vartheta+1}$, where the random vector $Y$ takes its values in $U_n^q$, the line segment
$$\tilde M_k \cap \{ (m(t), \rho) : \rho \in \mathbb{R} \}$$
represents the range of possible moment points $E\,\tilde\theta_k(Y) \in \mathbb{R}^{\vartheta+1}$, where the distribution of $Y$ varies over all distributions on $U_n^q$ satisfying the condition $E\,\theta_k(Y) = m(t)$. The lower (resp. upper) endpoint belongs to the lower (resp. upper) envelope of $\tilde M_k$, defined by $(y, C_\varphi(y))$ (resp. $(y, C^\varphi(y))$) for $y \in M_k$, providing the sharp lower (resp. upper) bound on $L(X)$ (cf. [27]).
This completes the proof of (i). Statement (ii) follows directly from [20] [Theorem 1]. □
Remark 1.  (i) The bounds for multivariate marginal distribution functions of order statistics provided by Theorem 1 are new even in the case $k = 1$, that is, when the $X_j$'s are arbitrarily dependent identically distributed random variables.
(ii) Fix $j \in \{1, \ldots, n\}$ and let $q = 1$, $k = 2$, and $c_{j_1,\ldots,j_m}^{\ell_1,\ldots,\ell_m} = \mathbf{1}(m = 1,\ j_1 = j,\ \ell_1 = 1)$ in (7). Then we recover Kemperman's [20] bounds for single-order statistics from pairwise independent observations. If we fix $c_1, \ldots, c_n \in \mathbb{R}$ and $k \in \{1, \ldots, n-1\}$, and take $q = 1$ and $c_{j_1,\ldots,j_m}^{\ell_1,\ldots,\ell_m} = c_{j_1} \mathbf{1}(m = 1,\ \ell_1 = 1)$ in (7), we obtain more explicit expressions for the bounds [21] [Equation (28)] for linear combinations of distribution functions of single-order statistics from $k$-independent observations.
(iii) From the proof of Theorem 1 it follows that the range of possible values of $L(X)$ over all possible distributions of $k$-independent vectors $X$ with one-dimensional marginals $F$ is equal to the interval $[C_\varphi(m(t)), C^\varphi(m(t))]$, where $t = (F(x_1), \ldots, F(x_q))$.
Remark 2. 
The pointwise sharp distribution bounds (7) do not generally yield sharp expectation bounds (cf. [20]). However, the bounds obtained in this way may be accurate in certain particular cases. Indeed, suppose that $q \in \{2, 3, \ldots\}$ and the $k$-independent $X_j$'s have a common piecewise constant distribution function $F$ with jumps at points $-\infty < x_1 < \cdots < x_q < \infty$. For example, for any fixed $1 \le j_1 < j_2 \le n$ and a function $h : \mathbb{R}^2 \to \mathbb{R}$, we have
$$E\,h(X_{j_1:n}, X_{j_2:n}) = \sum_{\ell_1=1}^{q} \sum_{\ell_2=1}^{q} h(x_{\ell_1}, x_{\ell_2}) \, P(X_{j_1:n} = x_{\ell_1}, X_{j_2:n} = x_{\ell_2}),$$
which can be expressed as
$$E\,h(X_{j_1:n}, X_{j_2:n}) = \sum_{\ell_2=1}^{q} \left( \sum_{\ell_1=\ell_2}^{q} h_{\ell_1 \ell_2} \right) F_{j_2:n}(x_{\ell_2}) + \sum_{\ell_1=1}^{q-1} \; \sum_{\ell_2=\ell_1+1}^{q} h_{\ell_1 \ell_2} \, F_{j_1,j_2:n}(x_{\ell_1}, x_{\ell_2}),$$
where
$$h_{\ell_1 \ell_2} = h(x_{\ell_1}, x_{\ell_2}) - h(x_{\ell_1}, x_{\ell_2+1}) \mathbf{1}(\ell_2 < q) - h(x_{\ell_1+1}, x_{\ell_2}) \mathbf{1}(\ell_1 < q) + h(x_{\ell_1+1}, x_{\ell_2+1}) \mathbf{1}(\ell_1 < q,\ \ell_2 < q).$$
Thus, applying (7) yields two-sided attainable bounds for $E\,h(X_{j_1:n}, X_{j_2:n})$, which are expressed in terms of the points $x_1, \ldots, x_q$ and the values of the distribution function $F(x_1), \ldots, F(x_q)$. Sharp expectation bounds for $L$-statistics from arbitrarily dependent observations (i.e., for $k = 1$) were presented in [16]. Note that for $h(s_1, s_2) = s_1 + s_2$, the expectation of $h(X_{n-1:n}, X_{n:n})$ represents the net premium for the largest-claims (LC) reinsurance of the two largest claims in an individual model of homogeneous, and in particular arbitrarily dependent, risks (cf. [28]). In the context of LC reinsurance, conservative bounds are essential for risk assessment and pricing under unknown dependence structures.
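The expansion above can be verified numerically. The short sketch below is a sanity check of the identity, using only the structural fact that $X_{j_1:n} \le X_{j_2:n}$: it draws a random joint probability mass function on the grid together with random values $h(x_{\ell_1}, x_{\ell_2})$ and confirms that both sides agree.

```python
import numpy as np

rng = np.random.default_rng(1)
q = 5
h = rng.random((q, q))            # arbitrary values h(x_{l1}, x_{l2}) on the grid

# Random joint pmf of (X_{j1:n}, X_{j2:n}); the only structural constraint
# used by the expansion is X_{j1:n} <= X_{j2:n} (upper-triangular support).
p = np.triu(rng.random((q, q)))
p /= p.sum()

lhs = (h * p).sum()               # E h(X_{j1:n}, X_{j2:n})

# h_{l1 l2}: the two-dimensional backward difference of h from the remark.
hd = h.copy()
hd[:, :-1] -= h[:, 1:]
hd[:-1, :] -= h[1:, :]
hd[:-1, :-1] += h[1:, 1:]

Fj2 = np.cumsum(p.sum(axis=0))                # F_{j2:n}(x_{l2})
G = np.cumsum(np.cumsum(p, axis=0), axis=1)   # F_{j1,j2:n}(x_{l1}, x_{l2})

rhs = sum(hd[l1, l2] * Fj2[l2] for l2 in range(q) for l1 in range(l2, q)) \
    + sum(hd[l1, l2] * G[l1, l2] for l1 in range(q) for l2 in range(l1 + 1, q))
print(np.isclose(lhs, rhs))       # True
```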
Our next objective is to establish an analogue of Theorem 1 for linear combinations of joint reliability functions associated with selected order statistics, defined by
$$\bar L(X) := \sum_{m=1}^{n \wedge q} \sum_{1 \le j_1 < \cdots < j_m \le n} \; \sum_{1 \le \ell_1 < \cdots < \ell_m \le q} c_{j_1,\ldots,j_m}^{\ell_1,\ldots,\ell_m} \, \bar F_{j_1,\ldots,j_m:n}(x_{\ell_1}, \ldots, x_{\ell_m}),$$
where $x_1 < \cdots < x_q$ and $c_{j_1,\ldots,j_m}^{\ell_1,\ldots,\ell_m}$ are fixed real numbers, and $\bar F_{j_1,\ldots,j_m:n}(x_{\ell_1}, \ldots, x_{\ell_m}) := P(X_{j_1:n} > x_{\ell_1}, \ldots, X_{j_m:n} > x_{\ell_m})$.
Theorem 2. 
Under the assumptions and notation of Theorem 1, the following statements hold.
(i) Let $X = (X_1, \ldots, X_n)$ be a vector of $k$-independent random variables with a common distribution function $F$ fulfilling condition (5). Then
$$C_{\bar\varphi}\big(m(F(x_1), \ldots, F(x_q))\big) \le \bar L(X) \le C^{\bar\varphi}\big(m(F(x_1), \ldots, F(x_q))\big), \tag{13}$$
where
$$\bar\varphi(u_0, \ldots, u_{q-1}) = \sum_{m=1}^{n \wedge q} \sum_{1 \le j_1 < \cdots < j_m \le n} \; \sum_{1 \le \ell_1 < \cdots < \ell_m \le q} c_{j_1,\ldots,j_m}^{\ell_1,\ldots,\ell_m} \, \mathbf{1}\!\left( \sum_{p=0}^{\ell_1 - 1} u_p < j_1, \ldots, \sum_{p=0}^{\ell_m - 1} u_p < j_m \right).$$
(ii) Moreover, for any distribution function $F$ satisfying condition (5), there exist $k$-independent random variables $X_1, \ldots, X_n$ with distribution function $F$ for which the lower (resp. upper) bound in (13) is attained.
Proof. 
Combining (12) with the fact that $X_{s:n} > x_\ell$ if and only if $\sum_{j=1}^{n} \mathbf{1}(X_j \le x_\ell) < s$, we obtain
$$\bar L(X) = \sum_{m=1}^{n \wedge q} \sum_{1 \le j_1 < \cdots < j_m \le n} \; \sum_{1 \le \ell_1 < \cdots < \ell_m \le q} c_{j_1,\ldots,j_m}^{\ell_1,\ldots,\ell_m} \, P\!\left( \bigcap_{\nu=1}^{m} \left\{ \sum_{p=0}^{\ell_\nu - 1} B_p < j_\nu \right\} \right).$$
An analysis analogous to that in the proof of Theorem 1 shows that determining bounds on $\bar L(X)$ is equivalent to minimizing (maximizing) the quantity
$$\bar T(Y) := \sum_{m=1}^{n \wedge q} \sum_{1 \le j_1 < \cdots < j_m \le n} \; \sum_{1 \le \ell_1 < \cdots < \ell_m \le q} c_{j_1,\ldots,j_m}^{\ell_1,\ldots,\ell_m} \, P\!\left( \bigcap_{\nu=1}^{m} \left\{ \sum_{p=0}^{\ell_\nu - 1} Y_p < j_\nu \right\} \right),$$
where the vector $Y = (Y_0, Y_1, \ldots, Y_{q-1})$ takes values in $U_n^q$ and satisfies condition (11). The proof is completed by adapting the second part of the proof of Theorem 1, with $L(X)$ replaced by $\bar L(X)$ and $\varphi$ replaced by $\bar\varphi$. □
Remark 3. 
If the $X_j$'s are not only $k$-independent but also non-negative, exchangeable and have no ties, then (13) provides sharp bounds for the joint reliability function $\bar F_s$ of any pair of semi-coherent systems based on common components. Specifically,
$$\bar F_s(z_1, z_2) = \sum_{j_1=1}^{n} \sum_{j_2=1}^{n} s_{j_1 j_2} \, \bar F_{j_1,j_2:n}(z_1, z_2), \quad z_1, z_2 \ge 0,$$
where $s = (s_{j_1 j_2})_{1 \le j_1, j_2 \le n}$ is a probability matrix of order $n$ depending solely on the system structure, known as the structure signature of the system (cf. [29]). To see this, observe that
$$\bar F_s(z_1, z_2) = \sum_{1 \le j_1 \le n} \; \sum_{1 \le \ell_1 \le 2} c_{j_1}^{\ell_1} \, \bar F_{j_1:n}(x_{\ell_1}) + \sum_{1 \le j_1 < j_2 \le n} \; \sum_{1 \le \ell_1 < \ell_2 \le 2} c_{j_1,j_2}^{\ell_1,\ell_2} \, \bar F_{j_1,j_2:n}(x_{\ell_1}, x_{\ell_2}),$$
where $x_1 = \min(z_1, z_2)$, $x_2 = \max(z_1, z_2)$,
$$c_{j_1}^{\ell_1} = \left( s_{j_1 j_1} + \sum_{j=j_1+1}^{n} \left[ s_{j j_1} \mathbf{1}(z_1 \le z_2) + s_{j_1 j} \mathbf{1}(z_1 \ge z_2) \right] \right) \mathbf{1}(\ell_1 = 2)$$
and
$$c_{j_1,j_2}^{\ell_1,\ell_2} = s_{j_1 j_2} \mathbf{1}(z_1 < z_2) + s_{j_2 j_1} \mathbf{1}(z_1 > z_2).$$
Sharp bounds on the lifetime distributions and expectations of a single system with arbitrarily dependent exchangeable component lifetimes $X_1, \ldots, X_n$ were derived in [30].
Remark 4. 
Theorems 1 and 2 reduce the problem of determining distribution bounds for order statistics from dependent observations to finding the convex hull of a finite set of points in $\mathbb{R}^{\vartheta+1}$, where $\vartheta = \sum_{j=1}^{k} \binom{q+j-1}{j}$. Computing the convex hull is a fundamental step in many practical problems, including statistical tasks such as robust estimation, isotonic regression, and clustering (see [31]). Even in moderate dimensions, such as 10 or 20, convex hull computation can be challenging. Knowledge of specific properties of the convex hull in a given problem, for example, the presence of symmetry, can be helpful (see [32]). A comprehensive overview of algorithms and methods for determining the convex hull of a finite set of points in $\mathbb{R}^d$ is provided in [33], along with a detailed discussion of their respective advantages, disadvantages, and recommended applications.
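Anticipating the low-dimensional setting of Section 3 (where $k = 1$ and $q = 2$, so the lifted moment set lives in $\mathbb{R}^3$), the following sketch, assuming SciPy's ConvexHull (Qhull) as the solver, recovers the eight extreme points of $\tilde M_2$ listed in the proof of Proposition 1 for $n = 10$, $j_1 = 3$, $j_2 = 5$ (cf. Figure 1):

```python
import itertools
import numpy as np
from scipy.spatial import ConvexHull

# Lifted points (u0, u1, phi(u0, u1)) over U_n^2 for n = 10, j1 = 3, j2 = 5;
# here k = 1 and q = 2, so theta_2(u) = u.
n, j1, j2 = 10, 3, 5
U = [u for u in itertools.product(range(n + 1), repeat=2) if sum(u) <= n]
pts = np.array([(u0, u1, float(u0 >= j1 and u0 + u1 >= j2)) for u0, u1 in U])

hull = ConvexHull(pts)
print(sorted(map(tuple, pts[hull.vertices])))
# Extreme points, as in the proof of Proposition 1:
# (0,0,0), (10,0,1), (0,10,0), (3,2,1), (5,0,1), (3,7,1), (2,8,0), (4,0,0)
```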

3. Explicit Bounds for Possibly Dependent Observations

We will derive explicit sharp lower and upper bounds for the joint distribution function and joint reliability function of any pair of order statistics based on arbitrarily dependent observations.
Proposition 1. 
Let $F$ be a given distribution function, and let $1 \le j_1 < j_2 \le n$ and $x_1 < x_2$ be such that $0 < F(x_1) < F(x_2) < 1$. If $X = (X_1, \ldots, X_n)$ is a vector of possibly dependent random variables with common distribution function $F$, then
$$F_{j_1,j_2:n}(x_1, x_2) \le \min\left\{ \frac{n}{j_1} F(x_1),\ \frac{n}{j_2} F(x_2),\ 1 \right\}$$
and
$$F_{j_1,j_2:n}(x_1, x_2) \ge \max\left\{ H_{j_1,j_2:n}(F(x_1), F(x_2)),\ 0 \right\}$$
with
$$H_{j_1,j_2:n}(z_1, z_2) = \frac{(n-j_2+1)\,n z_1 + (j_2-j_1)\,n z_2 - (j_2-1)(n-j_1+1)}{(n-j_1+1)(n-j_2+1)}, \tag{14}$$
and both the bounds are sharp.
Proof. 
We apply Theorem 1 with parameters $k = 1$, $q = 2$, and $n \ge 2$. Let $F$, $1 \le j_1 < j_2 \le n$, and $x_1 < x_2$ be such that $0 < F(x_1) < F(x_2) < 1$. Then we have that $\vartheta = 2$ and $I_1^2 = \{\delta^1, \delta^2\}$, where $\delta^1 = (1, 0)$ and $\delta^2 = (0, 1)$, while
$$U_n^2 = \{ u = (u_0, u_1) : u_0, u_1 \in \mathbb{Z}_+;\ u_0 + u_1 \le n \}.$$
Moreover, $m(t) = (m^1(t), m^2(t))$, in which $t = (t_1, t_2) := (F(x_1), F(x_2))$, with $m^1(t) = n t_1 > 0$ and $m^2(t) = n(t_2 - t_1) > 0$, and
$$M_2 = \mathrm{conv}\{ \theta_2(u) : u = (u_0, u_1) \in U_n^2 \}$$
with $\theta_2(u) = u$. Clearly, $m(t) \in M_2$.
To determine the bounds $C_\varphi(m(t))$ and $C^\varphi(m(t))$ on $F_{j_1,j_2:n}(x_1, x_2)$, it suffices to find the lower envelope $\{ (y, C_\varphi(y)) : y \in M_2 \}$ and the upper envelope $\{ (y, C^\varphi(y)) : y \in M_2 \}$ of the set $\tilde M_2 = \mathrm{conv}\,\tilde\Gamma_2$, where $\tilde\Gamma_2 = \{ \tilde\theta_2(u) : u \in U_n^2 \}$, $\tilde\theta_2(u_0, u_1) = (u_0, u_1, \varphi(u_0, u_1))$, and $\varphi(u_0, u_1) = \mathbf{1}(u_0 \ge j_1,\ u_0 + u_1 \ge j_2)$. Figure 1 illustrates the set $\tilde M_2$ for the specific case $n = 10$, $j_1 = 3$, and $j_2 = 5$. Notably, the convex hulls of the elements of the underlying set $\tilde\Gamma_2$ with the third coordinate equal to 0 and 1, respectively, form the lower (depicted in blue) and upper (depicted in red) faces of the polyhedron $\tilde M_2$, which are parallel to the $x$- and $y$-axes. The vertices of these faces constitute the set of extreme points of $\tilde\Gamma_2$, meaning that none of these points can be represented as a convex combination of other feasible points in the set.
Observe that, in the general case, the set $\mathrm{ext}\,\tilde M_2$ of extreme points of $\tilde\Gamma_2$, and thus also of $\tilde M_2$, is given by
$$\{ \tilde\theta_2(u^{(0)}), \tilde\theta_2(u^{(1)}), \ldots, \tilde\theta_2(u^{(7)}) \},$$
where $u^{(0)} = (0, 0)$, $u^{(1)} = (n, 0)$, $u^{(2)} = (0, n)$, $u^{(3)} = (j_1, j_2 - j_1)$, $u^{(4)} = (j_2, 0)$, $u^{(5)} = (j_1, n - j_1)$, $u^{(6)} = (j_1 - 1, n - j_1 + 1)$, and $u^{(7)} = (j_2 - 1, 0)$. Each of these points is a vertex of the convex hull of the feasible integer triples; in particular, none of them can be expressed as a convex combination of other feasible points in $\tilde M_2$.
Consequently,
$$\mathrm{ext}\,\tilde M_2 = \{ (0, 0, 0),\ (n, 0, 1),\ (0, n, 0),\ (j_1, j_2 - j_1, 1),\ (j_2, 0, 1),\ (j_1, n - j_1, 1),\ (j_1 - 1, n - j_1 + 1, 0),\ (j_2 - 1, 0, 0) \}.$$
We first derive the upper bounds. Observe that
$$M_2 = M_2^{(1)} \cup M_2^{(2)} \cup M_2^{(3)}$$
and the upper envelope $\mathrm{uenv}\,\tilde M_2$ of $\tilde M_2$ is given by
$$\mathrm{uenv}\,\tilde M_2 = \tilde U_2^{(1)} \cup \tilde U_2^{(2)} \cup \tilde U_2^{(3)},$$
where
$$M_2^{(1)} = \mathrm{conv}\{ u^{(0)}, u^{(3)}, u^{(4)} \}, \quad M_2^{(2)} = \mathrm{conv}\{ u^{(0)}, u^{(2)}, u^{(3)}, u^{(5)} \}, \quad M_2^{(3)} = \mathrm{conv}\{ u^{(1)}, u^{(3)}, u^{(4)}, u^{(5)} \}$$
and
$$\tilde U_2^{(1)} = \mathrm{conv}\{ \tilde\theta_2(u^{(0)}), \tilde\theta_2(u^{(3)}), \tilde\theta_2(u^{(4)}) \}, \quad \tilde U_2^{(2)} = \mathrm{conv}\{ \tilde\theta_2(u^{(0)}), \tilde\theta_2(u^{(2)}), \tilde\theta_2(u^{(3)}), \tilde\theta_2(u^{(5)}) \}, \quad \tilde U_2^{(3)} = \mathrm{conv}\{ \tilde\theta_2(u^{(1)}), \tilde\theta_2(u^{(3)}), \tilde\theta_2(u^{(4)}), \tilde\theta_2(u^{(5)}) \}.$$
Figure 1 shows the outline of the upper envelope of the set $\tilde M_2$ for $n = 10$, $j_1 = 3$, and $j_2 = 5$, indicated in red.
We distinguish three cases. If $m(t) \in M_2^{(1)}$, then the restriction $\tilde U_2^{(1)}$ of $\mathrm{uenv}\,\tilde M_2$ to $M_2^{(1)}$ lies on the plane in $\mathbb{R}^3$ described by the equation
$$z_1 + z_2 - j_2 z_3 = 0.$$
By (7), the third coordinate of the intersection point of the line $\{ (m(t), \rho) : \rho \in \mathbb{R} \}$ with $\tilde U_2^{(1)}$ provides the desired upper bound:
$$F_{j_1,j_2:n}(x_1, x_2) \le \frac{n}{j_2} F(x_2),$$
since $C^\varphi(z_1, z_2) = (z_1 + z_2)/j_2$.
If $m(t) \in M_2^{(2)}$, then the restriction of $\mathrm{uenv}\,\tilde M_2$ to $M_2^{(2)}$ equals $\tilde U_2^{(2)}$. Any triple $(z_1, z_2, z_3) \in \tilde U_2^{(2)}$ satisfies the equation
$$z_1 - j_1 z_3 = 0,$$
yielding the upper bound
$$F_{j_1,j_2:n}(x_1, x_2) \le \frac{n}{j_1} F(x_1).$$
If $m(t) \in M_2^{(3)}$, then
$$F_{j_1,j_2:n}(x_1, x_2) \le 1,$$
as the restriction of $\mathrm{uenv}\,\tilde M_2$ to $M_2^{(3)}$ equals $\tilde U_2^{(3)}$, and the equation of the plane passing through the points of $\tilde U_2^{(3)}$ has the form $z_3 = 1$.
Next, we derive the lower bounds. It is evident that $M_2 = M_2^{(4)} \cup M_2^{(5)}$ and the lower envelope $\mathrm{lenv}\,\tilde M_2$ of $\tilde M_2$ is given by
$$\mathrm{lenv}\,\tilde M_2 = \tilde L_2^{(1)} \cup \tilde L_2^{(2)},$$
where
$$M_2^{(4)} = \mathrm{conv}\{ u^{(0)}, u^{(2)}, u^{(6)}, u^{(7)} \}, \quad M_2^{(5)} = \mathrm{conv}\{ u^{(1)}, u^{(6)}, u^{(7)} \},$$
$$\tilde L_2^{(1)} = \mathrm{conv}\{ \tilde\theta_2(u^{(0)}), \tilde\theta_2(u^{(2)}), \tilde\theta_2(u^{(6)}), \tilde\theta_2(u^{(7)}) \}, \quad \tilde L_2^{(2)} = \mathrm{conv}\{ \tilde\theta_2(u^{(1)}), \tilde\theta_2(u^{(6)}), \tilde\theta_2(u^{(7)}) \}.$$
The lower envelope of $\tilde M_2$ for $n = 10$, $j_1 = 3$, and $j_2 = 5$ is illustrated in blue in Figure 1.
We consider two cases. If $m(t) \in M_2^{(4)}$, then $\tilde L_2^{(1)}$ is the restriction of $\mathrm{lenv}\,\tilde M_2$ to $M_2^{(4)}$, and points of $\tilde L_2^{(1)}$ satisfy the equation
$$z_3 = 0.$$
Hence,
$$F_{j_1,j_2:n}(x_1, x_2) \ge 0.$$
If $m(t) \in M_2^{(5)}$, then
$$F_{j_1,j_2:n}(x_1, x_2) \ge \frac{(n-j_2+1)\,n F(x_1) + (j_2-j_1)\,n F(x_2) - (j_2-1)(n-j_1+1)}{(n-j_1+1)(n-j_2+1)},$$
because $\tilde L_2^{(2)}$, the restriction of $\mathrm{lenv}\,\tilde M_2$ to $M_2^{(5)}$, lies in the plane
$$(n-j_1+1)(z_1 - n) + (j_2-j_1) z_2 - (n-j_1+1)(n-j_2+1)(z_3 - 1) = 0.$$
Summarizing, we have
$$\max\left\{ H_{j_1,j_2:n}(F(x_1), F(x_2)),\ 0 \right\} \le F_{j_1,j_2:n}(x_1, x_2) \le \min\left\{ \frac{n}{j_1} F(x_1),\ \frac{n}{j_2} F(x_2),\ 1 \right\}$$
with $H_{j_1,j_2:n}$ given by (14), and both the bounds are sharp. □
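For a numerical illustration, with assumed marginal values $F(x_1) = 0.2$ and $F(x_2) = 0.5$ in the setting of Figure 1, the closed-form bounds of Proposition 1 evaluate as follows; the result agrees with the linear-programming sketch given after the definitions of $C_\varphi$ and $C^\varphi$ in Section 2.

```python
# Closed-form bounds of Proposition 1 for n = 10, j1 = 3, j2 = 5 and assumed
# values F(x_1) = 0.2, F(x_2) = 0.5.
n, j1, j2 = 10, 3, 5
F1, F2 = 0.2, 0.5

upper = min(n / j1 * F1, n / j2 * F2, 1.0)
H = ((n - j2 + 1) * n * F1 + (j2 - j1) * n * F2
     - (j2 - 1) * (n - j1 + 1)) / ((n - j1 + 1) * (n - j2 + 1))
lower = max(H, 0.0)
print(lower, upper)   # 0.0 0.666...; here H < 0, so the trivial bound binds
```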
Now, we derive an analogue of Proposition 1 for the case of linear combinations of joint reliability functions of selected order statistics.
Proposition 2. 
Let $F$ be a given distribution function. Let $1 \le j_1 < j_2 \le n$ and $x_1 < x_2$ be such that $0 < F(x_1) < F(x_2) < 1$. If $X_1, \ldots, X_n$ are possibly dependent random variables with distribution function $F$, then
$$\bar F_{j_1,j_2:n}(x_1, x_2) \le \min\left\{ \frac{n \bar F(x_1)}{n-j_1+1},\ \frac{n \bar F(x_2)}{n-j_2+1},\ 1 \right\}$$
and
$$\bar F_{j_1,j_2:n}(x_1, x_2) \ge \max\left\{ \bar H_{j_1,j_2:n}(\bar F(x_1), \bar F(x_2)),\ 0 \right\},$$
where $\bar F = 1 - F$ and
$$\bar H_{j_1,j_2:n}(z_1, z_2) = 1 - \frac{n(j_2-j_1)(1-z_1) + n j_1 (1-z_2)}{j_1 j_2}.$$
The bounds are sharp.
Proof. 
Set $k = 1$, $q = 2$, and $n \ge 2$. Let $F$, $1 \le j_1 < j_2 \le n$, and $x_1 < x_2$ be such that $0 < F(x_1) < F(x_2) < 1$. Further, let $\vartheta$, $I_1^2$, $\delta^1$, $\delta^2$, $U_n^2$, $t$, $m(t)$, $\theta_2(u)$, and $M_2$ be as defined in the proof of Proposition 1. Clearly, $m(t) \in M_2$.
By Theorem 2, it suffices to find the values $C_{\bar\varphi}(m(t))$ and $C^{\bar\varphi}(m(t))$, where $\bar\varphi(u_0, u_1) = \mathbf{1}(u_0 < j_1,\ u_0 + u_1 < j_2)$. To this end, we determine the lower and upper envelopes of the set
$$\tilde M_2 = \mathrm{conv}\{ \tilde\theta_2(u) : u \in U_n^2 \}$$
with $\tilde\theta_2(u_0, u_1) = (u_0, u_1, \bar\varphi(u_0, u_1))$.
It is easily seen that the set $\mathrm{ext}\,\tilde M_2$ of extreme points of $\tilde M_2$ equals
$$\{ \tilde\theta_2(u^{(0)}), \tilde\theta_2(u^{(1)}), \ldots, \tilde\theta_2(u^{(7)}) \},$$
in which
$$u^{(0)} = (0, 0), \quad u^{(1)} = (n, 0), \quad u^{(2)} = (0, n), \quad u^{(3)} = (j_1 - 1, 0), \quad u^{(4)} = (j_1 - 1, j_2 - j_1), \quad u^{(5)} = (0, j_2 - 1), \quad u^{(6)} = (j_1, 0), \quad u^{(7)} = (0, j_2),$$
that is,
$$\mathrm{ext}\,\tilde M_2 = \{ (0, 0, 1),\ (n, 0, 0),\ (0, n, 0),\ (j_1 - 1, 0, 1),\ (j_1 - 1, j_2 - j_1, 1),\ (0, j_2 - 1, 1),\ (j_1, 0, 0),\ (0, j_2, 0) \}.$$
Figure 2 shows the set $\tilde M_2$ corresponding to $n = 10$, $j_1 = 3$, and $j_2 = 5$.
We first derive the upper bounds. Observe that
$$M_2 = M_2^{(1)} \cup M_2^{(2)} \cup M_2^{(3)}$$
and the upper envelope $\mathrm{uenv}\,\tilde M_2$ of $\tilde M_2$ is given by
$$\mathrm{uenv}\,\tilde M_2 = \tilde U_2^{(1)} \cup \tilde U_2^{(2)} \cup \tilde U_2^{(3)},$$
where
$$M_2^{(1)} = \mathrm{conv}\{ u^{(1)}, u^{(3)}, u^{(4)} \}, \quad M_2^{(2)} = \mathrm{conv}\{ u^{(1)}, u^{(2)}, u^{(4)}, u^{(5)} \}, \quad M_2^{(3)} = \mathrm{conv}\{ u^{(0)}, u^{(3)}, u^{(4)}, u^{(5)} \}$$
and
$$\tilde U_2^{(1)} = \mathrm{conv}\{ \tilde\theta_2(u^{(1)}), \tilde\theta_2(u^{(3)}), \tilde\theta_2(u^{(4)}) \}, \quad \tilde U_2^{(2)} = \mathrm{conv}\{ \tilde\theta_2(u^{(1)}), \tilde\theta_2(u^{(2)}), \tilde\theta_2(u^{(4)}), \tilde\theta_2(u^{(5)}) \}, \quad \tilde U_2^{(3)} = \mathrm{conv}\{ \tilde\theta_2(u^{(0)}), \tilde\theta_2(u^{(3)}), \tilde\theta_2(u^{(4)}), \tilde\theta_2(u^{(5)}) \}.$$
The red outline in Figure 2 indicates the upper envelope of $\tilde M_2$ for $n = 10$, $j_1 = 3$, and $j_2 = 5$.
We consider three cases. If $m(t) \in M_2^{(1)}$, then the third coordinate $C^{\bar\varphi}(m(t))$ of the intersection point of the line $\{ (m(t), \rho) : \rho \in \mathbb{R} \}$ with the restriction $\tilde U_2^{(1)}$ of $\mathrm{uenv}\,\tilde M_2$ to $M_2^{(1)}$, which lies on the plane described by the equation
$$z_1 + (n-j_1+1) z_3 - n = 0,$$
gives the desired bound:
$$\bar F_{j_1,j_2:n}(x_1, x_2) \le \frac{n(1 - F(x_1))}{n-j_1+1}.$$
If $m(t) \in M_2^{(2)}$, then the restriction of $\mathrm{uenv}\,\tilde M_2$ to $M_2^{(2)}$ equals $\tilde U_2^{(2)}$, and any triple $(z_1, z_2, z_3) \in \tilde U_2^{(2)}$ satisfies
$$z_1 + z_2 + (n-j_2+1) z_3 - n = 0.$$
Hence, the upper bound is
$$\bar F_{j_1,j_2:n}(x_1, x_2) \le \frac{n(1 - F(x_2))}{n-j_2+1}.$$
If $m(t) \in M_2^{(3)}$, then
$$\bar F_{j_1,j_2:n}(x_1, x_2) \le 1,$$
because $\tilde U_2^{(3)}$ is the restriction of $\mathrm{uenv}\,\tilde M_2$ to $M_2^{(3)}$, and the plane passing through the points of $\tilde U_2^{(3)}$ satisfies $z_3 = 1$.
Next, we derive the lower bounds. Observe that
$$M_2 = M_2^{(4)} \cup M_2^{(5)}$$
and the lower envelope $\mathrm{lenv}\,\tilde M_2$ of $\tilde M_2$ is given by
$$\mathrm{lenv}\,\tilde M_2 = \tilde L_2^{(1)} \cup \tilde L_2^{(2)},$$
where
$$M_2^{(4)} = \mathrm{conv}\{ u^{(1)}, u^{(2)}, u^{(6)}, u^{(7)} \}, \quad M_2^{(5)} = \mathrm{conv}\{ u^{(0)}, u^{(6)}, u^{(7)} \},$$
$$\tilde L_2^{(1)} = \mathrm{conv}\{ \tilde\theta_2(u^{(1)}), \tilde\theta_2(u^{(2)}), \tilde\theta_2(u^{(6)}), \tilde\theta_2(u^{(7)}) \}, \quad \tilde L_2^{(2)} = \mathrm{conv}\{ \tilde\theta_2(u^{(0)}), \tilde\theta_2(u^{(6)}), \tilde\theta_2(u^{(7)}) \}.$$
The blue outline in Figure 2 represents the lower envelope of $\tilde M_2$ for $n = 10$, $j_1 = 3$, and $j_2 = 5$.
We consider two cases. If $m(t) \in M_2^{(4)}$, then, since the points of $\tilde L_2^{(1)}$, the restriction of $\mathrm{lenv}\,\tilde M_2$ to $M_2^{(4)}$, satisfy
$$z_3 = 0,$$
we have
$$\bar F_{j_1,j_2:n}(x_1, x_2) \ge 0.$$
If $m(t) \in M_2^{(5)}$, then
$$\bar F_{j_1,j_2:n}(x_1, x_2) \ge 1 - \frac{n(j_2-j_1) F(x_1) + n j_1 F(x_2)}{j_1 j_2},$$
because the restriction $\tilde L_2^{(2)}$ of $\mathrm{lenv}\,\tilde M_2$ to $M_2^{(5)}$ lies in the plane
$$j_2 z_1 + j_1 z_2 + j_1 j_2 (z_3 - 1) = 0.$$
The proof is complete. □
Remark 5. 
Proposition 2 provides the best-possible lower and upper bounds on the reliability of a pair of $(n-j_1+1)$-out-of-$n$ and $(n-j_2+1)$-out-of-$n$ systems based on common components with possibly dependent lifetimes.
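For instance, with assumed survival levels $\bar F(x_1) = 0.9$ and $\bar F(x_2) = 0.6$ and the parameters $n = 10$, $j_1 = 3$, $j_2 = 5$, the bounds of Proposition 2 for such a pair of systems evaluate as follows:

```python
# Sharp bounds of Proposition 2 for the joint reliability of an
# (n-j1+1)-out-of-n and an (n-j2+1)-out-of-n system with shared, possibly
# dependent component lifetimes; the survival levels are illustrative.
n, j1, j2 = 10, 3, 5           # an 8-out-of-10 and a 6-out-of-10 system
Fb1, Fb2 = 0.9, 0.6            # F-bar(x_1), F-bar(x_2) with x_1 < x_2

upper = min(n * Fb1 / (n - j1 + 1), n * Fb2 / (n - j2 + 1), 1.0)
lower = max(1.0 - n * ((j2 - j1) * (1 - Fb1) + j1 * (1 - Fb2)) / (j1 * j2), 0.0)
print(lower, upper)            # 0.0666...  1.0
```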

Author Contributions

Conceptualization, A.O. and B.B.-O.; Methodology, A.O.; Validation, A.O. and B.B.-O.; Formal analysis, A.O. and B.B.-O.; Investigation, A.O. and B.B.-O.; Writing – original draft, A.O. and B.B.-O.; Supervision, A.O.; Project administration, A.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study.

Acknowledgments

The authors acknowledge Lodz University of Technology for providing open-access funding.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Balakrishnan, N.; Rao, C.R. (Eds.) Order Statistics: Applications. In Handbook of Statistics; North-Holland: Amsterdam, The Netherlands, 1998; Volume 17, pp. 497–671. [Google Scholar]
  2. David, H.A.; Nagaraja, H.N. Order Statistics, 3rd ed.; Wiley: New York, NY, USA, 2003; pp. 1–148. [Google Scholar]
  3. Rychlik, T. Projecting Statistical Functionals; Springer: New York, NY, USA, 2001; pp. 55–93. [Google Scholar]
  4. Maurer, W.; Margolin, B.H. The multivariate inclusion–exclusion formula and order statistics from dependent data. Ann. Stat. 1976, 4, 1190–1199. [Google Scholar] [CrossRef]
  5. Arnold, B.C.; Groeneveld, R.A. Bounds on expectations of linear systematic statistics based on dependent samples. Ann. Stat. 1979, 7, 220–223. [Google Scholar] [CrossRef]
  6. Papadatos, N. Expectation bounds on linear estimators from dependent samples. J. Stat. Plan. Inference 2001, 93, 17–27. [Google Scholar] [CrossRef]
  7. Kaluszka, M.; Okolewski, A. Bounds for L-statistics from weakly dependent samples of random length. Commun. Stat.-Theory Methods 2005, 34, 1899–1910. [Google Scholar] [CrossRef]
  8. Kaluszka, M.; Okolewski, A.; Szymanska, K. Sharp bounds for L-statistics from dependent samples of random length. J. Stat. Plan. Inference 2005, 127, 71–89. [Google Scholar] [CrossRef]
  9. Bertsimas, D.; Natarajan, K.; Teo, C.P. Tight bounds on expected order statistics. Probab. Eng. Inf. Sci. 2006, 20, 667–686. [Google Scholar] [CrossRef]
  10. Jasiński, K.; Rychlik, T. Bounds on dispersion of order statistics based on dependent symmetrically distributed random variables. J. Stat. Plan. Inference 2012, 142, 2421–2429. [Google Scholar] [CrossRef]
  11. Papadatos, N. Maximizing the expected range from dependent observations under mean–variance information. Statistics 2016, 50, 596–629. [Google Scholar] [CrossRef]
  12. Mallows, C.L. Extrema of expectations of uniform order statistics. SIAM Rev. 1969, 11, 410–411. [Google Scholar] [CrossRef]
  13. Lai, T.L.; Robbins, H. A class of dependent random variables and their maxima. Z. Wahrsch. Verw. Gebiete 1978, 48, 89–111. [Google Scholar] [CrossRef]
  14. Rychlik, T. Statistically extremal distributions for dependent samples. Stat. Probab. Lett. 1992, 13, 337–341. [Google Scholar] [CrossRef]
  15. Caraux, G.; Gascuel, O. Bounds on distribution functions of order statistics for dependent variates. Stat. Probab. Lett. 1992, 14, 103–105. [Google Scholar] [CrossRef]
  16. Rychlik, T. Bounds for expectations of L-estimates for dependent samples. Statistics 1993, 24, 1–7. [Google Scholar] [CrossRef]
  17. Rychlik, T. Bounds for expectations of L-estimates. Order Statistics: Theory and Methods. In Handbook of Statistics; Balakrishnan, N., Rao, C.R., Eds.; North-Holland: Amsterdam, The Netherlands, 1998; Volume 16, pp. 105–145. [Google Scholar]
  18. Papadatos, N. Distribution and expectation bounds on order statistics from possibly dependent variates. Stat. Probab. Lett. 2001, 4, 21–31. [Google Scholar] [CrossRef]
  19. Okolewski, A. Bounds on expectations of L-estimates for maximally and minimally stable samples. Statistics 2016, 50, 903–916. [Google Scholar] [CrossRef]
  20. Kemperman, J.H.B. Bounding moments of an order statistic when each k-tuple is independent. In Distributions with Given Marginals and Moment Problems; Beneš, V., Štěpán, J., Eds.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1997; pp. 291–304. [Google Scholar]
  21. Okolewski, A. Distribution bounds for order statistics when each k-tuple has the same piecewise uniform copula. Statistics 2017, 51, 969–987. [Google Scholar] [CrossRef]
  22. Mallows, C.L. Minimizing the expected minimum. Adv. Appl. Math. 2003, 31, 180–192. [Google Scholar] [CrossRef]
  23. Okolewski, A. Extremal properties of order statistic distributions for dependent samples with partially known multidimensional marginals. J. Multivar. Anal. 2017, 160, 1–9. [Google Scholar] [CrossRef]
  24. Cramér, H. Mathematical Methods of Statistics; Princeton University Press: Princeton, NJ, USA, 1946; p. 162. [Google Scholar]
  25. Wang, Y.H. Dependent random variables with independent subsets-II. Canad. Math. Bull. 1990, 33, 24–28. [Google Scholar] [CrossRef]
  26. Schmidt, J.; Siegel, A.; Srinivasan, A. Chernoff–Hoeffding bounds for applications with limited independence. SIAM J. Disc. Math. 1995, 8, 223–250. [Google Scholar] [CrossRef]
  27. Móri, T.F.; Székely, G.J. A note on the background of several Bonferroni–Galambos-type inequalities. J. Appl. Probab. 1985, 22, 836–843. [Google Scholar] [CrossRef]
  28. Kremer, E. Rating of largest claims and ECOMOR reinsurance treaties for large portfolios. Astin Bulletin 1982, 13, 47–56. [Google Scholar] [CrossRef]
  29. Marichal, J.L.; Mathonet, P.; Navarro, J.; Paroissin, C. Joint signature of two or more systems with applications to multistate systems made up of two-state components. Eur. J. Oper. Res. 2017, 263, 559–570. [Google Scholar] [CrossRef]
  30. Navarro, J.; Rychlik, T. Reliability and expectation bounds for coherent systems with exchangeable components. J. Multivar. Anal. 2007, 98, 102–113. [Google Scholar] [CrossRef]
  31. Preparata, F.P.; Shamos, M.I. Computational Geometry; Springer: Berlin, Germany, 1985; pp. 95–184. [Google Scholar]
  32. Matoušek, J. Lectures on Discrete Geometry; Springer: New York, NY, USA, 2002; pp. 105–107. [Google Scholar]
  33. Seidel, R. Convex hull computations. In Handbook of Discrete and Computational Geometry, 3rd ed.; Goodman, J.E., O’Rourke, J., Eds.; CRC Press: Boca Raton, FL, USA, 2017; pp. 687–703. [Google Scholar]
Figure 1. The set $\tilde M_2$ corresponding to $F_{3,5:10}$.
Figure 2. The set $\tilde M_2$ corresponding to $\bar F_{3,5:10}$.

