Article

Modeling Directional Monotonicity in Sequence with Copulas

by José Juan Quesada-Molina 1,† and Manuel Úbeda-Flores 2,*,†
1 Department of Applied Mathematics, University of Granada, 18071 Granada, Spain
2 Department of Mathematics, University of Almería, 04120 Almería, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Axioms 2024, 13(11), 785; https://doi.org/10.3390/axioms13110785
Submission received: 8 October 2024 / Revised: 8 November 2024 / Accepted: 11 November 2024 / Published: 14 November 2024
(This article belongs to the Special Issue New Perspectives in Fuzzy Sets and Their Applications)

Abstract:
In this paper, we present the concept of being monotonic in sequence according to a specific direction for a collection of random variables. This concept broadens the existing notions of multivariate dependence, such as sequential left-tail and right-tail dependence. Furthermore, we explore connections with other multivariate dependence concepts, highlight key properties, and analyze the new concept within the framework of copulas. Several examples are provided to demonstrate our findings.

1. Introduction

There are several approaches to describing dependence relationships among random variables, and, as noted by Jogdeo [1], “this is one of the most widely studied objects in probability and statistics”. For a multivariate model, it is essential to analyze the type of dependence structure that it incorporates to determine its suitability for a specific application or dataset. This study focuses on positive and negative dependence, with positive dependence defined as any criterion that mathematically captures the tendency of components within an n-variate random vector to display concordant values [2]. According to Barlow and Proschan [3], concepts of (positive) dependence become notably more diverse and intricate in the multivariate context compared to the bivariate one.
Several generalizations of bivariate dependence notions to the multivariate setting have been explored in the literature (see, for example, [4,5]). In this paper, our objective is to extend some established multivariate dependence concepts—both positive and negative—such as orthant dependence and tail monotonicity, to examine their relationships with other dependence structures, and to outline various properties.
Aggregation functions are crucial in numerous applications, including fuzzy set theory and fuzzy logic [6], among other fields. Copulas, which are multivariate distribution functions with uniform univariate margins on [ 0 , 1 ] , represent a specific kind of conjunctive aggregation function. They are commonly applied in aggregation, as they ensure stability, meaning that minor input errors lead to minor output errors [7]. This paper investigates the new dependence concepts through the lens of copulas.
The paper is organized as follows. We begin with preliminaries on (multivariate) dependence properties in Section 2. Section 3 introduces the concept of monotonic in sequence random variables in a given direction, alongside properties and examples. Section 4 explores the concept through copulas, focusing first on the bivariate and trivariate cases, and then extending to the general case. Finally, conclusions are presented in Section 5.

2. Preliminaries

In the following, we use the terms “increasing” (or “decreasing”) interchangeably with “nondecreasing” (or “nonincreasing”), unless specified otherwise. Additionally, a subset A ⊆ ℝ^d, with d ≥ 1, is called an increasing set if its indicator function χ_A is increasing.
Let n ≥ 2 be a natural number. Consider a probability space (Ω, ℱ, P), where Ω is a nonempty set, ℱ is a σ-algebra of subsets of Ω, and P is a probability measure on ℱ. Let X = (X_1, X_2, …, X_n) be a random vector from Ω to ℝ^n, composed of n random variables X_i : Ω → ℝ, i = 1, 2, …, n. In this context, we assume that the random vector X is continuous.
We now summarize several established concepts of multivariate dependence.
Orthant dependence according to a direction is defined as follows [8]: Let α = (α_1, α_2, …, α_n) be a vector in ℝ^n such that |α_i| = 1 for all i = 1, 2, …, n. An n-variate random vector X (or its joint distribution function F) is said to be orthant positively (respectively, orthant negatively) dependent according to the direction α, denoted PD(α) (respectively, ND(α)), if

$$P\left[\bigcap_{i=1}^{n}(\alpha_i X_i > x_i)\right] \ge \prod_{i=1}^{n} P[\alpha_i X_i > x_i] \quad \text{for all } x_i \in \mathbb{R} \tag{1}$$

(or, respectively, with the inequality in (1) reversed).
For certain choices of the direction α, such as α = 1 = (1, 1, …, 1) or α = −1 = (−1, −1, …, −1), we retrieve well-known dependence concepts, including positive quadrant dependence and positive upper orthant dependence (for further details, see [2,5,9,10,11]). Additional related concepts in multivariate total positivity by direction can be found in [12].
For a pair of random variables (X_1, X_2), two bivariate positive dependence notions are introduced in [13]: left-tail decreasing (LTD) and right-tail increasing (RTI). Specifically, X_2 is said to be left-tail decreasing (or right-tail increasing) in X_1 if P[X_2 ≤ x_2 | X_1 ≤ x_1] (or P[X_2 > x_2 | X_1 > x_1]) is a nonincreasing (or nondecreasing) function of x_1 for all x_2—the negative dependence counterparts are defined by reversing the inequalities. For instance, in the LTD concept above, the probability of X_2 being less than or equal to any x_2 given that X_1 ≤ x_1 increases as x_1 decreases, indicating positive dependence. Multivariate extensions of RTI and LTD are presented in [14,15]. A random vector X = (X_1, X_2, …, X_n) is said to be left-tail decreasing in sequence (LTDS) if

$$P[X_i \le x_i \mid X_1 \le x_1, \ldots, X_{i-1} \le x_{i-1}]$$

is decreasing in x_1, …, x_{i−1} for all x_i, i ∈ {2, …, n}; X is right-tail increasing in sequence (RTIS) if

$$P[X_i > x_i \mid X_1 > x_1, \ldots, X_{i-1} > x_{i-1}]$$

is increasing in x_1, …, x_{i−1} for all x_i, i ∈ {2, …, n}. For the properties of these notions and their relationships to other multivariate dependence concepts, see [2,16].
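To make the bivariate notions concrete, the following minimal sketch (ours, not part of the original exposition) estimates P[X_2 ≤ x_2 | X_1 ≤ x_1] by Monte Carlo for a positively correlated Gaussian pair, which is known to be LTD; the estimates decrease as x_1 grows. The correlation, thresholds, and sample size are arbitrary choices.

```python
import numpy as np

# Monte Carlo check of LTD for a Gaussian pair with rho = 0.7:
# P[X2 <= x2 | X1 <= x1] should be nonincreasing in x1 for every fixed x2.
rng = np.random.default_rng(0)
rho = 0.7
sample = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=200_000)

x2 = 0.5  # fixed threshold for X2
for x1 in (-1.0, 0.0, 1.0, 2.0):
    cond = sample[:, 0] <= x1
    p = np.mean(sample[cond, 1] <= x2)       # estimate of P[X2 <= x2 | X1 <= x1]
    print(f"x1 = {x1:+.1f}:  P[X2<=x2 | X1<=x1] ~ {p:.3f}")
# The printed estimates decrease in x1, consistent with LTD.
```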
In the following section, we generalize these multivariate dependence concepts according to a direction.

3. Monotonic in Sequence According to a Direction

In this section, we introduce the concepts of left-tail and right-tail dependence in sequence according to a direction for a set of random variables, generalizing the LTDS and RTIS concepts presented in Section 2. We also provide a characterization of these notions and examine several of their key properties.

3.1. Definition and Characterization

Definition 1.
Let X_1, X_2, …, X_n be n random variables, and let α = (α_1, α_2, …, α_n) ∈ ℝ^n be such that |α_i| = 1 for all i = 1, 2, …, n. The random variables X_1, X_2, …, X_n are said to be increasing (or decreasing) in sequence according to the direction α—denoted by IS(α) (or DS(α))—if, for any x_i ∈ ℝ,

$$P[\alpha_i X_i > x_i \mid \alpha_1 X_1 > x_1, \ldots, \alpha_{i-1} X_{i-1} > x_{i-1}]$$

is nondecreasing (or nonincreasing) in x_1, x_2, …, x_{i−1} ∈ ℝ for all i = 2, 3, …, n.
From here, we focus primarily on the IS( α ) concept. Parallel results apply to DS( α ), so we omit them for brevity. Additionally, we refer to a random vector X = ( X 1 , X 2 , , X n ) —or its joint distribution function—as being IS( α ).
The IS(α) concept allows for an analysis of how high or low values in one variable can directionally influence other variables in the sequence. This concept extends bivariate dependence to the multivariate case and provides a directional analysis that is not fully covered by previous approaches. To be precise, it implies that large values of the variables X_j, for indices j ∈ J, correspond to small values of the variables X_j with j ∈ I∖J, where I = {1, 2, …, n} and J = {i ∈ I : α_i = 1}. Consequently, if X is IS(α), then, for all i = 2, 3, …, n and any x_i ∈ ℝ, we have that

$$P\big[X_i > x_i \,\big|\, X_j > x_j \text{ for } j \in J_1,\; X_j < x_j \text{ for } j \in I_1 \setminus J_1\big]$$

is nondecreasing (or nonincreasing) in x_1, x_2, …, x_{i−1} if α_i = 1 (or α_i = −1), where I_1 = {1, 2, …, i−1} and J_1 = {j ∈ I_1 : α_j = 1} (or J_1 = {j ∈ I_1 : α_j = −1}). Furthermore, observe also that the IS(α) concept generalizes the LTDS and RTIS concepts introduced in Section 2: IS(1) is RTIS and IS(−1) is LTDS.
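Definition 1 can also be probed empirically. Below is a rough Monte Carlo sketch (ours, with arbitrary parameters): for the direction α = (1, −1, 1) and a Gaussian vector whose covariance is chosen so that αX = (X_1, −X_2, X_3) has a precision matrix with negative off-diagonal entries (which, as discussed later in Example 1 and Corollary 1, guarantees IS(α)), the estimated conditional probabilities grow along an increasing grid of thresholds.

```python
import numpy as np

# Monte Carlo probe of Definition 1 for alpha = (1, -1, 1): estimate
# P[a3*X3 > x3 | a1*X1 > x1, a2*X2 > x2] on an increasing grid of thresholds;
# under IS(alpha), the estimates should be nondecreasing in (x1, x2).
rng = np.random.default_rng(1)
alpha = np.array([1.0, -1.0, 1.0])
cov = np.array([[ 1.0, -0.6,  0.5],
                [-0.6,  1.0, -0.4],
                [ 0.5, -0.4,  1.0]])
X = rng.multivariate_normal(np.zeros(3), cov, size=500_000)
Y = X * alpha                                  # Y_i = alpha_i * X_i

x3 = 0.0
for t in (-1.0, -0.5, 0.0, 0.5):               # common threshold x1 = x2 = t
    cond = (Y[:, 0] > t) & (Y[:, 1] > t)
    print(f"x1 = x2 = {t:+.1f}:  estimate = {np.mean(Y[cond, 2] > x3):.3f}")
```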
Next, we provide a useful characterization of the IS( α ) concept, though a preliminary definition is needed.
Definition 2.
Let X_1, X_2, …, X_n be n random variables, and let α = (α_1, α_2, …, α_n) ∈ ℝ^n be such that |α_i| = 1 for all i = 1, 2, …, n. The random variables X_1, X_2, …, X_n are said to be stochastically increasing (or decreasing) in sequence according to the direction α—denoted by SIS(α) (or SDS(α))—if

$$E\left[f(\alpha_i X_i) \,\middle|\, \bigcap_{j=1}^{i-1}(\alpha_j X_j > x_j)\right] \tag{2}$$

is nondecreasing (or nonincreasing) in x_1, x_2, …, x_{i−1} for any real-valued, nondecreasing function f and for each i, with 2 ≤ i ≤ n. We will also say that the random vector X = (X_1, X_2, …, X_n)—or its joint distribution function—is SIS(α) (or SDS(α)).
In the next result, we characterize the IS(α) concept in terms of SIS(α). Similar results can be formulated for DS(α) and SDS(α).
Theorem 1.
A random vector is IS(α) if, and only if, it is SIS(α).
Proof. 
Suppose the random vector X = (X_1, X_2, …, X_n) is SIS(α), and consider, for each i and any x_i, the function

$$f(\alpha_i X_i) = \begin{cases} 0, & \alpha_i X_i \le x_i,\\ 1, & \alpha_i X_i > x_i.\end{cases}$$
Then, we have that

$$E\left[f(\alpha_i X_i) \,\middle|\, \bigcap_{j=1}^{i-1}(\alpha_j X_j > x_j)\right] = P\left[\alpha_i X_i > x_i \,\middle|\, \bigcap_{j=1}^{i-1}(\alpha_j X_j > x_j)\right]$$

is nondecreasing in x_1, x_2, …, x_{i−1} and, hence, X is IS(α).
Conversely, if X is IS(α), then we have that

$$P\left[\alpha_i X_i > x_i \,\middle|\, \bigcap_{j=1}^{i-1}(\alpha_j X_j > x_j)\right] = E\left[\chi_{\{\alpha_i X_i > x_i\}} \,\middle|\, \bigcap_{j=1}^{i-1}(\alpha_j X_j > x_j)\right]$$
is nondecreasing in x 1 , x 2 , , x i 1 . Thus, for any simple function f that is non-negative and nondecreasing, the expression in (2) remains nondecreasing in x 1 , x 2 , , x i 1 . Utilizing the monotone convergence theorem confirms that this property holds for all nondecreasing (and non-negative) functions. Consequently, we conclude that X is SIS( α ), thereby completing the proof. □

3.2. Relationships with Other Multivariate Dependence Concepts

In this subsection, we explore the connections between the IS( α ) concept and several established multivariate dependence notions in relation to a specific direction. The initial result demonstrates the link between the IS( α ) and PD( α ) concepts, as expressed in (1).
Proposition 1.
If the random vector X = ( X 1 , X 2 , , X n ) is IS(α), then it is PD(α).
Proof. 
Let x_1, x_2, …, x_n ∈ ℝ. Since X is IS(α), for every 2 ≤ i ≤ n, we have

$$P\left[\alpha_i X_i > x_i \,\middle|\, \bigcap_{j=1}^{i-1}(\alpha_j X_j > x_j)\right] \ge P\left[\alpha_i X_i > x_i \,\middle|\, \bigcap_{j=1}^{i-1}(\alpha_j X_j > x_j')\right]$$

whenever x_j' ≤ x_j for all j = 1, 2, …, i−1. Therefore,

$$P\left[\alpha_i X_i > x_i \,\middle|\, \bigcap_{j=1}^{i-1}(\alpha_j X_j > x_j)\right] \ge P[\alpha_i X_i > x_i]$$

by letting x_j' → −∞ for j = 1, 2, …, i−1. Thus,

$$P\left[\bigcap_{i=1}^{n}(\alpha_i X_i > x_i)\right] = P[\alpha_1 X_1 > x_1] \cdot \prod_{i=2}^{n} P\left[\alpha_i X_i > x_i \,\middle|\, \bigcap_{j=1}^{i-1}(\alpha_j X_j > x_j)\right] \ge \prod_{i=1}^{n} P[\alpha_i X_i > x_i],$$
i.e., X is PD( α ). □
For the forthcoming results, we review several multivariate dependence concepts based on a specific direction. For any vectors x = (x_1, …, x_n) and y = (y_1, …, y_n) in ℝ^n, we define x ∨ y = (max{x_1, y_1}, …, max{x_n, y_n}) and x ∧ y = (min{x_1, y_1}, …, min{x_n, y_n}).
Definition 3
([12]). Let X be an n-dimensional random vector with joint density function f, and let α ∈ ℝ^n be such that |α_i| = 1 for all i = 1, 2, …, n. The random vector X is referred to as being multivariate totally positive of order two according to the direction α—denoted by MTP_2(α)—if

$$f(\alpha(\mathbf{x} \vee \mathbf{y}))\, f(\alpha(\mathbf{x} \wedge \mathbf{y})) \ge f(\alpha \mathbf{x})\, f(\alpha \mathbf{y})$$

holds for all x = (x_1, …, x_n) and y = (y_1, …, y_n) in ℝ^n, where αx denotes the componentwise product (α_1 x_1, …, α_n x_n).
It follows from Proposition 2 and Theorem 3 in [12] that an n-dimensional random vector X is MTP_2(α) if, and only if, αX = (α_1 X_1, …, α_n X_n) is MTP_2(1)—or simply MTP_2.
Definition 4
([17]). Let X be an n-dimensional random vector, and let α ∈ ℝ^n be such that |α_i| = 1 for all i = 1, 2, …, n. The random vector X is said to be increasing (or decreasing) according to the direction α—denoted by I(α) (or D(α))—if

$$P[\alpha_1 X_1 > x_1, \alpha_2 X_2 > x_2, \ldots, \alpha_n X_n > x_n \mid \alpha_1 X_1 > x_1', \alpha_2 X_2 > x_2', \ldots, \alpha_n X_n > x_n']$$

is nondecreasing (or nonincreasing) in x_1', x_2', …, x_n' for all x_1, x_2, …, x_n.
It is important to note that the notion I(1) (or D(−1)) generalizes the established concept of RCSI (or LCSD), as discussed in [18] for the bivariate case and [5] for the multivariate context.
If a random vector X is MTP 2 ( α ) , then it is known to be I( α ) [17]. The next result illustrates the relationship between the concepts I( α ) and IS( α ).
Proposition 2.
If the random vector X is I(α), then it is IS(α).
Proof. 
Let x_1, x_2, …, x_n, x_1', x_2', …, x_n' ∈ ℝ be such that x_i ≤ x_i' for 1 ≤ i ≤ n. Since X is I(α), for any i, 2 ≤ i ≤ n, we have

$$P\left[\alpha_i X_i > x_i \,\middle|\, \bigcap_{j=1}^{i-1}(\alpha_j X_j > x_j)\right] = P\left[\bigcap_{j=1}^{n}(\alpha_j X_j > t_j) \,\middle|\, \bigcap_{j=1}^{n}(\alpha_j X_j > s_j)\right] \le P\left[\bigcap_{j=1}^{n}(\alpha_j X_j > t_j) \,\middle|\, \bigcap_{j=1}^{n}(\alpha_j X_j > s_j')\right] = P\left[\alpha_i X_i > x_i \,\middle|\, \bigcap_{j=1}^{i-1}(\alpha_j X_j > x_j')\right],$$

where

$$t_j = \begin{cases} x_i, & j = i,\\ -\infty, & j \ne i,\end{cases} \qquad s_j = \begin{cases} x_j, & j = 1, 2, \ldots, i-1,\\ -\infty, & j = i, \ldots, n,\end{cases} \qquad s_j' = \begin{cases} x_j', & j = 1, 2, \ldots, i-1,\\ -\infty, & j = i, \ldots, n.\end{cases}$$
Therefore, X is IS( α ), which concludes the proof. □
It should be noted that the converse of Proposition 2 does not hold, as illustrated in Example 9 in Section 4.
From Proposition 2, and given that MTP 2 ( α ) implies I( α ), we can derive the following result.
Corollary 1.
If the random vector X is MTP 2 ( α ) , then it is IS(α).

3.3. Properties

The following results establish properties of the IS(α) family, including results for independent random variables, subsets of IS(α) random vectors, the concatenation of IS(α) random vectors, weak convergence, etc.
Proposition 3.
A set of independent random variables is IS(α) for any α ∈ ℝ^n.
Proof. 
Let X_1, X_2, …, X_n be n independent random variables. For any α ∈ ℝ^n, we have

$$P\left[\alpha_i X_i > x_i \,\middle|\, \bigcap_{j=1}^{i-1}(\alpha_j X_j > x_j)\right] = P[\alpha_i X_i > x_i]$$

for all i = 2, …, n and any x_i, whence it is immediate that the random variables are IS(α). □
Proposition 4.
Any subset of random variables that are IS(α) is also IS(α*), where α* is the vector formed by omitting the components of α that correspond to the random variables not present in the subset.
Proof. 
Let X = (X_1, X_2, …, X_n) be a vector of n random variables that is IS(α), and let X_k = (X_{i_1}, X_{i_2}, …, X_{i_k}) represent a subset of X. If x_j approaches −∞ for each j ∈ {1, …, i−1} with j ∉ {i_1, i_2, …, i_k} in the expression

$$P\left[\alpha_i X_i > x_i \,\middle|\, \bigcap_{j=1}^{i-1}(\alpha_j X_j > x_j)\right]$$

for i ∈ {i_1, i_2, …, i_k}, then we find that

$$P\left[\alpha_i X_i > x_i \,\middle|\, \bigcap_{i_j < i}(\alpha_{i_j} X_{i_j} > x_{i_j})\right]$$

is nondecreasing in all x_{i_j} with i_j < i, for any x_i. Consequently, X_k is IS((α_{i_1}, α_{i_2}, …, α_{i_k})). □
Proposition 5.
If the random vector X = ( X 1 , X 2 , , X n ) is IS ( α ) , and g 1 , g 2 , , g n are n strictly increasing real-valued functions, then the random vector ( g 1 ( X 1 ) , g 2 ( X 2 ) , , g n ( X n ) ) also satisfies the property of being IS ( α ) .
Proof. 
Let y_1, y_2, …, y_n, y_1', y_2', …, y_n' ∈ ℝ be such that y_i ≤ y_i' for i = 1, 2, …, n. Since X is IS(α) and α_j g_j^{-1}(α_j y_j) ≤ α_j g_j^{-1}(α_j y_j') for every j = 1, 2, …, n, we have

$$P\left[\alpha_i g_i(X_i) > y_i \,\middle|\, \bigcap_{j=1}^{i-1}(\alpha_j g_j(X_j) > y_j)\right] = P\left[\alpha_i X_i > \alpha_i g_i^{-1}(\alpha_i y_i) \,\middle|\, \bigcap_{j=1}^{i-1}\big(\alpha_j X_j > \alpha_j g_j^{-1}(\alpha_j y_j)\big)\right] \le P\left[\alpha_i X_i > \alpha_i g_i^{-1}(\alpha_i y_i) \,\middle|\, \bigcap_{j=1}^{i-1}\big(\alpha_j X_j > \alpha_j g_j^{-1}(\alpha_j y_j')\big)\right] = P\left[\alpha_i g_i(X_i) > y_i \,\middle|\, \bigcap_{j=1}^{i-1}(\alpha_j g_j(X_j) > y_j')\right],$$

i.e., (g_1(X_1), g_2(X_2), …, g_n(X_n)) is IS(α). □
For the subsequent result, given α = (α_1, α_2, …, α_n) ∈ ℝ^n and β = (β_1, β_2, …, β_m) ∈ ℝ^m, we define (α, β) to represent the concatenation, which is expressed as

$$(\alpha, \beta) = (\alpha_1, \ldots, \alpha_n, \beta_1, \ldots, \beta_m) \in \mathbb{R}^{n+m};$$
this definition similarly applies to random vectors.
Proposition 6.
If X = ( X 1 , X 2 , , X n ) is IS ( α ) and Y = ( Y 1 , Y 2 , , Y m ) is IS ( β ) , with X and Y being independent, then the combined random vector ( X , Y ) is IS ( α , β ) .
Proof. 
Let Z = (X, Y) and γ = (α, β). Then, for any i, with 2 ≤ i ≤ n, and any z_i ∈ ℝ, we have

$$P\left[\gamma_i Z_i > z_i \,\middle|\, \bigcap_{j=1}^{i-1}(\gamma_j Z_j > z_j)\right] = P\left[\alpha_i X_i > z_i \,\middle|\, \bigcap_{j=1}^{i-1}(\alpha_j X_j > z_j)\right],$$

and, thus, it is nondecreasing in z_1, z_2, …, z_{i−1}.
Consider now any i, with n+1 ≤ i ≤ n+m, and any z_i ∈ ℝ. Taking into account that X and Y are independent, we have

$$P\left[\gamma_i Z_i > z_i \,\middle|\, \bigcap_{j=1}^{i-1}(\gamma_j Z_j > z_j)\right] = P\left[\beta_{i-n} Y_{i-n} > z_i \,\middle|\, \bigcap_{j=1}^{i-n-1}(\beta_j Y_j > z_{n+j})\right],$$

which is nondecreasing in z_1, z_2, …, z_{i−1}. Therefore, (X, Y) is IS((α, β)). □
The following result pertains to a closure property of the IS ( α ) family of multivariate distributions, as well as the DS ( α ) family.
Proposition 7.
The collection of IS ( α ) distribution functions is closed with respect to weak convergence.
Proof. 
Let {X_m}_{m∈ℕ} be a sequence of IS(α) n-variate random vectors such that H_m is the distribution function of X_m for each m, and let X be a vector of n random variables with joint distribution function H such that H_m →_w H as m tends to +∞, where →_w denotes weak convergence. We prove that X is IS(α).
Given m ≥ 1, consider the n-variate random vector X_m = (X_1^m, X_2^m, …, X_n^m), which, by hypothesis, is IS(α). From Theorem 1, X_m is SIS(α). Since H_m →_w H as m tends to +∞, by using the Helly–Bray theorem (see, e.g., [19]), we have

$$E\left[f(\alpha_i X_i^m) \,\middle|\, \bigcap_{j=1}^{i-1}(\alpha_j X_j^m > x_j)\right] \longrightarrow E\left[f(\alpha_i X_i) \,\middle|\, \bigcap_{j=1}^{i-1}(\alpha_j X_j > x_j)\right]$$

as m tends to +∞, for any real-valued and nondecreasing function f and every i, with 2 ≤ i ≤ n, whence X is SIS(α). By using Theorem 1 again, we conclude that X is IS(α). □

3.4. Examples

We provide several examples that demonstrate the applicability of the dependence concepts studied in this work.
Example 1.
Consider the random vector X = (X_1, X_2, …, X_n) which follows a multivariate normal distribution N(μ, Σ), where μ = (μ_1, μ_2, …, μ_n) is the mean vector and Σ denotes the covariance matrix. Let r_{ij} denote the (i, j) entry of Σ^{-1}, and suppose that r_{ij} < 0 for all pairs (i, j) with 1 ≤ i < j ≤ n; a similar analysis can be conducted for r_{ij} > 0. The probability density function (PDF) of X is given by

$$f(x_1, x_2, \ldots, x_n) = (2\pi)^{-n/2} |\Sigma|^{-1/2} \exp\left(-\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} r_{ij}(x_i - \mu_i)(x_j - \mu_j)\right).$$

For each pair (i, j) with 1 ≤ i < j ≤ n, we can rewrite

$$f(x_1, x_2, \ldots, x_n) = f_1\big(\mathbf{x}^{(i)}\big)\, f_2\big(\mathbf{x}^{(j)}\big)\, \exp(-r_{ij} x_i x_j),$$

where x^{(k)} = (x_1, …, x_{k−1}, x_{k+1}, …, x_n) for k = i, j, and f_1, f_2 are suitable functions.
Now, consider x_i, x_j, x_i', x_j' such that x_i ≤ x_i' and x_j ≤ x_j', along with (α_i, α_j), where |α_k| = 1 for k = i, j. We can derive

$$\begin{aligned} & f(x_1, \ldots, \alpha_i x_i', \ldots, \alpha_j x_j', \ldots, x_n)\, f(x_1, \ldots, \alpha_i x_i, \ldots, \alpha_j x_j, \ldots, x_n) \\ &\qquad - f(x_1, \ldots, \alpha_i x_i, \ldots, \alpha_j x_j', \ldots, x_n)\, f(x_1, \ldots, \alpha_i x_i', \ldots, \alpha_j x_j, \ldots, x_n) \\ &\quad = f_1\big(\mathbf{x}^{(i)}\big) f_1\big(\mathbf{x}'^{(i)}\big) f_2\big(\mathbf{x}^{(j)}\big) f_2\big(\mathbf{x}'^{(j)}\big) \cdot \Big[\exp\big(-r_{ij}\alpha_i\alpha_j (x_i' x_j' + x_i x_j)\big) - \exp\big(-r_{ij}\alpha_i\alpha_j (x_i x_j' + x_i' x_j)\big)\Big]. \tag{3} \end{aligned}$$

Since

$$\alpha_i\alpha_j (x_i' x_j' + x_i x_j - x_i x_j' - x_i' x_j) = \alpha_i\alpha_j (x_i' - x_i)(x_j' - x_j) \ge 0$$

holds provided that α_iα_j > 0, and −r_{ij} > 0, the expression in (3) is non-negative if and only if α_iα_j > 0. Consequently, for any vector α = (α_1, α_2, …, α_n) ∈ ℝ^n with |α_i| = 1 for each i = 1, 2, …, n, the random vector X exhibits the property of MTP_2(α) if, and only if, α_iα_j > 0 for every selected pair (i, j)—refer to Theorem 3 in [12]. By Corollary 1, we conclude that X is IS(α) for both α = 1 and α = −1.
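As a numerical companion to this example (our sketch, with mean vector 0 and an arbitrary diagonally dominant precision matrix), one can spot-check the MTP_2(1) inequality of Definition 3 directly on the log-density:

```python
import numpy as np

# Spot-check of the MTP2(1) inequality for a Gaussian log-density whose
# precision matrix has negative off-diagonal entries (as in Example 1).
rng = np.random.default_rng(2)
P = np.array([[ 2.0, -0.5, -0.3],
              [-0.5,  2.0, -0.4],
              [-0.3, -0.4,  2.0]])      # Sigma^{-1}; r_ij < 0 for i != j
def log_f(x):                           # log-density up to an additive constant
    return -0.5 * x @ P @ x

worst = np.inf
for _ in range(10_000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    lhs = log_f(np.maximum(x, y)) + log_f(np.minimum(x, y))
    rhs = log_f(x) + log_f(y)
    worst = min(worst, lhs - rhs)
print("minimum of lhs - rhs over trials:", worst)   # >= 0 up to rounding
```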
Example 2.
Consider the random vector X = (X_1, X_2, …, X_n) following a Dirichlet distribution Dir(γ), where γ = (γ_1, γ_2, …, γ_n; γ_{n+1}), with γ_i > 0 for all i = 1, 2, …, n and γ_{n+1} ≥ 1. The probability density function (PDF) for this distribution is expressed as

$$f(x_1, x_2, \ldots, x_n) = \frac{\Gamma\big(\sum_{i=1}^{n+1}\gamma_i\big)}{\prod_{i=1}^{n+1}\Gamma(\gamma_i)} \left(\prod_{i=1}^{n} x_i^{\gamma_i - 1}\right)\left(1 - \sum_{i=1}^{n} x_i\right)^{\gamma_{n+1} - 1},$$

where x_i ≥ 0 and ∑_{i=1}^n x_i ≤ 1. For any chosen pair (i, j) with 1 ≤ i < j ≤ n and any real numbers x_i, x_j, x_i', x_j' such that x_i ≤ x_i' and x_j ≤ x_j', we have the following relationship:

$$\begin{aligned} & f(x_1, \ldots, x_i', \ldots, x_j', \ldots, x_n)\, f(x_1, \ldots, x_i, \ldots, x_j, \ldots, x_n) - f(x_1, \ldots, x_i, \ldots, x_j', \ldots, x_n)\, f(x_1, \ldots, x_i', \ldots, x_j, \ldots, x_n) \\ &\quad = \left(\frac{\Gamma\big(\sum_{i=1}^{n+1}\gamma_i\big)}{\prod_{i=1}^{n+1}\Gamma(\gamma_i)}\right)^{2} \left(\prod_{\substack{k=1\\ k \ne i,j}}^{n} x_k^{2(\gamma_k - 1)}\right) x_i^{\gamma_i - 1} (x_i')^{\gamma_i - 1} x_j^{\gamma_j - 1} (x_j')^{\gamma_j - 1} \\ &\qquad \cdot \left\{\left[\Big(1 - \sum_{\substack{k=1\\ k \ne i,j}}^{n} x_k - x_i' - x_j'\Big)\Big(1 - \sum_{\substack{k=1\\ k \ne i,j}}^{n} x_k - x_i - x_j\Big)\right]^{\gamma_{n+1} - 1} - \left[\Big(1 - \sum_{\substack{k=1\\ k \ne i,j}}^{n} x_k - x_i - x_j'\Big)\Big(1 - \sum_{\substack{k=1\\ k \ne i,j}}^{n} x_k - x_i' - x_j\Big)\right]^{\gamma_{n+1} - 1}\right\}. \tag{4} \end{aligned}$$

Since we have

$$\Big(1 - {\textstyle\sum}_{k \ne i,j}\, x_k - x_i' - x_j'\Big)\Big(1 - {\textstyle\sum}_{k \ne i,j}\, x_k - x_i - x_j\Big) - \Big(1 - {\textstyle\sum}_{k \ne i,j}\, x_k - x_i - x_j'\Big)\Big(1 - {\textstyle\sum}_{k \ne i,j}\, x_k - x_i' - x_j\Big) = x_i x_j' + x_i' x_j - x_i x_j - x_i' x_j' = -(x_i' - x_i)(x_j' - x_j) \le 0,$$

and given that γ_{n+1} ≥ 1, it follows that (4) is non-positive. Thus, we conclude that the random vector X exhibits the MRR_2(1) property, indicating that X satisfies the multivariate reverse rule of order two—the negative counterpart of the concept in Definition 3, obtained by reversing the inequality sign—according to the direction 1. Consequently, by applying the corresponding negative dependence framework, analogous to what is provided in Corollary 1 for the positive dependence framework, we deduce that X is DS(1).
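The pairwise inequality (4) can likewise be verified numerically. The sketch below (ours; the parameters are arbitrary, with γ_4 ≥ 1) evaluates the Dirichlet log-density at the four points appearing in (4) for random configurations in the simplex:

```python
import numpy as np
from math import lgamma

# Spot-check of inequality (4): for gamma_{n+1} >= 1, the "reverse rule"
# difference of density products is <= 0 (checked here on the log scale).
gam = np.array([0.8, 1.5, 2.0, 3.0])    # (gamma_1, gamma_2, gamma_3; gamma_4)
def log_f(x):                           # log-density of Dir(gam), x in the open simplex
    c = lgamma(gam.sum()) - sum(lgamma(g) for g in gam)
    return c + np.sum((gam[:-1] - 1) * np.log(x)) + (gam[-1] - 1) * np.log(1 - x.sum())

rng = np.random.default_rng(3)
worst = -np.inf
for _ in range(10_000):
    xi, xip, xj, xjp, x3 = rng.uniform(0.01, 0.2, size=5)   # pair (i,j) = (1,2); x3 fixed
    lo = lambda a, b: log_f(np.array([a, b, x3]))
    diff = (lo(max(xi, xip), max(xj, xjp)) + lo(min(xi, xip), min(xj, xjp))
            - lo(xi, xj) - lo(xip, xjp))
    worst = max(worst, diff)
print("maximum of the log-difference over trials:", worst)  # <= 0 (MRR2)
```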
Example 3.
Let X = (X_1, X_2, …, X_n) be a random vector following a multinomial distribution characterized by parameters N (the number of trials) and p = (p_1, p_2, …, p_n) (the probabilities of the events), with the constraints p_i ≥ 0 for each i = 1, 2, …, n and 0 < ∑_{i=1}^n p_i < 1. The joint probability mass function (PMF) is given by

$$f(x_1, x_2, \ldots, x_n) = \frac{N!}{\big(\prod_{i=1}^{n} x_i!\big)\big(N - \sum_{i=1}^{n} x_i\big)!}\, \left(\prod_{i=1}^{n} p_i^{x_i}\right)\left(1 - \sum_{i=1}^{n} p_i\right)^{N - \sum_{i=1}^{n} x_i},$$

where ∑_{i=1}^n x_i ≤ N. It is notable that the multinomial distribution can be viewed as the conditional distribution of independent Poisson random variables, conditioned on their total. As established in Theorem 4.3 of [16] and Theorem 3 of [12], we find that the random vector X satisfies the MRR_2(1) property. Consequently, we can conclude that X is also DS(1).
Remark 1.
It is worth noting that, by applying reasoning analogous to that outlined in Example 3, any random vector that follows a multivariate Hypergeometric distribution—essentially the conditional distribution of independent binomial random variables given their total—also qualifies as DS ( 1 ) .
Next, we provide an illustrative example showing the application of Proposition 7 related to weak convergence.
Example 4.
Let X = (X_1, X_2, …, X_n) represent a random vector with joint distribution function defined as

$$H_\theta(\mathbf{x}) = \exp\left(-\left(\sum_{i=1}^{n} e^{-\theta x_i}\right)^{1/\theta}\right)$$

for all x ∈ [−∞, +∞]^n and θ ≥ 1. This family of distribution functions serves as a multivariate generalization of Type B bivariate extreme-value distributions (see [20,21]). By invoking Theorem 2.11 in [22], which addresses log-convex functions [23], we can assert that the random vector X is IS(1). We consider the sequence of distribution functions {H_θ}_{θ∈ℕ}. As θ approaches infinity, we derive H(x) = min(F_1(x_1), F_2(x_2), …, F_n(x_n)), where F_i are the one-dimensional marginals of H_θ for i = 1, 2, …, n. Hence, by virtue of Proposition 7, it follows that the limit H is IS(1) too.
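The convergence used here is easy to visualize; the short sketch below (ours, at an arbitrary evaluation point) computes H_θ for growing θ alongside the limit min(F_1(x_1), …, F_n(x_n)) with Gumbel margins F_i(x_i) = exp(−e^{−x_i}):

```python
import numpy as np

# H_theta converges pointwise to min(F_1(x_1), ..., F_n(x_n)) as theta grows.
def H(x, theta):
    return np.exp(-np.sum(np.exp(-theta * np.asarray(x))) ** (1.0 / theta))

def H_limit(x):
    F = np.exp(-np.exp(-np.asarray(x)))   # Gumbel margins F_i(x_i) = exp(-e^{-x_i})
    return F.min()

x = [0.3, 1.2, -0.4]
for theta in (1, 2, 5, 20, 100):
    print(theta, H(x, theta))
print("limit:", H_limit(x))
```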

4. Monotonic in Sequence According to a Direction and Copulas

Copulas serve as an essential tool for analyzing the positive dependence characteristics of continuous random vectors. They encapsulate the dependence structure of the corresponding multivariate distribution function, are independent of the marginal distributions, and provide scale-invariant measures of dependence. Additionally, copulas act as a foundational element for constructing families of distributions (see [24]). In this section, our objective is to examine continuous n-copulas associated with random vectors that are IS( α ). For simplicity, we focus on the bivariate and trivariate cases.
Let us recall some key concepts related to copulas. For n ≥ 2, an n-dimensional copula (shortened to n-copula) is defined as the restriction to [0,1]^n of a continuous n-dimensional distribution function whose univariate margins are uniform on [0,1]. The significance of copulas in statistics is highlighted by the following theorem of Abe Sklar [25]: Let X = (X_1, X_2, …, X_n) be a random vector with joint distribution function F and one-dimensional marginal distributions F_1, F_2, …, F_n. There exists an n-copula C (which is determined on ×_{i=1}^n Range F_i) such that

$$F(\mathbf{x}) = C(F_1(x_1), F_2(x_2), \ldots, F_n(x_n)) \quad \text{for all } \mathbf{x} \in [-\infty, +\infty]^n.$$

Moreover, if F_1, F_2, …, F_n are continuous, then C is uniquely defined. A comprehensive proof of this result can be found in [26]. Thus, copulas serve to connect joint distribution functions with their respective one-dimensional margins. For an overview of copulas, see [21,27], and references discussing positive dependence properties through copulas can be found in [5,21,22,28,29].
Let Π^n represent the n-copula of independent random variables (also known as the product n-copula), defined as Π^n(u) = ∏_{i=1}^n u_i for all u = (u_1, u_2, …, u_n) ∈ [0,1]^n.
For any n-copula C, the following inequalities hold:

$$W^n(\mathbf{u}) = \max\left(0, \sum_{i=1}^{n} u_i - n + 1\right) \le C(\mathbf{u}) \le \min\{u_1, u_2, \ldots, u_n\} = M^n(\mathbf{u})$$

for all u in [0,1]^n. While M^n is an n-copula for all n ≥ 2, W^n qualifies as an n-copula only when n = 2.
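These bounds are easy to confirm numerically for any candidate copula; here is a quick grid check (our sketch) using a trivariate member of the FGM family recalled later in Example 6. The parameter λ and the grid resolution are arbitrary.

```python
import numpy as np
from itertools import product

# Grid check of the Frechet-Hoeffding bounds W^n <= C <= M^n, n = 3.
def C(u, lam=0.8):
    u = np.asarray(u)
    return u.prod() * (1.0 + lam * (1.0 - u).prod())   # FGM 3-copula

grid = np.linspace(0.0, 1.0, 21)
for point in product(grid, repeat=3):
    u = np.asarray(point)
    W = max(0.0, u.sum() - len(u) + 1)                 # lower bound W^3(u)
    M = u.min()                                        # upper bound M^3(u)
    assert W - 1e-12 <= C(u) <= M + 1e-12
print("bounds verified on the grid")
```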
Let X be a random vector with associated n-copula C, and consider the random pair (X_i, X_j) corresponding to the components i and j (where i < j) of X. Define C_{ij} as the (i, j)-margin of C:

$$C_{ij}(u_i, u_j) = C(1, \ldots, 1, u_i, 1, \ldots, 1, u_j, 1, \ldots, 1),$$

for all 1 ≤ i < j ≤ n, which is the 2-copula of the random pair (X_i, X_j). Additionally, C_{1,…,i} denotes the i-copula C(u_1, …, u_i, 1, …, 1).
For a random vector (X_1, X_2, …, X_n) with associated n-copula C, the survival n-copula associated with C, denoted by Ĉ, is defined as

$$\widehat{C}(\mathbf{u}) = P\big[F_1(X_1) > 1 - u_1,\ F_2(X_2) > 1 - u_2,\ \ldots,\ F_n(X_n) > 1 - u_n\big]$$

for all u ∈ [0,1]^n, where F_1, …, F_n are the one-dimensional marginal distribution functions of the X_i.
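For n = 2, this definition is equivalent to the familiar identity Ĉ(u, v) = u + v − 1 + C(1 − u, 1 − v) (see, e.g., [21]). The sketch below (ours, with an FGM test copula) computes Ĉ this way and confirms that it has uniform margins, as every 2-copula must:

```python
import numpy as np

# Survival 2-copula via the identity  C_hat(u, v) = u + v - 1 + C(1-u, 1-v).
def C(u, v):                             # test copula: FGM with lambda = 0.5
    return u * v * (1.0 + 0.5 * (1.0 - u) * (1.0 - v))

def C_hat(u, v):
    return u + v - 1.0 + C(1.0 - u, 1.0 - v)

u = np.linspace(0.0, 1.0, 11)
assert np.allclose(C_hat(u, 1.0), u)     # uniform margins: C_hat(u, 1) = u
assert np.allclose(C_hat(1.0, u), u)     # and C_hat(1, v) = v
print(C_hat(0.3, 0.7))
```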
We begin by characterizing the IS(α) concept in terms of n-copulas, first in the general case and then, for clarity, in the bivariate and trivariate cases.

4.1. The General Case

To characterize the IS(α) concept in terms of n-copulas, we examine the flipping transformations of copulas. Recall that, for an n-copula C, the flipping of C in the i-th coordinate (referred to as the i-flip of C) is defined as the function C^i : [0,1]^n → [0,1] given by, for all u ∈ [0,1]^n,

$$C^{i}(\mathbf{u}) = C(u_1, \ldots, u_{i-1}, 1, u_{i+1}, \ldots, u_n) - C(u_1, \ldots, u_{i-1}, 1 - u_i, u_{i+1}, \ldots, u_n)$$
(see [30]).
Moreover, the j-flipping transformation of the i-flip of C is denoted by C^{ij}, and is given, for all u ∈ [0,1]^n, by

$$\begin{aligned} C^{ij}(\mathbf{u}) ={}& C(u_1, \ldots, u_{i-1}, 1, u_{i+1}, \ldots, u_{j-1}, 1, u_{j+1}, \ldots, u_n) \\ &- C(u_1, \ldots, u_{i-1}, 1 - u_i, u_{i+1}, \ldots, u_{j-1}, 1, u_{j+1}, \ldots, u_n) \\ &- C(u_1, \ldots, u_{i-1}, 1, u_{i+1}, \ldots, u_{j-1}, 1 - u_j, u_{j+1}, \ldots, u_n) \\ &+ C(u_1, \ldots, u_{i-1}, 1 - u_i, u_{i+1}, \ldots, u_{j-1}, 1 - u_j, u_{j+1}, \ldots, u_n). \end{aligned}$$
Similarly, we denote by C^{ijk} the k-flipping transformation of the function C^{ij}, and so on.
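These transformations are straightforward to implement. The sketch below (our illustration, again with an FGM test copula) codes the two coordinate flips of a 2-copula and verifies two consequences that follow directly from the definitions: a flip is an involution, and flipping the survival copula in both coordinates returns C.

```python
import numpy as np

# Coordinate flips of a 2-copula, straight from the definition.
def flip1(D):
    return lambda u, v: D(1.0, v) - D(1.0 - u, v)

def flip2(D):
    return lambda u, v: D(u, 1.0) - D(u, 1.0 - v)

def C(u, v):                                    # test copula: FGM, lambda = 0.5
    return u * v * (1.0 + 0.5 * (1.0 - u) * (1.0 - v))

def C_hat(u, v):                                # survival copula of C
    return u + v - 1.0 + C(1.0 - u, 1.0 - v)

u, v = np.meshgrid(np.linspace(0, 1, 21), np.linspace(0, 1, 21))
assert np.allclose(flip1(flip1(C))(u, v), C(u, v))      # the 1-flip is an involution
assert np.allclose(flip2(flip1(C_hat))(u, v), C(u, v))  # (C_hat)^{12} recovers C
print("flip identities verified")
```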
Now, the following characterization can be established.
Theorem 2.
Let X_1, X_2, …, X_n be n random variables with associated n-copula C, and let α ∈ ℝ^n be such that |α_i| = 1 for all i = 1, 2, …, n. Let J = {1, 2, …, n}, and J_i = {j ∈ J : j ≤ i, α_j = −1} for i = 1, 2, …, n. Then, C is IS(α) if, and only if, for every i ∈ {2, …, n}, the function G given by

$$G(u_1, \ldots, u_i) = \frac{\widehat{C}_{1,\ldots,i}^{\,J_i}(u_1, \ldots, u_i)}{\widehat{C}_{1,\ldots,i-1}^{\,J_{i-1}}(u_1, \ldots, u_{i-1})}$$

is nonincreasing in all u_j for j ∈ {1, 2, …, i−1}∖J_{i−1}, and nondecreasing in all u_j for j ∈ J_{i−1}, for all u_i, where Ĉ_{1,…,i}^{J_i} denotes the function obtained from the survival copula Ĉ_{1,…,i} of (X_1, …, X_i) after flipping consecutively in all indices j in J_i.
In Theorem 3 of [8], it is shown that a 3-copula C is PD(α) for every direction α if, and only if, C = Π^3. This result can be generalized to any dimension n ≥ 2. Since every IS(α) n-copula is PD(α) for every α (recall Proposition 1), and Π^n is IS(α) for every α by Proposition 3, the following result easily follows.
Corollary 2.
An n-copula C is IS(α) for every direction α if, and only if, C = Π^n.
Now, we provide two examples of applications of Theorem 2.
Example 5.
For all n ≥ 2, the n-copula M^n is IS(α) only for α = 1 and α = −1.
Example 6.
Let {C_{n,λ}}_{λ∈[−1,1]} be an n-dimensional generalization of the one-parameter Farlie–Gumbel–Morgenstern (FGM, for short) family of 2-copulas, which is given by

$$C_{n,\lambda}(\mathbf{u}) = \left(\prod_{i=1}^{n} u_i\right)\left(1 + \lambda \prod_{i=1}^{n} (1 - u_i)\right)$$

for all u ∈ [0,1]^n (see [21,27]). For α = (α_1, α_2, …, α_n), let J = {i ∈ {1, 2, …, n} : α_i = −1}. After some elementary operations, we have that C_{n,λ} is IS(α) for λ ∈ [0,1] if |J|—the cardinality of J—is even, and C_{n,λ} is IS(α) for λ ∈ [−1,0] if |J| is odd.

4.2. The Bivariate Case

We now study the bivariate case; the following characterization is a consequence of Theorem 2.
Corollary 3.
Let (X, Y) be a pair of random variables with associated 2-copula C. Then, C is:
i. IS(1,1) if, and only if,
$$\frac{1 - u - v + C(u,v)}{1 - u}$$
is increasing in u for all v;
ii. IS(1,−1) if, and only if,
$$\frac{v - C(u,v)}{1 - u}$$
is increasing in u for all v;
iii. IS(−1,1) if, and only if,
$$\frac{u - C(u,v)}{u}$$
is decreasing in u for all v;
iv. IS(−1,−1) if, and only if,
$$\frac{C(u,v)}{u}$$
is decreasing in u for all v.
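These four characterizations are easy to automate. The rough checker below (ours) evaluates the relevant ratio on a grid and tests the required monotonicity in u; being grid-based, it can refute the property but never prove it, and the resolution and tolerance are arbitrary.

```python
import numpy as np

# Grid-based test of the four conditions of Corollary 3 for a 2-copula C.
def is_direction(C, a1, a2, m=101):
    u = np.linspace(1e-6, 1 - 1e-6, m)[:, None]    # rows: u; columns: v
    v = np.linspace(1e-6, 1 - 1e-6, m)[None, :]
    if (a1, a2) == (1, 1):
        r, increasing = (1 - u - v + C(u, v)) / (1 - u), True
    elif (a1, a2) == (1, -1):
        r, increasing = (v - C(u, v)) / (1 - u), True
    elif (a1, a2) == (-1, 1):
        r, increasing = (u - C(u, v)) / u, False
    else:                                          # (-1, -1)
        r, increasing = C(u, v) / u, False
    d = np.diff(r, axis=0)                         # differences along u
    return bool((d >= -1e-9).all()) if increasing else bool((d <= 1e-9).all())

Pi = lambda u, v: u * v                            # independence copula
for a in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    print(a, is_direction(Pi, *a))                 # True in every direction
```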
We provide several examples for the bivariate case.
Example 7.
The 2-copula W^2 is IS(α) for α = (1, −1) and α = (−1, 1).
Example 8.
Let {C_δ}_{δ∈[−1,1]} be the Ali–Mikhail–Haq one-parameter family of 2-copulas [31] given by

$$C_\delta(u,v) = \frac{uv}{1 - \delta(1-u)(1-v)}$$

for all (u, v) ∈ [0,1]^2. It is easy to prove that C_δ is IS(1,1) and IS(−1,−1) for δ ∈ [0,1], and IS(1,−1) and IS(−1,1) for δ ∈ [−1,0].
The following example shows that the converse of Proposition 2 does not hold in general.
Example 9.
Consider the 2-copula
$$C(u,v) = uv\big(1 + (1-u)(1-v)^2\big)$$

for all (u, v) ∈ [0,1]^2. C belongs to the family of 2-copulas studied in [32]. Then, we have that C is IS(1,1). Moreover, we have that C is not I(1,1).
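Both claims of this example are visible numerically (our sketch; the evaluation points are our choice): the ratio of Corollary 3(i) is increasing in u on a grid, whereas the conditional probability of Definition 4 decreases along v′, so I(1,1) fails.

```python
import numpy as np

C = lambda u, v: u * v * (1.0 + (1.0 - u) * (1.0 - v) ** 2)   # copula of Example 9
Cbar = lambda u, v: 1.0 - u - v + C(u, v)                     # joint survival function

# IS(1,1): (1 - u - v + C(u,v)) / (1 - u) is increasing in u (Corollary 3 i).
u = np.linspace(0.0, 0.99, 100)[:, None]
v = np.linspace(0.0, 1.0, 101)[None, :]
ratio = Cbar(u, v) / (1.0 - u)
print("IS(1,1) holds on grid:", bool((np.diff(ratio, axis=0) >= -1e-9).all()))

# Not I(1,1): P[U>u, V>v | U>u', V>v'] should be nondecreasing in (u', v'),
# but with (u, v) = (0.8, 0.5) and u' = 0.2 it decreases from v' = 0.6 to 0.9.
R = lambda vp: Cbar(0.8, max(0.5, vp)) / Cbar(0.2, vp)
print("R(0.6) =", R(0.6), " R(0.9) =", R(0.9))                # R(0.6) > R(0.9)
```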

4.3. The Trivariate Case

In the next result, we study the IS( α ) concept in terms of 3-copulas.
Theorem 3.
Let (X, Y, Z) be a random triple with associated 3-copula C. Then, C is:
i. IS(1,1,1) if, and only if,
$$\frac{1 - u - v + C_{12}(u,v)}{1 - u}$$
is increasing in u for all v and
$$\frac{1 - u - v - w + C_{12}(u,v) + C_{13}(u,w) + C_{23}(v,w) - C(u,v,w)}{1 - u - v + C_{12}(u,v)}$$
is increasing in (u, v) for all w;
ii. IS(1,1,−1) if, and only if,
$$\frac{1 - u - v + C_{12}(u,v)}{1 - u}$$
is increasing in u for all v and
$$\frac{w - C_{13}(u,w) - C_{23}(v,w) + C(u,v,w)}{1 - u - v + C_{12}(u,v)}$$
is increasing in (u, v) for all w;
iii. IS(1,−1,1) if, and only if,
$$\frac{v - C_{12}(u,v)}{1 - u}$$
is increasing in u for all v and
$$\frac{v - C_{12}(u,v) - C_{23}(v,w) + C(u,v,w)}{v - C_{12}(u,v)}$$
is increasing in u and decreasing in v for all w;
iv. IS(−1,1,1) if, and only if,
$$\frac{u - C_{12}(u,v)}{u}$$
is decreasing in u for all v and
$$\frac{u - C_{12}(u,v) - C_{13}(u,w) + C(u,v,w)}{u - C_{12}(u,v)}$$
is decreasing in u and increasing in v for all w;
v. IS(1,−1,−1) if, and only if,
$$\frac{v - C_{12}(u,v)}{1 - u}$$
is increasing in u for all v and
$$\frac{C_{23}(v,w) - C(u,v,w)}{v - C_{12}(u,v)}$$
is increasing in u and decreasing in v for all w;
vi. IS(−1,−1,1) if, and only if,
$$\frac{C_{12}(u,v)}{u}$$
is decreasing in u for all v and
$$\frac{C_{12}(u,v) - C(u,v,w)}{C_{12}(u,v)}$$
is decreasing in (u, v) for all w;
vii. IS(−1,1,−1) if, and only if,
$$\frac{u - C_{12}(u,v)}{u}$$
is decreasing in u for all v and
$$\frac{C_{13}(u,w) - C(u,v,w)}{u - C_{12}(u,v)}$$
is decreasing in u and increasing in v for all w;
viii. IS(−1,−1,−1) if, and only if,
$$\frac{C_{12}(u,v)}{u}$$
is decreasing in u for all v and
$$\frac{C(u,v,w)}{C_{12}(u,v)}$$
is decreasing in (u, v) for all w.
We provide an example for the trivariate case.
Example 10.
Let C_{2,λ} be the one-parameter FGM family of 2-copulas—recall Example 6. Consider the 3-copula C_λ given by C_λ(u,v,w) = w·C_{2,λ}(u,v) for all (u, v, w) ∈ [0,1]^3. Then, we have that C_λ is IS(1,1,1), IS(1,1,−1), IS(−1,−1,1), and IS(−1,−1,−1) for λ ∈ [0,1], and C_λ is IS(1,−1,1), IS(1,−1,−1), IS(−1,1,1), and IS(−1,1,−1) for λ ∈ [−1,0].
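As a closing numerical illustration (ours; λ and w are arbitrary), the case α = (1,1,1) of Theorem 3 can be verified directly for C_λ: the first ratio is the FGM pair condition, and, because the third coordinate enters as an independent factor, the second ratio collapses to the constant 1 − w.

```python
import numpy as np

# Checking Example 10 for lambda = 0.7 and the direction (1, 1, 1).
lam = 0.7
C2 = lambda u, v: u * v * (1.0 + lam * (1.0 - u) * (1.0 - v))   # FGM 2-copula
C3 = lambda u, v, w: w * C2(u, v)                               # Example 10

u = np.linspace(1e-6, 1 - 1e-6, 50)[:, None]
v = np.linspace(1e-6, 1 - 1e-6, 50)[None, :]

# First condition of Theorem 3 i: (1 - u - v + C12) / (1 - u) increasing in u.
r1 = (1.0 - u - v + C2(u, v)) / (1.0 - u)
print("pair condition:", bool((np.diff(r1, axis=0) >= -1e-9).all()))

# Second condition: with C13 = w*u and C23 = w*v, the trivariate ratio equals
# 1 - w, hence it is constant (so nondecreasing) in (u, v); checked at w = 0.4.
w = 0.4
num = 1 - u - v - w + C2(u, v) + w * u + w * v - C3(u, v, w)
den = 1 - u - v + C2(u, v)
print("ratio equals 1 - w:", bool(np.allclose(num / den, 1.0 - w)))
```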

5. Conclusions

The concept of being monotonic in sequence according to a direction, denoted by IS(α), constitutes a significant advancement in multivariate dependence analysis because it allows the modeling of dependencies that are not easily captured by previous concepts, such as simple positive or negative dependence, and it extends several known multivariate dependence concepts. There is clear potential for applications in fields such as financial risk analysis, where directional dependence relationships could help model the correlation of extreme events between financial assets. Another relevant area could be biostatistics, where the progression of certain conditions or responses may be influenced by a sequence of biological or environmental variables with marked directional dependence. Sequence dependencies may also have implications in network analysis, such as neural or data networks, where the direction in which nodes are affected significantly impacts information or state propagation. This new concept seems to address the limitations of traditional dependence measures, especially in configurations where dependencies are asymmetric or directional.
We have established certain relationships with other multivariate dependence concepts, for instance, the implication from I( α ) to IS( α ) and, subsequently, from IS( α ) to PD( α ). Additionally, we have highlighted key properties and conducted an examination of this novel concept in terms of n-copulas—specifically for the bivariate and trivariate scenarios for a better understanding. Copulas play a fundamental role in this paper. Section 4 examines how copulas capture the multivariate dependence structure independently of the marginal distributions, making it easier to analyze the IS( α ) concept in terms of directional properties.
Further exploration involving analogous extensions of well-known dependence concepts discussed in this paper, the investigation of associated orders—akin to the approach taken in [33]—and defining new measures of association based on the concepts of dependence studied here are the subject of ongoing investigation.

Author Contributions

Conceptualization, J.J.Q.-M.; methodology, J.J.Q.-M. and M.Ú.-F.; validation, J.J.Q.-M. and M.Ú.-F.; investigation, J.J.Q.-M. and M.Ú.-F.; writing—original draft preparation, M.Ú.-F.; writing—review and editing, M.Ú.-F.; visualization, J.J.Q.-M. and M.Ú.-F.; supervision, J.J.Q.-M. and M.Ú.-F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science and Innovation (Spain), grant number PID2021-122657OB-I00. The second author acknowledges the support of PPIT-UAL, Junta de Andalucía–ERDF 2021–2027, Objective RS01.1, Program 54.A.

Data Availability Statement

Data is contained within the article.

Acknowledgments

The authors are grateful for the comments provided by three anonymous reviewers.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Jogdeo, K. Concepts of dependence. In Encyclopedia of Statistical Sciences; Kotz, S., Johnson, N.L., Eds.; Wiley: New York, NY, USA, 1982; Volume 1, pp. 324–334. [Google Scholar]
  2. Colangelo, A.; Scarsini, M.; Shaked, M. Some notions of multivariate positive dependence. Insur. Math. Econ. 2005, 37, 13–26. [Google Scholar] [CrossRef]
  3. Barlow, R.E.; Proschan, F. Statistical Theory of Reliability and Life Testing; To Begin With: Silver Spring, MD, USA, 1981. [Google Scholar]
  4. Colangelo, A.; Müller, A.; Scarsini, M. Positive dependence and weak convergence. J. Appl. Prob. 2006, 43, 48–59. [Google Scholar] [CrossRef]
  5. Joe, H. Multivariate Models and Dependence Concepts; Chapman & Hall: London, UK, 1997. [Google Scholar]
  6. Beliakov, G.; Pradera, A.; Calvo, T. Aggregation Functions: A Guide for Practitioners; Studies in Fuzziness and Soft Computing; Springer: Berlin, Germany, 2007; Volume 221. [Google Scholar]
  7. Grabisch, M.; Marichal, J.L.; Mesiar, R.; Pap, E. Aggregation Functions; Encyclopedia of Mathematics and Its Applications; Cambridge University Press: Cambridge, UK, 2009; Volume 127. [Google Scholar]
  8. Quesada-Molina, J.J.; Úbeda-Flores, M. Directional dependence of random vectors. Inf. Sci. 2012, 215, 67–74. [Google Scholar] [CrossRef]
  9. Kimeldorf, G.; Sampson, A.R. A framework for positive dependence. Ann. Inst. Stat. Math. 1989, 41, 31–45. [Google Scholar] [CrossRef]
  10. Lehmann, E.L. Some concepts of dependence. Ann. Math. Stat. 1966, 37, 1137–1153. [Google Scholar] [CrossRef]
  11. Shaked, M. A general theory of some positive dependence notions. J. Multivar. Anal. 1982, 12, 199–218. [Google Scholar] [CrossRef]
  12. de Amo, E.; Quesada-Molina, J.J.; Úbeda-Flores, M. Total positivity and dependence of order statistics. AIMS Math. 2023, 8, 30717–30730. [Google Scholar] [CrossRef]
  13. Esary, J.D.; Proschan, F. Relationships among some concepts of bivariate dependence. Ann. Math. Stat. 1972, 43, 651–655. [Google Scholar] [CrossRef]
  14. Ahmed, A.-H.N.; Langberg, N.A.; Leon, R.V.; Proschan, F. Two Concepts of Positive Dependence with Applications in Multivariate Analysis; Technical Report M486; Department of Statistics, Florida State University: Tallahassee, FL, USA, 1978. [Google Scholar]
  15. Ebrahimi, N.; Ghosh, M. Multivariate negative dependence. Commun. Stat. A-Theory Methods 1981, 10, 307–337. [Google Scholar]
  16. Block, H.W.; Savits, T.H.; Ting, M.-L. Some concepts of multivariate dependence. Commun. Stat. A-Theory Methods 1982, 10, 749–762. [Google Scholar] [CrossRef]
  17. Quesada-Molina, J.J.; Úbeda-Flores, M. Monotonic random variables according to a direction. Axioms 2024, 13, 275. [Google Scholar] [CrossRef]
  18. Harris, R. A multivariate definition for increasing hazard rate distribution functions. Ann. Math. Stat. 1970, 41, 713–717. [Google Scholar] [CrossRef]
  19. Billingsley, P. Convergence of Probability Measures; John Wiley & Sons: New York, NY, USA, 1999. [Google Scholar]
  20. Johnson, N.L.; Kotz, S. Distributions in Statistics: Continuous Multivariate Distributions; John Wiley & Sons: New York, NY, USA, 1972. [Google Scholar]
  21. Nelsen, R.B. An Introduction to Copulas, 2nd ed.; Springer: New York, NY, USA, 2006. [Google Scholar]
  22. Müller, A.; Scarsini, M. Archimedean copulae and positive dependence. J. Multivar. Anal. 2005, 93, 434–445. [Google Scholar] [CrossRef]
  23. Kingman, J.F.C. A convexity property of positive matrices. Quart. J. Math. 1961, 12, 283–284. [Google Scholar] [CrossRef]
  24. Fisher, N.I. Copulas. In Encyclopedia of Statistical Sciences; Kotz, S., Read, C.B., Banks, D.L., Eds.; Wiley: New York, NY, USA, 1997; Volume 1, pp. 159–163. [Google Scholar]
  25. Sklar, A. Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Stat. Univ. Paris 1959, 8, 229–231. [Google Scholar]
  26. Úbeda-Flores, M.; Fernández-Sánchez, J. Sklar’s theorem: The cornerstone of the Theory of Copulas. In Copulas and Dependence Models with Applications; Úbeda Flores, M., de Amo Artero, E., Durante, F., Fernández Sánchez, J., Eds.; Springer: Cham, Switzerland, 2017; pp. 241–258. [Google Scholar]
  27. Durante, F.; Sempi, C. Principles of Copula Theory; CRC: Boca Raton, FL, USA, 2016. [Google Scholar]
  28. Navarro, J.; Pellerey, F.; Sordo, M.A. Weak dependence notions and their mutual relationships. Mathematics 2021, 9, 81. [Google Scholar] [CrossRef]
  29. Wei, Z.; Wang, T.; Panichkitkosolkul, W. Dependence and association concepts through copulas. In Modeling Dependence in Econometrics—Advances in Intelligent Systems and Computing; Huynh, V.N., Kreinovich, V., Sriboonchitta, S., Eds.; Springer: Cham, Switzerland, 2014; Volume 251, pp. 113–126. [Google Scholar]
  30. Durante, F.; Fernández-Sánchez, J.; Quesada-Molina, J.J. Flipping of multivariate aggregation functions. Fuzzy Sets Syst. 2014, 252, 66–75. [Google Scholar] [CrossRef]
  31. Ali, M.M.; Mikhail, N.N.; Haq, M.S. A class of bivariate distributions including the bivariate logistic. J. Multivar. Anal. 1978, 8, 405–412. [Google Scholar] [CrossRef]
  32. Rodríguez-Lallena, J.A.; Úbeda-Flores, M. A new class of bivariate copulas. Stat. Probab. Lett. 2004, 66, 315–325. [Google Scholar] [CrossRef]
  33. de Amo, E.; Rodríguez-Griñolo, M.R.; Úbeda-Flores, M. Directional dependence orders of random vectors. Mathematics 2024, 12, 419. [Google Scholar] [CrossRef]