1. Introduction
There are several approaches to describing dependence relationships among random variables and, as noted by Jogdeo [1], "this is one of the most widely studied objects in probability and statistics". For a multivariate model, it is essential to analyze the type of dependence structure that it incorporates to determine its suitability for a specific application or dataset. This study focuses on positive and negative dependence, with positive dependence defined as any criterion that mathematically captures the tendency of components within an n-variate random vector to display concordant values [2]. According to Barlow and Proschan [3], concepts of (positive) dependence become notably more diverse and intricate in the multivariate context compared to the bivariate one.
Several generalizations of bivariate dependence notions to the multivariate setting have been explored in the literature (see, for example, [4,5]). In this paper, our objective is to extend some established multivariate dependence concepts—both positive and negative—such as orthant dependence and tail monotonicity, to examine their relationships with other dependence structures, and to outline various properties.
Aggregation functions are crucial in numerous applications, including fuzzy set theory and fuzzy logic [6], among other fields. Copulas, which are multivariate distribution functions with univariate margins uniform on [0, 1], represent a specific kind of conjunctive aggregation function. They are commonly applied in aggregation because they ensure stability, meaning that minor input errors lead to minor output errors [7]. This paper investigates the new dependence concepts through the lens of copulas.
The paper is organized as follows. We begin with preliminaries on (multivariate) dependence properties in Section 2. Section 3 introduces the concept of monotonic in sequence random variables in a given direction, alongside properties and examples. Section 4 explores the concept through copulas, first in the general case and then in the bivariate and trivariate cases. Finally, conclusions are presented in Section 5.
2. Preliminaries
In the following, we use the terms "increasing" (or "decreasing") interchangeably with "nondecreasing" (or "nonincreasing"), unless specified otherwise. Additionally, a subset A ⊆ ℝ^n is called an increasing set if its indicator function is increasing.
Let n be a natural number. Consider a probability space (Ω, 𝒜, P), where Ω is a nonempty set, 𝒜 is a σ-algebra of subsets of Ω, and P is a probability measure on 𝒜. Let X = (X_1, X_2, …, X_n) be a random vector from Ω to ℝ^n, composed of n random variables X_i, i = 1, 2, …, n. In this context, we assume that the random vector X is continuous.
We now summarize several established concepts of multivariate dependence.
Orthant dependence according to a direction is defined as follows [8]: Let α = (α_1, α_2, …, α_n) be a vector in ℝ^n such that α_i ∈ {−1, 1} for all i. An n-variate random vector X (or its joint distribution function F) is said to be orthant positively (respectively, orthant negatively) dependent according to the direction α, denoted PD(α) (respectively, ND(α)), if

P[α_1X_1 > x_1, α_2X_2 > x_2, …, α_nX_n > x_n] ≥ ∏_{i=1}^{n} P[α_iX_i > x_i] for all (x_1, x_2, …, x_n) ∈ ℝ^n (1)

(or, respectively, with the inequality in (1) reversed).
For certain choices of the direction α, such as α = (1, 1, …, 1) or α = (−1, −1, …, −1), we retrieve well-known dependence concepts, including positive quadrant dependence and positive upper orthant dependence (for further details, see [2,5,9,10,11]). Additional related concepts on multivariate total positivity by direction can be found in [12].
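These orthant conditions are easy to probe numerically. The following sketch assumes a bivariate Farlie–Gumbel–Morgenstern (FGM) copula C_θ(u, v) = uv[1 + θ(1 − u)(1 − v)] with an arbitrarily chosen θ = 0.7 (the FGM family appears again later in the paper) and checks, on a grid, both PD((1, 1)) (positive quadrant dependence) and ND((1, −1)) for the same pair:

```python
# Numerical check of directional orthant dependence for the FGM copula.
# theta = 0.7 is an arbitrary positive choice for illustration.

def fgm(u, v, theta=0.7):
    """Bivariate FGM copula C_theta(u, v)."""
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

grid = [i / 20 for i in range(1, 20)]  # interior grid points of (0, 1)

# PD((1,1)) in copula terms: P(U > u, V > v) >= P(U > u) P(V > v),
# i.e., 1 - u - v + C(u, v) >= (1 - u)(1 - v), equivalently C(u, v) >= u v.
pqd = all(fgm(u, v) >= u * v - 1e-12 for u in grid for v in grid)

# ND((1,-1)): P(U > u, V <= v) <= P(U > u) P(V <= v),
# i.e., v - C(u, v) <= (1 - u) v, which is again equivalent to C >= u v.
nd_mixed = all(v - fgm(u, v) <= (1.0 - u) * v + 1e-12 for u in grid for v in grid)

print(pqd, nd_mixed)
```

Both checks succeed for θ ≥ 0, illustrating how positive dependence in the direction (1, 1) coexists with negative dependence in the mixed direction (1, −1).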
For a pair of random variables (X, Y), two bivariate positive dependence notions are introduced in [13]: left-tail decreasing (LTD) and right-tail increasing (RTI). Specifically, Y is said to be left-tail decreasing (or right-tail increasing) in X if P[Y ≤ y | X ≤ x] (or P[Y > y | X > x]) is a nonincreasing (or nondecreasing) function of x for all y—the negative dependence counterparts are defined by reversing the monotonicity. For instance, in the LTD concept above, the probability of Y being less than or equal to any y, given that X ≤ x, increases as x decreases, indicating positive dependence. Multivariate extensions of RTI and LTD are presented in [14,15]. A random vector X = (X_1, X_2, …, X_n) is said to be left-tail decreasing in sequence (LTDS) if P[X_i ≤ x_i | X_1 ≤ x_1, …, X_{i−1} ≤ x_{i−1}] is decreasing in x_1, …, x_{i−1} for all x_i, i = 2, 3, …, n; X is right-tail increasing in sequence (RTIS) if P[X_i > x_i | X_1 > x_1, …, X_{i−1} > x_{i−1}] is increasing in x_1, …, x_{i−1} for all x_i, i = 2, 3, …, n. For the properties of these notions and their relationships to other multivariate dependence concepts, see [2,16].
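For continuous pairs, the LTD condition can be expressed through the copula as C(u, v)/u being nonincreasing in u (this copula form is standard; copulas are reviewed in Section 4). A minimal sketch, again assuming the FGM copula with an arbitrary θ = 0.5:

```python
# LTD(V|U) via the copula: P(V <= v | U <= u) = C(u, v) / u
# should be nonincreasing in u when the pair is positively dependent.

def fgm(u, v, theta=0.5):
    """Bivariate FGM copula (theta = 0.5 is an arbitrary choice)."""
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

us = [i / 50 for i in range(1, 50)]  # increasing grid of u values in (0, 1)
ltd_ok = True
for v in (0.2, 0.5, 0.8):
    cond = [fgm(u, v) / u for u in us]  # P(V <= v | U <= u) along the grid
    ltd_ok &= all(a >= b - 1e-12 for a, b in zip(cond, cond[1:]))

print(ltd_ok)
```

For the FGM family, C(u, v)/u = v[1 + θ(1 − u)(1 − v)], which is visibly nonincreasing in u whenever θ ≥ 0, so the grid check confirms the LTD property.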
In the following section, we generalize these multivariate dependence concepts according to a direction.
3. Monotonic in Sequence According to a Direction
In this section, we introduce the concepts of left-tail and right-tail dependence in sequence according to a direction for a set of random variables, generalizing the LTDS and RTIS concepts presented in
Section 2. We also provide a characterization of these notions and examine several of their key properties.
3.1. Definition and Characterization
Definition 1. Let X_1, X_2, …, X_n be n random variables, and let α = (α_1, α_2, …, α_n) be a vector in ℝ^n such that α_i ∈ {−1, 1} for all i. The random variables are said to be increasing (or decreasing) in sequence according to the direction α—denoted by IS(α) (or DS(α))—if, for any i ∈ {2, 3, …, n},

P[α_iX_i > x_i | α_1X_1 > x_1, …, α_{i−1}X_{i−1} > x_{i−1}]

is nondecreasing (or nonincreasing) in x_1, …, x_{i−1} for all x_i.

From here, we focus primarily on the IS(α) concept. Parallel results apply to DS(α), so we omit them for brevity. Additionally, we refer to a random vector X = (X_1, X_2, …, X_n)—or its joint distribution function—as being IS(α).
The IS(α) concept allows for an analysis of how high or low values in one variable can directionally influence other variables in the sequence. This concept extends bivariate dependence to the multivariate case and provides a directional analysis that is not fully covered by previous approaches. To be precise, it implies that large values of the variables X_j, for indices j such that α_j = 1, correspond to small values of the variables X_k, for indices k such that α_k = −1, with j, k ∈ {1, 2, …, n}. Consequently, if X is IS(α), then, for all i ∈ {2, 3, …, n} and any x_i, we have that P[X_i > x_i | α_1X_1 > x_1, …, α_{i−1}X_{i−1} > x_{i−1}] is nondecreasing (or nonincreasing) in x_1, …, x_{i−1} if α_i = 1 (or α_i = −1). Furthermore, observe also that the IS(α) concept generalizes the LTDS and RTIS concepts introduced in Section 2: IS(1, 1, …, 1) is RTIS and IS(−1, −1, …, −1) is LTDS.
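In the bivariate case with α = (1, 1), Definition 1 reduces to P(X_2 > x_2 | X_1 > x_1) being nondecreasing in x_1. For a pair with copula C and uniform margins, that conditional probability equals (1 − u − v + C(u, v))/(1 − u), so the condition can be checked on a grid. A sketch with the FGM copula (θ = 0.5, an arbitrary positive value):

```python
# IS((1,1)) in the bivariate uniform case:
# r(u, v) = P(V > v | U > u) = (1 - u - v + C(u, v)) / (1 - u)
# should be nondecreasing in u for every fixed v.

def fgm(u, v, theta=0.5):
    """Bivariate FGM copula (arbitrary illustrative choice)."""
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

def cond_surv(u, v):
    return (1.0 - u - v + fgm(u, v)) / (1.0 - u)

us = [i / 50 for i in range(0, 50)]  # u in [0, 0.98]
is_11 = True
for v in (0.1, 0.5, 0.9):
    vals = [cond_surv(u, v) for u in us]
    is_11 &= all(b >= a - 1e-12 for a, b in zip(vals, vals[1:]))

print(is_11)
```

For the FGM family, r(u, v) simplifies to (1 − v)(1 + θuv), which is increasing in u for θ > 0, so the check succeeds; with θ < 0 the same code would detect the failure of IS((1, 1)).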
Next, we provide a useful characterization of the IS(α) concept, though a preliminary definition is needed.

Definition 2. Let X_1, X_2, …, X_n be n random variables, and let α = (α_1, α_2, …, α_n) be a vector in ℝ^n such that α_i ∈ {−1, 1} for all i. The random variables are said to be stochastically increasing (or decreasing) in sequence according to the direction α—denoted by SIS(α) (or SDS(α))—if

E[f(α_iX_i) | α_1X_1 > x_1, …, α_{i−1}X_{i−1} > x_{i−1}] (2)

is nondecreasing (or nonincreasing) in x_1, …, x_{i−1} for any real-valued, nondecreasing function f and for each i, with 2 ≤ i ≤ n. We will also say that the random vector X—or its joint distribution function—is SIS(α) (or SDS(α)).

In the next result, we characterize the IS(α) concept in terms of SIS(α). Similar results can be formulated for DS(α) and SDS(α).
Theorem 1. A random vector is IS(α) if, and only if, it is SIS(α).
Proof. Suppose the random vector X is SIS(α), and consider, for each i and any x_i, the function f = 1_{(x_i, +∞)}, the indicator of the interval (x_i, +∞). Then, we have that E[f(α_iX_i) | α_1X_1 > x_1, …, α_{i−1}X_{i−1} > x_{i−1}] = P[α_iX_i > x_i | α_1X_1 > x_1, …, α_{i−1}X_{i−1} > x_{i−1}] is nondecreasing in x_1, …, x_{i−1} and, hence, X is IS(α).

Conversely, if X is IS(α), then we have that P[α_iX_i > x_i | α_1X_1 > x_1, …, α_{i−1}X_{i−1} > x_{i−1}] is nondecreasing in x_1, …, x_{i−1}. Thus, for any simple function f that is non-negative and nondecreasing, the expression in (2) remains nondecreasing in x_1, …, x_{i−1}. Utilizing the monotone convergence theorem confirms that this property holds for all nondecreasing (and non-negative) functions. Consequently, we conclude that X is SIS(α), thereby completing the proof. □
3.2. Relationships with Other Multivariate Dependence Concepts
In this subsection, we explore the connections between the IS(α) concept and several established multivariate dependence notions related to a specific direction. The initial result demonstrates the link between the IS(α) and PD(α) concepts, the latter as expressed in (1).
Proposition 1. If the random vector is IS(α), then it is PD(α).
Proof. Let (x_1, x_2, …, x_n) ∈ ℝ^n. Since X is IS(α), then, for every i ∈ {2, 3, …, n}, we have

P[α_iX_i > x_i | α_1X_1 > t_1, …, α_{i−1}X_{i−1} > t_{i−1}] ≤ P[α_iX_i > x_i | α_1X_1 > x_1, …, α_{i−1}X_{i−1} > x_{i−1}]

whenever t_j ≤ x_j for all j ∈ {1, 2, …, i − 1}. Therefore,

P[α_iX_i > x_i] ≤ P[α_iX_i > x_i | α_1X_1 > x_1, …, α_{i−1}X_{i−1} > x_{i−1}]

by letting t_j → −∞ for j = 1, 2, …, i − 1. Thus,

P[α_1X_1 > x_1, …, α_nX_n > x_n] = P[α_1X_1 > x_1] ∏_{i=2}^{n} P[α_iX_i > x_i | α_1X_1 > x_1, …, α_{i−1}X_{i−1} > x_{i−1}] ≥ ∏_{i=1}^{n} P[α_iX_i > x_i],

i.e., X is PD(α). □
For the forthcoming results, we review several multivariate dependence concepts based on a specific direction. For any vectors x = (x_1, x_2, …, x_n) and y = (y_1, y_2, …, y_n) in ℝ^n, we define x ∨ y = (max(x_1, y_1), …, max(x_n, y_n)) and x ∧ y = (min(x_1, y_1), …, min(x_n, y_n)).
Definition 3 ([12]). Let X be an n-dimensional random vector with joint density function f, and let α be a direction such that α_i ∈ {−1, 1} for all i. The random vector X is referred to as being multivariate totally positive of order two according to the direction α—denoted by MTP2(α)—if f(x ∨_α y) f(x ∧_α y) ≥ f(x) f(y) holds for all x and y in ℝ^n, where ∨_α and ∧_α denote the componentwise maximum and minimum taken with respect to the order induced by α (the usual order in the coordinates with α_i = 1, and the reversed order in those with α_i = −1).

It follows from Proposition 2 and Theorem 3 in [12] that an n-dimensional random vector X is MTP2(α) if, and only if, (α_1X_1, α_2X_2, …, α_nX_n) is MTP2(1, 1, …, 1)—or simply MTP2.
Definition 4 ([17]). Let X be an n-dimensional random vector, and let α be a direction such that α_i ∈ {−1, 1} for all i. The random vector X is said to be increasing (or decreasing) according to the direction α—denoted by I(α) (or D(α))—if

P[α_1X_1 > x_1, …, α_nX_n > x_n | α_1X_1 > t_1, …, α_nX_n > t_n]

is nondecreasing (or nonincreasing) in t_1, t_2, …, t_n for all (x_1, x_2, …, x_n) ∈ ℝ^n.

It is important to note that the notion I(1, 1, …, 1) (or D(−1, −1, …, −1)) generalizes the established concept of RCSI (or LCSD), as discussed in [18] for the bivariate case and [5] for the multivariate context.

If a random vector X is MTP2(α), then it is known to be I(α) [17]. The next result illustrates the relationship between the concepts I(α) and IS(α).
Proposition 2. If the random vector is I(α), then it is IS(α).
Proof. Let (x_1, x_2, …, x_n) ∈ ℝ^n, and fix i with 2 ≤ i ≤ n. Since X is I(α), the conditional probability P[α_1X_1 > y_1, …, α_nX_n > y_n | α_1X_1 > t_1, …, α_nX_n > t_n] is nondecreasing in t_1, …, t_n. Letting y_j → −∞ for every j ≠ i, setting y_i = x_i, and letting t_j → −∞ for j ≥ i, we obtain that P[α_iX_i > x_i | α_1X_1 > t_1, …, α_{i−1}X_{i−1} > t_{i−1}] is nondecreasing in t_1, …, t_{i−1}. Therefore, X is IS(α), which concludes the proof. □
It should be noted that the converse of Proposition 2 does not hold, as illustrated in Example 9 in Section 4.

From Proposition 2, and given that MTP2(α) implies I(α), we can derive the following result.

Corollary 1. If the random vector X is MTP2(α), then it is IS(α).
3.3. Properties
The following results establish properties of the IS(α) family, including results for independent random variables, subsets of IS(α) random variables, the concatenation of IS(α) random vectors, weak convergence, etc.
Proposition 3. A set of n independent random variables is IS(α) for any direction α.

Proof. Let X_1, X_2, …, X_n be n independent random variables. For any i ∈ {2, 3, …, n}, we have P[α_iX_i > x_i | α_1X_1 > x_1, …, α_{i−1}X_{i−1} > x_{i−1}] = P[α_iX_i > x_i] for all x_i and any x_1, …, x_{i−1}; since the conditional probability is constant in x_1, …, x_{i−1}, it is immediate that the random variables are IS(α). □
Proposition 4. Any subset of random variables that are IS(α) is also IS(α′), where α′ is defined as the vector formed by omitting the components of α that correspond to the random variables not present in the subset.

Proof. Let X = (X_1, X_2, …, X_n) be a random vector that is IS(α), and let X′ represent a subvector of X. Letting x_j tend to −∞, for each j such that X_j is not a component of X′, in the expression P[α_iX_i > x_i | α_1X_1 > x_1, …, α_{i−1}X_{i−1} > x_{i−1}], we find that the corresponding conditional probability for X′ is nondecreasing in all the remaining conditioning variables, for any x_i. Consequently, X′ is IS(α′). □
Proposition 5. If the random vector (X_1, X_2, …, X_n) is IS(α), and g_1, g_2, …, g_n are n strictly increasing real-valued functions, then the random vector (g_1(X_1), g_2(X_2), …, g_n(X_n)) also satisfies the property of being IS(α).

Proof. Let (x_1, x_2, …, x_n) ∈ ℝ^n. Since X is IS(α), and since, by the strict monotonicity of g_i, each event {α_ig_i(X_i) > x_i} coincides with an event of the form {α_iX_i > t_i} for every i, we have that P[α_ig_i(X_i) > x_i | α_1g_1(X_1) > x_1, …, α_{i−1}g_{i−1}(X_{i−1}) > x_{i−1}] is nondecreasing in x_1, …, x_{i−1}, i.e., (g_1(X_1), g_2(X_2), …, g_n(X_n)) is IS(α). □
For the subsequent result, given x = (x_1, …, x_n) ∈ ℝ^n and y = (y_1, …, y_m) ∈ ℝ^m, we define (x, y) to represent the concatenation, which is expressed as (x, y) = (x_1, …, x_n, y_1, …, y_m); this definition similarly applies to random vectors.
Proposition 6. If X = (X_1, …, X_n) is IS(α) and Y = (Y_1, …, Y_m) is IS(β), with X and Y being independent, then the concatenated random vector (X, Y) is IS(α, β).

Proof. Let x ∈ ℝ^n and y ∈ ℝ^m. For any i, with 2 ≤ i ≤ n, the conditioning variables all belong to X, so the corresponding conditional probability coincides with that of X alone and, thus, it is nondecreasing in the conditioning variables because X is IS(α).

Consider now any i, with 1 ≤ i ≤ m. Taking into account that X and Y are independent, we have P[β_iY_i > y_i | α_1X_1 > x_1, …, α_nX_n > x_n, β_1Y_1 > y_1, …, β_{i−1}Y_{i−1} > y_{i−1}] = P[β_iY_i > y_i | β_1Y_1 > y_1, …, β_{i−1}Y_{i−1} > y_{i−1}], which is nondecreasing in all the conditioning variables since Y is IS(β). Therefore, (X, Y) is IS(α, β). □
The following result pertains to a closure property of the IS(α) family of multivariate distributions, as well as the DS(α) family.

Proposition 7. The collection of IS(α) distribution functions is closed with respect to weak convergence.
Proof. Let {X_m} be a sequence of IS(α) n-variate random vectors such that H_m is the joint distribution function of X_m for each m, and let X be a vector of n random variables with joint distribution function H such that H_m converges weakly to H as m tends to ∞. We prove that X is IS(α).
Given m, consider the n-variate random vector X_m, which, by hypothesis, is IS(α). From Theorem 1, X_m is SIS(α). Since H_m converges weakly to H as m tends to ∞, by using the Helly–Bray theorem (see, e.g., [19]), the conditional expectations in (2) for X_m converge to those for X as m tends to ∞ for any real-valued and nondecreasing function f and every i, with 2 ≤ i ≤ n, whence X is SIS(α). By using Theorem 1 again, we conclude that X is IS(α). □
3.4. Examples
We provide several examples that demonstrate the applicability of the dependence concepts studied in this work.
Example 1. Consider the random vector X, which follows a multivariate normal distribution represented as N(μ, Σ), where μ is the mean vector and Σ denotes the covariance matrix. Define α such that α_i = α_j for all pairs (i, j); a similar analysis can be conducted when some components of α differ in sign. The probability density function (PDF) of X is given by

f(x) = (2π)^{−n/2} |Σ|^{−1/2} exp[−(x − μ)ᵀ Σ^{−1} (x − μ)/2], x ∈ ℝ^n.

For each pair (i, j) with i < j, the interaction between x_i and x_j in the exponent is governed by the entry σ^{ij} of the precision matrix Σ^{−1}, and the total positivity condition of Definition 3 reduces to a sign condition on these entries: the relevant expression is non-negative if, and only if, σ^{ij} ≤ 0. Consequently, the random vector X is MTP2 if, and only if, σ^{ij} ≤ 0 for every selected pair (i, j)—refer to Theorem 3 in [12]. By Corollary 1, we conclude that X is then IS(α) for both α = (1, 1, …, 1) and α = (−1, −1, …, −1).

Example 2. Consider the random vector X = (X_1, …, X_n) following a Dirichlet distribution, with parameters θ_i > 0 for all i. The probability density function for this distribution is expressed as

f(x_1, …, x_n) = [Γ(θ_1 + ⋯ + θ_{n+1}) / (Γ(θ_1) ⋯ Γ(θ_{n+1}))] x_1^{θ_1−1} ⋯ x_n^{θ_n−1} (1 − x_1 − ⋯ − x_n)^{θ_{n+1}−1},

where x_i > 0 for all i and x_1 + ⋯ + x_n < 1. For any chosen pair (i, j) with i < j, an elementary computation with the factor (1 − x_1 − ⋯ − x_n)^{θ_{n+1}−1} shows that the inequality in Definition 3 is reversed. Thus, we conclude that the random vector is characterized by the multivariate reverse rule of order two—the negative counterpart of the concept defined in Definition 3, obtained by reversing the inequality sign—according to the direction (1, 1, …, 1). Consequently, by applying the corresponding negative dependence framework, analogous to what is provided in Corollary 1 for the positive dependence framework, we deduce that X is DS(1, 1, …, 1).

Example 3. Let X = (X_1, …, X_n) be a random vector following a multinomial distribution characterized by parameters N (the number of trials) and p_1, …, p_n (the probabilities of the events), with the constraints p_i > 0 for each i and p_1 + ⋯ + p_n = 1. The joint probability mass function (PMF) is given by

P[X_1 = x_1, …, X_n = x_n] = (N! / (x_1! ⋯ x_n!)) p_1^{x_1} ⋯ p_n^{x_n},

where x_i ∈ {0, 1, …, N} and x_1 + ⋯ + x_n = N.

It is notable that the multinomial distribution can be viewed as the conditional distribution of independent Poisson random variables, conditioned on their total. As established in Theorem 4.3 of [16] and Theorem 3 of [12], we find that the random vector satisfies the multivariate reverse rule of order two. Consequently, we can conclude that X is also DS(1, 1, …, 1).

Remark 1. It is worth noting that, by applying reasoning analogous to that outlined in Example 3, any random vector that follows a multivariate hypergeometric distribution—essentially the conditional distribution of independent binomial random variables given their total—also qualifies as DS(1, 1, …, 1).
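The Gaussian criterion of Example 1 admits a direct computational test: a multivariate normal density is MTP2 exactly when every off-diagonal entry of the precision matrix Σ⁻¹ is nonpositive. The sketch below assumes an AR(1)-type covariance Σ_ij = ρ^|i−j| with ρ = 0.5 (an arbitrary choice) and checks the sign condition with a hand-rolled 3×3 inverse:

```python
# MTP2 test for a multivariate normal: the density is MTP2 iff every
# off-diagonal entry of the precision matrix (inverse covariance) is <= 0.

rho = 0.5
n = 3
sigma = [[rho ** abs(i - j) for j in range(n)] for i in range(n)]

def inverse_3x3(m):
    """Inverse via the adjugate formula (sufficient for this 3x3 sketch)."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [
        [e * i - f * h, c * h - b * i, b * f - c * e],
        [f * g - d * i, a * i - c * g, c * d - a * f],
        [d * h - e * g, b * g - a * h, a * e - b * d],
    ]
    return [[adj[r][s] / det for s in range(3)] for r in range(3)]

prec = inverse_3x3(sigma)
is_mtp2 = all(prec[i][j] <= 1e-12 for i in range(n) for j in range(n) if i != j)
print(is_mtp2)
```

For this AR(1) covariance, the precision matrix is tridiagonal with negative off-diagonal entries (and a zero corner entry), so the Gaussian vector is MTP2 and hence, by Corollary 1, IS(α) for α = (1, 1, 1) and α = (−1, −1, −1).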
Next, we provide an illustrative example showing the application of Proposition 7 related to weak convergence.
Example 4. Let X = (X_1, …, X_n) represent a random vector with a joint distribution function defined as

H_θ(x_1, …, x_n) = exp{−(e^{−θx_1} + ⋯ + e^{−θx_n})^{1/θ}}

for all (x_1, …, x_n) ∈ ℝ^n and θ ≥ 1. This family of distribution functions serves as a multivariate generalization of Type B bivariate extreme-value distributions (see [20,21]). By invoking Theorem 2.11 in [22], which addresses log-convex functions [23], we can assert that the random vector X is IS(1, 1, …, 1). We consider the sequence of distribution functions {H_θ}. As θ approaches infinity, we derive H_θ(x_1, …, x_n) → min(H_1(x_1), …, H_n(x_n)), where H_i are the one-dimensional marginals of H_θ for i = 1, …, n. Hence, by virtue of Proposition 7, it follows that the limit retains the IS(1, 1, …, 1) property too.

4. Monotonic in Sequence According to a Direction and Copulas
Copulas serve as an essential tool for analyzing the positive dependence characteristics of continuous random vectors. They encapsulate the dependence structure of the corresponding multivariate distribution function, are independent of the marginal distributions, and provide scale-invariant measures of dependence. Additionally, copulas act as a foundational element for constructing families of distributions (see [24]). In this section, our objective is to examine continuous n-copulas associated with random vectors that are IS(α); after the general characterization, we focus on the bivariate and trivariate cases for simplicity.
Let us recall some key concepts related to copulas. For n ≥ 2, an n-dimensional copula (shortened to n-copula) is defined as the restriction to [0, 1]^n of a continuous n-dimensional distribution function whose univariate margins are uniform on [0, 1]. The significance of copulas in statistics is highlighted by the following theorem of Abe Sklar [25]: Let X be a random vector with joint distribution function F and one-dimensional marginal distributions F_1, F_2, …, F_n. There exists an n-copula C (which is uniquely determined on Range F_1 × Range F_2 × ⋯ × Range F_n) such that

F(x_1, x_2, …, x_n) = C(F_1(x_1), F_2(x_2), …, F_n(x_n)) for all (x_1, x_2, …, x_n) ∈ ℝ^n.

Moreover, if F_1, F_2, …, F_n are continuous, then C is unique. A comprehensive proof of this result can be found in [26]. Thus, copulas serve to connect joint distribution functions with their respective one-dimensional margins. For an overview of copulas, see [21,27]; references discussing positive dependence properties through copulas can be found in [5,21,22,28,29].
Let Π^n represent the n-copula for independent random variables (also known as the product n-copula), defined as Π^n(u_1, u_2, …, u_n) = u_1u_2 ⋯ u_n for all (u_1, u_2, …, u_n) ∈ [0, 1]^n.

For any n-copula C, the following inequality holds:

W^n(u_1, …, u_n) = max(u_1 + u_2 + ⋯ + u_n − n + 1, 0) ≤ C(u_1, …, u_n) ≤ min(u_1, u_2, …, u_n) = M^n(u_1, …, u_n)

for all (u_1, u_2, …, u_n) in [0, 1]^n. While M^n is an n-copula for all n ≥ 2, W^n qualifies as an n-copula only when n = 2.
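These Fréchet–Hoeffding bounds are easy to verify numerically. The sketch below checks W² ≤ C ≤ M² on a grid for the FGM copula with θ = −0.8 (an arbitrary choice; the bounds hold for every copula):

```python
# Grid check of the Frechet-Hoeffding bounds W <= C <= M for a 2-copula.

def fgm(u, v, theta=-0.8):
    """Bivariate FGM copula (theta = -0.8 is an arbitrary choice)."""
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

def W(u, v):  # lower Frechet-Hoeffding bound
    return max(u + v - 1.0, 0.0)

def M(u, v):  # upper Frechet-Hoeffding bound
    return min(u, v)

grid = [i / 25 for i in range(26)]
bounds_ok = all(
    W(u, v) - 1e-12 <= fgm(u, v) <= M(u, v) + 1e-12
    for u in grid for v in grid
)
print(bounds_ok)
```

The same loop can be pointed at any candidate function: a violation of either bound on some rectangle immediately certifies that the function is not a copula.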
Let X = (X_1, X_2, …, X_n) be a random vector with an associated n-copula C, and consider the random pair (X_i, X_j) corresponding to the components i and j (where 1 ≤ i < j ≤ n) of X. Define C_{ij} as the (i, j)-margin of C:

C_{ij}(u, v) = C(1, …, 1, u, 1, …, 1, v, 1, …, 1) for all (u, v) ∈ [0, 1]^2,

with u and v in the i-th and j-th coordinates, respectively; C_{ij} is the 2-copula corresponding to the random pair (X_i, X_j). Additionally, C_{1⋯i} denotes the i-copula of (X_1, X_2, …, X_i), obtained analogously by setting the remaining arguments equal to 1.
For a random vector X with associated n-copula C, the survival n-copula associated with C, denoted by Ĉ, is defined as

Ĉ(u_1, u_2, …, u_n) = P[U_1 > 1 − u_1, U_2 > 1 − u_2, …, U_n > 1 − u_n] for all (u_1, u_2, …, u_n) ∈ [0, 1]^n,

where (U_1, U_2, …, U_n) is a random vector with copula C and margins uniform on [0, 1].
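In the bivariate case, the survival copula takes the explicit form Ĉ(u, v) = u + v − 1 + C(1 − u, 1 − v). The sketch below builds Ĉ for the FGM copula (θ = 0.5, an arbitrary choice) and verifies that it has uniform margins, as any 2-copula must:

```python
# Survival 2-copula: C_hat(u, v) = u + v - 1 + C(1 - u, 1 - v).

def fgm(u, v, theta=0.5):
    """Bivariate FGM copula (arbitrary illustrative choice)."""
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

def survival(u, v):
    return u + v - 1.0 + fgm(1.0 - u, 1.0 - v)

grid = [i / 20 for i in range(21)]
margins_ok = all(
    abs(survival(t, 1.0) - t) < 1e-12 and abs(survival(1.0, t) - t) < 1e-12
    for t in grid
)
print(margins_ok)
```

A short algebraic computation shows that for the FGM family Ĉ = C (the family is radially symmetric), which the grid evaluation also confirms.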
We begin by characterizing the concept of IS(α) in terms of n-copulas in the general case; the bivariate and trivariate cases are then treated explicitly for clarity.
4.1. The General Case
To characterize the IS(α) concept in terms of n-copulas, we examine the flipping transformations of copulas. Recall that, for an n-copula C, the flipping of C in the i-th coordinate (referred to as the i-flip of C) is defined as the function σ_i(C) given by, for all (u_1, u_2, …, u_n) ∈ [0, 1]^n,

σ_i(C)(u_1, …, u_n) = C(u_1, …, u_{i−1}, 1, u_{i+1}, …, u_n) − C(u_1, …, u_{i−1}, 1 − u_i, u_{i+1}, …, u_n)

(see [30]). Moreover, the j-flipping transformation of the i-flip of C is denoted by σ_j(σ_i(C)), and is given, for all (u_1, …, u_n) ∈ [0, 1]^n, by applying the same transformation in the j-th coordinate to σ_i(C). Similarly, we denote by σ_k(σ_j(σ_i(C))) the k-flipping transformation of the function σ_j(σ_i(C)), and so on.
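A bivariate sketch of the flipping transformation: the 1-flip of a 2-copula C is σ_1(C)(u, v) = C(1, v) − C(1 − u, v) = v − C(1 − u, v), and the result is again a 2-copula. The code below implements the flip for the FGM copula (θ = 0.5, an arbitrary choice) and checks uniform margins and the 2-increasing property on a grid:

```python
# 1-flip of a 2-copula: flip1(C)(u, v) = v - C(1 - u, v).

def fgm(u, v, theta=0.5):
    """Bivariate FGM copula (arbitrary illustrative choice)."""
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

def flip1(u, v):
    return v - fgm(1.0 - u, v)

grid = [i / 20 for i in range(21)]
margins_ok = all(
    abs(flip1(t, 1.0) - t) < 1e-12 and abs(flip1(1.0, t) - t) < 1e-12
    for t in grid
)

# 2-increasing: every rectangle [u, u+s] x [v, v+s] has nonnegative volume.
step = 0.05
two_increasing = all(
    flip1(u + step, v + step) - flip1(u, v + step)
    - flip1(u + step, v) + flip1(u, v) >= -1e-12
    for u in grid[:-1] for v in grid[:-1]
)
print(margins_ok, two_increasing)
```

The volume of a rectangle under the flipped function equals the volume of the mirrored rectangle under C, which is why flipping preserves the copula property; the grid check makes this concrete.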
Now, the following characterization can be established.
Theorem 2. Let X_1, X_2, …, X_n be n random variables with associated n-copula C, and let α be a direction such that α_i ∈ {−1, 1} for all i. For each i ∈ {2, 3, …, n}, let J_i = {j ∈ {1, …, i} : α_j = −1}. Then, C is IS(α) if, and only if, for every i ∈ {2, 3, …, n}, the function G, obtained from the survival copula of (X_1, X_2, …, X_i) after flipping consecutively in all indices j in J_i, is nonincreasing in all u_j with α_j = −1 and nondecreasing in all u_j with α_j = 1, for all u_i.

In Theorem 3 of [8], it is shown that a 3-copula C is PD(α) for every direction α if, and only if, C = Π^3. This result can be generalized to any dimension n ≥ 2. Since every IS(α) n-copula is PD(α)—recall Proposition 1—and Π^n is IS(α) for every α as a consequence of Proposition 3, the following result easily follows.
Corollary 2. An n-copula C is IS(α) for every direction α if, and only if, C = Π^n.
Now, we provide two examples of applications of Theorem 2.
Example 5. For all n ≥ 2, the n-copula M^n is IS(α) only for α = (1, 1, …, 1) and α = (−1, −1, …, −1).

Example 6. Let C_θ be an n-dimensional generalization of the one-parameter Farlie–Gumbel–Morgenstern (FGM, for short) family of 2-copulas, which is given by

C_θ(u_1, u_2, …, u_n) = u_1u_2 ⋯ u_n [1 + θ(1 − u_1)(1 − u_2) ⋯ (1 − u_n)]

for all (u_1, u_2, …, u_n) ∈ [0, 1]^n, with θ ∈ [−1, 1] (see [21,27]). For a direction α, let J = {i : α_i = −1}. After some elementary operations, we have that C_θ is IS(α) for θ ≥ 0 if |J|—the cardinality of J—is even, and C_θ is IS(α) for θ ≤ 0 if |J| is odd.

4.2. The Bivariate Case
We now study the bivariate case, which is a consequence of Theorem 2.
Corollary 3. Let (X, Y) be a pair of random variables with associated 2-copula C. Then, C is:
- i. IS(1, 1) if, and only if, (1 − u − v + C(u, v))/(1 − u) is increasing in u for all v;
- ii. IS(1, −1) if, and only if, (v − C(u, v))/(1 − u) is increasing in u for all v;
- iii. IS(−1, 1) if, and only if, (u − C(u, v))/u is decreasing in u for all v;
- iv. IS(−1, −1) if, and only if, C(u, v)/u is decreasing in u for all v.
We provide several examples for the bivariate case.
Example 7. The 2-copula M² is IS(α) for α = (1, 1) and α = (−1, −1).
Example 8. Let C_θ be the Ali–Mikhail–Haq one-parameter family of 2-copulas [31] given by

C_θ(u, v) = uv / [1 − θ(1 − u)(1 − v)]

for all (u, v) ∈ [0, 1]^2, with θ ∈ [−1, 1). It is easy to prove that C_θ is IS(1, 1) and IS(−1, −1) for θ ≥ 0, and IS(1, −1) and IS(−1, 1) for θ ≤ 0. The following example shows that the converse of Proposition 2 does not hold in general.
Example 9. Consider a 2-copula C belonging to the family of 2-copulas studied in [32]. Then, we have that C is IS(1, 1); moreover, C is not I(1, 1).

4.3. The Trivariate Case
In the next result, we study the IS(α) concept in terms of 3-copulas.
Theorem 3. Let be a random triple with associated 3-copula C. Then, C is:
- i.
IS if, and only if,is increasing in u for all v andis increasing in for all w; - ii.
IS if, and only if,is increasing in u for all v andis increasing in for all w; - iii.
IS if, and only if,is increasing in u for all v andis increasing in u and decreasing in v for all w; - iv.
IS if, and only if,is decreasing in u for all v andis decreasing in u and increasing in v for all w; - v.
IS if, and only if,is increasing in u for all v andis increasing in u and decreasing in v for all w; - vi.
IS if, and only if,is decreasing in u for all v andis decreasing in for all w; - vii.
IS if, and only if,is decreasing in u for all v andis decreasing in u and increasing in v for all w; - viii.
IS if, and only if,is decreasing in u for all v andis decreasing in for all w.
We provide an example for the trivariate case.
Example 10. Let be the one-parameter FGM family of 2-copulas—recall Example 6. Consider the 3-copula given by for all . Then, we have that is IS, IS, IS, and IS for and is IS, IS, IS, and IS for .
5. Conclusions
The monotonic in sequence according to a direction concept, denoted by IS(α), constitutes a significant advancement in multivariate dependence analysis because it allows the modeling of dependencies that are not easily captured by previous concepts, such as simple positive or negative dependence, extending several known multivariate dependence concepts. There is clear potential for applications in fields such as financial risk analysis, where directional dependence relationships could help model the correlation of extreme events between financial assets. Another relevant area could be biostatistics, where the progression of certain conditions or responses may be influenced by a sequence of biological or environmental variables with marked directional dependence. Sequence dependencies may also have implications in network analysis, such as neural or data networks, where the direction in which nodes are affected significantly impacts information or state propagation. This new concept addresses the limitations of traditional dependence measures, especially in configurations where dependencies are asymmetric or directional.
We have established certain relationships with other multivariate dependence concepts, for instance, the implication from I(α) to IS(α) and, subsequently, from IS(α) to PD(α). Additionally, we have highlighted key properties and conducted an examination of this novel concept in terms of n-copulas—specifically for the bivariate and trivariate scenarios for a better understanding. Copulas play a fundamental role in this paper: Section 4 examines how they capture the multivariate dependence structure independently of the marginal distributions, making it easier to analyze the IS(α) concept in terms of directional properties.
Further exploration involving analogous extensions of well-known dependence concepts discussed in this paper, the investigation of associated orders—akin to the approach taken in [
33]—and defining new measures of association based on the concepts of dependence studied here are the subject of ongoing investigation.
Author Contributions
Conceptualization, J.J.Q.-M.; methodology, J.J.Q.-M. and M.Ú.-F.; validation, J.J.Q.-M. and M.Ú.-F.; investigation, J.J.Q.-M. and M.Ú.-F.; writing—original draft preparation, M.Ú.-F.; writing—review and editing, M.Ú.-F.; visualization, J.J.Q.-M. and M.Ú.-F.; supervision, J.J.Q.-M. and M.Ú.-F. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by Ministry of Science and Innovation (Spain) grant number PID2021-122657OB-I00. The second author acknowledges the support of PPIT-UAL, Junta de Andalucía-ERDF 2021–2027, Objective RS01.1, Program 54.A.
Data Availability Statement
Data is contained within the article.
Acknowledgments
The authors are grateful for the comments provided by three anonymous reviewers.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Jogdeo, K. Concepts of dependence. In Encyclopedia of Statistical Sciences; Kotz, S., Johnson, N.L., Eds.; Wiley: New York, NY, USA, 1982; Volume 1, pp. 324–334. [Google Scholar]
- Colangelo, A.; Scarsini, M.; Shaked, M. Some notions of multivariate positive dependence. Insur. Math. Econ. 2005, 37, 13–26. [Google Scholar] [CrossRef]
- Barlow, R.E.; Proschan, F. Statistical Theory of Reliability and Life Testing; To Begin With: Silver Spring, MD, USA, 1981. [Google Scholar]
- Colangelo, A.; Müller, A.; Scarsini, M. Positive dependence and weak convergence. J. Appl. Prob. 2006, 43, 48–59. [Google Scholar] [CrossRef]
- Joe, H. Multivariate Models and Dependence Concepts; Chapman & Hall: London, UK, 1997. [Google Scholar]
- Beliakov, G.; Pradera, A.; Calvo, T. Aggregation Functions: A Guide for Practitioners; Studies in Fuzziness and Soft Computing; Springer: Berlin, Germany, 2007; Volume 221. [Google Scholar]
- Grabisch, M.; Marichal, J.L.; Mesiar, R.; Pap, E. Aggregation Functions; Encyclopedia of Mathematics and Its Applications; Cambridge University Press: Cambridge, UK, 2009; Volume 127. [Google Scholar]
- Quesada-Molina, J.J.; Úbeda-Flores, M. Directional dependence of random vectors. Inf. Sci. 2012, 215, 67–74. [Google Scholar] [CrossRef]
- Kimeldorf, G.; Sampson, A.R. A framework for positive dependence. Ann. Inst. Stat. Math. 1989, 41, 31–45. [Google Scholar] [CrossRef]
- Lehmann, E.L. Some concepts of dependence. Ann. Math. Stat. 1966, 37, 1137–1153. [Google Scholar] [CrossRef]
- Shaked, M. A general theory of some positive dependence notions. J. Multivar. Anal. 1982, 12, 199–218. [Google Scholar] [CrossRef]
- de Amo, E.; Quesada-Molina, J.J.; Úbeda-Flores, M. Total positivity and dependence of order statistics. AIMS Math. 2023, 8, 30717–30730. [Google Scholar] [CrossRef]
- Esary, J.D.; Proschan, F. Relationships among some concepts of bivariate dependence. Ann. Math. Stat. 1972, 43, 651–655. [Google Scholar] [CrossRef]
- Ahmed, A.-H.N.; Langberg, N.A.; Leon, R.V.; Proschan, F. Two Concepts of Positive Dependence with Applications in Multivariate Analysis; Technical Report M486; Department of Statistics, Florida State University: Tallahassee, FL, USA, 1978. [Google Scholar]
- Ebrahimi, N.; Ghosh, M. Multivariate negative dependence. Commun. Stat. A-Theory Methods 1981, 10, 307–337. [Google Scholar]
- Block, H.W.; Ting, M.-L. Some concepts of multivariate dependence. Commun. Stat. A-Theory Methods 1981, 10, 749–762. [Google Scholar] [CrossRef]
- Quesada-Molina, J.J.; Úbeda-Flores, M. Monotonic random variables according to a direction. Axioms 2024, 13, 275. [Google Scholar] [CrossRef]
- Harris, R. A multivariate definition for increasing hazard rate distribution functions. Ann. Math. Stat. 1970, 41, 713–717. [Google Scholar] [CrossRef]
- Billingsley, P. Convergence of Probability Measures; John Wiley & Sons: New York, NY, USA, 1999. [Google Scholar]
- Johnson, N.L.; Kotz, S. Distributions in Statistics: Continuous Multivariate Distributions; John Wiley & Sons: New York, NY, USA, 1972. [Google Scholar]
- Nelsen, R.B. An Introduction to Copulas, 2nd ed.; Springer: New York, NY, USA, 2006. [Google Scholar]
- Müller, A.; Scarsini, M. Archimedean copulae and positive dependence. J. Multivar. Anal. 2005, 93, 434–445. [Google Scholar] [CrossRef]
- Kingman, J.F.C. A convexity property of positive matrices. Quart. J. Math. 1961, 12, 283–284. [Google Scholar] [CrossRef]
- Fisher, N.I. Copulas. In Encyclopedia of Statistical Sciences; Kotz, S., Read, C.B., Banks, D.L., Eds.; Wiley: New York, NY, USA, 1997; Volume 1, pp. 159–163. [Google Scholar]
- Sklar, A. Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Stat. Univ. Paris 1959, 8, 229–231. [Google Scholar]
- Úbeda-Flores, M.; Fernández-Sánchez, J. Sklar’s theorem: The cornerstone of the Theory of Copulas. In Copulas and Dependence Models with Applications; Úbeda Flores, M., de Amo Artero, E., Durante, F., Fernández Sánchez, J., Eds.; Springer: Cham, Switzerland, 2017; pp. 241–258. [Google Scholar]
- Durante, F.; Sempi, C. Principles of Copula Theory; CRC: Boca Raton, FL, USA, 2016. [Google Scholar]
- Navarro, J.; Pellerey, F.; Sordo, M.A. Weak dependence notions and their mutual relationships. Mathematics 2021, 9, 81. [Google Scholar] [CrossRef]
- Wei, Z.; Wang, T.; Panichkitkosolkul, W. Dependence and association concepts through copulas. In Modeling Dependence in Econometrics—Advances in Intelligent Systems and Computing; Huynh, V.N., Kreinovich, V., Sriboonchitta, S., Eds.; Springer: Cham, Switzerland, 2014; Volume 251, pp. 113–126. [Google Scholar]
- Durante, F.; Fernández-Sánchez, J.; Quesada-Molina, J.J. Flipping of multivariate aggregation functions. Fuzzy Sets Syst. 2014, 252, 66–75. [Google Scholar] [CrossRef]
- Ali, M.M.; Mikhail, N.N.; Haq, M.S. A class of bivariate distributions including the bivariate logistic. J. Multivar. Anal. 1978, 8, 405–412. [Google Scholar] [CrossRef]
- Rodríguez-Lallena, J.A.; Úbeda-Flores, M. A new class of bivariate copulas. Stat. Probab. Lett. 2004, 66, 315–325. [Google Scholar] [CrossRef]
- de Amo, E.; Rodríguez-Griñolo, M.R.; Úbeda-Flores, M. Directional dependence orders of random vectors. Mathematics 2024, 12, 419. [Google Scholar] [CrossRef]
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).