On Return Probabilities of Adverse Events Under Dependence and Lessons to Learn for Decision-Making

Department of Statistics and Actuarial Science, School of Computing and Data Science, The University of Hong Kong, Hong Kong, China
Risks 2026, 14(3), 58; https://doi.org/10.3390/risks14030058
Submission received: 16 January 2026 / Revised: 25 February 2026 / Accepted: 4 March 2026 / Published: 5 March 2026

Abstract

Considering the achievement of a goal in each of several time intervals when, in every time interval, an adverse event may lead to a failure raises the question of the return probability of adverse events, that is, the probability of at least one failure during the time period of interest. Through basic mathematical arguments in tractable cases, we investigate the behavior of the return probability of adverse events in various setups. In the univariate case, we consider the independent and identically distributed setup, the independent setup, the dependent but not necessarily identically distributed setup, and the dependent and identically distributed setup. In the multivariate case, we consider several goals to be achieved in each time period. Besides different setups for the marginal failure probabilities, we study dependence in terms of comonotone blocks and independent blocks and via nested copulas. Where closed-form expressions are not available, we derive bounds on the return probability of at least one failure. Our results are interpretable in terms of decision-making, provide insight into what affects such return probabilities, and may thus help to develop strategies to lower them.

1. Introduction

Consider a discrete-time horizon consisting of $n \in \mathbb{N}$ non-overlapping time intervals, typically forming a partition with equidistant partition elements. In each time interval, we aim to achieve a goal (“success”). However, an adverse event may occur, leading to a failure to achieve the goal (“failure”). Adverse events could be climate extremes, financial losses, failure to reach educational goals, etc. A natural question then concerns the return probability (related to its more classically and colloquially considered return period formulation as a “1 in so-many time periods” event):
What is the probability of at least one failure over the n time periods?
For decision-making with focus on avoiding failures, there are widely found general coping mechanisms for failures, such as “Don’t look back” or “Don’t let failure affect you, just move on”. A related question is therefore:
Is there mathematical support for coping mechanisms for potential failures?
Past literature mostly addressed qualitative aspects of risk in decision-making. For example, Camerer and Kunreuther (1989) argue that individuals systematically misjudge low-probability, high-consequence risks due to cognitive biases, leading to suboptimal decisions that necessitate policy interventions like reframing information, providing economic incentives, and implementing regulations to encourage socially desirable protective behaviors. Tinsley et al. (2012) address how near-miss events can either increase or decrease future risk-taking depending on whether they are interpreted as evidence of resilience (leading to complacency) or vulnerability (leading to caution), primarily by altering perceptions of risk and outcome severity rather than probability estimates. Borovcnik (2015) explores the multifaceted concept of risk in decision-making, examining its diverse definitions, its relationship with probability and uncertainty, its psychological perceptions, and its implications for teaching, arguing that risk is not a unified mathematical concept but a context-dependent construct influenced by stakeholders, criteria, and subjective judgments. Wakker (2010) provides a comprehensive treatment of prospect theory, a theory of behavioral economics, judgment and decision-making under risk and ambiguity, offering a framework for understanding the deviations from rationality that earlier qualitative studies describe. Clemen et al. (2000) experimentally investigate methods for eliciting subjective judgments of dependence from experts. For an early reference on the types of dependencies considered in this paper (copulas), Frees and Valdez (1998) provide an accessible introduction for an actuarial audience, effectively demonstrating their practical utility in modeling multivariate risks. Patton (2012) reviews copula models for economic time series, providing an extension of dependence modeling to a more dynamic, multi-period context. Shao et al. 
(2019) present a multivariate Bernoulli logit-normal model for failure prediction, which provides an alternative to our copula-based approach for modeling dependent binary outcomes.
In this article we consider quantitative aspects of risk in decision-making. Effective decision-making under uncertainty hinges not only on the magnitude of individual risks but critically on how they interact. Ignoring the dependence between random risks can invalidate mathematical results, data analyses, and diversification strategies, and lead to a severe misassessment of tail risks that threatens organizational resilience. Hence it is crucial to understand the effect of dependence between risks on decision-making. Accessible to a wide audience, we provide mathematical answers to the above questions in tractable cases under various scenarios, together with the related lessons learned for decision-making.
The paper is organized as follows. In Section 2 we consider the univariate case, that is, the case of a single person trying to achieve said goals in all n time periods. We consider the case of identically and not identically distributed failure probabilities, as well as independent and dependent events over time. Section 3 addresses the multivariate case, where we consider d goals to reach in each of the n time periods. Section 4 concludes with a short summary of our findings and their connection to a real-life anecdote.

2. Univariate Case

For adjacent, equidistant time periods $i = 1, \dots, n$, let $A_i$ denote the event of failure to achieve a goal of interest in time period $i$. Let $X_i = \mathbb{1}_{A_i}$ denote the corresponding indicator of failure to achieve the goal in time period $i$. Furthermore, let $S_n = \sum_{i=1}^{n} X_i$, so that
$$\{S_n \ge 1\}$$
denotes the event of at least one failure in at least one of the $n$ time periods, which is of main concern when studying return periods of adverse events and our main event of interest in this work.

2.1. The Independent and Identically Distributed Setup

Suppose failures across the time periods happen independently of each other and according to the same distribution. Although perhaps unrealistic in practice, this case is important to compare other cases against, as one sometimes has no information about a stronger form of dependence and thus makes this classical iid assumption. Mathematically, this means we assume $X_i \overset{\text{ind.}}{\sim} \mathrm{B}(1, p)$, $i = 1, \dots, n$, where $p \in (0,1)$ denotes the probability of failure in one period of time. In this setup, we obtain the probability of failure at least once in the $n$ time periods as
$$P(S_n \ge 1) = P\Big(\sum_{i=1}^{n} X_i \ge 1\Big) = 1 - P(X_1 = 0, \dots, X_n = 0) \overset{\text{ind.}}{=} 1 - \prod_{i=1}^{n} P(X_i = 0) \overset{\text{id.}}{=} 1 - P(X_1 = 0)^n = 1 - (1 - P(X_1 = 1))^n = 1 - (1-p)^n; \tag{1}$$
in this setup, we even know that $S_n = \sum_{i=1}^{n} X_i \sim \mathrm{B}(n, p)$ and so $P(S_n \ge 1) = F_{\mathrm{Geo}(p)}(n)$, where $F_{\mathrm{Geo}(p)}$ denotes the distribution function of the geometric distribution on $\mathbb{N}$.
For fixed $n$, even if we manage to achieve $p = c/n$ over the $n$ time periods for some $c \in (0, n)$, the fact that $1 - (1 - c/n)^n \to 1 - e^{-c}$ as $n \to \infty$ implies that, for sufficiently large $n$,
$$P(S_n \ge 1) = 1 - (1 - c/n)^n \approx 1 - e^{-c}.$$
For $c > \log(2) \approx 0.6931$ and $n$ large, we thus have $P(S_n \ge 1) > 1/2$. In particular, we see that at least one failure over the $n$ time periods is more likely than not to happen under $p \ge \log(2)/n$.
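The closed form above is easy to experiment with; the following minimal Python sketch (the function name is ours, not part of the paper) evaluates $P(S_n \ge 1) = 1 - (1-p)^n$ and illustrates the limit $1 - e^{-c}$ under $p = c/n$:

```python
import math

def return_prob_iid(p: float, n: int) -> float:
    """P(S_n >= 1) = 1 - (1 - p)^n in the iid setup."""
    return 1 - (1 - p) ** n

# With p = c/n and c = log(2), the return probability tends to 1 - e^{-c} = 1/2,
# so any c > log(2) makes at least one failure more likely than not for large n.
c = math.log(2)
for n in (10, 100, 10000):
    print(n, return_prob_iid(c / n, n))
```

For $c = \log(2)$, the printed values approach $1 - e^{-\log(2)} = 1/2$ as $n$ grows.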
There are already several observations to take away from these basic results from the point of view of decision-making. First, $P(S_n \ge 1)$ is strictly increasing in $p$, so a larger failure probability in each time period translates to a larger probability of at least one failure over the $n$ time periods considered, which is not surprising. And second, for fixed $p \in (0,1)$, we see that $\lim_{n \to \infty} P(S_n \ge 1) = 1$, so we will, eventually, experience failure on our mission to achieve all goals, which also aligns with intuition. We can already learn some lessons from the above, namely, (1) to be prepared to fail (as it will eventually happen) and (2) to not let the potential of failure affect us (as an increase in $p$ will lead to a higher probability of failing at least once).
These results are intuitively clear precisely due to the assumptions, namely, since we have the same probability of failure p in each time interval and events across different time intervals are independent. In what follows, we stepwise relax these assumptions while still investigating the behavior of P ( S n 1 ) in tractable cases. Lesson (2) above already hints at non-equal probabilities per time period, which we will consider next.

2.2. The Independent but Not Necessarily Identically Distributed Setup

We still assume that $X_1, \dots, X_n$ are independent, but now focus on the more realistic scenario in which $X_i \sim \mathrm{B}(1, p_i)$ for $p_i \in (0,1)$, $i = 1, \dots, n$. In this setup, we obtain the probability of failure at least once in the $n$ time periods as
$$P(S_n \ge 1) = P\Big(\sum_{i=1}^{n} X_i \ge 1\Big) = 1 - P(X_1 = 0, \dots, X_n = 0) \overset{\text{ind.}}{=} 1 - \prod_{i=1}^{n} P(X_i = 0) = 1 - \prod_{i=1}^{n} (1 - P(X_i = 1)) = 1 - \prod_{i=1}^{n} (1 - p_i). \tag{2}$$
The following example contains tractable cases in the sense that we can derive $P(S_n \ge 1)$ in closed form but also determine $n$ such that $P(S_n \ge 1)$ is smaller or greater than in the iid case considered in Section 2.1.
Example 1
(Tractable cases of $P(S_n \ge 1)$).
(1) 
Consider the decreasing sequence $p_i = 1/(i+1)$, $i = 1, \dots, n$. A telescoping argument leads to
$$P(S_n \ge 1) = 1 - \prod_{i=1}^{n} \Big(1 - \frac{1}{i+1}\Big) = 1 - \prod_{i=1}^{n} \frac{i}{i+1} = 1 - \frac{1}{n+1}.$$
We thus have $P(S_n \ge 1) < 1 - (1-p)^n$ if and only if $n + 1 < (1/(1-p))^n = e^{(-\log(1-p))n}$ and thus if and only if
$$\frac{\log(n+1)}{n} < -\log(1-p).$$
As the left-hand side goes to 0, there thus exists, for each $p \in (0,1)$, an $n_p \in \mathbb{N}$ sufficiently large such that $P(S_n \ge 1) < 1 - (1-p)^n$ for all $n \ge n_p$; as the left-hand side is maximal for $n = 1$ with value $\log(2)$, this inequality even holds for all $n \in \mathbb{N}$ for $p \in (1/2, 1)$. Therefore, eventually, the probability of at least one failure decreases in comparison to that in the iid case considered in Section 2.1.
(2) 
Consider the increasing sequence $p_i = 1 - \frac{\alpha}{i+1}$, $i = 1, \dots, n$, for $\alpha \in (0,2)$. Then
$$P(S_n \ge 1) = 1 - \prod_{i=1}^{n} (1 - p_i) = 1 - \prod_{i=1}^{n} \frac{\alpha}{i+1} = 1 - \frac{\alpha^n}{(n+1)!},$$
which is $> 1 - (1-p)^n$ if and only if $(\alpha/(1-p))^n / n! < n + 1$. Using $x^n/n! \le e^x$, $x > 0$, we thus obtain that the probability of failure to achieve all goals is larger than that in Section 2.1 if $n > e^{\alpha/(1-p)} - 1$, so eventually for sufficiently large integer $n$.
(3) 
Another increasing sequence is the geometric sequence $p_i = 1 - (1 - p_{\min})^i$, $i = 1, \dots, n$, with $p_1 = p_{\min} \in (0,1)$ and $p_i \uparrow 1$ geometrically fast. Then
$$P(S_n \ge 1) = 1 - \prod_{i=1}^{n} (1 - p_i) = 1 - \prod_{i=1}^{n} (1 - p_{\min})^i = 1 - (1 - p_{\min})^{\sum_{i=1}^{n} i} = 1 - (1 - p_{\min})^{\frac{n(n+1)}{2}}.$$
Clearly, if $p_{\min} \ge p$ for $p$ being the probability of failure in the iid case in Section 2.1, then we obtain that the probability $P(S_n \ge 1)$ of at least one failure is larger than that in the iid case. However, this still holds eventually even if $p_{\min} < p$, so even if, for the first so-many time periods $i$, the failure probability $p_i$ is smaller than $p$. Moreover, we can determine the number of time periods it takes for this result to hold eventually, namely, by finding the $n$ values for which $1 - (1 - p_{\min})^{\frac{n(n+1)}{2}} > 1 - (1-p)^n$, which are precisely those integers that satisfy $n > 2\frac{\log(1-p)}{\log(1-p_{\min})} - 1$.
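The three closed-form expressions of Example 1 can be cross-checked numerically; the following sketch (helper name ours) compares the product formula with each closed form:

```python
from math import prod, factorial

def return_prob_indep(ps):
    """P(S_n >= 1) = 1 - prod_i (1 - p_i) for independent failure indicators."""
    return 1 - prod(1 - p for p in ps)

n = 20
# (1) decreasing p_i = 1/(i+1): closed form 1 - 1/(n+1)
ps1 = [1 / (i + 1) for i in range(1, n + 1)]
# (2) increasing p_i = 1 - alpha/(i+1): closed form 1 - alpha^n/(n+1)!
alpha = 1.5
ps2 = [1 - alpha / (i + 1) for i in range(1, n + 1)]
# (3) geometric p_i = 1 - (1 - p_min)^i: closed form 1 - (1-p_min)^(n(n+1)/2)
p_min = 0.01
ps3 = [1 - (1 - p_min) ** i for i in range(1, n + 1)]
```

Evaluating `return_prob_indep` on each sequence reproduces the corresponding closed form up to floating-point error.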
Example 1 adds support to Lesson (2) of Section 2.1 for decision-making. For example, as seen in Example 1 (3) if, over time, the probability of failure increases, say, due to an increase in chances of climate extremes, financial losses or anxiety around failing to achieve a goal, then, eventually, for large n (and even if the failure probability in each time interval was initially smaller), there is an increase in the probability of at least one failure over the n time periods.
If $P(S_n \ge 1)$ is not tractable in the independent but not necessarily identically distributed case, one is interested in bounds. To this end, let $p_{(1)} \le \dots \le p_{(n)}$ denote the order statistics of the failure probabilities $(p_i)_{i=1}^{n} \subseteq (0,1)$.
Proposition 1
(Bounds on $P(S_n \ge 1)$ under independence). Let $(p_i)_{i=1}^{n} \subseteq (0,1)$. Then, for any $i^* \in \{1, \dots, n\}$,
$$1 - \Big(\frac{1 - p_{(1)}}{1 - p_{(i^*)}}\Big)^{i^*} (1 - p_{(i^*)})^n \le P(S_n \ge 1) \le 1 - \Big(\frac{1 - p_{(i^*)}}{1 - p_{(n)}}\Big)^{i^*} (1 - p_{(n)})^n.$$
Proof. 
We have
$$P(S_n \ge 1) = 1 - \prod_{i=1}^{n} (1 - p_i) = 1 - \prod_{i=1}^{n} (1 - p_{(i)}) = 1 - \prod_{i=1}^{i^*} (1 - p_{(i)}) \prod_{i=i^*+1}^{n} (1 - p_{(i)}).$$
The fact that $p_{(i)} \ge p_{(1)}$, $i = 1, \dots, i^*$, and $p_{(i)} \ge p_{(i^*)}$, $i = i^*+1, \dots, n$, implies the lower bound
$$P(S_n \ge 1) \ge 1 - (1 - p_{(1)})^{i^*} (1 - p_{(i^*)})^{n - i^*} = 1 - \Big(\frac{1 - p_{(1)}}{1 - p_{(i^*)}}\Big)^{i^*} (1 - p_{(i^*)})^n.$$
The fact that $p_{(i)} \le p_{(i^*)}$, $i = 1, \dots, i^*$, and $p_{(i)} \le p_{(n)}$, $i = i^*+1, \dots, n$, implies the upper bound
$$P(S_n \ge 1) \le 1 - (1 - p_{(i^*)})^{i^*} (1 - p_{(n)})^{n - i^*} = 1 - \Big(\frac{1 - p_{(i^*)}}{1 - p_{(n)}}\Big)^{i^*} (1 - p_{(n)})^n. \qquad \square$$
Note that trivial bounds for $P(S_n \ge 1)$ are
$$1 - (1 - p_{(1)})^n \le P(S_n \ge 1) \le 1 - (1 - p_{(n)})^n. \tag{3}$$
From these trivial bounds, only if $p < p_{(1)}$ can we guarantee that $1 - (1 - p_{(1)})^n > 1 - (1-p)^n$ and thus infer that $P(S_n \ge 1) \ge 1 - (1 - p_{(1)})^n > 1 - (1-p)^n$, so that $P(S_n \ge 1)$ is larger than in the iid case. However, this is trivial, as $p < p_{(1)}$ implies that each failure probability $p_i$ is larger than $p$. In contrast, the lower bound on $P(S_n \ge 1)$ from Proposition 1 allows us to determine when $P(S_n \ge 1)$ exceeds the iid case even for $p \in [p_{(1)}, p_{(i^*)})$: If $p < p_{(i^*)}$, then $1 - \big(\frac{1 - p_{(1)}}{1 - p_{(i^*)}}\big)^{i^*} (1 - p_{(i^*)})^n > 1 - (1-p)^n$ if and only if $\big(\frac{1 - p_{(1)}}{1 - p_{(i^*)}}\big)^{i^*} < \big(\frac{1 - p}{1 - p_{(i^*)}}\big)^n$, so if and only if
$$n > i^* \, \frac{\log\Big(\frac{1 - p_{(1)}}{1 - p_{(i^*)}}\Big)}{\log\Big(\frac{1 - p}{1 - p_{(i^*)}}\Big)}.$$
The left-hand side of Figure 1 shows the lower bounds (blue) and upper bounds (red) on $P(S_n \ge 1)$ (orange) of Proposition 1 for $p_{(1)} = 0.01$, $p_{(2)} = 0.05$, $p_{(3)} = 0.1$, $p_{(4)} = p_{(5)} = \dots = p_{(n)} = 0.25$ and $i^* \in \{1, \dots, 4\}$. We see that, already for rather small $n$, the largest value $i^* = 4$ gives the sharpest bounds in terms of their difference at each fixed $n$.
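The bounds of Proposition 1 are straightforward to compute; the sketch below (function name ours) uses the failure probabilities of the left-hand side of Figure 1, with $n = 10$ chosen here for concreteness:

```python
from math import prod

def prop1_bounds(ps, i_star):
    """Lower and upper bounds of Proposition 1 on P(S_n >= 1)."""
    q = sorted(ps)  # order statistics p_(1) <= ... <= p_(n)
    n = len(q)
    p_1, p_i, p_n = q[0], q[i_star - 1], q[-1]
    lower = 1 - ((1 - p_1) / (1 - p_i)) ** i_star * (1 - p_i) ** n
    upper = 1 - ((1 - p_i) / (1 - p_n)) ** i_star * (1 - p_n) ** n
    return lower, upper

# Setup of Figure 1 (left): p_(1) = 0.01, p_(2) = 0.05, p_(3) = 0.1, rest 0.25
n = 10
ps = [0.01, 0.05, 0.1] + [0.25] * (n - 3)
exact = 1 - prod(1 - p for p in ps)
```

In this setup, $i^* = 4$ yields a noticeably tighter bracket around the exact value than $i^* = 1$, in line with the figure.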
Another lower bound on $P(S_n \ge 1)$ in the independent but not necessarily identically distributed case is the following.
Proposition 2
(A lower bound on $P(S_n \ge 1)$). Let $(p_i)_{i=1}^{n} \subseteq (0,1)$ and $\bar p_n = \frac{1}{n} \sum_{i=1}^{n} p_i$. Then
$$P(S_n \ge 1) \ge 1 - (1 - p_{(n)})^n \exp\Big(\frac{n(p_{(n)} - \bar p_n)}{1 - p_{(n)}}\Big).$$
Proof. 
Since $1 + x \le e^x$, we have
$$P(S_n \ge 1) = 1 - \prod_{i=1}^{n} (1 - p_i) = 1 - (1 - p_{(n)})^n \prod_{i=1}^{n} \frac{1 - p_i}{1 - p_{(n)}} = 1 - (1 - p_{(n)})^n \prod_{i=1}^{n} \Big(1 + \frac{p_{(n)} - p_i}{1 - p_{(n)}}\Big) \ge 1 - (1 - p_{(n)})^n \prod_{i=1}^{n} \exp\Big(\frac{p_{(n)} - p_i}{1 - p_{(n)}}\Big) = 1 - (1 - p_{(n)})^n \exp\Big(\sum_{i=1}^{n} \frac{p_{(n)} - p_i}{1 - p_{(n)}}\Big),$$
from which the form as claimed follows. □
For $0 < p_{\min} < p_{\max} < 1$ and the geometric sequence $p_i = p_{\max} - (p_{\max} - p_{\min})^i$, $i = 1, \dots, n$, we obtain from the argument of Proposition 2 that
$$P(S_n \ge 1) \ge 1 - (1 - p_{\max})^n \exp\Big(\frac{n p_{\max} - \sum_{i=1}^{n} \big(p_{\max} - (p_{\max} - p_{\min})^i\big)}{1 - p_{\max}}\Big) = 1 - (1 - p_{\max})^n \exp\Big(\frac{1}{1 - p_{\max}} \sum_{i=1}^{n} (p_{\max} - p_{\min})^i\Big) = 1 - (1 - p_{\max})^n \exp\Big(\frac{p_{\max} - p_{\min}}{1 - p_{\max}} \cdot \frac{1 - (p_{\max} - p_{\min})^n}{1 - (p_{\max} - p_{\min})}\Big) \ge 1 - (1 - p_{\max})^n \exp\Big(\frac{p_{\max} - p_{\min}}{(1 - p_{\max})(1 - (p_{\max} - p_{\min}))}\Big).$$
The right-hand side of Figure 1 shows the trivial lower bound of (3), the lower bound from Proposition 1 and that of Proposition 2 (all blue) on $P(S_n \ge 1)$ (orange), for the geometric sequence $p_i = p_{\max} - (p_{\max} - p_{\min})^i$, $i = 1, \dots, n$, with $p_{\min} = 0.01$ and $p_{\max} = 0.25$. In this example, the bound of Proposition 2 clearly provides the best lower bound on $P(S_n \ge 1)$.
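Similarly, the bound of Proposition 2 can be compared numerically with the trivial lower bound of (3) for the geometric sequence just considered ($n = 10$ chosen for concreteness; function name ours):

```python
from math import prod, exp

def prop2_lower(ps):
    """Lower bound of Proposition 2 on P(S_n >= 1)."""
    n = len(ps)
    p_n = max(ps)        # p_(n)
    p_bar = sum(ps) / n  # average failure probability
    return 1 - (1 - p_n) ** n * exp(n * (p_n - p_bar) / (1 - p_n))

# Geometric sequence of Figure 1 (right)
p_min, p_max, n = 0.01, 0.25, 10
ps = [p_max - (p_max - p_min) ** i for i in range(1, n + 1)]
exact = 1 - prod(1 - p for p in ps)
trivial_lower = 1 - (1 - min(ps)) ** n
```

In this setup, the bound of Proposition 2 lies far above the trivial lower bound while remaining below the exact probability.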
Finally, let us address whether any sequence of failure probabilities $p_i$, $i = 1, \dots, n$, decreasing to 0 always leads to $\lim_{n \to \infty} P(S_n \ge 1) = 1$ as in the iid case. As the following example shows, this is not always the case.
Example 2
(Fast convergence). If $p_i = \frac{1}{4i^2}$, $i \in \mathbb{N}$, then, as the reciprocal of the Wallis product, one has $\lim_{n \to \infty} \prod_{i=1}^{n} (1 - p_i) = \frac{2}{\pi}$ and thus $1 - \prod_{i=1}^{n} (1 - p_i) \to 1 - \frac{2}{\pi} \approx 0.3634 < 1$ as $n \to \infty$. An even more drastic example is $p_i = 0$, $i \ge 2$, so that $1 - \prod_{i=1}^{n} (1 - p_i) = p_1$, which can take any value in $[0,1)$ with the appropriate choice of $p_1$. We thus see that the speed of convergence of $(p_i)_{i \in \mathbb{N}}$ to 0 matters when determining whether $\lim_{n \to \infty} P(S_n \ge 1) = 1$. For decision-making, this supports the intuition that reducing the probability of failure sufficiently fast is advantageous.
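The Wallis-product example can be verified numerically; the partial products converge quickly (function name ours):

```python
import math

def return_prob_wallis(n):
    """P(S_n >= 1) for p_i = 1/(4 i^2) under independence.

    The product prod_{i=1}^n (1 - 1/(4 i^2)) converges to 2/pi
    (reciprocal Wallis product), so P(S_n >= 1) -> 1 - 2/pi < 1.
    """
    prod_ = 1.0
    for i in range(1, n + 1):
        prod_ *= 1 - 1 / (4 * i * i)
    return 1 - prod_
```

Already `return_prob_wallis(10000)` agrees with $1 - 2/\pi \approx 0.3634$ to about four decimal places.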

2.3. The Dependent but Not Necessarily Identically Distributed Setup

The assumption of independence between time periods underlying the setups covered so far is typically unrealistic. We thus now consider setups under dependence, the dependence structure being modeled by a copula $C$, i.e., a distribution function with $\mathrm{U}(0,1)$ margins.
Let $(X_1, \dots, X_n) \sim F$, where $X_i \sim \mathrm{B}(1, p_i)$, $p_i \in (0,1)$, $i = 1, \dots, n$. By the first part of Sklar's theorem (see Nelsen 2006, Theorem 2.3.3), there exists a copula $C$ such that the joint distribution function $F$ can be expressed as
$$F(x_1, \dots, x_n) = C(F_1(x_1), \dots, F_n(x_n)), \quad x_1, \dots, x_n \in \mathbb{R}. \tag{4}$$
According to Sklar's theorem, the copula $C$ combining the marginal distribution functions $F_1, \dots, F_n$ of $F$ to the joint distribution function $F$ is only uniquely defined on $\prod_{i=1}^{n} \operatorname{ran}(F_i)$, where $\operatorname{ran}(F_i) = \{F_i(x_i) : x_i \in \mathbb{R}\} = \{0, 1 - p_i, 1\}$ denotes the range of the Bernoulli distribution function $F_i$. This non-uniqueness is not a problem for us, though, as we only utilize (4) via the second part of Sklar's theorem, which states that combining any margins $F_1, \dots, F_n$ (continuous or not; in our case they are discrete Bernoulli margins) with any copula $C$ as in (4) guarantees that $F$ defined in (4) is indeed a valid joint distribution function.
Under this setup, the probability $P(S_n \ge 1)$ of failure to achieve the goal under consideration at least once in the $n$ time periods can now be expressed as
$$P(S_n \ge 1) = P\Big(\sum_{i=1}^{n} X_i \ge 1\Big) = 1 - P(X_1 = 0, \dots, X_n = 0) = 1 - F(0, \dots, 0) \overset{\text{Sklar}}{=} 1 - C(F_1(0), \dots, F_n(0)) = 1 - C(1 - p_1, \dots, 1 - p_n); \tag{5}$$
note that, when necessary to distinguish different copulas, we also write $P_C(S_n \ge 1)$ for $P(S_n \ge 1)$.
Through different assumptions on the dependence structure $C$ and the marginal parameters $p_1, \dots, p_n$, we can investigate how the probability of failure to achieve all goals under consideration in the $n$ time periods behaves. Here are the first examples.
Example 3
(Behavior under different dependence structures).
(1) 
If $C(\boldsymbol{u}) = \Pi(\boldsymbol{u}) = \prod_{i=1}^{n} u_i$, $\boldsymbol{u} \in [0,1]^n$, is the independence copula, we obtain $P(S_n \ge 1) = 1 - \prod_{i=1}^{n} (1 - p_i)$, just as in Section 2.2.
(2) 
If $n = 2$ and $C(u_1, u_2) = W(u_1, u_2) = \max\{u_1 + u_2 - 1, 0\}$ is the countermonotone copula, then $P(S_n \ge 1) = 1 - W(1 - p_1, 1 - p_2) = 1 - \max\{1 - (p_1 + p_2), 0\} = \min\{p_1 + p_2, 1\}$, which is 1 if $p_1 + p_2 \ge 1$ and thus reaches failure for sure within two consecutive time periods. The independence setup, however, only leads to $P(S_n \ge 1) = 1 - (1 - p_1)(1 - p_2) < 1$ for all $p_1, p_2 \in (0,1)$. As such, independence is advantageous over countermonotonicity. For decision-making, we observe that, under countermonotonicity, not failing in the first period increases our chances of failing in the second.
(3) 
If $C(\boldsymbol{u}) = M(\boldsymbol{u}) = \min\{u_1, \dots, u_n\}$, $\boldsymbol{u} \in [0,1]^n$, is the comonotone copula, we obtain $P(S_n \ge 1) = 1 - \min\{1 - p_1, \dots, 1 - p_n\} = \max\{p_1, \dots, p_n\} = p_{(n)}$, which means that, if $p_i \le 1 - \varepsilon$, $i \in \mathbb{N}$, for some $\varepsilon \in (0,1)$, then $P(S_n \ge 1) \le 1 - \varepsilon$. In the independence setup, even though $P(S_n \ge 1)$ does not necessarily converge to 1 (see Example 2), it does so if $(p_i)_{i \in \mathbb{N}}$ converges to 0 sufficiently slowly (or not at all). As all scenarios in Example 1 show, a bound such as $P(S_n \ge 1) \le 1 - \varepsilon$ is typically not guaranteed. As such, comonotonicity is advantageous over independence. For decision-making, we observe that, under comonotonicity, not failing in the first period limits our chances of failing in the second, which is advantageous.
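The three dependence structures of Example 3 can be compared by evaluating $P(S_n \ge 1) = 1 - C(1 - p_1, \dots, 1 - p_n)$ directly; a minimal sketch (all helper names ours):

```python
def return_prob(copula, ps):
    """P(S_n >= 1) = 1 - C(1 - p_1, ..., 1 - p_n)."""
    return 1 - copula([1 - p for p in ps])

def Pi(u):
    """Independence copula."""
    out = 1.0
    for ui in u:
        out *= ui
    return out

def M(u):
    """Comonotone copula (upper Frechet-Hoeffding bound)."""
    return min(u)

def W2(u):
    """Countermonotone copula (bivariate case only)."""
    return max(u[0] + u[1] - 1, 0)

ps = [0.3, 0.4]
```

For $p_1 = 0.3$, $p_2 = 0.4$, this yields $0.4$ (comonotone), $0.58$ (independence) and $0.7$ (countermonotone), reflecting the ordering discussed in Example 3.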
The following result addresses bounds on $P(S_n \ge 1)$ in the dependent but not necessarily identically distributed setup. It thus generalizes the findings of Example 3. To this end, let $\delta_{C,n}(u) = C(u, \dots, u)$, $u \in [0,1]$, denote the diagonal of $C$.
Proposition 3
(Bounds on $P(S_n \ge 1)$ under dependence). Let $p_1, \dots, p_n \in (0,1)$ be marginal probabilities of failure.
(1) 
If $C_1, C_2$ are copulas such that $C_1 \le C_2$, that is, $C_1(\boldsymbol{u}) \le C_2(\boldsymbol{u})$, $\boldsymbol{u} \in [0,1]^n$, then
$$P_{C_1}(S_n \ge 1) \ge P_{C_2}(S_n \ge 1).$$
In particular, $C(\boldsymbol{u}) \le \Pi(\boldsymbol{u})$ ($C(\boldsymbol{u}) \ge \Pi(\boldsymbol{u})$), $\boldsymbol{u} \in [0,1]^n$, implies $P(S_n \ge 1) \ge 1 - \prod_{i=1}^{n} (1 - p_i)$ ($P(S_n \ge 1) \le 1 - \prod_{i=1}^{n} (1 - p_i)$).
(2) 
If $\delta_{C,n}$ denotes the diagonal of the $n$-dimensional copula $C$, then
$$\max\{1 - \delta_{C,n}(1 - p_{(1)}),\; p_{(n)}\} \le P(S_n \ge 1) \le \min\Big\{1 - \delta_{C,n}(1 - p_{(n)}),\; \sum_{i=1}^{n} p_i\Big\}.$$
Proof. 
(1)
By (5), $C_1 \le C_2$ implies that $P_{C_1}(S_n \ge 1) = 1 - C_1(1 - p_1, \dots, 1 - p_n) \ge 1 - C_2(1 - p_1, \dots, 1 - p_n) = P_{C_2}(S_n \ge 1)$. Letting one of $C_1$ or $C_2$ be $C$ and the other one be $\Pi$ implies the remaining part of the statement.
(2)
Copulas are componentwise increasing, so $\delta_{C,n}(1 - p_{(n)}) = C(1 - p_{(n)}, \dots, 1 - p_{(n)}) \le C(1 - p_1, \dots, 1 - p_n) \le C(1 - p_{(1)}, \dots, 1 - p_{(1)}) = \delta_{C,n}(1 - p_{(1)})$, and thus
$$1 - \delta_{C,n}(1 - p_{(1)}) \le P(S_n \ge 1) \le 1 - \delta_{C,n}(1 - p_{(n)}).$$
By the Fréchet–Hoeffding bounds theorem (see Nelsen 2006, Section 2.5), $W(\boldsymbol{u}) \le C(\boldsymbol{u}) \le M(\boldsymbol{u})$, $\boldsymbol{u} \in [0,1]^n$, for $W(\boldsymbol{u}) = \max\big\{\big(\sum_{i=1}^{n} u_i\big) - n + 1, 0\big\}$ and $M(\boldsymbol{u}) = \min_i\{u_i\}$, $\boldsymbol{u} \in [0,1]^n$. Therefore,
$$p_{(n)} = 1 - M(1 - p_1, \dots, 1 - p_n) \le P(S_n \ge 1) \le 1 - W(1 - p_1, \dots, 1 - p_n) = \min\Big\{\sum_{i=1}^{n} p_i,\; 1\Big\}.$$
Combining the two bounds leads to the result as stated. □
Figure 2 shows 3D plots (top) and contour plots (bottom) of the lower (left) and upper (right) bounds on $P(S_n \ge 1)$ of Proposition 3 (2) for a bivariate $t_4$ copula, with parameter such that Kendall's tau equals 0.25, as functions of $p_1, p_2$.
By Proposition 3 (1), if $C$ is negative lower orthant dependent (NLOD) (positive lower orthant dependent (PLOD)), that is, $C(\boldsymbol{u}) \le \Pi(\boldsymbol{u})$ ($C(\boldsymbol{u}) \ge \Pi(\boldsymbol{u})$), $\boldsymbol{u} \in [0,1]^n$, then $P(S_n \ge 1)$ is at least (at most) as large as in the independence case. An example of a PLOD copula is the mixture $C(\boldsymbol{u}) = \lambda M(\boldsymbol{u}) + (1 - \lambda)\Pi(\boldsymbol{u})$, $\boldsymbol{u} \in [0,1]^n$, between the comonotone copula $M$ and the independence copula $\Pi$ for some $\lambda \in [0,1]$. If $\lambda \in (0,1]$, then $C(\boldsymbol{u}) = \lambda M(\boldsymbol{u}) + (1 - \lambda)\Pi(\boldsymbol{u}) > \lambda \Pi(\boldsymbol{u}) + (1 - \lambda)\Pi(\boldsymbol{u}) = \Pi(\boldsymbol{u})$, $\boldsymbol{u} \in (0,1)^n$, and thus any such copula implies a smaller probability of failure to achieve all goals under consideration in the $n$ time periods in comparison to the independence case.
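The advantage of the PLOD mixture $\lambda M + (1 - \lambda)\Pi$ over independence can be illustrated numerically (helper names ours):

```python
def mixture_copula(lam):
    """C = lam * M + (1 - lam) * Pi, a PLOD mixture of the comonotone and independence copulas."""
    def C(u):
        prod_ = 1.0
        for ui in u:
            prod_ *= ui
        return lam * min(u) + (1 - lam) * prod_
    return C

ps = [0.1, 0.2, 0.3]
p_indep = 1 - (1 - 0.1) * (1 - 0.2) * (1 - 0.3)

def return_prob_mix(lam, ps):
    C = mixture_copula(lam)
    return 1 - C([1 - p for p in ps])
```

For every $\lambda \in (0,1]$, the resulting failure probability lies strictly below the independence value; $\lambda = 1$ recovers the comonotone value $\max_i p_i$.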
Similarly to Section 2.1 and Section 2.2, we can also consider the behavior of $P(S_n \ge 1)$ for increasing $n$.
Corollary 1
(Large n).
(1) 
If $\lim_{n \to \infty} \delta_{C,n}(1 - p_{(1)}) = 0$ or if $\lim_{n \to \infty} p_{(n)} = 1$, then $\lim_{n \to \infty} P(S_n \ge 1) = 1$.
(2) 
If $\lim_{n \to \infty} \delta_{C,n}(1 - p_{(n)}) > 0$ or if $\sum_{i=1}^{\infty} p_i < 1$, then $\lim_{n \to \infty} P(S_n \ge 1) < 1$.
Proof. 
Both results immediately follow from Proposition 3 (2). □
What do we know about the behavior of $\delta_{C,n}$ appearing in Proposition 3 and Corollary 1? If $C_{n+1}$ denotes an $(n+1)$-dimensional copula with $n$-dimensional margin $C_n$, then $\delta_{C,n}(u) = C_n(u, \dots, u) = C_{n+1}(u, \dots, u, 1) \ge C_{n+1}(u, \dots, u, u) = \delta_{C,n+1}(u)$, $u \in [0,1]$, $n \in \mathbb{N}$, so copula diagonals are pointwise decreasing with increasing dimension. They typically converge to 0 but can remain positive in the limit in special cases, see, for example, $C = M$, for which $\delta_{C,n}(u) = u$, $n \in \mathbb{N}$, or $C = \lambda M + (1 - \lambda)\Pi$, for which $\delta_{C,n}(u) = \lambda u + (1 - \lambda)u^n \to \lambda u$ ($n \to \infty$). Furthermore, by the Fréchet–Hoeffding bounds theorem, we know that
$$\delta_{C,n}(u) \in [\max\{1 - n(1 - u), 0\},\; u], \quad u \in [0,1]. \tag{6}$$

2.4. The Dependent and Identically Distributed Setup

To focus on the effect of dependence, we now address the identically distributed case, so assume that all marginal probabilities of failure are equal to $p \in (0,1)$. By (5),
$$P(S_n \ge 1) = 1 - C(1 - p, \dots, 1 - p) = 1 - \delta_{C,n}(1 - p); \tag{7}$$
note that this also follows from the equality of the lower and upper bounds on $P(S_n \ge 1)$ in Proposition 3 (2). As numerical examples, Figure 3 shows $P(S_n \ge 1)$ for $p \in \{0.01, 0.05, 0.25\}$ for an $n$-dimensional homogeneous $t_4$ copula with parameter such that pairwise Kendall's taus equal 0.1 (left) and 0.5 (right).
By (6), $\delta_{C,n}(u) \in [\max\{1 - n(1 - u), 0\}, u]$, $u \in [0,1]$, so that, for any dependence, (7) implies that
$$P(S_n \ge 1) \in [p,\; \min\{np, 1\}].$$
Due to the diagonal $\delta_{C,n}$ naturally appearing in (7), we now consider some example models with corresponding diagonals typically known in closed form.
Example 4
(Behavior under different dependencies and equal marginal failure probabilities).
(1) 
Clearly, the diagonals of $W$, $\Pi$ and $M$ are $\delta_{W,n}(u) = \max\{1 - n(1 - u), 0\}$, $\delta_{\Pi,n}(u) = u^n$ and $\delta_{M,n}(u) = u$, respectively.
(2) 
The $n$-dimensional mixture copula $C(\boldsymbol{u}) = \lambda C_1(\boldsymbol{u}) + (1 - \lambda) C_2(\boldsymbol{u})$ has diagonal
$$\delta_{C,n}(u) = \lambda \delta_{C_1,n}(u) + (1 - \lambda) \delta_{C_2,n}(u), \quad u \in [0,1],$$
where $\delta_{C_1,n}, \delta_{C_2,n}$ are the diagonals of $C_1, C_2$, respectively. In particular, the diagonal of the mixture between $M$ and $\Pi$ is $\delta_{C,n}(u) = \lambda u + (1 - \lambda)u^n$, $u \in [0,1]$, which converges to $\lambda u$ for $n \to \infty$ and any $u \in [0,1)$, as already mentioned before.
(3) 
Archimax copulas are copulas of the form $C(\boldsymbol{u}) = \psi(\ell(\psi^{-1}(u_1), \dots, \psi^{-1}(u_n)))$, $\boldsymbol{u} \in [0,1]^n$, where $\psi$ is an Archimedean generator (that is, $\psi : [0,\infty) \to [0,1]$ is continuous, decreasing, satisfies $\psi(0) = 1$, $\psi(\infty) := \lim_{t \to \infty} \psi(t) = 0$, and is strictly decreasing on $[0, \inf\{t : \psi(t) = 0\}]$) and where $\ell$ is a stable tail dependence function, that is, $\ell : [0,\infty)^n \to \mathbb{R}$ is homogeneous of degree 1 (that is, $\ell(c\boldsymbol{x}) = c\,\ell(\boldsymbol{x})$, $\boldsymbol{x} \in [0,\infty)^n$, $c > 0$), $\ell(\boldsymbol{e}_i) = 1$ for each standard basis vector $\boldsymbol{e}_i = (0, \dots, 0, 1, 0, \dots, 0) \in \mathbb{R}^n$, $i = 1, \dots, n$, and $\ell$ is fully $n$-max-decreasing (that is, for all $0 \le j \le n - 1$, $\ell$ is $(n - j)$-max-decreasing for any $j$ variables fixed, a property not further discussed here); see Charpentier et al. (2014) and Ressel (2013) for more details. According to the latter reference, a convenient characterization of stable tail dependence functions is via D-norms as functions $\boldsymbol{x} \mapsto \|\boldsymbol{x}\|_D := E\big(\max_{1 \le i \le n}\{x_i W_i\}\big)$, $\boldsymbol{x} \in [0,\infty)^n$, where the D-norm generator $\boldsymbol{W} = (W_1, \dots, W_n)$ is a vector of nonnegative, unit-mean random variables.
Archimedean copulas result as a special case of Archimax copulas for $\ell(\boldsymbol{x}) = \sum_{i=1}^{n} x_i$ and if $\psi$ is $n$-monotone (see McNeil and Nešlehová 2009). And extreme value copulas result as a special case for $\psi(t) = e^{-t}$, $t \ge 0$. The diagonal of Archimax copulas is
$$\delta_{C,n}(u) = \psi(\ell(\psi^{-1}(u), \dots, \psi^{-1}(u))) = \psi(\ell(\boldsymbol{1})\,\psi^{-1}(u)), \quad u \in [0,1],$$
which converges to 0 for $n \to \infty$ and any $u \in [0,1)$ as long as $\ell(\boldsymbol{1}) = \ell(1, \dots, 1)$ converges to a value at least as large as $\psi^{-1}(0)/\psi^{-1}(u)$. In particular, if $C$ is Archimedean with completely monotone generator $\psi$ (see, for example, McNeil 2008), then $\delta_{C,n}(u) = \psi(n\,\psi^{-1}(u)) \to 0$ for $n \to \infty$, $u \in [0,1)$, which implies, by (7), that $\lim_{n \to \infty} P(S_n \ge 1) = 1$. In contrast, if $C$ is extreme value with $\ell(\boldsymbol{1}) \to \eta$ for $n \to \infty$ (such $\ell$ can be constructed via D-norms by considering $W_i = 1$, $i \ge m$, for some $m \in \mathbb{N}$), then $\delta_{C,n}(u) = u^{\ell(\boldsymbol{1})} \to u^{\eta}$ for $n \to \infty$, $u \in [0,1)$, which implies, by (7), that
$$\lim_{n \to \infty} P(S_n \ge 1) = 1 - (1 - p)^{\eta}; \tag{8}$$
note that, for $\eta \in \mathbb{N}$, this is the probability of at least one failure in the iid setup over $\eta$ time periods. So the long-run return probability is as small as that in the iid setup over (only) $\eta$ time periods. We also see that (8) provides an example in the spirit of Example 2, but now under dependence. That $P(S_n \ge 1)$ does not converge to 1 for $n \to \infty$ may not be expected, but the positive influence the dependence has on $P(S_n \ge 1)$ in comparison to the independence case is (by Proposition 3 (1)), since extreme value copulas are PLOD.
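As a concrete instance of an Archimedean copula with completely monotone generator, consider the Clayton family with $\psi(t) = (1 + t)^{-1/\theta}$, $\theta > 0$ (a standard example; the code below is our own sketch, not from the paper). Its diagonal $\delta_{C,n}(u) = \psi(n\,\psi^{-1}(u))$ tends to 0, so $P(S_n \ge 1) \to 1$ by (7):

```python
# Clayton generator psi(t) = (1 + t)^(-1/theta), psi^{-1}(u) = u^(-theta) - 1, theta > 0,
# which is completely monotone, so delta_{C,n}(u) = psi(n * psi^{-1}(u)) -> 0 as n grows.
def clayton_diagonal(u, n, theta):
    return (1 + n * (u ** (-theta) - 1)) ** (-1 / theta)

def return_prob_clayton(p, n, theta):
    """P(S_n >= 1) = 1 - delta_{C,n}(1 - p) in the dependent, identically distributed setup."""
    return 1 - clayton_diagonal(1 - p, n, theta)
```

For $n = 1$ this recovers $p$; as $n$ grows, the probability increases toward 1 while always staying inside the universal bounds $[p, \min\{np, 1\}]$ and, since Clayton copulas with $\theta > 0$ are PLOD, below the independence value $1 - (1-p)^n$.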

3. Multivariate Case

So far we have only considered univariate setups, the dimension being time. We can extend this to include more dimensions. To keep the notation manageable, we only include one additional dimension, namely, that of the number of events considered at each time point. This could be used to model, for example, potential failures in different business lines over some time horizon, the potential failure of each member of a team to achieve a goal under consideration, or, as we do, without loss of generality, multiple goals to achieve at each time point by a person. As the independent or the identically distributed setups are special cases, we directly consider the more general, dependent setup.
For each $i = 1, \dots, n$ and $j = 1, \dots, d$, let $A_{i,j}$ denote the event of failure of a person to achieve goal $j$ in time period $i$, and let $X_{i,j} = \mathbb{1}_{A_{i,j}}$ be the corresponding failure indicator. Similarly to before, let $S_{nd} = \sum_{i=1}^{n} \sum_{j=1}^{d} X_{i,j}$, so that $\{S_{nd} \ge 1\}$ denotes the event of at least one failure among all goals and time periods. Marginally, we thus have $X_{i,j} \sim \mathrm{B}(1, p_{i,j})$ for $p_{i,j} \in (0,1)$, $i = 1, \dots, n$, $j = 1, \dots, d$, with distribution function $F_{i,j}$. By the first part of Sklar's theorem, there exists a copula $C$ such that
$$F(x_{1,1}, \dots, x_{n,d}) = C(F_{1,1}(x_{1,1}), \dots, F_{n,d}(x_{n,d})), \quad x_{1,1}, \dots, x_{n,d} \in \mathbb{R}, \tag{9}$$
where $C$ is uniquely defined on $\prod_{i=1}^{n} \prod_{j=1}^{d} \operatorname{ran}(F_{i,j})$ for $\operatorname{ran}(F_{i,j}) = \{F_{i,j}(x_{i,j}) : x_{i,j} \in \mathbb{R}\} = \{0, 1 - p_{i,j}, 1\}$. As before, the non-uniqueness is not relevant for us, as we focus on model building via the second part of Sklar's theorem, from which we obtain that $F$ defined as in (9) is always a valid joint distribution function for any copula $C$ and margins $F_{i,j}$, $i = 1, \dots, n$, $j = 1, \dots, d$.
The probability of at least one failure to achieve the goals under consideration over the $n$ time periods is therefore
$$P(S_{nd} \ge 1) = P\Big(\sum_{i=1}^{n} \sum_{j=1}^{d} X_{i,j} \ge 1\Big) = 1 - P(X_{1,1} = 0, \dots, X_{n,d} = 0) = 1 - F(0, \dots, 0) \overset{\text{Sklar}}{=} 1 - C(F_{1,1}(0), \dots, F_{n,d}(0)) = 1 - C(1 - p_{1,1}, \dots, 1 - p_{n,d}) = 1 - C(1 - \boldsymbol{p}). \tag{10}$$
Clearly, if $C_1, C_2$ are copulas such that $C_1 \le C_2$, then $P_{C_1}(S_{nd} \ge 1) \ge P_{C_2}(S_{nd} \ge 1)$, so more dependence in the sense of concordance is advantageous. In particular, if $C$ is PLOD, then $P(S_{nd} \ge 1) = 1 - C(1 - \boldsymbol{p}) \le 1 - \Pi(1 - \boldsymbol{p}) = 1 - \prod_{i=1}^{n} \prod_{j=1}^{d} (1 - p_{i,j})$, so any PLOD dependence is advantageous over the independence case. Adapting Proposition 3 (2) to our double-index setup implies the general bounds
$$\max\{1 - \delta_{C,nd}(1 - p_{(1),(1)}),\; p_{(n),(d)}\} \le P(S_{nd} \ge 1) \le \min\Big\{1 - \delta_{C,nd}(1 - p_{(n),(d)}),\; \sum_{i=1}^{n} \sum_{j=1}^{d} p_{i,j}\Big\},$$
where $p_{(1),(1)} = \min_{i,j}\{p_{i,j}\}$ and $p_{(n),(d)} = \max_{i,j}\{p_{i,j}\}$.
One can study the behavior of (10) more systematically in various setups. For the margins, one may be interested in one of the following setups:
(M1)
$p_{i,j} \in (0,1)$, $i = 1, \dots, n$, $j = 1, \dots, d$;
(M2)
$p_{i,j} = p_i \in (0,1)$, $i = 1, \dots, n$, $j = 1, \dots, d$;
(M3)
$p_{i,j} = p \in (0,1)$, $i = 1, \dots, n$, $j = 1, \dots, d$.
Clearly, the block-homogeneous case (M2) is a special case of (M1), and the homogeneous case (M3) is a special case of the block-homogeneous (M2). For the dependence, due to the different dimensions, considering the case of a general ( n d ) -dimensional copula C that determines the dependence across all components with indices i = 1 , , n , j = 1 , , d , does not allow for an easy comparison with the results of Section 2.3 and Section 2.4. However, nested copulas do, which are what we focus on in what follows. Nested copulas are copulas of the form
\[
C(\mathbf{u}) = C_0\bigl(C_1(\mathbf{u}_1), \dots, C_K(\mathbf{u}_K)\bigr), \quad \mathbf{u} = (\mathbf{u}_1, \dots, \mathbf{u}_K) \in [0,1]^{nd},
\]
for copulas $C_0, C_1, \dots, C_K$. There are two natural choices for $K$. First, $K = d$ and $\mathbf{u}_j = (u_{1,j}, \dots, u_{n,j})$, so that $C_j$ models the dependence across the $n$ time periods corresponding to goal $j$, and $C_0$ models the dependence between the indicators of achieving two different goals (irrespective of their time periods). Second, $K = n$ and $\mathbf{u}_i = (u_{i,1}, \dots, u_{i,d})$, so that $C_i$ models the dependence of all $d$ goal-achievement indicators at time $i$, and $C_0$ models the dependence between two different time points (irrespective of which goals are considered). In what follows, we assume the latter; the former case can be handled similarly. In short, we consider
\[
C(\mathbf{u}) = C_0\bigl(C_1(\mathbf{u}_1), \dots, C_n(\mathbf{u}_n)\bigr), \quad \mathbf{u} = (\mathbf{u}_1, \dots, \mathbf{u}_n) \in [0,1]^{nd}, \tag{11}
\]
where $\mathbf{u}_i = (u_{i,1}, \dots, u_{i,d})$, $i = 1, \dots, n$. Now, even if $C_0, C_1, \dots, C_n$ are copulas, $C$ in (11) is in general not a copula anymore unless certain non-trivial conditions are satisfied. As such, we consider the following cases:
(D1)
$C_i = M$, $i = 1, \dots, n$;
(D2)
$C_0 = \Pi$;
(D3)
$C$ is a copula for those $C_0, C_1, \dots, C_n$ considered.
We treat case (D3) as a generic case under the assumption of working with a valid copula $C$; one concrete example is the class of nested Archimedean copulas, for which sufficient conditions are known under which $C$ is a valid copula; see McNeil (2008) for details.

3.1. Comonotone Blocks

Consider (D1). Under (M1), we obtain from (10) and (11) that
\[
P(S_{nd} \ge 1) = 1 - C(\mathbf{1} - \mathbf{p}) = 1 - C_0\Bigl(\min_{1 \le j \le d}\{1 - p_{1,j}\}, \dots, \min_{1 \le j \le d}\{1 - p_{n,j}\}\Bigr) = 1 - C_0\bigl(1 - p_{1,(d)}, \dots, 1 - p_{n,(d)}\bigr).
\]
If the $n$-dimensional copula $C_0$ is the $n$-dimensional copula $C$ from (5) in Section 2.3, then $P(S_{nd} \ge 1)$ is larger (smaller) than or equal to $P(S_n \ge 1)$ implied by (5) if and only if $p_{i,(d)} \ge p_i$ ($p_{i,(d)} \le p_i$), $i = 1, \dots, n$. In other words, if there is at least one time point $i$ and at least one goal $j$ at that time point such that $p_{i,j} > p_i$, while $p_{k,l} \ge p_k$, $k \ne i$, $l = 1, \dots, d$, then $P(S_{nd} \ge 1)$ is increased. Under (M2), we obtain
\[
P(S_{nd} \ge 1) = 1 - C_0(1 - p_1, \dots, 1 - p_n),
\]
which coincides marginally with (5), so if $C_0$ is larger (smaller) than or equal to the copula $C$ in (5), then $P(S_{nd} \ge 1)$ is decreased (increased). The same holds under (M3) with $C$ as in (7). Under (M2) (and thus also (M3)), this allows for a simple interpretation. Under (D1) and (M2), the comonotone dependence and equal margins at each time point $i$ imply that we either achieve all goals at $i$ or none, which is equivalent to achieving (or failing to achieve) a single goal at $i$ as we considered in Section 2.3; note that this is no longer true in general under (M1).
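The block-comonotone interpretation above can be checked numerically. The sketch below takes $C_0 = \Pi$ (independent time periods) and comonotone goals within each period, so that $P(S_{nd} \ge 1) = 1 - \prod_{i=1}^{n}(1 - p_{i,(d)})$, and compares this closed form with a Monte Carlo simulation in which one uniform random variable per period drives all $d$ goal indicators; the failure probabilities are illustrative.

```python
import random
from math import prod

p = [[0.10, 0.30], [0.20, 0.05], [0.15, 0.25]]   # p[i][j] = p_{i,j}

# Closed form under (D1) with C_0 = Pi: 1 - prod_i (1 - p_{i,(d)}).
exact = 1 - prod(1 - max(row) for row in p)

# Monte Carlo: a single uniform per time period makes the indicators
# X_{i,j} = 1{U_i <= p_{i,j}} comonotone within the period; periods independent.
random.seed(1)
N = 200_000
hits = 0
for _ in range(N):
    fail = False
    for row in p:
        u = random.random()
        if any(u <= pij for pij in row):   # at least one failure in this period
            fail = True
            break
    hits += fail

mc = hits / N
assert abs(mc - exact) < 0.01
print(exact, mc)
```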

3.2. Independence Between Blocks

Now consider (D2). Under (M1), we obtain from (10) and (11) that
\[
P(S_{nd} \ge 1) = 1 - C(\mathbf{1} - \mathbf{p}) = 1 - \prod_{i=1}^{n} C_i(1 - p_{i,1}, \dots, 1 - p_{i,d}) = 1 - \prod_{i=1}^{n} \bigl(1 - (1 - C_i(1 - p_{i,1}, \dots, 1 - p_{i,d}))\bigr) = 1 - \prod_{i=1}^{n} (1 - p_{0,i}),
\]
where
\[
p_{0,i} = 1 - C_i(1 - p_{i,1}, \dots, 1 - p_{i,d}), \quad i = 1, \dots, n; \tag{12}
\]
under (M2), this becomes $p_{0,i} = 1 - \delta_{C_i,d}(1 - p_i)$, $i = 1, \dots, n$. We thus immediately see the connection to the univariate case (2) in Section 2.2. There, the marginal failure probabilities were treated as given, whereas now they are implied by the given failure probabilities across the $d$ goals at each time point. In particular, we see from (12) that obtaining homogeneous $p_{0,1}, \dots, p_{0,n}$ as considered in Section 2.1 requires neither (M3) nor equal $C_i$'s. And if all $p_{0,i}$'s in (12) equal $p$, then
\[
P(S_{nd} \ge 1) = 1 - C(\mathbf{1} - \mathbf{p}) = 1 - (1 - p)^n,
\]
which coincides with (1) from Section 2.1.
Under (M3) and equal $C_1 = \dots = C_n =: \tilde{C}$, we have $p_{0,i} = 1 - \delta_{\tilde{C},d}(1 - p) =: \tilde{p}$, $i = 1, \dots, n$, and thus again
\[
P(S_{nd} \ge 1) = 1 - (1 - \tilde{p})^n.
\]
It follows from (6) that in this case $\tilde{p} \in [p, \min\{dp, 1\}]$, so no matter the dependence of the failures across the $d$ goals, $P(S_{nd} \ge 1)$ is at least as large as in the iid case (1) of Section 2.1 based on the marginal failure probability $p$. Also this result allows for an intuitive explanation. No matter the dependence of the failures across the $d$ goals, every additional trial means there is a potential failure lurking, and so $P(S_{nd} \ge 1)$ must be at least as large as in the case of a single trial. For decision-making, the fewer decisions with the potential to lead to a failure that need to be made, ceteris paribus, the lower the chance of failure.
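The range $\tilde{p} \in [p, \min\{dp, 1\}]$ can be illustrated with the comonotone copula $\tilde{C} = M$ (whose diagonal $\delta_{M,d}(u) = u$ attains the lower end $\tilde{p} = p$) and the independence copula $\tilde{C} = \Pi$ (with $\delta_{\Pi,d}(u) = u^d$); the values of $p$, $n$ and $d$ below are illustrative.

```python
# Block-level failure probability p~ = 1 - delta_{C~,d}(1 - p) under (M3)
# with equal within-block copulas C~.
p, n, d = 0.05, 10, 4

p_comonotone = 1 - (1 - p)          # C~ = M:  delta(u) = u    => p~ = p
p_independent = 1 - (1 - p) ** d    # C~ = Pi: delta(u) = u^d

for p_tilde in (p_comonotone, p_independent):
    assert p <= p_tilde <= min(d * p, 1)   # the range implied by (6)
    print(1 - (1 - p_tilde) ** n)          # P(S_nd >= 1) = 1 - (1 - p~)^n
```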

3.3. Nested Copulas

Under (D3), (10) implies that
\[
P(S_{nd} \ge 1) = 1 - C_0\bigl(C_1(\mathbf{1} - \mathbf{p}_1), \dots, C_n(\mathbf{1} - \mathbf{p}_n)\bigr) = 1 - C_0\bigl(1 - (1 - C_1(\mathbf{1} - \mathbf{p}_1)), \dots, 1 - (1 - C_n(\mathbf{1} - \mathbf{p}_n))\bigr) \tag{13}
\]
\[
= 1 - C_0(1 - p_{0,1}, \dots, 1 - p_{0,n}), \tag{14}
\]
where $\mathbf{p}_i = (p_{i,1}, \dots, p_{i,d})$ and $p_{0,i} = 1 - C_i(\mathbf{1} - \mathbf{p}_i) = 1 - C_i(1 - p_{i,1}, \dots, 1 - p_{i,d})$, $i = 1, \dots, n$. There are various conclusions about the behavior of $P(S_{nd} \ge 1)$ that we can draw from this representation.
First, if $\mathbf{p}_i = \mathbf{0}$ for some $i \in \{1, \dots, n\}$, then $p_{0,i} = 1 - C_i(\mathbf{1} - \mathbf{p}_i) = 0$, and thus, in time period $i$, there is no failure, irrespective of the dependence. This is clear, as $\mathbf{p}_i = \mathbf{0}$ implies a probability of 1 of achieving each of the $d$ goals. On the other hand, if $p_{i,j} = 1$ for at least one $j \in \{1, \dots, d\}$, then $p_{0,i} = 1 - C_i(\mathbf{1} - \mathbf{p}_i) = 1 - 0 = 1$, and thus $P(S_{nd} \ge 1) = 1$. This is also clear, as $p_{i,j} = 1$ implies, for sure, a failure to achieve goal $j$ in time period $i$ and thus $P(S_{nd} \ge 1) = 1$.
Second, from (13), increasing (decreasing) the dependence within each block $i$ (so pointwise increasing (decreasing) $C_i$ for any $i \in \{1, \dots, n\}$) can typically be compensated for by decreasing (increasing) the dependence $C_0$ across different blocks (as long as $C$ remains a valid model, which we assume). This behavior is typical for nested models and can be studied in more detail for specific classes such as nested Archimedean copulas; we already considered two special cases of nested models, namely (D1) and (D2). Moreover, and again from (13), increasing (decreasing) any marginal probability of failure $p_{i,j}$ leads to an increased (decreased) $P(S_{nd} \ge 1)$. In particular, and as already mentioned in Section 3.2, if $C_i(\mathbf{1} - \mathbf{p}_i)$ remains constant through a change in any of $p_{i,1}, \dots, p_{i,d}$ or $C_i$, then $P(S_{nd} \ge 1)$ remains unchanged.
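This effect of within-block dependence can be made concrete with nested Gumbel copulas, for which the sufficient nesting condition of McNeil (2008) requires the outer parameter not to exceed the inner ones ($\theta_0 \le \theta_i$). The sketch below, with illustrative parameters and failure probabilities, shows that increasing the within-block dependence (larger $\theta_i$) lowers $P(S_{nd} \ge 1)$ for a fixed $C_0$.

```python
from math import exp, log

def gumbel(u, theta):
    # Gumbel copula C(u) = exp(-(sum_k (-log u_k)^theta)^(1/theta)), theta >= 1
    return exp(-sum((-log(uk)) ** theta for uk in u) ** (1 / theta))

def prob_at_least_one_failure(p, theta0, thetas):
    # 1 - C_0(C_1(1 - p_1), ..., C_n(1 - p_n)) for inner/outer Gumbel copulas
    inner = [gumbel([1 - pij for pij in row], th) for row, th in zip(p, thetas)]
    return 1 - gumbel(inner, theta0)

p = [[0.05, 0.10], [0.02, 0.08], [0.04, 0.06]]         # p[i][j] = p_{i,j}
weak = prob_at_least_one_failure(p, 1.2, [1.5] * 3)    # weak within-block dependence
strong = prob_at_least_one_failure(p, 1.2, [4.0] * 3)  # strong within-block dependence

assert 0 < strong < weak < 1   # more within-block dependence lowers the probability
print(weak, strong)
```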
Third, if $C_0, C_1, \dots, C_n$ are PLOD, so is $C$, and we thus have $P(S_{nd} \ge 1) \le 1 - \prod_{i=1}^{n}\prod_{j=1}^{d}(1 - p_{i,j})$, as already pointed out after (10), so the probability of at least one failure is less than or equal to that in the independent setup.
Fourth, we see from (14) that the probability of at least one failure over $n$ time periods with $d$ goals each under $C$ equals that under $C_0$ over $n$ ($< nd$) time periods but with marginal failure probabilities $p_{0,1}, \dots, p_{0,n}$. Under (M2), we have $p_{0,i} = 1 - C_i(\mathbf{1} - \mathbf{p}_i) = 1 - \delta_{C_i,d}(1 - p_i)$, $i = 1, \dots, n$, so that if $C_i = M$, $i = 1, \dots, n$, we have $p_{0,i} = 1 - (1 - p_i) = p_i$, $i = 1, \dots, n$, and thus
\[
P(S_{nd} \ge 1) = 1 - C_0(1 - p_1, \dots, 1 - p_n).
\]
This means that the probability of at least one failure over n time periods with d goals per unit of time under C equals that under C 0 over n time periods. This is intuitively clear, as we already explained in Section 3.1.
Last, under (M3), we have (writing $\delta_i := \delta_{C_i,d}$)
\[
P(S_{nd} \ge 1) = 1 - C_0\bigl(1 - (1 - \delta_1(1 - p)), \dots, 1 - (1 - \delta_n(1 - p))\bigr).
\]
If $\delta_1(u) = \dots = \delta_n(u) =: \delta(u)$, $u \in [0,1]$, then
\[
P(S_{nd} \ge 1) = 1 - C_0\bigl(1 - (1 - \delta(1 - p)), \dots, 1 - (1 - \delta(1 - p))\bigr) = 1 - C_0\bigl(\delta(1 - p), \dots, \delta(1 - p)\bigr) = 1 - \delta_{C_0,n}(\delta(1 - p)) = 1 - \delta_{C_0,n}\bigl(1 - (1 - \delta(1 - p))\bigr),
\]
which coincides, for $C_0$ being $C$ and $p$ being $\tilde{p} = 1 - \delta(1 - p)$, with (7) from the dependent and identically distributed setup of Section 2.4. In particular, as already argued in Section 3.2, it follows from (6) that $\tilde{p} \in [p, \min\{dp, 1\}]$. Similarly to our conclusion in Section 3.2, this is intuitive since having to achieve $d > 1$ goals per unit of time bears more risk of potential failure than in the case of a single goal, and that is reflected in the higher marginal failure probability $\tilde{p} = 1 - \delta(1 - p)$ in comparison to the $p$ of (7).
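In the fully homogeneous case, the representation $1 - \delta_{C_0,n}(\delta(1-p))$ can be sanity-checked against the iid formula: with $C_0 = \Pi$ and equal inner copulas $\Pi$, it collapses to $1 - (1-p)^{nd}$. The numbers below are illustrative.

```python
p, n, d = 0.02, 12, 3

delta_inner = (1 - p) ** d      # delta(1 - p) = delta_{Pi,d}(1 - p) = (1 - p)^d
prob = 1 - delta_inner ** n     # 1 - delta_{Pi,n}(delta(1 - p))

# Collapses to the iid case: 1 - (1 - p)^{nd}.
assert abs(prob - (1 - (1 - p) ** (n * d))) < 1e-12
print(prob)
```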
Various extensions of our results are possible. For example, apart from notational convenience, there is no need to keep the number of goals fixed at $d$ in each time interval; in other words, $d$ could be made dependent on $i$. The following remark addresses another extension.
Remark 1
(Extensions). Instead of considering the probability $P(S_{nd} \ge 1)$ of at least one failure to achieve all goals in every time period, one could consider the probability of at least one failure in each of the $n$ time intervals. Writing $A_{i,j} := \{X_{i,j} = 1\}$ for the event of a failure to achieve goal $j$ in time period $i$, De Morgan's law and the inclusion–exclusion principle yield
\[
P\Bigl(\sum_{j=1}^{d} X_{i,j} \ge 1,\ i = 1, \dots, n\Bigr) = P\Bigl(\bigcap_{i=1}^{n}\bigcup_{j=1}^{d} A_{i,j}\Bigr) = 1 - \sum_{i=1}^{n}(-1)^{i-1}\sum_{I \subseteq \{1,\dots,n\} : |I| = i} P\Bigl(\bigcap_{i \in I}\bigcap_{j=1}^{d} A_{i,j}^{c}\Bigr) = 1 - \sum_{i=1}^{n}(-1)^{i-1}\sum_{I \subseteq \{1,\dots,n\} : |I| = i} P\Bigl(\bigcap_{i \in I}\bigcap_{j=1}^{d} \{X_{i,j} = 0\}\Bigr).
\]
Let $\mathbf{0}_{I^c \to 1}$ denote the $nd$-vector of zeros with those components with index $i \in I^c$ (and any corresponding index $j$) replaced by 1. Then
\[
P\Bigl(\bigcap_{i \in I}\bigcap_{j=1}^{d} \{X_{i,j} = 0\}\Bigr) = P(X_{i,j} = 0,\ i \in I,\ j = 1, \dots, d) = F(\mathbf{0}_{I^c \to 1}) = C(\mathbf{1} - \mathbf{p}_{I^c \to 0}) = C_I(\mathbf{1} - \mathbf{p}_I),
\]
where $C_I$ denotes the $(|I|d)$-dimensional marginal copula of the $(nd)$-dimensional copula $C$ corresponding to the blocks with indices $i \in I$, $\mathbf{p}_{I^c \to 0}$ denotes $\mathbf{p}$ with all components with index $i \in I^c$ replaced by 0, and $\mathbf{p}_I = (p_{i,j})_{i \in I,\, j = 1, \dots, d}$. Therefore,
\[
P\Bigl(\sum_{j=1}^{d} X_{i,j} \ge 1,\ i = 1, \dots, n\Bigr) = 1 - \sum_{i=1}^{n}(-1)^{i-1}\sum_{I \subseteq \{1,\dots,n\} : |I| = i} C_I(\mathbf{1} - \mathbf{p}_I). \tag{15}
\]
As an example, consider $p_{i,j} = p_j$, $i = 1, \dots, n$. If $C = \Pi$, then $C_I(\mathbf{1} - \mathbf{p}_I) = \prod_{i \in I}\prod_{j=1}^{d}(1 - p_{i,j}) = \prod_{i \in I}\prod_{j=1}^{d}(1 - p_j) = \bigl(\prod_{j=1}^{d}(1 - p_j)\bigr)^{|I|}$. Therefore,
\[
P\Bigl(\sum_{j=1}^{d} X_{i,j} \ge 1,\ i = 1, \dots, n\Bigr) = 1 - \sum_{i=1}^{n}(-1)^{i-1}\sum_{I \subseteq \{1,\dots,n\} : |I| = i}\Bigl(\prod_{j=1}^{d}(1 - p_j)\Bigr)^{i} = 1 + \sum_{i=1}^{n}\binom{n}{i}\Bigl(-\prod_{j=1}^{d}(1 - p_j)\Bigr)^{i} = \sum_{i=0}^{n}\binom{n}{i}\Bigl(-\prod_{j=1}^{d}(1 - p_j)\Bigr)^{i} = \Bigl(1 - \prod_{j=1}^{d}(1 - p_j)\Bigr)^{n}, \tag{16}
\]
which is easy to interpret: $1 - p_j$ is the probability of no failure at achieving goal $j$ at one time point $i \in \{1, \dots, n\}$, so, under independence, $\prod_{j=1}^{d}(1 - p_j)$ is the probability of no failure at any of the goals at time $i$; thus, $1 - \prod_{j=1}^{d}(1 - p_j)$ is the probability of at least one failure at time $i$, and hence, again under independence, $\bigl(1 - \prod_{j=1}^{d}(1 - p_j)\bigr)^{n}$ is the probability of at least one failure at every time point $i = 1, \dots, n$.
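The identity (16) can be verified directly against the inclusion–exclusion sum (15) by brute force over all subsets $I$; $n$ and the $p_j$ below are illustrative.

```python
from itertools import combinations
from math import prod

n = 4
p = [0.10, 0.25, 0.05]           # p_j, j = 1, ..., d
q = prod(1 - pj for pj in p)     # P(no failure at a fixed time point) under Pi

# Right-hand side of (15) with C = Pi, where C_I(1 - p_I) = q^{|I|}.
incl_excl = 1 - sum(
    (-1) ** (len(I) - 1) * q ** len(I)
    for k in range(1, n + 1)
    for I in combinations(range(n), k)
)

closed_form = (1 - q) ** n       # (16)
assert abs(incl_excl - closed_form) < 1e-12
print(closed_form)
```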
If $C = M$, we have $C_I(\mathbf{1} - \mathbf{p}_I) = 1 - \max_{i \in I,\, j = 1, \dots, d}\{p_{i,j}\} = 1 - \max_{j = 1, \dots, d}\{p_j\} = 1 - p_{(d)}$. We then obtain that
\[
P\Bigl(\sum_{j=1}^{d} X_{i,j} \ge 1,\ i = 1, \dots, n\Bigr) = 1 - \sum_{i=1}^{n}(-1)^{i-1}\sum_{I \subseteq \{1,\dots,n\} : |I| = i}(1 - p_{(d)}) = 1 + (1 - p_{(d)})\sum_{i=1}^{n}(-1)^{i}\sum_{I \subseteq \{1,\dots,n\} : |I| = i} 1 = 1 + (1 - p_{(d)})\Bigl(-1 + \sum_{i=0}^{n}(-1)^{i}\sum_{I \subseteq \{1,\dots,n\} : |I| = i} 1\Bigr) = 1 + (1 - p_{(d)})\bigl(-1 + (1 - 1)^{n}\bigr) = 1 - (1 - p_{(d)}) = p_{(d)}. \tag{17}
\]
Also this result allows for an interpretation. Under $C = M$, the probability of at least one failure at any time point $i$ is $p_{(d)}$, and thus the probability of at least one failure at every time point $i$ is also $p_{(d)}$, due to comonotonicity.
Using the previous results, we obtain from (15) that for $C = \lambda M + (1 - \lambda)\Pi$ we have
\[
P\Bigl(\sum_{j=1}^{d} X_{i,j} \ge 1,\ i = 1, \dots, n\Bigr) = 1 - \lambda(1 - p_{(d)}) - (1 - \lambda)\Bigl(1 - \Bigl(1 - \prod_{j=1}^{d}(1 - p_j)\Bigr)^{n}\Bigr) = 1 - \lambda + \lambda p_{(d)} - (1 - \lambda) + (1 - \lambda)\Bigl(1 - \prod_{j=1}^{d}(1 - p_j)\Bigr)^{n} = \lambda p_{(d)} + (1 - \lambda)\Bigl(1 - \prod_{j=1}^{d}(1 - p_j)\Bigr)^{n},
\]
which is the convex combination of (16) and (17).
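The convex-combination result can likewise be checked by brute force, using $C_I(\mathbf{1}-\mathbf{p}_I) = \lambda(1 - p_{(d)}) + (1-\lambda)\bigl(\prod_{j=1}^{d}(1-p_j)\bigr)^{|I|}$ in (15); $\lambda$, $n$ and the $p_j$ are illustrative.

```python
from itertools import combinations
from math import prod

lam, n = 0.3, 4
p = [0.10, 0.25, 0.05]           # p_j, j = 1, ..., d
q = prod(1 - pj for pj in p)
p_max = max(p)                   # p_(d)

# Right-hand side of (15) for the mixture copula C = lam*M + (1 - lam)*Pi.
incl_excl = 1 - sum(
    (-1) ** (len(I) - 1) * (lam * (1 - p_max) + (1 - lam) * q ** len(I))
    for k in range(1, n + 1)
    for I in combinations(range(n), k)
)

convex = lam * p_max + (1 - lam) * (1 - q) ** n   # lam*(17) + (1 - lam)*(16)
assert abs(incl_excl - convex) < 1e-12
print(convex)
```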

4. Conclusions

Under various setups of marginal Bernoulli distributions and copulas as dependence structures, we derived closed forms of the return probability of at least one failure when trying to achieve a goal in each of $n$ time periods (univariate case) and when trying to achieve all $d$ goals in each of $n$ time periods (multivariate case). In the univariate case, we considered the iid setup, the independent setup, the dependent but not necessarily identically distributed setup, and the dependent and identically distributed setup. In the multivariate case, besides different setups for the marginal failure probabilities, we considered comonotone blocks, independence between blocks, and nested copulas as dependence structures. We also provided bounds on the return probability of at least one failure in case closed form expressions were not available. Our results are quantitatively tractable and qualitatively interpretable in terms of decision-making.
To see the connection to the latter once again, we close with an anecdote. On 9 June 2024, Swiss former professional tennis player Roger Federer gave the 2024 Commencement Address at Dartmouth College (see Federer (2024)), in which he told the audience three “tennis lessons”, the first two being directly related to our findings. In his first lesson (“‘Effortless’ is a myth”), he describes how hard he had to work every year as a professional tennis player, which directly translates to trying to attain a small failure probability in each time interval in our setup. And his second lesson (“It’s only a point”) addresses various aspects of dependence, for example,
“When you’re playing a point, it is the most important thing in the world. But when it’s behind you, it’s behind you…This mindset is really crucial, because it frees you to fully commit to the next point…[…].”
or
“You want to become a master at overcoming hard moments. That to me is the sign of a champion. The best in the world are not the best because they win every point…It’s because they know they’ll lose…again and again…and have learned how to deal with it. […] You move on.”
which both support the idea of not letting dependence negatively impact a player when moving from one point (one time period) to the next (“it’s behind you”, “You move on”). In short, independence is advantageous over negative dependence. Moreover, our results support one more important lesson to learn: we saw that positive dependence is advantageous over independence, so we should also keep in mind to learn from our past mistakes moving forward.

Funding

This research received no external funding.

Data Availability Statement

The data used and included in this article are simulated as described. Related requests may be addressed to the author.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Borovcnik, Manfred. 2015. Risk and decision making: The “logic” of probability. The Mathematics Enthusiast 12: 113–39. [Google Scholar] [CrossRef]
  2. Camerer, Colin F., and Howard Kunreuther. 1989. Decision processes for low probability events: Policy implications. Journal of Policy Analysis and Management 8: 565–92. [Google Scholar] [CrossRef]
  3. Charpentier, Arthur, Anne-Laure Fougères, Christian Genest, and Johanna G. Nešlehová. 2014. Multivariate Archimax copulas. Journal of Multivariate Analysis 126: 118–36. [Google Scholar] [CrossRef]
  4. Clemen, Robert T., Gregory W. Fischer, and Robert L. Winkler. 2000. Assessing dependence: Some experimental results. Management Science 46: 1100–15. [Google Scholar] [CrossRef]
  5. Federer, Roger. 2024. 2024 Commencement Address by Roger Federer. Available online: https://home.dartmouth.edu/news/2024/06/2024-commencement-address-roger-federer (accessed on 12 July 2025).
  6. Frees, Edward W., and Emiliano A. Valdez. 1998. Understanding relationships using copulas. North American Actuarial Journal 2: 1–25. [Google Scholar] [CrossRef]
  7. McNeil, Alexander J. 2008. Sampling nested Archimedean copulas. Journal of Statistical Computation and Simulation 78: 567–81. [Google Scholar] [CrossRef]
  8. McNeil, Alexander J., and Johanna Nešlehová. 2009. Multivariate Archimedean copulas, d-monotone functions and 1-norm symmetric distributions. Annals of Statistics 37: 3059–97. [Google Scholar] [CrossRef] [PubMed]
  9. Nelsen, Roger B. 2006. An Introduction to Copulas. Berlin/Heidelberg: Springer. [Google Scholar]
  10. Patton, Andrew. 2012. A review of copula models for economic time series. Journal of Multivariate Analysis 110: 4–18. [Google Scholar] [CrossRef]
  11. Ressel, Paul. 2013. Homogeneous distributions—And a spectral representation of classical mean values and stable tail dependence functions. Journal of Multivariate Analysis 117: 246–56. [Google Scholar] [CrossRef]
  12. Shao, Huijuan, Xinwei Deng, Chi Zhang, Shuai Zheng, Hamed Khorasgani, Ahmed Farahat, and Chetan Gupta. 2019. Multivariate Bernoulli logit-normal model for failure prediction. Annual Conference of the PHM Society 11: 1–8. [Google Scholar] [CrossRef]
  13. Tinsley, Catherine H., Robin L. Dillon, and Matthew A. Cronin. 2012. How near-miss events amplify or attenuate risky decision making. Management Science 58: 1596–613. [Google Scholar] [CrossRef]
  14. Wakker, Peter P. 2010. Prospect Theory: For Risk and Ambiguity. Cambridge: Cambridge University Press. [Google Scholar]
Figure 1. Lower (blue) and upper (red) bounds on $P(S_n \ge 1)$ (orange) of Proposition 1 for $p_{(1)} = 0.01$, $p_{(2)} = 0.05$, $p_{(3)} = 0.1$, $p_{(4)} = p_{(5)} = \dots = p_{(n)} = 0.25$, and $i^{*} \in \{1, \dots, 4\}$ (left). Trivial lower bound, lower bound from Proposition 1, and that of Proposition 2 (all blue) on $P(S_n \ge 1)$ (orange) for a geometric sequence $p_i = p_{\max} - (p_{\max} - p_{\min})^i$, $i = 1, \dots, n$, with $p_{\min} = 0.01$ and $p_{\max} = 0.25$ (right).
Figure 2. Lower (left) and upper (right) bounds on $P(S_n \ge 1)$ of Proposition 3 (2) for a bivariate $t_4$ copula with parameter such that Kendall's tau equals 0.25.
Figure 3. $P(S_n \ge 1)$ for $p \in \{0.01, 0.05, 0.25\}$ for an $n$-dimensional homogeneous $t_4$ copula with parameter such that pairwise Kendall's taus equal 0.1 (left) and 0.5 (right).

Share and Cite

MDPI and ACS Style

Hofert, M. On Return Probabilities of Adverse Events Under Dependence and Lessons to Learn for Decision-Making. Risks 2026, 14, 58. https://doi.org/10.3390/risks14030058


