1. Introduction
The dual risk model and its various extensions have been widely addressed by several authors and can be traced back to Cramér (1955, Section 5.13), Seal (1969, pp. 116–19), Takács (1967, pp. 152–54), and also to Gerber (1979, pp. 136–37), who called it the negative-claims model. More recently, this model has garnered increasing interest, as evidenced by works such as Avanzi et al. (2007), Afonso et al. (2013), Albrecher et al. (2022), and Bazyari (2024). In essence, the dual risk model describes a company's surplus, which is driven by regular expenses and occasional gains that occur at random moments. The term dual contrasts this model with the Cramér–Lundberg risk model commonly used in insurance, where companies receive premiums and make random claim payments. For those looking to gain a solid understanding of non-life insurance, we recommend several classic texts in the actuarial literature, including Cramér (1955), Seal (1969), Bühlmann (1970), Bowers et al. (1986), Asmussen and Albrecher (2010), Kaas et al. (2008), Klugman et al. (2019), Rolski et al. (2009), Michna (1988), and Dickson (2016).
The dual risk model finds applications across a range of industries, including pharmaceutical companies, geological exploration firms (for minerals and oil), real estate and brokerage firms, and research-oriented companies, where gains are random and occur at unpredictable times, while costs are fixed and certain. Another potential application of this model is in the context of lifetime annuities or pension funds, where expenses represent periodic payments and gains correspond to the actuarial value, upon the death of the insured or pensioner, of the remaining payment obligations. For a more detailed discussion of applications, we refer interested readers to Seal (1969), Mazza and Rullière (2004), Avanzi et al. (2007), Bayraktar and Egami (2008), Dimitrova et al. (2015), and references therein. Recently, the dual risk model has also been applied to assess the feasibility of participating in a mining pool from the perspective of an individual bitcoin miner; see Albrecher et al. (2022). As seen, this model has a broad range of applications.
Mathematically, the stochastic surplus process for the dual risk model at time t ≥ 0 is described by

U(t) = u − ct + S(t),   (1)

where u ≥ 0 is the initial reserve, c > 0 is the constant rate of expenses per unit time, paid continuously, and S(t) represents the aggregate gain amount up to time t. The process S(t) follows a compound Poisson process with Poisson parameter λ and is given by

S(t) = X_1 + X_2 + ⋯ + X_{N(t)},

where {X_i, i = 1, 2, …} is a sequence of independent and identically distributed positive random variables representing the individual gain amounts, with distribution function G, probability density function g, and finite mean μ. {N(t), t ≥ 0} denotes a Poisson process representing the number of gains up to time t. A key feature of this model is that it is a time-homogeneous Markov process: the surplus level at any given time is sufficient to determine, probabilistically, the surplus level at any later time. We also note that by reversing the signs in the second and third terms in (1), one obtains the insurance surplus process.
We also assume the income condition λμ > c, which implies that, on average, the gains exceed the expenses per unit time. As a consequence, the surplus process
tends to drift towards infinity with probability one, so that ruin is not certain to occur. We introduce a positive constant θ such that λμ = (1 + θ)c. Following the approach in Dickson et al. (1995), without loss of generality, we further assume that c = μ = 1.
A typical sample path for the surplus starts from an initial value u and then decreases at a constant rate c. After some time, there is a gain of amount X_1, followed by another period of decline at the same rate. Eventually, there is another gain of amount X_2, and this pattern continues. In continuous time, the process stops when ruin occurs, meaning when the surplus drops below zero. However, the time of ruin, denoted by T, is usually defined as

T = inf{t > 0 : U(t) = 0}   (with T = ∞ if U(t) > 0 for all t > 0),

since ruin is caused by the continuous consumption at rate c, which implies that ruin can only occur immediately after the surplus reaches zero. The ultimate ruin probability, denoted by ψ(u), has a well-known explicit expression for the model under consideration (see, e.g., Gerber (1979) or Afonso et al. (2013)):

ψ(u) = e^{−Ru},   u ≥ 0,

where R is the unique positive root of Lundberg's equation, that is,

λ ∫₀^∞ e^{−Rx} g(x) dx = λ − cR.
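As an aside, R can be computed numerically for any gain density. The following minimal Python sketch (ours, not the authors') solves Lundberg's equation by bisection and evaluates ψ(u) = e^{−Ru}; the Exponential(1) gain distribution and the value λ = 2 are illustrative assumptions only, and with c = 1 the root is then R = λ − 1.

```python
import math

# Minimal sketch: solve Lundberg's equation  lambda * E[exp(-R X)] = lambda - c R
# for the dual risk model.  Assumed values: c = 1, lambda = 2, Exponential(1) gains.
lam, c = 2.0, 1.0

def laplace_exp1(r):
    # E[exp(-r X)] for X ~ Exponential(1)
    return 1.0 / (1.0 + r)

def lundberg(r):
    return lam * laplace_exp1(r) - (lam - c * r)

# Simple bisection on a bracket (lo, hi) with lundberg(lo) < 0 < lundberg(hi).
lo, hi = 1e-9, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if lundberg(lo) * lundberg(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
R = 0.5 * (lo + hi)

def psi(u):
    # ultimate ruin probability in the dual model: psi(u) = exp(-R u)
    return math.exp(-R * u)

print(R, psi(5.0))   # here R should be close to lam/c - 1 = 1
```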
More appealing is to consider a finite time horizon from zero to t and calculate the corresponding ruin probability, given by

ψ(u, t) = P(T ≤ t | U(0) = u).

By definition, we have lim_{t→∞} ψ(u, t) = ψ(u). The distribution of T is defective; a proper density function for the time of ruin is obtained by dividing the corresponding defective density by the probability of ruin within the horizon, so that it is conditional on ruin having occurred.
In practice, businesses, insurers, and financial institutions operate over finite time horizons, such as a year, a decade, or other periods, and this perspective is often more relevant for practical decision-making and risk management: it focuses on short-to-medium-term risks, which are usually the most critical for planning and for taking appropriate preventive actions. Companies also reassess their positions periodically. Finite-time probabilities provide a snapshot of risk within a defined period, allowing frequent adjustments to policies, reserves, or investments, and ensuring preparedness for adverse scenarios in the near term rather than over-reliance on long-term assumptions. Moreover, many regulatory guidelines require the assessment of risk within specific time frames, such as solvency tests over a year; finite-time ruin probabilities align more closely with these requirements.
Ruin in the dual risk model has also attracted considerable attention, and several important contributions have been made in this area. Among others, we note the following works. Mazza and Rullière (2004) investigate ruin by establishing a link between the classical insurance risk model and the dual risk model through wave random motions. A dual Markov-modulated risk model is studied in Zhu and Yang (2008), where ruin probabilities are analyzed under the assumption that both the expense rate and the gain amount distribution are governed by an m-state Markov process. Dong and Wang (2008) examine the compounding assets dual model with constant interest rate, deriving integral and integro-differential equations for the ultimate ruin probability, as well as closed-form solutions for specific cases.
In recent years, Parisian ruin has gained increasing attention in the context of insurance risk models, where a company is allowed to operate under a negative surplus for a predetermined period, known as the Parisian delay; see Czarna and Palmowski (2011), Loeffen et al. (2013), Czarna and Palmowski (2014), Czarna and Renaud (2016), Czarna et al. (2017), Cheung and Zhu (2023), and the references therein. The latter also investigates cumulative Parisian ruin within a finite time horizon. The application of Parisian ruin to the dual risk model has been explored in several studies, including Cheung and Wong (2017) and Yang et al. (2020). In these models, when the surplus becomes negative, the company may still survive for a period due to the positive drift (or loading), and depending on the size of the deficit, it may quickly recover to a positive surplus. On one hand, this definition acknowledges the potential for profitability even after a temporary deficit. On the other hand, the classical definition of ruin is more conservative, offering stronger protection against insolvency by not allowing any tolerance for negative surplus.
The existing literature on calculating the finite-time ruin probability for the dual risk model is limited. In Yang and Sendova (2014), one can find an expression for the (defective) cumulative distribution function of the time of ruin T in terms of convolutions of the p.d.f. g; these authors do not provide any numerical values for the targeted quantity. Dimitrova et al. (2015) delve into the finite-time ruin probability within a reasonably generalized dual risk model by establishing a connection between this model and its corresponding insurance model. They derive expressions for this probability under various general assumptions and when capital gains follow a linear combination of exponential distributions or a hyperexponential distribution. These formulas are presented in terms of Appell polynomials. For arbitrarily distributed capital gains, the authors use the result that any distribution can be approximated by a mixed exponential distribution in the sense of weak convergence. However, for the specific risk model we study here, Dimitrova et al. (2015) show only two numerical results for ψ(u, t), where u and t are small, and resort to approximate numerical procedures. As these authors remarked, there are "Some limitations related to the numerical performance of these formulas (…), e.g., the need to evaluate high dimensional integrals when the intensity of the arrival process and/or the length of the finite-time horizon is too large". Thus, these are the only numerical values currently available in the existing literature. Moreover, to the best of our knowledge, there are no closed-form results for the targeted probability. This gap in the literature motivates our work, which focuses on the numerical approximation of the ruin probability over a finite horizon, under a general framework for gain amount distributions. As a result, we are able to generate graphs of ψ(u, t).
We employ two distinct approaches, detailed in the following section. The first approach involves adapting a methodology originally developed by Dickson and Gray (1984) and Cardoso and Egídio dos Reis (2002), which utilizes discrete-time Markov chains in the context of the classical insurance risk model. This method has been successfully applied to other contexts of this latter model; see Cardoso and Waters (2003, 2005) and Cardoso (2014). The second approach builds on the methodology proposed by De Vylder and Goovaerts (1988). Both methods share a key concept: the continuous time/amount plane is replaced by a rectangular grid, on which the continuous (time and amount) surplus process is approximated by a discrete process. This discrete process operates within a state space composed of equally spaced points along the amount axis. The probabilistic properties of this approximate discrete process are more straightforward to derive than those of the original continuous process. A significant advantage of this methodology is that the process can be scaled so that, within a single unit of time, the expenses (and thus the maximum decrease in the surplus) are normalized to one.
The algorithms presented in this work are intuitively appealing and conceptually straightforward. The proposed approaches appear to offer more efficient methods for approximating the distribution of the time-to-ruin random variable than the approach described in Dimitrova et al. (2015). We emphasize that, to the best of our knowledge, apart from the numerical values presented in that paper, no other figures have been reported for finite-time ruin probabilities in the dual risk model. Based on our experience, the proposed approximations exhibit strong accuracy, as demonstrated in the final section, where we present numerical results and accompanying graphs. It is well established that the methodology employed in this study yields reliable approximations in the context of the classical insurance risk model. Naturally, the precision of these approximations depends on the discretization mesh size, which arises from the transition from the continuous model to a discrete framework. As we show, in the long run, our approximations closely align with the exact values of the ultimate ruin probability.
2. The Algorithms
We now derive two procedures for calculating ruin probabilities within a finite time under the dual risk model. These methods are based on the approaches presented in Cardoso and Egídio dos Reis (2002) and De Vylder and Goovaerts (1988) within the context of the classical insurance model. We begin by establishing the common foundation, which leads to the discretization of the original risk surplus process.
The sample paths of the surplus process can be represented in a two-dimensional space, where the x-axis denotes time and the y-axis represents the monetary amount. We begin by dividing the time axis into equal intervals of length h, giving the time points 0, h, 2h, …, where h is a suitably small positive number. For convenience, we assume that t/h is a positive integer K, so that the horizon t corresponds to K time steps. Next, we divide the amount axis into intervals of the same length, [0, h), [h, 2h), and so forth. The objective is to approximate the continuous surplus process, which varies over time and amount, by a discrete process. This discrete process takes on values in the set {0, h, 2h, …} at the specified time points, so its possible values are equally spaced. Conceptually, since c = 1, the value (j − 1)h represents the level the surplus process would reach within a time interval of length h if it started from jh, assuming no gains occur during that interval.
This new process is clearly a discrete-time Markov chain, defined at the time points 0, h, 2h, … and taking values in the state space {0, h, 2h, …}. It approximates the process U, functioning similarly to a surplus process. The chain starts at state u, which is conveniently set equal to a grid point, that is, u coincides with one of the values jh for some positive integer j. Given the flexibility in choosing the value of h, this assumption is not overly restrictive, as stated in Cardoso and Waters (2005). The process maintains the same rate of expenses as its continuous-time counterpart and terminates at the absorbing state 0, representing ruin.
Figure 1 illustrates the possible positive values that the chain can assume at each time point up until time t. In essence, our method replaces the continuous-time surplus process by a discrete-time process constrained to take values within a countable set.
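As a small, self-contained illustration of this setup (our own sketch, with hypothetical values of u, t, and h, and assuming c = 1 so that the amount step equals the time step), the grid can be indexed by integers as follows.

```python
# Map the continuous quantities (u, t, h) to integer grid indices (illustrative values only).
h = 0.05            # mesh size (hypothetical choice)
u, t = 5.0, 100.0   # initial surplus and time horizon (hypothetical choice)

J = round(u / h)    # the chain starts in state J*h = u
K = round(t / h)    # number of time steps covering the horizon [0, t]

# Amount-axis states: 0 (the absorbing ruin state), h, 2h, ...
# Only levels up to K*h are relevant when assessing ruin by time t.
states = [j * h for j in range(K + 1)]
print(J, K, len(states))
```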
To fully define this Markov chain, we must specify its transition probabilities. Let us denote by p_{ij} the one-step transition probabilities and by p^(n)_{ij} the n-step transition probabilities, that is, the probability that the chain moves from state ih to state jh in n steps, for i, j = 0, 1, 2, … and n = 1, 2, …. Ideally, we would derive these probabilities from the distribution of the aggregate gains over any interval of length h, that is, from the distribution of S(h). However, in practice, this is often infeasible. Instead, we construct a discrete approximation to that distribution using the Panjer (1981) recursion formula. First, if the gain amount distribution is not already arithmetic, we discretize it using, for example, the mean-preserving method of De Vylder and Goovaerts (1988), which is known to yield accurate results. This procedure yields a distribution whose probability masses are conveniently concentrated at the grid points 0, h, 2h, …, with probabilities g_0, g_1, g_2, …, computed by matching the mean of the original distribution over the grid intervals; see Equation (2).
For further details on discretization methods, see Panjer and Lutek (1983), Klugman et al. (2019), and Dickson (2016).
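To illustrate, the sketch below implements one standard mean-preserving discretization (first-order local moment matching, as described in the references just cited), applied to Exponential(1) gains; the function names and the mesh value are ours, and the exact form of Equation (2) in this paper may differ from this particular formulation.

```python
import math

def limited_mean_exp1(a):
    # E[min(X, a)] for X ~ Exponential(1)
    return 1.0 - math.exp(-a)

def discretize_mean_preserving(limited_mean, h, n_points):
    """Mass g[k] at amount k*h, preserving the mean of the original distribution."""
    g = [0.0] * (n_points + 1)
    g[0] = 1.0 - limited_mean(h) / h
    for k in range(1, n_points):
        g[k] = (2.0 * limited_mean(k * h)
                - limited_mean((k - 1) * h)
                - limited_mean((k + 1) * h)) / h
    g[n_points] = 1.0 - sum(g[:n_points])   # dump the small remaining tail mass here
    return g

h = 0.05
g = discretize_mean_preserving(limited_mean_exp1, h, n_points=2000)
print(g[:3], sum(k * h * p for k, p in enumerate(g)))   # the mean should be close to 1
```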
Consequently, as mentioned, we obtain a discrete counterpart of S(h), whose probability mass function at each grid point kh, which we denote by f_h(kh), can be computed using the version of Panjer's recursion for the compound Poisson case given in Equation (3), for k = 1, 2, …, with starting value f_h(0) = e^{−λh(1−g_0)}. Hence, we set the one-step transition probabilities as in Equation (4).
These definitions can be intuitively understood. A point to note is that our chain is skip-free downwards (the chain can only jump one state down at a time but can have upward jumps of any size), meaning that if the chain is in state jh at time nh, then at time (n + 1)h it will be in state (j − 1)h if no gains occur during that period of length h, but in no lower state; this event has probability f_h(0). Further, the chain will remain in state jh at time (n + 1)h with probability f_h(h), that is, if the discretized aggregate gains during that period are greater than zero but less than or equal to h. By following this reasoning, we arrive at (4). For a visual representation, see Figure 2.
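The following sketch (our notation, not the authors' code) computes the step distribution f_h(kh) with Panjer's recursion for the compound Poisson case and encodes the skip-free-downwards transition rule just described; the discretized severity pmf g can be produced with the discretization sketch above.

```python
import math

def panjer_compound_poisson(lam_h, g, n_points):
    """pmf f[k] (mass at amount k*h) of the aggregate discretized gains over one
    period of length h, for a Poisson(lam_h) number of gains with severity pmf g."""
    f = [0.0] * (n_points + 1)
    f[0] = math.exp(-lam_h * (1.0 - g[0]))          # starting value of the recursion
    for k in range(1, n_points + 1):
        s = 0.0
        for j in range(1, min(k, len(g) - 1) + 1):
            s += j * g[j] * f[k - j]
        f[k] = lam_h * s / k
    return f

def one_step_probs(f, j):
    """One-step transition probabilities out of state j*h (j >= 1): with aggregate
    discretized gain k*h the chain moves to state (j - 1 + k)*h, so k = 0 is the
    only possible downward move (skip-free downwards); state 0 itself is absorbing."""
    return {j - 1 + k: fk for k, fk in enumerate(f)}
```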
Regarding the n-step transition probabilities, we focus on the positive values that the chain may assume up until time t. Starting from u, the chain needs a time period of length at least u to reach 0, as the process can only decrease by one position at each time step h. This implies that, starting from u, the chain cannot take any value below the diagonal formed by the points (nh, u − nh), for n = 0, 1, …, u/h. For the same reason, and considering our objective of determining the ruin probability within time t, if the chain reaches a value above the diagonal formed by the points (nh, t − nh), for n = 0, 1, …, K, then ruin cannot occur before time t. Consequently, the relevant positive values that the chain may assume to achieve our goal are those depicted in Figure 3.
Applying the Chapman–Kolmogorov equations, we derive recursive formulas for the n-step transition probabilities into the ruin state, given in Equation (5), for n = 2, …, K and for the relevant states identified above. Note that, on one hand, if our Markov chain starts from jh, it will reach a minimum value of (j − n)h at time nh, provided no gains occur during this period, as long as n ≤ j; otherwise, the minimum value is zero. On the other hand, at the nth transition we need not consider values higher than (K − n)h, because from such values it is certain that ruin cannot occur before time t. In particular, we have p^(n)_{00} = 1 for every n, since zero is an absorbing state: once zero is reached, the chain remains there, and this state can only be entered from state h, namely when no discretized gains occur during a period of length h, an event with probability f_h(0). In particular, the ruin state can be reached at the first transition only if the chain starts from h. Thus, we obtain the desired approximation,

ψ(u, t) ≈ p^(K)_{u/h, 0}.

Naturally, the smaller the value of h, the more accurate our approximation is likely to be.
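In code, the first algorithm amounts to iterating the chain forward over the relevant states, in the spirit of recursion (5); the sketch below is an illustrative implementation under our stated assumptions (c = 1, with u/h and t/h integers), not the authors' code, with the step pmf f taken, for example, from the Panjer sketch above.

```python
def finite_time_ruin_prob(u, t, h, f):
    """Approximate P(ruin by time t | initial surplus u) for the discretized chain.
    f[k] is the pmf of the aggregate discretized gains over one period of length h."""
    J, K = round(u / h), round(t / h)
    if J == 0:
        return 1.0                     # the chain starts in the ruin state
    if J > K:
        return 0.0                     # the surplus cannot reach 0 within time t
    prob = [0.0] * (K + 1)             # prob[j] = P(chain currently in state j*h)
    prob[J] = 1.0
    ruined = 0.0
    for _ in range(K):
        new = [0.0] * (K + 1)
        for j in range(1, K + 1):
            pj = prob[j]
            if pj == 0.0:
                continue
            for k in range(0, min(len(f) - 1, K - j + 1) + 1):
                target = j - 1 + k
                if target == 0:
                    ruined += pj * f[k]         # absorbed at 0: ruin has occurred
                else:
                    new[target] += pj * f[k]    # levels above K*h are irrelevant for ruin by t
        prob = new
    return ruined
```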
We now adapt the numerical procedure developed in De Vylder and Goovaerts (1988). The basic idea is to condition on the value that the discrete surplus process assumes after the first time period of length h. Starting from u, the chain will be in state u − h + kh at time h, with probability f_h(kh), for k = 0, 1, 2, …. From each of these states, we then consider the probability that the process reaches state 0 within the remaining time horizon, which is a period of length t − h. Using the notation of the method described above, we arrive at the recursive formula in Equation (6), with the convention that ruin is certain from state 0. Note that the upper limit in the summation reflects the fact that if, at time h, the process takes a value above t − h, then ruin cannot occur within the fixed time period t. As can be seen, the calculations are performed recursively, and the values required to obtain the approximation are those associated with the grid points represented in Figure 4. It is noteworthy that this is a mirror image of Figure 3, indicating that both approaches are similar and yield the same numerical approximations.
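The second approach can be sketched in the same style, conditioning on the state occupied after the first period, in the spirit of recursion (6); as before, f is the step pmf from the Panjer sketch, and the same assumptions apply.

```python
def finite_time_ruin_prob_dvg(u, t, h, f):
    """De Vylder-Goovaerts style recursion: psi_n[j] approximates the probability of
    ruin within n steps starting from state j*h, built by conditioning on the state
    reached after one period of length h."""
    J, K = round(u / h), round(t / h)
    if J > K:
        return 0.0
    psi_prev = [1.0] + [0.0] * K          # zero steps left: only state 0 counts as ruined
    for n in range(1, K + 1):
        psi_next = [1.0] + [0.0] * K      # state 0 remains ruined
        for j in range(1, n + 1):         # from states above n*h, ruin within n steps is impossible
            s = 0.0
            for k in range(0, min(len(f) - 1, n - j) + 1):
                s += f[k] * psi_prev[j - 1 + k]   # land in state (j-1+k)*h after one period
            psi_next[j] = s
        psi_prev = psi_next
    return psi_prev[J]
```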
3. Results and Discussion
For our numerical illustrations, we selected three distributions for the individual gain amounts, each with a mean of one: Exponential(1), Gamma(2,2), and Pareto(2,1). The corresponding probability density functions are, for x > 0,

g(x) = e^{−x},   g(x) = 4x e^{−2x},   and   g(x) = 2(1 + x)^{−3},

respectively. We remind the reader of the basic normalization assumptions stated in Section 1: the expense rate per unit time, c, and the mean of the individual gain amounts, μ, are both set to one. For the Exponential(1) distribution, the discretized version of its p.d.f. is obtained from (2) in closed form. These values are then inserted into Panjer's formula (3) to compute the one-step transition probabilities, as described in Equation (4). Finally, using these starting values and applying the recursive relations (5) or (6), we obtain the desired approximation for ψ(u, t).
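For illustration, the sketches above can be chained end to end for Exponential(1) gains as follows; the parameter values (λ = 2, a coarse mesh h = 0.1 chosen only to keep this pure-Python run fast, and the particular u and t) are assumptions for this example and are not the values behind the tables reported below.

```python
h = 0.1                       # coarse mesh, for the speed of this illustration only
lam = 2.0                     # assumed gain arrival rate
u, t = 5.0, 20.0              # assumed initial surplus and time horizon

K = round(t / h)
g = discretize_mean_preserving(limited_mean_exp1, h, n_points=K)
f = panjer_compound_poisson(lam * h, g, n_points=K)

print(finite_time_ruin_prob(u, t, h, f))       # first algorithm
print(finite_time_ruin_prob_dvg(u, t, h, f))   # second algorithm; should agree with the line above
```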
For all calculations, we fixed the mesh size h at a value that we found suitable for our purposes; using a smaller h did not significantly improve the accuracy of the results. We return to this topic later.
In Table 1, Table 2 and Table 3, we present approximate values of finite-time ruin probabilities, rounded to four decimal places, for a fixed value of λ and various combinations of time t and initial surplus u. As expected, some values are zero because ruin is impossible whenever u exceeds t. The values marked with an asterisk (*) are not zero but are exceedingly small. As expected, the tables show that, for a fixed t, the approximations for ψ(u, t) decrease as u increases, while, for a fixed u, they increase with t. From a practical perspective, company managers are typically interested in maintaining small finite-time ruin probabilities, for example in the range of 1–5%. For instance, in the case of Exponential(1) gains, Table 1 indicates that for the shorter horizons considered the required initial surplus is approximately 10, whereas for the longer horizons the corresponding value of u is around 20. Similarly, from Table 2, for a fixed time horizon, an admissible initial surplus is about 10. Among the examples considered, the Pareto(2,1) gain distribution yields the highest probabilities.
Both methods developed in this paper were used to obtain these approximations, and they yielded identical numerical results. It is also noteworthy that the computational times of the two algorithms were not significantly different. In Table 4, we present the computing times (in seconds) required to calculate some of the values shown in Table 1.
The computing times depend on the values of u, t, and h, that is, on the values of u/h and K. As expected, for a fixed u, the computing time increases with t, since more states are required in the calculations. Conversely, for a fixed t, the computing time decreases as u increases, due to a reduction in the number of relevant states; this behavior is consistent with the trends observed in Figure 3. Despite our efforts to ensure consistency, these times should be considered indicative only, as they are influenced by several external factors such as hardware specifications, system load, and background processes. The computations were performed on an older laptop equipped with an Intel(R) Core(TM) i7-8550U CPU at 1.80 GHz and 8 GB of RAM. It is worth noting that, for a given u and t, our first algorithm computes all approximate values of ψ(u, s) for s ≤ t (on the time grid) in a single run, thereby saving time. Similarly, using recursion (6), we can compute approximations for a fixed t and for all values of u less than or equal to a specified initial reserve.
Table 5 provides approximations of ψ(u, t) for Exponential(1) gains and different mesh sizes h.
We observe that the approximations for the successive mesh sizes are nearly identical, with only minimal differences between them. When rounding to four decimal places, the results for h equal to 0.01 and 0.005 are identical. This pattern was consistently observed across various tests, which justified our choice of mesh size for the remainder of the computations.
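This kind of sensitivity check is easy to reproduce with the sketches above (for assumed parameter values, not those behind Table 5), for instance by rerunning the second recursion over a few mesh sizes:

```python
lam, u, t = 2.0, 5.0, 20.0        # illustrative parameter choices
for h in (0.2, 0.1, 0.05):
    K = round(t / h)
    g = discretize_mean_preserving(limited_mean_exp1, h, n_points=K)
    f = panjer_compound_poisson(lam * h, g, n_points=K)
    print(h, finite_time_ruin_prob_dvg(u, t, h, f))
# the printed approximations typically decrease slightly as h decreases (see the discussion below)
```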
Figure 5 displays (approximate) graphs of the proper density function of the time of ruin for the individual gain distributions Exponential(1) (top left), Gamma(2,2) (top right), and Pareto(2,1) (bottom), for a fixed value of λ, various initial surpluses, and a time horizon from 0 to 100.
We observe that these functions are unimodal and positively skewed, with the mode increasing for larger values of u. Additionally, in Figure 6, we provide graphs of the approximations for ψ(u, t), holding t constant while varying the initial surplus, specifically for Pareto(2,1) gains. As expected, the finite-time ruin probability decreases as u increases and converges to zero as u tends to t.
Now, we assess the accuracy of our approximations. First, we compare them, in the long run, with the exact ultimate ruin probability, considering the gain distributions mentioned earlier. For all cases shown in Figure 7, we selected the same initial surplus.
Given our parametrization, where λ = 1 + θ, an increase in θ leads to a higher number of individual gains. As a result, the surplus tends to reach higher values, making subsequent ruin less likely. Therefore, ψ(u, t) converges more quickly to ψ(u) as θ increases, which is why we chose the values one and two for that parameter (higher values could also have been chosen). Figure 7 shows that the approximate values appear to converge to the ultimate ruin probability ψ(u), indicating the good accuracy of our algorithms. Interestingly, our approximations are slightly higher than ψ(u). For Exponential(1) gains, in Table 6, we compare the approximations of ψ(u, t) for different mesh sizes h with the exact value ψ(u), considering both cases θ = 1 and θ = 2.
These results illustrate a consistent pattern observed across multiple computations: as the mesh size h decreases, the approximated values also decrease, approaching the continuous-time value from above. This behavior is also evident in Table 5. One might initially expect the opposite trend, given that we are approximating a continuous-time surplus process with a discrete-time model. However, due to the structure of our algorithms, ruin is recorded when the discrete-time process reaches zero, whereas in the continuous-time model, ruin occurs immediately upon reaching zero. This subtle difference implies that ruin may be detected slightly earlier in the discrete model, leading to higher estimates of the ruin probability. Another contributing factor could be the discretization of the gain amount distribution, which may result in approximations that slightly overestimate ψ(u, t), as observed by Dickson and Waters (1991) in the context of ruin probabilities in the insurance surplus process. Ideally, one would aim to provide an analytical proof of convergence of these estimates toward the exact values, along with explicit error bounds. However, this task appears highly challenging due to the different layers of approximation involved, namely, the discretization of the gain amount distribution, the use of Panjer's recursion, and the adoption of a discrete-time framework. This issue is not addressed in the original papers on which our algorithms are based, nor, to our knowledge, in many other studies that develop similar recursive methods. In those works, the accuracy of the numerical results is typically assessed by comparison with exact values available in certain special cases. For instance, Dickson and Waters (1991), who laid some of the foundations for constructing such numerical algorithms in the setting of the classical insurance risk model, note that, intuitively, the resulting values should be good approximations to the corresponding continuous-time probabilities if h is small, so that there are frequent checks for ruin in the discrete case. Moreover, quoting Dickson (2016), "Intuitively, if we approximate a continuous time ruin probability by a discrete time one, we would expect the approximation to be good if the interval between the time points at which we 'check' the surplus is small." Nevertheless, in our methodology, approximating the Poisson gains by aggregating all arrivals within a time step of length h into a single jump defers any within-step upward jumps to the next grid time. This suppresses short clusters of gains, increases the chance of longer periods without gains, and thus raises the ruin probabilities. Moreover, concerning the severity discretization, the mean-preserving method, by reallocating mass, often produces a gain-size distribution that is smaller in convex order than the true one (less variable at the same mean), making large gain jumps less likely and thereby increasing the ruin probability.
We can compare the quality of our approximations with those provided in Dimitrova et al. (2015). In their Examples 3.1 and 3.2, the authors consider Pareto(1.2, 0.2) and Weibull(0.6, 0.66464) distributions for the capital gains. In both instances, they approximate these distributions by hyperexponential distributions. For the parameter values considered there, their approximate values for the survival probability are 0.335042 and 0.414054, respectively. To verify the accuracy of their results, the authors conducted Monte Carlo simulations and constructed 95% confidence intervals: (0.332475, 0.334341) and (0.413077, 0.414989). Rescaling the model to our assumptions, we obtain the approximations 0.333016 and 0.414143. As observed, both of our values fall within these confidence intervals, whereas only the second approximation of Dimitrova et al. (2015) does.
Our algorithms are particularly appealing due to their intuitive construction, ease of implementation, and effectiveness for large values of u and t. Moreover, the results appear to be accurate. Another strong feature of our algorithms is their numerical stability. De Vylder and Goovaerts (1988), in their Section 5, demonstrated that their algorithm was numerically stable, as also emphasized by Dickson and Waters (1991). Consequently, our second method inherits this property, since both recursions are of the same type. The same conclusion applies to our first method: Cardoso and Waters (2003), in their Section 3, established the numerical stability of a recursion of the same form as ours (see Formula (5)). More broadly, numerical aspects of recursive calculations have been discussed in detail by Panjer and Willmot (1986) and Panjer and Wang (1993). An advantage of the first method described in Section 2 is that, in calculating the approximate value of ψ(u, t), we also recursively compute the approximations of ψ(u, nh), for n = 1, …, K. This allows us to easily generate an approximate graph of the distribution of the time of ruin. On the other hand, the second algorithm presented in Section 2 is better suited for generating graphs of ψ(u, t) with a fixed t and a varying initial surplus, as illustrated in Figure 6. This methodology also shows potential applications in the assessment of capital requirements and the solvency of the company, as demanded by institutional supervisors, by considering the finite-time ruin probability as a risk measure. In this way, managers can, given the current surplus, anticipate the likelihood of ruin in the short or medium term and determine the level of capital required to continue operations, making the necessary adjustments accordingly. Another risk management tool that arises from our methodology, particularly when using the second algorithm presented in this manuscript, is the analysis of the initial surpluses associated with low ruin probabilities over a fixed time horizon. This is illustrated graphically in Figure 6, which provides a glimpse of the appropriate values of u when targeting ruin probabilities between 1% and 5%: for each of the time horizons shown there, the figure indicates the approximate range of initial surpluses attaining such levels. Moreover, such analyses can inform decisions regarding capital injections, a topic studied in Dickson and Waters (2004). Still within the insurance risk model, the expected present value of total operating costs until ruin, within a finite time horizon, provides another valuable tool for strengthening a firm's management policy, as discussed in Xie and Zhang (2025). As directions for future work, it would be of interest to extend the ideas in these two papers by applying the concepts to the dual risk model, as well as to consider Parisian ruin and to develop numerical algorithms for the corresponding quantities of interest.