We now apply our mathematical framework to an idealized representation of a multi-peril catastrophe model for a US nationwide industry portfolio. We begin by prescribing an appropriate frequency and severity, followed by an analysis of the resulting risk profile to set the context. We then analyze the decompositions of the AAL and total variance from the pre- and post-catXL perspectives, demonstrating the practical insights gained by applying our framework.
3.1. Multi-Peril Model Setup and Risk Profile
Our objective is to define a multi-peril frequency and severity model that will serve as a reasonable testbed for our mathematical framework from which we can derive practical insight. First we define the perils, their respective AALs and average annual frequencies, followed by our formulation of the severity distribution.
We use publicly available data from (AON 2021; Insurance Information Institute 2021) to formulate model parameters such that the model is representative of results from a catastrophe model run against a US nationwide insured portfolio. We assume that losses on a hypothetical US industry portfolio arise from hurricanes (HU), winter storms (WS), wildfires (WF), earthquakes (EQ) and severe convective storms (SCS). Using the US inflation-adjusted losses from 1980–2018 provided in (Insurance Information Institute 2021) as a guide, we assume a portfolio AAL of $29.5 Billion (hereafter denoted by B). We then use Exhibit 19 of (Insurance Information Institute 2021) as a guide to assign the following AALs to each peril: $12.5 B to HU, $2.5 B to WS, $2.5 B to WF, $2.0 B to EQ and $10.0 B to SCS. Historical frequency data of US loss events (1980–2018) provided in (Insurance Information Institute 2021) suggests that order 100 meteorological events occur on average, where the vast majority of events are severe convective storms. We therefore assign an average annual frequency of 100 to SCS, and we assign 2 to HU and 6 to WS (using the ratio of HU to WS loss events provided in (Insurance Information Institute 2021)). Insurance Information Institute (2021) also provides frequency data for geological events, and we assign a frequency of 5 to EQ. Finally, we assign a frequency of 70 to WF. This deviates from the data provided in (Insurance Information Institute 2021), but our choice reflects the fact that many small WF events occur that may not be accounted for in (Insurance Information Institute 2021) due to censoring, noting also that the annual frequency of WF events is of order 50,000 (CRS 2021). The perils, assigned AALs and average annual frequencies are provided in the first 3 columns of Table 1.
The severity distribution for each individual peril is assumed to be a gamma distribution. The gamma density is given by
\[
f(x) = \frac{x^{\alpha - 1} e^{-x/\theta}}{\Gamma(\alpha)\,\theta^{\alpha}}
\]
for \(x > 0\), and \(f(x) = 0\) for \(x \le 0\), where \(\alpha\) is the shape parameter, \(\theta\) is the scale parameter, and the gamma function \(\Gamma(\alpha) = \int_{0}^{\infty} t^{\alpha - 1} e^{-t}\,dt\) (Section 2.1.1). The mean event loss is \(\alpha\theta\) with variance \(\alpha\theta^{2}\). We note that the gamma distribution is used in actuarial-based reinsurance applications (Bahnemann 2015), and was therefore deemed sufficient for demonstration purposes. Alternative severity distributions may result in different conclusions, but we leave exploration of this issue to future work. To determine the shape and scale parameters for each distribution, we note that the mean event loss
is given by the AAL divided by the average annual frequency. We now impose an assumption on the coefficient of variation (CV), given by the event loss standard deviation divided by the event loss mean. For HU we simply set this equal to 5. Given a mean event loss of $6.25 B, this implies that a 1 standard deviation event gives a loss of order $30 B, and a 2 standard deviation event gives a loss of order $60 B, which is reasonable given historical losses (Insurance Information Institute 2021). Using the HU value of CV as a baseline, we then assign CV values for the other perils using sensible relativities. We assume EQ has a CV exactly twice that of HU. We assume both WS and SCS are less volatile than HU, with assigned CV values of 3 and 4 respectively, whereas WF is more volatile, so we have assigned a CV value of 8. With these (reasonable) assumptions in hand, we can readily solve for the shape and scale parameters, whose numerical values are provided in Columns 4 and 5 of Table 1.
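The parameter construction above is easily mechanized. The sketch below (function and variable names are our own, not the paper's) solves for the gamma shape and scale from each peril's AAL, frequency and CV, using the standard facts that the gamma mean is shape × scale and CV² = 1/shape; the EQ CV of 10 follows from the stated assumption of twice the HU value.

```python
# Sketch: solving gamma shape/scale from each peril's AAL, frequency and CV.
# Peril inputs are taken from the text; the helper naming is our own.
perils = {
    #       AAL ($B)  freq  CV
    "HU":  (12.5,     2,    5),
    "WS":  (2.5,      6,    3),
    "WF":  (2.5,      70,   8),
    "EQ":  (2.0,      5,    10),
    "SCS": (10.0,     100,  4),
}

def gamma_params(aal, freq, cv):
    """Return (shape, scale) so that mean = aal/freq and std/mean = cv.

    For a gamma distribution, mean = shape*scale and var = shape*scale**2,
    so CV**2 = 1/shape, giving shape = 1/CV**2 and scale = mean*CV**2.
    """
    mean = aal / freq
    return 1.0 / cv**2, mean * cv**2

for name, (aal, freq, cv) in perils.items():
    shape, scale = gamma_params(aal, freq, cv)
    print(f"{name}: shape={shape:.5f}, scale=${scale:.2f} B")
```

For HU, for example, this gives shape 1/25 = 0.04 and scale 6.25 × 25 = $156.25 B.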
Having chosen the individual peril parameters in Table 1, we can now formulate the multi-peril frequency and severity given certain assumptions. We assume that each peril has independent frequency and severity. We also assume independence across perils. We assume each individual peril has a Poisson frequency, which under our stated assumptions implies that the multi-peril frequency is also a Poisson distribution, with average annual frequency equal to the sum of the individual peril frequencies in Table 1 (Klugman et al. 2008). The multi-peril severity is taken to be a weighted mixture of the underlying gamma distributions for the different perils, with weights proportional to a given peril's frequency in Table 1 divided by the total multi-peril frequency. The multi-peril frequency and severity are assumed independent (as stated in Section 2.1.3).
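A minimal sketch of this multi-peril setup, assuming the parameter construction described above (the code is an illustrative sketch under our stated assumptions, not the authors' implementation):

```python
import numpy as np

# Per-peril (AAL in $B, Poisson frequency, CV), following the text.
perils = {"HU": (12.5, 2, 5), "WS": (2.5, 6, 3), "WF": (2.5, 70, 8),
          "EQ": (2.0, 5, 10), "SCS": (10.0, 100, 4)}

def simulate_year(rng):
    """Return the list of (peril, loss) events for one simulated year.

    Each peril draws an independent Poisson event count and independent
    gamma severities; concatenating the perils realizes the superposed
    Poisson frequency with the mixture severity described in the text.
    """
    events = []
    for name, (aal, freq, cv) in perils.items():
        n = rng.poisson(freq)
        shape, scale = 1.0 / cv**2, (aal / freq) * cv**2
        events += [(name, loss) for loss in rng.gamma(shape, scale, size=n)]
    return events

# The superposed frequency is Poisson with rate equal to the sum of the
# per-peril rates, consistent with Klugman et al. (2008).
total_rate = sum(freq for _, freq, _ in perils.values())
print(total_rate)  # 183
```

The mixture weight of each peril in the severity distribution is its rate divided by the total rate (e.g. 100/183 for SCS).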
To set the context, we now analyze the implied risk profile of our multi-peril frequency and severity model. In Figure 3, we display results from a 500,000 year timeline simulation (details of the simulation method are provided in Appendix D). The colored dots in the first 5 panels represent individual annual aggregate losses for WS, WF, SCS, HU and EQ. The horizontal axis (for each peril) is the log (base 10) of the annual aggregate loss in USD. The bars overlaid on the colored dots represent the minimum, maximum, median and interquartile ranges. The corresponding histograms are displayed on top of the individual colored dots. The final row, labeled ALL, displays results aggregated across all perils.
Figure 3 demonstrates a number of qualitative features of our modeling setup. First, it is clear that HU and EQ are the most volatile of the 5 perils, with potential loss years greater than $100 B. Figure 3 makes clear that tail risk is dominated by HU, followed by EQ. Both SCS and WF have a tighter range of annual aggregate losses, which is consistent with the high frequency nature of both perils. The results for ALL perils combined demonstrate a large variation in annual aggregate losses, from the tens of billions USD to over one trillion USD in the most extreme years.
Figure 4 provides another viewpoint on the risk profile. Panel A of Figure 4 displays the annual aggregate loss exceedance probability (1 minus the cumulative probability, labeled AEP), where the y-axis is displayed in terms of return period (1 over the exceedance probability) using a standard convention. Panel A makes clear the dominance of HU for tail risk in our multi-peril model. Panel A also demonstrates how SCS dominates short return periods (due to its relatively high AAL and high frequency), followed by WF and WS (the next two highest frequency perils). Panel A also shows that EQ is the second most important component of tail risk, albeit a distant second compared to HU in our model. Panel B of Figure 4 displays the exceedance probabilities associated with the individual peril model severities (EQ, HU, SCS, WF and WS) as well as the mixture model for the multi-peril severity (ALL). Panel B re-iterates the idea that HU has the highest loss potential in our model. Panels C and D display the multi-peril exceedance probability curves for the occurrence losses for a range of orders (obtained by integration using Equation (11), and labelled OEPM). Panel D is a zoomed version of Panel C, and displays an intriguing set of relationships between the occurrence losses.
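Empirically, the occurrence loss of order k is simply the k-th largest event loss in a year, taken as zero when fewer than k events occur; its exceedance curve can then be estimated by counting. A small sketch with hypothetical helper names (the paper obtains these curves analytically via Equation (11); this is only the simulation analogue):

```python
import numpy as np

def occurrence_losses(yearly_events, k_max):
    """For each simulated year (a list of event losses), return the k-th
    largest loss for k = 1..k_max, padding with zeros when fewer than k
    events occur. Row i, column k-1 holds the order-k occurrence loss."""
    out = np.zeros((len(yearly_events), k_max))
    for i, losses in enumerate(yearly_events):
        top = sorted(losses, reverse=True)[:k_max]
        out[i, :len(top)] = top
    return out

def oep(samples, threshold):
    """Empirical exceedance probability P(loss > threshold)."""
    return (np.asarray(samples) > threshold).mean()

# Toy event losses for three years (the last year has no events):
years = [[5.0, 1.0, 3.0], [2.0], []]
M = occurrence_losses(years, k_max=2)
# Order 1 per year: 5, 2, 0; order 2 per year: 3, 0, 0.
print(oep(M[:, 0], 2.5))  # one of three years exceeds 2.5
```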
3.2. Pre-catXL Loss Decomposition for the Multi-Peril Case
Here we discuss in detail the decompositions of the AAL (Equation (3)), the annual aggregate loss variance (Equations (4) and (5)), and the correlation structure across occurrence losses (Equation (17)) for the pre-catXL perspective, setting the stage for our analysis of the post-catXL case to follow.
We begin with the decomposition of the pre-catXL AAL displayed in Figure 5. In Figure 5, the inner (gray shaded) circle represents, proportional to arc length, the contributions from the different occurrence losses. The results are obtained by numerical integration using our analytical formulation in Equation (12) (where the normalizing factor in Equation (3) is obtained using the 500,000 year simulation procedure discussed in Appendix D). Note that in what follows we will address the accuracy of our numerical integration schemes versus brute force numerical simulation, but we leave that aside for the moment.
Figure 5 is interesting for the following reasons. Firstly, we see that the order 1 occurrence loss dominates the pre-catXL AAL with a roughly 60 percent contribution. Secondly, non-trivial contributions are made by the next few orders, but very little contribution is made by the highest orders. We have also used our 500,000 year simulation to further decompose the orders into the contributions from the individual perils (Appendix D makes clear how the peril contributions are quantified). This second level of decomposition sheds even further light on the nature of the results. Figure 5 makes clear that HU dominates the order 1 occurrence loss, but there are non-trivial contributions from SCS, EQ, WS and WF. The next order shows the diminishing importance of HU; in fact SCS dominates, with non-trivial contributions from HU and WF, followed by WS. For higher orders, it is clear that SCS, due to its high frequency nature and large AAL (Table 1), dominates.
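The decomposition underlying Figure 5 rests on the identity that the annual aggregate loss is the sum of the per-year occurrence losses, so the AAL splits exactly into the mean occurrence losses (the content of Equation (3)). A toy numerical check of this identity, using arbitrary Poisson/gamma parameters rather than the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy check of the AAL decomposition: the annual aggregate loss is the
# sum of the per-year order statistics, so AAL = sum_k E[order-k loss]
# holds exactly, sample by sample.
n_years = 10_000
counts = rng.poisson(5.0, size=n_years)
k_max = counts.max()
M = np.zeros((n_years, k_max))
for i, n in enumerate(counts):
    M[i, :n] = np.sort(rng.gamma(0.5, 2.0, size=n))[::-1]

agg = M.sum(axis=1)                      # annual aggregate loss per year
aal = agg.mean()
decomposed = M.mean(axis=0).sum()        # sum of mean occurrence losses
assert np.isclose(aal, decomposed)
```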
Table 2 displays estimates of convergence errors (in percentage terms) for the mean losses associated with all the orders displayed in Figure 5. The errors in Table 2 are computed using Equation (18) (where the standard deviation is computed via numerical integration using the square root of Equation (14)). The implications of Table 2 are perhaps self-evident, but our results indicate that catastrophe modeling systems based on too few years of simulation can generate unacceptably large errors (a few percent) in fundamentally important metrics like the mean annual loss associated with the leading orders. Our results suggest that a substantially larger number of simulation years should be used to reduce errors in the various orders to below 1 percent. Our framework enables one to systematically investigate convergence errors (although we do not do so here, to limit scope). Finally, we note that percentage errors for higher order losses are lower than for lower orders. This is not surprising, as higher order losses have an enhanced likelihood of being assigned zero value (and therefore vary less than the annual maximum, for example).
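The convergence behavior discussed above follows the usual scaling of a simulated mean, whose percentage standard error is 100σ/(μ√n). This is the spirit of Equation (18), though the paper's exact form may differ; the helper names below are our own.

```python
import math

def pct_standard_error(sigma, mu, n_years):
    """Percentage standard error of a simulated mean: 100*sigma/(mu*sqrt(n))."""
    return 100.0 * sigma / (mu * math.sqrt(n_years))

def years_needed(sigma, mu, target_pct):
    """Smallest simulation length with percentage standard error below target."""
    return math.ceil((100.0 * sigma / (mu * target_pct)) ** 2)

# Hypothetical illustration: a metric whose standard deviation is 3x its mean
print(pct_standard_error(3.0, 1.0, 100_000))  # ~0.95 percent at 100,000 years
print(years_needed(3.0, 1.0, 1.0))            # 90000 years for < 1 percent
```

The quadratic dependence on σ/μ explains why volatile low-order occurrence losses need far longer simulations than higher orders.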
The mean loss results presented in Figure 5 (the inner grayscale circle segments) are obtained using Equations (3) and (12). For completeness, we have recorded the accuracy estimates associated with our numerical integration. We have used standard functions in Mathematica to perform the numerical integrals, with no explicit attempt to optimize the accuracy; Mathematica provides error estimates. Results for the various orders are presented in Table 3 (in percentage terms). Table 3 shows that the percentage errors are well below 1 percent. Given the convergence errors presented in Table 2, we can loosely infer that our numerical integrals are as accurate as results from simulations of a very large number of years. We are therefore confident that our numerical integrations are highly accurate.
Figure 6 displays the decomposition of the pre-catXL annual aggregate loss variance using Equations (4), (5), (12), (14) and (17). We again use the 500,000 year simulation to estimate the total variance. The leading term is the marginal contribution from the order 1 occurrence loss, followed by progressively smaller contributions from the 2/1 interaction term between the order 2 and order 1 occurrence losses, the marginal order 2 term, and the 3/1 interaction term. In contrast to Figure 5 for the AAL decomposition, the total variance structure is more strongly dominated by the order 1 occurrence loss (well over 75% of the total variance). Figure 6 also demonstrates the dominance of the order 1 occurrence loss through the appearance of the 2/1 and 3/1 interaction terms (as the second and fourth most important terms). The appearance of the interaction terms also points to the importance of the correlation structure across occurrence losses.
Analogous to Table 2, we have estimated the standard errors of the marginal and interaction terms contributing to the total variance, displayed in Table 4. For example, for the marginal term in row 1 of Table 4, the percentage error for the given simulation length is approximately 1.41 percent. The standard errors for the marginal contributions are computed using Equation (19). For the interaction terms, we use Equations (20) and (21) to estimate the errors in the standard deviations and Pearson correlations. For both the marginal and interaction contributions, we have added up errors from the individual terms in the required integrals using a standard method (University of Toronto 2021), under the assumption that errors are independent (which they generally are not). Furthermore, we note that our error estimate for the Pearson correlation is itself approximate (due to non-normality). Our interest is to get a feel for the errors, and hence these approximations were deemed acceptable. Table 4 indicates that a very large number of simulations is required to obtain percentage errors below 1 percent (for the leading terms that contribute to the total variance). Comparing Table 3 and Table 4 indicates that resolving the variance structure requires larger numbers of simulations than for the AAL.
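The marginal and interaction structure discussed above can be checked numerically: the total variance equals the sum of the marginal variances of the occurrence losses plus twice their pairwise covariances. A toy sketch of this identity (arbitrary parameters, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy check of the total variance decomposition: Var(Y) = sum of marginal
# variances plus twice the sum of pairwise covariances (the interactions).
n_years = 20_000
counts = rng.poisson(4.0, size=n_years)
k_max = counts.max()
M = np.zeros((n_years, k_max))
for i, n in enumerate(counts):
    M[i, :n] = np.sort(rng.gamma(0.7, 1.5, size=n))[::-1]

Y = M.sum(axis=1)                     # annual aggregate loss
C = np.cov(M, rowvar=False)           # k_max x k_max covariance matrix
marginal = np.trace(C)                # sum of marginal variances
interaction = C.sum() - marginal      # twice the sum of Cov terms, j < k
assert np.isclose(np.var(Y, ddof=1), marginal + interaction)
```

In the paper's results the marginal order 1 term carries most of this sum, with the interaction terms next in importance.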
Table 5 displays our estimates of numerical integration errors associated with the various marginal and interaction terms. Similar to Table 3, we have used output from Mathematica to estimate numerical integration errors of the various terms involved in the computations, and used a standard method for adding up errors (University of Toronto 2021), again assuming independence of errors. Table 5 again gives us a feel for the order of magnitude of errors arising from numerical integration. For example, Table 5 shows the percentage error associated with the leading marginal term is around 0.1 percent, and 0.29 percent for the 2/1 interaction term. Comparison of Table 5 with Table 4 shows that the numerical integration process has accuracy similar to running a very long timeline simulation.
Figure 7 Panel A displays the correlation structure in the pre-catXL occurrence losses. The results in Figure 7 have been computed using Equation (17) via numerical integration. Figure 7 Panel A reveals a very interesting correlation structure. First, we note that higher order occurrence loss interactions have much higher correlation than lower order interactions. For example, the correlation between the two lowest orders is 0.33 (as seen in Panel A), whereas the correlation between two of the highest orders is approximately 0.95. Our explanation is that the higher order losses are more likely to be simultaneously small, driving up correlation. Lower order losses can take on a large range of values, and this added variation leads to lower correlation. The bottom 4 panels of Figure 7 display convergence error estimates, in percentage terms, derived using our approximate standard error Equation (21). Our results indicate that a large number of simulation years is required to achieve accuracy of better than 1 percent.
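These convergence estimates can be illustrated with the standard large-sample approximation for the standard error of a Pearson correlation, se(r) ≈ (1 − r²)/√n; the paper's Equation (21) may differ in detail, and as noted above the approximation assumes normality that does not strictly hold here.

```python
import math

def corr_pct_error(r, n_years):
    """Approximate standard error of a sample Pearson correlation,
    se(r) ~ (1 - r**2)/sqrt(n), expressed as a percentage of r.
    Large-sample normal-theory approximation only."""
    return 100.0 * (1.0 - r**2) / (abs(r) * math.sqrt(n_years))

# Resolving a weak correlation of 0.33 with a 500,000 year simulation:
print(corr_pct_error(0.33, 500_000))   # ~0.38 percent
# A strong correlation of 0.95 converges much faster:
print(corr_pct_error(0.95, 500_000))   # ~0.015 percent
```

Note that weaker correlations (the low order pairs) are the slowest to converge, consistent with the error panels of Figure 7.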
The following is a list of key results and conclusions derived from our analysis of the pre-catXL occurrence loss decompositions presented above:
The order 1 occurrence loss makes up roughly 60 percent of the total AAL, with non-trivial contributions from the next several occurrence losses. Using a 500,000 year simulation, we have further decomposed each occurrence loss contribution into the individual peril contributions. At lower orders, HU is dominant, but all other perils (SCS, EQ, WS and WF) make significant contributions. AAL contributions from higher order occurrence losses are dominated by SCS (with its high frequency and significant AAL). Our calculations demonstrate that a large number of simulations is required to achieve less than 1 percent errors in the contributions from the various occurrence loss orders, and we find that our numerical integration process is highly accurate, corresponding to a very large number of simulation years.
The total variance structure is dominated by the marginal contribution from the order 1 occurrence loss, followed by significant contributions from the 2/1 interaction term and the next marginal and interaction terms. Relative to the AAL decomposition, the order 1 occurrence loss is even more dominant, with an over 75 percent contribution to the total variance. Our calculations reveal that a large number of simulations is required to achieve accuracy of 1 percent or better, and our integration process was found to be highly accurate.
Our analysis of the pre-catXL correlation structure across occurrence losses reveals an interesting pattern, with lower order occurrence loss pairs being less correlated than higher order pairs. We find that a large number of simulation years is required to achieve errors of less than 1 percent in the correlation structure.
We have now set the stage for our analysis of the post-catXL perspective, where we will investigate how the characteristics of the decomposition change as the terms of the reinsurance contract change. Our analysis will provide an understanding of which aspects of our multi-peril model are important for the various types of catXL layers covered by insurers and reinsurers.
3.3. Post-catXL Loss Decomposition for the Multi-Peril Case
Figure 8 displays post-catXL AAL results for 4 layers. In each case, we specify the attachment and exhaustion points using loss thresholds obtained from the occurrence loss distributions in Figure 4 Panel C (a common approach in practice). For example, in Figure 8 Panel A, the attachment point is the loss threshold associated with return period 1, and the exhaustion point is associated with return period 2. Typically, reinsurers will provide coverage for layers corresponding to higher return periods (Panels C and D), whereas lower layers are retained by insurers (Panels A and B).
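The layer construction above can be expressed compactly: the catXL operator pays the part of a loss between the attachment and exhaustion points, capped at the layer limit. A minimal sketch (the threshold values are hypothetical stand-ins for the return-period-based thresholds described in the text):

```python
def catxl(x, attach, exhaust):
    """catXL layer payout for a single occurrence loss: losses below the
    attachment pay nothing; losses above the exhaustion are capped at the
    layer limit (exhaust - attach)."""
    return min(max(x - attach, 0.0), exhaust - attach)

# Hypothetical attachment/exhaustion thresholds in $B:
attach, exhaust = 20.0, 50.0
print(catxl(10.0, attach, exhaust))   # 0.0  (below attachment)
print(catxl(35.0, attach, exhaust))   # 15.0 (partially into the layer)
print(catxl(80.0, attach, exhaust))   # 30.0 (capped at the layer limit)
```

The capping behavior is what diminishes the role of rare, severe perils (HU, EQ) in the lowest layers, as discussed below.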
Panel A in Figure 8 clearly demonstrates that the AAL is comprised of non-trivial contributions from occurrence losses up to order 10. The inner grayscale decomposition in Panel A is obtained using numerical integration, but we have also used our 500,000 year simulation (see Appendix D) to further decompose the results into the peril contributions (analogous to the pre-catXL case). Panel A demonstrates that the order 1 occurrence loss explains roughly 40 percent of the AAL (lower than the 60 percent in the pre-catXL case), within which the most significant contribution is from SCS, followed by WS, WF, HU and EQ. For higher order contributions, SCS plays a progressively larger role in explaining the AAL. The contrast between Figure 8 Panel A and Figure 5 (pre-catXL AAL) is stark. The introduction of the catXL operator in Panel A leads to more of an emphasis on the 3 highest frequency perils (SCS, WF and WS). HU and EQ play a smaller role in Panel A since their losses happen infrequently, and when they do occur, the catXL operator caps the severity of the loss. We note again that the lowest layers are typically the responsibility of primary insurers. Our results clearly demonstrate the importance of accurately modeling high frequency perils from the perspective of a primary insurer.
Panel B in Figure 8 shows AAL results for the 2–5 year return period layer. The order 1 occurrence loss now dominates the AAL with an over 90 percent contribution, the majority of which derives from HU, followed by meaningful contributions from EQ, WS, SCS and WF. The results in Panel B are a stepping stone to the higher layers depicted in Panels C and D. Panels C and D show the dominance of the order 1 occurrence loss, which is itself dominated by HU, followed by EQ. The highest layers are dominated by low frequency and high severity perils. The most extreme layer depicted in Panel D clearly demonstrates that HU losses are almost singularly important when determining the AAL component of the catXL premium. While our results in Panel D would be well appreciated by market practitioners, our mathematical framework enables us to shed light on this result in a precise way.
Table 6 provides convergence errors for the results in Figure 8, for a selection of layers and occurrence loss orders (computed in the same way as in the pre-catXL case). Table 6 demonstrates very large standard errors for the higher order contributions to the 10–20 year return period layer. While these contributions are generally not significant, which negates to some degree the practical importance of the large errors, Table 6 nonetheless suggests the need for a very large number of simulation years for accurate catXL technical pricing.
Table 7 provides a selection of estimates of numerical integration errors for the contributions of various orders to the 1–2 year return period layer. In comparison to Table 6, the results in Table 7 again demonstrate that our numerical integration process is highly accurate, roughly corresponding to a very long timeline simulation.
Figure 9 plots the variance decomposition for the 4 catXL layers of interest. Panel A of Figure 9, which shows results for the 1–2 year return period layer, depicts a rich structure with many interactions contributing to the total variance. The 3 largest contributors are the order 3/2 interaction, followed by the 2/1 interaction, and then the marginal contribution of the order 2 occurrence loss. The fact that the 3/2 interaction term is greater than the 2/1 interaction term could in part be explained by the higher correlation of the 3/2 term (0.78) versus the 2/1 term (0.65) (correlations are provided in Figure 10 to follow). The key takeaway from Panel A is that the variance is driven by a large set of interaction terms across the different orders, and modeling such interactions accurately is important for risk takers that provide coverage for such layers (primary insurers). Panel B of Figure 9, corresponding to the 2–5 year return period layer, shows a much less rich structure, which is dominated by the order 1 occurrence loss. As we move to progressively higher catXL layers in Panels C and D, the order 1 occurrence loss becomes nearly singularly important (explaining well over 90 percent of the total variance). Given our findings for the AAL depicted in Figure 8, this order 1 occurrence loss is largely attributable to HU. The results in Figure 9 for the total variance further bolster the notion that the HU peril dominates the pricing of high catXL layers in our particular multi-peril setup.
Table 8 provides convergence errors for the various (variance contribution) terms displayed in Figure 9 (using the same methodology as in the pre-catXL results provided in Table 4). Table 8 shows that most terms have standard errors below 1 percent, with the exception of the interaction terms for the 10–20 year return period layer. While such terms do not explain a large part of the variance, our results nonetheless indicate that very precise quantification of all terms requires a larger number of simulations. As noted in our discussion of the pre-catXL case, all terms in Table 8 are computed under assumptions which are not strictly valid, but nonetheless give us a feel for the magnitude of the standard errors.
Table 9 contains numerical integration errors computed using methods similar to those discussed for Table 5 in the pre-catXL case. Results in Table 9 are displayed for the 1–2 year return period layer. Table 9 shows that the numerical integration errors we have quantified are higher than the standard errors derived from the timeline simulation (though not dramatically so). This was not the case in the analogous pre-catXL results, and it is not entirely clear why. Nonetheless, the results in Table 9 do demonstrate that the numerical integration process is accurate. We note once again that the results in Table 9 are computed using assumptions which are not strictly correct, but nonetheless give us a feel for the magnitude of the errors.
Figure 10 displays the correlation structures for the post-catXL results (computed using Equation (17) with the appropriate catXL operator). Figure 10 shows that the lower layers (Panels A and B) have strong correlations across the different occurrence loss orders, whereas the correlations drop to close to 0 for the highest layer in Panel D. This is a direct consequence of the effect of the catXL operator. For Layer 1, occurrence losses up to order 10 'survive' the effect of the operator, and are able to interact in a non-trivial way. For the highest Layer 4 in Panel D, higher order occurrence losses are nearly always zero due to the effect of the operator, and the correlations consequently tend to 0. Comparing the results to the pre-catXL case is also interesting. For example, for Layer 1 in Figure 10, the correlations for the low order interaction terms are much higher than the pre-catXL equivalent set of correlations. For the highest Layer 4 in Figure 10, the equivalent correlations are much lower than the pre-catXL results. We therefore infer that lower layers emphasize correlations across the occurrence losses, whereas the highest layers diminish their importance.
Our results for an idealized multi-peril catastrophe model for a US nationwide portfolio shed light on interesting and different aspects of the occurrence losses. Our key findings are:
For the lowest layers, which are typically covered by primary insurers, we find non-trivial contributions to the AAL from occurrence losses up to order 10. The decomposition of the variance clearly demonstrates that many interactions across occurrence losses are important, which is consistent with our finding of strong correlations for interactions up to order 9. Due to the effect of the catXL operator, the AAL and total variance for the lowest layer have been shown to have the largest contributions from the highest frequency perils (SCS, WS, WF) and generally lower contributions from HU and EQ. For these layers the catXL operator also increases correlation across all orders. The practical consequence is that primary insurers must ensure accurate modeling of the higher frequency perils to achieve useful technical pricing for retained risk. Our mathematical framework sheds light on this in a precise way.
The highest layers, typically covered by reinsurers, are dominated by the lowest frequency and highest severity perils (HU followed by EQ). What is perhaps surprising is the degree to which the maximum loss arising from hurricanes dominates the highest layer we have studied (50–100 year return period). The practical implication is clear: high layer catXL underwriters must pay careful attention to accurate modeling of HU. While this notion is well appreciated by market participants, our framework is able to quantify it in a precise way, and will enable practitioners to quickly assess the impact of different model assumptions on the decomposition of pricing metrics.
Within the context of our particular multi-peril catastrophe model, our results have made clear the importance of HU to the high layer reinsurance pricing problem. The following Section 4 uses a HU-only model to address a question related to HU model adequacy.