Article

Tail Risks in Corporate Finance: Simulation-Based Analyses of Extreme Values

1 Faculty of Business Administration and Economics, Heinrich Heine University Düsseldorf, 40225 Düsseldorf, Germany
2 International School of Finance (ISF), Nuertingen-Geislingen University, Sigmaringer Straße 25, 72622 Nuertingen, Germany
* Author to whom correspondence should be addressed.
J. Risk Financial Manag. 2023, 16(11), 469; https://doi.org/10.3390/jrfm16110469
Submission received: 29 September 2023 / Revised: 23 October 2023 / Accepted: 26 October 2023 / Published: 30 October 2023
(This article belongs to the Special Issue Advances in Corporate Finance and Investments)

Abstract:
Recently, simulation-based methods for assessing company-specific risks have become increasingly popular in corporate finance. This is because modern capital market theory, with its assumptions of perfect and complete capital markets, cannot satisfactorily explain the risk situation in companies and its effects on entrepreneurial success. Through simulation, the individual risks of a company can be aggregated, and the risk effect on a target variable can be shown. The aim of this article is to investigate which statistical methods can best assess tail risks in the overall distribution of the target variables. In doing so, the article investigates whether extreme value theory is suitable to model tail risks in a business plan independent of company-specific data. For this purpose, the simulated cash flows of a medium-sized company are analyzed. Different statistical ratios, statistical tests, calibrations, and extreme value theory are applied. The findings indicate that the overall distribution of the simulated cash flows can be multimodal. In the example studied, the potential loss side of the cash flow exhibits a superimposed, well-delimitable second distribution. This tail distribution is extensively analyzed through calibration and the application of extreme value theory. Using the example studied, it is shown that similar tail risk distributions can be modeled both by calibrating the simulation data in the tail and by using extreme value theory to describe it. This creates the possibility of working with tail risks even if only a few planning data points are available. Thus, this approach contributes to systematically combining risk management and corporate finance and significantly improving corporate risk management. Based on these findings, further analyses can be performed in terms of risk coverage potential and rating to improve the risk situation in a company.

1. Introduction

The model-based evaluation and management of company-specific risks is of significant interest for entrepreneurs, investors, and bankers. While the entrepreneur can use the evaluation results to make better decisions and reduce risks, investors will compare various investment options with regard to their earnings and risk profile.
A company basically has several different options for conducting the risk assessment process. The risk of a company can either be derived from an outside-oriented risk analysis on the capital market or determined internally with respect to the specific risks carried by the company (Gleißner and Ernst 2019; Ernst 2022b, 2023). The latter approach in particular is attracting increasing interest, especially in Europe, and the approaches are being increasingly refined and implemented in tools for corporate planning (Ernst and Gleißner 2022b). The method of simulation-based planning is used to identify risks in a company, quantify them, and integrate them into corporate planning via their distribution functions. Using Monte Carlo simulation, the risks are aggregated into a target figure. Examples of target variables are EBIT, cash flows, or key performance indicators (KPIs). The distribution of the target variable allows a quantitative risk analysis (Ernst and Gleißner 2022a).
Because we will use one case as an example, we would like to explicitly point out that in simulation-based planning, company-specific simulation results are generated for each company based on its individual risks. Depending on the risk content of corporate planning, the simulated results have different risk profiles. Therefore, no generally valid statements for other companies can be made from the simulation results for one company.
In practice, there are often examples indicating that the overall distribution of a company’s cash flow is not unimodal in many cases (Ernst 2022b) and that a superimposed distribution in the area of the loss tail can often be clearly identified; Figure 1 provides an example. This raises the question of a suitable risk assessment, e.g., in the form of calculated quantile values that can be included in a risk report or an assessment study. Quantile calculations, for example, can be based on a distribution model for the cash flow of the company under consideration. The simple adaptation of a hypothetically assumed, unimodal distribution to the overall distribution is likely to lead to poor goodness-of-fit test results because of possible bimodality and regularly results in the rejection of the hypothesis; see, e.g., (Cramér 1928a, 1928b; von Mises 1928; Anderson and Darling 1952, 1954; Shorack and Wellner 2009).
At least three other possible approaches are obvious for appropriately assessing risk (see Embrechts et al. 1997; Basel Committee on Banking Supervision 2009; McNeil et al. 2015):
  • Empirical determination of the required quantiles from the empirical distribution function (Kolmogorov 1933);
  • Adaptation of a suitable distribution function to the superimposed distribution in the loss area;
  • Adaptation of a generalized Pareto distribution (GPD) to the tail of the overall distribution.
The present study describes the procedures and results of all three methods and compares the findings. In doing so, we pursue the research question of which approach is most suitable in practice for risk assessment. As we need to focus on the tail of the aggregated distribution to answer this research question, we are additionally able to investigate the following issues: Can the combination of individual risks in a simulation-based company valuation lead to a situation that endangers the company’s existence? How can empirical tail distributions be statistically characterized? Can an empirical tail distribution be converted by calibration into a distribution function that is characteristic of tail risks? What additional insights does extreme value theory provide for the analysis of empirical tail distributions? Is there a connection between the distribution functions obtained through calibration and those of extreme value theory? The analyses provide evidence that the tail region can be modeled well with both a separate distribution and the GPD. However, statistical description with the GPD shows slight advantages and allows a somewhat more accurate description of tail risk. In addition, the GPD is also a model for the tail of a separately fitted distribution, and the two approaches show consistency, so in practice, the direct fit of the GPD is preferable. Regardless of the type of tail modeling, the analyzed example shows the possibility of a corporate hazard situation when individual corporate risks are aggregated.
This article illustrates in detail how simulation-based corporate planning can be designed and conducted. In particular, it is shown how the loss distribution and, in particular, the distribution of high losses (tail) can be inferred. We evaluate the loss distributions and calculate statistical key figures for extreme losses. Overall, we present a concept for evaluating a company with respect to its extreme losses, which consists of three stages that build on one another: determining the distribution of the cash flow, identifying the loss distribution, and estimating the statistical key figures for extreme losses. The application of the proposed method is illustrated using a benchmark example of a medium-sized company.
The article is structured as follows: The following section provides a brief overview of the simulation-based corporate planning method, which has already been extensively discussed in the European literature. A brief summary of tail modeling with the GPD is also provided. Section 3 presents an example based on a real medium-sized machinery company and describes the simulation model on which our empirical analysis is based and the dataset used for the analysis. Furthermore, the indicators that can be used for a risk report are derived and compared using the three approaches above. The last section summarizes the results and presents a conclusion and prospects for further research.

2. Methodology

2.1. Simulation-Based Corporate Planning

The risk analyses performed by companies are always based on corporate planning. Traditionally, the management’s targets flow into corporate planning. More modern approaches, however, rely on simulation-based corporate planning, which does not process target values (e.g., values set by the management) in the planning but generates stochastic plan values from the risks of the company through simulation (Ernst and Gleißner 2022a). The resulting plan values represent the average value over all scenarios generated by the simulation. They form the basis for further risk analysis, rating analysis, and company valuation (Ernst 2022b). Within the framework of simulation-based corporate planning, risks that exist in the company and are not hedged are identified and quantified. The risks are quantified by the distribution functions of stochastic plan variables. The stochastic plan variables are built into the income statement and balance sheet planning so that we obtain stochastic plan values instead of target values. The impact of the combination of identified and quantified risks on a target variable (e.g., EBIT or cash flow) is determined through a Monte Carlo simulation. This step performs risk aggregation. As a result, we obtain empirical distribution functions for these target variables. Among other things, this enables the company to draw conclusions about statistical ratios that allow an evaluation of high losses. The focus here is on loss distributions, the evaluation of extreme events, and the assignment of a rating.
Simulation-based corporate planning is designed to generate plan values that show the average over all possible scenarios. These plan values are values that arise, on average, from the possible deviations around the expected value (Gleißner 2008). The technique of simulation-based planning has been little discussed internationally. In Germany, planning standards and auditing standards, as well as legal requirements, have led to the increasing use of planning values that consider different scenarios. The IDW, the German institute of public auditors, issues, among other things, guidelines and standards on auditing and business valuation. It states in its Practice Note 2/2018 (Institut der Wirtschaftsprüfer 2018, § 43) that probabilities of default in a planning calculation should show the average default risk for different scenarios. The Principles of Proper Planning (Bundesverband deutscher Unternehmensberatungen 2022) also state in Section 2.3.5 that planning should generally be based on comprehensible expected values.
In the already-listed literature on simulation-based planning, it is required for plan values to be “unbiased”. The authors point out that “unbiasedness” is given when the plan value corresponds to the mean value over all possible scenarios. The term unbiasedness is used in different scientific disciplines. In statistics, there are clear criteria for unbiasedness, which are not all fulfilled in corporate finance.
While the Gauss–Markov theorem provides a mathematical foundation for the unbiasedness of statistical estimators (Kendall and Stuart 1977; Hallin 2014), no such foundation exists for corporate planning. A plan is not a statistical estimate of an objective property of reality (for the difference between a plan and a forecast, see also Rieg 2018). There can be no objective values in simulation-based planning either, as the simulation itself is based on subjective assumptions. Therefore, the planned values cannot be unbiased. Unbiased planning is an ideal that holds without restriction only under the unrealistic assumption of perfect information. In business planning practice, however, the aim is not to achieve one ideal and only this ideal but to compare different procedures in terms of unbiasedness and to select the relatively better approach (Hubbard 2020). Causal planning models with risk aggregation via Monte Carlo simulation currently represent the best planning methodology (Steinke and Löhr 2014; Gleißner 2016). In addition, this methodology provides information on the extent of possible deviations from the plan, i.e., planning certainty, which can be expressed by a risk measure (e.g., standard deviation). Scenario-based or simulation-based planning comes closest to the ideal of unbiasedness, but without being able to meet the required statistical criteria for unbiasedness. Therefore, instead of the term “unbiased”, we introduce the term “scenario-based” at this point, since scenario-based plan values represent plan values that occur on average over all possible (in our case, simulated) scenarios.
As mentioned above, the topic of simulation-based planning is hardly discussed in the international literature. This may be because in Europe (especially in Austria, Germany, and Switzerland), both the standards of corporate planning and auditing as well as laws mandate the use of scenario-based plan values. Nevertheless, it is remarkable that the added value of simulation-based corporate planning for corporate and risk management has been little discussed internationally.
If the literature is briefly summarized, the following steps can be identified for creating a simulation-based plan:
  • Identification of the relevant risks in the company;
  • Selection and adaptation of suitable distribution functions for the identified risks (uniform or triangular distribution, Gaussian-type distribution, etc.); see (Wehrspohn and Ernst 2022);
  • Adoption and integration of the risk distributions into the business plan for the Monte Carlo simulation; see (Ernst 2022b);
  • Risk aggregation to a target value with the help of Monte Carlo simulation; see (Jaeckel 2002; Pachamanova and Fabozzi 2010);
  • Risk analysis; see (Stulz 2003; Jorion 2007; Friberg 2015).
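The steps above can be condensed into a minimal Monte Carlo risk aggregation sketch. All distributions, parameters, and the fixed-cost figure below are illustrative assumptions for a toy plan, not the company data from Section 3:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 20_000  # number of simulated scenarios, as in the example in Section 3

# Steps 1-2: identified risks with assumed distribution functions
sales_growth = rng.triangular(-0.05, 0.03, 0.10, size=N)  # triangular distribution
cogs_ratio = rng.normal(0.65, 0.04, size=N)               # Gaussian-type distribution

# Steps 3-4: integration into a (heavily simplified) plan and aggregation
base_sales = 100.0                         # EUR million, hypothetical base year
sales = base_sales * (1.0 + sales_growth)
ebit = sales * (1.0 - cogs_ratio) - 25.0   # assumed fixed costs of EUR 25 million

# Step 5: risk analysis on the empirical distribution of the target variable
plan_value = ebit.mean()                  # scenario-based plan value
plan_certainty = ebit.std()               # planning certainty as a risk measure
loss_quantile = np.quantile(ebit, 0.001)  # 99.9% quantile on the loss side
```

In a real model, the risk distributions feed into full income statement and balance sheet planning rather than a single EBIT equation, but the aggregation principle is the same.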
We would like to mention again that all plan data and risk assessments are company-specific. If, in addition, the focus is on medium-sized companies for which no securities are listed on the capital market, it is difficult to derive risk ratios from capital market data and to compare them empirically with the simulated risk data. This extended consideration is therefore excluded from the present study.

2.2. Assessment of Tail Risks

A key result of this model-based corporate planning is the potential cash flow that the company under consideration can generate over the next five years of the observation period. This represents a significant extension to the point estimates or even scenario analyses used in practice. An empirical distribution function is calculated for each year, with which the risk associated with the year can be evaluated. In addition to this five-year view, the result of an infinite planning horizon, the terminal value (TV), can also be examined. Hence, a risk analysis based on predictive planning is conducted. A number of applications show that the cash flow distribution is often based on a non-unimodal distribution and that special attention should be paid to the tail when assessing risk in the loss area. If an independent distribution can be defined and modeled in the loss area, it can be evaluated. If this does not succeed, then standard methods from the theory of extreme value distributions (Embrechts et al. 1997) should be used. The basic idea for the latter is to model the tail region with the GPD (see Appendix B) and then derive statements about the risk. Typical key risk indicators in corporate planning are value-at-risk, conditional value-at-risk, risk coverage potential, risk bearing capacity, and risk tolerance (Gleißner and Ernst 2019; Ernst 2022b, 2023). Many of the risk indicators mentioned can be derived from quantiles, so we will focus on the calculation of quantiles and briefly distinguish three methods for determining quantiles and assessing tail risks. Regarding the specific example company in Section 3, the following three methods are applied and compared.

2.2.1. Empirical Estimation of Tail Risks

In the first step, we perform an empirical estimation based on the overall empirical distribution function to detect tail risks. The minimum value, the maximum value, the range, the mean, and the variance are used as key figures to describe the general risk characteristics. Semivariance gives an additional indication of asymmetries in the distribution. Furthermore, the kurtosis and skewness of the empirical cash flow distribution are calculated. Skewness measures the asymmetry of data around the mean. Negative skewness indicates a longer left tail, while positive skewness represents a spread of data to the right of the mean. Kurtosis measures whether a distribution is heavier-tailed or lighter-tailed relative to the normal distribution. A kurtosis of less than three indicates a less outlier-prone distribution, while a kurtosis greater than three indicates heavier-tailed behavior than the normal distribution. Finally, the 99.9% quantile is determined to identify how close the smallest 0.1% of cash flows are to the minimum value. The abovementioned indicators are part of the basic statistical characterization and can be determined using standard methods (Kendall and Stuart 1977; Embrechts et al. 1997; McNeil et al. 2015).
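These key figures can be computed with standard tools. A sketch in Python, using synthetic normal data as a stand-in for the simulated cash flows (the semivariance convention below, the mean squared one-sided deviation, is one common definition and an assumption on our part):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cf = rng.normal(5.0, 2.0, 20_000)  # placeholder for the simulated cash flows

cf_min, cf_max = cf.min(), cf.max()
cf_range = cf_max - cf_min
mean, variance = cf.mean(), cf.var()

# Semivariance: mean squared negative/positive deviation from the mean
semivar_down = np.mean(np.minimum(cf - mean, 0.0) ** 2)
semivar_up = np.mean(np.maximum(cf - mean, 0.0) ** 2)

skewness = stats.skew(cf)                    # < 0: longer left (loss) tail
kurtosis = stats.kurtosis(cf, fisher=False)  # > 3: heavier tails than normal
q_tail = np.quantile(cf, 0.001)              # bounds the smallest 0.1% of cash flows
```

For symmetric data the two semivariances are nearly equal and sum to the variance; a markedly larger downside semivariance is the asymmetry signal described above.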

2.2.2. Tail Model I—A Suitable Adapted Distribution for the Tail

In the second step, in the presence of a bimodal distribution, the distribution of tail risks is separated from the main distribution. In many examples examined in the preliminary analyses, including the example shown in Section 3, the separation of the available empirical distribution function succeeds without major effort. Simply put, this is achieved through close visual examination. If the separation is not so simple, special statistical methods are required to achieve separation; see (Hartigan and Hartigan 1985), and for an overview of suitable statistical methods, see (Gnedin 2010). The following focuses on the analysis of tail distributions. To determine which distribution function best describes the empirical distribution function of tail risks, the calibration method is used. Statistical software or spreadsheet software is used to calibrate the empirical data to a theoretical distribution function. Here, the Excel add-in Risk Kit performs the desired calibration (Wehrspohn and Zhilyakov 2011, 2021). Risk Kit is Monte Carlo simulation and calibration software that integrates with Excel and works similarly to Crystal Ball or @Risk. The program fits the distributions to the empirical data and reports the estimated parameters for the distribution function. Furthermore, the results of goodness-of-fit tests, for example, the Chi-square goodness-of-fit test, Box–Cox transformation, Kolmogorov–Smirnov goodness-of-fit test, Shapiro–Wilk test, or Anderson–Darling goodness-of-fit test (Kendall and Stuart 1977; Shorack and Wellner 2009), are reported. The estimated distribution parameters can then be used for risk assessment and modeling.
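Risk Kit performs this calibration inside Excel; the same idea can be sketched with open-source tools. The candidate set and the synthetic tail data below are assumptions for illustration, and note that goodness-of-fit p-values are optimistic when the parameters are estimated from the same data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic loss-tail data (losses noted as positive values); Weibull by assumption
losses = stats.weibull_min(c=1.5, scale=4.0).rvs(5_000, random_state=rng)

candidates = {
    "weibull_min": stats.weibull_min,
    "gamma": stats.gamma,
    "lognorm": stats.lognorm,
}

fits = {}
for name, dist in candidates.items():
    params = dist.fit(losses, floc=0)             # maximum likelihood estimates
    ks = stats.kstest(losses, dist(*params).cdf)  # Kolmogorov-Smirnov distance
    fits[name] = {"params": params, "ks_stat": ks.statistic}

# Rank candidates by the K-S distance; smaller means a better fit
best = min(fits, key=lambda name: fits[name]["ks_stat"])
```

A full calibration would add further criteria (Anderson–Darling, Akaike information criterion) and a larger candidate set, as Risk Kit does with its sixty distributions.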

2.2.3. Generalized Pareto Distribution

Especially when considering high quantiles in the risk assessment process, separate modeling of the tail of the often unknown parent distribution is necessary. In finance, the GPD is used predominantly as a tail model (Basel Committee on Banking Supervision 2009). A theorem in extreme value theory states that for a broad class of distributions, the distribution of the excesses over a threshold u converges to a GPD if the threshold is sufficiently large; see, e.g., (Gnedenko 1943; Balkema and de Haan 1974; Pickands III 1975). For parameterization and characteristics, see Appendix B.
Since the GPD was introduced by Pickands III (1975), many theoretical enhancements and applications have followed (Davison 1984; Smith 1984, 1985; van Montfort and Witter 1985; Hosking and Wallis 1987; Davison and Smith 1990; Choulakian and Stephens 2001). Its applications include its use in the analysis of extreme events in climate research, as a failure-time distribution in reliability studies, and in the modeling of large insurance claims. Further examples of applications can be found in Embrechts et al. (1997) and the studies listed therein. The GPD is also increasingly used in finance. Especially in the assessment of risks based on high quantiles, the GPD is one of the proposed distributions for modeling the tail of an unknown parent distribution (Basel Committee on Banking Supervision 2009).
Thus, for the separate modeling of the tail with the GPD, only the parameters of the distribution need to be determined via standard maximum likelihood methods from the data pertaining to the tail region of the underlying empirical distribution function; for example, the data that belong to the area below threshold u in the case of the loss tail. The approach is analogous above threshold u if the losses are represented positively by a change in sign; see (Davison 1984; Smith 1984, 1985; Hosking and Wallis 1987; Embrechts et al. 1997; Choulakian and Stephens 2001; Hoffmann and Börner 2021). For further evaluation, particularly for calculating the risk indicators, the correct determination of the threshold value u is essential (McNeil and Saladin 1997; de Fontnouvelle et al. 2005; Chernobai and Rachev 2006; Dutta and Perry 2006). In addition to the simple eyeball criterion or a rule of thumb, e.g., 5% of the data belong to the tail, various methods for determining the optimal threshold value are conceivable; see (Hoffmann and Börner 2021) and the vast literature cited therein. In this paper, we follow Hoffmann and Börner (2020a, 2021) and use a fully automatic procedure to fit a GPD that does not require any user intervention or additional parameters. This method was recently used successfully to determine tail risks in cryptocurrencies and showed advantages over other methods (Bruhn and Ernst 2022). In what follows, we use the same algorithm to determine the parameters of the GPD as the tail distribution. A brief description of the procedure is given in Appendix C. An implementation of the algorithm can be found free of charge at github.com (Bruhn 2022).
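The automatic threshold algorithm is beyond the scope of a short sketch, but the underlying peaks-over-threshold fit can be illustrated with the simple 5% rule of thumb mentioned above. The heavy-tailed synthetic loss data and all parameters are assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic losses, noted as positive values by a change of sign
losses = stats.lognorm(s=1.0, scale=2.0).rvs(20_000, random_state=rng)

# Rule of thumb in place of the automatic procedure: top 5% belong to the tail
u = np.quantile(losses, 0.95)
excesses = losses[losses > u] - u

# Maximum likelihood fit of the GPD to the excesses (location fixed at zero)
xi, _, sigma = stats.genpareto.fit(excesses, floc=0)

# Peaks-over-threshold quantile estimate, e.g. at the 99.9% level:
# P(X > q) = (n_u / n) * (1 - G(q - u)), solved for q via the GPD ppf
n, n_u = losses.size, excesses.size
p = 0.999
q_gpd = u + stats.genpareto.ppf(1.0 - (1.0 - p) * n / n_u, xi, scale=sigma)
```

With only 0.1% of the sample beyond the target quantile, the smoothed GPD estimate is typically more stable than the raw empirical quantile, which is the practical motivation for tail modeling.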

3. Application

As an example, we analyze a medium-sized company. Using the method of simulation-based planning and the corresponding model, possible outcomes for cash flow are generated. The data are then examined, and the tail risks are assessed. A special interest is the evaluation of risks at high confidence levels and the answer to the question of which method from Section 2.2 is most suitable in practice, for example, for estimating high quantiles from data.

3.1. Model and Data

The model used here is the planning model of a medium-sized company. The planning model comprises six periods. The first five periods represent the detailed planning phase, in which all cash flows are planned in detail on the basis of integrated income statements and balance sheet planning. From the sixth planning period onward, the continuation period begins. The cash flow TV calculated here represents cash flow in a growth equilibrium. Based on the cash flow TV, the TV is calculated as a perpetual annuity. For the derivation of the TV, see (Ernst 2022a).
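The TV as a perpetual annuity can be written down directly. A minimal sketch of the growing-perpetuity formula; the cash flow, cost of equity, and growth rate below are purely illustrative figures, not the company data:

```python
def terminal_value(cf_tv: float, r: float, g: float) -> float:
    """Terminal value as a perpetual annuity with constant growth; requires r > g."""
    if r <= g:
        raise ValueError("the discount rate must exceed the growth rate")
    return cf_tv / (r - g)

# Hypothetical figures: CFtE of EUR 8 million, 9% cost of equity, 1% growth
tv = terminal_value(cf_tv=8.0, r=0.09, g=0.01)  # approx. EUR 100 million
```

In the stochastic model, cf_tv is itself a simulated quantity, so the TV inherits a full distribution rather than a single point value.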
Our planning model is designed as a stochastic planning model. This means that the most important planning assumptions are not based on target values of the company (from controlling) but on distribution functions. In the first step, as a result of a risk workshop, the risks that have not been hedged or cannot be hedged by the company are identified. These risks are quantified by assigning them a distribution function (Wehrspohn and Ernst 2022). Table 1 presents an example of two identified risks with their distribution functions, namely sales growth and the cost of goods sold. The cells with bold entries are input cells for the Monte Carlo simulation and contain random numbers from the specified distribution functions.
In a second step, the Monte Carlo parameters are built into the income statement and balance sheet planning. This results in simulation-based planning (Rieg and Gleißner 2022). In the Monte Carlo simulation, a number of values are drawn from the distribution functions (20,000 in the example), combined via the planned income statement and planned balance sheet, and aggregated to the target variable of the Monte Carlo simulation. In the model, the cash flows to equity (CFtE) are the target/output variables of the Monte Carlo simulation. Table 2 shows the income statement, and Table 3 shows the calculation of the CFtEs. The cells with bold entries are output cells of the Monte Carlo simulation.
Figure 1 shows the empirical distribution functions of the CFtE for the plan year t1 as an example of the output of the Monte Carlo simulation. From the figure, it is apparent that the empirical density of the distribution of the CFtE exhibits a fat tail. Furthermore, the distribution consists of two distributions that can be separated from each other at a distinct value, which is well isolated and clearly recognizable. This clear separation also occurred in the simulations at the other planning horizons t2, …, t5, and TV, so that a clear division of the empirical density into two sections at a given point is consistently apparent. Other model settings and parameterizations in Table 1, Table 2 and Table 3 were also made and examined via Monte Carlo simulations. These analyses also showed such a division of the distribution into two sections. In practice, however, such a clear separation may not be feasible without further statistical testing, so the use of appropriate statistical procedures is recommended at this point (Hartigan and Hartigan 1985; Ashman et al. 1994; Shorack and Wellner 2009).
We would like to point out once again that the frequency distributions of the cash flows are the result of company-specific risks. Companies with a lower risk profile will not experience tail risks of this magnitude. Conversely, companies with higher risks have stronger, more extreme risks. In the example selected, we have chosen the risk characteristics as they might occur in a medium-sized machinery company. This example represents a case-by-case investigation and shows what can be observed in the practice of risk management in individual cases, but no general conclusions can be drawn.

3.2. Analyses of the Company Example

3.2.1. Basic Statistical Key Figures

First, the overall empirical distribution function is analyzed. The results are shown in Table 4. The first striking feature is the wide range between the minimum and maximum values, which is confirmed by the high variance. If the mean value is considered as well, the negative deviations from the expected value are much stronger than the positive deviations. This is the first indication of tail risks. The values for the semivariance provide interesting results. The semivariance based on the negative deviations from the mean is significantly higher than the semivariance of the positive deviations. This is also an indication of tail risks. This is further confirmed by the skewness and kurtosis. The negative skewness indicates that tail risks lead to the assumption of a longer tail at the left margin. Furthermore, we observe values greater than three for kurtosis. This shows heavier-tailed behavior than the standard normal distribution. Overall, the applied statistical ratios indicate the presence of pronounced tail risks.

3.2.2. Analysis of the Tails by Calibration—The Tail Distribution

In the next step, we analyze the distribution of the tail risk. The first step is to check whether it is possible to separate the distribution of the tail. There are special tests, e.g., Hartigan’s dip test (Hartigan and Hartigan 1985) and Ashman’s D (Ashman et al. 1994), to analyze such issues. The first test examines a distribution for unimodality, and the second tests the separability of superimposed distributions at a cut point. However, the data analyzed here show clear separation at the minimum between the two modal values as the cut point (see Figure 1), and the corresponding tests do not contradict this hypothesis.
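The cut-point idea, taking the density minimum between the two modal values, can be sketched on synthetic bimodal data. The mixture weights and component parameters below are assumptions chosen only to produce a clearly separated loss mode:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic bimodal cash flows: a body plus a superimposed loss distribution
cf = np.concatenate([
    rng.normal(5.0, 1.5, 19_000),   # main (body) distribution
    rng.normal(-12.0, 2.0, 1_000),  # loss-tail distribution
])

# Smooth density estimate and its local maxima
kde = stats.gaussian_kde(cf)
grid = np.linspace(cf.min(), cf.max(), 1_000)
dens = kde(grid)
interior = np.arange(1, grid.size - 1)
peaks = interior[(dens[interior] > dens[interior - 1])
                 & (dens[interior] > dens[interior + 1])]
top2 = np.sort(peaks[np.argsort(dens[peaks])[-2:]])  # the two highest modes

# Cut point: the density minimum between the two modal values
between = slice(top2[0], top2[1] + 1)
cut = grid[between][np.argmin(dens[between])]
tail = cf[cf < cut]  # observations assigned to the loss distribution
```

When the modes are not this well separated, the formal tests cited above (Hartigan's dip test, Ashman's D) should replace this visual-minimum heuristic.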
Using the example of CFtE in period TV, we show in Figure 2 the empirical distribution function of CFtE in detail in the tail area. For other planning horizons, see Appendix A. A calibration with the Risk Kit was performed to check which distribution function best approximates the empirical distribution function. A total of sixty tail distribution functions were fitted to the data, and various goodness-of-fit tests were performed with them (Wehrspohn and Zhilyakov 2011, 2021). Table 5 lists the six distributions that show the best results in full statistics with respect to the goodness-of-fit tests. The distributions listed are ranked according to their suitability. Of these, the Weibull distribution stands out for this dataset, as it shows the best fitting behavior in all tests. Thus, the Weibull distribution best represents the empirical distribution function in the tail area. For period TV, Figure 2 clearly presents the empirical density and the fitted Weibull distribution with their scale and shape parameters. The result of the calibration shows that the distribution functions that are closest to the given empirical distribution function correspond to the distribution functions that are also used in extreme value theory; see (Embrechts et al. 1997; Basel Committee on Banking Supervision 2009). Therefore, in the absence of a suitable data basis, it may be a favorable approach to model tail risks directly with the GPD and thus to use it as an approach for modeling tail risks in corporate planning.
If there is no knowledge about the underlying distribution function for a dataset, it is common practice to examine several distributions with respect to their goodness of fit (Embrechts et al. 1997; Basel Committee on Banking Supervision 2009; McNeil et al. 2015). The goal is to find the best-fitting distribution with a series of statistical tests; only in rare cases is the unknown, true distribution identified. Five different tests have been performed on all distributions. In addition to the Akaike information criterion, which allows a relative evaluation of different distribution models and a model selection, direct distance measures (dataset vs. model) were added as a criterion; see, e.g., (Shorack and Wellner 2009). While the Kolmogorov–Smirnov test (Kolmogorov 1933; Smirnov 1948) examines the largest distance point by point, the Anderson–Darling test (Anderson and Darling 1952, 1954) concentrates on a weighted integral distance evaluation, whereby the tail ranges are weighted more strongly; see also Appendix C. The latter test is thus of particular importance when the evaluation of tail risks is in focus. In all tests noted in Table 5, the Weibull distribution shows excellent results and is superior to other distributions in terms of goodness of fit, especially with respect to the Anderson–Darling test. Thus, in the following, the Weibull distribution is further investigated as a tail distribution.
For all planning periods, a separate Weibull distribution function was adjusted in the tail area; see Appendix A. With this distribution function, the key figures for the tail area shown in Table 6 can be calculated. For all periods, the compilation of statistical characteristics in Table 6 shows that the loss tail is strongly pronounced. The minimum, mean, and maximum of the loss tail are clearly in the negative range and indicate a company-threatening situation if the risks materialize. This can be seen in comparison to the expected earnings (mean in Table 4). As expected, the 99.9% quantile is very close to the minimum and underlines the critical situation for the analyzed company.
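This fitting-and-testing workflow can be reproduced with standard open tools. The following sketch uses synthetic stand-in losses and scipy in place of the Risk Kit calibration, so all parameter values and data are illustrative assumptions, not the paper's case-study figures:

```python
import numpy as np
from scipy import stats

# Hypothetical stand-in for the simulated cash-flow losses in the tail
# (sign flipped so losses are positive, in EUR million).
rng = np.random.default_rng(42)
tail_losses = stats.weibull_min.rvs(1.4, scale=5.0, size=2000, random_state=rng)

# Fit a Weibull distribution by maximum likelihood, location fixed at zero.
c, loc, scale = stats.weibull_min.fit(tail_losses, floc=0)

# Goodness of fit: Kolmogorov-Smirnov test against the fitted model and
# the Akaike information criterion for relative model comparison.
ks = stats.kstest(tail_losses, 'weibull_min', args=(c, loc, scale))
log_lik = stats.weibull_min.logpdf(tail_losses, c, loc, scale).sum()
aic = 2 * 2 - 2 * log_lik  # two free parameters: shape and scale

# Key tail figures analogous to Table 6: mean and the 99.9% quantile.
mean_tail = stats.weibull_min.mean(c, loc, scale)
q999 = stats.weibull_min.ppf(0.999, c, loc, scale)
print(f"shape={c:.2f} scale={scale:.2f} KS p={ks.pvalue:.2f} "
      f"mean={mean_tail:.2f} q99.9={q999:.2f}")
```

In a full calibration, the same fit-and-test loop would be run over the whole catalog of candidate tail distributions and the results ranked by the test statistics, as in Table 5.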

3.2.3. Analysis of the Tails Using the GPD as Tail Model—The Tail of the Tail Distribution

Using the method described in Appendix C, the dataset is separated into a body and a tail region, and for the tail region, the GPD (compare Appendix B) is fitted as a distribution model. The threshold $\hat{u}$ is determined, and the maximum likelihood estimates $\hat{\sigma}$ and $\hat{\xi}$ of the distribution parameters are calculated from the data assigned to the tail. As an example, the result of this modeling procedure is shown in Figure 3 for period t1. By changing the sign, the losses (in mio. euro) are noted as positive values. The main graph shows the empirical cumulative distribution function for the whole dataset. A plateau is clearly visible between loss values of approximately EUR 3 to 5 million, which delineates the two distribution parts (body and tail distribution); see also Figure 1. For a correct assessment of the risks at high quantiles, the procedure described in Appendix C models the tail of the tail distribution starting at a threshold value of $\hat{u} = 9.83$ mio. euro and estimates the distribution parameters of the GPD. The inner graph of Figure 3 focuses on the 99.9% confidence level, which is important for regulatory purposes.
The empirical cumulative distribution function is plotted with black crosses. The empirical determination of the quantile is compared with the two methods described in Section 2:
Tail Model I:
Adapted Weibull distribution as the tail distribution in light gray; compare also the upper-left graph in Figure A1 in Appendix A for the corresponding density.
Tail Model II:
Adapted GPD as the tail of the tail distribution in black.
The three quantile values determined are close together and scatter only slightly in the range between EUR 14.7 and 15.0 million. The GPD specifically adapted in the tail area shows very good results in the goodness-of-fit tests (Cramér–von Mises test: $CM = 0.0146$ with $p_{CM} = 99.9\%$; Anderson–Darling test: $AD = 0.1399$ with $p_{AD} = 99.3\%$). This high quality can be seen in the inner figure by the close fit of the graph (black line) to the empirical data (black crosses). Given the generally high quality requirements in risk assessment, the quantile estimated with the GPD offers a way to meet these demands, including the regulatory requirements (Basel Committee on Banking Supervision 2009). The analysis described was conducted in the same way for all periods. The quantiles determined are noted in the top three rows of Table 7 and shown in the top three lines of Figure 4. Overall, the three methods lead to similar quantile values that differ only slightly from one another within a narrow band. The only anomaly appears in period t4, where the empirical quantile deviates more clearly from the model calculations. A detailed analysis indicates that another substructure could be concealed in the data here; compare also the almost isolated peak at approximately EUR −15.4 million in Figure A1 in Appendix A for period t4. The number of data points in this possible substructure is small, so a precise statistical analysis is not possible here.
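A quantile comparison of this kind can be sketched as follows. The lognormal stand-in parent and the fixed 95% threshold are illustrative assumptions (the paper's procedure selects the threshold automatically; see Appendix C):

```python
import numpy as np
from scipy import stats

# Hypothetical loss sample standing in for one period's simulated losses
# (sign flipped to positive values); the parent is treated as unknown.
rng = np.random.default_rng(1)
losses = stats.lognorm.rvs(0.8, scale=4.0, size=20000, random_state=rng)

# Fix the threshold u at the empirical 95% quantile for this sketch.
u = np.quantile(losses, 0.95)
excesses = losses[losses > u] - u
n, k = losses.size, excesses.size

# Maximum likelihood fit of the GPD to the excesses over u.
xi, _, sigma = stats.genpareto.fit(excesses, floc=0)

# Peaks-over-threshold quantile estimator at confidence level p:
# q_p = u + (sigma/xi) * (((n/k) * (1 - p))**(-xi) - 1)
p = 0.999
q_gpd = u + (sigma / xi) * ((n / k * (1 - p)) ** (-xi) - 1)
q_emp = np.quantile(losses, p)  # empirical quantile for comparison
print(f"u={u:.2f}  xi={xi:.3f}  sigma={sigma:.3f}  "
      f"q_gpd={q_gpd:.2f}  q_emp={q_emp:.2f}")
```

As in the case study, the model-based and empirical quantiles should lie close together when the GPD describes the tail well.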

3.2.4. Comparison of the Results

If the sample mean from Table 4 is added to the quantile as an additional source of loss, the mean VaR in Table 7 is obtained. It is merely an additive shift for each period under consideration and changes little in the fundamental character of the results in the respective period, as the three middle graphs in Figure 4 also show. Only when comparing the periods t1 to TV does a smoothing of the graphs relative to those of the raw quantile become apparent. If the loss values above the specified confidence level are weighted with the various tail distributions and integrated, the values for the expected shortfall and thus for the mean CVaR are obtained; see (Embrechts et al. 1997; McNeil et al. 2015; Hoffmann and Börner 2020b, 2021; Bruhn and Ernst 2022). Table 7 compares the results of the mean CVaR per period. While the empirical results and the values calculated with the GPD as the tail of the tail distribution are very close to each other, the results determined with the Weibull distribution as the tail distribution differ significantly. This suggests that the Weibull distribution as a tail distribution does not represent the dataset at hand with sufficient quality, especially at very high quantiles. This result is also visualized in Figure 4 by the position of the three lines for the mean CVaR: in this resolution, the empirical values and those calculated with the GPD are almost congruent, whereas the results calculated with the Weibull distribution deviate significantly.
This finding is consistent with the interpretation of the inner figure in Figure 3. There, the magnification shows that the cumulative distribution function of the Weibull distribution (dashed line) represents the dataset (crosses) less well than the GPD (solid line).
Overall, the analysis presented above shows that all three methods allow very good conclusions to be drawn about the extreme risk that may materialize. If the GPD is used as a model for the tail of the tail distribution, the dataset can be represented with high quality, especially in the high quantile range, as indicated by the good results of the goodness-of-fit tests. There is also the advantage that no selection of tail distributions has to be examined (see Table 5). This is because the GPD as a tail of a tail distribution is unique (Embrechts et al. 1997; McNeil et al. 2015), and therefore, there is no model uncertainty with regard to the tail of the tail distribution.
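The expected shortfall computation can be illustrated for the GPD tail model, which admits a closed-form expression. The snippet below uses synthetic stand-in losses and a fixed 95% threshold, both illustrative assumptions:

```python
import numpy as np
from scipy import stats

# Hypothetical POT setup: stand-in losses, threshold at the empirical
# 95% quantile, GPD fitted to the excesses by maximum likelihood.
rng = np.random.default_rng(7)
losses = stats.lognorm.rvs(0.8, scale=4.0, size=50000, random_state=rng)
u = np.quantile(losses, 0.95)
excesses = losses[losses > u] - u
n, k = losses.size, excesses.size
xi, _, sigma = stats.genpareto.fit(excesses, floc=0)

# POT value at risk at confidence level p.
p = 0.999
var_p = u + (sigma / xi) * ((n / k * (1 - p)) ** (-xi) - 1)

# For xi < 1 the expected shortfall of the POT model has the closed form
# ES_p = VaR_p / (1 - xi) + (sigma - xi * u) / (1 - xi).
es_gpd = var_p / (1 - xi) + (sigma - xi * u) / (1 - xi)
es_emp = losses[losses > var_p].mean()  # empirical expected shortfall
print(f"VaR={var_p:.2f}  ES(GPD)={es_gpd:.2f}  ES(empirical)={es_emp:.2f}")
```

The closed form replaces the numerical integration of weighted tail losses described above; for a well-fitting GPD, the two expected-shortfall values should agree closely, mirroring the near-congruent lines in Figure 4.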
Figure 4 shows that, for the example of a medium-sized company, the risk values tend to decrease over the planning period. A number of other companies were modeled (Ernst 2022b; Ernst and Gleißner 2022b), and the behavior of the risk values with increasing planning horizon was examined. Behaviors ranging from slightly falling to sharply rising risk values across periods were observed. No systematic pattern is recognizable, so the risk assessment should be performed for all periods t1, …, t5, and TV in each application rather than extending conclusions from one period to all periods by simple rules of thumb.
Note that when modeling individual organizational units, distribution functions with a very heavy tail may occur under certain circumstances. These could then dominate the simulation and lead to shape parameters $|\xi| > 0.5$ when fitting the GPD; see also Appendix B. In practice, such individual cases have to be scrutinized and checked for plausibility.

4. Conclusions

The risk potential of a company is greater than the sum of its weighted individual risks. Therefore, the risks are aggregated with the help of Monte Carlo simulation, which allows the impact of a combination of entrepreneurial risks on a target figure to be quantified. In this case study, we selected a cash flow measure (here, cash flow to equity), which in turn is the basis for simulation-based business valuation. However, it is also possible to aggregate other P&L targets (e.g., EBIT or profit), balance sheet targets (e.g., equity or debt), and KPIs (e.g., ROCE, equity ratio, probability of insolvency, covenants).
The histograms of the aggregated cash flows of the business plan and the analyses have shown that the combination of risks can lead to significant tail risks in individual cases. These can lead to a situation threatening the existence of the company.
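As a minimal illustration of how such tail risks can emerge from risk aggregation, the following sketch combines a planned cash flow, a normally distributed operating risk, and a rare severe event risk. All figures are invented for illustration and are not the case-study data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

planned_cf = 8.0                                 # planned cash flow to equity (EUR mn)
operating = rng.normal(0.0, 2.0, n)              # day-to-day fluctuations
event_hits = rng.random(n) < 0.02                # 2% chance of a severe event
event_loss = np.where(event_hits, rng.normal(18.0, 3.0, n), 0.0)

cash_flow = planned_cf + operating - event_loss  # aggregated target figure

# The rare event superimposes a second mode on the loss side, analogous
# to the bimodal cash-flow distribution observed in the case study.
q001 = np.quantile(cash_flow, 0.001)             # 99.9% loss quantile
print(f"mean={cash_flow.mean():.2f}  q0.1%={q001:.2f}")
```

Even though the event occurs in only 2% of the simulated scenarios, it dominates the 99.9% quantile, which is why the tail must be analyzed separately from the body of the distribution.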
Indications of the possible existence of tail risks could already be found with simple statistical parameters such as the range, semivariance or semistandard deviation, skewness, kurtosis or excess kurtosis, and a consideration of the 99.9% quantile. The histograms show that a bimodal distribution is present here. For further analysis, the tail risks were separated from the main distribution. In the present example, the separation could already be carried out by simply locating the minimum between the two modes. In less clear-cut cases, it is more precise, and scientifically necessary, to separate the two distributions using a statistical method; here, we applied the “dip test of unimodality” (Hartigan and Hartigan 1985).
The separation of the distributions allows an analysis of the distribution of tail risks to be performed separately. The calibration of the cash flows has shown that the distribution functions that come closest to the empirical distribution function are those that are also used in extreme value theory. Therefore, in the next step, we applied extreme value theory to the data basis of the tail risks.
The combination of statistical modeling of a company with an analysis of risks at high confidence levels has thus far been little studied in connection with the detection of risks endangering a company, although this is very important for corporate management and capital-lending credit institutions. Due to this particular importance, three risk assessment methods were applied to a real-world example and compared to determine loss risks at high quantiles. In addition to the empirical determination, two methodological approaches were used. On the one hand, a separate distribution—out of sixty possible distributions—was adapted for the tail, and, on the other hand, the GPD was adapted to the tail dataset as a unique tail model. Both methods of model building led—also in comparison to empirical risk determination—to comparably good results in risk assessment. For the company example, the comparison of methods shows that fitting a GPD models the empirical data slightly better, especially in the tail area. If model uncertainties with respect to the tail model are to be avoided, then the above result is an indication that modeling with GPD is preferable in practice.
A limitation of the approach presented is that a detailed risk analysis requires high-quality, quantitative data collection. In countries (e.g., Germany) where an early risk detection system is in place, it can be assumed that the required data are available.
The aim of future research is to examine what degree of risk the individual risk variables can generate in combination. This would allow statements on how strongly individual companies are exposed to tail risks. The present study focused on tail risks emerging in cash flows. A further step would be to examine how the aggregation of risks affects other financially relevant variables. Of particular interest here is the risk profile of rating ratios, covenants, and financing conditions for companies. From this, it can be deduced what risk coverage potential companies need to maintain to be able to offset risks that threaten their existence. From the banks’ perspective, the approach developed above for risk management in companies is well known (European Banking Authority 2020) and allows the assessment of risks in the context of a lending process. In the future, banks may be inclined to require such assessment processes from companies. A further development from the banks’ perspective would be to require companies to deposit some form of equity collateral for high-risk loans.
Finally, it should be mentioned that the approach presented here could also serve to develop an even greater awareness of risks among companies and banks. Specifically, risks that in combination can lead to significant tail risks can be identified at an early stage and eliminated by means of suitable countermeasures in the respective company before they materialize and threaten the company’s existence.

Author Contributions

Conceptualization, C.J.B., D.E. and I.H.; methodology, C.J.B., D.E. and I.H.; software, D.E. and I.H.; validation, C.J.B., D.E. and I.H.; formal analysis, C.J.B., D.E. and I.H.; investigation, C.J.B., D.E. and I.H.; resources, C.J.B., D.E. and I.H.; data curation, D.E. and I.H.; writing—original draft preparation, C.J.B., D.E. and I.H.; writing—review and editing, C.J.B., D.E. and I.H.; visualization, D.E. and I.H.; supervision, C.J.B., D.E. and I.H.; project administration, I.H.; funding acquisition, C.J.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to thank Wehrspohn GmbH & Co. KG for generously providing the Risk Kit tool for our research.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Tail Distributions for Periods t1, …, t5 and TV

As described in Section 3.2, a separate distribution is fitted to the empirical distribution function in the tail area for all periods t1, …, t5, and TV. Figure A1 compares the results.
Figure A1. Densities and fitted distribution for all periods t1, …, t5, and TV (cash flow in mio. euro).

Appendix B. Generalized Pareto Distribution

The GPD is usually expressed as a two-parameter distribution and follows the distribution function:
$$F(x) = 1 - \left(1 + \frac{\xi x}{\sigma}\right)^{-1/\xi}$$
where σ is a positive scale parameter and ξ is a shape parameter (sometimes called the tail parameter). The density function is described as
$$f(x) = \frac{1}{\sigma}\left(1 + \frac{\xi x}{\sigma}\right)^{-\frac{1+\xi}{\xi}}$$
with support $0 \le x < \infty$ for $\xi \ge 0$ and $0 \le x \le -\sigma/\xi$ for $\xi < 0$ (Embrechts et al. 1997; McNeil et al. 2015). The mean and variance are given by $E[x] = \sigma/(1-\xi)$ and $\mathrm{Var}[x] = \sigma^2/\big((1-\xi)^2(1-2\xi)\big)$, respectively. Thus, the mean and variance of the GPD are positive and finite only for $\xi < 1$ and $\xi < 0.5$, respectively. For special values of $\xi$, the GPD reduces to other distributions: for $\xi = 0$ it becomes the exponential distribution, and for $\xi = -1$ the uniform distribution. For $\xi > 0$, the GPD has a long tail to the right and resembles a reparameterized version of the usual Pareto distribution (Embrechts et al. 1997). Several areas of applied statistics have used the latter range of $\xi$ to model datasets that exhibit this form of long tail.
As summarized by Hoffmann and Börner (2021), the preferred method in the literature for estimating the parameters of the GPD is the well-studied maximum likelihood method (Davison 1984; Smith 1984, 1985; Hosking and Wallis 1987). Choulakian and Stephens (2001) stated that it is theoretically possible to have data sets for which no solution to the likelihood equations exists, and they concluded that, in practice, this is extremely rare. In many practical applications, the estimated shape parameter $\hat{\xi}$ is in the range between −0.5 and 0.5, and a solution to the likelihood equations exists (Hosking and Wallis 1987; Choulakian and Stephens 2001). For practical and theoretical reasons, these authors limit their attention to this range of values. Within this range, the mean and the variance are positive, and the usual properties of maximum likelihood estimation such as consistency and asymptotic efficiency hold. Excluding constructed exceptions, this parameter section covers the distributions known in the financial sector, including those in which the (unknown) underlying parent distribution may have fat tails (Embrechts et al. 1997).
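The moment formulas above can be checked numerically against scipy's GPD implementation (parameter values arbitrary):

```python
from scipy import stats

# For xi < 1 the mean is sigma / (1 - xi); for xi < 0.5 the variance is
# sigma^2 / ((1 - xi)^2 * (1 - 2*xi)).
xi, sigma = 0.3, 2.0
mean, var = stats.genpareto.stats(xi, scale=sigma, moments='mv')

print(mean, sigma / (1 - xi))                          # both ~2.857
print(var, sigma**2 / ((1 - xi)**2 * (1 - 2 * xi)))    # both ~20.408
```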

Appendix C. Modeling the Tail of a Distribution

For a very large class of distribution functions, the GPD can be used as a model for the tail; see (Embrechts et al. 1997). The large class of parent distributions includes all common distributions that play a role in finance. Therefore, almost no uncertainty exists regarding the model selection for the tail of an unknown parent distribution. The required quantiles and risk indicators derived from them can then be determined at high confidence levels with sufficient certainty. A certain threshold u divides the parent distribution into two areas: a body and a tail region (Hoffmann and Börner 2021), and the GPD as the tail model is used to evaluate tail risks. This approach is common practice for more accurately calculating high quantiles (European Parliament 2009; Basel Committee on Banking Supervision 2009).
The scale parameter and shape parameter can be easily estimated using the maximum likelihood method if the threshold value u is known. Various authors have proposed methods for determining the appropriate threshold u and subsequently the GPD as a model for the tail from empirical data; see (Chapelle et al. 2005; Bader et al. 2018). Most methods require the setting of parameters, which often requires experience and hinders full automation of the modeling process. User intervention should be avoided in our investigations, so we follow Hoffmann and Börner (2021), who have developed a fully automatic method for determining the threshold value u and the determination of the tail model. The algorithm is implemented in a software tool, which is freely available on github.com (Bruhn 2022). We briefly summarize the description of the algorithm according to Hoffmann and Börner (2021).
The algorithm is based on a suitable distance measure $\hat{R}_n = \hat{R}_n(F_n, \hat{F})$ as a function of the estimated GPD tail model $\hat{F}(x)$ and the empirical distribution function $F_n$ of Kolmogorov (1933). As a convenient measure of the distance or “discrepancy” between the empirical distribution function $F_n(x)$ and a model $F(x)$, the weighted mean square error is usually considered, which was introduced in the context of statistical test procedures by Cramér (1928b), von Mises (1931), and Smirnov (1936); compare also (Shorack and Wellner 2009) and the references therein. In decision theory (Ferguson 1967), the weighted mean square error has broad applicability in the determination of the unknown parameters of distributions via minimum distance methods (Wolfowitz 1957; Blyth 1970; Parr and Schucany 1980; Boos 1982).
The distance measure $\hat{R}_n$ includes a nonnegative weight function, a suitable preassigned function that accentuates the difference between the distribution functions in the range where the distance measure is desired to have sensitivity. For an equal weight, $\hat{R}_n$ is the Cramér–von Mises distance $W^2$ used in the corresponding statistic (Cramér 1928b; von Mises 1931), whereas when the tails are weighted heavily, it equals the Anderson–Darling distance $A^2$ used in the corresponding statistic (Anderson and Darling 1952, 1954). The Anderson–Darling distance weights the difference between the two distributions more heavily at both ends of the distribution $F(x)$ simultaneously. For a more theoretical overview and comparisons with other distance measures, see, e.g., (Stephens 1974; Shorack and Wellner 2009).
In contrast, Ahmad et al. (1988) used an asymmetric weight function, resulting in the statistic $AL_n^2$ for the lower tail and its counterpart $AU_n^2$ for the upper tail. The distance measure $\hat{R}_n$ should weight the deviations between the measured and modeled data more strongly in the tail area. Thus, the distance measure of Ahmad et al. (1988) is used in the algorithm to delineate the tail region and determine the threshold $u$ (Hoffmann and Börner 2021).
The automated modeling process can then be constructed using the following algorithm:
  • Sort the random sample taken from an unknown parent distribution in descending order: $x_{(1)} \ge x_{(2)} \ge \dots \ge x_{(n)}$;
  • For each $k = 2, \dots, n$, find the estimates of the parameters of the GPD. Note: For numerical reasons, the algorithm starts at $k = 2$;
  • Calculate the probabilities $\hat{F}(x_{(i)})$ for $i = 1, \dots, k$ with the estimated GPD and determine the distance $\hat{R}_k$ for $k = 2, \dots, n$;
  • Find the index $k^*$ of the minimum of the distance $\hat{R}_k$.
Then, the optimal threshold value is estimated by $\hat{u} = x_{(k^*)}$, and the model of the tail of the unknown parent distribution is given by the estimated GPD $\hat{F}(x)$, which itself is determined from the subset $x_{(1)} \ge x_{(2)} \ge \dots \ge x_{(k^*)}$.
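The steps above can be sketched in Python. This is a simplified illustration, not the reference implementation (Bruhn 2022): the sample and threshold grid are kept small for speed, and the symmetric Anderson–Darling distance replaces the upper-tail variant $AU_n^2$ of Ahmad et al. (1988):

```python
import numpy as np
from scipy import stats

def anderson_darling(z):
    """Anderson-Darling distance of sorted model probabilities z
    (ascending, strictly inside (0, 1)) to the uniform distribution."""
    m = z.size
    i = np.arange(1, m + 1)
    return -m - np.mean((2 * i - 1) * (np.log(z) + np.log1p(-z[::-1])))

def find_tail(sample, k_min=30, step=5):
    """Simplified threshold search: for each candidate threshold x_(k),
    fit a GPD to the excesses of the k larger values and keep the
    threshold whose fit has the smallest model-data distance."""
    x = np.sort(sample)[::-1]              # descending order statistics
    best = (np.inf, np.nan, np.nan, np.nan)
    for k in range(k_min, x.size, step):   # coarse grid for speed
        excesses = x[:k] - x[k]            # strictly positive excesses over x_(k)
        xi, _, sigma = stats.genpareto.fit(excesses, floc=0)
        z = np.sort(stats.genpareto.cdf(excesses, xi, scale=sigma))
        d = anderson_darling(z)
        if d < best[0]:
            best = (d, x[k], xi, sigma)
    return best

rng = np.random.default_rng(3)
# Stand-in sample from an "unknown" heavy-tailed parent distribution.
sample = stats.lognorm.rvs(0.9, scale=3.0, size=500, random_state=rng)
d, u_hat, xi_hat, sigma_hat = find_tail(sample)
print(f"threshold={u_hat:.2f}  xi={xi_hat:.3f}  sigma={sigma_hat:.3f}")
```

The reference implementation evaluates every candidate index $k$, uses the asymmetric upper-tail distance, and adds further numerical safeguards; this sketch only conveys the structure of the search.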

References

  1. Ahmad, Muhammad Idrees, Colin David Sinclair, and Barrie D. Spurr. 1988. Assessment of flood frequency models using empirical distribution function statistics. Water Resources Research 24: 1323–28. [Google Scholar] [CrossRef]
  2. Anderson, Theodore Wilbur, and Donald Allan Darling. 1952. Asymptotic Theory of Certain “Goodness of Fit” Criteria Based on Stochastic Processes. The Annals of Mathematical Statistics 23: 193–212. [Google Scholar] [CrossRef]
  3. Anderson, Theodore Wilbur, and Donald Allan Darling. 1954. A Test of Goodness of Fit. Journal of the American Statistical Association 49: 765–69. [Google Scholar] [CrossRef]
  4. Ashman, Keith A., Christina M. Bird, and Stephen E. Zepf. 1994. Detecting bimodality in astronomical datasets. The Astronomical Journal 108: 2348–61. [Google Scholar] [CrossRef]
  5. Bader, Brian, Jun Yan, and Xuebin Zhang. 2018. Automated threshold selection for extreme value analysis via Goodness-of-Fit tests with application to batched return level mapping. The Annals of Applied Statistics 12: 310–29. [Google Scholar] [CrossRef]
  6. Balkema, August Aimé, and Laurens Franciscus Maria de Haan. 1974. Residual life time at great age. The Annals of Probability 2: 792–804. [Google Scholar] [CrossRef]
  7. Basel Committee on Banking Supervision. 2009. Observed Range of Practice in Key Elements of Advanced Measurement Approaches (AMA). pp. 1–79. Available online: https://www.bis.org/publ/bcbs160b.pdf (accessed on 25 October 2023).
  8. Blyth, Colin Ross. 1970. On the inference and decision models of statistics. The Annals of Mathematical Statistics 41: 1034–58. [Google Scholar] [CrossRef]
  9. Boos, Dennis Dale. 1982. Minimum anderson-darling estimation. Communication in Statistics—Theory and Methods 11: 2747–74. [Google Scholar] [CrossRef]
  10. Bruhn, Pascal. 2022. FindTheTail—Extreme Value Theory. Available online: https://github.com/PascalBruhn/FindTheTail/releases/tag/v1.1.1 (accessed on 25 August 2023).
  11. Bruhn, Pascal, and Dietmar Ernst. 2022. Assessing the Risk Characteristics of the Cryptocurrency Market: A GARCH-EVT-Copula Approach. Journal of Risk and Financial Management 15: 346. [Google Scholar] [CrossRef]
  12. Bundesverband deutscher Unternehmensberatungen. 2022. Grundsätze ordnungsgemäßer Planung (GOP). BDU Mitteilung. Available online: https://www.bdu.de/media/3706/grundsaetze-ordnungsgemaesser-planung-gop-30.pdf (accessed on 25 August 2023).
  13. Chapelle, Ariane, Yves Crama, Georges Hübner, and Jean-Philippe Peters. 2005. Measuring and managing operational risk in financial sector: An integrated framework. Social Science Research Network Electronic Journal, 1–33. [Google Scholar] [CrossRef]
  14. Chernobai, Anna Sergeevna, and Svetlozar Todorov Rachev. 2006. Applying robust methods to operational risk modeling. Journal of Operational Risk 1: 27–41. [Google Scholar] [CrossRef]
  15. Choulakian, Vramshabouh, and Michael A. Stephens. 2001. Goodness-of-Fit tests for the generalized pareto distribution. Technometrics 43: 478–84. [Google Scholar] [CrossRef]
  16. Cramér, Harald. 1928a. On the composition of elementary errors: First Paper. Scandinavian Actuarial Journal 1928: 13–74. [Google Scholar] [CrossRef]
  17. Cramér, Harald. 1928b. On the composition of elementary errors: Second Paper: Statistical Applications. Scandinavian Actuarial Journal 1: 141–80. [Google Scholar] [CrossRef]
  18. Davison, Anthony Colin. 1984. Modelling Excess over High Threshold, with an Application: Statistical Extremes and Applications. Dordrecht: Reidel Publishing Company. [Google Scholar]
  19. Davison, Anthony Colin, and Richard Lyttleton Smith. 1990. Models for exceedances over high thresholds (with comments). Journal of the Royal Statistical Society 52: 393–442. [Google Scholar]
  20. de Fontnouvelle, Patrick Pierre, Eric Steven Rosengren, and John Samuel Jordan. 2005. Implication of Alternative Operational Risk Modeling Techniques. Cambridge and Boston: National Bureau of Economic Research. [Google Scholar]
  21. Dutta, Kabir, and Jason Perry. 2006. A Tale of Tails: An Empirical Analysis of Loss Distribution Models for Estimating Operational Risk Capital. Working Papers, No. 06-13. Boston: Federal Reserve Bank of Boston. Available online: https://www.econstor.eu/bitstream/10419/55646/1/514906588.pdf (accessed on 25 October 2023).
  22. Embrechts, Paul, Claudia Klüppelberg, and Thomas Mikosch. 1997. Modelling Extremal Events: For Insurance and Finance. Applications of Mathematics, Stochastic Modelling and Applied Probability. Berlin/Heidelberg: Springer. [Google Scholar]
  23. Ernst, Dietmar. 2022a. Bewertung von KMU: Simulationsbasierte Unternehmensplanung und Unternehmensbewertung. ZfKE—Zeitschrift für KMU und Entrepreneurship 70: 91–108. [Google Scholar] [CrossRef]
  24. Ernst, Dietmar. 2022b. Simulation-Based Business Valuation: Methodical Implementation in the Valuation Practice. Journal of Risk and Financial Management 15: 200. [Google Scholar] [CrossRef]
  25. Ernst, Dietmar. 2023. Risk Measures in Simulation-Based Business Valuation: Classification of Risk Measures in Risk Axiom Systems and Application in Valuation Practice. Risks 11: 13. [Google Scholar] [CrossRef]
  26. Ernst, Dietmar, and Werner Gleißner. 2022a. Paradigm Shift in Finance: The Transformation of the Theory from Perfect to Imperfect Capital Markets Using the Example of Company Valuation. Journal of Risk and Financial Management 15: 399. [Google Scholar] [CrossRef]
  27. Ernst, Dietmar, and Werner Gleißner. 2022b. Simulation-based Valuation. SSRN Electronic Journal, 1–28. [Google Scholar] [CrossRef]
  28. European Banking Authority. 2020. Guidelines on Loan Origination and Monitoring. pp. 1–28. Available online: https://www.eba.europa.eu/sites/default/documents/files/document_library/Publications/Guidelines/2020/Guidelines%20on%20loan%20origination%20and%20monitoring/884283/EBA%20GL%202020%2006%20Final%20Report%20on%20GL%20on%20loan%20origination%20and%20monitoring.pdf (accessed on 25 October 2023).
  29. European Parliament. 2009. Directive 2009/138/EC of the European Parliament and of the Council of 25 November 2009 on the taking-up and pursuit of the business of Insurance and Reinsurance (Solvency II). Official Journal of the European Union 52: 1–155. [Google Scholar]
  30. Ferguson, Thomas Shelburne. 1967. Mathematical Statistics: A Decision Theoretic Approach. Probability and mathematical statistics a series of monographs and textbooks, 1. New York: Academic Press. [Google Scholar]
  31. Friberg, Richard. 2015. Managing Risk and Uncertainty: A Strategic Approach. Cambridge and London: MIT Press. [Google Scholar]
  32. Gleißner, Werner. 2008. Erwartungstreue Planung und Planungssicherheit. Controlling 20: 81–88. [Google Scholar] [CrossRef]
  33. Gleißner, Werner. 2016. Bandbreitenplanung, Planungssicherheit und Monte-Carlo-Simulation mehrerer Planjahre. Controller Magazin 4: 16–23. [Google Scholar]
  34. Gleißner, Werner, and Dietmar Ernst. 2019. Company Valuation as Result of Risk Analysis: Replication Approach as an Alternative to the CAPM. Business Valuation OIV Journal 1: 3–18. [Google Scholar] [CrossRef]
  35. Gnedenko, Boris Vladimirovich. 1943. Sur la distribution limite du terme maximum d’une série aléatoire. Annals of Mathematics 44: 423–53. [Google Scholar] [CrossRef]
  36. Gnedin, Oleg Y. 2010. Quantifying bimodality: Corpus ID: 50260393, Draft version, February 19, 2010. Computer Science 1–4. Available online: https://api.semanticscholar.org/CorpusID:50260393 (accessed on 25 October 2023).
  37. Hallin, Marc. 2014. Gauss–Markov Theorem in Statistics. Edited by Narayanaswamy Balakrishnan, Theodore Colton, Brian Everitt, Walter Piegorsch, Fabrizio Ruggeri and Jozef L. Teugels. Hoboken: Statistics Reference Online Wiley, pp. 1–3. [Google Scholar]
  38. Hartigan, John Anthony, and Patrick Michael Hartigan. 1985. The Dip Test of Unimodality. The Annals of Statistics 13: 70–84. [Google Scholar] [CrossRef]
  39. Hoffmann, Ingo, and Christoph J. Börner. 2020a. Tail models and the statistical limit of accuracy in risk assessment. The Journal of Risk Finance 21: 201–16. [Google Scholar] [CrossRef]
  40. Hoffmann, Ingo, and Christoph J. Börner. 2020b. The risk function of the goodness-of-fit tests for tail models. Statistical Papers 62: 1853–69. [Google Scholar] [CrossRef]
  41. Hoffmann, Ingo, and Christoph J. Börner. 2021. Body and tail: An automated tail-detecting procedure. The Journal of Risk 23: 1–27. [Google Scholar] [CrossRef]
  42. Hosking, Jonathan Richard Morley, and James R. Wallis. 1987. Parameter and quantile estimation for the generalized Pareto distribution. Technometrics 29: 339–49. [Google Scholar] [CrossRef]
  43. Hubbard, Douglas W. 2020. The Failure of Risk Management. Why It’s Broken and How to Fix It, 2nd ed. Hoboken: Wiley. [Google Scholar]
  44. Institut der Wirtschaftsprüfer. 2018. Berücksichtigung des Verschuldungsgrads bei der Bewertung von Unternehmen. IDW Praxishinweis 2: 352. [Google Scholar]
  45. Jaeckel, Peter. 2002. Monte Carlo Methods in Finance. Wiley Finance Series. Chichester and Weinheim: Wiley. [Google Scholar]
  46. Jorion, Philippe. 2007. Value at Risk: The New Benchmark for Managing Financial Risk, 3rd ed. New York: McGraw-Hill. [Google Scholar]
  47. Kendall, Maurice G., and Alan Stuart. 1977. The Advanced Theory of Statistics, 4th ed. London: Charles Griffin & Co. Ltd.
  48. Kolmogorov, Andrej Nikolaevic. 1933. Sulla determinazione empirica di una legge di distribuzione. Giornale dell'Istituto Italiano degli Attuari 4: 83–91.
  49. McNeil, Alexander John, and Thomas Saladin. 1997. The Peaks over Threshold Method for Estimating High Quantiles of Loss Distributions. In Proceedings of the XXVIIth International ASTIN Colloquium. Cairns.
  50. McNeil, Alexander John, Rüdiger Frey, and Paul Embrechts. 2015. Quantitative Risk Management: Concepts, Techniques and Tools. Princeton Series in Finance, revised ed. Princeton and Oxford: Princeton University Press.
  51. Pachamanova, Dessislava A., and Frank J. Fabozzi. 2010. Simulation and Optimization in Finance: Modeling with MATLAB, @Risk, or VBA. The Frank J. Fabozzi Series. Hoboken: Wiley.
  52. Parr, William C., and William R. Schucany. 1980. Minimum distance and robust estimation. Journal of the American Statistical Association 75: 616–24.
  53. Pickands, James, III. 1975. Statistical inference using extreme order statistics. The Annals of Statistics 3: 119–31.
  54. Rieg, Robert. 2018. Eine Prognose ist (noch) kein Plan. Operative Planung in Zeiten von Predictive Analytics. Controlling 30: 22–28.
  55. Rieg, Robert, and Werner Gleißner. 2022. Was ist ein erwartungstreuer Plan? WPg—Die Wirtschaftsprüfung 75: 1407–14.
  56. Shorack, Galen R., and Jon A. Wellner. 2009. Empirical Processes with Applications to Statistics. Classics in Applied Mathematics, vol. 59. Philadelphia: Society for Industrial and Applied Mathematics.
  57. Smirnov, Nikolai Vasilieyvich. 1936. Sur la distribution de w²-criterion (critérium de Richard Edler von Mises). Comptes Rendus de l'Académie des Sciences Paris 202: 449–52.
  58. Smirnov, Nikolai Vasilieyvich. 1948. Table for Estimating the Goodness of Fit of Empirical Distributions. The Annals of Mathematical Statistics 19: 279–81.
  59. Smith, Richard Lyttleton. 1984. Threshold methods for sample extremes. In Statistical Extremes and Applications 131: 621–38.
  60. Smith, Richard Lyttleton. 1985. Maximum likelihood estimation in a class of nonregular cases. Biometrika 72: 67–90.
  61. Steinke, Karl-Heinz, and Benjamin W. Löhr. 2014. Bandbreitenplanung als Instrument des Risikocontrollings. Ein Beispiel aus der Praxis bei der Deutschen Lufthansa AG. Controlling 26: 616–23.
  62. Stephens, Michael A. 1974. EDF Statistics for Goodness of Fit and Some Comparisons. Journal of the American Statistical Association 69: 730–37.
  63. Stulz, René M. 2003. Risk Management & Derivatives, 1st ed. Mason: South-Western/Thomson. ISBN 978-0538861014.
  64. van Montfort, Martin A. J., and Jan Victor Witter. 1985. Testing exponentiality against generalized Pareto distribution. Journal of Hydrology 78: 305–15.
  65. von Mises, Richard Edler. 1928. Wahrscheinlichkeit, Statistik und Wahrheit. Schriften zur wissenschaftlichen Weltauffassung. Wien: Springer.
  66. von Mises, Richard Edler. 1931. Wahrscheinlichkeitsrechnung und ihre Anwendung in der Statistik und theoretischen Physik. Leipzig und Wien: Franz Deuticke.
  67. Wehrspohn, Uwe, and Dietmar Ernst. 2022. When Do I Take Which Distribution? A Statistical Basis for Entrepreneurial Applications, 1st ed. Springer eBook Collection. Cham: Springer International Publishing. ISBN 9783031073298.
  68. Wehrspohn, Uwe, and Sergey Zhilyakov. 2011. Rapid Prototyping of Monte-Carlo Simulations. SSRN Electronic Journal, 1–59.
  69. Wehrspohn, Uwe, and Sergey Zhilyakov. 2021. Live access to large international data sources with Risk Kit Data—Case Study: Impact of the oil price and FX and interest rates on profit and loss. SSRN Electronic Journal, 1–34.
  70. Wolfowitz, Jacob. 1957. The minimum distance method. The Annals of Mathematical Statistics 28: 75–88.
Figure 1. Histogram of the CFtEs in mio. euro for period t1 as a result of 20,000 Monte Carlo steps.
Figure 2. Histogram of the CFtEs in mio. euro for period TV as a result of 20,000 Monte Carlo steps.
Figure 3. Body and tail separation, shown exemplarily for period t1, comparing different methods of quantile estimation (cash flow in mio. euro).
Figure 4. Comparison of different methods for evaluating the tail risks with different key figures (mio. euro) at the 99.9% confidence level.
Table 1. Risk models (in %, excerpt).
Model                      t1      t2      t3      t4      t5      TV
TRIANGULAR DISTRIBUTION
Sales growth               1.97    1.85    1.66    1.29    0.69    1.05
  - Minimum value          1.00    1.00    1.00    1.00    0.50    0.50
  - Most likely value      3.00    2.50    2.00    1.50    1.00    1.00
  - Maximum value          4.00    4.00    2.50    2.50    1.50    1.50
NORMAL DISTRIBUTION
Material expenses          6.85    1.84   −4.43   −1.38   −1.61   −0.30
  - Expected value         0.00    0.00    0.00    0.00    0.00    0.00
  - Standard deviation     5.00    5.00    5.00    5.00    5.00    5.00
Personnel expenses         1.61    5.85   −0.82    0.59    4.46    0.52
  - Expected value         0.00    0.00    0.00    0.00    0.00    0.00
  - Standard deviation     3.00    3.00    3.00    3.00    3.00    3.00
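To illustrate how a risk model of this kind feeds the Monte Carlo simulation, the t1 column of Table 1 can be sampled as follows. This is a minimal sketch with hypothetical variable names, not the authors' implementation; the article's simulation was run with 20,000 steps.

```python
import numpy as np

rng = np.random.default_rng(42)
n_steps = 20_000  # number of Monte Carlo steps, as in the article

# Period t1 parameters from Table 1 (all in %)
sales_growth = rng.triangular(left=1.00, mode=3.00, right=4.00, size=n_steps)
material_shock = rng.normal(loc=0.00, scale=5.00, size=n_steps)   # material expenses
personnel_shock = rng.normal(loc=0.00, scale=3.00, size=n_steps)  # personnel expenses

# With 20,000 draws the sample mean approaches the triangular mean, (1 + 3 + 4) / 3
print(round(sales_growth.mean(), 2))
```

Each Monte Carlo step then propagates one joint draw of these risk factors through the planned income statement, yielding one simulated cash flow to equity per step.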
Table 2. Income statement (mio. euro, excerpt).
Plan Target                      t1      t2      t3      t4      t5      TV
NET SALES
  random                         76.6    78.8    80.4    82.8    83.2    84.0
  unbiased                       77.2    79.4    80.8    82.3    83.0    83.8
  target                         77.5    79.4    81.0    82.2    83.0    83.8
  risk                           77.2    79.4    80.8    82.3    83.0    83.8
  + Change in inventories
  fixed                           0.1     0.1     0.1     0.1     0.1     0.1
TOTAL OUTPUT
  random                         76.7    78.9    80.5    82.9    83.3    84.1
  unbiased                       77.3    79.5    80.9    82.4    83.1    83.9
  target                         77.6    79.5    81.1    82.3    83.1    83.9
  risk                           −0.3     0.0    −0.1     0.1     0.0     0.0
  ./. Material Expenses
  random                         41.4    39.0    41.4    43.6    49.0    49.8
  unbiased                       42.1    43.2    44.0    44.9    45.2    45.7
  target                         42.2    43.3    44.1    44.8    45.2    45.7
  risk                           −0.1     0.0    −0.1     0.1     0.0     0.0
GROSS PROFIT
  random                         35.4    39.9    39.2    39.3    34.4    34.3
  unbiased                       35.2    36.2    36.9    37.6    37.9    38.3
  target                         35.4    36.2    37.0    37.5    37.9    38.3
  risk                           −0.1     0.0     0.0     0.1     0.0     0.0
  ./. Personnel Expenses
  random                         19.5    21.0    20.7    22.0    20.6    21.0
  unbiased                       19.9    20.4    20.8    21.2    21.3    21.6
  target                         19.9    20.4    20.8    21.1    21.3    21.6
  risk                           −0.1     0.0     0.0     0.0     0.0     0.0
  ./. Other Operating Expenses
  random                         11.2    13.6    11.8    14.8    12.2    12.3
  unbiased                       12.1    12.4    12.6    12.9    13.0    13.2
  target                         11.3    11.6    11.8    12.0    12.1    12.3
  risk                            0.8     0.8     0.8     0.9     0.8     0.9
Table 3. Cash flow to equity (mio. euro).
Plan Target                        t1      t2      t3      t4      t5      TV
EBIT (random)                      8.07    2.90   −0.32    2.25    0.24    6.08
  + Interest income                0.07    0.11    0.11    0.11    0.11    0.13
  ./. Interest expenses            0.91    0.94    0.96    0.96    0.96    0.97
EBT                                7.23    2.07   −1.18    1.40   −0.60    5.25
  ./. Taxes                        2.16    0.62   −0.35    0.42   −0.18    1.56
Operating Profit After Taxes       5.08    1.45   −0.83    0.98   −0.42    3.68
  + Depreciation                   1.90    1.90    1.90    1.90    1.90    1.90
  ./. CAPEX                        2.70    2.70    2.00    1.40    2.00    2.60
Change in…
  + Provisions                     0.04    0.04    0.04    0.04    0.04    0.04
  ./. Working Capital             −0.06    0.04    0.06    0.28   −0.03    0.54
  + Interest-bearing Liabilities   0.69    0.75    0.11   −0.21    0.11    0.19
Cashflow to Equity (rand.)         5.06    1.40   −0.84    1.03   −0.34    2.67
Cashflow to Equity (unbias.)       1.15    0.62    0.82    1.18    1.02    0.53
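The rows of Table 3 form a simple waterfall from EBIT down to the cash flow to equity. Recomputing period t1 from the rounded table values reproduces the reported figure; this is a purely arithmetic check, not the authors' code, and small deviations (e.g., 5.07 vs. 5.08 for operating profit after taxes) stem from rounding to two decimals.

```python
# Period t1 values from Table 3 (mio. euro, rounded to two decimals)
ebit = 8.07
ebt = ebit + 0.07 - 0.91            # + interest income, ./. interest expenses
opat = ebt - 2.16                   # ./. taxes (table shows 5.08 due to rounding)
cfte = (opat
        + 1.90 - 2.70               # + depreciation, ./. CAPEX
        + 0.04 - (-0.06) + 0.69)    # + provisions, ./. working capital,
                                    # + interest-bearing liabilities

print(round(ebt, 2), round(cfte, 2))  # 7.23 5.06, matching Table 3
```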
Table 4. Cash flow to equity (mio. euro)—statistical key figures.
Key Figure              t1       t2       t3       t4       t5       TV
Minimum                −16.28   −18.43   −18.37   −17.46   −18.54   −19.66
Maximum                  7.10     6.55     7.04     8.72     7.67     7.15
Range                   23.38    24.97    25.41    26.18    26.20    26.81
Mean                     1.15     0.62     0.82     1.18     1.02     0.53
Variance                 6.59     6.67     7.27     7.00     7.31     7.36
Skewness                −3.09    −3.00    −3.06    −3.05    −3.07    −3.02
Kurtosis                16.47    16.24    16.44    16.60    16.64    16.41
Excess                  13.47    13.24    13.44    13.60    13.64    13.41
Quantile (99.9%)       −14.92   −15.87   −16.21   −15.62   −16.08   −16.67
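The key figures in Table 4 follow standard definitions. The sketch below uses SciPy on placeholder data (the article's actual input is the set of 20,000 simulated CFtE values, which is not reproduced here); variable names are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cf = rng.normal(loc=1.15, scale=2.57, size=20_000)  # placeholder for simulated cash flows

key_figures = {
    "Minimum": cf.min(),
    "Maximum": cf.max(),
    "Range": cf.max() - cf.min(),
    "Mean": cf.mean(),
    "Variance": cf.var(ddof=1),
    "Skewness": stats.skew(cf),
    "Kurtosis": stats.kurtosis(cf, fisher=False),   # Pearson kurtosis
    "Excess": stats.kurtosis(cf, fisher=True),      # kurtosis minus 3
    # Losses sit in the left tail, so the 99.9% confidence level
    # corresponds to the 0.1% quantile of the cash flow distribution
    "Quantile (99.9%)": np.quantile(cf, 0.001),
}
```

Note that "Excess" equals "Kurtosis" minus 3, which is exactly how the corresponding rows of Table 4 (and Table 6) are related.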
Table 5. Tail distribution.
Distribution     Kolmogorov–Smirnov   Anderson–Darling   Chi-Square        AIC   AIC (f.c.)
Weibull                       0.029              0.405       17.657   2518.471     2518.513
PowerNormal                   0.031              0.477       19.902   2521.925     2521.967
ModPert                       0.032              0.566       18.281   2515.895     2515.966
Pert                          0.041              1.542       26.123   2521.905     2521.948
Logistic                      0.047              2.238       43.803   2555.551     2555.572
Fisk                          0.048              2.521       46.238   2560.342     2560.384
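Goodness-of-fit statistics of the kind reported in Table 5 can be obtained with standard routines. The sketch below fits a Weibull distribution to synthetic positive tail losses (the sign-flipped loss side) and computes the Kolmogorov–Smirnov statistic and the AIC; it is an illustration under assumed data, not the authors' toolchain or candidate set.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
losses = 4.0 * rng.weibull(2.5, size=1_000)  # placeholder tail losses (made positive)

# Maximum likelihood fit of a two-parameter Weibull (location fixed at 0)
shape, loc, scale = stats.weibull_min.fit(losses, floc=0)

# Kolmogorov-Smirnov statistic against the fitted distribution
ks_stat, ks_p = stats.kstest(losses, "weibull_min", args=(shape, loc, scale))

# Akaike information criterion: 2k - 2 ln L, with k = 2 fitted parameters
log_lik = stats.weibull_min.logpdf(losses, shape, loc, scale).sum()
aic = 2 * 2 - 2 * log_lik
```

Smaller values of the test statistics and of the AIC indicate a better fit, which is how the candidates in Table 5 are ranked.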
Table 6. Cash flow to equity (mio. euro)—statistical key figures of tail data.
Key Figure              t1       t2       t3       t4       t5       TV
Minimum                −16.28   −18.43   −18.37   −17.46   −18.54   −19.66
Maximum                 −5.24    −6.45    −6.42    −5.78    −6.64    −7.37
Range                   11.04    11.98    11.96    11.68    11.89    12.29
Mean                   −10.51   −11.23   −11.51   −11.10   −11.50   −12.08
Variance                 4.95     4.91     5.36     5.45     5.44     5.10
Skewness                −0.13    −0.45    −0.26    −0.15    −0.30    −0.41
Kurtosis                 2.61     3.05     2.68     2.60     2.82     2.81
Excess                  −0.39     0.05    −0.32    −0.40    −0.18    −0.19
Quantile (99.9%)       −16.25   −18.38   −18.34   −17.44   −18.54   −19.61
Table 7. Cash flow to equity (mio. euro)—tail analysis.
Confidence: p = 99.9%          t1       t2       t3       t4       t5       TV
Quantile
  Empirical                   −14.92   −15.87   −16.21   −16.08   −16.08   −16.67
  Tail Distribution           −14.71   −15.56   −16.00   −15.44   −16.02   −16.49
  Tail of Tail Distribution   −14.77   −15.88   −16.04   −15.53   −16.12   −16.51
Mean VaR
  Empirical                   −16.07   −16.49   −17.03   −17.26   −17.10   −17.20
  Tail Distribution           −15.86   −16.19   −16.81   −16.62   −17.04   −17.02
  Tail of Tail Distribution   −15.92   −16.50   −16.86   −16.71   −17.14   −17.04
Mean CVaR
  Empirical                   −16.59   −17.44   −17.77   −17.41   −18.02   −17.98
  Tail Distribution           −17.06   −17.40   −18.01   −17.75   −18.00   −18.25
  Tail of Tail Distribution   −16.49   −17.39   −17.75   −17.39   −18.00   −17.98
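The "Empirical" rows of Table 7 correspond to standard tail-risk estimators applied to the simulation output. The sketch below computes the empirical 99.9% quantile and the conditional value at risk (expected shortfall) on placeholder data, under the convention that losses sit in the left tail of the cash flow distribution; the table's averaged "Mean VaR" figures are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
cf = rng.normal(loc=1.0, scale=2.6, size=20_000)  # placeholder for simulated CFtE values

p = 0.999  # confidence level used in Table 7

# Empirical quantile: the cash flow undercut with probability 1 - p
quantile = np.quantile(cf, 1 - p)

# Conditional value at risk: mean of all outcomes at or below the quantile
cvar = cf[cf <= quantile].mean()
```

As in Table 7, the CVaR lies further out in the tail than the quantile, since it averages over the outcomes beyond it.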
Börner, C.J.; Ernst, D.; Hoffmann, I. Tail Risks in Corporate Finance: Simulation-Based Analyses of Extreme Values. J. Risk Financial Manag. 2023, 16, 469. https://doi.org/10.3390/jrfm16110469
