
Table of Contents

Risks, Volume 8, Issue 1 (March 2020) – 31 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: We aim to understand the dynamics of Bitcoin blockchain trading volumes and, specifically, how [...]
Open Access Article
Longevity Risk Measurement of Life Annuity Products
Risks 2020, 8(1), 31; https://doi.org/10.3390/risks8010031 - 18 Mar 2020
Viewed by 369
Abstract
This paper captures and measures the longevity risk generated by an annuity product. The longevity risk is materialized by the uncertain level of the future liability compared to the initially forecasted or expected value. Herein we compute the solvency capital (SC) of an insurer selling such a product within a single risk setting for three different life annuity products. Within the Solvency II framework, we capture the mortality of policyholders by means of the Hull–White model. Using numerical analysis, we identify the product that requires the most SC from an insurer and the most profitable product for a shareholder. For policyholders we identify the cheapest product by computing the premiums and the most profitable product by computing the benefit levels. We further study how sensitive the SC is with respect to some significant parameters. Full article
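The solvency capital (SC) referred to in the abstract is, loosely, the 99.5% one-year quantile of the liability beyond its expected value. A minimal Monte Carlo sketch of that calculation, with a plain normal shock standing in for the paper's Hull–White mortality model (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for the liability distribution of an annuity
# product one year ahead (the paper derives this from a Hull-White
# mortality model; here it is just a normal shock around a base value).
n_sims = 100_000
base_liability = 100.0
liability = base_liability + rng.normal(0.0, 5.0, n_sims)

# Solvency II-style solvency capital: the 99.5% quantile of the
# liability minus its expected value.
sc = np.quantile(liability, 0.995) - liability.mean()
print(round(sc, 2))
```

In the paper the liability distribution comes from the annuity product and the mortality model; only the quantile-minus-mean step above is generic.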

Open Access Article
Gerber–Shiu Function in a Class of Delayed and Perturbed Risk Model with Dependence
Risks 2020, 8(1), 30; https://doi.org/10.3390/risks8010030 - 17 Mar 2020
Viewed by 447
Abstract
This paper considers a risk model perturbed by a diffusion process, with a time delay in the arrival of the first two claims, and takes into account dependence between claim amounts and the claim inter-occurrence times. Assuming that the arrival time of the first claim follows a generalized mixed equilibrium distribution, we derive the integro-differential equations of the Gerber–Shiu function and its defective renewal equations. For the situation where claim amounts follow an exponential distribution, we provide an explicit expression of the Gerber–Shiu function. Numerical examples are provided to illustrate the ruin probability. Full article

Open Access Article
Mean-Variance Optimization Is a Good Choice, But for Other Reasons than You Might Think
Risks 2020, 8(1), 29; https://doi.org/10.3390/risks8010029 - 14 Mar 2020
Cited by 1 | Viewed by 487
Abstract
Mean-variance portfolio optimization is more popular than optimization procedures that employ downside risk measures such as the semivariance, despite the latter being more in line with the preferences of a rational investor. We describe strengths and weaknesses of semivariance and how to minimize it for asset allocation decisions. We then apply this approach to a variety of simulated and real data and show that the traditional approach based on the variance generally outperforms it. The results hold even if the CVaR is used, because all downside risk measures are difficult to estimate. The popularity of variance as a measure of risk appears therefore to be rationally justified. Full article
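As a sketch of the comparison described above, one can minimize the variance and the semivariance of a two-asset portfolio over a weight grid (synthetic normal returns; the paper uses richer simulated and real data and more assets):

```python
import numpy as np

rng = np.random.default_rng(1)
# 2500 days of synthetic daily returns for two assets.
returns = rng.normal([0.0005, 0.0008], [0.01, 0.02], size=(2500, 2))

def semivariance(port):
    # Mean squared shortfall below the portfolio's own mean return.
    downside = np.minimum(port - port.mean(), 0.0)
    return np.mean(downside ** 2)

weights = np.linspace(0.0, 1.0, 101)        # weight on asset 1
ports = [w * returns[:, 0] + (1 - w) * returns[:, 1] for w in weights]

w_var = weights[np.argmin([p.var() for p in ports])]
w_semi = weights[np.argmin([semivariance(p) for p in ports])]
print(w_var, w_semi)
```

For a symmetric return distribution like this one the two minimizers nearly coincide, which is part of the intuition behind the paper's conclusion that the harder-to-estimate downside measures add little.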

Open Access Article
General Compound Hawkes Processes in Limit Order Books
Risks 2020, 8(1), 28; https://doi.org/10.3390/risks8010028 - 14 Mar 2020
Viewed by 480
Abstract
In this paper, we study various new Hawkes processes. Specifically, we construct general compound Hawkes processes and investigate their properties in limit order books. With regard to these general compound Hawkes processes, we prove a Law of Large Numbers (LLN) and a Functional Central Limit Theorem (FCLT) for several specific variations. We apply several of these FCLTs to limit order books to study the link between price volatility and order flow, where the volatility in mid-price changes is expressed in terms of parameters describing the arrival rates and the mid-price process. Full article
(This article belongs to the Special Issue Stochastic Modelling in Financial Mathematics)
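The Hawkes processes underlying the compound constructions above can be simulated with Ogata's thinning algorithm; a minimal sketch for a single exponential-kernel Hawkes process (parameters illustrative, not taken from the paper):

```python
import numpy as np

def intensity(t, events, mu, alpha, beta):
    # lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i))
    return mu + alpha * sum(np.exp(-beta * (t - s)) for s in events)

def simulate_hawkes(mu, alpha, beta, horizon, rng):
    """Ogata's thinning algorithm for a Hawkes process with an
    exponentially decaying self-excitation kernel."""
    t, events = 0.0, []
    while True:
        # Current intensity is a valid upper bound: it only decays
        # until the next accepted event.
        lam_bar = intensity(t, events, mu, alpha, beta)
        t += rng.exponential(1.0 / lam_bar)
        if t > horizon:
            return events
        if rng.uniform() <= intensity(t, events, mu, alpha, beta) / lam_bar:
            events.append(t)

rng = np.random.default_rng(2)
events = simulate_hawkes(mu=1.0, alpha=0.5, beta=1.5, horizon=100.0, rng=rng)
# Branching ratio alpha/beta = 1/3, so the stationary mean intensity
# is mu / (1 - 1/3) = 1.5 events per unit time.
print(len(events))
```

A compound Hawkes process as studied in the paper would then attach a mark (e.g. a price increment) to each event time.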

Open Access Feature Paper Article
CARL and His POT: Measuring Risks in Commodity Markets
Risks 2020, 8(1), 27; https://doi.org/10.3390/risks8010027 - 13 Mar 2020
Viewed by 575
Abstract
The present study aims at modelling market risk for four commodities, namely West Texas Intermediate (WTI) crude oil, natural gas, gold and corn for the period 2007–2017. To this purpose, we use Extreme Value Theory (EVT) together with a set of Conditional Auto-Regressive Logit (CARL) models to predict risk measures for the futures return series of the considered commodities. In particular, the Peaks-Over-Threshold (POT) method has been combined with the Indicator and Absolute Value CARL models in order to predict the probability of tail events and the Value-at-Risk and the Expected Shortfall risk measures for the selected commodities. Backtesting procedures indicate that generally CARL models augmented with specific implied volatility outperform the benchmark model and thus they represent a valuable tool to anticipate and manage risks in the markets. Full article
(This article belongs to the Special Issue Model Risk and Risk Measures)

Open Access Article
The Leaders, the Laggers, and the “Vulnerables”
Risks 2020, 8(1), 26; https://doi.org/10.3390/risks8010026 - 12 Mar 2020
Viewed by 408
Abstract
We examine the lead-lag effect between the large and the small capitalization financial institutions by constructing two global weekly rebalanced indices. We focus on the 10% of stocks that “survived” all the rebalancings by remaining constituents of the indices. We sort them according to their systemic importance using three measures: the marginal expected shortfall (MES), which measures an individual institution’s vulnerability with respect to the market; the network-based MES, which captures the vulnerability to risks generated by institutions’ interrelations; and the Bayesian network-based MES, which takes into account different network structures among institutions’ interrelations. We also check whether the lead-lag effect holds in terms of systemic risk, which would imply systemic risk transmission from the large- to the small-capitalization institutions, and find mixed behavior compared to the index returns. Additionally, we find that all the systemic risk indicators increase in magnitude during the financial crisis. Full article
(This article belongs to the Special Issue Financial Networks in Fintech Risk Management)

Open Access Article
Importance Sampling in the Presence of PD-LGD Correlation
Risks 2020, 8(1), 25; https://doi.org/10.3390/risks8010025 - 10 Mar 2020
Viewed by 425
Abstract
This paper seeks to identify computationally efficient importance sampling (IS) algorithms for estimating large deviation probabilities for the loss on a portfolio of loans. The related literature typically assumes that realised losses on defaulted loans can be predicted with certainty, i.e., that loss given default (LGD) is non-random. In practice, however, LGD is impossible to predict and tends to be positively correlated with the default rate; the latter phenomenon is typically referred to as PD-LGD correlation (here PD refers to probability of default, which is often used synonymously with default rate). There is a large literature on modelling stochastic LGD and PD-LGD correlation, but a dearth of literature on using importance sampling to estimate large deviation probabilities in those models. Numerical evidence indicates that the proposed algorithms are extremely effective at reducing the computational burden associated with obtaining accurate estimates of large deviation probabilities across a wide variety of PD-LGD correlation models that have been proposed in the literature. Full article
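To illustrate the mechanics of importance sampling for portfolio tail probabilities, the sketch below mean-shifts the systematic factor in a plain one-factor Gaussian model with fixed LGD — deliberately simpler than the PD-LGD correlation models the paper treats (all parameters invented):

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()
rng = np.random.default_rng(3)

n_obligors, rho, p_default = 1000, 0.3, 0.01
c = nd.inv_cdf(p_default)                 # per-obligor default threshold
loss_level = 0.05                         # tail event: > 5% of names default
n_sims, shift = 20_000, -2.5              # push the systematic factor into the tail

z = rng.normal(shift, 1.0, n_sims)        # sample from the tilted proposal
# Likelihood ratio phi(z) / phi(z - shift) restores unbiasedness.
weights = np.exp(-shift * z + 0.5 * shift ** 2)

# Conditional default probability given z in a one-factor Gaussian model.
p_z = np.vectorize(nd.cdf)((c - np.sqrt(rho) * z) / np.sqrt(1.0 - rho))
loss = rng.binomial(n_obligors, p_z) / n_obligors

prob_estimate = float(np.mean((loss > loss_level) * weights))
print(prob_estimate)
```

With random, PD-correlated LGD the conditional loss distribution changes, but the tilt-and-reweight pattern above is the common core of the IS algorithms the paper develops.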

Open Access Feature Paper Article
On Computations in Renewal Risk Models—Analytical and Statistical Aspects
Risks 2020, 8(1), 24; https://doi.org/10.3390/risks8010024 - 04 Mar 2020
Viewed by 510
Abstract
We discuss aspects of numerical methods for the computation of Gerber–Shiu or discounted penalty functions in renewal risk models. We take an analytical point of view, link this function to a partial integro-differential equation, and propose a numerical method for its solution. We show weak convergence of an approximating sequence of piecewise-deterministic Markov processes (PDMPs) to derive the convergence of the procedures. In a subsequent step, we use PDMP characteristics estimated from simulated sample data and study the effect of this estimation on the numerically computed Gerber–Shiu functions. It can be seen that the main source of instability stems from the hazard rate estimator. Interestingly, results obtained using MC methods are hardly affected by estimation. Full article
(This article belongs to the Special Issue Loss Models: From Theory to Applications)

Open Access Feature Paper Article
Rational Savings Account Models for Backward-Looking Interest Rate Benchmarks
Risks 2020, 8(1), 23; https://doi.org/10.3390/risks8010023 - 03 Mar 2020
Cited by 1 | Viewed by 694
Abstract
Interest rate benchmarks are currently undergoing a major transition. The LIBOR benchmark is planned to be discontinued by the end of 2021 and superseded by what ISDA calls an adjusted risk-free rate (RFR). ISDA has recently announced that the LIBOR replacement will most likely be constructed from a compounded running average of RFR overnight rates over a period matching the LIBOR tenor. This new backward-looking benchmark is markedly different when compared with LIBOR. It is measurable only at the end of the term in contrast to the forward-looking LIBOR, which is measurable at the start of the term. The RFR provides a simplification because the cash flows and the discount factors may be derived from the same discounting curve, thus avoiding—on a superficial level—any multi-curve complications. We develop a new class of savings account models and derive a novel interest rate system specifically designed to facilitate a high degree of tractability for the pricing of RFR-based fixed-income instruments. The rational form of the savings account models under the risk-neutral measure enables the pricing in closed form of caplets, swaptions and futures written on the backward-looking interest rate benchmark. Full article
(This article belongs to the Special Issue Interest Rate Risk Modelling in Transformation)
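The backward-looking benchmark described above is essentially the compounded running average of realized overnight RFR fixings; a small sketch of that compounding (hypothetical rates, ACT/360 day count assumed):

```python
# Compound a run of realized overnight rates into a backward-looking
# term rate, in the style of the RFR fallback described in the abstract.
# Rates and day counts below are invented; ACT/360 is assumed.
overnight_rates = [0.0150, 0.0152, 0.0151, 0.0149, 0.0153]
day_counts = [1, 1, 1, 3, 1]     # calendar days each fixing applies (3 = weekend)

growth = 1.0
for r, d in zip(overnight_rates, day_counts):
    growth *= 1.0 + r * d / 360.0

total_days = sum(day_counts)
term_rate = (growth - 1.0) * 360.0 / total_days
print(round(term_rate, 6))
```

Note the rate is only known once the last fixing in the window has been published, which is exactly the measurability point the abstract contrasts with forward-looking LIBOR.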
Open Access Article
Prediction of Claims in Export Credit Finance: A Comparison of Four Machine Learning Techniques
Risks 2020, 8(1), 22; https://doi.org/10.3390/risks8010022 - 01 Mar 2020
Viewed by 777
Abstract
This study evaluates four machine learning (ML) techniques (Decision Trees (DT), Random Forests (RF), Neural Networks (NN) and Probabilistic Neural Networks (PNN)) on their ability to accurately predict export credit insurance claims. Additionally, we compare the performance of the ML techniques against a simple benchmark (BM) heuristic. The analysis is based on the utilisation of a dataset provided by the Berne Union, which is the most comprehensive collection of export credit insurance data and has been used in only two scientific studies so far. All ML techniques performed relatively well in predicting whether or not claims would be incurred, and, with limitations, in predicting the order of magnitude of the claims. No satisfactory results were achieved predicting actual claim ratios. RF performed significantly better than DT, NN and PNN against all prediction tasks, and most reliably carried their validation performance forward to test performance. Full article
(This article belongs to the Special Issue Machine Learning in Insurance)

Open Access Article
Machine Learning in Least-Squares Monte Carlo Proxy Modeling of Life Insurance Companies
Risks 2020, 8(1), 21; https://doi.org/10.3390/risks8010021 - 21 Feb 2020
Viewed by 604
Abstract
Under the Solvency II regime, life insurance companies are asked to derive their solvency capital requirements from the full loss distributions over the coming year. Since the industry is currently far from being endowed with sufficient computational capacities to fully simulate these distributions, the insurers have to rely on suitable approximation techniques such as the least-squares Monte Carlo (LSMC) method. The key idea of LSMC is to run only a few wisely selected simulations and to process their output further to obtain a risk-dependent proxy function of the loss. In this paper, we present and analyze various adaptive machine learning approaches that can take over the proxy modeling task. The studied approaches range from ordinary and generalized least-squares regression variants over generalized linear model (GLM) and generalized additive model (GAM) methods to multivariate adaptive regression splines (MARS) and kernel regression routines. We justify the combinability of their regression ingredients in a theoretical discourse. Further, we illustrate the approaches in slightly disguised real-world experiments and perform comprehensive out-of-sample tests. Full article
(This article belongs to the Special Issue Machine Learning in Insurance)
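The core LSMC step — regressing noisy inner-simulation losses on outer-scenario risk factors to obtain a proxy function — can be sketched with an ordinary least-squares polynomial fit (one risk factor and an invented loss function; the paper studies far richer regression families):

```python
import numpy as np

rng = np.random.default_rng(4)

# Outer scenarios of a single risk factor (e.g. an equity shock), each
# with a noisy inner-simulation estimate of the loss -- the "few wisely
# selected simulations" of LSMC.
x = rng.uniform(-3.0, 3.0, 500)
true_loss = 2.0 * x + 0.8 * x ** 2                 # unknown in practice
noisy_loss = true_loss + rng.normal(0.0, 2.0, x.size)

# Ordinary least-squares proxy: quadratic polynomial in the risk factor.
coeffs = np.polyfit(x, noisy_loss, deg=2)
proxy = np.poly1d(coeffs)

# Out-of-sample check of the proxy against the true loss function.
x_test = np.linspace(-3.0, 3.0, 50)
max_err = np.max(np.abs(proxy(x_test) - (2.0 * x_test + 0.8 * x_test ** 2)))
print(round(max_err, 3))
```

The GLM, GAM, MARS and kernel approaches in the paper replace the polynomial basis here while keeping the same fit-then-evaluate workflow.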

Open Access Article
A Survey of the Individual Claim Size and Other Risk Factors Using Credibility Bonus-Malus Premiums
Risks 2020, 8(1), 20; https://doi.org/10.3390/risks8010020 - 21 Feb 2020
Viewed by 493
Abstract
In this paper, a flexible count regression model based on a bivariate compound Poisson distribution is introduced in order to distinguish between different types of claims according to the claim size. Furthermore, it allows us to analyse the factors that affect the number of claims above and below a given claim size threshold in an automobile insurance portfolio. Relevant properties of this model are given. Next, a mixed regression model is derived to compute credibility bonus-malus premiums based on the individual claim size and other risk factors such as gender, type of vehicle, driving area, or age of the vehicle. Results are illustrated by using a well-known automobile insurance portfolio dataset. Full article

Open Access Article
Delta Boosting Implementation of Negative Binomial Regression in Actuarial Pricing
Risks 2020, 8(1), 19; https://doi.org/10.3390/risks8010019 - 19 Feb 2020
Viewed by 490
Abstract
This study proposes an efficacious approach to analysing over-dispersed insurance frequency data, as it is imperative for insurers to have decisive, informative insights for precisely underwriting and pricing insurance products, retaining the existing customer base, and gaining an edge in the highly competitive retail insurance market. The delta boosting implementation of the negative binomial regression, both by one-parameter estimation and by a novel two-parameter estimation, was tested on empirical data. Accurate parameter estimation of the negative binomial regression is complicated by considerations of incomplete insurance exposures, negative convexity, and co-linearity. These issues mainly originate from the unique nature of insurance operations and the adoption of a distribution outside the exponential family. We study how the issues can significantly impact the quality of estimation. In addition to a novel approach to simultaneously estimating two parameters in regression through boosting, we further enrich the study by proposing an alteration of the base algorithm to address the problems. The algorithm was able to withstand the competition against popular regression methodologies on a real-life dataset. Common diagnostics were applied to compare the performance of the relevant candidates, leading to our conclusion to move from the light-tailed Poisson to the negative binomial for over-dispersed data, from the generalized linear model (GLM) to boosting for non-linear and interaction patterns, and from one-parameter to two-parameter estimation to reflect reality more closely. Full article
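As a toy illustration of the one- versus two-parameter distinction above: the negative binomial has a mean and a dispersion parameter, and the dispersion can be estimated jointly with the mean by maximum likelihood (simulated data; the paper estimates both within a boosting algorithm, not by the grid search used here):

```python
import math
import numpy as np

rng = np.random.default_rng(5)

# Simulate over-dispersed claim counts: negative binomial with mean 2
# and size r = 1.5, so variance = mean + mean^2 / r > mean (unlike Poisson).
r_true, mean = 1.5, 2.0
counts = rng.negative_binomial(r_true, r_true / (r_true + mean), 10_000)

def nb_loglik(r, data):
    m = data.mean()
    p = r / (r + m)                      # profile out p given r via the mean
    return sum(math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
               + r * math.log(p) + k * math.log(1 - p) for k in data)

# One-dimensional search over the dispersion parameter r.
grid = np.linspace(0.5, 3.0, 26)
r_hat = grid[np.argmax([nb_loglik(r, counts) for r in grid])]
print(r_hat)
```

One-parameter schemes fix the dispersion and fit only the mean; the recovered dispersion here is what the two-parameter boosting variant learns per observation.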

Open Access Article
Application of Diffusion Models in the Analysis of Financial Markets: Evidence on Exchange Traded Funds in Europe
Risks 2020, 8(1), 18; https://doi.org/10.3390/risks8010018 - 14 Feb 2020
Viewed by 476
Abstract
Exchange traded funds (ETFs) are financial innovations that may be considered part of the index financial instruments category, together with stock index derivatives. The aim of this paper is to explore the trajectories of, and formulate predictions regarding, the spread of ETFs on the financial markets in six European countries. It demonstrates ETFs’ development trajectories with regard to stock index futures and options, which may be considered their substitutes, e.g., in risk management. In this paper, we use mathematical models of the diffusion of innovation that allow unveiling the evolutionary patterns of ETF turnover; the time span of the analysis is 2004–2015, i.e., the period of dynamic changes on the European ETF markets. Such an approach has so far rarely been applied in this field of research. Our findings indicate that the development of ETF markets has been strongest in Italy and France and weaker in the other countries, especially Poland and Hungary. The results highlight significant differences among European countries and prove that diffusion has not taken place in all the cases; there are also considerable differences in the predicted development paths. Full article
(This article belongs to the Special Issue Quantitative Methods in Economics and Finance)

Open Access Article
Stochastic Mortality Modelling for Dependent Coupled Lives
Risks 2020, 8(1), 17; https://doi.org/10.3390/risks8010017 - 11 Feb 2020
Viewed by 954
Abstract
Broken-heart syndrome is the most common form of short-term dependence, inducing a temporary increase in an individual’s force of mortality upon the occurrence of extreme events, such as the loss of a spouse. Socioeconomic influences on bereavement processes allow for suggestion of variability in the significance of short-term dependence between couples in countries of differing levels of economic development. Motivated by analysis of a Ghanaian data set, we propose a stochastic mortality model of the joint mortality of paired lives and the causal relation between their death times, in a less economically developed country than those considered in existing studies. The paired mortality intensities are assumed to be non-mean-reverting Cox–Ingersoll–Ross processes, reflecting the reduced concentration of the initial loss impact apparent in the data set. The effect of the death on the mortality intensity of the surviving spouse is given by a mean-reverting Ornstein–Uhlenbeck process which captures the subsiding nature of the mortality increase characteristic of broken-heart syndrome. Inclusion of a population wide volatility parameter in the Ornstein–Uhlenbeck bereavement process gives rise to a significant non-diversifiable risk, heightening the importance of the dependence assumption in this case. Applying the model proposed to an insurance pricing problem, we obtain the appropriate premium under consideration of dependence between coupled lives through application of the indifference pricing principle. Full article
(This article belongs to the Special Issue Interplay between Financial and Actuarial Mathematics)
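The subsiding mortality increase characteristic of broken-heart syndrome can be sketched with an Euler scheme for a mean-reverting Ornstein–Uhlenbeck excess intensity, in the spirit of the bereavement process described above (parameters invented, not fitted to the Ghanaian data set):

```python
import numpy as np

rng = np.random.default_rng(8)

# Euler scheme for the OU excess mortality intensity of the surviving
# spouse after the partner's death; mean level is zero, so the initial
# jump subsides at rate kappa. All parameters are illustrative.
kappa, sigma, jump = 2.0, 0.3, 1.0   # reversion speed, volatility, initial jump
dt, n = 1.0 / 252, 252 * 3           # daily steps over three years
x = np.empty(n + 1)
x[0] = jump                          # excess intensity at the death time
for i in range(n):
    x[i + 1] = x[i] - kappa * x[i] * dt + sigma * np.sqrt(dt) * rng.normal()

# The deterministic part decays like exp(-kappa * t), so after three
# years only the volatility-driven fluctuation remains.
print(round(abs(x[-1]), 3))
```

The population-wide volatility parameter sigma is what creates the non-diversifiable risk the abstract highlights: it does not average out across couples.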

Open Access Article
Assessing Asset-Liability Risk with Neural Networks
Risks 2020, 8(1), 16; https://doi.org/10.3390/risks8010016 - 09 Feb 2020
Viewed by 641
Abstract
We introduce a neural network approach for assessing the risk of a portfolio of assets and liabilities over a given time period. This requires a conditional valuation of the portfolio given the state of the world at a later time, a problem that is particularly challenging if the portfolio contains structured products or complex insurance contracts which do not admit closed form valuation formulas. We illustrate the method on different examples from banking and insurance. We focus on value-at-risk and expected shortfall, but the approach also works for other risk measures. Full article

Open Access Article
Portfolio Optimization under Correlation Constraint
Risks 2020, 8(1), 15; https://doi.org/10.3390/risks8010015 - 06 Feb 2020
Viewed by 488
Abstract
We consider the problem of portfolio optimization with a correlation constraint. The framework is the multi-period stochastic financial market setting with one tradable stock, stochastic income, and a non-tradable index. The correlation constraint is imposed on the portfolio and the non-tradable index at some benchmark time horizon. The goal is to maximize the portfolio’s expected exponential utility subject to the correlation constraint. Two types of optimal portfolio strategies are considered: the subgame perfect and the precommitment ones. We find analytical expressions for the constrained subgame perfect (CSGP) and the constrained precommitment (CPC) portfolio strategies. Both these portfolio strategies yield significantly lower risk when compared to the unconstrained setting, at the cost of a small utility loss. The performance of the CSGP and CPC portfolio strategies is similar. Full article
(This article belongs to the Special Issue Systemic Risk in Finance and Insurance)

Open Access Article
Loss Reserving Estimation with Correlated Run-Off Triangles in a Quantile Longitudinal Model
Risks 2020, 8(1), 14; https://doi.org/10.3390/risks8010014 - 03 Feb 2020
Viewed by 504
Abstract
In this paper, we consider a loss reserving model for a general insurance portfolio consisting of a number of correlated run-off triangles that can be embedded within the quantile regression model for longitudinal data. The model proposes a combination of the between- and within-subportfolios (run-off triangles) estimating functions for regression parameter estimation, which take into account the correlation and variation of the run-off triangles. The proposed method is robust to the error correlation structure, improves the efficiency of parameter estimators, and is useful for the estimation of the reserve risk margin and value at risk (VaR) in actuarial and finance applications. Full article
(This article belongs to the Special Issue Loss Models: From Theory to Applications)

Open Access Article
A Comprehensive Stability Indicator for Banks
Risks 2020, 8(1), 13; https://doi.org/10.3390/risks8010013 - 03 Feb 2020
Cited by 1 | Viewed by 484
Abstract
Stability indicators are essential to banks in order to identify instability caused by adverse economic circumstances or increasing risks such as customer defaults. This paper develops a novel comprehensive stability indicator (CSI) that can readily be used by individual banks, or by regulators to benchmark financial health across banks. The CSI incorporates the three key risk factors of Creditworthiness, Conditions and Capital (3Cs), using a traffic light system (green, orange and red) to classify bank risk. The CSI achieves similar outcomes in ranking the risk of 20 US banks to the much more complex US Federal Reserve Dodd–Frank stress tests. Full article
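A hypothetical sketch of a 3Cs-style traffic-light classification — the factor inputs, equal weighting and thresholds below are invented for illustration and are not those of the CSI:

```python
# Hypothetical traffic-light stability score in the spirit of the CSI's
# 3Cs (Creditworthiness, Conditions, Capital), each normalized to [0, 1]
# with higher = healthier. Weights and cut-offs are invented.
def csi(creditworthiness, conditions, capital):
    # Equal-weighted average of the three normalized risk factors.
    return (creditworthiness + conditions + capital) / 3.0

def traffic_light(score):
    if score >= 0.7:
        return "green"
    if score >= 0.4:
        return "orange"
    return "red"

banks = {"A": (0.9, 0.8, 0.7), "B": (0.5, 0.4, 0.6), "C": (0.2, 0.3, 0.4)}
for name, factors in banks.items():
    print(name, traffic_light(csi(*factors)))
```

The appeal of such an indicator, as the abstract notes, is that a single transparent score can be computed by an individual bank without running a full regulatory stress test.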

Open Access Article
Do We Need Stochastic Volatility and Generalised Autoregressive Conditional Heteroscedasticity? Comparing Squared End-Of-Day Returns on FTSE
Risks 2020, 8(1), 12; https://doi.org/10.3390/risks8010012 - 01 Feb 2020
Viewed by 540
Abstract
The paper examines the relative performance of Stochastic Volatility (SV) and Generalised Autoregressive Conditional Heteroscedasticity (GARCH) (1,1) models fitted to ten years of daily data for FTSE. As a benchmark, we used the realized volatility (RV) of FTSE sampled at 5 min intervals taken from the Oxford Man Realised Library. Both models demonstrated comparable performance and were correlated to a similar extent with RV estimates when measured by ordinary least squares (OLS). However, a crude variant of Corsi’s (2009) Heterogeneous Autoregressive (HAR) model, applied to squared demeaned daily returns on FTSE, appeared to predict the daily RV of FTSE better than either of the two models. Quantile regressions suggest that all three methods capture tail behaviour similarly and adequately. This leads to the question of whether we need either of the two standard volatility models if the simple expedient of using lagged squared demeaned daily returns provides a better RV predictor, at least in the context of the sample. Full article
(This article belongs to the Special Issue Measuring and Modelling Financial Risk and Derivatives)
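The "crude HAR variant" mentioned above regresses tomorrow's volatility proxy on daily, weekly and monthly averages of its past values; a sketch on a simulated stand-in series (the paper uses squared demeaned FTSE returns and the Oxford Man RV data):

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated stand-in for a realized-variance series (AR(1) in logs,
# so it is persistent and positive like real RV).
n = 1500
log_rv = np.empty(n)
log_rv[0] = -9.0
for t in range(1, n):
    log_rv[t] = -0.9 + 0.9 * log_rv[t - 1] + rng.normal(0.0, 0.4)
rv = np.exp(log_rv)

# HAR regressors: yesterday's RV and its weekly / monthly averages.
def back_mean(x, t, w):
    return x[t - w:t].mean()

rows = range(22, n - 1)
X = np.column_stack([
    np.ones(len(rows)),
    [rv[t - 1] for t in rows],              # daily lag
    [back_mean(rv, t, 5) for t in rows],    # weekly average
    [back_mean(rv, t, 22) for t in rows],   # monthly average
])
y = np.array([rv[t] for t in rows])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1 - np.sum((y - X @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 3))
```

The three horizons approximate the heterogeneous reaction times of market participants, which is the original motivation of Corsi's HAR model.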

Open Access Feature Paper Article
General Conditions of Weak Convergence of Discrete-Time Multiplicative Scheme to Asset Price with Memory
Risks 2020, 8(1), 11; https://doi.org/10.3390/risks8010011 - 30 Jan 2020
Viewed by 527
Abstract
We present general conditions for the weak convergence of a discrete-time additive scheme to a stochastic process with memory in the space D[0, T]. Then we investigate the convergence of the related multiplicative scheme to a process that can be interpreted as an asset price with memory. As an example, we study an additive scheme that converges to fractional Brownian motion, which is based on the Cholesky decomposition of its covariance matrix. The second example is a scheme converging to the Riemann–Liouville fractional Brownian motion. The multiplicative counterparts for these two schemes are also considered. As an auxiliary result of independent interest, we obtain sufficient conditions for monotonicity along diagonals in the Cholesky decomposition of the covariance matrix of a stationary Gaussian process. Full article
(This article belongs to the Special Issue Stochastic Modelling in Financial Mathematics)
Open Access Article
A Discrete-Time Approach to Evaluate Path-Dependent Derivatives in a Regime-Switching Risk Model
Risks 2020, 8(1), 9; https://doi.org/10.3390/risks8010009 - 29 Jan 2020
Viewed by 442
Abstract
This paper provides a discrete-time approach for evaluating financial and actuarial products characterized by path-dependent features in a regime-switching risk model. In each regime, a binomial discretization of the asset value is obtained by modifying the parameters used to generate the lattice in the highest-volatility regime, thus allowing a simultaneous asset description in all the regimes. The path-dependent feature is treated by computing representative values of the path-dependent function on a fixed number of effective trajectories reaching each lattice node. The prices of the analyzed products are calculated as the expected values of their payoffs registered over the lattice branches, invoking a quadratic interpolation technique if the regime changes, and capturing the switches among regimes by using a transition probability matrix. Some numerical applications are provided to support the model, which is also useful to accurately capture the market risk concerning path-dependent financial and actuarial instruments. Full article
(This article belongs to the Special Issue Model Risk and Risk Measures)
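The single-regime building block behind the lattice described above is a plain binomial discretization of the asset value. A minimal CRR tree for a European call is sketched below with illustrative parameters; the paper's model additionally recalibrates the lattice per regime from the highest-volatility regime and tracks path-dependent payoffs along trajectories.

```python
import numpy as np

# Plain CRR binomial lattice pricing a European call (illustrative parameters).
S0, K, r, sigma, T, n = 100.0, 100.0, 0.03, 0.2, 1.0, 200
dt = T / n
u = np.exp(sigma * np.sqrt(dt))
d = 1.0 / u
q = (np.exp(r * dt) - d) / (u - d)           # risk-neutral up probability

# terminal asset values and payoffs at the n-th step
j = np.arange(n + 1)
payoff = np.maximum(S0 * u ** j * d ** (n - j) - K, 0.0)

# backward induction through the lattice
disc = np.exp(-r * dt)
v = payoff
for _ in range(n):
    v = disc * (q * v[1:] + (1 - q) * v[:-1])
price = float(v[0])
```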
Open Access Article
Modelling Unobserved Heterogeneity in Claim Counts Using Finite Mixture Models
Risks 2020, 8(1), 10; https://doi.org/10.3390/risks8010010 - 29 Jan 2020
Viewed by 523
Abstract
When modelling insurance claim count data, the actuary often observes overdispersion and an excess of zeros that may be caused by unobserved heterogeneity. A common approach to accounting for overdispersion is to consider models with some overdispersed distribution as opposed to Poisson models. Zero-inflated, hurdle and compound frequency models are typically applied to insurance data to account for such features of the data. However, a natural way to deal with unobserved heterogeneity is to consider mixtures of simpler models. In this paper, we consider k-finite mixtures of some typical regression models. This approach has interesting features: first, it allows for overdispersion, with the zero-inflated model arising as a special case, and second, it allows for an elegant interpretation based on the typical clustering application of finite mixture models. k-finite mixture models are applied to a car insurance claim dataset in order to analyse whether the problem of unobserved heterogeneity requires a richer structure for risk classification. Our results show that the data consist of two subpopulations for which the regression structure is different. Full article
(This article belongs to the Special Issue Machine Learning in Insurance)
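A k-finite mixture with k = 2 can be fitted by a short EM loop. The sketch below uses toy claim counts drawn from two latent subpopulations (the rates and weight are illustrative, not the paper's car-insurance estimates) and omits the regression structure, fitting only component means and a mixing weight.

```python
import numpy as np

# EM for a 2-component Poisson mixture on toy claim counts from two latent
# subpopulations (illustrative rates and weight).
rng = np.random.default_rng(2)
n = 5000
low = rng.random(n) < 0.7                     # 70% low-risk policyholders
counts = np.where(low, rng.poisson(0.1, n), rng.poisson(2.0, n))

# log k! for every observed count, for a stable Poisson log-pmf
logfact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, counts.max() + 1)))))

def log_pois(k, lam):
    return k * np.log(lam) - lam - logfact[k]

pi, lam = 0.5, np.array([0.5, 3.0])           # initial guesses
for _ in range(300):
    # E-step: posterior probability each count came from component 0
    l0 = np.log(pi) + log_pois(counts, lam[0])
    l1 = np.log(1.0 - pi) + log_pois(counts, lam[1])
    r0 = 1.0 / (1.0 + np.exp(l1 - l0))
    # M-step: update mixing weight and component means
    pi = r0.mean()
    lam = np.array([np.sum(r0 * counts) / r0.sum(),
                    np.sum((1.0 - r0) * counts) / (1.0 - r0).sum()])
```

The fitted weight and rates recover the two subpopulations, which is the clustering interpretation the abstract refers to.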
Open Access Article
Pricing of Commodity Derivatives on Processes with Memory
Risks 2020, 8(1), 8; https://doi.org/10.3390/risks8010008 - 21 Jan 2020
Viewed by 480
Abstract
Spot option prices, forwards and options on forwards relevant for the commodity markets are computed when the underlying process S is modelled as an exponential of a process ξ with memory as, e.g., a Volterra equation driven by a Lévy process. Moreover, the interest rate and a risk premium ρ representing storage costs, illiquidity, convenience yield or insurance costs, are assumed to be stochastic. When the interest rate is deterministic and the risk premium is explicitly modelled as an Ornstein-Uhlenbeck type of dynamics with a mean level that depends on the same memory term as the commodity, the process (ξ; ρ) has an affine structure under the pricing measure Q and an explicit expression for the option price is derived in terms of the Fourier transform of the payoff function. Full article
Open Access Editorial
Acknowledgement to Reviewers of Risks in 2019
Risks 2020, 8(1), 7; https://doi.org/10.3390/risks8010007 - 21 Jan 2020
Viewed by 407
Abstract
The editorial team greatly appreciates the reviewers who have dedicated their considerable time and expertise to the journal’s rigorous editorial process over the past 12 months, regardless of whether the papers are finally published or not [...] Full article
Open Access Article
Markov Chain Monte Carlo Methods for Estimating Systemic Risk Allocations
Risks 2020, 8(1), 6; https://doi.org/10.3390/risks8010006 - 15 Jan 2020
Viewed by 590
Abstract
In this paper, we propose a novel framework for estimating systemic risk measures and risk allocations based on Markov Chain Monte Carlo (MCMC) methods. We consider a class of allocations whose jth component can be written as some risk measure of the jth conditional marginal loss distribution given the so-called crisis event. By considering a crisis event as an intersection of linear constraints, this class of allocations covers, for example, conditional Value-at-Risk (CoVaR), conditional expected shortfall (CoES), VaR contributions, and range VaR (RVaR) contributions as special cases. For this class of allocations, analytical calculations are rarely available, and numerical computations based on Monte Carlo (MC) methods often provide inefficient estimates due to the rare-event character of the crisis events. We propose an MCMC estimator constructed from a sample path of a Markov chain whose stationary distribution is the conditional distribution given the crisis event. Efficient constructions of Markov chains, such as the Hamiltonian Monte Carlo and Gibbs sampler, are suggested and studied depending on the crisis event and the underlying loss distribution. The efficiency of the MCMC estimators is demonstrated in a series of numerical experiments. Full article
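The MCMC idea can be illustrated on the simplest member of the class: a Gibbs sampler targeting the conditional law of two i.i.d. standard-normal losses given the crisis event {X1 + X2 ≥ c}, a single linear constraint. This is an illustrative sketch under those assumptions, not the paper's implementation; each full conditional is a truncated normal, sampled by inverting the cdf on the truncated range.

```python
import numpy as np
from statistics import NormalDist

# Gibbs sampler for (X1, X2) i.i.d. N(0,1) conditioned on {X1 + X2 >= c}.
nd = NormalDist()
c = 4.0                               # crisis threshold: rare under plain MC
rng = np.random.default_rng(3)

x1, x2 = c / 2, c / 2                 # start inside the crisis region
draws = []
for _ in range(20000):
    # full conditional of X1 | X2 = x2: N(0,1) truncated to [c - x2, inf)
    lo = nd.cdf(c - x2)
    x1 = nd.inv_cdf(lo + (1.0 - lo) * rng.random())
    lo = nd.cdf(c - x1)
    x2 = nd.inv_cdf(lo + (1.0 - lo) * rng.random())
    draws.append((x1, x2))

draws = np.array(draws)
# MCMC estimate of the allocation E[X1 | X1 + X2 >= c] after burn-in
alloc = draws[5000:, 0].mean()
```

Every draw satisfies the crisis constraint by construction, which is exactly what avoids the rare-event inefficiency of plain Monte Carlo mentioned in the abstract.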
Open Access Article
Measuring Financial Contagion and Spillover Effects with a State-Dependent Sensitivity Value-at-Risk Model
Risks 2020, 8(1), 5; https://doi.org/10.3390/risks8010005 - 10 Jan 2020
Viewed by 596
Abstract
In this paper, we measure the size and direction of the spillover effects among European commercial banks, with respect to their size, geographical position, income sources, and systemic importance, for the period from 2006 to 2016, using a state-dependent sensitivity value-at-risk model that conditions on the state of the financial market. While spillover effects are low during normal times, the same shocks cause notable spillovers during the volatile period. The results suggest a high level of interconnectedness across all the European regions, highlighting the importance of large and systemically important banks that create considerable systemic risk during the entire period. Regarding the non-interest income banks, the outcomes reveal an alert signal concerning spillovers spreading to interest income banks. Full article
(This article belongs to the Special Issue Model Risk and Risk Measures)
Open Access Article
Lead Behaviour in Bitcoin Markets
Risks 2020, 8(1), 4; https://doi.org/10.3390/risks8010004 - 04 Jan 2020
Cited by 1 | Viewed by 856
Abstract
We aim to understand the dynamics of Bitcoin blockchain trading volumes and, specifically, how different trading groups in different geographic areas interact with each other. To achieve this aim, we propose an extended Vector Autoregressive model that explains the evolution of trading volumes both in time and in space. The extension is based on network models, which improve on pure autoregressive models by introducing a contemporaneous contagion component that describes contagion effects between trading volumes. Our empirical findings show that transaction activity in Bitcoin is dominated by groups of network participants in Europe and in the United States, consistent with the expectation that market interactions primarily take place in developed economies. Full article
(This article belongs to the Special Issue Financial Networks in Fintech Risk Management)
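The structure of such a network-extended VAR can be sketched as an autoregressive term plus a contemporaneous contagion term routed through an adjacency matrix: x_t = ρ·A·x_t + Φ·x_{t-1} + ε_t, solved as x_t = (I − ρA)⁻¹(Φ·x_{t-1} + ε_t). The matrices below are illustrative, not estimates from blockchain data.

```python
import numpy as np

# Simulating a VAR(1) with a contemporaneous network-contagion term for
# k trading groups (illustrative A, Phi, rho).
rng = np.random.default_rng(5)
k = 3
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
A /= A.sum(axis=1, keepdims=True)        # row-normalized adjacency
Phi = 0.4 * np.eye(k)                    # autoregressive coefficients
rho = 0.3                                # contagion strength
M = np.linalg.inv(np.eye(k) - rho * A)   # contemporaneous multiplier

x = np.zeros(k)
path = []
for _ in range(500):
    x = M @ (Phi @ x + rng.normal(0.0, 1.0, k))
    path.append(x.copy())
path = np.array(path)
```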
Open Access Article
In-Sample Hazard Forecasting Based on Survival Models with Operational Time
Risks 2020, 8(1), 3; https://doi.org/10.3390/risks8010003 - 03 Jan 2020
Viewed by 633
Abstract
We introduce a generalization of the one-dimensional accelerated failure time model allowing the covariate effect to be any positive function of the covariate. This function and the baseline hazard rate are estimated nonparametrically via an iterative algorithm. In an application in non-life reserving, the survival time models the settlement delay of a claim and the covariate effect is often called operational time. The accident date of a claim serves as covariate. The estimated hazard rate is a nonparametric continuous-time alternative to chain-ladder development factors in reserving and is used to forecast outstanding liabilities. Hence, we provide an extension of the chain-ladder framework for claim numbers without the assumption of independence between settlement delay and accident date. Our proposed algorithm is an unsupervised learning approach to reserving that detects operational time in the data and adjusts for it in the estimation process. Advantages of the new estimation method are illustrated in a data set consisting of paid claims from a motor insurance business line on which we forecast the number of outstanding claims. Full article
(This article belongs to the Special Issue Machine Learning in Insurance)
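For reference, the chain-ladder benchmark that the paper's continuous-time hazard estimator generalizes can be computed in a few lines on a toy cumulative run-off triangle (the figures below are made up for illustration).

```python
import numpy as np

# Chain-ladder development factors and reserve on a toy cumulative triangle.
tri = np.array([
    [100.0, 150.0, 170.0, 180.0],
    [110.0, 168.0, 190.0, np.nan],
    [120.0, 175.0, np.nan, np.nan],
    [130.0, np.nan, np.nan, np.nan],
])
n = tri.shape[1]

# volume-weighted development factor from column j to j+1
f = []
for j in range(n - 1):
    col, nxt = tri[:, j], tri[:, j + 1]
    mask = ~np.isnan(nxt)
    f.append(nxt[mask].sum() / col[mask].sum())

# project the lower triangle to ultimates
full = tri.copy()
for i in range(n):
    for j in range(n - 1):
        if np.isnan(full[i, j + 1]):
            full[i, j + 1] = full[i, j] * f[j]

latest = sum(tri[i, n - 1 - i] for i in range(n))   # latest diagonal
reserve = full[:, -1].sum() - latest                # outstanding liability
```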
Open Access Article
Variations of Particle Swarm Optimization for Obtaining Classification Rules Applied to Credit Risk in Financial Institutions of Ecuador
Risks 2020, 8(1), 2; https://doi.org/10.3390/risks8010002 - 30 Dec 2019
Viewed by 735
Abstract
Knowledge generated using data mining techniques is of great interest for organizations, as it facilitates tactical and strategic decision making, generating a competitive advantage. In the special case of credit granting organizations, it is important to clearly define rejection/approval criteria. In this direction, classification rules are an appropriate tool, provided that the rule set has low cardinality and that the antecedent of the rules has few conditions. This paper analyzes different solutions based on Particle Swarm Optimization (PSO) techniques, which are able to construct a set of classification rules with the aforementioned characteristics using information from the borrower and the macroeconomic environment at the time of granting the loan. In addition, to facilitate the understanding of the model, fuzzy logic is incorporated into the construction of the antecedent. To reduce the search time, the particle swarm is initialized by a competitive neural network. Different variants of PSO are applied to three databases of financial institutions in Ecuador. The first institution specializes in massive credit placement. The second institution specializes in consumer credit and business credit lines. Finally, the third institution is a savings and credit cooperative. According to our results, the incorporation of fuzzy logic generates rule sets with greater precision. Full article
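A bare-bones PSO loop illustrates the core velocity/position update the variants above build on. The paper evolves fuzzy classification rules with specialized PSO variants and a competitive-network initialization; the objective here is only a stand-in for rule-set quality, with illustrative hyperparameters.

```python
import numpy as np

# Minimal Particle Swarm Optimization on a toy objective (minimum at (3, 3)).
rng = np.random.default_rng(4)
n_particles, dim, iters = 30, 2, 200
w, c1, c2 = 0.7, 1.5, 1.5                   # inertia / acceleration weights

def objective(x):
    return np.sum((x - 3.0) ** 2, axis=-1)  # stand-in for rule-set quality

pos = rng.uniform(-10.0, 10.0, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[np.argmin(pbest_val)].copy()  # global best position

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    # pull each particle toward its personal best and the global best
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = objective(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)].copy()
```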