Table of Contents

Risks, Volume 5, Issue 3 (September 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Displaying articles 1-11

Research

Open Access Article: Analyzing the Gaver–Lewis Pareto Process under an Extremal Perspective
Risks 2017, 5(3), 33; doi:10.3390/risks5030033
Received: 10 April 2017 / Revised: 14 June 2017 / Accepted: 16 June 2017 / Published: 27 June 2017
PDF Full-text (595 KB) | HTML Full-text | XML Full-text
Abstract
Pareto processes are suitable for modeling stationary heavy-tailed data. Here, we consider the auto-regressive Gaver–Lewis Pareto process and study its tail behavior. We characterize its local and long-range dependence and show that consecutive observations are asymptotically tail independent, a feature that is often misevaluated by the most common extremal models and that is strongly relevant to tail inference. This also reveals clustering at "penultimate" levels. Since linear correlation may not exist in a heavy-tailed context, an alternative diagnostic tool is presented. The derived properties relate to the auto-regressive parameter of the process and provide estimators. The proposals are compared through simulation, and an application to a real dataset illustrates the procedure.
Figures: Figure 1

Open Access Article: Backtesting the Lee–Carter and the Cairns–Blake–Dowd Stochastic Mortality Models on Italian Death Rates
Risks 2017, 5(3), 34; doi:10.3390/risks5030034
Received: 23 December 2016 / Revised: 21 June 2017 / Accepted: 27 June 2017 / Published: 4 July 2017
PDF Full-text (1997 KB) | HTML Full-text | XML Full-text
Abstract
This work proposes a backtesting analysis that compares the Lee–Carter (LC) and the Cairns–Blake–Dowd (CBD) mortality models, employing Italian data. The mortality data come from the Italian National Statistics Institute (ISTAT) database and span the period 1975–2014, over which we computed back-projections to evaluate the performance of the models against real data. We propose three different backtest approaches, evaluating the goodness of short-run forecasts versus medium-length ones. We find that neither model was able to capture the mortality-improvement shock observed for the male population in the analysed period. Moreover, the results suggest that CBD forecasts are reliable mainly for ages above 75, and that LC forecasts are generally more accurate for these data.
Figures: Figure 1
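The SVD-based fit of the Lee–Carter model compared above decomposes log death rates as log m(x,t) = a_x + b_x k_t. A minimal sketch of that standard fit on synthetic rates (all parameters below are hypothetical illustrations, not the paper's ISTAT calibration):

```python
import numpy as np

rng = np.random.default_rng(0)
ages, years = 10, 40

# Synthetic log death rates: an age profile plus a declining period trend.
a_true = np.linspace(-6.0, -2.0, ages)
k_true = np.linspace(1.0, -1.0, years)
b_true = np.full(ages, 1.0 / ages)
log_m = a_true[:, None] + np.outer(b_true, k_true) \
    + 0.01 * rng.standard_normal((ages, years))

# Lee-Carter fit: a_x is the age-specific mean of log rates,
# then a rank-1 SVD of the centred residuals gives b_x and k_t.
a_hat = log_m.mean(axis=1)
resid = log_m - a_hat[:, None]
U, s, Vt = np.linalg.svd(resid, full_matrices=False)
b_hat = U[:, 0]
k_hat = s[0] * Vt[0]

# Usual identification constraints: sum(b_x) = 1 and sum(k_t) = 0.
b_sum = b_hat.sum()
b_hat, k_hat = b_hat / b_sum, k_hat * b_sum
k_mean = k_hat.mean()
a_hat = a_hat + b_hat * k_mean
k_hat = k_hat - k_mean
```

Forecasting (as backtested in the paper) would then extrapolate the period index k_t, typically as a random walk with drift.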

Open Access Article: Implied Distributions from GBPUSD Risk-Reversals and Implication for Brexit Scenarios
Risks 2017, 5(3), 35; doi:10.3390/risks5030035
Received: 19 March 2017 / Revised: 28 June 2017 / Accepted: 28 June 2017 / Published: 4 July 2017
Cited by 1 | PDF Full-text (2010 KB) | HTML Full-text | XML Full-text
Abstract
Much of the debate around a potential British exit (Brexit) from the European Union has centred on the potential macroeconomic impact. In this paper, we instead focus on understanding market expectations for price action around the Brexit referendum date. Extracting implied distributions from the GBPUSD option volatility surface, we originally estimated, based on visual observation of implied probability densities available up to 13 June 2016, that the market expected a vote to leave could move the GBPUSD exchange rate from 1.4390 (spot reference on 10 June 2016) down to a range of 1.10 to 1.30, i.e., a 10–25% decline, very probably with highly volatile price action. To quantify this more objectively, we construct a mixture model corresponding to two scenarios for the GBPUSD exchange rate after the referendum vote, one for "remain" and one for "leave". Calibrating this model to four months of market data, from 24 February to 22 June 2016, we find that a "leave" vote was associated with a predicted devaluation of the British pound to approximately 1.37 USD per GBP, a 4.5% devaluation, quite consistent with the observed post-referendum move down from 1.4877 to 1.3622. We contrast the behaviour of the GBPUSD option market in the run-up to the Brexit vote with that during the 2014 Scottish independence referendum, finding the potential impact of Brexit to be considerably higher.
(This article belongs to the Special Issue The implications of Brexit)
Figures: Figure 1
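The two-scenario construction described above can be sketched as a toy mixture density: with probability p_leave the exchange rate settles around a devalued level, otherwise near spot. All numbers below are hypothetical illustrations, not the paper's calibrated values:

```python
import numpy as np

# Hypothetical two-scenario mixture for the post-vote GBPUSD rate:
# "remain" centred near spot, "leave" centred at a devalued level.
p_leave = 0.35
mu_remain, sd_remain = 1.48, 0.02
mu_leave, sd_leave = 1.37, 0.04

def mixture_pdf(x):
    """Density of the two-component Gaussian mixture over GBPUSD."""
    def phi(x, mu, sd):
        return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    return p_leave * phi(x, mu_leave, sd_leave) \
        + (1 - p_leave) * phi(x, mu_remain, sd_remain)

# The mixture mean is the probability-weighted average of the scenario means.
mean_rate = p_leave * mu_leave + (1 - p_leave) * mu_remain
```

In the paper's setting, the component shapes and the mixing weight would instead be calibrated to the option-implied densities rather than chosen by hand.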

Open Access Article: A Robust Approach to Hedging and Pricing in Imperfect Markets
Risks 2017, 5(3), 36; doi:10.3390/risks5030036
Received: 5 March 2017 / Revised: 13 July 2017 / Accepted: 15 July 2017 / Published: 18 July 2017
PDF Full-text (405 KB) | HTML Full-text | XML Full-text
Abstract
This paper proposes a model-free approach to hedging and pricing in the presence of market imperfections such as market incompleteness and frictions. The generality of this framework allows us to conduct an in-depth theoretical analysis of hedging strategies for a wide family of risk measures and pricing rules, and to study the conditions under which the hedging problem admits a solution and pricing is possible. The practical implications of the proposed theoretical approach are illustrated with an application to hedging economic risk.
(This article belongs to the Special Issue Quantile Regression for Risk Assessment)
Open Access Article: Bubbles, Blind-Spots and Brexit
Risks 2017, 5(3), 37; doi:10.3390/risks5030037
Received: 28 April 2017 / Revised: 12 July 2017 / Accepted: 13 July 2017 / Published: 18 July 2017
PDF Full-text (373 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we develop a well-established financial model to investigate whether bubbles were present in opinion polls and betting markets prior to the UK's vote on EU membership on 23 June 2016. The contribution is threefold. Firstly, our continuous-time model allows for irregularly spaced time series, a common feature of polling data. Secondly, we build on qualitative comparisons that are often made between market cycles and voting patterns. Thirdly, our approach is theoretically elegant; thus, where bubbles are found, we suggest a suitable adjustment. We find evidence of bubbles in polling data, suggesting that polls systematically over-estimated the proportion intending to vote remain. In contrast, bookmakers' odds show none of this bubble-like over-confidence. However, the implied probabilities from bookmakers' odds were remarkably unresponsive to polling data that nonetheless indicated a close-fought vote.
(This article belongs to the Special Issue The implications of Brexit)
Figures: Figure 1

Open Access Feature Paper: Stress Testing German Industry Sectors: Results from a Vine Copula Based Quantile Regression
Risks 2017, 5(3), 38; doi:10.3390/risks5030038
Received: 14 April 2017 / Revised: 12 July 2017 / Accepted: 13 July 2017 / Published: 19 July 2017
PDF Full-text (1111 KB) | HTML Full-text | XML Full-text
Abstract
Measuring interdependence between probabilities of default (PDs) in different industry sectors of an economy plays a crucial role in financial stress testing. Regression approaches may then be employed to model the impact of stressed industry sectors, as covariates, on other response sectors. We identify vine copula based quantile regression as an eligible tool for conducting such stress tests, since this method has good robustness properties, accounts for potential nonlinearities of conditional quantile functions and ensures that no quantile-crossing effects occur. We illustrate its performance on a data set of sector-specific PDs for the German economy. Empirical results are provided for both a rough and a fine-grained industry sector classification scheme. Among other findings, we confirm that a stressed automobile industry has a severe impact on the German economy as a whole at different quantile levels, whereas for a stressed financial sector, for example, the impact is rather moderate. Moreover, the vine copula based quantile regression approach is benchmarked against both classical linear quantile regression and expectile regression to illustrate its methodological effectiveness in the scenarios evaluated.
(This article belongs to the Special Issue Quantile Regression for Risk Assessment)
Figures: Figure 1
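The classical linear quantile regression used above as a benchmark minimizes the pinball (check) loss; for a constant predictor, the minimizer is simply the empirical tau-quantile. A minimal sketch of that fact on synthetic data (this illustrates only the benchmark loss, not the vine copula method itself):

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Mean check loss of predicting the constant q at quantile level tau:
    tau * u for positive errors u = y - q, (tau - 1) * u for negative ones."""
    u = y - q
    return np.mean(np.where(u >= 0, tau * u, (tau - 1) * u))

rng = np.random.default_rng(1)
y = rng.standard_normal(5000)

# Grid-search the constant that minimizes the 0.95-level pinball loss.
grid = np.linspace(-3.0, 3.0, 1201)
losses = [pinball_loss(y, q, 0.95) for q in grid]
q_hat = grid[int(np.argmin(losses))]
```

Linear quantile regression replaces the constant with a linear predictor x'beta and minimizes the same loss over beta; vine copula quantile regression instead derives the conditional quantile from an estimated copula.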

Open Access Feature Paper: Valuation of Non-Life Liabilities from Claims Triangles
Risks 2017, 5(3), 39; doi:10.3390/risks5030039
Received: 28 June 2017 / Revised: 15 July 2017 / Accepted: 17 July 2017 / Published: 19 July 2017
PDF Full-text (528 KB) | HTML Full-text | XML Full-text
Abstract
This paper provides a complete program for the valuation of aggregate non-life insurance liability cash flows based on claims triangle data. The valuation is fully consistent with the principle of valuing the liability at the cost of transferring it to a so-called reference undertaking subject to capital requirements throughout the runoff of the liability cash flow. The valuation program includes complete details on parameter estimation, bias correction and conservative estimation of the value of the liability under partial information. The latter is based on a new approach to estimating the mean squared error of claims reserve prediction.
Figures: Figure 1

Open Access Feature Paper: The Class of (p,q)-spherical Distributions with an Extension of the Sector and Circle Number Functions
Risks 2017, 5(3), 40; doi:10.3390/risks5030040
Received: 24 May 2017 / Revised: 17 July 2017 / Accepted: 19 July 2017 / Published: 21 July 2017
PDF Full-text (1472 KB) | HTML Full-text | XML Full-text
Abstract
For evaluating the probabilities of arbitrary random events with respect to a given multivariate probability distribution, specific techniques are of great interest. An important two-dimensional high-risk limit law is the Gauss-exponential distribution, whose probabilities can be dealt with based on the Gauss–Laplace law. The latter is considered here as an element of the newly introduced family of (p,q)-spherical distributions. Based on a suitably defined non-Euclidean arc-length measure on (p,q)-circles, we prove geometric and stochastic representations of these distributions and of correspondingly distributed random vectors. These representations allow the new probability measures to be handled similarly to elliptically contoured distributions and more general homogeneous star-shaped ones. This is demonstrated by a generalization of the Box–Muller simulation method. In passing, we prove an extension of the sector and circle number functions.
Figures: Figure 1
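The classical Box–Muller method that the paper generalizes maps two independent uniforms to two independent standard normals via polar coordinates. A minimal sketch of the classical version only (the paper's (p,q)-spherical extension replaces the Euclidean geometry underlying it):

```python
import numpy as np

def box_muller(u1, u2):
    """Classical Box-Muller transform: map two independent U(0,1) samples
    to two independent standard normal samples. The radius comes from the
    inverse of the chi-square(2) CDF, the angle is uniform on [0, 2*pi)."""
    r = np.sqrt(-2.0 * np.log(u1))
    theta = 2.0 * np.pi * u2
    return r * np.cos(theta), r * np.sin(theta)

rng = np.random.default_rng(2)
u1, u2 = rng.random(200_000), rng.random(200_000)
z1, z2 = box_muller(u1, u2)
```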

Open Access Article: Robust Estimation of Value-at-Risk through Distribution-Free and Parametric Approaches Using the Joint Severity and Frequency Model: Applications in Financial, Actuarial, and Natural Calamities Domains
Risks 2017, 5(3), 41; doi:10.3390/risks5030041
Received: 31 May 2017 / Revised: 5 July 2017 / Accepted: 21 July 2017 / Published: 26 July 2017
PDF Full-text (6492 KB) | HTML Full-text | XML Full-text
Abstract
Value-at-Risk (VaR) is a well-accepted risk metric in modern quantitative risk management (QRM). The classical Monte Carlo simulation (MCS) approach, denoted henceforth as the classical approach, assumes the independence of loss severity and loss frequency. In practice, this assumption does not always hold. Through mathematical analyses, we show that the classical approach is prone to significant biases when the independence assumption is violated; this is corroborated on both simulated and real-world datasets. To overcome these limitations and estimate VaR more accurately, we develop and implement two approaches for VaR estimation: data-driven partitioning of frequency and severity (DPFS) using clustering analysis, and copula-based parametric modeling of frequency and severity (CPFS). These two approaches are verified using simulation experiments on synthetic data and validated on five publicly available datasets from diverse domains: the financial indices of Standard & Poor's 500 and the Dow Jones Industrial Average, chemical loss spills as tracked by the US Coast Guard, Australian automobile accidents, and US hurricane losses. The classical approach estimates VaR inaccurately for 80% of the simulated data sets and for 60% of the real-world data sets studied in this work, while both the DPFS and the CPFS methodologies attain VaR estimates within 99% bootstrap confidence interval bounds for both simulated and real-world data. We provide a process flowchart describing the steps for choosing between the DPFS and the CPFS methodology for VaR estimation on real-world loss datasets.
Figures: Figure 1
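The classical MCS approach criticized above draws an annual loss count and independent severities, then reads VaR off the quantile of the simulated aggregate loss. A minimal sketch under the independence assumption, with hypothetical Poisson frequency and lognormal severity parameters (not from the paper's datasets):

```python
import numpy as np

def classical_var(lam, mu, sigma, alpha, n_sims, seed=0):
    """Classical MCS VaR: loss count ~ Poisson(lam), severities ~ lognormal
    (mu, sigma), drawn independently of the count; VaR at level alpha is
    the alpha-quantile of the simulated aggregate annual losses."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(lam, n_sims)
    agg = np.array([rng.lognormal(mu, sigma, n).sum() for n in counts])
    return np.quantile(agg, alpha)

var95 = classical_var(lam=5.0, mu=0.0, sigma=1.0, alpha=0.95, n_sims=20_000)
var99 = classical_var(lam=5.0, mu=0.0, sigma=1.0, alpha=0.99, n_sims=20_000)
```

The DPFS and CPFS methods of the paper modify the sampling step so that frequency and severity are no longer drawn independently (via cluster-wise partitioning or a fitted copula, respectively).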

Open Access Article: Stochastic Period and Cohort Effect State-Space Mortality Models Incorporating Demographic Factors via Probabilistic Robust Principal Components
Risks 2017, 5(3), 42; doi:10.3390/risks5030042
Received: 7 February 2017 / Revised: 31 May 2017 / Accepted: 17 July 2017 / Published: 27 July 2017
PDF Full-text (7864 KB) | HTML Full-text | XML Full-text
Abstract
In this study, we develop a multi-factor extension of the family of Lee–Carter stochastic mortality models. We build upon the time, period and cohort stochastic model structure and extend it to include exogenous observable demographic features that can be used as additional factors to improve model fit and forecasting accuracy. We develop a dimension-reduction feature extraction framework which (a) employs projection-based techniques of dimensionality reduction; (b) provides a robust feature extraction framework that is amenable to different structures of demographic data; (c) analyses demographic data sets for patterns of missingness and the impact of such missingness on the feature extraction; (d) introduces a class of multi-factor stochastic mortality models incorporating time, period, cohort and demographic features, developed within a Bayesian state-space estimation framework; and (e) provides an efficient combined Markov chain and filtering framework for sampling the posterior and forecasting. We undertake a detailed case study on Human Mortality Database demographic data from European countries, using the extracted features to better explain the term structure of mortality in the UK over time for male and female populations when compared with a pure Lee–Carter stochastic mortality model. Our feature extraction framework and the consequent multi-factor mortality model improve both in-sample fit and, importantly, out-of-sample mortality forecasts by a non-trivial gain in performance.
(This article belongs to the Special Issue Ageing Population Risks)
Figures: Figure 1

Open Access Article: On the First Crossing of Two Boundaries by an Order Statistics Risk Process
Risks 2017, 5(3), 43; doi:10.3390/risks5030043
Received: 14 July 2017 / Revised: 11 August 2017 / Accepted: 15 August 2017 / Published: 18 August 2017
PDF Full-text (389 KB) | HTML Full-text | XML Full-text
Abstract
We derive a closed-form expression for the probability that a non-decreasing, pure-jump stochastic risk process with the order statistics (OS) property does not exit the strip between two non-decreasing, possibly discontinuous, time-dependent boundaries within a finite time interval. The result yields new expressions for the ruin probability in the insurance and dual risk models with dependence between the claim severities or capital gains, respectively.

Journal Contact

MDPI AG
Risks Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18