Editorial

Risks Special Issue on “Granular Models and Machine Learning Models”

School of Risk and Actuarial Studies, University of New South Wales, Kensington, NSW 2052, Australia
Submission received: 8 December 2019 / Accepted: 13 December 2019 / Published: 30 December 2019
(This article belongs to the Special Issue Claim Models: Granular Forms and Machine Learning Forms)
It is probably fair to date loss reserving by means of claim modelling from the late 1960s. For much of the 50 years since then, models have either remained algebraically simple or have proceeded incrementally to greater algebraic complexity. Much of the constraint on model sophistication has derived from the limits of available computing power.
As computing power has increased, so has model complexity and sophistication. At some point on this journey, the modelling of the claim process of individual claims in detail (granular modelling (GM)) became feasible. More recently, machine learning (ML) has increasingly found its way into the literature. Again, this may (though not necessarily will) target individual claims.
The two approaches stand in stark contrast with each other in at least one respect. In my own contribution to the present volume, I refer to them as the Watchmaker (GM) and the Oracle (ML), the one being concerned with ever more detailed and minute modelling, and the other with incisive generalizations about data on the basis of reasoning that may be obscure or even impenetrable.
The appearance of two (relatively) new approaches to the estimation of individual claim loss reserves immediately creates a tension between them, with natural questions about their relative performances. But the issue is greater than this. There are also questions about the performance of new versus old models.
I have no doubt that some of these new approaches will prove useful in future, and quite possibly dominate all others. For the present, however, their status is, in my view, unproven. The research record contains a number of papers in this field, but some of them consist of an application to a single dataset with little in the way of general conclusions or indication of the extent to which the results could be extrapolated to other datasets.
The consequence is a fragmented research record, leaving open questions about the general applicability of GM and ML. Some of the (to my mind) landmark questions requiring answer are the following.
  • Modelling of individual claims. This is possible with GM and ML. However, it is a statistical truism that enlargement of the volume of data used does not necessarily increase predictive power. Indeed, in Section 8.2 of my own contribution to this volume, I give an example where it will not. So, can we identify the circumstances in which the use of individual claims is likely to bring predictive benefit?
  • Complexity. One might reasonably guess that the answer to the previous question will be related in some way to the complexity of the dataset under analysis. In short, datasets with simple algebraic structure admit simple methods of analysis, while complex datasets call for more complex methods, possibly involving individual claims. So, can we design a metric of data complexity (perhaps based on relative entropy or similar) that could be used to triage datasets? A minimal sketch of one such metric follows this list.
  • Predictive gain. In cases where some predictive gain is found, say reduced prediction error or more granular reserving or some other form of GM/ML supremacy, what exactly is the gain in quantitative terms, and are there any general indications of the circumstances in which it might occur?
  • Interpretability. Explainable neural nets (NNs) have entered the literature. These structure NN output so as to increase its interpretability. Even so, the results are not always fully transparent. Can we define alternative constraints on the form of the output so as to enhance interpretability further?
  • Interpretability (continued). In any case, to what extent is interpretability paramount? Can we define circumstances in which it is essential, and others where it does not matter?
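As a concrete, if speculative, illustration of the complexity metric floated above, one might score a dataset by the relative entropy (Kullback–Leibler divergence) between its observed incremental payments and those implied by a simple reference model such as the chain ladder: the larger the divergence, the more structure the simple model fails to capture, and the more plausible a case for granular or ML treatment. The sketch below is a minimal, hypothetical version of that idea; the function names, the chain-ladder reference, and the assumption of positive increments are my own choices for illustration, not anything prescribed in the contributed articles.

```python
# Hypothetical "data complexity" score for a paid-loss run-off triangle:
# the KL divergence between observed incremental payments and those
# implied by a volume-weighted chain-ladder fit. Assumes positive
# increments so that normalised increments form valid distributions.

import numpy as np
from scipy.stats import entropy  # entropy(p, q) = KL divergence of p from q


def chain_ladder_increments(cumulative: np.ndarray) -> np.ndarray:
    """Fitted incremental payments for the observed cells of a cumulative
    triangle (NaN below the diagonal), using chain-ladder factors."""
    n = cumulative.shape[1]
    factors = np.ones(n - 1)
    for j in range(n - 1):
        rows = ~np.isnan(cumulative[:, j + 1])
        factors[j] = cumulative[rows, j + 1].sum() / cumulative[rows, j].sum()

    fitted = np.empty_like(cumulative)
    fitted[:, 0] = cumulative[:, 0]
    for j in range(1, n):
        fitted[:, j] = fitted[:, j - 1] * factors[j - 1]

    # back out increments, keeping only the observed (upper) cells
    inc = np.diff(fitted, axis=1, prepend=0.0)
    inc[np.isnan(cumulative)] = np.nan
    return inc


def complexity_score(cumulative: np.ndarray) -> float:
    """KL divergence of observed vs chain-ladder-fitted increments,
    each normalised to a probability distribution over observed cells."""
    observed = np.diff(cumulative, axis=1, prepend=0.0)
    fitted = chain_ladder_increments(cumulative)
    mask = ~np.isnan(cumulative)
    p = observed[mask] / observed[mask].sum()
    q = fitted[mask] / fitted[mask].sum()
    return float(entropy(p, q))


if __name__ == "__main__":
    triangle = np.array([
        [1000.0, 1800.0, 2100.0, 2200.0],
        [1100.0, 2000.0, 2350.0, np.nan],
        [1200.0, 2150.0, np.nan, np.nan],
        [1300.0, np.nan, np.nan, np.nan],
    ])
    print(f"complexity score: {complexity_score(triangle):.4f}")
```

Whether such a score actually correlates with the predictive gain obtainable from GM or ML methods is precisely the open question posed above; the sketch is intended only to make the triage idea concrete.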
The present volume commences with two articles on loss reserving at the individual claim level, in each case using a form of machine learning. De Felice and Moriconi (2019) use CART (Classification And Regression Trees) together with some granular features, whereas Duval and Pigeon (2019) use gradient boosting.
These are followed by two articles on neural networks. Kuo (2019) applies deep learning to claim triangles, but with multi-triangle input and other input features. Then, Poon (2019) is concerned with the issue of interpretability, applying an unexplainability penalty to the neural network.
Finally, my own contribution (Taylor 2019) discusses the merits and demerits of GM and ML models, and compares the two families.

Funding

This research received funding assistance from the Australian Research Council, grant number LP130100723.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. De Felice, Massimo, and Franco Moriconi. 2019. Claim Watching and Individual Claims Reserving Using Classification and Regression Trees. Risks 7: 102.
  2. Duval, Francis, and Mathieu Pigeon. 2019. Individual Loss Reserving Using a Gradient Boosting-Based Approach. Risks 7: 79.
  3. Kuo, Kevin. 2019. DeepTriangle: A Deep Learning Approach to Loss Reserving. Risks 7: 97.
  4. Poon, Jacky H. L. 2019. Penalising Unexplainability in Neural Networks for Predicting Payments per Claim Incurred. Risks 7: 95.
  5. Taylor, Greg. 2019. Loss Reserving Models: Granular and Machine Learning Forms. Risks 7: 82.