Search Results (19)

Search Parameters:
Keywords = insurance loss analytic

24 pages, 2014 KiB  
Article
A Behavioral Theory of the Income-Oriented Investors: Evidence from Japanese Life Insurance Companies
by Hiroyuki Sasaki
J. Risk Financial Manag. 2025, 18(7), 364; https://doi.org/10.3390/jrfm18070364 - 1 Jul 2025
Viewed by 400
Abstract
This study investigates the yield-seeking behavior of income-oriented institutional investors, who are essential players in financial markets. While external pressures compelling firms to “reach for yield” are well-documented, the firm-level behavioral drivers underlying this phenomenon remain largely underexplored. Drawing on the behavioral theory of the firm, this study argues that an investor’s performance relative to their social aspiration level (the peer average) influences their yield-seeking decisions, and that this effect is moderated by “portfolio slack,” defined as unrealized gains or losses. To test this theory in the context of persistent low-yield pressure, this study constructs and analyzes a panel dataset of Japanese life insurance companies from 2000 to 2019. The analysis reveals that these investors increase their portfolio income yield after underperforming their peers and decrease it after outperforming. Furthermore, greater portfolio slack amplifies yield increases after underperformance and mitigates yield decreases after outperformance. In contrast, organizational slack primarily mitigates yield reductions after outperformance. This research extends the behavioral theory of the firm to the asset management context by identifying distinct performance feedback responses and proposing portfolio slack as an important analytical construct, thereby offering key insights for investment managers and financial regulators. Full article
(This article belongs to the Section Financial Markets)
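
The paper's exact specification is not reproduced in the abstract; the sketch below, with hypothetical column names, only illustrates the kind of performance-feedback panel regression it describes: changes in portfolio income yield regressed on under- and over-performance relative to the peer average, moderated by portfolio slack.

```python
# Minimal sketch (not the paper's specification): performance-feedback regression
# with hypothetical column names. "income_yield" = portfolio income yield,
# "d_income_yield" = its change, "port_slack" = unrealized gains/losses,
# "org_slack" = organizational slack, "firm_id"/"year" = panel identifiers.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("life_insurers_panel.csv")  # hypothetical firm-year panel

# Attainment discrepancy relative to the social aspiration level (peer average)
df["peer_avg_yield"] = df.groupby("year")["income_yield"].transform("mean")
df["attain_gap"] = df["income_yield"] - df["peer_avg_yield"]
df["under"] = np.minimum(df["attain_gap"], 0.0)  # underperformance (negative part)
df["over"] = np.maximum(df["attain_gap"], 0.0)   # outperformance (positive part)

# Yield-seeking response with slack moderation, firm and year effects,
# standard errors clustered by firm
model = smf.ols(
    "d_income_yield ~ under * port_slack + over * port_slack + org_slack"
    " + C(firm_id) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})
print(model.summary())
```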

19 pages, 1734 KiB  
Article
Modeling Age-to-Age Development Factors in Auto Insurance Through Principal Component Analysis and Temporal Clustering
by Shengkun Xie and Chong Gan
Risks 2025, 13(6), 100; https://doi.org/10.3390/risks13060100 - 22 May 2025
Viewed by 453
Abstract
The estimation of age-to-age development factors is fundamental to loss reserving, with direct implications for risk management and regulatory compliance in the auto insurance sector. The precise and robust estimation of these factors underpins the credibility of case reserves and the effective management of future claim liabilities. This study investigates the underlying structure and sources of variability in development factor estimates by applying multivariate statistical techniques to the analysis of development triangles. Departing from conventional univariate summaries (e.g., mean or median), we introduce a comprehensive framework that incorporates temporal clustering of development factors and addresses associated modeling complexities, including high dimensionality and temporal dependency. The proposed methodology enhances interpretability and captures latent structures in the data, thereby improving the reliability of reserve estimates. Our findings contribute to the advancement of reserving practices by offering a more nuanced understanding of development factor behavior under uncertainty. Full article
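
As a rough illustration of the multivariate framework described, the sketch below computes age-to-age factors from a synthetic run-off triangle and then applies PCA and hierarchical clustering to the accident-year factor vectors; it is not the authors' code.

```python
# Illustrative sketch on a synthetic cumulative loss triangle: compute age-to-age
# (link ratio) factors, then apply PCA and a simple clustering of accident years.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
n = 8                                        # accident years / development ages
triangle = np.full((n, n), np.nan)
for i in range(n):                           # synthetic cumulative paid losses
    base = 1000 * (1 + 0.05 * i)
    growth = np.cumprod(np.r_[1.0, 1.4 + 0.1 * rng.standard_normal(n - 1)])
    triangle[i, : n - i] = base * growth[: n - i]

# Age-to-age factors f_{i,j} = C_{i,j+1} / C_{i,j} for observed cells
factors = triangle[:, 1:] / triangle[:, :-1]

# Use the fully observed upper-left block so every row has the same factor ages
obs = factors[: n // 2, : n // 2]

pca = PCA(n_components=2).fit(obs)
scores = pca.transform(obs)                  # accident years in factor space
labels = AgglomerativeClustering(n_clusters=2).fit_predict(scores)
print("explained variance ratio:", pca.explained_variance_ratio_)
print("accident-year cluster labels:", labels)
```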

29 pages, 3857 KiB  
Article
Exploring the Impacts of Autonomous Vehicles on the Insurance Industry and Strategies for Adaptation
by Xiaodan Lin, Chen-Ying Lee and Chiang Ku Fan
World Electr. Veh. J. 2025, 16(3), 119; https://doi.org/10.3390/wevj16030119 - 21 Feb 2025
Viewed by 2721
Abstract
This study investigates the impacts of autonomous vehicles (AVs) on the insurance industry from the viewpoint of insurance companies, highlighting the necessity for adaptation due to technological advancements. The research is motivated by the gap in understanding between traditional insurers and automaker-backed insurance services regarding AV implications. The purpose is to identify potential impacts, evaluate the level of concern among diverse insurance companies, and examine their differing perspectives. The methodology includes a literature review, the Analytic Hierarchy Process (AHP), and Spearman correlation analysis. The literature review clarifies the definition of AVs and their impacts on traditional insurance. The AHP assesses the level of concern among insurance companies, and Spearman correlation analysis explores the similarities and differences in perspectives. The findings show that insurance companies largely agree on the transformative impacts of AVs. The primary effects are in “Updates in Insurance Business Operations” and the “Emergence of New Risks”, with less impact on “Changes in the Insurance Market”. A major concern is the complexity of multi-party liability claims. Companies differ in their focus on specific impacts like legal frameworks or system malfunctions, but share concerns about multi-party liability, system malfunctions, and legal gaps. The study anticipates minor impacts on market dynamics and traditional insurance models. The conclusions emphasize that AVs will significantly impact the insurance industry, requiring innovation and adaptation to maintain competitiveness. This includes developing new products, optimizing processes, and collaborating with stakeholders. The study has several implications: customized insurance products, optimized no-fault claims processes, collaborations with automakers and tech firms, data-driven risk assessments, enhanced risk management, and adapting traditional models. Recommendations include building loss experience databases, adopting no-fault insurance, strategic partnerships, developing customized products, strengthening risk management and cybersecurity, monitoring regulations, adjusting traditional models, focusing on product liability insurance, and training professionals. Full article
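
A minimal sketch of the two quantitative steps named in the abstract, on made-up inputs: AHP priority weights from a pairwise comparison matrix (principal eigenvector with a consistency check) and a Spearman rank correlation between two respondents' rankings.

```python
# AHP priority weights from a hypothetical pairwise comparison matrix (Saaty scale),
# plus a consistency ratio, and a Spearman correlation between two rankings.
import numpy as np
from scipy.stats import spearmanr

A = np.array([                      # made-up pairwise comparisons of three impacts
    [1,   3,   5],
    [1/3, 1,   2],
    [1/5, 1/2, 1],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                     # AHP priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n] # Saaty random index
print("weights:", w.round(3), "consistency ratio:", round(ci / ri, 3))

# Agreement between two insurers' priority rankings of AV impacts
rho, p = spearmanr([1, 2, 3, 4, 5], [2, 1, 3, 5, 4])
print("Spearman rho:", rho, "p-value:", round(p, 3))
```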

22 pages, 6143 KiB  
Article
Unified Spatial Clustering of Territory Risk to Uncover Impact of COVID-19 Pandemic on Major Coverages of Auto Insurance
by Shengkun Xie and Nathaniel Ho
Risks 2024, 12(7), 108; https://doi.org/10.3390/risks12070108 - 1 Jul 2024
Viewed by 1223
Abstract
This research delves into the fusion of spatial clustering and predictive modeling within auto insurance data analytics. The primary focus of this research is on addressing challenges stemming from the dynamic nature of spatial patterns in multiple accident year claim data, by using spatially constrained clustering. The spatially constrained clustering is implemented under hierarchical clustering with a soft contiguity constraint. It is highly desirable for insurance companies and insurance regulators to be able to make meaningful comparisons of loss patterns obtained from multiple reporting years that summarize multiple accident year loss metrics. By integrating spatial clustering techniques, the study not only improves the credibility of predictive models but also introduces a strategic dimension reduction method that concurrently enhances the interpretability of predictive models used. The evolving nature of spatial patterns over time poses a significant barrier to a better understanding of complex insurance systems as these patterns transform due to various factors. While spatial clustering effectively identifies regions with similar loss data characteristics, maintaining up-to-date clusters is an ongoing challenge. This research underscores the importance of studying spatial patterns of auto insurance claim data across major insurance coverage types, including Accident Benefits (AB), Collision (CL), and Third-Party Liability (TPL). The research offers regulators valuable insights into distinct risk profiles associated with different coverage categories and territories. By leveraging spatial loss data from pre-pandemic and pandemic periods, this study also aims to uncover the impact of the COVID-19 pandemic on auto insurance claims of major coverage types. From this perspective, we observe a statistically significant increase in insurance premiums for CL coverage after the pandemic. The proposed unified spatial clustering method incorporates a relabeling strategy to standardize comparisons across different accident years, contributing to a more robust understanding of the pandemic effects on auto insurance claims. This innovative approach has the potential to significantly influence data visualization and pattern recognition, thereby improving the reliability and interpretability of clustering methods. Full article
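
The authors' implementation is not shown here; as a rough Python analogue of contiguity-constrained clustering, the sketch below runs Ward hierarchical clustering of synthetic territory loss costs with a k-nearest-neighbour spatial graph as the connectivity constraint.

```python
# Rough analogue (not the authors' method): contiguity-constrained hierarchical
# clustering of territory-level loss metrics. Coordinates and loss data are
# synthetic placeholders for territory centroids and AB/CL/TPL loss costs.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import kneighbors_graph
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(60, 2))   # territory centroids (x, y)
loss = np.column_stack([                     # loss costs by coverage
    rng.gamma(2.0, 50, 60),
    rng.gamma(2.5, 40, 60),
    rng.gamma(3.0, 60, 60),
])

X = StandardScaler().fit_transform(loss)
connectivity = kneighbors_graph(coords, n_neighbors=5, include_self=False)

ward = AgglomerativeClustering(n_clusters=6, linkage="ward",
                               connectivity=connectivity)
labels = ward.fit_predict(X)
print("territories per cluster:", np.bincount(labels))
```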

19 pages, 433 KiB  
Article
Analyzing Size of Loss Frequency Distribution Patterns: Uncovering the Impact of the COVID-19 Pandemic
by Shengkun Xie and Yuanshun Li
Risks 2024, 12(2), 40; https://doi.org/10.3390/risks12020040 - 18 Feb 2024
Viewed by 2125
Abstract
This study delves into a critical examination of the Size of Loss distribution patterns in the context of auto insurance during the pre- and post-pandemic periods, emphasizing their profound influence on insurance pricing and regulatory frameworks. Through a comprehensive analysis of the historical Size of Loss data, insurers and regulators gain essential insights into the probabilities and magnitudes of insurance claims, informing the determination of precise insurance premiums and the management of case reserving. This approach aids in fostering fair competition, ensuring equitable premium rates, and preventing discriminatory pricing practices, thereby promoting a balanced insurance landscape. The research further investigates the impact of the COVID-19 pandemic on these Size of Loss patterns, given the substantial shifts in driving behaviours and risk landscapes. The research also contributes to the literature by addressing the need for more studies focusing on the implications of the COVID-19 pandemic for pre- and post-pandemic auto insurance loss patterns, thus offering a holistic perspective encompassing both insurance pricing and regulatory dimensions. Full article
(This article belongs to the Special Issue Risks: Feature Papers 2023)
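
The paper's data and distributional choices are not reproduced here; the sketch below merely illustrates the kind of pre- versus post-pandemic size-of-loss comparison described, fitting a candidate severity distribution to each simulated period and testing whether the two samples differ.

```python
# Illustrative only: fit a candidate size-of-loss distribution to each period and
# test whether the two samples plausibly share one distribution. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pre = rng.lognormal(mean=8.0, sigma=1.2, size=5000)    # pre-pandemic claim sizes
post = rng.lognormal(mean=7.8, sigma=1.3, size=5000)   # post-pandemic claim sizes

for name, x in [("pre", pre), ("post", post)]:
    shape, loc, scale = stats.lognorm.fit(x, floc=0)   # fitted lognormal severity
    print(f"{name}: sigma={shape:.2f}, median={scale:.0f}")

ks_stat, p_value = stats.ks_2samp(pre, post)           # two-sample comparison
print(f"two-sample KS statistic={ks_stat:.3f}, p-value={p_value:.3g}")
```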

27 pages, 675 KiB  
Article
Bayesian Inference for the Loss Models via Mixture Priors
by Min Deng and Mostafa S. Aminzadeh
Risks 2023, 11(9), 156; https://doi.org/10.3390/risks11090156 - 31 Aug 2023
Cited by 2 | Viewed by 1695
Abstract
Constructing an accurate model for insurance losses is a challenging task. Researchers have developed various methods to model insurance losses, such as composite models. Composite models combine two distributions: one for the small values, which occur with high frequency, and the other for the large values, which occur with low frequency. The purpose of this article is to consider a mixture of prior distributions for exponential–Pareto and inverse-gamma–Pareto composite models. The general formulas for the posterior distribution and the Bayes estimator of the support parameter θ are derived. It is shown that the posterior distribution is a mixture of individual posterior distributions. Analytic results and Bayesian inference based on the proposed mixture prior distribution approach are provided. Simulation studies reveal that the Bayes estimator with a mixture distribution outperforms both the Bayes estimator without a mixture distribution and the ML estimator in terms of accuracy. The proposed method is then applied to insurance losses from natural events, such as floods in the USA from 2000 to 2019. As a measure of goodness-of-fit, the Bayes factor is used to choose the best-fitted model. Full article
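
The exponential–Pareto and inverse-gamma–Pareto composite models themselves are not reproduced here; the sketch below only works through the general mechanism the abstract relies on, namely that a mixture prior yields a posterior which is again a mixture of the component posteriors, with weights updated by each component's marginal likelihood, using exponential data with a gamma-mixture prior.

```python
# Worked example of mixture-prior mechanics (not the paper's composite model):
# exponential losses with a two-component gamma-mixture prior on the rate.
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(3)
x = rng.exponential(scale=1 / 0.4, size=200)       # losses with true rate 0.4
n, s = len(x), x.sum()

# Mixture prior on the rate: w1*Gamma(a1,b1) + w2*Gamma(a2,b2)  (shape/rate)
components = [(0.5, 2.0, 4.0), (0.5, 5.0, 20.0)]    # (weight, a, b)

post = []
for w, a, b in components:
    # log marginal likelihood of the data under this prior component
    log_m = a * np.log(b) + gammaln(a + n) - gammaln(a) - (a + n) * np.log(b + s)
    post.append((np.log(w) + log_m, a + n, b + s))   # updated component parameters

log_w = np.array([p[0] for p in post])
w_post = np.exp(log_w - log_w.max())
w_post /= w_post.sum()                               # updated mixture weights

# Bayes estimator under squared-error loss: weighted average of component means
rate_hat = sum(w * a / b for w, (_, a, b) in zip(w_post, post))
print("posterior weights:", w_post.round(3), "Bayes estimate of rate:", round(rate_hat, 3))
```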

17 pages, 1291 KiB  
Article
AutoReserve: A Web-Based Tool for Personal Auto Insurance Loss Reserving with Classical and Machine Learning Methods
by Lu Xiong, Vajira Manathunga, Jiyao Luo, Nicholas Dennison, Ruicheng Zhang and Zhenhai Xiang
Risks 2023, 11(7), 131; https://doi.org/10.3390/risks11070131 - 14 Jul 2023
Viewed by 3243
Abstract
In this paper, we developed a Shiny-based application called AutoReserve. This application serves as a tool for a variety of types of loss reserving. The primary target audience of the app is personal auto actuaries, who are professionals in the insurance industry specializing in assessing risks and determining insurance premiums for personal vehicles. However, the app is not limited exclusively to actuaries. Other individuals or entities, such as insurance companies, researchers, or analysts, who have access to the necessary data and require insights or analysis related to personal auto insurance, can also benefit from using the app. It is the first web-based application of its kind that is free to use and deployable from a personal computer or mobile device. AutoReserve is a software solution that caters to the needs of insurance professionals in a space where few web-based applications are available. The application is divided into three parts: a summary of the loss data, a classical loss reserving tool, and a machine learning loss reserving tool. Each component of the application functions differently and allows for inputs from the user to analyze the provided loss data. The user, that is, any individual or entity utilizing the AutoReserve application, can then use the outputs of these three sections to improve their risk management or loss reserving process. AutoReserve is unique compared to other loss reserving tools because of its ability to employ both traditional, spreadsheet-based and modern, machine-learning-based loss reserving tools. AutoReserve is accessible on the web. The app is currently usable and is still undergoing frequent updates with new features and bug fixes. Full article
(This article belongs to the Special Issue Computational Technologies for Financial Security and Risk Management)
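
AutoReserve's internals are not shown in the abstract; the sketch below is simply the standard volume-weighted chain-ladder calculation that the classical, spreadsheet-style side of such a tool typically implements, applied to a small synthetic cumulative triangle.

```python
# Standard volume-weighted chain-ladder reserve on a synthetic cumulative triangle
# (illustration only; not AutoReserve's code).
import numpy as np

tri = np.array([                      # cumulative paid losses, NaN = unobserved
    [100, 160, 190, 200],
    [110, 175, 205, np.nan],
    [120, 190, np.nan, np.nan],
    [130, np.nan, np.nan, np.nan],
])

n = tri.shape[0]
f = []
for j in range(n - 1):                # volume-weighted age-to-age factors
    mask = ~np.isnan(tri[:, j + 1])
    f.append(tri[mask, j + 1].sum() / tri[mask, j].sum())

reserves = []
for i in range(n):
    last = n - 1 - i                  # last observed development age for row i
    ultimate = tri[i, last] * np.prod(f[last:])
    reserves.append(ultimate - tri[i, last])

print("development factors:", np.round(f, 3))
print("reserve by accident year:", np.round(reserves, 1))
print("total reserve:", round(sum(reserves), 1))
```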

28 pages, 1076 KiB  
Article
Optimal Private Health Insurance Contract towards the Joint Interests of a Policyholder and an Insurer
by Peng Yang and Zhiping Chen
Mathematics 2023, 11(10), 2240; https://doi.org/10.3390/math11102240 - 10 May 2023
Viewed by 1534
Abstract
This paper investigates the optimal private health insurance contract design problem, considering the joint interests of a policyholder and an insurer. Both the policyholder and the insurer jointly determine the premium of private health insurance. In order to better reflect reality, the illness expenditure is modelled by an extended compound Poisson process depending on health status. Under the mean–variance criterion and by applying dynamic programming, control theory, and leader–follower game techniques, time-consistent private health insurance strategies are derived analytically, optimal private health insurance contracts are designed, and their implications for insurance are analysed. Finally, we perform numerical experiments assuming that the policyholder and the insurer calculate their wealth every year and deposit their disposable income into the Bank of China at an interest rate of r=0.021. The values of other model parameters are set by referring to the data in the related literature. We find that the worse the policyholder’s health, the higher the premium that they pay for private health insurance, and buying private health insurance can effectively reduce the policyholder’s economic losses caused by illnesses. Full article
(This article belongs to the Section E5: Financial Mathematics)

21 pages, 42012 KiB  
Article
Developing Flood Risk Zones during an Extreme Rain Event from the Perspective of Social Insurance Management
by Shakti P. C., Kohin Hirano and Koyuru Iwanami
Sustainability 2023, 15(6), 4909; https://doi.org/10.3390/su15064909 - 9 Mar 2023
Cited by 3 | Viewed by 2489
Abstract
Recently, Japan has been hit by more frequent and severe rainstorms and floods. Typhoon Hagibis caused heavy flooding in many river basins in central and eastern Japan from 12–13 October 2019, resulting in loss of life, substantial damage, and many flood insurance claims. Considering that obtaining accurate assessments of flood situations remains a significant challenge, this study used a geographic information system (GIS)-based analytical hierarchy process (AHP) approach to develop flood susceptibility maps for the Abukuma, Naka, and Natsui River Basins during the Typhoon Hagibis event. The maps were based on population density, building density, land-use profile, distance from the river, slope, and flood inundation. A novel approach was also employed to simulate the flood inundation profiles of the river basins. In addition, a crosscheck evaluated the relationship between flood insurance claims and the developed flood risk zones within the river basins. Over 70% of insurance claims were concentrated in high to very high risk zones identified by the flood susceptibility maps. These findings demonstrate the effectiveness of this type of assessment in identifying areas that are particularly vulnerable to flood damage, which can be a useful reference for flood disaster management and related stakeholder concerns for future extreme flood events. Full article

28 pages, 953 KiB  
Article
Enhancing Logistic Regression Using Neural Networks for Classification in Actuarial Learning
by George Tzougas and Konstantin Kutzkov
Algorithms 2023, 16(2), 99; https://doi.org/10.3390/a16020099 - 9 Feb 2023
Cited by 11 | Viewed by 7738
Abstract
We developed a methodology for the neural network boosting of logistic regression aimed at learning an additional model structure from the data. In particular, we constructed two classes of neural network-based models: shallow–dense neural networks with one hidden layer and deep neural networks with multiple hidden layers. Furthermore, several advanced approaches were explored, including the combined actuarial neural network approach, embeddings and transfer learning. The model training was achieved by minimizing either the deviance or the cross-entropy loss functions, leading to fourteen neural network-based models in total. For illustrative purposes, logistic regression and the alternative neural network-based models we propose are employed for a binary classification exercise concerning the occurrence of at least one claim in a French motor third-party insurance portfolio. Finally, the model interpretability issue was addressed via the local interpretable model-agnostic explanations approach. Full article
(This article belongs to the Special Issue Deep Neural Networks and Optimization Algorithms)
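
As a hedged illustration of the boosting idea described, the sketch below combines a logistic-regression (GLM) term and a small neural-network correction on the logit scale and trains the sum with the cross-entropy loss, in the spirit of the combined actuarial neural network approach; it is not the authors' code, and the data are simulated.

```python
# Sketch of NN "boosting" of logistic regression: predicted logit = GLM linear term
# plus a neural-network adjustment, trained with binary cross-entropy. Simulated data.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(2000, 5)                          # policy features
true_logit = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.6 * torch.sin(X[:, 2])
y = torch.bernoulli(torch.sigmoid(true_logit)).unsqueeze(1)   # claim / no claim

class BoostedLogistic(nn.Module):
    def __init__(self, p, hidden=16):
        super().__init__()
        self.glm = nn.Linear(p, 1)                # logistic-regression part
        self.net = nn.Sequential(                 # neural-network adjustment
            nn.Linear(p, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x):
        return self.glm(x) + self.net(x)          # combined logit

model = BoostedLogistic(p=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()                  # cross-entropy on the logit scale

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print("final cross-entropy loss:", float(loss))
```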

20 pages, 838 KiB  
Article
Willingness to Pay for Environmental Quality Improvement Programs and Its Determinants: Empirical Analysis in Western Nepal
by Uttam Paudel, Shiva Raj Adhikari and Krishna Prasad Pant
Sustainability 2023, 15(3), 2176; https://doi.org/10.3390/su15032176 - 24 Jan 2023
Cited by 7 | Viewed by 3390
Abstract
Environmental conditions in western Nepal pose a potential threat of economic losses and reduced sustainability, especially due to decreased productivity and increased health risks. This research investigates the maximum willingness to pay (WTP) of the local community for environmental quality improvement programs by using the contingent valuation technique. It also explores socio-economic and behavioral determinants that influence the maximum WTP for environmental quality improvement. A cross-sectional analytical design is employed using primary data obtained through in-depth face-to-face interviews with people in the community, interviews with key informants, focus group discussions and direct observations. Of the total of 420 households sampled, 72% were willing to pay for the environmental improvement program. The average WTP of households per annum for environmental protection at the community level is Nepalese rupees (NPR) 1909 (confidence interval, CI: 1796–2022). Environmental factors (prolonged drought, sporadic rains and drying sprout), socio-economic factors (family size, occupation, regular saving habits in microfinance, distance to the nearest health facility, health insurance enrollment, owning a home and owning arable land) and behavioral factors (cleanliness of the toilet) are the major factors influencing the household’s WTP decision. The findings of this study provide an important guideline and basis for the implementation of cost sharing in environmental quality improvement programs among the community, governments and other stakeholders in this sector. Full article
(This article belongs to the Special Issue Sustainable Industrial Systems—from Theory to Practice)

19 pages, 9181 KiB  
Article
ECLIPSE: Holistic AI System for Preparing Insurer Policy Data
by Varun Sriram, Zijie Fan and Ni Liu
Risks 2023, 11(1), 4; https://doi.org/10.3390/risks11010004 - 21 Dec 2022
Cited by 2 | Viewed by 2682
Abstract
Reinsurers possess high volumes of policy listings data from insurers, which they use to provide insurers with analytical insights and modeling that guide reinsurance treaties. These insurers often act on the same data for their own internal modeling and analytics needs. The problem is that this data is messy and needs significant preparation before meaningful insights can be extracted. Traditionally, this has required intensive manual labor from actuaries. However, a host of modern AI techniques and ML system architectures introduced in the past decade can be applied to the problem of insurance data preparation. In this paper, we explore a novel application of AI/ML to policy listings data, which poses its own unique challenges, by outlining the holistic AI-based platform we developed, ECLIPSE (Elegant Cleaning and Labeling of Insurance Policies while Standardizing Entities). With ECLIPSE, actuaries not only save time on data preparation but can build more effective loss models and provide crisper insights. Full article
(This article belongs to the Special Issue Data Science in Insurance)

23 pages, 857 KiB  
Article
A Generalized Linear Mixed Model for Data Breaches and Its Application in Cyber Insurance
by Meng Sun and Yi Lu
Risks 2022, 10(12), 224; https://doi.org/10.3390/risks10120224 - 23 Nov 2022
Cited by 5 | Viewed by 3140
Abstract
Data breach incidents result in severe financial loss and reputational damage, which raises the importance of using insurance to manage and mitigate cyber-related risks. We analyze the data breach chronology collected by Privacy Rights Clearinghouse (PRC) since 2001 and propose a Bayesian generalized linear mixed model for data breach incidents. Our model captures the dependency between frequency and severity of cyber losses and the behavior of cyber attacks on entities across time. Risk characteristics such as types of breach, types of organization, entity locations in chronology, as well as time trend effects are taken into consideration when investigating breach frequencies. Estimations of model parameters are presented under a Bayesian framework using a combination of the Gibbs sampler and the Metropolis–Hastings algorithm. Predictions and implications of the proposed model in enterprise risk management and cyber insurance rate filing are discussed and illustrated. We find that it is feasible and effective to use our proposed NB-GLMM for analyzing the number of data breach incidents with uniquely identified risk factors. Our results show that both geographical location and business type play significant roles in measuring cyber risks. The outcomes of our predictive analytics can be utilized by insurers to price their cyber insurance products, and by corporate information technology (IT) and data security officers to develop risk mitigation strategies according to a company’s characteristics. Full article
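
The authors use a custom Gibbs/Metropolis–Hastings scheme; the sketch below, on simulated counts, only illustrates the model class: a negative-binomial GLMM with a time trend and an organization-level random intercept, estimated here with PyMC's default sampler standing in for the paper's algorithm.

```python
# Minimal NB-GLMM sketch on simulated breach counts (not the authors' sampler):
# log-link negative binomial with a time trend and organization random intercepts.
import numpy as np
import pymc as pm
import arviz as az

rng = np.random.default_rng(5)
n_org, n_years = 8, 15
org = np.repeat(np.arange(n_org), n_years)          # organization type index
year = np.tile(np.arange(n_years), n_org)           # year index
true_re = rng.normal(0, 0.4, n_org)
mu_true = np.exp(1.0 + 0.05 * year + true_re[org])
counts = rng.negative_binomial(n=5, p=5 / (5 + mu_true))   # simulated breach counts

with pm.Model() as nb_glmm:
    intercept = pm.Normal("intercept", 0, 2)
    trend = pm.Normal("trend", 0, 1)                 # time-trend effect
    sigma_org = pm.HalfNormal("sigma_org", 1)
    org_effect = pm.Normal("org_effect", 0, sigma_org, shape=n_org)
    alpha = pm.Exponential("alpha", 1)               # NB dispersion

    mu = pm.math.exp(intercept + trend * year + org_effect[org])
    pm.NegativeBinomial("breaches", mu=mu, alpha=alpha, observed=counts)

    trace = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(az.summary(trace, var_names=["intercept", "trend", "sigma_org", "alpha"]))
```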

25 pages, 576 KiB  
Article
Tsallis Entropy for Loss Models and Survival Models Involving Truncated and Censored Random Variables
by Vasile Preda, Silvia Dedu, Iuliana Iatan, Ioana Dănilă Cernat and Muhammad Sheraz
Entropy 2022, 24(11), 1654; https://doi.org/10.3390/e24111654 - 14 Nov 2022
Cited by 5 | Viewed by 2350
Abstract
The aim of this paper is to develop an entropy-based approach to risk assessment for actuarial models involving truncated and censored random variables by using the Tsallis entropy measure. The effect of some partial insurance models, such as inflation, truncation and censoring from above, and truncation and censoring from below, on the entropy of losses is investigated in this framework. Analytic expressions for the per-payment and per-loss entropies are obtained, and the relationships between these entropies are studied. The Tsallis entropy of losses of the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u is computed for the exponential, Weibull, χ2, and Gamma distributions. In this context, the properties of the resulting entropies, such as the residual loss entropy and the past loss entropy, are studied as a result of using a deductible and a policy limit, respectively. Relationships between these entropy measures are derived, and the combined effect of a deductible and a policy limit is also analyzed. By investigating residual and past entropies for survival models, the entropies of losses corresponding to the proportional hazard and proportional reversed hazard models are derived. The Tsallis entropy approach for actuarial models involving truncated and censored random variables is new and more realistic, since it allows a greater degree of flexibility and improves the modeling accuracy. Full article
(This article belongs to the Special Issue Measures of Information II)
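
For readers unfamiliar with the measure, the short worked example below evaluates the Tsallis entropy S_q(f) = (1 - ∫ f(x)^q dx)/(q - 1) numerically for an exponential loss, for the loss right-truncated at a policy limit u, and for the residual (per-payment) loss above a deductible d; it is an illustration, not the paper's derivations.

```python
# Numerical Tsallis entropy S_q(f) = (1 - integral of f^q) / (q - 1) for an
# exponential loss: ground-up, right-truncated at u, and residual above a deductible d.
import numpy as np
from scipy import stats
from scipy.integrate import quad

q, lam, d, u = 0.8, 0.01, 50.0, 250.0           # entropic index, rate, deductible, limit
f = lambda x: stats.expon(scale=1 / lam).pdf(x)
F = lambda x: stats.expon(scale=1 / lam).cdf(x)

def tsallis(density, lower, upper, q=q):
    integral, _ = quad(lambda x: density(x) ** q, lower, upper)
    return (1.0 - integral) / (q - 1.0)

s_full = tsallis(f, 0, np.inf)                           # ground-up loss
s_trunc = tsallis(lambda x: f(x) / F(u), 0, u)           # right-truncated at u
s_resid = tsallis(lambda y: f(d + y) / (1 - F(d)), 0, np.inf)  # residual loss above d

print(f"S_q ground-up      = {s_full:.4f}")
print(f"S_q truncated at u = {s_trunc:.4f}")
print(f"S_q residual (d)   = {s_resid:.4f}  (equals ground-up: exponential is memoryless)")
```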

16 pages, 482 KiB  
Article
A Continuous Granular Model for Stochastic Reserving with Individual Information
by Zhigao Wang and Wenchen Liu
Symmetry 2022, 14(8), 1582; https://doi.org/10.3390/sym14081582 - 1 Aug 2022
Viewed by 1428
Abstract
This paper studies claims data generated by individual policies that are randomly exposed over a continuous period of time. The main aim is to model the occurrence times of individual claims, as well as their developments, given the feature information and exposure periods of the individual policies, and thus to project the outstanding liabilities. We also propose a method to compute the moments of the outstanding liabilities in analytic form, which allows a general insurance company to project its outstanding liabilities more accurately for risk management. It is well known that the features of individual policies affect the occurrence of claims and their developments, and thus the projection of outstanding liabilities; neglecting this information decreases the prediction accuracy of stochastic reserving, where accuracy is measured by the mean square error of prediction (MSEP), whose analytic form is computed from the derived moments of the outstanding liabilities. The parameters of the proposed model are estimated by likelihood and quasi-likelihood, and the properties of the estimators are studied. The asymptotic behavior of the stochastic reserve is also investigated: the parameter estimators are asymptotically multivariate normal, a symmetric distribution, and the deviation of the estimated loss reserve from the theoretical loss reserve is also asymptotically normal. Confidence intervals for the parameter estimators and for this deviation are therefore easily obtained from the symmetry of the normal distribution. Simulations are conducted to support the main theoretical results. Full article
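
A much-simplified simulation sketch of the setting described (not the authors' model): policies with individual exposure periods generate claims via a Poisson process, claims reported after the valuation date form the outstanding liability, and the occurrence rate is recovered by maximum likelihood.

```python
# Simplified individual-claims simulation: exposure periods, Poisson claim
# occurrence, reporting delays, and the resulting IBNR at the valuation date.
import numpy as np

rng = np.random.default_rng(11)
n_pol, rate, mean_sev, valuation = 5000, 0.3, 2000.0, 1.0    # per-year claim rate

expo_start = rng.uniform(0.0, 0.5, n_pol)                    # policy inception
expo = rng.uniform(0.3, 0.5, n_pol)                          # exposure length, so all
                                                             # exposure ends by t = 1
counts = rng.poisson(rate * expo)                            # claims per policy
occ = np.concatenate([s + rng.uniform(0, e, k)
                      for s, e, k in zip(expo_start, expo, counts)])
report = occ + rng.exponential(0.25, occ.size)               # reporting delay
severity = rng.lognormal(np.log(mean_sev), 0.8, occ.size)

rate_hat = counts.sum() / expo.sum()                         # Poisson MLE of the rate
ibnr = report > valuation                                    # incurred but not reported
print(f"MLE of claim rate: {rate_hat:.3f} (true {rate})")
print(f"IBNR claims at valuation: {ibnr.sum()}, outstanding amount: {severity[ibnr].sum():,.0f}")
```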
