Search Results (319)

Search Parameters:
Keywords = actuary

26 pages, 4527 KB  
Article
Dynamic Pricing of Multi-Peril Agricultural Insurance via Backward Stochastic Differential Equations with Copula Dependence and Reinforcement Learning
by Yunjiao Pei, Jun Zhao, Yankai Chen, Jianfeng Li, Qiaoting Chen, Zichen Liu, Xiyan Li, Yifan Zhai and Qi Tang
Mathematics 2026, 14(6), 1043; https://doi.org/10.3390/math14061043 - 19 Mar 2026
Viewed by 145
Abstract
Pricing multi-peril agricultural insurance under compound climate hazards demands a framework that captures stochastic dependence among heterogeneous perils, accommodates non-stationary loss dynamics, and supports adaptive policy optimisation. We demonstrate that backward stochastic differential equations, combined with copula dependence, recurrent neural networks, and reinforcement learning, provide a unifying language for this task; the contribution lies in their principled integration. The dynamic premium is the unique adapted solution of a BSDE whose driver encodes compound-risk dependence through a Student-t copula, forward loss dynamics through a jump-diffusion process, and a green-finance adjustment through an optimal control variable. Within this framework we derive three progressive results by adapting standard BSDE theory to the compound-dependence and policy-control setting. First, existence and uniqueness hold under Lipschitz and square-integrability conditions. Second, a comparison theorem guarantees that a larger correlation matrix yields higher premiums; the degrees-of-freedom effect enters separately through the risk-loading magnitude. Third, the Euler discretisation converges at a rate of one half in the time-step size, with copula estimation, LSTM conditional expectation approximation, and Q-learning HJB solution as sequential components. In an illustrative application to eleven Zhejiang cities (2014–2023, N × T = 110), the framework reduces premium variance by 43.5 percent (bootstrap 95% CI: [38.2%, 48.7%]) while maintaining actuarial adequacy with a mean loss ratio of 0.678, though the modest sample size warrants caution in generalising these findings. Each component contributes statistically significant improvements confirmed by the Friedman test at the 0.1 percent significance level. Full article
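
As a small illustration of the dependence layer named in this abstract, the sketch below samples two correlated peril losses through a Student-t copula. The correlation matrix, degrees of freedom, and lognormal marginals are hypothetical stand-ins, and the BSDE, LSTM, and reinforcement-learning components are not reproduced here.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical correlation matrix and degrees of freedom for two perils.
    R = np.array([[1.0, 0.6], [0.6, 1.0]])
    nu = 5
    n = 10_000

    # Student-t copula sample: scale correlated normals by a chi-square draw.
    z = rng.multivariate_normal(np.zeros(2), R, size=n)
    w = rng.chisquare(nu, size=n) / nu
    t = z / np.sqrt(w)[:, None]
    u = stats.t.cdf(t, df=nu)            # uniform margins with t-copula dependence

    # Map to stand-in marginal loss distributions (e.g., drought and flood losses).
    drought = stats.lognorm.ppf(u[:, 0], s=1.0, scale=np.exp(2.0))
    flood = stats.lognorm.ppf(u[:, 1], s=0.8, scale=np.exp(1.5))

    joint_loss = drought + flood
    print("mean joint loss:", round(joint_loss.mean(), 2))
    print("99% quantile:", round(np.quantile(joint_loss, 0.99), 2))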

31 pages, 2512 KB  
Systematic Review
Optimization of Loss Determination in Claims Settlement Using Smart Industry Tools: A Systematic Review and Implications for the Construction Industry
by Jorge Acevedo-Bastías, Sebastián González Fernández, Luis López-Quijada and Vinicius Minatogawa
Buildings 2026, 16(6), 1175; https://doi.org/10.3390/buildings16061175 - 17 Mar 2026
Viewed by 248
Abstract
The claims resolution process is a cornerstone of the insurance industry, aiming to fairly and accurately determine the economic losses caused by adverse events. Traditionally, adjusters have relied heavily on expert judgment to perform this task. While this approach is essential, it often suffers from subjectivity, inconsistent criteria, and difficulty integrating complex data sources into objective analyses. In this context, Smart Industry tools—such as Artificial Intelligence (AI), Machine Learning (ML), Computer Vision (CV), and the Internet of Things (IoT)—have demonstrated high potential to automate damage detection and assessment; however, their effective integration into loss determination remains uneven across different productive sectors. This study addresses this problem through two objectives. First, we conducted a systematic literature review following PRISMA guidelines to identify which Smart Industry tools are currently used in the insurance sector for loss determination and to analyze their level of maturity in different productive sectors. We searched the Web of Science and Scopus databases, identifying 253 studies, of which 23 met our inclusion criteria. Second, based on the gaps we identified between the construction sector and more advanced industries such as automotive, we propose a methodological framework based on Building Information Modeling (BIM). Our results show that most solutions focus on the detection and technical classification of damage, especially in the automotive sector, while construction lacks methods to convert these technical findings into operational economic estimates. The proposed framework addresses this gap by standardizing technical and economic data from the underwriting stage, enabling more automated, traceable, and objective loss determination for infrastructure claims. Full article

31 pages, 5578 KB  
Article
Modeling the Probability of Default Term Structure Using Different Methodologies Under IFRS 9
by Kgotso Rudolf Moremoholo, Sandile Charles Shongwe and Frans Frederick Koning
Int. J. Financial Stud. 2026, 14(3), 62; https://doi.org/10.3390/ijfs14030062 - 3 Mar 2026
Viewed by 534
Abstract
To mitigate credit risk, banks are required to set aside a specific amount as a safety net to absorb the expected loss on a bank's loan portfolio, called loan loss provisions (LLPs) or provisions for bad debts. All banks worldwide had to adopt International Financial Reporting Standard 9 (IFRS 9) as the financial reporting standard. Unlike its predecessor (i.e., International Accounting Standard 39, IAS 39), IFRS 9 accelerates the recognition of losses by requiring provisions to cover both already-incurred losses and some losses expected in the future by calculating the expected credit loss (ECL). To evaluate whether an obligor's credit quality has deteriorated, the IFRS 9 standard requires banks to compare the obligor's probability of default (PD) at the inception of the loan and at the reporting date. Thus, three methodologies are explored in this study (i.e., Cox proportional hazards (PH), extended Cox PH, and Random Boosting Forest (RBF)) for computation of the PD term structures, using Kaplan–Meier as the benchmark model under IFRS 9. The purpose of this research is to illustrate the application of the three methodologies to the publicly available mortgage loan portfolio from Freddie Mac using different measures of goodness-of-fit and a predictive accuracy measure, the Concordance index (C-index). The comparison analysis reveals that the extended Cox PH and RBF models provide better predictive accuracy (higher C-index) but at the cost of increased complexity and potential overfitting (higher information criteria). However, the Cox PH model shows the most efficient fit and offers a stable and understandable hazard trajectory. Finally, for reproducibility, the SAS and R codes are included to illustrate how each of the results (in the form of a table or figure) was obtained. Full article
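
A rough sketch of how a PD term structure follows from the benchmark survival model mentioned in this abstract: compute a Kaplan–Meier survival curve and read off cumulative PDs as one minus survival. The loan data, censoring scheme, and horizons below are synthetic, and the Cox PH and RBF extensions are not reproduced.

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic loan-level data: time to default (months) and censoring.
    n = 5_000
    default_time = rng.exponential(scale=120, size=n)     # latent time to default
    censor_time = rng.uniform(12, 60, size=n)             # end of observation window
    time = np.minimum(default_time, censor_time)
    event = (default_time <= censor_time).astype(int)     # 1 = default observed

    # Kaplan–Meier product-limit estimate of the survival curve S(t).
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk = np.arange(n, 0, -1)
    surv = np.cumprod(1.0 - event / at_risk)

    # PD term structure: cumulative PD up to horizon t is 1 - S(t).
    for h in (12, 24, 36):
        idx = np.searchsorted(time, h, side="right") - 1
        print(f"cumulative PD at {h} months: {1 - surv[idx]:.4f}")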

33 pages, 820 KB  
Article
The Kerper–Bowron Method: A Foundational Change for Service Contract Claim Estimation and Accounting
by John Kerper and Lee Bowron
Risks 2026, 14(3), 44; https://doi.org/10.3390/risks14030044 - 24 Feb 2026
Viewed by 570
Abstract
The Kerper–Bowron Method (KB Method) is a patent-pending approach that revolutionizes service contract loss estimation and accounting by introducing a precise, contract-level approach to forecasting expected losses and cancellations. Building on a prior 2007 paper, this update presents the Earned Contract formula, aligning with Solvency II and modern accounting standards. By leveraging a probabilistic exposure base and Generalized Linear Models, the KB Method enhances accuracy in claims and cancel liabilities as well as other liability and asset estimates across global service contract markets. This methodology offers superior precision, automation, and compliance, redefining actuarial and financial practices for vehicle and other service contracts. Full article
(This article belongs to the Special Issue Advances in Risk Models and Actuarial Science)
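
The probabilistic exposure base mentioned in this abstract can be pictured with a toy earned-exposure calculation: each contract earns in proportion to its expected loss emergence rather than elapsed time. The emergence pattern, contract ages, and claim counts below are invented, and the calculation is a generic burn-cost sketch, not the patent-pending Kerper–Bowron formula.

    import numpy as np

    # Hypothetical monthly loss-emergence pattern for a 36-month service contract
    # (fractions summing to 1), rising over the contract term.
    pattern = np.linspace(1.0, 3.0, 36)
    pattern = pattern / pattern.sum()

    def earned_fraction(months_elapsed: int) -> float:
        """Share of a contract's total expected losses earned to date."""
        return float(pattern[:months_elapsed].sum())

    # Small book of contracts written at different dates.
    months_in_force = np.array([6, 12, 24, 36, 3])
    earned = np.array([earned_fraction(m) for m in months_in_force])
    claims_to_date = np.array([0, 1, 2, 4, 0])

    # Burn cost per unit of earned (not elapsed-time) exposure.
    print("earned exposure by contract:", earned.round(3))
    print("claims per earned contract:", round(claims_to_date.sum() / earned.sum(), 3))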

45 pages, 9784 KB  
Article
Building a Life Table for Lebanon: Towards a Deeper Understanding of Our Future
by Natalia Bou Sakr, Stéphane Loisel, Gihane Mansour and Yahia Salhi
Risks 2026, 14(2), 34; https://doi.org/10.3390/risks14020034 - 5 Feb 2026
Viewed by 519
Abstract
Lebanon does not have a national mortality table that reflects its demographic and health conditions. Despite ongoing changes in mortality patterns driven by economic crises, political instability, and social changes, outdated foreign tables such as AM80 remain in use in the insurance and public sectors. This dependency introduces significant risks in actuarial calculations, policy design, and long-term planning. This study addresses this gap by building a mortality table specifically adapted to the Lebanese insurance context, together with a first estimation of population-level mortality. In the absence of any official mortality database, we collaborated directly with local insurance companies to access and organize internal records of insured lives. These data, which represent one of the few available structured sources of mortality information in the country, form the core of our analysis. We apply actuarial methods to estimate age-specific death rates and life expectancy and benchmark the results against national and international references to assess consistency and range. By offering a locally grounded, data-driven alternative to imported mortality assumptions, this work fills a critical statistical need. The resulting table supports more accurate forecasting, pricing, and demographic modeling, with applications across insurance, pensions, and public health planning in Lebanon. Full article
(This article belongs to the Special Issue Advances in Risk Models and Actuarial Science)
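
For readers unfamiliar with the mechanics, a mortality table reduces to age-specific death rates and the survivorship they imply. The sketch below estimates q_x from hypothetical exposures and death counts and derives life expectancy at birth; the Gompertz-like mortality curve is a stand-in, not the Lebanese insured-lives data.

    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical exposures and deaths by single year of age.
    ages = np.arange(0, 101)
    exposure = np.full(ages.size, 10_000.0)
    qx_true = np.clip(0.0005 + 0.00003 * np.exp(0.085 * ages), 0, 1)   # stand-in curve
    deaths = rng.binomial(exposure.astype(int), qx_true)

    # Crude age-specific death rates and survivorship from birth.
    qx_hat = deaths / exposure
    kp0 = np.cumprod(1.0 - qx_hat)       # probability of surviving k full years
    e0 = kp0.sum() + 0.5                 # curtate expectation plus half-year adjustment

    print(f"q_65 estimate: {qx_hat[65]:.4f}")
    print(f"life expectancy at birth: {e0:.1f} years")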

16 pages, 567 KB  
Article
Insuring Algorithmic Operations: Liability Risk, Pricing, and Risk Control
by Zhiyong (John) Liu, Jin Park, Mengying Wang and He Wen
Risks 2026, 14(2), 26; https://doi.org/10.3390/risks14020026 - 31 Jan 2026
Viewed by 579
Abstract
Businesses increasingly rely on algorithmic systems and machine learning models to make operational decisions about customers, employees, and counterparties. These “algorithmic operations” can improve efficiency but also concentrate liability in a small number of technically complex, drifting models. Algorithmic operations liability (AOL) risk arises when these systems generate legally cognizable harm. We develop a simple taxonomy of AOL risk sources: model error and bias, data quality failures, distribution shift and concept drift, miscalibration, machine learning operations (MLOps) and integration failures, governance gaps, and ecosystem-level externalities. Building on this taxonomy, we outline a simple analysis of AOL risk pricing using some basic actuarial building blocks: (i) a confusion-matrix-based expected-loss model for false positives and false negatives; (ii) drift-adjusted error rates and stress scenarios; and (iii) credibility-weighted rates when insureds have limited experience data. We then introduce capital and loss surcharges that incorporate distributional uncertainty and tail risk. Finally, we link the framework to AOL risk controls by identifying governance, documentation, model-monitoring, and MLOps practices that both reduce loss frequency and severity and serve as underwriting prerequisites. Full article
(This article belongs to the Special Issue AI for Financial Risk Perception)
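
The three pricing building blocks listed in this abstract, confusion-matrix expected loss, drift-adjusted stress, and credibility weighting, combine in a few lines of arithmetic. Every input below (decision volume, error rates, severities, drift multiplier, credibility factor) is a made-up illustrative value.

    # (i) Confusion-matrix expected annual loss from false positives and negatives.
    n_decisions = 1_000_000                 # automated decisions per year
    base_fpr, base_fnr = 0.02, 0.01
    sev_fp, sev_fn = 50.0, 400.0            # cost per wrongful denial / missed harm

    def expected_loss(fpr: float, fnr: float, neg_share: float = 0.9) -> float:
        return n_decisions * (neg_share * fpr * sev_fp + (1 - neg_share) * fnr * sev_fn)

    # (ii) Drift-adjusted stress scenario: error rates inflated for distribution shift.
    drift_mult = 1.5
    base = expected_loss(base_fpr, base_fnr)
    stressed = expected_loss(base_fpr * drift_mult, base_fnr * drift_mult)

    # (iii) Credibility-weighted rate: blend limited insured experience with a book rate.
    insured_experience, book_rate, credibility = 0.8 * base, 1.1 * base, 0.3
    premium = credibility * insured_experience + (1 - credibility) * book_rate

    print(f"base expected loss:  {base:,.0f}")
    print(f"drift-stressed loss: {stressed:,.0f}")
    print(f"credibility premium: {premium:,.0f}")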

26 pages, 766 KB  
Article
Regression Extensions of the New Polynomial Exponential Distribution: NPED-GLM and Poisson–NPED Count Models with Applications in Engineering and Insurance
by Halim Zeghdoudi, Sandra S. Ferreira, Vinoth Raman and Dário Ferreira
Computation 2026, 14(1), 26; https://doi.org/10.3390/computation14010026 - 21 Jan 2026
Viewed by 511
Abstract
The New Polynomial Exponential Distribution (NPED), introduced by Beghriche et al. (2022), provides a flexible one-parameter family capable of representing diverse hazard shapes and heavy-tailed behavior. Regression frameworks based on the NPED, however, have not yet been established. This paper introduces two methodological extensions: (i) a generalized linear model (NPED-GLM) in which the distribution parameter depends on covariates, and (ii) a Poisson–NPED count regression model suitable for overdispersed and heavy-tailed count data. Likelihood-based inference, asymptotic properties, and simulation studies are developed to investigate the performance of the estimators. Applications to engineering failure-count data and insurance claim frequencies illustrate the advantages of the proposed models relative to classical Poisson, negative binomial, and Poisson–Lindley regressions. These developments substantially broaden the applicability of the NPED in actuarial science, reliability engineering, and applied statistics. Full article
(This article belongs to the Section Computational Engineering)
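
To show the shape of such a parameter-regression extension, the sketch below ties a positive distribution parameter to covariates through a log link and maximises the likelihood numerically. Because the NPED density is not given in this listing, an exponential density stands in for it; replacing the marked log-density line with the NPED log-density of Beghriche et al. (2022) would give the NPED-GLM.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)

    # Covariates and positive responses (exponential stand-in data).
    n = 2_000
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    beta_true = np.array([0.5, -0.8])
    y = rng.exponential(scale=np.exp(X @ beta_true))

    def negloglik(beta):
        theta = np.exp(X @ beta)            # log link keeps the parameter positive
        # Stand-in exponential log-density; swap in the NPED log-density here.
        return np.sum(np.log(theta) + y / theta)

    fit = minimize(negloglik, x0=np.zeros(2), method="BFGS")
    print("estimated regression coefficients:", fit.x.round(3))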

35 pages, 830 KB  
Article
Predicting Financial Contagion: A Deep Learning-Enhanced Actuarial Model for Systemic Risk Assessment
by Khalid Jeaab, Youness Saoudi, Smaaine Ouaharahe and Moulay El Mehdi Falloul
J. Risk Financial Manag. 2026, 19(1), 72; https://doi.org/10.3390/jrfm19010072 - 16 Jan 2026
Viewed by 1126
Abstract
Financial crises increasingly exhibit complex, interconnected patterns that traditional risk models fail to capture. The 2008 global financial crisis, 2020 pandemic shock, and recent banking sector stress events demonstrate how systemic risks propagate through multiple channels simultaneously—e.g., network contagion, extreme co-movements, and information cascades—creating a multidimensional phenomenon that exceeds the capabilities of conventional actuarial or econometric approaches alone. This paper addresses the fundamental challenge of modeling this multidimensional systemic risk phenomenon by proposing a mathematically formalized three-tier integration framework that achieves 19.2% accuracy improvement over traditional models through the following: (1) dynamic network-copula coupling that captures 35% more tail dependencies than static approaches, (2) semantic-temporal alignment of textual signals with network evolution, and (3) economically optimized threshold calibration reducing false positives by 35% while maintaining 85% crisis detection sensitivity. Empirical validation on historical data (2000–2023) demonstrates significant improvements over traditional models: 19.2% increase in predictive accuracy (R2 from 0.68 to 0.87), 2.7 months earlier crisis detection compared to Basel III credit-to-GDP indicators, and 35% reduction in false positive rates while maintaining 85% crisis detection sensitivity. Case studies of the 2008 crisis and 2020 market turbulence illustrate the model’s ability to identify subtle precursor signals through integrated analysis of network structure evolution and semantic changes in regulatory communications. These advances provide financial regulators and institutions with enhanced tools for macroprudential supervision and countercyclical capital buffer calibration, strengthening financial system resilience against multifaceted systemic risks. Full article
(This article belongs to the Special Issue Financial Regulation and Risk Management amid Global Uncertainty)
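
One ingredient listed above, economically optimised threshold calibration, is easy to sketch: choose the alarm threshold that minimises the expected cost of false alarms versus missed crises. The scores, class sizes, and cost ratio below are synthetic and are not the paper's calibration.

    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic crisis-probability scores for calm and pre-crisis periods.
    calm = rng.beta(2, 8, size=5_000)
    crisis = rng.beta(6, 3, size=200)
    scores = np.concatenate([calm, crisis])
    labels = np.concatenate([np.zeros(calm.size), np.ones(crisis.size)])

    # Assumed economic costs: a false alarm is cheap relative to a missed crisis.
    cost_fp, cost_fn = 1.0, 40.0

    thresholds = np.linspace(0.05, 0.95, 91)
    costs = []
    for t in thresholds:
        alarm = scores >= t
        fp = np.sum(alarm & (labels == 0))
        fn = np.sum(~alarm & (labels == 1))
        costs.append(cost_fp * fp + cost_fn * fn)

    best = thresholds[int(np.argmin(costs))]
    print(f"economically optimal alarm threshold: {best:.2f}")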

30 pages, 4414 KB  
Article
Model Averaging and Grid Maps for Modeling Heavy-Tailed Insurance Data
by Lira B. Mothibe and Sandile C. Shongwe
Risks 2026, 14(1), 11; https://doi.org/10.3390/risks14010011 - 5 Jan 2026
Viewed by 552
Abstract
This work presents a practical approach to improve risk quantification for heavy-tailed insurance claims through model averaging and grid map visualization, addressing the drawbacks of traditional single “best” model selection commonly used in actuarial and model-fitting literature. This is a data-driven study with a focus on Danish fire loss data, where the following are fitted: (i) 16 standard single distributions, (ii) 256 composite distributions, and (iii) 256 mixture distributions; for the composite and mixture distributions, we focus on the top 20 leading models in terms of the information criteria (i.e., the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC)). Model selection uncertainty is explicitly addressed by AIC- and BIC-weighted averaging within Occam's window (relying on weighted point estimates), while grid maps simultaneously plot information criteria against risk measures, specifically the Value-at-Risk (VaR) and Tail Value-at-Risk (TVaR) at the 95% and 99% thresholds, to highlight goodness-of-fit versus tail-risk trade-offs. It is observed that the model-averaged risk measures from composite models align more closely with the empirical values. That is, model-averaged estimates across all categories align closely with the empirical 95% VaR but conservatively elevate the 99% TVaR, promoting safer capital reserves. Grid maps and model averaging confirm that mixture and composite models better capture the heavy-tailed nature of Danish fire claims data compared to fitting a single distribution. Full article
(This article belongs to the Special Issue Statistical Models for Insurance)
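
The core model-averaging step can be sketched with standard tools: fit a few candidate severity distributions, turn their information criteria into Akaike weights, and average the implied risk measures. The simulated lognormal losses and three candidate families below stand in for the Danish fire data and the much larger model set used in the paper.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    losses = stats.lognorm.rvs(s=1.2, scale=2.0, size=2_000, random_state=rng)

    candidates = {"lognorm": stats.lognorm, "gamma": stats.gamma, "weibull": stats.weibull_min}
    aic, var95 = {}, {}
    for name, dist in candidates.items():
        params = dist.fit(losses, floc=0)                # location fixed at zero
        loglik = dist.logpdf(losses, *params).sum()
        k = len(params) - 1                              # number of estimated parameters
        aic[name] = 2 * k - 2 * loglik
        var95[name] = dist.ppf(0.95, *params)

    # Akaike weights and the weighted (model-averaged) VaR point estimate.
    delta = np.array(list(aic.values())) - min(aic.values())
    weights = np.exp(-0.5 * delta)
    weights /= weights.sum()
    va = float(np.dot(weights, list(var95.values())))
    print("AIC weights:", dict(zip(aic, weights.round(3))))
    print(f"model-averaged 95% VaR: {va:.2f}")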

27 pages, 10004 KB  
Article
Nowcast-It: A Practical Toolbox for Real-Time Adjustment of Reporting Delays in Epidemic Surveillance
by Amna Tariq, Ping Yan, Amanda Bleichrodt and Gerardo Chowell
Viruses 2025, 17(12), 1598; https://doi.org/10.3390/v17121598 - 10 Dec 2025
Cited by 1 | Viewed by 787
Abstract
One difficulty that arises in tracking and forecasting real-time epidemics is reporting delays, defined as the inherent delay between the time of symptom onset and the time a case is reported. Reporting delays can be caused by delays in case detection, symptom onset after infection, seeking medical care, or diagnostics, and they distort the accurate forecasting of diseases during epidemics and pandemics. To address this, we introduce a practical nowcasting approach grounded in survival analysis and actuarial science, explicitly allowing for non-stationarity in reporting delay patterns to better capture real-world variability. Despite its relevance, no flexible and accessible toolbox currently exists for non-stationary delay adjustment. Here, we present Nowcast-It, a tutorial-based toolbox that includes two components: (1) an R code base for delay adjustment and (2) a user-friendly R-Shiny application to enable interactive visualization and reporting delay correction without prior coding expertise. The toolbox supports daily, weekly, or monthly resolution data and enables model performance assessment using metrics such as mean absolute error, mean squared error, and 95% prediction interval coverage. We demonstrate the utility of the Nowcast-It toolbox using publicly available weekly Ebola case data from the Democratic Republic of Congo. We and others have adjusted for reporting delays in real-time analyses (e.g., Singapore) and produced early COVID-19 forecasts; here, we package those delay adjustment routines into an accessible toolbox. It is designed for researchers, students, and policymakers alike, offering a scalable and accessible solution for addressing reporting delays during outbreaks. Full article
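
The basic actuarial delay adjustment behind a nowcast can be shown in miniature: divide the partially reported counts for recent onset weeks by the share of cases expected to be reported by now. The reporting-delay distribution and counts below are hypothetical, and the stationary delay assumption used here is exactly what Nowcast-It relaxes.

    import numpy as np

    # Hypothetical cumulative reporting-delay distribution: share of cases
    # reported within d weeks of symptom onset (d = 0, 1, 2, 3, 4+).
    report_cdf = np.array([0.30, 0.65, 0.85, 0.95, 1.00])

    # Observed (still incomplete) case counts for the last five onset weeks,
    # most recent week last.
    observed = np.array([120, 130, 110, 70, 30])

    # Weeks of follow-up each onset week has had so far, then the nowcast.
    follow_up = np.arange(observed.size)[::-1]
    frac_reported = report_cdf[np.minimum(follow_up, report_cdf.size - 1)]
    nowcast = observed / frac_reported
    print("nowcast counts by onset week:", nowcast.round(1))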

11 pages, 528 KB  
Article
Disparities in Colorectal Cancer Mortality and Survival Trends Among Hispanics Living in Puerto Rico (2000–2021): A Comparison Between Early-Onset and Average-Onset Disease
by Camille Montalvo-Pacheco, Carlos R. Torres-Cintrón, Marilyn Moró-Carrión, Hilmaris Centeno-Girona, Luis D. Borrero-García and María González-Pons
Life 2025, 15(11), 1742; https://doi.org/10.3390/life15111742 - 13 Nov 2025
Viewed by 960
Abstract
Colorectal cancer (CRC) is the leading cause of cancer-related death in Puerto Rico, a U.S. territory with noted disparities in CRC incidence, particularly among those with early-onset disease (EOCRC). Although EOCRC incidence has been consistently increasing in the U.S. mainland, and a disparate burden has been reported among Hispanics, EOCRC mortality and survival are yet to be assessed among Hispanics living in Puerto Rico (PRH). In this study, we analyzed EOCRC mortality and survival trends in PRH and compared them to those of other U.S. populations. Mortality data were obtained from the Puerto Rico Central Cancer Registry and the Surveillance, Epidemiology, and End Results (SEER) program. Descriptive characteristics and temporal trends were derived via SEER*Stat software (version 9.0.42) and Joinpoint regression models, respectively. Relative survival was estimated using the actuarial method and the Ederer II approach. Overall CRC mortality trends declined, but EOCRC mortality among Hispanics increased. PRH exhibited the lowest 5-year survival for regional-stage cancers (54.10%), with non-Hispanic Black (NHB) individuals having the lowest survival among younger individuals. This study highlights significant disparities in EOCRC mortality trends and underscores an urgent need for targeted public health strategies and research efforts to address the disproportionate burden of EOCRC among PRH. Full article
(This article belongs to the Special Issue Cancer Epidemiology)

17 pages, 591 KB  
Article
Extending Approximate Bayesian Computation to Non-Linear Regression Models: The Case of Composite Distributions
by Mostafa S. Aminzadeh and Min Deng
Risks 2025, 13(11), 220; https://doi.org/10.3390/risks13110220 - 5 Nov 2025
Viewed by 595
Abstract
Modeling loss data is a crucial aspect of actuarial science. In the insurance industry, small claims occur frequently, while large claims are rare. Traditional heavy-tail distributions, such as Weibull, Log-Normal, and Inverse Gaussian distributions, are not suitable for describing insurance data, which often exhibit skewness and fat tails. The literature has explored classical and Bayesian inference methods for the parameters of composite distributions, such as the Exponential–Pareto, Weibull–Pareto, and Inverse Gamma–Pareto distributions. These models effectively separate small to moderate losses from significant losses using a threshold parameter. This research aims to introduce a new composite distribution, the Gamma–Pareto distribution with two parameters, and employ a numerical computational approach to find the maximum likelihood estimates (MLEs) of its parameters. A novel computational approach for a nonlinear regression model where the loss variable is distributed as the Gamma–Pareto and depends on multiple covariates is proposed. The maximum likelihood (ML) and Approximate Bayesian Computation (ABC) methods are used to estimate the regression parameters. The Fisher information matrix, along with a multivariate normal distribution as the prior distribution, is utilized through the ABC method. Simulation studies indicate that the ABC method outperforms the ML method in terms of accuracy. Full article
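
The ABC mechanics referred to above reduce to simulate and compare: draw parameters from the prior, simulate data, and keep draws whose summary statistics land close to the observed ones. The sketch below runs plain rejection ABC for a single gamma scale parameter with a flat prior; it illustrates the algorithm only, not the paper's Gamma–Pareto regression model or its Fisher-information-based prior.

    import numpy as np

    rng = np.random.default_rng(6)

    # "Observed" losses (stand-in data with a known scale parameter).
    theta_true = 2.5
    observed = rng.gamma(shape=2.0, scale=theta_true, size=500)
    obs_summary = np.array([observed.mean(), observed.std()])

    # Rejection ABC with a flat prior and a relative distance tolerance.
    n_draws, tol = 20_000, 0.15
    prior_draws = rng.uniform(0.5, 6.0, size=n_draws)
    accepted = []
    for theta in prior_draws:
        sim = rng.gamma(shape=2.0, scale=theta, size=500)
        summary = np.array([sim.mean(), sim.std()])
        if np.linalg.norm(summary - obs_summary) / np.linalg.norm(obs_summary) < tol:
            accepted.append(theta)

    print(f"accepted {len(accepted)} draws; approximate posterior mean: {np.mean(accepted):.2f}")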

15 pages, 1259 KB  
Article
Relative Survival Following TEER for Significant Mitral Regurgitation: A Contemporary Cohort Analysis
by Marcel Almendarez, Isaac Pascual, Beatriz Nieves, Rut Alvarez Velasco, Alberto Alperi, Rebeca Lorca, Carmen de la Hoz, Victor Leon, Luis Carlos Zamayoa, Ismael Rivera, Angela Herrero and Pablo Avanzas
J. Clin. Med. 2025, 14(21), 7825; https://doi.org/10.3390/jcm14217825 - 4 Nov 2025
Viewed by 840
Abstract
Background/Objectives: Mitral regurgitation (MR) is the most common valvular defect worldwide, with an increasing incidence attributed to the aging population. Transcatheter edge-to-edge repair (TEER) is a viable treatment, but its long-term survival impact, particularly across sexes, remains underexplored. We aimed to assess relative survival (RS) and excess mortality (EM) in patients undergoing TEER for significant MR, with a focus on sex-based differences. Methods: We analyzed 253 patients treated with TEER between October 2015 and August 2024, stratified by sex. Observed survival (OS) was calculated using the actuarial life table method; expected survival (ES) was estimated via the Ederer II method using matched population data. Primary endpoints were RS and EM; secondary endpoints included mortality differences by MR subtype. Results: OS at 1, 2, and 3 years was 88.9%, 87.4%, and 78.9%, respectively. EM peaked in the first year (7.8%) and declined thereafter. RS was lower than in the general population, primarily due to persistently reduced RS and elevated EM in men. Women achieved RS comparable to matched peers from year one. Sex was not an independent predictor of mortality (HR 0.88, 95% CI 0.38–2.03, p = 0.771). Conclusions: In patients with significant MR undergoing TEER, EM was concentrated in the first year. Women reached RS comparable to the general population, while men showed persistent excess mortality. Sex was not independently associated with survival after adjustment. Full article
(This article belongs to the Special Issue Clinical Therapeutic Advances of Mitral Regurgitation)
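
The actuarial life-table and relative-survival calculations mentioned in the methods can be reproduced schematically: effective exposure is the number at risk minus half the withdrawals, observed survival is the running product of yearly survival factors, and relative survival divides it by the matched expected survival. The cohort counts and expected-survival values below are invented, not the study's data.

    import numpy as np

    # Yearly counts for a hypothetical cohort (not the study's figures).
    at_risk = np.array([253, 200, 150, 100])     # patients entering each year
    deaths = np.array([28, 10, 9, 8])
    withdrawn = np.array([25, 40, 41, 20])       # censored during the year

    # Actuarial (life-table) method: withdrawals count for half a year at risk.
    effective = at_risk - withdrawn / 2.0
    q = deaths / effective
    observed_surv = np.cumprod(1.0 - q)

    # Expected survival of an age- and sex-matched general population (assumed),
    # as in the Ederer II approach; relative survival is the ratio.
    expected_surv = np.array([0.93, 0.87, 0.81, 0.76])
    relative_surv = observed_surv / expected_surv
    print("observed survival:", observed_surv.round(3))
    print("relative survival:", relative_surv.round(3))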

26 pages, 2278 KB  
Article
Optimal Decision-Making for Annuity Insurance Under the Perspective of Disability Risk
by Ziran Xu, Lufei Sun and Xiang Yuan
Mathematics 2025, 13(20), 3290; https://doi.org/10.3390/math13203290 - 15 Oct 2025
Viewed by 731
Abstract
Annuity insurance is a crucial financial tool for mitigating risks associated with aging, yet it has not gained significant traction in China’s insurance market, especially amid the challenges posed by an aging population. This study develops a discrete-time multi-period life-cycle model to analyze optimal annuity purchases for China’s middle-aged population under disability risk and explores in depth the impact and underlying mechanisms of disability risk on their annuity insurance purchase decisions. Disability is endogenized via two channels: financial-constraint effects (medical costs and pre-retirement income loss) and stochastic health state transitions with recovery and mortality. Using data from China Health and Retirement Longitudinal Study (2018–2020) to estimate age- and gender-specific transition matrices and data from China Household Finance Survey (2019) to link income with initial assets, we solve the model by the endogenous grid method and simulate actuarially fair annuities. The findings reveal substantial under-demand for annuities among China’s middle-aged population. Under inflation, the modest yield premium of annuities over inflation significantly depresses purchases by middle- and low-wealth households, while high-wealth individuals are jointly constrained by rapidly rising health expenditures and inadequate annuity returns. Notably, behavioral patterns could shift fundamentally under a hypothetical zero-inflation scenario. Full article
(This article belongs to the Special Issue Computational Models in Insurance and Financial Mathematics)
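
The actuarially fair annuity underlying such life-cycle simulations is just the expected discounted value of survival-contingent payments. The sketch below prices a unit annuity at age 60 from a hypothetical mortality curve and a flat 3% discount rate; it is not the paper's CHARLS-calibrated model.

    import numpy as np

    # Stand-in mortality rates from age 60 to 100 and a flat discount rate.
    ages = np.arange(60, 101)
    qx = np.clip(0.004 * np.exp(0.09 * (ages - 60)), 0, 1)
    kpx = np.cumprod(1.0 - qx)               # probability of surviving k years past 60
    v = 1.0 / 1.03                           # annual discount factor at 3%

    # Actuarially fair price of an annuity-immediate paying 1 per year survived.
    fair_price = np.sum(v ** np.arange(1, kpx.size + 1) * kpx)
    print(f"fair annuity factor at age 60: {fair_price:.2f}")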

17 pages, 803 KB  
Article
Bootstrap Initialization of MLE for Infinite Mixture Distributions with Applications in Insurance Data
by Aceng Komarudin Mutaqin
Risks 2025, 13(10), 196; https://doi.org/10.3390/risks13100196 - 4 Oct 2025
Viewed by 881
Abstract
Maximum likelihood estimation (MLE) in infinite mixture distributions often lacks closed-form solutions, requiring numerical methods such as the Newton–Raphson algorithm. Selecting appropriate initial values is a critical challenge in these procedures. This study introduces a bootstrap-based approach to determine initial parameter values for MLE, employing both nonparametric and parametric bootstrap methods to generate the mixing distribution. Monte Carlo simulations across multiple cases demonstrate that the bootstrap-based approaches, especially the nonparametric bootstrap, provide reliable and efficient initialization and yield consistent maximum likelihood estimates even when raw moments are undefined. The practical applicability of the method is illustrated using three empirical datasets: third-party liability claims in Indonesia, automobile insurance claim frequency in Australia, and total car accident costs in Spain. The results indicate stable convergence, accurate parameter estimation, and improved reliability for actuarial applications, including premium calculation and risk assessment. The proposed approach offers a robust and versatile tool both for research and in practice in complex or nonstandard mixture distributions. Full article
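
The bootstrap-initialisation idea can be demonstrated on a familiar infinite mixture, the negative binomial (a Poisson-gamma mixture): bootstrap the sample moments to form method-of-moments starting values, then hand them to a numerical likelihood maximiser. The simulated claim counts and the Nelder-Mead optimiser below are illustrative choices, not the paper's exact procedure.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln

    rng = np.random.default_rng(7)
    claims = rng.negative_binomial(n=2.0, p=0.4, size=1_000)   # stand-in claim counts

    # Nonparametric bootstrap of mean and variance to get method-of-moments
    # starting values for (r, p); requires variance > mean (overdispersion).
    boot_means = np.array([rng.choice(claims, claims.size).mean() for _ in range(200)])
    boot_vars = np.array([rng.choice(claims, claims.size).var() for _ in range(200)])
    m, v = boot_means.mean(), boot_vars.mean()
    p0 = m / v
    r0 = m * p0 / (1 - p0)

    def negloglik(params):
        r, p = params
        if r <= 0 or not 0 < p < 1:
            return np.inf
        return -np.sum(gammaln(claims + r) - gammaln(r) - gammaln(claims + 1)
                       + r * np.log(p) + claims * np.log1p(-p))

    fit = minimize(negloglik, x0=[r0, p0], method="Nelder-Mead")
    print("bootstrap starting values:", (round(r0, 3), round(p0, 3)))
    print("maximum likelihood estimates:", fit.x.round(3))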
