 
 
Article

ResPoNet: A Residual Neural Network for Efficient Valuation of Large Variable Annuity Portfolios

1 Economics and Management School, Wuhan University, Wuhan 430072, China
2 Ningbo National Institute of Insurance Development (NIID), Wuhan University, Ningbo 315100, China
3 Guangzhou Futures Exchange, Guangzhou 510630, China
4 Department of Statistical and Actuarial Sciences, The University of Western Ontario, London, ON N6A 3K7, Canada
5 Division of Physical Sciences and Mathematics, University of the Philippines Visayas, Miag-ao, Iloilo 5023, Philippines
6 School of Finance, Guangdong University of Foreign Studies, Guangzhou 510006, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(12), 1916; https://doi.org/10.3390/math13121916
Submission received: 29 April 2025 / Revised: 1 June 2025 / Accepted: 5 June 2025 / Published: 8 June 2025
(This article belongs to the Special Issue Actuarial Statistical Modeling and Applications)

Abstract:
Accurately valuing large portfolios of Variable Annuities (VAs) poses a significant challenge due to the high computational burden of Monte Carlo simulations and the limitations of spatial interpolation methods that rely on manually defined distance metrics. We introduce a residual portfolio valuation network (ResPoNet), a novel residual neural network architecture enhanced with weighted loss functions, designed to improve valuation accuracy and scalability. ResPoNet systematically accounts for mortality risk and path-dependent liabilities using residual layers, while the custom loss function ensures better convergence and interpretability. Numerical results on synthetic portfolios of 100,000 contracts show that ResPoNet achieves significantly lower valuation errors than baseline neural and spatial methods, with faster convergence and improved generalization. Sensitivity analysis reveals key drivers of performance, including guarantee complexity and contract maturity, demonstrating the robustness and practical applicability of ResPoNet in large-scale VA valuation.

1. Introduction

Variable annuities (VAs) remain a critical financial tool in retirement planning as aging demographics and financial market volatility intensify demand for hybrid wealth-longevity solutions. According to the 2024 annual sales report of the Life Insurance Marketing and Research Association (LIMRA), traditional VA sales surged 19% to $61.2 billion, marking the first annual growth since 2021, while the broader U.S. retail annuity market reached a historic $432.4 billion [1]. VAs allow policyholders not only to participate in financial markets and capture upside trends but also to enjoy investment protection in adverse market scenarios. Beginning in the 1980s, a guaranteed minimum death benefit (GMDB) was included in insurance contracts. In the early 1990s, insurers started to provide different guaranteed minimum life benefits (GMLB) to meet clients’ additional needs. Furthermore, the guaranteed minimum accumulation benefit (GMAB) and the guaranteed minimum income benefit (GMIB) provide accumulation and income protection for a fixed number of periods, respectively. A contractual feature widely valued by insurance purchasers is the guaranteed minimum maturity benefit (GMMB), which ensures policyholders receive the greater of their total premium payments or a predetermined stepped-up value at the conclusion of the accumulation phase. The guaranteed minimum withdrawal benefit (GMWB) guarantees periodic withdrawals throughout the policyholder’s lifetime, provided cumulative withdrawals remain within contractual limits. For further details concerning VAs, refer to [2].
As VAs continue to evolve and permeate the annuity market as a major component, most insurers are confronted with the challenge of managing large portfolios of VA contracts. A notable amount of research focuses on the fair valuation of individual VA contracts [3,4,5]; however, this undertaking cannot simply be extended to large VA portfolios due to the intricacies of the various payoff functions and the massive calculation required when the number of contracts is extremely large [6]. Hence, most insurers rely heavily on Monte Carlo (MC) simulation, a computationally intensive method [7], to value these portfolios. This particular issue motivates the development of an alternative technique that significantly reduces the computation time for VA portfolio valuation.
Ref. [8] proposed a valuation framework based on spatial interpolation that alleviates the calculation load of the MC simulation. Nonetheless, the suggested framework requires that expert users choose an appropriate distance function depending on the attributes of the portfolio under consideration and on the space of the VA-contract samples in which the input portfolio is defined. Thus, Ref. [9] proposed replacing traditional spatial-interpolation techniques with a neural network (NN) that systematically finds the optimal distance function. Since the network only requires a vector of parameters that wholly describes each VA contract in the input portfolio, this learned distance function can then be used to interpolate the valuation accurately. Insurers only need to price representative contracts (i.e., a set of policies that represent the portfolio) using the MC simulation, and then utilize the distances from the input policies to the representative contracts to value the whole portfolio. Nevertheless, a shortcoming of [9] is that stochastic risk factors affecting the valuation of variable annuities are not adequately taken into account. Additionally, prior works overlook the potential for neural networks to deviate significantly from traditional spatial interpolation behavior during training.
As noted above, existing valuation methodologies exhibit critical limitations. These include the computational infeasibility of MC simulations, the inherent subjectivity in expert-defined distance metrics for spatial interpolation, and the inadequacy of prior neural network approaches in capturing comprehensive risk dynamics. These limitations become increasingly pronounced as VA portfolios expand in both size and complexity, exacerbating the challenges in practical valuation. Insurers are under increasing pressure to balance accuracy, scalability, and computational efficiency, particularly when managing heterogeneous portfolios with varied guarantee structures (e.g., GMDB, GMMB, and GMIB) and path-dependent liabilities. Furthermore, current models’ failure to systematically account for stochastic mortality risk and dynamic market dependencies significantly compromises their practical applicability and reliability. This study responds to these challenges by introducing a residual portfolio valuation network (ResPoNet) that enhances valuation precision, accelerates convergence during training, and maintains interpretability through a weighted loss function.
The proposed ResPoNet enhances the neural network architecture in [9] through structural innovations (e.g., residual connections) and methodological refinements (e.g., weighted loss functions), offering a robust solution for large-scale VA portfolio valuation and addressing the key limitations in prior work. Our contributions relative to the existing literature are highlighted by the following accomplishments: (i) We improve the accuracy of the estimated valuation of large portfolios by inserting the residual connection into the NN. (ii) Our extended model captures the stochastic risk factors impacting the VA contracts and speeds up the training process. (iii) We encapsulate the loss term of the weights in the framework, which not only makes the training process smoother but also reflects the distance between the value of the representative contracts and the input policy.
The remainder of this paper is organized as follows. Section 2 briefly summarizes the methods of pricing a large portfolio of VA policies. It also introduces previous studies covering machine learning methods applied to the insurance field. In Section 3, background knowledge related to training methodologies is laid out, and the ResPoNet framework and its superiority are highlighted. Section 4 presents the results of the numerical experiments and a performance comparison of the traditional spatial interpolation methods and certain neural networks. Section 5 conducts a sensitivity analysis. Lastly, Section 6 provides some concluding remarks.

2. Pertinent Literature on VAs

Refs. [10,11] pioneered the valuation of the guaranteed minimum benefit (GMB) embedded in insurance contracts. Variable annuity guarantees are generally accepted as granting financial options to the policyholder [12], so the traditional replicating-portfolio approach of option pricing is used to approximate the cash-flow valuation of VA products [9,13]. Following [14], GMWB policies are valued under static and dynamic withdrawal scenarios. Ref. [4] demonstrated that a universal pricing framework for various GMBs could be achieved, but a numerical solution is only possible by adopting the MC technique together with a finite mesh discretization approach. Notably, the typical focus of pricing is on individual contracts [5,14,15]. However, the existing valuation methods for individual VAs cannot feasibly be extended to a large portfolio of VAs [16]. For instance, the complexity of the valuation function does not lead to closed-form solutions when evaluating the liability in the benefit guarantees. Moreover, the calculation cost increases substantially as the number of policies grows. Needless to say, methods tailored to specific types of individual policies are unsuitable for generalization to the pricing of large portfolios containing disparate types of policies. The most commonly used method is MC simulation, which is computationally intensive and costly [17].
To circumvent the persistent issues associated with MC simulation, both statistical methods and machine learning techniques have been explored to approximate the value of a VA portfolio. For instance, Ref. [18] proposed an efficient willow tree method for valuing VAs with multiple guarantees, outperforming MC and nested simulations. Ref. [19] introduced rank-order Kriging to value VA portfolios via rank transformation. Ref. [20] developed a green mesh method for real-time VA valuation. Machine learning techniques are particularly noted for their low computational cost, and advances in learning theory have further accelerated progress in this area. Clustering methods, including k-means-based techniques [6,21], have been proposed to reduce the number of VA contracts that must be computed using the MC simulation. In clustering, a set of representative contracts is used to estimate the valuation of the entire input portfolio. Bootstrap aggregation (bagging) is another approach used to reduce the variance of a supervised learning method [22]. Bagging with regression trees was empirically shown by [23] to effectively evaluate the fair market price of VA contracts. A foundational issue in ensemble-based valuation was identified by [24], who demonstrated that bagging estimators exhibit systematic prediction bias due to nonlinear interactions between constituent models. Moment-matching methods were developed [16] to match the statistical properties between the representative policies and the original input portfolio. Subsequent work by [25] introduced a hybrid architecture integrating random forest regression with MC simulation. Model selection trade-offs in interval forecasting were analyzed by [26] using gradient boosting coupled with bootstrap resampling. In addition, Ref. [27] applied neural tangent kernel regression to VA portfolios.
A framework based on spatial interpolation could reduce the computational burden of the MC simulation [8]. Specifically, Ref. [9] replaced spatial interpolation techniques, such as Kriging, inverse distance weighting (IDW), and radial basis function (RBF), with neural networks to automate the process of finding the optimal distance function. These neural network methods have been widely applied to address insurance problems. Some early applications include predicting the insolvency or financial distress of insurers [28], determining the total insurance claim amount [29], critically testing the service quality of life insurance businesses [30], and using NNs and linear regression to investigate the relationship between customers and insurance service providers [31].
In extending the work of [9], we propose ResPoNet, which applies an NN to the pricing of a large portfolio of VA contracts. So far, machine learning methods such as clustering algorithms and neural networks have proven useful in valuation, hedging, and prediction for financial product development. However, to the best of our knowledge, there has been no application of residual networks (ResNets) in the pricing of VA portfolios. As an innovation, we infuse residual connections into the training process and add a loss-of-weight term to the loss function, thereby obtaining a ResNet with higher precision and a more stable learning process. The ResNet strengthens the convolutional neural network with the incorporation of a channel called identity mapping, which replicates the characteristics of the shallow network and improves the network performance [32]. Ref. [33] focused on deep residual networks and stock price graphs to examine their potential in stock price trend prediction problems. Ref. [34] proposed an improved residual network model for offline signature verification by incorporating a convolutional block attention module. In an effort to develop a more effective alternative to previous approaches for dealing with class imbalance, Ref. [35] constructed a new loss function: the focal loss facilitates the training of a high-accuracy detector that significantly outperforms other alternatives. In this paper, we bring these advancements, namely the new loss function and the ResNet, to the insurance field.

3. Framework Description

In the succeeding sections, we denote vectors by bold lowercase English/Greek letters and matrices by bold uppercase English/Greek letters. A neural network is a series of algorithms that endeavor to discover latent relationships in data sets. It is a machine learning technique that processes information through a distributed parallel system. Generally, a neural network with a single hidden layer can approximate any continuous function to an arbitrary degree of accuracy, depending on the number of nodes in the hidden layer [36]. In this section, the inputs are the differences in attributes between the representative contracts obtained using the clustering method and the policies to be priced, along with the output weights of these representative contracts.
A neural network consists of an input layer, hidden layers, and an output layer. These layers are collections of interconnected processing units called neurons. The information is fed into the input layer, multiplied by the weights, and then transmitted to the hidden layers. The hidden layers apply activation functions to the weighted information. The newly processed information is transferred to the next layer as a weighted sum. Suppose $x_1^{(l)}, x_2^{(l)}, \ldots, x_J^{(l)}$ are the inputs at layer $l$. The $i$th linear combination at the next layer $(l+1)$ can be obtained as
$$a_i^{(l+1)} = h\Bigg( \sum_{j=1}^{J} \theta_{ij}^{(l)} x_j^{(l)} + b_j^{(l)} \Bigg),$$
where the parameter $\theta_{ij}^{(l)}$ denotes a weight and the parameter $b_j^{(l)}$ represents a bias. The quantity $a_i^{(l+1)}$ denotes the activation of the $i$th neuron at layer $(l+1)$, and $h(\cdot)$ stands for the activation function. Generally, the number of hidden layers is uncertain. Once the information has passed through the hidden layers, it is sent to the last layer, called the output layer. Refer to [37] for more details about neural networks.
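In Python (the language used for the numerical experiments), the layer update above can be sketched as follows; the layer sizes and random values here are illustrative, not from the paper:

```python
import numpy as np

def sigmoid(z):
    # The non-linear activation h(.) used later in the ResPoNet architecture.
    return 1.0 / (1.0 + np.exp(-z))

def layer_forward(x, theta, b, h=sigmoid):
    """Compute a^{(l+1)} = h(theta @ x + b) for one fully connected layer."""
    return h(theta @ x + b)

rng = np.random.default_rng(0)
x = rng.standard_normal(6)            # J = 6 inputs (six policy attributes)
theta = rng.standard_normal((4, 6))   # weights theta_{ij}^{(l)} for 4 neurons
b = rng.standard_normal(4)            # biases b^{(l)}
a = layer_forward(x, theta, b)        # activations at layer l+1, shape (4,)
```

Because the sigmoid maps into (0, 1), every activation in `a` lies strictly between 0 and 1.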

3.1. Residual Neural Network Structure

Ref. [9] utilized a mortality table to price insurance contracts; however, they did not consider the stochastic mortality risk factors that insurers have to deal with in practice. Compared with fixed mortality, the stochastic mortality used in this paper captures dynamic changes so that the training process generates more accurate price estimates. Additionally, the residual network can learn to correct for these random risk factors, improving performance. Figure 1 shows the workings of the residual network. In the training process, the residual network is split into two branches: the identity mapping $x$ and the residual mapping $F(x)$. The residual mapping branch passes through the weight layers to reach the next layer, while the identity mapping branch skips over them; this skipping is called a cross-layer connection. The two branches converge at $F(x) + x$, so that the network learns the features retained in $x$ while learning $F(x)$.
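A minimal numerical sketch of the two-branch computation $F(x) + x$ might look like the following; the weight shapes are hypothetical, and with the residual branch zeroed out the block reduces to the identity mapping:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def residual_block(x, W1, b1, W2, b2):
    """Two-branch residual computation: residual mapping F(x) plus identity x."""
    F = W2 @ sigmoid(W1 @ x + b1) + b2   # residual branch through the weight layers
    return F + x                          # identity branch rejoins via the skip connection
```

If all weights and biases are zero, the block simply returns its input, which is exactly the identity-mapping behavior the skip connection preserves.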
Compared with ResNet, our proposed ResPoNet has multiple layers, including a convolutional layer, non-linearity layer, residual layer, and fully connected (FC) layer, as shown in Figure 2. To clarify the characteristics of each representative contract, we first build a convolutional layer with standardized differences between the representative contract and the policy to be priced [38]. Both numerical and categorical attributes are included. Through the non-linear activation function known as the sigmoid function, the neurons of the attributes obtained at the input layer are fully connected: each neuron at the upper layer is connected with every single neuron at the next layer to access the implicit relationship. The two parameters of the fully connected layer, m and n, denote the number of neurons needed to connect at the upper and lower layers, respectively. The network performs a residual connection at the second layer of the activation function. The residual mapping branch passes through the second fully connected layer and a batch normalization layer, whilst the identity mapping branch directly passes through the softmax layer, which enlarges the distances between the outputs.
The basic procedure of the ResPoNet algorithm is described in Algorithm 1. The algorithm takes as input a training set comprising three components: a vector $\mathbf{v}$ of insurance policy values in the training set; an attribute matrix $\mathbf{K}$ with $m_1$ vectors, where each vector describes the attribute differences between a portfolio policy and $m_2$ representative contracts; and an $m_1 \times m_2$ distance matrix $\mathbf{D}$ derived from traditional spatial interpolation, where each element $D_{ij}$ represents the distance from policy $i$ to representative contract $j$. Here, $m_1$ and $m_2$ denote the sizes of the training set and the representative contract set, respectively. In the procedural stage, for each policy in the portfolio, the ResPoNet is trained using $(\mathbf{v}, \mathbf{K}, \mathbf{D})$ to learn two key components: an optimal distance function that adapts to the policy attributes and a set of optimal weights for the representative contracts. This training process involves iterative optimization to minimize the discrepancy between the network’s predicted valuations and the ground-truth values from the training set. The output of the algorithm is a tuple $\mathbf{o} = (\boldsymbol{\beta}, \boldsymbol{\epsilon})$, where $\boldsymbol{\beta} \in \mathbb{R}^{m_2}$ represents the learned weights assigned to the representative contracts, and $\boldsymbol{\epsilon} \in \mathbb{R}^{m_2}$ denotes the corresponding bias parameters. These parameters collectively enable the model to compute policy valuations by integrating attribute differences and spatial distances in a computationally efficient manner, balancing accuracy and interpretability.
Algorithm 1: General algorithm procedure of ResPoNet
(Algorithm 1 pseudocode is rendered as an image in the original article.)
For the specific implementation of ResPoNet, we predefine the number of neurons and network parameters. Notably, the neuron count in each FC layer is systematically configured as 128, a design specification aimed at maintaining uniform layer dimensionality to facilitate stable gradient propagation throughout the network. The residual layer comprises two 128-neuron FC layers with sigmoid activation and batch normalization, connected by a skip connection (identity mapping) that adds the input to the block’s output. Six attributes of a policy are utilized as inputs to train the network. The six differences in the attributes between each policy in the portfolio and each corresponding representative contract form a group. Each group is then connected to a node at the next layer via the sparse connection. Therefore, we set the convolution kernel to $1 \times 6$ and the stride to 6 to achieve the sparse connection at the first layer (see Figure 3). Every rectangle composed of squares represents a group of attribute differences. Each neuron at the input layer is known as a vector $\mathbf{k}$, revealing the difference. If $c$ and $n$ denote the categorical attribute and the numerical attribute, respectively, then $\mathbf{k}_c$ represents the difference in the categorical attributes of $\mathbf{k}$, while $\mathbf{k}_n$ is the difference in the numerical attributes. We denote the input policy and representative contract as $p$ and $p_i$, respectively. $p_c$ and $p_n$ represent the categorical and numerical attributes of the input policy, respectively, while $p_{c_i}$ and $p_{n_i}$ represent the categorical and numerical attributes of the representative contract, respectively. Thus, the difference in the $w$th categorical attribute $k_{c_w}$ has the following form:
$$k_{c_w} = \begin{cases} 0, & p_{c_w} = p_{c_{i,w}}, \\ 1, & p_{c_w} \neq p_{c_{i,w}}. \end{cases}$$
$k_{n_w}$ has the form $k_{n_w} = g(p_{n_w} - p_{n_{i,w}})$. The function $g(\cdot)$ standardizes the differences in the numerical attributes. Following [39], we set the parameters $(m, n)$ at the fully connected layer on intuitive grounds.
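The attribute-difference encoding can be sketched as follows; the standardization statistics passed to $g(\cdot)$ are assumed to come from the training set, and the example policy values are hypothetical:

```python
def categorical_diff(p_c, p_ci):
    """k_{c_w}: 0 if the w-th categorical attribute matches, 1 otherwise."""
    return [0 if a == b else 1 for a, b in zip(p_c, p_ci)]

def numerical_diff(p_n, p_ni, mean, std):
    """k_{n_w} = g(p_{n_w} - p_{n_i,w}), with g standardizing the raw difference."""
    return [((a - b) - m) / s for a, b, m, s in zip(p_n, p_ni, mean, std)]

# Hypothetical example: a GMDB policy held by a female policyholder compared
# with a GMDB representative contract held by a male policyholder.
k_c = categorical_diff(["GMDB", "F"], ["GMDB", "M"])                      # [0, 1]
k_n = numerical_diff([45.0, 10.0], [40.0, 15.0], [0.0, 0.0], [5.0, 5.0])  # [1.0, -1.0]
```

Concatenating `k_c` and `k_n` yields the six-element difference vector $\mathbf{k}$ fed into the convolutional layer.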
For the residual mapping process, the FC layer and the batch-normalization layer constitute the correction module. Finally, the network outputs the weights of each representative contract needed to estimate the input VA policy. Thus,
$$\hat{V}_p = \sum_{i=1}^{m} o_i \times V_{p_i},$$
where $o_i$ contains the weights and biases, and $\hat{V}_p$ and $V_{p_i}$ represent the valuation of the input policy and each representative contract, respectively. After adding the representative contract value (value (R)) in Figure 2, these outputs take into account both the loss of value and the loss of weights computed using the traditional spatial interpolation method.
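A minimal sketch of this final valuation step, with hypothetical output weights and representative contract values:

```python
import numpy as np

def portfolio_estimate(o, V_rep):
    """Estimate the input policy's value as the weighted sum of the
    MC-priced representative contract values V_{p_i}."""
    return float(np.dot(o, V_rep))

V_rep = np.array([100.0, 160.0, 200.0])   # hypothetical representative values
o = np.array([0.25, 0.5, 0.25])           # network output weights (sum to 1)
V_hat = portfolio_estimate(o, V_rep)      # 0.25*100 + 0.5*160 + 0.25*200 = 155.0
```

Summing this estimate over all policies gives the value of the whole portfolio without running MC on every contract.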

3.2. Training Methodology

This subsection explains how we train the neural network and emphasizes the importance of weight regularization. As suggested in Section 2, the focal loss improves detection accuracy by modifying the loss function. Here, we add a loss-of-weight term to the loss function for further improvement so that ResPoNet can approximate the correct valuation, thereby retaining the universal meaning of the distance function. Neural network learning falls into three categories: supervised learning, unsupervised learning, and reinforcement learning. We adopt the backpropagation algorithm to exploit supervised learning when training the network by providing it with inputs and matching output patterns [40]. The training begins with random weights, and the parameters are iteratively adjusted to minimize the loss function. The mean squared error (MSE) quantifies the distance between the estimated value and the true value; generally, minimizing the loss function corresponds to minimizing the MSE.
Using all portfolio policies as network inputs during training is not advisable due to computational inefficiency. Instead, we select a small set of VAs as the training set. The small sample size is sufficient because the training loss converges given a certain number of samples. Another relevant aspect of a neural network is its ability to predict cases not included in the training set. Overfitting occurs when the training error is very small but the error on new data is large, because the network merely fits the training examples and fails to generalize to new situations. To address the overfitting problem, we reserve a subset of the data as the validation set. In the numerical experiments, if the validation set achieves accuracy comparable to that of the training set, this indicates that overfitting is not a concern.
The proposed ResPoNet utilizes a composite loss function that combines valuation error with a weight regularization term to improve both accuracy and interpretability. Specifically, the loss function is defined as follows:
$$Loss(\boldsymbol{\beta}, \boldsymbol{\epsilon}) = Loss_V + \alpha \, Loss_\beta.$$
Here, the loss of valuation ($Loss_V$), referred to as the MSE loss function, measures the discrepancy between the predicted and true values of VA policies:
$$Loss_V = \frac{1}{2N} \sum_{i=1}^{N} \big| \hat{V}_{p_i}(\boldsymbol{\beta}, \boldsymbol{\epsilon}) - V_{p_i} \big|^2,$$
where $N$ is the training set size, $\hat{V}_{p_i}$ denotes the predicted value, and $V_{p_i}$ represents the ground truth from nested MC simulations. The weight regularization term ($Loss_\beta$) preserves the interpretability of spatial interpolation:
$$Loss_\beta = \frac{1}{2NC} \sum_{i=1}^{N} \sum_{j=1}^{C} \big| \hat{\beta}_{ij}(p_i) - \beta_{ij}(p_i) \big|^2, \quad \text{s.t.} \quad \sum_{j=1}^{C} \beta_{ij}(p_i) = 1,$$
where $C$ denotes the number of representative contracts, $\hat{\beta}_{ij}$ represents the estimated value of the weight, and $\beta_{ij} = D_{ij}^{-\rho} \big/ \sum_{k=1}^{C} D_{ik}^{-\rho}$ denotes the normalized weights derived from IDW based on the distance matrix $\mathbf{D}$, with $\rho = 2$ as the default distance decay parameter. These weights encode prior domain knowledge about spatial similarity between contracts. This term ensures that ResPoNet retains the intuitive distance-based weighting of spatial methods while learning corrections for mortality risk and path dependence. The hyperparameter $\alpha$ balances the two loss components in Equation (3). When $\alpha = 0$, the model reduces to the baseline neural network, which optimizes solely for $Loss_V$, but this approach may compromise interpretability. When $\alpha > 0$, the $Loss_\beta$ term constrains $\boldsymbol{\beta}$, enforcing distance-based weights and improving the stability of convergence. The optimal value of $\alpha$ is determined by minimizing the relative error (RE) metric through a validation procedure, as detailed in Section 4.1.
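A sketch of the IDW target weights and the composite loss under these definitions; the distance matrix is illustrative, and $\alpha = 0.03$ anticipates the value selected by validation in Section 4.1:

```python
import numpy as np

def idw_weights(D, rho=2.0):
    """beta_{ij} = D_{ij}^{-rho} / sum_k D_{ik}^{-rho}; each row sums to one."""
    W = D ** (-rho)
    return W / W.sum(axis=1, keepdims=True)

def composite_loss(V_hat, V, beta_hat, beta, alpha=0.03):
    """Loss = Loss_V + alpha * Loss_beta, each a (1/2) * mean squared error."""
    loss_V = 0.5 * np.mean((V_hat - V) ** 2)
    loss_beta = 0.5 * np.mean((beta_hat - beta) ** 2)
    return loss_V + alpha * loss_beta

D = np.array([[1.0, 2.0],      # distances from 2 policies to C = 2
              [2.0, 4.0]])     # representative contracts (hypothetical)
beta = idw_weights(D)          # each row works out to [0.8, 0.2]
```

Note that a closer representative contract receives a larger weight, consistent with the inverse-distance intuition.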
Notably, standard loss functions (e.g., baseline NNs using only $Loss_V$) optimize solely for valuation accuracy, which may lead to weights $\beta_{ij}$ that deviate from real-world distance relationships and compromise interpretability. As presented in Section 4.2, ResPoNet exhibits superior portfolio valuation accuracy compared to the MSE-based loss function without weight regularization, while its mean absolute error (MAE) improves significantly. The weighted loss introduced here integrates $Loss_\beta$ to encode prior knowledge from spatial interpolation. This dual-objective formulation distinguishes ResPoNet from baseline models using standard MSE or MAE losses, as it explicitly enforces consistency with domain knowledge while improving convergence speed (see the numerical experiments in Section 4.2).
The greater the distance between the input policy in the portfolio and the representative contract, the smaller the weight. The optimal distance function in ResPoNet is implicitly learned through a neural network (see [37] for neural network fundamentals) incorporating convolutional and residual layers. It processes the standardized $\mathbf{K}$ to generate $\boldsymbol{\beta}$ via the softmax layer. The parameters are optimized via the combined loss described in Equations (3)–(5), balancing valuation accuracy ($Loss_V$) and distance consistency ($Loss_\beta$), and are updated using a gradient descent scheme. The neural network’s parameters are updated iteratively to satisfy the constraint in Equation (5), thereby grounding the learned distance metric in an interpretable weight space.
As is customary in the neural network literature, we employ a stochastic gradient descent (SGD) optimizer to train the network [41]. SGD is a well-established training algorithm in both academia and industry, known for its proven generalization performance and efficient convergence. It requires computing the derivative of the cost function with respect to each parameter and then updating the parameters by moving in the negative gradient direction, scaled by the learning rate $\gamma$. In particular, we have
$$[\beta_i, \epsilon_i] = [\beta_{i-1}, \epsilon_{i-1}] - \gamma \nabla Loss(\beta_{i-1}, \epsilon_{i-1}),$$
where the pair $[\beta_i, \epsilon_i]$ represents the corresponding weight and bias at iteration $i$; the learning rate $\gamma > 0$ controls the convergence progress of the model; $\nabla$ denotes the gradient; and $Loss(\cdot)$ stands for the loss function, which replaces the cost function in the gradient descent. When the learning rate is large, the training process speeds up, but it is prone to gradient explosions. In contrast, convergence slows down with a small learning rate, and the network is more prone to overfitting. The initial learning rate $\gamma$ starts at 0.1 and then decays exponentially [42] by a factor of 0.9 per 50 epochs to balance early convergence speed and late-stage optimization precision. The batch size is set to 32, a compromise between gradient estimate stability and memory efficiency. Stochastic mortality risk is modeled via a nested MC framework [43]: (i) in the outer loop, 100 stochastic mortality scenarios are generated using an affine model to capture dynamic mortality rate trends; (ii) in the inner loop, for each scenario, 10,000 MC paths simulate policy cash flows. This configuration ensures that ResPoNet is trained on reliable, low-noise labels while remaining feasible for large-scale portfolio valuation.
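The learning-rate schedule and the SGD update can be sketched as follows; parameters are kept as plain lists for illustration:

```python
def decayed_lr(epoch, gamma0=0.1, factor=0.9, step=50):
    """Exponential decay: gamma starts at 0.1 and shrinks by 0.9 every 50 epochs."""
    return gamma0 * factor ** (epoch // step)

def sgd_step(params, grads, gamma):
    """One update: params_i = params_{i-1} - gamma * grad Loss(params_{i-1})."""
    return [p - gamma * g for p, g in zip(params, grads)]
```

For example, the learning rate stays at 0.1 for epochs 0–49 and drops to 0.09 at epoch 50, matching the schedule described above.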
The accuracy of neural network-based valuation is quantified using RE, defined as follows:
$$RE = \frac{\big| V_i - \hat{V}_i \big|}{V_i},$$
where $V_i$ is the estimated value of the input portfolio computed by nested MC simulations and $\hat{V}_i$ is the estimated value of the input portfolio computed by ResPoNet. Training stops when the RE in Equation (7) falls below the expected threshold. The stopping time could also be determined by inspecting the MSE diagram. Figure 4 shows how ResPoNet converges over 2500 epochs. The MSE decreases with the epoch, and the decline becomes less steep after a few hundred epochs, indicating that the network parameters are approaching their respective optimal values. When the MSE converges to 0 after approximately 2500 epochs, it is appropriate to stop training, since additional training ceases to improve accuracy [9].
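The RE-based stopping rule amounts to the following check; the 1% threshold here is illustrative, not a value prescribed by the paper:

```python
def relative_error(V_true, V_est):
    """RE = |V - V_hat| / V, with V from nested MC and V_hat from ResPoNet."""
    return abs(V_true - V_est) / V_true

def should_stop(V_true, V_est, threshold=0.01):
    """Stop training once the relative error falls below the chosen threshold."""
    return relative_error(V_true, V_est) < threshold
```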

4. Numerical Experiment

We present the numerical results of implementing the ResPoNet framework and compare its performance against conventional spatial interpolation methods and neural network variants using Python 3.9.13. The nested simulation model [44] is combined with a stochastic mortality model [43] to value the GMBs embedded in VA contracts. A fixed mortality model cannot account for uncertainties in future risk factors, limiting its predictive accuracy for future mortality dynamics. Our approach instead leverages the stochastic mortality model, which provides the neural networks with more accurate training data.

4.1. Sampling

Since the enterprise data of insurance companies are confidential business information, we use a synthetic portfolio of 100,000 VA contracts with attribute values chosen uniformly at random from Table 1, similar to [9]. Among the six attributes, policy type and policyholder gender are categorical attributes, while age, maturity, value of the guarantee, and account value are numerical attributes. Categorical attributes are encoded using one-hot encoding, while numerical attributes are standardized to zero mean and unit variance to normalize feature scales.
We randomly select 3000 contracts from the 100,000 samples and divide them into two sets: 2000 policies form the training set and 1000 policies form the validation set. The k-means clustering algorithm with 300 clusters is then applied to partition the synthetic portfolio of 100,000 contracts, with the cluster centroids serving as representative contracts for the input portfolio. Both the optimal number of clusters and the sample size are validated in Section 5.1. This aspect differentiates our approach from the simple uniform sampling used by [9]. In general, data clustering [45] is the process of partitioning a set of items into clusters such that items within the same cluster are similar to each other and items in different clusters are distinct from one another. The comparison in [46] between random sampling and data clustering demonstrated that the latter is more accurate than the former. For the choice of the α parameter needed in the training process, we rely on Figure 5, which shows that its optimal value is approximately 0.03, where the RE attains its minimum.
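The preprocessing and clustering steps above can be sketched as follows. This is a minimal Lloyd's k-means in plain NumPy rather than a production implementation, and the attribute matrix `X` is assumed to already combine one-hot-encoded categorical columns with the numerical columns:

```python
import numpy as np

rng = np.random.default_rng(42)

def standardize(X):
    """Zero-mean, unit-variance scaling for the numerical attributes."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def kmeans_representatives(X, k=300, n_iter=50):
    """Plain Lloyd's k-means; the final centroids serve as the
    representative contracts of the portfolio."""
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Assign each contract to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned contracts
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, labels
```

For 100,000 contracts and 300 clusters, a library implementation (e.g., scikit-learn's `KMeans`) with mini-batching would be the practical choice; the sketch above only fixes the ideas.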

4.2. Performance Analysis

We evaluate the convergence performance of the proposed ResPoNet against alternative structures, with the stopping decision as a primary consideration to avoid overtraining the network. Four network structures are employed: a plain neural network (NN), NN with the weighted loss (NN + Lw), NN with residual connections (NN + Res), and NN with both the weighted loss and residual connections (NN + Lw + Res, i.e., ResPoNet). The left panel of Figure 6 shows that NN is the most unstable, followed by NN + Lw. NN + Res reduces the MSE fluctuations in the first few hundred epochs, so its training process is smooth, and adding the weighted loss to NN + Res further accelerates the MSE convergence. Overall, our proposed structure has the best convergence performance. The same conclusion holds on the validation set, as presented in the right panel of Figure 6: NN + Lw + Res converges the fastest among all structures and is also the most stable. Additionally, the MSE on the validation set can be used to check for overfitting in ResPoNet; as shown in Figure 6 (right), this is not an issue, since the validation MSE converges as well. To systematically evaluate ResPoNet against critical baselines, we organize comparisons into two tiers: (i) ResPoNet against neural network variants (see Table 2 and Figure 7), and (ii) performance relative to traditional spatial interpolation methods and MC simulation (see Table 3 and Figure 8). These results consistently demonstrate ResPoNet's superiority in both accuracy and computational efficiency.
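The two ingredients being compared, a residual (skip) connection and a weighted loss, can be illustrated in isolation. This is a hedged sketch of the forward pass and loss only; the layer sizes, activation, and weighting scheme are assumptions, not the paper's exact architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, b1, W2, b2):
    """y = x + F(x): the skip connection means the two-layer map F only
    has to learn a residual correction to the identity."""
    return x + relu(x @ W1 + b1) @ W2 + b2

def weighted_mse(y_true, y_pred, w):
    """Weighted MSE: contracts with larger weights contribute more to the
    loss, steering accuracy toward the policies that matter most."""
    w = np.asarray(w, dtype=float)
    return float(np.sum(w * (y_true - y_pred) ** 2) / np.sum(w))
```

With all of `W1`, `W2`, `b1`, `b2` at zero, the block reduces to the identity map, which is why residual networks train stably from near-zero initialization.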
To measure each NN framework’s estimation performance, we consider $R^2$ and the MAE in addition to the RE. The statistic $R^2$ describes the explanatory strength of the valuation model for the policy value, while the MAE measures the average absolute difference between the predicted and the true values [24]. Table 2 displays the accuracy of each network structure in estimating the value of the input annuity portfolio, quantifying the accuracy of the different neural network schemes and the superior performance of the ResPoNet framework under all indicators. ResPoNet’s RE improves more than 80-fold over the baseline (NN), demonstrating the effectiveness of the residual connection and the weighted loss in our proposed framework. Since the same training set is used to train all four networks and the same large portfolio must be priced, the running times are almost identical. Despite the small differences in $R^2$, ResPoNet explains the most variation in value among all of the proposed approaches. In addition, ResPoNet’s MAE is the smallest, implying that it is the most accurate even at the level of individual policies in the portfolio.
Figure 7 presents a clearer comparison of the quantitative accuracy across the NN architectures, highlighting the superior performance of the ResPoNet valuation model across all evaluation metrics. Relative to the unoptimized NN model, ResPoNet demonstrates a marked improvement, with significant reductions in both RE and MAE. The comparison between the “NN + Lw” and “NN + Res” models further reveals that integrating residual connections into the network structure yields a more pronounced accuracy gain than introducing weighted loss terms into the loss function. While the RE values of the “NN + Res” model and ResPoNet are comparable, the latter is the more robust choice for large variable annuity portfolios, where even small errors can produce substantial deviations of the estimated portfolio value from the true value. Consequently, ResPoNet emerges as the superior model, exhibiting the best performance across both the training-stopping process (see Figure 6) and the accuracy evaluation (see Table 2 and Figure 7). In Section 5, we further assess the estimation accuracy of the “NN + Res” and ResPoNet models under varying sample sizes, providing additional evidence of ResPoNet’s stability and robustness.
We now compare the accuracy and computational efficiency of the proposed ResPoNet with the MC method and traditional spatial interpolation schemes. Accuracy is measured by the RE, and efficiency is evaluated by the computation time in seconds. The spatial interpolation methods chosen as comparable alternatives are Kriging, IDW with power parameters p of 1 and 100, and RBF, all of which performed well in [9]. Table 3 compares the accuracy and computational time of the ResPoNet valuation model and these traditional spatial interpolation techniques, with more intuitive results shown in Figure 8. Our results are broadly consistent with [9]. Since the MC simulation result is used as the benchmark, the RE of MC in Table 3 is 0. Among the traditional spatial interpolation methods, Kriging is the least accurate with the largest RE, while IDW (p = 100) performs best. Notably, our proposed ResPoNet produces results that are over 10 times more accurate than IDW (p = 100) and significantly outperforms the other alternatives.
Based on Table 3, MC simulation is by far the most time-consuming, followed by the traditional spatial interpolation methods, which require similar amounts of time. ResPoNet has the shortest running time, requiring only 3 s to value the entire large portfolio of VAs. Note that ResPoNet’s running time includes the estimation time but not the training time (about 1075 s), because the trained ResPoNet can value large annuity portfolios directly without retraining until the economic environment or insurance regulations change significantly. Conversely, the traditional spatial interpolation methods must recalculate distances and weights every time, since they lack generalization. The training time of ResPoNet could be shortened further by reducing the size of the training set and the number of iterations; however, such reductions could compromise the model’s estimation accuracy. Users of the model must therefore balance the gain in computational efficiency against the potential loss in valuation precision.
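For reference, the IDW baseline can be sketched as follows. The distance is taken as plain Euclidean on the encoded attributes, which is an assumption; note also that very large powers such as p = 100 effectively mimic nearest-neighbor assignment and may require rescaling distances for numerical stability:

```python
import numpy as np

def idw_values(query, reps, rep_values, p=2):
    """Inverse distance weighting: each contract's value is a weighted
    average of representative-contract values, with weights 1/d^p."""
    d = np.linalg.norm(query[:, None, :] - reps[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)             # guard exact matches
    w = d ** (-float(p))
    w /= w.sum(axis=1, keepdims=True)    # normalize weights per contract
    return w @ rep_values
```

Unlike a trained network, this weight computation must be redone for every new portfolio, which is the lack of generalization noted above.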

4.3. Model Comparison

ResPoNet represents a methodological advancement that synthesizes NN architectures with spatial interpolation techniques, as evidenced by its superior performance demonstrated in Section 4.2. While machine learning alternatives to MC simulations have gained traction in large-scale variable annuity portfolio valuation due to their computational efficiency advantages, this section conducts numerical experiments using four common machine learning models to further explore the efficiency and applicability of our proposed framework.
The prioritization of tree-based methods in our experimental design stems from their inherent capacity to capture nonlinear interactions, a critical necessity given the compound interest accumulation mechanism of guaranteed minimum benefits and the path-dependent nature of account values. This theoretical justification aligns with the broader ensemble learning taxonomy, which includes bagging, boosting, and stacking, and informs our framework for benchmarking ResPoNet. Bagging mitigates variance through bootstrap aggregation: sub-training sets are iteratively sampled with replacement from the original data, and predictions are averaged across learners. Boosting adopts a sequential refinement paradigm, iteratively training weak learners to correct residuals from prior iterations. Stacking, though not directly employed here, extends this hierarchy by using base-learner predictions as inputs to a meta-learner. To operationalize this taxonomy, we select four representative models: Regression Tree (RT) as a baseline individual learner, bagging and Random Forest (RF) as exemplars of variance-reduction techniques, and Gradient Boosting Decision Tree (GBDT) as a boosting implementation. These choices are theoretically and empirically motivated, and prior applications to variable annuity valuation [23,24] further validate this selection.
The experimental design utilizes the samples generated in Section 4.1 to compare the valuation accuracy of the four aforementioned machine learning models with that of ResPoNet. Table 4 quantifies model performance using RE and MAE. ResPoNet achieves exceptional precision (RE = 0.0562%, MAE = 6082.83), surpassing all alternatives by substantial margins. Among tree-based models, GBDT ranks second but exhibits four times higher RE and 31.6% larger MAE than ResPoNet. Bagging and RF deliver accuracy comparable to spatial interpolation methods, while RT performs only marginally better than unoptimized neural networks despite its computational simplicity.
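The comparison loop can be sketched generically: any model exposing a scikit-learn-style `fit`/`predict` interface (RT, bagging, RF, GBDT) can be dropped into the dictionary. The metric helpers are shown in plain NumPy; the portfolio-level RE compares summed values, which is our reading of the paper's metric:

```python
import numpy as np

def portfolio_re(y_true, y_pred):
    """Portfolio-level relative error: compares total estimated values."""
    return abs(y_true.sum() - y_pred.sum()) / abs(y_true.sum())

def mae(y_true, y_pred):
    """Contract-level mean absolute error."""
    return float(np.mean(np.abs(y_true - y_pred)))

def compare_models(models, X_train, y_train, X_test, y_test):
    """Fit each candidate model and tabulate (RE, MAE) pairs."""
    results = {}
    for name, model in models.items():
        model.fit(X_train, y_train)
        pred = model.predict(X_test)
        results[name] = (portfolio_re(y_test, pred), mae(y_test, pred))
    return results
```

In the actual experiment, the candidates would be instances such as scikit-learn's `DecisionTreeRegressor`, `BaggingRegressor`, `RandomForestRegressor`, and `GradientBoostingRegressor`, fitted on the samples of Section 4.1.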

5. Sensitivity Analysis

Sensitivity analysis involves holding other parameters constant while systematically varying key theoretical parameters to observe resultant changes in outcomes. Parameters exhibiting high sensitivity are characterized by their capacity to induce disproportionate changes in valuation outputs through marginal adjustments. Given that policy valuation serves as a critical input for insurers’ risk exposure quantification and risk management strategies, identifying factors with material effects on pricing efficiency and accuracy holds substantial operational significance. This section conducts a comprehensive sensitivity analysis of the ResPoNet model, evaluating two critical dimensions: (i) sample size effects, examining the relationship between training set cardinality and model performance; and (ii) policy characteristics, including guarantee type composition, policyholder gender, mortality rate assumptions, and temporal maturity profiles. Through systematic isolation of these parameters, we quantify their respective impacts on valuation accuracy.

5.1. Sample Size Effects

The proposed ResPoNet requires three sets to estimate the value of a VA portfolio: a set of representative contracts, a training set, and a validation set. Since the validation set serves solely for overfitting detection and does not directly influence model accuracy, our analysis focuses on the effects of the representative-contract count and the progressive scaling of the training set. For each sample configuration, we quantify pricing precision using the RE. This dual-strategy approach isolates the marginal effects of data availability, providing numerical validation of ResPoNet’s scalability in large-portfolio actuarial applications. Consistent with prior literature (e.g., [9]), the standard deviation of the RE declines while the computational time grows as the number of representative contracts increases. This trend aligns with theoretical expectations, as the computational complexity of the network, determined by the neuron count in its hidden layer, scales proportionally with the size of the representative contract set. While such behavior is inherently predictable, Section 5.1 provides a focused analysis of accuracy patterns, and Section 5.2 examines statistical variability. Notably, ResPoNet demonstrates superior computational efficiency compared to spatial interpolation methods (see Figure 8), with incremental runtime differences between representative set sizes becoming operationally negligible for large-scale portfolio applications.
Table 5 reports the REs of NN + Res and ResPoNet under different numbers of representative contracts. It shows that valuation accuracy is not monotonically related to the number of representative contracts. When the set of representative contracts is small, it cannot adequately represent the full portfolio, so the error is large; when it is too large, the network must produce more outputs from the same inputs, and training becomes difficult. As shown in Table 5, all REs of ResPoNet are smaller than those of the NN + Res framework across the different representative-contract counts, further illustrating the effectiveness of the weighted loss in our proposed framework. Within the context and assumptions of our experiment, 300 representative contracts are found to be the most effective in representing the entire portfolio. To amplify the comparative trends observed in Table 5, Figure 9 visualizes the REs of both frameworks, explicitly contrasting the error divergence between ResPoNet and NN + Res across policy set sizes. The graphical representation reinforces the non-monotonic sensitivity of accuracy to scale: while undersized and oversized sets exhibit higher errors, ResPoNet’s stability near 300 contracts supports our optimality claim and corroborates the trade-off between representativeness and computational feasibility.
Table 6 examines valuation accuracy in relation to the training-set size, with the representative-contract set size fixed at 300. As the training-set size increases from 1000 to 3000, the RE decreases significantly from 0.2880% to 0.0521%, reflecting improved prediction accuracy. The decrease in MAE further demonstrates that larger training sets enhance the model’s ability to estimate the values of large VA portfolios. There is thus a strong inverse relationship between training-set size and valuation error, with RE decreasing sharply and MAE declining steadily, confirming enhanced precision at scale. In addition, the modest rise in $R^2$ indicates a near-optimal model fit even at smaller training sizes, with the marginal gains in explanatory power aligning with the error reduction. Figure 10 maps the tabulated error metrics from Table 6 to a dual-axis bar chart, where the persistent decline in RE/MAE and the stable $R^2$ jointly depict the accuracy–computation trade-off. Since the training time also increases with the size of the training set, as noted in [9], users must balance accuracy against running time: larger datasets improve accuracy but require longer training, necessitating a thoughtful trade-off based on practical constraints and priorities.

5.2. Policy Attribute Analysis

The ResPoNet model’s sensitivity to policy attributes is further tested across six primary pricing factors (see Table 1): two categorical variables (guarantee type, policyholder gender) and four numerical variables (age, maturity, guaranteed value, account value). To isolate attribute-specific effects, we systematically vary one attribute while holding others constant, employing distinct methods for categorical and numerical variables. Categorical attributes are evaluated through repeated trials, with results averaged to mitigate sampling variability, whereas numerical attributes are assessed via stepwise sampling to capture continuous effects.
To examine how minimum benefit guarantee configurations affect the pricing efficiency and accuracy of ResPoNet for large variable annuity portfolios, we evaluate seven guarantee structures, including three single guarantees (GMDB, GMMB, GMIB) and four multi-guarantee combinations, through a multi-stage framework combining stratified sampling, MC simulation, and repeated cross-validation. First, 100,000 policy samples are stratified by guarantee, partitioned into training (60%), validation (20%), and test (20%) sets to ensure proportional representation. Each of the ten independent trials randomly selects 6000 training-set policies for MC-based liability valuation, holding other key attributes constant to examine guarantee-type effects. The model undergoes ten-fold cross-validation, iteratively training on 54,000 policies and validating on 6000, with repeated resampling to quantify performance stability. Statistical metrics, including RE means, standard deviations, and 95% confidence intervals, are computed across ten trials for each guarantee type.
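The per-trial statistics described above can be computed as follows; the 95% CI below uses a normal approximation across the repeated trials, which is an assumption about the paper's exact CI construction:

```python
import numpy as np

def re_summary(re_trials, z=1.96):
    """Mean, sample standard deviation, and normal-approximation 95% CI of
    the relative error across repeated independent trials."""
    re_trials = np.asarray(re_trials, dtype=float)
    m = re_trials.mean()
    s = re_trials.std(ddof=1)                 # sample std across trials
    half = z * s / np.sqrt(len(re_trials))    # half-width of the CI
    return m, s, (m - half, m + half)
```

Applied per guarantee type over the ten trials, these summaries yield the error bars plotted in Figures 11 and 12.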
Figure 11 illustrates the systematic expansion of valuation uncertainty as minimum-benefit guarantees transition from single to combinatorial structures, with error bars denoting the 95% confidence interval (CI). Single guarantees, including GMDB, GMMB, and GMIB, exhibit tightly bounded error distributions, indicative of stable model performance under isolated risk exposures. In contrast, multi-guarantee combinations demonstrate both higher RE and error dispersion. The increasing RE, from 0.0337 for GMDB to 0.0543 for GMDB + GMMB + GMIB, reveals that model accuracy diminishes with guarantee-type complexity, where interaction effects between guarantees amplify liability valuation uncertainty compared to single-risk scenarios. Notably, the GMDB + GMMB combination shows marginally lower RE than other multi-guarantee structures, potentially reflecting covarying risk factors in death/maturity benefits.
Furthermore, we perform a sensitivity analysis to gender-based performance using the same experimental protocol as the guarantee-type evaluation to examine demographic effects. Figure 12 demonstrates that valuation precision remains statistically comparable across gender subgroups, with error bars denoting the 95% CI, and overlapping CIs suggesting minimal structural differences in model accuracy. This further proves that gender primarily influences mortality rate randomization rather than ResPoNet’s core valuation precision, a distinction evident in the near-identical RE distributions. The marginally tighter confidence interval for female policies likely reflects actuarial tables’ lower longevity risk uncertainty for female cohorts, not model gender bias. Thus, gender’s influence is limited to mortality-related aspects rather than fundamentally affecting valuation precision.
Additionally, the assessment extends to four additional policy attributes, including age, guaranteed value, account value, and maturity, though these results are not displayed here for brevity. Three attributes, including age, guaranteed value, and account value, exhibit RE fluctuations within a narrow range, with no statistically distinguishable upward or downward trends in valuation precision. Notably, time-to-maturity demonstrates little RE fluctuation but reveals a slight upward trend as maturity time lengthens. This pattern may stem from the complex structure of variable annuity policies, which confront numerous risk factors, inherently entailing more risks as time extends. Longer maturity horizons introduce heightened uncertainty in future market dynamics, which pose greater challenges for the model to precisely capture. Moreover, the cumulative effect of assorted minor uncertainties over time could also contribute to this subtle upward RE trend, reflecting the compounding influence of temporal-related complexities on valuation precision.

6. Conclusions

The accurate valuation of large VA policy portfolios stands as a pivotal concern for insurers, industry practitioners, and regulators. In practice, MC simulation is widely employed for valuation, yet as the scale of policies expands, the computational cost increases tremendously, necessitating the urgent pursuit of more efficient alternatives. Recent proposals to combine spatial interpolation with MC simulations are constrained by their reliance on expert-defined distance metrics, which introduce subjectivity and uncertainty. To address these limitations, we propose ResPoNet, a neural network architecture enhanced with residual connections and a weighted loss function.
Numerical comparisons demonstrate that ResPoNet achieves faster and more stable training than alternative spatial interpolation models and machine learning benchmarks. Furthermore, ResPoNet delivers consistent accuracy across both training and out-of-sample test sets, aligning with [24]’s findings on neural networks’ superior nonlinear fitting capacity. The residual connections demonstrate adaptive correction capabilities for path-dependent risks under stochastic mortality scenarios. While training set size improves accuracy until reaching a saturation threshold, ResPoNet’s accuracy exhibits a non-linear dependence on the representative contract set size. Sensitivity analysis identifies 300 representative contracts as optimal under our experimental conditions for balancing accuracy and computational efficiency. Among policy attributes, the complexity of the guarantee type and the maturity period have a negative effect on the accuracy of the ResPoNet model.
This study advances the valuation methodology for large VA portfolios and demonstrates, through numerical experiments and comparative analysis, improvements in valuation accuracy using the ResPoNet model. ResPoNet’s scalable design and interpretability, coupled with its rapid valuation speed, facilitate seamless integration into actuarial platforms such as AXIS or Prophet. This integration significantly reduces portfolio valuation time from hours to real-time processing, enabling efficient risk assessment for large VA portfolios while ensuring compliance with the Solvency II regulatory capital framework [47].
Future research directions include: (i) extending ResPoNet to model mortality shocks using affine jump-diffusion processes and extreme value theory [48], thereby enhancing ResPoNet’s robustness through adversarial training paradigms; (ii) incorporating surrender and partial withdrawal decisions via behavioral economic models [49] to address path-dependent risks; (iii) calibrating the model to real-market annuity data under regulatory frameworks, as contrastive learning techniques [50] may mitigate data scarcity in emerging guarantee types; and (iv) integrating stochastic interest rate dynamics to enhance capital market risk capture [13].

Author Contributions

Conceptualization, H.X. and J.X.; methodology, H.X. and J.X.; software, J.X. and Y.Z.; validation, H.X., R.M. and Y.Z.; formal analysis, H.X. and R.M.; investigation, H.X. and R.M.; resources, R.M.; data curation, J.X.; writing—original draft preparation, H.X. and J.X.; writing—review and editing, H.X. and R.M.; visualization, H.X., J.X. and Y.Z.; supervision, H.X. and R.M.; project administration, R.M.; funding acquisition, H.X. and R.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Humanities and Social Sciences Foundation of the Ministry of Education of China (MOE) (23YJCZH001) and the National Foreign Expert Individual Program of State Administration of Foreign Experts Affairs of China (H20240471).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

Jie Xu is employed by Guangzhou Futures Exchange and her participation in this research stems from her academic role as a graduate student. The collaboration is purely academic and does not involve any corporate resources or obligations of Guangzhou Futures Exchange. Therefore, no conflict of interest exists.

References

  1. LIMRA. 2025 LIMRA: 2024 Retail Annuity Sales Power to a Record $432.4 Billion. LIMRA Newsroom. Available online: https://www.limra.com/en/newsroom/news-releases/2025/limra-2024-retail-annuity-sales-power-to-a-record-$432.4-billion/ (accessed on 5 March 2025).
  2. Haefeli, D. Variable Annuities—An Analysis of Financial Stability; The Geneva Association: Geneva, Switzerland, 2013. [Google Scholar]
  3. Bacinello, A.R.; Millossovich, P.; Olivieri, A.; Pitacco, E. Variable annuities: A unifying valuation approach. Insur. Math. Econ. 2011, 49, 285–297. [Google Scholar] [CrossRef]
  4. Bauer, D.; Kling, A.; Russ, J. A universal pricing framework for guaranteed minimum benefits in variable annuities. Astin Bull. J. Iaa 2008, 38, 621–651. [Google Scholar] [CrossRef]
  5. Shen, Y.; Sherris, M.; Ziveyi, J. Valuation of guaranteed minimum maturity benefits in variable annuities with surrender options. Insur. Math. Econ. 2016, 69, 127–137. [Google Scholar] [CrossRef]
  6. Gan, G.; Lin, X.S. Valuation of large variable annuity portfolios under nested simulation: A functional data approach. Insur. Math. Econ. 2015, 62, 138–150. [Google Scholar] [CrossRef]
  7. Moenig, T.; Bauer, D. Revisiting the risk-neutral approach to optimal policyholder behavior: A study of withdrawal guarantees in variable annuities. In Proceedings of the 12th Symposium on Finance, Banking, and Insurance, Karlsruhe, Germany, 15–16 December 2011. [Google Scholar]
  8. Burrough, P.A.; McDonnell, R.; McDonnell, R.A.; Lloyd, C.D. Principles of Geographical Information Systems; Oxford University Press: Oxford, UK, 2015. [Google Scholar]
  9. Hejazi, S.A.; Jackson, K.R. A neural network approach to efficient valuation of large portfolios of variable annuities. Insur. Math. Econ. 2016, 70, 169–181. [Google Scholar] [CrossRef]
  10. Boyle, P.P.; Schwartz, E.S. Equilibrium prices of guarantees under equity-linked contracts. J. Risk Insur. 1977, 44, 639–660. [Google Scholar] [CrossRef]
  11. Brennan, M.J.; Schwartz, E.S. The pricing of equity-linked life insurance policies with an asset value guarantee. J. Financ. Econ. 1976, 3, 195–213. [Google Scholar] [CrossRef]
  12. Hardy, M. Investment Guarantees: Modeling and Risk Management for Equity-Linked Life Insurance; John Wiley & Sons: Abingdon, UK, 2003; Volume 215. [Google Scholar]
  13. Zhao, Y.; Mamon, R. An efficient algorithm for the valuation of a guaranteed annuity option with correlated financial and mortality risks. Insur. Math. Econ. 2018, 78, 1–12. [Google Scholar] [CrossRef]
  14. Milevsky, M.A.; Salisbury, T.S. Financial valuation of guaranteed minimum withdrawal benefits. Insur. Math. Econ. 2006, 38, 21–38. [Google Scholar] [CrossRef]
  15. Huang, Y.; Forsyth, P. Analysis of a penalty method for pricing a guaranteed minimum withdrawal benefit (gmwb). Ima J. Numer. Anal. 2012, 32, 320–351. [Google Scholar] [CrossRef]
  16. Xu, W.; Chen, Y.; Coleman, C.; Coleman, T.F. Moment matching machine learning methods for risk management of large variable annuity portfolios. J. Econ. Dyn. Control 2018, 87, 1–20. [Google Scholar] [CrossRef]
  17. Doyle, D.; Groendyke, C. Using neural networks to price and hedge variable annuity guarantees. Risks 2019, 7, 1. [Google Scholar] [CrossRef]
  18. Dong, B.; Xu, W.; Sevic, A.; Sevic, Z. Efficient willow tree method for variable annuities valuation and risk management. Int. Rev. Financ. Anal. 2020, 68, 101429. [Google Scholar] [CrossRef]
  19. Gan, G.; Valdez, E.A. Valuation of large variable annuity portfolios with rank order kriging. N. Am. Actuar. J. 2020, 24, 100–117. [Google Scholar] [CrossRef]
  20. Liu, K.; Tan, K.S. Real-time valuation of large variable annuity portfolios: A green mesh approach. N. Am. Actuar. J. 2021, 25, 313–333. [Google Scholar] [CrossRef]
  21. Gan, G. Application of data clustering and machine learning in variable annuity valuation. Insur. Math. Econ. 2013, 53, 795–801. [Google Scholar] [CrossRef]
  22. Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140. [Google Scholar] [CrossRef]
  23. Gan, G.; Quan, Z.; Valdez, E. Machine learning techniques for variable annuity valuation. In Proceedings of the 2018 4th International Conference on Big Data and Information Analytics (BigDIA), Houston, TX, USA, 17–19 December 2018; pp. 1–6. [Google Scholar]
  24. Gweon, H.; Li, S.; Mamon, R. An effective bias-corrected bagging method for the valuation of large variable annuity portfolios. Astin Bull. J. Iaa 2020, 50, 853–871. [Google Scholar] [CrossRef]
  25. Gweon, H.; Li, S. A hybrid data mining framework for variable annuity portfolio valuation. Astin Bull. J. Iaa 2023, 53, 580–595. [Google Scholar] [CrossRef]
  26. Sun, T.; Wang, H.; Wang, D. Robust prediction intervals for valuation of large portfolios of variable annuities: A comparative study of five models. Comput. Econ. 2024, 1–22. [Google Scholar] [CrossRef]
  27. Lim, H.B.; Shyamalkumar, N.D.; Tao, S. Valuation of variable annuity portfolios using finite and infinite width neural networks. Insur. Math. Econ. 2025, 120, 269–284. [Google Scholar] [CrossRef]
  28. Brockett, P.L.; Cooper, W.W.; Golden, L.L.; Pitaktong, U. A neural network method for obtaining an early warning of insurer insolvency. J. Risk Insur. 1994, 61, 402–424. [Google Scholar] [CrossRef]
  29. Dalkilic, T.E.; Tank, F.; Kula, K.S. Neural networks approach for determining total claim amounts in insurance. Insur. Math. Econ. 2009, 45, 236–241. [Google Scholar] [CrossRef]
  30. Prakash, A.; Mohanty, R.; Kallurkar, S. Service quality modelling for life insurance business using neural networks. Int. J. Product. Qual. 2011, 7, 263–286. [Google Scholar] [CrossRef]
  31. Ansari, A.; Riasi, A. Modelling and evaluating customer loyalty using neural networks: Evidence from startup insurance companies. Future Bus. J. 2016, 2, 15–30. [Google Scholar] [CrossRef]
  32. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  33. Liu, H.; Song, B. Stock price trend prediction model based on deep residual network and stock price graph. In Proceedings of the 2018 11th International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, China, 8–9 December 2018; Volume 2, pp. 328–331. [Google Scholar]
  34. Muhtar, Y.; Muhammat, M.; Yadikar, N.; Aysa, A.; Ubul, K. FC-ResNet: A multilingual handwritten signature verification model using an improved ResNet with CBAM. Appl. Sci. 2023, 13, 8022. [Google Scholar] [CrossRef]
  35. Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
  36. Cybenko, G. Approximation by superpositions of a sigmoidal function. Math. Control. Signals Syst. 1989, 2, 303–314. [Google Scholar] [CrossRef]
  37. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  38. Zhang, J.; Ye, G.; Tu, Z.; Qin, Y.; Zhang, J.; Liu, X.; Luo, S. A spatial attentive and temporal dilated (SATD) GCN for skeleton-based action recognition. Caai Trans. Intell. Technol. 2020, 7, 46–55. [Google Scholar] [CrossRef]
  39. Wanas, N.; Auda, G.; Kamel, M.S.; Karray, F. On the optimal number of hidden nodes in a neural network. In Proceedings of the Conference Proceedings. IEEE Canadian Conference on Electrical and Computer Engineering (Cat. No. 98TH8341), Waterloo, ON, Canada, 25–28 May 1998; Volume 2, pp. 918–921. [Google Scholar]
  40. Dongare, A.; Kharde, R.; Kachare, A.D. Introduction to artificial neural network. Int. J. Eng. Innov. Technol. 2012, 2, 189–194. [Google Scholar]
41. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  42. Tai, Y.; Yang, J.; Liu, X. Image super-resolution via deep recursive residual network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3147–3155. [Google Scholar]
  43. Biffis, E. Affine processes for dynamic mortality and actuarial valuations. Insur. Math. Econ. 2005, 37, 443–468. [Google Scholar] [CrossRef]
44. Hull, J.C. Options, Futures, and Other Derivatives; Pearson Education: Noida, India, 2003. [Google Scholar]
45. Gan, G.; Ma, C.; Wu, J. Data Clustering: Theory, Algorithms, and Applications; SIAM: Philadelphia, PA, USA, 2020. [Google Scholar]
  46. Gan, G.; Valdez, E.A. An empirical comparison of some experimental designs for the valuation of large variable annuity portfolios. Depend. Model. 2016, 4, 382–400. [Google Scholar]
  47. Hejazi, S.A.; Jackson, K.R. Efficient valuation of SCR via a neural network approach. J. Comput. Appl. Math. 2017, 313, 427–439. [Google Scholar] [CrossRef]
  48. Huang, F.; Maller, R.; Ning, X. Modelling life tables with advanced ages: An extreme value theory approach. Insur. Math. Econ. 2020, 93, 95–115. [Google Scholar] [CrossRef]
  49. Barberis, N.C. Thirty years of prospect theory in economics: A review and assessment. J. Econ. Perspect. 2013, 27, 173–196. [Google Scholar] [CrossRef]
  50. Huang, Y.; Song, Y.; Cai, Z. A supervised contrastive learning method with novel data augmentation for transient stability assessment considering sample imbalance. Reliab. Eng. Syst. Saf. 2025, 256, 110716. [Google Scholar] [CrossRef]
Figure 1. Details of ResNet structure.
Figure 2. Residual portfolio valuation network (ResPoNet) structure.
Figure 3. Neural network structure with sparse connection.
Figure 4. ResPoNet’s MSE.
Figure 5. Relative error versus α.
Figure 6. MSE on the training set (left) and validation set (right).
Figure 7. Comparison of estimation accuracy across different NN architectures.
Figure 8. Comparison of accuracy and efficiency performance of different models.
Figure 9. Valuation accuracy with respect to the size of the representative contract set.
Figure 10. Valuation accuracy with respect to the size of the training set.
Figure 11. Valuation performance by guarantee type.
Figure 12. Valuation performance by gender.
Table 1. Contract attributes and their ranges of values.

Attribute | Value
guarantee type | GMDB, GMMB, GMIB
gender | male, female
age | 20, 21, …, 60
guarantee value | [0.5 × 10^4, 6 × 10^5]
account value | [1 × 10^4, 5 × 10^5]
maturity | 10, 11, …, 25
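A synthetic portfolio drawn from these attribute ranges can be sketched as follows. Independent uniform sampling of each attribute is an assumption made here for illustration; the paper's actual generation scheme may differ, and `sample_contract` is a hypothetical helper, not code from the paper.

```python
import numpy as np


def sample_contract(rng):
    """Draw one synthetic VA contract from the attribute ranges of Table 1.

    Uniform, independent sampling of each attribute is an illustrative
    assumption, not necessarily the paper's generator.
    """
    return {
        "guarantee_type": str(rng.choice(["GMDB", "GMMB", "GMIB"])),
        "gender": str(rng.choice(["male", "female"])),
        "age": int(rng.integers(20, 61)),          # 20, 21, ..., 60
        "guarantee_value": float(rng.uniform(0.5e4, 6e5)),
        "account_value": float(rng.uniform(1e4, 5e5)),
        "maturity": int(rng.integers(10, 26)),     # 10, 11, ..., 25
    }


rng = np.random.default_rng(0)
portfolio = [sample_contract(rng) for _ in range(100_000)]
```

A portfolio of this form (100,000 contracts, matching the experiments in the paper) is then valued once by Monte Carlo on a small representative subset to produce training targets.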
Table 2. Performance metrics of different NN structures.

Structure | RE (%) | R² | MAE
NN | 4.0638 | 0.9828 | 14,653.0292
NN + Lw | 3.7459 | 0.9918 | 11,564.7631
NN + Res | 0.1026 | 0.9960 | 7230.1620
ResPoNet | 0.0562 | 0.9983 | 6082.8272
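Metrics of the kind reported in Table 2 can be computed from per-contract valuations as sketched below. The definitions used here (relative error of the aggregate portfolio value, coefficient of determination, mean absolute error) follow common conventions in the VA metamodeling literature and are an assumed reconstruction, not the paper's exact code; `valuation_metrics` is a hypothetical name.

```python
import numpy as np


def valuation_metrics(y_true, y_pred):
    """Compute RE, R^2, and MAE for a set of contract valuations.

    RE is taken at the portfolio level (error of the summed value),
    the usual convention in VA metamodeling; this is an assumption,
    as the paper may define RE slightly differently.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    re = abs(y_pred.sum() - y_true.sum()) / abs(y_true.sum())
    ss_res = ((y_true - y_pred) ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    mae = np.abs(y_true - y_pred).mean()
    return re, r2, mae
```

Note that a model can have near-zero portfolio-level RE while its per-contract MAE stays large, which is why the table reports both.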
Table 3. RE and running times of the traditional spatial interpolation methods and ResPoNet. The running time of ResPoNet includes training time and estimation time, while the running time of the other methods only includes the estimation time associated with the distance function of a given form.

Method | RE (%) | Running Time (s)
MC | 0 | 13,567
Kriging | 2.8895 | 96
IDW (p = 100) | 0.5812 | 71
IDW (p = 1) | 1.3033 | 72
RBF | 1.3202 | 82
ResPoNet | 0.0562 | 1078
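The two IDW rows above differ only in the distance-decay exponent p. A minimal sketch of standard inverse distance weighting (assumed here with Euclidean distance and illustrative inputs, not the paper's exact distance function) shows why a large exponent such as p = 100 behaves almost like a nearest-neighbour lookup, while p = 1 averages across many representative contracts:

```python
import numpy as np


def idw_estimate(x_query, x_rep, v_rep, p=1):
    """Estimate the value of a contract at x_query from representative
    contracts x_rep with known Monte Carlo values v_rep.

    Weights decay as 1 / distance**p, so larger p concentrates nearly
    all weight on the closest representative contract.
    """
    d = np.linalg.norm(x_rep - x_query, axis=1)
    if np.any(d == 0):                     # exact match: return its value
        return float(v_rep[np.argmin(d)])
    w = 1.0 / d ** p
    return float(np.dot(w, v_rep) / w.sum())
```

With representatives at distances 1 and 9, p = 1 blends both values, whereas p = 100 effectively returns the nearest representative's value, which is consistent with the accuracy gap between the two IDW rows.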
Table 4. Performance metrics of different machine learning structures.

Structure | RE (%) | MAE
RT | 3.9103 | 14,321.8205
GBDT | 0.2185 | 8006.1902
Bagging | 2.9265 | 12,114.4432
RF | 1.5372 | 11,073.4139
ResPoNet | 0.0562 | 6082.8272
Table 5. Sensitivity analysis of the size of the representative contract set. The size of the training set is 2000 and the size of the validation set is 1000.

Size of Representative Contracts | RE of ResPoNet (%) | RE of NN + Res (%)
100 | 0.3817 | 0.3941
200 | 0.2781 | 0.3524
300 | 0.0562 | 0.1026
400 | 0.2052 | 0.3287
500 | 0.1415 | 0.2563
Table 6. Valuation accuracy with respect to the size of the training set. The chosen size of the representative contract set is 300.

Size of the Training Set | RE (%) | R² | MAE
1000 | 0.2880 | 0.9978 | 6583.5345
1500 | 0.2575 | 0.9981 | 6278.6954
2000 | 0.0562 | 0.9983 | 6082.8272
2500 | 0.0532 | 0.9983 | 5945.3896
3000 | 0.0521 | 0.9985 | 5886.1785
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Xiong, H.; Xu, J.; Mamon, R.; Zhao, Y. ResPoNet: A Residual Neural Network for Efficient Valuation of Large Variable Annuity Portfolios. Mathematics 2025, 13, 1916. https://doi.org/10.3390/math13121916
