Application of Machine Learning to Mortality Modeling and Forecasting

Abstract: Estimation of future mortality rates still plays a central role for life insurers in pricing their products and managing longevity risk. In the literature on mortality modeling, a large number of stochastic models have been proposed, most of them forecasting future mortality rates by extrapolating one or more latent factors. The abundance of proposed models shows that forecasting future mortality from historical trends is non-trivial. Following the idea proposed in Deprez et al. (2017), we use machine learning algorithms, able to catch patterns that are not commonly identifiable, to calibrate a parameter (the machine learning estimator), improving the goodness of fit of standard stochastic mortality models. The machine learning estimator is then forecasted according to the Lee-Carter framework, allowing one to obtain a higher forecasting quality than the standard stochastic models. Out-of-sample forecasts are provided to verify the model accuracy.


Introduction
During the 20th century, mortality declined at all ages, producing a steep increase in life expectancy. This decline is mainly due to the reduction of infectious disease mortality (between 1900 and 1950), as well as of cardio-circulatory disease and cancer mortality (in the most recent decades). Knowledge of future mortality rates is an important matter for life insurance companies aiming at adequate pricing of their life products. Therefore, sophisticated techniques to forecast future mortality rates have become increasingly popular in actuarial science, in order to deal with longevity risk. Among the stochastic mortality models proposed in the literature, the Lee-Carter model (Lee and Carter 1992) is the most widely used in the world, probably owing to its robustness. The original model applies singular-value decomposition (SVD) to the log-force of mortality to find three latent parameters: a fixed age component and a time component capturing the mortality trend, the latter multiplied by an age-specific function. The time component is then forecasted using a random walk. More recent approaches involve non-linear regression and generalized linear models (GLM); e.g., Brouhns et al. (2002) assumed a Poisson distribution for deaths and calculated the Lee-Carter model parameters by log-likelihood maximization.
In recent years, machine learning techniques have assumed an increasingly central role in many areas of research, from computer science to medicine, including actuarial science. Machine learning is an application of artificial intelligence through a series of algorithms that are optimized on data samples or previous experience. That is, given a certain model defined as a function of a group of parameters, learning consists of improving these parameters using datasets or accumulated experience (the "training data"). Even though machine learning may not explain everything, it is very useful in detecting patterns, even unknown and unidentifiable ones, as well as hidden correlations. In this way, it allows us to understand processes better, make predictions about the future based on historical data, and categorize sets of data automatically.
We can distinguish between supervised and unsupervised learning methods. In supervised learning, the goal is to establish the relations between a range of predictors (independent variables) and a given target (dependent variable), whereas in unsupervised learning, the algorithm looks for patterns among a range of variables in order to group records that show similarities, without considering an output measure. While in the supervised case, the algorithm learns from the dataset the rules that are fed to the machine, in the unsupervised case, it has to identify the rules autonomously. Logistic and multiple regression, classification and regression trees, and naive Bayes are examples of supervised learning methods, while association rules and clustering are classified as unsupervised learning methods.
Despite their increasing usage in different fields of research, applications of machine learning in demography are still relatively rare. The main reason lies in the findings often being seen as "black boxes" that are considered difficult to interpret. Moreover, the algorithms are data driven rather than theory driven, while demographers are often interested in testing specific hypotheses; they are likely to be unwilling to use algorithms whose decisions cannot be rationally explained.
However, we believe that machine learning techniques can be valuable as a complement to standard mortality models, rather than a substitute.
In the literature related to mortality modeling, there are very few contributions on this topic. The work in Deprez et al. (2017) showed that machine learning algorithms are useful to assess the goodness of fit of the mortality estimates provided by standard stochastic mortality models (they considered the Lee-Carter and Renshaw-Haberman models). They applied a regression tree boosting machine to "analyze how the modeling should be improved based on feature components of an individual, such as its age or its birth cohort. This (non-parametric) regression approach then allows us to detect the weaknesses of different mortality models" (p. 337). In addition, they investigated cause-of-death mortality. In a recent paper, the work in Hainaut (2018) used neural networks to find the latent factors of mortality and forecast them according to a random walk with drift. Finally, the work in Richman and Wüthrich (2018) extended the Lee-Carter model to multiple populations using neural networks.
We investigate the ability of machine learning to improve the accuracy of some standard stochastic mortality models, both in the estimation and in the forecasting of mortality rates. The novelty of this paper lies primarily in mortality forecasting that takes advantage of machine learning, capturing patterns that are not identifiable with a standard mortality model. Following Deprez et al. (2017), we use tree-based machine learning techniques to calibrate a parameter (the machine learning estimator) to be applied to the mortality rates fitted by the standard mortality model.
We analyze three well-known stochastic mortality models: the Lee-Carter model (Lee and Carter 1992), which is still the most frequently implemented; the Renshaw-Haberman model (Renshaw and Haberman 2006), which also considers the cohort effect; and the Plat model (Plat 2009), which combines the parameters of the Lee-Carter model with those of the Cairns-Blake-Dowd model with cohort effect, known as "M7" (Cairns et al. 2009).
Three different kinds of supervised learning methods are considered for calibrating the machine learning estimator: decision tree, random forest, and gradient boosting, which are all tree-based.
We show that the implementation of these machine learning techniques, based on feature components such as gender, age, calendar year, and birth cohort, leads to a better fit of the historical data with respect to the estimates given by the Lee-Carter, Renshaw-Haberman, and Plat models. We also apply the same logic to improve the mortality forecasts provided by the Lee-Carter model, where the machine learning estimator is extrapolated using the Lee-Carter framework. Out-of-sample tests are performed for the improved model in order to verify the quality of the forecasts.
The paper is organized as follows. In Section 2, we specify the model and introduce the tree-based machine learning estimators. In Section 3, we present the stochastic mortality models considered in the paper. In Section 4, we illustrate the usage of the tree-based machine learning estimators to improve both the fitting and the forecasting quality of the original mortality models. Conclusions and further research are given in Section 5.

The Model
We consider the following categorical variables identifying an individual: gender (g), age (a), calendar year (t), and year of birth (c). We assign to each individual the feature x = (g, a, t, c) ∈ X, with X = G × A × T × C the feature space, where G = {males, females}, A = {0, ..., ω}, T = {t_1, ..., t_n}, and C = {c_1, ..., c_m}. Other categorical variables could be included in the feature space X, e.g., marital status, income, and other individual information.
We assume that the number of deaths D_x satisfies the following condition:

D_x ∼ Poisson(E_x m_x),

where m_x is the central death rate and E_x are the exposures.
Let us define d_x^mdl as the expected number of deaths estimated by a standard stochastic mortality model (such as Lee-Carter, Cairns-Blake-Dowd, etc.) and m_x^mdl the corresponding central death rate. Following Deprez et al. (2017), but modeling the central death rate m_x instead of the mortality rate q_x, we initially set:

D_x ∼ Poisson(ψ_x d_x^mdl).

The condition ψ_x ≡ 1 means that the specified mortality model perfectly fits the crude rates. However, in the real world, a mortality model could overestimate (ψ_x < 1) or underestimate (ψ_x > 1) the crude rates. Therefore, we calibrate the parameter ψ_x, based on the feature x, according to three different machine learning techniques. We find ψ_x as the solution of a regression tree algorithm applied to the ratio between the death observations and the corresponding value estimated by the specified mortality model:

ψ_x ≈ D_x / d_x^mdl.    (1)

We denote by ψ̂_x^{mdl,ML} the machine learning estimator obtained by solving Equation (1), where mdl indicates the stochastic mortality model and ML the machine learning algorithm used to improve the mortality rates given by that model. The estimator ψ̂_x^{mdl,ML} is then applied to the central death rate of the specified mortality model, m_x^mdl, aiming at a better fit of the observed data:

m̂_x^{mdl,ML} = ψ̂_x^{mdl,ML} m_x^mdl.    (2)

As in Deprez et al. (2017), we measure the improvement in the mortality rates attained by the tree-growing algorithm through the relative changes of central death rates:

Δm_x^{mdl,ML} = m̂_x^{mdl,ML} / m_x^mdl − 1.    (3)

The work in Hainaut (2018) used neural networks to learn the logarithm of the central death rates directly from the mortality data, using age, calendar year, and gender (and region) as predictors. We instead rely on the classical form of the Lee-Carter model, which we improve ex post using machine learning algorithms, considered complementary to, and not a substitute for, standard mortality modeling.
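As a minimal numerical illustration of Equations (1)-(3), the sketch below computes the ratio targets ψ_x and the improved rates on a toy age-by-year grid. The data values and array names are illustrative, not from the paper, and Python/NumPy stands in for the R code actually used:

```python
import numpy as np

# Hypothetical toy inputs on a 2x2 (age x year) grid: observed deaths,
# model-expected deaths, and exposures (illustrative values only).
D = np.array([[120.0, 110.0], [80.0, 85.0]])            # observed deaths D_x
d_mdl = np.array([[100.0, 100.0], [100.0, 100.0]])      # expected deaths d_x^mdl
E = np.array([[10000.0, 10000.0], [10000.0, 10000.0]])  # exposures E_x

m_mdl = d_mdl / E             # model central death rates m_x^mdl
psi = D / d_mdl               # raw ratio targets for the regression tree (Eq. (1))
m_improved = psi * m_mdl      # improved rates m_x^{mdl,ML} (Eq. (2))
delta_m = m_improved / m_mdl - 1.0  # relative changes (Eq. (3))

# Ratios above 1 flag cells where the model underestimates mortality,
# ratios below 1 flag overestimation.
print(psi)
print(delta_m)
```

In practice, ψ_x is not used cell by cell as here, but smoothed by a tree-based regression on the features (g, a, t, c), as described in the following subsections.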
To estimate ψmdl,ML x , we use the following tree-based machine learning (ML) techniques:

Decision Trees
Tree-based methods for regression and classification (Breiman et al. 1984) have become popular alternatives to linear regression. They are based on the partition of the feature space X through a sequence of binary splits, and the set of splitting rules used to segment the predictor space can be summarized in a tree (Hastie et al. 2016). Once the entire feature space has been recursively split into a certain number of simple regions, the response for a given observation can be predicted using the mean of the training observations in the region to which that observation belongs (James et al. 2017; Alpaydin 2010).
Let (X τ ) τ∈T be the partition of X ; the decision tree estimator is calculated as: Decision trees (DT) algorithms have advantages over other types of regression models.As pointed out by James et al. (2017): they are easy to interpret; they can easily handle qualitative predictors without the need to create dummy variables; they can catch any kind of correlation in the data.However, they suffer from some important drawbacks: they do not always have predictive accuracy levels similar to those of traditional regression and classification models; they can lack robustness: a small modification of the data can produce a tree that strongly differs from the one initially estimated.
The ML estimator was obtained using the R package rpart (Therneau and Atkinson 2017). The algorithm provides the estimate ψ̂(x) as the average of the response values belonging to the region of the regression tree that contains x. The values of the complexity parameter (cp) for the decision trees were chosen with the aim of keeping the number of splits uniform across models.
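A regression tree on the features (gender, age, year, cohort) can be sketched as follows. This is an illustrative Python/scikit-learn translation, not the paper's rpart code: the data are synthetic, and `ccp_alpha` (cost-complexity pruning) plays a role comparable to, but not identical with, rpart's cp parameter:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Illustrative features x = (gender, age, year, cohort); gender coded 0/1.
n = 2000
gender = rng.integers(0, 2, n)
age = rng.integers(0, 101, n)
year = rng.integers(1915, 2015, n)
cohort = year - age

# Synthetic ratio target with a cohort pattern a tree can pick up (toy data).
psi = 1.0 + 0.1 * (cohort < 1940) - 0.05 * (age > 80) + rng.normal(0, 0.02, n)

X = np.column_stack([gender, age, year, cohort])
# ccp_alpha prunes by cost-complexity, playing a role similar to rpart's cp.
tree = DecisionTreeRegressor(ccp_alpha=1e-4, random_state=0).fit(X, psi)

# The prediction is the mean response of the training points in the same leaf.
psi_hat = tree.predict(X)
print(np.mean((psi - psi_hat) ** 2))  # in-sample MSE
```

Because cohort enters as a feature, splits on it produce the diagonal regions in the (age, year) plane discussed in Section 4.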

Random Forest
The aggregation of many decision trees can improve their predictive performance. Therefore, we first apply bagging (also called bootstrap aggregation) to produce a certain number, B, of decision trees from bootstrapped training samples drawn from the original training dataset. Random forest (RF) differs from bagging in the way the predictors are considered: RF algorithms account for only a random subset of the predictors at each split in the tree, as described in detail by Breiman (2001). In this way, even if there is one very strong predictor in the dataset, the other predictors still have a chance to be chosen as split candidates (James et al. 2017). The RF estimator is calculated as the average of the B tree estimators:

ψ̂^RF(x) = (1/B) Σ_{b=1}^{B} ψ̂^(b)(x),

where ψ̂^(b)(x) is the decision tree estimator fitted on the b-th bootstrapped sample. The RF estimator is obtained by applying the algorithm from the R package randomForest (Liaw 2018). Since this procedure proved to be very costly from a computational point of view, the number of trees must be carefully chosen: it should not be too large, but at the same time large enough to produce an adequate percentage of variance explained and a low mean of squared residuals (MSR).
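The averaging of the B tree estimators can be verified directly in a small sketch. Again this is an illustrative scikit-learn stand-in for the R randomForest package, on synthetic data; `max_features="sqrt"` implements the random subset of predictors considered at each split:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Toy psi-style target on features (gender, age, year, cohort); illustrative only.
n = 2000
gender = rng.integers(0, 2, n)
age = rng.integers(0, 101, n)
year = rng.integers(1915, 2015, n)
cohort = year - age
psi = 1.0 + 0.1 * (cohort < 1940) - 0.05 * (age > 80) + rng.normal(0, 0.02, n)
X = np.column_stack([gender, age, year, cohort])

# B = 200 bootstrapped trees with a random subset of predictors at each split,
# mirroring the ntrees = 200 choice made with R's randomForest.
rf = RandomForestRegressor(n_estimators=200, max_features="sqrt", random_state=0)
rf.fit(X, psi)

# The RF estimate is the average of the individual tree predictions.
manual_avg = np.mean([t.predict(X) for t in rf.estimators_], axis=0)
assert np.allclose(manual_avg, rf.predict(X))
```

The assertion checks that the forest prediction coincides with the plain average over the fitted trees, i.e., the formula above.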

Gradient Boosting
Consider the loss in using a certain function to predict a variable on the training data; gradient boosting (GB) aims at minimizing the in-sample loss with respect to this function by a stage-wise adaptive learning algorithm that combines weak predictors.
Let ψ(x) be the target function; the gradient boosting algorithm finds an approximation ψ̂(x) that minimizes the expected value of a specified differentiable loss function (the optimization problem). At each stage i of gradient boosting (1 ≤ i ≤ N), we suppose that there is some imperfect model ψ̂_i(x); the algorithm then improves on ψ̂_i(x) by adding a base learner h_i to provide a better model:

ψ̂_{i+1}(x) = ψ̂_i(x) + λ h_i(x),

where h_i ∈ H is a base learner function (H is a set of arbitrary differentiable functions) and λ is a multiplier obtained by solving the optimization problem. The GB estimator is obtained using the R package gbm (Ridgeway 2007). The gbm package requires choosing the number of trees (n.trees) and other key parameters such as the number of cross-validation folds (cv.folds), the depth of each tree involved in the estimate (interaction.depth), and the learning rate parameter (shrinkage). The number of trees, representing the number of GB iterations, must be chosen with care: too many iterations can overfit the training data, while too few can leave the training error unnecessarily high. The number of cross-validation folds to perform should be chosen according to the dataset size. In general, five-fold cross-validation, which corresponds to 20% of the data used for testing in each fold, is considered a good choice in many cases. Finally, the interaction depth represents the highest level of variable interactions allowed in each tree.
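The stage-wise update above can be made concrete in a short sketch. This is an illustrative Python/scikit-learn stand-in for the R gbm package on synthetic data; `n_estimators`, `max_depth`, and `learning_rate` correspond roughly to gbm's n.trees, interaction.depth, and shrinkage (the values are reduced here for speed and are not the paper's settings):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)

# Toy psi-style target on features (age, year, cohort); illustrative only.
n = 2000
age = rng.integers(0, 101, n)
year = rng.integers(1915, 2015, n)
cohort = year - age
X = np.column_stack([age, year, cohort])
psi = 1.0 + 0.1 * (cohort < 1940) + rng.normal(0, 0.02, n)

# n_estimators ~ n.trees, max_depth ~ interaction.depth, learning_rate ~ shrinkage.
gb = GradientBoostingRegressor(n_estimators=500, max_depth=6, learning_rate=0.01,
                               random_state=0)
gb.fit(X, psi)

# Each stage adds a scaled base learner h_i(x), so the in-sample loss
# decreases stage by stage.
train_errors = [np.mean((psi - pred) ** 2) for pred in gb.staged_predict(X)]
assert train_errors[-1] < train_errors[0]
```

Monitoring held-out (or cross-validated) error over the stages, as gbm.perf does in R, is what guards against running too many iterations.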

Mortality Models
Let us consider the generalized age-period-cohort (GAPC) family of stochastic mortality models (see Villegas et al. 2015 for further details). In the GAPC models, the effects of age, calendar year, and cohort are captured by a predictor, denoted in our framework by η_x, as follows:

η_x = α_a + Σ_{i=1}^{N} β_a^(i) κ_t^(i) + β_a^(0) γ_{t−a},

where:
• α_a: age-specific parameter providing the average age profile of mortality;
• β_a^(i) κ_t^(i), ∀i: age-period terms describing the mortality trends (κ_t^(i) is the time index, and β_a^(i) modifies its effect across ages);
• β_a^(0) γ_{t−a}: represents the cohort effect, where γ_{t−a} is the cohort parameter and β_a^(0) modifies its effect across ages (c = t − a is the year of birth).
The mortality predictor is related to a link function g, so that:

g(E[D_x / E_x]) = η_x.

In this paper, we consider the log link function and assume that the numbers of deaths D_x follow a Poisson distribution.

Lee-Carter Model
Under the above-described framework, the Lee-Carter (LC) model as proposed by Brouhns et al. (2002) requires a log link function targeting the central death rate. In the LC model, the logarithm of the central death rate is described by:

log m_x = α_a + β_a^(1) κ_t^(1),    (8)

with the constraints Σ_{t∈T} κ_t^(1) = 0 and Σ_{a∈A} β_a^(1) = 1 to avoid identifiability problems with the parameters. In order to forecast mortality with the LC model, the time index κ_t^(1) is modeled by an autoregressive integrated moving average (ARIMA) process. In general, a random walk with drift properly fits the data:

κ_t^(1) = κ_{t−1}^(1) + δ + ε_t,

where δ is the drift parameter and ε_t are the error terms, normally distributed with zero mean and variance σ_κ².
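The original SVD-based calibration of the LC model, together with the random-walk forecast of the time index, can be sketched numerically. This is an illustrative NumPy implementation on synthetic log-rates (the paper fits the Poisson version of Brouhns et al. 2002 instead; the grid and values below are toy assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy log central death rates on an age x year grid (illustrative data only).
ages = np.arange(0, 91)
years = np.arange(1960, 2015)
log_m = (-8.0 + 0.08 * ages)[:, None] \
        + (-0.02 * (years - years[0]))[None, :] \
        + rng.normal(0, 0.01, (ages.size, years.size))

# LC: log m_{a,t} = alpha_a + beta_a kappa_t, fitted by SVD of the centered matrix.
alpha = log_m.mean(axis=1)                     # average age profile
U, s, Vt = np.linalg.svd(log_m - alpha[:, None], full_matrices=False)
beta, kappa = U[:, 0], s[0] * Vt[0]

# Impose the usual constraint sum(beta) = 1 (sum(kappa) = 0 already holds,
# since each row of the centered matrix sums to zero).
kappa *= beta.sum()
beta /= beta.sum()

# Forecast kappa with a random walk with drift.
delta = np.diff(kappa).mean()
kappa_fc = kappa[-1] + delta * np.arange(1, 15)
```

The rank-1 reconstruction alpha + beta·kappa then reproduces the fitted log-rates, and the forecasted kappa values feed the projected mortality surface.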

Renshaw-Haberman Model
The Renshaw-Haberman (RH) model (Renshaw and Haberman 2006) extends the LC model by including a cohort effect. The model's predictor, with the log link function targeting the central death rate, is:

log m_x = α_a + β_a^(1) κ_t^(1) + β_a^(0) γ_{t−a}.

According to Haberman and Renshaw (2011) and Hunt and Villegas (2015), we set β_a^(0) = 1 ∀a ∈ A, as the model is more stable than the original version.
The model is subject to the following constraints, where c = t − a: Σ_{t∈T} κ_t^(1) = 0, Σ_{a∈A} β_a^(1) = 1, and Σ_{c∈C} γ_c = 0. The parameters κ_t^(1) and γ_{t−a} are modeled by ARIMA processes, assumed to be independent of each other.

Plat Model
The Plat model (Plat 2009) aims to combine the M7 and LC models in order to obtain a model that is appropriate for the entire age range and that captures the cohort effect, thus overcoming the disadvantages of the previous models. Its predictor is:

log m_x = α_a + κ_t^(1) + (ā − a) κ_t^(2) + (ā − a)^+ κ_t^(3) + γ_{t−a},

where ā is the average age in the data and (ā − a)^+ = max(ā − a, 0).
The Plat model is subject to the following constraints:

Σ_{t∈T} κ_t^(1) = 0, Σ_{t∈T} κ_t^(2) = 0, Σ_{t∈T} κ_t^(3) = 0,
Σ_{c∈C} γ_c = 0, Σ_{c∈C} c γ_c = 0, Σ_{c∈C} c² γ_c = 0.

As noted in Villegas et al. (2015): "the first three constraints ensure that the period indexes are centered around zero, while the last three constraints ensure that the cohort effect fluctuates around zero and has no linear or quadratic trend".
The model accuracy was measured by the Bayes information criterion (BIC) and the Akaike information criterion (AIC), which are measures generally used to evaluate the goodness of fit of mortality models 1. Log-likelihood L, AIC, and BIC values are reported in Table 1, from which we observe that the RH model fits the historical data very well: it has the lowest AIC and BIC values for both genders with respect to the other models, followed, in order, by the LC model and the Plat model.

Table 1. AIC and BIC values (rank in parentheses) for the LC, RH, and Plat models, by gender.

                          Males                                       Females
            LC             RH            Plat           LC            RH            Plat
AIC (Rank)  1,287,051 (2)  616,967 (1)   1,752,089 (3)  353,442 (2)   275,193 (1)   564,028 (3)
BIC (Rank)  1,289,218 (2)  620,570 (1)   1,755,671 (3)  355,608 (2)   278,796 (1)   567,610 (3)

The goodness of fit was also tested by the analysis of the residuals. From Figure 1, we can observe that the RH model provided the best fit, despite the highest number of parameters. The LC model provided a good fit, especially for the old-age population, while the Plat model showed the worst performance despite the high number of parameters involved.

Model Fitting Improved by Machine Learning
In the following, we specify the parameters used to calibrate the ML algorithms described in Section 2 using the rpart, randomForest, and gbm packages, respectively:
• ψ̂_x^{mdl,DT} was estimated with the rpart package by setting cp = 0.003 (complexity parameter);
• ψ̂_x^{mdl,RF} was estimated with the randomForest package by setting ntrees = 200 (number of trees). Since this procedure proved to be very costly from a computational point of view, we limited the number of trees to 200, in order to guarantee both an adequate percentage of variance explained by the model and a low mean of squared residuals, MSR (see Table 2);
• ψ̂_x^{mdl,GB} was estimated with the gbm package by setting n.trees = 5000 (number of trees), cv.folds = 5 (number of cross-validation folds), interaction.depth = 6, and shrinkage = 0.001 (learning rate), chosen in accordance with the algorithm's implementation speed. The parameter cv.folds is used to estimate the optimal number of iterations through the function gbm.perf (see Figure 2).

The level of improvement in central death rates resulting from the application of the ML algorithms was measured by Δm_x^{mdl,ML}, the relative changes described in Equation (3). Numerical results for the LC, RH, and Plat models combined with the tree-based ML algorithms are shown in Figure 3 for males. Similar results were obtained for females.
The white areas represent very small variations of Δm_x^{mdl,ML}, approximately around zero. Larger white areas were observed for gradient boosting applied to the LC and RH models. In all cases, there were also significant changes, which were less prominent for the RH model that best fit the historical data. Many regions were identified by diagonal splits (highlighting a cohort effect), strengthening our choice to include the cohort parameter in the decision tree algorithms.
Especially for the LC model, we point out that the relative changes were mainly concentrated in the young ages. For the Plat model, we observed small values of Δm_x^{PL,ML} with respect to the other mortality models, with the exception of the population aged under 40, which showed quite significant changes. From these early results, DT and RF seemed to work better than the GB algorithm. Since the most significant changes were concentrated in the younger ages, we show the mortality rates (in log scale) only for the age group 0-50 (Figure 4). For the sake of brevity, we show the results for the male population. Similar results were obtained for females and are reported in the Appendix (see Figure A1).
From the plots, we can argue that the ML estimators led to an improvement in the quality of fit in all the mortality models considered. The application of an ML estimator involves significant changes in the values of the mortality rates, with a significant improvement in the fit of the data. Among the stochastic mortality models considered here, the Plat model is the one that achieved the largest fit improvements from the use of the ML algorithms.
Further, we measured the goodness of fit of the models with the mean absolute percentage error (MAPE), defined as:

MAPE = (100/N) Σ_x |m_x − m̂_x^{mdl,ML}| / m_x,

where N is the data dimension and m_x and m̂_x^{mdl,ML} are, respectively, the observed and estimated values of mortality. The MAPEs are summarized in Table 3. The largest MAPE reduction was achieved by the Plat model, falling from 25.81% to 4.79% after the application of the RF algorithm (from 22.34% to 4.49% for the female population).
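The MAPE computation is straightforward; a minimal sketch (with illustrative toy rates, not the paper's data) is:

```python
import numpy as np

def mape(m_obs, m_fit):
    """Mean absolute percentage error (in %) between observed and fitted rates."""
    m_obs = np.asarray(m_obs, dtype=float)
    m_fit = np.asarray(m_fit, dtype=float)
    return 100.0 * np.mean(np.abs(m_obs - m_fit) / m_obs)

# Toy check: relative errors of 10%, 10%, and 0% average to 20/3 percent.
print(mape([0.01, 0.02, 0.04], [0.011, 0.018, 0.04]))  # ≈ 6.67
```

Dividing by the observed rate makes the measure scale-free, so small young-age rates and large old-age rates contribute on an equal footing.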
In summary, all the ML algorithms improved the standard stochastic mortality models herein considered, and the RF algorithm turned out to be the most effective one.

LC Model Forecasting Improved by Machine Learning
In this subsection, we describe how the ML estimator ψ̂_x can be used to improve the mortality forecasts given by the standard stochastic models.
Setting aside the logic of machine learning, our idea is to model and forecast ψ̂_x using the same framework as the original mortality model. The forecasted values of ψ̂_x are then used to improve the forecasted mortality rates obtained from the original model. This approach was tested on the LC model; therefore, the ML estimator ψ̂_x^{LC,ML} is modeled as:

log ψ̂_x^{LC,ML} = α_a^ψ + β_a^(1,ψ) κ_t^(1,ψ),    (14)

where the parameters α_a^ψ, β_a^(1,ψ), and κ_t^(1,ψ) have the same meaning as α_a, β_a^(1), and κ_t^(1) in Equation (8). Combining Equations (2), (8), and (14), we obtain the following LC model improved by machine learning:

log m̂_x^{LC,ML} = α_a + α_a^ψ + β_a^(1) κ_t^(1) + β_a^(1,ψ) κ_t^(1,ψ).

To verify the model accuracy, we provide out-of-sample forecasts, where the fitting period was set to 1915-2000 and the forecasting period to 2001-2014. In the forecasting, κ_t^(1,ψ) and κ_t^(1) were both modeled by a random walk with drift using the values for the past 41 years. The plots of the time-dependent parameters κ_t^(1) and κ_t^(1,ψ) by gender are provided in the Appendix (Figure A2).
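Both time indexes are extrapolated with the same random-walk-with-drift rule; a minimal sketch (with an illustrative toy series, not the estimated κ values) is:

```python
import numpy as np

def rw_drift_forecast(kappa, horizon):
    """Forecast a random walk with drift fitted to an observed kappa series."""
    kappa = np.asarray(kappa, dtype=float)
    delta = np.diff(kappa).mean()  # drift = mean one-step increment
    return kappa[-1] + delta * np.arange(1, horizon + 1)

# Toy series (illustrative): a kappa declining by 2 per year.
kappa = np.array([10.0, 8.0, 6.0, 4.0, 2.0])
print(rw_drift_forecast(kappa, 3))  # → [ 0. -2. -4.]
```

Applied to both κ_t^(1) and κ_t^(1,ψ), this produces the projected improved log-rates via the combined predictor above.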

The values of the parameter κ_t^(1) of the standard LC model (Figure A2a,b) have been strongly decreasing since the end of the Second World War, reflecting a strong reduction of mortality over time, with a further acceleration after the mid-1980s. The ML algorithms reduced this effect through the parameter κ_t^(1,ψ), which shows a growing trend after 1960, stronger since the 1980s (Figure A2c-h).
The use of the same framework as the original mortality model to fit and forecast the ML estimator ψ̂_x has a dual purpose: on the one hand, it improves the forecasts provided by the original model; on the other hand, it allows analyzing the effect of the improvement directly on the model's parameters. As discussed in the Introduction, machine learning is recognized to be very effective at detecting unknown and unidentifiable patterns in the data, but lacks an underlying theory that may be fundamental to provide a rational explanation of the results obtained. From this point of view, our approach can contribute to filling the gap between machine learning and theory, combining a data-driven approach with a model-driven one.

Goodness of Forecasting
The forecasting results given by the out-of-sample test were compared using two measures: the root mean squared logarithmic error (RMSLE) and the root mean squared error (RMSE). The first (RMSLE) is computed on log m_x, giving relatively large weight to errors at young ages, while the second (RMSE) is based on m_x and gives relatively large weight to errors at older ages.
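The different age-weighting of the two measures can be seen in a small sketch (illustrative toy rates; RMSLE here is taken on log m_x, as in the text, rather than the log(1 + x) variant sometimes used elsewhere):

```python
import numpy as np

def rmse(m_obs, m_fc):
    """Root mean squared error on the rates themselves."""
    m_obs, m_fc = np.asarray(m_obs, float), np.asarray(m_fc, float)
    return float(np.sqrt(np.mean((m_obs - m_fc) ** 2)))

def rmsle(m_obs, m_fc):
    """Root mean squared error on log m_x, emphasizing small (young-age) rates."""
    m_obs, m_fc = np.asarray(m_obs, float), np.asarray(m_fc, float)
    return float(np.sqrt(np.mean((np.log(m_obs) - np.log(m_fc)) ** 2)))

# Young-age rate off by a factor of 2, old-age rate off by 10%:
# RMSE is dominated by the old-age error, RMSLE by the young-age one.
m_obs = np.array([0.001, 0.10])
m_fc = np.array([0.002, 0.11])
print(rmse(m_obs, m_fc), rmsle(m_obs, m_fc))
```

With these toy numbers, the RMSLE is driven almost entirely by the young-age cell, although its absolute error is ten times smaller than the old-age one.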
Table 4 shows the out-of-sample test results for the LC model improved by machine learning, with ψ̂_x^{LC,ML} forecasted using the LC framework. Values in bold indicate the model with the smaller RMSLE and RMSE. The RF algorithm provided the best performance, except for the male RMSE, where GB was the best. The largest reduction of RMSLE was 77% for males and 71% for females, while for RMSE, it was 51% for males and 80% for females. We can conclude that all the ML estimators produced a significant improvement in forecasting with respect to the standard LC model.

We now use a more recent period (starting from 1960 instead of 1915) to analyze the level of the improvement provided by the ML algorithms on a smaller dataset. The aim is to check whether the change of the calibration period has an important impact on the results, since the ML algorithms work better with larger datasets. We show in Table 5 the values of MAPE used to analyze the quality of the fit. Also for the shorter calibration period, all the ML algorithms improved the standard stochastic mortality models, and the level of the improvement in the model fit remained high. The RF algorithm continued to be the best one. The values of Δm_x^{mdl,ML} for the LC, RH, and Plat models combined with the tree-based ML algorithms for the new calibration period are shown in Figure 5 for males. Similar results were obtained for females and are reported in the Appendix (see Figure A3). Also in this case, there were significant changes, but fewer regions were identified by diagonal splits (highlighting a cohort effect) with respect to the time period 1915-2014. Moreover, we observed that the significant changes were concentrated in both the young and the old ages; the concentration in the old ages was less evident in the 1915-2014 analysis. While for the longer estimation period κ_t^(1,ψ) showed a roughly monotone trend since 1960, for the shorter estimation period this trend oscillated: decreasing until the mid-1980s, increasing until 1997, then decreasing and increasing again for a few years (see Figures A2c-h and A4c-h). As a consequence, the forecasted values of κ_t^(1,ψ) were approximately constant (due to the random-walk-with-drift behavior), while they were increasing in the case of the longer estimation period. We observed that the reduction of mortality over the time period 1960-2000 was less strong than that registered for the years 1915-2000 2, and this fact led to more adequate projections, requiring smaller adjustments from κ_t^(1,ψ).

Goodness of Forecasting
Table 6 shows the out-of-sample test results for the LC model. Values in bold indicate the model with the smaller RMSLE and RMSE. The best performance in terms of RMSLE was given by the RF algorithm, while in terms of RMSE, GB provided smaller values. The largest reduction of RMSLE was 68% for males and 64% for females, while in terms of RMSE, it was 8% for males and 6% for females.
In light of these results, we can state that, also with a smaller dataset, all the ML estimators produced a better quality of forecasting with respect to the standard LC model, but the level of the improvement was less satisfactory for the older ages than that obtained with the larger dataset. The reduction level obtained by RMSE for both genders was significantly lower than that achieved in the case of the 1915-2014 dataset. ML algorithms require large datasets to attain excellent performance, and using a smaller dataset makes the algorithms less effective in detecting unknown patterns, especially at old ages, where there are few observations.

Conclusions
Our paper illustrates how machine learning can be used to improve both the fitting and the forecasting of standard stochastic mortality models (such as LC, RH, and Plat), taking advantage of artificial intelligence to better understand patterns that are not identifiable by standard models. We extend the work of Deprez et al. (2017), which applied a regression tree boosting machine to improve the fit of the LC and RH models. We tested the improvement in the fitting quality of the LC, RH, and Plat models using not only the decision tree, but also two more powerful ML algorithms: random forest and gradient boosting. Our results, obtained from a case study on the Italian population, demonstrate that the random forest algorithm was the most effective, though the other two algorithms also produced significant improvements.

1
The AIC and BIC statistics are both functions of the log-likelihood, L, and of the number of parameters involved in the model, ν: AIC = 2ν − 2L and BIC = ν log N − 2L, where N is the number of observations.

Figure 1 .
Figure 1.Heat map of standardized residuals of the mortality models.Ages 0-100 and years 1915-2014, Italian population.

Figure 2 .
Figure 2. Estimates of the optimal number of boosting iterations for the LC, RH, and Plat model.Black line: Out-of-bag estimates; green line: cross-validation estimates.

Figure 3 .
Figure 3. Values of Δm_x^{mdl,ML} for the LC, RH, and Plat models combined with the tree-based ML algorithms, male population. Ages 0-100 and years 1915-2014, Italian population.

Figure 5 .
Figure 5. Values of Δm_x^{mdl,ML} for the LC, RH, and Plat models combined with the tree-based ML algorithms, male population. Ages 0-100 and years 1960-2014, Italian population.

Table 2 .
Explained variance and MSR by the RF algorithm for the LC, RH, and Plat model.Ages 0-100 and years 1915-2014, Italian population.

Table 3 .
MAPE of fitted with respect to observed data for the LC, RH, and Plat model, before (No ML) and after (DT, RF, GB) the application of ML algorithms.Ages 0-100 and years 1915-2014, Italian population.Values in bold indicate the specification with the smaller MAPE for each model.

Table 4 .
Out-of-sample test results: RMSLE and RMSE for the LC model without and with machine learning. ML estimator ψ̂_x^{LC,ML} modeled according to the LC framework. Years 2001-2014 (fitting period: 1915-2000). Values in bold indicate the model with the smaller RMSLE and RMSE.

Table 5 .
MAPE of fitted with respect to observed data for the LC, RH, and Plat model, before (No ML) and after (DT, RF, GB) the application of ML algorithms.Ages 0-100 and years 1960-2014, Italian population.Values in bold indicate the specification with the smaller MAPE for each model.

Table 6 .
Out-of-sample test results: RMSLE and RMSE for the LC model without and with machine learning. Years 2001-2014 (fitting period: 1960-2000). Values in bold indicate the model with the smaller RMSLE and RMSE.