Article

Machine Learning in Least-Squares Monte Carlo Proxy Modeling of Life Insurance Companies

1 Department of Mathematics, TU Kaiserslautern, Erwin-Schrödinger-Straße, Geb. 48, 67653 Kaiserslautern, Germany
2 Mathematical Institute, University of Cologne, Weyertal 86-90, 50931 Cologne, Germany
3 Department Financial Mathematics, Fraunhofer ITWM, Fraunhofer-Platz 1, 67663 Kaiserslautern, Germany
* Author to whom correspondence should be addressed.
Risks 2020, 8(1), 21; https://doi.org/10.3390/risks8010021
Submission received: 30 December 2019 / Revised: 10 February 2020 / Accepted: 12 February 2020 / Published: 21 February 2020
(This article belongs to the Special Issue Machine Learning in Insurance)

Abstract
Under the Solvency II regime, life insurance companies are asked to derive their solvency capital requirements from the full loss distributions over the coming year. Since the industry is currently far from being endowed with sufficient computational capacities to fully simulate these distributions, the insurers have to rely on suitable approximation techniques such as the least-squares Monte Carlo (LSMC) method. The key idea of LSMC is to run only a few wisely selected simulations and to process their output further to obtain a risk-dependent proxy function of the loss. In this paper, we present and analyze various adaptive machine learning approaches that can take over the proxy modeling task. The studied approaches range from ordinary and generalized least-squares regression variants through generalized linear model (GLM) and generalized additive model (GAM) methods to multivariate adaptive regression splines (MARS) and kernel regression routines. We justify the combinability of their regression ingredients in a theoretical discourse. Further, we illustrate the approaches in slightly disguised real-world experiments and perform comprehensive out-of-sample tests.

1. Introduction

The Solvency II directive of the European Parliament and European Council (2009) requires insurance companies to derive the solvency capital requirement (SCR) from the full probability distributions of losses over a one-year period. Some life insurers comply with this requirement by setting up internal models. Other insurers opt for the much simpler standard formula, which enables an aggregation of the company’s exposures to single risks. Lacking an analytical valuation formula for the losses in a one-year period, life insurers with an internal model are supposed to utilize a Monte Carlo approach usually called the nested simulations approach (Bauer et al. (2012)). In practice, their cash-flow-projection (CFP) models need to be simulated several hundred thousand to several million times for a robust implementation of the nested simulations approach. However, the insurers are currently far from being endowed with sufficient computational capacities to perform such expensive simulation tasks. By applying suitable approximation techniques like the least-squares Monte Carlo (LSMC) approach of Bauer and Ha (2015), the insurers are nevertheless able to overcome these computational hurdles. For example, they can implement the LSMC framework formalized by Krah et al. (2018) and applied by, for example, Bettels et al. (2014), to derive their full loss distributions. The central idea of this framework is to carry out a comparably small number of wisely chosen nested Monte Carlo simulations and to feed the simulation results into a supervised machine learning algorithm that translates the results into a proxy function of the insurer’s loss (output) with respect to the underlying risk factors (input).
Our starting point is the LSMC framework from Krah et al. (2018). In the following, we assume the same approach for the proxy derivation and only amend the calibration and validation steps. Therefore, we neither repeat the simulation setting nor the procedure for the full loss distribution forecast and SCR calculation here in detail. The purpose of this exposition is to introduce different machine learning methods that can be applied in the calibration step of the LSMC framework, to point out their similarities and differences, and to compare their out-of-sample performances in the same slightly disguised real-world LSMC example already used in Krah et al. (2018).
We describe the data basis used for calibration and validation in Section 2.1, the structure of the calibration algorithm in Section 2.2 and our validation approach in Section 2.3. Our focus lies on out-of-sample performance rather than computational efficiency as the latter becomes only relevant if the former gives reason for it. We analyze a very realistic data basis with 15 risk factors and validate the proxy functions based on a very comprehensive and computationally expensive nested simulations test set comprising the SCR estimate.
The main idea of our approach is to combine different regression methods with an adaptive algorithm, in which the proxy functions are built up of basis functions in a stepwise fashion. In a four risk factor LSMC example, Teuguia et al. (2014) applied a full model approach, forward selection, backward elimination and a bidirectional approach as, for example, discussed in Hocking (1976) with orthogonal polynomial basis functions. They stated that only forward selection and the bidirectional approach remained feasible when the number of risk factors or the polynomial degree exceeded 7, as the other resulting models exploded in size. Life insurance companies covering a wide range of contracts in their portfolios are typically exposed to even more risk factors, for example, 15. Complex business regulation frameworks such as those in Germany cause non-linear dependencies between risk factors and losses, which naturally lead to polynomials of higher degrees in the chosen proxy models. In these cases, even the standard forward selection and bidirectional approaches become infeasible, as the sets of candidate terms from which the basis functions are chosen then explode as well. We therefore follow the suggestion of Krah et al. (2018) to implement the so-called principle of marginality, an iteration-wise update technique of the set of candidate terms that lets the algorithm get along with comparably few carefully selected candidate terms.
Our main contribution is to identify, explain and illustrate a collection of regression methods and model selection criteria from the variety of regression design options that provide suitable proxy functions in the LSMC framework when applied in combination with the principle of marginality. After some general remarks in Section 3.1, we describe ordinary least-squares (OLS) regression in Section 3.2, generalized linear models (GLMs) by Nelder and Wedderburn (1972) in Section 3.3, generalized additive models (GAMs) by Hastie and Tibshirani (1986) and Hastie and Tibshirani (1990) in Section 3.4, feasible generalized least-squares (FGLS) regression in Section 3.5, multivariate adaptive regression splines (MARS) by Friedman (1991) in Section 3.6, and kernel regression by Watson (1964) and Nadaraya (1964) in Section 3.7. While some regression methods such as OLS and FGLS regression or GLMs can immediately be applied in conjunction with numerous model selection criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), Mallows’ $C_p$ or generalized cross-validation (GCV), other regression methods such as GAMs, MARS, kernel, ridge or robust regression require well thought-through modifications thereof or work only with non-parametric alternatives such as k-fold or leave-one-out cross-validation. For adaptive approaches of FGLS, ridge and robust regression in life insurance proxy modeling, see also Hartmann (2015), Krah (2015) and Nikolić et al. (2017), respectively.
In the theory sections, we present the models with their assumptions, important properties and popular estimation algorithms, and demonstrate how they can be embedded in the adaptive algorithm by proposing feasible implementation designs and combinable model selection criteria. While we shed light on the basic theoretical concepts of the models to lay the groundwork for the application and interpretation of the numerical experiments that follow, we forego describing technical enhancements or peculiarities of the involved algorithms in detail and instead refer the interested reader to further sources. Additionally, we provide practitioners with R packages containing useful implementations of the presented regression routines. We complement the theory sections with corresponding empirical results in Section 4, throughout which we perform the same Monte Carlo approximation task to make the performance of the various methods comparable. We measure the approximation quality of the resulting proxy functions by means of aggregated validation figures on three out-of-sample test sets.
Conceivable alternatives to the entire adaptive algorithm are other typical machine learning techniques such as artificial neural networks (ANNs), decision tree learning or support vector machines. In particular, the classical feed-forward networks proposed by Hejazi and Jackson (2017) and applied in various ways by Kopczyk (2018), Castellani et al. (2018), Born (2018) and Schelthoff (2019) were shown to capture the complex nature of CFP models well. A major challenge here is not only finding reliable hyperparameters such as the numbers of hidden layers and nodes in the network, batch size, weight initializer probability distribution, learning rate or activation functions, but also coping with the high dependence on the random seeds. We plan to contribute to this in a further publication dedicated to hyperparameter search algorithms and stabilization methods such as ensemble methods. As an alternative to feed-forward networks, Kazimov (2018) suggested using radial basis function networks, albeit so far none of the tested approaches has performed better than the ordinary least-squares regression in Krah et al. (2018).
In decision tree learning, random forests and tree-based gradient boosting machines were considered by Kopczyk (2018) and Schoenenwald (2019). Random forests were outperformed by feed-forward networks but did better than the least absolute shrinkage and selection operator (LASSO) by Tibshirani (1996) in the example of the former author, and they generally performed worse than the adaptive approaches by Krah et al. (2018) with OLS regression in numerous examples of the latter author. The gradient boosting machines, requiring more parameter tuning and thus being more versatile and demanding, overall came very close to the adaptive approaches.
Castellani et al. (2018) compared support vector regression (SVR) by Drucker et al. (1997) to ANNs and the adaptive approaches by Teuguia et al. (2014) in a seven risk factor example and found the performance of SVR placed somewhere in between the other two approaches, with the ANNs getting closest to the nested simulations benchmark. As further non-parametric approaches, Sell (2019) tested least-squares support-vector machines (LS-SVM) by Suykens and Vandewalle (1999) and shrunk additive least-squares approximations (SALSA) by Kandasamy and Yu (2016) in comparison to ANNs and the adaptive approaches by Krah et al. (2018) with OLS regression. In his examples, SALSA was able to beat the other two approaches, whereas LS-SVM was left far behind. The analyzed machine learning alternatives have in common that they require, at least to some degree, fine-tuning of model hyperparameters. Since this is often a non-trivial but crucial task for generating suitable proxy functions, finding efficient and reliable search algorithms should become a subject of future research.

2. Calibration and Validation in the LSMC Framework

2.1. Fitting and Validation Points

2.1.1. Outer Scenarios and Inner Simulations

Our starting point is the LSMC approach (Krah et al. (2018)). LSMC proxy functions are calibrated conditional on the fitting points generated by the Monte Carlo simulations of the CFP model. Additional out-of-sample validation points serve as a means of assessing the goodness-of-fit. The explanatory variables of a proxy function are the financial and actuarial risks the insurance company is exposed to. Examples of these risks are changes in interest rates, equity, credit, mortality, morbidity, lapse and expense levels over the one-year period. The dependent variable is an economic variable such as the available capital, the loss of available capital or the best estimate of liabilities over the one-year period. Figure 1 plots the fitting values of an exemplary economic variable with respect to a financial risk factor. By an outer scenario we refer to a specific realized stress level combination of these risk factors over one year, and by an inner simulation to a stochastic path of an outer scenario in the CFP model under the given risk-neutral probability measure. Each outer scenario is assigned the probability-weighted mean value of the economic variable over the corresponding inner simulations. In the LSMC context, the fitting values are the mean values over only a few inner simulations, whereas the validation values are derived as the mean values over many inner simulations.

2.1.2. Different Trade-Off Requirements

According to the law of large numbers, this construction makes the validation values comparably stable while the fitting values are very volatile. Typically, the very limited fitting and validation simulation budgets are of similar sizes. Hence the few inner simulations in the case of the fitting points allow a great diversification among the outer scenarios whereas the many inner simulations in the case of the validation points let the validation values be quite close to their expectations but at the cost of only little diversification among the outer scenarios. These opposite ways to deal with the trade-off between the numbers of outer scenarios and inner simulations reflect the different requirements for the fitting and validation points in the LSMC approach. While the fitting scenarios should cover the domain of the real-world scenarios well to serve as a good regression basis, the validation values should approximate the expectations of the economic variable at the validation scenarios well to provide appropriate target values for the proxy functions.

2.2. Calibration Algorithm

2.2.1. Five Major Components

The calibration of the proxy function is performed by an adaptive algorithm that can be decomposed into the following five major components: (1) a set of allowed basis function types for the proxy function, (2) a regression method, (3) a model selection criterion, (4) a candidate term update principle, and (5) the number of steps per iteration and the directions of the algorithm. For illustration, we adopt the flowchart of the adaptive algorithm from Krah et al. (2018) and depict it in Figure 2. While components (1) and (5) enter the flowchart implicitly through the start proxy, candidate terms and the order of the processes and decisions in the chart, components (2), (3) and (4) are explicitly indicated through the labels “Regression”, “Model Selection Criterion” and “Get Candidate Terms”.
Let us briefly recapitulate the choice of components (1)–(5) from the successful applications of the adaptive algorithm in the insurance industry as described in Krah et al. (2018). As the function types for the basis functions (1), let only monomials be allowed. Let the regression method (2) be ordinary least-squares (OLS) regression and the model selection criterion (3) Akaike information criterion (AIC) from Akaike (1973). Let the set of candidate terms (4) be updated by the principle of marginality to which we will return in greater detail below. Lastly, when building up the proxy function iteratively, let the algorithm make only one step per iteration in the forward direction (5) meaning that in each iteration exactly one basis function is selected which cannot be removed anymore (adaptive forward stepwise selection).

2.2.2. Iterative Procedure

The algorithm starts in the upper left side of Figure 2 with the specification of the start proxy basis functions. We specify only the intercept so that the first regression ($k = 0$) reduces to averaging over all fitting values. In order to harmonize the choices of OLS regression and AIC, we assume that the errors are normally distributed and homoscedastic, because then the OLS estimator coincides with the maximum likelihood estimator. AIC is a relative measure of the goodness-of-fit of the proxy function and is defined as twice the negative of the maximum log-likelihood plus twice the number of degrees of freedom. The smaller the AIC score, the better the model resolves the trade-off between being too complex (overfitting) and too simple (underfitting).
At the beginning of each iteration ($k = 1, \dots, K-1$), the set of candidate terms is updated by the principle of marginality, which stipulates that a monomial basis function becomes a candidate if and only if all its derivatives are already included in the proxy function. The choice of a monomial basis is compatible with the principle of marginality. Using such a principle saves computational costs by selecting the basis functions conditionally on the current proxy function structure. In the first iteration ($k = 1$), all linear monomials of the risk factors become candidates, as their derivatives are constant values which are represented by the intercept.
The algorithm proceeds on the lower left side of the flowchart with a loop in which all candidate terms are separately added to the proxy function structure and tested with regard to their additional explanatory power. With each candidate, the fitting values are regressed against the fitting scenarios and the AIC score is calculated. If no candidate reduces the currently smallest AIC score, the algorithm terminates; otherwise, the proxy function is updated by the candidate which reduces AIC most. Then the next iteration ($k + 1$) begins with the update of the set of candidate terms, and so on. As long as no termination occurs, this procedure is repeated until the prespecified maximum number of terms $K_{\max}$ is reached.
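To make this procedure concrete, the following minimal R sketch implements the adaptive forward stepwise selection with OLS regression and AIC. The data frame fit_data (fitting scenarios and fitting values y) and the helper get_candidates(), which would implement the principle of marginality, are hypothetical ingredients assumed to be given, not part of the published framework.

```r
## Minimal sketch of the adaptive forward stepwise selection (components
## (1)-(5)) with OLS regression and AIC; fit_data and get_candidates()
## are hypothetical.
adaptive_forward <- function(fit_data, get_candidates, K_max = 150) {
  terms_sel <- character(0)                # start proxy: intercept only
  fit <- lm(y ~ 1, data = fit_data)        # k = 0: average of all fitting values
  for (k in seq_len(K_max - 1)) {
    cands <- get_candidates(terms_sel)     # principle-of-marginality update
    if (!length(cands)) break
    aics <- sapply(cands, function(term)   # test each candidate separately
      AIC(update(fit, as.formula(paste(". ~ . +", term)))))
    if (min(aics) >= AIC(fit)) break       # no candidate reduces AIC: stop
    best <- cands[which.min(aics)]         # one forward step per iteration
    fit <- update(fit, as.formula(paste(". ~ . +", best)))
    terms_sel <- c(terms_sel, best)
  }
  fit                                      # final lm proxy function
}
## candidate terms would be monomial strings such as "X1", "I(X1^2)", "I(X1*X2)"
```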

2.3. Validation Figures

2.3.1. Validation Sets

Since it is the objective of this paper to propose suitable regression methods for the proxy function calibration in the LSMC framework, we introduce several validation figures serving as indicators for the approximation quality of the proxy functions. We measure the out-of-sample performance of each proxy function on three different validation sets by calculating five validation figures per set.
The three validation sets are a Sobol set, a nested simulations set and a capital region set. Unlike the Sobol set, the nested simulations and capital region sets do not serve as feasible validation sets in the LSMC routine, as they become known only after evaluating the proxy function, as explained below. Furthermore, they require massive computational capacities. Yet they can be regarded as the natural benchmark for the LSMC-based method and are thus very valuable for this analysis. Figure 3 plots the nested simulation values of an exemplary economic variable with respect to a financial risk factor. The Sobol set consists of, for example, between $L = 15$ and $L = 200$ Sobol validation points, whose scenarios follow a Sobol sequence covering the fitting space uniformly. Thereby, the fitting space is the cube on which the outer fitting scenarios are defined. It has to cover the space of real-world scenarios used for the full loss distribution forecast sufficiently well. For interpretive reasons, the Sobol set is sometimes extended by points with, for example, one-dimensional risk scenarios or scenarios producing a risk capital close to the SCR (= 99.5% value-at-risk) in previous risk capital calculations.
The nested simulations set comprises, for example, $L = 820$ to $L = 6554$ validation points whose scenarios correspond to, for example, the highest 2.5% to 5% losses from the full loss distribution forecast made by the proxy function that had been derived under the standard calibration algorithm choices described in Section 2.2. As in the example of Chapter 5.2 in Krah et al. (2018), the order of these losses (which scenarios lead to which quantiles?) following from the fourth and last step of the LSMC approach is very similar to the order following from the nested simulations approach. Therefore, the scenarios of the nested simulations set are simply chosen by the order of the losses resulting from the LSMC approach. Several of these scenarios consist of stresses falling outside of the fitting space. Compare Figure 1 and Figure 3, which depict fitting and nested simulation values from the same proxy modeling task with respect to the same risk factor. Severe outliers due to extreme stresses far outside of the fitting space should be excluded from the set. The capital region set is a subset of the nested simulations set containing the nested simulations SCR estimate, that is, the scenario leading to the 99.5% loss, and the, for example, 64 losses above and below it, which makes in total, for example, $L = 129$ validation points.

2.3.2. Validation Figures

The five validation figures reported in our numerical experiments comprise two normalized mean absolute errors (MAEs), one with respect to the magnitude of the economic variable itself and one with respect to the magnitude of the corresponding market value of assets. They further comprise the mean error, that is, the mean of the residuals, as well as two validation figures based on the change of the economic variable from its base value (see the definition of the base value below): the normalized MAE with respect to the magnitude of the changes and the mean error of these changes. The smaller the normalized MAEs are, the better the proxy function approximates the economic variable. However, the validation values are afflicted with Monte Carlo errors, so the normalized MAEs serve only as meaningful indicators as long as the proxy functions do not become too precise. The means of the residuals should be as close to zero as possible since they indicate systematic deviations of the proxy functions from the validation values. While the first three validation figures measure how well the proxy function reflects the economic variable in the CFP model, the latter two address the approximation effects on the SCR, compare Chapter 3.4.1 of Krah et al. (2018).
Let us write the absolute value as $|\cdot|$ and let $L$ denote the number of validation points. Then we can express the MAE of the proxy function $\hat{f}(x_i)$ evaluated at the validation scenarios $x_i$ versus the validation values $y_i$ as $\frac{1}{L} \sum_{i=1}^{L} |y_i - \hat{f}(x_i)|$. After normalizing the MAE with respect to the mean of the absolute values of the economic variable or the market value of assets, that is, $\frac{1}{L} \sum_{i=1}^{L} |d_i|$ with $d_i \in \{y_i, a_i\}$, we obtain the first two validation figures, that is,

$$\mathrm{mae} = \frac{\sum_{i=1}^{L} |y_i - \hat{f}(x_i)|}{\sum_{i=1}^{L} |d_i|}. \qquad (1)$$

In the following, we will refer to (1) with $d_i = y_i$ as the MAE with respect to the relative metric, and to (1) with $d_i = a_i$ as the MAE with respect to the asset metric. The mean of the residuals is given by

$$\mathrm{res} = \frac{1}{L} \sum_{i=1}^{L} \left( y_i - \hat{f}(x_i) \right). \qquad (2)$$

Let us refer by the base value $y^0$ to the validation value corresponding to the base scenario $x^0$, in which no risk factor has an effect on the economic variable. In analogy to (1), but only with respect to the relative metric, we introduce another normalized MAE by

$$\mathrm{mae}^0 = \frac{\sum_{i=1}^{L} \left| (y_i - y^0) - (\hat{f}(x_i) - \hat{f}(x^0)) \right|}{\sum_{i=1}^{L} |y_i - y^0|}. \qquad (3)$$

The mean of the corresponding residuals is given by

$$\mathrm{res}^0 = \frac{1}{L} \sum_{i=1}^{L} \left( (y_i - y^0) - (\hat{f}(x_i) - \hat{f}(x^0)) \right). \qquad (4)$$

In addition to these five validation figures, let us define the base residual, which can be used as a substitute for (4) depending on personal taste. The base residual can easily be extracted from (2) and (4) by

$$\mathrm{res}^{\mathrm{base}} = y^0 - \hat{f}(x^0) = \mathrm{res} - \mathrm{res}^0. \qquad (5)$$
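For illustration, the five figures and the base residual translate directly into a few lines of R; this is a minimal sketch assuming the validation values y, the proxy predictions y_hat, the normalization values d and the base quantities y0, y0_hat are given as plain vectors (all names are hypothetical).

```r
## Sketch of the validation figures (1)-(5); all argument names are hypothetical.
validation_figures <- function(y, y_hat, d, y0, y0_hat) {
  res_i  <- y - y_hat                               # raw residuals
  res0_i <- (y - y0) - (y_hat - y0_hat)             # residuals of the changes
  list(
    mae      = sum(abs(res_i)) / sum(abs(d)),       # (1), d = y or d = a
    res      = mean(res_i),                         # (2)
    mae0     = sum(abs(res0_i)) / sum(abs(y - y0)), # (3)
    res0     = mean(res0_i),                        # (4)
    res_base = y0 - y0_hat                          # (5) = res - res0
  )
}
```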

3. Machine Learning Regression Methods

3.1. General Remarks

As the main part of our work, we will compare various types of machine learning regression approaches for determining suitable proxy functions in the LSMC framework. The methods we present in this section range from ordinary and generalized least-squares regression variants over GLM and GAM approaches to multivariate adaptive regression splines and kernel regression approaches.
The performance of the newly derived proxy functions when applied to the described validation sets is one way of comparing the different methods. Another way consists of ensuring compatibility with the principle of marginality and utilizing a suitable model selection criterion such as AIC in order to be able to compare iteration-wise the candidate models inside the approaches.
In the following sections, we briefly introduce the different methods, collect some theoretical properties and then concentrate on aspects of their implementation. Their numerical performance on the different validation sets is the subject of Section 4.
Our aim in the calibration step below is to estimate the conditional expectation $Y(X)$ under the risk-neutral measure given an outer scenario $X$. In contrast to Krah et al. (2018), $Y(X)$ does not necessarily have to be the available capital but can instead be, for example, the best estimate of liabilities or the market value of assets. The $D$-dimensional fitting scenarios are always generated under the physical probability measure $\mathbb{P}$ on the fitting space, which itself is a subspace of $\mathbb{R}^D$.

3.2. Ordinary Least-Squares (OLS) Regression

3.2.1. The Regression Model

In iteration $K-1$ of the adaptive forward stepwise algorithm (as given in Section 2.2), the OLS approximation consists of a linear combination of suitable linearly independent basis functions $e_k(X) \in \mathcal{L}^2(\mathbb{R}^D, \mathcal{B}, \mathbb{P})$, $k = 0, 1, \dots, K-1$, that is,

$$Y(X) \overset{K<\infty}{\approx} f(X) = \sum_{k=0}^{K-1} \beta_k e_k(X). \qquad (6)$$

We call $f(X)$ the predictor of $Y(X)$ or the systematic component.
With the fitting points $(x_i, y_i)$, $i = 1, \dots, N$, and uncorrelated errors $\epsilon_i$ (the random components) having the same variance $\sigma^2 > 0$ (= homoscedastic errors), we obtain the classical linear regression model

$$y_i = \sum_{k=0}^{K-1} \beta_k e_k(x_i) + \epsilon_i, \qquad (7)$$

where $e_0(x_i) = 1$ and $\beta_0$ is the intercept. Then, the ordinary least-squares (OLS) estimator $\hat{\beta}^{\mathrm{OLS}}$ of the coefficients is given by

$$\hat{\beta}^{\mathrm{OLS}} = \underset{\beta \in \mathbb{R}^K}{\arg\min} \sum_{i=1}^{N} \left( y_i - \sum_{k=0}^{K-1} \beta_k e_k(x_i) \right)^2. \qquad (8)$$

Using the notation $z_{ik} = e_k(x_i)$ for the elements of the design matrix $Z$, the OLS problem is solved explicitly by

$$\hat{\beta}^{\mathrm{OLS}} = (Z^T Z)^{-1} Z^T y. \qquad (9)$$

The proxy function $\hat{f}(X)$ for the economic variable $Y(X)$ given an outer scenario $X$ is

$$Y(X) \overset{K,N<\infty}{\approx} \hat{f}(X) = \sum_{k=0}^{K-1} \hat{\beta}^{\mathrm{OLS}}_k e_k(X). \qquad (10)$$
For a practical implementation see, for example, function lm( · ) in the R package stats of R Core Team (2018).
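As a quick illustration of the closed-form solution (9), the following R sketch computes the OLS estimator by hand on hypothetical data with two risk factors and checks it against lm(); the data-generating process is made up for the example.

```r
## OLS estimator (9) by hand versus lm(); data are hypothetical.
set.seed(1)
N <- 1000
X1 <- runif(N, -1, 1); X2 <- runif(N, -1, 1)       # outer fitting scenarios
y  <- 2 + 0.5 * X1 - 1.5 * X2 + 0.8 * X1^2 +       # economic variable plus
      rnorm(N, sd = 5)                             # volatile Monte Carlo noise
Z <- cbind(1, X1, X2, X1^2)                        # design matrix, z_ik = e_k(x_i)
beta_ols <- solve(t(Z) %*% Z, t(Z) %*% y)          # (Z'Z)^{-1} Z'y
fit <- lm(y ~ X1 + X2 + I(X1^2))                   # same model via lm()
all.equal(as.numeric(beta_ols), unname(coef(fit))) # TRUE up to tolerance
```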

3.2.2. Gauss-Markov Theorem, ML Estimation and AIC

Under the assumptions of strict exogeneity $E[\epsilon | Z] = 0$ (A1), a spherical error variance $\mathbb{V}[\epsilon | Z] = \sigma^2 I_N$ with $I_N$ the $N$-dimensional identity matrix (A2), and linearly independent basis functions (A3), we have (compare, for example, Hayashi (2000)):
  • The OLS estimator is the best linear unbiased estimator (BLUE) of the coefficients in the classical linear regression model (7) (Gauss-Markov theorem).
  • If the errors $\epsilon$ in (7) are in addition normally distributed (A4), then the OLS estimator and the maximum likelihood (ML) estimator of the coefficients coincide.
  • Under Assumptions (A1)-(A4), the Akaike information criterion (AIC) has the form

$$\mathrm{AIC} = -2 l(\hat{\beta}^{\mathrm{OLS}}, \hat{\sigma}^2) + 2(K + 1) = N \left( \log(2\pi\hat{\sigma}^2) + 1 \right) + 2(K + 1). \qquad (11)$$

3.3. Generalized Linear Models (GLMs)

3.3.1. The Regression Model

The systematic component of a GLM (see Nelder and Wedderburn (1972) for its introduction) equals the linear predictor $\eta = f(X)$ of the model in (6). However, one uses a monotonic link function $g(\cdot)$ that relates the economic variable $Y(X)$ to the linear predictor via

$$g(\underbrace{Y(X)}_{=\mu}) \overset{K<\infty}{\approx} f(X) = \eta = \sum_{k=0}^{K-1} \beta_k z_k = z^T \beta, \qquad (12)$$

with $z = (e_0(X), \dots, e_{K-1}(X))^T$.
Of course, the choice of the link function $g(\cdot)$ is a critical aspect. A possible motivation is a non-negativity requirement on $Y(X)$ that can be satisfied using $g(y) = \ln(y)$. Further choices of the link function are motivated below.

3.3.2. Canonical Link Function, GLM Estimation and IRLS Algorithm

While the normal distribution assumption for the random component allowed the derivation of nice properties in the linear model of the preceding section, the GLM considers random components with (conditional) distributions from the exponential family. Its canonical form with parameter $\theta$ is given by the density function

$$\pi(y | \theta, \phi) = \exp\left( \frac{y\theta - b(\theta)}{a(\phi)} + c(y, \phi) \right), \qquad (13)$$

where $a(\phi)$, $b(\theta)$ and $c(y, \phi)$ are specific functions. For example, a normally distributed economic variable with mean $\mu$ and variance $\sigma^2$ is given by $a(\phi) = \phi$, $b(\theta) = \frac{\theta^2}{2}$ and $c(y, \phi) = -\frac{1}{2} \left( \frac{y^2}{\sigma^2} + \log(2\pi\sigma^2) \right)$ with $\theta = \mu$ and $\phi = \sigma^2$.
For a random variable $Y$ with a distribution from the exponential family, we have

$$E(Y) = \mu = b'(\theta), \qquad \mathrm{Var}(Y) = b''(\theta) \, a(\phi) =: V(\mu) \, a(\phi). \qquad (14)$$

Here $a(\phi)$ is called a dispersion parameter and $V(\cdot)$ the variance function. We will in the following make the simplifying assumption $a(\phi_i) = \phi$, $i = 1, \dots, N$, for a constant value of $\phi$ (A5) and then obtain the ML estimator in the GLM from Equation (13) as

$$\hat{\beta}^{\mathrm{GLM}} = \underset{\beta \in \mathbb{R}^K}{\arg\max} \sum_{i=1}^{N} \left( \frac{y_i \theta_i - b(\theta_i)}{\phi} + c(y_i, \phi) \right). \qquad (15)$$

Under (A5), there does in general not exist a closed-form solution for the GLM coefficient estimator (15). The resulting iterative method will be simplified for so-called canonical link functions $g(\mu) = \theta$, which due to relation (14) are given by

$$g(\mu) = (b')^{-1}(\mu), \qquad (16)$$

with $b(\cdot)$ from the definition of the exponential family. Examples of pairs of canonical link functions and corresponding distributions are $g(\mu) = \mu$ and the normal, $g(\mu) = 1/\mu$ and the gamma, and $g(\mu) = 1/\mu^2$ and the inverse Gaussian distribution.
In Chapter 2.5, McCullagh and Nelder (1989) apply Fisher’s scoring method to obtain an approximation to the GLM estimator. Further, McCullagh and Nelder (1989) justify how Fisher’s scoring method can be cast in the form of the iteratively reweighted least squares (IRLS) algorithm. To state the IRLS algorithm in our context, we need some notation.
Let $\hat{\eta}^{(t)}_i = \hat{f}(x_i)$ be the estimate for the linear predictor evaluated at fitting scenario $x_i$ in iteration $t$, compare (12). Let $\hat{\mu}^{(t)}_i = g^{-1}(\hat{\eta}^{(t)}_i)$ be the estimate for the economic variable, and $\frac{d\eta}{d\mu}(\hat{\mu}^{(t)}_i) = g'(\hat{\mu}^{(t)}_i)$ the first derivative of the link function with respect to the economic variable evaluated at $\hat{\mu}^{(t)}_i$. Furthermore, we introduce the weight matrix $W^{(t)} = \mathrm{diag}(w_1(\hat{\beta}^{(t)}), \dots, w_N(\hat{\beta}^{(t)}))$ with components given by

$$w_i(\hat{\beta}^{(t)}) = \left[ \left( \frac{d\eta}{d\mu}(\hat{\mu}^{(t)}_i) \right)^2 V(\hat{\mu}^{(t)}_i) \right]^{-1}, \qquad (17)$$

with $V(\hat{\mu}^{(t)}_i)$ the variance function from above evaluated at $\hat{\mu}^{(t)}_i$. Finally, we define $D^{(t)} = \mathrm{diag}(d^{(t)}_1, \dots, d^{(t)}_N)$ with $d^{(t)}_i = g'(\hat{\mu}^{(t)}_i)$, which allows us to formulate the IRLS algorithm for canonical link functions.
IRLS algorithm.
Perform the iterative approximation procedure below with an initialization of $\hat{\mu}^{(0)}_i = y_i + 0.1$ and $\hat{\eta}^{(0)}_i = g(\hat{\mu}^{(0)}_i)$ as proposed by Dutang (2017) until convergence:

$$\hat{\beta}^{(t+1)} = (Z^T W^{(t)} Z)^{-1} Z^T W^{(t)} \hat{s}^{(t)}(\hat{\beta}^{(t)}), \qquad (18)$$

$$\hat{s}^{(t)}(\hat{\beta}^{(t)}) = Z \hat{\beta}^{(t)} + D^{(t)} (y - \hat{\mu}^{(t)}). \qquad (19)$$

After convergence, we set $\hat{\beta}^{\mathrm{GLM}} = \hat{\beta}^{(t+1)}$.
Green (1984) proposes to solve the system $Z^T W^{(t)} Z \hat{\beta}^{(t+1)} = Z^T W^{(t)} \hat{s}^{(t)}$, which is equivalent to (18), via a QR decomposition to increase numerical stability. For a practical implementation of GLMs using the IRLS algorithm, see, for example, function glm( · ) in R package stats of R Core Team (2018).
By inserting (17), (19) and the GLM estimator into (18) and by using (12), we obtain

$$\hat{\beta}^{\mathrm{GLM}} = \underset{\beta \in \mathbb{R}^K}{\arg\min} \sum_{i=1}^{N} \frac{\left( y_i - \hat{\mu}^{\mathrm{GLM}}_i \right)^2}{V(\hat{\mu}^{\mathrm{GLM}}_i)}, \qquad (20)$$

that is, the GLM estimator minimizes the squared sum of raw residuals scaled by the estimated individual variances of the economic variable.
The Pearson residuals are defined as the raw residuals divided by the estimated individual standard deviations, that is,

$$\hat{\epsilon}_i = \frac{y_i - \hat{\mu}^{\mathrm{GLM}}_i}{\sqrt{V(\hat{\mu}^{\mathrm{GLM}}_i)}}. \qquad (21)$$

3.3.3. AIC and Dispersion Estimation

Since AIC depends on the ML estimators, it is combinable with GLMs in the adaptive algorithm. Here, it has the form
$$\mathrm{AIC} = -2 l(\hat{\beta}^{\mathrm{GLM}}, \hat{\phi}) + 2(K + p), \qquad (22)$$

where $K$ is the number of coefficients and $p$ indicates the number of additional model parameters associated with the distribution of the random component. For instance, in the normal model, we have $p = 1$ due to the error variance/dispersion. A typical estimate of the dispersion in GLMs is the Pearson residual chi-squared statistic divided by $N - K$, as described by Zuur et al. (2009) and implemented, for example, in function glm( · ) belonging to R package stats, that is,

$$\hat{\phi} = \frac{1}{N - K} \sum_{i=1}^{N} \hat{\epsilon}_i^2, \qquad (23)$$

with $\hat{\epsilon}_i$ given by (21). Even though this is not the ML estimator, it is a good estimate because, if the model is specified correctly, the Pearson residual chi-squared statistic divided by the dispersion is asymptotically $\chi^2_{N-K}$ distributed, and the expected value of a chi-squared distribution with $N - K$ degrees of freedom is $N - K$.
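The following R sketch illustrates these quantities on hypothetical gamma-distributed data with the reciprocal link: glm() runs the IRLS algorithm internally, and the Pearson dispersion estimate (23) is recomputed by hand.

```r
## GLM fit via IRLS and Pearson dispersion estimate (23); data are hypothetical.
set.seed(1)
N <- 1000
X1 <- runif(N); X2 <- runif(N)
mu <- 1 / (0.5 + 0.3 * X1 + 0.2 * X2)          # reciprocal link: g(mu) = 1/mu
y  <- rgamma(N, shape = 10, rate = 10 / mu)    # gamma response with E(y) = mu
fit <- glm(y ~ X1 + X2, family = Gamma(link = "inverse"))
eps <- residuals(fit, type = "pearson")        # (y - mu_hat)/sqrt(V(mu_hat)), cf. (21)
phi_hat <- sum(eps^2) / df.residual(fit)       # (23): Pearson statistic / (N - K)
c(phi_hat, summary(fit)$dispersion)            # both give the same estimate
```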

3.4. Generalized Additive Models (GAMs)

3.4.1. The Regression Model

Generalized additive models (GAMs) as introduced by Hastie and Tibshirani (1986) and Hastie and Tibshirani (1990) can be regarded as richly parameterized GLMs with smooth functions. While GAMs inherit from GLMs the random component (13) and the link function (12), they inherit from the additive models of Friedman and Stuetzle (1981) the linear predictor with the smooth functions. In the adaptive algorithm, we apply GAMs of the form
$$g(\underbrace{Y(X)}_{=\mu}) \overset{K<\infty}{\approx} f(X) = \eta = \beta_0 + \sum_{k=1}^{K-1} h_k(z_k), \qquad (24)$$

where $z_k = e_k(X)$, $\beta_0$ is the intercept and $h_k(\cdot)$, $k = 1, \dots, K-1$, are the smooth functions to be estimated. In addition to the smooth functions, GAMs can also include simple linear terms of the basis functions as they appear in the linear predictor of GLMs. A smooth function $h_k(\cdot)$ can be written as a basis expansion

$$h_k(z_k) = \sum_{j=1}^{J} \beta_{kj} b_{kj}(z_k), \qquad (25)$$

with coefficients $\beta_{kj}$ and known basis functions $b_{kj}(z_k)$, $j = 1, \dots, J$, which should not be confused with their arguments, namely the first-order basis functions $z_k = e_k(X)$, $k = 0, \dots, K-1$. The slightly adapted Figure 4 from Wood (2006) depicts an exemplary approximation of $y$ by a GAM with a basis expansion in one dimension $z_k$ without an intercept. The solid colorful curves represent the pure basis functions $b_{kj}(z_k)$, $j = 1, \dots, J$, the dashed colorful curves show them after scaling with the coefficients, $\beta_{kj} b_{kj}(z_k)$, $j = 1, \dots, J$, and the black curve is their sum (25).
Typical examples for the basis functions are thin plate regression splines, Duchon splines, cubic regression splines or Eilers and Marx style P-splines. See, for example, function gam( · ) in R package mgcv of Wood (2018) for a practical implementation of GAMs admitting these types of basis functions and using the PIRLS algorithm, which we present below.
In vector notation, we can write $\beta = (\beta_0, \beta_1^T, \dots, \beta_{K-1}^T)^T$ with $\beta_k = (\beta_{k1}, \dots, \beta_{kJ})^T$ and $a = (1, b_1(z_1)^T, \dots, b_{K-1}(z_{K-1})^T)^T$ with $b_k(z_k) = (b_{k1}(z_k), \dots, b_{kJ}(z_k))^T$, hence (24) becomes

$$g(\underbrace{Y(X)}_{=\mu}) \overset{K<\infty}{\approx} f(X) = \eta = a^T \beta. \qquad (26)$$

In order to make the smooth functions $h_k(\cdot)$, $k = 1, \dots, K-1$, identifiable, identifiability constraints $\sum_{i=1}^{N} h_k(z_{ik}) = 0$ with $z_{ik} = e_k(x_i)$ can be imposed. According to Wood (2006), this can be achieved by a modification of the basis functions $b_{kj}(\cdot)$ in which one of them is lost.

3.4.2. Penalization and GAM Estimation via PIRLS Algorithm

Let the deviance corresponding to observation $y_i$ be $D_i(\beta) = 2 \left( l^{\mathrm{sat}}_i - l_i(\beta, \phi) \right) \phi$, where $D_i(\beta)$ is independent of the dispersion $\phi$, $l^{\mathrm{sat}}_i = \max_{\beta_i} l_i(\beta_i, \phi)$ is the saturated log-likelihood and $l_i(\beta, \phi)$ the log-likelihood. Then the model deviance can be written as $D(\beta) = \sum_{i=1}^{N} D_i(\beta)$. It is a generalization of the residual sum of squares for ML estimation. For instance, in the normal model the unit deviance is $(y_i - \mu_i)^2$. For given smoothing parameters $\lambda_k > 0$, $k = 1, \dots, K-1$, the GAM estimator $\hat{\beta}^{\mathrm{GAM}}$ of the coefficients is defined as the minimizer of the penalized deviance

$$\hat{\beta}^{\mathrm{GAM}} = \underset{\beta \in \mathbb{R}^{(K-1)J+1}}{\arg\min} \; D(\beta) + \sum_{k=1}^{K-1} \lambda_k \int h_k''(z_k)^2 \, dz_k, \quad \text{where} \quad \int h_k''(z_k)^2 \, dz_k = \beta_k^T \left( \int b_k''(z_k) \, b_k''(z_k)^T \, dz_k \right) \beta_k = \beta_k^T S_k \beta_k \qquad (27)$$

are the smoothing penalties. The smoothing parameters $\lambda_k$ control the trade-off between a too wiggly model (overfitting) and a too smooth model (underfitting). The larger the $\lambda_k$ values are, the more strongly the wiggliness of the basis functions, reflected by their second derivatives in the minimization problem (27), is penalized; the coefficients are thus shrunk more heavily and the estimated model becomes smoother.
A major advantage of the definition of GAMs via (24), (25), and (27) is its compatibility with information criteria and other model selection criteria such as generalized cross-validation. Besides, the resulting penalty matrix favors numerical stability in the PIRLS algorithm.
Since the saturated log-likelihood is a constant for a fixed distribution and set of fitting points, we can turn the minimization problem (27) into the maximization task of the penalized log-likelihood, that is,

$$\hat{\beta}^{\mathrm{GAM}} = \underset{\beta \in \mathbb{R}^{(K-1)J+1}}{\arg\max} \; l(\beta, \phi) - \frac{1}{2} \sum_{k=1}^{K-1} \lambda_k \beta_k^T S_k \beta_k. \qquad (28)$$
Wood (2000) points out that Fisher’s scoring method can be cast in a penalized version of the iteratively reweighted least squares (PIRLS) algorithm when being used to approximate the GAM coefficient estimator (28). We formulate the PIRLS algorithm based on Marx and Eilers (1998) who indicate the iterative solution explicitly.
Let $\hat{\beta}^{(t)}$ now be the GAM coefficient approximation in iteration $t$. Then the vector of the dependent variable $\hat{s}^{(t)} = (\hat{s}_1(\hat{\beta}^{(t)}), \dots, \hat{s}_N(\hat{\beta}^{(t)}))^T$ and the weight matrix $W^{(t)} = \mathrm{diag}(w_1(\hat{\beta}^{(t)}), \dots, w_N(\hat{\beta}^{(t)}))$ have the same form as in the IRLS algorithm, see (19) and (17). Additionally, let $S = \mathrm{blockdiag}(0, \lambda_1 S_1, \dots, \lambda_{K-1} S_{K-1})$, with $S_{11} = 0$ belonging to the intercept, be the penalty matrix.
PIRLS algorithm.
Perform the iterative approximation procedure below with an initialization of $\hat{\mu}^{(0)}_i = y_i + 0.1$ and $\hat{\eta}^{(0)}_i = g(\hat{\mu}^{(0)}_i)$ until convergence occurs:

$$\hat{\beta}^{(t+1)} = \underset{\beta \in \mathbb{R}^{(K-1)J+1}}{\arg\min} \sum_{i=1}^{N} w_i(\hat{\beta}^{(t)}) \left( \hat{s}_i(\hat{\beta}^{(t)}) - \beta_0 - \sum_{k=1}^{K-1} \sum_{j=1}^{J} \beta_{kj} b_{kj}(z_{ik}) \right)^2 + \sum_{k=1}^{K-1} \lambda_k \beta_k^T S_k \beta_k = \left( Z^T W^{(t)} Z + S \right)^{-1} Z^T W^{(t)} \hat{s}^{(t)}. \qquad (29)$$

After convergence, we set $\hat{\beta}^{\mathrm{GAM}} = \hat{\beta}^{(t+1)}$.

3.4.3. Smoothing Parameter Selection, AIC and Stagewise Selection

The smoothing parameters $\lambda_k$ can be selected such that they minimize a suitable model selection criterion, for the sake of consistency, preferably the one used in the adaptive algorithm for basis function selection. The GAM estimator (28) does not exactly maximize the log-likelihood; therefore, AIC has another form for GAMs than for GLMs. Hastie and Tibshirani (1990) propose a widely used version of AIC for GAMs, which uses the effective degrees of freedom $\mathrm{df}$ in place of the number of coefficients $(K-1)J + 1$. This is

$$\mathrm{AIC} = -2 l(\hat{\beta}^{\mathrm{GAM}}, \hat{\phi}) + 2(\mathrm{df} + p), \qquad (30)$$

where

$$\mathrm{df} = \mathrm{tr}\left( (\mathcal{I} + S)^{-1} \mathcal{I} \right), \qquad (31)$$

with $\mathcal{I} = Z^T W Z$. Note that $\mathcal{I} + S = Z^T W Z + S$ is already approximately calculated in the PIRLS algorithm. For GAMs, an estimate of the dispersion $\hat{\phi}$ is obtained similarly to GLMs by (23). The parameter $p$ is defined as in (22).
Another popular and effective smoothing parameter selection criterion, invented by Craven and Wahba (1979), is generalized cross-validation (GCV), that is,

$$\mathrm{GCV} = \frac{N \, D(\hat{\beta}^{\mathrm{GAM}})}{(N - \mathrm{df})^2}, \qquad (32)$$

with the model deviance $D(\hat{\beta}^{\mathrm{GAM}})$ evaluated at the GAM estimator and the effective degrees of freedom defined just as for AIC.
Note that the adaptive forward stepwise algorithm depicted in Figure 2 can become computationally infeasible with GAMs, as opposed to, for example, GLMs. In iteration $k$, a GAM has $(K-1)J + 1$ coefficients which need to be estimated, while a GLM has only $K$ coefficients. This difference in estimation effort is increased further by the iterative nature of the IRLS and PIRLS algorithms. Moreover, GAMs involve the task of optimal smoothing parameter selection. To deal with this aspect, Wood (2000), Wood et al. (2015) and Wood et al. (2017) have developed practical GAM fitting methods for large data sets. However, the suitable application of these methods in the adaptive algorithm is beyond the scope of our analysis, in particular as our focus is not on computational performance. Besides parallelizing the candidate loop on the lower left side of Figure 2, we achieve the necessary performance gains in GAMs by replacing the stepwise algorithm with a stagewise algorithm. This means that in each iteration, a predefined number $L$ or proportion of candidate basis functions is selected simultaneously until a termination criterion is fulfilled. Thereby, we select in one stage those basis functions which reduce the model selection criterion of our choice most when added separately to the current proxy function structure. If fewer candidate basis functions than targeted lead to a reduction in the model selection criterion, the algorithm terminates after those that do have been selected.
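For illustration, the following sketch calibrates a GAM with function gam( · ) of R package mgcv on hypothetical data; the smoothing parameters are selected by the package's default GCV-based criterion, and the effective degrees of freedom enter AIC as in (30).

```r
## GAM calibration sketch with mgcv; data and basis choices are hypothetical.
library(mgcv)
set.seed(1)
N <- 2000
z1 <- runif(N, -1, 1); z2 <- runif(N, -1, 1)        # first-order basis functions
y  <- sin(3 * z1) + z2^2 + rnorm(N, sd = 2)         # volatile fitting values
fit <- gam(y ~ s(z1, bs = "cr") + s(z2, bs = "cr")) # cubic regression spline smooths
fit$sp                                              # selected smoothing parameters
sum(fit$edf)                                        # effective degrees of freedom df
AIC(fit)                                            # AIC based on effective df
```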

3.5. Feasible Generalized Least-Squares (FGLS) Regression

3.5.1. The Regression Model

The regression model here equals the OLS case. However, we now let the errors have the covariance matrix $\Sigma = \sigma^2 \Omega$, where $\Omega$ is positive definite and known and $\sigma^2 > 0$ is unknown. We transform the generalized regression model according to Hayashi (2000) to obtain a model (*) which satisfies Assumptions (A1), (A2) and (A3) of the classical linear regression model. For this, choose an invertible matrix $H$ with $\Omega^{-1} = H^T H$, which can, for example, be obtained from a Cholesky decomposition. Then, the generalized response vector $y^*$, design matrix $Z^*$ and error vector $\epsilon^*$ are given by

$$y^* = H y, \qquad Z^* = H Z, \qquad \epsilon^* = y^* - Z^* \beta = H(y - Z\beta) = H\epsilon. \qquad (33)$$

In analogy to the OLS estimator, the generalized least-squares (GLS) estimator $\hat{\beta}^{\mathrm{GLS}}$ of the coefficients is given as the minimizer of the generalized residual sum of squares, that is,

$$\hat{\beta}^{\mathrm{GLS}} = \underset{\beta \in \mathbb{R}^K}{\arg\min} \sum_{i=1}^{N} \epsilon^{*2}_i. \qquad (34)$$

The closed-form expression of the GLS estimator is

$$\hat{\beta}^{\mathrm{GLS}} = (Z^{*T} Z^*)^{-1} Z^{*T} y^* = (Z^T \Omega^{-1} Z)^{-1} Z^T \Omega^{-1} y, \qquad (35)$$

and the proxy function becomes

$$\hat{f}(X) = z^T \hat{\beta}^{\mathrm{GLS}}, \qquad (36)$$

where $z = (e_0(X), \dots, e_{K-1}(X))^T$. The scalar $\sigma^2$ can be estimated in analogy to OLS regression by $s_{\mathrm{GLS}} = \frac{1}{N-K} \hat{\epsilon}^{*T} \hat{\epsilon}^*$, where $\hat{\epsilon}^* = y^* - Z^* \hat{\beta}^{\mathrm{GLS}}$ is the residual vector.
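A minimal R sketch of the transformation (33) and the closed form (35) for a known diagonal Ω follows; the heteroscedastic toy data are hypothetical, and the equivalent weighted lm() call is shown for comparison.

```r
## GLS via the transformation (33) for a known diagonal Omega; data hypothetical.
set.seed(1)
N <- 500
x <- runif(N)
Z <- cbind(1, x)                              # design matrix
y <- 1 + 3 * x + rnorm(N, sd = exp(x))        # errors with known variances exp(2x)
H <- diag(exp(-x))                            # H'H = Omega^{-1}, Omega = diag(exp(2x))
ys <- H %*% y; Zs <- H %*% Z                  # transformed model (33)
beta_gls <- solve(t(Zs) %*% Zs, t(Zs) %*% ys) # closed form (35)
coef(lm(y ~ x, weights = exp(-2 * x)))        # equivalent weighted lm() call
```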

3.5.2. Gauss-Markov-Aitken Theorem and ML Estimation

Under Assumptions (A1), (A3), and a covariance matrix $\Sigma = \sigma^2 \Omega$ of which $\Omega$ is positive definite and known (A6), we have:
  • The GLS estimator is the BLUE of the coefficients in the generalized regression model (7) (Gauss-Markov-Aitken theorem).
  • If in addition we have jointly normally distributed errors conditional on the fitting scenarios (A7), then the ML coefficient estimator coincides with the GLS estimator. Further, the ML estimator $\hat{\sigma}^2$ of the scalar can be expressed as $\frac{N-K}{N}$ times $s_{\mathrm{GLS}}$.
As a consequence, given a known matrix $\Omega$, we have a closed-form solution for the GLS estimator that coincides with the ML estimator of the regression coefficients, and the adaptive algorithm inside the LSMC approach goes through.

3.5.3. Unknown Ω and FGLS Estimation via ML Algorithm

In the LSMC framework, $\Omega$ is unknown. However, if a consistent estimator $\hat{\Omega}$ exists, we can apply feasible generalized least-squares (FGLS) regression, of which the estimator

$$\hat{\beta}^{\mathrm{FGLS}} = (Z^T \hat{\Omega}^{-1} Z)^{-1} Z^T \hat{\Omega}^{-1} y \qquad (37)$$

has asymptotically the same properties as the GLS estimator (35).
With $z = (e_0(X), \dots, e_{K-1}(X))^T$, the FGLS proxy function is then given as

$$\hat{f}(X) = z^T \hat{\beta}^{\mathrm{FGLS}}. \qquad (38)$$

For the estimation of $\Omega$, we will in the following set $\sigma^2 = 1$, which can be done without loss of generality, and consider $\Sigma = \Omega$. Furthermore, we assume in addition to (A1), (A3) and (A7) that the elements of the covariance matrix $\Sigma$ are twice differentiable functions of parameters $\alpha = (\alpha_0, \dots, \alpha_{M-1})^T$ with $K + M \leq N$. We then write $\Sigma = \Sigma(\alpha)$ (A8). The following result is the basis of the iterative ML algorithm for the regression coefficients and the variance matrix.
Theorem 1.
The generalized regression model (7) under Assumptions (A1), (A3), (A7) and (A8) has the following first-order ML conditions:

$$\hat{\beta}^{\mathrm{ML}} = (Z^T \hat{\Sigma}^{-1} Z)^{-1} Z^T \hat{\Sigma}^{-1} y, \qquad (39)$$

$$\frac{\partial l}{\partial \alpha_m} = -\frac{1}{2} \, \mathrm{tr}\left( \Sigma^{-1} \frac{\partial \Sigma}{\partial \alpha_m} \right) \bigg|_{\alpha = \hat{\alpha}^{\mathrm{ML}}} - \frac{1}{2} \, \hat{\epsilon}^T \frac{\partial \Sigma^{-1}}{\partial \alpha_m} \bigg|_{\alpha = \hat{\alpha}^{\mathrm{ML}}} \hat{\epsilon} = 0, \qquad (40)$$

where $m = 0, \dots, M-1$, $\hat{\Sigma} = \Sigma(\hat{\alpha}^{\mathrm{ML}})$ and $\hat{\epsilon} = y - Z \hat{\beta}^{\mathrm{ML}}$.
The system in (39) and (40) is then solved iteratively (see, for example, Magnus (1978)). We start the procedure with $\beta^{(0)}$ and then use PORT optimization routines as described in Gay (1990) and implemented in function nlminb( · ) belonging to R package stats of R Core Team (2018). In this iterative routine, $\hat{\alpha}^{(t+1)}$ can be initialized, for example, by random numbers from the standard normal distribution.
ML algorithm.
Perform the following iterative approximation procedure with, for example, an initialization of $\hat{\beta}^{(0)} = \hat{\beta}^{\mathrm{OLS}}$ until convergence:
1. Calculate the residual vector $\hat{\epsilon}^{(t+1)} = y - Z \hat{\beta}^{(t)}$.
2. Substitute $\hat{\epsilon}^{(t+1)}$ into the $M$ equations in $M$ unknowns $\alpha_m$ given by (40) and solve them. If an explicit solution exists, set $\hat{\alpha}^{(t+1)} = \alpha(\hat{\epsilon}^{(t+1)})$. Otherwise, find the maximum likelihood solution $\hat{\alpha}^{(t+1)}$ iteratively, for example, by using PORT optimization routines.
3. Calculate

$$\hat{\Sigma}^{(t+1)} = \Sigma(\hat{\alpha}^{(t+1)}), \qquad \hat{\beta}^{(t+1)} = \left( Z^T (\hat{\Sigma}^{(t+1)})^{-1} Z \right)^{-1} Z^T (\hat{\Sigma}^{(t+1)})^{-1} y. \qquad (41)$$

Continue with the next iteration.
After convergence, we set $\hat{\beta}^{\mathrm{ML}} = \hat{\beta}^{(t+1)}$ and $\hat{\alpha}^{\mathrm{ML}} = \hat{\alpha}^{(t+1)}$.
Theorem 5 of Magnus (1978) states that under some further regularity conditions the FGLS coefficient estimator can be derived as the ML coefficient estimator by the ML algorithm under Assumptions (A1), (A3), (A7) and (A8).

3.5.4. Heteroscedasticity, Variance Model Selection and AIC

Besides Assumption (A8) about the structure of the covariance matrix, we assume that the errors are uncorrelated with possibly different variances (= heteroscedastic errors), that is, $\Sigma = \mathrm{diag}(\sigma_1^2, \dots, \sigma_N^2)$. We model each variance $\sigma_i^2$, $i = 1, \dots, N$, by a twice differentiable function of the parameters $\alpha = (\alpha_0, \dots, \alpha_{M-1})^T$ and a suitable set of linearly independent basis functions $e_m(X) \in \mathcal{L}^2(\mathbb{R}^D, \mathcal{B}, \mathbb{P})$, $m = 0, 1, \dots, M-1$, with $v_i = (e_0(x_i), \dots, e_{M-1}(x_i))^T$, that is,

$$\sigma_i^2 = \sigma^2 V(\alpha, v_i), \qquad (42)$$

where $V(\alpha, v_i)$ is referred to as the variance function in analogy to $V(\mu)$ for GLMs and GAMs. Without loss of generality, we again set $\sigma^2 = 1$.
Hartmann (2015) has already applied FGLS regression with different variance models in the LSMC framework. In her numerical examples, variance models with multiplicative heteroscedasticity led to the best performance of the proxy function in the validation. Therefore, we restrict our analysis to these kinds of structures, compare, for example, Harvey (1976), that is,

$$V(\alpha, v_i) = \exp(v_i^T \alpha). \qquad (43)$$

Like the proxy function, the variance function (43) has to be calibrated to apply FGLS regression, which means that the variance function has to be composed of suitable basis functions. Again, such a composition can be found with the aid of a model selection criterion. We still choose AIC but have to account for the fact that in FGLS regression the covariance matrix now contains $M$ unknown parameters instead of only one in the OLS case (the same variance for all observations). Under Assumption (A7), AIC is given as

$$\mathrm{AIC} = -2 l(\hat{\beta}^{\mathrm{FGLS}}, \hat{\Sigma}) + 2(K + M) = N \log(2\pi) + \log(\det \hat{\Sigma}) + (y - Z \hat{\beta}^{\mathrm{FGLS}})^T \hat{\Sigma}^{-1} (y - Z \hat{\beta}^{\mathrm{FGLS}}) + 2(K + M). \qquad (44)$$

When using a variance model with multiplicative heteroscedasticity, AIC becomes

$$\mathrm{AIC} = N \log(2\pi) + \sum_{i=1}^{N} v_i^T \hat{\alpha} + \sum_{i=1}^{N} \exp(-v_i^T \hat{\alpha}) \, \hat{\epsilon}_i^2 + 2(K + M). \qquad (45)$$
As an alternative or complement, the basis functions of the variance model can be selected with respect to their correlations with the final OLS residuals or based on graphical residual analysis.
For the final implementation of a variance model, we use modified versions of two algorithms from Hartmann (2015). Our type I variant starts with the derivation of the proxy function by the standard adaptive OLS regression approach and then selects the variance model adaptively from the set of proxy basis functions whose exponents sum up to at most two. The type II variant builds on the type I algorithm by taking the resulting variance model as given in its adaptive proxy basis function selection procedure, now performing FGLS regression in each iteration.
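The following R sketch outlines the inner FGLS regression of such a procedure: the ML algorithm of Section 3.5.3 specialized to the multiplicative variance model (43), with step 2 solved numerically by nlminb( · ) as suggested above. The proxy design matrix Z, the variance design matrix V and the response y are assumed given; this is a simplified illustration, not the exact implementation of Hartmann (2015).

```r
## Sketch of the ML algorithm for FGLS under Sigma = diag(exp(V %*% alpha));
## Z, V, y are assumed given, everything else is illustrative.
fgls_ml <- function(Z, y, V, tol = 1e-8, max_iter = 50) {
  beta  <- solve(t(Z) %*% Z, t(Z) %*% y)        # beta^(0) = OLS estimator
  alpha <- rnorm(ncol(V))                       # random initialization of alpha
  for (t in seq_len(max_iter)) {
    eps <- as.numeric(y - Z %*% beta)           # step 1: residual vector
    nll <- function(a) {                        # -2 log-likelihood in alpha
      lin <- as.numeric(V %*% a)                # log-variances v_i' a
      sum(lin) + sum(exp(-lin) * eps^2)         # cf. the AIC terms in (45)
    }
    alpha <- nlminb(alpha, nll)$par             # step 2: ML solution for alpha
    w <- exp(-as.numeric(V %*% alpha))          # diagonal of Sigma^{-1}
    beta_new <- solve(t(Z * w) %*% Z, t(Z * w) %*% y)  # step 3: GLS update (41)
    if (max(abs(beta_new - beta)) < tol) break
    beta <- beta_new
  }
  list(beta = beta_new, alpha = alpha)          # FGLS coefficients, variance params
}
```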
Note further that we should only apply FGLS regression as a substitute for OLS regression if heteroscedasticity prevails. This can be tested with the Breusch-Pagan test of Breusch and Pagan (1979) for the following special structure of the variance function

$$V(\alpha, v_i) = h(v_i^T \alpha), \qquad (46)$$

where the function $h(\cdot)$ is twice differentiable and the first element of $v_i$ is $v_{i0} = 1$. Further, the assumption of normally distributed errors is made. We use the test in the numerical computations to check whether heteroscedasticity still prevails during the iteration procedure.
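In R, such a check is available, for example, through function bptest( · ) of package lmtest; a minimal sketch on hypothetical heteroscedastic data:

```r
## Breusch-Pagan test on a hypothetical heteroscedastic example.
library(lmtest)
set.seed(1)
x <- runif(500)
y <- 1 + 2 * x + rnorm(500, sd = 1 + x)  # error variance increases with x
fit <- lm(y ~ x)
bptest(fit, studentize = FALSE)          # small p-value: heteroscedasticity
```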

3.6. Multivariate Adaptive Regression Splines (MARS)

3.6.1. The Regression Model

Multivariate adaptive regression splines (MARS) were introduced by Friedman (1991). The classical MARS model is a form of the classical linear regression model (7) in which the basis functions $e_k(x_i)$ are so-called hinge functions. Therefore, the theory of OLS regression applies in this context. GLMs (12) can also be applied in conjunction with MARS models; in this case, we speak of generalized MARS models.
We describe the standard MARS algorithm in the LSMC routine according to Chapter 9.4 of Hastie et al. (2017). The building blocks of MARS proxy functions are reflected pairs of piecewise linear functions with knots $t$ as depicted in Figure 5, that is,

$$(X_d - t)_+ = \max(X_d - t, 0), \qquad (t - X_d)_+ = \max(t - X_d, 0), \qquad (47)$$

where the $X_d$, $d = 1, \dots, D$, represent the risk factors that together form the outer scenario $X = (X_1, \dots, X_D)^T$.
For each risk factor, reflected pairs with knots at each fitting scenario stress $x_{di}$, $i = 1, \dots, N$, are defined. All pairs are united in the following collection serving as the initial candidate basis function set of the MARS algorithm, that is,

$$\mathcal{C}^1 = \left\{ (X_d - t)_+, \, (t - X_d)_+ \; : \; t \in \{x_{d1}, x_{d2}, \dots, x_{dN}\}, \; d = 1, \dots, D \right\}. \qquad (48)$$

We call the elements of $\mathcal{C}^1$ hinge functions and consider them as functions $h(X)$ over the entire input space $\mathbb{R}^D$. $\mathcal{C}^1$ contains in total $2DN$ basis functions.
The adaptive basis function selection algorithm now consists of two parts, the forward and the backward pass.

3.6.2. Adaptive Forward Stepwise Selection and Forward Pass

The forward pass of the MARS algorithm can be viewed as a variation of the adaptive forward stepwise algorithm depicted in Figure 2. The start proxy function consists only of the intercept, that is, $h_0(X) = 1$. In the classical MARS model, the regression method of choice is the standard OLS regression approach with the estimator (8), where in each iteration a reflected pair of hinge functions is selected instead of a single basis function $e_k(X)$. Similarly, the regression method of choice in the generalized MARS model is the IRLS algorithm (18). Let us denote the MARS coefficient estimator by $\hat{\beta}^{\mathrm{MARS}}$. Note that the theory on AIC cannot be transferred without adjustments since the notion of degrees of freedom has to be reconsidered: the knots in the hinge functions act as additional degrees of freedom.
After each iteration, the set of candidate basis functions is extended by the products of the last two selected hinge functions with all hinge functions in $\mathcal{C}^1$ that depend on risk factors on which the last two selected hinge functions do not depend. Let the reflected pair selected in the first iteration ($k = 1$) be

$$h_1(X) = (X_{d_1} - t_1)_+, \qquad h_2(X) = (t_1 - X_{d_1})_+. \qquad (49)$$

Further, let $\mathcal{C}^{1,-} = \mathcal{C}^1 \setminus \{h_1(X), h_2(X)\}$. Then, the set of candidate basis functions is updated at the beginning of the second iteration ($k = 2$) such that

$$\mathcal{C}^2 = \mathcal{C}^{1,-} \cup \left\{ (X_d - t)_+ h_1(X), \, (t - X_d)_+ h_1(X) \; : \; t \in \{x_{d1}, x_{d2}, \dots, x_{dN}\}, \; d = 1, \dots, D, \; d \neq d_1 \right\} \cup \left\{ (X_d - t)_+ h_2(X), \, (t - X_d)_+ h_2(X) \; : \; t \in \{x_{d1}, x_{d2}, \dots, x_{dN}\}, \; d = 1, \dots, D, \; d \neq d_1 \right\}. \qquad (50)$$

The second set $\mathcal{C}^2$ thus contains $2(DN - 1) + 4(D-1)N$ basis functions. Often, the order of interaction is limited to improve the interpretability of the proxy functions. Besides the maximum allowed number of terms, a minimum threshold for the decrease in the residual sum of squares can be employed as a termination criterion in the forward pass. Typically, the proxy functions generated in the forward pass overfit the data since model complexity is only penalized conservatively by stipulating a maximum number of basis functions and a minimum threshold.

3.6.3. Backward Pass and GCV

Due to the overfitting tendency of the proxy function generated in the forward pass, a backward pass is executed afterwards. Apart from the direction and slight differences, the backward pass is similar to the forward pass. In each iteration, the hinge function whose removal causes the smallest increase in the residual sum of squares is removed, and the backward model selection criterion for the resulting proxy function is evaluated. By this backward procedure, we generate the “best” proxy functions of each size in terms of the residual sum of squares. Out of all these best proxy functions, we finally select the one which minimizes the backward model selection criterion. As a result, the final proxy function will not only contain reflected pairs of hinge functions but also single hinge functions whose complements have been removed. Optionally, the backward pass can also be omitted.
Let the number of basis functions in the MARS model be $K$ and the number of knots be $T$. The standard choice for the backward model selection criterion is GCV, defined as

$$\mathrm{GCV} = \frac{N \, D(\hat{\beta}^{\mathrm{MARS}})}{(N - \mathrm{df})^2}, \qquad (51)$$

with the effective degrees of freedom $\mathrm{df} = K + 3T$.
An especially fast MARS algorithm was later developed by Friedman (1993) and is implemented, for example, in function earth( · ) of R package earth provided by Milborrow (2018).
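A minimal usage sketch of earth( · ) follows; the data and all argument values are illustrative choices rather than recommended settings.

```r
## MARS sketch with the earth package; data and tuning values are hypothetical.
library(earth)
set.seed(1)
x <- matrix(runif(1000 * 3, -1, 1), ncol = 3)  # outer scenarios, D = 3
y <- x[, 1]^2 + pmax(x[, 2] - 0.3, 0) + rnorm(1000, sd = 0.5)
fit <- earth(x = x, y = y,
             degree  = 2,           # allow two-way products of hinge functions
             nk      = 51,          # maximum number of terms in the forward pass
             thresh  = 0.001,       # minimum R^2 decrease threshold (forward pass)
             pmethod = "backward")  # backward pass driven by GCV
summary(fit)                        # selected hinge functions and GCV score
```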

3.7. Kernel Regression

3.7.1. The One-dimensional Regression Model

Kernel regression (which goes back to Nadaraya (1964) and Watson (1964)) is a type of locally weighted OLS regression where the weights vary with the input variable (the target scenario). We start with locally constant (LC) regression, where for each $x_0 \in \mathbb{R}$ the fixed univariate kernel with given bandwidth $\lambda > 0$ is

$$K_\lambda(x_0, x_i) = D\left( \frac{|x_i - x_0|}{\lambda} \right), \qquad (52)$$

where $D(\cdot)$ denotes the specified kernel function. Solving the corresponding least-squares problem

$$\hat{\beta}^{\mathrm{LC}}(x_0) = \underset{\beta(x_0) \in \mathbb{R}}{\arg\min} \sum_{i=1}^{N} K_\lambda(x_0, x_i) \left( y_i - \beta_0(x_0) \right)^2, \qquad (53)$$

one obtains the Nadaraya-Watson kernel smoother as the kernel-weighted average at each $x_0$ over the fitting values $y_i$, that is,

$$\hat{f}^{\mathrm{LC}}(x_0) = \hat{\beta}^{\mathrm{LC}}(x_0) = \frac{\sum_{i=1}^{N} K_\lambda(x_0, x_i) \, y_i}{\sum_{i=1}^{N} K_\lambda(x_0, x_i)}. \qquad (54)$$
Typical examples of the fixed kernel are the Epanechnikov (see the green shaded areas of Figure 6, inspired by Hastie et al. (2017)), tri-cube, uniform and Gaussian kernels. Note that since a kernel smoother is continuous and varies over the domain of the target scenarios $x_0$, it needs to be estimated separately at each of them.
The bias at the boundaries of the domain of the LC kernel estimator (53) (see the left panel of Figure 6) is largely eliminated by fitting locally linear (LL) functions instead of locally constant functions, see the right panel of Figure 6. At each target $x_0$, the LL kernel estimator is defined as the minimizer of the kernel-weighted residual sum of squares, that is,

$$\hat{\beta}^{\mathrm{LL}}(x_0) = \underset{\beta(x_0) \in \mathbb{R}^2}{\arg\min} \sum_{i=1}^{N} K_\lambda(x_0, x_i) \left( y_i - \beta_0(x_0) - \beta_1(x_0) x_i \right)^2, \qquad (55)$$

with $\beta(x_0) = (\beta_0(x_0), \beta_1(x_0))^T$. The proxy function at $x_0$ is given by

$$\hat{f}^{\mathrm{LL}}(x_0) = \hat{\beta}^{\mathrm{LL}}_0(x_0) + \hat{\beta}^{\mathrm{LL}}_1(x_0) \, x_0. \qquad (56)$$

Again, the minimization problem (55) must be solved separately for all target scenarios so that the coefficients of the proxy function vary across their domain. For each target scenario $x_0$, a weighted least-squares (WLS) problem with weights $K_\lambda(x_0, x_i)$ has to be solved. Its solution is the WLS estimator

$$\hat{\beta}^{\mathrm{LL}}(x_0) = \left( Z^T W(x_0) Z \right)^{-1} Z^T W(x_0) y, \qquad (57)$$

with $y$ the response vector, $W(x_0) = \mathrm{diag}(K_\lambda(x_0, x_1), \dots, K_\lambda(x_0, x_N))$ the weight matrix and $Z$ the design matrix which contains row-wise the vectors $(1, x_i)^T$. We call $H$ the hat matrix if $\hat{y} = H y$ such that $\hat{y} = (\hat{f}^{\mathrm{LL}}(x_1), \dots, \hat{f}^{\mathrm{LL}}(x_N))^T$ contains the proxy function values at their target scenarios.
When we use proxy functions in LL regression that are composed of polynomial basis functions with exponents greater than one, we could also speak of local polynomial regression.

3.7.2. The Multidimensional Regression Model

We generalize LC regression to $\mathbb{R}^K$ by expressing the kernel with respect to the basis function vector $z = \left(e_0(X), \ldots, e_{K-1}(X)\right)^T$ following from the adaptive forward stepwise selection with OLS regression and small $K_{\max}$. At each target scenario vector $z_0 \in \mathbb{R}^K$ with elements $z_{0k}$, basis function vector $z_i \in \mathbb{R}^K$ with elements $z_{ik}$ evaluated at fitting scenario $x_i$ and given bandwidth vector $\lambda = \left(\lambda_0, \ldots, \lambda_{K-1}\right)^T$, the multivariate kernel is defined as the product of univariate kernels, that is,
$$K_{\lambda}(z_0, z_i) = \prod_{k=0}^{K-1} D\left(\frac{z_{ik} - z_{0k}}{\lambda_k}\right).$$
The LC kernel estimator in $\mathbb{R}^K$ is defined at each $z_0$ as
$$\hat{f}_{\mathrm{LC}}(z_0) = \hat{\beta}_{\mathrm{LC}}(z_0) = \frac{\sum_{i=1}^{N} K_{\lambda}(z_0, z_i) \, y_i}{\sum_{i=1}^{N} K_{\lambda}(z_0, z_i)}.$$
Since we let $e_0(X)$ represent the intercept so that $z_{i0} = z_{00} = 1$, the corresponding univariate kernel $D\left(\frac{z_{i0} - z_{00}}{\lambda_0}\right) = D(0)$ is constant over all fitting points; it thus cancels in (59) and can be omitted in (58).
The LL kernel estimator in $\mathbb{R}^K$ is given as the multidimensional analogue of (55) at each $z_0$, that is,
$$\hat{\beta}_{\mathrm{LL}}(z_0) = \underset{\beta(z_0) \in \mathbb{R}^K}{\operatorname{arg\,min}} \; \sum_{i=1}^{N} K_{\lambda}(z_0, z_i) \left(y_i - z_i^T \beta(z_0)\right)^2,$$
with $\beta(z_0) = \left(\beta_0(z_0), \ldots, \beta_{K-1}(z_0)\right)^T$, and the proxy function at $z_0$ is given by
$$\hat{f}_{\mathrm{LL}}(z_0) = z_0^T \hat{\beta}_{\mathrm{LL}}(z_0).$$
The LL kernel estimator can again be computed by WLS regression, that is,
$$\hat{\beta}_{\mathrm{LL}}(z_0) = \left(Z^T W(z_0) Z\right)^{-1} Z^T W(z_0) \, y,$$
where $W(z_0) = \operatorname{diag}\left(K_{\lambda}(z_0, z_1), \ldots, K_{\lambda}(z_0, z_N)\right)$ is the weight matrix and $Z$ the design matrix containing row-wise the vectors $z_i^T$. The hat matrix $H$ satisfies $\hat{y} = H y$ with $\hat{y} = \left(\hat{f}_{\mathrm{LL}}(z_1), \ldots, \hat{f}_{\mathrm{LL}}(z_N)\right)^T$ containing the proxy function values at their target scenario vectors.

3.7.3. Bandwidth Selection, AIC and LOO-CV

The bandwidths $\lambda_k$ in kernel regression can be selected similarly to the smoothing parameters in GAMs by minimizing a suitable model selection criterion. In fact, kernel smoothers can be interpreted as local non-parametric GLMs with identity link functions. More precisely, at each target scenario the kernel smoother can be viewed as a GLM (12) where the parametric weights $V(\hat{\mu}_{\mathrm{GLM},i})$ in (20) are the non-parametric kernel weights $K_{\lambda}(z_0, z_i)$ in (60). Since GLMs are special cases of GAMs and the bandwidths in kernel regression can be understood as smoothing parameters, kernel smoothers and GAMs are sometimes lumped together in one category. If the number $N$ of fitting points and the number $K$ of basis functions are large, it might be computationally beneficial to perform bandwidth selection on a reduced set of fitting points.
Hurvich et al. (1998) propose to select the bandwidths $\lambda_1, \ldots, \lambda_{K-1}$ based on an improved version of AIC which works in the context of non-parametric proxy functions that can be written as linear combinations of the observations. It has the form
$$\mathrm{AIC} = \log \hat{\sigma}^2 + \frac{1 + \operatorname{tr}(H)/N}{1 - \left(\operatorname{tr}(H) + 2\right)/N},$$
where $\hat{\sigma}^2 = \frac{1}{N} \left(y - \hat{y}\right)^T \left(y - \hat{y}\right)$ and $H$ is the hat matrix.
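For illustration, a minimal sketch of this criterion as an R function; the response vector y and the hat matrix H of the linear smoother are assumed to be given.

```r
# Sketch of the improved AIC of Hurvich et al. (1998) for a linear smoother
# with hat matrix H and fitted values y_hat = H %*% y.
aic_hurvich <- function(y, H) {
  N <- length(y)
  y_hat <- as.vector(H %*% y)
  sigma2_hat <- mean((y - y_hat)^2)  # (1/N) (y - y_hat)' (y - y_hat)
  tr_H <- sum(diag(H))
  log(sigma2_hat) + (1 + tr_H / N) / (1 - (tr_H + 2) / N)
}
```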
As an alternative, leave-one-out cross-validation (LOO-CV) is suggested by Li and Racine (2004) for bandwidth selection. Let us refer to
$$\hat{\beta}_{\mathrm{LL},-j}(z_0) = \underset{\beta(z_0) \in \mathbb{R}^K}{\operatorname{arg\,min}} \; \sum_{i=1, \, i \neq j}^{N} K_{\lambda}(z_0, z_i) \left(y_i - z_i^T \beta(z_0)\right)^2$$
as the leave-one-out LL kernel estimator and to $\hat{f}_{\mathrm{LL},-j}(z_0) = z_0^T \hat{\beta}_{\mathrm{LL},-j}(z_0)$ as the leave-one-out proxy function at $z_0$. The objective of LOO-CV is to choose the bandwidths $\lambda_1, \ldots, \lambda_{K-1}$ which minimize
$$\mathrm{CV} = \frac{1}{N} \sum_{i=1}^{N} \left(y_i - \hat{f}_{\mathrm{LL},-i}(z_i)\right)^2.$$
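A direct, if slow, illustrative implementation of this criterion for the one-dimensional LL smoother could look as follows; x, y denote the fitting data and lambda the candidate bandwidth.

```r
# Sketch of the LOO-CV criterion for one-dimensional LL regression; refits
# the local WLS problem without observation i for every i.
loo_cv <- function(x, y, lambda) {
  errs <- vapply(seq_along(y), function(i) {
    xi <- x[-i]; yi <- y[-i]
    w <- dnorm((xi - x[i]) / lambda)
    fit <- lm(yi ~ xi, weights = w)  # leave-one-out LL fit at x_i
    y[i] - unname(coef(fit)[1] + coef(fit)[2] * x[i])
  }, numeric(1))
  mean(errs^2)
}

# The bandwidth is then chosen as the minimizer, for example via
# optimize(function(l) loo_cv(x, y, l), interval = c(0.01, 0.5)).
```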

3.7.4. Adaptive Forward Stepwise OLS Selection

A practical implementation of kernel regression can be found, for example, via the combination of functions npreg( · ) and npregbw( · ) from R package np of Racine and Hayfield (2018).
In the other sections, basis function selection depends on the respective regression methods. Since the crucial process of bandwidth selection in kernel regression takes very long in the implementation of our choice, it would be infeasible to proceed in the same way here. Therefore, we derive the basis functions for LC and LL regression by adaptive forward stepwise selection based on OLS regression, by risk-factor-wise linear selection, or by a combination thereof. Thereby, we keep the maximum allowed number of terms $K_{\max}$ rather small, as we aim to model the subtleties by kernel regression.

4. Numerical Experiments

4.1. General Remarks

4.1.1. Data Basis

In our slightly disguised real-world example, the life insurance company has a portfolio with a large proportion of traditional German annuity business. This choice was made in order to challenge the regression techniques, since traditional German annuity business features high interest rate guarantees which may lead to large losses in low interest rate environments. We let the insurance company be exposed to $D = 15$ relevant financial and actuarial risk factors. For the derivation of the fitting points, we run its CFP model conditional on $N = 25{,}000$ fitting scenarios, with each of these outer scenarios entailing two antithetic inner simulations. For a subset of the resulting fitting values of the best estimate of liabilities (BEL), see Figure 1; for summary statistics, see the left column of Table 1; and for a histogram, see the left panel of Figure 7.
The Sobol validation set is generated from $L = 51$ validation scenarios with 1000 inner simulations each, comprising 26 Sobol scenarios, 15 one-dimensional risk scenarios, 1 base scenario and 9 scenarios that turned out to be capital region scenarios in the previous year's risk capital calculations. The nested simulations set, which is not available in the regular LSMC approach due to its high computational costs, reflects the 5% highest real-world losses and is based on $L = 1638$ outer scenarios with 4000 inner simulations each. Of the 1638 real-world scenarios, 14 exhibit extreme stresses far beyond the bounds of the fitting space and are therefore excluded from the analysis. For the remaining nested simulation values of BEL, see Figure 3; for summary statistics, see the right column of Table 1; and for a histogram, see the right panel of Figure 7. The capital region set consists of the $L = 129$ nested simulations points which correspond to the nested simulations SCR estimate (the 99.5% highest loss) and the 64 losses above and below it (the 99.3% to 99.7% highest losses).

4.1.2. Validation Figures

We will output validation figure (1) with respect to the relative and asset metric, and additionally figures (2)–(4). While figures (3) and (4) are evaluated on the Sobol set with respect to a base value resulting from 1000 inner simulations, that is, v.mae_0 and v.res_0, on the nested simulations set and the capital region set they are computed with respect to a base value resulting from 16,000 inner simulations, that is, ns.mae_0 and ns.res_0, and cr.mae_0 and cr.res_0, respectively. The latter base value is supposed to be the more reliable validation value since it is associated with a lower standard error. It is therefore worth noting that figure v.res_0 can easily be transformed so that it is also evaluated with respect to the latter base value, namely by subtracting from it the difference of 14 which the two different base values incur. We will not explicitly state the base residual (5) as it is just (2) minus (4).

4.1.3. Economic Variables

We derive the OLS proxy functions for two economic variables, namely the best estimate of liabilities (BEL) and the available capital (AC) over a one-year risk horizon, that is, $Y(X) \in \{\mathrm{BEL}(X), \mathrm{AC}(X)\}$. Their approximation quality is assessed by validation figures (1), with respect to the relative and asset metric, and (2). Essentially, AC is obtained as the market value of assets minus BEL, which means that AC mirrors the behavior of BEL with opposite sign. Therefore, we will only derive BEL proxy functions with the other regression methods. The profit resulting from a certain risk constellation captured by an outer scenario $X$ can be computed as $\mathrm{AC}(X)$ minus the base AC. Validation figures (3) and (4) address the approximation quality of this difference. Taking the negative of the profit yields the loss, and evaluating the loss at all real-world scenarios yields the real-world loss distribution from which the SCR is derived as the 99.5% value-at-risk. The out-of-sample performances of two different OLS proxy functions of BEL on the Sobol, nested simulations and capital region sets serve as the benchmark for the other regression methods.

4.1.4. Numerical Stability

Let us discuss the subject of numerical stability of QR decompositions in the OLS regression design under a monomial basis. If the weighting in the weighted least-squares problems associated with GLMs, heteroscedastic FGLS regression and kernel regression is good-natured, similar arguments apply, as these problems can also be solved via QR decompositions according to Green (1984), where the weighting is just a scaling. However, the weighting itself raises additional numerical questions that need to be taken into account when making the regression design choices. In GLMs, these choices are the random component (13) and link function (12); in FGLS regression it is the functional form of the heteroscedastic variance model (42); and in kernel regression it is the kernel function (58). The following arguments do not apply to GAMs and MARS models, as these are constructed out of spline functions, see (25) and (47), respectively. In GAMs, the penalty matrix increases numerical stability.
McLean (2014) justifies that, from the perspective of numerical stability, performing a QR decomposition on a monomial design matrix $Z$ is asymptotically equivalent to using a Legendre design matrix and transforming the resulting coefficient estimator into the monomial one. Under the assumption of an orthonormal basis, Weiß and Nikolić (2019) have derived an explicit upper bound for the condition number of the non-diagonal matrix $\frac{1}{N} Z^T Z$ for finite $N$, where the factor $\frac{1}{N}$ is used for technical reasons. This upper bound increases in (1) the number of basis functions, (2) the Hardy-Krause variation of the basis, (3) the convergence constant of the low-discrepancy sequence, and (4) the outer scenario dimension. Our previously defined type of restriction setting controls aspect (1) through the specification of $K_{\max}$ and aspect (2) through the limitation of the exponents by $d_1$, $d_2$ and $d_3$. Aspects (3) and (4) are beyond the scope of the calibration and validation steps of the LSMC framework and are therefore left aside here.

4.1.5. Interpolation and Extrapolation

In the LSMC framework, let us refer by interpolation to prediction inside the fitting space and by extrapolation to prediction outside it. Runge (1901) found that high-degree polynomial interpolation at equidistant points can oscillate toward the ends of the interval, with the approximation error getting worse the higher the degree. In a least-squares problem, Dahlquist and Björck (1974) showed that Runge's phenomenon does not apply to polynomials of degree $d$ fitted on $N$ equidistant points if the inequality $d < 2\sqrt{N}$ holds. With $N = 25{,}000$ fitting points the inequality becomes $d < 316$, so that we clearly do not have to impose any further restrictions in OLS, FGLS and kernel regression as well as in GLMs to keep this phenomenon under control. Splines as they occur in GAMs and MARS models do not suffer from this oscillation issue by construction.
Since Runge’s phenomenon concerns the ends of the interval and the real-world scenarios for the insurer’s full loss distribution forecast in the fourth step of the LSMC framework partly go beyond the fitting space, its scope comprises the extrapolation area as well. High-degree polynomial extrapolation can worsen the approximation error and play a crucial role if many real-world scenarios go far beyond the fitting space.

4.1.6. Principle of Parsimony

Another problem that can occur in an adaptive algorithm is overfitting. Burnham and Anderson (2002) state that overfitted models often have needlessly large sampling variances, which means that the precision of their predictions is poorer than that of more parsimonious models which are also free of bias. In cases where AIC leads to overfitting, implementing restriction settings of the form $K_{\max}$-$d_1 d_2 d_3$ becomes relevant for adhering to the principle of parsimony.

4.2. Ordinary Least-Squares (OLS) Regression

4.2.1. Settings

We build the OLS proxy functions (10) of $Y(X) \in \{\mathrm{BEL}(X), \mathrm{AC}(X)\}$ with respect to an outer scenario $X$ out of monomial basis functions that can be written as $e_k(X) = \prod_{l=1}^{15} X_l^{r_{kl}}$ with $r_{kl} \in \mathbb{N}_0$, so that each basis function can be represented by a 15-tuple $(r_{k1}, \ldots, r_{k15})$. The final proxy function depends on the restrictions applied in the adaptive algorithm. The purpose of setting restrictions is to guarantee numerical stability, to keep the extrapolation behavior under control and to keep the proxy functions parsimonious. In order to illustrate the impact of restrictions, we run the adaptive algorithm for BEL under two different restriction settings, with the second one being so relaxed that it will not take effect in our example. Additionally, we run the adaptive algorithm under the first restriction setting for AC to give an example of how the behavior of BEL can transfer to AC. The first ingredient of our restriction setting is the maximum allowed number of terms $K_{\max}$. Furthermore, we limit the exponents in the monomial basis. Firstly, we apply a uniform threshold to all exponents, that is, $r_{kl} \leq d_1$. Secondly, we restrict the degree, that is, $\sum_{l=1}^{15} r_{kl} \leq d_2$. Thirdly, we restrict the exponents in interaction basis functions, that is, if there are some $l_1 \neq l_2$ with $r_{kl_1}, r_{kl_2} > 0$, we require $r_{kl_1}, r_{kl_2} \leq d_3$. Let us denote this type of restriction setting by $K_{\max}$-$d_1 d_2 d_3$; a minimal sketch of the corresponding admissibility check is given below.
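The following minimal R sketch shows such an admissibility check for a candidate exponent tuple $(r_{k1}, \ldots, r_{k15})$; the defaults correspond to the first restriction setting 150–443 introduced below.

```r
# Sketch of the d1-d2-d3 admissibility check for a candidate basis function
# represented by its exponent tuple r = (r_k1, ..., r_k15).
admissible <- function(r, d1 = 4, d2 = 4, d3 = 3) {
  nonzero <- r[r > 0]
  all(r <= d1) &&                                # uniform exponent threshold
    sum(r) <= d2 &&                              # degree restriction
    (length(nonzero) < 2 || all(nonzero <= d3))  # interaction restriction
}

admissible(c(3, 1, rep(0, 13)))                  # TRUE under 150-443
admissible(c(7, 1, rep(0, 13)), d1 = 8, d2 = 8, d3 = 6)  # FALSE: 7 > d3 = 6
```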
As the first and second restriction settings, we choose 150–443 and 300–886, respectively, motivated by Teuguia et al. (2014), who found in their LSMC example in Chapter 4, with four risk factors and 50,000 fitting scenarios entailing two inner simulations each, that the validation error computed on 14 validation scenarios started to stabilize at degree 4 when using monomial or Legendre basis functions in different adaptive basis function selection procedures. Furthermore, they pointed out that the LSMC approach becomes infeasible for degrees higher than 12.
We apply R function lm( · ) implemented in R package stats of R Core Team (2018).
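As a rough illustration of how lm(·) and AIC interact in one iteration of the adaptive algorithm, consider the following sketch; the data frame dat with response BEL and risk factors X1, ..., X15, the current formula form and the character vector candidates of admissible candidate terms (for example "I(X1^2)") are illustrative stand-ins rather than the actual implementation.

```r
# Compressed sketch of one iteration of adaptive forward selection with
# lm() and AIC: fit every admissible candidate extension and accept the
# best one if it reduces the AIC score.
step_forward <- function(form, candidates, dat) {
  aic_now <- AIC(lm(form, data = dat))
  aics <- vapply(candidates, function(term) {
    AIC(lm(update(form, paste(". ~ . +", term)), data = dat))
  }, numeric(1))
  if (min(aics) < aic_now) {
    update(form, paste(". ~ . +", names(which.min(aics))))  # accept best term
  } else {
    form  # terminate: no candidate reduces the AIC score any further
  }
}
```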

4.2.2. Results

Table A1 contains the final BEL proxy function derived under the first restriction setting 150–443, with the basis function representations and coefficients. The rows reflect the iterations of the adaptive algorithm and thus depict the sequence in which the basis functions are selected. Moreover, the iteration-wise AIC scores and out-of-sample MAEs (1) with respect to the relative metric in % on the Sobol, nested simulations and capital region sets are reported, that is, v.mae, ns.mae and cr.mae. Table A2 contains the AC counterpart of the BEL proxy function derived under 150–443, and Table A3 the final BEL proxy function derived under the more relaxed restriction setting 300–886. Table A4 and Table A5 indicate, for the BEL and AC proxy functions derived under 150–443, respectively, the AIC scores and all five previously defined validation figures evaluated on the Sobol, nested simulations and capital region sets after each tenth iteration. Similarly, Table A6 reports these figures for the BEL proxy function derived under 300–886. Here the last row corresponds to the final iteration.
Lastly, we perturb the validation values on all three validation sets twice by pointwise subtracting and adding 1.96 times their standard errors (inspired by the 95% confidence interval of the gaussian distribution). We then evaluate the validation figures of the final BEL proxy functions under both restriction settings on these perturbed sets of validation value estimates and report them in Table A7 in order to assess the impact of the Monte Carlo error associated with the validation values.

4.2.3. Improvement by Relaxation

Table A1 and Table A2 show that under 150–443 the adaptive algorithm terminates for both BEL and AC when the maximum allowed number of terms is reached. This gives reason to relax the restriction setting to, for example, 300–886, which eventually lets the algorithm terminate due to no further reduction in the AIC score without the exponent restrictions 886 being hit; compare Table A3 for BEL. In fact, only restrictions 224–464 are hit. Except for the already very small figures cr.mae, cr.mae_a and cr.res, all validation figures are further improved by the additional basis functions, see Table A4 and Table A6. The largest improvement takes place between iterations 180 and 190. The result that degrees of at most 4–6–4 are selected is consistent with the result of Teuguia et al. (2014), who conclude in their numerical examples of Chapter 4 that under a monomial, Legendre or Laguerre basis the optimal degree is probably 4 or 5. Furthermore, Bauer and Ha (2015) derive a similar result in their one-risk-factor LSMC example of Chapter 6 when using 50,000 fitting scenarios and Legendre, Hermite, Chebyshev basis functions or eigenfunctions.
According to our Monte Carlo error impact assessment in Table A7, the slight deterioration at the end of the algorithm is not significant enough to indicate even a slight overfitting tendency of AIC. Under the standard choices of the five major components, compare Section 2.2, the adaptive algorithm thus manages to provide a numerically stable and parsimonious proxy function even without a restriction setting. Here, allowing a priori unlimited degrees of freedom is thus beneficial for capturing the complex interactions in the CFP model.

4.2.4. Reduction of Bias

Overall, the systematic deviations indicated by the means of residuals (2) and (4) are reduced significantly on the three validation sets by the relaxation but not completely eliminated. For the 300–886 OLS residuals on the three sets, see the diamond-shaped residuals in Figure 8, Figure 9 and Figure 10, respectively. While the reduction of the bias comes along with the general improvement stated above, the remainder of the bias indicates that the sample size is not sufficiently large or that the functional form is not flexible enough to replicate the complex interactions in CFP models. Note that if the functional form is correctly specified but the sample size is not sufficiently large, Proposition 3.2 of Bauer and Ha (2015) states that the AC proxy function will on average be positively biased in the tail reflecting the high losses, and the BEL proxy function will thus be negatively biased there. Since Propositions 1 and 2 of Gordy and Juneja (2010) state that this result holds for the nested simulations estimators as well, the validation values of the nested simulations and capital region sets would need to be more accurate in order to serve for bias detection in this case. For an illustration of such a bias, see Figures 5 and 6 of Bauer and Ha (2015). The bias in our one-sample example points in the opposite systematic direction, which is an indication of the insufficiency of polynomials. This is also consistent with observations in the industry that polynomials seem unable to replicate the sudden changes in steepness of AC and BEL which are a consequence of regulation and complex management actions in the CFP models.
Unlike figures (1) and (2), figures (3) and (4) do not forgive a bad fit of the base value if the validation values are well approximated by a proxy function. Conversely, if a proxy function shows the same systematic deviation from the validation values and the base value, (3) and (4) will be close to zero whereas (1) and (2) will not. The comparisons v.res < v.res_0 and cr.res < cr.res_0 but ns.res > ns.res_0, holding under both restriction settings, indicate that on the Sobol and capital region sets primarily the base value is not approximated well, whereas on the nested simulations set not only the base value but also the validation values are missed. The MAEs capture this result, too, that is, v.mae, cr.mae < ns.mae but ns.mae_0 < v.mae_0, cr.mae_0.

4.2.5. Relationship between BEL and AC

The MAEs with respect to the relative metric are much smaller for BEL than for AC, since the two economic variables are subject to similar absolute fluctuations while, for example, BEL is approximately 20 times the size of AC in the base case. The similar absolute fluctuations are reflected by the iteration-wise very similar MAEs with respect to the asset metric of BEL and AC; compare v.mae_a, ns.mae_a and cr.mae_a given in % in Table A4 and Table A5. Furthermore, they manifest themselves in the iteration-wise opposing means of residuals v.res, v.res_0, ns.res and cr.res as well as in the similar-sized MAEs v.mae_0, ns.mae_0 and cr.mae_0.

4.3. Generalized Linear Models (GLMs)

4.3.1. Settings

We derive the GLMs (12) of BEL under the restriction settings 150–443 and 300–886 which we also employed for the derivation of the OLS proxy functions. We run each restriction setting with the canonical choices of random components for continuous (non-negative) response variables, that is, the gaussian, gamma and inverse gaussian distributions, compare McCullagh and Nelder (1989). In cases where the economic variable can also attain negative values (for example, AC), a suitable shift of the response values in a preceding step would be required. We combine each of the three random component choices with the commonly used identity, inverse and log link functions, that is, $g(\mu) \in \{\operatorname{id}(\mu), 1/\mu, \log(\mu)\}$, compare Hastie and Pregibon (1992). In combination with the inverse gaussian random component, we additionally consider the link function $1/\mu^2$. Further choices are conceivable but lie beyond the scope of this first exploration.
We take R function glm( · ) implemented in R package stats of R Core Team (2018).
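Assuming the illustrative stand-ins form and dat from the OLS sketch above, the considered random component link function combinations can be specified via the family argument of glm(·), for example:

```r
# Minimal sketches of three of the considered random component and link
# function combinations; form and dat are illustrative stand-ins.
fit_gau_id  <- glm(form, data = dat, family = gaussian(link = "identity"))
fit_gam_log <- glm(form, data = dat, family = Gamma(link = "log"))
fit_inv_mu2 <- glm(form, data = dat, family = inverse.gaussian(link = "1/mu^2"))
```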

4.3.2. Results

While Table A8, Table A9 and Table A10 display the AIC scores and the five previously defined validation figures after each tenth iteration for the just mentioned combinations under 150–443, Table A11, Table A12 and Table A13 do so under 300–886 and furthermore include the final iterations. Table A14 gives an overview of the AIC scores and validation figures corresponding to all considered final GLMs and highlights in green and red the best and worst values observed per figure, respectively.

4.3.3. Improvement by Relaxation

The OLS regression is the special case of a GLM with gaussian random component and identity link function, which is why the first sections of Table A8 and Table A11 coincide with Table A4 and Table A6, respectively. Under 150–443, the adaptive algorithm terminates not only for this combination but also for all other ones when the maximum allowed number of terms is reached. Under 300–886, termination occurs due to no further reduction in the AIC score without the restrictions being hit: the different GLMs stop between 208–454 and 250–574.
For all GLMs except the one with gamma random component and identity link, the AIC scores and the eight most significant validation figures for measuring the approximation quality, namely the leftmost figure v.mae to the rightmost figure ns.res in the tables, are improved through the relaxation, as can be seen in Table A14. For the gamma random component with identity link, the deteriorations are negligible. Overall, figures ns.mae_0 and cr.mae_0 deteriorate by at most 0.5 percentage points and figures ns.res_0 and cr.res_0 by at most 4 units. Figures cr.mae and cr.mae_a are especially small under 150–443, so that slight deteriorations by at most 0.05 percentage points under 300–886 towards the levels of v.mae and v.mae_a or ns.mae and ns.mae_a are not surprising. Similar arguments apply to the acceptability of the maximum deterioration of cr.res by 13 to 17 units for the inverse gaussian with $1/\mu^2$ link. We conclude that the more relaxed restriction setting 300–886 performs better than 150–443 for all GLMs in our numerical example. This result appears plausible in comparison with the OLS result from the previous section and hence also with the OLS results of Teuguia et al. (2014) and Bauer and Ha (2015).
According to Table A11, Table A12 and Table A13 and also Table A7, AIC cannot be said to show an overfitting tendency, since the validation figures do not deteriorate in the late iterations by more than the Monte Carlo fluctuations they are subject to; compare the OLS interpretation. Using GLMs instead of OLS regression in the standard adaptive algorithm, compare Section 2.2, thus lets the algorithm maintain its property of yielding numerically stable and parsimonious proxy functions even without restriction settings.

4.3.4. Reduction of Bias

According to Table A14, the inverse gaussian with $1/\mu^2$ link shows the most significant decrease in v.mae, by 0.088 percentage points, when moving from 150–443 to 300–886. Under 300–886, this combination even outperforms all other ones (highlighted in green), whereas under 150–443 it is vice versa (highlighted in red). Hence, the performance of a random component link combination under 150–443 does not generalize to 300–886. On the Sobol and nested simulations sets, the MAEs (1) are not only considerably lower for the inverse gaussian with $1/\mu^2$ link than for all other combinations but also the closest together, even when the capital region set is included. This speaks for a great deal of consistency.
In fact, the systematic overestimation of 81% of the points on the nested simulations set by the inverse gaussian with $1/\mu^2$ link is certainly smaller than, for example, that of 89% by the gaussian with identity link, but it is still very pronounced. On the capital region set, the overestimation rates for these two combinations are 41% and 56%, respectively, meaning that here the bias is negligible. Surprisingly, for most GLMs the bias is smaller here than for the inverse gaussian with $1/\mu^2$ link, but since this result does not generalize to the nested simulations set, we regard it as a chance event and do not further question the rather mediocre performance of the inverse gaussian with $1/\mu^2$ link here. Interpreting the mean of residuals (2) provides similar insights.
In particular, for the inverse gaussian $1/\mu^2$ link GLM, the reduction of the bias comes along with the general improvement by the relaxation. The small remainder of the bias indicates not only that this GLM is a promising choice here but also that identifying suitable regression methods and functional forms is crucial to further improving the accuracy of the proxy function. For the residuals on the three sets, see the triangle-shaped residuals in Figure 8, Figure 9 and Figure 10, respectively.

4.3.5. Major and Minor Role of Link Function and Random Component

Apart from the just considered case, for all three random components the relaxation to 300–886 yields the largest out-of-sample performance gains in terms of v.mae with the identity link (between 0.047 and 0.058 percentage points), closely followed by the log link (between 0.033 and 0.047 percentage points), and the smallest gains with the inverse link (between 0.017 and 0.020 percentage points). While with the identity link the largest improvements before finalization take place for the gaussian, gamma and inverse gaussian random components between iterations 180 to 190, 170 to 180, and 150 to 160, respectively, with the log link they occur much sooner, between iterations 120 to 130, 110 to 120, and 110 to 120, respectively; see Table A11, Table A12 and Table A13. As a result of this behavior, under 150–443 the log link performs better than the identity link for the gaussian and inverse gaussian, whereas under 300–886 it is vice versa. The inverse link always performs worse than the identity and log links, in particular under 300–886.
Applying the same link with different random components does not bring much variation under 300–886, though gamma and inverse gaussian are slightly better than gaussian for all considered links. A possible explanation is that the distribution of BEL is slightly skewed conditional on the outer scenarios. The skewness in the inner simulations results from an asymmetric profit sharing mechanism in the CFP model: while the policyholders are entitled to participate in the profits of an insurance company, see, for example, Mourik (2003), the company has to bear its losses fully by itself. Since the gaussian performs only slightly worse than the skewed distributions, it should still be considered for practical reasons, because it has a closed-form solution and a great deal of statistical theory has been developed for it; compare, for example, Dobson (2002). In conclusion, the choice of the link is more important than that of the random component, so that trying alternative link functions might be beneficial.

4.4. Generalized Additive Models (GAMs)

4.4.1. Settings

For the derivation of the GAMs (26) of BEL, we apply only restriction settings $K_{\max}$-443 with $K_{\max} \leq 150$ in the adaptive algorithm, since we use smooth functions (25) constructed out of splines that may already have exponents greater than 1 to which the monomial first-order basis functions are raised. As the model selection criterion we take GCV (32), used by our chosen implementation by default. We vary different ingredients of GAMs while holding others fixed in order to carve out possible effects of these ingredients on the approximation quality of GAMs in adaptive algorithms and in our application.
We rely on R function gam( · ) implemented in R package mgcv of Wood (2018).
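For illustration, a minimal gam(·) call in the spirit of this section, with thin plate regression splines of size J = 10, gaussian random component and identity link; the data frame dat with response BEL and two already selected risk factors X1, X2 is an illustrative stand-in:

```r
library(mgcv)
# Minimal sketch of a GAM with two smooth functions, each built from J = 10
# thin plate regression spline basis functions; GCV-based smoothing parameter
# selection is the gam() default here.
fit_gam <- gam(BEL ~ s(X1, bs = "tp", k = 10) + s(X2, bs = "tp", k = 10),
               family = gaussian(link = "identity"), data = dat)
summary(fit_gam)  # effective degrees of freedom and p-values per smooth term
```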

4.4.2. Results

Table A15 contains the validation figures for GAMs with varying numbers of spline functions per smooth function, that is, $J \in \{4, 5, 8, 10\}$, after each tenth and the finally selected smooth function. In the case of adaptive forward stepwise selection, the iteration numbers coincide with the numbers of selected smooth functions. In contrast, table sections with adaptive forward stagewise selection results do not display the iteration numbers in the smooth function column $k$. In Table A16, we display the effective degrees of freedom, p-values and significance codes of each smooth function of the $J = 4$ and $J = 10$ GAMs from the previous table at stages $k \in \{50, 100, 150\}$. The p-values and significance codes are based on a test statistic of Marra and Wood (2012), which has its foundations in the frequentist properties of Bayesian confidence intervals analyzed in Nychka (1988). Table A17 and Table A18 report the validation figures for GAMs with $J = 5$ and $J = 10$, respectively, where the types of the spline functions are varied. Thin plate regression splines, penalized cubic regression splines, Duchon splines and Eilers and Marx style P-splines are considered. Thereafter, Table A19 and Table A20 display the validation figures for GAMs with $J = 4$ and $J = 8$, respectively, and different random component link function combinations. As in the GLMs, we apply the gaussian, gamma and inverse gaussian distributions with identity, log, inverse and $1/\mu^2$ (only inverse gaussian) link functions.
Table A21 compares, by means of two exemplary GAMs, the effects of adaptive forward stagewise selection of length $L = 5$ and adaptive forward stepwise selection. Last but not least, Table A22 contains a mixture of GAMs challenging the results deduced from the other GAM tables. Table A23 gives an overview of the validation figures corresponding to all derived final GAMs and highlights in green and red the best and worst values observed per figure, respectively.

4.4.3. Efficiency and Performance Gains by Tailoring the Spline Function Number

Table A15 indicates that the MAEs (1) and (3) of the exemplary GAMs built up of thin plate regression splines with gaussian random component and identity link tend to increase with the number $J$ of spline functions per dimension until $k = 100$. Running more iterations reverses this behavior until $k = 150$. Hence, as long as comparably few smooth functions have been selected in the adaptive algorithm, fewer spline functions tend to yield better out-of-sample performances of the GAMs, whereas many smooth functions tend to perform better with more spline functions. A possible explanation of this observation is that an omitted-variable bias due to too few smooth functions is aggravated here by overfitting due to too many spline functions. For more details on omitted-variable bias, see, for example, Pindyck and Rubinfeld (1998), and for the needlessly large sampling variances and thus low estimation precision of overfitted models, see, for example, Burnham and Anderson (2002). In contrast, the absolute values of the means of residuals (2) and (4) tend to become smaller with increasing $J$ regardless of $k$.
According to Table A16, the components of the effective degrees of freedom (31) associated with each smooth function tend to decrease slightly in $k$ for $J = 4$ and $J = 10$. This is plausible, as the explanatory power of each additionally selected smooth term is expected to decline by trend in the adaptive algorithm. Conditional on df > 1, that is, for proportions of at least 40% of all smooth terms, the averages of the effective degrees of freedom belonging to $k \in \{50, 100, 150\}$ amount to 2.494, 2.399, 2.254 for $J = 4$ and to 5.366, 4.530, 4.424 for $J = 10$, respectively. The values are by construction smaller than $J - 1$, since one degree of freedom per smooth function is lost to the identifiability constraints. Hence, for at least 40% of the smooth functions, on average $J = 6$ is a reasonable choice to capture the CFP model properly while maintaining computational efficiency; compare Wood (2017). The other side of the coin is that up to 60% of the smooth functions could presumably be replaced by simple linear terms without losing accuracy, so that tremendous efficiency gains can be realized by making the GAMs more parsimonious. Furthermore, setting $J$ individually for each smooth function can help improve computational efficiency (if $J$ should be set below average) and out-of-sample performance (if $J$ should be set above average). However, such a tailored approach entails the challenge that the optimal $J$ per smooth function is not stable across all $k$; compare row-wise the degrees of freedom in the table for $J = 4$ and $J = 10$.

4.4.4. Dependence of Best Spline Function Type

According to Table A17 and Table A18, the adaptive algorithm terminates due to no further decrease in GCV only when the GAMs are composed of the Duchon splines discussed in Duchon (1977). Whether GCV has an overfitting tendency here cannot be deduced from this example, since only restriction settings with $K_{\max} \leq 150$ are tested. The thin plate regression splines of Wood (2003) and the penalized cubic regression splines of Wood (2017) perform similarly and significantly better than the Duchon splines for both $J = 5$ and $J = 10$. For $J = 5$, the Eilers and Marx style P-splines proposed by Eilers and Marx (1996) perform by far best when $K_{\max} = 100$ smooth functions are allowed. However, for $J = 10$ they are outperformed by both the thin plate regression splines and the penalized cubic regression splines when between $K_{\max} = 125$ and 150 smooth functions are allowed. This result illustrates well that the best choice of the spline function type varies with $J$ and $K_{\max}$, meaning that it should be selected together with these parameters.

4.4.5. Minor Role of Link Function and Random Component

For the GLMs, we have seen that varying the random component barely alters the validation results, whereas varying the link function can make a noticeable impact. While this result mostly applies to the earlier compositions of the GAMs as well, it certainly does not apply to the later ones. See, for instance, the early composition $k = 40$ in Table A19. Here, the identity link GAMs with gamma and inverse gaussian random components perform more similarly to each other than the identity and log link GAMs with gamma random component or the identity and log link GAMs with inverse gaussian random component do. The log link GAMs with gamma and inverse gaussian random components show such a behavior as well. However, the identity link GAM with the less flexible gaussian random component (no skewness) does not at all show a behavior similar to that of the identity link GAMs with gamma or inverse gaussian random components. Now see the later compositions $k \in \{70, 80\}$ to verify that all available GAMs in the table produce very similar validation results.
For another example, see Table A20. For the early composition $k = 50$, the identity link GAMs with gaussian and gamma random components behave very similarly to each other, just like the log link GAMs with gaussian and gamma random components do. For the later compositions $k \in \{100, 110\}$, again all available GAMs produce very similar validation results. A possible explanation of this result is that the impact of the link function and random component decreases with the number of smooth functions, as the latter take over the modeling. In conclusion, the choices of the random component and link function do not play a major role when the GAM is built up of many smooth functions.

4.4.6. Consistency of Results

Table A21 shows, based on two exemplary GAMs constructed out of $J = 8$ thin plate regression splines per dimension and varying in the random component and link function, that adaptive forward stagewise selection of length $L = 5$ and adaptive forward stepwise selection lead to very similar GAMs and validation results. As a result, stagewise selection should be preferred due to its considerable run time advantage. As we will see in the following, the run time can be further reduced without any drawbacks by dynamically selecting even more than 5 smooth functions per iteration.
The purpose of Table A22 is to challenge the hypotheses deduced above. Like Table A15, this table contains the results of GAMs with varying spline function number $J \in \{5, 8, 10\}$ and fixed spline function type. Instead of thin plate regression splines, Eilers and Marx style P-splines are now considered. Since adaptive forward stepwise and stagewise selection do not yield significant differences in the examples of Table A21, we do not expect permutations thereof to affect the results much here either. This allows us to randomly assign three different adaptive forward selection approaches to the three exemplary proxy function derivation procedures. As one of these approaches, we choose a dynamic stagewise selection approach in which $L$ is determined in each iteration as the proportion 0.25 of the size of the candidate term set. Again we see that as long as only $k \in \{90, 100\}$ smooth functions have been selected, $J = 5$ performs better than $J = 8$ and $J = 8$ better than $J = 10$. However, $k = 150$ smooth functions are not sufficient this time for $J = 10$ to catch up with the performance of $J = 5$. The observed performance order is consistent with the hypotheses of a high stability of the GAMs with respect to the adaptive selection procedure and the random component link function combination.

4.4.7. Potential of Improved Interaction Modeling

Table A23 presents as the most suitable GAM the one with the highest allowed maximum number of smooth functions, $K_{\max} = 150$, and the highest number of spline functions per dimension, $J = 10$. The slight deterioration after $k = 130$ reported by Table A15 indicates that at least one of the parameters is already comparably high. According to Table A16, there are a few smooth terms which might benefit from being composed of more than ten spline functions, and increasing $K_{\max}$ might help capture the interactions in the CFP model more appropriately. This holds particularly in the light of the fact that the best GLM, having 250 basis functions, outperforms the best GAM on both the Sobol and nested simulations sets, compare Table A14, although the best GAM shows a comparably low bias across the three validation sets; see the dot-shaped residuals in Figure 8, Figure 9 and Figure 10, respectively. Variations in the random component link function combination and the adaptive selection procedure are not expected to change the performance much. In conclusion, we recommend the fast gaussian identity link GAMs (several expressions in the PIRLS algorithm simplify) with tailored spline function numbers per smooth function and simple linear terms, under stagewise selection approaches of suitable lengths $L \geq 5$ and more relaxed restriction settings with $K_{\max} > 150$.

4.5. Feasible Generalized Least-Squares (FGLS) Regression

4.5.1. Settings

Like the OLS proxy functions and GLMs, we derive the FGLS proxy functions (38) under the restriction settings 150–443 and 300–886. For the performance assessment of FGLS regression, we apply type I and type II algorithms with variance models of different complexity, where the type I results are obtained as a by-product of the type II algorithm, since the latter builds upon the former. We control the complexity through the maximum allowed number of variance model terms $M_{\max} \in \{2, 6, 10, 14, 18, 22\}$.
We combine R functions nlminb( · ) and lm( · ) implemented in R package stats of R Core Team (2018).
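The following compressed sketch illustrates the flavor of one feasible GLS step: an OLS fit, an auxiliary regression of the log squared residuals, and a weighted refit. The formulas form and var_form (for example lr2 ~ X1 + I(X1^2)) and the data frame dat are illustrative stand-ins; the actual variance model (42) is estimated with nlminb(·) and differs in its functional form.

```r
# Sketch of a generic two-stage FGLS step: model the heteroscedastic
# variances from the OLS residuals, then refit by weighted least squares.
ols_fit  <- lm(form, data = dat)
dat$lr2  <- log(residuals(ols_fit)^2)           # log squared OLS residuals
var_fit  <- lm(var_form, data = dat)            # auxiliary variance model
w        <- 1 / exp(fitted(var_fit))            # estimated inverse variances
fgls_fit <- lm(form, data = dat, weights = w)   # weighted least-squares refit
```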

4.5.2. Results

Table A24 and Table A25 display the adaptively selected FGLS variance models of BEL corresponding to the maximum allowed numbers of terms $M_{\max}$, based on the final 150–443 and 300–886 OLS proxy functions given in Table A1 and Table A3, respectively. For reasons of numerical stability and simplicity, only basis functions with exponents summing up to at most two are considered as candidates. Additionally, the AIC scores and MAEs with respect to the relative metric are reported in the tables. By construction, these results are also the type I algorithm outcomes. Table A26 and Table A27 summarize all iteration-wise out-of-sample test results under 150–443 and 300–886, respectively. The results of the type II algorithm after each tenth and the final iteration of adaptive FGLS proxy function selection are displayed in Table A28 and Table A29, respectively. Table A30 gives an overview of the AIC scores and validation figures corresponding to all final FGLS proxy functions and, as in the previous overview tables, highlights in green and red the best and worst values observed per figure, respectively.

4.5.3. Consistency Gains by Variance Modeling

Table A24 and Table A25 show similar out-of-sample performance patterns during adaptive variance model selection based on the basis function sets of the 150–443 and 300–886 OLS proxy functions. In both cases, the p-values of the Breusch-Pagan test indicate that heteroscedasticity is not eliminated but reduced when the variance models are extended, that is, when $M_{\max}$ is increased. In fact, in a more good-natured LSMC example, Hartmann (2015) shows that a type I alike algorithm manages to fully eliminate heteroscedasticity. While the MAEs (1) barely change on the Sobol set, they decrease significantly on the nested simulations set and increase noticeably on the capital region set. Under 300–886, the effects are considerably smaller than under 150–443, since the capital region performance of the 300–886 OLS proxy function is less extraordinarily good than that of the 150–443 OLS proxy function. The three MAEs approach each other under both restriction settings. Hence, the reductions in heteroscedasticity lead to consistency gains across the three validation sets.
Table A26 and Table A27 complete this picture. The remaining validation figures on the Sobol set improve slightly through type I FGLS regression compared to OLS regression. Like ns.mae, figure ns.res and the base residual improve a lot with increasing $M_{\max}$ under 150–443 and a little less under 300–886, but ns.mae_0 and ns.res_0 do not alter much, as the aforementioned two figures cancel each other out here. On the capital region set, the figures deteriorate or remain comparably high in absolute values. The type I FGLS figures converge fast, so that increasing $M_{\max}$ successively from 10 to 22 barely affects the out-of-sample performance anymore. As a result of heteroscedasticity modeling, the proxy functions are shifted such that the overall approximation quality increases. Unfortunately, this does not guarantee an improvement in the relevant region for SCR estimation, as our example illustrates well.

4.5.4. Monotonicity in Complexity

Let us now address the type II FGLS results under 150–443 in Table A28. For $M_{\max} = 2$, figures (3) and (4) are improved significantly on all three validation sets compared to OLS regression, with the type I figures lying in between. The other validation figures are similar for OLS, type I and type II FGLS regression, which traces the performance gains in (3) and (4) back to a better fit of the base value. For $M_{\max} = 6$ to 22, the type II figures show the same effects as the type I ones but more pronouncedly, see the previous two paragraphs. These effects are by trend the more distinct the more complex the variance model becomes. The type II figures stabilize less than the type I ones because of the additional variability coming along with adaptive FGLS proxy function selection. Hartmann (2015) shows in terms of Sobol figures in her LSMC example that increasing the complexity while omitting only one regressor from the simpler variance model can deteriorate the out-of-sample performance dramatically. Intuitively, it is plausible that the FGLS validation figures move farther away from the OLS figures the more elaborately heteroscedasticity is modeled.
Now let us relate the type II FGLS results under 300–886 in Table A29 to the other FGLS results. Under 300–886 for $M_{\max} = 2$, figures (3) and (4) are already at a comparably good level with both OLS and type I FGLS regression, so that they do not alter much or even deteriorate with type II FGLS regression. As under 150–443 for $M_{\max} = 6$ to 22, the type II figures show the effects of the type I ones more pronouncedly. Under both restriction settings, ns.mae and ns.res thereby decrease significantly. While this barely causes ns.res_0 to change under 150–443, it lets ns.res_0 increase in absolute values under 300–886. The slight improvements on the Sobol set and the deteriorations on the capital region set carry over to 300–886. When $M_{\max}$ is increased up to 22, the type II FGLS validation figures under 300–886 do not stop fluctuating. The variability entailed by adaptive FGLS proxy function selection thus intensifies through the relaxation of the restriction setting in this numerical example. According to the Breusch-Pagan test, heteroscedasticity is eliminated neither by the type II algorithm here nor by a type II alike approach of Hartmann (2015) in her more good-natured example.

4.5.5. Improvement by Relaxation

Among all FGLS proxy functions listed in Table A30, we consider the type II one with $M_{\max} = 14$ in variance model selection under 300–886 as the best performing one. Apart from the nested simulations validation under the type I algorithm, 300–886 performs better than 150–443. Since, on the other hand, the type II algorithm performs better than the type I algorithm under the respective restriction settings, 300–886 and the type II algorithm are the most promising choices here. However, $M_{\max} = 14$ does not constitute a stable choice due to the high variability coming along with 300–886 and the type II algorithm.
While all type I FGLS proxy functions are by definition composed of the same basis functions as the OLS proxy function, the compositions of the type II FGLS proxy functions vary with $M_{\max}$ because of their renewed adaptive selection. Consequently, under 300–886 all type I FGLS proxy functions hit the same restrictions 224–464 as the OLS proxy function does, whereas the restrictions hit by the type II FGLS proxy functions vary between 224–454 and 258–564. This variation is consistent with the OLS and GLM results from the previous sections and hence with the OLS results of Teuguia et al. (2014) and Bauer and Ha (2015).
According to Table A26, Table A27, Table A28 and Table A29, AIC does not have an overfitting tendency, as the validation figures do not deteriorate in the late iterations by more than the Monte Carlo fluctuations they are subject to; compare the OLS and GLM interpretations. Using FGLS instead of OLS regression in the standard adaptive algorithm, compare Section 2.2, thus also lets the algorithm yield numerically stable and parsimonious proxy functions without restriction settings.

4.5.6. Reduction of Bias

The type II $M_{\max} = 14$ FGLS proxy function under 300–886 reaches with 258 terms the highest observed number across all numerical experiments. It not only outperforms all derived GLMs and GAMs in terms of combined Sobol and nested simulations validation, it also shows by far the smallest bias on these two validation sets and approximates the base value comparably well. This observation speaks for a high interaction complexity of the CFP model. The reduction of the bias again comes along with the general improvement by the relaxation. Given that the capital region set presents the most extreme and challenging validation set in our analysis, the still mediocre performance here can be regarded as acceptable for now. Nevertheless, especially the bias on this set motivates the search for even more suitable regression methods and functional forms. For the residuals of the 300–886 FGLS proxy function on the three sets, see the x-shaped residuals in Figure 8, Figure 9 and Figure 10, respectively.

4.6. Multivariate Adaptive Regression Splines (MARS)

4.6.1. Settings

We undertake a two-step approach to identify suitable generalized MARS models out of numerous possibilities. In the first step, we vary several MARS ingredients over a wide range and obtain in this way a large number of different MARS models. To be more specific, we vary the maximum allowed number of terms $K_{\max} \in \{50, 113, 175, 237, 300\}$ and the minimum threshold for the decrease in the residual sum of squares $t_{\min} \in \{0, 1.25, 2.5, 3.75, 5\} \cdot 10^5$ in the forward pass, the order of interaction $o \in \{3, 4, 5, 6\}$, the pruning method $p \in \{n, b, f, s\}$ with n = none, b = backward, f = forward and s = seqrep in the backward pass, as well as the random component link function combination of the GLM extension. In addition to the 10 random component link function combinations applied in the numerical experiments of the GLMs, compare, for example, Table A14, we use the poisson random component with identity, log and square root link functions. We work with the default fast MARS parameter fast.k = 20 of our chosen implementation.
We use R function earth( · ) implemented in R package earth of Milborrow (2018).
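For illustration, a minimal earth(·) call covering the varied ingredients, shown for one of the considered combinations; the data frame dat with response BEL and the 15 risk factors is an illustrative stand-in, and we assume here that $K_{\max}$ maps to earth's nk argument and $t_{\min}$ to its thresh argument.

```r
library(earth)
# Minimal sketch of a generalized MARS fit: nk bounds the number of terms of
# the forward pass, thresh is the minimum decrease in the residual sum of
# squares, degree the interaction order, pmethod the pruning method of the
# backward pass, and glm specifies the random component link combination.
fit_mars <- earth(BEL ~ ., data = dat, nk = 50, thresh = 0, degree = 4,
                  pmethod = "backward", fast.k = 20,
                  glm = list(family = inverse.gaussian(link = "identity")))
```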

4.6.2. Results

In total, these settings yield $4 \cdot 5 \cdot 5 \cdot 4 \cdot 13 = 5200$ MARS models, with a lot of duplicates, in our first step. We validate the 5200 MARS models on the Sobol, nested simulations and capital region sets by evaluating the five validation figures. Then we collect the five best performing MARS models in terms of each validation figure per set, which gives us in total $5 \cdot 5 = 25$ best performing models per first step validation set. Since the MAEs (1) with respect to the relative and asset metric entail the same best performing models, only $5 \cdot 4 = 20$ of the collected models per first step set are potentially different. Based on the ingredients of each of these 20 MARS models per first step set, we define $5 \cdot 5 = 25$ new sets of ingredients varying only with respect to $K_{\max}$ and $t_{\min}$ and derive the corresponding new but similar MARS models in the second step. As a result, we obtain in total $20 \cdot 25 = 500$ new MARS models per first step set. Again, we assess their out-of-sample performances by evaluating the five validation figures on the three validation sets. Out of the 500 new MARS models per first step set, we then collect the best performing ones in terms of each validation figure per second step set. This gives us in total $5 \cdot 3 = 15$ best MARS models per first step set, or, taking into account that the MAEs (1) with respect to the relative and asset metric entail once more the same best performing models, $4 \cdot 3 = 12$ potentially different best models per first step set. In total, this makes $12 \cdot 3 = 36$ best MARS models, which can be found in Table A31, sorted by first and second step validation sets.

4.6.3. Poor Interaction Modeling and Extrapolation

In Table A31, the out-of-sample performances of all MARS models derived in our two-step approach are sorted using the first step validation set as the primary and the second step validation set as the secondary sort key. Let us address the first step second step validation set combinations by the headlines in Table A31. By construction, the combinations Sobol set², Nested simulations set² and Capital region set² yield the MARS models with the best validation figures (1)–(4) on the Sobol, nested simulations and capital region sets, respectively. Accordingly, all corresponding diagonal elements in the table are highlighted in green. But even the best MAEs (1) and (3) are not close to what OLS regression, GLMs, GAMs and FGLS regression achieve. Finding small residuals (2) and (4) regardless of the other validation figures is not sufficient. The performances on the nested simulations and capital region sets, comprising several scenarios beyond the fitting space, are especially poor. All these results indicate that MARS models do not seem very suitable for our application. Despite the possibility to select up to 300 basis functions, the MARS algorithm selects at most 148 basis functions, which suggests that, without any alterations, the algorithm is not able to capture the behavior of the CFP model properly; in particular, the extrapolation behavior is comparably poor.
The MARS model with the set of ingredients $K_{\max} = 50$, $t_{\min} = 0$, $o = 4$, $p = b$, inverse gaussian random component and identity link function is selected as the best one six times out of 36, that is, once for each Sobol and nested simulations first step validation set combination. Furthermore, this model performs best in terms of v.res_0, ns.mae_0 and ns.mae_a. Since there is no other MARS model with a similarly high occurrence and performance, we consider it the best performing and most stable one found in our two-step approach. For an illustration of a MARS model, see this one in Table A32. The fact that even this best MARS model performs worse than other ones in terms of several validation figures stresses the infeasibility of MARS models in this application.

4.6.4. Limitations

Table A31 suggests that, up to a certain upper limit, the higher the maximum allowed number of terms $K_{\max}$, the higher the performance on the Sobol set tends to be. However, this result does not generalize to the nested simulations and capital region sets. Since at most 148 basis functions are selected here, even if up to 300 basis functions are allowed, extending the range of $K_{\max}$ in the first step of this numerical experiment would not affect the output in this regard. The threshold $t_{\min}$ is an instrument for controlling the number of basis functions selected in the forward pass up to $K_{\max}$; it cannot be extended below zero, meaning that its variability has already been exhausted here as well. For the interaction order $o$, similar considerations as for $K_{\max}$ apply. The pruning method $p$ used in the backward pass does not play a large role compared to the other ingredients, as it only helps reduce the set of selected basis functions. In terms of Sobol validation, the inverse gaussian random component with identity link performs best, whereas in terms of nested simulations and capital region validation, the inverse gaussian random component with any link or the log link with gaussian or poisson random component perform best. We conclude that if there were a suitable MARS model for our application, our two-step approach would have found it.

4.7. Kernel Regression

4.7.1. Settings

We make a series of adjustments affecting either the structure or the derivation process of the multidimensional LC and LL proxy functions (59) and (61) to get as broad a picture of the potential of kernel regression in our application as possible. Our adjustments concern the kernel function and its order, the bandwidth selection criterion, the proportion of fitting points used for bandwidth selection, and the sets of basis functions of which the local proxy functions are composed. Thereby, we combine in various ways the gaussian, Epanechnikov and uniform kernels, orders $o \in \{2, 4, 6, 8\}$, bandwidth selection criteria LOO-CV and AIC, and between 2500 (proportion bw = 0.1) and 25,000 (proportion bw = 1) fitting points for bandwidth selection.
We work with R functions npregbw( · ) and npreg( · ) implemented in R package np of Racine and Hayfield (2018).
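A minimal sketch of this workflow: bandwidth selection via npregbw(·) followed by the fit via npreg(·); the data frame Z of evaluated basis functions and the response vector y are illustrative stand-ins.

```r
library(np)
# Minimal sketch of locally linear kernel regression with package np;
# the kernel, its order and the bandwidth criterion are the varied ingredients.
bw <- npregbw(xdat = Z, ydat = y,
              regtype = "ll",             # locally linear ("lc" for LC)
              ckertype = "epanechnikov",  # gaussian, epanechnikov or uniform
              ckerorder = 2,              # kernel order o in {2, 4, 6, 8}
              bwmethod = "cv.ls")         # LOO-CV; "cv.aic" for the AIC variant
fit_kern <- npreg(bws = bw)
```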

4.7.2. Results

Furthermore, we alternate the four basis function sets contained in Table A33 and Table A34. The first two basis function sets with K_max ∈ {16, 27} are derived by adaptive forward stepwise selection based on OLS regression, the third one with K_max = 15 by risk-factor-wise linear selection, and the last one with K_max = 22 by a combination thereof. All combinations including their out-of-sample performances can be found in Table A35. Again, the best and worst values observed per validation figure are highlighted in green and red, respectively.

4.7.3. Poor Interaction Modeling and Extrapolation

We draw the following conclusions from the validation results in Table A35. The comparisons of LC and LL regression applied with the gaussian kernel and 16 basis functions, or the Epanechnikov kernel and 15 basis functions, suggest that LL regression performs better than LC regression. However, even the best Sobol, nested simulations and capital region results of LL regression are still outperformed by OLS regression, GLMs, GAMs and FGLS regression. Possible explanations for this observation are that kernel regression cannot model the interactions of the risk factors equally well with its few basis functions, and that local regression approaches perform rather poorly close to and especially beyond the boundary of the fitting space, where the data basis thins out or is missing entirely. While the first explanation applies to all three validation sets, the latter applies only to the nested simulations and capital region sets, on which the validation figures are indeed worse than on the Sobol set. While LC regression produces interpretable results with the sets of 22 and 27 basis functions, the more complex LL regression does not in most cases.
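The extrapolation argument can be made explicit with the local-constant estimator, written here in generic Nadaraya–Watson notation rather than in the exact form of (59) and (61):

\hat{m}_{\mathrm{LC}}(x) = \sum_{i=1}^{n} w_i(x)\, y_i ,
\qquad
w_i(x) = \frac{K_h(x - x_i)}{\sum_{j=1}^{n} K_h(x - x_j)} .

Beyond the boundary of the fitting space, all kernel weights K_h(x − x_i) decay towards zero, so the estimate is dominated by the few nearest boundary points and levels off instead of continuing a trend. LL regression fits a local linear function rather than a local constant and therefore extrapolates a local trend, which is consistent with its somewhat better, though still poor, behavior in this region.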

4.7.4. Limitations

On the Sobol and capital region sets, both LC and LL regression behave similarly with the gaussian kernel and 16 basis functions as with the Epanechnikov kernel and 15 basis functions. On the nested simulations set, however, the gaussian kernel with 16 basis functions is the superior choice. Using a uniform kernel with LC regression deteriorates the out-of-sample performance. The results of LC regression furthermore indicate that an extension of the basis function sets from 15 to 27 affects the validation performance only slightly. With the gaussian kernel, switching from 16 to 27 basis functions barely has an impact, and with the Epanechnikov kernel, only the nested simulations and capital region validation performances improve when using 27 as opposed to 15, 16 or 22 basis functions. While increasing the order of the gaussian or Epanechnikov kernel deteriorates the validation figures dramatically, for the uniform kernel the effects can go in both directions. AIC performs worse than LOO-CV when used for bandwidth selection with the gaussian kernel in LC regression. For LC regression, increasing the proportion of fitting points entering bandwidth selection improves all validation figures until a specific threshold is reached; thereafter, the nested simulations and capital region figures deteriorate. For LL regression, no such deterioration is observed.
Overall, we do not see much potential in kernel regression for our practical example compared to most of the previously analyzed regression methods. Nonetheless, in order to achieve comparably good kernel regression results, we consider LL regression more promising than LC regression due to its superior, though still poor, modeling close to and beyond the boundary of the fitting space. We would apply it with the gaussian, Epanechnikov or other similar kernel functions. A high proportion of fitting points for bandwidth selection is recommended, and it might be worth trying alternative, comparably small basis function sets that reflect, for example, the risk factor interactions better than in our examples.

5. Conclusions

For high-dimensional variable selection applications such as the calibration step in the LSMC framework, we have presented various machine learning regression approaches ranging from ordinary and generalized least-squares regression variants over GLM and GAM approaches to multivariate adaptive regression splines and kernel regression approaches. At first, we justified the combinability of the ingredients of the regression routines, such as the estimators and proposed model selection criteria, in a theoretical discourse. Afterwards, we applied numerous configurations of these machine learning routines to the same slightly disguised real-world example in the LSMC framework. With the aid of different validation figures, we analyzed the results, compared the out-of-sample performances and advised the use of certain routine designs.
In our slightly disguised real-world example and the given LSMC setting, the adaptive OLS regression, GLM, GAM and FGLS regression algorithms turned out to be suitable machine learning methods for proxy modeling of life insurance companies, with potential for both performance and computational efficiency gains from fine-tuning model hyperparameters and implementation designs. By contrast, the MARS and kernel regression algorithms were not convincing in our application. In order to study the robustness of our results, the approaches can be repeated on multiple other LSMC examples.
Nonetheless, none of our tested approaches was able to completely eliminate the bias observed in the validation figures or to yield consistent results across the three validation sets. Investigating whether these observations are systematic for the approaches, a result of the Monte Carlo error, or a combination thereof would help further narrow down the circle of recommended regression techniques. In order to assess the variance and bias of the proxy estimates conditional on an outer scenario, seed stability analyses, in which the sets of fitting points are varied, and convergence analyses, in which the sample size is increased, need to be carried out. While such analyses would be computationally very costly, they would provide valuable insights into how to further improve approximation quality, that is, whether additional fitting points are necessary to reflect the underlying CFP model more accurately, whether more suitable functional forms and estimation assumptions are required for a more appropriate proxy modeling, or whether both aspects are relevant. Furthermore, one could deduce from such analyses the sample sizes needed by the different regression algorithms to meet certain validation criteria. Since generating large samples is currently computationally expensive for the industry, algorithms that get by with comparably few fitting points should be preferred.
Picking a suitable calibration algorithm is crucial for capturing the CFP model, and hence the SCR, appropriately. Therefore, if the bias observed in the validation figures indeed indicates issues with the functional forms of our approaches, further research on techniques that do not entail such a bias, or at least a smaller one, is vital. On the one hand, the approaches of this exposition can be fine-tuned and tried in different configurations; on the other hand, further machine learning alternatives can be analyzed, such as the ones mentioned in the introduction and already used in other LSMC applications. Ideally, various approaches like adaptive OLS regression, GLM, GAM and FGLS regression algorithms, artificial neural networks, tree-based methods and support vector machines would be fine-tuned and compared on the same realistic and comprehensive data basis. Since the major challenges of machine learning calibration algorithms are hyperparameter selection and, in some cases, their dependence on randomness, future research should be dedicated to efficient hyperparameter search algorithms and stabilization methods such as ensemble methods.

Author Contributions

Conceptualization, A.-S.K., Z.N. and R.K.; Formal analysis, A.-S.K.; Investigation, A.-S.K.; Methodology, A.-S.K., Z.N. and R.K.; Project administration, A.-S.K.; Resources, Z.N.; Software, A.-S.K.; Supervision, R.K.; Validation, Z.N. and R.K.; Visualization, A.-S.K.; Writing–original draft, A.-S.K. and R.K.; Writing–review and editing, Z.N. and R.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The first author would like to thank Christian Weiß for his valuable comments, which greatly helped to improve the paper. Furthermore, she is grateful to Magdalena Roth, Tamino Meyhöfer and her colleagues, who supported her by providing academic time and computational resources. Additionally, we gratefully acknowledge very constructive comments by two anonymous reviewers.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Ordinary least squares (OLS) proxy function of BEL derived under 150–443 in the adaptive algorithm with the final coefficients. Furthermore, Akaike information criterion (AIC) scores and out-of-sample mean absolute errors (MAEs) in % after each iteration.
k | r_{k,1} | r_{k,2} | r_{k,3} | r_{k,4} | r_{k,5} | r_{k,6} | r_{k,7} | r_{k,8} | r_{k,9} | r_{k,10} | r_{k,11} | r_{k,12} | r_{k,13} | r_{k,14} | r_{k,15} | β̂_{OLS,k} | AIC | v.mae | ns.mae | cr.mae
000000000000000014,718.24437,2514.5573.2314.027
10000000100000007850.17386,7222.4740.8450.913
2100000000000000−269.33375,1442.0652.1391.831
3000001000000000145.21366,5671.6560.4440.496
4000000000000001−5.36358,8941.6471.0060.556
5000000100000000434.04355,7321.6350.8530.469
61000000100000001753.4354,3181.6790.9560.374
700000002000000019,145.78349,7591.2340.4910.628
820000000000000033.33347,7960.9990.340.594
9000001010000000868.25346,4440.9120.3570.602
1000000001000000130.59345,0450.8390.3890.650
111000000000000011.65341,0830.7590.3980.465
1201000000000000086.79339,3600.7180.3940.390
1310000100000000033.35337,7310.5740.6530.512
1400000000100000049.59336,8430.5890.6580.518
1500000000000100071.25335,9800.6280.6780.512
160000001100000002667.92335,3510.6090.6710.503
1710000010000000096.43334,8760.5790.7010.545
18100000010000001−6.31334,4130.5930.720.531
19000000020000001−47.09333,9040.5620.6210.474
2000000000000001048.93333,4470.5650.5970.454
21100000020000000−3,412.68333,1160.5530.5430.407
220000000000000020.02332,8060.5620.4780.358
23200000000000001−0.12332,5470.550.450.381
2400000000000010043.77332,2940.5450.4680.378
25001000000000000118.94332,0420.530.4640.362
26001000010000000−1288.45331,6870.5220.4530.355
27101000000000000−44.72331,4050.5250.4440.343
28000000030000000−24,908.99331,1360.4990.4050.327
29200000010000000−86.88330,5620.5040.3480.268
300000010000000010.55330,3610.5180.4180.264
3100000110000000077.26330,1630.5120.4430.272
3210000000100000024.78329,9880.5080.4430.264
3300000200000000014.33329,8340.4770.4910.286
34010000000000001−0.39329,6880.4770.50.290
3500000000001000028.36329,5500.4760.5020.291
36010000010000000−370.92329,4420.4720.4990.288
37110000000000000−17.9329,1470.4620.5050.301
380001000000000008574.53329,0430.4720.5180.3
39000001010000001−2.17328,9350.4740.510.295
40000000011000000223.91328,8320.4750.5090.291
41000001020000000−1801.73328,7330.4550.4450.248
42100001010000000−102.1327,9270.3720.3450.237
430000001000000010.7327,8580.3680.3530.235
440000000010000010.56327,7920.3660.3520.233
45100100000000000−3034.32327,7290.3650.3560.228
46000100010000000−13,127.81327,6590.3680.3640.227
47100000000001000−17.54327,6030.3680.3660.226
48000000010001000−187.07327,5370.3740.3670.226
49000001110000000−300.54327,4830.3690.3670.230
50100001000000001−0.09327,4320.3680.3910.221
51000002010000000−60.84327,3820.3590.390.228
52001001000000000−20.91327,3310.3520.390.225
53100000000000002−0.0327,2870.3460.3770.206
54000000010000002−0.09327,1490.3390.3570.185
552000010000000001.44327,1050.3150.3210.173
56001000000000001−0.5327,0640.3150.3220.173
57100000000000010−6.06327,0250.3220.3170.175
58000000120000000−6,600.49326,9860.3170.310.172
59100000110000000−407.57326,8230.3080.3020.183
600010000200000003378.82326,7870.3060.3010.183
61101000010000000205.28326,7330.3040.2990.183
62010000001000000−18.73326,7000.3060.2990.182
63001001010000000175.39326,6680.3040.2960.182
64000000000001001−0.2326,6380.3040.2980.181
650100000100000012.45326,6100.3010.2960.183
661100000000000010.11326,5720.2970.2990.180
67200001010000000−13.02326,5450.2920.2860.169
6811000001000000093.69326,5190.2920.2870.172
69010000020000000891.58326,4780.2940.2820.173
70000000110000001−6.21326,4530.2910.2810.175
71000000010010000−112.56326,4280.2890.2810.176
72100000000010000−5.27326,3980.2840.2820.173
731000000300000001129.77326,3740.2760.2640.162
74100000100000001−0.29326,3520.2720.2660.158
75100000011000000−56.54326,3310.2690.2660.157
76200000001000000−3.02326,3130.2710.2660.155
77100001100000000−10.59326,2950.2640.270.151
78010001000000000−6.99326,2780.2640.2750.153
79100002000000000−2.25326,2610.2520.2850.154
80000000002000000−14.77326,2450.2630.3090.157
812100000000000001.95326,2290.2670.3060.155
820101000000000002248.54326,2140.2660.3070.156
83000000030000001−111.77326,2010.2630.3020.158
84100000001000001−0.11326,1870.2620.3020.157
85000000000000101−0.18326,1740.2630.3050.156
8601000101000000045.58326,1610.2650.3030.157
87000100020000000−83,291.89326,1490.2670.3080.156
88001000100000000−56.2326,1370.2670.3080.156
89100000000000100−5.32326,1260.2670.310.156
90000002100000000−10.87326,1160.2670.3130.158
91000100000000001−32.75326,1060.2650.3170.158
92000000020000002−0.09326,0970.2650.3080.151
9301000000000001010.87326,0890.2650.3080.151
94100001110000000−48.93326,0810.2640.3060.148
9500000020000000069.57326,0730.2560.2880.141
96000100030000000−542,688.19326,0660.2560.2890.141
9700000000000002010.44326,0580.2480.2750.136
98000000011000001−1.08326,0510.2480.2760.136
99001000110000000419.05326,0450.2490.2750.136
10001100000000000012.8326,0380.250.2760.136
101000001000010000−3.94326,0330.250.2760.136
102100000020000001−10.12326,0270.2480.2810.138
103200000010000001−0.36326,0170.2440.2830.135
1040010000100000011.74326,0120.2440.2820.136
105000000000000003−0.0326,0060.2420.2680.132
106200000100000000−7.09326,0010.2380.2650.131
107200000110000000−109.46325,9820.2380.2630.129
108000000000010001−0.1325,9770.2370.2630.128
1090100000000010005.76325,9720.2350.2630.129
11010000001001000054.51325,9680.2370.2640.129
111100000120000000−1386.73325,9630.2350.2640.129
112000000001000002−0.0325,9590.2370.2650.13
1130100000010000010.11325,9550.2350.2650.13
1140100010000000010.05325,9510.2340.2660.13
1151010010000000004.3325,9480.2360.2650.127
116100002010000000−19.81325,9440.2370.2620.126
117200002000000000−0.87325,9380.2410.2670.124
118010001010000001−0.36325,9350.2410.2670.124
119011000010000000−80.29325,9310.2410.2670.125
120000000001000100−6.95325,9280.2410.2670.124
121000001000000002−0.0325,9250.2430.2590.121
122000000020010000436.56325,9230.2410.2590.121
123000002000000001−0.03325,9200.2430.2630.121
1240000010010000002.99325,9180.2420.2630.12
125100001010000001−0.59325,9160.2410.2610.119
126200001000000001−0.02325,9080.2470.2650.124
127000001020000001−4.66325,9020.2490.2790.123
128000000130000000−8179.68325,9000.2490.280.124
129000001030000000691.4325,8980.2490.280.123
1301000000000001010.04325,8960.250.2810.122
1310000000001000007.04325,8940.2460.2640.12
132001000000100000−27.72325,8920.2470.2640.119
1332000000000100001.26325,8910.2470.2640.119
134000001000001000−2.67325,8890.2490.2650.118
1351000010000010001.53325,8870.250.2660.119
136000000000000011−0.07325,8850.250.2650.12
13710000001000100040.44325,8840.2510.2650.119
138000000020001000434.5325,8780.2490.2640.119
139000000001001000−5.99325,8770.2480.2640.119
14000000000200100014.64325,8730.2460.2630.12
141000002020000000−119.42325,8710.2470.270.121
1420000000100000030.0325,8700.2480.2710.121
1431000000000010010.07325,8680.2480.2710.121
1440000000100010011.06325,8610.2460.2710.121
145100000110000001−0.74325,8590.2470.2710.121
146000000001000010−5.61325,8580.2460.2710.121
147010000000000011−0.08325,8570.2470.270.121
148000000100000100−37.16325,8550.2470.2710.122
1490000001000001010.41325,8510.2470.2710.122
150010100010000000−7290.99325,8500.2470.2710.122
Table A2. OLS proxy function of available capital (AC) derived under 150–443 in the adaptive algorithm with the final coefficients. Furthermore, AIC scores and out-of-sample MAEs in % after each iteration.
k | r_{k,1} | r_{k,2} | r_{k,3} | r_{k,4} | r_{k,5} | r_{k,6} | r_{k,7} | r_{k,8} | r_{k,9} | r_{k,10} | r_{k,11} | r_{k,12} | r_{k,13} | r_{k,14} | r_{k,15} | β̂_{OLS,k} | AIC | v.mae | ns.mae | cr.mae
0000000000000000745.35391,37560.6297.518257.762
10000000100000005766.61382,61050.40299.306256.789
2100000000000000272.75367,66735.28538.12499.902
30000000000000015.46359,99730.73918.2172.719
4000001000000000128.41356,70530.11925.08829.357
5100000010000000−1750.72355,35430.86728.17321.870
6000000020000000−19,127.27351,00222.94214.94844.668
7200000000000000−33.25349,14719.0312.14242.535
8000000100000000307.32347,77718.22110.92835.420
9000001010000000−868.05346,42316.66211.52735.941
10010000000000000−87.54345,02515.98710.26431.461
11000000010000001−30.51343,57014.85811.18734.502
12100000000000001−1.66339,28213.09212.66923.174
13100001000000000−33.33337,64810.42720.97630.402
14000000000001000−70.63336,84011.08721.59829.972
15000000001000000−41.37336,12011.43621.76430.408
16000000110000000−2666.44335,49511.08821.54329.890
17100000100000000−96.48335,02210.54522.47932.334
181000000100000016.3334,56310.80423.09531.519
1900000002000000147.02334,05810.23219.91328.128
20000000000000010−48.77333,61010.29219.16326.995
211000000200000003412.54333,28110.08317.43824.190
22000000000000002−0.02332,97010.24615.32821.326
232000000000000010.12332,71410.0214.43622.671
24001000000000000−120.68332,4579.83414.28321.608
250010000100000001287.63332,1089.72513.96921.273
2610100000000000044.71331,8329.75513.66120.501
2700000003000000024,899.66331,5699.27512.46219.873
2820000001000000087.04331,0049.29210.75717.022
29000000000000100−43.38330,7429.17111.18316.023
30000001000000001−0.55330,5439.44413.40915.766
31000001100000000−77.35330,3459.32414.20716.192
32100000001000000−25.2330,1619.24614.20315.692
33000002000000000−14.37330,0078.67215.76416.964
340100000000000010.39329,8598.68216.03117.223
35000000000010000−27.8329,7288.66516.1117.264
36000100000000000−8757.49329,6198.87116.5317.005
370000010100000012.17329,5138.93716.27616.790
38010000010000000369.16329,4088.84216.16916.738
3911000000000000017.97329,1098.63716.38717.527
40000000011000000−222.55329,0088.65616.35917.271
410000010200000001791.7328,9108.29714.28214.748
42100001010000000101.23328,1116.78311.11214.144
43000000100000001−0.7328,0416.71311.35514.013
44000000001000001−0.57327,9726.68311.32513.867
451001000000000003083.05327,9056.65411.45613.595
4600010001000000012,863.79327,8376.711.72113.5
4710000000000100017.78327,7806.7111.77713.450
48000000010001000190.46327,7116.82411.81813.468
49000001110000000300.76327,6576.72411.79313.716
501000010000000010.09327,6076.71812.56513.182
5100000201000000060.83327,5576.54312.53313.558
5200100100000000020.91327,5076.41512.5313.394
531000000000000020.0327,4636.31412.11812.252
540000000100000020.08327,3276.17611.48611.049
55200001000000000−1.46327,2845.75110.33910.295
560010000000000010.5327,2425.74610.36710.287
571000000000000106.08327,2035.87110.21110.450
580000001200000006593.98327,1655.789.97310.274
59100000110000000406.73327,0035.6189.72210.897
60001000020000000−3,364.02326,9685.5819.67110.904
61101000010000000−204.12326,9145.5429.62610.921
6201000000100000018.9326,8815.5889.61110.837
63001001010000000−175.17326,8495.5469.51410.817
640000000000010010.21326,8185.549.59710.799
65010000010000001−2.44326,7915.4949.53210.896
66110000000000001−0.11326,7535.4139.61610.708
6720000101000000012.99326,7265.3179.21510.046
68110000010000000−93.57326,7005.3299.25510.231
69010000020000000−890.62326,6605.3559.0910.326
70000000010010000113.04326,6355.3139.09510.357
711000000000100005.23326,6055.2319.10110.164
720000001100000016.2326,5815.1869.06810.265
73100000030000000−1,133.83326,5565.0348.4889.647
741000001000000010.29326,5344.958.589.374
7510000001100000056.56326,5134.9088.5599.323
762000000010000003.02326,4954.9368.5739.223
7710000110000000010.61326,4774.8248.7058.996
780100010000000006.97326,4614.8218.8499.071
791000020000000002.25326,4444.6029.179.162
80210000000000000−1.94326,4294.6889.0698.997
81010100000000000−2,257.4326,4144.6769.0999.070
8200000000200000014.06326,3994.8539.8319.278
831000000010000010.11326,3854.8449.8519.203
840000000000001010.18326,3724.8619.9359.174
85000000030000001111.58326,3584.7969.7699.270
86010001010000000−45.11326,3464.8269.7249.330
8700010002000000082,935.66326,3344.8719.8659.284
8800100010000000056.0326,3224.8679.8629.267
891000000000001005.35326,3114.8579.9389.258
9000000210000000010.88326,3014.8710.0439.414
9100010000000000132.81326,2914.83310.1569.394
9210000111000000048.96326,2834.81210.0859.185
93010000000000010−10.9326,2744.80110.0839.210
940000000200000020.09326,2664.8039.8188.787
95000000200000000−69.45326,2584.6599.258.413
96000100030000000543,840.26326,2514.6639.2698.393
97000000000000020−10.31326,2444.518.8418.101
980000000110000011.07326,2374.5238.8478.091
99001000110000000−417.88326,2314.5318.848.101
100011000000000000−12.92326,2244.5468.8478.081
1010000010000100003.94326,2194.5588.8668.072
10210000002000000110.1326,2134.5139.0128.203
1032000000100000010.36326,2044.4539.0848.035
104001000010000001−1.74326,1984.4459.0638.070
1052000001000000007.09326,1934.3838.9678.008
106200000110000000109.5326,1744.3718.8997.889
1070000000000000030.0326,1694.3328.4547.669
108010000000001000−5.85326,1644.298.4567.689
1090000000000100010.1326,1594.2828.4577.657
110100000010010000−54.88326,1544.3138.4637.689
1111000001200000001380.74326,1504.2918.4897.7
1120000000010000020.0326,1464.3158.4987.751
113010000001000001−0.11326,1424.2878.5017.736
114101001000000000−4.3326,1384.328.4617.558
115010001000000001−0.05326,1354.2998.5147.566
11610000201000000020.09326,1314.328.4177.498
1172000020000000000.87326,1254.3938.5617.371
1180100010100000010.36326,1224.3898.5647.409
11901100001000000079.51326,1184.3948.567.411
1200000010000000020.0326,1154.438.3047.187
1210000000010001006.91326,1134.428.3057.176
122000000020010000−435.81326,1104.398.3017.212
1230000020000000010.03326,1074.4198.457.206
124000001001000000−2.99326,1054.4078.4347.163
1251000010100000010.59326,1034.3948.3667.095
1262000010000000010.02326,0964.5028.4997.382
1270000010200000014.66326,0894.5438.9627.340
128000001030000000−692.59326,0884.5378.9617.248
1290000001300000008097.7326,0864.5398.9957.316
130100000000000101−0.04326,0844.5559.0247.285
1310000010000010002.73326,0824.599.0657.246
132100001000001000−1.53326,0804.6129.0977.280
133200000000010000−1.28326,0784.6169.0867.251
1340000000000000110.07326,0774.6079.0557.287
135000000000100000−6.96326,0754.5338.5277.230
13600100000010000027.74326,0734.5568.527.115
137000002020000000122.08326,0714.5718.7467.171
1380000000010010006.0326,0704.5568.7457.190
139000000002001000−14.5326,0664.5338.6997.199
140100000000001001−0.07326,0644.5328.7227.227
141000000010001001−1.05326,0574.5078.7337.250
1421000001100000010.74326,0564.5158.7197.238
1430000000010000105.71326,0544.5038.7067.263
144100000010001000−39.87326,0534.4998.7157.244
145000000020001000−431.71326,0474.478.6697.215
146000000010000003−0.0326,0464.4888.6987.207
1470100000000000110.08326,0454.4948.6947.223
14800000010000010037.33326,0434.4968.7037.236
149000000100000101−0.42326,0394.5088.7067.253
1500101000100000007224.25326,0384.5128.7127.265
Table A3. OLS proxy function of BEL derived under 300–886 in the adaptive algorithm with the final coefficients. Furthermore, AIC scores and out-of-sample MAEs in % after each iteration.
k | r_{k,1} | r_{k,2} | r_{k,3} | r_{k,4} | r_{k,5} | r_{k,6} | r_{k,7} | r_{k,8} | r_{k,9} | r_{k,10} | r_{k,11} | r_{k,12} | r_{k,13} | r_{k,14} | r_{k,15} | β̂_{OLS,k} | AIC | v.mae | ns.mae | cr.mae
000000000000000014,689.75437,2514.5573.2314.027
10000000100000007990.98386,7222.4740.8450.913
2100000000000000−274.24375,1442.0652.1391.831
3000001000000000145.73366,5671.6560.4440.496
4000000000000001−5.11358,8941.6471.0060.556
5000000100000000416.79355,7321.6350.8530.469
61000000100000002332.91354,3181.6790.9560.374
700000002000000024,914.36349,7591.2340.4910.628
820000000000000049.42347,7960.9990.340.594
9000001010000000859.49346,4440.9120.3570.602
1000000001000000129.5345,0450.8390.3890.65
111000000000000011.71341,0830.7590.3980.465
1201000000000000091.65339,3600.7180.3940.39
1310000100000000036.34337,7310.5740.6530.512
1400000000100000051.78336,8430.5890.6580.518
1500000000000100068.02335,9800.6280.6780.512
160000001100000002661.47335,3510.6090.6710.503
17100000100000000109.14334,8760.5790.7010.545
18100000010000001−12.63334,4130.5930.720.531
19000000020000001−114.48333,9040.5620.6210.474
2000000000000001035.4333,4470.5650.5970.454
21100000020000000−4570.15333,1160.5530.5430.407
220000000000000020.02332,8060.5620.4780.358
23200000000000001−0.26332,5470.550.450.381
2400000000000010047.17332,2940.5450.4680.378
25001000000000000123.47332,0420.530.4640.362
26001000010000000−1,240.44331,6870.5220.4530.355
27101000000000000−43.82331,4050.5250.4440.343
28000000030000000−32,661.61331,1360.4990.4050.327
29200000010000000−140.9330,5620.5040.3480.268
300000010000000010.56330,3610.5180.4180.264
3100000110000000087.33330,1630.5120.4430.272
3210000000100000025.31329,9880.5080.4430.264
3300000200000000014.22329,8340.4770.4910.286
34010000000000001−0.44329,6880.4770.50.29
3500000000001000026.88329,5500.4760.5020.291
36010000010000000−391.81329,4420.4720.4990.288
37110000000000000−18.58329,1470.4620.5050.301
3800010000000000011,959.32329,0430.4720.5180.3
39000001010000001−2.15328,9350.4740.510.295
40000000011000000228.32328,8320.4750.5090.291
41000001020000000−1938.37328,7330.4550.4450.248
42100001010000000−112.83327,9270.3720.3450.237
430000001000000010.71327,8580.3680.3530.235
440000000010000010.72327,7920.3660.3520.233
45100100000000000−4230.29327,7290.3650.3560.228
46000100010000000−10,720.3327,6590.3680.3640.227
47100000000001000−18.39327,6030.3680.3660.226
48000000010001000−212.78327,5370.3740.3670.226
49000001110000000−177.64327,4830.3690.3670.23
50100001000000001−0.09327,4320.3680.3910.221
51000002010000000−57.4327,3820.3590.390.228
52001001000000000−23.55327,3310.3520.390.225
53100000000000002−0.0327,2870.3460.3770.206
54000000010000002−0.08327,1490.3390.3570.185
552000010000000001.15327,1050.3150.3210.173
56001000000000001−0.65327,0640.3150.3220.173
57100000000000010−4.41327,0250.3220.3170.175
58000000120000000−6095.97326,9860.3170.310.172
59100000110000000−332.88326,8230.3080.3020.183
600010000200000003624.77326,7870.3060.3010.183
61101000010000000191.46326,7330.3040.2990.183
62010000001000000−17.49326,7000.3060.2990.182
63001001010000000183.68326,6680.3040.2960.182
64000000000001001−0.2326,6380.3040.2980.181
650100000100000012.55326,6100.3010.2960.183
661100000000000010.13326,5720.2970.2990.18
67200001010000000−29.57326,5450.2920.2860.169
6811000001000000095.55326,5190.2920.2870.172
69010000020000000922.48326,4780.2940.2820.173
70000000110000001−6.22326,4530.2910.2810.175
71000000010010000−134.95326,4280.2890.2810.176
72100000000010000−4.47326,3980.2840.2820.173
73100000030000000−26,186.72326,3740.2760.2640.162
74100000100000001−0.29326,3520.2720.2660.158
75100000011000000−58.01326,3310.2690.2660.157
76200000001000000−3.11326,3130.2710.2660.155
77100001100000000−2.1326,2950.2640.270.151
78010001000000000−8.73326,2780.2640.2750.153
79100002000000000−1.93326,2610.2520.2850.154
80000000002000000−14.9326,2450.2630.3090.157
81210000000000000−1.22326,2290.2670.3060.155
820101000000000003341.29326,2140.2660.3070.156
83000000030000001−43.84326,2010.2630.3020.158
84100000001000001−0.12326,1870.2620.3020.157
85000000000000101−0.18326,1740.2630.3050.156
8601000101000000067.19326,1610.2650.3030.157
87000100020000000−432,954.98326,1490.2670.3080.156
88001000100000000−34.58326,1370.2670.3080.156
89100000000000100−5.1326,1260.2670.310.156
90000002100000000−10.78326,1160.2670.3130.158
91000100000000001−66.99326,1060.2650.3170.158
92000000020000002−0.09326,0970.2650.3080.151
930100000000000100.35326,0890.2650.3080.151
94100001110000000−93.83326,0810.2640.3060.148
9500000020000000070.45326,0730.2560.2880.141
96000100030000000−1,073,454.04326,0660.2560.2890.141
97000000000000020−21.59326,0580.2480.2750.136
98000000011000001−1.1326,0510.2480.2760.136
99001000110000000398.94326,0450.2490.2750.136
10001100000000000022.03326,0380.250.2760.136
101000001000010000−4.12326,0330.250.2760.136
1021000000200000011.3326,0270.2480.2810.138
1032000000100000010.2326,0170.2440.2830.135
104100000030000001351.11326,0090.2450.2890.138
1050010000100000011.09326,0030.2440.2880.139
106000000000000003−0.0325,9970.2420.2740.136
107200000100000000−7.78325,9920.2390.2710.134
108200000110000000−126.28325,9730.2380.2690.132
109000000000010001−0.1325,9680.2380.2690.131
11010000001001000057.61325,9630.2390.2690.132
1110100000000010009.91325,9590.2370.2690.132
112100000120000000−1698.92325,9540.2360.270.132
113000000001000002−0.01325,9500.2370.270.133
1140100000010000010.1325,9460.2360.2710.133
1150100010000000010.05325,9420.2340.2720.132
1161010010000000005.0325,9390.2360.2710.129
117100002010000000−17.6325,9350.2380.2680.127
118200002000000000−0.79325,9290.2420.2730.128
119010001010000001−0.55325,9250.2410.2730.128
120011000010000000−119.81325,9220.2420.2730.129
121000000001000100−7.16325,9190.2410.2730.128
122000001000000002−0.0325,9160.2430.2650.124
123000000020010000497.02325,9140.2410.2650.125
124000002000000001−0.03325,9110.2430.2690.125
125100001010000001−0.58325,9090.2420.2670.123
126200001000000001−0.02325,9010.2480.2710.129
127000001020000001−4.48325,8950.2510.2860.129
1280000010010000002.93325,8930.250.2850.128
129000000130000000−5069.15325,8910.250.2860.128
1301000000000001010.03325,8890.2510.2870.127
1310000010300000002631.07325,8870.2510.2870.125
13200000000010000030.03325,8850.2460.270.124
133001000000100000−27.79325,8830.2480.270.123
134000001000001000−2.68325,8810.2490.2710.122
1351000010000010002.18325,8790.2510.2720.123
136000000000000011−0.07325,8780.250.2710.124
13710000001000100052.06325,8760.2510.2720.123
138000000020001000507.79325,8700.250.270.123
1390000000010010000.09325,8690.2480.270.123
14000000000200100014.53325,8650.2460.2690.123
1410000000100000030.0325,8640.2470.270.122
1422000000000100001.48325,8620.2470.2690.121
143000002020000000−98.06325,8610.2480.2760.122
144100000110000001−0.68325,8590.2480.2760.122
1451000000000010010.08325,8580.2480.2760.122
1460000000100010011.1325,8500.2470.2770.122
147000000001000010−5.64325,8490.2470.2760.123
148010000000000011−0.08325,8470.2470.2760.123
14910010000000000120.58325,8460.2460.2770.123
150000100010000001−60.89325,8410.2420.2740.123
151000000100000100−26.95325,8400.2420.2750.123
1520000001000001010.42325,8350.2430.2750.123
153010100010000000−10,592.62325,8340.2430.2750.123
1542000000000000100.93325,8330.2430.2750.125
1551000000010010002.96325,8320.2440.2750.124
156000001001001000−3.87325,8300.2440.2750.125
157000000200000100−68.29325,8290.2430.2770.125
158000100100000000−9773.54325,8280.2430.2780.125
159000100100000001120.51325,8220.2420.2780.125
1601000000000000110.03325,8210.2430.2780.127
161010100000000001−19.68325,8200.2430.2780.127
162000000000200000−24.62325,8190.240.2610.127
1630000000010000030.0325,8180.2390.2610.128
164000000000010010−5.28325,8170.2390.2620.128
1651100010000000002.36325,8160.240.2620.129
166110001000000001−0.02325,8140.2380.2640.129
167111000000000000−5.06325,8130.2380.2640.129
16810100101000000020.18325,8120.2380.2630.129
169110100000000000−461.05325,8120.2390.2640.130
1700100000010010006.14325,8110.2380.2650.130
1710001000200000012708.64325,8100.2370.2650.130
1720001000300000019307.25325,8050.2390.2650.129
173011000000000001−0.17325,8050.2380.2650.129
1740100000000000205.94325,8040.2380.2640.128
175010000000001001−0.07325,8040.2380.2640.127
176001000120000000−1367.33325,8030.2380.2640.128
1770001000000010001133.78325,8030.2370.2640.128
178110000000001000−1.86325,8020.2370.2640.128
1793000000000000000.99325,8020.2410.2740.131
180300000000000001−0.01325,7660.2410.30.149
181300001000000000−0.68325,7440.2480.3350.172
182300000010000000−70.02325,7270.2450.3260.157
183200000020000000−1883.77325,7000.2380.3130.144
184400000000000000−1.21325,6720.2310.3270.173
185000000040000000−157,391.76325,6550.2250.3090.175
1860000000400000012127.74325,6440.2210.3030.176
18720000002000000121.17325,5830.2060.2960.190
1883000000100000010.62325,5240.1980.2680.164
1890001000400000005,216,336.05325,5150.1990.270.166
190300000100000000−0.54325,5060.2010.2750.173
1914000000000000010.01325,5000.1950.2810.184
192200000120000000136.68325,4990.1930.2790.182
193000000210000000−526.83325,4980.1940.280.182
194100000200000000−32.63325,4940.1920.270.178
195000000220000000−2,791.14325,4920.190.2610.176
19620000020000000011.06325,4910.1910.2650.178
1970010010000000010.09325,4910.190.2650.179
19800200000000000013.23325,4900.1860.2580.178
199002000100000000143.48325,4880.1870.2610.179
2002100010000000000.46325,4880.1860.2620.181
2012000000000010000.98325,4870.1850.2620.181
2020000000001000108.97325,4870.1850.2630.180
203000100040000001−33,222.1325,4870.1840.2630.179
2042100000000000010.01325,4870.1840.2640.180
205310000000000000−0.32325,4870.1840.2630.178
2064100000000000000.2325,4860.1830.2640.177
207200001100000000−2.44325,4860.1850.2650.179
208300001100000000−1.76325,4850.1840.2610.173
209200001110000000−12.48325,4820.1830.260.173
2102000020100000003.93325,4820.1840.2580.170
211000002030000000−495.92325,4810.1840.2570.168
212000001120000000−434.12325,4810.1850.260.169
213000001130000000−2854.58325,4790.1850.260.167
2142000000100100006.58325,4790.1840.2610.167
2151000000000000207.08325,4790.1830.2570.167
216001000000000010−20.06325,4790.1840.2570.167
21710100000000001011.9325,4680.1860.2570.166
2180010000000000110.2325,4680.1860.2570.166
21900000101000100018.33325,4680.1860.2570.165
2200000000000000309.56325,4680.1850.2580.165
22100000000000004037.24325,4630.1940.2650.168
22201000000000003017.46325,4600.1960.2650.168
223100000000000030−5.47325,4600.1940.2660.166
224100000000000040−11.21325,4590.1940.2680.168
Table A4. Out-of-sample validation figures of the OLS proxy function of BEL under 150–443 after each tenth iteration.
k | v.mae | v.mae_a | v.res | v.mae_0 | v.res_0 | ns.mae | ns.mae_a | ns.res | ns.mae_0 | ns.res_0 | cr.mae | cr.mae_a | cr.res | cr.mae_0 | cr.res_0
04.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
100.8390.802021.4681040.3890.3762321.6591130.6500.6368927.112179
200.5650.540−1016.780820.5970.577−758.27420.4540.445−4010.08338
300.5180.496117.5011000.4180.404−477.970370.2640.259113.37885
400.4750.454−1016.888980.5090.492−666.234270.2910.285−2610.49768
500.3680.352−1513.268780.3910.378−506.060290.2210.217−910.67469
600.3060.293−1710.760620.3010.290−365.863290.1830.179510.65169
700.2910.278−1810.451600.2810.272−336.060300.1750.171810.95872
800.2630.251−239.389540.3090.298−414.837220.1570.154−48.94559
900.2670.256−249.196540.3130.303−424.689220.1580.155−78.58757
1000.2500.239−189.152530.2760.266−354.637220.1360.13308.60657
1100.2370.226−188.494480.2640.255−344.144180.1290.126−27.63450
1200.2410.230−168.896500.2670.258−344.153180.1240.122−27.67951
1300.2500.239−189.839570.2810.272−374.810240.1220.120−18.90059
1400.2460.235−159.855570.2630.254−334.809240.1200.11718.82258
1500.2470.237−149.924570.2710.262−354.612220.1220.120−18.53756
Table A5. Out-of-sample validation figures of the OLS proxy function of AC under 150–443 after each tenth iteration.
k | v.mae | v.mae_a | v.res | v.mae_0 | v.res_0 | ns.mae | ns.mae_a | ns.res | ns.mae_0 | ns.res_0 | cr.mae | cr.mae_a | cr.res | cr.mae_0 | cr.res_0
060.6203.178−296100.000−20797.5182.936−453100.000−369257.7624.251−653100.000−568
1015.9870.838−129.161−11010.2640.309−632.492−11931.4610.519−6731.704−180
2010.2920.5401021.029−8219.1630.5777512.240−2126.9950.4453913.324−57
309.4440.495−121.971−10013.4090.4044715.583−5615.7660.260−118.759−105
408.6560.4541021.197−9816.3590.4926712.740−4617.2710.2852615.434−87
506.7180.3521516.655−7812.5650.3785012.938−4713.1820.217915.666−88
605.5810.2931713.506−629.6710.2913612.985−4810.9040.180−515.640−88
705.3130.2791913.026−599.0950.2743413.289−4910.3570.171−815.975−90
804.6880.2462111.326−519.0690.2733611.131−418.9970.148013.590−77
904.8700.2552411.525−5310.0430.3024210.995−419.4140.155713.285−75
1004.5460.2381811.471−538.8470.2663511.041−418.0810.133013.308−76
1104.3130.2261810.650−488.4630.255349.999−377.6890.127212.181−69
1204.4300.2321611.350−518.3040.2503310.596−397.1870.119−112.763−73
1304.5550.2391812.345−579.0240.2723711.491−427.2850.120113.663−78
1404.5320.2381512.470−578.7220.2633511.282−427.2270.119013.448−76
1504.5120.2371412.459−578.7120.2623511.136−417.2650.120113.242−75
Table A6. Out-of-sample validation figures of the OLS proxy function of BEL under 300–886 after each tenth and the final iteration.
k | v.mae | v.mae_a | v.res | v.mae_0 | v.res_0 | ns.mae | ns.mae_a | ns.res | ns.mae_0 | ns.res_0 | cr.mae | cr.mae_a | cr.res | cr.mae_0 | cr.res_0
04.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
100.8390.802021.4681040.3890.3762321.6591130.6500.6368927.112179
200.5650.540−1016.780820.5970.577−758.27420.4540.445−4010.08338
300.5180.496117.5011000.4180.404−477.970370.2640.259113.37885
400.4750.454−1016.888980.5090.492−666.234270.2910.285−2610.49768
500.3680.352−1513.268780.3910.378−506.060290.2210.217−910.67469
600.3060.293−1710.760620.3010.290−365.863290.1830.179510.65169
700.2910.278−1810.451600.2810.272−336.060300.1750.171810.95872
800.2630.251−239.389540.3090.298−414.837220.1570.154−48.94559
900.2670.256−249.196540.3130.303−424.689220.1580.155−78.58757
1000.2500.239−189.152530.2760.266−354.637220.1360.13308.60657
1100.2390.229−189.132520.2690.260−354.577220.1320.129−18.35855
1200.2420.231−169.519540.2730.263−354.569210.1290.126−18.38055
1300.2510.240−1810.506610.2870.277−375.421270.1270.12509.72464
1400.2460.235−1510.530610.2690.260−345.329270.1230.12029.52663
1500.2420.232−1410.556610.2740.265−355.119260.1230.12009.26161
1600.2430.232−1510.483600.2780.268−365.018250.1270.12409.14460
1700.2380.228−1310.140580.2650.256−334.968240.1300.12728.88459
1800.2410.230−1210.128570.3000.290−374.552180.1490.14628.71658
1900.2010.192−136.458320.2750.266−334.124−20.1730.169−44.72127
2000.1860.178−96.111290.2620.254−294.460−40.1810.17734.92027
2100.1840.176−96.210300.2580.249−284.337−30.1700.16734.84628
2200.1850.177−86.433320.2580.250−284.286−30.1650.16134.85028
2240.1940.186−96.659340.2680.259−304.200−20.1680.16515.00729
Table A7. Out-of-sample validation figures of the derived OLS proxy functions of BEL under 150–443 and 300–886 after the final iteration, based on three different sets of validation value estimates. The first set of validation value estimates emerges from pointwise subtraction of 1.96 times the standard errors from the original validation values; the second set is the original set; the third set is the addition counterpart of the first.
k | v.mae | v.mae_a | v.res | v.mae_0 | v.res_0 | ns.mae | ns.mae_a | ns.res | ns.mae_0 | ns.res_0 | cr.mae | cr.mae_a | cr.res | cr.mae_0 | cr.res_0
150–443 figures based on validation values minus 1.96 times standard errors
1500.2860.273−309.878570.3300.319−463.915160.1510.148−137.47349
150–443 figures based on validation values
1500.2470.237−149.924570.2710.262−354.612220.1220.120−18.53756
150–443 figures based on validation values plus 1.96 times standard errors
1500.2310.22119.977570.2190.212−245.473280.1300.127119.59164
300–886 figures based on validation values minus 1.96 times standard errors
2240.2360.225−246.757340.3250.314−414.610−80.1910.187−114.30722
300–886 figures based on validation values
2240.1940.186−96.659340.2680.259−304.200−20.1680.16515.00729
300–886 figures based on validation values plus 1.96 times standard errors
2240.1840.17776.625350.2180.211−193.98240.1730.169135.81337
Table A8. AIC scores and out-of-sample validation figures of the gaussian generalized linear models (GLMs) of BEL with identity, inverse and log link functions under 150–443 after each tenth iteration.
k | AIC | v.mae | v.mae_a | v.res | v.mae_0 | v.res_0 | ns.mae | ns.mae_a | ns.res | ns.mae_0 | ns.res_0 | cr.mae | cr.mae_a | cr.res | cr.mae_0 | cr.res_0
Gaussian with identity link
0437,2514.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10345,0450.8390.802021.4681040.3890.3762321.6591130.6500.6368927.112179
20333,4470.5650.540−1016.780820.5970.577−758.27420.4540.445−4010.08338
30330,3610.5180.496117.5011000.4180.404−477.970370.2640.259113.37885
40328,8320.4750.454−1016.888980.5090.492−666.234270.2910.285−2610.49768
50327,4320.3680.352−1513.268780.3910.378−506.060290.2210.217−910.67469
60326,7870.3060.293−1710.760620.3010.290−365.863290.1830.179510.65169
70326,4530.2910.278−1810.451600.2810.272−336.060300.1750.171810.95872
80326,2450.2630.251−239.389540.3090.298−414.837220.1570.154−48.94559
90326,1160.2670.256−249.196540.3130.303−424.689220.1580.155−78.58757
100326,0380.2500.239−189.152530.2760.266−354.637220.1360.13308.60657
110325,9680.2370.226−188.494480.2640.255−344.144180.1290.126−27.63450
120325,9280.2410.230−168.896500.2670.258−344.153180.1240.122−27.67951
130325,8960.2500.239−189.839570.2810.272−374.810240.1220.120−18.90059
140325,8730.2460.235−159.855570.2630.254−334.809240.1200.11718.82258
150325,8500.2470.237−149.924570.2710.262−354.612220.1220.120−18.53756
Gaussian with inverse link
0437,2514.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10343,4261.0360.990133.7051920.6500.628−6321.4811140.3910.3824433.482221
20334,9850.6890.659−621.3131180.5150.498−6210.319490.3240.317−416.493107
30331,4260.5120.490−1618.8361090.3930.380−4512.277650.2480.2431518.960125
40328,8750.4330.414−514.354820.3170.306−269.312470.2940.2882615.18899
50327,8770.3830.366−812.959760.2850.276−248.961460.2710.2652514.59295
60327,2740.3370.323−1612.572730.3280.316−377.636380.2190.2151013.08785
70326,8750.2900.277−1411.248640.2710.261−326.233310.1560.153610.58870
80326,6030.2590.248−169.976580.2870.278−385.042220.1580.155−88.01452
90326,3900.2540.243−208.462470.3920.379−514.45110.2200.215−175.67636
100326,2250.2700.258−218.884490.3930.379−514.45450.2190.215−126.73244
110326,1520.2720.260−208.558470.3750.363−484.44140.2080.204−106.54542
120326,0940.2670.255−198.418470.3800.367−494.41430.2090.205−126.19440
130326,0580.2660.254−198.638480.3790.367−494.32940.2030.199−116.36241
140325,9820.2580.247−178.353450.3630.351−464.38020.1970.193−106.05938
150325,9520.2580.247−168.468450.3530.341−444.28230.1920.188−86.08839
Gaussian with log link
0437,2514.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10342,3250.8790.8402625.1711320.4220.408−1715.628740.5300.5195222.034143
20334,4170.6610.632−522.4741250.5320.514−6410.764510.3300.323−317.317112
30330,9010.5600.536−321.7801260.4740.458−5511.199590.2660.261317.802117
40328,4440.4110.393−1013.639780.3150.304−298.610440.2640.2581914.16292
50327,5740.3410.326−1612.936750.3340.323−358.294420.2620.2571213.64289
60327,0290.3150.302−1711.991690.3120.301−367.024360.1920.1881012.46582
70326,6370.2790.267−1610.620610.2660.257−316.142310.1620.158910.79771
80326,4490.2660.254−2110.069590.3040.294−405.195250.1530.149−49.23461
90326,2870.2730.261−229.742570.3000.290−405.082250.1410.138−58.99059
100326,0820.2690.257−238.052450.3700.358−484.09460.2100.205−136.31441
110326,0210.2580.247−198.043440.3430.331−434.10250.1980.193−76.38141
120325,9500.2520.241−177.891420.3290.318−414.08630.1910.187−75.88337
130325,8810.2510.240−188.049450.3590.347−464.23820.1940.190−105.92438
140325,8490.2450.234−177.978440.3400.328−434.04540.1830.179−76.13140
150325,8230.2400.229−157.980440.3160.305−384.01460.1700.167−26.43442
Table A9. AIC scores and out-of-sample validation figures of the gamma GLMs of BEL with identity, inverse and log link functions under 150–443 after each tenth iteration.
k | AIC | v.mae | v.mae_a | v.res | v.mae_0 | v.res_0 | ns.mae | ns.mae_a | ns.res | ns.mae_0 | ns.res_0 | cr.mae | cr.mae_a | cr.res | cr.mae_0 | cr.res_0
Gamma with identity link
0437,2434.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10345,6050.8720.834123.4851140.3150.304619.8611050.5300.5196825.266167
20333,9110.5530.529−1216.265790.5990.579−768.26800.4640.454−439.89534
30330,7070.5030.481017.404990.4250.411−497.754350.2670.262−212.95982
40328,5890.3760.359−1313.317760.3410.330−397.187350.2380.233612.34180
50327,6680.3480.333−1513.173770.3560.344−446.656340.2270.222−411.34874
60327,1350.3050.292−1611.190650.3040.294−376.059300.1750.172310.84371
70326,6860.2730.261−159.730550.2570.249−305.364260.1650.16199.92865
80326,4610.2680.257−219.471540.2870.277−365.151250.1490.14629.54963
90326,3280.2590.248−238.889520.3040.293−404.373200.1480.145−68.25555
100326,2460.2380.227−208.321480.2620.253−344.279190.1370.134−17.84552
110326,1840.2330.223−188.045450.2550.246−333.907160.1300.127−17.18247
120326,1350.2280.218−168.191460.2530.245−333.696150.1290.126−26.87045
130326,0930.2440.233−179.530550.2720.263−354.628220.1240.12208.59657
140326,0680.2380.228−179.416540.2710.261−354.523220.1250.123−18.37155
150326,0410.2360.226−149.329530.2600.251−334.321200.1210.11818.20654
Gamma with inverse link
0437,2434.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10343,9691.0370.991033.8181930.6610.639−6421.6011150.3970.3894433.752223
20335,4950.6790.649−720.8881150.5300.512−659.637430.3350.328−915.41099
30332,6460.6270.600−926.0981520.6210.600−8212.361640.3460.339−2418.470122
40329,1920.4090.391−1014.061810.3170.306−279.719500.2890.2832315.405101
50328,1140.3390.324−1212.599730.3130.302−308.084400.2710.2651513.14685
60327,5130.3280.313−1612.247710.2940.284−298.341430.2400.2351813.90291
70327,1150.2850.272−1211.127640.2510.243−286.463330.1660.1621110.91572
80326,7950.2520.241−178.376450.3150.305−394.06990.1960.192−86.41640
90326,6150.2500.239−208.113450.3840.371−514.41400.2180.213−165.47834
100326,4450.2630.252−208.724480.3820.369−494.41050.2110.206−116.59543
110326,3700.2660.255−198.251450.3690.357−474.49420.2050.201−96.28840
120326,3100.2580.247−178.003440.3570.345−454.43520.1960.192−86.08739
130326,2770.2590.248−178.331470.3570.344−454.35640.1870.183−76.50942
140326,2460.2620.250−178.583480.3570.345−454.30450.1830.179−76.62043
150326,2220.2540.243−158.410460.3270.316−404.11170.1710.167−36.72244
Gamma with log link
0437,2434.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
1388,2342.3652.261−467.4942770.7730.7472254.2142871.1931.16817065.932435
10342,9420.8700.8322124.9981310.4400.425−2415.145710.5050.4944321.396138
20334,8810.6490.621−519.8991100.5190.501−658.283360.3120.306−1114.10590
30331,2270.5440.520−421.7521260.4790.463−5711.010580.2620.257017.458115
40328,7270.3740.357−1014.009810.3290.318−338.553430.2680.2631513.99091
50327,8060.3280.313−1612.750740.3270.316−338.325420.2720.2661413.77990
60327,2700.3020.289−1511.825680.2970.287−337.147370.1970.1931412.63783
70326,8660.2640.253−1510.159580.2490.241−286.071310.1650.1621210.69370
80326,6690.2550.244−199.819570.2880.279−375.085240.1460.143−29.09060
90326,4330.2660.254−238.891510.3270.316−454.079150.1710.167−127.35348
100326,3020.2650.253−237.839440.3610.349−474.03050.2050.201−126.24640
110326,2240.2560.244−188.139450.3350.324−414.21180.1910.187−37.04346
120326,1470.2500.239−187.817430.3400.328−434.12240.1880.184−66.24741
130326,1110.2470.236−177.750430.3410.329−434.11530.1860.183−76.06039
140326,0500.2470.236−177.730430.3360.324−424.07340.1790.176−66.11740
150326,0220.2430.232−157.820430.3230.312−404.04030.1740.170−46.01039
Table A10. AIC scores and out-of-sample validation figures of the inverse gaussian GLMs of BEL with identity, inverse, log and 1/μ² link functions under 150–443 after each tenth iteration.
k | AIC | v.mae | v.mae_a | v.res | v.mae_0 | v.res_0 | ns.mae | ns.mae_a | ns.res | ns.mae_0 | ns.res_0 | cr.mae | cr.mae_a | cr.res | cr.mae_0 | cr.res_0
Inverse gaussian with identity link
0437,3384.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10346,1320.8710.833123.5591150.3140.304720.2691070.5340.5237025.673169
20334,4300.5490.524−1315.996770.5990.579−778.273−10.4680.458−449.80932
30331,4530.4880.467−415.939890.5170.499−676.532110.4130.405−409.28038
40328,9850.3700.354−1313.279760.3380.327−397.193350.2380.233612.30180
50328,0640.3320.317−1512.727740.3380.327−406.871350.2320.227111.66476
60327,5330.2980.285−1710.994640.3040.294−375.868290.1720.168310.64669
70327,0820.2740.262−159.387530.2430.235−275.535270.1710.1671310.25367
80326,8490.2670.255−209.426540.2780.268−345.271250.1520.14859.78365
90326,7150.2470.236−218.546490.2750.266−354.399200.1400.137−18.30255
100326,6300.2360.225−207.879450.2620.253−343.979160.1400.137−27.24948
110326,5640.2250.215−177.728430.2430.235−313.850150.1290.12606.95846
120326,5070.2370.226−188.776500.2700.260−354.120190.1300.127−37.71051
130326,4750.2400.230−179.225530.2650.256−344.516210.1230.12008.40055
140326,4470.2410.230−169.415540.2700.261−354.543210.1240.122−18.42656
150326,3520.2490.238−179.375540.3370.326−444.224120.1500.146−47.93052
Inverse gaussian with inverse link
0437,3384.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10344,4581.1291.079−2535.6852021.1381.099−15014.423630.6390.626−6322.713149
20336,0040.6820.652−521.0111170.5340.516−678.866410.3210.314−1214.89595
30333,0600.6260.598−1024.4631420.6230.602−8310.859550.3760.369−3116.233107
40329,6320.4120.394−1415.912930.3450.333−2912.096640.3180.3112818.446121
50328,5150.3350.320−1212.387710.3050.295−298.122400.2760.2701813.33386
60327,9160.3210.307−1511.970700.2860.276−278.385440.2470.2412013.97391
70327,5430.2780.266−1210.488600.2460.238−286.106310.1640.161910.33167
80327,1960.2490.238−178.227450.3080.297−384.03790.1930.189−76.38140
90327,0120.2470.236−198.016440.3760.363−494.390−10.2120.207−155.40733
100326,8370.2610.250−208.469460.3750.363−484.42840.2080.204−106.56943
110326,7620.2620.250−188.090440.3650.353−464.50520.2010.197−86.24240
120326,6990.2590.248−188.106450.3670.355−474.40220.1920.188−96.08239
130326,6670.2590.247−177.987440.3520.340−444.30320.1870.183−85.95838
140326,6420.2580.246−168.243460.3400.328−424.22860.1730.169−56.60243
150326,6170.2530.242−158.152440.3240.313−394.14850.1720.169−36.47642
Inverse gaussian with log link
0437,3384.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10343,5300.8660.8281924.9251310.4500.435−2814.940690.4940.4843921.122136
20335,3550.6440.616−519.6531090.5260.509−677.947330.3180.311−1413.49085
30331,6750.5360.512−421.6971250.4820.465−5810.885570.2620.256−217.245113
40329,1400.3660.350−1013.913800.3250.314−328.604440.2690.2641614.01191
50328,1900.3240.310−1612.640730.3190.308−328.482430.2740.2681613.96691
60327,6660.2960.283−1511.626670.2900.280−317.181370.2010.1971512.69583
70327,2630.2610.250−159.948570.2440.236−276.042300.1720.1681210.53169
80327,0610.2510.240−189.746560.2840.275−374.988240.1450.142−18.96459
90326,8250.2630.251−238.769510.3210.310−444.059150.1680.165−117.31648
100326,6950.2610.249−227.727430.3520.340−454.04860.2030.199−106.34141
110326,5980.2390.229−177.408400.3430.332−434.444−10.1850.181−75.57235
120326,5300.2490.238−187.520410.3430.331−434.24710.1910.187−75.92838
130326,4940.2460.235−177.602420.3370.326−434.10820.1830.179−65.96439
140326,4710.2460.235−177.772430.3320.321−424.06840.1770.173−66.09239
150326,4130.2470.237−157.716420.3240.313−404.09520.1720.168−45.89238
Inverse gaussian with 1/μ² link
0437,3384.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10344,4670.9850.941−1431.4731760.9930.959−13012.573460.5610.549−5218.986124
20336,8150.6680.639−721.4041220.5910.571−759.506380.3720.364−2214.52191
30331,7920.4780.457−515.821900.3670.354−2810.573530.3730.3653317.496114
40330,0890.4210.403−115.183890.2950.285−1910.660560.3160.3093416.657109
50329,0200.3760.359−1014.443850.3000.290−2111.439600.3200.3133417.553115
60328,4520.3300.316−1212.905750.2900.280−249.196480.2730.2672514.95298
70327,9250.3160.302−1611.733690.3010.291−357.090350.2000.195611.70176
80327,6390.2620.250−188.128430.2980.288−354.425110.2080.203−17.20545
90327,2650.2780.266−228.311460.3550.343−444.38390.2020.197−77.09046
100327,1480.2880.275−228.166440.3570.345−444.40880.2070.203−67.03946
110327,0780.2740.262−207.943430.3540.342−444.45140.1960.192−76.43441
120326,9200.2690.257−188.350460.3740.361−474.57930.1980.193−96.41941
130326,8870.2700.258−188.437470.3600.348−444.54460.1960.192−47.15146
140326,8070.2670.255−188.193450.3450.333−434.31850.1880.184−56.66143
150326,7780.2620.250−168.258440.3320.321−414.23850.1770.174−36.51842
Table A11. AIC scores and out-of-sample validation figures of the gaussian GLMs of BEL with identity, inverse and log link functions under 300–886 after each tenth and the final iteration.
k | AIC | v.mae | v.mae_a | v.res | v.mae_0 | v.res_0 | ns.mae | ns.mae_a | ns.res | ns.mae_0 | ns.res_0 | cr.mae | cr.mae_a | cr.res | cr.mae_0 | cr.res_0
Gaussian with identity link
0437,2514.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10345,0450.8390.802021.4681040.3890.3762321.6591130.6500.6368927.112179
20333,4470.5650.540−1016.780820.5970.577−758.27420.4540.445−4010.08338
30330,3610.5180.496117.5011000.4180.404−477.970370.2640.259113.37885
40328,8320.4750.454−1016.888980.5090.492−666.234270.2910.285−2610.49768
50327,4320.3680.352−1513.268780.3910.378−506.060290.2210.217−910.67469
60326,7870.3060.293−1710.760620.3010.290−365.863290.1830.179510.65169
70326,4530.2910.278−1810.451600.2810.272−336.060300.1750.171810.95872
80326,2450.2630.251−239.389540.3090.298−414.837220.1570.154−48.94559
90326,1160.2670.256−249.196540.3130.303−424.689220.1580.155−78.58757
100326,0380.2500.239−189.152530.2760.266−354.637220.1360.13308.60657
110325,9630.2390.229−189.132520.2690.260−354.577220.1320.129−18.35855
120325,9220.2420.231−169.519540.2730.263−354.569210.1290.126−18.38055
130325,8890.2510.240−1810.506610.2870.277−375.421270.1270.12509.72464
140325,8650.2460.235−1510.530610.2690.260−345.329270.1230.12029.52663
150325,8410.2420.232−1410.556610.2740.265−355.119260.1230.12009.26161
160325,8210.2430.232−1510.483600.2780.268−365.018250.1270.12409.14460
170325,8110.2380.228−1310.140580.2650.256−334.968240.1300.12728.88459
180325,7660.2410.230−1210.128570.3000.290−374.552180.1490.14628.71658
190325,5060.2010.192−136.458320.2750.266−334.124−20.1730.169−44.72127
200325,4880.1860.178−96.111290.2620.254−294.460−40.1810.17734.92027
210325,4820.1840.176−96.210300.2580.249−284.337−30.1700.16734.84628
220325,4680.1850.177−86.433320.2580.250−284.286−30.1650.16134.85028
224325,4590.1940.186−96.659340.2680.259−304.200−20.1680.16515.00729
Gaussian with inverse link
0437,2514.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10343,4261.0360.990133.7051920.6500.628−6321.4811140.3910.3824433.482221
20334,9850.6890.659−621.3131180.5150.498−6210.319490.3240.317−416.493107
30331,4260.5120.490−1618.8361090.3930.380−4512.277650.2480.2431518.960125
40328,8750.4330.414−514.354820.3170.306−269.312470.2940.2882615.18899
50327,8770.3830.366−812.959760.2850.276−248.961460.2710.2652514.59295
60327,2740.3370.323−1612.572730.3280.316−377.636380.2190.2151013.08785
70326,8750.2900.277−1411.248640.2710.261−326.233310.1560.153610.58870
80326,6030.2590.248−169.976580.2870.278−385.042220.1580.155−88.01452
90326,3900.2540.243−208.462470.3920.379−514.45110.2200.215−175.67636
100326,2240.2690.257−219.365530.4030.389−524.50070.2250.220−127.17447
110326,1350.2660.254−198.894490.3770.364−494.33450.2050.201−126.49742
120326,0690.2660.254−198.564480.3810.368−504.27140.2040.200−146.10239
130326,0330.2650.253−198.498470.3860.373−504.44520.2120.207−145.91738
140325,9500.2530.242−178.151440.3580.346−464.34510.1890.185−115.59835
150325,9240.2550.244−178.485460.3640.352−464.28830.1920.188−115.89438
160325,8860.2580.247−158.842480.3490.337−444.19950.1780.174−86.35941
170325,8690.2490.238−148.503460.3310.320−404.25450.1740.171−56.18240
180325,8500.2480.237−128.505450.3120.302−374.09960.1640.161−36.09540
190325,8200.2380.228−128.240430.3130.303−374.13740.1690.166−35.82538
200325,8030.2440.234−138.458450.3200.309−384.07360.1710.167−46.13240
210325,8000.2410.231−138.376450.3130.302−364.05960.1710.167−26.24841
213325,7970.2410.230−128.325440.3100.299−364.06360.1710.167−16.28441
Gaussian with log link
0437,2514.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10342,3250.8790.8402625.1711320.4220.408−1715.628740.5300.5195222.034143
20334,4170.6610.632−522.4741250.5320.514−6410.764510.3300.323−317.317112
30330,9010.5600.536−321.7801260.4740.458−5511.199590.2660.261317.802117
40328,4440.4110.393−1013.639780.3150.304−298.610440.2640.2581914.16292
50327,5740.3410.326−1612.936750.3340.323−358.294420.2620.2571213.64289
60327,0290.3150.302−1711.991690.3120.301−367.024360.1920.1881012.46582
70326,6370.2790.267−1610.620610.2660.257−316.142310.1620.158910.79771
80326,4490.2660.254−2110.069590.3040.294−405.195250.1530.149−49.23461
90326,2870.2730.261−229.742570.3000.290−405.082250.1410.138−58.99059
100326,0820.2690.257−238.052450.3700.358−484.09460.2100.205−136.31441
110326,0210.2580.247−198.043440.3430.331−434.10250.1980.193−76.38141
120325,9500.2520.241−177.891420.3290.318−414.08630.1910.187−75.88337
130325,7430.2080.199−136.208300.3100.299−384.994−100.1910.187−84.27321
140325,6930.2110.202−136.620340.3020.292−364.522−30.1860.182−35.03730
150325,6650.2100.200−136.729350.2980.288−364.385−20.1800.176−35.16831
160325,6260.2140.205−146.549330.3020.292−364.410−30.1830.179−45.07630
170325,6100.2140.204−146.590330.2910.281−354.273−30.1730.169−25.02830
180325,5840.2140.204−136.587330.2960.286−354.386−40.1760.172−24.97329
190325,5750.2120.203−126.502320.2830.273−334.363−40.1730.17004.95029
200325,5670.2010.192−96.272300.2640.255−294.491−40.1710.16834.86327
210325,5530.2050.196−96.655320.2670.258−294.398−20.1760.17335.16530
214325,5520.2060.197−106.640320.2670.258−294.402−20.1770.17335.18030
Table A12. AIC scores and out-of-sample validation figures of the gamma GLMs of BEL with identity, inverse and log link functions under 300–886 after each tenth and the final iteration.
k | AIC | v.mae | v.mae^a | v.res | v.mae^0 | v.res^0 | ns.mae | ns.mae^a | ns.res | ns.mae^0 | ns.res^0 | cr.mae | cr.mae^a | cr.res | cr.mae^0 | cr.res^0
Gamma with identity link
0437,2434.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10345,6050.8720.834123.4851140.3150.304619.8611050.5300.5196825.266167
20333,9110.5530.529−1216.265790.5990.579−768.26800.4640.454−439.89534
30330,7070.5030.481017.404990.4250.411−497.754350.2670.262−212.95982
40328,5890.3760.359−1313.317760.3410.330−397.187350.2380.233612.34180
50327,6680.3480.333−1513.173770.3560.344−446.656340.2270.222−411.34874
60327,1350.3050.292−1611.190650.3040.294−376.059300.1750.172310.84371
70326,6860.2730.261−159.730550.2570.249−305.364260.1650.16199.92865
80326,4610.2680.257−219.471540.2870.277−365.151250.1490.14629.54963
90326,3280.2590.248−238.889520.3040.293−404.373200.1480.145−68.25555
100326,2440.2400.229−209.273540.2820.273−374.759220.1440.141−28.66257
110326,1780.2360.225−188.837510.2620.254−344.454200.1350.13208.13954
120326,1170.2370.226−189.668560.2750.266−364.845240.1290.126−18.79958
130326,0840.2450.235−1710.148590.2700.260−355.236260.1220.12019.37562
140326,0580.2430.232−1710.153580.2730.264−355.092250.1250.122−19.12260
150326,0310.2390.229−1410.130580.2630.254−334.914240.1210.11829.01460
160325,8710.2320.222−157.898440.3170.307−393.91850.1740.170−46.23740
170325,7290.1990.190−136.235300.2800.271−344.288−50.1760.172−24.68427
180325,7180.2010.192−136.171300.2790.270−344.253−50.1720.169−24.62327
190325,7030.1970.189−126.158300.2780.268−334.269−50.1710.168−34.52126
200325,6970.1940.185−115.943280.2640.255−304.416−50.1690.16504.47025
210325,6890.1900.181−105.992280.2610.252−294.381−50.1690.16514.53425
212325,6890.1890.180−115.975280.2610.252−294.384−50.1690.16514.54525
Gamma with inverse link
0437,2434.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10343,9691.0370.991033.8181930.6610.639−6421.6011150.3970.3894433.752223
20335,4950.6790.649−720.8881150.5300.512−659.637430.3350.328−915.41099
30332,6460.6270.600−926.0981520.6210.600−8212.361640.3460.339−2418.470122
40329,1920.4090.391−1014.061810.3170.306−279.719500.2890.2832315.405101
50328,1140.3390.324−1212.599730.3130.302−308.084400.2710.2651513.14685
60327,5130.3280.313−1612.247710.2940.284−298.341430.2400.2351813.90291
70327,1150.2850.272−1211.127640.2510.243−286.463330.1660.1621110.91572
80326,7950.2520.241−178.376450.3150.305−394.06990.1960.192−86.41640
90326,6150.2500.239−208.113450.3840.371−514.41400.2180.213−165.47834
100326,4450.2630.252−209.213520.3870.374−504.46980.2190.214−107.31648
110326,3550.2720.260−218.812490.3840.371−504.31350.2090.205−146.48942
120326,2970.2670.255−208.378460.3770.365−484.47020.2060.202−116.14039
130326,2480.2590.248−178.210450.3650.352−464.43710.2000.196−105.93338
140326,2140.2580.247−178.212450.3550.343−454.40430.1920.188−96.07739
150326,1900.2600.248−178.701490.3490.337−444.21770.1800.176−76.78144
160326,1470.2470.236−158.556470.3290.317−404.09170.1740.170−46.64343
170326,0700.2470.236−158.355460.3320.321−414.07750.1730.169−66.18240
180326,0450.2430.233−148.143430.3070.297−374.00160.1640.160−36.10740
190326,0260.2360.225−137.996420.3050.295−364.03950.1650.161−25.97339
200325,9790.2390.229−128.320450.2840.274−314.162110.1540.15157.11047
208325,9690.2340.223−118.162440.2880.278−314.18590.1580.15456.83245
Gamma with log link
0437,2434.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10342,9420.8700.8322124.9981310.4400.425−2415.145710.5050.4944321.396138
20334,8810.6490.621−519.8991100.5190.501−658.283360.3120.306−1114.10590
30331,2270.5440.520−421.7521260.4790.463−5711.010580.2620.257017.458115
40328,7270.3740.357−1014.009810.3290.318−338.553430.2680.2631513.99091
50327,8060.3280.313−1612.750740.3270.316−338.325420.2720.2661413.77990
60327,2700.3020.289−1511.825680.2970.287−337.147370.1970.1931412.63783
70326,8660.2640.253−1510.159580.2490.241−286.071310.1650.1621210.69370
80326,6690.2550.244−199.819570.2880.279−375.085240.1460.143−29.09060
90326,4330.2660.254−238.891510.3270.316−454.079150.1710.167−127.35348
100326,3020.2650.253−237.839440.3610.349−474.03050.2050.201−126.24640
110326,2240.2560.244−188.139450.3350.324−414.21180.1910.187−37.04346
120326,0150.2200.210−176.898360.3170.306−404.411−10.1940.190−75.36433
130325,9730.2160.207−156.654330.3070.296−374.544−40.1960.192−45.11430
140325,9190.2120.203−156.334310.3020.292−374.556−50.1910.187−44.88328
150325,8780.2150.205−146.486330.2970.287−364.375−30.1810.177−34.96829
160325,8580.2160.206−146.619340.2990.289−354.442−20.1810.177−15.27532
170325,8260.2130.203−146.485330.3020.292−364.464−40.1830.180−35.10930
180325,8160.2130.204−146.505330.3000.290−364.468−30.1790.176−15.23831
190325,7970.2100.201−146.580330.2950.285−354.406−30.1790.176−25.15731
200325,7830.2080.199−136.496320.2900.280−344.421−30.1780.174−15.14030
210325,7770.2000.191−106.260300.2630.254−284.471−30.1760.17345.10730
220325,7740.1990.190−106.248300.2640.255−284.541−30.1790.17545.08529
226325,7670.1980.189−86.256290.2490.241−244.532−10.1840.18085.41732
Table A13. AIC scores and out-of-sample validation figures of the inverse gaussian GLMs of BEL with identity, inverse, log and 1/μ² link functions under 300–886 after each tenth and the final iteration.
k | AIC | v.mae | v.mae^a | v.res | v.mae^0 | v.res^0 | ns.mae | ns.mae^a | ns.res | ns.mae^0 | ns.res^0 | cr.mae | cr.mae^a | cr.res | cr.mae^0 | cr.res^0
Inverse gaussian with identity link
0437,3384.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10346,1320.8710.833123.5591150.3140.304720.2691070.5340.5237025.673169
20334,4300.5490.524−1315.996770.5990.579−778.273−10.4680.458−449.80932
30331,4530.4880.467−415.939890.5170.499−676.532110.4130.405−409.28038
40328,9850.3700.354−1313.279760.3380.327−397.193350.2380.233612.30180
50328,0640.3320.317−1512.727740.3380.327−406.871350.2320.227111.66476
60327,5330.2980.285−1710.994640.3040.294−375.868290.1720.168310.64669
70327,0820.2740.262−159.387530.2430.235−275.535270.1710.1671310.25367
80326,8490.2670.255−209.426540.2780.268−345.271250.1520.14859.78365
90326,7150.2470.236−218.546490.2750.266−354.399200.1400.137−18.30255
100326,6270.2340.224−208.454490.2660.257−344.414200.1440.141−18.02353
110326,5570.2250.215−178.350470.2460.238−314.337190.1320.12927.84152
120326,5050.2330.223−178.897510.2560.247−334.428210.1250.12308.10654
130326,4650.2430.232−169.965580.2650.256−345.126260.1220.12019.21661
140326,4420.2440.233−1610.175590.2730.264−355.079250.1250.12209.09860
150326,3570.2520.241−1610.133580.3520.340−454.601150.1690.166−18.83158
160326,1300.2060.197−156.294310.2930.283−364.360−50.1870.183−44.71126
170326,1120.2040.195−156.173300.2890.279−354.284−50.1790.175−44.68827
180326,0990.2030.194−146.130300.2830.273−344.277−50.1770.173−34.65426
190326,0880.2040.195−146.143300.2820.272−344.280−50.1780.174−34.69927
200326,0760.2040.195−146.172300.2860.276−344.347−40.1840.180−34.82327
210326,0710.1990.190−126.140300.2730.264−324.277−40.1830.17904.86828
217326,0690.1910.183−115.967280.2610.252−294.364−50.1780.17524.77927
Inverse gaussian with inverse link
0437,3384.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10344,4581.1291.079−2535.6852021.1381.099−15014.423630.6390.626−6322.713149
20336,0040.6820.652−521.0111170.5340.516−678.866410.3210.314−1214.89595
30333,0600.6260.598−1024.4631420.6230.602−8310.859550.3760.369−3116.233107
40329,6320.4120.394−1415.912930.3450.333−2912.096640.3180.3112818.446121
50328,5150.3350.320−1212.387710.3050.295−298.122400.2760.2701813.33386
60327,9160.3210.307−1511.970700.2860.276−278.385440.2470.2412013.97391
70327,5430.2780.266−1210.488600.2460.238−286.106310.1640.161910.33167
80327,1960.2490.238−178.227450.3080.297−384.03790.1930.189−76.38140
90327,0120.2470.236−198.016440.3760.363−494.390−10.2120.207−155.40733
100326,8360.2610.250−209.073510.3820.369−494.43880.2150.211−97.23747
110326,7500.2680.257−218.679470.3860.373−504.51040.2170.212−126.49042
120326,6740.2630.251−198.191450.3780.365−494.49910.2070.203−126.01138
130326,6360.2610.249−188.380460.3730.360−484.40220.1980.193−125.98538
140326,6070.2580.247−178.253460.3490.337−444.28940.1850.181−86.27740
150326,5810.2580.246−178.437470.3500.338−444.22860.1830.179−76.50542
160326,5380.2460.235−158.445470.3260.315−404.07770.1730.169−46.57243
170326,5220.2490.238−158.148450.3220.311−394.11960.1750.172−26.60343
180326,4680.2450.234−148.583470.2980.288−344.303130.1620.15947.72451
190326,4550.2430.233−148.506470.2990.289−344.290130.1630.16047.64150
200326,3990.2310.221−127.918420.2860.277−314.20890.1580.15566.85645
210326,3650.2330.223−127.983430.2880.279−314.20890.1590.15556.76545
219326,3630.2330.223−118.040430.2830.274−314.13090.1530.15056.78645
Inverse gaussian with log link
0437,3384.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10343,5300.8660.8281924.9251310.4500.435−2814.940690.4940.4843921.122136
20335,3550.6440.616−519.6531090.5260.509−677.947330.3180.311−1413.49085
30331,6750.5360.512−421.6971250.4820.465−5810.885570.2620.256−217.245113
40329,1400.3660.350−1013.913800.3250.314−328.604440.2690.2641614.01191
50328,1900.3240.310−1612.640730.3190.308−328.482430.2740.2681613.96691
60327,6660.2960.283−1511.626670.2900.280−317.181370.2010.1971512.69583
70327,2630.2610.250−159.948570.2440.236−276.042300.1720.1681210.53169
80327,0610.2510.240−189.746560.2840.275−374.988240.1450.142−18.96459
90326,8250.2630.251−238.769510.3210.310−444.059150.1680.165−117.31648
100326,6950.2610.249−227.727430.3520.340−454.04860.2030.199−106.34141
110326,5890.2400.230−197.484410.3420.330−444.12410.1920.188−115.48435
120326,4090.2160.207−166.397320.2990.289−374.534−20.1950.191−45.17030
130326,3630.2160.207−156.314310.3080.298−374.693−60.2010.196−44.95728
140326,3310.2180.208−156.537330.3030.292−364.505−30.1950.191−15.36232
150326,2700.2160.207−146.457320.3020.291−364.524−40.1890.185−25.04930
160326,2490.2170.208−146.596340.2980.288−364.418−20.1820.178−15.29132
170326,2310.2170.207−156.492320.2960.286−354.391−30.1790.175−25.18931
180326,2060.2140.205−156.426320.3020.291−364.466−40.1790.175−34.95029
190326,1910.2060.197−136.472330.2880.279−344.422−30.1730.17005.14931
200326,1760.2080.199−136.545330.2860.276−334.430−20.1790.17505.28831
210326,1610.2080.199−136.501330.2860.276−334.439−20.1840.18015.31832
220326,1530.2020.193−106.280300.2600.251−274.455−20.1780.17455.19031
222326,1530.2010.192−106.291300.2610.252−284.494−30.1800.17755.17630
Inverse gaussian with 1/μ² link
0437,3384.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10344,4670.9850.941−1431.4731760.9930.959−13012.573460.5610.549−5218.986124
20336,8150.6680.639−721.4041220.5910.571−759.506380.3720.364−2214.52191
30331,7920.4780.457−515.821900.3670.354−2810.573530.3730.3653317.496114
40330,0890.4210.403−115.183890.2950.285−1910.660560.3160.3093416.657109
50329,0200.3760.359−1014.443850.3000.290−2111.439600.3200.3133417.553115
60328,4520.3300.316−1212.905750.2900.280−249.196480.2730.2672514.95298
70327,9250.3160.302−1611.733690.3010.291−357.090350.2000.195611.70176
80327,6390.2620.250−188.128430.2980.288−354.425110.2080.203−17.20545
90327,2650.2780.266−228.311460.3550.343−444.38390.2020.197−77.09046
100327,1480.2880.275−228.166440.3570.345−444.40880.2070.203−67.03946
110327,0770.2750.262−207.965420.3660.353−454.67620.2070.202−76.41040
120326,9160.2740.262−188.313450.3930.380−475.13310.2280.223−56.79043
130326,8760.2690.257−188.133430.3960.382−475.21700.2340.229−56.62542
140326,7890.2590.248−188.149440.3950.381−475.07410.2490.244−66.69742
150326,5760.2270.217−156.896340.3410.329−395.291−50.2210.217−35.51031
160326,4790.2140.205−166.274290.2910.281−354.571−60.2060.202−84.61722
170326,4510.2100.201−156.035260.2850.275−344.611−80.2020.198−84.44119
180326,4260.1960.187−135.753250.2500.242−284.373−60.1870.183−24.42621
190326,4080.1950.187−135.682240.2490.241−284.360−60.1880.184−24.46421
200326,3970.1930.184−135.686240.2450.237−274.252−50.1860.182−34.38220
210326,3050.1870.179−135.721270.2370.229−263.81100.1620.15924.51027
220326,1720.1760.168−145.110260.1970.191−223.34640.1460.14364.91931
230326,1600.1750.168−144.994250.2060.199−213.58330.1590.15585.11432
240326,1410.1660.159−115.012240.1970.190−163.90950.1820.178145.56035
250326,1240.1740.166−125.058250.1930.186−153.83390.1880.184176.26641
Table A14. AIC scores and out-of-sample validation figures of the gaussian, gamma and inverse gaussian GLMs of BEL with identity, inverse, log and 1/μ² link functions under 150–443 and 300–886 after the final iteration. The best and worst AIC scores and validation figures are highlighted in green and red, respectively.
k | AIC | v.mae | v.mae^a | v.res | v.mae^0 | v.res^0 | ns.mae | ns.mae^a | ns.res | ns.mae^0 | ns.res^0 | cr.mae | cr.mae^a | cr.res | cr.mae^0 | cr.res^0
Gaussian with identity link under 150–443
150 | 325,850 | 0.247 | 0.237 | −14 | 9.924 | 57 | 0.271 | 0.262 | −35 | 4.612 | 22 | 0.122 | 0.120 | −1 | 8.537 | 56
Gaussian with inverse link under 150–443
150 | 325,952 | 0.258 | 0.247 | −16 | 8.468 | 45 | 0.353 | 0.341 | −44 | 4.282 | 3 | 0.192 | 0.188 | −8 | 6.088 | 39
Gaussian with log link under 150–443
150 | 325,823 | 0.240 | 0.229 | −15 | 7.980 | 44 | 0.316 | 0.305 | −38 | 4.014 | 6 | 0.170 | 0.167 | −2 | 6.434 | 42
Gamma with identity link under 150–443
150 | 326,041 | 0.236 | 0.226 | −14 | 9.329 | 53 | 0.260 | 0.251 | −33 | 4.321 | 20 | 0.121 | 0.118 | 1 | 8.206 | 54
Gamma with inverse link under 150–443
150 | 326,222 | 0.254 | 0.243 | −15 | 8.410 | 46 | 0.327 | 0.316 | −40 | 4.111 | 7 | 0.171 | 0.167 | −3 | 6.722 | 44
Gamma with log link under 150–443
150 | 326,022 | 0.243 | 0.232 | −15 | 7.820 | 43 | 0.323 | 0.312 | −40 | 4.040 | 3 | 0.174 | 0.170 | −4 | 6.010 | 39
Inverse gaussian with identity link under 150–443
150 | 326,352 | 0.249 | 0.238 | −17 | 9.375 | 54 | 0.337 | 0.326 | −44 | 4.224 | 12 | 0.150 | 0.146 | −4 | 7.930 | 52
Inverse gaussian with inverse link under 150–443
150 | 326,617 | 0.253 | 0.242 | −15 | 8.152 | 44 | 0.324 | 0.313 | −39 | 4.148 | 5 | 0.172 | 0.169 | −3 | 6.476 | 42
Inverse gaussian with log link under 150–443
150 | 326,413 | 0.247 | 0.237 | −15 | 7.716 | 42 | 0.324 | 0.313 | −40 | 4.095 | 2 | 0.172 | 0.168 | −4 | 5.892 | 38
Inverse gaussian with 1/μ² link under 150–443
150 | 326,778 | 0.262 | 0.250 | −16 | 8.258 | 44 | 0.332 | 0.321 | −41 | 4.238 | 5 | 0.177 | 0.174 | −3 | 6.518 | 42
Gaussian with identity link under 300–886
224 | 325,459 | 0.194 | 0.186 | −9 | 6.659 | 34 | 0.268 | 0.259 | −30 | 4.200 | −2 | 0.168 | 0.165 | 1 | 5.007 | 29
Gaussian with inverse link under 300–886
213 | 325,797 | 0.241 | 0.230 | −12 | 8.325 | 44 | 0.310 | 0.299 | −36 | 4.063 | 6 | 0.171 | 0.167 | −1 | 6.284 | 41
Gaussian with log link under 300–886
214 | 325,552 | 0.206 | 0.197 | −10 | 6.640 | 32 | 0.267 | 0.258 | −29 | 4.402 | −2 | 0.177 | 0.173 | 3 | 5.180 | 30
Gamma with identity link under 300–886
212 | 325,689 | 0.189 | 0.180 | −11 | 5.975 | 28 | 0.261 | 0.252 | −29 | 4.384 | −5 | 0.169 | 0.165 | 1 | 4.545 | 25
Gamma with inverse link under 300–886
208 | 325,969 | 0.234 | 0.223 | −11 | 8.162 | 44 | 0.288 | 0.278 | −31 | 4.185 | 9 | 0.158 | 0.154 | 5 | 6.832 | 45
Gamma with log link under 300–886
226 | 325,767 | 0.198 | 0.189 | −8 | 6.256 | 29 | 0.249 | 0.241 | −24 | 4.532 | −1 | 0.184 | 0.180 | 8 | 5.417 | 32
Inverse gaussian with identity link under 300–886
217 | 326,069 | 0.191 | 0.183 | −11 | 5.967 | 28 | 0.261 | 0.252 | −29 | 4.364 | −5 | 0.178 | 0.175 | 2 | 4.779 | 27
Inverse gaussian with inverse link under 300–886
219 | 326,363 | 0.233 | 0.223 | −11 | 8.040 | 43 | 0.283 | 0.274 | −31 | 4.130 | 9 | 0.153 | 0.150 | 5 | 6.786 | 45
Inverse gaussian with log link under 300–886
222 | 326,153 | 0.201 | 0.192 | −10 | 6.291 | 30 | 0.261 | 0.252 | −28 | 4.494 | −3 | 0.180 | 0.177 | 5 | 5.176 | 30
Inverse gaussian with 1/μ² link under 300–886
250 | 326,124 | 0.174 | 0.166 | −12 | 5.058 | 25 | 0.193 | 0.186 | −15 | 3.833 | 9 | 0.188 | 0.184 | 17 | 6.266 | 41
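The validation figures combine mean absolute error (mae) and residual (res) type statistics on three out-of-sample test sets, indicated by the prefixes v., ns. and cr.; the superscript-0 variants are normalized such that the constant-only model at k = 0 scores 100.000. As a rough sketch of such error statistics — the precise definitions are given in the paper's methodology, so the normalization below is an illustrative assumption — one could compute:

```python
# Illustrative sketch of mae- and res-type out-of-sample figures.
# The concrete scalings used in the tables are defined in the paper;
# normalizing by the error of the constant-only (k = 0) proxy below
# merely reproduces the "score 100 at k = 0" behaviour of the ^0 columns.
import numpy as np

def validation_figures(y_true, y_pred, y_pred_null):
    resid = y_pred - y_true
    mae = np.mean(np.abs(resid))                   # plain mae-type figure
    res = float(np.sum(resid))                     # signed residual sum
    mae0 = 100.0 * mae / np.mean(np.abs(y_pred_null - y_true))  # normalized
    return mae, res, mae0

y_true = np.array([10.2, 9.8, 10.5, 11.0])
y_pred = np.array([10.0, 9.9, 10.4, 11.2])
y_null = np.full_like(y_true, y_true.mean())       # constant-only proxy
print(validation_figures(y_true, y_pred, y_null))
```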
Table A15. Out-of-sample validation figures of selected generalized additive models (GAMs) of BEL with varying spline function number per dimension and fixed spline function type under 150–443 after each tenth and the finally selected smooth function.
k | K_max | v.mae | v.mae^a | v.res | v.mae^0 | v.res^0 | ns.mae | ns.mae^a | ns.res | ns.mae^0 | ns.res^0 | cr.mae | cr.mae^a | cr.res | cr.mae^0 | cr.res^0
4 Thin plate regression splines under gaussian with identity link in stagewise selection of length 5
01504.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
101500.6320.6042822.0191160.3450.334−813.247650.4790.4696621.072139
201500.4060.388011.330440.3750.362−427.254−120.3410.334−67.70924
301500.3990.382−1112.268590.4650.449−615.744−60.3140.307−266.11629
401500.3710.355−811.415530.4800.463−646.380−160.3400.332−345.28313
501500.3920.375−1312.079590.5200.503−705.961−120.3650.358−395.36819
601500.3060.292−159.833480.4050.391−515.283−20.2730.267−106.48439
701500.2720.260−159.896560.3210.310−355.227220.2320.2281210.46069
801500.2490.238−178.627490.3080.297−364.588160.2050.20199.10060
901500.2610.250−179.262540.3250.314−394.639180.1950.19159.34062
1001500.2540.243−189.593550.3400.328−424.626170.1960.19239.31262
1101500.2550.244−189.407540.3360.324−404.640180.2070.20349.32562
1201500.2430.233−168.474480.3070.296−384.023130.1860.18217.81951
1301500.2410.230−168.481490.3080.298−374.108130.1830.17928.07553
1401500.2350.225−158.018450.2950.285−353.865100.1730.16927.18247
1501500.2400.229−158.192460.2910.281−353.907130.1760.17237.64150
5 Thin plate regression splines under gaussian with identity link
01004.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
101000.6430.6152723.2781250.3440.332−615.238780.4930.4836923.151153
201000.3870.370110.371350.3640.352−407.855−200.3350.328−67.45414
301000.3820.366−1011.235500.4540.439−606.247−140.3170.310−285.60318
401000.3680.352−1110.931480.4630.447−616.266−160.3370.329−335.34312
501000.3550.339−1110.086400.4810.465−647.752−280.3510.344−375.4810
601000.3440.329−910.015400.4900.474−668.152−300.3640.356−385.593−3
701000.3390.324−610.034450.4760.460−647.578−270.3450.337−375.0780
801000.2950.282−119.397490.4040.390−515.513−60.2410.236−115.82034
901000.2960.283−129.694520.3930.380−495.15500.2060.202−76.60541
1001000.2870.274−119.431480.3970.383−505.402−50.2020.198−95.94536
8 Thin plate regression splines under gaussian with identity link
01504.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
101500.6390.6112723.1761250.3400.329−315.517800.5160.5057323.627156
201500.3750.35939.604260.3340.322−338.378−240.3410.33317.71110
301500.3610.345−710.444410.4150.401−526.961−190.3040.297−215.87113
401500.3560.340−510.098360.4250.410−547.920−280.3110.304−275.647−1
501500.3390.324−79.712330.4180.404−537.746−270.3110.304−265.5960
601500.3250.311−69.037260.4110.397−528.706−340.3100.304−265.850−8
701500.3250.311−49.180310.4290.414−558.773−340.3260.319−305.912−9
801500.3090.296−58.618290.4300.415−558.984−350.3360.329−296.382−9
901500.3130.299−58.981320.3840.371−487.390−260.3000.293−265.430−4
1001500.3280.313−69.910470.4000.387−515.572−120.2910.285−255.06413
1101500.2560.245−107.985380.3260.315−404.655−60.2010.197−65.00228
1201500.2530.242−97.340300.3210.310−395.542−140.2090.204−54.54120
1301500.2520.241−97.767340.3260.315−405.197−110.2050.201−54.77024
1401500.2450.234−87.592330.3220.311−415.315−150.1970.193−74.31720
1501500.2170.208−116.477320.2390.231−263.65220.1790.17565.57834
10 Thin plate regression splines under gaussian with identity link
01504.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
101500.6420.6142723.3541260.3440.332−515.463800.5090.4997123.654156
201500.3820.365210.101330.3410.329−347.780−180.3380.33117.72818
301500.3700.354−710.922450.4160.402−526.497−140.3050.299−206.10318
401500.3540.338−710.412390.4040.391−516.747−200.3080.301−245.6008
501500.3470.331−710.119380.4260.412−547.258−240.3100.304−275.4674
601500.3420.327−49.766340.4000.387−507.600−260.2980.292−235.6150
701500.3340.319−49.601350.4280.414−558.158−300.3180.311−295.618−5
801500.3150.301−59.093350.4320.418−558.113−290.3340.327−296.087−3
901500.3230.309−59.436380.3880.375−496.558−200.2970.291−265.1942
1001500.3090.296−68.722270.4090.395−548.780−360.2610.255−274.994−9
1101500.3090.295−68.542260.4110.397−548.711−370.2840.278−334.768−15
1201500.2060.197−95.768250.2160.209−233.806−40.1640.16154.51924
1301500.2050.196−105.759240.2260.218−243.952−50.1750.17244.57924
1401500.2140.205−106.761340.2280.220−253.36350.1670.16365.76236
1501500.2120.203−107.070370.2300.223−243.57580.1730.17086.33740
Table A16. Effective degrees of freedom, p-values and significance codes per dimension of GAMs of BEL built up of thin plate regression splines with gaussian random component and identity link function under 150–443 for spline function numbers J ∈ {4, 10} per dimension at stages k ∈ {50, 100, 150}. The significance levels corresponding to the indicated codes are *** = 0.001, ** = 0.01, * = 0.05, . = 0.1 and ' ' = 1. The p-values are displayed as mantissa and negative power of ten, e.g., 2 16 denotes 2 × 10⁻¹⁶.
  | J = 4, k = 50 | J = 4, k = 100 | J = 4, k = 150 | J = 10, k = 50 | J = 10, k = 100 | J = 10, k = 150
k | df, p-val, sign | df, p-val, sign | df, p-val, sign | df, p-val, sign | df, p-val, sign | df, p-val, sign
12.858 2 16 ***2.350 2 16 ***1.948 2 16 ***9.000 2 16 ***8.941 2 16 ***7.724 2 16 ***
23.000 2 16 ***2.104 2 16 ***1.000 2 16 ***7.857 2 16 ***4.436 2 16 ***1.000 2 16 ***
33.000 2 16 ***2.901 2 16 ***2.922 2 16 ***5.600 2 16 ***1.000 2 16 ***1.000 2 16 ***
42.997 2 16 ***2.962 2 16 ***2.998 2 16 ***7.073 2 16 ***6.791 2 16 ***7.288 2 16 ***
52.729 2 16 ***1.000 2 16 ***1.000 2 16 ***8.679 2 16 ***8.870 2 16 ***8.210 2 16 ***
63.000 2 16 ***3.000 2 16 ***1.043 2 16 ***3.417 2 16 ***1.000 2 16 ***1.000 2 16 ***
73.000 2 16 ***2.806 2 16 ***2.841 2 16 ***7.990 2 16 ***8.608 2 16 ***1.000 2 16 ***
83.000 2 16 ***2.956 2 16 ***2.961 2 16 ***8.282 2 16 ***8.292 2 16 ***8.122 2 16 ***
91.000 2 16 ***1.000 2 16 ***2.223 2 16 ***7.710 2 16 ***6.510 2 16 ***6.549 2 16 ***
102.991 2 16 ***2.924 2 16 ***3.000 2 16 ***1.000 2 16 ***1.000 2 16 ***1.000 2 16 ***
112.587 2 16 ***2.922 2 16 ***2.889 2 16 ***6.535 2 16 ***7.014 2 16 ***5.672 2 16 ***
122.645 2 16 ***1.874 2 16 ***1.000 2 16 ***7.235 2 16 ***7.284 2 16 ***8.346 2 16 ***
132.244 2 16 ***2.425 2 16 ***1.000 2 16 ***2.372 2 16 ***2.531 2 16 ***1.000 2 16 ***
141.000 2 16 ***1.000 2 16 ***1.000 2 16 ***1.000 2 16 ***1.000 2 16 ***1.000 2 16 ***
153.000 2 16 ***1.000 2 16 ***2.285 2 16 ***5.430 2 16 ***5.640 2 16 ***4.437 2 16 ***
161.000 2 16 ***1.000 2 16 ***2.783 2 16 ***1.000 2 16 ***1.000 2 16 ***1.000 2 16 ***
172.344 2 16 ***1.670 2 16 ***1.646 2 16 ***3.886 2 16 ***1.610 2 16 ***1.624 2 16 ***
183.000 2 16 ***3.000 2 16 ***3.000 2 16 ***8.751 2 16 ***8.620 1.4 5 ***5.367 6.9 5 ***
191.000 2 16 ***1.000 2 16 ***1.000 2 16 ***1.000 2 16 ***1.000 2 16 ***1.000 2 16 ***
201.497 2 16 ***1.501 2 16 ***2.148 2 16 ***1.754 2 16 ***1.000 2 16 ***3.141 8.1 16 ***
211.441 2 16 ***1.000 2 16 ***1.000 2 16 ***1.000 2 16 ***1.000 2 16 ***1.000 2 16 ***
221.770 2 16 ***2.192 2 16 ***1.400 2 16 ***1.000 2 16 ***1.000 2 16 ***3.985 1.9 9 ***
232.395 2 16 ***2.746 2 16 ***2.911 2 16 ***2.057 2 16 ***1.428 2 16 ***2.663 2 16 ***
241.000 2 16 ***1.000 2 16 ***1.000 2 16 ***2.964 2 16 ***1.000 3.3 13 ***1.000 1.1 13 ***
251.000 2 16 ***1.000 2 16 ***1.000 2 16 ***1.000 2 16 ***1.000 2 16 ***1.000 2 16 ***
261.000 2 16 ***1.485 2 16 ***1.000 2 16 ***1.000 2 16 ***1.000 2 16 ***1.000 2 16 ***
271.000 2 16 ***1.000 2 16 ***1.000 2.2 10 ***1.000 2 16 ***1.000 2 16 ***1.000 1.6 10 ***
281.000 2 16 ***2.607 2 16 ***1.839 2 16 ***1.000 2 16 ***2.780 2 16 ***1.914 2 16 ***
291.000 2 16 ***1.000 2 16 ***1.809 2 16 ***1.000 2 16 ***1.000 2 16 ***1.000 2 16 ***
301.000 2 16 ***1.000 2 16 ***1.000 2 16 ***6.740 2 16 ***6.416 2 16 ***6.508 2 16 ***
311.000 2 16 ***1.000 2 16 ***1.000 2.4 16 ***1.000 2 16 ***1.000 2 16 ***1.000 2 16 ***
321.000 2 16 ***1.000 2 16 ***1.000 2 16 ***1.000 2 16 ***1.000 2 16 ***1.000 2 16 ***
331.000 2 16 ***2.055 4.9 15 ***1.893 2.2 15 ***7.111 2 16 ***7.175 6.3 12 ***6.728 2 16 ***
341.0003.2 2 16 ***1.000 2.9 16 ***1.000 8.7 11 ***1.000 2 16 ***1.213 2 16 ***1.6354.9 2 16 ***
353.000 2 16 ***1.000 2 16 ***1.000 2.5 16 ***4.780 2 16 ***4.013 2 16 ***4.224 2 16 ***
361.000 2 16 ***1.000 2 16 ***1.000 2 16 ***7.825 4.8 16 ***7.867 1.1 15 ***7.738 2.3 3 **
371.000 2 16 ***1.000 2 16 ***1.000 2 16 ***1.000 4.6 16 ***1.000 7.5 16 ***1.000 2 16 ***
382.512 1.1 14 ***2.303 2 16 ***2.057 2 16 ***1.233 2 16 ***1.000 2 16 ***1.000 1.1 4 ***
391.000 2.7 12 ***1.000 1.2 13 ***1.000 1.9 13 ***1.000 1.1 15 ***1.0002.6 2 16 ***1.000 1.2 14 ***
401.826 6.4 11 ***1.000 2 16 ***1.915 3.6 15 ***1.000 1.2 13 ***1.514 2 16 ***1.000 2 16 ***
412.668 7.5 16 ***2.701 5.3 15 ***1.787 9.8 7 ***1.823 8.1 12 ***1.319 9.4 15 ***1.000 2 16 ***
421.000 1.1 15 ***1.000 2 16 ***1.000 2 15 ***1.000 2.9 12 ***1.000 8 12 ***5.275 3.8 4 ***
431.000 3.8 10 ***1.000 9.5 10 ***1.000 2 9 ***1.000 3.3 10 ***1.000 7.7 11 ***1.000 1.1 10 ***
441.713 1.3 8 ***1.887 8.2 9 ***1.892 6.2 9 ***2.109 6 8 ***1.779 5.3 8 ***2.061 3.4 8 ***
451.000 5.7 9 ***1.000 6.4 9 ***1.000 1.9 8 ***1.000 8 9 ***1.000 2.1 8 ***1.000 8.8 9 ***
461.917 3.5 9 ***1.000 2 16 ***1.000 1.3 15 ***1.305 1.9 6 ***1.610 1.1 6 ***1.000 8.7 8 ***
471.451 1.2 6 ***1.507 5.8 7 ***1.234 1 6 ***1.000 7.7 13 ***1.000 5.5 13 ***1.000 7.4 12 ***
482.753 3.2 7 ***2.863 6.5 8 ***2.804 2.1 8 ***1.000 2.4 8 ***1.000 7.8 8 ***1.000 2.9 6 ***
491.000 5.5 7 ***1.000 4.7 14 ***1.000 1.6 11 ***1.000 6.9 7 ***1.000 9.6 12 ***1.000 1.6 12 ***
501.000 9.2 7 ***1.372 8.3 11 ***1.000 1 12 ***1.000 1.1 6 ***1.000 2 10 ***1.000 2 11 ***
51 1.004 2 16 ***1.000 2 16 *** 1.000 1 6 ***1.000 1.3 6 ***
52 2.839 2 16 ***1.334 2 16 *** 1.000 4.3 13 ***1.000 3 13 ***
53 2.640 2 16 ***2.421 2 16 *** 1.000 4.7 10 ***1.000 7.1 11 ***
54 2.664 2 16 ***1.000 2 16 *** 3.237 2.8 6 ***3.168 4.9 6 ***
55 1.000 9.2 9 ***1.000 3 6 *** 3.906 5.8 8 ***3.493 1 9 ***
56 1.000 2.8 9 ***2.376 2.3 8 *** 1.098 3.5 5 ***3.513 2 16 ***
57 1.000 3.3 15 ***1.000 2.8 13 *** 5.574 5.1 3 **5.019 6.7 2 .
58 1.000 2 16 ***1.000 2 16 *** 1.000 7.3 5 ***1.000 1 5 ***
59 1.000 1.2 11 ***1.000 2 11 *** 1.000 1.8 6 ***1.000 8.8 8 ***
60 1.000 2 16 ***1.000 2 16 *** 3.717 5.2 4 ***3.286 5.6 3 **
61 1.000 7.5 11 ***1.000 7.1 11 *** 1.000 6.7 5 ***1.000 1.5 5 ***
62 2.613 4.2 4 ***2.868 2 16 *** 1.000 1 5 ***1.000 4.6 6 ***
63 1.000 7.9 15 ***1.867 1.6 14 *** 4.210 6.6 3 **3.543 7.3 4 ***
64 1.000 2.4 6 ***1.000 1.2 6 *** 1.000 1.7 4 ***1.000 3.4 4 ***
65 2.960 2.3 13 ***2.976 2 16 *** 2.799 7.1 3 **2.861 3 3 **
66 1.904 2 16 ***2.115 2 16 *** 3.054 1.7 3 **3.159 8.8 6 ***
67 2.859 9 14 ***2.778 1.1 13 *** 3.671 7.6 3 **3.788 8.4 4 ***
68 1.000 2.9 1 1.000 5.2 11 *** 1.000 4 4 ***1.000 1.2 4 ***
69 2.797 2.8 3 **2.954 2.2 4 ** 1.000 2.8 3 **1.000 3.3 3 **
70 1.000 2.4 6 ***1.000 1.5 6 *** 1.000 6.7 3 **1.000 1.1 3 **
71 2.957 6 14 ***2.996 6.1 15 *** 1.000 8.6 3 **1.000 5 3 **
72 2.612 1.4 13 ***2.101 6.3 11 *** 1.000 1.2 2 *1.000 9 3 **
73 1.196 2 16 ***3.000 2 16 *** 1.000 1.5 2 *1.000 6 5 ***
74 2.994 3.8 6 ***2.559 1.8 3 ** 3.644 1.2 2 2.988 1.4 1
75 1.000 1.7 14 ***1.000 3 14 *** 1.000 1.7 2 *1.000 1.8 2 *
76 1.000 4.4 13 ***2.334 3.8 14 *** 2.469 1 1 2.077 1.8 1
77 1.353 4 9 ***1.411 8.8 9 *** 1.000 2.5 2 *1.000 1.1 2 *
78 1.000 1.5 5 ***1.000 6.5 6 *** 1.000 2 16 ***1.000 1.6 4 ***
79 1.000 3 5 ***1.000 1.5 5 *** 5.186 1.5 6 ***1.000 2 16 ***
80 1.000 1 7 ***1.000 7.8 8 *** 1.892 2.2 2 *1.795 1.9 2 *
81 2.725 1.3 4 ***2.739 7 5 *** 1.000 5.2 6 ***1.000 5.8 1
82 1.000 7.6 5 ***2.175 1.4 5 *** 1.000 1.8 3 **1.000 5.1 1
83 2.24 1.3 3 **2.075 9 4 *** 7.02 2 16 ***4.809 2.9 3 **
84 1.000 6.8 5 ***2.902 1.5 5 *** 4.003 1.5 1 4.722 9.8 3 **
85 1.000 7.5 5 ***1.000 4 6 *** 1.000 1 9 ***1.000 1.8 4 ***
86 1.000 3.7 4 ***1.000 7.7 4 *** 3.115 1.2 1 2.748 1.2 1
87 1.000 3.4 4 ***1.000 9.1 5 *** 5.294 1.4 1 5.598 1.3 1
88 1.000 1.9 4 ***1.000 9.6 5 *** 2.263 1.5 1 1.788 2.5 1
89 2.828 2.1 3 **1.000 6 5 *** 1.000 3.4 4 ***1.000 3.3 4 ***
90 1.000 7.8 4 ***1.000 5.6 4 *** 1.000 3.7 2 *1.000 3.8 2 *
91 1.000 2.5 3 **1.000 2.9 3 ** 1.000 1.8 3 **1.000 1.2 3 **
92 1.000 3.8 3 **1.000 3.5 3 ** 1.000 1.7 2 *1.000 1.2 2 *
93 1.000 1.8 3 **1.000 1 3 ** 1.000 3.8 2 *1.000 2.8 2 *
94 2.776 3.6 5 ***1.000 1.8 7 *** 5.921 4.2 3 **3.962 2 16 ***
95 2.103 4.9 2 *1.974 1.3 1 8.154 2 16 ***2.290 2 16 ***
96 2.023 1.2 4 ***1.000 4.6 10 *** 1.000 2.8 12 ***1.000 1.6 5 ***
97 2.811 1.5 2 *2.873 5.9 3 ** 3.748 7.1 4 ***1.000 1.2 6 ***
98 1.000 7.1 3 **1.000 1.1 2 * 1.000 3.9 6 ***7.349 2.8 1
99 1.000 1.4 2 *1.000 1.9 2 * 2.149 1.2 3 **1.000 2.8 8 ***
100 2.764 2.9 2 *2.321 9 2 . 1.000 3.1 3 **1.000 2.1 1
101 1.000 1.1 4 *** 1.000 8.2 10 ***
102 1.000 7.7 2 . 1.000 1.6 2 *
103 1.000 2.9 3 ** 4.084 5.8 4 ***
104 1.000 6.8 5 *** 1.000 3.2 2 *
105 1.000 9.3 3 ** 1.000 6.8 2 .
106 1.000 2.1 9 *** 1.000 5.2 3 **
107 1.000 1.9 2 * 3.397 1 1
108 2.187 9.6 2 . 1.248 3.4 1
109 1.000 2.1 3 ** 3.079 3.9 1
110 1.000 4.6 2 * 1.000 3.9 4 ***
111 1.000 2 16 *** 9.8 1 4.3 8 ***
112 1.000 2.9 2 * 8.555 2 16 ***
113 1.000 9.5 1 8.952 1.7 12 ***
114 1.644 9.6 2 . 1.000 2 16 ***
115 1.000 2 2 * 1.000 2 16 ***
116 1.000 1.8 2 * 1.000 1.7 13 ***
117 1.000 4.8 3 ** 2.988 3.4 13 ***
118 1.000 2.4 2 * 8.401 1.18 10 ***
119 2.704 8.3 2 . 2.493 4.7 5 ***
120 1.000 1.8 2 * 1.000 4.1 7 ***
121 1.413 6.7 1 1.000 9 5 ***
122 1.886 6.2 1 2.745 1.2 3 **
123 1.000 1.4 5 *** 1.000 3.4 3 **
124 2.499 1.8 1 1.000 1.5 2 *
125 1.000 3.6 2 * 1.000 1.4 2 *
126 2.416 1 1 1.000 5.8 3 **
127 1.000 5 5 *** 3.120 5.7 2 .
128 1.000 3.8 2 * 1.000 9.2 4 ***
129 1.000 1.3 3 ** 1.000 3.9 3 **
130 1.000 5.7 2 . 3.778 1.7 1
131 1.000 1.3 2 * 2.752 2.7 2 *
132 1.000 1.2 2 * 1.000 6.9 3 **
133 1.97 2.5 1 1.000 4.8 3 **
134 1.000 3.5 2 * 1.000 5.5 2 .
135 1.000 5.9 4 *** 1.000 3.8 2 *
136 1.176 7.1 3 ** 5.289 1.4 1
137 2.357 3.4 1 1.000 3.7 2 *
138 1.000 6.7 2 . 1.000 2 4 ***
139 1.000 7.9 2 . 1.000 5.1 3 **
140 1.000 6.9 2 . 1.000 1.6 1
141 1.000 4.7 2 * 8.453 2.5 3 **
142 1.000 1.3 3 ** 1.000 4 2 *
143 2.602 4.1 2 * 3.975 1.4 1
144 1.631 4.6 1 1.000 4.2 4 ***
145 1.000 8.3 2 . 1.000 3.7 3 **
146 1.000 1 2 * 2.147 1.9 1
147 1.000 3.6 2 * 1.000 5 2 .
148 1.251 1.6 1 1.000 4.1 2 *
149 2.376 2.1 1 1.000 5.4 2 .
150 1.482 2 1 1.000 6.3 2 .
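Table A16 reports, for each model dimension, effective degrees of freedom bounded by the spline function number J together with approximate significance levels, as printed by typical GAM software summaries. A minimal sketch of fitting a GAM with a fixed number of spline functions per dimension and inspecting such a summary is given below, using Python's pyGAM on synthetic stand-in data; note that pyGAM's default penalized B-splines correspond to the Eilers and Marx style P-splines of the later tables rather than to thin plate regression splines (an mgcv feature in R), and that the data, dimensions and spline numbers are assumptions for illustration.

```python
# Minimal sketch, not the paper's implementation: a GAM with J spline
# functions per dimension, fitted to synthetic stand-in data. pyGAM's
# default basis are penalized B-splines (Eilers-Marx style P-splines).
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(1)
n, d, J = 2000, 4, 10                  # J = spline functions per dimension
X = rng.uniform(-1.0, 1.0, size=(n, d))
y = (np.sin(2 * X[:, 0]) + X[:, 1] ** 2
     - 0.5 * X[:, 2] + rng.normal(0.0, 0.2, n))

terms = s(0, n_splines=J)              # one smooth term per dimension
for j in range(1, d):
    terms += s(j, n_splines=J)

gam = LinearGAM(terms).fit(X, y)
gam.summary()                          # per-term effective dof and p-values
print("total effective dof:", gam.statistics_['edof'])
```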
Table A17. Out-of-sample validation figures of selected GAMs of BEL with varying spline function type and fixed spline function number of 5 per dimension under 100–443 after each tenth and the finally selected smooth function.
k | K_max | v.mae | v.mae^a | v.res | v.mae^0 | v.res^0 | ns.mae | ns.mae^a | ns.res | ns.mae^0 | ns.res^0 | cr.mae | cr.mae^a | cr.res | cr.mae^0 | cr.res^0
5 Thin plate regression splines under gaussian with identity link
01004.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
101000.6430.6152723.2781250.3440.332−615.238780.4930.4836923.151153
201000.3870.370110.371350.3640.352−407.855−200.3350.328−67.45414
301000.3820.366−1011.235500.4540.439−606.247−140.3170.310−285.60318
401000.3680.352−1110.931480.4630.447−616.266−160.3370.329−335.34312
501000.3550.339−1110.086400.4810.465−647.752−280.3510.344−375.4810
601000.3440.329−910.015400.4900.474−668.152−300.3640.356−385.593−3
701000.3390.324−610.034450.4760.460−647.578−270.3450.337−375.0780
801000.2950.282−119.397490.4040.390−515.513−60.2410.236−115.82034
901000.2960.283−129.694520.3930.380−495.15500.2060.202−76.60541
1001000.2870.274−119.431480.3970.383−505.402−50.2020.198−95.94536
5 Cubic regression splines under gaussian with identity link
01004.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
101000.6370.6092822.7391220.3370.326−414.733750.5050.4947122.781150
201000.3880.371210.094320.3580.346−408.256−250.3190.313−57.16110
301000.3890.372−611.426500.4360.421−556.652−140.2890.283−195.84922
401000.3590.343−910.508410.4480.433−597.171−230.3100.303−295.1756
501000.3450.330−99.906350.4760.460−638.736−340.3280.321−345.373−5
601000.3380.323−79.817340.4750.459−639.192−370.3300.324−345.491−8
701000.3070.294−89.341470.4300.416−586.081−180.2340.229−263.87115
801000.2890.277−1310.157550.4100.396−535.10600.2370.232−116.93943
901000.2830.271−1310.307560.4070.394−535.06710.2290.224−107.03544
1001000.2680.256−129.903520.3990.386−515.182−20.2260.221−96.53340
5 Duchon splines under gaussian with identity link
01004.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
101000.7530.720−420.570980.4280.413−3911.806490.4080.399615.24193
201000.7040.673−2217.488740.4410.426−518.606310.3800.372−1611.60066
301000.6610.632−3219.699950.3760.363−4014.235730.3190.3121119.168124
401000.6630.634−2118.426840.2920.282−1814.138730.3770.3703319.007123
501000.6660.636−1718.534860.2870.277−1214.785760.4100.4024119.896130
561000.6660.636−1818.532860.2880.279−1414.643750.4060.3974019.757129
5 Eilers and Marx style P-splines under gaussian with identity link
01004.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
101000.6430.6152922.8361230.3440.332−913.951700.4710.4616521.854144
201000.3890.372110.496370.3650.353−417.778−200.3360.329−87.40213
301000.3840.367−911.377530.4590.444−606.138−130.3200.313−305.51217
401000.3710.354−1010.977490.4540.439−606.095−160.3270.320−345.09211
501000.3570.341−910.459450.4670.451−626.909−220.3350.328−345.0596
601000.3390.324−109.932430.4920.476−667.640−280.3650.357−405.155−2
701000.3430.328−1010.523520.5460.527−757.681−270.3660.358−464.5762
801000.3340.319−79.920450.5200.503−678.655−290.3460.339−365.0361
901000.2280.218−106.973350.2790.269−314.29900.2080.20435.81034
1001000.2250.215−116.897340.2560.248−303.71620.1640.16115.21232
Table A18. Out-of-sample validation figures of selected GAMs of BEL with varying spline function type and fixed spline function number of 10 per dimension under between 100–443 and 150–443 after each tenth and the finally selected smooth function.
k | K_max | v.mae | v.mae^a | v.res | v.mae^0 | v.res^0 | ns.mae | ns.mae^a | ns.res | ns.mae^0 | ns.res^0 | cr.mae | cr.mae^a | cr.res | cr.mae^0 | cr.res^0
10 Thin plate regression splines under gaussian with identity link
01504.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
101500.6420.6142723.3541260.3440.332−515.463800.5090.4997123.654156
201500.3820.365210.101330.3410.329−347.780−180.3380.33117.72818
301500.3700.354−710.922450.4160.402−526.497−140.3050.299−206.10318
401500.3540.338−710.412390.4040.391−516.747−200.3080.301−245.6008
501500.3470.331−710.119380.4260.412−547.258−240.3100.304−275.4674
601500.3420.327−49.766340.4000.387−507.600−260.2980.292−235.6150
701500.3340.319−49.601350.4280.414−558.158−300.3180.311−295.618−5
801500.3150.301−59.093350.4320.418−558.113−290.3340.327−296.087−3
901500.3230.309−59.436380.3880.375−496.558−200.2970.291−265.1942
1001500.3090.296−68.722270.4090.395−548.780−360.2610.255−274.994−9
1101500.3090.295−68.542260.4110.397−548.711−370.2840.278−334.768−15
1201500.2060.197−95.768250.2160.209−233.806−40.1640.16154.51924
1301500.2050.196−105.759240.2260.218−243.952−50.1750.17244.57924
1401500.2140.205−106.761340.2280.220−253.36350.1670.16365.76236
1501500.2120.203−107.070370.2300.223−243.57580.1730.17086.33740
10 Cubic regression splines under gaussian with identity link
01254.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
101250.6380.6102723.3971270.3410.329−315.829820.5190.5097323.960158
201250.3800.364210.038340.3390.328−347.650−160.3450.33807.86518
301250.3770.360−611.458530.4110.397−506.035−50.3090.302−146.97630
401250.3640.348−1010.929470.4210.407−535.791−100.3150.308−255.82418
501250.3480.333−1110.437440.4360.421−566.263−150.3190.312−275.63613
601250.3420.327−59.791360.4030.389−507.282−230.3080.302−235.7894
701250.3550.340−310.502480.4420.427−567.001−200.3270.320−305.5706
801250.3490.334−210.275460.4340.419−557.159−220.3260.319−295.5924
901250.2820.269−57.978370.2750.266−304.426−30.2150.210−25.08825
1001250.2630.251−57.109290.3010.291−375.637−170.2000.196−83.96912
1101250.2550.244−76.999300.3030.292−375.435−150.2020.198−64.23016
1201250.2570.246−77.052300.3040.294−375.371−140.2000.196−64.23217
1251250.2540.243−77.139310.2990.289−365.189−130.1970.192−64.22817
10 Duchon splines under gaussian with identity link
01004.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
101000.7860.752−522.1431100.4450.430−4412.588570.4060.397116.238102
201000.7830.749−3220.4891010.4940.477−6211.319580.3570.350−2115.31698
301000.7820.748−3921.134980.5380.520−5912.715640.4220.413−318.621121
401000.8160.780−4522.125980.5590.540−6313.071650.4500.440−1018.616119
501000.8230.787−4521.473960.5550.536−6312.672630.4510.441−1018.114116
531000.8210.785−4421.348940.5450.526−6112.593620.4460.437−818.091116
10 Eilers and Marx style P-splines under gaussian with identity link in stagewise selection of length 5
01504.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
101500.6480.6192723.6881280.3490.337−715.566800.5060.4957123.889158
201500.3980.380110.946450.3580.346−377.063−70.3380.33118.10231
301500.3930.376−911.983590.4350.421−555.575−20.2990.293−176.92836
401500.3710.355−811.374550.4490.434−575.738−90.3140.308−265.77023
501500.3630.347−910.956500.4600.444−606.249−140.3150.308−285.49217
601500.3490.334−810.479460.4430.428−566.526−170.3050.298−265.42714
701500.3490.333−610.629510.4640.449−606.687−170.3250.318−295.50113
801500.3500.335−710.465480.4680.452−607.036−190.3350.328−295.56311
901500.3500.335−710.639510.4700.454−606.683−170.3300.323−295.45314
1001500.3340.319−89.960460.4680.452−607.170−200.3390.332−295.83511
1101500.3370.323−910.249480.4500.435−586.171−150.3290.322−315.26712
1201500.3390.324−710.283450.4330.419−556.420−170.3200.313−285.34010
1301500.2690.257−138.912430.3650.352−464.891−40.2440.238−125.50330
1401500.2550.244−128.157360.3560.344−445.415−100.2460.241−105.19624
1501500.2610.250−128.514390.3680.355−465.267−90.2450.240−125.16225
Table A19. Out-of-sample validation figures of selected GAMs of BEL with varying random component–link function combination and fixed spline function number of 4 per dimension under between 40–443 and 150–443 after each tenth and the finally selected smooth function.
k | K_max | v.mae | v.mae^a | v.res | v.mae^0 | v.res^0 | ns.mae | ns.mae^a | ns.res | ns.mae^0 | ns.res^0 | cr.mae | cr.mae^a | cr.res | cr.mae^0 | cr.res^0
4 Thin plate regression splines under gaussian with identity link in stagewise selection of length 5
01504.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
101500.6320.6042822.0191160.3450.334−813.247650.4790.4696621.072139
201500.4060.388011.330440.3750.362−427.254−120.3410.334−67.70924
301500.3990.382−1112.268590.4650.449−615.744−60.3140.307−266.11629
401500.3710.355−811.415530.4800.463−646.380−160.3400.332−345.28313
501500.3920.375−1312.079590.5200.503−705.961−120.3650.358−395.36819
601500.3060.292−159.833480.4050.391−515.283−20.2730.267−106.48439
701500.2720.260−159.896560.3210.310−355.227220.2320.2281210.46069
801500.2490.238−178.627490.3080.297−364.588160.2050.20199.10060
901500.2610.250−179.262540.3250.314−394.639180.1950.19159.34062
1001500.2540.243−189.593550.3400.328−424.626170.1960.19239.31262
1101500.2550.244−189.407540.3360.324−404.640180.2070.20349.32562
1201500.2430.233−168.474480.3070.296−384.023130.1860.18217.81951
1301500.2410.230−168.481490.3080.298−374.108130.1830.17928.07553
1401500.2350.225−158.018450.2950.285−353.865100.1730.16927.18247
1501500.2400.229−158.192460.2910.281−353.907130.1760.17237.64150
4 Thin plate regression splines under gaussian with log link in stagewise selection of length 5
0404.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10400.7880.754823.0111140.4230.4082622.4711180.7000.6859428.248186
20400.4520.432−412.761500.4210.406−487.626−90.3600.352−118.16629
30400.4620.442−1014.180720.5270.509−686.209−10.3680.360−327.11636
40400.4380.419−713.382660.5230.506−696.189−100.3730.365−395.91320
4 Thin plate regression splines under gamma with identity link in stagewise selection of length 5
0704.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10700.6250.5983121.0681100.3320.321−512.421600.4860.4756819.997132
20700.3940.377110.887410.3570.345−397.283−150.3400.333−67.64119
30700.3830.367−1011.985560.4670.451−625.853−100.3310.324−305.74222
40700.2890.277−119.447450.3460.335−415.15900.2560.250−26.68239
50700.3070.293−1110.339530.3890.376−504.92200.2520.247−116.29438
60700.3080.295−1410.455560.3720.360−494.37770.2220.218−97.14346
70700.2700.259−169.999570.3250.314−365.280230.2450.2401010.41669
4 Thin plate regression splines under gamma with log link in stagewise selection of length 5
01204.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
101200.7800.7451222.1041010.4360.4213521.1501100.7360.72010126.692175
201200.4970.475−114.721710.4570.442−556.79420.3600.352−168.60541
301200.4370.418−713.581660.4830.467−616.042−30.3640.357−287.01831
401200.4180.400−712.575580.5050.488−676.530−160.3820.374−405.84411
501200.4160.397−1112.456580.5220.505−706.310−150.3920.384−425.53612
601200.4070.390−1112.201590.5470.529−746.706−190.4110.403−475.4768
701200.4070.390−712.104590.4800.464−645.741−130.3560.349−395.17312
801200.2740.262−910.461600.3190.309−315.409230.2570.2511610.63670
901200.2520.241−109.362520.2890.279−314.594170.1950.19198.75358
1001200.2390.229−138.404460.2540.245−264.423180.1820.178138.71057
1101200.2510.240−158.307460.2560.248−284.442190.1740.171118.70857
1201200.2520.241−168.368470.2630.254−294.585200.1710.16798.83058
4 Thin plate regression splines under inverse gaussian with identity link in stagewise selection of length 5
0854.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10850.6220.5953320.6431080.3280.317−312.034570.4880.4786819.473129
20850.4430.423013.176630.4120.398−496.644−10.3360.329−118.14937
30850.3900.373−1012.087600.4810.465−655.771−90.3340.327−335.77723
40850.2800.268−99.655480.3390.327−395.07940.2550.25017.15444
50850.2960.283−109.742480.3740.362−484.933−30.2420.237−105.76834
60850.3100.297−1410.405540.3670.354−484.59260.2320.227−87.16546
70850.2720.260−1210.279580.3130.303−345.205220.2490.2441210.28667
80850.2470.236−148.583480.2930.283−334.594150.2170.213108.77658
85850.2500.239−178.739500.3250.314−384.585140.2180.21368.87158
4 Thin plate regression splines under inverse gaussian with log link in stagewise selection of length 5
0754.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10750.7780.7441421.780950.4460.4314020.5201060.7560.74010425.969170
20750.4910.470−114.542690.4520.437−556.75900.3620.355−178.42338
30750.4250.407−713.142620.4720.456−606.123−50.3660.358−276.85427
40750.4060.388−712.151540.4990.482−666.757−190.3890.381−415.9207
50750.4120.394−1112.543560.5130.495−696.309−160.3960.388−425.65510
60750.2980.285−129.519470.3920.379−505.298−40.2650.260−106.17236
70750.2630.251−139.789560.2980.288−315.406230.2270.2221610.67370
75750.2580.246−149.181520.3000.290−335.049190.2230.219139.83765
4 Thin plate regression splines under inverse gaussian with 1/μ² link in stagewise selection of length 5
0554.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10550.8030.768223.4251170.3830.370−2415.197760.4350.4262719.713127
20550.4480.428812.645610.3310.320−297.088100.3300.323189.98356
30550.3870.370112.458640.3310.320−296.701200.3110.3042211.09970
40550.3410.326−511.661610.3390.328−355.920170.2710.266119.85163
45550.3430.328−910.928550.3610.349−386.111120.3000.29499.45159
50550.3360.321−710.645550.3550.343−405.31980.2500.24578.52554
55550.3280.314−910.595560.3280.317−355.325150.2410.2361610.24967
Table A20. Out-of-sample validation figures of selected GAMs of BEL with varying random component–link function combination and fixed spline function number of 8 per dimension under between 50–443 and 150–443 after each tenth and the finally selected smooth function.
k | K_max | v.mae | v.mae^a | v.res | v.mae^0 | v.res^0 | ns.mae | ns.mae^a | ns.res | ns.mae^0 | ns.res^0 | cr.mae | cr.mae^a | cr.res | cr.mae^0 | cr.res^0
8 Thin plate regression splines under gaussian with identity link
01504.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
101500.6390.6112723.1761250.3400.329−315.517800.5160.5057323.627156
201500.3750.35939.604260.3340.322−338.378−240.3410.33317.71110
301500.3610.345−710.444410.4150.401−526.961−190.3040.297−215.87113
401500.3560.340−510.098360.4250.410−547.920−280.3110.304−275.647−1
501500.3390.324−79.712330.4180.404−537.746−270.3110.304−265.5960
601500.3250.311−69.037260.4110.397−528.706−340.3100.304−265.850−8
701500.3250.311−49.180310.4290.414−558.773−340.3260.319−305.912−9
801500.3090.296−58.618290.4300.415−558.984−350.3360.329−296.382−9
901500.3130.299−58.981320.3840.371−487.390−260.3000.293−265.430−4
1001500.3280.313−69.910470.4000.387−515.572−120.2910.285−255.06413
1101500.2560.245−107.985380.3260.315−404.655−60.2010.197−65.00228
1201500.2530.242−97.340300.3210.310−395.542−140.2090.204−54.54120
1301500.2520.241−97.767340.3260.315−405.197−110.2050.201−54.77024
1401500.2450.234−87.592330.3220.311−415.315−150.1970.193−74.31720
1501500.2170.208−116.477320.2390.231−263.65220.1790.17565.57834
8 Thin plate regression splines under gaussian with log link in stagewise selection of length 5
0504.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10500.7570.7241021.5701010.4440.4293922.1411160.7550.73910627.693182
20500.4010.383110.278230.3590.347−359.154−280.3620.354−18.1107
30500.3960.379−511.249430.4380.424−537.692−200.3390.332−196.80314
40500.3820.365−511.036450.4700.454−607.846−250.3510.344−316.2344
50500.3700.353−810.487390.4640.448−608.000−280.3400.333−325.9010
8 Thin plate regression splines under gamma with identity link in stagewise selection of length 5
01004.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
101000.6370.6092922.7431230.3340.323−314.941770.5100.5007222.871151
201000.3700.35449.537270.3240.313−318.076−220.3400.33317.72510
301000.3590.344−810.558440.4140.400−526.415−150.3050.298−225.90916
401000.3290.314−99.643370.4020.388−516.673−210.3210.314−265.7024
501000.3420.327−79.631330.4090.395−527.553−270.3260.320−285.863−3
601000.3240.310−69.114280.4090.395−528.421−320.3270.320−286.067−9
701000.3280.314−69.617410.4510.435−597.631−260.3490.342−355.796−2
801000.2700.258−97.944370.3240.313−385.068−70.2210.217−25.46129
901000.2790.267−108.926470.3410.329−404.59520.2240.219−26.71341
1001000.2720.260−118.654440.3350.324−404.53200.2160.211−26.39738
8 Thin plate regression splines under gamma with log link in stagewise selection of length 5
01104.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
101100.7620.7291321.360950.4580.4434521.5271120.7730.75610826.743176
201100.4420.422212.416490.3960.382−447.515−120.3490.342−88.08324
301100.3870.370−311.147450.4140.400−497.058−160.3380.331−186.84716
401100.3720.356−610.826430.4580.442−597.546−240.3600.352−346.2251
501100.3570.342−910.240360.4580.443−607.977−290.3570.349−366.073−5
601100.3510.336−59.866300.4390.424−569.066−360.3530.346−356.537−15
701100.3540.339−510.130370.4580.442−598.442−310.3640.356−376.271−9
801100.3590.344−610.122370.4630.447−608.529−320.3710.363−376.412−9
901100.2820.270−109.017470.3640.352−444.991−20.2490.244−66.28636
1001100.2680.256−117.807370.3200.309−384.748−50.2090.204−15.60432
1101100.2590.247−117.373340.3120.302−374.801−70.2010.19705.35431
Table A21. Out-of-sample validation figures of selected GAMs of BEL in adaptive forward stepwise and stagewise selection of length 5 under between 25–443 and 100–443 after each tenth and the finally selected smooth function.
k | K_max | v.mae | v.mae^a | v.res | v.mae^0 | v.res^0 | ns.mae | ns.mae^a | ns.res | ns.mae^0 | ns.res^0 | cr.mae | cr.mae^a | cr.res | cr.mae^0 | cr.res^0
8 Thin plate regression splines under gaussian with log link
0254.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10250.6630.6342623.2981230.3410.330116.218840.5470.5367824.370161
20250.3980.381210.221230.3610.349−359.380−280.3750.367−18.4606
25250.4110.393211.892470.4100.397−477.709−170.3240.317−117.12019
8 Thin plate regression splines under gaussian with log link in stagewise selection of length 5
0504.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10500.7570.7241021.5701010.4440.4293922.1411160.7550.73910627.693182
20500.4010.383110.278230.3590.347−359.154−280.3620.354−18.1107
30500.3960.379−511.249430.4380.424−537.692−200.3390.332−196.80314
40500.3820.365−511.036450.4700.454−607.846−250.3510.344−316.2344
50500.3700.353−810.487390.4640.448−608.000−280.3400.333−325.9010
8 Thin plate regression splines under gamma with identity link
0714.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10710.6370.6092922.7431230.3340.323−314.941770.5100.5007222.871151
20710.3860.369810.141310.3100.299−267.904−180.3580.35088.14016
30710.3590.344−810.558440.4140.400−526.415−150.3050.298−225.90916
40710.3290.314−99.643370.4020.388−516.673−210.3210.314−265.7024
50710.3380.324−79.543320.4120.399−537.748−280.3240.318−295.805−4
60710.3240.310−69.114280.4090.395−528.421−320.3270.320−286.067−9
70710.3270.313−59.417360.4340.419−568.017−290.3420.335−325.967−5
71710.2910.278−48.639410.3410.329−435.205−120.1960.192−173.89814
8 Thin plate regression splines under gamma with identity link in stagewise selection of length 5
01004.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
101000.6370.6092922.7431230.3340.323−314.941770.5100.5007222.871151
201000.3700.35449.537270.3240.313−318.076−220.3400.33317.72510
301000.3590.344−810.558440.4140.400−526.415−150.3050.298−225.90916
401000.3290.314−99.643370.4020.388−516.673−210.3210.314−265.7024
501000.3420.327−79.631330.4090.395−527.553−270.3260.320−285.863−3
601000.3240.310−69.114280.4090.395−528.421−320.3270.320−286.067−9
701000.3280.314−69.617410.4510.435−597.631−260.3490.342−355.796−2
801000.2700.258−97.944370.3240.313−385.068−70.2210.217−25.46129
901000.2790.267−108.926470.3410.329−404.59520.2240.219−26.71341
1001000.2720.260−118.654440.3350.324−404.53200.2160.211−26.39738
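Table A21 above contrasts adaptive forward stepwise selection with a stagewise variant of length 5 (Table A22 below adds a dynamically stagewise variant). Stripped of all model-specific detail, the AIC-guided skeleton that these selections share can be sketched as follows; `fit_and_aic` is a hypothetical callback standing in for fitting the GLM or GAM proxy on a given basis and returning its AIC, and is an assumption for illustration only.

```python
# Skeleton of AIC-guided adaptive forward selection (illustrative only).
# fit_and_aic(basis) is a hypothetical stand-in: it fits the proxy model
# on the given list of basis/smooth functions and returns its AIC.
# A stagewise variant of length L would accept the L best-scoring
# candidates per pass instead of a single one.
def forward_select(candidates, fit_and_aic, max_terms):
    basis = []
    best_aic = float("inf")
    candidates = list(candidates)
    while candidates and len(basis) < max_terms:
        # score every remaining candidate when added to the current basis
        best = min(candidates, key=lambda c: fit_and_aic(basis + [c]))
        aic = fit_and_aic(basis + [best])
        if aic >= best_aic:        # no candidate improves the AIC: stop
            break
        basis.append(best)
        candidates.remove(best)
        best_aic = aic
    return basis, best_aic
```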
Table A22. Out-of-sample validation figures of selected GAMs of BEL with varying spline function number per dimension and fixed spline function type under between 91–443 and 150–443 after each tenth and the finally selected smooth function, or after each dynamically stagewise selected block of smooth functions. In addition, the random component–link function combination is varied.
k | K_max | v.mae | v.mae^a | v.res | v.mae^0 | v.res^0 | ns.mae | ns.mae^a | ns.res | ns.mae^0 | ns.res^0 | cr.mae | cr.mae^a | cr.res | cr.mae^0 | cr.res^0
5 Eilers and Marx style P-splines under gaussian with identity link
01004.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
101000.6430.6152922.8361230.3440.332−913.951700.4710.4616521.854144
201000.3890.372110.496370.3650.353−417.778−200.3360.329−87.40213
301000.3840.367−911.377530.4590.444−606.138−130.3200.313−305.51217
401000.3710.354−1010.977490.4540.439−606.095−160.3270.320−345.09211
501000.3570.341−910.459450.4670.451−626.909−220.3350.328−345.0596
601000.3390.324−109.932430.4920.476−667.640−280.3650.357−405.155−2
701000.3430.328−1010.523520.5460.527−757.681−270.3660.358−464.5762
801000.3340.319−79.920450.5200.503−678.655−290.3460.339−365.0361
901000.2280.218−106.973350.2790.269−314.29900.2080.20435.81034
1001000.2250.215−116.897340.2560.248−303.71620.1640.16115.21232
8 Eilers and Marx style P-splines under inverse gaussian with 1/μ² link in dynamically stagewise selection of proportion 0.25
0914.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
5911.5741.505−1841.6882330.7320.708−7530.2011610.3840.3764242.135278
11910.8170.781−322.3811130.3960.383−3413.475680.4120.4042319.322124
21910.6790.650−924.2031380.7630.738−1028.222310.4240.415−4413.54889
37910.5250.502115.485790.5210.504−636.15400.3970.389−307.46133
62910.5050.482−114.208640.5070.490−616.842−100.4180.410−337.40518
91910.3090.296−119.688450.3350.324−365.23960.2790.27327.42043
10 Eilers and Marx style P-splines under gaussian with identity link in stagewise selection of length 5
01504.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
101500.6480.6192723.6881280.3490.337−715.566800.5060.4957123.889158
201500.3980.380110.946450.3580.346−377.063−70.3380.33118.10231
301500.3930.376−911.983590.4350.421−555.575−20.2990.293−176.92836
401500.3710.355−811.374550.4490.434−575.738−90.3140.308−265.77023
501500.3630.347−910.956500.4600.444−606.249−140.3150.308−285.49217
601500.3490.334−810.479460.4430.428−566.526−170.3050.298−265.42714
701500.3490.333−610.629510.4640.449−606.687−170.3250.318−295.50113
801500.3500.335−710.465480.4680.452−607.036−190.3350.328−295.56311
901500.3500.335−710.639510.4700.454−606.683−170.3300.323−295.45314
1001500.3340.319−89.960460.4680.452−607.170−200.3390.332−295.83511
1101500.3370.323−910.249480.4500.435−586.171−150.3290.322−315.26712
1201500.3390.324−710.283450.4330.419−556.420−170.3200.313−285.34010
1301500.2690.257−138.912430.3650.352−464.891−40.2440.238−125.50330
1401500.2550.244−128.157360.3560.344−445.415−100.2460.241−105.19624
1501500.2610.250−128.514390.3680.355−465.267−90.2450.240−125.16225
Table A23. Maximum allowed numbers of smooth functions and out-of-sample validation figures of all derived GAMs of BEL under settings between 25–443 and 150–443 after the final iteration. The best and worst validation figures are highlighted in green and red, respectively.
k | K_max | v.mae | v.mae_a | v.res | v.mae_0 | v.res_0 | ns.mae | ns.mae_a | ns.res | ns.mae_0 | ns.res_0 | cr.mae | cr.mae_a | cr.res | cr.mae_0 | cr.res_0
4 Thin plate regression splines under gaussian with identity link
1501500.2400.229−158.192460.2910.281−353.907130.1760.17237.64150
5 Thin plate regression splines under gaussian with identity link
1001000.2870.274−119.431480.3970.383−505.402−50.2020.198−95.94536
8 Thin plate regression splines under gaussian with identity link
1501500.2170.208−116.477320.2390.231−263.65220.1790.17565.57834
10 Thin plate regression splines under gaussian with identity link
1501500.2120.203−107.070370.2300.223−243.57580.1730.17086.33740
5 Cubic regression splines under gaussian with identity link
1001000.2680.256−129.903520.3990.386−515.182−20.2260.221−96.53340
5 Duchon splines under gaussian with identity link
561000.6660.636−1818.532860.2880.279−1414.643750.4060.3974019.757129
5 Eilers and Marx style P-splines under gaussian with identity link
1001000.2250.215−116.897340.2560.248−303.71620.1640.16115.21232
10 Cubic regression splines under gaussian with identity link
1251250.2540.243−77.139310.2990.289−365.189−130.1970.192−64.22817
10 Duchon splines under gaussian with identity link
531000.8210.785−4421.348940.5450.526−6112.593620.4460.437−818.091116
10 Eilers and Marx style P-splines under gaussian with identity link in stagewise selection of length 5
1501500.2610.250−128.514−390.3680.355−465.26790.2450.240−125.162−25
8 Thin plate regression splines under gaussian with log link
25250.4110.393211.892470.4100.397−477.709−170.3240.317−117.12019
8 Thin plate regression splines under gaussian with log link in stagewise selection of length 5
50500.3700.353−810.487390.4640.448−608.000−280.3400.333−325.9010
8 Thin plate regression splines under gamma with identity link
71710.2910.278−48.639410.3410.329−435.205−120.1960.192−173.89814
8 Thin plate regression splines under gamma with identity link in stagewise selection of length 5
1001000.2720.260−118.654440.3350.324−404.53200.2160.211−26.39738
4 Thin plate regression splines under gaussian with identity link in stagewise selection of length 5
1501500.2400.229−158.192460.2910.281−353.907130.1760.17237.64150
4 Thin plate regression splines under gaussian with log link in stagewise selection of length 5
40400.4380.419−713.382660.5230.506−696.189−100.3730.365−395.91320
4 Thin plate regression splines under gamma with identity link in stagewise selection of length 5
70700.2700.259−169.999570.3250.314−365.280230.2450.2401010.41669
4 Thin plate regression splines under gaussian with log link in stagewise selection of length 5
1201200.2520.241−168.368470.2630.254−294.585200.1710.16798.83058
4 Thin plate regression splines under inverse gaussian with identity link in stagewise selection of length 5
85850.2500.239−178.739500.3250.314−384.585140.2180.21368.87158
4 Thin plate regression splines under inverse gaussian with log link in stagewise selection of length 5
75750.2580.246−149.181520.3000.290−335.049190.2230.219139.83765
4 Thin plate regression splines under inverse gaussian with 1/μ² link in stagewise selection of length 5
55550.3280.314−910.595560.3280.317−355.325150.2410.2361610.24967
8 Thin plate regression splines under gamma with log link in stagewise selection of length 5
1101100.2590.247−117.373340.3120.302−374.801−70.2010.19705.35431
8 Eilers and Marx style P-splines under inverse gaussian with 1/μ² link in dynamic stagewise selection of proportion 0.25
91910.3090.296−119.688450.3350.324−365.23960.2790.27327.42043
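The smooth constructions and random component and link function combinations appearing in the table rows above are all available in the mgcv package (Wood 2018). As a minimal illustration, not a reproduction of the tabulated runs, the following R sketch fits a GAM of BEL with thin plate regression splines under gamma with identity link; the data object fit_data and the risk factor names X1, X6, X8 are hypothetical placeholders, and the basis dimension k = 8 mirrors the spline function number per dimension varied in the tables.

```r
library(mgcv)  # Wood (2018): GAM fitting with automatic smoothness estimation

# Hypothetical fitting data: BEL realizations y and risk factor columns X1, X6, X8.
# bs = "tp" requests thin plate regression splines (Wood 2003); k bounds the
# number of spline functions per dimension, as varied in the tables above.
gam_fit <- gam(
  y ~ s(X1, bs = "tp", k = 8) + s(X6, bs = "tp", k = 8) + s(X8, bs = "tp", k = 8),
  family = Gamma(link = "identity"),  # random component and link combination
  data   = fit_data
)
summary(gam_fit)
```

Cubic regression splines, Eilers and Marx style P-splines and Duchon splines correspond to bs = "cr", bs = "ps" and bs = "ds", respectively; the gaussian and inverse gaussian random components with their links are selected analogously via the family argument.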
Table A24. Feasible generalized least-squares (FGLS) variance models of BEL corresponding to M_max ∈ {2, 6, 10, 14, 18, 22}, derived by adaptive selection from the set of basis functions of the 150–443 OLS proxy function given in Table A1, with exponents summing up to at most two. In addition, the p-values of the Breusch-Pagan test, the AIC scores and the out-of-sample MAEs in % are reported after each iteration.
m | r_m,1 | r_m,2 | r_m,3 | r_m,4 | r_m,5 | r_m,6 | r_m,7 | r_m,8 | r_m,9 | r_m,10 | r_m,11 | r_m,12 | r_m,13 | r_m,14 | r_m,15 | BP p-value | AIC | v.mae | ns.mae | cr.mae
0000000000000000 1 20 325,8500.2380.2520.154
1100000000000000 1 20 322,4520.2380.2460.122
2000000010000000 1 20 315,9800.2390.2550.153
3000100000000000 1 20 314,0770.2370.2260.165
4000000000000001 1 20 312,2800.2310.2060.184
5000000001000000 1 20 312,1140.2310.2050.185
6000010000000000 1 20 311,9490.2310.2030.186
7010000000000000 1 20 311,7940.2320.2020.187
8000000000001000 1 20 311,7000.2350.2000.190
9100000010000000 1 20 311,6100.2330.1980.190
10000000020000000 1 20 311,3630.2270.1940.195
11000001000000000 1 20 311,2930.2290.1940.197
12000020000000000 1 20 311,2370.2280.1930.198
13000001010000000 1 20 311,1960.2300.1930.198
14000000100000000 1.5 20 311,1610.2310.1930.200
15100000000000001 7.1 19 311,1360.2310.1910.202
16000000010000001 5 15 311,0910.2280.1890.201
17001000000000000 5.8 13 311,0670.2280.1880.203
18000000200000000 8.3 13 311,0480.2280.1870.204
19000010010000000 3.2 12 311,0300.2280.1880.204
20100010000000000 2.7 12 311,0030.2300.1880.205
21000001100000000 1.3 11 310,9880.2300.1880.206
22000100010000000 9.4 11 310,9740.2300.1870.207
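The selection mechanics behind these variance models can be sketched in a few lines of base R. Assuming a multiplicative (log-linear) heteroscedasticity model in the style of Harvey (1976), with hypothetical placeholders y (fitting values of BEL), X (matrix of proxy basis functions) and Z (matrix of candidate variance basis functions), one feasible GLS step and the reported Breusch-Pagan statistic look roughly as follows.

```r
# One feasible GLS iteration under a log-linear variance model (Harvey 1976);
# y, X and Z are hypothetical placeholders.
ols_fit  <- lm(y ~ X)                          # initial OLS proxy fit
var_fit  <- lm(log(residuals(ols_fit)^2) ~ Z)  # variance model on basis functions Z
w        <- 1 / exp(fitted(var_fit))           # weights = inverse fitted variances
fgls_fit <- lm(y ~ X, weights = w)             # reweighted FGLS proxy fit

# Breusch-Pagan statistic in its studentized n * R-squared form
# (Breusch and Pagan 1979): auxiliary regression of squared residuals on Z.
bp_stat <- length(y) * summary(lm(residuals(ols_fit)^2 ~ Z))$r.squared
bp_pval <- pchisq(bp_stat, df = ncol(Z), lower.tail = FALSE)
```

The tables report the Breusch-Pagan p-value and the AIC score after each such extension of the variance model, up to M_max variance basis functions.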
Table A25. FGLS variance models of BEL corresponding to M_max ∈ {2, 6, 10, 14, 18, 22}, derived by adaptive selection from the set of basis functions of the 300–886 OLS proxy function given in Table A3, with exponents summing up to at most two. In addition, the p-values of the Breusch-Pagan test, the AIC scores and the out-of-sample MAEs in % are reported after each iteration.
m | r_m,1 | r_m,2 | r_m,3 | r_m,4 | r_m,5 | r_m,6 | r_m,7 | r_m,8 | r_m,9 | r_m,10 | r_m,11 | r_m,12 | r_m,13 | r_m,14 | r_m,15 | BP p-value | AIC | v.mae | ns.mae | cr.mae
0000000000000000 1 20 325,4590.1950.2750.175
1100000000000000 1 20 322,0770.1990.2730.166
2000000010000000 1 20 315,6150.1960.2750.175
3000100000000000 1 20 313,6590.1950.2550.175
4000000000000001 1 20 311,8640.1980.2390.182
5000000001000000 1 20 311,7040.1980.2360.182
6010000000000000 1 20 311,5540.2000.2400.183
7200000000000000 1 20 311,4540.1990.2410.183
8000000000001000 1 20 311,3600.1990.2380.186
9000001000000000 1 20 311,3180.2010.2360.188
10000000100000000 1 20 311,2870.2030.2340.189
11000001010000000 1 20 311,2600.2030.2330.189
12000000020000000 1 20 311,2370.2030.2320.189
13100000010000000 3.7 17 311,0010.2000.2230.192
14100000000000001 1.7 16 310,9800.2000.2220.194
15000000010000001 7.6 13 310,9340.2000.2200.196
16001000000000000 4.2 11 310,9120.2000.2180.197
17000000200000000 1.3 10 310,8950.2000.2190.198
18000001100000000 2.3 10 310,8810.2000.2170.198
19000000000000020 7.6 10 310,8670.2000.2180.197
20000000010010000 3.4 9 310,8540.2000.2180.196
21000000000000010 9.9 9 310,8430.2000.2180.196
22100000000010000 3.1 8 310,8320.2000.2170.196
Table A26. Iteration-wise out-of-sample validation figures in the adaptive variance model selection of BEL corresponding to M_max ∈ {2, 6, 10, 14, 18, 22}, based on the 150–443 OLS proxy function given in Table A1 with exponents summing up to at most two. These figures are simultaneously the type I FGLS regression results.
m | v.mae | v.mae_a | v.res | v.mae_0 | v.res_0 | ns.mae | ns.mae_a | ns.res | ns.mae_0 | ns.res_0 | cr.mae | cr.mae_a | cr.res | cr.mae_0 | cr.res_0
00.2380.228−158.103450.2520.243−303.984160.1540.15137.37949
10.2380.228−158.668490.2460.238−304.120190.1220.12037.87352
20.2390.229−168.147460.2550.246−304.032170.1530.14927.48949
30.2370.226−157.789430.2260.218−244.423200.1650.162108.11754
40.2310.221−137.684420.2060.199−184.817220.1840.180178.75658
50.2310.221−137.666420.2050.198−184.803220.1850.181178.74058
60.2310.221−137.577410.2030.196−184.762220.1860.183178.63757
70.2320.222−127.661420.2020.195−174.787220.1870.183188.69157
80.2350.225−127.774420.2000.193−174.914230.1900.186198.91259
90.2330.223−117.692420.1980.191−164.838230.1900.186198.76358
100.2270.217−107.460400.1940.188−154.708210.1950.191208.53756
110.2290.219−107.447400.1940.187−154.686210.1970.193208.45556
120.2280.218−107.426400.1930.186−144.687210.1980.194208.44456
130.2300.220−97.513410.1930.187−144.696210.1980.194218.49156
140.2310.221−97.527410.1930.186−144.701210.2000.195218.49756
150.2310.221−97.523410.1910.185−134.742210.2020.197228.56957
160.2280.218−97.437400.1890.182−134.730210.2010.197228.55756
170.2280.218−97.421400.1880.182−134.747210.2030.199228.56856
180.2280.218−97.433400.1870.181−134.780220.2040.200228.62157
190.2280.218−97.435400.1880.182−134.786220.2040.200228.62857
200.2300.219−97.442400.1880.182−134.796220.2050.201228.65057
210.2300.220−97.466400.1880.181−134.800220.2060.201238.64857
220.2300.220−87.436400.1870.180−124.802220.2070.203238.63957
Table A27. Iteration-wise out-of-sample validation figures in the adaptive variance model selection of BEL corresponding to M_max ∈ {2, 6, 10, 14, 18, 22}, based on the 300–886 OLS proxy function given in Table A3 with exponents summing up to at most two. These figures are simultaneously the type I FGLS regression results.
m | v.mae | v.mae_a | v.res | v.mae_0 | v.res_0 | ns.mae | ns.mae_a | ns.res | ns.mae_0 | ns.res_0 | cr.mae | cr.mae_a | cr.res | cr.mae_0 | cr.res_0
00.1950.186−96.468330.2750.266−304.601−30.1750.17155.31532
10.1990.190−96.648340.2730.263−314.272−30.1660.16215.00530
20.1960.187−96.527330.2750.266−304.564−30.1750.17155.40132
30.1950.186−96.487330.2550.247−274.35010.1750.17195.91637
40.1980.189−96.305320.2390.231−234.26240.1820.178136.30340
50.1980.190−96.298320.2360.228−224.25240.1820.178146.33640
60.2000.191−96.399330.2400.232−234.29240.1830.179136.38940
70.1990.190−96.364320.2410.233−234.30440.1830.179136.32440
80.1990.190−86.381320.2380.230−224.31340.1860.182146.40740
90.2010.193−86.432330.2360.228−224.31350.1880.184156.52141
100.2030.194−86.473330.2340.226−214.31050.1890.185166.62142
110.2030.195−86.492330.2330.225−214.30350.1890.185166.62842
120.2030.194−86.476330.2320.224−214.29450.1890.186166.64142
130.2000.191−76.254320.2230.216−194.25250.1920.188176.61542
140.2000.191−76.246310.2220.214−194.25760.1940.190186.69742
150.2000.191−76.216310.2200.213−184.24360.1960.192196.77343
160.2000.191−76.180310.2180.211−184.23960.1970.193196.75343
170.2000.192−76.197310.2190.211−184.24960.1980.194196.80443
180.2000.191−76.194310.2170.210−184.25060.1980.194196.80143
190.2000.191−76.207310.2180.210−184.23860.1970.193196.78743
200.2000.191−76.229320.2180.211−184.22660.1960.192196.79343
210.2000.192−76.240320.2180.211−184.22470.1960.192196.81443
220.2000.192−76.256320.2170.210−184.22370.1960.192196.84444
Table A28. AIC scores and out-of-sample validation figures of type II FGLS proxy functions of BEL under 150–443 with variance models of varying complexity M_max, reported after each tenth iteration.
k | AIC | v.mae | v.mae_a | v.res | v.mae_0 | v.res_0 | ns.mae | ns.mae_a | ns.res | ns.mae_0 | ns.res_0 | cr.mae | cr.mae_a | cr.res | cr.mae_0 | cr.res_0
M_max = 2 in variance model selection
0437,2514.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10336,3901.7861.70818444.0821981.4021.35420939.1522092.2902.24234452.033344
20323,8830.8260.7902522.0071110.4240.409−2810.764440.4370.4282816.42499
30319,9580.4650.445312.876550.2880.27829.650400.4670.4575715.23496
40318,9450.4010.384−1611.036510.3570.345−377.158160.3300.323310.12755
50318,2060.3550.339−249.270350.3360.324−366.61180.3390.332−88.60236
60317,4850.3230.309−258.407360.3090.298−365.548110.2790.273−117.24436
70317,1970.3060.293−287.631280.3450.334−435.405−10.2720.266−175.89925
80316,2630.2720.260−246.946320.3200.310−424.05100.2270.222−174.89825
90316,0210.2600.249−237.143390.2980.288−373.854100.1730.169−56.46142
100315,8710.2560.245−237.424410.2940.284−354.078140.1860.18207.44349
110315,7840.2560.245−227.396410.3020.292−373.962120.1890.185−37.01346
120315,7190.2570.245−236.923380.2960.286−363.870110.1810.177−26.87245
130315,6750.2580.247−256.506350.2950.285−363.76090.1880.184−36.46142
140315,6490.2520.241−236.424340.2830.274−343.74990.1840.180−16.39942
150315,6290.2390.229−216.467340.2610.252−303.796100.1770.17336.65444
M_max = 6 in variance model selection
0437,2514.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10332,4792.0141.92625949.0982132.0001.93329844.7452382.9642.90144558.341385
20320,8730.8810.8425122.8211150.3410.3291613.428660.6220.6098420.790134
30316,1870.4290.4101910.875320.3080.297298.537280.5610.5497312.63372
40315,1320.3660.350610.243450.2540.24617.853250.4010.3933611.22161
50314,4730.3030.28939.346460.2290.22207.543280.3610.3533410.77662
60313,6430.3070.293−187.567280.2510.242−215.808110.2660.26197.67641
70313,3010.2800.268−177.768300.2220.214−126.229210.2680.262239.31556
80313,0600.2700.258−207.092280.2300.222−136.273220.2800.274259.55459
90312,8830.2620.251−226.754290.2390.231−175.977200.2530.248199.07756
100312,1000.2460.235−196.177290.2020.195−144.814180.2210.216218.30554
110311,6560.2310.221−166.446330.1890.182−124.827220.2110.206258.96459
120311,5740.2360.225−166.545340.2090.202−164.594190.2070.202228.63757
130311,5110.2380.227−176.551350.2070.200−164.797210.2040.200239.10460
140311,4610.2310.221−166.026310.1890.183−124.726210.2160.212258.85358
150311,4260.2240.215−145.904310.1770.171−94.756220.2260.221299.00559
M_max = 10 in variance model selection
0437,2514.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10328,5192.1202.02728850.5242212.2062.13232946.5632483.1943.12748060.396399
20319,4810.9710.9289524.1851050.4390.4245311.839490.8210.80311718.086112
30316,5290.6550.6275616.560740.4200.4065712.301610.7800.76411318.285117
40314,4600.3790.3621910.089420.2680.259198.120280.4730.4635411.60863
50313,8420.3240.31028.422330.2290.221−46.420120.3390.331208.60036
60313,0220.2970.284−137.619310.2230.215−136.123170.2770.271148.29243
70312,6920.2820.269−177.494260.2210.213−56.762240.3260.3193510.46764
80312,4430.2710.259−197.171270.2180.211−76.625250.3030.2973310.30665
90312,2640.2610.249−216.610270.2220.215−116.300230.2780.272289.80662
100312,1870.2620.250−216.568260.2160.208−106.265230.2720.266289.70761
110312,1080.2560.244−216.031230.2030.196−56.324250.2880.282319.75461
120312,0430.2610.250−235.989200.2000.194−46.287250.2930.287339.85762
130311,0780.2260.216−185.466250.1600.155−45.115240.2440.239329.19260
140310,9180.2200.210−165.451250.1530.148−44.820230.2330.228318.85958
150310,8680.2120.203−145.375250.1480.14305.098250.2560.250369.29661
M_max = 14 in variance model selection
0437,2514.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10326,3082.122.02729050.3062202.2152.14133146.1292463.1973.1348059.909396
20319,1991.0240.97910026.0491370.5270.5097518.639981.0441.02215527.142178
30316,0930.7020.6716717.574790.5030.4867313.745700.9010.88213320.208131
40314,1550.3930.3762410.363440.2820.273258.426310.5050.4946212.13168
50313,5620.3270.31368.561340.2250.21716.535150.3520.345278.93641
60312,8110.2980.285−107.608290.2030.19647.086290.3360.3293710.28362
70312,4550.2890.276−157.409260.2190.211−26.863250.3430.3353810.61265
80312,2350.2730.261−177.222280.2150.208−46.738260.3220.3163710.66267
90312,0570.2640.253−226.68270.2220.214−106.406240.2830.277289.98163
100311,9530.2550.244−216.117240.2010.194−56.381250.290.284319.7861
110311,8980.2520.241−205.929220.2000.193−46.236240.2930.287329.58360
120311,8320.2630.251−235.962190.1980.192−36.300250.3030.296349.87862
130310,9160.2230.213−175.363230.1540.149−15.233250.2630.257369.30561
140310,7570.2150.206−155.339240.1470.14204.954240.2510.246358.97259
150310,7140.2140.205−145.368250.1460.141−14.857230.2440.239348.90659
M_max = 18 in variance model selection
0437,2514.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10326,1252.1272.03429250.4252202.2262.15133246.2222463.2093.14248260.019396
20318,7621.0360.99111125.6681130.5380.527513.429640.9830.96214420.708133
30315,9950.710.6796917.741800.5230.5057613.963720.9250.90613720.465133
40314,0600.4010.3832710.529450.2920.282288.56330.5210.516612.34170
50313,4830.3290.31598.687350.2250.21746.62160.3620.354319.1243
60312,9380.3160.302−57.84300.2090.20256.855260.3470.344110.29762
70312,3630.270.258−106.96210.2150.207117.089280.3890.3814810.79565
80312,1660.2590.248−126.558220.2040.19897.008290.3690.3614710.71867
90311,9630.2340.223−156.141240.1960.18916.432260.3130.306379.84461
100311,8830.2410.231−186.031240.1940.187−16.449260.2990.293349.77761
110311,8300.2390.229−185.836220.1930.18706.298250.3030.296359.6160
120311,7660.2440.234−195.713180.1910.18436.34260.3210.314399.86662
130311,0450.2250.215−155.396230.1480.14305.061240.2590.254358.9559
140310,6940.2130.204−135.314240.1390.13414.855240.2450.24348.67257
150310,6440.2110.202−145.131230.1390.13514.816230.250.245358.61857
M_max = 22 in variance model selection
0437,2514.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10325,9882.1272.03429250.4142202.2262.15133246.2592463.213.14348260.061397
20318,9261.0340.98810526.161370.5690.558319.0431011.0981.07516327.621181
30315,8050.7120.6817117.763790.5370.5197814.063720.9430.92314020.603134
40313,9730.4090.3912910.73460.3010.291318.709340.5390.5277012.58972
50313,4110.3490.33478.95340.2230.21636.618160.3570.349309.08142
60312,8730.3080.295−28.205370.2030.19687.49330.350.3434310.85367
70312,2860.2710.26−96.95210.2170.21127.124280.3980.3895010.85666
80312,0910.2610.249−116.557220.2070.200107.051290.3770.3694810.79368
90311,8930.2350.225−156.043230.1960.18916.367250.3140.307369.68360
100311,8150.2380.228−175.97230.1940.18716.462260.3110.304379.82961
110311,7610.2370.227−175.78210.1940.18826.364250.3130.307379.69460
120311,6970.2430.232−195.818180.1910.18526.325250.320.313399.88562
130311,6550.2320.222−175.688180.1950.18886.714290.3530.3464610.50967
140310,7480.2150.206−145.206230.1480.14355.578270.2930.287429.78864
150310,5900.2080.199−135.209230.1390.13455.193260.2750.27409.25661
Table A29. AIC scores and out-of-sample validation figures of type II FGLS proxy functions of BEL under 300–886 with variance models of varying complexity M_max, reported after each tenth and the final iteration.
k | AIC | v.mae | v.mae_a | v.res | v.mae_0 | v.res_0 | ns.mae | ns.mae_a | ns.res | ns.mae_0 | ns.res_0 | cr.mae | cr.mae_a | cr.res | cr.mae_0 | cr.res_0
M_max = 2 in variance model selection
0437,2514.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10336,3901.7861.70818444.0821981.4021.35420939.1522092.2902.24234452.033344
20323,8830.8260.7902522.0071110.4240.409−2810.764440.4370.4282816.42499
30319,9580.4650.445312.876550.2880.27829.650400.4670.4575715.23496
40318,9450.4010.384−1611.036510.3570.345−377.158160.3300.323310.12755
50318,2060.3550.339−249.270350.3360.324−366.61180.3390.332−88.60236
60317,4850.3230.309−258.407360.3090.298−365.548110.2790.273−117.24436
70317,1970.3060.293−287.631280.3450.334−435.405−10.2720.266−175.89925
80316,2630.2720.260−246.946320.3200.310−424.05100.2270.222−174.89825
90316,0210.2600.249−237.143390.2980.288−373.854100.1730.169−56.46142
100315,8710.2560.245−237.424410.2940.284−354.078140.1860.18207.44349
110315,7840.2560.245−227.396410.3020.292−373.962120.1890.185−37.01346
120315,7190.2570.245−236.923380.2960.286−363.870110.1810.177−26.87245
130315,6750.2580.247−256.506350.2950.285−363.76090.1880.184−36.46142
140315,6410.2500.239−236.441340.2840.275−343.74190.1820.178−26.33841
150315,6220.2380.228−206.433340.2580.250−293.821110.1770.17446.74044
160315,5990.2330.223−206.578350.2560.247−283.920120.1830.17966.98846
170315,5730.2320.222−196.616350.2540.246−283.880120.1810.17856.92745
180315,5350.2250.215−196.502350.2520.243−283.773110.1720.16956.79744
190315,5230.2290.219−196.809370.2440.236−264.020150.1640.16197.60750
200315,5070.2150.206−186.738360.2430.235−263.969140.1640.16197.38749
210315,5000.2140.205−186.704350.2340.226−243.989140.1620.159107.32348
220315,4920.2170.207−186.769350.2390.231−263.930140.1590.15597.27748
224315,4910.2090.199−176.584340.2260.219−223.999140.1650.161127.29048
M_max = 6 in variance model selection
0437,2514.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10332,4792.0141.92625949.0982132.0001.93329844.7452382.9642.90144558.341385
20320,8730.8810.8425122.8211150.3410.3291613.428660.6220.6098420.790134
30316,1870.4290.4101910.875320.3080.297298.537280.5610.5497312.63372
40315,1320.3660.350610.243450.2540.24617.853250.4010.3933611.22161
50314,4730.3030.28939.346460.2290.22207.543280.3610.3533410.77662
60313,6430.3070.293−187.567280.2510.242−215.808110.2660.26197.67641
70313,3010.2800.268−177.768300.2220.214−126.229210.2680.262239.31556
80313,0600.2700.258−207.092280.2300.222−136.273220.2800.274259.55459
90312,8830.2620.251−226.754290.2390.231−175.977200.2530.248199.07756
100312,1000.2460.235−196.177290.2020.195−144.814180.2210.216218.30554
110311,6560.2310.221−166.446330.1890.182−124.827220.2110.206258.96459
120311,5740.2360.225−166.545340.2090.202−164.594190.2070.202228.63757
130311,5070.2340.223−166.706360.2060.199−164.801210.2040.200239.09460
140311,4560.2260.216−166.102320.1890.182−124.717210.2150.211258.82758
150311,4190.2240.214−155.899310.1780.172−104.712220.2130.209278.97159
160311,3550.2170.207−155.536290.1600.154−45.013250.2460.241339.42062
170311,3080.1980.189−135.090230.1410.137−44.144190.2210.216277.49149
180311,2660.2020.193−145.112240.1320.127−34.433220.2180.213277.86852
190311,2480.2080.198−165.287230.1430.138−54.163190.2130.208257.63050
200311,2280.2020.193−145.269240.1370.133−44.148200.2130.209277.63950
210311,1960.1920.184−145.032200.1250.12144.655230.2530.248327.91952
220311,1640.1950.187−155.079210.1220.11814.620230.2370.232318.07053
230311,1480.1940.185−155.146220.1220.11814.571230.2360.231297.94952
237311,1440.1960.188−155.342230.1250.12104.765240.2350.230308.24354
M_max = 10 in variance model selection
0437,2514.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10331,0562.0731.98227350.0852162.1132.04131545.7142443.0903.02546459.451393
20320,1990.9240.8847623.1331010.3750.3622510.921350.6550.6418215.99992
30316,0440.5430.5193114.068560.3720.3594511.729560.7420.72710718.450118
40314,8210.3850.3681110.626470.2560.24868.118280.4240.4154311.68565
50314,2010.3270.31329.206410.2400.232−86.713170.3360.329219.10345
60313,3860.2690.257−57.831340.2200.21367.506310.3650.3574611.22371
70312,9860.2900.278−177.316260.2100.203−46.646250.3100.304339.95561
80312,7220.2800.268−187.425310.2230.215−86.792270.3000.2933310.65268
90312,5450.2700.259−227.110320.2330.225−136.634260.2730.2672710.45067
100312,4690.2650.253−216.800290.2240.217−116.420250.2740.2682910.12864
110312,3970.2540.243−196.136250.2020.195−46.360250.2900.284339.94063
120312,3460.2470.236−195.940220.1930.18716.468270.3070.3013810.07864
130312,2990.2400.230−175.784210.1920.18546.563280.3290.3224310.36966
140312,2740.2470.236−185.811220.1930.18656.870310.3380.3314510.94471
150312,2430.2490.238−195.950240.1930.18636.872310.3240.3174310.98471
160312,2220.2550.244−196.162250.1980.19116.859300.3240.3184211.09272
170311,2040.2280.218−145.957310.1610.156−15.874300.2760.2704010.70371
180311,0400.2230.213−136.021310.1540.149−15.594290.2650.2593910.35668
190310,9960.2220.213−136.152320.1540.149−25.584280.2580.2533810.31168
200310,9680.2060.197−106.163320.1440.13935.924310.2850.2794210.56870
210310,9530.2110.202−105.930300.1430.13835.615290.2760.2704110.15367
220310,9270.2080.199−116.353330.1470.142−15.602290.2520.2473710.22567
230310,9190.2110.202−116.454340.1490.144−15.702290.2590.2533810.37669
240310,9080.2100.201−116.559350.1520.147−35.570280.2510.2453610.21867
244310,9050.2080.199−116.577350.1530.147−25.617290.2520.2473710.25968
M_max = 14 in variance model selection
0437,2514.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10327,0492.1332.03929250.5612222.2332.15733346.6862493.2223.15448460.524400
20318,9651.0200.97610825.2881110.5070.4906912.759570.9310.91213619.634124
30316,2620.6940.6636517.386780.4840.4686913.341680.8720.85312819.643127
40314,2720.3920.3752310.373440.2770.268238.322300.4930.4835911.94166
50313,6910.3490.33318.772320.2280.220−56.440120.3350.328198.63336
60312,8600.2890.276−107.475300.2040.197−26.583240.3020.295289.21853
70312,5420.2860.273−167.501260.2190.211−36.802240.3340.3273710.54864
80312,3370.2810.269−187.254270.2150.207−46.834270.3230.3163710.65567
90312,1260.2610.250−216.672270.2210.213−106.384230.2860.280299.94262
100312,0460.2680.256−226.695270.2220.215−126.317240.2700.265269.77961
110311,9610.2570.245−225.979230.2000.193−56.316250.2840.278319.69561
120311,9030.2520.241−215.892190.1930.18616.411260.3110.304379.97763
130311,8600.2440.233−195.886200.1900.18436.562280.3220.3154110.34466
140311,8240.2430.232−205.880190.1900.18356.758300.3350.3284410.69669
150311,8000.2470.236−216.011200.1850.17926.452280.3090.3034010.36566
160310,8060.2180.208−165.451250.1400.13505.234270.2550.249379.59663
170310,7100.2100.201−155.473250.1370.13205.077260.2490.244369.35962
180310,6820.2060.197−145.303240.1360.13125.064260.2660.260399.49263
190310,6610.2000.191−135.285230.1440.13955.163260.2980.292449.84365
200310,6390.2010.192−135.413220.1430.13845.088250.2930.287449.72664
210310,6060.2030.194−135.599230.1450.14165.459270.3140.3074710.29468
220310,5250.1830.174−134.672120.1480.143−33.74470.2210.217306.23840
230310,5130.1790.171−144.668130.1530.148−63.72970.2060.202276.11340
240310,4750.1720.164−144.347100.1300.126−13.52390.2190.214306.15439
250310,4620.1710.163−144.307100.1340.130−23.48080.2110.206285.95838
258310,4430.1720.165−144.371100.1340.129−23.50480.2140.210286.06339
M_max = 18 in variance model selection
0437,2514.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10325,8462.1122.02029050.1422212.2012.12732846.1532463.1833.11647859.925396
20318,9851.0270.98210425.9911360.5660.5478218.748991.0891.06616227.261179
30315,8960.7050.6746917.595790.5260.5087613.871710.9280.90813720.356132
40314,0440.4040.3862810.602450.2960.286308.630340.5310.5196812.46271
50313,4830.3300.31698.715350.2250.21756.643170.3650.358329.17744
60312,9390.3160.302−57.833310.2100.20356.895260.3520.3454210.38263
70312,3590.2700.258−106.927210.2160.208117.084270.3930.3854910.78165
80312,1650.2600.248−126.555220.2060.199107.018290.3730.3654810.72167
90311,9640.2330.223−156.130240.1960.18916.433260.3130.307379.83861
100311,8820.2370.227−175.756200.1900.18326.218240.3050.298369.43158
110311,8270.2390.229−185.733210.1900.18416.305250.3030.296369.58860
120311,7690.2450.234−205.762180.1890.18336.425270.3190.313399.92462
130311,7160.2240.214−165.502150.1900.183106.403270.3500.342469.99363
140311,0050.2160.206−135.222210.1420.13765.361260.2910.285429.41662
150310,6600.2030.194−125.094210.1330.12975.158260.2840.278429.12960
160310,6110.2010.192−125.033210.1370.13385.360270.3030.297459.56863
170310,5860.1960.187−114.994210.1360.132105.548280.3160.310479.82165
180310,5500.1930.184−124.987210.1350.13014.264200.2410.236358.20054
190310,5350.1960.187−145.087210.1390.135−34.049180.2170.212317.88452
200310,5110.1820.174−114.965210.1310.12703.992180.2310.226347.81052
210310,4670.1850.177−125.011200.1310.12703.967170.2310.226347.74151
220310,4630.1810.173−125.059200.1300.12524.181190.2460.241368.11054
230310,4540.1810.173−115.409230.1380.13314.405200.2460.241368.43656
240310,4400.1820.174−115.398230.1380.13314.457210.2500.245378.55957
250310,4310.1810.173−115.509230.1380.13314.525210.2510.246378.63857
252310,4250.1850.176−115.515230.1380.13314.548220.2530.248378.70057
M_max = 22 in variance model selection
0437,2514.5574.357−238100.000383.2313.1210100.0002614.0273.942106100.000367
10325,7962.1152.02329050.2032222.2062.13132946.2382463.1893.12147960.021396
20318,9401.0260.98111225.9651350.6660.6449820.2431071.1991.17417928.606188
30315,8490.7080.6777017.681790.5320.5147714.005720.9360.91713920.526133
40314,0010.4070.3892810.712460.2990.289318.710340.5360.5246912.58973
50313,4130.3480.332109.025360.2230.21656.616170.3640.356329.22544
60312,8970.3160.302−47.866310.2110.20366.983270.3580.3514410.54965
70312,3170.2710.259−96.969220.2170.210127.185280.3990.3915010.96167
80312,1200.2600.249−116.565230.2070.200107.119300.3790.3714910.89669
90311,9200.2350.224−156.091240.1960.18916.427260.3130.306379.79161
100311,8420.2380.228−166.034230.1940.18716.531270.3110.304379.94963
110311,7840.2410.230−185.900240.1920.18516.554280.3040.2973610.00463
120311,7370.2410.230−185.809210.1890.18226.395270.3100.303389.92463
130311,6900.2270.217−165.653180.1870.18186.468280.3390.3324510.10064
140310,9250.2130.203−135.206220.1400.13675.430270.2930.286439.54863
150310,6040.2020.193−115.131220.1330.12975.286270.2890.283429.32161
160310,5590.2000.192−115.063220.1390.13495.507280.3100.304469.79165
170310,5320.1890.181−104.999220.1340.12985.194260.2970.291449.43862
180310,5030.1930.185−125.222240.1320.12845.137260.2700.264409.46262
190310,4810.1940.186−135.113220.1400.136−24.124190.2200.215328.01953
200310,4540.1890.181−135.164210.1350.130−14.033180.2240.220337.83652
210310,4120.1850.177−125.038200.1320.12804.019180.2310.226347.80552
220310,4060.1850.176−125.067200.1320.12814.062180.2390.234357.98153
224310,4040.1840.176−125.112200.1320.12814.076180.2390.234357.93452
Table A30. AIC scores and out-of-sample validation figures of all derived FGLS proxy functions of BEL under 150–443 and 300–886 after the final iteration. The best and worst AIC scores and validation figures are highlighted in green and red, respectively.
k | M_max | AIC | v.mae | v.mae_a | v.res | v.mae_0 | v.res_0 | ns.mae | ns.mae_a | ns.res | ns.mae_0 | ns.res_0 | cr.mae | cr.mae_a | cr.res | cr.mae_0 | cr.res_0
Type I algorithm under 150-443
1502315,9800.2390.229−168.147460.2550.246-304.032170.1530.14927.48949
1506311,9490.2310.221−137.577410.2030.196−184.762220.1860.183178.63757
15010311,3630.2270.217−107.460400.1940.188−154.708210.1950.191208.53756
15014311,1610.2310.221−97.527410.1930.186−144.701210.2000.195218.49756
15018311,0480.2280.218−97.433400.1870.181−134.780220.2040.200228.62157
15022310,9740.2300.220−87.436400.1870.180−124.802220.2070.203238.63957
Type I algorithm under 300-886
2242315,6150.1960.187−96.527330.2750.266−304.564-30.1750.17155.40132
2246311,5540.2000.191−96.399330.2400.232−234.29240.1830.179136.38940
22410311,2870.2030.194−86.473330.2340.226−214.31050.1890.185166.62142
22414310,9800.2000.191−76.246310.2220.214−194.25760.1940.190186.69742
22418310,8810.2000.191−76.194310.2170.210−184.25060.1980.194196.80143
22422310,8320.2000.192−76.256320.2170.210−184.22370.1960.192196.84444
Type II algorithm under 150-443
1502315,6290.2390.229-216.467340.2610.252−303.796100.1770.17336.65444
1506311,4260.2240.215−145.904310.1770.171−94.756220.2260.221299.00559
15010310,8680.2120.203−145.375250.1480.14305.098250.2560.250369.29661
15014310,7140.2140.205−145.368250.1460.141−14.857230.2440.239348.90659
15018310,6440.2110.202−145.131230.1390.13514.816230.2500.245358.61857
15022310,5900.2080.199−135.209230.1390.13455.193260.2750.270409.25661
Type II algorithm under 300-886
2242315,4910.2090.199−176.584340.2260.219−223.999140.1650.161127.29048
2376311,1440.1960.188−155.342230.1250.12104.765240.2350.230308.24354
24410310,9050.2080.199−116.577350.1530.147−25.617290.2520.2473710.25968
25814310,4430.1720.165−144.371100.1340.129−23.50480.2140.210286.06339
25218310,4250.1850.176−115.515230.1380.13314.548220.2530.248378.70057
22422310,4040.1840.176−125.112200.1320.12814.076180.2390.234357.93452
Table A31. Settings and out-of-sample validation figures of the best-performing multivariate adaptive regression splines (MARS) models derived in a two-step approach, sorted by first-step and second-step validation sets. The best and worst validation figures are highlighted in green and red, respectively.
k | K_max | t_min | op | glm | v.mae | v.mae_a | v.res | v.mae_0 | v.res_0 | ns.mae | ns.mae_a | ns.res | ns.mae_0 | ns.res_0 | cr.mae | cr.mae_a | cr.res | cr.mae_0 | cr.res_0
Sobol set 2
14820606sinv.g, id0.2650.253−2410.317550.5750.555−4016.234−560.8220.8058017.65764
495003ninv.g, log0.370.35409.168190.7050.681−1229.477−1020.5250.5142516.891−65
606604sinv.g, id0.3240.31−118.517161.7121.65415144.5041320.9170.89710219.87783
455004binv.g, id0.3470.332−28.686110.4470.431−3622.702−1250.5110.5003515.785−54
Sobol set and nested simulations set
455004binv.g, id0.3470.332−28.686110.4470.431−3622.702−1250.5110.5003515.785−54
171904binv.g, id0.8340.7972524.6731240.480.464−441.356-2430.7630.74710821.398−132
708104binv.g, id0.3350.32−2210.872520.5540.535−3514.073−380.8750.85710218.2599
333403ninv.g, id0.4260.407−1010.871211.5651.51210852.38410.6620.6483220.997−75
Sobol set and capital region set
455003bpois, log0.3790.36209.556280.480.464−4324.878−1390.510.5002816.938−69
313403bpois, log0.4760.455−1312.752460.5930.573−5431.148−1750.6610.6471823.088−103
455004binv.g, id0.3470.332−28.686110.4470.431−3622.702−1250.5110.5003515.785−54
596603bpois, log0.4280.4394016.674980.760.734−1222.511−410.8090.7926818.40339
Nested simulations set and Sobol set
134144 1.6 5 5ngaus, log0.2730.261−2210.255541.0250.99−128.192−231.5151.48417932.616157
455004sinv.g, id0.3470.332−28.686110.4470.431−3622.702−1250.5110.5003515.785−54
606604sinv.g, id0.3240.31−118.517161.7121.65415144.5041320.9170.89710219.87783
455004binv.g, id0.3470.332−28.686110.4470.431−3622.702−1250.5110.5003515.785−54
Nested simulations set 2
455004binv.g, id0.3470.332−28.686110.4470.431−3622.702−1250.5110.5003515.785−54
146159 9.4 6 5ngaus, log0.2790.267−2410.008531.0250.99026.779−111.4981.46717431.702163
7697 3.8 5 4binv.g, log0.3440.329−1710.676520.5380.52−3711.874−240.8040.7878816.584100
10711304ngaus, log0.3210.307−2011.976630.9970.963825.69401.5291.49619132.148182
Nested simulations set and capital region set
455004spois, id0.3530.338−38.891180.4490.434−3623.634−1310.5040.4933616.079−58
313404spois, id0.4370.418−1111.254320.5480.53−4528.444−1570.6480.6342921.374−84
7282 3.1 5 4binv.g, inv0.3650.349−1611.181530.5790.56−4914.528−510.7000.6856514.61964
455004binv.g, id0.3470.332−28.686110.4470.431−3622.702−1250.5110.5003515.785−54
Capital region set and Sobol set
12514405finv.g, inv0.2830.271−2010.336540.630.608−6317.245−760.6750.664514.73732
455004sgaus, log0.3820.365−19.916320.4690.453−4125.487−1440.4950.4853216.868−71
114144 1.9 5 5sinv.g, 1 / μ 2 0.3130.299−129.414400.7080.684−7720.115−970.6260.6123614.09517
455004bgaus, log0.3820.365−19.916320.4690.453−4125.487−1440.4950.4853216.868−71
Capital region set and nested simulations set
455004fgaus, log0.3860.369−110.095340.4680.452−4125.709−1450.4960.4863217.077−73
646604ninv.g, 1 / μ 2 0.420.401−311.506390.840.811325.969−381.2981.27114629.11105
14817506sinv.g, 1 / μ 2 0.3110.297−1610.447520.5760.556−5514.565−570.6110.5983012.84427
778104ninv.g, 1 / μ 2 0.3870.37−1111.519521.0290.994−2825.831−321.2791.25214826.700145
Capital region set 2
455004sgaus, log0.3820.365−19.916320.4690.453−4125.487−1440.4950.4853216.868−71
333403ninv.g, 1 / μ 2 0.5640.539−1415.693640.8270.800−5438.645−1850.7450.729-226.338−134
14817506sinv.g, 1 / μ 2 0.3110.297−1610.447520.5760.556−5514.565−570.6110.5983012.84427
148175 4.7 6 5finv.g, inv0.2960.283−2010.416530.5490.53−5418.26−870.6640.653216.307−1
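The tabulated MARS models were derived in a two-step approach; the earth package (Milborrow 2018) provides the MARS routines of Friedman (1991, 1993) referenced in this paper. A minimal, purely illustrative R sketch with a hypothetical data set fit_data and example settings in the spirit of the table columns (maximum term number, forward-pass threshold, pruning method, GLM family and link refit):

```r
library(earth)  # Milborrow (2018): MARS after Friedman (1991)

# Illustrative call with hypothetical data and settings: nk caps the number of
# model terms, thresh is the forward-pass stopping threshold, pmethod selects
# the pruning method, and the glm argument refits the pruned model as a GLM,
# e.g. inverse gaussian with identity link as in several table rows.
mars_fit <- earth(
  y ~ ., data = fit_data,
  nk      = 50,
  degree  = 3,            # maximum interaction order of the hinge products
  thresh  = 0,
  pmethod = "backward",
  glm     = list(family = inverse.gaussian(link = "identity"))
)
summary(mars_fit)
```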
Table A32. Best MARS model of BEL derived in a two-step approach with the final coefficients.
k | h_k(X) | β̂_MARS,k
0 | 1 | 115,397.13
1 | h(X_8 − 0.104892) | 7901.89
2 | h(0.104892 − X_8) | −8165.64
3 | h(0.205577 − X_1) · h(0.104892 − X_8) | 688.83
4 | h(X_6 − 1.17224) | 265.08
5 | h(1.17224 − X_6) | −280.94
6 | h(X_15 − 53.8706) | −2.11
7 | h(53.8706 − X_15) | 1.16
8 | h(X_7 − 0.147599) | −60.90
9 | h(0.147599 − X_7) | −334.77
10 | h(X_8 − 0.0456197) | 3183.07
11 | h(0.205577 − X_1) · h(0.104892 − X_8) · h(X_15 − 64.6262) | −9.48
12 | h(0.205577 − X_1) · h(0.104892 − X_8) · h(64.6262 − X_15) | 29.85
13 | h(X_1 − 0.945371) | −64.88
14 | h(0.945371 − X_1) | 124.45
15 | h(X_6 − 1.56058) · h(0.104892 − X_8) | −815.20
16 | h(1.56058 − X_6) · h(0.104892 − X_8) | 1085.80
17 | h(1.44218 − X_2) | −60.23
18 | h(X_1 − 1.61447) · h(1.56058 − X_6) · h(0.104892 − X_8) | −233.14
19 | h(1.61447 − X_1) · h(1.56058 − X_6) · h(0.104892 − X_8) | 415.92
20 | h(X_8 − 0.0159508) · h(53.8706 − X_15) | 8.94
21 | h(0.0159508 − X_8) · h(53.8706 − X_15) | 47.99
22 | h(X_9 − 0.247192) | 47.7215432
23 | h(0.247192 − X_9) | −82.5804328
24 | h(0.993896 − X_12) | −63.6091725
25 | h(X_1 − 0.0195594) · h(0.0159508 − X_8) · h(53.8706 − X_15) | −12.58
26 | h(0.0195594 − X_1) · h(0.0159508 − X_8) · h(53.8706 − X_15) | −42.25
27 | h(X_7 − 0.147599) · h(X_8 − 0.191689) | 2124.93
28 | h(X_7 − 0.147599) · h(0.191689 − X_8) | 1510.41
29 | h(X_3 − 0.323352) · h(0.104892 − X_8) | 948.86
30 | h(0.323352 − X_3) · h(0.104892 − X_8) | −577.61
31 | h(X_1 − 1.26627) · h(X_7 − 0.147599) | 101.15
32 | h(1.26627 − X_1) · h(X_7 − 0.147599) | −10.00
33 | h(X_14 − 0.684998) | 109.76
34 | h(0.684998 − X_14) | −37.89
35 | h(1.17224 − X_6) · h(X_8 − 0.12538) | 216.62
36 | h(1.17224 − X_6) · h(0.12538 − X_8) | 2076.18
37 | h(0.945371 − X_1) · h(X_8 − 0.0019988) | −156.79
38 | h(0.945371 − X_1) · h(0.0019988 − X_8) | 1262.56
39 | h(X_1 − 1.58818) · h(X_6 − 1.56058) · h(0.104892 − X_8) | 137.60
40 | h(1.56058 − X_6) · h(0.104892 − X_8) · h(X_15 − 76.9327) | −4.87
41 | h(1.56058 − X_6) · h(0.104892 − X_8) · h(76.9327 − X_15) | 2.11
42 | h(0.205577 − X_1) · h(X_2 − 1.43028) · h(0.104892 − X_8) | 24,003.07
43 | h(0.205577 − X_1) · h(1.43028 − X_2) · h(0.104892 − X_8) | −161.88
44 | h(X_1 − 0.945371) · h(X_8 − 0.0165546) | −224.18
45 | h(X_1 − 0.945371) · h(0.0165546 − X_8) | −987.47
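Here h denotes the usual MARS hinge function h(x) = max(x, 0), so each basis function h_k(X) is a product of at most three hinge terms and the proxy is the coefficient-weighted sum over all 46 rows. A small R sketch, evaluating the first terms of Table A32 at a hypothetical risk factor vector x:

```r
h <- function(x) pmax(x, 0)  # MARS hinge function

# First terms of the tabulated BEL proxy at a hypothetical point x with
# components x[1], ..., x[15]; the remaining rows of Table A32 follow
# the same pattern.
bel_proxy_head <- function(x) {
  115397.13 +
    7901.89 * h(x[8] - 0.104892) -
    8165.64 * h(0.104892 - x[8]) +
     688.83 * h(0.205577 - x[1]) * h(0.104892 - x[8]) +
     265.08 * h(x[6] - 1.17224) -
     280.94 * h(1.17224 - x[6])
}
```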
Table A33. Basis function sets of LC and LL proxy functions of BEL corresponding to K_max ∈ {16, 27}, derived by adaptive OLS selection.
k | r_k,1 | r_k,2 | r_k,3 | r_k,4 | r_k,5 | r_k,6 | r_k,7 | r_k,8 | r_k,9 | r_k,10 | r_k,11 | r_k,12 | r_k,13 | r_k,14 | r_k,15
K_max = 16 in adaptive basis function selection
0 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1 | 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
2 | 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 | 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0
4 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
5 | 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
6 | 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0
7 | 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0
8 | 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0
9 | 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0
10 | 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1
11 | 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1
12 | 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
13 | 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0
14 | 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
15 | 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
16 | 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0
K_max = 27 in adaptive basis function selection
17 | 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0
18 | 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1
19 | 0 0 0 0 0 0 0 2 0 0 0 0 0 0 1
20 | 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
21 | 1 0 0 0 0 0 0 2 0 0 0 0 0 0 0
22 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
23 | 2 0 0 0 0 0 0 0 0 0 0 0 0 0 1
24 | 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0
25 | 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
26 | 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0
27 | 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0
Table A34. Basis function sets of LC and LL proxy functions of BEL corresponding to K_max ∈ {15, 22}, derived by risk-factor-wise or combined risk-factor-wise and adaptive OLS selection.
k | r_k,1 | r_k,2 | r_k,3 | r_k,4 | r_k,5 | r_k,6 | r_k,7 | r_k,8 | r_k,9 | r_k,10 | r_k,11 | r_k,12 | r_k,13 | r_k,14 | r_k,15
K_max = 15 in risk-factor-wise basis function selection
0 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1 | 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 | 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
3 | 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
4 | 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
5 | 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
6 | 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0
7 | 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
8 | 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
9 | 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
10 | 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0
11 | 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0
12 | 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
13 | 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0
14 | 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
15 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
K_max = 22 in combined risk-factor-wise and adaptive selection
16 | 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0
17 | 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0
18 | 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1
19 | 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1
20 | 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0
21 | 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0
22 | 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0
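Each row of Tables A33 and A34 encodes a monomial basis function of the fifteen risk factors through its exponent vector, that is, e_k(X) = X_1^{r_k,1} · ... · X_15^{r_k,15}, with row k = 0 giving the intercept. A minimal R sketch, assuming a hypothetical exponent matrix R (one row per basis function, 15 columns) and a risk factor vector x:

```r
# Evaluate the monomial basis functions encoded by an exponent matrix R
# (rows = basis functions, columns = 15 risk factors) at a point x;
# a row of zeros yields the intercept basis function 1.
eval_basis <- function(R, x) {
  apply(R, 1, function(r) prod(x^r))
}
```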
Table A35. Settings and out-of-sample validation figures of LC and LL proxy functions of BEL using the basis function sets from Table A33 and Table A34. The best and worst validation figures are highlighted in green and red, respectively.
k | bw | o | v.mae | v.mae_a | v.res | v.mae_0 | v.res_0 | ns.mae | ns.mae_a | ns.res | ns.mae_0 | ns.res_0 | cr.mae | cr.mae_a | cr.res | cr.mae_0 | cr.res_0
LC regression with gaussian kernel and LOO-CV
160.120.550.52−4413500.70.68−8612−70.550.54−351245
160.220.40.38−2611470.520.5−511170.440.4351363
160.320.370.35−2511450.450.44−3711190.440.4351260
270.220.390.38−2611430.510.49−511130.430.4341258
160.142.82.68−15584−4078.057.78−558247−8255.044.94−96128−363
LL regression with gaussian kernel and LOO-CV
160.120.380.36−1112570.570.55−6810−150.410.4−22931
160.220.340.33−611590.450.43−49820.370.3651055
270.12210.3201.06−30,6825209−30,589131.04126.61−18,9813670−18,9024.094.0−8292−3
270.222726.472606.74400,25467,487400,3063502.243383.85422,44398,081422,4811.851.81−254113
LC regression with gaussian kernel and AIC
160.120.570.55−4314550.650.62−7212120.50.49−121472
160.221.631.553841731.941.88266572862.572.5138461404
270.120.560.54−4214560.640.62−7212120.50.49−121472
LC regression with Epanechnikov kernel and LOO-CV
150.120.530.5−3613411.051.02−3822240.510.5−291133
150.220.410.39−3110331.141.1326531.181.169727146
150.320.40.38−309230.960.931623540.460.45−61133
150.420.350.33−229181.111.081228390.470.46−21125
150.520.340.33−189371.241.2630460.510.5−221118
150.620.330.32−1710501.161.122127740.460.45−21150
150.720.330.32−1610411.171.131828610.440.43−14928
150.820.330.31−1610451.211.172929761.161.1310126148
150.920.320.3−2012611.141.140271071.141.1111129178
151.020.320.31−2210491.191.1552291091.131.1110627163
160.120.530.5−4013431.21.16228710.510.5−201249
160.220.410.39−2611501.161.122728880.440.4321264
160.320.360.34−279291.071.034127830.440.4311143
160.420.330.32−198221.161.122730530.450.4441030
160.520.320.31−169361.341.33033671.221.1910127138
160.140.450.43−2613340.740.71−6816−230.590.5751551
160.243.293.15−1041608917.57.24−143299668.067.891762951157
160.163.313.16−3284685.745.55−96158−106.626.48−5314832
160.263.323.18−7185−2179.379.0673268−8713.1812.924630486
160.183.943.77146105−11910.7110.35−191308−4708.848.65−312205−591
160.288.538.16397286−6397.797.5270347−98012.3712.111365390315
220.120.50.48−3712441.071.03−4122250.520.5−301137
220.220.420.4−2810391.071.03−325501.21.1710629159
220.320.390.37−299230.890.86622430.450.44−31134
220.420.350.33−218161.051.02327260.490.48−41119
220.520.330.31−149321.171.13−228290.470.46−151016
220.620.330.32−1710461.091.061125600.450.44−11148
220.720.320.31−159391.231.182629661.171.149926139
220.820.320.3−1510461.191.153228781.121.110626152
220.920.310.3−1911581.151.1139271021.121.111128174
221.020.310.3−2110481.131.094127961.121.110727162
270.220.40.38−2611451.151.122628830.440.4311258
270.320.380.36−289240.90.87722450.460.45−21136
270.420.350.33−219171.051.02227260.480.47−41111
LL regression with Epanechnikov kernel and LOO-CV
150.120.450.43−4910401.221.18−10022−260.780.77−10411−30
150.220.360.34−348131.591.53−14540−1120.60.58−5411−21
150.320.320.31−367171.911.85134481730.60.58−36113
150.420.340.33−408331.831.76−16442−1060.430.42−4969
150.520.330.31−408342.22.12−21953−1600.410.41−45615
150.620.30.29−337290.940.91819560.330.32−28521
150.720.310.3−407230.940.91−1319360.360.35−4058
150.820.290.28−38580.860.83419360.320.32−2953
220.12731.51699.39273885,172479,6121564.871511.98−111,628127,410365,231492.49482.11−19,40476,575457,455
220.220.340.33−34800.830.8−152140.420.41−258−5
220.3298.0393.7314,396148−250101.6998.2515,174147513100.097.8915,028100367
220.4298.0593.7514,399147−248113.99110.1413,158495−1503100.097.8915,028100367
220.52100.095.6114,68510038118.95114.9314,984651323100.097.8915,028100367
220.6299.7295.3414,644106−3100.5997.1915,004120343100.097.8915,028100367
220.72100.095.6114,68510038100.096.6214,922100261100.097.8915,028100367
220.820.290.28−3959152.43147.2722,622426422,6550.310.30−355−2
LC regression with uniform kernel and LOO-CV
160.120.750.71−5618461.531.48−5232360.730.72−591529
160.521.221.17−7829162.62.513018238110.4510.2314192421498
270.120.640.61−3816311.31.261332680.590.58−21553
270.520.350.34−1612531.341.32533791.41.3711732171
160.140.710.68−3317471.271.23-131650.670.65−231543
160.541.851.76−13939502.292.2218511937.096.94769157943
270.140.660.63−3815321.321.27732630.580.57−151440
270.540.390.37−1313671.261.211631820.520.51−101356
160.161.831.75−165381001.951.88−17829721.551.51−1902460
160.561.831.75−6562711.081.0480653441.661.6322574488
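The local constant (LC, i.e., Nadaraya-Watson) and local linear (LL) regressions above can be computed with the np package (Racine and Hayfield 2018). A minimal sketch with hypothetical fitting data X, y and validation scenarios X_valid; bwmethod = "cv.ls" performs the least-squares (leave-one-out) cross-validation used in most table blocks, while bwmethod = "cv.aic" yields the improved AIC criterion of Hurvich, Simonoff and Tsai (1998).

```r
library(np)  # Racine and Hayfield (2018): kernel smoothing methods

# Bandwidth selection by least-squares (leave-one-out) cross-validation for a
# local linear regression with Epanechnikov kernel; regtype = "lc" would give
# the local constant (Nadaraya-Watson) estimator instead. X, y and X_valid
# are hypothetical placeholders.
bw <- npregbw(xdat = X, ydat = y,
              regtype  = "ll",
              bwmethod = "cv.ls",
              ckertype = "epanechnikov")

ll_fit <- npreg(bws = bw, exdat = X_valid)  # evaluate at validation scenarios
yhat   <- fitted(ll_fit)                    # proxy values at X_valid
```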

References

  1. Akaike, Hirotogu. 1973. Information theory and an extension of the maximum likelihood principle. In International Symposium on Information Theory, 2nd ed. Budapest: Akadémiai Kiadó. [Google Scholar]
  2. Bauer, Daniel, and Hongjun Ha. 2015. A least-squares Monte Carlo approach to the calculation of capital requirements. Paper presented at the World Risk and Insurance Economics Congress, Munich, Germany, August 2–6; Available online: https://danielbaueracademic.files.wordpress.com/2018/02/habauer_lsm.pdf (accessed on 10 June 2018).
  3. Bauer, Daniel, Andreas Reuss, and Daniela Singer. 2012. On the calculation of the solvency capital requirement based on nested simulations. The Journal of the International Actuarial Association 42: 453–99. [Google Scholar]
  4. Bettels, Christian, Johannes Fabrega, and Christian Weiß. 2014. Anwendung von Least Squares Monte Carlo (LSMC) im Solvency-II-Kontext-Teil 1. Der Aktuar 2: 85–91. [Google Scholar]
  5. Born, Rudolf. 2018. Künstliche Neuronale Netze im Risikomanagement. Master’s thesis, Universität zu Köln, Köln, Germany. [Google Scholar]
  6. Breusch, Trevor S., and Adrian R. Pagan. 1979. A simple test for heteroscedasticity and random coefficient variation. Econometrica 47: 1287–94. [Google Scholar] [CrossRef]
  7. Burnham, Kenneth P., and David R. Anderson. 2002. Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, 2nd ed. New York: Springer-Verlag. [Google Scholar]
  8. Castellani, Gilberto, Ugo Fiore, Zelda Marino, Luca Passalacqua, Francesca Perla, Salvatore Scognamiglio, and Paolo Zanetti. 2018. An Investigation of Machine Learning Approaches in the Solvency Ii Valuation Framework. Available online: http://dx.doi.org/10.2139/ssrn.3303296 (accessed on 14 August 2019).
  9. Craven, Peter, and Grace Wahba. 1979. Smoothing noisy data with spline functions. Numerische Mathematik 31: 377–403. [Google Scholar] [CrossRef]
  10. Dahlquist, Germund, and Åke Björck. 1974. Numerical Methods. Englewood Cliffs: Prentice-Hall. [Google Scholar]
  11. Dobson, Annette J. 2002. An Introduction to Statistical Modelling, 2nd ed. Boca Raton, London, New York, and Washington: Chapman & Hall/CRC. [Google Scholar]
  12. Drucker, Harris, Chris J.C. Burges, Linda Kaufman, Alex Smola, and Vladimir Vapnik. 1997. Support vector regression machines. In Advances in Neural Information Processing Systems 9. Denver: MIT Press, pp. 155–61. [Google Scholar]
  13. Duchon, Jean. 1977. Splines minimizing rotation-invariant semi-norms in solobev spaces. In Constructive Theory of Functions of Several Variables. Edited by W. Schempp and K. Zeller. Berlin: Springer, pp. 85–100. [Google Scholar]
  14. Dutang, Christophe. 2017. Some Explanations about the IWLS Algorithm to Fit Generalized Linear Models. hal-01577698. France: HAL. [Google Scholar]
  15. Eilers, Paul H.C., and Brian D. Marx. 1996. Flexible smoothing with b-splines and penalties. Statistical Science 11: 89–121. [Google Scholar] [CrossRef]
  16. European Parliament, and European Council. 2009. Directive 2009/138/EC on the Taking-Up and Pursuit of the Business of Insurance and Reinsurance (Solvency II). Directive. Brussels: European Council, pp. 112–127. [Google Scholar]
  17. Friedman, Jerome H. 1991. Multivariate adaptive regression splines (with discussion). The Annals of Statistics 19: 1–141. [Google Scholar] [CrossRef]
  18. Friedman, Jerome H. 1993. Fast MARS. In Technical Report 110. Stanford: Stanford University Department of Statistics. [Google Scholar]
  19. Friedman, Jerome H., and Werner Stuetzle. 1981. Projection pursuit regression. Journal of the American Statistical Association 76: 817–23. [Google Scholar] [CrossRef]
  20. Gay, David M. 1990. Usage summary for selected optimization routines. In Computing Science Technical Report 153. Murray Hill: AT&T Bell Laboratories. [Google Scholar]
  21. Gordy, Michael B., and Sandeep Juneja. 2010. Nested simulations in portfolio risk measurement. Management Science 56: 1833–48. [Google Scholar] [CrossRef] [Green Version]
  22. Green, P. J. 1984. Iteratively reweighted least squares for maximum likelihood estimation, and some robust and resistant alternatives. Journal of the Royal Statistical Society, Series B 46: 149–92. [Google Scholar] [CrossRef]
  23. Hartmann, Stefanie. 2015. Verallgemeinerte lineare Modelle im Kontext des Least Squares Monte Carlo Verfahrens. Master’s thesis, Katholische Universität Eichstätt-Ingolstadt, Eichstätt, Germany. [Google Scholar]
  24. Harvey, Andrew C. 1976. Estimating regression models with multiplicative heteroscedasticity. Econometrica 44: 461–65. [Google Scholar] [CrossRef]
  25. Hastie, Trevor, and Daryl Pregibon. 1992. Chapter 6 ‘Generalized Linear Models’ in Statistical Models in S. Boca Raton, London, New York, and Washington: Wadsworth & Brooks/Cole. [Google Scholar]
  26. Hastie, Trevor, and Robert Tibshirani. 1986. Generalized additive models. Statistical Science 1: 297–318. [Google Scholar] [CrossRef]
  27. Hastie, Trevor, and Robert Tibshirani. 1990. Generalized Additive Models. London: Chapman & Hall. [Google Scholar]
  28. Hastie, Trevor, Robert Tibshirani, and Jerome H. Friedman. 2017. The Elements of Statistical Learning, 2nd ed. New York: Springer Series in Statistics. [Google Scholar]
  29. Hayashi, Fumio. 2000. Econometrics. Princeton: Princeton University Press. [Google Scholar]
  30. Hejazi, Seyed A., and Kenneth R. Jackson. 2017. Efficient valuation of scr via a neural network approach. Journal of Computational and Applied Mathematics 313: 427–39. [Google Scholar] [CrossRef] [Green Version]
  31. Hocking, R. R. 1976. The analysis and selection of variables in linear regression. Biometrics 32: 1–49. [Google Scholar] [CrossRef]
  32. Hurvich, Clifford M., Jeffrey S. Simonoff, and Chih-Ling Tsai. 1998. Smoothing parameter selection in nonparametric regression using an improved Akaike information criterion. Journal of the Royal Statistical Society, Series B 60: 271–93. [Google Scholar] [CrossRef]
  33. Kandasamy, Kirthevasan, and Yaoliang Yu. 2016. Additive approximations in high dimensional nonparametric regression via the SALSA. Paper presented at the 33rd International Conference on Machine Learning, New York, NY, USA, June 19–24; pp. 69–78. [Google Scholar]
  34. Kazimov, Nurlan. 2018. Least Squares Monte Carlo modeling based on radial basis functions. Master’s thesis, Universität Ulm, Ulm, Germany. [Google Scholar]
  35. Kopczyk, Dawid. 2018. Proxy Modeling in Life Insurance Companies With the Use of Machine Learning Algorithms. Working Paper. Available online: http://dx.doi.org/10.2139/ssrn.3396481 (accessed on 29 July 2019).
  36. Krah, Anne-Sophie. 2015. Suitable information criteria and regression methods for the polynomial fitting process in the lsmc model. Master’s thesis, Julius-Maximilians-Universität Würzburg, Würzburg, Germany. [Google Scholar]
  37. Krah, Anne-Sophie, Zoran Nikolić, and Ralf Korn. 2018. A least-squares Monte Carlo framework in proxy modeling of life insurance companies. Risks 6: 62. [Google Scholar] [CrossRef] [Green Version]
  38. Li, Qi, and Jeff Racine. 2004. Cross-validated local linear nonparametric regression. Statistica Sinica 14: 485–512. [Google Scholar]
  39. Magnus, Jan R. 1978. Maximum likelihood estimation of the GLS model with unknown parameters in the disturbance covariance matrix. Journal of Econometrics 7: 281–312. [Google Scholar] [CrossRef] [Green Version]
  40. Marra, Giampiero, and Simon N. Wood. 2012. Coverage properties of confidence intervals for generalized additive model components. Scandinavian Journal of Statistics 39: 53–74. [Google Scholar] [CrossRef] [Green Version]
  41. Marx, Brian D., and Paul H.C. Eilers. 1998. Direct generalized additive modeling with penalized likelihood. Computational Statistics & Data Analysis 28: 193–209. [Google Scholar]
  42. McCullagh, Peter, and John A. Nelder. 1989. Generalized Linear Models, 2nd ed. London and New York: Chapman & Hall. [Google Scholar]
  43. McLean, Douglas. 2014. Orthogonality in Proxy Generator. Presentation, Insurance-ERS. Legendre Polynomial/QR Decomposition Equivalence in Multiple Polynomial Regression. New York City: Moody’s Analytics. [Google Scholar]
  44. Milborrow, Stephen. 2018. Earth: Multivariate Adaptive Regression Splines. Derived from mda:mars by Trevor Hastie and Rob Tibshirani. Uses Alan Miller’s Fortran Utilities with Thomas Lumley’s Leaps Wrapper. R Package Version 4.6.3. Available online: https://mran.microsoft.com/snapshot/2018-06-07/web/packages/earth/index.html (accessed on 29 June 2018).
  45. Mourik, Teus. 2003. Market risk of insurance companies. In Discussion Paper IAA Insurer Solvency Assessment Working Party. Amsterdam, The Netherlands. Available online: http://www.actuaires.org/AFIR/colloquia/Maastricht/Mourik.pdf (accessed on 12 August 2019).
  46. Nadaraya, Elizbar A. 1964. On estimating regression. Theory of Probability and Its Applications 9: 141–42. [Google Scholar] [CrossRef]
  47. Nelder, John A., and Robert W. M. Wedderburn. 1972. Generalized linear models. Journal of the Royal Statistical Society, Series A 135: 370–84. [Google Scholar] [CrossRef]
48. Nikolić, Zoran, Christian Jonen, and Chengjia Zhu. 2017. Robust regression technique in LSMC proxy modeling. Der Aktuar 1: 8–16. [Google Scholar]
  49. Nychka, Douglas. 1988. Bayesian confidence intervals for smoothing splines. Journal of the American Statistical Association 83: 1134–43. [Google Scholar] [CrossRef]
50. Pindyck, Robert S., and Daniel L. Rubinfeld. 1998. Econometric Models and Economic Forecasts. Boston: Irwin/McGraw-Hill. [Google Scholar]
51. R Core Team. 2018. stats: R Statistical Functions. R package version 3.2.0. Vienna: R Foundation for Statistical Computing. [Google Scholar]
  52. Racine, Jeffrey S., and Tristen Hayfield. 2018. np: Nonparametric Kernel Smoothing Methods for Mixed Data Types. R package version 0.60-8. Available online: https://github.com/JeffreyRacine/R-Package-np (accessed on 29 June 2018).
53. Runge, Carl. 1901. Über empirische Funktionen und die Interpolation zwischen äquidistanten Ordinaten [On empirical functions and interpolation between equidistant ordinates]. Zeitschrift für Mathematik und Physik 46: 224–43. [Google Scholar]
  54. Schelthoff, Tom. 2019. Machine Learning Methods as Alternatives to the Least Squares Monte Carlo Model for Calculating the Solvency Capital Requirement of Life and Health Insurance Companies. Master’s thesis, Universität zu Köln, Cologne, Germany. [Google Scholar]
55. Schoenenwald, Johannes J. 2019. Modelli Proxy per la Determinazione dei Requisiti di Capitale Secondo Solvency II [Proxy Models for Determining Capital Requirements under Solvency II]. Master’s thesis, Università degli Studi di Trieste, Trieste, Italy. [Google Scholar]
56. Sell, Robin. 2019. Nicht-Parametrische Regression im Risikomanagement [Non-Parametric Regression in Risk Management]. Bachelor’s thesis, Universität zu Köln, Cologne, Germany. [Google Scholar]
  57. Suykens, Johan A.K., and Joos Vandewalle. 1999. Least squares support vector machine classifiers. Neural Processing Letters 9: 293–300. [Google Scholar] [CrossRef]
  58. Teuguia, Oberlain N., Jiaen Ren, and Frédéric Planchet. 2014. Internal Model in Life Insurance: Application of Least Squares Monte Carlo in Risk Assessment. Technical Report. Lyon: Laboratoire de Sciences Actuarielle et Financière. [Google Scholar]
  59. Tibshirani, Robert. 1996. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B 58: 267–88. [Google Scholar] [CrossRef]
  60. Watson, Geoffrey S. 1964. On estimating regression. Sankhya: The Indian Journal of Statistics, Series A 26: 359–72. [Google Scholar]
  61. Weiß, Christian, and Zoran Nikolić. 2019. An aspect of optimal regression design for LSMC. Monte Carlo Methods and Applications 25: 283–90. [Google Scholar] [CrossRef] [Green Version]
  62. Wood, Simon N. 2000. Modelling and smoothing parameter estimation with multiple quadratic penalties. Journal of the Royal Statistical Society, Series B 62: 413–28. [Google Scholar] [CrossRef] [Green Version]
  63. Wood, Simon N. 2003. Thin plate regression splines. Journal of the Royal Statistical Society, Series B 65: 95–114. [Google Scholar] [CrossRef]
  64. Wood, Simon N. 2006. Generalized additive models. In Lecture Notes, School of Mathematics. Bristol: University of Bristol. [Google Scholar]
  65. Wood, Simon N. 2017. Generalized Additive Models: An Introduction with R, 2nd ed. Boca Raton: CRC Press. [Google Scholar]
66. Wood, Simon N. 2018. mgcv: Mixed GAM Computation Vehicle with Automatic Smoothness Estimation. R package version 1.8-24. Available online: https://rdrr.io/cran/mgcv/ (accessed on 29 June 2018).
  67. Wood, Simon N., Yannig Goude, and Simon Shaw. 2015. Generalized additive models for large data sets. Journal of the Royal Statistical Society, Series C 64: 139–55. [Google Scholar] [CrossRef] [Green Version]
68. Wood, Simon N., Zheyuan Li, Gavin Shaddick, and Nicole H. Augustin. 2017. Generalized additive models for gigadata: Modeling the U.K. black smoke network daily data. Journal of the American Statistical Association 112: 1199–210. [Google Scholar] [CrossRef] [Green Version]
69. Zuur, Alain F., Elena N. Ieno, Neil J. Walker, Anatoly A. Saveliev, and Graham M. Smith. 2009. GLM and GAM for count data. In Mixed Effects Models and Extensions in Ecology with R. New York: Springer, pp. 209–43. [Google Scholar]
Figure 1. Fitting values of best estimate of liabilities with respect to a financial risk factor.
Figure 2. Flowchart of the calibration algorithm.
Figure 3. Nested simulation values of best estimate of liabilities with respect to a financial risk factor.
Figure 4. Generalized additive model (GAM) with a basis expansion in one dimension.
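As a complement to Figure 4, such a one-dimensional smooth can be fitted with the mgcv package cited in the references. The following is a minimal sketch, assuming simulated data and an illustrative basis dimension k = 10 rather than the calibration setting of the paper:

# One-dimensional GAM fitted by penalized regression splines (mgcv).
# Data, noise level and basis dimension are illustrative assumptions.
library(mgcv)
set.seed(1)
x <- runif(200)
y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)
fit <- gam(y ~ s(x, k = 10))  # spline basis expansion in x; smoothing parameter chosen automatically
plot(fit, residuals = TRUE)   # estimated smooth with partial residuals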
Figure 5. Reflected pair of piecewise linear functions with a knot at t.
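The reflected pair in Figure 5 is the elementary building block of the MARS basis. A base-R sketch, with the knot t = 0.5 chosen purely for illustration:

# Reflected pair of piecewise linear hinge functions with knot t:
# (x - t)_+ and (t - x)_+, the building blocks of MARS models.
hinge_pair <- function(x, t) {
  cbind(pos = pmax(0, x - t),
        neg = pmax(0, t - x))
}
x <- seq(0, 1, length.out = 101)
matplot(x, hinge_pair(x, t = 0.5), type = "l", lty = 1,
        ylab = "basis value")  # reproduces the shape of Figure 5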
Figure 6. Locally constant (LC) and locally linear (LL) kernel regression using the Epanechnikov kernel with λ = 0.2 in one dimension.
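Figure 6 contrasts the two kernel estimators. Below is a self-contained base-R sketch of the locally constant (Nadaraya–Watson) and locally linear fits with the Epanechnikov kernel and bandwidth λ = 0.2; the simulated data are an assumption, and the np package from the references provides production-grade implementations:

# Epanechnikov kernel
epan <- function(u) ifelse(abs(u) <= 1, 0.75 * (1 - u^2), 0)

# LC (Nadaraya-Watson) and LL kernel regression at evaluation points x0
# with bandwidth lambda; assumes each x0 has data within one bandwidth.
kern_reg <- function(x, y, x0, lambda = 0.2) {
  sapply(x0, function(x0i) {
    w  <- epan((x - x0i) / lambda)          # local weights
    lc <- sum(w * y) / sum(w)               # locally constant fit
    ll <- predict(lm(y ~ x, weights = w),   # locally linear fit:
                  newdata = data.frame(x = x0i))  # weighted least-squares line
    c(LC = lc, LL = unname(ll))
  })
}

set.seed(2)
x   <- sort(runif(200)); y <- sin(2 * pi * x) + rnorm(200, sd = 0.2)
est <- kern_reg(x, y, x0 = seq(0.1, 0.9, by = 0.01))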
Figure 7. Histograms of fitting and nested simulation values of BEL.
Figure 8. Residual plots on Sobol set.
Figure 9. Residual plots on nested simulations set.
Figure 10. Residual plots on capital region set.
Table 1. Summary statistics of fitting and nested simulation values of best estimate of liabilities (BEL).

                 Fitting Values   Nested Simulation Values
Minimum:         10,883           12,479
1st quartile:    13,824           14,515
Median:          14,907           14,940
Mean:            14,922           14,922
3rd quartile:    15,989           15,330
Maximum:         19,354           17,080
Std. deviation:  1,519            610
Skewness:        0.067            −0.081
Kurtosis:        2.478            3.214
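The statistics in Table 1 can be recomputed from any vector of BEL values in a few lines of base R. A sketch, assuming the table reports moment-based skewness and raw (non-excess) kurtosis, which is consistent with the kurtosis values near 3 above:

# Summary statistics as reported in Table 1, for a numeric vector v.
summarize_bel <- function(v) {
  m <- mean(v); s <- sd(v); z <- (v - m) / s
  c(Minimum          = min(v),
    `1st quartile`   = unname(quantile(v, 0.25)),
    Median           = median(v),
    Mean             = m,
    `3rd quartile`   = unname(quantile(v, 0.75)),
    Maximum          = max(v),
    `Std. deviation` = s,
    Skewness         = mean(z^3),   # moment-based skewness
    Kurtosis         = mean(z^4))   # raw (non-excess) kurtosis
}
# Usage: summarize_bel(fitting_bel), where fitting_bel is a placeholder
# name for the vector of fitting values of BEL.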
