A Two-Population Mortality Model to Assess Longevity Basis Risk

Index-based hedging solutions are used to transfer longevity risk to the capital markets. However, mismatches between the liability of the hedger and the hedging instrument cause longevity basis risk. Therefore, an appropriate two-population model is required to measure and assess the longevity basis risk. In this paper, we aim to construct a two-population mortality model to provide an effective hedge against longevity basis risk. The reference population is modelled using the Lee-Carter model with a renewal process and exponential jumps proposed by Özen and Şahin (2020), and the dynamics of the book population are specified. The analysis, based on UK mortality data, indicates that the proposed model for the reference population and the common age effect model for the book population provide a better fit compared to the other models considered in the paper. Different two-population models are used to investigate the impact of sampling risk on the index-based hedge as well as to analyse the risk reduction regarding hedge effectiveness. The results show that the proposed model provides a significant risk reduction when mortality jumps and sampling risk are taken into account.


Introduction
Longevity risk can be defined as the risk that members of some reference population live longer on average than anticipated. It is a crucial financial concern for both pension plans and life insurers, since these institutions might have to make higher payments than expected. Life expectancy continues to rise in association with improvements in nutrition, hygiene, medical knowledge, lifestyle, and health care. Uncertainty about future mortality improvements might have significant economic implications for annuity providers, pension providers, and social insurance programs. Although individuals have different lifespans, longevity risk affects all pension plans and life insurers, and hence it is not possible to diversify it away by increasing portfolio size. Therefore, hedging of longevity risk is of critical importance for both pension plan providers and life insurance companies. Various solutions have been presented to manage and mitigate longevity risk. Index-based hedging solutions, which include longevity-linked securities and derivatives, provide advantages over other hedging solutions, such as faster execution, greater transparency, liquidity potential, and lower costs [23]. By offering significant capital savings and providing effective risk management, index-based longevity instruments attract increasing interest from within and outside the worlds of insurance and pensions.
The first step in the assessment of longevity risk, and thus in the valuation of index-based financial products, is mortality modelling. The choice of an appropriate model is crucial to quantify the risk and to provide a foundation for pricing and reserving. Due to the inadequate quality and size of portfolio data, a reference population index is commonly used by hedgers in index-based hedging solutions. The payments of the financial products are associated with this reference population index, but not with the (book) population that underlies the portfolio being hedged. Therefore, longevity risk trading usually entails two different populations: the first is affiliated with the portfolio of the hedger, while the other is linked to the hedging instrument [30]. There would then be a potential mismatch between the hedging instrument and the portfolio, due to certain demographic differences (e.g. age profile, sex, socioeconomic status). This might give rise to longevity basis risk, the assessment of which is an active topic in the recent actuarial literature [23]. Hence, a multi-population mortality model is required to measure the basis risk accurately.
Several multi-population mortality models have recently been presented, while only [30] consider transitory mortality jump effects in the modelling process. It is important to incorporate mortality jumps to estimate the uncertainty surrounding a central mortality projection. Incorporating jumps into the modelling process allows us to estimate the probability of catastrophic mortality deterioration when pricing securities for hedging extreme mortality risk [30]. In this paper, a different approach, proposed by Özen and Şahin [26], has been used for modelling jump effects. This approach includes the history of catastrophic events in the jump frequency modelling process by using a renewal process, together with a specification of the Lee-Carter (LC) model for mortality.
The aim of this paper is to build an appropriate two-population mortality model incorporating mortality jumps to assess the longevity basis risk for pricing longevity-linked financial products. Such a model provides a basis for effective risk management strategies. To illustrate the impact of our proposed mortality model on hedge effectiveness, we consider a hedge for a hypothetical pension plan. Moreover, we take sampling risk into account since the available historical data is usually small for a pension plan. Therefore, the size of a pension plan is examined in regard to hedge effectiveness. We also compare the hedge effectiveness of our model with two other commonly used mortality models. The results show that our proposed model provides a better risk reduction.
The remainder of the paper is structured as follows. Section 2 introduces some helpful notations. In Section 3, an overview of the existing multi-population mortality models is provided. The steps for building a two-population mortality model are described in Section 4. Section 5 applies the proposed model to a hypothetical pension plan and examines the effectiveness of the hedge. Finally, Section 6 concludes the paper.

Notations
We begin by introducing some helpful notation adopted from Villegas et al. [29]. Let us denote by R the reference population that is backing the hedging instrument, and by B the book population whose longevity risk is going to be hedged. Time will be measured in units of years, and year t will refer to the time interval [t, t + 1]. For the reference population, $D^R_{x,t}$ and $E^R_{x,t}$ denote the death count and the exposure to risk at age x last birthday in year t. The central mortality rate for an individual of the reference population aged x in year t will be denoted by $m^R_{x,t}$ and computed as $m^R_{x,t} = D^R_{x,t}/E^R_{x,t}$. Likewise, the corresponding quantities for the book population are denoted $D^B_{x,t}$, $E^B_{x,t}$ and $m^B_{x,t}$. A further assumption being made here is that the data for the reference and book populations can differ in their sets of ages and ranges of years. For instance, we have $D^R_{x,t}$ and $E^R_{x,t}$ for consecutive ages $x = x_1, \ldots, x_{n_R}$ and consecutive calendar years $t = t_1, \ldots, t_{n_R}$ in the reference population, while $D^B_{x,t}$ and $E^B_{x,t}$ are available for ages $x_1, \ldots, x_{n_B}$ and calendar years $t = u_1, \ldots, u_{n_B}$ in the book population.
The reference population's data might be provided for a longer time frame than that of the book population, that is, $n_R \geq n_B$. Moreover, the calendar years of the book data may be a subset of the corresponding calendar years for the reference population, $t_{n_B} = t_{n_R}$. Also, the ages covered by the book might constitute a smaller portion of those provided for the reference population.
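As a small numerical illustration of this notation, the central death rates can be computed elementwise from death counts and exposures; all figures below are hypothetical and serve only to show the layout (ages as rows, calendar years as columns):

```python
import numpy as np

# Hypothetical death counts D and central exposures E for one population:
# rows are ages x = 65, 66, 67; columns are calendar years t = 2000, 2001.
D = np.array([[120.0, 118.0],
              [140.0, 137.0],
              [165.0, 160.0]])
E = np.array([[10000.0, 10100.0],
              [ 9500.0,  9600.0],
              [ 9000.0,  9050.0]])

# Central death rates m_{x,t} = D_{x,t} / E_{x,t}
m = D / E
```

The same computation applies to either population; only the age/year ranges covered by the arrays differ.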

An Overview of Mortality Models for Measuring Basis Risk
We need to specify an appropriate two-population model for $m^R_{x,t}$ and $m^B_{x,t}$ which can capture the trends present within both the book and reference populations. It is crucial to incorporate these trends, since the mortality trends of the reference population support the hedging instrument while the trends in the book population drive the longevity basis risk to be hedged. Future mortality is then forecast by the specified model in a consistent way.
Several models have been developed to describe the mortality evolution of two related populations. These models are usually derived by extending earlier single-population models to incorporate the correlations and interactions existing between populations. Although the majority of research on multi-population modelling has been conducted relatively recently, the seeds can be traced back to the influential paper published by Carter and Lee [8]. The paper introduced feasible approaches for extending the authors' single-population model to differences in US mortality between men and women. The first approach for multi-population modelling applies independent Lee-Carter models to the individual populations. The second is the joint-κ model, based on the assumption that the populations' mortality dynamics are driven by one commonly shared time-varying factor. The third approach was based on an extension of the Lee-Carter model, applying co-integration techniques and estimating the populations jointly. Brief descriptions of the models established on the basis of the Lee-Carter model are given below.

Independent Modelling: In this approach, mortality is modelled with two independent Lee-Carter models. Let $m^i_{x,t}$ be the central death rate for population i in year t at age x. The model can then be expressed as
$$\log m^i_{x,t} = a^i_x + b^i_x k^i_t + \epsilon^i_{x,t},$$
where all of the parameters hold the same meanings as in the original Lee-Carter model. The model parameters can be estimated by singular value decomposition, the Markov chain Monte Carlo approach, or maximum likelihood estimation. The mortality indices can be modelled using two independent ARIMA processes for forecasting purposes. Although the model is easily applicable, it ignores the dependency between the mortality rates of the populations. Hence, it might lead to an overestimation of the basis risk.
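As an illustration of the independent-modelling approach, the following is a minimal sketch (not the authors' code) of estimating Lee-Carter parameters for a single population via singular value decomposition. The data are simulated from a known Lee-Carter structure with hypothetical parameter values, and the usual identifiability constraints (sensitivities summing to one, period index summing to zero) are imposed after the decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a small panel of log central death rates (ages x years) from a
# known Lee-Carter structure; all parameter values here are hypothetical.
n_ages, n_years = 5, 40
a_true = np.linspace(-4.0, -2.0, n_ages)            # age pattern a_x
b_true = np.full(n_ages, 1.0 / n_ages)              # sensitivities b_x (sum to 1)
k_true = np.cumsum(rng.normal(-0.3, 0.2, n_years))  # random-walk-with-drift index
log_m = a_true[:, None] + np.outer(b_true, k_true) + rng.normal(0, 0.01, (n_ages, n_years))

# Lee-Carter estimation via singular value decomposition:
a_hat = log_m.mean(axis=1)                 # a_x: row averages of the log rates
Z = log_m - a_hat[:, None]                 # centred log rates
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
b_hat, k_hat = U[:, 0], s[0] * Vt[0]       # leading rank-1 factors

# Impose the identifiability constraints: sum_x b_x = 1 and sum_t k_t = 0.
scale = b_hat.sum()
b_hat, k_hat = b_hat / scale, k_hat * scale
shift = k_hat.mean()
a_hat, k_hat = a_hat + b_hat * shift, k_hat - shift
```

Repeating this independently for each population gives the first approach; the fitted $k_t$ series would then be projected with separate ARIMA processes.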
The Joint-k Model: This model is based on the assumption that the mortality rates of both populations are driven by one single mortality index. The model may be expressed as
$$\log m^i_{x,t} = a^i_x + b^i_x k_t + \epsilon^i_{x,t},$$
where the common mortality index $k_t$ is the driving force of the changes in mortality rates for both populations. Model parameters are estimated as in the previous approach, while the mortality index $k_t$ is modelled with an appropriate ARIMA process. However, the model assumes that the mortality improvements of the populations are perfectly correlated, and the existence of the common factor implies identical mortality improvements for both populations in all periods. Hence, the assumption is not realistic. [22] introduced a population-specific factor for this model, which is referred to as the "augmented common factor model".
Augmented Common Factor: Under the first approach, that of two independent Lee-Carter models, life expectancies can diverge in the long run. The joint-k model cannot completely resolve this issue, since a discrepancy between the two populations in terms of the parameter $b^i_x$ could generate divergences in the mortality predictions. Li and Lee [22] present criteria to avoid the divergence problem, including the following:
- $k^R_t$ and $k^B_t$ have identical drift terms in their ARIMA processes.
Given these conditions, Li and Lee [22] introduced a population-specific factor for the Lee-Carter model:
$$\log m^i_{x,t} = a^i_x + b_x k_t + b^i_x k^i_t + \epsilon^i_{x,t},$$
where the $b^i_x k^i_t$ term serves to capture deviations of the rate of mortality change of population i from the long-term mortality change tendency implied by the common factor $b_x k_t$. The $k^i_t$ factors are modelled using an AR(1) process to ensure the avoidance of divergence in the mortality projections [21].
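The non-divergence role of the AR(1) specification can be illustrated by simulation: with an autoregressive coefficient of modulus below one, the population-specific factor mean-reverts, so deviations from the common trend stay bounded instead of growing like a random walk. The parameter values below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical AR(1) population-specific factor k^i_t:
# k_t = phi * k_{t-1} + eps_t, with |phi| < 1 guaranteeing mean reversion.
phi, sigma = 0.8, 0.1
n = 10_000
k = np.zeros(n)
for t in range(1, n):
    k[t] = phi * k[t - 1] + rng.normal(0.0, sigma)

# For a stationary AR(1), the variance converges to sigma^2 / (1 - phi^2),
# so deviations from the common trend remain bounded rather than diverging.
stationary_var = sigma**2 / (1.0 - phi**2)
```

A random walk (phi = 1) would instead have a variance growing linearly in t, which is exactly the divergence the augmented common factor model is designed to avoid.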
Another modelling approach for two-population mortality is the extension of the Cairns-Blake-Dowd (CBD) mortality model for a single population [7]. A version of the CBD model for two populations and its variants were introduced by Li et al. [24]. For example, the two-population variant of the CBD model with the incorporation of quadratic and cohort effects, known as the M7 model, can be described as
$$\mathrm{logit}\, q^i_{x,t} = \kappa^{1,i}_t + \kappa^{2,i}_t (x - \bar{x}) + \kappa^{3,i}_t \left((x - \bar{x})^2 - \hat{\sigma}^2_x\right) + \gamma^i_{t-x},$$
where $\bar{x}$ denotes the average age and $\hat{\sigma}^2_x$ is the average value of $(x - \bar{x})^2$. $\kappa^{1,i}_t$ and $\kappa^{2,i}_t$ are two stochastic processes representing two of the model's time indices. The time index $\kappa^{1,i}_t$ reflects the level of mortality at time t, while $\kappa^{2,i}_t$ represents the slope and affects every age differently; $\kappa^{3,i}_t$ captures the quadratic effect. The $\gamma^i_{t-x}$ parameter represents the cohort effect. Li et al. [24] considered three different approaches, which were presented in the work of [31], to forecast future mortality rates. The use of an age-period-cohort (APC) model with two populations was presented by Cairns et al. [5] and Dowd et al. [14]. The model is expressed as
$$\log m^i_{x,t} = a^i_x + k^i_t + \gamma^i_{t-x},$$
where $a^i_x$, $k^i_t$ and $\gamma^i_{t-x}$ are the age, period and cohort effects of the populations. The spreads between the state variables can be modelled as a mean-reverting process for each population, so that the short-term trends in the mortality rates can vary while the long-term improvements are parallel. Cairns et al. [5] use a Bayesian framework that allows the non-observable state variables and the underlying parameters of the stochastic process to be estimated in one stage. Moreover, Dowd et al. [14] developed a gravity approach in which the mortality rates of the two populations are attracted to one another by a dynamic gravitational force. The force depends on the relative sizes of the populations in question [29].
Jarner and Kryger [17] and Cairns et al. [5] recognised the relative nature of the reference population supporting the index and the population whose longevity risk is being hedged. Their approach centres on the reference population first, after which the dynamics of book mortality are specified so as to incorporate characteristics from the reference population. This relative method has important advantages: it permits mismatches of data between the book and reference populations, and it is applicable in the typical case in which a book population is significantly smaller than the reference population [15]. The mortality models that are used in the relative method are presented in Table 1 [29].

[Table 1: Mortality models used in the relative method, e.g. the relative LC with cohorts model (RelLC+Cohorts), built on LC+Cohorts.]
There are other multi-population applications of well-known single-population models. For instance, Biatat and Currie [2] expanded the P-spline approach, previously used with success for single populations, to scenarios with two populations. Hatzopoulos and Haberman [16] and Ahmadi and Li [1] applied a multivariate generalised linear model (GLM) to obtain coherent mortality forecasts for multiple populations [29].
However, to our knowledge, only Zhou et al. [30] incorporate jumps caused by disruptive events, such as the 1918 Spanish flu epidemic, into a two-population mortality model. Their model can be regarded as a two-population generalisation of the model in Chen and Cox [9]. They assume that, in a given year, the mortality of a population is either jump-free or subject to one transitory mortality jump, and that the severity of a mortality jump is normally distributed.
Although many multi-population mortality models exist, only a few investigate how to measure longevity basis risk. Some of the earlier research designed for quantifying basis risk, such as Cairns et al. [6], Ngai and Sherris [25], and Li and Hardy [21], applied the original framework constructed by Coughlan et al. [10].

Building a Two-Population Mortality Model
The first step in pricing longevity-linked products is to establish a two-population mortality model in order to measure the longevity basis risk. A relative approach is applied in this paper, as in Haberman et al. [15], since it has many advantages over joint modelling. However, the modelling framework is slightly different from the original formulation used for the reference model.

Mortality Data
All of the examples provided in the paper use historical UK mortality data, collected from the Continuous Mortality Investigation (CMI) and the Human Mortality Database (HMD). The first dataset represents the mortality experience of CMI assured male lives, whose longevity risk is being hedged. The second dataset is for the reference population and provides the mortality experience of male lives in England and Wales (EW). For the reference population, a sample period from 1961 to 2016 is considered, while for the book population the sample period comprises the years 1961-2005. The sample age range considered is 65 to 89.

Modelling the Reference Population
The model considered in the paper is a Lee-Carter model with exponential transitory jumps and a renewal process. By using a renewal process, we include the history of catastrophic events in the mortality modelling process. In Özen and Şahin [26], the proposed model was compared to other mortality models with jump effects. The analysis showed that the inter-arrival times between two catastrophic events are important and that the proposed model provides a better fit to the historical data (see Özen and Şahin [26] for more details). Moreover, as indicated before, mortality jumps have important impacts on mortality dynamics, and it is essential that they are incorporated into the modelling process. Hence, we use the Lee-Carter model with exponential transitory jumps and a renewal process as our reference population mortality model.
Here, we assume that transitory jumps apply only to the reference population because of the quality and size of the available data for the national population. The proposed model is given by
$$\log m^R_{x,t} = a^R_x + b^R_x k^R_t, \qquad k^R_t = k^R_0 + \mu t + \sigma W(t) + \sum_{i=1}^{N(t)} Y_i,$$
where $m^R_{x,t}$ denotes the central death rate in year t for age x, $a^R_x$ represents the age pattern of the death rates, $k^R_t$ reflects variations across time in the log mortality rates, $b^R_x$ represents the mortality rates' sensitivity to changes in the time-varying mortality index $k^R_t$, $W(t)$ signifies a standard Brownian motion, $N(t)$ denotes the renewal process, and, finally, $Y_i$ denotes a sequence of iid exponential random variables representing the sizes of the jumps.
There are two identifiability constraints, which ensure that unique solutions exist for all of the model's parameters:
$$\sum_x b^R_x = 1, \qquad \sum_t k^R_t = 0.$$
We estimate the model's parameters using the maximum likelihood method. First, the reference population parameters $a^R_x$, $b^R_x$, and $k^R_t$ are estimated. Afterwards, Equation (7) is used to calibrate the time-varying mortality index. We need the density function of the independent one-period increments, $\Delta k^R_i = r_i = k^R_i - k^R_{i-1}$, to estimate the parameters of the calibrated model.
Let $D = \{k_0, k_1, \ldots, k_T\}$ represent the mortality time series at equally spaced times $t = 1, 2, \ldots, T$. The one-period increments are independent and identically distributed (iid). The unconditional density of a one-period increment, $f(r)$, may be written as
$$f(r_i) = \sum_{n=0}^{\infty} p_n\, f(r_i \mid n),$$
where $p_0$ and $p_n$ are the probabilities that no jump and that $n$ jumps occur in the renewal process over one period; these probabilities are obtained from $F(t)$ and $f(t)$, the distribution and density functions of the inter-arrival times between two jumps. $f(r_i \mid 0)$ and $f(r_i \mid n)$ are the densities of a one-period increment conditional on the given number of jumps: given no jump, the increment is normally distributed with mean $\mu$ and variance $\sigma^2$, while given $n$ jumps the normal component is convolved with the distribution of the sum of $n$ iid exponential jump sizes. The log-likelihood of the model can then be written as
$$\ell(\theta; D) = \sum_{i=1}^{T} \log f(r_i).$$
The estimated $a^R_x$, $b^R_x$, $\mu$, $\sigma$, $\eta$, $\alpha$, $\beta$ parameter values are shown in Table 2, while the time-varying index $k^R_t$ is illustrated in Figure 1. Given the estimated parameters, a closed-form expression for the expected future central death rates can be derived.

Modelling the Book Population
With the reference population model in hand, we now investigate the book population's mortality dynamics. Estimating the reference population first allows us to make informed decisions regarding the book part of the model, and we can also incorporate features from the reference population [29]. The dynamics of the book population's mortality are specified through the differences between the log mortality rates of the book and reference populations. In this paper, we compare the most commonly used models for the book population: the Lee-Carter model, the age-period-cohort (APC) model, the Cairns-Blake-Dowd (CBD) model, and the common age effect model.
Note that for all the models being compared we assume that $D^B_{x,t} \sim \mathrm{Poisson}(E^B_{x,t}\, q^B_{x,t})$.

The Lee-Carter Model
The dynamics of the book population are given as
$$\log m^B_{x,t} - \log m^R_{x,t} = a^B_x + b^B_x k^B_t.$$
The term $a^B_x$ denotes the difference in the book population's level of mortality compared to that of the reference population at age x. Thus, the mortality level in the book equals $a^R_x + a^B_x$. The time index $k^B_t$ contributes to establishing the difference in the mortality trends. The $b^B_x$ term shows how the mortality difference at age x responds to a change in the time index $k^B_t$ [15].

The Common Age Effect Model
This model may be seen as an extension of the Lee-Carter model that possesses a common age effect. It is given by
$$\log m^B_{x,t} - \log m^R_{x,t} = a^B_x + b^R_x k^B_t.$$
The $a^B_x$ and $k^B_t$ parameters here are the same as in the LC model for the book population. Different from the LC model, there is a common age effect parameter, $b^R_x$, which is the same as in the reference model.

The APC Model
The APC model was introduced by Currie [11] and is widely used in the literature. It can be regarded as a generalisation of the LC model, and a two-population version of this model may be written as
$$\log m^B_{x,t} - \log m^R_{x,t} = a^B_x + k^B_t + \gamma^B_{t-x},$$
where $a^B_x$, $k^B_t$, and $\gamma^B_{t-x}$ respectively represent the age, period, and cohort effects of the book population [11]. The $\gamma^B_{t-x}$ term accounts for differences in the cohort effect between the two populations of interest. These parameters reflect the mortality differences between the two populations.

The CBD Model
Cairns et al. [7] introduced the following model with the aim of fitting the mortality rates:
$$\mathrm{logit}\, q_{x,t} = \kappa^1_t + \kappa^2_t (x - \bar{x}).$$
Comparing the models considered in this section is somewhat challenging because the CBD model directly models the one-year death rate $q_{x,t}$, while the other models in the paper model the central death rates $m_{x,t}$. In order to compare the models in a consistent way, we introduce an additional step for the modelling of $q_{x,t}$: we transform the one-year death probabilities into central death rates using the identity $m_{x,t} = -\log(1 - q_{x,t})$. For all the mentioned models, the parameters are estimated in two main steps. As indicated before, the parameters of the book population are estimated conditional on the parameters of the reference population. Under the Poisson assumption, the log-likelihood function of the book population is
$$\ell^B = \sum_{x,t} \left( D^B_{x,t} \log\!\left(E^B_{x,t}\, \hat{m}^B_{x,t}\right) - E^B_{x,t}\, \hat{m}^B_{x,t} - \log D^B_{x,t}! \right),$$
where $\hat{m}^B_{x,t}$ denotes the fitted central death rate of the book population. We estimate the parameters by applying the maximum likelihood method. The parameters obtained for the book population are given in Figure 2 for the different mortality models.
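The conversion between one-year death probabilities and central death rates under a constant force of mortality within each year of age can be sketched as follows (hypothetical rates):

```python
import numpy as np

# Hypothetical one-year death probabilities q_{x,t}.
q = np.array([0.010, 0.015, 0.022])

# Under a constant force of mortality within each year of age,
# m_{x,t} = -log(1 - q_{x,t}); the inverse map recovers q from m.
m = -np.log(1.0 - q)
q_back = 1.0 - np.exp(-m)
```

The transformation is monotone and exactly invertible, which is why the CBD fit for $q_{x,t}$ can be compared with the central-rate models on a common scale.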
According to Figure 2, the $a^B_x$ parameter shows lower mortality at the younger ages and higher mortality at the older ages. The positive values of $b^B_x$ demonstrate that mortality decreases at all ages. These observations hold for the $a^B_x$ and $b^B_x$ parameters of all book population mortality models. The mortality index $k^B_t$ reflects the changes in mortality rates over the years for the LC, common age effect, and APC models. The $\gamma^B_{t-x}$ parameter represents the cohort-related effects in the book population. The negative values of the $\kappa^{1,B}_t$ parameter in the CBD model indicate lower mortality rates, while the positive values reflect higher mortality rates. The $\kappa^{2,B}_t$ parameter controls the age slope of these lower and higher mortality rates in the CBD model for the book population.
The BIC values obtained from the fitted models for book population mortality are given in Table 3. The common age effect model has the lowest BIC value according to Table 3; therefore, we model the book population's mortality using the common age effect model. Finally, we complete the modelling framework by specifying the dynamics of the period and cohort terms, which will be used to forecast and simulate future mortality rates. A detailed analysis regarding the selection of the time series to be used in the dynamics can be found in the work of Li et al. [24]. This part of the study confines itself to the models that are commonly applied in the literature. We assume that the two populations will experience similar improvements in the long run, and thus that the spread in both the time indices and the cohort effects should be modelled as a stationary process.
In this paper, the time-varying mortality indices of the book population, $k^B_t$, are modelled as an autoregressive process of order one; we thus write $k^B_t = \psi_0 + \psi_1 k^B_{t-1} + \xi_t$ for the LC, the common age effect, and the APC models. In the long term, the mean of $k^B_t$ equals $\psi_0/(1 - \psi_1)$ if $|\psi_1| < 1$, and the autocorrelation depends on the size of $\psi_1$. More technical aspects of time-series modelling can be found in the work of Tsay [28].
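A minimal sketch of fitting such an AR(1) process by least squares, and checking the implied long-run mean $\psi_0/(1-\psi_1)$, is shown below on simulated data with hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a long AR(1) series with hypothetical parameters psi0, psi1.
psi0, psi1 = 0.5, 0.6
n = 20_000
k = np.zeros(n)
for t in range(1, n):
    k[t] = psi0 + psi1 * k[t - 1] + rng.normal(0.0, 0.2)

# Least-squares regression of k_t on (1, k_{t-1}) recovers the coefficients.
X = np.column_stack([np.ones(n - 1), k[:-1]])
psi0_hat, psi1_hat = np.linalg.lstsq(X, k[1:], rcond=None)[0]

# Long-run mean of a stationary AR(1): psi0 / (1 - psi1)
long_run_mean = psi0_hat / (1.0 - psi1_hat)
```

In practice the series for $k^B_t$ is far shorter than this toy sample, so the parameter estimates carry substantial uncertainty, which is one reason the bootstrap described below is needed.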

Future Simulations
In evaluating the uncertainty of future outcomes and finding the optimal model to assess longevity basis risk, it is necessary to address parameter error, process error, and model error, from a modelling as well as a regulatory perspective such as that of Solvency II [23]. Parameter error refers to the uncertainty in estimating the model parameters, process error arises from variation within the time series, and model error reflects the uncertainty present in the model selection.
In the literature, a number of studies allow for both process error and parameter error in index-based hedging. For instance, Brouhns et al. [4] used a parametric Monte Carlo simulation method to generate samples of model parameters following a multivariate normal distribution. In a subsequent work, Brouhns et al. [3] also explored a semi-parametric bootstrapping procedure designed to simulate death counts from the Poisson distribution with mean equal to the observed numbers of deaths. Renshaw and Haberman [27], on the other hand, used fitted numbers of deaths in the Poisson process. In another study, Koissi et al. [18] used a bootstrap method on the residuals of a fitted Lee-Carter model. Different from the existing methods, Czado et al. [12] and Kogure et al. [19] suggested Bayesian adaptations of the LC model. Li [20] quantitatively compared possible simulation methods; according to the conclusions of that study, the sampling results are all relatively close to each other regardless of whether the method applied is parametric, semi-parametric, Bayesian, or residual bootstrapping. All of these simulation methods have individual advantages and disadvantages. In this study, the bootstrapping method of Brouhns et al. [3] is selected owing to its ability to include both parameter error and process error in simulating future mortality rates. The bootstrapping procedure is as follows:
1. The parameters of the LC model are estimated using the original data.
2. The estimated parameters are used to compute the fitted numbers of deaths for both the reference and book populations via $\hat{m}^R_{x,t}$ and $\hat{m}^B_{x,t}$. New death counts are then simulated: a binomial distribution is used for the book population, to include the sampling risk, and a Poisson distribution is used for the reference population. The newly simulated data are used to re-estimate the mortality parameters of the reference and book populations. Incorporating this step means that the model can allow for parameter error.
3. Time-series processes are fitted to the temporal model parameters of the new data sample, $k^R_t$ and $k^B_t$, in order to simulate their future values; $k^R_t$ is modelled by the proposed model and $k^B_t$ by an AR(1) process. The inclusion of this step means that the model can allow for process error.
4. Future mortality rate samples are generated for all ages x and all future years t by substituting the parameters obtained in step (2) and the values simulated in step (3) into $\log(m^R_{x,t})$ and $\log(m^B_{x,t})$. The resulting set of future mortality rates forms one random future scenario.
5. Steps (1) to (4) are repeated until a total of 10,000 random future scenarios has been produced.
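The resampling step of the procedure can be sketched with toy data as follows; the re-fitting of the full mortality model is replaced here by recomputing crude rates, and all inputs are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy fitted rates and exposures (ages x years); all values hypothetical.
m_ref_hat  = np.array([[0.012, 0.013],
                       [0.018, 0.019]])   # fitted central rates, reference
E_ref      = np.array([[1.0e6, 1.0e6],
                       [9.0e5, 9.0e5]])   # central exposures, reference
q_book_hat = np.array([[0.010, 0.011],
                       [0.016, 0.017]])   # fitted one-year rates, book
l_book     = np.array([[5000, 5000],
                       [4500, 4500]])     # lives at risk, book

n_scenarios = 2000
boot_m_ref  = np.empty((n_scenarios,) + m_ref_hat.shape)
boot_q_book = np.empty((n_scenarios,) + q_book_hat.shape)
for s in range(n_scenarios):
    D_ref  = rng.poisson(E_ref * m_ref_hat)     # Poisson deaths, reference
    D_book = rng.binomial(l_book, q_book_hat)   # binomial deaths, book (sampling risk)
    # In the full procedure the mortality model would be re-fitted to each
    # simulated sample; here we simply recompute the crude rates.
    boot_m_ref[s]  = D_ref / E_ref
    boot_q_book[s] = D_book / l_book
```

The scatter of the re-estimated rates across scenarios is what carries parameter error (and, for the book, sampling risk) into the simulated future mortality paths.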
Different from Haberman et al. [15], in this paper the parameter error of the reference population has also been considered, by applying bootstrapping to the estimation of both the reference and book population models.
A sample of the simulated mortality paths is shown in Figure 3. The mortality paths enable us to obtain projected mortality rates, and hence the future liabilities of the pension plan and the hedging instrument.

Sampling Risk
Sampling risk arises from the finite sizes of the book and reference populations and from the randomness of the outcomes of individual lives. If the populations were of infinite size, the future outcomes would converge to their true expected values by the law of large numbers. In reality, however, the sizes of the populations are limited. Although larger countries have very large population sizes, the size of an annuity or pension portfolio is usually small. Hence, the outcomes of the book and reference populations will deviate randomly from their true expected values, and also from each other. To reflect the effect of the portfolio size, the number of lives is simulated as
$$l^B_{x+1,t+1} \sim \mathrm{Binomial}\!\left(l^B_{x,t},\, 1 - q^B_{x,t}\right),$$
where $l^B_{x,t}$ is the future number of lives aged x at time t in the book population and $q^B_{x,t}$ is the future mortality rate at age x at time t, simulated from the semi-parametric bootstrapping method. Simulating the number of lives of the book population from the binomial distribution allows the sampling risk to be reflected [23].
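A sketch of the binomial death process for the book population is given below; the mortality rates are hypothetical, and each path represents one realisation of the cohort's survivors:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical one-year mortality rates q for ages 65, 66, ..., 69.
q = np.array([0.010, 0.012, 0.015, 0.019, 0.024])

n_paths = 20_000
lives = np.full(n_paths, 10_000)       # l(65) = 10,000 in every path

for q_t in q:
    deaths = rng.binomial(lives, q_t)  # random deaths over the year
    lives = lives - deaths             # survivors carried to the next age

# With infinite lives the survivors per starting life would converge to
# prod(1 - q); with finite lives the count fluctuates around that value.
expected_survivors = 10_000 * np.prod(1.0 - q)
```

The spread of `lives` across paths shrinks, relative to its mean, as the initial portfolio size grows, which is why larger plans are less exposed to sampling risk.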

Comparison with the Other Mortality Models
After constructing a two-population mortality model, we need to compare the proposed model with other mortality models and show its effectiveness. Therefore, we consider two additional two-population mortality models that are commonly used in the literature. The first model is the LC model with jumps, and the second is the LC with common age effect model, referred to as LC+Cohorts. The LC model with jumps is very similar to Zhou et al. [30]; however, we use the relative approach to estimate the parameters of the models, so our mortality data can be based on sample periods of different lengths for the reference and book populations. We use the same notation as Zhou et al. [30] for the LC model with jumps, and the model is as follows:
$$\log m^i_{x,t} = a^i_x + b^i_x \left(k^i_t + N^i_t Y^i_t\right),$$
where $a^i_x$ and $b^i_x$ are the same as in the original LC model. The period index is decomposed into the sum of two components, $k^i_t + N^i_t Y^i_t$. The first component, $k^i_t$, is the time-t value of an unobserved period effect index that is free of jumps, while the second term, $N^i_t Y^i_t$, indicates the jump effect at time t. The model allows the two populations to have different jump times, jump frequencies and jump severities. A maximum of one jump is allowed in a given year, and the jump severity $Y^i_t$ follows a normal distribution with mean $\mu_Y$ and variance $V_Y$ (see Zhou et al. [30] for more model details).
The second mortality model that we consider here is the LC+Cohorts model given in Section 3. The parameters of both models are estimated using the maximum likelihood method.
The estimated parameters for these two models are given in Appendix A.

Assessing Basis Risk: An Example
In this section, we consider a hedging strategy to assess longevity basis risk and to measure the effectiveness of the hedge while taking mortality jumps and sampling risk into account. The effectiveness of a hedge can be described as how much longevity risk is transferred away. The following formula can be used to define the level of longevity risk reduction for the hedge, as in Coughlan et al. [10]:
$$R = 1 - \frac{\mathrm{risk(hedged)}}{\mathrm{risk(unhedged)}},$$
where risk(unhedged) and risk(hedged) represent an appropriate dispersion-based risk measure for the aggregate longevity position of the portfolio before and after hedging. A perfect hedge would have a longevity risk reduction equal to 1, a good hedge will have a risk reduction close to 1, and a risk reduction close to 0 indicates an ineffective hedge [13]. In this paper, the variance risk measure is used to minimise the variation in the expected future cash flows of the hedging instrument.
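With variance as the dispersion measure, the risk-reduction calculation can be sketched as follows; the simulated present values are hypothetical stand-ins for the Monte Carlo output:

```python
import numpy as np

def risk_reduction(unhedged, hedged):
    """Longevity risk reduction R = 1 - risk(hedged)/risk(unhedged),
    with variance as the dispersion-based risk measure."""
    return 1.0 - np.var(hedged) / np.var(unhedged)

rng = np.random.default_rng(5)

# Hypothetical simulated present values of the unhedged liability, and a
# hedge that offsets 90% of its variation plus a little residual noise.
liability = rng.normal(20.0, 1.0, 10_000)
hedge_cashflow = 0.9 * (liability - liability.mean()) + rng.normal(0.0, 0.1, 10_000)
hedged = liability - hedge_cashflow
```

Applying `risk_reduction` to a constant hedged position returns exactly 1 (a perfect hedge), while an uncorrelated hedge returns a value near or below 0.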
A simple hypothetical case study based on a pension plan is considered to illustrate the effect of the proposed mortality models and of different volumes of book population data on hedge effectiveness. The pension plan members are assumed to have underlying mortality rates that are the same as those of the CMI male assured lives dataset. Suppose all the pensioners in the plan are aged 65 and the plan pays each of them £1 per year on survival from ages 66 to 90. Our objective is to minimise the longevity risk exposure of the pension plan, and hence we construct a hedge using a 10-year index-based longevity swap. We assume that the EW male population constitutes the reference population of the floating leg, while the payments of the fixed leg of the swap are based on the CMI assured male lives. Let the interest rate be 3% p.a. during the whole period, and take the current date as the start of the calendar year 2016. We use the same notation as Li et al. [23] for the hedged and unhedged positions. The present value of the pension plan's future liability (unhedged position), $L$, is given by

$$L = \sum_{t=1}^{25} (1+r)^{-t} \, {}_{t}p_{65}^{B},$$

where ${}_{t}p_{65}^{B}$ is the survivor index of the book population. As a floating-leg receiver, the present value of the longevity swap's future cash inflows, $S$, can be written as

$$S = \sum_{t=1}^{10} (1+r)^{-t} \left( {}_{t}p_{65}^{R} - {}_{t}p_{65}^{R,\mathrm{forward}} \right),$$

where the random future survivor index ${}_{t}p_{65}^{R}$ and the forward survivor index ${}_{t}p_{65}^{R,\mathrm{forward}}$ are calculated by applying the survival probability formula

$$ {}_{t}p_{65} = \prod_{s=0}^{t-1} \exp\!\left(-m(65+s,\, 2016+s)\right). $$

Furthermore, the present value of the aggregate pension plan position after longevity hedging (hedged position) may be expressed as

$$ X = L - w\, S, $$

where the weight $w$ denotes the notional amount of the longevity swap necessary for successful hedging to be performed [23].
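These present values can be sketched as follows. The survival curves here come from a flat illustrative mortality rate, not from the fitted models, and the hedged position follows the $L - wS$ form of the case study above.

```python
import numpy as np

r = 0.03                       # flat interest rate, 3% p.a.

def pv_liability(p_book):
    """Unhedged position L: PV of £1 paid on survival at t = 1..25
    (ages 66 to 90); p_book[t-1] is the book survivor index _t p^B_65."""
    t = np.arange(1, 26)
    return np.sum((1 + r) ** (-t) * p_book[:25])

def pv_swap(p_ref, p_forward):
    """Floating-leg receiver swap S: PV of (_t p^R_65 - _t p^{R,forward}_65)
    over the 10-year hedge horizon."""
    t = np.arange(1, 11)
    return np.sum((1 + r) ** (-t) * (p_ref[:10] - p_forward[:10]))

def pv_hedged(p_book, p_ref, p_forward, w):
    """Hedged position X = L - w S, with notional weight w."""
    return pv_liability(p_book) - w * pv_swap(p_ref, p_forward)

# Illustrative survival curve from a constant mortality rate m = 0.02,
# so that _t p_65 = exp(-m t).
m = 0.02
p = np.exp(-m * np.arange(1, 26))
print(pv_liability(p))
```

In the actual analysis the curves ${}_{t}p^B$ and ${}_{t}p^R$ are simulated from the fitted two-population models, and $w$ is chosen to minimise the variance of the hedged position.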
Moreover, in order to take the sampling risk into account, we use the binomial death process for the book population as given in Equation (14). To emphasise the role of population size in hedge effectiveness, we produce three simulated distributions with l(65) = 5,000, l(65) = 10,000 and l(65) = 100,000. We obtain cash flows for the hedged and unhedged positions under three mortality models, namely the proposed Lee-Carter model with renewal process and exponential jumps, the LC model with jumps, and LC+Cohorts, while allowing for the sampling risk. Then, the hedge effectiveness of each model is calculated.
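The binomial death process and the effect of portfolio size can be illustrated with a small simulation. The flat one-year death probability below is an assumption for demonstration; in the paper the probabilities come from the simulated book mortality.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_survivors(l0, q):
    """Binomial death process: given initial lives l0 and one-year death
    probabilities q[t], returns the simulated survivor path l(t)."""
    l = [l0]
    for qt in q:
        deaths = rng.binomial(l[-1], qt)   # D(t) ~ Binomial(l(t), q(t))
        l.append(l[-1] - deaths)
    return np.array(l)

q = np.full(25, 0.02)                      # flat 2% death probability, illustrative
for l0 in (5_000, 10_000, 100_000):
    paths = np.stack([simulate_survivors(l0, q) / l0 for _ in range(200)])
    # The sampling variability of the realised survivor index shrinks as l0 grows.
    print(l0, paths[:, -1].std())
```

The shrinking standard deviation across the three sizes is exactly the sampling-risk effect: small books deviate more from their underlying survival curve, which degrades an index-based hedge.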
In Table 4, we present the longevity risk reduction levels for the three mortality models when sampling risk is taken into account. The results indicate that our proposed mortality jump model with renewal process provides a better risk reduction than the other two models. The risk reduction level increases with the sample size for all models, which indicates that the sampling risk might be important. However, even for the smaller populations, our proposed model provides a 45.07% risk reduction, while the LC model with jumps and the LC+Cohorts model provide 23.17% and 13.35%, respectively. Therefore, the analysis shows that, by using the proposed mortality model, a significant portion of the risk would be eliminated for a pension plan that is exposed to mortality jump risk.

Conclusions
Index-based hedging solutions have many advantages. In such capital market solutions, it is possible to transfer longevity risk to the capital markets at lower costs. However, potential differences between the hedging instruments and the pension or annuity portfolio cause longevity basis risk. In this paper, we construct a two-population mortality model to measure and manage the longevity basis risk.
An appropriate two-population model was built for EW male lives and CMI assured male lives to measure longevity basis risk, and the relative approach was adopted to model the populations. The modelling of the reference population was followed by the modelling of the dynamics of the book population's mortality. The reference population is modelled using the LC model with renewal process and exponential jumps proposed by Özen and Şahin [26], and the common age effect model, which outperformed the alternatives, is used to model the book population.
The bootstrap approach of Brouhns et al. [3] was applied in order to include both parameter error and process error in the simulation of future mortality rates. The Poisson distribution is used to simulate the reference population's lives, and the binomial distribution is used to simulate the book population's lives in order to account for the sampling risk.
Furthermore, the impact of the proposed mortality model and of the sampling risk on hedge effectiveness is examined. For this purpose, a hypothetical pension plan and a hedging strategy consisting of a 10-year longevity swap are considered under the three two-population mortality models, namely the proposed LC model with renewal process and exponential jumps, the LC model with jumps, and the LC model with a common age effect. The hedge effectiveness is then calculated under each of the three models to compare the resulting risk reduction. The analysis suggests that the proposed mortality model provides a more effective risk reduction against mortality jump risk and sampling risk than the other two models.
A possible future study could construct an optimal hedging framework with collateralisation to obtain reasonable risk reduction rates by using the proposed two-population model.

Appendix A

The difference between the two jump-free period indices in the LC model with jumps is modelled as

$$\Delta_k(t) = \tilde{k}_t^{1} - \tilde{k}_t^{2}, \qquad \Delta_k(t+1) = \mu_{\Delta_k} + \phi_{\Delta_k}\, \Delta_k(t) + Z_{\Delta_k}(t+1).$$

The estimated $a_x^B$ and $b_x^B$ parameters are given in Table 5. The parameters of the jump component of the model are presented in Table 6.

Table 6: Estimated parameters for the LC with jumps model
  $\mu_k = -0.4973$          $\mu_Y^1 = 4.2915$    $\mu_Y^2 = 4.5614$    $\mu_{\Delta_k} = -0.3108$
  $\phi_{\Delta_k} = 0.0496$   $V_Y^1 = 0.5608$     $V_Y^2 = 0.6849$     $V_Z = 0.3915$

The probabilities of jump frequencies are $\Pr(N^1_t = 0, N^2_t = 0) = 0.7763$, $\Pr(N^1_t = 0, N^2_t = 1) = 0.0967$ and $\Pr(N^1_t = 1, N^2_t = 1) = 0.1269$. The parameters of the LC+Cohorts model book population are presented in Table 7 and