Applications of Clustering with Mixed Type Data in Life Insurance

Death benefits are generally the largest cash flow item affecting the financial statements of life insurers, yet some insurers still do not have a systematic process to track and monitor death claims experience. In this article, we explore data clustering to examine and understand how actual death claims differ from expected, an early stage of developing a monitoring system crucial for risk management. We extend the $k$-prototypes clustering algorithm to draw inference from a life insurance dataset using only the insured's characteristics and policy information, without regard to known mortality. This extension efficiently handles categorical, numerical, and spatial attributes. Using gap statistics, the optimal clusters obtained from the algorithm are then used to compare actual to expected death claims experience of the life insurance portfolio. Our empirical data contains observations, during the third quarter of 2014, of approximately 1.14 million policies with a total insured amount of over 650 billion dollars. For this portfolio, the algorithm produced three natural clusters, each with lower than expected death claims but with differing variability. The analytical results provide management a process to identify policyholders' attributes that dominate significant mortality deviations, and thereby enhance decision making for taking necessary actions.


Introduction and motivation
According to the Insurance Information Institute, the life insurance industry paid a total of nearly $76 billion in death benefits in 2019. Life insurance is in the business of providing a benefit in the event of premature death, one that is understandably difficult to predict with certainty. Claims arising from mortality are, not surprisingly, the largest cash flow item that affects both the income statement and the balance sheet of a life insurer. Life insurance contracts are generally considered long duration, where the promised benefit may not be realized for an extended period of time. In effect, not only do life insurers pay out death claims in aggregate on a periodic basis, they are also obligated to have sufficient assets set aside as reserves to fulfill this long term obligation. See Dickson et al. (2013).
Every life insurer must have in place a systematic process of tracking and monitoring its death claims experience. This tracking and monitoring system is an important risk management tool. It should involve not only identifying statistically significant deviations of actual from expected experience, but also understanding and explaining the patterns behind them. Deviations that are short-lived anomalies may be considered normal; of greater importance are deviations that follow a trend over longer durations.
Prior to sale, insurance companies exercise underwriting to assess the degree of mortality risk of applicants. As a consequence, there is a selection effect on the underlying mortality of life insurance policyholders; normally, the mortality of policyholders is considered better than that of the general population. However, this mortality selection wears off over time, and in spite of this selection, it is undeniably important for a life insurance company to have a monitoring system. Vadiveloo et al. (2014) listed some of its benefits, and we reiterate their importance as follows:
1. A tracking and monitoring system is a risk management tool that can assist insurers to take actions necessary to mitigate the economic impact of mortality deviations.
2. It is a tool for improved understanding of the emergence of death claims experience thereby helping an insurer in product design, underwriting, marketing, pricing, reserving, and financial planning.
3. It provides a proactive tool for dealing with regulators, credit analysts, investors, and rating agencies who may be interested in reasons for any volatility in earnings as a result of death claims fluctuations.
4. A better understanding of the company's emergence of death claims experience helps to improve its claims predictive models.
5. The results of a tracking and monitoring system provide the company a benchmark for its death claims experience that can be compared with that of other companies in the industry.
Despite these apparent benefits, several insurers still do not have a systematic process of tracking and monitoring death claims. Such a process clearly requires a meticulous investigation of historical death claims experience. In this article, we explore the use of data clustering to examine and understand how actual death claims differ from expected. By naturally subdividing the policyholders into clusters, this process of exploration through data clustering will provide us a better understanding of the characteristics of the life insurance portfolio according to their historical claims experience. This is an important early stage of developing a tracking and monitoring system that is a crucial part of risk management for a life insurer.
As information stored in data grows rapidly in the modern world, several industries, including the insurance industry, have started to implement practices to analyze datasets and to draw meaningful results for more effective decision making. The magnitude and scale of information from these datasets continue to increase at a rapid pace, and so does the ease of access. Data analytics have become an important function in every organization and how to deal with huge data sets has become an important issue. In many instances, information comes in unstructured forms so that unsupervised learning methods are instituted for preliminary investigation and examination.
The most commonly used unsupervised learning technique is cluster analysis. It involves partitioning observations into groups or clusters so that observations within each cluster are optimally similar while observations between clusters are optimally dissimilar. Among the many clustering algorithms developed in the past few decades, the k-means clustering algorithm (MacQueen (1967)) is perhaps the simplest, most straightforward, and most popular method for efficiently partitioning a data set into k clusters. With k initial centroids arbitrarily set, the k-means algorithm finds locally optimal solutions by gradually minimizing the clustering error calculated from the numerical attributes. The technique has been applied in several disciplines including life insurance, e.g., Thiprungsri and Vasarhelyi (2011), Devale and Kulkarni (2012), Gan (2013), and Gan and Valdez (2016). Despite its popularity, the algorithm has drawbacks that present challenges to our life insurance dataset: (i) it is particularly sensitive to the initial cluster assignment, which is randomly picked, and (ii) it is unable to handle categorical attributes. While the k-prototypes algorithm is less well known, it provides the advantage of being able to handle mixed data types, including numerical and categorical attributes. For numerical attributes, the distance measure may still be Euclidean; for categorical attributes, the distance measure is based on the number of matching categories. This paper extends the k-prototypes algorithm proposed by Huang (1997) to provide insights and draw inference from a real-life dataset of death claims experience obtained from a portfolio of contracts of a life insurance company. The k-prototypes algorithm has been applied in marketing for segmenting customers to better understand product demands (Hsu and Chen (2007)) and in medical statistics for understanding hospital care practices (Najjar et al. (2014)).
This algorithm integrates the procedures of k-means and k-modes to efficiently cluster datasets that contain, as earlier said, numerical and categorical variables; our data, however, also contain a geospatial variable. The k-means algorithm can only handle numerical attributes, while the k-modes algorithm can only handle categorical attributes. We therefore improve the k-prototypes clustering by adding a distance measurement to the cost function so that it can also deal with the geodetic distance between latitude-longitude spatial data points. Latitude is a numerical measure of how far north or south of the equator a location lies; longitude is a numerical measure of how far east or west it lies from the prime meridian. Some work related to geospatial data clustering can be found in the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) (Ester et al., 1996) and in ontology-based approaches (Wang et al., 2010).
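To make the geodetic notion concrete, the great-circle distance between two latitude-longitude points can be sketched with the haversine formula. This is a spherical approximation using the WGS84 equatorial radius; the algorithm described later uses the full WGS84 ellipsoid, so the function below is illustrative rather than the paper's exact method.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2, r=6378137.0):
    """Great-circle distance in meters between two latitude-longitude
    points given in degrees. Haversine formula on a sphere whose radius
    is the WGS84 equatorial radius; an approximation to the full
    WGS84-ellipsoid distance used in the paper."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 \
        + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# One degree of longitude along the equator is about 111.3 km:
print(haversine_m(0.0, 0.0, 0.0, 1.0))
```

For short distances the spherical and ellipsoidal results differ only modestly, which is why a haversine sketch suffices for intuition.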
Our empirical data have been drawn from the life insurance portfolio of a major insurer and contain observations, during the third quarter of 2014, of approximately 1.14 million policies with a total insured amount of over 650 billion dollars. Using these data, we applied the k-prototypes algorithm, which ultimately yields three optimal clusters determined using the concept of gap statistics. Shown to be an effective method for determining the optimal number of clusters, the gap statistic is based on evaluating "the change in within-cluster dispersion with that expected under an appropriate reference null distribution" (Tibshirani et al., 2001).
To provide further insights into the death claims experience of our life insurance data set, we compared the aggregated actual to expected deaths for each of the optimal clusters. For a life insurance contract, it is most sensible to measure the magnitude of deaths by face amount, and thus we computed the ratio of the aggregated actual face amounts of those who died to the face amounts of expected deaths for each optimal cluster. Under some mild regularity conditions, necessary to prove normality, we are able to construct statistical confidence intervals of the ratio for each cluster, thereby allowing us to draw inference about statistically significant deviations of the mortality experience in each optimal cluster. We provide details of the proofs for the asymptotic development of these confidence intervals in the Appendix. Each cluster showed different patterns of mortality deviation, and we can deduce the dominant characteristics of the policies from this cluster-based analysis. The motivation is to help the life insurance company gain a better understanding of potential favorable and unfavorable clusters.
The rest of this paper is organized as follows. In Section 2, we briefly describe the real data set from an insurance company, including the data elements and the preprocessing of the data in preparation for cluster analysis. In Section 3, we provide details of the k-prototypes clustering algorithm and discuss how the balance weight parameters are estimated and how to choose the optimal number of clusters. In Section 4, we present the clustering results and discuss their implications and applications to monitoring the company's death claims experience. We conclude in Section 5.

Empirical data
We illustrate the k-prototypes clustering algorithm using a data set obtained from an insurance company. This data set contains 1,137,857 life insurance policies observed during the third quarter of 2014. Each policy is described by 8 attributes: 5 categorical and 2 numerical data elements, plus longitude-latitude coordinates. Table 1 shows the description and basic summary statistics of each variable, and Figure 1 visualizes the distribution of the policies across the states. We only kept the policies issued in the continental United States, and therefore excluded the policies issued in Alaska, Hawaii, and Guam. First, the number of policies observed in these states and territories is relatively small. Second, because they lie outside the mainland United States, geodetic measurements are distorted and clustering results may become less meaningful. In Figure 1, a saturated color indicates a high frequency of policies in a particular state. The distribution of the policy count is highly skewed, with New York, New Jersey, California, and Pennsylvania having significantly more insureds than the other states. The spatial attributes are represented by latitude and longitude coordinate pairs.
The insured's sex indicator Gender is a discrete variable with 2 levels, Female and Male, with the number of males almost twice that of females. Smoker Status indicates the insured's smoking status: 95.86% are nonsmokers and the remaining 4.14% are smokers. The variable Underwriting Type reflects two types of underwriting: 95.48% of the policies were fully underwritten at issue while the remaining 4.52% are term conversions. Term conversions refer to policies originally with a fixed maturity (or term) that were converted into permanent policies at a later date, without any additional underwriting. The variable Substandard Indicator indicates whether the policy has been issued as substandard. Substandard policies are issued, after underwriting is performed, to applicants whose expected mortality is worse than that of standard policies, and they carry an extra premium. In our dataset, about 7.76% of policies are considered substandard and the remaining 92.24% are standard. The variable Plan has three levels: Term Insurance Plan (Term), Universal Life with Secondary Guarantees (ULS), and Variable Life with Secondary Guarantees (VLS).
In our dataset, there are two continuous variables. The variable Issue Age refers to the policyholder's age at the time of issue; issue ages range from newborn to 90 years, with an average of about 44 years. The variable Face Amount refers to the amount of sum insured, either fixed at policy issue or accumulated to this level at the most recent time of valuation. As is common with data clustering, we standardized these two continuous variables by rescaling the values to the range $[0, 1]$. The general formula used in our normalization is
$$x_{\text{new}} = \frac{x - \min(x)}{\max(x) - \min(x)},$$
where $x$ is the original value and $x_{\text{new}}$ is the standardized (or normalized) value. However, for the variable Face Amount, we find a few extreme values that may further distort the spread or range of possible values. To address this concern, we take the logarithm of the original values before applying the normalization formula:
$$x_{\text{new}} = \frac{\log(x) - \min(\log(x))}{\max(\log(x)) - \min(\log(x))}.$$
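The two rescaling steps above can be sketched as follows; the helper names and toy values are hypothetical.

```python
import math

def min_max(values):
    """Min-max rescaling to [0, 1]: x_new = (x - min) / (max - min)."""
    lo, hi = min(values), max(values)
    return [(x - lo) / (hi - lo) for x in values]

def log_min_max(values):
    """For heavy-tailed amounts such as Face Amount: take logarithms
    first, then min-max rescale, damping the influence of extreme values."""
    return min_max([math.log(x) for x in values])

ages = [0, 44, 90]                         # hypothetical issue ages
print(min_max(ages))                       # [0.0, 0.488..., 1.0]
face = [10_000.0, 250_000.0, 5_000_000.0]  # hypothetical face amounts
print(log_min_max(face))                   # all values lie in [0, 1]
```

Note that the log transform is applied only to Face Amount; Issue Age can legitimately be zero, where a logarithm is undefined.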

Data clustering algorithms
Data clustering refers to the process of dividing a set of objects into homogeneous groups or clusters (Gan et al., 2007;Gan, 2011) using some similarity criterion. Objects in the same cluster are more similar to each other than to objects from other clusters. Data clustering is an unsupervised learning process and is often used as a preliminary step for data analytics. In bioinformatics, for example, data clustering is used to identify the patterns hidden in gene expression data (MacCuish and MacCuish, 2010). In big data analytics, data clustering is used to produce a good quality of clusters or summaries for big data to address the storage and analytical issues (Fahad et al., 2014). In actuarial science, data clustering is also used to select representative insurance policies from a large pool of policies in order to build predictive models (Gan, 2013;Gan and Lin, 2015;Gan and Valdez, 2016).

Figure 2: A typical data clustering process.

Figure 2 shows a typical clustering process as described in Jain et al. (1999). The clustering process consists of five major steps: pattern representation, dissimilarity measure definition, clustering, data abstraction, and output assessment. In the pattern representation step, the task is to determine the number and type of the attributes of the objects to be clustered. In this step, we may extract, select, and transform features to identify the most effective subset of the original attributes to use in clustering. In the dissimilarity measure definition step, we select a distance measure that is appropriate to the data domain. In the clustering step, we apply a clustering algorithm to divide the data into a number of meaningful clusters. In the data abstraction step, we extract one or more prototypes from each cluster to help comprehend the clustering results. In the final step, we use some criteria to assess the clustering results.
Clustering algorithms can be divided into two categories: partitional and hierarchical clustering algorithms. A partitional clustering algorithm divides a dataset into a single partition; while a hierarchical clustering algorithm divides a dataset into a sequence of nested partitions. In general, partitional algorithms are more efficient than hierarchical algorithms because the latter usually require calculating the pairwise distances between all the data points.

The k-prototypes algorithm
The k-prototypes algorithm (Huang, 1998) is an extension of the well-known k-means algorithm for clustering mixed type data. In the k-prototypes algorithm, the prototype is the center of a cluster, just as the mean is the center of a cluster in the k-means algorithm.
To describe the k-prototypes algorithm, let $\{X_{ij}\}$, $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, d$, denote a dataset containing n observations. Each observation is described by d variables: $d_1$ numerical variables, $d_2 - d_1$ categorical variables, and $d - d_2 = 2$ spatial variables. Without loss of generality, we assume that the first $d_1$ variables are numerical, the next $d_2 - d_1$ variables are categorical, and the last two variables are spatial. The dissimilarity measure between two points x and y used by the k-prototypes algorithm is defined as
$$d(x, y) = \sum_{j=1}^{d_1} (x_j - y_j)^2 + \lambda_1 \sum_{j=d_1+1}^{d_2} \delta_1(x_j, y_j) + \lambda_2\, \delta_2\big((x_{d_2+1}, x_{d_2+2}), (y_{d_2+1}, y_{d_2+2})\big), \quad (1)$$
where $\lambda_1$ and $\lambda_2$ are balancing weights, relative to the numerical attributes, used to avoid favoring one type of variable over another; $\delta_1(\cdot, \cdot)$ is the simple-matching distance,
$$\delta_1(x_j, y_j) = \begin{cases} 0, & x_j = y_j, \\ 1, & x_j \neq y_j; \end{cases}$$
and $\delta_2(\cdot, \cdot)$ returns the spatial distance between two points with latitude-longitude coordinates using Great Circle distance (WGS84 ellipsoid) methods, with equatorial radius $r = 6378137$ m from the WGS84 axes (Carter, 2002) and flattening of the Earth $f = 1/298.257223563$ according to WGS84. WGS84 is the common reference coordinate system used by the Global Positioning System (GPS), and is also the standard set by the U.S. Department of Defense for a global reference system for geospatial information.
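A minimal sketch of this mixed-type dissimilarity follows, with a hypothetical record layout (normalized numeric list, categorical list, latitude-longitude pair) and a caller-supplied spatial metric standing in for the WGS84 great-circle routine.

```python
def simple_matching(a, b):
    # delta_1: 0 if the categories match, 1 otherwise
    return 0 if a == b else 1

def dissimilarity(x, y, lam1, lam2, spatial_dist):
    """Sketch of the mixed-type distance d(x, y): squared Euclidean on
    the numeric part, simple matching on the categorical part, and a
    supplied spatial distance (e.g. a great-circle routine) on the
    (lat, lon) pair, weighted by lam1 and lam2."""
    x_num, x_cat, x_sp = x
    y_num, y_cat, y_sp = y
    d_num = sum((a - b) ** 2 for a, b in zip(x_num, y_num))
    d_cat = sum(simple_matching(a, b) for a, b in zip(x_cat, y_cat))
    return d_num + lam1 * d_cat + lam2 * spatial_dist(x_sp, y_sp)

# Hypothetical records: (normalized numerics, categoricals, (lat, lon)).
x = ([0.40, 0.55], ["Female", "Nonsmoker"], (40.7, -74.0))
y = ([0.52, 0.61], ["Male", "Nonsmoker"], (40.7, -74.0))
# For illustration only, a placeholder spatial metric (identical points -> 0):
print(dissimilarity(x, y, lam1=0.1, lam2=1e-7,
                    spatial_dist=lambda p, q: 0.0 if p == q else 1.0))
```

Passing the spatial metric as a parameter mirrors the paper's design: the numeric and categorical terms are generic, while the spatial term can be swapped for an ellipsoidal implementation.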
The k-prototypes algorithm aims to minimize the following objective (cost) function:
$$P(U, Z) = \sum_{l=1}^{k} \sum_{i=1}^{n} u_{il}\, d(x_i, z_l), \quad (2)$$
where $U = (u_{il})_{i=1:n,\, l=1:k}$ is an $n \times k$ partition matrix, $Z = \{z_1, z_2, \ldots, z_k\}$ is a set of prototypes, and k is the desired number of clusters. The k-prototypes algorithm employs an iterative process to minimize this objective function. The algorithm starts with k initial prototypes selected randomly from the dataset. Given the set of prototypes Z, the algorithm updates the partition matrix as
$$u_{il} = \begin{cases} 1, & \text{if } d(x_i, z_l) \le d(x_i, z_t) \text{ for all } t = 1, \ldots, k, \\ 0, & \text{otherwise.} \end{cases} \quad (3)$$
Given the partition matrix U, the algorithm updates the prototypes (4): the numerical components of the prototype of a cluster are updated to the within-cluster means (4a), the categorical components are updated to the within-cluster modes, and the new spatial prototype is the coordinate, among $X^* = \{(x_{1,d_2+1}, x_{1,d_2+2}), \ldots, (x_{n,d_2+1}, x_{n,d_2+2})\}$ restricted to the cluster, closest to the previous spatial prototype $z_l^* = (z_{l,d_2+1}, z_{l,d_2+2})$; when $\delta_2$ is calculated, we exclude the previous spatial prototype itself.
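The two update steps can be sketched as follows, under simplifying assumptions: records are (numeric list, categorical list, (lat, lon)) tuples, and a toy distance stands in for Equation (1), with squared Euclidean on the lat/lon pair rather than the great-circle distance the paper uses.

```python
from collections import Counter

def toy_dist(x, y, lam1=1.0, lam2=1.0):
    # Stand-in for Equation (1): squared Euclidean on numerics, simple
    # matching on categoricals, and (for brevity) squared Euclidean on
    # the lat/lon pair instead of a great-circle distance.
    d_num = sum((a - b) ** 2 for a, b in zip(x[0], y[0]))
    d_cat = sum(a != b for a, b in zip(x[1], y[1]))
    d_sp = sum((a - b) ** 2 for a, b in zip(x[2], y[2]))
    return d_num + lam1 * d_cat + lam2 * d_sp

def assign(points, protos):
    # Equation (3): each point joins the cluster of its nearest prototype.
    clusters = [[] for _ in protos]
    for p in points:
        l = min(range(len(protos)), key=lambda j: toy_dist(p, protos[j]))
        clusters[l].append(p)
    return clusters

def update_prototype(members, old_proto):
    # Equation (4): means for the numeric part, modes for the categorical
    # part, and the member coordinate closest to the previous spatial
    # prototype for the spatial part.
    mean = [sum(col) / len(col) for col in zip(*(m[0] for m in members))]
    mode = [Counter(col).most_common(1)[0][0]
            for col in zip(*(m[1] for m in members))]
    sp = min((m[2] for m in members),
             key=lambda s: sum((a - b) ** 2 for a, b in zip(s, old_proto[2])))
    return (mean, mode, sp)

points = [([0.1], ["F"], (40.0, -74.0)),
          ([0.2], ["F"], (40.5, -74.2)),
          ([0.9], ["M"], (34.0, -118.0))]
protos = [points[0], points[2]]
clusters = assign(points, protos)
protos = [update_prototype(c, z) for c, z in zip(clusters, protos)]
print(protos[0])  # numeric mean, categorical mode, closest member coordinate
```

Iterating the assign/update pair until no membership changes reproduces the loop formalized in Algorithm 1 below.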
Algorithm 1 shows the pseudo-code of the k-prototypes algorithm. A major advantage of the k-prototypes algorithm is that it is easy to implement and efficient for large datasets. A drawback of the algorithm is that it is sensitive to the initial prototypes, especially when k is large.
Algorithm 1: Pseudo-code of the k-prototypes algorithm.
Input: A dataset X, k
Output: k clusters
1 Initialize z_1, z_2, . . . , z_k by randomly selecting k points from X;
2 repeat
3     Calculate the distance between x_i and z_j for all 1 ≤ i ≤ n and 1 ≤ j ≤ k;
4     Update the partition matrix U according to Equation (3);
5     Update cluster centers Z according to Equation (4);
6 until no further changes of cluster membership;
7 Return the partition matrix U and the cluster centers Z;

Determining the parameters $\lambda_1$ and $\lambda_2$

The cost function in Equation (2) can be rewritten as
$$P(U, Z) = \sum_{l=1}^{k} D_l, \qquad D_l = \sum_{i=1}^{n} u_{il}\, d(x_i, z_l),$$
where the inner term $D_l$ is the total cost when X is assigned to cluster l. Note that we can subdivide this total cost as
$$D_l = D_l^n + \lambda_1 D_l^c + \lambda_2 D_l^s,$$
where the three terms represent the total cost from the numerical, categorical, and spatial attributes, respectively.
It is easy to show that the total cost $D_l$ is minimized by individually minimizing $D_l^n$, $D_l^c$, and $D_l^s$ (Huang (1997)). $D_l^n$ can be minimized through Equation (4a). $D_l^c$, the total cost from the categorical attributes of X, can be rewritten as
$$D_l^c = n_l \sum_{j=d_1+1}^{d_2} \big(1 - P(z_{lj} \in A_j \mid l)\big),$$
where $A_j$ is the set of all unique levels of the jth categorical attribute of X and $P(z_{lj} \in A_j \mid l)$ denotes the probability that the jth categorical attribute of prototype $z_l$ occurs given cluster l. The weights $\lambda_1$ and $\lambda_2$ are chosen to prevent over-emphasizing either the categorical or the spatial attributes relative to the numerical attributes, and hence depend on the distributions of those numerical attributes (Huang (1997)). In the R package clustMixType, Szepannek, G. (2017) suggested estimating $\lambda_1$ as the ratio of the average variance of the numerical variables to the average concentration of the categorical variables:
$$\hat\lambda_1 = \frac{\frac{1}{d_1}\sum_{j=1}^{d_1} \mathrm{Var}(x_j)}{\frac{1}{d_2 - d_1}\sum_{j=d_1+1}^{d_2}\big(1 - \sum_{k} q_{jk}^2\big)},$$
where $q_{jk}$ is the relative frequency of the kth level of the jth categorical variable. See also Szepannek (2019). We treat each categorical variable as having a distribution in which the probability of each level is its observed frequency. For example, the categorical data element Plan has three levels: Term, Universal life with secondary guarantees (ULS), and Variable life with secondary guarantees (VLS). The concentration of Plan can then be measured by its Gini impurity, $\sum_{k=1}^{3} q_{jk}(1 - q_{jk}) = 1 - \sum_{k=1}^{3} q_{jk}^2$, since $\sum_{k=1}^{3} q_{jk} = 1$. Therefore, under the condition that all the variables are independent, the total Gini impurity for the categorical variables is $\sum_{j=d_1+1}^{d_2} (1 - \sum_k q_{jk}^2)$. The average variance of the numerical variables, $\frac{1}{d_1}\sum_{j=1}^{d_1} \mathrm{Var}(x_j)$, can be considered an estimate of the population variance. Hence $\hat\lambda_1$ is a reasonable estimate and easy to calculate. Analogously, $\hat\lambda_2$ is the ratio of the average variance of the numerical variables to the concentration of the spatial attributes, where the latter is estimated by the variance of the Great Circle distances between each coordinate pair $x^*$ and the center of all the longitude-latitude coordinates.
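The $\hat\lambda_1$ estimate can be sketched directly from its definition; the helper names and column data below are hypothetical.

```python
def variance(xs):
    """Population variance of a numeric column."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def gini_impurity(levels):
    """Concentration of a categorical column: 1 - sum of squared
    level frequencies."""
    n = len(levels)
    freqs = [levels.count(v) / n for v in set(levels)]
    return 1 - sum(q * q for q in freqs)

def lambda1_hat(numeric_cols, categorical_cols):
    """Average variance of the numeric variables divided by the average
    Gini impurity of the categorical variables."""
    avg_var = sum(variance(c) for c in numeric_cols) / len(numeric_cols)
    avg_gini = (sum(gini_impurity(c) for c in categorical_cols)
                / len(categorical_cols))
    return avg_var / avg_gini

nums = [[0.1, 0.4, 0.7, 1.0]]                          # one normalized numeric column
cats = [["F", "M", "M", "M"], ["NS", "NS", "NS", "S"]]  # two categorical columns
print(lambda1_hat(nums, cats))
```

Intuitively, the more dispersed the numeric columns are relative to the categorical columns, the larger the weight the mismatch penalty receives, keeping the two cost components on a comparable scale.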

Determining the optimal number of clusters
As alluded to in Section 1, the gap statistic is used to determine the optimal number of clusters. The data $X = \{X_{ij}\}$, $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, d$, consist of d features measured on n independent observations, and $D_{ij}$ denotes the distance, defined in Equation (1), between observations i and j. Suppose that we have partitioned the data into k clusters $C_1, \ldots, C_k$ with $n_l = |C_l|$. Let
$$D_l = \sum_{i, j \in C_l} D_{ij}$$
be the sum of the pairwise distances for all points within cluster l, and set
$$W_k = \sum_{l=1}^{k} \frac{1}{2 n_l} D_l.$$
The idea of the approach is to standardize the comparison of $\log(W_k)$ with its expectation under an appropriate null reference distribution of the data. We define
$$\mathrm{Gap}(k) = E^*[\log(W_k(X^*))] - \log(W_k),$$
where $E^*[\log(W_k(X^*))]$ denotes the average of $\log(W_k)$ over samples $X^*$ generated from the reference distribution for a given k. The gap statistic can be calculated by the following steps:
• Set k = 1, 2, . . . , 10;
• Run the k-prototypes algorithm and calculate $\log(W_k)$ for each k = 1, 2, . . . , 10 on the original data X;
• For each b = 1, 2, . . . , B, generate a reference data set $X^*_b$ with sample size n. Run the clustering algorithm for the candidate k values and compute the standard error $s(k)$ and $\mathrm{Gap}(k)$;
• Choose the optimal number of clusters as the smallest k such that $\mathrm{Gap}(k) \ge \mathrm{Gap}(k+1) - s(k+1)$.
This estimate is broadly applicable to any clustering method and distance measure $D_{ij}$. We use B = 50 and randomly draw 10% of the data set using stratified sampling to keep the same proportion of each attribute. The Gap statistic and the quantity $\mathrm{Gap}(k) - (\mathrm{Gap}(k+1) - s(k+1))$ are plotted against the number of clusters k in Figure 3. The Gap statistic clearly peaks at k = 3, and the criterion for choosing k is displayed in the right panel: k = 3 is the smallest k for which the quantity $\mathrm{Gap}(k) - (\mathrm{Gap}(k+1) - s(k+1))$ becomes positive. A possible drawback is the high sensitivity to the initial choice of prototypes. To minimize this impact, we run the k-prototypes algorithm with k = 3 starting from 20 different initializations and then choose the run with the smallest total sum of squared errors.
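The selection rule in the last bullet above takes only a few lines; the Gap and s values below are made-up numbers shaped like the curve in Figure 3.

```python
def optimal_k(gap, s):
    """Tibshirani et al.'s rule: the smallest k with
    Gap(k) >= Gap(k+1) - s(k+1); gap[i] and s[i] correspond to k = i + 1."""
    for i in range(len(gap) - 1):
        if gap[i] >= gap[i + 1] - s[i + 1]:
            return i + 1
    return len(gap)

# Hypothetical Gap curve peaking at k = 3:
gap = [0.10, 0.18, 0.25, 0.24, 0.23]
s   = [0.01, 0.01, 0.01, 0.01, 0.01]
print(optimal_k(gap, s))  # 3
```

Because the rule compares each Gap value against its successor minus one standard error, it favors the most parsimonious k whose improvement over the next k is not statistically meaningful.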

Clustering results
Using our mortality dataset with eight attributes of mixed type (numerical, categorical, and spatial), we concluded, as detailed in the previous section, that three clusters are formed. Table 2 displays the size and membership degree of each cluster. Cluster 3 has the largest membership with nearly 57% of the total observations, while Clusters 1 and 2 have 30.1% and 13.0% memberships, respectively. Let us describe some dominating features of each cluster. The outputs are visualized in Figures 4 and 5. Additional details of these dominating features are summarized in Table 4, showing the cluster distribution across the categorical variables; Table 3, with the cluster proportions for the variable States in descending order; and Table 5, regarding the distributions of the numerical variables. These tables are provided in the Appendix.

Cluster 1
• Its gender make-up is predominantly female. There is a larger percentage of Term plans and a smaller percentage of Substandard policies than in Clusters 2 and 3. The violin plots for the numerical attributes show that the youngest group, with the smallest amounts of insurance coverage, is in this cluster. Geographically, the insureds in this cluster are mostly distributed in the northeastern region, in states such as New Jersey, New York, Rhode Island, and New Hampshire.

Cluster 2
• This cluster has an interesting gender make-up. While Clusters 1 and 3 each have a dominating gender, Cluster 2 is 30% female and 70% male. It also has the largest proportions of Smokers, Term conversion underwriting type, and Substandard policies compared with the other clusters. However, when it comes to plan type, 91% of the policies are Universal Life contracts and almost none are Term plans. With respect to issue age and amount of insurance coverage, this cluster has the most senior policyholders and, not surprisingly, also lower face amounts. Geographically, with the exception of a few states dominating the cluster, the remaining states are almost uniformly represented. Cluster 2 has the states with the lowest proportions of insured policies among all the clusters.

Cluster 3
• Male policyholders dominate this cluster, and Cluster 3 has the smallest proportions of Smokers and Term conversion underwriting type among all clusters. More than 90% of the policyholders purchased Term plans, generally with larger face amounts than in the other two clusters. According to the violin plots, the policyholders in this cluster are middle-aged compared with the other clusters. They are also more geographically scattered, in states such as Arkansas, Alabama, Mississippi, Tennessee, and Oregon; interestingly, Cluster 3 has the largest proportion of policies among all clusters.

Analysis of mortality deviation
We now compare these clusters with respect to their deviations of actual from expected mortality. It is typical practice in the life insurance industry, when analyzing and understanding such deviations, to compare the actual-to-expected (A/E) death claims experience.
To illustrate how we made the comparison, consider one particular cluster containing n policies. We computed the actual death claims for this cluster by adding up the face amounts of those who died during the quarter. Let $FA_i$ be the face amount of policyholder i in this cluster and let $I_i = 1$ indicate that the policyholder died. The aggregated actual face amount among those who died is
$$A = \sum_{i=1}^{n} FA_i\, I_i,$$
and the aggregated expected face amount is
$$E = \sum_{i=1}^{n} FA_i\, q_i,$$
where the expected mortality rate $q_i$ is based on the latest 2015 Valuation Basic Table (VBT), smoker-distinct and on an age-last-birthday (ALB) basis. The measure of deviation, R, is then defined to be
$$R = \frac{A}{E}.$$
Clearly, a ratio R < 1 indicates better than expected mortality, while R > 1 indicates worse than expected mortality.
The death indicator $I_i$ is a Bernoulli distributed random variable with parameter $q_i$, which represents the probability of death or, loosely speaking, the mortality rate. For large n, i.e., as $n \to \infty$, the ratio R converges in distribution to a normal random variable with mean 1 and variance
$$\sigma_R^2 = \frac{\sum_{i=1}^{n} FA_i^2\, q_i (1 - q_i)}{\big(\sum_{i=1}^{n} FA_i\, q_i\big)^2}.$$
The details of the proof of this convergence are provided in the appendix.
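A sketch of the ratio and its asymptotic interval follows, under the stated assumptions; the function name and portfolio numbers are hypothetical, and z = 1.645 gives a 90% interval.

```python
import math

def ae_ratio_ci(fa, q, died, z=1.645):
    """A/E ratio R = sum(FA_i * I_i) / sum(FA_i * q_i), together with an
    asymptotic interval centered at 1 (mortality exactly as expected),
    using variance sum(FA_i^2 q_i (1 - q_i)) / (sum(FA_i q_i))^2."""
    expected = sum(f * p for f, p in zip(fa, q))
    actual = sum(f for f, d in zip(fa, died) if d)
    var = sum(f * f * p * (1 - p) for f, p in zip(fa, q)) / expected ** 2
    half = z * math.sqrt(var)
    return actual / expected, (1 - half, 1 + half)

# Toy portfolio: 1,000 identical policies, one observed death.
fa = [100_000.0] * 1000
q = [0.002] * 1000
died = [i < 1 for i in range(1000)]
r, (lo, hi) = ae_ratio_ci(fa, q, died)
print(round(r, 2), lo < 1 < hi)  # 0.5 True
```

An observed R outside the interval would mark a statistically significant mortality deviation for that cluster; in the toy example the portfolio is far too small for R = 0.5 to be significant.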
Based on this convergence result, we constructed 90% and 95% confidence intervals of the ratio R, the A/E of mortality. Figures 6(a) and 6(b) depict the differences in the A/E ratio for the three clusters, based on 90% and 95% confidence intervals, respectively.
Based on this company's claims experience, these figures provide some good news overall. The observed A/E ratios are below 1 for all three clusters, which, as noted earlier, indicates that actual mortality is better than expected. Some particular observations can be drawn from the clusters: • Cluster 1 has the most favorable A/E ratio among all the clusters and is significantly less than 1 at the 10% significance level, with moderate variability. This can reasonably be explained by its dominant features compared with the other clusters: its gender make-up is predominantly female, and females tend to live longer than males on average. There is a larger percentage of Variable Life plans, and slightly smaller percentages of Smokers, Term conversions, and Substandard policies than in Clusters 2 and 3. In addition, the violin plots for the numerical attributes show that the youngest group, with the smallest amounts of insurance coverage, belongs to this cluster; we expect this youngest group to have generally low mortality rates. Geographically, the insureds in this cluster are mostly distributed in the northeastern region, in states such as New Jersey, New York, Rhode Island, and New Hampshire, where the population typically has better income and better employer-provided health insurance.
• Cluster 2 has an A/E ratio of 0.68, which is not significantly less than 1 at either the 5% or the 10% significance level; it also has the largest variability of this ratio among all clusters. Cluster 2 therefore has the most unfavorable A/E ratio from a statistical perspective. The characteristics of this cluster can be captured by these dominant features: (i) its gender make-up is a mix of male and female, with more males than females; (ii) it has the largest proportions of Smokers, Term conversion underwriting type, and Substandard policies compared with the other clusters; (iii) when it comes to plan type, 91% of the policies are Universal Life contracts with almost no Term plans; and (iv) with respect to issue age and amount of insurance coverage, this cluster has the largest proportion of older people and, accordingly, lower face amounts. All these dominating features help explain a generally worse mortality and a larger variability of deviations. For example, the older group has a higher mortality rate than the younger group, and the largest proportion of smokers compounds this mortality. To some extent, the largest proportions of Term conversion underwriting types and Substandard policies also reasonably indicate a more inferior mortality experience.
• Cluster 3 has the A/E ratio that is most significantly less than 1, even though its point estimate is the least favorable among the clusters. The characteristics can be captured by some dominating features: male policyholders dominate this cluster, and it has the smallest proportions of Smokers and Term conversion underwriting type among the three clusters. More than 90% of the policyholders purchased Term plans, generally with larger face amounts than in the other clusters. The policyholders in this cluster are middle-aged compared with the other clusters, according to the violin plots, and are more geographically scattered, in Arkansas, Alabama, Mississippi, Tennessee, and Oregon. We generally know that smokers' mortality is worse than nonsmokers', and relatively younger age groups have lower mortality rates than older groups. Term plans generally have fixed terms and are more subject to frequent underwriting. The small variability can be explained by the larger number of policies, which provides enough information for much more predictable mortality.

Conclusions
In this paper, we investigated the use of the k-prototypes clustering algorithm to provide insights into the death claims experience of a portfolio of contracts from a life insurance company. Developing a tracking and monitoring system for death claims is an important part of managing a portfolio of life insurance policies. We explored how the results of the k-prototypes clustering algorithm can help us detect peculiar characteristics of our life insurance portfolio in order to gain an improved understanding of mortality deviations. The k-prototypes algorithm integrates the procedures of k-means and k-modes to efficiently cluster our data set, which contains numerical, categorical, and spatial attributes. Our data set consists of a life insurance company's death claims experience observed during the third quarter of 2014, with approximately 1.14 million unique policies and a total insured amount of over 650 billion dollars. The optimal number of clusters is obtained using gap statistics; the algorithm produced three dominating natural clusters in this insurance portfolio. We then used the clusters to compare and monitor actual to expected death claims experience. Each cluster has lower than expected death claims but with differing variability, and each optimal cluster showed patterns of mortality deviation from which we are able to deduce the dominant characteristics of the policies within the cluster. We also find that the additional information drawn from the spatial nature of the policies contributed to an explanation of the deviation of mortality experience from expected. The results can help facilitate decision making because of an improved understanding of potential favorable and unfavorable clusters.
Appendix A. Proof of the asymptotic normality of R

Proof. Suppose that $X_1, X_2, \ldots$ are independent random variables, where each $X_n$ has mean $\mu_n$ and variance $\sigma_n^2 < \infty$, with $\sup_n E|X_n|^3 < \infty$ and $0 < \inf_n \mathrm{Var}(X_n)$. Then, by the Lyapunov central limit theorem,
$$\frac{\sum_{i=1}^{n} X_i - \sum_{i=1}^{n} \mu_i}{\sqrt{\sum_{i=1}^{n} \sigma_i^2}} \xrightarrow{d} N(0, 1).$$
In this paper, each policy has a death indicator $I_i$ that is Bernoulli distributed with probability of death $q_{x_i}$. Assume that each policy's mortality rate is observable and fixed, not random, so that $q_{x_i}$ does not vary with the data. Within cluster c with total number of policies $n_c$, let $FA_i$, $A_i = FA_i I_i$, and $E_i = FA_i q_{x_i}$ be the face amount, actual death payment, and expected death payment for policy i, respectively; when policy i is observed to die, $I_i = 1$.
For our data, we calculate $\inf_n \mathrm{Var}(Y_n) = 1.67 \times 10^{-16}$, which is positive and finite, and $\sup_n E|Y_n|^3 = 1.2 \times 10^{-5} < \infty$, where the $Y_i$'s denote the (rescaled) actual death payments. These two conditions are satisfied and the $Y_i$'s are independently distributed.
Let $R_c = \sum_i A_i / \sum_i E_i$ denote the measure of mortality deviation for cluster c. By the Lyapunov theorem, we have
$$\frac{\sum_{i=1}^{n_c} FA_i I_i - \sum_{i=1}^{n_c} FA_i q_{x_i}}{\sqrt{\sum_{i=1}^{n_c} FA_i^2\, q_{x_i}(1 - q_{x_i})}} \xrightarrow{d} N(0, 1),$$
so that $R_c$ converges in distribution to a normal random variable with mean 1 and variance $\sum_{i=1}^{n_c} FA_i^2\, q_{x_i}(1 - q_{x_i}) / \big(\sum_{i=1}^{n_c} FA_i\, q_{x_i}\big)^2$.

Appendix B. Tables that summarize the distribution of clusters