A Cluster-Based Machine Learning Ensemble Approach for Geospatial Data: Estimation of Health Insurance Status in Missouri

Abstract: Mainstream machine learning approaches to predictive analytics consistently prove their ability to perform well using a variety of datasets, although the task of identifying an optimally-performing machine learning approach for any given dataset becomes much less intuitive. Methods such as ensemble and transformation modeling have been developed to improve upon individual base learners and datasets with large degrees of variance. Despite the increased generalizability and flexibility of ensemble approaches, the cost often involves sacrificing inference for predictive ability. This paper introduces an alternative approach to ensemble modeling, combining the predictive ability of an ensemble framework with localized model construction through the incorporation of cluster analysis as a pre-processing technique. The workflow not only outperforms independent base learners and comparative ensemble methods, but also preserves local inferential capability by manipulating cluster parameters and maintaining interpretable relative importance values and non-transformed coefficients for the overall consideration of variable importance. This paper demonstrates the ensemble technique on a dataset to estimate rates of health insurance coverage across the state of Missouri, where the cluster pre-processing assists in understanding both local and global variable importance and interactions when predicting high concentration areas of low health insurance coverage based on demographic, socioeconomic, and geospatial variables.


Introduction
The ability to simultaneously model and analyze both local and global relationships in geospatially-referenced statistical models is a challenge commonly faced by social science researchers, particularly geographers and spatial statisticians. Methods common to social scientists, such as linear and logistic regression, are limited in that they model only global relationships, spanning entire datasets, and produce results that can only be attributed to the dataset as a whole. Other methods, such as clustering techniques, are often used to identify concentrations or "hotspots" within or between variables in a dataset, but fail to provide information about intervariable relationships or dependencies, which can be accomplished through regression analysis [1]. As a result, researchers, in many cases, are forced to make a choice between modeling data locally or globally.
The field of educational assessment has made some important advances in techniques that simultaneously measure global relationships while also accounting for localized trends and patterns, through the incorporation of cluster analysis as a pre-processing technique. Trivedi, Pardos, Sarkozy, and Heffernan [2][3][4] have explored the use of clustering as a pre-processing technique to divide data into homogeneous regions, producing multiple localized models that are combined to form a global structure with improved performance over models without the cluster-based pre-processing. More specifically, Trivedi et al. [3] outlined a cluster-based method for determining whether receiving assistance on assessments resulted in increased student performance, using student information to divide students into clusters for improved modeling performance. The authors found that using the additional student information to cluster students into more homogeneous groups significantly outperformed the other global models explored in the study. Moreover, a later publication by Trivedi et al. [2] discussed in greater detail the utility of clustering to reduce error in prediction tasks, citing that the use of clustering as a pre-processing measure improves the bias-variance tradeoff, as the cluster-based methods in conjunction with regression gain access to more of the variance in the data. Building on the work of Trivedi et al. [2][3][4], this study makes a unique contribution to the literature by applying clustering as a pre-processing technique to geospatial data, in conjunction with regression techniques, to examine global dataset trends while also inferring local intervariable relationships.
One of the principal limitations in directly translating the Trivedi et al. [3] research into a geospatial framework is the regression method chosen to follow the cluster-based pre-processing: linear regression. While linear regression may have been suitable for educational assessment, geospatial datasets often exhibit properties, such as multicollinearity and spatial autocorrelation, that violate the statistical assumptions required for linear regression [1]. To account for nonlinear relationships and datasets that potentially violate these assumptions, this study employs a machine learning ensemble framework, drawing upon well-established individual machine learning methods, or "base learners", which are evaluated and aggregated to form a dynamic learning algorithm that can provide reliable and stable performance regardless of nonlinearity, intervariable relationships, or the distribution of the dataset under study.
Justification for adapting Trivedi et al. [3] to a machine learning framework stems from the "no free lunch" theorem, which suggests that for any machine learning algorithm, elevated performance over one class of problems is offset by degraded performance over another class [5]. In practical terms, this suggests that within a given domain, or for any particular dataset, no single machine learning algorithm or regression approach will always yield the most accurate learner, even when hyperparameters are varied [6]. At the most basic level, each algorithm comes with a unique set of statistical assumptions that must be satisfied for the algorithm to reach optimal performance. The violation of learner assumptions introduces inductive bias, reducing performance and potentially allowing another algorithm, with a different set of assumptions, to outperform the learner where its assumptions have not held [6]. Intuitively, this argument leads to the conclusion that combining multiple base learners improves modeling accuracy and external validity by leveraging multiple models against a single domain or dataset [7]. By employing a clustering method in combination with an ensemble approach, we hypothesize that the cluster-based method will outperform comparative global ensemble models, while preserving external validity and inference through the extraction of information from the base learners.
The central research questions that this study addresses are: (1) whether clustering methods are appropriate and effective as a pre-processing technique within a geospatial context; and (2) whether the cluster-based machine learning ensemble algorithm outperforms similar machine learning approaches that do not use clustering methods. Additionally, this study examines the extent to which inference can be drawn from the localized dataset clusters determined as part of the proposed approach. To answer these research questions, we conduct direct experimentation on a geospatial dataset relating demographic and socioeconomic indicators to an estimate of health insurance status for the state of Missouri. We identified the variables that most significantly impact the number of individuals without health insurance across the state, and measured disparities in health insurance status among targeted sub-populations. The analytic outcome of the proposed method identifies the variables that most significantly account for the variance in health insurance coverage across the state; when used for clustering, these variables most effectively homogenize the dataset, highlighting differences in the relative contribution of variables to health insurance status throughout Missouri.

Data Overview and Base Learner Selection
The dataset selected for the analysis presented in this paper originated from the American Community Survey (ACS) 2012-2016 five-year estimates [8], gathered by the United States (U.S.) Census Bureau. The ACS is an ongoing survey conducted yearly throughout the United States and is designed to be an extension of the decennial census, which collects only general population and housing characteristics [8]. In contrast, the ACS collects a wealth of demographic, socioeconomic, housing, and business information to assist in resource allocation by the federal government. ACS data are collected at hierarchical units of geography, and estimates are generated as one-, three-, and five-year estimates, ideally reflecting the population characteristics under study [8].
For this study, the selected dependent variable is the estimated count of individuals without health insurance, aggregated by census block group. Based on the nature of the dependent variable, all of the models developed for this study were run as regression models. The extent of the study area includes the entire state of Missouri, which is divided into 4506 block groups. The independent variables were chosen based on a taxonomy developed by Juarez et al. [9] known as the "public health exposome". According to Juarez et al., public health issues are principally impacted by elements within the natural, built, social, and policy environments. Using those four domains to guide variable selection, and adapting them based on the available geospatial datasets, the variables selected for this study were divided into four categories: (1) demography; (2) socioeconomic indicators; (3) geospatial characteristics; and (4) assessments of healthcare access. Variables in the demographic and socioeconomic categories were drawn from the ACS, while geospatial characteristics and assessments of healthcare access were primarily derived from a network analysis in ArcGIS 10.5, which calculated distances from block group centroid locations to various points of interest. These calculated variables included distances to medical facilities, to specifically designated shortage areas and government-identified medically underserved areas (MUAs), and to educational facilities including institutes of higher education, as well as the overall length of roads and highways within each block group [8,10-12]. To account for variables of different types, including counts, distances, and averages, all of the variables were standardized so that they could collectively be examined for relative variable contribution in the modeling results.
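As a concrete illustration of this standardization step, the short sketch below z-scales a handful of hypothetical block-group variables so that counts, distances, and dollar amounts become comparable; the variable names and values are placeholders, not actual ACS fields.

```python
from statistics import mean, pstdev

def z_scale(values):
    """Standardize raw values to zero mean and unit (population) variance."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

# Hypothetical block-group variables on very different scales:
# a count, a distance, and a dollar-bracket count (placeholder values).
dataset = {
    "uninsured_count": [120, 45, 300, 80],
    "dist_to_hospital_mi": [2.1, 14.8, 0.6, 7.3],
    "income_over_100k": [410, 580, 330, 720],
}
scaled = {name: z_scale(col) for name, col in dataset.items()}
```

After scaling, each variable has mean zero and unit variance, so model coefficients can be compared for relative variable contribution.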
Table 1 contains descriptions of all of the independent variables included in the analysis, grouped by variable category and variable type, with descriptive statistics and sourcing information. The variable selection for this paper was based on the idea that, despite efforts by government and nonprofit entities to make health insurance more accessible, it remains a significant socioeconomic indicator, as evidenced by research suggesting continued disparities between health insurance status and socioeconomic indicators [13][14][15][16]. Viewing health insurance status as a socioeconomic indicator, it is appropriate from a theoretical standpoint to include demographics and other socioeconomic indicators as predictors, as well as other variables that lend insight into the level of access available across geographic space [17,18]. The intuition behind the variable categories is that only those individuals who can afford insurance will be able to obtain it, and if individuals cannot physically access healthcare, they have no incentive to retain it [19]. Further, working under the assumption that demography tends to cluster across space, as evidenced by Tobler's Law, health insurance status is also likely to cluster across space, justifying the inclusion of geospatial and demographic characteristics [20].

The next step in our study was to decide which machine learning algorithms to include as base learners in the ensemble, the most important consideration in learner selection being the concept of diversity. In the machine learning literature, diversity of base learners is the idea that while each learner need not be globally accurate on its own, it should be highly accurate in particular subregions of the hyperspace spanned by the predictors, with the learners outperforming one another on different subregions of the observations being analyzed [6].
The base learners that we used in our ensemble were selected based on their overall simplicity, their ability to infer intervariable relationships, their ability to compute varying statistical and mathematical properties, and the validity of the assumptions that need to be met across the ensemble of methods. Our ensemble included seven base learners grouped into four categories based on the class of learner. Table 2 contains descriptions of each base learning technique, the category it represents within the ensemble framework, and references to the packages in the R statistical environment where the specific algorithms have been implemented (e.g., partial least squares in pls [25]). Only one of the base learners representing each learning category will advance to inclusion in the ensemble model. The ensemble algorithm begins by using each base learner to fit a global model; the better-performing global base learner in each category then advances to the learning category level for eventual aggregation. This method produces a diverse ensemble and avoids redundancy by not including multiple base learners that essentially represent the same class of machine learning model. For example, given that lasso and ridge regression are both penalty-based linear modeling techniques, the better performing of the two will advance to represent the penalty-based learning category in the ensemble [6].
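The one-learner-per-category advancement rule can be sketched as a simple selection by cross-validated MSE. The MSE values below are illustrative placeholders, not the study's actual results, and the category and learner names are informal labels.

```python
# Illustrative cross-validated MSE per base learner, grouped by
# learning category (placeholder numbers, not the study's results).
base_learner_mse = {
    "penalty":   {"lasso": 0.52, "ridge": 0.54},
    "tree":      {"random_forest": 0.55, "boosted_trees": 0.56},
    "dimension": {"pcr": 0.487, "pls": 0.488},
    "svm":       {"svm_tuned": 0.33},
}

def advance_by_category(mse_by_category):
    """Within each learning category, only the lowest-MSE base learner
    advances to represent that category in the ensemble."""
    return {cat: min(learners, key=learners.get)
            for cat, learners in mse_by_category.items()}

winners = advance_by_category(base_learner_mse)
```

With these placeholder numbers, `lasso`, `random_forest`, `pcr`, and `svm_tuned` would represent their respective categories.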

Statistical Methods
To illustrate the workflow associated with the cluster-based ensemble algorithm presented in this study, Figure 1 offers a generalized procedural diagram outlining the steps involved in executing the ensemble from the input to output datasets. The procedure is divided into five major steps, beginning with cluster analysis as a pre-processing technique, working through base learner aggregation, learning category aggregation, and cluster assembly or aggregation, noting that step four involves repeating the ensemble procedure on each of the four clusters derived from the original dataset.
The cluster-based ensemble algorithm begins by running a cluster analysis on the original input dataset, dividing it into four clusters, equal in number to the learning categories, based on a chosen independent variable. Justification for clustering lies in the idea that, for a regression problem, running the ensemble on N clusters representing homogeneous subregions of a global dataset leaves less variation within each cluster, so that each learner fits a more tailored, accurate model that is local to its specific subregion. Additionally, we applied 10-fold cross-validation (a 90%/10% split between training and testing) for each base learner, in combination with aggregating cluster results to the ensemble level, to reduce the possibility of overfitting at the cluster level [6,26]. Rather than summarizing variable importance and explanatory power at the global (entire dataset) level, in our approach, interpretation and inference instead happen at the more localized cluster level.

In this study, clustering is performed using the k-means approach, partitioning 'n' observations into 'k' clusters such that each observation belongs to the cluster with the nearest mean, which serves as the descriptor of that cluster [26,27]. This process essentially splits the data space into Voronoi cells, a construction commonly used in geospatial statistics to partition a plane into regions based on point distances to plane subsets [28]. The k-means approach was chosen for this study because of its simple method for reducing the space into disjoint subregions, as well as the relatively few hyperparameters required to produce results [26]. In the hyperparameter assignment step, the algorithm takes in all of the dataset observations and divides the dataset into four clusters, equal to the number of machine learning class categories represented in the ensemble framework. Choosing an optimal number of clusters for a given dataset is a complex problem in unsupervised learning [1]; therefore, four clusters were chosen solely for the sake of simplicity. Alternative methods for optimal cluster assignment were explored via the "NBClust" library in R; however, due to wide disagreement about the optimal number of clusters among the approximately 30 indices computed by the NBClust function, it was decided to fix the number of clusters at a pre-determined value equal to the number of learning categories (four) in the ensemble [29].

We did consider other values for the number of clusters in order to assess their impact on overall model performance. We found that models with fewer than four clusters showed a progressive increase in mean square error (MSE), likely due to the increased heterogeneity of the larger clusters. In contrast, increasing the number of clusters led to greater variance in MSE, and in many cases the results were questionable because of the progressively smaller number of observations per cluster once the number of clusters exceeded six. These results indicate that while choosing a static cluster count equal to the number of learning categories is feasible for this study, for datasets whose sizes are of other orders of magnitude, the number of clusters may need to be reassessed in order to ensure optimal model performance.
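The k-means pre-processing step can be sketched as follows, assuming a single standardized clustering variable (the study's analysis was carried out in R; this is an illustrative Lloyd's-algorithm version, not the production implementation):

```python
import random

def kmeans_1d(points, k=4, iters=100, seed=0):
    """Lloyd's algorithm on one standardized clustering variable:
    partition observations into k groups around the nearest mean."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # initial centers drawn from the data
    for _ in range(iters):
        # Assignment step: each observation joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: (p - centers[i]) ** 2)
            clusters[idx].append(p)
        # Update step: each center becomes the mean of its cluster.
        new_centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:           # converged
            break
        centers = new_centers
    return centers, clusters

# e.g., standardized distance-to-hospital values for a few block groups
centers, clusters = kmeans_1d([0.0, 0.1, 0.2, 1.0, 1.1, 2.0, 2.1, 3.5], k=4)
```

In the study's setup, `points` would hold the standardized values of the chosen clustering variable for all 4506 block groups, and `k=4` matches the number of learning categories.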
In the second stage of our approach, which involves the aggregation of results from base learners (step two in Figure 1), there are two themes for composing an ensemble: multi-expert combination, and multi-stage combination [6]. In multi-expert combination, the base learners work in parallel; in multi-stage combination, simple base learners attempt to fit the model individually, and if they fail to reach a desired level of accuracy, more complex learners are employed to attempt to reach the desired accuracy. The main argument for the multi-stage approach is that it minimizes the required computational power; if a model can be fitted using a simple approach, then there is no need to fit the models using more complex learners. In contrast, when computational power is not a constraint, such as when working with a relatively small dataset, the multi-expert method may be preferred, because it will likely result in a more accurate model than the multi-stage technique, since all of the possible base learners are evaluated rather than a more bracketed or selective subset.
For this study, the cluster-based ensemble uses a multi-expert approach due to the size of the dataset and desire to reach the highest possible accuracy. Within the multi-expert combination schema, there are two approaches for learner aggregation that should be considered when developing an ensemble model. The first, or a global approach, uses all of the learners that produce an output in aggregation via techniques such as voting or stacking. In contrast, a local approach analyzes the output from each learner, and chooses a subset of the learners to be responsible for aggregation in the ensemble [6]. Provided that only one learner from each class of machine learning approaches will advance to represent each learning category, the cluster-based ensemble uses the local approach to ensemble aggregation. The multi-expert schema followed by a local approach to aggregation is evidenced in Figure 1, step two, where all seven of the base learning algorithms are assessed, and the best-performing learning algorithm in each category advances to represent the learning category in step three, learning category aggregation.
For mathematical aggregation, the ensemble models used to compare performance against the cluster-based ensemble employ three well-known approaches to ensemble aggregation: the average approach, the globally weighted average approach, and the minimum residual approach. These three approaches serve as the standards for assessing whether the cluster-based method improves on similar global models. The average approach aggregates by taking the predicted observation of each winning base learner by class, averaging the four values to produce an average predicted value per observation, which is then used to calculate the MSE against the actual ground-truth data. The globally weighted average approach is a direct extension of the average approach, in which the observation for each of the four base learners is taken by class, weighted by the minimum-maximum (0-1) scaled global MSE for each learner, and then summed to obtain the weighted average predicted value. Finally, the minimum residual approach selects, across the four learners, the estimated value with the smallest residual to represent the observation in the ensemble [5].
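A minimal sketch of the three comparative aggregation rules is given below. Note that the direction of the min-max MSE weighting is an assumption here (lower-MSE learners receive larger weights, which the prose leaves implicit), and `preds` is a list of per-learner prediction vectors.

```python
def mse(pred, actual):
    """Mean square error between a prediction vector and ground truth."""
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual)

def average_agg(preds):
    """Average the winning learners' predictions per observation."""
    return [sum(obs) / len(obs) for obs in zip(*preds)]

def weighted_average_agg(preds, learner_mse):
    """Weight each learner by its min-max-scaled global MSE (assumed
    here to favor lower-MSE learners)."""
    lo, hi = min(learner_mse), max(learner_mse)
    raw = ([1.0] * len(learner_mse) if hi == lo
           else [1 - (m - lo) / (hi - lo) for m in learner_mse])
    w = [r / sum(raw) for r in raw]
    return [sum(wi * p for wi, p in zip(w, obs)) for obs in zip(*preds)]

def min_residual_agg(preds, actual):
    """Per observation, keep the learner estimate with the smallest
    residual (a reference bound, not a deployable predictor)."""
    return [min(obs, key=lambda p: abs(p - a))
            for obs, a in zip(zip(*preds), actual)]
```

Because the minimum residual rule consults the ground truth, it serves only as an optimistic reference point for the other aggregators, as the Results section discusses.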
The cluster-based ensemble method will use two relatively straightforward approaches to learner aggregation once four base learners have advanced to represent their respective learning categories. Figure 1, step three, summarizes the two approaches: the "best learner" approach, and the "average learner" approach. Relating to the multi-expert combination schema, the only difference between the "best learner" and "average learner" approaches is the method for learner aggregation: the "best learner" approach uses only the best of the four learning categories (in terms of performance) to represent the given cluster, where the "average learner" approach averages the four learning categories.
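The two cluster-level rules can be sketched as one function, assuming each advancing learning category supplies a prediction vector and a cross-validated MSE (the names are informal labels, not the study's code):

```python
def aggregate_cluster(category_preds, category_mse, mode="best"):
    """Combine the advancing learning categories within one cluster.
    'best' keeps only the lowest-MSE category's predictions ("best
    learner"); 'average' averages across categories ("average learner")."""
    if mode == "best":
        best = min(category_mse, key=category_mse.get)
        return category_preds[best]
    return [sum(vals) / len(vals)
            for vals in zip(*category_preds.values())]
```

The choice between the two modes is the predictive-power-versus-inference tradeoff discussed in the interpretation section: "best" keeps a single interpretable model per cluster, while "average" blends all four categories.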
Following Figure 1, step four, which involves running base learner and learning category aggregation on each of the four clusters, step five involves combining the results from each of the four clusters into a single dataset. Since the dataset was divided at the beginning into four clusters, and records within clusters persisted throughout steps two through four, retaining information referencing the original observation, step five involves appending each of the clusters into a new output dataset. MSE information by the base learning algorithm, learning category algorithm, and cluster are retained throughout the execution of the algorithm, where in step five, the estimated observations are compared against the original dataset, computing an overall MSE value representing the global performance of the cluster-based ensemble.
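Step five can be sketched as a merge on the retained original observation index followed by a global MSE computation (a simplified sketch; index handling in the actual workflow is more involved):

```python
def assemble_clusters(cluster_preds, actual_by_index):
    """Append each cluster's predictions back into one output dataset,
    matched on the original observation index, and compute the global
    MSE of the cluster-based ensemble."""
    merged = {}
    for cluster in cluster_preds:            # each: {orig_index: prediction}
        merged.update(cluster)
    sse = sum((merged[i] - actual_by_index[i]) ** 2 for i in merged)
    return merged, sse / len(merged)
```

Because records keep their original indices through steps two to four, the merge is a simple union of disjoint cluster outputs.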

Results
As discussed previously, the cluster-based ensemble approach uses a multi-expert, localized learner combination approach to determine which base algorithms to include in the final ensemble. For purposes of model evaluation, mean square error (MSE) is used to assess relative model performance, where lower MSE values indicate improved modeling performance. Table 3 identifies each base learner and learning category, along with the MSE for each individual base learner, validated with 10-fold cross-validation followed by fitting on the entire dataset. Table 3 indicates that the tuned Support Vector Machine (SVM) model significantly outperforms all of the other base learners, with a decrease of approximately 0.16 in scaled MSE. Following the support vector regression category, the next best-performing category consisted of the dimension reduction methods, with principal components regression (PCR) and partial least squares (PLS) producing the next two smallest MSE values (0.487 and 0.488, respectively). Considering the 39 variables in the dataset, the PLS and PCR methods performed well, presumably because reducing the dimensions to variance-concentrated components increased model performance. The last two machine learning categories performed roughly equally, with the lasso and ridge regression models slightly outperforming the random forest and boosted tree-based models. For the comparative ensemble models, base learners were aggregated according to the average predicted observation, the weighted-average observation, or the minimum residual method, as previously outlined in the Methods section. Table 4 summarizes the model performance for each global ensemble aggregation approach to be compared against the cluster-based ensemble technique.
From the results shown in Table 4, we find that the weighted-average approach to learner aggregation yielded a lower MSE (0.439) compared to the non-weighted-average approach (0.449), excluding the minimum residual approach, which should only be used as a reference to assess the optimal model performance in a selective aggregation scenario. A realistic expectation for ensemble performance should fall somewhere between the average aggregation (0.449) and minimum residual approach (0.267), though as close to the latter as possible. While many other methods have proven effective for global learner aggregation, potentially pushing the global MSE closer to the gold standard minimum residual MSE, the focus of this study is to compare the global and cluster-based ensemble approaches; therefore, the methods listed in Table 4 will be sufficient for comparison.

The results of the cluster-based ensemble algorithm displayed significant performance improvements over both comparative ensemble techniques and individual base learners. Table 5 contains the results of the top performing independent variables that were chosen for cluster-based pre-processing, along with the aggregation approach that was used to compute ensemble performance. The table provides information for the top independent variables that, when used for the basis of cluster analysis, optimized the cluster-based ensemble model performance. Referencing back to the comparative ensemble MSE values from Table 4, regardless of the independent variable used for cluster-based pre-processing, the cluster-based ensemble approach outperformed the comparative global ensembles in every circumstance, except for the minimum residual aggregation method, as expected. In addition, there are significant similarities in the top five variables between the two cluster-based ensemble aggregation methods.
Income, education, and distance from roads/highways were among the top performing cluster variables in both aggregation methods, leading to the conclusion that there are meaningful differences between clusters for those particular variables, resulting in varying model performance among the base learning algorithms.

Interpretation of Study Findings
The purpose of this paper was to introduce an ensemble modeling approach that localizes model fitting by splitting a dataset into homogeneous subregions through the application of cluster analysis as a pre-processing step. Table 6 contains a summary of the study results, providing measures of model performance across the base learners, the comparative global ensemble techniques and their aggregation measures, and the cluster-based ensemble technique and its aggregation measures, with the top two independent variables used for cluster-based pre-processing. Overall, based on the study findings, the cluster-based ensemble algorithm aggregated by "best learner" outperformed all of the other individual base learning algorithms and comparative ensemble approaches, regardless of aggregation technique. This suggests that a cluster-based approach to ensemble learning may ultimately improve model performance while maintaining the increased external validity and generalizability that are inherent strengths of ensemble learning. The findings from this study are consistent with those of Trivedi et al. [2,3], who found that cluster-based dataset division prior to model fitting may improve global performance when the aggregated cluster-based models are compared to singular global models. Further, the findings of this study suggest that while cluster-based pre-processing methods have yielded superior performance on nonspatial datasets, similar procedures can also be applied to geospatial datasets with improved model performance.
The main objective for the cluster-based ensemble algorithm was to maximize the predictive ability while preserving the highest inferential capability possible. However, in order to draw any inferences, model variables required a theoretical basis for inclusion. In this study, the theoretical basis was satisfied through the inclusion of the "public health exposome" framework developed by Juarez et al. [9]. Although interpretation of cluster-based models is complex, inference is still possible, especially when using the "best learner by cluster" approach. The results from this study indicate that the cluster-based ensemble performed optimally when the distances to hospitals variable was used as the basis for clustering. Consequently, those modeling parameters were used for model interpretation and to demonstrate inferential capability throughout the paper.
From the results shown in Table 7, PCA was the learner used to fit the cluster with the highest mean value of the distances to hospitals clustering variable. This first cluster represented the areas (block groups) farthest from hospitals and medical facilities, which are primarily rural. The SVM method was used to fit clusters two, three, and four, implying that it outperformed the other base learners in the suburban and urban areas where hospitals are relatively near. Although PCA was the best-performing learner on the first cluster, that cluster also had the highest MSE, or error rate, of the four clusters, meaning either that the health insurance status of rural areas is difficult to model, or that the first cluster contains the most variance in the dependent variable under study. A comparison of relative variable importance found several consistencies across the four clusters. Relative variable significance can be inferred from our models because the entire dataset was z-scaled prior to model execution (for both the global and cluster-based ensembles). From Table 8, which is based on the best-performing configuration (PCA fitting the first cluster, followed by SVM fitting the latter three), we can conclude that for the first cluster, race, language spoken at home, and income appear to be the most significant variables according to the model coefficients. We can further infer that higher numbers of uninsured persons are associated with lower numbers of white inhabitants (per block group), along with the number of individuals making over $100,000 per year, and, to a lesser degree, the number of African American inhabitants.
Another finding is that higher counts of individuals speaking only English tend to be associated with higher counts of uninsured persons, but only in the areas at the greatest relative distance from hospitals, where most inhabitants are white to begin with. Practically, this means that in the areas farthest from medical facilities, race and income are major factors in the determination of health insurance status. In the latter clusters, representing areas closer to medical facilities, there is significant variation in variable makeup. However, because the SVM technique measures relative variance rather than producing a standardized coefficient, the direction of each relationship cannot be determined. In the clusters closer to medical facilities, income, population density, and education tend to be the most significant variables for estimating health insurance status. While race was also found to be a significant estimator, this can likely be explained by a lack of diversity in rural regions; in Missouri, racial clusters tend to follow population clusters.
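The role of z-scaling in making coefficients comparable can be illustrated with a minimal sketch (assuming scikit-learn; the data and coefficient values are synthetic, not those of the study). After standardization, each coefficient is expressed in standard-deviation units, so absolute magnitudes can be ranked directly as relative importances even when the raw predictors sit on very different scales.

```python
# Illustrative sketch: z-scaling predictors makes linear coefficients
# directly comparable as relative importances. Synthetic data only.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
# Three predictors on deliberately different scales (sd = 1, 10, 3).
X = rng.normal(loc=5.0, scale=[1.0, 10.0, 3.0], size=(500, 3))
y = 0.5 * X[:, 0] + 0.15 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(size=500)

# Standardize, then fit; coefficients are now in sd units of each x.
Xz = StandardScaler().fit_transform(X)
coefs = LinearRegression().fit(Xz, y).coef_

# Rank variables from most to least important by |standardized coef|.
ranking = np.argsort(-np.abs(coefs))
print(ranking)
```

On the raw scale the largest coefficient belongs to the first predictor, but after standardization the second predictor (small raw coefficient, large variance) dominates, which is exactly why the study z-scales before comparing importances.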
It should be noted that the method of interpretation described above is only valid when using the cluster-based ensemble algorithm with best base learner by cluster as the aggregation mechanism.
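As a concrete illustration, the best-learner-by-cluster workflow can be sketched as follows, assuming a scikit-learn environment. The data, the set of base learners, and all parameter values are illustrative stand-ins, not the configuration used in the study: block groups are clustered on a single variable (here a synthetic distance-to-hospital column), every base learner is fitted within each cluster, and the learner with the lowest hold-out MSE is retained per cluster.

```python
# Sketch of the cluster-based ensemble with "best learner by cluster"
# aggregation. Data, learners, and parameters are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge, Lasso
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))                # z-scaled predictors
dist_to_hospital = X[:, 0]                   # clustering variable
y = X @ np.array([1.5, -0.8, 0.3, 0.0, 0.6]) + rng.normal(scale=0.5, size=400)

# 1. Pre-processing: k-means on the single clustering variable.
km = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = km.fit_predict(dist_to_hospital.reshape(-1, 1))

# 2. Within each cluster, fit every base learner and keep the one
#    with the lowest hold-out MSE.
learners = {"ridge": Ridge(), "lasso": Lasso(alpha=0.01), "svr": SVR()}
best = {}
for c in range(4):
    idx = labels == c
    Xtr, Xte, ytr, yte = train_test_split(X[idx], y[idx], random_state=0)
    scores = {name: mean_squared_error(yte, model.fit(Xtr, ytr).predict(Xte))
              for name, model in learners.items()}
    best[c] = min(scores, key=scores.get)

print(best)  # name of the winning base learner for each cluster
```

Because each cluster keeps a single untransformed learner, the per-cluster coefficients (or importance measures) remain directly interpretable, which is the property the interpretation method above relies on.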
Using the average aggregation mechanism for the same modeling technique, interpretation would still be possible, although a method would need to be established for interpreting average importance across multiple models; this is a possible avenue for future research. In contrast to the best-learner approach, the average aggregation mechanism is preferable when predictive power must be maximized for external validity and generalizability, since averaging multiple learners yields a more representative and reliable response. This is a tradeoff between predictive ability and inference: if predictive power is paramount and the output must be maximally generalizable and externally valid, then averaging results is typically encouraged, although it leads to decreased interpretability due to model complexity.
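A minimal sketch of the averaging alternative (assuming scikit-learn; data and learner choices are illustrative): within a cluster, the ensemble response would be the mean of all fitted base learners' predictions rather than the output of a single selected learner.

```python
# Illustrative sketch of the average aggregation mechanism: the
# within-cluster prediction is the mean over all base learners.
import numpy as np
from sklearn.linear_model import Ridge, Lasso
from sklearn.svm import SVR

rng = np.random.default_rng(1)
# Stand-in for the observations belonging to one cluster.
X = rng.normal(size=(200, 3))
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=200)

# Fit every base learner on the cluster, then average the predictions.
models = [Ridge(), Lasso(alpha=0.01), SVR()]
preds = np.column_stack([m.fit(X, y).predict(X) for m in models])
avg_pred = preds.mean(axis=1)  # averaged ensemble response
print(avg_pred.shape)
```

The averaging step is what improves generalizability, but it is also what entangles the learners' coefficients and importance measures, producing the interpretability loss described above.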
While the cluster-based algorithm can maintain a significant level of inference when using the best-learner aggregation approach, the method has limitations in terms of its potential for generalizability. Despite the interpretation tradeoff, however, one thing is clear: the cluster-based ensemble algorithm resembles localized regression while retaining the original variable inputs for interpretation. Whereas a generalized additive model (GAM) using local or spline regression approaches often leads to a complicated interpretation of transformed variables, the cluster-based approach is a suitable alternative when inferential capability is required. Maps 1 and 2 represent the clusters derived from k-means clustering using the distance-to-hospitals variable as the basis for clustering.
From Map 1 in Figure 2, patterns can be discerned where the first cluster primarily represents rural areas, the second and third clusters represent suburban areas, and the fourth cluster represents urban areas with hospitals nearby. Map 2 shows the optimal base learner per cluster under the best-learner aggregation method, where the first cluster (farthest from hospitals) was fitted using PCA and the latter clusters (closer to hospitals) were fitted using the SVM approach. The cluster-based approach thus illustrates localized regression; the only caveat is that interpretation must proceed by cluster rather than globally, leading to more descriptive conclusions surrounding model performance and relative variable importance.


Study Limitations
This study was limited by the number of machine learning approaches incorporated into the cluster-based ensemble algorithm. Despite diversity among base learners, adding more individual modeling techniques may improve modeling performance. Additionally, the number of clusters used to split any independent variable also raises questions. As discussed earlier, there are multiple approaches for determining an optimal number of clusters, where, at best, a variety of indices return a range of optimal values. For this study, it was decided early on to limit the number of clusters to equal the number of modeling classes under study. While this approach works from a practical standpoint, for true optimization the determination of the number of clusters to retain could be improved upon. More specifically, given how MSE varied as cluster counts were iteratively tested, further research employing synthetic population generation could explore in greater detail the relationship between the size of a given dataset and the optimal number of clusters, while maintaining variable relationships.
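For illustration, one common index-based approach to choosing the cluster count, a silhouette scan over candidate values of k, can be sketched as follows (synthetic one-dimensional data, assuming scikit-learn; the study itself fixed the cluster count rather than optimizing it, and different indices may disagree).

```python
# Sketch of index-based cluster-count selection: scan k and keep the
# silhouette-maximizing value. Synthetic, well-separated 1-D data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(4)
# Four well-separated groups along one clustering variable.
x = np.concatenate([rng.normal(c, 0.3, 100) for c in (0, 3, 6, 9)]).reshape(-1, 1)

scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(x)
    scores[k] = silhouette_score(x, labels)

best_k = max(scores, key=scores.get)  # k with the highest silhouette
print(best_k)
```

On real block-group data the indices rarely peak this cleanly, which is precisely the ambiguity the limitation above describes.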
Finally, in terms of inference, some of the modeling techniques described in this study lend themselves well to statistical inference: lasso, ridge, and principal component regression specifically. Other techniques have established measures of variable importance but lack coefficients or measures of statistical significance that could indicate the direction of a relationship or provide confidence intervals around the importance measures. This makes interpretation challenging for some machine learning classes, such as support vector regression; future research or improvement on this technique could aim to build bootstrap-based confidence intervals around the variable importance measures, leading to better indications of the statistical significance behind each measure.
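A hedged sketch of the suggested improvement, assuming scikit-learn: bootstrap resampling combined with permutation importance yields an empirical confidence interval for each variable's importance under a support vector regressor. All data and parameter values here are illustrative, and permutation importance is only one of several importance measures this idea could wrap.

```python
# Sketch: bootstrap confidence intervals around permutation importance
# for an SVR model. Synthetic data; parameters are illustrative.
import numpy as np
from sklearn.svm import SVR
from sklearn.inspection import permutation_importance
from sklearn.utils import resample

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
y = 2.0 * X[:, 0] + X[:, 2] + rng.normal(scale=0.3, size=300)

boot_imps = []
for b in range(30):                            # bootstrap replicates
    Xb, yb = resample(X, y, random_state=b)    # resample with replacement
    model = SVR().fit(Xb, yb)
    imp = permutation_importance(model, Xb, yb, n_repeats=5, random_state=b)
    boot_imps.append(imp.importances_mean)

# Percentile-based 95% confidence interval per variable.
boot_imps = np.array(boot_imps)
lo, hi = np.percentile(boot_imps, [2.5, 97.5], axis=0)
print(lo, hi)
```

An interval that excludes zero would then serve as an informal significance indicator for that variable's importance, addressing the gap noted for coefficient-free learners.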

Conclusions
This paper outlined a machine learning ensemble algorithm that uses a cluster-based approach to localize modeling, adding performance and inferential capability relative to individual base learners and to more complex methods such as local or spline-based regression procedures. The study compared individual base learners, comparative ensemble approaches, and the cluster-based ensemble algorithm, finding overall that the cluster-based approach with best-learner-by-cluster aggregation was the top-performing model, followed by the average aggregation method for the same cluster-based ensemble.
The applied research problem framed in this paper was as follows: can machine learning algorithms employing cluster analysis as a pre-processing technique serve as a methodological framework to improve upon existing methods for estimating health insurance status across Missouri using geospatial, demographic, socioeconomic, and access-to-care characteristics? The findings from this study suggest that in areas farthest from hospitals and medical facilities (cluster one), race and income are the primary factors associated with identifying block groups with high concentrations of individuals without health insurance. Specifically, rural areas with high concentrations of white and English-only-speaking populations are likely to experience higher rates of uninsured inhabitants, whereas rural areas with higher proportions of African American residents or high median incomes are less likely to carry elevated uninsured rates. In areas closest to hospitals and medical facilities (cluster three), race remains a factor in the determination of health insurance status, but is far less influential than in rural areas. In contrast to rural areas, education and labor characteristics become significant in estimating health insurance coverage in more heavily populated areas, likely owing to the types of jobs held in urban areas and to differences in the health insurance coverage offered in urban versus rural settings, such as self-purchased insurance versus employer-sponsored plans.
In practice, the cluster-based ensemble method adds inferential capability to localized regression techniques and can assist public health officials in decision-making surrounding targeted interventions among complex populations. This is especially true for large areas with diverse populations, where partitioning the population into smaller clusters for tailored analysis can improve model accuracy. This paper makes a unique and significant contribution to the academic literature on cluster-based machine learning ensembles for geospatial data. First, the success of the algorithm used in this study comes from its ability to answer several research questions simultaneously while increasing estimation accuracy over base learners and comparative ensemble approaches. Second, while there are aspects of the cluster-based ensemble technique that can be improved upon, this study lays the groundwork for future research in localized ensemble modeling. Third, collectively, this paper presents a framework that, when applied to geospatial datasets, can deliver both prediction and inference, capturing information about global performance and trends while retaining valuable information at the localized level to explain patterns and variations within datasets.