Article

The Typology of Public Schools in the State of Louisiana and Interventions to Improve Performance: A Machine Learning Approach

by
Aloyce R. Kaliba
* and
Donald R. Andrews
College of Business, Southern University and Agricultural and Mechanical College, Baton Rouge, LA 70813, USA
*
Author to whom correspondence should be addressed.
Educ. Sci. 2023, 13(2), 160; https://doi.org/10.3390/educsci13020160
Submission received: 18 October 2022 / Revised: 21 December 2022 / Accepted: 21 December 2022 / Published: 2 February 2023

Abstract:
Extant literature on education research focuses on evaluating schools’ academic performance rather than the performance of educational institutions. Moreover, the State of Louisiana public school system consistently performs poorly in education outcomes compared to other school systems in the U.S. One of the limiting factors is the stringent standards applied among heterogeneous schools, stemming from fit-for-all policies. We use a pairwise controlled manifold approximation technique and a gradient boosting machine algorithm to typify Louisiana public schools into homogenous clusters and then characterize each identified group. The analyses uncover critical features of failing and high-performing school systems. Results confirm the heterogeneity of the school system, and each school needs tailored support to buoy its performance. Short-term interventions should focus on customized administrative and academic protocols with malleable interpositions addressing individual school shortcomings such as truancy. Long-term policies must address authentic economic development programs to foster community engagement and creativity while allocating strategic resources that cultivate resilience at the school and community levels.

1. Introduction

Two reports published in 1983 by the National Commission on Excellence in Education and the Carnegie Forum on Education and the Economy painted a grim picture of education outcomes in the U.S. As a result, states and school districts have implemented policies and regulations to foster high academic standards, improve accountability, and achieve excellence while administering rules and laws to maintain school disciplinary conduct. However, according to [1,2,3], the U.S. still has one of the highest high school dropout rates among developed countries, and among students who complete high school and go on to college, half require remedial courses and half never graduate [4,5].
For the U.S. youth to compete for rewarding careers against other brilliant young people from across the globe, a college degree or advanced certificate is necessary. As the World Economic Forum reports [6], three-quarters of the fastest-growing occupations require education beyond a high school diploma, with science, technology, engineering, and mathematics (STEM) careers prominent on the list. To reignite U.S. education competitiveness, relight economic growth, and create a thriving middle class, the U.S. requires an inclusive education system that prepares all students for college and STEM careers and implements innovative public policies to ensure every child receives a quality education.
The extant literature related to quantitative (behavioral and cognitive) education analyses focuses on determining factors influencing student achievements and school performance using academic growth models and other econometric tools summarized by [7,8]. These models assume that schools or school systems are homogenous. What arises from these studies are fit-for-all public policies that do not necessarily impact education outcomes, as no one policy guarantees success [6]. Due to location and neighborhood effects, schools exhibit heterogeneous characteristics and face different challenges and constraints across districts and time; therefore, they demand tailored and diversified support.
Most education studies rely on academic growth models [9] to measure students’ progress or schools’ performance on standardized test scores relative to academically similar students from one time point to the next, as well as students’ progress toward proficiency standards. These models provide a general framework for interventions to revive failing students or schools or to rally high-achieving students or schools. However, results from these models are not helpful when the objective is deriving tailored recommendations for specific students or schools with distinctive features and characteristics that differ from others. Moreover, the statistical methods used to control for student background and other extraneous variables in these models make it impossible to determine the impact of covariate variables on student or school performance [9,10].
This study deviates from these studies by focusing on whether there is a non-random structure in the Louisiana public school system and pinpointing critical features that differentiate schools’ performance. We used pairwise controlled manifold approximation (PaCMAP) as an unsupervised machine learning tool for multidimensionality reduction and for visualizing the created school clusters [11], and applied different internal statistical tools to determine the optimal number of clusters and validate the created groups, as reported elsewhere [12]. Further analysis using visualization tools and multiclass classification, specifically the random forest and gradient boosting machine methods [13], identifies critical features of failing and performing schools. The results are vital in suggesting tailored interventions to improve the performance of Louisiana’s public school system.
Figure 1 presents the steps taken for data collection and analysis, and the study has two main contributions. First, the unsupervised clustering technique illustrated in Figure 1 below helped build a multiclass classification model based on historical data to predict which schools belong to what cluster and what features are essential in each homogenous group. Therefore, school administrators and policymakers can respond appropriately to target certain shortcomings at the school level.
The analysis illustrated in Figure 1 allowed a simultaneous utilization of unlabeled and labeled data and enabled tracking of the effects of distinctive features on cluster projection. Second, combining unsupervised and supervised clustering tools allowed us to compare features across clusters, uncovering connections between school characteristics and performance and critical differences among schools. The results provide evidence-based decision-making tools for selecting and implementing interventions to improve school outcomes. As illustrated in Figure 1, we use unsupervised machine learning techniques [14]; explicitly, we cluster schools into sub-homogenous groups using PaCMAP [11] and identify critical features of each cluster using multiclass classification [13,15]. Data points within a cluster should be as similar as possible to one another and as dissimilar as possible to data points in other groups. The main advantage of clustering is its adaptability to changes, which helps single out valuable features that distinguish distinct groups. Moreover, grouping schools by commonalities or differences is essential in exploring the factors that explain differences in achievement and performance. Clustering also helps to understand existing challenges and opportunities that influence change.
While the best clustering process maximizes inter-cluster distances, it should also minimize intra-cluster distances [16]. The inter-cluster distance (global structure) is the distance between two data points belonging to different clusters, and the intra-cluster distance (local structure) is the distance between two data points belonging to the same group. A superior clustering algorithm forms clusters such that the distance between different clusters is large and the spread within each cluster is small. Popular methods for clustering social data through multidimensional reduction techniques include fuzzy clustering [17], t-distributed stochastic neighbor embedding (t-SNE) [18,19], uniform manifold approximation and projection (UMAP) [20], and dimensionality reduction using triplet constraint (TriMap) [21]. The primary limitations of these methods are that they preserve either local or global structures, and the hyperparameters of the models are difficult to interpret [11]. PaCMAP is an ideal tool for clustering through dimensionality reduction as it rests on a well-founded mathematical formulation. Moreover, PaCMAP uncovers strong connections among variables by preserving both local and global structures, an essential aspect of the geometric visualization of multidimensional datasets [11,22].
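The intra/inter-cluster distinction can be made concrete with a small sketch. This is not code from the study; the toy points and helper names are illustrative only:

```python
# Toy illustration of local vs. global structure: in a good clustering,
# intra-cluster distances are small relative to inter-cluster distances.
import math

def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mean_intra_distance(cluster):
    """Average pairwise distance within one cluster (local structure)."""
    pairs = [(i, j) for i in range(len(cluster)) for j in range(i + 1, len(cluster))]
    return sum(dist(cluster[i], cluster[j]) for i, j in pairs) / len(pairs)

def mean_inter_distance(c1, c2):
    """Average distance between points of two different clusters (global structure)."""
    return sum(dist(a, b) for a in c1 for b in c2) / (len(c1) * len(c2))

c1 = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0)]
c2 = [(10.0, 10.0), (10.0, 11.0), (11.0, 10.0)]
```

For these two well-separated toy clusters, the mean intra-cluster distance is about 1.1 while the mean inter-cluster distance is roughly 14, which is the pattern a superior algorithm seeks.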
Evaluating dimensionality reduction and cluster analysis involves quantitative and objective cluster validation measures. The process has four main components: determining whether there is a non-random structure in the data; determining the optimal number of clusters; evaluating how well a clustering solution fits the given data when the data is the only information available; and assessing how well a clustering solution agrees with partitions obtained from other data sources. The first step involved evaluating the clustering tendency before applying the dimensionality reduction and clustering algorithms to determine whether the data contain any inherent grouping structure, using the Hopkins statistic [23]. The statistic evaluates the null hypothesis that the data follow a uniform distribution (spatial randomness). In the second step, we used clustering validation measures [24,25] to determine the goodness of a clustering structure without respect to external information.
We organize the remaining part of the paper into four sections. The following section is the literature review, which covers popular clustering techniques, their limitations, and the fundamentals of multiclass classification using machine learning. The data source, descriptions, and rationale of the variables used to cluster the school systems, together with the specific empirical techniques for data analysis, come next, followed by the results and discussion section and a final section on implications for short- and long-term policies.

2. Background Information and Literature Review

2.1. Louisiana Schools Accountability System

According to the Louisiana Department of Education [26,27], the accountability system of the Louisiana school system aims to inform and focus educators through clear expectations for student outcomes and to provide objective information about school quality to parents, community members, and other stakeholders. Annually, public schools and early childhood centers in Louisiana receive a performance report that measures how well they prepare their children for the next phase of schooling. Since 1999, the state has issued public school performance scores (SPS) based on student achievement data. To communicate the quality of school performance to families and the public, Louisiana adopted letter grades (A–F). In 2014, the Department recalibrated its school performance score scale from a 200-point scale to a 150-point scale.
Under the previous 200-point scale, the letter grades were: A (120–200), B (105–119.9), C (90–104.9), D (75–89.9), F (0–74.9), and T (turnaround school). Under the 150-point scale, the SPS groups are as follows: A (100–150), B (85–99.9), C (70–84.9), D (50–69.9), F (0–49.9), and T (turnaround school). The SPS for elementary schools is estimated based on these subindices: assessment (70%), growth (25%), and interest and opportunities (5%). The assessment subindex includes assessments of student learning in English language arts (ELA), math, science, and social studies to measure student proficiency in the knowledge and skills reflected in the standards of each grade and subject. The SPS includes the points assigned to the achievement levels attained by students in each subject assessed and progress made toward English language proficiency.
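The 150-point scale can be expressed as a compact lookup. This is a sketch, not the Department's code: the function name is ours, and the turnaround "T" status is assigned administratively rather than derived from the score.

```python
def letter_grade_150(sps):
    """Letter grade for a school performance score (SPS) on the 150-point
    scale, using the cut-offs listed in the text."""
    if not 0 <= sps <= 150:
        raise ValueError("SPS must be between 0 and 150")
    if sps >= 100:
        return "A"
    if sps >= 85:
        return "B"
    if sps >= 70:
        return "C"
    if sps >= 50:
        return "D"
    return "F"
```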
The calculation of the growth index accounts for changes in schools’ performance based on the previous year’s and the current year’s scores on each assessment. Assessing student growth involves answering two questions: are students who are not yet proficient on track to achieve proficiency, and are they growing at a rate comparable to their peers? If a student reaches the target, the school earns 150 points. Otherwise, schools earn points based on students’ growth percentiles relative to peers (i.e., 80th–99th percentile (150 points), 60th–79th percentile (115 points), 40th–59th percentile (85 points), 20th–39th percentile (25 points), and 1st–19th percentile (0 points)). In addition, the interest and opportunities subindex constitutes a 5-point scale on various metrics reflecting the schools’ efforts to make services available to all children.
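The growth-percentile bands also translate directly into a lookup; a minimal sketch (the function name and signature are ours, for illustration only):

```python
def growth_points(percentile, reached_target=False):
    """Points a school earns for one student's growth, per the bands in
    the text. A student who reaches the proficiency target earns 150."""
    if reached_target:
        return 150
    if not 1 <= percentile <= 99:
        raise ValueError("growth percentile must be between 1 and 99")
    if percentile >= 80:
        return 150
    if percentile >= 60:
        return 115
    if percentile >= 40:
        return 85
    if percentile >= 20:
        return 25
    return 0
```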
Based on the Louisiana Department of Education [28], the SPS for elementary/middle schools with grade 8 is an aggregation of these subindices: assessment (65%), achievement and growth (25%), interest and opportunities (5%), and dropout credit accumulation (5%). For high schools, the SPS is an aggregation of the following subindices: assessment (25%), ACT/WorkKeys (25%), strength of diploma (25%), cohort graduation rate (20%), and interest and opportunities (5%). The assessment subindex for high schools includes the end-of-course (EOC) exams that assess whether students have mastered the standards of core high school subjects. EOC exams include algebra I, geometry, English I (beginning in 2017–2018), English II, biology, and U.S. history. Except for students who participate in the LEAP alternative assessment 1, all high school students must take an ELA and a math EOC exam by their 3rd cohort year, regardless of graduation pathway. The final scores do not include high school students retaking an EOC. Final student scores fall on five levels: unsatisfactory (0 points), approaching basic (50 points), basic (80 points), mastery (100 points), or advanced (150 points). Mastery and above are considered proficient for the next grade level.
The dropout credit accumulation subindex applies to schools that included grade 8 in the prior year and is based on the number of Carnegie credits earned through the end of 9th grade (and transitional 9th, where applicable) and/or dropout status. The dropout credit accumulation encourages a successful transition to high school and access to Carnegie credits in middle school. The ACT/WorkKeys subindex (ACT WorkKeys assessments are the cornerstone of ACT workforce solutions; they measure the foundational workplace skills that can affect job performance) measures student readiness for postsecondary learning, and all grade 11 students take the ACT, a nationally recognized college and career readiness measure. Schools earn points for the highest composite score achieved by a student through the spring testing date of their senior year or by a student who graduates at the end of grade 11. Beginning in 2015–2016, WorkKeys scores were included in the ACT subindex for accountability when the WorkKeys score yielded more index points than the ACT score.
Louisiana Department of Education report [27] indicates that the strength of diploma subindex measures the quality of the diploma earned by each 12th grader. The high school diploma plus category awards 110–150 points to schools for students who graduate on time and meet requirements for one or more of the following: advanced placement (AP), International Baccalaureate (IB), JumpStart credentials, CLEP, TOPS-aligned dual enrollment course completion, and an associate degree. The system awards 100 points for four-year graduates with career diplomas or a regional JumpStart credential, as well as those who earned a diploma assessed on an alternate certification. Moreover, the system awards 50–75 points to schools with five-year graduates who have any certificate, or five-year graduates who earn an AP score of 3 or higher, an IB score of 4 or higher, or a college-level examination program (CLEP) score of 50 or higher. Other awards are for graduates with a high school equivalency test (HiSET)/GED plus a JumpStart credential (40 points) and a HiSET/GED earned no later than October 1 following the last exit record (25 points). The cohort graduation rate subindex measures the percentage of students who enter grade 9 and graduate four years later, adjusted for students who transfer in or out. All 9th-grade students who enter a graduation cohort are included in the cohort graduation rate calculations, regardless of diploma pathway, unless they are legitimate leavers. Legitimate leavers are students removed from the cohort who exited enrollment for one or more reasons: death, transfer out of state, transfer to an approved nonpublic school or a Board of Elementary and Secondary Education-approved home study program, or transfer to early college.
After calculating the SPS, the Louisiana Department of Education [27,28] categorizes schools that need interventions into three groups: urgent intervention needed, urgent intervention required, and comprehensive intervention required. Schools in the first category have subgroup performance equal to D or F in the current year. Schools in the second category have subgroup performance equal to F for two years and/or out-of-school suspension rates more than double the national average for three years; schools in the third category have overall performance of D or F for three years (or two years for a new school) and/or a graduation rate below 67 percent in the most recent year. For accountability purposes, Comprehensive Intervention Required labels appear on the “Overall Performance” page in the Louisiana School Finder, while Urgent Intervention Needed and Required labels appear on the “Discipline and Attendance” and/or “Breakdown by Student Groups” pages. As part of Louisiana’s Every Student Succeeds Act (ESSA) plan, any school identified under one of these labels must submit an improvement plan to the Department and an application for funding to support its implementation (see details on how to calculate the SPS at: https://www.louisianabelieves.com/measuringresults/school-and-center-performance (accessed on 17 October 2022)).

2.2. An Overview of Empirical Clustering Techniques and Application in Education

Louisiana’s SPS is an aggregate indicator of schools’ achievement and growth over time. Analyzing SPS data makes it possible to identify the changes required to improve schools’ performance. Standard education models used to measure achievement and growth range from subtracting last year’s test score from this year’s test score (called a gain score) to complex statistical models that account for differences in student academic and demographic characteristics [7,8]. The five standard growth models for measuring performance or progress are: value-added, value table, trajectory, projection, and student growth percentile. The primary limitations of these models emanate from their inability to account for unobserved characteristics, and there is also a need to specify the mathematical relationship among variables explicitly. However, the education assessment literature (such as [29]) shows no consensus on the best methods and models for evaluating academic achievement and growth at the student and school-system levels.
Clustering is a critical technique in data mining with applications in image processing, diagnosis systems, classification, missing value management and imputation, optimization, bioinformatics, and machine learning [30]. Moreover, dimensionality reduction is a fundamental step for clustering algorithms due to the curse of dimensionality and non-linearity of most observational data. As the number of dimensions increases, data points tend to be similar, and there is no clear structure to follow when grouping these pairs [31]. Since cluster analysis is just a statistical process of grouping related units into sets, the groups in the dataset may be genuinely related or related by chance. Different clustering techniques might give different results if the relationship is just by chance [32].
Conventional clustering algorithms such as the k-means technique [33,34] assign a set of observations to a limited number of clusters such that observations belonging to the same group are more alike than those in other groups. The method assigns each observation to one and only one cluster. However, the assignment is often too strict, leading to unfeasible partitions [35,36]. Fuzzy sets manage this challenge by assigning data points to more than one cluster [17]; each data point thus has a likelihood or probability score of belonging to a given cluster [37,38,39]. Extensions of the fuzzy c-means clustering algorithm (including [40,41,42]) improve the standard method by reducing errors during the segmentation process.
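The soft-assignment idea behind fuzzy c-means can be illustrated with its membership rule, u_ik = 1 / Σ_j (d_ik / d_jk)^(2/(m−1)). The following is a generic one-dimensional sketch of that rule, not the extensions cited above:

```python
# Fuzzy c-means membership: instead of k-means' hard assignment, each point
# receives a degree of membership in every cluster, and memberships sum to 1.
def fuzzy_memberships(point, centers, m=2.0):
    """Membership of a 1-D `point` in each cluster center (fuzzifier m > 1)."""
    d = [abs(point - c) for c in centers]
    if any(di == 0.0 for di in d):  # point coincides with a center
        return [1.0 if di == 0.0 else 0.0 for di in d]
    exp = 2.0 / (m - 1.0)
    return [1.0 / sum((d[k] / d[j]) ** exp for j in range(len(centers)))
            for k in range(len(centers))]

u = fuzzy_memberships(1.0, centers=[0.0, 4.0])
# The point at 1.0 mostly belongs to the center at 0.0 but keeps a nonzero
# membership in the center at 4.0.
```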
Education studies (including [43,44]) apply fuzzy clustering to analyze e-learning behavior by creating clusters with common characteristics, typify schools across cultures [45], or combine learning and knowledge objects based on metadata attributes mapped with various learning styles to create personalized and more authentic learning experiences [46]. Others applied the technique to compare e-learning behaviors [47], predict and identify significant variables that affect undergraduate students’ performance [48,49], allocate new students to homogenous groups of specified maximum capacity and analyze the effects of such allocations on students’ academic performance [50], and create performance profiles in reading, mathematics, and science [51]. Fuzzy clustering techniques perform poorly on data sets containing clusters with unequal sizes or densities, and the method is sensitive to outliers [52].
The t-distributed stochastic neighbor embedding (t-SNE) is a dimension reduction technique that tries to preserve the local structure and make clusters discernible in a two- or three-dimensional visualization [18,53]. The t-SNE algorithm preserves the local structure of the data by using a heavy-tailed Student-t distribution, rather than a Gaussian distribution, to compute the similarity between two points in the low-dimensional space. The heavy-tailed Student-t distribution helps address crowding and optimization problems. The t-SNE algorithm takes a set of points in a high-dimensional space and finds an optimal representation of those points in a lower-dimensional space. The first objective is to preserve in the low-dimensional representation as much of the significant structure or information present in the high-dimensional data as possible. The second objective is to increase the data’s interpretability in the lower-dimensional space by minimizing the information loss due to dimensionality reduction [54].
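The heavy-tailed kernel at the heart of t-SNE's low-dimensional map can be sketched as follows. This is a one-dimensional toy assuming the standard Student-t kernel with one degree of freedom, not tied to any cited implementation:

```python
# Low-dimensional similarities q_ij in t-SNE: a Student-t (df = 1) kernel
# (1 + ||y_i - y_j||^2)^-1, normalized over all ordered pairs. The heavy
# tail lets moderately dissimilar points sit far apart, easing crowding.
def tsne_low_dim_similarities(points):
    n = len(points)
    kernel = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                kernel[i][j] = 1.0 / (1.0 + (points[i] - points[j]) ** 2)
    total = sum(sum(row) for row in kernel)
    return [[kernel[i][j] / total for j in range(n)] for i in range(n)]

q = tsne_low_dim_similarities([0.0, 0.1, 5.0])
# Nearby embedded points (0.0 and 0.1) receive far more probability mass
# than the distant point at 5.0.
```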
Education-related studies use the t-SNE algorithm to predict schools’ academic performance and evaluate the impact of different attributes on performance to identify at-risk students [54], or to visualize clusters with unique features that correlate with success in medical school [55], uncovering success potential after accounting for inherent heterogeneity within the student population. Other studies combine t-SNE with convolutional neural networks to identify critical features influencing academic performance and predict future educational outcomes, visually distinguishing homogenous groups using the fully connected layers of the networks [56] and highlighting prominent features that influence education outcomes and predict future performance [57].
There are two primary limitations when using t-SNE for multidimensionality reduction and clustering. The technique requires calculating joint probabilities among all data points (at high and low dimensions), which imposes a high computational burden [11,58]; therefore, the t-SNE algorithm does not scale well to rapidly increasing sample sizes without a computing cluster. Also, the algorithm does not preserve the global data structure at high dimensions, meaning that only intra-cluster distances are meaningful and inter-cluster similarities are not guaranteed [59,60].
Uniform manifold approximation and projection (UMAP) is a non-linear dimension reduction technique used for visualization like t-SNE, with the capacity to preserve local and global structures for non-noisy data [11]. Specifically, UMAP is highly informative when visualizing multidimensional data [61] and performs better in keeping global structure than t-SNE. The UMAP algorithm efficiently approximates k-nearest-neighbor via the nearest-neighbor-descent algorithm [20,62,63]. The application of the UMAP algorithm in education studies identifies community conditions that best support universal access and improved outcomes in the initial stages of childhood development or captures the neighborhoods that behave similarly at a particular time and explains the social-economic effects that bring communities together [22,64,65].
Dimensionality reduction using triplet constraint (TriMap) uses triplet (sets of three observations) constraints to form a low-dimensional embedding of a set of points [11,63]. The algorithm samples the triplets from the high-dimensional representation of the data points, and a weighting scheme reflects each triplet’s importance. The main idea is to capture higher orders of structure with triplet information (instead of pairwise information used by t-SNE and UMAP) and minimize a robust loss function for satisfying the chosen triplets, thereby providing a better global view of the data [63]. Theoretically, this method can preserve local and global structures; however, the inter-cluster distances are uncertain for large datasets with outliers [11,64].
Likewise, PaCMAP is a dimensionality reduction method that preserves local and global data structures [11,29]. The critical steps in the PaCMAP algorithm are graph construction, initialization of the solution, and iterative optimization using a custom gradient descent algorithm. The algorithm uses edges as graph components and distinguishes three types of pairs: neighbor pairs, mid-near pairs, and further pairs. The first group consists of the nearest neighbors of each observation in the high-dimensional space. The second group consists of mid-near pairs, formed by randomly sampling additional data points for each observation and selecting the second closest. The third group consists of randomly selected farther data points for each observation. Parameters that specify the ratio of these quantities to the number of nearest neighbors determine the numbers of mid-near and further pairs. The PaCMAP algorithm is robust, works well on large datasets, and is significantly faster than t-SNE, UMAP, and TriMap [65]. We could not find publications in education research that use TriMap or PaCMAP as the primary data analysis tools.
Before clustering, it is critical to determine whether the data are clusterable by applying the Hopkins statistic [66], which tests the spatial randomness of the data by measuring the probability that a given data set is from a uniform distribution. The null hypothesis is that there are no meaningful clusters, and the alternative hypothesis is that the data set contains significant clusters. In addition, while multidimensional reduction results help identify optimal numbers of clusters through visualization, the analysis must be augmented by statistical tools such as the elbow method [67], the silhouette coefficient [68], gap statistic methods [69], and other statistical measures (summarized by [70]) to ascertain the results. After determining the optimal number of clusters, clustering validation is also vital in judging group quality [71]. Internal clustering validation aims to establish whether the average distance within clusters is small and the average distance between clusters is as large as possible [25]. Internal clustering measures reflect the connectedness, compactness, and separation of the created clusters [72].
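A minimal pure-Python sketch of the Hopkins statistic on one-dimensional data follows. This is our own toy implementation, not the study's; conventions differ across references, and in this formulation values near 1 indicate clustered data while values near 0.5 indicate spatial randomness:

```python
import random

def hopkins(data, m=6, seed=0):
    """Hopkins statistic: compare nearest-neighbor distances of m synthetic
    uniform points (u) and m sampled real points (w); H = u / (u + w)."""
    rng = random.Random(seed)
    lo, hi = min(data), max(data)
    sample = rng.sample(data, m)                       # real points
    uniform = [rng.uniform(lo, hi) for _ in range(m)]  # spatially random points

    def nn_dist(p, pool):
        return min(abs(p - q) for q in pool if q != p)

    u = sum(nn_dist(p, data) for p in uniform)  # uniform point -> nearest real point
    w = sum(nn_dist(p, data) for p in sample)   # real point -> nearest other real point
    return u / (u + w)

# Tightly grouped data should yield a Hopkins statistic well above 0.5.
clustered = [0.00, 0.01, 0.02, 0.03, 5.00, 5.01, 5.02, 5.03,
             10.00, 10.01, 10.02, 10.03]
h = hopkins(clustered)
```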
Connectivity takes values between zero and infinity and should be minimized; it measures the extent to which data points are placed in the same cluster (cohesion) as their nearest neighbors in the data space, as determined by the k-nearest neighbors [73]. The compactness index assesses cluster homogeneity using the intra-cluster variance; it measures how closely related the data points in a cluster are and can be estimated based on variance or distance, with lower variance indicating better compactness [25]. Separation quantifies the degree of separation between clusters by measuring the distance between cluster centroids [38,68]. Compactness and separation demonstrate opposing trends: compactness improves as the number of clusters increases, while separation decreases. Most measures of internal cluster validation, such as the Dunn index and silhouette width, combine compactness and separation into a single score [25,73]. The Dunn index is the ratio of the minimum average dissimilarity between two clusters to the maximum average within-cluster dissimilarity. Given the formula for estimating the silhouette [68], the silhouette width of a data point can be positive or negative: data points with a silhouette close to one are close to the cluster’s center, while data points with a negative silhouette lie on the boundaries and are closer to neighboring clusters [74,75].
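The silhouette width of a single point, s = (b − a) / max(a, b), where a is the point's mean distance to its own cluster and b its mean distance to the nearest other cluster, can be sketched in a few lines (a one-dimensional toy with our own helper name):

```python
def silhouette(point, own_cluster, other_cluster):
    """Silhouette width of one 1-D point: near 1 means well inside its
    cluster; negative means the point sits closer to the other cluster."""
    a = sum(abs(point - q) for q in own_cluster if q != point) / (len(own_cluster) - 1)
    b = sum(abs(point - q) for q in other_cluster) / len(other_cluster)
    return (b - a) / max(a, b)

own = [0.0, 0.2, 0.4]
other = [5.0, 5.2, 5.4]
s = silhouette(0.2, own, other)  # well inside its own cluster -> close to 1
```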

2.3. Multiclass Classification to Augment Clustering Results

After unsupervised clustering, the second interest is determining which features/variables significantly impact each cluster using the original data. The most impactful features are those that differentiate the groups most strongly. Statistically, it is possible to perform a series of analyses of variance and select the attributes/variables with large t-values or small p-values [76]. It is also possible to distinguish critical features by calculating the average similarity of each data point based on intra- and inter-cluster distances from the centroid of each cluster [77]. However, a substitution effect occurs when two or more explanatory variables share information or predictive power, so analysis of variance and similarity analysis may not robustly determine which variables are critical [78]. Multiclass classification is a problem with more than two classes or clusters, where each data point belongs to one category [79]. The technique includes binary classification, discriminant analysis [80,81], tree algorithms extendable to manage multiclass problems, and the nearest neighbors’ approach [82]. Discriminant analysis [83] is a versatile statistical method often used to assign data points to one group among known groups. Its aim is to discriminate or classify the data based on more than two groups, clusters, or classes available a priori; the process places new data points into a category based on measured characteristics. Standard tools for noisy and high-dimensional data are penalized linear discriminant analysis [84], high-dimensional discriminant analysis [85], and stabilized linear discriminant analysis [86].
Tree-based algorithms are primary tools for supervised learning that yield predictive models with high accuracy, stability, and ease of interpretation. The most popular tree-based algorithms are decision trees [87,88], random forests [89], and gradient-boosting machines [90], as applied by [91,92,93]. Compared with other machine learning models, tree-based algorithms offer one of the quickest ways to identify the most significant relationships between variables; since they are non-parametric, they make no assumptions about space distributions or classifier structure [87]. The nearest neighbors’ classification algorithm assumes that similar objects exist in proximity to each other, and the standard algorithms are k-nearest neighbors [94] and nearest shrunken centroids [95]. The objective is to find a group of k data points in the training dataset closest to the test data point and assign labels based on the predominance of a particular class in this neighborhood. The output is a class membership, with each data point assigned to a specific cluster by a plurality vote of its neighbors [94] or earmarked to the class most common among its nearest neighbors.
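The plurality-vote rule of k-nearest neighbors can be sketched as follows (one-dimensional toy features; the data and names are illustrative, not from the study):

```python
from collections import Counter

def knn_predict(x, train, k=3):
    """Assign `x` the majority label among its k nearest training points.
    `train` is a list of (feature, label) pairs with 1-D features."""
    nearest = sorted(train, key=lambda pair: abs(pair[0] - x))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [(0.0, "low"), (0.5, "low"), (1.0, "low"),
         (9.0, "high"), (9.5, "high"), (10.0, "high")]
```

A query near 0 is outvoted by the "low" neighborhood, and one near 10 by the "high" neighborhood.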
There are different metrics for comparing the performance of multiclass models and for analyzing the behavior of the same model while tuning various parameters. The metrics are based on the confusion matrix, since it encloses all the relevant information about the performance of the algorithm and classification rule [96]. The confusion matrix is a cross table that records the number of co-occurrences between the observed classification (e.g., from unsupervised machine learning) and the predicted classification (e.g., from supervised machine learning). Metrics estimable from the confusion matrix for which more is better (i.e., which should be maximized) include accuracy, kappa, mean specificity, mean recall, and mean detection rate; the standard metric for which lower is better (i.e., which should be minimized) is the log-loss [97]. The standard accuracy metric returns an overall measure of how often the model correctly predicts the class across the entire dataset; it is therefore very intuitive and easy to understand. Balanced accuracy (mean recall), the average of the per-class recalls, is another critical metric in multiclass classification. For details on the various metrics used to evaluate machine learning models, see [97,98,99].
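The confusion-matrix metrics above have direct counterparts in scikit-learn. The following minimal Python sketch uses hypothetical observed and predicted cluster labels (not the study's data) to compute accuracy, kappa, balanced accuracy (mean recall), and log-loss:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             log_loss, recall_score)

# Hypothetical observed clusters (unsupervised step) vs. predicted clusters
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])
y_pred = np.array([0, 0, 1, 2, 2, 2, 2, 1])

# Hypothetical class-probability matrix consistent with y_pred (rows sum to 1)
proba = np.full((len(y_true), 3), 0.1)
proba[np.arange(len(y_pred)), y_pred] = 0.8

acc = accuracy_score(y_true, y_pred)                     # more is better
kappa = cohen_kappa_score(y_true, y_pred)                # agreement beyond chance
bal_acc = recall_score(y_true, y_pred, average="macro")  # balanced accuracy = mean recall
ll = log_loss(y_true, proba, labels=[0, 1, 2])           # lower is better
```

Note that accuracy and kappa need only the hard labels, whereas the log-loss needs the class probabilities, which is why the study's train control requests class probabilities in each resample.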
After determining the best model, the next step is estimating the relative importance of the input variables through k-fold validation [100]. The process identifies the relative importance of explanatory variables by deconstructing the model weights and determining the strength of association between the dependent and explanatory variables. For decision-tree-based models, the connecting weights are tallied for each node and scaled relative to all other inputs. Note that the weights that connect variables in decision-tree-based models are partially analogous to parameter coefficients in a standard regression: they dictate the relative influence of the information processed in the model and suppress input variables that carry little information about the response variable. Since no single multiclass method outperforms all others, the choice of model depends on the desired precision and the nature of the classification problem. A feature importance score therefore ensures the interpretability of complex models, as it quantifies the information a variable contributes when building the model and ranks the relative influence of each variable in predicting a specific cluster [101,102,103].

3. Materials and Methods

3.1. Source of Data

This study used data from the 2015/16, 2016/17, and 2017/18 school years. While most data are available up to 2019/20, the financial data were not available when finalizing this paper. All data are from the Louisiana Department of Education Data Center. The link https://www.louisianabelieves.com/resources/library/student-attributes (accessed on 17 October 2022) provides data on schools' attributes, including the total number of students and the percentage of students by gender and race (i.e., American Indian, Asian, Black, Hispanic, Hawaiian/Pacific Islander, and White). Other information covers English proficiency (e.g., the percentage of fully proficient students), the number of students in different grades, and the percentage enrolled in free and reduced-price lunch programs.
The annual financial report at https://www.louisianabelieves.com/data/310/ (accessed on 17 October 2022) summarizes financial activities for the school year. The variables in the dataset are current expenditures per pupil on instructors, pupil/instructional support, school administration, transportation, and other supports. Other information covers school-level student counts and school-level staff full-time equivalents (FTE) for teachers, administrators, other instructors, and other support staff. There is also information on staff salaries, education levels, and average years of experience. The link https://www.louisianabelieves.com/resources/library/fiscal-data (accessed on 17 October 2022) has other financial data summarized by expenditure group (e.g., wages, transportation). The link https://www.louisianabelieves.com/resources/library/performance-scores (accessed on 17 October 2022) provides information on school-level performance scores. At the beginning of this study, the scores were available from 1998/1999 to the 2017/18 school year. The full dataset with all variables is available for public schools governed by a school district. School districts with high numbers of private and charter schools that are publicly funded but operated by independent groups, such as Orleans Parish, are underrepresented. The New Orleans School District follows the all-charter system with very few schools run by public school systems, and the district is represented by 14 individual schools in the dataset. For data analysis purposes, the dataset is in three groups: elementary/middle (hereafter, elementary), combination (with elementary, middle, and high schools), and high school systems. The available data are pre-COVID-19 pandemic; further analysis is needed when the data collected during the pandemic become available, as there is a lag of three years.

3.2. Variables That Influence School Performance and Empirical Model

For definitions and examples of variables that influence school performance, see [104,105,106] on critical school characteristics and the role of past achievement. Since a school's current performance depends on its past performance, the variables influencing school performance fall into six groups [107]: schools' socioeconomic status, past achievement, school attributes, faculty education, per-pupil expenditure, and variables defining the affluence of the communities in school catchment areas. Studies examining the importance of teacher training, teacher certification, and teachers' professional development programs all conclude that students with certified teachers perform better (see, for example, [108,109,110,111]). Education studies link teachers' effectiveness to positive student behavior, such as student attendance, which improves schools' performance [112,113,114]. Other studies examine a constellation of teacher-related effects, such as classroom effectiveness, collective teaching quality, and school academic organization, that increase student performance and academic growth [115,116,117,118]. Studies (including [119,120,121,122,123]) focus on the influence of class size on student performance, with varying conclusions.
Financial expenditure is another variable purported to influence school performance [120,124,125,126,127]; however, the conclusions from these studies are indeterminate. In addition, meta-analytic reviews of quantitative research documenting the association between neighborhoods and educational outcomes conclude that individual academic results are significantly associated with neighborhood characteristics such as poverty, a poor educational climate, the proportion of ethnic/migrant groups, and social disorganization, as well as features of the built environment that shape parental engagement and participation [128].
The variables that capture community affluence are from the five-year American Community Survey at the unified school district level. (According to the U.S. Bureau of the Census, the American Community Survey (ACS) helps local officials, community leaders, and businesses understand the changes in their communities; it is the premier source of detailed population and housing information about the U.S.) The data are available from 2009 to 2019 and match the schools' data. All analyses were conducted in the R environment [129]. To conduct PaCMAP while preserving the local and global structures, we follow a two-step cluster analysis [63] that allows variability among the created clusters [130]. The first step involved calculating Gower's distance matrix to separate schools into (dis)similar groups using the daisy function in the cluster package [131] (see [132,133] on the advantages of Gower's distance matrix). For PaCMAP, we executed Python's pacmap function [11] through the reticulate package [134] in the R environment, the input being the Gower's distance matrix of each school system. Fine-tuning the pacmap function requires specifying the size of the local neighborhood, that is, the number of neighboring sample points (n_neighbors) used for manifold approximation: larger values produce more global views of the manifold, while smaller values preserve more local structure. The number of neighbors usually ranges from 2 to 100, but we set it to "NULL" to let the algorithm determine the optimal number of neighbors. Principal component analysis initialized the lower-dimensional embedding at the default settings.
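Because the study computes Gower's distance with R's daisy function, the following is only an illustrative Python sketch of the idea behind Gower's distance: range-normalized absolute differences for numeric features combined with simple mismatch scores for categorical features, averaged with equal weights. The toy rows are hypothetical.

```python
import numpy as np

def gower_matrix(num, cat):
    """Minimal Gower distance for mixed data.

    num: (n, p) array of numeric columns; cat: (n, q) array of categorical codes.
    Numeric contribution: range-normalized absolute difference in [0, 1];
    categorical contribution: mismatch indicator (0/1); features equally weighted.
    """
    n = num.shape[0]
    ranges = num.max(axis=0) - num.min(axis=0)
    ranges[ranges == 0] = 1.0  # guard against zero-range (constant) columns
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d_num = np.abs(num[i] - num[j]) / ranges      # each term in [0, 1]
            d_cat = (cat[i] != cat[j]).astype(float)      # 0 if equal, 1 if not
            D[i, j] = D[j, i] = np.concatenate([d_num, d_cat]).mean()
    return D

# Hypothetical mixed-type rows: two numeric columns, one categorical column
num = np.array([[10.0, 1.0], [12.0, 3.0], [11.0, 1.0], [10.0, 1.0]])
cat = np.array([[0], [1], [0], [0]])
D = gower_matrix(num, cat)
# D[0, 3] == 0.0 (identical rows); identical categories shrink the distance
```

The resulting symmetric dissimilarity matrix can then be fed to an embedding or partitioning method, which mirrors how the study passes Gower's distances to pacmap and PAM.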
We also used the PaCMAP results to identify the medoids of the original dataset using the partitioning around medoids (PAM) algorithm, which clusters the data based on a specified number of groups and is less sensitive to outliers [1]. After these two steps, we combined the PaCMAP results with the original dataset, which added three variables to the new dataset (i.e., a cluster variable, the location of the medoids, and the two-dimensional coordinates from the PaCMAP). The third step involved multiclass classification, where the dependent variable was the created cluster indicator variable, and the independent variables included scaled and centered demographic, social, and community variables. The caret package [80] was the primary tool for multiclass classification analysis using the different methods discussed in the multiclass classification section. To be consistent, the settings of the train control function (trainControl) for all models were: method = "repeatedcv", number = 10, repeats = 3, classProbs = TRUE, summaryFunction = multiClassSummary, and returnResamp = "all". Here, number is the number of resampling iterations (folds); repeats is the number of complete sets of folds to compute; and classProbs is a logical flag telling the algorithm to compute class probabilities (along with predicted values) in each resample for classification models. Repeating the cross-validation with precisely the same splitting would yield the same result for every repetition, so each repeat uses a different random split. The summaryFunction computes performance metrics across resamples, in this case the multiclass function, and returnResamp is a character string indicating which resampled summary metrics to save, in this case all of them. After selecting the best model by referencing the metrics discussed above, we identified and visualized the critical features of each cluster using the vip package [135]. The package is a general framework for constructing variable importance plots from various machine learning models.
For each best model, we arbitrarily extracted the top 15 features using variance-based methods [135,136].
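The repeated cross-validation design described above (trainControl with method = "repeatedcv", number = 10, repeats = 3) has a close analog in scikit-learn's RepeatedStratifiedKFold. The sketch below uses synthetic data as a hypothetical stand-in for the school dataset, with the cluster label as the target:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Toy stand-in for the school dataset: three clusters as the class label
X, y = make_classification(n_samples=150, n_features=8, n_informative=5,
                           n_classes=3, random_state=1)

# Analog of trainControl(method = "repeatedcv", number = 10, repeats = 3):
# 10-fold cross-validation, repeated 3 times with different random splits
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
model = RandomForestClassifier(n_estimators=50, random_state=1)
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
# scores holds 10 x 3 = 30 entries, one accuracy value per resample
```

Summarizing the 30 resampled scores (mean and standard deviation) mirrors the per-model performance summaries reported in Table 2.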

4. Results and Discussion

4.1. Results from Unsupervised Learning Analyses

For reasons of space, the summary statistics of all variables for each school system are in Appendix A, and the entire dataset is available upon reasonable request. The summary statistics on the SPS are in Table 1. As stated before, the five broad categories of SPS are 100–150 (exceeds expectations), 85–99.9 (meets expectations), 70–84.9 (needs improvement), 50–69.9 (at risk), and 0–49.9 (fail). The results in Table 1 show that the school performance scores fall within the meets-expectations and needs-improvement categories for all school systems and all three school years. However, there is high variability in school performance, as exhibited by the large standard deviations, ranges, and coefficients of variation. The variability in SPS differed by school system and by year: for example, it was low in 2017/18 for the elementary and combination school systems and high in 2016/17 for the high school system.
Before clustering, the estimated Hopkins statistics measuring clustering tendency were 0.870, 0.842, and 0.823 for the elementary, combination, and high school systems, respectively. Note that when Hopkins' statistic equals 0.5, the dataset reveals no clustering structure; a statistic close to 1.0 implies significant evidence that the data are clusterable; and a value close to 0 indicates regularly spaced data, in which case the test is indecisive because the data are neither clustered nor random [23,137]. Based on the above results, we can reject the null hypothesis and conclude that the Louisiana education dataset has sufficient structure to justify cluster analysis.
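As a reference for how clustering-tendency values of this kind can be obtained, the following is a minimal Python implementation of the Hopkins statistic; the specific sampling scheme (uniform points drawn from the bounding box, a 10% sample size) is an assumption, since implementations vary.

```python
import numpy as np
from scipy.spatial import cKDTree

def hopkins(X, m=None, seed=0):
    """Hopkins statistic: ~0.5 for random data, near 1.0 for clusterable data."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    m = m or max(1, n // 10)
    tree = cKDTree(X)
    # u: nearest-data-point distances for m uniform points in the bounding box
    uniform = rng.uniform(X.min(axis=0), X.max(axis=0), size=(m, X.shape[1]))
    u = tree.query(uniform, k=1)[0]
    # w: nearest-neighbor distances for m sampled real points (k=2 skips the point itself)
    idx = rng.choice(n, m, replace=False)
    w = tree.query(X[idx], k=2)[0][:, 1]
    return u.sum() / (u.sum() + w.sum())

# Two tight, well-separated blobs -> Hopkins statistic close to 1
rng = np.random.default_rng(1)
blob = np.vstack([rng.normal(0, 0.05, (100, 2)), rng.normal(5, 0.05, (100, 2))])
```

On strongly clustered data such as the two blobs above, the uniform reference points sit far from the data while real points have near neighbors, driving the statistic toward 1, consistent with the values of 0.82–0.87 reported for the three school systems.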
As shown in Figure 2, the pairwise controlled manifold approximation and projection (PaCMAP) maps the whole dataset onto a two-dimensional scale, giving each data point a location on a map while avoiding crowding the points in the center of the map, and there are clear boundaries between clusters. After dimensionality reduction, we used visual inspection and other statistical measures to identify the optimal number of clusters in the two-dimensional data; the results are in Figure 2. As discussed in the literature review, the Elbow and Silhouette methods in Figure 2 measure a global clustering characteristic, and the Gap statistic formalizes the elbow/silhouette heuristic to estimate the optimal number of clusters.
The majority rule in Figure 2 measures the appropriateness of clusters using various indices [138]. For the elementary school system, most statistical techniques support five optimal clusters, which provide the best visualization results with few outliers and minimal overlap. For the combination and high school systems, visualization and statistical measures propose four optimal clusters each; notice the two outliers for the high school system. The elementary school system's inflection points for the Elbow, Silhouette, and Gap statistic methods are at five clusters; the indices in the majority rule also identify five optimal clusters, as does the PaCMAP visualization. For the combination school system, whereas the Elbow, Silhouette, majority rule, and PaCMAP methods propose four clusters, the first inflection point for the Gap statistic method is at five. For the high school system, the Elbow and Silhouette methods suggest four clusters, the Gap statistic suggests five, the majority rule proposes four or six clusters, and the PaCMAP identifies six clusters with two outliers.
The results of the internal clustering validation measures (i.e., connectivity index, Dunn index, and Silhouette width) based on the Gower's distance of the original data and PAM are in Figure 3. The connectivity index should be minimized, while the Silhouette width and the Dunn index should be maximized. In Figure 3, the three measures suggest three, six, and four optimal clusters for the elementary school system; the analogous results are four, six, or seven for the combination school system and four, six, or seven for the secondary school system. The divergence among the three measures is expected, given the heterogeneity in the data. As in Figure 2, each index has limitations, especially for a large dataset with outliers, and the results rely heavily on properties intrinsic to the dataset [12,74]. However, the Silhouette width is a widely used index for internal clustering validation and for determining the quality of the clusters and of the entire classification [139]. The index provides enough information about clustering quality for unlabeled data and, therefore, suggests more accurate results regarding the optimal number of clusters in the dataset.
To get the best and most consistent results, we use the Silhouette width results and modify the measure to determine the best-bet number of groups by comparing the weighted proportions of data points at the boundaries of each created cluster using the following formula: W_k = Σ_{i=1}^{K} (Z_i/n) × (NS_i/Z_i), where W_k is the weighted negative Silhouette proportion, K is the number of clusters, Z_i is the size of cluster i (the number of observations in the cluster), n is the sample size (the number of observations in the dataset), and NS_i is the number of data points with a negative Silhouette width in cluster i. The weight can evaluate results produced by similar or different algorithms on equal or different numbers of clusters. By construction, the weighted proportion is zero for a perfect partition and approaches one for random data points with no visible groups.
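A direct implementation of the proposed weighted negative Silhouette proportion, applied to hypothetical two-cluster data, might look like this:

```python
import numpy as np
from sklearn.metrics import silhouette_samples

def weighted_negative_silhouette(X, labels):
    """W_k = sum_i (Z_i / n) * (NS_i / Z_i): cluster-size-weighted proportion of
    points with negative silhouette width (0 = clean partition, near 1 = no structure)."""
    s = silhouette_samples(X, labels)
    n = len(labels)
    w = 0.0
    for c in np.unique(labels):
        in_c = labels == c
        Z_i = in_c.sum()            # cluster size
        NS_i = (s[in_c] < 0).sum()  # negative-silhouette points in this cluster
        w += (Z_i / n) * (NS_i / Z_i)
    return w

# Two well-separated blobs, correctly labeled: every silhouette width is
# positive, so the weighted proportion is exactly zero
X = np.array([[0, 0], [0, 1], [10, 10], [10, 11.0]])
labels = np.array([0, 0, 1, 1])
weighted_negative_silhouette(X, labels)  # 0.0
```

Scrambling the labels of the same points (e.g., labels [0, 1, 0, 1]) makes every point closer to the opposite cluster and drives the measure to 1, which matches the interpretation of the formula's extremes given above.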
The first panel of Figure 4 presents the reference points, the suggested optimal clusters identified in Figure 2 and Figure 3, and the potential optimal clusters using the proposed weighted formula for the elementary school system; comparable results for the combination and high school systems are in Appendix B. In Figure 4, for the elementary school system, the average Silhouette width and the weighted proportion were 0.26 and 0.129 under three clusters, compared to 0.26 and 0.093 for four clusters, 0.26 and 0.115 for five clusters, and 0.27 and 0.041 for six clusters. The elementary school system reference point is five clusters, and the results show that 11.5 percent of the data points were on the boundaries, compared to 12.9, 9.3, and 4.1 percent when partitioned into three, four, and six clusters, respectively. Notice that 13 and 39 percent of the data points in clusters one and two under the four-cluster assumption were on the cluster boundaries; the analogous numbers are 2 (cluster 1), 17 (cluster 2), and 13 (cluster 4) percent under the six-cluster assumption. Six clusters partition the elementary school system data better by positioning few data points (per cluster) on the cluster boundaries. Moreover, the second panel of Figure 4 is the box plot of the average Silhouette of the referenced clusters: for the elementary school system, the six-cluster results show few outliers compared to the remaining clusterings.
The equivalent results for the combination and high school systems are in Appendix B. The weighted proportions of negative Silhouette widths for the combination school system suggest selecting among three, four, and seven clusters, which place 5.8, 6.0, and 6.4 percent of the data points on boundaries, respectively; for six clusters, 12.8 percent of the data points are on the borders. Partitioning the combination school system into four groups produced better results: for example, 14 and 33 percent of clusters 1 and 3 are on boundaries, compared to 7 and 25 data points in clusters 1 and 2 when the data are partitioned into four clusters. The results for the high school system imply retaining seven groups, which position only 5.3 percent of the data points on the boundaries of the created clusters; only cluster one has 11 percent of its data points on the edge of the group. Also, the data points with a negative Silhouette are less than one percent in each of the seven clusters for the high school system.
Figure 5 presents the annotated clusters with school performance scores and ellipses created using the Khachiyan algorithm (Gács and Lovász, 1981); the distance between the centroid and the furthest point in the cluster defines the radius of each ellipse. In Figure 5, the SPS reported for each cluster is the SPS of the cluster medoid. The pairwise controlled manifold approximation preserves both inter-cluster (global structure) and intra-cluster (local structure) distances, i.e., the data geometry: the position of each cluster indicates relatedness between clusters, and the size and spread of each cluster are proportional to the variance of the group and its cluster membership. Figure 5 shows four unique groups of school clusters under the elementary school system, denoted as extremely-at-risk schools (medoid SPS of 50.0), high-at-risk schools (medoid SPS of 54.7), at-risk schools (medoid SPS of 58.4), and exceeds-expectations schools (medoid SPS of 106.5). The meets-expectations cluster, with a medoid score of 79.7, intersects with an exceeds-expectations cluster with a medoid score of 92.3. Intersecting clusters share comparable attributes; the overlaps imply that data points on the boundaries of these clusters are closer to data points in the neighboring cluster than to data points in their own cluster.
Figure 5 also shows one unique cluster under the combination school system; the remaining three are interrelated. The cluster with failing schools had a medoid score of 48.0 and intersected with the at-risk schools' cluster (medoid score of 59.3) and the meets-expectations schools (medoid score of 89.9). The unique exceeds-expectations cluster had a medoid score of 120.8. For the high school system, one of the seven clusters is unique, namely the needs-improvement cluster with a medoid SPS of 79.6. This unique cluster contained two outliers, which were removed from further analysis. Three groups, each with two clusters, are interrelated. The first two interconnected clusters are extremely failing schools (medoid SPS of 33.7) and a lower-at-risk cluster (medoid SPS of 68.9). The upper-at-risk schools' cluster interconnects with a meets-expectations cluster (medoid SPS of 65.0 and 89.7, respectively), while one exceeds-expectations cluster (medoid SPS of 98.9) intersects with another exceeds-expectations cluster with a medoid SPS of 101.3. The results in Figure 5 suggest that school performance in the State of Louisiana is not organized along a single dimension of indicators but is meaningfully organized into heterogeneous clusters with unique and interconnected features.

4.2. Results from Supervised Learning Analyses

Reporting and comparing performance metrics is customary when evaluating machine learning models. Each metric has advantages and disadvantages, and each reflects a different aspect of predictive performance. We used the holdout method to determine how well the fitted models generalize to new data, as explained in Sections 2.2 and 2.3. The experimental part of the research covers the design of the test environment and the formation of each model by splitting the datasets into training (80%) and validation (20%) sets. The algorithms used in this study include decision trees (DST), k-nearest neighbors (KKNN), stabilized linear discriminant analysis (SLDA), nearest shrunken centroids (NSC), penalized discriminant analysis (PDA), high dimensional discriminant analysis (HDDA), random forest (RF), and the gradient boosting machine (GBM). These are popular algorithms for multiclass classification. The summary statistics of each model's performance metrics are in Table 2, and the relative distributions of the metrics are in Appendix C.
Comparing both more-is-better and lower-is-better measures, the performance metrics in Table 2 indicate that the random forest (RF) and gradient boosting machine (GBM) algorithms perform better in predicting the created clusters for all school systems; more-is-better implies a preference for higher scores, and lower-is-better means that lower scores indicate better performance. Based on t-test results and the more-is-better metrics, the RF and GBM models have the highest but similar scores with low standard deviations. The RF and GBM generated the best results on the accuracy, kappa, recall, specificity, and detection rate (more-is-better) performance measures, with relatively small standard deviations compared to the results from the other algorithms, implying that the RF and GBM results are more accurate and stable than those of the other models. Also, the mean log-losses for the RF and GBM are the lowest among all models; they are statistically significantly lower for the GBM under the elementary and high school systems but higher for the combination school system. Therefore, we use the GBM results to identify features for the elementary and high school systems and the RF results for the combination school system. Table 2 also shows that the decision tree and k-nearest neighbors algorithms performed poorly across the three school systems. The emphasis of the more-is-better performance measures is on avoiding "false negative" or Type II errors; in statistics, a Type II error means failing to reject the null hypothesis when it is false. For the RF and GBM, the more-is-better measures exceed 0.95, implying that at least 95% of cluster members are assigned to the correct groups. The relatively higher detection rate suggests that the two algorithms are more likely to identify members that do not belong in either cluster. In Table 2, the log-loss also focuses on model prediction performance.
A model with perfect prediction has a log-loss score of 0; in other words, the model predicts each observation's probability as the actual value. The log-loss therefore indicates how good or bad the predictions are by denoting how far they are from the actual values; thus, lower is better. Based on the results in Table 2, the RF and GBM models outperform all other models for all school systems on both the more-is-better and lower-is-better performance metrics.
For the elementary school system, we used the GBM models to identify the critical features influencing school performance. Comparatively, in Table 2 (see also Appendix B), the mean and standard deviation estimates of the more-is-better performance metrics from the RF and GBM models are not statistically significantly different (p = 0.01) among the three school systems. However, while the mean values for the RF are consistently higher than those for the GBM models under the elementary school system, the GBM model's log-loss (0.0448) is lower than, and statistically different from, the RF model's log-loss (0.2268). Although the differences are not statistically significant, for the combination school system the more-is-better metrics from the GBM are consistently higher and tighter (lower standard deviation) than the RF results. Moreover, the mean log-loss of the GBM model (0.1259) is statistically significantly lower (p = 0.01) than the mean log-loss of the RF model (0.3021). Therefore, the GBM is slightly better than the RF model at predicting and classifying the combination school system dataset.
The performance metrics of the k-fold validation analysis are not statistically significantly different (p = 0.01) from the results on the training datasets. Similarly, Table 2 and Appendix C show that the RF's more-is-better performance metrics are consistently higher than, but not statistically significantly different from, the GBM model results. However, the mean log-loss of the GBM (0.1485) is lower than, and statistically significantly different from, the RF mean log-loss (0.3651). Referring to these results, we also used the GBM model results to identify the critical features of the combination and high school systems. The estimated GBM best metrics using k-fold cross-validation were: accuracy (0.9927), kappa (0.9912), and log-loss (0.012) for the elementary school system; accuracy (0.9876), kappa (0.9801), and log-loss (0.023) for the combination school system; and accuracy (0.9932), kappa (0.9911), and log-loss (0.0165) for the high school system.
Generally, feature importance provides a score indicating how useful or valuable each feature was in constructing the boosted decision trees within the model: the more an attribute contributes to the critical decisions made by the trees, the higher its relative importance. Figure 6 shows the fifteen features (an arbitrarily set number) generated from the GBM (elementary and high school systems) and the RF (combination school system).
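As an illustration of how such importance rankings are produced (the paper uses R's caret and the vip package; this hedged sketch uses scikit-learn's gradient boosting on synthetic data instead):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Toy stand-in: 20 features, only 5 informative; the cluster label is the target
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=2)

gbm = GradientBoostingClassifier(n_estimators=50, random_state=2).fit(X, y)

# Impurity-based importance scores are normalized to sum to 1;
# rank the features and keep the top 15, as in Figure 6
ranking = np.argsort(gbm.feature_importances_)[::-1][:15]
```

Each score reflects how much a feature contributed to the splits across all boosted trees, so the top-ranked indices correspond to the "critical features" plotted for each school system.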
In Figure 6, the critical features of the elementary school system are mainly related to community socioeconomic variables and school characteristics. The community variables (in order of importance) include the percentage of the population with a bachelor's degree (the dominant feature), the median dwelling value, the total population in the community, the percentage of the community in poverty, and the relative number of minorities in the community. These variables are related to community affluence. The education literature consistently documents a causal relationship between school performance and the socioeconomic status of communities. Socioeconomic status (SES) encompasses wealth, educational attainment, financial security, social status, and perceptions of social class. Poverty is not a single factor but a cluster of multiple physical and psychosocial stressors.
Further, SES is a consistent and reliable predictor of many life outcomes. Many studies (such as [140,141,142]) indicate that schools in low-SES communities perform poorly compared to schools in affluent neighborhoods, a gap accentuated in communities with high concentrations of minorities. Often, schools in low-SES communities have lower performance scores because students lag in cognitive development and in language, memory, and socio-emotional processing; consequently, their performance in school suffers. Moreover, the school systems in low-SES communities are often under-resourced, which negatively affects schools' academic progress and outcomes, including through inadequate education and increased dropout rates [128,143,144,145]. The State of Louisiana can reduce these risk factors through early intervention programs targeted at the school systems whose critical features at the community level are elevated.
Solutions to improve the performance of Louisiana's elementary school systems should focus on reducing teachers' workload and boosting their morale through increased pay and other work-related incentives. The critical features related to school-level characteristics include the salary of other non-academic instructors, the truancy rate, teachers' full-time equivalent, the total number of students in the school, salary expenditures and current expenditures per pupil, and the number of teachers with a graduate degree. These variables determine the workload, incentives, and morale of teachers and non-academic staff and the delivery of quality education to individual students. Low teacher and staff morale is frequently associated with inequities arising from being overworked, lacking advancement opportunities, and low salaries, resulting in an unproductive school culture that affects administration, teaching, and learning [146,147]. Therefore, the factors that cause low morale stem from sources controllable by school administrators and policymakers. Reducing teachers' workload and class-preparation time through smaller classes, administrative support, recognition, and opportunities for advancement offers administrators leverage to enhance or change the school culture.
The first four significant critical features of the combination school system are the median dwelling value (the dominant feature), salary expenditure per pupil, support staff with a degree, and the percentage of the population with a degree. Except for the experience of the support staff, the remaining variables also appear as critical features in the elementary school system, though not in the same order. Apart from community affluence and teachers' and staff morale, support staff experience was also an essential feature of the combination school system. A positive school culture thrives by including support staff and all non-teaching teams, who play a crucial role in ensuring students learn in a safe and supportive environment; they can foster positive, trusting relationships with students and improve the school climate by encouraging parent and family involvement in their students' education [148,149]. In Louisiana, schools rely on the professional input and expertise of a range of staff; some work alongside teachers, and some work behind the scenes to ensure an efficient infrastructure for effective teaching and learning. As the chief executive officers, school principals should initiate a consistent, compelling reward system to motivate support staff's job performance and general welfare. Although often overlooked, legislators should ensure that the combination school system has a pool of efficient and motivated support staff to support learning in this diverse and complex school system.
The four dominant features of the high school system are the total number of students (the most predominant), the median dwelling value, the population in the community, and the full-time equivalent per pupil. The second group comprises variables related to workload (i.e., school administrator, support staff, and teacher FTE) and incentives (i.e., the salary of support staff). The remaining variables relate to the experience and education of administrators and teachers (i.e., the experience of administrators, administrators with a graduate degree, and support staff with a degree) and a community-level variable (i.e., the percentage of the population with a degree). Apart from community affluence, the experience and education of administrators and support staff were critical features within the high school system. A successful school is about much more than teaching: while good teaching and learning are crucial, experienced administrators are vital in providing a well-rounded and effective teaching environment. Experienced administrators and support staff allow academic staff and teachers to focus on teaching while they create robust systems of accountability, policies, and procedures to ensure that teaching and learning flow as smoothly as possible. An effective administration department can also extract and analyze critical data to inform schools' strategic decisions around education provision [150,151,152,153].
In addition, employee retention does not have a one-size-fits-all solution. Each school system and individual school must work purposefully to devise plans to retain its most influential administrators and teachers [149,154]. The State of Louisiana can reduce administrator turnover through beneficial job contracts, the tenure system, and higher salaries for administrators and teachers in the high school system. Additionally, creating a positive disciplinary environment lowers the odds of principals moving to another school, especially in schools with high concentrations of students of color. Moreover, allowing administrators to influence teacher professional development and budgeting decreases the likelihood of principal turnover.
The ladder/spider plots in Appendix D illustrate the most prominent features within each identified cluster in each school system relative to the school system’s average values. The plotted variables are range-standardized from 0 to 1 and represent lacking (0) to abundant (1); the spokes therefore radiate outward from a central zero hub. The center of the wheel represents the minimum value (0); the middle and outer rings represent the average (0.5) and the maximum (1). Red represents the cluster’s average values, and green the school system’s average values. The denotation of the clusters is ordinal and consists of three groups: SPS below 60 (failing, extremely-at-risk, highly-at-risk, and at-risk schools), SPS between 60 and 80 (lower-at-risk and meets-expectation schools), and SPS above 80 (exceeds-expectations and highly-exceeds-expectations schools). Six clusters exist for the elementary school system (a seventh contained only outliers), four for the combination school system, and six for the high school system. School-level attributes that distinguished the clusters include the truancy rate, availability of transportation services, teacher FTE per pupil, the average salary of other support staff, the average salary of administrators, the number of support staff with undergraduate degrees, and the number of instructors with graduate degrees. Community-level attributes were the percentage of the population with degrees, median dwelling value, population size, the proportion of the population in poverty, and the population’s median age.
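The range standardization and radar layout described above can be sketched as follows. This is a minimal illustration, not the study’s plotting code: the feature names and all numeric values are hypothetical, and the mock “cluster” is just the first rows of a random matrix.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

features = ["Truancy rate", "Teacher FTE/pupil", "Admin. salary",
            "Median dwelling value", "Pct. with degree"]

def min_max_scale(X):
    """Range-standardize each column to [0, 1]: 0 = lacking, 1 = abundant."""
    X = np.asarray(X, dtype=float)
    return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Hypothetical raw school-by-feature matrix (values are illustrative only).
rng = np.random.default_rng(1)
raw = rng.uniform([10, 0.5, 40_000, 90_000, 70],
                  [60, 1.5, 60_000, 160_000, 95], size=(40, 5))
scaled = min_max_scale(raw)

cluster_avg = scaled[:10].mean(axis=0)  # first 10 schools as a mock cluster
system_avg = scaled.mean(axis=0)        # whole-system average

# Close each polygon by repeating its first point.
angles = np.linspace(0, 2 * np.pi, len(features), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
for vals, color, label in [(cluster_avg, "red", "Cluster average"),
                           (system_avg, "green", "System average")]:
    closed = vals.tolist() + vals.tolist()[:1]
    ax.plot(angles, closed, color=color, label=label)
    ax.fill(angles, closed, color=color, alpha=0.15)
ax.set_ylim(0, 1)  # spokes radiate from the zero hub
ax.set_xticks(angles[:-1])
ax.set_xticklabels(features)
ax.legend(loc="lower right")
fig.savefig("radar_sketch.png")
```

Scaling each variable across schools (column-wise), rather than within a profile, is what lets the same 0-to-1 rings serve every spoke of the wheel.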
The medoid SPS for the elementary school system’s extremely-at-risk cluster was 52.0. The cluster’s truancy rate, transportation services, and salaries for administrators and instructors are below the elementary school system average, as are the numbers of teachers and support staff with graduate degrees. Most schools in this cluster are in low-population communities with a young population and below-average wealth, measured by dwelling value. The poverty rate (based on income) in these communities is relatively low, and the percentage of people with a degree is above average, signaling working-class communities. Parents in these communities drive their children to school (hence the low truancy rate) on the way to work. Although the FTE per pupil is above average, working parents might not have enough time to support their students academically, such as by helping with homework and other assignments. To increase performance, these schools should institute effective mentoring, tutoring, and after-school education programs that provide motivation, individual attention, direct instruction, and access to textbooks and instructional materials, thereby strengthening academic skills and support services [155,156,157,158]. Increasing the number of experienced administrators, staff, and teachers through new hires would also boost the performance of schools in the cluster.
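The medoid SPS values reported here and in the following paragraphs are the scores of each cluster’s medoid, i.e., the member school with the smallest total dissimilarity to all other members. A minimal sketch of that computation, with hypothetical two-attribute schools and Euclidean distance standing in for whatever dissimilarity the clustering used:

```python
import numpy as np

def medoid_index(points):
    """Return the index of the medoid: the point with the smallest
    total Euclidean distance to all other points in the cluster."""
    pts = np.asarray(points, dtype=float)
    # Pairwise distance matrix (n x n).
    diffs = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diffs ** 2).sum(axis=-1))
    return int(dist.sum(axis=1).argmin())

# Hypothetical cluster of schools described by two scaled attributes.
cluster = [[0.2, 0.8], [0.3, 0.7], [0.9, 0.1], [0.25, 0.75]]
i = medoid_index(cluster)
```

Unlike a centroid, the medoid is always an actual school, which is why a cluster’s medoid SPS is a meaningful, observed score.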
Although transportation service in the highly-at-risk cluster (medoid SPS of 54.7) is average, the cluster has an elevated truancy rate. The number of support staff with an undergraduate degree is deficient, and teacher FTE per pupil is below average. Schools in this group are in young, densely populated, and relatively affluent communities with low poverty rates, but the percentage of the population with a degree is below average. Low teacher FTE per pupil might indicate instructor understaffing, which limits the process of identifying students with specific instructional needs. Increasing the number of instructors and collaborating with parents to reduce truancy would improve school performance. Intervention programs are needed that dispel parental misbeliefs undervaluing the importance of regular attendance and underestimating the number of school days their child misses [159,160]. Truancy reduction efforts need a differentiated approach that targets the risk factors most prevalent in a specific group of students, with tailored, concerted efforts to ensure chronically absent students can get back on track.
Except for the above-average number of instructors with an undergraduate degree and a slightly higher teacher FTE per pupil, the at-risk cluster (medoid SPS of 58.4) resembles the highly-at-risk cluster discussed above. Schools in the cluster serve vulnerable populations facing educational and economic barriers [161]. Besides mentoring, tutoring, and after-school education programs, such schools need to be recognized and supported with more physical, human, and financial resources. An increase in teacher FTE per pupil indicates increased student support [162], with direct and positive effects on school performance. Although the values of most attributes are below average, teacher FTE per pupil for the meets-expectation cluster (medoid SPS of 79.7) was above average. For the two clusters that exceed expectations (medoid SPS of 92.3) and highly exceed expectations (medoid SPS of 106.5), all variables except the number of support staff with a graduate degree, median dwelling value, and population size are within the elementary school system averages. The schools in these clusters are in small communities, but the percentage of people with a degree is above average and the population’s median age is below average, implying that these schools are in affluent communities with resources to support the school system.
For the combination school system, schools in the failing-schools cluster, whose medoid had an SPS of 48, are in large communities of medium wealth as measured by median dwelling value. Although the number of instructors with a graduate degree is slightly above average, the other school- and community-level attributes are below the school system averages. Excluding the percentage of the population with a degree, all other characteristics of the at-risk cluster (medoid SPS of 59.3) were below average, implying that various interrelated variables influence schools’ performance. For both clusters, improvement in leadership will work through this interwoven network [163], resulting in positive structural and administrative changes that improve school performance [164]. Some attributes of the meets-expectation cluster (medoid SPS of 89.9) were above the school system average, including the number of instructors with a graduate degree, median dwelling value, population size, and the average salary of school administrators. In the extant education literature, these are critical factors influencing school performance. For the exceeds-expectation cluster, whose medoid had an SPS of 120.8, the percentage of the population with a graduate degree and the teacher FTE per student were above average, and the truancy rate was below the school system average.
After dropping the outliers, the high school system had six clusters. The SPS of the medoid of the upper-at-risk cluster was 58.4. The above-average attributes for the group were the percentage of the population in poverty, the number of support staff with a graduate degree, median dwelling value, and population size. Below-average school- and community-level attributes included the number of support staff without a graduate degree and the percentage of the population with graduate degrees. The schools in the cluster were in densely populated communities with high poverty rates, which are commonly associated with poor school performance. These schools need active instruction that increases student engagement, a critical element of academic achievement in schools serving students from families in poverty and at risk of adverse outcomes [165]. For the lower-at-risk cluster, with a medoid SPS of 66.9, the above-average attributes were the percentage of the population in poverty and teacher FTE per pupil. Schools in this cluster were in relatively less affluent small communities with high poverty rates. The truancy rate, administrator salaries, and the number of support staff with an undergraduate degree were below the cluster average.
Solid administrative leadership is a critical component of schools with high student achievement. Students receive more individualized help and attention from support staff, and teachers receive specialist support and assistance with administrative and planning tasks, granting them more time for their core responsibilities [166]. The medoid of the highly-needs-improvement cluster had an SPS of 71.3. The number of support staff without a graduate degree, the number of support staff with a degree, and the percentage of the population with a degree were above average. The schools in this cluster were in moderately affluent, medium-sized communities with low poverty rates, which differentiates them from the upper-at-risk and lower-at-risk clusters. The medoid of the meets-expectation cluster had an SPS of 90.0, and the schools in the group were in moderately affluent, medium-sized communities with a relatively above-average median age. The number of support staff with graduate degrees and administrator salaries in these schools were above average, boosting overall school performance.
The attributes of the exceeds-expectation cluster (medoid SPS of 102.2) are similar to those of the highly-needs-improvement cluster, differing in two features: the percentage of the population with a degree and the number of support staff without an undergraduate degree. The medoid SPS of the highly-exceeds-expectation cluster was 131.5. These schools were in moderately affluent, low-population communities with an above-average percentage of females in the population, percentage of people with a degree, and number of support staff without an undergraduate degree. The percentage of females in the population is the above-average attribute unique to this cluster. Education research has found that highly educated parents significantly influence their children’s educational and career aspirations through increased parental involvement in student education activities [158]. Education studies (such as [157,167,168]) also report a strong correlation between single parents and reviewing student report cards, as well as attending field trips and school activities, with a positive effect on students’ performance.

5. Summary and Conclusions

Louisiana public schools fall into three main categories: the elementary/middle school system (grades 1–8), the combination school system (grades 1–12), and the high school system (grades 9–12). These schools persistently perform below the national average. The analyses are based on data from the 2015/16, 2016/17, and 2017/18 school years, available from the Louisiana Department of Education Data Center. The data include school performance scores, student characteristics, and school attributes, combined with community-level variables from the 2019 American Community Survey (ACS). The ACS provides annual data on a broad range of topics covering the social, economic, demographic, and housing characteristics of the U.S. population.
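Combining school-level records with community-level ACS variables amounts to a left join on a shared geography key. The sketch below illustrates that step only; the column names (`district`, `median_dwelling_value`, `pct_pop_with_degree`) and all values are hypothetical, not the study’s actual field names.

```python
import pandas as pd

# Hypothetical school-level records.
schools = pd.DataFrame({
    "school_id": [1, 2, 3],
    "district": ["A", "A", "B"],
    "sps": [52.0, 79.7, 92.3],
})

# Hypothetical community-level ACS attributes, one row per geography.
acs = pd.DataFrame({
    "district": ["A", "B"],
    "median_dwelling_value": [113_667, 136_150],
    "pct_pop_with_degree": [88.9, 86.1],
})

# Left join keeps every school, attaching its community's ACS attributes.
merged = schools.merge(acs, on="district", how="left")
```

A left join (rather than inner) ensures no school is silently dropped when its community lacks an ACS record; such gaps surface as missing values that can then be handled explicitly.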
The objectives were to group the schools into homogeneous clusters and identify subgroups based on critical features influencing their performance. This study uses pairwise controlled manifold approximation as a dimensionality reduction technique to visualize the data and create the base clusters. We also used gradient-boosting machine learning to characterize the homogeneous clusters at the school system and subgroup levels. Results indicate that the elementary/middle school system comprises six homogeneous clusters, the combination school system four, and the high school system six. Failing schools were generally in densely populated, less affluent communities with high truancy rates and below-average teacher FTE per pupil, administrator salaries, and numbers of support staff. High-performing schools were in moderately affluent, smaller communities with a high percentage of the population holding a graduate degree, and their administrator salaries and numbers of support staff were relatively high.
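The two-step approach summarized above can be schematized as: cluster the schools, then fit a gradient-boosting classifier on the cluster labels and read off feature importances to characterize each group. In this sketch, synthetic data and scikit-learn’s KMeans stand in for the paper’s actual PaCMAP embedding and medoid-based clustering; only the overall pipeline shape is the point.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic school-by-feature matrix (columns loosely mimic the study's
# attributes; values are not real Louisiana data).
rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.normal(50, 15, n),           # e.g., truancy rate
    rng.normal(1.0, 0.2, n),         # teacher FTE per pupil
    rng.normal(120_000, 30_000, n),  # median dwelling value
])

# Step 1: group schools into homogeneous clusters
# (stand-in for the PaCMAP + medoid clustering used in the study).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: characterize the clusters -- which features best separate them?
gbm = GradientBoostingClassifier(random_state=0).fit(X, labels)
importances = gbm.feature_importances_
```

Reading `importances` (which sum to 1) ranks the attributes by how strongly they distinguish the clusters, which is how the critical features discussed in Section 4 were identified.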
Our results indicate that investing resources to increase the number of support staff and hiring administrators with more experience (implying higher pay) can be economically more effective than simply increasing per-pupil spending, at least in the short term. To improve school performance, policymakers and administrators must first identify the attributes of persistently struggling schools and engage the community in developing evidence-based plans to solve specific problems. Funding for struggling schools must focus on the quality of every school, from the excellence of the instruction to the rigor of the classes and equal access to resources (such as adequate support staff) shown to be fundamental to quality education and school performance. However, the results should be re-evaluated when post-COVID-19 pandemic data become available.

Author Contributions

A.R.K. collected and analyzed the data and wrote the first draft. D.R.A. compiled the literature and revised the final draft. All authors have read and agreed to the published version of the manuscript.

Funding

The U.S. Department of Commerce’s Economic Development Administration funded this study through the University Center for Economic Development at Southern University and A&M College, Baton Rouge, Louisiana, through award # 80-66-04881. However, the views expressed in this paper are those of the authors and do not necessarily represent the views of the Economic Development Administration or Southern University and A&M College.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available online. Cleaned data are available on request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Summary Statistics of the Covariate Variables

Variables: 2015/16 (Average, Std. Dev.), 2016/17 (Average, Std. Dev.), 2017/18 (Average, Std. Dev.)
Elementary schools system
School performance score82.9420.3379.9621.2870.4115.46
Salary expenditure per pupil ($)5322.001374.005421.001522.005518.001556.00
Current expenditure per pupil ($)11,081.001435.0011,259.001475.0011,591.001593.00
Full-time equivalent certified teachers32.1312.8732.3713.132.3413.32
Full-time equivalent other instructors8.865.119.045.338.785.29
Full-time equivalent support staff4.522.864.542.654.572.95
Full time Equivalent administrative staff3.971.533.951.6541.77
Full-time Equivalent transportation services4.463.684.413.744.293.43
Full-time equivalent other staff8.083.767.923.687.833.54
Salary of certified teachers48,530.004079.0049,036.003686.0049,590.003978.00
Salary of other teachers20,373.004112.0020,539.004021.0021,036.004248.00
Salary of administrators49,408.008566.0049,656.009607.0050,632.009588.00
Salary of support staff21,043.005625.0021,214.005493.0021,829.006074.00
Number of support instructors without bachelor’s degree12.135.2212.155.1612.065.12
Percentage of instructors without bachelor’s degree0.861.060.991.221.041.25
Percent of administrators without bachelor’s degree2.931.362.881.42.871.33
Percent of Support Staff without bachelor’s degree33.38.833.228.532.938.72
Percent of certified teachers with bachelor’s degree37.778.737.588.4837.88.54
Percent non-instruction teacher with bachelor’s degree2.563.732.613.332.382.65
Percentage of uncertified teachers without bachelor’s degree2.122.222.031.971.921.8
The percentage of Support Staff with bachelor’s degree42.939.342.788.742.748.96
The percentage of certified teachers with a graduate degree14.84715.166.9115.376.82
Percent of uncertified instructors with a graduate degree4.432.224.362.214.392.16
Percent of administrators with a graduate degree3.331.343.291.273.351.32
Percent of support staff with a graduate degree22.77.6222.927.5923.27.58
Percent of support staff with specialized training0.71.090.71.150.711.2
Years of experience as a certified teacher12.483.2312.563.1812.463.24
Years of experience of other support staff10.044.269.984.329.994.55
Years of experience of uncertified teachers15.795.5615.645.4715.665.72
Years of experience as administrators175.5616.935.6716.645.55
Years of experience support staff12.262.4812.312.4612.212.56
Percent student attendance93.578.1793.038.4593.795.33
Percent of classes with 21–26 students35.1520.6433.2920.9333.0520.64
Percent of classes with 27–33 students9.0711.428.8611.328.811.06
Percent of classes with more than 34 students2.73.442.142.12.433.2
School expulsion rate0.480.920.440.70.470.81
Percentage of students retained3.592.853.332.853.332.85
Percent truancy28.913.9133.6714.9649.3616.11
Total number of students486.7205.1481.8219.1489.2215.6
Percentage of female students47.775.3147.516.3948.393.24
Percent of fully proficient students96.825.5996.246.596.276.64
Percent of students with limited English proficiency3.255.513.525.963.876.57
Percent of minority students53.2729.8555.9528.858.0629.35
The median age in the population36.492.3936.422.3236.442.41
Size of the population170,995.00140,173.00169,249.00138,958.00171,784.00140,807.00
Dwelling median value139,167.0038,976.00138,039.0038,840.00138,188.0037,209.00
Percent of the population in poverty3.613.690.973.750.96
Percent of the population with a degree86.045.386.075.3386.015.25
Combination schools system
School Performance Score87.4523.2588.7522.3578.5119.11
Salary expenditure per pupil ($)5887.002503.006027.003083.006051.002860.00
Current expenditure per pupil ($)12,043.006220.0012,529.008736.0012,014.002748.00
Full-time equivalent certified teachers34.7718.135.0318.8134.3116.48
Full-time equivalent other instructors6.684.666.534.76.524.56
Full-time equivalent support staff43.144.323.854.053.27
Full-time equivalent administrative staff4.532.454.62.44.562.55
Full-time equivalent transportation services5.633.415.443.355.373.33
Full-time equivalent other staff8.874.418.744.218.294.02
Salary of certified teachers49,649.004694.0049,790.004349.0050,341.005140.00
Salary of other teachers21,756.006325.0021,925.006418.0021,297.006499.00
Salary of administrators47,902.0011,992.0049,319.009795.0050,047.0012,726.00
Salary of support staff22,032.009419.0022,707.0010,844.0022,311.009420.00
Number of support instructors without bachelor’s degree9.035.158.745.69.115.24
Percentage of instructors without bachelor’s degree0.971.531.031.441.121.61
Percent of administrators without bachelor’s degree3.111.373.271.433.312.1
Percent of Support Staff without bachelor’s degree34.1710.6133.699.5333.9110.15
Percent of certified teachers with bachelor’s degree36.548.9436.328.6536.948.58
Percent non-instructional teachers with bachelor’s degree1.782.981.793.411.562.45
Percentage of uncertified teachers without bachelor’s degree1.911.682.031.8121.82
Percent of Support Staff with bachelor’s degree40.448.6240.677.7940.638.86
The percentage of certified teachers with a graduate degree17.3511.117.067.9916.948.44
Percent of uncertified instructors with a graduate degree3.492.223.52.053.471.96
Percent of administrators with a graduate degree3.461.353.571.373.731.7
Percent of support staff with a graduate degree24.4611.5224.439.4324.39.86
Percent of support staff with specialized training0.620.980.691.110.831.71
Years of experience as a certified teacher13.613.3213.683.213.923.87
Years of experience of other support staff10.644.6511.064.6610.744.62
Years of experience of uncertified teachers15.947.0416.277.0317.027.49
Years of experience as administrators17.14.8516.345.2117.215.28
Years of experience of support staff13.12.7813.072.4613.442.84
Percent student attendance92.427.8391.788.1293.082.94
Percent of classes with 21–26 students15.6310.6416.5311.2616.7310.87
Percent of classes with 27–33 students7.828.247.47.987.879.35
Percent of classes with more than 34 students2.072.722.064.792.495.41
School expulsion rate0.641.150.671.470.71.66
Percentage of students retained5.278.154.216.783.855.1
Percent truancy26.9614.9733.731753.6916.63
Total number of students499.2285.8525.3351.1510.4326
Percentage of female students46.678.3447.177.147.526.51
Percent of fully proficient students98.0911.0998.786.5699.211.33
Percent of students with limited English proficiency0.910.970.991.071.081.2
Percent of minority students40.3930.5340.9529.5341.1929.64
The median age in the population37.473.3237.493.2537.543.26
Size of the population89,843.00115,074.0084,773.00115,033.0079,577.00110,879.00
Dwelling median value113,667.0035,055.00110,891.0032,344.00108,294.0031,221.00
Percent of the population in poverty4.051.14.121.034.211.06
Percent of the population with a degree88.934.189.364.0389.333.8
High schools system
School performance score89.6520.6890.1723.6480.2921.14
Salary expenditure per pupil ($)5423.001961.005595.002146.005737.002376.00
Current expenditure per pupil ($)11,321.002540.0011,557.002510.0012,018.002724.00
Full-time equivalent certified teachers53.429.5854.2431.2256.0430.33
Full-time equivalent other instructors9.288.099.058.118.565.94
Full-time Equivalent Support Staff6.844.526.914.337.034.65
Full-time equivalent administrative staff73.56.963.537.213.7
Full-time equivalent transportation services6.2576.086.325.916.52
Full-time equivalent other staff12.426.3311.976.3312.156.12
Salary of certified teachers50,847.004171.0051,111.003757.0051,634.004203.00
Salary of other teachers23,277.008017.0024,138.0011,237.0024,273.009258.00
Salary of administrators50,680.0010,211.0052,632.009440.0054,377.0011,818.00
Salary of support staff21,891.008351.0022,893.009232.0022,990.007072.00
Number of support instructors without bachelor’s degree6.623.36.833.396.93.09
Percentage of instructors without bachelor’s degree1.081.251.081.21.091.19
Percent of administrators without bachelor’s degree3.391.653.311.83.251.75
Percent of Support Staff without bachelor’s degree29.249.5729.0510.128.629.95
Percent of certified teachers with bachelor’s degree34.259.0133.99.7235.38.02
Percent noninstructional teacher with a bachelor’s degree3.797.853.437.252.353.1
Percentage of uncertified teachers without bachelor’s degree1.872.031.932.121.722.11
Percent of Support Staff with bachelor’s degree40.278.7639.939.1640.159.16
Percentage of certified teachers with a graduate degree20.167.1920.437.9120.467.59
Percent of uncertified instructors with a graduate degree4.352.114.432.084.662.29
Percent of administrators with a graduate degree3.551.373.511.413.731.47
Percent of support staff with a graduate degree28.727.3929.198.1329.258.3
Percent of support staff with specialized training0.60.830.630.90.610.88
Years of experience as a certified teacher13.083.3512.793.612.623.45
Years of experience of other support staff10.284.0110.614.7210.665.04
Years of experience of uncertified teachers17.065.716.656.0316.456.02
Years of experience as administrators17.565.4417.175.3316.765.29
Years of experience support staff12.852.8212.712.9912.582.93
Percent student attendance91.497.2489.6810.17915.97
Percent of classes with 21–26 students19.467.719.858.6919.477.79
Percent of classes with 27–33 students14.299.613.049.713.449.15
Percent of classes with more than 34 students4.528.5834.092.523.31
School expulsion rate0.941.10.881.071.051.34
Percentage of students retained6.628.086.098.465.938.93
Percent truancy34.7118.340.4421.2957.3220.97
Total number of students868.4516.5862.8532.5878524
Percentage of female students49.486.749.155.5549.354.85
Percent of fully proficient students97.337.7297.844.2497.454.69
Percent of students with limited English proficiency2.043.792.24.22.694.63
Percent of minority students53.5528.7756.6527.5159.1528.53
The median age in the population36.582.3436.392.2236.42.29
Size of the population175,890.00150,065.00186,035.00152,461.00184,862.00150,594.00
Dwelling median value136,150.0040,501.00138,381.0041,289.00137,832.0039,231.00
Percent of the population in poverty3.741.123.821.013.871.02
Percent of the population with a degree86.145.4185.545.7785.635.56

Appendix B. Proportion Data Points on Cluster Boundaries and Their Distributions for the Combination and High Schools Systems

Figure A1. Distribution of data points on cluster boundaries.

Appendix C. Distribution of Performance Metrics of the Machine Learning Models

Figure A2. Performance metrics for the elementary schools’ system.
Figure A3. Performance metrics for the combination schools’ system.
Figure A4. Performance metrics for the high schools’ system.

Appendix D. Important Features at the School Level

Figure A5. Important features of the elementary schools’ system (extremely and high at risk).
Figure A6. Important features of the elementary schools’ system (at-risk and need improvement).
Figure A7. Important features of the elementary schools’ system (exceeds expectation and highly exceeds expectation).
Figure A8. Important features of the combination schools’ system (failing schools and at-risk).
Figure A9. Important features of the combination schools’ system (meet the expectation and exceeds expectation).
Figure A10. Important features of the high schools’ system (extremely and lower at-risk schools).
Figure A11. Important features of the high schools’ system (higher-at-risk and meet-expectation schools).
Figure A12. Important features of the high schools’ system (exceeds- and highly-exceeds-expectation schools).

References

  1. Kaufman, L.; Rousseeuw, P.J. Finding Groups in Data: An Introduction to Cluster Analysis; John Wiley & Sons: Hoboken, NJ, USA, 1990.
  2. Stanard, R.P. High school graduation rates in the United States: Implications for the counseling profession. J. Couns. Dev. 2003, 81, 217–221.
  3. Rumberger, R.W. High school dropouts in the United States. In School Dropout and Completion: International Comparative Studies in Theory and Policy; Springer: Dordrecht, The Netherlands, 2011; pp. 275–294.
  4. Kirst, M.; Myburg, A.A.; De León, J.P.; Kirst, M.E.; Scott, J.; Sederoff, R. Coordinated genetic regulation of growth and lignin revealed by quantitative trait locus analysis of cDNA microarray data in an interspecific backcross of eucalyptus. Plant Physiol. 2004, 135, 2368–2378.
  5. Christle, C.A.; Jolivette, K.; Nelson, C.M. School characteristics related to high school dropout rates. Remedial Spec. Educ. 2007, 28, 325–339.
  6. World Economic Forum. The Future of Jobs Report 2020; World Economic Forum: Geneva, Switzerland, 2020. Available online: https://www.weforum.org/reports/the-future-of-jobs-report-2020 (accessed on 17 October 2022).
  7. Ho, A.D.; Lewis, D.M.; MacGregor Farris, J.L. The dependence of growth-model results on proficiency cut scores. Educ. Meas. Issues Pract. 2009, 28, 15–26.
  8. Yeagley, R. Separating growth from value added. Sch. Adm. 2007, 64, 18.
  9. McCaffrey, D.F.; Lockwood, J.R.; Koretz, D.; Louis, T.A.; Hamilton, L. Models for value-added modeling of teacher effects. J. Educ. Behav. Stat. 2004, 29, 67–101.
  10. Ryser, G.R.; Rambo-Hernandez, K.E. Using growth models to measure school performance: Implications for gifted learners. Gift. Child Today 2014, 37, 17–23.
  11. Wang, Y.; Huang, H.; Rudin, C.; Shaposhnik, Y. Understanding how dimension reduction tools work: An empirical approach to deciphering t-SNE, UMAP, TriMap, and PaCMAP. J. Mach. Learn. Res. 2021, 22, 1–73.
  12. Liu, Y.; Li, Z.; Xiong, H.; Gao, X.; Wu, J. Understanding of internal clustering validation measures. In Proceedings of the 2010 IEEE International Conference on Data Mining, Sydney, Australia, 13–17 December 2010; IEEE: New York, NY, USA, 2010; pp. 911–916.
  13. Aly, M. Survey on multiclass classification methods. Neural Netw. 2005, 19, 1–9.
  14. Gentleman, R.; Carey, V.J. Unsupervised machine learning. In Bioconductor Case Studies; Springer: New York, NY, USA, 2008; pp. 137–157.
  15. Olden, J.D.; Joy, M.K.; Death, R.G. An accurate comparison of methods for quantifying variable importance in artificial neural networks using simulated data. Ecol. Model. 2004, 178, 389–397.
  16. Bae, E.; Bailey, J.; Dong, G. A clustering comparison measure using density profiles and its application to discovering alternate clusterings. Data Min. Knowl. Discov. 2010, 21, 427–471.
  17. Ferraro, M.B.; Giordani, P.; Serafini, A. fclust: An R package for fuzzy clustering. R J. 2019, 11, 198.
  18. Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605.
  19. Van der Maaten, L.; Weinberger, K. Stochastic triplet embedding. In Proceedings of the 2012 IEEE International Workshop on Machine Learning for Signal Processing, Santander, Spain, 23–26 September 2012; IEEE: New York, NY, USA, 2012; pp. 1–6.
  20. McInnes, L.; Healy, J.; Melville, J. UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv 2018, arXiv:1802.03426.
  21. Amid, E.; Warmuth, M.K. A more globally accurate dimensionality reduction method using triplets. arXiv 2018, arXiv:1803.00854.
  22. Becht, E.; McInnes, L.; Healy, J.; Dutertre, C.-A.; Kwok, I.W.H.; Ng, L.G.; Ginhoux, F.; Newell, E.W. Dimensionality reduction for visualizing single-cell data using UMAP. Nat. Biotechnol. 2019, 37, 38–44.
  23. Lawson, R.G.; Jurs, P.C. New index for clustering tendency and its application to chemical problems. J. Chem. Inf. Comput. Sci. 1990, 30, 36–41.
  24. Handl, J.; Knowles, K.; Kell, D. Computational cluster validation in post-genomic data analysis. Bioinformatics 2005, 21, 3201–3212.
  25. Liu, Y.; Li, Z.; Xiong, H.; Gao, X.; Wu, J.; Wu, S. Understanding and enhancement of internal clustering validation measures. IEEE Trans. Cybern. 2013, 43, 982–994.
  26. Louisiana Department of Education. Bulletin 111: The Louisiana School, District, and State Accountability. 2008. Available online: http://www.doa.louisiana.gov/osr/lac/28v83/28v83.doc (accessed on 17 October 2022).
  27. Louisiana Department of Education. Accountability. 2009. Available online: http://www.doe.state.la.us/lde/portals/accountabilitv.html (accessed on 17 October 2022).
  28. Louisiana Department of Education. School Performance Scores and School Profiles. 2021. Available online: https://www.louisianabelieves.com/docs/default-source/accountability/2019-2020-sps-updates.pdf?sfvrsn=7eab6618_2 (accessed on 17 October 2022).
  29. Adelson, J.L.; Dickinson, E.R.; Cunningham, B.C. A multigrade, multiyear statewide examination of reading achievement: Examining variability between districts, schools, and students. Educ. Res. 2016, 45, 258–262.
  30. Nejatian, S.; Parvin, H.; Faraji, E. Using sub-sampling and ensemble clustering techniques to improve the performance of imbalanced classification. Neurocomputing 2018, 276, 55–66.
  31. Agrawal, R.; Gehrke, J.; Gunopulos, D.; Raghavan, P. Automatic subspace clustering of high dimensional data. Data Min. Knowl. Discov. 2005, 11, 5–33.
  32. Molchanov, V.; Linsen, L. Overcoming the curse of dimensionality when clustering multivariate volume data. In Proceedings of the VISIGRAPP (3: IVAPP), Funchal, Madeira, Portugal, 27–29 January 2018; pp. 29–39.
  33. Lloyd, S.P. Last square quantization in PCM. Bell Teleph. Lab. Pap. 1982, 28, 129–137. [Google Scholar]
  34. MacQueen, J. Classification and analysis of multivariate observations. In Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 21 June–18 July 1965 and 27 December 1965–January 7 1966; Volume 1: Statistics, pp. 281–297. [Google Scholar]
  35. Ruspini, E.H. Numerical methods for fuzzy clustering. Inf. Sci. 1970, 2, 319–350. [Google Scholar] [CrossRef]
  36. Duja, S.B.U.; Niu, B.; Ahmed, B.; Umar, M.; Amjad, M.; Ali, U.; Ur, Z.; Hussain, W. A proposed method to solve cold start problem using fuzzy user-based clustering. Int. J. Adv. Comput. Sci. Appl. 2020, 11, 529–536. [Google Scholar] [CrossRef]
  37. Bezdek, J.C. Cluster validity with fuzzy sets. J. Cybern. 1974, 3, 58–73. [Google Scholar] [CrossRef]
  38. Dunn, J.C. Well-separated clusters and optimal fuzzy partitions. J. Cybern. 1974, 4, 95–104. [Google Scholar]
  39. Mojarad, M.; Nejatian, S.; Parvin, H.; Mohammadpoor, M. A fuzzy clustering ensemble based on cluster clustering and iterative Fusion of base clusters. Appl. Intell. 2019, 49, 2567–2581. [Google Scholar] [CrossRef]
  40. Xue, M.; Zhou, L.; Kojima, N.; Dos Muchangos, L.S.; Machimura, T.; Tokai, A. Application of fuzzy c-means clustering to PRTR chemicals uncovering their release and toxicity characteristics. Sci. Total Environ. 2018, 622, 861–868. [Google Scholar] [CrossRef]
  41. Wang, H.; Fei, B. A modified fuzzy C-means classification method using a multiscale diffusion filtering scheme. Med. Image Anal. 2009, 13, 193–202. [Google Scholar] [CrossRef]
  42. Bahght, S.F.; Aljahdali, S.; Zanaty, E.A.; Ghiduk, A.S.; Afifi, A. A new validity index for fuzzy C-means for automatic medical image clustering. Int. J. Comput. Appl. 2012, 38, 1–8. [Google Scholar]
  43. Chao, R.J.; Chen, Y.H. Evaluation of the criteria and effectiveness of distance e-learning with consistent fuzzy preference relations. Expert Syst. Appl. 2009, 36, 10657–10662. [Google Scholar] [CrossRef]
  44. Chang, T.Y.; Chen, Y.T. Cooperative learning in E-learning: A peer assessment of student-centered using consistent fuzzy preference. Expert Syst. Appl. 2009, 36, 8342–8349. [Google Scholar] [CrossRef]
  45. Daniel, E.; Hofmann-Towfigh, N.; Knafo, A. School values across three cultures: A typology and interrelations. SAGE Open 2013, 3, 2158244013482469. [Google Scholar] [CrossRef]
  46. Sabitha, A.S.; Mehrotra, D.; Bansal, A. Delivery of learning knowledge objects using fuzzy clustering. Educ. Inf. Technol. 2016, 21, 1329–1349. [Google Scholar] [CrossRef]
  47. Hogo, M.A. Evaluation of e-learning systems based on fuzzy clustering models and statistical tools. Expert Syst. Appl. 2010, 37, 6891–6903. [Google Scholar] [CrossRef]
  48. Pechenizkiy, M.; Calders, T.; Vasilyeva, E.; De Bra, P. Mining the student assessment data: Lessons drawn from a small scale case study. In Proceedings of the Educational Data Mining 2008, The 1st International Conference on Educational Data Mining, Montreal, QC, Canada, 20–21 June 2008. [Google Scholar]
  49. Kabakchieva, D. Student performance prediction by using data mining classification algorithms. Int. J. Comput. Sci. Manag. Res. 2012, 1, 686–690. [Google Scholar]
  50. Yadav, R.S.; Singh, V.P. Modeling academic performance evaluation using fuzzy c-means clustering techniques. Int. J. Comput. Appl. 2012, 60, 16–23. [Google Scholar]
  51. Unlu, A.; Schurig, M. Computational Typologies of Multidimensional End-of-Primary-School Performance Profiles from an Educational Perspective of Large-Scale TIMSS and PIRLS Surveys. Curr. Issues Comp. Educ. 2015, 18, 6–25. [Google Scholar]
  52. Askari, S. Fuzzy C-Means clustering algorithm for data with unequal cluster sizes and contaminated with noise and outliers: Review and development. Expert Syst. Appl. 2021, 165, 113856. [Google Scholar] [CrossRef]
  53. Hinton, G.E.; Roweis, S. Stochastic neighbor embedding. In Advances in Neural Information Processing Systems 15, Proceedings of the Neural Information Processing Systems (NIPS 2002), Vancouver, BC, Canada, 9–14 December 2002; The MIT Press: Cambridge, MA, USA, 2002; pp. 15857–15864. [Google Scholar]
  54. Van Der Maaten, L. Accelerating t-SNE using tree-based algorithms. J. Mach. Learn. Res. 2014, 15, 3221–3245. [Google Scholar]
  55. Du, X.; Yang, J.; Hung, J.L. An Integrated Framework Based on Latent Variational Autoencoder for Providing Early Warning of At-Risk Students. IEEE Access 2020, 8, 10110–10122. [Google Scholar] [CrossRef]
  56. Baron, T.; Grossman, R.I.; Abramson, S.B.; Pusic, M.V.; Rivera, R.; Triola, M.M.; Yanai, I. Signatures of medical student applicants and academic success. PLoS ONE 2020, 15, e0227108. [Google Scholar] [CrossRef]
  57. Asim, M.; Shamshad, F.; Awais, M.; Ahmed, A. Introducing Data mining for Predicting trends in School Education of Pakistan: Preliminary results and future directions. In Proceedings of the Ninth International Conference on Information and Communication Technologies and Development, Lahore, Pakistan, 16–19 November 2017; pp. 1–5. [Google Scholar]
  58. Czibula, G.; Ciubotariu, G.; Maier, M.I.; Lisei, H. IntelliDaM: A Machine Learning-Based Framework for Enhancing the Performance of Decision-Making Processes. A Case Study for Educational Data Mining. IEEE Access 2022, 10, 80651–80666. [Google Scholar]
  59. Linderman, G.C.; Steinerberger, S. Clustering with t-SNE, provably. SIAM J. Math. Data Sci. 2019, 1, 313–332. [Google Scholar] [CrossRef] [Green Version]
  60. Linderman, G.C.; Rachh, M.; Hoskins, J.G.; Steinerberger, S.; Kluger, Y. Fast interpolation-based t-SNE for improved visualization of single-cell RNA-seq data. Nat. Methods 2019, 16, 243–245. [Google Scholar] [CrossRef]
  61. Bolukbasi, T.; Pearce, A.; Yuan, A.; Coenen, A.; Reif, E.; Viégas, F.; Wattenberg, M. An interpretability illusion for bert. arXiv 2021, arXiv:2104.07143. [Google Scholar] [CrossRef]
  62. Nolet, C.J.; Lafargue, V.; Raff, E.; Nanditale, T.; Oates, T.; Zedlewski, J.; Patterson, J. Bringing UMAP closer to the speed of light with GPU acceleration. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021; Volume 35, pp. 418–426. [Google Scholar]
  63. Amid, E.; Warmuth, M.K. TriMap: Large-scale dimensionality reduction using triplets. arXiv 2019, arXiv:1910.00204. [Google Scholar] [CrossRef]
  64. Kiefer, A.; Rahman, M. An Analytical Survey on Recent Trends in High Dimensional Data Visualization. arXiv 2021, arXiv:2107.01887. [Google Scholar] [CrossRef]
  65. Huang, H.; Wang, Y.; Rudin, C.; Browne, E.P. Towards a comprehensive evaluation of dimension reduction methods for transcriptomic data visualization. Commun. Biol. 2022, 5, 1–11. [Google Scholar] [CrossRef]
  66. Hu, Y.; Hathaway, R.J. An algorithm for clustering tendency assessment. WSEAS Trans. Math. 2008, 7, 441–450. [Google Scholar]
  67. Yuan, C.; Yang, H. Research on the K-value selection method of the K-means clustering algorithm. R J. 2019, 2, 226–235. [Google Scholar] [CrossRef]
  68. Rousseeuw, P.J. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. J. Comput. Appl. Math. 1987, 20, 53–65. [Google Scholar] [CrossRef]
  69. Kassambara, A.; Mundt, F. Package Factoextra. Extract and Visualize the Results of Multivariate Data Analyses: R Package Version 1.0.7; Foundation for Statistical Computing: Vienna, Austria, 2017. [Google Scholar]
  70. Charrad, M.; Ghazzali, N.; Boiteau, V.; Niknafs, A. NbClust: An R package for determining the relevant number of clusters in a data set. J. Stat. Softw. 2014, 61, 1–36. [Google Scholar] [CrossRef]
  71. Maulik, U.; Bandyopadhyay, S. Performance evaluation of some clustering algorithms and validity indices. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 1650–1654. [Google Scholar] [CrossRef]
  72. Tasdemir, K.; Merényi, E. A validity index for prototype-based clustering of data sets with complex cluster structures. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2011, 41, 1039–1053. [Google Scholar] [CrossRef] [Green Version]
  73. Kim, Y.I.; Kim, D.W.; Lee, D.; Lee, K.H. A cluster validation index for GK cluster analysis based on the relative degree of sharing. Inf. Sci. 2004, 168, 225–242. [Google Scholar] [CrossRef]
  74. Van, C.T.; Blockeel, H. Using internal validity measures to compare clustering algorithms. In Proceedings of the Benelearn 2015 Poster Presentations, Online, 19 June 2015; pp. 1–8. [Google Scholar]
  75. Tibshirani, R.; Hastie, T.; Narasimhan, B.; Chu, G. Diagnosis of multiple cancer types by shrunken centroids of gene expression. Proc. Natl. Acad. Sci. USA 2002, 99, 6567–6572. [Google Scholar] [CrossRef]
  76. Ancarani, A.; Di Mauro, C.; Legenvre, H.; Cardella, M.S. Internet of things adoption typology of projects. Int. J. Oper. Prod. Manag. 2020, 40, 849–872. [Google Scholar] [CrossRef]
  77. Wu, S.; Chow, T.W. Clustering of the self-organizing map using a clustering validity index based on inter-cluster and intra-cluster density. Pattern Recognit. 2004, 37, 175–188. [Google Scholar] [CrossRef]
  78. de Prado, M.L. Machine Learning for Econometricians: The Readme Manual. J. Financ. Data Sci. 2022, 4, 10–30. [Google Scholar] [CrossRef]
  79. Kuhn, M. Caret: Classification and Regression Training; Astrophysics Source Code Library; R Foundation of the Statistical Computing: Vienna, Austria, 2015; p. ascl-1505. [Google Scholar]
  80. Hastie, T.; Tibshirani, R. Classification by pairwise coupling. In Advances in Neural Information Processing Systems 10, Proceedings of the NIPS Conference, Denver, CO, USA, 8 July 1997; The MIT Press: Cambridge, MA, USA, 1998; pp. 507–513. [Google Scholar]
  81. Li, T.; Zhu, S.; Ogihara, M. Efficient multi-way text categorization via generalized discriminant analysis. In Proceedings of the Twelfth International Conference on Information and Knowledge Management, New Orleans, LA, USA, 3–8 November 2003; IEEE: New York, NY, USA, 2003; pp. 317–324. [Google Scholar]
  82. Lee, Y.; Lee, C.K. Classification of multiple cancer types by multicategory support vector machines using gene expression data. Bioinformatics 2003, 19, 1132–1139. [Google Scholar] [CrossRef] [PubMed]
  83. Li, T.; Zhu, S.; Ogihara, M. Using discriminant analysis for multi-class classification. In Proceedings of the Third IEEE International Conference on Data Mining, Melbourne, FL, USA, 22 November 2003; IEEE Computer Society: Washington, DC, USA, 2003; p. 589. [Google Scholar]
  84. Izenman, A.J. Linear discriminant analysis. In Modern Multivariate Statistical Techniques; Springer: New York, NY, USA, 2013; pp. 237–280. [Google Scholar]
  85. Bouveyron, C.; Girard, S.; Schmid, C. High-dimensional data clustering. Comput. Stat. Data Anal. 2007, 52, 502–519. [Google Scholar] [CrossRef]
  86. Guo, Y.; Hastie, T.; Tibshirani, R. Regularized linear discriminant analysis and its application in microarrays. Biostatistics 2007, 8, 86–100. [Google Scholar] [CrossRef] [PubMed]
  87. Buhrman, H.; De Wolf, R. Complexity measures and decision tree complexity: A survey. Theor. Comput. Sci. 2002, 288, 21–43. [Google Scholar] [CrossRef]
  88. Ho, T.K. The random subspace method for constructing decision forests. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 832–844. [Google Scholar]
  89. Lin, Y.; Jeon, Y. Random forests and adaptive nearest neighbors. J. Am. Stat. Assoc. 2006, 101, 578–590. [Google Scholar] [CrossRef]
  90. Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Stat. 2001, 29, 1189–1232. [Google Scholar] [CrossRef]
  91. Kotsiantis, S.B. Decision trees: A recent overview. Artif. Intell. Rev. 2013, 39, 261–283. [Google Scholar] [CrossRef]
  92. Ding, C.; Cao, X.J.; Næss, P. Applying gradient boosting decision trees to examine nonlinear effects of the built environment on driving distance in Oslo. Transp. Res. Part A Policy Pract. 2018, 110, 107–117. [Google Scholar] [CrossRef]
  93. Koklu, M.; Ozkan, I.A. Multiclass classification of dry beans using computer vision and machine learning techniques. Comput. Electron. Agric. 2020, 174, 105507. [Google Scholar] [CrossRef]
  94. Zhang, S.; Li, X.; Zong, M.; Zhu, X.; Wang, R. Efficient kNN classification with different numbers of nearest neighbors. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 1774–1785. [Google Scholar] [CrossRef]
  95. Wang, S.; Zhu, J. Improved centroid estimation for the nearest shrunken centroid classifier. Bioinformatics 2007, 23, 972–979. [Google Scholar] [CrossRef]
  96. Salmon, B.P.; Kleynhans, W.; Schwegmann, C.P.; Olivier, J.C. Proper comparison among methods using a confusion matrix. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium, Milan, Italy, 26–31 July 2015; IEEE: New York, NY, USA, 2015; pp. 3057–3060. [Google Scholar]
  97. Brodersen, K.H.; Ong, C.S.; Stephan, K.E.; Buhmann, J.M. The balanced accuracy and its posterior distribution. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; IEEE Computer Society: Washington, DC, USA, 2010; pp. 3121–3124. [Google Scholar]
  98. Sasikala, B.S.; Biju, V.G.; Prashanth, C.M. Kappa and accuracy evaluations of machine learning classifiers. In Proceedings of the 2017 2nd IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT), Bangalore, India, 19–20 May 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 20–23. [Google Scholar]
  99. Grandini, M.; Bagli, E.; Visani, G. Metrics for multiclass classification: An overview. arXiv 2020, arXiv:2008.05756. [Google Scholar]
  100. Fushiki, T. Estimation of prediction error by using K-fold cross-validation. Stat. Comput. 2011, 21, 137–146. [Google Scholar] [CrossRef]
  101. Jordan, M.I.; Kearns, M.J.; Solla, S.A. (Eds.) Selecting weighting factors in logarithmic opinion pools. In Advances in Neural Information Processing Systems 10, Proceedings of the 1997 NIPS Conference, Denver, CO, USA, 1–6 December 1997; MIT Press: Cambridge, MA, USA, 1998; Volume 10, p. 10. [Google Scholar]
  102. Scholbeck, C.A.; Molnar, C.; Heumann, C.; Bischl, B.; Casalicchio, G. Sampling, intervention, prediction, aggregation: A generalized framework for model-agnostic interpretations. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Würzburg, Germany, 16–20 September 2019; Springer: Cham, Switzerland, 2019; pp. 205–216. [Google Scholar]
  103. Grömping, U. Variable importance assessment in regression: Linear regression versus random forest. Am. Stat. 2009, 63, 308–319. [Google Scholar] [CrossRef]
  104. Borman, G.D.; Rachuba, L.T. Academic Success among Poor and Minority Students: An Analysis of Competing Models of School Effects. 2001. Available online: https://eric.ed.gov/?id=ED451281 (accessed on 17 October 2022).
  105. Tajalli, H.; Opheim, C. Strategies for Closing the Gap: Predicting Student Performance in Economically Disadvantaged Schools. Educ. Res. Q. 2005, 28, 44–54. [Google Scholar]
  106. Allen, J.; Sconing, J. Using ACT Assessment Scores to Set Benchmarks for College Readiness; American College Testing (ACT), Inc.: Iowa City, IA, USA, 2005. [Google Scholar]
  107. Marra, G.; Wood, S.N. Practical variable selection for generalized additive models. Comput. Stat. Data Anal. 2011, 55, 2372–2387. [Google Scholar] [CrossRef]
  108. Darling-Hammond, L.; Berry, B.; Thoreson, A. Does teacher certification matter? Evaluating the evidence. Educ. Eval. Policy Anal. 2001, 23, 57–77. [Google Scholar] [CrossRef] [Green Version]
  109. Alexander, C.; Fuller, E. Does teacher certification matter? Teacher certification and middle school mathematics achievement in Texas. In Proceedings of the Annual Meeting of the American Educational Research Association, San Diego, CA, USA, 12–16 April 2004. [Google Scholar]
  110. Valiandes, S.; Neophytou, L. Teachers’ professional development for differentiated instruction in mixed-ability classrooms: Investigating the impact of a development program on teachers’ professional learning and on students’ achievement. Teach. Dev. 2018, 22, 123–138. [Google Scholar] [CrossRef]
  111. Fischer, C.; Fishman, B.; Dede, C.; Eisenkraft, A.; Frumin, K.; Foster, B.; Lawrenz, F.; Levy, A.J.; McCoy, A. Investigating relationships between school context, teacher professional development, teaching practices, and student achievement in response to a nationwide science reform. Teach. Teach. Educ. 2018, 72, 107–121. [Google Scholar] [CrossRef]
  112. Heine, H.; Emesiochl, M. Preparing and Licensing High-Quality Teachers (Issues & Answers Report, REL 2007–031); U.S. Department of Education, Institute of Education Sciences: Washington, DC, USA; National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Pacific: Washington, DC, USA, 2007. [Google Scholar]
  113. Gershenson, S. Linking teacher quality, student attendance, and student achievement. Educ. Financ. Policy 2016, 11, 125–149. [Google Scholar] [CrossRef]
  114. Blazar, D.; Kraft, M.A. Teacher and teaching effects on students’ attitudes and behaviors. Educ. Eval. Policy Anal. 2017, 39, 146–170. [Google Scholar] [CrossRef] [PubMed]
  115. Heck, R.H. Teacher effectiveness and student achievement: Investigating a multilevel cross-classified model. J. Educ. Adm. 2009, 47, 227–249. [Google Scholar] [CrossRef]
  116. Sanders, W.L.; Ashton, J.J.; Wright, S.P. Comparison of the Effects of NBPTS Certified Teachers with Other Teachers on the Rate of Student Academic Progress. Final Report; National Board for Professional Teaching Standards: Arlington, VA, USA, 2005. [Google Scholar]
  117. Harris, D.N.; Sass, T.R. The effects of NBPTS-certified teachers on student achievement. J. Policy Anal. Manag. 2009, 28, 55–80. [Google Scholar] [CrossRef]
  118. Noell, G.H.; Burns, J.M.; Gansle, K.A. Linking student achievement to teacher preparation: Emergent challenges in implementing the value-added assessment. J. Teach. Educ. 2019, 70, 128–138. [Google Scholar] [CrossRef]
  119. Finn, J.D.; Achilles, C.M. Answers and questions about class size: A statewide experiment. Am. Educ. Res. J. 1990, 27, 557–577. [Google Scholar] [CrossRef]
  120. Krueger, A.B. Economic considerations and class size. Econ. J. 2003, 113, F34–F63. [Google Scholar] [CrossRef]
  121. Hoxby, C.M. The effects of class size on student achievement: New evidence from population variation. Q. J. Econ. 2000, 115, 1239–1285. [Google Scholar] [CrossRef] [Green Version]
  122. Kirkham-King, M.; Brusseau, T.A.; Hannon, J.C.; Castelli, D.M.; Hilton, K.; Burns, R.D. Elementary physical education: A focus on fitness activities and smaller class sizes are associated with higher levels of physical activity. Prev. Med. Rep. 2017, 8, 135–139. [Google Scholar] [CrossRef]
  123. Lowenthal, P.R.; Nyland, R.; Jung, E.; Dunlap, J.C.; Kepka, J. Does class size matter? An exploration into faculty perceptions of teaching high-enrollment online courses. Am. J. Distance Educ. 2019, 33, 152–168. [Google Scholar]
  124. Hanushek, E.A. Throwing money at schools. J. Policy Anal. Manag. 1981, 1, 19–41. [Google Scholar] [CrossRef]
  125. Hanushek, E.A. The impact of differential expenditures on school performance. Educ. Res. 1989, 18, 45–61. [Google Scholar] [CrossRef]
  126. Brunner, E.; Hyman, J.; Ju, A. School finance reforms, teachers’ unions, and the allocation of school resources. Rev. Econ. Stat. 2020, 102, 473–489. [Google Scholar] [CrossRef]
  127. Jackson, C.K.; Mackevicius, C. The Distribution of School Spending Impacts; No. w28517; National Bureau of Economic Research: Cambridge, MA, USA, 2021; Available online: https://www.nber.org/papers/w28517 (accessed on 17 October 2022).
  128. Nieuwenhuis, J.; Hooimeijer, P. The association between neighborhoods and educational achievement, a systematic review, and meta-analysis. J. Hous. Built Environ. 2016, 31, 321–347. [Google Scholar] [CrossRef]
  129. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2022. [Google Scholar]
  130. Gower, J.C. A general coefficient of similarity and some of its properties. Biometrics 1971, 27, 857–871. [Google Scholar] [CrossRef]
  131. Maechler, M.; Rousseeuw, P.; Struyf, A.; Hubert, M.; Hornik, K. cluster: Cluster Analysis Basics and Extensions; R Package Version 2.1.4; R Foundation for Statistical Computing: Vienna, Austria, 2022. [Google Scholar]
  132. Struyf, A.; Hubert, M.; Rousseeuw, P. Clustering in an object-oriented environment. J. Stat. Softw. 1997, 1, 1–30. [Google Scholar] [CrossRef]
  133. Gorgulu, O. Classification of dairy cattle in terms of some milk yield characteristics using fuzzy clustering. J. Anim. Vet. Adv. 2010, 9, 1947–1951. [Google Scholar] [CrossRef]
  134. Ushey, K.; Allaire, J.; Tang, Y. reticulate: Interface to ‘Python’; R Package Version 1.26; R Foundation for Statistical Computing: Vienna, Austria, 2022. [Google Scholar]
  135. Greenwell, B.M.; Boehmke, B.C.; Gray, B. Variable Importance Plots-An Introduction to the vip Package. R J. 2020, 12, 343. [Google Scholar] [CrossRef]
  136. Bramhall, S.; Horn, H.; Tieu, M.; Lohia, N. Qlime—A quadratic local interpretable model-agnostic explanation approach. SMU Data Sci. Rev. 2020, 3, 4. [Google Scholar]
  137. Banerjee, A.; Dave, R.N. Validating clusters using the Hopkins statistic. In Proceedings of the 2004 IEEE International Conference on Fuzzy Systems (IEEE Cat. No. 04CH37542), Budapest, Hungary, 26–29 July 2004; pp. 49–153. [Google Scholar]
  138. Onumanyi, A.J.; Molokomme, D.N.; Isaac, S.J.; Abu-Mahfouz, A.M. AutoElbow: An automatic elbow detection method for estimating the number of clusters in a dataset. Appl. Sci. 2022, 12, 7515. [Google Scholar] [CrossRef]
  139. Jain, A.K.; Murty, M.N.; Flynn, P.J. Data clustering: A review. ACM Computer. Surv. 1999, 31, 264–323. [Google Scholar] [CrossRef]
  140. Morgan, P.L.; Farkas, G.; Hillemeier, M.M.; Maczuga, S. Risk factors for learning-related behavior problems at 24 months of age: Population-based estimates. J. Abnorm. Child Psychol. 2009, 37, 401–413. [Google Scholar] [CrossRef] [PubMed]
  141. Reardon, S. The widening achievement gap between the rich and poor: New evidence and possible explanations. In Whither Opportunity? Rising Inequality, Schools, and Children’s Life Chances; Duncan, G., Murnane, R., Eds.; Russell Sage: New York, NY, USA, 2011. [Google Scholar]
  142. Reardon, S.F.; Portilla, X.A. Recent trends in income, racial, and ethnic school readiness gaps at kindergarten entry. Aera Open 2016, 23, 2332858416657343. [Google Scholar] [CrossRef]
  143. Aikens, N.L.; Barbarin, O. Socioeconomic differences in reading trajectories: The contribution of the family, neighborhood, and school contexts. J. Educ. Psychol. 2008, 100, 235. [Google Scholar] [CrossRef]
  144. Bickel, R.; Smith, C.; Eagle, T. Poor, rural neighborhoods and early achievement. J. Poverty 2002, 6, 89–108. [Google Scholar] [CrossRef]
  145. Brody, G.; Ge, X.; Conger, R.; Gibbons, F.; Murry, V.; Gerrard, M.; Simons, R. The influence of neighborhood disadvantage, collective socialization, and parenting on African American children’s affiliation with deviant peers. Child Dev. 2001, 72, 1231–1246. [Google Scholar] [CrossRef]
  146. Huysman, J. Rural teacher satisfaction: An analysis of beliefs and attitudes of rural teachers’ job satisfaction. Rural. Educ. 2008, 29, 31–38. [Google Scholar]
  147. Mackenzie, N. Teacher morale: More complex than we think? Aust. Educ. Res. 2007, 34, 89–104. [Google Scholar] [CrossRef]
  148. Hirn, R.G.; Hollo, A.; Scott, T.M. Exploring instructional differences and school performance in high-poverty elementary schools. Prev. Sch. Fail. Altern. Educ. Child. Youth 2018, 62, 37–48. [Google Scholar] [CrossRef]
  149. Holmes, B.; Parker, D.; Gibson, J. Rethinking teacher retention in hard-to-staff schools. Contemp. Issues Educ. Res. 2019, 12, 27–30. [Google Scholar] [CrossRef]
  150. Day, C.; Gu, Q.; Sammons, P. The impact of leadership on student outcomes: How successful school leaders use transformational and instructional strategies to make a difference. Educ. Adm. Q. 2016, 52, 221–258. [Google Scholar] [CrossRef]
  151. Sebastian, J.; Allensworth, E.; Wiedermann, W.; Hochbein, C.; Cunningham, M. Principal leadership and school performance: An examination of instructional leadership and organizational management. Leadersh. Policy Sch. 2019, 18, 591–613. [Google Scholar] [CrossRef]
  152. Yan, R. The influence of working conditions on principal turnover in K-12 public schools. Educ. Adm. Q. 2020, 56, 89–122. [Google Scholar] [CrossRef]
  153. Britton, E.M. Influence of School Principals on Teachers’ Perceptions of School Culture. Ph.D. Thesis, Walden University, Minneapolis, MN, USA, 2018. [Google Scholar]
  154. Marks, H.M.; Printy, S.M. Principal leadership and school performance: An integration of transformational and instructional leadership. Educ. Adm. Q. 2003, 39, 370–397. [Google Scholar] [CrossRef]
  155. Brazil-Cruz, L.; Martinez, S.S. The importance of networking and support staff for Latina/o first-generation students and their families as they transition to higher education. Assoc. Mex. Am. Educ. J. 2016, 10, 130–158. [Google Scholar]
  156. Castro, M.; Expósito-Casas, E.; López-Martín, E.; Lizasoain, L.; Navarro-Asencio, E.; Gaviria, J.L. Parental involvement on student academic achievement: A meta-analysis. Educ. Res. Rev. 2015, 14, 33–46. [Google Scholar] [CrossRef]
  157. Watt, A. Single Parent Households and the Effect on Student Learning. Master’s Thesis, Eastern Illinois University, Charleston, IL, USA, 2019. Available online: https://thekeep.eiu.edu/theses/4464 (accessed on 17 October 2022).
  158. Boonk, L.; Gijselaers, H.J.; Ritzen, H.; Brand-Gruwel, S. A review of the relationship between parental involvement indicators and academic achievement. Educ. Res. Rev. 2018, 24, 10–30. [Google Scholar] [CrossRef]
  159. Virtanen, T.E.; Räikkönen, E.; Engels, M.C.; Vasalampi, K.; Lerkkanen, M.K. Student engagement, truancy, and cynicism: A longitudinal study from primary school to upper secondary education. Learn. Individ. Differ. 2021, 86, 101972. [Google Scholar] [CrossRef]
  160. Tahira, Q.; Jami, H. Association between social adjustment and perceived parenting styles in punctual, truant, and high achieving school going students: A moderating model. Nat.-Nurtur. J. Psychol. 2021, 1, 33–44. [Google Scholar]
  161. Wu, S.; Villagrana, K.M.; Lawler, S.; Garbe, R. School Lunch Participation and Youth School Failure: A Multi-Racial Perspective. J. Soc. Welfare. 2020, 47, 29–52. [Google Scholar]
  162. Lee, H.; Shores, K.; Williams, E. The Distribution of School Resources in the United States: A Comparative Analysis Across Levels of Governance, Student Subgroups, and Educational Resources; Annenberg Institute at Brown University: Providence, RI, USA, 2021; 30p. [Google Scholar]
  163. Printy, S.; Liu, Y. Distributed leadership globally: The interactive nature of principal and teacher leadership in 32 countries. Educ. Adm. Q. 2021, 57, 290–325. [Google Scholar] [CrossRef]
  164. Lasky-Fink, J.; Robinson, C.D.; Chang, H.N.L.; Rogers, T. Using behavioral insights to improve school administrative communications: The case of truancy notifications. Educ. Res. 2021, 50, 442–450. [Google Scholar] [CrossRef]
  165. Cotton, K. Principals, and Student Achievement: What the Research Says; Association for Supervision and Curriculum Development & Northwest Regional Education Laboratory (ASCD): Alexandria, VA, USA, 2003. [Google Scholar]
  166. Robinson, C.D.; Lee, M.G.; Dearing, E.; Rogers, T. Reducing student absenteeism in the early grades by targeting parental beliefs. Am. Educ. Res. J. 2018, 55, 1163–1192. [Google Scholar] [CrossRef]
  167. Oba-Adenuga, O.A. Effect of Family Structure on Academic Performance of Secondary School Students in Somolu Local Government Area of Lagos State, Nigeria. Benin J. Educ. Stud. 2020, 26, 82–95. [Google Scholar]
  168. Leksansern, A. Parent involvement and students’ academic performances in high schools in Kalay, Myanmar. Kasetsart J. Soc. Sci. 2021, 42, 542–549. [Google Scholar]
Figure 1. Machine learning steps to typify the schools and identify critical features.
Figure 2. Potential optimal numbers of clusters.
Figure 3. Internal clustering validation measures.
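Figures 2 and 3 summarize how candidate numbers of clusters were screened and validated with internal measures. The study's pipeline relied on R tooling (e.g., NbClust and factoextra, per the references); purely as an illustrative sketch on synthetic data — the data, parameter values, and range of k below are assumptions, not the study's — the same silhouette-based screening can be written in Python with scikit-learn:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic stand-in for a school-level feature matrix (assumption).
X, _ = make_blobs(n_samples=300, centers=4, cluster_std=1.0, random_state=0)

# Score each candidate k by the average silhouette width; higher values
# indicate more cohesive, better-separated clusters.
scores = {}
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)
print(best_k, round(scores[best_k], 3))
```

In practice one would compare several such internal measures (elbow, gap statistic, silhouette) rather than trust a single index, which is what Figure 3 reports.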
Figure 4. Proportional data points on cluster boundaries and their distributions for the elementary school system.
Figure 5. Optimal clusters of the school systems annotated with school performance scores.
Figure 6. Relative importance scores of variables at the school system levels.
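Figure 6 ranks the features that drive cluster membership. The study computed importance scores in R (e.g., via the vip package cited above); as an illustrative Python analog on synthetic data — the dataset and all parameter values here are assumptions — permutation importance for a gradient boosting classifier can be sketched as:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for school features and cluster labels (assumption).
X, y = make_classification(n_samples=400, n_features=6, n_informative=3,
                           n_redundant=0, random_state=1)

model = GradientBoostingClassifier(random_state=1).fit(X, y)

# Permutation importance: the drop in accuracy when one feature is shuffled,
# averaged over repeats; larger drops indicate more important features.
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)
ranking = np.argsort(result.importances_mean)[::-1]
print(ranking)
```

Plotting `result.importances_mean` in descending order yields a chart analogous to Figure 6.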
Table 1. Summary statistics of school performance score by period.
| School Type | Period | Mean | Standard Deviation | Minimum | Maximum | Coefficient of Variation |
|---|---|---|---|---|---|---|
| Elementary/Middle School | 2015/16 | 82.6 | 20.4 | 24.4 | 135.0 | 24.7 |
| | 2016/17 | 79.5 | 21.4 | 21.6 | 137.0 | 26.9 |
| | 2017/18 | 70.0 | 15.5 | 26.9 | 124.0 | 22.2 |
| Combination School | 2015/16 | 87.6 | 23.2 | 8.0 | 138.0 | 26.5 |
| | 2016/17 | 87.8 | 22.9 | 1.8 | 141.0 | 26.0 |
| | 2017/18 | 77.5 | 19.2 | 13.1 | 132.0 | 24.8 |
| High School | 2015/16 | 89.6 | 20.7 | 12.8 | 136.0 | 23.1 |
| | 2016/17 | 89.9 | 23.9 | 2.5 | 138.0 | 26.5 |
| | 2017/18 | 80.1 | 21.0 | 13.8 | 132.0 | 26.3 |
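The coefficient of variation in Table 1 is the standard deviation expressed as a percentage of the mean, which makes the spread of performance scores comparable across school types with different averages. A minimal sketch, using the elementary/middle 2015/16 row from the table as a check:

```python
# Coefficient of variation (CV) as reported in Table 1: 100 * SD / mean.
def coefficient_of_variation(mean, sd):
    return 100.0 * sd / mean

# Check against the Elementary/Middle School 2015/16 row
# (mean 82.6, standard deviation 20.4).
cv = coefficient_of_variation(82.6, 20.4)
print(round(cv, 1))  # 24.7, matching Table 1
```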
Table 2. Metrics of machine learning models.

| Model | Accuracy | Kappa | Recall | Specificity | Detection Rate | Log Loss |
|---|---|---|---|---|---|---|
| **Elementary school system** | | | | | | |
| RF | 0.9950 (0.0049) | 0.9936 (0.0063) | 0.9882 (0.0114) | 0.9990 (0.0010) | 0.1658 (0.0008) | 0.2268 (0.0111) |
| GBM | 0.9934 (0.0052) | 0.9916 (0.0067) | 0.9864 (0.0118) | 0.9987 (0.0010) | 0.1656 (0.0009) | 0.0448 (0.0395) |
| p-Value | 0.246 | 0.2459 | 0.538 | 0.26 | 0.282 | 0.0001 *** |
| PDA | 0.9845 (0.0056) | 0.9802 (0.0072) | 0.9736 (0.0119) | 0.9966 (0.0012) | 0.1641 (0.0009) | 0.0916 (0.0481) |
| SLDA | 0.9832 (0.0066) | 0.9786 (0.0084) | 0.9739 (0.0110) | 0.9965 (0.0013) | 0.1639 (0.0011) | 0.0796 (0.0375) |
| HDDA | 0.9820 (0.0065) | 0.9770 (0.0083) | 0.9719 (0.0126) | 0.9963 (0.0014) | 0.1637 (0.0011) | 0.2464 (0.1592) |
| NSC | 0.9753 (0.0059) | 0.9685 (0.0076) | 0.9480 (0.0139) | 0.9948 (0.0013) | 0.1626 (0.0010) | 0.1261 (0.0121) |
| KKNN | 0.8477 (0.0176) | 0.8037 (0.0230) | 0.8063 (0.0267) | 0.9663 (0.0039) | 0.1413 (0.0029) | 0.6249 (0.1548) |
| DS | 0.7683 (0.0516) | 0.6906 (0.0730) | 0.5792 (0.0806) | 0.9483 (0.0123) | 0.1281 (0.0086) | 0.4990 (0.0982) |
| **Combination school system** | | | | | | |
| GBM | 0.9817 (0.0143) | 0.9745 (0.0199) | 0.9717 (0.0237) | 0.9938 (0.0050) | 0.2454 (0.0036) | 0.1259 (0.1436) |
| RF | 0.9787 (0.0152) | 0.9704 (0.0211) | 0.9696 (0.0212) | 0.9932 (0.0049) | 0.2447 (0.0038) | 0.3021 (0.0316) |
| p-Value | 0.431 | 0.4309 | 0.713 | 0.616 | 0.431 | 0.0001 *** |
| HDDA | 0.9635 (0.0236) | 0.9497 (0.0324) | 0.9605 (0.0276) | 0.9889 (0.0074) | 0.2409 (0.0059) | 0.8300 (0.6379) |
| PDA | 0.9507 (0.0250) | 0.9307 (0.0355) | 0.9150 (0.0435) | 0.9822 (0.0092) | 0.2377 (0.0062) | 0.4915 (0.3308) |
| SLDA | 0.9263 (0.0234) | 0.8960 (0.0334) | 0.8733 (0.0415) | 0.9735 (0.0086) | 0.2316 (0.0059) | 0.4824 (0.2917) |
| NSC | 0.9189 (0.0260) | 0.8852 (0.0374) | 0.8594 (0.0460) | 0.9705 (0.0093) | 0.2297 (0.0065) | 0.2554 (0.1087) |
| DS | 0.8614 (0.0131) | 0.8017 (0.0191) | 0.7644 (0.0175) | 0.9494 (0.0053) | 0.2154 (0.0033) | 0.4403 (0.2287) |
| KKNN | 0.7950 (0.0572) | 0.7096 (0.0820) | 0.7676 (0.0622) | 0.9251 (0.0210) | 0.1987 (0.0143) | 0.7143 (0.2980) |
| **High school system** | | | | | | |
| RF | 0.9885 (0.0116) | 0.9860 (0.0142) | 0.9861 (0.0155) | 0.9976 (0.0024) | 0.1647 (0.0019) | 0.3651 (0.0282) |
| GBM | 0.9827 (0.0177) | 0.9790 (0.0215) | 0.9817 (0.0196) | 0.9965 (0.0035) | 0.1638 (0.0030) | 0.1485 (0.2420) |
| p-Value | 0.144 | 0.1458 | 0.34 | 0.176 | 0.144 | 0.0001 *** |
| HDDA | 0.9583 (0.0241) | 0.9495 (0.0292) | 0.9612 (0.0227) | 0.9918 (0.0048) | 0.1597 (0.0040) | 0.3660 (0.2493) |
| PDA | 0.9342 (0.0227) | 0.9196 (0.0278) | 0.9081 (0.0314) | 0.9864 (0.0047) | 0.1557 (0.0038) | 0.5944 (0.3880) |
| NSC | 0.9254 (0.0203) | 0.9091 (0.0248) | 0.8979 (0.0274) | 0.9849 (0.0041) | 0.1542 (0.0034) | 0.3020 (0.0995) |
| SLDA | 0.9093 (0.0267) | 0.8890 (0.0330) | 0.8674 (0.0403) | 0.9813 (0.0055) | 0.1516 (0.0044) | 0.6560 (0.3091) |
| KKNN | 0.8120 (0.0519) | 0.7710 (0.0633) | 0.8161 (0.0510) | 0.9614 (0.0107) | 0.1353 (0.0086) | 0.6454 (0.1971) |
| DS | 0.6521 (0.0421) | 0.5656 (0.0552) | 0.5668 (0.0535) | 0.9271 (0.0092) | 0.1087 (0.0070) | 0.8002 (0.0485) |

Note: *** Significant at the 99% confidence level. Cell entries are means with standard deviations (SDV) in parentheses. Accuracy, kappa, recall, specificity, and detection rate are more-is-better metrics; log loss is low-is-better. DS (decision trees), KKNN (k-nearest neighbors), SLDA (stabilized linear discriminant analysis), NSC (nearest shrunken centroids), PDA (penalized discriminant analysis), HDDA (high-dimensional discriminant analysis), RF (random forest), GBM (gradient boosting machine).
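The classification metrics reported in Table 2 are standard functions of the binary confusion matrix. The definitions below are textbook ones; the implementation is an illustrative sketch, not the authors' code:

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute common classification metrics from confusion-matrix counts."""
    n = tp + fp + tn + fn
    accuracy = (tp + tn) / n
    recall = tp / (tp + fn)            # sensitivity: share of positives found
    specificity = tn / (tn + fp)       # share of negatives correctly rejected
    detection_rate = tp / n            # true positives as a share of all cases
    # Cohen's kappa: agreement in excess of chance agreement
    p_chance = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n**2
    kappa = (accuracy - p_chance) / (1 - p_chance)
    return {"accuracy": accuracy, "kappa": kappa, "recall": recall,
            "specificity": specificity, "detection_rate": detection_rate}
```

For example, `binary_metrics(tp=40, fp=5, tn=50, fn=5)` yields accuracy 0.90, recall ≈ 0.889, specificity ≈ 0.909, and kappa ≈ 0.798; kappa is lower than accuracy because it discounts the agreement expected by chance alone.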
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Kaliba, A.R.; Andrews, D.R. The Typology of Public Schools in the State of Louisiana and Interventions to Improve Performance: A Machine Learning Approach. Educ. Sci. 2023, 13, 160. https://doi.org/10.3390/educsci13020160