Article

AI-Based Variable Importance Analysis of Mechanical and ASR Properties in Activated Waste Glass Mortar

1 Jiangxi Traffic Engineering Assembly Manufacturing Co., Ltd., Nanchang 330000, China
2 China Testing & Certification International Group Shanghai Co., Ltd., Shanghai 201203, China
3 School of Management, Shenyang Jianzhu University, Shenyang 110168, China
4 Liyang Market Comprehensive Inspection and Testing Center, Liyang 213300, China
5 China Construction Fifth Engineering Bureau Co., Ltd., Changsha 410004, China
6 Tongji University Library, Shanghai 200092, China
* Author to whom correspondence should be addressed.
Buildings 2025, 15(11), 1866; https://doi.org/10.3390/buildings15111866
Submission received: 14 April 2025 / Revised: 13 May 2025 / Accepted: 19 May 2025 / Published: 28 May 2025
(This article belongs to the Special Issue Urban Renewal: Protection and Restoration of Existing Buildings)

Abstract

Waste glass powder (WGP) faces challenges in recycling and regeneration; one promising route is to use it as a partial substitute for concrete constituents and to investigate the resulting macro-mechanical properties. This study aims to elucidate the extent to which various variables affect the unconfined compressive strength (UCS) and alkali–silica reaction (ASR) of concrete incorporating waste glass. First, 291 data points for the UCS and 485 data points for the ASR were obtained from laboratory tests. Subsequently, four machine learning models were introduced, namely Gradient Boosting Regressor, Random Forest, Hist Gradient Boosting Regressor, and XGBoost, and their performance was analyzed and compared based on standard evaluation metrics. The findings reveal that Gradient Boosting Regressor accurately models the actual data distribution, generating reliable synthetic data. Partial dependence plots (PDPs) were used to understand the impact of individual features on glass concrete UCS and ASR, and Shapley additive explanation (SHAP) values were used to analyze how the contribution of each feature influences the predictive output. The feature interaction effects analyzed through PDP indicate that UCS is highest when the WGP content is 202.5 kg/m3, and ASR expansion is maximized when the fine aggregate content is 708.75 kg/m3. The SHAP value analysis reveals that the “alkali” feature exerts the most pronounced influence on the UCS model predictions. Conversely, in the case of the ASR model, the “curing duration” feature emerges as the primary driver of its predictions.

1. Introduction

The recycling of a growing volume of discarded materials is beginning to find application in the construction industry [1]. According to the United Nations Environment Program, globally, approximately 20 billion tons of waste are produced annually, with glass products accounting for 7% of this total [2]. In 2022, the global glass recycling volume reached approximately 52.95 million tons, showing an increase of around 3.5% compared to 2021 [3]. The global value of glass recycling in 2022 was approximately $3.537 billion, marking an increase of about 10.6% compared to 2021 [4]. Currently, Europe dominates the global glass recycling market, while the Asia-Pacific region is expected to significantly increase its glass recycling, driven by the increasing emphasis on environmental protection [5]. However, the primary method of handling discarded glass is landfilling [6]. The process of recycling and regenerating glass is confronted with significant challenges due to the heterogeneity of glass types, which results in considerable variations in performance outcomes [7]. The main component of glass is silicon dioxide, which is similar in composition to cement and sand. In recent years, an increasing number of scholars have been using glass as a partial substitute for concrete components, studying the changes in its macroscopic mechanical properties and microstructure, and providing new insights for glass recycling and reuse [8].
Concrete in which glass is used to replace natural aggregates or cement is known as glass concrete. It typically exhibits relatively high compressive strength and durability, making it suitable for structural applications [9]. Researchers are actively engaged in exploring optimal formulations and constituent elements aimed at enhancing the mechanical strength, optical transparency, and overall durability of glass concrete [10]. The UCS of concrete serves as the foundation for evaluating the strength and safety of concrete structures. It plays a crucial role in the design, construction, and maintenance of various engineering projects, including building construction and infrastructure. The replacement rate of sand with glass, known as the glass-to-sand ratio, is crucial for the compressive strength of glass concrete. A 2013 comparison of glass concrete with ordinary concrete showed that the compressive strength of glass concrete decreases as the glass-to-sand ratio increases. To achieve optimal results when using waste glass as a substitute for fine aggregates, the recommended glass-to-sand ratio is around 10% [11]. When the replacement amount of waste glass is below 25%, there is no significant change in concrete strength. However, when the replacement amount exceeds 25%, there is a noticeable decrease in concrete strength.
Apart from the glass-to-sand ratio, the size of glass aggregate particles also affects compressive strength. The concrete specimens with glass aggregate particles ranging from 1.18 mm to 2.36 mm exhibited higher early strength [12]. Nevertheless, the strength suffered a decline after a fortnight as a consequence of the alkali–silica reaction, necessitating the use of alkali–silica reaction inhibitors [13]. Concrete specimens with glass aggregate particles smaller than 1.18 mm had lower early strength but higher later strength, making them suitable as concrete aggregates [14]. Furthermore, He et al. (2023) [15] discovered that temperatures from 200 °C to 400 °C positively influence the strength development of waste glass-containing concrete. Wang et al. (2019) [16] found through experiments that the color of glass does not significantly impact the compressive strength of glass concrete. The compressive strength of concrete is predominantly affected by two pivotal factors: the ratio of glass to sand and the size of glass aggregate particles.
While these experiments have yielded useful results, their generality is limited by the experimental conditions and the restricted scope of the trial data. The complex nonlinear relationships among multiple factors have not been fully considered, potentially introducing uncertainties and errors into the predictive outcomes. Therefore, there is an urgent need for a more accurate and interpretable predictive model to delve deeper into the key factors influencing the unconfined compressive strength of glass concrete; such a model would also provide valuable scientific insights for mix design and engineering applications.
Machine learning algorithms are a category of algorithms used to automatically learn patterns and rules from data. In recent years, artificial intelligence technology has developed rapidly, and machine learning, as a core branch of artificial intelligence, is capable of learning complex relationships between inputs and outputs in empirical data. It can quickly extract high-dimensional data features, handle non-linear data, and exhibit useful fault tolerance. Advanced ML models, such as neural networks (ANN), support vector regression (SVR), and tree-based models, are now frequently used to accurately predict concrete properties like compressive and flexural strength [17,18]. For instance, Shariati et al. (2021) [19] effectively used ANN and extreme learning machine (ELM) to predict the strength of concrete containing additives, while Feng et al. (2020) [20] leveraged an adaptive boosting algorithm for greater predictive accuracy. Despite the excellent performance exhibited by machine learning algorithms in prior investigations, their inherent black-box nature makes it difficult to explain how the prediction results are influenced by the contribution of each variable.
To address this issue, this article employs partial dependence plots (PDPs) and Shapley additive explanations (SHAP) as techniques for analyzing the predictive models. These two methods excel in explaining the relationship between features and the output of predictive models. PDP allows us to intuitively observe the impact trend of features on the unconfined compressive strength and the alkali–silica reaction. On the other hand, SHAP provides the contribution of each feature to the predicted output, enabling us to comprehensively understand the basis of model predictions. The combined use of these two techniques allows for evaluating the relationship between a feature and the expected response of each observation, comparing the importance of predictive variables, and investigating the joint influence of pairs of variables on the prediction results.
This study endeavors to elucidate the comprehensive impact of diverse variables on both the UCS and the ASR of glass concrete. First, four popular ML models were introduced, including Random Forest, Hist Gradient Boosting Regressor, Gradient Boosting Regressor, and XGBoost. Second, PDP will be utilized to understand the effect of individual characteristics on the UCS and ASR of glass concrete. Finally, by calculating SHAP values, the study will identify which features have a positive impact on the prediction of a particular sample and which features negatively affect the prediction results.

2. Experimental Program and Methodologies

2.1. Dataset Collection

The dataset used for machine learning modeling in this study was collected from the laboratory tests (i.e., uniaxial compressive strength and alkali–silica reaction tests). The dataset comprises a range of variables including size and replacement ratio of waste glass powder (WGP), alkali and sodium sulfate ratios, and curing age. A total of 291 and 485 data points were collected for the UCS and ASR datasets, respectively, where the bivariate plot and correlation coefficient heatmap are shown in Figure 1. The correlation coefficients of the variables are low except for curing duration, indicating that no strong linear relationship exists between the input variables and the outputs. Although the diversity of the data in this study is limited (see the bivariate plots), the prediction and analysis of the various variables remain reasonable, provided the multicollinearity pattern is similar. The experimental procedures are described below.
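As a rough illustration, the correlation screening shown in Figure 1 can be reproduced with standard Python tooling. The sketch below is a minimal example in which the file name and column names are placeholders rather than the study's actual identifiers.

```python
# Minimal sketch of the Figure 1-style checks (bivariate plots and a correlation heatmap).
# "ucs_dataset.csv" and the column names are assumed placeholders, not the authors' files.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("ucs_dataset.csv")  # hypothetical file holding the 291 UCS records
cols = ["WGP_size", "WGP", "fine_aggregate", "sodium_sulfate", "alkali",
        "curing_duration", "UCS"]

sns.heatmap(df[cols].corr(), annot=True, cmap="coolwarm")  # pairwise Pearson coefficients
plt.tight_layout()
plt.show()

sns.pairplot(df[cols])  # bivariate scatter plots of every variable pair
plt.show()
```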

2.2. Materials

The clear waste bottles at a recycling facility were the source of the WGP used in this experiment. Before the crushing procedure, the bottles underwent pretreatment, which involved the removal of labels and contaminants. Afterward, the bottles that had been dried in the air were crushed and ground using a ball mill to produce a powder with an average particle size ranging from 75 µm to 300 µm [21,22,23]. Lastly, the treated WGP was placed in a container that was tightly sealed to preserve its quality. Figure 2 shows that the actual density of the WGP lies between 2.3 and 2.5 t/m3, and its appearance varies with particle sizes of 75 µm and 300 µm.
The cement used is ordinary Portland cement P.O 42.5R, and the ground silica sand has a silicon dioxide content exceeding 96%, conforming to the ASTM C778 grading. The density of the cement ranges from 3.0 to 3.2 t/m3, and the silica sand has a density of 2.3 t/m3. The cement has a fineness index of 390 m2/kg, and its standard consistency is 27%. The chemical compositions of the cement and WGP, as determined by X-ray fluorescence (XRF) spectrometry (PANalytical Axios max, Almelo, The Netherlands), are shown in Table 1. The WGP consists of 74.02% silica, highlighting its strong potential for pozzolanic behavior.

2.3. Mix Design and WGP Activation

During this research, various factors were considered, including the size of WGP, the mass ratio of WGP replacement, the quantities of chemical additives, and the duration of the curing process. Table 2 provides a summary of the different levels of these variables. Furthermore, a constant water-to-cement ratio of 0.45 was set, along with maintaining an aggregate-to-cement ratio (WGP + sand) of 2.25. The detailed mix proportions are provided in Appendix A, encompassing 50 mortar groups with variations in WGP particle size, replacement content, and chemical activation conditions. Each mix maintained a fixed cement content of 450 kg/m3 and a constant water content of 211.5 kg/m3, with WGP and ground silica sand jointly serving as fine aggregates to ensure an aggregate-to-cement ratio of 2.25. Three levels of WGP replacement were considered—101.25 kg/m3, 202.5 kg/m3, and 303.75 kg/m3—corresponding to 10%, 20%, and 30% by volume. For each WGP level, mortar mixes were further modified by introducing sodium sulfate and/or alkali activators (a 1:1 blend of sodium hydroxide and calcium hydroxide), with dosages ranging from 0 kg/m3 to 27 kg/m3 in 9 kg/m3 increments. Hydrothermal and combined activations were applied only to samples containing 75 µm WGP, in order to assess the efficiency of the various activation methods. The specific treatment techniques for activation are described as follows:
  • Mechanical activation: A ball mill was used to grind waste glass with an average particle size of 300 µm down to 75 µm. It was anticipated that the resulting finer glass powder would exhibit improved pozzolanic performance.
  • Chemical activation: For this experiment, the chemical activators employed were sodium sulfate (Na2SO4), calcium hydroxide (Ca(OH)2), and sodium hydroxide (NaOH). Sodium sulfate acted as the salty activator and was used to produce the sodium-sulfate-activated mortar, where it was fully dissolved in the mixing water. Sodium hydroxide and calcium hydroxide were combined in equal proportions by weight to serve as the alkaline activator, referred to as the sodium–calcium hydroxide co-activated mortar; both components were partially dissolved in water before incorporation. Sodium hydroxide and sodium sulfate, being water-soluble, could be readily added through the mixing water in appropriate dosages. In contrast, calcium hydroxide, due to its limited solubility, was dry-blended with WGP prior to mixing with cement and sand, resulting in the calcium-hydroxide-only-activated mortar. The mortar production followed the guidelines outlined in ASTM C305 [24].

2.4. Unconfined Compressive Test

The UCS experiment was conducted to investigate how different activations and their mixed designs affect the mechanical performance of mortar specimens at different curing ages [25]. Cubes measuring 50 × 50 × 50 mm were used as samples, and each mix design had three parallel samples. Immediately after being cast, the specimens were promptly covered with plastic sheets and placed in a room with high humidity for a full day. Subsequently, the molds were taken off, and the specimens were stored in a thermotank for durations of 7, 14, and 28 days. It is vital to promptly conduct the strength test as soon as the sample is removed from the storage tank. To determine the ultimate strengths, a servo-hydraulic testing machine (WAW-300D) was utilized with loading rates set at 0.6 MPa/s.

2.5. Alkali–Silica Reaction

The ASTM C1260 [26,27] standard was used to evaluate the extent of ASR expansion [28] by assessing the change in length of mortar bar specimens. For each mixture, three parallel specimens were produced, and the final result was recorded as the average rate of change in the longitudinal direction. The specimens, cast specifically for the experiment, were prisms measuring 25 × 25 × 280 mm with a steel stud gauge at each end and an effective gauge length of 260 mm. According to ASTM C1260 guidelines, the specimens were exposed to a high temperature of 80 °C and a 1 N NaOH solution to expedite the ASR process. Following casting, the molds were immediately moved to a humid chamber where they remained for 24 h to cure. Afterward, the specimens were demolded and soaked in water at 80 °C for an additional day, at which point the initial length of each bar was determined. To monitor ASR expansion, the bars were subsequently immersed in the NaOH solution at 80 °C, and their lengths were measured at intervals of 2, 4, 7, 10, and 14 days. The measurements had to be completed within 15 ± 5 s of removing the samples from the alkaline solution. A length comparator and a digital indicator were employed to determine the length of the bars and calculate the ASR expansion ratio, which was derived from these measurements according to Equation (1) [29].
$e = \dfrac{L_x - L_0}{260} \times 100\%$  (1)
where $L_x$ is the measured length of the bar at the given immersion age (mm) and $L_0$ is its initial length (mm).
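For clarity, the expansion ratio in Equation (1) can be computed directly from the comparator readings; the helper below is a simple sketch with illustrative numbers, not the study's measured data.

```python
def asr_expansion(length_x_mm: float, length_0_mm: float, gauge_mm: float = 260.0) -> float:
    """ASR expansion ratio (%) per Equation (1): e = (L_x - L_0) / 260 * 100."""
    return (length_x_mm - length_0_mm) / gauge_mm * 100.0

# Illustrative reading: a bar that lengthens by 0.26 mm over the 260 mm gauge length
# gives an expansion of about 0.1%.
print(asr_expansion(260.26, 260.00))
```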

2.6. Machine Learning Model Establishment

2.6.1. Gradient Boosting Regressor

Gradient Boosting Regressor (GBR) is a technique for ensemble learning that constructs a predictive model by progressively adding weak learners [30]. The general problem is to learn a functional mapping $y = F(x; \beta)$ from data $\{(x_i, y_i)\}_{i=1}^{n}$, where $x$ is the input vector comprising selected mix parameters, $y$ is the target output, and $\beta$ represents the model parameters of $F$. The learning process aims to minimise a differentiable loss function $L(y_i, F(x_i; \beta))$, typically the squared error for regression tasks, through the following objective:
$\min_{\beta} \sum_{i=1}^{n} L\big(y_i, F(x_i; \beta)\big)$
where $L$ is the differentiable loss function, $n$ is the number of training samples, and $(x_i, y_i)$ are the input–output pairs.
In gradient boosting, the function $F(x)$ is expressed as an additive expansion of weak learners: $F(x) = \sum_{m=0}^{M} \rho_m f(x; \tau_m)$, where $f(x; \tau_m)$ is called the weak or base learner with a weight $\rho_m$ and a parameter set $\tau_m$. Accordingly, $\{\rho_m, \tau_m\}_{m=1}^{M}$ compose the whole parameter set $\beta$. They are learnt in a greedy “stage-wise” process: (1) set an initial estimator $f_0(x)$; (2) for each iteration $m \in \{1, 2, \ldots, M\}$, solve $(\rho_m, \tau_m) = \arg\min_{\rho, \tau} \sum_{i=1}^{n} L\big(y_i, F_{m-1}(x_i) + \rho f(x_i; \tau)\big)$.
GBR approximates the minimisation in two steps. First, it fits a weak learner to the pseudo-residuals, which are defined as the negative gradient of the loss function with respect to the current model prediction:
$g_{im} = -\left[ \dfrac{\partial L\big(y_i, F(x_i)\big)}{\partial F(x_i)} \right]_{F(x) = F_{m-1}(x)}$
where $F_{m-1}$ is the model built up to the previous iteration. The parameters $\tau_m$ of the base learner $f(x; \tau_m)$ are obtained by minimising the squared difference between the pseudo-residuals and the model output:
$\tau_m = \arg\min_{\tau} \sum_{i=1}^{n} \big(g_{im} - f(x_i; \tau)\big)^2$
Second, it learns $\rho$ by
$\rho_m = \arg\min_{\rho} \sum_{i=1}^{n} L\big(y_i, F_{m-1}(x_i) + \rho f(x_i; \tau_m)\big)$
Then, it updates $F_m(x) = F_{m-1}(x) + \rho_m f(x; \tau_m)$. In practice, however, shrinkage is often introduced to control overfitting, and the update becomes $F_m(x) = F_{m-1}(x) + \nu \rho_m f(x; \tau_m)$, where $0 < \nu \le 1$. If the regression tree is the weak learner, then the complexity of f(x) depends on parameters related to the tree, such as the size of the tree (or its depth), and the minimum number of samples required in terminal nodes [31]. In addition to utilizing appropriate shrinkage and tree parameters, one could enhance the performance of GBR by implementing subsampling, i.e., training each base learner on a randomly selected subset of the training data. This approach is referred to as stochastic gradient boosting [30].
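As a concrete, hedged example of the stage-wise procedure described above (shrinkage via a learning rate and stochastic subsampling), the following scikit-learn sketch fits a GBR to the UCS data. The hyperparameter values are illustrative assumptions, not the settings reported later in Table 3, and the sketch reuses the df and cols placeholders from the earlier data-loading example.

```python
# Sketch of GBR with shrinkage (learning_rate = v) and subsampling (stochastic gradient boosting).
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# X: feature matrix (WGP size, WGP, fine aggregate, sodium sulfate, alkali, curing duration)
# y: UCS targets; both assumed to come from the dataset described in Section 2.1.
X, y = df[cols[:-1]], df["UCS"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

gbr = GradientBoostingRegressor(
    loss="squared_error",   # differentiable loss L(y, F(x))
    n_estimators=300,       # number of boosting stages M
    learning_rate=0.05,     # shrinkage factor v in F_m = F_{m-1} + v * rho_m * f
    max_depth=3,            # depth of each regression-tree weak learner
    subsample=0.8,          # < 1.0 -> stochastic gradient boosting
    random_state=42,
)
gbr.fit(X_train, y_train)
print("Test R^2:", gbr.score(X_test, y_test))
```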
GBR differs from parametric models like generalized linear models and neural networks in that it does not make any assumptions about the functional form of F. Instead, GBR uses an additive expansion approach to construct the model [32]. This nonparametric approach affords researchers greater flexibility. GBR, by combining predictions from multiple weak learners, typically produces more reliable results compared to a single learner. It has been observed to outperform bagging-based Random Forests in practical applications, likely owing to its emphasis on functional optimization [33]. Nonetheless, for GBR to be applicable, the cost function must be differentiable in relation to F. GBR is also implemented in the widely used R package “gbm”, which offers support for various regression models.
Huang (1986) [34] proposed a predictive model known as a decision tree. A decision tree is a type of binary tree in which the internal nodes represent feature tests and the leaves represent predicted values. A decision tree is referred to as a regression tree when its target value is continuous, usually a real number [35]. An instance is represented by its features; for example, in the Wine dataset, attributes such as alcohol, ash, and magnesium together describe a single instance [36].

2.6.2. Other Machine Learning Techniques

The ensemble technique known as the Random Forest algorithm can be utilized for both regression and classification tasks. It uses basic decision trees as its base learners and utilizes multiple decision trees to create a model for the given problem [37]. In the Random Forest algorithm, multiple decision trees are trained on different bootstrap samples to reduce variance and improve prediction accuracy. By aggregating the outputs of independently built trees, the model mitigates overfitting and enhances generalization. This ensemble approach leverages diversity among trees to produce more stable and robust predictions than a single decision tree. The technique known as bootstrap aggregating or bagging trains multiple trees concurrently. In this method, a set of training data (X) is used to repeatedly select random samples (with replacement) of training data B times, and decision trees are trained on these samples. These decision trees are built independently of each other and generated in parallel. Once trained, these trees can be utilized to predict unknown samples ( x ) using the following equation:
$\hat{f} = \dfrac{1}{B} \sum_{b=1}^{B} f_b(x')$
The Hist Gradient Boosting Regressor accelerates gradient boosting by binning (discretizing) the input variables, which significantly speeds up the training of the ensemble trees. Each tree added to the ensemble aims to correct the prediction errors of the existing models within the ensemble [38]. The technique is available in several software libraries; for instance, the machine learning library scikit-learn provides a histogram-based implementation of gradient boosting through its Hist Gradient Boosting Regressor estimator [39]. According to the scikit-learn documentation, this implementation is noticeably faster than the default GBR implementation.
XGBoost, identified as extreme gradient boosting, is a renowned and powerful method in machine learning employed for creating supervised regression models. XGBoost demonstrates remarkable efficiency and computational effectiveness. It surpasses decision trees in terms of accuracy but lacks their interpretability. Base learners are a necessary component in XGBoost. The process of training and incorporating base learners is used to construct an ensemble learning model for iteratively making predictions. The XGBoost algorithm’s objective function contains both a regularization term and a loss function. This objective function allows for the evaluation of the difference between the predicted values and the target values, thus indicating the level of disparity between the model’s predictions and the actual values [40].
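The three alternative learners can be set up in the same way; the sketch below reuses the train/test split from the GBR example and shows one plausible configuration, with hyperparameters chosen for illustration only.

```python
# Sketch comparing the other three ensemble regressors on the same split.
from sklearn.ensemble import RandomForestRegressor, HistGradientBoostingRegressor
from xgboost import XGBRegressor

models = {
    "Random Forest": RandomForestRegressor(n_estimators=400, random_state=42),               # bagged trees
    "Hist Gradient Boosting": HistGradientBoostingRegressor(max_iter=300, random_state=42),  # binned boosting
    "XGBoost": XGBRegressor(n_estimators=300, learning_rate=0.05, max_depth=4, random_state=42),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test R^2 = {model.score(X_test, y_test):.3f}")
```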

2.6.3. Regression Evaluation Metrics

The statistical measures that are commonly used to evaluate the accuracy of predictive models are root mean squared error (RMSE), mean squared error (MSE), mean absolute error (MAE), and coefficient of determination (R2). They are common metrics that assess the effectiveness of regression models, measuring the degree of difference between model predictions and actual observed values [41,42].
$\mathrm{RMSE} = \sqrt{\dfrac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}$
$\mathrm{MSE} = \dfrac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$
$\mathrm{MAE} = \dfrac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|$
$R^2 = 1 - \dfrac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}$
where yi represents the actual observed value, y ^ i represents the model predicted value, n stands for the sample size, and y ¯ is the average value of actual observed measurements.
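These four metrics can be computed directly from the definitions above (equivalent helpers are also available in sklearn.metrics); the function below is a straightforward NumPy sketch.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """RMSE, MSE, MAE and R^2 computed exactly as defined above."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    mae = np.mean(np.abs(y_true - y_pred))
    r2 = 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return {"RMSE": np.sqrt(mse), "MSE": mse, "MAE": mae, "R2": r2}

# Example: metrics for the fitted GBR on the held-out test set from the earlier sketch.
# print(regression_metrics(y_test, gbr.predict(X_test)))
```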

2.7. Model Interpretability and Visualization

2.7.1. Partial Dependence Plots (PDPs)

The technique of partial dependence plots is utilized to interpret models that are not easily explained [43]. The partial dependence function is defined as
$f_{x_s}(x_s) = E_{x_c}\big[F(x_s, x_c)\big] = \int F(x_s, x_c)\, dP(x_c)$
where $x_s$ denotes the feature (or feature set) whose impact on the prediction we seek to determine, $x_c$ denotes the remaining features of the instances in the dataset, and $F$ is the black-box model. $E$ stands for the mathematical expectation, and $P$ denotes the probability distribution of $x_c$. The partial function $f_{x_s}(x_s)$ demonstrates the relationship between the feature $x_s$ and the predicted targets. The interactive portion relies on a plot that illustrates the interactive effects between two features, D and A, on the predictive outcomes [44]. This is presented using contour lines. Feature D is represented by the x-axis, indicating its range of values. Feature A, on the other hand, is represented by the y-axis, indicating its range of values.
The colors and contours of the contour plot depict how the model’s predicted values change based on features D and A. The variations in color intensity and the shape of the contour lines aid in comprehending the impact of the interaction between these two features on the model’s predictive results. Through this plot, a clear understanding of the interactive effects between features D and A, as well as their influence on the model’s predictions, can be gained. The interactive partial dependence plot serves as a potent visualization tool, facilitating the discovery of intricate relationships between features and enabling a deeper understanding of the model’s predictive capabilities.
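One-way and two-way (interaction) partial dependence plots of the kind discussed here can be produced with scikit-learn. The sketch below assumes the fitted gbr model and the feature-named DataFrame from the earlier sketches; column indices can be used instead of names if the features are not in a DataFrame.

```python
# Sketch of single-feature PDPs and a two-feature interaction contour (features D and A analogues).
from sklearn.inspection import PartialDependenceDisplay
import matplotlib.pyplot as plt

features = ["curing_duration", "alkali", ("curing_duration", "alkali")]  # tuple -> 2D contour plot
PartialDependenceDisplay.from_estimator(gbr, X_train, features, kind="average")
plt.tight_layout()
plt.show()
```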

2.7.2. Shapley Additive Explanation (SHAP)

SHAP explanations have gained popularity as a method for attributing importance in explainable AI. They utilize game-theoretic concepts to quantify the impact of individual features on the predictions made by a machine learning model [45]. It utilizes game theory’s Shapley values to assess the importance of individual features in an ML model. This method enables the interpretation of the model’s outcomes by measuring the influence of each feature on the results [46]. When fairness is defined by two principles, namely additivity (where the sum of all individual feature contributions equals the final value) and consistency (meaning greater contribution implies greater importance), Shapley values serve as the sole equitable approach to feature attribution [47]. To compute the Shapley values for a specific feature i , the average of the marginal contribution can be determined. This involves calculating the model’s output with and without feature i for all subsets S of N while excluding feature i.
$\varphi_i = \sum_{S \subseteq N \setminus \{i\}} \dfrac{|S|!\,\big(n - |S| - 1\big)!}{n!} \Big[ f\big(S \cup \{i\}\big) - f(S) \Big]$
where f is the initial prediction model, S is a group of features, N is the collection of all features, i is a particular feature, and n is the overall count of features.
SHAP explains the output of the prediction model, $f(x)$, for a single observation $x$. This explanation is given by a linear function $g$, which can be expressed in the following way:
$f(x) = g(x') = \varphi_0 + \sum_{i=1}^{M} \varphi_i x_i'$
The simplified input, denoted as $x'$, is related to the instance being explained, denoted as $x$, through a mapping function. The base value, denoted as $\varphi_0$, represents the scenario when all the inputs are absent. There are M simplified input features in total [48].
Presented herein are methodologies for scrutinizing the impact that individual characteristics have on the predictions made by machine learning models. The SHAP summary plot utilizes the SHAP library to compute SHAP values for each feature concerning a specific sample [49]. For instance, in Python, the “shap” library can be used, and its “shap_values” function can assist in calculating SHAP values [50]. The plot displays how each feature contributes to the final output of the model, aiding in understanding the impact of each feature on a particular sample [51]. The feature importance bar plot of SHAP values can be created by plotting the average absolute SHAP values for each feature [52]. The importance of each feature can be determined from this chart, which shows the average absolute SHAP values for each feature [53]. The dependence plot between features can be generated using the “dependence_plot” function from the SHAP library to understand how they interact [54]. This chart helps understand the interrelationships between features and how they influence model output.
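In practice, the three SHAP visualizations described above (summary plot, mean-|SHAP| bar chart, and dependence plot) can be generated in a few lines with the shap library. The sketch below assumes a fitted tree-based model and the feature-named training frame from the earlier sketches, so the feature names are placeholders.

```python
# Sketch of the SHAP workflow for a fitted tree ensemble (e.g., the GBR or XGBoost model).
import shap

explainer = shap.TreeExplainer(gbr)            # exact Tree SHAP for tree-based models
shap_values = explainer.shap_values(X_train)   # one SHAP value per feature per sample

shap.summary_plot(shap_values, X_train)                    # beeswarm summary plot
shap.summary_plot(shap_values, X_train, plot_type="bar")   # mean |SHAP| feature importance
shap.dependence_plot("WGP", shap_values, X_train,
                     interaction_index="fine_aggregate")   # feature dependence/interaction plot
```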

3. Results and Discussion

3.1. ML Prediction of UCS Results

In the machine learning training process, hyperparameters are parameters that need to be set before training a model. They cannot be learned during the training process and require manual specification. The selection of hyperparameters is crucial for the performance and training effectiveness of the model, making hyperparameter tuning an essential aspect of machine learning. Among them, “n_estimators” and “max_depth” are two hyperparameters tuned through grid search, with search ranges of 1 to 800 and 1 to 20, respectively. The hyperparameter settings before and after tuning for UCS prediction in the machine learning models are presented in Table 3.
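A grid search of the kind described here can be run with scikit-learn's GridSearchCV; the sketch below uses a coarser grid than the full 1-800 / 1-20 ranges quoted above, purely for illustration, and reuses the training split from the earlier sketches.

```python
# Sketch of tuning n_estimators and max_depth for the UCS model by grid search.
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

param_grid = {
    "n_estimators": [50, 100, 200, 400, 800],
    "max_depth": [2, 4, 8, 12, 16, 20],
}
search = GridSearchCV(
    GradientBoostingRegressor(random_state=42),
    param_grid,
    scoring="neg_root_mean_squared_error",  # select the grid point with the lowest RMSE
    cv=5,
)
search.fit(X_train, y_train)
print("Best hyperparameters:", search.best_params_)
```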
Figure 3 presents a comparison between predicted unconfined compressive strength values and their respective actual values. The actual values are represented on the x-axis, whereas the calculated values are represented on the y-axis. The red dots indicate data points from the training set, the blue triangles represent data points from the test set, and the black dashed line denotes the 45-degree reference line, indicating the scenario where predicted values are equal to actual values [55].
From Figure 3, the data points on each graph closely align with the reference line, indicating a high predictive accuracy for the models. It is observed that both the Random Forest and Hist Gradient Boosting Regressor algorithms exhibit a few test points deviating from the reference line, with this effect being more pronounced in Random Forest. In contrast, Gradient Boosting Regressor and XGBoost demonstrate a better fitting effect.
To quantitatively assess the performance of the ML models, the statistical metrics are illustrated in Table 4. This table summarizes the evaluation metrics for four models, reinforcing the conjecture depicted in the diagram. From Table 4, it can be observed that the RMSE of Gradient Boosting Regressor (GBR) is 1.3085, the MSE is 1.7121, the MAE is 1.0320, and the R2 is 0.9465. Similar to the visual interpretation presented in Figure 3, the Random Forest, Hist Gradient Boosting Regressor, and XGBoost models exhibit higher values for RMSE, MSE, and MAE, as well as lower values for R2. This suggests that their predictive accuracy is lower than that of the GBR model. The superior performance of the GBR is primarily due to its strength in capturing complex, nonlinear interactions among features such as WGP content, alkali dosage, and curing duration. Compared to bagging-based models like Random Forest, GBR builds learners sequentially, allowing error correction at each stage and thus achieving higher predictive accuracy. Its regularisation mechanisms and suitability for small to moderate datasets further enhance its generalisation. In this study, GBR achieved the lowest RMSE (1.3085) and the highest R2 (0.9465), confirming its effectiveness for UCS prediction under heterogeneous and weakly correlated input conditions. Consequently, in the pursuit of analytical rigor employing PDP and SHAP methodologies, we leverage Gradient Boosting Regressor.

3.2. ML Prediction of ASR Expansion Results

The before-and-after settings of hyperparameters for ASR prediction in machine learning models are presented in Table 5.
According to Figure 4, all four models achieve reasonably accurate predictions. Among them, the best performers are XGBoost and Gradient Boosting Regressor, as their data points are more closely aligned with the 45-degree reference line. In contrast, Random Forest and Hist Gradient Boosting Regressor are somewhat inferior, as they have some data points that deviate from the reference line.
Table 6 indicates that XGBoost is the best model, with the smallest RMSE, which is 0.0101. The smallest MSE is 0.0001, the minimum MAE is 0.0080, and the R2 closest to 1 is 0.9110. The superior performance can be attributed to XGBoost’s ability to effectively model complex nonlinear relationships, which is particularly advantageous given the weak linear correlations observed among the input variables in this study. The ASR dataset involves intricate interactions between features such as curing duration, alkali content, and WGP dosage, which are efficiently captured by XGBoost through its iterative boosting structure and second-order gradient optimisation. Furthermore, the algorithm’s built-in regularisation mechanisms help mitigate overfitting, enhancing its generalisation capability. The model’s robustness performance is further supported by hyperparameter tuning via grid search, allowing parameters to be optimally selected. These characteristics make XGBoost especially well suited for handling the high-dimensional and heterogeneous nature of the ASR dataset, leading to its superior predictive accuracy. Therefore, in the current experiment, XGBoost was used to conduct PDP and SHAP analysis for ASR accordingly.

3.3. Model Interpretability by PDP and SHAP

3.3.1. The PDP Explanation Related to the UCS

The contour plots of the obtained PDP reveal evident nonlinear relationships in specific spatial domains. Specifically, the contour profiles display distorted curves rather than straightforward and smooth trends, complicating the interactions within the feature space. Figure 5 depicts the interactive effects of curing duration, alkali, sodium sulfate, and WGP on compressive strength, where the x-axis exhibits the duration and the y-axis illustrates the SS/A/WGP. The duration positively correlates with UCS, regardless of the presence of alkali, sodium sulfate, and WGP. Figure 5a shows that when the curing duration is fixed, UCS slowly decreases as alkali increases. The presence of the alkali additive supplied extra hydroxide ions, increasing the likelihood of the stable structure being disrupted when exposed to high alkali concentrations. Figure 5b indicates a peak in UCS under the combined effect of sodium sulfate and curing duration. In terms of its effectiveness, sodium sulfate can be seen as a result of both alkali stimulation (due to Na+) and sulfate stimulation (due to SO42−). In addition to the mentioned alkali activation, the combination of gypsum (CaSO4) with aluminate (C3A) can form ettringite (AFt) [56]. The moderate content of AFt can boost the density of the mortar and also enhance the initial mechanical strength [57]. Nevertheless, excessive sulfate content invariably results in an overabundance of AFt, leading to a weakened structure and a decrease in the robustness of the mortar [58]. In Figure 5c, it is evident that the variation in WGP yields negligible effects on the unconfined compressive strength.
Figure 6 illustrates the interactive effects of fine aggregate, alkali, sodium sulfate, and WGP on compressive strength. FA is shown along the x-axis, while A/SS/WGP is represented along the y-axis. Figure 6a,b indicates that when either alkali or sodium sulfate remains constant, the variation in fine aggregate does not have a significant effect on compressive strength. The alkali or sodium sulfate content remains invariant, with alterations in compressive strength mainly depending on fine aggregate characteristics and quantity, cementitious material properties, the water–cement ratio, and curing conditions. If these parameters remain consistent across diverse concrete specimens, the variation in compressive strength is expected to be minimal. Figure 6c illustrates that with a gradual increase in the replacement of fine aggregate by waste glass powder, there is an observed trend wherein the compressive strength initially increases, followed by a subsequent decline as WGP content increases and FA content decreases. The optimal replacement ratio is when the amount of WGP is 202.5 kg/m3.
Figure 7 shows the interaction of WGP size, alkali, sodium sulfate, and WGP on compressive strength, with the x-axis being WGP size and the y-axis being A/SS/WGP. Figure 7a–c indicates that as WGP size increases while alkali, sodium sulfate, or WGP remains constant, the compressive strength gradually decreases. Increasing the packing density of smaller reinforcing WGP within the matrix material can result in a more efficient utilization of space, thereby potentially augmenting the material’s compressive strength. This is attributable to the closer stacking arrangement facilitated by the reduced size of WGP. From Figure 7a, there is a clear linear relationship between alkali and WGP size, meaning that as alkali and WGP size increase together, the compressive strength decreases. Adding alkali substances can enhance the compressive strength of composite materials. However, an excessive amount of alkali substances can lead to side effects, such as brittleness or chemical degradation, thereby reducing compressive strength [59]. Figure 7c indicates that a higher proportion of WGP and a smaller WGP size can increase compressive strength [60].
The trends in Figure 6c and Figure 8a are similar, with compressive strength increasing first and then decreasing as WGP replaces sodium sulfate, indicating the presence of an optimal substitution rate. Figure 8b shows that when alkali remains constant, changing sodium sulfate does not significantly affect compressive strength.

3.3.2. The PDP Explanation Related to the ASR

Figure 9 shows the influence of curing duration, alkali, sodium sulfate, and WGP on ASR, where the x-axis represents curing duration, and the y-axis represents A/SS/WGP. They all exhibit the same trend, with little fluctuation in ASR as alkali, sodium sulfate, or WGP varies when the curing duration remains constant. When alkali, sodium sulfate, or WGP remains constant, ASR increases with an increase in curing duration. Keeping the curing duration constant means that the moisture and temperature conditions in the concrete remain consistent across different experiments. This implies that under a constant curing duration, the effects of alkali, sodium sulfate, WGP, and aggregates on ASR remain relatively constant, resulting in no significant changes in ASR being observed.
The horizontal axis of Figure 10 represents fine aggregate, while the vertical axis represents alkali, sodium sulfate, and WGP, indicating the interactive effects of pairwise factor variations on ASR. Figure 10a indicates that when fine aggregate remains constant, ASR increases with the increase in alkali, reaching its maximum value at an alkali content of 27 kg/m3. Figure 10b indicates that when the fine aggregate is 708.75 kg/m3 and sodium sulfate is less than 5 kg/m3, the ASR value is maximized. Figure 10c shows that when WGP replaces fine aggregate, ASR is enhanced with the decrease in fine aggregate and the increase in WGP.
Figure 11 depicts the impact of changes in alkali, sodium sulfate, WGP, and WGP size on ASR. Figure 11 shows that as WGP size increases, ASR increases when alkali or sodium sulfate or the concentration of WGP remains constant. Larger waste glass powder particles have smaller surface areas, and a relatively smaller surface area means that less surface area can react with the alkaline components in the concrete. Consequently, the diminished susceptibility of larger waste glass powder particles to the influence of alkaline components results in an increased ASR.

3.3.3. Shapley Additive Explanation (SHAP)

This section will utilize SHAP value summary plots and feature importance bar charts based on SHAP values to analyze the impact of features on the ultimate predictions of UCS and ASR in an ML model.
The SHAP value summary plot illustrates how the overall prediction is influenced by the contribution of each feature [61]. Each point in the plot represents a sample, and the color indicates the magnitude of the corresponding feature value [62]. Specifically, the y-axis lists the features ranked from high to low importance, and the x-axis represents the range of SHAP values, with negative values on the left and positive values on the right [63]. The horizontal position of each point is the SHAP value of that feature for that sample, i.e., its contribution to the overall prediction, while the color of the point encodes the value of the corresponding feature for that sample. This color coding helps us understand the distribution of feature values and their impact on predictions. Through this plot, one can quickly see which features contribute the most to the overall prediction and observe the distribution of different feature values [64].
The SHAP value feature importance bar chart displays the average impact of each feature on model predictions [65]. Each bar in the chart represents a feature, and the length of the bar represents the average absolute magnitude of its SHAP values [66]. The y-axis represents the features, and each bar in the chart represents a feature. The x-axis represents the average magnitude of SHAP values. The direction of the bars indicates whether a feature has a positive or negative impact on model predictions. Positive impacts are represented by bars extending to the right, while negative impacts are represented by bars extending to the left. This chart allows for an intuitive understanding of which features have a greater influence on model predictions and whether they contribute positively or negatively to the model’s prediction results [67].
The dependency plot between features represents the relationship between the SHAP values of a specific feature (Feature S) and another feature (Feature FA), demonstrating the interaction effects between these two features [68]. The x-axis of the graph shows the values of Feature S, while the y-axis represents the SHAP values of Feature S. SHAP values are a metric used to explain the contribution of Feature S to the model’s predictions. The color of the points on the graph represents the presence of Feature FA. This color guide helps us understand how Feature FA and Feature S are related and how they influence the model’s predictions. Through this graph, insights into the interaction effects between Feature S and Feature FA can be gained, as well as how these two features influence the model’s predictions. This is essential for understanding the mechanisms and key features behind the model’s prediction results [68].
Figure 12 represents a summary of SHAP values related to unconfined compressive strength and a bar chart showing the feature importance of SHAP values. Figure 12a is a bar chart of feature importance for SHAP values, which shows the average influence of alkali, curing duration, WGP size, sodium sulfate, WGP, and fine aggregate on the model’s prediction of UCS value. The SHAP value plot reveals that all features have positive SHAP values, suggesting an increase in the predicted values. The most influential factor in our model’s predictions is the alkali feature, with the curing duration feature coming in second place. Conversely, the influence of the fine aggregate feature on the model’s predictions is minimal. Figure 12b illustrates the impact of each feature on each sample, corresponding to the results in Figure 12a, where the alkali feature contributes the most to the overall prediction. While the size of the WGP has a marginal impact on the overall predictive performance, its significance becomes pronounced when the value is low, particularly in the prediction of specific outcomes.
Figure 13 displays a SHAP value summary chart and a bar graph of feature importance based on SHAP values related to ASR. Figure 13a is a bar chart of feature importance for SHAP values, illustrating the mean influence that each feature has on the model’s predictions of ASR values. Curing duration has the greatest SHAP value impact on ASR, followed by WGP size, and WGP has the smallest impact. When looking at each feature individually, they all positively impact the model predictions of ASR values. Figure 13b is the SHAP value summary plot, displaying the individual impact of every feature on the overall ASR classification. The larger the values of curing duration and alkali, the more positively they influence the SHAP values for ASR. A high value of WGP size has a positive impact, whereas a low value has a negative impact, suggesting a preference for a high WGP size. The influence of sodium sulfate, fine aggregate, and WGP on SHAP values is not consistent and may depend on the experimental conditions.

4. Conclusions

Glass concrete is a type of construction material that combines glass particles or fragments with a cement matrix. This study poses three questions, namely, which algorithm predicts the UCS and ASR of glass concrete more accurately, what is the effect of individual characteristics on the UCS and ASR of glass concrete, and how the prediction output is determined by the contribution of each feature. The main conclusions are as follows:
(1)
GBR achieved the best predictive performance for UCS (RMSE = 1.3085, R2 = 0.9465), while XGBoost outperformed others in predicting ASR (RMSE = 0.0101, R2 = 0.9110). The superior performance of these two models was attributed to their ability to capture nonlinear feature interactions and their robustness under limited dataset conditions.
(2)
The WGP content showed a nonlinear effect, with an optimal strength observed at 202.5 kg/m3 replacement. Under this condition, UCS values reached 58.6 MPa at 28 days with binary alkali activation, exceeding the 45.3 MPa baseline of the control group. When the fine aggregate content is 708.75 kg/m3 and the sodium sulfate is less than 5 kg/m3, the alkali–silica reaction value is maximized.
(3)
Through SHAP value analysis, it is evident that the alkali feature significantly influences the unconfined compressive strength prediction, with importance weights up to 0.432. Curing duration ranks as the second most important variable, whereas the fine aggregate feature exhibits the least consequential effect on UCS prediction. When the fine aggregate is 1012.5 kg/m3 and there is no glass powder substitution, the SHAP value is negative, further indicating that glass powder substitution is beneficial for enhancing the positive influence on UCS.
(4)
Curing duration has the greatest impact on the prediction of alkali–silica reaction, while waste glass powder has the smallest impact on ASR prediction. However, when WGP is 303.75 kg/m3, its positive influence on ASR becomes most evident.
Several limitations should be acknowledged in this study. The dataset, although experimentally derived and well structured, remains relatively limited in size, which may restrict the generalizability of the machine learning models. In addition, model evaluation was based solely on internal train–test splits, without external validation using independent datasets. These factors may reduce the robustness and applicability of the results to broader engineering scenarios. Future work should focus on expanding the dataset with field-scale or multi-source experimental data, implementing external validation protocols, and exploring hybrid modeling strategies that combine data-driven algorithms with domain-specific physical principles to improve both predictive accuracy and interpretability.

Author Contributions

Conceptualization, J.X. and Y.Z. (Yuzhuo Zhang); Methodology, Y.Z. (Yanan Zhang), J.X. and Y.Z. (Yuzhuo Zhang); Formal analysis, F.W., X.Z., D.W., H.T. and W.L.; Investigation, F.W., X.Z., D.W., H.T. and W.L.; Resources, F.W., X.Z., Y.Z. (Yanan Zhang), D.W., H.T. and W.L.; Data curation, D.W.; Writing—original draft, Y.Z. (Yanan Zhang); Writing—review & editing, Y.Z. (Yanan Zhang), J.X. and Y.Z. (Yuzhuo Zhang); Project administration, F.W., X.Z., H.T., J.X., W.L. and Y.Z. (Yuzhuo Zhang). All authors have read and agreed to the published version of the manuscript.

Funding

The project is funded by the Major Research and Development Project of Jiangxi Provincial Department of Transportation (Project No. 2024ZY005), the Science and Technology Program Project of Jiangsu Provincial Market Supervision Administration (Grant No. KJ2025069), and the Enterprise Joint Fund Project of the Hunan Provincial Natural Science Foundation (“Research on Intelligent Recognition of High-Rise Building Construction Progress Based on 4D-BIM and Computer Vision Technology”) (Grant No. S2023JJQYLH0355).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

Authors Fei Wu and Wei Luo were employed by the company Jiangxi Traffic Engineering Assembly Manufacturing Co., Ltd. Authors Xin Zhang and Yuzhuo Zhang were employed by the company China Testing & Certification International Group Shanghai Co., Ltd. Author Hua Tian was employed by the company China Construction Fifth Engineering Bureau Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Correction Statement

This article has been republished with a minor correction to the Conflicts of Interest Statement. This change does not affect the scientific content of the article.

Appendix A

Table columns (left to right): ID; OPC (kg/m3); 75 μm WGP (kg/m3); 300 μm WGP (kg/m3); Ground Silica Sand (kg/m3); Water (kg/m3); Na2SO4 (kg/m3); Alkali (kg/m3); Compressive Strength at 7 d, 14 d, and 28 d (MPa); ASR Expansion at 2 d, 4 d, 7 d, 10 d, and 14 d (%). Each numbered row below lists these values in this order.
1450001012.5211.50017.4424.1526.840.01960.02250.03080.03460.0462
2450101.250911.25211.50019.6427.1435.720.00660.00770.01560.01920.0192
3450101.250911.25211.50917.5023.0625.000.05000.05620.06540.06920.0712
4450101.250911.25211.501815.0221.0023.110.02810.03850.05500.06770.0815
5450101.250911.25211.50279.7112.7514.490.02460.02620.03080.04230.0769
6450101.250911.25211.59019.6727.0129.350.00090.01000.01200.03500.0650
7450101.250911.25211.59920.8925.3729.840.03460.03850.05190.06920.1038
8450101.250911.25211.591815.8320.3223.630.01540.02140.03150.05770.0731
9450101.250911.25211.592713.9717.6321.500.02460.03080.04460.05380.0808
10450101.250911.25211.518021.9227.6536.530.03080.03690.05310.08850.1000
11450101.250911.25211.518921.2426.5430.340.02310.02920.05380.09840.1320
12450101.250911.25211.5181816.0219.4522.890.01540.02540.05380.08080.1308
13450101.250911.25211.5182713.980.0018.640.01000.02310.06150.08850.1160
14450101.250911.25211.527016.5221.6023.610.01230.02310.06540.07310.0962
15450101.250911.25211.527916.9720.4223.570.00380.01150.04620.08460.1192
16450101.250911.25211.5271812.7817.0419.660.00850.02310.06150.10770.1224
17450101.250911.25211.5272710.6214.3516.330.00460.02470.04620.08100.1000
18450202.50810211.50018.1326.4427.900.00750.01380.03080.04230.0462
19450202.50810211.50918.9625.7728.730.01920.02380.04230.04620.0531
20450202.50810211.501815.5820.5723.970.02350.02690.04230.05380.0731
21450202.50810211.502712.3716.3518.460.02350.02690.03080.04650.0760
22450202.50810211.59022.4928.9631.530.03080.03080.05380.05150.0538
23450202.50810211.59923.7430.5333.920.01570.03460.03850.05380.0692
24450202.50810211.591819.2121.5523.120.02000.03860.04650.06120.0808
25450202.50810211.592716.0820.4625.550.05380.05770.07130.08310.0865
26450202.50810211.518021.7925.6627.480.03460.05380.06540.06920.0692
27450202.50810211.518924.9629.0133.730.03460.05380.06150.07560.1000
[Data rows 28–97: mix proportions (cement, WGP, sand, water, alkali, and sodium sulfate contents), UCS measured at 7, 14, and 28 days, and ASR expansion recorded at five ages for each mix.]

Figure 1. Bivariate plots and correlation coefficient heatmaps for the UCS and ASR datasets.
Figure 2. The distinctive features of WGP with mean particle sizes of (a) 300 µm and (b) 75 µm.
Figure 3. Model performance graphs of the four regression models: (a) Random Forest, (b) Hist Gradient Boosting Regressor, (c) Gradient Boosting Regressor, and (d) XGBoost.
Figure 4. Model performance graphs of the four regression models: (a) Random Forest, (b) Hist Gradient Boosting Regressor, (c) Gradient Boosting Regressor, and (d) XGBoost.
Figure 5. Interactive partial dependence plots with D on the horizontal axis: (a) D-A; (b) D-SS; (c) D-WGP.
Figure 6. Interactive partial dependence plots with FA on the horizontal axis: (a) FA-A; (b) FA-SS; (c) FA-WGP.
Figure 7. Interactive partial dependence plots with S on the horizontal axis: (a) S-A; (b) S-SS; (c) S-WGP.
Figure 8. Interactive partial dependence plots: (a) SS-WGP; (b) A-SS.
Figure 9. Interactive partial dependence plots with D on the horizontal axis: (a) D-A; (b) D-SS; (c) D-WGP.
Figure 10. Interactive partial dependence plots with FA on the horizontal axis: (a) FA-A; (b) FA-SS; (c) FA-WGP.
Figure 11. Interactive partial dependence plots with S on the horizontal axis: (a) S-A; (b) S-SS; (c) S-WGP.
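The interaction effects shown in Figures 5–11 are two-way partial dependence plots computed from the trained model. The following is a minimal sketch of how such plots can be produced with scikit-learn; the CSV path, the feature column names (D, FA, S, A, SS, WGP), and the use of the tuned Gradient Boosting Regressor settings from Table 3 are illustrative assumptions, not the authors' exact script.

```python
# Minimal sketch (not the authors' exact script) for a two-way partial
# dependence plot such as Figure 5a (D vs. A). The file name and column
# names are placeholders for the UCS dataset described in this paper.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

features = ["D", "FA", "S", "A", "SS", "WGP"]           # assumed column names
data = pd.read_csv("ucs_dataset.csv")                    # hypothetical file
X, y = data[features], data["UCS"]

# Hyperparameters follow the tuned values reported in Table 3.
model = GradientBoostingRegressor(n_estimators=470, max_depth=4).fit(X, y)

# Two-way partial dependence of UCS on the (D, A) feature pair; other panels
# follow by swapping the pair, e.g., ("D", "WGP") or ("FA", "SS").
PartialDependenceDisplay.from_estimator(model, X, features=[("D", "A")])
plt.show()
```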
Figure 12. Feature importance bar chart based on SHAP values and SHAP value summary plot for UCS: (a) Average SHAP Value per Feature; (b) SHAP Summary: Impact and Value Distribution.
Figure 13. Feature importance bar chart based on SHAP values and SHAP value summary plot for ASR: (a) Average SHAP Value per Feature; (b) SHAP Summary: Impact and Value Distribution.
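Figures 12 and 13 pair a mean-|SHAP| importance bar chart with a SHAP summary (beeswarm) plot. A hedged sketch of the UCS case using the shap library is given below; the data path, column names, and model settings repeat the assumptions made in the partial dependence example above.

```python
# Illustrative sketch of the SHAP analysis behind Figures 12 and 13 (UCS case),
# again using assumed column names and a hypothetical CSV path.
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

data = pd.read_csv("ucs_dataset.csv")                           # hypothetical file
X, y = data[["D", "FA", "S", "A", "SS", "WGP"]], data["UCS"]    # assumed columns
model = GradientBoostingRegressor(n_estimators=470, max_depth=4).fit(X, y)

explainer = shap.TreeExplainer(model)       # exact SHAP values for tree ensembles
shap_values = explainer.shap_values(X)

shap.summary_plot(shap_values, X, plot_type="bar")   # mean |SHAP| per feature (panel a)
shap.summary_plot(shap_values, X)                    # impact and value distribution (panel b)
```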
Table 1. Chemical compositions of WGP and cement.
Chemical Composition | WGP | Cement
SiO2 | 74.02% | 20.10%
Al2O3 | 1.40% | 4.60%
Fe2O3 | 0.19% | 2.80%
CaO | 11.25% | 63.40%
MgO | 3.34% | 1.30%
SO3 | 0.33% | 2.70%
Na2O | 9.03% | 0.60%
K2O | 0.29% | -
True density | 2.3–2.5 t/m3 | 3.0–3.2 t/m3
Total chloride | - | 0.02%
Table 2. Variables with all levels used in this experiment.
Variables | Number of Levels | Magnitude
WGP size (μm) | 2 | 75, 300
WGP replacement ratio (%) | 4 | 0, 10, 20, 30
Alkali ratio (%) | 4 | 0, 2, 4, 6
Sodium sulfate ratio (%) | 4 | 0, 2, 4, 6
Curing duration (days) | 3 | 7, 14, 28
Table 3. Results of hyperparameters for UCS prediction.
Model \ Hyperparameter | n_estimators (Before/After) | max_depth (Before/After)
Random Forest | 100/345 | None (no limit)/10
Gradient Boosting Regressor | 100/470 | 3/4
Hist Gradient Boosting Regressor | 100 (max_iter)/498 | None (no limit)/4
XGBoost | 100/119 | None (no limit)/4
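The Before/After columns in Tables 3 and 5 contrast the default settings (100 estimators, unconstrained or default depth) with the tuned values. The sketch below illustrates one plausible way to arrive at such values with a randomized search; the search ranges, cross-validation setup, scoring metric, data path, and column names are assumptions rather than the authors' documented procedure.

```python
# Illustrative hyperparameter search (not the authors' exact setup): the default
# n_estimators=100 model is tuned over assumed ranges, and the best
# n_estimators / max_depth are read off, as reported in Tables 3 and 5.
import pandas as pd
from scipy.stats import randint
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import RandomizedSearchCV

data = pd.read_csv("ucs_dataset.csv")                           # hypothetical file
X, y = data[["D", "FA", "S", "A", "SS", "WGP"]], data["UCS"]    # assumed columns

search = RandomizedSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_distributions={
        "n_estimators": randint(50, 800),    # assumed search range
        "max_depth": randint(2, 12),         # assumed search range
    },
    n_iter=100,
    scoring="neg_root_mean_squared_error",
    cv=5,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)   # Table 3 reports n_estimators=470, max_depth=4 for this model
```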
Table 4. Specific indicators for each model for UCS prediction.
Indicator | Random Forest | Gradient Boosting Regressor | Hist Gradient Boosting Regressor | XGBoost
RMSE | 2.4168 | 1.3085 | 1.7490 | 1.5794
MSE | 5.8411 | 1.7121 | 3.0590 | 2.4946
MAE | 1.9743 | 1.0320 | 1.4090 | 1.2695
R2 | 0.8176 | 0.9465 | 0.9045 | 0.9221
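The indicators in Tables 4 and 6 (RMSE, MSE, MAE, and R2) are standard regression error measures, with RMSE equal to the square root of MSE. A brief sketch of how they might be computed for the tuned XGBoost model on a held-out split is shown below; the 80/20 split, random seed, data path, and column names are assumptions.

```python
# Sketch of the evaluation indicators (RMSE, MSE, MAE, R2) reported in
# Tables 4 and 6, computed on an assumed 80/20 train-test split.
import numpy as np
import pandas as pd
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

data = pd.read_csv("ucs_dataset.csv")                           # hypothetical file
X, y = data[["D", "FA", "S", "A", "SS", "WGP"]], data["UCS"]    # assumed columns
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBRegressor(n_estimators=119, max_depth=4).fit(X_train, y_train)  # tuned values, Table 3
pred = model.predict(X_test)

mse = mean_squared_error(y_test, pred)
print("RMSE:", np.sqrt(mse), "MSE:", mse,
      "MAE:", mean_absolute_error(y_test, pred), "R2:", r2_score(y_test, pred))
```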
Table 5. Results of hyperparameters for ASR prediction.
Model \ Hyperparameter | n_estimators (Before/After) | max_depth (Before/After)
Random Forest | 100/116 | None (no limit)/13
Gradient Boosting Regressor | 100/496 | 3/4
Hist Gradient Boosting Regressor | 100 (max_iter)/646 | None (no limit)/8
XGBoost | 100/89 | None (no limit)/8
Table 6. Specific indicators for each model for ASR prediction.
Indicator | Random Forest | Gradient Boosting Regressor | Hist Gradient Boosting Regressor | XGBoost
RMSE | 0.0166 | 0.0127 | 0.0136 | 0.0101
MSE | 0.0003 | 0.0002 | 0.0002 | 0.0001
MAE | 0.0134 | 0.0096 | 0.0109 | 0.0080
R2 | 0.7610 | 0.8602 | 0.8396 | 0.9110