Article

Hybrid Explainable Machine Learning Models with Metaheuristic Optimization for Performance Prediction of Self-Compacting Concrete

1 School of Civil Engineering, Heilongjiang University, Harbin 150080, China
2 School of Civil Engineering and Architecture, Taizhou University, Taizhou 318000, China
3 Resilient City Research Institute, Taizhou University, Taizhou 318000, China
* Authors to whom correspondence should be addressed.
Buildings 2026, 16(1), 225; https://doi.org/10.3390/buildings16010225
Submission received: 12 December 2025 / Revised: 27 December 2025 / Accepted: 30 December 2025 / Published: 4 January 2026

Abstract

Accurate prediction of the mechanical and rheological properties of self-compacting concrete (SCC) is critical for mixture design and engineering decision-making; however, conventional empirical approaches often struggle to capture the coupled nonlinear relationships among mixture variables. To address this challenge, this study develops an integrated and interpretable hybrid machine learning (ML) framework by coupling three ML models (RF, XGBoost, and SVR) with five metaheuristic optimizers (SSA, PSO, GWO, GA, and WOA), and by incorporating SHAP and partial dependence (PDP) analyses for explainability. Two SCC datasets with nine mixture parameters are used to predict 28-day compressive strength (CS) and slump flow (SF). The results show that SSA provides the most stable hyperparameter optimization, and the best-performing SSA–RF model achieves test R2 values of 0.967 for CS and 0.958 for SF, with RMSE values of 2.295 and 23.068, respectively. Feature importance analysis indicates that the top five variables contribute more than 80% of the predictive information for both targets. Using only these dominant features, a simplified SSA–RF model reduces computation time from 7.3 s to 5.9 s and from 9.7 s to 6.1 s for the two datasets, respectively, while maintaining engineering-level prediction accuracy. The SHAP and PDP analyses provide transparent feature-level explanations and verify that the learned relationships are physically consistent with SCC mixture-design principles, thereby increasing the reliability and practical applicability of the proposed framework. Overall, the proposed framework delivers accurate prediction, transparent interpretation, and practical guidance for SCC mixture optimization.

1. Introduction

Concrete is one of the most extensively used construction materials in civil engineering, serving as the primary component of buildings, bridges, tunnels, and various infrastructure systems due to its high strength, durability, and ease of production [1,2]. With the rapid advancement of modern construction and increasingly complex structural demands, conventional concrete often encounters challenges such as insufficient workability, difficulties in compaction, and quality control issues in densely reinforced or geometrically restricted regions [3,4]. To overcome these limitations, self-compacting concrete (SCC) has emerged as an advanced material capable of flowing under its own weight, achieving full compaction without mechanical vibration, and offering superior filling ability and consolidation performance [5,6]. Owing to these advantages, SCC has been increasingly adopted in high-rise buildings, long-span bridges, tunnel linings, prefabricated components, and other engineering scenarios that require high construction efficiency, improved surface quality, and enhanced durability [7,8]. As its application continues to expand, accurately understanding and predicting the mechanical and rheological behavior of SCC has become essential for mixture optimization, production control, and performance assurance [9,10,11]. However, the properties of SCC are governed by complex interactions among multiple mixture components, resulting in strong nonlinearity and variability that traditional empirical design methods struggle to capture [12,13]. Consequently, the development of reliable, efficient, and data-driven predictive approaches has become a critical research focus in intelligent concrete technology and modern construction engineering [14,15].
In recent years, machine learning has gained widespread attention in the engineering community due to its strong capability to capture nonlinear relationships and identify complex patterns from data [16,17]. Improvements in computational power and data accessibility have further accelerated the adoption of machine learning techniques across various prediction tasks in civil engineering, including material performance evaluation, structural behavior assessment, construction monitoring, and quality control [18,19,20,21]. These data-driven methods demonstrate clear advantages over traditional empirical or mechanistic approaches, particularly when addressing systems governed by multiple interacting variables [22,23,24,25]. In the field of concrete materials, machine learning has increasingly become a powerful tool for forecasting mechanical strength, flow characteristics, and durability, thereby supporting mixture design optimization and intelligent construction [26]. Consequently, data-driven predictive modeling has emerged as an important research direction for enhancing the accuracy and efficiency of performance evaluation for SCC.
A growing body of research has applied machine learning (ML) models to predict the mechanical properties of various concrete systems. Dong et al. [27] developed a data-driven feature evaluation framework that integrates data imputation techniques with a gradient-descent algorithm using a specially designed loss function to identify key input parameters and improve empirical compressive-strength equations for concrete based on data from the Three Gorges Project. Their findings demonstrated that the proposed method is model-independent, that K-nearest neighbors provide the most reliable imputation performance, and that the optimized empirical equation significantly enhances prediction accuracy. Abdellatief et al. [28] proposed a hybrid ML framework incorporating linear regression, one-dimensional convolutional neural networks (OneD-CNN), SVR, and an ensemble model combining ElasticNet, RF and Gradient Boosting (GB) to predict the compressive strength of metakaolin-based geopolymer concrete using both experimental and literature datasets. Their results showed that the hybrid framework achieves substantially higher accuracy than conventional or standalone models and identified aggregate ratio, NaOH molarity and the H2O/Na2O molar ratio as the most influential variables. Kellouche et al. [29] established an ML framework integrating artificial neural networks (ANN), enhanced neural networks with combined inputs, particle swarm optimization (PSO) and a genetic algorithm to predict the compressive strength of palm-oil-fuel-ash concrete using six mixture parameters. The enhanced neural network exhibited superior performance compared with the other tested models and demonstrated robust predictive capability across diverse data ranges.
In parallel, recycled aggregate self-compacting concrete (RA-SCC), produced by incorporating waste concrete and industrial by-products, has attracted increasing attention due to its potential to mitigate natural resource depletion and reduce greenhouse gas emissions. ML models developed for RA-SCC collectively provide systematic and reliable predictions and offer valuable insights for mixture-design optimization and quality control. Despite these advancements, several important research gaps remain. First, most existing studies focus on conventional or specialized concrete systems, whereas comprehensive ML investigations targeting SCC, which features pronounced multi-parameter coupling effects, are still limited. Second, many studies rely on single-model frameworks or simple model combinations and lack systematic comparisons across multiple ML algorithms together with effective hyperparameter-optimization strategies, resulting in limited generalization capability. Third, model interpretability has not been sufficiently emphasized, leaving the underlying influence mechanisms of mixture parameters inadequately understood and constraining engineering applicability. Fourth, although compressive strength has been extensively investigated, far fewer studies have examined other essential performance indicators such as slump flow, which governs the placement, flowability and construction performance of SCC.
To address the challenges arising from the nonlinear behavior and complex variable interactions of SCC, this study proposes a comprehensive data-driven prediction framework that integrates multiple ML models with advanced optimization and interpretability techniques. The novelty of this study lies in the development of an integrated, interpretable, and systematically optimized ML framework dedicated to SCC performance prediction. Unlike previous SCC-related ML studies that often focus on a single target or adopt limited optimization strategies, the main contributions of this work can be summarized as follows: (1) establishing a unified hybrid ML and metaheuristic framework for SCC prediction; (2) performing dual-target comparative evaluation of mechanical and rheological behaviors; (3) incorporating explainable-AI analysis to interpret model behavior; and (4) proposing a simplified reduced-input strategy that enhances computational efficiency and provides practical guidance for SCC mixture design. The results demonstrate that the proposed framework achieves high predictive accuracy, strong generalization capability, and clear interpretability, offering a practical and reliable tool for mixture-design optimization and intelligent decision-making in concrete engineering.

2. Methodology

The workflow of this study is presented in Figure 1. Two structurally consistent datasets were constructed, sharing the same input variables but differing in sample size. The first dataset contains 145 mix designs and is used for predicting the 28-day compressive strength (CS) of SCC. The second dataset comprises 224 mix designs and is employed for predicting slump flow (SF). Both datasets incorporate the same set of input features commonly adopted in SCC mixture design research. Prior to model development, all records were manually verified for completeness and consistency (e.g., variable definitions and units), and no missing entries were identified; apart from the normalization procedure, no additional data screening, filtering, or exclusion criteria (including outlier removal) were applied before model training. To address magnitude discrepancies among variables, all numerical features were normalized using the min–max method. The entire dataset was first randomly split into a training set (80%) and an independent test set (20%). The test set was strictly held out and was not involved in any stage of hyperparameter tuning, model selection, or cross-validation. All hyperparameter optimization procedures were conducted using the training set only, in which a 10-fold cross-validation scheme was embedded within the optimization loop to evaluate each candidate hyperparameter configuration. Specifically, for each candidate solution generated by the metaheuristic optimizer, the model was trained and validated via 10-fold cross-validation on the training set, and the mean cross-validated error was used as the optimization objective. After the best hyperparameters were identified, the final model was refitted on the full training set and evaluated once on the untouched test set to report the final generalization performance. 
To prevent data leakage during preprocessing, min–max normalization was fitted using only the training data (and, within cross-validation, fitted on the corresponding training folds) and then applied to the associated validation folds and the independent test set. Model robustness and predictive reliability were enhanced through hyperparameter tuning based on five population-based optimization algorithms: SSA, PSO, GWO, GA and WOA. A k-fold cross-validation procedure was applied throughout the modeling process to mitigate overfitting and ensure stable performance assessment. Following the initial training stage, the most influential variables governing the mechanical (CS) and rheological (SF) behavior of SCC were identified through feature importance analysis of the optimal predictive model. A streamlined model was subsequently reconstructed using only these dominant variables to further examine their contribution to predictive performance. Finally, a comprehensive interpretability analysis was conducted to provide deeper insights into how key input features influence SCC behavior and to elucidate the decision-making mechanisms of the optimized model.
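A minimal sketch of this leakage-free scaling: the min–max parameters are learned from the training rows only and then applied unchanged to the held-out data. The helper names `fit_minmax`/`apply_minmax` and the mix-design values are illustrative, not part of the study.

```python
def fit_minmax(rows):
    # Learn per-feature min and max from the TRAINING rows only.
    cols = list(zip(*rows))
    return [(min(c), max(c)) for c in cols]

def apply_minmax(rows, params):
    # Scale any split (train, validation fold, or test) with the training statistics.
    scaled = []
    for row in rows:
        scaled.append([
            (v - lo) / (hi - lo) if hi > lo else 0.0
            for v, (lo, hi) in zip(row, params)
        ])
    return scaled

train = [[300.0, 0.35], [400.0, 0.45], [350.0, 0.40]]  # e.g., cement content, W/B
test = [[380.0, 0.50]]                                 # may legitimately fall outside [0, 1]

params = fit_minmax(train)
print(apply_minmax(train, params))
print(apply_minmax(test, params))  # → [[0.8, 1.5]]
```

Note that a test-set value outside the training range simply maps outside [0, 1]; re-fitting the scaler on the test data to "fix" this is exactly the leakage the procedure avoids.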

2.1. Machine Learning Models

2.1.1. Random Forest (RF)

RF is an ensemble learning method that constructs a large number of independent decision trees and aggregates their outputs to produce stable and accurate predictions [30,31]. The algorithm relies on bootstrap sampling to generate multiple training subsets, enabling each tree to learn from different data distributions. This mechanism effectively reduces model variance and mitigates the risk of overfitting. The overall structure and operational process of RF are illustrated in Figure 2.
During the feature-splitting process, RF introduces additional randomness by selecting only a subset of input variables at each node. This strategy increases diversity among the trees, prevents the model from relying excessively on any single feature, and enhances generalization capability. Due to its robustness to noise, ability to capture nonlinear relationships, and minimal dependence on hyperparameter tuning, RF has been widely recognized as a reliable baseline model for predictive tasks involving complex materials such as concrete.
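The two sources of randomness described above, bootstrap sampling of rows and random feature subsets at each split, can be illustrated with a short standard-library sketch; the helper names are hypothetical and no actual tree is grown here.

```python
import random

def bootstrap_sample(n_rows, rng):
    # Sample row indices with replacement; each tree trains on a different subset.
    return [rng.randrange(n_rows) for _ in range(n_rows)]

def feature_subset(n_features, rng):
    # At each split, consider only about sqrt(n_features) randomly chosen features,
    # a common default for regression forests.
    k = max(1, int(n_features ** 0.5))
    return rng.sample(range(n_features), k)

rng = random.Random(42)
rows = bootstrap_sample(145, rng)   # Dataset 1 contains 145 mixes
feats = feature_subset(9, rng)      # 9 mix-design parameters -> 3 candidates per split
print(len(rows), sorted(feats))
```

Because sampling is with replacement, roughly a third of the rows are left out of each bootstrap sample, which is what decorrelates the individual trees.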

2.1.2. eXtreme Gradient Boosting (XGBoost)

XGBoost is an optimized gradient boosting framework designed to improve both predictive accuracy and computational efficiency [32,33]. By incorporating both first- and second-order gradient information, the algorithm achieves more precise split decisions and faster convergence. XGBoost also employs regularization on tree complexity through combined L1 and L2 penalties, which effectively reduces overfitting and enhances generalization. With additional system-level optimizations such as parallelized tree construction and sparsity-aware split finding, XGBoost performs efficiently on structured datasets and has become one of the most reliable models for material property prediction.

2.1.3. Support Vector Regression (SVR)

SVR is a kernel-based machine learning method that seeks an optimal regression function by maximizing the margin around the data [34,35]. The use of an ε-insensitive loss function allows the model to disregard small deviations and remain robust against noise. By applying nonlinear kernel mapping, with the radial basis function (RBF) being one of the most commonly used choices, SVR can effectively capture complex relationships in material behavior. Its strong generalization ability and stability make it particularly suitable for small- to medium-sized datasets in concrete property prediction.

2.2. K-Fold Cross Validation

K-fold cross validation was applied in this study to obtain a reliable and unbiased evaluation of each machine learning model [36,37]. As illustrated in Figure 3, the entire dataset was randomly partitioned into K equally sized subsets. In the training process, one subset was sequentially selected as the validation set, while the remaining subsets were combined to form the training set. This procedure was repeated K times, ensuring that every sample participated in both training and validation.
In this work, a 10-fold cross validation strategy (K = 10) was adopted, which provides a well-established balance between computational efficiency and the stability of performance estimation. The final evaluation metrics were calculated by averaging the results across all 10 folds, yielding a more robust reflection of each model’s generalization capability.
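The fold construction described above can be sketched as follows; `kfold_indices` is a hypothetical standard-library helper, shown with the 145-sample CS dataset as the example size.

```python
import random

def kfold_indices(n_samples, k=10, seed=0):
    # Shuffle once, then split the indices into k roughly equal validation folds.
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

# Every sample appears in exactly one validation fold across the 10 splits.
seen = []
for train, val in kfold_indices(145, k=10):
    assert not set(train) & set(val)  # train and validation never overlap
    seen.extend(val)
print(sorted(seen) == list(range(145)))  # → True
```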

2.3. Hyperparametric Optimization Algorithm

Hyperparameter optimization is essential for enhancing the accuracy, stability, and generalization capability of machine learning models [38,39]. In this study, five nature-inspired metaheuristic algorithms were employed to explore the hyperparameter search space. All of these algorithms are derived from natural phenomena and perform population-based iterative searches to locate promising regions in complex and nonlinear optimization landscapes.
SSA simulates the foraging and anti-predation behavior of sparrows and provides a flexible balance between global exploration and local exploitation [40]. PSO is inspired by the collective movement of bird flocks and fish schools, updating particle positions based on both individual experience and group knowledge, which enables rapid convergence in continuous parameter spaces [41]. GWO models the hierarchical structure and cooperative hunting strategies of gray wolves and exhibits strong global search capability with a relatively simple algorithmic structure [42]. WOA imitates the bubble-net feeding behavior of humpback whales and integrates encircling and spiral movement mechanisms to reduce the likelihood of premature convergence [43]. GA, a classical evolutionary algorithm, relies on biologically inspired operators such as selection, crossover and mutation to iteratively evolve a population of candidate solutions [44]. In this study, SSA, PSO, GWO and WOA are categorized as swarm intelligence algorithms, whereas GA is treated as an evolutionary algorithm. Leveraging this diverse set of nature-inspired metaheuristics enhances the robustness of the hyperparameter search process and contributes to improved overall model performance.
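As a concrete illustration of how such a population-based search operates, the PSO update rule (individual best plus swarm best) can be sketched in a few lines. `pso_minimize` is a hypothetical helper, the coefficients (inertia 0.7, acceleration 1.5) are common textbook defaults rather than the settings of this study, and the quadratic objective merely stands in for the cross-validated RMSE of a candidate hyperparameter.

```python
import random

def pso_minimize(objective, lo, hi, n_particles=10, n_iters=30, seed=1):
    # Minimal 1-D PSO: each particle blends its own best position with the swarm's.
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest, pbest_val = pos[:], [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))  # keep within the search range
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i], val
    return gbest, gbest_val

# Toy objective with its minimum at x = 3; a real run would train and
# cross-validate a model for each candidate position instead.
best_x, best_f = pso_minimize(lambda x: (x - 3.0) ** 2 + 1.0, 0.0, 10.0)
print(best_x, best_f)
```

The other four optimizers differ only in how the position-update step is defined (leader hierarchy for GWO, spiral encircling for WOA, crossover/mutation for GA, producer–scrounger roles for SSA); the outer evaluate-and-update loop is the same.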

2.4. Model Performance Evaluation Indicators

To rigorously evaluate the predictive performance of the machine learning models, five widely used statistical indicators were adopted in this study: the coefficient of determination (R2), mean absolute error (MAE), mean square error (MSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). These indicators collectively assess the accuracy, error magnitude, sensitivity to large deviations, and scale-independent behavior of model predictions. The mathematical expressions of these indicators are summarized in Table 1.
R2 quantifies the proportion of variance explained by the model, where higher values indicate stronger predictive capability. MAE measures the average absolute deviation between predictions and observations, providing an intuitive interpretation of error magnitude. MSE represents the average squared difference between predicted and true values, emphasizing the influence of large deviations due to the squaring operation. RMSE, as the square root of MSE, maintains the original unit of the target variable while remaining sensitive to significant errors. MAPE expresses prediction errors in percentage form, enabling relative comparisons across datasets with different scales.
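Under the standard definitions summarized in Table 1, the five indicators can be computed as follows; the strength values are illustrative and not drawn from the datasets.

```python
def metrics(y_true, y_pred):
    # Standard regression indicators: R2, MAE, MSE, RMSE, MAPE (in percent).
    n = len(y_true)
    errs = [p - t for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errs) / n
    mse = sum(e * e for e in errs) / n
    rmse = mse ** 0.5
    mape = 100.0 * sum(abs(e) / abs(t) for e, t in zip(errs, y_true)) / n
    mean_t = sum(y_true) / n
    ss_res = sum(e * e for e in errs)                    # residual sum of squares
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)      # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return {"R2": r2, "MAE": mae, "MSE": mse, "RMSE": rmse, "MAPE": mape}

# Illustrative 28-day strengths (MPa) vs. predictions.
m = metrics([30.0, 40.0, 50.0, 60.0], [32.0, 39.0, 48.0, 61.0])
print({k: round(v, 3) for k, v in m.items()})
# → {'R2': 0.98, 'MAE': 1.5, 'MSE': 2.5, 'RMSE': 1.581, 'MAPE': 3.708}
```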
To ensure comparability across indicators, all metric values were normalized prior to composite scoring. For indicators where higher values indicate better performance (e.g., R2), normalization was conducted using Equation (1):
x′_ij = (x_ij − min(x_j)) / (max(x_j) − min(x_j))
For indicators where lower values indicate better performance (MAE, MSE, RMSE, MAPE), normalization was performed using Equation (2):
x′_ij = (max(x_j) − x_ij) / (max(x_j) − min(x_j))
where x_ij is the original value of indicator j for model i; x′_ij is the normalized value; and max(x_j) and min(x_j) denote the maximum and minimum values of indicator j, respectively.
Following normalization, objective indicator weights were determined using the CRITIC (Criteria Importance Through Intercriteria Correlation) method, which considers both the dispersion of each indicator and its conflict (correlation) with the other indicators, thereby reducing redundancy and mitigating subjectivity in multi-criteria evaluation. In practical SCC mixture design and quality control scenarios, model selection should not rely on a single metric, because different indicators reflect different engineering concerns: R2 describes overall fit, whereas the error-based metrics (MAE, RMSE, and MAPE) quantify the magnitude of prediction deviations, which relates directly to the risk of misestimating strength or workability in design and construction. A composite score is therefore used to condense multi-metric performance into a single comparable index, enabling efficient ranking of the model–optimizer combinations. The amount of information carried by indicator j was computed using Equation (3):
C_j = σ_j · Σ_{k=1}^{m} (1 − r_jk)
The final weight of each indicator was obtained by normalizing the information content, as shown in Equation (4):
w_j = C_j / Σ_{j=1}^{m} C_j
where σ_j is the standard deviation of indicator j; r_jk is the correlation coefficient between indicators j and k; C_j represents the information content; w_j denotes the CRITIC weight; and m is the total number of indicators.
The overall performance score of each model was obtained by combining the normalized indicator values with their CRITIC-derived weights. The composite score was calculated using Equation (5):
S_i = Σ_{j=1}^{m} w_j · x′_ij
where S_i is the final composite score of model i; x′_ij is the normalized value of indicator j for model i; and w_j is the weight assigned to indicator j.
A higher composite score indicates better overall model performance, reflecting accuracy, robustness, and stability across all evaluation dimensions.
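Equations (1)–(5) can be combined into a short computational sketch. The score matrix below is hypothetical and assumes the indicators have already been normalized via Equations (1)–(2), so that larger is uniformly better.

```python
def critic_weights(x):
    # x: rows = models, columns = min-max normalized indicators.
    n, m = len(x), len(x[0])
    means = [sum(r[j] for r in x) / n for j in range(m)]
    stds = [(sum((r[j] - means[j]) ** 2 for r in x) / n) ** 0.5 for j in range(m)]

    def corr(j, k):
        cov = sum((r[j] - means[j]) * (r[k] - means[k]) for r in x) / n
        return cov / (stds[j] * stds[k]) if stds[j] and stds[k] else 0.0

    # Equation (3): information content = dispersion x conflict with other indicators.
    c = [stds[j] * sum(1.0 - corr(j, k) for k in range(m)) for j in range(m)]
    total = sum(c)
    return [cj / total for cj in c]          # Equation (4): normalize to weights

def composite_scores(x, w):
    # Equation (5): weighted sum of normalized indicator values per model.
    return [sum(wj * xj for wj, xj in zip(w, row)) for row in x]

# Three hypothetical models scored on three normalized indicators.
x = [[1.0, 0.8, 0.9],
     [0.6, 1.0, 0.4],
     [0.0, 0.1, 0.0]]
w = critic_weights(x)
print([round(wi, 3) for wi in w])
print([round(s, 3) for s in composite_scores(x, w)])
```

Indicators that vary little across models, or that merely duplicate another indicator, receive small weights, which is the behavior that makes the composite ranking robust to redundant metrics.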

3. Experimental Setting

3.1. Data Collection and Description

In this study, two experimental datasets were constructed based on published research on SCC. Dataset 1 contains 145 samples, and Dataset 2 contains 224 samples. Both datasets were designed to encompass a broad range of mixture compositions, ensuring sufficient variability and representativeness for ML model development. The input variables for both datasets consist of nine mix-design parameters, namely cement content (C), cement grade (CG), fine aggregate (FA), limestone powder (LP), water–binder ratio (W/B), sand content (S), coarse aggregate (CA), maximum aggregate size (MAXD) and the superplasticizer-to-binder ratio (SP/B). These parameters are widely recognized as the primary factors influencing the mechanical and fresh properties of SCC. Dataset 1 uses the 28-day compressive strength (CS) as the output variable, whereas Dataset 2 focuses on slump flow (SF). Because the two datasets share identical input variables but differ in sample size and prediction targets, they provide a valuable basis for comprehensively examining model behavior across different prediction objectives. It should be noted that this work is a data-driven modeling study and does not involve a new experimental campaign. Therefore, the material batching, mixing, and testing procedures associated with individual samples follow the original experimental protocols reported in the corresponding references. Detailed descriptions of the SCC mixing sequence, material conditioning, and test procedures can be found in the cited source studies, which ensures traceability of the data used for model development.
The correlation structures of the two datasets are shown in Figure 4 and Figure 5. Figure 4 illustrates the correlation heatmap between the input variables and CS. Several variables exhibit moderate correlations, such as the positive association between cement content and strength, indicating that higher binder content generally promotes hydration and mechanical development. Other variables, including MAXD and SP/B, display relatively weak correlations with CS, suggesting the presence of nonlinear effects or interactions that cannot be captured through simple pairwise relationships. Figure 5 presents the heatmap for the SF dataset. Overall correlation levels are lower than those observed for CS, which is consistent with the complex rheological behavior of SCC. Among the variables, W/B and SP/B show comparatively stronger relationships with SF, aligning with established knowledge regarding paste viscosity and flowability.
The relationships between each input variable and the two output parameters are further illustrated through scatter plots. Figure 6 presents the scatter distributions for SF, where the data exhibit substantial dispersion, reflecting the multifactorial and nonlinear nature of concrete flowability. Figure 7 shows the scatter plots for CS, in which a clearer upward trend is observed between cement content and strength, whereas variables such as CG and MAXD display more scattered patterns. The contrast between Figure 6 and Figure 7 highlights the inherent differences between rheological behavior and mechanical performance, particularly in terms of their predictability.
A detailed statistical summary of all input and output variables is provided separately for the two datasets. Table 2 presents the statistical characteristics of Dataset 1, which uses CS as the output parameter, while Table 3 reports the corresponding information for Dataset 2, which predicts SF. Each table includes the mean, minimum, maximum, standard deviation, coefficient of variation and skewness of all variables. These statistical indicators describe the central tendency, dispersion and distributional properties of the datasets, offering insights into data variability and aiding the evaluation of their suitability for ML model development.
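The summary statistics reported in Tables 2 and 3 can be reproduced for any variable with a few lines of standard-library code; the cement contents below are illustrative and not taken from the datasets.

```python
import statistics

def describe(values):
    # Mean, range, dispersion, and shape statistics for one variable.
    n = len(values)
    mean = statistics.fmean(values)
    std = statistics.pstdev(values)              # population standard deviation
    cv = std / mean if mean else float("nan")    # coefficient of variation
    skew = sum(((v - mean) / std) ** 3 for v in values) / n if std else 0.0
    return {"mean": mean, "min": min(values), "max": max(values),
            "std": std, "CV": cv, "skewness": skew}

# Illustrative cement contents (kg/m^3).
print(describe([280.0, 320.0, 350.0, 400.0, 450.0]))
```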

3.2. Hyperparameter Optimization

In this work, five meta-heuristic optimization methods, including SSA, PSO, GWO, GA, and WOA, were applied to adjust the hyperparameters of the three prediction models used in the study. The models consisted of RF, XGBoost, and SVR, and the optimization aimed to search for hyperparameter settings that improved predictive accuracy by reducing the mean squared error obtained during training. To make the outcomes produced by different optimization techniques comparable, all algorithms explored the same search region of hyperparameters under unified experimental conditions. Each optimization procedure was carried out for 30 iterations, and a population size of 10 was adopted for all population-based algorithms throughout the experiments. These settings were selected to balance optimization effectiveness and computational efficiency, because evaluating each candidate solution requires repeated model training and cross-validation, which can rapidly increase the computational cost as the population size and iteration number grow. Using the same population size and iteration number for all algorithms also ensures a fair comparison by keeping the computational budget consistent, while still providing adequate search diversity and convergence for hyperparameter tuning. The hyperparameter search ranges associated with each model are summarized in Table 4.

4. Discussion

4.1. ML Models Hyperparameter Optimization

Figure 8 and Figure 9 present the convergence behavior of five meta-heuristic algorithms, namely SSA, PSO, GWO, GA, and WOA, during the hyperparameter optimization of the RF, XGBoost, and SVR models on two independent datasets. These curves provide a clear visualization of how efficiently each algorithm reduces prediction error throughout the optimization process.
For Dataset 1, SSA demonstrates the fastest and most stable convergence among all algorithms. It rapidly reduces the fitness value within the initial iterations and subsequently maintains a steady downward trend, indicating strong global search capability. In contrast, GA and WOA exhibit slower improvement accompanied by more pronounced oscillations, reflecting weaker stability during the optimization process. A similar pattern is observed for Dataset 2, where SSA again outperforms the other four algorithms across all three models. PSO and GWO generally achieve moderate performance with smoother convergence curves, whereas GA consistently shows the slowest convergence.
The optimization results are further summarized in Table 5, which reports the minimum RMSE obtained by each algorithm–model combination. On Dataset 1, SSA reaches the lowest RMSE values for RF (3.65), XGBoost (4.08), and SVR (6.08). These values are noticeably lower than those achieved by the remaining algorithms, demonstrating the superior accuracy of SSA. The advantage of SSA becomes even more apparent in Dataset 2, where it again obtains the smallest RMSE for all three models: 27.54 for RF, 28.91 for XGBoost, and 33.09 for SVR.
Figure 10 summarizes the optimization results for both datasets using radar charts, where a larger radial distance corresponds to lower prediction error and thus superior performance. For Dataset 1, the SSA-optimized RF, XGBoost and SVR models occupy the outermost region of the chart, forming a distinctly expanded boundary that reflects the highest overall accuracy. A similar pattern is observed for Dataset 2, in which SSA again defines the outer contour, demonstrating its consistently strong optimization capability. In contrast, GA and WOA remain closer to the center, indicating higher RMSE values and less stable optimization outcomes.
Overall, SSA exhibits the most favorable characteristics across nearly all evaluation criteria. It converges rapidly, achieves the lowest prediction errors and maintains stable optimization trajectories throughout the search process. Given its strong performance and computational efficiency, SSA is selected as the hyperparameter optimization method for the subsequent development of predictive models in this study.

4.2. Performance Comparison of Different ML Models

A detailed comparison of the results in Table 6 and Table 7 shows that RF provides the best overall predictive performance among the three models. Although XGBoost achieves the highest training accuracy in Dataset 1 (R2 = 0.998, RMSE = 0.604) and similarly strong performance in Dataset 2 (R2 = 0.997), its test accuracy drops more noticeably across both datasets, indicating a higher susceptibility to overfitting. In contrast, RF maintains a more consistent balance between training and testing performance, achieving test R2 values of 0.967 for Dataset 1 and 0.958 for Dataset 2, along with comparatively low RMSE values. This stability highlights the strong generalization capability of RF.
The observed performance differences can be attributed to the inherent characteristics of the models. XGBoost relies on iterative boosting, which aggressively minimizes training error but can also amplify noise and local data fluctuations, particularly when dealing with smaller or more heterogeneous datasets. This tendency increases the risk of overfitting and explains its larger discrepancy between training and testing accuracy. In contrast, RF benefits from ensemble averaging across multiple decision trees, which effectively reduces variance and enhances robustness when handling complex or noisy input features. SVR, meanwhile, consistently exhibited the lowest predictive performance, with substantially higher error levels—especially on Dataset 2 (test RMSE = 37.993)—indicating limited suitability for modeling the highly variable behavior associated with SF prediction.
Figure 11 shows the scatter distribution between the predicted and experimental values for Dataset 1. RF exhibits the most concentrated clustering around the reference line, with most samples closely following the ideal prediction trajectory, demonstrating strong predictive accuracy and generalization capability for CS. In contrast, XGBoost displays slightly greater dispersion, particularly in the high-strength range, indicating a tendency toward overfitting and reduced robustness when extrapolating to larger strength values. SVR shows the weakest performance, with several points deviating markedly from the main cluster, suggesting limited capability in capturing the nonlinear relationships present in the dataset.
A clear improvement in prediction performance for SF is observed in Figure 12. The scatter distributions become more concentrated across all three models, with RF showing the tightest clustering and minimal dispersion, indicating its superior predictive capability for rheological behavior. XGBoost maintains competitive performance; however, a subset of its predictions still falls outside the main concentration region, suggesting a degree of sensitivity to data variability. SVR once again exhibits the largest deviations, reinforcing its limited ability to capture the nonlinear interactions governing SF in SCC mixtures.
To objectively compare the overall performance of the different ML models, a composite evaluation index was developed using min–max normalization in combination with the CRITIC weighting method. This approach assigns higher weights to indicators with greater variability and lower inter-correlation, thereby ensuring a balanced and unbiased assessment across multiple evaluation metrics. The final composite scores obtained through this procedure are presented in Figure 13.
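The CRITIC weighting procedure described above can be sketched as follows; the metric matrix is illustrative rather than the paper's actual values, and all metrics are assumed to be oriented so that larger normalized values are better.

```python
# Minimal sketch of the CRITIC weighting scheme: each metric's weight is
# proportional to its contrast intensity (standard deviation) times its
# conflict with the other metrics (sum of 1 - correlation).
import numpy as np

def critic_weights(matrix):
    """Return CRITIC weights for a (models x metrics) matrix."""
    # Min-max normalize each metric (column) to [0, 1]
    norm = (matrix - matrix.min(axis=0)) / (matrix.max(axis=0) - matrix.min(axis=0))
    std = norm.std(axis=0, ddof=1)             # contrast intensity
    corr = np.corrcoef(norm, rowvar=False)     # inter-metric correlation
    conflict = (1.0 - corr).sum(axis=0)        # conflict with other metrics
    info = std * conflict                      # information content per metric
    return info / info.sum()                   # normalize weights to sum to 1

# Illustrative (not the paper's) raw metric matrix: 3 models x 3 metrics
M = np.array([[0.97, 2.3, 1.9],
              [0.94, 4.1, 3.6],
              [0.86, 6.1, 5.2]])
w = critic_weights(M)
print(np.round(w, 3))
```

Metrics that vary strongly and are weakly correlated with the others receive higher weights, which is the "balanced and unbiased" property exploited for the composite scores in Figure 13.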
For Dataset 1, XGBoost attains the highest composite score (0.71); however, this apparent advantage is primarily driven by its exceptionally low training errors, which also indicate a clear tendency toward overfitting. RF, in comparison, achieves a slightly lower score (0.69) but exhibits markedly better consistency between training and testing results, reflecting stronger generalization capability and more reliable predictive stability. For Dataset 2, RF again obtains the highest score (0.69), outperforming XGBoost (0.67) and demonstrating robust predictive performance in a dataset characterized by smoother variable interactions. SVR shows a noticeable improvement in Dataset 2 (0.32) relative to Dataset 1, suggesting that kernel-based regression is more suitable for this dataset, where nonlinear patterns are less dominant.
Overall, RF demonstrates the most reliable and well-balanced performance across both datasets, confirming its status as the most robust and generalizable model among the three algorithms evaluated.

4.3. ML Models Performance After Feature Selection

To evaluate the feasibility of simplifying the input variables for practical engineering applications, a feature importance analysis was conducted prior to retraining the hybrid SSA–RF model. In real construction settings, obtaining all input parameters with high precision can be costly, time-consuming or impractical; thus, reducing the number of required features is crucial for enhancing applicability and operational efficiency. RF-based importance scores were adopted due to their stability, robustness against multicollinearity and compatibility with the SSA–RF framework. The analysis was performed separately for Dataset 1 and Dataset 2, and the ranked feature importance distributions are presented in Figure 14.
As shown in the figures, the top five variables in each dataset collectively contribute more than 80% of the total importance, indicating that essential predictive information is concentrated within a limited subset of influential features. For CS prediction (Dataset 1), the most dominant variables are S, W/B, CG, C and FA. These factors govern the core mechanisms of strength development, including paste quality, aggregate interlocking, hydration kinetics and particle packing density. In contrast, SF prediction (Dataset 2) is primarily influenced by FA, W/B, SP/B, C and CA, which is consistent with rheological principles because these variables directly affect mixture viscosity, yield stress, cohesion and lubrication behavior within the fresh mortar matrix.
The dominance of the top five variables in each dataset aligns with the fundamental mixture design principles of SCC. Parameters such as W/B and SP/B play critical roles in governing flowability and paste rheology, whereas S, CG, FA and C collectively influence packing density, water demand, matrix stiffness and hydration potential. Retaining these key features therefore preserves the essential physical mechanisms underlying both strength and flow behavior, while eliminating redundant or weakly informative variables. Based on this rationale, the top five features from each dataset were selected to construct two reduced-feature datasets, which were subsequently used to retrain the SSA–RF model for performance comparison.
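A minimal sketch of this cumulative-importance selection rule is given below, assuming synthetic data and using the paper's feature abbreviations only as illustrative labels.

```python
# Hedged sketch of the reduced-feature workflow: rank RF importances and keep
# the smallest feature set whose cumulative importance exceeds 80%.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

names = ["S", "W/B", "CG", "C", "FA", "LP", "CA", "MAXD", "SP/B"]
X, y = make_regression(n_samples=150, n_features=9, n_informative=5,
                       noise=5.0, random_state=1)

rf = RandomForestRegressor(n_estimators=200, random_state=1).fit(X, y)
order = np.argsort(rf.feature_importances_)[::-1]     # rank high to low
cum = np.cumsum(rf.feature_importances_[order])       # cumulative importance
keep = order[: int(np.searchsorted(cum, 0.80) + 1)]   # smallest set covering 80%
print("retained features:", [names[i] for i in keep])

X_reduced = X[:, keep]  # reduced input set used for retraining
rf_small = RandomForestRegressor(n_estimators=200, random_state=1).fit(X_reduced, y)
```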
Following the selection of the top five most influential features from each dataset, the SSA–RF model was retrained using the reduced input sets. As shown in Table 8, for Dataset 1, the full-feature model achieved an R2 of 0.967, with corresponding RMSE, MSE, MAE and MAPE values of 2.295, 5.267, 1.926 and 5.167%, respectively. After retaining only the top five features, model performance decreased: R2 dropped to 0.897, and the error metrics increased to an RMSE of 4.088, an MSE of 16.713, an MAE of 3.551 and a MAPE of 10.965%. Although this decline reflects the expected loss of information associated with reducing the number of input variables, the simplified model still maintains sufficiently high predictive capability for engineering-level estimation of CS. A similar pattern is observed for Dataset 2, as summarized in Table 9. With all features included, the SSA–RF model achieved an R2 of 0.958 and a MAPE of 3.691%. After feature reduction, R2 declined slightly to 0.927, while RMSE increased from 23.068 to 30.769 and MAPE rose moderately to 4.594%. Despite these changes, the simplified model continues to produce reliable predictions, with R2 > 0.92 and MAPE < 5%, demonstrating its suitability for practical SF estimation in engineering applications.
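For reference, the error metrics reported in Tables 8 and 9 can be computed with scikit-learn as sketched below; the y_true and y_pred arrays are illustrative values, not the study's data.

```python
# Sketch of the evaluation metrics (R2, RMSE, MSE, MAE, MAPE) used to
# compare the full-feature and reduced-feature SSA-RF models.
import numpy as np
from sklearn.metrics import (mean_absolute_error,
                             mean_absolute_percentage_error,
                             mean_squared_error, r2_score)

# Illustrative measured vs. predicted CS values (MPa), not the paper's data
y_true = np.array([35.2, 42.1, 55.8, 48.3, 60.4])
y_pred = np.array([34.0, 43.5, 54.1, 49.0, 58.9])

mse = mean_squared_error(y_true, y_pred)
metrics = {
    "R2": r2_score(y_true, y_pred),
    "RMSE": np.sqrt(mse),
    "MSE": mse,
    "MAE": mean_absolute_error(y_true, y_pred),
    "MAPE(%)": 100 * mean_absolute_percentage_error(y_true, y_pred),
}
print({k: round(v, 3) for k, v in metrics.items()})
```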
In addition to maintaining acceptable predictive performance, the simplified SSA–RF model provides clear computational benefits. For Dataset 1, the running time decreased from 7.3 s to 5.9 s, and for Dataset 2, from 9.7 s to 6.1 s. These reductions, approximately 19% for Dataset 1 and 37% for Dataset 2, demonstrate that selecting a smaller subset of input variables can substantially lower computational cost. This improvement is particularly valuable for real-time prediction, repeated simulations, and optimization tasks that require frequent model evaluations.
From an engineering perspective, the reduced-input strategy introduces an explicit trade-off between accuracy and practicality. Although the simplified SSA–RF model shows a decrease in CS prediction performance, this accuracy level can still be acceptable for preliminary mixture design, rapid screening, and repeated evaluation scenarios, especially when only a limited set of mixture parameters is available and computational efficiency is prioritized. However, for structural or safety-critical applications where conservative and highly accurate strength estimation is required, the full-feature model is recommended, or the simplified model should be further calibrated and validated using project-specific data before being used for decision-making.
Overall, although the accuracy of the SSA–RF model decreased slightly after feature selection, the simplified models retained strong engineering applicability while substantially improving computational efficiency. These results demonstrate that the adopted feature selection strategy provides an effective balance between predictive accuracy and computational efficiency, thereby offering a streamlined and practical workflow for SCC-related predictive tasks.

4.4. ML Model Interpretability Analysis

To deepen the understanding of the internal reasoning of the SSA–RF models and to verify whether the learned relationships are consistent with established engineering mechanisms of SCC, interpretability analyses were performed using SHAP and partial dependence plots (PDPs). The SHAP summary plots in Figure 15a,b illustrate the global feature contributions for CS in Dataset 1 and SF in Dataset 2, respectively. The corresponding PDPs, shown in Figure 16 and Figure 17, depict the marginal effects of individual features on the model outputs.
As shown in Figure 15a, the SHAP distribution for Dataset 1 indicates that S and W/B exert the strongest influence on the predicted CS, followed by CG, C and SP/B. Higher values of S are associated with positive SHAP contributions, suggesting that an adequate fine-aggregate content improves packing density, strengthens the granular skeleton and consequently enhances compressive behavior. In contrast, W/B exhibits predominantly negative SHAP values at higher levels, confirming that increased water content weakens the cementitious matrix by increasing porosity and reducing the density of hydration products. CG and C show positive effects, reflecting the benefits of higher cement strength grade and greater binder content for strength development. Variables such as FA, LP and MAXD contribute moderately, indicating that these parameters adjust microstructural or rheological characteristics but are not dominant drivers of strength within the given datasets.
For SF prediction, as shown in Figure 15b, a distinct hierarchy of feature importance is observed. W/B, S and SP/B emerge as the most influential variables, whereas CA, LP, FA and CG exhibit comparatively smaller global contributions. High values of W/B and SP/B are strongly associated with positive SHAP values, indicating that mixtures with greater paste fluidity and lower yield stress are predicted to achieve higher slump-flow. In contrast, higher S generally contributes negatively, reflecting the increased interparticle friction caused by excessive sand content. The relatively small SHAP magnitudes of CA, MAXD and CG suggest that aggregate characteristics and cement strength have secondary influence on fresh-state flowability, with paste-related variables serving as the primary determinants.
The PDPs shown in Figure 16 further illustrate the marginal effects of each feature on CS. C and CG exhibit nearly monotonic increasing trends, indicating systematic strength enhancement with higher binder quality and dosage. FA displays a nonlinear pattern in which moderate replacement has minimal impact, whereas excessive FA slightly reduces strength due to its dilution effect. LP and CA show rapid initial increases followed by plateau regions, suggesting the existence of optimal ranges beyond which additional increments provide limited improvement. The PDP for W/B reveals a clear decreasing trend, consistent with the well-established negative influence of excess water on strength development. The PDP for S increases over most of its practical range, indicating that appropriate fine-aggregate content enhances granular packing and contributes positively to strength. MAXD and SP/B display non-monotonic behaviors, implying that both parameters possess optimal values at which their contributions to strength are maximized before diminishing.
For SF, as shown in Figure 17, the PDP results highlight the underlying mechanisms governing SCC flowability. W/B exhibits a strong positive influence at low to moderate levels, followed by stabilization, indicating that increasing water content enhances flow until a rheological limit is reached. SP/B shows a rapid initial rise before transitioning into a plateau, reflecting the saturation behavior of superplasticizers once the mixture achieves sufficiently low yield stress. FA and LP present nonlinear trends in which moderate dosages improve flow through lubrication and enhanced paste structure, whereas excessive amounts may induce instability or segregation, ultimately reducing slump flow. In contrast, S, CA and MAXD display decreasing or oscillatory patterns, suggesting that higher fine- or coarse-aggregate content or larger aggregate sizes hinder flow due to increased frictional resistance and blockage effects.
Overall, the SHAP and PDP analyses consistently demonstrate that the SSA–RF models capture physically meaningful relationships that align with established SCC theory. Strength predictions are primarily governed by parameters related to binder quality, paste porosity and granular packing, whereas SF behavior is predominantly influenced by rheology-related variables such as W/B and SP/B. These interpretability results not only confirm the reliability of the proposed models but also offer practical guidance for optimizing SCC mixture design in engineering applications.

4.5. Comparison with Previous Studies

Previous studies on concrete performance have made substantial contributions by systematically investigating how mixture constituents and modification measures influence fresh behavior, strength development, and durability-related properties [45,46,47,48]. Extensive experimental and mechanistic research has clarified the roles of cementitious materials, water-to-binder ratio, aggregate characteristics, mineral additions, chemical admixtures, and various additives or reinforcement strategies in governing microstructure evolution and macroscopic performance. These works provide the fundamental scientific basis and practical guidance for mixture design, quality control, and performance-oriented engineering applications, and they remain essential for understanding the underlying mechanisms of concrete behavior [49,50,51].
Building on this well-established knowledge base, the present study shifts the focus toward a data-driven modeling perspective and extends the analysis to SCC, where fresh and hardened properties are strongly coupled and influenced by complex nonlinear interactions among mixture variables. By integrating multiple ML models with metaheuristic hyperparameter optimization and incorporating explainability analyses, the proposed framework learns variable–response relationships directly from data and provides interpretable evidence that can be connected to the mechanisms reported in previous studies. In this way, the proposed approach complements prior experimental research by offering an efficient and scalable tool for multi-variable performance prediction and mixture optimization, and it helps translate existing mechanistic understanding into practical, quantitative decision support for SCC engineering.

5. Conclusions

In this study, a comprehensive data-driven framework that integrates five optimization algorithms and three machine learning models is developed to predict the CS and SF performance of SCC. Two datasets containing nine mixture parameters are analyzed to evaluate model accuracy, generalization capability, computational efficiency, and interpretability. The main conclusions are summarized as follows:
(1) Among all optimization methods, SSA demonstrates the strongest hyperparameter search capability and consistently produces the lowest prediction errors. For Dataset 1, SSA achieves RMSE values of 3.65, 4.08, and 6.08 for the RF, XGBoost, and SVR models, respectively. For Dataset 2, the corresponding RMSE values are 27.54, 28.91, and 33.09 for RF, XGBoost, and SVR, respectively. These results confirm the superior global search performance and stability of SSA when optimizing nonlinear regression models.
(2) Among the three prediction models evaluated, RF demonstrates the most balanced and reliable performance. Although XGBoost achieves very high training accuracy for Dataset 1 (R2 = 0.998), its testing accuracy decreases to 0.939, indicating clear overfitting. In contrast, RF maintains strong generalization, with testing R2 values of 0.967 for Dataset 1 and 0.958 for Dataset 2, together with low error levels (RMSE = 2.295 and 23.068, respectively). RF also obtains the highest CRITIC-based comprehensive scores (0.69 for both datasets), confirming its superior robustness and engineering applicability.
(3) Feature importance analysis shows that the top five variables contribute more than 80% of the predictive information for both datasets. For strength prediction, the dominant features are S, W/B, CG, C, and FA, while for slump-flow prediction, the most influential variables are FA, W/B, SP/B, C, and CA. These key features align with established SCC mechanisms, in which mechanical behavior is governed by binder quality and packing density, and flowability is primarily controlled by rheology-related parameters. The high concentration of feature importance underscores the effectiveness of dimensionality reduction.
(4) After retaining only the top five features, the simplified SSA–RF model preserves engineering-level predictive accuracy while reducing computational cost. For Dataset 1, the simplified model achieves R2 = 0.897, RMSE = 4.088, and MAPE = 10.97%, compared with the full-feature performance of R2 = 0.967 and RMSE = 2.295. For Dataset 2, the simplified model maintains R2 = 0.927 and MAPE = 4.59%. At the same time, the computation time decreases from 7.3 s to 5.9 s for Dataset 1 and from 9.7 s to 6.1 s for Dataset 2, corresponding to reductions of 19% and 37%, respectively. These results demonstrate that feature selection offers meaningful computational advantages for real-time prediction and repeated evaluation scenarios. This reduced-input strategy represents a trade-off between accuracy and efficiency. For safety-critical design decisions, the full-feature model is recommended, while the simplified model is more suitable for preliminary screening or situations with limited available input information.
(5) The SHAP and PDP analyses demonstrate that the SSA–RF model captures physically meaningful and interpretable mechanisms. Higher S and lower W/B increase predicted strength by enhancing packing density and reducing porosity, whereas higher FA and SP/B improve flowability through lubrication effects and rheological modification. Nonlinear patterns revealed by the PDPs, such as the monotonic decrease in strength with increasing W/B and the saturation behavior of SP/B for SF, are consistent with established SCC mixture design theory. These results confirm that the proposed model not only achieves high predictive accuracy but also provides strong interpretability and engineering reliability.
It should be noted that the present study is subject to several limitations. The datasets compiled from published literature remain relatively limited in size and may involve variability in material sources and test conditions; therefore, the generalization of the proposed framework to broader SCC mixtures and other concrete types should be further validated using larger and more diverse datasets, preferably with additional controlled experimental verification.
Overall, this study provides a robust, interpretable, and computationally efficient framework for predicting the mechanical and rheological behaviors of SCC. The findings offer practical guidance for mixture optimization, on-site quality control, and intelligent decision-making in modern concrete engineering.

Author Contributions

J.Z.: Conceptualization, Formal analysis, Methodology, Validation. Z.W.: Methodology, Software, Writing—original draft, Visualization. S.S. (Sifan Shen): Data curation, Investigation. S.S. (Shiyu Sheng): Investigation. H.H.: Resources, Formal analysis, Project administration, Funding acquisition. C.H.: Writing—review and editing, Writing—original draft, Supervision, Validation, Visualization, Data curation, Conceptualization. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported by 2025 Taizhou Municipal Science and Technology Projects (25gya13).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Conflicts of Interest

There are no conflicts to declare.

References

  1. Wu, B.; Zhang, H. Compressive behaviour of recycled lump-aggregate concrete with alluvial-proluvial sand under elevated temperature. Constr. Build. Mater. 2025, 505, 144753. [Google Scholar] [CrossRef]
  2. Javid, A.; Kamali, H.; Toufigh, V. Compressive strength prediction of fiber-reinforced concrete under varied temperature conditions using machine learning. Constr. Build. Mater. 2025, 504, 144648. [Google Scholar] [CrossRef]
  3. Chen, J.; Chen, Z.; Ning, F.; Wang, X. Buckling behavior of axially loaded I-section steel-reinforced self-compacting concrete-filled steel tubular columns. J. Build. Eng. 2025, 114, 114422. [Google Scholar] [CrossRef]
  4. Ejaz, A.; Hanif, M.A.; Chatveera, B.; Sua-iam, G. Development of nano-calcium carbonate-modified green self-compacting concrete incorporating recycled tempered glass and waste cotton rope fibers. J. Build. Eng. 2026, 117, 114765. [Google Scholar] [CrossRef]
  5. Mohamed, M.A.; Al-Fakih, A.; Assaggaf, R.; Harun, M. Limestone calcined clay cement-based rubberized self-compacting concrete: Fresh, mechanical, durability, and embodied-carbon assessment. Constr. Build. Mater. 2025, 501, 144358. [Google Scholar] [CrossRef]
  6. De La Rosa, Á.; Ruiz, G.; Moreno, R. Mineral additions as pigments and mechanical property enhancers in self-compacting natural hydraulic lime concrete. J. Build. Eng. 2025, 111, 113393. [Google Scholar] [CrossRef]
  7. De La Rosa, Á.; Ruiz, G. Mix design methodology for self-compacting flexible-fiber reinforced concrete based on rheological and mechanical concepts. Constr. Build. Mater. 2025, 505, 144839. [Google Scholar] [CrossRef]
  8. Kruavit, P.; Sukontasukkul, P.; Jitprapakorn, W.; Sappakittipakorn, M.; Jongvivatsakul, P.; Sae-Long, W.; Damrongwiriyanupap, N.; Chumpol, P.; Pianfuengfoo, S. Parametric and feasibility investigation on drone-assisted placement of self-compacting lightweight concrete. Case Stud. Constr. Mater. 2025, 23, e05225. [Google Scholar] [CrossRef]
  9. Kalauni, K.; Czirak, P.; Chaturvedi, S.; Palou, M.T.; Vedrtnam, A. Performance and design considerations for heavyweight self-compacting concrete using magnetite and barite aggregates. J. Build. Eng. 2025, 111, 113626. [Google Scholar] [CrossRef]
  10. Guo, Y.; Su, H.; Wang, S.; Huang, Z.; Liu, M.; Li, J. Study on the properties of high-performance semi-flowable self-compacting concrete suitable for road works. Constr. Build. Mater. 2025, 503, 144538. [Google Scholar] [CrossRef]
  11. Bahmani, H.; Mostofinejad, D. Sustainable self-compacting concrete: Performance optimization using calcium oxide-activated slag and sugar factory lime waste. Constr. Build. Mater. 2025, 492, 142956. [Google Scholar] [CrossRef]
  12. Mai, H.-V.T.; Nguyen, M.H.; Ly, H.-B. Development of machine learning methods to predict the compressive strength of fiber-reinforced self-compacting concrete and sensitivity analysis. Constr. Build. Mater. 2023, 367, 130339. [Google Scholar] [CrossRef]
  13. Wahab, S.; Abbasi, A.M.; Ahmed, A.; Khan, I.U. Influence of acetic acid treated recycled concrete aggregates on the rheological and mechanical properties of self-compacting concrete: Experiments and machine learning. Constr. Build. Mater. 2025, 491, 142770. [Google Scholar] [CrossRef]
  14. Shah, S.N.R.; Siddiqui, G.R.; Pathan, N. Predicting the behaviour of self-compacting concrete incorporating agro-industrial waste using experimental investigations and comparative machine learning modelling. Structures 2023, 52, 536–548. [Google Scholar] [CrossRef]
  15. de-Prado-Gil, J.; Palencia, C.; Silva-Monteiro, N.; Martínez-García, R. To predict the compressive strength of self compacting concrete with recycled aggregates utilizing ensemble machine learning models. Case Stud. Constr. Mater. 2022, 16, e01046. [Google Scholar] [CrossRef]
  16. He, H.; Shuang, E.; Ai, L.; Wang, X.; Yao, J.; He, C.; Cheng, B. Exploiting machine learning for controlled synthesis of carbon dots-based corrosion inhibitors. J. Clean. Prod. 2023, 419, 138210. [Google Scholar] [CrossRef]
  17. Shan, H.; Ai, L.; He, C.; Li, K. Enhancing multi-objective prediction of settlement around foundation pit using explainable machine learning. J. Civil. Struct. Health Monit. 2025, 15, 3113–3134. [Google Scholar] [CrossRef]
  18. Ai, L.; Zhang, B.; Ziehl, P. A transfer learning approach for acoustic emission zonal localization on steel plate-like structure using numerical simulation and unsupervised domain adaptation. Mech. Syst. Signal Process. 2023, 192, 110216. [Google Scholar] [CrossRef]
  19. Ai, L.; Ziehl, P. Advances in digital twin technology in industry: A review of applications, challenges, and standardization. J. Intell. Const. 2025, 3, 1–19. [Google Scholar] [CrossRef]
  20. Fan, Y.; Yang, G.; Pei, Y.; Cui, X.; Tian, B. A model adapted to predict blast vibration velocity at complex sites: An artificial neural network improved by the grasshopper optimization algorithm. J. Intell. Const. 2025, 3, 9180087. [Google Scholar] [CrossRef]
  21. Wu, S.; Ye, H.; Li, A.; Tu, H.; Xu, S.; Liang, D. A new method for reconstructing building model using machine learning. J. Intell. Const. 2025, 3, 9180041. [Google Scholar] [CrossRef]
  22. Shu, Y.; Wang, P.; Guo, J.; Yin, P.; Lyu, Z.; Jia, S. Detection method and index probability statistical analysis of sand and gravel dam material gradation based on image recognition. J. Intell. Const. 2025, 3, 1–13. [Google Scholar] [CrossRef]
  23. Lu, X.; Dong, K.; Chen, C.; Chen, J.; Gao, W. Multi-scale equivalent modeling and parameter inversion for ultrasonic cavitation erosion of hydraulic concrete. J. Intell. Const. 2026, 4, 9180108. [Google Scholar] [CrossRef]
  24. Zhai, S.; Du, G.; Peng, T.; Wang, Y.; Shang, Z. Probability analysis of vertical drainage improvement for soft soil settlement prediction via a Bayesian back analysis framework and the simplified hypothesis B method. J. Intell. Const. 2025, 3, 9180077. [Google Scholar] [CrossRef]
  25. Khan, A.; Li, Y.; Shoaib, M.; Sajjad, U.; Rui, F. Utilizing machine learning and digital twin technology for rock parameter estimation from drilling data. J. Intell. Const. 2025, 3, 9180088. [Google Scholar] [CrossRef]
  26. Anand, P.; Pratap, S. Enhancing the mechanical performance of sustainable high-performance concrete using thermally treated natural fibers: Experimental evaluation and machine learning-based predictive modeling. Constr. Build. Mater. 2025, 493, 143187. [Google Scholar] [CrossRef]
  27. Dong, Y.; Tang, J.; Xu, X.; Li, W.; Feng, X.; Lu, C.; Hu, Z.; Liu, J. A new method to evaluate features importance in machine-learning based prediction of concrete compressive strength. J. Build. Eng. 2025, 102, 111874. [Google Scholar] [CrossRef]
  28. Abdellatief, M.; Elsafi, M.; Murali, G.; ElNemr, A. Comparative evaluation of hybrid machine learning models for predicting the strength of metakaolin-based geopolymer concrete enhanced with gaussian noise augmentation. J. Build. Eng. 2025, 111, 113302. [Google Scholar] [CrossRef]
  29. Kellouche, Y.; Tayeh, B.A.; Chetbani, Y.; Zeyad, A.M.; Mostafa, S.A. Comparative study of different machine learning approaches for predicting the compressive strength of palm fuel ash concrete. J. Build. Eng. 2024, 88, 109187. [Google Scholar] [CrossRef]
  30. He, D.; Pan, X.; Zhan, B.; Shang, L.; Cao, J. Optimizing the low-friction performance of WC/a-C films under low-humidity atmospheric conditions through orthogonal design and random forest algorithm. Tribol. Int. 2026, 214, 111331. [Google Scholar] [CrossRef]
  31. Qiu, J.; Xiao, Z.; Xu, W.; Zhou, Y. Soft probability based random forest for financial distress prediction. Inf. Sci. 2026, 729, 122870. [Google Scholar] [CrossRef]
  32. Han, Y.; Yu, F.; Liu, J.; Zhang, Z.; Ma, B.; Wang, L.; Geng, Z. An improved tuna swarm optimization algorithm based XGBOOST classification method for food risk evaluation. Swarm Evol. Comput. 2026, 100, 102249. [Google Scholar] [CrossRef]
  33. Gianoli, A. Unlocking patterns in urban land use efficiency: A global analysis using XGBoost and Bayesian networks. Land Use Policy 2026, 160, 107838. [Google Scholar] [CrossRef]
  34. Zhao, X.; Xu, S. Multi-objective optimization framework for bearings based on thermal network modeling and PSO-SVR prediction. Int. Commun. Heat Mass Transfer 2026, 172, 110271. [Google Scholar] [CrossRef]
  35. Taheri, B.; Hosseini, S.A.; Sedighizadeh, M. Novel hybrid fuzzy-SVR model for fault detection in VSC-HVDC transmission lines. Int. J. Electr. Power Energy Syst. 2025, 172, 111222. [Google Scholar] [CrossRef]
  36. Bermejo, I.; Grimm, S. MSR17: Can machine learning support survival model selection to inform economic evaluations? Exploring K-fold cross validation based model selection in seven datasets. Value Health 2024, 27, S441. [Google Scholar] [CrossRef]
  37. Wang, Z.; Lei, Y.; Cui, H.; Miao, H.; Zhang, D.; Wu, Z.; Liu, G. Enhanced RBF neural network metamodelling approach assisted by sliced splitting-based K-fold cross-validation and its application for the stiffened cylindrical shells. Aerosp. Sci. Technol. 2022, 124, 107534. [Google Scholar] [CrossRef]
  38. He, Y.; Wu, R.; Ruan, J.; Nie, P.; Ruan, J.; Liu, Z.; He, G.; Xiong, W.; Xiong, A. Multi-strategy improved beluga whale optimization algorithm for controller parameters to enhance feed distribution uniformity in crayfish aquaculture boat. Comput. Electron. Agric. 2025, 239, 110846. [Google Scholar] [CrossRef]
  39. Zhang, J.; Wang, Q.; Zhao, D.; Xu, Y.; Zhang, L.; Jin, J.; Li, X. An additive attention-enhanced BiGRU model optimized by beluga whale algorithm for SOEC degradation predicting. Appl. Energy 2025, 402, 126837. [Google Scholar] [CrossRef]
  40. Gao, Y.; Zhang, W.; Gou, J.; Zhang, S.; Liu, Y.; Soja, B. CIDR interpolation: An enhanced SSA-based temporal filling framework for restoring continuity in downscaled GRACE(-FO) TWSA products. J. Hydrol. 2026, 664, 134606. [Google Scholar] [CrossRef]
  41. El-Qoraychy, F.-Z.; Du, W.; Abbas-Turki, A.; Dridi, M.; Créput, J.-C.; Mualla, Y.; Koukam, A. Distributed PSO for dynamic intersection management: Enhancing traffic flow and safety in connected autonomous vehicles. Expert Syst. Appl. 2026, 303, 130200. [Google Scholar] [CrossRef]
  42. Sun, Q.; Cheng, S.; Li, L.; Wang, H.; Liu, X.; Zhao, S. Microseismic source localization in tunnels under seepage conditions: An optimized approach using NFBG sensors and the GWO-SA hybrid algorithm. Tunn. Undergr. Space Technol. 2026, 168, 107197. [Google Scholar] [CrossRef]
  43. Niu, L.; Yin, Q.; Zhu, W.; Chen, Z.; Wang, A.; Jiang, Z.; Chen, T. Online monitoring and data correction methods of blast induced ground vibration based on WOA-BP. Measurement 2026, 258, 119567. [Google Scholar] [CrossRef]
  44. Khodabakhshian, R.; Lavasani, H.S.; Weller, P. Optimization of FTIR-PLS models for adulteration detection in sesame oil: A comparative study of genetic algorithm, particle swarm optimization, and a hybrid GA-PSO approach. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2026, 348, 127261. [Google Scholar] [CrossRef]
  45. Zhang, X.; Wang, B.; Chang, J. Adsorption behavior and solidification mechanism of Pb(II) on synthetic C-A-S-H gels with different Ca/Si and Al/Si ratios in high alkaline conditions. Chem. Eng. J. 2024, 493, 152344. [Google Scholar] [CrossRef]
  46. Ahmad, Z.; Qureshi, M.I.; Ahmad, F.; El Ouni, M.H.; Asghar, M.Z.; Ghazouani, N. Effect of macro synthetic fiber (MSF) on the behavior of conventional concrete and the concrete containing e-waste aggregates. Mater. Struct. 2025, 58, 234. [Google Scholar] [CrossRef]
  47. Zhan, M.; Xu, M.; Lin, W.; He, H.; He, C. Graphene oxide research: Current developments and future directions. Nanomaterials 2025, 15, 507. [Google Scholar] [CrossRef]
  48. Lin, R.-S.; Liao, Y.; Fu, C.; Pan, T.-H.; Guo, R.; Wang, X.-Y. Mechanism analysis of microwave-carbonation solidification for carbide slag-based low-carbon materials. Cem. Concr. Compos. 2025, 157, 105938. [Google Scholar] [CrossRef]
  49. Ahmad, F.; Jamal, A.; Iqbal, M.; Alqurashi, M.; Almoshaogeh, M.; Al-Ahmadi, H.M.; Hussein, E.E. Performance evaluation of cementitious composites incorporating nano graphite platelets as additive carbon material. Materials 2021, 15, 290. [Google Scholar] [CrossRef]
  50. Han, X.; Wang, B.; Feng, J. Relationship between fractal feature and compressive strength of concrete based on MIP. Constr. Build. Mater. 2022, 322, 126504. [Google Scholar] [CrossRef]
  51. Wang, B.; Ding, W.; Fan, C.; Liu, F.; Lu, W.; Yang, H. Solidification performance and mechanism of C-S-H gel for pb(II), zn(II), and cd(II). J. Build. Eng. 2025, 99, 111464. [Google Scholar] [CrossRef]
Figure 1. Overall workflow of the proposed machine learning framework for SCC property prediction.
Figure 2. Overall structure and operational process of the Random Forest algorithm.
Figure 3. Cross-validation algorithm diagram.
Figure 4. Heatmap of the correlation matrix of input variables for the 28-day CS dataset.
Figure 5. Heatmap of the correlation matrix of input variables for the SF dataset.
Figure 6. Relationships between the nine input features and 28-day CS.
Figure 7. Relationships between the nine input features and SF.
Figure 8. Convergence curves for Dataset 1.
Figure 9. Convergence curves for Dataset 2.
Figure 10. Comparison of model RMSEmin under various optimization algorithms.
Figure 11. Comparison of actual and predicted values of ML models for Dataset 1.
Figure 12. Comparison of actual and predicted values of ML models for Dataset 2.
Figure 13. Final scores of different ML models.
Figure 14. Feature importance percentages of the RF model.
Figure 15. SHAP summary plots for global feature contributions in the SSA–RF model.
Figure 16. PDPs illustrating the marginal effects of input variables on the predicted 28-day CS (Dataset 1).
Figure 17. PDPs illustrating the marginal effects of input variables on the predicted SF (Dataset 2).
Table 1. Formulations of the performance metrics.
| Evaluation Indicator | Equation |
|---|---|
| R2 | $R^{2} = 1 - \dfrac{\sum_{j=1}^{n}(y_{j}-\hat{y}_{j})^{2}}{\sum_{j=1}^{n}(y_{j}-\bar{y})^{2}}$ |
| MAE | $\mathrm{MAE} = \dfrac{1}{n}\sum_{j=1}^{n}\lvert y_{j}-\hat{y}_{j}\rvert$ |
| MSE | $\mathrm{MSE} = \dfrac{1}{n}\sum_{j=1}^{n}(y_{j}-\hat{y}_{j})^{2}$ |
| RMSE | $\mathrm{RMSE} = \sqrt{\dfrac{1}{n}\sum_{j=1}^{n}(y_{j}-\hat{y}_{j})^{2}}$ |
| MAPE | $\mathrm{MAPE} = \dfrac{100\%}{n}\sum_{j=1}^{n}\left\lvert \dfrac{y_{j}-\hat{y}_{j}}{y_{j}} \right\rvert$ |

Note: $y_{j}$ is the actual value; $\hat{y}_{j}$ is the predicted value; $\bar{y}$ is the mean of the actual values; $n$ is the number of samples.
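The five indicators in Table 1 can be computed directly from paired actual and predicted values. A minimal NumPy sketch (the function name `regression_metrics` is illustrative, not from the paper):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute the five evaluation indicators defined in Table 1."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    residuals = y_true - y_pred
    mse = np.mean(residuals ** 2)
    return {
        # R2: 1 minus residual sum of squares over total sum of squares
        "R2": 1.0 - np.sum(residuals ** 2) / np.sum((y_true - y_true.mean()) ** 2),
        "MAE": np.mean(np.abs(residuals)),
        "MSE": mse,
        "RMSE": np.sqrt(mse),
        # MAPE in percent; assumes no actual value is zero
        "MAPE": 100.0 * np.mean(np.abs(residuals / y_true)),
    }
```

Note that RMSE is simply the square root of MSE, which is why the paired values in Tables 6 and 7 satisfy MSE = RMSE².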
Table 2. Statistical values of the dataset for prediction of 28-day CS.
| Statistic | C (kg/m3) | CG (MPa) | FA (kg/m3) | LP (kg/m3) | W/B | S (kg/m3) | CA (kg/m3) | MAXD (mm) | SP/B | 28-Day CS (MPa) |
|---|---|---|---|---|---|---|---|---|---|---|
| count | 145 | 145 | 145 | 145 | 145 | 145 | 145 | 145 | 145 | 145 |
| mean | 276.54 | 44.15 | 131.00 | 31.23 | 0.46 | 846.07 | 818.66 | 17.03 | 0.0071 | 40.02 |
| std | 69.69 | 3.72 | 85.39 | 79.54 | 0.13 | 115.51 | 115.38 | 2.96 | 0.0103 | 14.37 |
| min | 150.00 | 42.50 | 0.00 | 0.00 | 0.22 | 478.00 | 500.00 | 9.50 | 0.0000 | 10.20 |
| 25% | 220.00 | 42.50 | 60.00 | 0.00 | 0.36 | 775.00 | 773.00 | 15.00 | 0.0022 | 27.70 |
| 50% | 260.00 | 42.50 | 159.00 | 0.00 | 0.45 | 856.00 | 837.00 | 16.00 | 0.0043 | 38.10 |
| 75% | 325.00 | 42.50 | 180.00 | 0.00 | 0.55 | 916.00 | 853.00 | 20.00 | 0.0060 | 51.00 |
| max | 500.00 | 52.50 | 350.00 | 330.00 | 0.87 | 1079.00 | 1171.00 | 20.00 | 0.0450 | 73.50 |
Table 3. Statistical values of the dataset for prediction of SF.
| Statistic | C (kg/m3) | CG (MPa) | FA (kg/m3) | LP (kg/m3) | W/B | S (kg/m3) | CA (kg/m3) | MAXD (mm) | SP/B | SF (mm) |
|---|---|---|---|---|---|---|---|---|---|---|
| count | 224 | 224 | 224 | 224 | 224 | 224 | 224 | 224 | 224 | 224 |
| mean | 343.19 | 42.94 | 23.38 | 14.46 | 0.40 | 828.10 | 816.14 | 17.70 | 0.0806 | 638.86 |
| std | 115.41 | 2.06 | 70.61 | 39.09 | 0.09 | 117.57 | 103.11 | 2.85 | 0.2422 | 94.27 |
| min | 150.00 | 42.50 | 0.00 | 0.00 | 0.22 | 323.33 | 500.00 | 9.50 | 0.0000 | 200.00 |
| 25% | 254.12 | 42.50 | 0.00 | 0.00 | 0.33 | 785.00 | 778.00 | 15.00 | 0.0025 | 603.75 |
| 50% | 328.00 | 42.50 | 0.00 | 0.00 | 0.37 | 860.00 | 815.00 | 19.00 | 0.0100 | 650.00 |
| 75% | 400.00 | 42.50 | 0.00 | 0.00 | 0.47 | 900.25 | 847.70 | 20.00 | 0.0127 | 700.00 |
| max | 720.00 | 52.50 | 330.00 | 300.00 | 0.72 | 1066.00 | 1171.00 | 20.00 | 1.0000 | 880.00 |
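Summary tables of this form (count, mean, std, min, quartiles, max) are exactly what pandas' `DataFrame.describe()` reports. A minimal sketch with a hypothetical two-column subset of the mixture variables (the values below are illustrative, not the study's data):

```python
import pandas as pd

# Hypothetical mini-dataset with two of the nine mixture variables;
# the real datasets contain 145 (CS) and 224 (SF) samples.
df = pd.DataFrame({
    "C": [150.0, 260.0, 325.0, 500.0],    # cement content, kg/m3
    "W/B": [0.22, 0.45, 0.55, 0.87],      # water-binder ratio
})

# describe() yields the rows reported in Tables 2 and 3:
# count, mean, std, min, 25%, 50%, 75%, max.
stats = df.describe()
print(stats.loc["max", "C"])  # → 500.0
```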
Table 4. Hyperparameter optimization settings for RF, XGBoost, and SVR.
| Hyperparameter | RF | XGBoost | SVR |
|---|---|---|---|
| n_estimators | [50, 1000] | [50, 1000] | – |
| max_depth | [3, 20] | [3, 20] | – |
| subsample | [0.6, 1.0] | [0.6, 1.0] | – |
| colsample_bytree | [0.6, 1.0] | [0.6, 1.0] | – |
| learning_rate | – | [0.01, 0.3] | – |
| min_child_weight | – | [1, 10] | – |
| kernel | – | – | RBF |
| C (Penalty) | – | – | [1, 200] |
| gamma | – | – | [1 × 10⁻⁴, 1 × 10⁻¹] |
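One common way to wire search spaces like those in Table 4 into a metaheuristic is to encode each hyperparameter as a bounded real value and decode candidate vectors before each fitness evaluation. The sketch below is a generic illustration under that assumption, using a subset of the RF bounds; `RF_SPACE` and `decode` are hypothetical names, and the paper's SSA/PSO/GWO/GA/WOA implementations are not reproduced here.

```python
# Hypothetical encoding of part of the RF search space from Table 4
# as (lower, upper) bounds. A metaheuristic proposes candidates in
# [0, 1]^d; each is mapped back to hyperparameters before fitting.
RF_SPACE = {
    "n_estimators": (50, 1000),
    "max_depth": (3, 20),
}

def decode(vector, space=RF_SPACE):
    """Map a candidate in [0, 1]^d onto the bounded search space."""
    params = {}
    for x, (name, (lo, hi)) in zip(vector, space.items()):
        # Both of these hyperparameters are integers, so round
        # after rescaling; continuous ones would skip the rounding.
        params[name] = int(round(lo + x * (hi - lo)))
    return params

print(decode([0.0, 1.0]))  # → {'n_estimators': 50, 'max_depth': 20}
```

The fitness of each decoded candidate would then be the cross-validated RMSE of the model trained with those hyperparameters, which is the quantity the optimizers minimize in Table 5.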
Table 5. Optimization algorithm corresponding to the best fitness.
| ML Model | Optimization Algorithm | RMSEmin (Dataset 1) | RMSEmin (Dataset 2) |
|---|---|---|---|
| RF | SSA | 3.65 | 27.54 |
| RF | PSO | 3.75 | 28.21 |
| RF | GWO | 3.74 | 28.18 |
| RF | GA | 3.88 | 29.50 |
| RF | WOA | 3.90 | 29.03 |
| XGBoost | SSA | 4.08 | 28.91 |
| XGBoost | PSO | 4.19 | 29.15 |
| XGBoost | GWO | 4.11 | 29.20 |
| XGBoost | GA | 4.33 | 30.19 |
| XGBoost | WOA | 4.36 | 30.21 |
| SVR | SSA | 6.08 | 33.09 |
| SVR | PSO | 6.01 | 32.88 |
| SVR | GWO | 6.19 | 33.31 |
| SVR | GA | 6.41 | 34.10 |
| SVR | WOA | 6.35 | 35.55 |
Table 6. Evaluation metrics of ML models on Dataset 1.
| Model | Set | R2 | RMSE | MSE | MAE | MAPE |
|---|---|---|---|---|---|---|
| RF | Training | 0.984 | 1.819 | 3.312 | 1.450 | 3.873 |
| RF | Test | 0.967 | 2.295 | 5.267 | 1.926 | 5.167 |
| XGBoost | Training | 0.998 | 0.604 | 0.365 | 0.389 | 1.084 |
| XGBoost | Test | 0.939 | 3.159 | 9.982 | 2.500 | 6.709 |
| SVR | Training | 0.908 | 4.416 | 19.506 | 2.814 | 7.050 |
| SVR | Test | 0.857 | 4.823 | 23.265 | 3.310 | 8.932 |
Table 7. Evaluation metrics of ML models on Dataset 2.
| Model | Set | R2 | RMSE | MSE | MAE | MAPE |
|---|---|---|---|---|---|---|
| RF | Training | 0.981 | 12.271 | 150.578 | 9.529 | 1.583 |
| RF | Test | 0.958 | 23.068 | 532.176 | 21.855 | 3.691 |
| XGBoost | Training | 0.985 | 10.697 | 114.431 | 8.814 | 1.471 |
| XGBoost | Test | 0.949 | 25.659 | 658.389 | 22.271 | 4.080 |
| SVR | Training | 0.922 | 24.623 | 606.302 | 10.280 | 1.783 |
| SVR | Test | 0.888 | 37.993 | 1443.486 | 22.738 | 4.131 |
Table 8. Comparison of the original model and the model with Top-5 features on Dataset 1.
| Number of Features | R2 | RMSE | MSE | MAE | MAPE | Running Time (s) |
|---|---|---|---|---|---|---|
| All | 0.967 | 2.295 | 5.267 | 1.926 | 5.167 | 7.3 |
| Top 5 | 0.897 | 4.088 | 16.713 | 3.551 | 10.965 | 5.9 |
Table 9. Comparison of the original model and the model with Top-5 features on Dataset 2.
| Number of Features | R2 | RMSE | MSE | MAE | MAPE | Running Time (s) |
|---|---|---|---|---|---|---|
| All | 0.958 | 23.068 | 532.176 | 21.855 | 3.691 | 9.7 |
| Top 5 | 0.927 | 30.769 | 946.756 | 27.068 | 4.594 | 6.1 |
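The Top-5 simplification reported in Tables 8 and 9 amounts to ranking features by RF importance and retraining on the dominant subset. A hedged sketch on synthetic data, using scikit-learn's impurity-based `feature_importances_` as a stand-in for the paper's importance analysis (neither the datasets nor the SSA-tuned hyperparameters are reproduced here):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in: 100 samples, 9 features, with only features
# 0 and 4 actually driving the target.
rng = np.random.default_rng(0)
X = rng.random((100, 9))
y = 3.0 * X[:, 0] + 2.0 * X[:, 4] + rng.normal(0.0, 0.05, 100)

# Fit the full model and rank features by impurity importance.
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
top5 = np.argsort(rf.feature_importances_)[::-1][:5]

# Retrain on the five dominant features only: a smaller model that
# is cheaper to train and evaluate, mirroring the reduced running
# times in Tables 8 and 9.
rf_small = RandomForestRegressor(n_estimators=100, random_state=0)
rf_small.fit(X[:, top5], y)
```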
Share and Cite

Zhang, J.; Wang, Z.; Shen, S.; Sheng, S.; He, H.; He, C. Hybrid Explainable Machine Learning Models with Metaheuristic Optimization for Performance Prediction of Self-Compacting Concrete. Buildings 2026, 16, 225. https://doi.org/10.3390/buildings16010225
