Article

Prediction of the Shear Strengths of New–Old Interfaces of Concrete Based on Data-Driven Methods Through Machine Learning

1 School of Civil Engineering and Architecture, Wuhan Institute of Technology, Wuhan 430074, China
2 School of Architecture Engineering, College of Post and Telecommunication of WIT, Wuhan 430073, China
3 Hubei Provincial Engineering Research Center for Green Civil Engineering Materials and Structures, Wuhan Institute of Technology, Wuhan 430074, China
4 School of Highway, Chang’an University, Xi’an 710064, China
* Author to whom correspondence should be addressed.
Buildings 2025, 15(17), 3137; https://doi.org/10.3390/buildings15173137
Submission received: 31 July 2025 / Revised: 23 August 2025 / Accepted: 28 August 2025 / Published: 1 September 2025

Abstract

Accurate prediction of shear strength at the interface between new and old concrete is vital for the structural performance of repaired and composite systems. However, the underlying shear transfer mechanism is highly nonlinear and influenced by multiple interdependent factors, which limit the applicability of conventional empirical models. To address this challenge, an interpretable machine-learning (ML) framework is proposed. The latest database of 247 push-off specimens was compiled from the recent literature, incorporating diverse interface types and design parameters. The hyperparameters of the adopted ML models were optimized via a grid search to ensure the predictive performance on the updated database. Among the evaluated algorithms, eXtreme Gradient Boosting (XGBoost) demonstrated the best predictive performance, with R2 = 0.933, RMSE = 0.663, MAE = 0.486, and MAPE = 12.937% on the testing set, outperforming Support Vector Regression (SVR), Random Forest (RF), and adaptive boosting (AdaBoost). Compared with the best empirical model (AASHTO, R2 = 0.939), XGBoost achieved significantly lower prediction errors (e.g., RMSE was reduced by 67.8%), enhanced robustness (COV = 0.176 vs. 0.384), and a more balanced mean ratio (1.054 vs. 1.514). The SHapley Additive exPlanations (SHAP) method was employed to interpret the model predictions, identifying the shear reinforcement ratio as the most influential factor, followed by interface type, interface width, and concrete strength. These results confirm the superior accuracy, generalizability, and explainability of XGBoost in modeling the shear behaviors of new–old concrete interfaces.

1. Introduction

Concrete is one of the most widely used construction materials in engineering practice due to its excellent compressive strength, durability [1], and the abundant availability of its raw components. Nevertheless, weak planes or interfaces often develop between concrete elements during construction, primarily as a result of improper casting procedures, poor surface treatment, or insufficient bonding quality [2]. Such weak interfaces are frequently found in critical structural locations, including beam–column joints, corbels, and areas involving repair or strengthening interventions, as illustrated in Figure 1 [3]. It is noteworthy that these concrete structural interfaces are often governed by shear forces, making the in-depth understanding and optimization of the shear transfer behavior at these interfaces crucial. Based on formation methods and crack states, concrete interfaces are categorized into three types: cold-jointed interfaces, uncracked interfaces, and pre-cracked interfaces [4]. Over the past few decades, multiple theories have been proposed to describe the shear transfer behaviors of different types of concrete interfaces, and based on these theories, several formulae for calculating interfacial shear strength have been developed [5,6,7,8,9,10,11,12,13,14].
Over the years, various theoretical and empirical models have been proposed to estimate the shear strengths of concrete-to-concrete interfaces, most of which are built upon the classical shear-friction theory [15]. This theory assumes that the interface can mobilize frictional resistance under shear, supplemented by cohesion and dowel action contributed by aggregate interlock or reinforcement crossing the interface [6]. Based on this framework, many design-oriented formulae have been developed to quantify interfacial shear strength, including those adopted in the American Concrete Institute’s (ACI’s) Building Code Requirements for Structural Concrete (ACI 318–19) (hereafter, “ACI”) [16], Canadian Highway Bridge Design Code (CSA S6:19) (hereafter, “CSA”) [17], and AASHTO LRFD Bridge Design Specifications (hereafter, “AASHTO”) [18]. However, most of these models are calibrated using limited experimental datasets and often overlook critical influencing factors, such as crack width, concrete heterogeneity, or dowel stiffness [19,20]. For instance, the ACI model considers only the reinforcement contribution while excluding the roles of concrete cohesion and surface roughness. Although models like AASHTO and CSA partially address the concrete contribution, they still fail to systematically incorporate multi-factor coupling effects, like dowel action and crack morphology [16,17,18,21]. In practice, the interfacial shear behavior results from a highly nonlinear interaction of material properties, surface features, and loading conditions. Therefore, traditional empirical models, with their simplified linear assumptions, struggle to accurately capture the full complexity of the shear mechanism across bonded concrete interfaces [22].
In recent years, data-driven machine-learning (ML) methods have gained widespread traction in the engineering community due to their exceptional ability to model nonlinear and multivariate relationships [23]. With the support of open-source libraries such as TensorFlow, PyTorch, Scikit-learn, Keras, and eXtreme Gradient Boosting (XGBoost), ML frameworks provide flexible and scalable tools for predictive modeling and variable analysis [24,25,26]. These approaches have been increasingly adopted in concrete-related studies, including the prediction of compressive strength in lightweight foam concrete, strength estimation of fiber-reinforced concrete beams, and pattern recognition of surface cracks [27,28,29,30,31]. In addition, ML techniques have also been used to evaluate the diffusion behavior of chloride ions, enabling more accurate assessments of concrete durability [32,33]. Furthermore, ML has demonstrated significant effectiveness in enhancing the accuracy of shear strength predictions for new-to-old concrete interfaces. Deep learning, a subfield of ML, has exhibited exceptional performance across various domains but is not well-suited for this particular application. This limitation arises primarily from two factors: First, deep learning models generally require extensive datasets—typically ranging from 10³ to 10⁶ samples—to achieve stable and reliable performance. In contrast, datasets used for predicting shear strength at concrete interfaces typically contain fewer than 1000 samples. Second, the inherent lack of interpretability of complex deep learning models further restricts their practical applicability in engineering contexts. Conversely, ensemble ML algorithms—such as adaptive boosting (AdaBoost), XGBoost, random forest (RF), and Support Vector Regression (SVR)—are more suitable for problems involving limited data. These methods provide strong robustness and generalization abilities through techniques such as boosting, bagging, and inherent regularization. Combined with enhanced interpretability, this makes them more suitable for predicting shear strength at new-to-old concrete interfaces. For instance, Xu et al. [22] utilized Shapley values in combination with XGBoost to analyze 217 records of concrete interfacial shear performance, achieving effective predictions of the shear-bearing capacity. Similarly, Zhong et al. [34] employed ML models, including multiple linear regression, SVR, and random forest regression (RFR), to predict the shear strength of cold joints in concrete. Their research demonstrated that the RFR model outperformed traditional mechanical methods, with the concrete strength, interfacial shear key, and fiber length identified as the key factors influencing the shear strength. Yan et al. [35] employed six ML models, including Decision Trees, RF, XGBoost, and Artificial Neural Networks, to predict the shear capacity of the interface between new and old concrete. However, current research has not quantified the diversity of input data and data types, relying on only a limited number of factors for prediction. Especially when dealing with specimens of different materials and reinforcement ratios, this approach may fail to meet accuracy requirements. Moreover, the diversity of ML algorithms makes it crucial to compare different algorithms in order to select the optimal model for predicting the shear strength at new–old concrete interfaces.
With the advancement of research and the continuous expansion of datasets, it is crucial to emphasize that the optimization of predictive model hyperparameters tailored to evolving datasets can significantly enhance prediction performance and generalizability.
To address the shortcomings of the existing research, this study developed an ML-based predictive framework for the shear capacity of new–old concrete interfaces. A database of 247 push-off tests was compiled from the existing literature, covering a wide range of material properties, interfacial conditions, and reinforcement configurations. Four widely used ML algorithms—AdaBoost, XGBoost, RF, and SVR—were evaluated based on their prediction accuracy. Each model was optimized using tenfold cross-validation, with hyperparameter tuning performed specifically for the four selected ML algorithms based on the dataset used in this study. Performance was assessed through standard indicators: the coefficient of determination (R2), root-mean-square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). The SHAP method was then employed to quantify the influence of each input variable, offering interpretability to the model outputs. Finally, the predictive capability of the proposed model was benchmarked against three established empirical design codes—ACI, AASHTO, and CSA—to validate its practical applicability.

2. Experimental Database and Feature Engineering

2.1. Database

In recent years, Z-shaped push-off tests have been extensively employed to investigate the performances of concrete interface joints, and they remain among the most commonly utilized methods, resulting in the accumulation of a substantial experimental dataset [36]. Motivated by this trend, the present study systematically gathered a total of 247 cold-joint data points through an exhaustive review of the existing literature, thereby establishing a comprehensive experimental database. Notably, all the experimental data in this database were obtained exclusively from published studies employing typical Z-shaped push-off tests, guaranteeing the consistency and comparability of the dataset. A schematic representation of the Z-shaped push-off test is shown in Figure 2.

2.2. Feature Engineering

In this paper, nine input parameters were selected based on the design attributes of the collected data. These parameters included the compressive strengths of the concrete (the lower value, fcmin, and the higher value, fcmax), the shear reinforcement ratio (ρ), the yield strength of the shear reinforcement (fy), the diameter of the shear reinforcement (db), the number of shear reinforcement bars (nb), the type of concrete interface, and the width (b) and height (h) of the joint interface. The impacts of these input parameters on the interfacial shear strength were studied.
With a sample size of 247, selecting nine input parameters meets the basic requirements for an ML regression problem [37]. For specimens whose successive casts have significantly different compressive strengths, considering only the lower compressive strength is clearly inadequate. Therefore, both compressive strengths of the successively cast concretes are included as input parameters.
It is noteworthy that the compressive strength values were obtained using standard cylindrical specimens, and the results for other types of specimens were converted according to the International Reinforced Concrete Manual [38]. As shown in Table 1, the statistical values of the nine input variables and one output variable are presented, including their ranges, means, and medians. The relationships between each input variable and the output variable are illustrated in Figure 3, indicating a complex nonlinear relationship.
In the process of constructing ML models, it is crucial to conduct multicollinearity detection for various candidate parameters as input variables to reduce computational complexity and enhance predictive accuracy. This approach helps to mitigate the interdependencies between related parameters, thereby improving the model’s generalization ability and interpretability. Subsequently, we employ the Pearson correlation coefficient to visually represent the degree of linear correlation between the parameters. A high absolute value of the correlation coefficient indicates a strong relationship between the two variables; however, when this value exceeds 0.7, it signals the presence of severe multicollinearity, necessitating parameter selection. As illustrated in Figure 4, the Pearson correlation coefficients between the variables are presented.
As seen in the Pearson correlation figure, the correlation coefficient between fcmin and fcmax reaches 0.76, indicating a strong linear relationship. However, in practical applications, the strength of old concrete is inevitably influenced by environmental factors, such as complex secondary hydration, chloride ion transport, and the initiation and propagation of internal microcracks [35]. These mechanisms lead to a nonlinear relationship between the strengths of the new and old concrete, necessitating the simultaneous consideration of both parameters despite their linear correlation. For the other parameters, although some exhibit weaker linear correlations, their individual impacts on the prediction results are not overshadowed by dominant collinear relationships. Since no other pair of parameters shows an absolute correlation coefficient greater than 0.7, their linear interdependencies are insufficient to warrant exclusion. These parameters with moderate or weak correlations are therefore retained as direct input variables, ensuring a comprehensive capture of influential factors without over-reliance on collinear variables.
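To make the screening step concrete, the following minimal sketch shows how the Pearson matrix and the 0.7 threshold could be checked with pandas; the file and column names are hypothetical placeholders, not the actual database schema.

```python
import pandas as pd

# Hypothetical file and column names mirroring the nine inputs in Table 1.
df = pd.read_csv("push_off_database.csv")
df["interface"] = df["interface"].map({"S": 0, "R": 1})  # encode smooth/rough

features = ["fc_min", "fc_max", "rho", "fy", "db", "nb", "interface", "b", "h"]
corr = df[features].corr(method="pearson")

# Flag pairs whose absolute correlation exceeds the 0.7 threshold used in the text.
for i, f1 in enumerate(features):
    for f2 in features[i + 1:]:
        r = corr.loc[f1, f2]
        if abs(r) > 0.7:
            print(f"Severe multicollinearity: {f1} vs {f2} (r = {r:.2f})")
```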

3. Interpretable ML Framework

The ML algorithms utilized in this study are AdaBoost, XGBoost, RF, and SVR. In comparison to unsupervised learning algorithms, these four algorithms demonstrate superior performance in achieving local optimal solutions with fewer hyperparameter adjustments, especially when dealing with small databases. This is due to their ability to effectively handle limited datasets while maintaining a high level of accuracy. The complexity and cost associated with structural testing limit the size of the experimental database collected. However, these four algorithms can effectively train and predict models, even with the limited amount of data available.

3.1. AdaBoost

Boosting algorithms are a group of ensemble-learning methods based on a greedy strategy that combines simple weak learners into a stronger ensemble learner. An initial base model is first trained on uniformly weighted data. The algorithm then evaluates the model’s performance and redistributes the weights of the training dataset, giving higher importance to poorly predicted samples in subsequent training stages. The updated weighted dataset is then used to train the next base model. This process is repeated until the predetermined number of models (T) is reached. After all the iterations are completed, the base models are combined in a weighted manner to form a comprehensive ensemble, leveraging the judgments of each individual model to improve the overall accuracy. AdaBoost is the most well-known representative algorithm of the boosting family [26].
The AdaBoost algorithm is based on an additive model, which is a linear combination of learners as follows:
$H(x) = \sum_{t=1}^{T} \alpha_t h_t(x)$
AdaBoost distinguishes itself from conventional boosting techniques by consistently leveraging the entire training dataset across iterations, eschewing the common approach of sequentially selecting sample subsets. Its uniqueness emanates from an iterative weighting recalibration process informed by the misclassification errors of the preceding weak classifier. Each iteration involves adjusting sample weights: Those incorrectly classified receive heightened weights, concentrating the algorithm’s efforts on challenging cases, whereas accurately classified instances see their weights diminished. This iterative and adaptive weighting schema fosters a mechanism of continuous error correction, incrementally assembling a potent ensemble of weak classifiers into a high-performance predictive model with superior accuracy. A concise summary of the AdaBoost algorithm’s procedural workflow is presented in Table 2.
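As an illustration of this workflow, the following sketch fits an AdaBoost regressor with scikit-learn under the tuned settings reported later (Figure 6a); the training arrays are random placeholders standing in for the real 8:2 split.

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X_train, y_train = rng.random((197, 9)), rng.random(197)  # placeholder data

# Weak learner: a shallow regression tree of depth 3; 300 rounds and a 0.2
# learning rate correspond to the tuned values reported in Figure 6a.
# Note: the keyword is "base_estimator" in scikit-learn < 1.2.
model = AdaBoostRegressor(
    estimator=DecisionTreeRegressor(max_depth=3),
    n_estimators=300,   # T, the predetermined number of base models
    learning_rate=0.2,
    random_state=0,
)
model.fit(X_train, y_train)  # sample weights are re-distributed after each round
```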

3.2. XGBoost

The XGBoost regression model relies on gradient boosting to train multiple decision trees, reducing the error of the previous model at each step [39]. Compared to other gradient-boosting techniques, XGBoost uses a regularized loss function to limit overfitting. The final prediction of this model can be defined as follows:
$\hat{y}_i = \sum_{k=1}^{M} \alpha_k f_k(x_i)$
where $\alpha_k$ is the learning rate, and $f_k(x_i)$ represents the prediction of the $k$-th tree for sample $x_i$. The regularized objective minimized during training is as follows:
$\mathrm{obj}(\theta) = \sum_{i=1}^{n} l(y_i, \hat{y}_i) + \sum_{k=1}^{K} \Omega(f_k)$
The first part represents the loss function term, and the second part represents the regularization term, which controls the complexity of the model. Specifically, the regularization term is defined as follows:
$\Omega(f) = \gamma T + \frac{1}{2} \lambda \lVert \omega \rVert^2$
The first term ($\gamma T$) controls the complexity of the tree by penalizing the number of leaf nodes ($T$), while the second term serves as an L2 regularization on the leaf weights ($\omega$). During the training process of the XGBoost model, a greedy algorithm is employed to minimize the loss function.
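A minimal sketch of how this regularized objective maps onto the XGBoost API is given below; the hyperparameter values are illustrative assumptions, not the tuned configuration, and the data are random placeholders.

```python
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X_train, y_train = rng.random((197, 9)), rng.random(197)  # placeholder data

# gamma maps to the per-leaf penalty (gamma * T) and reg_lambda to the L2 term
# (0.5 * lambda * ||w||^2) in the regularized objective above.
model = XGBRegressor(
    n_estimators=100,              # M additive trees
    learning_rate=0.1,             # shrinkage alpha_k applied to each tree
    max_depth=3,
    gamma=0.1,
    reg_lambda=1.0,
    objective="reg:squarederror",  # loss term l(y_i, y_hat_i)
)
model.fit(X_train, y_train)
```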

3.3. RF

A random forest algorithm is an ensemble-learning method based on decision trees [40,41,42,43] and offers lower variance than individual decision tree (DT) algorithms [44]. Its core idea is the bagging strategy, which includes:
(1)
Drawing n samples from the training set with replacement (bootstrap sampling) each time to form a new training set;
(2)
Training M sub-models using these bootstrap samples;
(3)
For regression prediction tasks in this paper, the final prediction is obtained by averaging the predictions of the M sub-models.
The final prediction of the M sub-models is given by the following equation:
$f(x_i) = \frac{1}{M} \sum_{k=1}^{M} f_k(x_i)$
where xi represents the ith sample in the training set, fk represents the kth tree, and M denotes the total number of sub-models. For nonlinear regression prediction, this algorithm can achieve a relatively high level of prediction accuracy.
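The bagging-and-averaging scheme above can be sketched with scikit-learn as follows; the data are random placeholders, and the hyperparameters mirror the tuned values reported later (Figure 6c) for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train, y_train = rng.random((197, 9)), rng.random(197)  # placeholder data

# Each of the M = 300 trees is trained on a bootstrap resample (bagging);
# predict() returns the average of the per-tree predictions, as in the equation above.
model = RandomForestRegressor(
    n_estimators=300,
    max_depth=9,
    min_samples_leaf=2,
    bootstrap=True,
    random_state=0,
)
model.fit(X_train, y_train)
y_pred = model.predict(rng.random((49, 9)))  # mean of the 300 per-tree predictions
```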

3.4. Support Vector Regression

SVR is a regression algorithm based on a Support Vector Machine (SVM). On the basis of the SVM, it fits the data by constructing a pipeline with a fault tolerance range, thereby obtaining the optimal regression model in order to achieve an effective prediction of continuous values [45,46].
The optimization objective of SVR is to find an optimal function such that the prediction errors of all the data points $(x_i, y_i)$ lie within $\epsilon$ while minimizing $\lVert \omega \rVert^2$. Based on the training set, the objective function can be expressed as (5), where $\omega$ represents the weight vector, and $\xi_i$ and $\xi_i^*$ are two slack variables that allow some data points to exceed the error range of $\epsilon$, thereby enhancing the model’s fault tolerance. The relationship between the input parameter ($x$) and the output variable ($f(x)$) can be expressed as (6), where $\phi(x)$ is the mapping function that maps the input parameter ($x$) to the high-dimensional space, and $b$ represents the bias term.
In SVR, choosing the appropriate kernel function is of crucial importance because it determines how the input data are mapped from the low-dimensional space to the high-dimensional space to solve nonlinear problems. Common kernel functions include linear kernel functions, polynomial kernel functions, Gaussian radial basis kernel functions (RBFs), and Sigmoid kernel functions. Among them, the Gaussian RBF kernel is the most commonly used because it handles complex data structures well and shows excellent performance in many practical applications. The kernel function used in this paper is an RBF, expressed as (7), where $u$ and $v$ are two input vectors, and $\gamma$ is the kernel parameter that controls the smoothness of the function.
$\min_{\omega, b} \ \frac{1}{2} \lVert \omega \rVert^2 + C \sum_{i=1}^{m} (\xi_i + \xi_i^*) \quad (5)$
$\text{s.t.} \quad y_i - \omega \cdot x_i - b \le \epsilon + \xi_i, \quad \omega \cdot x_i + b - y_i \le \epsilon + \xi_i^*, \quad \xi_i, \xi_i^* \ge 0$
$f(x) = \omega \cdot \phi(x) + b \quad (6)$
$K(u, v) = e^{-\gamma \lVert u - v \rVert^2} \quad (7)$
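The following sketch shows an RBF-kernel SVR of this form in scikit-learn; C, epsilon, and gamma are illustrative values rather than the tuned combination, and the data are random placeholders.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_train, y_train = rng.random((197, 9)), rng.random(197)  # placeholder data

# RBF-kernel SVR; scaling first, since SVR is sensitive to feature magnitudes.
model = make_pipeline(
    StandardScaler(),
    SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma="scale"),
)
model.fit(X_train, y_train)
```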

4. Model Development and Validation

During the model-training process, the third-party open-source ML library scikit-learn (Python 3.13) was used to implement data preprocessing, splitting of the training and test sets, feature scaling, feature selection, and outlier detection. Multiple iterations were carried out until no new outliers were found in the training set [26]. The entire training framework is summarized in Figure 5. The first step is to collect the test data on the interfacial shear resistance of concrete and construct an interfacial shear resistance database covering the key parameters. Subsequently, feature selection is carried out to determine the appropriate combination of input variables. The entire dataset is then divided into a training set and a test set in an 8:2 ratio. The best combination of hyperparameters is selected using a method combining grid search and 10-fold cross-validation. Different ML models are then fitted and verified using the test set. Model performance is evaluated using the following four performance indicators for ML regression: MAPE, MAE, RMSE, and R2. Through a comprehensive analysis of the performance indicators, combined with SHAP analysis, a prediction model for the shear strength at the interface between new and old concrete that meets specification requirements is established.
To mitigate the risk of overfitting and enhance generalizability under this small-data regime, several strategies were employed. First, robust feature selection was conducted to ensure that only physically meaningful variables with strong mechanistic relevance were included in the model, thereby reducing noise and redundancy. Second, ensemble-learning methods, such as XGBoost and RF, were adopted, as their built-in mechanisms are particularly effective in small datasets by controlling model complexity and variance. Finally, to further validate the robustness of the proposed approach, 10-fold cross-validation was used to assess predictive performance across different data partitions.
In this study, a 10-fold cross-validation strategy was employed to evaluate the performances of the ML models. Specifically, the dataset was randomly partitioned into ten equally sized folds. In each iteration, nine folds were used for training, and the remaining fold was reserved for testing, ensuring that every data point was utilized for both training and validation across different runs. This random partitioning minimized potential bias arising from the data split and allowed for a more reliable assessment of model performance. To optimize the hyperparameters of each algorithm, we combined 10-fold cross-validation with a grid search procedure. The grid search exhaustively explored a predefined range of hyperparameter values. The combination of hyperparameters yielding the best average performance across the ten folds was selected for the final model.
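A minimal sketch of this combined grid-search and 10-fold cross-validation procedure is shown below; the data and the two-parameter grid are placeholders for the compiled database and the full search spaces listed in Section 4.3.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X, y = rng.random((247, 9)), rng.random(247)  # placeholder for the compiled database

cv = KFold(n_splits=10, shuffle=True, random_state=0)  # ten random, equal-sized folds

# Illustrative two-parameter grid; the full spaces appear in Section 4.3.
param_grid = {"n_estimators": [100, 200], "max_depth": [3, 5]}

search = GridSearchCV(
    XGBRegressor(objective="reg:squarederror"),
    param_grid,
    cv=cv,
    scoring="neg_root_mean_squared_error",
)
search.fit(X, y)
print(search.best_params_)  # combination with the best mean score across the folds
```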

4.1. Performance Metrics for Structural Reliability

The expressions for the evaluation metrics of the ML model performance are as follows:
$\mathrm{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{P_i - T_i}{T_i} \right| \times 100\%$
$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| P_i - T_i \right|$
$R^2 = 1 - \frac{\sum_{i=1}^{n} (T_i - P_i)^2}{\sum_{i=1}^{n} (T_i - \bar{T})^2}$
$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (P_i - T_i)^2}$
where $P_i$ is the predicted value of the interfacial shear strength, $T_i$ is the measured value, $\bar{T}$ is the mean of the measured values, and $n$ is the sample size. All four models are evaluated using these four indicators. The lower the MAPE, MAE, and RMSE values and the closer the R2 value is to 1, the better the performance of the model.
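These four indicators can be computed as in the following sketch; the sample values at the end are placeholders for illustration only.

```python
import numpy as np
from sklearn.metrics import (mean_absolute_error,
                             mean_absolute_percentage_error, r2_score)

def evaluate(T, P):
    """Return (MAPE in %, MAE, RMSE, R2) for measured T and predicted P."""
    T, P = np.asarray(T, float), np.asarray(P, float)
    rmse = np.sqrt(np.mean((P - T) ** 2))
    return (100.0 * mean_absolute_percentage_error(T, P),
            mean_absolute_error(T, P),
            rmse,
            r2_score(T, P))

print(evaluate([3.6, 4.8, 7.4], [3.5, 5.0, 7.0]))  # placeholder values
```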

4.2. Feature Scaling

After splitting the dataset into training and testing sets, it is necessary to perform feature scaling on the input features of the model to control the feature values within a predetermined range [47]. Ignoring feature scaling and directly training the model may result in slow convergence or even divergence of the algorithm during the training process. Therefore, the input features of the entire training and testing sets are standardized using the StandardScaler module from the scikit-learn package. The standardization expression is as follows:
$X_n = \frac{X - \bar{X}}{\mathrm{std}}$
where $X_n$ represents the input feature value after z-score standardization, $X$ is the raw input feature value, $\bar{X}$ denotes the mean of the feature, and $\mathrm{std}$ is its standard deviation across the sample.
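A minimal sketch of this standardization step is given below; following common practice (an assumption here, as the text does not state it explicitly), the statistics are fitted on the training set only and then applied to the test set to avoid information leakage.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train, X_test = rng.random((197, 9)), rng.random((49, 9))  # placeholder split

scaler = StandardScaler()                  # z-score: (X - mean) / std, per feature
X_train_s = scaler.fit_transform(X_train)  # statistics learned from the training set
X_test_s = scaler.transform(X_test)        # the same statistics applied to the test set
```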

4.3. Hyperparameter Optimization via Grid Search

In ML model training, the optimization of the hyperparameter selection is crucial for improving the model’s performance. Common hyperparameter search techniques include random search, grid search, and Bayesian optimization. In this study, grid search was employed to systematically optimize the hyperparameters of the ML models. Grid search is an exhaustive method that systematically evaluates all the possible combinations in a given hyperparameter space. The main steps of this method include:
(1)
Defining the hyperparameter search space;
(2)
Training the model for each hyperparameter combination;
(3)
Evaluating the performance of each model using cross-validation;
(4)
Selecting the hyperparameter combination with the best performance.
This method was applied to the XGBoost, RF, AdaBoost, and SVR models, with the key hyperparameters evaluated and optimized through 10-fold cross-validation. For instance, in SVR, the optimal parameters (C, ε, and γ) are selected by means of cross-validation, with simultaneous optimization of the parameter combination to mitigate overfitting. The regularization parameter (C > 0) balances the model’s complexity and its tolerance of deviations beyond ε, while ε > 0 defines the tube width within which no penalty is incurred. For nonlinear SVR with an RBF kernel, the kernel coefficient (γ > 0) determines the influence range of each training sample in the high-dimensional feature space. The hyperparameters vary across the models; as examples, we present the hyperparameter search spaces for XGBoost and RF below (a code sketch of these grids follows the lists).
For the XGBoost model, the defined hyperparameter search space is as follows:
(1)
Learning rate (eta): [0.1, 0.2, 0.3];
(2)
Maximum depth of decision trees: [3, 5, 7];
(3)
Minimum leaf node weight: [1, 3, 5];
(4)
Subsample ratio: [0.8, 0.9, 1.0];
(5)
Feature subsampling ratio (colsample_bytree): [0.8, 0.9, 1.0];
(6)
Learning rate decay coefficient: [0.1, 0.01, 0.05].
For the RF model, the defined hyperparameter search space is as follows:
(1)
Number of trees: [50, 100, 200, 300];
(2)
Maximum tree depth for the base learner: [0, 10, 20, 30, 50];
(3)
Minimum samples to split: [2, 5, 10];
(4)
Minimum samples per leaf: [1, 2, 4].
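Expressed as Python dictionaries ready for a grid search, the two spaces above might look as follows; the mapping of the “learning rate decay coefficient” to gamma and of the depth value 0 to None are assumptions made for illustration.

```python
# Hypothetical dictionaries mirroring the two search spaces listed above.
xgb_grid = {
    "learning_rate": [0.1, 0.2, 0.3],    # eta
    "max_depth": [3, 5, 7],
    "min_child_weight": [1, 3, 5],
    "subsample": [0.8, 0.9, 1.0],
    "colsample_bytree": [0.8, 0.9, 1.0],
    "gamma": [0.1, 0.01, 0.05],          # assumed mapping of the "decay coefficient"
}

rf_grid = {
    "n_estimators": [50, 100, 200, 300],
    "max_depth": [None, 10, 20, 30, 50],  # None assumed for the "0" (unbounded) entry
    "min_samples_split": [2, 5, 10],
    "min_samples_leaf": [1, 2, 4],
}
```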
The choice of hyperparameters for ML models significantly impacts their predictive ability. In this study, the grid search technique, combined with 10-fold cross-validation, was used to evaluate and optimize the key hyperparameters. Figure 6 illustrates the impact of hyperparameter configuration changes on model performance, as measured by the RMSE values from 10-fold cross-validation. For the AdaBoost model in Figure 6a, the model performs best when the maximum depth is 3, the learning rate is 0.2, and the number of weak learners is 300. For the XGBoost model in Figure 6b, the RMSE is minimized when the number of estimators is 100, the learning rate is 0.1, and the maximum depth is 1. For the RF model in Figure 6c, the model performs best when the number of estimators is 300, the minimum number of samples per leaf is 2, and the maximum depth is 9.

4.4. Training and Testing Results

Based on the determined hyperparameters and the existing database, this study compared the performance evaluation metrics of four ML models, namely, AdaBoost, XGBoost, RF, and SVR (as shown in Table 3). As shown in Table 3 and Figure 7, a quantitative statistical analysis was conducted on the predicted interfacial shear strength values of the four models on the training and test sets. The results indicated that the XGBoost model performed well in the training stage, achieving the best values in all the performance metrics (R2 = 0.984; RMSE = 0.325; MAE = 0.186; MAPE = 0.051). XGBoost was also the best-performing algorithm in the test stage. Notably, the R2 values of the RF model were similar between the test and training sets. This phenomenon is speculated to be related to the random partitioning of the dataset, which might have placed relatively easier-to-predict samples in the test set.
The performances of the models can be checked through a simple “score analysis” technique. To carry out this analysis, the models are ranked by their performance in each evaluation metric. The best-performing model is assigned a value of k (in this study, k = 4, the total number of models), while the worst-performing model is given a value of 1. This scoring is conducted separately for the training and test stages, and the final score is calculated by summing the scores of the two stages. Table 4 shows the details of the score analysis, and Figure 8 visualizes the results in a radar chart. From the table and figure, it can be seen that the final score of the XGBoost model is 31, higher than the final scores of the other three models.
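A minimal sketch of this ranking scheme is given below; the SVR value is a placeholder, since only the XGBoost, RF, and AdaBoost test R2 values are reported in the text.

```python
import pandas as pd

def score(values: pd.Series, higher_is_better: bool) -> pd.Series:
    """Rank models 1 (worst) .. k (best) for one metric in one stage."""
    # rank(ascending=True) gives the smallest value rank 1, so the best model
    # (largest value when higher is better) receives k = len(values).
    return values.rank(ascending=higher_is_better).astype(int)

# Test-set R2 values; the SVR entry is a placeholder.
r2_test = pd.Series({"AdaBoost": 0.801, "XGBoost": 0.933, "RF": 0.829, "SVR": 0.780})
print(score(r2_test, higher_is_better=True))  # XGBoost receives k = 4
# Repeating this over all four metrics and both stages, then summing, gives the final score.
```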
In conclusion, based on the above error metrics and R2 values, the XGBoost model demonstrates the highest prediction accuracy and robustness in this study, making it the optimal model among those compared in this paper.

5. Multiscale Model Interpretation

5.1. Global Feature Importance: SHAP Value Aggregation

The SHAP value aggregation quantifies the marginal contribution of each feature across the entire dataset, offering a global measure of the feature importance. Unlike linear correlation measures, such as Pearson’s coefficient, SHAP inherently accounts for nonlinear dependencies and variable interactions, thereby capturing more complex input–output relationships. This enables the model to provide a comprehensive interpretation of how different parameters jointly influence the predicted shear strength. Consequently, the SHAP-based analysis ensures that both linear and nonlinear effects are transparently incorporated into the evaluation of the feature importance.
To more intuitively analyze how the shear capacity of the interface between new and old concrete is affected by input parameters, feature importance analysis was introduced to reveal the relationship between the input parameters and the target value. Additionally, to better interpret ‘black-box’ models, explanatory tools are needed. Currently, widely adopted explanatory tools in the field of ML include Alibi, SHAP, LIME, Skater, ELI5, and Captum [48,49]. This study employs SHAP as the explanatory tool for the model. SHAP effectively provides global and local explanations for the predictions of the XGBoost model. This technique is based on the concept of Shapley values from game theory; the SHAP framework itself was proposed by Lundberg and Lee [50]. The explanatory model established using SHAP can be expressed by the following formula [51]:
$g(x') = \phi_0 + \sum_{j=1}^{K} \phi_j x'_j$
where $x'_j \in \{0, 1\}$ indicates the presence or absence of feature $j$, $K$ is the number of input features, and $\phi_j \in \mathbb{R}$ is the attribution value of feature $j$.
By assigning a $\phi_j$ value to each feature using the classic Shapley value from game theory, the expression is as follows:
$\phi_j = \sum_{S \subseteq N \setminus \{j\}} \frac{|S|! \, (M - |S| - 1)!}{M!} \left[ f_x(S \cup \{j\}) - f_x(S) \right]$
where $N$ is the set of all features, $M$ is the number of features, $S$ is a subset of features that excludes feature $j$, and $f_x(S)$ is the model prediction obtained using only the features in $S$.
As shown in Figure 9a, the XGBoost model with the highest accuracy was selected for analyzing the feature importance of the input variables. The average SHAP values were used to represent the proportion of the feature importance among the input variables. From the figure, it can be seen that the shear reinforcement ratio (ρ) has the highest proportion in feature importance, followed by the type of interface between new and old concrete (smooth or rough). Additionally, the interface width (b), the higher concrete compressive strength (fcmax), the number of shear reinforcement bars (nb), the lower concrete strength (fcmin), and the yield strength of the reinforcement (fy) can significantly affect the predicted results. In contrast, the diameter of the shear reinforcement bars (db) and the height of the interface section (h) have very little impact on the model’s predictions. Figure 9b is a summary plot of the SHAP values for the input features of the entire dataset, showing the distribution of SHAP values for each input feature throughout the dataset. The x-axis represents the SHAP values; a positive value for ρ indicates that the corresponding feature increases the predicted value, while a negative value indicates that the feature decreases it. This figure supports the conclusions drawn from Figure 9a, namely, that ρ has the highest proportion in the feature importance. Additionally, the ranges of values for the bar diameter (db) and the interface height (h) are very narrow, further indicating that these two input variables have relatively low feature importance.
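The global plots described above can be reproduced with the shap library roughly as follows; the inputs and the fitted model here are random stand-ins for the actual database and the tuned XGBoost model.

```python
import numpy as np
import pandas as pd
import shap
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
cols = ["fc_min", "fc_max", "rho", "fy", "db", "nb", "interface", "b", "h"]
X = pd.DataFrame(rng.random((247, 9)), columns=cols)  # placeholder inputs
model = XGBRegressor().fit(X, rng.random(247))        # stand-in for the tuned model

explainer = shap.TreeExplainer(model)   # exact Shapley values for tree ensembles
shap_values = explainer.shap_values(X)

shap.summary_plot(shap_values, X, plot_type="bar")  # mean |SHAP| importance (cf. Figure 9a)
shap.summary_plot(shap_values, X)                   # per-sample distribution (cf. Figure 9b)
```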
To further illustrate the issue, a local interpretation of the XGBoost model was performed using the data of two specimens from the database: specimen number 105 and specimen number 225 [52]. These two specimens have smooth and rough interfaces, respectively. As shown in Figure 10a,b, the waterfall charts of the SHAP values for the individual specimens are plotted. The horizontal axis represents the predicted value, with red and blue bars indicating positive and negative contributions of each variable to the model’s prediction, respectively. The charts also show the contribution of each input variable to the model’s prediction. In Figure 10a,b, the expected prediction value without the influence of input variables is 4.417 MPa. For specimen 105, the final predicted value under the influence of the input variables is 3.541 MPa. The reinforcement ratio (ρ), concrete interface type, interface width (b), and higher concrete compressive strength (fcmax) are the most important input features for this specimen. The interface width (b) and the smooth interface type negatively affect the predicted shear strength.

5.2. Comparison with Empirical Design Models

To further assess the predictive performance of the XGBoost model for the interfacial shear strength, three empirical models—ACI, CSA, and AASHTO—were selected for comparative analysis. A summary of the selected empirical models is provided in Table 5.
In comparison with empirical models, such as AASHTO and CSA, it should be emphasized that these models were applied in their conventional form, without adjustments to align with the specific conditions of the compiled dataset. Both AASHTO and CSA models were originally calibrated for narrower ranges of interfacial conditions and may not fully capture the diversity of interface types, surface roughness, and material properties represented in the present dataset. Consequently, these empirical approaches are prone to biases when applied to the broader range of conditions considered in this study.
Quantitative performance metrics for the three empirical models and the XGBoost model are presented in Table 6. Among the empirical models, the AASHTO model showed the highest predictive accuracy. Specifically, the AASHTO model achieved R2 = 0.939, RMSE = 2.057, MAE = 1.393, and MAPE = 32.235%. In contrast, the ACI model demonstrated inferior performance, with an R2 value of only 0.895. As shown in Table 6, the mean ratio of experimental-to-predicted values is not reported for the ACI model due to its exclusion of the concrete’s contribution in its formulation. In comparison, the AASHTO and CSA models yield mean ratios of experimental-to-predicted values of 1.514 and 2.225, respectively, suggesting that both models tend to produce conservative estimates of shear strength.
Figure 11a compares the experimental shear strengths with the predicted values from the ACI, AASHTO, and CSA models. It is observed that most predictions from the ACI model fall below the y = x line, whereas those from the AASHTO and CSA models are more closely distributed around it. Notably, the AASHTO predictions exhibit tighter clustering around the y = x line than those of the CSA model. While some predicted strengths exceed the experimental results, this trend reflects a conservative design philosophy. Figure 11b presents a comparison between the predicted results of the AASHTO and XGBoost models. As the dataset was partitioned at a training-to-test ratio of 8:2, only 49 test samples are included in this comparison. In the AASHTO model, the deviations between predicted and experimental values are relatively scattered, with most falling between 5% and 20% and several exceeding 20%. In contrast, the XGBoost model maintains prediction deviations consistently within 5%, demonstrating markedly improved predictive accuracy. Furthermore, Table 6 indicates that the COV for the XGBoost model is 0.176, significantly lower than that of the AASHTO model (0.384). This suggests that the XGBoost model exhibits lower prediction dispersion. These results clearly demonstrate the superiority of the XGBoost model over conventional empirical models, underscoring the predictive advantages offered by ML techniques.

6. Conclusions

This study explored the practical application of interpretable ML techniques for predicting the shear strength at the interface between new and old concrete. A comprehensive database of 247 Z-shaped push-off specimens was compiled from recent experimental studies (see Table A1). Four ML models—AdaBoost, XGBoost, RF, and SVR—were developed, with hyperparameters optimized using a grid search and 10-fold cross-validation. The models’ performances were evaluated using multiple metrics, including R2 values, RMSE, MAE, and MAPE, to ensure both accuracy and robustness. To enhance interpretability, SHAP analysis was employed to quantify the contribution of each input feature to the model’s predictions. Additionally, three conventional empirical models—ACI, AASHTO, and CSA—were used as benchmarks to compare against the XGBoost model. The primary conclusions of this study are summarized as follows:
(1)
XGBoost outperformed all the other ML models. Although SVR exhibited a strong performance on the training set, it suffered from a significant decline in the test accuracy, indicating overfitting. RF and AdaBoost achieved R2 values of 0.829 and 0.801, respectively, on the test set. In contrast, the XGBoost model achieved the highest performance, with an R2 value of 0.933, the lowest RMSE (0.663), MAE (0.486), and MAPE (12.937%), confirming its superior generalization and predictive capacities for the interfacial shear strength;
(2)
SHAP analysis provided insights into the key factors influencing model predictions. Both global and local SHAP results consistently identified the shear reinforcement ratio, interface type, higher value of the compressive strength (fcmax), interface width (b), and reinforcement yield strength (fy) as the most influential features. Variables such as ρ, fcmin, and fy generally had positive effects on shear strength predictions. In contrast, smooth interfaces tended to reduce the predicted shear strength, while rough surfaces enhanced it;
(3)
Compared with traditional empirical models, XGBoost demonstrated significantly higher accuracy and stability. Among the empirical approaches, the AASHTO model performed the best, with R2 = 0.939. However, it showed much higher error metrics (RMSE = 2.057, MAE = 1.402, and MAPE = 31.235%) and a greater coefficient of variation (COV = 0.384) than XGBoost (COV = 0.176). Moreover, the mean prediction ratio of XGBoost (1.054) was significantly closer to 1.0 than that of the AASHTO model (1.514), indicating better balance and reduced bias in its predictions. These results confirm the superior predictive consistency, accuracy, and reliability of the XGBoost model in modeling interfacial shear strength.

7. Discussion

Numerous prior studies have relied on smaller and less diverse experimental datasets, potentially limiting the robustness and generalizability of their predictive models. These limitations, as previously discussed, arise from the use of inadequate datasets that fail to encompass the full spectrum of concrete materials, interfacial conditions, and reinforcement configurations—critical factors for accurate shear strength prediction. Furthermore, earlier research primarily depended on a limited set of input parameters, such as concrete strength, and did not sufficiently explore the expansion of input data types to accommodate varying materials and reinforcement ratios. It is worth noting that, owing to differences in datasets and the corresponding model hyperparameters, direct comparisons with previous research data may not be entirely appropriate, as the tuning of optimal hyperparameters significantly affects prediction accuracy.
To address this gap, the present study utilizes a systematically compiled dataset of 247 push-off specimens, incorporating data from multiple sources and covering a wide range of materials, interfacial roughness conditions, concrete strengths, and reinforcement configurations. This comprehensive dataset offers a more representative foundation for developing robust predictive models for the shear strengths of new–old concrete interfaces. The study also goes further by incorporating a more diverse set of parameters, including the shear reinforcement ratio, interface type, and interface width. The expanded feature set enables the model to provide more accurate predictions for specimens with varying material properties and reinforcement configurations. Building upon this foundation, optimal hyperparameter tuning was conducted for the four ML algorithms employed in this study, aimed at maximizing the model’s accuracy and generalizability.
Future research will also explore pathways to integrate the proposed ML framework with mainstream design codes and embed it into widely used engineering software platforms, such as BIM tools, like Autodesk Revit, and FEA software, including ANSYS and ABAQUS, while formulating a roadmap for standardization to promote its practical adoption in engineering practices. The incorporation of additional experimental data is recommended to further improve the model’s performance. Moreover, further experimental investigations may be required to enhance and validate the robustness of the XGBoost model. Future research should also consider a broader range of ML algorithms and influencing factors, including material properties and environmental conditions. For instance, material properties may involve the use of different reinforcement types, such as fiber-reinforced polymer bars, stainless steel bars, and aluminum alloy reinforcements, each exhibiting distinct mechanical behaviors and bonding characteristics with concrete. Additionally, variations in concrete mix designs, including the use of recycled aggregates and supplementary cementitious materials, can also significantly impact interfacial behavior. Environmental conditions may encompass high-temperature exposure, freeze–thaw cycles, chloride-induced corrosion environments, and sustained moisture or drying conditions, all of which can alter the durability and shear performance of the new–old concrete interface. Integrating these factors into predictive models will facilitate the development of more robust and generalizable ML frameworks for structural performance assessment.

Author Contributions

Y.W.: investigation, data curation, and writing—original draft. W.X.: validation, formal analysis, visualization, and writing—original draft. J.C.: methodology, resources, supervision, and writing—review and editing. J.L.: conceptualization, supervision, project administration, and writing—review and editing. F.W.: methodology, validation, and writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

The authors wish to acknowledge financial support from the National Natural Science Foundation of China (Grant No. 52278210) and the Science and Technology Research Project of the Department of Education of Hubei Province (Grant No. B2023465).

Data Availability Statement

Data will be made available on request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Database of the 247 collected cold-joint specimens.
Author(s) | No. | fcmax (MPa) | fcmin (MPa) | ρ | fy (MPa) | db (mm) | nb | Interface (S = smooth, R = rough) | b (mm) | h (mm) | Test strength (MPa)
[53]198.8098.800.00370572.09.52S127.0304.83.65
283.1083.100.00740572.09.54S127.0304.85.66
380.9080.900.00366572.09.52R127.0304.86.20
480.9080.900.00740572.09.54R127.0304.89.43
586.0086.000.01110572.09.56R127.0304.812.67
686.0086.000.01480572.09.58R127.0304.815.25
789.3089.300.01110572.09.56R127.0304.813.09
889.3089.300.01480572.09.58R127.0304.814.48
9101.70101.700.00370572.09.52R127.0304.810.45
10101.70101.700.00740572.09.54R127.0304.811.40
11104.90104.900.01110572.09.56R127.0304.815.48
12104.90104.900.01480572.09.58R127.0304.817.59
[4]1365.6556.640.00502446.08.06S200.0300.04.21
1465.6556.640.00502446.08.06S200.0300.03.61
1565.6556.640.00502446.08.06S200.0300.03.53
1665.6556.640.00502446.08.06S200.0300.03.46
1765.6556.640.00502446.08.06S200.0300.03.54
1865.6556.640.00502446.08.06S200.0300.04.24
1965.6556.640.00502446.08.06S200.0300.03.58
2065.6556.640.00502446.08.06S200.0300.03.57
2165.6556.640.00502446.08.06S200.0300.03.36
2265.6556.640.00502446.08.06S200.0300.03.29
[54]2342.3440.150.00439351.09.52S127.0254.01.45
2442.3440.150.00878351.09.54S127.0254.02.48
2540.9040.900.01318348.09.56S127.0254.02.95
2640.9040.900.01757356.09.58S127.0254.04.14
2742.3142.170.02196364.09.510S127.0254.05.38
2842.3142.170.03140312.012.78S127.0254.06.08
2943.3039.950.00439353.09.52R127.0254.03.36
3043.3039.950.00878348.09.54R127.0254.04.83
3142.5841.420.01318353.09.56R127.0254.07.27
3242.5841.420.01757371.09.58R127.0254.08.80
3341.3125.790.03140340.012.78R127.0254.011.72
3442.7225.790.00439353.09.52R127.0254.04.07
3542.7225.790.00878353.09.54R127.0254.06.34
3640.4220.110.01318386.09.56R127.0254.06.96
3740.4220.110.01757386.09.58R127.0254.06.91
3841.6217.070.01757372.09.58R127.0254.06.85
3942.4120.210.03140334.012.78R127.0254.010.13
[55]4033.5133.510.01328456.09.56S114.0280.04.55
4133.5133.510.01328456.09.56S114.0280.04.82
4233.5133.510.01328456.09.56S114.0280.05.44
4352.0652.060.01328456.09.56S114.0280.09.11
4452.0652.060.01328456.09.56S114.0280.07.41
4552.0652.060.01328456.09.56S114.0280.07.69
4633.2433.240.01328456.09.56R114.0280.08.21
4733.2433.240.01328456.09.56R114.0280.07.43
4833.2433.240.01328456.09.56R114.0280.07.43
4951.6451.640.01328456.09.56R114.0280.010.29
5051.6451.640.01328456.09.56R114.0280.07.80
5151.6451.640.01328456.09.56R114.0280.08.92
5267.8043.690.01072605.08.08S150.0250.05.39
5367.8043.690.01072605.08.08S150.0250.05.23
5467.8043.690.01072605.08.08S150.0250.05.37
[56]5525.8019.800.00502450.08.06S200.0300.01.71
5625.8019.800.00502450.08.06S200.0300.01.68
5725.8019.800.00502450.08.06S200.0300.01.79
5855.6051.000.00502450.08.06S200.0300.04.02
5955.6051.000.00502450.08.06S200.0300.03.29
6055.6051.000.00502450.08.06S200.0300.03.57
6155.6051.000.00502645.08.06S200.0300.03.39
6255.6051.000.00502645.08.06S200.0300.03.99
6355.6051.000.00502645.08.06S200.0300.03.70
[57]6435.7030.900.00409473.012.78R610.0406.05.01
6535.7030.900.00409473.012.78R610.0406.04.61
6634.7028.900.00409473.012.78R610.0406.04.39
6734.7028.900.00409473.012.78R610.0406.04.75
6834.7028.900.00409473.012.78R610.0406.04.84
6940.1030.100.00409591.012.78R610.0406.04.18
7034.7028.900.00409591.012.78R610.0406.04.37
7134.7028.900.00409591.012.78R610.0406.04.53
7234.7028.900.00409591.012.78R610.0406.04.71
7334.7028.900.00409591.012.78R610.0406.05.22
7435.6031.600.00641443.015.98R406.0610.04.91
7535.6031.600.00640443.015.98R406.0610.04.95
7636.2028.600.00640443.015.98R406.0610.04.98
7736.2028.600.00640443.015.98R406.0610.04.89
7836.2028.600.00640443.015.98R406.0610.04.94
7935.6031.600.00640589.015.98R406.0610.05.42
8036.2028.600.00640589.015.98R406.0610.05.49
8136.2028.600.00640589.015.98R406.0610.05.61
8236.2028.600.00640589.015.98R406.0610.05.08
8336.2028.600.00640589.015.98R406.0610.05.34
[58]8449.10400.00410464.09.56R254.0406.44.83
8549.10400.00400464.09.56R254.0406.44.07
8649.10400.00730424.012.76R254.0406.44.76
8749.10400.00740424.012.76R254.0406.45.45
8849.10400.00420896.09.56R254.0406.43.93
8949.10400.00410869.09.56R254.0406.44.48
9049.10400.00740965.012.76R254.0406.45.79
9149.10400.00750905.012.76R254.0406.44.90
[59]92136.006300.00.00R127.0203.06.56
93136.006300.00.00R127.0203.04.92
94136.00630.00550414.09.52R127.0203.06.22
95136.00630.00550414.09.52R127.0203.06.47
[60]9646.2532.960476.00.00R150.0150.03.04
9746.2532.960476.00.00R150.0150.02.88
9846.2532.960476.00.00R150.0150.03.02
9946.2532.960.00502476.012.01R150.0150.03.96
10046.2532.960.00502476.012.01R150.0150.04.06
10146.2532.960.00502476.012.01R150.0150.04.21
[21]10265.2055.700.00502446.08.06S200.0300.03.61
10365.2055.700.00502446.08.06S200.0300.03.53
10465.2055.700.00502446.08.06S200.0300.03.46
10565.2055.700.00502446.08.06S200.0300.03.54
10665.2055.700.00502446.08.06S200.0300.03.58
10765.2055.700.00502446.08.06S200.0300.03.57
[52]10830.9430.940.01116325.08.08S120.0300.02.99
10930.9430.940.01116325.08.08S120.0300.04.45
11030.9430.940.01116325.08.08S120.0300.03.16
11130.9430.940.01116325.08.08S120.0300.02.58
11230.9430.940.01116325.08.08S120.0300.03.27
[61]11331.7924.130.00818344.812.72R203.2152.44.69
11436.6822.340.00818344.812.72R203.2152.42.72
11536.6825.510.00818351.612.72R203.2152.44.41
11639.0928.480.00409334.412.72R203.2304.83.52
11734.2724.680.00409324.112.72R203.2304.83.72
11828.1322.820.00409351.612.72R203.2304.82.41
11928.1322.820.00409351.612.72R203.2304.82.71
12028.1322.820.00409351.612.72R203.2304.82.36
12130.4827.300.00409344.812.72R203.2304.82.52
12230.4827.300.00409344.812.72R203.2304.82.96
12330.4827.300.00409344.812.72R203.2304.83.09
12434.2020.9600.00.00R203.2304.82.87
12536.8227.4400.00.00R203.2304.83.83
12636.3428.6100.00.00R203.2304.83.14
12734.4128.1300.00.00R203.2304.82.41
12834.4128.1300.00.00R203.2304.82.50
12934.8225.6500.00.00R203.2304.82.83
13034.8225.6500.00.00R203.2304.82.81
13134.8225.6500.00.00R203.2304.82.79
13239.5824.410.00409337.912.74R203.2609.63.23
13334.4823.580.00613344.812.76R203.2609.62.76
13441.6524.200.00613344.812.76R203.2609.63.24
13536.6822.340.00818344.812.72S203.2152.41.08
13636.6822.340.00818344.812.72S203.2152.41.55
13734.9625.510.00818344.812.72S203.2152.41.59
13834.9625.510.00818344.812.72S203.2152.41.48
13934.9625.510.00818344.812.72S203.2152.41.65
14033.5827.920.00409345.812.72S203.2304.81.14
14135.6525.240.00409345.812.72S203.2304.80.76
14233.5827.9200.00.00S203.2304.80.86
14335.6525.2400.00.00S203.2304.81.59
| No. | Source | $f_{c,\max}$ (MPa) | $f_{c,\min}$ (MPa) | $\rho$ | $f_y$ (MPa) | $d_b$ (mm) | $n_b$ | Interface | $B$ (mm) | $H$ (mm) | $\upsilon$ (MPa) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 144 |  | 36.82 | 27.44 | 0 | 0.0 | 0.0 | 0 | S | 203.2 | 304.8 | 0.90 |
| 145 |  | 36.34 | 28.61 | 0 | 0.0 | 0.0 | 0 | S | 203.2 | 304.8 | 0.62 |
| 146 |  | 34.41 | 28.13 | 0 | 0.0 | 0.0 | 0 | S | 203.2 | 304.8 | 0.83 |
| 147 |  | 32.13 | 29.10 | 0 | 0.0 | 0.0 | 0 | S | 203.2 | 609.6 | 0.75 |
| 148 |  | 32.13 | 29.10 | 0 | 0.0 | 0.0 | 0 | S | 203.2 | 609.6 | 0.65 |
| 149 |  | 32.13 | 29.10 | 0 | 0.0 | 0.0 | 0 | S | 203.2 | 609.6 | 0.69 |
| 150 | [62] | 44.09 | 38.71 | 0.00174 | 603.8 | 5.0 | 2 | S | 150.0 | 150.0 | 2.66 |
| 151 |  | 44.09 | 38.71 | 0.00174 | 603.8 | 5.0 | 2 | S | 150.0 | 150.0 | 2.95 |
| 152 |  | 44.09 | 38.71 | 0.00174 | 603.8 | 5.0 | 2 | S | 150.0 | 150.0 | 3.08 |
| 153 |  | 44.09 | 38.71 | 0.00342 | 584.3 | 7.0 | 2 | S | 150.0 | 150.0 | 3.00 |
| 154 |  | 44.09 | 38.71 | 0.00342 | 584.3 | 7.0 | 2 | S | 150.0 | 150.0 | 3.61 |
| 155 |  | 44.09 | 38.71 | 0.00342 | 584.3 | 7.0 | 2 | S | 150.0 | 150.0 | 3.93 |
| 156 | [63] | 60.75 | 35.99 | 0.00804 | 400.0 | 8.0 | 6 | R | 150.0 | 250.0 | 6.05 |
| 157 |  | 60.75 | 35.99 | 0.00804 | 400.0 | 8.0 | 6 | R | 150.0 | 250.0 | 5.59 |
| 158 |  | 60.75 | 35.99 | 0.01256 | 400.0 | 10.0 | 6 | R | 150.0 | 250.0 | 7.78 |
| 159 |  | 60.75 | 35.99 | 0.01256 | 400.0 | 10.0 | 6 | R | 150.0 | 250.0 | 7.62 |
| 160 |  | 60.75 | 35.99 | 0.01809 | 400.0 | 12.0 | 6 | R | 150.0 | 250.0 | 8.94 |
| 161 |  | 60.75 | 35.99 | 0.01809 | 400.0 | 12.0 | 6 | R | 150.0 | 250.0 | 8.73 |
| 162 | [64] | 53.01 | 41.58 | 0.00393 | 340.0 | 10.0 | 1 | R | 100.0 | 200.0 | 3.93 |
| 163 |  | 53.01 | 41.58 | 0.00565 | 353.0 | 12.0 | 1 | R | 100.0 | 200.0 | 4.26 |
| 164 |  | 53.01 | 41.58 | 0.00769 | 347.0 | 14.0 | 1 | R | 100.0 | 200.0 | 5.22 |
| 165 |  | 58.34 | 41.95 | 0.02308 | 347.0 | 14.0 | 3 | R | 100.0 | 200.0 | 7.02 |
| 166 |  | 45.38 | 33.88 | 0.02010 | 353.0 | 16.0 | 2 | R | 100.0 | 200.0 | 6.33 |
| 167 |  | 69.55 | 60.36 | 0.00769 | 347.0 | 14.0 | 1 | R | 100.0 | 200.0 | 3.93 |
| 168 | [12] | 200 | 200 | 0 | 0.0 | 0.0 | 0 | S | 184.0 | 304.8 | 1.63 |
| 169 |  | 200 | 200 | 0 | 0.0 | 0.0 | 0 | S | 184.0 | 304.8 | 1.31 |
| 170 |  | 200 | 200 | 0 | 0.0 | 0.0 | 0 | S | 184.0 | 304.8 | 1.37 |
| 171 |  | 200 | 200 | 0.00505 | 506.8 | 9.5 | 4 | S | 184.0 | 304.8 | 3.92 |
| 172 |  | 200 | 200 | 0.00505 | 506.8 | 9.5 | 4 | S | 184.0 | 304.8 | 4.05 |
| 173 |  | 200 | 200 | 0.00505 | 506.8 | 9.5 | 4 | S | 184.0 | 304.8 | 3.62 |
| 174 |  | 200 | 84.40 | 0 | 0.0 | 0.0 | 0 | S | 184.0 | 304.8 | 0.99 |
| 175 |  | 200 | 84.40 | 0 | 0.0 | 0.0 | 0 | S | 184.0 | 304.8 | 0.94 |
| 176 |  | 200 | 84.40 | 0 | 0.0 | 0.0 | 0 | S | 184.0 | 304.8 | 1.43 |
| 177 |  | 200 | 84.40 | 0.00253 | 506.8 | 9.5 | 2 | S | 184.0 | 304.8 | 2.11 |
| 178 |  | 200 | 84.40 | 0.00253 | 506.8 | 9.5 | 2 | S | 184.0 | 304.8 | 2.31 |
| 179 |  | 200 | 84.40 | 0.00253 | 506.8 | 9.5 | 2 | S | 184.0 | 304.8 | 2.13 |
| 180 |  | 200 | 84.40 | 0.00505 | 506.8 | 9.5 | 4 | S | 184.0 | 304.8 | 3.89 |
| 181 |  | 200 | 84.40 | 0.00505 | 506.8 | 9.5 | 4 | S | 184.0 | 304.8 | 3.20 |
| 182 |  | 200 | 84.40 | 0.00505 | 506.8 | 9.5 | 4 | S | 184.0 | 304.8 | 3.28 |
| 183 |  | 200 | 84.40 | 0.00758 | 506.8 | 9.5 | 6 | S | 184.0 | 304.8 | 4.44 |
| 184 |  | 200 | 84.40 | 0.00758 | 506.8 | 9.5 | 6 | S | 184.0 | 304.8 | 3.36 |
| 185 |  | 200 | 84.40 | 0.00758 | 506.8 | 9.5 | 6 | S | 184.0 | 304.8 | 4.82 |
| 186 |  | 200 | 84.40 | 0 | 0.0 | 0.0 | 0 | R | 184.0 | 304.8 | 2.62 |
| 187 |  | 200 | 84.40 | 0 | 0.0 | 0.0 | 0 | R | 184.0 | 304.8 | 2.14 |
| 188 |  | 200 | 84.40 | 0 | 0.0 | 0.0 | 0 | R | 184.0 | 304.8 | 2.86 |
| 189 | [65] | 26.50 | 20.20 | 0 | 0.0 | 0.0 | 0 | R | 250.0 | 250.0 | 0.99 |
| 190 |  | 26.50 | 20.20 | 0.00362 | 358.0 | 12.0 | 2 | R | 250.0 | 250.0 | 2.21 |
| 191 |  | 26.50 | 20.20 | 0.00492 | 344.0 | 14.0 | 2 | R | 250.0 | 250.0 | 2.86 |
| 192 |  | 26.50 | 20.20 | 0.00723 | 358.0 | 12.0 | 4 | R | 250.0 | 250.0 | 3.68 |
| 193 |  | 26.50 | 20.20 | 0.00985 | 344.0 | 14.0 | 4 | R | 250.0 | 250.0 | 4.32 |
| 194 |  | 30.60 | 26.50 | 0 | 0.0 | 0.0 | 0 | R | 250.0 | 250.0 | 1.15 |
| 195 |  | 30.60 | 26.50 | 0.00362 | 358.0 | 12.0 | 2 | R | 250.0 | 250.0 | 2.72 |
| 196 |  | 30.60 | 26.50 | 0.00492 | 344.0 | 14.0 | 2 | R | 250.0 | 250.0 | 3.20 |
| 197 |  | 30.60 | 26.50 | 0.00723 | 358.0 | 12.0 | 4 | R | 250.0 | 250.0 | 4.16 |
| 198 |  | 30.60 | 26.50 | 0.00985 | 344.0 | 14.0 | 4 | R | 250.0 | 250.0 | 4.96 |
| 199 | [13] | 41.50 | 34.60 | 0.00502 | 440.2 | 12.0 | 1 | R | 150.0 | 150.0 | 3.60 |
| 200 |  | 27.20 | 24.50 | 0.00171 | 564.1 | 7.0 | 1 | R | 150.0 | 150.0 | 1.75 |
| 201 |  | 27.20 | 24.50 | 0.00349 | 420.2 | 10.0 | 1 | R | 150.0 | 150.0 | 2.41 |
| 202 |  | 27.20 | 24.50 | 0.00502 | 440.2 | 12.0 | 1 | R | 150.0 | 150.0 | 3.04 |
| 203 |  | 27.20 | 24.50 | 0.00684 | 383.8 | 14.0 | 1 | R | 150.0 | 150.0 | 4.13 |
| 204 |  | 27.20 | 24.50 | 0.00893 | 375.5 | 16.0 | 1 | R | 150.0 | 150.0 | 4.54 |
| 205 |  | 39.70 | 37.70 | 0.00171 | 564.1 | 7.0 | 1 | R | 150.0 | 150.0 | 2.41 |
| 206 |  | 39.70 | 37.70 | 0.00349 | 420.2 | 10.0 | 1 | R | 150.0 | 150.0 | 2.37 |
| 207 |  | 39.70 | 37.70 | 0.00502 | 440.2 | 12.0 | 1 | R | 150.0 | 150.0 | 2.98 |
| 208 |  | 39.70 | 37.70 | 0.00684 | 383.8 | 14.0 | 1 | R | 150.0 | 150.0 | 4.13 |
| 209 |  | 39.70 | 37.70 | 0.00893 | 375.5 | 16.0 | 1 | R | 150.0 | 150.0 | 4.37 |
| 210 |  | 47.60 | 44.60 | 0.00502 | 440.2 | 12.0 | 1 | R | 150.0 | 150.0 | 4.42 |
| 211 |  | 47.60 | 44.60 | 0.00684 | 383.8 | 14.0 | 1 | R | 150.0 | 150.0 | 4.36 |
| 212 |  | 47.60 | 44.60 | 0.00893 | 375.5 | 16.0 | 1 | R | 150.0 | 150.0 | 4.82 |
| 213 |  | 51.90 | 50.20 | 0.00171 | 564.1 | 7.0 | 1 | R | 150.0 | 150.0 | 3.26 |
| 214 |  | 51.90 | 50.20 | 0.00349 | 420.2 | 10.0 | 1 | R | 150.0 | 150.0 | 3.51 |
| 215 |  | 51.90 | 50.20 | 0.00502 | 440.2 | 12.0 | 1 | R | 150.0 | 150.0 | 4.40 |
| 216 |  | 51.90 | 50.20 | 0.00684 | 383.8 | 14.0 | 1 | R | 150.0 | 150.0 | 4.50 |
| 217 |  | 51.90 | 50.20 | 0.00893 | 375.5 | 16.0 | 1 | R | 150.0 | 150.0 | 4.71 |
| 218 | [52] | 30.94 | 30.94 | 0.01110 | 325 | 8.0 | 4 | S | 120 | 300 | 2.99 |
| 219 |  | 30.94 | 30.94 | 0.01110 | 325 | 8.0 | 4 | S | 120 | 300 | 4.45 |
| 220 |  | 30.94 | 30.94 | 0.01110 | 325 | 8.0 | 4 | S | 120 | 300 | 3.16 |
| 221 |  | 30.94 | 30.94 | 0.01110 | 325 | 8.0 | 4 | S | 120 | 300 | 2.58 |
| 222 |  | 30.94 | 30.94 | 0.01110 | 325 | 8.0 | 4 | S | 120 | 300 | 3.27 |
| 223 |  | 31.41 | 31.41 | 0.01110 | 325 | 8.0 | 4 | S | 120 | 300 | 4.06 |
| 224 |  | 25.64 | 25.64 | 0.01110 | 325 | 8.0 | 4 | S | 120 | 300 | 3.10 |
| 225 |  | 25.64 | 25.64 | 0.01110 | 325 | 8.0 | 4 | S | 120 | 300 | 4.13 |
| 226 |  | 25.64 | 25.64 | 0.01110 | 325 | 8.0 | 4 | S | 120 | 300 | 3.27 |
| 227 |  | 25.64 | 25.64 | 0.01110 | 325 | 8.0 | 4 | S | 120 | 300 | 4.58 |
| 228 |  | 25.64 | 25.64 | 0.01110 | 325 | 8.0 | 4 | S | 120 | 300 | 2.27 |
| 229 |  | 30.06 | 30.06 | 0.01110 | 325 | 8.0 | 4 | S | 120 | 300 | 3.60 |
| 230 |  | 30.76 | 30.76 | 0.01110 | 325 | 8.0 | 4 | S | 120 | 300 | 4.40 |
| 231 |  | 30.76 | 30.76 | 0.01110 | 325 | 8.0 | 4 | S | 120 | 300 | 5.91 |
| 232 |  | 30.76 | 30.76 | 0.01110 | 325 | 8.0 | 4 | S | 120 | 300 | 4.79 |
| 233 |  | 30.76 | 30.76 | 0.01110 | 325 | 8.0 | 4 | S | 120 | 300 | 4.04 |
| 234 |  | 30.76 | 30.76 | 0.01110 | 325 | 8.0 | 4 | S | 120 | 300 | 2.84 |
| 235 |  | 23.43 | 23.43 | 0.01110 | 325 | 8.0 | 4 | S | 120 | 300 | 5.32 |
| 236 |  | 33.03 | 33.03 | 0.01110 | 325 | 8.0 | 4 | S | 120 | 300 | 5.74 |
| 237 | [66] | 140.00 | 40.00 | 0.00335 | 453.0 | 8.0 | 2 | S | 150 | 200 | 0.77 |
| 238 |  | 140.00 | 40.00 | 0.00335 | 453.0 | 8.0 | 2 | R | 150 | 200 | 3.13 |
| 239 |  | 140.00 | 40.00 | 0.00335 | 453.0 | 8.0 | 2 | R | 150 | 200 | 3.29 |
| 240 |  | 140.00 | 40.00 | 0.00335 | 453.0 | 8.0 | 2 | R | 150 | 200 | 2.72 |
| 241 |  | 140.00 | 40.00 | 0.00335 | 453.0 | 8.0 | 2 | R | 150 | 200 | 3.62 |
| 242 | [67] | 140.0 | 40.0 | 0.00296 | 540.0 | 8.0 | 1 | S | 100 | 170 | 2.24 |
| 243 |  | 140.0 | 40.0 | 0.00591 | 540.0 | 8.0 | 2 | S | 100 | 170 | 2.45 |
| 244 |  | 140.0 | 40.0 | 0.01183 | 540.0 | 8.0 | 4 | S | 100 | 170 | 4.06 |
| 245 |  | 140.0 | 40.0 | 0.00296 | 540.0 | 8.0 | 1 | R | 100 | 170 | 4.10 |
| 246 |  | 140.0 | 40.0 | 0.00591 | 540.0 | 8.0 | 2 | R | 100 | 170 | 4.53 |
| 247 |  | 140.0 | 40.0 | 0.01183 | 540.0 | 8.0 | 4 | R | 100 | 170 | 5.72 |

References

1. Ahmad, F.; Rawat, S.; Yang, R.C.; Zhang, L.; Zhang, Y.X. Fire resistance and thermal performance of hybrid fibre-reinforced magnesium oxychloride cement-based composites. Constr. Build. Mater. 2025, 472, 140867.
2. Zhang, J.; Ding, C.; Rong, X.; Yang, H.; Li, Y. Development and experimental investigation of hybrid precast concrete beam–column joints. Eng. Struct. 2020, 219, 110922.
3. Ahmad, S.; Bhargava, P.; Chourasia, A.; Sharma, U.K. Shear transfer strength of uncracked concrete after elevated temperatures. J. Struct. Eng. 2020, 146, 04020133.
4. Liu, J.; Fang, J.X.; Chen, J.J.; Xu, G. Evaluation of design provisions for interface shear transfer between concretes cast at different times. J. Bridge Eng. 2019, 24, 06019002.
5. Raposo, J.M.; Cavaco, E.; Neves, L.C.; Júlio, E. A novel roughness parameter for more precise estimation of the shear strength of concrete-to-concrete interfaces. Constr. Build. Mater. 2024, 410, 134146.
6. Birkeland, P.W.; Birkeland, H.W. Connections in precast concrete construction. J. Proc. 1966, 63, 345–368.
7. Walraven, J.C. Theory and experiments on the mechanical behaviour of cracks in plain and reinforced concrete subjected to shear loading. Heron 1981, 26, 1–68.
8. Hermansen, B.R.; Cowan, J. Modified shear-friction theory for bracket design. J. Proc. 1974, 71, 55–60.
9. Patnaik, A.K. Horizontal shear strength of composite concrete beams with a rough interface. PCI J. 1994, 39, 48–69.
10. Randl, N. Research on Force Transfer Between Old and New Concrete with Different Joint Roughness. Ph.D. Thesis, University of Innsbruck, Innsbruck, Austria, 1997.
11. Ali, M.A.; White, R.N. Enhanced contact model for shear friction of normal and high-strength concrete. Struct. J. 1999, 96, 348–360.
12. Crane, C.K. Shear and Shear Friction of Ultra-High Performance Concrete Bridge Girders. Ph.D. Thesis, Georgia Institute of Technology, Atlanta, GA, USA, 2010.
13. Ye, G. Study on the Anti-Shear Behavior of Bond-Interface Between New and Old Concrete. Ph.D. Thesis, Chongqing University, Chongqing, China, 2011.
14. Santos, P.M.; Júlio, E.N. Interface shear transfer on composite concrete members. ACI Struct. J. 2014, 111, 113–122.
15. Wu, Y.F.; Hu, B. Shear strength components in reinforced concrete members. J. Struct. Eng. 2017, 143, 04017092.
16. ACI Committee 318. Building Code Requirements for Structural Concrete and Commentary (ACI 318-19); American Concrete Institute: Farmington Hills, MI, USA, 2019.
17. Canadian Standards Association. Canadian Highway Bridge Design Code (CSA S6:19); Canadian Standards Association: Mississauga, ON, Canada, 2019.
18. American Association of State Highway and Transportation Officials. AASHTO LRFD Bridge Design Specifications, 9th ed.; American Association of State Highway and Transportation Officials: Washington, DC, USA, 2020.
19. Maekawa, K.; Qureshi, J. Stress transfer across interfaces in reinforced concrete due to aggregate interlock and dowel action. Doboku Gakkai Ronbunshu 1997, 1997, 159–172.
20. Mattock, A.H. Shear friction and high-strength concrete. Struct. J. 2001, 98, 50–59.
21. Liu, J.; Huang, H.; Ma, Z.J.; Chen, J. Effect of shear reinforcement corrosion on interface shear transfer between concretes cast at different times. Eng. Struct. 2021, 232, 111872.
22. Xu, J.G.; Chen, S.Z.; Xu, W.J.; Shen, Z.S. Concrete-to-concrete interface shear strength prediction based on explainable extreme gradient boosting approach. Constr. Build. Mater. 2021, 308, 125088.
23. Salehi, H.; Burgueño, R. Emerging artificial intelligence methods in structural engineering. Eng. Struct. 2018, 171, 170–189.
24. Dean, J.; Monga, R. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. 2015. Available online: https://arxiv.org/abs/1603.04467 (accessed on 22 January 2025).
25. Subramanian, V. Deep Learning with PyTorch: A Practical Approach to Building Neural Network Models Using PyTorch; Packt Publishing: Mumbai, India, 2018.
26. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
27. Salami, B.A.; Iqbal, M.; Abdulraheem, A.; Jalal, F.E.; Alimi, W.; Jamal, A.; Tafsirojjaman, T.; Liu, Y.; Bardhan, A. Estimating compressive strength of lightweight foamed concrete using neural, genetic and ensemble machine learning approaches. Cem. Concr. Compos. 2022, 133, 104721.
28. Chen, S.Z.; Feng, D.C.; Han, W.S.; Wu, G. Development of data-driven prediction model for CFRP-steel bond strength by implementing ensemble learning algorithms. Constr. Build. Mater. 2021, 303, 124470.
29. Bedriñana, L.A.; Sucasaca, J.; Tovar, J.; Burton, H. Design-oriented machine-learning models for predicting the shear strength of prestressed concrete beams. J. Bridge Eng. 2023, 28, 04023009.
30. Athanasiou, A.; Ebrahimkhanlou, A.; Zaborac, J.; Hrynyk, T.; Salamone, S. A machine learning approach based on multifractal features for crack assessment of reinforced concrete shells. Comput.-Aided Civ. Infrastruct. Eng. 2020, 35, 565–578.
31. Hsieh, Y.A.; Tsai, Y.J. Machine learning for crack detection: Review and model performance comparison. J. Comput. Civ. Eng. 2020, 34, 04020038.
32. Liu, Q.F.; Iqbal, M.F.; Yang, J.; Lu, X.Y.; Zhang, P.; Rauf, M. Prediction of chloride diffusivity in concrete using artificial neural network: Modelling and performance evaluation. Constr. Build. Mater. 2021, 268, 121082.
33. Jin, L.; Dong, T.; Fan, T.; Duan, J.; Yu, H.; Jiao, P.; Zhang, W. Prediction of the chloride diffusivity of recycled aggregate concrete using artificial neural network. Mater. Today Commun. 2022, 32, 104137.
34. Zhong, Z.; Zhao, S.; Xia, J.; Luo, Q.; Zhou, Q.; Yang, S.; He, F.; Yao, Y. Regression prediction model for shear strength of cold joint in concrete. Structures 2024, 68, 107168.
35. Yan, B.; Zhang, W.; Ye, Y.; Yi, W. Study on the prediction of shear capacity of new and old concrete interfaces based on explainable machine learning algorithms. Structures 2025, 71, 108065.
36. Sturm, A.B.; Visintin, P.; Farries, K.; Oehlers, D.J. New testing approach for extracting the shear friction material properties of ultra-high-performance fiber-reinforced concrete. J. Mater. Civ. Eng. 2018, 30, 04018235.
37. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning, 2nd ed.; Springer: New York, NY, USA, 2009.
38. Saillard, Y. Reinforced Concrete: An International Manual; Butterworths: London, UK, 1971.
39. Chen, T.; Guestrin, C. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794.
40. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
41. Guo, L.; Chehata, N.; Mallet, C.; Boukir, S. Relevance of airborne lidar and multispectral image data for urban scene classification using Random Forests. ISPRS J. Photogramm. Remote Sens. 2011, 66, 56–66.
42. Rodriguez-Galiano, V.F.; Sanchez-Castillo, M.; Chica-Olmo, M.; Chica-Rivas, M. Machine learning predictive models for mineral prospectivity: An evaluation of neural networks, random forest, regression trees and support vector machines. Ore Geol. Rev. 2015, 71, 804–818.
43. Biau, G.; Scornet, E. A random forest guided tour. Test 2016, 25, 197–227.
44. Murphy, K.P. Machine Learning: A Probabilistic Perspective; MIT Press: Cambridge, MA, USA, 2012.
45. Yuvaraj, P.; Murthy, A.R.; Iyer, N.R.; Sekar, S.K.; Samui, P. Support vector regression based models to predict fracture characteristics of high strength and ultra high strength concrete beams. Eng. Fract. Mech. 2013, 98, 29–43.
46. Yan, K.; Shi, C. Prediction of elastic modulus of normal and high strength concrete by support vector machine. Constr. Build. Mater. 2010, 24, 1479–1485.
47. Golafshani, E.M.; Ashour, A. A feasibility study of BBP for predicting shear capacity of FRP reinforced concrete beams without stirrups. Adv. Eng. Softw. 2016, 97, 29–39.
48. Hariharan, S.; Velicheti, A.; Anagha, A.S.; Thomas, C.; Balakrishnan, N. Explainable artificial intelligence in cybersecurity: A brief review. In Proceedings of the 2021 4th International Conference on Security and Privacy (ISEA-ISAP), Dhanbad, India, 27–30 October 2021; IEEE: New York, NY, USA, 2021; pp. 1–12.
49. Dwivedi, R.; Dave, D.; Naik, H.; Singhal, S.; Omer, R.; Patel, P.; Qian, B.; Wen, Z.; Shah, T.; Morgan, G.; et al. Explainable AI (XAI): Core ideas, techniques, and solutions. ACM Comput. Surv. 2023, 55, 194.
50. Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. In Proceedings of Advances in Neural Information Processing Systems 30, Long Beach, CA, USA, 4–9 December 2017; pp. 4765–4774.
51. Lundberg, S.M.; Erion, G.G.; Lee, S.I. Consistent individualized feature attribution for tree ensembles. arXiv 2018, arXiv:1802.03888.
52. Xiao, J.; Sun, C.; Lange, D.A. Effect of joint interface conditions on shear transfer behavior of recycled aggregate concrete. Constr. Build. Mater. 2016, 105, 343–355.
53. Kahn, L.F.; Mitchell, A.D. Shear friction tests with high-strength concrete. Struct. J. 2002, 99, 98–103.
54. Mattock, A.H. Shear Transfer Under Monotonic Loading Across an Interface Between Concretes Cast at Different Times; Report No. SM 76-3; University of Washington: Seattle, WA, USA, 1976.
55. Shaw, D.M.; Sneed, L.H. Interface shear transfer of lightweight-aggregate concretes cast at different times. PCI J. 2014, 59, 130–144.
56. Liu, J.; Bu, Y.; Chen, J.; Wang, Q. Contribution of shear reinforcements and concrete to the shear capacity of interfaces between concretes cast at different times. KSCE J. Civ. Eng. 2021, 25, 2065–2077.
57. Barbosa, A.R.; Trejo, D.; Nielson, D. Effect of high-strength reinforcement steel on shear friction behavior. J. Bridge Eng. 2017, 22, 04017038.
58. Harries, K.A.; Zeno, G.; Shahrooz, B. Toward an improved understanding of shear-friction behavior. ACI Struct. J. 2012, 109, 835–844.
59. Semendary, A.A.; Hamid, W.K.; Steinberg, E.P.; Khoury, I. Shear friction performance between high strength concrete (HSC) and ultra high-performance concrete (UHPC) for bridge connection applications. Eng. Struct. 2020, 205, 110122.
60. Xia, J.; Shan, K.Y.; Wu, X.H.; Gan, R.L.; Jin, W.L. Shear-friction behavior of concrete-to-concrete interface under direct shear load. Eng. Struct. 2021, 238, 112211.
61. Hanson, N.W. Precast-Prestressed Concrete Bridges 2: Horizontal Shear Connections; Portland Cement Association: Washington, DC, USA, 1960.
62. Hu, T.M.; Huang, C.K.; Chen, X.F. Experimental research on shear of bonding interface between young and old concrete influenced by constructional reinforcement. Concrete 2009, 3, 26–28.
63. Jiang, H.; Fang, Z.; Liu, A.; Li, Y.; Feng, J. Interface shear behavior between high-strength precast girders and lightweight cast-in-place slabs. Constr. Build. Mater. 2016, 128, 449–460.
64. Jiang, H.; Fang, Z.; Ma, Z.J.; Fang, X.; Jiang, Z. Shear-friction behavior of groove interface in concrete bridge rehabilitation. J. Bridge Eng. 2016, 21, 04016081.
65. Xing, Q. Research on Connection Methods and Mechanical Behavior of Old-New Concrete Interfaces. Master’s Thesis, Xi’an University of Science and Technology, Xi’an, China, 2012.
66. Liu, J.; Chen, Z.; Guan, D.; Lin, Z.; Guo, Z. Experimental study on interfacial shear behaviour between ultra-high performance concrete and normal strength concrete in precast composite members. Constr. Build. Mater. 2020, 261, 120008.
67. Du, W.; Yang, C.; De Backer, H.; Li, C.; Ming, K.; Zhang, H.; Pan, Y. Experimental investigation on shear behavior of the interface between early-strength self-compacting shrinkage-compensating high-performance concrete and ordinary concrete substrate. Materials 2022, 15, 4939.
Figure 1. Interface controlled by shear transfer strength: (a) beam–column connection; (b) corbel.
Figure 2. Representative push-off test specimen: Z-shaped.
Figure 3. Relationships between each input parameter and output variable in this database: (a) fcmin, (b) ρ, (c) Acv (shear surface area), (d) fy, (e) db, and (f) nb.
Figure 4. Correlation matrix for the nine input parameters.
Figure 5. Framework for the implementation of the ML-based model.
Figure 6. Effects of hyperparameter configurations on the model performance: (a) AdaBoost: n_estimators = 100; (b) XGBoost: max_depth = 3; (c) RF: n_estimators = 300; (d) SVR: C = 70.
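The tuning summarized in Figure 6 can be reproduced with a standard scikit-learn grid search. The sketch below is illustrative only: the CSV file name, the feature encoding, and the candidate grids are assumptions rather than the exact search space used in this study.

```python
# Minimal sketch of a grid search over XGBoost hyperparameters.
# "interface_database.csv" is a hypothetical export of the 247-specimen
# database; the S/R interface flag is assumed to be pre-encoded as 0/1.
import pandas as pd
from sklearn.model_selection import GridSearchCV, train_test_split
from xgboost import XGBRegressor

df = pd.read_csv("interface_database.csv")
X = df.drop(columns="shear_strength")        # the nine input parameters
y = df["shear_strength"]                     # tested interface strength (MPa)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

search = GridSearchCV(
    XGBRegressor(objective="reg:squarederror"),
    param_grid={
        "max_depth": [3, 5, 7],              # Figure 6b settles on max_depth = 3
        "n_estimators": [100, 300, 500],
        "learning_rate": [0.05, 0.1, 0.3],
    },
    scoring="r2",
    cv=5,
)
search.fit(X_tr, y_tr)
print(search.best_params_, round(search.best_score_, 3))
```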
Figure 7. Comparisons of the predicted values versus the tested values: (a) AdaBoost; (b) XGBoost; (c) RF; (d) SVR.
Figure 8. Radar chart of evaluation indexes for the four ML models: (a) training score; (b) test score; (c) final score.
Figure 9. Global explanation of the XGBoost model: (a) relative feature importance by SHAP and (b) global interpretation by SHAP.
Figure 10. Local interpretation of a single specimen: (a) No. 105 and (b) No. 225.
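Figures 9 and 10 were produced with the SHAP library. A minimal sketch of that workflow is given below; the feature matrix X and target y from the previous sketch are assumed, and the specimen index used for the local plot is illustrative.

```python
# Minimal sketch of global and local SHAP explanations of an XGBoost model.
import shap
from xgboost import XGBRegressor

model = XGBRegressor(max_depth=3).fit(X, y)

explainer = shap.TreeExplainer(model)      # efficient explainer for tree ensembles
shap_values = explainer.shap_values(X)     # one contribution per feature per specimen

shap.summary_plot(shap_values, X)          # global view, as in Figure 9
shap.force_plot(explainer.expected_value,  # local view of a single specimen
                shap_values[104], X.iloc[104], matplotlib=True)
```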
Figure 11. Comparisons of the tested and predicted strengths by the models: (a) results of the three empirical models; (b) comparison of the AASHTO model with the XGBoost model.
Table 1. Statistical analysis of input and output variables in the database.

| Variable | Range | Mean | Median |
|---|---|---|---|
| $f_{c,\max}$ (MPa) | (15.68, 200) | 58.06 | 40.42 |
| $f_{c,\min}$ (MPa) | (15.68, 200) | 46.21 | 37.7 |
| $\rho$ | (0, 0.0314) | 0.00665 | 0.00502 |
| $f_y$ (MPa) | (0, 965) | 395.17 | 440.2 |
| $d_b$ (mm) | (0, 16) | 8.89 | 9.5 |
| $n_b$ | (0, 10) | 3.86 | 4.0 |
| Interface | 0, 1 | 0.60 | 1.0 |
| $B$ (mm) | (100, 610) | 194.72 | 184.0 |
| $H$ (mm) | (150, 610) | 292.18 | 300.0 |
| $\upsilon$ (MPa) | (0.62, 17.59) | 4.31 | 3.72 |
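The statistics in Table 1 are straightforward to recompute once the database is in tabular form. The following sketch assumes the database has been exported to the same hypothetical interface_database.csv used above, with one column per variable.

```python
# Sketch reproducing the Table 1 summary (range, mean, median).
import pandas as pd

df = pd.read_csv("interface_database.csv")
num = df.select_dtypes("number")          # the S/R interface flag is categorical
summary = pd.DataFrame({
    "Range": num.min().combine(num.max(), lambda lo, hi: f"({lo}, {hi})"),
    "Mean": num.mean().round(5),
    "Median": num.median(),
})
print(summary)
```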
Table 2. The procedure of the AdaBoost algorithm.

1. A training dataset with a weight distribution of $D_m$ is used to obtain the weak classifier $G_m(x)$.
2. The classification error rate of $G_m(x)$ on the training dataset is calculated as follows:
   $$e_m = \sum_{i=1}^{N} w_{m,i}\, I\left( G_m(x_i) \neq y_i \right)$$
3. The weight of $G_m(x)$ in the strong classifier is calculated as follows:
   $$\alpha_m = \frac{1}{2} \log \frac{1 - e_m}{e_m}$$
4. The weight distribution of the training dataset is updated as follows:
   $$w_{m+1,i} = \frac{w_{m,i}}{z_m} \exp\left( -\alpha_m y_i G_m(x_i) \right), \quad i = 1, 2, \ldots, N$$
   $$z_m = \sum_{i=1}^{N} w_{m,i} \exp\left( -\alpha_m y_i G_m(x_i) \right)$$
where $z_m$ is a normalization factor that ensures the sum of the sample probabilities equals 1.
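The update rules in Table 2 translate directly into a few lines of NumPy. The sketch below implements one boosting round for the binary-classification form shown in the table, with labels and weak-classifier outputs in {−1, +1}; the regression variant used in this study reuses the same weighting idea.

```python
# Direct NumPy transcription of the Table 2 update rules for one
# AdaBoost round; y and g take values in {-1, +1}, and w holds the
# current sample weights (summing to 1).
import numpy as np

def adaboost_round(y, g, w):
    e_m = np.sum(w * (g != y))                 # weighted error rate e_m
    alpha_m = 0.5 * np.log((1.0 - e_m) / e_m)  # classifier weight alpha_m
    w_new = w * np.exp(-alpha_m * y * g)       # re-weight every sample
    return alpha_m, w_new / w_new.sum()        # dividing by z_m normalizes

# Toy usage: five samples, uniform weights, one misclassification.
y = np.array([1, 1, -1, -1, 1])
g = np.array([1, 1, -1, -1, -1])
alpha, w = adaboost_round(y, g, np.full(5, 0.2))
print(alpha.round(3), w.round(3))  # 0.693 [0.125 0.125 0.125 0.125 0.5]
```

As expected, the misclassified sample receives a larger weight in the next round, which is the mechanism by which AdaBoost focuses successive weak learners on hard examples.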
Table 3. Description of each model’s performance metrics on the training and testing datasets.

| Model | Phase | R2 | RMSE | MAE | MAPE |
|---|---|---|---|---|---|
| AdaBoost | Training | 0.897 | 0.834 | 0.685 | 0.260 |
| AdaBoost | Testing | 0.801 | 1.140 | 0.890 | 0.321 |
| XGBoost | Training | 0.984 | 0.325 | 0.186 | 0.051 |
| XGBoost | Testing | 0.933 | 0.663 | 0.486 | 0.129 |
| RF | Training | 0.898 | 0.829 | 0.581 | 0.183 |
| RF | Testing | 0.829 | 1.057 | 0.780 | 0.269 |
| SVR | Training | 0.966 | 0.475 | 0.250 | 0.065 |
| SVR | Testing | 0.927 | 0.688 | 0.453 | 0.126 |
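For reference, the four metrics in Table 3 can be computed as follows; y_true and y_pred are assumed to be NumPy arrays of tested and predicted shear strengths, and MAPE is returned as a fraction, matching the table (multiply by 100 for the percentage form).

```python
# The four evaluation metrics used throughout the study.
import numpy as np

def evaluate(y_true, y_pred):
    resid = y_true - y_pred
    r2 = 1.0 - np.sum(resid**2) / np.sum((y_true - y_true.mean())**2)
    rmse = np.sqrt(np.mean(resid**2))         # root-mean-square error
    mae = np.mean(np.abs(resid))              # mean absolute error
    mape = np.mean(np.abs(resid / y_true))    # fraction, not percent
    return {"R2": r2, "RMSE": rmse, "MAE": mae, "MAPE": mape}
```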
Table 4. Score analysis specifics.

| Model | Phase | R2 | RMSE | MAE | MAPE | Total Score | Final Score |
|---|---|---|---|---|---|---|---|
| XGBoost | Train | 4 | 4 | 4 | 3 | 15 | 31 |
| XGBoost | Test | 4 | 4 | 4 | 4 | 16 |  |
| RF | Train | 2 | 1 | 2 | 2 | 7 | 17 |
| RF | Test | 3 | 3 | 2 | 2 | 10 |  |
| AdaBoost | Train | 2 | 2 | 1 | 1 | 6 | 12 |
| AdaBoost | Test | 2 | 2 | 1 | 1 | 6 |  |
| SVR | Train | 3 | 3 | 3 | 4 | 13 | 21 |
| SVR | Test | 1 | 1 | 3 | 3 | 8 |  |
Table 5. Summary of the three code and empirical models.

| Code | Design Equation | Limitations | Parameter | Smooth Interface | Roughened Interface |
|---|---|---|---|---|---|
| ACI | $\tau = \mu \rho f_y$ | $\tau \le K_1 f_c'$, $\tau \le K_2$; $f_y \le 414$ MPa | $\mu$ | 0.60 | 1.0 |
|  |  |  | $K_1$ | 0.20 | 0.20 |
|  |  |  | $K_2$ (MPa) | 5.52 | 11.03 |
| AASHTO | $\tau = c + \mu \rho f_y$ | $\tau \le K_1 f_c'$, $\tau \le K_2$; $f_y \le 414$ MPa | $c$ (MPa) | 0.52 | 1.65 |
|  |  |  | $\mu$ | 0.60 | 1.0 |
|  |  |  | $K_1$ | 0.20 | 0.25 |
|  |  |  | $K_2$ (MPa) | 5.52 | 10.34 |
| CSA | $\tau = c + \mu \rho f_y$ | $\tau \le 0.25 f_c'$, $\tau \le 6.5$ MPa; $f_y \le 500$ MPa | $c$ (MPa) | 0.25 | 0.5 |
|  |  |  | $\mu$ | 0.6 | 1.0 |
Table 6. Comparison of the performances of the three empirical models with that of the XGBoost model on the testing set.

| Model | R2 | RMSE | MAE | MAPE (%) | COV | Mean Ratio |
|---|---|---|---|---|---|---|
| ACI | 0.895 | 2.869 | 2.300 | 54.827 | 0.436 | - |
| AASHTO | 0.939 | 2.057 | 1.402 | 32.235 | 0.384 | 1.514 |
| CSA | 0.924 | 2.553 | 1.950 | 45.312 | 0.610 | 2.255 |
| XGBoost | 0.933 | 0.663 | 0.486 | 12.937 | 0.176 | 1.054 |
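The robustness statistics in Table 6 can be reproduced from the tested-to-predicted strength ratios. The sketch below assumes the common convention that the mean ratio is mean(tested/predicted) and that COV is the standard deviation of that ratio divided by its mean; this convention is an assumption, not a statement taken from the paper.

```python
# Mean ratio and coefficient of variation of tested/predicted strengths.
import numpy as np

def ratio_stats(y_test, y_pred):
    ratio = np.asarray(y_test) / np.asarray(y_pred)
    mean_ratio = ratio.mean()                  # close to 1 means unbiased
    cov = ratio.std(ddof=1) / mean_ratio       # smaller means more robust
    return mean_ratio, cov
```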