A Study on the Prediction of Compressive Strength of Self-Compacting Recycled Aggregate Concrete Utilizing Novel Computational Approaches

A considerable amount of discarded building material is produced worldwide each year, resulting in ecosystem degradation. Self-compacting concrete (SCC) contains 60–70% coarse and fine aggregates in its composition, so replacing this material with a waste material such as recycled aggregate (RA) reduces the cost of SCC. This study compares novel Artificial Neural Network algorithm techniques—Levenberg–Marquardt (LM), Bayesian regularization (BR), and Scaled Conjugate Gradient Backpropagation (SCGB)—to estimate the 28-day compressive strength (f'c) of SCC with RA. A total of 515 samples were collected from various published papers and randomly split into training, validation, and testing sets of 70%, 10%, and 20%, respectively. Two statistical indicators, the correlation coefficient (R) and the mean squared error (MSE), were used to assess the models; the greater the R and the lower the MSE, the more accurate the algorithm. The findings demonstrate the high accuracy of the three models. The best result is achieved by BR (R = 0.91 and MSE = 43.755), while the accuracy of LM is nearly the same (R = 0.90 and MSE = 48.14); LM, however, processes the network in a much shorter time than BR. As a result, LM and BR are the best models for forecasting the 28-day f'c of SCC containing RA. The sensitivity analysis showed that cement (28.39%) and water (23.47%) are the most critical variables for predicting the 28-day compressive strength of SCC with RA, while coarse aggregate contributes the least (9.23%).


Introduction
Concrete is the most significant component of the building industry. Durability has become one of the most critical issues in building reinforced concrete structures with long service lives, and with the development of construction technologies in recent years it has become necessary to manufacture well-designed concrete as a robust construction material [1][2][3].
Concrete is the most used building material globally. Because numerous types of concrete with different admixtures are being created, the understanding of advanced concrete design procedures has expanded [4]. One of the outcomes of advanced concrete is self-compacting concrete, which was developed in Japan (1980) to produce high-strength and durable concrete structures [5,6].
The fundamental difference between self-compacting and ordinary concrete lies in the quantities of components used in the mixing process. SCC is recognized as the era's most innovative concrete, with the ability to self-settle in building zones without vibratory compaction: it settles under its own weight by following a flowing course [7].

Levenberg-Marquardt Algorithm
The Levenberg-Marquardt (LM) approach iteratively seeks out a function's minimum. A multidimensional function may be expressed as the sum of squares of nonlinear real-valued functions [37,38]. Researchers have applied this method to challenging nonlinear least-squares problems in several different domains [39]. In this algorithm, two approaches, steepest descent and the Gauss-Newton method, are merged to speed up iterations and reduce error. When the most recent step is successful, the algorithm shifts toward the Gauss-Newton approach; when the step fails, it acts like steepest descent: slow, but always capable of making progress toward the solution [40]. Its main advantage is that, although it requires more memory, it takes little time.
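The damping logic described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the implementation used in this study: the `residual` callback, the numerical Jacobian, and the linear test problem are all invented for demonstration.

```python
import numpy as np

def levenberg_marquardt(residual, p0, n_iter=100, mu=0.01):
    """Minimal LM sketch: blends steepest descent (large mu) and
    Gauss-Newton (small mu) by adapting the damping factor mu."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        # Forward-difference Jacobian of the residual vector w.r.t. parameters.
        eps = 1e-7
        J = np.column_stack([
            (residual(p + eps * np.eye(len(p))[j]) - r) / eps
            for j in range(len(p))
        ])
        # Damped normal equations: (J^T J + mu I) dp = -J^T r
        A = J.T @ J + mu * np.eye(len(p))
        dp = np.linalg.solve(A, -J.T @ r)
        if np.sum(residual(p + dp) ** 2) < np.sum(r ** 2):
            p, mu = p + dp, mu * 0.1   # good step: move toward Gauss-Newton
        else:
            mu *= 10.0                 # bad step: fall back toward steepest descent
    return p

# Hypothetical example: fit y = a*x + b to exact data.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 1.0
p = levenberg_marquardt(lambda p: p[0] * x + p[1] - y, [0.0, 0.0])
```

A successful step shrinks mu (more Gauss-Newton-like), a failed step inflates it (more gradient-descent-like), which is exactly the switching behavior described in the text.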

Bayesian Regularization
Bayesian regularized artificial neural networks (BRANNs), which can shorten or eliminate the requirement for time-consuming cross-validation, are more reliable than traditional back-propagation neural networks (BPNNs) [41]. Bayesian regularization converts a nonlinear regression into a well-posed statistical problem, in the manner of ridge regression. BRANNs take longer to train, but the resulting model offers many advantages on challenging data [42]. A further benefit of BRANNs is that no validation phase is required because the models are robust [43,44]. The challenges of quantitative structure-activity relationship (QSAR) modeling include prediction, reliability, choosing the right validation sets, and optimizing network design. Because Bayesian criteria, rather than empirical stopping rules, govern training, it is nearly impossible to overtrain [45].
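As a rough analogy to the ridge-regression view mentioned above, the following sketch shows how a squared-weight penalty (the closed-form counterpart of BR's weight decay) enters a fit; the design matrix and the penalty strength `alpha` are invented for illustration.

```python
import numpy as np

def ridge(X, y, alpha=1.0):
    """Closed-form ridge regression: the penalty alpha * ||w||^2 plays the
    same role as the weight-decay prior in Bayesian regularization."""
    n_features = X.shape[1]
    # Regularized normal equations: (X^T X + alpha I) w = X^T y
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
w = ridge(X, y, alpha=0.1)
```

The larger `alpha`, the more the weights are shrunk toward zero; in BRANNs the analogous trade-off parameters are inferred from the data rather than fixed by hand.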

Scaled Conjugate Gradient Backpropagation
The basic backpropagation method modifies the weights in the direction of steepest descent, i.e., the most negative gradient: the direction in which the performance function decreases most rapidly. Although the function decreases fastest along the negative gradient, it has been demonstrated that this does not necessarily produce the fastest convergence [46].
The conjugate gradient (CG) algorithms seek a direction that achieves faster convergence than the steepest descent direction while maintaining the error reduction attained in earlier stages; such a direction is known as a conjugate direction. In most CG algorithms, the step size changes with each iteration: a search is undertaken along the conjugate gradient direction to determine the step size that minimizes the performance function along that line [47].
A method other than the line search approach can also estimate the step size. The goal is to combine the model trust region approach of the LM algorithm with the CG methodology. This strategy is known as SCGB, which was initially reported in the literature by Møller (1993) [48].
Design parameters are adjusted at each iteration, which is essential for the algorithm's success; this is a substantial advantage over line-search-based algorithms [48].
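The conjugate-direction idea can be illustrated on a simple quadratic. This sketch uses the classic Fletcher-Reeves update with an exact step size, which is only available in the quadratic case; SCGB instead scales the step without any line search.

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    """Classic CG on f(x) = 0.5 x^T A x - b^T x: each new search direction
    is conjugate to the previous ones, so the error reduction achieved in
    earlier stages is preserved."""
    x = x0.astype(float)
    r = b - A @ x            # residual = negative gradient
    d = r.copy()             # first direction = steepest descent
    while np.dot(r, r) > tol:
        alpha = (r @ r) / (d @ A @ d)     # exact step size along d (quadratic case)
        x = x + alpha * d
        r_new = r - alpha * (A @ d)
        beta = (r_new @ r_new) / (r @ r)  # Fletcher-Reeves coefficient
        d = r_new + beta * d              # next conjugate direction
        r = r_new
    return x

# Hypothetical 2x2 symmetric positive-definite system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b, np.zeros(2))
```

On an n-dimensional quadratic, CG converges in at most n steps, which is the speed-up over plain steepest descent that the text refers to.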

Research Significance
The compressive strength of self-compacting concrete containing recycled aggregates is validated and predicted in this study using artificial neural networks. To the best of the authors' knowledge of the currently available research, there has not been a significant study using various deep learning techniques to forecast the compressive strength of SCC with RA, which marks the novelty of this work. Several techniques, including the Levenberg-Marquardt (LM), Bayesian regularization (BR), and Scaled Conjugate Gradient Backpropagation (SCGB) algorithms, are applied for this objective. Two statistical indicators, the correlation coefficient (R-Value) and the mean squared error (MSE), are employed to select the best model among them. Sensitivity analysis is then conducted to determine the impact of each input variable on the output variable. The present study will provide readers with comprehensive knowledge of these three algorithms for the prediction and validation of SCC with RA.

Experimental Plan
Information is acquired from numerous study articles. Table 1 presents the database containing 515 samples of SCC compressive strength f'c with RA, including six input variables, X1 to X6, and one output, Y, i.e., compressive strength. The input variables are Portland cement (X1), supplementary cementitious materials (X2), water (X3), fine aggregate (X4), coarse aggregate (X5), and admixtures (X6). The database includes the Sr. No., which indexes the collected articles, the authors' citations, the amount of data (# data) supplied by each paper, and the percentage of the total data (% data). Table 2 presents the minimum, maximum, and mean of the input variables (cement, supplementary cementitious materials, fine aggregate, water, coarse aggregate, and superplasticizer) and the output (compressive strength of recycled-aggregate self-compacting concrete) based on these published research publications. Figures 1 and 2 illustrate their graphical representation.


By Frequency Distribution
The input variables X1 to X5 have a vast range of values, whereas X6 has a limited range. The cement content (X1) ranges from 78 to 635 kg/m³, with most values between 180 and 600 kg/m³. The maximum sample count, around 40, corresponds to a cement content of 635 kg/m³. Likewise, the mineral admixture content (X2) varies between 0 and 515 kg/m³. The water content (X3) varies from around 45 to 277 kg/m³, as indicated in Figure 3. The fine aggregate (sand) content (X4) ranges from 532 to 1200 kg/m³, with most values between 770 and 1000 kg/m³. The coarse aggregate content (X5) ranges from 328 to 1170 kg/m³, with typical values between 680 and 920 kg/m³. The superplasticizer content (X6) is between 0 and 16 kg/m³. The figures show that each of the 515 samples contributes to every input variable.

By Multi-Correlation Graph (Heat Map)
This statistical analysis aids the development of the predictive model by improving the accuracy of the outcome prediction. The relationship between the input variables (fine aggregates, water, cement, admixtures, superplasticizers, and coarse aggregates) and the output variable (compressive strength) was investigated to see whether there was a link [100]. The Pearson correlation matrix (heat map) is created to analyze the correlation between the independent input variables, as illustrated in Figure 4. The model's predictions might be skewed if input variables show correlations (|R| > 0.8), which suggest multicollinearity between variables. Several characteristics are significantly correlated; for example, cement and mineral admixtures have a correlation of −0.639, and CA and FA have a correlation of −0.605. However, no pair of features shows a correlation (|R|) greater than 0.80, demonstrating the lack of multicollinearity [101,102].
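A Pearson heat-map screening of this kind takes only a few lines. In this sketch, random uniform data drawn over the ranges reported in Table 2 stands in for the real 515-sample database, so the specific coefficients will not match the paper's values.

```python
import numpy as np

# Hypothetical stand-in for the 515-sample database: columns X1..X6 drawn
# uniformly over the reported ranges (cement, mineral admixture, water,
# fine aggregate, coarse aggregate, superplasticizer), in kg/m^3.
rng = np.random.default_rng(0)
lo = np.array([78.0, 0.0, 45.0, 532.0, 328.0, 0.0])
hi = np.array([635.0, 515.0, 277.0, 1200.0, 1170.0, 16.0])
data = lo + (hi - lo) * rng.random((515, 6))

R = np.corrcoef(data, rowvar=False)          # 6x6 Pearson correlation matrix
off_diag = ~np.eye(6, dtype=bool)
multicollinear = np.abs(R[off_diag]) > 0.80  # the |R| > 0.8 screening rule
```

On the real database, the same `np.abs(R) > 0.80` check reproduces the multicollinearity screening described above.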

Methodology of Artificial Neural Network Model
An artificial neural network (ANN) is a data prediction framework inspired by the structure of the human brain. The system is made up of neurons, which are its functional blocks. Neurons are linked by weights, which are generally chosen at random initially. Over successive epochs, the learning process strengthens or weakens these weights until the network can predict the output with fair accuracy [103].
As an outcome, a trained neural network may produce the intended output by receiving the inputs and applying the updated weights, as shown in Figure 5. The network becomes stronger by computing the error between the required output and the produced output. ANN modeling includes three steps: training, validation, and testing. The model is run repeatedly throughout the training phase until the desired outcome is obtained; errors on the validation set are monitored during training [104]. Because the machine-learning model improves with time, the prediction model's accuracy may be increased and the projected outcomes are dependable. Nonlinear activation functions such as sigmoids (tansig and logsig) are commonly utilized due to their responsiveness [105].

When developing an ANN model, several aspects must be considered. The initial step is to choose the optimal ANN model structure. The data must then be entered into the chosen ANN model as inputs and outputs. After that, experience must be used to select the activation function, the number of layers, the number of hidden layers, and the number of neurons in each layer [106,107].
Considering Tables 1 and 2, the network in this study comprises six inputs, one output variable, and a single hidden layer. Cement, mineral admixtures, water, fine and coarse aggregates, and superplasticizer are the input-layer variables. The output variable is the compressive strength of self-compacting concrete with recycled aggregates. This research employs a feed-forward backpropagation neural network; Figure 6 shows the architecture used in the present research.
Although the Levenberg-Marquardt method is quicker, it frequently uses more memory; its training ends automatically when generalization stops improving, as seen by an increase in the mean squared error of the validation samples. Bayesian regularization takes more time but can yield good generalization for complicated, small, or challenging datasets; its training ends through adaptive weight decay (regularization). The Scaled Conjugate Gradient Backpropagation technique uses less memory than the other two, and its training likewise terminates automatically when the validation mean squared error begins to rise [48,108].
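The 6-10-1 architecture described above can be sketched as a forward pass. The weights here are random placeholders (in training they would be updated by backpropagation), and the input scaling is a naive assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# 6-10-1 feed-forward network: six mix-design inputs, one hidden layer of
# ten tansig neurons, one linear output (predicted 28-day f'c).
W1, b1 = rng.normal(size=(10, 6)), np.zeros(10)  # placeholder weights
W2, b2 = rng.normal(size=(1, 10)), np.zeros(1)

def tansig(z):
    return np.tanh(z)  # MATLAB's tansig is the hyperbolic tangent

def predict(x):
    hidden = tansig(W1 @ x + b1)   # hidden-layer activations
    return (W2 @ hidden + b2)[0]   # linear output neuron

# Hypothetical mix (kg/m^3): cement, SCM, water, FA, CA, superplasticizer.
x = np.array([350.0, 100.0, 180.0, 850.0, 800.0, 5.0])
fc_pred = predict(x / 1200.0)      # naive scaling to keep tansig inputs small
```

With trained weights, `predict` returns the estimated compressive strength; with these random placeholders the output is meaningless and only demonstrates the data flow.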
The three phases of the network are training, validation, and testing. Table 3 shows the data splitting for the model's training, validation, and testing: seventy percent of the data are chosen for training, and the remaining ten percent and twenty percent for validation and testing, respectively. In the first phase, ten neurons were chosen for the hidden layer. Based on these percentages, the network randomly selected 360 samples for training, 52 for validation, and 103 for testing. For Bayesian regularization (BR), a separate validation set is not required, so the training and testing samples are 412 and 103, respectively; although regularization typically involves validation, BR techniques already include an in-built form of it. The methodology of the study is displayed in Figure 7.
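The 360/52/103 random split can be reproduced with a shuffled index partition; the seed below is an arbitrary assumption for reproducibility.

```python
import numpy as np

rng = np.random.default_rng(42)   # arbitrary seed, for reproducibility only
idx = rng.permutation(515)        # shuffle all 515 sample indices
train_idx = idx[:360]             # 70% -> 360 samples
val_idx   = idx[360:412]          # 10% -> 52 samples
test_idx  = idx[412:]             # 20% -> 103 samples
```

For the BR runs, the same idea applies with a two-way 412/103 split, since no validation subset is held out.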

ANN Network Model Assessment
Mean squared error (MSE) and coefficient of correlation (R-Value) were used to assess the models' performance [109,110], as shown in Equations (1) and (2), respectively:

MSE = (1/n) Σᵢ (yᵢ − ŷᵢ)²    (1)

where n = number of data points, yᵢ = observed values, and ŷᵢ = predicted values.

R = Σᵢ (exᵢ − ēx)(moᵢ − m̄o) / √[ Σᵢ (exᵢ − ēx)² · Σᵢ (moᵢ − m̄o)² ]    (2)

where exᵢ and moᵢ are the experimental and model values, respectively, and ēx and m̄o are their means.

Regression is acknowledged to be the most important metric for determining a network's overall accuracy. R-values are used to assess the relationship between outputs and projected targets. The R-value of a strong association is 1, whereas the R-value of a random relationship is 0 [109].
Mean squared error is the average squared disparity between outputs and objectives. It is preferable if the value is as low as possible. If the value is 0, there is no error.
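Both indicators reduce to a few lines of code; the observed and predicted arrays below are invented for illustration.

```python
import numpy as np

def mse(y, y_hat):
    """Mean squared error: average squared gap between targets and outputs."""
    return np.mean((y - y_hat) ** 2)

def r_value(y, y_hat):
    """Pearson correlation between observed and predicted values."""
    return np.corrcoef(y, y_hat)[0, 1]

y     = np.array([30.0, 42.0, 55.0, 61.0])   # illustrative observed f'c (MPa)
y_hat = np.array([32.0, 40.0, 53.0, 64.0])   # illustrative model predictions
```

A good model drives `mse` toward 0 and `r_value` toward 1, which is exactly the selection criterion used for the three algorithms.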


Results and Discussion
The model was run using three different algorithms: LM, BR, and SCG, with the results compared and explained below.

Levenberg-Marquardt Algorithm
To find the best model, the algorithm is trained continuously. The model's performance with 10 neurons is shown in Figure 8. The plot comprises multiple colored lines representing training, validation, and testing. To prevent overfitting, the model starts with a high MSE and subsequently decreases it based on the validation criteria. After 47 epochs, the training error continued to decrease, but the validation and testing errors began to increase. As a result, the training process concluded after five further epochs, and at the 47th iteration an optimized model with the lowest MSE of 61.6038 was obtained.
Epoch 47 is found to be the most suitable option for LM network training because, while the errors in the training data decline over time, the errors in the validation and test data rise. In the Levenberg-Marquardt method, Mu is the learning-rate (damping) parameter, and 0.01 was chosen after specific iterations (Figure 9b). The training process was halted after six validation failures.
The results of all performance measures, including the R-value and MSE of the whole model with training, validation, and testing, are summarized in Table 4. These findings suggest that the Levenberg-Marquardt method is suitable for estimating the compressive strength of self-compacting concrete using recycled aggregates.
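The validation-failure stopping rule used here (training halts after six consecutive epochs with no improvement in validation MSE) can be sketched generically. The `step` and `val_error` callbacks and the toy error curve are assumptions standing in for a real training backend.

```python
def train_with_early_stopping(step, val_error, max_fail=6, max_epochs=1000):
    """Early stopping: halt after `max_fail` consecutive epochs in which
    the validation error fails to improve on the best value seen so far.
    `step` advances training by one epoch; `val_error` returns the current
    validation MSE."""
    best, fails, best_epoch = float("inf"), 0, 0
    for epoch in range(1, max_epochs + 1):
        step()
        e = val_error()
        if e < best:
            best, fails, best_epoch = e, 0, epoch  # new best: reset counter
        else:
            fails += 1
            if fails >= max_fail:                  # six validation failures
                break
    return best_epoch, best

def val_curve(t):
    # Toy validation MSE: improves until epoch 47, then degrades.
    return 10.0 - 0.1 * t if t <= 47 else 5.3 + 0.05 * (t - 47)

errors = iter(val_curve(t) for t in range(1, 1001))
epoch, best_mse = train_with_early_stopping(lambda: None, lambda: next(errors))
```

On this toy curve the rule stops six epochs after the best epoch and reports epoch 47 as optimal, mirroring the behavior described for the LM run.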

Bayesian Regularization
Similarly, the Bayesian regularization approach is used to train the model. The model's performance with the same number of neurons is depicted in Figure 12. The plot comprises two colored lines indicating only training and testing, because this algorithm already has an in-built kind of validation during the training stage. To avoid overfitting, the model starts with a high MSE and gradually lowers it based on the training parameters. The graph demonstrates that the model required more epochs, since BR takes a slightly longer time. After 94 epochs, the training and testing error lines had significantly decreased and were almost flat. The model is trained further to ensure comprehensive validation, and training is halted after 190 epochs. The best training performance of the BR algorithm occurs at the 94th iteration, i.e., a minimum MSE of 38.172.

It can be seen from Figure 13a that training data errors decrease over time, while validation and test data errors rise. Therefore, the model is trained further to 186 epochs for comprehensive validation, and epoch 94 is found to be the most suitable option for this network training, as shown in Figure 12. As Mu is the controlling parameter for training in the BR algorithm, 5 × 10^10 was chosen after several rounds, as seen in Figure 13b. The effective parameters used by this algorithm were approximately 74 at epoch 186. Figure 13e further shows that no validation checks are carried out, since BR already has an in-built type of validation during the training stage, negating the need for a validation step.
Following that, a regression analysis was carried out in the same way. The training and testing correlations between the model's input and output variables are shown in Figure 15a-c, which depict the overall correlation. A black-colored linear fit is presented in each scenario. The total R-value of 0.91 indicates that the model trained using Bayesian regularization has a high level of accuracy in predicting the output, i.e., the compressive strength of SCC using RA.
Table 5 summarizes all the results for the performance parameters, including the R-value and MSE of the entire model with training and testing. In general, our findings imply that Bayesian regularization may be used to calculate the compressive strength of self-compacting concrete constructed from recycled resources.

Scaled Conjugate Gradient Backpropagation
Scaled Conjugate Gradient Backpropagation (SCGB) is used for model training. The model's performance with 10 neurons is shown in Figure 16. The plot comprises multiple colored lines representing training, validation, and testing. To avoid overfitting, the model starts with a high MSE and gradually lowers it depending on the validation parameters. According to the graph, the MSE did not decrease significantly compared with the other two approaches. The training error decreased after 37 epochs; however, the validation and testing errors were somewhat rising. Model training finished after six more epochs, and the optimized model with the lowest MSE was produced. Epoch 37 is the suitable option for this network training because, while the training data errors decrease over time, the validation and test data errors increase. Figure 17b makes it clear that the training process was stopped after six validation failures.
For training, validation, and testing, Figure 18 shows the model error histogram. The graph demonstrates how poorly the error bar bins converge to the zero-error line. These results indicate that the model has high error values in comparison to the LM and BR algorithms and performs badly in forecasting the compressive strength of SCC with RA.
Following that, a regression study is performed. Figure 19a-c illustrate the relationship between training, validation, and testing for the model's input and output values, and Figure 19d displays the model's overall accuracy or correlation. A linear fit is shown in black in each instance. The total R-value of 0.64 indicates a mediocre or average model for predicting SCC compressive strength using RA and makes it clear that the connection is not linear. Table 6 presents the findings for all performance measures, including the R-value and MSE for the whole model, including training, validation, and testing.
According to our research, the SCGB algorithm is less accurate than LM and BR in predicting the compressive strength of self-compacting concrete incorporating recycled aggregates.

Comparison of Algorithms
The three approaches were compared based on experimental data and ANN predictions. Figure 20a-c compare the experimental and predicted values of the models trained using the LM, BR, and SCG algorithms, respectively. On the y-axis, the red line represents predicted values, whereas the blue line represents experimental values of SCC compressive strength using recycled aggregates. The data set of 515 samples is shown on the x-axis.
A larger discrepancy between the two lines indicates a greater error between the two parameters. The values predicted by the LM and BR algorithms correlate well with the experimental values, as evident from the graphs. In contrast, the SCG algorithm shows a more significant difference between the two lines. Figure 21 depicts the total R-value (in percentage) and mean squared error of all algorithms in graphical form.
As shown in Figures 20a-c and 21, the Bayesian regularization and Levenberg-Marquardt algorithms have nearly the same best-fit graphs and roughly the same R-value and MSE. Given the variety of the data, the BR technique performed better because it can provide significant generalization for complicated datasets [111]. It is also concluded that, on the extensive current data set, the Levenberg-Marquardt algorithm is fast, has nearly the same prediction rate as the BR algorithm, and can predict the compressive strength of self-compacting concrete using recycled aggregates with high accuracy. The SCG algorithm showed poor results compared with the other two algorithms.
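The ranking criterion applied above can be made explicit in a short sketch. The R and MSE values are the ones reported in the text (Table 6 and Figure 21); the helper name `rank` is ours, not the paper's.

```python
# Summary values reported in the study (Table 6 / Figure 21):
results = {
    "LM":  {"R": 0.90, "MSE": 48.14},
    "BR":  {"R": 0.91, "MSE": 43.75},
    "SCG": {"R": 0.70, "MSE": 113.42},
}

def rank(results):
    # Higher R and lower MSE mean a more accurate algorithm;
    # sort by descending R, breaking ties with ascending MSE.
    return sorted(results, key=lambda k: (-results[k]["R"], results[k]["MSE"]))

order = rank(results)   # BR first, then LM, then SCG
```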

Sensitivity Analysis
Sensitivity analysis demonstrates how a single input variable influences the output variable. The impact of an input variable on the output increases with its sensitivity level, and, as Shang et al. [112] note, an input variable can have a sizable influence on the predicted output. Sensitivity analysis was conducted to assess the impact of each input variable, including fine aggregate, cement, water, admixture, superplasticizer, and coarse aggregate, on the variability of the compressive strength of self-compacting concrete incorporating recycled aggregates. The sensitivity is calculated using Equations (3) and (4).
where fmax(xi) and fmin(xi) are the estimated maximum and minimum compressive strengths with reference to input variable xi. Fine aggregate, cement, water, admixture, superplasticizer, and coarse aggregate are all important input factors in estimating the compressive strength of self-compacting concrete with recycled aggregate. The findings of this sensitivity study are shown in Figure 22, which demonstrates that water and Portland cement are the critical input factors in determining the compressive strength of SCC with recycled aggregate: Portland cement contributes 28.39% of the total, while water contributes 23.47%. Shang et al. [112] likewise identified Portland cement as a critical element in compressive strength prediction. The remaining input variables, fine aggregate, admixture, and superplasticizer, show comparable contributions of 14.51%, 12.61%, and 11.79%, respectively. The results revealed that coarse aggregate (9.23%) is the least influential variable in predicting compressive strength, which is consistent with prior research findings [113].
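Equations (3) and (4) are not reproduced in this excerpt; the sketch below reconstructs the procedure from the "where" clause, assuming the common formulation in which each input's response span Ni = fmax(xi) - fmin(xi) is normalized by the sum of all spans. The linear toy model and its weights are hypothetical, used only to exercise the function.

```python
import numpy as np

def sensitivity(model, X, n_steps=50):
    """Vary one input across its observed range while holding the
    others at their column means, then express each input's response
    span as a percentage of the total span."""
    X = np.asarray(X, dtype=float)
    spans = []
    for i in range(X.shape[1]):
        grid = np.tile(X.mean(axis=0), (n_steps, 1))
        grid[:, i] = np.linspace(X[:, i].min(), X[:, i].max(), n_steps)
        y = model(grid)
        spans.append(y.max() - y.min())   # Ni = fmax(xi) - fmin(xi)
    spans = np.asarray(spans)
    return 100.0 * spans / spans.sum()    # Si as a percentage

# Hypothetical linear "model" for illustration only:
w = np.array([0.5, -0.3, 0.2])
model = lambda X: X @ w
X = np.random.default_rng(0).uniform(0, 1, size=(100, 3))
share = sensitivity(model, X)   # percentages summing to 100
```

Applied to the trained ANN, this is the computation that yields the contribution percentages reported in Figure 22.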


Conclusions
This research aims to predict and compare the compressive strength of self-compacting concrete (SCC) modified with recycled aggregates (RA) using three different artificial neural network (ANN) algorithms: LM, BR, and SCG. The six input parameters that train the model are cement, water, admixtures, coarse aggregates, fine aggregates, and superplasticizer. R-value and MSE were employed as measures for assessment. The following findings were obtained from this research.
1. In developing the LM, BR, and SCG models, a total of 515 samples were acquired from research papers and randomly split into 70%, 10%, and 20% for training (360), validation (52), and testing (103), respectively. Due to the built-in validation mechanism in the training stage of the BR algorithm, its ratio became 80% for training and 20% for testing.
2. The three algorithms, LM, BR, and SCG, were trained and evaluated, giving overall accuracies of 90%, 91%, and 70%, respectively, with MSE values of 48.14, 43.75, and 113.42. With its poor correlation and high mean squared error, the SCG algorithm is the worst model for forecasting the compressive strength of SCC with RA.
3. Bayesian regularization gives better results than LM and SCG, with the highest correlation coefficient (R = 91%) and the lowest MSE (43.75). However, the LM algorithm gave nearly the same correlation coefficient (R = 90%) with a much shorter processing time than the BR algorithm.
4. The findings demonstrated that the LM and BR algorithms are suitable models and can be adopted to predict the 28-day compressive strength of self-compacting concrete amended with recycled aggregates.
5. According to the model's sensitivity analysis, the most significant parameter determining compressive strength is cement, contributing 28.39%. Water, with a contribution of 23.47%, is another crucial variable in predicting compressive strength in the same setting. Coarse aggregate, on the other hand, had the lowest contribution (9.23%). The data suggest that cement and water improve the compressive strength of SCC with RA, whereas coarse aggregate reduces it. Admixture, fine aggregate, and superplasticizer play a minor role in the model.
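The random 70/10/20 partition described in point 1 can be sketched as follows; the seed is arbitrary, and the exact counts (360/52/103) follow from rounding the percentages of 515.

```python
import numpy as np

rng = np.random.default_rng(42)       # seed is arbitrary, for reproducibility
idx = rng.permutation(515)            # shuffle indices of the 515 samples
n_train, n_val = 360, 52              # 70% and 10% of 515, rounded
train, val, test = np.split(idx, [n_train, n_train + n_val])
# train, val, and test now hold 360, 52, and 103 disjoint sample indices
```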
